id stringlengths 9–10 | submitter stringlengths 2–52 ⌀ | authors stringlengths 4–6.51k | title stringlengths 4–246 | comments stringlengths 1–523 ⌀ | journal-ref stringlengths 4–345 ⌀ | doi stringlengths 11–120 ⌀ | report-no stringlengths 2–243 ⌀ | categories stringlengths 5–98 | license stringclasses 9 values | abstract stringlengths 33–3.33k | versions list | update_date timestamp[s] | authors_parsed list | prediction stringclasses 1 value | probability float64 0.95–1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
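The header above lists the viewer's column types (⌀ marks nullable columns); each row below is one arXiv metadata record plus a `new_dataset` prediction and its probability. A minimal sketch of consuming rows with this schema, assuming a local JSON-lines export (the file name is invented for illustration):

```python
import json

# Hypothetical file name for a JSON-lines export of the rows below.
with open("arxiv_new_dataset_predictions.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]

# Keep only very confident "new_dataset" predictions.
confident = [r for r in rows
             if r["prediction"] == "new_dataset" and r["probability"] >= 0.99]

for r in confident:
    # "authors_parsed" holds [last, first, suffix] triples;
    # "versions" holds {"version", "created"} dicts.
    last, first = r["authors_parsed"][0][:2]
    print(r["id"], "|", r["title"], "|", f"{first} {last}".strip())
```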
2201.00042
|
Subutai Ahmad
|
Abhiram Iyer, Karan Grewal, Akash Velu, Lucas Oliveira Souza, Jeremy
Forest, and Subutai Ahmad
|
Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in
Dynamic Environments
|
31 pages, 17 figures
|
Frontiers in Neurorobotics 16 2022 (1-23)
|
10.3389/fnbot.2022.846219
| null |
cs.NE cs.AI cs.LG q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
A key challenge for AI is to build embodied systems that operate in
dynamically changing environments. Such systems must adapt to changing task
contexts and learn continuously. Although standard deep learning systems
achieve state-of-the-art results on static benchmarks, they often struggle in
dynamic scenarios. In these settings, error signals from multiple contexts can
interfere with one another, ultimately leading to a phenomenon known as
catastrophic forgetting. In this article we investigate biologically inspired
architectures as solutions to these problems. Specifically, we show that the
biophysical properties of dendrites and local inhibitory systems enable
networks to dynamically restrict and route information in a context-specific
manner. Our key contributions are as follows. First, we propose a novel
artificial neural network architecture that incorporates active dendrites and
sparse representations into the standard deep learning framework. Next, we
study the performance of this architecture on two separate benchmarks requiring
task-based adaptation: Meta-World, a multi-task reinforcement learning
environment where a robotic agent must learn to solve a variety of manipulation
tasks simultaneously; and a continual learning benchmark in which the model's
prediction task changes throughout training. Analysis on both benchmarks
demonstrates the emergence of overlapping but distinct and sparse subnetworks,
allowing the system to fluidly learn multiple tasks with minimal forgetting.
Our neural implementation marks the first time a single architecture has
achieved competitive results in both multi-task and continual learning
settings. Our research sheds light on how biological properties of neurons can
inform deep learning systems to address dynamic scenarios that are typically
impossible for traditional ANNs to solve.
|
[
{
"version": "v1",
"created": "Fri, 31 Dec 2021 19:52:42 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 15:36:13 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Iyer",
"Abhiram",
""
],
[
"Grewal",
"Karan",
""
],
[
"Velu",
"Akash",
""
],
[
"Souza",
"Lucas Oliveira",
""
],
[
"Forest",
"Jeremy",
""
],
[
"Ahmad",
"Subutai",
""
]
] |
new_dataset
| 0.996159 |
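A minimal sketch of the context-gated dendritic unit this abstract describes: each feedforward unit is modulated by its best-matching dendritic segment for the current task context. The segment count, initialization, and sigmoid gating are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ActiveDendriteLayer(nn.Module):
    """Feedforward units gated by context-matching dendritic segments."""

    def __init__(self, d_in, d_out, d_context, n_segments=4):
        super().__init__()
        self.ff = nn.Linear(d_in, d_out)
        # n_segments dendritic weight vectors per output unit (assumed sizes).
        self.segments = nn.Parameter(
            0.02 * torch.randn(d_out, n_segments, d_context))

    def forward(self, x, context):
        y = self.ff(x)                                   # (B, d_out)
        # Match the task-context vector against every segment.
        act = torch.einsum("bc,osc->bos", context, self.segments)
        gate = torch.sigmoid(act.max(dim=-1).values)     # best segment wins
        return y * gate                                  # context-gated output
```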
2202.00291
|
Tushar Abhishek
|
Tushar Abhishek, Shivprasad Sagare, Bhavyajeet Singh, Anubhav Sharma,
Manish Gupta and Vasudeva Varma
|
XAlign: Cross-lingual Fact-to-Text Alignment and Generation for
Low-Resource Languages
|
Update the code repository and acknowledgement
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multiple critical scenarios (like Wikipedia text generation given English
Infoboxes) need automated generation of descriptive text in low resource (LR)
languages from English fact triples. Previous work has focused on English
fact-to-text (F2T) generation. To the best of our knowledge, there has been no
previous attempt on cross-lingual alignment or generation for LR languages.
Building an effective cross-lingual F2T (XF2T) system requires alignment
between English structured facts and LR sentences. We propose two unsupervised
methods for cross-lingual alignment. We contribute XALIGN, an XF2T dataset with
0.45M pairs across 8 languages, of which 5402 pairs have been manually
annotated. We also train strong baseline XF2T generation models on the XAlign
dataset.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 09:41:59 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Apr 2022 09:11:01 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Abhishek",
"Tushar",
""
],
[
"Sagare",
"Shivprasad",
""
],
[
"Singh",
"Bhavyajeet",
""
],
[
"Sharma",
"Anubhav",
""
],
[
"Gupta",
"Manish",
""
],
[
"Varma",
"Vasudeva",
""
]
] |
new_dataset
| 0.992206 |
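The paper's two unsupervised alignment methods are not spelled out in the abstract; the sketch below shows one plausible embedding-based scorer for matching English facts to low-resource-language sentences. The multilingual encoder choice (LaBSE) and the example strings are assumptions.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")  # assumed encoder

# Linearized English facts for an entity and candidate LR-language
# sentences from that entity's article (placeholder strings).
facts = "Sachin Tendulkar | occupation | cricketer | country | India"
candidates = [
    "LR-language sentence 1 about the entity ...",
    "LR-language sentence 2 about something else ...",
]

fact_emb = model.encode(facts, convert_to_tensor=True)
sent_emb = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(fact_emb, sent_emb)[0]   # one score per sentence
best = candidates[int(scores.argmax())]        # highest-scoring alignment
```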
2202.09695
|
Viet Lai
|
Viet Dac Lai, Amir Pouran Ben Veyseh, Franck Dernoncourt, Thien Huu
Nguyen
|
SemEval 2022 Task 12: Symlink- Linking Mathematical Symbols to their
Descriptions
|
SemEval 2022 Task 12
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Given the increasing number of livestreaming videos, automatic speech
recognition and post-processing for livestreaming video transcripts are crucial
for efficient data management as well as knowledge mining. A key step in this
process is punctuation restoration which restores fundamental text structures
such as phrase and sentence boundaries from the video transcripts. This work
presents a new human-annotated corpus, called BehancePR, for punctuation
restoration in livestreaming video transcripts. Our experiments on BehancePR
demonstrate the challenges of punctuation restoration for this domain.
Furthermore, we show that popular natural language processing toolkits are
incapable of detecting sentence boundaries in non-punctuated transcripts of
livestreaming videos, calling for more research effort to develop robust models
for this area.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 23:12:57 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 02:11:07 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Lai",
"Viet Dac",
""
],
[
"Veyseh",
"Amir Pouran Ben",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Nguyen",
"Thien Huu",
""
]
] |
new_dataset
| 0.997085 |
2203.00859
|
Jinlu Zhang
|
Jinlu Zhang, Zhigang Tu, Jianyu Yang, Yujin Chen, Junsong Yuan
|
MixSTE: Seq2seq Mixed Spatio-Temporal Encoder for 3D Human Pose
Estimation in Video
|
CVPR2022 Accepted Paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent transformer-based solutions have been introduced to estimate 3D human
pose from 2D keypoint sequences by considering body joints across all frames
globally to learn spatio-temporal correlation. We observe that the motions of
different joints differ significantly. However, the previous methods cannot
efficiently model the solid inter-frame correspondence of each joint, leading
to insufficient learning of spatial-temporal correlation. We propose MixSTE
(Mixed Spatio-Temporal Encoder), which has a temporal transformer block to
separately model the temporal motion of each joint and a spatial transformer
block to learn inter-joint spatial correlation. These two blocks are utilized
alternately to obtain better spatio-temporal feature encoding. In addition, the
network output is extended from the central frame to entire frames of the input
video, thereby improving the coherence between the input and output sequences.
Extensive experiments are conducted on three benchmarks (Human3.6M,
MPI-INF-3DHP, and HumanEva). The results show that our model outperforms the
state-of-the-art approach by 10.9% P-MPJPE and 7.6% MPJPE. The code is
available at https://github.com/JinluZhang1126/MixSTE.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 04:20:59 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 02:50:33 GMT"
},
{
"version": "v3",
"created": "Sun, 27 Mar 2022 17:58:21 GMT"
},
{
"version": "v4",
"created": "Mon, 25 Apr 2022 08:24:27 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Zhang",
"Jinlu",
""
],
[
"Tu",
"Zhigang",
""
],
[
"Yang",
"Jianyu",
""
],
[
"Chen",
"Yujin",
""
],
[
"Yuan",
"Junsong",
""
]
] |
new_dataset
| 0.997223 |
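A sketch of the alternating design the abstract describes: a temporal transformer block models each joint's motion across frames, then a spatial block models inter-joint correlation within each frame. The use of `nn.TransformerEncoderLayer` and all sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MixSTEBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.temporal = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.spatial = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (batch, frames, joints, dim)
        b, f, j, d = x.shape
        # Temporal attention: each joint attends across frames separately.
        x = x.permute(0, 2, 1, 3).reshape(b * j, f, d)
        x = self.temporal(x)
        x = x.reshape(b, j, f, d).permute(0, 2, 1, 3)
        # Spatial attention: joints attend to each other within each frame.
        x = x.reshape(b * f, j, d)
        x = self.spatial(x)
        return x.reshape(b, f, j, d)

x = torch.randn(2, 81, 17, 64)   # e.g., 81-frame clips of 17 joints
y = MixSTEBlock()(x)             # same shape, spatio-temporally mixed
```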
2203.03367
|
Dingkun Long
|
Dingkun Long, Qiong Gao, Kuan Zou, Guangwei Xu, Pengjun Xie, Ruijie
Guo, Jian Xu, Guanjun Jiang, Luxi Xing, Ping Yang
|
Multi-CPR: A Multi Domain Chinese Dataset for Passage Retrieval
|
SIGIR 2022 Resource Track
| null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Passage retrieval is a fundamental task in information retrieval (IR)
research, which has drawn much attention recently. In English, the
availability of large-scale annotated datasets (e.g., MS MARCO) and the emergence
of deep pre-trained language models (e.g., BERT) have resulted in a substantial
improvement of existing passage retrieval systems. However, in Chinese,
especially in specific domains, passage retrieval systems are still
immature because quality-annotated datasets are limited in scale. Therefore, in
this paper, we present a novel multi-domain Chinese dataset for passage
retrieval (Multi-CPR). The dataset is collected from three different domains,
including E-commerce, Entertainment video and Medical. Each dataset contains
millions of passages and a certain amount of human annotated query-passage
related pairs. We implement various representative passage retrieval methods as
baselines. We find that the performance of retrieval models trained on a
general-domain dataset inevitably decreases on specific domains. Nevertheless,
a passage retrieval system built on an in-domain annotated dataset can achieve
significant improvement, which demonstrates the necessity of domain-labeled
data for further optimization. We hope the release of the Multi-CPR dataset
can serve as a benchmark for Chinese passage retrieval in specific domains and
drive future studies.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 13:20:46 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Apr 2022 13:29:22 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Long",
"Dingkun",
""
],
[
"Gao",
"Qiong",
""
],
[
"Zou",
"Kuan",
""
],
[
"Xu",
"Guangwei",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Guo",
"Ruijie",
""
],
[
"Xu",
"Jian",
""
],
[
"Jiang",
"Guanjun",
""
],
[
"Xing",
"Luxi",
""
],
[
"Yang",
"Ping",
""
]
] |
new_dataset
| 0.999527 |
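For context, a sketch of a representative dense passage retrieval baseline of the kind the abstract evaluates (not Multi-CPR's exact models); the Chinese encoder name and example strings are assumptions.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("shibing624/text2vec-base-chinese")  # assumed

query = "适合跑步的运动鞋"                      # e-commerce style query
passages = [
    "轻便透气跑步鞋,缓震舒适,适合日常慢跑。",    # in-domain passage
    "这部电视剧剧情紧凑,值得一看。",             # off-topic passage
]

q = model.encode(query, convert_to_tensor=True)
p = model.encode(passages, convert_to_tensor=True)
best = passages[int(util.dot_score(q, p).argmax())]  # dot-product ranking
```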
2203.05864
|
Alessio Fagioli
|
Danilo Avola, Marco Cascio, Luigi Cinque, Alessio Fagioli and Gian
Luca Foresti
|
Human Silhouette and Skeleton Video Synthesis through Wi-Fi signals
| null |
International Journal of Neural Systems, 2022, 2250015
|
10.1142/S0129065722500150
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing availability of wireless access points (APs) is leading
towards human sensing applications based on Wi-Fi signals as support or
alternative tools to the widespread visual sensors, since the signals make it
possible to address well-known vision-related problems such as illumination
changes or occlusions. Indeed, using image synthesis techniques to translate
radio frequencies to the visible spectrum can become essential to obtain
otherwise unavailable visual data. This domain-to-domain translation is
feasible because both objects and people affect electromagnetic waves, causing
variations in radio and optical frequencies. In the literature, models capable
of inferring radio-to-visual feature mappings have gained momentum in the last few years
since frequency changes can be observed in the radio domain through the channel
state information (CSI) of Wi-Fi APs, enabling signal-based feature extraction,
e.g., amplitude. On this account, this paper presents a novel two-branch
generative neural network that effectively maps radio data into visual
features, following a teacher-student design that exploits a cross-modality
supervision strategy. The latter conditions signal-based features in the visual
domain to completely replace visual data. Once trained, the proposed method
synthesizes human silhouette and skeleton videos using exclusively Wi-Fi
signals. The approach is evaluated on publicly available data, where it obtains
remarkable results for both silhouette and skeleton video generation,
demonstrating the effectiveness of the proposed cross-modality supervision
strategy.
|
[
{
"version": "v1",
"created": "Fri, 11 Mar 2022 11:40:34 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Avola",
"Danilo",
""
],
[
"Cascio",
"Marco",
""
],
[
"Cinque",
"Luigi",
""
],
[
"Fagioli",
"Alessio",
""
],
[
"Foresti",
"Gian Luca",
""
]
] |
new_dataset
| 0.994978 |
2203.12311
|
Michal Nazarczuk
|
Michal Nazarczuk and Sibi Catley-Chandar and Ales Leonardis and
Eduardo Pérez-Pellitero
|
Self-supervised HDR Imaging from Motion and Exposure Cues
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent High Dynamic Range (HDR) techniques extend the capabilities of current
cameras, where scenes with a wide range of illumination cannot be accurately
captured with a single low-dynamic-range (LDR) image. This is generally
accomplished by capturing several LDR images with varying exposure values whose
information is then incorporated into a merged HDR image. While such approaches
work well for static scenes, dynamic scenes pose several challenges, mostly
related to the difficulty of finding reliable pixel correspondences.
Data-driven approaches tackle the problem by learning an end-to-end mapping
with paired LDR-HDR training data, but in practice generating such HDR
ground-truth labels for dynamic scenes is time-consuming and requires complex
procedures that assume control of certain dynamic elements of the scene (e.g.
actor pose) and repeatable lighting conditions (stop-motion capturing). In this
work, we propose a novel self-supervised approach for learnable HDR estimation
that alleviates the need for HDR ground-truth labels. We propose to leverage
the internal statistics of LDR images to create HDR pseudo-labels. We
separately exploit static and well-exposed parts of the input images, which in
conjunction with synthetic illumination clipping and motion augmentation
provide high quality training examples. Experimental results show that the HDR
models trained using our proposed self-supervision approach achieve performance
competitive with those trained under full supervision, and are largely
superior to previous methods that likewise require no supervision.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 10:22:03 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Nazarczuk",
"Michal",
""
],
[
"Catley-Chandar",
"Sibi",
""
],
[
"Leonardis",
"Ales",
""
],
[
"Pérez-Pellitero",
"Eduardo",
""
]
] |
new_dataset
| 0.986288 |
2204.03162
|
Tristan Thrush
|
Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina
Williams, Douwe Kiela, Candace Ross
|
Winoground: Probing Vision and Language Models for Visio-Linguistic
Compositionality
|
CVPR 2022
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel task and dataset for evaluating the ability of vision and
language models to conduct visio-linguistic compositional reasoning, which we
call Winoground. Given two images and two captions, the goal is to match them
correctly - but crucially, both captions contain a completely identical set of
words, only in a different order. The dataset was carefully hand-curated by
expert annotators and is labeled with a rich set of fine-grained tags to assist
in analyzing model performance. We probe a diverse range of state-of-the-art
vision and language models and find that, surprisingly, none of them do much
better than chance. Evidently, these models are not as skilled at
visio-linguistic compositional reasoning as we might have hoped. We perform an
extensive analysis to obtain insights into how future work might try to
mitigate these models' shortcomings. We aim for Winoground to serve as a useful
evaluation set for advancing the state of the art and driving further progress
in the field. The dataset is available at
https://huggingface.co/datasets/facebook/winoground.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 02:17:05 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2022 18:54:25 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Thrush",
"Tristan",
""
],
[
"Jiang",
"Ryan",
""
],
[
"Bartolo",
"Max",
""
],
[
"Singh",
"Amanpreet",
""
],
[
"Williams",
"Adina",
""
],
[
"Kiela",
"Douwe",
""
],
[
"Ross",
"Candace",
""
]
] |
new_dataset
| 0.979816 |
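The matching criterion the abstract describes can be made concrete as below: with two images and two captions, a model is correct only when each caption prefers its own image and each image prefers its own caption. `score_fn` is a stand-in for any image-text scoring model; the text/image/group breakdown is a natural reading of the task, not quoted from the paper.

```python
def winoground_scores(score_fn, images, captions):
    """images, captions: length-2 lists where index i matches index i."""
    s = [[score_fn(img, cap) for cap in captions] for img in images]
    # Text score: given each image, the matching caption wins.
    text_ok = s[0][0] > s[0][1] and s[1][1] > s[1][0]
    # Image score: given each caption, the matching image wins.
    image_ok = s[0][0] > s[1][0] and s[1][1] > s[0][1]
    # Group score: both must hold; chance-level models rarely satisfy it.
    return text_ok, image_ok, text_ok and image_ok
```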
2204.09314
|
Ren Yang
|
Ren Yang, Radu Timofte, Meisong Zheng, Qunliang Xing, Minglang Qiao,
Mai Xu, Lai Jiang, Huaida Liu, Ying Chen, Youcheng Ben, Xiao Zhou, Chen Fu,
Pei Cheng, Gang Yu, Junyi Li, Renlong Wu, Zhilu Zhang, Wei Shang, Zhengyao
Lv, Yunjin Chen, Mingcai Zhou, Dongwei Ren, Kai Zhang, Wangmeng Zuo, Pavel
Ostyakov, Vyal Dmitry, Shakarim Soltanayev, Chervontsev Sergey, Zhussip
Magauiya, Xueyi Zou, Youliang Yan, Pablo Navarrete Michelini, Yunhua Lu,
Diankai Zhang, Shaoli Liu, Si Gao, Biao Wu, Chengjian Zheng, Xiaofeng Zhang,
Kaidi Lu, Ning Wang, Thuong Nguyen Canh, Thong Bach, Qing Wang, Xiaopeng Sun,
Haoyu Ma, Shijie Zhao, Junlin Li, Liangbin Xie, Shuwei Shi, Yujiu Yang,
Xintao Wang, Jinjin Gu, Chao Dong, Xiaodi Shi, Chunmei Nian, Dong Jiang,
Jucai Lin, Zhihuai Xie, Mao Ye, Dengyan Luo, Liuhan Peng, Shengjie Chen, Xin
Liu, Qian Wang, Xin Liu, Boyang Liang, Hang Dong, Yuhao Huang, Kai Chen,
Xingbei Guo, Yujing Sun, Huilei Wu, Pengxu Wei, Yulin Huang, Junying Chen, Ik
Hyun Lee, Sunder Ali Khowaja, Jiseok Yoon
|
NTIRE 2022 Challenge on Super-Resolution and Quality Enhancement of
Compressed Video: Dataset, Methods and Results
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper reviews the NTIRE 2022 Challenge on Super-Resolution and Quality
Enhancement of Compressed Video. In this challenge, we proposed the LDV 2.0
dataset, which includes the LDV dataset (240 videos) and 95 additional videos.
This challenge includes three tracks. Track 1 aims at enhancing the videos
compressed by HEVC at a fixed QP. Track 2 and Track 3 target both the
super-resolution and quality enhancement of HEVC compressed video. They require
x2 and x4 super-resolution, respectively. The three tracks attracted more than
600 registrations in total. In the test phase, 8 teams, 8 teams and 12 teams
submitted the final results to Tracks 1, 2 and 3, respectively. The proposed
methods and solutions gauge the state-of-the-art of super-resolution and
quality enhancement of compressed video. The proposed LDV 2.0 dataset is
available at https://github.com/RenYang-home/LDV_dataset. The homepage of this
challenge (including open-sourced codes) is at
https://github.com/RenYang-home/NTIRE22_VEnh_SR.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 08:50:02 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 07:59:48 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Yang",
"Ren",
""
],
[
"Timofte",
"Radu",
""
],
[
"Zheng",
"Meisong",
""
],
[
"Xing",
"Qunliang",
""
],
[
"Qiao",
"Minglang",
""
],
[
"Xu",
"Mai",
""
],
[
"Jiang",
"Lai",
""
],
[
"Liu",
"Huaida",
""
],
[
"Chen",
"Ying",
""
],
[
"Ben",
"Youcheng",
""
],
[
"Zhou",
"Xiao",
""
],
[
"Fu",
"Chen",
""
],
[
"Cheng",
"Pei",
""
],
[
"Yu",
"Gang",
""
],
[
"Li",
"Junyi",
""
],
[
"Wu",
"Renlong",
""
],
[
"Zhang",
"Zhilu",
""
],
[
"Shang",
"Wei",
""
],
[
"Lv",
"Zhengyao",
""
],
[
"Chen",
"Yunjin",
""
],
[
"Zhou",
"Mingcai",
""
],
[
"Ren",
"Dongwei",
""
],
[
"Zhang",
"Kai",
""
],
[
"Zuo",
"Wangmeng",
""
],
[
"Ostyakov",
"Pavel",
""
],
[
"Dmitry",
"Vyal",
""
],
[
"Soltanayev",
"Shakarim",
""
],
[
"Sergey",
"Chervontsev",
""
],
[
"Magauiya",
"Zhussip",
""
],
[
"Zou",
"Xueyi",
""
],
[
"Yan",
"Youliang",
""
],
[
"Michelini",
"Pablo Navarrete",
""
],
[
"Lu",
"Yunhua",
""
],
[
"Zhang",
"Diankai",
""
],
[
"Liu",
"Shaoli",
""
],
[
"Gao",
"Si",
""
],
[
"Wu",
"Biao",
""
],
[
"Zheng",
"Chengjian",
""
],
[
"Zhang",
"Xiaofeng",
""
],
[
"Lu",
"Kaidi",
""
],
[
"Wang",
"Ning",
""
],
[
"Canh",
"Thuong Nguyen",
""
],
[
"Bach",
"Thong",
""
],
[
"Wang",
"Qing",
""
],
[
"Sun",
"Xiaopeng",
""
],
[
"Ma",
"Haoyu",
""
],
[
"Zhao",
"Shijie",
""
],
[
"Li",
"Junlin",
""
],
[
"Xie",
"Liangbin",
""
],
[
"Shi",
"Shuwei",
""
],
[
"Yang",
"Yujiu",
""
],
[
"Wang",
"Xintao",
""
],
[
"Gu",
"Jinjin",
""
],
[
"Dong",
"Chao",
""
],
[
"Shi",
"Xiaodi",
""
],
[
"Nian",
"Chunmei",
""
],
[
"Jiang",
"Dong",
""
],
[
"Lin",
"Jucai",
""
],
[
"Xie",
"Zhihuai",
""
],
[
"Ye",
"Mao",
""
],
[
"Luo",
"Dengyan",
""
],
[
"Peng",
"Liuhan",
""
],
[
"Chen",
"Shengjie",
""
],
[
"Liu",
"Xin",
""
],
[
"Wang",
"Qian",
""
],
[
"Liu",
"Xin",
""
],
[
"Liang",
"Boyang",
""
],
[
"Dong",
"Hang",
""
],
[
"Huang",
"Yuhao",
""
],
[
"Chen",
"Kai",
""
],
[
"Guo",
"Xingbei",
""
],
[
"Sun",
"Yujing",
""
],
[
"Wu",
"Huilei",
""
],
[
"Wei",
"Pengxu",
""
],
[
"Huang",
"Yulin",
""
],
[
"Chen",
"Junying",
""
],
[
"Lee",
"Ik Hyun",
""
],
[
"Khowaja",
"Sunder Ali",
""
],
[
"Yoon",
"Jiseok",
""
]
] |
new_dataset
| 0.999792 |
2204.09623
|
Marion Wiese
|
Marion Wiese, Paula Rachow, Matthias Riebisch, Julian Schwarze
|
Preventing technical debt with the TAP framework for Technical Debt
Aware Management
|
Accepted manuscript for "Information and Software Technology" -
Special Issue on the TechDebt 2021 conference
|
Information and Software Technology, 2022, 106926, ISSN 0950-5849
|
10.1016/j.infsof.2022.106926
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Context. Technical Debt (TD) is a metaphor for technical problems that are
not visible to users and customers but hinder developers in their work, making
future changes more difficult. TD is often incurred due to tight project
deadlines and can make future changes more costly or impossible. Project
Management usually focuses on customer benefits and pays less attention to
their IT systems' internal quality. TD prevention should be preferred over TD
repayment because subsequent refactoring and re-engineering are expensive.
Objective. This paper evaluates a framework focusing on both TD prevention and
TD repayment in the context of agile-managed projects. The framework was
developed and applied in an IT unit of a publishing house. The unique
contribution of this framework is the integration of TD management into project
management. Method. The evaluation was performed as a comparative case study
based on ticket statistics and two structured surveys. The surveys were
conducted in the observed IT unit using the framework and a comparison unit not
using the framework. The first survey targeted team members, the second one IT
managers. Results. The evaluation shows that in this IT unit, the TAP framework
led to raised awareness of TD incurrence. Decisions to incur TD are made
intentionally, and TD is repaid in a timelier manner. Unintentional TD incurred by
unconscious decisions is prevented. Furthermore, better communication and
better planning of the project pipeline can be observed. Conclusions. We
provide an insight into practitioners' ways to identify, monitor, prevent and
repay TD. The presented framework includes a feasible method for TD prevention
despite tight timelines by making TD repayment part of project management.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 17:05:37 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Wiese",
"Marion",
""
],
[
"Rachow",
"Paula",
""
],
[
"Riebisch",
"Matthias",
""
],
[
"Schwarze",
"Julian",
""
]
] |
new_dataset
| 0.989132 |
2204.10878
|
Simeng Sun
|
Simeng Sun, Katherine Thai, Mohit Iyyer
|
ChapterBreak: A Challenge Dataset for Long-Range Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While numerous architectures for long-range language models (LRLMs) have
recently been proposed, a meaningful evaluation of their discourse-level
language understanding capabilities has not yet followed. To this end, we
introduce ChapterBreak, a challenge dataset that provides an LRLM with a long
segment from a narrative that ends at a chapter boundary and asks it to
distinguish the beginning of the ground-truth next chapter from a set of
negative segments sampled from the same narrative. A fine-grained human
annotation reveals that our dataset contains many complex types of chapter
transitions (e.g., parallel narratives, cliffhanger endings) that require
processing global context to comprehend. Experiments on ChapterBreak show that
existing LRLMs fail to effectively leverage long-range context, substantially
underperforming a segment-level model trained directly for this task. We
publicly release our ChapterBreak dataset to spur more principled future
research into LRLMs.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 18:20:23 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Sun",
"Simeng",
""
],
[
"Thai",
"Katherine",
""
],
[
"Iyyer",
"Mohit",
""
]
] |
new_dataset
| 0.999871 |
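A sketch of one zero-shot way to attempt the task the abstract defines: rank candidate chapter beginnings by their language-model loss conditioned on the long context. The model (`gpt2`) and example strings are illustrative assumptions, and gpt2's context window is far shorter than the LRLMs the paper studies.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

segment = "The long narrative segment ending at a chapter boundary ..."
candidates = [
    "The true beginning of the next chapter ...",
    "A negative segment sampled from the same narrative ...",
]

def candidate_loss(context, candidate):
    ids = tok(context + " " + candidate, return_tensors="pt").input_ids
    ids = ids[:, -1024:]                      # gpt2's context limit
    n_cand = len(tok(" " + candidate).input_ids)
    labels = ids.clone()
    labels[:, :-n_cand] = -100                # score only the candidate tokens
    with torch.no_grad():
        return lm(ids, labels=labels).loss.item()

best = min(candidates, key=lambda c: candidate_loss(segment, c))
```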
2204.10911
|
Michael Neumann
|
Christian Sanden and Kira Karnowski and Marvin Steinke and Michael
Neumann and Lukas Linke
|
Die Einflüsse von Arbeitsbelastung auf die Arbeitsqualität agiler
Software-Entwicklungsteams (The Influences of Workload on the Work Quality of
Agile Software Development Teams)
|
in German language
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the COVID-19 pandemic and its effects on the world of work, the burden
on employees has come into focus. This also applies to agile software
development teams in many companies because of the extensive switch to remote
work. Too high a workload can lead to various negative effects, such as
increased sick leave, reduced employee well-being, or reduced productivity. It
is also known that the workload in knowledge work affects the quality of work
results. This research article identifies potential workload factors for the
agile software development team members at Otto GmbH & Co KG. Based on these
factors, we present measures to reduce workload, and we explain our findings,
which we validated in an experiment. Our results show that even small-scale
actions, such as introducing rest phases during the working day, lead to
positive effects such as an increased ability to concentrate, and we show how
these effects improve the quality of work results.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 20:01:27 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Sanden",
"Christian",
""
],
[
"Karnowski",
"Kira",
""
],
[
"Steinke",
"Marvin",
""
],
[
"Neumann",
"Michael",
""
],
[
"Linke",
"Lukas",
""
]
] |
new_dataset
| 0.987085 |
2204.10949
|
Florence Smith Nicholls
|
Florence Smith Nicholls and Michael Cook
|
The Dark Souls of Archaeology: Recording Elden Ring
|
10 pages, 7 figures
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Archaeology can be broadly defined as the study and interpretation of the
past through material remains. Videogame worlds, though immaterial in nature,
can also afford opportunities to study the people who existed within them based
on what they leave behind. In this paper we present the first formal
archaeological survey of a predominantly single-player game, by examining the
player-generated content that is asynchronously distributed to players in the
videogame Elden Ring.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 22:30:29 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Nicholls",
"Florence Smith",
""
],
[
"Cook",
"Michael",
""
]
] |
new_dataset
| 0.997102 |
2204.10959
|
Tolga Bakirman
|
Tolga Bakirman and Elif Sertel
|
HRPlanes: High Resolution Airplane Dataset for Deep Learning
|
13 pages, 8 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Airplane detection from satellite imagery is a challenging task due to the
complex backgrounds in the images and differences in data acquisition
conditions caused by the sensor geometry and atmospheric effects. Deep learning
methods provide reliable and accurate solutions for the automatic detection of
airplanes; however, a huge amount of training data is required to obtain
promising results. In this study, we create a novel airplane detection dataset
called High Resolution Planes (HRPlanes) by using images from Google Earth (GE)
and labeling the bounding box of each plane in the images. HRPlanes includes GE
images of several different airports across the world, representing a variety
of landscape, seasonal, and satellite geometry conditions obtained from
different satellites. We evaluated our dataset with two widely used object
detection methods, namely YOLOv4 and Faster R-CNN. Our preliminary results
show that the proposed dataset can be a valuable data source and benchmark
dataset for future applications. Moreover, the proposed architectures and
results of this study could be used for transfer learning with different
datasets and models for
airplane detection.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 23:49:44 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Bakirman",
"Tolga",
""
],
[
"Sertel",
"Elif",
""
]
] |
new_dataset
| 0.999814 |
2204.10993
|
Qiaojun Feng
|
Qiaojun Feng, Nikolay Atanasov
|
TerrainMesh: Metric-Semantic Terrain Reconstruction from Aerial Images
Using Joint 2D-3D Learning
|
15 pages, 14 figures. arXiv admin note: text overlap with
arXiv:2101.01844
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers outdoor terrain mapping using RGB images obtained from
an aerial vehicle. While feature-based localization and mapping techniques
deliver real-time vehicle odometry and sparse keypoint depth reconstruction, a
dense model of the environment geometry and semantics (vegetation, buildings,
etc.) is usually recovered offline with significant computation and storage.
This paper develops a joint 2D-3D learning approach to reconstruct a local
metric-semantic mesh at each camera keyframe maintained by a visual odometry
algorithm. Given the estimated camera trajectory, the local meshes can be
assembled into a global environment model to capture the terrain topology and
semantics during online operation. A local mesh is reconstructed using an
initialization and refinement stage. In the initialization stage, we estimate
the mesh vertex elevation by solving a least squares problem relating the
vertex barycentric coordinates to the sparse keypoint depth measurements. In
the refinement stage, we associate 2D image and semantic features with the 3D
mesh vertices using camera projection and apply graph convolution to refine the
mesh vertex spatial coordinates and semantic features based on joint 2D and 3D
supervision. Quantitative and qualitative evaluations using real aerial images
show the potential of our method to support environmental monitoring and
surveillance applications.
|
[
{
"version": "v1",
"created": "Sat, 23 Apr 2022 05:18:39 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Feng",
"Qiaojun",
""
],
[
"Atanasov",
"Nikolay",
""
]
] |
new_dataset
| 0.992288 |
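The initialization stage described in the abstract reduces to a sparse linear least squares problem relating keypoint depths to the elevations of their enclosing triangles' vertices; a sketch under that reading (the Tikhonov term is an added assumption for stability):

```python
import numpy as np

def fit_elevations(n_vertices, tri_vertex_ids, bary, depths, lam=1e-3):
    """tri_vertex_ids: (K, 3) vertex indices of each keypoint's triangle;
    bary: (K, 3) barycentric coordinates; depths: (K,) measurements."""
    K = len(depths)
    A = np.zeros((K, n_vertices))
    rows = np.repeat(np.arange(K), 3)
    A[rows, tri_vertex_ids.ravel()] = bary.ravel()   # one row per keypoint
    AtA = A.T @ A + lam * np.eye(n_vertices)          # regularized normal eqs.
    return np.linalg.solve(AtA, A.T @ depths)         # vertex elevations z
```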
2204.11015
|
Baorui Ma
|
Baorui Ma, Yu-Shen Liu, Matthias Zwicker, Zhizhong Han
|
Surface Reconstruction from Point Clouds by Learning Predictive Context
Priors
|
To appear at CVPR2022. Project page:
https://mabaorui.github.io/PredictableContextPrior_page/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surface reconstruction from point clouds is vital for 3D computer vision.
State-of-the-art methods leverage large datasets to first learn local context
priors that are represented as neural network-based signed distance functions
(SDFs) with some parameters encoding the local contexts. To reconstruct a
surface at a specific query location at inference time, these methods then
match the local reconstruction target by searching for the best match in the
local prior space (by optimizing the parameters encoding the local context) at
the given query location. However, this requires the local context prior to
generalize to a wide variety of unseen target regions, which is hard to
achieve. To resolve this issue, we introduce Predictive Context Priors by
learning Predictive Queries for each specific point cloud at inference time.
Specifically, we first train a local context prior using a large point cloud
dataset similar to previous techniques. For surface reconstruction at inference
time, however, we specialize the local context prior into our Predictive
Context Prior by learning Predictive Queries, which predict adjusted spatial
query locations as displacements of the original locations. This leads to a
global SDF that best fits the specific point cloud. Intuitively, the query
prediction enables us to flexibly search the learned local context prior over
the entire prior space, rather than being restricted to the fixed query
locations, and this improves the generalizability. Our method does not require
ground truth signed distances, normals, or any additional procedure of signed
distance fusion across overlapping regions. Our experimental results in surface
reconstruction for single shapes or complex scenes show significant
improvements over the state-of-the-art under widely used benchmarks.
|
[
{
"version": "v1",
"created": "Sat, 23 Apr 2022 08:11:33 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Ma",
"Baorui",
""
],
[
"Liu",
"Yu-Shen",
""
],
[
"Zwicker",
"Matthias",
""
],
[
"Han",
"Zhizhong",
""
]
] |
new_dataset
| 0.987063 |
2204.11024
|
Nazia Tasnim
|
Md. Istiak Hossain Shihab, Nazia Tasnim, Hasib Zunair, Labiba Kanij
Rupty and Nabeel Mohammed
|
VISTA: Vision Transformer enhanced by U-Net and Image Colorfulness Frame
Filtration for Automatic Retail Checkout
|
accepted at AI City Challenge workshop - CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-class product counting and recognition identifies product items from
images or videos for automated retail checkout. The task is challenging due to
the real-world scenario of occlusions where product items overlap, fast
movement in the conveyor belt, large similarity in overall appearance of the
items being scanned, novel products, and the negative impact of misidentifying
items. Further, there is a domain bias between the training and test sets:
the provided training dataset consists of synthetic images, while the test-set
videos contain foreign objects such as hands and trays. To
address these aforementioned issues, we propose to segment and classify
individual frames from a video sequence. The segmentation method consists of a
unified single product item- and hand-segmentation followed by entropy masking
to address the domain bias problem. The multi-class classification method is
based on Vision Transformers (ViT). To identify the frames with target objects,
we utilize several image processing methods and propose a custom metric to
discard frames not having any product items. Combining all these mechanisms,
our best system achieves 3rd place in the AI City Challenge 2022 Track 4 with
an F1 score of 0.4545. Code will be available at
|
[
{
"version": "v1",
"created": "Sat, 23 Apr 2022 08:54:28 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Shihab",
"Md. Istiak Hossain",
""
],
[
"Tasnim",
"Nazia",
""
],
[
"Zunair",
"Hasib",
""
],
[
"Rupty",
"Labiba Kanij",
""
],
[
"Mohammed",
"Nabeel",
""
]
] |
new_dataset
| 0.997239 |
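The custom frame-filtration metric is not given in the abstract; a common stand-in consistent with the title's "Image Colorfulness" idea is the Hasler-Süsstrunk colorfulness statistic, sketched below, with low-colorfulness frames (e.g., empty belt or tray views) discarded by a threshold. The threshold value is an assumption.

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Susstrunk colorfulness; rgb: (H, W, 3) floats in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    return (np.hypot(rg.std(), yb.std())
            + 0.3 * np.hypot(rg.mean(), yb.mean()))

def keep_frame(frame, threshold=15.0):
    # Threshold is illustrative; tune on held-out frames.
    return colorfulness(frame.astype(np.float64)) >= threshold
```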
2204.11083
|
John Businge
|
Henrique Rocha, John Businge
|
Blockchain-Oriented Software Variant Forks: A Preliminary Study
|
Accepted for the 5th International Workshop on Blockchain Oriented
Software Engineering 2022
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
In collaborative social development platforms such as GitHub, forking a
repository is a common activity. A variant fork splits development from the
original repository and grows in a different direction. In this
preliminary exploratory research, we analyze the possible reasons for creating
a variant fork in blockchain-oriented software. By collecting repositories in
GitHub, we created a dataset with repositories and their variants, from which
we manually analyzed 86 variants. Based on the variants we studied, the main
reason to create a variant in blockchain-oriented software is to support a
different blockchain platform (65%).
|
[
{
"version": "v1",
"created": "Sat, 23 Apr 2022 14:49:22 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Rocha",
"Henrique",
""
],
[
"Businge",
"John",
""
]
] |
new_dataset
| 0.979589 |
2204.11087
|
Cunliang Kong
|
Cunliang Kong, Xuezhi Fang, Liner Yang, Yun Chen, Erhong Yang
|
LitMind Dictionary: An Open-Source Online Dictionary
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Dictionaries can help language learners to learn vocabulary by providing
definitions of words. Since traditional dictionaries present word senses as
discrete items in predefined inventories, they fall short of flexibility, which
is required in providing specific meanings of words in particular contexts. In
this paper, we introduce the LitMind Dictionary
(https://dictionary.litmind.ink), an open-source online generative dictionary
that takes a word and context containing the word as input and automatically
generates a definition as output. Incorporating state-of-the-art definition
generation models, it supports not only Chinese and English, but also
Chinese-English cross-lingual queries. Moreover, it has a user-friendly
front-end design that can help users understand the query words quickly and
easily. All the code and data are available at
https://github.com/blcuicall/litmind-dictionary.
|
[
{
"version": "v1",
"created": "Sat, 23 Apr 2022 15:10:40 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Kong",
"Cunliang",
""
],
[
"Fang",
"Xuezhi",
""
],
[
"Yang",
"Liner",
""
],
[
"Chen",
"Yun",
""
],
[
"Yang",
"Erhong",
""
]
] |
new_dataset
| 0.99805 |
2204.11104
|
Pavel Tikhonov
|
Pavel Tikhonov, Valentin Malykh
|
WikiMulti: a Corpus for Cross-Lingual Summarization
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-lingual summarization (CLS) is the task of producing a summary in one
particular language for a source document in a different language. We introduce
WikiMulti - a new dataset for cross-lingual summarization based on Wikipedia
articles in 15 languages. As a set of baselines for further studies, we
evaluate the performance of existing cross-lingual abstractive summarization
methods on our dataset. We make our dataset publicly available here:
https://github.com/tikhonovpavel/wikimulti
|
[
{
"version": "v1",
"created": "Sat, 23 Apr 2022 16:47:48 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Tikhonov",
"Pavel",
""
],
[
"Malykh",
"Valentin",
""
]
] |
new_dataset
| 0.999707 |
2204.11202
|
Jieyu Li
|
Jieyu Li, Robert Stevenson
|
2D LiDAR and Camera Fusion Using Motion Cues for Indoor Layout
Estimation
| null |
In 2021 IEEE 24th International Conference on Information Fusion
(FUSION), pp. 1-6. IEEE, 2021
| null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This paper presents a novel indoor layout estimation system based on the
fusion of 2D LiDAR and intensity camera data. A ground robot explores an indoor
space with a single floor and vertical walls, and collects a sequence of
intensity images and 2D LiDAR datasets. The LiDAR provides accurate depth
information, while the camera captures high-resolution data for semantic
interpretation. The alignment of sensor outputs and image segmentation are
computed jointly by aligning LiDAR points, as samples of the room contour, to
ground-wall boundaries in the images. The alignment problem is decoupled into a
top-down view projection and a 2D similarity transformation estimation, which
can be solved according to the vertical vanishing point and motion of two
sensors. The recursive random sample consensus algorithm is implemented to
generate, evaluate and optimize multiple hypotheses with the sequential
measurements. The system allows jointly analyzing the geometric interpretation
from different sensors without offline calibration. The ambiguity in images for
ground-wall boundary extraction is removed with the assistance of LiDAR
observations, which improves the accuracy of semantic segmentation. The
localization and mapping are refined using the fused data, which enables the
system to work reliably in scenes with low texture or low geometric features.
|
[
{
"version": "v1",
"created": "Sun, 24 Apr 2022 06:26:02 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Li",
"Jieyu",
""
],
[
"Stevenson",
"Robert",
""
]
] |
new_dataset
| 0.9998 |
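A sketch of the 2D similarity-transformation step the abstract mentions, recovering scale, rotation, and translation between top-down LiDAR points and projected image boundary points in closed form via SVD (an Umeyama-style fit; the paper's recursive RANSAC wrapper is omitted):

```python
import numpy as np

def fit_similarity_2d(p, q):
    """p, q: (N, 2) corresponding 2D points; returns s, R, t with q ~ s R p + t."""
    mp, mq = p.mean(0), q.mean(0)
    pc, qc = p - mp, q - mq
    U, S, Vt = np.linalg.svd(qc.T @ pc)          # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))           # keep a proper rotation
    R = U @ np.diag([1.0, d]) @ Vt
    s = (S * [1.0, d]).sum() / (pc ** 2).sum()   # optimal isotropic scale
    t = mq - s * R @ mp
    return s, R, t
```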
2204.11208
|
Ziling Heng
|
Xiaoru Li, Ziling Heng
|
Constructions of near MDS codes which are optimal locally recoverable
codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A linear code with parameters $[n,k,n-k]$ is said to be almost maximum
distance separable (AMDS for short). An AMDS code whose dual is also AMDS is
referred to as a near maximum distance separable (NMDS for short) code. NMDS
codes have nice applications in finite geometry, combinatorics, cryptography
and data storage. In this paper, we first present several constructions of NMDS
codes and determine their weight enumerators. In particular, some constructions
produce NMDS codes with the same parameters but different weight enumerators.
Then we determine the locality of the NMDS codes and obtain many families of
distance-optimal and dimension-optimal locally repairable codes.
|
[
{
"version": "v1",
"created": "Sun, 24 Apr 2022 07:05:33 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Li",
"Xiaoru",
""
],
[
"Heng",
"Ziling",
""
]
] |
new_dataset
| 0.985762 |
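For reference, the standard definitions behind the abstract's terminology, written out for an $[n,k,d]$ linear code $C$ with dual $C^{\perp}$:

```latex
% Singleton-defect definitions; the dual of an [n, k] code has dimension n - k,
% so "dual is AMDS" amounts to d(C^\perp) = k.
\begin{align*}
  C \text{ is MDS}  &\iff d(C) = n - k + 1, \\
  C \text{ is AMDS} &\iff d(C) = n - k, \\
  C \text{ is NMDS} &\iff d(C) = n - k \ \text{and}\ d(C^{\perp}) = k.
\end{align*}
```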
2204.11220
|
Xusheng Du
|
Xusheng Du, Jiong Yu
|
Graph Neural Network-based Early Bearing Fault Detection
|
8 pages, 7 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Early detection of faults is of importance to avoid catastrophic accidents
and ensure safe operation of machinery. A novel graph neural network-based
fault detection method is proposed to build a bridge between AI and real-world
running mechanical systems. First, the vibration signals, which are Euclidean
structured data, are converted into a graph (non-Euclidean structured data),
so that the originally independent vibration signals become correlated with
each other. Second, the dataset, together with its corresponding graph, is
input into the GNN for training; the network maintains a graph in each hidden
layer, enabling it to learn feature values from each node and its neighbors,
and the obtained early features have stronger discriminability. Finally, the
top-n objects that are hardest to reconstruct in the output layer of the GNN
are identified as faulty objects. A public bearing dataset has been used to
verify the effectiveness of the proposed method. We find that the proposed
method can successfully detect faulty objects that are mixed into the normal
object region.
|
[
{
"version": "v1",
"created": "Sun, 24 Apr 2022 08:54:55 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Du",
"Xusheng",
""
],
[
"Yu",
"Jiong",
""
]
] |
new_dataset
| 0.998898 |
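The final detection step the abstract describes reduces to ranking objects by reconstruction error; a sketch of that step only (the autoencoder-style GNN output and the choice of n are assumptions):

```python
import numpy as np

def top_n_faults(x, x_rec, n=5):
    """x, x_rec: (num_objects, num_features) inputs and reconstructions."""
    err = np.linalg.norm(x - x_rec, axis=1)   # per-object reconstruction error
    return np.argsort(err)[-n:][::-1]          # hardest to reconstruct first
```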
2204.11335
|
Siming Fan
|
Siming Fan, Jingtan Piao, Chen Qian, Kwan-Yee Lin, Hongsheng Li
|
Simulating Fluids in Real-World Still Images
|
Technical Report, 19 pages, 17 figures, project page:
https://slr-sfs.github.io/ code: https://github.com/simon3dv/SLR-SFS
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we tackle the problem of real-world fluid animation from a
still image. The key to our system is a surface-based layered representation
deriving from video decomposition, where the scene is decoupled into a surface
fluid layer and an impervious background layer with corresponding
transparencies to characterize the composition of the two layers. The animated
video can be produced by warping only the surface fluid layer according to the
estimation of fluid motions and recombining it with the background. In
addition, we introduce surface-only fluid simulation, a $2.5D$ fluid
calculation version, as a replacement for motion estimation. Specifically, we
leverage the triangular mesh based on a monocular depth estimator to represent
the fluid surface layer and simulate the motion in the physics-based framework
with the inspiration of the classic theory of the hybrid Lagrangian-Eulerian
method, along with a learnable network so as to adapt to complex real-world
image textures. We demonstrate the effectiveness of the proposed system through
comparison with existing methods in both standard objective metrics and
subjective ranking scores. Extensive experiments not only indicate our method's
competitive performance for common fluid scenes but also better robustness and
reasonability under complex transparent fluid scenarios. Moreover, as the
proposed surface-based layer representation and surface-only fluid simulation
naturally disentangle the scene, interactive editing such as adding objects to
the river and texture replacing could be easily achieved with realistic
results.
|
[
{
"version": "v1",
"created": "Sun, 24 Apr 2022 18:47:15 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Fan",
"Siming",
""
],
[
"Piao",
"Jingtan",
""
],
[
"Qian",
"Chen",
""
],
[
"Lin",
"Kwan-Yee",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.995367 |
2204.11356
|
Kaushal Rai
|
Kshitij Rajput, Raghav Kapoor, Kaushal Rai, Preeti Kaur
|
Hate Me Not: Detecting Hate Inducing Memes in Code Switched Languages
|
To be published in 2022 Americas Conference on Information Systems
| null | null | null |
cs.LG cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The rise in the number of social media users has led to an increase in the
hateful content posted online. In countries like India, where multiple
languages are spoken, these abhorrent posts are from an unusual blend of
code-switched languages. This hate speech is depicted with the help of images
to form "Memes" which create a long-lasting impact on the human mind. In this
paper, we take up the task of hate and offense detection from multimodal data,
i.e., images (Memes) that contain text in code-switched languages. We first
present a novel triply annotated Indian political Memes (IPM) dataset, which
comprises memes from various Indian political events that have taken place
post-independence and are classified into three distinct categories. We also
propose a binary-channelled CNN cum LSTM based model to process the images
using the CNN model and text using the LSTM model to get state-of-the-art
results for this task.
|
[
{
"version": "v1",
"created": "Sun, 24 Apr 2022 21:03:57 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Rajput",
"Kshitij",
""
],
[
"Kapoor",
"Raghav",
""
],
[
"Rai",
"Kaushal",
""
],
[
"Kaur",
"Preeti",
""
]
] |
new_dataset
| 0.955851 |
2204.11436
|
Zhishe Wang
|
Zhishe Wang, Yanlin Chen, Wenyu Shao, Hui Li, Lei Zhang
|
SwinFuse: A Residual Swin Transformer Fusion Network for Infrared and
Visible Images
|
12pages, 19figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing deep learning fusion methods mainly concentrate on convolutional
neural networks, and few attempts have been made with transformers. Meanwhile,
the convolutional operation is a content-independent interaction between the
image and the convolution kernel, which may lose some important contexts and
further limit fusion performance. Towards this end, we present a simple and
strong fusion baseline for infrared and visible images, namely the Residual
Swin Transformer Fusion Network, termed SwinFuse.
Our SwinFuse includes three parts: the global feature extraction, fusion layer
and feature reconstruction. In particular, we build a fully attentional feature
encoding backbone to model the long-range dependency, which is a pure
transformer network and has a stronger representation ability compared with the
convolutional neural networks. Moreover, we design a novel feature fusion
strategy based on $L_{1}$-norm for sequence matrices, and measure the
corresponding activity levels from row and column vector dimensions, which can
well retain competitive infrared brightness and distinct visible details.
Finally, we compare our SwinFuse with nine state-of-the-art traditional and
deep learning methods on three different datasets through subjective
observations and objective comparisons, and the experimental results show
that the proposed SwinFuse obtains remarkable fusion performance with strong
generalization ability and competitive computational efficiency. The code will
be available at https://github.com/Zhishe-Wang/SwinFuse.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 05:04:19 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Wang",
"Zhishe",
""
],
[
"Chen",
"Yanlin",
""
],
[
"Shao",
"Wenyu",
""
],
[
"Li",
"Hui",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.989851 |
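A sketch in the spirit of the L1-norm fusion strategy the abstract describes, deriving per-position activity weights from the L1 norms of the two modalities' sequence features (the paper's exact row- and column-wise measurement may differ):

```python
import torch

def l1_fusion(feat_ir, feat_vis):
    """feat_*: (B, N, C) sequence features from the two modalities."""
    a_ir = feat_ir.abs().sum(dim=-1, keepdim=True)    # (B, N, 1) activity
    a_vis = feat_vis.abs().sum(dim=-1, keepdim=True)
    w_ir = a_ir / (a_ir + a_vis + 1e-8)               # soft activity weights
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis   # fused features
```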
2204.11457
|
Chao-Wei Huang
|
Chao-Wei Huang, Kai-Chou Yang, Zi-Yuan Chen, Hao-Chien Cheng, Po-Yu
Wu, Yu-Yang Huang, Chung-Kai Hsieh, Geng-Zhi Wildsky Fann, Ting-Yin Cheng,
Ethan Tu, Yun-Nung Chen
|
Islander: A Real-Time News Monitoring and Analysis System
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With thousands of news articles from hundreds of sources distributed and
shared every day, news consumption and information acquisition have been
increasingly difficult for readers. Additionally, the content of news articles
is becoming catchy or even inflammatory in order to attract readership, harming the accuracy
of news reporting. We present Islander, an online news analyzing system. The
system allows users to browse trending topics with articles from multiple
sources and perspectives. We define several metrics as proxies for news
quality, and develop algorithms for automatic estimation. The quality
estimation results are delivered through a web interface to newsreaders for
easy access to news and information. The website is publicly available at
https://islander.cc/
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 06:20:49 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Huang",
"Chao-Wei",
""
],
[
"Yang",
"Kai-Chou",
""
],
[
"Chen",
"Zi-Yuan",
""
],
[
"Cheng",
"Hao-Chien",
""
],
[
"Wu",
"Po-Yu",
""
],
[
"Huang",
"Yu-Yang",
""
],
[
"Hsieh",
"Chung-Kai",
""
],
[
"Fann",
"Geng-Zhi Wildsky",
""
],
[
"Cheng",
"Ting-Yin",
""
],
[
"Tu",
"Ethan",
""
],
[
"Chen",
"Yun-Nung",
""
]
] |
new_dataset
| 0.993179 |
2204.11495
|
Michael Bekos
|
Michael A. Bekos, Giordano Da Lozzo, Petr Hliněný, Michael
Kaufmann
|
Graph Product Structure for h-Framed Graphs
| null | null | null | null |
cs.DS cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
Graph product structure theory expresses certain graphs as subgraphs of the
strong product of much simpler graphs. In particular, an elegant formulation
for the corresponding structural theorems involves the strong product of a path
and of a bounded treewidth graph, and allows to lift combinatorial results for
bounded treewidth graphs to graph classes for which the product structure
holds, such as to planar graphs [Dujmovi\'c et al., J. ACM, 67(4), 22:1-38,
2020].
In this paper, we join the search for extensions of this powerful tool beyond
planarity by considering the h-framed graphs, a graph class that includes
1-planar, optimal 2-planar, and k-map graphs (for appropriate values of h). We
establish a graph product structure theorem for h-framed graphs stating that
the graphs in this class are subgraphs of the strong product of a path, of a
planar graph of treewidth at most 3, and of a clique of size $3\lfloor h/2
\rfloor +\lfloor h/3 \rfloor -1$. This allows us to improve over the previous
structural theorems for 1-planar and k-map graphs. Our results constitute
significant progress over the previous bounds on the queue number,
non-repetitive chromatic number, and p-centered chromatic number of these graph
classes, e.g., we lower the currently best upper bound on the queue number of
1-planar graphs and k-map graphs from 495 to 81 and from 32225k(k-3) to 61k,
respectively. We also employ the product structure machinery to improve the
current upper bounds of twin-width of planar and 1-planar graphs from 183 to
37, and from O(1) to 80, respectively. All our structural results are
constructive and yield efficient algorithms to obtain the corresponding
decompositions.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 08:21:23 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Bekos",
"Michael A.",
""
],
[
"Da Lozzo",
"Giordano",
""
],
[
"Hliněný",
"Petr",
""
],
[
"Kaufmann",
"Michael",
""
]
] |
new_dataset
| 0.991928 |
2204.11524
|
Stefano Buzzi
|
Stefano Buzzi and Carmen D'Andrea and Maria Fresia and Xiaofeng Wu
|
Multi-UE Multi-AP Beam Alignment in User-Centric Cell-Free Massive MIMO
Systems Operating at mmWave
|
Journal paper to appear in IEEE Transactions on Wireless
Communications. arXiv admin note: text overlap with arXiv:2106.13538
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the problem of beam alignment in a cell-free massive
MIMO deployment with multiple access points (APs) and multiple user equipments
(UEs) simultaneously operating in the same millimeter wave frequency band.
Assuming the availability of a control channel at sub-6 GHz frequencies, a
protocol is developed that permits estimating, for each UE, the strongest
propagation path from each of the surrounding APs, and to perform user-centric
association between the UEs and the APs. Estimation of the strongest paths from
nearby APs is realized at the UE in a one-phase procedure, during which all the
APs simultaneously transmit on pseudo-randomly selected channels with
pseudo-random transmit beamformers. An algorithm for orthogonal channels
assignment to the APs is also proposed, with the aim of minimizing the mutual
interference between APs that transmit on the same channels. The performance of
the proposed strategy is evaluated both in terms of probability of correct
detection of the directions of arrival and of departure associated to the
strongest beam from nearby APs, and in terms of downlink and uplink
signal-to-interference-plus-noise ratio. Numerical results show that the
proposed approach is effective and capable of efficiently realizing beam
alignment in a multi-UE multi-AP wireless scenario.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 09:29:56 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Buzzi",
"Stefano",
""
],
[
"D'Andrea",
"Carmen",
""
],
[
"Fresia",
"Maria",
""
],
[
"Wu",
"Xiaofeng",
""
]
] |
new_dataset
| 0.973102 |
2204.11548
|
Dennis Ludl
|
Dennis Burgermeister and Cristóbal Curio
|
PedRecNet: Multi-task deep neural network for full 3D human pose and
orientation estimation
|
Accepted at IEEE IV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a multitask network that supports various deep neural network
based pedestrian detection functions. Besides 2D and 3D human pose, it also
supports body and head orientation estimation based on full body bounding box
input. This eliminates the need for explicit face recognition. We show that the
performance of 3D human pose estimation and orientation estimation is
comparable to the state-of-the-art. Since very few data sets exist for 3D human
pose and in particular body and head orientation estimation based on full body
data, we further show the benefit of particular simulation data to train the
network. The network architecture is relatively simple, yet powerful, and
easily adaptable for further research and applications.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 10:47:01 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Burgermeister",
"Dennis",
""
],
[
"Curio",
"Cristóbal",
""
]
] |
new_dataset
| 0.999008 |
2204.11549
|
Gennadi Malaschonok I.
|
Gennadi Malaschonok
|
MathPartner Computer Algebra
|
9 pages
|
Programming and Computer Software, 43, 2 (2017) 112-118
| null | null |
cs.SC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we describe general characteristics of the MathPartner
computer algebra system (CAS) and Mathpar programming language thereof.
MathPartner can be used for scientific and engineering calculations, as well as
in high schools and universities. It allows one to carry out both simple
calculations (acting as a scientific calculator) and complex calculations with
large-scale mathematical objects. Mathpar is a procedural language; it supports
a large number of elementary and special functions, as well as matrix and
polynomial operators. This service allows one to build function images and
animate them. MathPartner also makes it possible to solve some symbolic
computation problems on supercomputers with distributed memory. We highlight
main differences of MathPartner from other CASs and describe the Mathpar
language along with the user service provided.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 10:49:10 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Malaschonok",
"Gennadi",
""
]
] |
new_dataset
| 0.962228 |
2204.11620
|
Ekaterina Kalinicheva
|
Ekaterina Kalinicheva, Loic Landrieu, Clément Mallet, Nesrine
Chehata
|
Multi-Layer Modeling of Dense Vegetation from Aerial LiDAR Scans
|
Earth Vision Workshop, CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The analysis of the multi-layer structure of wild forests is an important
challenge of automated large-scale forestry. While modern aerial LiDARs offer
geometric information across all vegetation layers, most datasets and methods
focus only on the segmentation and reconstruction of the top of the canopy. We
release WildForest3D, which consists of 29 study plots and over 2000 individual
trees across 47,000 m^2 with dense 3D annotation, along with occupancy and height
maps for 3 vegetation layers: ground vegetation, understory, and overstory. We
propose a 3D deep network architecture predicting for the first time both 3D
point-wise labels and high-resolution layer occupancy rasters simultaneously.
This allows us to produce a precise estimation of the thickness of each
vegetation layer as well as the corresponding watertight meshes, therefore
meeting most forestry purposes. Both the dataset and the model are released in
open access: https://github.com/ekalinicheva/multi_layer_vegetation.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 12:47:05 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Kalinicheva",
"Ekaterina",
""
],
[
"Landrieu",
"Loic",
""
],
[
"Mallet",
"Clément",
""
],
[
"Chehata",
"Nesrine",
""
]
] |
new_dataset
| 0.992323 |
2204.11674
|
Elias Najarro
|
Elias Najarro, Shyam Sudhakaran, Claire Glanois, Sebastian Risi
|
HyperNCA: Growing Developmental Networks with Neural Cellular Automata
|
Paper accepted as a conference paper at ICLR 'From Cells to
Societies' workshop 2022
| null | null | null |
cs.NE cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In contrast to deep reinforcement learning agents, biological neural networks
are grown through a self-organized developmental process. Here we propose a new
hypernetwork approach to grow artificial neural networks based on neural
cellular automata (NCA). Inspired by self-organising systems and
information-theoretic approaches to developmental biology, we show that our
HyperNCA method can grow neural networks capable of solving common
reinforcement learning tasks. Finally, we explore how the same approach can be
used to build developmental metamorphosis networks capable of transforming
their weights to solve variations of the initial RL task.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 14:08:50 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Najarro",
"Elias",
""
],
[
"Sudhakaran",
"Shyam",
""
],
[
"Glanois",
"Claire",
""
],
[
"Risi",
"Sebastian",
""
]
] |
new_dataset
| 0.963328 |
2204.11714
|
Fiona Anting Tan Ms
|
Fiona Anting Tan, Ali H\"urriyeto\u{g}lu, Tommaso Caselli, Nelleke
Oostdijk, Tadashi Nomoto, Hansi Hettiarachchi, Iqra Ameer, Onur Uca, Farhana
Ferdousi Liza, Tiancheng Hu
|
The Causal News Corpus: Annotating Causal Relations in Event Sentences
from News
|
Accepted to LREC 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the importance of understanding causality, corpora addressing causal
relations are limited. There is a discrepancy between existing annotation
guidelines of event causality and conventional causality corpora that focus
more on linguistics. Many guidelines restrict themselves to including only
explicit relations or clause-based arguments. Therefore, we propose an
annotation schema for event causality that addresses these concerns. We
annotated 3,559 event sentences from protest event news with labels on whether
they contain causal relations or not. Our corpus is known as the Causal News
Corpus (CNC). A neural network built upon a state-of-the-art pre-trained
language model performed well, with an 81.20% F1 score on the test set and
83.46% in 5-fold cross-validation. CNC is transferable across two external corpora:
CausalTimeBank (CTB) and Penn Discourse Treebank (PDTB). Leveraging each of
these external datasets for training, we achieved up to approximately 64% F1 on
the CNC test set without additional fine-tuning. CNC also served as an
effective training and pre-training dataset for the two external corpora.
Lastly, we demonstrate the difficulty of our task to the layman in a
crowd-sourced annotation exercise. Our annotated corpus is publicly available,
providing a valuable resource for causal text mining researchers.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 15:14:07 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Tan",
"Fiona Anting",
""
],
[
"Hürriyetoğlu",
"Ali",
""
],
[
"Caselli",
"Tommaso",
""
],
[
"Oostdijk",
"Nelleke",
""
],
[
"Nomoto",
"Tadashi",
""
],
[
"Hettiarachchi",
"Hansi",
""
],
[
"Ameer",
"Iqra",
""
],
[
"Uca",
"Onur",
""
],
[
"Liza",
"Farhana Ferdousi",
""
],
[
"Hu",
"Tiancheng",
""
]
] |
new_dataset
| 0.981494 |
2204.11751
|
Fani Deligianni Dr
|
Matthew Malek-Podjaski, Fani Deligianni
|
Adversarial Attention for Human Motion Synthesis
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Analysing human motions is a core topic of interest for many disciplines,
from Human-Computer Interaction to entertainment, Virtual Reality, and
healthcare. Deep learning has achieved impressive results in capturing human
pose in real-time. On the other hand, due to high inter-subject variability,
human motion analysis models often suffer from not being able to generalise to
data from unseen subjects due to the very limited specialised datasets available in
fields such as healthcare. However, acquiring human motion datasets is highly
time-consuming, challenging, and expensive. Hence, human motion synthesis is a
crucial research problem within deep learning and computer vision. We present a
novel method for controllable human motion synthesis by applying
attention-based probabilistic deep adversarial models with end-to-end training.
We show that we can generate synthetic human motion over both short- and
long-time horizons through the use of adversarial attention. Furthermore, we
show that we can improve the classification performance of deep learning models
in cases where there is inadequate real data, by supplementing existing
datasets with synthetic motions.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 16:12:42 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Malek-Podjaski",
"Matthew",
""
],
[
"Deligianni",
"Fani",
""
]
] |
new_dataset
| 0.998774 |
2204.11823
|
Jianglin Fu
|
Jianglin Fu, Shikai Li, Yuming Jiang, Kwan-Yee Lin, Chen Qian, Chen
Change Loy, Wayne Wu, Ziwei Liu
|
StyleGAN-Human: A Data-Centric Odyssey of Human Generation
|
Technical Report. Project page: https://stylegan-human.github.io/
Code and models: https://github.com/stylegan-human/StyleGAN-Human/
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unconditional human image generation is an important task in vision and
graphics, which enables various applications in the creative industry. Existing
studies in this field mainly focus on "network engineering" such as designing
new components and objective functions. This work takes a data-centric
perspective and investigates multiple critical aspects in "data engineering",
which we believe would complement the current practice. To facilitate a
comprehensive study, we collect and annotate a large-scale human image dataset
with over 230K samples capturing diverse poses and textures. Equipped with this
large dataset, we rigorously investigate three essential factors in data
engineering for StyleGAN-based human generation, namely data size, data
distribution, and data alignment. Extensive experiments reveal several valuable
observations w.r.t. these aspects: 1) Large-scale data, more than 40K images,
are needed to train a high-fidelity unconditional human generation model with
vanilla StyleGAN. 2) A balanced training set helps improve the generation
quality with rare face poses compared to the long-tailed counterpart, whereas
simply balancing the clothing texture distribution does not effectively bring
an improvement. 3) Human GAN models with body centers for alignment outperform
models trained using face centers or pelvis points as alignment anchors. In
addition, a model zoo and human editing applications are demonstrated to
facilitate future research in the community.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 17:55:08 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Fu",
"Jianglin",
""
],
[
"Li",
"Shikai",
""
],
[
"Jiang",
"Yuming",
""
],
[
"Lin",
"Kwan-Yee",
""
],
[
"Qian",
"Chen",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Wu",
"Wayne",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.996695 |
cs/0612015
|
Josep Rif\`a
|
J. Rif\`a, F. Solov'eva, M. Villanueva
|
On the intersection of additive perfect codes
|
Submitted to Trans. Inform. Theory
| null | null | null |
cs.IT math.IT
| null |
The intersection problem for additive (extended and non-extended) perfect
codes, i.e., the possible values of the number of codewords in the
intersection of two additive codes C1 and C2 of the same length, is
investigated. Lower and upper bounds for the intersection number are computed
and, for any value between these bounds, codes which have this given
intersection value are constructed. For all these codes the abelian group
structure of the intersection is characterized. The parameters of this abelian
group structure corresponding to the intersection codes are computed and lower
and upper bounds for these parameters are established. Finally, constructions
of codes whose intersection attains any parameters between these bounds are
given.
|
[
{
"version": "v1",
"created": "Mon, 4 Dec 2006 12:00:21 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Rifà",
"J.",
""
],
[
"Solov'eva",
"F.",
""
],
[
"Villanueva",
"M.",
""
]
] |
new_dataset
| 0.997897 |
1911.04586
|
Trang Ngoc Cao
|
Trang Ngoc Cao, Vahid Jamali, Wayan Wicke, Phee Lep Yeoh, Nikola
Zlatanov, Jamie S Evans, and Robert Schober
|
Chemical Reactions-based Detection Mechanism for Molecular
Communications
|
13 pages, 11 figures, 1 table. This journal version was submitted in
April 2022 to the IEEE for possible publication. A part of this article was
presented at the IEEE Wireless Communications and Networking Conference 2020
and stored in the previous version on arXiv
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In molecular communications, the direct detection of signaling molecules may
be challenging due to a lack of suitable sensors and interference in the
environment. Motivated by research in molecular biology, we investigate an
indirect detection mechanism using chemical reactions between the signaling
molecules and a molecular probe to produce an easy-to-measure product at the
receiver. We consider two implementations of the proposed detection mechanism,
i.e., unrestricted probe movement and probes restricted to a volume around the
receiver. The reaction-diffusion equations describing the concentrations of the
reactant and product molecules in the system are non-linear and coupled, and
cannot be solved in closed form. Therefore, we develop an efficient iterative
algorithm by discretizing the time variable and solving for the space variables
of the equations in each time step. Our results show that the concentrations of
the product molecules and the signaling molecules share a similar
characteristic over time, i.e., a single peak and a long tail. The peak and
tail values of the product molecule concentration can be controlled by choosing
probes with suitable parameters. By carefully choosing the molecular probe and
optimizing the decision threshold, the bit error rate (BER) can be improved significantly and
outperform that of a direct detection system.
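To make the time-discretized scheme above concrete, here is a minimal 1-D sketch of one explicit time step for a bimolecular reaction A + P -> C with diffusion; the geometry, boundary conditions, and rate law are simplifying assumptions, not the paper's full coupled 3-D model.

```python
import numpy as np

def step(a, p, c, D, k, dt, dx):
    """One explicit Euler step of a 1-D reaction-diffusion system
    A + P -> C with diffusion coefficient D and reaction rate k
    (periodic boundaries, for simplicity)."""
    def lap(u):
        return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    rate = k * a * p                     # bimolecular reaction term
    a = a + dt * (D * lap(a) - rate)     # signaling molecules consumed
    p = p + dt * (D * lap(p) - rate)     # probe molecules consumed
    c = c + dt * (D * lap(c) + rate)     # easy-to-measure product formed
    return a, p, c
```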
|
[
{
"version": "v1",
"created": "Mon, 11 Nov 2019 22:25:11 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 18:56:00 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Cao",
"Trang Ngoc",
""
],
[
"Jamali",
"Vahid",
""
],
[
"Wicke",
"Wayan",
""
],
[
"Yeoh",
"Phee Lep",
""
],
[
"Zlatanov",
"Nikola",
""
],
[
"Evans",
"Jamie S",
""
],
[
"Schober",
"Robert",
""
]
] |
new_dataset
| 0.996402 |
1912.01547
|
Sariel Har-Peled
|
Kevin Buchin, Sariel Har-Peled and Daniel Olah
|
Sometimes Reliable Spanners of Almost Linear Size
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reliable spanners can withstand huge failures, even when a linear number of
vertices are deleted from the network. In case of failures, a reliable spanner
may have some additional vertices for which the spanner property no longer
holds, but this collateral damage is bounded by a fraction of the size of the
attack. It is known that $\Omega(n\log n)$ edges are needed to achieve this
strong property, where $n$ is the number of vertices in the network, even in
one dimension. Constructions of reliable geometric $(1+\varepsilon)$-spanners,
for $n$ points in $\Re^d$, are known, where the resulting graph has $O( n \log
n \log \log^{6}n )$ edges.
Here, we show randomized constructions of smaller size spanners that have the
desired reliability property in expectation or with good probability. The new
construction is simple, and potentially practical -- replacing a hierarchical
usage of expanders (which renders the previous constructions impractical) by a
simple skip-list like construction. This results in a $1$-spanner, on the line,
that has a linear number of edges. Using this, we present a construction of a
reliable spanner in $\Re^d$ with $O( n \log \log^{2} n \log \log \log n )$
edges.
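As a flavor of the "simple skip-list like construction" mentioned above, the sketch below builds a 1-spanner for points on a line by connecting consecutive points on each level and promoting a random subsample; the promotion probability and the reliability-specific augmentation are assumptions omitted here.

```python
import random

def skip_list_spanner(points, p=0.5, seed=0):
    """Skip-list-style 1-spanner sketch for points on a line:
    connect consecutive points on every level, then promote each
    point to the next level with probability p."""
    rng = random.Random(seed)
    level, edges = sorted(points), set()
    while len(level) > 1:
        edges.update(zip(level, level[1:]))       # consecutive links
        nxt = [x for x in level if rng.random() < p]
        if len(nxt) == len(level):                # guarantee progress
            nxt = level[::2]
        level = nxt
    return edges
```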
|
[
{
"version": "v1",
"created": "Tue, 3 Dec 2019 17:50:05 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Dec 2019 17:20:19 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Apr 2022 19:10:13 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Buchin",
"Kevin",
""
],
[
"Har-Peled",
"Sariel",
""
],
[
"Olah",
"Daniel",
""
]
] |
new_dataset
| 0.992967 |
2002.09792
|
Yiannis Kantaros
|
Yiannis Kantaros, Taylor Carpenter, Kaustubh Sridhar, Yahan Yang,
Insup Lee, James Weimer
|
Real-Time Detectors for Digital and Physical Adversarial Inputs to
Perception Systems
| null | null | null | null |
cs.CV cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural network (DNN) models have proven to be vulnerable to adversarial
digital and physical attacks. In this paper, we propose a novel attack- and
dataset-agnostic and real-time detector for both types of adversarial inputs to
DNN-based perception systems. In particular, the proposed detector relies on
the observation that adversarial images are sensitive to certain
label-invariant transformations. Specifically, to determine if an image has
been adversarially manipulated, the proposed detector checks if the output of
the target classifier on a given input image changes significantly after
feeding it a transformed version of the image under investigation. Moreover, we
show that the proposed detector is computationally light both at runtime and
design-time which makes it suitable for real-time applications that may also
involve large-scale image domains. To highlight this, we demonstrate the
efficiency of the proposed detector on ImageNet, a task that is computationally
challenging for the majority of relevant defenses, and on physically attacked
traffic signs that may be encountered in real-time autonomy applications.
Finally, we propose the first adversarial dataset, called AdvNet that includes
both clean and physical traffic sign images. Our extensive comparative
experiments on the MNIST, CIFAR10, ImageNet, and AdvNet datasets show that
VisionGuard outperforms existing defenses in terms of scalability and detection
performance. We have also evaluated the proposed detector on field test data
obtained on a moving vehicle equipped with a perception-based DNN being under
attack.
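A minimal sketch of the transformation-consistency check described above; the divergence measure and threshold are illustrative assumptions, not the exact VisionGuard configuration.

```python
import torch
import torch.nn.functional as F

def looks_adversarial(model, image, transform, threshold=0.1):
    """Flag an image if the classifier's output distribution shifts
    too much under a label-invariant transformation."""
    with torch.no_grad():
        p = F.softmax(model(image), dim=-1)
        p_t = F.softmax(model(transform(image)), dim=-1)
    kl = F.kl_div(p_t.log(), p, reduction="batchmean")  # KL(p || p_t)
    return kl.item() > threshold
```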
|
[
{
"version": "v1",
"created": "Sun, 23 Feb 2020 00:03:57 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 22:20:39 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Kantaros",
"Yiannis",
""
],
[
"Carpenter",
"Taylor",
""
],
[
"Sridhar",
"Kaustubh",
""
],
[
"Yang",
"Yahan",
""
],
[
"Lee",
"Insup",
""
],
[
"Weimer",
"James",
""
]
] |
new_dataset
| 0.986991 |
2005.02264
|
Adrian Boguszewski
|
Adrian Boguszewski, Dominik Batorski, Natalia Ziemba-Jankowska, Tomasz
Dziedzic, Anna Zambrzycka
|
LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands,
Water and Roads from Aerial Imagery
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monitoring of land cover and land use is crucial in natural resources
management. Automatic visual mapping can carry enormous economic value for
agriculture, forestry, or public administration. Satellite or aerial images
combined with computer vision and deep learning enable precise assessment and
can significantly speed up change detection. Aerial imagery usually provides
images with much higher pixel resolution than satellite data allowing more
detailed mapping. However, there is still a lack of aerial datasets designed
for segmentation that cover rural areas at a resolution of tens of centimeters
per pixel, provide fine manual labels, and include environmental instances of
high public importance, such as buildings, woods, water, and roads.
Here we introduce the LandCover.ai (Land Cover from Aerial Imagery) dataset for
semantic segmentation. We collected images covering 216.27 sq. km of rural
areas across Poland, a country in Central Europe (39.51 sq. km at a resolution
of 50 cm per pixel and 176.76 sq. km at 25 cm per pixel), and manually
annotated the following four classes of objects: buildings, woodlands, water,
and roads. Additionally, we report simple benchmark results, achieving 85.56%
mean intersection over union on the test set. This demonstrates that the automatic
mapping of land cover is possible with a relatively small, cost-efficient,
RGB-only dataset. The dataset is publicly available at
https://landcover.ai.linuxpolska.com/
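For reference, the reported benchmark metric can be computed as follows; this is a generic mean-IoU routine, not the authors' evaluation script.

```python
import numpy as np

def mean_iou(pred, target, n_classes=4):
    """Mean intersection over union over the annotated classes
    (buildings, woodlands, water, roads)."""
    ious = []
    for cls in range(n_classes):
        inter = np.logical_and(pred == cls, target == cls).sum()
        union = np.logical_or(pred == cls, target == cls).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```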
|
[
{
"version": "v1",
"created": "Tue, 5 May 2020 15:00:49 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jul 2020 11:59:12 GMT"
},
{
"version": "v3",
"created": "Wed, 26 May 2021 13:45:27 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Apr 2022 19:59:27 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Boguszewski",
"Adrian",
""
],
[
"Batorski",
"Dominik",
""
],
[
"Ziemba-Jankowska",
"Natalia",
""
],
[
"Dziedzic",
"Tomasz",
""
],
[
"Zambrzycka",
"Anna",
""
]
] |
new_dataset
| 0.99974 |
2201.05933
|
Antonis Papasavva
|
Amin Mekacher, Antonis Papasavva
|
"I Can't Keep It Up." A Dataset from the Defunct Voat.co News Aggregator
|
16th International Conference on Web and Social Media
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Voat.co was a news aggregator website that shut down on December 25, 2020.
The site had a troubled history and was known for hosting various banned
subreddits. This paper presents a dataset with over 2.3M submissions and 16.2M
comments posted by 113K users in 7.1K subverses (the Voat equivalent of a
subreddit). Our dataset covers the whole lifetime of Voat, from its development
period starting on November 8, 2013, through its official founding in April
2014, up until the day it shut down (December 25, 2020). This work presents the largest
and most complete publicly available Voat dataset, to the best of our
knowledge. Along with the release of this dataset, we present a preliminary
analysis covering posting activity and daily user and subverse registration on
the platform so that researchers interested in our dataset can know what to
expect. Our data may prove helpful to false news dissemination studies as we
analyze the links users share on the platform, finding that many communities
rely on alternative news press, like Breitbart and GatewayPundit, for their
daily discussions. In addition, we perform network analysis on user
interactions finding that many users prefer not to interact with subverses
outside their narrative interests, which could be helpful to researchers
focusing on polarization and echo chambers. Also, since Voat was one of the
platforms banned Reddit communities migrated to, we are confident our dataset
will motivate and assist researchers studying deplatforming. Finally, many
hateful and conspiratorial communities were very popular on Voat, which makes
our work valuable for researchers focusing on toxicity, conspiracy theories,
cross-platform studies of social networks, and natural language processing.
|
[
{
"version": "v1",
"created": "Sat, 15 Jan 2022 23:25:53 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2022 00:31:37 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Apr 2022 17:06:07 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Mekacher",
"Amin",
""
],
[
"Papasavva",
"Antonis",
""
]
] |
new_dataset
| 0.999579 |
2204.10402
|
Izzat El Hajj
|
Peter Yamout, Karim Barada, Adnan Jaljuli, Amer E. Mouawad, Izzat El
Hajj
|
Parallel Vertex Cover Algorithms on GPUs
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Finding small vertex covers in a graph has applications in numerous domains.
Two common formulations of the problem include: Minimum Vertex Cover, which
finds the smallest vertex cover in a graph, and Parameterized Vertex Cover,
which finds a vertex cover whose size is less than or equal to some parameter
$k$. Algorithms for both formulations traverse a search tree, which grows
exponentially with the size of the graph or the value of $k$.
Parallelizing the traversal of the vertex cover search tree on GPUs is
challenging for multiple reasons. First, the search tree is a narrow binary
tree which makes it difficult to extract enough sub-trees to process in
parallel to fully utilize the GPU's resources. Second, the search tree is
highly imbalanced which makes load balancing across a massive number of
parallel GPU workers challenging. Third, keeping around all the intermediate
state needed to traverse many sub-trees in parallel puts high pressure on the
GPU's memory resources and may act as a limiting factor to parallelism.
To address these challenges, we propose an approach to traverse the vertex
cover search tree in parallel using GPUs while handling dynamic load balancing.
Each thread block traverses a different sub-tree using a local stack, however,
we also use a global worklist to balance load. Blocks contribute branches of
their sub-trees to the global worklist on an as-needed basis, while blocks that
finish their sub-trees get new ones from the global worklist. We use degree
arrays to represent intermediate graphs so that the representation is compact
in memory to avoid limiting parallelism, but self-contained which is necessary
for load balancing. Our evaluation shows that compared to prior work, our
hybrid approach of using local stacks and a global worklist substantially
improves performance and reduces load imbalance, especially on difficult
instances of the problem.
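A sequential Python sketch of the hybrid scheduling idea, with a per-worker local stack plus a global worklist fed on an as-needed basis; the donation heuristic and data layout are assumptions, and the actual system runs as CUDA thread blocks over degree arrays.

```python
from collections import deque

def has_vertex_cover(edges, k):
    """Parameterized vertex cover search sketching the hybrid scheme:
    a local stack per worker plus a shared global worklist."""
    global_worklist = deque([(frozenset(), tuple(edges), k)])
    while global_worklist:
        local_stack = [global_worklist.pop()]          # grab a sub-tree
        while local_stack:
            cover, rem, budget = local_stack.pop()
            rem = tuple(e for e in rem
                        if e[0] not in cover and e[1] not in cover)
            if not rem:
                return True                            # all edges covered
            if budget == 0:
                continue                               # prune this branch
            u, v = rem[0]                              # branch on one edge
            children = [(cover | {u}, rem, budget - 1),
                        (cover | {v}, rem, budget - 1)]
            if len(local_stack) > 8:                   # donate when deep,
                global_worklist.append(children.pop()) # as-needed balancing
            local_stack.extend(children)
    return False
```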
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 20:44:48 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Yamout",
"Peter",
""
],
[
"Barada",
"Karim",
""
],
[
"Jaljuli",
"Adnan",
""
],
[
"Mouawad",
"Amer E.",
""
],
[
"Hajj",
"Izzat El",
""
]
] |
new_dataset
| 0.995941 |
2204.10408
|
Georgios Michalopoulos
|
George Michalopoulos, Michal Malyska, Nicola Sahar, Alexander Wong,
Helen Chen
|
ICDBigBird: A Contextual Embedding Model for ICD Code Classification
|
7 pages, 1 figure, accepted in BioNLP 2022
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The International Classification of Diseases (ICD) system is the
international standard for classifying diseases and procedures during a
healthcare encounter and is widely used for healthcare reporting and management
purposes. Assigning correct codes for clinical procedures is important for
clinical, operational, and financial decision-making in healthcare. Contextual
word embedding models have achieved state-of-the-art results in multiple NLP
tasks. However, these models have yet to achieve state-of-the-art results in
the ICD classification task, since one of their main disadvantages is that they
can only process documents that contain a small number of tokens, which is
rarely the case with real patient notes. In this paper, we introduce ICDBigBird,
a BigBird-based model that integrates a Graph Convolutional Network (GCN),
which takes advantage of the relations between ICD codes to create
'enriched' representations of their embeddings, with a BigBird contextual model
that can process larger documents. Our experiments on a real-world clinical
dataset demonstrate the effectiveness of our BigBird-based model on the ICD
classification task as it outperforms the previous state-of-the-art models.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 20:59:56 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Michalopoulos",
"George",
""
],
[
"Malyska",
"Michal",
""
],
[
"Sahar",
"Nicola",
""
],
[
"Wong",
"Alexander",
""
],
[
"Chen",
"Helen",
""
]
] |
new_dataset
| 0.999375 |
2204.10422
|
Giuseppe Abrami
|
Giuseppe Abrami, Mevl\"ut Bagci, Leon Hammerla, Alexander Mehler
|
German Parliamentary Corpus (GerParCor)
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Parliamentary debates represent a large and partly unexploited treasure trove
of publicly accessible texts. In the German-speaking area, there is a certain
deficit of uniformly accessible and annotated corpora covering all
German-speaking parliaments at the national and federal level. To address this
gap, we introduce the German Parliament Corpus (GerParCor). GerParCor is a
genre-specific corpus of (predominantly historical) German-language
parliamentary protocols from three centuries and four countries, including
state and federal level data. In addition, GerParCor contains conversions of
scanned protocols and, in particular, of protocols in Fraktur converted via an
OCR process based on Tesseract. All protocols were preprocessed by means of the
NLP pipeline of spaCy3 and automatically annotated with metadata regarding
their session date. GerParCor is made available in the XMI format of the UIMA
project. In this way, GerParCor can be used as a large corpus of historical
texts in the field of political communication for various tasks in NLP.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 22:06:55 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Abrami",
"Giuseppe",
""
],
[
"Bagci",
"Mevlüt",
""
],
[
"Hammerla",
"Leon",
""
],
[
"Mehler",
"Alexander",
""
]
] |
new_dataset
| 0.999292 |
2204.10447
|
Devesh Jha
|
Devesh K. Jha, Diego Romeres, Siddarth Jain, William Yerazunis and
Daniel Nikovski
|
Design of Adaptive Compliance Controllers for Safe Robotic Assembly
|
8 pages, 10 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Insertion operations are a critical element of most robotic assembly
operations, and peg-in-hole (PiH) insertion is one of the most widely studied
tasks in the industrial and academic manipulation communities. PiH insertion is
in fact an entire class of problems, where the complexity of the problem can
depend on the type of misalignment and contact formation during an insertion
attempt. In this paper, we present the design and analysis of adaptive
compliance controllers which can be used in insertion-type assembly tasks,
including learning-based compliance controllers which can be used for insertion
problems in the presence of uncertainty in the goal location during robotic
assembly. We first present the design of compliance controllers which can
ensure safe operation of the robot by limiting experienced contact forces
during contact formation. Consequently, we present an analysis of the force
signature obtained during the contact formation to learn the corrective action
needed to perform insertion. Finally, we use the proposed compliance
controllers and learned models to design a policy that can successfully perform
insertion in novel test conditions with an almost perfect success rate. We
validate the proposed approach on a physical robotic test-bed using a 6-DoF
manipulator arm.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 00:49:08 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Jha",
"Devesh K.",
""
],
[
"Romeres",
"Diego",
""
],
[
"Jain",
"Siddarth",
""
],
[
"Yerazunis",
"William",
""
],
[
"Nikovski",
"Daniel",
""
]
] |
new_dataset
| 0.970575 |
2204.10457
|
Maxwell Kolarich
|
Maxwell Kolarich, Negar Mehr
|
Stackelberg Routing of Autonomous Cars in Mixed-Autonomy Traffic
Networks
|
8 pages, 4 figures. Accepted for publication at the 2022 American
Control Conference (ACC)
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As autonomous cars are becoming tangible technologies, road networks will
soon be shared by human-driven and autonomous cars. However, humans normally
act selfishly which may result in network inefficiencies. In this work, we
study increasing the efficiency of mixed-autonomy traffic networks by routing
autonomous cars altruistically. We consider a Stackelberg routing setting where
a central planner can route autonomous cars in favor of society such that
when human-driven cars react and select their routes selfishly, the overall
system efficiency is increased. We develop a Stackelberg routing strategy for
autonomous cars in a mixed-autonomy traffic network with arbitrary geometry. We
bound the price of anarchy that our Stackelberg strategy induces and prove that
our proposed Stackelberg routing will reduce the price of anarchy, i.e. it
increases the network efficiency. Specifically, we consider a non-atomic
routing game in a mixed-autonomy setting with affine latency functions and
develop an extension of the SCALE Stackelberg strategy for mixed-autonomy
networks. We derive an upper bound on the price of anarchy that this
Stackelberg routing induces and demonstrate that in the limit, our bound
recovers the price of anarchy bounds for networks of only human-driven cars.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 01:37:03 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Kolarich",
"Maxwell",
""
],
[
"Mehr",
"Negar",
""
]
] |
new_dataset
| 0.994895 |
2204.10461
|
RuiZhuo Xu
|
Lin Yao, Jianfei Song, Ruizhuo Xu, Yingfang Yang, Zijian Chen and
Yafeng Deng
|
WaBERT: A Low-resource End-to-end Model for Spoken Language
Understanding and Speech-to-BERT Alignment
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Historically, lower-level tasks such as automatic speech recognition (ASR) and
speaker identification have been the main focus in the speech field. Interest has
been growing in higher-level spoken language understanding (SLU) tasks
recently, like sentiment analysis (SA). However, improving performances on SLU
tasks remains a big challenge. Basically, there are two main methods for SLU
tasks: (1) Two-stage method, which uses a speech model to transfer speech to
text, then uses a language model to get the results of downstream tasks; (2)
One-stage method, which just fine-tunes a pre-trained speech model to fit in
the downstream tasks. The first method loses emotional cues such as intonation,
and causes recognition errors during the ASR process, while the second one lacks
necessary language knowledge. In this paper, we propose the Wave BERT (WaBERT),
a novel end-to-end model combining the speech model and the language model for
SLU tasks. WaBERT is based on the pre-trained speech and language model, hence
training from scratch is not needed. We also set most parameters of WaBERT
frozen during training. By introducing WaBERT, audio-specific information and
language knowledge are integrated in the short-time and low-resource training
process to improve results on the dev dataset of SLUE SA tasks by 1.15% of
recall score and 0.82% of F1 score. Additionally, we modify the serial
Continuous Integrate-and-Fire (CIF) mechanism to achieve the monotonic
alignment between the speech and text modalities.
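A sketch of the freezing strategy described above, keeping most pretrained weights fixed; which parameter groups stay trainable in WaBERT is not stated here, so the name filter below is a placeholder assumption.

```python
import torch.nn as nn

def freeze_all_but(model: nn.Module, trainable_keys=("bridge", "cif")):
    """Freeze every parameter except those whose names contain one of
    the given keys (hypothetical names for the trainable modules)."""
    for name, param in model.named_parameters():
        param.requires_grad = any(key in name for key in trainable_keys)
```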
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 02:14:40 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Yao",
"Lin",
""
],
[
"Song",
"Jianfei",
""
],
[
"Xu",
"Ruizhuo",
""
],
[
"Yang",
"Yingfang",
""
],
[
"Chen",
"Zijian",
""
],
[
"Deng",
"Yafeng",
""
]
] |
new_dataset
| 0.996653 |
2204.10466
|
Jawad Haj-Yahya
|
Georgia Antoniou, Haris Volos, Davide B. Bartolini, Tom Rollet,
Yiannakis Sazeides, Jawad Haj Yahya
|
AgilePkgC: An Agile System Idle State Architecture for Energy
Proportional Datacenter Servers
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the design of AgilePkgC (APC): a new C-state architecture
that improves the energy proportionality of servers that operate at low
utilization while running microservices of user-facing applications. APC
targets the reduction of power when all cores are idle in a shallow C-state,
ready to transition back to service. In particular, APC targets the power of
the resources shared by the cores (e.g., LLC, network-on-chip, IOs, DRAM) which
remain active while no core is active to use them. APC realizes its objective
by using low-overhead hardware to facilitate sub-microsecond entry/exit latency
to a new package C-state and judiciously selecting intermediate power modes for
the different shared resources that offer fast transitions and yet substantial
power savings. Our experimental evaluation indicates that APC holds the
potential to reduce server power by up to 41% with a worst-case performance
degradation of less than 0.1% for several representative workloads. Our results
clearly support the research, development, and eventual adoption of new deep
and fast package C-states, like APC, for future server CPUs targeting
datacenters running microservices.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 02:30:04 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Antoniou",
"Georgia",
""
],
[
"Volos",
"Haris",
""
],
[
"Bartolini",
"Davide B.",
""
],
[
"Rollet",
"Tom",
""
],
[
"Sazeides",
"Yiannakis",
""
],
[
"Yahya",
"Jawad Haj",
""
]
] |
new_dataset
| 0.9985 |
2204.10521
|
Qiang Zhang
|
Qiang Zhang, Jason Naradowsky, Yusuke Miyao
|
Rethinking Offensive Text Detection as a Multi-Hop Reasoning Problem
|
18 pages, 4 figures, 10 tables, accepted in Findings of the
Association for Computational Linguistics 2022
| null | null | null |
cs.CL cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the task of implicit offensive text detection in dialogues,
where a statement may have either an offensive or non-offensive interpretation,
depending on the listener and context. We argue that reasoning is crucial for
understanding this broader class of offensive utterances and release SLIGHT, a
dataset to support research on this task. Experiments using the data show that
state-of-the-art methods of offense detection perform poorly when asked to
detect implicitly offensive statements, achieving only ${\sim} 11\%$ accuracy.
In contrast to existing offensive text detection datasets, SLIGHT features
human-annotated chains of reasoning which describe the mental process by which
an offensive interpretation can be reached from each ambiguous statement. We
explore the potential for a multi-hop reasoning approach by utilizing existing
entailment models to score the probability of these chains and show that even
naive reasoning models can yield improved performance in most situations.
Furthermore, analysis of the chains provides insight into the human
interpretation process and emphasizes the importance of incorporating
additional commonsense knowledge.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 06:20:15 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Zhang",
"Qiang",
""
],
[
"Naradowsky",
"Jason",
""
],
[
"Miyao",
"Yusuke",
""
]
] |
new_dataset
| 0.984761 |
2204.10646
|
Laura Pollacci
|
Laura Pollacci, Alina Sirbu, Fosca Giannotti, Dino Pedreschi
|
Measuring the Salad Bowl: Superdiversity on Twitter
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Superdiversity refers to large cultural diversity in a population due to
immigration. In this paper, we introduce a superdiversity index based on the
changes in the emotional content of words used by a multi-cultural community,
compared to the standard language. To compute our index, we use Twitter data and
we develop an algorithm to extend a dictionary for lexicon-based sentiment
analysis. We validate our index by comparing it with official immigration
statistics available from the European Commission's Joint Research Center,
through the D4I data challenge. We show that, in general, our measure
correlates with immigration rates, at various geographical resolutions. Our
method produces very good results across languages, being tested here both on
English and Italian tweets. We argue that our index has predictive power in
regions where exact data on immigration is not available, paving the way for a
nowcasting model of immigration rates.
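An illustrative reading of the index: measure how far a community's word-level sentiment drifts from a standard lexicon. The aggregation below (mean absolute shift over shared words) is an assumption, not the paper's exact formula.

```python
def superdiversity_index(community_scores, standard_lexicon):
    """Mean absolute shift of word sentiment in a community's usage
    relative to a standard sentiment lexicon (illustrative only)."""
    shared = set(community_scores) & set(standard_lexicon)
    if not shared:
        return 0.0
    return sum(abs(community_scores[w] - standard_lexicon[w])
               for w in shared) / len(shared)
```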
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 11:30:58 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Pollacci",
"Laura",
""
],
[
"Sirbu",
"Alina",
""
],
[
"Giannotti",
"Fosca",
""
],
[
"Pedreschi",
"Dino",
""
]
] |
new_dataset
| 0.977188 |
2204.10686
|
Sylvain Sen\'e
|
Jacques Demongeot, Tarek Melliti, Mathilde Noual, Damien Regnault and
Sylvain Sen\'e
|
Boolean automata isolated cycles and tangential double-cycles dynamics
| null |
Springer Series on Emergence, Complexity and Computation, vol. 42,
pp. 145-178, 2022
|
10.1007/978-3-030-92551-2_11
| null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
Our daily social and political life is more and more impacted by social
networks. The functioning of our living bodies is deeply dependent on
biological regulation networks such as neural, genetic, and protein networks.
And the physical world in which we evolve is also structured by systems of
interacting particles. Interaction networks can be seen in all spheres of
existence that concern us, and yet, our understanding of interaction networks
remains severely limited by our present lack of both theoretical and applied
insight into their clockworks. In the past, efforts at understanding
interaction networks have mostly been directed towards applications. This has
happened at the expense of developing understanding of the generic and
fundamental aspects of interaction networks. Intrinsic properties of
interaction networks (e.g., the ways in which they transmit information along
entities, their ability to produce this or that kind of global dynamical
behaviour depending on local interactions) are thus still not well understood.
Lack of fundamental knowledge tends to limit the innovating power of
applications. Without more theoretical fundamental knowledge, applications
cannot evolve deeply and become more impacting. Hence, it is necessary to
better apprehend and comprehend the intrinsic properties of interaction
networks, notably the relations between their architecture and their dynamics
and how they are affected by and set in time. In this chapter, we use the
elementary mathematical model of Boolean automata networks as a formal
archetype of interaction networks. We survey results concerning the role of
feedback cycles and the role of intersections between feedback cycles, in
shaping the asymptotic dynamical behaviours of interaction networks.
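For readers unfamiliar with the model, a tiny runnable example of a Boolean automata network under the parallel update mode; the length-3 negative cycle below is a textbook instance, not taken from the chapter.

```python
from itertools import product

def synchronous_step(state, fs):
    """One parallel (synchronous) update of a Boolean automata network."""
    return tuple(f(state) for f in fs)

# A negative feedback cycle of length 3: x0 <- NOT x2, x1 <- x0, x2 <- x1.
fs = (lambda s: 1 - s[2], lambda s: s[0], lambda s: s[1])
for s in product((0, 1), repeat=3):
    print(s, "->", synchronous_step(s, fs))
```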
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 13:04:53 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Demongeot",
"Jacques",
""
],
[
"Melliti",
"Tarek",
""
],
[
"Noual",
"Mathilde",
""
],
[
"Regnault",
"Damien",
""
],
[
"Sené",
"Sylvain",
""
]
] |
new_dataset
| 0.998912 |
2204.10747
|
Hideki Ochiai
|
Hideki Ochiai, Kosuke Ikeya, Patrick Mitran
|
A New Polar Code Design Based on Reciprocal Channel Approximation
|
submitted to IEEE journal
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper revisits polar code design for a binary-input additive white
Gaussian noise (BI-AWGN) channel when successive cancellation (SC) decoding is
applied at the receiver. We focus on the reciprocal channel approximation
(RCA), which is often adopted in the design of low-density parity-check (LDPC)
codes. In order to apply RCA to polar code design for various codeword lengths,
we derive rigorous closed-form approximations that are valid over a wide range
of SNRs on an AWGN channel, for both the mutual information of BPSK signaling
and the corresponding reciprocal channel mapping. As a result, the
computational complexity required for evaluating channel polarization is thus
equivalent to that based on the popular Gaussian approximation (GA) approach.
Simulation results show that the proposed polar code design based on RCA
outperforms those based on GA as well as the so-called improved GA (IGA)
approach, especially as the codeword length is increased. Furthermore, the
RCA-based design yields a better block error rate (BLER) estimate compared to
GA-based approaches.
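The RCA closed forms themselves are derived in the paper; for comparison, the widely used GA recursion for tracking the mean LLRs of the synthetic channels looks roughly as follows, using Chung et al.'s approximation of the phi function.

```python
import math

def phi(x):
    """Approximate Gaussian-approximation function (Chung et al.)."""
    if x < 10:
        return math.exp(-0.4527 * x**0.86 + 0.0218)
    return math.sqrt(math.pi / x) * math.exp(-x / 4) * (1 - 10 / (7 * x))

def phi_inv(y, lo=1e-6, hi=1e4):
    """Invert the (decreasing) phi function by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

def polarize(m0, n_levels):
    """Mean LLRs of the synthetic channels after n_levels of
    polarization, starting from m0 = 2/sigma^2 for a BI-AWGN channel."""
    means = [m0]
    for _ in range(n_levels):
        means = [m for old in means
                 for m in (phi_inv(1 - (1 - phi(old))**2), 2 * old)]
    return means
```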
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 15:10:37 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Ochiai",
"Hideki",
""
],
[
"Ikeya",
"Kosuke",
""
],
[
"Mitran",
"Patrick",
""
]
] |
new_dataset
| 0.965874 |
2204.10787
|
Hongbin Zhang
|
Hongbin Zhang, Yu Yang, Feng Wu, Qixin Zhang
|
MNL-Bandits under Inventory and Limited Switches Constraints
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optimizing the assortment of products to display to customers is a key to
increasing revenue for both offline and online retailers. To trade-off between
exploring customers' preference and exploiting customers' choices learned from
data, in this paper, by adopting the Multi-Nomial Logit (MNL) choice model to
capture customers' choices over products, we study the problem of optimizing
assortments over a planning horizon $T$ for maximizing the profit of the
retailer. To make the problem setting more practical, we consider both the
inventory constraint and the limited switches constraint, where the retailer
cannot use up the resource inventory before time $T$ and is forbidden to switch
the assortment shown to customers too many times. Such a setting suits the case
when an online retailer wants to dynamically optimize the assortment selection
for a population of customers. We develop an efficient UCB-like algorithm to
optimize the assortments while learning customers' choices from data. We prove
that our algorithm can achieve a sub-linear regret bound
$\tilde{O}\left(T^{1-\alpha/2}\right)$ if $O(T^\alpha)$ switches are allowed.
Extensive numerical
experiments show that our algorithm outperforms baselines and the gap between
our algorithm's performance and the theoretical upper bound is small.
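Two building blocks of such a UCB-like approach, sketched under the standard MNL model; the paper's handling of inventory and switching constraints is not reproduced, and the confidence bonus is a generic assumption.

```python
import numpy as np

def mnl_expected_revenue(assortment, v, r):
    """Expected revenue of an assortment S under MNL: item i in S is
    chosen with probability v[i] / (1 + sum_{j in S} v[j])."""
    denom = 1.0 + sum(v[i] for i in assortment)
    return sum(r[i] * v[i] / denom for i in assortment)

def optimistic_preferences(n_offered, n_chosen, t):
    """Generic UCB-style optimistic estimates of MNL preference weights."""
    n = np.maximum(n_offered, 1)
    return n_chosen / n + np.sqrt(np.log(t + 1) / n)
```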
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 16:02:27 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Zhang",
"Hongbin",
""
],
[
"Yang",
"Yu",
""
],
[
"Wu",
"Feng",
""
],
[
"Zhang",
"Qixin",
""
]
] |
new_dataset
| 0.987716 |
2204.10803
|
Saket Chaturvedi
|
Saket S. Chaturvedi, Lan Zhang, Xiaoyong Yuan
|
Pay "Attention" to Adverse Weather: Weather-aware Attention-based Object
Detection
|
This paper is accepted at IEEE International Conference on Pattern
Recognition (ICPR), 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the recent advances of deep neural networks, object detection for
adverse weather remains challenging due to the poor perception of some sensors
in adverse weather. Instead of relying on a single sensor, multimodal fusion
has been a promising approach to provide redundant detection information
based on multiple sensors. However, most existing multimodal fusion approaches
are ineffective in adjusting the focus of different sensors under varying
detection environments in dynamic adverse weather conditions. Moreover, it is
critical to simultaneously observe local and global information under complex
weather conditions, which has been neglected in most early or late-stage
multimodal fusion works. In view of these, this paper proposes a Global-Local
Attention (GLA) framework to adaptively fuse the multi-modality sensing
streams, i.e., camera, gated camera, and lidar data, at two fusion stages.
Specifically, GLA integrates an early-stage fusion via a local attention
network and a late-stage fusion via a global attention network to deal with
both local and global information, which automatically allocates higher weights
to the modality with better detection features at the late-stage fusion to cope
with the specific weather condition adaptively. Experimental results
demonstrate the superior performance of the proposed GLA compared with
state-of-the-art fusion approaches under various adverse weather conditions,
such as light fog, dense fog, and snow.
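A toy version of the late-stage gating idea: score each modality's global feature and mix with softmax weights so the better-sensing modality dominates; the real GLA network is more elaborate, so treat this purely as an illustration.

```python
import torch
import torch.nn as nn

class LateFusionGate(nn.Module):
    """Score each modality's global feature and fuse them with softmax
    weights (toy sketch of adaptive late-stage fusion)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                 # feats: (B, modalities, dim)
        weights = torch.softmax(self.score(feats), dim=1)
        return (weights * feats).sum(dim=1)   # fused feature: (B, dim)
```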
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 16:32:34 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Chaturvedi",
"Saket S.",
""
],
[
"Zhang",
"Lan",
""
],
[
"Yuan",
"Xiaoyong",
""
]
] |
new_dataset
| 0.990154 |
2204.10825
|
Buru Chang
|
Seungju Han, Beomsu Kim, Jin Yong Yoo, Seokjun Seo, Sangbum Kim,
Enkhbayar Erdenee, Buru Chang
|
Meet Your Favorite Character: Open-domain Chatbot Mimicking Fictional
Characters with only a Few Utterances
|
NAACL2022 (Short)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider mimicking fictional characters as a promising
direction for building engaging conversation models. To this end, we present a
new practical task where only a few utterances of each fictional character are
available to generate responses mimicking them. Furthermore, we propose a new
method named Pseudo Dialog Prompting (PDP) that generates responses by
leveraging the power of large-scale language models with prompts containing the
target character's utterances. To better reflect the style of the character,
PDP builds the prompts in the form of dialog that includes the character's
utterances as dialog history. Since only utterances of the characters are
available in the proposed task, PDP matches each utterance with an appropriate
pseudo-context from a predefined set of context candidates using a retrieval
model. Through human and automatic evaluation, we show that PDP generates
responses that better reflect the style of fictional characters than baseline
methods.
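A schematic of the prompt-assembly step; the exact turn format and retrieval model used in PDP are not specified in the abstract, so both are placeholders here.

```python
def build_pdp_prompt(character_utterances, context_pool, retrieve, message):
    """Pair each character utterance with a retrieved pseudo-context to
    form a dialog-shaped prompt, then append the live user message."""
    turns = []
    for utt in character_utterances:
        pseudo_context = retrieve(utt, context_pool)  # best-matching context
        turns.append(f"User: {pseudo_context}\nCharacter: {utt}")
    turns.append(f"User: {message}\nCharacter:")
    return "\n".join(turns)
```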
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 17:11:17 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Han",
"Seungju",
""
],
[
"Kim",
"Beomsu",
""
],
[
"Yoo",
"Jin Yong",
""
],
[
"Seo",
"Seokjun",
""
],
[
"Kim",
"Sangbum",
""
],
[
"Erdenee",
"Enkhbayar",
""
],
[
"Chang",
"Buru",
""
]
] |
new_dataset
| 0.994917 |
2204.10850
|
Kyle Olszewski
|
Verica Lazova, Vladimir Guzov, Kyle Olszewski, Sergey Tulyakov, Gerard
Pons-Moll
|
Control-NeRF: Editable Feature Volumes for Scene Rendering and
Manipulation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel method for performing flexible, 3D-aware image content
manipulation while enabling high-quality novel view synthesis. While NeRF-based
approaches are effective for novel view synthesis, such models memorize the
radiance for every point in a scene within a neural network. Since these models
are scene-specific and lack a 3D scene representation, classical editing, such
as shape manipulation or scene combination, is not possible. Hence, editing and
combining NeRF-based scenes has not been demonstrated. With the aim of
obtaining interpretable and controllable scene representations, our model
couples learnt scene-specific feature volumes with a scene-agnostic neural
rendering network. With this hybrid representation, we decouple neural
rendering from scene-specific geometry and appearance. We can generalize to
novel scenes by optimizing only the scene-specific 3D feature representation,
while keeping the parameters of the rendering network fixed. The rendering
function learnt during the initial training stage can thus be easily applied to
new scenes, making our approach more flexible. More importantly, since the
feature volumes are independent of the rendering model, we can manipulate and
combine scenes by editing their corresponding feature volumes. The edited
volume can then be plugged into the rendering model to synthesize high-quality
novel views. We demonstrate various scene manipulations, including mixing
scenes, deforming objects and inserting objects into scenes, while still
producing photo-realistic results.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 17:57:00 GMT"
}
] | 2022-04-25T00:00:00 |
[
[
"Lazova",
"Verica",
""
],
[
"Guzov",
"Vladimir",
""
],
[
"Olszewski",
"Kyle",
""
],
[
"Tulyakov",
"Sergey",
""
],
[
"Pons-Moll",
"Gerard",
""
]
] |
new_dataset
| 0.997166 |
1809.07124
|
Cinjon Resnick
|
Cinjon Resnick, Wes Eldridge, David Ha, Denny Britz, Jakob Foerster,
Julian Togelius, Kyunghyun Cho, Joan Bruna
|
Pommerman: A Multi-Agent Playground
|
Oral at the AIIDE Multi-Agent Workshop;
0xc8Ac61A4025B35e425b829fCFCab37f038993963
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Pommerman, a multi-agent environment based on the classic console
game Bomberman. Pommerman consists of a set of scenarios, each having at least
four players and containing both cooperative and competitive aspects. We
believe that success in Pommerman will require a diverse set of tools and
methods, including planning, opponent/teammate modeling, game theory, and
communication, and consequently can serve well as a multi-agent benchmark. To
date, we have already hosted one competition, and our next one will be featured
in the NIPS 2018 competition track.
|
[
{
"version": "v1",
"created": "Wed, 19 Sep 2018 11:27:25 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 13:52:02 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Resnick",
"Cinjon",
""
],
[
"Eldridge",
"Wes",
""
],
[
"Ha",
"David",
""
],
[
"Britz",
"Denny",
""
],
[
"Foerster",
"Jakob",
""
],
[
"Togelius",
"Julian",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Bruna",
"Joan",
""
]
] |
new_dataset
| 0.999576 |
2010.07061
|
Qiuqiang Kong
|
Qiuqiang Kong, Bochen Li, Jitong Chen, Yuxuan Wang
|
GiantMIDI-Piano: A large-scale MIDI dataset for classical piano music
|
11 pages, 13 figures
| null | null | null |
cs.IR cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Symbolic music datasets are important for music information retrieval and
musical analysis. However, there is a lack of large-scale symbolic datasets for
classical piano music. In this article, we create the GiantMIDI-Piano (GP)
dataset containing 38,700,838 transcribed notes and 10,855 unique solo piano
works composed by 2,786 composers. We extract the names of music works and the
names of composers from the International Music Score Library Project (IMSLP).
We search and download their corresponding audio recordings from the internet.
We further create a curated subset containing 7,236 works composed by 1,787
composers by requiring the titles of downloaded audio recordings to contain
the surnames of composers. We apply a convolutional neural network to detect
solo piano works. Then, we transcribe those solo piano recordings into Musical
Instrument Digital Interface (MIDI) files using a high-resolution piano
transcription system. Each transcribed MIDI file contains the onset, offset,
pitch, and velocity attributes of piano notes and pedals. GiantMIDI-Piano
includes 90% live-performance MIDI files and 10% sequence-input MIDI files. We
analyse the statistics of GiantMIDI-Piano and show pitch class, interval,
trichord, and tetrachord frequencies of six composers from different eras to
demonstrate that GiantMIDI-Piano can be used for musical analysis. We evaluate the
quality of GiantMIDI-Piano in terms of solo piano detection F1 scores, metadata
accuracy, and transcription error rates. We release the source code for
acquiring the GiantMIDI-Piano dataset at
https://github.com/bytedance/GiantMIDI-Piano
|
[
{
"version": "v1",
"created": "Sun, 11 Oct 2020 01:23:43 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 02:46:33 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Apr 2022 13:29:22 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Kong",
"Qiuqiang",
""
],
[
"Li",
"Bochen",
""
],
[
"Chen",
"Jitong",
""
],
[
"Wang",
"Yuxuan",
""
]
] |
new_dataset
| 0.999877 |
2012.13014
|
Nelson Alves
|
Nelson Alves, Marco Ruiz, Marco Reis, Tiago Cajahyba, Davi Oliveira,
Ana Barreto, Eduardo F. Simas Filho, Wagner L. A. de Oliveira, Leizer
Schnitman, Roberto L. S. Monteiro
|
Low-latency Perception in Off-Road Dynamical Low Visibility Environments
| null | null |
10.1016/j.eswa.2022.117010
| null |
cs.CV cs.LG cs.RO eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work proposes a perception system for autonomous vehicles and advanced
driver assistance specialized on unpaved roads and off-road environments. In
this research, the authors have investigated the behavior of Deep Learning
algorithms applied to semantic segmentation of off-road environments and
unpaved roads under different adverse visibility conditions. Almost 12,000
images of different unpaved and off-road environments were collected and
labeled. An off-road proving ground was assembled exclusively for this
development. The proposed dataset also contains many adverse situations such as
rain, dust, and low light. To develop the system, we have used convolutional
neural networks trained to segment obstacles and areas where the car can pass
through. We developed a Configurable Modular Segmentation Network (CMSNet)
framework to help create different architectures arrangements and test them on
the proposed dataset. Besides, we also have ported some CMSNet configurations
by removing and fusing many layers using TensorRT, C++, and CUDA to achieve
embedded real-time inference and allow field tests. The main contributions of
this work are: a new dataset for unpaved roads and off-roads environments
containing many adverse conditions such as night, rain, and dust; a CMSNet
framework; an investigation regarding the feasibility of applying deep learning
to detect region where the vehicle can pass through when there is no clear
boundary of the track; a study of how our proposed segmentation algorithms
behave in different severity levels of visibility impairment; and an evaluation
of field tests carried out with semantic segmentation architectures ported for
real-time inference.
|
[
{
"version": "v1",
"created": "Wed, 23 Dec 2020 22:54:43 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Alves",
"Nelson",
""
],
[
"Ruiz",
"Marco",
""
],
[
"Reis",
"Marco",
""
],
[
"Cajahyba",
"Tiago",
""
],
[
"Oliveira",
"Davi",
""
],
[
"Barreto",
"Ana",
""
],
[
"Filho",
"Eduardo F. Simas",
""
],
[
"de Oliveira",
"Wagner L. A.",
""
],
[
"Schnitman",
"Leizer",
""
],
[
"Monteiro",
"Roberto L. S.",
""
]
] |
new_dataset
| 0.999535 |
2103.12242
|
Ali Ayub
|
Ali Ayub, Alan R. Wagner
|
F-SIOL-310: A Robotic Dataset and Benchmark for Few-Shot Incremental
Object Learning
|
Fixed the link to dataset
|
IEEE International Conference on Robotics and Automation (ICRA)
2021
|
10.1109/ICRA48506.2021.9561509
| null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning has achieved remarkable success in object recognition tasks
through the availability of large scale datasets like ImageNet. However, deep
learning systems suffer from catastrophic forgetting when learning
incrementally without replaying old data. For real-world applications, robots
also need to incrementally learn new objects. Further, since robots have
limited human assistance available, they must learn from only a few examples.
However, very few object recognition datasets and benchmarks exist to test
incremental learning capability for robotic vision. Further, there is no
dataset or benchmark specifically designed for incremental object learning from
a few examples. To fill this gap, we present a new dataset termed F-SIOL-310
(Few-Shot Incremental Object Learning) which is specifically captured for
testing few-shot incremental object learning capability for robotic vision. We
also provide benchmarks and evaluations of 8 incremental learning algorithms on
F-SIOL-310 for future comparisons. Our results demonstrate that the few-shot
incremental object learning problem for robotic vision is far from being
solved.
|
[
{
"version": "v1",
"created": "Tue, 23 Mar 2021 00:25:50 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Nov 2021 05:55:53 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Apr 2022 20:54:22 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Ayub",
"Ali",
""
],
[
"Wagner",
"Alan R.",
""
]
] |
new_dataset
| 0.99956 |
2104.06977
|
Zixiang Zhao
|
Zixiang Zhao, Jiangshe Zhang, Shuang Xu, Zudi Lin, Hanspeter Pfister
|
Discrete Cosine Transform Network for Guided Depth Map Super-Resolution
|
Accepted by CVPR 2022 (Oral)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Guided depth super-resolution (GDSR) is an essential topic in multi-modal
image processing, which reconstructs high-resolution (HR) depth maps from
low-resolution ones collected under suboptimal conditions, with the help of HR
RGB images of the same scene. To address the challenges of interpreting the
working mechanism, extracting cross-modal features, and avoiding RGB texture
over-transfer, we propose a novel Discrete Cosine Transform Network (DCTNet)
to alleviate the problems from three aspects. First, the Discrete Cosine
Transform (DCT) module reconstructs the multi-channel HR depth features by
using DCT to solve the channel-wise optimization problem derived from the image
domain. Second, we introduce a semi-coupled feature extraction module that uses
shared convolutional kernels to extract common information and private kernels
to extract modality-specific information. Third, we employ an edge attention
mechanism to highlight the contours informative for guided upsampling.
Extensive quantitative and qualitative evaluations demonstrate the
effectiveness of our DCTNet, which outperforms previous state-of-the-art
methods with a relatively small number of parameters. The code is available at
\url{https://github.com/Zhaozixiang1228/GDSR-DCTNet}.
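To make the frequency-domain idea concrete, the following minimal Python
sketch round-trips a depth map through a 2D DCT; the low-pass mask is an
illustrative stand-in for the learned channel-wise solution, not the actual
DCTNet module.

```python
# Hand-written sketch of a DCT round trip; the low-pass mask below is an
# illustrative stand-in for a learned frequency-domain operation, not the
# paper's DCT module.
import numpy as np
from scipy.fft import dctn, idctn

def dct_roundtrip(depth: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    coeffs = dctn(depth, norm="ortho")             # forward 2D DCT
    h, w = depth.shape
    mask = np.zeros_like(coeffs)
    mask[: int(h * keep_ratio), : int(w * keep_ratio)] = 1.0  # keep low freqs
    return idctn(coeffs * mask, norm="ortho")      # inverse 2D DCT

depth = np.random.rand(64, 64).astype(np.float32)  # toy depth map
print(np.abs(depth - dct_roundtrip(depth)).mean()) # reconstruction error
```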
|
[
{
"version": "v1",
"created": "Wed, 14 Apr 2021 17:01:03 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Nov 2021 12:28:29 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Apr 2022 05:51:43 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Zhao",
"Zixiang",
""
],
[
"Zhang",
"Jiangshe",
""
],
[
"Xu",
"Shuang",
""
],
[
"Lin",
"Zudi",
""
],
[
"Pfister",
"Hanspeter",
""
]
] |
new_dataset
| 0.969005 |
2201.05386
|
Ali Samadzadeh
|
Ali Samadzadeh, Ahmad Nickabadi
|
SRVIO: Super Robust Visual Inertial Odometry for dynamic environments
and challenging Loop-closure conditions
|
11 pages, 7 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
There has been extensive research on visual localization and odometry for
autonomous robots and virtual reality during the past decades. Traditionally,
this problem has been solved with the help of expensive sensors, such as
lidars. Nowadays, the focus of the leading research in this field is on robust
localization using more economical sensors, such as cameras and IMUs.
Consequently, geometric visual localization methods have become more accurate
over time. However, these methods still suffer from significant loss and
divergence in challenging environments, such as a room full of moving people.
Researchers have started using deep neural networks (DNNs) to mitigate this
problem. The main idea behind using DNNs is to better understand challenging
aspects of the data and overcome complex conditions, such as a dynamic object
moving in front of the camera and covering its full view, extreme lighting
conditions, and high camera speed. Prior end-to-end DNN methods
have overcome some of these challenges. However, no general and robust
framework is available to overcome all of these challenges together. In this
paper, we combine geometric and DNN-based methods to obtain the generality and
speed of geometric SLAM frameworks, overcome most of these challenging
conditions with the help of DNNs, and deliver the most robust framework so
far. To do so,
we have designed a framework based on Vins-Mono, and show that it is able to
achieve state-of-the-art results on TUM-Dynamic, TUM-VI, ADVIO, and EuRoC
datasets compared to geometric and end-to-end DNN based SLAMs. Our proposed
framework could also achieve outstanding results on extreme simulated cases
resembling the aforementioned challenges.
|
[
{
"version": "v1",
"created": "Fri, 14 Jan 2022 10:52:04 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2022 23:49:18 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Samadzadeh",
"Ali",
""
],
[
"Nickabadi",
"Ahmad",
""
]
] |
new_dataset
| 0.968132 |
2201.05590
|
Milan Straka
|
Jakub N\'aplava, Milan Straka, Jana Strakov\'a, Alexandr Rosen
|
Czech Grammar Error Correction with a Large and Diverse Corpus
|
Published in TACL, MIT Press
| null |
10.1162/tacl_a_00470
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a large and diverse Czech corpus annotated for grammatical error
correction (GEC) with the aim to contribute to the still scarce data resources
in this domain for languages other than English. The Grammar Error Correction
Corpus for Czech (GECCC) offers a variety of four domains, covering error
distributions ranging from high error density essays written by non-native
speakers, to website texts, where errors are expected to be much less common.
We compare several Czech GEC systems, including several Transformer-based ones,
setting a strong baseline for future research. Finally, we meta-evaluate common
GEC metrics against human judgements on our data. We make the new Czech GEC
corpus publicly available under the CC BY-SA 4.0 license at
http://hdl.handle.net/11234/1-4639 .
|
[
{
"version": "v1",
"created": "Fri, 14 Jan 2022 18:20:47 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 14:36:16 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Náplava",
"Jakub",
""
],
[
"Straka",
"Milan",
""
],
[
"Straková",
"Jana",
""
],
[
"Rosen",
"Alexandr",
""
]
] |
new_dataset
| 0.99758 |
2203.06296
|
Yi Geng
|
Yi Geng, Sebastian Euler
|
Beyond Conic Section Mainlobe Coverage for Unmanned Aerial Vehicle
|
6 pages, submitted to Globecom 2022
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The cellular-connected drone market is one of the most promising markets of
5G. However, it still accounts for a small share of the overall
telecommunication market, and it is unlikely to increase significantly in the
foreseeable future. Deploying a dedicated network with up-tilted antennas can
be an option, but the monetary cost of a dedicated network directly impacts
its acceptance by mobile operators. Therefore, cost-efficient aerial coverage
solutions must be developed. Reusing the network deployed for terrestrial
coverage is a cost-efficient approach for aerial coverage, but several
critical challenges caused by antenna sidelobes must be solved. In this paper,
a novel method for aerial coverage is proposed. By tweaking the
measurement-report handling mechanism, signals from sidelobes reported by
drones above a predefined height can be identified and ignored. Simulation
results show that the conventional cellular network with
the proposed method can provide wide and continuous aerial coverage with
satisfactory quality.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 00:53:05 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 02:35:33 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Geng",
"Yi",
""
],
[
"Euler",
"Sebastian",
""
]
] |
new_dataset
| 0.996885 |
2203.08896
|
Roger Mar\'i
|
Roger Mar\'i, Gabriele Facciolo, Thibaud Ehret
|
Sat-NeRF: Learning Multi-View Satellite Photogrammetry With Transient
Objects and Shadow Modeling Using RPC Cameras
|
Accepted at CVPR EarthVision Workshop 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce the Satellite Neural Radiance Field (Sat-NeRF), a new end-to-end
model for learning multi-view satellite photogrammetry in the wild. Sat-NeRF
combines some of the latest trends in neural rendering with native satellite
camera models, represented by rational polynomial coefficient (RPC) functions.
The proposed method renders new views and infers surface models of similar
quality to those obtained with traditional state-of-the-art stereo pipelines.
Multi-date images exhibit significant changes in appearance, mainly due to
varying shadows and transient objects (cars, vegetation). Robustness to these
challenges is achieved by a shadow-aware irradiance model and uncertainty
weighting to deal with transient phenomena that cannot be explained by the
position of the sun. We evaluate Sat-NeRF using WorldView-3 images from
different locations and stress the advantages of applying a bundle adjustment
to the satellite camera models prior to training. This boosts the network
performance and can optionally be used to extract additional cues for depth
supervision.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 19:18:46 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 13:54:10 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Marí",
"Roger",
""
],
[
"Facciolo",
"Gabriele",
""
],
[
"Ehret",
"Thibaud",
""
]
] |
new_dataset
| 0.99831 |
2203.12122
|
Juncheng Li
|
Juncheng B Li, Shuhui Qu, Xinjian Li, Po-Yao Huang, Florian Metze
|
On Adversarial Robustness of Large-scale Audio Visual Learning
| null |
2022 International Conference on Acoustics, Speech, and Signal
Processing (ICASSP 2022)
| null | null |
cs.SD cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As audio-visual systems are being deployed for safety-critical tasks such as
surveillance and malicious content filtering, their robustness remains an
under-studied area. Existing published work on robustness either does not
scale to large-scale datasets or does not deal with multiple modalities. This work
aims to study several key questions related to multi-modal learning through the
lens of robustness: 1) Are multi-modal models necessarily more robust than
uni-modal models? 2) How to efficiently measure the robustness of multi-modal
learning? 3) How to fuse different modalities to achieve a more robust
multi-modal model? To understand the robustness of the multi-modal model in a
large-scale setting, we propose a density-based metric, and a convexity metric
to efficiently measure the distribution of each modality in high-dimensional
latent space. Our work provides a theoretical intuition together with empirical
evidence showing how multi-modal fusion affects adversarial robustness through
these metrics. We further devise a mix-up strategy based on our metrics to
improve the robustness of the trained model. Our experiments on AudioSet and
Kinetics-Sounds verify our hypothesis that multi-modal models are not
necessarily more robust than their uni-modal counterparts in the face of
adversarial examples. We also observe that our mix-up-trained method can achieve
as much protection as traditional adversarial training, offering a
computationally cheap alternative. Implementation:
https://github.com/lijuncheng16/AudioSetDoneRight
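The metric-guided mix-up itself is specific to the paper, but the underlying
mix-up operation on paired audio-visual batches is standard; a minimal PyTorch
sketch follows (pair selection by the density/convexity metrics is not
reproduced).

```python
# Standard mixup on a multi-modal (audio, video) batch. The paper's variant
# additionally selects mixing pairs using its density/convexity metrics,
# which this sketch does not reproduce.
import torch

def mixup_batch(audio, video, labels, alpha: float = 0.4):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(audio.size(0))
    mixed_audio = lam * audio + (1 - lam) * audio[perm]
    mixed_video = lam * video + (1 - lam) * video[perm]
    # Train with: lam * loss(pred, labels) + (1 - lam) * loss(pred, labels[perm])
    return mixed_audio, mixed_video, labels, labels[perm], lam

audio = torch.randn(8, 1, 128, 100)    # toy log-mel spectrograms
video = torch.randn(8, 3, 16, 64, 64)  # toy video clips
labels = torch.randint(0, 10, (8,))
ma, mv, y_a, y_b, lam = mixup_batch(audio, video, labels)
```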
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 01:31:17 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 06:35:14 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Li",
"Juncheng B",
""
],
[
"Qu",
"Shuhui",
""
],
[
"Li",
"Xinjian",
""
],
[
"Huang",
"Po-Yao",
""
],
[
"Metze",
"Florian",
""
]
] |
new_dataset
| 0.993079 |
2204.02057
|
Dima Kagan
|
Michael Fire, Rami Puzis, Dima Kagan and Yuval Elovici
|
Large-Scale Shill Bidder Detection in E-commerce
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
User feedback is one of the most effective methods to build and maintain
trust in electronic commerce platforms. Unfortunately, dishonest sellers often
bend over backward to manipulate users' feedback or place phony bids in order
to increase their own sales and harm competitors. The black market of user
feedback, supported by a plethora of shill bidders, prospers on top of
legitimate electronic commerce. In this paper, we investigate the ecosystem of
shill bidders based on large-scale data by analyzing hundreds of millions of
users who performed billions of transactions, and we propose a
machine-learning-based method for identifying communities of users that
methodically provide dishonest feedback. Our results show that (1) shill
bidders can be identified with high precision based on their transaction and
feedback statistics; and (2) in contrast to legitimate buyers and sellers,
shill bidders form cliques to support each other.
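As a rough sketch of the feature-based detection described above (the feature
names and model choice are illustrative assumptions, not the authors'
pipeline), one could classify per-user transaction and feedback statistics
like so:

```python
# Toy sketch of feature-based shill-bidder classification; features, labels,
# and model choice are illustrative assumptions, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 1000
X = np.column_stack([
    rng.poisson(20, n_users),       # transactions per month (toy)
    rng.uniform(0, 1, n_users),     # share of feedback given to one seller
    rng.uniform(0, 1, n_users),     # share of low-value transactions
])
y = rng.integers(0, 2, n_users)     # 1 = labelled shill bidder (toy labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="precision").mean())
```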
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 08:45:56 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 09:52:22 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Fire",
"Michael",
""
],
[
"Puzis",
"Rami",
""
],
[
"Kagan",
"Dima",
""
],
[
"Elovici",
"Yuval",
""
]
] |
new_dataset
| 0.995726 |
2204.07887
|
Sven Richter
|
Sven Richter, Frank Bieder, Sascha Wirges and Christoph Stiller
|
Mapping LiDAR and Camera Measurements in a Dual Top-View Grid
Representation Tailored for Automated Vehicles
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a generic evidential grid mapping pipeline designed for imaging
sensors such as LiDARs and cameras. Our grid-based evidential model contains
semantic estimates for cell occupancy and ground separately. We specify the
estimation steps for input data represented by point sets, but mainly focus on
input data represented by images such as disparity maps or LiDAR range images.
Instead of relying on an external ground segmentation only, we deduce occupancy
evidence by analyzing the surface orientation around measurements. We conduct
experiments and evaluate the presented method using LiDAR and stereo camera
data recorded in real traffic scenarios. Our method estimates cell occupancy
robustly and with a high level of detail while maximizing efficiency and
minimizing the dependency on external processing modules.
|
[
{
"version": "v1",
"created": "Sat, 16 Apr 2022 23:51:20 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 12:39:43 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Richter",
"Sven",
""
],
[
"Bieder",
"Frank",
""
],
[
"Wirges",
"Sascha",
""
],
[
"Stiller",
"Christoph",
""
]
] |
new_dataset
| 0.990294 |
2204.08078
|
Benjamin Horne
|
Benjamin D. Horne
|
A Psycho-linguistic Analysis of BitChute
|
This paper is a Metadata Supplement to The MeLa BitChute Dataset
| null | null | null |
cs.CY cs.CL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to better support researchers, journalists, and practitioners in
their use of the MeLa-BitChute dataset for exploration and investigative
reporting, we provide new psycho-linguistic metadata for the videos, comments,
and channels in the dataset using LIWC22. This paper describes this metadata
and the methods for filtering the data with it. In addition, we provide
basic analysis and comparison of the language on BitChute to other social media
platforms. The MeLa-BitChute dataset and LIWC metadata described in this paper
can be found at:
https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/KRD1VS.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2022 20:10:02 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2022 21:14:07 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Horne",
"Benjamin D.",
""
]
] |
new_dataset
| 0.999681 |
2204.08970
|
Zhihao Li
|
Zhihao Li, Si Yi, Zhan Ma
|
Rendering Nighttime Image Via Cascaded Color and Brightness Compensation
|
Accepted by NTIRE 2022 (CVPR Workshop)
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Image signal processing (ISP) is crucial for camera imaging, and neural
network (NN) solutions are extensively deployed for daytime scenes. The lack
of sufficient nighttime image datasets and of insight into nighttime illumination
characteristics poses a great challenge for high-quality rendering using
existing NN ISPs. To tackle it, we first built a high-resolution nighttime
RAW-RGB (NR2R) dataset with white balance and tone mapping annotated by expert
professionals. Meanwhile, to best capture the characteristics of nighttime
illumination light sources, we develop the CBUnet, a two-stage NN ISP to
cascade the compensation of color and brightness attributes. Experiments show
that our method achieves better visual quality than the traditional ISP
pipeline, and is ranked second in the NTIRE 2022 Night Photography Rendering
Challenge for two tracks by respective People's and Professional Photographer's
choices. The code and relevant materials are available on our website:
https://njuvision.github.io/CBUnet.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 16:15:31 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 17:23:11 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Li",
"Zhihao",
""
],
[
"Yi",
"Si",
""
],
[
"Ma",
"Zhan",
""
]
] |
new_dataset
| 0.999714 |
2204.09711
|
Iyanuoluwa Shode
|
Iyanuoluwa Shode, David Ifeoluwa Adelani, and Anna Feldman
|
yosm: A new yoruba sentiment corpus for movie reviews
|
Accepted to AfricaNLP Workshop @ICLR 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
A movie that is thoroughly enjoyed and recommended by one individual might be
hated by another. One characteristic of humans is the ability to have
feelings, which can be positive or negative. To automatically classify and
study human feelings, sentiment analysis and opinion mining, an aspect of
natural language processing, were designed to understand human feelings
regarding several issues that could affect a product, social media platforms,
government, societal discussions, or even movies. Several works on sentiment
analysis have been done on high-resource languages, while low-resource
languages like Yoruba have been sidelined. Due to the scarcity of datasets and
of linguistic architectures suited to low-resource languages, African
languages have been ignored and not fully explored. For this reason, we focus
on Yoruba to explore sentiment analysis on reviews of Nigerian movies. The
data comprise 1500 movie reviews sourced from IMDB, Rotten Tomatoes,
Letterboxd, Cinemapointer, and Nollyrated. We develop sentiment classification
models using state-of-the-art pre-trained language models such as mBERT and
AfriBERTa to classify the movie reviews.
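A minimal fine-tuning sketch with Hugging Face Transformers is shown below;
the placeholder review strings and the omitted training loop are assumptions,
not the authors' released setup.

```python
# Minimal mBERT sentiment-classification sketch; review strings are
# placeholders, and the full dataset pipeline/training loop is omitted.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

reviews = ["<positive Yoruba review>", "<negative Yoruba review>"]
labels = torch.tensor([1, 0])

batch = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()                 # an optimiser step would follow
print(outputs.logits.argmax(dim=-1))    # predicted sentiment per review
```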
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 18:00:37 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Shode",
"Iyanuoluwa",
""
],
[
"Adelani",
"David Ifeoluwa",
""
],
[
"Feldman",
"Anna",
""
]
] |
new_dataset
| 0.999697 |
2204.09737
|
Aman Priyanshu
|
Aman Priyanshu, Sarthak Shastri, Sai Sravan Medicherla
|
ARLIF-IDS -- Attention augmented Real-Time Isolation Forest Intrusion
Detection System
|
Paper accepted at the Poster session at the 43rd IEEE Symposium on
Security and Privacy
| null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed Denial of Service (DDoS) attack is a malicious attempt to disrupt
the normal traffic of a targeted server, service or network by overwhelming the
target or its surrounding infrastructure with a flood of Internet traffic.
Emerging technologies such as the Internet of Things and Software Defined
Networking leverage lightweight strategies for the early detection of DDoS
attacks. Previous literature demonstrates the utility of a small number of
significant features for intrusion detection. Thus, it is essential to have a
fast and effective security identification model based on a low number of
features.
In this work, a novel Attention-based Isolation Forest Intrusion Detection
System is proposed. The model considerably reduces training time and memory
consumption of the generated model. For performance assessment, the model is
assessed over two benchmark datasets, the NSL-KDD dataset & the KDDCUP'99
dataset. Experimental results demonstrate that the proposed attention augmented
model achieves a significant reduction in execution time, by 91.78%, and an
average detection F1-Score of 0.93 on the NSL-KDD and KDDCUP'99 datasets. The
results of performance evaluation show that the proposed methodology has low
complexity and requires less processing time and computational resources,
outperforming other current IDS based on machine learning algorithms.
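The isolation-forest core of such a system is easy to sketch with
scikit-learn; the attention-based feature weighting from the paper is not
reproduced, and the features below are random stand-ins for NSL-KDD-style
inputs.

```python
# Baseline isolation-forest anomaly detection on toy stand-ins for
# NSL-KDD-style features; the paper's attention augmentation is not shown.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_normal = rng.normal(0, 1, size=(950, 10))      # benign traffic features
X_attack = rng.normal(4, 1, size=(50, 10))       # anomalous traffic features
X = np.vstack([X_normal, X_attack])

iforest = IsolationForest(n_estimators=100, contamination=0.05, random_state=0)
pred = iforest.fit_predict(X)                    # -1 = anomaly, 1 = normal
print((pred == -1).sum(), "flagged as intrusions")
```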
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 18:40:23 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Priyanshu",
"Aman",
""
],
[
"Shastri",
"Sarthak",
""
],
[
"Medicherla",
"Sai Sravan",
""
]
] |
new_dataset
| 0.999533 |
2204.09753
|
Tony Davis
|
Anthony Davis, Srijita Mukherjee, Paul S. Wills, Bing Ouyang
|
Path Planning Algorithms for Robotic Aquaculture Monitoring
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Aerial drones have great potential to monitor large areas quickly and
efficiently. Aquaculture is an industry that requires continuous water quality
data to successfully grow and harvest fish. The Hybrid Aerial Underwater
Robotic System (HAUCS) is designed to collect water quality data of aquaculture
ponds to reduce labor costs for farmers. The routing of drones to cover each
fish pond on an aquaculture farm can be reduced to the Vehicle Routing Problem.
A dataset is created to simulate the distribution of ponds on a farm and is
used to assess the HAUCS Path Planning Algorithm (HPP). Its performance is
compared with the Google Linear Optimization Package (GLOP) and a Graph
Attention Model (AM) for routing problems. GLOP is the most efficient solver
for 50 to 200 ponds at the expense of long run times, while HPP outperforms the
other methods in solution quality and run time for instances larger than 200
ponds.
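For intuition, a greedy nearest-neighbour route is a classic baseline for this
vehicle-routing formulation; the sketch below is such a baseline, not the HPP
algorithm itself, whose details are in the paper.

```python
# Greedy nearest-neighbour routing baseline over pond locations; a classic
# heuristic for comparison, not the paper's HPP algorithm.
import numpy as np

def nearest_neighbour_route(ponds: np.ndarray, depot: np.ndarray) -> list:
    """Visit every pond once, always flying to the closest unvisited pond."""
    unvisited = list(range(len(ponds)))
    route, pos = [], depot
    while unvisited:
        nxt = min(unvisited, key=lambda i: np.linalg.norm(ponds[i] - pos))
        route.append(nxt)
        pos = ponds[nxt]
        unvisited.remove(nxt)
    return route

ponds = np.random.default_rng(0).uniform(0, 1000, size=(50, 2))  # 50 ponds (m)
print(nearest_neighbour_route(ponds, depot=np.zeros(2))[:10])
```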
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 19:30:28 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Davis",
"Anthony",
""
],
[
"Mukherjee",
"Srijita",
""
],
[
"Wills",
"Paul S.",
""
],
[
"Ouyang",
"Bing",
""
]
] |
new_dataset
| 0.99924 |
2204.09774
|
Shi Chen
|
Shi Chen, Ming Jiang, Jinhui Yang and Qi Zhao
|
Attention in Reasoning: Dataset, Analysis, and Modeling
|
To be published in TPAMI. arXiv admin note: substantial text overlap
with arXiv:2007.14419
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
While attention has been an increasingly popular component in deep neural
networks to both interpret and boost the performance of models, little work has
examined how attention progresses to accomplish a task and whether it is
reasonable. In this work, we propose an Attention with Reasoning capability
(AiR) framework that uses attention to understand and improve the process
leading to task outcomes. We first define an evaluation metric based on a
sequence of atomic reasoning operations, enabling a quantitative measurement of
attention that considers the reasoning process. We then collect human
eye-tracking and answer correctness data, and analyze various machine and human
attention mechanisms on their reasoning capability and how they impact task
performance. To improve the attention and reasoning ability of visual question
answering models, we propose to supervise the learning of attention
progressively along the reasoning process and to differentiate the correct and
incorrect attention patterns. We demonstrate the effectiveness of the proposed
framework in analyzing and modeling attention with better reasoning capability
and task performance. The code and data are available at
https://github.com/szzexpoi/AiR
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 20:32:31 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Chen",
"Shi",
""
],
[
"Jiang",
"Ming",
""
],
[
"Yang",
"Jinhui",
""
],
[
"Zhao",
"Qi",
""
]
] |
new_dataset
| 0.999821 |
2204.09779
|
Sadbhawna Thakur
|
Abhisek Keshari, Komal, Sadbhawna, Badri Subudhi
|
Multi-Scale Features and Parallel Transformers Based Image Quality
Assessment
| null | null | null | null |
cs.CV cs.MM eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increase in multimedia content, the types of distortions associated
with multimedia are also increasing. This problem of image quality assessment
is covered well by the PIPAL dataset, but remains an open problem for
researchers. Recently proposed transformer networks have already been used in
the literature for image quality assessment. At the same time, we notice that
multi-scale feature extraction has proven to be a promising approach for image
quality assessment. However, the way transformer networks have been used for
image quality assessment so far lacks this property of multi-scale feature
extraction. We utilize this fact in our approach and propose a new
architecture that integrates these two promising image quality assessment
techniques. Our experimentation on various datasets,
including the PIPAL dataset, demonstrates that the proposed integration
technique outperforms existing algorithms. The source code of the proposed
algorithm is available online: https://github.com/KomalPal9610/IQA
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 20:38:23 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Keshari",
"Abhisek",
""
],
[
"Komal",
"",
""
],
[
"Sadbhawna",
"",
""
],
[
"Subudhi",
"Badri",
""
]
] |
new_dataset
| 0.998331 |
2204.09813
|
Victor Rios
|
Victor Rios and George Varghese
|
MashUp: Scaling TCAM-based IP Lookup to Larger Databases by Tiling Trees
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Ternary content addressable memories (TCAMs) are commonly used to implement
IP lookup, but suffer from high power and area costs. Thus the TCAM included
in modern chips is limited: it can support the moderately large datasets of
data centers and enterprises, but fails to scale to backbone WAN databases of
millions of prefixes. IPv6 deployment also makes it harder to deploy TCAMs
because of the larger prefixes used in the 128-bit address space. While the
combination of algorithmic techniques and TCAM has been proposed before for
reducing power consumption or update costs(e.g., CoolCAM [32] and TreeCAM
[28]), we focus on reducing TCAM bits using a scheme we call MashUp that can
easily be implemented in modern reconfigurable pipeline chips such as Tofino-3.
MashUp uses a new technique, tiling trees, which takes into account TCAM grain
(tile) sizes. When applied to a publicly available IPv6 dataset using Tofino-3
TCAM grain sizes (44 by 512), there was a 2X reduction in TCAM required.
Further, if we mix TCAM and SRAM using a new technique we call node
hybridization, MashUp decreases TCAM bits by 4.5X for IPv6, and by 7.5X for
IPv4, allowing wide area databases of 900,000 prefixes to be supported by
Tofino-3 and similar chips.
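For intuition, the tile-count arithmetic behind tiling can be sketched as
follows; the packing model is deliberately naive (one prefix table packed
row-by-row into 44-bit by 512-entry grains), whereas MashUp's tiling trees
pack far more efficiently.

```python
# Naive TCAM tile-count estimate for a prefix table, assuming a grain (tile)
# of 44 bits x 512 entries as quoted above for Tofino-3. Back-of-the-envelope
# model only; not MashUp's tiling-tree packing.
import math

def naive_tiles(num_prefixes: int, prefix_bits: int,
                grain_bits: int = 44, grain_entries: int = 512) -> int:
    width_in_tiles = math.ceil(prefix_bits / grain_bits)     # tiles per row
    depth_in_tiles = math.ceil(num_prefixes / grain_entries) # rows of tiles
    return width_in_tiles * depth_in_tiles

print(naive_tiles(900_000, 32))    # IPv4 backbone-scale table
print(naive_tiles(900_000, 128))   # IPv6 worst case
```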
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 23:49:15 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Rios",
"Victor",
""
],
[
"Varghese",
"George",
""
]
] |
new_dataset
| 0.999675 |
2204.09860
|
Zhiqiang Yuan
|
Zhiqiang Yuan, Wenkai Zhang, Changyuan Tian, Xuee Rong, Zhengyuan
Zhang, Hongqi Wang, Kun Fu, and Xian Sun
|
Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and
Local Information
| null |
in IEEE Transactions on Geoscience and Remote Sensing, vol. 60,
pp. 1-16, 2022, Art no. 5620616
|
10.1109/TGRS.2022.3163706
| null |
cs.CV cs.IR cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Cross-modal remote sensing text-image retrieval (RSCTIR) has recently become
an urgent research hotspot due to its ability of enabling fast and flexible
information extraction on remote sensing (RS) images. However, current RSCTIR
methods mainly focus on global features of RS images, which leads to the
neglect of local features that reflect target relationships and saliency. In
this article, we first propose a novel RSCTIR framework based on global and
local information (GaLR), and design a multi-level information dynamic fusion
(MIDF) module to efficaciously integrate features of different levels. MIDF
leverages local information to correct global information, utilizes global
information to supplement local information, and uses the dynamic addition of
the two to generate prominent visual representation. To alleviate the pressure
of the redundant targets on the graph convolution network (GCN) and to improve
the model's attention to salient instances when modeling local features, the
de-noised representation matrix and the enhanced adjacency matrix (DREA) are
devised to assist GCN in producing superior local representations. DREA not
only filters out redundant features with high similarity, but also obtains more
powerful local features by enhancing the features of prominent objects.
Finally, to make full use of the information in the similarity matrix during
inference, we come up with a plug-and-play multivariate rerank (MR) algorithm.
The algorithm utilizes the k nearest neighbors of the retrieval results to
perform a reverse search, and improves the performance by combining multiple
components of bidirectional retrieval. Extensive experiments on public datasets
strongly demonstrate the state-of-the-art performance of GaLR methods on the
RSCTIR task. The code of GaLR method, MR algorithm, and corresponding files
have been made available at https://github.com/xiaoyuan1996/GaLR .
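The reverse-search idea behind MR can be sketched as follows; the exact
combination of bidirectional components follows the paper, and this simplified
two-way score is only an assumption-laden illustration.

```python
# Simplified illustration of reverse-search reranking: a candidate image
# scores higher if the query text also ranks highly when searching back from
# that image. Not the paper's exact MR algorithm.
import numpy as np

def rerank(sim: np.ndarray, query: int, k: int = 5) -> np.ndarray:
    """sim[i, j] = similarity between text i and image j."""
    forward = sim[query]                          # text -> image scores
    top_k = np.argsort(-forward)[:k]              # initial retrieval results
    scores = forward.copy()
    for img in top_k:
        backward_rank = np.argsort(-sim[:, img]).tolist().index(query)
        scores[img] += 1.0 / (1 + backward_rank)  # reward reciprocal matches
    return np.argsort(-scores)

sim = np.random.default_rng(0).random((100, 100))  # toy similarity matrix
print(rerank(sim, query=3)[:5])
```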
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 03:18:09 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Yuan",
"Zhiqiang",
""
],
[
"Zhang",
"Wenkai",
""
],
[
"Tian",
"Changyuan",
""
],
[
"Rong",
"Xuee",
""
],
[
"Zhang",
"Zhengyuan",
""
],
[
"Wang",
"Hongqi",
""
],
[
"Fu",
"Kun",
""
],
[
"Sun",
"Xian",
""
]
] |
new_dataset
| 0.996053 |
2204.09864
|
Emanuel Onica
|
Emanuel Onica, Ciprian Amariei
|
Using SGX for Meta-Transactions Support in Ethereum DApps
|
Preprint of paper accepted at DAIS 2022 - 22nd IFIP International
Conference on Distributed Applications and Interoperable Systems
| null | null | null |
cs.CR cs.DC cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decentralized applications (DApps) gained traction in the context of the
blockchain technology. Ethereum is currently the public blockchain that backs
the largest amount of the existing DApps. Onboarding new users to Ethereum
DApps is a notoriously hard issue to solve. This is mainly caused by lack of
cryptocurrency ownership, needed for transaction fees. Several meta-transaction
patterns emerged for decoupling users from paying these fees. However, such
solutions are mostly offered via off-chain, often paid relayer services and do
not fully address the security issues present in the meta-transaction path. In
this paper, we introduce a new meta-transaction architecture that makes use of
the Intel Software Guard Extensions (SGX). Unlike other solutions, our approach
would offer the possibility to deploy a fee-free Ethereum DApp on a web server
that can directly relay meta-transactions to the Ethereum network while having
essential security guarantees integrated by design.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 03:40:47 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Onica",
"Emanuel",
""
],
[
"Amariei",
"Ciprian",
""
]
] |
new_dataset
| 0.99016 |
2204.09958
|
VinayKumar Chapala Mr
|
Vinay Kumar Chapala, Arsalan Malik, and S.M.Zafaruddin
|
RIS-Assisted Vehicular Network with Direct Transmission over
Double-Generalized Gamma Fading Channels
|
Accepted for presentation in the 2022 IEEE 95th Vehicular Technology
Conference: VTC2021-Spring to be held in Helsinki, Finland, 19-22 June 2022
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surface (RIS) can provide stable connectivity for
vehicular communications when direct transmission becomes significantly weaker
with dynamic channel conditions between an access point and a moving vehicle.
In this paper, we analyze the performance of a RIS-assisted vehicular network
by coherently combining received signals reflected by RIS elements and direct
transmissions from the source terminal over double generalized Gamma (dGG)
fading channels. We present analytical expressions for the outage probability
and average bit-error rate (BER) performance of the considered system by
deriving exact density and distribution functions for the end-to-end
signal-to-noise ratio (SNR) resulting from the finite sum of the direct link
and the product of channel coefficients, each distributed according to the
dGG. We also develop an asymptotic analysis of the outage probability and
average BER to derive the diversity order for better insight into the system
performance at high SNR.
We validate the derived analytical expressions through numerical and simulation
results and demonstrate scaling of the system performance with RIS elements and
a comparison to the conventional relaying techniques and direct transmissions
considering various practically relevant scenarios.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 08:33:22 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Chapala",
"Vinay Kumar",
""
],
[
"Malik",
"Arsalan",
""
],
[
"Zafaruddin",
"S. M.",
""
]
] |
new_dataset
| 0.969236 |
2204.09996
|
Velko Vechev
|
Velko Vechev, Juan Zarate, Bernhard Thomaszewski, Otmar Hilliges
|
Computational Design of Kinesthetic Garments
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Kinesthetic garments provide physical feedback on body posture and motion
through tailored distributions of reinforced material. Their ability to
selectively stiffen a garment's response to specific motions makes them
appealing for rehabilitation, sports, robotics, and many other application
fields. However, finding designs that distribute a given amount of
reinforcement material to maximally stiffen the response to specified motions
is a challenging problem. In this work, we propose an optimization-driven
approach for automated design of reinforcement patterns for kinesthetic
garments. Our main contribution is to cast this design task as an on-body
topology optimization problem. Our method allows designers to explore a
continuous range of designs corresponding to various amounts of reinforcement
coverage. Our model captures both tight contact and lift-off separation between
cloth and body. We demonstrate our method on a variety of reinforcement design
problems for different body sites and motions. Optimal designs lead to a two-
to threefold improvement in performance in terms of energy density. A set of
manufactured designs were consistently rated as providing more resistance than
baselines in a comparative user study.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 09:41:22 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Vechev",
"Velko",
""
],
[
"Zarate",
"Juan",
""
],
[
"Thomaszewski",
"Bernhard",
""
],
[
"Hilliges",
"Otmar",
""
]
] |
new_dataset
| 0.99417 |
2204.10024
|
Jasmine Richter
|
Jasmine Richter, Florian Faion, Di Feng, Paul Benedikt Becker, Piotr
Sielecki and Claudius Glaeser
|
Understanding the Domain Gap in LiDAR Object Detection Networks
|
14. Uni-DAS e.V. Workshop Fahrerassistenz und automatisiertes Fahren
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to make autonomous driving a reality, artificial neural networks
have to work reliably in the open-world. However, the open-world is vast and
continuously changing, so it is not technically feasible to collect and
annotate training datasets which accurately represent this domain. Therefore,
there are always domain gaps between training datasets and the open-world which
must be understood. In this work, we investigate the domain gaps between
high-resolution and low-resolution LiDAR sensors in object detection networks.
Using a unique dataset, which enables us to study sensor resolution domain gaps
independent of other effects, we show two distinct domain gaps - an inference
domain gap and a training domain gap. The inference domain gap is characterised
by a strong dependence on the number of LiDAR points per object, while the
training gap shows no such dependence. These findings show that different
approaches are required to close these inference and training domain gaps.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 11:18:48 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Richter",
"Jasmine",
""
],
[
"Faion",
"Florian",
""
],
[
"Feng",
"Di",
""
],
[
"Becker",
"Paul Benedikt",
""
],
[
"Sielecki",
"Piotr",
""
],
[
"Glaeser",
"Claudius",
""
]
] |
new_dataset
| 0.998424 |
2204.10039
|
Hassan Imani
|
Hassan Imani, Md Baharul Islam, Lai-Kuan Wong
|
A New Dataset and Transformer for Stereoscopic Video Super-Resolution
|
Conference on Computer Vision and Pattern Recognition (CVPR 2022)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stereo video super-resolution (SVSR) aims to enhance the spatial resolution
of the low-resolution video by reconstructing the high-resolution video. The
key challenges in SVSR are preserving the stereo-consistency and
temporal-consistency, without which viewers may experience 3D fatigue. There
are several notable works on stereoscopic image super-resolution, but there is
little research on stereo video super-resolution. In this paper, we propose a
novel Transformer-based model for SVSR, namely Trans-SVSR. Trans-SVSR comprises
two key novel components: a spatio-temporal convolutional self-attention layer
and an optical flow-based feed-forward layer that discovers the correlation
across different video frames and aligns the features. The parallax attention
mechanism (PAM) that uses the cross-view information to consider the
significant disparities is used to fuse the stereo views. Due to the lack of a
benchmark dataset suitable for the SVSR task, we collected a new stereoscopic
video dataset, SVSR-Set, containing 71 full high-definition (HD) stereo videos
captured using a professional stereo camera. Extensive experiments on the
collected dataset, along with two other datasets, demonstrate that the
Trans-SVSR can achieve competitive performance compared to the state-of-the-art
methods. Project code and additional results are available at
https://github.com/H-deep/Trans-SVSR/
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 11:49:29 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Imani",
"Hassan",
""
],
[
"Islam",
"Md Baharul",
""
],
[
"Wong",
"Lai-Kuan",
""
]
] |
new_dataset
| 0.999823 |
2204.10058
|
William Seymour
|
William Seymour, Mark Cote and Jose Such
|
Consent on the Fly: Developing Ethical Verbal Consent for Voice
Assistants
|
Accepted to the CHI'22 Workshop on the Ethics of Conversational User
Interfaces
| null | null | null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Determining how voice assistants should broker consent to share data with
third party software has proven to be a complex problem. Devices often require
users to switch to companion smartphone apps in order to navigate permissions
menus for their otherwise hands-free voice assistant. More in line with
smartphone app stores, Alexa now offers "voice-forward consent", allowing users
to grant skills access to personal data mid-conversation using speech.
While more usable and convenient than opening a companion app, asking for
consent 'on the fly' can undermine several concepts core to the informed
consent process. The intangible nature of voice interfaces further blurs the
boundary between parts of an interaction controlled by third-party developers
and parts controlled by the underlying platforms. We outline a research agenda
towards usable and
effective voice-based consent to address the problems with brokering consent
verbally, including our own work drawing on the GDPR and work on consent in
Ubicomp.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 12:44:42 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Seymour",
"William",
""
],
[
"Cote",
"Mark",
""
],
[
"Such",
"Jose",
""
]
] |
new_dataset
| 0.997993 |
2204.10082
|
Cho Hei Pang
|
Chohei Pang, Qicheng Wang, Kinwing Mak, Hongyu Yu, Michael Yu Wang
|
Viko 2.0: A Hierarchical Gecko-inspired Adhesive Gripper with
Visuotactile Sensor
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic grippers with visuotactile sensors have access to rich tactile
information for grasping tasks but encounter difficulty in partially
encompassing large objects with sufficient grip force. While hierarchical
gecko-inspired adhesives are a potential technique for bridging performance
gaps, they require a large contact area for efficient usage. In this work, we
present a new version of an adaptive gecko gripper called Viko 2.0 that
effectively combines the advantage of adhesives and visuotactile sensors.
Compared with a non-hierarchical structure, a hierarchical structure with a
multimaterial design achieves approximately a 1.5-fold increase in normal
adhesion and a twofold increase in contact area. The integrated visuotactile sensor
captures a deformation image of the hierarchical structure and provides a
real-time measurement of contact area, shear force, and incipient slip
detection at 24 Hz. The gripper is implemented on a robotic arm to demonstrate
an adaptive grasping pose based on contact area, and grasps objects with a wide
range of geometries and textures.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 13:23:44 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Pang",
"Chohei",
""
],
[
"Wang",
"Qicheng",
""
],
[
"Mak",
"Kinwing",
""
],
[
"Yu",
"Hongyu",
""
],
[
"Wang",
"Michael Yu",
""
]
] |
new_dataset
| 0.992876 |
2204.10086
|
Peggy Tang
|
Peggy Tang, Kun Hu, Rui Yan, Lei Zhang, Junbin Gao, Zhiyong Wang
|
OTExtSum: Extractive Text Summarisation with Optimal Transport
|
Findings of NAACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Extractive text summarisation aims to select salient sentences from a
document to form a short yet informative summary. While learning-based methods
have achieved promising results, they have several limitations, such as
dependence on expensive training and lack of interpretability. Therefore, in
this paper, we propose a novel non-learning-based method that, for the first
time, formulates text summarisation as an Optimal Transport (OT) problem, namely
Optimal Transport Extractive Summariser (OTExtSum). Optimal sentence extraction
is conceptualised as obtaining an optimal summary that minimises the
transportation cost to a given document regarding their semantic distributions.
Such a cost is defined by the Wasserstein distance and used to measure the
summary's semantic coverage of the original document. Comprehensive experiments
on four challenging and widely used datasets - MultiNews, PubMed, BillSum, and
CNN/DM demonstrate that our proposed method outperforms the state-of-the-art
non-learning-based methods and several recent learning-based methods in terms
of the ROUGE metric.
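A minimal sketch of the OT objective follows, assuming the POT library
(`pip install pot`) and random embeddings as stand-ins for real sentence
representations; the greedy loop is an illustration, not the paper's
extraction procedure.

```python
# Greedy sentence selection under an optimal-transport objective: pick the
# sentence whose addition most reduces the Wasserstein cost between the
# summary's and the document's semantic distributions. Illustration only.
import numpy as np
import ot  # Python Optimal Transport

rng = np.random.default_rng(0)
sent_emb = rng.normal(size=(12, 64))              # one embedding per sentence
doc_dist = np.full(12, 1 / 12)                    # document semantic mass
M = ot.dist(sent_emb, sent_emb)                   # pairwise cost matrix

def ot_cost(selected):
    summ = np.zeros(12)
    summ[selected] = 1 / len(selected)
    return ot.emd2(summ, doc_dist, M)             # exact Wasserstein cost

summary = []
for _ in range(3):                                # budget: 3 sentences
    best = min((i for i in range(12) if i not in summary),
               key=lambda i: ot_cost(summary + [i]))
    summary.append(best)
print(summary)
```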
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 13:25:34 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Tang",
"Peggy",
""
],
[
"Hu",
"Kun",
""
],
[
"Yan",
"Rui",
""
],
[
"Zhang",
"Lei",
""
],
[
"Gao",
"Junbin",
""
],
[
"Wang",
"Zhiyong",
""
]
] |
new_dataset
| 0.963774 |
2204.10149
|
Zheng Zhu
|
Zheng Zhu, Guan Huang, Jiankang Deng, Yun Ye, Junjie Huang, Xinze
Chen, Jiagang Zhu, Tian Yang, Dalong Du, Jiwen Lu, Jie Zhou
|
WebFace260M: A Benchmark for Million-Scale Deep Face Recognition
|
Accepted by T-PAMI. Extension of our CVPR-2021 work:
arXiv:2103.04098. Project website is https://www.face-benchmark.org
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Face benchmarks empower the research community to train and evaluate
high-performance face recognition systems. In this paper, we contribute a new
million-scale recognition benchmark, containing uncurated 4M identities/260M
faces (WebFace260M) and cleaned 2M identities/42M faces (WebFace42M) training
data, as well as an elaborately designed time-constrained evaluation protocol.
Firstly, we collect 4M name lists and download 260M faces from the Internet.
Then, a Cleaning Automatically utilizing Self-Training (CAST) pipeline is
devised to purify the tremendous WebFace260M, which is efficient and scalable.
To the best of our knowledge, the cleaned WebFace42M is the largest public face
recognition training set and we expect to close the data gap between academia
and industry. Referring to practical deployments, Face Recognition Under
Inference Time conStraint (FRUITS) protocol and a new test set with rich
attributes are constructed. Besides, we gather a large-scale masked face
sub-set for biometrics assessment under COVID-19. For a comprehensive
evaluation of face matchers, three recognition tasks are performed under
standard, masked and unbiased settings, respectively. Equipped with this
benchmark, we delve into million-scale face recognition problems. A distributed
framework is developed to train face recognition models efficiently without
compromising performance. Enabled by WebFace42M, we reduce the failure rate on
the challenging IJB-C set by 40% and rank 3rd among 430 entries on NIST-FRVT.
Even 10% data (WebFace4M) shows superior performance compared with the public
training sets. Furthermore, comprehensive baselines are established under the
FRUITS-100/500/1000 milliseconds protocols. The proposed benchmark shows
enormous potential on standard, masked and unbiased face recognition scenarios.
Our WebFace260M website is https://www.face-benchmark.org.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 14:56:53 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Zhu",
"Zheng",
""
],
[
"Huang",
"Guan",
""
],
[
"Deng",
"Jiankang",
""
],
[
"Ye",
"Yun",
""
],
[
"Huang",
"Junjie",
""
],
[
"Chen",
"Xinze",
""
],
[
"Zhu",
"Jiagang",
""
],
[
"Yang",
"Tian",
""
],
[
"Du",
"Dalong",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zhou",
"Jie",
""
]
] |
new_dataset
| 0.999788 |
2204.10181
|
Harshal Patil
|
Dr. Sunil B. Mane, Harshal Patil, Kanhaiya Madaswar and Pranav
Sadavarte
|
WordAlchemy: A transformer-based Reverse Dictionary
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A reverse dictionary takes a target word's description as input and returns
the words that fit the description. Reverse Dictionaries are useful for new
language learners, anomia patients, and for solving common tip-of-the-tongue
problems (lethologica). Currently, there does not exist any Reverse Dictionary
provider with support for any Indian language. We present a novel open-source
cross-lingual reverse dictionary system with support for Indian languages. In
this paper, we propose a transformer-based deep learning approach to tackle the
limitations faced by the existing systems using the mT5 model. This
architecture uses the Translation Language Modeling (TLM) technique, rather
than the conventional BERT's Masked Language Modeling (MLM) technique.
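A minimal generation sketch with Hugging Face Transformers is shown below; the
checkpoint and prompt format are assumptions rather than the authors' released
configuration, and the TLM pre-training step is not reproduced.

```python
# Reverse-dictionary style generation with mT5: description in, candidate
# word out. Checkpoint and prompt format are assumptions for illustration.
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

description = "a feeling of great happiness"
inputs = tokenizer(description, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```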
|
[
{
"version": "v1",
"created": "Sat, 16 Apr 2022 11:41:48 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Mane",
"Dr. Sunil B.",
""
],
[
"Patil",
"Harshal",
""
],
[
"Madaswar",
"Kanhaiya",
""
],
[
"Sadavarte",
"Pranav",
""
]
] |
new_dataset
| 0.99952 |
2204.10195
|
Shankar Biradar Mr
|
Shankar Biradar, Sunil Saumya
|
IIITDWD-ShankarB@ Dravidian-CodeMixi-HASOC2021: mBERT based model for
identification of offensive content in south Indian languages
|
5 pages. Dravidian-CodeMixi-HASOC2021 working notes
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, there has been a lot of focus on offensive content. The
amount of offensive content generated by social media is increasing at an
alarming rate. This has created a greater need than ever to address the issue.
To this end, the organizers of "Dravidian-Code Mixed
HASOC-2020" have created two challenges. Task 1 involves identifying offensive
content in Malayalam data, whereas Task 2 includes Malayalam and Tamil Code
Mixed Sentences. Our team participated in Task 2. In our suggested model, we
experiment with multilingual BERT to extract features, and three different
classifiers are used on extracted features. Our model received a weighted F1
score of 0.70 for Malayalam data and was ranked fifth; we also received a
weighted F1 score of 0.573 for Tamil Code Mixed data and were ranked eleventh.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 06:24:57 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Biradar",
"Shankar",
""
],
[
"Saumya",
"Sunil",
""
]
] |
new_dataset
| 0.999658 |
2204.10209
|
Kaushik Balakrishnan
|
Kaushik Balakrishnan, Devesh Upadhyay
|
BTranspose: Bottleneck Transformers for Human Pose Estimation with
Self-Supervised Pre-Training
|
24 pages, 10 figures
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The task of 2D human pose estimation is challenging as the number of
keypoints is typically large (~ 17) and this necessitates the use of robust
neural network architectures and training pipelines that can capture the
relevant features from the input image. These features are then aggregated to
make accurate heatmap predictions from which the final keypoints of human body
parts can be inferred. Many papers in the literature use CNN-based architectures
for the backbone, and/or combine it with a transformer, after which the
features are aggregated to make the final keypoint predictions [1]. In this
paper, we consider the recently proposed Bottleneck Transformers [2], which
combine CNN and multi-head self attention (MHSA) layers effectively, and we
integrate it with a Transformer encoder and apply it to the task of 2D human
pose estimation. We consider different backbone architectures and pre-train
them using the DINO self-supervised learning method [3], this pre-training is
found to improve the overall prediction accuracy. We call our model BTranspose,
and experiments show that on the COCO validation set, our model achieves an AP
of 76.4, which is competitive with other methods such as [1] and has fewer
network parameters. Furthermore, we also present the dependencies of the final
predicted keypoints on both the MHSA block and the Transformer encoder layers,
providing clues on the image sub-regions the network attends to at the mid and
high levels.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 15:45:05 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Balakrishnan",
"Kaushik",
""
],
[
"Upadhyay",
"Devesh",
""
]
] |
new_dataset
| 0.997166 |
2204.10211
|
Anastasiia Kornilova
|
Anastasiia Kornilova, Marsel Faizullin, Konstantin Pakulev, Andrey
Sadkov, Denis Kukushkin, Azat Akhmetyanov, Timur Akhtyamov, Hekmat
Taherinejad, Gonzalo Ferrer
|
SmartPortraits: Depth Powered Handheld Smartphone Dataset of Human
Portraits for State Estimation, Reconstruction and Synthesis
|
Accepted to CVPR'2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a dataset of 1000 video sequences of human portraits recorded in
real and uncontrolled conditions by using a handheld smartphone accompanied by
an external high-quality depth camera. The collected dataset contains 200
people captured in different poses and locations and its main purpose is to
bridge the gap between raw measurements obtained from a smartphone and
downstream applications, such as state estimation, 3D reconstruction, view
synthesis, etc. The sensors employed in data collection are the smartphone's
camera and Inertial Measurement Unit (IMU), and an external Azure Kinect DK
depth camera software synchronized with sub-millisecond precision to the
smartphone system. During the recording, the smartphone flash is used to
provide a periodic secondary source of lighting. An accurate mask of the
foremost person is provided, as well as its impact on the camera alignment
accuracy. For
evaluation purposes, we compare multiple state-of-the-art camera alignment
methods by using a Motion Capture system. We provide a smartphone
visual-inertial benchmark for portrait capturing, where we report results for
multiple methods and motivate further use of the provided trajectories,
available in the dataset, in view synthesis and 3D reconstruction tasks.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 15:47:38 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Kornilova",
"Anastasiia",
""
],
[
"Faizullin",
"Marsel",
""
],
[
"Pakulev",
"Konstantin",
""
],
[
"Sadkov",
"Andrey",
""
],
[
"Kukushkin",
"Denis",
""
],
[
"Akhmetyanov",
"Azat",
""
],
[
"Akhtyamov",
"Timur",
""
],
[
"Taherinejad",
"Hekmat",
""
],
[
"Ferrer",
"Gonzalo",
""
]
] |
new_dataset
| 0.999906 |
2204.10232
|
Wei Tang
|
Wei Tang, Yanlin Wang, Hongyu Zhang, Shi Han, Ping Luo, Dongmei Zhang
|
LibDB: An Effective and Efficient Framework for Detecting Third-Party
Libraries in Binaries
|
MSR 2022
| null |
10.1145/3524842.3528442
| null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Third-party libraries (TPLs) are reused frequently in software applications
for reducing development cost. However, they could introduce security risks as
well. Many TPL detection methods have been proposed to detect TPL reuse in
Android bytecode or in source code. This paper focuses on detecting TPL reuse
in binary code, which is a more challenging task. For a detection target in
binary form, libraries may be compiled and linked to separate dynamic-link
files or built into a fused binary that contains multiple libraries and
project-specific code. This could result in fewer available code features and
lower the effectiveness of feature engineering. In this paper, we propose a
binary TPL reuse detection framework, LibDB, which can effectively and
efficiently detect imported TPLs even in stripped and fused binaries. In
addition to the basic and coarse-grained features (string literals and exported
function names), LibDB utilizes function contents as a new type of feature. It
embeds all functions in a binary file to low-dimensional representations with a
trained neural network. It further adopts a function call graph-based
comparison method to improve the accuracy of the detection. LibDB is able to
support version identification of TPLs contained in the detection target, which
is not considered by existing detection methods. To evaluate the performance of
LibDB, we construct three datasets for binary-based TPL reuse detection. Our
experimental results show that LibDB is more accurate and efficient than
state-of-the-art tools on the binary TPL detection task and the version
identification task. Our datasets and source code used in this work are
anonymously available at https://github.com/DeepSoftwareAnalytics/LibDB.
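The function-content matching step can be sketched as below; the trained
embedding network and the call-graph comparison are not reproduced, and random
vectors stand in for real function embeddings.

```python
# Cosine-similarity matching between function embeddings of a target binary
# and a third-party library; random vectors stand in for embeddings from the
# paper's trained network, and call-graph comparison is omitted.
import numpy as np

def cosine_matches(target_emb, lib_emb, threshold: float = 0.9) -> int:
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    l = lib_emb / np.linalg.norm(lib_emb, axis=1, keepdims=True)
    sims = t @ l.T                         # all-pairs cosine similarity
    return int((sims.max(axis=1) >= threshold).sum())

rng = np.random.default_rng(0)
target = rng.normal(size=(500, 128))       # embeddings of target's functions
library = rng.normal(size=(300, 128))      # embeddings of one TPL's functions
print(cosine_matches(target, library), "functions matched")
```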
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 16:10:03 GMT"
}
] | 2022-04-22T00:00:00 |
[
[
"Tang",
"Wei",
""
],
[
"Wang",
"Yanlin",
""
],
[
"Zhang",
"Hongyu",
""
],
[
"Han",
"Shi",
""
],
[
"Luo",
"Ping",
""
],
[
"Zhang",
"Dongmei",
""
]
] |
new_dataset
| 0.997679 |
2011.14109
|
Umberto Martinez-Penas
|
Umberto Mart\'inez-Pe\~nas
|
A general family of MSRD codes and PMDS codes with smaller field sizes
from extended Moore matrices
| null | null | null | null |
cs.IT math.AG math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We construct six new explicit families of linear maximum sum-rank distance
(MSRD) codes, each of which has the smallest field sizes among all known MSRD
codes for some parameter regime. Using them and a previous result of the
author, we provide two new explicit families of linear partial MDS (PMDS) codes
with smaller field sizes than previous PMDS codes for some parameter regimes.
Our approach is to characterize evaluation points that turn extended Moore
matrices into the parity-check matrix of a linear MSRD code. We then produce
such sequences from codes with good Hamming-metric parameters. The six new
families of linear MSRD codes with smaller field sizes are obtained using MDS
codes, Hamming codes, BCH codes and three Algebraic-Geometry codes. The MSRD
codes based on Hamming codes, of minimum sum-rank distance $ 3 $, meet a recent
bound by Byrne et al.
|
[
{
"version": "v1",
"created": "Sat, 28 Nov 2020 11:14:31 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Dec 2020 11:14:17 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Apr 2022 11:36:05 GMT"
}
] | 2022-04-21T00:00:00 |
[
[
"Martínez-Peñas",
"Umberto",
""
]
] |
new_dataset
| 0.998807 |
2105.06224
|
Zhanzhan Cheng
|
Liang Qiao and Zaisheng Li and Zhanzhan Cheng and Peng Zhang and
Shiliang Pu and Yi Niu and Wenqi Ren and Wenming Tan and Fei Wu
|
LGPMA: Complicated Table Structure Recognition with Local and Global
Pyramid Mask Alignment
|
Award of ICDAR2021 Best Industry Paper. Code is available at
https://davar-lab.github.io/publication.html or
https://github.com/hikopensource/DAVAR-Lab-OCR -------------- Fixed formula
typos in Eq. 1
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Table structure recognition is a challenging task due to the various
structures and complicated cell spanning relations. Previous methods handled
the problem starting from elements in different granularities (rows/columns,
text regions), which somehow fell into issues like lossy heuristic rules or
the neglect of empty-cell division. Based on table structure characteristics,
we find that obtaining the aligned bounding boxes of text regions can effectively
maintain the entire relevant range of different cells. However, the aligned
bounding boxes are hard to predict accurately due to visual ambiguities. In
this paper, we aim to obtain more reliable aligned bounding
boxes by fully utilizing the visual information from both text regions in
proposed local features and cell relations in global features. Specifically, we
propose the framework of Local and Global Pyramid Mask Alignment, which adopts
the soft pyramid mask learning mechanism in both the local and global feature
maps. It allows the predicted boundaries of bounding boxes to break through the
limitation of original proposals. A pyramid mask re-scoring module is then
integrated to compromise the local and global information and refine the
predicted boundaries. Finally, we propose a robust table structure recovery
pipeline to obtain the final structure, in which we also effectively solve the
problems of empty cells locating and division. Experimental results show that
the proposed method achieves competitive and even new state-of-the-art
performance on several public benchmarks.
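As a loose illustration of the re-scoring idea, here is a minimal Python
sketch that blends a local and a global soft mask and refines a box from the
fused map. The learned re-scoring module in the paper is more elaborate; the
0.5 threshold and the equal weighting are purely illustrative assumptions.

    import numpy as np

    def fuse_pyramid_masks(local_mask, global_mask, w_local=0.5):
        """Toy fusion of local and global soft pyramid masks (both in [0, 1]).

        Hypothetical simplification: blend the two score maps, then refine a
        cell's box by thresholding the fused map.
        """
        fused = w_local * local_mask + (1.0 - w_local) * global_mask
        ys, xs = np.where(fused > 0.5)          # pixels judged inside the cell
        if len(xs) == 0:
            return fused, None
        refined_box = (xs.min(), ys.min(), xs.max(), ys.max())  # x1, y1, x2, y2
        return fused, refined_box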
|
[
{
"version": "v1",
"created": "Thu, 13 May 2021 12:24:12 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Oct 2021 09:24:19 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Apr 2022 03:41:11 GMT"
}
] | 2022-04-21T00:00:00 |
[
[
"Qiao",
"Liang",
""
],
[
"Li",
"Zaisheng",
""
],
[
"Cheng",
"Zhanzhan",
""
],
[
"Zhang",
"Peng",
""
],
[
"Pu",
"Shiliang",
""
],
[
"Niu",
"Yi",
""
],
[
"Ren",
"Wenqi",
""
],
[
"Tan",
"Wenming",
""
],
[
"Wu",
"Fei",
""
]
] |
new_dataset
| 0.987272 |
2108.00166
|
Fengping Wang
|
Fengping Wang, Jie Li, Siqi Zhang, Chun Qi, Yun Zhang, Danmin Miao
|
A Dynamic 3D Spontaneous Micro-expression Database: Establishment and
Evaluation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Micro-expressions are spontaneous, unconscious facial movements that reveal
people's true inner emotions and have great potential in psychological testing
and related fields. Since the face is a deformable 3D object, an expression
induces spatial deformation of the face, yet existing databases consist of 2D
videos and therefore lack descriptions of the 3D spatial information of
micro-expressions. We therefore propose a new micro-expression database
containing both 2D video sequences and 3D point-cloud sequences. The database
includes 373 micro-expression sequences, and these samples were classified
using an objective method based on the Facial Action Coding System, as well as
a non-objective method that combines video content with participants'
self-reports. We extracted 2D and 3D features using local binary patterns on
three orthogonal planes (LBP-TOP) and curvature algorithms, respectively, and
evaluated the classification accuracies of these two features and their fusion
with leave-one-subject-out (LOSO) and 10-fold cross-validation. Further, we
applied various neural network algorithms to database classification; the
results show that classification accuracy improves when 3D features are fused
with 2D features rather than using 2D features alone. The database offers
original and cropped micro-expression samples, which will facilitate the
exploration of and research on 3D spatio-temporal features of micro-expressions.
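Since LBP-TOP is a standard descriptor, a simplified Python sketch may help.
It applies scikit-image's LBP to just the three central orthogonal slices,
which conveys the XY/XT/YT idea but not the full slice-pooled histogram of the
original formulation; the bin count and pooling choice here are assumptions.

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_top(volume, P=8, R=1, bins=59):
        """Toy LBP-TOP descriptor for a (T, H, W) uint8 grayscale video volume.

        Simplification: the classical descriptor pools LBP codes over *all*
        slices of each orthogonal plane; here only the central slice of each
        plane is used.
        """
        T, H, W = volume.shape
        planes = [
            volume[T // 2],          # XY plane (appearance)
            volume[:, H // 2, :],    # XT plane (horizontal motion over time)
            volume[:, :, W // 2],    # YT plane (vertical motion over time)
        ]
        feats = []
        for plane in planes:
            codes = local_binary_pattern(plane, P, R, method="nri_uniform")
            hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
            feats.append(hist / max(hist.sum(), 1))  # L1-normalize each plane
        return np.concatenate(feats)                 # 3 * bins feature vector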
|
[
{
"version": "v1",
"created": "Sat, 31 Jul 2021 07:04:16 GMT"
},
{
"version": "v2",
"created": "Sun, 22 Aug 2021 03:57:15 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Jan 2022 04:40:48 GMT"
},
{
"version": "v4",
"created": "Wed, 23 Feb 2022 14:19:49 GMT"
},
{
"version": "v5",
"created": "Wed, 20 Apr 2022 06:09:56 GMT"
}
] | 2022-04-21T00:00:00 |
[
[
"Wang",
"Fengping",
""
],
[
"Li",
"Jie",
""
],
[
"Zhang",
"Siqi",
""
],
[
"Qi",
"Chun",
""
],
[
"Zhang",
"Yun",
""
],
[
"Miao",
"Danmin",
""
]
] |
new_dataset
| 0.999434 |
2108.01343
|
Jing Zhang
|
Bo Du, Jian Ye, Jing Zhang, Juhua Liu, and Dacheng Tao
|
I3CL:Intra- and Inter-Instance Collaborative Learning for
Arbitrary-shaped Scene Text Detection
|
IJCV. Code is available at
https://github.com/ViTAE-Transformer/ViTAE-Transformer-Scene-Text-Detection
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing methods for arbitrary-shaped text detection in natural scenes face
two critical issues: 1) fractured detections at the gaps within a text
instance; and 2) inaccurate detections of arbitrary-shaped text instances with
diverse background context. To address these issues, we propose a novel method
named Intra- and Inter-Instance Collaborative Learning (I3CL). Specifically, to
address the first issue, we design an effective convolutional module with
multiple receptive fields, which is able to collaboratively learn better
character and gap feature representations at local and long ranges inside a
text instance. To address the second issue, we devise an instance-based
transformer module to exploit the dependencies between different text instances
and a global context module to exploit the semantic context from the shared
background, which are able to collaboratively learn more discriminative text
feature representation. In this way, I3CL can effectively exploit the intra-
and inter-instance dependencies together in a unified end-to-end trainable
framework. Besides, to make full use of the unlabeled data, we design an
effective semi-supervised learning method to leverage the pseudo labels via an
ensemble strategy. Without bells and whistles, experimental results show that
the proposed I3CL sets new state-of-the-art results on three challenging public
benchmarks, i.e., an F-measure of 77.5% on ICDAR2019-ArT, 86.9% on Total-Text,
and 86.4% on CTW-1500. Notably, our I3CL with the ResNeSt-101 backbone ranked
1st place on the ICDAR2019-ArT leaderboard. The source code will be available
at https://github.com/ViTAE-Transformer/ViTAE-Transformer-Scene-Text-Detection.
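The multi-receptive-field idea can be sketched in a few lines of PyTorch. The
branch layout below (1x1, 3x3, and dilated 3x3 convolutions fused by a 1x1) is
a hypothetical simplification for illustration, not the paper's exact module.

    import torch
    import torch.nn as nn

    class MultiReceptiveFieldBlock(nn.Module):
        """Sketch of an intra-instance module with parallel receptive fields.

        Three parallel branches capture character-level and gap-level context
        at short and long ranges inside a text instance, then are fused.
        """
        def __init__(self, channels):
            super().__init__()
            self.b1 = nn.Conv2d(channels, channels, kernel_size=1)
            self.b2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.b3 = nn.Conv2d(channels, channels, kernel_size=3,
                                padding=4, dilation=4)   # long-range gaps
            self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            y = torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1)
            return self.act(self.fuse(y)) + x            # residual connection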
|
[
{
"version": "v1",
"created": "Tue, 3 Aug 2021 07:48:12 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Aug 2021 08:39:31 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Apr 2022 07:04:46 GMT"
}
] | 2022-04-21T00:00:00 |
[
[
"Du",
"Bo",
""
],
[
"Ye",
"Jian",
""
],
[
"Zhang",
"Jing",
""
],
[
"Liu",
"Juhua",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.999208 |
2112.12089
|
Xiangtao Kong
|
Xiangtao Kong, Xina Liu, Jinjin Gu, Yu Qiao and Chao Dong
|
Reflash Dropout in Image Super-Resolution
|
CVPR2022 paper + supplementary file
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dropout is designed to relieve overfitting in high-level vision tasks but is
rarely applied in low-level vision tasks such as image super-resolution (SR).
As a classic regression problem, SR behaves differently from high-level tasks
and is sensitive to the dropout operation. However, in this paper, we show
that appropriate usage of dropout benefits SR networks and improves their
generalization ability. Specifically, dropout is better embedded at the end of
the network and is significantly helpful in multi-degradation settings. This
discovery runs counter to common intuition and inspires us to explore its
working mechanism. We further use two analysis tools -- one from recent
network-interpretation work, and the other specially designed for this task.
The analysis results corroborate our experimental findings and offer a new
perspective for understanding SR networks.
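A minimal PyTorch sketch of "dropout at the end of the network" follows. The
body architecture, the dropout rate, and the exact placement just before the
reconstruction tail are illustrative assumptions, not the paper's recipe.

    import torch.nn as nn

    class SRNetWithLateDropout(nn.Module):
        """Minimal SR body illustrating late (end-of-network) dropout."""
        def __init__(self, channels=64, n_blocks=8, scale=4, p=0.5):
            super().__init__()
            body = [nn.Conv2d(3, channels, 3, padding=1)]
            for _ in range(n_blocks):
                body += [nn.Conv2d(channels, channels, 3, padding=1),
                         nn.ReLU(inplace=True)]
            self.body = nn.Sequential(*body)
            self.dropout = nn.Dropout2d(p)                 # late channel dropout
            self.tail = nn.Sequential(
                nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
                nn.PixelShuffle(scale),                    # upsample to HR size
            )

        def forward(self, x):
            return self.tail(self.dropout(self.body(x)))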
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 17:47:32 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Mar 2022 11:42:28 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Apr 2022 06:14:24 GMT"
}
] | 2022-04-21T00:00:00 |
[
[
"Kong",
"Xiangtao",
""
],
[
"Liu",
"Xina",
""
],
[
"Gu",
"Jinjin",
""
],
[
"Qiao",
"Yu",
""
],
[
"Dong",
"Chao",
""
]
] |
new_dataset
| 0.998997 |
2201.00869
|
Niloofar Bahadori
|
Niloofar Bahadori, Jonathan Ashdown, Francesco Restuccia
|
ReWiS: Reliable Wi-Fi Sensing Through Few-Shot Multi-Antenna
Multi-Receiver CSI Learning
| null | null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Thanks to the ubiquity of Wi-Fi access points and devices, Wi-Fi sensing
enables transformative applications in remote health care, security, and
surveillance. Existing work has explored the use of machine learning on
channel state information (CSI) computed from Wi-Fi packets to classify events
of interest. However, most of these algorithms require a significant amount of
data collection, as well as extensive computational power for additional CSI
feature extraction. Moreover, the majority of these models suffer from poor
accuracy when tested in a new/untrained environment. In this paper, we propose
ReWiS, a novel framework for robust and environment-independent Wi-Fi sensing.
The key innovation of ReWiS is to leverage few-shot learning (FSL) as the
inference engine, which (i) reduces the need for extensive data collection and
application-specific feature extraction; and (ii) can rapidly generalize to
new tasks by leveraging only a few new samples. We prototype ReWiS using
off-the-shelf Wi-Fi equipment and showcase its performance on a compelling use
case, human activity recognition. To this end, we perform an extensive data
collection campaign in three different propagation environments with two human
subjects. We evaluate the impact of each diversity component on performance
and compare ReWiS with a traditional convolutional neural network (CNN)
approach. Experimental results show that ReWiS improves performance by about
40% with respect to existing single-antenna, low-resolution approaches.
Moreover, compared to a CNN-based approach, ReWiS is 35% more accurate and
loses less than 10% accuracy when tested in different environments, whereas
the CNN's accuracy drops by more than 45%.
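To make the FSL inference step concrete, here is a prototypical-network-style
sketch in PyTorch. Whether ReWiS uses exactly this FSL variant is not claimed
here, and the CSI encoder producing the embeddings is assumed to exist.

    import torch
    import torch.nn.functional as F

    def proto_classify(support, support_labels, query, n_classes):
        """Few-shot inference sketch in the prototypical-network style.

        support: (N, D) embeddings of labeled CSI windows (each class present),
        support_labels: (N,) integer labels, query: (M, D) embeddings.
        """
        protos = torch.stack([
            support[support_labels == c].mean(dim=0) for c in range(n_classes)
        ])                                         # (n_classes, D) prototypes
        dists = torch.cdist(query, protos)         # distance to each prototype
        return F.softmax(-dists, dim=1).argmax(1)  # nearest prototype wins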
|
[
{
"version": "v1",
"created": "Mon, 3 Jan 2022 20:22:39 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Apr 2022 18:31:39 GMT"
}
] | 2022-04-21T00:00:00 |
[
[
"Bahadori",
"Niloofar",
""
],
[
"Ashdown",
"Jonathan",
""
],
[
"Restuccia",
"Francesco",
""
]
] |
new_dataset
| 0.987722 |
2201.01459
|
Yintong Huo
|
Yintong Huo, Yuxin Su, Hongming Zhang, Michael R. Lyu
|
ARCLIN: Automated API Mention Resolution for Unformatted Texts
|
Accepted by the 44th International Conference on Software Engineering
(ICSE '22)
| null |
10.1145/3510003.3510158
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online technical forums (e.g., StackOverflow) are popular platforms for
developers to discuss technical problems such as how to use a specific
Application Programming Interface (API), how to solve a programming task, or
how to fix bugs in their code. These discussions often provide auxiliary
knowledge about how to use software that is not covered by the official
documents. Automatically extracting such knowledge supports a set of
downstream tasks like API searching or indexing. However, unlike official
documentation written by experts, discussions in open forums are made by
regular developers who write short, informal texts, including spelling errors
or abbreviations. There are three major challenges in accurately recognizing
API mentions in unstructured natural-language documents and linking them to
entries in an API repository: (1) distinguishing API mentions from common
words; (2) identifying API mentions without a fully qualified name; and (3)
disambiguating API mentions with similar method names but from different
libraries. In this paper, to tackle these challenges, we propose ARCLIN, a
tool that can effectively distinguish and link APIs without using human
annotations. Specifically, we first design an API recognizer that
automatically extracts API mentions from natural-language sentences using a
Conditional Random Field (CRF) on top of a Bi-directional Long Short-Term
Memory (Bi-LSTM) module; we then apply a context-aware scoring mechanism to
compute the mention-entry similarity for each entry in an API repository.
Compared to previous approaches based on heuristic rules, our proposed tool,
which requires no manual inspection, outperforms them by 8% on Py-mention, a
high-quality dataset containing 558 mentions and 2,830 sentences from five
popular Python libraries.
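The linking step can be illustrated with a minimal Python sketch of
context-aware mention-entry scoring. Cosine similarity stands in for the
paper's learned scoring function, and the embeddings of the mention-in-context
and of each repository entry are assumed to be given by some upstream encoder.

    import torch
    import torch.nn.functional as F

    def rank_api_entries(mention_ctx_emb, entry_embs, entry_names):
        """Score a mention's context against every API repository entry.

        mention_ctx_emb: (D,) embedding of the mention in its sentence,
        entry_embs: (K, D) embeddings of the K candidate repository entries.
        Returns the best-matching entry name and its similarity score.
        """
        scores = F.cosine_similarity(mention_ctx_emb.unsqueeze(0),
                                     entry_embs, dim=1)   # (K,) similarities
        best = torch.argmax(scores).item()
        return entry_names[best], scores[best].item()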
|
[
{
"version": "v1",
"created": "Wed, 5 Jan 2022 05:15:04 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2022 15:54:41 GMT"
}
] | 2022-04-21T00:00:00 |
[
[
"Huo",
"Yintong",
""
],
[
"Su",
"Yuxin",
""
],
[
"Zhang",
"Hongming",
""
],
[
"Lyu",
"Michael R.",
""
]
] |
new_dataset
| 0.996131 |
2202.08699
|
Qin Wang
|
Rujia Li and Qin Wang and Qi Wang and David Galindo
|
How Do Smart Contracts Benefit Security Protocols?
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Smart contracts have recently been adopted by many security protocols.
However, existing studies lack satisfactory theoretical support on how
contracts benefit security protocols. This paper aims to give a systematic
analysis of smart contract (SC)-based security protocols to fill this gap of
unclear arguments and statements. We first investigate \textit{state-of-the-art
studies} and establish a formalized model of smart contract protocols with
well-defined syntax and assumptions. Then, we apply our formal framework to
two concrete instantiations to explore the corresponding advantages and
desirable properties. Through our analysis, we abstract three generic
properties (\textit{non-repudiation, non-equivocation, and non-frameability})
and accordingly identify two patterns: (1) a smart contract can act as an
autonomous subscriber to assist the trusted third party (TTP); (2) a smart
contract can replace the traditional TTP. To the best of our knowledge, this
is the first study to provide in-depth discussions of SC-based security
protocols from a strictly theoretical perspective.
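As a toy, off-chain model of the first pattern, the Python sketch below keeps
an append-only log that rejects conflicting statements for the same key,
loosely mirroring non-equivocation and non-repudiation. It illustrates the
concept only and is not the paper's formal model.

    class ContractLog:
        """Toy model of the 'contract as autonomous subscriber' pattern."""
        def __init__(self):
            self._log = {}                       # key -> (sender, statement)

        def publish(self, sender, key, statement):
            # Non-equivocation: a second, conflicting statement for the same
            # key is rejected; non-repudiation: who said what is retained.
            if key in self._log and self._log[key][1] != statement:
                raise ValueError(f"equivocation detected for key {key!r}")
            self._log.setdefault(key, (sender, statement))
            return self._log[key]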
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 15:06:54 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2022 02:18:58 GMT"
}
] | 2022-04-21T00:00:00 |
[
[
"Li",
"Rujia",
""
],
[
"Wang",
"Qin",
""
],
[
"Wang",
"Qi",
""
],
[
"Galindo",
"David",
""
]
] |
new_dataset
| 0.979825 |