Column types (from the dataset viewer): id, submitter (nullable), authors, title, comments (nullable), journal-ref (nullable), doi (nullable), report-no (nullable), categories, and abstract are strings; license (9 classes) and prediction (1 class) are categorical strings; versions and authors_parsed are lists; update_date is a timestamp[s]; probability is a float64 in [0.95, 1].

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2204.05994
|
Niall McLaughlin
|
Niall McLaughlin
|
Malceiver: Perceiver with Hierarchical and Multi-modal Features for
Android Malware Detection
|
13 pages, 2 figures
| null | null | null |
cs.CR cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose the Malceiver, a hierarchical Perceiver model for Android malware
detection that makes use of multi-modal features. The primary inputs are the
opcode sequence and the requested permissions of a given Android APK file. To
reach a malware classification decision, the model combines hierarchical
features extracted from the opcode sequence together with the requested
permissions. The model's architecture is based on the Perceiver/PerceiverIO,
which allows very long opcode sequences to be processed efficiently. Our
proposed model can be easily extended to use multi-modal features. We show
experimentally that this model outperforms a conventional CNN architecture for
opcode-sequence-based malware detection. We then show that using additional
modalities improves performance. Our proposed architecture opens new avenues
for the use of Transformer-style networks in malware research.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 17:59:17 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"McLaughlin",
"Niall",
""
]
] |
new_dataset
| 0.998556 |
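The Perceiver-style fusion this abstract describes can be sketched compactly. The module below is a minimal, hypothetical reconstruction from the abstract alone (all dimensions, layer counts, and names such as `PerceiverFusion` are our assumptions, not the authors' code): a small set of latent vectors cross-attends over an arbitrarily long opcode sequence, and a permission vector is fused in before classification.

```python
# Hypothetical sketch of Perceiver-style multi-modal fusion; not the paper's code.
import torch
import torch.nn as nn

class PerceiverFusion(nn.Module):
    def __init__(self, vocab=256, n_perms=100, d=128, n_latents=64):
        super().__init__()
        self.opcode_emb = nn.Embedding(vocab, d)   # one embedding per opcode
        self.perm_proj = nn.Linear(n_perms, d)     # binary permission vector -> d
        self.latents = nn.Parameter(torch.randn(n_latents, d))
        self.cross = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d, 2)                # benign / malware

    def forward(self, opcodes, perms):
        # opcodes: (B, L) long tensor, L may be very large; perms: (B, n_perms)
        x = self.opcode_emb(opcodes)                        # (B, L, d)
        q = self.latents.unsqueeze(0).expand(opcodes.size(0), -1, -1)
        z, _ = self.cross(q, x, x)       # fixed-size latents attend over sequence
        z = z.mean(dim=1) + self.perm_proj(perms)           # fuse permission modality
        return self.head(z)
```

Because the latent array has a fixed size, the attention cost grows only linearly in the opcode-sequence length, which is the efficiency property the abstract highlights.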
2002.11561
|
Javier Naranjo-Alcazar
|
Javier Naranjo-Alcazar, Sergi Perez-Castanos, Pedro Zuccarrello, Ana
M. Torres, Jose J. Lopez, Franscesc J. Ferri and Maximo Cobos
|
An Open-set Recognition and Few-Shot Learning Dataset for Audio Event
Classification in Domestic Environments
|
Submitted to IEEE Access
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of training with a small set of positive samples is known as
few-shot learning (FSL). It is widely known that traditional deep learning (DL)
algorithms usually show very good performance when trained with large datasets.
However, in many applications, it is not possible to obtain such a high number
of samples. In the image domain, typical FSL applications include those related
to face recognition. In the audio domain, music fraud or speaker recognition
can clearly benefit from FSL methods. This paper deals with the
application of FSL to the detection of specific and intentional acoustic events
given by different types of sound alarms, such as door bells or fire alarms,
using a limited number of samples. These sounds typically occur in domestic
environments where many events corresponding to a wide variety of sound classes
take place. Therefore, the detection of such alarms in a practical scenario can
be considered an open-set recognition (OSR) problem. To address the lack of a
dedicated public dataset for audio FSL, researchers usually make modifications
on other available datasets. This paper is aimed at providing the audio
recognition community with a carefully annotated dataset
(https://zenodo.org/record/3689288) for FSL in an OSR context comprised of 1360
clips from 34 classes divided into pattern sounds and unwanted sounds. To
facilitate and promote research on this area, results with state-of-the-art
baseline systems based on transfer learning are also presented.
|
[
{
"version": "v1",
"created": "Wed, 26 Feb 2020 15:26:45 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2020 14:30:45 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2020 12:46:34 GMT"
},
{
"version": "v4",
"created": "Tue, 3 Mar 2020 16:17:48 GMT"
},
{
"version": "v5",
"created": "Sat, 14 Mar 2020 14:25:45 GMT"
},
{
"version": "v6",
"created": "Wed, 18 Mar 2020 09:26:51 GMT"
},
{
"version": "v7",
"created": "Sat, 4 Sep 2021 10:48:44 GMT"
},
{
"version": "v8",
"created": "Mon, 11 Apr 2022 08:32:48 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Naranjo-Alcazar",
"Javier",
""
],
[
"Perez-Castanos",
"Sergi",
""
],
[
"Zuccarrello",
"Pedro",
""
],
[
"Torres",
"Ana M.",
""
],
[
"Lopez",
"Jose J.",
""
],
[
"Ferri",
"Franscesc J.",
""
],
[
"Cobos",
"Maximo",
""
]
] |
new_dataset
| 0.958393 |
2003.04380
|
Jorge Peña Queralta
|
Jorge Peña Queralta, Carmen Martínez Almansa, Fabrizio Schiano,
Dario Floreano, Tomi Westerlund
|
UWB-based system for UAV Localization in GNSS-Denied Environments:
Characterization and Dataset
|
Accepted to the 2020 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2020)
| null |
10.1109/IROS45743.2020.9341042
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Small unmanned aerial vehicles (UAV) have penetrated multiple domains over
the past years. In GNSS-denied or indoor environments, aerial robots require a
robust and stable localization system, often with external feedback, in order
to fly safely. Motion capture systems are typically utilized indoors when
accurate localization is needed. However, these systems are expensive and most
require a fixed setup. Recently, visual-inertial odometry and similar methods
have advanced to a point where autonomous UAVs can rely on them for
localization. The main limitation in this case comes from the environment, as
well as in long-term autonomy due to accumulating error if loop closure cannot
be performed efficiently. For instance, the impact of low visibility due to
dust or smoke in post-disaster scenarios might render the odometry methods
inapplicable. In this paper, we study and characterize an ultra-wideband (UWB)
system for navigation and localization of aerial robots indoors based on
Decawave's DWM1001 UWB node. The system is portable, inexpensive, and can be
fully battery powered. We show the viability of this system for
autonomous flight of UAVs, and provide open-source methods and data that enable
its widespread application even with movable anchor systems. We characterize
the accuracy based on the position of the UAV with respect to the anchors, its
altitude and speed, and the distribution of the anchors in space. Finally, we
analyze the accuracy of the self-calibration of the anchors' positions.
|
[
{
"version": "v1",
"created": "Mon, 9 Mar 2020 19:44:59 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Aug 2020 10:02:33 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Queralta",
"Jorge Peña",
""
],
[
"Almansa",
"Carmen Martínez",
""
],
[
"Schiano",
"Fabrizio",
""
],
[
"Floreano",
"Dario",
""
],
[
"Westerlund",
"Tomi",
""
]
] |
new_dataset
| 0.996293 |
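For readers unfamiliar with UWB ranging, the core localization step the paper characterizes (recovering a tag position from distances to fixed anchors) can be sketched as a linear least-squares problem. The snippet below is an illustrative toy, not the authors' open-source code; the anchor layout and noise level are invented.

```python
# Toy multilateration: estimate a tag position from UWB ranges to known anchors.
import numpy as np

def multilaterate(anchors, ranges):
    """anchors: (N, 3) known positions; ranges: (N,) measured distances."""
    a0, d0 = anchors[0], ranges[0]
    # Subtracting the first range equation from the others linearizes the system:
    # 2 (a_i - a_0) . x = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0., 0., 0.], [5., 0., 0.], [0., 5., 0.], [5., 5., 2.]])
truth = np.array([2.0, 3.0, 1.0])
ranges = np.linalg.norm(anchors - truth, axis=1) + np.random.normal(0, 0.05, 4)
print(multilaterate(anchors, ranges))   # close to `truth` up to ranging noise
```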
2011.09524
|
Hasan Saribas
|
Hasan Saribas, Hakan Cevikalp, Okan Köpüklü, Bedirhan Uzun
|
TRAT: Tracking by Attention Using Spatio-Temporal Features
| null | null |
10.1016/j.neucom.2022.04.043
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Robust object tracking requires knowledge of tracked objects' appearance,
motion and their evolution over time. Although motion provides distinctive and
complementary information especially for fast moving objects, most of the
recent tracking architectures primarily focus on the objects' appearance
information. In this paper, we propose a two-stream deep neural network tracker
that uses both spatial and temporal features. Our architecture is developed
over ATOM tracker and contains two backbones: (i) 2D-CNN network to capture
appearance features and (ii) 3D-CNN network to capture motion features. The
features returned by the two networks are then fused with attention based
Feature Aggregation Module (FAM). Since the whole architecture is unified, it
can be trained end-to-end. The experimental results show that the proposed
tracker TRAT (TRacking by ATtention) achieves state-of-the-art performance on
most of the benchmarks and it significantly outperforms the baseline ATOM
tracker.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2020 20:11:12 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Saribas",
"Hasan",
""
],
[
"Cevikalp",
"Hakan",
""
],
[
"Köpüklü",
"Okan",
""
],
[
"Uzun",
"Bedirhan",
""
]
] |
new_dataset
| 0.997025 |
2102.06448
|
Haoran Chen
|
Haoran Chen, Jianmin Li, Simone Frintrop, Xiaolin Hu
|
The MSR-Video to Text Dataset with Clean Annotations
|
The paper is under consideration at Computer Vision and Image
Understanding
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Video captioning automatically generates short descriptions of the video
content, usually in form of a single sentence. Many methods have been proposed
for solving this task. A large dataset called MSR Video to Text (MSR-VTT) is
often used as the benchmark dataset for testing the performance of the methods.
However, we found that the human annotations, i.e., the descriptions of video
contents in the dataset, are quite noisy, e.g., there are many duplicate
captions and many captions contain grammatical problems. These problems may
pose difficulties to video captioning models for learning underlying patterns.
We cleaned the MSR-VTT annotations by removing these problems, then tested
several typical video captioning models on the cleaned dataset. Experimental
results showed that data cleaning boosted the performances of the models
measured by popular quantitative metrics. We recruited subjects to evaluate the
results of a model trained on the original and cleaned datasets. The human
behavior experiment demonstrated that trained on the cleaned dataset, the model
generated captions that were more coherent and more relevant to the contents of
the video clips.
|
[
{
"version": "v1",
"created": "Fri, 12 Feb 2021 11:14:56 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Apr 2021 04:22:49 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Apr 2022 09:20:25 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Chen",
"Haoran",
""
],
[
"Li",
"Jianmin",
""
],
[
"Frintrop",
"Simone",
""
],
[
"Hu",
"Xiaolin",
""
]
] |
new_dataset
| 0.999772 |
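One of the cleaning steps the abstract names, duplicate-caption removal, is simple to illustrate. The sketch below assumes a list of `{'video_id', 'caption'}` dicts, which is our guess at a convenient layout rather than the dataset's documented schema.

```python
# Illustrative duplicate-caption removal for per-video annotations.
def dedup_captions(annotations):
    """annotations: list of {'video_id': str, 'caption': str} dicts."""
    seen, cleaned = set(), []
    for ann in annotations:
        # Normalize lightly so trivially repeated captions collapse to one key.
        key = (ann["video_id"], ann["caption"].strip().lower())
        if key not in seen:
            seen.add(key)
            cleaned.append(ann)
    return cleaned
```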
2103.04814
|
Ding Jian
|
Jian Ding, Enze Xie, Hang Xu, Chenhan Jiang, Zhenguo Li, Ping Luo,
Gui-Song Xia
|
Deeply Unsupervised Patch Re-Identification for Pre-training Object
Detectors
|
Accepted to IEEE TPAMI
| null |
10.1109/TPAMI.2022.3164911
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised pre-training aims at learning transferable features that are
beneficial for downstream tasks. However, most state-of-the-art unsupervised
methods concentrate on learning global representations for image-level
classification tasks instead of discriminative local region representations,
which limits their transferability to region-level downstream tasks, such as
object detection. To improve the transferability of pre-trained features to
object detection, we present Deeply Unsupervised Patch Re-ID (DUPR), a simple
yet effective method for unsupervised visual representation learning. The patch
Re-ID task treats each patch as a pseudo-identity and contrastively
learns its correspondence in two views, enabling us to obtain discriminative
local features for object detection. The proposed patch Re-ID is then performed
in a deeply unsupervised manner, well suited to object detection, which usually
requires multilevel feature maps. Extensive experiments demonstrate that DUPR
outperforms state-of-the-art unsupervised pre-trainings and even the ImageNet
supervised pre-training on various downstream tasks related to object
detection.
|
[
{
"version": "v1",
"created": "Mon, 8 Mar 2021 15:13:59 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Apr 2022 09:02:09 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Ding",
"Jian",
""
],
[
"Xie",
"Enze",
""
],
[
"Xu",
"Hang",
""
],
[
"Jiang",
"Chenhan",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Luo",
"Ping",
""
],
[
"Xia",
"Gui-Song",
""
]
] |
new_dataset
| 0.982316 |
2105.14875
|
Jakaria Rabbi
|
Ovishake Sen, Mohtasim Fuad, MD. Nazrul Islam, Jakaria Rabbi, Mehedi
Masud, MD. Kamrul Hasan, Md. Abdul Awal, Awal Ahmed Fime, Md. Tahmid Hasan
Fuad, Delowar Sikder, and MD. Akil Raihan Iftee
|
Bangla Natural Language Processing: A Comprehensive Analysis of
Classical, Machine Learning, and Deep Learning Based Methods
|
Accepted in IEEE Access; 46 pages. Link:
https://ieeexplore.ieee.org/document/9751052 (Early Access - April 10, 2022)
| null |
10.1109/ACCESS.2022.3165563
| null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Bangla language is the seventh most spoken language, with 265 million
native and non-native speakers worldwide. However, English is the predominant
language for online resources and technical knowledge, journals, and
documentation. Consequently, many Bangla-speaking people, who have limited
command of English, face hurdles in utilizing English resources. To bridge the
gap between limited support and increasing demand, researchers conducted many
experiments and developed valuable tools and techniques to create and process
Bangla language materials. Many efforts are also ongoing to make it easy to use
the Bangla language in the online and technical domains. There are some review
papers to understand the past, present, and future Bangla Natural Language
Processing (BNLP) trends. The studies are mainly concentrated on the specific
domains of BNLP, such as sentiment analysis, speech recognition, optical
character recognition, and text summarization. There is an apparent scarcity of
resources that contain a comprehensive review of the recent BNLP tools and
methods. Therefore, in this paper, we present a thorough analysis of 75 BNLP
research papers and categorize them into 11 categories, namely Information
Extraction, Machine Translation, Named Entity Recognition, Parsing, Parts of
Speech Tagging, Question Answering System, Sentiment Analysis, Spam and Fake
Detection, Text Summarization, Word Sense Disambiguation, and Speech Processing
and Recognition. We study articles published between 1999 and 2021, 50% of
which were published after 2015. Furthermore, we discuss Classical,
Machine Learning, and Deep Learning approaches on different datasets while
addressing the limitations and the current and future trends of BNLP.
|
[
{
"version": "v1",
"created": "Mon, 31 May 2021 10:58:58 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Jun 2021 09:40:12 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Apr 2022 19:01:54 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Sen",
"Ovishake",
""
],
[
"Fuad",
"Mohtasim",
""
],
[
"Islam",
"MD. Nazrul",
""
],
[
"Rabbi",
"Jakaria",
""
],
[
"Masud",
"Mehedi",
""
],
[
"Hasan",
"MD. Kamrul",
""
],
[
"Awal",
"Md. Abdul",
""
],
[
"Fime",
"Awal Ahmed",
""
],
[
"Fuad",
"Md. Tahmid Hasan",
""
],
[
"Sikder",
"Delowar",
""
],
[
"Iftee",
"MD. Akil Raihan",
""
]
] |
new_dataset
| 0.999424 |
2107.01610
|
Wenshuo Guo
|
Wenshuo Guo and Fang-Wei Fu
|
Two Public-Key Cryptosystems Based on Expanded Gabidulin Codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents two public key cryptosystems based on the so-called
expanded Gabidulin codes, which are constructed by expanding Gabidulin codes
over the base field. Exploiting the fast decoder of Gabidulin codes, we propose
an efficient algorithm to decode these new codes when the noise vector
satisfies a certain condition. Additionally, these new codes have an excellent
error-correcting capability because of the optimality of their parent Gabidulin
codes. With different masking techniques, we give two encryption schemes by
using expanded Gabidulin codes in the McEliece setting. Being constructed over
the base field, these two proposals can prevent the existing structural attacks
using the Frobenius map. Based on the distinguisher for Gabidulin codes, we
propose a distinguisher for expanded Gabidulin codes by introducing the concept
of the so-called twisted Frobenius power. It turns out that the public code in
our proposals seems indistinguishable from random codes under this
distinguisher. Furthermore, compared to some other code-based cryptosystems,
our proposals have a clear advantage in public key representation without
relying on a cyclic or quasi-cyclic structure. To achieve 256-bit security,
for instance, a public key size of 37583 bytes is enough for our first
proposal, while around 1044992 bytes are needed for Classic McEliece, a
third-round candidate of the NIST PQC project.
|
[
{
"version": "v1",
"created": "Sun, 4 Jul 2021 12:52:18 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Sep 2021 06:54:49 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Apr 2022 08:13:57 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Guo",
"Wenshuo",
""
],
[
"Fu",
"Fang-Wei",
""
]
] |
new_dataset
| 0.994651 |
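The expansion operation underlying both proposals can be written out explicitly. The display below is our own sketch of the standard base-field expansion, in our notation rather than the paper's:

```latex
% Expanding a code over the base field (a sketch in our notation).
% Fix a basis \beta_1,\dots,\beta_m of \mathbb{F}_{q^m} over \mathbb{F}_q and
% write each codeword symbol as c_j = \sum_{i=1}^{m} c_{j,i}\,\beta_i,
% with c_{j,i} \in \mathbb{F}_q. Then
\operatorname{Exp}(\mathcal{C}) =
\bigl\{ (c_{1,1},\dots,c_{1,m},\,\dots,\,c_{n,1},\dots,c_{n,m})
  : (c_1,\dots,c_n) \in \mathcal{C} \subseteq \mathbb{F}_{q^m}^{\,n} \bigr\}
  \subseteq \mathbb{F}_q^{\,mn}.
```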
2108.09135
|
Chong Xiang
|
Chong Xiang, Saeed Mahloujifar, Prateek Mittal
|
PatchCleanser: Certifiably Robust Defense against Adversarial Patches
for Any Image Classifier
|
USENIX Security Symposium 2022; extended technical report
| null | null | null |
cs.CV cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The adversarial patch attack against image classification models aims to
inject adversarially crafted pixels within a restricted image region (i.e., a
patch) for inducing model misclassification. This attack can be realized in the
physical world by printing and attaching the patch to the victim object; thus,
it poses a real-world threat to computer vision systems. To counter this
threat, we design PatchCleanser as a certifiably robust defense against
adversarial patches. In PatchCleanser, we perform two rounds of pixel masking
on the input image to neutralize the effect of the adversarial patch. This
image-space operation makes PatchCleanser compatible with any state-of-the-art
image classifier for achieving high accuracy. Furthermore, we can prove that
PatchCleanser will always predict the correct class labels on certain images
against any adaptive white-box attacker within our threat model, achieving
certified robustness. We extensively evaluate PatchCleanser on the ImageNet,
ImageNette, CIFAR-10, CIFAR-100, SVHN, and Flowers-102 datasets and demonstrate
that our defense achieves similar clean accuracy as state-of-the-art
classification models and also significantly improves certified robustness from
prior works. Remarkably, PatchCleanser achieves 83.9% top-1 clean accuracy and
62.1% top-1 certified robust accuracy against a 2%-pixel square patch anywhere
on the image for the 1000-class ImageNet dataset.
|
[
{
"version": "v1",
"created": "Fri, 20 Aug 2021 12:09:33 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Apr 2022 18:52:45 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Xiang",
"Chong",
""
],
[
"Mahloujifar",
"Saeed",
""
],
[
"Mittal",
"Prateek",
""
]
] |
new_dataset
| 0.998618 |
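The two-round masking inference described here can be sketched in a few lines. The following is a simplified reconstruction from the abstract only; `classifier` and `apply_mask` are hypothetical callables, and the exact agreement rules of the certified procedure are more careful than this toy.

```python
# Simplified two-round (double) masking, reconstructed from the abstract.
# `classifier` and `apply_mask` are hypothetical helpers supplied by the user.
def double_masking_predict(image, masks, classifier, apply_mask):
    first = [classifier(apply_mask(image, m)) for m in masks]
    if len(set(first)) == 1:              # round 1: all one-mask predictions agree
        return first[0]
    majority = max(set(first), key=first.count)
    for m1, pred in zip(masks, first):
        if pred == majority:
            continue                      # round 2 runs only for disagreeing masks
        second = [classifier(apply_mask(apply_mask(image, m1), m2)) for m2 in masks]
        if len(set(second)) == 1:         # unanimous two-mask predictions settle it
            return second[0]
    return majority
```

The intuition: if the mask set is built so some mask always covers the whole patch, at least one masked view is adversary-free, and agreement across views certifies the label.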
2109.04127
|
Vladimir Dobrovolskii
|
Vladimir Dobrovolskii
|
Word-Level Coreference Resolution
|
Accepted to EMNLP-2021
|
In Proceedings of the 2021 Conference on Empirical Methods in
Natural Language Processing (pp. 7670-7675). Association for Computational
Linguistics 2021
|
10.18653/v1/2021.emnlp-main.605
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent coreference resolution models rely heavily on span representations to
find coreference links between word spans. As the number of spans is $O(n^2)$
in the length of text and the number of potential links is $O(n^4)$, various
pruning techniques are necessary to make this approach computationally
feasible. We propose instead to consider coreference links between individual
words rather than word spans and then reconstruct the word spans. This reduces
the complexity of the coreference model to $O(n^2)$ and allows it to consider
all potential mentions without pruning any of them out. We also demonstrate
that, with these changes, SpanBERT for coreference resolution will be
significantly outperformed by RoBERTa. While being highly efficient, our model
performs competitively with recent coreference resolution systems on the
OntoNotes benchmark.
|
[
{
"version": "v1",
"created": "Thu, 9 Sep 2021 09:26:02 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Dobrovolskii",
"Vladimir",
""
]
] |
new_dataset
| 0.999269 |
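The complexity argument in the abstract can be made concrete: with $n$ words there are $O(n^2)$ candidate spans, hence $O(n^4)$ span pairs, whereas linking individual words needs only $O(n^2)$ pairs:

```latex
% Search-space comparison implied by the abstract (our arrangement).
\underbrace{O(n^2)}_{\text{candidate spans}} \times
\underbrace{O(n^2)}_{\text{antecedent spans}} = O(n^4)
\qquad\longrightarrow\qquad
\underbrace{O(n)}_{\text{words}} \times
\underbrace{O(n)}_{\text{antecedent words}} = O(n^2).
```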
2110.05604
|
Jon Arrizabalaga
|
Jon Arrizabalaga, Niels van Duijkeren, Markus Ryll, Ralph Lange
|
A caster-wheel-aware MPC-based motion planner for mobile robotics
| null | null |
10.1109/ICAR53236.2021.9659478
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Differential drive mobile robots often use one or more caster wheels for
balance. Caster wheels are appreciated for their ability to turn in any
direction almost on the spot, allowing the robot to do the same and thereby
greatly simplifying the motion planning and control. However, in aligning the
caster wheels to the intended direction of motion they produce a so-called bore
torque. As a result, additional motor torque is required to move the robot,
which may in some cases exceed the motor capacity or compromise the motion
planner's accuracy. Instead of taking a decoupled approach, where the
navigation and disturbance rejection algorithms are separated, we propose to
embed the caster wheel awareness into the motion planner. To do so, we present
a caster-wheel-aware term that is compatible with MPC-based control methods,
leveraging the existence of caster wheels in the motion planning stage. As a
proof of concept, this term is combined with a model-predictive trajectory
tracking controller. Since this method requires knowledge of the caster wheel
angle and rolling speed, an observer that estimates these states is also
presented. The efficacy of the approach is shown in experiments on an
intralogistics robot and compared against a decoupled bore-torque reduction
approach and a caster-wheel agnostic controller. Moreover, the experiments show
that the presented caster wheel estimator performs sufficiently well and
therefore avoids the need for additional sensors.
|
[
{
"version": "v1",
"created": "Mon, 11 Oct 2021 20:50:52 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Arrizabalaga",
"Jon",
""
],
[
"van Duijkeren",
"Niels",
""
],
[
"Ryll",
"Markus",
""
],
[
"Lange",
"Ralph",
""
]
] |
new_dataset
| 0.99957 |
2111.03735
|
Hang Zhou
|
Claire Mathieu and Hang Zhou
|
A PTAS for Capacitated Vehicle Routing on Trees
|
Accepted for publication at ICALP 2022
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We give a polynomial time approximation scheme (PTAS) for the unit demand
capacitated vehicle routing problem (CVRP) on trees, for the entire range of
the tour capacity. The result extends to the splittable CVRP.
|
[
{
"version": "v1",
"created": "Fri, 5 Nov 2021 21:38:17 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 00:15:08 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Apr 2022 11:52:28 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Mathieu",
"Claire",
""
],
[
"Zhou",
"Hang",
""
]
] |
new_dataset
| 0.9986 |
2111.07524
|
Paloma Sodhi
|
Paloma Sodhi, Michael Kaess, Mustafa Mukadam, Stuart Anderson
|
PatchGraph: In-hand tactile tracking with learned surface normals
|
Accepted to IEEE Intl. Conf. on Robotics and Automation (ICRA) 2022.
7 pages, 8 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We address the problem of tracking 3D object poses from touch during in-hand
manipulations. Specifically, we look at tracking small objects using
vision-based tactile sensors that provide high-dimensional tactile image
measurements at the point of contact. While prior work has relied on a priori
information about the object being localized, we remove this requirement. Our
key insight is that an object is composed of several local surface patches,
each informative enough to achieve reliable object tracking. Moreover, we can
recover the geometry of this local patch online by extracting local surface
normal information embedded in each tactile image. We propose a novel two-stage
approach. First, we learn a mapping from tactile images to surface normals
using an image translation network. Second, we use these surface normals within
a factor graph to both reconstruct a local patch map and use it to infer 3D
object poses. We demonstrate reliable object tracking for over $100$ contact
sequences across unique shapes with four objects in simulation and two objects
in the real-world. Supplementary video: https://youtu.be/FHks--haOGY
|
[
{
"version": "v1",
"created": "Mon, 15 Nov 2021 03:54:06 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2022 14:01:33 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Sodhi",
"Paloma",
""
],
[
"Kaess",
"Michael",
""
],
[
"Mukadam",
"Mustafa",
""
],
[
"Anderson",
"Stuart",
""
]
] |
new_dataset
| 0.996206 |
2112.14996
|
Reijo Jaakkola
|
Reijo Jaakkola
|
An Extension of Trakhtenbrot's Theorem
|
Changed the title and improved the presentation
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Trakhtenbrot's celebrated theorem states that the set of finitely valid
sentences of first-order logic is not computably enumerable. In this note we
extend this theorem by proving that the finite satisfiability problem of
any fragment of first-order logic is RE-complete, as long as it has an
effective syntax, it is equi-expressive with first-order logic over finite
models and it is effectively closed under conjunction.
|
[
{
"version": "v1",
"created": "Thu, 30 Dec 2021 10:17:59 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Apr 2022 19:17:39 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Jaakkola",
"Reijo",
""
]
] |
new_dataset
| 0.998586 |
2201.05051
|
Marcely Zanon Boito
|
Marcely Zanon Boito, Fethi Bougares, Florentin Barbier, Souhir
Gahbiche, Loïc Barrault, Mickael Rouvier, Yannick Estève
|
Speech Resources in the Tamasheq Language
|
Accepted to LREC 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper we present two datasets for Tamasheq, a developing language
mainly spoken in Mali and Niger. These two datasets were made available for the
IWSLT 2022 low-resource speech translation track, and they consist of
collections of radio recordings from daily broadcast news in Niger (Studio
Kalangou) and Mali (Studio Tamani). We share (i) a massive amount of unlabeled
audio data (671 hours) in five languages: French from Niger, Fulfulde, Hausa,
Tamasheq and Zarma, and (ii) a smaller 17-hour parallel corpus of audio
recordings in Tamasheq, with utterance-level translations in the French
language. All this data is shared under the Creative Commons BY-NC-ND 3.0
license. We hope these resources will inspire the speech community to develop
and benchmark models using the Tamasheq language.
|
[
{
"version": "v1",
"created": "Thu, 13 Jan 2022 16:24:06 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jan 2022 09:26:49 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Apr 2022 14:31:52 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Boito",
"Marcely Zanon",
""
],
[
"Bougares",
"Fethi",
""
],
[
"Barbier",
"Florentin",
""
],
[
"Gahbiche",
"Souhir",
""
],
[
"Barrault",
"Loïc",
""
],
[
"Rouvier",
"Mickael",
""
],
[
"Estève",
"Yannick",
""
]
] |
new_dataset
| 0.999131 |
2201.12285
|
Karthik Sivarama Krishnan
|
Karthik Sivarama Krishnan and Koushik Sivarama Krishnan
|
Benchmarking Conventional Vision Models on Neuromorphic Fall Detection
and Action Recognition Dataset
|
6 pages, 2 figures
| null |
10.1109/CCWC54503.2022.9720737
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neuromorphic vision-based sensors are gaining popularity in recent years with
their ability to capture spatio-temporal events with low-power sensing. These
sensors record events or spikes rather than the full frames of traditional
cameras, which helps preserve the privacy of the subject being recorded. These events are captured
as per-pixel brightness changes and the output data stream is encoded with
time, location, and pixel intensity change information. This paper proposes and
benchmarks the performance of fine-tuned conventional vision models on
neuromorphic human action recognition and fall detection datasets. The
spatio-temporal event streams from the Dynamic Vision Sensing cameras are
encoded into a standard sequence of image frames. These video frames are used for
benchmarking conventional deep learning-based architectures. In this proposed
approach, we fine-tuned the state-of-the-art vision models for this Dynamic
Vision Sensing (DVS) application and named these models as DVS-R2+1D, DVS-CSN,
DVS-C2D, DVS-SlowFast, DVS-X3D, and DVS-MViT. Upon comparing the performance of
these models, we see that the current state-of-the-art MViT-based architecture
DVS-MViT outperforms all the other models with an accuracy of 0.958 and an F-1
score of 0.958. The second best is DVS-C2D with an accuracy of 0.916 and an
F-1 score of 0.916. The third and fourth best are DVS-R2+1D and DVS-SlowFast,
with accuracies of 0.875 and 0.833 and F-1 scores of 0.875 and 0.861, respectively.
DVS-CSN and DVS-X3D were the least performing models with an accuracy of 0.708
and 0.625 and an F1 score of 0.722 and 0.625 respectively.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 17:54:33 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Krishnan",
"Karthik Sivarama",
""
],
[
"Krishnan",
"Koushik Sivarama",
""
]
] |
new_dataset
| 0.999448 |
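The preprocessing step this abstract relies on, encoding an event stream into a fixed-rate sequence of frames, can be sketched with plain numpy. The `(t, x, y, polarity)` layout and the 33 ms window below are our assumptions, not the paper's stated settings.

```python
# Toy event-stream-to-frames encoder for DVS data.
import numpy as np

def events_to_frames(events, h, w, dt=33e3):
    """events: (N, 4) array of (t_us, x, y, polarity in {-1, +1})."""
    t0, t1 = events[0, 0], events[-1, 0]
    n_frames = int(np.ceil((t1 - t0) / dt))
    frames = np.zeros((n_frames, h, w), dtype=np.float32)
    # Assign each event to its time bin, then accumulate signed polarity per pixel.
    idx = ((events[:, 0] - t0) / dt).astype(int).clip(0, n_frames - 1)
    np.add.at(frames,
              (idx, events[:, 2].astype(int), events[:, 1].astype(int)),
              events[:, 3])
    return frames
```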
2203.03157
|
Nitish Bhardwaj
|
Nitish Bhardwaj, Dhornala Bharadwaj, Alpana Dubey
|
SingleSketch2Mesh : Generating 3D Mesh model from Sketch
|
Working on some updates
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sketching is an important activity in any design process. Designers and
stakeholders share their ideas through hand-drawn sketches. These sketches are
further used to create 3D models. Current methods to generate 3D models from
sketches are either manual or tightly coupled with 3D modeling platforms.
Therefore, it requires users to have an experience of sketching on such
platform. Moreover, most of the existing approaches are based on geometric
manipulation and thus cannot be generalized. We propose a novel AI based
ensemble approach, SingleSketch2Mesh, for generating 3D models from hand-drawn
sketches. Our approach is based on Generative Networks and Encoder-Decoder
Architecture to generate 3D mesh model from a hand-drawn sketch. We evaluate
our solution with existing solutions. Our approach outperforms existing
approaches on both - quantitative and qualitative evaluation criteria.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 06:30:36 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Mar 2022 07:15:13 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Apr 2022 18:52:20 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Bhardwaj",
"Nitish",
""
],
[
"Bharadwaj",
"Dhornala",
""
],
[
"Dubey",
"Alpana",
""
]
] |
new_dataset
| 0.999281 |
2203.04090
|
Ehud Shapiro
|
Ehud Shapiro and Nimrod Talmon
|
Foundations for Grassroots Democratic Metaverse
| null | null | null | null |
cs.CY cs.AI cs.DC cs.MA cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While the physical lives of many of us are in democracies (one person, one
vote - e.g., the EU and the US), our digital lives are mostly in autocracies
(one person, all votes - e.g., Facebook). Cryptocurrencies promise liberation
but stop short, at plutocracy (one coin, one vote). What would it take for us
to live our digital lives in a digital democracy? This paper offers a vision, a
theoretical framework, and an architecture for a grassroots network of
autonomous, people-owned, people-operated, and people-governed digital
communities, namely a grassroots democratic metaverse. It also charts a roadmap
towards realizing it, and identifies unexplored territory for further research.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 12:16:09 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Mar 2022 15:49:12 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Apr 2022 16:48:44 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Shapiro",
"Ehud",
""
],
[
"Talmon",
"Nimrod",
""
]
] |
new_dataset
| 0.995799 |
2203.07183
|
Edoardo Giusto PhD
|
Daniel Oliveira, Edoardo Giusto, Emanuele Dri, Nadir Casciola, Betis
Baheri, Qiang Guan, Bartolomeo Montrucchio, Paolo Rech
|
QuFI: a Quantum Fault Injector to Measure the Reliability of Qubits and
Quantum Circuits
|
13 pages, 11 figures. To be published in the 52nd Annual IEEE/IFIP
International Conference on Dependable Systems and Networks (DSN'22)
| null | null | null |
cs.ET quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum computing is a new technology that is expected to revolutionize the
computation paradigm in the next few years. Qubits exploit the properties of
quantum physics to increase the parallelism and speed of computation.
Unfortunately, besides being intrinsically noisy, qubits have also been shown
to be highly susceptible to external sources of faults, such as ionizing
radiation. The latest discoveries highlight a much higher radiation sensitivity
of qubits than traditional transistors and identify a much more complex fault
model than bit-flip. We propose a framework to identify the quantum circuits
sensitivity to radiation-induced faults and the probability for a fault in a
qubit to propagate to the output. Based on the latest studies and radiation
experiments performed on real quantum machines, we model the transient faults
in a qubit as a phase shift with a parametrized magnitude. Additionally, our
framework can inject multiple qubit faults, tuning the phase shift magnitude
based on the proximity of the qubit to the particle strike location. As we show
in the paper, the proposed fault injector is highly flexible, and it can be
used on both quantum circuit simulators and real quantum machines. We report
the findings of more than 285M injections on the Qiskit simulator and 53K
injections on real IBM machines. We consider three quantum algorithms and
identify the faults and qubits that are more likely to impact the output. We
also consider the fault propagation dependence on the circuit scale, showing
that the reliability profile for some quantum algorithms is scale-dependent,
with increased impact from radiation-induced faults as we increase the number
of qubits. Finally, we also consider multi-qubit faults, showing that they are
much more critical than single faults. The fault injector and the data
presented in this paper are available in a public repository to allow further
analysis.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 15:23:29 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Oliveira",
"Daniel",
""
],
[
"Giusto",
"Edoardo",
""
],
[
"Dri",
"Emanuele",
""
],
[
"Casciola",
"Nadir",
""
],
[
"Baheri",
"Betis",
""
],
[
"Guan",
"Qiang",
""
],
[
"Montrucchio",
"Bartolomeo",
""
],
[
"Rech",
"Paolo",
""
]
] |
new_dataset
| 0.998723 |
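The fault model described here (a transient fault as a parametrized phase shift on a qubit) is easy to illustrate with Qiskit. The toy below only appends an `rz` rotation to a Bell circuit; QuFI itself places faults at chosen circuit locations and scales multi-qubit faults by proximity to the strike, which this sketch does not model.

```python
# Toy illustration of a phase-shift fault, in the spirit of the described model.
import numpy as np
from qiskit import QuantumCircuit

def inject_phase_fault(circuit, qubit, theta):
    faulty = circuit.copy()
    faulty.rz(theta, qubit)   # transient fault modeled as a parametrized phase shift
    return faulty

bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
faulty_bell = inject_phase_fault(bell, qubit=0, theta=np.pi / 4)
```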
2203.12165
|
Wondimu Gebre Dikubab
|
Wondimu Dikubab, Dingkang Liang, Minghui Liao, Xiang Bai
|
Comprehensive Benchmark Datasets for Amharic Scene Text Detection and
Recognition
|
2 pages, 1 figure, 1 supplementary document
| null |
10.1007/s11432-021-3447-9
| null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Ethiopic/Amharic script is one of the oldest African writing systems, which
serves at least 23 languages (e.g., Amharic, Tigrinya) in East Africa for more
than 120 million people. The Amharic writing system, Abugida, has 282
syllables, 15 punctuation marks, and 20 numerals. The Amharic syllabic matrix
is derived from 34 base graphemes/consonants by adding up to 12 appropriate
diacritics or vocalic markers to the characters. The syllables with a common
consonant or vocalic markers are likely to be visually similar and challenge
text recognition tasks. In this work, we presented the first comprehensive
public datasets named HUST-ART, HUST-AST, ABE, and Tana for Amharic script
detection and recognition in the natural scene. We have also conducted
extensive experiments to evaluate the performance of state-of-the-art methods
in detecting and recognizing Amharic scene text on our datasets. The evaluation
results demonstrate the robustness of our datasets for benchmarking and their
potential to promote the development of robust Amharic script detection and
recognition algorithms. Consequently, the outcome will benefit people in East
Africa, including diplomats from several countries and international
communities.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 03:19:35 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Dikubab",
"Wondimu",
""
],
[
"Liang",
"Dingkang",
""
],
[
"Liao",
"Minghui",
""
],
[
"Bai",
"Xiang",
""
]
] |
new_dataset
| 0.999851 |
2203.12870
|
Yan Xu
|
Yan Xu, Kwan-Yee Lin, Guofeng Zhang, Xiaogang Wang, Hongsheng Li
|
RNNPose: Recurrent 6-DoF Object Pose Refinement with Robust
Correspondence Field Estimation and Pose Optimization
|
Accepted to CVPR 2022
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
6-DoF object pose estimation from a monocular image is challenging, and a
post-refinement procedure is generally needed for high-precision estimation. In
this paper, we propose a framework based on a recurrent neural network (RNN)
for object pose refinement, which is robust to erroneous initial poses and
occlusions. During the recurrent iterations, object pose refinement is
formulated as a non-linear least squares problem based on the estimated
correspondence field (between a rendered image and the observed image). The
problem is then solved by a differentiable Levenberg-Marquardt (LM) algorithm
enabling end-to-end training. The correspondence field estimation and pose
refinement are conducted alternatively in each iteration to recover the object
poses. Furthermore, to improve the robustness to occlusion, we introduce a
consistency-check mechanism based on the learned descriptors of the 3D model
and observed 2D images, which downweights the unreliable correspondences during
pose optimization. Extensive experiments on LINEMOD, Occlusion-LINEMOD, and
YCB-Video datasets validate the effectiveness of our method and demonstrate
state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 06:24:55 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2022 11:11:07 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Apr 2022 15:59:21 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Xu",
"Yan",
""
],
[
"Lin",
"Kwan-Yee",
""
],
[
"Zhang",
"Guofeng",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.998513 |
2203.14267
|
Vitthal Bhandari
|
Vitthal Bhandari and Poonam Goyal
|
bitsa_nlp@LT-EDI-ACL2022: Leveraging Pretrained Language Models for
Detecting Homophobia and Transphobia in Social Media Comments
|
6 pages, accepted at the LT-EDI workshop at ACL 2022. Camera-ready version;
addressed all reviewer comments, added baseline methods and an ablation study
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Online social networks are ubiquitous and user-friendly. Nevertheless, it is
vital to detect and moderate offensive content to maintain decency and empathy.
However, mining social media texts is a complex task since users don't adhere
to any fixed patterns. Comments can be written in any combination of languages
and many of them may be low-resource.
In this paper, we present our system for the LT-EDI shared task on detecting
homophobia and transphobia in social media comments. We experiment with a
number of monolingual and multilingual transformer based models such as mBERT
along with a data augmentation technique for tackling class imbalance. Such
pretrained large models have recently shown tremendous success on a variety of
benchmark tasks in natural language processing. We observe their performance on
a carefully annotated, real life dataset of YouTube comments in English as well
as Tamil.
Our submission achieved ranks 9, 6 and 3 with macro-averaged F1-scores of
0.42, 0.64 and 0.58 in the English, Tamil and Tamil-English subtasks,
respectively. The code for the system has been open sourced.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 10:15:34 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Apr 2022 15:07:38 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Bhandari",
"Vitthal",
""
],
[
"Goyal",
"Poonam",
""
]
] |
new_dataset
| 0.996448 |
2203.15099
|
Santiago Ontanon
|
Santiago Ontanon, Joshua Ainslie, Vaclav Cvicek and Zachary Fisher
|
LogicInference: A New Dataset for Teaching Logical Inference to seq2seq
Models
|
Accepted at ICLR 2022 OSC workshop (v3 contains updated results after
fixing a problem in dataset generation)
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning models such as Transformers or LSTMs struggle with tasks
that are compositional in nature such as those involving reasoning/inference.
Although many datasets exist to evaluate compositional generalization, when it
comes to evaluating inference abilities, options are more limited. This paper
presents LogicInference, a new dataset to evaluate the ability of models to
perform logical inference. The dataset focuses on inference using propositional
logic and a small subset of first-order logic, represented both in semi-formal
logical notation, as well as in natural language. We also report results
using a collection of machine learning models to establish an initial
baseline on this dataset.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 21:13:22 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 00:01:11 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Apr 2022 13:43:04 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Ontanon",
"Santiago",
""
],
[
"Ainslie",
"Joshua",
""
],
[
"Cvicek",
"Vaclav",
""
],
[
"Fisher",
"Zachary",
""
]
] |
new_dataset
| 0.999848 |
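The kind of (input, target) pair the dataset contains can be illustrated with one propositional pattern. The templates below are invented for illustration and are not drawn from the released generation code.

```python
# Illustrative generator for one propositional inference pattern (modus ponens).
import random

def modus_ponens_example(props=("p", "q", "r")):
    a, b = random.sample(props, 2)
    premise = f"{a} -> {b}. {a}."          # semi-formal logical notation
    question = "What can be inferred?"
    answer = f"{b}, by modus ponens."
    return {"input": f"{premise} {question}", "target": answer}

print(modus_ponens_example())
# e.g. {'input': 'q -> p. q. What can be inferred?', 'target': 'p, by modus ponens.'}
```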
2204.00697
|
Abhay Singh Bhadoriya
|
Abhay Singh Bhadoriya, Christopher Montez, Sivakumar Rathinam, Swaroop
Darbha, David W. Casbeer, and Satyanarayana G. Manyam
|
Assisted Shortest Path Planning for a Convoy through a Repairable
Network
| null | null | null | null |
cs.RO math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we consider a multi-agent path planning problem in a
partially impeded environment. The impeded environment is represented by a
graph with select road segments (edges) in disrepair impeding vehicular
movement in the road network. A convoy wishes to travel from a starting
location to a destination while minimizing some accumulated cost. The convoy
may traverse an impeded edge at an additional cost (associated with repairing
the edge) compared to an unimpeded one. A second vehicle, referred to as a service
vehicle, is simultaneously deployed with the convoy. The service vehicle
assists the convoy by repairing an edge, reducing the cost for the convoy to
traverse that edge. The convoy is permitted to wait at any vertex to allow the
service vehicle to complete repairing an edge. The service vehicle is permitted
to terminate its path at any vertex. The goal is then to find a pair of paths
so the convoy reaches its destination while minimizing the total time (cost)
the two vehicles are active, including any time the convoy waits. We refer to
this problem as the Assisted Shortest Path Problem (ASPP). We present a
generalized permanent labeling algorithm to find an optimal solution for the
ASPP. We also introduce additional modifications to the labeling algorithm to
significantly improve the computation time and refer to the modified labeling
algorithm as $GPLA^*$. Computational results are presented to illustrate the
effectiveness of $GPLA^*$ in solving the ASPP. We then give concluding remarks
and briefly discuss potential variants of the ASPP for future work.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 21:10:34 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2022 01:06:04 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Bhadoriya",
"Abhay Singh",
""
],
[
"Montez",
"Christopher",
""
],
[
"Rathinam",
"Sivakumar",
""
],
[
"Darbha",
"Swaroop",
""
],
[
"Casbeer",
"David W.",
""
],
[
"Manyam",
"Satyanarayana G.",
""
]
] |
new_dataset
| 0.992752 |
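The paper's $GPLA^*$ generalizes permanent labeling, the classic label-setting scheme behind Dijkstra's algorithm. As context, the sketch below shows only the base label-setting loop; the ASPP-specific labels, which also track the service vehicle's state and repaired edges, are omitted.

```python
# Base permanent-labeling (label-setting) loop that GPLA* generalizes.
import heapq

def permanent_labeling(graph, source):
    """graph: {u: [(v, cost), ...]}; returns shortest costs from `source`."""
    labels, pq, permanent = {source: 0.0}, [(0.0, source)], set()
    while pq:
        c, u = heapq.heappop(pq)
        if u in permanent:
            continue
        permanent.add(u)                       # label at u becomes permanent
        for v, w in graph.get(u, ()):
            if c + w < labels.get(v, float("inf")):
                labels[v] = c + w              # tentative label improved
                heapq.heappush(pq, (c + w, v))
    return labels
```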
2204.02121
|
Calum Heggan
|
Calum Heggan, Sam Budgett, Timothy Hospedales, Mehrdad Yaghoobi
|
MetaAudio: A Few-Shot Audio Classification Benchmark
|
9 pages with 1 figure and 2 main results tables. V1 Preprint
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Currently available benchmarks for few-shot learning (machine learning with
few training examples) are limited in the domains they cover, primarily
focusing on image classification. This work aims to alleviate this reliance on
image-based benchmarks by offering the first comprehensive, public and fully
reproducible audio based alternative, covering a variety of sound domains and
experimental settings. We compare the few-shot classification performance of a
variety of techniques on seven audio datasets (spanning environmental sounds to
human-speech). Extending this, we carry out in-depth analyses of joint training
(where all datasets are used during training) and cross-dataset adaptation
protocols, establishing the possibility of a generalised audio few-shot
classification algorithm. Our experimentation shows gradient-based
meta-learning methods such as MAML and Meta-Curvature consistently outperform
both metric and baseline methods. We also demonstrate that the joint training
routine helps overall generalisation for the environmental sound databases
included, as well as being a somewhat effective method of tackling the
cross-dataset/domain setting.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 11:33:44 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Apr 2022 09:53:16 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Heggan",
"Calum",
""
],
[
"Budgett",
"Sam",
""
],
[
"Hospedales",
"Timothy",
""
],
[
"Yaghoobi",
"Mehrdad",
""
]
] |
new_dataset
| 0.999542 |
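The experimental unit in such few-shot benchmarks is the N-way K-shot episode. The sampler below is a generic illustration; the `class -> list of clips` layout is an assumption, not MetaAudio's actual loader.

```python
# Generic N-way K-shot episode sampler for few-shot classification.
import random

def sample_episode(data, n_way=5, k_shot=1, q_queries=5):
    """data: dict mapping class name -> list of clips (or features)."""
    classes = random.sample(list(data), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        clips = random.sample(data[cls], k_shot + q_queries)
        support += [(c, label) for c in clips[:k_shot]]   # few labeled examples
        query += [(c, label) for c in clips[k_shot:]]     # evaluation examples
    return support, query
```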
2204.03688
|
Igor Krashenyi
|
Tetiana Martyniuk, Orest Kupyn, Yana Kurlyak, Igor Krashenyi, Jiři
Matas, Viktoriia Sharmanska
|
DAD-3DHeads: A Large-scale Dense, Accurate and Diverse Dataset for 3D
Head Alignment from a Single Image
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present DAD-3DHeads, a dense and diverse large-scale dataset, and a robust
model for 3D Dense Head Alignment in the wild. It contains annotations of over
3.5K landmarks that accurately represent 3D head shape compared to the
ground-truth scans. The data-driven model, DAD-3DNet, trained on our dataset,
learns shape, expression, and pose parameters, and performs 3D reconstruction
of a FLAME mesh. The model also incorporates a landmark prediction branch to
take advantage of rich supervision and co-training of multiple related tasks.
Experimentally, DAD-3DNet outperforms or is comparable to the state-of-the-art
models in (i) 3D Head Pose Estimation on AFLW2000-3D and BIWI, (ii) 3D Face
Shape Reconstruction on NoW and Feng, and (iii) 3D Dense Head Alignment and 3D
Landmarks Estimation on DAD-3DHeads dataset. Finally, the diversity of
DAD-3DHeads in camera angles, facial expressions, and occlusions enables a
benchmark to study in-the-wild generalization and robustness to distribution
shifts. The dataset webpage is https://p.farm/research/dad-3dheads.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 18:40:51 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2022 04:55:00 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Martyniuk",
"Tetiana",
""
],
[
"Kupyn",
"Orest",
""
],
[
"Kurlyak",
"Yana",
""
],
[
"Krashenyi",
"Igor",
""
],
[
"Matas",
"Jiři",
""
],
[
"Sharmanska",
"Viktoriia",
""
]
] |
new_dataset
| 0.999885 |
2204.04290
|
Diego González Morín
|
Diego Gonzalez Morin, Manuel J. López Morales, Pablo Pérez, Ana
García Armada, Alvaro Villegas
|
FikoRE: 5G and Beyond RAN Emulator for Application Level Experimentation
and Prototyping
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Novel and cutting-edge use cases have arisen since the first deployments of
the fifth generation of telecommunication networks (5G). There are plenty of
well-though optimally design 5G simulators and emulators which allow
telecommunication technologies engineers and researchers to thoroughly study
and test the network. However, the 5G ecosystem is not only limited to the
network itself: a fast development of 5G-specific use cases can considerably
accelerate the development of telecommunication technologies. We present
FikoRE, our real-time Radio Access Networks (RAN) emulator carefully designed
for application-level experimentation and prototyping. Its modularity and
straightforward implementation allow multidisciplinary user to rapidly use or
even modify it to test their own applications. In this article, we present
FikoRE's architecture accompanied with relevant validation experiments and
results.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 20:44:19 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Morin",
"Diego Gonzalez",
""
],
[
"Morales",
"ManuelJ. López",
""
],
[
"Pérez",
"Pablo",
""
],
[
"García Armada",
"Ana",
""
],
[
"Villegas",
"Alvaro",
""
]
] |
new_dataset
| 0.98724 |
2204.04306
|
Bonaventure F. P. Dossou
|
Chris C. Emezue, and Bonaventure F. P. Dossou
|
MMTAfrica: Multilingual Machine Translation for African Languages
|
WMT Shared Task, EMNLP 2021 (version 2)
|
Proceedings of the Sixth Conference on Machine Translation (2021)
398-411, Association for Computational Linguistics
| null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we focus on the task of multilingual machine translation for
African languages and describe our contribution in the 2021 WMT Shared Task:
Large-Scale Multilingual Machine Translation. We introduce MMTAfrica, the first
many-to-many multilingual translation system for six African languages: Fon
(fon), Igbo (ibo), Kinyarwanda (kin), Swahili/Kiswahili (swa), Xhosa (xho), and
Yoruba (yor) and two non-African languages: English (eng) and French (fra). For
multilingual translation concerning African languages, we introduce a novel
backtranslation and reconstruction objective, BT&REC, inspired by the random
online backtranslation and T5 modeling frameworks respectively, to effectively
leverage monolingual data. Additionally, we report improvements from MMTAfrica
over the FLORES 101 benchmarks (spBLEU gains ranging from $+0.58$ in Swahili to
French to $+19.46$ in French to Xhosa). We release our dataset and code source
at https://github.com/edaiofficial/mmtafrica.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 21:42:44 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Emezue",
"Chris C.",
""
],
[
"Dossou",
"Bonaventure F. P.",
""
]
] |
new_dataset
| 0.998825 |
2204.04380
|
Xiaoyan Cao
|
Meihong Wu, Xiaoyan Cao, Xiaoyu Cao, Shihui Guo
|
A dataset of ant colonies motion trajectories in indoor and outdoor
scenes for social cluster behavior study
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion and interaction of social insects (such as ants) have been studied by
many researchers to understand the clustering mechanism. Most studies in the
field of ant behavior have only focused on indoor environments, while outdoor
environments are still underexplored. In this paper, we collect 10 videos of
ant colonies from different indoor and outdoor scenes. We also develop an
image-sequence annotation tool named VisualMarkData, which enables us to provide
annotations of ants in the video. In all 5354 frames, the location information
and the identification number of each ant are recorded for a total of 712 ants
and 114112 annotations. Moreover, we provide visual analysis tools to assess
and validate the technical quality and reproducibility of our data. It is hoped
that this dataset will contribute to a deeper exploration of ant colony
behavior.
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 03:49:55 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Wu",
"Meihong",
""
],
[
"Cao",
"Xiaoyan",
""
],
[
"Cao",
"Xiaoyu",
""
],
[
"Guo",
"Shihui",
""
]
] |
new_dataset
| 0.999741 |
2204.04435
|
H. Umut Suluhan
|
H. Umut Suluhan, Hasan F. Ates, Bahadir K. Gunturk
|
HSTR-Net: High Spatio-Temporal Resolution Video Generation For Wide Area
Surveillance
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Wide area surveillance has many applications and tracking of objects under
observation is an important task, which often needs high spatio-temporal
resolution (HSTR) video for better precision. This paper presents the usage of
multiple video feeds for the generation of HSTR video as an extension of
reference based super resolution (RefSR). One feed captures video at high
spatial resolution with low frame rate (HSLF) while the other captures low
spatial resolution and high frame rate (LSHF) video simultaneously for the same
scene. The main purpose is to create an HSTR video from the fusion of HSLF and
LSHF videos. In this paper we propose an end-to-end trainable deep network that
performs optical flow estimation and frame reconstruction by combining inputs
from both video feeds. The proposed architecture provides significant
improvement over existing video frame interpolation and RefSR techniques in
terms of objective PSNR and SSIM metrics.
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 09:23:58 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Suluhan",
"H. Umut",
""
],
[
"Ates",
"Hasan F.",
""
],
[
"Gunturk",
"Bahadir K.",
""
]
] |
new_dataset
| 0.997265 |
2204.04462
|
Wenshuai Hu
|
Heng-Chao Li, Wen-Shuai Hu, Wei Li, Jun Li, Qian Du, and Antonio Plaza
|
A3CLNN: Spatial, Spectral and Multiscale Attention ConvLSTM Neural
Network for Multisource Remote Sensing Data Classification
|
16 pages, 10 figures
|
IEEE Transactions on Neural Networks and Learning Systems, vol.
33, no. 2, pp. 747-761, Feb. 2022
|
10.1109/TNNLS.2020.3028945
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of effectively exploiting the information from multiple data
sources has become a relevant but challenging research topic in remote sensing. In this
paper, we propose a new approach to exploit the complementarity of two data
sources: hyperspectral images (HSIs) and light detection and ranging (LiDAR)
data. Specifically, we develop a new dual-channel spatial, spectral and
multiscale attention convolutional long short-term memory neural network
(called dual-channel A3CLNN) for feature extraction and classification of
multisource remote sensing data. Spatial, spectral and multiscale attention
mechanisms are first designed for HSI and LiDAR data in order to learn
spectral- and spatial-enhanced feature representations, and to represent
multiscale information for different classes. In the designed fusion network, a
novel composite attention learning mechanism (combined with a three-level
fusion strategy) is used to fully integrate the features in these two data
sources. Finally, inspired by the idea of transfer learning, a novel stepwise
training strategy is designed to yield a final classification result. Our
experimental results, conducted on several multisource remote sensing data
sets, demonstrate that the newly proposed dual-channel A3CLNN exhibits better
feature representation ability (leading to more competitive classification
performance) than other state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 12:43:32 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Li",
"Heng-Chao",
""
],
[
"Hu",
"Wen-Shuai",
""
],
[
"Li",
"Wei",
""
],
[
"Li",
"Jun",
""
],
[
"Du",
"Qian",
""
],
[
"Plaza",
"Antonio",
""
]
] |
new_dataset
| 0.978528 |
2204.04481
|
Manex Agirrezabal
|
Manex Agirrezabal, Janek Amann
|
KUCST@LT-EDI-ACL2022: Detecting Signs of Depression from Social Media
Text
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper we present our approach for detecting signs of depression from
social media text. Our model relies on word unigrams, part-of-speech tags,
readabilitiy measures and the use of first, second or third person and the
number of words. Our best model obtained a macro F1-score of 0.439 and ranked
25th, out of 31 teams. We further take advantage of the interpretability of the
Logistic Regression model and we make an attempt to interpret the model
coefficients with the hope that these will be useful for further research on
the topic.
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 14:27:13 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Agirrezabal",
"Manex",
""
],
[
"Amann",
"Janek",
""
]
] |
new_dataset
| 0.979649 |
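The feature-plus-classifier setup named in the abstract (word unigrams feeding a logistic regression) can be reconstructed minimally with scikit-learn. The sketch omits the POS, readability, person, and length features, and all hyperparameters are placeholders.

```python
# Minimal sketch of the described setup: word-unigram counts + logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 1)),     # word unigrams only
    LogisticRegression(max_iter=1000),
)
# model.fit(train_texts, train_labels)       # train_texts / train_labels: your data
# Coefficient inspection, as the paper attempts:
# vocab = model[0].get_feature_names_out(); weights = model[1].coef_
```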
2204.04497
|
Zhuofeng Wu
|
Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. G. Vinod
Vydiswaran, Hao Ma
|
IDPG: An Instance-Dependent Prompt Generation Method
|
To appear at the NAACL 2022 main conference
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a
task-specific prompt in each input instance during the model training stage. It
freezes the pre-trained language model and only optimizes a few task-specific
prompts. In this paper, we propose a conditional prompt generation method to
generate prompts for each input instance, referred to as the Instance-Dependent
Prompt Generation (IDPG). Unlike traditional prompt tuning methods that use a
fixed prompt, IDPG introduces a lightweight and trainable component to generate
prompts based on each input sentence. Extensive experiments on ten natural
language understanding (NLU) tasks show that the proposed strategy consistently
outperforms various prompt tuning baselines and is on par with other efficient
transfer learning methods such as Compacter while tuning far fewer model
parameters.
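A rough sketch of the instance-dependent idea: a small bottleneck network maps a sentence representation to soft prompt vectors that would be prepended to the frozen language model's input; all dimensions and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    def __init__(self, hidden=768, bottleneck=64, prompt_len=5):
        super().__init__()
        self.prompt_len, self.hidden = prompt_len, hidden
        self.down = nn.Linear(hidden, bottleneck)            # lightweight bottleneck
        self.up = nn.Linear(bottleneck, prompt_len * hidden)

    def forward(self, sent_repr):                            # (batch, hidden)
        out = self.up(torch.relu(self.down(sent_repr)))
        return out.view(-1, self.prompt_len, self.hidden)    # soft prompts

prompts = PromptGenerator()(torch.randn(2, 768))
print(prompts.shape)  # torch.Size([2, 5, 768]), prepended to the frozen LM input
```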
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 15:45:27 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Wu",
"Zhuofeng",
""
],
[
"Wang",
"Sinong",
""
],
[
"Gu",
"Jiatao",
""
],
[
"Hou",
"Rui",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Vydiswaran",
"V. G. Vinod",
""
],
[
"Ma",
"Hao",
""
]
] |
new_dataset
| 0.998802 |
2204.04507
|
Jithin Jagannath
|
Jithin Jagannath, Kian Hamedani, Collin Farquhar, Keyvan Ramezanpour,
Anu Jagannath
|
MR-iNet Gym: Framework for Edge Deployment of Deep Reinforcement
Learning on Embedded Software Defined Radio
|
To appear in Proceedings of ACM Workshop on Wireless Security and
Machine Learning (WiseML 2022)
| null | null | null |
cs.LG cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Dynamic resource allocation plays a critical role in the next generation of
intelligent wireless communication systems. Machine learning has been leveraged
as a powerful tool to make strides in this domain. In most cases, the progress
has been limited to simulations due to the challenging nature of hardware
deployment of these solutions. In this paper, for the first time, we design and
deploy deep reinforcement learning (DRL)-based power control agents on the GPU
embedded software defined radios (SDRs). To this end, we propose an end-to-end
framework (MR-iNet Gym) where the simulation suite and the embedded SDR
development work cohesively to overcome real-world implementation hurdles. To
prove feasibility, we consider the problem of distributed power control for
code-division multiple access (DS-CDMA)-based LPI/D transceivers. We first
build a DS-CDMA ns3 module that interacts with the OpenAI Gym environment.
Next, we train the power control DRL agents in this ns3-gym simulation
environment in a scenario that replicates our hardware testbed. Then, for edge
(embedded on-device) deployment, the trained models are optimized for real-time
operation without loss of performance. Hardware-based evaluation verifies the
efficiency of DRL agents over traditional distributed constrained power control
(DCPC) algorithm. More significantly, as the primary goal, this is the first
work that has established the feasibility of deploying DRL to provide optimized
distributed resource allocation for the next generation of GPU-embedded radios.
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 16:28:43 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Jagannath",
"Jithin",
""
],
[
"Hamedani",
"Kian",
""
],
[
"Farquhar",
"Collin",
""
],
[
"Ramezanpour",
"Keyvan",
""
],
[
"Jagannath",
"Anu",
""
]
] |
new_dataset
| 0.963625 |
2204.04521
|
Usman Naseem
|
Usman Naseem, Byoung Chan Lee, Matloob Khushi, Jinman Kim, Adam G.
Dunn
|
Benchmarking for Public Health Surveillance tasks on Social Media with a
Domain-Specific Pretrained Language Model
|
Accepted @ ACL2022 Workshop: The First Workshop on Efficient
Benchmarking in NLP
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
User-generated text on social media enables health workers to keep track of
information, identify possible outbreaks, forecast disease trends, monitor
emergency cases, and ascertain disease awareness and response to official
health correspondence. This exchange of health information on social media has
been regarded as an attempt to enhance public health surveillance (PHS).
Despite its potential, the technology is still in its early stages and is not
ready for widespread application. Advancements in pretrained language models
(PLMs) have facilitated the development of several domain-specific PLMs and a
variety of downstream applications. However, there are no PLMs for social media
tasks involving PHS. We present and release PHS-BERT, a transformer-based PLM,
to identify tasks related to public health surveillance on social media. We
compared and benchmarked the performance of PHS-BERT on 25 datasets from
different social media platforms related to 7 different PHS tasks. Compared
with existing PLMs that are mainly evaluated on limited tasks, PHS-BERT
achieved state-of-the-art performance on all 25 tested datasets, showing that
our PLM is robust and generalizable in the common PHS tasks. By making PHS-BERT
available, we aim to facilitate the community to reduce the computational cost
and introduce new baselines for future works across various PHS-related tasks.
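A hedged usage sketch with the Hugging Face transformers library; the hub id below is an assumption about where the released model is hosted and should be verified:

```python
from transformers import AutoTokenizer, AutoModel

name = "publichealthsurveillance/PHS-BERT"  # assumed hub id, verify before use
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

enc = tok("flu symptoms are spiking in my city", return_tensors="pt")
cls = model(**enc).last_hidden_state[:, 0]  # [CLS] embedding for a task head
print(cls.shape)
```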
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 18:01:18 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Naseem",
"Usman",
""
],
[
"Lee",
"Byoung Chan",
""
],
[
"Khushi",
"Matloob",
""
],
[
"Kim",
"Jinman",
""
],
[
"Dunn",
"Adam G.",
""
]
] |
new_dataset
| 0.998157 |
2204.04542
|
Mohammad R. Rezaei
|
Ebrahim Pourjafari, Navid Ziaei, Mohammad R. Rezaei, Amir Sameizadeh,
Mohammad Shafiee, Mohammad Alavinia, Mansour Abolghasemian, Nick Sajadi
|
Survival Seq2Seq: A Survival Model based on Sequence to Sequence
Architecture
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a novel non-parametric deep model for estimating
time-to-event (survival analysis) in presence of censored data and competing
risks. The model is designed based on the sequence-to-sequence (Seq2Seq)
architecture, therefore we name it Survival Seq2Seq. The first recurrent neural
network (RNN) layer of the encoder of our model is made up of Gated Recurrent
Unit with Decay (GRU-D) cells. These cells have the ability to effectively
impute not-missing-at-random values of longitudinal datasets with very high
missing rates, such as electronic health records (EHRs). The decoder of
Survival Seq2Seq generates a probability distribution function (PDF) for each
competing risk without assuming any prior distribution for the risks. Taking
advantage of RNN cells, the decoder is able to generate smooth and virtually
spike-free PDFs. This is beyond the capability of existing non-parametric deep
models for survival analysis. Training results on synthetic and medical
datasets prove that Survival Seq2Seq surpasses other existing deep survival
models in terms of the accuracy of predictions and the quality of generated
PDFs.
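A simplified sketch of the decoder idea (not the paper's architecture): a GRU unrolled over discrete time bins emits logits per competing risk, and a softmax over the time axis yields a smooth PDF:

```python
import torch
import torch.nn as nn

class PDFDecoder(nn.Module):
    def __init__(self, hidden=64, n_risks=2):
        super().__init__()
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_risks)

    def forward(self, enc_state, n_bins=100):
        x = enc_state.unsqueeze(1).repeat(1, n_bins, 1)  # one step per time bin
        out, _ = self.gru(x)
        return torch.softmax(self.head(out), dim=1)      # PDF over time, per risk

pdf = PDFDecoder()(torch.randn(3, 64))
print(pdf.sum(dim=1))  # ~1.0 for each risk: a valid probability mass over time
```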
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 20:15:02 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Pourjafari",
"Ebrahim",
""
],
[
"Ziaei",
"Navid",
""
],
[
"Rezaei",
"Mohammad R.",
""
],
[
"Sameizadeh",
"Amir",
""
],
[
"Shafiee",
"Mohammad",
""
],
[
"Alavinia",
"Mohammad",
""
],
[
"Abolghasemian",
"Mansour",
""
],
[
"Sajadi",
"Nick",
""
]
] |
new_dataset
| 0.995479 |
2204.04564
|
Chen Chen
|
Momal Ijaz, Renato Diaz, Chen Chen
|
Multimodal Transformer for Nursing Activity Recognition
|
CVPR-2022 Workshop
| null | null | null |
cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In an aging population, elderly patient safety is a primary concern at
hospitals and nursing homes, which demands for increased nurse care. By
performing nurse activity recognition, we can not only make sure that all
patients receive an equal standard of care, but also free nurses from manually
documenting the activities they perform, leading to a fair and safe place of
care for the elderly. In this work, we present a multimodal transformer-based
network, which extracts features from skeletal joints and acceleration data,
and fuses them to perform nurse activity recognition. Our method achieves
state-of-the-art performance of 81.8% accuracy on the benchmark dataset
available for nurse activity recognition from the Nurse Care Activity
Recognition Challenge. We perform ablation studies to show that our fusion
model is better than single modality transformer variants (using only
acceleration or skeleton joints data). Our solution also outperforms
state-of-the-art ST-GCN, GRU and other classical hand-crafted-feature-based
classifier solutions by a margin of 1.6%, on the NCRC dataset. Code is
available at \url{https://github.com/Momilijaz96/MMT_for_NCRC}.
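A condensed sketch of the overall pattern, separate encoders per modality followed by a fusion encoder over the concatenated token sequences; layer sizes and the class count are illustrative assumptions, not the published configuration:

```python
import torch
import torch.nn as nn

def encoder(dim=64):
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=2)

skel_enc, acc_enc, fusion = encoder(), encoder(), encoder()
classifier = nn.Linear(64, 9)     # assumed number of nurse activity classes

skel = torch.randn(2, 30, 64)     # (batch, time, feature) skeleton tokens
acc = torch.randn(2, 30, 64)      # acceleration tokens
fused = fusion(torch.cat([skel_enc(skel), acc_enc(acc)], dim=1))
logits = classifier(fused.mean(dim=1))
print(logits.shape)               # torch.Size([2, 9])
```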
|
[
{
"version": "v1",
"created": "Sat, 9 Apr 2022 23:01:00 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Ijaz",
"Momal",
""
],
[
"Diaz",
"Renato",
""
],
[
"Chen",
"Chen",
""
]
] |
new_dataset
| 0.998817 |
2204.04621
|
Zhimin Zhang
|
Zhimin Zhang, Zheng Wang, Wei Hu
|
Unsupervised Manga Character Re-identification via Face-body and
Spatial-temporal Associated Clustering
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the past few years, there has been a dramatic growth in e-manga
(electronic Japanese-style comics). Faced with the booming demand for manga
research and the large amount of unlabeled manga data, we raise a new task,
called unsupervised manga character re-identification. However, the artistic
expression and stylistic limitations of manga pose many challenges to the
re-identification problem. Inspired by the idea that some content-related
features may help clustering, we propose a Face-body and Spatial-temporal
Associated Clustering method (FSAC). In the face-body combination module, a
face-body graph is constructed to solve problems such as exaggeration and
deformation in artistic creation by using the integrity of the image. In the
spatial-temporal relationship correction module, we analyze the appearance
features of characters and design a temporal-spatial-related triplet loss to
fine-tune the clustering. Extensive experiments on a manga book dataset with
109 volumes validate the superiority of our method in unsupervised manga
character re-identification.
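A hedged sketch of one ingredient, a triplet loss whose pull is modulated by a spatial-temporal affinity weight; this is our simplified reading of the temporal-spatial-related triplet loss, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def st_triplet_loss(anchor, pos, neg, st_weight, margin=0.3):
    d_pos = F.pairwise_distance(anchor, pos)
    d_neg = F.pairwise_distance(anchor, neg)
    # Character crops that co-occur closely in pages/time get a stronger pull.
    return F.relu(st_weight * (d_pos - d_neg) + margin).mean()

a, p, n = (torch.randn(8, 128) for _ in range(3))
print(st_triplet_loss(a, p, n, st_weight=torch.rand(8)))
```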
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 07:28:41 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Zhang",
"Zhimin",
""
],
[
"Wang",
"Zheng",
""
],
[
"Hu",
"Wei",
""
]
] |
new_dataset
| 0.996432 |
2204.04686
|
Tianyang Cao
|
Tianyang Cao, Shuang Zeng, Xiaodan Xu, Mairgup Mansur, Baobao Chang
|
DISK: Domain-constrained Instance Sketch for Math Word Problem
Generation
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A math word problem (MWP) is a coherent narrative which reflects the
underlying logic of math equations. Successful MWP generation can automate the
writing of mathematics questions. Previous methods mainly generate MWP text
based on inflexible pre-defined templates. In this paper, we propose a neural
model for generating MWP text from math equations. Firstly, we incorporate a
matching model conditioned on the domain knowledge to retrieve an MWP instance
which is most consistent with the ground-truth, where the domain is a latent
variable extracted with a domain summarizer. Secondly, by constructing a
Quantity Cell Graph (QCG) from the retrieved MWP instance and reasoning over
it, we improve the model's comprehension of real-world scenarios and derive a
domain-constrained instance sketch to guide the generation. Besides, the QCG
also interacts with the equation encoder to enhance the alignment between math
tokens (e.g., quantities and variables) and MWP text. Experiments and empirical
analysis on an educational MWP set show that our model achieves impressive
performance in both automatic evaluation metrics and human evaluation metrics.
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 13:54:23 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Cao",
"Tianyang",
""
],
[
"Zeng",
"Shuang",
""
],
[
"Xu",
"Xiaodan",
""
],
[
"Mansur",
"Mairgup",
""
],
[
"Chang",
"Baobao",
""
]
] |
new_dataset
| 0.982939 |
2204.04708
|
Lin Xiang
|
Lin Xiang, Xiao Wei, Laura Cottatellucci, Robert Schober, and Tao
Jiang
|
Cache-Aided Massive MIMO with Linear Precoding in Multi-cell Systems
|
Extended version of journal submission
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a novel joint caching and massive multiple-input
multiple-output (MIMO) transmission scheme, referred to as \emph{cache-aided
massive MIMO}, for multi-cell downlink transmission to multiple cache-enabled
receivers. With the proposed scheme, users who have cached (a portion of) the
files that they request are offloaded and, hence, (partially) inactive during
downlink transmission. The other users either benefit from the cache-enabled
offloading for mitigating pilot contamination or exploit the cached but
unrequested files to cancel interference during uplink channel estimation and
downlink file reception. Moreover, by redesigning the transmit precoders based
on the cache status of the users and channel state information, we gain
additional degrees of freedom for massive MIMO transmission. For a given cache
status, we analyze the equivalent content delivery rates (ECDRs), i.e., the
average rates of delivering a requested file via both caching and massive MIMO
transmission to the requesting user, for cache-aided massive MIMO employing
re-designed maximum ratio transmission (MRT), zero-forcing (ZF) precoding, and
regularized zero-forcing (RZF) precoding. Based on the derived results, the
impact of (random) uncoded caching and coded caching on the performance of the
re-designed precoding schemes is investigated. Simulation results validate our
derivations and show that caching is beneficial for precoded downlink
transmission as it enhances the transmit power allocation, mitigates intra- and
inter-cell interference, and reduces the impairment caused by pilot
contamination. Compared with conventional massive MIMO without caching and with
cache-oblivious precoding, the proposed cache-aided massive MIMO scheme
achieves a significantly higher ECDR even when the number of users approaches
the number of transmit antennas.
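A toy NumPy sketch of the re-designed ZF idea: users served entirely from their caches are dropped from the channel matrix before the precoder is computed, which frees spatial degrees of freedom; dimensions and the offloaded set are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 16, 8                         # BS antennas, users
H = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))

offloaded = [2, 5]                   # users fully served from their local caches
active = [k for k in range(K) if k not in offloaded]
Ha = H[active]                       # reduced channel matrix

W = Ha.conj().T @ np.linalg.inv(Ha @ Ha.conj().T)   # ZF via pseudo-inverse
W /= np.linalg.norm(W, axis=0, keepdims=True)       # per-user power normalization
print(np.round(np.abs(Ha @ W), 3))   # ~diagonal: inter-user interference nulled
```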
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 15:33:39 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Xiang",
"Lin",
""
],
[
"Wei",
"Xiao",
""
],
[
"Cottatellucci",
"Laura",
""
],
[
"Schober",
"Robert",
""
],
[
"Jiang",
"Tao",
""
]
] |
new_dataset
| 0.956036 |
2204.04724
|
Tao Qi
|
Tao Qi, Fangzhao Wu, Chuhan Wu, Peijie Sun, Le Wu, Xiting Wang,
Yongfeng Huang, Xing Xie
|
ProFairRec: Provider Fairness-aware News Recommendation
|
SIGIR 2022
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
News recommendation aims to help online news platform users find their
preferred news articles. Existing news recommendation methods usually learn
models from historical user behaviors on news. However, these behaviors are
usually biased with respect to news providers. Models trained on biased user
data may capture and even amplify these biases, and are unfair to some
minority news providers. In this paper, we propose a provider fairness-aware
news recommendation framework (named ProFairRec), which can learn news
recommendation models fair for different news providers from biased user data.
The core idea of ProFairRec is to learn provider-fair news representations and
provider-fair user representations to achieve provider fairness. To learn
provider-fair representations from biased data, we employ provider-biased
representations to inherit provider bias from data. Provider-fair and -biased
news representations are learned from news content and provider IDs
respectively, which are further aggregated to build fair and biased user
representations based on user click history. All of these representations are
used in model training while only fair representations are used for user-news
matching to achieve fair news recommendation. Besides, we propose an
adversarial learning task on news provider discrimination to prevent
provider-fair news representation from encoding provider bias. We also propose
an orthogonal regularization on provider-fair and -biased representations to
better reduce provider bias in provider-fair representations. Moreover,
ProFairRec is a general framework and can be applied to different news
recommendation methods. Extensive experiments on a public dataset verify that
our ProFairRec approach can effectively improve the provider fairness of many
existing methods and meanwhile maintain their recommendation accuracy.
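A hedged sketch of two of the described training signals, a provider discriminator applied to the fair news embedding (used adversarially, e.g. via gradient reversal, in practice) and an orthogonality penalty between fair and biased embeddings; shapes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_providers, dim = 50, 64
disc = nn.Linear(dim, n_providers)                # provider discriminator (sketch)

fair = torch.randn(32, dim, requires_grad=True)   # provider-fair embeddings
biased = torch.randn(32, dim)                     # provider-biased embeddings
provider_ids = torch.randint(0, n_providers, (32,))

adv_loss = F.cross_entropy(disc(fair), provider_ids)
orth_loss = F.cosine_similarity(fair, biased).abs().mean()
print(float(adv_loss), float(orth_loss))
```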
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 16:58:34 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Qi",
"Tao",
""
],
[
"Wu",
"Fangzhao",
""
],
[
"Wu",
"Chuhan",
""
],
[
"Sun",
"Peijie",
""
],
[
"Wu",
"Le",
""
],
[
"Wang",
"Xiting",
""
],
[
"Huang",
"Yongfeng",
""
],
[
"Xie",
"Xing",
""
]
] |
new_dataset
| 0.994751 |
2204.04729
|
Vincent Limouzy
|
Liliana Alc\'on and Martin Charles Golumbic and Noem\'i Gudi\~no and
Marisa Gutierrez and Vincent Limouzy
|
On dually-CPT and strong-CPT posets
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A poset is a containment of paths in a tree (CPT) if it admits a
representation by containment where each element of the poset is represented by
a path in a tree and two elements are comparable in the poset if and only if
the corresponding paths are related by the inclusion relation. Recently
Alc\'on, Gudi\~{n}o and Gutierrez introduced proper subclasses of CPT posets,
namely dually-CPT, and strongly-CPT. A poset $\mathbf{P}$ is dually-CPT, if and
only if $\mathbf{P}$ and its dual $\mathbf{P}^{d}$ both admit a CPT
representation. A poset $\mathbf{P}$ is strongly-CPT, if and only if
$\mathbf{P}$ and all the posets that share the same underlying comparability
graph admit a CPT representation. While the inclusion between dually-CPT and
CPT was known to be strict, it was raised as an open question by Alc\'on,
Gudi\~{n}o and Gutierrez whether strongly-CPT is a strict subclass of
dually-CPT. We provide a proof that the two classes actually coincide.
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 17:12:45 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Alcón",
"Liliana",
""
],
[
"Golumbic",
"Martin Charles",
""
],
[
"Gudiño",
"Noemí",
""
],
[
"Gutierrez",
"Marisa",
""
],
[
"Limouzy",
"Vincent",
""
]
] |
new_dataset
| 0.994797 |
2204.04730
|
Yuchao Dai Dr.
|
Hui Deng and Tong Zhang and Yuchao Dai and Jiawei Shi and Yiran Zhong
and Hongdong Li
|
Deep Non-rigid Structure-from-Motion: A Sequence-to-Sequence Translation
Perspective
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Directly regressing the non-rigid shape and camera pose from the individual
2D frame is ill-suited to the Non-Rigid Structure-from-Motion (NRSfM) problem.
This frame-by-frame 3D reconstruction pipeline overlooks the inherent
spatial-temporal nature of NRSfM, i.e., reconstructing the whole 3D sequence
from the input 2D sequence. In this paper, we propose to model deep NRSfM from
a sequence-to-sequence translation perspective, where the input 2D frame
sequence is taken as a whole to reconstruct the deforming 3D non-rigid shape
sequence. First, we apply a shape-motion predictor to estimate the initial
non-rigid shape and camera motion from a single frame. Then we propose a
context modeling module to model camera motions and complex non-rigid shapes.
To tackle the difficulty in enforcing the global structure constraint within
the deep framework, we propose to impose the union-of-subspace structure by
replacing the self-expressiveness layer with multi-head attention and delayed
regularizers, which enables end-to-end batch-wise training. Experimental
results across different datasets such as Human3.6M, CMU Mocap and InterHand
prove the superiority of our framework. The code will be made publicly
available.
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 17:13:52 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Deng",
"Hui",
""
],
[
"Zhang",
"Tong",
""
],
[
"Dai",
"Yuchao",
""
],
[
"Shi",
"Jiawei",
""
],
[
"Zhong",
"Yiran",
""
],
[
"Li",
"Hongdong",
""
]
] |
new_dataset
| 0.976101 |
2204.04844
|
Ziqing Yang
|
Zihang Xu, Ziqing Yang, Yiming Cui, Zhigang Chen
|
HFL at SemEval-2022 Task 8: A Linguistics-inspired Regression Model with
Data Augmentation for Multilingual News Similarity
|
6 pages; SemEval-2022 Task 8
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes our system designed for SemEval-2022 Task 8:
Multilingual News Article Similarity. We proposed a linguistics-inspired model
trained with a few task-specific strategies. The main techniques of our system
are: 1) data augmentation, 2) multi-label loss, 3) adapted R-Drop, and 4) sample
reconstruction with the head-tail combination. We also present a brief analysis
of some methods that proved unhelpful, such as the two-tower architecture. Our system ranked 1st on
the leaderboard while achieving a Pearson's Correlation Coefficient of 0.818 on
the official evaluation set.
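A hedged sketch of an adapted R-Drop-style consistency term for a regression head: two stochastic (dropout) forward passes are pulled together next to the task loss. Using MSE between passes is our assumption for the regression setting:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

head = nn.Sequential(nn.Dropout(0.1), nn.Linear(768, 1))
x, target = torch.randn(8, 768), torch.rand(8, 1) * 4  # similarity scores in [0, 4]

p1, p2 = head(x), head(x)            # dropout makes the two passes differ
task_loss = F.mse_loss(p1, target) + F.mse_loss(p2, target)
consistency = F.mse_loss(p1, p2)     # regression analogue of the KL term
loss = task_loss + 1.0 * consistency
print(float(loss))
```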
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 03:08:37 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Xu",
"Zihang",
""
],
[
"Yang",
"Ziqing",
""
],
[
"Cui",
"Yiming",
""
],
[
"Chen",
"Zhigang",
""
]
] |
new_dataset
| 0.991867 |
2204.04892
|
Kyushik Min
|
Kyushik Min, Hyunho Lee, Kwansu Shin, Taehak Lee, Hojoon Lee, Jinwon
Choi, Sungho Son
|
JORLDY: a fully customizable open source framework for reinforcement
learning
|
12 pages, 6 figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recently, Reinforcement Learning (RL) has been actively researched in both
academic and industrial fields. However, there exist only a few RL frameworks
which are developed for researchers or students who want to study RL. In
response, we propose an open-source RL framework "Join Our Reinforcement
Learning framework for Developing Yours" (JORLDY). JORLDY provides more than 20
widely used RL algorithms which are implemented with PyTorch. Also, JORLDY
supports multiple RL environments which include OpenAI gym, Unity ML-Agents,
Mujoco, Super Mario Bros and Procgen. Moreover, the algorithmic components such
as the agent, network, and environment can be freely customized, so that users
can easily modify and extend them. We expect that JORLDY will support various
RL research and contribute to further advancing the field of RL. The
source code of JORLDY is provided on the following Github:
https://github.com/kakaoenterprise/JORLDY
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 06:28:27 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Min",
"Kyushik",
""
],
[
"Lee",
"Hyunho",
""
],
[
"Shin",
"Kwansu",
""
],
[
"Lee",
"Taehak",
""
],
[
"Lee",
"Hojoon",
""
],
[
"Choi",
"Jinwon",
""
],
[
"Son",
"Sungho",
""
]
] |
new_dataset
| 0.99455 |
2204.04898
|
Alessandro Berti Mr
|
Alessandro Berti, Minh Phan Nghia, Wil M.P. van der Aalst
|
PM4Py-GPU: a High-Performance General-Purpose Library for Process Mining
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Open-source process mining provides many algorithms for the analysis of event
data which could be used to analyze mainstream processes (e.g., O2C, P2P, CRM).
However, compared to commercial tools, they lack performance and struggle
to analyze large amounts of data. This paper presents PM4Py-GPU, a Python
process mining library based on the NVIDIA RAPIDS framework. Thanks to the
dataframe columnar storage and the high level of parallelism, a significant
speed-up is achieved on classic process mining computations and processing
activities.
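A small illustration of the columnar idea behind the speed-up: a directly-follows count, the core of many process mining computations, written as dataframe operations. This is not the library's actual API; cuDF is assumed to act as a drop-in for pandas on a RAPIDS install:

```python
import pandas as pd  # swap for `import cudf as pd` on a RAPIDS installation

log = pd.DataFrame({
    "case": ["c1", "c1", "c1", "c2", "c2"],
    "activity": ["register", "check", "pay", "register", "pay"],
    "ts": [1, 2, 3, 1, 2],
}).sort_values(["case", "ts"])

log["next"] = log.groupby("case")["activity"].shift(-1)
dfg = log.dropna().groupby(["activity", "next"]).size()
print(dfg)  # directly-follows frequencies, computed column-wise
```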
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 06:53:36 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Berti",
"Alessandro",
""
],
[
"Nghia",
"Minh Phan",
""
],
[
"van der Aalst",
"Wil M. P.",
""
]
] |
new_dataset
| 0.994082 |
2204.04910
|
Shunsuke Aoki
|
Shunsuke Aoki and Ragunathan (Raj) Rajkumar
|
A-DRIVE: Autonomous Deadlock Detection and Recovery at Road
Intersections for Connected and Automated Vehicles
| null | null | null | null |
cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Connected and Automated Vehicles (CAVs) are highly expected to improve
traffic throughput and safety at road intersections, single-track lanes, and
construction zones. However, multiple CAVs can block each other and create a
mutual deadlock around these road segments (i) when vehicle systems have a
failure, such as a communication failure, control failure, or localization
failure, and/or (ii) when vehicles use a long shared road segment. In this
paper, we present an Autonomous Deadlock Detection and Recovery Protocol at
Intersections for Automated Vehicles named A-DRIVE that is a decentralized and
time-sensitive technique to improve traffic throughput and shorten worst-case
recovery time. To enable the deadlock recovery with automated vehicles and with
human-driven vehicles, A-DRIVE includes two components: V2V communication-based
A-DRIVE and Local perception-based A-DRIVE. V2V communication-based A-DRIVE is
designed for homogeneous traffic environments in which all the vehicles are
connected and automated. Local perception-based A-DRIVE is for mixed traffic,
where CAVs, non-connected automated vehicles, and human-driven vehicles
co-exist and cooperate with one another. Since these two components are not
exclusive, CAVs inclusively and seamlessly use them in practice. Finally, our
simulation results show that A-DRIVE improves traffic throughput compared to a
baseline protocol.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 07:19:50 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Aoki",
"Shunsuke",
""
],
[
"Rajkumar",
"Ragunathan (Raj)",
""
]
] |
new_dataset
| 0.999631 |
2204.04928
|
Li Wei
|
Li Wei, Chongwen Huang, George C. Alexandropoulos, Wei E. I. Sha,
Zhaoyang Zhang, Merouane Debbah and Chau Yuen
|
Multi-User Wireless Communications with Holographic MIMO Surfaces: A
Convenient Channel Model and Spectral Efficiency Analysis
|
arXiv admin note: substantial text overlap with arXiv:2112.02803
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The multi-user Holographic Multiple-Input and Multiple-Output Surface
(MU-HMIMOS) paradigm, which is capable of realizing large continuous apertures
with minimal power consumption and of shaping radio wave propagation at will,
has been recently considered as an energy-efficient solution for future
wireless networks. The tractable channel modeling of MU-HMIMOS signal
propagation is one of the most critical challenges, mainly due to the coupling
effect induced by the excessively large number of closely spaced patch
antennas. In this paper, we focus on this challenge for downlink communications
and model the electromagnetic channel in the wavenumber domain using the
Fourier plane wave representation. Based on the proposed model, we devise a
Zero-Forcing (ZF) precoding scheme, capitalizing on the sampled channel
variance that depends on the number and spacing of the HMIMOS patch antennas,
and perform a spectral efficiency analysis. Our simulation results showcase
that the more patch antennas there are and the larger their spacing is, the
better the performance of the considered MU-HMIMOS system becomes. In addition, it is demonstrated
that our theoretical performance expressions approximate sufficiently well the
simulated spectral efficiency, even for the highly correlated cases, thus
verifying the effectiveness and robustness of the presented analytical
framework.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 07:57:06 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Wei",
"Li",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Sha",
"Wei E. I.",
""
],
[
"Zhang",
"Zhaoyang",
""
],
[
"Debbah",
"Merouane",
""
],
[
"Yuen",
"Chau",
""
]
] |
new_dataset
| 0.971955 |
2204.04959
|
Yuntao Du
|
Yuntao Du, Xinjun Zhu, Lu Chen, Baihua Zheng and Yunjun Gao
|
HAKG: Hierarchy-Aware Knowledge Gated Network for Recommendation
|
Accept to SIGIR2022
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge graph (KG) plays an increasingly important role to improve the
recommendation performance and interpretability. A recent technical trend is to
design end-to-end models based on information propagation schemes. However,
existing propagation-based methods fail to (1) model the underlying
hierarchical structures and relations, and (2) capture the high-order
collaborative signals of items for learning high-quality user and item
representations.
In this paper, we propose a new model, called Hierarchy-Aware Knowledge Gated
Network (HAKG), to tackle the aforementioned problems. Technically, we model
users and items (that are captured by a user-item graph), as well as entities
and relations (that are captured in a KG) in hyperbolic space, and design a
hyperbolic aggregation scheme to gather relational contexts over KG. Meanwhile,
we introduce a novel angle constraint to preserve characteristics of items in
the embedding space. Furthermore, we propose a dual item embeddings design to
represent and propagate collaborative signals and knowledge associations
separately, and leverage the gated aggregation to distill discriminative
information for better capturing user behavior patterns. Experimental results
on three benchmark datasets show that, HAKG achieves significant improvement
over the state-of-the-art methods like CKAN, Hyper-Know, and KGIN. Further
analyses on the learned hyperbolic embeddings confirm that HAKG offers
meaningful insights into the hierarchies of data.
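For reference, the standard Poincare-ball distance that underlies hyperbolic, hierarchy-aware embeddings of this kind; this is a textbook formula, not the authors' full aggregation scheme:

```python
import torch

def poincare_distance(u, v, eps=1e-5):
    uu = (u * u).sum(-1).clamp(max=1 - eps)
    vv = (v * v).sum(-1).clamp(max=1 - eps)
    duv = ((u - v) ** 2).sum(-1)
    x = 1 + 2 * duv / ((1 - uu) * (1 - vv))
    return torch.acosh(x.clamp(min=1 + eps))

u, v = torch.rand(4, 8) * 0.3, torch.rand(4, 8) * 0.3  # points inside the ball
print(poincare_distance(u, v))
```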
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 09:13:19 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Du",
"Yuntao",
""
],
[
"Zhu",
"Xinjun",
""
],
[
"Chen",
"Lu",
""
],
[
"Zheng",
"Baihua",
""
],
[
"Gao",
"Yunjun",
""
]
] |
new_dataset
| 0.983595 |
2204.04968
|
Anita Rau
|
Anita Rau, Binod Bhattarai, Lourdes Agapito, Danail Stoyanov
|
Bimodal Camera Pose Prediction for Endoscopy
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Deducing the 3D structure of endoscopic scenes from images remains extremely
challenging. In addition to deformation and view-dependent lighting, tubular
structures like the colon present problems stemming from the self-occluding,
repetitive anatomical structures. In this paper, we propose SimCol, a synthetic
dataset for camera pose estimation in colonoscopy and a novel method that
explicitly learns a bimodal distribution to predict the endoscope pose. Our
dataset replicates real colonoscope motion and highlights drawbacks of existing
methods. We publish 18k RGB images from simulated colonoscopy with
corresponding depth and camera poses and make our data generation environment
in Unity publicly available. We evaluate different camera pose prediction
methods and demonstrate that, when trained on our data, they generalize to real
colonoscopy sequences and our bimodal approach outperforms prior unimodal work.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 09:34:34 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Rau",
"Anita",
""
],
[
"Bhattarai",
"Binod",
""
],
[
"Agapito",
"Lourdes",
""
],
[
"Stoyanov",
"Danail",
""
]
] |
new_dataset
| 0.990759 |
2204.04988
|
Johannes Dornheim
|
Johannes Dornheim
|
gTLO: A Generalized and Non-linear Multi-Objective Deep Reinforcement
Learning Approach
| null | null | null | null |
cs.LG cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
In real-world decision optimization, often multiple competing objectives must
be taken into account. Following classical reinforcement learning, these
objectives have to be combined into a single reward function. In contrast,
multi-objective reinforcement learning (MORL) methods learn from vectors of
per-objective rewards instead. In the case of multi-policy MORL, sets of
decision policies for various preferences regarding the conflicting objectives
are optimized. This is especially important when target preferences are not
known during training or when preferences change dynamically during
application. While it is, in general, straightforward to extend a
single-objective reinforcement learning method for MORL based on linear
scalarization, solutions that are reachable by these methods are limited to
convex regions of the Pareto front. Non-linear MORL methods like Thresholded
Lexicographic Ordering (TLO) are designed to overcome this limitation.
Generalized MORL methods utilize function approximation to generalize across
objective preferences and thereby implicitly learn multiple policies in a
data-efficient manner, even for complex decision problems with high-dimensional
or continuous state spaces. In this work, we propose \textit{generalized
Thresholded Lexicographic Ordering} (gTLO), a novel method that aims to combine
non-linear MORL with the advantages of generalized MORL. We introduce a deep
reinforcement learning realization of the algorithm and present promising
results on a standard benchmark for non-linear MORL and a real-world
application from the domain of manufacturing process control.
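A compact sketch of the thresholded lexicographic selection rule that gTLO generalizes: per-objective values are clipped at their thresholds, then actions are compared lexicographically in priority order; the toy Q-values are arbitrary:

```python
import numpy as np

def tlo_select(q_values, thresholds):
    # q_values: (n_actions, n_objectives); earlier columns have higher priority.
    clipped = np.minimum(q_values, thresholds)  # no reward for exceeding a threshold
    keys = [clipped[:, i] for i in range(clipped.shape[1])]
    order = np.lexsort(keys[::-1])              # first objective is the primary key
    return order[-1]                            # index of the best action

q = np.array([[0.9, 0.2], [0.7, 0.8], [0.9, 0.6]])
print(tlo_select(q, thresholds=np.array([0.8, np.inf])))  # -> 2
```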
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 10:06:49 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Dornheim",
"Johannes",
""
]
] |
new_dataset
| 0.999084 |
2204.05151
|
Travis LaCroix
|
Travis LaCroix, Alexandra Sasha Luccioni
|
Metaethical Perspectives on 'Benchmarking' AI Ethics
|
39 Pages
| null | null | null |
cs.CY cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Benchmarks are seen as the cornerstone for measuring technical progress in
Artificial Intelligence (AI) research and have been developed for a variety of
tasks ranging from question answering to facial recognition. An increasingly
prominent research area in AI is ethics, which currently has no set of
benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI
system. In this paper, drawing upon research in moral philosophy and
metaethics, we argue that it is impossible to develop such a benchmark. As
such, alternative mechanisms are necessary for evaluating whether an AI system
is 'ethical'. This is especially pressing in light of the prevalence of
applied, industrial AI research. We argue that it makes more sense to talk
about 'values' (and 'value alignment') rather than 'ethics' when considering
the possible actions of present and future AI systems. We further highlight
that, because values are unambiguously relative, focusing on values forces us
to consider explicitly what the values are and whose values they are. Shifting
the emphasis from ethics to values therefore gives rise to several new ways of
understanding how researchers might advance research programmes for robustly
safe or beneficial AI. We conclude by highlighting a number of possible ways
forward for the field as a whole, and we advocate for different approaches
towards more value-aligned AI research.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 14:36:39 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"LaCroix",
"Travis",
""
],
[
"Luccioni",
"Alexandra Sasha",
""
]
] |
new_dataset
| 0.981417 |
2204.05172
|
Zhihao Li
|
Zhihao Li, M. Salman Asif, Zhan Ma
|
Event Transformer
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The event camera is a bio-vision inspired camera with high dynamic range,
high response speed, and low power consumption, recently attracting extensive
attention for its use in vast vision tasks. Unlike the conventional cameras
that output intensity frame at a fixed time interval, event camera records the
pixel brightness change (a.k.a., event) asynchronously (in time) and sparsely
(in space). Existing methods often aggregate events occurred in a predefined
temporal duration for downstream tasks, which apparently overlook varying
behaviors of fine-grained temporal events. This work proposes the Event
Transformer to directly process the event sequence in its native vectorized
tensor format. It cascades a Local Transformer (LXformer) for exploiting the
local temporal correlation, a Sparse Conformer (SCformer) for embedding the
local spatial similarity, and a Global Transformer (GXformer) for further
aggregating the global information in a serial means to effectively
characterize the time and space correlations from input raw events for the
generation of effective spatiotemporal features used for tasks. Experimental
studies have been extensively conducted in comparison to fourteen other
existing algorithms upon five different datasets widely used for
classification. Quantitative results report the state-of-the-art
classification accuracy and the least computational resource requirements of
the Event Transformer, making it practically attractive for
event-based vision tasks.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 15:05:06 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Li",
"Zhihao",
""
],
[
"Asif",
"M. Salman",
""
],
[
"Ma",
"Zhan",
""
]
] |
new_dataset
| 0.963869 |
2204.05222
|
Lorenz Diener
|
Lorenz Diener, Sten Sootla, Solomiya Branets, Ando Saabas, Robert
Aichner, Ross Cutler
|
INTERSPEECH 2022 Audio Deep Packet Loss Concealment Challenge
|
4 pages + 1 page references, 1 figure, 2 tables. Submitted to
INTERSPEECH 2022
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Audio Packet Loss Concealment (PLC) is the hiding of gaps in audio streams
caused by data transmission failures in packet-switched networks. This is a
common problem, and of increasing importance as end-to-end VoIP telephony and
teleconference systems become the default and ever more widely used form of
communication in business as well as in personal usage. This paper presents the
INTERSPEECH 2022 Audio Deep Packet Loss Concealment challenge. We first give an
overview of the PLC problem, and introduce some classical approaches to PLC as
well as recent work. We then present the open source dataset released as part
of this challenge as well as the evaluation methods and metrics used to
determine the winner. We also briefly introduce PLCMOS, a novel data-driven
metric that can be used to quickly evaluate the performance of PLC systems.
Finally, we present the results of the INTERSPEECH 2022 Audio Deep PLC
Challenge, and provide a summary of important takeaways.
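For orientation, a classical zero-order-hold baseline against which learned PLC systems are often compared: lost frames are filled by repeating the last received frame. This reference sketch is ours and is unrelated to the challenge entries:

```python
import numpy as np

def conceal(frames, lost_mask):
    out, last = [], np.zeros_like(frames[0])
    for frame, lost in zip(frames, lost_mask):
        if not lost:
            last = frame
        out.append(last.copy())
    return np.stack(out)

audio = np.random.randn(5, 480)  # five 10 ms frames at 48 kHz
print(conceal(audio, [False, True, True, False, False]).shape)  # (5, 480)
```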
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 16:13:36 GMT"
}
] | 2022-04-12T00:00:00 |
[
[
"Diener",
"Lorenz",
""
],
[
"Sootla",
"Sten",
""
],
[
"Branets",
"Solomiya",
""
],
[
"Saabas",
"Ando",
""
],
[
"Aichner",
"Robert",
""
],
[
"Cutler",
"Ross",
""
]
] |
new_dataset
| 0.964312 |
1911.03026
|
Tsuyoshi Yagita
|
Duc A. Hoang, Akira Suzuki, Tsuyoshi Yagita
|
Reconfiguring k-path vertex covers
|
29 pages, 6 figures, to be published in IEICE Trans. Information and
Systems
| null | null | null |
cs.DS cs.CC cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A vertex subset $I$ of a graph $G$ is called a $k$-path vertex cover if every
path on $k$ vertices in $G$ contains at least one vertex from $I$. The
\textsc{$k$-Path Vertex Cover Reconfiguration ($k$-PVCR)} problem asks if one
can transform one $k$-path vertex cover into another via a sequence of $k$-path
vertex covers where each intermediate member is obtained from its predecessor
by applying a given reconfiguration rule exactly once. We investigate the
computational complexity of \textsc{$k$-PVCR} from the viewpoint of graph
classes under the well-known reconfiguration rules: $\mathsf{TS}$,
$\mathsf{TJ}$, and $\mathsf{TAR}$. The problem for $k=2$, known as the
\textsc{Vertex Cover Reconfiguration (VCR)} problem, has been well-studied in
the literature. We show that certain known hardness results for \textsc{VCR} on
different graph classes including planar graphs, bounded bandwidth graphs,
chordal graphs, and bipartite graphs, can be extended for \textsc{$k$-PVCR}. In
particular, we prove a complexity dichotomy for \textsc{$k$-PVCR} on general
graphs: on those whose maximum degree is $3$ (and even planar), the problem is
$\mathtt{PSPACE}$-complete, while on those whose maximum degree is $2$ (i.e.,
paths and cycles), the problem can be solved in polynomial time. Additionally,
we also design polynomial-time algorithms for \textsc{$k$-PVCR} on trees under
each of $\mathsf{TJ}$ and $\mathsf{TAR}$. Moreover, on paths, cycles, and
trees, we describe how one can construct a reconfiguration sequence between two
given $k$-path vertex covers in a yes-instance. In particular, on paths, our
constructed reconfiguration sequence is shortest.
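A brute-force sketch, feasible only for small graphs, of the underlying feasibility check: whether a vertex set I is a k-path vertex cover, i.e. whether every simple path on k vertices meets I:

```python
import itertools
import networkx as nx

def is_kpvc(G, I, k):
    I = set(I)
    for u, v in itertools.combinations(G.nodes, 2):
        for path in nx.all_simple_paths(G, u, v, cutoff=k - 1):
            if len(path) == k and not I & set(path):
                return False
    return True

G = nx.path_graph(5)          # the path 0-1-2-3-4
print(is_kpvc(G, {2}, 3))     # True: every path on 3 vertices contains 2
print(is_kpvc(G, {1}, 3))     # False: the path (2, 3, 4) avoids vertex 1
```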
|
[
{
"version": "v1",
"created": "Fri, 8 Nov 2019 03:49:14 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Apr 2022 16:03:13 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Hoang",
"Duc A.",
""
],
[
"Suzuki",
"Akira",
""
],
[
"Yagita",
"Tsuyoshi",
""
]
] |
new_dataset
| 0.999325 |
2010.03805
|
Giulia Cisotto
|
Sergio Martiradonna, Giulia Cisotto, Gennaro Boggia, Giuseppe Piro,
Lorenzo Vangelista, and Stefano Tomasin
|
Cascaded WLAN-FWA Networking and Computing Architecture for Pervasive
In-Home Healthcare
| null |
2021 IEEE Wireless Communications
|
10.1109/MWC.001.2000330
|
Volume: 28, Issue: 3, June 2021
|
cs.NI cs.CY cs.DC eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pervasive healthcare is a promising assisted-living solution for chronic
patients. However, current cutting-edge communication technologies are not able
to strictly meet the requirements of these applications, especially in the case
of life-threatening events. To bridge this gap, this paper proposes a new
architecture to support indoor healthcare monitoring, with a focus on epileptic
patients. Several novel elements are introduced. The first element is the
cascading of a WLAN and a cellular network, where IEEE 802.11ax is used for the
wireless local area network to collect physiological and environmental data
in-home and 5G-enabled Fixed Wireless Access links transfer them to a remote
hospital. The second element is the extension of the network slicing concept to
the WLAN, and the introduction of two new slice types to support both regular
monitoring and emergency handling. Moreover, the inclusion of local computing
capabilities at the WLAN router, together with a mobile edge computing
resource, represents a further architectural enhancement. Local computation is
required to trigger not only health-related alarms, but also the network
slicing change in case of emergency: in fact, proper radio resource scheduling
is necessary for the cascaded networks to handle healthcare traffic together
with other promiscuous everyday communication services. Numerical results
demonstrate the effectiveness of the proposed approach while highlighting the
performance gain achieved with respect to baseline solutions.
|
[
{
"version": "v1",
"created": "Thu, 8 Oct 2020 07:16:00 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Martiradonna",
"Sergio",
""
],
[
"Cisotto",
"Giulia",
""
],
[
"Boggia",
"Gennaro",
""
],
[
"Piro",
"Giuseppe",
""
],
[
"Vangelista",
"Lorenzo",
""
],
[
"Tomasin",
"Stefano",
""
]
] |
new_dataset
| 0.975701 |
2102.05185
|
Andrew Ross
|
Andrew Slavin Ross and Finale Doshi-Velez
|
Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement
|
ICML 2021 paper, fixed incorrect version upload
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In representation learning, there has been recent interest in developing
algorithms to disentangle the ground-truth generative factors behind a dataset,
and metrics to quantify how fully this occurs. However, these algorithms and
metrics often assume that both representations and ground-truth factors are
flat, continuous, and factorized, whereas many real-world generative processes
involve rich hierarchical structure, mixtures of discrete and continuous
variables with dependence between them, and even varying intrinsic
dimensionality. In this work, we develop benchmarks, algorithms, and metrics
for learning such hierarchical representations.
|
[
{
"version": "v1",
"created": "Tue, 9 Feb 2021 23:34:24 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Jun 2021 15:22:16 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Jan 2022 19:27:38 GMT"
},
{
"version": "v4",
"created": "Fri, 8 Apr 2022 12:48:03 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Ross",
"Andrew Slavin",
""
],
[
"Doshi-Velez",
"Finale",
""
]
] |
new_dataset
| 0.998372 |
2105.03389
|
EPTCS
|
Patricia Johann (Appalachian State University), Enrico Ghiorzi
(Appalachian State University), Daniel Jeffries (Appalachian State
University)
|
GADTs, Functoriality, Parametricity: Pick Two
|
In Proceedings LSFA 2021, arXiv:2204.03415
|
EPTCS 357, 2022, pp. 77-92
|
10.4204/EPTCS.357.6
| null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
GADTs can be represented either as their Church encodings a la Atkey, or as
fixpoints a la Johann and Polonsky. While a GADT represented as its Church
encoding need not support a map function satisfying the functor laws, the
fixpoint representation of a GADT must support such a map function even to be
well-defined. The two representations of a GADT thus need not be the same in
general. This observation forces a choice of representation of data types in
languages supporting GADTs. In this paper we show that choosing whether to
represent data types as their Church encodings or as fixpoints determines
whether or not a language supporting GADTs can have parametric models. This
choice thus has important consequences for how we can program with, and reason
about, these advanced data types.
|
[
{
"version": "v1",
"created": "Fri, 7 May 2021 16:50:42 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Dec 2021 11:06:49 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Apr 2022 07:18:08 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Johann",
"Patricia",
"",
"Appalachian State University"
],
[
"Ghiorzi",
"Enrico",
"",
"Appalachian State University"
],
[
"Jeffries",
"Daniel",
"",
"Appalachian State\n University"
]
] |
new_dataset
| 0.981624 |
2106.01161
|
Manuel Chakravarty
|
Manuel M. T. Chakravarty and Nikos Karayannidis and Aggelos Kiayias
and Michael Peyton Jones and Polina Vinogradova
|
Babel Fees via Limited Liabilities
|
To appear in "20th International Conference on Applied Cryptography
and Network Security (ACNS 2022)"
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Custom currencies (ERC-20) on Ethereum are wildly popular, but they are
second class to the primary currency Ether. Custom currencies are more complex
and more expensive to handle than the primary currency as their accounting is
not natively performed by the underlying ledger, but instead in user-defined
contract code. Furthermore, and quite importantly, transaction fees can only be
paid in Ether.
In this paper, we focus on being able to pay transaction fees in custom
currencies. We achieve this by way of a mechanism permitting short term
liabilities to pay transaction fees in conjunction with offers of custom
currencies to compensate for those liabilities. This enables block producers to
accept custom currencies in exchange for settling liabilities of transactions
that they process.
We present formal ledger rules to handle liabilities together with the
concept of babel fees to pay transaction fees in custom currencies. We also
discuss how clients can determine what fees they have to pay, and we present a
solution to the knapsack problem variant that block producers have to solve in
the presence of babel fees to optimise their profits.
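A toy sketch of the block producer's optimization as a plain 0/1 knapsack: each babel-fee transaction offers some custom-currency value and consumes block space. The numbers are arbitrary and the paper's actual variant is richer than this:

```python
def best_bundle(txs, capacity):
    # txs: list of (size, profit) pairs; classic DP over remaining block space.
    best = [0] * (capacity + 1)
    for size, profit in txs:
        for c in range(capacity, size - 1, -1):
            best[c] = max(best[c], best[c - size] + profit)
    return best[capacity]

txs = [(3, 5), (4, 6), (2, 3)]       # hypothetical (bytes, value) pairs
print(best_bundle(txs, capacity=6))  # -> 9: take the (4, 6) and (2, 3) txs
```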
|
[
{
"version": "v1",
"created": "Wed, 2 Jun 2021 13:55:05 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Apr 2022 12:48:42 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Chakravarty",
"Manuel M. T.",
""
],
[
"Karayannidis",
"Nikos",
""
],
[
"Kiayias",
"Aggelos",
""
],
[
"Jones",
"Michael Peyton",
""
],
[
"Vinogradova",
"Polina",
""
]
] |
new_dataset
| 0.983451 |
2110.13027
|
Wei Han
|
Wei han and Hantao Huang and Xiaoxi Yu
|
TAPL: Dynamic Part-based Visual Tracking via Attention-guided Part
Localization
|
Accepted by BMVC2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Holistic object representation-based trackers suffer from performance drops
under large appearance changes such as deformation and occlusion. In this work,
we propose a dynamic part-based tracker and constantly update the target part
representation to adapt to object appearance change. Moreover, we design an
attention-guided part localization network to directly predict the target part
locations, and determine the final bounding box with the distribution of target
parts. Our proposed tracker achieves promising results on various benchmarks:
VOT2018, OTB100 and GOT-10k.
|
[
{
"version": "v1",
"created": "Mon, 25 Oct 2021 15:05:43 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"han",
"Wei",
""
],
[
"Huang",
"Hantao",
""
],
[
"Yu",
"Xiaoxi",
""
]
] |
new_dataset
| 0.977946 |
2111.12912
|
Thanh Tam Nguyen
|
Minh Tam Pham and Thanh Trung Huynh and Van Vinh Tong and Thanh Tam
Nguyen and Thanh Thi Nguyen and Hongzhi Yin and Quoc Viet Hung Nguyen
|
A War Beyond Deepfake: Benchmarking Facial Counterfeits and
Countermeasures
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, visual forgery has reached a level of sophistication at which
humans cannot identify fraud, which poses a significant threat to information
security. A wide range of malicious applications have emerged, such as fake
news, defamation or blackmailing of celebrities, impersonation of politicians
in political warfare, and the spreading of rumours to attract views. As a
result, a rich body of visual forensic techniques has been proposed in an
attempt to stop this dangerous trend. In this paper, we present a benchmark
that provides in-depth insights into visual forgery and visual forensics, using
a comprehensive and empirical approach. More specifically, we develop an
independent framework that integrates state-of-the-arts counterfeit generators
and detectors, and measure the performance of these techniques using various
criteria. We also perform an exhaustive analysis of the benchmarking results,
to determine the characteristics of the methods that serve as a comparative
reference in this never-ending war between measures and countermeasures.
|
[
{
"version": "v1",
"created": "Thu, 25 Nov 2021 05:01:08 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Apr 2022 02:48:16 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Pham",
"Minh Tam",
""
],
[
"Huynh",
"Thanh Trung",
""
],
[
"Tong",
"Van Vinh",
""
],
[
"Nguyen",
"Thanh Tam",
""
],
[
"Nguyen",
"Thanh Thi",
""
],
[
"Yin",
"Hongzhi",
""
],
[
"Nguyen",
"Quoc Viet Hung",
""
]
] |
new_dataset
| 0.998234 |
2112.04639
|
Zhichao Li
|
Zhichao Li, Thai Duong, Nikolay Atanasov
|
Safe Autonomous Navigation for Systems with Learned SE(3) Hamiltonian
Dynamics
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safe autonomous navigation in unknown environments is an important problem
for mobile robots. This paper proposes techniques to learn the dynamics model
of a mobile robot from trajectory data and synthesize a tracking controller
with safety and stability guarantees. The state of a rigid-body robot usually
contains its position, orientation, and generalized velocity and satisfies
Hamilton's equations of motion. Instead of a hand-derived dynamics model, we
use a dataset of state-control trajectories to train a translation-equivariant
nonlinear Hamiltonian model represented as a neural ordinary differential
equation (ODE) network. The learned Hamiltonian model is used to synthesize an
energy-shaping passivity-based controller and derive conditions which guarantee
safe regulation to a desired reference pose. We enable adaptive tracking of a
desired path, subject to safety constraints obtained from obstacle distance
measurements. The trade-off between the robot's energy and the distance to
safety constraint violation is used to adaptively govern a reference pose along
the desired path. Our safe adaptive controller is demonstrated on a simulated
hexarotor robot navigating in an unknown environment.
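A minimal sketch of the core mechanics: given a (learned) Hamiltonian H(q, p), Hamilton's equations follow by automatic differentiation. The tiny MLP below merely stands in for the trained translation-equivariant model:

```python
import torch

H = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 1))  # stand-in for the learned H(q, p)

def dynamics(q, p):
    q, p = q.requires_grad_(True), p.requires_grad_(True)
    energy = H(torch.cat([q, p], dim=-1)).sum()
    dHdq, dHdp = torch.autograd.grad(energy, (q, p))
    return dHdp, -dHdq               # dq/dt = dH/dp, dp/dt = -dH/dq

q, p = torch.zeros(1, 2), torch.ones(1, 2)
dq, dp = dynamics(q, p)
print(dq.shape, dp.shape)            # integrate, e.g., q + dt * dq, p + dt * dp
```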
|
[
{
"version": "v1",
"created": "Thu, 9 Dec 2021 00:54:27 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 23:01:11 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Li",
"Zhichao",
""
],
[
"Duong",
"Thai",
""
],
[
"Atanasov",
"Nikolay",
""
]
] |
new_dataset
| 0.978985 |
2201.03677
|
Tiziano Piccardi
|
Sylvain Lugeon, Tiziano Piccardi, Robert West
|
Homepage2Vec: Language-Agnostic Website Embedding and Classification
|
Published in Proc. of ICWSM 2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, publicly available models for website classification do not offer
an embedding method and have limited support for languages beyond English. We
release a dataset of more than two million category-labeled websites in 92
languages collected from Curlie, the largest multilingual human-edited Web
directory. The dataset contains 14 website categories aligned across languages.
Alongside it, we introduce Homepage2Vec, a machine-learned pre-trained model
for classifying and embedding websites based on their homepage in a
language-agnostic way. Homepage2Vec, thanks to its feature set (textual
content, metadata tags, and visual attributes) and recent progress in natural
language representation, is language-independent by design and generates
embedding-based representations. We show that Homepage2Vec correctly classifies
websites with a macro-averaged F1-score of 0.90, with stable performance across
low- as well as high-resource languages. Feature analysis shows that a small
subset of efficiently computable features suffices to achieve high performance
even with limited computational resources. We make publicly available the
curated Curlie dataset aligned across languages, the pre-trained Homepage2Vec
model, and the accompanying libraries.
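A hedged usage sketch of the released library; the import path and method names follow our reading of its documentation and should be verified against the repository:

```python
from homepage2vec.model import WebsiteClassifier  # assumed import path

model = WebsiteClassifier()
website = model.fetch_website("epfl.ch")      # fetch and parse the homepage
scores, embedding = model.predict(website)    # 14 category scores + embedding
print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])
```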
|
[
{
"version": "v1",
"created": "Mon, 10 Jan 2022 22:31:48 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 10:05:21 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Apr 2022 17:47:06 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Lugeon",
"Sylvain",
""
],
[
"Piccardi",
"Tiziano",
""
],
[
"West",
"Robert",
""
]
] |
new_dataset
| 0.999807 |
2202.13505
|
Zhengwei Bai
|
Zhengwei Bai, Saswat Priyadarshi Nayak, Xuanpeng Zhao, Guoyuan Wu,
Matthew J. Barth, Xuewei Qi, Yongkang Liu, Emrah Akin Sisbot, Kentaro Oguchi
|
Cyber Mobility Mirror: A Deep Learning-based Real-World Object
Perception Platform Using Roadside LiDAR
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Object perception plays a fundamental role in Cooperative Driving Automation
(CDA) which is regarded as a revolutionary promoter for the next-generation
transportation systems. However, vehicle-based perception may suffer from
limited sensing range and occlusion, as well as from low penetration rates of
connectivity. In this paper, we propose Cyber Mobility Mirror (CMM), a
next-generation real-time traffic surveillance system for 3D object perception
and reconstruction, to explore the potential of roadside sensors for enabling
CDA in the real world. The CMM system consists of six main components: 1) the
data pre-processor to retrieve and preprocess the raw data; 2) the roadside 3D
object detector to generate 3D detection results; 3) the multi-object tracker
to identify detected objects; 4) the global locator to map positioning
information from the LiDAR coordinate to geographic coordinate using coordinate
transformation; 5) the cloud-based communicator to transmit perception
information from roadside sensors to equipped vehicles, and 6) the onboard
advisor to reconstruct and display the real-time traffic conditions via
Graphical User Interface (GUI). In this study, a field-operational system is
deployed at a real-world intersection, University Avenue and Iowa Avenue in
Riverside, California to assess the feasibility and performance of our CMM
system. Results from field tests demonstrate that our CMM prototype system can
provide satisfactory perception performance with 96.99% precision and 83.62%
recall. High-fidelity real-time traffic conditions (at the object level) can be
geo-localized with an average error of 0.14m and displayed on the GUI of the
equipped vehicle with a frequency of 3-4 Hz.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 01:58:24 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 23:31:07 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Bai",
"Zhengwei",
""
],
[
"Nayak",
"Saswat Priyadarshi",
""
],
[
"Zhao",
"Xuanpeng",
""
],
[
"Wu",
"Guoyuan",
""
],
[
"Barth",
"Matthew J.",
""
],
[
"Qi",
"Xuewei",
""
],
[
"Liu",
"Yongkang",
""
],
[
"Sisbot",
"Emrah Akin",
""
],
[
"Oguchi",
"Kentaro",
""
]
] |
new_dataset
| 0.998995 |
2203.01072
|
Dingding Cai
|
Dingding Cai, Janne Heikkil\"a, Esa Rahtu
|
OVE6D: Object Viewpoint Encoding for Depth-based 6D Object Pose
Estimation
|
CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a universal framework, called OVE6D, for model-based 6D
object pose estimation from a single depth image and a target object mask. Our
model is trained using purely synthetic data rendered from ShapeNet, and,
unlike most of the existing methods, it generalizes well on new real-world
objects without any fine-tuning. We achieve this by decomposing the 6D pose
into viewpoint, in-plane rotation around the camera optical axis and
translation, and introducing novel lightweight modules for estimating each
component in a cascaded manner. The resulting network contains less than 4M
parameters while demonstrating excellent performance on the challenging T-LESS
and Occluded LINEMOD datasets without any dataset-specific training. We show
that OVE6D outperforms some contemporary deep learning-based pose estimation
methods specifically trained for individual objects or datasets with real-world
training data.
The implementation and the pre-trained model will be made publicly available.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 12:51:33 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Mar 2022 13:38:13 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Apr 2022 18:35:18 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Cai",
"Dingding",
""
],
[
"Heikkilä",
"Janne",
""
],
[
"Rahtu",
"Esa",
""
]
] |
new_dataset
| 0.998509 |
2203.01577
|
Yunze Liu
|
Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen,
Boqiang Liang, Zhoujie Fu, He Wang, Li Yi
|
HOI4D: A 4D Egocentric Dataset for Category-Level Human-Object
Interaction
| null |
CVPR2022
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present HOI4D, a large-scale 4D egocentric dataset with rich annotations,
to catalyze the research of category-level human-object interaction. HOI4D
consists of 2.4M RGB-D egocentric video frames over 4000 sequences collected by
4 participants interacting with 800 different object instances from 16
categories over 610 different indoor rooms. Frame-wise annotations for panoptic
segmentation, motion segmentation, 3D hand pose, category-level object pose and
hand action have also been provided, together with reconstructed object meshes
and scene point clouds. With HOI4D, we establish three benchmarking tasks to
promote category-level HOI from 4D visual signals including semantic
segmentation of 4D dynamic point cloud sequences, category-level object pose
tracking, and egocentric action segmentation with diverse interaction targets.
In-depth analysis shows that HOI4D poses significant challenges to existing
methods and opens up rich research opportunities.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 09:02:52 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 06:51:56 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Apr 2022 08:34:00 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Liu",
"Yunze",
""
],
[
"Liu",
"Yun",
""
],
[
"Jiang",
"Che",
""
],
[
"Lyu",
"Kangbo",
""
],
[
"Wan",
"Weikang",
""
],
[
"Shen",
"Hao",
""
],
[
"Liang",
"Boqiang",
""
],
[
"Fu",
"Zhoujie",
""
],
[
"Wang",
"He",
""
],
[
"Yi",
"Li",
""
]
] |
new_dataset
| 0.999612 |
2203.01754
|
Zijian Dong
|
Zijian Dong, Chen Guo, Jie Song, Xu Chen, Andreas Geiger, Otmar
Hilliges
|
PINA: Learning a Personalized Implicit Neural Avatar from a Single RGB-D
Video Sequence
|
CVPR'2022; Video: https://youtu.be/oGpKUuD54Qk | Project page:
https://zj-dong.github.io/pina/ | Supplementary Material:
https://ait.ethz.ch/projects/2022/pina/downloads/supp.pdf
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel method to learn Personalized Implicit Neural Avatars
(PINA) from a short RGB-D sequence. This allows non-expert users to create a
detailed and personalized virtual copy of themselves, which can be animated
with realistic clothing deformations. PINA does not require complete scans, nor
does it require a prior learned from large datasets of clothed humans. Learning
a complete avatar in this setting is challenging, since only a few depth
observations are available, which are noisy and incomplete (i.e. only partial
visibility of the body per frame). We propose a method to learn the shape and
non-rigid deformations via a pose-conditioned implicit surface and a
deformation field, defined in canonical space. This allows us to fuse all
partial observations into a single consistent canonical representation. Fusion
is formulated as a global optimization problem over the pose, shape and
skinning parameters. The method can learn neural avatars from real noisy RGB-D
sequences for a diverse set of people and clothing styles and these avatars can
be animated given unseen motion sequences.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 15:04:55 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Apr 2022 16:19:05 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Dong",
"Zijian",
""
],
[
"Guo",
"Chen",
""
],
[
"Song",
"Jie",
""
],
[
"Chen",
"Xu",
""
],
[
"Geiger",
"Andreas",
""
],
[
"Hilliges",
"Otmar",
""
]
] |
new_dataset
| 0.999725 |
2204.03262
|
TaeYoung Kang
|
TaeYoung Kang, Eunrang Kwon, Junbum Lee, Youngeun Nam, Junmo Song,
JeongKyu Suh
|
Korean Online Hate Speech Dataset for Multilabel Classification: How Can
Social Science Improve Dataset on Hate Speech?
|
12 pages, 3 tables
| null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We suggest a multilabel Korean online hate speech dataset that covers seven
categories of hate speech: (1) Race and Nationality, (2) Religion, (3)
Regionalism, (4) Ageism, (5) Misogyny, (6) Sexual Minorities, and (7) Male. Our
35K dataset consists of 24K online comments with a Krippendorff's alpha label
agreement of .713, 2.2K neutral sentences from Wikipedia, 1.7K additional
sentences labeled through a human-in-the-loop procedure, and 7.1K
rule-generated neutral sentences. The base model trained on the initial 24K
dataset achieved an LRAP of .892, which improved to .919 after 11K additional
data were added. Unlike the conventional binary hate/non-hate dichotomy, we
designed the dataset considering both the cultural and linguistic context to
overcome the limitations of Western-culture-based English texts. Thus, this
paper not only presents a local hate speech dataset but also serves as a manual
for building more generalized hate speech datasets with diverse cultural
backgrounds, grounded in social science perspectives.
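LRAP, the metric quoted here, averages the precision at the ranks of each
comment's true labels. A quick sanity check with scikit-learn, using made-up
scores over the seven categories:

```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

# Toy multilabel setup: 3 comments x 7 hate-speech categories (invented data).
y_true = np.array([[1, 0, 0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0, 1, 1]])
y_score = np.array([[0.9, 0.1, 0.2, 0.1, 0.8, 0.3, 0.1],
                    [0.2, 0.1, 0.7, 0.2, 0.1, 0.1, 0.1],
                    [0.1, 0.6, 0.2, 0.1, 0.2, 0.8, 0.5]])

# Every true label outranks every false one here, so LRAP is 1.0.
print(label_ranking_average_precision_score(y_true, y_score))
```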
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 07:29:06 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Apr 2022 04:04:27 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Kang",
"TaeYoung",
""
],
[
"Kwon",
"Eunrang",
""
],
[
"Lee",
"Junbum",
""
],
[
"Nam",
"Youngeun",
""
],
[
"Song",
"Junmo",
""
],
[
"Suh",
"JeongKyu",
""
]
] |
new_dataset
| 0.999822 |
2204.03696
|
Lily Chung
|
Lily Chung, Erik D. Demaine, Dylan Hendrickson, Victor Luo
|
Flat Folding an Unassigned Single-Vertex Complex (Combinatorially
Embedded Planar Graph with Specified Edge Lengths) without Flat Angles
|
17 pages, 8 figures, to appear in Proceedings of the 38th
International Symposium on Computational Geometry
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A foundational result in origami mathematics is Kawasaki and Justin's simple,
efficient characterization of flat foldability for unassigned single-vertex
crease patterns (where each crease can fold mountain or valley) on flat
material. This result was later generalized to cones of material, where the
angles glued at the single vertex may not sum to $360^\circ$. Here we
generalize these results to when the material forms a complex (instead of a
manifold), and thus the angles are glued at the single vertex in the structure
of an arbitrary planar graph (instead of a cycle). Like the earlier
characterizations, we require all creases to fold mountain or valley, not
remain unfolded flat; otherwise, the problem is known to be NP-complete (weakly
for flat material and strongly for complexes). Equivalently, we efficiently
characterize which combinatorially embedded planar graphs with prescribed edge
lengths can fold flat, when all angles must be mountain or valley (not unfolded
flat). Our algorithm runs in $O(n \log^3 n)$ time, improving on the previous
best algorithm of $O(n^2 \log n)$.
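For intuition, the classical Kawasaki-Justin condition for the flat-paper
manifold case that this work generalizes is checkable in a few lines. This
sketch is only the classical test; it does not implement the paper's
$O(n \log^3 n)$ algorithm for complexes:

```python
def kawasaki_flat_foldable(angles):
    """Single-vertex flat-foldability test on flat paper (manifold case).

    angles: consecutive sector angles between creases, in degrees.
    """
    if len(angles) % 2 != 0:           # an odd number of creases never folds flat
        return False
    if abs(sum(angles) - 360) > 1e-9:  # flat (non-cone) material assumed
        return False
    alternating = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
    return abs(alternating) < 1e-9     # Kawasaki: alternating sum must vanish

print(kawasaki_flat_foldable([120, 60, 60, 120]))  # True:  120-60+60-120 = 0
print(kawasaki_flat_foldable([90, 45, 45, 180]))   # False: 90-45+45-180 = -90
```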
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 19:09:14 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Chung",
"Lily",
""
],
[
"Demaine",
"Erik D.",
""
],
[
"Hendrickson",
"Dylan",
""
],
[
"Luo",
"Victor",
""
]
] |
new_dataset
| 0.98864 |
2204.03738
|
Felipe Oviedo
|
Felipe Oviedo, Srinivas Vinnakota, Eugene Seleznev, Hemant Malhotra,
Saqib Shaikh, Juan Lavista Ferres
|
BankNote-Net: Open dataset for assistive universal currency recognition
|
Pre-print
| null | null | null |
cs.CV cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Millions of people around the world have low or no vision. Assistive software
applications have been developed for a variety of day-to-day tasks, including
optical character recognition, scene identification, person recognition, and
currency recognition. This last task, the recognition of banknotes from
different denominations, has been addressed by the use of computer vision
models for image recognition. However, the datasets and models available for
this task are limited, both in terms of dataset size and in variety of
currencies covered. In this work, we collect a total of 24,826 images of
banknotes in a variety of assistive settings, spanning 17 currencies and 112
denominations. Using supervised contrastive learning, we develop a machine
learning model for universal currency recognition. This model learns compliant
embeddings of banknote images in a variety of contexts, which can be shared
publicly (as a compressed vector representation), and can be used to train and
test specialized downstream models for any currency, including those not
covered by our dataset or for which only a few real images per denomination are
available (few-shot learning). We deploy a variation of this model for public
use in the last version of the Seeing AI app developed by Microsoft. We share
our encoder model and the embeddings as an open dataset in our BankNote-Net
repository.
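The few-shot workflow described here reduces to fitting a small classifier on
top of the shared embeddings. A hedged sketch: the file names and the
50-sample split below are assumptions for illustration, not part of the
BankNote-Net release.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical exports: precomputed embeddings plus denomination labels.
emb = np.load("banknote_embeddings.npy")   # shape (n_images, embedding_dim)
labels = np.load("banknote_labels.npy")    # shape (n_images,)

# Few-shot: fit a lightweight classifier on a handful of labeled examples,
# then predict denominations for unseen images of the same currency.
clf = LogisticRegression(max_iter=1000)
clf.fit(emb[:50], labels[:50])
print(clf.predict(emb[50:55]))
```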
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 21:16:54 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Oviedo",
"Felipe",
""
],
[
"Vinnakota",
"Srinivas",
""
],
[
"Seleznev",
"Eugene",
""
],
[
"Malhotra",
"Hemant",
""
],
[
"Shaikh",
"Saqib",
""
],
[
"Ferres",
"Juan Lavista",
""
]
] |
new_dataset
| 0.999811 |
2204.03755
|
Beth Malmskog
|
Mar\'ia Chara, Sam Kottler, Beth Malmskog, Bianca Thompson, and
Mckenzie West
|
Minimum Distance and Parameter Ranges of Locally Recoverable Codes with
Availability from Fiber Products of Curves
| null | null | null | null |
cs.IT math.IT math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct families of locally recoverable codes with availability $t\geq
2$ using fiber products of curves, determine the exact minimum distance of many
families, and prove a general theorem for minimum distance of such codes. The
paper concludes with an exploration of parameters of codes from these families
and the fiber product construction more generally. We show that fiber product
codes can achieve arbitrarily large rate and arbitrarily small relative defect,
and compare to known bounds and important constructions from the literature.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 21:59:23 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Chara",
"María",
""
],
[
"Kottler",
"Sam",
""
],
[
"Malmskog",
"Beth",
""
],
[
"Thompson",
"Bianca",
""
],
[
"West",
"Mckenzie",
""
]
] |
new_dataset
| 0.998957 |
2204.03764
|
Debasish Chakroborti
|
Debasish Chakroborti, Kevin A. Schneider, Chanchal K. Roy
|
Backports: Change Types, Challenges and Strategies
|
In 30th International Conference on Program Comprehension (ICPC 22),
May 16 to 17, 2022, Virtual Event, Pittsburgh
| null |
10.1145/3524610.3527920
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Source code repositories allow developers to manage multiple versions (or
branches) of a software system. Pull-requests are used to modify a branch, and
backporting is a regular activity used to port changes from a current
development branch to other versions. In open-source software, backports are
common and often need to be adapted by hand, which motivates us to explore
backports and backporting challenges and strategies. In our exploration of
68,424 backports from 10 GitHub projects, we found that bug, test, document,
and feature changes are commonly backported. We identified a number of
backporting challenges, including that backports were inconsistently linked to
their original pull-request (49%), that backports had incompatible code (13%),
that backports failed to be accepted (10%), and that there were backporting
delays (16 days to create, 5 days to merge). We identified some general
strategies for addressing backporting issues. We also noted that backporting
strategies depend on the project type and that further investigation is needed
to determine their suitability. Furthermore, we created the first-ever
backports dataset that can be used by other researchers and practitioners for
investigating backports and backporting.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 22:39:10 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Chakroborti",
"Debasish",
""
],
[
"Schneider",
"Kevin A.",
""
],
[
"Roy",
"Chanchal K.",
""
]
] |
new_dataset
| 0.97866 |
2204.03779
|
Julian Jang-Jaccard Dr.
|
Amardeep Singh and Julian Jang-Jaccard
|
Autoencoder-based Unsupervised Intrusion Detection using Multi-Scale
Convolutional Recurrent Networks
|
arXiv admin note: text overlap with arXiv:2111.00626
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The massive growth of network traffic data leads to a large volume of
datasets. Labeling these datasets for identifying intrusion attacks is very
laborious and error-prone. Furthermore, network traffic data have complex
time-varying non-linear relationships. The existing state-of-the-art intrusion
detection solutions use a combination of various supervised approaches along
with fused features subsets based on correlations in traffic data. These
solutions often require high computational cost, manual support in fine-tuning
intrusion detection models, and labeling of data that limit real-time
processing of network traffic. Unsupervised solutions do reduce computational
complexities and manual support for labeling data but current unsupervised
solutions do not consider spatio-temporal correlations in traffic data. To
address this, we propose a unified Autoencoder based on combining multi-scale
convolutional neural network and long short-term memory (MSCNN-LSTM-AE) for
anomaly detection in network traffic. The model first employs Multiscale
Convolutional Neural Network Autoencoder (MSCNN-AE) to analyze the spatial
features of the dataset, and then latent space features learned from MSCNN-AE
employs Long Short-Term Memory (LSTM) based Autoencoder Network to process the
temporal features. Our model further employs two Isolation Forest algorithms as
error correction mechanisms to detect false positives and false negatives to
improve detection accuracy. We evaluated our model on the NSL-KDD, UNSW-NB15,
and CICDDoS2019 datasets and showed that our proposed method significantly
outperforms conventional unsupervised methods and other existing studies on
these datasets.
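As a rough illustration of the architecture's shape, a minimal PyTorch sketch
of a multi-scale CNN feeding an LSTM autoencoder follows. Layer sizes, kernels,
and pooling are our assumptions; the paper's exact design and its Isolation
Forest correction stage are not reproduced here.

```python
import torch
import torch.nn as nn

class MSCNNLSTMAE(nn.Module):
    """Sketch of a multi-scale CNN + LSTM autoencoder (assumed sizes)."""
    def __init__(self, n_features=41, hidden=64):
        super().__init__()
        # Three conv branches with different kernel sizes over the feature axis.
        self.branches = nn.ModuleList(
            nn.Conv1d(1, 8, k, padding=k // 2) for k in (3, 5, 7))
        self.enc = nn.LSTM(24, hidden, batch_first=True)  # 24 = 3 branches x 8
        self.dec = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                     # x: (batch, seq_len, n_features)
        b, t, f = x.shape
        z = x.reshape(b * t, 1, f)
        z = torch.cat([torch.relu(c(z)) for c in self.branches], dim=1)
        z = z.mean(dim=2).reshape(b, t, -1)   # pool spatial features per step
        h, _ = self.enc(z)                    # temporal encoding
        h, _ = self.dec(h)
        return self.out(h)                    # reconstruction; high error = anomaly
```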
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 23:59:30 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Singh",
"Amardeep",
""
],
[
"Jang-Jaccard",
"Julian",
""
]
] |
new_dataset
| 0.996408 |
2204.03800
|
Mina Henein
|
Zena Assaad and Mina Henein
|
End-of-Life of Software: How is it Defined and Managed?
|
13 pages, white paper
| null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid development of new software and algorithms, fueled by the immense
amount of data available, has made the shelf life of software products a lot
shorter. With a rough estimate of more than 40,000 new software projects
developed every day, it is becoming quicker and cheaper to abandon old software
and acquire new software that meets rapidly changing needs and demands. What
happens to software that is abandoned and what consequences may arise from
'throwaway' culture (Cooper, 2005) are still open questions. This paper will
explore the systems engineering concept of end-of-life for software, it will
highlight the gaps in existing software engineering practices, it will bring
forward examples of software that has been abandoned in an attempt to
decommission it, and it will explore the repercussions of abandoned software
artefacts. A proposed way forward for addressing the identified research gaps
is also detailed.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 01:15:02 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Assaad",
"Zena",
""
],
[
"Henein",
"Mina",
""
]
] |
new_dataset
| 0.994325 |
2204.03830
|
Jiazhao Li
|
Jiazhao Li, Corey Lester, Xinyan Zhao, Yuting Ding, Yun Jiang,
V.G.Vinod Vydiswaran
|
PharmMT: A Neural Machine Translation Approach to Simplify Prescription
Directions
|
Findings of EMNLP '20 Camera Ready
|
Findings of EMNLP (2020) 2785--2796
|
10.18653/v1/2020.findings-emnlp.251
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The language used by physicians and health professionals in prescription
directions includes medical jargon and implicit directives and causes much
confusion among patients. Human intervention to simplify the language at the
pharmacies may introduce additional errors that can lead to potentially severe
health outcomes. We propose a novel machine translation-based approach,
PharmMT, to automatically and reliably simplify prescription directions into
patient-friendly language, thereby significantly reducing pharmacist workload.
We evaluate the proposed approach over a dataset consisting of over 530K
prescriptions obtained from a large mail-order pharmacy. The end-to-end system
achieves a BLEU score of 60.27 against the reference directions generated by
pharmacists, a 39.6% relative improvement over the rule-based normalization.
Pharmacists judged 94.3% of the simplified directions as usable as-is or with
minimal changes. This work demonstrates the feasibility of a machine
translation-based tool for simplifying prescription directions in real life.
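The BLEU evaluation mentioned above can be reproduced in spirit with NLTK; the
sentences below are invented, not drawn from the prescription dataset.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Invented example: a pharmacist reference vs. a machine-simplified direction.
reference = [["take", "one", "tablet", "by", "mouth", "twice", "daily"]]
candidate = ["take", "1", "tablet", "by", "mouth", "two", "times", "daily"]

# corpus_bleu expects one list of references per candidate sentence.
score = corpus_bleu([reference], [candidate],
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.4f}")
```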
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 04:03:56 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Li",
"Jiazhao",
""
],
[
"Lester",
"Corey",
""
],
[
"Zhao",
"Xinyan",
""
],
[
"Ding",
"Yuting",
""
],
[
"Jiang",
"Yun",
""
],
[
"Vydiswaran",
"V. G. Vinod",
""
]
] |
new_dataset
| 0.999752 |
2204.03858
|
Kowndinya Boyalakuntla
|
Kowndinya Boyalakuntla, Marimuthu C, Sridhar Chimalakonda,
Chandrasekaran K
|
eGEN: An Energy-saving Modeling Language and Code Generator for
Location-sensing of Mobile Apps
|
27 pages, 7 figures, 6 tables
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The demand for reducing the energy consumption of location-based applications
has increased in recent years. The abnormal battery-draining behavior of GPS
makes it difficult for the developers to decide on battery optimization during
the development phase directly. It will reduce the burden on developers if
battery-saving strategies are considered early, and relevant battery-aware code
is generated from the design phase artifacts. Therefore, we aim to develop tool
support, eGEN, to specify and create native location-based mobile apps. eGEN
consists of Domain-specific Modeling Language (DSML) and a code generator for
location-sensing. It is developed using Xtext and Xtend as an Eclipse plug-in,
and currently, it supports native Android apps. eGEN is evaluated through
controlled experiments by instrumenting the generated code in five
location-based open-source Android applications. The experimental results show
4.35 minutes of average GPS reduction per hour and 188 mA of average reduction
in battery consumption, while incurring a degradation of only 97 meters in
location accuracy over a 3-kilometer cycling path. Hence, we believe that code
generated by eGEN would help developers to balance between energy and accuracy
requirements of location-based applications. The source code, documentation,
tool demo video, and tool installation video are available at
https://github.com/Kowndinya2000/egen.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 05:50:26 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Boyalakuntla",
"Kowndinya",
""
],
[
"C",
"Marimuthu",
""
],
[
"Chimalakonda",
"Sridhar",
""
],
[
"K",
"Chandrasekaran",
""
]
] |
new_dataset
| 0.997082 |
2204.03871
|
Meisin Lee
|
Meisin Lee, Lay-Ki Soon, Eu-Gene Siew, Ly Fie Sugianto
|
CrudeOilNews: An Annotated Crude Oil News Corpus for Event Extraction
|
Accepted at LREC 2022. arXiv admin note: text overlap with
arXiv:2105.08214
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present CrudeOilNews, a corpus of English Crude Oil news
for event extraction. It is the first of its kind for Commodity News and serves
to contribute towards resource building for economic and financial text mining.
This paper describes the data collection process, the annotation methodology,
and the event typology used in producing the corpus. Firstly, a seed set of 175
news articles was manually annotated, of which a subset of 25 articles was used
as the adjudicated reference test set for inter-annotator and system
evaluation. Agreement was generally substantial and annotator performance was
adequate, indicating that the annotation scheme produces consistent event
annotations of high quality. Subsequently, the dataset was expanded through (1)
data augmentation and (2) human-in-the-loop active learning. The resulting
corpus has 425 news articles with approximately 11k events annotated. As part
of the active learning process, the corpus was used to train basic event
extraction models for machine labeling; the resulting models also serve as a
validation, or pilot study, demonstrating the use of the corpus for machine
learning purposes. The annotated corpus is made available for academic research
purposes at https://github.com/meisin/CrudeOilNews-Corpus.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 06:51:35 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Lee",
"Meisin",
""
],
[
"Soon",
"Lay-Ki",
""
],
[
"Siew",
"Eu-Gene",
""
],
[
"Sugianto",
"Ly Fie",
""
]
] |
new_dataset
| 0.995739 |
2204.03951
|
Aleksandr Nesterov
|
Alexander Yalunin, Alexander Nesterov, and Dmitriy Umerenkov
|
RuBioRoBERTa: a pre-trained biomedical language model for Russian
language biomedical text mining
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents several BERT-based models for Russian language biomedical
text mining (RuBioBERT, RuBioRoBERTa). The models are pre-trained on a corpus
of freely available texts in the Russian biomedical domain. With this
pre-training, our models demonstrate state-of-the-art results on RuMedBench, a
Russian medical language understanding benchmark that covers a diverse set of
tasks, including text classification, question answering, natural language
inference, and named entity recognition.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 09:18:59 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Yalunin",
"Alexander",
""
],
[
"Nesterov",
"Alexander",
""
],
[
"Umerenkov",
"Dmitriy",
""
]
] |
new_dataset
| 0.993268 |
2204.03998
|
Narges Norouzi
|
Narges Norouzi, Reza Azmi, Sara Saberi Tehrani Moghadam, Maral Zarvani
|
SnapMode: An Intelligent and Distributed Large-Scale Fashion Image
Retrieval Platform Based On Big Data and Deep Generative Adversarial Network
Technologies
| null | null | null | null |
cs.IR cs.AI cs.CV cs.DC cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Fashion is now among the largest industries worldwide, for it represents
human history and helps tell the world's story. As a result of the Fourth
Industrial Revolution, the Internet has become an increasingly important source
of fashion information. However, with a growing number of web pages and social
data, it is nearly impossible for humans to manually catch up with the ongoing
evolution and the continuously variable content in this domain. The proper
management and exploitation of big data can pave the way for the substantial
growth of the global economy as well as citizen satisfaction. Therefore,
computer scientists have found it challenging to handle e-commerce fashion
websites by using big data and machine learning technologies. This paper first
proposes a scalable focused Web Crawler engine based on the distributed
computing platforms to extract and process fashion data on e-commerce websites.
The role of the proposed platform is then described in developing a
disentangled feature extraction method by employing deep convolutional
generative adversarial networks (DCGANs) for content-based image indexing and
retrieval. Finally, the state-of-the-art solutions are compared, and the
results of the proposed approach are analyzed on a standard dataset. For the
real-life implementation of the proposed solution, a Web-based application is
developed on Apache Storm, Kafka, Solr, and Milvus platforms to create a
fashion search engine called SnapMode.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 11:08:03 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Norouzi",
"Narges",
""
],
[
"Azmi",
"Reza",
""
],
[
"Moghadam",
"Sara Saberi Tehrani",
""
],
[
"Zarvani",
"Maral",
""
]
] |
new_dataset
| 0.995348 |
2204.04013
|
Nikola Bulatovic
|
Nikola Bulatovic, Slobodan Djukanovic
|
Mel-spectrogram features for acoustic vehicle detection and speed
estimation
|
Published in: 2022 26th International Conference on Information
Technology (IT)
| null |
10.1109/IT54280.2022.9743540
| null |
cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper addresses acoustic vehicle detection and speed estimation from
single sensor measurements. We predict the vehicle's pass-by instant by
minimizing clipped vehicle-to-microphone distance, which is predicted from the
mel-spectrogram of input audio, in a supervised learning approach. In addition,
mel-spectrogram-based features are used directly for vehicle speed estimation,
without introducing any intermediate features. The results show that the
proposed features can be used for accurate vehicle detection and speed
estimation, with an average error of 7.87 km/h. If we formulate speed
estimation as a classification problem, with a 10 km/h discretization interval,
the proposed method attains an average accuracy of 48.7% for correct class
prediction and 91.0% when an offset of one class is allowed. The proposed
method is evaluated on a dataset of 304 urban-environment on-field recordings
of ten different vehicles.
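Extracting the kind of mel-spectrogram features the paper builds on is
straightforward with librosa; the file name and STFT parameters here are
assumptions rather than the paper's settings.

```python
import librosa

# Hypothetical recording of a single vehicle pass-by.
y, sr = librosa.load("pass_by.wav", sr=44100)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                     hop_length=512, n_mels=64)
log_mel = librosa.power_to_db(mel)   # (n_mels, n_frames) input features

# Each column is one time frame; a supervised regressor over these frames can
# predict the clipped vehicle-to-microphone distance or a 10 km/h speed class.
print(log_mel.shape)
```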
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 11:53:13 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Bulatovic",
"Nikola",
""
],
[
"Djukanovic",
"Slobodan",
""
]
] |
new_dataset
| 0.99484 |
2204.04120
|
Pengyu Zhang
|
Pengyu Zhang, Jie Zhao, Dong Wang, Huchuan Lu, Xiang Ruan
|
Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline
|
to be published in CVPR22. The project is available at
https://zhang-pengyu.github.io/DUT-VTUAV/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the popularity of multi-modal sensors, visible-thermal (RGB-T) object
tracking aims to achieve robust performance and wider application scenarios with
the guidance of objects' temperature information. However, the lack of paired
training samples is the main bottleneck for unlocking the power of RGB-T
tracking. Since it is laborious to collect high-quality RGB-T sequences, recent
benchmarks only provide test sequences. In this paper, we construct a
large-scale benchmark with high diversity for visible-thermal UAV tracking
(VTUAV), including 500 sequences with 1.7 million high-resolution (1920
$\times$ 1080 pixels) frame pairs. In addition, comprehensive applications
(short-term tracking, long-term tracking and segmentation mask prediction) with
diverse categories and scenes are considered for exhaustive evaluation.
Moreover, we provide a coarse-to-fine attribute annotation, where frame-level
attributes are provided to exploit the potential of challenge-specific
trackers. In addition, we design a new RGB-T baseline, named Hierarchical
Multi-modal Fusion Tracker (HMFT), which fuses RGB-T data in various levels.
Numerous experiments on several datasets are conducted to reveal the
effectiveness of HMFT and the complementarity of different fusion types. The
project is available here.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 15:22:33 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Zhang",
"Pengyu",
""
],
[
"Zhao",
"Jie",
""
],
[
"Wang",
"Dong",
""
],
[
"Lu",
"Huchuan",
""
],
[
"Ruan",
"Xiang",
""
]
] |
new_dataset
| 0.998886 |
2204.04139
|
Rongjun Qin
|
Shengxi Gui, Rongjun Qin, Yang Tang
|
Sat2lod2: A Software For Automated Lod-2 Modeling From Satellite-Derived
Orthophoto And Digital Surface Model
|
to be published in ISPRS congress 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deriving LoD2 models from orthophoto and digital surface models (DSM)
reconstructed from satellite images is a challenging task. Existing solutions
are mostly system approaches that require complicated step-wise processes,
including not only heuristic geometric operations, but also high-level steps
such as machine learning-based semantic segmentation and building detection.
In this paper, we describe an open-source tool, called SAT2LoD2, built on a
slightly modified version of our recently published work. SAT2LoD2 is
a fully open-source and GUI (Graphics User Interface) based software, coded in
Python, which takes an orthophoto and DSM as inputs, and outputs individual
building models, and it can additionally take road network shapefiles, and
customized classification maps to further improve the reconstruction results.
We further improve the robustness of the method by 1) integrating HRNetV2-based
building segmentation into our software; and 2) implementing a decision
strategy to identify complex buildings and directly generate meshes, avoiding
erroneous LoD2 reconstruction from a system point of view. The software
can process a moderate level of data (around 5000*5000 size of orthophoto and
DSM) using a PC with a graphics card supporting CUDA. Furthermore, the GUI is
self-contained and stores the intermediate processing results facilitating
researchers to learn the process easily and reuse intermediate files as needed.
The updated codes and software are available under this GitHub page:
https://github.com/GDAOSU/LOD2BuildingModel.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 15:49:35 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Gui",
"Shengxi",
""
],
[
"Qin",
"Rongjun",
""
],
[
"Tang",
"Yang",
""
]
] |
new_dataset
| 0.999183 |
2204.04154
|
Rachit Agarwal
|
Vikas Maurya, Rachit Agarwal, Saurabh Kumar, Sandeep Kumar Shukla
|
EPASAD: Ellipsoid decision boundary based Process-Aware Stealthy Attack
Detector
|
Submitted
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the importance of Critical Infrastructure (CI) in a nation's economy,
they have been lucrative targets for cyber attackers. These critical
infrastructures are usually Cyber-Physical Systems (CPS) such as power grids,
water, and sewage treatment facilities, oil and gas pipelines, etc. In recent
times, these systems have suffered from cyber attacks numerous times.
Researchers have been developing cyber security solutions for CIs to avoid
lasting damages. According to standard frameworks, cyber security based on
identification, protection, detection, response, and recovery is at the core
of this research. Detection of an ongoing attack that escapes standard
protection such as firewall, anti-virus, and host/network intrusion detection
has gained importance as such attacks eventually affect the physical dynamics
of the system. Therefore, anomaly detection in physical dynamics proves an
effective means to implement defense-in-depth. PASAD is one example of anomaly
detection in the sensor/actuator data, representing such systems' physical
dynamics. We present EPASAD, which improves the detection technique used in
PASAD to detect these micro-stealthy attacks, as our experiments show that
PASAD's spherical boundary-based detection fails to detect them. Our method EPASAD
overcomes this by using Ellipsoid boundaries, thereby tightening the boundaries
in various dimensions, whereas a spherical boundary treats all dimensions
equally. We validate EPASAD using the dataset produced by the TE-process
simulator and the C-town datasets. The results show that EPASAD improves
PASAD's average recall by 5.8% and 9.5% for the two datasets, respectively.
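The core idea, replacing a spherical boundary with an ellipsoid, amounts to a
Mahalanobis-distance check. The following toy sketch is ours, not the paper's
code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Normal-operation data: wide spread on axis 0, tight spread on axis 1.
normal = rng.multivariate_normal([0, 0], [[4.0, 0.0], [0.0, 0.25]], size=500)

center = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal.T))

def mahalanobis2(x):
    """Squared Mahalanobis distance: an ellipsoidal decision boundary."""
    d = x - center
    return d @ cov_inv @ d

# A stealthy probe that hides inside the sphere fitted to training data but
# falls outside the tighter ellipsoid along the low-variance axis.
radius2 = max(np.sum((p - center) ** 2) for p in normal)
probe = center + np.array([0.0, 2.0])
print("sphere alarm:   ", np.sum((probe - center) ** 2) > radius2)  # False
print("ellipsoid alarm:", mahalanobis2(probe) > 9.0)   # True (~4 sigma away)
```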
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 16:06:10 GMT"
}
] | 2022-04-11T00:00:00 |
[
[
"Maurya",
"Vikas",
""
],
[
"Agarwal",
"Rachit",
""
],
[
"Kumar",
"Saurabh",
""
],
[
"Shukla",
"Sandeep Kumar",
""
]
] |
new_dataset
| 0.999143 |
1912.01059
|
Lukas Ruppert
|
Fabian Groh, Lukas Ruppert, Patrick Wieschollek, Hendrik P.A. Lensch
|
GGNN: Graph-based GPU Nearest Neighbor Search
| null | null |
10.1109/TBDATA.2022.3161156
| null |
cs.CV cs.DB cs.DS cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Approximate nearest neighbor (ANN) search in high dimensions is an integral
part of several computer vision systems and gains importance in deep learning
with explicit memory representations. Since PQT, FAISS, and SONG started to
leverage the massive parallelism offered by GPUs, GPU-based implementations are
a crucial resource for today's state-of-the-art ANN methods. While most of
these methods allow for faster queries, less emphasis is devoted to
accelerating the construction of the underlying index structures. In this
paper, we propose a novel GPU-friendly search structure based on nearest
neighbor graphs and information propagation on graphs. Our method is designed
to take advantage of GPU architectures to accelerate the hierarchical
construction of the index structure and for performing the query. Empirical
evaluation shows that GGNN significantly surpasses the state-of-the-art CPU-
and GPU-based systems in terms of build-time, accuracy and search speed.
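On the query side, graph-based ANN search reduces to greedy descent over a
neighborhood graph. A CPU toy sketch of that step (GGNN's contribution,
GPU-parallel hierarchical construction and querying, is not captured here):

```python
import numpy as np

def greedy_search(query, vectors, neighbors, start=0, steps=100):
    """Greedy best-first descent on a precomputed kNN graph."""
    best = start
    best_d = np.sum((vectors[best] - query) ** 2)
    for _ in range(steps):
        improved = False
        for nb in neighbors[best]:                 # scan current node's edges
            d = np.sum((vectors[nb] - query) ** 2)
            if d < best_d:
                best, best_d, improved = nb, d, True
        if not improved:                           # local minimum reached
            break
    return best, best_d

vecs = np.random.default_rng(0).normal(size=(1000, 32))
# Brute-force 8-NN graph for the toy example (real systems build this faster).
nbrs = [np.argsort(((vecs - v) ** 2).sum(1))[1:9] for v in vecs]
print(greedy_search(vecs[123] + 0.01, vecs, nbrs))  # should land near node 123
```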
|
[
{
"version": "v1",
"created": "Mon, 2 Dec 2019 19:46:13 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Dec 2019 08:15:19 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Apr 2021 15:49:47 GMT"
},
{
"version": "v4",
"created": "Thu, 7 Apr 2022 14:49:40 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Groh",
"Fabian",
""
],
[
"Ruppert",
"Lukas",
""
],
[
"Wieschollek",
"Patrick",
""
],
[
"Lensch",
"Hendrik P. A.",
""
]
] |
new_dataset
| 0.993039 |
2109.08228
|
George Chustz
|
George Chustz and Srikanth Saripalli
|
ROOAD: RELLIS Off-road Odometry Analysis Dataset
|
7 pages, 6 figures, 5 tables, IV 2022 conference
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The development and implementation of visual-inertial odometry (VIO) has
focused on structured environments, but interest in localization in off-road
environments is growing. In this paper, we present the RELLIS Off-road Odometry
Analysis Dataset (ROOAD) which provides high-quality, time-synchronized
off-road monocular visual-inertial data sequences to further the development of
related research. We evaluated the dataset on two state-of-the-art VIO
algorithms, (1) Open-VINS and (2) VINS-Fusion. Our findings indicate that both
algorithms perform 2 to 30 times worse on the ROOAD dataset compared to their
performance in structured environments. Furthermore, OpenVINS has better
tracking stability and real-time performance than VINS-Fusion in the off-road
environment, while VINS-Fusion outperformed OpenVINS in tracking accuracy in
several data sequences. Since the camera-IMU calibration tool from Kalibr
toolkit is used extensively in this work, we have included several calibration
data sequences. Our hand measurements show Kalibr's tool achieved +/-1 degree
for orientation error and +/-1 mm at best (x- and y-axis) and +/-10 mm (z-axis)
at worst for position error in the camera frame between the camera and IMU.
This novel dataset provides a new set of scenarios for researchers to design
and test their localization algorithms on, as well as critical insights in the
current performance of VIO in off-road environments.
ROOAD Dataset: github.com/unmannedlab/ROOAD
|
[
{
"version": "v1",
"created": "Thu, 16 Sep 2021 21:25:12 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 15:56:26 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Chustz",
"George",
""
],
[
"Saripalli",
"Srikanth",
""
]
] |
new_dataset
| 0.999223 |
2109.08567
|
Praveen Kumar
|
Praveen Kumar, Sudhan Majhi, Subhabrata Paul
|
A Direct Construction of GCP and Binary CCC of Length Non Power of Two
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Golay complementary pairs (GCPs) and complete complementary codes (CCCs) have
found a wide range of practical applications in coding, signal processing and
wireless communication due to their ideal correlation properties. In fact,
binary CCCs have special advantages in spread spectrum communication due to
their simple modulo-2 arithmetic operation, modulation and correlation
simplicity, but they are limited in length. In this paper, we present a direct
construction of GCPs, mutually orthogonal complementary sets (MOCSs) and binary
CCCs of non-power-of-two lengths to widen their range of applications. First, a
generalised Boolean function (GBF) based truncation technique
has been used to construct GCPs of non-power-of-two lengths. Then complementary
sets (CSs) and MOCSs of lengths of the form $2^{m-1}+2^{m-3}$ ($m \geq 5$) and
$2^{m-1}+2^{m-2}+2^{m-4}$ ($m \geq 6$) are generated by GBFs. Finally, binary
CCCs with desired lengths are constructed using the union of MOCSs. The row and
column sequence peak-to-mean envelope power ratio (PMEPR) has been investigated
and compared with existing work. The column sequence PMEPR of resultant CCCs
can be effectively upper bounded by $2$.
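The defining GCP property, out-of-phase aperiodic autocorrelations of the pair
cancelling exactly, is easy to verify numerically. This check uses a textbook
length-4 binary pair, not one produced by the paper's construction:

```python
import numpy as np

def acorr(seq):
    """Aperiodic autocorrelation C(u) for shifts u = 1 .. len(seq)-1."""
    s = np.asarray(seq)
    return np.array([np.sum(s[:len(s) - u] * s[u:]) for u in range(1, len(s))])

# A classic binary Golay complementary pair of length 4.
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]

# For a GCP, the out-of-phase autocorrelation sums vanish at every shift.
print(acorr(a) + acorr(b))   # -> [0 0 0]
```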
|
[
{
"version": "v1",
"created": "Fri, 17 Sep 2021 14:24:43 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 07:27:11 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Kumar",
"Praveen",
""
],
[
"Majhi",
"Sudhan",
""
],
[
"Paul",
"Subhabrata",
""
]
] |
new_dataset
| 0.988409 |
2110.06864
|
Yifu Zhang
|
Yifu Zhang, Peize Sun, Yi Jiang, Dongdong Yu, Fucheng Weng, Zehuan
Yuan, Ping Luo, Wenyu Liu, Xinggang Wang
|
ByteTrack: Multi-Object Tracking by Associating Every Detection Box
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-object tracking (MOT) aims at estimating bounding boxes and identities
of objects in videos. Most methods obtain identities by associating detection
boxes whose scores are higher than a threshold. The objects with low detection
scores, e.g. occluded objects, are simply thrown away, which leads to
non-negligible missed true objects and fragmented trajectories. To solve this
problem, we present a simple, effective and generic association method,
tracking by associating almost every detection box instead of only the high
score ones. For the low score detection boxes, we utilize their similarities
with tracklets to recover true objects and filter out the background
detections. When applied to 9 different state-of-the-art trackers, our method
achieves consistent improvement on IDF1 score ranging from 1 to 10 points. To
push forward the state-of-the-art performance of MOT, we design a simple and
strong tracker, named ByteTrack. For the first time, we achieve 80.3 MOTA, 77.3
IDF1 and 63.1 HOTA on the test set of MOT17 with 30 FPS running speed on a
single V100 GPU. ByteTrack also achieves state-of-the-art performance on MOT20,
HiEve and BDD100K tracking benchmarks. The source code, pre-trained models with
deploy versions and tutorials of applying to other trackers are released at
https://github.com/ifzhang/ByteTrack.
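The association strategy the abstract describes can be sketched as two matching
passes. This is our paraphrase, not the released code, and greedy IoU matching
stands in for the Hungarian assignment used in practice:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def greedy_match(tracks, dets, thresh=0.3):
    matched, used = [], set()
    for t in tracks:
        cand = [(iou(t["box"], d["box"]), j) for j, d in enumerate(dets)
                if j not in used]
        if cand and max(cand)[0] >= thresh:
            score, j = max(cand)
            matched.append((t, dets[j]))
            used.add(j)
    rest = [t for t in tracks if all(t is not m[0] for m in matched)]
    return matched, rest

def associate(tracks, detections, high=0.6, low=0.1):
    strong = [d for d in detections if d["score"] >= high]
    weak = [d for d in detections if low <= d["score"] < high]
    # Stage 1: match tracklets to confident detections first.
    m1, leftover = greedy_match(tracks, strong)
    # Stage 2: leftover tracklets may still claim low-score boxes, recovering
    # occluded objects instead of discarding them as background.
    m2, unmatched = greedy_match(leftover, weak)
    return m1 + m2, unmatched
```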
|
[
{
"version": "v1",
"created": "Wed, 13 Oct 2021 17:01:26 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Oct 2021 14:07:10 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Apr 2022 16:36:24 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Zhang",
"Yifu",
""
],
[
"Sun",
"Peize",
""
],
[
"Jiang",
"Yi",
""
],
[
"Yu",
"Dongdong",
""
],
[
"Weng",
"Fucheng",
""
],
[
"Yuan",
"Zehuan",
""
],
[
"Luo",
"Ping",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Wang",
"Xinggang",
""
]
] |
new_dataset
| 0.993907 |
2111.01527
|
Irene Garcia-Camacho
|
Irene Garcia-Camacho, J\'ulia Borr\`as, Berk Calli, Adam Norton and
Guillem Aleny\`a
|
Household Cloth Object Set: Fostering Benchmarking in Deformable Object
Manipulation
|
Submitted
|
IEEE Robotics and Automation Letters, vol. 7, no. 3, pp.
5866-5873, July 2022
|
10.1109/LRA.2022.3158428
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Benchmarking of robotic manipulations is one of the open issues in robotic
research. An important factor that has enabled progress in this area in the
last decade is the existence of common object sets that have been shared among
different research groups. However, the existing object sets are very limited
when it comes to cloth-like objects that have unique particularities and
challenges. This paper is a first step towards the design of a cloth object set
to be distributed among research groups from the robotics cloth manipulation
community. We present a set of household cloth objects and related tasks that
serve to expose the challenges related to gathering such an object set and
propose a roadmap to the design of common benchmarks in cloth manipulation
tasks, with the intention to set the grounds for a future debate in the
community that will be necessary to foster benchmarking for the manipulation of
cloth-like objects. Some RGB-D and object scans are also collected as examples
for the objects in relevant configurations. More details about the cloth set
are shared in
http://www.iri.upc.edu/groups/perception/ClothObjectSet/HouseholdClothSet.html.
|
[
{
"version": "v1",
"created": "Tue, 2 Nov 2021 12:10:33 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Garcia-Camacho",
"Irene",
""
],
[
"Borràs",
"Júlia",
""
],
[
"Calli",
"Berk",
""
],
[
"Norton",
"Adam",
""
],
[
"Alenyà",
"Guillem",
""
]
] |
new_dataset
| 0.997976 |
2111.13419
|
Yerbolat Khassanov
|
Rustem Yeshpanov, Yerbolat Khassanov, Huseyin Atakan Varol
|
KazNERD: Kazakh Named Entity Recognition Dataset
|
10 pages, 1 figure, 8 tables, accepted to LREC 2022
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
We present the development of a dataset for Kazakh named entity recognition.
The dataset was built as there is a clear need for publicly available annotated
corpora in Kazakh, as well as annotation guidelines containing
straightforward--but rigorous--rules and examples. The dataset annotation,
based on the IOB2 scheme, was carried out on television news text by two native
Kazakh speakers under the supervision of the first author. The resulting
dataset contains 112,702 sentences and 136,333 annotations for 25 entity
classes. State-of-the-art machine learning models to automatise Kazakh named
entity recognition were also built, with the best-performing model achieving an
exact match F1-score of 97.22% on the test set. The annotated dataset,
guidelines, and codes used to train the models are freely available for
download under the CC BY 4.0 licence from https://github.com/IS2AI/KazNERD.
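The IOB2 scheme mentioned above encodes entities as B-/I-prefixed tags;
decoding them back into spans is a routine post-processing step. A minimal
sketch with an invented example:

```python
def iob2_spans(tokens, tags):
    """Decode IOB2 tags (e.g. B-PERSON, I-PERSON, O) into entity spans."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        boundary = tag.startswith("B-") or tag == "O" or (
            label is not None and tag[2:] != label)
        if boundary and start is not None:      # close the running entity
            spans.append((start, i, label))
            start, label = None, None
        if tag.startswith("B-"):                # open a new entity
            start, label = i, tag[2:]
    if start is not None:
        spans.append((start, len(tags), label))
    return [(" ".join(tokens[s:e]), lab) for s, e, lab in spans]

print(iob2_spans(["Abai", "Qunanbaiuly", "was", "born", "in", "Kazakhstan"],
                 ["B-PERSON", "I-PERSON", "O", "O", "O", "B-LOCATION"]))
# -> [('Abai Qunanbaiuly', 'PERSON'), ('Kazakhstan', 'LOCATION')]
```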
|
[
{
"version": "v1",
"created": "Fri, 26 Nov 2021 10:56:19 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 05:57:48 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Yeshpanov",
"Rustem",
""
],
[
"Khassanov",
"Yerbolat",
""
],
[
"Varol",
"Huseyin Atakan",
""
]
] |
new_dataset
| 0.999815 |
2111.14465
|
Denys Rozumnyi
|
Denys Rozumnyi, Martin R. Oswald, Vittorio Ferrari, Marc Pollefeys
|
Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred
Objects in Videos
|
CVPR 2022 camera-ready
|
2022 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR)
| null | null |
cs.CV cs.AI cs.GR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a method for jointly estimating the 3D motion, 3D shape, and
appearance of highly motion-blurred objects from a video. To this end, we model
the blurred appearance of a fast moving object in a generative fashion by
parametrizing its 3D position, rotation, velocity, acceleration, bounces,
shape, and texture over the duration of a predefined time window spanning
multiple frames. Using differentiable rendering, we are able to estimate all
parameters by minimizing the pixel-wise reprojection error to the input video
via backpropagating through a rendering pipeline that accounts for motion blur
by averaging the graphics output over short time intervals. For that purpose,
we also estimate the camera exposure gap time within the same optimization. To
account for abrupt motion changes like bounces, we model the motion trajectory
as a piece-wise polynomial, and we are able to estimate the specific time of
the bounce at sub-frame accuracy. Experiments on established benchmark datasets
demonstrate that our method outperforms previous methods for fast moving object
deblurring and 3D reconstruction.
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 11:25:14 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 10:09:38 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Rozumnyi",
"Denys",
""
],
[
"Oswald",
"Martin R.",
""
],
[
"Ferrari",
"Vittorio",
""
],
[
"Pollefeys",
"Marc",
""
]
] |
new_dataset
| 0.999138 |
2112.00124
|
Shamiul Alam
|
Shamiul Alam, Md Mazharul Islam, Md Shafayat Hossain, Akhilesh
Jaiswal, and Ahmedullah Aziz
|
CryoCiM: Cryogenic Compute-in-Memory based on the Quantum Anomalous Hall
Effect
|
13 pages, 6 figures
|
Appl. Phys. Lett. 120, 144102 (2022)
|
10.1063/5.0092169
| null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
The scaling of the already-matured CMOS technology is steadily approaching
its physical limit, motivating the quest for a suitable alternative. Cryogenic
operation offers a promising pathway towards continued improvement in computing
speed and energy efficiency without aggressive scaling. However, the memory
wall bottleneck of the traditional von-Neumann architecture persists even at
cryogenic temperature. That is where a compute-in-memory (CiM) architecture,
which embeds computing within the memory unit, comes into play. Computations
within the memory unit help reduce the expensive data transfer between the
memory and the computing units. Therefore, CiM provides extreme energy
efficiency that can enable lower cooling cost at cryogenic temperature. In this
work, we demonstrate CryoCiM, a cryogenic compute-in-memory framework utilizing
a non-volatile memory system based on the quantum anomalous Hall effect (QAHE).
Our design can perform memory read/write, and universal binary logic operations
(NAND, NOR, and XOR). We design a novel peripheral circuit assembly that can
perform the read/write, and single-cycle in-memory logic operations. The
utilization of a QAHE-based memory system promises robustness against process
variations, through the usage of topologically protected resistive states for
data storage. CryoCiM is the first step towards utilizing exclusively cryogenic
phenomena to serve the dual purpose of storage and computation with ultra-low
power (nano-watts) operations.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 21:49:58 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Dec 2021 23:52:58 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Mar 2022 00:36:06 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Alam",
"Shamiul",
""
],
[
"Islam",
"Md Mazharul",
""
],
[
"Hossain",
"Md Shafayat",
""
],
[
"Jaiswal",
"Akhilesh",
""
],
[
"Aziz",
"Ahmedullah",
""
]
] |
new_dataset
| 0.999193 |
2112.01967
|
Paul Staat
|
Paul Staat, Simon Mulzer, Stefan Roth, Veelasha Moonsamy, Markus
Heinrichs, Rainer Kronberger, Aydin Sezgin, Christof Paar
|
IRShield: A Countermeasure Against Adversarial Physical-Layer Wireless
Sensing
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless radio channels are known to contain information about the
surrounding propagation environment, which can be extracted using established
wireless sensing methods. Thus, today's ubiquitous wireless devices are
attractive targets for passive eavesdroppers to launch reconnaissance attacks.
In particular, by overhearing standard communication signals, eavesdroppers
obtain estimations of wireless channels which can give away sensitive
information about indoor environments. For instance, by applying simple
statistical methods, adversaries can infer human motion from wireless channel
observations, allowing to remotely monitor premises of victims. In this work,
building on the advent of intelligent reflecting surfaces (IRSs), we propose
IRShield as a novel countermeasure against adversarial wireless sensing.
IRShield is designed as a plug-and-play privacy-preserving extension to
existing wireless networks. At the core of IRShield, we design an IRS
configuration algorithm to obfuscate wireless channels. We validate the
effectiveness with extensive experimental evaluations. In a state-of-the-art
human motion detection attack using off-the-shelf Wi-Fi devices, IRShield
lowered detection rates to 5% or less.
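The obfuscation principle can be illustrated with the standard cascaded IRS
channel model: randomizing the surface phases makes even a perfectly static
channel look time-varying to an eavesdropper. A toy numpy sketch; the element
count and channel statistics are assumptions, and IRShield's actual
configuration algorithm is more deliberate than uniform random phases.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64   # number of IRS elements (assumption for this toy model)

# Static complex channels: transmitter -> IRS (h) and IRS -> eavesdropper (g).
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
h_direct = 1.0 + 0.0j   # direct transmitter -> eavesdropper path

def observed_channel():
    """Effective channel under a fresh random IRS phase configuration."""
    theta = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=N))
    return h_direct + np.sum(h * theta * g)

# Even with a completely static scene, the eavesdropper's channel estimates
# fluctuate, masking the motion-induced variations that sensing attacks use.
print(np.abs([observed_channel() for _ in range(5)]))
```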
|
[
{
"version": "v1",
"created": "Fri, 3 Dec 2021 15:18:09 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 13:03:53 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Staat",
"Paul",
""
],
[
"Mulzer",
"Simon",
""
],
[
"Roth",
"Stefan",
""
],
[
"Moonsamy",
"Veelasha",
""
],
[
"Heinrichs",
"Markus",
""
],
[
"Kronberger",
"Rainer",
""
],
[
"Sezgin",
"Aydin",
""
],
[
"Paar",
"Christof",
""
]
] |
new_dataset
| 0.998772 |
2112.10043
|
Lei Hu
|
Guyue Li, Lei Hu, Paul Staat, Harald Elders-Boll, Christian Zenger,
Christof Paar, and Aiqun Hu
|
Reconfigurable Intelligent Surface for Physical Layer Key Generation:
Constructive or Destructive?
|
7 pages, 5 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical layer key generation (PKG) is a promising means to provide
on-the-fly shared secret keys by exploiting the intrinsic randomness of the
radio channel. However, the performance of PKG is highly dependent on the
propagation environments. Due to its feature of controlling the wireless
environment, reconfigurable intelligent surface (RIS) is appealing to be
applied in PKG. In this paper, in contrast to the existing literature, we
investigate both the constructive and destructive effects of RIS on the PKG
scheme. For the constructive aspect, we have identified static and
wave-blockage environments as two RIS-empowered-PKG applications in future
wireless systems. In particular, our experimental results in a static
environment showed that RIS can enhance the entropy of the secret key,
achieving a key generation rate (KGR) of 97.39 bit/s with a bit disagreement
rate (BDR) of 0.083. In multi-user systems where some remote users are in worse
channel conditions, the proposed RIS-assisted PKG algorithm improves the sum
secret key rate by more than 2 dB, compared to the literature. Furthermore, we
point out that RIS could be utilized by an attacker to perform new jamming and
leakage attacks, and we give countermeasures for each. Finally, we outline
future research directions for PKG systems in light of the RIS.
|
[
{
"version": "v1",
"created": "Sun, 19 Dec 2021 02:42:14 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 13:29:50 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Li",
"Guyue",
""
],
[
"Hu",
"Lei",
""
],
[
"Staat",
"Paul",
""
],
[
"Elders-Boll",
"Harald",
""
],
[
"Zenger",
"Christian",
""
],
[
"Paar",
"Christof",
""
],
[
"Hu",
"Aiqun",
""
]
] |
new_dataset
| 0.990562 |
2203.13926
|
Soujanya Poria
|
Deepanway Ghosal, Siqi Shen, Navonil Majumder, Rada Mihalcea, Soujanya
Poria
|
CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues
|
ACL 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper addresses the problem of dialogue reasoning with contextualized
commonsense inference. We curate CICERO, a dataset of dyadic conversations with
five types of utterance-level reasoning-based inferences: cause, subsequent
event, prerequisite, motivation, and emotional reaction. The dataset contains
53,105 such inferences from 5,672 dialogues. We use this dataset to solve
relevant generative and discriminative tasks: generation of cause and
subsequent event; generation of prerequisite, motivation, and listener's
emotional reaction; and selection of plausible alternatives. Our results
ascertain the value of such dialogue-centric commonsense knowledge datasets. It
is our hope that CICERO will open new research avenues into commonsense-based
dialogue reasoning.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 22:08:50 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 09:51:21 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Apr 2022 00:17:36 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Ghosal",
"Deepanway",
""
],
[
"Shen",
"Siqi",
""
],
[
"Majumder",
"Navonil",
""
],
[
"Mihalcea",
"Rada",
""
],
[
"Poria",
"Soujanya",
""
]
] |
new_dataset
| 0.999827 |
2204.00790
|
Daochang Wang
|
Daochang Wang, Fan Zhang, Fei Ma, Wei Hu, Yu Tang, and Yongsheng Zhou
|
SAD: A Large-scale Dataset towards Airport Detection in Synthetic
Aperture Radar Images
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Airports have an important role in both military and civilian domains.
Synthetic aperture radar (SAR) based airport detection has received increasing
attention in recent years. However, due to the high cost of SAR imaging and
annotation process, there is no publicly available SAR dataset for airport
detection. As a result, deep learning methods have not been fully used in
airport detection tasks. To provide a benchmark for airport detection research
in SAR images, this paper introduces a large-scale SAR Airport Dataset (SAD).
In order to adequately reflect the demands of real world applications, it
contains 624 SAR images from Sentinel 1B and covers 104 airfield instances with
different scales, orientations and shapes. Experiments with multiple deep
learning approaches on this dataset prove its effectiveness. It can support the
development of state-of-the-art airport area detection algorithms and other
relevant tasks.
|
[
{
"version": "v1",
"created": "Sat, 2 Apr 2022 07:29:10 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 12:25:15 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Wang",
"Daochang",
""
],
[
"Zhang",
"Fan",
""
],
[
"Ma",
"Fei",
""
],
[
"Hu",
"Wei",
""
],
[
"Tang",
"Yu",
""
],
[
"Zhou",
"Yongsheng",
""
]
] |
new_dataset
| 0.999896 |
2204.01830
|
Philipp H. Kindt
|
Philipp H. Kindt, Cristian Turetta, Florenc Demrozi, Alejandro Masrur,
Graziano Pravadelli, Samarjit Chakraborty
|
WiFiEye -- Seeing over WiFi Made Accessible
| null | null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While commonly used for communication purposes, an increasing number of
recent studies consider WiFi for sensing. In particular, wireless signals are
altered (e.g., reflected and attenuated) by the human body and objects in the
environment. This can be perceived by an observer to infer information on human
activities or changes in the environment and, hence, to "see" over WiFi. Until
now, works on WiFi-based sensing have resulted in a set of custom software
tools - each designed for a specific purpose. Moreover, given how scattered the
literature is, it is difficult to even identify all steps/functions necessary
to build a basic system for WiFi-based sensing. This has led to a high entry
barrier, hindering further research in this area. There has been no effort to
integrate these tools or to build a general software framework that can serve
as the basis for further research, e.g., on using machine learning to interpret
the altered WiFi signals. To address this issue, in this paper, we propose
WiFiEye - a generic software framework that makes all necessary steps/functions
available "out of the box". This way, WiFiEye allows researchers to easily
bootstrap new WiFi-based sensing applications, thereby, focusing on research
rather than on implementation aspects. To illustrate WiFiEye's workflow, we
present a case study on WiFi-based human activity recognition.
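To make the sensing principle concrete, the following minimal sketch (plain
Python/NumPy, not WiFiEye's actual API) flags human motion from the variance of
WiFi channel-state amplitudes; the array layout, window size, and threshold are
assumptions chosen for illustration only.
```python
# Minimal sketch: human motion perturbs the WiFi channel, inflating the
# variance of CSI amplitudes. Assumes amplitudes are already extracted
# into a (packets x subcarriers) NumPy array; threshold is illustrative.
import numpy as np

def detect_motion(csi_amp: np.ndarray, win: int = 100, thresh: float = 0.5):
    """Flag windows whose amplitude variance exceeds a calibrated threshold."""
    events = []
    for start in range(0, csi_amp.shape[0] - win, win):
        window = csi_amp[start:start + win]
        # Motion scatters multipath components, raising per-subcarrier spread.
        score = window.std(axis=0).mean()
        events.append((start, score > thresh))
    return events

# Synthetic demo: a quiet channel followed by a "movement" burst.
rng = np.random.default_rng(0)
quiet = rng.normal(1.0, 0.05, size=(500, 30))
moving = rng.normal(1.0, 0.8, size=(500, 30))
print(detect_motion(np.vstack([quiet, moving])))
```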
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 20:31:16 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 20:24:11 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Kindt",
"Philipp H.",
""
],
[
"Turetta",
"Cristian",
""
],
[
"Demrozi",
"Florenc",
""
],
[
"Masrur",
"Alejandro",
""
],
[
"Pravadelli",
"Graziano",
""
],
[
"Chakraborty",
"Samarjit",
""
]
] |
new_dataset
| 0.990145 |
2204.01956
|
Soumik Mohian
|
Soumik Mohian, Christoph Csallner
|
PSDoodle: Fast App Screen Search via Partial Screen Doodle
| null | null |
10.1145/3524613.3527816
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Searching through existing repositories for a specific mobile app screen
design is currently either slow or tedious. Such searches are either limited to
basic keyword searches (Google Image Search) or require as input a complete
query screen image (SWIRE). A promising alternative is interactive partial
sketching, which is more structured than keyword search and faster than
complete-screen queries. PSDoodle is the first system to allow interactive
search of screens via partial sketching. It is built on top of a
combination of the Rico repository of some 58k Android app screens, the Google
QuickDraw dataset of icon-level doodles, and DoodleUINet, a curated corpus of
some 10k app icon doodles collected from hundreds of individuals. In our
evaluation with third-party software developers, PSDoodle provided similar
top-10 screen retrieval accuracy as the state of the art from the SWIRE line of
work, while cutting the average time required about in half.
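As a toy illustration of the interactive retrieval loop (not PSDoodle's real
components), the sketch below maps each finished doodle to an icon class and
ranks screens by how many of the queried icon classes they contain; the screen
index and class names are hypothetical.
```python
# Toy retrieval loop: a doodle classifier (stubbed out here) yields icon
# classes, and screens are ranked by overlap with the queried classes.
from collections import Counter

screen_index = {  # hypothetical: screen id -> icon classes present on it
    "screen_01": {"search", "menu", "avatar"},
    "screen_02": {"search", "cart"},
    "screen_03": {"menu", "settings"},
}

def rank_screens(query_icons):
    """Rank screens by overlap with the icon classes the user has doodled."""
    scores = Counter()
    for sid, icons in screen_index.items():
        scores[sid] = len(icons & set(query_icons))
    return scores.most_common()

# User has doodled a magnifier ("search") and a hamburger menu ("menu").
print(rank_screens(["search", "menu"]))
```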
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 03:22:09 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 18:36:55 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Mohian",
"Soumik",
""
],
[
"Csallner",
"Christoph",
""
]
] |
new_dataset
| 0.999376 |
2204.02611
|
Yanan Wang
|
Yanan Wang, Xuezhi Liang, Shengcai Liao
|
Cloning Outfits from Real-World Images to 3D Characters for
Generalizable Person Re-Identification
|
The paper is accepted by CVPR 2022, including the appendix
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Recently, large-scale synthetic datasets are shown to be very useful for
generalizable person re-identification. However, synthesized persons in
existing datasets are mostly cartoon-like and in random dress collocation,
which limits their performance. To address this, in this work, an automatic
approach is proposed to directly clone the whole outfits from real-world person
images to virtual 3D characters, such that any virtual person thus created will
appear very similar to its real-world counterpart. Specifically, based on UV
texture mapping, two cloning methods are designed, namely registered clothes
mapping and homogeneous cloth expansion. Given clothes keypoints detected on
person images and labeled on regular UV maps with clear clothes structures,
registered mapping applies perspective homography to warp real-world clothes to
the counterparts on the UV map. As for invisible clothes parts and irregular UV
maps, homogeneous expansion segments a homogeneous area on clothes as a
realistic cloth pattern or cell, and expands the cell to fill the UV map.
Furthermore, a similarity-diversity expansion strategy is proposed, by
clustering person images, sampling images per cluster, and cloning outfits for
3D character generation. This way, virtual persons can be scaled up densely in
visual similarity to challenge model learning, and diversely in population to
enrich sample distribution. Finally, by rendering the cloned characters in
Unity3D scenes, a more realistic virtual dataset called ClonedPerson is
created, with 5,621 identities and 887,766 images. Experimental results show
that the model trained on ClonedPerson has better generalization performance,
superior to that trained on other popular real-world and synthetic person
re-identification datasets. The ClonedPerson project is available at
https://github.com/Yanan-Wang-cs/ClonedPerson.
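A minimal sketch of the registered-mapping step described above, assuming four
matched clothes keypoints (detected on the photo, labeled on the UV map); the
coordinates, canvas sizes, and the synthetic input image are illustrative, with
OpenCV used as a stand-in implementation rather than the paper's actual code.
```python
import cv2
import numpy as np

# Stand-in for a real person photo (in practice: cv2.imread("person.jpg")).
person_img = np.full((400, 400, 3), 127, dtype=np.uint8)
uv_map = np.zeros((512, 512, 3), dtype=np.uint8)  # blank UV texture canvas

# Matched clothes keypoints: detected on the photo vs. labeled on the UV map.
src_pts = np.float32([[120, 80], [260, 85], [265, 300], [115, 295]])
dst_pts = np.float32([[50, 40], [460, 40], [460, 470], [50, 470]])

# Perspective homography warps the visible clothes region onto the UV map.
H, _ = cv2.findHomography(src_pts, dst_pts)
warped = cv2.warpPerspective(person_img, H, (uv_map.shape[1], uv_map.shape[0]))

# Copy only the pixels inside the warped clothes quadrilateral.
mask = np.zeros(uv_map.shape[:2], dtype=np.uint8)
cv2.fillConvexPoly(mask, dst_pts.astype(np.int32), 255)
uv_map[mask > 0] = warped[mask > 0]
```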
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 06:41:08 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 08:39:59 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Wang",
"Yanan",
""
],
[
"Liang",
"Xuezhi",
""
],
[
"Liao",
"Shengcai",
""
]
] |
new_dataset
| 0.999122 |
2204.03021
|
Caleb Ziems
|
Caleb Ziems, Jane A. Yu, Yi-Chia Wang, Alon Halevy, Diyi Yang
|
The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems
|
ACL 2022 main conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Conversational agents have come increasingly closer to human competence in
open-domain dialogue settings; however, such models can reflect insensitive,
hurtful, or entirely incoherent viewpoints that erode a user's trust in the
moral integrity of the system. Moral deviations are difficult to mitigate
because moral judgments are not universal, and there may be multiple competing
judgments that apply to a situation simultaneously. In this work, we introduce
a new resource, not to authoritatively resolve moral ambiguities, but instead
to facilitate systematic understanding of the intuitions, values and moral
judgments reflected in the utterances of dialogue systems. The Moral Integrity
Corpus, MIC, is such a resource, which captures the moral assumptions of 38k
prompt-reply pairs, using 99k distinct Rules of Thumb (RoTs). Each RoT reflects
a particular moral conviction that can explain why a chatbot's reply may appear
acceptable or problematic. We further organize RoTs with a set of 9 moral and
social attributes and benchmark performance for attribute classification. Most
importantly, we show that current neural language models can automatically
generate new RoTs that reasonably describe previously unseen interactions, but
they still struggle with certain scenarios. Our findings suggest that MIC will
be a useful resource for understanding language models' implicit moral
assumptions and for flexibly benchmarking the integrity of conversational agents.
To download the data, see https://github.com/GT-SALT/mic
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 18:10:53 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Ziems",
"Caleb",
""
],
[
"Yu",
"Jane A.",
""
],
[
"Wang",
"Yi-Chia",
""
],
[
"Halevy",
"Alon",
""
],
[
"Yang",
"Diyi",
""
]
] |
new_dataset
| 0.998179 |
2204.03028
|
Simon Haller-Seeber
|
Simon Haller-Seeber, Thomas Gatterer, Patrick Hofmann, Christopher
Kelter, Thomas Auer, Michael Felderer
|
Software Testing, AI and Robotics (STAIR) Learning Lab
|
8 pages, 5 figures, Accepted at the Robotics in Education (RiE2022)
Conference
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper we present the Software Testing, AI and Robotics (STAIR)
Learning Lab. STAIR is an initiative started at the University of Innsbruck to
bring robotics, Artificial Intelligence (AI) and software testing into schools.
In the lab, physical and virtual learning units are developed in parallel and in
sync with each other. Its core learning approach is based on the development of
both a physical and a simulated robotics environment. In both environments, AI
scenarios (like traffic sign recognition) are deployed and tested. We present
and focus on our newly designed MiniBots, which are built on hardware designed
for educational and research purposes, as well as on the simulation
environment. Additionally, we describe first learning design concepts and a
showcase scenario (i.e., AI-based traffic sign recognition) with different
exercises which can easily be extended.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 18:18:47 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Haller-Seeber",
"Simon",
""
],
[
"Gatterer",
"Thomas",
""
],
[
"Hofmann",
"Patrick",
""
],
[
"Kelter",
"Christopher",
""
],
[
"Auer",
"Thomas",
""
],
[
"Felderer",
"Michael",
""
]
] |
new_dataset
| 0.967494 |