id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
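The columns above mirror the arXiv metadata schema, extended with two classifier outputs (`prediction` and `probability`). Below is a minimal sketch of consuming rows with this schema, assuming they are exported as JSON Lines; the file name and the 0.99 cutoff are illustrative assumptions, not part of the dataset.

```python
import json

# Placeholder file name; adjust to wherever the rows are exported.
rows = []
with open("arxiv_new_dataset.jsonl", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

# Keep only high-confidence "new_dataset" predictions.
confident = [r for r in rows
             if r["prediction"] == "new_dataset" and r["probability"] >= 0.99]

for r in confident:
    # authors_parsed holds [last_name, first_name, suffix] triples.
    last, first, _ = r["authors_parsed"][0]
    print(f'{r["id"]}  {first} {last}: {r["title"]}  (p={r["probability"]:.4f})')
```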
2207.03726
|
Wen Wang
|
Wen Wang, Shunda Hu, Shiqiang Zhu, Wei Song, Zheyuan Lin, Tianlei Jin,
Zonghao Mu, Yuanhai Zhou
|
TGRMPT: A Head-Shoulder Aided Multi-Person Tracker and a New Large-Scale
Dataset for Tour-Guide Robot
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A service robot serving safely and politely needs to track the surrounding
people robustly, especially in the case of a Tour-Guide Robot (TGR). However, existing
multi-object tracking (MOT) or multi-person tracking (MPT) methods are not
applicable to TGR for the following reasons: 1. lacking relevant large-scale
datasets; 2. lacking applicable metrics to evaluate trackers. In this work, we
target the visual perceptual tasks for TGR and present the TGRDB dataset, a
novel large-scale multi-person tracking dataset containing roughly 5.6 hours of
annotated videos and over 450 long-term trajectories. Besides, we propose a
more applicable metric to evaluate trackers using our dataset. As part of our
work, we present TGRMPT, a novel MPT system that incorporates information from
the head-shoulder and the whole body, and achieves state-of-the-art performance. We
have released our code and dataset at https://github.com/wenwenzju/TGRMPT.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 07:32:18 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Wang",
"Wen",
""
],
[
"Hu",
"Shunda",
""
],
[
"Zhu",
"Shiqiang",
""
],
[
"Song",
"Wei",
""
],
[
"Lin",
"Zheyuan",
""
],
[
"Jin",
"Tianlei",
""
],
[
"Mu",
"Zonghao",
""
],
[
"Zhou",
"Yuanhai",
""
]
] |
new_dataset
| 0.999132 |
2207.03782
|
Chuong Nguyen
|
Chuong H. Nguyen, Su Huynh, Vinh Nguyen, Ngoc Nguyen
|
VidConv: A modernized 2D ConvNet for Efficient Video Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Since being introduced in 2020, Vision Transformers (ViTs) have been steadily
breaking records on many vision tasks and are often described as
``all-you-need'' to replace ConvNets. Despite that, ViTs are generally
computationally expensive, memory-consuming, and unfriendly for embedded
devices. In addition, recent research shows that a standard ConvNet, if
redesigned and trained appropriately, can compete favorably with ViTs in terms
of accuracy and scalability. In this paper, we adopt the modernized structure
of ConvNets to design a new backbone for action recognition. In particular, our
main target is to serve industrial product deployment, such as FPGA boards on
which only standard operations are supported. Therefore, our network simply
consists of 2D convolutions, without using any 3D convolution, long-range
attention plugin, or Transformer blocks. While being trained with far fewer
epochs (5x-10x), our backbone surpasses methods using (2+1)D and 3D
convolution, and achieves comparable results with ViTs on two benchmark
datasets.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 09:33:46 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Nguyen",
"Chuong H.",
""
],
[
"Huynh",
"Su",
""
],
[
"Nguyen",
"Vinh",
""
],
[
"Nguyen",
"Ngoc",
""
]
] |
new_dataset
| 0.998788 |
2207.03827
|
Pier Luca Lanzi
|
Pier Luca Lanzi, Daniele Loiacono, Alberto Arosio, Dorian Bucur,
Davide Caio, Luca Capecchi, Maria Giulietta Cappelletti, Lorenzo Carnaghi,
Marco Giuseppe Caruso, Valerio Ceraudo, Luca Contato, Luca Cornaggia,
Christian Costanza, Tommaso Grilli, Sumero Lira, Luca Marchetti, Giulia
Olivares, Barbara Pagano, Davide Pons, Michele Pirovano, Valentina Tosto
|
One Pixel, One Interaction, One Game: An Experiment in Minimalist Game
Design
| null | null | null | null |
cs.HC cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Minimalist game design was introduced a decade ago as a general design
principle with a list of key properties for minimalist games: basic controls,
simple but aesthetically pleasing visuals, interesting player choices with vast
possibility spaces, and sounds that resonate with the design. In this paper, we
present an experiment we did to explore minimalism in games using a bottom-up
approach. We invited a small group of professional game designers and a larger
group of game design students to participate in a seminal experiment on
minimalism in game design. We started from the most basic game elements: one
pixel and one key which provide the least amount of information we can display
and reasonably the most elementary action players can perform. We designed a
game that starts with a black pixel and asks players to press a key when the
pixel turns white. This minimal game, almost a Skinner box, captures the
essential elements of the mechanics of games like "The Impossible Game," which
asks players to do nothing more than press a key at the right moment. We
presented this game concept to the professional game designers and challenged
them to create other games with the least amount of player interaction and
displayed information. We did not specify any constraints (as usually done in
other contexts) and left them free to express their view of minimalistic game
design. We repeated the experiment with 100+ students attending a master-level
course on video game design and development at our institution. We then
analyzed the creations of the two groups, discussing the idea of minimalistic
design that emerges from the submitted game concepts.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 11:22:20 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Lanzi",
"Pier Luca",
""
],
[
"Loiacono",
"Daniele",
""
],
[
"Arosio",
"Alberto",
""
],
[
"Bucur",
"Dorian",
""
],
[
"Caio",
"Davide",
""
],
[
"Capecchi",
"Luca",
""
],
[
"Cappelletti",
"Maria Giulietta",
""
],
[
"Carnaghi",
"Lorenzo",
""
],
[
"Caruso",
"Marco Giuseppe",
""
],
[
"Ceraudo",
"Valerio",
""
],
[
"Contato",
"Luca",
""
],
[
"Cornaggia",
"Luca",
""
],
[
"Costanza",
"Christian",
""
],
[
"Grilli",
"Tommaso",
""
],
[
"Lira",
"Sumero",
""
],
[
"Marchetti",
"Luca",
""
],
[
"Olivares",
"Giulia",
""
],
[
"Pagano",
"Barbara",
""
],
[
"Pons",
"Davide",
""
],
[
"Pirovano",
"Michele",
""
],
[
"Tosto",
"Valentina",
""
]
] |
new_dataset
| 0.999606 |
2207.03856
|
Kinga Skorupska
|
Anna Jaskulska, Kinga Skorupska, Zuzanna Bubrowska, Kinga Kwiatkowska,
Wiktor Stawski, Maciej Krzywicki, Monika Kornacka, Wies{\l}aw Kope\'c
|
Participatory Action for Citizens' Engagement to Develop a
Pro-Environmental Research Application
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To understand and begin to address the challenge of air pollution in Europe,
we conducted participatory research, art, and design activities with the
residents of one of the areas most affected by smog in Poland. The
participatory research events, described in detail in this article, centered
around the theme of ecology and served to design an application that would
allow us to conduct field research on pro-environmental behaviours at a larger
scale. As a result we developed a research application, rooted in local culture
and history and place attachment, which makes use of gamification techniques.
The application gathers air quality data from the densest network of air
pollution sensors in Europe, thereby aligning the visible signs of pollution in
the app with the local sensor data. At the same time it reinforces the users'
pro-environmental habits and exposes them to educational messages about air
quality and the environment. The data gathered with this application will
validate the efficacy of this kind of intervention in addressing residents'
smog-causing behaviours.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 12:20:45 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Jaskulska",
"Anna",
""
],
[
"Skorupska",
"Kinga",
""
],
[
"Bubrowska",
"Zuzanna",
""
],
[
"Kwiatkowska",
"Kinga",
""
],
[
"Stawski",
"Wiktor",
""
],
[
"Krzywicki",
"Maciej",
""
],
[
"Kornacka",
"Monika",
""
],
[
"Kopeć",
"Wiesław",
""
]
] |
new_dataset
| 0.990214 |
2207.03870
|
Shohei Nobuhara
|
Taichi Fukuda, Kotaro Hasegawa, Shinya Ishizaki, Shohei Nobuhara, and
Ko Nishino
|
BlindSpotNet: Seeing Where We Cannot See
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce 2D blind spot estimation as a critical visual task for road
scene understanding. By automatically detecting road regions that are occluded
from the vehicle's vantage point, we can proactively alert a manual driver or a
self-driving system to potential causes of accidents (e.g., draw attention to a
road region from which a child may spring out). Detecting blind spots in full
3D would be challenging, as on-the-fly 3D reasoning, even if the car is
equipped with LiDAR, would be prohibitively expensive and error-prone. We
instead propose
to learn to estimate blind spots in 2D, just from a monocular camera. We
achieve this in two steps. We first introduce an automatic method for
generating ``ground-truth'' blind spot training data for arbitrary driving
videos by leveraging monocular depth estimation, semantic segmentation, and
SLAM. The key idea is to reason in 3D but from 2D images by defining blind
spots as those road regions that are currently invisible but become visible in
the near future. We construct a large-scale dataset with this automatic offline
blind spot estimation, which we refer to as Road Blind Spot (RBS) dataset.
Next, we introduce BlindSpotNet (BSN), a simple network that fully leverages
this dataset for fully automatic estimation of frame-wise blind spot
probability maps for arbitrary driving videos. Extensive experimental results
demonstrate the validity of our RBS Dataset and the effectiveness of our BSN.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 12:54:18 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Fukuda",
"Taichi",
""
],
[
"Hasegawa",
"Kotaro",
""
],
[
"Ishizaki",
"Shinya",
""
],
[
"Nobuhara",
"Shohei",
""
],
[
"Nishino",
"Ko",
""
]
] |
new_dataset
| 0.998741 |
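The BlindSpotNet abstract above defines blind spots with a concrete labeling rule: road regions that are currently invisible but become visible in the near future. A minimal sketch of that rule on boolean masks follows; the mask inputs are assumptions (the paper derives them from monocular depth, semantic segmentation, and SLAM).

```python
import numpy as np

def blind_spot_mask(road_now: np.ndarray,
                    visible_now: np.ndarray,
                    visible_future: np.ndarray) -> np.ndarray:
    """All inputs are HxW boolean masks aligned to the current frame."""
    # Road pixels not seen now, but observed in reprojected future frames.
    return road_now & ~visible_now & visible_future

# Toy usage on a 4x4 grid: left half visible now, everything visible later.
road = np.ones((4, 4), dtype=bool)
vis_now = np.zeros((4, 4), dtype=bool); vis_now[:, :2] = True
vis_future = np.ones((4, 4), dtype=bool)
print(blind_spot_mask(road, vis_now, vis_future).astype(int))
```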
2207.03895
|
Haripriya Harikumar
|
Haripriya Harikumar, Santu Rana, Kien Do, Sunil Gupta, Wei Zong, Willy
Susilo, Svetha Venkastesh
|
Defense Against Multi-target Trojan Attacks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Adversarial attacks on deep learning-based models pose a significant threat
to the current AI infrastructure. Among them, Trojan attacks are the hardest to
defend against. In this paper, we first introduce a variation of BadNet-style
attacks that introduces Trojan backdoors to multiple target classes and
allows triggers to be placed anywhere in the image. The former makes it more
potent and the latter makes it extremely easy to carry out the attack in the
physical space. The state-of-the-art Trojan detection methods fail with this
threat model. To defend against this attack, we first introduce a trigger
reverse-engineering mechanism that uses multiple images to recover a variety of
potential triggers. We then propose a detection mechanism by measuring the
transferability of such recovered triggers. A Trojan trigger has very high
transferability, i.e., it makes other images also go to the same class. We
study many practical advantages of our attack method and then demonstrate the
detection performance using a variety of image datasets. The experimental
results show the superior detection performance of our method over the
state of the art.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 13:29:13 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Harikumar",
"Haripriya",
""
],
[
"Rana",
"Santu",
""
],
[
"Do",
"Kien",
""
],
[
"Gupta",
"Sunil",
""
],
[
"Zong",
"Wei",
""
],
[
"Susilo",
"Willy",
""
],
[
"Venkastesh",
"Svetha",
""
]
] |
new_dataset
| 0.993663 |
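The defense described above scores recovered triggers by their transferability. A hedged sketch of one way to measure it is given below; the function name, the patching scheme, and the tensor shapes are illustrative, not the authors' code.

```python
import torch

def trigger_transferability(model, images, trigger, mask):
    """images: (N, C, H, W); trigger, mask: (C, H, W) with mask in {0, 1}."""
    # Paste the recovered trigger wherever the mask is on.
    patched = images * (1 - mask) + trigger * mask
    with torch.no_grad():
        preds = model(patched).argmax(dim=1)
    # Fraction of inputs collapsing into the single most common class; a
    # genuine Trojan trigger drives this ratio toward 1.0.
    return preds.bincount().max().item() / len(preds)
```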
2207.03917
|
Jinpeng Li
|
Jinpeng Li, Haibo Jin, Shengcai Liao, Ling Shao, Pheng-Ann Heng
|
RePFormer: Refinement Pyramid Transformer for Robust Facial Landmark
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a Refinement Pyramid Transformer (RePFormer) for robust
facial landmark detection. Most facial landmark detectors focus on learning
representative image features. However, these CNN-based feature representations
are not robust enough to handle complex real-world scenarios because they
ignore the internal structure of landmarks, as well as the relations between
landmarks and context. In this work, we formulate the facial landmark
detection task as
refining landmark queries along pyramid memories. Specifically, a pyramid
transformer head (PTH) is introduced to build both homologous relations among
landmarks and heterologous relations between landmarks and cross-scale
contexts. Besides, a dynamic landmark refinement (DLR) module is designed to
decompose the landmark regression into an end-to-end refinement procedure,
where the dynamically aggregated queries are transformed into residual
coordinate predictions. Extensive experimental results on four facial landmark
detection benchmarks and their various subsets demonstrate the superior
performance and high robustness of our framework.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 14:12:26 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Li",
"Jinpeng",
""
],
[
"Jin",
"Haibo",
""
],
[
"Liao",
"Shengcai",
""
],
[
"Shao",
"Ling",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
new_dataset
| 0.993975 |
2207.03927
|
Siamak Mehrkanoon
|
Sheng Kuang, Kiki van der Heijden, Siamak Mehrkanoon
|
BAST: Binaural Audio Spectrogram Transformer for Binaural Sound
Localization
|
7
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Accurate sound localization in a reverberant environment is essential for
human auditory perception. Recently, Convolutional Neural Networks (CNNs) have
been utilized to model the binaural human auditory pathway. However, CNNs show
barriers in capturing global acoustic features. To address this issue, we
propose a novel end-to-end Binaural Audio Spectrogram Transformer (BAST) model
to predict the sound azimuth in both anechoic and reverberant environments.
Two modes of implementation, i.e., BAST-SP and BAST-NSP, corresponding to the
BAST model with shared and non-shared parameters respectively, are explored.
Our model with subtraction interaural integration and hybrid loss achieves an
angular distance of 1.29 degrees and a mean squared error of 1e-3 across all
azimuths, significantly surpassing the CNN-based model. An exploratory analysis
of BAST's performance on the left and right hemifields and in anechoic and
reverberant environments shows its generalization ability, as well as the
feasibility of binaural Transformers for sound localization. Furthermore, an
analysis of the attention maps is provided to give additional insight into the
interpretation of the localization process in a natural reverberant
environment.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 14:27:52 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Kuang",
"Sheng",
""
],
[
"van der Heijden",
"Kiki",
""
],
[
"Mehrkanoon",
"Siamak",
""
]
] |
new_dataset
| 0.981108 |
2207.03960
|
Veronika Cheplygina
|
Nikolaj Kj{\o}ller Bjerregaard, Veronika Cheplygina, Stefan Heinrich
|
Detection of Furigana Text in Images
|
This project was originally submitted by NKB in fulfillment of the 30
ECTS MSc thesis at the IT University of Copenhagen
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Furigana are pronunciation notes used in Japanese writing. Being able to
detect these can help improve optical character recognition (OCR) performance
or make more accurate digital copies of Japanese written media by correctly
displaying furigana. This project focuses on detecting furigana in Japanese
books and comics. While there has been research into the detection of Japanese
text in general, there are currently no proposed methods for detecting
furigana.
We construct a new dataset containing Japanese written media and annotations
of furigana. We propose an evaluation metric for such data which is similar to
the evaluation protocols used in object detection except that it allows groups
of objects to be labeled by one annotation. We propose a method for detection
of furigana that is based on mathematical morphology and connected component
analysis. We evaluate detections on the dataset and compare different
methods for text extraction. We also evaluate different types of images, such
as books and comics, individually and discuss the challenges of each type of
image. The proposed method reaches an F1-score of 76\% on the dataset. The
method performs well on regular books, but less so on comics and books of
irregular format. Finally, we show that the proposed method can improve the
performance of OCR by 5\% on the Manga109 dataset.
Source code is available via
\texttt{\url{https://github.com/nikolajkb/FuriganaDetection}}
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 15:27:19 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Bjerregaard",
"Nikolaj Kjøller",
""
],
[
"Cheplygina",
"Veronika",
""
],
[
"Heinrich",
"Stefan",
""
]
] |
new_dataset
| 0.998967 |
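The furigana method above rests on mathematical morphology and connected-component analysis. Below is a rough OpenCV sketch of that general pipeline on a synthetic page; the thresholds, kernel size, and the narrow-and-tall heuristic are assumptions, and the authors' repository holds the real implementation.

```python
import cv2
import numpy as np

# Synthetic stand-in for a scanned page: white background, dark glyph blobs.
page = np.full((120, 160), 255, np.uint8)
cv2.rectangle(page, (20, 20), (60, 100), 0, -1)   # body-text block
cv2.rectangle(page, (70, 30), (78, 90), 0, -1)    # narrow furigana-like column

# Binarize, then close small gaps so character runs merge into blobs.
_, binary = cv2.threshold(page, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    if w < 15 and h > 3 * w:  # narrow, tall blob: furigana candidate
        print("furigana candidate at", (x, y, w, h), "area", area)
```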
2207.03961
|
Hyounghun Kim
|
Hyounghun Kim, Abhay Zala, Mohit Bansal
|
CoSIm: Commonsense Reasoning for Counterfactual Scene Imagination
|
NAACL 2022 (13 pages)
| null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As humans, we can modify our assumptions about a scene by imagining
alternative objects or concepts in our minds. For example, we can easily
anticipate the implications of the sun being overcast by rain clouds (e.g., the
street will get wet) and accordingly prepare for that. In this paper, we
introduce a new task/dataset called Commonsense Reasoning for Counterfactual
Scene Imagination (CoSIm) which is designed to evaluate the ability of AI
systems to reason about scene change imagination. In this task/dataset, models
are given an image and an initial question-response pair about the image. Next,
a counterfactual imagined scene change (in textual form) is applied, and the
model has to predict the new response to the initial question based on this
scene change. We collect 3.5K high-quality and challenging data instances, with
each instance consisting of an image, a commonsense question with a response, a
description of a counterfactual change, a new response to the question, and
three distractor responses. Our dataset contains various complex scene change
types (such as object addition/removal/state change, event description,
environment change, etc.) that require models to imagine many different
scenarios and reason about the changed scenes. We present a baseline model
based on a vision-language Transformer (i.e., LXMERT) and ablation studies.
Through human evaluation, we demonstrate a large human-model performance gap,
suggesting room for promising future work on this challenging counterfactual,
scene imagination task. Our code and dataset are publicly available at:
https://github.com/hyounghk/CoSIm
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 15:28:23 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Kim",
"Hyounghun",
""
],
[
"Zala",
"Abhay",
""
],
[
"Bansal",
"Mohit",
""
]
] |
new_dataset
| 0.999572 |
2207.04021
|
Saad Hassan
|
Saad Hassan, Matthew Seita, Larwan Berke, Yingli Tian, Elaine Gale,
Sooyeon Lee, Matt Huenerfauth
|
ASL-Homework-RGBD Dataset: An annotated dataset of 45 fluent and
non-fluent signers performing American Sign Language homeworks
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We are releasing a dataset containing videos of both fluent and non-fluent
signers using American Sign Language (ASL), which were collected using a Kinect
v2 sensor. This dataset was collected as a part of a project to develop and
evaluate computer vision algorithms to support new technologies for automatic
detection of ASL fluency attributes. A total of 45 fluent and non-fluent
participants were asked to perform signing homework assignments that are
similar to the assignments used in introductory or intermediate level ASL
courses. The data is annotated to identify several aspects of signing including
grammatical features and non-manual markers. Sign language recognition is
currently very data-driven and this dataset can support the design of
recognition technologies, especially technologies that can benefit ASL
learners. This dataset might also be interesting to ASL education researchers
who want to contrast fluent and non-fluent signing.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 17:18:49 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Hassan",
"Saad",
""
],
[
"Seita",
"Matthew",
""
],
[
"Berke",
"Larwan",
""
],
[
"Tian",
"Yingli",
""
],
[
"Gale",
"Elaine",
""
],
[
"Lee",
"Sooyeon",
""
],
[
"Huenerfauth",
"Matt",
""
]
] |
new_dataset
| 0.999716 |
2207.04028
|
Yuan Shen
|
Yuan Shen, Niviru Wijayaratne, Pranav Sriram, Aamir Hasan, Peter Du,
and Katherine Driggs-Campbell
|
CoCAtt: A Cognitive-Conditioned Driver Attention Dataset (Supplementary
Material)
|
Supplementary Material for the main paper, "CoCAtt: A
Cognitive-Conditioned Driver Attention Dataset". Accepted at ITSC2022
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The task of driver attention prediction has drawn considerable interest among
researchers in robotics and the autonomous vehicle industry. Driver attention
prediction can play an instrumental role in mitigating and preventing high-risk
events, like collisions and casualties. However, existing driver attention
prediction models neglect the distraction state and intention of the driver,
which can significantly influence how they observe their surroundings. To
address these issues, we present a new driver attention dataset, CoCAtt
(Cognitive-Conditioned Attention). Unlike previous driver attention datasets,
CoCAtt includes per-frame annotations that describe the distraction state and
intention of the driver. In addition, the attention data in our dataset is
captured in both manual and autopilot modes using eye-tracking devices of
different resolutions. Our results demonstrate that incorporating the above two
driver states into attention modeling can improve the performance of driver
attention prediction. To the best of our knowledge, this work is the first to
provide autopilot attention data. Furthermore, CoCAtt is currently the largest
and the most diverse driver attention dataset in terms of autonomy levels, eye
tracker resolutions, and driving scenarios. CoCAtt is available for download at
https://cocatt-dataset.github.io.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 17:35:17 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Shen",
"Yuan",
""
],
[
"Wijayaratne",
"Niviru",
""
],
[
"Sriram",
"Pranav",
""
],
[
"Hasan",
"Aamir",
""
],
[
"Du",
"Peter",
""
],
[
"Driggs-Campbell",
"Katherine",
""
]
] |
new_dataset
| 0.999695 |
2207.04043
|
Mirac Suzgun
|
Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke
Kominers, Stuart M. Shieber
|
The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and
Multi-Purpose Corpus of Patent Applications
|
Website: https://patentdataset.org/, GitHub Repository:
https://github.com/suzgunmirac/hupd, Hugging Face Datasets:
https://huggingface.co/datasets/HUPD/hupd
| null | null | null |
cs.CL cs.CY cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Innovation is a major driver of economic and social development, and
information about many kinds of innovation is embedded in semi-structured data
from patents and patent applications. Although the impact and novelty of
innovations expressed in patent data are difficult to measure through
traditional means, ML offers a promising set of techniques for evaluating
novelty, summarizing contributions, and embedding semantics. In this paper, we
introduce the Harvard USPTO Patent Dataset (HUPD), a large-scale,
well-structured, and multi-purpose corpus of English-language patent
applications filed to the United States Patent and Trademark Office (USPTO)
between 2004 and 2018. With more than 4.5 million patent documents, HUPD is two
to three times larger than comparable corpora. Unlike previously proposed
patent datasets in NLP, HUPD contains the inventor-submitted versions of patent
applications--not the final versions of granted patents--thereby allowing us to
study patentability at the time of filing using NLP methods for the first time.
It is also novel in its inclusion of rich structured metadata alongside the
text of patent filings: By providing each application's metadata along with all
of its text fields, the dataset enables researchers to perform new sets of NLP
tasks that leverage variation in structured covariates. As a case study on the
types of research HUPD makes possible, we introduce a new task to the NLP
community--namely, binary classification of patent decisions. We additionally
show that the structured metadata provided in the dataset enables us to conduct
explicit studies of concept shifts for this task. Finally, we demonstrate how
HUPD can be used for three additional tasks: multi-class classification of
patent subject areas, language modeling, and summarization.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 17:57:15 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Suzgun",
"Mirac",
""
],
[
"Melas-Kyriazi",
"Luke",
""
],
[
"Sarkar",
"Suproteem K.",
""
],
[
"Kominers",
"Scott Duke",
""
],
[
"Shieber",
"Stuart M.",
""
]
] |
new_dataset
| 0.999813 |
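HUPD is distributed through the Hugging Face hub linked above. A minimal loading sketch with the `datasets` library follows; the configuration name and any required filing-date keyword arguments should be checked against the repository documentation and are assumptions here.

```python
from datasets import load_dataset

# "sample" is the small configuration described in the repository docs; the
# loading script may additionally require filing-date range arguments.
hupd = load_dataset("HUPD/hupd", name="sample")
print(hupd)
print(hupd["train"].features)  # text fields plus structured metadata columns
```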
1609.01885
|
Vineeth Balasubramanian
|
Abhay Gupta, Arjun D'Cunha, Kamal Awasthi, Vineeth Balasubramanian
|
DAiSEE: Towards User Engagement Recognition in the Wild
|
12 pages, 14 figures, 5 tables
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce DAiSEE, the first multi-label video classification dataset
comprising 9068 video snippets captured from 112 users for recognizing the
user affective states of boredom, confusion, engagement, and frustration in the
wild. The dataset has four levels of labels, namely very low, low, high, and
very high, for each of the affective states, which are crowd annotated and
correlated with a gold standard annotation created using a team of expert
psychologists. We have also established benchmark results on this dataset using
state-of-the-art video classification methods that are available today. We
believe that DAiSEE will provide the research community with challenges in
feature extraction, context-based inference, and development of suitable
machine learning methods for related tasks, thus providing a springboard for
further research. The dataset is available for download at
https://people.iith.ac.in/vineethnb/resources/daisee/index.html.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2016 08:50:11 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Nov 2016 15:24:34 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Nov 2017 11:05:57 GMT"
},
{
"version": "v4",
"created": "Fri, 15 Dec 2017 06:22:10 GMT"
},
{
"version": "v5",
"created": "Thu, 12 Apr 2018 16:40:55 GMT"
},
{
"version": "v6",
"created": "Fri, 13 Apr 2018 04:42:51 GMT"
},
{
"version": "v7",
"created": "Thu, 7 Jul 2022 12:16:48 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Gupta",
"Abhay",
""
],
[
"D'Cunha",
"Arjun",
""
],
[
"Awasthi",
"Kamal",
""
],
[
"Balasubramanian",
"Vineeth",
""
]
] |
new_dataset
| 0.999719 |
1904.01293
|
Timo Stoffregen
|
Timo Stoffregen and Guillermo Gallego and Tom Drummond and Lindsay
Kleeman and Davide Scaramuzza
|
Event-Based Motion Segmentation by Motion Compensation
|
When viewed in Acrobat Reader, several of the figures animate. Video:
https://youtu.be/0q6ap_OSBAk
|
IEEE International Conference on Computer Vision 2019
|
10.1109/ICCV.2019.00734
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In contrast to traditional cameras, whose pixels have a common exposure time,
event-based cameras are novel bio-inspired sensors whose pixels work
independently and asynchronously output intensity changes (called "events"),
with microsecond resolution. Since events are caused by the apparent motion of
objects, event-based cameras sample visual information based on the scene
dynamics and are, therefore, a more natural fit than traditional cameras to
acquire motion, especially at high speeds, where traditional cameras suffer
from motion blur. However, distinguishing between events caused by different
moving objects and by the camera's ego-motion is a challenging task. We present
the first per-event segmentation method for splitting a scene into
independently moving objects. Our method jointly estimates the event-object
associations (i.e., segmentation) and the motion parameters of the objects (or
the background) by maximization of an objective function, which builds upon
recent results on event-based motion-compensation. We provide a thorough
evaluation of our method on a public dataset, outperforming the
state-of-the-art by as much as 10%. We also show the first quantitative
evaluation of a segmentation algorithm for event cameras, yielding around 90%
accuracy at 4 pixels relative displacement.
|
[
{
"version": "v1",
"created": "Tue, 2 Apr 2019 08:51:01 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Apr 2019 07:21:56 GMT"
},
{
"version": "v3",
"created": "Thu, 4 Apr 2019 08:16:50 GMT"
},
{
"version": "v4",
"created": "Thu, 22 Aug 2019 23:15:45 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Stoffregen",
"Timo",
""
],
[
"Gallego",
"Guillermo",
""
],
[
"Drummond",
"Tom",
""
],
[
"Kleeman",
"Lindsay",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.99715 |
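The segmentation objective above builds on event-based motion compensation: events warped by the correct motion produce a sharp image of warped events (IWE). The toy one-parameter sketch below illustrates that principle on synthetic events, with a grid search standing in for the paper's joint segmentation-and-motion optimization.

```python
import numpy as np

def iwe_variance(xs, ys, ts, flow_x, width=64, height=48):
    """Warp events to t=0 along a horizontal flow and score image sharpness."""
    xw = np.clip((xs - flow_x * ts).astype(int), 0, width - 1)
    img = np.zeros((height, width))
    np.add.at(img, (ys, xw), 1.0)  # accumulate warped events
    return img.var()

rng = np.random.default_rng(0)
n = 5000
ts = rng.uniform(0.0, 1.0, n)
ys = rng.integers(0, 48, n)
xs = np.clip(10 + 30 * ts + rng.normal(0, 0.5, n), 0, 63)  # edge at 30 px/s

flows = np.linspace(0, 60, 61)
best = flows[np.argmax([iwe_variance(xs, ys, ts, f) for f in flows])]
print("estimated flow:", best)  # sharpest IWE near the true 30 px/s
```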
2103.07863
|
Kees Middelburg
|
C.A. Middelburg
|
Imperative process algebra with abstraction
|
33 pages, a polished revision of v4
|
Scientific Annals of Computer Science 32(1):137--179, 2022
|
10.7561/SACS.2022.1.137
| null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces an imperative process algebra based on ACP (Algebra of
Communicating Processes). Like other imperative process algebras, this process
algebra deals with processes of the kind that arises from the execution of
imperative programs. It distinguishes itself from existing imperative process
algebras by, among other things, supporting abstraction from actions that are
considered not to be visible. Support for abstraction of this kind opens
interesting application possibilities of the process algebra. This paper goes
briefly into the possibility of information-flow security analysis of the kind
that is concerned with the leakage of confidential data. For the presented
axiomatization, soundness and semi-completeness results with respect to a
notion of branching bisimulation equivalence are established.
|
[
{
"version": "v1",
"created": "Sun, 14 Mar 2021 07:52:48 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Mar 2021 16:11:30 GMT"
},
{
"version": "v3",
"created": "Tue, 18 May 2021 11:23:48 GMT"
},
{
"version": "v4",
"created": "Thu, 23 Dec 2021 10:56:59 GMT"
},
{
"version": "v5",
"created": "Sat, 21 May 2022 11:56:45 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Middelburg",
"C. A.",
""
]
] |
new_dataset
| 0.974613 |
2109.04205
|
Guillaume Sartoretti
|
Yuhong Cao and Zhanhong Sun and Guillaume Sartoretti
|
DAN: Decentralized Attention-based Neural Network for the MinMax
Multiple Traveling Salesman Problem
|
Submitted to the 16th International Symposium on Distributed
Autonomous Robotic Systems (DARS 2022)
| null | null | null |
cs.RO cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The multiple traveling salesman problem (mTSP) is a well-known NP-hard
problem with numerous real-world applications. In particular, this work
addresses MinMax mTSP, where the objective is to minimize the max tour length
among all agents. Many robotic deployments require recomputing potentially
large mTSP instances frequently, making the natural trade-off between computing
time and solution quality of great importance. However, exact and heuristic
algorithms become inefficient as the number of cities increases, due to their
computational complexity. Encouraged by the recent developments in deep
reinforcement learning (dRL), this work approaches the mTSP as a cooperative
task and introduces DAN, a decentralized attention-based neural method that
aims at tackling this key trade-off. In DAN, agents learn fully decentralized
policies to collaboratively construct a tour, by predicting each other's future
decisions. Our model relies on the Transformer architecture and is trained
using multi-agent RL with parameter sharing, providing natural scalability to
the numbers of agents and cities. Our experimental results on small- to
large-scale mTSP instances ($50$ to $1000$ cities and $5$ to $20$ agents) show
that DAN is able to match or outperform state-of-the-art solvers while keeping
planning times low. In particular, given the same computation time budget, DAN
outperforms all conventional and dRL-based baselines on larger-scale instances
(more than 100 cities, more than 5 agents), and exhibits enhanced agent
collaboration. A video explaining our approach and presenting our results is
available at \url{https://youtu.be/xi3cLsDsLvs}.
|
[
{
"version": "v1",
"created": "Thu, 9 Sep 2021 12:26:04 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2022 16:10:20 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Cao",
"Yuhong",
""
],
[
"Sun",
"Zhanhong",
""
],
[
"Sartoretti",
"Guillaume",
""
]
] |
new_dataset
| 0.969366 |
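The MinMax mTSP objective above is simple to state in code: the cost of a multi-agent solution is the length of its longest tour. A small evaluation sketch with illustrative coordinates:

```python
import math

def tour_length(depot, tour):
    """Closed tour: depot -> cities -> depot, Euclidean distances."""
    pts = [depot] + tour + [depot]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def minmax_cost(depot, tours):
    """tours: one list of (x, y) cities per agent; cost is the longest tour."""
    return max(tour_length(depot, t) for t in tours)

depot = (0.0, 0.0)
tours = [[(1, 0), (2, 0)], [(0, 1), (0, 3)]]
print(minmax_cost(depot, tours))
```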
2111.00993
|
Jianing Qiu
|
Jianing Qiu, Lipeng Chen, Xiao Gu, Frank P.-W. Lo, Ya-Yen Tsai,
Jiankai Sun, Jiaqi Liu and Benny Lo
|
Egocentric Human Trajectory Forecasting with a Wearable Camera and
Multi-Modal Fusion
| null |
IEEE Robotics and Automation Letters, June, 2022
|
10.1109/LRA.2022.3188101
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we address the problem of forecasting the trajectory of an
egocentric camera wearer (ego-person) in crowded spaces. The trajectory
forecasting ability learned from the data of different camera wearers walking
around in the real world can be transferred to assist visually impaired people
in navigation, as well as to instill human navigation behaviours in mobile
robots, enabling better human-robot interactions. To this end, a novel
egocentric human trajectory forecasting dataset was constructed, containing
real trajectories of people navigating in crowded spaces wearing a camera, as
well as extracted rich contextual data. We extract and utilize three different
modalities to forecast the trajectory of the camera wearer, i.e., his/her past
trajectory, the past trajectories of nearby people, and the environment such as
the scene semantics or the depth of the scene. A Transformer-based
encoder-decoder neural network model, integrated with a novel cascaded
cross-attention mechanism that fuses multiple modalities, has been designed to
predict the future trajectory of the camera wearer. Extensive experiments have
been conducted, with results showing that our model outperforms the
state-of-the-art methods in egocentric human trajectory forecasting.
|
[
{
"version": "v1",
"created": "Mon, 1 Nov 2021 14:58:05 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Nov 2021 13:52:50 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jul 2022 12:31:21 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Qiu",
"Jianing",
""
],
[
"Chen",
"Lipeng",
""
],
[
"Gu",
"Xiao",
""
],
[
"Lo",
"Frank P. -W.",
""
],
[
"Tsai",
"Ya-Yen",
""
],
[
"Sun",
"Jiankai",
""
],
[
"Liu",
"Jiaqi",
""
],
[
"Lo",
"Benny",
""
]
] |
new_dataset
| 0.998422 |
2112.10153
|
Dongchao Yang
|
Dongchao Yang, Helin Wang, Yuexian Zou, Fan Cui and Yujun Wang
|
Detect what you want: Target Sound Detection
|
Submitted to DCASE Workshop 2022
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human beings can perceive a target sound type in a multi-source mixture
signal through selective auditory attention; however, such functionality has
hardly been explored in machine hearing. This paper addresses the target sound
detection (TSD) task, which aims to detect the target sound signal in a
mixture audio clip when reference audio of the target sound is given. We present a
novel target sound detection network (TSDNet) which consists of two main parts:
A conditional network which aims at generating a sound-discriminative
conditional embedding vector representing the target sound, and a detection
network which takes both the mixture audio and the conditional embedding vector
as inputs and produces the detection result of the target sound. These two
networks can be jointly optimized with a multi-task learning approach to
further improve the performance. In addition, we study both strong-supervised
and weakly-supervised strategies to train TSDNet and propose a data
augmentation method by mixing two samples. To facilitate this research, we
build a target sound detection dataset (\textit{i.e.} URBAN-TSD) based on
URBAN-SED and UrbanSound8K datasets; experimental results show that our
method achieves segment-based F-scores of 76.3$\%$ and 56.8$\%$ on the
strongly-labelled and weakly-labelled data, respectively.
|
[
{
"version": "v1",
"created": "Sun, 19 Dec 2021 14:12:28 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2022 07:20:43 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Yang",
"Dongchao",
""
],
[
"Wang",
"Helin",
""
],
[
"Zou",
"Yuexian",
""
],
[
"Cui",
"Fan",
""
],
[
"Wang",
"Yujun",
""
]
] |
new_dataset
| 0.983477 |
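The results quoted above use segment-based F-scores. A minimal sketch of a framewise binary F1 of that flavor is shown below; the DCASE-style aggregation of frames into fixed-length segments is omitted, so this is only the core computation.

```python
import numpy as np

def binary_f1(ref: np.ndarray, est: np.ndarray) -> float:
    """ref, est: boolean activity vectors over segments/frames."""
    tp = np.sum(ref & est)
    fp = np.sum(~ref & est)
    fn = np.sum(ref & ~est)
    # Empty reference and estimate count as perfect agreement.
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

ref = np.array([1, 1, 0, 0, 1], dtype=bool)
est = np.array([1, 0, 0, 1, 1], dtype=bool)
print(round(binary_f1(ref, est), 3))  # 0.667
```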
2203.05266
|
Donadel Denis
|
Mauro Conti, Denis Donadel, Radha Poovendran, Federico Turrin
|
EVExchange: A Relay Attack on Electric Vehicle Charging System
|
20 pages, 6 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To support the increasing spread of Electric Vehicles (EVs), Charging
Stations (CSs) are being installed worldwide. The new generation of CSs employs
the Vehicle-To-Grid (V2G) paradigm by implementing novel standards such as the
ISO 15118. This standard enables high-level communication between the vehicle
and the charging column, helps manage the charge smartly, and simplifies the
payment phase. This novel charging paradigm, which connects the Smart Grid to
external networks (e.g., EVs and CSs), has not been thoroughly examined yet.
Therefore, it may lead to dangerous vulnerability surfaces and new research
challenges.
In this paper, we present EVExchange, the first attack to steal energy during
a charging session in a V2G communication: i.e., charging the attacker's car
while letting the victim pay for it. Furthermore, if reverse charging flow is
enabled, the attacker can even sell the energy available in the victim's car,
pocketing the profit of this sale and leaving the victim with a completely
discharged battery. We developed a virtual and a physical testbed
in which we validate the attack and prove its effectiveness in stealing the
energy. To prevent the attack, we propose a lightweight modification of the ISO
15118 protocol to include a distance bounding algorithm. Finally, we validated
the countermeasure on our testbeds. Our results show that the proposed
countermeasure can identify all the relay attack attempts while being
transparent to the user.
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 09:54:12 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2022 14:09:33 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Conti",
"Mauro",
""
],
[
"Donadel",
"Denis",
""
],
[
"Poovendran",
"Radha",
""
],
[
"Turrin",
"Federico",
""
]
] |
new_dataset
| 0.99943 |
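The countermeasure above adds a distance-bounding check to ISO 15118. The sketch below shows only the timing logic of one challenge-response round; software timestamps like these are far too coarse for real light-speed bounds (hence the deliberately loose limit), and actual distance bounding needs dedicated hardware. This is not the authors' protocol extension itself.

```python
import os
import time

SPEED_OF_LIGHT_M_S = 3.0e8

def distance_bounding_round(respond, max_distance_m):
    """respond: callable echoing the nonce, as the vehicle side would."""
    nonce = os.urandom(16)
    t0 = time.perf_counter_ns()
    reply = respond(nonce)
    rtt_s = (time.perf_counter_ns() - t0) / 1e9
    implied_m = rtt_s * SPEED_OF_LIGHT_M_S / 2  # one-way distance bound
    return reply == nonce and implied_m <= max_distance_m

# A relayed session adds network latency, inflating the implied distance.
print(distance_bounding_round(lambda n: n, max_distance_m=1000.0))            # True
print(distance_bounding_round(lambda n: (time.sleep(0.01), n)[1], 1000.0))    # False
```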
2205.04775
|
Pascal Nasahl
|
Pascal Nasahl, Miguel Osorio, Pirmin Vogel, Michael Schaffner, Timothy
Trippel, Dominic Rizzo, Stefan Mangard
|
SYNFI: Pre-Silicon Fault Analysis of an Open-Source Secure Element
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fault attacks are active, physical attacks that an adversary can leverage to
alter the control-flow of embedded devices to gain access to sensitive
information or bypass protection mechanisms. Due to the severity of these
attacks, manufacturers deploy hardware-based fault defenses into
security-critical systems, such as secure elements. The development of these
countermeasures is a challenging task due to the complex interplay of circuit
components and because contemporary design automation tools tend to optimize
inserted structures away, thereby defeating their purpose. Hence, it is
critical that such countermeasures are rigorously verified post-synthesis. As
classical functional verification techniques fall short of assessing the
effectiveness of countermeasures, developers have to resort to methods capable
of injecting faults in a simulation testbench or into a physical chip. However,
developing test sequences to inject faults in simulation is an error-prone task
and performing fault attacks on a chip requires specialized equipment and is
incredibly time-consuming. To that end, this paper introduces SYNFI, a formal
pre-silicon fault verification framework that operates on synthesized netlists.
SYNFI can be used to analyze the general effect of faults on the input-output
relationship in a circuit and its fault countermeasures, and thus enables
hardware designers to assess and verify the effectiveness of embedded
countermeasures in a systematic and semi-automatic way. To demonstrate that
SYNFI is capable of handling unmodified, industry-grade netlists synthesized
with commercial and open tools, we analyze OpenTitan, the first open-source
secure element. In our analysis, we identified critical security weaknesses in
the unprotected AES block, developed targeted countermeasures, reassessed their
security, and contributed these countermeasures back to the OpenTitan
repository.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 09:54:00 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2022 11:49:40 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Nasahl",
"Pascal",
""
],
[
"Osorio",
"Miguel",
""
],
[
"Vogel",
"Pirmin",
""
],
[
"Schaffner",
"Michael",
""
],
[
"Trippel",
"Timothy",
""
],
[
"Rizzo",
"Dominic",
""
],
[
"Mangard",
"Stefan",
""
]
] |
new_dataset
| 0.973134 |
2207.02971
|
Yifan Peng
|
Yifan Peng, Siddharth Dalmia, Ian Lane, Shinji Watanabe
|
Branchformer: Parallel MLP-Attention Architectures to Capture Local and
Global Context for Speech Recognition and Understanding
|
Accepted at ICML 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conformer has proven to be effective in many speech processing tasks. It
combines the benefits of extracting local dependencies using convolutions and
global dependencies using self-attention. Inspired by this, we propose a more
flexible, interpretable and customizable encoder alternative, Branchformer,
with parallel branches for modeling various ranged dependencies in end-to-end
speech processing. In each encoder layer, one branch employs self-attention or
its variant to capture long-range dependencies, while the other branch utilizes
an MLP module with convolutional gating (cgMLP) to extract local relationships.
We conduct experiments on several speech recognition and spoken language
understanding benchmarks. Results show that our model outperforms both
Transformer and cgMLP. It also matches or outperforms state-of-the-art
results achieved by Conformer. Furthermore, we show various strategies to
reduce computation thanks to the two-branch architecture, including the ability
to have variable inference complexity in a single trained model. The weights
learned for merging branches indicate how local and global dependencies are
utilized in different layers, which benefits model design.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 21:08:10 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Peng",
"Yifan",
""
],
[
"Dalmia",
"Siddharth",
""
],
[
"Lane",
"Ian",
""
],
[
"Watanabe",
"Shinji",
""
]
] |
new_dataset
| 0.993307 |
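The Branchformer abstract describes two parallel branches merged with learned weights. A hedged PyTorch sketch of that layout follows; a plain MLP stands in for the paper's convolution-gated cgMLP, and all sizes are illustrative rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class TwoBranchLayer(nn.Module):
    def __init__(self, d=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        # Stand-in for cgMLP: a simple position-wise MLP for local modeling.
        self.local = nn.Sequential(
            nn.Linear(d, 2 * d), nn.GELU(), nn.Linear(2 * d, d))
        self.merge = nn.Parameter(torch.zeros(2))  # learned per-branch weights

    def forward(self, x):                                # x: (batch, time, d)
        g, _ = self.attn(x, x, x, need_weights=False)    # global branch
        w = torch.softmax(self.merge, dim=0)
        return x + w[0] * g + w[1] * self.local(x)       # weighted merge

out = TwoBranchLayer()(torch.randn(2, 50, 256))
print(out.shape)  # torch.Size([2, 50, 256])
```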
2207.02982
|
Aviad Etzion
|
Aviad Etzion and Itzik Klein
|
MoRPI: Mobile Robot Pure Inertial Navigation
|
10 pages, 9 figures
| null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Mobile robots are used in industrial, leisure, and military applications. In
some situations, a robot navigation solution relies only on inertial sensors
and, as a consequence, the navigation solution drifts over time. In this paper,
we propose the MoRPI framework, a mobile robot pure inertial approach. Instead
of travelling in a straight-line trajectory, the robot moves in a periodic
motion trajectory to enable peak-to-peak estimation. In this manner, instead of
performing three integrations to calculate the robot position as in a classical
inertial solution, an empirical formula is used to estimate the travelled
distance. Two types of MoRPI approaches are suggested: one based on both
accelerometer and gyroscope readings, and the other on gyroscope readings only.
Closed-form analytical solutions are derived to show that MoRPI
produces lower position error compared to the classical pure inertial solution.
In addition, to evaluate the proposed approach, field experiments were made
with a mobile robot equipped with two types of inertial sensors. In total, 143
trajectories with a time duration of 75 minutes were collected and evaluated.
The results show the benefits of using our approach. To facilitate further
development of the proposed approach, both dataset and code are publicly
available at https://github.com/ansfl/MoRPI.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 21:35:44 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Etzion",
"Aviad",
""
],
[
"Klein",
"Itzik",
""
]
] |
new_dataset
| 0.997827 |
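MoRPI replaces triple integration with peak-to-peak estimation over a periodic trajectory. The toy sketch below illustrates the idea; the distance-per-peak gain is a made-up calibration constant standing in for the paper's empirical formula.

```python
import numpy as np
from scipy.signal import find_peaks

def morpi_distance(gyro_z, fs_hz, meters_per_peak=0.12):
    """Count oscillation peaks and scale by a calibrated distance-per-peak."""
    peaks, _ = find_peaks(gyro_z, distance=int(0.2 * fs_hz))
    return len(peaks) * meters_per_peak

t = np.linspace(0, 10, 1000)
gyro = np.sin(2 * np.pi * 1.0 * t)  # 1 Hz weaving motion -> 10 peaks
print(morpi_distance(gyro, fs_hz=100))  # 1.2 (meters, with the toy gain)
```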
2207.03041
|
Bo-Kai Ruan
|
Bo-Kai Ruan, Hong-Han Shuai, Wen-Huang Cheng
|
Vision Transformers: State of the Art and Research Challenges
|
8 pages, 3 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Transformers have achieved great success in natural language processing. Due
to the powerful capability of the self-attention mechanism in transformers,
researchers develop the vision transformers for a variety of computer vision
tasks, such as image recognition, object detection, image segmentation, pose
estimation, and 3D reconstruction. This paper presents a comprehensive overview
of the literature on different architecture designs and training tricks
(including self-supervised learning) for vision transformers. Our goal is to
provide a systematic review together with open research opportunities.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 02:01:56 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Ruan",
"Bo-Kai",
""
],
[
"Shuai",
"Hong-Han",
""
],
[
"Cheng",
"Wen-Huang",
""
]
] |
new_dataset
| 0.966416 |
2207.03056
|
Yiqin Zhao
|
Yiqin Zhao, Sheng Wei, Tian Guo
|
Privacy-preserving Reflection Rendering for Augmented Reality
|
Accepted to ACM Multimedia 2022
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many augmented reality (AR) applications rely on omnidirectional environment
lighting to render photorealistic virtual objects. When the virtual objects
consist of reflective materials, such as a metallic sphere, the required
lighting information to render such objects can consist of privacy-sensitive
information that is outside the current camera view. In this paper, we show,
for the first time, that accuracy-driven multi-view environment lighting can
reveal out-of-camera scene information and compromise privacy. We present a
simple yet effective privacy attack that extracts sensitive scene information
such as human face and text information from the rendered objects, under a
number of application scenarios.
To defend against such attacks, we develop a novel $IPC^{2}S$ defense and a
conditional $R^2$ defense. Our $IPC^{2}S$ defense, used in conjunction with a
generic lighting reconstruction method, preserves the scene geometry while
obfuscating the privacy-sensitive information. As a proof-of-concept, we
leverage existing OCR and face detection models to identify text and human
faces from past camera observations and blur the color pixels associated with
detected regions. We evaluate the visual quality impact of our defense by
comparing rendered virtual objects to ones rendered with a generic
multi-lighting reconstruction technique, ARKit, and $R^2$ defense. Our visual
and quantitative results demonstrate that our defense leads to structurally
similar reflections with up to 0.98 SSIM score across a variety of rendering
scenarios while preserving sensitive information by reducing the automatic
extraction success rate to at most 8.8%.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 02:48:59 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Zhao",
"Yiqin",
""
],
[
"Wei",
"Sheng",
""
],
[
"Guo",
"Tian",
""
]
] |
new_dataset
| 0.956764 |
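The proof-of-concept defense above blurs the color pixels inside regions flagged by OCR and face detection. A minimal redaction sketch with OpenCV, taking the detected boxes as given inputs:

```python
import numpy as np
import cv2

def redact_regions(frame, boxes, ksize=31):
    """Blur each (x, y, w, h) region on a copy of the frame."""
    out = frame.copy()
    for x, y, w, h in boxes:
        out[y:y + h, x:x + w] = cv2.GaussianBlur(
            out[y:y + h, x:x + w], (ksize, ksize), 0)
    return out

frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
safe = redact_regions(frame, [(50, 40, 80, 60)])  # e.g., a detected face box
```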
2207.03081
|
Ukcheol Shin
|
Ukcheol Shin, Kyunghyun Lee, In So Kweon
|
DRL-ISP: Multi-Objective Camera ISP with Deep Reinforcement Learning
|
Accepted by IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS), 2022 (*First two authors contributed equally)
| null | null | null |
cs.CV cs.AI cs.LG cs.RO eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we propose a multi-objective camera ISP framework that
utilizes Deep Reinforcement Learning (DRL) and a camera ISP toolbox consisting
of network-based and conventional ISP tools. The proposed DRL-based camera ISP
framework iteratively selects a proper tool from the toolbox and applies it to
the image to maximize a given vision task-specific reward function. For this
purpose, we implement a total of 51 ISP tools that include exposure correction,
color-and-tone correction, white balance, sharpening, denoising, and
others. We also propose an efficient DRL network architecture that can extract
the various aspects of an image and make a rigid mapping relationship between
images and a large number of actions. Our proposed DRL-based ISP framework
effectively improves the image quality according to each vision task such as
RAW-to-RGB image restoration, 2D object detection, and monocular depth
estimation.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 04:34:05 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Shin",
"Ukcheol",
""
],
[
"Lee",
"Kyunghyun",
""
],
[
"Kweon",
"In So",
""
]
] |
new_dataset
| 0.975386 |
2207.03098
|
Zhi Xu
|
Zhi Xu, Hongbo Zhu, Hua Chen, and Wei Zhang
|
Polytopic Planar Region Characterization of Rough Terrains for Legged
Locomotion
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the problem of constructing polytopic representations of
planar regions from depth camera readings. This problem is of great importance
for terrain mapping in complicated environments and has great potential in
legged locomotion applications. To address the polytopic planar region
characterization problem, we propose a two-stage solution scheme. At the first
stage, the planar regions embedded within a sequence of depth images are
first extracted individually and then merged to establish a terrain map
containing only planar regions in a selected frame. To simplify the
representations of the planar regions that are applicable to foothold planning
for legged robots, we further approximate the extracted planar regions via
low-dimensional polytopes at the second stage. With the polytopic
representation, the proposed approach achieves a great balance between accuracy
and simplicity. Experimental validations with RGB-D cameras are conducted to
demonstrate the performance of the proposed scheme. The proposed scheme
successfully characterizes the planar regions via polytopes with acceptable
accuracy. More importantly, the run time of the overall perception scheme is
less than 10ms (i.e., > 100Hz) throughout the tests, which strongly illustrates
the advantages of our approach developed in this paper.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 05:35:59 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Xu",
"Zhi",
""
],
[
"Zhu",
"Hongbo",
""
],
[
"Chen",
"Hua",
""
],
[
"Zhang",
"Wei",
""
]
] |
new_dataset
| 0.997972 |
2207.03198
|
Stefano Dafarra
|
Stefano Dafarra, Giulio Romualdi, Daniele Pucci
|
Dynamic Complementarity Conditions and Whole-Body Trajectory
Optimization for Humanoid Robot Locomotion
|
An evolved version of the conference paper available at
arXiv:2003.04633. Part of the results have been presented in the first
author's Ph.D. thesis, available at arXiv:2004.07699
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper presents a planner to generate walking trajectories by using the
centroidal dynamics and the full kinematics of a humanoid robot. The
interaction between the robot and the walking surface is modeled explicitly via
new conditions, the \emph{Dynamical Complementarity Constraints}. The approach
does not require a predefined contact sequence and generates the footsteps
automatically. We characterize the robot control objective via a set of tasks,
and we address it by solving an optimal control problem. We show that it is
possible to achieve walking motions automatically by specifying a minimal set
of references, such as a constant desired center of mass velocity and a
reference point on the ground. Furthermore, we analyze how the contact
modelling choices affect the computational time. We validate the approach by
generating and testing walking trajectories for the humanoid robot iCub.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 10:01:44 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Dafarra",
"Stefano",
""
],
[
"Romualdi",
"Giulio",
""
],
[
"Pucci",
"Daniele",
""
]
] |
new_dataset
| 0.968877 |
2207.03205
|
Ziyi Xi
|
Ziyi Xi, Hao Lin, Weiqi Luo
|
Dual Stream Computer-Generated Image Detection Network Based On Channel
Joint And Softpool
|
7 pages, 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of computer graphics technology, images synthesized by
computer software are becoming ever closer to photographs. While computer
graphics technology brings us a grand visual feast in the field of games and
movies, it may also be utilized by someone with bad intentions to sway public
opinion and cause political crises or social unrest. Therefore, how to
distinguish computer-generated graphics (CG) from photographs (PG) has become
an important topic in the field of digital image forensics.
This paper proposes a dual-stream convolutional neural network based on channel
joint and SoftPool. The proposed network architecture includes a residual
module for extracting image noise information and a joint channel information
extraction module for capturing the shallow semantic information of the image.
In addition, we design a residual structure to enhance feature extraction and
reduce the loss of information in the residual stream. The joint channel
information extraction module obtains the shallow semantic information of the
input image, which serves as an information supplement block for the residual
module. The whole network uses SoftPool to reduce the information loss of
down-sampling. Finally, we fuse the two streams to get the classification
results. Experiments on SPL2018 and DsTok show that the proposed method
outperforms existing methods, especially on the DsTok dataset. For example,
the performance of our model surpasses the state of the art by a large margin
of 3%.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 10:19:04 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Xi",
"Ziyi",
""
],
[
"Lin",
"Hao",
""
],
[
"Luo",
"Weiqi",
""
]
] |
new_dataset
| 0.980374 |
2207.03217
|
Kinga Skorupska
|
Wies{\l}aw Kope\'c, Cezary Biele, Monika Kornacka, Grzegorz Pochwatko,
Anna Jaskulska, Kinga Skorupska, Julia Paluch, Piotr Gago, Barbara Karpowicz,
Marcin Niewi\'nski, Rafa{\l} Mas{\l}yk
|
Participatory Design Landscape for the Human-Machine Collaboration,
Interaction and Automation at the Frontiers of HCI (PDL 2021)
|
6 pages, 1 figure, workshop held at Interact 2021
| null |
10.1007/978-3-030-85607-6_78
| null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a one-day transdisciplinary creative workshop in the broad area of
HCI focused on multiple opportunities of incorporating participatory design
into research and industry practice. This workshop will become a venue to share
experiences and novel ideas in this area. At the same time, we will brainstorm
and explore frontiers of HCI related to engaging end users in design and
development practices of established and emerging ICT solutions often
overlooked in terms of co-design. We welcome a wide scope of contributions in
HCI which explore sustainable opportunities for participatory design and
development practices in the context of interconnected business, social,
economic and environmental issues. The contributions ought to explore
challenges and opportunities related to co-design at the frontiers of HCI -
participatory design of newest and complex technologies, not easily explainable
or intuitive, novel collaborative (remote or distributed) approaches to
empowering users to prepare them to contribute as well as to engaging them
directly in co-design.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 10:44:14 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Kopeć",
"Wiesław",
""
],
[
"Biele",
"Cezary",
""
],
[
"Kornacka",
"Monika",
""
],
[
"Pochwatko",
"Grzegorz",
""
],
[
"Jaskulska",
"Anna",
""
],
[
"Skorupska",
"Kinga",
""
],
[
"Paluch",
"Julia",
""
],
[
"Gago",
"Piotr",
""
],
[
"Karpowicz",
"Barbara",
""
],
[
"Niewiński",
"Marcin",
""
],
[
"Masłyk",
"Rafał",
""
]
] |
new_dataset
| 0.961257 |
2207.03342
|
Shams Nafisa Ali
|
Shams Nafisa Ali, Md. Tazuddin Ahmed, Joydip Paul, Tasnim Jahan, S. M.
Sakeef Sani, Nawsabah Noor, Taufiq Hasan
|
Monkeypox Skin Lesion Detection Using Deep Learning Models: A
Feasibility Study
|
4 pages, 6 figures, conference
| null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The recent monkeypox outbreak has become a public health concern due to its
rapid spread in more than 40 countries outside Africa. Clinical diagnosis of
monkeypox in an early stage is challenging due to its similarity with
chickenpox and measles. In cases where the confirmatory Polymerase Chain
Reaction (PCR) tests are not readily available, computer-assisted detection of
monkeypox lesions could be beneficial for surveillance and rapid identification
of suspected cases. Deep learning methods have been found effective in the
automated detection of skin lesions, provided that sufficient training examples
are available. However, as of now, such datasets are not available for the
monkeypox disease. In the current study, we first develop the ``Monkeypox Skin
Lesion Dataset (MSLD)" consisting of skin lesion images of monkeypox, chickenpox,
and measles. The images are mainly collected from websites, news portals, and
publicly accessible case reports. Data augmentation is used to increase the
sample size, and a 3-fold cross-validation experiment is set up. In the next
step, several pre-trained deep learning models, namely, VGG-16, ResNet50, and
InceptionV3 are employed to classify monkeypox and other diseases. An ensemble
of the three models is also developed. ResNet50 achieves the best overall
accuracy of $82.96(\pm4.57\%)$, while VGG16 and the ensemble system achieved
accuracies of $81.48(\pm6.87\%)$ and $79.26(\pm1.05\%)$, respectively. A
prototype web-application is also developed as an online monkeypox screening
tool. While the initial results on this limited dataset are promising, a larger
demographically diverse dataset is required to further enhance the
generalizability of these models.
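As a hedged illustration of the transfer-learning setup described above (a
sketch assuming torchvision; not the study's code, and the three-class head
is an assumption), freezing an ImageNet-pretrained ResNet50 and replacing its
classifier looks like:
```python
import torch.nn as nn
from torchvision import models

def build_classifier(n_classes: int = 3) -> nn.Module:
    # Hypothetical fine-tuning sketch: freeze the pretrained backbone and
    # train only a new classification head (e.g., monkeypox/chickenpox/measles).
    model = models.resnet50(pretrained=True)
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # trainable head
    return model
```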
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 09:09:28 GMT"
}
] | 2022-07-08T00:00:00 |
[
[
"Ali",
"Shams Nafisa",
""
],
[
"Ahmed",
"Md. Tazuddin",
""
],
[
"Paul",
"Joydip",
""
],
[
"Jahan",
"Tasnim",
""
],
[
"Sani",
"S. M. Sakeef",
""
],
[
"Noor",
"Nawsabah",
""
],
[
"Hasan",
"Taufiq",
""
]
] |
new_dataset
| 0.964657 |
2002.07408
|
Chaoqi Yang
|
Haolin Zhou, Chaoqi Yang, Xiaofeng Gao, Qiong Chen, Gongshen Liu and
Guihai Chen
|
MoTiAC: Multi-Objective Actor-Critics for Real-Time Bidding
|
Accepted in ECML-PKDD 2022. Zhou and Yang made equal contributions
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online Real-Time Bidding (RTB) is a complex auction game in which
advertisers struggle to bid for ad impressions when a user request occurs.
Considering display cost, Return on Investment (ROI), and other influential Key
Performance Indicators (KPIs), large ad platforms try to balance the trade-off
among various goals in dynamics. To address the challenge, we propose a
Multi-ObjecTive Actor-Critics algorithm based on reinforcement learning (RL),
named MoTiAC, for the problem of bidding optimization with various goals. In
MoTiAC, objective-specific agents update the global network asynchronously with
different goals and perspectives, leading to a robust bidding policy. Unlike
previous RL models, the proposed MoTiAC can simultaneously fulfill
multi-objective tasks in complicated bidding environments. In addition, we
mathematically prove that our model will converge to Pareto optimality.
Finally, experiments on a large-scale real-world commercial dataset from
Tencent verify the effectiveness of MoTiAC versus a set of recent approaches.
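A rough, hypothetical sketch of a multi-objective actor update in this spirit
(all names are illustrative; the actual MoTiAC update is asynchronous and more
involved):
```python
import torch

def multi_objective_actor_loss(log_probs, critics, states, returns, weights):
    # Each critic scores one objective (e.g., cost, ROI); the actor ascends a
    # weighted sum of objective-specific advantages. Illustrative only.
    loss = torch.zeros(())
    for critic, ret, w in zip(critics, returns, weights):
        advantage = (ret - critic(states).squeeze(-1)).detach()
        loss = loss - w * (log_probs * advantage).mean()
    return loss
```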
|
[
{
"version": "v1",
"created": "Tue, 18 Feb 2020 07:16:39 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jul 2022 05:07:10 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Zhou",
"Haolin",
""
],
[
"Yang",
"Chaoqi",
""
],
[
"Gao",
"Xiaofeng",
""
],
[
"Chen",
"Qiong",
""
],
[
"Liu",
"Gongshen",
""
],
[
"Chen",
"Guihai",
""
]
] |
new_dataset
| 0.999696 |
2012.14219
|
Antoine Kaufmann
|
Hejing Li, Jialin Li, Antoine Kaufmann
|
SimBricks: End-to-End Network System Evaluation with Modular Simulation
|
17 pages, 13 figures, appeared in In Proceedings of ACM SIGCOMM 2022
Conference (SIGCOMM '22), August 22-26, 2022, Amsterdam, Netherlands
| null |
10.1145/3544216.3544253
| null |
cs.DC cs.NI cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
Full system "end-to-end" measurements in physical testbeds are the gold
standard for network systems evaluation but are often not feasible. When
physical testbeds are not available we frequently turn to simulation for
evaluation. Unfortunately, existing simulators are insufficient for end-to-end
evaluation, as they either cannot simulate all components, or simulate them
with inadequate detail. We address this through modular simulation, flexibly
combining and connecting multiple existing simulators for different components,
including processor and memory, devices, and network, into virtual end-to-end
testbeds tuned for each use-case. Our architecture, SimBricks, combines
well-defined component interfaces for extensibility and modularity, efficient
communication channels for local and distributed simulation, and a co-designed
efficient synchronization mechanism for accurate timing across simulators. We
demonstrate SimBricks scales to 1000 simulated hosts, each running a full
software stack including Linux, and that it can simulate testbeds with existing
NIC and switch RTL implementations. We also reproduce key findings from prior
work in congestion control, NIC architecture, and in-network computing in
SimBricks.
|
[
{
"version": "v1",
"created": "Mon, 28 Dec 2020 13:03:04 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Oct 2021 11:57:05 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jul 2022 10:10:41 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Li",
"Hejing",
""
],
[
"Li",
"Jialin",
""
],
[
"Kaufmann",
"Antoine",
""
]
] |
new_dataset
| 0.986392 |
2105.02905
|
Roberto Metere
|
Roberto Metere, Myriam Neaimeh, Charles Morisset, Carsten Maple,
Xavier Bellekens, Ricardo M. Czekster
|
Securing the Electric Vehicle Charging Infrastructure
|
42 pages, white paper
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electric Vehicles (EVs) can help alleviate our reliance on fossil fuels for
transport and electricity systems. However, charging millions of EV batteries
requires management to prevent overloading the electricity grid and minimise
costly upgrades that are ultimately paid for by consumers.
Managed chargers, such as Vehicle-to-Grid (V2G) chargers, allow control over
the time, speed and direction of charging. Such control assists in balancing
electricity supply and demand across a green electricity system and could
reduce costs for consumers.
Smart and V2G chargers connect EVs to the power grid using a charging device
which includes a data connection to exchange information and control commands
between various entities in the EV ecosystem. This introduces data privacy
concerns and is a potential target for cyber-security attacks. Therefore, the
implementation of a secure system is crucial to permit both consumers and
electricity system operators to trust smart charging and V2G.
In principle, we already have the technology needed for a connected EV
charging infrastructure to be securely enabled, borrowing best practices from
the Internet and industrial control systems. We must properly adapt the
security technology to take into account the challenges peculiar to the EV
charging infrastructure. Challenges go beyond technical considerations and
other issues arise such as balancing trade-offs between security and other
desirable qualities such as interoperability, scalability, crypto-agility,
affordability and energy efficiency.
This document reviews security and privacy topics relevant to the EV charging
ecosystem with a focus on smart charging and V2G.
|
[
{
"version": "v1",
"created": "Thu, 6 May 2021 18:10:42 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 16:03:07 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jul 2022 09:54:31 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Metere",
"Roberto",
""
],
[
"Neaimeh",
"Myriam",
""
],
[
"Morisset",
"Charles",
""
],
[
"Maple",
"Carsten",
""
],
[
"Bellekens",
"Xavier",
""
],
[
"Czekster",
"Ricardo M.",
""
]
] |
new_dataset
| 0.98828 |
2107.04470
|
Emadeldeen Eldele
|
Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong
Kwoh, Xiaoli Li, and Cuntai Guan
|
ADAST: Attentive Cross-domain EEG-based Sleep Staging Framework with
Iterative Self-Training
|
Published in IEEE Transactions on Emerging Topics in Computational
Intelligence
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Sleep staging is of great importance in the diagnosis and treatment of sleep
disorders. Recently, numerous data-driven deep learning models have been
proposed for automatic sleep staging. They mainly train the model on a large
public labeled sleep dataset and test it on a smaller one with subjects of
interest. However, they usually assume that the train and test data are drawn
from the same distribution, which may not hold in real-world scenarios.
Unsupervised domain adaptation (UDA) has been recently developed to handle this
domain shift problem. However, previous UDA methods applied for sleep staging
have two main limitations. First, they rely on a totally shared model for the
domain alignment, which may lose the domain-specific information during feature
extraction. Second, they only align the source and target distributions
globally without considering the class information in the target domain, which
hinders the classification performance of the model while testing. In this
work, we propose a novel adversarial learning framework called ADAST to tackle
the domain shift problem in the unlabeled target domain. First, we develop an
unshared attention mechanism to preserve the domain-specific features in both
domains. Second, we design an iterative self-training strategy to improve the
classification performance on the target domain via target domain pseudo
labels. We also propose dual distinct classifiers to increase the robustness
and quality of the pseudo labels. The experimental results on six cross-domain
scenarios validate the efficacy of our proposed framework and its advantage
over state-of-the-art UDA methods. The source code is available at
https://github.com/emadeldeen24/ADAST.
|
[
{
"version": "v1",
"created": "Fri, 9 Jul 2021 14:56:12 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Oct 2021 03:23:39 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Jul 2022 05:10:48 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Jul 2022 09:44:48 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Eldele",
"Emadeldeen",
""
],
[
"Ragab",
"Mohamed",
""
],
[
"Chen",
"Zhenghua",
""
],
[
"Wu",
"Min",
""
],
[
"Kwoh",
"Chee-Keong",
""
],
[
"Li",
"Xiaoli",
""
],
[
"Guan",
"Cuntai",
""
]
] |
new_dataset
| 0.996703 |
2108.00004
|
Yunfeng Bai
|
Yunfeng Bai, Qingwen Liu, Riqing Chen, Qingqing Zhang, and Wei Wang
|
Long-Range Optical Wireless Information and Power Transfer
| null | null | null | null |
cs.ET cs.IT eess.SP math.IT physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simultaneous wireless information and power transfer (SWIPT) is a remarkable
technology to support both data and energy transfer in the era of the
Internet of Things (IoT). In this paper, we propose a long-range optical
wireless information and power transfer system utilizing retro-reflectors, a
gain medium, and a telescope internal modulator to form the resonant beam,
achieving high-power and high-rate SWIPT. We adopt the transfer-matrix method,
which can characterize beam modulation, resonator stability, transmission loss,
and beam distribution. Then, we provide a model for energy harvesting and data
receiving, which can evaluate the SWIPT performance. Numerical results
illustrate that the proposed system can simultaneously supply 0$\sim$9 W
electrical power and 18 bit/s/Hz spectral efficiency over 20 m distance.
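For intuition on the transfer-matrix analysis, here is a minimal sketch of the
standard ABCD round-trip stability check (the numerical values are made up;
this is not the paper's cavity model):
```python
import numpy as np

def round_trip_stable(m: np.ndarray) -> bool:
    # A resonator supports a confined beam when |A + D| / 2 <= 1 for its
    # 2x2 ray-transfer (ABCD) round-trip matrix.
    return abs(m[0, 0] + m[1, 1]) / 2 <= 1

free_space = lambda d: np.array([[1.0, d], [0.0, 1.0]])
thin_lens = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

rt = thin_lens(10.5) @ free_space(20.0) @ thin_lens(10.5) @ free_space(20.0)
print(round_trip_stable(rt))
```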
|
[
{
"version": "v1",
"created": "Fri, 30 Jul 2021 03:12:57 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Feb 2022 06:33:48 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jul 2022 05:32:16 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Bai",
"Yunfeng",
""
],
[
"Liu",
"Qingwen",
""
],
[
"Chen",
"Riqing",
""
],
[
"Zhang",
"Qingqing",
""
],
[
"Wang",
"Wei",
""
]
] |
new_dataset
| 0.99687 |
2108.05877
|
Yuzhe Qin
|
Yuzhe Qin, Yueh-Hua Wu, Shaowei Liu, Hanwen Jiang, Ruihan Yang, Yang
Fu, Xiaolong Wang
|
DexMV: Imitation Learning for Dexterous Manipulation from Human Videos
|
https://yzqin.github.io/dexmv
| null | null | null |
cs.LG cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
While significant progress has been made on understanding hand-object
interactions in computer vision, it is still very challenging for robots to
perform complex dexterous manipulation. In this paper, we propose a new
platform and pipeline DexMV (Dexterous Manipulation from Videos) for imitation
learning. We design a platform with: (i) a simulation system for complex
dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer
vision system to record large-scale demonstrations of a human hand conducting
the same tasks. In our novel pipeline, we extract 3D hand and object poses from
videos, and propose a novel demonstration translation method to convert human
motion to robot demonstrations. We then apply and benchmark multiple imitation
learning algorithms with the demonstrations. We show that the demonstrations
can indeed improve robot learning by a large margin and solve the complex tasks
which reinforcement learning alone cannot solve. More details can be found in
the project page: https://yzqin.github.io/dexmv
|
[
{
"version": "v1",
"created": "Thu, 12 Aug 2021 17:51:18 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Aug 2021 10:33:05 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Aug 2021 08:53:51 GMT"
},
{
"version": "v4",
"created": "Thu, 2 Dec 2021 06:47:43 GMT"
},
{
"version": "v5",
"created": "Wed, 6 Jul 2022 17:57:48 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Qin",
"Yuzhe",
""
],
[
"Wu",
"Yueh-Hua",
""
],
[
"Liu",
"Shaowei",
""
],
[
"Jiang",
"Hanwen",
""
],
[
"Yang",
"Ruihan",
""
],
[
"Fu",
"Yang",
""
],
[
"Wang",
"Xiaolong",
""
]
] |
new_dataset
| 0.997322 |
2110.00070
|
Cagri Toraman
|
Cagri Toraman, Furkan \c{S}ahinu\c{c}, Eyup Halit Yilmaz
|
BlackLivesMatter 2020: An Analysis of Deleted and Suspended Users in
Twitter
|
Published at the 14th International ACM Conference on Web Science in
2022 (WebSci 2022)
| null |
10.1145/3501247.3531539
| null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
After George Floyd's death in May 2020, the volume of discussion in social
media increased dramatically. A series of protests followed this tragic event,
called the 2020 BlackLivesMatter movement. Eventually, many user accounts
were deleted by their owners or suspended for violating the rules of social
media platforms. In this study, we analyze what happened on Twitter before and
after the trigger event with respect to deleted and suspended users. We create
a novel dataset that includes approximately 500k users sharing 20m tweets, half
of whom actively participated in the 2020 BlackLivesMatter discussion, but some
of them were deleted or suspended later. We particularly examine the factors
for undesirable behavior in terms of spamming, negative language, hate speech,
and misinformation spread. We find that the users who participated in the 2020
BlackLivesMatter discussion posted more negative and undesirable tweets than
the users who did not. Furthermore, the number of new accounts on Twitter
increased significantly after the trigger event occurred, yet new users were
more prone to posting undesirable tweets than old ones.
|
[
{
"version": "v1",
"created": "Thu, 30 Sep 2021 20:00:18 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jul 2022 14:23:20 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Toraman",
"Cagri",
""
],
[
"Şahinuç",
"Furkan",
""
],
[
"Yilmaz",
"Eyup Halit",
""
]
] |
new_dataset
| 0.999788 |
2202.02446
|
Ching-An Cheng
|
Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal
|
Adversarially Trained Actor Critic for Offline Reinforcement Learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Adversarially Trained Actor Critic (ATAC), a new model-free
algorithm for offline reinforcement learning (RL) under insufficient data
coverage, based on the concept of relative pessimism. ATAC is designed as a
two-player Stackelberg game: A policy actor competes against an adversarially
trained value critic, who finds data-consistent scenarios where the actor is
inferior to the data-collection behavior policy. We prove that, when the actor
attains no regret in the two-player game, running ATAC produces a policy that
provably 1) outperforms the behavior policy over a wide range of
hyperparameters that control the degree of pessimism, and 2) competes with the
best policy covered by data with appropriately chosen hyperparameters. Compared
with existing works, notably our framework offers both theoretical guarantees
for general function approximation and a deep RL implementation scalable to
complex environments and large datasets. In the D4RL benchmark, ATAC
consistently outperforms state-of-the-art offline RL algorithms on a range of
continuous control tasks.
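A hypothetical sketch of the relative-pessimism idea (the names and exact loss
shape are a simplification, not the paper's implementation):
```python
import torch

def atac_style_losses(q, states, pi_actions, beh_actions, bellman_err, beta=1.0):
    # Critic: prefer data-consistent value functions under which the learned
    # policy looks worse than the behavior policy (relative pessimism).
    adv = (q(states, pi_actions) - q(states, beh_actions)).mean()
    critic_loss = adv + beta * bellman_err
    # Actor: maximize value under the adversarially trained critic.
    actor_loss = -q(states, pi_actions).mean()
    return critic_loss, actor_loss
```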
|
[
{
"version": "v1",
"created": "Sat, 5 Feb 2022 01:02:46 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2022 19:07:05 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Cheng",
"Ching-An",
""
],
[
"Xie",
"Tengyang",
""
],
[
"Jiang",
"Nan",
""
],
[
"Agarwal",
"Alekh",
""
]
] |
new_dataset
| 0.996867 |
2202.02587
|
Mohammad Ridwan Kabir
|
Shahed Anzarus Sabab (1, 2, 3, 4, and 5), Mohammad Ridwan Kabir (1, 2,
and 3), Sayed Rizban Hussain (1, 2, and 3), Hasan Mahmud (1, 2, and 3), Md.
Kamrul Hasan (1, 2, and 3), Husne Ara Rubaiyeat (6) ((1) Systems and Software
Lab (SSL), (2) Department of Computer Science and Engineering, (3) Islamic
University of Technology (IUT), Gazipur, Bangladesh, (4) Department of
Computer Science, (5) University of Manitoba, Winnipeg, Canada, (6) National
University, Bangladesh.)
|
VIS-iTrack: Visual Intention through Gaze Tracking using Low-Cost Webcam
|
15 pages, 9 figures, 4 tables
| null |
10.1109/ACCESS.2022.3187969
| null |
cs.HC cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Human intention is an internal, mental characterization for acquiring desired
information. From interactive interfaces containing either textual or graphical
information, intention to perceive desired information is subjective and
strongly connected with eye gaze. In this work, we determine such intention by
analyzing real-time eye gaze data with a low-cost regular webcam. We extracted
unique features (e.g., Fixation Count, Eye Movement Ratio) from the eye gaze
data of 31 participants to generate a dataset containing 124 samples of visual
intention for perceiving textual or graphical information, labeled as either
TEXT or IMAGE, having 48.39% and 51.61% distribution, respectively. Using this
dataset, we analyzed 5 classifiers, including Support Vector Machine (SVM)
(Accuracy: 92.19%). Using the trained SVM, we investigated the variation of
visual intention among 30 participants, distributed in 3 age groups, and found
that young users leaned more towards graphical content whereas older adults
were more interested in textual content. This finding suggests that
real-time eye gaze data can be a potential source of identifying visual
intention, analyzing which intention aware interactive interfaces can be
designed and developed to facilitate human cognition.
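A minimal scikit-learn sketch of this classification setup (the features and
labels below are random placeholders, not the collected gaze data):
```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(124, 6))     # placeholder for 124 gaze-feature samples
y = rng.integers(0, 2, size=124)  # placeholder TEXT(0)/IMAGE(1) labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```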
|
[
{
"version": "v1",
"created": "Sat, 5 Feb 2022 16:00:03 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Sabab",
"Shahed Anzarus",
"",
"1, 2, 3, 4, and 5"
],
[
"Kabir",
"Mohammad Ridwan",
"",
"1, 2,\n and 3"
],
[
"Hussain",
"Sayed Rizban",
"",
"1, 2, and 3"
],
[
"Mahmud",
"Hasan",
"",
"1, 2, and 3"
],
[
"Hasan",
"Md. Kamrul",
"",
"1, 2, and 3"
],
[
"Rubaiyeat",
"Husne Ara",
""
]
] |
new_dataset
| 0.99954 |
2202.06512
|
Qiyang Zhang
|
Qiyang Zhang, Xiang Li, Xiangying Che, Xiao Ma, Ao Zhou, Mengwei Xu,
Shangguang Wang, Yun Ma, Xuanzhe Liu
|
Benchmarking of DL Libraries and Models on Mobile Devices
| null | null | null | null |
cs.LG cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Deploying deep learning (DL) on mobile devices has been a notable trend in
recent years. To support fast inference of on-device DL, DL libraries play a
critical role as algorithms and hardware do. Unfortunately, no prior work ever
dives deep into the ecosystem of modern DL libs and provides quantitative
results on their performance. In this paper, we first build a comprehensive
benchmark that includes 6 representative DL libs and 15 diversified DL models.
We then perform extensive experiments on 10 mobile devices, which help reveal a
complete landscape of the current mobile DL libs ecosystem. For example, we
find that the best-performing DL lib is severely fragmented across different
models and hardware, and the performance gap between DL libs can be substantial. In
fact, the impacts of DL libs can overwhelm the optimizations from algorithms or
hardware, e.g., model quantization and GPU/DSP-based heterogeneous computing.
Finally, atop the observations, we summarize practical implications to
different roles in the DL lib ecosystem.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 07:00:31 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jul 2022 09:45:24 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Zhang",
"Qiyang",
""
],
[
"Li",
"Xiang",
""
],
[
"Che",
"Xiangying",
""
],
[
"Ma",
"Xiao",
""
],
[
"Zhou",
"Ao",
""
],
[
"Xu",
"Mengwei",
""
],
[
"Wang",
"Shangguang",
""
],
[
"Ma",
"Yun",
""
],
[
"Liu",
"Xuanzhe",
""
]
] |
new_dataset
| 0.999777 |
2206.07360
|
Sebastian Schellhammer
|
Salim Hafid, Sebastian Schellhammer, Sandra Bringay, Konstantin
Todorov, Stefan Dietze
|
SciTweets -- A Dataset and Annotation Framework for Detecting Scientific
Online Discourse
|
submitted to CIKM 2022
| null | null | null |
cs.CL cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Scientific topics, claims and resources are increasingly debated as part of
online discourse, where prominent examples include discourse related to
COVID-19 or climate change. This has led to both significant societal impact
and increased interest in scientific online discourse from various disciplines.
For instance, communication studies aim at a deeper understanding of biases,
quality or spreading pattern of scientific information whereas computational
methods have been proposed to extract, classify or verify scientific claims
using NLP and IR techniques. However, research across disciplines currently
suffers from both a lack of robust definitions of the various forms of
science-relatedness as well as appropriate ground truth data for distinguishing
them. In this work, we contribute (a) an annotation framework and corresponding
definitions for different forms of scientific relatedness of online discourse
in Tweets, (b) an expert-annotated dataset of 1261 tweets obtained through our
labeling framework reaching an average Fleiss Kappa $\kappa$ of 0.63, (c) a
multi-label classifier trained on our data able to detect science-relatedness
with 89% F1 and also able to detect distinct forms of scientific knowledge
(claims, references). With this work we aim to lay the foundation for
developing and evaluating robust methods for analysing science as part of
large-scale online discourse.
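As a hedged stand-in for the paper's multi-label classifier, a simple TF-IDF
one-vs-rest baseline (the example tweets and label scheme are invented for
illustration):
```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

tweets = ["new study links screen time to sleep", "full paper: example.org", "nice day"]
labels = np.array([[1, 0], [0, 1], [0, 0]])  # e.g., [scientific claim, reference]

clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(tweets, labels)
print(clf.predict(["a new study claims coffee helps"]))
```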
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 08:14:55 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jul 2022 11:32:07 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Hafid",
"Salim",
""
],
[
"Schellhammer",
"Sebastian",
""
],
[
"Bringay",
"Sandra",
""
],
[
"Todorov",
"Konstantin",
""
],
[
"Dietze",
"Stefan",
""
]
] |
new_dataset
| 0.999713 |
2206.13358
|
Timon Hackenjos
|
Timon Hackenjos, Benedikt Wagner, Julian Herr, Jochen Rill, Marek
Wehmer, Niklas Goerke, Ingmar Baumgart
|
FIDO2 With Two Displays-Or How to Protect Security-Critical Web
Transactions Against Malware Attacks
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
With the rise of attacks on online accounts in the past years, more and more
services offer two-factor authentication for their users. Having factors from
two of the three categories (something you know, something you have, and
something you are) should ensure that an attacker cannot compromise two of them
at once. Thus, an adversary should not be able to maliciously interact with
one's account. However, this is only true if one considers a weak adversary. In
particular, since most current solutions only authenticate a session and not
individual transactions, they are ineffective if one's device is infected with
malware. For online banking, the banking industry has long since identified the
need for authenticating transactions. However, specifications of such
authentication schemes are not public and implementation details vary wildly
from bank to bank, with most still unable to protect against malware. In
this work, we present a generic approach to tackle the problem of malicious
account takeovers, even in the presence of malware. To this end, we define a
new paradigm to improve two-factor authentication that involves the concepts of
one-out-of-two security and transaction authentication. Web authentication
schemes following this paradigm can protect security-critical transactions
against manipulation, even if one of the factors is completely compromised.
Analyzing existing authentication schemes, we find that they do not realize
one-out-of-two security. We give a blueprint of how to design secure web
authentication schemes in general. Based on this blueprint we propose FIDO2
With Two Displays (FIDO2D), a new web authentication scheme based on the FIDO2
standard and prove its security using Tamarin. We hope that our work inspires a
new wave of more secure web authentication schemes, which protect
security-critical transactions even against attacks with malware.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 15:06:59 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jun 2022 09:35:19 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jul 2022 06:26:40 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Hackenjos",
"Timon",
""
],
[
"Wagner",
"Benedikt",
""
],
[
"Herr",
"Julian",
""
],
[
"Rill",
"Jochen",
""
],
[
"Wehmer",
"Marek",
""
],
[
"Goerke",
"Niklas",
""
],
[
"Baumgart",
"Ingmar",
""
]
] |
new_dataset
| 0.998458 |
2207.02253
|
Samee Ibraheem
|
Samee Ibraheem, Gaoyue Zhou, and John DeNero
|
Putting the Con in Context: Identifying Deceptive Actors in the Game of
Mafia
|
NAACL 2022 Main Conference Long Paper
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While neural networks demonstrate a remarkable ability to model linguistic
content, capturing contextual information related to a speaker's conversational
role is an open area of research. In this work, we analyze the effect of
speaker role on language use through the game of Mafia, in which participants
are assigned either an honest or a deceptive role. In addition to building a
framework to collect a dataset of Mafia game records, we demonstrate that there
are differences in the language produced by players with different roles. We
confirm that classification models are able to rank deceptive players as more
suspicious than honest ones based only on their use of language. Furthermore,
we show that training models on two auxiliary tasks outperforms a standard
BERT-based text classification approach. We also present methods for using our
trained models to identify features that distinguish between player roles,
which could be used to assist players during the Mafia game.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 18:29:27 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Ibraheem",
"Samee",
""
],
[
"Zhou",
"Gaoyue",
""
],
[
"DeNero",
"John",
""
]
] |
new_dataset
| 0.999603 |
2207.02390
|
Guang Yang
|
Jiahao Huang, Xiaodan Xing, Zhifan Gao, Guang Yang
|
Swin Deformable Attention U-Net Transformer (SDAUT) for Explainable Fast
MRI
|
MICCAI 2022
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Fast MRI aims to reconstruct a high fidelity image from partially observed
measurements. Exuberant development in fast MRI using deep learning has been
witnessed recently. Meanwhile, novel deep learning paradigms, e.g., Transformer
based models, are fast-growing in natural language processing and promptly
developed for computer vision and medical image analysis due to their prominent
performance. Nevertheless, due to the complexity of the Transformer, its
application to fast MRI may not be straightforward. The main obstacle is that
the computational cost of the self-attention layer, the core component of the
Transformer, can be prohibitive for high-resolution MRI inputs. In this study, we
propose a new Transformer architecture for solving fast MRI that coupled
Shifted Windows Transformer with U-Net to reduce the network complexity. We
incorporate deformable attention to support the explainability of our
reconstruction model. We empirically demonstrate that our method achieves
consistently superior performance on the fast MRI task. Besides, compared to
state-of-the-art Transformer models, our method has fewer network parameters
while revealing explainability. The code is publicly available at
https://github.com/ayanglab/SDAUT.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 15:56:46 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Huang",
"Jiahao",
""
],
[
"Xing",
"Xiaodan",
""
],
[
"Gao",
"Zhifan",
""
],
[
"Yang",
"Guang",
""
]
] |
new_dataset
| 0.996144 |
2207.02402
|
Chen Yuqian
|
Yuqian Chen, Fan Zhang, Chaoyi Zhang, Tengfei Xue, Leo R. Zekelman,
Jianzhong He, Yang Song, Nikos Makris, Yogesh Rathi, Alexandra J. Golby,
Weidong Cai, Lauren J. O'Donnell
|
White Matter Tracts are Point Clouds: Neuropsychological Score
Prediction and Critical Region Localization via Geometric Deep Learning
|
11 pages, 3 figures, MICCAI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
White matter tract microstructure has been shown to influence
neuropsychological scores of cognitive performance. However, prediction of
these scores from white matter tract data has not been attempted. In this
paper, we propose a deep-learning-based framework for neuropsychological score
prediction using microstructure measurements estimated from diffusion magnetic
resonance imaging (dMRI) tractography, focusing on predicting performance on a
receptive vocabulary assessment task based on a critical fiber tract for
language, the arcuate fasciculus (AF). We directly utilize information from all
points in a fiber tract, without the need to average data along the fiber as is
traditionally required by diffusion MRI tractometry methods. Specifically, we
represent the AF as a point cloud with microstructure measurements at each
point, enabling adoption of point-based neural networks. We improve prediction
performance with the proposed Paired-Siamese Loss that utilizes information
about differences between continuous neuropsychological scores. Finally, we
propose a Critical Region Localization (CRL) algorithm to localize informative
anatomical regions containing points with strong contributions to the
prediction results. Our method is evaluated on data from 806 subjects from the
Human Connectome Project dataset. Results demonstrate superior
neuropsychological score prediction performance compared to baseline methods.
We discover that critical regions in the AF are strikingly consistent across
subjects, with the highest number of strongly contributing points located in
frontal cortical regions (i.e., the rostral middle frontal, pars opercularis,
and pars triangularis), which are strongly implicated as critical areas for
language processes.
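To make the point-cloud formulation concrete, a minimal PointNet-style sketch
(an illustration only; the layer sizes and the 6-dimensional per-point input
are assumptions, not the paper's architecture):
```python
import torch
import torch.nn as nn

class TractScoreRegressor(nn.Module):
    # Sketch: a shared per-point MLP, symmetric max-pooling over all points
    # of the fiber tract, and a head regressing the neuropsychological score.
    def __init__(self, in_dim: int = 6):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                       nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Linear(128, 1)

    def forward(self, pts):               # pts: (batch, n_points, in_dim)
        feats = self.point_mlp(pts).max(dim=1).values
        return self.head(feats)

print(TractScoreRegressor()(torch.randn(2, 1024, 6)).shape)  # (2, 1)
```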
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 02:03:28 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Chen",
"Yuqian",
""
],
[
"Zhang",
"Fan",
""
],
[
"Zhang",
"Chaoyi",
""
],
[
"Xue",
"Tengfei",
""
],
[
"Zekelman",
"Leo R.",
""
],
[
"He",
"Jianzhong",
""
],
[
"Song",
"Yang",
""
],
[
"Makris",
"Nikos",
""
],
[
"Rathi",
"Yogesh",
""
],
[
"Golby",
"Alexandra J.",
""
],
[
"Cai",
"Weidong",
""
],
[
"O'Donnell",
"Lauren J.",
""
]
] |
new_dataset
| 0.957215 |
2207.02431
|
Shruti Vyas
|
Shruti Vyas, Chen Chen, and Mubarak Shah
|
GAMa: Cross-view Video Geo-localization
| null |
ECCV 2022
| null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The existing work in cross-view geo-localization is based on images where a
ground panorama is matched to an aerial image. In this work, we focus on ground
videos instead of images, which provide additional contextual cues that are
important for this task. There are no existing datasets for this problem;
therefore, we propose the GAMa dataset, a large-scale dataset with ground
videos and corresponding aerial images. We also propose a novel approach to
solve this problem. At clip level, a short video clip is matched with the
corresponding aerial image and is later used to obtain video-level
geo-localization of a long video.
Moreover, we propose a hierarchical approach to further improve the clip-level
geolocalization. It is a challenging dataset, with unaligned pairs and a
limited field of view, and our proposed method achieves a Top-1 recall rate of
19.4% and 45.1% @1.0mile. Code and dataset are available at the following link:
https://github.com/svyas23/GAMa.
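A small sketch of how clip-level retrieval can be scored (an illustration of
the evaluation idea, not the released code):
```python
import torch
import torch.nn.functional as F

def top1_recall(ground_emb: torch.Tensor, aerial_emb: torch.Tensor) -> float:
    # Rank aerial images for each ground clip by cosine similarity in a
    # shared embedding space and count how often the true match ranks first.
    g = F.normalize(ground_emb, dim=-1)
    a = F.normalize(aerial_emb, dim=-1)
    best = (g @ a.t()).argmax(dim=1)
    return (best == torch.arange(len(g))).float().mean().item()
```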
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 04:25:51 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Vyas",
"Shruti",
""
],
[
"Chen",
"Chen",
""
],
[
"Shah",
"Mubarak",
""
]
] |
new_dataset
| 0.999718 |
2207.02442
|
Vidhi Jain
|
Vidhi Jain, Yixin Lin, Eric Undersander, Yonatan Bisk, Akshara Rai
|
Transformers are Adaptable Task Planners
|
https://anonymous.4open.science/r/temporal_task_planner-Paper148/
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Every home is different, and every person likes things done in their
particular way. Therefore, home robots of the future need to both reason about
the sequential nature of day-to-day tasks and generalize to user's preferences.
To this end, we propose a Transformer Task Planner (TTP) that learns high-level
actions from demonstrations by leveraging object attribute-based
representations. TTP can be pre-trained on multiple preferences and shows
generalization to unseen preferences using a single demonstration as a prompt
in a simulated dishwasher loading task. Further, we demonstrate real-world dish
rearrangement using TTP with a Franka Panda robotic arm, prompted using a
single human demonstration.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 05:13:02 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Jain",
"Vidhi",
""
],
[
"Lin",
"Yixin",
""
],
[
"Undersander",
"Eric",
""
],
[
"Bisk",
"Yonatan",
""
],
[
"Rai",
"Akshara",
""
]
] |
new_dataset
| 0.997249 |
2207.02489
|
Debajyoti Halder
|
Rahul Saini, Debajyoti Halder, Anand M. Baswade
|
RIDS : Real-time Intrusion Detection System for WPA3 enabled Enterprise
Networks
| null | null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the advent of new IEEE 802.11ax (WiFi 6) devices, enabling security is a
priority. Since previous versions were found to have security vulnerabilities,
WiFi Protected Access 3 (WPA3) was introduced to fix the most common security
flaws. Although WPA3 is an improvement over its predecessor in terms of
security, recently it was found that WPA3 has a few security vulnerabilities as
well. In this paper, we review the previously known vulnerabilities in
WPA3 and WPA2. In addition to that, we have created our own dataset based on
WPA3 attacks (Section III). We have proposed a two-stage solution for the
detection of an intrusion in the network. The two-stage approach helps ease the
computational burden on the AP and the WLAN controller. First, the AP performs
a lightweight operation for some duration (say 500 ms) at certain time
intervals. Upon discovering any abnormality in the traffic flow, an
ML-based solution at the controller detects the type of attack. Our
approach is to utilize resources on the AP as well as the back-end controller
with a certain level of optimization. We have achieved over 99% accuracy in attack
detection using an ML-based solution. We have also publicly provided our code
and dataset to the open-source research community so that they can contribute
to future research.
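A hypothetical sketch of the two-stage split (the threshold rule, features,
and controller model are assumptions, not the paper's implementation):
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ap_stage(window_rate: float, baseline: np.ndarray, tol: float = 3.0) -> bool:
    # Stage 1 (on the AP): a cheap z-score check on a short traffic window
    # flags abnormal flow without heavy computation.
    return abs(window_rate - baseline.mean()) > tol * baseline.std()

# Stage 2 (on the controller): a conventional ML classifier labels the attack
# type, invoked only for windows flagged by the AP stage.
controller_clf = RandomForestClassifier(n_estimators=100)
```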
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 07:49:12 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Saini",
"Rahul",
""
],
[
"Halder",
"Debajyoti",
""
],
[
"Baswade",
"Anand M.",
""
]
] |
new_dataset
| 0.973737 |
2207.02502
|
Kevin De Porre
|
Kevin De Porre, Carla Ferreira, Elisa Gonzalez Boix
|
VeriFx: Correct Replicated Data Types for the Masses
|
35 pages, 13 figures
| null | null | null |
cs.PL cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Distributed systems adopt weak consistency to ensure high availability and
low latency, but state convergence is hard to guarantee due to conflicts.
Experts carefully design replicated data types (RDTs) that resemble sequential
data types and embed conflict resolution mechanisms that ensure convergence.
Designing RDTs is challenging as their correctness depends on subtleties such
as the ordering of concurrent operations. Currently, researchers manually
verify RDTs, either by paper proofs or using proof assistants. Unfortunately,
paper proofs are subject to reasoning flaws and mechanized proofs verify a
formalisation instead of a real-world implementation. Furthermore, writing
mechanized proofs is reserved to verification experts and is extremely time
consuming. To simplify the design, implementation, and verification of RDTs, we
propose VeriFx, a high-level programming language with automated proof
capabilities. VeriFx lets programmers implement RDTs atop functional
collections and express correctness properties that are verified automatically.
Verified RDTs can be transpiled to mainstream languages (currently Scala or
JavaScript). VeriFx also provides libraries for implementing and verifying
Conflict-free Replicated Data Types (CRDTs) and Operational Transformation (OT)
functions. These libraries implement the general execution model of those
approaches and define their correctness properties. We use the libraries to
implement and verify an extensive portfolio of 35 CRDTs and reproduce a study
on the correctness of OT functions.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 08:11:12 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"De Porre",
"Kevin",
""
],
[
"Ferreira",
"Carla",
""
],
[
"Boix",
"Elisa Gonzalez",
""
]
] |
new_dataset
| 0.966479 |
2207.02542
|
Manuel Brenner
|
Manuel Brenner, Florian Hess, Jonas M. Mikhaeil, Leonard Bereska,
Zahra Monfared, Po-Chen Kuo, Daniel Durstewitz
|
Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems
|
To be published in the Proceedings of the 39th International
Conference on Machine Learning (ICML 2022)
| null | null | null |
cs.LG math.DS nlin.CD physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many scientific disciplines, we are interested in inferring the nonlinear
dynamical system underlying a set of observed time series, a challenging task
in the face of chaotic behavior and noise. Previous deep learning approaches
toward this goal often suffered from a lack of interpretability and
tractability. In particular, the high-dimensional latent spaces often required
for a faithful embedding, even when the underlying dynamics lives on a
lower-dimensional manifold, can hamper theoretical analysis. Motivated by the
emerging principles of dendritic computation, we augment a dynamically
interpretable and mathematically tractable piecewise-linear (PL) recurrent
neural network (RNN) by a linear spline basis expansion. We show that this
approach retains all the theoretically appealing properties of the simple
PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical
systems in comparatively low dimensions. We employ two frameworks for training
the system, one combining back-propagation-through-time (BPTT) with teacher
forcing, and another based on fast and scalable variational inference. We show
that the dendritically expanded PLRNN achieves better reconstructions with
fewer parameters and dimensions on various dynamical systems benchmarks and
compares favorably to other methods, while retaining a tractable and
interpretable structure.
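For readers unfamiliar with PLRNNs, a minimal sketch of the basic
piecewise-linear latent update the paper builds on (the dendritic spline
expansion is omitted; the matrix shapes and values are assumptions):
```python
import numpy as np

def plrnn_step(z: np.ndarray, A: np.ndarray, W: np.ndarray, h: np.ndarray) -> np.ndarray:
    # Basic PLRNN latent dynamics: z' = A z + W max(z, 0) + h, i.e. a linear
    # part plus a ReLU-gated coupling, which makes the map piecewise linear.
    return A @ z + W @ np.maximum(z, 0.0) + h

z = np.zeros(3)
A = 0.9 * np.eye(3)
W = 0.1 * (np.ones((3, 3)) - np.eye(3))  # off-diagonal coupling
print(plrnn_step(z, A, W, h=np.ones(3)))
```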
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 09:43:03 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Brenner",
"Manuel",
""
],
[
"Hess",
"Florian",
""
],
[
"Mikhaeil",
"Jonas M.",
""
],
[
"Bereska",
"Leonard",
""
],
[
"Monfared",
"Zahra",
""
],
[
"Kuo",
"Po-Chen",
""
],
[
"Durstewitz",
"Daniel",
""
]
] |
new_dataset
| 0.990277 |
2207.02662
|
Shuhao Zeng
|
Shuhao Zeng, Hongliang Zhang, Boya Di, Haichao Qin, Xin Su, Lingyang
Song
|
Reconfigurable Refractive Surfaces: An Energy-Efficient Way to
Holographic MIMO
|
5 pages, 4 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Holographic Multiple Input Multiple Output (HMIMO), which integrates massive
antenna elements into a compact space to achieve a spatially continuous
aperture, plays an important role in future wireless networks. With numerous
antenna elements, it is hard to implement the HMIMO via phased arrays due to
unacceptable power consumption. To address this issue, reconfigurable
refractive surface (RRS) is an energy-efficient enabler of HMIMO since the
surface is free of expensive phase shifters. Unlike traditional metasurfaces
working as passive relays, the RRS is used as transmit antennas, where the
far-field approximation does not hold anymore, urging a new performance
analysis framework. In this letter, we first derive the data rate of an
RRS-based single-user downlink system, and then compare its power consumption
with the phased array. Simulation results verify our analysis and show that the
RRS is an energy-efficient way to HMIMO.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 13:31:51 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Zeng",
"Shuhao",
""
],
[
"Zhang",
"Hongliang",
""
],
[
"Di",
"Boya",
""
],
[
"Qin",
"Haichao",
""
],
[
"Su",
"Xin",
""
],
[
"Song",
"Lingyang",
""
]
] |
new_dataset
| 0.999083 |
2207.02663
|
Wenliang Dai
|
Wenliang Dai, Samuel Cahyawijaya, Tiezheng Yu, Elham J Barezi, Pascale
Fung
|
Kaggle Competition: Cantonese Audio-Visual Speech Recognition for In-car
Commands
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
With the rise of deep learning and intelligent vehicles, the smart assistant
has become an essential in-car component to facilitate driving and provide
extra functionalities. In-car smart assistants should be able to process
general as well as car-related commands and perform corresponding actions,
which eases driving and improves safety. However, in this research field, most
datasets are in major languages, such as English and Chinese. There is a huge
data scarcity issue for low-resource languages, hindering the development of
research and applications for broader communities. Therefore, it is crucial to
have more benchmarks to raise awareness and motivate the research in
low-resource languages. To mitigate this problem, we collect a new dataset,
namely Cantonese In-car Audio-Visual Speech Recognition (CI-AVSR), for in-car
speech recognition in the Cantonese language with video and audio data.
Together with it, we propose Cantonese Audio-Visual Speech Recognition for
In-car Commands as a new challenge for the community to tackle low-resource
speech recognition under in-car scenarios.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 13:31:56 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Dai",
"Wenliang",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"Yu",
"Tiezheng",
""
],
[
"Barezi",
"Elham J",
""
],
[
"Fung",
"Pascale",
""
]
] |
new_dataset
| 0.999609 |
2207.02671
|
Jeff Denis
|
Jeff Denis, Jean-Sebastien Plante and Alexandre Girard
|
Low-Level Force-Control of MR-Hydrostatic Actuators
| null |
IEEE Robotics and Automation Letters ( Volume: 6, Issue: 2, April
2021)
|
10.1109/LRA.2021.3063972
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Precise and high-fidelity force control is critical for new generations of
robots that interact with humans and unknown environments. Mobile robots, such
as wearable devices and legged robots, must also be lightweight to accomplish
their function. Hydrostatic transmissions have been proposed as a promising
strategy for meeting these two challenging requirements. In previous
publications, it was shown that using magnetorheological (MR) actuators coupled
with hydrostatic transmissions provides high power density and great open-loop
human-robot interactions. Still, the open-loop force fidelity at low and high
frequencies is decreased by the transmission's dynamics and by nonlinear
friction. This letter compares control strategies for MR-hydrostatic actuator
systems to increase its torque fidelity, defined as the bandwidth (measured vs
desired torque reference) and transparency (minimizing the undesired forces
reflected to the end effector when backdriving the robot). Four control
approaches are developed and compared experimentally: (1) Open-loop control
with friction compensation; (2) non-collocated pressure feedback; (3)
collocated pressure feedback; (4) LQGI state feedback. A dither strategy is
also implemented to smoothen ball screw friction. Results show that approaches
(1), (2) and (3) can increase performance but face compromises,
while approach (4) can simultaneously improve all metrics. These results show
the potential of using control schemes for improving the force control
performance of robots using tethered architectures, addressing issues such as
transmission dynamics and friction.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 13:37:51 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Denis",
"Jeff",
""
],
[
"Plante",
"Jean-Sebastien",
""
],
[
"Girard",
"Alexandre",
""
]
] |
new_dataset
| 0.998287 |
2207.02696
|
Chien-Yao Wang
|
Chien-Yao Wang, Alexey Bochkovskiy, Hong-Yuan Mark Liao
|
YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for
real-time object detectors
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
YOLOv7 surpasses all known object detectors in both speed and accuracy in the
range from 5 FPS to 160 FPS and has the highest accuracy 56.8% AP among all
known real-time object detectors with 30 FPS or higher on GPU V100. YOLOv7-E6
object detector (56 FPS V100, 55.9% AP) outperforms both transformer-based
detector SWIN-L Cascade-Mask R-CNN (9.2 FPS A100, 53.9% AP) by 509% in speed
and 2% in accuracy, and convolutional-based detector ConvNeXt-XL Cascade-Mask
R-CNN (8.6 FPS A100, 55.2% AP) by 551% in speed and 0.7% AP in accuracy.
YOLOv7 also outperforms YOLOR, YOLOX, Scaled-YOLOv4, YOLOv5, DETR,
Deformable DETR, DINO-5scale-R50, ViT-Adapter-B and many other object detectors
in speed and accuracy. Moreover, we train YOLOv7 only on the MS COCO dataset from
scratch without using any other datasets or pre-trained weights. Source code is
released in https://github.com/WongKinYiu/yolov7.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 14:01:58 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Wang",
"Chien-Yao",
""
],
[
"Bochkovskiy",
"Alexey",
""
],
[
"Liao",
"Hong-Yuan Mark",
""
]
] |
new_dataset
| 0.999456 |
2207.02697
|
J\'er\^ome Leroux
|
Petr Jan\v{c}ar and J\'er\^ome Leroux
|
Semilinear Home-space is Decidable for Petri Nets
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
A set of configurations $\mathbf{H}$ is a home-space for a set of
configurations $\mathbf{X}$ of a Petri net if every configuration reachable
from $\mathbf{X}$ can reach $\mathbf{H}$. The semilinear home-space problem for
Petri nets asks, given a Petri net $A$ and semilinear sets of configurations
$\mathbf{X},\mathbf{H}$, whether $\mathbf{H}$ is a home-space for $\mathbf{X}$. In
1989, Davide de Frutos Escrig and Colette Johnen proved that the problem is
decidable when $\mathbf{X}$ is a singleton and $\mathbf{H}$ is a finite union
of linear sets using the same periods. In this paper, we show that the general
problem is decidable. This result is obtained by proving a duality between the
reachability problem and the non-home-space problem. More formally, we prove
that for any Petri net $A$ and for any linear set of configurations
$\mathbf{L}$, we can effectively compute a semilinear set $\mathbf{W}$ of
configurations such that for every set $\mathbf{X}$, the set $\mathbf{L}$ is
not a home-space for $\mathbf{X}$ if, and only if, $\mathbf{W}$ is reachable
from $\mathbf{X}$.
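For context, the firing rule underlying reachability can be sketched in a few
lines (a standard textbook formulation, not the paper's construction):
```python
import numpy as np

def fire(marking: np.ndarray, pre: np.ndarray, post: np.ndarray, t: int):
    # Standard Petri net semantics: transition t is enabled when the marking
    # covers pre[:, t]; firing yields marking - pre[:, t] + post[:, t].
    if np.all(marking >= pre[:, t]):
        return marking - pre[:, t] + post[:, t]
    return None

# Two places, one transition moving a token from place 0 to place 1.
pre = np.array([[1], [0]]); post = np.array([[0], [1]])
print(fire(np.array([1, 0]), pre, post, 0))  # [0 1]
```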
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 14:07:43 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Jančar",
"Petr",
""
],
[
"Leroux",
"Jérôme",
""
]
] |
new_dataset
| 0.993074 |
2207.02706
|
Chintan Patel
|
Chintan Patel, Nishant Doshi
|
LDA-2IoT : A Level Dependent Authentication using Two Factor for IoT
Paradigm
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The widespread expansion of IoT-based services is changing people's
living habits. With vast data generation and intelligent decision support
systems, the IoT is helping many industries improve their products and
services. The major challenge for IoT developers is to design a secure data
transmission system and a trustworthy inter-device and user-device
communication system. The data starts its journey at the sensing devices and
reaches the user dashboard through a different medium. Authentication between
two IoT devices provides a reliable and lightweight key generation system. In
this paper, we put forward a novel authentication approach for the IoT
paradigm. We postulate an ECC-based two-factor Level Dependent Authentication
for Generic IoT (LDA 2IoT) in which users at a particular level in the
hierarchy can access the sensors deployed at below or the equal level of the
hierarchy. We provide a security analysis for the proposed LDA 2IoT based on
the Dolev-Yao channel and the widely accepted random-oracle-based ROR model. We
provide the implementation of the proposed scheme using the MQTT protocol.
Finally, we set forth a performance analysis for the proposed LDA 2IoT system
by comparing it with other existing schemes.
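A tiny sketch of the level-dependent access rule from the abstract (the
numeric encoding of levels is an assumption; the actual scheme layers
ECC-based two-factor authentication on top of this check):
```python
def can_access(user_level: int, sensor_level: int) -> bool:
    # A user at a given hierarchy level may query sensors deployed at the
    # same level or below it; larger value = lower level here (assumed).
    return sensor_level >= user_level

print(can_access(user_level=2, sensor_level=3))  # True
print(can_access(user_level=2, sensor_level=1))  # False
```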
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 14:27:38 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Patel",
"Chintan",
""
],
[
"Doshi",
"Nishant",
""
]
] |
new_dataset
| 0.999173 |
2207.02711
|
Deepal Tennakoon
|
Deepal Tennakoon, Vincent Gramoli
|
SocChain: Blockchain with Swift Proportional Governance for Bribery
Mitigation
| null | null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain governance is paramount to leading securely a large group of users
towards the same goal without disputes about the legitimacy of a blockchain
instance over another. As of today, there is no efficient way of protecting
this governance against an oligarchy. This paper aims to offer a new dimension
to the security of blockchains by defining the Swift Proportional Governance
problem. This problem is to rapidly elect governance users that proportionally
represent voters without the risk of dictatorship. We then design and implement
an open permissioned blockchain called SocChain (Social Choice Blockchain) that
mitigates bribery by building upon results in social choice theory. We deploy
SocChain and evaluate our new multi-winner election DApp running on top of it.
Our results indicate that using our DApp, 150 voters can elect a proportionally
representative committee of 150 members within 5 minutes. Hence we show that
SocChain can elect as many representatives as members in various global
organizations.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 14:33:26 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Tennakoon",
"Deepal",
""
],
[
"Gramoli",
"Vincent",
""
]
] |
new_dataset
| 0.994025 |
2207.02746
|
Jeffrey Helt
|
Jeffrey Helt and Abhinav Sharma and Daniel J. Abadi and Wyatt Lloyd
and Jose M. Faleiro
|
C5: Cloned Concurrency Control that Always Keeps Up
|
14 pages, 12 figures
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Asynchronously replicated primary-backup databases are commonly deployed to
improve availability and offload read-only transactions. To both apply
replicated writes from the primary and serve read-only transactions, the
backups implement a cloned concurrency control protocol. The protocol ensures
read-only transactions always return a snapshot of state that previously
existed on the primary. This compels the backup to exactly copy the commit
order resulting from the primary's concurrency control. Existing cloned
concurrency control protocols guarantee this by limiting the backup's
parallelism. As a result, the primary's concurrency control executes some
workloads with more parallelism than these protocols. In this paper, we prove
that this parallelism gap leads to unbounded replication lag, where writes can
take arbitrarily long to replicate to the backup and which has led to
catastrophic failures in production systems. We then design C5, the first
cloned concurrency protocol to provide bounded replication lag. We implement
two versions of C5. Our evaluation in MyRocks, a widely deployed database,
demonstrates C5 provides bounded replication lag. Our evaluation in Cicada, a
recent in-memory database, demonstrates C5 keeps up with even the fastest of
primaries.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 15:30:48 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Helt",
"Jeffrey",
""
],
[
"Sharma",
"Abhinav",
""
],
[
"Abadi",
"Daniel J.",
""
],
[
"Lloyd",
"Wyatt",
""
],
[
"Faleiro",
"Jose M.",
""
]
] |
new_dataset
| 0.967684 |
2207.02756
|
Zihang Lin
|
Zihang Lin, Chaolei Tan, Jian-Fang Hu, Zhi Jin, Tiancai Ye, Wei-Shi
Zheng
|
STVGFormer: Spatio-Temporal Video Grounding with Static-Dynamic
Cross-Modal Understanding
|
Technical report. The 1st place solution in the HC-STVG track of the
4th Person in Context Challenge(2022)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this technical report, we introduce our solution to human-centric
spatio-temporal video grounding task. We propose a concise and effective
framework named STVGFormer, which models spatiotemporal visual-linguistic
dependencies with a static branch and a dynamic branch. The static branch
performs cross-modal understanding in a single frame and learns to localize the
target object spatially according to intra-frame visual cues like object
appearances. The dynamic branch performs cross-modal understanding across
multiple frames. It learns to predict the starting and ending time of the
target moment according to dynamic visual cues like motions. Both the static
and dynamic branches are designed as cross-modal transformers. We further
design a novel static-dynamic interaction block to enable the static and
dynamic branches to transfer useful and complementary information from each
other, which is shown to be effective to improve the prediction on hard cases.
Our proposed method achieved 39.6% vIoU and won the first place in the HC-STVG
track of the 4th Person in Context Challenge.
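For readers who want a feel for the two-branch design, here is a minimal PyTorch sketch under assumed shapes; the module names, sizes, and heads are illustrative stand-ins, not the authors' architecture.

```python
# Toy two-branch grounding model: a static branch grounds the object per
# frame, a dynamic branch scores start/end boundaries over time.
import torch
import torch.nn as nn

class StaticBranch(nn.Module):
    """Per-frame cross-modal attention: visual tokens attend to text."""
    def __init__(self, dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.box_head = nn.Linear(dim, 4)        # (cx, cy, w, h) per frame

    def forward(self, vis, txt):                 # vis: (B*T, N, D), txt: (B*T, L, D)
        fused, _ = self.attn(vis, txt, txt)
        return self.box_head(fused.mean(dim=1))

class DynamicBranch(nn.Module):
    """Temporal cross-modal attention: frame tokens -> start/end logits."""
    def __init__(self, dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.boundary_head = nn.Linear(dim, 2)   # start/end scores per frame

    def forward(self, frame_feats, txt):         # (B, T, D), (B, L, D)
        fused, _ = self.attn(frame_feats, txt, txt)
        return self.boundary_head(fused)         # (B, T, 2)

B, T, N, L, D = 2, 8, 16, 5, 256
boxes = StaticBranch()(torch.randn(B * T, N, D), torch.randn(B * T, L, D))
bounds = DynamicBranch()(torch.randn(B, T, D), torch.randn(B, L, D))
print(boxes.shape, bounds.shape)  # (16, 4) (2, 8, 2)
```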
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 15:48:58 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Lin",
"Zihang",
""
],
[
"Tan",
"Chaolei",
""
],
[
"Hu",
"Jian-Fang",
""
],
[
"Jin",
"Zhi",
""
],
[
"Ye",
"Tiancai",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] |
new_dataset
| 0.999479 |
2207.02774
|
Audrey Cui
|
Audrey Cui, Ali Jahanian, Agata Lapedriza, Antonio Torralba, Shahin
Mahdizadehaghdam, Rohit Kumar, David Bau
|
Local Relighting of Real Scenes
|
15 pages, 15 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the task of local relighting, which changes a photograph of a
scene by switching on and off the light sources that are visible within the
image. This new task differs from the traditional image relighting problem, as
it introduces the challenge of detecting light sources and inferring the
pattern of light that emanates from them. We propose an approach for local
relighting that trains a model without supervision of any novel image dataset
by using synthetically generated image pairs from another model. Concretely, we
collect paired training images from a stylespace-manipulated GAN; then we use
these images to train a conditional image-to-image model. To benchmark local
relighting, we introduce Lonoff, a collection of 306 precisely aligned images
taken in indoor spaces with different combinations of lights switched on. We
show that our method significantly outperforms baseline methods based on GAN
inversion. Finally, we demonstrate extensions of our method that control
different light sources separately. We invite the community to tackle this new
task of local relighting.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 16:08:20 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Cui",
"Audrey",
""
],
[
"Jahanian",
"Ali",
""
],
[
"Lapedriza",
"Agata",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Mahdizadehaghdam",
"Shahin",
""
],
[
"Kumar",
"Rohit",
""
],
[
"Bau",
"David",
""
]
] |
new_dataset
| 0.987273 |
2207.02805
|
Ivan Shugurov
|
Ivan Shugurov, Sergey Zakharov, Slobodan Ilic
|
DPODv2: Dense Correspondence-Based 6 DoF Pose Estimation
| null |
IEEE Transactions on Pattern Analysis and Machine Intelligence
2021
|
10.1109/TPAMI.2021.3118833
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a three-stage 6 DoF object detection method called DPODv2 (Dense
Pose Object Detector) that relies on dense correspondences. We combine a 2D
object detector with a dense correspondence estimation network and a multi-view
pose refinement method to estimate a full 6 DoF pose. Unlike other deep
learning methods that are typically restricted to monocular RGB images, we
propose a unified deep learning network allowing different imaging modalities
to be used (RGB or Depth). Moreover, we propose a novel pose refinement method,
that is based on differentiable rendering. The main concept is to compare
predicted and rendered correspondences in multiple views to obtain a pose which
is consistent with predicted correspondences in all views. Our proposed method
is evaluated rigorously on different data modalities and types of training data
in a controlled setup. The main conclusion is that RGB excels in
correspondence estimation, while depth contributes to the pose accuracy if good
3D-3D correspondences are available. Naturally, their combination achieves the
overall best performance. We perform an extensive evaluation and an ablation
study to analyze and validate the results on several challenging datasets.
DPODv2 achieves excellent results on all of them while still remaining fast and
scalable, independent of the data modality and the type of training data used.
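The multi-view refinement objective can be illustrated with a toy numpy sketch: score a candidate object pose by how well model points, projected into each view, agree with the network-predicted 2D correspondences. The camera parameters and data layout here are assumptions for the example.

```python
# Toy multi-view consistency residual for pose refinement (illustrative).
import numpy as np

def project(points, R, t, K):
    cam = points @ R.T + t               # model points -> camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]        # perspective divide -> pixels

def multiview_residual(pose, views):
    R, t = pose
    err = 0.0
    for K, (R_v, t_v), pred_uv, pts in views:   # per-view data
        # compose the object pose with the view's extrinsics
        uv = project(pts, R_v @ R, R_v @ t + t_v, K)
        err += np.mean(np.linalg.norm(uv - pred_uv, axis=1))
    return err / len(views)

K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])
pts = np.random.rand(50, 3)
pose = (np.eye(3), np.array([0., 0., 2.]))
view = (K, (np.eye(3), np.zeros(3)),
        project(pts, np.eye(3), np.array([0., 0., 2.]), K), pts)
print(multiview_residual(pose, [view]))  # 0.0 for a perfectly consistent pose
```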
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 16:48:56 GMT"
}
] | 2022-07-07T00:00:00 |
[
[
"Shugurov",
"Ivan",
""
],
[
"Zakharov",
"Sergey",
""
],
[
"Ilic",
"Slobodan",
""
]
] |
new_dataset
| 0.987965 |
1912.04466
|
Jiaming Ye
|
Jiaming Ye, Mingliang Ma, Yun Lin, Lei Ma, Yinxing Xue, Jianjun Zhao
|
Vulpedia: Detecting Vulnerable Ethereum Smart Contracts via Abstracted
Vulnerability Signatures
| null |
Journal of Systems and Software (2022): 111410
|
10.1016/j.jss.2022.111410
| null |
cs.SE cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have seen smart contracts become increasingly popular in
building trustworthy decentralized applications. Previous research has proposed
static and dynamic techniques to detect vulnerabilities in smart contracts.
These tools check contracts against several predefined rules. However, newly
emerging vulnerability types, together with programming idioms adopted to
prevent known vulnerabilities, lead such tools to produce large numbers of
false positives and false negatives. To address this, we propose Vulpedia, which
mines expressive vulnerability signatures from contracts. Vulpedia is based on
the relaxed assumption that the owner of a contract is not malicious.
Specifically, we extract structural program features from vulnerable and benign
contracts as vulnerability signatures, and construct a systematic detection
method based on detection rules composed of vulnerability signatures. Compared
with the rules defined by state-of-the-arts, our approach can extract more
expressive rules to achieve better completeness (i.e., detection recall) and
soundness (i.e., precision). We further evaluate Vulpedia with four baselines
(i.e., Slither, Securify, SmartCheck and Oyente) on the testing dataset
consisting of 17,770 contracts. The experimental results show that Vulpedia
achieves the best precision on 4 types of vulnerabilities and leading recall on
3 types, while also exhibiting high efficiency.
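A toy sketch of signature-based detection in this spirit: represent a contract by structural features and flag it when it matches a mined vulnerability signature but none of the benign (guarded) ones. The features and rules below are invented stand-ins for the paper's mined signatures.

```python
# Invented example: textual feature extraction stands in for the paper's
# structural program features; the rule table stands in for mined signatures.
def extract_features(contract_src):
    feats = set()
    if "call.value" in contract_src:
        feats.add("external-call")
    if "require(" in contract_src or "assert(" in contract_src:
        feats.add("guarded")
    if contract_src.find("call.value") < contract_src.find("balances["):
        feats.add("state-write-after-call")
    return feats

RULES = {
    "reentrancy": {
        "must": {"external-call", "state-write-after-call"},
        "must_not": {"guarded"},
    },
}

def detect(contract_src):
    feats = extract_features(contract_src)
    return [name for name, rule in RULES.items()
            if rule["must"] <= feats and not (rule["must_not"] & feats)]

vulnerable = "function withdraw(){ msg.sender.call.value(x)(); balances[msg.sender]=0; }"
print(detect(vulnerable))  # ['reentrancy']
```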
|
[
{
"version": "v1",
"created": "Tue, 10 Dec 2019 03:09:57 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2022 12:24:21 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Ye",
"Jiaming",
""
],
[
"Ma",
"Mingliang",
""
],
[
"Lin",
"Yun",
""
],
[
"Ma",
"Lei",
""
],
[
"Xue",
"Yinxing",
""
],
[
"Zhao",
"Jianjun",
""
]
] |
new_dataset
| 0.967506 |
2106.02839
|
Jonathan Klawitter
|
Jonathan Klawitter, Tamara Mchedlidze
|
Upward planar drawings with two slopes
| null |
Journal of Graph Algorithms and Applications, 26(1):171-198, 2022
|
10.7155/jgaa.00587
| null |
cs.DM cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
In an upward planar 2-slope drawing of a digraph, edges are drawn as
straight-line segments in the upward direction without crossings using only two
different slopes. We investigate whether a given upward planar digraph admits
such a drawing and, if so, how to construct it. For the fixed embedding
scenario, we give a simple characterisation and a linear-time construction by
adopting algorithms from orthogonal drawings. For the variable embedding
scenario, we describe a linear-time algorithm for single-source digraphs, a
quartic-time algorithm for series-parallel digraphs, and a fixed-parameter
tractable algorithm for general digraphs. For the latter two classes, we make
use of SPQR-trees and the notion of upward spirality. As an application of this
drawing style, we show how to draw an upward planar phylogenetic network with
two slopes such that all leaves lie on a horizontal line.
|
[
{
"version": "v1",
"created": "Sat, 5 Jun 2021 08:47:42 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Nov 2021 16:23:15 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Jul 2022 02:50:48 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Klawitter",
"Jonathan",
""
],
[
"Mchedlidze",
"Tamara",
""
]
] |
new_dataset
| 0.997731 |
2108.10071
|
Christof Ferreira Torres
|
Christof Ferreira Torres, Hugo Jonker, Radu State
|
Elysium: Context-Aware Bytecode-Level Patching to Automatically Heal
Vulnerable Smart Contracts
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fixing bugs is easiest by patching source code. However, source code is not
always available: only 0.3% of the ~49M smart contracts that are currently
deployed on Ethereum have their source code publicly available. Moreover, since
contracts may call functions from other contracts, security flaws in
closed-source contracts may affect open-source contracts as well. However,
current state-of-the-art approaches that operate on closed-source contracts
(i.e., EVM bytecode), such as EVMPatch and SmartShield, make use of purely
hard-coded fix templates based on known patching patterns. As a result, they
cannot dynamically adapt to the bytecode that is being patched, which severely
limits their flexibility and scalability. For instance, when patching integer
overflows using hard-coded templates, a different patch template needs to be
employed for each integer size, as the bounds to be checked differ. In
this paper, we propose Elysium, a scalable approach towards automatic smart
contract repair at the bytecode level. Elysium combines template-based and
semantic-based patching by inferring context information from bytecode. Elysium
is currently able to patch 7 different types of vulnerabilities in smart
contracts automatically and can easily be extended with new templates and new
bug-finding tools. We evaluate its effectiveness and correctness using 3
different datasets by replaying more than 500K transactions on patched
contracts. We find that Elysium outperforms existing tools by patching at least
30% more contracts correctly. Finally, we also compare the overhead of Elysium
in terms of deployment and transaction cost. In comparison to other tools, we
find that generally Elysium minimizes the runtime cost (i.e., transaction cost)
up to a factor of 1.7, for only a marginally higher deployment cost, where
deployment cost is a one-time cost as compared to the runtime cost.
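The integer-overflow example can be made concrete with a small hedged sketch: the bound that a checked-addition template must inject depends on the inferred integer width, which is exactly the context a fixed template lacks. EVM specifics are abstracted away here.

```python
# Illustrative checked addition for an inferred uintN width; not Elysium's
# actual bytecode rewriting.
def overflow_bound(bits):
    return (1 << bits) - 1           # max value of uintN

def patch_addition(a, b, bits):
    """Checked addition as a patched contract would perform it."""
    bound = overflow_bound(bits)
    result = (a + b) & bound         # wrap-around semantics of uintN
    if result < a:                   # classic unsigned-overflow check
        raise OverflowError(f"uint{bits} addition overflow")
    return result

print(patch_addition(10, 20, 8))     # 30
try:
    patch_addition(200, 100, 8)      # 300 wraps to 44 under uint8
except OverflowError as e:
    print(e)
```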
|
[
{
"version": "v1",
"created": "Mon, 23 Aug 2021 11:10:30 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Aug 2021 19:16:09 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Jul 2022 20:59:11 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Torres",
"Christof Ferreira",
""
],
[
"Jonker",
"Hugo",
""
],
[
"State",
"Radu",
""
]
] |
new_dataset
| 0.999861 |
2202.00159
|
Sugandha Sharma
|
Sugandha Sharma, Sarthak Chandra, Ila R. Fiete
|
Content Addressable Memory Without Catastrophic Forgetting by
Heteroassociation with a Fixed Scaffold
|
Last two authors contributed equally
| null | null | null |
cs.AI cs.IT cs.LG math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Content-addressable memory (CAM) networks, so-called because stored items can
be recalled by partial or corrupted versions of the items, exhibit near-perfect
recall of a small number of information-dense patterns below capacity and a
'memory cliff' beyond, such that inserting a single additional pattern results
in catastrophic loss of all stored patterns. We propose a novel CAM
architecture, Memory Scaffold with Heteroassociation (MESH), that factorizes
the problems of internal attractor dynamics and association with external
content to generate a CAM continuum without a memory cliff: Small numbers of
patterns are stored with complete information recovery matching standard CAMs,
while inserting more patterns still results in partial recall of every pattern,
with a graceful trade-off between pattern number and pattern richness.
Motivated by the architecture of the Entorhinal-Hippocampal memory circuit in
the brain, MESH is a tripartite architecture with pairwise interactions that
uses a predetermined set of internally stabilized states together with
heteroassociation between the internal states and arbitrary external patterns.
We show analytically and experimentally that for any number of stored patterns,
MESH nearly saturates the total information bound (given by the number of
synapses) for CAM networks, outperforming all existing CAM models.
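A compact numpy illustration of the scaffold-plus-heteroassociation principle (not the tripartite network itself): a fixed set of predetermined states acts as attractors, and least-squares maps heteroassociate arbitrary external patterns with them.

```python
# Toy memory scaffold: fixed sign-vector states index memories; pseudoinverse
# maps carry content in and out. Sizes are arbitrary for the example.
import numpy as np

rng = np.random.default_rng(0)
P, d_scaffold, d_pattern = 20, 64, 128
scaffold = np.sign(rng.standard_normal((P, d_scaffold)))   # predetermined states
patterns = rng.standard_normal((P, d_pattern))             # external content

W_in = np.linalg.pinv(patterns) @ scaffold    # pattern -> scaffold
W_out = np.linalg.pinv(scaffold) @ patterns   # scaffold -> pattern

def recall(cue):
    s = np.sign(cue @ W_in)                   # project cue into scaffold space
    # snap to the nearest predetermined scaffold state (attractor cleanup)
    s = scaffold[np.argmax(scaffold @ s)]
    return s @ W_out                          # read out associated content

noisy = patterns[3] + 0.3 * rng.standard_normal(d_pattern)
print(np.corrcoef(recall(noisy), patterns[3])[0, 1])  # close to 1
```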
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 00:24:23 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 16:58:09 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Jul 2022 21:02:50 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Sharma",
"Sugandha",
""
],
[
"Chandra",
"Sarthak",
""
],
[
"Fiete",
"Ila R.",
""
]
] |
new_dataset
| 0.993763 |
2202.03749
|
Valentin Martinoli
|
Valentin Martinoli, Yannick Teglia, Abdellah Bouagoun, R\'egis
Leveugle
|
CVA6's Data cache: Structure and Behavior
|
13 pages, 10 figures, 1 table
| null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since the disclosure of Spectre and Meltdown in 2018, a new category of attacks
has been identified and characterized by the scientific community. The
Foreshadow attack, the first to target Intel's secure enclave technology
(namely SGX), was developed shortly after. It opened the way to
microarchitectural attacks on Intel's architecture and led to their rapid
development, which continues today. While Spectre and Meltdown are often
considered the first microarchitectural attacks, one can argue that cache
attacks, as introduced by Osvik et al. in 2006, were the first type of
microarchitectural attack to be developed. Even though there are now many
variants, cache attacks remain the most prominent type of microarchitectural
attack; one example of a cache-based covert channel is Prime+Probe. Having
initially targeted the Intel architecture, microarchitectural attacks now
challenge a wider variety of CPUs, and recently CPUs running the RISC-V
Instruction Set Architecture have been targeted. One famous and widely used
RISC-V CPU is ETH Zurich's CVA6 (formerly Ariane) core, a 6-stage,
single-issue, in-order CPU. To the best of our knowledge, there is no existing
document presenting the CVA6 microarchitecture in detail, especially with
respect to the data cache. Such information is essential for fully
understanding any architectural or microarchitectural study, such as the
replication of the Prime+Probe attack on the CVA6 CPU proposed by Nils
Wistoff. This paper presents the implementation of the data cache in the
OpenHW Group's CVA6 CPU by focusing on its memory structure and explaining
through several examples what happens when a request for memory allocation
occurs.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 09:39:31 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 15:47:02 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Jul 2022 13:46:54 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Martinoli",
"Valentin",
""
],
[
"Teglia",
"Yannick",
""
],
[
"Bouagoun",
"Abdellah",
""
],
[
"Leveugle",
"Régis",
""
]
] |
new_dataset
| 0.999148 |
2203.10122
|
Ruike Renee Zhao
|
Qiji Ze, Shuai Wu, Jize Dai, Sophie Leanza, Gentaro Ikeda, Phillip C.
Yang, Gianluca Iaccarino, Ruike Renee Zhao
|
Spinning-enabled Wireless Amphibious Origami Millirobot
| null | null |
10.1038/s41467-022-30802-w
| null |
cs.RO physics.app-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Wireless millimeter-scale origami robots that can locomote in narrow spaces
and morph their shapes have recently been explored with great potential for
biomedical applications. Existing millimeter-scale origami devices usually
require separate geometrical components for locomotion and functions, which
increases the complexity of the robotic systems and their operation upon
limited locomotion modes. Additionally, none of them can achieve both on-ground
and in-water locomotion. Here we report a magnetically actuated amphibious
origami millirobot that integrates capabilities of spinning-enabled multimodal
locomotion, controlled delivery of liquid medicine, and cargo transportation
with wireless operation. This millirobot takes full advantage of the
geometrical features and folding/unfolding capability of Kresling origami, a
triangulated hollow cylinder, to fulfill multifunction: its geometrical
features are exploited for generating omnidirectional locomotion in various
working environments, including on unstructured ground, in liquids, and at
air-liquid interfaces through rolling, flipping, and spinning-induced
propulsion; the folding/unfolding is utilized as a pumping mechanism for
integrated multifunctionality such as controlled delivery of liquid medicine;
furthermore, the spinning motion provides a sucking mechanism for targeted
solid cargo transportation. This origami millirobot breaks the conventional way
of utilizing origami folding only for shape reconfiguration and integrates
multiple functions in one simple body. We anticipate the reported magnetic
amphibious origami millirobots have the potential to serve as minimally
invasive devices for biomedical diagnoses and treatments.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 18:49:39 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Ze",
"Qiji",
""
],
[
"Wu",
"Shuai",
""
],
[
"Dai",
"Jize",
""
],
[
"Leanza",
"Sophie",
""
],
[
"Ikeda",
"Gentaro",
""
],
[
"Yang",
"Phillip C.",
""
],
[
"Iaccarino",
"Gianluca",
""
],
[
"Zhao",
"Ruike Renee",
""
]
] |
new_dataset
| 0.999266 |
2203.15455
|
Binbin Zhang
|
Binbin Zhang, Di Wu, Zhendong Peng, Xingchen Song, Zhuoyuan Yao, Hang
Lv, Lei Xie, Chao Yang, Fuping Pan, Jianwei Niu
|
WeNet 2.0: More Productive End-to-End Speech Recognition Toolkit
| null | null | null | null |
cs.SD cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, we made available WeNet, a production-oriented end-to-end speech
recognition toolkit, which introduces a unified two-pass (U2) framework and a
built-in runtime to address the streaming and non-streaming decoding modes in a
single model. To further improve ASR performance and facilitate various
production requirements, in this paper, we present WeNet 2.0 with four
important updates. (1) We propose U2++, a unified two-pass framework with
bidirectional attention decoders, which includes the future contextual
information by a right-to-left attention decoder to improve the representative
ability of the shared encoder and the performance during the rescoring stage.
(2) We introduce an n-gram based language model and a WFST-based decoder into
WeNet 2.0, promoting the use of rich text data in production scenarios. (3) We
design a unified contextual biasing framework, which leverages user-specific
context (e.g., contact lists) to provide rapid adaptation ability for
production and improves ASR accuracy in both with-LM and without-LM scenarios.
(4) We design a unified IO to support large-scale data for effective model
training. In summary, the brand-new WeNet 2.0 achieves up to 10\% relative
recognition performance improvement over the original WeNet on various corpora
and makes available several important production-oriented features.
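The U2++ rescoring pass can be sketched generically: first-pass CTC hypotheses are rescored by left-to-right and right-to-left attention decoders. The weights and scoring stubs below are placeholders, not WeNet's API.

```python
# Schematic bidirectional rescoring; real systems obtain the attention
# log-probabilities from trained decoders.
def rescore(nbest, score_l2r, score_r2l, ctc_weight=0.5, reverse_weight=0.3):
    best, best_score = None, float("-inf")
    for hyp, ctc_score in nbest:
        att = (1.0 - reverse_weight) * score_l2r(hyp) \
              + reverse_weight * score_r2l(hyp[::-1])
        score = ctc_weight * ctc_score + (1.0 - ctc_weight) * att
        if score > best_score:
            best, best_score = hyp, score
    return best, best_score

# toy stand-ins for decoder log-likelihoods
score_l2r = lambda hyp: -0.1 * len(hyp)
score_r2l = lambda hyp: -0.12 * len(hyp)
nbest = [(("hello", "world"), -1.2), (("hello", "word"), -1.0)]
print(rescore(nbest, score_l2r, score_r2l))
```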
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 11:54:34 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2022 07:47:22 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Zhang",
"Binbin",
""
],
[
"Wu",
"Di",
""
],
[
"Peng",
"Zhendong",
""
],
[
"Song",
"Xingchen",
""
],
[
"Yao",
"Zhuoyuan",
""
],
[
"Lv",
"Hang",
""
],
[
"Xie",
"Lei",
""
],
[
"Yang",
"Chao",
""
],
[
"Pan",
"Fuping",
""
],
[
"Niu",
"Jianwei",
""
]
] |
new_dataset
| 0.985856 |
2207.01026
|
Fabio Bergonti
|
Fabio Bergonti, Luca Fiorio, Daniele Pucci
|
Torque and velocity controllers to perform jumps with a humanoid robot:
theory and implementation on the iCub robot
| null |
2019 International Conference on Robotics and Automation (ICRA)
|
10.1109/ICRA.2019.8794142
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Jumping can be an effective way of locomotion to overcome small terrain gaps
or obstacles. In this paper we propose two different approaches to perform
jumps with a humanoid robot. Specifically, starting from a pre-defined CoM
trajectory we develop the theory for a velocity controller and for a torque
controller based on an optimization technique for the evaluation of the joints
input. The controllers have been tested both in simulation and on the humanoid
robot iCub. In simulation the robot was able to jump using both controllers,
while the real system jumped with the velocity controller only. The results
highlight the importance of controlling the centroidal angular momentum and
they suggest that the joint performances, namely maximum power, of the legs and
torso joints, and the low level control performances are fundamental to achieve
acceptable results.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 12:50:04 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2022 08:04:33 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Bergonti",
"Fabio",
""
],
[
"Fiorio",
"Luca",
""
],
[
"Pucci",
"Daniele",
""
]
] |
new_dataset
| 0.999297 |
2207.01487
|
Mla{\dj}an Jovanovi\'c Dr
|
Slavisa Aleksic, Michael Atanasov, Jean Calleja Agius, Kenneth
Camilleri, Anto Cartolovni, Pau Climent-Peerez, Sara Colantonio, Stefania
Cristina, Vladimir Despotovic, Hazim Kemal Ekenel, Ekrem Erakin, Francisco
Florez-Revuelta, Danila Germanese, Nicole Grech, Steinunn Gr\'oa
Sigur{\dh}ard\'ottir, Murat Emirzeoglu, Ivo Iliev, Mladjan Jovanovic, Martin
Kampel, William Kearns, Andrzej Klimczuk, Lambros Lambrinos, Jennifer
Lumetzberger, Wiktor Mucha, Sophie Noiret, Zada Pajalic, Rodrigo Rodriguez
Peerez, Galidiya Petrova, Sintija Petrovica, Peter Pocta, Angelica Poli, Mara
Pudane, Susanna Spinsante, Albert Ali Salah, Maria Jose Santofimia, Anna
Sigridur Islind, Lacramioara Stoicu-Tivadar, Hilda Tellioglu and Andrej Zgank
|
State of the Art of Audio- and Video-Based Solutions for AAL
| null | null | null | null |
cs.CY cs.AI cs.HC cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The report illustrates the state of the art of the most successful AAL
applications and functions based on audio and video data, namely (i)
lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii)
emotional state recognition, (iv) food intake monitoring, activity and
behaviour recognition, (v) activity and personal assistance, (vi) gesture
recognition, (vii) fall detection and prevention, (viii) mobility assessment
and frailty recognition, and (ix) cognitive and motor rehabilitation. For these
application scenarios, the report illustrates the state of play in terms of
scientific advances, available products and research projects. The open
challenges are also highlighted.
|
[
{
"version": "v1",
"created": "Sun, 26 Jun 2022 14:27:33 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2022 05:03:04 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Aleksic",
"Slavisa",
""
],
[
"Atanasov",
"Michael",
""
],
[
"Agius",
"Jean Calleja",
""
],
[
"Camilleri",
"Kenneth",
""
],
[
"Cartolovni",
"Anto",
""
],
[
"Climent-Peerez",
"Pau",
""
],
[
"Colantonio",
"Sara",
""
],
[
"Cristina",
"Stefania",
""
],
[
"Despotovic",
"Vladimir",
""
],
[
"Ekenel",
"Hazim Kemal",
""
],
[
"Erakin",
"Ekrem",
""
],
[
"Florez-Revuelta",
"Francisco",
""
],
[
"Germanese",
"Danila",
""
],
[
"Grech",
"Nicole",
""
],
[
"Sigurðardóttir",
"Steinunn Gróa",
""
],
[
"Emirzeoglu",
"Murat",
""
],
[
"Iliev",
"Ivo",
""
],
[
"Jovanovic",
"Mladjan",
""
],
[
"Kampel",
"Martin",
""
],
[
"Kearns",
"William",
""
],
[
"Klimczuk",
"Andrzej",
""
],
[
"Lambrinos",
"Lambros",
""
],
[
"Lumetzberger",
"Jennifer",
""
],
[
"Mucha",
"Wiktor",
""
],
[
"Noiret",
"Sophie",
""
],
[
"Pajalic",
"Zada",
""
],
[
"Peerez",
"Rodrigo Rodriguez",
""
],
[
"Petrova",
"Galidiya",
""
],
[
"Petrovica",
"Sintija",
""
],
[
"Pocta",
"Peter",
""
],
[
"Poli",
"Angelica",
""
],
[
"Pudane",
"Mara",
""
],
[
"Spinsante",
"Susanna",
""
],
[
"Salah",
"Albert Ali",
""
],
[
"Santofimia",
"Maria Jose",
""
],
[
"Islind",
"Anna Sigridur",
""
],
[
"Stoicu-Tivadar",
"Lacramioara",
""
],
[
"Tellioglu",
"Hilda",
""
],
[
"Zgank",
"Andrej",
""
]
] |
new_dataset
| 0.991592 |
2207.01708
|
Huijuan Xu
|
Zhekun Luo, Shalini Ghosh, Devin Guillory, Keizo Kato, Trevor Darrell,
Huijuan Xu
|
Disentangled Action Recognition with Knowledge Bases
|
NAACL 2022
| null | null | null |
cs.CV cs.AI cs.CL cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Action in video usually involves the interaction of human with objects.
Action labels are typically composed of various combinations of verbs and
nouns, but we may not have training data for all possible combinations. In this
paper, we aim to improve the generalization ability of the compositional action
recognition model to novel verbs or novel nouns that are unseen during training
time, by leveraging the power of knowledge graphs. Previous work utilizes
verb-noun compositional action nodes in the knowledge graph, making it
inefficient to scale since the number of compositional action nodes grows
quadratically with respect to the number of verbs and nouns. To address this
issue, we propose our approach: Disentangled Action Recognition with
Knowledge-bases (DARK), which leverages the inherent compositionality of
actions. DARK trains a factorized model by first extracting disentangled
feature representations for verbs and nouns, and then predicting classification
weights using relations in external knowledge graphs. The type constraint
between verb and noun is extracted from external knowledge bases and finally
applied when composing actions. DARK has better scalability in the number of
objects and verbs, and achieves state-of-the-art performance on the Charades
dataset. We further propose a new benchmark split based on the Epic-kitchen
dataset which is an order of magnitude bigger in the numbers of classes and
samples, and benchmark various models on this benchmark.
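The factorized scoring idea is easy to sketch: compose separate verb and noun probabilities into action scores and mask them with a verb-noun compatibility matrix standing in for the knowledge base. Everything below is a toy example.

```python
# Toy factorized action scoring with a type-constraint mask.
import numpy as np

verbs = ["cut", "open"]
nouns = ["onion", "fridge"]
p_verb = np.array([0.7, 0.3])          # softmax outputs of the verb head
p_noun = np.array([0.6, 0.4])          # softmax outputs of the noun head

# type constraint from an external knowledge base: 1 = plausible pair
compat = np.array([[1, 0],             # cut-onion ok, cut-fridge not
                   [0, 1]])            # open-fridge ok, open-onion not

scores = np.outer(p_verb, p_noun) * compat
v, n = np.unravel_index(scores.argmax(), scores.shape)
print(f"{verbs[v]} {nouns[n]}", scores[v, n])   # cut onion 0.42
```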
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 20:19:13 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Luo",
"Zhekun",
""
],
[
"Ghosh",
"Shalini",
""
],
[
"Guillory",
"Devin",
""
],
[
"Kato",
"Keizo",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Xu",
"Huijuan",
""
]
] |
new_dataset
| 0.996467 |
2207.01755
|
Han Sun
|
Yuhan Lin, Han Sun, Ningzhong Liu, Yetong Bian, Jun Cen, Huiyu Zhou
|
Attention Guided Network for Salient Object Detection in Optical Remote
Sensing Images
|
accepted by ICANN2022, The code is available at
https://github.com/NuaaYH/AGNet
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the extreme complexity of scale and shape as well as the uncertainty
of the predicted location, salient object detection in optical remote sensing
images (RSI-SOD) is a very difficult task. The existing SOD methods can satisfy
the detection performance for natural scene images, but they are not well
adapted to RSI-SOD due to the above-mentioned image characteristics in remote
sensing images. In this paper, we propose a novel Attention Guided Network
(AGNet) for SOD in optical RSIs, including position enhancement stage and
detail refinement stage. Specifically, the position enhancement stage consists
of a semantic attention module and a contextual attention module to accurately
describe the approximate location of salient objects. The detail refinement
stage uses the proposed self-refinement module to progressively refine the
predicted results under the guidance of attention and reverse attention. In
addition, the hybrid loss is applied to supervise the training of the network,
which can improve the performance of the model from three perspectives of
pixel, region and statistics. Extensive experiments on two popular benchmarks
demonstrate that AGNet achieves competitive performance compared to other
state-of-the-art methods. The code will be available at
https://github.com/NuaaYH/AGNet.
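As a hedged sketch of a three-perspective hybrid loss (the exact terms in AGNet may differ), one can combine binary cross-entropy at the pixel level, a soft IoU term at the region level, and a global-statistics term:

```python
# Illustrative hybrid loss; the statistical term is a simple stand-in.
import torch
import torch.nn.functional as F

def hybrid_loss(pred, target, eps=1e-6):
    pixel = F.binary_cross_entropy(pred, target)
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = (pred + target - pred * target).sum(dim=(1, 2, 3))
    region = 1.0 - ((inter + eps) / (union + eps)).mean()
    stats = (pred.mean(dim=(1, 2, 3)) - target.mean(dim=(1, 2, 3))).abs().mean()
    return pixel + region + stats

pred = torch.rand(2, 1, 32, 32)
target = (torch.rand(2, 1, 32, 32) > 0.5).float()
print(hybrid_loss(pred, target))
```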
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 01:01:03 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Lin",
"Yuhan",
""
],
[
"Sun",
"Han",
""
],
[
"Liu",
"Ningzhong",
""
],
[
"Bian",
"Yetong",
""
],
[
"Cen",
"Jun",
""
],
[
"Zhou",
"Huiyu",
""
]
] |
new_dataset
| 0.995716 |
2207.01760
|
Gyunpyo Lee
|
Gyunpyo Lee, Taesu Kim, Hyeon-Jeong Suk
|
GP22: A Car Styling Dataset for Automotive Designers
|
5th CVFAD workshop, CVPR2022
|
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops, 2022, pp. 2268-2272
| null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
An automated design data archiving could reduce the time that keeps designers
from working creatively and effectively. Though many datasets for classifying,
detecting, and instance segmenting car exteriors exist, these large datasets
are not relevant to design practice, as their primary purpose lies in autonomous
driving or vehicle verification. Therefore, we release GP22, composed of car
styling features defined by automotive designers. The dataset contains 1480 car
side profile images from 37 brands and ten car segments. It also contains
annotations of design features that follow the taxonomy of the car exterior
design features defined in the eye of the automotive designer. We trained the
baseline model using YOLO v5 as the design feature detection model with the
dataset. The presented model resulted in an mAP score of 0.995 and a recall of
0.984. Furthermore, exploration of the model performance on sketches and
rendering images of the car side profile implies the scalability of the dataset
for design purposes.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 01:39:34 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Lee",
"Gyunpyo",
""
],
[
"Kim",
"Taesu",
""
],
[
"Suk",
"Hyeon-Jeong",
""
]
] |
new_dataset
| 0.999824 |
2207.01769
|
Osman Tursun
|
Osman Tursun, Simon Denman, Sridha Sridharan and Clinton Fookes
|
SESS: Saliency Enhancing with Scaling and Sliding
|
This paper will be presented at ECCV2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-quality saliency maps are essential in several machine learning
application areas including explainable AI and weakly supervised object
detection and segmentation. Many techniques have been developed to generate
better saliency using neural networks. However, they are often limited to
specific saliency visualisation methods or saliency issues. We propose a novel
saliency enhancing approach called SESS (Saliency Enhancing with Scaling and
Sliding). It is a method and model agnostic extension to existing saliency map
generation methods. With SESS, existing saliency approaches become robust to
scale variance, multiple occurrences of target objects, presence of distractors
and generate less noisy and more discriminative saliency maps. SESS improves
saliency by fusing saliency maps extracted from multiple patches at different
scales from different areas, and combines these individual maps using a novel
fusion scheme that incorporates channel-wise weights and spatial weighted
average. To improve efficiency, we introduce a pre-filtering step that can
exclude uninformative saliency maps to improve efficiency while still enhancing
overall results. We evaluate SESS on object recognition and detection
benchmarks where it achieves significant improvement. The code is released
publicly to enable researchers to verify its performance and to support further development.
Code is available at: https://github.com/neouyghur/SESS
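Since SESS is method-agnostic, its core loop is easy to sketch: run any saliency method on overlapping patches at several scales, paste the patch maps back, and average the overlaps. The fusion below is a plain average rather than the paper's weighted scheme.

```python
# Illustrative scaling-and-sliding saliency fusion; saliency_fn is a stub
# for any existing saliency method.
import numpy as np

def sess_fuse(image, saliency_fn, scales=(1.0, 0.5), stride_frac=0.5):
    H, W = image.shape[:2]
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for s in scales:
        ph, pw = max(1, int(H * s)), max(1, int(W * s))
        sh, sw = max(1, int(ph * stride_frac)), max(1, int(pw * stride_frac))
        for y in range(0, H - ph + 1, sh):
            for x in range(0, W - pw + 1, sw):
                acc[y:y+ph, x:x+pw] += saliency_fn(image[y:y+ph, x:x+pw])
                cnt[y:y+ph, x:x+pw] += 1
    return acc / np.maximum(cnt, 1)

toy_saliency = lambda patch: patch / (patch.max() + 1e-6)  # stub method
img = np.random.rand(64, 64)
print(sess_fuse(img, toy_saliency).shape)  # (64, 64)
```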
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 02:16:23 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Tursun",
"Osman",
""
],
[
"Denman",
"Simon",
""
],
[
"Sridharan",
"Sridha",
""
],
[
"Fookes",
"Clinton",
""
]
] |
new_dataset
| 0.990984 |
2207.01834
|
Yiqiu Wang
|
Yiqiu Wang, Rahul Yesantharao, Shangdi Yu, Laxman Dhulipala, Yan Gu,
Julian Shun
|
ParGeo: A Library for Parallel Computational Geometry
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents ParGeo, a multicore library for computational geometry.
ParGeo contains modules for fundamental tasks including $k$d-tree based spatial
search, spatial graph generation, and algorithms in computational geometry.
We focus on three new algorithmic contributions provided in the library.
First, we present a new parallel convex hull algorithm based on a reservation
technique to enable parallel modifications to the hull. We also provide the
first parallel implementations of the randomized incremental convex hull
algorithm as well as a divide-and-conquer convex hull algorithm in
$\mathbb{R}^3$. Second, for the smallest enclosing ball problem, we propose a
new sampling-based algorithm to quickly reduce the size of the data set. We
also provide the first parallel implementation of Welzl's classic algorithm for
smallest enclosing ball. Third, we present the BDL-tree, a parallel
batch-dynamic $k$d-tree that allows for efficient parallel updates and $k$-NN
queries over dynamically changing point sets. BDL-trees consist of a
log-structured set of $k$d-trees which can be used to efficiently insert,
delete, and query batches of points in parallel.
On 36 cores with two-way hyper-threading, our fastest convex hull algorithm
achieves up to 44.7x self-relative parallel speedup and up to 559x speedup
against the best existing sequential implementation. Our smallest enclosing
ball algorithm using our sampling-based algorithm achieves up to 27.1x
self-relative parallel speedup and up to 178x speedup against the best existing
sequential implementation. Our implementation of the BDL-tree achieves
self-relative parallel speedup of up to 46.1x. Across all of the algorithms in
ParGeo, we achieve self-relative parallel speedup of 8.1--46.61x.
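The BDL-tree's log-structured principle can be shown with a sequential Python toy (ParGeo itself is a parallel C++ library): keep static trees at power-of-two sizes, cascade-merge on insertion, and probe every level at query time. scipy's cKDTree stands in for the static structure.

```python
# Sequential toy of a log-structured (LSM-style) dynamic kd-tree; trees are
# rebuilt per query here purely for brevity.
import numpy as np
from scipy.spatial import cKDTree

class LogStructuredKDTree:
    def __init__(self):
        self.levels = {}                       # level i -> point batch

    def insert(self, pts):
        i = 0
        while i in self.levels:                # cascade merges upward
            pts = np.vstack([pts, self.levels.pop(i)])
            i += 1
        self.levels[i] = pts

    def knn(self, q, k=1):
        best = []
        for pts in self.levels.values():
            d, idx = cKDTree(pts).query(q, k=min(k, len(pts)))
            best.extend(zip(np.atleast_1d(d), np.atleast_1d(idx)))
        return sorted(best)[:k]                # merge candidates across levels

t = LogStructuredKDTree()
for _ in range(4):
    t.insert(np.random.rand(8, 2))
print(t.knn(np.array([0.5, 0.5]), k=3))
```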
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 06:34:12 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Wang",
"Yiqiu",
""
],
[
"Yesantharao",
"Rahul",
""
],
[
"Yu",
"Shangdi",
""
],
[
"Dhulipala",
"Laxman",
""
],
[
"Gu",
"Yan",
""
],
[
"Shun",
"Julian",
""
]
] |
new_dataset
| 0.994797 |
2207.01837
|
Jingyi Guo
|
Jingyi Guo, Min Zheng, Yajin Zhou, Haoyu Wang, Lei Wu, Xiapu Luo, Kui
Ren
|
iLibScope: Reliable Third-Party Library Detection for iOS Mobile Apps
|
11 pages, 7 figures
| null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vetting security impacts introduced by third-party libraries in iOS apps
requires a reliable library detection technique. Especially when a new
vulnerability (or a privacy-invasive behavior) was discovered in a third-party
library, there is a practical need to precisely identify the existence of
libraries and their versions for iOS apps. However, few studies have been
proposed to tackle this problem, and they all suffer from the code duplication
problem in different libraries. In this paper, we focus on third-party library
detection in iOS apps. Given an app, we aim to identify the integrated
libraries and pinpoint their versions (or the version range). To this end, we
first conduct an in-depth study on iOS third-party libraries to demystify the
code duplication challenge. By doing so, we have two key observations: 1) even
though two libraries can share classes, the shared classes cannot be integrated
into an app simultaneously without causing a class name conflict; and 2) code
duplication between multiple versions of two libraries can vary. Based on these
findings, we propose a novel profile-based similarity comparison approach to
perform the detection. Specifically, we build a library database consisting of
original library binaries with distinct versions. After extracting profiles for
each library version and the target app, we conduct a similarity comparison to
find the best matches. We implemented this approach in iLibScope. We built a
benchmark consisting of 5,807 apps with 10,495 library integrations and applied
our tool to it. Our evaluation shows that iLibScope achieves a recall exceeding
99% and a precision exceeding 97% for library detection. We also applied
iLibScope to detect the presence of well-known vulnerable third-party libraries
in real-world iOS mobile apps to show the promising usage of our tool. It
successfully identified 405 vulnerable library usages from 4,249 apps.
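A simplified sketch of profile-based matching: represent each library version and the app by sets of class signatures and pick the version whose profile the app covers best. The profiles and threshold below are invented for illustration.

```python
# Toy profile matching by coverage of a version's class-signature set.
def best_match(app_classes, library_profiles, threshold=0.8):
    results = []
    for (lib, version), profile in library_profiles.items():
        coverage = len(profile & app_classes) / len(profile)
        if coverage >= threshold:
            results.append((coverage, lib, version))
    return max(results, default=None)

profiles = {  # invented version profiles for illustration
    ("AFNetworking", "2.6.0"): {"AFHTTPSessionManager", "AFURLSessionManager",
                                "AFSecurityPolicy"},
    ("AFNetworking", "3.0.0"): {"AFHTTPSessionManager", "AFURLSessionManager",
                                "AFNetworkReachabilityManager"},
}
app = {"AFHTTPSessionManager", "AFURLSessionManager",
       "AFNetworkReachabilityManager", "MyAppDelegate"}
print(best_match(app, profiles))  # (1.0, 'AFNetworking', '3.0.0')
```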
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 06:41:39 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Guo",
"Jingyi",
""
],
[
"Zheng",
"Min",
""
],
[
"Zhou",
"Yajin",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Wu",
"Lei",
""
],
[
"Luo",
"Xiapu",
""
],
[
"Ren",
"Kui",
""
]
] |
new_dataset
| 0.999584 |
2207.01864
|
Zhonghua Sun
|
Zhonghua Sun and Xiaoqiang Wang and Cunsheng Ding
|
Several Families of Irreducible Constacyclic and Cyclic Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, several families of irreducible constacyclic codes over finite
fields and their duals are studied. The weight distributions of these
irreducible constacyclic codes and the parameters of their duals are settled.
Several families of irreducible constacyclic codes with a few weights and
several families of optimal constacyclic codes are constructed. As by-products,
a family of $[2n, (n-1)/2, d \geq 2(\sqrt{n}+1)]$ irreducible cyclic codes over
$\mathrm{GF}(q)$ and a family of $[(q-1)n, (n-1)/2, d \geq (q-1)(\sqrt{n}+1)]$
irreducible cyclic codes over $\mathrm{GF}(q)$ are presented, where $n$ is a prime such
that $\mathrm{ord}_n(q)=(n-1)/2$. The results in this paper complement earlier works on
irreducible constacyclic and cyclic codes over finite fields.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 07:57:36 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Sun",
"Zhonghua",
""
],
[
"Wang",
"Xiaoqiang",
""
],
[
"Ding",
"Cunsheng",
""
]
] |
new_dataset
| 0.998304 |
2207.01877
|
Xiaoqiang Wang
|
Xiaoqiang Wang, Zhonghua Sun and Cunsheng Ding
|
Two families of negacyclic BCH codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Negacyclic BCH codes are a subclass of negacyclic codes and are the best
linear codes in many cases. However, there have been very few results on
negacyclic BCH codes. Let $q$ be an odd prime power and $m$ be a positive
integer. The objective of this paper is to study negacyclic BCH codes with
length $\frac{q^m-1}{2}$ and $\frac{q^m+1}{2}$ over the finite field
$\mathrm{GF}(q)$ and analyse their parameters. The negacyclic BCH codes presented
in this paper have good parameters in general, and contain many optimal linear
codes. For certain $q$ and $m$, compared with cyclic codes with the same
dimension and length, the negacyclic BCH codes presented in this paper have a
larger minimum distance in some cases.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 08:18:46 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Wang",
"Xiaoqiang",
""
],
[
"Sun",
"Zhonghua",
""
],
[
"Ding",
"Cunsheng",
""
]
] |
new_dataset
| 0.997891 |
2207.01918
|
V\'esteinn Sn{\ae}bjarnarson
|
V\'esteinn Sn{\ae}bjarnarson and Hafsteinn Einarsson
|
Cross-Lingual QA as a Stepping Stone for Monolingual Open QA in
Icelandic
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
It can be challenging to build effective open question answering (open QA)
systems for languages other than English, mainly due to a lack of labeled data
for training. We present a data efficient method to bootstrap such a system for
languages other than English. Our approach requires only limited QA resources
in the given language, along with machine-translated data, and at least a
bilingual language model. To evaluate our approach, we build such a system for
the Icelandic language and evaluate performance over trivia style datasets. The
corpora used for training are English in origin but machine translated into
Icelandic. We train a bilingual Icelandic/English language model to embed
English context and Icelandic questions following methodology introduced with
DensePhrases (Lee et al., 2021). The resulting system is an open domain
cross-lingual QA system between Icelandic and English. Finally, the system is
adapted for Icelandic only open QA, demonstrating how it is possible to
efficiently create an open QA system with limited access to curated datasets in
the language of interest.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 09:52:34 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Snæbjarnarson",
"Vésteinn",
""
],
[
"Einarsson",
"Hafsteinn",
""
]
] |
new_dataset
| 0.99833 |
2207.01948
|
Tom\'a\v{s} Vojnar
|
Dominik Harmim, Vladim\'ir Marcin, Lucie Svobodov\'a, Tom\'a\v{s}
Vojnar
|
Static Deadlock Detection in Low-Level C Code
|
A pre-print submitted for publication in the post-proceedings of the
EUROCAST'22 conference
| null | null | null |
cs.SE cs.DC cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel scalable deadlock analyser L2D2 capable of handling C code
with low-level unstructured lock manipulation. L2D2 runs along the call tree of
a program, starting from its leaves, and analyses each function just once,
without any knowledge of the call context. L2D2 builds function summaries
recording information about locks that are assumed or known to be locked or
unlocked at the entry, inside, and at the exit of functions, together with lock
dependencies, and reports warnings about possible deadlocks when cycles in the
lock dependencies are detected. We implemented L2D2 as a plugin of the
Facebook/Meta Infer framework and report results of experiments on a large body
of C as well as C++ code illustrating the effectiveness and efficiency of L2D2.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 10:47:20 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Harmim",
"Dominik",
""
],
[
"Marcin",
"Vladimír",
""
],
[
"Svobodová",
"Lucie",
""
],
[
"Vojnar",
"Tomáš",
""
]
] |
new_dataset
| 0.998921 |
2207.02042
|
Kaibin Tian
|
Jingjie Shang and Kunchang Li and Kaibin Tian and Haisheng Su and
Yangguang Li
|
MVP: Robust Multi-View Practice for Driving Action Localization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distracted driving causes thousands of deaths per year, and how to apply
deep-learning methods to prevent these tragedies has become a crucial problem.
In Track3 of the 6th AI City Challenge, researchers provide a high-quality
video dataset with densely action annotations. Due to the small data scale and
unclear action boundary, the dataset presents a unique challenge to precisely
localize all the different actions and classify their categories. In this
paper, we make good use of the multi-view synchronization among videos, and
conduct robust Multi-View Practice (MVP) for driving action localization. To
avoid overfitting, we fine-tune SlowFast with Kinetics-700 pre-training as the
feature extractor. Then the features of different views are passed to
ActionFormer to generate candidate action proposals. For precisely localizing
all the actions, we design elaborate post-processing, including model voting,
threshold filtering and duplication removal. The results show that our MVP is
robust for driving action localization, which achieves 28.49% F1-score in the
Track3 test set.
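The post-processing stage can be sketched concisely: pool proposals across views and models (voting), drop low-confidence ones, and remove temporal duplicates with 1D non-maximum suppression. The thresholds are placeholders.

```python
# Illustrative temporal post-processing: threshold filtering plus 1D NMS.
def temporal_iou(a, b):
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def postprocess(proposals, score_thr=0.3, iou_thr=0.5):
    # proposals: (start, end, score) tuples pooled across views/models
    kept = []
    for p in sorted((p for p in proposals if p[2] >= score_thr),
                    key=lambda p: -p[2]):
        if all(temporal_iou(p, q) < iou_thr for q in kept):
            kept.append(p)              # suppress overlapping duplicates
    return kept

pooled = [(1.0, 4.0, 0.9), (1.2, 4.1, 0.8), (6.0, 8.0, 0.6), (2.0, 2.5, 0.2)]
print(postprocess(pooled))  # [(1.0, 4.0, 0.9), (6.0, 8.0, 0.6)]
```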
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 13:38:10 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Shang",
"Jingjie",
""
],
[
"Li",
"Kunchang",
""
],
[
"Tian",
"Kaibin",
""
],
[
"Su",
"Haisheng",
""
],
[
"Li",
"Yangguang",
""
]
] |
new_dataset
| 0.997764 |
2207.02107
|
Renu Solanki
|
Renu Solanki, Monisha Khanna, Shailly Anand, Anita Gulati, Prateek
Kumar, Munendra Kumar, Dushyant Kumar
|
EasyABM: a lightweight and easy to use heterogeneous agent-based
modelling tool written in Julia
|
18 pages, 7 figures
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Agent based modelling is a computational approach that aims to understand the
behaviour of complex systems through simplified interactions of programmable
objects in computer memory called agents. Agent based models (ABMs) are
predominantly used in fields of biology, ecology, social sciences and economics
where the systems of interest often consist of several interacting entities. In
this work, we present a Julia package EasyABM.jl for simplifying the process of
studying agent based models. EasyABM.jl provides an intuitive and easy to
understand functional approach for building and analysing agent based models.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 15:21:44 GMT"
}
] | 2022-07-06T00:00:00 |
[
[
"Solanki",
"Renu",
""
],
[
"Khanna",
"Monisha",
""
],
[
"Anand",
"Shailly",
""
],
[
"Gulati",
"Anita",
""
],
[
"Kumar",
"Prateek",
""
],
[
"Kumar",
"Munendra",
""
],
[
"Kumar",
"Dushyant",
""
]
] |
new_dataset
| 0.976832 |
2003.01446
|
Chongwei Liu
|
Chongwei Liu, Zhihui Wang, Shijie Wang, Tao Tang, Yulong Tao, Caifei
Yang, Haojie Li, Xing Liu, and Xin Fan
|
A New Dataset, Poisson GAN and AquaNet for Underwater Object Grabbing
|
14 pages, 10 figures
|
IEEE Transactions on Circuits and Systems for Video Technology
2021
|
10.1109/TCSVT.2021.3100059
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To boost the object grabbing capability of underwater robots for open-sea
farming, we propose a new dataset (UDD) consisting of three categories
(seacucumber, seaurchin, and scallop) with 2,227 images. To the best of our
knowledge, it is the first 4K HD dataset collected in a real open-sea farm. We
also propose a novel Poisson-blending Generative Adversarial Network (Poisson
GAN) and an efficient object detection network (AquaNet) to address two common
issues within related datasets: the class-imbalance problem and the problem of
massive numbers of small objects, respectively. Specifically, Poisson GAN
combines Poisson blending into its generator and employs a new loss called Dual
Restriction loss (DR loss), which supervises both implicit space features and
image-level features during training to generate more realistic images. By
utilizing Poisson GAN, objects of minority classes like seacucumber or scallop
can be added into an image naturally and annotated automatically, which
increases the loss contribution of minority classes when training detectors,
eliminating the class-imbalance problem; AquaNet is a high-efficiency detector
that addresses the problem of detecting masses of small objects in cloudy
underwater pictures. Within
it, we design two efficient components: a depth-wise-convolution-based
Multi-scale Contextual Features Fusion (MFF) block and a Multi-scale
Blursampling (MBP) module to reduce the parameters of the network to 1.3
million. Both two components could provide multi-scale features of small
objects under a short backbone configuration without any loss of accuracy. In
addition, we construct a large-scale augmented dataset (AUDD) and a
pre-training dataset via Poisson GAN from UDD. Extensive experiments show the
effectiveness of the proposed Poisson GAN, AquaNet, UDD, AUDD, and pre-training
dataset.
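The Poisson-blending step that motivates the augmentation has a standard gradient-domain form; a minimal illustration with OpenCV's seamlessClone (used here as a stand-in for the blending inside the generator) shows how a pasted minority-class object directly yields its new annotation.

```python
# Gradient-domain compositing demo with synthetic stand-in images.
import cv2
import numpy as np

scene = np.full((256, 256, 3), 80, np.uint8)        # stand-in seabed image
obj = np.zeros((64, 64, 3), np.uint8)
cv2.circle(obj, (32, 32), 24, (30, 90, 160), -1)    # stand-in scallop crop
mask = np.full((64, 64), 255, np.uint8)

center = (128, 128)                                 # chosen paste location
blended = cv2.seamlessClone(obj, scene, mask, center, cv2.NORMAL_CLONE)

# the automatic annotation implied by the paste
x, y, w, h = center[0] - 32, center[1] - 32, 64, 64
print(blended.shape, (x, y, w, h))
```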
|
[
{
"version": "v1",
"created": "Tue, 3 Mar 2020 10:57:52 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jul 2021 01:32:42 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Liu",
"Chongwei",
""
],
[
"Wang",
"Zhihui",
""
],
[
"Wang",
"Shijie",
""
],
[
"Tang",
"Tao",
""
],
[
"Tao",
"Yulong",
""
],
[
"Yang",
"Caifei",
""
],
[
"Li",
"Haojie",
""
],
[
"Liu",
"Xing",
""
],
[
"Fan",
"Xin",
""
]
] |
new_dataset
| 0.999411 |
2103.13302
|
Sourav De
|
Sourav De, Bo-Han Qiu, Wei-Xuan Bu, Md.Aftab Baig, Chung-Jun Su,
Yao-Jen Lee, and Darsen Lu
|
Neuromorphic Computing with Ferroelectric FinFETs in the Presence of
Temperature, Process Variation, Device Aging and Flicker Noise
| null | null | null | null |
cs.ET cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper reports a comprehensive study on the impacts of
temperature change, process variation, flicker noise and device aging on the
inference accuracy of pre-trained all-ferroelectric (FE) FinFET deep neural
networks. Multiple-level-cell (MLC) operation with a novel
adaptive-program-and-read algorithm with 100ns write pulse has been
experimentally demonstrated in 5 nm thick hafnium zirconium oxide (HZO)-based
FE-FinFET. With pre-trained neural network (NN) with 97.5% inference accuracy
on MNIST dataset as baseline, device to device variation is shown to have
negligible impact. Flicker noise characterization at various bias conditions
depicts that drain current fluctuation is less than 0.7% with virtually no
inference accuracy degradation. The conductance drift of a programmed cell, as
an aftermath of temperature change, was captured by a compact model over a wide
range of gate biases. Despite significant inference accuracy degradation at
233K for a NN trained at 300K, gate bias optimization for recovering the
accuracy is demonstrated. Endurance above 10$^8$ cycles and extrapolated
retention above 10 years are shown, which paves the way for edge device
artificial intelligence with FE-FinFETs.
|
[
{
"version": "v1",
"created": "Fri, 5 Mar 2021 03:24:20 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Jul 2022 07:18:04 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"De",
"Sourav",
""
],
[
"Qiu",
"Bo-Han",
""
],
[
"Bu",
"Wei-Xuan",
""
],
[
"Baig",
"Md. Aftab",
""
],
[
"Su",
"Chung-Jun",
""
],
[
"Lee",
"Yao-Jen",
""
],
[
"Lu",
"Darsen",
""
]
] |
new_dataset
| 0.991735 |
2106.05681
|
Chongwei Liu
|
Chongwei Liu, Haojie Li, Shuchang Wang, Ming Zhu, Dong Wang, Xin Fan
and Zhihui Wang
|
A Dataset And Benchmark Of Underwater Object Detection For Robot Picking
| null | null |
10.1109/ICMEW53276.2021.9455997
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Underwater object detection for robot picking has attracted a lot of
interest. However, it is still an unsolved problem due to several challenges.
We take steps towards making it more realistic by addressing the following
challenges. Firstly, the currently available datasets basically lack test set
annotations, forcing researchers to compare their methods with other SOTAs on
self-divided test sets (taken from the training set). Training the other
methods leads to an increase in workload, and since different researchers
divide the datasets differently, there is no unified benchmark to compare the
performance of different algorithms. Secondly, these datasets also have other
shortcomings, e.g., too many similar images or incomplete labels. To address
these challenges we introduce
a dataset, Detecting Underwater Objects (DUO), and a corresponding benchmark,
based on the collection and re-annotation of all relevant datasets. DUO
contains a collection of diverse underwater images with more rational
annotations. The corresponding benchmark provides indicators of both efficiency
and accuracy of SOTAs (under the MMDetection framework) for academic research
and industrial applications, where JETSON AGX XAVIER is used to assess detector
speed to simulate the robot-embedded environment.
|
[
{
"version": "v1",
"created": "Thu, 10 Jun 2021 11:56:19 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Liu",
"Chongwei",
""
],
[
"Li",
"Haojie",
""
],
[
"Wang",
"Shuchang",
""
],
[
"Zhu",
"Ming",
""
],
[
"Wang",
"Dong",
""
],
[
"Fan",
"Xin",
""
],
[
"Wang",
"Zhihui",
""
]
] |
new_dataset
| 0.99753 |
2109.09165
|
Mahdi Rezaei
|
Mahdi Rezaei, Mohsen Azarmi, Farzam Mohammad Pour Mir
|
Traffic-Net: 3D Traffic Monitoring Using a Single Camera
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Computer Vision has played a major role in Intelligent Transportation Systems
(ITS) and traffic surveillance. Along with the rapidly growing automated
vehicles and crowded cities, the automated and advanced traffic management
systems (ATMS) using video surveillance infrastructures have evolved through
the implementation of Deep Neural Networks. In this research, we provide a
practical platform for real-time traffic monitoring, including 3D
vehicle/pedestrian detection, speed detection, trajectory estimation,
congestion detection, as well as monitoring the interaction of vehicles and
pedestrians, all using a single CCTV traffic camera. We adapt a custom YOLOv5
deep neural network model for vehicle/pedestrian detection and an enhanced SORT
tracking algorithm. For the first time, a hybrid satellite-ground based inverse
perspective mapping (SG-IPM) method for camera auto-calibration is also
developed which leads to an accurate 3D object detection and visualisation. We
also develop a hierarchical traffic modelling solution based on short- and
long-term temporal video data stream to understand the traffic flow,
bottlenecks, and risky spots for vulnerable road users. Several experiments on
real-world scenarios and comparisons with state-of-the-art are conducted using
various traffic monitoring datasets, including MIO-TCD, UA-DETRAC and GRAM-RTM
collected from highways, intersections, and urban areas under different
lighting and weather conditions.
|
[
{
"version": "v1",
"created": "Sun, 19 Sep 2021 16:59:01 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Jul 2022 23:56:36 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Rezaei",
"Mahdi",
""
],
[
"Azarmi",
"Mohsen",
""
],
[
"Mir",
"Farzam Mohammad Pour",
""
]
] |
new_dataset
| 0.982046 |
2110.15182
|
Alexandros Keros
|
Alexandros Dimitrios Keros, Vidit Nanda, Kartic Subr
|
Dist2Cycle: A Simplicial Neural Network for Homology Localization
|
9 pages, 5 figures
|
Proceedings of the AAAI Conference on Artificial Intelligence. 36,
7 (Jun. 2022), 7133-7142
| null | null |
cs.LG math.AT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simplicial complexes can be viewed as high dimensional generalizations of
graphs that explicitly encode multi-way ordered relations between vertices at
different resolutions, all at once. This concept is central towards detection
of higher dimensional topological features of data, features to which graphs,
encoding only pairwise relationships, remain oblivious. While attempts have
been made to extend Graph Neural Networks (GNNs) to a simplicial complex
setting, the methods do not inherently exploit, or reason about, the underlying
topological structure of the network. We propose a graph convolutional model
for learning functions parametrized by the $k$-homological features of
simplicial complexes. By spectrally manipulating their combinatorial
$k$-dimensional Hodge Laplacians, the proposed model enables learning
topological features of the underlying simplicial complexes, specifically, the
distance of each $k$-simplex from the nearest "optimal" $k$-th homology
generator, effectively providing an alternative to homology localization.
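The combinatorial $k$-Hodge Laplacian referenced above has the standard closed form $L_k = B_k^\top B_k + B_{k+1} B_{k+1}^\top$, with $B_k$ the $k$-th boundary matrix; a small numpy example on a filled versus hollow triangle shows its kernel counting 1-dimensional homology.

```python
# Hodge 1-Laplacian of a triangle: its null space counts 1-cycles (holes).
import numpy as np

def hodge_laplacian(B_k, B_k1):
    return B_k.T @ B_k + B_k1 @ B_k1.T

# edges on vertices {0,1,2}: (0,1), (0,2), (1,2)
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])          # vertex-edge boundary matrix
B2_filled = np.array([[1], [-1], [1]]) # the 2-simplex filling the triangle
B2_hollow = np.zeros((3, 0))           # no 2-simplices: a 1-cycle remains

for B2 in (B2_filled, B2_hollow):
    L1 = hodge_laplacian(B1, B2)
    zero_eigs = np.sum(np.abs(np.linalg.eigvalsh(L1)) < 1e-9)
    print(zero_eigs)   # dim of 1st homology: 0 when filled, 1 when hollow
```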
|
[
{
"version": "v1",
"created": "Thu, 28 Oct 2021 14:59:41 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Jul 2022 21:46:42 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Keros",
"Alexandros Dimitrios",
""
],
[
"Nanda",
"Vidit",
""
],
[
"Subr",
"Kartic",
""
]
] |
new_dataset
| 0.998513 |
2111.09497
|
Wen Yang
|
Wen Yang, Zheng Gong, Baifu Huang and Xiaoping Hong
|
Lidar with Velocity: Correcting Moving Objects Point Cloud Distortion
from Oscillating Scanning Lidars by Fusion with Camera
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lidar point cloud distortion from moving objects is an important problem in
autonomous driving, and has recently become even more demanding with the
emergence of newer lidars, which feature back-and-forth scanning patterns. Accurately
estimating moving object velocity would not only provide a tracking capability
but also correct the point cloud distortion with more accurate description of
the moving object. Since lidar measures time-of-flight distance with a sparse
angular resolution, the measurement is precise radially but sparse angularly.
The camera, on the other hand, provides a dense angular
resolution. In this paper, Gaussian-based lidar and camera fusion is proposed
to estimate the full velocity and correct the lidar distortion. A probabilistic
Kalman-filter framework is provided to track the moving objects, estimate their
velocities and simultaneously correct the point clouds distortions. The
framework is evaluated on real road data and the fusion method outperforms the
traditional ICP-based and point-cloud only method. The complete working
framework is open-sourced
(https://github.com/ISEE-Technology/lidar-with-velocity) to accelerate the
adoption of the emerging lidars.
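A bare-bones constant-velocity Kalman filter of the kind such a fusion pipeline builds on is sketched below: lidar updates carry low radial noise, camera updates carry low lateral noise, and the filter tracks position and velocity jointly. All noise values are made up for the example.

```python
# Toy 2D constant-velocity Kalman filter fusing two position sensors with
# complementary noise profiles.
import numpy as np

dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])       # constant-velocity model
H = np.hstack([np.eye(2), np.zeros((2, 2))])        # both sensors observe position
Q = 0.01 * np.eye(4)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = np.zeros(4), np.eye(4)                       # [px, py, vx, vy]
R_lidar = np.diag([0.01, 0.25])                     # precise in range, coarse laterally
R_cam = np.diag([0.25, 0.01])                       # dense laterally, coarse in range
rng = np.random.default_rng(0)
for t in range(40):                                 # target moves at 1 m/s along x
    truth = np.array([dt * t, 2.0])
    x, P = predict(x, P)
    x, P = update(x, P, truth + 0.05 * rng.standard_normal(2), R_lidar)
    x, P = update(x, P, truth + 0.05 * rng.standard_normal(2), R_cam)
print(x[2:])                                        # velocity estimate, near [1.0, 0.0]
```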
|
[
{
"version": "v1",
"created": "Thu, 18 Nov 2021 03:13:08 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Feb 2022 17:32:39 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Jul 2022 10:24:18 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Yang",
"Wen",
""
],
[
"Gong",
"Zheng",
""
],
[
"Huang",
"Baifu",
""
],
[
"Hong",
"Xiaoping",
""
]
] |
new_dataset
| 0.999228 |
2111.12009
|
Viveck Cadambe
|
Hamidreza Zare, Viveck R. Cadambe, Bhuvan Urgaonkar, Chetan Sharma,
Praneet Soni, Nader Alfares, and Arif Merchant
|
LEGOStore: A Linearizable Geo-Distributed Store Combining Replication
and Erasure Coding
|
Extended version of paper to appear in PVLDB 2022
| null | null | null |
cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We design and implement LEGOStore, an erasure coding (EC) based linearizable
data store over geo-distributed public cloud data centers (DCs). For such a
data store, the confluence of the following factors opens up opportunities for
EC to be latency-competitive with replication: (a) the necessity of
communicating with remote DCs to tolerate entire DC failures and implement
linearizability; and (b) the emergence of DCs near most large population
centers. LEGOStore employs an optimization framework that, for a given object,
carefully chooses among replication and EC, as well as among various DC
placements to minimize overall costs. To handle workload dynamism, LEGOStore
employs a novel agile reconfiguration protocol. Our evaluation using a
LEGOStore prototype spanning 9 Google Cloud Platform DCs demonstrates the
efficacy of our ideas. We observe cost savings ranging from moderate (5-20\%)
to significant (60\%) over baselines representing the state of the art while
meeting tail latency SLOs. Our reconfiguration protocol is able to transition
key placements in 3 to 4 inter-DC RTTs ($<$ 1s in our experiments), allowing
for agile adaptation to dynamic conditions.
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 17:20:38 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Apr 2022 02:22:59 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Jul 2022 02:45:23 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Zare",
"Hamidreza",
""
],
[
"Cadambe",
"Viveck R.",
""
],
[
"Urgaonkar",
"Bhuvan",
""
],
[
"Sharma",
"Chetan",
""
],
[
"Soni",
"Praneet",
""
],
[
"Alfares",
"Nader",
""
],
[
"Merchant",
"Arif",
""
]
] |
new_dataset
| 0.986111 |
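The record above hinges on choosing, per object, between replication and erasure coding to minimize cost. The toy model below illustrates only the direction of that trade-off under a read-heavy versus write-heavy workload; the prices, traffic assumptions, and function names are invented placeholders, not LEGOStore's actual optimizer inputs.

```python
# Toy cost model contrasting full replication with (k, m) erasure coding
# for one object in a geo-distributed store. Prices and the simplifying
# traffic assumptions are invented, not LEGOStore's optimization framework.
def replication_cost(obj_gb, n_replicas, put_rate,
                     store_price=0.02, xfer_price=0.08):
    storage = obj_gb * n_replicas * store_price
    network = put_rate * obj_gb * n_replicas * xfer_price  # full copy per PUT
    return storage + network  # GETs assumed served by a nearby replica

def ec_cost(obj_gb, k, m, get_rate, put_rate,
            store_price=0.02, xfer_price=0.08):
    chunk = obj_gb / k
    storage = chunk * (k + m) * store_price
    # PUT ships one chunk to each of k+m DCs; GET gathers k remote chunks.
    network = (put_rate * chunk * (k + m) + get_rate * obj_gb) * xfer_price
    return storage + network

obj = 1.0
for gets, puts in [(1000, 10), (100, 100), (10, 1000)]:
    rep, ec = replication_cost(obj, 3, puts), ec_cost(obj, 2, 1, gets, puts)
    print(f"gets={gets:4d} puts={puts:4d}  replication={rep:7.2f}  EC={ec:7.2f}")
```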
2112.08088
|
Wenyu Liu
|
Wenyu Liu, Gaofeng Ren, Runsheng Yu, Shi Guo, Jianke Zhu, Lei Zhang
|
Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions
|
AAAI 2022, Preprint version with Appendix
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Though deep learning-based object detection methods have achieved promising
results on conventional datasets, it is still challenging to locate objects in
low-quality images captured in adverse weather conditions. The existing
methods either have difficulty balancing the tasks of image enhancement and
object detection, or often ignore latent information beneficial for detection.
To alleviate this problem, we propose a novel Image-Adaptive YOLO (IA-YOLO)
framework, in which each image can be adaptively enhanced for better detection
performance. Specifically, a differentiable image processing (DIP) module is
presented to account for the adverse weather conditions faced by the YOLO
detector, whose parameters are predicted by a small convolutional neural
network (CNN-PP). We learn CNN-PP and YOLOv3 jointly in an end-to-end fashion,
which ensures that CNN-PP can learn an appropriate DIP to enhance the image
for detection in a weakly supervised manner. Our proposed IA-YOLO approach can
adaptively process images in both normal and adverse weather conditions. The
experimental results are very encouraging, demonstrating the effectiveness of
our proposed IA-YOLO method in both foggy and low-light scenarios.
|
[
{
"version": "v1",
"created": "Wed, 15 Dec 2021 12:54:17 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Mar 2022 02:51:02 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Jul 2022 09:10:22 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Liu",
"Wenyu",
""
],
[
"Ren",
"Gaofeng",
""
],
[
"Yu",
"Runsheng",
""
],
[
"Guo",
"Shi",
""
],
[
"Zhu",
"Jianke",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.975869 |
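The record above couples a small parameter-predictor CNN (CNN-PP) with a differentiable image processing (DIP) module in front of the detector, trained end to end. A hedged PyTorch sketch of that coupling; the filter choices and layer sizes are illustrative, not the paper's exact DIP/CNN-PP configuration.

```python
import torch
import torch.nn as nn

# A tiny parameter-predictor CNN maps an input image to a few filter
# parameters, which a differentiable processing step applies before the
# image reaches the detector. Everything stays differentiable, so the
# predictor can be trained jointly with the downstream detection loss.
class ParamPredictor(nn.Module):
    def __init__(self, n_params=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_params),
        )

    def forward(self, x):
        return self.net(x)

def apply_filters(img, params):
    """Differentiable gamma plus per-image gain/bias adjustment."""
    gamma = torch.sigmoid(params[:, 0]).view(-1, 1, 1, 1) * 2 + 0.25
    gain = torch.sigmoid(params[:, 1]).view(-1, 1, 1, 1) * 2
    bias = torch.tanh(params[:, 2]).view(-1, 1, 1, 1) * 0.2
    return (img.clamp(min=1e-4) ** gamma) * gain + bias

imgs = torch.rand(2, 3, 64, 64)              # stand-in for foggy inputs
enhanced = apply_filters(imgs, ParamPredictor()(imgs))
print(enhanced.shape)                        # enhanced images feed the detector
```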
2112.14013
|
Chandrahas Tirumalasetty
|
Chandrahas Tirumalasetty, Chih Chieh Chou, Narasimha Reddy, Paul
Gratz, Ayman Abouelwafa
|
Reducing Minor Page Fault Overheads through Enhanced Page Walker
|
To appear in ACM Transactions on Architecture and Code Optimization
(TACO)
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Application virtual memory footprints are growing rapidly in all systems, from
servers down to smartphones. To address this growing demand, system
integrators are incorporating ever larger amounts of main memory, warranting a
rethinking of memory management. In current systems, applications produce page
fault exceptions whenever they access virtual memory regions that are not
backed by a physical page. As application memory footprints grow, they induce
more and more minor faults. Handling each minor fault can take a few thousand
CPU cycles and blocks the application until the OS finds a free physical
frame. These page faults can be detrimental to performance when they occur
frequently and are spread across the application's run-time. Specifically,
minor page faults induced by lazy allocation increasingly impact application
performance. Our evaluation of several workloads indicates an overhead due to
minor faults as high as 29% of execution time. In this paper, we propose to
mitigate this problem through a hardware/software co-design approach.
Specifically, we first propose to parallelize portions of kernel page
allocation so that they run ahead of fault time in a separate thread. We then
propose the Minor Fault Offload Engine (MFOE), a per-core hardware accelerator
for minor fault handling. MFOE is equipped with a pre-allocated page frame
table that it uses to service a page fault. On a page fault, MFOE picks a
pre-allocated page frame from this table, makes an entry for it in the TLB,
and updates the page table entry to satisfy the page fault. The pre-allocated
frame tables are periodically refreshed by a background kernel thread, which
also updates the kernel memory management data structures. We evaluate this
system in the gem5 architectural simulator with a modified Linux kernel. Our
results show that MFOE improves the average critical-path fault handling
latency by 33x.
|
[
{
"version": "v1",
"created": "Tue, 28 Dec 2021 06:43:44 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Jul 2022 02:07:10 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Tirumalasetty",
"Chandrahas",
""
],
[
"Chou",
"Chih Chieh",
""
],
[
"Reddy",
"Narasimha",
""
],
[
"Gratz",
"Paul",
""
],
[
"Abouelwafa",
"Ayman",
""
]
] |
new_dataset
| 0.99733 |
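The record above services minor faults from a per-core table of pre-allocated frames that a background thread keeps topped up. A toy Python model of that mechanism; the class, method names, and cycle counts are illustrative stand-ins, not the hardware design or measured numbers.

```python
from collections import deque

# Toy model: minor faults are serviced from a table of pre-allocated
# frames, refilled in the background, instead of entering the slow
# in-kernel allocation path on every fault. Cycle counts are made up.
FAST_FAULT_CYCLES = 100      # MFOE path: grab a frame, fill TLB/PTE
SLOW_FAULT_CYCLES = 3000     # traditional in-kernel fault handling

class MinorFaultOffloadEngine:
    def __init__(self, table_size=64):
        self.free_frames = deque(range(table_size))
        self.page_table = {}

    def refill(self, new_frames):
        """Background kernel thread tops the frame table back up."""
        self.free_frames.extend(new_frames)

    def handle_fault(self, vaddr):
        if self.free_frames:
            self.page_table[vaddr] = self.free_frames.popleft()
            return FAST_FAULT_CYCLES
        return SLOW_FAULT_CYCLES   # table empty: fall back to the kernel

mfoe = MinorFaultOffloadEngine(table_size=4)
cycles = sum(mfoe.handle_fault(v) for v in range(6))
print("cycles for 6 first-touch faults:", cycles)   # 4 fast + 2 slow
```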
2201.07375
|
Archisman Ghosh
|
Archisman Ghosh, J.M.B. Mera, Angshuman Karmakar, Debayan Das, Santosh
Ghosh, Ingrid Verbauwhede, Shreyas Sen
|
A 333.9uW 0.158mm$^2$ Saber Learning with Rounding based Post-Quantum
Crypto Accelerator
| null | null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The National Institute of Standards and Technology (NIST) is currently running
a multi-year standardization procedure to select quantum-safe, or
post-quantum, cryptographic schemes to be used in the future. Saber is the
only LWR-based algorithm among the Round 3 finalists. This work presents a
Saber ASIC that is 1.37x more power-efficient and requires 1.75x lower area
and 4x less memory than other state-of-the-art (SoA) PQC ASICs. The
energy-hungry multiplier block is 1.5x more energy-efficient than the SoA.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 01:24:39 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Jul 2022 20:12:02 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Ghosh",
"Archisman",
""
],
[
"Mera",
"J. M. B.",
""
],
[
"Karmakar",
"Angshuman",
""
],
[
"Das",
"Debayan",
""
],
[
"Ghosh",
"Santosh",
""
],
[
"Verbauwhede",
"Ingrid",
""
],
[
"Sen",
"Shreyas",
""
]
] |
new_dataset
| 0.99203 |
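The record above builds on Saber's Learning-with-Rounding (LWR) construction, in which deterministic rounding from mod $q = 2^{13}$ down to mod $p = 2^{10}$ replaces the explicit noise sampling of LWE. A coefficient-level Python sketch under those public Saber parameters; the schoolbook multiplier is a stand-in for the ASIC's optimized multiplier block.

```python
import numpy as np

# Saber-style LWR: coefficients of a*s are deterministically rounded
# from mod q = 2^13 down to mod p = 2^10, instead of adding sampled
# noise. The rounding constant h adds half an output step before the
# truncating shift, turning truncation into rounding.
EQ, EP, N = 13, 10, 256
q, p = 1 << EQ, 1 << EP
h = 1 << (EQ - EP - 1)

def poly_mul(a, s):
    """Schoolbook multiplication in Z_q[x] / (x^N + 1)."""
    res = np.zeros(N, dtype=np.int64)
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                res[k] += a[i] * s[j]
            else:
                res[k - N] -= a[i] * s[j]   # reduce via x^N = -1
    return res % q

rng = np.random.default_rng(0)
a = rng.integers(0, q, N)                   # public polynomial
s = rng.integers(-4, 5, N)                  # small secret
b = ((poly_mul(a, s) + h) % q) >> (EQ - EP) # rounded public value, mod p
print(b[:8], "... coefficients in [0,", p, ")")
```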
2202.04781
|
Jung Im Choi
|
Jung Im Choi, Qing Tian
|
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving
Scenarios
|
Accepted by 2022 IEEE Intelligent Vehicles Symposium (IV 2022)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual detection is a key task in autonomous driving, and it serves as a
crucial foundation for self-driving planning and control. Deep neural networks
have achieved promising results in various visual tasks, but they are known to
be vulnerable to adversarial attacks. A comprehensive understanding of deep
visual detectors' vulnerability is required before people can improve their
robustness. However, only a few adversarial attack/defense works have focused
on object detection, and most of them employed only classification and/or
localization losses, ignoring the objectness aspect. In this paper, we identify
a serious objectness-related adversarial vulnerability in YOLO detectors and
present an effective attack strategy targeting the objectness aspect of visual
detection in autonomous vehicles. Furthermore, to address such vulnerability,
we propose a new objectness-aware adversarial training approach for visual
detection. Experiments show that the proposed attack targeting the objectness
aspect is 45.17% and 43.50% more effective than those generated from
classification and/or localization losses on the KITTI and COCO traffic
datasets, respectively. Also, the proposed adversarial defense approach can
improve the detectors' robustness against objectness-oriented attacks by up to
21% and 12% mAP on KITTI and COCO traffic, respectively.
|
[
{
"version": "v1",
"created": "Thu, 10 Feb 2022 00:47:36 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Jul 2022 13:29:54 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Choi",
"Jung Im",
""
],
[
"Tian",
"Qing",
""
]
] |
new_dataset
| 0.999753 |
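The record above targets the objectness output of YOLO detectors. A hedged sketch of what such an attack could look like: projected gradient descent that drives all anchors' objectness logits toward "no object" so detections vanish. The `model`, epsilon, and step count are illustrative stand-ins, not the paper's exact attack.

```python
import torch

# PGD-style objectness suppression. `model` is assumed to return
# per-anchor objectness logits of shape (B, num_anchors); a toy stand-in
# is used below in place of a real YOLO head.
def objectness_pgd(model, images, eps=8 / 255, alpha=2 / 255, steps=10):
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        obj_logits = model(adv)
        # Loss is minimized when every anchor predicts "no object".
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            obj_logits, torch.zeros_like(obj_logits))
        grad = torch.autograd.grad(loss, adv)[0]
        # Descend on the "no object" loss => suppress all detections.
        adv = adv.detach() - alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)  # L_inf projection
        adv = adv.clamp(0, 1)
    return adv

toy_model = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(3 * 32 * 32, 5))
x = torch.rand(2, 3, 32, 32)
print(objectness_pgd(toy_model, x).shape)
```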
2203.15135
|
Ge Zhu
|
Ge Zhu, Juan-Pablo Caceres, Justin Salamon
|
Filler Word Detection and Classification: A Dataset and Benchmark
|
To appear at Insterspeech 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Filler words such as `uh' or `um' are sounds or words people use to signal
they are pausing to think. Finding and removing filler words from recordings is
a common and tedious task in media editing. Automatically detecting and
classifying filler words could greatly aid in this task, but few studies have
been published on this problem to date. A key reason is the absence of a
dataset with annotated filler words for model training and evaluation. In this
work, we present a novel speech dataset, PodcastFillers, with 35K annotated
filler words and 50K annotations of other sounds that commonly occur in
podcasts such as breaths, laughter, and word repetitions. We propose a pipeline
that leverages VAD and ASR to detect filler candidates and a classifier to
distinguish between filler word types. We evaluate our proposed pipeline on
PodcastFillers, compare to several baselines, and present a detailed ablation
study. In particular, we evaluate the importance of using ASR and how it
compares to a transcription-free approach resembling keyword spotting. We show
that our pipeline obtains state-of-the-art results, and that leveraging ASR
strongly outperforms a keyword spotting approach. We make PodcastFillers
publicly available, in the hope that our work serves as a benchmark for future
research.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 22:53:54 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Jul 2022 00:34:13 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Zhu",
"Ge",
""
],
[
"Caceres",
"Juan-Pablo",
""
],
[
"Salamon",
"Justin",
""
]
] |
new_dataset
| 0.999511 |
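The record above describes a pipeline in which a VAD proposes voiced regions, an ASR aligns words, voiced gaps not covered by words become filler candidates, and a classifier labels each candidate. A skeleton of that control flow with stubbed components; all four stubs and their outputs are invented, not the paper's models.

```python
# Skeleton of the candidate-detection pipeline: VAD -> ASR alignment ->
# uncovered voiced gaps -> filler-type classifier. All components are stubs.
def vad(audio):                      # -> [(start, end)] voiced regions
    return [(0.0, 1.2), (1.5, 3.0)]

def asr_word_spans(audio):           # -> [(word, start, end)]
    return [("hello", 0.0, 0.4), ("world", 2.2, 2.6)]

def filler_candidates(audio, min_gap=0.2):
    """Voiced spans not covered by any ASR word become filler candidates."""
    words = asr_word_spans(audio)
    cands = []
    for vs, ve in vad(audio):
        covered = [(s, e) for _, s, e in words if s < ve and e > vs]
        cursor = vs
        for s, e in sorted(covered):
            if s - cursor >= min_gap:
                cands.append((cursor, s))
            cursor = max(cursor, e)
        if ve - cursor >= min_gap:
            cands.append((cursor, ve))
    return cands

def classify(segment):               # stub for the filler-type classifier
    return "uh"

audio = None                         # placeholder waveform
for seg in filler_candidates(audio):
    print(seg, "->", classify(seg))
```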
2205.04029
|
Jiatong Shi
|
Jiatong Shi, Shuai Guo, Tao Qian, Nan Huo, Tomoki Hayashi, Yuning Wu,
Frank Xu, Xuankai Chang, Huazhe Li, Peter Wu, Shinji Watanabe, Qin Jin
|
Muskits: an End-to-End Music Processing Toolkit for Singing Voice
Synthesis
|
Accepted by Interspeech
| null | null | null |
cs.SD cs.MM eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper introduces a new open-source platform named Muskits for end-to-end
music processing, which mainly focuses on end-to-end singing voice synthesis
(E2E-SVS). Muskits supports state-of-the-art SVS models, including RNN SVS,
transformer SVS, and XiaoiceSing. The design of Muskits follows the style of
widely-used speech processing toolkits, ESPnet and Kaldi, for data
preprocessing, training, and recipe pipelines. To the best of our knowledge,
this toolkit is the first platform that allows a fair and highly-reproducible
comparison between several published works in SVS. In addition, we also
demonstrate several advanced usages based on the toolkit functionalities,
including multilingual training and transfer learning. This paper describes the
major framework of Muskits, its functionalities, and experimental results in
single-singer, multi-singer, multilingual, and transfer learning scenarios. The
toolkit is publicly available at https://github.com/SJTMusicTeam/Muskits.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 04:25:47 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Jul 2022 15:30:27 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Shi",
"Jiatong",
""
],
[
"Guo",
"Shuai",
""
],
[
"Qian",
"Tao",
""
],
[
"Huo",
"Nan",
""
],
[
"Hayashi",
"Tomoki",
""
],
[
"Wu",
"Yuning",
""
],
[
"Xu",
"Frank",
""
],
[
"Chang",
"Xuankai",
""
],
[
"Li",
"Huazhe",
""
],
[
"Wu",
"Peter",
""
],
[
"Watanabe",
"Shinji",
""
],
[
"Jin",
"Qin",
""
]
] |
new_dataset
| 0.994946 |
2205.15575
|
Stefan Schweter
|
Stefan Schweter, Luisa M\"arz, Katharina Schmid and Erion \c{C}ano
|
hmBERT: Historical Multilingual Language Models for Named Entity
Recognition
|
Camera-ready HIPE-2022 Working Note Paper for CLEF 2022 (Conference
and Labs of the Evaluation Forum (CLEF 2022))
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compared to standard Named Entity Recognition (NER), identifying persons,
locations, and organizations in historical texts constitutes a big challenge.
To obtain machine-readable corpora, the historical text is usually scanned and
Optical Character Recognition (OCR) needs to be performed. As a result, the
historical corpora contain errors. Also, entities like location or organization
can change over time, which poses another challenge. Overall, historical texts
come with several peculiarities that differ greatly from modern texts and large
labeled corpora for training a neural tagger are hardly available for this
domain. In this work, we tackle NER for historical German, English, French,
Swedish, and Finnish by training large historical language models. We
circumvent the need for large amounts of labeled data by using unlabeled data
for pretraining a language model. We propose hmBERT, a historical multilingual
BERT-based language model, and release the model in several versions of
different sizes. Furthermore, we evaluate the capability of hmBERT by solving
downstream NER as part of this year's HIPE-2022 shared task and provide
detailed analysis and insights. For the Multilingual Classical Commentary
coarse-grained NER challenge, our tagger HISTeria outperforms the other teams'
models for two out of three languages.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2022 07:30:33 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Jul 2022 18:39:55 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Schweter",
"Stefan",
""
],
[
"März",
"Luisa",
""
],
[
"Schmid",
"Katharina",
""
],
[
"Çano",
"Erion",
""
]
] |
new_dataset
| 0.997064 |
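The record above releases pretrained historical multilingual BERT checkpoints for downstream NER. A hedged sketch of how such a checkpoint could be applied with the Hugging Face `transformers` API; the model identifier below is a placeholder, not the authors' released checkpoint name, and assumes an NER-fine-tuned variant exists.

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          pipeline)

# Placeholder identifier: substitute the hmBERT checkpoint the authors
# released (fine-tuned for token classification) before running.
MODEL_ID = "your-org/hmbert-finetuned-historic-ner"  # hypothetical id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(MODEL_ID)
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

# OCR noise ("Vv'ien" for "Wien") is the kind of input hmBERT targets.
print(ner("Der Kaiser reiste von Vv'ien nach Prag."))
```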
2206.04153
|
Yunyi Zhang
|
Yunyi Zhang, Fang Guo, Jiaming Shen, Jiawei Han
|
Unsupervised Key Event Detection from Massive Text Corpora
|
Accepted to KDD 2022 Research Track
| null |
10.1145/3534678.3539395
| null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Automated event detection from news corpora is a crucial task towards mining
fast-evolving structured knowledge. As real-world events have different
granularities, from the top-level themes to key events and then to event
mentions corresponding to concrete actions, there are generally two lines of
research: (1) theme detection identifies from a news corpus major themes (e.g.,
"2019 Hong Kong Protests" vs. "2020 U.S. Presidential Election") that have very
distinct semantics; and (2) action extraction extracts from one document
mention-level actions (e.g., "the police hit the left arm of the protester")
that are too fine-grained for comprehending the event. In this paper, we
propose a new task, key event detection at the intermediate level, aiming to
detect from a news corpus key events (e.g., "HK Airport Protest on Aug.
12-14"), each happening at a particular time/location and focusing on the same
topic. This task can bridge event understanding and structuring and is
inherently challenging because of the thematic and temporal closeness of key
events and the scarcity of labeled data due to the fast-evolving nature of news
articles. To address these challenges, we develop an unsupervised key event
detection framework, EvMine, that (1) extracts temporally frequent peak phrases
using a novel ttf-itf score, (2) merges peak phrases into event-indicative
feature sets by detecting communities from our designed peak phrase graph that
captures document co-occurrences, semantic similarities, and temporal closeness
signals, and (3) iteratively retrieves documents related to each key event by
training a classifier with automatically generated pseudo labels from the
event-indicative feature sets and refining the detected key events using the
retrieved documents. Extensive experiments and case studies show EvMine
outperforms all the baseline methods and its ablations on two real-world news
corpora.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 20:31:02 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Jul 2022 19:52:08 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Zhang",
"Yunyi",
""
],
[
"Guo",
"Fang",
""
],
[
"Shen",
"Jiaming",
""
],
[
"Han",
"Jiawei",
""
]
] |
new_dataset
| 0.998591 |
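The record above introduces a novel ttf-itf score for extracting temporally frequent peak phrases. The exact formula is defined in the paper; the snippet below only sketches the intuition, rewarding phrases frequent on one day but rare across the rest of the timeline, with an invented scoring function and toy corpus.

```python
import math

# Hedged sketch in the spirit of a "temporal tf x inverse time frequency"
# score: high when a phrase is frequent on `day` but appears on few days
# overall. Not EvMine's actual formula.
def ttf_itf(phrase, docs_by_day, day):
    tf = sum(doc.count(phrase) for doc in docs_by_day[day])
    days_with_phrase = sum(
        any(phrase in doc for doc in docs) for docs in docs_by_day.values())
    itf = math.log(len(docs_by_day) / (1 + days_with_phrase))
    return tf * itf

docs_by_day = {
    "08-12": ["airport protest airport", "police airport"],
    "08-13": ["airport sit-in"],
    "08-20": ["court hearing"],
    "09-01": ["school strike"],
}
print(round(ttf_itf("airport", docs_by_day, "08-12"), 3))
```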
2207.00585
|
Fenglong Ma
|
Sean A. Rendar and Fenglong Ma
|
Predicting Ulnar Collateral Ligament Injury in Rookie Major League
Baseball Pitchers
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In the growing world of machine learning and data analytics, scholars are
finding new and innovative ways to solve real-world problems. One solution
comes by way of an intersection between healthcare, sports statistics, and
data science. Within Major League Baseball (MLB), pitchers are regarded as the
most important roster position. They are often among the highest-paid players
and are crucial to a franchise's success, but they are also the most at risk
of suffering an injury that sidelines them for over a complete season. The
ulnar collateral ligament (UCL) is a small ligament in the elbow that controls
the strength and stability of a pitcher's throwing arm. Due to repetitive
strain, it is not uncommon for pitchers to tear it partially or completely
during their careers. Repairing this injury requires UCL reconstruction
surgery, known informally as Tommy John surgery. In this podium abstract, we
investigate whether machine learning techniques can predict UCL injury by
analyzing online pitcher data.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 22:09:47 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Rendar",
"Sean A.",
""
],
[
"Ma",
"Fenglong",
""
]
] |
new_dataset
| 0.986145 |
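The record above frames UCL injury prediction as supervised classification over pitcher statistics. A minimal scikit-learn sketch of that framing on synthetic data; the features, injury base rate, and model choice are placeholders, not the study's data or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for workload features a real study might pull
# from online pitcher data; the label is an invented ~15% base rate.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.normal(93, 3, n),      # average fastball velocity (mph)
    rng.normal(2800, 300, n),  # pitches thrown in rookie season
    rng.normal(0.35, 0.1, n),  # breaking-ball usage rate
])
y = (rng.random(n) < 0.15).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```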
2207.00648
|
Manolis Chiou
|
Manolis Chiou, Georgios-Theofanis Epsimos, Grigoris Nikolaou, Pantelis
Pappas, Giannis Petousakis, Stefan M\"uhl, Rustam Stolkin
|
Robot-Assisted Nuclear Disaster Response: Report and Insights from a
Field Exercise
|
Pre-print version of the accepted paper to appear in IEEE IROS 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reports on insights by robotics researchers that participated in a
5-day robot-assisted nuclear disaster response field exercise conducted by
Kerntechnische Hilfsdienst GmbH (KHG) in Karlsruhe, Germany. The German nuclear
industry established KHG to provide a robot-assisted emergency response
capability for nuclear accidents. We present a systematic description of the
equipment used; the robot operators' training program; the field exercise and
robot tasks; and the protocols followed during the exercise. Additionally, we
provide insights and suggestions for advancing disaster response robotics based
on these observations. Specifically, the main degradation in performance comes
from the cognitive and attentional demands on the operator. Furthermore,
robotic platforms and modules should aim to be robust and reliable in addition
to their ease of use. Last, as emergency response stakeholders are often
skeptical about using autonomous systems, we suggest adopting a variable
autonomy paradigm to integrate autonomous robotic capabilities with the
human-in-the-loop gradually. This middle ground between teleoperation and
autonomy can increase end-user acceptance while directly alleviating some of
the operator's robot control burden and maintaining the resilience of the
human-in-the-loop.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2022 19:46:43 GMT"
}
] | 2022-07-05T00:00:00 |
[
[
"Chiou",
"Manolis",
""
],
[
"Epsimos",
"Georgios-Theofanis",
""
],
[
"Nikolaou",
"Grigoris",
""
],
[
"Pappas",
"Pantelis",
""
],
[
"Petousakis",
"Giannis",
""
],
[
"Mühl",
"Stefan",
""
],
[
"Stolkin",
"Rustam",
""
]
] |
new_dataset
| 0.999563 |