id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string) | probability (float64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2112.01131
|
Faeze Ghorbanpour
|
Faeze Ghorbanpour, Maryam Ramezani, Mohammad A. Fazli and Hamid R.
Rabiee
|
FNR: A Similarity and Transformer-Based Approach to Detect Multi-Modal
Fake News in Social Media
|
10 pages, 11 figures, 4 tables and 20 references
| null | null | null |
cs.MM cs.CY cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The availability and interactive nature of social media have made them the
primary source of news around the globe. The popularity of social media tempts
criminals to pursue their immoral intentions by producing and disseminating
fake news using seductive text and misleading images. Therefore, verifying
social media news and spotting fakes is crucial. This work aims to analyze
multi-modal features from texts and images in social media for detecting fake
news. We propose a Fake News Revealer (FNR) method that utilizes transfer
learning to extract contextual and semantic features and contrastive loss to
determine the similarity between image and text. We applied FNR to two real
social media datasets. The results show that the proposed method achieves higher
accuracy in detecting fake news than previous works.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 11:12:09 GMT"
}
] | 2021-12-29T00:00:00 |
[
[
"Ghorbanpour",
"Faeze",
""
],
[
"Ramezani",
"Maryam",
""
],
[
"Fazli",
"Mohammad A.",
""
],
[
"Rabiee",
"Hamid R.",
""
]
] |
new_dataset
| 0.975875 |
2112.13467
|
Issar Arab
|
Issar Arab and Khaled Barakat
|
ToxTree: descriptor-based machine learning models for both hERG and
Nav1.5 cardiotoxicity liability predictions
|
International Conference on Nanoscience and Nanotechnology in Dubai
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Drug-mediated blockade of the voltage-gated potassium channel (hERG) and the
voltage-gated sodium channel (Nav1.5) can lead to severe cardiovascular
complications. This rising concern has been reflected in the drug development
arena, as the frequent emergence of cardiotoxicity from many approved drugs led
to either discontinuing their use or, in some cases, their withdrawal from the
market. Predicting potential hERG and Nav1.5 blockers at the outset of the drug
discovery process can resolve this problem and can, therefore, decrease the
time and high cost of developing safe drugs. One fast and cost-effective
approach is to use in silico predictive methods to weed out potential hERG and
Nav1.5 blockers at the early stages of drug development. Here, we introduce two
robust 2D descriptor-based QSAR predictive models for both hERG and Nav1.5
liability predictions. The machine learning models were trained for both
regression, predicting the potency value of a drug, and multiclass
classification at three different potency cut-offs (i.e. 1$\mu$M, 10$\mu$M, and
30$\mu$M), where ToxTree-hERG Classifier, a pipeline of Random Forest models,
was trained on a large curated dataset of 8380 unique molecular compounds,
whereas the ToxTree-Nav1.5 Classifier, a pipeline of kernelized SVM models, was
trained on a large manually curated set of 1550 unique compounds retrieved from
the publicly available ChEMBL and PubChem bioactivity databases. The proposed
hERG predictor outperformed the state-of-the-art published model and other
existing tools on most metrics. Additionally, we introduce the first Nav1.5
liability predictive model achieving a Q4 = 74.9% and a binary classification
of Q2 = 86.7% with MCC = 71.2% evaluated on an external test set of 173 unique
compounds. The curated datasets used in this project are made publicly
available to the research community.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 00:22:37 GMT"
}
] | 2021-12-29T00:00:00 |
[
[
"Arab",
"Issar",
""
],
[
"Barakat",
"Khaled",
""
]
] |
new_dataset
| 0.998561 |
1908.00592
|
Talha Ongun
|
Talha Ongun and Oliver Spohngellert and Alina Oprea and Cristina
Nita-Rotaru and Mihai Christodorescu and Negin Salajegheh
|
The House That Knows You: User Authentication Based on IoT Data
|
11 pages, 5 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Home-based Internet of Things (IoT) devices have gained in popularity and
many households have become 'smart' by using devices such as smart sensors,
locks, and voice-based assistants. Traditional authentication methods such as
passwords, biometrics or multi-factor (using SMS or email) are either not
applicable in the smart home setting, or they are inconvenient as they break
the natural flow of interaction with these devices. Voice-based biometrics are
limited due to safety and privacy concerns. Given the limitations of existing
authentication techniques, we explore new opportunities for user authentication
in smart home environments. Specifically, we design a novel authentication
method based on behavioral features extracted from user interactions with IoT
devices. We perform an IRB-approved user study in the IoT lab at our university
over a period of three weeks. We collect network traffic from multiple users
interacting with 15 IoT devices in our lab and extract a large number of
features to capture user activity. We experiment with multiple classification
algorithms and also design an ensemble classifier with two models using
disjoint sets of features. We demonstrate that our ensemble model can classify
five users with 0.97 accuracy. The behavioral authentication modules could help
address the new challenges emerging with smart home ecosystems and they open up
the possibility of creating flexible policies for authorization and access
control.
|
[
{
"version": "v1",
"created": "Thu, 1 Aug 2019 19:37:07 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Aug 2019 21:16:25 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Aug 2019 02:18:23 GMT"
},
{
"version": "v4",
"created": "Mon, 27 Dec 2021 18:11:00 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Ongun",
"Talha",
""
],
[
"Spohngellert",
"Oliver",
""
],
[
"Oprea",
"Alina",
""
],
[
"Nita-Rotaru",
"Cristina",
""
],
[
"Christodorescu",
"Mihai",
""
],
[
"Salajegheh",
"Negin",
""
]
] |
new_dataset
| 0.996475 |
2104.04255
|
Hichem Sahbi
|
Hichem Sahbi
|
Skeleton-based Hand-Gesture Recognition with Lightweight Graph
Convolutional Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph convolutional networks (GCNs) aim at extending deep learning to
arbitrary irregular domains, namely graphs. Their success is highly dependent
on how the topology of input graphs is defined and most of the existing GCN
architectures rely on predefined or handcrafted graph structures. In this
paper, we introduce a novel method that learns the topology (or connectivity)
of input graphs as a part of GCN design. The main contribution of our method
resides in building an orthogonal connectivity basis that optimally aggregates
nodes, through their neighborhoods, prior to convolution. Our method
also considers a stochasticity criterion which acts as a regularizer that makes
the learned basis and the underlying GCNs lightweight while still being highly
effective. Experiments conducted on the challenging task of skeleton-based
hand-gesture recognition show the high effectiveness of the learned GCNs w.r.t.
the related work.
|
[
{
"version": "v1",
"created": "Fri, 9 Apr 2021 09:06:53 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Dec 2021 16:43:54 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Sahbi",
"Hichem",
""
]
] |
new_dataset
| 0.964268 |
2105.08822
|
Ruijing Yang
|
Ruijing Yang, Ziyu Guan, Zitong Yu, Xiaoyi Feng, Jinye Peng, Guoying
Zhao
|
Non-contact Pain Recognition from Video Sequences with Remote
Physiological Measurements Prediction
|
IJCAI 2021
|
https://www.ijcai.org/proceedings/2021/170
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic pain recognition is paramount for medical diagnosis and treatment.
The existing works fall into three categories: assessing facial appearance
changes, exploiting physiological cues, or fusing them in a multi-modal manner.
However, (1) appearance changes are easily affected by subjective factors, which
impedes objective pain recognition. Besides, the appearance-based approaches
ignore long-range spatial-temporal dependencies that are important for modeling
expressions over time; (2) the physiological cues are obtained by attaching
sensors to the human body, which is inconvenient and uncomfortable. In this paper,
we present a novel multi-task learning framework which encodes both appearance
changes and physiological cues in a non-contact manner for pain recognition.
The framework is able to capture both local and long-range dependencies via the
proposed attention mechanism for the learned appearance representations, which
are further enriched by temporally attended physiological cues (remote
photoplethysmography, rPPG) that are recovered from videos in the auxiliary
task. This framework is dubbed rPPG-enriched Spatio-Temporal Attention Network
(rSTAN) and allows us to establish the state-of-the-art performance of
non-contact pain recognition on publicly available pain databases. It
demonstrates that rPPG predictions can be used as an auxiliary task to
facilitate non-contact automatic pain recognition.
|
[
{
"version": "v1",
"created": "Tue, 18 May 2021 20:47:45 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Dec 2021 19:40:01 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Yang",
"Ruijing",
""
],
[
"Guan",
"Ziyu",
""
],
[
"Yu",
"Zitong",
""
],
[
"Feng",
"Xiaoyi",
""
],
[
"Peng",
"Jinye",
""
],
[
"Zhao",
"Guoying",
""
]
] |
new_dataset
| 0.96675 |
2105.14680
|
Franklin Mingzhe Li
|
Wei Sun, Franklin Mingzhe Li, Congshu Huang, Zhenyu Lei, Benjamin
Steeper, Songyun Tao, Feng Tian, Cheng Zhang
|
ThumbTrak: Recognizing Micro-finger Poses Using a Ring with Proximity
Sensing
|
MobileHCI '21: The ACM International Conference on Mobile
Human-Computer Interaction, September 27 - October 1, 2021, Toulouse, France
| null |
10.1145/3447526.3472060
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
ThumbTrak is a novel wearable input device that recognizes 12 micro-finger
poses in real-time. Poses are characterized by the thumb touching each of the
12 phalanges on the hand. It uses a thumb-ring, built with a flexible printed
circuit board, which hosts nine proximity sensors. Each sensor measures the
distance from the thumb to various parts of the palm or other fingers.
ThumbTrak uses a support-vector-machine (SVM) model to classify finger poses
based on distance measurements in real-time. A user study with ten participants
showed that ThumbTrak could recognize 12 micro finger poses with an average
accuracy of 93.6%. We also discuss potential opportunities and challenges in
applying ThumbTrak in real-world applications.
|
[
{
"version": "v1",
"created": "Mon, 31 May 2021 02:47:56 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Dec 2021 20:11:27 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Sun",
"Wei",
""
],
[
"Li",
"Franklin Mingzhe",
""
],
[
"Huang",
"Congshu",
""
],
[
"Lei",
"Zhenyu",
""
],
[
"Steeper",
"Benjamin",
""
],
[
"Tao",
"Songyun",
""
],
[
"Tian",
"Feng",
""
],
[
"Zhang",
"Cheng",
""
]
] |
new_dataset
| 0.999821 |
2106.01598
|
Son T. Luu
|
Hanh Hong-Phuc Vo, Hieu Trung Tran, Son T. Luu
|
Automatically Detecting Cyberbullying Comments on Online Game Forums
|
Published in the 2021 RIVF International Conference on Computing and
Communication Technologies (RIVF)
| null |
10.1109/RIVF51545.2021.9642116
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Online game forums are popular among game players. Players use them to
communicate, discuss game strategy, and even make friends.
However, game forums also contain abusive and harassing speech that disturbs and
threatens players. Therefore, it is necessary to automatically detect and
remove cyberbullying comments to keep the game forum clean and friendly. We use
the Cyberbullying dataset collected from World of Warcraft (WoW) and League of
Legends (LoL) forums and train classification models to automatically detect
whether a player's comment is abusive or not. The Toxic-BERT model obtains a
macro F1-score of 82.69% on the LoL forum and 83.86% on the WoW forum of the
Cyberbullying dataset.
|
[
{
"version": "v1",
"created": "Thu, 3 Jun 2021 05:08:11 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Dec 2021 13:36:17 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Vo",
"Hanh Hong-Phuc",
""
],
[
"Tran",
"Hieu Trung",
""
],
[
"Luu",
"Son T.",
""
]
] |
new_dataset
| 0.999789 |
2107.12699
|
Jukka Ruohonen
|
Jukka Ruohonen and Kalle Hjerppe and Kalle Rindell
|
A Large-Scale Security-Oriented Static Analysis of Python Packages in
PyPI
|
Proceedings of the 18th Annual International Conference on Privacy,
Security and Trust (PST 2021), Auckland (online), IEEE, pp. 1-10
| null |
10.1109/PST52912.2021.9647791
| null |
cs.SE cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Different security issues are a common problem for open source packages
archived to and delivered through software ecosystems. These often manifest
themselves as software weaknesses that may lead to concrete software
vulnerabilities. This paper examines various security issues in Python packages
with static analysis. The dataset is based on a snapshot of all packages stored
to the Python Package Index (PyPI). In total, over 197 thousand packages and
over 749 thousand security issues are covered. Even under the constraints
imposed by static analysis, (a) the results indicate prevalence of security
issues; at least one issue is present for about 46% of the Python packages. In
terms of the issue types, (b) exception handling and different code injections
have been the most common issues. The subprocess module stands out in this
regard. Reflecting the generally small size of the packages, (c) software size
metrics do not predict well the number of issues revealed through static
analysis. With these results and the accompanying discussion, the paper
contributes to the field of large-scale empirical studies for better
understanding security problems in software ecosystems.
|
[
{
"version": "v1",
"created": "Tue, 27 Jul 2021 09:57:25 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Dec 2021 12:34:19 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Ruohonen",
"Jukka",
""
],
[
"Hjerppe",
"Kalle",
""
],
[
"Rindell",
"Kalle",
""
]
] |
new_dataset
| 0.998875 |
2107.13186
|
Zhen Xu
|
Zhen Xu, Wei-Wei Tu, Isabelle Guyon
|
AutoML Meets Time Series Regression Design and Analysis of the
AutoSeries Challenge
| null |
ECML PKDD 2021
| null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analyzing time series better with limited human effort is of interest to
academia and industry. Driven by business scenarios, we organized the first
Automated Time Series Regression challenge (AutoSeries) for the WSDM Cup 2020.
We present its design, analysis, and post-hoc experiments. The code submission
requirement precluded participants from any manual intervention, testing
automated machine learning capabilities of solutions, across many datasets,
under hardware and time limitations. We prepared 10 datasets from diverse
application domains (sales, power consumption, air quality, traffic, and
parking), featuring missing data, mixed continuous and categorical variables,
and various sampling rates. Each dataset was split into a training and a test
sequence (which was streamed, allowing models to continuously adapt). The
setting of time series regression differs from classical forecasting in that
covariates at the present time are known. Great strides were made by
participants to tackle this AutoSeries problem, as demonstrated by the jump in
performance from the sample submission, and post-hoc comparisons with
AutoGluon. Simple yet effective methods were used, based on feature
engineering, LightGBM, and random search hyper-parameter tuning, addressing all
aspects of the challenge. Our post-hoc analyses revealed that providing
additional time did not yield significant improvements. The winners' code was
open-sourced at https://github.com/NehzUx/AutoSeries.
|
[
{
"version": "v1",
"created": "Wed, 28 Jul 2021 06:30:46 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Dec 2021 10:43:30 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Xu",
"Zhen",
""
],
[
"Tu",
"Wei-Wei",
""
],
[
"Guyon",
"Isabelle",
""
]
] |
new_dataset
| 0.992296 |
2108.06703
|
Jung Ho Ahn
|
Michael Jaemin Kim and Jaehyun Park and Yeonhong Park and Wanju Doh
and Namhoon Kim and Tae Jun Ham and Jae W. Lee and Jung Ho Ahn
|
Mithril: Cooperative Row Hammer Protection on Commodity DRAM Leveraging
Managed Refresh
|
16 pages, to appear in HPCA 2022
| null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Since its public introduction in the mid-2010s, the Row Hammer (RH)
phenomenon has drawn significant attention from the research community due to
its security implications. Although many RH-protection schemes have been
proposed by processor vendors, DRAM manufacturers, and academia, they still
have shortcomings. Solutions implemented in the memory controller (MC) incur
increasingly higher costs due to their conservative design for the worst case
in terms of the number of DRAM banks and RH threshold to support. Meanwhile,
DRAM-side implementation either has a limited time margin for RH-protection
measures or requires extensive modifications to the standard DRAM interface.
Recently, a new command for RH-protection has been introduced in the
DDR5/LPDDR5 standards, referred to as refresh management (RFM). RFM enables the
separation of the tasks for RH-protection to both MC and DRAM by having the
former generate an RFM command at a specific activation frequency and the
latter take proper RH-protection measures within a given time window. Although
promising, no existing study presents and analyzes RFM-based solutions for
RH-protection. In this paper, we propose Mithril, the first RFM
interface-compatible, DRAM-MC cooperative RH-protection scheme providing
deterministic protection guarantees. Mithril has minimal energy overheads for
common use cases without adversarial memory access patterns. We also introduce
Mithril+, an optional extension to provide minimal performance overheads at the
expense of a tiny modification to the MC, while utilizing existing DRAM
commands.
|
[
{
"version": "v1",
"created": "Sun, 15 Aug 2021 09:45:12 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Dec 2021 06:07:08 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Kim",
"Michael Jaemin",
""
],
[
"Park",
"Jaehyun",
""
],
[
"Park",
"Yeonhong",
""
],
[
"Doh",
"Wanju",
""
],
[
"Kim",
"Namhoon",
""
],
[
"Ham",
"Tae Jun",
""
],
[
"Lee",
"Jae W.",
""
],
[
"Ahn",
"Jung Ho",
""
]
] |
new_dataset
| 0.990125 |
2108.11240
|
Zijun Li
|
Zijun Li, Quan Chen and Minyi Guo
|
Pagurus: Eliminating Cold Startup in Serverless Computing with
Inter-Action Container Sharing
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Serverless computing provides fine-grain resource sharing between Cloud
tenants through containers. Each function invocation (action) runs in an
individual container. When there is no already-started container for a user
function, a new container has to be created for it. However, the long cold
startup time of a container results in the long response latency of the action.
Our investigation shows that the containers for some user actions share most of
the software packages. If an action that requires a new container can
``borrow'' a similar warm container from other actions, the long cold startup
can be eliminated. Based on the above finding, we propose Pagurus, a runtime
container management system for eliminating the cold startup in serverless
computing. Pagurus is comprised of an inter-action container scheduler and an
intra-action container scheduler for each action. The inter-action container
scheduler schedules shared containers among actions. The intra-action container
scheduler deals with the management of the container lifecycle. Our
experimental results show that Pagurus effectively eliminates the
time-consuming container cold startup. An action may start to run in 10ms with
Pagurus, even if there is no warm container for it.
|
[
{
"version": "v1",
"created": "Wed, 25 Aug 2021 13:50:36 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Li",
"Zijun",
""
],
[
"Chen",
"Quan",
""
],
[
"Guo",
"Minyi",
""
]
] |
new_dataset
| 0.99914 |
2108.12928
|
Nathan Schneider
|
Nathan Schneider, Amir Zeldes
|
Mischievous Nominal Constructions in Universal Dependencies
|
Extended version of the paper that is published in Proceedings of the
Fifth Workshop on Universal Dependencies (UDW, SyntaxFest 2021), with
additional sections on adverbial NPs and numbers/measurements
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While the highly multilingual Universal Dependencies (UD) project provides
extensive guidelines for clausal structure as well as structure within
canonical nominal phrases, a standard treatment is lacking for many
"mischievous" nominal phenomena that break the mold. As a result, numerous
inconsistencies within and across corpora can be found, even in languages with
extensive UD treebanking work, such as English. This paper surveys the kinds of
mischievous nominal expressions attested in English UD corpora and proposes
solutions primarily with English in mind, but which may offer paths to
solutions for a variety of UD languages.
|
[
{
"version": "v1",
"created": "Sun, 29 Aug 2021 22:30:15 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Dec 2021 23:41:28 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Schneider",
"Nathan",
""
],
[
"Zeldes",
"Amir",
""
]
] |
new_dataset
| 0.996504 |
2109.03805
|
Abhinav Valada
|
Whye Kit Fong, Rohit Mohan, Juana Valeria Hurtado, Lubing Zhou, Holger
Caesar, Oscar Beijbom, and Abhinav Valada
|
Panoptic nuScenes: A Large-Scale Benchmark for LiDAR Panoptic
Segmentation and Tracking
|
The benchmark is available at https://www.nuscenes.org
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Panoptic scene understanding and tracking of dynamic agents are essential for
robots and automated vehicles to navigate in urban environments. As LiDARs
provide accurate illumination-independent geometric depictions of the scene,
performing these tasks using LiDAR point clouds provides reliable predictions.
However, existing datasets lack diversity in the type of urban scenes and have
a limited number of dynamic object instances, which hinders both learning of
these tasks and credible benchmarking of the developed methods. In this
paper, we introduce the large-scale Panoptic nuScenes benchmark dataset that
extends our popular nuScenes dataset with point-wise groundtruth annotations
for semantic segmentation, panoptic segmentation, and panoptic tracking tasks.
To facilitate comparison, we provide several strong baselines for each of these
tasks on our proposed dataset. Moreover, we analyze the drawbacks of the
existing metrics for panoptic tracking and propose the novel instance-centric
PAT metric that addresses the concerns. We present exhaustive experiments that
demonstrate the utility of Panoptic nuScenes compared to existing datasets and
make the online evaluation server available at nuScenes.org. We believe that
this extension will accelerate the research of novel methods for scene
understanding of dynamic urban environments.
|
[
{
"version": "v1",
"created": "Wed, 8 Sep 2021 17:45:37 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Sep 2021 05:10:11 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Dec 2021 19:16:51 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Fong",
"Whye Kit",
""
],
[
"Mohan",
"Rohit",
""
],
[
"Hurtado",
"Juana Valeria",
""
],
[
"Zhou",
"Lubing",
""
],
[
"Caesar",
"Holger",
""
],
[
"Beijbom",
"Oscar",
""
],
[
"Valada",
"Abhinav",
""
]
] |
new_dataset
| 0.999809 |
2111.01431
|
Seokjun Kim
|
Seokjun Kim, Jaeeun Jang, Hyeoncheol Kim
|
Deductive Association Networks
|
A simple experiment was conducted as a series of artificial
association networks
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce deductive association networks (DANs), networks that perform
deductive reasoning. To achieve high-dimensional thinking, it is necessary to
combine various axioms and feed the results back into other axioms to produce
new relationships and results. For example, given the two propositions
"Socrates is a man." and "All men are mortals.", the new proposition
"Therefore Socrates is mortal." can be inferred. To
evaluate, we used the MNIST dataset, a handwritten digit image dataset,
applied it to group theory, and show the results of performing deductive
learning.
|
[
{
"version": "v1",
"created": "Tue, 2 Nov 2021 08:47:04 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Nov 2021 16:54:10 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Dec 2021 17:41:53 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Kim",
"Seokjun",
""
],
[
"Jang",
"Jaeeun",
""
],
[
"Kim",
"Hyeoncheol",
""
]
] |
new_dataset
| 0.999437 |
2111.07441
|
Athanasios Kapoutsis Ch.
|
Athanasios Ch. Kapoutsis, Savvas A. Chatzichristofis and Elias B.
Kosmatopoulos
|
A distributed, plug-n-play algorithm for multi-robot applications with a
priori non-computable objective functions
| null |
The International Journal of Robotics Research, (2019), Volume: 38
issue: 7, page(s): 813-832
|
10.1177/0278364919845054
| null |
cs.RO cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a distributed algorithm applicable to a wide range of
practical multi-robot applications. In such multi-robot applications, the
user-defined objectives of the mission can be cast as a general optimization
problem, without explicit guidelines of the subtasks per different robot. Owing
to the unknown environment, unknown robot dynamics, sensor nonlinearities,
etc., the analytic form of the optimization cost function is not available a
priori. Therefore, standard gradient-descent-like algorithms are not applicable
to these problems. To tackle this, we introduce a new algorithm that carefully
designs each robot's subcost function, the optimization of which can accomplish
the overall team objective. Upon this transformation, we propose a distributed
methodology based on the cognitive-based adaptive optimization (CAO) algorithm,
that is able to approximate the evolution of each robot's cost function and to
adequately optimize its decision variables (robot actions). The latter can be
achieved by online learning only the problem-specific characteristics that
affect the accomplishment of mission objectives. The overall, low-complexity
algorithm can straightforwardly incorporate any kind of operational constraint,
is fault-tolerant, and can appropriately tackle time-varying cost functions. A
cornerstone of this approach is that it shares the same convergence
characteristics as those of block coordinate descent algorithms. The proposed
algorithm is evaluated in three heterogeneous simulation set-ups under multiple
scenarios, against both general-purpose and problem-specific algorithms. Source
code is available at
https://github.com/athakapo/A-distributed-plug-n-play-algorithm-for-multi-robot-applications.
|
[
{
"version": "v1",
"created": "Sun, 14 Nov 2021 20:40:00 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Dec 2021 11:27:19 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Kapoutsis",
"Athanasios Ch.",
""
],
[
"Chatzichristofis",
"Savvas A.",
""
],
[
"Kosmatopoulos",
"Elias B.",
""
]
] |
new_dataset
| 0.980241 |
2112.03650
|
Huajun Zhou
|
Huajun Zhou and Peijia Chen and Lingxiao Yang and Jianhuang Lai and
Xiaohua Xie
|
Activation to Saliency: Forming High-Quality Labels for Completely
Unsupervised Salient Object Detection
|
11 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing deep learning-based Unsupervised Salient Object Detection (USOD)
methods rely on supervised pre-trained deep models. Moreover, they generate
pseudo labels based on hand-crafted features, which lack high-level semantic
information. In order to overcome these shortcomings, we propose a new
two-stage Activation-to-Saliency (A2S) framework that effectively excavates
high-quality saliency cues to train a robust saliency detector. It is worth
noting that our method does not require any manual annotation, even in the
pre-training phase. In the first stage, we transform an unsupervisedly
pre-trained network to aggregate multi-level features to a single activation
map, where an Adaptive Decision Boundary (ADB) is proposed to assist the
training of the transformed network. Moreover, a new loss function is proposed
to facilitate the generation of high-quality pseudo labels. In the second
stage, a self-rectification learning strategy is developed to train a
saliency detector and refine the pseudo labels online. In addition, we
construct a lightweight saliency detector using two Residual Attention Modules
(RAMs) to largely reduce the risk of overfitting. Extensive experiments on
several SOD benchmarks show that our framework achieves significant performance
improvements compared with existing USOD methods. Moreover, training our framework on 3,000
images consumes about 1 hour, which is over 30$\times$ faster than previous
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 7 Dec 2021 11:54:06 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Dec 2021 05:53:01 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Dec 2021 01:53:24 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Zhou",
"Huajun",
""
],
[
"Chen",
"Peijia",
""
],
[
"Yang",
"Lingxiao",
""
],
[
"Lai",
"Jianhuang",
""
],
[
"Xie",
"Xiaohua",
""
]
] |
new_dataset
| 0.972328 |
2112.06539
|
Kamil \.Zywanowski
|
Kamil \.Zywanowski, Adam Banaszczyk, Micha{\l} R. Nowicki, and Jacek
Komorowski
|
MinkLoc3D-SI: 3D LiDAR place recognition with sparse convolutions,
spherical coordinates, and intensity
| null | null |
10.1109/LRA.2021.3136863
| null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The 3D LiDAR place recognition aims to estimate a coarse localization in a
previously seen environment based on a single scan from a rotating 3D LiDAR
sensor. The existing solutions to this problem include hand-crafted point cloud
descriptors (e.g., ScanContext, M2DP, LiDAR IRIS) and deep learning-based
solutions (e.g., PointNetVLAD, PCAN, LPDNet, DAGC, MinkLoc3D), which are often
only evaluated on accumulated 2D scans from the Oxford RobotCar dataset. We
introduce MinkLoc3D-SI, a sparse convolution-based solution that utilizes
spherical coordinates of 3D points and processes the intensity of 3D LiDAR
measurements, improving the performance when a single 3D LiDAR scan is used.
Our method integrates the improvements typical for hand-crafted descriptors
(like ScanContext) with the most efficient 3D sparse convolutions (MinkLoc3D).
Our experiments show improved results on single scans from 3D LiDARs (USyd
Campus dataset) and great generalization ability (KITTI dataset). Using
intensity information on accumulated 2D scans (RobotCar Intensity dataset)
improves the performance, even though the spherical representation does not produce
a noticeable improvement. As a result, MinkLoc3D-SI is suited for single scans
obtained from a 3D LiDAR, making it applicable in autonomous vehicles.
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 10:21:34 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Dec 2021 10:38:06 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Żywanowski",
"Kamil",
""
],
[
"Banaszczyk",
"Adam",
""
],
[
"Nowicki",
"Michał R.",
""
],
[
"Komorowski",
"Jacek",
""
]
] |
new_dataset
| 0.999692 |
2112.06554
|
Ramy Ashraf Zeineldin
|
Ramy A. Zeineldin, Mohamed E. Karar, Franziska Mathis-Ullrich and
Oliver Burgert
|
Ensemble CNN Networks for GBM Tumors Segmentation using Multi-parametric
MRI
|
Accepted in BraTS 2021 (as part of the BrainLes workshop proceedings
distributed by Springer LNCS)
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Glioblastomas are the most aggressive fast-growing primary brain cancers, which
originate in the glial cells of the brain. Accurate identification of the
malignant brain tumor and its sub-regions is still one of the most challenging
problems in medical image segmentation. The Brain Tumor Segmentation Challenge
(BraTS) has been a popular benchmark for automatic brain glioblastomas
segmentation algorithms since its initiation. This year, the BraTS 2021
challenge provides the largest multi-parametric MRI (mpMRI) dataset of 2,000
pre-operative patients. In this paper, we propose a new aggregation of two deep
learning frameworks namely, DeepSeg and nnU-Net for automatic glioblastoma
recognition in pre-operative mpMRI. Our ensemble method obtains Dice similarity
scores of 92.00, 87.33, and 84.10 and Hausdorff Distances of 3.81, 8.91, and
16.02 for the enhancing tumor, tumor core, and whole tumor regions,
respectively, on the BraTS 2021 validation set, ranking us among the top ten
teams. These experimental findings provide evidence that it can be readily
applied clinically, thereby aiding in brain cancer prognosis, therapy
planning, and therapy response monitoring. A docker image for reproducing our
segmentation results is available online at
(https://hub.docker.com/r/razeineldin/deepseg21).
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 10:51:20 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Dec 2021 10:05:43 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Zeineldin",
"Ramy A.",
""
],
[
"Karar",
"Mohamed E.",
""
],
[
"Mathis-Ullrich",
"Franziska",
""
],
[
"Burgert",
"Oliver",
""
]
] |
new_dataset
| 0.968549 |
2112.11010
|
Youngwan Lee
|
Youngwan Lee, Jonghee Kim, Jeff Willette, Sung Ju Hwang
|
MPViT: Multi-Path Vision Transformer for Dense Prediction
|
technical report
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Dense computer vision tasks such as object detection and segmentation require
effective multi-scale feature representation for detecting or classifying
objects or regions with varying sizes. While Convolutional Neural Networks
(CNNs) have been the dominant architectures for such tasks, recently introduced
Vision Transformers (ViTs) aim to replace them as a backbone. Similar to CNNs,
ViTs build a simple multi-stage structure (i.e., fine-to-coarse) for
multi-scale representation with single-scale patches. In this work, with a
different perspective from existing Transformers, we explore multi-scale patch
embedding and multi-path structure, constructing the Multi-Path Vision
Transformer (MPViT). MPViT embeds features of the same size~(i.e., sequence
length) with patches of different scales simultaneously by using overlapping
convolutional patch embedding. Tokens of different scales are then
independently fed into the Transformer encoders via multiple paths and the
resulting features are aggregated, enabling both fine and coarse feature
representations at the same feature level. Thanks to the diverse, multi-scale
feature representations, our MPViTs scaling from tiny~(5M) to base~(73M)
consistently achieve superior performance over state-of-the-art Vision
Transformers on ImageNet classification, object detection, instance
segmentation, and semantic segmentation. These extensive results demonstrate
that MPViT can serve as a versatile backbone network for various vision tasks.
Code will be made publicly available at \url{https://git.io/MPViT}.
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 06:34:50 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Dec 2021 02:46:40 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Lee",
"Youngwan",
""
],
[
"Kim",
"Jonghee",
""
],
[
"Willette",
"Jeff",
""
],
[
"Hwang",
"Sung Ju",
""
]
] |
new_dataset
| 0.999315 |
2112.11193
|
Ana Valdivia
|
Ana Valdivia, J\'ulia Corbera-Serraj\`ordia, Aneta Swianiewicz
|
There is an elephant in the room: Towards a critique on the use of
fairness in biometrics
|
14 pages, 3 figures
| null | null | null |
cs.CY cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In 2019, the UK's Immigration and Asylum Chamber of the Upper Tribunal
dismissed an asylum appeal basing the decision on the output of a biometric
system, alongside other discrepancies. The fingerprints of the asylum seeker
were found in a biometric database which contradicted the appellant's account.
The Tribunal found this evidence unequivocal and denied the asylum claim.
Nowadays, the proliferation of biometric systems is shaping public debates
around its political, social and ethical implications. Yet whilst concerns
towards the racialised use of this technology for migration control have been
on the rise, investment in the biometrics industry and innovation is increasing
considerably. Moreover, fairness has also recently been adopted in biometrics
to mitigate bias and discrimination. However, algorithmic
fairness cannot distribute justice in scenarios which are broken or whose intended
purpose is to discriminate, such as biometrics deployed at the border.
In this paper, we offer a critical reading of recent debates about biometric
fairness and show its limitations drawing on research in fairness in machine
learning and critical border studies. Building on previous fairness
demonstrations, we prove that biometric fairness criteria are mathematically
mutually exclusive. Then, the paper moves on to illustrate empirically that a
fair biometric system is not possible by reproducing experiments from previous
works. Finally, we discuss the politics of fairness in biometrics by situating
the debate at the border. We claim that bias and error rates have different
impact on citizens and asylum seekers. Fairness has overshadowed the elephant
in the room of biometrics, focusing on the demographic biases and ethical
discourses of algorithms rather than examining how these systems reproduce
historical and political injustices.
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 10:32:41 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Dec 2021 09:44:10 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Valdivia",
"Ana",
""
],
[
"Corbera-Serrajòrdia",
"Júlia",
""
],
[
"Swianiewicz",
"Aneta",
""
]
] |
new_dataset
| 0.968898 |
2112.12494
|
Ali Furkan Biten
|
Ali Furkan Biten, Ron Litman, Yusheng Xie, Srikar Appalaraju, R.
Manmatha
|
LaTr: Layout-Aware Transformer for Scene-Text VQA
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel multimodal architecture for Scene Text Visual Question
Answering (STVQA), named Layout-Aware Transformer (LaTr). The task of STVQA
requires models to reason over different modalities. Thus, we first investigate
the impact of each modality, and reveal the importance of the language module,
especially when enriched with layout information. Accounting for this, we
propose a single objective pre-training scheme that requires only text and
spatial cues. We show that applying this pre-training scheme on scanned
documents has certain advantages over using natural images, despite the domain
gap. Scanned documents are easy to procure, text-dense and have a variety of
layouts, helping the model learn various spatial cues (e.g. left-of, below
etc.) by tying together language and layout information. Compared to existing
approaches, our method performs vocabulary-free decoding and, as shown,
generalizes well beyond the training vocabulary. We further demonstrate that
LaTr improves robustness towards OCR errors, a common reason for failure cases
in STVQA. In addition, by leveraging a vision transformer, we eliminate the
need for an external object detector. LaTr outperforms state-of-the-art STVQA
methods on multiple datasets. In particular, +7.6% on TextVQA, +10.8% on ST-VQA
and +4.0% on OCR-VQA (all absolute accuracy numbers).
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 12:41:26 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Dec 2021 11:06:59 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Biten",
"Ali Furkan",
""
],
[
"Litman",
"Ron",
""
],
[
"Xie",
"Yusheng",
""
],
[
"Appalaraju",
"Srikar",
""
],
[
"Manmatha",
"R.",
""
]
] |
new_dataset
| 0.994579 |
2112.12823
|
Roberto Bagnara
|
Roberto Bagnara, Abramo Bagnara, Patricia M. Hill
|
A Rationale-Based Classification of MISRA C Guidelines
|
12 pages, 2 figures
| null | null | null |
cs.PL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MISRA C is the most authoritative language subset for the C programming
language that is a de facto standard in several industry sectors where safety
and security are of paramount importance. While MISRA C is currently encoded in
175 guidelines (coding rules and directives), it does not coincide with them:
proper adoption of MISRA C requires embracing its preventive approach (as
opposed to the "bug finding" approach) and a documented development process
where justifiable non-compliances are authorized and recorded as deviations.
MISRA C guidelines are classified along several axes in the official MISRA
documents. In this paper, we add to these an orthogonal classification that
associates guidelines with their main rationale. The advantages of this new
classification are illustrated for different kinds of projects, including those
not (yet) having MISRA compliance among their objectives.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 19:57:09 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Bagnara",
"Roberto",
""
],
[
"Bagnara",
"Abramo",
""
],
[
"Hill",
"Patricia M.",
""
]
] |
new_dataset
| 0.983758 |
2112.12907
|
Fangyang Ye
|
Ziyu Li, Fangyang Ye, Xinran Guan
|
3D Point Cloud Reconstruction and SLAM as an Input
|
7 pages
| null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To handle the different types of surface reconstruction tasks, we have
replicated as well as modified a few reconstruction methods and have made
comparisons between traditional and data-driven methods for
reconstructing the surface of an object with a dense point cloud as input. On top
of that, we propose a system that uses tightly-coupled SLAM as an input to
generate a deskewed point cloud and odometry, together with a Truncated Signed
Distance Function based Surface Reconstruction Library. To achieve higher accuracy,
IMU (Inertial Measurement Unit) pre-integration and pose graph optimization are
conducted in the SLAM part. With the help of the Robot Operating System, we
build a system containing those two parts, which can perform real-time
outdoor surface reconstruction.
|
[
{
"version": "v1",
"created": "Fri, 24 Dec 2021 01:56:09 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Li",
"Ziyu",
""
],
[
"Ye",
"Fangyang",
""
],
[
"Guan",
"Xinran",
""
]
] |
new_dataset
| 0.968308 |
2112.12913
|
Anna Wr\'oblewska
|
Anna Wr\'oblewska, Pawe{\l} Rzepi\'nski, Sylwia Sysko-Roma\'nczuk
|
Spoiler in a Textstack: How Much Can Transformers Help?
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents our research regarding spoiler detection in reviews. In
this use case, we describe the method of fine-tuning and organizing the
available text-based model tasks with the latest deep learning achievements and
techniques to interpret the models' results.
Until now, spoiler research has been rarely described in the literature. We
tested the transfer learning approach and different latest transformer
architectures on two open datasets with annotated spoilers (ROC AUC above 81\%
on the TV Tropes Movies dataset and above 88\% on the Goodreads dataset). We also
collected data and assembled a new dataset with fine-grained annotations. To
that end, we employed interpretability techniques and measures to assess the
models' reliability and explain their results.
|
[
{
"version": "v1",
"created": "Fri, 24 Dec 2021 02:42:44 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Wróblewska",
"Anna",
""
],
[
"Rzepiński",
"Paweł",
""
],
[
"Sysko-Romańczuk",
"Sylwia",
""
]
] |
new_dataset
| 0.998784 |
2112.12926
|
Yuyu Luo Dr.
|
Yuyu Luo, Jiawei Tang, Guoliang Li
|
nvBench: A Large-Scale Synthesized Dataset for Cross-Domain Natural
Language to Visualization Task
| null | null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
NL2VIS - which translates natural language (NL) queries to corresponding
visualizations (VIS) - has attracted more and more attention both in commercial
visualization vendors and academic researchers. In the last few years, the
advanced deep learning-based models have achieved human-like abilities in many
natural language processing (NLP) tasks, which clearly tells us that the deep
learning-based technique is a good choice to push the field of NL2VIS. However,
a major obstacle is the lack of benchmarks with large numbers of (NL, VIS) pairs. We present
nvBench, the first large-scale NL2VIS benchmark, containing 25,750 (NL, VIS)
pairs from 750 tables over 105 domains, synthesized from (NL, SQL) benchmarks
to support cross-domain NL2VIS task. The quality of nvBench has been
extensively validated by 23 experts and 300+ crowd workers. Deep learning-based
models trained using nvBench demonstrate that nvBench can push the field of
NL2VIS.
|
[
{
"version": "v1",
"created": "Fri, 24 Dec 2021 03:33:20 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Luo",
"Yuyu",
""
],
[
"Tang",
"Jiawei",
""
],
[
"Li",
"Guoliang",
""
]
] |
new_dataset
| 0.999683 |
2112.12984
|
Mian Guo
|
Mian Guo, Kai Zhong, Xiaozhi Wang
|
Doppler velocity-based algorithm for Clustering and Velocity Estimation
of moving objects
|
7 pages, 9 figures, 2 tables, 2 algorithms, CACRE2022
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a Doppler velocity-based clustering and velocity estimation algorithm
based on the characteristics of FMCW LiDAR which achieves highly accurate,
single-scan, and real-time motion state detection and velocity estimation. We
prove the continuity of the Doppler velocity on the same object. Based on this
principle, we achieve the distinction between moving objects and stationary
background via region growing clustering algorithm. The obtained stationary
background will be used to estimate the velocity of the FMCW LiDAR by the
least-squares method. Then we estimate the velocity of the moving objects using
the estimated LiDAR velocity and the Doppler velocity of moving objects
obtained by clustering. To ensure real-time processing, we set the appropriate
least-squares parameters. Meanwhile, to verify the effectiveness of the
algorithm, we create the FMCW LiDAR model on the autonomous driving simulation
platform CARLA for spawning data. The results show that our algorithm can
process at least 4.5 million points and estimate the velocity of 150 moving
objects per second on a Ryzen 3600X CPU, with a
motion state detection accuracy of over 99% and estimated velocity accuracy of
0.1 m/s.
|
[
{
"version": "v1",
"created": "Fri, 24 Dec 2021 07:57:28 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Guo",
"Mian",
""
],
[
"Zhong",
"Kai",
""
],
[
"Wang",
"Xiaozhi",
""
]
] |
new_dataset
| 0.995676 |
2112.12988
|
Sucheng Qian
|
Sucheng Qian, Liu Liu, Wenqiang Xu, Cewu Lu
|
iSeg3D: An Interactive 3D Shape Segmentation Tool
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A large-scale dataset is essential for learning good features in 3D shape
understanding, but there are only a few datasets that can support deep learning
training. One of the major reasons is that current tools for annotating
per-point semantic labels using polygons or scribbles are tedious and
inefficient. To facilitate segmentation annotations in 3D shapes, we propose an
effective annotation tool, named iSeg for 3D shape. It can obtain a satisfied
segmentation result with minimal human clicks (< 10). Under our observation,
most objects can be considered as the composition of finite primitive shapes,
and we train iSeg3D model on our built primitive-composed shape data to learn
the geometric prior knowledge in a self-supervised manner. Given human
interactions, the learned knowledge can be used to segment parts on arbitrary
shapes, in which positive clicks help associate the primitives into the
semantic parts and negative clicks can avoid over-segmentation. Besides, We
also provide an online human-in-loop fine-tuning module that enables the model
perform better segmentation with less clicks. Experiments demonstrate the
effectiveness of iSeg3D on PartNet shape segmentation. Data and codes will be
made publicly available.
|
[
{
"version": "v1",
"created": "Fri, 24 Dec 2021 08:15:52 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Qian",
"Sucheng",
""
],
[
"Liu",
"Liu",
""
],
[
"Xu",
"Wenqiang",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.980516 |
2112.13018
|
Hongyi Fan
|
David Charatan, Hongyi Fan, Benjamin Kimia
|
Benchmarking Pedestrian Odometry: The Brown Pedestrian Odometry Dataset
(BPOD)
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present the Brown Pedestrian Odometry Dataset (BPOD) for benchmarking
visual odometry algorithms in head-mounted pedestrian settings. This dataset
was captured using synchronized global and rolling shutter stereo cameras in 12
diverse indoor and outdoor locations on Brown University's campus. Compared to
existing datasets, BPOD contains more image blur and self-rotation, which are
common in pedestrian odometry but rare elsewhere. Ground-truth trajectories are
generated from stick-on markers placed along the pedestrian's path, and the
pedestrian's position is documented using a third-person video. We evaluate the
performance of representative direct, feature-based, and learning-based VO
methods on BPOD. Our results show that significant development is needed to
successfully capture pedestrian trajectories. The link to the dataset is here:
\url{https://doi.org/10.26300/c1n7-7p93}
|
[
{
"version": "v1",
"created": "Fri, 24 Dec 2021 10:11:32 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Charatan",
"David",
""
],
[
"Fan",
"Hongyi",
""
],
[
"Kimia",
"Benjamin",
""
]
] |
new_dataset
| 0.999772 |
2112.13031
|
Unnikrishnan R Nair
|
Nivedita Rufus, Kanishk Jain, Unni Krishnan R Nair, Vineet Gandhi, K
Madhava Krishna
|
Grounding Linguistic Commands to Navigable Regions
| null |
2021 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), 2021, pp. 8593-8600
|
10.1109/IROS51168.2021.9636172
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans have a natural ability to effortlessly comprehend linguistic commands
such as "park next to the yellow sedan" and instinctively know which region of
the road the vehicle should navigate. Extending this ability to autonomous
vehicles is the next step towards creating fully autonomous agents that respond
and act according to human commands. To this end, we propose the novel task of
Referring Navigable Regions (RNR), i.e., grounding regions of interest for
navigation based on the linguistic command. RNR is different from Referring
Image Segmentation (RIS), which focuses on grounding an object referred to by
the natural language expression instead of grounding a navigable region. For
example, for a command "park next to the yellow sedan," RIS will aim to segment
the referred sedan, and RNR aims to segment the suggested parking region on the
road. We introduce a new dataset, Talk2Car-RegSeg, which extends the existing
Talk2car dataset with segmentation masks for the regions described by the
linguistic commands. A separate test split with concise manoeuvre-oriented
commands is provided to assess the practicality of our dataset. We benchmark
the proposed dataset using a novel transformer-based architecture. We present
extensive ablations and show superior performance over baselines on multiple
evaluation metrics. A downstream path planner generating trajectories based on
RNR outputs confirms the efficacy of the proposed framework.
|
[
{
"version": "v1",
"created": "Fri, 24 Dec 2021 11:11:44 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Rufus",
"Nivedita",
""
],
[
"Jain",
"Kanishk",
""
],
[
"Nair",
"Unni Krishnan R",
""
],
[
"Gandhi",
"Vineet",
""
],
[
"Krishna",
"K Madhava",
""
]
] |
new_dataset
| 0.996499 |
2112.13224
|
Yusheng Wang
|
Yusheng Wang, Weiwei Song, Yidong Lou, Fei Huang, Zhiyong Tu and
Shimin Zhang
|
Simultaneous Location of Rail Vehicles and Mapping of Environment with
Multiple LiDARs
|
arXiv admin note: text overlap with arXiv:2111.15043,
arXiv:2112.08563
| null | null | null |
cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Precise and real-time rail vehicle localization as well as railway
environment monitoring is crucial for railroad safety. In this letter, we
propose a multi-LiDAR based simultaneous localization and mapping (SLAM) system
for railway applications. Our approach starts with measurements preprocessing
to denoise and synchronize multiple LiDAR inputs. Different frame-to-frame
registration methods are used according to the LiDAR placement. In addition, we
leverage the plane constraints from extracted rail tracks to improve the system
accuracy. The local map is further aligned with global map utilizing absolute
position measurements. Considering the unavoidable metal abrasion and screw
loosening, online extrinsic refinement is activated for long-duration operation.
The proposed method is extensively verified on datasets gathered over 3000 km.
The results demonstrate that the proposed system achieves accurate and robust
localization together with effective mapping for large-scale environments. Our
system has already been applied to a freight traffic railroad for monitoring
tasks.
|
[
{
"version": "v1",
"created": "Sat, 25 Dec 2021 11:59:37 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Wang",
"Yusheng",
""
],
[
"Song",
"Weiwei",
""
],
[
"Lou",
"Yidong",
""
],
[
"Huang",
"Fei",
""
],
[
"Tu",
"Zhiyong",
""
],
[
"Zhang",
"Shimin",
""
]
] |
new_dataset
| 0.996852 |
2112.13237
|
Abhranil Chandra
|
Nithish Kannen, Divyanshu Sheth, Abhranil Chandra, Shubhraneel Pal
|
CABACE: Injecting Character Sequence Information and Domain Knowledge
for Enhanced Acronym and Long-Form Extraction
| null | null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Acronyms and long-forms are commonly found in research documents, more so in
documents from scientific and legal domains. Many acronyms used in such
documents are domain-specific and are very rarely found in normal text corpora.
Owing to this, transformer-based NLP models often treat acronym tokens as OOV
(Out of Vocabulary), especially for non-English languages, and their
performance suffers while linking acronyms to their long forms during
extraction. Moreover, pretrained transformer models like BERT are not
specialized to handle scientific and legal documents. With these points being
the overarching motivation behind this work, we propose a novel framework
CABACE: Character-Aware BERT for ACronym Extraction, which takes into account
character sequences in text and is adapted to scientific and legal domains by
masked language modelling. We further use an objective with an augmented loss
function, adding the max loss and mask loss terms to the standard cross-entropy
loss for training CABACE. We further leverage pseudo labelling and adversarial
data generation to improve the generalizability of the framework. Experimental
results prove the superiority of the proposed framework in comparison to
various baselines. Additionally, we show that the proposed framework is better
suited than baseline models for zero-shot generalization to non-English
languages, thus reinforcing the effectiveness of our approach. Our team
BacKGProp secured the highest scores on the French dataset, second-highest on
Danish and Vietnamese, and third-highest in the English-Legal dataset on the
global leaderboard for the acronym extraction (AE) shared task at SDU AAAI-22.
|
[
{
"version": "v1",
"created": "Sat, 25 Dec 2021 14:03:09 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Kannen",
"Nithish",
""
],
[
"Sheth",
"Divyanshu",
""
],
[
"Chandra",
"Abhranil",
""
],
[
"Pal",
"Shubhraneel",
""
]
] |
new_dataset
| 0.998141 |
2112.13238
|
Naghme Jamali
|
Naghme Jamali, Yadollah Yaghoobzadeh, Hesham Faili
|
PerCQA: Persian Community Question Answering Dataset
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Community Question Answering (CQA) forums provide answers for many real-life
questions. Thanks to the large size, these forums are very popular among
machine learning researchers. Automatic answer selection, answer ranking,
question retrieval, expert finding, and fact-checking are example learning
tasks performed using CQA data. In this paper, we present PerCQA, the first
Persian dataset for CQA. This dataset contains the questions and answers
crawled from the most well-known Persian forum. After data acquisition, we
provide rigorous annotation guidelines in an iterative process, and then the
annotation of question-answer pairs in SemEvalCQA format. PerCQA contains 989
questions and 21,915 annotated answers. We make PerCQA publicly available to
encourage more research in Persian CQA. We also build strong benchmarks for the
task of answer selection in PerCQA by using mono- and multi-lingual pre-trained
language models.
|
[
{
"version": "v1",
"created": "Sat, 25 Dec 2021 14:06:41 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Jamali",
"Naghme",
""
],
[
"Yaghoobzadeh",
"Yadollah",
""
],
[
"Faili",
"Hesham",
""
]
] |
new_dataset
| 0.999776 |
2112.13306
|
Luming Wang
|
Luming Wang, Xu Zhang, Tianyue Lu, Mingyu Chen
|
Asynchronous Memory Access Unit for General Purpose Processors
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In future data centers, applications will make heavy use of far memory
(including disaggregated memory pools and NVM). The access latency of far
memory is more widely distributed than that of local memory accesses. This
reduces the efficiency of the traditional blocking load/store interface in most
general-purpose processors in this scenario. Therefore, this work proposes an
in-core
asynchronous memory access unit.
|
[
{
"version": "v1",
"created": "Sun, 26 Dec 2021 01:58:04 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Wang",
"Luming",
""
],
[
"Zhang",
"Xu",
""
],
[
"Lu",
"Tianyue",
""
],
[
"Chen",
"Mingyu",
""
]
] |
new_dataset
| 0.987371 |
2112.13350
|
Ismail Shahin
|
Ismail Shahin, Noor Hindawi, Ali Bou Nassif, Adi Alhudhaif, Kemal
Polat
|
Novel Dual-Channel Long Short-Term Memory Compressed Capsule Networks
for Emotion Recognition
|
19 pages, 11 figures
|
Published in Expert Systems With Applications, 2021
|
10.1016/j.eswa.2021.116080
| null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent analysis on speech emotion recognition has made considerable advances
with the use of MFCCs spectrogram features and the implementation of neural
network approaches such as convolutional neural networks (CNNs). Capsule
networks (CapsNet) have gained attention as alternatives to CNNs with their
larger capacities for hierarchical representation. To address these issues,
this research introduces a text-independent and speaker-independent SER novel
architecture, where a dual-channel long short-term memory compressed-CapsNet
(DC-LSTM COMP-CapsNet) algorithm is proposed based on the structural features
of CapsNet. Our proposed novel classifier can ensure the energy efficiency of
the model and adequate compression method in speech emotion recognition, which
is not delivered through the original structure of a CapsNet. Moreover, the
grid search approach is used to attain optimal solutions. The results showed
improved performance and a reduction in the training and testing running time.
The speech datasets used to evaluate our algorithm are: Arabic Emirati-accented
corpus, English speech under simulated and actual stress corpus, English
Ryerson audio-visual database of emotional speech and song corpus, and
crowd-sourced emotional multimodal actors dataset. This work reveals that the
optimum feature extraction method compared to other known methods is MFCCs
delta-delta. Using the four datasets and the MFCCs delta-delta, DC-LSTM
COMP-CapsNet surpasses all the state-of-the-art systems, classical classifiers,
CNN, and the original CapsNet. Using the Arabic Emirati-accented corpus, our
results demonstrate that the proposed work yields average emotion recognition
accuracy of 89.3% compared to 84.7%, 82.2%, 69.8%, 69.2%, 53.8%, 42.6%, and
31.9% based on CapsNet, CNN, support vector machine, multi-layer perceptron,
k-nearest neighbor, radial basis function, and naive Bayes, respectively.
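The abstract identifies MFCC delta-delta as the best-performing feature extraction method. A minimal sketch of computing such features with librosa is given below; the number of coefficients and other parameters are illustrative assumptions, not the paper's exact settings.

```python
import librosa

def mfcc_delta_delta(path, n_mfcc=13):
    """Compute second-order MFCC deltas (delta-delta) for one audio file.

    n_mfcc is an illustrative choice, not the paper's configuration."""
    y, sr = librosa.load(path, sr=None)            # keep native sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)  # second-order deltas
    return delta2                                  # shape: (n_mfcc, n_frames)
```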
|
[
{
"version": "v1",
"created": "Sun, 26 Dec 2021 10:37:35 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Shahin",
"Ismail",
""
],
[
"Hindawi",
"Noor",
""
],
[
"Nassif",
"Ali Bou",
""
],
[
"Alhudhaif",
"Adi",
""
],
[
"Polat",
"Kemal",
""
]
] |
new_dataset
| 0.986672 |
2112.13369
|
Xing Wang
|
Xingqi Wang, Chaoyang Jiang, Shuxuan Sheng, Yanjie Xu, Yifei Jia
|
Stop Line Aided Cooperative Positioning of Connected Vehicles
| null | null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
This paper develops a stop line aided cooperative positioning framework for
connected vehicles, which utilizes the location of the stop line to enhance
positioning for a vehicular ad-hoc network (VANET) in
intersection scenarios via Vehicle-to-Vehicle (V2V) communication. Firstly, a
self-positioning correction scheme for the first stopped vehicle is presented,
which applies the stop-line information as a benchmark to correct the GNSS/INS
positioning results. Then, the local observations of each vehicle are fused
with the position estimates of other vehicles and the inter-vehicle distance
measurements by using an extended Kalman filter (EKF). In this way, the
benefits of the first stopped vehicle are extended to the whole VANET. Such a
cooperative inertial navigation (CIN) framework can greatly improve the
positioning performance of the VANET. Finally, experiments in Beijing show the
effectiveness of the proposed stop line aided cooperative positioning
framework.
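The fusion step relies on an extended Kalman filter. The sketch below shows a generic EKF measurement update of the kind such a cooperative scheme would run when a neighbour's position estimate or an inter-vehicle distance measurement arrives; the state layout and noise models are assumptions, not the paper's exact design.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One extended Kalman filter measurement update (textbook form).

    x, P : prior state estimate and covariance
    z    : measurement (e.g., inter-vehicle distance or neighbour position)
    h, H : measurement function and its Jacobian evaluated at x
    R    : measurement noise covariance"""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```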
|
[
{
"version": "v1",
"created": "Sun, 26 Dec 2021 12:27:05 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Wang",
"Xingqi",
""
],
[
"Jiang",
"Chaoyang",
""
],
[
"Sheng",
"Shuxuan",
""
],
[
"Xu",
"Yanjie",
""
],
[
"Jia",
"Yifei",
""
]
] |
new_dataset
| 0.997313 |
2112.13372
|
Ankush Chopra
|
Ankush Chopra, Mahima Arora, Shubham Pandey
|
Delivery Issues Identification from Customer Feedback Data
|
Accepted to be part of MLDS 2022, and will be Published in Lattice
journal
| null | null | null |
cs.CL cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Millions of packages are delivered successfully by online and local retail
stores across the world every day. The proper delivery of packages is needed to
ensure high customer satisfaction and repeat purchases. These deliveries suffer
various problems despite the best efforts from the stores. These issues happen
not only due to the large volume and high demand for low turnaround time but
also due to mechanical operations and natural factors. These issues range from
receiving wrong items in the package to delayed shipment to damaged packages
because of mishandling during transportation. Finding solutions to various
delivery issues faced by both sending and receiving parties plays a vital role
in increasing the efficiency of the entire process. This paper shows how to
find these issues using customer feedback from the text comments and uploaded
images. We used transfer learning for both Text and Image models to minimize
the demand for thousands of labeled examples. The results show that the model
can identify different types of delivery issues. Furthermore, it can also be
used for tasks like
bottleneck identification, process improvement, automating refunds, etc.
Compared with the existing process, the ensemble of text and image models
proposed in this paper ensures the identification of several types of delivery
issues, which is more suitable for the real-life scenarios of delivery of items
in retail businesses. This method can offer a new approach to issue detection
for the delivery of packages in similar industries.
|
[
{
"version": "v1",
"created": "Sun, 26 Dec 2021 12:41:10 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Chopra",
"Ankush",
""
],
[
"Arora",
"Mahima",
""
],
[
"Pandey",
"Shubham",
""
]
] |
new_dataset
| 0.971116 |
2112.13511
|
Bhavya Giri Goswami
|
Team Robocon, IIT Roorkee: Bhavya Giri Goswami, Aman Verma, Gautam
Jha, Vandan Gajjar, Vedant Neekhra, Utkarsh Deepak, Aayush Singh Chauhan
|
Design, Manufacturing, and Controls of a Prismatic Quadruped Robot:
PRISMA
|
14 pages, 16 figures, 4 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the quadrupeds developed are highly actuated, and their control is
hence quite cumbersome. They need advanced electronics equipment to solve
convoluted inverse kinematic equations continuously. In addition, they demand
special and costly sensors to autonomously navigate through the environment as
traditional distance sensors usually fail because of the continuous
perturbation due to the motion of the robot. Another challenge is maintaining
the continuous dynamic stability of the robot while walking, which requires
complicated and state-of-the-art control algorithms. This paper presents a
thorough description of the hardware design and control architecture of our
in-house prismatic joint quadruped robot called the PRISMA. We aim to forge a
robust and kinematically stable quadruped robot that can use elementary control
algorithms and utilize conventional sensors to navigate an unknown environment.
We discuss the benefits and limitations of the robot in terms of its motion,
different foot trajectories, manufacturability, and controls.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 04:58:13 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Robocon",
"Team",
""
],
[
"Roorkee",
"IIT",
""
],
[
":",
"",
""
],
[
"Goswami",
"Bhavya Giri",
""
],
[
"Verma",
"Aman",
""
],
[
"Jha",
"Gautam",
""
],
[
"Gajjar",
"Vandan",
""
],
[
"Neekhra",
"Vedant",
""
],
[
"Deepak",
"Utkarsh",
""
],
[
"Chauhan",
"Aayush Singh",
""
]
] |
new_dataset
| 0.987588 |
2112.13555
|
Pengcheng An
|
Pengcheng An, Ziqi Zhou, Qing Liu, Yifei Yin, Linghao Du, Da-Yuan
Huang, Jian Zhao
|
VibEmoji: Exploring User-authoring Multi-modal Emoticons in Social
Communication
|
To be published at ACM CHI '22
| null |
10.1145/3491102.3501940
| null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Emoticons are indispensable in online communications. With users' growing
needs for more customized and expressive emoticons, recent messaging
applications have begun to support (limited) multi-modal emoticons: e.g.,
enhancing
emoticons with animations or vibrotactile feedback. However, little empirical
knowledge has been accumulated concerning how people create, share and
experience multi-modal emoticons in everyday communication, and how to better
support them through design. To tackle this, we developed VibEmoji, a
user-authoring multi-modal emoticon interface for mobile messaging. Extending
existing designs, VibEmoji grants users greater flexibility to combine various
emoticons, vibrations, and animations on-the-fly, and offers non-aggressive
recommendations based on these components' emotional relevance. Using VibEmoji
as a probe, we conducted a four-week field study with 20 participants, to gain
new understandings from in-the-wild usage and experience, and extract
implications for design. We thereby contribute both a novel system and various
insights for supporting users' creation and communication of multi-modal
emoticons.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 07:50:02 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"An",
"Pengcheng",
""
],
[
"Zhou",
"Ziqi",
""
],
[
"Liu",
"Qing",
""
],
[
"Yin",
"Yifei",
""
],
[
"Du",
"Linghao",
""
],
[
"Huang",
"Da-Yuan",
""
],
[
"Zhao",
"Jian",
""
]
] |
new_dataset
| 0.998692 |
2112.13647
|
Borun Xu
|
Borun Xu, Biao Wang, Jiale Tao, Tiezheng Ge, Yuning Jiang, Wen Li,
Lixin Duan
|
Move As You Like: Image Animation in E-Commerce Scenario
|
3 pages, 3 figures, ACM MM 2021 demo session
|
Proceedings of the 29th ACM International Conference on
Multimedia. 2021: 2759-2761
|
10.1145/3474085.3478550
| null |
cs.GR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Creative image animations are attractive in e-commerce applications, where
motion transfer is one of the important ways to generate animations from static
images. However, existing methods rarely transfer motion to objects other than
human body or human face, and even fewer apply motion transfer in practical
scenarios. In this work, we apply motion transfer on the Taobao product images
in real e-commerce scenario to generate creative animations, which are more
attractive than static images and bring more benefits. For demonstration, we
animate Taobao products such as dolls, copper running horses, and toy dinosaurs
based on a motion transfer method.
|
[
{
"version": "v1",
"created": "Sun, 19 Dec 2021 06:41:10 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Xu",
"Borun",
""
],
[
"Wang",
"Biao",
""
],
[
"Tao",
"Jiale",
""
],
[
"Ge",
"Tiezheng",
""
],
[
"Jiang",
"Yuning",
""
],
[
"Li",
"Wen",
""
],
[
"Duan",
"Lixin",
""
]
] |
new_dataset
| 0.983147 |
2112.13659
|
Yin Jie
|
Jie Yin, Ang Li, Tao Li, Wenxian Yu, and Danping Zou
|
M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots
|
accepted by IEEE RA-L
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce M2DGR: a novel large-scale dataset collected by a ground robot
with a full sensor-suite including six fish-eye and one sky-pointing RGB
cameras, an infrared camera, an event camera, a Visual-Inertial Sensor
(VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade
Global Navigation Satellite System (GNSS) receiver and a GNSS-IMU navigation
system with real-time kinematic (RTK) signals. All those sensors were
well-calibrated and synchronized, and their data were recorded simultaneously.
The ground truth trajectories were obtained by the motion capture device, a
laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences
(about 1TB) captured in diverse scenarios including both indoor and outdoor
environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results
show that existing solutions perform poorly in some scenarios. For the benefit
of the research community, we make the dataset and tools public. The webpage of
our project is https://github.com/SJTU-ViSYS/M2DGR.
|
[
{
"version": "v1",
"created": "Sun, 19 Dec 2021 12:37:09 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Yin",
"Jie",
""
],
[
"Li",
"Ang",
""
],
[
"Li",
"Tao",
""
],
[
"Yu",
"Wenxian",
""
],
[
"Zou",
"Danping",
""
]
] |
new_dataset
| 0.999772 |
2112.13742
|
Salar Mohtaj
|
Vahid Zarrabi, Salar Mohtaj, Habibollah Asghari
|
Hamtajoo: A Persian Plagiarism Checker for Academic Manuscripts
| null | null | null | null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, due to the high availability of electronic documents through
the Web, plagiarism has become a serious challenge, especially among scholars.
Various plagiarism detection systems have been developed to prevent text re-use
and to confront plagiarism. Although it is relatively easy to detect duplicate
text in academic manuscripts, finding patterns of text re-use that have been
semantically changed is of great importance. Another important issue is dealing
with less-resourced languages, for which there is a low volume of text for
training purposes and NLP tools show low performance.
In this paper, we introduce Hamtajoo, a Persian plagiarism detection system for
academic manuscripts. Moreover, we describe the overall structure of the system
along with the algorithms used in each stage. In order to evaluate the
performance of the proposed system, we used a plagiarism detection corpus that
complies with the PAN standards.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 15:45:35 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Zarrabi",
"Vahid",
""
],
[
"Mohtaj",
"Salar",
""
],
[
"Asghari",
"Habibollah",
""
]
] |
new_dataset
| 0.999512 |
2112.13761
|
Nicol\'as Navarro-Guerrero
|
Lasse Emil R. Bonner, and Daniel Daugaard Buhl, and Kristian
Kristensen, and Nicol\'as Navarro-Guerrero
|
AU Dataset for Visuo-Haptic Object Recognition for Robots
| null | null |
10.6084/m9.figshare.14222486
| null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal object recognition is still an emerging field. Thus, publicly
available datasets are still rare and of small size. This dataset was developed
to help fill this void and presents multimodal data for 63 objects with some
visual and haptic ambiguity. The dataset contains visual, kinesthetic and
tactile (audio/vibrations) data. To completely solve sensory ambiguity, sensory
integration/fusion would be required. This report describes the creation and
structure of the dataset. The first section explains the underlying approach
used to capture the visual and haptic properties of the objects. The second
section describes the technical aspects (experimental setup) needed for the
collection of the data. The third section introduces the objects, while the
final section describes the structure and content of the dataset.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 16:15:11 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Bonner",
"Lasse Emil R.",
""
],
[
"Buhl",
"Daniel Daugaard",
""
],
[
"Kristensen",
"Kristian",
""
],
[
"Navarro-Guerrero",
"Nicolás",
""
]
] |
new_dataset
| 0.999723 |
2112.13762
|
John Lambert
|
John Lambert, Zhuang Liu, Ozan Sener, James Hays, Vladlen Koltun
|
MSeg: A Composite Dataset for Multi-domain Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present MSeg, a composite dataset that unifies semantic segmentation
datasets from different domains. A naive merge of the constituent datasets
yields poor performance due to inconsistent taxonomies and annotation
practices. We reconcile the taxonomies and bring the pixel-level annotations
into alignment by relabeling more than 220,000 object masks in more than 80,000
images, requiring more than 1.34 years of collective annotator effort. The
resulting composite dataset enables training a single semantic segmentation
model that functions effectively across domains and generalizes to datasets
that were not seen during training. We adopt zero-shot cross-dataset transfer
as a benchmark to systematically evaluate a model's robustness and show that
MSeg training yields substantially more robust models in comparison to training
on individual datasets or naive mixing of datasets without the presented
contributions. A model trained on MSeg ranks first on the WildDash-v1
leaderboard for robust semantic segmentation, with no exposure to WildDash data
during training. We evaluate our models in the 2020 Robust Vision Challenge
(RVC) as an extreme generalization experiment. MSeg training sets include only
three of the seven datasets in the RVC; more importantly, the evaluation
taxonomy of RVC is different and more detailed. Surprisingly, our model shows
competitive performance and ranks second. To evaluate how close we are to the
grand aim of robust, efficient, and complete scene understanding, we go beyond
semantic segmentation by training instance segmentation and panoptic
segmentation models using our dataset. Moreover, we also evaluate various
engineering design decisions and metrics, including resolution and
computational efficiency. Although our models are far from this grand aim, our
comprehensive evaluation is crucial for progress. We share all the models and
code with the community.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 16:16:35 GMT"
}
] | 2021-12-28T00:00:00 |
[
[
"Lambert",
"John",
""
],
[
"Liu",
"Zhuang",
""
],
[
"Sener",
"Ozan",
""
],
[
"Hays",
"James",
""
],
[
"Koltun",
"Vladlen",
""
]
] |
new_dataset
| 0.99956 |
2107.13647
|
Ahmed Elhagry
|
Ahmed Elhagry, Rawan Glalal Elrayes
|
Egyptian Sign Language Recognition Using CNN and LSTM
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Sign language is a set of gestures that deaf people use to communicate.
Unfortunately, most hearing people do not understand it, which creates a
communication gap that needs to be filled. Because of the variations in
Egyptian Sign Language (ESL) from one region to another, ESL provides a
challenging research problem. In this work, we provide applied research with a
video-based Egyptian sign language recognition system that serves the local
community of
deaf people in Egypt, with a moderate and reasonable accuracy. We present a
computer vision system with two different neural networks architectures. The
first is a Convolutional Neural Network (CNN) for extracting spatial features.
The CNN model was retrained on the Inception model. The second architecture is a
CNN followed by a Long Short-Term Memory (LSTM) for extracting both spatial and
temporal features. The two models achieved an accuracy of 90% and 72%,
respectively. We examined the power of these two architectures to distinguish
between 9 common words (with similar signs) among some deaf people community in
Egypt.
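A minimal sketch of the second architecture described above (a CNN feature extractor followed by an LSTM over frames, classifying 9 words) is given below. The layer sizes and the small convolutional backbone are assumptions for illustration; the paper uses an Inception model as the CNN.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM, then a 9-way classifier."""
    def __init__(self, feat_dim=512, hidden=256, num_classes=9):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame spatial features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                     # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)              # last hidden state
        return self.head(h[-1])                   # per-clip class logits
```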
|
[
{
"version": "v1",
"created": "Wed, 28 Jul 2021 21:33:35 GMT"
}
] | 2021-12-27T00:00:00 |
[
[
"Elhagry",
"Ahmed",
""
],
[
"Elrayes",
"Rawan Glalal",
""
]
] |
new_dataset
| 0.999821 |
2011.07252
|
Fanqing Lin
|
Fanqing Lin, Brian Price, Tony Martinez
|
Ego2Hands: A Dataset for Egocentric Two-hand Segmentation and Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Hand segmentation and detection in truly unconstrained RGB-based settings is
important for many applications. However, existing datasets are far from
sufficient in terms of size and variety due to the infeasibility of manual
annotation of large amounts of segmentation and detection data. As a result,
current methods are limited by many underlying assumptions such as constrained
environment, consistent skin color and lighting. In this work, we present
Ego2Hands, a large-scale RGB-based egocentric hand segmentation/detection
dataset that is semi-automatically annotated and a color-invariant
compositing-based data generation technique capable of creating training data
with large quantity and variety. For quantitative analysis, we manually
annotated an evaluation set that significantly exceeds existing benchmarks in
quantity, diversity and annotation accuracy. We provide cross-dataset
evaluation as well as thorough analysis on the performance of state-of-the-art
models on Ego2Hands to show that our dataset and data generation technique can
produce models that generalize to unseen environments without domain
adaptation.
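The compositing-based data generation idea can be illustrated with a simple alpha composite that pastes segmented hand pixels onto an arbitrary background; the color-invariance and other adaptations described in the abstract are omitted in this sketch.

```python
import numpy as np

def composite_hands(background, hand_rgb, hand_mask):
    """Create a synthetic training image and mask by compositing.

    background, hand_rgb : (H, W, 3) uint8 images of matching size
    hand_mask            : (H, W) soft mask in [0, 1] for the hand pixels"""
    alpha = hand_mask[..., None].astype(np.float32)
    image = alpha * hand_rgb + (1.0 - alpha) * background
    label = (hand_mask > 0.5).astype(np.uint8)    # binary ground-truth mask
    return image.astype(np.uint8), label
```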
|
[
{
"version": "v1",
"created": "Sat, 14 Nov 2020 10:12:35 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Nov 2020 05:04:14 GMT"
},
{
"version": "v3",
"created": "Mon, 29 Mar 2021 10:54:05 GMT"
},
{
"version": "v4",
"created": "Mon, 20 Dec 2021 10:37:48 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Lin",
"Fanqing",
""
],
[
"Price",
"Brian",
""
],
[
"Martinez",
"Tony",
""
]
] |
new_dataset
| 0.999243 |
2107.00676
|
Sumanth Doddapaneni
|
Sumanth Doddapaneni, Gowtham Ramesh, Mitesh M. Khapra, Anoop
Kunchukuttan, Pratyush Kumar
|
A Primer on Pretrained Multilingual Language Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multilingual Language Models (\MLLMs) such as mBERT, XLM, XLM-R,
\textit{etc.} have emerged as a viable option for bringing the power of
pretraining to a large number of languages. Given their success in zero-shot
transfer learning, there has emerged a large body of work in (i) building
bigger \MLLMs~covering a large number of languages (ii) creating exhaustive
benchmarks covering a wider variety of tasks and languages for evaluating
\MLLMs~ (iii) analysing the performance of \MLLMs~on monolingual, zero-shot
cross-lingual and bilingual tasks (iv) understanding the universal language
patterns (if any) learnt by \MLLMs~ and (v) augmenting the (often) limited
capacity of \MLLMs~ to improve their performance on seen or even unseen
languages. In this survey, we review the existing literature covering the above
broad areas of research pertaining to \MLLMs. Based on our survey, we recommend
some promising directions of future research.
|
[
{
"version": "v1",
"created": "Thu, 1 Jul 2021 18:01:46 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Dec 2021 09:51:27 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Doddapaneni",
"Sumanth",
""
],
[
"Ramesh",
"Gowtham",
""
],
[
"Khapra",
"Mitesh M.",
""
],
[
"Kunchukuttan",
"Anoop",
""
],
[
"Kumar",
"Pratyush",
""
]
] |
new_dataset
| 0.975847 |
2112.07064
|
Walter Hernandez
|
Niall Roche, Walter Hernandez, Eason Chen, J\'er\^ome Sim\'eon, Dan
Selman
|
Ergo -- a programming language for Smart Legal Contracts
| null | null | null | null |
cs.CY cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a smart legal contract platform to support a wide range of smart
legal contract use cases. We see this as a step towards improving existing
approaches to representing the complexity of legal agreements and executing
aspects of these agreements.
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 23:38:06 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Dec 2021 15:11:14 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Roche",
"Niall",
""
],
[
"Hernandez",
"Walter",
""
],
[
"Chen",
"Eason",
""
],
[
"Siméon",
"Jérôme",
""
],
[
"Selman",
"Dan",
""
]
] |
new_dataset
| 0.999777 |
2112.12182
|
Duo Wang
|
Duo Wang, Salah Karout
|
Fine-grained Multi-Modal Self-Supervised Learning
|
Accepted at BMVC 2021
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-Modal Self-Supervised Learning from videos has been shown to improve
model's performance on various downstream tasks. However, such Self-Supervised
pre-training requires large batch sizes and a large amount of computation
resources due to the noise present in the uncurated data. This is partly
because the prevalent training scheme operates in a coarse-grained setting, in
which vectors representing whole video clips or natural language sentences are
used for computing similarity. Such a scheme makes training noisy, as parts of
the video clips can be entirely uncorrelated with the other-modality input such
as the text description. In this paper, we propose a
fine-grained multi-modal self-supervised training scheme that computes the
similarity between embeddings at finer-scale (such as individual feature map
embeddings and embeddings of phrases), and uses attention mechanisms to reduce
noisy pairs' weighting in the loss function. We show that with the proposed
pre-training scheme, we can train smaller models, with smaller batch-size and
much less computational resources to achieve downstream tasks performances
comparable to State-Of-The-Art, for tasks including action recognition and
text-image retrievals.
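A rough sketch of the fine-grained similarity idea is shown below: token-level embeddings from the two modalities are compared pairwise and an attention weighting down-weights noisy pairs before aggregation. The actual attention mechanism and loss in the paper may differ; the function names and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def fine_grained_similarity(video_tokens, text_tokens, temperature=0.07):
    """Aggregate token-pair similarities with a softmax attention weighting.

    video_tokens : (B, Nv, D) feature-map embeddings
    text_tokens  : (B, Nt, D) phrase embeddings"""
    v = F.normalize(video_tokens, dim=-1)
    t = F.normalize(text_tokens, dim=-1)
    sim = torch.einsum("bvd,btd->bvt", v, t)           # token-pair similarities
    attn = torch.softmax(sim / temperature, dim=-1)     # emphasize relevant pairs
    # One score per (video, text) pair; uncorrelated pairs are weighted down.
    return (attn * sim).sum(dim=-1).mean(dim=-1)        # shape: (B,)
```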
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 19:17:45 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Wang",
"Duo",
""
],
[
"Karout",
"Salah",
""
]
] |
new_dataset
| 0.997518 |
2112.12193
|
Michael Zw\"olfer
|
Michael Zw\"olfer and Dieter Heinrich and Kurt Schindelwig and Bastian
Wandt and Helge Rhodin and Joerg Spoerri and Werner Nachbauer
|
Improved 2D Keypoint Detection in Out-of-Balance and Fall Situations --
combining input rotations and a kinematic model
|
extended abstract, 4 pages, 3 figures, 2 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Injury analysis may be one of the most beneficial applications of deep
learning based human pose estimation. To facilitate further research on this
topic, we provide an injury specific 2D dataset for alpine skiing, covering in
total 533 images. We further propose a post processing routine, that combines
rotational information with a simple kinematic model. We could improve
detection results in fall situations by up to 21% regarding the PCK@0.2 metric.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 19:49:09 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Zwölfer",
"Michael",
""
],
[
"Heinrich",
"Dieter",
""
],
[
"Schindelwig",
"Kurt",
""
],
[
"Wandt",
"Bastian",
""
],
[
"Rhodin",
"Helge",
""
],
[
"Spoerri",
"Joerg",
""
],
[
"Nachbauer",
"Werner",
""
]
] |
new_dataset
| 0.998309 |
2112.12232
|
William Buchanan Prof
|
Nilupulee A. Gunathilake, Ahmed Al-Dubai, William J. Buchanan, Owen Lo
|
Electromagnetic Side-Channel Attack Resilience against PRESENT
Lightweight Block Cipher
| null |
2022 IEEE 6th International Conference on Cryptography, Security
and Privacy (CSP 2022)
| null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Lightweight cryptography is a novel diversion from conventional cryptography
that targets internet-of-things (IoT) platform due to resource constraints. In
comparison, it offers smaller cryptographic primitives such as shorter key
sizes, block sizes and lesser energy drainage. The main focus can be seen in
algorithm developments in this emerging subject. Thus, verification is carried
out based upon theoretical (mathematical) proofs mostly. Among the few
available side-channel analysis studies found in literature, the highest
percentage is taken by power attacks. PRESENT is a promising lightweight block
cipher to be included in IoT devices in the near future. Thus, the emphasis of
this paper is on lightweight cryptology, and our investigation shows that no
correlation electromagnetic analysis (CEMA) of it is available in the
literature. Hence,
in an effort to fill in this research gap, we opted to investigate the
capabilities of CEMA against the PRESENT algorithm. This work aims to determine
the probability of secret key leakage with a minimum number of electromagnetic
(EM) waveforms possible. The process initially started from a simple EM
analysis (SEMA) and gradually enhanced up to a CEMA. This paper presents our
methodology in attack modelling, current results that indicate a probability of
leaking seven bytes of the key and upcoming plans for optimisation. In
addition, introductions to lightweight cryptanalysis and theories of EMA are
also included.
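The core of a correlation EM analysis can be sketched as Pearson-correlating a leakage hypothesis for each key guess against the recorded EM traces and ranking guesses by the strongest correlation peak. The leakage model and trace layout below are generic assumptions, not the paper's PRESENT-specific attack model.

```python
import numpy as np

def cema_rank_key_guesses(traces, hypotheses):
    """Rank key guesses by maximum absolute Pearson correlation.

    traces     : (n_traces, n_samples) measured EM waveforms
    hypotheses : (n_traces, n_guesses) predicted leakage per key guess"""
    t = traces - traces.mean(axis=0)
    h = hypotheses - hypotheses.mean(axis=0)
    num = h.T @ t                                      # (n_guesses, n_samples)
    den = np.outer(np.linalg.norm(h, axis=0),
                   np.linalg.norm(t, axis=0)) + 1e-12
    corr = num / den                                   # Pearson correlations
    return np.argsort(-np.abs(corr).max(axis=1))       # best guess first
```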
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 21:26:39 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Gunathilake",
"Nilupulee A.",
""
],
[
"Al-Dubai",
"Ahmed",
""
],
[
"Buchanan",
"William J.",
""
],
[
"Lo",
"Owen",
""
]
] |
new_dataset
| 0.995193 |
2112.12409
|
Estefania Talavera
|
Andreea Glavan, Estefania Talavera
|
InstaIndoor and Multi-modal Deep Learning for Indoor Scene Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Indoor scene recognition is a growing field with great potential for
behaviour understanding, robot localization, and elderly monitoring, among
others. In this study, we approach the task of scene recognition from a novel
standpoint, using multi-modal learning and video data gathered from social
media. The accessibility and variety of social media videos can provide
realistic data for modern scene recognition techniques and applications. We
propose a model based on fusion of transcribed speech to text and visual
features, which is used for classification on a novel dataset of social media
videos of indoor scenes named InstaIndoor. Our model achieves up to 70%
accuracy and 0.7 F1-Score. Furthermore, we highlight the potential of our
approach by benchmarking on a YouTube-8M subset of indoor scenes as well, where
it achieves 74% accuracy and 0.74 F1-Score. We hope the contributions of this
work pave the way to novel research in the challenging field of indoor scene
recognition.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 08:11:22 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Glavan",
"Andreea",
""
],
[
"Talavera",
"Estefania",
""
]
] |
new_dataset
| 0.998956 |
2112.12489
|
Mika H\"am\"al\"ainen
|
Quan Duong, Mika H\"am\"al\"ainen, Khalid Alnajjar
|
TFW2V: An Enhanced Document Similarity Method for the Morphologically
Rich Finnish Language
|
Workshop on Natural Language Processing for Digital Humanities
(NLP4DH)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Measuring the semantic similarity of different texts has many important
applications in Digital Humanities research such as information retrieval,
document clustering and text summarization. The performance of different
methods depends on the length of the text, the domain and the language. This
study focuses on experimenting with some of the current approaches to Finnish,
which is a morphologically rich language. At the same time, we propose a simple
method, TFW2V, which shows high efficiency in handling both long text documents
and limited amounts of data. Furthermore, we design an objective evaluation
method which can be used as a framework for benchmarking text similarity
approaches.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 12:27:45 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Duong",
"Quan",
""
],
[
"Hämäläinen",
"Mika",
""
],
[
"Alnajjar",
"Khalid",
""
]
] |
new_dataset
| 0.962866 |
2112.12595
|
Mubin Ul Haque
|
Mubin Ul Haque, M. Mehdi Kholoosi, and M. Ali Babar
|
KGSecConfig: A Knowledge Graph Based Approach for Secured Container
Orchestrator Configuration
| null | null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Container Orchestrator (CO) is a vital technology for managing clusters of
containers, which may form a virtualized infrastructure for developing and
operating software systems. Like any other software system, securing CO is
critical, but can be a quite challenging task due to the large number of
configurable options. Manual configuration is not only knowledge-intensive and
time-consuming, but also error-prone. For automating security configuration of
CO, we propose a novel Knowledge Graph based Security Configuration,
KGSecConfig, approach. Our solution leverages keyword and learning models to
systematically capture, link, and correlate heterogeneous and multi-vendor
configuration space in a unified structure for supporting automation of
security configuration of CO. We implement KGSecConfig on Kubernetes, Docker,
Azure, and VMWare to build secured configuration knowledge graph. Our
evaluation results show 0.98 and 0.94 accuracy for keyword and learning-based
secured configuration option and concept extraction, respectively. We also
demonstrate the utilization of the knowledge graph for automated
misconfiguration mitigation in a Kubernetes cluster. We assert that our
knowledge graph based approach can help in addressing several challenges, e.g.,
misconfiguration of security, associated with manually configuring the security
of CO.
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 07:40:27 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Haque",
"Mubin Ul",
""
],
[
"Kholoosi",
"M. Mehdi",
""
],
[
"Babar",
"M. Ali",
""
]
] |
new_dataset
| 0.988605 |
2112.12610
|
Xiaolin Chai
|
Pengchuan Xiao, Zhenlei Shao, Steven Hao, Zishuo Zhang, Xiaolin Chai,
Judy Jiao, Zesong Li, Jian Wu, Kai Sun, Kun Jiang, Yunlong Wang, Diange Yang
|
PandaSet: Advanced Sensor Suite Dataset for Autonomous Driving
|
This paper has been published on ITSC'2021, please check the website
of the PandaSet for more information: https://pandaset.org/
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The accelerating development of autonomous driving technology has placed
greater demands on obtaining large amounts of high-quality data.
Representative, labeled, real world data serves as the fuel for training deep
learning networks, critical for improving self-driving perception algorithms.
In this paper, we introduce PandaSet, the first dataset produced by a complete,
high-precision autonomous vehicle sensor kit with a no-cost commercial license.
The dataset was collected using one 360{\deg} mechanical spinning LiDAR, one
forward-facing, long-range LiDAR, and 6 cameras. The dataset contains more than
100 scenes, each of which is 8 seconds long, and provides 28 types of labels
for object classification and 37 types of labels for semantic segmentation. We
provide baselines for LiDAR-only 3D object detection, LiDAR-camera fusion 3D
object detection and LiDAR point cloud segmentation. For more details about
PandaSet and the development kit, see https://scale.com/open-datasets/pandaset.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 14:52:12 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Xiao",
"Pengchuan",
""
],
[
"Shao",
"Zhenlei",
""
],
[
"Hao",
"Steven",
""
],
[
"Zhang",
"Zishuo",
""
],
[
"Chai",
"Xiaolin",
""
],
[
"Jiao",
"Judy",
""
],
[
"Li",
"Zesong",
""
],
[
"Wu",
"Jian",
""
],
[
"Sun",
"Kai",
""
],
[
"Jiang",
"Kun",
""
],
[
"Wang",
"Yunlong",
""
],
[
"Yang",
"Diange",
""
]
] |
new_dataset
| 0.999787 |
2112.12638
|
Ghislain Fourny
|
Ghislain Fourny and David Dao and Can Berker Cikis and Ce Zhang and
Gustavo Alonso
|
RumbleML: program the lakehouse with JSONiq
|
8 pages + references
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lakehouse systems have reached unprecedented size and heterogeneity in the past
few years and have been embraced by many industry players. However, they
are often difficult to use as they lack the declarative language and
optimization possibilities of relational engines. This paper introduces
RumbleML, a high-level, declarative library integrated into the RumbleDB engine
and with the JSONiq language. RumbleML allows using a single platform for data
cleaning, data preparation, training, and inference, as well as management of
models and results. It does it using a purely declarative language (JSONiq) for
all these tasks and without any performance loss over existing platforms (e.g.
Spark). The key insights of the design of RumbleML are that training sets,
evaluation sets, and test sets can be represented as homogeneous sequences of
flat objects; that models can be seamlessly embodied in function items mapping
input test sets into prediction-augmented result sets; and that estimators can
be seamlessly embodied in function items mapping input training sets to models.
We argue that this makes JSONiq a viable and seamless programming language for
data lakehouses across all their features, whether database-related or
machine-learning-related. While lakehouses bring Machine Learning and Data
Wrangling on the same platform, RumbleML also brings them to the same language,
JSONiq. In the paper, we present the first prototype and compare its
performance to Spark showing the benefit of a huge functionality and
productivity gain for cleaning up, normalizing, validating data, feeding it
into Machine Learning pipelines, and analyzing the output, all within the same
system and language and at scale.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 15:24:30 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Fourny",
"Ghislain",
""
],
[
"Dao",
"David",
""
],
[
"Cikis",
"Can Berker",
""
],
[
"Zhang",
"Ce",
""
],
[
"Alonso",
"Gustavo",
""
]
] |
new_dataset
| 0.999606 |
2112.12668
|
Piotr Koniusz
|
Lei Wang, Jun Liu, Piotr Koniusz
|
3D Skeleton-based Few-shot Action Recognition with JEANIE is not so
Na\"ive
|
Full 17 page version
| null | null | null |
cs.CV cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a Few-shot Learning pipeline for 3D skeleton-based
action recognition by Joint tEmporal and cAmera viewpoiNt alIgnmEnt (JEANIE).
To factor out misalignment between query and support sequences of 3D body
joints, we propose an advanced variant of Dynamic Time Warping which jointly
models each smooth path between the query and support frames to achieve
simultaneously the best alignment in the temporal and simulated camera
viewpoint spaces for end-to-end learning under the limited few-shot training
data. Sequences are encoded with a temporal block encoder based on Simple
Spectral Graph Convolution, a lightweight linear Graph Neural Network backbone
(we also include a setting with a transformer). Finally, we propose a
similarity-based loss which encourages the alignment of sequences of the same
class while preventing the alignment of unrelated sequences. We demonstrate
state-of-the-art results on NTU-60, NTU-120, Kinetics-skeleton and UWA3D
Multiview Activity II.
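For reference, the classical Dynamic Time Warping baseline that JEANIE extends is sketched below. JEANIE additionally warps over simulated camera viewpoints and is designed for end-to-end learning, which this plain version does not capture.

```python
import numpy as np

def dtw_distance(query, support):
    """Plain DTW between two per-frame embedding sequences.

    query, support : (Tq, D) and (Ts, D) arrays"""
    tq, ts = len(query), len(support)
    cost = np.full((tq + 1, ts + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, tq + 1):
        for j in range(1, ts + 1):
            d = np.linalg.norm(query[i - 1] - support[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[tq, ts]
```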
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 16:09:23 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Wang",
"Lei",
""
],
[
"Liu",
"Jun",
""
],
[
"Koniusz",
"Piotr",
""
]
] |
new_dataset
| 0.976618 |
2112.12678
|
Itai Boneh
|
Amihood Amir, Itai Boneh
|
Dynamic Suffix Array with Sub-linear update time and Poly-logarithmic
Lookup Time
| null | null | null | null |
cs.DS
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The Suffix Array $SA_S[1\ldots n]$ of an $n$-length string $S$ is a
lexicographically sorted array of the suffixes of $S$. The suffix array is one
of the most well known and widely used data structures in string algorithms. We
present a data structure for maintaining a representation of the suffix array
of a dynamic string which undergoes symbol substitutions, deletions, and
insertions.
For every string manipulation, our data structure can be updated in
$O(n^{\frac{2}{3}})$ time (ignoring multiplicative polylogarithmic factors)
with $n$ being the current length of the string. For an input query $i\in
[1\ldots n]$, our data structure reports $SA_S[i]$ in $O(\log^5(n))$ time.
We also present a faster data structure, with $O(\sqrt{n})$ update time
(ignoring multiplicative polylogarithmic factors), for maintaining the Inverted
Suffix Array of a dynamic string undergoing symbol substitutions updates. For
an input query $i\in [1\ldots n]$, our data structure reports the $i$'th entry
in the inverted suffix array in $O(\log^4(n))$ time.
Our data structures can be used to obtain sub-linear dynamic algorithms for
several classical string problems for which efficient dynamic solutions were
not previously known.
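To fix notation, a naive static construction of the suffix array and its inverse is sketched below. The paper's contribution is maintaining these structures under substitutions, insertions, and deletions with sub-linear update time, which this static sketch does not attempt.

```python
def suffix_array(s):
    """Naive static suffix array: indices of suffixes in lexicographic order."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def inverse_suffix_array(sa):
    """ISA[j] = rank of the suffix starting at position j."""
    isa = [0] * len(sa)
    for rank, start in enumerate(sa):
        isa[start] = rank
    return isa

# Example: suffix_array("banana") == [5, 3, 1, 0, 4, 2]
```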
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 16:14:35 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Amir",
"Amihood",
""
],
[
"Boneh",
"Itai",
""
]
] |
new_dataset
| 0.998488 |
2112.12702
|
Massimiliano Corsini
|
Gaia Pavoni and Massimiliano Corsini and Federico Ponchio and
Alessandro Muntoni and Paolo Cignoni
|
TagLab: A human-centric AI system for interactive semantic segmentation
|
Accepted at Human Centered AI workshop at NeurIPS 2021,
https://sites.google.com/view/hcai-human-centered-ai-neurips/home
| null | null | null |
cs.CV cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Fully automatic semantic segmentation of highly specific semantic classes and
complex shapes may not meet the accuracy standards demanded by scientists. In
such cases, human-centered AI solutions, able to assist operators while
preserving human control over complex tasks, are a good trade-off to speed up
image labeling while maintaining high accuracy levels. TagLab is an open-source
AI-assisted software for annotating large orthoimages which takes advantage of
different degrees of automation; it speeds up image annotation from scratch
through assisted tools, creates custom fully automatic semantic segmentation
models, and, finally, allows the quick edits of automatic predictions. Since
the orthoimages analysis applies to several scientific disciplines, TagLab has
been designed with a flexible labeling pipeline. We report our results in two
different scenarios, marine ecology, and architectural heritage.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 16:50:06 GMT"
}
] | 2021-12-24T00:00:00 |
[
[
"Pavoni",
"Gaia",
""
],
[
"Corsini",
"Massimiliano",
""
],
[
"Ponchio",
"Federico",
""
],
[
"Muntoni",
"Alessandro",
""
],
[
"Cignoni",
"Paolo",
""
]
] |
new_dataset
| 0.992544 |
1903.05256
|
David Eppstein
|
David Eppstein
|
Cubic Planar Graphs that cannot be Drawn on few Lines
|
15 pages, 10 figures. To appear in Proceedings of the 35th
International Symposium on Computational Geometry (SoCG 2019)
|
J. Computational Geometry 12 (1): 178-197, 2021
|
10.20382/v12i1a8
| null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
For every integer $\ell$, we construct a cubic 3-vertex-connected planar
bipartite graph $G$ with $O(\ell^3)$ vertices such that there is no planar
straight-line drawing of $G$ whose vertices all lie on $\ell$ lines. This
strengthens previous results on graphs that cannot be drawn on few lines, which
constructed significantly larger maximal planar graphs. We also find apex-trees
and cubic bipartite series-parallel graphs that cannot be drawn on a bounded
number of lines.
|
[
{
"version": "v1",
"created": "Tue, 12 Mar 2019 23:23:06 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Eppstein",
"David",
""
]
] |
new_dataset
| 0.999171 |
1906.09239
|
Mohammadreza Kasaei
|
Mohammadreza Kasaei, Nuno Lau, Artur Pereira
|
A Robust Biped Locomotion Based on Linear-Quadratic-Gaussian Controller
and Divergent Component of Motion
| null | null |
10.1109/IROS40897.2019.8967778
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating robust locomotion for a humanoid robot in the presence of
disturbances is difficult because of its high number of degrees of freedom and
its unstable nature. In this paper, we use the concept of the Divergent Component
of Motion~(DCM) and propose an optimal closed-loop controller based on
Linear-Quadratic-Gaussian to generate a robust and stable walking for humanoid
robots. The biped robot dynamics has been approximated using the Linear
Inverted Pendulum Model~(LIPM). Moreover, we propose a controller to adjust the
landing location of the swing leg to increase the withstanding level of the
robot against a severe external push. The performance and also the robustness
of the proposed controller is analyzed and verified by performing a set of
simulations using~\mbox{MATLAB}. The simulation results showed that the
proposed controller is capable of providing a robust walking even in the
presence of disturbances and in challenging situations.
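The LIPM/DCM dynamics underlying the controller can be illustrated with a small forward simulation: the DCM diverges away from the ZMP while the CoM converges towards the DCM. The LQG gains and the step-adjustment controller are not reproduced here; this only sketches the underlying model.

```python
import numpy as np

def simulate_lipm_dcm(x0, dcm0, zmp, omega, dt, steps):
    """Forward-simulate the planar LIPM via its Divergent Component of Motion.

    Dynamics: dcm' = omega * (dcm - zmp), x' = omega * (dcm - x),
    with omega = sqrt(g / z_c). All inputs here are scalars for simplicity."""
    x, dcm, traj = x0, dcm0, []
    for _ in range(steps):
        dcm = dcm + dt * omega * (dcm - zmp)   # DCM diverges away from the ZMP
        x = x + dt * omega * (dcm - x)         # CoM converges towards the DCM
        traj.append((x, dcm))
    return np.array(traj)
```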
|
[
{
"version": "v1",
"created": "Fri, 21 Jun 2019 16:59:32 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Kasaei",
"Mohammadreza",
""
],
[
"Lau",
"Nuno",
""
],
[
"Pereira",
"Artur",
""
]
] |
new_dataset
| 0.99296 |
1912.03879
|
Tuomo Hiippala
|
Tuomo Hiippala and Malihe Alikhani and Jonas Haverinen and Timo
Kalliokoski and Evanfiya Logacheva and Serafina Orekhova and Aino Tuomainen
and Matthew Stone and John A. Bateman
|
AI2D-RST: A multimodal corpus of 1000 primary school science diagrams
|
24 pages; revised version submitted to Language Resources &
Evaluation
|
Language Resources and Evaluation 55(3), 2021, pp. 661-688
|
10.1007/s10579-020-09517-1
| null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article introduces AI2D-RST, a multimodal corpus of 1000
English-language diagrams that represent topics in primary school natural
sciences, such as food webs, life cycles, moon phases and human physiology. The
corpus is based on the Allen Institute for Artificial Intelligence Diagrams
(AI2D) dataset, a collection of diagrams with crowd-sourced descriptions, which
was originally developed to support research on automatic diagram understanding
and visual question answering. Building on the segmentation of diagram layouts
in AI2D, the AI2D-RST corpus presents a new multi-layer annotation schema that
provides a rich description of their multimodal structure. Annotated by trained
experts, the layers describe (1) the grouping of diagram elements into
perceptual units, (2) the connections set up by diagrammatic elements such as
arrows and lines, and (3) the discourse relations between diagram elements,
which are described using Rhetorical Structure Theory (RST). Each annotation
layer in AI2D-RST is represented using a graph. The corpus is freely available
for research and teaching.
|
[
{
"version": "v1",
"created": "Mon, 9 Dec 2019 07:22:54 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Mar 2020 10:03:17 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Hiippala",
"Tuomo",
""
],
[
"Alikhani",
"Malihe",
""
],
[
"Haverinen",
"Jonas",
""
],
[
"Kalliokoski",
"Timo",
""
],
[
"Logacheva",
"Evanfiya",
""
],
[
"Orekhova",
"Serafina",
""
],
[
"Tuomainen",
"Aino",
""
],
[
"Stone",
"Matthew",
""
],
[
"Bateman",
"John A.",
""
]
] |
new_dataset
| 0.999744 |
2105.01052
|
Tuomo Hiippala
|
Tuomo Hiippala
|
Applied Language Technology: NLP for the Humanities
|
Accepted to the 5th Workshop on Teaching NLP at NAACL-HLT 2021
|
Proceedings of the Fifth Workshop on Teaching NLP, 2021, pp. 46-48
|
10.18653/v1/2021.teachingnlp-1.5
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This contribution describes a two-course module that seeks to provide
humanities majors with a basic understanding of language technology and its
applications using Python. The learning materials consist of interactive
Jupyter Notebooks and accompanying YouTube videos, which are openly available
with a Creative Commons licence.
|
[
{
"version": "v1",
"created": "Mon, 3 May 2021 17:51:17 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Hiippala",
"Tuomo",
""
]
] |
new_dataset
| 0.998954 |
2105.11578
|
Yi Zhang
|
Yi Zhang, Lu Zhang, Kang Wang, Wassim Hamidouche, Olivier Deforges
|
SHD360: A Benchmark Dataset for Salient Human Detection in 360{\deg}
Videos
|
21 pages, 13 figures, 5 tables; Project page:
https://github.com/PanoAsh/SHD360; Technical report
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Salient human detection (SHD) in dynamic 360{\deg} immersive videos is of
great importance for various applications such as robotics, inter-human and
human-object interaction in augmented reality. However, 360{\deg} video SHD has
been seldom discussed in the computer vision community due to a lack of
datasets with large-scale omnidirectional videos and rich annotations. To this
end, we propose SHD360, the first 360{\deg} video SHD dataset which contains
various real-life daily scenes. Since so far there is no method proposed for
360{\deg} image/video SHD, we systematically benchmark 11 representative
state-of-the-art salient object detection (SOD) approaches on our SHD360, and
explore key issues derived from extensive experimenting results. We hope our
proposed dataset and benchmark could serve as a good starting point for
advancing human-centric researches towards 360{\deg} panoramic data. The
dataset is available at https://github.com/PanoAsh/SHD360.
|
[
{
"version": "v1",
"created": "Mon, 24 May 2021 23:51:29 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Jun 2021 14:54:53 GMT"
},
{
"version": "v3",
"created": "Sat, 31 Jul 2021 13:18:23 GMT"
},
{
"version": "v4",
"created": "Fri, 6 Aug 2021 16:30:25 GMT"
},
{
"version": "v5",
"created": "Thu, 23 Sep 2021 20:31:20 GMT"
},
{
"version": "v6",
"created": "Tue, 7 Dec 2021 10:02:04 GMT"
},
{
"version": "v7",
"created": "Wed, 22 Dec 2021 11:07:40 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Zhang",
"Yi",
""
],
[
"Zhang",
"Lu",
""
],
[
"Wang",
"Kang",
""
],
[
"Hamidouche",
"Wassim",
""
],
[
"Deforges",
"Olivier",
""
]
] |
new_dataset
| 0.999836 |
2107.11414
|
Lucas Gris
|
Lucas Rafael Stefanel Gris, Edresson Casanova, Frederico Santos de
Oliveira, Anderson da Silva Soares, Arnaldo Candido Junior
|
Brazilian Portuguese Speech Recognition Using Wav2vec 2.0
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning techniques have been shown to be efficient in various tasks,
especially in the development of speech recognition systems, that is, systems
that aim to transcribe an audio sentence in a sequence of written words.
Despite the progress in the area, speech recognition can still be considered
difficult, especially for languages lacking available data, such as Brazilian
Portuguese (BP). In this sense, this work presents the development of a public
Automatic Speech Recognition (ASR) system using only open available audio data,
from the fine-tuning of the Wav2vec 2.0 XLSR-53 model pre-trained in many
languages, over BP data. The final model presents an average word error rate of
12.4% over 7 different datasets (10.5% when applying a language model).
According to our knowledge, the obtained error is the lowest among open
end-to-end (E2E) ASR models for BP.
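A minimal inference sketch with a fine-tuned wav2vec 2.0 CTC model from the transformers library is shown below. The checkpoint name is a placeholder (the base XLSR-53 model itself has no BP-trained CTC head); in practice the authors' fine-tuned checkpoint would be loaded, and the optional language model rescoring is omitted.

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def transcribe(waveform_16khz, model_name="your-finetuned-bp-wav2vec2"):
    """Greedy CTC decoding of a 16 kHz waveform (1-D float array)."""
    processor = Wav2Vec2Processor.from_pretrained(model_name)
    model = Wav2Vec2ForCTC.from_pretrained(model_name)
    inputs = processor(waveform_16khz, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)          # most likely token per frame
    return processor.batch_decode(ids)[0]       # collapse repeats and blanks
```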
|
[
{
"version": "v1",
"created": "Fri, 23 Jul 2021 18:54:39 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Nov 2021 18:09:38 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Dec 2021 16:29:54 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Gris",
"Lucas Rafael Stefanel",
""
],
[
"Casanova",
"Edresson",
""
],
[
"de Oliveira",
"Frederico Santos",
""
],
[
"Soares",
"Anderson da Silva",
""
],
[
"Junior",
"Arnaldo Candido",
""
]
] |
new_dataset
| 0.994194 |
2107.11673
|
Hanchen Ye
|
Hanchen Ye, Cong Hao, Jianyi Cheng, Hyunmin Jeong, Jack Huang, Stephen
Neuendorffer, Deming Chen
|
ScaleHLS: A New Scalable High-Level Synthesis Framework on Multi-Level
Intermediate Representation
|
Accepted as a conference paper at HPCA'22
| null | null | null |
cs.PL cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
High-level synthesis (HLS) has been widely adopted as it significantly
improves the hardware design productivity and enables efficient design space
exploration (DSE). Existing HLS tools are built using compiler infrastructures
largely based on a single-level abstraction, such as LLVM. However, as HLS
designs typically come with intrinsic structural or functional hierarchies,
different HLS optimization problems are often better solved with different
levels of abstractions. This paper proposes ScaleHLS, a new scalable and
customizable HLS framework, on top of a multi-level compiler infrastructure
called MLIR. ScaleHLS represents HLS designs at multiple representation levels
and provides an HLS-dedicated analysis and transform library to solve the
optimization problems at the suitable levels. Using this library, we provide a
DSE engine to generate optimized HLS designs automatically. In addition, we
develop an HLS C front-end and a C/C++ emission back-end to translate HLS
designs into/from MLIR for enabling an end-to-end compilation flow.
Experimental results show that, comparing to the baseline designs without
manual directives insertion and code-rewriting, that are only optimized by
Xilinx Vivado HLS, ScaleHLS improves the performances with amazing
quality-of-results -- up to 768.1x better on computation kernel level programs
and up to 3825.0x better on neural network models.
|
[
{
"version": "v1",
"created": "Sat, 24 Jul 2021 19:20:23 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Aug 2021 17:05:39 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Nov 2021 18:17:49 GMT"
},
{
"version": "v4",
"created": "Wed, 22 Dec 2021 07:11:50 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Ye",
"Hanchen",
""
],
[
"Hao",
"Cong",
""
],
[
"Cheng",
"Jianyi",
""
],
[
"Jeong",
"Hyunmin",
""
],
[
"Huang",
"Jack",
""
],
[
"Neuendorffer",
"Stephen",
""
],
[
"Chen",
"Deming",
""
]
] |
new_dataset
| 0.975044 |
2108.03530
|
Pei Peng
|
Pei Peng and Emina Soljanin
|
Covert, Low-Delay, Coded Message Passing in Mobile (IoT) Networks
|
Made some revisions, added some future directions, and corrected some
typos
| null | null | null |
cs.IT cs.MA math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a gossip-like protocol for covert message passing between Alice
and Bob as they move in an area watched over by a warden Willie. The area hosts
a multitude of Internet of (Battlefield) Things (Io\b{eta}T) objects. Alice and
Bob perform random walks on a random regular graph. The Io\b{eta}T objects
reside on the vertices of this graph, and some can serve as relays between
Alice and Bob. The protocol starts with Alice splitting her message into small
chunks, which she can covertly deposit to the relays she encounters. The
protocol ends with Bob collecting the chunks. Alice may encode her data before
the dissemination. Willie can either perform random walks as Alice and Bob do
or conduct uniform surveillance of the area. In either case, he can only
observe one relay at a time. We evaluate the system performance by the
covertness probability and the message passing delay. In our protocol, Alice
splits her message to increase the covertness probability and adds (coded)
redundancy to reduce the transmission delay. The performance metrics depend on
the graph, communications delay, and code parameters. We show that, in most
scenarios, it is impossible to find the design parameters that simultaneously
maximize the covertness probability and minimize the message delay.
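A toy simulation of the message-passing delay can be built from random walks on a random regular graph: Alice deposits chunks at relays she visits and Bob collects them on his own walk. The warden and the coding redundancy are omitted, and all parameter values are illustrative assumptions.

```python
import random
import networkx as nx

def message_delay(n_nodes=200, degree=4, n_chunks=8, max_steps=5000, seed=0):
    """Return the number of steps until Bob has collected every chunk."""
    rng = random.Random(seed)
    g = nx.random_regular_graph(degree, n_nodes, seed=seed)
    alice, bob = rng.choice(list(g)), rng.choice(list(g))
    deposited, collected = {}, set()          # relay -> chunk id, chunks seen
    for step in range(1, max_steps + 1):
        alice = rng.choice(list(g.neighbors(alice)))
        bob = rng.choice(list(g.neighbors(bob)))
        if len(deposited) < n_chunks:
            deposited.setdefault(alice, len(deposited))  # drop next chunk here
        if bob in deposited:
            collected.add(deposited[bob])                # Bob picks it up
        if len(collected) == n_chunks:
            return step                                  # message delay
    return None                                          # not delivered in time
```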
|
[
{
"version": "v1",
"created": "Sat, 7 Aug 2021 22:14:46 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Dec 2021 21:45:45 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Peng",
"Pei",
""
],
[
"Soljanin",
"Emina",
""
]
] |
new_dataset
| 0.965936 |
2109.08267
|
Chris Cummins
|
Chris Cummins, Bram Wasti, Jiadong Guo, Brandon Cui, Jason Ansel,
Sahir Gomez, Somya Jain, Jia Liu, Olivier Teytaud, Benoit Steiner, Yuandong
Tian, Hugh Leather
|
CompilerGym: Robust, Performant Compiler Optimization Environments for
AI Research
|
12 pages. Source code available at
https://github.com/facebookresearch/CompilerGym
| null | null | null |
cs.PL cs.AI cs.LG cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
Interest in applying Artificial Intelligence (AI) techniques to compiler
optimizations is increasing rapidly, but compiler research has a high entry
barrier. Unlike in other domains, compiler and AI researchers do not have
access to the datasets and frameworks that enable fast iteration and
development of ideas, and getting started requires a significant engineering
investment. What is needed is an easy, reusable experimental infrastructure for
real world compiler optimization tasks that can serve as a common benchmark for
comparing techniques, and as a platform to accelerate progress in the field.
We introduce CompilerGym, a set of environments for real world compiler
optimization tasks, and a toolkit for exposing new optimization tasks to
compiler researchers. CompilerGym enables anyone to experiment on production
compiler optimization problems through an easy-to-use package, regardless of
their experience with compilers. We build upon the popular OpenAI Gym interface
enabling researchers to interact with compilers using Python and a familiar
API.
We describe the CompilerGym architecture and implementation, characterize the
optimization spaces and computational efficiencies of three included compiler
environments, and provide extensive empirical evaluations. Compared to prior
works, CompilerGym offers larger datasets and optimization spaces, is 27x more
computationally efficient, is fault-tolerant, and is capable of detecting
reproducibility bugs in the underlying compilers.
In making it easy for anyone to experiment with compilers - irrespective of
their background - we aim to accelerate progress in the AI and compiler
research domains.
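Because CompilerGym exposes the familiar OpenAI Gym interface, a basic interaction follows the usual reset/step cycle. The sketch below is illustrative only; the environment id ("llvm-autophase-ic-v0") and benchmark name ("cbench-v1/qsort") are assumptions about the installed package rather than guaranteed identifiers.

```python
# A minimal Gym-style episode with a CompilerGym environment (illustrative sketch).
import compiler_gym  # registers the compiler environments with Gym

env = compiler_gym.make("llvm-autophase-ic-v0")   # assumed environment id
env.reset(benchmark="cbench-v1/qsort")            # assumed benchmark name

episode_reward = 0.0
for _ in range(20):
    action = env.action_space.sample()            # pick a random optimization pass
    observation, reward, done, info = env.step(action)
    episode_reward += reward
    if done:
        break

print("cumulative reward:", episode_reward)
env.close()
```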
|
[
{
"version": "v1",
"created": "Fri, 17 Sep 2021 01:02:27 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Dec 2021 13:33:39 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Cummins",
"Chris",
""
],
[
"Wasti",
"Bram",
""
],
[
"Guo",
"Jiadong",
""
],
[
"Cui",
"Brandon",
""
],
[
"Ansel",
"Jason",
""
],
[
"Gomez",
"Sahir",
""
],
[
"Jain",
"Somya",
""
],
[
"Liu",
"Jia",
""
],
[
"Teytaud",
"Olivier",
""
],
[
"Steiner",
"Benoit",
""
],
[
"Tian",
"Yuandong",
""
],
[
"Leather",
"Hugh",
""
]
] |
new_dataset
| 0.999539 |
2111.12429
|
Jeroen Van Der Donckt
|
Jonas Van Der Donckt, Jeroen Van Der Donckt, Emiel Deprost, Sofie Van
Hoecke
|
tsflex: flexible time series processing & feature extraction
|
The first two authors contributed equally. Submitted to SoftwareX
| null | null | null |
cs.LG eess.SP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Time series processing and feature extraction are crucial and time-intensive
steps in conventional machine learning pipelines. Existing packages are limited
in their applicability, as they cannot cope with irregularly-sampled or
asynchronous data and make strong assumptions about the data format. Moreover,
these packages do not focus on execution speed and memory efficiency, resulting
in considerable overhead. We present $\texttt{tsflex}$, a Python toolkit for
time series processing and feature extraction that focuses on performance and
flexibility, enabling broad applicability. The toolkit leverages window-stride
arguments of the same data type as the sequence index and maintains the
sequence index through all operations. $\texttt{tsflex}$ is flexible as it
(1) supports multivariate time series, (2) supports multiple window-stride
configurations, (3) integrates with processing and feature functions from
other packages, and (4) makes no assumptions about data sampling
regularity, series alignment, or data type. Other functionalities include
multiprocessing, detailed execution logging, chunking sequences, and
serialization. Benchmarks show that $\texttt{tsflex}$ is faster and more
memory-efficient compared to similar packages, while being more permissive and
flexible in its utilization.
|
[
{
"version": "v1",
"created": "Wed, 24 Nov 2021 11:18:03 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Dec 2021 14:49:52 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Van Der Donckt",
"Jonas",
""
],
[
"Van Der Donckt",
"Jeroen",
""
],
[
"Deprost",
"Emiel",
""
],
[
"Van Hoecke",
"Sofie",
""
]
] |
new_dataset
| 0.992701 |
2112.11330
|
Avinash Parnandi
|
Avinash Parnandi, Aakash Kaku, Anita Venkatesan, Natasha Pandit, Audre
Wirtanen, Haresh Rajamohan, Kannan Venkataramanan, Dawn Nilsen, Carlos
Fernandez-Granda, Heidi Schambra
|
PrimSeq: a deep learning-based pipeline to quantitate rehabilitation
training
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Stroke rehabilitation seeks to increase neuroplasticity through the repeated
practice of functional motions, but may have minimal impact on recovery because
of insufficient repetitions. The optimal training content and quantity are
currently unknown because no practical tools exist to measure them. Here, we
present PrimSeq, a pipeline to classify and count functional motions trained in
stroke rehabilitation. Our approach integrates wearable sensors to capture
upper-body motion, a deep learning model to predict motion sequences, and an
algorithm to tally motions. The trained model accurately decomposes
rehabilitation activities into component functional motions, outperforming
competitive machine learning methods. PrimSeq furthermore quantifies these
motions at a fraction of the time and labor costs of human experts. We
demonstrate the capabilities of PrimSeq in previously unseen stroke patients
with a range of upper extremity motor impairment. We expect that these advances
will support the rigorous measurement required for quantitative dosing trials
in stroke rehabilitation.
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 16:19:14 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Dec 2021 13:22:39 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Parnandi",
"Avinash",
""
],
[
"Kaku",
"Aakash",
""
],
[
"Venkatesan",
"Anita",
""
],
[
"Pandit",
"Natasha",
""
],
[
"Wirtanen",
"Audre",
""
],
[
"Rajamohan",
"Haresh",
""
],
[
"Venkataramanan",
"Kannan",
""
],
[
"Nilsen",
"Dawn",
""
],
[
"Fernandez-Granda",
"Carlos",
""
],
[
"Schambra",
"Heidi",
""
]
] |
new_dataset
| 0.993221 |
2112.11482
|
Gilles Hacheme
|
Gilles Hacheme
|
English2Gbe: A multilingual machine translation model for {Fon/Ewe}Gbe
| null |
ML4D, 35th Conference on Neural Information Processing Systems
(NeurIPS 2021), Sydney, Australia
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Language is an essential factor of emancipation. Unfortunately, most of the
more than 2,000 African languages are low-resourced. The community has recently
used machine translation to revive and strengthen several African languages.
However, the trained models are often bilingual, resulting in a potentially
exponential number of models to train and maintain to cover all possible
translation directions. Additionally, bilingual models do not leverage the
similarity between some of the languages. Consequently, multilingual neural
machine translation (NMT) is gaining considerable interest, especially for
low-resourced languages. Nevertheless, its adoption by the community is still
limited. This paper introduces English2Gbe, a multilingual NMT model capable of
translating from English to Ewe or Fon. Using the BLEU, CHRF, and TER scores
computed with the Sacrebleu (Post, 2018) package for reproducibility, we show
that English2Gbe outperforms bilingual models (English to Ewe and English to
Fon) and gives state-of-the-art results on the JW300 benchmark for Fon
established by Nekoto et al. (2020). We hope this work will contribute to the
massive adoption of multilingual models within the community. Our code is made
accessible on GitHub.
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 10:35:09 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Hacheme",
"Gilles",
""
]
] |
new_dataset
| 0.996882 |
2112.11484
|
Anastasia-Maria Leventi-Peetz
|
A.-M. Leventi-Peetz, O. Zendel, W. Lennartz, K. Weber
|
CryptoMiniSat Switches-Optimization for Solving Cryptographic Instances
| null |
Daniel Le Berre, Matti J\"arvisalo (eds). Proceedings of
Pragmatics of SAT 2015 and 2018. EPiC Series in Computing, vol. 59, pp.
79--93, 2019
|
10.29007/vpd6
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Performing hundreds of test runs and a source-code analysis, we empirically
identified improved parameter configurations for the CryptoMiniSat (CMS) 5 for
solving cryptographic CNF instances originating from algebraic known-plaintext
attacks on 3 rounds encryption of the Small AES-64 model cipher SR$(3, 4, 4,
4)$. We ultimately became able to reconstruct 64-bit keys in under an hour of
real time, which, to our knowledge, has never been achieved before, and
especially not without any assumptions or prior knowledge of key bits (for instance in
the form of side-channels, as in \cite{Mohamed2012algebraicSCA}). A statistical
analysis of the non-deterministic solver runtimes was carried out, and
command-line parameter combinations were defined that yielded the best runtimes,
which initially ranged from under an hour to a few hours in median. We then
used an Automatic Algorithm Configuration (AAC) tool to systematically extend
the search for even better solver configurations, successfully delivering even
shorter solving times. In this work we elaborate on the systematics we followed
to reach our results in a traceable and reproducible way. The ultimate focus of
our investigations is to find out whether CMS, when appropriately tuned, is indeed
capable of attacking even bigger and harder problems than the ones solved here.
For the domain of cryptographic research, the duration of the solving time
plays an inferior role as compared to the practical feasibility of finding a
solution to the problem. The perspective scalability of the here presented
results is the object of further investigations.
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 19:04:39 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Leventi-Peetz",
"A. -M.",
""
],
[
"Zendel",
"O.",
""
],
[
"Lennartz",
"W.",
""
],
[
"Weber",
"K.",
""
]
] |
new_dataset
| 0.991387 |
2112.11491
|
Hung Nguyen
|
Hung T. Nguyen, Steven Bottone, Kwang Taik Kim, Mung Chiang, H.
Vincent Poor
|
Adversarial Neural Networks for Error Correcting Codes
|
6 pages, accepted to GLOBECOM 2021
| null | null | null |
cs.LG cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Error correcting codes are a fundamental component in modern day
communication systems, demanding extremely high throughput, ultra-reliability
and low latency. Recent approaches using machine learning (ML) models as the
decoders offer both improved performance and great adaptability to unknown
environments, where traditional decoders struggle. We introduce a general
framework to further boost the performance and applicability of ML models. We
propose to combine ML decoders with a competing discriminator network that
tries to distinguish between codewords and noisy words, and, hence, guides the
decoding models to recover transmitted codewords. Our framework is
game-theoretic, motivated by generative adversarial networks (GANs), with the
decoder and discriminator competing in a zero-sum game. The decoder learns to
simultaneously decode and generate codewords while the discriminator learns to
tell the differences between decoded outputs and codewords. Thus, the decoder
is able to decode noisy received signals into codewords, increasing the
probability of successful decoding. We show a strong connection of our
framework with the optimal maximum likelihood decoder by proving that this
decoder defines a Nash equilibrium point of our game. Hence, training to
equilibrium has a good possibility of achieving the optimal maximum likelihood
performance. Moreover, our framework does not require training labels, which
are typically unavailable during communications, and, thus, can seemingly be
trained online and adapt to channel dynamics. To demonstrate the performance of
our framework, we combine it with the very recent neural decoders and show
improved performance compared to the original models and traditional decoding
algorithms on various codes.
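The game-theoretic setup can be sketched as a standard adversarial training loop: a decoder maps noisy received words to codeword estimates, while a discriminator tries to tell decoder outputs from valid codewords. The PyTorch sketch below is schematic only; the MLP architectures, the placeholder codebook, and the loss weighting are illustrative assumptions, not the models from the paper.

```python
import torch
import torch.nn as nn

n, sigma, batch = 16, 0.5, 128          # block length, noise level, batch size (illustrative)
decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, n), nn.Tanh())
discrim = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-3)
opt_dis = torch.optim.Adam(discrim.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def sample_codewords(b):
    # Placeholder codebook: random +/-1 words; a real experiment would draw
    # BPSK-modulated codewords of an actual code (e.g. polar or LDPC).
    return torch.randint(0, 2, (b, n)).float() * 2 - 1

for step in range(1000):
    c = sample_codewords(batch)
    y = c + sigma * torch.randn_like(c)              # AWGN channel output

    # Discriminator step: valid codewords vs. decoder outputs.
    with torch.no_grad():
        c_hat = decoder(y)
    d_loss = bce(discrim(c), torch.ones(batch, 1)) + bce(discrim(c_hat), torch.zeros(batch, 1))
    opt_dis.zero_grad(); d_loss.backward(); opt_dis.step()

    # Decoder step: stay consistent with the received word while fooling the
    # discriminator, pushing the output towards the codeword set (no labels used).
    c_hat = decoder(y)
    g_loss = bce(discrim(c_hat), torch.ones(batch, 1)) + 0.1 * nn.functional.mse_loss(c_hat, y)
    opt_dec.zero_grad(); g_loss.backward(); opt_dec.step()
```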
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 19:14:44 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Nguyen",
"Hung T.",
""
],
[
"Bottone",
"Steven",
""
],
[
"Kim",
"Kwang Taik",
""
],
[
"Chiang",
"Mung",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.988978 |
2112.11543
|
Fei Yang
|
Yanquan Chen, Fei Yang, Tianyu Lang, Guanfang Dong, Anup Basu
|
Real-time Street Human Motion Capture
|
7 pages, 7 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In recent years, motion capture technology using computers has developed
rapidly. Because of its high efficiency and excellent performance, it replaces
many traditional methods and is being widely used in many fields. Our project
is about street scene video human motion capturing and analysis. The primary
goal of the project is to capture the human motion in a video and use the
motion information for 3D animation (human) in real-time. We applied a neural
network for motion capture and implement it in the unity under a street view
scene. By analyzing the motion data, we will have a better estimation of the
street condition, which is useful for other high-tech applications such as
self-driving cars.
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 22:11:19 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Chen",
"Yanquan",
""
],
[
"Yang",
"Fei",
""
],
[
"Lang",
"Tianyu",
""
],
[
"Dong",
"Guanfang",
""
],
[
"Basu",
"Anup",
""
]
] |
new_dataset
| 0.996966 |
2112.11679
|
Liqiang Zhang
|
Qingyuan Gong, Yu Liu, Liqiang Zhang, Renhe Liu
|
Ghost-dil-NetVLAD: A Lightweight Neural Network for Visual Place
Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Visual place recognition (VPR) is a challenging task that must balance
enormous computational cost against high recognition performance. Thanks to
the practical feature-extraction ability of lightweight convolutional neural
networks (CNNs) and the trainability of the vector of locally aggregated
descriptors (VLAD) layer, we propose a lightweight, weakly supervised, end-to-end
neural network consisting of a front-end perception model called GhostCNN and
a learnable VLAD layer as a back-end. GhostCNN is based on Ghost modules, which
are lightweight CNN-based architectures. They can generate redundant feature
maps using linear operations instead of the traditional convolution process,
striking a good trade-off between computational resources and recognition accuracy.
To further enhance the proposed lightweight model, we add dilated convolutions
to the Ghost module to obtain features containing more spatial semantic
information, improving accuracy. Finally, extensive experiments conducted on a
commonly used public benchmark and our private dataset validate that the
proposed neural network reduces the FLOPs and parameters of VGG16-NetVLAD by
99.04% and 80.16%, respectively, while both models achieve similar accuracy.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 06:05:02 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Gong",
"Qingyuan",
""
],
[
"Liu",
"Yu",
""
],
[
"Zhang",
"Liqiang",
""
],
[
"Liu",
"Renhe",
""
]
] |
new_dataset
| 0.987542 |
2112.11687
|
Jonathan Barron
|
Jonathan T. Barron
|
Squareplus: A Softplus-Like Algebraic Rectifier
|
https://github.com/jonbarron/squareplus
| null | null | null |
cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present squareplus, an activation function that resembles softplus, but
which can be computed using only algebraic operations: addition,
multiplication, and square-root. Because squareplus is ~6x faster to evaluate
than softplus on a CPU and does not require access to transcendental functions,
it may have practical value in resource-limited deep learning applications.
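A common formulation consistent with this description is squareplus(x, b) = (x + sqrt(x^2 + b)) / 2, which is non-negative, monotonic, and approaches ReLU for large |x|. The NumPy sketch below uses b = 4 as the hinge constant; treat that default as an assumption rather than a value prescribed by the paper.

```python
import numpy as np

def squareplus(x, b=4.0):
    """Softplus-like rectifier using only +, *, and sqrt: (x + sqrt(x^2 + b)) / 2."""
    return 0.5 * (x + np.sqrt(x * x + b))

def softplus(x):
    # Numerically stable softplus for comparison.
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

x = np.linspace(-6, 6, 7)
print(np.round(squareplus(x), 3))
print(np.round(softplus(x), 3))   # both are smooth, positive, and ReLU-like for large x
```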
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 06:20:27 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Barron",
"Jonathan T.",
""
]
] |
new_dataset
| 0.999833 |
2112.11714
|
Zhitao Liu
|
Zhitao Liu (1), Jinke Shi (3), Junhao He (3), Yu Wu (3), Ning Xie (2),
Ke Xiong (3), Yutong Liu (2) ((1) School of Aeronautics and Astronautics
UESTC, (2) Center for Future Media and School of Computer Science and
Engineering UESTC, (3) Glasgow College UESTC )
|
The Time Perception Control and Regulation in VR Environment
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To adapt to different environments, human circadian rhythms are constantly
adjusted as the environment changes, following the principle of survival of the
fittest. According to this principle, objective factors (such as circadian
rhythms and light intensity) can be utilized to control time perception, that
is, the subjective judgment of how much time has elapsed. In the physical
world, factors that can affect time perception, such as illumination, are
called zeitgebers. In recent years, with the development of Virtual Reality
(VR) technology, effective control of zeitgebers has become possible, which is
difficult to achieve in the physical world. Based on previous studies, this
paper explores in depth how four types of zeitgebers (music, color, cognitive
load, and concentration), which have been shown to affect time perception in
the physical world, actually perform in a VR environment. It also discusses how
to measure the difference between human time perception and objectively elapsed
time in the physical world.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 07:49:52 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Liu",
"Zhitao",
""
],
[
"Shi",
"Jinke",
""
],
[
"He",
"Junhao",
""
],
[
"Wu",
"Yu",
""
],
[
"Xie",
"Ning",
""
],
[
"Xiong",
"Ke",
""
],
[
"Liu",
"Yutong",
""
]
] |
new_dataset
| 0.994557 |
2112.11789
|
Mahdi Boloursaz Mashhadi
|
Mahdi Boloursaz Mashhadi, Deniz Gunduz, Alberto Perotti, and Branislav
Popovic
|
DRF Codes: Deep SNR-Robust Feedback Codes
| null | null | null | null |
cs.IT cs.LG eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a new deep-neural-network (DNN) based error correction code for
fading channels with output feedback, called deep SNR-robust feedback (DRF)
code. At the encoder, parity symbols are generated by a long short term memory
(LSTM) network based on the message as well as the past forward channel outputs
observed by the transmitter in a noisy fashion. The decoder uses a
bi-directional LSTM architecture along with a signal to noise ratio (SNR)-aware
attention NN to decode the message. The proposed code overcomes two major
shortcomings of the previously proposed DNN-based codes over channels with
passive output feedback: (i) the SNR-aware attention mechanism at the decoder
enables reliable application of the same trained NN over a wide range of SNR
values; (ii) curriculum training with batch-size scheduling is used to speed up
and stabilize training while improving the SNR-robustness of the resulting
code. We show that the DRF codes significantly outperform state-of-the-art in
terms of both the SNR-robustness and the error rate in additive white Gaussian
noise (AWGN) channel with feedback. In fading channels with perfect phase
compensation at the receiver, DRF codes learn to efficiently exploit knowledge
of the instantaneous fading amplitude (which is available to the encoder
through feedback) to reduce the overhead and complexity associated with channel
estimation at the decoder. Finally, we show the effectiveness of DRF codes in
multicast channels with feedback, where linear feedback codes are known to be
strictly suboptimal.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 10:47:25 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Mashhadi",
"Mahdi Boloursaz",
""
],
[
"Gunduz",
"Deniz",
""
],
[
"Perotti",
"Alberto",
""
],
[
"Popovic",
"Branislav",
""
]
] |
new_dataset
| 0.998624 |
2112.11796
|
Jan Van den Bussche
|
Thomas Delva, Anastasia Dimou, Maxime Jakubowski, Jan Van den Bussche
|
Shape Fragments
| null | null | null | null |
cs.DB cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In constraint languages for RDF graphs, such as ShEx and SHACL, constraints
on nodes and their properties in RDF graphs are known as "shapes". Schemas in
these languages list the various shapes that certain targeted nodes must
satisfy for the graph to conform to the schema. Using SHACL, we propose in this
paper a novel use of shapes, by which a set of shapes is used to extract a
subgraph from an RDF graph, the so-called shape fragment. Our proposed
mechanism fits in the framework of Linked Data Fragments. In this paper, (i) we
define our extraction mechanism formally, building on recently proposed SHACL
formalizations; (ii) we establish correctness properties, which relate shape
fragments to notions of provenance for database queries; (iii) we compare shape
fragments with SPARQL queries; (iv) we discuss implementation options; and (v)
we present initial experiments demonstrating that shape fragments are a
feasible new idea.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 11:01:50 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Delva",
"Thomas",
""
],
[
"Dimou",
"Anastasia",
""
],
[
"Jakubowski",
"Maxime",
""
],
[
"Bussche",
"Jan Van den",
""
]
] |
new_dataset
| 0.999083 |
2112.11811
|
Moshe Schwartz
|
Eyar Ben-Tolila and Moshe Schwartz
|
On the Reverse-Complement String-Duplication System
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by DNA storage in living organisms, and by known biological
mutation processes, we study the reverse-complement string-duplication system.
We fully classify the conditions under which the system has full
expressiveness, for all alphabets and all fixed duplication lengths. We then
focus on binary systems with duplication length $2$ and prove that they have
full capacity, yet surprisingly, have zero entropy-rate. Finally, by using
binary single burst-insertion correcting codes, we construct codes that correct
a single reverse-complement duplication of odd length, over any alphabet. The
redundancy (in bits) of the constructed code does not depend on the alphabet
size.
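To make the mutation model concrete, the sketch below applies one reverse-complement duplication step to a string: a substring of length k is chosen, and its reverse complement is inserted immediately after it. The exact insertion rule and the binary complement map are assumptions drawn from the abstract's informal description, not the paper's formal definition.

```python
import random

COMPLEMENT = {"0": "1", "1": "0", "A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(s):
    return "".join(COMPLEMENT[ch] for ch in reversed(s))

def rc_duplication_step(s, k, rng=random):
    """Insert the reverse complement of a random length-k substring right after it."""
    if len(s) < k:
        return s
    i = rng.randrange(len(s) - k + 1)
    window = s[i:i + k]
    return s[:i + k] + reverse_complement(window) + s[i + k:]

s = "0110"
for _ in range(3):
    s = rc_duplication_step(s, k=2)
    print(s)
```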
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 11:35:12 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Ben-Tolila",
"Eyar",
""
],
[
"Schwartz",
"Moshe",
""
]
] |
new_dataset
| 0.971485 |
2112.11858
|
Idan Amit
|
Idan Amit
|
End to End Software Engineering Research
| null | null | null | null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
End-to-end learning is machine learning that starts from raw data and predicts a
desired concept, with all steps done automatically. In a software engineering
context, we see it as starting from the source code and predicting process
metrics. This framework can be used for predicting defects, code quality,
productivity, and more. End-to-end learning improves over feature-based machine
learning by not requiring domain experts and by being able to extract new
knowledge. We describe a dataset of 5M files from 15k projects constructed for
this goal. The dataset is constructed in a way that enables not only predicting
concepts but also investigating their causes.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 13:09:16 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Amit",
"Idan",
""
]
] |
new_dataset
| 0.989241 |
2112.11891
|
Dave Murray-Rust
|
Dave Murray-Rust, Chris Elsden, Bettina Nissen, Ella Tallyn, Larissa
Pschetz, Chris Speed
|
Blockchain and Beyond: Understanding Blockchains through Prototypes and
Public Engagement
|
(Preprint - accepted to TOCHI 30/11/2021)
| null | null | null |
cs.HC cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents an annotated portfolio of projects that seek to
understand and communicate the social and societal implications of blockchains,
distributed ledgers and smart contracts. These complex technologies rely on
human and technical factors to deliver cryptocurrencies, shared computation and
trustless protocols but have a secondary benefit in providing a moment to
re-think many aspects of society, and imagine alternative possibilities. The
projects use design and HCI methods to relate blockchains to a range of topics,
including global supply chains, delivery infrastructure, smart grids,
volunteering and charitable giving, through engaging publics, exploring ideas
and speculating on possible futures. Based on an extensive annotated portfolio
we draw out learning for the design of blockchain systems, broadening
participation and surfacing questions around imaginaries, social implications
and engagement with new technology. This paints a comprehensive picture of how
HCI and design can shape understandings of the future of complex technologies.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 14:20:27 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Murray-Rust",
"Dave",
""
],
[
"Elsden",
"Chris",
""
],
[
"Nissen",
"Bettina",
""
],
[
"Tallyn",
"Ella",
""
],
[
"Pschetz",
"Larissa",
""
],
[
"Speed",
"Chris",
""
]
] |
new_dataset
| 0.993642 |
2112.12049
|
Leonardo Azevedo
|
Maximillien de Bayser and Vinicius Segura and Leonardo Guerreiro
Azevedo and Leonardo P. Tizzei and Raphael Melo Thiago and Elton Soares and
Renato Cerqueira
|
DevOps and Microservices in Scientific System development
|
14 pages, 4 figures, paper accepted as poster in ACM SAC 2022, ACM
ISBN 978-1-4503-8713-2/22/04
| null |
10.1145/3477314.3507317
| null |
cs.SE cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
There is a gap in scientific information systems development concerning
modern software engineering and scientific computing. Historically, software
engineering methodologies have been perceived by computational scientists as
unwanted accidental complexity in their scientific systems development.
More recent trends, like the end of Moore's law and the subsequent
diversification of hardware platforms, combined with the increasing
multidisciplinarity of science itself, have exacerbated the problem because
self-taught \"end user developers\" are not familiar with the disciplines needed
to tackle this increased complexity. On a more positive note, agile programming
methods have brought software development practices closer to the way scientific
software is produced. In this work, we present the experience of a multi-year
industry research project where agile methods, microservices and DevOps were
applied. Our goal is to validate the hypothesis that the use of microservices
would allow computational scientists to work in the more minimalistic,
prototype-oriented way that they prefer, while the software engineering team
would handle the integration. Hence, scientific multidisciplinary systems would
gain in a twofold way: (i) Subject Matter Experts (SMEs) use their preferred
tools to develop the specific scientific part of the system; (ii) software
engineers provide high-quality software code for the system delivery.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 17:18:54 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"de Bayser",
"Maximillien",
""
],
[
"Segura",
"Vinicius",
""
],
[
"Azevedo",
"Leonardo Guerreiro",
""
],
[
"Tizzei",
"Leonardo P.",
""
],
[
"Thiago",
"Raphael Melo",
""
],
[
"Soares",
"Elton",
""
],
[
"Cerqueira",
"Renato",
""
]
] |
new_dataset
| 0.954484 |
2112.12053
|
Liang Pan
|
Liang Pan, Tong Wu, Zhongang Cai, Ziwei Liu, Xumin Yu, Yongming Rao,
Jiwen Lu, Jie Zhou, Mingye Xu, Xiaoyuan Luo, Kexue Fu, Peng Gao, Manning
Wang, Yali Wang, Yu Qiao, Junsheng Zhou, Xin Wen, Peng Xiang, Yu-Shen Liu,
Zhizhong Han, Yuanjie Yan, Junyi An, Lifa Zhu, Changwei Lin, Dongrui Liu, Xin
Li, Francisco G\'omez-Fern\'andez, Qinlong Wang, Yang Yang
|
Multi-View Partial (MVP) Point Cloud Challenge 2021 on Completion and
Registration: Methods and Results
|
15 pages, 13 figures, ICCV2021 Workshop Technique Report, the
codebase webpage: https://github.com/paul007pl/MVP_Benchmark
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As real-scanned point clouds are mostly partial due to occlusions and
viewpoints, reconstructing complete 3D shapes based on incomplete observations
becomes a fundamental problem for computer vision. With a single incomplete
point cloud, it becomes the partial point cloud completion problem. Given
multiple different observations, 3D reconstruction can be addressed by
performing partial-to-partial point cloud registration. Recently, a large-scale
Multi-View Partial (MVP) point cloud dataset has been released, which consists
of over 100,000 high-quality virtual-scanned partial point clouds. Based on the
MVP dataset, this paper reports methods and results in the Multi-View Partial
Point Cloud Challenge 2021 on Completion and Registration. In total, 128
participants registered for the competition, and 31 teams made valid
submissions. The top-ranked solutions will be analyzed, and then we will
discuss future research directions.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 17:24:53 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Pan",
"Liang",
""
],
[
"Wu",
"Tong",
""
],
[
"Cai",
"Zhongang",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Yu",
"Xumin",
""
],
[
"Rao",
"Yongming",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zhou",
"Jie",
""
],
[
"Xu",
"Mingye",
""
],
[
"Luo",
"Xiaoyuan",
""
],
[
"Fu",
"Kexue",
""
],
[
"Gao",
"Peng",
""
],
[
"Wang",
"Manning",
""
],
[
"Wang",
"Yali",
""
],
[
"Qiao",
"Yu",
""
],
[
"Zhou",
"Junsheng",
""
],
[
"Wen",
"Xin",
""
],
[
"Xiang",
"Peng",
""
],
[
"Liu",
"Yu-Shen",
""
],
[
"Han",
"Zhizhong",
""
],
[
"Yan",
"Yuanjie",
""
],
[
"An",
"Junyi",
""
],
[
"Zhu",
"Lifa",
""
],
[
"Lin",
"Changwei",
""
],
[
"Liu",
"Dongrui",
""
],
[
"Li",
"Xin",
""
],
[
"Gómez-Fernández",
"Francisco",
""
],
[
"Wang",
"Qinlong",
""
],
[
"Yang",
"Yang",
""
]
] |
new_dataset
| 0.999459 |
2112.12070
|
Chi-Man Pun
|
Wenyun Li and Chi-Man Pun
|
A Single-Target License Plate Detection with Attention
|
IWAIT2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
With the development of deep learning, neural networks are commonly adopted for
the License Plate Detection (LPD) task and achieve much better performance and
precision; in particular, CNN-based networks such as RetinaNet [1] can achieve
state-of-the-art results. For a single-object detection task such as LPD, a
modified general object detector would be time-consuming, unable to cope with
complex scenarios, and would produce a cumbersome weights file that is hard to
deploy on embedded devices.
|
[
{
"version": "v1",
"created": "Sun, 12 Dec 2021 03:00:03 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Li",
"Wenyun",
""
],
[
"Pun",
"Chi-Man",
""
]
] |
new_dataset
| 0.997588 |
2112.12101
|
Helen Susannah Moat
|
Giovanni Mizzi, Tobias Preis, Leonardo Soares Bastos, Marcelo Ferreira
da Costa Gomes, Claudia Torres Code\c{c}o, Helen Susannah Moat
|
Faster indicators of dengue fever case counts using Google and Twitter
|
25 pages, 7 figures (3 in supplementary information)
| null | null | null |
cs.SI stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Dengue is a major threat to public health in Brazil, the world's sixth
biggest country by population, with over 1.5 million cases recorded in 2019
alone. Official data on dengue case counts is delivered incrementally and, for
many reasons, often subject to delays of weeks. In contrast, data on
dengue-related Google searches and Twitter messages is available in full with
no delay. Here, we describe a model which uses online data to deliver improved
weekly estimates of dengue incidence in Rio de Janeiro. We address a key
shortcoming of previous online data disease surveillance models by explicitly
accounting for the incremental delivery of case count data, to ensure that our
approach can be used in practice. We also draw on data from Google Trends and
Twitter in tandem, and demonstrate that this leads to slightly better estimates
than a model using only one of these data streams alone. Our results provide
evidence that online data can be used to improve both the accuracy and
precision of rapid estimates of disease incidence, even where the underlying
case count data is subject to long and varied delays.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 18:03:26 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Mizzi",
"Giovanni",
""
],
[
"Preis",
"Tobias",
""
],
[
"Bastos",
"Leonardo Soares",
""
],
[
"Gomes",
"Marcelo Ferreira da Costa",
""
],
[
"Codeço",
"Claudia Torres",
""
],
[
"Moat",
"Helen Susannah",
""
]
] |
new_dataset
| 0.991047 |
2112.12141
|
Jingxiao Zheng
|
Jingxiao Zheng, Xinwei Shi, Alexander Gorban, Junhua Mao, Yang Song,
Charles R. Qi, Ting Liu, Visesh Chari, Andre Cornman, Yin Zhou, Congcong Li,
Dragomir Anguelov
|
Multi-modal 3D Human Pose Estimation with 2D Weak Supervision in
Autonomous Driving
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D human pose estimation (HPE) in autonomous vehicles (AV) differs from other
use cases in many factors, including the 3D resolution and range of data,
absence of dense depth maps, failure modes for LiDAR, relative location between
the camera and LiDAR, and a high bar for estimation accuracy. Data collected
for other use cases (such as virtual reality, gaming, and animation) may
therefore not be usable for AV applications. This necessitates the collection
and annotation of a large amount of 3D data for HPE in AV, which is
time-consuming and expensive. In this paper, we propose one of the first
approaches to alleviate this problem in the AV setting. Specifically, we
propose a multi-modal approach which uses 2D labels on RGB images as weak
supervision to perform 3D HPE. The proposed multi-modal architecture
incorporates LiDAR and camera inputs with an auxiliary segmentation branch. On
the Waymo Open Dataset, our approach achieves a 22% relative improvement over
camera-only 2D HPE baseline, and 6% improvement over LiDAR-only model. Finally,
careful ablation studies and parts-based analysis illustrate the advantages of
each of our contributions.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 18:57:16 GMT"
}
] | 2021-12-23T00:00:00 |
[
[
"Zheng",
"Jingxiao",
""
],
[
"Shi",
"Xinwei",
""
],
[
"Gorban",
"Alexander",
""
],
[
"Mao",
"Junhua",
""
],
[
"Song",
"Yang",
""
],
[
"Qi",
"Charles R.",
""
],
[
"Liu",
"Ting",
""
],
[
"Chari",
"Visesh",
""
],
[
"Cornman",
"Andre",
""
],
[
"Zhou",
"Yin",
""
],
[
"Li",
"Congcong",
""
],
[
"Anguelov",
"Dragomir",
""
]
] |
new_dataset
| 0.996934 |
2005.10103
|
Hao Xu
|
Hao Xu, Lei Zhang, Oluwakayode Onireti, Yang Fang, William Bill
Buchanan, Muhammad Ali Imran
|
BeepTrace: Blockchain-enabled Privacy-preserving Contact Tracing for
COVID-19 Pandemic and Beyond
| null | null |
10.1109/JIOT.2020.3025953
| null |
cs.DC cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The outbreak of COVID-19 pandemic has exposed an urgent need for effective
contact tracing solutions through mobile phone applications to prevent the
infection from spreading further. However, due to the nature of contact
tracing, public concern on privacy issues has been a bottleneck to the existing
solutions, which is significantly affecting the uptake of contact tracing
applications across the globe. In this paper, we present a blockchain-enabled
privacy-preserving contact tracing scheme: BeepTrace, where we propose to adopt
blockchain bridging the user/patient and the authorized solvers to desensitize
the user ID and location information. Compared with recently proposed contact
tracing solutions, our approach shows higher security and privacy, with the
additional advantages of being battery friendly and globally accessible.
Results show viability in terms of the required resources from both the server
and mobile phone perspectives. By resolving the public's privacy concerns,
the proposed BeepTrace solution can provide a timely framework for authorities,
companies, software developers and researchers to rapidly develop and deploy
effective digital contact tracing applications and conquer the COVID-19 pandemic
soon. Meanwhile, the open initiative of BeepTrace allows worldwide
collaboration and integrates existing tracing and positioning solutions with the
help of blockchain technology.
|
[
{
"version": "v1",
"created": "Wed, 20 May 2020 15:04:43 GMT"
},
{
"version": "v2",
"created": "Thu, 21 May 2020 14:00:26 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Dec 2021 11:09:52 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Xu",
"Hao",
""
],
[
"Zhang",
"Lei",
""
],
[
"Onireti",
"Oluwakayode",
""
],
[
"Fang",
"Yang",
""
],
[
"Buchanan",
"William Bill",
""
],
[
"Imran",
"Muhammad Ali",
""
]
] |
new_dataset
| 0.999362 |
2005.11023
|
Wenjun Shi
|
Wenjun Shi, Qinxiang Cao, Yuxin Deng, Hanru Jiang and Yuan Feng
|
Symbolic Reasoning about Quantum Circuits in Coq
|
arXiv admin note: text overlap with arXiv:1802.02648 by other authors
|
Journal of Computer Science and Technology (JCST), 2021, 36(6):
1291-1306
|
10.1007/s11390-021-1637-9
| null |
cs.PL cs.LO quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A quantum circuit is a computational unit that transforms an input quantum
state to an output one. A natural way to reason about its behavior is to
compute explicitly the unitary matrix implemented by it. However, when the
number of qubits increases, the matrix dimension grows exponentially and the
computation becomes intractable.
In this paper, we propose a symbolic approach to reasoning about quantum
circuits. It is based on a small set of laws involving some basic manipulations
on vectors and matrices. This symbolic reasoning scales better than the
explicit one and is well suited to be automated in Coq, as demonstrated with
some typical examples.
|
[
{
"version": "v1",
"created": "Fri, 22 May 2020 06:27:52 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jul 2020 08:36:41 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Mar 2021 08:00:35 GMT"
},
{
"version": "v4",
"created": "Tue, 21 Dec 2021 08:13:19 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Shi",
"Wenjun",
""
],
[
"Cao",
"Qinxiang",
""
],
[
"Deng",
"Yuxin",
""
],
[
"Jiang",
"Hanru",
""
],
[
"Feng",
"Yuan",
""
]
] |
new_dataset
| 0.99881 |
2005.13754
|
Petros Spachos
|
Pai Chet Ng, Petros Spachos, Konstantinos Plataniotis
|
COVID-19 and Your Smartphone: BLE-based Smart Contact Tracing
| null | null |
10.1109/JSYST.2021.3055675
| null |
cs.LG cs.CR cs.HC cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contact tracing is of paramount importance when it comes to preventing the
spreading of infectious diseases. Contact tracing is usually performed manually
by authorized personnel. Manual contact tracing is an inefficient, error-prone,
time-consuming process of limited utility to the population at large as those
in close contact with infected individuals are informed hours, if not days,
later. This paper introduces an alternative to manual contact tracing. The
proposed Smart Contact Tracing (SCT) system utilizes the smartphone's Bluetooth
Low Energy (BLE) signals and a machine learning classifier to accurately and
quickly determine the contact profile. SCT's contribution is two-fold: a)
classification of the user's contact as high/low-risk using precise proximity
sensing, and b) user anonymity using a privacy-preserving communications
protocol. SCT leverages BLE's non-connectable advertising feature to broadcast
a signature packet when the user is in the public space. Both broadcasted and
observed signatures are stored in the user's smartphone and they are only
uploaded to a secure signature database when a user is confirmed by public
health authorities to be infected. Using received signal strength (RSS), each
smartphone estimates its distance from other users' phones and issues real-time
alerts when social distancing rules are violated. The paper includes extensive
experimentation utilizing real-life smartphone positions and a comparative
evaluation of five machine learning classifiers. Reported results indicate that
a decision tree classifier outperforms other state-of-the-art classification
methods in terms of accuracy. Lastly, to facilitate research in this area, and
to contribute to the timely development of advanced solutions the entire data
set of six experiments with about 123,000 data points is made publicly
available.
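A standard way to turn an RSS reading into a distance estimate, and hence a high/low-risk label, is the log-distance path-loss model. The sketch below is a generic illustration of that idea; the reference power, path-loss exponent, and 2 m threshold are assumptions, not parameters of the SCT system.

```python
def rss_to_distance(rss_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: estimated distance in metres from a BLE RSS reading."""
    return 10 ** ((tx_power_dbm - rss_dbm) / (10.0 * path_loss_exponent))

def contact_risk(rss_dbm, threshold_m=2.0):
    """Label a contact as high risk when the estimated distance violates the threshold."""
    return "high" if rss_to_distance(rss_dbm) < threshold_m else "low"

for rss in (-55, -65, -75, -85):
    print(rss, "dBm ->", round(rss_to_distance(rss), 2), "m,", contact_risk(rss), "risk")
```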
|
[
{
"version": "v1",
"created": "Thu, 28 May 2020 02:56:17 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Ng",
"Pai Chet",
""
],
[
"Spachos",
"Petros",
""
],
[
"Plataniotis",
"Konstantinos",
""
]
] |
new_dataset
| 0.987668 |
2103.12115
|
Alexander Mathis
|
Lucas Stoffl and Maxime Vidal and Alexander Mathis
|
End-to-End Trainable Multi-Instance Pose Estimation with Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an end-to-end trainable approach for multi-instance pose
estimation, called POET (POse Estimation Transformer). Combining a
convolutional neural network with a transformer encoder-decoder architecture,
we formulate multi-instance pose estimation from images as a direct set
prediction problem. Our model is able to directly regress the pose of all
individuals, utilizing a bipartite matching scheme. POET is trained using a
novel set-based global loss that consists of a keypoint loss, a visibility loss
and a class loss. POET reasons about the relations between multiple detected
individuals and the full image context to directly predict their poses in
parallel. We show that POET achieves high accuracy on the COCO keypoint
detection task while having fewer parameters and higher inference speed than
other bottom-up and top-down approaches. Moreover, we show successful transfer
learning when applying POET to animal pose estimation. To the best of our
knowledge, this model is the first end-to-end trainable multi-instance pose
estimation method and we hope it will serve as a simple and promising
alternative.
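The set-prediction formulation hinges on a bipartite matching between predicted and ground-truth poses before the keypoint loss is computed. The sketch below is a schematic version of that matching step using the Hungarian algorithm; the cost definition (mean keypoint distance) is an illustrative assumption, not POET's full matching cost with visibility and class terms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_poses(pred, gt):
    """pred: (P, K, 2) predicted keypoints, gt: (G, K, 2) ground-truth keypoints.
    Returns index pairs minimising the total matching cost."""
    # Cost = mean L2 distance between corresponding keypoints of each pred/gt pair.
    cost = np.linalg.norm(pred[:, None] - gt[None, :], axis=-1).mean(-1)  # (P, G)
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return pred_idx, gt_idx, cost[pred_idx, gt_idx]

pred = np.random.rand(5, 17, 2)   # 5 predicted people, 17 COCO keypoints each
gt = np.random.rand(3, 17, 2)     # 3 annotated people
p, g, c = match_poses(pred, gt)
print(list(zip(p, g)), c.round(3))
```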
|
[
{
"version": "v1",
"created": "Mon, 22 Mar 2021 18:19:22 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Dec 2021 17:16:39 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Stoffl",
"Lucas",
""
],
[
"Vidal",
"Maxime",
""
],
[
"Mathis",
"Alexander",
""
]
] |
new_dataset
| 0.991273 |
2103.13282
|
Alexander Mathis
|
Daniel Joska and Liam Clark and Naoya Muramatsu and Ricardo Jericevich
and Fred Nicolls and Alexander Mathis and Mackenzie W. Mathis and Amir Patel
|
AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs
in the Wild
|
Code and data can be found at:
https://github.com/African-Robotics-Unit/AcinoSet
|
2021 IEEE International Conference on Robotics and Automation
(ICRA), 2021, pp. 13901-13908
|
10.1109/ICRA48506.2021.9561338
| null |
cs.CV cs.SY eess.SY q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Animals are capable of extreme agility, yet understanding their complex
dynamics, which have ecological, biomechanical and evolutionary implications,
remains challenging. Being able to study this incredible agility will be
critical for the development of next-generation autonomous legged robots. In
particular, the cheetah (Acinonyx jubatus) is supremely fast and maneuverable,
yet quantifying its whole-body 3D kinematic data during locomotion in the wild
remains a challenge, even with new deep learning-based methods. In this work we
present an extensive dataset of free-running cheetahs in the wild, called
AcinoSet, that contains 119,490 frames of multi-view synchronized high-speed
video footage, camera calibration files and 7,588 human-annotated frames. We
utilize markerless animal pose estimation to provide 2D keypoints. Then, we use
three methods that serve as strong baselines for 3D pose estimation tool
development: traditional sparse bundle adjustment, an Extended Kalman Filter,
and a trajectory optimization-based method we call Full Trajectory Estimation.
The resulting 3D trajectories, human-checked 3D ground truth, and an
interactive tool to inspect the data are also provided. We believe this dataset
will be useful for a diverse range of fields such as ecology, neuroscience,
robotics, biomechanics as well as computer vision.
|
[
{
"version": "v1",
"created": "Wed, 24 Mar 2021 15:54:11 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Joska",
"Daniel",
""
],
[
"Clark",
"Liam",
""
],
[
"Muramatsu",
"Naoya",
""
],
[
"Jericevich",
"Ricardo",
""
],
[
"Nicolls",
"Fred",
""
],
[
"Mathis",
"Alexander",
""
],
[
"Mathis",
"Mackenzie W.",
""
],
[
"Patel",
"Amir",
""
]
] |
new_dataset
| 0.999734 |
2104.13202
|
Chenglong Li
|
Chenglong Li, Wanlin Xue, Yaqing Jia, Zhichen Qu, Bin Luo, Jin Tang
and Dengdi Sun
|
LasHeR: A Large-scale High-diversity Benchmark for RGBT Tracking
|
IEEE TRANSACTIONS ON IMAGE PROCESSING
| null |
10.1109/TIP.2021.3130533
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RGBT tracking receives a surge of interest in the computer vision community,
but this research field lacks a large-scale and high-diversity benchmark
dataset, which is essential for both the training of deep RGBT trackers and the
comprehensive evaluation of RGBT tracking methods. To this end, we present a
Large-scale High-diversity benchmark for RGBT tracking (LasHeR) in this work.
LasHeR consists of 1224 visible and thermal infrared video pairs with more than
730K frame pairs in total. Each frame pair is spatially aligned and manually
annotated with a bounding box, making the dataset well and densely annotated.
LasHeR is highly diverse, capturing a broad range of object categories,
camera viewpoints, scene complexities and environmental factors across seasons,
weather conditions, day and night. We conduct a comprehensive performance
evaluation of 12 RGBT tracking algorithms on the LasHeR dataset and present a
detailed analysis to clarify the remaining research room in RGBT tracking. In
addition, we release the
unaligned version of LasHeR to attract the research interest for alignment-free
RGBT tracking, which is a more practical task in real-world applications. The
datasets and evaluation protocols are available at:
https://github.com/BUGPLEASEOUT/LasHeR.
|
[
{
"version": "v1",
"created": "Tue, 27 Apr 2021 14:04:23 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Nov 2021 08:01:48 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Li",
"Chenglong",
""
],
[
"Xue",
"Wanlin",
""
],
[
"Jia",
"Yaqing",
""
],
[
"Qu",
"Zhichen",
""
],
[
"Luo",
"Bin",
""
],
[
"Tang",
"Jin",
""
],
[
"Sun",
"Dengdi",
""
]
] |
new_dataset
| 0.999789 |
2106.10197
|
Muhammad Monjurul Karim
|
Muhammad Monjurul Karim, Yu Li, Ruwen Qin, Zhaozheng Yin
|
A Dynamic Spatial-temporal Attention Network for Early Anticipation of
Traffic Accidents
|
10 pages, 4 figures, submitted to a journal
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid advancement of sensor technologies and artificial intelligence are
creating new opportunities for traffic safety enhancement. Dashboard cameras
(dashcams) have been widely deployed on both human driving vehicles and
automated driving vehicles. A computational intelligence model that can
accurately and promptly predict accidents from the dashcam video will enhance
the preparedness for accident prevention. The spatial-temporal interaction of
traffic agents is complex. Visual cues for predicting a future accident are
embedded deeply in dashcam video data. Therefore, the early anticipation of
traffic accidents remains a challenge. Inspired by the attention behavior of
humans in visually perceiving accident risks, this paper proposes a Dynamic
Spatial-Temporal Attention (DSTA) network for the early accident anticipation
from dashcam videos. The DSTA-network learns to select discriminative temporal
segments of a video sequence with a Dynamic Temporal Attention (DTA) module. It
also learns to focus on the informative spatial regions of frames with a
Dynamic Spatial Attention (DSA) module. A Gated Recurrent Unit (GRU) is trained
jointly with the attention modules to predict the probability of a future
accident. The evaluation of the DSTA-network on two benchmark datasets confirms
that it has exceeded the state-of-the-art performance. A thorough ablation
study that assesses the DSTA-network at the component level reveals how the
network achieves such performance. Furthermore, this paper proposes a method to
fuse the prediction scores from two complementary models and verifies its
effectiveness in further boosting the performance of early accident
anticipation.
|
[
{
"version": "v1",
"created": "Fri, 18 Jun 2021 15:58:53 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Dec 2021 00:43:09 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Karim",
"Muhammad Monjurul",
""
],
[
"Li",
"Yu",
""
],
[
"Qin",
"Ruwen",
""
],
[
"Yin",
"Zhaozheng",
""
]
] |
new_dataset
| 0.999067 |
2108.00768
|
Harin Lee
|
Harin Lee, Frank Hoeger, Marc Schoenwiesner, Minsu Park, Nori Jacoby
|
Cross-cultural Mood Perception in Pop Songs and its Alignment with Mood
Detection Algorithms
|
8 pages, 5 figures, to be included as proceedings for the 22nd
International Society of Music Information Retrieval (ISMIR)
|
Proceedings of the 22nd International Society for Music
Information Retrieval Conference, Nov. 2021, pp. 366-373
|
10.5281/zenodo.5625680
| null |
cs.IR cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Do people from different cultural backgrounds perceive the mood in music the
same way? How closely do human ratings across different cultures approximate
automatic mood detection algorithms that are often trained on corpora of
predominantly Western popular music? Analyzing 166 participants' responses from
Brazil, South Korea, and the US, we examined the similarity between the ratings
of nine categories of perceived moods in music and estimated their alignment
with four popular mood detection algorithms. We created a dataset of 360 recent
pop songs drawn from major music charts of the countries and constructed
semantically identical mood descriptors across English, Korean, and Portuguese
languages. Multiple participants from the three countries rated their
familiarity, preference, and perceived moods for a given song. Ratings were
highly similar within and across cultures for basic mood attributes such as
sad, cheerful, and energetic. However, we found significant cross-cultural
differences for more complex characteristics such as dreamy and love. To our
surprise, the results of mood detection algorithms were uniformly correlated
across human ratings from all three countries and did not show a detectable
bias towards any particular culture. Our study thus suggests that the mood
detection algorithms can be considered as an objective measure at least within
the popular music context.
|
[
{
"version": "v1",
"created": "Mon, 2 Aug 2021 10:29:36 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Lee",
"Harin",
""
],
[
"Hoeger",
"Frank",
""
],
[
"Schoenwiesner",
"Marc",
""
],
[
"Park",
"Minsu",
""
],
[
"Jacoby",
"Nori",
""
]
] |
new_dataset
| 0.999598 |
2108.11468
|
Arlene John
|
Arlene John, Koushik Kumar Nundy, Barry Cardiff, Deepu John
|
SomnNET: An SpO2 Based Deep Learning Network for Sleep Apnea Detection
in Smartwatches
|
Accepted for discussion at the IEEE Engineering in Medicine and
Biology Conference (EMBC) 2021
| null |
10.1109/EMBC46164.2021.9631037
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The abnormal pause or rate reduction in breathing is known as the sleep-apnea
hypopnea syndrome and affects the quality of sleep of an individual. A novel
method for the detection of sleep apnea events (pause in breathing) from
peripheral oxygen saturation (SpO2) signals obtained from wearable devices is
discussed in this paper. The paper details an apnea detection algorithm of a
very high resolution on a per-second basis for which a 1-dimensional
convolutional neural network -- which we termed SomnNET -- is developed. This
network exhibits an accuracy of 97.08% and outperforms several lower resolution
state-of-the-art apnea detection methods. The feasibility of model pruning and
binarization to reduce the computational complexity is explored. The pruned
network with 80% sparsity exhibited an accuracy of 89.75%, and the binarized
network exhibited an accuracy of 68.22%. The performance of the proposed
networks is compared against several state-of-the-art algorithms.
|
[
{
"version": "v1",
"created": "Wed, 25 Aug 2021 20:49:49 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"John",
"Arlene",
""
],
[
"Nundy",
"Koushik Kumar",
""
],
[
"Cardiff",
"Barry",
""
],
[
"John",
"Deepu",
""
]
] |
new_dataset
| 0.998106 |
2109.07577
|
Sagi Eppel
|
Sagi Eppel, Haoping Xu, Yi Ru Wang, Alan Aspuru-Guzik
|
Predicting 3D shapes, masks, and properties of materials, liquids, and
objects inside transparent containers, using the TransProteus CGI dataset
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present TransProteus, a dataset, and methods for predicting the 3D
structure, masks, and properties of materials, liquids, and objects inside
transparent vessels from a single image without prior knowledge of the image
source and camera parameters. Manipulating materials in transparent containers
is essential in many fields and depends heavily on vision. This work supplies a
new procedurally generated dataset consisting of 50k images of liquids and
solid objects inside transparent containers. The image annotations include 3D
models, material properties (color/transparency/roughness...), and segmentation
masks for the vessel and its content. The synthetic (CGI) part of the dataset
was procedurally generated using 13k different objects, 500 different
environments (HDRI), and 1450 material textures (PBR) combined with simulated
liquids and procedurally generated vessels. In addition, we supply 104
real-world images of objects inside transparent vessels with depth maps of both
the vessel and its content. We propose a camera agnostic method that predicts
3D models from an image as an XYZ map. This allows the trained net to predict
the 3D model as a map with XYZ coordinates per pixel without prior knowledge of
the image source. To calculate the training loss, we use the distance between
pairs of points inside the 3D model instead of the absolute XYZ coordinates.
This makes the loss function translation invariant. We use this to predict 3D
models of vessels and their content from a single image. Finally, we
demonstrate a net that uses a single image to predict the material properties
of the vessel content and surface.
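The translation-invariant training loss described above compares distances between pairs of predicted points with distances between the same pairs of ground-truth points, instead of comparing absolute XYZ coordinates. The sketch below is a minimal version of that idea with random pair sampling; the sampling strategy and the L1 penalty are assumptions, not the paper's exact loss.

```python
import torch

def pairwise_distance_loss(pred_xyz, gt_xyz, num_pairs=1024):
    """pred_xyz, gt_xyz: (N, 3) point maps flattened over valid pixels.
    Penalises the difference between predicted and ground-truth pair distances,
    which makes the loss invariant to a global translation of the prediction."""
    n = pred_xyz.shape[0]
    i = torch.randint(0, n, (num_pairs,))
    j = torch.randint(0, n, (num_pairs,))
    d_pred = (pred_xyz[i] - pred_xyz[j]).norm(dim=-1)
    d_gt = (gt_xyz[i] - gt_xyz[j]).norm(dim=-1)
    return (d_pred - d_gt).abs().mean()

gt = torch.rand(5000, 3)
pred = gt + torch.tensor([1.0, -2.0, 0.5])     # translated copy of the ground truth
print(pairwise_distance_loss(pred, gt))        # ~0: translation leaves pair distances unchanged
```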
|
[
{
"version": "v1",
"created": "Wed, 15 Sep 2021 21:16:36 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Dec 2021 21:12:50 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Eppel",
"Sagi",
""
],
[
"Xu",
"Haoping",
""
],
[
"Wang",
"Yi Ru",
""
],
[
"Aspuru-Guzik",
"Alan",
""
]
] |
new_dataset
| 0.981616 |
2110.00119
|
Jae Shin Yoon
|
Jae Shin Yoon, Zhixuan Yu, Jaesik Park, Hyun Soo Park
|
HUMBI: A Large Multiview Dataset of Human Body Expressions and Benchmark
Challenge
|
18 pages; Accepted to TPAMI
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a new large multiview dataset called HUMBI for human body
expressions with natural clothing. The goal of HUMBI is to facilitate modeling
view-specific appearance and geometry of five primary body signals including
gaze, face, hand, body, and garment from assorted people. 107 synchronized HD
cameras are used to capture 772 distinctive subjects across gender, ethnicity,
age, and style. With the multiview image streams, we reconstruct high fidelity
body expressions using 3D mesh models, which allows representing view-specific
appearance. We demonstrate that HUMBI is highly effective in learning and
reconstructing a complete human model and is complementary to the existing
datasets of human body expressions with limited views and subjects such as
MPII-Gaze, Multi-PIE, Human3.6M, and Panoptic Studio datasets. Based on HUMBI,
we formulate a new benchmark challenge of a pose-guided appearance rendering
task that aims to substantially extend photorealism in modeling diverse human
expressions in 3D, which is the key enabling factor of authentic social
tele-presence. HUMBI is publicly available at http://humbi-data.net
|
[
{
"version": "v1",
"created": "Thu, 30 Sep 2021 23:19:25 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Dec 2021 04:31:56 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Yoon",
"Jae Shin",
""
],
[
"Yu",
"Zhixuan",
""
],
[
"Park",
"Jaesik",
""
],
[
"Park",
"Hyun Soo",
""
]
] |
new_dataset
| 0.999841 |
2110.00677
|
Bryan Tan
|
Bryan Tan, Benjamin Mariano, Shuvendu K. Lahiri, Isil Dillig, Yu Feng
|
SolType: Refinement Types for Arithmetic Overflow in Solidity
|
To appear in POPL '22. This is the extended version of the paper with
the proofs, after the main text went through peer review. 51 pages, 15
figures
| null |
10.1145/1122445.1122456
| null |
cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
As smart contracts gain adoption in financial transactions, it becomes
increasingly important to ensure that they are free of bugs and security
vulnerabilities. Of particular relevance in this context are arithmetic
overflow bugs, as integers are often used to represent financial assets like
account balances. Motivated by this observation, this paper presents SolType, a
refinement type system for Solidity that can be used to prevent arithmetic
over- and under-flows in smart contracts. SolType allows developers to add
refinement type annotations and uses them to prove that arithmetic operations
do not lead to over- and under-flows. SolType incorporates a rich vocabulary of
refinement terms that allow expressing relationships between integer values and
aggregate properties of complex data structures. Furthermore, our
implementation, called Solid, incorporates a type inference engine and can
automatically infer useful type annotations, including non-trivial contract
invariants.
To evaluate the usefulness of our type system, we use Solid to prove
arithmetic safety of a total of 120 smart contracts. When used in its fully
automated mode (i.e., using Solid's type inference capabilities), Solid is able
to eliminate 86.3% of redundant runtime checks used to guard against overflows.
We also compare Solid against a state-of-the-art arithmetic safety verifier
called VeriSmart and show that Solid has a significantly lower false positive
rate, while being significantly faster in terms of verification time.
|
[
{
"version": "v1",
"created": "Fri, 1 Oct 2021 23:09:44 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Dec 2021 01:17:43 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Tan",
"Bryan",
""
],
[
"Mariano",
"Benjamin",
""
],
[
"Lahiri",
"Shuvendu K.",
""
],
[
"Dillig",
"Isil",
""
],
[
"Feng",
"Yu",
""
]
] |
new_dataset
| 0.968935 |
2110.14207
|
Ashwin Kalyan Vijayakumar
|
Ashwin Kalyan, Abhinav Kumar, Arjun Chandrasekaran, Ashish Sabharwal,
Peter Clark
|
How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New
Reasoning Challenge for AI
|
Accepted for publication at EMNLP 2021, 11 pages, 5 tables, 4 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Many real-world problems require the combined application of multiple
reasoning abilities employing suitable abstractions, commonsense knowledge, and
creative synthesis of problem-solving strategies. To help advance AI systems
towards such capabilities, we propose a new reasoning challenge, namely Fermi
Problems (FPs), which are questions whose answers can only be approximately
estimated because their precise computation is either impractical or
impossible. For example, "How much would the sea level rise if all ice in the
world melted?" FPs are commonly used in quizzes and interviews to bring out and
evaluate the creative reasoning abilities of humans. To do the same for AI
systems, we present two datasets: 1) A collection of 1k real-world FPs sourced
from quizzes and olympiads; and 2) a bank of 10k synthetic FPs of intermediate
complexity to serve as a sandbox for the harder real-world challenge. In
addition to question answer pairs, the datasets contain detailed solutions in
the form of an executable program and supporting facts, helping in supervision
and evaluation of intermediate steps. We demonstrate that even extensively
fine-tuned large scale language models perform poorly on these datasets, on
average making estimates that are off by two orders of magnitude. Our
contribution is thus the crystallization of several unsolved AI problems into a
single, new challenge that we hope will spur further advances in building
systems that can reason.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 06:39:33 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Dec 2021 01:05:22 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Kalyan",
"Ashwin",
""
],
[
"Kumar",
"Abhinav",
""
],
[
"Chandrasekaran",
"Arjun",
""
],
[
"Sabharwal",
"Ashish",
""
],
[
"Clark",
"Peter",
""
]
] |
new_dataset
| 0.990435 |
2112.02143
|
Bingbing Rao
|
Bingbing Rao, Ehsan Kazemi, Yifan Ding, Devu M Shila, Frank M. Tucker,
Liqiang Wang
|
CTIN: Robust Contextual Transformer Network for Inertial Navigation
|
Accepted as technical research paper in 36th AAAI Conference on
Artificial Intelligence, 2022
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, data-driven inertial navigation approaches have demonstrated their
capability of using well-trained neural networks to obtain accurate position
estimates from inertial measurement units (IMU) measurements. In this paper, we
propose a novel robust Contextual Transformer-based network for Inertial
Navigation~(CTIN) to accurately predict velocity and trajectory. To this end,
we first design a ResNet-based encoder enhanced by local and global multi-head
self-attention to capture spatial contextual information from IMU measurements.
Then we fuse these spatial representations with temporal knowledge by
leveraging multi-head attention in the Transformer decoder. Finally, multi-task
learning with uncertainty reduction is leveraged to improve learning efficiency
and prediction accuracy of velocity and trajectory. Through extensive
experiments over a wide range of inertial datasets~(e.g. RIDI, OxIOD, RoNIN,
IDOL, and our own), CTIN is very robust and outperforms state-of-the-art
models.
|
[
{
"version": "v1",
"created": "Fri, 3 Dec 2021 19:57:34 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Dec 2021 22:14:17 GMT"
}
] | 2021-12-22T00:00:00 |
[
[
"Rao",
"Bingbing",
""
],
[
"Kazemi",
"Ehsan",
""
],
[
"Ding",
"Yifan",
""
],
[
"Shila",
"Devu M",
""
],
[
"Tucker",
"Frank M.",
""
],
[
"Wang",
"Liqiang",
""
]
] |
new_dataset
| 0.991781 |