id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
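The header above fully determines the per-paper record layout. As a reading aid, here is a minimal Python sketch of that schema; the field names and types are copied from the header, while the `ArxivRow` name, the `high_confidence` helper, and its 0.99 threshold are illustrative assumptions rather than anything defined by the dump.

```python
from typing import Dict, List, Optional, TypedDict

# Field names and types mirror the header row above; the type name and the
# helper below are illustrative, not part of the dump itself.
ArxivRow = TypedDict(
    "ArxivRow",
    {
        "id": str,                     # arXiv identifier, 9-10 chars
        "submitter": Optional[str],    # nullable
        "authors": str,
        "title": str,
        "comments": Optional[str],     # nullable
        "journal-ref": Optional[str],  # nullable
        "doi": Optional[str],          # nullable
        "report-no": Optional[str],    # nullable
        "categories": str,             # space-separated arXiv categories
        "license": str,                # one of 9 license URLs
        "abstract": str,
        "versions": List[Dict[str, str]],   # e.g. [{"version": "v1", "created": "..."}]
        "update_date": str,                 # timestamp[s] in the source
        "authors_parsed": List[List[str]],  # [last name, first name, affiliation]
        "prediction": str,                  # single class; "new_dataset" in the rows shown
        "probability": float,               # float64 in [0.95, 1]
    },
)


def high_confidence(rows: List[ArxivRow], threshold: float = 0.99) -> List[ArxivRow]:
    """Keep only rows whose classifier probability meets the threshold."""
    return [row for row in rows if row["probability"] >= threshold]
```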
2109.00734
|
Masatoshi Osumi
|
Masatoshi Osumi
|
Ramsey Numbers of Trails
| null | null |
10.1587/transfun.2021DMP0003
| null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
We initiate the study of Ramsey numbers of trails. Let $k \geq 2$ be a
positive integer. The Ramsey number of trails with $k$ vertices is defined as
the smallest number $n$ such that for every graph $H$ with $n$ vertices,
$H$ or its complement $\overline{H}$ contains a trail with $k$ vertices. We prove
that the Ramsey number of trails with $k$ vertices is at most $k$ and at least
$2\sqrt{k}+\Theta(1)$. This improves the trivial upper bound of $\lfloor
3k/2\rfloor -1$.
|
[
{
"version": "v1",
"created": "Thu, 2 Sep 2021 06:23:21 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Osumi",
"Masatoshi",
""
]
] |
new_dataset
| 0.997771 |
2109.11725
|
Jonathan Mosheiff
|
Venkatesan Guruswami and Jonathan Mosheiff
|
Punctured Low-Bias Codes Behave Like Random Linear Codes
|
34 pages
| null | null | null |
cs.CC cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Random linear codes are a workhorse in coding theory, and are used to show
the existence of codes with the best known or even near-optimal trade-offs in
many noise models. However, they have little structure besides linearity, and
are not amenable to tractable error-correction algorithms.
In this work, we prove a general derandomization result applicable to random
linear codes. Namely, in settings where the coding-theoretic property of
interest is "local" (in the sense of forbidding certain bad configurations
involving few vectors -- code distance and list-decodability being notable
examples), one can replace random linear codes (RLCs) with a significantly
derandomized variant with essentially no loss in parameters. Specifically,
instead of randomly sampling coordinates of the (long) Hadamard code (which is
an equivalent way to describe RLCs), one can randomly sample coordinates of any
code with low bias. Over large alphabets, the low bias requirement can be
weakened to just large distance. Furthermore, large distance suffices even with
a small alphabet in order to match the current best known bounds for RLC
list-decodability.
In particular, by virtue of our result, all current (and future)
achievability bounds for list-decodability of random linear codes extend
automatically to random puncturings of any low-bias (or large alphabet)
"mother" code. We also show that our punctured codes emulate the behavior of
RLCs on stochastic channels, thus giving a derandomization of RLCs in the
context of achieving Shannon capacity as well. Thus, we have a
randomness-efficient way to sample codes achieving capacity in both worst-case
and stochastic settings that can further inherit algebraic or other
algorithmically useful structural properties of the mother code.
|
[
{
"version": "v1",
"created": "Fri, 24 Sep 2021 03:37:22 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Nov 2021 20:18:10 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Apr 2022 16:17:26 GMT"
},
{
"version": "v4",
"created": "Tue, 13 Sep 2022 17:26:39 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Guruswami",
"Venkatesan",
""
],
[
"Mosheiff",
"Jonathan",
""
]
] |
new_dataset
| 0.990018 |
2111.03552
|
Marsel Faizullin
|
Marsel Faizullin, Anastasiia Kornilova, Azat Akhmetyanov, Konstantin
Pakulev, Andrey Sadkov and Gonzalo Ferrer
|
SmartDepthSync: Open Source Synchronized Video Recording System of
Smartphone RGB and Depth Camera Range Image Frames with Sub-millisecond
Precision
|
IEEE Sensors Journal paper
| null |
10.1109/JSEN.2022.3150973
| null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Nowadays, smartphones can produce a synchronized (synced) stream of
high-quality data, including RGB images, inertial measurements, and other data.
Therefore, smartphones are becoming appealing sensor systems in the robotics
community. Unfortunately, there is still the need for external supporting
sensing hardware, such as a depth camera precisely synced with the smartphone
sensors.
In this paper, we propose a hardware-software recording system that presents
a heterogeneous structure and contains a smartphone and an external depth
camera for recording visual, depth, and inertial data that are mutually
synchronized. The system is synced at both the time and the frame levels: every
RGB image frame from the smartphone camera is exposed at the same moment as a
depth camera frame, with sub-millisecond precision. We provide a method
and a tool for sync performance evaluation that can be applied to any pair of
depth and RGB cameras. Our system could be replicated, modified, or extended by
employing our open-sourced materials.
|
[
{
"version": "v1",
"created": "Fri, 5 Nov 2021 15:16:54 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 16:22:38 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Sep 2022 12:10:30 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Faizullin",
"Marsel",
""
],
[
"Kornilova",
"Anastasiia",
""
],
[
"Akhmetyanov",
"Azat",
""
],
[
"Pakulev",
"Konstantin",
""
],
[
"Sadkov",
"Andrey",
""
],
[
"Ferrer",
"Gonzalo",
""
]
] |
new_dataset
| 0.999623 |
2112.02803
|
Li Wei
|
Li Wei, Chongwen Huang, George C. Alexandropoulos, Wei E. I. Sha,
Zhaoyang Zhang, Merouane Debbah, Chau Yuen
|
Multi-User Holographic MIMO Surfaces: Channel Modeling and Spectral
Efficiency Analysis
| null | null |
10.1109/JSTSP.2022.3176140
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The multi-user Holographic Multiple-Input and Multiple-Output Surface
(MU-HMIMOS) paradigm, which is capable of realizing large continuous apertures
with minimal power consumption, has recently been considered as an
energy-efficient solution for future wireless networks, offering increased
flexibility in impacting electromagnetic (EM) wave propagation according to the
desired communication, localization, and sensing objectives. The tractable
channel modeling in MU-HMIMOS wireless systems is one of the most critical
research challenges, mainly due to the coupling effect induced by the
excessively large number of closely spaced patch antennas. In this paper, we
focus on this challenge for the downlink of multi-user MIMO communications and
extend an EM-compliant channel model to the multi-user case, which is expressed in
the wavenumber domain using the Fourier plane wave approximation. Based on the
presented channel model, we investigate the spectral efficiency of maximum-ratio
transmission and Zero-Forcing (ZF) precoding schemes. We also introduce a novel
hardware-efficient ZF precoder, leveraging a Neumann series (NS) expansion to
replace the required matrix inversion operation, which is very hard to compute
in the conventional way due to the extremely large number of patch
antennas in the envisioned MU-HMIMOS communication systems. In comparison with
the conventional independent and identically distributed Rayleigh fading channels that ignore
antenna coupling effects, the proposed EM-compliant channel model captures the
mutual couplings induced by the very small antenna spacing. Our extensive
performance evaluation results demonstrate that our theoretical performance
expressions approximate sufficiently well ...
|
[
{
"version": "v1",
"created": "Mon, 6 Dec 2021 06:12:25 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 11:38:18 GMT"
},
{
"version": "v3",
"created": "Sun, 22 May 2022 03:28:47 GMT"
},
{
"version": "v4",
"created": "Tue, 24 May 2022 01:45:24 GMT"
},
{
"version": "v5",
"created": "Sun, 3 Jul 2022 14:00:00 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Wei",
"Li",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Sha",
"Wei E. I.",
""
],
[
"Zhang",
"Zhaoyang",
""
],
[
"Debbah",
"Merouane",
""
],
[
"Yuen",
"Chau",
""
]
] |
new_dataset
| 0.967573 |
2201.06286
|
Salar Mohtaj
|
Anik Jacobsen, Salar Mohtaj, Sebastian M\"oller
|
MuLVE, A Multi-Language Vocabulary Evaluation Data Set
|
Submitted to LREC 2022
|
Proceedings of the Language Resources and Evaluation Conference.
2022; 673-679
| null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Vocabulary learning is vital to foreign language learning. Correct and
adequate feedback is essential to successful and satisfying vocabulary
training. However, many vocabulary and language evaluation systems rely on
simple rules and do not account for real-life user learning data. This work
introduces the Multi-Language Vocabulary Evaluation Data Set (MuLVE), a data set
consisting of vocabulary cards and real-life user answers, labeled to indicate
whether the user answer is correct or incorrect. The data source is user
learning data from the Phase6 vocabulary trainer. The data set contains
vocabulary questions in German and English, Spanish, and French as target
languages, and is available in four different variations regarding pre-processing
and deduplication. We experiment with fine-tuning pre-trained BERT language models
on the downstream task of vocabulary evaluation with the proposed MuLVE data
set. The fine-tuned models achieve outstanding results, with accuracy and F2-score above 95.5.
The data set is available on the European Language Grid.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 09:02:59 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Jacobsen",
"Anik",
""
],
[
"Mohtaj",
"Salar",
""
],
[
"Möller",
"Sebastian",
""
]
] |
new_dataset
| 0.999555 |
2201.06573
|
Salar Mohtaj
|
Salar Mohtaj, Fatemeh Tavakkoli, Habibollah Asghari
|
PerPaDa: A Persian Paraphrase Dataset based on Implicit Crowdsourcing
Data Collection
|
Submitted to LREC 2022
|
Proceedings of the Language Resources and Evaluation Conference.
2022; 5090-5096
| null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we introduce PerPaDa, a Persian paraphrase dataset that is
collected from users' input in a plagiarism detection system. As an implicit
crowdsourcing experience, we have gathered a large collection of original and
paraphrased sentences from Hamtajoo, a Persian plagiarism detection system, in
which users try to conceal cases of text re-use in their documents by
paraphrasing and re-submitting manuscripts for analysis. The compiled dataset
contains 2446 instances of paraphrasing. In order to improve the overall
quality of the collected data, some heuristics have been used to exclude
sentences that don't meet the proposed criteria. The introduced corpus is much
larger than the available datasets for the task of paraphrase identification in
Persian. Moreover, there is less bias in the data compared to similar
datasets, since the users did not follow a fixed set of predefined rules to
generate texts similar to their original inputs.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 18:48:39 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Mohtaj",
"Salar",
""
],
[
"Tavakkoli",
"Fatemeh",
""
],
[
"Asghari",
"Habibollah",
""
]
] |
new_dataset
| 0.999826 |
2204.04779
|
Saadullah Amin
|
Saadullah Amin, Pasquale Minervini, David Chang, Pontus Stenetorp,
G\"unter Neumann
|
MedDistant19: Towards an Accurate Benchmark for Broad-Coverage
Biomedical Relation Extraction
|
Accepted by COLING 2022 (Oral presentation, Main Conference: Long
Papers)
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Relation extraction in the biomedical domain is challenging due to the lack
of labeled data and high annotation costs, which require domain experts. Distant
supervision is commonly used to tackle the scarcity of annotated data by
automatically pairing knowledge graph relationships with raw texts. Such a
pipeline is prone to noise and has added challenges to scale for covering a
large number of biomedical concepts. We investigated existing broad-coverage
distantly supervised biomedical relation extraction benchmarks and found a
significant overlap between training and test relationships ranging from 26% to
86%. Furthermore, we noticed several inconsistencies in the data construction
process of these benchmarks, and where there is no train-test leakage, the
focus is on interactions between narrower entity types. This work presents a
more accurate benchmark MedDistant19 for broad-coverage distantly supervised
biomedical relation extraction that addresses these shortcomings and is
obtained by aligning the MEDLINE abstracts with the widely used SNOMED Clinical
Terms knowledge base. Lacking a thorough evaluation with domain-specific language
models, we also conduct experiments validating whether general-domain relation
extraction findings transfer to biomedical relation extraction.
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 22:07:25 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2022 14:32:11 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Amin",
"Saadullah",
""
],
[
"Minervini",
"Pasquale",
""
],
[
"Chang",
"David",
""
],
[
"Stenetorp",
"Pontus",
""
],
[
"Neumann",
"Günter",
""
]
] |
new_dataset
| 0.9933 |
2207.06799
|
Shuchang Lyu
|
Qi Zhao, Shuchang Lyu, Wenpei Bai, Linghan Cai, Binghao Liu, Meijing
Wu, Xiubo Sang, Min Yang, Lijiang Chen
|
A Multi-Modality Ovarian Tumor Ultrasound Image Dataset for Unsupervised
Cross-Domain Semantic Segmentation
|
code: https://github.com/cv516Buaa/MMOTU_DS2Net; paper: 13 pages, 10
figures, 10 tables, 15 formulas
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ovarian cancer is one of the most harmful gynecological diseases. Detecting
ovarian tumors in early stage with computer-aided techniques can efficiently
decrease the mortality rate. With the improvement of medical treatment
standard, ultrasound images are widely applied in clinical treatment. However,
recent notable methods mainly focus on single-modality ultrasound ovarian tumor
segmentation or recognition, which means research on the representation
capability of multi-modality ultrasound ovarian tumor images is still lacking.
To solve this problem, we propose a Multi-Modality Ovarian Tumor
Ultrasound (MMOTU) image dataset containing 1469 2D ultrasound images and 170
contrast enhanced ultrasonography (CEUS) images with pixel-wise and global-wise
annotations. Based on MMOTU, we mainly focus on unsupervised cross-domain
semantic segmentation task. To solve the domain shift problem, we propose a
feature alignment based architecture named Dual-Scheme Domain-Selected Network
(DS2Net). Specifically, we first design source-encoder and target-encoder to
extract two-style features of source and target images. Then, we propose
Domain-Distinct Selected Module (DDSM) and Domain-Universal Selected Module
(DUSM) to extract the distinct and universal features in two styles
(source-style or target-style). Finally, we fuse these two kinds of features
and feed them into the source-decoder and target-decoder to generate final
predictions. Extensive comparison experiments and analysis on MMOTU image
dataset show that DS2Net can boost the segmentation performance for
bidirectional cross-domain adaptation of 2D ultrasound images and CEUS images.
Our proposed dataset and code are all available at
https://github.com/cv516Buaa/MMOTU_DS2Net.
|
[
{
"version": "v1",
"created": "Thu, 14 Jul 2022 10:23:17 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Aug 2022 17:17:50 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Sep 2022 18:17:54 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Zhao",
"Qi",
""
],
[
"Lyu",
"Shuchang",
""
],
[
"Bai",
"Wenpei",
""
],
[
"Cai",
"Linghan",
""
],
[
"Liu",
"Binghao",
""
],
[
"Wu",
"Meijing",
""
],
[
"Sang",
"Xiubo",
""
],
[
"Yang",
"Min",
""
],
[
"Chen",
"Lijiang",
""
]
] |
new_dataset
| 0.998802 |
2207.14087
|
Hao Sun
|
Hao Sun, Hongyi Wang, Jiaqing Liu, Yen-Wei Chen, and Lanfen Lin
|
CubeMLP: An MLP-based Model for Multimodal Sentiment Analysis and
Depression Estimation
|
Accepted by ACM MM 2022
| null |
10.1145/3503161.3548025
| null |
cs.MM cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal sentiment analysis and depression estimation are two important
research topics that aim to predict human mental states using multimodal data.
Previous research has focused on developing effective fusion strategies for
exchanging and integrating mind-related information from different modalities.
Some MLP-based techniques have recently achieved considerable success in a
variety of computer vision tasks. Inspired by this, we explore multimodal
approaches with a feature-mixing perspective in this study. To this end, we
introduce CubeMLP, a multimodal feature processing framework based entirely on
MLP. CubeMLP consists of three independent MLP units, each of which has two
affine transformations. CubeMLP accepts all relevant modality features as input
and mixes them across three axes. After extracting the characteristics using
CubeMLP, the mixed multimodal features are flattened for task predictions. Our
experiments are conducted on sentiment analysis datasets: CMU-MOSI and
CMU-MOSEI, and depression estimation dataset: AVEC2019. The results show that
CubeMLP can achieve state-of-the-art performance with a much lower computing
cost.
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 13:50:55 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Aug 2022 04:16:59 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Sep 2022 03:20:18 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Sun",
"Hao",
""
],
[
"Wang",
"Hongyi",
""
],
[
"Liu",
"Jiaqing",
""
],
[
"Chen",
"Yen-Wei",
""
],
[
"Lin",
"Lanfen",
""
]
] |
new_dataset
| 0.995582 |
2209.00383
|
yangtao wang
|
Yangtao Wang (M-PSI), Xi Shen, Yuan Yuan (MIT CSAIL), Yuming Du,
Maomao Li, Shell Xu Hu, James L Crowley (M-PSI), Dominique Vaufreydaz (M-PSI)
|
TokenCut: Segmenting Objects in Images and Videos with Self-supervised
Transformer and Normalized Cut
|
arXiv admin note: substantial text overlap with arXiv:2202.11539
| null | null | null |
cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we describe a graph-based algorithm that uses the features
obtained by a self-supervised transformer to detect and segment salient objects
in images and videos. With this approach, the image patches that compose an
image or video are organised into a fully connected graph, where the edge
between each pair of patches is labeled with a similarity score between patches
using features learned by the transformer. Detection and segmentation of
salient objects is then formulated as a graph-cut problem and solved using the
classical Normalized Cut algorithm. Despite the simplicity of this approach, it
achieves state-of-the-art results on several common image and video detection
and segmentation tasks. For unsupervised object discovery, this approach
outperforms the competing approaches by a margin of 6.1%, 5.7%, and 2.6%,
respectively, when tested with the VOC07, VOC12, and COCO20K datasets. For the
unsupervised saliency detection task in images, this method improves the score
for Intersection over Union (IoU) by 4.4%, 5.6%, and 5.2% when tested with the
ECSSD, DUTS, and DUT-OMRON datasets, respectively, compared to current
state-of-the-art techniques. This method also achieves competitive results for
unsupervised video object segmentation tasks with the DAVIS, SegTV2, and FBMS
datasets.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 11:52:26 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2022 12:33:17 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Wang",
"Yangtao",
"",
"M-PSI"
],
[
"Shen",
"Xi",
"",
"MIT CSAIL"
],
[
"Yuan",
"Yuan",
"",
"MIT CSAIL"
],
[
"Du",
"Yuming",
"",
"M-PSI"
],
[
"Li",
"Maomao",
"",
"M-PSI"
],
[
"Hu",
"Shell Xu",
"",
"M-PSI"
],
[
"Crowley",
"James L",
"",
"M-PSI"
],
[
"Vaufreydaz",
"Dominique",
"",
"M-PSI"
]
] |
new_dataset
| 0.999157 |
2209.02280
|
Letian Yu
|
Letian Yu, Haiyang Mei, Wen Dong, Ziqi Wei, Li Zhu, Yuxin Wang, Xin
Yang
|
Progressive Glass Segmentation
| null | null |
10.1109/TIP.2022.3162709
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Glass is very common in the real world. Influenced by the uncertainty about
the glass region and the varying complex scenes behind the glass, the existence
of glass poses severe challenges to many computer vision tasks, making glass
segmentation an important computer vision task. Glass does not have its own
visual appearance but only transmits/reflects the appearance of its
surroundings, making it fundamentally different from other common objects. To
address such a challenging task, existing methods typically explore and combine
useful cues from different levels of features in the deep network. As there
exists a characteristic gap between level-different features, i.e., deep layer
features embed more high-level semantics and are better at locating the target
objects while shallow layer features have larger spatial sizes and keep richer
and more detailed low-level information, naively fusing these features would
thus lead to a sub-optimal solution. In this paper, we approach effective
feature fusion for accurate glass segmentation in two steps. First, we
attempt to bridge the characteristic gap between different levels of features
by developing a Discriminability Enhancement (DE) module which enables
level-specific features to be a more discriminative representation, alleviating
the incompatibility of features for fusion. Second, we design a
Focus-and-Exploration Based Fusion (FEBF) module to richly excavate useful
information in the fusion process by highlighting the common and exploring the
difference between level-different features.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 08:11:17 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Yu",
"Letian",
""
],
[
"Mei",
"Haiyang",
""
],
[
"Dong",
"Wen",
""
],
[
"Wei",
"Ziqi",
""
],
[
"Zhu",
"Li",
""
],
[
"Wang",
"Yuxin",
""
],
[
"Yang",
"Xin",
""
]
] |
new_dataset
| 0.995012 |
2209.03528
|
Harsh Verma
|
Harsh Verma, Parsa Bagherzadeh, Sabine Bergler
|
CLaCLab at SocialDisNER: Using Medical Gazetteers for Named-Entity
Recognition of Disease Mentions in Spanish Tweets
|
In Proceedings of the Social Media Mining for Health Applications
Workshop at COLING 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper summarizes the CLaC submission for SMM4H 2022 Task 10 which
concerns the recognition of diseases mentioned in Spanish tweets. Before
classifying each token, we encode each token with a transformer encoder using
features from Multilingual RoBERTa Large, UMLS gazetteer, and DISTEMIST
gazetteer, among others. We obtain a strict F1 score of 0.869, with competition
mean of 0.675, standard deviation of 0.245, and median of 0.761.
|
[
{
"version": "v1",
"created": "Thu, 8 Sep 2022 02:08:51 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2022 01:34:11 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Verma",
"Harsh",
""
],
[
"Bagherzadeh",
"Parsa",
""
],
[
"Bergler",
"Sabine",
""
]
] |
new_dataset
| 0.962914 |
2209.05481
|
Bing Su
|
Bing Su, Dazhao Du, Zhao Yang, Yujie Zhou, Jiangmeng Li, Anyi Rao, Hao
Sun, Zhiwu Lu, Ji-Rong Wen
|
A Molecular Multimodal Foundation Model Associating Molecule Graphs with
Natural Language
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although artificial intelligence (AI) has made significant progress in
understanding molecules in a wide range of fields, existing models generally
acquire a single cognitive ability from a single molecular modality. Since
the hierarchy of molecular knowledge is profound, even humans learn from
different modalities including both intuitive diagrams and professional texts
to assist their understanding. Inspired by this, we propose a molecular
multimodal foundation model which is pretrained from molecular graphs and their
semantically related textual data (crawled from published Scientific Citation
Index papers) via contrastive learning. This AI model represents a critical
attempt that directly bridges molecular graphs and natural language.
Importantly, through capturing the specific and complementary information of
the two modalities, our proposed model can better grasp molecular expertise.
Experimental results show that our model not only exhibits promising
performance in cross-modal tasks such as cross-modal retrieval and molecule
captioning, but also enhances molecular property prediction and possesses the
capability to generate meaningful molecular graphs from natural language
descriptions. We believe that our model would have a broad impact on
AI-empowered fields across disciplines such as biology, chemistry, materials,
environment, and medicine, among others.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 00:56:57 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Su",
"Bing",
""
],
[
"Du",
"Dazhao",
""
],
[
"Yang",
"Zhao",
""
],
[
"Zhou",
"Yujie",
""
],
[
"Li",
"Jiangmeng",
""
],
[
"Rao",
"Anyi",
""
],
[
"Sun",
"Hao",
""
],
[
"Lu",
"Zhiwu",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
new_dataset
| 0.999123 |
2209.05520
|
Hang Zhou
|
Fabrizio Grandoni, Claire Mathieu, and Hang Zhou
|
Unsplittable Euclidean Capacitated Vehicle Routing: A
$(2+\epsilon)$-Approximation Algorithm
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
In the unsplittable capacitated vehicle routing problem, we are given a
metric space with a vertex called depot and a set of vertices called terminals.
Each terminal is associated with a positive demand between 0 and 1. The goal is
to find a minimum length collection of tours starting and ending at the depot
such that the demand of each terminal is covered by a single tour (i.e., the
demand cannot be split), and the total demand of the terminals in each tour
does not exceed the capacity of 1.
Our main result is a polynomial-time $(2+\epsilon)$-approximation algorithm
for this problem in the two-dimensional Euclidean plane, i.e., for the special
case where the terminals and the depot are associated with points in the
Euclidean plane and their distances are defined accordingly. This improves on
recent work by Blauth, Traub, and Vygen [IPCO'21] and Friggstad, Mousavi,
Rahgoshay, and Salavatipour [IPCO'22].
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 18:08:00 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Grandoni",
"Fabrizio",
""
],
[
"Mathieu",
"Claire",
""
],
[
"Zhou",
"Hang",
""
]
] |
new_dataset
| 0.993522 |
2209.05556
|
Sreejeet Maity
|
Dibyendu Roy, Sreejeet Maity, Madhubanti Maitra, Samar Bhattacharya
|
Fragile object transportation by a multi-robot system in an unknown
environment using a semi-decentralized control approach
|
7 pages,8 figures, IEEE International Conference on Robotics and
Automation (ICRA) 2023
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a semi-decentralized control technique for a
swarm of robots transporting a fragile object to a destination in an uncertain
occluded environment. The proposed approach is split into two parts. The
initial part (Phase 1) includes a centralized control strategy for creating a
specific formation among the agents so that the object to be transported, can
be positioned properly on the top of the system. We present a novel triangle
packing scheme fused with a circular region-based shape control method for
creating a rigid configuration among the robots. In the later part (Phase 2),
the swarm system is required to convey the object to the destination in a
decentralized way employing the region-based shape control approach. The
simulation results, as well as the comparison study, demonstrate the
effectiveness of our proposed scheme.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 19:16:18 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Roy",
"Dibyendu",
""
],
[
"Maity",
"Sreejeet",
""
],
[
"Maitra",
"Madhubanti",
""
],
[
"Bhattacharya",
"Samar",
""
]
] |
new_dataset
| 0.999236 |
2209.05566
|
Rakesh Nadig
|
Jisung Park, Roknoddin Azizi, Geraldo F. Oliveira, Mohammad
Sadrosadati, Rakesh Nadig, David Novo, Juan G\'omez-Luna, Myungsuk Kim, Onur
Mutlu
|
Flash-Cosmos: In-Flash Bulk Bitwise Operations Using Inherent
Computation Capability of NAND Flash Memory
|
To appear in 55th IEEE/ACM International Symposium on
Microarchitecture (MICRO), 2022
| null | null | null |
cs.AR cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Bulk bitwise operations, i.e., bitwise operations on large bit vectors, are
prevalent in a wide range of important application domains, including
databases, graph processing, genome analysis, cryptography, and
hyper-dimensional computing. In conventional systems, the performance and
energy efficiency of bulk bitwise operations are bottlenecked by data movement
between the compute units and the memory hierarchy. In-flash processing (i.e.,
processing data inside NAND flash chips) has a high potential to accelerate
bulk bitwise operations by fundamentally reducing data movement through the
entire memory hierarchy. We identify two key limitations of the
state-of-the-art in-flash processing technique for bulk bitwise operations: (i)
it falls short of maximally exploiting the bit-level parallelism of bulk
bitwise operations; (ii) it is unreliable because it does not consider the
highly error-prone nature of NAND flash memory. We propose Flash-Cosmos (Flash
Computation with One-Shot Multi-Operand Sensing), a new in-flash processing
technique that significantly increases the performance and energy efficiency of
bulk bitwise operations while providing high reliability. Flash-Cosmos
introduces two key mechanisms that can be easily supported in modern NAND flash
chips: (i) Multi-Wordline Sensing (MWS), which enables bulk bitwise operations
on a large number of operands with a single sensing operation, and (ii)
Enhanced SLC-mode Programming (ESP), which enables reliable computation inside
NAND flash memory. We demonstrate the feasibility of performing bulk bitwise
operations with high reliability in Flash-Cosmos by testing 160 real 3D NAND
flash chips. Our evaluation shows that Flash-Cosmos improves average
performance and energy efficiency by 3.5x/32x and 3.3x/95x, respectively, over
the state-of-the-art in-flash/outside-storage processing techniques across
three real-world applications.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 19:37:09 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Park",
"Jisung",
""
],
[
"Azizi",
"Roknoddin",
""
],
[
"Oliveira",
"Geraldo F.",
""
],
[
"Sadrosadati",
"Mohammad",
""
],
[
"Nadig",
"Rakesh",
""
],
[
"Novo",
"David",
""
],
[
"Gómez-Luna",
"Juan",
""
],
[
"Kim",
"Myungsuk",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.970813 |
2209.05574
|
Sandeep Banik
|
Sandeep Banik and Shaunak D. Bopardikar
|
FlipDyn: A game of resource takeovers in dynamical systems
|
8 pages, 13 figures, accepted at the 61st IEEE Conference on Decision
and Control, 2022, in Canc\'un, Mexico
| null | null | null |
cs.GT cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a game in which two players with opposing objectives seek to
repeatedly take over a common resource. The resource is modeled as a discrete
time dynamical system over which a player can gain control after spending a
state-dependent amount of energy at each time step. We use a FlipIT-inspired
deterministic model that decides which player is in control at every time step.
A player's policy is the probability with which the player should spend energy
to gain control at each time step. Our main results are three-fold. First, we
present analytic expressions for the cost-to-go as a function of the hybrid
state of the system, i.e., the physical state of the dynamical system and the
binary \texttt{FlipDyn} state for any general system with arbitrary costs.
These expressions are exact when the physical state is also discrete and has
finite cardinality. Second, for a continuous physical state with linear
dynamics and quadratic costs, we derive expressions for Nash equilibrium (NE).
For scalar physical states, we show that the NE depends only on the parameters
of the value function and costs, and is independent of the state. Third, we
derive an approximate value function for higher dimensional linear systems with
quadratic costs. Finally, we illustrate our results through a numerical study
on the problem of controlling a linear system in a given environment in the
presence of an adversary.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 19:58:14 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Banik",
"Sandeep",
""
],
[
"Bopardikar",
"Shaunak D.",
""
]
] |
new_dataset
| 0.987712 |
2209.05579
|
Mikel Ngueajio Kengni
|
Mikel K. Ngueajio, Gloria Washington, Danda B. Rawat, and Yolande
Ngueabou
|
Intrusion Detection Systems Using Support Vector Machines on the
KDDCUP'99 and NSL-KDD Datasets: A Comprehensive Survey
| null |
Proceedings of SAI intelligent systems conference, IntelliSys
2022, Intelligent Systems and Applications, pages 609 to 629
|
10.1007/978-3-031-16078-3_42
| null |
cs.CR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With the growing rates of cyber-attacks and cyber espionage, the need for
better and more powerful intrusion detection systems (IDS) is even more
warranted nowadays. The basic task of an IDS is to act as the first line of
defense in detecting attacks on the internet. As intrusion tactics from
intruders become more sophisticated and difficult to detect, researchers have
started to apply novel Machine Learning (ML) techniques to effectively detect
intruders and hence preserve internet users' information and overall trust in
the entire internet network security. Over the last decade, there has been an
explosion of research on intrusion detection techniques based on ML and Deep
Learning (DL) architectures on various cyber security-based datasets such as
the DARPA, KDDCUP'99, NSL-KDD, CAIDA, CTU-13, UNSW-NB15. In this research, we
review contemporary literature and provide a comprehensive survey of different
types of intrusion detection techniques that apply Support Vector Machine
(SVM) algorithms as a classifier. We focus only on studies that have been
evaluated on the two most widely used datasets in cybersecurity, namely the
KDDCUP'99 and the NSL-KDD datasets. We provide a summary of each method,
identifying the role of the SVMs classifier, and all other algorithms involved
in the studies. Furthermore, we present a critical review of each method, in
tabular form, highlighting the performance measures, strengths, and limitations
of each of the methods surveyed.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 20:02:12 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Ngueajio",
"Mikel K.",
""
],
[
"Washington",
"Gloria",
""
],
[
"Rawat",
"Danda B.",
""
],
[
"Ngueabou",
"Yolande",
""
]
] |
new_dataset
| 0.994705 |
2209.05588
|
Zixiang Zhou
|
Zixiang Zhou, Xiangchen Zhao, Yu Wang, Panqu Wang, Hassan Foroosh
|
CenterFormer: Center-based Transformer for 3D Object Detection
|
Accepted to ECCV 2022 (oral)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Query-based transformer has shown great potential in constructing long-range
attention in many image-domain tasks, but has rarely been considered in
LiDAR-based 3D object detection due to the overwhelming size of the point cloud
data. In this paper, we propose CenterFormer, a center-based transformer
network for 3D object detection. CenterFormer first uses a center heatmap to
select center candidates on top of a standard voxel-based point cloud encoder.
It then uses the feature of the center candidate as the query embedding in the
transformer. To further aggregate features from multiple frames, we design an
approach to fuse features through cross-attention. Lastly, regression heads are
added to predict the bounding box on the output center feature representation.
Our design reduces the convergence difficulty and computational complexity of
the transformer structure. The results show significant improvements over the
strong baseline of anchor-free object detection networks. CenterFormer achieves
state-of-the-art performance for a single model on the Waymo Open Dataset, with
73.7% mAPH on the validation set and 75.6% mAPH on the test set, significantly
outperforming all previously published CNN and transformer-based methods. Our
code is publicly available at https://github.com/TuSimple/centerformer
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 20:15:11 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Zhou",
"Zixiang",
""
],
[
"Zhao",
"Xiangchen",
""
],
[
"Wang",
"Yu",
""
],
[
"Wang",
"Panqu",
""
],
[
"Foroosh",
"Hassan",
""
]
] |
new_dataset
| 0.996216 |
2209.05603
|
Zhenishbek Zhakypov
|
Zhenishbek Zhakypov, Yimeng Qin, and Allison Okamura
|
Hoxels: Fully 3-D Printed Soft Multi-Modal & Multi-Contact Haptic Voxel
Displays for Enriched Tactile Information Transfer
|
The extended abstract paper was presented in the LEVERAGING
ADVANCEMENTS IN SMART MATERIALS SCIENCE: SOFT ROBOTS GAINING NEW ABILITIES
THROUGH SMART AND FUNCTIONAL MATERIALS workshop at the 2022 IEEE
International Conference on Robotics and Automation
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Wrist-worn haptic interfaces can deliver a wide range of tactile cues for
communication of information and interaction with virtual objects. Unlike
fingertips, the wrist and forearm provide a considerably large area of skin
that allows the placement of multiple haptic actuators as a display for
enriching tactile information transfer with minimal encumbrance. Existing
multi-degree-of-freedom (DoF) wrist-worn devices employ traditional rigid
robotic mechanisms and electric motors that limit their versatility,
miniaturization, distribution, and assembly. Alternative solutions based on
soft elastomeric actuator arrays constitute only 1-DoF haptic pixels.
Higher-DoF prototypes produce a single interaction point and require complex
manual assembly processes, such as molding and gluing several parts. These
approaches limit the construction of high-DoF compact haptic displays,
repeatability, and customizability. Here we present a novel, fully 3D-printed,
soft, wearable haptic display for increasing tactile information transfer on
the wrist and forearm with 3-DoF haptic voxels, called hoxels. Our initial
prototype comprises two hoxels that provide skin shear, pressure, twist,
stretch, squeeze, and other arbitrary stimuli. Each hoxel generates forces of up to
1.6 N in the x- and y-axes and up to 20 N in the z-axis. Our method enables the
rapid fabrication of versatile and forceful haptic displays.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 20:38:03 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Zhakypov",
"Zhenishbek",
""
],
[
"Qin",
"Yimeng",
""
],
[
"Okamura",
"Allison",
""
]
] |
new_dataset
| 0.999041 |
2209.05612
|
Sanjay Haresh
|
Sanjay Haresh, Xiaohao Sun, Hanxiao Jiang, Angel X. Chang, Manolis
Savva
|
Articulated 3D Human-Object Interactions from RGB Videos: An Empirical
Analysis of Approaches and Challenges
|
3DV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human-object interactions with articulated objects are common in everyday
life. Despite much progress in single-view 3D reconstruction, it is still
challenging to infer an articulated 3D object model from an RGB video showing a
person manipulating the object. We canonicalize the task of articulated 3D
human-object interaction reconstruction from RGB video, and carry out a
systematic benchmark of five families of methods for this task: 3D plane
estimation, 3D cuboid estimation, CAD model fitting, implicit field fitting,
and free-form mesh fitting. Our experiments show that all methods struggle to
obtain high-accuracy results even when provided with ground-truth information about
the observed objects. We identify key factors which make the task challenging
and suggest directions for future work on this challenging 3D computer vision
task. Short video summary at https://www.youtube.com/watch?v=5tAlKBojZwc
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 21:03:25 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Haresh",
"Sanjay",
""
],
[
"Sun",
"Xiaohao",
""
],
[
"Jiang",
"Hanxiao",
""
],
[
"Chang",
"Angel X.",
""
],
[
"Savva",
"Manolis",
""
]
] |
new_dataset
| 0.998699 |
2209.05633
|
Alexander Spiegelman
|
Alexander Spiegelman, Neil Giridharan, Alberto Sonnino, Lefteris
Kokoris-Kogias
|
Bullshark: The Partially Synchronous Version
| null | null | null | null |
cs.DC cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The purpose of this manuscript is to describe the deterministic partially
synchronous version of Bullshark in a simple and clean way. This result is
published in CCS 2022, however, the description there is less clear because it
uses the terminology of the full asynchronous Bullshark. The CCS version ties
the description of the asynchronous and partially synchronous versions of
Bullshark since it targets an academic audience. Due to the recent interest in
DAG-based BFT protocols, we provide a separate and simple description of the
partially synchronous version that targets a more general audience. We focus
here on the DAG ordering logic. For more details about the asynchronous
version, garbage collection, fairness, proofs, related work, evaluation, and
efficient DAG implementation, please refer to the full paper. An intuitive
extended summary can be found in the "DAG meets BFT" blog post.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 21:58:13 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Spiegelman",
"Alexander",
""
],
[
"Giridharan",
"Neil",
""
],
[
"Sonnino",
"Alberto",
""
],
[
"Kokoris-Kogias",
"Lefteris",
""
]
] |
new_dataset
| 0.997755 |
2209.05667
|
Basheer Qolomany
|
Aos Mulahuwaish, Manish Osti, Kevin Gyorick, Majdi Maabreh, Ajay
Gupta, and Basheer Qolomany
|
CovidMis20: COVID-19 Misinformation Detection System on Twitter Tweets
using Deep Learning Models
| null | null | null | null |
cs.LG cs.CL cs.HC cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Online news and information sources are convenient and accessible ways to
learn about current issues. For instance, more than 300 million people engage
with posts on Twitter globally, which provides the possibility to disseminate
misleading information. There are numerous cases where violent crimes have been
committed due to fake news. This research presents the CovidMis20 dataset
(COVID-19 Misinformation 2020 dataset), which consists of 1,375,592 tweets
collected from February to July 2020. CovidMis20 can be automatically updated
to fetch the latest news and is publicly available at:
https://github.com/everythingguy/CovidMis20. This research was conducted using
Bi-LSTM deep learning and an ensemble CNN+Bi-GRU for fake news detection. The
results showed that, with testing accuracy of 92.23% and 90.56%, respectively,
the ensemble CNN+Bi-GRU model consistently provided higher accuracy than the
Bi-LSTM model.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 00:43:44 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Mulahuwaish",
"Aos",
""
],
[
"Osti",
"Manish",
""
],
[
"Gyorick",
"Kevin",
""
],
[
"Maabreh",
"Majdi",
""
],
[
"Gupta",
"Ajay",
""
],
[
"Qolomany",
"Basheer",
""
]
] |
new_dataset
| 0.999716 |
2209.05698
|
Feng Zhao
|
Feng Zhao, Ziqi Zhang, Donglin Wang
|
KSG: Knowledge and Skill Graph
|
5 pages, 7 figures, published to CIKM 2022
| null |
10.1145/3511808.3557623
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The knowledge graph (KG) is an essential form of knowledge representation
that has grown in prominence in recent years. Because they concentrate on
nominal entities and their relationships, traditional knowledge graphs are
static and encyclopedic in nature. On this basis, the event knowledge graph (Event
KG) models temporal and spatial dynamics through text processing to facilitate
downstream applications, such as question-answering, recommendation and
intelligent search. Existing KG research, on the other hand, mostly focuses on
text processing and static facts, ignoring the vast quantity of dynamic
behavioral information included in photos, movies, and pre-trained neural
networks. In addition, no effort has been made to incorporate behavioral
intelligence information into the knowledge graph for deep reinforcement
learning (DRL) and robot learning. In this paper, we propose a novel dynamic
knowledge and skill graph (KSG), and then we develop a basic and specific KSG
based on CN-DBpedia. The nodes are divided into entity and attribute nodes,
with entity nodes containing the agent, environment, and skill (DRL policy or
policy representation), and attribute nodes containing the entity description,
pre-train network, and offline dataset. KSG can search for different agents'
skills in various environments and provide transferable information for
acquiring new skills. This is the first study that we are aware of that looks
into dynamic KSG for skill retrieval and learning. Extensive experimental
results on new skill learning show that KSG boosts new skill learning
efficiency.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 02:47:46 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Zhao",
"Feng",
""
],
[
"Zhang",
"Ziqi",
""
],
[
"Wang",
"Donglin",
""
]
] |
new_dataset
| 0.999191 |
2209.05707
|
Daniel DiPietro
|
Daniel DiPietro, Vivek Hazari, Soroush Vosoughi
|
Robin: A Novel Online Suicidal Text Corpus of Substantial Breadth and
Scale
|
10 pages, 4 figures
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Suicide is a major public health crisis. With more than 20,000,000 suicide
attempts each year, the early detection of suicidal intent has the potential to
save hundreds of thousands of lives. Traditional mental health screening
methods are time-consuming, costly, and often inaccessible to disadvantaged
populations; online detection of suicidal intent using machine learning offers
a viable alternative. Here we present Robin, the largest non-keyword generated
suicidal corpus to date, consisting of over 1.1 million online forum postings.
In addition to its unprecedented size, Robin is specially constructed to
include various categories of suicidal text, such as suicide bereavement and
flippant references, better enabling models trained on Robin to learn the
subtle nuances of text expressing suicidal ideation. Experimental results
achieve state-of-the-art performance for the classification of suicidal text,
both with traditional methods like logistic regression (F1=0.85), as well as
with large-scale pre-trained language models like BERT (F1=0.92). Finally, we
release the Robin dataset publicly as a machine learning resource with the
potential to drive the next generation of suicidal sentiment research.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 03:32:47 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"DiPietro",
"Daniel",
""
],
[
"Hazari",
"Vivek",
""
],
[
"Vosoughi",
"Soroush",
""
]
] |
new_dataset
| 0.999761 |
2209.05708
|
Shuaixin Li
|
Shuaixin Li, Bin Tian, Zhu Xiaozhou, Gui Jianjun, Yao Wen and Guangyun
Li
|
InTEn-LOAM: Intensity and Temporal Enhanced LiDAR Odometry and Mapping
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional LiDAR odometry (LO) systems mainly leverage geometric information
obtained from the traversed surroundings to register laser scans and estimate
LiDAR ego-motion, while it may be unreliable in dynamic or unstructured
environments. This paper proposes InTEn-LOAM, a low-drift and robust LiDAR
odometry and mapping method that fully exploits implicit information of laser
sweeps (i.e., geometric, intensity, and temporal characteristics). Scanned
points are projected to cylindrical images, which facilitate the efficient and
adaptive extraction of various types of features, i.e., ground, beam, facade,
and reflector. We propose a novel intensity-based points registration algorithm
and incorporate it into the LiDAR odometry, enabling the LO system to jointly
estimate the LiDAR ego-motion using both geometric and intensity feature
points. To eliminate the interference of dynamic objects, we propose a
temporal-based dynamic object removal approach to filter them out before map
update. Moreover, the local map is organized and downsampled using a
temporal-related voxel grid filter to maintain the similarity between the
current scan and the static local map. Extensive experiments are conducted on
both simulated and real-world datasets. The results show that the proposed
method achieves similar or better accuracy w.r.t. the state of the art in
normal driving scenarios and outperforms geometric-based LO in unstructured
environments.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 03:36:34 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Li",
"Shuaixin",
""
],
[
"Tian",
"Bin",
""
],
[
"Xiaozhou",
"Zhu",
""
],
[
"Jianjun",
"Gui",
""
],
[
"Wen",
"Yao",
""
],
[
"Li",
"Guangyun",
""
]
] |
new_dataset
| 0.99924 |
2209.05834
|
Chen Rothschild
|
Chen Rothschild
|
Computer vision system to count crustacean larvae
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Fish products account for about 16 percent of the human diet worldwide, as of
2017. The counting action is a significant component in growing and producing
these products. Growers must count the fish accurately; to do so, technological
solutions are needed. Two computer vision systems to automatically count
crustacean larvae grown in industrial ponds were developed. The first system
included an iPhone 11 camera with 3024X4032 resolution which acquired images
from an industrial pond in indoor conditions. Two experiments were performed
with this system, the first one included 200 images acquired in one day on
growth stages 9 and 10 with an iPhone 11 camera under a specific illumination condition.
In the second experiment, a larvae industrial pond was photographed for 11 days
with two devices: an iPhone 11 and a SONY DSCHX90V camera. With the first
device (iPhone 11), two illumination conditions were tested. In each condition,
110 images were acquired. That system resulted in an image detection accuracy of
88.4 percent. The second system included a DSLR Nikon D510 camera with a
2000X2000 resolution with which seven experiments were performed outside the
industrial pond. Images were acquired on day 1 of the larval growth stage,
resulting in the acquisition of a total of 700 images. That system resulted in
an accuracy of 86 percent for a density of 50. An algorithm that automatically
counts the number of larvae was developed for both cases based on the YOLOv5
CNN model. In addition, in this study, a larvae growth function was developed.
Daily, several larvae were taken manually from the industrial pond and analyzed
under a microscope. Once the growth stage was determined, images of the larva
were acquired. Each larva's length was measured manually from the images. The
most suitable model was the Gompertz model, with a goodness-of-fit index (R
squared) of 0.983.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 09:18:13 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Rothschild",
"Chen",
""
]
] |
new_dataset
| 0.998474 |
2209.05840
|
Keisuke Shirai
|
Keisuke Shirai, Atsushi Hashimoto, Taichi Nishimura, Hirotaka Kameko,
Shuhei Kurita, Yoshitaka Ushiku, Shinsuke Mori
|
Visual Recipe Flow: A Dataset for Learning Visual State Changes of
Objects with Recipe Flows
|
COLING 2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new multimodal dataset called Visual Recipe Flow, which enables
us to learn the result of each cooking action in a recipe text. The dataset consists
of object state changes and the workflow of the recipe text. The state change
is represented as an image pair, while the workflow is represented as a recipe
flow graph (r-FG). The image pairs are grounded in the r-FG, which provides the
cross-modal relation. With our dataset, one can try a range of applications,
from multimodal commonsense reasoning to procedural text generation.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 09:38:32 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Shirai",
"Keisuke",
""
],
[
"Hashimoto",
"Atsushi",
""
],
[
"Nishimura",
"Taichi",
""
],
[
"Kameko",
"Hirotaka",
""
],
[
"Kurita",
"Shuhei",
""
],
[
"Ushiku",
"Yoshitaka",
""
],
[
"Mori",
"Shinsuke",
""
]
] |
new_dataset
| 0.999767 |
2209.05877
|
Uche Onyekpe Dr
|
Uche Onyekpe, Alicja Szkolnik, Vasile Palade, Stratis Kanarachos,
Michael E. Fitzpatrick
|
R-WhONet: Recalibrated Wheel Odometry Neural Network for Vehicular
Positioning using Transfer Learning
|
arXiv admin note: text overlap with arXiv:2104.02581
| null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a transfer learning approach to recalibrate our
previously developed Wheel Odometry Neural Network (WhONet) for vehicle
positioning in environments where Global Navigation Satellite Systems (GNSS)
are unavailable. The WhONet has been shown to possess the capability to learn
the uncertainties in the wheel speed measurements needed for correction and
accurate positioning of vehicles. These uncertainties may be manifested as tyre
pressure changes from driving on muddy and uneven terrains or wheel slips.
However, a common cause for concern for data-driven approaches, such as the
WhONet model, is usually the inability to generalise the models to a new
vehicle. In scenarios where machine learning models are trained in a specific
domain but deployed in another domain, the model's performance degrades. In
real-life scenarios, several factors contribute to this degradation, from
changes in the dynamics of the vehicle to new pattern distributions of the
sensor's noise and bias, which make the test sensor data vary from the training
data. Therefore, the challenge is to explore techniques that allow the trained
machine learning models to spontaneously adjust to new vehicle domains. As
such, we propose the Recalibrated-Wheel Odometry neural Network (R-WhONet),
that adapts the WhONet model from its source domain (a vehicle and environment
on which the model is initially trained) to the target domain (a new vehicle on
which the trained model is to be deployed). Through a performance evaluation on
several GNSS outage scenarios, from short-term complex driving scenarios to
longer-term GNSS outages, we demonstrate that a model trained in the
source domain does not generalise well to a new vehicle in the target domain.
However, we show that our new proposed framework improves the generalisation of
the WhONet model to new vehicles in the target domains by up to 32%.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 10:58:54 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Onyekpe",
"Uche",
""
],
[
"Szkolnik",
"Alicja",
""
],
[
"Palade",
"Vasile",
""
],
[
"Kanarachos",
"Stratis",
""
],
[
"Fitzpatrick",
"Michael E.",
""
]
] |
new_dataset
| 0.979289 |
2209.05947
|
Paolo Arcaini
|
Stefan Klikovits, Vincenzo Riccio, Ezequiel Castellano, Ahmet
Cetinkaya, Alessio Gambi, Paolo Arcaini
|
Does Road Diversity Really Matter in Testing Automated Driving Systems?
-- A Registered Report
|
Accepted registered report at the 16th ACM/IEEE International
Symposium on Empirical Software Engineering and Measurement (ESEM 2022)
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Background/Context. The use of automated driving systems (ADSs) in the real
world requires rigorous testing to ensure safety. To increase trust, ADSs
should be tested on a large set of diverse road scenarios. Literature suggests
that if a vehicle is driven along a set of geometrically diverse roads, measured
using various diversity measures (DMs), it will react in a wide range of
behaviours, thereby increasing the chances of observing failures (if any), or
strengthening the confidence in its safety, if no failures are observed. To the
best of our knowledge, however, this assumption has never been tested before,
nor have road DMs been assessed for their properties. Objective/Aim. Our goal
is to perform an exploratory study on 47 currently used and new, potentially
promising road DMs. Specifically, our research questions look into the road DMs
themselves, to analyse their properties (e.g. monotonicity, computation
efficiency), and to test correlation between DMs. Furthermore, we look at the
use of road DMs to investigate whether the assumption that diverse test suites
of roads expose diverse driving behaviour holds. Method. Our empirical analysis
relies on a state-of-the-art, open-source ADSs testing infrastructure and uses
a data set containing over 97,000 individual road geometries and matching
simulation data that were collected using two driving agents. By sampling
random test suites of various sizes and measuring their roads' geometric
diversity, we study the properties of road DMs, the correlation between road DMs, and
the correlation between road DMs and the observed behaviour.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 12:43:27 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Klikovits",
"Stefan",
""
],
[
"Riccio",
"Vincenzo",
""
],
[
"Castellano",
"Ezequiel",
""
],
[
"Cetinkaya",
"Ahmet",
""
],
[
"Gambi",
"Alessio",
""
],
[
"Arcaini",
"Paolo",
""
]
] |
new_dataset
| 0.954769 |
2209.05956
|
Kyle Retan
|
Kyle Retan, Frasher Loshaj and Michael Heizmann
|
Radar Odometry on SE(3) with Constant Acceleration Motion Prior and
Polar Measurement Model
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an approach to radar odometry on $SE(3)$ which utilizes a
constant acceleration motion prior. The motion prior is integrated into a
sliding window optimization scheme. We use the Magnus expansion to accurately
integrate the motion prior while maintaining real-time performance. In
addition, we adopt a polar measurement model to better represent radar
detection uncertainties. Our estimator is evaluated using a large real-world
dataset from a prototype high-resolution radar sensor. The new motion prior and
measurement model significantly improve odometry performance relative to the
constant velocity motion prior and Cartesian measurement model from our
previous work, particularly in roll, pitch and height.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 02:40:33 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Retan",
"Kyle",
""
],
[
"Loshaj",
"Frasher",
""
],
[
"Heizmann",
"Michael",
""
]
] |
new_dataset
| 0.999427 |
2209.05984
|
Manuel M. H. Roth
|
Manuel M. H. Roth, Hartmut Brandt, Hermann Bischl
|
Distributed SDN-based Load-balanced Routing for Low Earth Orbit
Satellite Constellation Networks
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
With the current trend towards low Earth orbit mega-constellations with
inter-satellite links, efficient routing in such highly dynamic space-borne
networks is becoming increasingly important. Due to the distinct network
topology, specifically tailored solutions are required. Firstly, the relative
movement of the constellation causes frequent handover events between the
satellites and the terminals on ground. Furthermore, unevenly distributed
traffic demands lead to geographical hot spots. The physical size of the
network also implies significant propagation delays. Therefore, monitoring the
dynamic topology changes and link loads on a network-wide basis for routing
purposes is typically impractical due to the massive signaling overhead. To address
these issues, we propose a distributed load-balanced routing scheme based on
Software Defined Networking. The approach divides the large-scale network into
sub-sections, called clusters. In order to minimize signaling overhead, packets
are forwarded between these clusters according to geographical heuristics.
Within each cluster, active Quality of Service-aware load-balancing is applied.
The responsible on-board network controller forwards routing instructions based
on the network state information in its cluster. We also analyze specific
design choices for the clusters and the interfaces between them. The protocol
has been implemented in a system-level simulator and compared to a
source-routed benchmark solution.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 13:31:43 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Roth",
"Manuel M. H.",
""
],
[
"Brandt",
"Hartmut",
""
],
[
"Bischl",
"Hermann",
""
]
] |
new_dataset
| 0.997322 |
2209.06083
|
Dawson Fox
|
Dawson Fox, Jose M Monsalve Diaz, Xiaoming Li
|
Chiplets and the Codelet Model
|
11 pages, 4 figures, 2 tables
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, hardware technology has evolved rapidly toward domain-specific
applications and architectures. Soon, processors may be composed of
a large collection of vendor-independent IP specialized for
application-specific algorithms, resulting in extreme heterogeneity. However,
integrating multiple vendors within the same die is difficult. Chiplet
technology is a solution that integrates multiple vendor dies within the same
chip by breaking each piece into an independent block, each with a common
interconnect for fast data transfer.
Most prior chiplet research focuses on interconnect technology, but program
execution models (PXMs) that enable programmability and performance are missing
from the discussion. In chiplet architectures, a cohesive co-designed PXM can
further separate the roles of the different actors, while maintaining a common
abstraction for program execution. This position paper describes the need for
co-designed PXMs and proposes the Codelet PXM and associated architectural
features as a candidate to fill this need in extremely heterogeneous
chiplet-based architectures.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 15:33:39 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Fox",
"Dawson",
""
],
[
"Diaz",
"Jose M Monsalve",
""
],
[
"Li",
"Xiaoming",
""
]
] |
new_dataset
| 0.983425 |
2209.06120
|
Neel Guha
|
Neel Guha, Daniel E. Ho, Julian Nyarko, Christopher R\'e
|
LegalBench: Prototyping a Collaborative Benchmark for Legal Reasoning
|
13 pages, 7 tables
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Can foundation models be guided to execute tasks involving legal reasoning?
We believe that building a benchmark to answer this question will require
sustained collaborative efforts between the computer science and legal
communities. To that end, this short paper serves three purposes. First, we
describe how IRAC-a framework legal scholars use to distinguish different types
of legal reasoning-can guide the construction of a Foundation Model oriented
benchmark. Second, we present a seed set of 44 tasks built according to this
framework. We discuss initial findings, and highlight directions for new tasks.
Finally-inspired by the Open Science movement-we make a call for the legal and
computer science communities to join our efforts by contributing new tasks.
This work is ongoing, and our progress can be tracked here:
https://github.com/HazyResearch/legalbench.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 16:11:54 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Guha",
"Neel",
""
],
[
"Ho",
"Daniel E.",
""
],
[
"Nyarko",
"Julian",
""
],
[
"Ré",
"Christopher",
""
]
] |
new_dataset
| 0.998367 |
2209.06130
|
Jun Wang
|
Jun Wang, Samarth Kalluraya, Yiannis Kantaros
|
Verified Compositions of Neural Network Controllers for Temporal Logic
Control Objectives
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a new approach to design verified compositions of Neural
Network (NN) controllers for autonomous systems with tasks captured by Linear
Temporal Logic (LTL) formulas. Particularly, the LTL formula requires the
system to reach and avoid certain regions in a temporal/logical order. We
assume that the system is equipped with a finite set of trained NN controllers.
Each controller has been trained so that it can drive the system towards a
specific region of interest while avoiding others. Our goal is to check if
there exists a temporal composition of the trained NN controllers - and if so,
to compute it - that will yield composite system behaviors that satisfy a
user-specified LTL task for any initial system state belonging to a given set.
To address this problem, we propose a new approach that relies on a novel
integration of automata theory and recently proposed reachability analysis
tools for NN-controlled systems. We note that the proposed method can be
applied to other controllers, not necessarily modeled by NNs, by appropriate
selection of the reachability analysis tool. We focus on NN controllers due to
their lack of robustness. The proposed method is demonstrated on navigation
tasks for aerial vehicles.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 16:17:54 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Wang",
"Jun",
""
],
[
"Kalluraya",
"Samarth",
""
],
[
"Kantaros",
"Yiannis",
""
]
] |
new_dataset
| 0.969295 |
2209.06131
|
Ahmed J. Obaid Dr.
|
Niran A. Abdulhussein and Ahmed J Obaid
|
User recommendation system based on MIND dataset
| null |
International Journal of Nonlinear Analysis and Applications
(2022)
|
10.22075/ijnaa.2022.6857
| null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Nowadays, recommendation systems are a significant way for researchers and other
individuals to pursue their interests, because they provide concise results that
satisfy their demands. Because there is so much information on the internet, news
recommendation systems allow us to filter content and deliver it to users
according to their desires and interests. RSs have three techniques:
content-based filtering, collaborative filtering, and hybrid filtering. We use
the MIND dataset, collected in 2019, with our system; the big challenge in this
dataset is that it contains considerable ambiguity and requires complex text
processing. In this paper, we present our proposed recommendation system. At the
core of our system, we use the GloVe algorithm for word embeddings and
representation. In addition, the Multi-head Attention Layer computes the
attention of words to generate a list of recommended news. Finally, we achieve
better results than some other related works: AUC 71.211, MRR 35.72,
nDCG@5 38.05, and nDCG@10 44.45.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 22:25:36 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Abdulhussein",
"Niran A.",
""
],
[
"Obaid",
"Ahmed J",
""
]
] |
new_dataset
| 0.997034 |
2209.06156
|
Md Saiful Islam
|
Md Saiful Islam, Adiba Mahbub, Caleb Wohn, Karen Berger, Serena Uong,
Varun Kumar, Katrina Smith Korfmacher, Ehsan Hoque
|
SEER: Sustainable E-commerce with Environmental-impact Rating
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
With online shopping gaining massive popularity over the past few years,
e-commerce platforms can play a significant role in tackling climate change and
other environmental problems. In this study, we report that the
"attitude-behavior" gap identified by prior sustainable consumption literature
also exists in an online setting. We propose SEER, a concept design for online
shopping websites to help consumers make more sustainable choices. We introduce
explainable environmental impact ratings to increase knowledge, trust, and
convenience for consumers willing to purchase eco-friendly products. In our
quasi-randomized case-control experiment with 98 subjects across the United
States, we found that the case group using SEER demonstrates significantly more
eco-friendly consumption behavior than the control group using a traditional
e-commerce setting. While there are challenges in generating reliable
explanations and environmental ratings for products, if implemented in the
United States alone, SEER has the potential to reduce approximately 2.88
million tonnes of carbon emissions every year.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 16:55:27 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Islam",
"Md Saiful",
""
],
[
"Mahbub",
"Adiba",
""
],
[
"Wohn",
"Caleb",
""
],
[
"Berger",
"Karen",
""
],
[
"Uong",
"Serena",
""
],
[
"Kumar",
"Varun",
""
],
[
"Korfmacher",
"Katrina Smith",
""
],
[
"Hoque",
"Ehsan",
""
]
] |
new_dataset
| 0.998199 |
2209.06192
|
Adyasha Maharana
|
Adyasha Maharana, Darryl Hannan, and Mohit Bansal
|
StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story
Continuation
|
ECCV 2022 (33 pages; code, data, demo, model card available at
https://github.com/adymaharana/storydalle)
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in text-to-image synthesis have led to large pretrained
transformers with excellent capabilities to generate visualizations from a
given text. However, these models are ill-suited for specialized tasks like
story visualization, which requires an agent to produce a sequence of images
given a corresponding sequence of captions, forming a narrative. Moreover, we
find that the story visualization task fails to accommodate generalization to
unseen plots and characters in new narratives. Hence, we first propose the task
of story continuation, where the generated visual story is conditioned on a
source image, allowing for better generalization to narratives with new
characters. Then, we enhance or 'retro-fit' the pretrained text-to-image
synthesis models with task-specific modules for (a) sequential image generation
and (b) copying relevant elements from an initial frame. Then, we explore
full-model finetuning, as well as prompt-based tuning for parameter-efficient
adaptation, of the pre-trained model. We evaluate our approach StoryDALL-E on
two existing datasets, PororoSV and FlintstonesSV, and introduce a new dataset
DiDeMoSV collected from a video-captioning dataset. We also develop a model
StoryGANc based on Generative Adversarial Networks (GAN) for story
continuation, and compare it with the StoryDALL-E model to demonstrate the
advantages of our approach. We show that our retro-fitting approach outperforms
GAN-based models for story continuation and facilitates copying of visual
elements from the source image, thereby improving continuity in the generated
visual story. Finally, our analysis suggests that pretrained transformers
struggle to comprehend narratives containing several characters. Overall, our
work demonstrates that pretrained text-to-image synthesis models can be adapted
for complex and low-resource tasks like story continuation.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 17:47:39 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Maharana",
"Adyasha",
""
],
[
"Hannan",
"Darryl",
""
],
[
"Bansal",
"Mohit",
""
]
] |
new_dataset
| 0.983007 |
1404.1685
|
Guido Governatori
|
Guido Governatori
|
Thou Shalt is not You Will
| null |
Fifteenth International Conference on Artificial Intelligence and
Law (ICAIL 2015), pp. 63-68
|
10.1145/2746090.2746105
|
NICTA Technical Report 8026
|
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we discuss some reasons why temporal logic might not be
suitable to model real life norms. To show this, we present a novel deontic
logic contrary-to-duty/derived permission paradox based on the interaction of
obligations, permissions and contrary-to-duty obligations. The paradox is
inspired by real life norms.
|
[
{
"version": "v1",
"created": "Mon, 7 Apr 2014 08:11:10 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Jun 2014 07:32:27 GMT"
},
{
"version": "v3",
"created": "Sun, 25 Jan 2015 15:14:33 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Governatori",
"Guido",
""
]
] |
new_dataset
| 0.996628 |
1908.01901
|
Charles Delahunt
|
Charles B. Delahunt, Mayoore S. Jaiswal, Matthew P. Horning, Samantha
Janko, Clay M. Thompson, Sourabh Kulhare, Liming Hu, Travis Ostbye, Grace
Yun, Roman Gebrehiwot, Benjamin K. Wilson, Earl Long, Stephane Proux,
Dionicia Gamboa, Peter Chiodini, Jane Carter, Mehul Dhorda, David Isaboke,
Bernhards Ogutu, Wellington Oyibo, Elizabeth Villasis, Kyaw Myo Tun,
Christine Bachman, David Bell, Courosh Mehanian
|
Fully-automated patient-level malaria assessment on field-prepared thin
blood film microscopy images, including Supplementary Information
|
16 pages, 13 figures
| null | null | null |
cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Malaria is a life-threatening disease affecting millions. Microscopy-based
assessment of thin blood films is a standard method to (i) determine malaria
species and (ii) quantitate high-parasitemia infections. Full automation of
malaria microscopy by machine learning (ML) is a challenging task because
field-prepared slides vary widely in quality and presentation, and artifacts
often heavily outnumber relatively rare parasites. In this work, we describe a
complete, fully-automated framework for thin film malaria analysis that applies
ML methods, including convolutional neural nets (CNNs), trained on a large and
diverse dataset of field-prepared thin blood films. Quantitation and species
identification results are close to sufficiently accurate for the concrete
needs of drug resistance monitoring and clinical use-cases on field-prepared
samples. We focus our methods and our performance metrics on the field use-case
requirements. We discuss key issues and important metrics for the application
of ML methods to malaria microscopy.
|
[
{
"version": "v1",
"created": "Mon, 5 Aug 2019 23:25:48 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Sep 2022 23:40:54 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Delahunt",
"Charles B.",
""
],
[
"Jaiswal",
"Mayoore S.",
""
],
[
"Horning",
"Matthew P.",
""
],
[
"Janko",
"Samantha",
""
],
[
"Thompson",
"Clay M.",
""
],
[
"Kulhare",
"Sourabh",
""
],
[
"Hu",
"Liming",
""
],
[
"Ostbye",
"Travis",
""
],
[
"Yun",
"Grace",
""
],
[
"Gebrehiwot",
"Roman",
""
],
[
"Wilson",
"Benjamin K.",
""
],
[
"Long",
"Earl",
""
],
[
"Proux",
"Stephane",
""
],
[
"Gamboa",
"Dionicia",
""
],
[
"Chiodini",
"Peter",
""
],
[
"Carter",
"Jane",
""
],
[
"Dhorda",
"Mehul",
""
],
[
"Isaboke",
"David",
""
],
[
"Ogutu",
"Bernhards",
""
],
[
"Oyibo",
"Wellington",
""
],
[
"Villasis",
"Elizabeth",
""
],
[
"Tun",
"Kyaw Myo",
""
],
[
"Bachman",
"Christine",
""
],
[
"Bell",
"David",
""
],
[
"Mehanian",
"Courosh",
""
]
] |
new_dataset
| 0.99827 |
2004.11937
|
Dimitrios Thilikos
|
Mamadou Moustapha Kant\'e, Christophe Paul, Dimitrios M. Thilikos
|
A linear fixed parameter tractable algorithm for connected pathwidth
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The graph parameter of pathwidth can be seen as a measure of the topological
resemblance of a graph to a path. A popular definition of pathwidth is given in
terms of node search where we are given a system of tunnels that is
contaminated by some infectious substance and we are looking for a search
strategy that, at each step, either places a searcher on a vertex or removes a
searcher from a vertex and where an edge is cleaned when both endpoints are
simultaneously occupied by searchers. It was proved that the minimum number of
searchers required for a successful cleaning strategy is equal to the pathwidth
of the graph plus one. Two desired characteristics for a cleaning strategy are
to be monotone (no recontamination occurs) and connected (clean territories
always remain connected). Under these two demands, the number of searchers is
equivalent to a variant of pathwidth called {\em connected pathwidth}. We prove
that connected pathwidth is fixed parameter tractable, in particular we design
a $2^{O(k^2)}\cdot n$ time algorithm that checks whether the connected
pathwidth of $G$ is at most $k.$ This resolves an open question by
[Dereniowski, Osula, and Rz{\k{a}}{\.{z}}ewski, Finding small-width connected
path-decompositions in polynomial time. Theor. Comput. Sci., 794:85-100, 2019].
For our algorithm, we enrich the typical sequence technique so that it is able to
deal with the connectivity demand. Typical sequences have been introduced in
[Bodlaender and Kloks. Efficient and constructive algorithms for the pathwidth
and treewidth of graphs. J. Algorithms, 21(2):358-402, 1996] for the design of
linear parameterized algorithms for treewidth and pathwidth. The proposed
extension is based on an encoding of the connectivity property that is quite
versatile and may be adapted so as to deliver linear parameterized algorithms for
the connected variants of other width parameters as well.
|
[
{
"version": "v1",
"created": "Fri, 24 Apr 2020 18:33:39 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Sep 2022 12:46:23 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Sep 2022 11:23:50 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Kanté",
"Mamadou Moustapha",
""
],
[
"Paul",
"Christophe",
""
],
[
"Thilikos",
"Dimitrios M.",
""
]
] |
new_dataset
| 0.999709 |
2103.02979
|
Krishnasuri Narayanam
|
Krishnasuri Narayanam, Seep Goel, Abhishek Singh, Yedendra
Shrinivasan, Parameswaram Selvam
|
Blockchain Based Accounts Payable Platform for Goods Trade
| null | null |
10.1109/ICBC51069.2021.9461053
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Goods trade is a supply chain transaction that involves shippers buying goods
from suppliers and carriers providing goods transportation. Shippers are issued
invoices from suppliers and carriers. Shippers carry out goods receiving and
invoice processing before payment processing of bills for suppliers and
carriers, where invoice processing includes tasks like processing claims and
adjusting the bill payments. Goods receiving involves verification of received
goods by the Shipper's receiving team. Invoice processing is carried out by the
Shipper's accounts payable team, which in turn is verified by the accounts
receivable teams of suppliers and carriers. This paper presents a
blockchain-based accounts payable system that generates claims for the
deficiency in the goods received and accordingly adjusts the payment in the
bills for suppliers and carriers. Primary motivations for these supply chain
organizations to adopt blockchain-based accounts payable systems are to
eliminate the process redundancies (accounts payable vs. accounts receivable),
to reduce the number of disputes among the transacting participants, and to
accelerate the accounts payable processes via optimizations in the claims
generation and blockchain-based dispute reconciliation.
|
[
{
"version": "v1",
"created": "Thu, 4 Mar 2021 11:57:24 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Narayanam",
"Krishnasuri",
""
],
[
"Goel",
"Seep",
""
],
[
"Singh",
"Abhishek",
""
],
[
"Shrinivasan",
"Yedendra",
""
],
[
"Selvam",
"Parameswaram",
""
]
] |
new_dataset
| 0.999269 |
2109.13406
|
Maho Nakata
|
Maho Nakata
|
MPLAPACK version 2.0.1 user manual
| null | null | null | null |
cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
The MPLAPACK (formerly MPACK) is a multiple-precision version of LAPACK
(https://www.netlib.org/lapack/). MPLAPACK version 2.0.1 is based on LAPACK
version 3.9.1 and translated from Fortran 90 to C++ using FABLE, a Fortran to
C++ source-to-source conversion tool
(https://github.com/cctbx/cctbx_project/tree/master/fable/). MPLAPACK version
2.0.1 provides the real and complex version of MPBLAS, and the real and complex
versions of MPLAPACK support all LAPACK features: solvers for systems of
simultaneous linear equations, least-squares solutions of linear systems of
equations, eigenvalue problems, and singular value problems, and related matrix
factorizations except for mixed-precision routines. The MPLAPACK defines an API
for numerical linear algebra, similar to LAPACK. It is easy to port legacy
C/C++ numerical codes using MPLAPACK. MPLAPACK supports binary64, binary128,
FP80 (extended double), MPFR, GMP, and QD libraries (double-double and
quad-double). Users can choose MPFR or GMP for arbitrary accurate calculations,
double-double or quad-double for fast 32 or 64-decimal calculations. We can
consider the binary64 version as the C++ version of LAPACK. Moreover, it comes
with an OpenMP accelerated version of MPBLAS for some routines and CUDA (A100
and V100 support) for double-double versions of Rgemm and Rsyrk. The peak
performance of the OpenMP version is almost proportional to the number of
cores, and the performance of the CUDA version is impressive, at
approximately 400-600 GFlops. MPLAPACK is available on GitHub
(https://github.com/nakatamaho/mplapack/) under the 2-clause BSD license.
|
[
{
"version": "v1",
"created": "Tue, 28 Sep 2021 00:10:44 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2022 10:07:56 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Nakata",
"Maho",
""
]
] |
new_dataset
| 0.999531 |
2110.02276
|
Xiao Li
|
Xiao Li, Yidong Du, Zhen Zeng, Odest Chadwicke Jenkins
|
SeanNet: Semantic Understanding Network for Localization Under Object
Dynamics
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We aim for domestic robots to perform long-term indoor service. Under the
object-level scene dynamics induced by daily human activities, a robot needs to
robustly localize itself in the environment subject to scene uncertainties.
Previous works have addressed visual-based localization in static environments,
yet the object-level scene dynamics challenge existing methods for the
long-term deployment of the robot. This paper proposes a SEmantic understANding
Network (SeanNet) architecture that enables an effective learning process with
coupled visual and semantic inputs. With a dataset that contains object
dynamics, we propose a cascaded contrastive learning scheme to train the
SeanNet for learning a vector scene embedding. Subsequently, we can measure the
similarity between the currently observed scene and the target scene, which
enables robust localization under object-level dynamics. In our experiments, we
benchmark SeanNet against state-of-the-art image-encoding networks (baselines)
on scene similarity measures. The SeanNet architecture with the proposed
training method achieves an accuracy of 85.02\%, which is higher than the baselines.
We further integrate the SeanNet and the other networks as the localizers into
a visual navigation application. We demonstrate that SeanNet achieves higher
success rates compared to the baselines.
|
[
{
"version": "v1",
"created": "Tue, 5 Oct 2021 18:29:07 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Sep 2022 03:25:30 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Li",
"Xiao",
""
],
[
"Du",
"Yidong",
""
],
[
"Zeng",
"Zhen",
""
],
[
"Jenkins",
"Odest Chadwicke",
""
]
] |
new_dataset
| 0.97426 |
2111.14452
|
Issam Maarouf
|
Issam Maarouf, Andreas Lenz, Lorenz Welter, Antonia Wachter-Zeh, Eirik
Rosnes, Alexandre Graell i Amat
|
Concatenated Codes for Multiple Reads of a DNA Sequence
|
This paper has been accepted for publication in the IEEE Transactions
on Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decoding sequences that stem from multiple transmissions of a codeword over
an insertion, deletion, and substitution channel is a critical component of
efficient deoxyribonucleic acid (DNA) data storage systems. In this paper, we
consider a concatenated coding scheme with an outer nonbinary low-density
parity-check code or a polar code and either an inner convolutional code or a
time-varying block code. We propose two novel decoding algorithms for inference
from multiple received sequences, both combining the inner code and channel to
a joint hidden Markov model to infer symbolwise a posteriori probabilities
(APPs). The first decoder computes the exact APPs by jointly decoding the
received sequences, whereas the second decoder approximates the APPs by
combining the results of separately decoded received sequences and has a
complexity that is linear with the number of sequences. Using the proposed
algorithms, we evaluate the performance of decoding multiple received sequences
by means of achievable information rates and Monte-Carlo simulations. We show
significant performance gains compared to a single received sequence. In
addition, we succeed in improving the performance of the aforementioned coding
scheme by optimizing both the inner and outer codes.
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 11:07:14 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2022 15:37:04 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Sep 2022 13:35:51 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Maarouf",
"Issam",
""
],
[
"Lenz",
"Andreas",
""
],
[
"Welter",
"Lorenz",
""
],
[
"Wachter-Zeh",
"Antonia",
""
],
[
"Rosnes",
"Eirik",
""
],
[
"Amat",
"Alexandre Graell i",
""
]
] |
new_dataset
| 0.986303 |
2112.04748
|
Leyuan Qu
|
Leyuan Qu, Cornelius Weber and Stefan Wermter
|
LipSound2: Self-Supervised Pre-Training for Lip-to-Speech Reconstruction
and Lip Reading
|
ACCEPTED IN IEEE Transactions on Neural Networks and Learning Systems
| null |
10.1109/TNNLS.2022.3191677
| null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of this work is to investigate the impact of crossmodal
self-supervised pre-training for speech reconstruction (video-to-audio) by
leveraging the natural co-occurrence of audio and visual streams in videos. We
propose LipSound2 which consists of an encoder-decoder architecture and
location-aware attention mechanism to map face image sequences to mel-scale
spectrograms directly without requiring any human annotations. The proposed
LipSound2 model is firstly pre-trained on $\sim$2400h multi-lingual (e.g.
English and German) audio-visual data (VoxCeleb2). To verify the
generalizability of the proposed method, we then fine-tune the pre-trained
model on domain-specific datasets (GRID, TCD-TIMIT) for English speech
reconstruction and achieve a significant improvement on speech quality and
intelligibility compared to previous approaches in speaker-dependent and
-independent settings. In addition to English, we conduct Chinese speech
reconstruction on the CMLR dataset to verify the impact on transferability.
Lastly, we train the cascaded lip reading (video-to-text) system by fine-tuning
the generated audios on a pre-trained speech recognition system and achieve
state-of-the-art performance on both English and Chinese benchmark datasets.
|
[
{
"version": "v1",
"created": "Thu, 9 Dec 2021 08:11:35 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2022 11:34:16 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Qu",
"Leyuan",
""
],
[
"Weber",
"Cornelius",
""
],
[
"Wermter",
"Stefan",
""
]
] |
new_dataset
| 0.999115 |
2112.13760
|
Fariha Tabassum Islam
|
Fariha Tabassum Islam, Tanzima Hashem, Rifat Shahriyar
|
A Crowd-enabled Solution for Privacy-Preserving and Personalized Safe
Route Planning for Fixed or Flexible Destinations (Full Version)
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Ensuring travelers' safety on roads has become a research challenge in recent
years. We introduce a novel safe route planning problem and develop an
efficient solution to ensure the travelers' safety on roads. Though few
research attempts have been made in this regard, all of them assume that people
share their sensitive travel experiences with a centralized entity for finding
the safest routes, which is not ideal in practice for privacy reasons.
Furthermore, existing works formulate safe route planning in ways that do not
meet a traveler's need for safe travel on roads. Our approach finds the safest
routes within a user-specified distance threshold based on the personalized
travel experience of the knowledgeable crowd without involving any centralized
computation. We develop a privacy-preserving model to quantify the travel
experience of a user into personalized safety scores. Our algorithms for
finding the safest route further enhance user privacy by minimizing the
exposure of personalized safety scores with others. Our safe route planner can
find the safest routes for individuals and groups by considering both a fixed
and a set of flexible destination locations. Extensive experiments using real
datasets show that our approach finds the safest route in seconds. Compared to
the direct algorithm, our iterative algorithm requires 47% less exposure of
personalized safety scores.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 16:13:01 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Sep 2022 23:14:30 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Islam",
"Fariha Tabassum",
""
],
[
"Hashem",
"Tanzima",
""
],
[
"Shahriyar",
"Rifat",
""
]
] |
new_dataset
| 0.998455 |
2201.09390
|
Ekta Vats
|
Dmitrijs Kass and Ekta Vats
|
AttentionHTR: Handwritten Text Recognition Based on Attention
Encoder-Decoder Networks
|
15th IAPR International Workshop on Document Analysis System (DAS)
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work proposes an attention-based sequence-to-sequence model for
handwritten word recognition and explores transfer learning for data-efficient
training of HTR systems. To overcome training data scarcity, this work
leverages models pre-trained on scene text images as a starting point towards
tailoring the handwriting recognition models. ResNet feature extraction and
bidirectional LSTM-based sequence modeling stages together form an encoder. The
prediction stage consists of a decoder and a content-based attention mechanism.
The effectiveness of the proposed end-to-end HTR system has been empirically
evaluated on a novel multi-writer dataset Imgur5K and the IAM dataset. The
experimental results evaluate the performance of the HTR framework, further
supported by an in-depth analysis of the error cases. Source code and
pre-trained models are available at https://github.com/dmitrijsk/AttentionHTR.
|
[
{
"version": "v1",
"created": "Sun, 23 Jan 2022 22:48:36 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 09:01:46 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Sep 2022 11:47:55 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Kass",
"Dmitrijs",
""
],
[
"Vats",
"Ekta",
""
]
] |
new_dataset
| 0.988585 |
2202.12855
|
Krishnasuri Narayanam
|
Krishnasuri Narayanam, Venkatraman Ramakrishna, Dhinakaran
Vinayagamurthy and Sandeep Nishad
|
Atomic cross-chain exchanges of shared assets
| null | null |
10.1145/3558535.3559786
| null |
cs.CR cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A core enabler for blockchain or DLT interoperability is the ability to
atomically exchange assets held by mutually untrusting owners on different
ledgers. This atomic swap problem has been well-studied, with the Hash Time
Locked Contract (HTLC) emerging as a canonical solution. HTLC ensures atomicity
of exchange, albeit with caveats for node failure and timeliness of claims. But
a bigger limitation of HTLC is that it only applies to a model consisting of
two adversarial parties having sole ownership of a single asset in each ledger.
Realistic extensions of the model in which assets may be jointly owned by
multiple parties, all of whose consents are required for exchanges, or where
multiple assets must be exchanged for one, are susceptible to collusion attacks
and hence cannot be handled by HTLC. In this paper, we generalize the model of
asset exchanges across DLT networks and present a taxonomy of use cases,
describe the threat model, and propose MPHTLC, an augmented HTLC protocol for
atomic multi-owner-and-asset exchanges. We analyze the correctness, safety, and
application scope of MPHTLC. As proof-of-concept, we show how MPHTLC primitives
can be implemented in networks built on Hyperledger Fabric and Corda, and how
MPHTLC can be implemented in the Hyperledger Labs Weaver framework by
augmenting its existing HTLC protocol.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 18:04:30 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2022 12:33:04 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Sep 2022 19:50:03 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Narayanam",
"Krishnasuri",
""
],
[
"Ramakrishna",
"Venkatraman",
""
],
[
"Vinayagamurthy",
"Dhinakaran",
""
],
[
"Nishad",
"Sandeep",
""
]
] |
new_dataset
| 0.992541 |
2204.10264
|
Silviu Craciunas
|
Ana\"is Finzi, Silviu S. Craciunas, Marc Boyer
|
A Real-time Calculus Approach for Integrating Sporadic Events in
Time-triggered Systems
| null | null | null | null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In time-triggered systems, where the schedule table is predefined and
statically configured at design time, sporadic event-triggered (ET) tasks are
handled within specially dedicated slots or when time-triggered (TT) tasks
finish their execution early. We introduce a new paradigm for synthesizing TT
schedules that guarantee the correct temporal behavior of TT tasks and the
schedulability of sporadic ET tasks with arbitrary deadlines. The approach
first expresses a constraint for the TT task schedule in the form of a maximal
affine envelope that guarantees that as long as the schedule generation
respects this envelope, all sporadic ET tasks meet their deadline. The second
step consists of modeling this envelope as a burst limiting constraint and
building the TT schedule via simulating a modified Least-Laxity-First (LLF)
scheduler. Using this novel technique, we show that we achieve equal or better
schedulability and a faster schedule generation for most use-cases compared to
simple polling approaches. Moreover, we present an extension to our method that
finds the most favourable schedule for TT tasks with respect to ET
schedulability, thus increasing the probability of the computed TT schedule
remaining feasible when ET tasks are later added or changed.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 17:05:58 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 15:03:52 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Sep 2022 13:41:48 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Finzi",
"Anaïs",
""
],
[
"Craciunas",
"Silviu S.",
""
],
[
"Boyer",
"Marc",
""
]
] |
new_dataset
| 0.9502 |
2205.10737
|
Peng Yin
|
Peng Yin, Shiqi Zhao, Ruohai Ge, Ivan Cisneros, Ruijie Fu, Ji Zhang,
Howie Choset and Sebastian Scherer
|
ALITA: A Large-scale Incremental Dataset for Long-term Autonomy
|
6 pages, 5 figures, Submitted for IJRR dataset paper
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For long-term autonomy, most place recognition methods are mainly evaluated
on simplified scenarios or simulated datasets, which cannot provide solid
evidence to evaluate the readiness for current Simultaneous Localization and
Mapping (SLAM). In this paper, we present a long-term place recognition dataset
for use in mobile localization under large-scale dynamic environments. This
dataset includes a campus-scale track and a city-scale track: 1) the campus
track focuses on the long-term property; we record a LiDAR device and an
omnidirectional camera along 10 trajectories, and each trajectory is repeatedly
recorded 8 times under varying illumination conditions. 2) the city track
focuses on the large-scale property; we mount the LiDAR device on a vehicle and
traverse 120 km of trajectories, which contain open streets, residential areas,
natural terrains, etc. Together, they include 200 hours of raw data covering all
kinds of scenarios within urban environments. The ground truth positions for
both tracks are provided on each trajectory, obtained from the Global
Positioning System with an additional General ICP based point cloud refinement.
To simplify the evaluation procedure, we also provide a Python API with a set of
place recognition metrics to quickly load our dataset and evaluate recognition
performance across different methods. This dataset targets finding methods with
high place recognition accuracy and robustness, and providing real robotic
systems with long-term autonomy. The dataset and the provided tools can be
accessed at https://github.com/MetaSLAM/ALITA.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 04:25:00 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Sep 2022 19:43:22 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Yin",
"Peng",
""
],
[
"Zhao",
"Shiqi",
""
],
[
"Ge",
"Ruohai",
""
],
[
"Cisneros",
"Ivan",
""
],
[
"Fu",
"Ruijie",
""
],
[
"Zhang",
"Ji",
""
],
[
"Choset",
"Howie",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
new_dataset
| 0.999841 |
2206.10257
|
Jens Ducr\'ee
|
Jens Ducr\'ee
|
Satoshi Nakamoto and the Origins of Bitcoin -- The Profile of a
1-in-a-Billion Genius
|
Main text: 84 pages Number of references: 1468 Appendix: 5 pages
| null | null | null |
cs.GL cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The mystery about the ingenious creator of Bitcoin concealing behind the
pseudonym Satoshi Nakamoto has been fascinating the global public for more than
a decade. Suddenly jumping out of the dark in 2008, this persona hurled the
decentralized electronic cash system "Bitcoin", which has reached a peak market
capitalization in the region of 1 trillion USD. In a purposely agnostic, and
meticulous "lea-ving no stone unturned" approach, this study presents new hard
facts, which evidently slipped through Satoshi Nakamoto's elaborate privacy
shield, and derives meaningful pointers that are primarily inferred from
Bitcoin's whitepaper, its blockchain parameters, and data that were widely up
to his discretion. This ample stack of established and novel evidence is
systematically categorized, analyzed, and then connected to its related,
real-world ambient, like relevant locations and happenings in the past, and at
the time. Evidence compounds towards a substantial role of the Benelux
cryptography ecosystem, with strong transatlantic links, in the creation of
Bitcoin. A consistent biography, a psychogram, and gripping story of an
ingenious, multi-talented, autodidactic, reticent, and capricious polymath
transpire, which are absolutely unique from a history of science and technology
perspective. A cohort of previously fielded and best matches emerging from the
investigations are probed against an unprecedentedly restrictive, multi-stage
exclusion filter, which can, with maximum certainty, rule out most "Satoshi
Nakamoto" candidates, while some of them remain to be confirmed. With this
article, you will be able to decide who is not, or highly unlikely to be
Satoshi Nakamoto, be equipped with an ample stack of systematically categorized
evidence and efficient methodologies to find suitable candidates, and can
possibly unveil the real identity of the creator of Bitcoin - if you want.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 11:10:21 GMT"
},
{
"version": "v10",
"created": "Tue, 26 Jul 2022 01:02:13 GMT"
},
{
"version": "v11",
"created": "Fri, 29 Jul 2022 17:54:30 GMT"
},
{
"version": "v12",
"created": "Thu, 1 Sep 2022 17:57:49 GMT"
},
{
"version": "v13",
"created": "Mon, 5 Sep 2022 17:58:00 GMT"
},
{
"version": "v14",
"created": "Fri, 9 Sep 2022 16:04:31 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jul 2022 17:22:21 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jul 2022 13:10:11 GMT"
},
{
"version": "v4",
"created": "Fri, 8 Jul 2022 13:10:45 GMT"
},
{
"version": "v5",
"created": "Mon, 11 Jul 2022 15:09:35 GMT"
},
{
"version": "v6",
"created": "Tue, 12 Jul 2022 13:18:24 GMT"
},
{
"version": "v7",
"created": "Wed, 13 Jul 2022 12:48:01 GMT"
},
{
"version": "v8",
"created": "Mon, 18 Jul 2022 16:48:17 GMT"
},
{
"version": "v9",
"created": "Fri, 22 Jul 2022 17:14:34 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Ducrée",
"Jens",
""
]
] |
new_dataset
| 0.999329 |
2209.04066
|
Nikos Athanasiou
|
Nikos Athanasiou, Mathis Petrovich, Michael J. Black, G\"ul Varol
|
TEACH: Temporal Action Composition for 3D Humans
|
3DV 2022 Camera Ready, Affiliations corrected
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a series of natural language descriptions, our task is to generate 3D
human motions that correspond semantically to the text, and follow the temporal
order of the instructions. In particular, our goal is to enable the synthesis
of a series of actions, which we refer to as temporal action composition. The
current state of the art in text-conditioned motion synthesis only takes a
single action or a single sentence as input. This is partially due to lack of
suitable training data containing action sequences, but also due to the
computational complexity of their non-autoregressive model formulation, which
does not scale well to long sequences. In this work, we address both issues.
First, we exploit the recent BABEL motion-text collection, which has a wide
range of labeled actions, many of which occur in a sequence with transitions
between them. Next, we design a Transformer-based approach that operates
non-autoregressively within an action, but autoregressively within the sequence
of actions. This hierarchical formulation proves effective in our experiments
when compared with multiple baselines. Our approach, called TEACH for "TEmporal
Action Compositions for Human motions", produces realistic human motions for a
wide variety of actions and temporal compositions from language descriptions.
To encourage work on this new task, we make our code available for research
purposes at our $\href{teach.is.tue.mpg.de}{\text{website}}$.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 00:33:40 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2022 16:34:20 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Athanasiou",
"Nikos",
""
],
[
"Petrovich",
"Mathis",
""
],
[
"Black",
"Michael J.",
""
],
[
"Varol",
"Gül",
""
]
] |
new_dataset
| 0.980092 |
2209.04514
|
Zhiqiang Zang
|
Zhiqiang Zang and Nathan Wiatrek and Milos Gligoric and August Shi
|
Compiler Testing using Template Java Programs
|
13 pages, 6 figures, 2 tables, accepted in ASE 2022 (Research Papers
track)
| null |
10.1145/3551349.3556958
| null |
cs.PL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present JAttack, a framework that enables template-based testing for
compilers. Using JAttack, a developer writes a template program that describes
a set of programs to be generated and given as test inputs to a compiler. Such
a framework enables developers to incorporate their domain knowledge on testing
compilers, giving a basic program structure that allows for exploring complex
programs that can trigger sophisticated compiler optimizations. A developer
writes a template program in the host language (Java) that contains holes to be
filled by JAttack. Each hole, written using a domain-specific language,
constructs a node within an extended abstract syntax tree (eAST). An eAST node
defines the search space for the hole, i.e., a set of expressions and values.
JAttack generates programs by executing templates and filling each hole by
randomly choosing expressions and values (available within the search space
defined by the hole). Additionally, we introduce several optimizations to
reduce JAttack's generation cost. While JAttack could be used to test various
compiler features, we demonstrate its capabilities in helping test just-in-time
(JIT) Java compilers, whose optimizations occur at runtime after a sufficient
number of executions. Using JAttack, we have found six critical bugs that were
confirmed by Oracle developers. Four of them were previously unknown, including
two unknown CVEs (Common Vulnerabilities and Exposures). JAttack shows the
power of combining developers' domain knowledge (via templates) with random
testing to detect bugs in JIT compilers.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 20:31:38 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Zang",
"Zhiqiang",
""
],
[
"Wiatrek",
"Nathan",
""
],
[
"Gligoric",
"Milos",
""
],
[
"Shi",
"August",
""
]
] |
new_dataset
| 0.99837 |
2209.04517
|
Jola Mirecka
|
Jola Mirecka, Marjan Famili, Anna Kota\'nska, Nikolai Juraschko,
Beatriz Costa-Gomes, Colin M. Palmer, Jeyan Thiyagalingam, Tom Burnley, Mark
Basham, Alan R. Lowe
|
Affinity-VAE for disentanglement, clustering and classification of
objects in multidimensional image data
| null | null | null | null |
cs.CV cs.LG q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we present affinity-VAE: a framework for automatic clustering
and classification of objects in multidimensional image data based on their
similarity. The method expands on the concept of $\beta$-VAEs with an informed
similarity-based loss component driven by an affinity matrix. The affinity-VAE
is able to create rotationally-invariant, morphologically homogeneous clusters
in the latent representation, with improved cluster separation compared with a
standard $\beta$-VAE. We explore the extent of latent disentanglement and
continuity of the latent spaces on both 2D and 3D image data, including
simulated biological electron cryo-tomography (cryo-ET) volumes as an example
of a scientific application.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 20:39:22 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Mirecka",
"Jola",
""
],
[
"Famili",
"Marjan",
""
],
[
"Kotańska",
"Anna",
""
],
[
"Juraschko",
"Nikolai",
""
],
[
"Costa-Gomes",
"Beatriz",
""
],
[
"Palmer",
"Colin M.",
""
],
[
"Thiyagalingam",
"Jeyan",
""
],
[
"Burnley",
"Tom",
""
],
[
"Basham",
"Mark",
""
],
[
"Lowe",
"Alan R.",
""
]
] |
new_dataset
| 0.980362 |
2209.04576
|
Zhenhua Wang
|
Zhenhua Wang, Ming Ren, Dong Gao, Bin Wang
|
Yes, DLGM! A novel hierarchical model for hazard classification
|
information science
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Hazards can be exposed by HAZOP as text information, and studying their
classification is of great significance to the development of industrial
informatics, which is conducive to safety early warning, decision support,
policy evaluation, etc. However, there is no research on this important field
at present. In this paper, we propose a novel model termed DLGM via deep
learning for hazard classification. Specifically, first, we leverage BERT to
vectorize the hazard and treat it as a type of time series (HTS). Secondly, we
build a grey model FSGM(1, 1) to model it, and get the grey guidance in the
sense of the structural parameters. Finally, we design a hierarchical-feature
fusion neural network (HFFNN) to investigate the HTS with grey guidance (HTSGG)
from three themes, where, HFFNN is a hierarchical structure with four types of
modules: two feature encoders, a gating mechanism, and a deepening mechanism.
We take 18 industrial processes as application cases and launch a series of
experiments. The experimental results show that DLGM has promising aptitude
for hazard classification and that FSGM(1, 1) and HFFNN are effective. We hope
our research can contribute added value and support to the daily practice in
industrial safety.
|
[
{
"version": "v1",
"created": "Sat, 10 Sep 2022 02:45:59 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Wang",
"Zhenhua",
""
],
[
"Ren",
"Ming",
""
],
[
"Gao",
"Dong",
""
],
[
"Wang",
"Bin",
""
]
] |
new_dataset
| 0.962418 |
2209.04602
|
Neela Sawant
|
Neela Sawant, Srinivasan H. Sengamedu
|
Code Compliance Assessment as a Learning Problem
|
Amazon.com, 2022
| null | null | null |
cs.SE cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Manual code reviews and static code analyzers are the traditional mechanisms
to verify if source code complies with coding policies. However, these
mechanisms are hard to scale. We formulate code compliance assessment as a
machine learning (ML) problem, to take as input a natural language policy and
code, and generate a prediction on the code's compliance, non-compliance, or
irrelevance. This can help scale compliance classification and search for
policies not covered by traditional mechanisms. We explore key research
questions on ML model formulation, training data, and evaluation setup. The
core idea is to obtain a joint code-text embedding space which preserves
compliance relationships via the vector distance of code and policy embeddings.
As there is no task-specific data, we re-interpret and filter commonly
available software datasets with additional pre-training and pre-finetuning
tasks that reduce the semantic gap. We benchmarked our approach on two listings
of coding policies (CWE and CBP). This is a zero-shot evaluation as none of the
policies occur in the training set. On CWE and CBP respectively, our tool
Policy2Code achieves classification accuracies of (59%, 71%) and search MRR of
(0.05, 0.21) compared to CodeBERT with classification accuracies of (37%, 54%)
and MRR of (0.02, 0.02). In a user study, 24% of Policy2Code detections were
accepted compared to 7% for CodeBERT.
|
[
{
"version": "v1",
"created": "Sat, 10 Sep 2022 05:41:04 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Sawant",
"Neela",
""
],
[
"Sengamedu",
"Srinivasan H.",
""
]
] |
new_dataset
| 0.990377 |
2209.04614
|
Tahmid Hasan Pranto
|
A. A. Talha Talukder, Md. Anisul Islam Mahmud, Arbiya Sultana, Tahmid
Hasan Pranto, AKM Bahalul Haque, Rashedur M. Rahman
|
A customer satisfaction centric food delivery system based on blockchain
and smart contract
| null | null |
10.1080/24751839.2022.2117121
| null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
Food delivery systems have recently been gaining popularity due to the expansion
of internet connectivity and the increasing availability of devices. The
growing popularity of such systems has raised concerns regarding (i)
Information security, (ii) Business to business (B2B) deep discounting race,
and (iii) Strict policy enforcement. Sensitive personal data and financial
information of the users must be safeguarded. Additionally, in pursuit of
gaining profit, the restaurants tend to offer deep discounts resulting in a
higher volume of orders than usual. Therefore, the restaurants and the delivery
persons fail to maintain the delivery time and often impair the food quality.
In this paper, we have proposed a blockchain and smart contract-based food
delivery system to address these issues. The main goal is to remove commission
schemes and decrease service delays caused by a high volume of orders. The
protocols have been deployed and tested on the Ethereum test network. The
simulation demonstrates a successful implementation of our desired system, with
the payment being controlled by our system. The actors (restaurant,
delivery-person or consumer) are bound to be compliant with the policies or
penalized otherwise.
|
[
{
"version": "v1",
"created": "Sat, 10 Sep 2022 07:50:25 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Talukder",
"A. A. Talha",
""
],
[
"Mahmud",
"Md. Anisul Islam",
""
],
[
"Sultana",
"Arbiya",
""
],
[
"Pranto",
"Tahmid Hasan",
""
],
[
"Haque",
"AKM Bahalul",
""
],
[
"Rahman",
"Rashedur M.",
""
]
] |
new_dataset
| 0.998262 |
2209.04639
|
Haiyang Mei
|
Haiyang Mei, Xin Yang, Letian Yu, Qiang Zhang, Xiaopeng Wei, Rynson
W.H. Lau
|
Large-Field Contextual Feature Learning for Glass Detection
| null | null |
10.1109/TPAMI.2022.3181973
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Glass is very common in our daily life. Existing computer vision systems
neglect it, which may lead to severe consequences, e.g., a robot may crash into
a glass wall. However, sensing the presence of glass is not straightforward.
The key challenge is that arbitrary objects/scenes can appear behind the glass.
In this paper, we propose an important problem of detecting glass surfaces from
a single RGB image. To address this problem, we construct the first large-scale
glass detection dataset (GDD) and propose a novel glass detection network,
called GDNet-B, which explores abundant contextual cues in a large
field-of-view via a novel large-field contextual feature integration (LCFI)
module and integrates both high-level and low-level boundary features with a
boundary feature enhancement (BFE) module. Extensive experiments demonstrate
that our GDNet-B achieves satisfactory glass detection results on the images
within and beyond the GDD testing set. We further validate the effectiveness
and generalization capability of our proposed GDNet-B by applying it to other
vision tasks, including mirror segmentation and salient object detection.
Finally, we show the potential applications of glass detection and discuss
possible future research directions.
|
[
{
"version": "v1",
"created": "Sat, 10 Sep 2022 11:08:05 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Mei",
"Haiyang",
""
],
[
"Yang",
"Xin",
""
],
[
"Yu",
"Letian",
""
],
[
"Zhang",
"Qiang",
""
],
[
"Wei",
"Xiaopeng",
""
],
[
"Lau",
"Rynson W. H.",
""
]
] |
new_dataset
| 0.999547 |
2209.04654
|
Ilan Doron-Arad
|
Ilan Doron-Arad, Ariel Kulik, Hadas Shachnai
|
An EPTAS for Budgeted Matroid Independent Set
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We consider the budgeted matroid independent set problem. The input is a
ground set, where each element has a cost and a non-negative profit, along with
a matroid over the elements and a budget. The goal is to select a subset of
elements which maximizes the total profit subject to the matroid and budget
constraints. Several well known special cases, where we have, e.g., a uniform
matroid and a budget, or no matroid constraint (i.e., the classic knapsack
problem), admit a fully polynomial-time approximation scheme (FPTAS). In
contrast, already a slight generalization to the multi-budgeted matroid
independent set problem has a PTAS but does not admit an efficient
polynomial-time approximation scheme (EPTAS). This implies a PTAS for our
problem, which is the best known result prior to this work. Our main
contribution is an EPTAS for the budgeted matroid independent set problem. A
key idea of the scheme is to find a representative set for the instance, whose
cardinality depends solely on $1/\varepsilon$, where $\varepsilon > 0$ is the
accuracy parameter of the scheme. The representative set is identified via
matroid basis minimization, which can be solved by a simple greedy algorithm.
Our scheme enumerates over subsets of the representative set and extends each
subset using a linear program. The notion of representative sets may be useful
in solving other variants of the budgeted matroid independent set problem.
|
[
{
"version": "v1",
"created": "Sat, 10 Sep 2022 12:54:10 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Doron-Arad",
"Ilan",
""
],
[
"Kulik",
"Ariel",
""
],
[
"Shachnai",
"Hadas",
""
]
] |
new_dataset
| 0.999392 |
2209.04680
|
Mohammad Ali Keyvanrad
|
Mahdi Rahmani, Melika Sabaghian, Seyyede Mahila Moghadami, Mohammad
Mohsen Talaie, Mahdi Naghibi, Mohammad Ali Keyvanrad
|
IR-LPR: Large Scale of Iranian License Plate Recognition Dataset
|
This is the final draft for the paper submitted to the 12th
International Conference on Computer and Knowledge Engineering (ICCKE 2022),
Ferdowsi University of Mashhad, Iran
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Object detection has many practical uses. There are so many things in our
world that recognizing them can not only increase our automatic knowledge of
the surroundings, but can also be lucrative for those interested in starting a
new business. One of these attractive objects is the license plate (LP). In
addition to its security uses, license plate detection can also enable creative
businesses. With the development of object detection methods based on deep
learning models, an appropriate and comprehensive dataset becomes doubly
important. However, due to the frequent commercial use of license plate
datasets, publicly available datasets are limited, not only in Iran but
worldwide. The largest Iranian dataset for license plate detection has 1,466
images, and the largest Iranian dataset for recognizing license plate
characters has 5,000 images. We have prepared a complete dataset of 20,967 car
images along with all the detection annotations for the whole license plate and
its characters, which can be useful for various purposes. In addition, the
total number of license plate images for the character recognition task is
27,745.
|
[
{
"version": "v1",
"created": "Sat, 10 Sep 2022 14:41:59 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Rahmani",
"Mahdi",
""
],
[
"Sabaghian",
"Melika",
""
],
[
"Moghadami",
"Seyyede Mahila",
""
],
[
"Talaie",
"Mohammad Mohsen",
""
],
[
"Naghibi",
"Mahdi",
""
],
[
"Keyvanrad",
"Mohammad Ali",
""
]
] |
new_dataset
| 0.999758 |
2209.04773
|
Masoud Salehpour
|
Masoud Salehpour and Joseph G. Davis
|
SymphonyDB: A Polyglot Model for Knowledge Graph Query Processing
|
arXiv admin note: text overlap with arXiv:2004.06203
| null | null | null |
cs.DB cs.DC cs.PF
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Unlocking the full potential of Knowledge Graphs (KGs) to enable or enhance
various semantic and other applications requires Data Management Systems (DMSs)
to efficiently store and process the content of KGs. However, the increases in
the size and variety of KG datasets as well as the growing diversity of KG
queries pose efficiency challenges for the current generation of DMSs to the
extent that the performance of representative DMSs tends to vary significantly
across diverse query types and no single platform dominates performance. We
present our extensible prototype, SymphonyDB, as an approach to addressing this
problem based on a polyglot model of query processing as part of a
multi-database system supported by a unified access layer that can
analyze/translate individual queries just-in-time and match each to the likely
best-performing DMS among Virtuoso, Blazegraph, RDF-3X, and MongoDB as
representative DMSs that are included in our prototype at this time. The
results of our experiments with the prototype over well-known KG benchmark
datasets and queries point to the efficiency and consistency of its performance
across different query types and datasets.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 03:04:03 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Salehpour",
"Masoud",
""
],
[
"Davis",
"Joseph G.",
""
]
] |
new_dataset
| 0.999416 |
2209.04808
|
Yuanquan Hu
|
Yuanquan Hu, Xiaoli Wei, Junji Yan, Hengxi Zhang
|
Graphon Mean-Field Control for Cooperative Multi-Agent Reinforcement
Learning
| null | null | null | null |
cs.MA math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The marriage between mean-field theory and reinforcement learning has shown a
great capacity to solve large-scale control problems with homogeneous agents.
To break the homogeneity restriction of mean-field theory, a recent interest is
to introduce graphon theory to the mean-field paradigm. In this paper, we
propose a graphon mean-field control (GMFC) framework to approximate
cooperative multi-agent reinforcement learning (MARL) with nonuniform
interactions and show that the approximation error is of order
$\mathcal{O}(\frac{1}{\sqrt{N}})$, with $N$ the number of agents. By
discretizing the graphon index of GMFC, we further introduce a smaller class of
GMFC called block GMFC, which is shown to well approximate cooperative MARL.
Our empirical studies on several examples demonstrate that our GMFC approach is
comparable with state-of-the-art MARL algorithms while enjoying better
scalability.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 08:00:39 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Hu",
"Yuanquan",
""
],
[
"Wei",
"Xiaoli",
""
],
[
"Yan",
"Junji",
""
],
[
"Zhang",
"Hengxi",
""
]
] |
new_dataset
| 0.95516 |
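Illustrative sketch for the graphon mean-field control abstract above: the
snippet below shows how a finite interaction graph can be sampled from a
graphon W, the object GMFC uses to model nonuniform interactions. The example
graphon, the sampling routine, and all names are assumptions made for
illustration, not code from the paper.

import numpy as np

def sample_graph_from_graphon(n, W, seed=0):
    # Sample an n-agent undirected graph where edge (i, j) appears with
    # probability W(u_i, u_j) and u_i are i.i.d. Uniform[0, 1] labels.
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                # latent graphon indices of the agents
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.uniform() < W(u[i], u[j]):
                A[i, j] = A[j, i] = 1
    return u, A

# Example: a smooth, nonuniform graphon (an assumed choice, not the paper's).
W = lambda x, y: 0.8 * np.exp(-3.0 * abs(x - y))
u, A = sample_graph_from_graphon(200, W)
print(A.sum() // 2, "edges among 200 agents")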
2209.04817
|
Lalita Kumari
|
Lalita Kumari, Sukhdeep Singh, VVS Rathore and Anuj Sharma
|
Lexicon and Attention based Handwritten Text Recognition System
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The handwritten text recognition problem, a sub-domain of pattern recognition,
is widely studied by researchers in the computer vision community due to its
scope for improvement and its applicability to daily life. Owing to the
advancement of computational power over the last few decades, neural-network-
based systems have contributed heavily to providing state-of-the-art
handwritten text recognizers. In the same direction, we have taken two
state-of-the-art neural network systems and merged the attention mechanism with
them. The attention technique has been widely used in the domains of neural
machine translation and automatic speech recognition and is now being applied
to the text recognition domain. In this study, we achieve a 4.15% character
error rate and a 9.72% word error rate on the IAM dataset, and a 7.07%
character error rate and a 16.14% word error rate on the GW dataset, after
merging the attention and word beam search decoder with the existing Flor et
al. architecture. For further analysis, we have also used a system similar to
the Shi et al. neural network with a greedy decoder and observed a 23.27%
improvement in character error rate over the base model.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 09:26:45 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Kumari",
"Lalita",
""
],
[
"Singh",
"Sukhdeep",
""
],
[
"Rathore",
"VVS",
""
],
[
"Sharma",
"Anuj",
""
]
] |
new_dataset
| 0.979914 |
2209.04868
|
Pouyan Keshavarzian
|
Pouyan Keshavarzian, Karthick Ramu, Duy Tang, Carlos Weill, Francesco
Gramuglia, Shyue Seng Tan, Michelle Tng, Louis Lim, Elgin Quek, Denis
Mandich, Mario Stip\v{c}evi\'c and Edoardo Charbon
|
A 3.3 Gbps SPAD-Based Quantum Random Number Generator
|
11 pages. 16 Figures
| null | null | null |
cs.CR quant-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Quantum random number generators are a burgeoning technology used for a
variety of applications, including modern security and encryption systems.
Typical methods exploit an entropy source combined with an extraction or bit
generation circuit in order to produce a random string. In integrated designs
there is often little modelling or analytical description of the entropy
source, circuit extraction and post-processing provided. In this work, we first
discuss theory on the quantum random flip-flop (QRFF), which elucidates the
role of circuit imperfections that manifest themselves in bias and correlation.
Then, a Verilog-AMS model is developed in order to validate the analytical
model in simulation. A novel transistor implementation of the QRFF circuit is
presented, which enables compensation of the degradation in entropy inherent to
the finite non-symmetric transitions of the random flip-flop. Finally, a full
system containing two independent arrays of the QRFF circuit is manufactured
and tested in a 55 nm Bipolar-CMOS-DMOS (BCD) technology node, demonstrating
bit generation statistics that are commensurate to the developed model. The
full chip is able to generate 3.3 Gbps of data when operated with an external
LED, whereas an individual QRFF can generate 25 Mbps each of random data while
maintaining a Shannon entropy bound > 0.997, which is one of the highest per
pixel bit generation rates to date. NIST STS is used to benchmark the generated
bit strings, thereby validating the QRFF circuit as an excellent candidate for
fully-integrated QRNGs.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 13:56:57 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Keshavarzian",
"Pouyan",
""
],
[
"Ramu",
"Karthick",
""
],
[
"Tang",
"Duy",
""
],
[
"Weill",
"Carlos",
""
],
[
"Gramuglia",
"Francesco",
""
],
[
"Tan",
"Shyue Seng",
""
],
[
"Tng",
"Michelle",
""
],
[
"Lim",
"Louis",
""
],
[
"Quek",
"Elgin",
""
],
[
"Mandich",
"Denis",
""
],
[
"Stipčević",
"Mario",
""
],
[
"Charbon",
"Edoardo",
""
]
] |
new_dataset
| 0.96122 |
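Hedged illustration for the quantum random number generator abstract above: a
minimal monobit (frequency) test in the spirit of NIST SP 800-22, shown only to
make the benchmarking step concrete. It is a stand-in for the full NIST STS
suite, and the bit stream below is synthetic rather than output of the QRFF
chip.

import math
import random

def monobit_frequency_test(bits):
    # NIST SP 800-22 style monobit test: p-value for the hypothesis that
    # ones and zeros are equally likely in the sequence.
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)     # map 1 -> +1, 0 -> -1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

random.seed(1)
bits = [random.getrandbits(1) for _ in range(100000)]
p = monobit_frequency_test(bits)
print("monobit p-value:", p, "-> pass" if p >= 0.01 else "-> fail")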
2209.04908
|
Mina Bishay
|
Mina Bishay, Jay Turcot, Graham Page and Mohammad Mavadati
|
Automatic Detection of Sentimentality from Facial Expressions
|
Accepted in ICIP 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emotion recognition has received considerable attention from the Computer
Vision community in the last 20 years. However, most of the research has
focused on analyzing the six basic emotions (e.g. joy, anger, surprise), with
limited work directed to other affective states. In this paper, we tackle
sentimentality (a strong heartwarming or nostalgic feeling), a new emotional
state that has received little attention in the literature and has no guideline
defining its facial markers. To this end, we first collect a dataset of 4.9K videos of
participants watching some sentimental and non-sentimental ads, and then we
label the moments evoking sentimentality in the ads. Second, we use the
ad-level labels and the facial Action Units (AUs) activation across different
frames for defining some weak frame-level sentimentality labels. Third, we
train a Multilayer Perceptron (MLP) using the AUs activation for sentimentality
detection. Finally, we define two new ad-level metrics for evaluating our model
performance. Quantitative and qualitative results show promising results for
sentimentality detection. To the best of our knowledge this is the first work
to address the problem of sentimentality detection.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 17:36:41 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Bishay",
"Mina",
""
],
[
"Turcot",
"Jay",
""
],
[
"Page",
"Graham",
""
],
[
"Mavadati",
"Mohammad",
""
]
] |
new_dataset
| 0.998692 |
2209.04911
|
M Charity
|
M Charity and Julian Togelius
|
Keke AI Competition: Solving puzzle levels in a dynamically changing
mechanic space
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Keke AI Competition introduces an artificial agent competition for the
game Baba is You - a Sokoban-like puzzle game where players can create rules
that influence the mechanics of the game. Altering a rule can cause temporary
or permanent effects for the rest of the level that could be part of the
solution space. The nature of these dynamic rules and the deterministic aspect
of the game creates a challenge for AI to adapt to a variety of mechanic
combinations in order to solve a level. This paper describes the framework and
evaluation metrics used to rank submitted agents and baseline results from
sample tree search agents.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 17:50:27 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Charity",
"M",
""
],
[
"Togelius",
"Julian",
""
]
] |
new_dataset
| 0.99469 |
2209.04916
|
Thorsten Berger
|
Steven She and Thorsten Berger
|
Formal Semantics of the Kconfig Language
|
Technical Note, Department of Electrical and Computer Engineering,
University of Waterloo, Canada
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Kconfig language defines a set of symbols that are assigned a value in a
configuration. We describe the semantics of the Kconfig language according to
the behavior exhibited in the xconfig configurator. We assume an abstract
syntax representation for concepts in the Kconfig language and delegate the
details of the translation from concrete to abstract syntaxes to a later
document.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 18:49:43 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"She",
"Steven",
""
],
[
"Berger",
"Thorsten",
""
]
] |
new_dataset
| 0.999459 |
2209.04939
|
Richard Eisenberg
|
Alexander Bernauer, Richard A. Eisenberg
|
Eiger: Auditable, executable, flexible legal regulations
|
15 pages, includes embedded Haskell code
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Despite recent advances in communication and automation, regulations are
still written in natural-language prose, subject to ambiguity, inconsistency,
and incompleteness. How can we craft regulations with precision? Our solution
is embodied in Eiger, a domain-specific programming language embedded in
Haskell. A domain expert pairs with a software engineer to write regulations in
Eiger. The domain expert needs only to read and audit the code, but not write
it. A first, limited, user study suggests that this works well in practice
because Eiger code mostly looks like Excel formulas with simple SQL queries.
Eiger forms the kernel of a new strategy to deliver value to clients in our
professional services business with increased automation and precision. The
framework is executable: based on client data, we can use Eiger both to deduce
how best to adapt to a new regulation and then maintain compliance. This paper
reviews the design of Eiger and walks through its implementation. To preserve a
straightforward surface syntax but with monadic semantics, we have leveraged
advanced features, including GHC.Generics, the new OverloadedRecordDot
extension, and a novel approach to performing class instance selection at
run-time.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 21:34:18 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Bernauer",
"Alexander",
""
],
[
"Eisenberg",
"Richard A.",
""
]
] |
new_dataset
| 0.971666 |
2209.04945
|
Guangming Wang
|
Guangming Wang, Zhiheng Feng, Chaokang Jiang, Hesheng Wang
|
Unsupervised Learning of 3D Scene Flow with 3D Odometry Assistance
|
12 pages, 9 figures, under review
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene flow represents the 3D motion of each point in the scene, which
explicitly describes the distance and the direction of each point's movement.
Scene flow estimation is used in various applications such as autonomous
driving, activity recognition, and virtual reality. Because it is challenging
to annotate scene flow with ground truth for real-world data, no real-world
dataset is available that provides a large amount of data with ground truth for
scene flow estimation. Therefore, many works use synthesized
data to pre-train their network and real-world LiDAR data to finetune. Unlike
the previous unsupervised learning of scene flow in point clouds, we propose to
use odometry information to assist the unsupervised learning of scene flow and
use real-world LiDAR data to train our network. Supervised odometry provides a
more accurate shared cost volume for scene flow. In addition, the proposed
network has mask-weighted warp layers to get a more accurate predicted point
cloud. The warp operation means applying an estimated pose transformation or
scene flow to a source point cloud to obtain a predicted point cloud and is the
key to refining scene flow from coarse to fine. When performing warp
operations, the points in different states use different weights for the pose
transformation and scene flow transformation. We classify the states of points
as static, dynamic, and occluded, where the static masks are used to divide
static and dynamic points, and the occlusion masks are used to divide occluded
points. The mask-weighted warp layer indicates that static masks and occlusion
masks are used as weights when performing warp operations. Our designs are
proved to be effective in ablation experiments. The experiment results show the
promising prospect of an odometry-assisted unsupervised learning method for 3D
scene flow in real-world data.
|
[
{
"version": "v1",
"created": "Sun, 11 Sep 2022 21:53:43 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Wang",
"Guangming",
""
],
[
"Feng",
"Zhiheng",
""
],
[
"Jiang",
"Chaokang",
""
],
[
"Wang",
"Hesheng",
""
]
] |
new_dataset
| 0.999634 |
2209.04966
|
Mazen Abdelfattah
|
Mazen Abdelfattah, Kaiwen Yuan, Z. Jane Wang, and Rabab Ward
|
Multi-modal Streaming 3D Object Detection
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Modern autonomous vehicles rely heavily on mechanical LiDARs for perception.
Current perception methods generally require 360{\deg} point clouds, collected
sequentially as the LiDAR scans the azimuth and acquires consecutive
wedge-shaped slices. The acquisition latency of a full scan (~ 100ms) may lead
to outdated perception which is detrimental to safe operation. Recent streaming
perception works proposed directly processing LiDAR slices and compensating for
the narrow field of view (FOV) of a slice by reusing features from preceding
slices. These works, however, are all based on a single modality and require
past information which may be outdated. Meanwhile, images from high-frequency
cameras can support streaming models as they provide a larger FOV compared to a
LiDAR slice. However, this difference in FOV complicates sensor fusion. To
address this research gap, we propose an innovative camera-LiDAR streaming 3D
object detection framework that uses camera images instead of past LiDAR slices
to provide an up-to-date, dense, and wide context for streaming perception. The
proposed method outperforms prior streaming models on the challenging NuScenes
benchmark. It also outperforms powerful full-scan detectors while being much
faster. Our method is shown to be robust to missing camera images, narrow LiDAR
slices, and small camera-LiDAR miscalibration.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 00:30:52 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Abdelfattah",
"Mazen",
""
],
[
"Yuan",
"Kaiwen",
""
],
[
"Wang",
"Z. Jane",
""
],
[
"Ward",
"Rabab",
""
]
] |
new_dataset
| 0.984916 |
2209.05011
|
Zhiqiang Wei
|
Zhiqiang Wei, Shuangyang Li, Weijie Yuan, Robert Schober, Giuseppe
Caire
|
Orthogonal Time Frequency Space Modulation -- Part I: Fundamentals and
Challenges Ahead
|
6 pages, 2 figures, submitted to IEEE Communications Letters
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This letter is the first part of a three-part tutorial on orthogonal time
frequency space (OTFS) modulation, which is a promising candidate waveform for
future wireless networks. This letter introduces and compares two popular
implementations of OTFS modulation, namely the symplectic finite Fourier
transform (SFFT)- and discrete Zak transform (DZT)-based architectures. Based
on these transceiver architectures, fundamental concepts of OTFS modulation,
including the delay-Doppler (DD) domain, DD domain information multiplexing,
and its potential benefits, are discussed. Finally, the challenges ahead for
OTFS modulation are highlighted. Parts II and III of this tutorial on OTFS
modulation focus on transceiver designs and integrated sensing and
communication (ISAC), respectively.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 04:06:40 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Wei",
"Zhiqiang",
""
],
[
"Li",
"Shuangyang",
""
],
[
"Yuan",
"Weijie",
""
],
[
"Schober",
"Robert",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
new_dataset
| 0.986223 |
2209.05015
|
Zhiqiang Wei
|
Weijie Yuan, Zhiqiang Wei, Shuangyang Li, Robert Schober, Giuseppe
Caire
|
Orthogonal Time Frequency Space Modulation -- Part III: ISAC and
Potential Applications
|
5 pages, 2 figures, submitted to IEEE Communications Letters
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The first two parts of this tutorial on orthogonal time frequency space
(OTFS) modulation have discussed the fundamentals of delay-Doppler (DD) domain
communications as well as some advanced technologies for transceiver design. In
this letter, we will present an OTFS-based integrated sensing and
communications (ISAC) system, which is regarded as an enabling technology in
next generation wireless communications. In particular, we illustrate the
sensing as well as the communication models for OTFS-ISAC systems. Next, we
show that benefiting from time-invariant DD channels, the sensing parameters
can be used for inferring the communication channels, leading to an efficient
transmission scheme. As both functionalities are realized in the same DD
domain, we briefly discuss several promising benefits of OTFS-based ISAC
systems, which have not been completely unveiled yet. Finally, a range of
potential applications of OTFS for the future wireless networks will be
highlighted.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 04:08:21 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Yuan",
"Weijie",
""
],
[
"Wei",
"Zhiqiang",
""
],
[
"Li",
"Shuangyang",
""
],
[
"Schober",
"Robert",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
new_dataset
| 0.998656 |
2209.05022
|
Shubham Kanitkar
|
Shubham Kanitkar, Helen Jiang, Wenzhen Yuan
|
PoseIt: A Visual-Tactile Dataset of Holding Poses for Grasp Stability
Analysis
|
8 pages, 7 figures, IEEE/RSJ International Conference on Intelligent
Robots and Systems 2022
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
When humans grasp objects in the real world, we often move our arms to hold
the object in a different pose where we can use it. In contrast, typical lab
settings only study the stability of the grasp immediately after lifting,
without any subsequent re-positioning of the arm. However, the grasp stability
could vary widely based on the object's holding pose, as the gravitational
torque and gripper contact forces could change completely. To facilitate the
study of how holding poses affect grasp stability, we present PoseIt, a novel
multi-modal dataset that contains visual and tactile data collected from a full
cycle of grasping an object, re-positioning the arm to one of the sampled
poses, and shaking the object. Using data from PoseIt, we can formulate and
tackle the task of predicting whether a grasped object is stable in a
particular held pose. We train an LSTM classifier that achieves 85% accuracy on
the proposed task. Our experimental results show that multi-modal models
trained on PoseIt achieve higher accuracy than using solely vision or tactile
data and that our classifiers can also generalize to unseen objects and poses.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 04:49:41 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Kanitkar",
"Shubham",
""
],
[
"Jiang",
"Helen",
""
],
[
"Yuan",
"Wenzhen",
""
]
] |
new_dataset
| 0.999846 |
2209.05034
|
Yudong Li
|
Yudong Li, Yuqing Zhang, Zhe Zhao, Linlin Shen, Weijie Liu, Weiquan
Mao, and Hui Zhang
|
CSL: A Large-scale Chinese Scientific Literature Dataset
|
to be published in COLING 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Scientific literature serves as a high-quality corpus, supporting a lot of
Natural Language Processing (NLP) research. However, existing datasets are
centered around the English language, which restricts the development of
Chinese scientific NLP. In this work, we present CSL, a large-scale Chinese
Scientific Literature dataset, which contains the titles, abstracts, keywords
and academic fields of 396k papers. To our knowledge, CSL is the first
scientific document dataset in Chinese. CSL can serve as a Chinese corpus.
Moreover, its semi-structured data provide natural annotations that can support
many supervised NLP tasks. Based on CSL, we present a benchmark to evaluate the
performance of models across scientific domain tasks, i.e., summarization,
keyword generation and text classification. We analyze the behavior of existing
text-to-text models on the evaluation tasks and reveal the challenges for
Chinese scientific NLP tasks, which provides a valuable reference for future
research. Data and code are available at https://github.com/ydli-ai/CSL
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 06:10:47 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Li",
"Yudong",
""
],
[
"Zhang",
"Yuqing",
""
],
[
"Zhao",
"Zhe",
""
],
[
"Shen",
"Linlin",
""
],
[
"Liu",
"Weijie",
""
],
[
"Mao",
"Weiquan",
""
],
[
"Zhang",
"Hui",
""
]
] |
new_dataset
| 0.999883 |
2209.05047
|
Cuicui Kang
|
Cuicui Kang
|
Is Synthetic Dataset Reliable for Benchmarking Generalizable Person
Re-Identification?
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent studies show that models trained on synthetic datasets are able to
achieve better generalizable person re-identification (GPReID) performance than
those trained on public real-world datasets. On the other hand, due to the
limitations of real-world person ReID datasets, it would also be important and
interesting to use large-scale synthetic datasets as test sets to benchmark
person ReID algorithms. Yet this raises a critical question: is synthetic
dataset reliable for benchmarking generalizable person re-identification? In
the literature there is no evidence showing this. To address this, we design a
method called Pairwise Ranking Analysis (PRA) to quantitatively measure the
ranking similarity and perform the statistical test of identical distributions.
Specifically, we employ Kendall rank correlation coefficients to evaluate
pairwise similarity values between algorithm rankings on different datasets.
Then, a non-parametric two-sample Kolmogorov-Smirnov (KS) test is performed to
judge whether the algorithm ranking correlations between synthetic and
real-world datasets and those only between real-world datasets follow identical
distributions. We conduct comprehensive experiments, with ten representative
algorithms, three popular real-world person ReID datasets, and three recently
released large-scale synthetic datasets. Through the designed pairwise ranking
analysis and comprehensive evaluations, we conclude that a recent large-scale
synthetic dataset ClonedPerson can be reliably used to benchmark GPReID,
statistically the same as real-world datasets. Therefore, this study guarantees
the usage of synthetic datasets for both source training set and target testing
set, with completely no privacy concerns from real-world surveillance data.
Besides, the study in this paper might also inspire future designs of synthetic
datasets.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 06:54:54 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Kang",
"Cuicui",
""
]
] |
new_dataset
| 0.980509 |
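Hedged sketch for the pairwise ranking analysis described above: Kendall rank
correlations between algorithm rankings on different datasets, followed by a
two-sample KS test on the two groups of correlations. The per-dataset scores
and dataset names are invented placeholders, not results from the paper.

import itertools
from scipy import stats

# Hypothetical per-dataset scores (e.g., Rank-1 or mAP) of four algorithms.
scores = {
    "real_A":  [41.2, 35.7, 50.3, 47.8],
    "real_B":  [38.9, 33.1, 48.6, 45.0],
    "real_C":  [36.4, 30.8, 46.2, 43.9],
    "synth_D": [40.5, 34.2, 49.9, 46.3],
}

def pairwise_kendall(names):
    # Kendall tau between algorithm rankings for every pair of datasets.
    return [((a, b), stats.kendalltau(scores[a], scores[b])[0])
            for a, b in itertools.combinations(names, 2)]

real = ["real_A", "real_B", "real_C"]
real_only  = [t for (a, b), t in pairwise_kendall(real)]
with_synth = [t for (a, b), t in pairwise_kendall(real + ["synth_D"])
              if "synth_D" in (a, b)]

# Do the two groups of ranking correlations come from identical distributions?
stat, p_value = stats.ks_2samp(real_only, with_synth)
print("KS statistic:", stat, "p-value:", p_value)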
2209.05077
|
Girmaw Abebe Tadesse
|
Girmaw Abebe Tadesse and Oliver Bent and Komminist Weldemariam and Md.
Abrar Istiak and Taufiq Hasan and Andrea Cavallaro
|
BON: An extended public domain dataset for human activity recognition
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
A body-worn first-person vision (FPV) camera enables the extraction of a rich
source of information on the environment from the subject's viewpoint. However, the
research progress in wearable camera-based egocentric office activity
understanding is slow compared to other activity environments (e.g., kitchen
and outdoor ambulatory), mainly due to the lack of adequate datasets to train
more sophisticated (e.g., deep learning) models for human activity recognition
in office environments. This paper provides details of a large and publicly
available office activity dataset (BON) collected in different office settings
across three geographical locations: Barcelona (Spain), Oxford (UK) and Nairobi
(Kenya), using a chest-mounted GoPro Hero camera. The BON dataset contains
eighteen common office activities that can be categorised into person-to-person
interactions (e.g., Chat with colleagues), person-to-object (e.g., Writing on a
whiteboard), and proprioceptive (e.g., Walking). Annotation is provided for
each video segment of 5-second duration. In total, BON contains 25 subjects and
2639 segments. In order to facilitate further research in
the sub-domain, we have also provided results that could be used as baselines
for future studies.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 08:28:26 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Tadesse",
"Girmaw Abebe",
""
],
[
"Bent",
"Oliver",
""
],
[
"Weldemariam",
"Komminist",
""
],
[
"Istiak",
"Md. Abrar",
""
],
[
"Hasan",
"Taufiq",
""
],
[
"Cavallaro",
"Andrea",
""
]
] |
new_dataset
| 0.999718 |
2209.05102
|
Federico Cor\`o
|
Tiziana Calamoneri and Federico Cor\`o
|
(Eternal) Vertex Cover Number of Infinite and Finite Grid Graphs
| null | null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
In the eternal vertex cover problem, mobile guards on the vertices of a graph
are used to defend it against an infinite sequence of attacks on its edges by
moving to neighbor vertices. The eternal vertex cover problem consists in
determining the minimum number of necessary guards. Motivated by previous
literature, in this paper, we study the vertex cover and eternal vertex cover
problems on regular grids, when passing from the infinite to the finite versions of the
same graphs, and we provide either coinciding or very tight lower and upper
bounds on the number of necessary guards. To this aim, we generalize the
notions of minimum vertex covers and minimum eternal vertex cover in order to
be well defined for infinite grids.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 09:15:36 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Calamoneri",
"Tiziana",
""
],
[
"Corò",
"Federico",
""
]
] |
new_dataset
| 0.999635 |
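A small, hedged illustration of the classical quantity behind the abstract
above: brute-force computation of a minimum vertex cover for a small finite
grid graph. It is exponential-time and only meant to make the definition
concrete; it does not reproduce the paper's bounds or the eternal variant.

import itertools

def grid_edges(rows, cols):
    # Edges of the rows x cols grid graph, vertices labelled (r, c).
    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                edges.append(((r, c), (r, c + 1)))
            if r + 1 < rows:
                edges.append(((r, c), (r + 1, c)))
    return edges

def min_vertex_cover(rows, cols):
    vertices = [(r, c) for r in range(rows) for c in range(cols)]
    edges = grid_edges(rows, cols)
    for k in range(len(vertices) + 1):
        for cover in itertools.combinations(vertices, k):
            s = set(cover)
            if all(u in s or v in s for u, v in edges):
                return k, s
    return len(vertices), set(vertices)

k, cover = min_vertex_cover(3, 3)
print("minimum vertex cover of the 3x3 grid has size", k)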
2209.05133
|
Daniel Lizzit
|
Daniel Lizzit, David Esseni
|
Operation and Design of Ferroelectric FETs for a BEOL Compatible Device
Implementation
| null |
ESSDERC 2021 - IEEE 51st European Solid-State Device Research
Conference (ESSDERC), 2021, pp. 215-218
|
10.1109/ESSDERC53440.2021.9631764
| null |
cs.ET
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a study based on numerical simulations and comparative analysis of
recent experimental data concerning the operation and design of FeFETs. Our
results show that a proper consideration of charge trapping in the
ferroelectric-dielectric stack is indispensable to reconcile simulations with
experiments, and to attain the desired hysteretic behavior of the
current-voltage characteristics. Then we analyze a few design options for
polysilicon channel FeFETs and, in particular, we study the influence of the
channel thickness and doping concentration on the memory window, and on the
ratio between the polarization dependent, high and low resistance state.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 10:39:04 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Lizzit",
"Daniel",
""
],
[
"Esseni",
"David",
""
]
] |
new_dataset
| 0.999092 |
2209.05250
|
Willow Ahrens
|
Willow Ahrens, Daniel Donenfeld, Fredrik Kjolstad, Saman Amarasinghe
|
Looplets: A Language For Structured Coiteration
| null | null | null | null |
cs.PL cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real world arrays often contain underlying structure, such as sparsity, runs
of repeated values, or symmetry. Specializing for structure yields significant
speedups. But automatically generating efficient code for structured data is
challenging, especially when arrays with different structure interact. We show
how to abstract over array structures so that the compiler can generate code to
coiterate over any combination of them. Our technique enables new array formats
(such as 1DVBL for irregular clustered sparsity), new iteration strategies
(such as galloping intersections), and new operations over structured data
(such as concatenation or convolution).
|
[
{
"version": "v1",
"created": "Thu, 8 Sep 2022 20:16:41 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Ahrens",
"Willow",
""
],
[
"Donenfeld",
"Daniel",
""
],
[
"Kjolstad",
"Fredrik",
""
],
[
"Amarasinghe",
"Saman",
""
]
] |
new_dataset
| 0.998836 |
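To make one of the iteration strategies named above concrete, here is a minimal
sketch of a galloping (exponential-search) intersection of two sorted index
lists, the kind of coiteration a structured-array compiler might emit. It is an
independent illustration in Python, not code generated by Looplets.

from bisect import bisect_left

def gallop(xs, target, lo):
    # First index >= lo with xs[index] >= target: double a probe, then bisect.
    bound = 1
    while lo + bound < len(xs) and xs[lo + bound] < target:
        bound *= 2
    return bisect_left(xs, target, lo, min(lo + bound + 1, len(xs)))

def galloping_intersection(a, b):
    # Intersect two sorted, duplicate-free index lists.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i = gallop(a, b[j], i)
        else:
            j = gallop(b, a[i], j)
    return out

print(galloping_intersection([1, 5, 9, 10, 11, 200, 500],
                             [2, 9, 11, 180, 200, 1000]))   # -> [9, 11, 200]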
2209.05252
|
Kresimir Matkovic
|
Manlio Massiris Fern\'andez, Sanjin Rado\v{s}, Kre\v{s}imir
Matkovi\'c, M. Eduard Gr\"oller, Claudio Delrieux
|
ErgoExplorer: Interactive Ergonomic Risk Assessment from Video
Collections
|
Accepted for IEEE VIS 2022 and IEEE TVCG
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ergonomic risk assessment is now, due to an increased awareness, carried out
more often than in the past. The conventional risk assessment evaluation, based
on expert-assisted observation of the workplaces and manually filling in score
tables, is still predominant. Data analysis is usually done with a focus on
critical moments, although without the support of contextual information and
changes over time. In this paper we introduce ErgoExplorer, a system for the
interactive visual analysis of risk assessment data. In contrast to the current
practice, we focus on data that span across multiple actions and multiple
workers while keeping all contextual information. Data is automatically
extracted from video streams. Based on carefully investigated analysis tasks,
we introduce new views and their corresponding interactions. These views also
incorporate domain-specific score tables to guarantee an easy adoption by
domain experts. All views are integrated into ErgoExplorer, which relies on
coordinated multiple views to facilitate analysis through interaction.
ErgoExplorer makes it possible for the first time to examine complex
relationships between risk assessments of individual body parts over long
sessions that span multiple operations. The newly introduced approach supports
analysis and exploration at several levels of detail, ranging from a general
overview, down to inspecting individual frames in the video stream, if
necessary. We illustrate the usefulness of the newly proposed approach applying
it to several datasets.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2022 13:32:45 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Fernández",
"Manlio Massiris",
""
],
[
"Radoš",
"Sanjin",
""
],
[
"Matković",
"Krešimir",
""
],
[
"Gröller",
"M. Eduard",
""
],
[
"Delrieux",
"Claudio",
""
]
] |
new_dataset
| 0.994573 |
2209.05278
|
Ruofeng Wen
|
Ruofeng Wen, Wenjun Zeng, Yi Liu
|
A Nonparametric Contextual Bandit with Arm-level Eligibility Control for
Customer Service Routing
|
Accepted at 4th Edition of Knowledge-aware and Conversational
Recommender Systems (KaRS) Workshop @ RecSys 2022, September 18--23 2022,
Seattle, WA, USA
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Amazon Customer Service provides real-time support for millions of customer
contacts every year. While bot-resolver helps automate some traffic, we still
see high demand for human agents, also called subject matter experts (SMEs).
Customers reach out with questions in different domains (return policy, device
troubleshooting, etc.). Depending on their training, not all SMEs are eligible
to handle all contacts. Routing contacts to eligible SMEs turns out to be a
non-trivial problem because SMEs' domain eligibility is subject to training
quality and can change over time. To optimally recommend SMEs while
simultaneously learning the true eligibility status, we propose to formulate
the routing problem with a nonparametric contextual bandit algorithm (K-Boot)
plus an eligibility control (EC) algorithm. K-Boot models reward with a kernel
smoother on similar past samples selected by $k$-NN, and Bootstrap Thompson
Sampling for exploration. EC filters arms (SMEs) by the initially
system-claimed eligibility and dynamically validates the reliability of this
information. The proposed K-Boot is a general bandit algorithm, and EC is
applicable to other bandits. Our simulation studies show that K-Boot performs
on par with state-of-the-art Bandit models, and EC boosts K-Boot performance
when a stochastic eligibility signal exists.
|
[
{
"version": "v1",
"created": "Thu, 8 Sep 2022 19:20:20 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Wen",
"Ruofeng",
""
],
[
"Zeng",
"Wenjun",
""
],
[
"Liu",
"Yi",
""
]
] |
new_dataset
| 0.998941 |
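Hedged sketch of the bandit machinery described above: a k-NN kernel-smoothed
reward estimate combined with bootstrap-style Thompson sampling for arm
selection, plus a trivial eligibility filter. The kernel, neighbourhood size,
toy contexts, and arm names are assumptions for illustration, not the
production K-Boot/EC implementation.

import numpy as np

rng = np.random.default_rng(0)

def knn_kernel_estimate(history, x, k=20, bandwidth=1.0):
    # Kernel-smoothed reward over the k nearest past contexts of one arm,
    # with a bootstrap resample of the neighbours for Thompson-style exploration.
    if not history:
        return None
    X = np.array([h[0] for h in history])
    r = np.array([h[1] for h in history])
    d = np.linalg.norm(X - x, axis=1)
    idx = np.argsort(d)[:k]
    boot = rng.choice(idx, size=len(idx), replace=True)
    w = np.exp(-(d[boot] / bandwidth) ** 2)
    return float(np.sum(w * r[boot]) / (np.sum(w) + 1e-12))

def select_arm(histories, x, eligible):
    # Pick the eligible arm with the highest bootstrapped estimate; unseen arms first.
    best_arm, best_val = None, -np.inf
    for arm in eligible:
        est = knn_kernel_estimate(histories[arm], x)
        if est is None:
            return arm
        if est > best_val:
            best_arm, best_val = arm, est
    return best_arm

# Toy loop with two hypothetical SMEs (arms) and 3-dimensional contact contexts.
histories = {"sme_a": [], "sme_b": []}
for t in range(200):
    x = rng.normal(size=3)
    arm = select_arm(histories, x, eligible=["sme_a", "sme_b"])
    reward = float(rng.uniform() < (0.7 if arm == "sme_a" else 0.4))
    histories[arm].append((x, reward))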
2209.05309
|
Gilbert Feng
|
Gilbert Feng, Hongbo Zhang, Zhongyu Li, Xue Bin Peng, Bhuvan
Basireddy, Linzhu Yue, Zhitao Song, Lizhi Yang, Yunhui Liu, Koushil Sreenath,
Sergey Levine
|
GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots
|
First two authors contributed equally
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have seen a surge in commercially-available and affordable
quadrupedal robots, with many of these platforms being actively used in
research and industry. As the availability of legged robots grows, so does the
need for controllers that enable these robots to perform useful skills.
However, most learning-based frameworks for controller development focus on
training robot-specific controllers, a process that needs to be repeated for
every new robot. In this work, we introduce a framework for training
generalized locomotion (GenLoco) controllers for quadrupedal robots. Our
framework synthesizes general-purpose locomotion controllers that can be
deployed on a large variety of quadrupedal robots with similar morphologies. We
present a simple but effective morphology randomization method that
procedurally generates a diverse set of simulated robots for training. We show
that by training a controller on this large set of simulated robots, our models
acquire more general control strategies that can be directly transferred to
novel simulated and real-world robots with diverse morphologies, which were not
observed during training.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 15:14:32 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Feng",
"Gilbert",
""
],
[
"Zhang",
"Hongbo",
""
],
[
"Li",
"Zhongyu",
""
],
[
"Peng",
"Xue Bin",
""
],
[
"Basireddy",
"Bhuvan",
""
],
[
"Yue",
"Linzhu",
""
],
[
"Song",
"Zhitao",
""
],
[
"Yang",
"Lizhi",
""
],
[
"Liu",
"Yunhui",
""
],
[
"Sreenath",
"Koushil",
""
],
[
"Levine",
"Sergey",
""
]
] |
new_dataset
| 0.999667 |
2209.05319
|
John Chebor Mr.
|
John C. Chebor, Simon M. Karume, Nelson B. Masese and Andrew Kipkebut
|
Prototyping a Serial Number Based Authentication Model for a Computer in
a Wireless Local Area Network
|
13 pages, 11 figures
|
International Journal of Wireless & Mobile Networks (IJWMN) 14
(2022) 27-39
|
10.5121/ijwmn.2022.14403
| null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the increase of wireless LAN usage in homes and enterprises due to its
numerous benefits, authenticating the ever-increasing number of devices and
their users has become a challenge for proprietors of such networks. A MAC
address, the physical network address used as the basis for this study, has a
copy of its value in the system software that can be spoofed and altered,
rendering the address neither unique, secure, nor reliable. In contrast, a
computer's serial number is hard-coded in the system hardware only and
therefore cannot be spoofed or altered, making it unique, secure, and reliable.
The research, therefore, aimed to design a model that demonstrates how a
computer's serial number can be used to authenticate a computer in a wireless
local area network. To achieve the research objective, the study examined the
built-in access and use of a computer's serial number in a prototype model as
an alternative method of authenticating devices in a network. A design science
research methodology involving design and development, demonstration, and model
evaluation was employed. A Serial Number Based Authentication Prototype (SNAP)
was therefore designed using state chart and flow chart diagrams based on
dynamic programming, developed through evolutionary prototyping, and test run
on a static experimental design using the Java Development Kit and MySQL
platforms to demonstrate, as a proof of concept, that a computer's serial
number can be used to authenticate a computer in a wireless local area network.
From the test runs, whose outcomes were the binary values yes or no, it was
found that SNAP can allow or deny, enable or disable a computer in a network
based on the computer's serial number. The researcher therefore recommends that
the prototype be scaled up and then adopted as a network device authentication
method.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 15:21:15 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Chebor",
"John C.",
""
],
[
"Karume",
"Simon M.",
""
],
[
"Masese",
"Nelson B.",
""
],
[
"Kipkebut",
"Andrew",
""
]
] |
new_dataset
| 0.996979 |
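As a hedged companion to the abstract above, the snippet below shows one way a
client could read a hardware serial number on Linux and check it against an
allow list of salted hashes. The dmidecode call typically requires root, the
salt and helper names are invented for illustration, and this is not the SNAP
prototype's actual Java/MySQL implementation.

import hashlib
import subprocess

def read_serial_number():
    # Query the BIOS/DMI serial number (Linux; typically requires root).
    out = subprocess.run(["dmidecode", "-s", "system-serial-number"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def is_authorized(serial, allowed_hashes, salt="snap-demo-salt"):
    # Allow or deny a device by comparing a salted hash of its serial number.
    digest = hashlib.sha256((salt + serial).encode()).hexdigest()
    return digest in allowed_hashes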
2209.05405
|
Zhaofeng Tian
|
Zhaofeng Tian, Weisong Shi
|
Edge Coverage Path Planning for Robot Mowing
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Thanks to the rapid evolution of robotic technologies, robot mowing is emerging
to free humans from tedious and time-consuming landscaping work.
Traditionally, robot mowing is perceived as a "Coverage Path Planning" problem,
with a simplification that converts non-convex obstacles into convex obstacles.
Besides, the converted obstacles are commonly dilated by the robot's
circumcircle for collision avoidance. However, when applied to robot mowing, an
obstacle in a lawn is usually non-convex (imagine a garden on the lawn), so the
aforementioned obstacle-processing methods would fill in some concave areas,
making them inaccessible to the robot and hence producing unavoidable uncut
areas along the lawn edge, which dulls the landscape's elegance and requires
rework. To shrink the uncut area around the lawn edge, we
hereby reframe the problem into a brand new problem, named the "Edge Coverage
Path Planning" problem that is dedicated to path planning with the objective to
cover the edge. Correspondingly, we propose two planning methods, the "big and
small disk" and the "sliding chopstick" planning method to tackle the problem
by leveraging image morphological processing and computational geometry skills.
Validation shows that our proposed methods outperform the traditional
"dilation-by-circumcircle" method.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 16:54:59 GMT"
}
] | 2022-09-13T00:00:00 |
[
[
"Tian",
"Zhaofeng",
""
],
[
"Shi",
"Weisong",
""
]
] |
new_dataset
| 0.995704 |
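The sketch below gives a hedged flavour of the morphological reasoning alluded
to above: erode a lawn mask by a disk-shaped robot footprint and take the
boundary of the eroded region as a candidate edge-following pass. The toy lawn,
footprint radius, and pipeline are assumptions for illustration, not the
paper's "big and small disk" or "sliding chopstick" planners.

import numpy as np
from scipy import ndimage

def disk(radius):
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def edge_pass_cells(lawn_mask, robot_radius):
    # Cells visited by the robot centre when hugging the lawn edge:
    # the boundary of the lawn mask eroded by the robot's disk footprint.
    reachable = ndimage.binary_erosion(lawn_mask, structure=disk(robot_radius))
    interior = ndimage.binary_erosion(reachable, structure=np.ones((3, 3), dtype=bool))
    return reachable & ~interior

# Toy lawn: a 60 x 80 rectangle with a non-convex (L-shaped) garden cut out of it.
lawn = np.ones((60, 80), dtype=bool)
lawn[20:40, 30:35] = False
lawn[20:25, 30:55] = False
print(int(edge_pass_cells(lawn, robot_radius=3).sum()), "cells on the edge-following pass")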
1902.07657
|
Alexandros Hollender
|
Paul W. Goldberg, Alexandros Hollender
|
The Hairy Ball Problem is PPAD-Complete
|
Journal version
|
Journal of Computer and System Sciences, 122:34-62 (2021)
|
10.1016/j.jcss.2021.05.004
| null |
cs.CC cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Hairy Ball Theorem states that every continuous tangent vector field on
an even-dimensional sphere must have a zero. We prove that the associated
computational problem of (a) computing an approximate zero is PPAD-complete,
and (b) computing an exact zero is FIXP-hard. We also consider the Hairy Ball
Theorem on toroidal instead of spherical domains and show that the approximate
problem remains PPAD-complete. On a conceptual level, our PPAD-membership
results are particularly interesting, because they heavily rely on the
investigation of multiple-source variants of END-OF-LINE, the canonical
PPAD-complete problem. Our results on these new END-OF-LINE variants are of
independent interest and provide new tools for showing membership in PPAD. In
particular, we use them to provide the first full proof of PPAD-completeness
for the IMBALANCE problem defined by Beame et al. in 1998.
|
[
{
"version": "v1",
"created": "Wed, 20 Feb 2019 17:11:20 GMT"
},
{
"version": "v2",
"created": "Fri, 3 May 2019 14:11:30 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Sep 2022 23:14:23 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Goldberg",
"Paul W.",
""
],
[
"Hollender",
"Alexandros",
""
]
] |
new_dataset
| 0.957199 |
2105.04949
|
Asahi Ushio
|
Asahi Ushio and Luis Espinosa-Anke and Steven Schockaert and Jose
Camacho-Collados
|
BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models
Identify Analogies?
|
Accepted by ACL 2021 main conference
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Analogies play a central role in human commonsense reasoning. The ability to
recognize analogies such as "eye is to seeing what ear is to hearing",
sometimes referred to as analogical proportions, shape how we structure
knowledge and understand language. Surprisingly, however, the task of
identifying such analogies has not yet received much attention in the language
model era. In this paper, we analyze the capabilities of transformer-based
language models on this unsupervised task, using benchmarks obtained from
educational settings, as well as more commonly used datasets. We find that
off-the-shelf language models can identify analogies to a certain extent, but
struggle with abstract and complex relations, and results are highly sensitive
to model architecture and hyperparameters. Overall the best results were
obtained with GPT-2 and RoBERTa, while configurations using BERT were not able
to outperform word embedding models. Our results raise important questions for
future work about how, and to what extent, pre-trained language models capture
knowledge about abstract semantic relations.
|
[
{
"version": "v1",
"created": "Tue, 11 May 2021 11:38:49 GMT"
},
{
"version": "v2",
"created": "Wed, 26 May 2021 10:10:38 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Jun 2021 16:39:21 GMT"
},
{
"version": "v4",
"created": "Fri, 9 Sep 2022 14:52:05 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Ushio",
"Asahi",
""
],
[
"Espinosa-Anke",
"Luis",
""
],
[
"Schockaert",
"Steven",
""
],
[
"Camacho-Collados",
"Jose",
""
]
] |
new_dataset
| 0.993762 |
2106.13043
|
Andrey Guzhov
|
Andrey Guzhov, Federico Raue, J\"orn Hees, Andreas Dengel
|
AudioCLIP: Extending CLIP to Image, Text and Audio
|
submitted to GCPR 2021
| null | null | null |
cs.SD cs.CV eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the past, the rapidly evolving field of sound classification greatly
benefited from the application of methods from other domains. Today, we observe
the trend to fuse domain-specific tasks and approaches together, which provides
the community with new outstanding models.
In this work, we present an extension of the CLIP model that handles audio in
addition to text and images. Our proposed model incorporates the ESResNeXt
audio-model into the CLIP framework using the AudioSet dataset. Such a
combination enables the proposed model to perform bimodal and unimodal
classification and querying, while keeping CLIP's ability to generalize to
unseen datasets in a zero-shot inference fashion.
AudioCLIP achieves new state-of-the-art results in the Environmental Sound
Classification (ESC) task, outperforming other approaches by reaching
accuracies of 90.07% on the UrbanSound8K and 97.15% on the ESC-50 datasets.
Furthermore, it sets new baselines in the zero-shot ESC task on the same datasets
(68.78% and 69.40%, respectively).
Finally, we also assess the cross-modal querying performance of the proposed
model as well as the influence of full and partial training on the results. For
the sake of reproducibility, our code is published.
|
[
{
"version": "v1",
"created": "Thu, 24 Jun 2021 14:16:38 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Guzhov",
"Andrey",
""
],
[
"Raue",
"Federico",
""
],
[
"Hees",
"Jörn",
""
],
[
"Dengel",
"Andreas",
""
]
] |
new_dataset
| 0.999819 |
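A brief, hedged illustration of the zero-shot querying idea described above:
CLIP-style scoring by cosine similarity between one (audio) embedding and
per-class text embeddings. The embeddings here are random placeholders; real
ones would come from the trained audio and text encoders, and the temperature
value is an assumption.

import numpy as np

def zero_shot_scores(audio_emb, text_embs, labels, temperature=100.0):
    # Cosine similarities between a single audio embedding and class text
    # embeddings, turned into a probability-like softmax over the labels.
    a = audio_emb / np.linalg.norm(audio_emb)
    T = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (T @ a)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(labels, probs))

rng = np.random.default_rng(0)
labels = ["dog bark", "rain", "siren"]
print(zero_shot_scores(rng.normal(size=512), rng.normal(size=(3, 512)), labels))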
2109.15040
|
Claudio Cicconetti
|
Carlo Puliafito and Claudio Cicconetti and Marco Conti and Enzo
Mingozzi and Andrea Passarella
|
Stateful Function-as-a-Service at the Edge
|
Accepted for publication at IEEE Computer
|
IEEE Computer, vol. 55, issue 9, September 2022
|
10.1109/MC.2021.3138690
| null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In FaaS, users invoke remote functions, which encapsulate service(s). These
functions typically need to remotely access a persistent state via external
services: this makes the paradigm less attractive in edge systems, especially
for IoT applications, due to the increased delay and outbound traffic. We
propose to generalize the FaaS paradigm by allowing functions to alternate
between remote-state and local-state phases, depending on internal and external
conditions, and dedicating a container with persistent memory to functions when
in a local-state phase. We present initial results showing that this simple yet
powerful pattern allows better utilization of the available resources, which
are scarce on edge nodes, while significantly reducing tail latencies, which is
key to enabling many new applications based on real-time ML, e.g., in smart
vehicles and smart factory scenarios.
|
[
{
"version": "v1",
"created": "Thu, 30 Sep 2021 12:07:10 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Dec 2021 18:07:02 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Puliafito",
"Carlo",
""
],
[
"Cicconetti",
"Claudio",
""
],
[
"Conti",
"Marco",
""
],
[
"Mingozzi",
"Enzo",
""
],
[
"Passarella",
"Andrea",
""
]
] |
new_dataset
| 0.999327 |
2204.01697
|
Zhengzhong Tu
|
Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar,
Alan Bovik, Yinxiao Li
|
MaxViT: Multi-Axis Vision Transformer
|
ECCV 2022; code: https://github.com/google-research/maxvit v1:
initials; v2: added GAN visuals; v3: fixed ImageNet-1k acc typos for Maxvit @
384
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Transformers have recently gained significant attention in the computer
vision community. However, the lack of scalability of self-attention mechanisms
with respect to image size has limited their wide adoption in state-of-the-art
vision backbones. In this paper we introduce an efficient and scalable
attention model we call multi-axis attention, which consists of two aspects:
blocked local and dilated global attention. These design choices allow
global-local spatial interactions on arbitrary input resolutions with only
linear complexity. We also present a new architectural element by effectively
blending our proposed attention model with convolutions, and accordingly
propose a simple hierarchical vision backbone, dubbed MaxViT, by simply
repeating the basic building block over multiple stages. Notably, MaxViT is
able to ''see'' globally throughout the entire network, even in earlier,
high-resolution stages. We demonstrate the effectiveness of our model on a
broad spectrum of vision tasks. On image classification, MaxViT achieves
state-of-the-art performance under various settings: without extra data, MaxViT
attains 86.5% ImageNet-1K top-1 accuracy; with ImageNet-21K pre-training, our
model achieves 88.7% top-1 accuracy. For downstream tasks, MaxViT as a backbone
delivers favorable performance on object detection as well as visual aesthetic
assessment. We also show that our proposed model expresses strong generative
modeling capability on ImageNet, demonstrating the superior potential of MaxViT
blocks as a universal vision module. The source code and trained models will be
available at https://github.com/google-research/maxvit.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 17:59:44 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Jul 2022 05:35:39 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Aug 2022 08:35:14 GMT"
},
{
"version": "v4",
"created": "Fri, 9 Sep 2022 17:57:10 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Tu",
"Zhengzhong",
""
],
[
"Talebi",
"Hossein",
""
],
[
"Zhang",
"Han",
""
],
[
"Yang",
"Feng",
""
],
[
"Milanfar",
"Peyman",
""
],
[
"Bovik",
"Alan",
""
],
[
"Li",
"Yinxiao",
""
]
] |
new_dataset
| 0.998616 |
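An independent, hedged illustration of the two attention axes named above:
partitioning a feature map into non-overlapping local blocks (block attention)
and into a dilated, global grouping (grid attention) with plain NumPy reshapes.
The window size, tensor layout, and function names are assumptions, not the
reference MaxViT code.

import numpy as np

def block_partition(x, p):
    # (H, W, C) -> (num_windows, p*p, C): non-overlapping p x p local windows.
    H, W, C = x.shape
    x = x.reshape(H // p, p, W // p, p, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, p * p, C)

def grid_partition(x, p):
    # (H, W, C) -> (p*p, (H//p)*(W//p), C): each group takes one token from
    # every p x p block, i.e. a dilated, global sampling of the feature map.
    H, W, C = x.shape
    x = x.reshape(H // p, p, W // p, p, C)
    return x.transpose(1, 3, 0, 2, 4).reshape(p * p, -1, C)

x = np.arange(8 * 8 * 1, dtype=float).reshape(8, 8, 1)
print(block_partition(x, 4).shape)   # (4, 16, 1): 4 local windows of 16 tokens
print(grid_partition(x, 4).shape)    # (16, 4, 1): 16 sparse global groups of 4 tokens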
2204.05839
|
Matthew Weiss
|
Benny J. Tang, Qiqi Chen, Matthew L. Weiss, Nathan Frey, Joseph
McDonald, David Bestor, Charles Yee, William Arcand, Chansup Byun, Daniel
Edelman, Matthew Hubbell, Michael Jones, Jeremy Kepner, Anna Klein, Adam
Michaleas, Peter Michaleas, Lauren Milechin, Julia Mullen, Andrew Prout,
Albert Reuther, Antonio Rosa, Andrew Bowne, Lindsey McEvoy, Baolin Li, Devesh
Tiwari, Vijay Gadepally, Siddharth Samsi
|
The MIT Supercloud Workload Classification Challenge
|
Accepted at IPDPS ADOPT'22
| null |
10.1109/IPDPSW55747.2022.00122
| null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-Performance Computing (HPC) centers and cloud providers support an
increasingly diverse set of applications on heterogenous hardware. As
Artificial Intelligence (AI) and Machine Learning (ML) workloads have become an
increasingly larger share of the compute workloads, new approaches to optimized
resource usage, allocation, and deployment of new AI frameworks are needed. By
identifying compute workloads and their utilization characteristics, HPC
systems may be able to better match available resources with the application
demand. By leveraging datacenter instrumentation, it may be possible to develop
AI-based approaches that can identify workloads and provide feedback to
researchers and datacenter operators for improving operational efficiency. To
enable this research, we released the MIT Supercloud Dataset, which provides
detailed monitoring logs from the MIT Supercloud cluster. This dataset includes
CPU and GPU usage by jobs, memory usage, and file system logs. In this paper,
we present a workload classification challenge based on this dataset. We
introduce a labelled dataset that can be used to develop new approaches to
workload classification and present initial results based on existing
approaches. The goal of this challenge is to foster algorithmic innovations in
the analysis of compute workloads that can achieve higher accuracy than
existing methods. Data and code will be made publicly available via the
Datacenter Challenge website : https://dcc.mit.edu.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 14:28:04 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Apr 2022 18:31:04 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Tang",
"Benny J.",
""
],
[
"Chen",
"Qiqi",
""
],
[
"Weiss",
"Matthew L.",
""
],
[
"Frey",
"Nathan",
""
],
[
"McDonald",
"Joseph",
""
],
[
"Bestor",
"David",
""
],
[
"Yee",
"Charles",
""
],
[
"Arcand",
"William",
""
],
[
"Byun",
"Chansup",
""
],
[
"Edelman",
"Daniel",
""
],
[
"Hubbell",
"Matthew",
""
],
[
"Jones",
"Michael",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Klein",
"Anna",
""
],
[
"Michaleas",
"Adam",
""
],
[
"Michaleas",
"Peter",
""
],
[
"Milechin",
"Lauren",
""
],
[
"Mullen",
"Julia",
""
],
[
"Prout",
"Andrew",
""
],
[
"Reuther",
"Albert",
""
],
[
"Rosa",
"Antonio",
""
],
[
"Bowne",
"Andrew",
""
],
[
"McEvoy",
"Lindsey",
""
],
[
"Li",
"Baolin",
""
],
[
"Tiwari",
"Devesh",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Samsi",
"Siddharth",
""
]
] |
new_dataset
| 0.975625 |
2209.02297
|
Yanchao Xu
|
Yanchao Xu, Wenbo Shao, Jun Li, Kai Yang, Weida Wang, Hua Huang, Chen
Lv, Hong Wang
|
SIND: A Drone Dataset at Signalized Intersection in China
|
8 pages
| null | null | null |
cs.CV cs.GL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intersection is one of the most challenging scenarios for autonomous driving
tasks. Due to the complexity and stochasticity, essential applications (e.g.,
behavior modeling, motion prediction, safety validation, etc.) at intersections
rely heavily on data-driven techniques. Thus, there is an intense demand for
trajectory datasets of traffic participants (TPs) in intersections. Currently,
most intersections in urban areas are equipped with traffic lights. However,
there is not yet a large-scale, high-quality, publicly available trajectory
dataset for signalized intersections. Therefore, in this paper, a typical
two-phase signalized intersection is selected in Tianjin, China. Besides, a
pipeline is designed to construct a Signalized INtersection Dataset (SIND),
which contains 7 hours of recording including over 13,000 TPs of 7 types.
Then, the behaviors of traffic light violations in SIND are recorded.
Furthermore, the SIND is also compared with other similar works. The features
of the SIND can be summarized as follows: 1) SIND provides more comprehensive
information, including traffic light states, motion parameters, High Definition
(HD) map, etc. 2) The category of TPs is diverse and characteristic, where the
proportion of vulnerable road users (VRUs) is up to 62.6%; 3) Multiple traffic
light violations of non-motor vehicles are shown. We believe that SIND would be
an effective supplement to existing datasets and can promote related research
on autonomous driving. The dataset is available online via:
https://github.com/SOTIF-AVLab/SinD
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 08:49:44 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Xu",
"Yanchao",
""
],
[
"Shao",
"Wenbo",
""
],
[
"Li",
"Jun",
""
],
[
"Yang",
"Kai",
""
],
[
"Wang",
"Weida",
""
],
[
"Huang",
"Hua",
""
],
[
"Lv",
"Chen",
""
],
[
"Wang",
"Hong",
""
]
] |
new_dataset
| 0.999843 |
2209.03990
|
Zlatan Ajanovic
|
Zlatan Ajanovi\'c, Emina Ali\v{c}kovi\'c, Aida Brankovi\'c, Sead
Delali\'c, Eldar Kurti\'c, Salem Maliki\'c, Adnan Mehoni\'c, Hamza Merzi\'c,
Kenan \v{S}ehi\'c, Bahrudin Trbali\'c
|
Vision for Bosnia and Herzegovina in Artificial Intelligence Age: Global
Trends, Potential Opportunities, Selected Use-cases and Realistic Goals
|
25 pages, 3 figures, Bosnian language. Presented at Naucno-strucna
konferencija o umjetnoj inteligenciji. Federalno ministarstvo obrazovanja i
nauke, Mostar, Bosna i Hercegovina, April 2022
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Artificial Intelligence (AI) is one of the most promising technologies of the
21st century, with an already noticeable impact on society and the economy. With
this work, we provide a short overview of global trends, applications in
industry and selected use-cases from our international experience and work in
industry and academia. The goal is to present global and regional positive
practices and provide an informed opinion on the realistic goals and
opportunities for positioning B&H on the global AI scene.
|
[
{
"version": "v1",
"created": "Thu, 8 Sep 2022 18:20:01 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Ajanović",
"Zlatan",
""
],
[
"Aličković",
"Emina",
""
],
[
"Branković",
"Aida",
""
],
[
"Delalić",
"Sead",
""
],
[
"Kurtić",
"Eldar",
""
],
[
"Malikić",
"Salem",
""
],
[
"Mehonić",
"Adnan",
""
],
[
"Merzić",
"Hamza",
""
],
[
"Šehić",
"Kenan",
""
],
[
"Trbalić",
"Bahrudin",
""
]
] |
new_dataset
| 0.969645 |
2209.04097
|
Shailesh Nirgudkar
|
Shailesh Nirgudkar, Michael DeFilippo, Michael Sacarny, Michael
Benjamin and Paul Robinette
|
MassMIND: Massachusetts Maritime INfrared Dataset
|
10 pages, 10 figures, submitted to IJRR for review
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent advances in deep learning technology have triggered radical progress
in the autonomy of ground vehicles. Marine coastal Autonomous Surface Vehicles
(ASVs) that are regularly used for surveillance, monitoring and other routine
tasks can benefit from this autonomy. Long haul deep sea transportation
activities are additional opportunities. These two use cases present very
different terrains -- the first being coastal waters -- with many obstacles,
structures and human presence, while the latter is mostly devoid of such
obstacles. Variations in environmental conditions are common to both terrains.
Robust labeled datasets mapping such terrains are crucial in improving the
situational awareness that can drive autonomy. However, there are only limited
such maritime datasets available and these primarily consist of optical images.
Although Long Wave Infrared (LWIR) is a strong complement to the optical
spectrum that helps in extreme light conditions, a labeled public dataset with
LWIR images does not currently exist. In this paper, we fill this gap by
presenting a labeled dataset of over 2,900 LWIR segmented images captured in
coastal maritime environment under diverse conditions. The images are labeled
using instance segmentation and classified into seven categories -- sky, water,
obstacle, living obstacle, bridge, self and background. We also evaluate this
dataset across three deep learning architectures (UNet, PSPNet, DeepLabv3) and
provide a detailed analysis of its efficacy. While the dataset focuses on the
coastal terrain, it can equally help deep sea use cases. Such terrain would have
less traffic, and a classifier trained on a cluttered environment would be able
to handle sparse scenes effectively. We share this dataset with the research
community with the hope that it spurs new scene understanding capabilities in
the maritime environment.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 02:54:26 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Nirgudkar",
"Shailesh",
""
],
[
"DeFilippo",
"Michael",
""
],
[
"Sacarny",
"Michael",
""
],
[
"Benjamin",
"Michael",
""
],
[
"Robinette",
"Paul",
""
]
] |
new_dataset
| 0.999751 |
2209.04156
|
Baohang Zhou
|
Baohang Zhou, Ying Zhang, Xuhui Sui, Kehui Song, Xiaojie Yuan
|
Multi-grained Label Refinement Network with Dependency Structures for
Joint Intent Detection and Slot Filling
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Slot filling and intent detection are two fundamental tasks in the field of
natural language understanding. Due to the strong correlation between these two
tasks, previous studies make efforts to model them with multi-task learning
or designing feature interaction modules to improve the performance of each
task. However, none of the existing approaches consider the relevance between
the structural information of sentences and the label semantics of two tasks.
The intent and semantic components of an utterance are dependent on the
syntactic elements of a sentence. In this paper, we investigate a multi-grained
label refinement network, which utilizes dependency structures and label
semantic embeddings. To enhance syntactic representations, we introduce the
dependency structures of sentences into our model via a graph attention layer.
To capture the semantic dependency between the syntactic
information and task labels, we combine the task specific features with
corresponding label embeddings via an attention mechanism. The experimental results
demonstrate that our model achieves competitive performance on two public
datasets.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 07:27:38 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Zhou",
"Baohang",
""
],
[
"Zhang",
"Ying",
""
],
[
"Sui",
"Xuhui",
""
],
[
"Song",
"Kehui",
""
],
[
"Yuan",
"Xiaojie",
""
]
] |
new_dataset
| 0.995248 |
2209.04203
|
Sarita Gautam
|
Sarita Gautam, Anuj Kumar
|
An Indian Roads Dataset for Supported and Suspended Traffic Lights
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous vehicles are growing rapidly in well-developed regions like
America, Europe, and China. Tech giants like Google, Tesla, Audi, BMW, and
Mercedes are building highly efficient self-driving vehicles. However, the
technology is still not mainstream in developing regions like India, Thailand,
Africa, etc. In this paper, we present a thorough comparison of the existing
datasets based on well-developed nations as well as Indian roads. We then
developed a new dataset "Indian Roads Dataset" (IRD) having more than 8000
annotations extracted from 3000+ images shot using a 64-megapixel camera. All
the annotations are manually labelled, adhering to strict annotation rules.
Real-time video sequences have been captured from two different
cities in India, namely New Delhi and Chandigarh, under daytime and night-light
conditions. Our dataset exceeds previous Indian traffic light datasets in size,
annotations, and variance. We demonstrate the improvements of our dataset by
providing an extensive comparison with existing Indian datasets. Various
dataset criteria like size, capture device, the number of cities, and
variations of traffic light orientations are considered. The dataset can be
downloaded from here https://sites.google.com/view/ird-dataset/home
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 09:37:50 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Gautam",
"Sarita",
""
],
[
"Kumar",
"Anuj",
""
]
] |
new_dataset
| 0.999877 |
2209.04284
|
Shuiwang Li
|
Zhewen Zhang, Fuliang Wu, Yuming Qiu, Jingdong Liang, Shuiwang Li
|
Tracking Small and Fast Moving Objects: A Benchmark
|
arXiv admin note: text overlap with arXiv:2011.10875 by other authors
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With more and more large-scale datasets available for training, visual
tracking has made great progress in recent years. However, current research in
the field mainly focuses on tracking generic objects. In this paper, we present
TSFMO, a benchmark for \textbf{T}racking \textbf{S}mall and \textbf{F}ast
\textbf{M}oving \textbf{O}bjects. This benchmark aims to encourage research in
developing novel and accurate methods for this particularly challenging task.
TSFMO consists of 250 sequences with about 50k frames in total. Each frame in
these sequences is carefully and manually annotated with a bounding box. To the
best of our knowledge, TSFMO is the first benchmark dedicated to tracking small
and fast moving objects, especially connected to sports. To understand how
existing methods perform and to provide comparison for future research on
TSFMO, we extensively evaluate 20 state-of-the-art trackers on the benchmark.
The evaluation results show that more effort is required to improve the tracking
of small and fast moving objects. Moreover, to encourage future research,
we propose a novel tracker, S-KeepTrack, which surpasses all 20 evaluated
approaches. By releasing TSFMO, we expect to facilitate future research and
applications of tracking small and fast moving objects. The TSFMO and
evaluation results as well as S-KeepTrack are available at
\url{https://github.com/CodeOfGithub/S-KeepTrack}.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 13:14:44 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Zhang",
"Zhewen",
""
],
[
"Wu",
"Fuliang",
""
],
[
"Qiu",
"Yuming",
""
],
[
"Liang",
"Jingdong",
""
],
[
"Li",
"Shuiwang",
""
]
] |
new_dataset
| 0.993796 |
2209.04409
|
Magdalena Wolska
|
Magdalena Wolska, Christopher Schr\"oder, Ole Borchardt, Benno Stein,
and Martin Potthast
|
Trigger Warnings: Bootstrapping a Violence Detector for FanFiction
|
5 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first dataset and evaluation results on a newly defined
computational task of trigger warning assignment. Labeled corpus data has been
compiled from narrative works hosted on Archive of Our Own (AO3), a well-known
fanfiction site. In this paper, we focus on the most frequently assigned
trigger type--violence--and define a document-level binary classification task
of whether or not to assign a violence trigger warning to a fanfiction,
exploiting warning labels provided by AO3 authors. SVM and BERT models trained
in four evaluation setups on the corpora we compiled yield $F_1$ results
ranging from 0.585 to 0.798, showing that violence trigger warning assignment is
a feasible, albeit non-trivial, task.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 17:27:03 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Wolska",
"Magdalena",
""
],
[
"Schröder",
"Christopher",
""
],
[
"Borchardt",
"Ole",
""
],
[
"Stein",
"Benno",
""
],
[
"Potthast",
"Martin",
""
]
] |
new_dataset
| 0.999674 |
2209.04432
|
Tong Zhang
|
Zheng Gu, Jiangpeng Li, Yong Peng, Yang Liu, and Tong Zhang
|
Elastic RAID: When RAID Meets SSDs with Built-in Transparent Compression
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper studies how RAID (redundant array of independent disks) could take
full advantage of modern SSDs (solid-state drives) with built-in transparent
compression. In current practice, RAID users are forced to choose a specific
RAID level (e.g., RAID 10 or RAID 5) with a fixed storage cost vs. speed
performance trade-off. The commercial market is witnessing the emergence of a new
family of SSDs that can internally perform hardware-based lossless compression
on each 4KB LBA (logical block address) block, transparent to host OS and user
applications. Beyond straightforwardly reducing the RAID storage cost, such
modern SSDs make it possible to relieve RAID users from being locked into a
fixed storage cost vs. speed performance trade-off. The key idea is simple:
RAID systems opportunistically leverage higher-than-expected runtime user data
compressibility to enable dynamic RAID level conversion to improve the speed
performance without compromising the effective storage capacity. This paper
presents design techniques to enable and optimize the practical implementation
of such elastic RAID systems. For the purpose of demonstration, we implemented
a Linux software-based elastic RAID prototype that supports dynamic conversion
between RAID 5 and RAID 10. Compared with a baseline software-based RAID 5,
under sufficient runtime data compressibility that enables the conversion from
RAID 5 to RAID 10 over 60% of the user data, the elastic RAID could improve the 4KB
random write IOPS (IO per second) by 42% and 4KB random read IOPS in degraded
mode by 46%, while maintaining the same effective storage capacity.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2022 13:12:05 GMT"
}
] | 2022-09-12T00:00:00 |
[
[
"Gu",
"Zheng",
""
],
[
"Li",
"Jiangpeng",
""
],
[
"Peng",
"Yong",
""
],
[
"Liu",
"Yang",
""
],
[
"Zhang",
"Tong",
""
]
] |
new_dataset
| 0.9949 |
2004.08324
|
Ignasi Sau
|
Ignasi Sau, U\'everton S. Souza
|
Hitting forbidden induced subgraphs on bounded treewidth graphs
|
26 pages, 3 figures
| null | null | null |
cs.DS cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a fixed graph $H$, the $H$-IS-Deletion problem asks, given a graph $G$,
for the minimum size of a set $S \subseteq V(G)$ such that $G\setminus S$ does
not contain $H$ as an induced subgraph. Motivated by previous work about
hitting (topological) minors and subgraphs on bounded treewidth graphs, we are
interested in determining, for a fixed graph $H$, the smallest function
$f_H(t)$ such that $H$-IS-Deletion can be solved in time $f_H(t) \cdot
n^{O(1)}$ assuming the Exponential Time Hypothesis (ETH), where $t$ and $n$
denote the treewidth and the number of vertices of the input graph,
respectively.
We show that $f_H(t) = 2^{O(t^{h-2})}$ for every graph $H$ on $h \geq 3$
vertices, and that $f_H(t) = 2^{O(t)}$ if $H$ is a clique or an independent
set. We present a number of lower bounds by generalizing a reduction of Cygan
et al. [MFCS 2014] for the subgraph version. In particular, we show that when
$H$ deviates slightly from a clique, the function $f_H(t)$ suffers a sharp
jump: if $H$ is obtained from a clique of size $h$ by removing one edge, then
$f_H(t) = 2^{\Theta(t^{h-2})}$. We also show that $f_H(t) = 2^{\Omega(t^{h})}$
when $H=K_{h,h}$, and this reduction answers an open question of Mi. Pilipczuk
[MFCS 2011] about the function $f_{C_4}(t)$ for the subgraph version.
Motivated by Cygan et al. [MFCS 2014], we also consider the colorful variant
of the problem, where each vertex of $G$ is colored with some color from $V(H)$
and we require to hit only induced copies of $H$ with matching colors. In this
case, we determine, under the ETH, the function $f_H(t)$ for every connected
graph $H$ on $h$ vertices: if $h\leq 2$ the problem can be solved in polynomial
time; if $h\geq 3$, $f_H(t) = 2^{\Theta(t)}$ if $H$ is a clique, and $f_H(t) =
2^{\Theta(t^{h-2})}$ otherwise.
|
[
{
"version": "v1",
"created": "Fri, 17 Apr 2020 16:12:38 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2022 08:20:10 GMT"
}
] | 2022-09-09T00:00:00 |
[
[
"Sau",
"Ignasi",
""
],
[
"Souza",
"Uéverton S.",
""
]
] |
new_dataset
| 0.995676 |