id (stringlengths 9–10) | submitter (stringlengths 2–52, nullable) | authors (stringlengths 4–6.51k) | title (stringlengths 4–246) | comments (stringlengths 1–523, nullable) | journal-ref (stringlengths 4–345, nullable) | doi (stringlengths 11–120, nullable) | report-no (stringlengths 2–243, nullable) | categories (stringlengths 5–98) | license (stringclasses, 9 values) | abstract (stringlengths 33–3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses, 1 value) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2202.13591
|
Shunsuke Inenaga
|
Tooru Akagi, Kouta Okabe, Takuya Mieno, Yuto Nakashima, Shunsuke
Inenaga
|
Minimal Absent Words on Run-Length Encoded Strings
|
Accepted for CPM 2022
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
A string $w$ is called a minimal absent word (MAW) for another string $T$ if
$w$ does not occur (as a substring) in $T$ and any proper substring of $w$
occurs in $T$. State-of-the-art data structures for reporting the set
$\mathsf{MAW}(T)$ of MAWs from a given string $T$ of length $n$ require $O(n)$
space, can be built in $O(n)$ time, and can report all MAWs in
$O(|\mathsf{MAW}(T)|)$ time upon a query. This paper initiates the problem of
computing MAWs from a compressed representation of a string. In particular, we
focus on the most basic compressed representation of a string, run-length
encoding (RLE), which represents each maximal run of the same character $a$ by
$a^p$, where $p$ is the length of the run. Let $m$ be the RLE-size of string
$T$. After categorizing the MAWs into five disjoint sets $\mathcal{M}_1$,
$\mathcal{M}_2$, $\mathcal{M}_3$, $\mathcal{M}_4$, $\mathcal{M}_5$ using RLE,
we present matching upper and lower bounds for the number of MAWs in
$\mathcal{M}_i$ for $i = 1,2,4,5$ in terms of RLE-size $m$, except for
$\mathcal{M}_3$ whose size is unbounded by $m$. We then present a compact
$O(m)$-space data structure that can report all MAWs in optimal
$O(|\mathsf{MAW}(T)|)$ time.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 07:49:16 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2022 23:34:10 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Akagi",
"Tooru",
""
],
[
"Okabe",
"Kouta",
""
],
[
"Mieno",
"Takuya",
""
],
[
"Nakashima",
"Yuto",
""
],
[
"Inenaga",
"Shunsuke",
""
]
] |
new_dataset
| 0.994931 |
2203.08215
|
Masum Hasan
|
Wasifur Rahman, Masum Hasan, Md Saiful Islam, Titilayo Olubajo, Jeet
Thaker, Abdelrahman Abdelkader, Phillip Yang, Tetsuo Ashizawa, Ehsan Hoque
|
Auto-Gait: Automatic Ataxia Risk Assessment with Computer Vision on Gait
Task Videos
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigated whether we can 1) detect participants with
ataxia-specific gait characteristics (risk-prediction), and 2) assess severity
of ataxia from gait (severity-assessment) using computer vision. We created a
dataset of 155 videos from 89 participants, 24 controls and 65 diagnosed with
(or pre-manifest for) spinocerebellar ataxias (SCAs), performing the gait task
of the Scale for the Assessment and Rating of Ataxia (SARA) from 11 medical
sites located in 8 different states across the United States. We develop a
computer vision pipeline to detect, track, and separate out the participants
from their surroundings and construct several features from their body pose
coordinates to capture gait characteristics like step width, step length,
swing, stability, speed, etc. Our risk-prediction model achieves 83.06%
accuracy and an 80.23% F1 score. Similarly, our severity-assessment model
achieves a mean absolute error (MAE) score of 0.6225 and a Pearson's
correlation coefficient score of 0.7268. Our models still performed
competitively when evaluated on data from sites not used during training.
Furthermore, through feature importance analysis, we found that our models
associate wider steps, decreased walking speed, and increased instability with
greater ataxia severity, which is consistent with previously established
clinical knowledge. Our models create possibilities for remote ataxia
assessment in non-clinical settings in the future, which could significantly
improve accessibility of ataxia care. Furthermore, our underlying dataset was
assembled from a geographically diverse cohort, highlighting its potential to
further increase equity. The code used in this study is open to the public, and
the anonymized body pose landmark dataset is also available upon request.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 19:28:10 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Apr 2022 12:06:25 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Rahman",
"Wasifur",
""
],
[
"Hasan",
"Masum",
""
],
[
"Islam",
"Md Saiful",
""
],
[
"Olubajo",
"Titilayo",
""
],
[
"Thaker",
"Jeet",
""
],
[
"Abdelkader",
"Abdelrahman",
""
],
[
"Yang",
"Phillip",
""
],
[
"Ashizawa",
"Tetsuo",
""
],
[
"Hoque",
"Ehsan",
""
]
] |
new_dataset
| 0.99957 |
2203.14463
|
ByungSoo Ko
|
Byungsoo Ko, Geonmo Gu
|
Large-scale Bilingual Language-Image Contrastive Learning
|
Accepted by ICLRW2022
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is a technical report to share our experience and findings
building a Korean and English bilingual multimodal model. While many of the
multimodal datasets focus on English and multilingual multimodal research uses
machine-translated texts, employing such machine-translated texts is limited in
describing unique expressions, cultural information, and proper nouns in
languages other than English. In this work, we collect 1.1 billion image-text
pairs (708 million Korean and 476 million English) and train a bilingual
multimodal model named KELIP. We introduce simple yet effective training
schemes, including MAE pre-training and multi-crop augmentation. Extensive
experiments demonstrate that a model trained with such training schemes shows
competitive performance in both languages. Moreover, we discuss
multimodal-related research questions: 1) strong augmentation-based methods can
distract the model from learning proper multimodal relations; 2) training
multimodal model without cross-lingual relation can learn the relation via
visual semantics; 3) our bilingual KELIP can capture cultural differences of
visual semantics for the same meaning of words; 4) a large-scale multimodal
model can be used for multimodal feature analogy. We hope that this work will
provide helpful experience and findings for future research. We provide an
open-source pre-trained KELIP.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 03:02:03 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Apr 2022 02:37:55 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Ko",
"Byungsoo",
""
],
[
"Gu",
"Geonmo",
""
]
] |
new_dataset
| 0.999462 |
2204.05255
|
Yi Zeng
|
Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu and
Ruoxi Jia
|
Narcissus: A Practical Clean-Label Backdoor Attack with Limited
Information
|
13 pages of the main text
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Backdoor attacks insert malicious data into a training set so that, at
inference time, the trained model misclassifies inputs that have been patched with a backdoor
trigger as the attacker-specified target label. For backdoor attacks to bypass human
inspection, it is essential that the injected data appear to be correctly
labeled. Attacks with such a property are often referred to as "clean-label
attacks." Existing clean-label backdoor attacks require knowledge of the entire
training set to be effective. Obtaining such knowledge is difficult or
impossible because training data are often gathered from multiple sources
(e.g., face images from different users). It remains a question whether
backdoor attacks still present a real threat.
This paper provides an affirmative answer to this question by designing an
algorithm to mount clean-label backdoor attacks based only on the knowledge of
representative examples from the target class. With poisoning equal to or less
than 0.5% of the target-class data and 0.05% of the training set, we can train
a model to classify test examples from arbitrary classes into the target class
when the examples are patched with a backdoor trigger. Our attack works well
across datasets and models, even when the trigger is presented in the physical
world.
We explore the space of defenses and find that, surprisingly, our attack can
evade the latest state-of-the-art defenses in their vanilla form, or, after a
simple twist, can be adapted to evade downstream defenses. We study the cause of
the intriguing effectiveness and find that because the trigger synthesized by
our attack contains features as persistent as the original semantic features of
the target class, any attempt to remove such triggers would inevitably hurt the
model accuracy first.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 16:58:04 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Apr 2022 14:36:57 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Zeng",
"Yi",
""
],
[
"Pan",
"Minzhou",
""
],
[
"Just",
"Hoang Anh",
""
],
[
"Lyu",
"Lingjuan",
""
],
[
"Qiu",
"Meikang",
""
],
[
"Jia",
"Ruoxi",
""
]
] |
new_dataset
| 0.985157 |
2204.07199
|
Jie Yang
|
Zi Wang and Jie Yang
|
Ear Wearable (Earable) User Authentication via Acoustic Toothprint
| null |
Proceedings of the 2021 ACM SIGSAC Conference on Computer and
Communications Security (ACM CCS), 2021
|
10.1145/3460120.3485340
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Earables (ear wearables) are rapidly emerging as a new platform encompassing a
diverse range of personal applications. The traditional authentication methods
hence become less applicable and inconvenient for earables due to their limited
input interface. Nevertheless, earables often feature rich around-the-head
sensing capability that can be leveraged to capture new types of biometrics. In
this work, we propose ToothSonic, which leverages the toothprint-induced sonic
effect produced by users performing teeth gestures for earable authentication.
In particular, we design representative teeth gestures that can produce
effective sonic waves carrying the information of the toothprint. To reliably
capture the acoustic toothprint, it leverages the occlusion effect of the ear
canal and the inward-facing microphone of the earables. It then extracts
multi-level acoustic features to reflect the intrinsic toothprint information
for authentication. The key advantages of ToothSonic are that it is suitable
for earables and is resistant to various spoofing attacks as the acoustic
toothprint is captured via the user's private teeth-ear channel that modulates
and encrypts the sonic waves. Our experiment studies with 25 participants show
that ToothSonic achieves up to 95% accuracy with only one of the users' tooth
gestures.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 19:22:48 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Wang",
"Zi",
""
],
[
"Yang",
"Jie",
""
]
] |
new_dataset
| 0.99908 |
2204.07243
|
Rabab Abdelfattah
|
Rabab Abdelfattah, Xiaofeng Wang, Song Wang
|
PLGAN: Generative Adversarial Networks for Power-Line Segmentation in
Aerial Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate segmentation of power lines in various aerial images is very
important for UAV flight safety. The complex background and very thin
structures of power lines, however, make it an inherently difficult task in
computer vision. This paper presents PLGAN, a simple yet effective method based
on generative adversarial networks, to segment power lines from aerial images
with different backgrounds. Instead of directly using the adversarial networks
to generate the segmentation, we take their certain decoding features and embed
them into another semantic segmentation network by considering more context,
geometry, and appearance information of power lines. We further exploit the
appropriate form of the generated images for high-quality feature embedding and
define a new loss function in the Hough-transform parameter space to enhance
the segmentation of very thin power lines. Extensive experiments and
comprehensive analysis demonstrate that our proposed PLGAN outperforms the
prior state-of-the-art methods for semantic segmentation and line detection.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 21:43:31 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Abdelfattah",
"Rabab",
""
],
[
"Wang",
"Xiaofeng",
""
],
[
"Wang",
"Song",
""
]
] |
new_dataset
| 0.986709 |
2204.07328
|
Yifei Wang
|
Tong Yang, Yifei Wang, Long Sha, Jan Engelbrecht, Pengyu Hong
|
Knowledgebra: An Algebraic Learning Framework for Knowledge Graph
|
12 pages
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge graph (KG) representation learning aims to encode entities and
relations into dense continuous vector spaces such that knowledge contained in
a dataset could be consistently represented. Dense embeddings trained from KG
datasets benefit a variety of downstream tasks such as KG completion and link
prediction. However, existing KG embedding methods fall short of providing a
systematic solution for the global consistency of knowledge representation. We
developed a mathematical language for KGs based on an observation of their
inherent algebraic structure, which we termed Knowledgebra. By analyzing
five distinct algebraic properties, we proved that the semigroup is the most
reasonable algebraic structure for the relation embedding of a general
knowledge graph. We implemented an instantiation model, SemE, using simple
matrix semigroups, which exhibits state-of-the-art performance on standard
datasets. Moreover, we proposed a regularization-based method to integrate
chain-like logic rules derived from human knowledge into embedding training,
which further demonstrates the power of the developed language. As far as we
know, by applying abstract algebra in statistical learning, this work develops
the first formal language for general knowledge graphs, and also sheds light on
the problem of neural-symbolic integration from an algebraic perspective.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2022 04:53:47 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Yang",
"Tong",
""
],
[
"Wang",
"Yifei",
""
],
[
"Sha",
"Long",
""
],
[
"Engelbrecht",
"Jan",
""
],
[
"Hong",
"Pengyu",
""
]
] |
new_dataset
| 0.9988 |
2204.07335
|
Shaofei Huang
|
Jinsheng Wang, Yinchao Ma, Shaofei Huang, Tianrui Hui, Fei Wang, Chen
Qian, Tianzhu Zhang
|
A Keypoint-based Global Association Network for Lane Detection
|
Accepted by CVPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lane detection is a challenging task that requires predicting complex
topology shapes of lane lines and distinguishing different types of lanes
simultaneously. Earlier works follow a top-down roadmap to regress predefined
anchors into various shapes of lane lines, which lacks enough flexibility to
fit complex shapes of lanes due to the fixed anchor shapes. Lately, some works
propose to formulate lane detection as a keypoint estimation problem to
describe the shapes of lane lines more flexibly and gradually group adjacent
keypoints belonging to the same lane line in a point-by-point manner, which is
inefficient and time-consuming during postprocessing. In this paper, we propose
a Global Association Network (GANet) to formulate the lane detection problem
from a new perspective, where each keypoint is directly regressed to the
starting point of the lane line instead of point-by-point extension.
Concretely, the association of keypoints to their belonged lane line is
conducted by predicting their offsets to the corresponding starting points of
lanes globally without dependence on each other, which could be done in
parallel to greatly improve efficiency. In addition, we further propose a
Lane-aware Feature Aggregator (LFA), which adaptively captures the local
correlations between adjacent keypoints to supplement local information to the
global association. Extensive experiments on two popular lane detection
benchmarks show that our method outperforms previous methods, achieving F1 scores of
79.63% on the CULane and 97.71% on the Tusimple datasets at high FPS. The code will be
released at https://github.com/Wolfwjs/GANet.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2022 05:24:04 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Wang",
"Jinsheng",
""
],
[
"Ma",
"Yinchao",
""
],
[
"Huang",
"Shaofei",
""
],
[
"Hui",
"Tianrui",
""
],
[
"Wang",
"Fei",
""
],
[
"Qian",
"Chen",
""
],
[
"Zhang",
"Tianzhu",
""
]
] |
new_dataset
| 0.988316 |
2204.07408
|
Linyi Yang
|
Linyi Yang, Zhen Wang, Yuxiang Wu, Jie Yang, Yue Zhang
|
Towards Fine-grained Causal Reasoning and QA
| null | null | null | null |
cs.CL cs.AI cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Understanding causality is key to the success of NLP applications, especially
in high-stakes domains. Causality comes in various perspectives such as enable
and prevent that, despite their importance, have been largely ignored in the
literature. This paper introduces a novel fine-grained causal reasoning dataset
and presents a series of novel predictive tasks in NLP, such as causality
detection, event causality extraction, and Causal QA. Our dataset contains
human annotations of 25K cause-effect event pairs and 24K question-answering
pairs within multi-sentence samples, where each can have multiple causal
relationships. Through extensive experiments and analysis, we show that the
complex relations in our dataset bring unique challenges to state-of-the-art
methods across all three tasks and highlight potential research opportunities,
especially in developing "causal-thinking" methods.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2022 10:12:46 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Yang",
"Linyi",
""
],
[
"Wang",
"Zhen",
""
],
[
"Wu",
"Yuxiang",
""
],
[
"Yang",
"Jie",
""
],
[
"Zhang",
"Yue",
""
]
] |
new_dataset
| 0.99469 |
2204.07434
|
Meiqi Chen
|
Meiqi Chen, Yixin Cao, Kunquan Deng, Mukai Li, Kun Wang, Jing Shao and
Yan Zhang
|
ERGO: Event Relational Graph Transformer for Document-level Event
Causality Identification
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Document-level Event Causality Identification (DECI) aims to identify causal
relations between event pairs in a document. It poses a great challenge of
across-sentence reasoning without clear causal indicators. In this paper, we
propose a novel Event Relational Graph TransfOrmer (ERGO) framework for DECI,
which improves existing state-of-the-art (SOTA) methods upon two aspects.
First, we formulate DECI as a node classification problem by constructing an
event relational graph, without the need for prior knowledge or tools. Second,
ERGO seamlessly integrates event-pair relation classification and global
inference, which leverages a Relational Graph Transformer (RGT) to capture the
potential causal chain. Besides, we introduce edge-building strategies and
adaptive focal loss to deal with the massive false positives caused by common
spurious correlation. Extensive experiments on two benchmark datasets show that
ERGO significantly outperforms previous SOTA methods (13.1% F1 gains on
average). We have conducted extensive quantitative analysis and case studies to
provide insights for future research directions (Section 4.8).
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2022 12:12:16 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Chen",
"Meiqi",
""
],
[
"Cao",
"Yixin",
""
],
[
"Deng",
"Kunquan",
""
],
[
"Li",
"Mukai",
""
],
[
"Wang",
"Kun",
""
],
[
"Shao",
"Jing",
""
],
[
"Zhang",
"Yan",
""
]
] |
new_dataset
| 0.998005 |
2204.07435
|
Jincheng Dai
|
Bolin Wu, Kai Niu, Jincheng Dai
|
Performance and Construction of Polar Codes: The Perspective of Bit
Error Probability
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing works of polar codes focus on the analysis of block error
probability. However, in many scenarios, bit error probability is also
important for evaluating the performance of channel codes. In this paper, we
establish a new framework to analyze the bit error probability of polar codes.
Specifically, by revisiting the error event of bit-channel, we first introduce
the conditional bit error probability as a metric to evaluate the reliability
of bit-channel for both systematic and non-systematic polar codes. Guided by
the concept of polar subcode, we then derive an upper bound on the conditional
bit error probability of each bit-channel, and accordingly, an upper bound on
the bit error probability of polar codes. Based on these, two types of
construction metrics aiming at minimizing the bit error probability of polar
codes are proposed, which are of linear computational complexity and explicit
forms. Simulation results show that the polar codes constructed by the proposed
methods can outperform those constructed by the conventional methods.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2022 12:16:30 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Wu",
"Bolin",
""
],
[
"Niu",
"Kai",
""
],
[
"Dai",
"Jincheng",
""
]
] |
new_dataset
| 0.999614 |
2204.07436
|
Yanzhu Guo
|
Hadi Abdine, Yanzhu Guo, Virgile Rennard, Michalis Vazirgiannis
|
Political Communities on Twitter: Case Study of the 2022 French
Presidential Election
| null | null | null | null |
cs.SI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
With the significant increase in users on social media platforms, a new means
of political campaigning has appeared. Twitter and Facebook are now notable
campaigning tools during elections. Indeed, the candidates and their parties
now take to the internet to interact and spread their ideas. In this paper, we
aim to identify political communities formed on Twitter during the 2022 French
presidential election and analyze each respective community. We create a
large-scale Twitter dataset containing 1.2 million users and 62.6 million
tweets that mention keywords relevant to the election. We perform community
detection on a retweet graph of users and propose an in-depth analysis of the
stance of each community. Finally, we attempt to detect offensive tweets and
automatic bots, comparing across communities in order to gain insight into each
candidate's supporter demographics and online campaign strategy.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2022 12:18:16 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Abdine",
"Hadi",
""
],
[
"Guo",
"Yanzhu",
""
],
[
"Rennard",
"Virgile",
""
],
[
"Vazirgiannis",
"Michalis",
""
]
] |
new_dataset
| 0.999445 |
2204.07459
|
Gan Weichao
|
Weichao Gan, Yuanping Lin, Guangbo Yu, Guimin Chen and Qian Ye
|
Qtrade AI at SemEval-2022 Task 11: An Unified Framework for Multilingual
NER Task
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes our system, which placed third in the Multilingual Track
(subtask 11), fourth in the Code-Mixed Track (subtask 12), and seventh in the
Chinese Track (subtask 9) in the SemEval 2022 Task 11: MultiCoNER Multilingual
Complex Named Entity Recognition. Our system's key contributions are as
follows: 1) for multilingual NER tasks, we offer a unified framework with
which one can easily execute single-language or multilingual NER tasks; 2) for
the low-resource code-mixed NER task, one can easily enhance one's dataset
by implementing several simple data augmentation methods; and 3) for
Chinese tasks, we propose a model that can capture Chinese lexical semantic,
lexical boundary, and lexical graph structural information. Finally, our system
achieves macro-f1 scores of 77.66, 84.35, and 74.00 on subtasks 11, 12, and 9,
respectively, during the testing phase.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 07:51:36 GMT"
}
] | 2022-04-18T00:00:00 |
[
[
"Gan",
"Weichao",
""
],
[
"Lin",
"Yuanping",
""
],
[
"Yu",
"Guangbo",
""
],
[
"Chen",
"Guimin",
""
],
[
"Ye",
"Qian",
""
]
] |
new_dataset
| 0.991407 |
1901.09527
|
Erel Segal-Halevi
|
Elad Aigner-Horev and Erel Segal-Halevi
|
Envy-free Matchings in Bipartite Graphs and their Applications to Fair
Division
|
Appeared in Information Sciences, 587:164--187. But during the
production, the main theorem text was deleted. The arXiv version is the
correct one
|
Information Sciences, 2022, 587:164--187. Note: during the
production, the main theorem text was deleted. The arXiv version is the
correct one
|
10.1016/j.ins.2021.11.059
| null |
cs.DS cs.GT math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A matching in a bipartite graph with parts X and Y is called envy-free if no
unmatched vertex in X is adjacent to a matched vertex in Y. Every perfect
matching is envy-free, but envy-free matchings exist even when perfect
matchings do not. We prove that every bipartite graph has a unique partition
such that all envy-free matchings are contained in one of the partition sets.
Using this structural theorem, we provide a polynomial-time algorithm for
finding an envy-free matching of maximum cardinality. For edge-weighted
bipartite graphs, we provide a polynomial-time algorithm for finding a
maximum-cardinality envy-free matching of minimum total weight. We show how
envy-free matchings can be used in various fair division problems with either
continuous resources ("cakes") or discrete ones. In particular, we propose a
symmetric algorithm for proportional cake-cutting, an algorithm for
1-out-of-(2n-2) maximin-share allocation of discrete goods, and an algorithm
for 1-out-of-floor(2n/3) maximin-share allocation of discrete bads among n
agents.
|
[
{
"version": "v1",
"created": "Mon, 28 Jan 2019 06:03:25 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2019 10:56:16 GMT"
},
{
"version": "v3",
"created": "Sun, 15 Sep 2019 19:31:33 GMT"
},
{
"version": "v4",
"created": "Tue, 22 Dec 2020 09:15:57 GMT"
},
{
"version": "v5",
"created": "Mon, 22 Nov 2021 06:45:59 GMT"
},
{
"version": "v6",
"created": "Thu, 14 Apr 2022 10:23:16 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Aigner-Horev",
"Elad",
""
],
[
"Segal-Halevi",
"Erel",
""
]
] |
new_dataset
| 0.955952 |
2004.08059
|
Ji Guan
|
Ji Guan and Nengkun Yu
|
A Probabilistic Logic for Verifying Continuous-time Markov Chains
| null | null |
10.1007/978-3-030-99527-0_1
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
A continuous-time Markov chain (CTMC) execution is a continuous class of
probability distributions over states. This paper proposes a probabilistic
linear-time temporal logic, namely continuous-time linear logic (CLL), to
reason about the probability distribution execution of CTMCs. We define the
syntax of CLL on the space of probability distributions. The syntax of CLL
includes multiphase timed until formulas, and the semantics of CLL allows time
reset to study relatively temporal properties. We derive a corresponding
model-checking algorithm for CLL formulas. The correctness of the
model-checking algorithm depends on Schanuel's conjecture, a central open
problem in transcendental number theory. Furthermore, we provide a running
example of CTMCs to illustrate our method.
|
[
{
"version": "v1",
"created": "Fri, 17 Apr 2020 04:20:40 GMT"
},
{
"version": "v2",
"created": "Thu, 13 May 2021 02:23:56 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Apr 2022 03:56:32 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Guan",
"Ji",
""
],
[
"Yu",
"Nengkun",
""
]
] |
new_dataset
| 0.966979 |
2007.05254
|
Wu Qinghua
|
Yongliang Lu, Jin-Kao Hao, Qinghua Wu
|
Solving the Clustered Traveling Salesman Problem via TSP methods
|
26 pages, 6 figures
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Clustered Traveling Salesman Problem (CTSP) is a variant of the popular
Traveling Salesman Problem (TSP) arising from a number of real-life
applications. In this work, we explore a transformation approach that solves
the CTSP by converting it to the well-studied TSP. For this purpose, we first
investigate a technique to convert a CTSP instance to a TSP and then apply
powerful TSP solvers (including exact and heuristic solvers) to solve the
resulting TSP instance. We want to answer the following questions: How do
state-of-the-art TSP solvers perform on clustered instances converted from the
CTSP? Do state-of-the-art TSP solvers compete well with the best performing
methods specifically designed for the CTSP? For this purpose, we present
intensive computational experiments on various benchmark instances to draw
conclusions.
|
[
{
"version": "v1",
"created": "Fri, 10 Jul 2020 08:56:06 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Dec 2020 08:35:00 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Apr 2022 11:34:37 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Lu",
"Yongliang",
""
],
[
"Hao",
"Jin-Kao",
""
],
[
"Wu",
"Qinghua",
""
]
] |
new_dataset
| 0.997813 |
2008.09777
|
Julien Siems
|
Arber Zela, Julien Siems, Lucas Zimmer, Jovita Lukasik, Margret
Keuper, Frank Hutter
|
Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of
Tabular NAS Benchmarks
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The most significant barrier to the advancement of Neural Architecture Search
(NAS) is its demand for large computational resources, which hinders
scientifically sound empirical evaluations of NAS methods. Tabular NAS
benchmarks have alleviated this problem substantially, making it possible to
properly evaluate NAS methods in seconds on commodity machines. However, an
unintended consequence of tabular NAS benchmarks has been a focus on extremely
small architectural search spaces since their construction relies on exhaustive
evaluations of the space. This leads to unrealistic results that do not
transfer to larger spaces. To overcome this fundamental limitation, we propose
a methodology to create cheap NAS surrogate benchmarks for arbitrary search
spaces. We exemplify this approach by creating surrogate NAS benchmarks on the
existing tabular NAS-Bench-101 and on two widely used NAS search spaces with up
to $10^{21}$ architectures ($10^{13}$ times larger than any previous tabular
NAS benchmark). We show that surrogate NAS benchmarks can model the true
performance of architectures better than tabular benchmarks (at a small
fraction of the cost), that they lead to faithful estimates of how well
different NAS methods work on the original non-surrogate benchmark, and that
they can generate new scientific insight. We open-source all our code and
believe that surrogate NAS benchmarks are an indispensable tool to extend
scientifically sound work on NAS to large and exciting search spaces.
|
[
{
"version": "v1",
"created": "Sat, 22 Aug 2020 08:15:52 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Oct 2020 09:32:18 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Nov 2020 20:47:10 GMT"
},
{
"version": "v4",
"created": "Thu, 14 Apr 2022 15:23:32 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Zela",
"Arber",
""
],
[
"Siems",
"Julien",
""
],
[
"Zimmer",
"Lucas",
""
],
[
"Lukasik",
"Jovita",
""
],
[
"Keuper",
"Margret",
""
],
[
"Hutter",
"Frank",
""
]
] |
new_dataset
| 0.959785 |
2105.04301
|
Zhewei Chen
|
Zhewei Chen, Wenwen Yu, Linyue Zhou
|
ADASYN-Random Forest Based Intrusion Detection Model
|
Accepted by SPML 2021
| null |
10.1145/3483207.3483232
| null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intrusion detection has been a key topic in the field of cyber security, and
common network threats nowadays are varied and constantly evolving. Since the
serious imbalance of intrusion detection datasets results in low classification
performance on attack behaviors with small sample sizes and makes it difficult
to detect network attacks accurately and efficiently, this paper proposes using
the Adaptive Synthetic Sampling (ADASYN) method to balance datasets. In
addition, the Random Forest algorithm was used to train intrusion detection
classifiers. Through the comparative
experiment of intrusion detection on the CICIDS 2017 dataset, it is found that
ADASYN with Random Forest performs better. Based on the experimental results,
the improvement of precision, recall, F1 scores and AUC values after ADASYN is
then analyzed. Experiments show that the proposed method can be applied to
intrusion detection with large data, and can effectively improve the
classification accuracy of network attack behaviors. Compared with traditional
machine learning models, it has better performance, generalization ability and
robustness.
|
[
{
"version": "v1",
"created": "Mon, 10 May 2021 12:22:36 GMT"
},
{
"version": "v2",
"created": "Wed, 19 May 2021 14:26:18 GMT"
},
{
"version": "v3",
"created": "Thu, 20 May 2021 02:04:09 GMT"
},
{
"version": "v4",
"created": "Tue, 12 Apr 2022 14:03:01 GMT"
},
{
"version": "v5",
"created": "Wed, 13 Apr 2022 08:29:28 GMT"
},
{
"version": "v6",
"created": "Thu, 14 Apr 2022 15:28:45 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Chen",
"Zhewei",
""
],
[
"Yu",
"Wenwen",
""
],
[
"Zhou",
"Linyue",
""
]
] |
new_dataset
| 0.994354 |
2105.11941
|
Jingwen Fu
|
Jingwen Fu, Xiaoyi Zhang, Yuwang Wang, Wenjun Zeng, Sam Yang and
Grayson Hilliard
|
Understanding Mobile GUI: from Pixel-Words to Screen-Sentences
| null | null | null | null |
cs.CV cs.HC cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ubiquity of mobile phones makes mobile GUI understanding an important
task. Most previous works in this domain require human-created metadata of
screens (e.g. View Hierarchy) during inference, which unfortunately is often
not available or reliable enough for GUI understanding. Inspired by the
impressive success of Transformers in NLP tasks, and targeting purely
vision-based GUI understanding, we extend the concepts of Words/Sentence to
Pixel-Words/Screen-Sentence, and propose a mobile GUI understanding
architecture: Pixel-Words to Screen-Sentence (PW2SS). In analogy to the
individual Words, we define the Pixel-Words as atomic visual components (text
and graphic components), which are visually consistent and semantically clear
across screenshots of a large variety of design styles. The Pixel-Words
extracted from a screenshot are aggregated into Screen-Sentence with a Screen
Transformer proposed to model their relations. Since the Pixel-Words are
defined as atomic visual components, the ambiguity between their visual
appearance and semantics is dramatically reduced. We are able to make use of
metadata available in training data to auto-generate high-quality annotations
for Pixel-Words. A dataset, RICO-PW, of screenshots with Pixel-Words
annotations is built based on the public RICO dataset, which will be released
to help to address the lack of high-quality training data in this area. We
train a detector to extract Pixel-Words from screenshots on this dataset and
achieve metadata-free GUI understanding during inference. We conduct
experiments and show that Pixel-Words can be well extracted on RICO-PW and well
generalized to a new dataset, P2S-UI, collected by ourselves. The effectiveness
of PW2SS is further verified in the GUI understanding tasks including relation
prediction, clickability prediction, screen retrieval, and app type
classification.
|
[
{
"version": "v1",
"created": "Tue, 25 May 2021 13:45:54 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2022 11:00:58 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Fu",
"Jingwen",
""
],
[
"Zhang",
"Xiaoyi",
""
],
[
"Wang",
"Yuwang",
""
],
[
"Zeng",
"Wenjun",
""
],
[
"Yang",
"Sam",
""
],
[
"Hilliard",
"Grayson",
""
]
] |
new_dataset
| 0.998406 |
2201.00439
|
Didier Ndayikengurukiye
|
Didier Ndayikengurukiye and Max Mignotte
|
Salient Object Detection by LTP Texture Characterization on Opposing
Color Pairs under SLICO Superpixel Constraint
| null |
J. Imaging 2022, 8(4), 110
|
10.3390/jimaging8040110
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The effortless detection of salient objects by humans has been the subject of
research in several fields, including computer vision as it has many
applications. However, salient object detection remains a challenge for many
computer models dealing with color and textured images. Herein, we propose a
novel and efficient strategy, through a simple model, almost without internal
parameters, which generates a robust saliency map for a natural image. This
strategy consists of integrating color information into local textural patterns
to characterize a color micro-texture. Most models in the literature that use
the color and texture features treat them separately. In our case, it is the
simple, yet powerful LTP (Local Ternary Patterns) texture descriptor applied to
opposing color pairs of a color space that allows us to achieve this end. Each
color micro-texture is represented by a vector whose components are derived from a
superpixel obtained by the SLICO (Simple Linear Iterative Clustering with zero
parameter) algorithm, which is simple, fast, and exhibits state-of-the-art
boundary adherence. The degree of dissimilarity between each pair of color
micro-textures is computed by the FastMap method, a fast version of MDS
(Multi-dimensional Scaling), that considers the color micro-textures'
non-linearity while preserving their distances. These degrees of dissimilarity
give us an intermediate saliency map for each RGB, HSL, LUV and CMY color
spaces. The final saliency map is their combination to take advantage of the
strength of each of them. The MAE (Mean Absolute Error) and F$_{\beta}$
measures of our saliency maps, on the complex ECSSD dataset show that our model
is both simple and efficient, outperforming several state-of-the-art models.
|
[
{
"version": "v1",
"created": "Mon, 3 Jan 2022 00:03:50 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Ndayikengurukiye",
"Didier",
""
],
[
"Mignotte",
"Max",
""
]
] |
new_dataset
| 0.999399 |
2202.04989
|
Axel Marmoret
|
Haoran Wu, Axel Marmoret, J\'er\'emy E. Cohen
|
Semi-Supervised Convolutive NMF for Automatic Piano Transcription
|
Published at the 2022 Sound and Music Computing (SMC) conference, 7
pages, 5 figures, 3 tables, code available at
https://github.com/cohenjer/TransSSCNMF
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic Music Transcription, which consists in transforming an audio
recording of a musical performance into symbolic format, remains a difficult
Music Information Retrieval task. In this work, which focuses on piano
transcription, we propose a semi-supervised approach using low-rank matrix
factorization techniques, in particular Convolutive Nonnegative Matrix
Factorization. In the semi-supervised setting, only a single recording of each
individual notes is required. We show on the MAPS dataset that the proposed
semi-supervised CNMF method performs better than state-of-the-art low-rank
factorization techniques and a little worse than supervised deep learning
state-of-the-art methods, while however suffering from generalization issues.
|
[
{
"version": "v1",
"created": "Thu, 10 Feb 2022 12:38:53 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2022 10:29:32 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Wu",
"Haoran",
""
],
[
"Marmoret",
"Axel",
""
],
[
"Cohen",
"Jérémy E.",
""
]
] |
new_dataset
| 0.996446 |
2203.06486
|
Xiang Lin
|
Shankar Kantharaj, Rixie Tiffany Ko Leong, Xiang Lin, Ahmed Masry,
Megh Thakkar, Enamul Hoque, Shafiq Joty
|
Chart-to-Text: A Large-Scale Benchmark for Chart Summarization
|
Accepted by ACL 2022 Main Conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Charts are commonly used for exploring data and communicating insights.
Generating natural language summaries from charts can be very helpful for
people in inferring key insights that would otherwise require a lot of
cognitive and perceptual efforts. We present Chart-to-text, a large-scale
benchmark with two datasets and a total of 44,096 charts covering a wide range
of topics and chart types. We explain the dataset construction process and
analyze the datasets. We also introduce a number of state-of-the-art neural
models as baselines that utilize image captioning and data-to-text generation
techniques to tackle two problem variations: one assumes the underlying data
table of the chart is available while the other needs to extract data from
chart images. Our analysis with automatic and human evaluation shows that while
our best models usually generate fluent summaries and yield reasonable BLEU
scores, they also suffer from hallucinations and factual errors as well as
difficulties in correctly explaining complex patterns and trends in charts.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 17:01:38 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Mar 2022 03:45:59 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Apr 2022 15:41:15 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Kantharaj",
"Shankar",
""
],
[
"Leong",
"Rixie Tiffany Ko",
""
],
[
"Lin",
"Xiang",
""
],
[
"Masry",
"Ahmed",
""
],
[
"Thakkar",
"Megh",
""
],
[
"Hoque",
"Enamul",
""
],
[
"Joty",
"Shafiq",
""
]
] |
new_dataset
| 0.99971 |
2203.14085
|
Sumit Laha
|
Sumit Laha, Ankit Sharma, Shengnan Hu and Hassan Foroosh
|
Near-Infrared Depth-Independent Image Dehazing using Haar Wavelets
|
Accepted in 25th International Conference on Pattern Recognition
(ICPR 2020)
|
2020 25th International Conference on Pattern Recognition (ICPR)
(2021) 5384-5390
|
10.1109/ICPR48806.2021.9412589
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a fusion algorithm for haze removal that combines color
information from an RGB image and edge information extracted from its
corresponding NIR image using Haar wavelets. The proposed algorithm is based on
the key observation that NIR edge features are more prominent in the hazy
regions of the image than the RGB edge features in those same regions. To
combine the color and edge information, we introduce a haze-weight map which
proportionately distributes the color and edge information during the fusion
process. Because NIR images are, intrinsically, nearly haze-free, our work
makes no assumptions like existing works that rely on a scattering model,
essentially yielding a depth-independent method. This helps in minimizing
artifacts and gives a more realistic sense to the restored haze-free image.
Extensive experiments show that the proposed algorithm is both qualitatively
and quantitatively better on several key metrics when compared to existing
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 14:07:31 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Laha",
"Sumit",
""
],
[
"Sharma",
"Ankit",
""
],
[
"Hu",
"Shengnan",
""
],
[
"Foroosh",
"Hassan",
""
]
] |
new_dataset
| 0.986923 |
2204.04221
|
Rishabh Khandelwal
|
Rishabh Khandelwal, Asmit Nayak, Hamza Harkous and Kassem Fawaz
|
CookieEnforcer: Automated Cookie Notice Analysis and Enforcement
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Online websites use cookie notices to elicit consent from the users, as
required by recent privacy regulations like the GDPR and the CCPA. Prior work
has shown that these notices use dark patterns to manipulate users into making
website-friendly choices which put users' privacy at risk. In this work, we
develop CookieEnforcer, a new system for automatically discovering cookie
notices and deciding on the options that result in disabling all non-essential
cookies. In order to achieve this, we first build an automatic cookie notice
detector that utilizes the rendering pattern of the HTML elements to identify
the cookie notices. Next, CookieEnforcer analyzes the cookie notices and
predicts the set of actions required to disable all unnecessary cookies. This
is done by modeling the problem as a sequence-to-sequence task, where the input
is a machine-readable cookie notice and the output is the set of clicks to
make. We demonstrate the efficacy of CookieEnforcer via an end-to-end accuracy
evaluation, showing that it can generate the required steps in 91% of the
cases. Via a user study, we show that CookieEnforcer can significantly reduce
the user effort. Finally, we use our system to perform several measurements on
the top 5k websites from the Tranco list (as accessed from the US and the UK),
drawing comparisons and observations at scale.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 17:39:33 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2022 16:30:05 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Khandelwal",
"Rishabh",
""
],
[
"Nayak",
"Asmit",
""
],
[
"Harkous",
"Hamza",
""
],
[
"Fawaz",
"Kassem",
""
]
] |
new_dataset
| 0.965208 |
2204.06183
|
Alex Lee
|
Alex Junho Lee, Younggun Cho, Young-sik Shin, Ayoung Kim, Hyun Myung
|
ViViD++: Vision for Visibility Dataset
|
8 pages, 8 figures, Accepted to IEEE Robotics and Automation Letters
(RA-L)
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a dataset capturing diverse visual data formats
that target varying luminance conditions. While RGB cameras provide nourishing
and intuitive information, changes in lighting conditions potentially result in
catastrophic failure for robotic applications based on vision sensors.
Approaches overcoming illumination problems have included developing more
robust algorithms or other types of visual sensors, such as thermal and event
cameras. Despite the alternative sensors' potential, there still are few
datasets with alternative vision sensors. Thus, we provided a dataset recorded
from alternative vision sensors, by handheld or mounted on a car, repeatedly in
the same space but in different conditions. We aim to acquire visible
information from co-aligned alternative vision sensors. Our sensor system
collects data more independently from visible light intensity by measuring the
amount of infrared dissipation, depth by structured reflection, and
instantaneous temporal changes in luminance. We provide these measurements
along with inertial sensors and ground-truth for developing robust visual SLAM
under poor illumination. The full dataset is available at:
https://visibilitydataset.github.io/
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 06:01:27 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2022 00:38:12 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Lee",
"Alex Junho",
""
],
[
"Cho",
"Younggun",
""
],
[
"Shin",
"Young-sik",
""
],
[
"Kim",
"Ayoung",
""
],
[
"Myung",
"Hyun",
""
]
] |
new_dataset
| 0.999802 |
2204.06666
|
Chong Chen
|
Chong Chen
|
Explicit caching HYB: a new high-performance SpMV framework on GPGPU
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Sparse Matrix-Vector Multiplication (SpMV) is a critical operation for the
iterative solvers of Finite Element Methods in computer simulation. Since the
SpMV operation is a memory-bound algorithm, the efficiency of data movements
heavily influences the performance of SpMV on GPU. In recent years, much
research has been conducted on accelerating the performance of SpMV on graphics
processing units (GPUs). The performance optimization methods used in existing
studies focus on the following areas: improving the load balancing between GPU
processors, and reducing the execution divergence between GPU threads. Although
some studies have made preliminary optimizations on the input vector fetching,
the effect of explicitly caching the input vector on GPU-based SpMV has not been
studied in depth yet. In this study, we try to minimize the data
movement cost for GPU-based SpMV using a new framework named "explicit caching
Hybrid (EHYB)". The EHYB framework achieves significant performance improvement
by using the following methods: 1. improving the speed of data movements by
partitioning and explicitly caching the input vector in the shared memory of
the CUDA kernel; 2. reducing the volume of data movements by storing the major
part of the column indices in a compact format. We tested our implementation
with sparse matrices derived from FEM applications in different areas. The
experimental results show that our implementation outperforms the
state-of-the-art implementations with significant speedups, and achieves higher
FLOPS than the theoretical performance upper bound of existing GPU-based SpMV
implementations.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 23:15:29 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Chen",
"Chong",
""
]
] |
new_dataset
| 0.997963 |
2204.06700
|
Sidong Feng
|
Sidong Feng, Chunyang Chen, Zhenchang Xing
|
Gallery D.C.: Auto-created GUI Component Gallery for Design Search and
Knowledge Discovery
| null | null | null | null |
cs.SE cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
GUI design is an integral part of software development. The process of
designing a mobile application typically starts with the ideation and
inspiration search from existing designs. However, existing
information-retrieval based, and database-query based methods cannot
efficiently gain inspirations in three requirements: design practicality,
design granularity and design knowledge discovery. In this paper we propose a
web application, called \tool that aims to facilitate the process of user
interface design through real world GUI component search. Gallery D.C. indexes
GUI component designs using reverse engineering and deep learning based
computer vision techniques on millions of real world applications. To perform
an advanced design search and knowledge discovery, our approach extracts
information about size, color, component type, and text information to help
designers explore multi-faceted design space and distill higher-order of design
knowledge. Gallery D.C. is well received via an informal evaluation with 7
professional designers. Web Link: http://mui-collection.herokuapp.com/. Demo
Video Link: https://youtu.be/zVmsz_wY5OQ.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 01:54:44 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Feng",
"Sidong",
""
],
[
"Chen",
"Chunyang",
""
],
[
"Xing",
"Zhenchang",
""
]
] |
new_dataset
| 0.990636 |
2204.06701
|
Yuanyuan Wei
|
Yuanyuan Wei, Julian Jang-Jaccard, Wen Xu, Fariza Sabrina, Seyit
Camtepe, Mikael Boulic
|
LSTM-Autoencoder based Anomaly Detection for Indoor Air Quality Time
Series Data
|
14 pages, 16 figures, 5 tables
| null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Anomaly detection for indoor air quality (IAQ) data has become an important
area of research as the quality of air is closely related to human health and
well-being. However, traditional statistics and shallow machine learning-based
approaches in anomaly detection in the IAQ area could not detect anomalies
involving the observation of correlations across several data points (i.e.,
often referred to as long-term dependences). We propose a hybrid deep learning
model that combines LSTM with Autoencoder for anomaly detection tasks in IAQ to
address this issue. In our approach, the LSTM network is comprised of multiple
LSTM cells that work with each other to learn the long-term dependences of the
data in a time-series sequence. Autoencoder identifies the optimal threshold
based on the reconstruction loss rates evaluated on every data across all
time-series sequences. Our experimental results, based on the Dunedin CO2
time-series dataset obtained through a real-world deployment of the schools in
New Zealand, demonstrate a very high and robust accuracy rate (99.50%) that
outperforms other similar models.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 01:57:46 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Wei",
"Yuanyuan",
""
],
[
"Jang-Jaccard",
"Julian",
""
],
[
"Xu",
"Wen",
""
],
[
"Sabrina",
"Fariza",
""
],
[
"Camtepe",
"Seyit",
""
],
[
"Boulic",
"Mikael",
""
]
] |
new_dataset
| 0.978905 |
2204.06720
|
EPTCS
|
Guillaume Aucher (University of Rennes 1, CNRS)
|
A van Benthem Theorem for Atomic and Molecular Logics
|
In Proceedings NCL 2022, arXiv:2204.06359
|
EPTCS 358, 2022, pp. 84-101
|
10.4204/EPTCS.358.7
| null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
After recalling the definitions of atomic and molecular logics, we show how
notions of bisimulation can be automatically defined from the truth conditions
of the connectives of any of these logics. Then, we prove a generalization of
van Benthem modal characterization theorem for molecular logics. Our molecular
connectives should be uniform and contain all conjunctions and disjunctions. We
use modal logic, the Lambek calculus and modal intuitionistic logic as case
study and compare in particular our work with Olkhovikov's work.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 03:22:43 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Aucher",
"Guillaume",
"",
"University of Rennes 1, CNRS"
]
] |
new_dataset
| 0.994303 |
2204.06731
|
EPTCS
|
Luis Estrada-Gonz\'alez (Institute for Philosophical Research,
National Autonomous University of Mexico (UNAM)), Fernando Cano-Jorge
(Universidad Panamericana)
|
Mortensen Logics
|
In Proceedings NCL 2022, arXiv:2204.06359
|
EPTCS 358, 2022, pp. 189-201
|
10.4204/EPTCS.358.14
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Mortensen introduced a connexive logic commonly known as 'M3V'. M3V is
obtained by adding a special conditional to LP. Among its most notable
features, besides its being connexive, M3V is negation-inconsistent and it
validates the negation of every conditional. But Mortensen has also studied and
applied extensively other non-connexive logics, for example, closed set logic,
CSL, and a variant of Sette's logic, identified and called 'P2' by Marcos.
In this paper, we analyze and compare systematically the connexive variants
of CSL and P2, obtained by adding the M3V conditional to them. Our main
observations are two. First, that the inconsistency of M3V is exacerbated in
the connexive variant of closed set logic, while it is attenuated in the
connexive variant of the Sette-like P2. Second, that the M3V conditional is,
unlike other conditionals, "connexively stable", meaning that it remains
connexive when combined with the main paraconsistent negations.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 03:26:56 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Estrada-González",
"Luis",
"",
"Institute for Philosophical Research,\n National Autonomous University of Mexico"
],
[
"Cano-Jorge",
"Fernando",
"",
"Universidad Panamericana"
]
] |
new_dataset
| 0.999497 |
2204.06737
|
EPTCS
|
Ana Cruz (Aveiro University), Alexandre Madeira (CIDMA, Aveiro
University), Lu\'is Soares Barbosa (INESC TEC & Dep. Informatics, Minho
University)
|
A Logic for Paraconsistent Transition Systems
|
In Proceedings NCL 2022, arXiv:2204.06359
|
EPTCS 358, 2022, pp. 270-284
|
10.4204/EPTCS.358.20
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Modelling complex information systems often entails the need for dealing with
scenarios of inconsistency in which several requirements either reinforce or
contradict each other. In this kind of scenarios, arising e.g. in knowledge
representation, simulation of biological systems, or quantum computation,
inconsistency has to be addressed in a precise and controlled way. This paper
generalises Belnap-Dunn four-valued logic, introducing paraconsistent
transition systems (PTS), endowed with positive and negative accessibility
relations, and a metric space over the lattice of truth values, and their modal
logic.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 03:28:48 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Cruz",
"Ana",
"",
"Aveiro University"
],
[
"Madeira",
"Alexandre",
"",
"CIDMA, Aveiro\n University"
],
[
"Barbosa",
"Luís Soares",
"",
"INESC TEC & Dep. Informatics, Minho\n University"
]
] |
new_dataset
| 0.998386 |
2204.06738
|
EPTCS
|
V\'it Pun\v{c}och\'a\v{r} (Institute of Philosophy, Czech Academy of
Sciences), Igor Sedl\'ar (Institute of Philosophy, Czech Academy of Sciences)
|
Routley Star in Information-Based Semantics
|
In Proceedings NCL 2022, arXiv:2204.06359
|
EPTCS 358, 2022, pp. 285-297
|
10.4204/EPTCS.358.21
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
It is common in various non-classical logics, especially in relevant logics,
to characterize negation semantically via the operation known as Routley star.
This operation works well within relational semantic frameworks based on prime
theories. We study this operation in the context of "information-based"
semantics for which it is characteristic that sets of formulas supported by
individual information states are theories that do not have to be prime. We
will show that, somewhat surprisingly, the incorporation of Routley star into
the information-based semantics does not lead to a collapse or a trivialization
of the whole semantic system. On the contrary, it leads to a technically
elegant though quite restricted semantic framework that determines a particular
logic. We study some basic properties of this semantics. For example, we show
that within this framework double negation law is valid only in involutive
linear frames. We characterize axiomatically the logic of all linear frames and
show that the logic of involutive linear frames coincides with a system that
Mike Dunn coined Kalman logic. This logic is the fragment (for the language
restricted to conjunction, disjunction and negation) of the "semi-relevant"
logic known as R-mingle. Finally, we characterize by a deductive system the
logic of all information frames equipped with Routley star.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 03:29:07 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Punčochář",
"Vít",
"",
"Institute of Philosophy, Czech Academy of\n Sciences"
],
[
"Sedlár",
"Igor",
"",
"Institute of Philosophy, Czech Academy of Sciences"
]
] |
new_dataset
| 0.997481 |
2204.06771
|
Sungmin Kang
|
Sungmin Kang and Shin Yoo
|
GLAD: Neural Predicate Synthesis to Repair Omission Faults
|
10 pages, 9 tables, 2 figures
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Existing template and learning-based APR tools have successfully found
patches for many benchmark faults. However, our analysis of existing results
shows that omission faults pose a significant challenge to these techniques.
For template based approaches, omission faults provide no location to apply
templates to; for learning based approaches that formulate repair as Neural
Machine Translation (NMT), omission faults similarly do not provide the faulty
code to translate. To address these issues, we propose GLAD, a novel
learning-based repair technique that specifically targets if-clause synthesis.
GLAD does not require a faulty line as it is based on generative Language
Models (LMs) instead of machine translation; consequently, it can repair
omission faults. GLAD intelligently constrains the language model using a
type-based grammar. Further, it efficiently reduces the validation cost by
performing dynamic ranking of candidate patches using a debugger. Thanks to the
shift from translation to synthesis, GLAD is highly orthogonal to existing
techniques: GLAD can correctly fix 16 Defects4J v1.2 faults that previous
NMT-based techniques could not, while maintaining a reasonable runtime cost,
underscoring its utility as an APR tool and potential to complement existing
tools in practice. An inspection of the bugs that GLAD fixes reveals that GLAD
can quickly generate expressions that would be challenging for other
techniques.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 06:13:11 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Kang",
"Sungmin",
""
],
[
"Yoo",
"Shin",
""
]
] |
new_dataset
| 0.987151 |
2204.06806
|
Manu Mathew
|
Debapriya Maji, Soyeb Nagori, Manu Mathew, Deepak Poddar
|
YOLO-Pose: Enhancing YOLO for Multi Person Pose Estimation Using Object
Keypoint Similarity Loss
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce YOLO-pose, a novel heatmap-free approach for joint detection,
and 2D multi-person pose estimation in an image based on the popular YOLO
object detection framework. Existing heatmap based two-stage approaches are
sub-optimal as they are not end-to-end trainable and training relies on a
surrogate L1 loss that is not equivalent to maximizing the evaluation metric,
i.e. Object Keypoint Similarity (OKS). Our framework allows us to train the
model end-to-end and optimize the OKS metric itself. The proposed model learns
to jointly detect bounding boxes for multiple persons and their corresponding
2D poses in a single forward pass, thus bringing in the best of both
top-down and bottom-up approaches. The proposed approach does not require the
postprocessing of bottom-up approaches to group detected keypoints into a
skeleton as each bounding box has an associated pose, resulting in an inherent
grouping of the keypoints. Unlike top-down approaches, multiple forward passes
are done away with since all persons are localized along with their pose in a
single inference. YOLO-pose achieves new state-of-the-art results on COCO
validation (90.2% AP50) and test-dev set (90.3% AP50), surpassing all existing
bottom-up approaches in a single forward pass without flip test, multi-scale
testing, or any other test time augmentation. All experiments and results
reported in this paper are without any test time augmentation, unlike
traditional approaches that use flip-test and multi-scale testing to boost
performance. Our training codes will be made publicly available at
https://github.com/TexasInstruments/edgeai-yolov5 and
https://github.com/TexasInstruments/edgeai-yolox
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 08:02:40 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Maji",
"Debapriya",
""
],
[
"Nagori",
"Soyeb",
""
],
[
"Mathew",
"Manu",
""
],
[
"Poddar",
"Deepak",
""
]
] |
new_dataset
| 0.996449 |
2204.06833
|
Feng Xue
|
Tianxi Wang, Feng Xue, Yu Zhou, Anlong Ming
|
MARF: Multiscale Adaptive-switch Random Forest for Leg Detection with 2D
Laser Scanners
|
Accepted by Transactions on Cybernetics (TCYB)
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the 2D laser-based tasks, e.g., people detection and people tracking, leg
detection is usually the first step. Thus, it carries great weight in
determining the performance of people detection and people tracking. However,
many leg detectors ignore the inevitable noise and the multiscale
characteristics of the laser scan, which makes them sensitive to the unreliable
features of point cloud and further degrades the performance of the leg
detector. In this paper, we propose a multiscale adaptive-switch Random Forest
(MARF) to overcome these two challenges. Firstly, the adaptive-switch decision
tree is designed to use noise-sensitive features to conduct weighted
classification and noise-invariant features to conduct binary classification,
which makes our detector perform more robust to noise. Secondly, considering
the multiscale property that the sparsity of the 2D point cloud is proportional
to the length of laser beams, we design a multiscale random forest structure to
detect legs at different distances. Moreover, the proposed approach allows us
to discover a sparser human leg from point clouds than others. Consequently,
our method shows an improved performance compared to other state-of-the-art leg
detectors on the challenging Moving Legs dataset and runs the whole pipeline
at a speed of 60+ FPS on low-computational-power laptops. Moreover, we further apply
the proposed MARF to the people detection and tracking system, achieving a
considerable gain in all metrics.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 09:03:16 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Wang",
"Tianxi",
""
],
[
"Xue",
"Feng",
""
],
[
"Zhou",
"Yu",
""
],
[
"Ming",
"Anlong",
""
]
] |
new_dataset
| 0.990263 |
2204.06890
|
Xinqian Gu
|
Xinqian Gu, Hong Chang, Bingpeng Ma, Shutao Bai, Shiguang Shan, Xilin
Chen
|
Clothes-Changing Person Re-identification with RGB Modality Only
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The key to addressing clothes-changing person re-identification (re-id) is to
extract clothes-irrelevant features, e.g., face, hairstyle, body shape, and
gait. Most current works mainly focus on modeling body shape from
multi-modality information (e.g., silhouettes and sketches), but do not make
full use of the clothes-irrelevant information in the original RGB images. In
this paper, we propose a Clothes-based Adversarial Loss (CAL) to mine
clothes-irrelevant features from the original RGB images by penalizing the
predictive power of re-id model w.r.t. clothes. Extensive experiments
demonstrate that using RGB images only, CAL outperforms all state-of-the-art
methods on widely-used clothes-changing person re-id benchmarks. Besides,
compared with images, videos contain richer appearance and additional temporal
information, which can be used to model proper spatiotemporal patterns to
assist clothes-changing re-id. Since there is no publicly available
clothes-changing video re-id dataset, we contribute a new dataset named CCVID
and show that there exists much room for improvement in modeling spatiotemporal
information. The code and new dataset are available at:
https://github.com/guxinqian/Simple-CCReID.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 11:38:28 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Gu",
"Xinqian",
""
],
[
"Chang",
"Hong",
""
],
[
"Ma",
"Bingpeng",
""
],
[
"Bai",
"Shutao",
""
],
[
"Shan",
"Shiguang",
""
],
[
"Chen",
"Xilin",
""
]
] |
new_dataset
| 0.990921 |
2204.06945
|
Jordan Aiko Deja Mr
|
Jordan Aiko Deja, Sven Mayer, Klen \v{C}opi\v{c} Pucihar, Matja\v{z}
Kljun
|
The Vision of a Human-Centered Piano
|
4 pages, 1 figure, workshop proceedings
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
For around 300 years, humans have been learning to play the modern piano
either with a teacher or on their own. In recent years teaching and learning
have been enhanced using augmented technologies that support novices. Other
technologies have also tried to improve different use cases with the piano,
such as composing and performing. Researchers and practitioners have showcased
several forms of augmentation, from hardware improvements and sound quality to
rendering projected visualizations, gesture-based interaction, and immersive technologies.
Today, the landscape of piano augmentations is very diverse, and it is unclear
how to describe the ideal piano and its features. In this work, we discuss how
the human-centered piano -- the piano that has been designed with humans in the
center of the design process and that effectively supports tasks performed on
it -- can support pianists. In detail, we present the three tasks of learning,
composing, and improvising in which a human-centered piano would be beneficial
for the pianist.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 13:16:46 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Deja",
"Jordan Aiko",
""
],
[
"Mayer",
"Sven",
""
],
[
"Pucihar",
"Klen Čopič",
""
],
[
"Kljun",
"Matjaž",
""
]
] |
new_dataset
| 0.995377 |
2204.06950
|
Bharat Lal Bhatnagar
|
Bharat Lal Bhatnagar, Xianghui Xie, Ilya A. Petrov, Cristian
Sminchisescu, Christian Theobalt, Gerard Pons-Moll
|
BEHAVE: Dataset and Method for Tracking Human Object Interactions
|
Accepted at CVPR'22
|
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2022
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Modelling interactions between humans and objects in natural environments is
central to many applications including gaming, virtual and mixed reality, as
well as human behavior analysis and human-robot collaboration. This challenging
operation scenario requires generalization to a vast number of objects, scenes,
and human actions. Unfortunately, no such dataset exists. Moreover, this
data needs to be acquired in diverse natural environments, which rules out 4D
scanners and marker-based capture systems. We present the BEHAVE dataset, the first
full-body human-object interaction dataset with multi-view RGBD frames and
corresponding 3D SMPL and object fits along with the annotated contacts between
them. We record around 15k frames at 5 locations with 8 subjects performing a
wide range of interactions with 20 common objects. We use this data to learn a
model that can jointly track humans and objects in natural environments with an
easy-to-use portable multi-camera setup. Our key insight is to predict
correspondences from the human and the object to a statistical body model to
obtain human-object contacts during interactions. Our approach can record and
track not just the humans and objects but also their interactions, modeled as
surface contacts, in 3D. Our code and data can be found at:
http://virtualhumans.mpi-inf.mpg.de/behave
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 13:21:19 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Bhatnagar",
"Bharat Lal",
""
],
[
"Xie",
"Xianghui",
""
],
[
"Petrov",
"Ilya A.",
""
],
[
"Sminchisescu",
"Cristian",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Pons-Moll",
"Gerard",
""
]
] |
new_dataset
| 0.996627 |
2204.07015
|
Daniel Posada
|
Mohammed Eleffendi, Daniel Posada, M. Ilhan Akbas, and Troy Henderson
|
NASA/GSFC's Flight Software Core Flight System Implementation For A
Lunar Surface Imaging Mission
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The interest in returning to the Moon for research and exploration has
increased as new tipping point technologies are providing the possibility to do
so. One of these initiatives is the Artemis program by NASA, which plans to
return humans by 2024 to the lunar surface and study water deposits on the
surface. This program will also serve as a practice run to plan the logistics
of sending humans to explore Mars. To return humans safely to the Moon,
multiple technological advances and diverse knowledge about the nature of the
lunar surface are needed. This paper will discuss the design and implementation
of the flight software of EagleCam, a CubeSat camera system based on the free
open-source core Flight System (cFS) architecture developed by NASA's Goddard
Space Flight Center. EagleCam is a payload transported to the Moon by the
Commercial Lunar Payload Services Nova-C lander developed by Intuitive
Machines. The camera system will capture the first third-person view of a
spacecraft performing a Moon landing and collect other scientific data such as
plume interaction with the surface. The complete system is composed of the
CubeSat and the deployer that will eject it. This will be the first time the WiFi
protocol is used on the Moon to establish a local communication network.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 15:12:13 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Eleffendi",
"Mohammed",
""
],
[
"Posada",
"Daniel",
""
],
[
"Akbas",
"M. Ilhan",
""
],
[
"Henderson",
"Troy",
""
]
] |
new_dataset
| 0.99892 |
2204.07032
|
Anwesh Reddy Paduri
|
Narayana Darapaneni, Rajiv Tiwari, Anwesh Reddy Paduri, Suman Saurav,
Rohit Chaoji, and Sohil
|
Farmer-Bot: An Interactive Bot for Farmers
| null | null | null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The Indian Agricultural sector generates huge employment, accounting for over
54% of the country's workforce. Its share of GDP is close to 14%. However,
this sector has been plagued by a knowledge and infrastructure deficit,
especially in rural areas. Like other sectors, the Indian Agricultural
sector has seen rapid digitization through the use of technology, and the Kisan Call Center
(KCC) is one such example. It is a Government of India initiative launched on
21st January 2004 as a synthesis of two hitherto separate sectors: information
technology and agriculture. However, studies have shown constraints for
KCC beneficiaries, especially in light of network congestion
and the incomplete knowledge of call center representatives. With the advent of
new technologies, from first-generation SMS-based tools to next-generation social
media tools like WhatsApp, farmers in India are digitally more connected to
agricultural information services. Previous studies have shown that the KCC
dataset can be used as a viable basis for a chat-bot. We base our
study on the available KCC dataset to build an NLP model that computes the
semantic similarity between new queries and queries made by farmers in the past, and use it to
automatically answer future queries. We attempt to build a WhatsApp-based
chat-bot, using RASA as a tool, to easily communicate with farmers.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 17:52:21 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Darapaneni",
"Narayana",
""
],
[
"Tiwari",
"Rajiv",
""
],
[
"Paduri",
"Anwesh Reddy",
""
],
[
"Saurav",
"Suman",
""
],
[
"Chaoji",
"Rohit",
""
],
[
"Sohil",
"",
""
]
] |
new_dataset
| 0.959218 |
2204.07038
|
Emon Dey
|
Emon Dey, Nirmalya Roy
|
OMAD: On-device Mental Anomaly Detection for Substance and Non-Substance
Users
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Stay-at-home orders during the COVID-19 pandemic help flatten the curve but,
ironically, instigate mental health problems among people who have
Substance Use Disorders. Measuring the electrical activity signals of the brain
using off-the-shelf consumer wearable devices such as smart wristwatches, and
mapping them in real time to underlying mood, behavioral, and emotional changes,
plays a striking role in postulating mental health anomalies. In this work, we
propose to implement a wearable, {\it On-device Mental Anomaly Detection
(OMAD)} system to detect anomalous behaviors and activities that render to
mental health problems and help clinicians to design effective intervention
strategies. We propose an intrinsic artifact removal model on
Electroencephalogram (EEG) signal to better correlate the fine-grained
behavioral changes. We design model compression technique on the artifact
removal and activity recognition (main) modules. We implement a magnitude-based
weight pruning technique both on convolutional neural network and Multilayer
Perceptron to employ the inference phase on Nvidia Jetson Nano; one of the
tightest resource-constrained devices for wearables. We experimented with three
different combinations of feature extractions and artifact removal approaches.
We evaluate the performance of {\it OMAD} in terms of accuracy, F1 score,
memory usage and running time for both unpruned and compressed models using EEG
data from both control and treatment (alcoholic) groups for different object
recognition tasks. Our artifact removal model and main activity detection model
achieved about $\approx$ 93\% and 90\% accuracy, respectively with significant
reduction in model size (70\%) and inference time (31\%).
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 02:29:58 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Dey",
"Emon",
""
],
[
"Roy",
"Nirmalya",
""
]
] |
new_dataset
| 0.986186 |
2204.07072
|
Ari Blau
|
Ari Blau, Christoph Gebhardt, Andres Bendesky, Liam Paninski, and Anqi
Wu
|
SemiMultiPose: A Semi-supervised Multi-animal Pose Estimation Framework
|
10 pages, 7 figures, preprint
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Multi-animal pose estimation is essential for studying animals' social
behaviors in neuroscience and neuroethology. Advanced approaches have been
proposed to support multi-animal estimation and achieve state-of-the-art
performance. However, these models rarely exploit unlabeled data during
training even though real world applications have exponentially more unlabeled
frames than labeled frames. Manually adding dense annotations for a large
number of images or videos is costly and labor-intensive, especially for
multiple instances. Given these deficiencies, we propose a novel
semi-supervised architecture for multi-animal pose estimation, leveraging the
abundant structures pervasive in unlabeled frames in behavior videos to enhance
training, which is critical for sparsely-labeled problems. The resulting
algorithm provides superior multi-animal pose estimation results on three
animal experiments compared to the state-of-the-art baseline and exhibits more
predictive power in sparsely-labeled data regimes.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 16:06:55 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Blau",
"Ari",
""
],
[
"Gebhardt",
"Christoph",
""
],
[
"Bendesky",
"Andres",
""
],
[
"Paninski",
"Liam",
""
],
[
"Wu",
"Anqi",
""
]
] |
new_dataset
| 0.990792 |
2204.07142
|
Rakesh R Menon
|
Rakesh R Menon, Sayan Ghosh, Shashank Srivastava
|
CLUES: A Benchmark for Learning Classifiers using Natural Language
Explanations
|
ACL 2022 (25 pages, 16 figures)
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised learning has traditionally focused on inductive learning by
observing labeled examples of a task. In contrast, humans have the ability to
learn new concepts from language. Here, we explore training zero-shot
classifiers for structured data purely from language. For this, we introduce
CLUES, a benchmark for Classifier Learning Using natural language ExplanationS,
consisting of a range of classification tasks over structured data along with
natural language supervision in the form of explanations. CLUES consists of 36
real-world and 144 synthetic classification tasks. It contains crowdsourced
explanations describing real-world tasks from multiple teachers and
programmatically generated explanations for the synthetic tasks. To model the
influence of explanations in classifying an example, we develop ExEnt, an
entailment-based model that learns classifiers using explanations. ExEnt
generalizes up to 18% better (relative) on novel tasks than a baseline that
does not use explanations. We delineate key challenges for automated learning
from explanations, addressing which can lead to progress on CLUES in the
future. Code and datasets are available at: https://clues-benchmark.github.io.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 17:54:46 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Menon",
"Rakesh R",
""
],
[
"Ghosh",
"Sayan",
""
],
[
"Srivastava",
"Shashank",
""
]
] |
new_dataset
| 0.994651 |
2204.07149
|
Lara Zlokapa
|
Lara Zlokapa, Yiyue Luo, Jie Xu, Michael Foshey, Kui Wu, Pulkit
Agrawal, Wojciech Matusik
|
An Integrated Design Pipeline for Tactile Sensing Robotic Manipulators
| null | null | null | null |
cs.RO cs.AR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Traditional robotic manipulator design methods require extensive,
time-consuming, and manual trial and error to produce a viable design. During
this process, engineers often spend their time redesigning or reshaping
components as they discover better topologies for the robotic manipulator.
Tactile sensors, while useful, often complicate the design due to their bulky
form factor. We propose an integrated design pipeline to streamline the design
and manufacturing of robotic manipulators with knitted, glove-like tactile
sensors. The proposed pipeline allows a designer to assemble a collection of
modular, open-source components by applying predefined graph grammar rules. The
end result is an intuitive design paradigm that allows the creation of new
virtual designs of manipulators in a matter of minutes. Our framework allows
the designer to fine-tune the manipulator's shape through cage-based geometry
deformation. Finally, the designer can select surfaces for adding tactile
sensing. Once the manipulator design is finished, the program will
automatically generate 3D printing and knitting files for manufacturing. We
demonstrate the utility of this pipeline by creating four custom manipulators
tested on real-world tasks: screwing in a wing screw, sorting water bottles,
picking up an egg, and cutting paper with scissors.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 17:57:03 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Zlokapa",
"Lara",
""
],
[
"Luo",
"Yiyue",
""
],
[
"Xu",
"Jie",
""
],
[
"Foshey",
"Michael",
""
],
[
"Wu",
"Kui",
""
],
[
"Agrawal",
"Pulkit",
""
],
[
"Matusik",
"Wojciech",
""
]
] |
new_dataset
| 0.997153 |
2204.07151
|
Vickie Ye
|
Vickie Ye, Zhengqi Li, Richard Tucker, Angjoo Kanazawa, Noah Snavely
|
Deformable Sprites for Unsupervised Video Decomposition
|
CVPR 2022 Oral. Project Site: https://deformable-sprites.github.io
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a method to extract persistent elements of a dynamic scene from
an input video. We represent each scene element as a \emph{Deformable Sprite}
consisting of three components: 1) a 2D texture image for the entire video, 2)
per-frame masks for the element, and 3) non-rigid deformations that map the
texture image into each video frame. The resulting decomposition allows for
applications such as consistent video editing. Deformable Sprites are a type of
video auto-encoder model that is optimized on individual videos, and does not
require training on a large dataset, nor does it rely on pre-trained models.
Moreover, our method does not require object masks or other user input, and
discovers moving objects of a wider variety than previous work. We evaluate our
approach on standard video datasets and show qualitative results on a diverse
array of Internet videos. Code and video results can be found at
https://deformable-sprites.github.io
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 17:58:02 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Ye",
"Vickie",
""
],
[
"Li",
"Zhengqi",
""
],
[
"Tucker",
"Richard",
""
],
[
"Kanazawa",
"Angjoo",
""
],
[
"Snavely",
"Noah",
""
]
] |
new_dataset
| 0.995683 |
2204.07154
|
Houwen Peng
|
Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong
Fu, Lu Yuan
|
MiniViT: Compressing Vision Transformers with Weight Multiplexing
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformer (ViT) models have recently drawn much attention in
computer vision due to their high model capability. However, ViT models suffer
from a huge number of parameters, restricting their applicability on devices with
limited memory. To alleviate this problem, we propose MiniViT, a new
compression framework, which achieves parameter reduction in vision
transformers while retaining the same performance. The central idea of MiniViT
is to multiplex the weights of consecutive transformer blocks. More
specifically, we make the weights shared across layers, while imposing a
transformation on the weights to increase diversity. Weight distillation over
self-attention is also applied to transfer knowledge from large-scale ViT
models to weight-multiplexed compact models. Comprehensive experiments
demonstrate the efficacy of MiniViT, showing that it can reduce the size of the
pre-trained Swin-B transformer by 48\%, while achieving an increase of 1.0\% in
Top-1 accuracy on ImageNet. Moreover, using a single-layer of parameters,
MiniViT is able to compress DeiT-B by 9.7 times from 86M to 9M parameters,
without seriously compromising the performance. Finally, we verify the
transferability of MiniViT by reporting its performance on downstream
benchmarks. Code and models are available here.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 17:59:05 GMT"
}
] | 2022-04-15T00:00:00 |
[
[
"Zhang",
"Jinnian",
""
],
[
"Peng",
"Houwen",
""
],
[
"Wu",
"Kan",
""
],
[
"Liu",
"Mengchen",
""
],
[
"Xiao",
"Bin",
""
],
[
"Fu",
"Jianlong",
""
],
[
"Yuan",
"Lu",
""
]
] |
new_dataset
| 0.993837 |
1908.00093
|
Jingmei Hu
|
David A. Holland and Jingmei Hu and Ming Kawaguchi and Eric Lu and
Stephen Chong and Margo I. Seltzer
|
Aquarium: Cassiopea and Alewife Languages
| null | null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
This technical report describes two of the domain specific languages used in
the Aquarium kernel code synthesis project. It presents the language cores in
terms of abstract syntax. Cassiopea is a machine description language for
describing the semantics of processor instruction sets. Alewife is a
specification language that can be used to write machine-independent
specifications for assembly-level instruction blocks. An Alewife specification
can be used to verify and synthesize code for any machine described in
Cassiopea, given a machine-specific translation for abstractions used in the
specification. This article does not include an introduction to either the
Aquarium system or the use of the languages. In addition to this version of the
article being a draft, the Aquarium project and the languages are works in
progress. This article cannot currently be considered either final or complete.
|
[
{
"version": "v1",
"created": "Wed, 31 Jul 2019 20:50:04 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Oct 2019 00:57:50 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Nov 2019 22:24:27 GMT"
},
{
"version": "v4",
"created": "Thu, 14 May 2020 15:42:09 GMT"
},
{
"version": "v5",
"created": "Wed, 13 Apr 2022 03:28:06 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Holland",
"David A.",
""
],
[
"Hu",
"Jingmei",
""
],
[
"Kawaguchi",
"Ming",
""
],
[
"Lu",
"Eric",
""
],
[
"Chong",
"Stephen",
""
],
[
"Seltzer",
"Margo I.",
""
]
] |
new_dataset
| 0.999836 |
2009.10045
|
Antonio Fari\~na
|
Nieves R. Brisaboa, Ana Cerdeira-Pena, Guillermo de Bernardo, Antonio
Fari\~na, Gonzalo Navarro
|
Space/time-efficient RDF stores based on circular suffix sorting
|
This work has been submitted to a Journal for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, RDF has gained popularity as a format for the standardized
publication and exchange of information in the Web of Data. In this paper we
introduce RDFCSA, a data structure that is able to self-index an RDF dataset in
small space and supports efficient querying. RDFCSA regards the triples of the
RDF store as short circular strings and applies suffix sorting on those
strings, so that triple-pattern queries reduce to prefix searching on the
string set. The RDF store is then represented compactly using a Compressed
Suffix Array (CSA), a proved technology in text indexing that efficiently
supports prefix searches.
Our experiments show that RDFCSA provides a compact RDF representation, using
less than 60% of the space required by the raw data, and yields fast and
consistent query times when answering triple-pattern queries (a few
microseconds per result). We also support join queries, a key component of most
SPARQL queries. RDFCSA is shown to provide an excellent space/time tradeoff,
typically using much less space than alternatives that compete in time.
|
[
{
"version": "v1",
"created": "Mon, 21 Sep 2020 17:36:38 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Apr 2022 08:31:20 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Brisaboa",
"Nieves R.",
""
],
[
"Cerdeira-Pena",
"Ana",
""
],
[
"de Bernardo",
"Guillermo",
""
],
[
"Fariña",
"Antonio",
""
],
[
"Navarro",
"Gonzalo",
""
]
] |
new_dataset
| 0.958398 |
2010.06891
|
Qingyang Wu
|
Qingyang Wu, Zhenzhong Lan, Kun Qian, Jing Gu, Alborz Geramifard, Zhou
Yu
|
Memformer: A Memory-Augmented Transformer for Sequence Modeling
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers have reached remarkable success in sequence modeling. However,
these models have efficiency issues as they need to store all the history
token-level representations as memory. We present Memformer, an efficient
neural network for sequence modeling, that utilizes an external dynamic memory
to encode and retrieve past information. Our model achieves linear time
complexity and constant memory space complexity when processing long sequences.
We also propose a new optimization scheme, memory replay back-propagation
(MRBP), which promotes long-range back-propagation through time with a
significantly reduced memory requirement. Experimental results show that
Memformer achieves performance comparable to the baselines while
using 8.1x less memory space and running 3.2x faster at inference. Analysis of the
attention pattern shows that our external memory slots can encode and retain
important information through timesteps.
|
[
{
"version": "v1",
"created": "Wed, 14 Oct 2020 09:03:36 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2022 20:57:54 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Wu",
"Qingyang",
""
],
[
"Lan",
"Zhenzhong",
""
],
[
"Qian",
"Kun",
""
],
[
"Gu",
"Jing",
""
],
[
"Geramifard",
"Alborz",
""
],
[
"Yu",
"Zhou",
""
]
] |
new_dataset
| 0.99508 |
2105.09464
|
Yongxiang Gu
|
Yongxiang Gu, Xiaolin Qin, Yuncong Peng, Lu Li
|
Content-Augmented Feature Pyramid Network with Light Linear Spatial
Transformers for Object Detection
|
16 pages,7 figures,8 tables
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As one of the prevalent components, Feature Pyramid Network (FPN) is widely
used in current object detection models for improving multi-scale object
detection performance. However, its feature fusion mode is still in a
misaligned and local manner, thus limiting the representation power. To address
the inherent defects of FPN, a novel architecture termed Content-Augmented
Feature Pyramid Network (CA-FPN) is proposed in this paper. Firstly, a Global
Content Extraction Module (GCEM) is proposed to extract multi-scale context
information. Secondly, lightweight linear spatial Transformer connections are
added in the top-down pathway to augment each feature map with multi-scale
features, where a linearized approximate self-attention function is designed
for reducing model complexity. By means of the self-attention mechanism in
Transformer, there is no longer a need to align feature maps during feature
fusion, thus solving the misaligned defect. By setting the query scope to the
entire feature map, the local defect can also be solved. Extensive experiments
on COCO and PASCAL VOC datasets demonstrated that our CA-FPN outperforms other
FPN-based detectors without bells and whistles and is robust in different
settings.
|
[
{
"version": "v1",
"created": "Thu, 20 May 2021 02:31:31 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jul 2021 09:12:24 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Apr 2022 13:10:46 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Gu",
"Yongxiang",
""
],
[
"Qin",
"Xiaolin",
""
],
[
"Peng",
"Yuncong",
""
],
[
"Li",
"Lu",
""
]
] |
new_dataset
| 0.984884 |
2106.02285
|
Zheng-Ning Liu
|
Shi-Min Hu, Zheng-Ning Liu, Meng-Hao Guo, Jun-Xiong Cai, Jiahui Huang,
Tai-Jiang Mu, Ralph R. Martin
|
Subdivision-Based Mesh Convolution Networks
|
Code is available at https://github.com/lzhengning/SubdivNet
|
ACM Transactions on Graphics, Volume 41, Issue 3, 2022, Article
No.: 25, pp 1-16
|
10.1145/3506694
| null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Convolutional neural networks (CNNs) have made great breakthroughs in 2D
computer vision. However, their irregular structure makes it hard to harness
the potential of CNNs directly on meshes. A subdivision surface provides a
hierarchical multi-resolution structure, in which each face in a closed
2-manifold triangle mesh is exactly adjacent to three faces. Motivated by these
two observations, this paper presents SubdivNet, an innovative and versatile
CNN framework for 3D triangle meshes with Loop subdivision sequence
connectivity. Making an analogy between mesh faces and pixels in a 2D image
allows us to present a mesh convolution operator to aggregate local features
from nearby faces. By exploiting face neighborhoods, this convolution can
support standard 2D convolutional network concepts, e.g. variable kernel size,
stride, and dilation. Based on the multi-resolution hierarchy, we make use of
pooling layers which uniformly merge four faces into one and an upsampling
method which splits one face into four. Thereby, many popular 2D CNN
architectures can be easily adapted to process 3D meshes. Meshes with arbitrary
connectivity can be remeshed to have Loop subdivision sequence connectivity via
self-parameterization, making SubdivNet a general approach. Extensive
evaluation and various applications demonstrate SubdivNet's effectiveness and
efficiency.
|
[
{
"version": "v1",
"created": "Fri, 4 Jun 2021 06:50:34 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Dec 2021 10:24:09 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Hu",
"Shi-Min",
""
],
[
"Liu",
"Zheng-Ning",
""
],
[
"Guo",
"Meng-Hao",
""
],
[
"Cai",
"Jun-Xiong",
""
],
[
"Huang",
"Jiahui",
""
],
[
"Mu",
"Tai-Jiang",
""
],
[
"Martin",
"Ralph R.",
""
]
] |
new_dataset
| 0.969468 |
2106.05616
|
Yicheng Deng
|
Yicheng Deng, Cheng Sun, Jiahui Zhu, Yongqi Sun
|
SVMAC: Unsupervised 3D Human Pose Estimation from a Single Image with
Single-view-multi-angle Consistency
|
Accpted by 3DV 2021
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Recovering 3D human pose from 2D joints is still a challenging problem,
especially without any 3D annotation, video information, or multi-view
information. In this paper, we present an unsupervised GAN-based model
consisting of multiple weight-sharing generators to estimate a 3D human pose
from a single image without 3D annotations. In our model, we introduce
single-view-multi-angle consistency (SVMAC) to significantly improve the
estimation performance. With 2D joint locations as input, our model estimates a
3D pose and a camera simultaneously. During training, the estimated 3D pose is
rotated by random angles and the estimated camera projects the rotated 3D poses
back to 2D. The 2D reprojections will be fed into weight-sharing generators to
estimate the corresponding 3D poses and cameras, which are then mixed to impose
SVMAC constraints to self-supervise the training process. The experimental
results show that our method outperforms the state-of-the-art unsupervised
methods on Human 3.6M and MPI-INF-3DHP. Moreover, qualitative results on MPII
and LSP show that our method can generalize well to unknown data.
|
[
{
"version": "v1",
"created": "Thu, 10 Jun 2021 09:43:57 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Jun 2021 05:21:21 GMT"
},
{
"version": "v3",
"created": "Sun, 8 Aug 2021 02:00:58 GMT"
},
{
"version": "v4",
"created": "Wed, 13 Apr 2022 05:08:38 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Deng",
"Yicheng",
""
],
[
"Sun",
"Cheng",
""
],
[
"Zhu",
"Jiahui",
""
],
[
"Sun",
"Yongqi",
""
]
] |
new_dataset
| 0.997151 |
2110.01725
|
Chao Qu
|
Chao Qu, Shreyas S. Shivakumar, Wenxin Liu and Camillo J. Taylor
|
LLOL: Low-Latency Odometry for Spinning Lidars
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a low-latency odometry system designed for spinning
lidars. Many existing lidar odometry methods wait for an entire sweep from the
lidar before processing the data. This introduces a large delay between the
first laser firing and its pose estimate. To reduce this latency, we treat the
spinning lidar as a streaming sensor and process packets as they arrive. This
effectively distributes expensive operations across time, resulting in a very
fast and lightweight system with much higher throughput and lower latency. Our
open-source implementation is available at
\url{https://github.com/versatran01/llol}.
|
[
{
"version": "v1",
"created": "Mon, 4 Oct 2021 21:29:42 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Apr 2022 17:19:39 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Qu",
"Chao",
""
],
[
"Shivakumar",
"Shreyas S.",
""
],
[
"Liu",
"Wenxin",
""
],
[
"Taylor",
"Camillo J.",
""
]
] |
new_dataset
| 0.999 |
2112.10646
|
Julien Rebut
|
Julien Rebut, Arthur Ouaknine, Waqas Malik and Patrick P\'erez
|
Raw High-Definition Radar for Multi-Task Learning
|
12 pages, 7 figures, 6 tables
|
CVPR2022
| null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With their robustness to adverse weather conditions and ability to measure
speeds, radar sensors have been part of the automotive landscape for more than
two decades. Recent progress toward High Definition (HD) Imaging radar has
driven the angular resolution below one degree, thus approaching laser scanning
performance. However, the amount of data an HD radar delivers and the
computational cost to estimate the angular positions remain a challenge. In
this paper, we propose a novel HD radar sensing model, FFT-RadNet, that
eliminates the overhead of computing the range-azimuth-Doppler 3D tensor,
learning instead to recover angles from a range-Doppler spectrum. FFT-RadNet is
trained both to detect vehicles and to segment free driving space. On both
tasks, it competes with the most recent radar-based models while requiring less
compute and memory. Also, we collected and annotated 2-hour worth of raw data
from synchronized automotive-grade sensors (camera, laser, HD radar) in various
environments (city street, highway, countryside road). This unique dataset,
nick-named RADIal for "Radar, Lidar et al.", is available at
https://github.com/valeoai/RADIal.
|
[
{
"version": "v1",
"created": "Mon, 20 Dec 2021 16:15:26 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 17:52:47 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Apr 2022 13:48:20 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Rebut",
"Julien",
""
],
[
"Ouaknine",
"Arthur",
""
],
[
"Malik",
"Waqas",
""
],
[
"Pérez",
"Patrick",
""
]
] |
new_dataset
| 0.96841 |
2112.12782
|
Jitesh Jain
|
Jitesh Jain, Anukriti Singh, Nikita Orlov, Zilong Huang, Jiachen Li,
Steven Walton, Humphrey Shi
|
SeMask: Semantically Masked Transformers for Semantic Segmentation
|
Updated experiments with Mix-Transformer (MiT) on ADE20K and added an
analysis section
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finetuning a pretrained backbone in the encoder part of an image transformer
network has been the traditional approach for the semantic segmentation task.
However, such an approach leaves out the semantic context that an image
provides during the encoding stage. This paper argues that incorporating
semantic information of the image into pretrained hierarchical
transformer-based backbones while finetuning improves the performance
considerably. To achieve this, we propose SeMask, a simple and effective
framework that incorporates semantic information into the encoder with the help
of a semantic attention operation. In addition, we use a lightweight semantic
decoder during training to provide supervision to the intermediate semantic
prior maps at every stage. Our experiments demonstrate that incorporating
semantic priors enhances the performance of the established hierarchical
encoders with a slight increase in the number of FLOPs. We provide empirical
proof by integrating SeMask into Swin Transformer and Mix Transformer backbones
as our encoder paired with different decoders. Our framework achieves a new
state-of-the-art of 58.25% mIoU on the ADE20K dataset and improvements of over
3% in the mIoU metric on the Cityscapes dataset. The code and checkpoints are
publicly available at
https://github.com/Picsart-AI-Research/SeMask-Segmentation .
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 18:56:02 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 13:58:53 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Apr 2022 09:30:58 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Jain",
"Jitesh",
""
],
[
"Singh",
"Anukriti",
""
],
[
"Orlov",
"Nikita",
""
],
[
"Huang",
"Zilong",
""
],
[
"Li",
"Jiachen",
""
],
[
"Walton",
"Steven",
""
],
[
"Shi",
"Humphrey",
""
]
] |
new_dataset
| 0.997669 |
2203.10752
|
Vera Axelrod
|
Alexis Conneau, Ankur Bapna, Yu Zhang, Min Ma, Patrick von Platen,
Anton Lozhkov, Colin Cherry, Ye Jia, Clara Rivera, Mihir Kale, Daan Van Esch,
Vera Axelrod, Simran Khanuja, Jonathan H. Clark, Orhan Firat, Michael Auli,
Sebastian Ruder, Jason Riesa, Melvin Johnson
|
XTREME-S: Evaluating Cross-lingual Speech Representations
|
Minor fix: language code for Filipino (Tagalog), "tg" -> "tl"
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce XTREME-S, a new benchmark to evaluate universal cross-lingual
speech representations in many languages. XTREME-S covers four task families:
speech recognition, classification, speech-to-text translation and retrieval.
Covering 102 languages from 10+ language families, 3 different domains and 4
task families, XTREME-S aims to simplify multilingual speech representation
evaluation, as well as catalyze research in "universal" speech representation
learning. This paper describes the new benchmark and establishes the first
speech-only and speech-text baselines using XLS-R and mSLAM on all downstream
tasks. We motivate the design choices and detail how to use the benchmark.
Datasets and fine-tuning scripts are made easily accessible at
https://hf.co/datasets/google/xtreme_s.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 06:50:21 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2022 10:10:19 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Apr 2022 06:28:30 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Conneau",
"Alexis",
""
],
[
"Bapna",
"Ankur",
""
],
[
"Zhang",
"Yu",
""
],
[
"Ma",
"Min",
""
],
[
"von Platen",
"Patrick",
""
],
[
"Lozhkov",
"Anton",
""
],
[
"Cherry",
"Colin",
""
],
[
"Jia",
"Ye",
""
],
[
"Rivera",
"Clara",
""
],
[
"Kale",
"Mihir",
""
],
[
"Van Esch",
"Daan",
""
],
[
"Axelrod",
"Vera",
""
],
[
"Khanuja",
"Simran",
""
],
[
"Clark",
"Jonathan H.",
""
],
[
"Firat",
"Orhan",
""
],
[
"Auli",
"Michael",
""
],
[
"Ruder",
"Sebastian",
""
],
[
"Riesa",
"Jason",
""
],
[
"Johnson",
"Melvin",
""
]
] |
new_dataset
| 0.999671 |
2204.05212
|
Alicia Parrish
|
Alicia Parrish and Harsh Trivedi and Ethan Perez and Angelica Chen and
Nikita Nangia and Jason Phang and Samuel R. Bowman
|
Single-Turn Debate Does Not Help Humans Answer Hard
Reading-Comprehension Questions
|
Accepted to the 2022 ACL Workshop on Learning with Natural Language
Supervision. 12 pages total, 9 figures, 2 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Current QA systems can generate reasonable-sounding yet false answers without
explanation or evidence for the generated answer, which is especially
problematic when humans cannot readily check the model's answers. This presents
a challenge for building trust in machine learning systems. We take inspiration
from real-world situations where difficult questions are answered by
considering opposing sides (see Irving et al., 2018). For multiple-choice QA
examples, we build a dataset of single arguments for both a correct and
incorrect answer option in a debate-style set-up as an initial step in training
models to produce explanations for two candidate answers. We use long contexts
-- humans familiar with the context write convincing explanations for
pre-selected correct and incorrect answers, and we test if those explanations
allow humans who have not read the full context to more accurately determine
the correct answer. We do not find that explanations in our set-up improve
human accuracy, but a baseline condition shows that providing human-selected
text snippets does improve accuracy. We use these findings to suggest ways of
improving the debate set-up for future data collection efforts.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 15:56:34 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Apr 2022 13:46:13 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Parrish",
"Alicia",
""
],
[
"Trivedi",
"Harsh",
""
],
[
"Perez",
"Ethan",
""
],
[
"Chen",
"Angelica",
""
],
[
"Nangia",
"Nikita",
""
],
[
"Phang",
"Jason",
""
],
[
"Bowman",
"Samuel R.",
""
]
] |
new_dataset
| 0.965628 |
2204.06029
|
Raviraj Joshi
|
Parth Patil, Aparna Ranade, Maithili Sabane, Onkar Litake, Raviraj
Joshi
|
L3Cube-MahaNER: A Marathi Named Entity Recognition Dataset and BERT
models
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Named Entity Recognition (NER) is a basic NLP task and finds major
applications in conversational and search systems. It helps us identify key
entities in a sentence used for the downstream application. NER or similar slot
filling systems for popular languages have been heavily used in commercial
applications. In this work, we focus on Marathi, an Indian language, spoken
prominently by the people of Maharashtra state. Marathi is a low resource
language and still lacks useful NER resources. We present L3Cube-MahaNER, the
first major gold standard named entity recognition dataset in Marathi. We also
describe the manual annotation guidelines followed during the process. In the
end, we benchmark the dataset on different CNN, LSTM, and Transformer based
models like mBERT, XLM-RoBERTa, IndicBERT, MahaBERT, etc. The MahaBERT provides
the best performance among all the models. The data and models are available at
https://github.com/l3cube-pune/MarathiNLP .
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 18:32:15 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Patil",
"Parth",
""
],
[
"Ranade",
"Aparna",
""
],
[
"Sabane",
"Maithili",
""
],
[
"Litake",
"Onkar",
""
],
[
"Joshi",
"Raviraj",
""
]
] |
new_dataset
| 0.999801 |
2204.06105
|
Madeleine Grunde-McLaughlin
|
Madeleine Grunde-McLaughlin, Ranjay Krishna, Maneesh Agrawala
|
AGQA 2.0: An Updated Benchmark for Compositional Spatio-Temporal
Reasoning
|
7 pages, 2 figures, 7 tables, update to AGQA arXiv:2103.16002
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior benchmarks have analyzed models' answers to questions about videos in
order to measure visual compositional reasoning. Action Genome Question
Answering (AGQA) is one such benchmark. AGQA provides a training/test split
with balanced answer distributions to reduce the effect of linguistic biases.
However, some biases remain in several AGQA categories. We introduce AGQA 2.0,
a version of this benchmark with several improvements, most notably a stricter
balancing procedure. We then report results on the updated benchmark for all
experiments.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 22:30:12 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Grunde-McLaughlin",
"Madeleine",
""
],
[
"Krishna",
"Ranjay",
""
],
[
"Agrawala",
"Maneesh",
""
]
] |
new_dataset
| 0.98857 |
2204.06114
|
Mohammed Fouda Dr.
|
Mariam Rakka, Mohammed E. Fouda, Rouwaida Kanj, and Fadi Kurdahi
|
DT2CAM: A Decision Tree to Content Addressable Memory Framework
| null | null | null | null |
cs.AR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decision trees are considered one of the most powerful tools for data
classification. Accelerating the decision tree search is crucial for
on-the-edge applications that have limited power and latency budget. In this
paper, we propose a Content Addressable Memory (CAM) Compiler for Decision Tree
(DT) inference acceleration. We propose a novel "adaptive-precision" scheme
that results in a compact implementation and enables an efficient bijective
mapping to Ternary Content Addressable Memories while maintaining high
inference accuracies. In addition, a Resistive-CAM (ReCAM) functional
synthesizer is developed for mapping the decision tree to the ReCAM and
performing functional simulations for energy, latency, and accuracy
evaluations. We study the decision tree accuracy under hardware non-idealities
including device defects, manufacturing variability, and input encoding noise.
We test our framework on various DT datasets including \textit{Give Me Some
Credit}, \textit{Titanic}, and \textit{COVID-19}. Our results reveal up to
{42.4\%} energy savings and up to 17.8x better energy-delay-area product
compared to the state-of-art hardware accelerators, and up to 333 million
decisions per sec for the pipelined implementation.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 23:16:46 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Rakka",
"Mariam",
""
],
[
"Fouda",
"Mohammed E.",
""
],
[
"Kanj",
"Rouwaida",
""
],
[
"Kurdahi",
"Fadi",
""
]
] |
new_dataset
| 0.987753 |
2204.06134
|
Chunxu Tang
|
Chunxu Tang, Beinan Wang, C.Y. Roger Chen, Huijun Wu
|
CWcollab: A Context-Aware Web-Based Collaborative Multimedia System
| null |
In ICC 2021-IEEE International Conference on Communications (pp.
1-6). IEEE (2021, June)
|
10.1109/ICC42927.2021.9500377
| null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remote collaboration tools for conferencing and presentation are gaining
significant popularity during the COVID-19 pandemic period. Most prior work has
issues, such as a) limited support for media types, b) lack of interactivity,
for example the absence of an efficient replay mechanism, and c) large bandwidth consumption for
screen sharing tools. In this paper, we propose a general-purpose multimedia
collaboration platform-CWcollab. It supports collaboration on general
multimedia by using simple messages to represent media controls with an
object-prioritized synchronization approach. Thus, CWcollab can not only
support fine-grained accurate collaboration, but also rich functionalities such
as replay of these collaboration events. The evaluation shows that hundreds of
kilobytes can be enough to store the events of a collaboration session for
accurate replays, compared with the hundreds of megabytes required by Google Hangouts.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 01:51:04 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Tang",
"Chunxu",
""
],
[
"Wang",
"Beinan",
""
],
[
"Chen",
"C. Y. Roger",
""
],
[
"Wu",
"Huijun",
""
]
] |
new_dataset
| 0.984544 |
2204.06145
|
Ziqing Yang
|
Zheng Chu, Ziqing Yang, Yiming Cui, Zhigang Chen, Ming Liu
|
HIT at SemEval-2022 Task 2: Pre-trained Language Model for Idioms
Detection
|
6 pages; SemEval-2022 Task 2
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The same multi-word expressions may have different meanings in different
sentences. These meanings can be mainly divided into two categories: literal
and idiomatic. Non-contextual methods perform poorly on
this problem, and we need contextual embeddings to understand the idiomatic
meaning of multi-word expressions correctly. We use a pre-trained language
model, which can provide a context-aware sentence embedding, to detect whether
a multi-word expression in a sentence is used idiomatically.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 02:45:04 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Chu",
"Zheng",
""
],
[
"Yang",
"Ziqing",
""
],
[
"Cui",
"Yiming",
""
],
[
"Chen",
"Zhigang",
""
],
[
"Liu",
"Ming",
""
]
] |
new_dataset
| 0.998222 |
2204.06192
|
Zhe Zhang
|
Luyi Chang, Zhe Zhang, Pei Li, Shan Xi, Wei Guo, Yukang Shen, Zehui
Xiong, Jiawen Kang, Dusit Niyato, Xiuquan Qiao, Yi Wu
|
6G-enabled Edge AI for Metaverse: Challenges, Methods, and Future
Research Directions
|
16 pages
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
6G-enabled edge intelligence opens up a new era of Internet of Everything and
makes it possible to interconnect people-devices-cloud anytime, anywhere. More
and more next-generation wireless network smart service applications are
changing our way of life and improving our quality of life. As the hottest new
form of next-generation Internet applications, Metaverse is striving to connect
billions of users and create a shared world where the virtual and the real merge.
However, limited by resources, computing power, and sensory devices, Metaverse
is still far from realizing its full vision of immersion, materialization, and
interoperability. To this end, this survey aims to realize this vision through
the organic integration of 6G-enabled edge AI and Metaverse. Specifically, we
first introduce three new types of edge-Metaverse architectures that use
6G-enabled edge AI to solve resource and computing constraints in Metaverse.
Then we summarize technical challenges that these architectures face in
Metaverse and the existing solutions. Furthermore, we explore how the
edge-Metaverse architecture technology helps Metaverse to interact and share
digital data. Finally, we discuss future research directions to realize the
true vision of Metaverse with 6G-enabled edge AI.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 06:40:47 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Chang",
"Luyi",
""
],
[
"Zhang",
"Zhe",
""
],
[
"Li",
"Pei",
""
],
[
"Xi",
"Shan",
""
],
[
"Guo",
"Wei",
""
],
[
"Shen",
"Yukang",
""
],
[
"Xiong",
"Zehui",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Qiao",
"Xiuquan",
""
],
[
"Wu",
"Yi",
""
]
] |
new_dataset
| 0.983225 |
2204.06248
|
Hans-Peter Deifel
|
Fabian Birkmann, Hans-Peter Deifel, Stefan Milius
|
Distributed Coalgebraic Partition Refinement
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Partition refinement is a method for minimizing automata and transition
systems of various types. Recently, a new partition refinement algorithm and
associated tool CoPaR were developed that are generic in the transition type of
the input system and match the theoretical run time of the best known
algorithms for many concrete system types. Genericity is achieved by modelling
transition types as functors on sets and systems as coalgebras. Experimentation
has shown that memory consumption is a bottleneck for handling systems with a
large state space, while running times are fast. We have therefore extended an
algorithm due to Blom and Orzan, which is suitable for a distributed
implementation to the coalgebraic level of genericity, and implemented it in
CoPaR. Experiments show that this allows to handle much larger state spaces.
Running times are low in most experiments, but there is a significant penalty
for some.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 08:38:17 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Birkmann",
"Fabian",
""
],
[
"Deifel",
"Hans-Peter",
""
],
[
"Milius",
"Stefan",
""
]
] |
new_dataset
| 0.989738 |
2204.06256
|
Johannes de Fine Licht
|
Johannes de Fine Licht, Christopher A. Pattison, Alexandros Nikolaos
Ziogas, David Simmons-Duffin, Torsten Hoefler
|
Fast Arbitrary Precision Floating Point on FPGA
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerical codes that require arbitrary precision floating point (APFP)
numbers for their core computation are dominated by elementary arithmetic
operations due to the super-linear complexity of multiplication in the number
of mantissa bits. APFP computations on conventional software-based
architectures are made exceedingly expensive by the lack of native hardware
support, requiring elementary operations to be emulated using instructions
operating on machine-word-sized blocks. In this work, we show how APFP
multiplication on compile-time fixed-precision operands can be implemented as
deep FPGA pipelines with a recursively defined Karatsuba decomposition on top
of native DSP multiplication. When comparing our design implemented on an Alveo
U250 accelerator to a dual-socket 36-core Xeon node running the GNU Multiple
Precision Floating-Point Reliable (MPFR) library, we achieve a 9.8x speedup at
4.8 GOp/s for 512-bit multiplication, and a 5.3x speedup at 1.2 GOp/s for
1024-bit multiplication, corresponding to the throughput of more than 351x and
191x CPU cores, respectively. We apply this architecture to general
matrix-matrix multiplication, yielding a 10x speedup at 2.0 GOp/s over the Xeon
node, equivalent to more than 375x CPU cores, effectively allowing a single
FPGA to replace a small CPU cluster. Due to the significant dependence of some
numerical codes on APFP, such as semidefinite program solvers, we expect these
gains to translate into real-world speedups. Our configurable and flexible
HLS-based code provides a high-level software interface for plug-and-play
acceleration, published as an open source project.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 08:59:11 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Licht",
"Johannes de Fine",
""
],
[
"Pattison",
"Christopher A.",
""
],
[
"Ziogas",
"Alexandros Nikolaos",
""
],
[
"Simmons-Duffin",
"David",
""
],
[
"Hoefler",
"Torsten",
""
]
] |
new_dataset
| 0.987055 |
2204.06272
|
Jiahui Fu
|
Junyu Luo, Jiahui Fu, Xianghao Kong, Chen Gao, Haibing Ren, Hao Shen,
Huaxia Xia, Si Liu
|
3D-SPS: Single-Stage 3D Visual Grounding via Referred Point Progressive
Selection
|
CVPR 2022, Oral
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D visual grounding aims to locate the referred target object in 3D point
cloud scenes according to a free-form language description. Previous methods
mostly follow a two-stage paradigm, i.e., language-irrelevant detection and
cross-modal matching, which is limited by the isolated architecture. In such a
paradigm, the detector needs to sample keypoints from raw point clouds due to
the inherent properties of 3D point clouds (irregular and large-scale), to
generate the corresponding object proposal for each keypoint. However, sparse
proposals may leave out the target in detection, while dense proposals may
confuse the matching model. Moreover, the language-irrelevant detection stage
can only sample a small proportion of keypoints on the target, deteriorating
the target prediction. In this paper, we propose a 3D Single-Stage Referred
Point Progressive Selection (3D-SPS) method, which progressively selects
keypoints with the guidance of language and directly locates the target.
Specifically, we propose a Description-aware Keypoint Sampling (DKS) module to
coarsely focus on the points of language-relevant objects, which are
significant clues for grounding. Besides, we devise a Target-oriented
Progressive Mining (TPM) module to finely concentrate on the points of the
target, which is enabled by progressive intra-modal relation modeling and
inter-modal target mining. 3D-SPS bridges the gap between detection and
matching in the 3D visual grounding task, localizing the target at a single
stage. Experiments demonstrate that 3D-SPS achieves state-of-the-art
performance on both ScanRefer and Nr3D/Sr3D datasets.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 09:46:27 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Luo",
"Junyu",
""
],
[
"Fu",
"Jiahui",
""
],
[
"Kong",
"Xianghao",
""
],
[
"Gao",
"Chen",
""
],
[
"Ren",
"Haibing",
""
],
[
"Shen",
"Hao",
""
],
[
"Xia",
"Huaxia",
""
],
[
"Liu",
"Si",
""
]
] |
new_dataset
| 0.987077 |
2204.06288
|
Robert Lupoiu
|
Robert Lupoiu, Samuel S. H. Ng, Jonathan A. Fan, Konrad Walus
|
Automated Atomic Silicon Quantum Dot Circuit Design via Deep
Reinforcement Learning
|
7 pages, 3 figures
| null | null | null |
cs.ET cond-mat.mes-hall
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust automated design tools are crucial for the proliferation of any
computing technology. We introduce the first automated design tool for the
silicon dangling bond quantum dot computing technology, which is an extremely
versatile and flexible single-atom computing circuitry framework. The automated
designer is capable of navigating the complex, hyperdimensional design spaces
of arbitrarily sized design areas and truth tables by employing a tabula rasa
double-deep Q-learning reinforcement learning algorithm. Robust policy
convergence is demonstrated for a wide range of two-input, one-output logic
circuits and a two-input, two-output half-adder, designed with an order of
magnitude fewer SiDBs in several orders of magnitude less time than the only
other half-adder demonstrated in the literature. We anticipate that
reinforcement learning-based automated design tools will accelerate the
development of the SiDB quantum dot computing technology, leading to its
eventual adoption in specialized computing hardware.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 10:34:44 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Lupoiu",
"Robert",
""
],
[
"Ng",
"Samuel S. H.",
""
],
[
"Fan",
"Jonathan A.",
""
],
[
"Walus",
"Konrad",
""
]
] |
new_dataset
| 0.998127 |
2204.06299
|
Sherzod Hakimov
|
Sherzod Hakimov and Gullal S. Cheema and Ralph Ewerth
|
TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the
Detection and Classification of Misogynous Memes
|
Accepted for publication at SemEval-2022 Workshop, Task 5: MAMI -
Multimedia Automatic Misogyny Identification co-located with NAACL 2022
| null | null | null |
cs.CL cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The detection of offensive, hateful content on social media is a challenging
problem that affects many online users on a daily basis. Hateful content is
often used to target a group of people based on ethnicity, gender, religion and
other factors. Hate or contempt toward women has been increasing on social
platforms. Misogynous content detection is especially challenging when textual
and visual modalities are combined to form a single context, e.g., an overlay
text embedded on top of an image, also known as meme. In this paper, we present
a multimodal architecture that combines textual and visual features in order to
detect misogynous meme content. The proposed architecture is evaluated in the
SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification
challenge under the team name TIB-VA. Our solution obtained the best result in
Task-B, where the challenge is to classify whether a given document is
misogynous and further identify the main sub-classes of shaming, stereotype,
objectification, and violence.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 11:03:21 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Hakimov",
"Sherzod",
""
],
[
"Cheema",
"Gullal S.",
""
],
[
"Ewerth",
"Ralph",
""
]
] |
new_dataset
| 0.999337 |
2204.06309
|
Alexander Blatt
|
Alexander Blatt, Martin Kocour, Karel Vesel\'y, Igor Sz\"oke, Dietrich
Klakow
|
Call-sign recognition and understanding for noisy air-traffic
transcripts using surveillance information
|
Accepted by ICASSP 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Air traffic control (ATC) relies on communication via speech between pilot
and air-traffic controller (ATCO). The call-sign, as unique identifier for each
flight, is used to address a specific pilot by the ATCO. Extracting the
call-sign from the communication is a challenge because of the noisy ATC voice
channel and the additional noise introduced by the receiver. A low
signal-to-noise ratio (SNR) in the speech leads to high word error rate (WER)
transcripts. We propose a new call-sign recognition and understanding (CRU)
system that addresses this issue. The recognizer is trained to identify
call-signs in noisy ATC transcripts and convert them into the standard
International Civil Aviation Organization (ICAO) format. By incorporating
surveillance information, we can multiply the call-sign accuracy (CSA) up to a
factor of four. The introduced data augmentation further improves performance on
high WER transcripts and allows the adaptation of the model to unseen
airspaces.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 11:30:42 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Blatt",
"Alexander",
""
],
[
"Kocour",
"Martin",
""
],
[
"Veselý",
"Karel",
""
],
[
"Szöke",
"Igor",
""
],
[
"Klakow",
"Dietrich",
""
]
] |
new_dataset
| 0.996484 |
2204.06347
|
Xuwu Wang
|
Xuwu Wang, Junfeng Tian, Min Gui, Zhixu Li, Rui Wang, Ming Yan, Lihan
Chen, Yanghua Xiao
|
WikiDiverse: A Multimodal Entity Linking Dataset with Diversified
Contextual Topics and Entity Types
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal Entity Linking (MEL) which aims at linking mentions with
multimodal contexts to the referent entities from a knowledge base (e.g.,
Wikipedia), is an essential task for many multimodal applications. Although
much attention has been paid to MEL, the shortcomings of existing MEL datasets
including limited contextual topics and entity types, simplified mention
ambiguity, and restricted availability, have caused great obstacles to the
research and application of MEL. In this paper, we present WikiDiverse, a
high-quality human-annotated MEL dataset with diversified contextual topics and
entity types from Wikinews, which uses Wikipedia as the corresponding knowledge
base. A well-tailored annotation procedure is adopted to ensure the quality of
the dataset. Based on WikiDiverse, a sequence of well-designed MEL models with
intra-modality and inter-modality attentions are implemented, which utilize the
visual information of images more adequately than existing MEL models do.
Extensive experimental analyses are conducted to investigate the contributions
of different modalities in terms of MEL, facilitating the future research on
this task. The dataset and baseline models are available at
https://github.com/wangxw5/wikiDiverse.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 12:52:40 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Wang",
"Xuwu",
""
],
[
"Tian",
"Junfeng",
""
],
[
"Gui",
"Min",
""
],
[
"Li",
"Zhixu",
""
],
[
"Wang",
"Rui",
""
],
[
"Yan",
"Ming",
""
],
[
"Chen",
"Lihan",
""
],
[
"Xiao",
"Yanghua",
""
]
] |
new_dataset
| 0.999705 |
2204.06447
|
Michael Schlichtig
|
Michael Schlichtig, Anna-Katharina Wickert, Stefan Kr\"uger, Eric
Bodden, Mira Mezini
|
CamBench -- Cryptographic API Misuse Detection Tool Benchmark Suite
|
8 pages, accepted at the MSR 2022 Registered Reports Track as a
In-Principal Acceptance (IPA)
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Context: Cryptographic APIs are often misused in real-world applications.
Therefore, many cryptographic API misuse detection tools have been introduced.
However, there exists no established reference benchmark for a fair and
comprehensive comparison and evaluation of these tools. While there are
benchmarks, they often only address a subset of the domain or were only used to
evaluate a subset of existing misuse detection tools. Objective: To fairly
compare cryptographic API misuse detection tools and to drive future
development in this domain, we will devise such a benchmark. Openness and
transparency in the generation process are key factors to fairly generate and
establish the needed benchmark. Method: We propose an approach where we derive
the benchmark generation methodology from the literature which consists of
general best practices in benchmarking and domain-specific benchmark
generation. A part of this methodology is transparency and openness of the
generation process, which is achieved by pre-registering this work. Based on
our methodology we design CamBench, a fair "Cryptographic API Misuse Detection
Tool Benchmark Suite". We will implement the first version of CamBench limiting
the domain to Java, the JCA, and static analyses. Finally, we will use CamBench
to compare current misuse detection tools and compare CamBench to related
benchmarks of its domain.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 15:12:13 GMT"
}
] | 2022-04-14T00:00:00 |
[
[
"Schlichtig",
"Michael",
""
],
[
"Wickert",
"Anna-Katharina",
""
],
[
"Krüger",
"Stefan",
""
],
[
"Bodden",
"Eric",
""
],
[
"Mezini",
"Mira",
""
]
] |
new_dataset
| 0.993892 |
2008.11147
|
Thomas Zimmermann
|
Denae Ford and Margaret-Anne Storey and Thomas Zimmermann and
Christian Bird and Sonia Jaffe and Chandra Maddila and Jenna L. Butler and
Brian Houck and Nachiappan Nagappan
|
A Tale of Two Cities: Software Developers Working from Home During the
COVID-19 Pandemic
|
36 pages, 1 figure, 6 tables
|
ACM Transactions on Software Engineering and Methodology, Volume
31, Issue 2 (April 2022)
|
10.1145/3487567
| null |
cs.SE cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The COVID-19 pandemic has shaken the world to its core and has provoked an
overnight exodus of developers that normally worked in an office setting to
working from home. The magnitude of this shift and the factors that have
accompanied this new unplanned work setting go beyond what the software
engineering community has previously understood to be remote work. To find out
how developers and their productivity were affected, we distributed two surveys
(with a combined total of 3,634 responses that answered all required questions)
weeks apart, to understand the presence and prevalence of the benefits,
challenges, and opportunities to improve this special circumstance of remote
work. From our thematic qualitative analysis and statistical quantitative
analysis, we find that there is a dichotomy of developer experiences influenced
by many different factors (that for some are a benefit, while for others a
challenge). For example, being close to family members was a benefit for some,
while for others, having family members share their working space and interrupt
their focus was a challenge. Our surveys led to powerful narratives from
respondents and revealed the scale at which these experiences exist to provide
insights as to how the future of (pandemic) remote work can evolve.
|
[
{
"version": "v1",
"created": "Tue, 25 Aug 2020 16:27:21 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jul 2021 18:36:05 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Sep 2021 23:46:50 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Ford",
"Denae",
""
],
[
"Storey",
"Margaret-Anne",
""
],
[
"Zimmermann",
"Thomas",
""
],
[
"Bird",
"Christian",
""
],
[
"Jaffe",
"Sonia",
""
],
[
"Maddila",
"Chandra",
""
],
[
"Butler",
"Jenna L.",
""
],
[
"Houck",
"Brian",
""
],
[
"Nagappan",
"Nachiappan",
""
]
] |
new_dataset
| 0.959962 |
2012.10289
|
Binny Mathew
|
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan
Goyal, and Animesh Mukherjee
|
HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection
|
12 pages, 7 figures, 8 tables. Accepted at AAAI 2021
| null | null | null |
cs.CL cs.AI cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hate speech is a challenging issue plaguing online social media. While
better models for hate speech detection are continuously being developed, there
is little research on the bias and interpretability aspects of hate speech. In
this paper, we introduce HateXplain, the first benchmark hate speech dataset
covering multiple aspects of the issue. Each post in our dataset is annotated
from three different perspectives: the basic, commonly used 3-class
classification (i.e., hate, offensive or normal), the target community (i.e.,
the community that has been the victim of hate speech/offensive speech in the
post), and the rationales, i.e., the portions of the post on which their
labelling decision (as hate, offensive or normal) is based. We utilize existing
state-of-the-art models and observe that even models that perform very well in
classification do not score high on explainability metrics like model
plausibility and faithfulness. We also observe that models, which utilize the
human rationales for training, perform better in reducing unintended bias
towards target communities. We have made our code and dataset public at
https://github.com/punyajoy/HateXplain
|
[
{
"version": "v1",
"created": "Fri, 18 Dec 2020 15:12:14 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2022 13:26:33 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Mathew",
"Binny",
""
],
[
"Saha",
"Punyajoy",
""
],
[
"Yimam",
"Seid Muhie",
""
],
[
"Biemann",
"Chris",
""
],
[
"Goyal",
"Pawan",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.999816 |
2106.11490
|
Nithin Sugavanam
|
Nithin Sugavanam and Siddharth Baskar and Emre Ertin
|
High Resolution Radar Sensing with Compressive Illumination
|
arXiv admin note: text overlap with arXiv:1508.07969
| null |
10.1109/TSP.2022.3156731
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a compressive radar design that combines multitone linear
frequency modulated (LFM) waveforms in the transmitter with a classical stretch
processor and sub-Nyquist sampling in the receiver. The proposed compressive
illumination scheme has fewer random elements resulting in reduced storage and
complexity for implementation than previously proposed compressive radar
designs based on stochastic waveforms. We analyze this illumination scheme for
the task of a joint range-angle of arrival estimation in the multi-input and
multi-output (MIMO) radar system. We present recovery guarantees for the
proposed illumination technique. We show that for a sufficiently large number
of modulating tones, the system achieves high resolution in range and
successfully recovers the range and angle-of-arrival of targets in a sparse
scene. Furthermore, we present an algorithm that estimates the target range,
angle of arrival, and scattering coefficient in the continuum. Finally, we
present simulation results to illustrate the recovery performance as a function
of system parameters.
|
[
{
"version": "v1",
"created": "Tue, 22 Jun 2021 02:43:28 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Sugavanam",
"Nithin",
""
],
[
"Baskar",
"Siddharth",
""
],
[
"Ertin",
"Emre",
""
]
] |
new_dataset
| 0.954846 |
2109.10915
|
Francisco Villaescusa-Navarro
|
Francisco Villaescusa-Navarro, Shy Genel, Daniel Angles-Alcazar,
Leander Thiele, Romeel Dave, Desika Narayanan, Andrina Nicola, Yin Li, Pablo
Villanueva-Domingo, Benjamin Wandelt, David N. Spergel, Rachel S. Somerville,
Jose Manuel Zorrilla Matilla, Faizan G. Mohammad, Sultan Hassan, Helen Shao,
Digvijay Wadekar, Michael Eickenberg, Kaze W.K. Wong, Gabriella Contardo,
Yongseok Jo, Emily Moser, Erwin T. Lau, Luis Fernando Machado Poletti Valle,
Lucia A. Perez, Daisuke Nagai, Nicholas Battaglia, Mark Vogelsberger
|
The CAMELS Multifield Dataset: Learning the Universe's Fundamental
Parameters with Artificial Intelligence
|
17 pages, 1 figure. Third paper of a series of four. Hundreds of
thousands of labeled 2D maps and 3D grids from thousands of simulated
universes publicly available at
https://camels-multifield-dataset.readthedocs.io
| null |
10.3847/1538-4365/ac5ab0
| null |
cs.LG astro-ph.CO astro-ph.GA astro-ph.IM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Cosmology and Astrophysics with MachinE Learning Simulations
(CAMELS) Multifield Dataset, CMD, a collection of hundreds of thousands of 2D
maps and 3D grids containing many different properties of cosmic gas, dark
matter, and stars from 2,000 distinct simulated universes at several cosmic
times. The 2D maps and 3D grids represent cosmic regions that span $\sim$100
million light years and have been generated from thousands of state-of-the-art
hydrodynamic and gravity-only N-body simulations from the CAMELS project.
Designed to train machine learning models, CMD is the largest dataset of its
kind containing more than 70 Terabytes of data. In this paper we describe CMD
in detail and outline a few of its applications. We focus our attention on one
such task, parameter inference, formulating the problems we face as a challenge
to the community. We release all data and provide further technical details at
https://camels-multifield-dataset.readthedocs.io.
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 18:00:01 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Villaescusa-Navarro",
"Francisco",
""
],
[
"Genel",
"Shy",
""
],
[
"Angles-Alcazar",
"Daniel",
""
],
[
"Thiele",
"Leander",
""
],
[
"Dave",
"Romeel",
""
],
[
"Narayanan",
"Desika",
""
],
[
"Nicola",
"Andrina",
""
],
[
"Li",
"Yin",
""
],
[
"Villanueva-Domingo",
"Pablo",
""
],
[
"Wandelt",
"Benjamin",
""
],
[
"Spergel",
"David N.",
""
],
[
"Somerville",
"Rachel S.",
""
],
[
"Matilla",
"Jose Manuel Zorrilla",
""
],
[
"Mohammad",
"Faizan G.",
""
],
[
"Hassan",
"Sultan",
""
],
[
"Shao",
"Helen",
""
],
[
"Wadekar",
"Digvijay",
""
],
[
"Eickenberg",
"Michael",
""
],
[
"Wong",
"Kaze W. K.",
""
],
[
"Contardo",
"Gabriella",
""
],
[
"Jo",
"Yongseok",
""
],
[
"Moser",
"Emily",
""
],
[
"Lau",
"Erwin T.",
""
],
[
"Valle",
"Luis Fernando Machado Poletti",
""
],
[
"Perez",
"Lucia A.",
""
],
[
"Nagai",
"Daisuke",
""
],
[
"Battaglia",
"Nicholas",
""
],
[
"Vogelsberger",
"Mark",
""
]
] |
new_dataset
| 0.999841 |
2109.12818
|
Francesc Verdugo Phd
|
Francesc Verdugo and Santiago Badia
|
The software design of Gridap: a Finite Element package based on the
Julia JIT compiler
| null | null |
10.1016/j.cpc.2022.108341
| null |
cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the software design of Gridap, a novel finite element library
written exclusively in the Julia programming language, which is being used by
several research groups world-wide to simulate complex physical phenomena such
as magnetohydrodynamics, photonics, weather modeling, non-linear solid
mechanics, and fluid-structure interaction problems. The library provides a
feature-rich set of discretization techniques for the numerical approximation
of a wide range of PDEs, including linear, nonlinear, single-field, and
multi-field equations. An expressive API allows users to define PDEs in weak
form by a syntax close to the mathematical notation. While this is also
available in previous codes, the main novelty of Gridap is that it implements
this API without introducing a DSL plus a compiler of variational forms.
Instead, it leverages the Julia just-in-time compiler to build efficient code,
specialized for the concrete problem at hand. As a result, there is no need to
use different languages for the computational back-end and the user front-end
anymore, thus eliminating the so-called two-language problem. Gridap also
provides a low-level API that is modular and extensible via the
multiple-dispatch paradigm of Julia and provides easy access to the main
building blocks of the library. The main contribution of this paper is the
detailed presentation of the novel software abstractions behind the Gridap
design that leverages the new software possibilities provided by the Julia
language. The second main contribution of the article is a performance
comparison against FEniCS. We measure CPU times needed to assemble discrete
systems of linear equations for different problem types and show that the
performance of Gridap is comparable to FEniCS, demonstrating that the new
software design does not compromise performance. Gridap is freely available at
Github and distributed under an MIT license.
|
[
{
"version": "v1",
"created": "Mon, 27 Sep 2021 06:27:37 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Verdugo",
"Francesc",
""
],
[
"Badia",
"Santiago",
""
]
] |
new_dataset
| 0.994986 |
2112.00131
|
Venkata Devesh Reddy Seethi
|
Mrinmoy Roy, Venkata Devesh Reddy Seethi, Rami Lake, Pratool Bharti
|
CovidAlert -- A Wristwatch-based System to Alert Users from Face
Touching
|
17 pages, 9 figures, PervasiveHealth2021 conference
| null |
10.1007/978-3-030-99194-4_30
| null |
cs.LG cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Worldwide, 219 million people have been infected and 4.5 million have lost
their lives in the ongoing Covid-19 pandemic. Until vaccines became widely
available, precautions and safety measures like wearing masks, physical
distancing, and avoiding face touching were some of the primary means to curb
the spread of the virus. Face touching is a compulsive human behavior that cannot be
prevented without making a continuous conscious effort; even then, it is
inevitable. To address this problem, we have designed a smartwatch-based
solution, CovidAlert, that leverages a Random Forest algorithm trained on
accelerometer and gyroscope data from the smartwatch to detect hand transitions
to the face and sends a quick haptic alert to the users. CovidAlert is highly
energy efficient as it employs the STA/LTA algorithm as a gatekeeper to curtail
the usage of the Random Forest model on the watch when the user is inactive. The overall
accuracy of our system is 88.4% with low false negatives and false positives.
We also demonstrated the system viability by implementing it on a commercial
Fossil Gen 5 smartwatch.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 21:58:50 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2022 02:48:35 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Roy",
"Mrinmoy",
""
],
[
"Seethi",
"Venkata Devesh Reddy",
""
],
[
"Lake",
"Rami",
""
],
[
"Bharti",
"Pratool",
""
]
] |
new_dataset
| 0.995143 |
2201.06427
|
Jiayi Zhu
|
Jiayi Zhu and Qing Guo and Felix Juefei-Xu and Yihao Huang and Yang
Liu and Geguang Pu
|
Masked Faces with Faced Masks
|
8 pages
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern face recognition systems (FRS) still fall short when the subjects are
wearing facial masks, a common theme in the age of respiratory pandemics. An
intuitive partial remedy is to add a mask detector to flag any masked faces so
that the FRS can act accordingly for those low-confidence masked faces. In this
work, we set out to investigate the potential vulnerability of such FRS
equipped with a mask detector, on large-scale masked faces, which might trigger
a serious risk, e.g., letting a suspect evade the FRS where both facial
identity and mask are undetected. As existing face recognizers and mask
detectors have high performance in their respective tasks, it is significantly
challenging to simultaneously fool them and preserve the transferability of the
attack. We formulate the new task as the generation of realistic &
adversarial-faced mask and make three main contributions: First, we study the
naive Delaunay-based masking method (DM) to simulate the process of wearing a
faced mask that is cropped from a template image, which reveals the main
challenges of this new task. Second, we further equip the DM with the
adversarial noise attack and propose the adversarial noise Delaunay-based
masking method (AdvNoise-DM) that can fool the face recognition and mask
detection effectively but make the face less natural. Third, we propose the
adversarial filtering Delaunay-based masking method denoted as MF2M by
employing the adversarial filtering for AdvNoise-DM and obtain more natural
faces. With the above efforts, the final version not only leads to significant
performance deterioration of the state-of-the-art (SOTA) deep learning-based
FRS, but also remains undetected by the SOTA facial mask detector, thus
successfully fooling both systems at the same time.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 14:37:33 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2022 14:40:12 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Zhu",
"Jiayi",
""
],
[
"Guo",
"Qing",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Huang",
"Yihao",
""
],
[
"Liu",
"Yang",
""
],
[
"Pu",
"Geguang",
""
]
] |
new_dataset
| 0.998334 |
2202.01594
|
Stavros Konstantinidis
|
Stavros Konstantinidis (1), Mitja Mastnak (1), Nelma Moreira (2),
Rog\'erio Reis (2) ((1) Saint Mary's University Halifax Canada, (2)
University of Porto Portugal)
|
Approximate NFA Universality and Related Problems Motivated by
Information Theory
|
23 pages
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In coding and information theory, it is desirable to construct maximal codes
that can be either variable length codes or error control codes of fixed
length. However, deciding code maximality boils down to deciding whether a given
NFA is universal, and this is a hard problem (including the case of whether the
NFA accepts all words of a fixed length). On the other hand, it is acceptable
to know whether a code is `approximately' maximal, which then boils down to
whether a given NFA is `approximately' universal. Here we introduce the notion
of a $(1-\epsilon)$-universal automaton and present polynomial randomized
approximation algorithms to test NFA universality and related hard automata
problems, for certain natural probability distributions on the set of words. We
also conclude that the randomization aspect is necessary, as approximate
universality remains hard for any fixed polynomially computable $\epsilon$.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 14:01:27 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2022 19:52:51 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Konstantinidis",
"Stavros",
""
],
[
"Mastnak",
"Mitja",
""
],
[
"Moreira",
"Nelma",
""
],
[
"Reis",
"Rogério",
""
]
] |
new_dataset
| 0.999254 |
2203.05703
|
Kai Zhao
|
Kai Zhao, Lei Shen, Yingyi Zhang, Chuhan Zhou, Tao Wang, Ruixin Zhang,
Shouhong Ding, Wei Jia and Wei Shen
|
Geometric Synthesis: A Free lunch for Large-scale Palmprint Recognition
Model Pretraining
|
Codes are available at http://kaizhao.net/palmprint
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Palmprints are private and stable information for biometric recognition. In
the deep learning era, the development of palmprint recognition is limited by
the lack of sufficient training data. In this paper, by observing that palmar
creases are the key information to deep-learning-based palmprint recognition,
we propose to synthesize training data by manipulating palmar creases.
Concretely, we introduce an intuitive geometric model which represents palmar
creases with parameterized B\'ezier curves. By randomly sampling B\'ezier
parameters, we can synthesize massive training samples of diverse identities,
which enables us to pretrain large-scale palmprint recognition models.
Experimental results demonstrate that such synthetically pretrained models have
a very strong generalization ability: they can be efficiently transferred to
real datasets, leading to significant performance improvements on palmprint
recognition. For example, under the open-set protocol, our method improves the
strong ArcFace baseline by more than 10\% in terms of TAR@1e-6. And under the
closed-set protocol, our method reduces the equal error rate (EER) by an order
of magnitude.
|
[
{
"version": "v1",
"created": "Fri, 11 Mar 2022 01:20:22 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2022 21:04:09 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Zhao",
"Kai",
""
],
[
"Shen",
"Lei",
""
],
[
"Zhang",
"Yingyi",
""
],
[
"Zhou",
"Chuhan",
""
],
[
"Wang",
"Tao",
""
],
[
"Zhang",
"Ruixin",
""
],
[
"Ding",
"Shouhong",
""
],
[
"Jia",
"Wei",
""
],
[
"Shen",
"Wei",
""
]
] |
new_dataset
| 0.995695 |
2203.07588
|
Mohammadali Mohammadi
|
Mohammadali Mohammadi and Hien Quoc Ngo and Michail Matthaiou
|
When Cell-Free Massive MIMO Meets OTFS Modulation: The Downlink Case
|
6 pages, 2 figures, accepted by IEEE ICC 2022. arXiv admin note:
substantial text overlap with arXiv:2112.10869
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We provide a performance evaluation of orthogonal time frequency space (OTFS)
modulation in cell-free massive MIMO (multiple-input multiple-output) systems.
By leveraging the inherent sparsity of the delay-Doppler (DD) representation of
time-varying channels, we apply the embedded pilot-aided channel estimation
method with reduced guard intervals and derive the minimum mean-square error
estimate of the channel gains from received uplink pilots at the access points
(APs). Each AP applies conjugate beamforming to transmit data to the users. We
derive a closed-form expression for the individual user downlink throughput as
a function of the numbers of APs, users and DD channel estimate parameters. We
compare the OTFS performance with that of orthogonal frequency division
multiplexing (OFDM) under high-mobility conditions. Our findings reveal that with
uncorrelated shadowing, cell-free massive MIMO with OTFS modulation achieves up
to 35% gain in 95%-likely per-user throughput, compared with the OFDM
counterpart. Finally, the increase in the per user throughput is more
pronounced at the median rates over the correlated shadowing scenarios.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 01:26:49 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Mohammadi",
"Mohammadali",
""
],
[
"Ngo",
"Hien Quoc",
""
],
[
"Matthaiou",
"Michail",
""
]
] |
new_dataset
| 0.97494 |
2203.13733
|
Seyed Kamyar Seyed Ghasemipour
|
Seyed Kamyar Seyed Ghasemipour, Daniel Freeman, Byron David, Shixiang
Shane Gu, Satoshi Kataoka, Igor Mordatch
|
Blocks Assemble! Learning to Assemble with Large-Scale Structured
Reinforcement Learning
|
Accompanying project webpage can be found at:
https://sites.google.com/view/learning-direct-assembly
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assembly of multi-part physical structures is both a valuable end product for
autonomous robotics, as well as a valuable diagnostic task for open-ended
training of embodied intelligent agents. We introduce a naturalistic
physics-based environment with a set of connectable magnet blocks inspired by
children's toy kits. The objective is to assemble blocks into a succession of
target blueprints. Despite the simplicity of this objective, the compositional
nature of building diverse blueprints from a set of blocks leads to an
explosion of complexity in structures that agents encounter. Furthermore,
assembly stresses agents' multi-step planning, physical reasoning, and bimanual
coordination. We find that the combination of large-scale reinforcement
learning and graph-based policies -- surprisingly without any additional
complexity -- is an effective recipe for training agents that not only
generalize to complex unseen blueprints in a zero-shot manner, but even operate
in a reset-free setting without being trained to do so. Through extensive
experiments, we highlight the importance of large-scale training, structured
representations, contributions of multi-task vs. single-task learning, as well
as the effects of curriculums, and discuss qualitative behaviors of trained
agents.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 18:21:02 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2022 16:30:18 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Ghasemipour",
"Seyed Kamyar Seyed",
""
],
[
"Freeman",
"Daniel",
""
],
[
"David",
"Byron",
""
],
[
"Gu",
"Shixiang Shane",
""
],
[
"Kataoka",
"Satoshi",
""
],
[
"Mordatch",
"Igor",
""
]
] |
new_dataset
| 0.973913 |
2204.05397
|
Lav Varshney
|
Xiou Ge, Richard T. Goodwin, Haizi Yu, Pablo Romero, Omar Abdelrahman,
Amruta Sudhalkar, Julius Kusuma, Ryan Cialdella, Nishant Garg, and Lav R.
Varshney
|
Accelerated Design and Deployment of Low-Carbon Concrete for Data
Centers
|
18 pages. arXiv admin note: text overlap with arXiv:1905.08222
| null | null | null |
cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concrete is the most widely used engineered material in the world with more
than 10 billion tons produced annually. Unfortunately, with that scale comes a
significant burden in terms of energy, water, and release of greenhouse gases
and other pollutants; indeed 8% of worldwide carbon emissions are attributed to
the production of cement, a key ingredient in concrete. As such, there is
interest in creating concrete formulas that minimize this environmental burden,
while satisfying engineering performance requirements including compressive
strength. Specifically for computing, concrete is a major ingredient in the
construction of data centers.
In this work, we use conditional variational autoencoders (CVAEs), a type of
semi-supervised generative artificial intelligence (AI) model, to discover
concrete formulas with desired properties. Our model is trained just using a
small open dataset from the UCI Machine Learning Repository joined with
environmental impact data from standard lifecycle analysis. Computational
predictions demonstrate that CVAEs can design concrete formulas with much lower
carbon requirements than existing formulations while meeting design
requirements. Next we report laboratory-based compressive strength experiments
for five AI-generated formulations, which demonstrate that the formulations
exceed design requirements. The resulting formulations were then used by Ozinga
Ready Mix -- a concrete supplier -- to generate field-ready concrete
formulations, based on local conditions and their expertise in concrete design.
Finally, we report on how these formulations were used in the construction of
buildings and structures in a Meta data center in DeKalb, IL, USA. Results from
field experiments as part of this real-world deployment corroborate the
efficacy of AI-generated low-carbon concrete mixes.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 20:40:13 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Ge",
"Xiou",
""
],
[
"Goodwin",
"Richard T.",
""
],
[
"Yu",
"Haizi",
""
],
[
"Romero",
"Pablo",
""
],
[
"Abdelrahman",
"Omar",
""
],
[
"Sudhalkar",
"Amruta",
""
],
[
"Kusuma",
"Julius",
""
],
[
"Cialdella",
"Ryan",
""
],
[
"Garg",
"Nishant",
""
],
[
"Varshney",
"Lav R.",
""
]
] |
new_dataset
| 0.97371 |
2204.05445
|
Dianwen Ng Mr
|
Dianwen Ng, Jin Hui Pang, Yang Xiao, Biao Tian, Qiang Fu, Eng Siong
Chng
|
Small Footprint Multi-channel ConvMixer for Keyword Spotting with
Centroid Based Awareness
|
submitted to INTERSPEECH 2022
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
It is critical for a keyword spotting model to have a small footprint as it
typically runs on-device with low computational resources. However, maintaining
the previous SOTA performance with reduced model size is challenging. In
addition, a far-field and noisy environment with multiple signals interference
aggravates the problem causing the accuracy to degrade significantly. In this
paper, we present a multi-channel ConvMixer for speech command recognitions.
The novel architecture introduces an additional audio channel mixing for
channel audio interaction in a multi-channel audio setting to achieve better
noise-robust features with more efficient computation. Besides, we propose a
centroid based awareness component to enhance the system by equipping it with
additional spatial geometry information in the latent feature projection space.
We evaluate our model using the new MISP challenge 2021 dataset. Our model
achieves significant improvement against the official baseline with a 55% gain
in the competition score (0.152) on raw microphone array input and a 63%
(0.126) boost upon front-end speech enhancement.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 23:41:25 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Ng",
"Dianwen",
""
],
[
"Pang",
"Jin Hui",
""
],
[
"Xiao",
"Yang",
""
],
[
"Tian",
"Biao",
""
],
[
"Fu",
"Qiang",
""
],
[
"Chng",
"Eng Siong",
""
]
] |
new_dataset
| 0.996519 |
2204.05471
|
Daisuke Kotani
|
Koudai Hatakeyama, Daisuke Kotani, Yasuo Okabe
|
Key Management Based on Ownership of Multiple Authenticators in Public
Key Authentication
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Public key authentication (PKA) has been deployed in various services to
provide stronger authentication to users. In PKA, a user manages private keys
on her devices called authenticators, and services bind the corresponding
public keys to her account. To protect private keys, a user uses authenticators
which never export private keys outside. On the other hand, a user regularly
uses multiple authenticators like PCs and smartphones. She replaces some of her
authenticators according to their lifecycle, such as purchasing new devices and
losing devices. It is a burden for a user to register, update and revoke public
keys in many services every time she registers new accounts with services and
replaces some of her authenticators. To ease the burden, we propose a mechanism
where users and services manage public keys based on the ownership of
authenticators and users can access services with PKA using any of their
authenticators. We introduce a key pair called an Ownership Verification Key
(OVK), which consists of the private key (OVSK) and the corresponding public
key (OVPK). All authenticators owned by a user derive the same OVSK from the
pre-shared secret called the seed. Services verify the ownership of the
authenticators using the corresponding OVPK to determine whether to bind the
requested public key to her account. To protect user privacy while maintaining
convenience, authenticators generate a different OVK for each service from the
seed independently. We demonstrate the feasibility through the Proof of Concept
implementation, show that our proposed mechanism achieves some security goals,
and discuss how the mechanism mitigates threats not completely handled.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 01:43:57 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Hatakeyama",
"Koudai",
""
],
[
"Kotani",
"Daisuke",
""
],
[
"Okabe",
"Yasuo",
""
]
] |
new_dataset
| 0.974175 |
2204.05475
|
Marc Demange
|
Marc Demange, Alessia Di Fonso, Gabriele Di Stefano, Pierpaolo
Vittorini
|
About the Infinite Windy Firebreak Location problem
|
18 pages
| null | null | null |
cs.DM cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
The severity of wildfires can be mitigated by adopting preventive measures like
the construction of firebreaks, which are strips of land from which the
vegetation is completely removed. In this paper, we model the problem of
wildfire containment as an optimization problem on infinite graphs called
Infinite Windy Firebreak Location. A land of unknown extension is modeled as an
infinite undirected graph in which the vertices correspond to areas subject to
fire and edges represent the propagation of fire from an area to another. The
construction of a firebreak on the territory is modeled as the removal of edges
in both directions between two vertices. The number of firebreaks that can be
installed depends on budget constraints. We assume that fire ignites in a
subset of vertices and propagates to the neighbours. The goal is to select a
subset of edges to remove in order to contain the fire and avoid burning an
infinite part of the graph. We prove that Infinite Windy Firebreak Location is
coNP-complete in restricted cases and we address some polynomial cases. We show
that Infinite Windy Firebreak Location polynomially reduces to Min Cut for
certain classes of graphs like infinite grid graphs and polyomino-grids.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 01:57:48 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Demange",
"Marc",
""
],
[
"Di Fonso",
"Alessia",
""
],
[
"Di Stefano",
"Gabriele",
""
],
[
"Vittorini",
"Pierpaolo",
""
]
] |
new_dataset
| 0.995857 |
2204.05503
|
Wenjun Chen
|
Wenjun Chen, Chunling Yang, Xin Yang
|
FSOINet: Feature-Space Optimization-Inspired Network for Image
Compressive Sensing
|
ICASSP2022 accepted
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, deep learning-based image compressive sensing (ICS) methods
have achieved brilliant success. Many optimization-inspired networks have been
proposed to bring the insights of optimization algorithms into the network
structure design and have achieved excellent reconstruction quality with low
computational complexity. But they keep the information flow in pixel space as
traditional algorithms by updating and transferring the image in pixel space,
which does not fully use the information in the image features. In this paper,
we propose the idea of achieving information flow phase by phase in feature
space and design a Feature-Space Optimization-Inspired Network (dubbed FSOINet)
to implement it by mapping both steps of the proximal gradient descent algorithm
from pixel space to feature space. Moreover, the sampling matrix is learned
end-to-end with other network parameters. Experiments show that the proposed
FSOINet outperforms the existing state-of-the-art methods by a large margin
both quantitatively and qualitatively. The source code is available on
https://github.com/cwjjun/FSOINet.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 03:30:22 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Chen",
"Wenjun",
""
],
[
"Yang",
"Chunling",
""
],
[
"Yang",
"Xin",
""
]
] |
new_dataset
| 0.998539 |
2204.05525
|
Zilong Huang
|
Wenqiang Zhang, Zilong Huang, Guozhong Luo, Tao Chen, Xinggang Wang,
Wenyu Liu, Gang Yu, Chunhua Shen
|
TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation
|
To Appear at CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Although vision transformers (ViTs) have achieved great success in computer
vision, the heavy computational cost hampers their applications to dense
prediction tasks such as semantic segmentation on mobile devices. In this
paper, we present a mobile-friendly architecture named \textbf{To}ken
\textbf{P}yramid Vision Trans\textbf{former} (\textbf{TopFormer}). The proposed
\textbf{TopFormer} takes Tokens from various scales as input to produce
scale-aware semantic features, which are then injected into the corresponding
tokens to augment the representation. Experimental results demonstrate that our
method significantly outperforms CNN- and ViT-based networks across several
semantic segmentation datasets and achieves a good trade-off between accuracy
and latency. On the ADE20K dataset, TopFormer achieves 5\% higher accuracy in
mIoU than MobileNetV3 with lower latency on an ARM-based mobile device.
Furthermore, the tiny version of TopFormer achieves real-time inference on an
ARM-based mobile device with competitive results. The code and models are
available at: https://github.com/hustvl/TopFormer
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 04:51:42 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Zhang",
"Wenqiang",
""
],
[
"Huang",
"Zilong",
""
],
[
"Luo",
"Guozhong",
""
],
[
"Chen",
"Tao",
""
],
[
"Wang",
"Xinggang",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Yu",
"Gang",
""
],
[
"Shen",
"Chunhua",
""
]
] |
new_dataset
| 0.975807 |
2204.05575
|
Haibao Yu
|
Haibao Yu, Yizhen Luo, Mao Shu, Yiyi Huo, Zebang Yang, Yifeng Shi,
Zhenglong Guo, Hanyu Li, Xing Hu, Jirui Yuan, Zaiqing Nie
|
DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative
3D Object Detection
|
CVPR2022
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Autonomous driving faces great safety challenges due to a lack of global
perspective and the limitation of long-range perception capabilities. It has
been widely agreed that vehicle-infrastructure cooperation is required to
achieve Level 5 autonomy. However, there is still NO dataset from real
scenarios available for computer vision researchers to work on
vehicle-infrastructure cooperation-related problems. To accelerate computer
vision research and innovation for Vehicle-Infrastructure Cooperative
Autonomous Driving (VICAD), we release DAIR-V2X Dataset, which is the first
large-scale, multi-modality, multi-view dataset from real scenarios for VICAD.
DAIR-V2X comprises 71254 LiDAR frames and 71254 Camera frames, and all frames
are captured from real scenes with 3D annotations. The Vehicle-Infrastructure
Cooperative 3D Object Detection problem (VIC3D) is introduced, formulating the
problem of collaboratively locating and identifying 3D objects using sensory
inputs from both vehicle and infrastructure. In addition to solving traditional
3D object detection problems, the solution of VIC3D needs to consider the
temporal asynchrony problem between vehicle and infrastructure sensors and the
data transmission cost between them. Furthermore, we propose Time Compensation
Late Fusion (TCLF), a late fusion framework for the VIC3D task as a benchmark
based on DAIR-V2X. Find data, code, and more up-to-date information at
https://thudair.baai.ac.cn/index and https://github.com/AIR-THU/DAIR-V2X.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 07:13:33 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Yu",
"Haibao",
""
],
[
"Luo",
"Yizhen",
""
],
[
"Shu",
"Mao",
""
],
[
"Huo",
"Yiyi",
""
],
[
"Yang",
"Zebang",
""
],
[
"Shi",
"Yifeng",
""
],
[
"Guo",
"Zhenglong",
""
],
[
"Li",
"Hanyu",
""
],
[
"Hu",
"Xing",
""
],
[
"Yuan",
"Jirui",
""
],
[
"Nie",
"Zaiqing",
""
]
] |
new_dataset
| 0.999821 |
2204.05576
|
Yuan Tian
|
Yuan Tian, Klaus-Rudolf Kladny, Qin Wang, Zhiwu Huang, Olga Fink
|
Multi-agent Actor-Critic with Time Dynamical Opponent Model
| null | null | null | null |
cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In multi-agent reinforcement learning, multiple agents learn simultaneously
while interacting with a common environment and each other. Since the agents
adapt their policies during learning, not only does the behavior of a single agent
become non-stationary, but so does the environment as perceived by the agent.
This renders it particularly challenging to perform policy improvement. In this
paper, we propose to exploit the fact that the agents seek to improve their
expected cumulative reward and introduce a novel \textit{Time Dynamical
Opponent Model} (TDOM) to encode the knowledge that the opponent policies tend
to improve over time. We motivate TDOM theoretically by deriving a lower bound
of the log objective of an individual agent and further propose
\textit{Multi-Agent Actor-Critic with Time Dynamical Opponent Model} (TDOM-AC).
We evaluate the proposed TDOM-AC on a differential game and the Multi-agent
Particle Environment. We show empirically that TDOM achieves superior opponent
behavior prediction during test time. The proposed TDOM-AC methodology
outperforms state-of-the-art Actor-Critic methods on the performed experiments
in cooperative and \textbf{especially} in mixed cooperative-competitive
environments. TDOM-AC results in a more stable training and a faster
convergence.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 07:16:15 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Tian",
"Yuan",
""
],
[
"Kladny",
"Klaus-Rudolf",
""
],
[
"Wang",
"Qin",
""
],
[
"Huang",
"Zhiwu",
""
],
[
"Fink",
"Olga",
""
]
] |
new_dataset
| 0.99354 |
2204.05599
|
Yu Zheng
|
Yu Zheng, Yueqi Duan, Jiwen Lu, Jie Zhou, Qi Tian
|
HyperDet3D: Learning a Scene-conditioned 3D Object Detector
|
to be published on CVPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A bathtub in a library, a sink in an office, a bed in a laundry room -- the
counter-intuition suggests that scene provides important prior knowledge for 3D
object detection, which instructs to eliminate the ambiguous detection of
similar objects. In this paper, we propose HyperDet3D to explore
scene-conditioned prior knowledge for 3D object detection. Existing methods
strive for better representation of local elements and their relations without
scene-conditioned knowledge, which may cause ambiguity merely based on the
understanding of individual points and object candidates. Instead, HyperDet3D
simultaneously learns scene-agnostic embeddings and scene-specific knowledge
through scene-conditioned hypernetworks. More specifically, our HyperDet3D not
only explores the sharable abstracts from various 3D scenes, but also adapts
the detector to the given scene at test time. We propose a discriminative
Multi-head Scene-specific Attention (MSA) module to dynamically control the
layer parameters of the detector conditioned on the fusion of scene-conditioned
knowledge. Our HyperDet3D achieves state-of-the-art results on the 3D object
detection benchmark of the ScanNet and SUN RGB-D datasets. Moreover, through
cross-dataset evaluation, we show the acquired scene-conditioned prior
knowledge still takes effect when facing 3D scenes with domain gap.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 07:57:58 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Zheng",
"Yu",
""
],
[
"Duan",
"Yueqi",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zhou",
"Jie",
""
],
[
"Tian",
"Qi",
""
]
] |
new_dataset
| 0.999224 |
2204.05626
|
Zhaowei Cai
|
Zhaowei Cai, Gukyeong Kwon, Avinash Ravichandran, Erhan Bas, Zhuowen
Tu, Rahul Bhotika, Stefano Soatto
|
X-DETR: A Versatile Architecture for Instance-wise Vision-Language Tasks
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the challenging instance-wise vision-language tasks,
where the free-form language is required to align with the objects instead of
the whole image. To address these tasks, we propose X-DETR, whose architecture
has three major components: an object detector, a language encoder, and
vision-language alignment. The vision and language streams are independent
until the end and they are aligned using an efficient dot-product operation.
The whole network is trained end-to-end, such that the detector is optimized
for the vision-language tasks instead of an off-the-shelf component. To
overcome the limited size of paired object-language annotations, we leverage
other weak types of supervision to expand the knowledge coverage. This simple
yet effective architecture of X-DETR shows good accuracy and fast speeds for
multiple instance-wise vision-language tasks, e.g., 16.4 AP on LVIS detection
of 1.2K categories at ~20 frames per second without using any LVIS annotation
during training.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 08:34:42 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Cai",
"Zhaowei",
""
],
[
"Kwon",
"Gukyeong",
""
],
[
"Ravichandran",
"Avinash",
""
],
[
"Bas",
"Erhan",
""
],
[
"Tu",
"Zhuowen",
""
],
[
"Bhotika",
"Rahul",
""
],
[
"Soatto",
"Stefano",
""
]
] |
new_dataset
| 0.997667 |
2204.05634
|
Eu-Bin Kim
|
Eu-Bin Kim
|
Idiomify -- Building a Collocation-supplemented Reverse Dictionary of
English Idioms with Word2Vec for non-native learners
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The aim of idiomify is to build a collocation-supplemented reverse dictionary
of idioms for the non-native learners of English. We aim to do so because the
reverse dictionary could help the non-natives explore idioms on demand, and the
collocations could also guide them on using idioms more adequately. The
cornerstone of the project is a reliable way of mining idioms from corpora,
which is, however, a challenge because idioms vary extensively in form. We
tackle this by automatically deriving matching rules from their base forms. We
use Point-wise Mutual Inclusion (PMI), Term Frequency - Inverse Document
Frequency (TF-IDF) to model collocations, since both of them are popular metric
for pairwise significance. We also try Term Frequency (TF) as the baseline
model. As for implementing the reverse-dictionary, three approaches could be
taken: inverted index, graphs and distributional semantics. We choose to take
the last approach and implement the reverse dictionary with Word2Vec, because
it is the most flexible approach of all and Word2Vec is a simple yet strong
baseline. Evaluating the methods has revealed room for improvement. We learn
that we can better identify idioms with the help of slop, wildcard and
reordering techniques. We also learn that we can get the best of both PMI and
TF-IDF if we use machine learning to find the sweet spot. Lastly, We learn that
Idiomify could be further improved with a mixture of inverted index and
distributional semantics approach. The limits aside, the proposed methods are
feasible, and their benefits to the non-natives are apparent, which therefore
can be used to aid the non-natives in acquiring English idioms.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 08:55:27 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Kim",
"Eu-Bin",
""
]
] |
new_dataset
| 0.999117 |
2204.05660
|
Swaroop Mishra
|
Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva,
Peter Clark, Chitta Baral and Ashwin Kalyan
|
NumGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning
Tasks
|
ACL 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given the ubiquitous nature of numbers in text, reasoning with numbers to
perform simple calculations is an important skill of AI systems. While many
datasets and models have been developed to this end, state-of-the-art AI
systems are brittle; failing to perform the underlying mathematical reasoning
when they appear in a slightly different scenario. Drawing inspiration from
GLUE that was proposed in the context of natural language understanding, we
propose NumGLUE, a multi-task benchmark that evaluates the performance of AI
systems on eight different tasks that, at their core, require simple arithmetic
understanding. We show that this benchmark is far from being solved, with neural
models, including state-of-the-art large-scale language models, performing
significantly worse than humans (lower by 46.4%). Further, NumGLUE promotes
sharing knowledge across tasks, especially those with limited training data as
evidenced by the superior performance (average gain of 3.4% on each task) when
a model is jointly trained on all the tasks as opposed to task-specific
modeling. Finally, we hope that NumGLUE will encourage systems that perform
robust and general arithmetic reasoning within language, a first step towards
being able to perform more complex mathematical reasoning.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 09:36:10 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Mishra",
"Swaroop",
""
],
[
"Mitra",
"Arindam",
""
],
[
"Varshney",
"Neeraj",
""
],
[
"Sachdeva",
"Bhavdeep",
""
],
[
"Clark",
"Peter",
""
],
[
"Baral",
"Chitta",
""
],
[
"Kalyan",
"Ashwin",
""
]
] |
new_dataset
| 0.999822 |
2204.05729
|
Loe Feijs
|
L.M.G. Feijs
|
Single line Apollonian gaskets: is the limit a space filling fractal
curve?
|
7 pages, 5 figures. Explorations related to "Single Line Apollonian
Gaskets for Fashion" by Feijs and Toeters for Bridges 2022
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
In this manuscript we study single-line approximations and fractals based on
the Apollonian gasket. The well-known Apollonian gasket is the limit case of
configurations of kissing circles. Rather than plotting the circles as discs on
a differently colored background (the traditional representation), we draw all
circles as one line without lifting the pen and without crossing itself.
Moreover, the configurations are nested. In this manuscript we explore whether
the limit of the line drawings gives rise to a space filling fractal curve.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 11:51:02 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Feijs",
"L. M. G.",
""
]
] |
new_dataset
| 0.992982 |
2204.05735
|
Shin-Fang Chng
|
Shin-Fang Chng, Sameera Ramasinghe, Jamie Sherrah, Simon Lucey
|
GARF: Gaussian Activated Radiance Fields for High Fidelity
Reconstruction and Pose Estimation
|
Project page: https://sfchng.github.io/garf/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite Neural Radiance Fields (NeRF) showing compelling results in
photorealistic novel view synthesis of real-world scenes, most existing
approaches require accurate prior camera poses. Although approaches for jointly
recovering the radiance field and camera pose exist (BARF), they rely on a
cumbersome coarse-to-fine auxiliary positional embedding to ensure good
performance. We present Gaussian Activated neural Radiance Fields (GARF), a new
positional embedding-free neural radiance field architecture - employing
Gaussian activations - that outperforms the current state-of-the-art in terms
of high fidelity reconstruction and pose estimation.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 12:14:39 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Chng",
"Shin-Fang",
""
],
[
"Ramasinghe",
"Sameera",
""
],
[
"Sherrah",
"Jamie",
""
],
[
"Lucey",
"Simon",
""
]
] |
new_dataset
| 0.968556 |
2204.05754
|
Md Tanvirul Alam
|
Md Tanvirul Alam, Dipkamal Bhusal, Youngja Park, Nidhi Rastogi
|
CyNER: A Python Library for Cybersecurity Named Entity Recognition
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Open Cyber threat intelligence (OpenCTI) information is available in an
unstructured format from heterogeneous sources on the Internet. We present
CyNER, an open-source python library for cybersecurity named entity recognition
(NER). CyNER combines transformer-based models for extracting
cybersecurity-related entities, heuristics for extracting different indicators
of compromise, and publicly available NER models for generic entity types. We
provide models trained on a diverse corpus that users can readily use. Events
are described as classes in previous research - MALOnt2.0 (Christian et al.,
2021) and MALOnt (Rastogi et al., 2020) - which together extract a wide range of
malware attack details from a threat intelligence corpus. The user can combine
predictions from multiple different approaches to suit their needs. The library
is made publicly available.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 16:49:32 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Alam",
"Md Tanvirul",
""
],
[
"Bhusal",
"Dipkamal",
""
],
[
"Park",
"Youngja",
""
],
[
"Rastogi",
"Nidhi",
""
]
] |
new_dataset
| 0.994902 |
2204.05836
|
Krishnapriya Vishnubhotla
|
Krishnapriya Vishnubhotla, Adam Hammond, Graeme Hirst
|
The Project Dialogism Novel Corpus: A Dataset for Quotation Attribution
in Literary Texts
|
Accepted for publication at LREC 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Project Dialogism Novel Corpus, or PDNC, an annotated dataset
of quotations for English literary texts. PDNC contains annotations for 35,978
quotations across 22 full-length novels, and is by an order of magnitude the
largest corpus of its kind. Each quotation is annotated for the speaker,
addressees, type of quotation, referring expression, and character mentions
within the quotation text. The annotated attributes allow for a comprehensive
evaluation of models of quotation attribution and coreference for literary
texts.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 14:23:55 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Vishnubhotla",
"Krishnapriya",
""
],
[
"Hammond",
"Adam",
""
],
[
"Hirst",
"Graeme",
""
]
] |
new_dataset
| 0.999531 |
2204.05855
|
Julian Blank
|
Julian Blank and Kalyanmoy Deb
|
pysamoo: Surrogate-Assisted Multi-Objective Optimization in Python
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Significant effort has been made to solve computationally expensive
optimization problems in the past two decades, and various optimization methods
incorporating surrogates into optimization have been proposed. However, most
optimization toolboxes do not include ready-to-run algorithms for
computationally expensive problems, especially in combination with other key
requirements, such as handling multiple conflicting objectives or constraints.
Thus, the lack of appropriate software packages has become a bottleneck for
solving real-world applications. The proposed framework, pysamoo, addresses
these shortcomings of existing optimization frameworks and provides multiple
optimization methods for handling problems involving time-consuming evaluation
functions. The framework extends the functionalities of pymoo, a popular and
comprehensive toolbox for multi-objective optimization, and incorporates
surrogates to support expensive function evaluations. The framework is
available under the GNU Affero General Public License (AGPL) and is primarily
designed for research purposes. For more information about pysamoo, readers are
encouraged to visit: anyoptimization.com/projects/pysamoo.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 14:55:57 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Blank",
"Julian",
""
],
[
"Deb",
"Kalyanmoy",
""
]
] |
new_dataset
| 0.996183 |
2204.05911
|
Valerio Brussani
|
Valerio Brussani
|
ASVAAN: Semi-automatic side-channel analysis of Android NDK
|
11 pages, 3 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Android is the most popular operating system for smartphones and is also
well-known for its flexibility and security. However, although it is overall
considered very secure, vulnerabilities are still occasionally discovered that
allow obtaining sensitive user information by bypassing security controls and
boundaries: among these, side-channel vulnerabilities are a significant
concern these days. Although there are several types of side-channel
vulnerabilities, those focused on APIs still represent a largely unexplored
area which, until now, has often been analysed manually. Only in recent years
have automatic solutions been published that scan for side-channel flaws in
Android, motivated by the operating system's growing codebase; however, they
present some limitations.
This paper introduces a new approach to discovering Android NDK side-channel
leaks, which, to the best of the author's knowledge, have never been
investigated with automatic or semi-automatic solutions. The approach described
in this work allowed the identification of more than 8 new side-channel leaks
in several Android NDK functions, which permitted inferring, with great
accuracy, application and website launches on a victim device. The findings
represent the first side-channel leaks discovered in Android NDK functions and
were responsibly disclosed to the Android Security Team of Google.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 16:12:11 GMT"
}
] | 2022-04-13T00:00:00 |
[
[
"Brussani",
"Valerio",
""
]
] |
new_dataset
| 0.99935 |