id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2110.03224
|
Julien Herzen
|
Julien Herzen, Francesco Lässig, Samuele Giuliano Piazzetta, Thomas
Neuer, Léo Tafti, Guillaume Raille, Tomas Van Pottelbergh, Marek Pasieka,
Andrzej Skrodzki, Nicolas Huguenin, Maxime Dumonal, Jan Kościsz, Dennis
Bader, Frédérick Gusset, Mounir Benheddi, Camila Williamson, Michal
Kosinski, Matej Petrik, Gaël Grosch
|
Darts: User-Friendly Modern Machine Learning for Time Series
|
Darts Github repository: https://github.com/unit8co/darts
|
Journal of Machine Learning Research 23 (2022) 1-6
| null | null |
cs.LG stat.CO
|
http://creativecommons.org/licenses/by/4.0/
|
We present Darts, a Python machine learning library for time series, with a
focus on forecasting. Darts offers a variety of models, from classics such as
ARIMA to state-of-the-art deep neural networks. The emphasis of the library is
on offering modern machine learning functionalities, such as supporting
multidimensional series, meta-learning on multiple series, training on large
datasets, incorporating external data, ensembling models, and providing rich
support for probabilistic forecasting. At the same time, great care goes into
the API design to make it user-friendly and easy to use. For instance, all
models can be used via fit()/predict(), similar to scikit-learn.
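
Below is a minimal sketch of the fit()/predict() workflow the abstract describes, assuming the darts package and its bundled AirPassengers dataset; treat it as an illustration rather than the authors' reference example.

```python
# Sketch of darts' scikit-learn-style API (assumes `pip install darts`).
from darts.datasets import AirPassengersDataset
from darts.models import ExponentialSmoothing

series = AirPassengersDataset().load()       # a darts TimeSeries
train, val = series[:-36], series[-36:]      # hold out the last 36 months

model = ExponentialSmoothing()
model.fit(train)                             # fit() ...
forecast = model.predict(n=len(val))         # ... then predict()
print(forecast.values()[:5])
```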
|
[
{
"version": "v1",
"created": "Thu, 7 Oct 2021 07:18:57 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Oct 2021 12:01:03 GMT"
},
{
"version": "v3",
"created": "Thu, 19 May 2022 06:52:54 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Herzen",
"Julien",
""
],
[
"Lässig",
"Francesco",
""
],
[
"Piazzetta",
"Samuele Giuliano",
""
],
[
"Neuer",
"Thomas",
""
],
[
"Tafti",
"Léo",
""
],
[
"Raille",
"Guillaume",
""
],
[
"Van Pottelbergh",
"Tomas",
""
],
[
"Pasieka",
"Marek",
""
],
[
"Skrodzki",
"Andrzej",
""
],
[
"Huguenin",
"Nicolas",
""
],
[
"Dumonal",
"Maxime",
""
],
[
"Kościsz",
"Jan",
""
],
[
"Bader",
"Dennis",
""
],
[
"Gusset",
"Frédérick",
""
],
[
"Benheddi",
"Mounir",
""
],
[
"Williamson",
"Camila",
""
],
[
"Kosinski",
"Michal",
""
],
[
"Petrik",
"Matej",
""
],
[
"Grosch",
"Gaël",
""
]
] |
new_dataset
| 0.984281 |
2201.06723
|
Sabit Hassan
|
Hamdy Mubarak, Sabit Hassan, Shammur Absar Chowdhury
|
Emojis as Anchors to Detect Arabic Offensive Language and Hate Speech
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a generic, language-independent method to collect a large
percentage of offensive and hate tweets regardless of their topics or genres.
We harness the extralinguistic information embedded in the emojis to collect a
large number of offensive tweets. We apply the proposed method on Arabic tweets
and compare it with English tweets - analysing key cultural differences. We
observed consistent usage of these emojis to represent offensiveness across
different timespans on Twitter. We manually annotate and publicly release the
largest Arabic dataset for offensive, fine-grained hate speech, vulgar and
violence content. Furthermore, we benchmark the dataset for detecting
offensiveness and hate speech using different transformer architectures and
perform in-depth linguistic analysis. We evaluate our models on external
datasets - a Twitter dataset collected using a completely different method, and
a multi-platform dataset containing comments from Twitter, YouTube and
Facebook, for assessing generalization capability. Competitive results on these
datasets suggest that the data collected using our method captures universal
characteristics of offensive language. Our findings also highlight the common
words used in offensive communications, common targets for hate speech,
specific patterns in violence tweets; and pinpoint common classification errors
that can be attributed to limitations of NLP models. We observe that even
state-of-the-art transformer models may fail to take into account culture,
background and context or understand nuances present in real-world data such as
sarcasm.
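
A minimal sketch of the emoji-anchored collection idea described above, assuming a hypothetical seed set of emojis and an iterable of tweet records; the actual seed list, pipeline, and Twitter API calls are not specified here.

```python
# Illustrative sketch of emoji-anchored tweet collection (the seed emojis
# are hypothetical placeholders, not the paper's actual list).
SEED_EMOJIS = {"\U0001F595", "\U0001F92C", "\U0001F4A9"}

def collect_candidates(tweets):
    """Keep tweets containing at least one seed emoji, regardless of topic."""
    for tweet in tweets:
        if any(e in tweet["text"] for e in SEED_EMOJIS):
            yield tweet

sample = [{"text": "have a nice day"}, {"text": "get lost \U0001F92C"}]
print(list(collect_candidates(sample)))  # -> only the second tweet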
|
[
{
"version": "v1",
"created": "Tue, 18 Jan 2022 03:56:57 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 00:12:53 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Mubarak",
"Hamdy",
""
],
[
"Hassan",
"Sabit",
""
],
[
"Chowdhury",
"Shammur Absar",
""
]
] |
new_dataset
| 0.998555 |
2202.01477
|
Mohammad Javad Ahmadi
|
Mohammad Javad Ahmadi and Tolga M. Duman
|
Unsourced Random Access with a Massive MIMO Receiver Using Multiple
Stages of Orthogonal Pilots
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of unsourced random access (URA) over Rayleigh
block-fading channels with a receiver equipped with multiple antennas. We
employ multiple stages of orthogonal pilots, each of which is randomly picked
from a codebook. In the proposed scheme, each user encodes its message using a
polar code and appends it to the selected pilot sequences to construct its
transmitted signal. Accordingly, the received signal consists of a
superposition of the users' signals, each composed of multiple orthogonal pilot
parts and a polar coded part. We use an iterative approach for decoding the
transmitted messages along with a suitable successive interference cancellation
scheme. The performance of the proposed scheme is illustrated via an extensive
set of simulation results, which show that it significantly outperforms
existing approaches for URA over multiple-input multiple-output fading
channels.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 09:04:42 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Feb 2022 15:14:57 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Feb 2022 11:35:38 GMT"
},
{
"version": "v4",
"created": "Thu, 19 May 2022 10:03:09 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Ahmadi",
"Mohammad Javad",
""
],
[
"Duman",
"Tolga M.",
""
]
] |
new_dataset
| 0.989641 |
2202.03918
|
Michael Langberg
|
Michael Langberg and Michelle Effros
|
Network Coding Multicast Key-Capacity
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a multi-source multi-terminal noiseless network, the key-dissemination
problem involves the task of multicasting a secret key K from the network
sources to its terminals. As in secure multicast network-coding, in the
key-dissemination problem the source nodes have access to independent
randomness and, as the network is noiseless, the resulting key K is a function
of the sources' information. However, different from traditional forms of
multicast, in key-dissemination the key K need not consist of source messages,
but rather may be any function of the information generated at the sources, as
long as it is shared by all terminals. Allowing the shared key K to be a
mixture of source information grants a flexibility to the communication process
which gives rise to the potential of increased key-rates when compared to
traditional secure multicast. The multicast key-capacity is the supremum of
achievable key-rates, subject to the security requirement that the shared key
is not revealed to an eavesdropper with predefined eavesdropping capabilities.
The key-dissemination problem (also termed secret key-agreement) has been
studied extensively over the past decades in memoryless network structures. In
this work, we initiate the study of key-dissemination in the context of
noiseless networks, i.e., network coding. In this context, we study
similarities and differences between traditional secure-multicast and the more
lenient task of key-dissemination.
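
As a worked restatement of the definition above, the multicast key-capacity can be written as a supremum over achievable rates; the notation below is ours, not the paper's.

```latex
C_{\mathrm{key}} \;=\; \sup\bigl\{\, R \;:\; R \text{ is an achievable key-rate, i.e. all terminals agree on a key } K
\text{ of rate } R \text{ that remains hidden from the eavesdropper} \,\bigr\}
```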
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 15:11:01 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 15:39:40 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Langberg",
"Michael",
""
],
[
"Effros",
"Michelle",
""
]
] |
new_dataset
| 0.988645 |
2205.09115
|
Toshiaki Koike-Akino
|
Toshiaki Koike-Akino, Pu Wang, Ye Wang
|
AutoQML: Automated Quantum Machine Learning for Wi-Fi Integrated Sensing
and Communications
|
5 pages, 9 figures, IEEE SAM 2022. arXiv admin note: text overlap
with arXiv:2205.08590
| null | null | null |
cs.LG eess.SP quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Commercial Wi-Fi devices can be used for integrated sensing and
communications (ISAC) to jointly exchange data and monitor indoor environments.
In this paper, we investigate a proof-of-concept approach using an automated
quantum machine learning (AutoQML) framework called AutoAnsatz to recognize
human gestures. We address how to efficiently design quantum circuits to
configure quantum neural networks (QNNs). The effectiveness of AutoQML is
validated by an in-house experiment for human pose recognition, achieving
state-of-the-art performance of greater than 80% accuracy on a limited data
size with a very small number of trainable parameters.
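
To make the circuit-design task concrete, here is a minimal parameterized-ansatz sketch in PennyLane; this is a generic hardware-efficient ansatz of our own choosing, not the AutoAnsatz framework or the paper's actual circuits.

```python
# Generic hardware-efficient QNN ansatz sketch (assumes `pip install pennylane`);
# illustrates the kind of circuit a QNN-configuration search would explore.
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(params, features):
    # encode classical features as rotation angles
    for w in range(n_qubits):
        qml.RY(features[w], wires=w)
    # trainable layers: single-qubit rotations + entangling CNOT ring
    for layer in range(n_layers):
        for w in range(n_qubits):
            qml.RY(params[layer, w], wires=w)
        for w in range(n_qubits):
            qml.CNOT(wires=[w, (w + 1) % n_qubits])
    return qml.expval(qml.PauliZ(0))

params = np.random.uniform(0, np.pi, size=(n_layers, n_qubits))
print(qnn(params, np.array([0.1, 0.2, 0.3, 0.4])))
```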
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 19:38:13 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Koike-Akino",
"Toshiaki",
""
],
[
"Wang",
"Pu",
""
],
[
"Wang",
"Ye",
""
]
] |
new_dataset
| 0.979905 |
2205.09214
|
Yong Niu
|
Jing Li, Yong Niu, Hao Wu, Bo Ai, Sheng Chen, Zhiyong Feng, Zhangdui
Zhong, Ning Wang
|
Mobility Support for Millimeter Wave Communications: Opportunities and
Challenges
|
25 pages, 11 figures, journal
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Millimeter-wave (mmWave) communication technology offers a potential and
promising solution to support 5G and B5G wireless networks in dynamic scenarios
and applications. However, mobility introduces many challenges as well as
opportunities to mmWave applications. To address these problems, we conduct a
survey of the opportunities and technologies to support mmWave communications
in mobile scenarios. Firstly, we summarize the mobile scenarios where mmWave
communications are exploited, including indoor wireless local area network
(WLAN) or wireless personal area network (WPAN), cellular access,
vehicle-to-everything (V2X), high speed train (HST), unmanned aerial vehicle
(UAV), and the new space-air-ground-sea communication scenarios. Then, to
address users' mobility impact on the system performance in different
application scenarios, we introduce several representative mobility models in
mmWave systems, including human mobility, vehicular mobility, high speed train
mobility, and ship mobility. Next, we survey the key challenges and existing
solutions to mmWave applications, such as channel modeling, channel estimation,
anti-blockage, and capacity improvement. Lastly, we discuss the open issues
concerning mobility-aware mmWave communications that deserve further
investigation. In particular, we highlight future heterogeneous mobile
networks, dynamic resource management, artificial intelligence (AI) for
mobility and integration of geographical information, deployment of large
intelligent surface and reconfigurable antenna technology, and finally, the
evolution to Terahertz (THz) communications.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 20:59:14 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Li",
"Jing",
""
],
[
"Niu",
"Yong",
""
],
[
"Wu",
"Hao",
""
],
[
"Ai",
"Bo",
""
],
[
"Chen",
"Sheng",
""
],
[
"Feng",
"Zhiyong",
""
],
[
"Zhong",
"Zhangdui",
""
],
[
"Wang",
"Ning",
""
]
] |
new_dataset
| 0.999601 |
2205.09230
|
Gaetano Perrone Mr.
|
Francesco Caturano, Nicola d'Ambrosio, Gaetano Perrone, Luigi
Previdente, Simon Pietro Romano
|
ExploitWP2Docker: a Platform for Automating the Generation of Vulnerable
WordPress Environments for Cyber Ranges
|
7 pages, 3 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A cyber range is a realistic simulation of an organization's network
infrastructure, commonly used for cyber security training purposes. It provides
a safe environment to assess competencies in both offensive and defensive
techniques. An important step during the realization of a cyber range is the
generation of vulnerable machines. This step is challenging and requires a
laborious manual configuration. Several works aim to reduce this overhead, but
the current state-of-the-art focuses on generating network services without
considering the effort required to build vulnerable environments for web
applications. A cyber range should represent a real system, and nowadays many
companies build their sites with WordPress, a common Content Management System
(CMS) that is also one of the most critical attacker entry points. The
presented work proposes an approach to
automatically create and configure vulnerable WordPress applications by using
the information presented in public exploits. Our platform automatically
extracts information from the most well-known publicly available exploit
database in order to generate and configure vulnerable environments. The
container-based virtualization is used to generate lightweight and easily
deployable infrastructures. A final evaluation highlights promising results
regarding the possibility of automating the generation of vulnerable
environments through our approach.
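
A minimal sketch of container-based generation of a vulnerable WordPress environment, assuming a hypothetical plugin slug and version extracted from an exploit entry; the image tag and generation logic are illustrative, not the platform's actual code.

```python
# Illustrative generator: turn (plugin, vulnerable version) parsed from an
# exploit entry into a Dockerfile for a deliberately vulnerable WordPress lab.
# The plugin slug/version and WordPress image tag are hypothetical placeholders.
from pathlib import Path

DOCKERFILE_TEMPLATE = """\
FROM wordpress:5.8-php7.4-apache
# fetch the vulnerable plugin version from the WordPress plugin archive
RUN curl -fsSL https://downloads.wordpress.org/plugin/{slug}.{version}.zip \\
        -o /tmp/{slug}.zip \\
    && unzip /tmp/{slug}.zip -d /usr/src/wordpress/wp-content/plugins/ \\
    && rm /tmp/{slug}.zip
"""

def write_dockerfile(slug: str, version: str, out_dir: str = ".") -> Path:
    path = Path(out_dir) / "Dockerfile"
    path.write_text(DOCKERFILE_TEMPLATE.format(slug=slug, version=version))
    return path

print(write_dockerfile("example-gallery-plugin", "1.2.3").read_text())
```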
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 22:18:58 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Caturano",
"Francesco",
""
],
[
"d'Ambrosio",
"Nicola",
""
],
[
"Perrone",
"Gaetano",
""
],
[
"Previdente",
"Luigi",
""
],
[
"Romano",
"Simon Pietro",
""
]
] |
new_dataset
| 0.99137 |
2205.09428
|
Ashkan Sami
|
F. Khoshnoud, A. Rezaei Nasab, Z. Toudeji, A. Sami
|
Which bugs are missed in code reviews: An empirical study on SmartSHARK
dataset
|
5 pages, 3 figures. This study has been accepted for publication at:
The 19th International Conference on Mining Software Repositories (MSR 2022)
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In pull-based development systems, code reviews and pull request comments
play important roles in improving code quality. In such systems, reviewers
attempt to carefully check a piece of code by different unit tests.
Unfortunately, they sometimes miss bugs in their review of pull requests, which
leads to quality degradation of the systems. In other words, disastrous
consequences can occur when bugs surface after pull requests are merged. The
lack of a concrete understanding of these bugs led us to investigate and
categorize them. In this research, we identify missed bugs in pull requests of
SmartSHARK dataset projects. Our contribution is twofold. First, we
hypothesized that merged pull requests with code reviews, code review comments,
or pull request comments added after merging may contain bugs missed during
code review. We considered these merged pull requests as candidates for having
missed bugs. Based on this assumption, we obtained 3,261 candidate pull
requests from 77 open-source GitHub projects. In a first round of manual
analysis, we found 224 pull requests containing bugs missed after merging;
after a second, more restrictive round, we confirmed 187 missed bugs in 173
pull requests. Second, we defined a taxonomy appropriate for the bugs we found
and determined the distribution of bug categories by re-analysing those pull
requests. The categories
of missed bugs in pull requests and their distributions are: semantic (51.34%),
build (15.5%), analysis checks (9.09%), compatibility (7.49%), concurrency
(4.28%), configuration (4.28%), GUI (2.14%), API (2.14%), security (2.14%), and
memory (1.6%).
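
A minimal sketch of the candidate-selection hypothesis above, assuming a hypothetical pandas DataFrame of merged pull requests; the column names are illustrative, not SmartSHARK's schema.

```python
# Hypothetical candidate filter: merged PRs with review/comment activity
# recorded after the merge timestamp (column names are illustrative).
import pandas as pd

prs = pd.DataFrame({
    "pr_id": [1, 2, 3],
    "merged_at": pd.to_datetime(["2021-01-01", "2021-02-01", "2021-03-01"]),
    "last_review_activity": pd.to_datetime(["2020-12-30", "2021-02-05", "2021-03-10"]),
})

candidates = prs[prs["last_review_activity"] > prs["merged_at"]]
print(candidates["pr_id"].tolist())  # -> [2, 3]: activity after merging
```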
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 09:43:48 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Khoshnoud",
"F.",
""
],
[
"Nasab",
"A. Rezaei",
""
],
[
"Toudeji",
"Z.",
""
],
[
"Sami",
"A.",
""
]
] |
new_dataset
| 0.998121 |
2205.09442
|
Mei Wang
|
Mei Wang, Weihong Deng
|
Oracle-MNIST: a Realistic Image Dataset for Benchmarking Machine
Learning Algorithms
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the Oracle-MNIST dataset, comprising 28$\times$28 grayscale
images of 30,222 ancient characters from 10 categories, for benchmarking
pattern classification, with particular challenges from image noise and
distortion. The training set consists of 27,222 images in total, and the test
set contains 300 images per class. Oracle-MNIST shares the same data format
as the original MNIST dataset, allowing for direct compatibility with all
existing classifiers and systems, while constituting a more challenging
classification task than MNIST. The images of ancient characters suffer from
(1) extremely severe and unique noise caused by three thousand years of burial
and aging, and (2) dramatically varied writing styles of ancient Chinese, which
together make them realistic for machine learning research. The dataset is freely
available at https://github.com/wm-bupt/oracle-mnist.
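
Since the abstract states the data uses the original MNIST format, a generic IDX-file loader like the following should apply; the file names are assumed from MNIST convention and may differ in the actual release.

```python
# Generic loader for MNIST-format (IDX) files; Oracle-MNIST advertises the
# same format. File names below follow MNIST convention and are assumptions.
import gzip
import struct
import numpy as np

def load_idx_images(path: str) -> np.ndarray:
    with gzip.open(path, "rb") as f:
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, "not an IDX image file"
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(n, rows, cols)

def load_idx_labels(path: str) -> np.ndarray:
    with gzip.open(path, "rb") as f:
        magic, n = struct.unpack(">II", f.read(8))
        assert magic == 2049, "not an IDX label file"
        return np.frombuffer(f.read(), dtype=np.uint8)

# x = load_idx_images("train-images-idx3-ubyte.gz")   # shape (27222, 28, 28)
# y = load_idx_labels("train-labels-idx1-ubyte.gz")
```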
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 09:57:45 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Wang",
"Mei",
""
],
[
"Deng",
"Weihong",
""
]
] |
new_dataset
| 0.999867 |
2205.09488
|
James Montgomery
|
Mark Reid, James Montgomery, Barry Drake, Avraham Ruderman
|
PSI Draft Specification
|
Software specification for PSI machine learning web services. 42
pages, 2 figures
| null | null | null |
cs.SE cs.LG cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This document presents the draft specification for delivering machine
learning services over HTTP, developed as part of the Protocols and Structures
for Inference project, which concluded in 2013. It presents the motivation for
providing machine learning as a service, followed by a description of the
essential and optional components of such a service.
|
[
{
"version": "v1",
"created": "Mon, 2 May 2022 02:42:16 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Reid",
"Mark",
""
],
[
"Montgomery",
"James",
""
],
[
"Drake",
"Barry",
""
],
[
"Ruderman",
"Avraham",
""
]
] |
new_dataset
| 0.95378 |
2205.09501
|
Jan Deriu
|
Michel Plüss, Manuela Hürlimann, Marc Cuny, Alla Stöckli,
Nikolaos Kapotis, Julia Hartmann, Malgorzata Anna Ulasik, Christian Scheller,
Yanick Schraner, Amit Jain, Jan Deriu, Mark Cieliebak, Manfred Vogel
|
SDS-200: A Swiss German Speech to Standard German Text Corpus
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SDS-200, a corpus of Swiss German dialectal speech with Standard
German text translations, annotated with dialect, age, and gender information
of the speakers. The dataset allows for training speech translation, dialect
recognition, and speech synthesis systems, among others. The data was collected
using a web recording tool that is open to the public. Each participant was
given a text in Standard German and asked to translate it to their Swiss German
dialect before recording it. To increase the corpus quality, recordings were
validated by other participants. The data consists of 200 hours of speech by
around 4000 different speakers and covers a large part of the Swiss-German
dialect landscape. We release SDS-200 alongside a baseline speech translation
model, which achieves a word error rate (WER) of 30.3 and a BLEU score of 53.1
on the SDS-200 test set. Furthermore, we use SDS-200 to fine-tune a pre-trained
XLS-R model, achieving 21.6 WER and 64.0 BLEU.
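
A minimal sketch of how WER and BLEU scores like those above are typically computed, assuming the third-party jiwer and sacrebleu packages; this is generic evaluation code, not the authors' pipeline, and the sentences are toy placeholders.

```python
# Generic WER/BLEU evaluation sketch (pip install jiwer sacrebleu); the
# references and hypotheses here are toy placeholders, not SDS-200 data.
import jiwer
import sacrebleu

references = ["das ist ein Test", "guten Morgen zusammen"]
hypotheses = ["das ist ein Fest", "guten Morgen"]

wer = jiwer.wer(references, hypotheses)            # word error rate over the corpus
bleu = sacrebleu.corpus_bleu(hypotheses, [references])

print(f"WER:  {100 * wer:.1f}")
print(f"BLEU: {bleu.score:.1f}")
```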
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 12:16:29 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Plüss",
"Michel",
""
],
[
"Hürlimann",
"Manuela",
""
],
[
"Cuny",
"Marc",
""
],
[
"Stöckli",
"Alla",
""
],
[
"Kapotis",
"Nikolaos",
""
],
[
"Hartmann",
"Julia",
""
],
[
"Ulasik",
"Malgorzata Anna",
""
],
[
"Scheller",
"Christian",
""
],
[
"Schraner",
"Yanick",
""
],
[
"Jain",
"Amit",
""
],
[
"Deriu",
"Jan",
""
],
[
"Cieliebak",
"Mark",
""
],
[
"Vogel",
"Manfred",
""
]
] |
new_dataset
| 0.999823 |
2205.09564
|
Homayoon Beigi
|
Benjamin Kepecs and Homayoon Beigi
|
Automatic Spoken Language Identification using a Time-Delay Neural
Network
|
6 pages, 6 figures, Technical Report, Recognition Technologies, Inc.
| null |
10.13140/RG.2.2.21631.89763
|
RTI-20220519-01
|
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Closed-set spoken language identification is the task of recognizing the
language being spoken in a recorded audio clip from a set of known languages.
In this study, a language identification system was built and trained to
distinguish between Arabic, Spanish, French, and Turkish based on nothing more
than recorded speech. A pre-existing multilingual dataset was used to train a
series of acoustic models based on the Tedlium TDNN model to perform automatic
speech recognition. The system was provided with a custom multilingual language
model and a specialized pronunciation lexicon with language names prepended to
phones. The trained model was used to generate phone alignments to test data
from all four languages, and languages were predicted based on a voting scheme
choosing the most common language prepend in an utterance. Accuracy was
measured by comparing predicted languages to known languages, and was
determined to be very high in identifying Spanish and Arabic, and somewhat
lower in identifying Turkish and French.
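
A minimal sketch of the utterance-level voting scheme described above, assuming hypothetical phone alignments whose labels carry a language prefix (e.g. "AR_"); the prefix convention is an illustrative assumption.

```python
# Majority-vote language prediction over phone alignments; the "AR_"/"FR_"
# language-prefix convention here is an illustrative assumption.
from collections import Counter

def predict_language(aligned_phones: list[str]) -> str:
    """Pick the most common language prefix among an utterance's phones."""
    votes = Counter(p.split("_", 1)[0] for p in aligned_phones)
    return votes.most_common(1)[0][0]

utterance = ["AR_b", "AR_a", "FR_e", "AR_t", "AR_i"]
print(predict_language(utterance))  # -> "AR"
```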
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 13:47:48 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Kepecs",
"Benjamin",
""
],
[
"Beigi",
"Homayoon",
""
]
] |
new_dataset
| 0.9996 |
2205.09635
|
Eric Wagner
|
Eric Wagner, Martin Serror, Klaus Wehrle, Martin Henze
|
BP-MAC: Fast Authentication for Short Messages
|
ACM WiSec'22
| null |
10.1145/3507657.3528554
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Resource-constrained devices increasingly rely on wireless communication for
the reliable and low-latency transmission of short messages. However,
especially the implementation of adequate integrity protection of time-critical
messages places a significant burden on these devices. We address this issue by
proposing BP-MAC, a fast and memory-efficient approach for computing message
authentication codes based on the well-established Carter-Wegman construction.
Our key idea is to offload resource-intensive computations to idle phases and
thus save valuable time in latency-critical phases, i.e., when new data awaits
processing. To this end, BP-MAC leverages a universal hash function designed
for bitwise preprocessing, so that only a few XOR operations remain during the
latency-critical phase. Our evaluation on embedded
hardware shows that BP-MAC outperforms the state-of-the-art in terms of latency
and memory overhead, notably for small messages, as required to adequately
protect resource-constrained devices with stringent security and latency
requirements.
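
To illustrate the precompute-then-XOR idea (and only that idea; this is a toy of our own, not the actual BP-MAC construction), per-bit masks can be derived in the idle phase so that tagging a message reduces to XORing the masks of its set bits.

```python
# Toy illustration of offloading MAC work to idle time: derive one
# pseudorandom mask per message-bit position up front; at send time the tag
# is just the XOR of masks for set bits. This mirrors the precompute/XOR
# split the abstract describes, NOT the actual BP-MAC algorithm; real
# Carter-Wegman MACs additionally encrypt the hash output per message.
import hmac, hashlib, secrets

KEY = secrets.token_bytes(16)
MSG_BITS, TAG_BYTES = 32, 8

# Idle phase: one pseudorandom mask per bit position.
masks = [
    int.from_bytes(
        hmac.new(KEY, i.to_bytes(4, "big"), hashlib.sha256).digest()[:TAG_BYTES],
        "big",
    )
    for i in range(MSG_BITS)
]

def tag(message: int) -> int:
    """Latency-critical phase: a few XORs only."""
    t = 0
    for i in range(MSG_BITS):
        if (message >> i) & 1:
            t ^= masks[i]
    return t

print(hex(tag(0b1011)))
```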
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 15:52:13 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Wagner",
"Eric",
""
],
[
"Serror",
"Martin",
""
],
[
"Wehrle",
"Klaus",
""
],
[
"Henze",
"Martin",
""
]
] |
new_dataset
| 0.996664 |
2205.09664
|
Mustafa Jarrar
|
Mustafa Jarrar
|
The Arabic Ontology -- An Arabic Wordnet with Ontologically Clean
Content
| null |
Applied Ontology Journal, 16:1, 1-26. IOS Press. (2021)
|
10.3233/AO-200241
| null |
cs.CL cs.AI cs.IR cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a formal Arabic wordnet built on the basis of a carefully designed
ontology hereby referred to as the Arabic Ontology. The ontology provides a
formal representation of the concepts that the Arabic terms convey, and its
content was built with ontological analysis in mind, and benchmarked to
scientific advances and rigorous knowledge sources as much as possible, rather
than only to speakers' beliefs, as lexicons typically are. A comprehensive
evaluation was conducted, demonstrating that the current
version of the top-levels of the ontology can top the majority of the Arabic
meanings. The ontology consists currently of about 1,300 well-investigated
concepts in addition to 11,000 concepts that are partially validated. The
ontology is accessible and searchable through a lexicographic search engine
(https://ontology.birzeit.edu) that also includes about 150 Arabic-multilingual
lexicons, which are being mapped to and enriched using the ontology. The
ontology is fully mapped with Princeton WordNet, Wikidata, and other resources.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 16:27:44 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Jarrar",
"Mustafa",
""
]
] |
new_dataset
| 0.967104 |
2205.09685
|
Mustafa Jarrar
|
Moustafa Al-Hajj, Mustafa Jarrar
|
ArabGlossBERT: Fine-Tuning BERT on Context-Gloss Pairs for WSD
| null |
In Proceedings of the International Conference on Recent Advances
in Natural Language Processing (RANLP 2021), PP 40--48. (2021)
|
10.26615/978-954-452-072-4_005
| null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Using pre-trained transformer models such as BERT has proven to be effective
in many NLP tasks. This paper presents our work to fine-tune BERT models for
Arabic Word Sense Disambiguation (WSD). We treated the WSD task as a
sentence-pair binary classification task. First, we constructed a dataset of
labeled Arabic context-gloss pairs (~167k pairs) extracted from the Arabic
Ontology and the large lexicographic database available at Birzeit University.
Each pair was labeled as True or False, and target words in each context were
identified and annotated. Second, we used this dataset to fine-tune three
pre-trained Arabic BERT models. Third, we experimented with different
supervision signals to emphasize target words in context. Our experiments
achieved promising results (accuracy of 84%) despite using a large set of
senses in the experiment.
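
A minimal sketch of context-gloss pair classification with Hugging Face Transformers; the checkpoint name and the Arabic example pair below are assumptions for illustration, not the paper's exact setup.

```python
# Sentence-pair binary classification sketch (pip install transformers torch);
# the checkpoint and the Arabic example are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "aubmindlab/bert-base-arabertv02"  # one public Arabic BERT; others work too
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

context = "ذهب الولد إلى ضفة النهر"   # sentence containing the target word
gloss = "جانب النهر"                  # candidate sense definition
inputs = tok(context, gloss, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print("True-pair probability:", torch.softmax(logits, dim=-1)[0, 1].item())
```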
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 16:47:18 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Al-Hajj",
"Moustafa",
""
],
[
"Jarrar",
"Mustafa",
""
]
] |
new_dataset
| 0.999814 |
2205.09692
|
Mustafa Jarrar
|
Karim El Haff, Mustafa Jarrar, Tymaa Hammouda, Fadi Zaraket
|
Curras + Baladi: Towards a Levantine Corpus
| null |
In Proceedings of the International Conference on Language
Resources and Evaluation (LREC 2022), Marseille, France. (2022)
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The processing of the Arabic language is a complex field of research. This is
due to many factors, including the complex and rich morphology of Arabic, its
high degree of ambiguity, and the presence of several regional varieties that
need to be processed while taking into account their unique characteristics.
When its dialects are taken into account, this language pushes the limits of
NLP to find solutions to problems posed by its inherent nature. It is a
diglossic language; the standard language is used in formal settings and in
education and is quite different from the vernacular languages spoken in the
different regions and influenced by older languages that were historically
spoken in those regions. This should encourage NLP specialists to create
dialect-specific corpora such as the Palestinian morphologically annotated
Curras corpus of Birzeit University. In this work, we present the Lebanese
Corpus Baladi that consists of around 9.6K morphologically annotated tokens.
Since Lebanese and Palestinian dialects are part of the same Levantine
dialectal continuum, and thus highly mutually intelligible, our proposed corpus
was constructed to be used to (1) enrich Curras and transform it into a more
general Levantine corpus and (2) improve Curras by solving detected errors.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 16:53:04 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Haff",
"Karim El",
""
],
[
"Jarrar",
"Mustafa",
""
],
[
"Hammouda",
"Tymaa",
""
],
[
"Zaraket",
"Fadi",
""
]
] |
new_dataset
| 0.999772 |
2205.09747
|
Yu-Wei Chao
|
Yu-Wei Chao, Chris Paxton, Yu Xiang, Wei Yang, Balakumar
Sundaralingam, Tao Chen, Adithyavairavan Murali, Maya Cakmak, Dieter Fox
|
HandoverSim: A Simulation Framework and Benchmark for Human-to-Robot
Object Handovers
|
Accepted to ICRA 2022
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new simulation benchmark "HandoverSim" for human-to-robot
object handovers. To simulate the giver's motion, we leverage a recent motion
capture dataset of hand grasping of objects. We create training and evaluation
environments for the receiver with standardized protocols and metrics. We
analyze the performance of a set of baselines and show a correlation with a
real-world evaluation. Code is open-sourced at https://handover-sim.github.io.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 17:59:00 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Chao",
"Yu-Wei",
""
],
[
"Paxton",
"Chris",
""
],
[
"Xiang",
"Yu",
""
],
[
"Yang",
"Wei",
""
],
[
"Sundaralingam",
"Balakumar",
""
],
[
"Chen",
"Tao",
""
],
[
"Murali",
"Adithyavairavan",
""
],
[
"Cakmak",
"Maya",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.999747 |
2006.13597
|
Xingwen Zheng
|
Xingwen Zheng, Ningzhe Hou, Pascal Johannes Daniel Dinjens, Ruifeng
Wang, Chengyang Dong, and Guangming Xie
|
A Thermoplastic Elastomer Belt Based Robotic Gripper
|
Accepted by 2020 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
| null |
10.1109/IROS45743.2020.9341152
| null |
cs.RO physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Novel robotic grippers have attracted increasing interest recently because of
their ability to adapt to a variety of circumstances and their powerful
functionality. Unlike traditional grippers whose fingers are made of mechanical
components, novel robotic grippers typically employ new structures, materials,
and manufacturing processes. In this paper, a novel robotic gripper with an
external frame and an internal net made of thermoplastic elastomer belts is
proposed. The gripper grasps objects using the friction between the net and the
objects, and its flexible contact surface enables adaptive gripping. Stress
simulations were used to explore the relationship between the normal stress on
the net and the deformation of the net. Experiments are conducted on a variety
of objects to measure the force needed to reliably grip and hold each object.
Test results show that the gripper can successfully grip objects of varying
shapes, dimensions, and textures. The gripper shows promise for grasping
fragile objects in industry or in the field, and for grasping marine organisms
without harming them.
|
[
{
"version": "v1",
"created": "Wed, 24 Jun 2020 10:20:24 GMT"
},
{
"version": "v2",
"created": "Sat, 22 May 2021 23:23:29 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Zheng",
"Xingwen",
""
],
[
"Hou",
"Ningzhe",
""
],
[
"Dinjens",
"Pascal Johannes Daniel",
""
],
[
"Wang",
"Ruifeng",
""
],
[
"Dong",
"Chengyang",
""
],
[
"Xie",
"Guangming",
""
]
] |
new_dataset
| 0.998332 |
2106.13646
|
Suthee Ruangwises
|
Suthee Ruangwises
|
Two Standard Decks of Playing Cards are Sufficient for a ZKP for Sudoku
|
A shortened version of this paper has appeared at COCOON 2021
|
New Generation Computing, 40(1): 49-65 (2022)
|
10.1007/s00354-021-00146-y
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sudoku is a famous logic puzzle where the player has to fill a number between
1 and 9 into each empty cell of a $9 \times 9$ grid such that every number
appears exactly once in each row, each column, and each $3 \times 3$ block. In
2020, Sasaki et al. developed a physical card-based protocol of zero-knowledge
proof (ZKP) for Sudoku, which enables a prover to convince a verifier that
he/she knows a solution of the puzzle without revealing it. Their protocol uses
90 cards, but requires nine identical copies of some cards, which cannot be
found in a standard deck of playing cards (consisting of 52 different cards and
two jokers). Hence, nine identical standard decks are required to perform that
protocol, making the protocol not very practical. In this paper, we propose a
new ZKP protocol for Sudoku that can be performed using only two standard decks
of playing cards, regardless of whether the two decks are identical or
different. In general, we also develop the first ZKP protocol for a generalized
$n \times n$ Sudoku that can be performed using a deck of all different cards.
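
For reference, the statement the prover convinces the verifier of is ordinary Sudoku validity; a straightforward checker (ours, for illustration — the card-based protocol proves this without revealing the grid) looks like:

```python
# Plain validity check for a completed 9x9 Sudoku grid: every row, column,
# and 3x3 block must contain 1..9 exactly once. This is the relation the
# card-based ZKP proves knowledge of, not the protocol itself.
def is_valid_sudoku(grid: list[list[int]]) -> bool:
    target = set(range(1, 10))
    rows = all(set(row) == target for row in grid)
    cols = all({grid[r][c] for r in range(9)} == target for c in range(9))
    blocks = all(
        {grid[br + r][bc + c] for r in range(3) for c in range(3)} == target
        for br in (0, 3, 6) for bc in (0, 3, 6)
    )
    return rows and cols and blocks
```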
|
[
{
"version": "v1",
"created": "Fri, 25 Jun 2021 14:03:36 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Oct 2021 13:40:40 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Jan 2022 16:39:55 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Ruangwises",
"Suthee",
""
]
] |
new_dataset
| 0.985832 |
2107.08661
|
Ye Jia
|
Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, Roi Pomerantz
|
Translatotron 2: High-quality direct speech-to-speech translation with
voice preservation
|
ICML 2022
| null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Translatotron 2, a neural direct speech-to-speech translation
model that can be trained end-to-end. Translatotron 2 consists of a speech
encoder, a linguistic decoder, an acoustic synthesizer, and a single attention
module that connects them together. Experimental results on three datasets
consistently show that Translatotron 2 outperforms the original Translatotron
by a large margin on both translation quality (up to +15.5 BLEU) and speech
generation quality, and approaches that of cascade systems. In addition, we
propose a simple method for preserving speakers' voices from the source speech
in the translated speech in a different language. Unlike existing approaches,
the proposed method is able to preserve each speaker's voice across speaker
turns without requiring speaker segmentation. Furthermore, compared to existing
approaches, it better preserves speakers' privacy and mitigates potential
misuse of voice cloning for creating spoofed audio artifacts.
|
[
{
"version": "v1",
"created": "Mon, 19 Jul 2021 07:43:49 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jul 2021 06:03:56 GMT"
},
{
"version": "v3",
"created": "Sun, 19 Sep 2021 18:48:20 GMT"
},
{
"version": "v4",
"created": "Fri, 3 Dec 2021 18:40:32 GMT"
},
{
"version": "v5",
"created": "Tue, 17 May 2022 20:40:26 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Jia",
"Ye",
""
],
[
"Ramanovich",
"Michelle Tadmor",
""
],
[
"Remez",
"Tal",
""
],
[
"Pomerantz",
"Roi",
""
]
] |
new_dataset
| 0.999622 |
2108.10554
|
Julien Bensmail
|
Julien Bensmail (COATI), Hervé Hocquard (LaBRI), Dimitri Lajou
(LaBRI), Éric Sopena (LaBRI)
|
A proof of the Multiplicative 1-2-3 Conjecture
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove that the product version of the 1-2-3 Conjecture, raised by
Skowronek-Kaziów in 2012, is true. Namely, for every connected graph of order
at least 3, we prove that labels 1, 2, 3 can be assigned to the edges in such a
way that no two adjacent vertices are incident to the same product of labels.
|
[
{
"version": "v1",
"created": "Tue, 24 Aug 2021 07:42:31 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 08:02:35 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Bensmail",
"Julien",
"",
"COATI"
],
[
"Hocquard",
"Hervé",
"",
"LaBRI"
],
[
"Lajou",
"Dimitri",
"",
"LaBRI"
],
[
"Sopena",
"Éric",
"",
"LaBRI"
]
] |
new_dataset
| 0.994586 |
2111.12785
|
Zhiming Zhao
|
Zhiming Zhao, Spiros Koulouzis, Riccardo Bianchi, Siamak Farshidi,
Zeshun Shi, Ruyue Xin, Yuandou Wang, Na Li, Yifang Shi, Joris Timmermans, W.
Daniel Kissling
|
Notebook-as-a-VRE (NaaVRE): from private notebooks to a collaborative
cloud virtual research environment
|
A revised version has been published in the journal Software: Practice
and Experience
|
Softw Pract Exper.2022; 1-20
|
10.1002/spe.3098
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Virtual Research Environments (VREs) provide user-centric support in the
lifecycle of research activities, e.g., discovering and accessing research
assets, or composing and executing application workflows. A typical VRE is
often implemented as an integrated environment, which includes a catalog of
research assets, a workflow management system, a data management framework, and
tools for enabling collaboration among users. Notebook environments, such as
Jupyter, allow researchers to rapidly prototype scientific code and share their
experiments as online accessible notebooks. Jupyter can support several popular
languages that are used by data scientists, such as Python, R, and Julia.
However, such notebook environments do not have seamless support for running
heavy computations on remote infrastructure or finding and accessing software
code inside notebooks. This paper investigates the gap between a notebook
environment and a VRE and proposes an embedded VRE solution for the Jupyter
environment called Notebook-as-a-VRE (NaaVRE). The NaaVRE solution provides
functional components via a component marketplace and allows users to create a
customized VRE on top of the Jupyter environment. From the VRE, a user can
search research assets (data, software, and algorithms), compose workflows,
manage the lifecycle of an experiment, and share the results among users in the
community. We demonstrate how such a solution can enhance a legacy workflow
that uses Light Detection and Ranging (LiDAR) data from country-wide airborne
laser scanning surveys for deriving geospatial data products of ecosystem
structure at high resolution over broad spatial extents. This enables users to
scale out the processing of multi-terabyte LiDAR point clouds for ecological
applications to more data sources in a distributed cloud environment.
|
[
{
"version": "v1",
"created": "Wed, 24 Nov 2021 20:35:06 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 08:20:34 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Zhao",
"Zhiming",
""
],
[
"Koulouzis",
"Spiros",
""
],
[
"Bianchi",
"Riccardo",
""
],
[
"Farshidi",
"Siamak",
""
],
[
"Shi",
"Zeshun",
""
],
[
"Xin",
"Ruyue",
""
],
[
"Wang",
"Yuandou",
""
],
[
"Li",
"Na",
""
],
[
"Shi",
"Yifang",
""
],
[
"Timmermans",
"Joris",
""
],
[
"Kissling",
"W. Daniel",
""
]
] |
new_dataset
| 0.999195 |
2111.13063
|
Qingtian Zhu
|
Shuxue Peng, Zihang He, Haotian Zhang, Ran Yan, Chuting Wang, Qingtian
Zhu, Xiao Liu
|
MegLoc: A Robust and Accurate Visual Localization Pipeline
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a visual localization pipeline, namely MegLoc, for
robust and accurate 6-DoF pose estimation under varying scenarios, including
indoor and outdoor scenes, different times of day, different seasons of the
year, and even across years. MegLoc achieves state-of-the-art results
on a range of challenging datasets, including winning the Outdoor and Indoor
Visual Localization Challenge of ICCV 2021 Workshop on Long-term Visual
Localization under Changing Conditions, as well as the Re-localization
Challenge for Autonomous Driving of ICCV 2021 Workshop on Map-based
Localization for Autonomous Driving.
|
[
{
"version": "v1",
"created": "Thu, 25 Nov 2021 12:56:08 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 15:22:07 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Peng",
"Shuxue",
""
],
[
"He",
"Zihang",
""
],
[
"Zhang",
"Haotian",
""
],
[
"Yan",
"Ran",
""
],
[
"Wang",
"Chuting",
""
],
[
"Zhu",
"Qingtian",
""
],
[
"Liu",
"Xiao",
""
]
] |
new_dataset
| 0.968851 |
2202.04561
|
Ashwin Singh
|
Ashwin Singh, Arvindh Arun, Ayushi Jain, Pooja Desur, Pulak Malhotra,
Duen Horng Chau, Ponnurangam Kumaraguru
|
Erasing Labor with Labor: Dark Patterns and Lockstep Behaviors on Google
Play
| null | null |
10.1145/3511095.3536368
| null |
cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Google Play's policy forbids the use of incentivized installs, ratings, and
reviews to manipulate the placement of apps. However, there still exist apps
that incentivize installs for other apps on the platform. To understand how
install-incentivizing apps affect users, we examine their ecosystem through a
socio-technical lens and perform a mixed-methods analysis of their reviews and
permissions. Our dataset contains 319K reviews collected daily over five months
from 60 such apps that cumulatively account for over 160.5M installs. We
perform qualitative analysis of reviews to reveal various types of dark
patterns that developers incorporate in install-incentivizing apps,
highlighting their normative concerns at both user and platform levels.
Permissions requested by these apps validate our discovery of dark patterns,
with over 92% apps accessing sensitive user information. We find evidence of
fraudulent reviews on install-incentivizing apps, following which we model them
as an edge stream in a dynamic bipartite graph of apps and reviewers. Our
proposed reconfiguration of a state-of-the-art microcluster anomaly detection
algorithm yields promising preliminary results in detecting this fraud. We
discover highly significant lockstep behaviors exhibited by reviews that aim to
boost the overall rating of an install-incentivizing app. Upon evaluating the
50 most suspicious clusters of boosting reviews detected by the algorithm, we
find (i) near-identical pairs of reviews across 94% of clusters (47 of 50),
and (ii) over 35% of reviews (1,687 of 4,717) forming near-identical pairs
within their cluster. Finally, we conclude with a discussion of how fraud is
intertwined with labor and poses a threat to the trust and transparency of
Google Play.
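
A minimal sketch of flagging near-identical review pairs, using a plain string-similarity ratio; the threshold and matcher are our illustrative choices, not the paper's detection algorithm.

```python
# Flag near-identical review pairs within a cluster using difflib's
# similarity ratio; the 0.9 threshold is an illustrative assumption.
from difflib import SequenceMatcher
from itertools import combinations

def near_identical_pairs(reviews: list[str], threshold: float = 0.9):
    for a, b in combinations(reviews, 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            yield a, b

cluster = ["Great app, gives real rewards!",
           "Great app gives real rewards!!",
           "Terrible, lost my points."]
print(list(near_identical_pairs(cluster)))
```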
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 16:54:27 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2022 22:10:54 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Singh",
"Ashwin",
""
],
[
"Arun",
"Arvindh",
""
],
[
"Jain",
"Ayushi",
""
],
[
"Desur",
"Pooja",
""
],
[
"Malhotra",
"Pulak",
""
],
[
"Chau",
"Duen Horng",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
] |
new_dataset
| 0.999496 |
2203.02397
|
Olga Taran
|
Olga Taran, Joakim Tutt, Taras Holotyak, Roman Chaban, Slavi Bonev,
Slava Voloshynovskiy
|
Mobile authentication of copy detection patterns
| null | null | null | null |
cs.CR cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In recent years, copy detection patterns (CDP) have attracted a lot of
attention as a link between the physical and digital worlds, which is of great
interest for the internet of things and brand protection applications. However,
the security of CDP in terms of their reproducibility by unauthorized parties,
i.e., their clonability, remains largely unexplored. In this respect, this
paper addresses the problem of anti-counterfeiting of physical objects and
investigates the authentication aspects of modern CDP and their resistance to
illegal copying from a machine learning perspective. Special attention is paid
to reliable authentication under real-life verification conditions, where the
codes are printed on an industrial printer and enrolled via modern mobile
phones under regular light conditions. The theoretical and empirical
investigation of the authentication aspects of CDP is performed with respect to
four types of copy fakes, from the point of view of (i) multi-class supervised
classification as a baseline approach and (ii) one-class classification as a
real-life application case. The obtained results show that modern
machine-learning approaches and the technical capacities of modern mobile
phones make it possible to reliably authenticate CDP on end-user mobile phones
under the considered classes of fakes.
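
A minimal sketch of the one-class setting described in (ii), using scikit-learn's OneClassSVM on placeholder feature vectors; the features and hyperparameters are illustrative assumptions, not the paper's pipeline.

```python
# One-class authentication sketch (pip install scikit-learn numpy): train
# only on features of genuine CDP, then flag fakes as outliers. Features
# and hyperparameters here are illustrative placeholders.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(200, 16))   # stand-in descriptors of originals
fakes = rng.normal(3.0, 1.0, size=(20, 16))      # stand-in descriptors of copies

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(genuine)
print("genuine accepted:", (clf.predict(genuine) == 1).mean())
print("fakes rejected:  ", (clf.predict(fakes) == -1).mean())
```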
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 16:07:26 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 11:41:01 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Taran",
"Olga",
""
],
[
"Tutt",
"Joakim",
""
],
[
"Holotyak",
"Taras",
""
],
[
"Chaban",
"Roman",
""
],
[
"Bonev",
"Slavi",
""
],
[
"Voloshynovskiy",
"Slava",
""
]
] |
new_dataset
| 0.996131 |
2204.01899
|
Jui-Hsien Wang
|
Paul Liu and Jui-Hsien Wang
|
MonoTrack: Shuttle trajectory reconstruction from monocular badminton
video
|
To appear in CVSports@CVPR 2022
| null | null | null |
cs.CV cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Trajectory estimation is a fundamental component of racket sport analytics,
as the trajectory contains information not only about the winning and losing of
each point, but also how it was won or lost. In sports such as badminton,
players benefit from knowing the full 3D trajectory, as the height of
shuttlecock or ball provides valuable tactical information. Unfortunately, 3D
reconstruction is a notoriously hard problem, and standard trajectory
estimators can only track 2D pixel coordinates. In this work, we present the
first complete end-to-end system for the extraction and segmentation of 3D
shuttle trajectories from monocular badminton videos. Our system integrates
badminton domain knowledge such as court dimensions, shot placement, and the
physical laws of motion, along with vision-based features such as player poses
and shuttle tracking. We find that significant engineering efforts and model
improvements are needed to make the overall system robust, and as a by-product
of our work, improve state-of-the-art results on court recognition, 2D
trajectory estimation, and hit recognition.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 23:57:57 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 17:59:57 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Liu",
"Paul",
""
],
[
"Wang",
"Jui-Hsien",
""
]
] |
new_dataset
| 0.999598 |
2204.03207
|
Ziad Ashour
|
Ziad Ashour, Zohreh Shaghaghian, Wei Yan
|
BIMxAR: BIM-Empowered Augmented Reality for Learning Architectural
Representations
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
A literature review shows limited research investigating the use of Augmented
Reality (AR) to improve learning and understanding of architectural
representations, specifically section views. In this study, we present an AR
system prototype (BIMxAR), its new and accurate building-scale registration
method, and its novel visualization features that facilitate the comprehension
of building construction systems, materials configuration, and 3D section views
of complex structures through the integration of AR, Building Information
Modeling (BIM), and physical buildings. A pilot user study found improvements,
though not statistically significant, in scores on the Santa Barbara Solids
Test (SBST) and the Architectural Representations Test (ART) after students
studied building section views in a physical building with AR. When
incorporating time as a performance factor, the ART timed scores show a
significant improvement in the posttest session. BIMxAR has the potential to
enhance students' spatial abilities, particularly in understanding buildings
and complex section views.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 04:32:43 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 15:40:26 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Ashour",
"Ziad",
""
],
[
"Shaghaghian",
"Zohreh",
""
],
[
"Yan",
"Wei",
""
]
] |
new_dataset
| 0.999372 |
2205.08585
|
Ruibo Shi
|
Ruibo Shi, Lili Tao, Rohan Saphal, Fran Silavong, Sean J. Moran
|
CV4Code: Sourcecode Understanding via Visual Code Representations
| null | null | null | null |
cs.SE cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present CV4Code, a compact and effective computer vision method for
sourcecode understanding. Our method leverages the contextual and the
structural information available from the code snippet by treating each snippet
as a two-dimensional image, which naturally encodes the context and retains the
underlying structural information through an explicit spatial representation.
To codify snippets as images, we propose an ASCII codepoint-based image
representation that facilitates fast generation of sourcecode images and
eliminates redundancy in the encoding that would arise from an RGB pixel
representation. Furthermore, as sourcecode is treated as images, neither
lexical analysis (tokenisation) nor syntax tree parsing is required, which
makes the proposed method agnostic to any particular programming language and
lightweight from the application pipeline point of view. CV4Code can even
featurise syntactically incorrect code, which is not possible for methods that
depend on the Abstract Syntax Tree (AST). We demonstrate the effectiveness of
CV4Code by learning Convolutional and Transformer networks to predict the
functional task, i.e. the problem it solves, of the source code directly from
its two-dimensional representation, and using an embedding from its latent
space to derive a similarity score of two code snippets in a retrieval setup.
Experimental results show that our approach achieves state-of-the-art
performance in comparison to other methods with the same task and data
configurations. For the first time, we show the benefits of treating sourcecode
understanding as an image processing task.
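
A minimal sketch of rendering a snippet as a 2D codepoint "image" as described above; the fixed height/width and zero-padding value are our illustrative choices, not the paper's exact encoding.

```python
# Render source code as a 2D array of ASCII codepoints (one row per line,
# one column per character); height/width and zero-padding are assumptions.
import numpy as np

def code_to_image(src: str, height: int = 8, width: int = 32) -> np.ndarray:
    img = np.zeros((height, width), dtype=np.uint8)  # 0 = padding
    for r, line in enumerate(src.splitlines()[:height]):
        for c, ch in enumerate(line[:width]):
            img[r, c] = min(ord(ch), 127)            # clamp to ASCII range
    return img

snippet = "def add(a, b):\n    return a + b\n"
print(code_to_image(snippet)[:2, :16])
```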
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 13:02:35 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Shi",
"Ruibo",
""
],
[
"Tao",
"Lili",
""
],
[
"Saphal",
"Rohan",
""
],
[
"Silavong",
"Fran",
""
],
[
"Moran",
"Sean J.",
""
]
] |
new_dataset
| 0.99974 |
2205.08587
|
Abdulaziz Al-Meer
|
Abdulaziz Al-Meer, Saif Al-Kuwari
|
Physical Unclonable Functions (PUF) for IoT Devices
|
21 pages, 6 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical Unclonable Functions (PUFs) have recently attracted interest from
both industry and academia as a potential alternative for securing Internet of
Things (IoT) devices, complementing the more traditional computational approach
based on conventional cryptography. PUFs are a promising solution for
lightweight security, where manufacturing-process fluctuations of ICs are used
to improve the security of IoT, as they provide low-complexity designs and
preserve secrecy. They require few computational resources, which avoids high
power consumption, and can be implemented in both Field Programmable Gate
Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs). In this
survey, we provide a comprehensive review of the state of the art of PUFs,
their architectures, protocols, and security for IoT.
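
A toy simulation of PUF-style challenge-response authentication, where per-device manufacturing variation is modeled as a device-unique random bit array; this illustrates the concept only, not any concrete PUF architecture from the survey.

```python
# Toy PUF-style authentication: manufacturing variation is modeled as a
# device-unique secret bit array; a challenge selects bit positions and the
# response is their parity. Conceptual illustration only, not a real PUF.
import secrets

class ToyPUF:
    def __init__(self, n_cells: int = 256):
        # stands in for uncontrollable per-chip variation
        self._cells = [secrets.randbits(1) for _ in range(n_cells)]

    def respond(self, challenge: list[int]) -> int:
        parity = 0
        for idx in challenge:
            parity ^= self._cells[idx % len(self._cells)]
        return parity

device = ToyPUF()
challenge = [3, 17, 42, 99]
enrolled = device.respond(challenge)          # stored by the verifier at enrollment
assert device.respond(challenge) == enrolled  # same device reproduces the response
```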
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 19:07:51 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Al-Meer",
"Abdulaziz",
""
],
[
"Al-Kuwari",
"Saif",
""
]
] |
new_dataset
| 0.998395 |
2205.08605
|
Tong Niu
|
Tong Niu, Kazuma Hashimoto, Yingbo Zhou, Caiming Xiong
|
OneAligner: Zero-shot Cross-lingual Transfer with One Rich-Resource
Language Pair for Low-Resource Sentence Retrieval
|
Accepted to Findings of ACL 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Aligning parallel sentences in multilingual corpora is essential to curating
data for downstream applications such as Machine Translation. In this work, we
present OneAligner, an alignment model specially designed for sentence
retrieval tasks. This model can be trained on only one language pair and
transfers, in a cross-lingual fashion, to low-resource language pairs with
negligible degradation in performance. When trained with all language pairs of
a large-scale parallel multilingual corpus (OPUS-100), this model achieves the
state-of-the-art result on the Tatoeba dataset, outperforming an equally-sized
previous model by 8.0 points in accuracy while using less than 0.6% of their
parallel data. When finetuned on a single rich-resource language pair, be it
English-centered or not, our model is able to match the performance of the ones
finetuned on all language pairs under the same data budget with less than 2.0
points decrease in accuracy. Furthermore, with the same setup, scaling up the
number of rich-resource language pairs monotonically improves the performance,
reaching a minimum of 0.4 points discrepancy in accuracy, making it less
mandatory to collect any low-resource parallel data. Finally, we conclude
through empirical results and analyses that the performance of the sentence
alignment task depends mostly on the monolingual and parallel data size, up to
a certain size threshold, rather than on what language pairs are used for
training or evaluation.
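
A minimal sketch of the sentence-retrieval task itself: given sentence embeddings, retrieve each source sentence's nearest target by cosine similarity. The random vectors below are placeholders, not OneAligner's embeddings.

```python
# Parallel-sentence retrieval by cosine similarity over embeddings; the
# random vectors stand in for a real multilingual encoder's outputs.
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(5, 64))   # embeddings of 5 source sentences
tgt = rng.normal(size=(5, 64))   # embeddings of 5 candidate translations

src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)

sims = src @ tgt.T               # cosine similarity matrix
matches = sims.argmax(axis=1)    # best target for each source
print(matches)
```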
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 19:52:42 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Niu",
"Tong",
""
],
[
"Hashimoto",
"Kazuma",
""
],
[
"Zhou",
"Yingbo",
""
],
[
"Xiong",
"Caiming",
""
]
] |
new_dataset
| 0.982431 |
2205.08640
|
Erik Antonsson Ph.D.
|
Erik K. Antonsson, Ph.D., P.E., N.A.E
|
A General Measure of Collision Hazard in Traffic
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A collision hazard measure that has the essential characteristics to provide a
measurement of safety useful to AV developers, traffic infrastructure
developers and managers, regulators, and the public is introduced here. The
Streetscope Collision Hazard Measure (SHM) overcomes the limitations of
existing measures and provides an independent leading indication of safety.
* Trailing indicators, such as collision statistics, incur pain and loss on
society, and are not an ethically acceptable approach.
* Near-misses have been shown to be effective predictors of incidents.
* Time-to-Collision (TTC) provides an ambiguous indication of collision
hazards, and requires assumptions about vehicle behavior.
* Responsibility-Sensitive Safety (RSS), because of its reliance on rules for
individual circumstances, will not scale up to handle the complexities of
traffic.
* The Instantaneous Safety Metric (ISM) relies on probabilistic predictions of
behaviors to categorize events (possible, imminent, critical), and does not
provide a quantitative measure of the severity of the hazard.
* Inertial Measurement Unit (IMU) acceleration data is not correlated with
hazard or risk.
* A new measure, based on the concept of near-misses, that incorporates both
proximity (separation distance) and motion (relative speed) is introduced.
* Near-miss data has been shown to be predictive of the likelihood and severity
of incidents.
The new measure presented here gathers movement data about vehicles
continuously, and a quantitative score reflecting the hazard encountered or
created (from which the riskiness or safeness of vehicle behavior can be
estimated) is computed nearly continuously.
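
A hypothetical sketch of a hazard score combining the two named ingredients, separation distance and relative speed; the functional form is our own illustration, since the abstract does not give SHM's actual formula.

```python
# Hypothetical hazard score combining proximity and motion: hazard grows as
# separation shrinks and closing speed rises. The functional form is an
# illustrative assumption; the actual SHM formula is not given in the abstract.
def hazard_score(separation_m: float, closing_speed_mps: float,
                 eps: float = 0.1) -> float:
    closing = max(closing_speed_mps, 0.0)   # only count approaching motion
    return closing / (separation_m + eps)

# near miss: 0.5 m apart, closing at 10 m/s  -> high score
# routine pass: 30 m apart, closing at 2 m/s -> low score
print(hazard_score(0.5, 10.0), hazard_score(30.0, 2.0))
```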
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 21:35:21 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Antonsson",
"Erik K.",
""
],
[
"D.",
"Ph.",
""
],
[
"E.",
"P.",
""
],
[
"E",
"N. A.",
""
]
] |
new_dataset
| 0.998914 |
2205.08659
|
Donald Dansereau
|
Tristan Frizza and Donald G. Dansereau and Nagita Mehr Seresht and
Michael Bewley
|
Semantically Accurate Super-Resolution Generative Adversarial Networks
|
11 pages, 7 figures
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work addresses the problems of semantic segmentation and image
super-resolution by jointly considering the performance of both in training a
Generative Adversarial Network (GAN). We propose a novel architecture and
domain-specific feature loss, allowing super-resolution to operate as a
pre-processing step to increase the performance of downstream computer vision
tasks, specifically semantic segmentation. We demonstrate this approach using
Nearmap's aerial imagery dataset which covers hundreds of urban areas at 5-7 cm
per pixel resolution. We show the proposed approach improves perceived image
quality as well as quantitative segmentation accuracy across all prediction
classes, yielding an average accuracy improvement of 11.8% and 108% at 4x and
32x super-resolution, compared with state-of-the-art single-network methods.
This work demonstrates that jointly considering image-based and task-specific
losses can improve the performance of both, and advances the state-of-the-art
in semantic-aware super-resolution of aerial imagery.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 23:05:27 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Frizza",
"Tristan",
""
],
[
"Dansereau",
"Donald G.",
""
],
[
"Seresht",
"Nagita Mehr",
""
],
[
"Bewley",
"Michael",
""
]
] |
new_dataset
| 0.997939 |
2205.08701
|
Subodh Mishra
|
Subodh Mishra and Srikanth Saripalli
|
Extrinsic Calibration of LiDAR, IMU and Camera
|
Workshop on Challenges in Sensor Calibration for Robotics
Applications at the 17th International Conference on Intelligent Autonomous
Systems (IAS-17), Zagreb, Croatia
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we present a novel method to jointly calibrate a sensor suite
consisting of a 3D-LiDAR, an Inertial Measurement Unit (IMU) and a camera under an
Extended Kalman Filter (EKF) framework. We exploit pairwise constraints between
the 3 sensor pairs to perform EKF update and experimentally demonstrate the
superior performance obtained with joint calibration as against individual
sensor pair calibration.
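For background, a minimal NumPy sketch of the standard EKF measurement update
that such a framework builds on; the pairwise LiDAR-IMU-camera constraint
models themselves are not reproduced here and would supply h, H and R.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Standard EKF measurement update.
    x: state mean, P: state covariance, z: measurement,
    h: measurement function, H: Jacobian of h at x, R: noise covariance."""
    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```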
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 03:20:15 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Mishra",
"Subodh",
""
],
[
"Saripalli",
"Srikanth",
""
]
] |
new_dataset
| 0.984766 |
2205.08775
|
Elena Atroshchenko
|
Chintan Jansari, St\'ephane P.A. Bordas, Elena Atroshchenko
|
Design of metamaterial-based heat manipulators by isogeometric shape
optimization
| null | null | null | null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
There has been a growing interest in controlled heat flux manipulation to
increase the efficiency of thermal apparatus. Heat manipulators control and
direct heat flow. A key to the effective performance of these heat
manipulators is their thermal design. Such designs can be achieved by a
periodic assembly of unit cells (known as metamaterials or meta-structure),
whose geometry and material properties can be optimized for a specific
objective. In this work, we focus on thermal metamaterial-based heat
manipulators such as the thermal concentrator (which concentrates the heat
flux in a specified region of the domain). The main scope of the current work
is to optimize the shape of the heat manipulators using the Particle Swarm
Optimization (PSO) method. The geometry is defined using NURBS basis functions
due to their higher smoothness and continuity, and the thermal boundary value
problem is solved using Isogeometric Analysis (IGA). Often, nodes as design
variables (as in the Lagrange finite element method) generate serrated
boundary shapes which need to be smoothed later. For the NURBS-based boundary with the
control points as design variables, the required smoothness can be predefined
through knot vectors and smoothening in the post-processing can be avoided. The
optimized shape generated by PSO is compared with the other shape exploited in
the literature. The effects of the number of design variables, the thermal
conductivity of the materials used, as well as some of the geometry parameters
on the optimum shapes are also demonstrated.
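A minimal, generic PSO loop in Python for intuition; the objective would
evaluate the IGA-solved thermal problem for a given set of NURBS control
points, and the hyperparameters below are illustrative, not the paper's
settings.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0, seed=0):
    """Generic particle swarm minimization over `dim` design variables
    (e.g. NURBS control point coordinates)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest
```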
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 07:50:40 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Jansari",
"Chintan",
""
],
[
"Bordas",
"Stéphane P. A.",
""
],
[
"Atroshchenko",
"Elena",
""
]
] |
new_dataset
| 0.999084 |
2205.08811
|
Pengyuan Wang
|
Pengyuan Wang, HyunJun Jung, Yitong Li, Siyuan Shen, Rahul
Parthasarathy Srikanth, Lorenzo Garattoni, Sven Meier, Nassir Navab, Benjamin
Busam
|
PhoCaL: A Multi-Modal Dataset for Category-Level Object Pose Estimation
with Photometrically Challenging Objects
|
11 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Object pose estimation is crucial for robotic applications and augmented
reality. Beyond instance level 6D object pose estimation methods, estimating
category-level pose and shape has become a promising trend. As such, a new
research field needs to be supported by well-designed datasets. To provide a
benchmark with high-quality ground truth annotations to the community, we
introduce a multimodal dataset for category-level object pose estimation with
photometrically challenging objects termed PhoCaL. PhoCaL comprises 60 high
quality 3D models of household objects over 8 categories including highly
reflective, transparent and symmetric objects. We developed a novel
robot-supported multi-modal (RGB, depth, polarisation) data acquisition and
annotation process. It ensures sub-millimeter accuracy of the pose for opaque
textured, shiny and transparent objects, no motion blur and perfect camera
synchronisation. To set a benchmark for our dataset, state-of-the-art RGB-D and
monocular RGB methods are evaluated on the challenging scenes of PhoCaL.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 09:21:09 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Wang",
"Pengyuan",
""
],
[
"Jung",
"HyunJun",
""
],
[
"Li",
"Yitong",
""
],
[
"Shen",
"Siyuan",
""
],
[
"Srikanth",
"Rahul Parthasarathy",
""
],
[
"Garattoni",
"Lorenzo",
""
],
[
"Meier",
"Sven",
""
],
[
"Navab",
"Nassir",
""
],
[
"Busam",
"Benjamin",
""
]
] |
new_dataset
| 0.999866 |
2205.08847
|
Rami Ariss
|
Kai-Ling Lo, Rami Ariss, Philipp Kurz
|
GPoeT-2: A GPT-2 Based Poem Generator
|
Carnegie Mellon University 11-785: Intro to Deep Learning Final
Project
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This project aims to produce the next volume of machine-generated poetry, a
complex art form that can be structured and unstructured, and carries depth in
the meaning between the lines. GPoeT-2 is based on fine-tuning a state of the
art natural language model (i.e. GPT-2) to generate limericks, typically
humorous structured poems consisting of five lines with an AABBA rhyming scheme.
With a two-stage generation system utilizing both forward and reverse language
modeling, GPoeT-2 is capable of freely generating limericks on diverse topics
while following the rhyming structure without any seed phrase or a posteriori
constraints. Based on the automated generation process, we explore a wide
variety of evaluation metrics to quantify "good poetry," including syntactical
correctness, lexical diversity, and subject continuity. Finally, we present a
collection of 94 categorized limericks that rank highly on the explored "good
poetry" metrics to provoke human creativity.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 10:25:12 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Lo",
"Kai-Ling",
""
],
[
"Ariss",
"Rami",
""
],
[
"Kurz",
"Philipp",
""
]
] |
new_dataset
| 0.991553 |
2205.08852
|
Tanjila Mawla
|
Tanjila Mawla, Maanak Gupta, and Ravi Sandhu
|
BlueSky: Activity Control: A Vision for "Active" Security Models for
Smart Collaborative Systems
| null | null |
10.1145/3532105.3535017
| null |
cs.CR cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Cyber physical ecosystem connects different intelligent devices over
heterogeneous networks. Various operations are performed on smart objects to
ensure efficiency and to support automation in smart environments. An Activity
(defined by Gupta and Sandhu) reflects the current state of an object, which
changes in response to requested operations. Since multiple activities run on
different objects, it is critical to secure collaborative systems by
considering run-time decisions impacted by related activities (and other
parameters), supporting active enforcement of access control decisions. Recently,
Gupta and Sandhu proposed Activity-Centric Access Control (ACAC) and discussed
the notion of activity as a prime abstraction for access control in
collaborative systems. The model provides an active security approach that
considers activity decision factors such as authorizations, obligations,
conditions, and dependencies among related device activities. This paper takes
a step forward and presents the core components of an ACAC model and compares
with other security models differentiating novel properties of ACAC. We
highlight how existing models do not (or in limited scope) support `active'
decision and enforcement of authorization in collaborative systems. We propose
a hierarchical structure for a family of ACAC models by gradually adding the
properties related to notion of activity and discuss states of an activity. We
highlight the convergence of ACAC with Zero Trust tenets to reflect how ACAC
supports necessary security posture of distributed and connected smart
ecosystems. This paper aims to gain a better understanding of ACAC in
collaborative systems supporting novel abstractions, properties and
requirements.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 10:34:25 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Mawla",
"Tanjila",
""
],
[
"Gupta",
"Maanak",
""
],
[
"Sandhu",
"Ravi",
""
]
] |
new_dataset
| 0.999659 |
2205.08868
|
Hamada Nayel
|
Nsrin Ashraf and Fathy Elkazaz and Mohamed Taha and Hamada Nayel and
Tarek Elshishtawy
|
BFCAI at SemEval-2022 Task 6: Multi-Layer Perceptron for Sarcasm
Detection in Arabic Texts
|
A description of the iSarcasm shared task submission, 4 pages, 1
figure
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the systems submitted to the iSarcasm shared task. The
aim of iSarcasm is to identify sarcastic content in Arabic and English text.
Our team participated in iSarcasm for the Arabic language. A multi-layer
machine-learning-based model has been submitted for Arabic sarcasm detection.
In this model, a TF-IDF vector space model has been used for feature
representation. The submitted system is simple and does not need any external
resources. The test results are encouraging.
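A minimal scikit-learn sketch in the spirit of the described system (TF-IDF
features feeding a multi-layer perceptron); the placeholder texts, analyzer
choice, and hyperparameters are assumptions, not the submitted configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Placeholder data; the shared-task tweets and labels would be loaded here.
texts = ["tweet one", "tweet two", "tweet three", "tweet four"]
labels = [1, 0, 1, 0]            # 1 = sarcastic, 0 = not sarcastic

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0),
)
model.fit(texts, labels)
print(model.predict(["a new tweet"]))
```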
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 11:33:07 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Ashraf",
"Nsrin",
""
],
[
"Elkazaz",
"Fathy",
""
],
[
"Taha",
"Mohamed",
""
],
[
"Nayel",
"Hamada",
""
],
[
"Elshishtawy",
"Tarek",
""
]
] |
new_dataset
| 0.99959 |
2205.08886
|
Teddy Cunningham
|
Teddy Cunningham, Konstantin Klemmer, Hongkai Wen, Hakan
Ferhatosmanoglu
|
GeoPointGAN: Synthetic Spatial Data with Local Label Differential
Privacy
| null | null | null | null |
cs.LG cs.AI cs.CR cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Synthetic data generation is a fundamental task for many data management and
data science applications. Spatial data is of particular interest, and its
sensitive nature often leads to privacy concerns. We introduce GeoPointGAN, a
novel GAN-based solution for generating synthetic spatial point datasets with
high utility and strong individual level privacy guarantees. GeoPointGAN's
architecture includes a novel point transformation generator that learns to
project randomly generated point co-ordinates into meaningful synthetic
co-ordinates that capture both microscopic (e.g., junctions, squares) and
macroscopic (e.g., parks, lakes) geographic features. We provide our privacy
guarantees through label local differential privacy, which is more practical
than traditional local differential privacy. We seamlessly integrate this level
of privacy into GeoPointGAN by augmenting the discriminator to the point level
and implementing a randomized response-based mechanism that flips the labels
associated with the 'real' and 'fake' points used in training. Extensive
experiments show that GeoPointGAN significantly outperforms recent solutions,
improving by up to 10 times compared to the most competitive baseline. We also
evaluate GeoPointGAN using range, hotspot, and facility location queries, which
confirm the practical effectiveness of GeoPointGAN for privacy-preserving
querying. The results illustrate that a strong level of privacy is achieved
with little-to-no adverse utility cost, which we explain through the
generalization and regularization effects that are realized by flipping the
labels of the data during training.
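The label-flipping mechanism described above is, at its core, binary
randomized response. A minimal sketch, assuming the standard keep-probability
e^eps / (1 + e^eps) for eps label local differential privacy (the paper's
exact parameterization may differ):

```python
import numpy as np

def flip_labels(labels, epsilon, seed=0):
    """Binary randomized response over 0/1 labels: keep each label with
    probability p = e^eps / (1 + e^eps), flip it otherwise."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    keep = rng.random(labels.shape) < p_keep
    return np.where(keep, labels, 1 - labels)
```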
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 12:18:01 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Cunningham",
"Teddy",
""
],
[
"Klemmer",
"Konstantin",
""
],
[
"Wen",
"Hongkai",
""
],
[
"Ferhatosmanoglu",
"Hakan",
""
]
] |
new_dataset
| 0.992848 |
2205.08959
|
Han Sun
|
Yuhan Lin, Han Sun, Ningzhong Liu, Yetong Bian, Jun Cen, Huiyu Zhou
|
A lightweight multi-scale context network for salient object detection
in optical remote sensing images
|
accepted by ICPR2022, source code, see
https://github.com/NuaaYH/MSCNet
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the more dramatic multi-scale variations and more complicated
foregrounds and backgrounds in optical remote sensing images (RSIs), the
salient object detection (SOD) for optical RSIs becomes a huge challenge.
However, different from natural scene images (NSIs), the discussion on the
optical RSI SOD task still remains scarce. In this paper, we propose a
multi-scale context network, namely MSCNet, for SOD in optical RSIs.
Specifically, a multi-scale context extraction module is adopted to address the
scale variation of salient objects by effectively learning multi-scale
contextual information. Meanwhile, in order to accurately detect complete
salient objects in complex backgrounds, we design an attention-based pyramid
feature aggregation mechanism for gradually aggregating and refining the
salient regions from the multi-scale context extraction module. Extensive
experiments on two benchmarks demonstrate that MSCNet achieves competitive
performance with only 3.26M parameters. The code will be available at
https://github.com/NuaaYH/MSCNet.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 14:32:47 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Lin",
"Yuhan",
""
],
[
"Sun",
"Han",
""
],
[
"Liu",
"Ningzhong",
""
],
[
"Bian",
"Yetong",
""
],
[
"Cen",
"Jun",
""
],
[
"Zhou",
"Huiyu",
""
]
] |
new_dataset
| 0.999653 |
2205.09068
|
Kennard Ng Pool Hua
|
Kennard Ng, Ser-Nam Lim, Gim Hee Lee
|
VRAG: Region Attention Graphs for Content-Based Video Retrieval
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Content-based Video Retrieval (CBVR) is used on media-sharing platforms for
applications such as video recommendation and filtering. To manage databases
that scale to billions of videos, video-level approaches that use fixed-size
embeddings are preferred due to their efficiency. In this paper, we introduce
Video Region Attention Graph Networks (VRAG), which improve the state-of-the-art
of video-level methods. We represent videos at a finer granularity via
region-level features and encode video spatio-temporal dynamics through
region-level relations. Our VRAG captures the relationships between regions
based on their semantic content via self-attention and the permutation
invariant aggregation of Graph Convolution. In addition, we show that the
performance gap between video-level and frame-level methods can be reduced by
segmenting videos into shots and using shot embeddings for video retrieval. We
evaluate our VRAG over several video retrieval tasks and achieve a new
state-of-the-art for video-level retrieval. Furthermore, our shot-level VRAG
shows higher retrieval precision than other existing video-level methods, and
closer performance to frame-level methods at faster evaluation speeds. Finally,
our code will be made publicly available.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 16:50:45 GMT"
}
] | 2022-05-19T00:00:00 |
[
[
"Ng",
"Kennard",
""
],
[
"Lim",
"Ser-Nam",
""
],
[
"Lee",
"Gim Hee",
""
]
] |
new_dataset
| 0.992158 |
2005.12660
|
Paschalis Bizopoulos
|
Paschalis Bizopoulos
|
A Makefile for Developing Containerized LaTeX Technical Documents
|
3 pages, 3 figures, 1 table
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a Makefile for developing containerized $\LaTeX$ technical
documents. The Makefile allows the author to execute the code that generates
variables, tables and figures (results), which are then used during the
$\LaTeX$ compilation, to produce either the draft (fast) or full (slow) version
of the document. We also present various utilities that aid in automating the
results generation and improve the reproducibility of the document. We release
an open source repository of a template that uses the Makefile and demonstrate
its use by developing this paper.
|
[
{
"version": "v1",
"created": "Tue, 26 May 2020 12:31:22 GMT"
},
{
"version": "v2",
"created": "Wed, 27 May 2020 07:59:43 GMT"
},
{
"version": "v3",
"created": "Thu, 28 May 2020 13:45:48 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Dec 2020 15:03:28 GMT"
},
{
"version": "v5",
"created": "Thu, 19 Aug 2021 06:20:38 GMT"
},
{
"version": "v6",
"created": "Sun, 29 Aug 2021 07:32:33 GMT"
},
{
"version": "v7",
"created": "Mon, 21 Mar 2022 18:17:43 GMT"
},
{
"version": "v8",
"created": "Mon, 16 May 2022 18:14:49 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Bizopoulos",
"Paschalis",
""
]
] |
new_dataset
| 0.996524 |
2102.01558
|
Jiyang Qi
|
Jiyang Qi, Yan Gao, Yao Hu, Xinggang Wang, Xiaoyu Liu, Xiang Bai,
Serge Belongie, Alan Yuille, Philip H.S. Torr, Song Bai
|
Occluded Video Instance Segmentation: A Benchmark
|
IJCV 2022. Project page at https://songbai.site/ovis
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Can our video understanding systems perceive objects when a heavy occlusion
exists in a scene?
To answer this question, we collect a large-scale dataset called OVIS for
occluded video instance segmentation, that is, to simultaneously detect,
segment, and track instances in occluded scenes. OVIS consists of 296k
high-quality instance masks from 25 semantic categories, where object
occlusions usually occur. While our human vision systems can understand those
occluded instances by contextual reasoning and association, our experiments
suggest that current video understanding systems cannot. On the OVIS dataset,
the highest AP achieved by state-of-the-art algorithms is only 16.3, which
reveals that we are still at a nascent stage for understanding objects,
instances, and videos in a real-world scenario. We also present a simple
plug-and-play module that performs temporal feature calibration to complement
missing object cues caused by occlusion. Built upon MaskTrack R-CNN and
SipMask, we obtain a remarkable AP improvement on the OVIS dataset. The OVIS
dataset and project code are available at http://songbai.site/ovis .
|
[
{
"version": "v1",
"created": "Tue, 2 Feb 2021 15:35:43 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Feb 2021 08:10:55 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Feb 2021 12:20:37 GMT"
},
{
"version": "v4",
"created": "Tue, 30 Mar 2021 04:07:27 GMT"
},
{
"version": "v5",
"created": "Mon, 15 Nov 2021 16:31:44 GMT"
},
{
"version": "v6",
"created": "Tue, 17 May 2022 16:14:10 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Qi",
"Jiyang",
""
],
[
"Gao",
"Yan",
""
],
[
"Hu",
"Yao",
""
],
[
"Wang",
"Xinggang",
""
],
[
"Liu",
"Xiaoyu",
""
],
[
"Bai",
"Xiang",
""
],
[
"Belongie",
"Serge",
""
],
[
"Yuille",
"Alan",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Bai",
"Song",
""
]
] |
new_dataset
| 0.999706 |
2106.10782
|
Hao Chen
|
Hao Chen
|
Coordinate-ordering-free Upper Bounds for Linear Insertion-Deletion
Codes
|
8 pages
|
IEEE Transactions on Information Theory 2022
| null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Insertion-deletion codes were motivated by the need to correct synchronization
errors. In this paper we prove several coordinate-ordering-free upper bounds on
the insdel distances of linear codes, which are based on the generalized
Hamming weights and the formation of minimum Hamming weight codewords. Our
bounds are stronger than some previous known bounds. We apply these upper
bounds to some cyclic codes and one algebraic-geometric code with any
rearrangement of coordinate positions. Some strong upper bounds on the insdel
distances of Reed-Muller codes with special coordinate-ordering are also given.
|
[
{
"version": "v1",
"created": "Mon, 21 Jun 2021 00:37:35 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Jun 2021 08:42:49 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Jul 2021 08:42:48 GMT"
},
{
"version": "v4",
"created": "Fri, 13 Aug 2021 09:14:59 GMT"
},
{
"version": "v5",
"created": "Sat, 25 Sep 2021 07:15:33 GMT"
},
{
"version": "v6",
"created": "Mon, 16 May 2022 23:40:40 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Chen",
"Hao",
""
]
] |
new_dataset
| 0.994024 |
2108.03861
|
Shangbin Feng
|
Shangbin Feng, Zilong Chen, Wenqian Zhang, Qingyao Li, Qinghua Zheng,
Xiaojun Chang, Minnan Luo
|
KGAP: Knowledge Graph Augmented Political Perspective Detection in News
Media
| null | null | null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Identifying political perspectives in news media has become an important task
due to the rapid growth of political commentary and the increasingly polarized
political ideologies. Previous approaches focus on textual content and leave
out the rich social and political context that is essential in the perspective
detection process. To address this limitation, we propose KGAP, a political
perspective detection method that incorporates external domain knowledge.
Specifically, we construct a political knowledge graph to serve as
domain-specific external knowledge. We then construct heterogeneous information
networks to represent news documents, which jointly model news text and
external knowledge. Finally, we adopt relational graph neural networks and
conduct political perspective detection as graph-level classification.
Extensive experiments demonstrate that our method consistently achieves the
best performance on two real-world perspective detection benchmarks. Ablation
studies further bear out the necessity of external knowledge and the
effectiveness of our graph-based approach.
|
[
{
"version": "v1",
"created": "Mon, 9 Aug 2021 08:05:56 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Sep 2021 08:15:07 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Jan 2022 13:11:35 GMT"
},
{
"version": "v4",
"created": "Tue, 17 May 2022 07:48:24 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Feng",
"Shangbin",
""
],
[
"Chen",
"Zilong",
""
],
[
"Zhang",
"Wenqian",
""
],
[
"Li",
"Qingyao",
""
],
[
"Zheng",
"Qinghua",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Luo",
"Minnan",
""
]
] |
new_dataset
| 0.999746 |
2109.06838
|
Sayan Ghosh
|
Sayan Ghosh and Shashank Srivastava
|
ePiC: Employing Proverbs in Context as a Benchmark for Abstract Language
Understanding
|
ACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While large language models have shown exciting progress on several NLP
benchmarks, evaluating their ability for complex analogical reasoning remains
under-explored. Here, we introduce a high-quality crowdsourced dataset of
narratives for employing proverbs in context as a benchmark for abstract
language understanding. The dataset provides fine-grained annotation of aligned
spans between proverbs and narratives, and contains minimal lexical overlaps
between narratives and proverbs, ensuring that models need to go beyond
surface-level reasoning to succeed. We explore three tasks: (1) proverb
recommendation and alignment prediction, (2) narrative generation for a given
proverb and topic, and (3) identifying narratives with similar motifs. Our
experiments show that neural language models struggle on these tasks compared
to humans, and these tasks pose multiple learning challenges.
|
[
{
"version": "v1",
"created": "Tue, 14 Sep 2021 17:21:12 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Sep 2021 15:50:33 GMT"
},
{
"version": "v3",
"created": "Tue, 17 May 2022 14:07:17 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Ghosh",
"Sayan",
""
],
[
"Srivastava",
"Shashank",
""
]
] |
new_dataset
| 0.998907 |
2110.02274
|
Ava Chen
|
Ava Chen, Lauren Winterbottom, Katherine O'Reilly, Sangwoo Park, Dawn
Nilsen, Joel Stein, Matei Ciocarlie
|
Design of Spiral-Cable Forearm Exoskeleton to Assist Supination for
Hemiparetic Stroke Subjects
|
6 pages; Accepted to International Conference on Rehabilitation
Robotics (ICORR) 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the development of a cable-based passive forearm exoskeleton that
is designed to assist supination for hemiparetic stroke survivors. Our device
uniquely provides torque sufficient for counteracting spasticity within a
below-elbow apparatus. The mechanism consists of a spiral single-tendon routing
embedded in a rigid forearm brace and terminated at the hand and upper-forearm.
A spool with an internal releasable-ratchet mechanism allows the user to
manually retract the tendon and rotate the hand to counteract involuntary
pronation synergies due to stroke. We characterize the mechanism with benchtop
testing and five healthy subjects, and perform a preliminary assessment of the
exoskeleton with a single chronic stroke subject having minimal supination
ability. The mechanism can be integrated into an existing active hand-opening
orthosis to enable supination support during grasping tasks, and also allows
for a future actuated supination strategy.
|
[
{
"version": "v1",
"created": "Tue, 5 Oct 2021 18:27:30 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 18:13:44 GMT"
},
{
"version": "v3",
"created": "Mon, 16 May 2022 18:05:29 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Chen",
"Ava",
""
],
[
"Winterbottom",
"Lauren",
""
],
[
"O'Reilly",
"Katherine",
""
],
[
"Park",
"Sangwoo",
""
],
[
"Nilsen",
"Dawn",
""
],
[
"Stein",
"Joel",
""
],
[
"Ciocarlie",
"Matei",
""
]
] |
new_dataset
| 0.998906 |
2112.09202
|
Christoph Neuhauser
|
Junpeng Wang, Christoph Neuhauser, Jun Wu, Xifeng Gao and R\"udiger
Westermann
|
3D-TSV: The 3D Trajectory-based Stress Visualizer
|
13 pages
| null |
10.1016/j.advengsoft.2022.103144
| null |
cs.GR cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the 3D Trajectory-based Stress Visualizer (3D-TSV), a visual
analysis tool for the exploration of the principal stress directions in 3D
solids under load. 3D-TSV provides a modular and generic implementation of key
algorithms required for a trajectory-based visual analysis of principal stress
directions, including the automatic seeding of space-filling stress lines,
their extraction using numerical schemes, their mapping to an effective
renderable representation, and rendering options to convey structures with
special mechanical properties. In the design of 3D-TSV, several perceptual
challenges have been addressed when simultaneously visualizing three mutually
orthogonal stress directions via lines. We present a novel algorithm for
generating a space-filling and evenly spaced set of mutually orthogonal lines.
The algorithm further considers the locations of lines to obtain a more regular
appearance, and enables the extraction of a level-of-detail representation with
adjustable sparseness of the trajectories along a certain stress direction. To
convey ambiguities in the orientation of the principal stress directions, the
user can select a combined visualization of two principal directions via
oriented ribbons. Additional depth cues improve the perception of the spatial
relationships between trajectories. 3D-TSV is accessible to end users via a
C++- and OpenGL-based rendering frontend that is seamlessly connected to a
MatLab-based extraction backend. The code (BSD license) of 3D-TSV as well as
scripts to make ANSYS and ABAQUS simulation results accessible to the 3D-TSV
backend are publicly available.
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 21:07:24 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Dec 2021 10:23:27 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Feb 2022 18:59:15 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Wang",
"Junpeng",
""
],
[
"Neuhauser",
"Christoph",
""
],
[
"Wu",
"Jun",
""
],
[
"Gao",
"Xifeng",
""
],
[
"Westermann",
"Rüdiger",
""
]
] |
new_dataset
| 0.999751 |
2201.07287
|
Debarnab Mitra
|
Debarnab Mitra, Lev Tauz, Lara Dolecek
|
Polar Coded Merkle Tree: Improved Detection of Data Availability Attacks
in Blockchain Systems
|
9 pages, 4 figures, 2 tables, To appear in IEEE International
Symposium on Information Theory (ISIT) 2022
| null | null | null |
cs.IT cs.CR math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Light nodes in blockchain systems are known to be vulnerable to data
availability (DA) attacks where they accept an invalid block with unavailable
portions. Previous works have used LDPC and 2-D Reed Solomon (2D-RS) codes with
Merkle Trees to mitigate DA attacks. While these codes have demonstrated
improved performance across a variety of metrics such as DA detection
probability, they are difficult to apply to blockchains with large blocks due
to generally intractable code guarantees for large codelengths (LDPC), large
decoding complexity (2D-RS), or large coding fraud proof sizes (2D-RS). We
address these issues by proposing the novel Polar Coded Merkle Tree (PCMT)
which is a Merkle Tree built from the encoding graphs of polar codes and a
specialized polar code construction called Sampling-Efficient Freezing (SEF).
We demonstrate that the PCMT with SEF polar codes performs well in detecting DA
attacks for large block sizes.
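For background, a plain Merkle tree construction in Python; the PCMT replaces
this hash-tree layout with the encoding graph of a polar code, which this
classic sketch does not capture.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Classic (uncoded) Merkle root over a list of byte strings."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```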
|
[
{
"version": "v1",
"created": "Tue, 18 Jan 2022 19:54:59 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2022 19:23:23 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Mitra",
"Debarnab",
""
],
[
"Tauz",
"Lev",
""
],
[
"Dolecek",
"Lara",
""
]
] |
new_dataset
| 0.998565 |
2204.08182
|
Xun Wang
|
Xun Wang, Bingqing Ke, Xuanping Li, Fangyu Liu, Mingyu Zhang, Xiao
Liang, Qiushi Xiao, Cheng Luo, Yue Yu
|
Modality-Balanced Embedding for Video Retrieval
|
Accepted by SIGIR-2022, short paper
|
SIGIR, 2022
| null | null |
cs.CV cs.AI cs.IR stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Video search has become the main routine for users to discover videos
relevant to a text query on large short-video sharing platforms. During
training a query-video bi-encoder model using online search logs, we identify a
modality bias phenomenon that the video encoder almost entirely relies on text
matching, neglecting other modalities of the videos such as vision, audio. This
modality imbalance results from a) modality gap: the relevance between a query
and a video text is much easier to learn as the query is also a piece of text,
with the same modality as the video text; b) data bias: most training samples
can be solved solely by text matching. Here we share our practices to improve
the first retrieval stage including our solution for the modality imbalance
issue. We propose MBVR (short for Modality Balanced Video Retrieval) with two
key components: manually generated modality-shuffled (MS) samples and a dynamic
margin (DM) based on visual relevance. They can encourage the video encoder to
pay balanced attention to each modality. Through extensive experiments on a
real-world dataset, we show empirically that our method is both effective and
efficient in solving the modality bias problem. We have also deployed MBVR in a
large video platform and observed statistically significant boost over a highly
optimized baseline in an A/B test and manual GSB evaluations.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2022 06:29:46 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2022 06:38:48 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Wang",
"Xun",
""
],
[
"Ke",
"Bingqing",
""
],
[
"Li",
"Xuanping",
""
],
[
"Liu",
"Fangyu",
""
],
[
"Zhang",
"Mingyu",
""
],
[
"Liang",
"Xiao",
""
],
[
"Xiao",
"Qiushi",
""
],
[
"Luo",
"Cheng",
""
],
[
"Yu",
"Yue",
""
]
] |
new_dataset
| 0.981549 |
2205.04930
|
Mikhail Nesterenko
|
Joseph Oglio, Kendric Hood, Mikhail Nesterenko and Sebastien Tixeuil
|
QUANTAS: Quantitative User-friendly Adaptable Networked Things Abstract
Simulator
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We present QUANTAS: a simulator that enables quantitative performance
analysis of distributed algorithms. It has a number of attractive features.
QUANTAS is an abstract simulator, therefore, the obtained results are not
affected by the specifics of a particular network or operating system
architecture. QUANTAS allows distributed algorithms researchers to quickly
investigate a potential solution and collect data about its performance.
QUANTAS programming is relatively straightforward and is accessible to
theoretical researchers. To demonstrate QUANTAS capabilities, we implement and
compare the behavior of two representative examples from four major classes of
distributed algorithms: blockchains, distributed hash tables, consensus, and
reliable data link message transmission.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 14:37:17 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 14:57:01 GMT"
},
{
"version": "v3",
"created": "Mon, 16 May 2022 20:13:17 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Oglio",
"Joseph",
""
],
[
"Hood",
"Kendric",
""
],
[
"Nesterenko",
"Mikhail",
""
],
[
"Tixeuil",
"Sebastien",
""
]
] |
new_dataset
| 0.99894 |
2205.07557
|
Dominik Stammbach
|
Dominik Stammbach, Maria Antoniak, Elliott Ash
|
Heroes, Villains, and Victims, and GPT-3: Automated Extraction of
Character Roles Without Training Data
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper shows how to use large-scale pre-trained language models to
extract character roles from narrative texts without training data. Queried
with a zero-shot question-answering prompt, GPT-3 can identify the hero,
villain, and victim in diverse domains: newspaper articles, movie plot
summaries, and political speeches.
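A sketch of how such a zero-shot question-answering prompt could be
constructed; the template wording is hypothetical, not the authors' exact
prompt.

```python
def character_role_prompt(narrative: str, role: str) -> str:
    # Hypothetical template in the spirit of the paper.
    return (f"{narrative}\n\n"
            f"Question: Who is the {role} in this story?\n"
            f"Answer:")

print(character_role_prompt(
    "The mayor blocked the cleanup that residents had demanded.", "villain"))
```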
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 10:08:11 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2022 08:09:51 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Stammbach",
"Dominik",
""
],
[
"Antoniak",
"Maria",
""
],
[
"Ash",
"Elliott",
""
]
] |
new_dataset
| 0.959256 |
2205.07854
|
Haoteng Tang
|
Haoteng Tang, Xiyao Fu, Lei Guo, Yalin Wang, Scott Mackin, Olusola
Ajilore, Alex Leow, Paul Thompson, Heng Huang, Liang Zhan
|
Functional2Structural: Cross-Modality Brain Networks Representation
Learning
| null | null | null | null |
cs.LG cs.AI cs.CV eess.IV q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
MRI-based modeling of brain networks has been widely used to understand
functional and structural interactions and connections among brain regions, and
factors that affect them, such as brain development and disease. Graph mining
on brain networks may facilitate the discovery of novel biomarkers for clinical
phenotypes and neurodegenerative diseases. Since brain networks derived from
functional and structural MRI describe the brain topology from different
perspectives, exploring a representation that combines these cross-modality
brain networks is non-trivial. Most current studies aim to extract a fused
representation of the two types of brain network by projecting the structural
network to the functional counterpart. Since the functional network is dynamic
and the structural network is static, mapping a static object to a dynamic
object is suboptimal. However, mapping in the opposite direction is not
feasible due to the non-negativity requirement of current graph learning
techniques. Here, we propose a novel graph learning framework, known as Deep
Signed Brain Networks (DSBN), with a signed graph encoder that, from an
opposite perspective, learns the cross-modality representations by projecting
the functional network to the structural counterpart. We validate our framework
on clinical phenotype and neurodegenerative disease prediction tasks using two
independent, publicly available datasets (HCP and OASIS). The experimental
results clearly demonstrate the advantages of our model compared to several
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 03:45:36 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Tang",
"Haoteng",
""
],
[
"Fu",
"Xiyao",
""
],
[
"Guo",
"Lei",
""
],
[
"Wang",
"Yalin",
""
],
[
"Mackin",
"Scott",
""
],
[
"Ajilore",
"Olusola",
""
],
[
"Leow",
"Alex",
""
],
[
"Thompson",
"Paul",
""
],
[
"Huang",
"Heng",
""
],
[
"Zhan",
"Liang",
""
]
] |
new_dataset
| 0.997415 |
2205.07859
|
Dvij Kalaria
|
Dvij Kalaria
|
Btech thesis report on adversarial attack detection and purification of
adversarially attacked images
|
Btech thesis report of Dvij Kalaria, Indian Institute of Technology
Kharagpur. arXiv admin note: substantial text overlap with arXiv:2111.15518;
substantial text overlap with arXiv:1911.05268 by other authors
| null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
This is a Btech thesis report on the detection and purification of
adversarially attacked images. A deep learning model is trained on certain
training examples for various tasks such as classification, regression etc.
During training, weights are adjusted so that the model not only performs the
task well on training examples, as judged by a certain metric, but also
generalizes well to unseen examples, typically called the test data. Despite
the huge success of machine learning models on a wide range of tasks, security
has received a lot less attention over the years. Robustness against various
potential cyber attacks should also be a metric for evaluating machine
learning models. These cyber attacks can potentially lead to a variety of
negative impacts in sensitive real-world applications in which machine
learning is used, such as medical and transportation systems. Hence, it is a
necessity to secure the system from such attacks. In this report, I focus on a
class of these cyber attacks called adversarial attacks, in which the original
input sample is modified by small perturbations such that it still looks
visually the same to human beings but fools machine learning models. I discuss
two novel ways to counter adversarial attacks using autoencoders: 1) detecting
the presence of adversarial examples, and 2) purifying these examples to make
target classification models robust against such attacks.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 09:24:11 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Kalaria",
"Dvij",
""
]
] |
new_dataset
| 0.97814 |
2205.07861
|
Xiangheng He
|
Xiangheng He, Andreas Triantafyllopoulos, Alexander Kathan, Manuel
Milling, Tianhao Yan, Srividya Tirunellai Rajamani, Ludwig K\"uster, Mathias
Harrer, Elena Heber, Inga Grossmann, David D. Ebert, Bj\"orn W. Schuller
|
Depression Diagnosis and Forecast based on Mobile Phone Sensor Data
|
Accepted by EMBC 2022
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous studies have shown the correlation between sensor data collected
from mobile phones and human depression states. Compared to the traditional
self-assessment questionnaires, the passive data collected from mobile phones
is easier to access and less time-consuming. In particular, passive mobile
phone data can be collected on a flexible time interval, thus detecting
moment-by-moment psychological changes and helping achieve earlier
interventions. Moreover, while previous studies mainly focused on depression
diagnosis using mobile phone data, depression forecasting has not received
sufficient attention. In this work, we extract four types of passive features
from mobile phone data, including phone call, phone usage, user activity, and
GPS features. We implement a long short-term memory (LSTM) network in a
subject-independent 10-fold cross-validation setup to model both a diagnostic
and a forecasting tasks. Experimental results show that the forecasting task
achieves comparable results with the diagnostic task, which indicates the
possibility of forecasting depression from mobile phone sensor data. Our model
achieves an accuracy of 77.0 % for major depression forecasting (binary), an
accuracy of 53.7 % for depression severity forecasting (5 classes), and a best
RMSE score of 4.094 (PHQ-9, range from 0 to 27).
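A minimal PyTorch sketch of an LSTM over per-interval sensor feature vectors;
the feature count, hidden size, and classification head are illustrative, not
the paper's architecture.

```python
import torch
import torch.nn as nn

class SensorLSTM(nn.Module):
    """Sequence of per-interval feature vectors (call, usage,
    activity, GPS) -> class logits."""
    def __init__(self, n_features=4, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])           # last hidden state -> logits

logits = SensorLSTM()(torch.randn(8, 14, 4))   # e.g. 14 daily intervals
```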
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 10:05:36 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"He",
"Xiangheng",
""
],
[
"Triantafyllopoulos",
"Andreas",
""
],
[
"Kathan",
"Alexander",
""
],
[
"Milling",
"Manuel",
""
],
[
"Yan",
"Tianhao",
""
],
[
"Rajamani",
"Srividya Tirunellai",
""
],
[
"Küster",
"Ludwig",
""
],
[
"Harrer",
"Mathias",
""
],
[
"Heber",
"Elena",
""
],
[
"Grossmann",
"Inga",
""
],
[
"Ebert",
"David D.",
""
],
[
"Schuller",
"Björn W.",
""
]
] |
new_dataset
| 0.987516 |
2205.07872
|
Bhanu Pratap Singh Rawat
|
Bhanu Pratap Singh Rawat, Samuel Kovaly, Wilfred R. Pigeon, Hong Yu
|
ScAN: Suicide Attempt and Ideation Events Dataset
|
Paper accepted at NAACL 2022
| null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Suicide is an important public health concern and one of the leading causes
of death worldwide. Suicidal behaviors, including suicide attempts (SA) and
suicide ideations (SI), are leading risk factors for death by suicide.
Information related to patients' previous and current SA and SI are frequently
documented in the electronic health record (EHR) notes. Accurate detection of
such documentation may help improve surveillance and predictions of patients'
suicidal behaviors and alert medical professionals for suicide prevention
efforts. In this study, we first built Suicide Attempt and Ideation Events
(ScAN) dataset, a subset of the publicly available MIMIC III dataset spanning
over 12k+ EHR notes with 19k+ annotated SA and SI events information. The
annotations also contain attributes such as method of suicide attempt. We also
provide a strong baseline model ScANER (Suicide Attempt and Ideation Events
Retriever), a multi-task RoBERTa-based model with a retrieval module to
extract all the relevant suicidal behavioral evidence from the EHR notes of a
hospital stay, and a prediction module to identify the type of suicidal
behavior (SA and SI) concluded during the patient's stay at the hospital.
ScANER achieved a macro-weighted F1-score of 0.83 for identifying suicidal
behavioral evidence and macro F1-scores of 0.78 and 0.60 for the
classification of SA and SI during the patient's hospital stay, respectively.
ScAN and ScANER are
publicly available.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 17:11:07 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Rawat",
"Bhanu Pratap Singh",
""
],
[
"Kovaly",
"Samuel",
""
],
[
"Pigeon",
"Wilfred R.",
""
],
[
"Yu",
"Hong",
""
]
] |
new_dataset
| 0.99979 |
2205.07960
|
Badr AlKhamissi
|
Badr AlKhamissi, Mona Diab
|
Meta AI at Arabic Hate Speech 2022: MultiTask Learning with
Self-Correction for Hate Speech Classification
|
Accepted at the 5th Workshop on Open-Source Arabic Corpora and
Processing Tools (OSACT5/LREC 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we tackle the Arabic Fine-Grained Hate Speech Detection shared
task and demonstrate significant improvements over reported baselines for its
three subtasks. The tasks are to predict if a tweet contains (1) Offensive
language; and whether it is considered (2) Hate Speech or not and if so, then
predict the (3) Fine-Grained Hate Speech label from one of six categories. Our
final solution is an ensemble of models that employs multitask learning and a
self-consistency correction method yielding 82.7% on the hate speech subtask --
reflecting a 3.4% relative improvement compared to previous work.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 19:53:16 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"AlKhamissi",
"Badr",
""
],
[
"Diab",
"Mona",
""
]
] |
new_dataset
| 0.965457 |
2205.07970
|
Maur\'icio Gruppi
|
Maur\'icio Gruppi, Panayiotis Smeros, Sibel Adal{\i}, Carlos Castillo,
Karl Aberer
|
SciLander: Mapping the Scientific News Landscape
| null | null | null | null |
cs.CY cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
The COVID-19 pandemic has fueled the spread of misinformation on social media
and the Web as a whole. The phenomenon dubbed `infodemic' has taken the
challenges of information veracity and trust to new heights by massively
introducing seemingly scientific and technical elements into misleading
content. Despite the existing body of work on modeling and predicting
misinformation, the coverage of very complex scientific topics with inherent
uncertainty and an evolving set of findings, such as COVID-19, provides many
new challenges that are not easily solved by existing tools. To address these
issues, we introduce SciLander, a method for learning representations of news
sources reporting on science-based topics. SciLander extracts four
heterogeneous indicators for the news sources; two generic indicators that
capture (1) the copying of news stories between sources, and (2) the use of the
same terms to mean different things (i.e., the semantic shift of terms), and
two scientific indicators that capture (1) the usage of jargon and (2) the
stance towards specific citations. We use these indicators as signals of source
agreement, sampling pairs of positive (similar) and negative (dissimilar)
samples, and combine them in a unified framework to train unsupervised news
source embeddings with a triplet margin loss objective. We evaluate our method
on a novel COVID-19 dataset containing nearly 1M news articles from 500 sources
spanning a period of 18 months since the beginning of the pandemic in 2020. Our
results show that the features learned by our model outperform state-of-the-art
baseline methods on the task of news veracity classification. Furthermore, a
clustering analysis suggests that the learned representations encode
information about the reliability, political leaning, and partisanship bias of
these sources.
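The unified objective can be sketched with PyTorch's built-in triplet margin
loss over (anchor, similar, dissimilar) source embeddings; the random tensors
below are stand-ins for embeddings of news sources sampled via the agreement
indicators.

```python
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)
anchor, positive, negative = (torch.randn(32, 128, requires_grad=True)
                              for _ in range(3))
loss = triplet(anchor, positive, negative)   # pull similar sources together
loss.backward()
```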
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 20:20:43 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Gruppi",
"Maurício",
""
],
[
"Smeros",
"Panayiotis",
""
],
[
"Adalı",
"Sibel",
""
],
[
"Castillo",
"Carlos",
""
],
[
"Aberer",
"Karl",
""
]
] |
new_dataset
| 0.971386 |
2205.07991
|
Weikang Qiao
|
Weikang Qiao and Licheng Guo and Zhenman Fang and Mau-Chung Frank
Chang and Jason Cong
|
TopSort: A High-Performance Two-Phase Sorting Accelerator Optimized on
HBM-based FPGAs
| null | null | null | null |
cs.AR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of high-bandwidth memory (HBM) brings new opportunities to
boost the performance of sorting acceleration on FPGAs, which was
conventionally bounded by the available off-chip memory bandwidth. However, it
is nontrivial for designers to fully utilize this immense bandwidth. First, the
existing sorter designs cannot be directly scaled at the increasing rate of
available off-chip bandwidth, as the required on-chip resource usage grows at a
much faster rate and would bound the sorting performance in turn. Second,
designers need an in-depth understanding of HBM characteristics to effectively
utilize the HBM bandwidth. To tackle these challenges, we present TopSort, a
novel two-phase sorting solution optimized for HBM-based FPGAs. In the first
phase, 16 merge trees work in parallel to fully utilize 32 HBM channels. In the
second phase, TopSort reuses the logic from phase one to form a wider merge
tree to merge the partially sorted results from phase one. TopSort also adopts
HBM-specific optimizations to reduce resource overhead and improve bandwidth
utilization. TopSort can sort up to 4 GB data using all 32 HBM channels, with
an overall sorting performance of 15.6 GB/s. TopSort is 6.7x and 2.2x faster
than state-of-the-art CPU and FPGA sorters.
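A software analogue of the two-phase idea in Python: phase one produces 16
independent sorted runs (mirroring the 16 parallel merge trees), and phase two
merges them with one wide merge. This sketches the dataflow only, not the FPGA
design.

```python
import heapq

def two_phase_sort(data, n_chunks=16):
    size = -(-len(data) // n_chunks)            # ceil division
    runs = [sorted(data[i:i + size])            # phase one: independent runs
            for i in range(0, len(data), size)]
    return list(heapq.merge(*runs))             # phase two: one wide merge
```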
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 21:15:43 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Qiao",
"Weikang",
""
],
[
"Guo",
"Licheng",
""
],
[
"Fang",
"Zhenman",
""
],
[
"Chang",
"Mau-Chung Frank",
""
],
[
"Cong",
"Jason",
""
]
] |
new_dataset
| 0.960075 |
2205.08007
|
Randy Frans Fela
|
Randy F Fela, Andr\'eas Pastor, Patrick Le Callet, Nick Zacharov,
Toinon Vigier, S{\o}ren Forchhammer
|
Perceptual Evaluation on Audio-visual Dataset of 360 Content
|
6 pages, 5 figures, International Conference on Multimedia and Expo
2022
| null | null | null |
cs.MM cs.SD eess.AS eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To open up new possibilities to assess the multimodal perceptual quality of
omnidirectional media formats, we proposed a novel open source 360 audiovisual
(AV) quality dataset. The dataset consists of high-quality 360 video clips in
equirectangular (ERP) format and higher-order ambisonic (4th order) along with
the subjective scores. Three subjective quality experiments were conducted for
audio, video, and AV with the procedures detailed in this paper. Using the data
from subjective tests, we demonstrated that this dataset can be used to
quantify perceived audio, video, and audiovisual quality. The diversity and
discriminability of subjective scores were also analyzed. Finally, we
investigated how our dataset correlates with various objective quality metrics
of audio and video. Evidence from the results of this study implies that the
proposed dataset can benefit future studies on multimodal quality evaluation of
360 content.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 22:31:29 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Fela",
"Randy F",
""
],
[
"Pastor",
"Andréas",
""
],
[
"Callet",
"Patrick Le",
""
],
[
"Zacharov",
"Nick",
""
],
[
"Vigier",
"Toinon",
""
],
[
"Forchhammer",
"Søren",
""
]
] |
new_dataset
| 0.976783 |
2205.08025
|
Rahnuma Islam Nishat
|
Rahnuma Islam Nishat, Venkatesh Srinivasan, and Sue Whitesides
|
The Hamiltonian Path Graph is Connected for Simple $s,t$ Paths in
Rectangular Grid Graphs
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A \emph{simple} $s,t$ path $P$ in a rectangular grid graph $\mathbb{G}$ is a
Hamiltonian path from the top-left corner $s$ to the bottom-right corner $t$
such that each \emph{internal} subpath of $P$ with both endpoints $a$ and $b$
on the boundary of $\mathbb{G}$ has the minimum number of bends needed to
travel from $a$ to $b$ (i.e., $0$, $1$, or $2$ bends, depending on whether $a$
and $b$ are on opposite, adjacent, or the same side of the bounding rectangle).
Here, we show that $P$ can be reconfigured to any other simple $s,t$ path of
$\mathbb{G}$ by \emph{switching $2\times 2$ squares}, where at most
${5}|\mathbb{G}|/{4}$ such operations are required. Furthermore, each
\emph{square-switch} is done in $O(1)$ time and keeps the resulting path in the
same family of simple $s,t$ paths. Our reconfiguration result proves that the
\emph{Hamiltonian path graph} $\cal{G}$ for simple $s,t$ paths is connected and
has diameter at most ${5}|\mathbb{G}|/{4}$ which is asymptotically tight.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 23:34:07 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Nishat",
"Rahnuma Islam",
""
],
[
"Srinivasan",
"Venkatesh",
""
],
[
"Whitesides",
"Sue",
""
]
] |
new_dataset
| 0.998309 |
2205.08071
|
Michal Kepkowski
|
Michal Kepkowski, Lucjan Hanzlik, Ian Wood, and Mohamed Ali Kaafar
|
How Not to Handle Keys: Timing Attacks on FIDO Authenticator Privacy
|
to be published in the 22nd Privacy Enhancing Technologies Symposium
(PETS 2022)
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents a timing attack on the FIDO2 (Fast IDentity Online)
authentication protocol that allows attackers to link user accounts stored in
vulnerable authenticators, a serious privacy concern. FIDO2 is a new standard
specified by the FIDO industry alliance for secure token online authentication.
It complements the W3C WebAuthn specification by providing means to use a USB
token or other authenticator as a second factor during the authentication
process. From a cryptographic perspective, the protocol is a simple
challenge-response where the elliptic curve digital signature algorithm is used
to sign challenges. To protect the privacy of the user the token uses unique
key pairs per service. To accommodate for small memory, tokens use various
techniques that make use of a special parameter called a key handle sent by the
service to the token. We identify and analyse a vulnerability in the way the
processing of key handles is implemented that allows attackers to remotely link
user accounts on multiple services. We show that for vulnerable authenticators
there is a difference between the time it takes to process a key handle for a
different service but correct authenticator, and for a different authenticator
but correct service. This difference can be used to perform a timing attack
allowing an adversary to link user's accounts across services. We present
several real world examples of adversaries that are in a position to execute
our attack and can benefit from linking accounts. We found that two of the
eight hardware authenticators we tested were vulnerable despite FIDO level 1
certification. This vulnerability cannot be easily mitigated on authenticators
because, for security reasons, they usually do not allow firmware updates. In
addition, we show that due to the way existing browsers implement the WebAuthn
standard, the attack can be executed remotely.
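The core measurement behind such a timing attack can be sketched as follows;
`authenticate` is a hypothetical stand-in for a WebAuthn get-assertion round
trip to the token, and a gap between the two medians is the linkability
signal.

```python
import statistics
import time

def median_time(authenticate, key_handle, trials=50):
    """Median wall-clock time for the token to process one key handle."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        authenticate(key_handle)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# t_wrong_service = median_time(auth, handle_from_other_service)
# t_wrong_token   = median_time(auth, handle_from_other_token)
```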
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 03:11:12 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Kepkowski",
"Michal",
""
],
[
"Hanzlik",
"Lucjan",
""
],
[
"Wood",
"Ian",
""
],
[
"Kaafar",
"Mohamed Ali",
""
]
] |
new_dataset
| 0.995811 |
2205.08086
|
Huang Zonghao Mr.
|
Huang Zonghao, Quinn Wu, David Howard, Cynthia Sung
|
EvoRobogami: Co-designing with Humans in Evolutionary Robotics
Experiments
|
To be published in GECCO 2022
| null |
10.1145/3512290.3528867
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We study the effects of injecting human-generated designs into the initial
population of an evolutionary robotics experiment, where subsequent
populations of robots are optimised via a Genetic Algorithm and MAP-Elites.
First, human
participants interact via a graphical front-end to explore a
directly-parameterised legged robot design space and attempt to produce robots
via a combination of intuition and trial-and-error that perform well in a range
of environments. Environments are generated whose corresponding
high-performance robot designs range from intuitive to complex and hard to
grasp. Once the human designs have been collected, their impact on the
evolutionary process is assessed by replacing a varying number of designs in
the initial population with human designs and subsequently running the
evolutionary algorithm. Our results suggest that a balance of random and
hand-designed initial solutions provides the best performance for the problems
considered, and that human designs are most valuable when the problem is
intuitive. The influence of human design in an evolutionary algorithm is a
highly understudied area, and the insights in this paper may be valuable to the
area of AI-based design more generally.
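The seeding protocol can be sketched in a few lines of Python; the names and
proportions are illustrative.

```python
import random

def seeded_population(random_design, human_designs, pop_size, n_human):
    """Initial GA population: n_human human-made designs plus
    randomly generated ones."""
    pop = [random_design() for _ in range(pop_size - n_human)]
    pop += random.sample(human_designs, n_human)
    random.shuffle(pop)
    return pop
```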
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 04:18:20 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Zonghao",
"Huang",
""
],
[
"Wu",
"Quinn",
""
],
[
"Howard",
"David",
""
],
[
"Sung",
"Cynthia",
""
]
] |
new_dataset
| 0.994506 |
2205.08090
|
Ziwei Wang
|
Ziwei Wang, Dingran Yuan, Yonhon Ng and Robert Mahony
|
A Linear Comb Filter for Event Flicker Removal
|
10 pages, 7 figures, published in IEEE International Conference on
Robotics and Automation (ICRA), 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event cameras are bio-inspired sensors that capture per-pixel asynchronous
intensity change rather than the synchronous absolute intensity frames captured
by a classical camera sensor. Such cameras are ideal for robotics applications
since they have high temporal resolution, high dynamic range and low latency.
However, due to their high temporal resolution, event cameras are particularly
sensitive to flicker such as from fluorescent or LED lights. During every cycle
from bright to dark, pixels that image a flickering light source generate many
events that provide little or no useful information for a robot, swamping the
useful data in the scene. In this paper, we propose a novel linear filter to
preprocess event data to remove unwanted flicker events from an event stream.
The proposed algorithm achieves over 4.6 times relative improvement in the
signal-to-noise ratio when compared to the raw event stream due to the
effective removal of flicker from fluorescent lighting. Thus, it is ideally
suited to robotics applications that operate in indoor settings or scenes
illuminated by flickering light sources.
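For intuition, here is the sampled-signal form of a linear comb filter, whose
notches fall on the flicker fundamental and its harmonics; the paper's filter
operates on asynchronous event streams, which this sketch does not model.

```python
import numpy as np

def comb_filter(x, period):
    """y[n] = (x[n] - x[n - period]) / 2: notches at multiples of
    fs / period, e.g. 100/120 Hz mains flicker and its harmonics."""
    y = np.zeros(len(x))
    y[period:] = 0.5 * (x[period:] - x[:-period])
    return y
```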
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 04:47:26 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Wang",
"Ziwei",
""
],
[
"Yuan",
"Dingran",
""
],
[
"Ng",
"Yonhon",
""
],
[
"Mahony",
"Robert",
""
]
] |
new_dataset
| 0.955562 |
2205.08094
|
Edouard Belval
|
Thomas Delteil, Edouard Belval, Lei Chen, Luis Goncalves and Vijay
Mahadevan
|
MATrIX -- Modality-Aware Transformer for Information eXtraction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present MATrIX - a Modality-Aware Transformer for Information eXtraction
in the Visual Document Understanding (VDU) domain. VDU covers information
extraction from visually rich documents such as forms, invoices, receipts,
tables, graphs, presentations, or advertisements. In these, text semantics and
visual information supplement each other to provide a global understanding of
the document. MATrIX is pre-trained in an unsupervised way with specifically
designed tasks that require the use of multi-modal information (spatial,
visual, or textual). We consider the spatial and text modalities all at once in
a single token set. To make the attention more flexible, we use a learned
modality-aware relative bias in the attention mechanism to modulate the
attention between the tokens of different modalities. We evaluate MATrIX on 3
different datasets, each with strong baselines.
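  A minimal single-head sketch of the modality-aware relative bias (shapes,
modality ids, and random weights are invented for illustration): a learned
table indexed by the (query-modality, key-modality) pair is added to the
attention scores before the softmax.

import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 6
tokens = rng.normal(size=(n, d))            # joint token set (text + spatial + visual)
modality = np.array([0, 0, 1, 1, 2, 2])     # assumed ids: 0=text, 1=spatial, 2=visual
bias = rng.normal(scale=0.1, size=(3, 3))   # learned modality-aware bias table

Q = tokens @ rng.normal(size=(d, d))        # query/key projections (random stand-ins)
K = tokens @ rng.normal(size=(d, d))
scores = Q @ K.T / np.sqrt(d)
scores += bias[modality[:, None], modality[None, :]]  # modulate cross-modal attention
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)    # row-wise softmax
print(attn.shape)                           # (6, 6)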
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 05:06:59 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Delteil",
"Thomas",
""
],
[
"Belval",
"Edouard",
""
],
[
"Chen",
"Lei",
""
],
[
"Goncalves",
"Luis",
""
],
[
"Mahadevan",
"Vijay",
""
]
] |
new_dataset
| 0.99116 |
2205.08149
|
Ke Lai
|
Ke Lai, Zilong Liu, Jing Lei, Lei Wen, Gaojie Chen, and Pei Xiao
|
A Novel K-Repetition Design for SCMA
|
6 pages, 6 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This work presents a novel K-Repetition based HARQ scheme for LDPC coded
uplink SCMA by employing a network coding (NC) principle to encode different
packets, where K-Repetition is an emerging technique (recommended in 3GPP
Release 15) for enhanced reliability and reduced latency in future massive
machine-type communication. Such a scheme is referred to as the NC aided
K-repetition SCMA (NCK-SCMA). We introduce a joint iterative detection
algorithm for improved detection of the data from the proposed LDPC coded
NCK-SCMA systems. Simulation results demonstrate the benefits of NCK-SCMA with
higher throughput and improved reliability over the conventional K-Repetition
SCMA.
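  The network-coding principle can be pictured with a toy byte-level sketch
(packet contents and K = 3 are illustrative; the actual scheme operates inside
an LDPC-coded SCMA transmission chain): two packets plus their XOR are sent
over three occasions, and any two receptions recover both packets.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"packet-A", b"packet-B"
tx = [p1, p2, xor(p1, p2)]        # K = 3 repetitions carry coded combinations

received = {0: tx[0], 2: tx[2]}   # suppose the second occasion is erased
assert xor(received[0], received[2]) == p2   # p2 recovered from p1 and p1^p2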
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 07:33:58 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Lai",
"Ke",
""
],
[
"Liu",
"Zilong",
""
],
[
"Lei",
"Jing",
""
],
[
"Wen",
"Lei",
""
],
[
"Chen",
"Gaojie",
""
],
[
"Xiao",
"Pei",
""
]
] |
new_dataset
| 0.998884 |
2205.08166
|
Lorenzo Cerrone
|
Lorenzo Cerrone, Athul Vijayan, Tejasvinee Mody, Kay Schneitz, Fred A.
Hamprecht
|
CellTypeGraph: A New Geometric Computer Vision Benchmark
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Classifying all cells in an organ is a relevant and difficult problem in
plant developmental biology. We here abstract the problem into a new benchmark
for node classification in a geo-referenced graph. Solving it requires learning
the spatial layout of the organ including symmetries. To allow the convenient
testing of new geometrical learning methods, the benchmark of Arabidopsis
thaliana ovules is made available as a PyTorch data loader, along with a large
number of precomputed features. Finally, we benchmark eight recent graph neural
network architectures, finding that DeeperGCN currently works best on this
problem.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 08:08:19 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Cerrone",
"Lorenzo",
""
],
[
"Vijayan",
"Athul",
""
],
[
"Mody",
"Tejasvinee",
""
],
[
"Schneitz",
"Kay",
""
],
[
"Hamprecht",
"Fred A.",
""
]
] |
new_dataset
| 0.999262 |
2205.08297
|
Christoph Weidenbach
|
Hendrik Leidinger and Christoph Weidenbach
|
SCL(EQ): SCL for First-Order Logic with Equality
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new calculus SCL(EQ) for first-order logic with equality that
only learns non-redundant clauses. Following the idea of CDCL (Conflict Driven
Clause Learning) and SCL (Clause Learning from Simple Models) a ground literal
model assumption is used to guide inferences that are then guaranteed to be
non-redundant. Redundancy is defined with respect to a dynamically changing
ordering derived from the ground literal model assumption. We prove SCL(EQ)
sound and complete and provide examples where our calculus improves on
superposition.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 12:52:26 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Leidinger",
"Hendrik",
""
],
[
"Weidenbach",
"Christoph",
""
]
] |
new_dataset
| 0.999627 |
2205.08301
|
Antonello Paolino
|
Tong Hui (1 and 2), Antonello Paolino (1 and 4), Gabriele Nava (1),
Giuseppe L'Erario (1 and 3), Fabio Di Natale (1), Fabio Bergonti (1 and 3),
Francesco Braghin (2) and Daniele Pucci (1 and 3) ((1) Istituto Italiano di
Tecnologia, (2) Politecnico di Milano, (3) University of Manchester, (4)
Universit\`a degli Studi di Napoli Federico II)
|
Centroidal Aerodynamic Modeling and Control of Flying Multibody Robots
|
7 pages, 6 figures, to be published in IEEE ICRA 2022. Presentation
video: https://youtu.be/WDb-OVlh5XA
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a modeling and control framework for multibody flying
robots subject to non-negligible aerodynamic forces acting on the centroidal
dynamics. First, aerodynamic forces are calculated during robot flight in
different operating conditions by means of Computational Fluid Dynamics (CFD)
analysis. Then, analytical models of the aerodynamics coefficients are
generated from the dataset collected with CFD analysis. The obtained simplified
aerodynamic model is also used to improve the flying robot control design. We
present two control strategies: compensating for the aerodynamic effects via
feedback linearization and enforcing the controller robustness with
gain-scheduling. Simulation results on the jet-powered humanoid robot iRonCub
validate the proposed approach.
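  A schematic sketch of the feedback-linearizing compensation (mass, gravity,
and the quadratic-drag stand-in below are assumptions, not the CFD-fitted
model): the thrust command cancels the modeled aerodynamic force in the
centroidal dynamics.

import numpy as np

m, g = 50.0, np.array([0.0, 0.0, -9.81])   # assumed robot mass and gravity vector

def aero_force(v, rho=1.225, cd_a=0.8):
    # Toy quadratic drag standing in for the fitted aerodynamic coefficients.
    return -0.5 * rho * cd_a * np.linalg.norm(v) * v

def thrust_command(v, a_des):
    # Centroidal dynamics m*a = T + m*g + f_aero, solved for the thrust T.
    return m * a_des - m * g - aero_force(v)

print(thrust_command(v=np.array([5.0, 0.0, 1.0]), a_des=np.zeros(3)))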
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 12:58:18 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Hui",
"Tong",
"",
"1 and 2"
],
[
"Paolino",
"Antonello",
"",
"1 and 4"
],
[
"Nava",
"Gabriele",
"",
"1 and 3"
],
[
"L'Erario",
"Giuseppe",
"",
"1 and 3"
],
[
"Di Natale",
"Fabio",
"",
"1 and 3"
],
[
"Bergonti",
"Fabio",
"",
"1 and 3"
],
[
"Braghin",
"Francesco",
"",
"1 and 3"
],
[
"Pucci",
"Daniele",
"",
"1 and 3"
]
] |
new_dataset
| 0.998757 |
2205.08379
|
Andrea Mifsud
|
Andrea Mifsud, Jiawei Shen, Peilong Feng, Lijie Xie, Chaohan Wang,
Yihan Pan, Sachin Maheshwari, Shady Agwa, Spyros Stathopoulos, Shiwei Wang,
Alexander Serb, Christos Papavassiliou, Themis Prodromakis, Timothy G.
Constandinou
|
A CMOS-based Characterisation Platform for Emerging RRAM Technologies
|
5 pages. To be published in ISCAS 2022 and made available on IEEE
Xplore
| null | null | null |
cs.ET cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Mass characterisation of emerging memory devices is an essential step in
modelling their behaviour for integration within a standard design flow for
existing integrated circuit designers. This work develops a novel
characterisation platform for emerging resistive devices with a capacity of up
to 1 million devices on-chip. Split into four independent sub-arrays, it
contains on-chip column-parallel DACs for fast voltage programming of the DUT.
On-chip readout circuits with ADCs are also available for fast read operations
covering 5 decades of input current (20nA to 2mA). This allows a device's
resistance range to be between 1k$\Omega$ and 10M$\Omega$ with a minimum
voltage range of $\pm$1.5V on the device.
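  A quick sanity check of these figures (numbers taken directly from the
ranges above): at the minimum guaranteed 1.5 V drive, both resistance bounds
map to currents inside the 20 nA to 2 mA readout window.

V = 1.5                      # minimum guaranteed device voltage
for R in (1e3, 10e6):        # stated resistance bounds, in ohms
    print(f"R = {R:.0e} ohm -> I = {V / R:.2e} A")
# R = 1e+03 ohm -> I = 1.50e-03 A  (below the 2 mA ceiling)
# R = 1e+07 ohm -> I = 1.50e-07 A  (above the 20 nA floor)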
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 14:02:14 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Mifsud",
"Andrea",
""
],
[
"Shen",
"Jiawei",
""
],
[
"Feng",
"Peilong",
""
],
[
"Xie",
"Lijie",
""
],
[
"Wang",
"Chaohan",
""
],
[
"Pan",
"Yihan",
""
],
[
"Maheshwari",
"Sachin",
""
],
[
"Agwa",
"Shady",
""
],
[
"Stathopoulos",
"Spyros",
""
],
[
"Wang",
"Shiwei",
""
],
[
"Serb",
"Alexander",
""
],
[
"Papavassiliou",
"Christos",
""
],
[
"Prodromakis",
"Themis",
""
],
[
"Constandinou",
"Timothy G.",
""
]
] |
new_dataset
| 0.999632 |
2205.08391
|
Andrea Mifsud
|
Jiawei Shen, Andrea Mifsud, Lijie Xie, Abdulaziz Alshaya, Christos
Papavassiliou
|
A High-Voltage Characterisation Platform For Emerging Resistive
Switching Technologies
|
5 pages. To be published in ISCAS 2022 and made available on
IEEEXplore
| null | null | null |
cs.ET cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Emerging memristor-based array architectures have been effectively employed
in non-volatile memories and neuromorphic computing systems due to their
density, scalability and capability of storing information. Nonetheless, to
demonstrate a practical on-chip memristor-based system, it is essential to have
the ability to apply large programming voltage ranges during the
characterisation procedures for various memristor technologies. This work
presents a 16x16 high voltage memristor characterisation array employing high
voltage CMOS circuitry. The proposed system has a maximum programming range of
$\pm22V$ to allow on-chip electroforming and I-V sweep. In addition, a Kelvin
voltage sensing system is implemented to improve the readout accuracy for low
memristance measurements. This work addresses the limitation of conventional
CMOS-memristor platforms which can only operate at low voltages, thus limiting
the characterisation range and integration options of memristor technologies.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 14:15:19 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Shen",
"Jiawei",
""
],
[
"Mifsud",
"Andrea",
""
],
[
"Xie",
"Lijie",
""
],
[
"Alshaya",
"Abdulaziz",
""
],
[
"Papavassiliou",
"Christos",
""
]
] |
new_dataset
| 0.986082 |
2205.08402
|
Md Atiqul Islam
|
Md Atiqul Islam, George C. Alexandropoulos, and Besma Smida
|
Simultaneous Multi-User MIMO Communications and Multi-Target Tracking
with Full Duplex Radios
|
6 pages, 5 figures. Submitted for publication in the Proceedings of
IEEE Global Communications Conference (GLOBECOM), 2022
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present an Integrated Sensing and Communications (ISAC)
system enabled by in-band Full Duplex (FD) radios, where a massive
Multiple-Input Multiple-Output (MIMO) base station equipped with hybrid Analog
and Digital (A/D) beamformers is communicating with multiple DownLink (DL)
users, and simultaneously estimates via the same signaling waveforms the
Direction of Arrival (DoA) as well as the range of radar targets randomly
distributed within its coverage area. Capitalizing on a recent
reduced-complexity FD hybrid A/D beamforming architecture, we devise a joint
radar target tracking and DL data transmission protocol. An optimization
framework for the joint design of the massive A/D beamformers and the
Self-Interference (SI) cancellation unit, with the dual objective of maximizing
the radar tracking accuracy and DL communication performance, is presented. Our
simulation results at millimeter wave frequencies using 5G NR wideband
waveforms, showcase the accuracy of the radar target tracking performance of
the proposed system, which simultaneously offers increased sum rate compared
with benchmark schemes.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 14:37:17 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Islam",
"Md Atiqul",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Smida",
"Besma",
""
]
] |
new_dataset
| 0.974182 |
2205.08535
|
Fangzhou Hong
|
Fangzhou Hong, Mingyuan Zhang, Liang Pan, Zhongang Cai, Lei Yang,
Ziwei Liu
|
AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
|
SIGGRAPH 2022; Project Page
https://hongfz16.github.io/projects/AvatarCLIP.html Codes available at
https://github.com/hongfz16/AvatarCLIP
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
3D avatar creation plays a crucial role in the digital age. However, the
whole production process is prohibitively time-consuming and labor-intensive.
To democratize this technology to a larger audience, we propose AvatarCLIP, a
zero-shot text-driven framework for 3D avatar generation and animation. Unlike
professional software that requires expert knowledge, AvatarCLIP empowers
layman users to customize a 3D avatar with the desired shape and texture, and
drive the avatar with the described motions using solely natural languages. Our
key insight is to take advantage of the powerful vision-language model CLIP for
supervising neural human generation, in terms of 3D geometry, texture and
animation. Specifically, driven by natural language descriptions, we initialize
3D human geometry generation with a shape VAE network. Based on the generated
3D human shapes, a volume rendering model is utilized to further facilitate
geometry sculpting and texture generation. Moreover, by leveraging the priors
learned in the motion VAE, a CLIP-guided reference-based motion synthesis
method is proposed for the animation of the generated 3D avatar. Extensive
qualitative and quantitative experiments validate the effectiveness and
generalizability of AvatarCLIP on a wide range of avatars. Remarkably,
AvatarCLIP can generate unseen 3D avatars with novel animations, achieving
superior zero-shot capability.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 17:59:19 GMT"
}
] | 2022-05-18T00:00:00 |
[
[
"Hong",
"Fangzhou",
""
],
[
"Zhang",
"Mingyuan",
""
],
[
"Pan",
"Liang",
""
],
[
"Cai",
"Zhongang",
""
],
[
"Yang",
"Lei",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.99957 |
2011.14035
|
Constantinos Chamzas
|
Dimitrios Chamzas, Constantinos Chamzas and Konstantinos Moustakas
|
cMinMax: A Fast Algorithm to Find the Corners of an N-dimensional Convex
Polytope
|
Accepted in GRAPP 2021, Code available at
https://github.com/jimas95/CMinMax and video presentation at
https://www.youtube.com/watch?v=Ug313Nf-S-A
| null |
10.5220/0010259002290236
| null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
During the last years, the emerging field of Augmented & Virtual Reality
(AR-VR) has seen tremendous growth. At the same time there is a trend to
develop low cost high-quality AR systems where computing power is in demand.
Feature points are extensively used in these real-time frame-rate and 3D
applications, therefore efficient high-speed feature detectors are necessary.
Corners are such special features and often are used as the first step in the
marker alignment in Augmented Reality (AR). Corners are also used in image
registration and recognition, tracking, SLAM, robot path finding and 2D or 3D
object detection and retrieval. Therefore there is a large number of corner
detection algorithms but most of them are too computationally intensive for
use in real-time applications of any complexity. Many times the border of the
image is a convex polygon. For this special, but quite common case, we have
developed a specific algorithm, cMinMax. The proposed algorithm is faster,
approximately by a factor of 5, compared to the widely used Harris Corner
Detection algorithm. In addition, it is highly parallelizable. The algorithm
is suitable for the fast registration of markers in augmented reality systems
and in applications where a computationally efficient real-time feature
detector is necessary. The algorithm can also be extended to N-dimensional
polyhedrons.
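  The following simplified 2D sketch conveys the min/max idea (the choice of
four projection directions and the tie handling are simplifications of the
full algorithm): the extreme vertices of the polygon along each direction are
kept as corner candidates.

import numpy as np

def cminmax_candidates(points, n_dirs=4):
    # Corner candidates = extrema of the projections onto n_dirs directions
    # spread over 180 degrees; each direction is independently parallelizable.
    angles = np.pi * np.arange(n_dirs) / n_dirs
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    proj = points @ dirs.T                              # (n_points, n_dirs)
    idx = np.unique(np.r_[proj.argmin(axis=0), proj.argmax(axis=0)])
    return points[idx]                                  # ties resolved arbitrarily

pts = np.array([[0, 0], [4, 0], [4, 4], [0, 4], [2, 0], [4, 2]], float)
print(cminmax_candidates(pts))   # recovers the four true corners of the square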
|
[
{
"version": "v1",
"created": "Sat, 28 Nov 2020 00:32:11 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Mar 2021 15:11:00 GMT"
},
{
"version": "v3",
"created": "Fri, 13 May 2022 19:33:33 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Chamzas",
"Dimitrios",
""
],
[
"Chamzas",
"Constantinos",
""
],
[
"Moustakas",
"Konstantinos",
""
]
] |
new_dataset
| 0.998477 |
2102.13249
|
Shubham Toshniwal
|
Shubham Toshniwal, Sam Wiseman, Karen Livescu, Kevin Gimpel
|
Chess as a Testbed for Language Model State Tracking
|
AAAI 2022 extended version with supplementary material
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer language models have made tremendous strides in natural language
understanding tasks. However, the complexity of natural language makes it
challenging to ascertain how accurately these models are tracking the world
state underlying the text. Motivated by this issue, we consider the task of
language modeling for the game of chess. Unlike natural language, chess
notations describe a simple, constrained, and deterministic domain. Moreover,
we observe that the appropriate choice of chess notation allows for directly
probing the world state, without requiring any additional probing-related
machinery. We find that: (a) With enough training data, transformer language
models can learn to track pieces and predict legal moves with high accuracy
when trained solely on move sequences. (b) For small training sets providing
access to board state information during training can yield significant
improvements. (c) The success of transformer language models is dependent on
access to the entire game history, i.e., "full attention". Approximating this
full attention results in a significant performance drop. We propose this
testbed as a benchmark for future work on the development and analysis of
transformer language models.
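  For instance, because legality is fully determined by the board state, a
model's predicted UCI move sequence can be scored with a few lines (assuming
the python-chess package):

import chess  # pip install python-chess

def legal_fraction(uci_moves):
    # Play predicted moves in order; an illegal move desynchronizes the state.
    board, legal = chess.Board(), 0
    for uci in uci_moves:
        move = chess.Move.from_uci(uci)
        if move not in board.legal_moves:
            break
        board.push(move)
        legal += 1
    return legal / len(uci_moves)

print(legal_fraction(["e2e4", "e7e5", "g1f3", "b8c6"]))  # 1.0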
|
[
{
"version": "v1",
"created": "Fri, 26 Feb 2021 01:16:23 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2022 21:40:30 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Toshniwal",
"Shubham",
""
],
[
"Wiseman",
"Sam",
""
],
[
"Livescu",
"Karen",
""
],
[
"Gimpel",
"Kevin",
""
]
] |
new_dataset
| 0.970781 |
2104.01026
|
Ying He
|
Ying He, Zhili Shen, Chang Xia, Jingyu Hua, Wei Tong, Sheng Zhong
|
SGBA: A Stealthy Scapegoat Backdoor Attack against Deep Neural Networks
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Outsourced deep neural networks have been demonstrated to suffer from
patch-based trojan attacks, in which an adversary poisons the training sets to
inject a backdoor in the obtained model so that regular inputs can be still
labeled correctly while those carrying a specific trigger are falsely given a
target label. Due to the severity of such attacks, many backdoor detection and
containment systems have recently been proposed for deep neural networks. One
major category among them are various model inspection schemes, which hope to
detect backdoors before deploying models from non-trusted third-parties. In
this paper, we show that such state-of-the-art schemes can be defeated by a
so-called Scapegoat Backdoor Attack, which introduces a benign scapegoat
trigger in data poisoning to prevent the defender from reversing the real
abnormal trigger. In addition, it confines the values of network parameters
within the same variances as those of a clean model during training, which
further significantly enhances the difficulty of the defender to learn the
differences between legal and illegal models through machine-learning
approaches. Our experiments on 3 popular datasets show that it can escape
detection by all five state-of-the-art model inspection schemes. Moreover, this
attack brings almost no side-effects on the attack effectiveness and guarantees
the universal feature of the trigger compared with original patch-based trojan
attacks.
|
[
{
"version": "v1",
"created": "Fri, 2 Apr 2021 12:51:18 GMT"
},
{
"version": "v2",
"created": "Fri, 7 May 2021 12:33:45 GMT"
},
{
"version": "v3",
"created": "Mon, 16 May 2022 13:35:55 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"He",
"Ying",
""
],
[
"Shen",
"Zhili",
""
],
[
"Xia",
"Chang",
""
],
[
"Hua",
"Jingyu",
""
],
[
"Tong",
"Wei",
""
],
[
"Zhong",
"Sheng",
""
]
] |
new_dataset
| 0.952277 |
2104.09180
|
Michael Clear
|
Aritra Banerjee, Michael Clear, Hitesh Tewari
|
zkHawk: Practical Private Smart Contracts from MPC-based Hawk
|
9 pages, 6 figures, published in IEEE BRAINS'21 Conference
Proceedings
| null |
10.1109/BRAINS52497.2021.9569822
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cryptocurrencies have received a lot of research attention in recent years
following the release of the first cryptocurrency Bitcoin. With the rise in
cryptocurrency transactions, the need for smart contracts has also increased.
Smart contracts, in a nutshell, are digitally executed contracts wherein several
parties pursue a common goal. The main problem with most of the current smart
contracts is that there is no privacy for a party's input to the contract from
either the blockchain or the other parties. Our research builds on the Hawk
project that provides transaction privacy along with support for smart
contracts. However, Hawk relies on a special trusted party known as a manager,
which must be trusted not to leak each party's input to the smart contract. In
this paper, we present a practical private smart contract protocol that
replaces the manager with an MPC protocol such that the function to be executed
by the MPC protocol is relatively lightweight, involving little overhead added
to the smart contract function, and uses practical sigma protocols and
homomorphic commitments to prove to the blockchain that the sum of the incoming
balances to the smart contract matches the sum of the outgoing balances.
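  As a toy illustration of the homomorphic balance check (demo-sized, insecure
parameters; the protocol itself uses a proper prime-order group together with
sigma proofs): Pedersen commitments multiply, so the product of incoming
commitments equals the product of outgoing ones exactly when the committed
values, and the blinding factors, sum to the same totals.

# Toy Pedersen commitment C = g^v * h^r mod p (NOT secure: demo parameters).
p, g, h = 1_000_003, 2, 3

def commit(v, r):
    return pow(g, v, p) * pow(h, r, p) % p

incoming = [(40, 11), (60, 7)]    # (balance, blinding factor) pairs
outgoing = [(25, 5), (75, 13)]    # values sum to 100 = 100, blinders to 18 = 18

prod_in = prod_out = 1
for v, r in incoming:
    prod_in = prod_in * commit(v, r) % p
for v, r in outgoing:
    prod_out = prod_out * commit(v, r) % p
assert prod_in == prod_out        # balances conserved while values stay hidden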
|
[
{
"version": "v1",
"created": "Mon, 19 Apr 2021 10:14:12 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Apr 2021 13:19:11 GMT"
},
{
"version": "v3",
"created": "Mon, 3 May 2021 12:27:18 GMT"
},
{
"version": "v4",
"created": "Sun, 15 May 2022 10:32:40 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Banerjee",
"Aritra",
""
],
[
"Clear",
"Michael",
""
],
[
"Tewari",
"Hitesh",
""
]
] |
new_dataset
| 0.999769 |
2104.13433
|
Siyuan Xiang
|
Siyuan Xiang, Anbang Yang, Yanfei Xue, Yaoqing Yang, Chen Feng
|
Self-supervised Spatial Reasoning on Multi-View Line Drawings
|
The first two authors contributed equally. Chen Feng is the
corresponding author
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatial reasoning on multi-view line drawings by state-of-the-art supervised
deep networks has recently been shown to achieve puzzlingly low performance on the SPARE3D
dataset. Based on the fact that self-supervised learning is helpful when a
large number of data are available, we propose two self-supervised learning
approaches to improve the baseline performance for view consistency reasoning
and camera pose reasoning tasks on the SPARE3D dataset. For the first task, we
use a self-supervised binary classification network to contrast the line
drawing differences between various views of any two similar 3D objects,
enabling the trained networks to effectively learn detail-sensitive yet
view-invariant line drawing representations of 3D objects. For the second type
of task, we propose a self-supervised multi-class classification framework to
train a model to select the correct corresponding view from which a line
drawing is rendered. Our method is even helpful for the downstream tasks with
unseen camera poses. Experiments show that our method could significantly
increase the baseline performance in SPARE3D, while some popular
self-supervised learning methods cannot.
|
[
{
"version": "v1",
"created": "Tue, 27 Apr 2021 19:05:27 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2022 02:47:32 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Xiang",
"Siyuan",
""
],
[
"Yang",
"Anbang",
""
],
[
"Xue",
"Yanfei",
""
],
[
"Yang",
"Yaoqing",
""
],
[
"Feng",
"Chen",
""
]
] |
new_dataset
| 0.988502 |
2104.13463
|
Rui Yao
|
Rui Yao, Shlomo Bekhor
|
A ridesharing simulation platform that considers dynamic supply-demand
interactions
| null | null | null | null |
cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new ridesharing simulation platform that accounts for
dynamic driver supply and passenger demand, and complex interactions between
drivers and passengers. The proposed simulation platform explicitly considers
driver and passenger acceptance/rejection of the matching options, and
cancellation before/after being matched. New simulation events, procedures and
modules have been developed to handle these realistic interactions. The
capabilities of the simulation platform are illustrated using numerical
experiments. The experiments confirm the importance of considering supply and
demand interactions and provide new insights to ridesharing operations. Results
show that an increase in driver supply does not always increase the matching
option acceptance rate, and a larger matching window could negatively impact
the overall ridesharing success rate. These results emphasize the importance of a careful
planning of a ridesharing system.
|
[
{
"version": "v1",
"created": "Tue, 27 Apr 2021 20:31:55 GMT"
},
{
"version": "v2",
"created": "Sun, 15 May 2022 08:53:51 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Yao",
"Rui",
""
],
[
"Bekhor",
"Shlomo",
""
]
] |
new_dataset
| 0.982932 |
2105.10884
|
Jie Qiao
|
Ruichu Cai, Siyu Wu, Jie Qiao, Zhifeng Hao, Keli Zhang, Xi Zhang
|
THP: Topological Hawkes Processes for Learning Causal Structure on Event
Sequences
|
IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Learning causal structure among event types on multi-type event sequences is
an important but challenging task. Existing methods, such as the Multivariate
Hawkes processes, mostly assumed that each sequence is independent and
identically distributed. However, in many real-world applications, it is
commonplace to encounter a topological network behind the event sequences such
that an event is excited or inhibited not only by its history but also by its
topological neighbors. Consequently, the failure in describing the topological
dependency among the event sequences leads to the error detection of the causal
structure. By considering the Hawkes processes from the view of temporal
convolution, we propose a Topological Hawkes process (THP) to draw a connection
between the graph convolution in the topology domain and the temporal
convolution in time domains. We further propose a causal structure learning
method on THP in a likelihood framework. The proposed method is featured with
the graph convolution-based likelihood function of THP and a sparse
optimization scheme with an Expectation-Maximization of the likelihood
function. Theoretical analysis and experiments on both synthetic and real-world
data demonstrate the effectiveness of the proposed method.
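  For reference, the conditional intensity of the classical univariate Hawkes
process with an exponential kernel, the building block that THP extends with
graph convolution, evaluates as below (parameter values are illustrative):

import numpy as np

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.5):
    # lambda(t) = mu + sum over t_i < t of alpha * exp(-beta * (t - t_i))
    past = np.asarray([ti for ti in history if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

print(hawkes_intensity(2.0, [0.5, 1.1, 1.3]))  # self-excitation decays over time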
|
[
{
"version": "v1",
"created": "Sun, 23 May 2021 08:33:46 GMT"
},
{
"version": "v2",
"created": "Sun, 15 May 2022 05:20:44 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Cai",
"Ruichu",
""
],
[
"Wu",
"Siyu",
""
],
[
"Qiao",
"Jie",
""
],
[
"Hao",
"Zhifeng",
""
],
[
"Zhang",
"Keli",
""
],
[
"Zhang",
"Xi",
""
]
] |
new_dataset
| 0.991235 |
2106.02740
|
Dina Bashkirova
|
Dina Bashkirova, Mohamed Abdelfattah, Ziliang Zhu, James Akl, Fadi
Alladkani, Ping Hu, Vitaly Ablavsky, Berk Calli, Sarah Adel Bargal, Kate
Saenko
|
ZeroWaste Dataset: Towards Deformable Object Segmentation in Cluttered
Scenes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Less than 35% of recyclable waste is actually being recycled in the US, which
leads to increased soil and sea pollution and is one of the major concerns of
environmental researchers as well as the common public. At the heart of the
problem are the inefficiencies of the waste sorting process (separating paper,
plastic, metal, glass, etc.) due to the extremely complex and cluttered nature
of the waste stream. Recyclable waste detection poses a unique computer vision
challenge as it requires detection of highly deformable and often translucent
objects in cluttered scenes without the kind of context information usually
present in human-centric datasets. This challenging computer vision task
currently lacks suitable datasets or methods in the available literature. In
this paper, we take a step towards computer-aided waste detection and present
the first in-the-wild industrial-grade waste detection and segmentation
dataset, ZeroWaste. We believe that ZeroWaste will catalyze research in object
detection and semantic segmentation in extreme clutter as well as applications
in the recycling domain. Our project page can be found at
http://ai.bu.edu/zerowaste/.
|
[
{
"version": "v1",
"created": "Fri, 4 Jun 2021 22:17:09 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Oct 2021 16:16:00 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Jan 2022 21:23:43 GMT"
},
{
"version": "v4",
"created": "Mon, 16 May 2022 16:57:45 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Bashkirova",
"Dina",
""
],
[
"Abdelfattah",
"Mohamed",
""
],
[
"Zhu",
"Ziliang",
""
],
[
"Akl",
"James",
""
],
[
"Alladkani",
"Fadi",
""
],
[
"Hu",
"Ping",
""
],
[
"Ablavsky",
"Vitaly",
""
],
[
"Calli",
"Berk",
""
],
[
"Bargal",
"Sarah Adel",
""
],
[
"Saenko",
"Kate",
""
]
] |
new_dataset
| 0.999656 |
2106.09460
|
Bharathi Raja Chakravarthi
|
Bharathi Raja Chakravarthi, Ruba Priyadharshini, Vigneshwaran
Muralidaran, Navya Jose, Shardul Suryawanshi, Elizabeth Sherly, John P.
McCrae
|
DravidianCodeMix: Sentiment Analysis and Offensive Language
Identification Dataset for Dravidian Languages in Code-Mixed Text
|
36 pages
| null |
10.1007/s10579-022-09583-7
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the development of a multilingual, manually annotated
dataset for three under-resourced Dravidian languages generated from social
media comments. The dataset was annotated for sentiment analysis and offensive
language identification for a total of more than 60,000 YouTube comments. The
dataset consists of around 44,000 comments in Tamil-English, around 7,000
comments in Kannada-English, and around 20,000 comments in Malayalam-English.
The data was manually annotated by volunteer annotators and has high
inter-annotator agreement as measured by Krippendorff's alpha. The dataset contains all
types of code-mixing phenomena since it comprises user-generated content from a
multilingual country. We also present baseline experiments to establish
benchmarks on the dataset using machine learning methods. The dataset is
available on Github
(https://github.com/bharathichezhiyan/DravidianCodeMix-Dataset) and Zenodo
(https://zenodo.org/record/4750858#.YJtw0SYo_0M).
|
[
{
"version": "v1",
"created": "Thu, 17 Jun 2021 13:13:26 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Chakravarthi",
"Bharathi Raja",
""
],
[
"Priyadharshini",
"Ruba",
""
],
[
"Muralidaran",
"Vigneshwaran",
""
],
[
"Jose",
"Navya",
""
],
[
"Suryawanshi",
"Shardul",
""
],
[
"Sherly",
"Elizabeth",
""
],
[
"McCrae",
"John P.",
""
]
] |
new_dataset
| 0.999839 |
2107.12920
|
Roman Klinger
|
Bao Minh Doan Dang and Laura Oberl\"ander and Roman Klinger
|
Emotion Stimulus Detection in German News Headlines
|
KONVENS 2021, published at https://aclanthology.org/2021.konvens-1.7/
Please cite by using https://aclanthology.org/2021.konvens-1.7.bib
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Emotion stimulus extraction is a fine-grained subtask of emotion analysis
that focuses on identifying the description of the cause behind an emotion
expression from a text passage (e.g., in the sentence "I am happy that I passed
my exam" the phrase "passed my exam" corresponds to the stimulus.). Previous
work mainly focused on Mandarin and English, with no resources or models for
German. We fill this research gap by developing a corpus of 2006 German news
headlines annotated with emotions and 811 instances with annotations of
stimulus phrases. Given that such corpus creation efforts are time-consuming
and expensive, we additionally work on an approach for projecting the existing
English GoodNewsEveryone (GNE) corpus to a machine-translated German version.
We compare the performance of a conditional random field (CRF) model (trained
monolingually on German and cross-lingually via projection) with a multilingual
XLM-RoBERTa (XLM-R) model. Our results show that training with the German
corpus achieves higher F1 scores than projection. Experiments with XLM-R
outperform their respective CRF counterparts.
|
[
{
"version": "v1",
"created": "Tue, 27 Jul 2021 16:22:04 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jul 2021 07:21:13 GMT"
},
{
"version": "v3",
"created": "Mon, 16 May 2022 11:25:46 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Dang",
"Bao Minh Doan",
""
],
[
"Oberländer",
"Laura",
""
],
[
"Klinger",
"Roman",
""
]
] |
new_dataset
| 0.997922 |
2110.00307
|
Giovanni Colavizza
|
Giovanni Colavizza, Silvio Peroni, Matteo Romanello
|
The case for the Humanities Citation Index (HuCI): a citation index by
the humanities, for the humanities
| null | null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
Citation indexes are by now part of the research infrastructure in use by
most scientists: a necessary tool in order to cope with the increasing amounts
of scientific literature being published. Commercial citation indexes are
designed for the sciences and have uneven coverage and unsatisfactory
characteristics for humanities scholars, while no comprehensive citation index
is published by a public organization. We argue that an open citation index for
the humanities is desirable, for four reasons: it would greatly improve and
accelerate the retrieval of sources, it would offer a way to interlink
collections across repositories (such as archives and libraries), it would
foster the adoption of metadata standards and best practices by all
stakeholders (including publishers) and it would contribute research data to
fields such as bibliometrics and science studies. We also suggest that the
citation index should be informed by a set of requirements relevant to the
humanities. We discuss four such requirements: source coverage must be
comprehensive, including books and citations to primary sources; there needs to
be chronological depth, as scholarship in the humanities remains relevant over
time; the index should be collection-driven, leveraging the accumulated
thematic collections of specialized research libraries; and it should be rich
in context in order to allow for the qualification of each citation, for
example by providing citation excerpts. We detail the fit-for-purpose research
infrastructure which can make the Humanities Citation Index a reality.
Ultimately, we argue that a citation index for the humanities can be created by
humanists, via a collaborative, distributed and open effort.
|
[
{
"version": "v1",
"created": "Fri, 1 Oct 2021 10:41:44 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Feb 2022 10:25:12 GMT"
},
{
"version": "v3",
"created": "Sat, 14 May 2022 07:59:50 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Colavizza",
"Giovanni",
""
],
[
"Peroni",
"Silvio",
""
],
[
"Romanello",
"Matteo",
""
]
] |
new_dataset
| 0.998477 |
2111.02444
|
Manuel Dahnert
|
Manuel Dahnert, Ji Hou, Matthias Nie{\ss}ner, Angela Dai
|
Panoptic 3D Scene Reconstruction From a Single RGB Image
|
Video: https://youtu.be/YVxRNHmd5SA, Project Page:
https://manuel-dahnert.com/research/panoptic-reconstruction
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding 3D scenes from a single image is fundamental to a wide variety
of tasks, such as for robotics, motion planning, or augmented reality. Existing
works in 3D perception from a single RGB image tend to focus on geometric
reconstruction only, or geometric reconstruction with semantic segmentation or
instance segmentation. Inspired by 2D panoptic segmentation, we propose to
unify the tasks of geometric reconstruction, 3D semantic segmentation, and 3D
instance segmentation into the task of panoptic 3D scene reconstruction - from
a single RGB image, predicting the complete geometric reconstruction of the
scene in the camera frustum of the image, along with semantic and instance
segmentations. We thus propose a new approach for holistic 3D scene
understanding from a single RGB image which learns to lift and propagate 2D
features from an input image to a 3D volumetric scene representation. We
demonstrate that this holistic view of joint scene reconstruction, semantic,
and instance segmentation is beneficial over treating the tasks independently,
thus outperforming alternative approaches.
|
[
{
"version": "v1",
"created": "Wed, 3 Nov 2021 18:06:38 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2022 15:51:09 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Dahnert",
"Manuel",
""
],
[
"Hou",
"Ji",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Dai",
"Angela",
""
]
] |
new_dataset
| 0.999537 |
2112.08598
|
Anshuman Dewangan
|
Anshuman Dewangan, Yash Pande, Hans-Werner Braun, Frank Vernon, Ismael
Perez, Ilkay Altintas, Garrison W. Cottrell and Mai H. Nguyen
|
FIgLib & SmokeyNet: Dataset and Deep Learning Model for Real-Time
Wildland Fire Smoke Detection
| null |
Remote Sensing. 2022; 14(4):1007
|
10.3390/rs14041007
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The size and frequency of wildland fires in the western United States have
dramatically increased in recent years. On high-fire-risk days, a small fire
ignition can rapidly grow and become out of control. Early detection of fire
ignitions from initial smoke can assist the response to such fires before they
become difficult to manage. Past deep learning approaches for wildfire smoke
detection have suffered from small or unreliable datasets that make it
difficult to extrapolate performance to real-world scenarios. In this work, we
present the Fire Ignition Library (FIgLib), a publicly available dataset of
nearly 25,000 labeled wildfire smoke images as seen from fixed-view cameras
deployed in Southern California. We also introduce SmokeyNet, a novel deep
learning architecture using spatiotemporal information from camera imagery for
real-time wildfire smoke detection. When trained on the FIgLib dataset,
SmokeyNet outperforms comparable baselines and rivals human performance. We
hope that the availability of the FIgLib dataset and the SmokeyNet architecture
will inspire further research into deep learning methods for wildfire smoke
detection, leading to automated notification systems that reduce the time to
wildfire response.
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 03:49:58 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 18:25:50 GMT"
},
{
"version": "v3",
"created": "Sat, 14 May 2022 18:24:07 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Dewangan",
"Anshuman",
""
],
[
"Pande",
"Yash",
""
],
[
"Braun",
"Hans-Werner",
""
],
[
"Vernon",
"Frank",
""
],
[
"Perez",
"Ismael",
""
],
[
"Altintas",
"Ilkay",
""
],
[
"Cottrell",
"Garrison W.",
""
],
[
"Nguyen",
"Mai H.",
""
]
] |
new_dataset
| 0.999809 |
2112.08619
|
Yoonna Jang
|
Yoonna Jang, Jungwoo Lim, Yuna Hur, Dongsuk Oh, Suhyune Son, Yeonsoo
Lee, Donghoon Shin, Seungryong Kim, and Heuiseok Lim
|
Call for Customized Conversation: Customized Conversation Grounding
Persona and Knowledge
|
Accepted paper at the Thirty-Sixth AAAI Conference on Artificial
Intelligence (AAAI-22)
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Humans usually have conversations by making use of prior knowledge about a
topic and background information of the people whom they are talking to.
However, existing conversational agents and datasets do not consider such
comprehensive information, and thus they are limited in generating utterances
in which knowledge and persona are fused properly. To address this
issue, we introduce a call For Customized conversation (FoCus) dataset where
the customized answers are built with the user's persona and Wikipedia
knowledge. To evaluate the abilities to make informative and customized
utterances of pre-trained language models, we utilize BART and GPT-2 as well as
transformer-based models. We assess their generation abilities with automatic
scores and conduct human evaluations for qualitative results. We examine
whether the model reflects adequate persona and knowledge with our proposed two
sub-tasks, persona grounding (PG) and knowledge grounding (KG). Moreover, we
show that the utterances of our data are constructed with the proper knowledge
and persona through grounding quality assessment.
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 04:44:27 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 11:02:09 GMT"
},
{
"version": "v3",
"created": "Mon, 16 May 2022 05:11:14 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Jang",
"Yoonna",
""
],
[
"Lim",
"Jungwoo",
""
],
[
"Hur",
"Yuna",
""
],
[
"Oh",
"Dongsuk",
""
],
[
"Son",
"Suhyune",
""
],
[
"Lee",
"Yeonsoo",
""
],
[
"Shin",
"Donghoon",
""
],
[
"Kim",
"Seungryong",
""
],
[
"Lim",
"Heuiseok",
""
]
] |
new_dataset
| 0.999612 |
2202.07427
|
Roman Klinger
|
Anna Khlyzova and Carina Silberer and Roman Klinger
|
On the Complementarity of Images and Text for the Expression of Emotions
in Social Media
|
WASSA 2022 at ACL 2022, published at
https://aclanthology.org/2022.wassa-1.1/ Please cite using
https://aclanthology.org/2022.wassa-1.1.bib
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Authors of posts in social media communicate their emotions and what causes
them with text and images. While there is work on emotion and stimulus
detection for each modality separately, it is yet unknown if the modalities
contain complementary emotion information in social media. We aim at filling
this research gap and contribute a novel, annotated corpus of English
multimodal Reddit posts. On this resource, we develop models to automatically
detect the relation between image and text, an emotion stimulus category and
the emotion class. We evaluate if these tasks require both modalities and find,
for the image-text relations, that text alone is sufficient for most categories
(complementary, illustrative, opposing): the information in the text suffices to
predict if an image is required for emotion understanding. The emotions of
anger and sadness are best predicted with a multimodal model, while text alone
is sufficient for disgust, joy, and surprise. Stimuli depicted by objects,
animals, food, or a person are best predicted by image-only models, while
multimodal models are most effective on art, events, memes, places, or
screenshots.
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 12:33:53 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 12:59:40 GMT"
},
{
"version": "v3",
"created": "Mon, 16 May 2022 11:24:37 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Khlyzova",
"Anna",
""
],
[
"Silberer",
"Carina",
""
],
[
"Klinger",
"Roman",
""
]
] |
new_dataset
| 0.997554 |
2203.08101
|
Rafael Sampaio de Rezende
|
Ginger Delmas and Rafael Sampaio de Rezende and Gabriela Csurka and
Diane Larlus
|
ARTEMIS: Attention-based Retrieval with Text-Explicit Matching and
Implicit Similarity
|
Published in ICLR 2022
| null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An intuitive way to search for images is to use queries composed of an
example image and a complementary text. While the first provides rich and
implicit context for the search, the latter explicitly calls for new traits, or
specifies how some elements of the example image should be changed to retrieve
the desired target image. Current approaches typically combine the features of
each of the two elements of the query into a single representation, which can
then be compared to the ones of the potential target images. Our work aims at
shedding new light on the task by looking at it through the prism of two
familiar and related frameworks: text-to-image and image-to-image retrieval.
Taking inspiration from them, we exploit the specific relation of each query
element with the targeted image and derive light-weight attention mechanisms
which enable to mediate between the two complementary modalities. We validate
our approach on several retrieval benchmarks, querying with images and their
associated free-form text modifiers. Our method obtains state-of-the-art
results without resorting to side information, multi-level features, heavy
pre-training nor large architectures as in previous works.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 17:29:20 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2022 15:20:04 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Delmas",
"Ginger",
""
],
[
"de Rezende",
"Rafael Sampaio",
""
],
[
"Csurka",
"Gabriela",
""
],
[
"Larlus",
"Diane",
""
]
] |
new_dataset
| 0.984749 |
2203.10926
|
Martin Alexander B\"uchner
|
Martin Buchner and Abhinav Valada
|
3D Multi-Object Tracking Using Graph Neural Networks with Cross-Edge
Modality Attention
|
12 pages, 7 figures
| null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online 3D multi-object tracking (MOT) has witnessed significant research
interest in recent years, largely driven by demand from the autonomous systems
community. However, 3D offline MOT is relatively less explored. Labeling 3D
trajectory scene data at a large scale while not relying on high-cost human
experts is still an open research question. In this work, we propose Batch3DMOT
which follows the tracking-by-detection paradigm and represents real-world
scenes as directed, acyclic, and category-disjoint tracking graphs that are
attributed using various modalities such as camera, LiDAR, and radar. We
present a multi-modal graph neural network that uses a cross-edge attention
mechanism mitigating modality intermittence, which translates into sparsity in
the graph domain. Additionally, we present attention-weighted convolutions over
frame-wise k-NN neighborhoods as suitable means to allow information exchange
across disconnected graph components. We evaluate our approach using various
sensor modalities and model configurations on the challenging nuScenes and
KITTI datasets. Extensive experiments demonstrate that our proposed approach
yields an overall improvement of 3.3% in the AMOTA score on nuScenes thereby
setting the new state-of-the-art for 3D tracking and further enhancing false
positive filtering.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 12:44:17 GMT"
},
{
"version": "v2",
"created": "Sat, 14 May 2022 20:39:23 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Buchner",
"Martin",
""
],
[
"Valada",
"Abhinav",
""
]
] |
new_dataset
| 0.992638 |
2203.16713
|
Bernardo Anibal Subercaseaux Roa
|
Daniel Lokshtanov and Bernardo Subercaseaux
|
Wordle is NP-hard
|
Accepted at FUN2022
| null | null | null |
cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
Wordle is a single-player word-guessing game where the goal is to discover a
secret word $w$ that has been chosen from a dictionary $D$. In order to
discover $w$, the player can make at most $\ell$ guesses, which must also be
words from $D$, all words in $D$ having the same length $k$. After each guess,
the player is notified of the positions in which their guess matches the secret
word, as well as letters in the guess that appear in the secret word in a
different position. We study the game of Wordle from a complexity perspective,
proving NP-hardness of its natural formalization: to decide given a dictionary
$D$ and an integer $\ell$ if the player can guarantee to discover the secret
word within $\ell$ guesses. Moreover, we prove that hardness holds even over
instances where words have length $k = 5$, and that even in this case it is
NP-hard to approximate the minimum number of guesses required to guarantee
discovering the secret word. We also present results regarding its
parameterized complexity and offer some related open problems.
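  For concreteness, the feedback rule that the reduction reasons about can be
implemented in a few lines; the two-pass treatment of repeated letters matches
the deployed game:

def wordle_feedback(guess: str, secret: str) -> str:
    # G = correct spot, Y = present elsewhere, '-' = absent.
    fb, remaining = ["-"] * len(guess), {}
    for g, s in zip(guess, secret):            # pass 1: count non-matched letters
        if g != s:
            remaining[s] = remaining.get(s, 0) + 1
    for i, (g, s) in enumerate(zip(guess, secret)):
        if g == s:
            fb[i] = "G"
        elif remaining.get(g, 0) > 0:          # pass 2: misplaced letters
            fb[i] = "Y"
            remaining[g] -= 1
    return "".join(fb)

print(wordle_feedback("crane", "react"))       # YYG-Y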
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 23:27:47 GMT"
},
{
"version": "v2",
"created": "Sat, 14 May 2022 05:54:58 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Lokshtanov",
"Daniel",
""
],
[
"Subercaseaux",
"Bernardo",
""
]
] |
new_dataset
| 0.999861 |
2204.01795
|
Pranjay Shyam
|
Pranjay Shyam, Sandeep Singh Sengar, Kuk-Jin Yoon and Kyung-Soo Kim
|
Lightweight HDR Camera ISP for Robust Perception in Dynamic Illumination
Conditions via Fourier Adversarial Networks
|
Accepted in BMVC 2021
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The limited dynamic range of commercial compact camera sensors results in an
inaccurate representation of scenes with varying illumination conditions,
adversely affecting image quality and subsequently limiting the performance of
underlying image processing algorithms. Current state-of-the-art (SoTA)
convolutional neural networks (CNN) are developed as post-processing techniques
to independently recover under-/over-exposed images. However, when applied to
images containing real-world degradations such as glare, high-beam, color
bleeding with varying noise intensity, these algorithms amplify the
degradations, further degrading image quality. We propose a lightweight
two-stage image enhancement algorithm sequentially balancing illumination and
noise removal using frequency priors for structural guidance to overcome these
limitations. Furthermore, to ensure realistic image quality, we leverage the
relationship between frequency and spatial domain properties of an image and
propose a Fourier spectrum-based adversarial framework (AFNet) for consistent
image enhancement under varying illumination conditions. While current
formulations of image enhancement are envisioned as post-processing techniques,
we examine if such an algorithm could be extended to integrate the
functionality of the Image Signal Processing (ISP) pipeline within the camera
sensor benefiting from RAW sensor data and lightweight CNN architecture. Based
on quantitative and qualitative evaluations, we also examine the practicality
and effects of image enhancement techniques on the performance of common
perception tasks such as object detection and semantic segmentation in varying
illumination conditions.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 18:48:51 GMT"
},
{
"version": "v2",
"created": "Sat, 14 May 2022 15:16:16 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Shyam",
"Pranjay",
""
],
[
"Sengar",
"Sandeep Singh",
""
],
[
"Yoon",
"Kuk-Jin",
""
],
[
"Kim",
"Kyung-Soo",
""
]
] |
new_dataset
| 0.999117 |
2204.04046
|
Wenqian Zhang
|
Wenqian Zhang, Shangbin Feng, Zilong Chen, Zhenyu Lei, Jundong Li,
Minnan Luo
|
KCD: Knowledge Walks and Textual Cues Enhanced Political Perspective
Detection in News Media
|
accepted at NAACL 2022 main conference
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Political perspective detection has become an increasingly important task
that can help combat echo chambers and political polarization. Previous
approaches generally focus on leveraging textual content to identify stances,
while they fail to reason with background knowledge or leverage the rich
semantic and syntactic textual labels in news articles. In light of these
limitations, we propose KCD, a political perspective detection approach to
enable multi-hop knowledge reasoning and incorporate textual cues as
paragraph-level labels. Specifically, we firstly generate random walks on
external knowledge graphs and infuse them with news text representations. We
then construct a heterogeneous information network to jointly model news
content as well as semantic, syntactic and entity cues in news articles.
Finally, we adopt relational graph neural networks for graph-level
representation learning and conduct political perspective detection. Extensive
experiments demonstrate that our approach outperforms state-of-the-art methods
on two benchmark datasets. We further examine the effect of knowledge walks and
textual cues and how they contribute to our approach's data efficiency.
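  The knowledge-walk step can be sketched with a toy graph (entities, edges,
and walk length below are invented; the paper walks over large external
knowledge graphs before infusing the walks into text representations):

import random

kg = {  # toy knowledge graph: entity -> neighboring entities
    "senator": ["congress", "election"],
    "congress": ["senator", "bill"],
    "election": ["senator", "voter"],
    "bill": ["congress"],
    "voter": ["election"],
}

def random_walk(start, length, rng=random.Random(0)):
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(kg[walk[-1]]))
    return walk

print(random_walk("senator", length=5))  # multi-hop context for a news article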
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 13:06:09 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2022 05:19:17 GMT"
},
{
"version": "v3",
"created": "Mon, 16 May 2022 04:48:38 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Zhang",
"Wenqian",
""
],
[
"Feng",
"Shangbin",
""
],
[
"Chen",
"Zilong",
""
],
[
"Lei",
"Zhenyu",
""
],
[
"Li",
"Jundong",
""
],
[
"Luo",
"Minnan",
""
]
] |
new_dataset
| 0.9997 |
2205.00301
|
Fan Yan
|
Fan Yan, Ming Nie, Xinyue Cai, Jianhua Han, Hang Xu, Zhen Yang,
Chaoqiang Ye, Yanwei Fu, Michael Bi Mi, Li Zhang
|
ONCE-3DLanes: Building Monocular 3D Lane Detection
|
CVPR 2022. Project page at https://once-3dlanes.github.io
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ONCE-3DLanes, a real-world autonomous driving dataset with lane
layout annotation in 3D space. Conventional 2D lane detection from a monocular
image yields poor performance in downstream planning and control tasks in
autonomous driving when the road is uneven. Predicting the 3D lane
layout is thus necessary and enables effective and safe driving. However,
existing 3D lane detection datasets are either unpublished or synthesized from
a simulated environment, severely hampering the development of this field. In
this paper, we take steps towards addressing these issues. By exploiting the
explicit relationship between point clouds and image pixels, a dataset
annotation pipeline is designed to automatically generate high-quality 3D lane
locations from 2D lane annotations in 211K road scenes. In addition, we present
an extrinsic-free, anchor-free method, called SALAD, regressing the 3D
coordinates of lanes in image view without converting the feature map into the
bird's-eye view (BEV). To facilitate future research on 3D lane detection, we
benchmark the dataset and provide a novel evaluation metric, performing
extensive experiments with both existing approaches and our proposed method. The
aim of our work is to revive interest in 3D lane detection in a real-world
scenario. We believe our work can lead to expected and unexpected innovations
in both academia and industry.
|
[
{
"version": "v1",
"created": "Sat, 30 Apr 2022 16:35:25 GMT"
},
{
"version": "v2",
"created": "Sat, 14 May 2022 16:51:38 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Yan",
"Fan",
""
],
[
"Nie",
"Ming",
""
],
[
"Cai",
"Xinyue",
""
],
[
"Han",
"Jianhua",
""
],
[
"Xu",
"Hang",
""
],
[
"Yang",
"Zhen",
""
],
[
"Ye",
"Chaoqiang",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Mi",
"Michael Bi",
""
],
[
"Zhang",
"Li",
""
]
] |
new_dataset
| 0.998998 |
2205.06836
|
Gregor Lenz
|
Gregor Lenz, Serge Picaud, Sio-Hoi Ieng
|
A Framework for Event-based Computer Vision on a Mobile Device
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present the first publicly available Android framework to stream data from
an event camera directly to a mobile phone. Today's mobile devices handle a
wider range of workloads than ever before and they incorporate a growing gamut
of sensors that make devices smarter, more user friendly and secure.
Conventional cameras in particular play a central role in such tasks, but they
cannot record continuously, as the amount of redundant information recorded is
costly to process. Bio-inspired event cameras on the other hand only record
changes in a visual scene and have shown promising low-power applications that
specifically suit mobile tasks such as face detection, gesture recognition or
gaze tracking. Our prototype device is the first step towards embedding such an
event camera into a battery-powered handheld device. The mobile framework
allows us to stream events in real-time and opens up the possibilities for
always-on and on-demand sensing on mobile phones. To interface the asynchronous
event camera output with synchronous von Neumann hardware, we look at how
buffering events and processing them in batches can benefit mobile
applications. We evaluate our framework in terms of latency and throughput and
show examples of computer vision tasks that involve both event-by-event and
pre-trained neural network methods for gesture recognition, aperture robust
optical flow and grey-level image reconstruction from events. The code is
available at https://github.com/neuromorphic-paris/frog
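  The batch-processing strategy can be pictured with a small generator (the
(t, x, y, polarity) tuple layout and the 10 ms window are assumptions for
illustration): asynchronous events are grouped into fixed-duration slices that
synchronous downstream code consumes.

def batch_events(events, window_us=10_000):
    # Group time-ordered (t, x, y, p) events into fixed-duration batches.
    batch, t_end = [], None
    for ev in events:
        if t_end is None:
            t_end = ev[0] + window_us
        if ev[0] >= t_end:
            yield batch
            batch, t_end = [], ev[0] + window_us
        batch.append(ev)
    if batch:
        yield batch

stream = [(t, t % 3, t % 5, 1) for t in range(0, 50_000, 2_500)]
for b in batch_events(stream):
    print(len(b), "events")   # five batches of 4 events each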
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 18:06:20 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Lenz",
"Gregor",
""
],
[
"Picaud",
"Serge",
""
],
[
"Ieng",
"Sio-Hoi",
""
]
] |
new_dataset
| 0.985271 |
2205.06840
|
Damir Koren\v{c}i\'c
|
Damir Koren\v{c}i\'c, Ivan Grubi\v{s}i\'c
|
IRB-NLP at SemEval-2022 Task 1: Exploring the Relationship Between Words
and Their Semantic Representations
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
What is the relation between a word and its description, or a word and its
embedding? Both descriptions and embeddings are semantic representations of
words. But, what information from the original word remains in these
representations? Or more importantly, which information about a word do these
two representations share? Definition Modeling and Reverse Dictionary are two
opposite learning tasks that address these questions. The goal of the
Definition Modeling task is to investigate the power of information lying
inside a word embedding to express the meaning of the word in a humanly
understandable way -- as a dictionary definition. Conversely, the Reverse
Dictionary task explores the ability to predict word embeddings directly from
its definition. In this paper, by tackling these two tasks, we are exploring
the relationship between words and their semantic representations. We present
our findings based on the descriptive, exploratory, and predictive data
analysis conducted on the CODWOE dataset. We give a detailed overview of the
systems that we designed for Definition Modeling and Reverse Dictionary tasks,
and that achieved top scores on SemEval-2022 CODWOE challenge in several
subtasks. We hope that our experimental results concerning the predictive
models and the data analyses we provide will prove useful in future
explorations of word representations and their relationships.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 18:15:20 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Korenčić",
"Damir",
""
],
[
"Grubišić",
"Ivan",
""
]
] |
new_dataset
| 0.998547 |
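The Reverse Dictionary direction described above can be illustrated compactly. The toy Python sketch below (the embeddings, vocabulary, and ridge-regression readout are assumptions for illustration; the authors' actual systems are the neural models described in the paper) maps an averaged definition encoding to a word embedding and ranks the vocabulary by cosine similarity:

# Toy sketch of the Reverse Dictionary direction: definition -> embedding.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "feline", "small", "domestic", "animal", "dog"]
emb = {w: rng.normal(size=8) for w in vocab}   # toy "pretrained" embeddings

def encode_definition(tokens):
    """Average the embeddings of the definition's tokens."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

# Training pairs: (definition tokens, headword)
pairs = [(["small", "domestic", "feline"], "cat"),
         (["domestic", "animal"], "dog")]
X = np.stack([encode_definition(d) for d, _ in pairs])
Y = np.stack([emb[w] for _, w in pairs])

# Ridge-regression readout: W = (X^T X + lam I)^-1 X^T Y
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

query = encode_definition(["small", "feline"])
pred = query @ W
# Rank vocabulary by cosine similarity to the predicted embedding.
cos = {w: float(pred @ v / (np.linalg.norm(pred) * np.linalg.norm(v)))
       for w, v in emb.items()}
print(max(cos, key=cos.get))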
2205.06841
|
Michael Hanus
|
Michael Hanus
|
From Logic to Functional Logic Programs
|
Paper presented at the 38th International Conference on Logic
Programming (ICLP 2022), 16 pages (without appendix)
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Logic programming is a flexible programming paradigm due to the use of
predicates without a fixed data flow. To extend logic languages with the
compact notation of functional programming, there are various proposals to map
evaluable functions into predicates in order to stay in the logic programming
framework. Since amalgamated functional logic languages offer flexible as well
as efficient evaluation strategies, we propose an opposite approach in this
paper. By mapping logic programs into functional logic programs with a
transformation based on inferring functional dependencies, we develop a fully
automatic transformation which keeps the flexibility of logic programming but
can improve computations by reducing infinite search spaces to finite ones.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 18:20:50 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Hanus",
"Michael",
""
]
] |
new_dataset
| 0.953205 |
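The transformation sketched in this abstract hinges on inferring functional dependencies: a predicate can be rewritten as a function of its first k arguments exactly when those arguments determine the remaining ones. Below is a small Python illustration of that test on ground facts (illustrative only; the paper's inference operates on program text, not fact tables):

# Check whether the first k argument positions of a predicate's facts
# functionally determine the remaining positions.
def is_functional(facts, k):
    """facts: a set of argument tuples; returns True if args 1..k
    determine the rest, i.e. the predicate behaves as a function."""
    seen = {}
    for args in facts:
        key, rest = args[:k], args[k:]
        if seen.setdefault(key, rest) != rest:
            return False
    return True

# append(Xs, Ys, Zs): the first two arguments determine the third,
# so the predicate can be turned into a function Xs ++ Ys = Zs.
append_facts = {((1,), (2,), (1, 2)), ((), (2,), (2,)), ((1,), (), (1,))}
print(is_functional(append_facts, 2))   # True: functional in args 1..2
print(is_functional(append_facts, 1))   # False: the first arg alone is not enough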
2205.06904
|
Elena Khasanova
|
Elena Khasanova, Pooja Hiranandani, Shayna Gardiner, Cheng Chen,
Xue-Yong Fu, Simon Corston-Oliver
|
Developing a Production System for Purpose of Call Detection in Business
Phone Conversations
|
NAACL 2022
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
For agents at a contact centre receiving calls, the most important piece of
information is the reason for a given call. An agent cannot provide support on
a call if they do not know why a customer is calling. In this paper we describe
our implementation of a commercial system to detect Purpose of Call statements
in English business call transcripts in real time. We present a detailed
analysis of types of Purpose of Call statements and language patterns related
to them, discuss an approach to collect rich training data by bootstrapping
from a set of rules to a neural model, and describe a hybrid model which
consists of a transformer-based classifier and a set of rules by leveraging
insights from the analysis of call transcripts. The model achieved 88.6 F1 on
average across various types of business calls when tested on real-life data and
has low inference time. We reflect on the challenges and design decisions when
developing and deploying the system.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 21:45:54 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Khasanova",
"Elena",
""
],
[
"Hiranandani",
"Pooja",
""
],
[
"Gardiner",
"Shayna",
""
],
[
"Chen",
"Cheng",
""
],
[
"Fu",
"Xue-Yong",
""
],
[
"Corston-Oliver",
"Simon",
""
]
] |
new_dataset
| 0.968763 |
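The hybrid design described above combines a high-precision rule layer with a learned classifier. A hedged Python sketch follows; the keyword scorer stands in for the paper's transformer classifier, and the rules and threshold are invented for illustration:

# Toy hybrid purpose-of-call detector: rules as a high-precision
# override, a score-based classifier for everything else.
import re

RULES = [
    re.compile(r"\bI(?:'m| am) calling (?:about|because|to)\b", re.I),
    re.compile(r"\bthe reason for my call\b", re.I),
]

def rule_score(utterance: str) -> float:
    """1.0 if any high-precision rule fires, else 0.0."""
    return 1.0 if any(r.search(utterance) for r in RULES) else 0.0

def model_score(utterance: str) -> float:
    """Stand-in for a transformer classifier's probability output."""
    cue_words = {"calling", "reason", "issue", "problem", "help"}
    tokens = set(utterance.lower().split())
    return min(1.0, len(tokens & cue_words) / 2)

def is_purpose_of_call(utterance: str, threshold: float = 0.5) -> bool:
    return rule_score(utterance) == 1.0 or model_score(utterance) >= threshold

print(is_purpose_of_call("Hi, I'm calling about a billing issue."))  # True
print(is_purpose_of_call("Sure, let me pull up your account."))      # False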
2205.06929
|
Mohamed Ibrahim
|
Mohamed R. Ibrahim and Terry Lyons
|
ImageSig: A signature transform for ultra-lightweight image recognition
| null |
Proceedings of the IEEE conference on computer vision and pattern
recognition (CVPR) workshops, 2022
| null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces a new lightweight method for image recognition.
ImageSig is based on computing signatures and does not require a convolutional
structure or an attention-based encoder. It is striking to the authors that it
achieves: a) an accuracy for 64 x 64 RGB images that exceeds many of the
state-of-the-art methods and simultaneously b) requires orders of magnitude
less FLOPS, power and memory footprint. The pretrained model can be as small as
44.2 KB in size. ImageSig shows unprecedented performance on hardware such as
Raspberry Pi and Jetson-nano. ImageSig treats images as streams with multiple
channels. These streams are parameterized by spatial directions. We extend
the applicability of signature and rough path theory to stream-like data and
to vision tasks on static images beyond temporal streams. With very few
parameters and small model sizes, the key advantage is that one could have many of these
"detectors" assembled on the same chip; moreover, the feature acquisition can
be performed once and shared between different models of different tasks -
further accelerating the process. This contributes to energy efficiency and the
advancements of embedded AI at the edge.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 23:48:32 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Ibrahim",
"Mohamed R.",
""
],
[
"Lyons",
"Terry",
""
]
] |
new_dataset
| 0.970692 |
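The core feature computation can be sketched directly. The Python example below computes a truncated (level-2) path signature of each image row via iterated sums, a left-endpoint approximation of the iterated integrals; the function names and per-row layout are assumptions for illustration, not the released ImageSig code:

# Sketch of signature features: treat each image row as a d-channel path
# and compute its truncated signature up to level 2 via iterated sums.
import numpy as np

def sig_level2(path: np.ndarray) -> np.ndarray:
    """Truncated signature (levels 1 and 2) of a path of shape (T, d).

    Level 1: total increment X_T - X_0                      -> d values
    Level 2: iterated sums of (X_t - X_0) (x) dX_t          -> d*d values
    (a left-endpoint approximation of the iterated integrals)
    """
    inc = np.diff(path, axis=0)          # dX_t, shape (T-1, d)
    level1 = path[-1] - path[0]          # shape (d,)
    disp = path[:-1] - path[0]           # X_t - X_0, shape (T-1, d)
    level2 = disp.T @ inc                # shape (d, d)
    return np.concatenate([level1, level2.ravel()])

def imagesig_features(img: np.ndarray) -> np.ndarray:
    """Concatenate the level-2 signatures of every row of an (H, W, C) image."""
    return np.concatenate([sig_level2(row) for row in img])

img = np.random.default_rng(1).random((64, 64, 3))
feat = imagesig_features(img)
print(feat.shape)   # 64 rows * (3 + 9) values = (768,)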
2205.06946
|
Woong Gyu La
|
Woong Gyu La, Sunil Muralidhara, Lingjie Kong, Pratik Nichat
|
Unified Distributed Environment
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Unified Distributed Environment (UDE), an environment
virtualization toolkit for reinforcement learning research. UDE is designed to
integrate environments built on any simulation platform such as Gazebo, Unity,
Unreal, and OpenAI Gym. Through environment virtualization, UDE enables
offloading the environment for execution on a remote machine while still
maintaining a unified interface. The UDE interface is designed to support
multi-agent settings by default. With environment virtualization and its interface
design, the agent policies can be trained on multiple machines for a
multi-agent environment. Furthermore, UDE supports integration with existing
major RL toolkits for researchers to leverage the benefits. This paper
discusses the components of UDE and its design decisions.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 02:27:35 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"La",
"Woong Gyu",
""
],
[
"Muralidhara",
"Sunil",
""
],
[
"Kong",
"Lingjie",
""
],
[
"Nichat",
"Pratik",
""
]
] |
new_dataset
| 0.980138 |
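The virtualization idea above can be sketched as a thin client that keeps a familiar reset()/step() surface while forwarding every call to a remote simulator. The Python example below is a hedged illustration; UDE's actual interface and transport are not reproduced here, and the RPC callable and payload shapes are assumptions:

# Sketch of environment virtualization: a gym-like facade whose calls
# are executed on a remote machine via a user-supplied transport.
from typing import Any, Callable, Dict, Tuple

class RemoteEnvClient:
    """Exposes reset()/step() locally, forwards each call remotely."""

    def __init__(self, rpc: Callable[[str, Dict[str, Any]], Dict[str, Any]]):
        # `rpc(method, payload)` is any transport (gRPC, sockets, HTTP, ...).
        self._rpc = rpc

    def reset(self) -> Any:
        return self._rpc("reset", {})["observation"]

    def step(self, actions: Dict[str, Any]) -> Tuple[Any, Dict[str, float], bool]:
        # Multi-agent by default: `actions` maps agent id -> action.
        out = self._rpc("step", {"actions": actions})
        return out["observation"], out["reward"], out["done"]

# In-process stand-in for the remote side, for demonstration only.
def fake_rpc(method: str, payload: Dict[str, Any]) -> Dict[str, Any]:
    if method == "reset":
        return {"observation": {"agent_0": [0.0]}}
    return {"observation": {"agent_0": [1.0]},
            "reward": {"agent_0": 1.0},
            "done": False}

env = RemoteEnvClient(fake_rpc)
obs = env.reset()
obs, reward, done = env.step({"agent_0": 0})
print(obs, reward, done)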