Dataset schema (one record per paper; ⌀ marks nullable fields):

| column | type |
|---|---|
| id | string (9-10 chars) |
| submitter | string (2-52 chars, ⌀) |
| authors | string (4-6.51k chars) |
| title | string (4-246 chars) |
| comments | string (1-523 chars, ⌀) |
| journal-ref | string (4-345 chars, ⌀) |
| doi | string (11-120 chars, ⌀) |
| report-no | string (2-243 chars, ⌀) |
| categories | string (5-98 chars) |
| license | string (9 classes) |
| abstract | string (33-3.33k chars) |
| versions | list |
| update_date | timestamp[s] |
| authors_parsed | list |
| prediction | string (1 class) |
| probability | float64 (0.95-1) |
2205.14462
|
Mohammad Faiyaz Khan
|
Mohammad Faiyaz Khan, S.M. Sadiq-Ur-Rahman Shifath, Md Saiful Islam
|
BAN-Cap: A Multi-Purpose English-Bangla Image Descriptions Dataset
|
Accepted in the 13th Edition of Language Resources and Evaluation
Conference (LREC 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As computers have become efficient at understanding visual information and
transforming it into a written representation, research interest in tasks like
automatic image captioning has seen a significant leap over the last few years.
While most of the research attention is given to the English language in a
monolingual setting, resource-constrained languages like Bangla remain out of
focus, predominantly due to a lack of standard datasets. Addressing this issue,
we present BAN-Cap, a new dataset following the widely used Flickr8k dataset,
for which we collect Bangla captions of the images from qualified
annotators. Our dataset represents a wider variety of image caption styles
annotated by trained people from different backgrounds. We present a
quantitative and qualitative analysis of the dataset and the baseline
evaluation of the recent models in Bangla image captioning. We investigate the
effect of text augmentation and demonstrate that an adaptive attention-based
model combined with text augmentation using Contextualized Word Replacement
(CWR) outperforms all state-of-the-art models for Bangla image captioning. We
also demonstrate this dataset's multipurpose nature, especially for
Bangla-English and English-Bangla machine translation. The dataset and all the
models will be useful for further research.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 15:39:09 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Khan",
"Mohammad Faiyaz",
""
],
[
"Shifath",
"S. M. Sadiq-Ur-Rahman",
""
],
[
"Islam",
"Md Saiful",
""
]
] |
new_dataset
| 0.999866 |
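The Contextualized Word Replacement (CWR) augmentation named in the abstract above is not specified here in detail; below is a minimal sketch of one common way to implement such an augmentation, masking a random token and letting a pretrained masked language model propose a replacement. The model checkpoint and the replacement policy are assumptions, not the authors' exact setup.

```python
# Minimal sketch of contextualized word replacement for text augmentation.
# Assumptions: a multilingual BERT checkpoint and a "replace one random
# token with a top MLM prediction" policy; the paper's procedure may differ.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

def cwr_augment(sentence: str) -> str:
    tokens = sentence.split()
    i = random.randrange(len(tokens))
    original = tokens[i]
    tokens[i] = fill_mask.tokenizer.mask_token
    candidates = fill_mask(" ".join(tokens))
    # Keep the best prediction that differs from the original word.
    for cand in candidates:
        if cand["token_str"].strip() != original:
            tokens[i] = cand["token_str"].strip()
            break
    else:
        tokens[i] = original  # no suitable replacement found
    return " ".join(tokens)

print(cwr_augment("The children are playing football in the field"))
```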
2205.14496
|
Hanqing Guo
|
Hanqing Guo, Qiben Yan, Nikolay Ivanov, Ying Zhu, Li Xiao, Eric J.
Hunter
|
SuperVoice: Text-Independent Speaker Verification Using Ultrasound
Energy in Human Speech
| null | null | null | null |
cs.SD cs.HC cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Voice-activated systems are integrated into a variety of desktop, mobile, and
Internet-of-Things (IoT) devices. However, voice spoofing attacks, such as
impersonation and replay attacks, in which malicious attackers synthesize the
voice of a victim or simply replay it, have brought growing security concerns.
Existing speaker verification techniques distinguish individual speakers via
the spectrographic features extracted from an audible frequency range of voice
commands. However, they often have high error rates and/or long delays. In this
paper, we explore a new direction of human voice research by scrutinizing the
unique characteristics of human speech at the ultrasound frequency band. Our
research indicates that the high-frequency ultrasound components (e.g., speech
fricatives) from 20 to 48 kHz can significantly enhance the security and
accuracy of speaker verification. We propose a speaker verification system,
SUPERVOICE, which uses a two-stream DNN architecture with a feature fusion
mechanism to generate distinctive speaker models. To test the system, we create
a speech dataset with 12 hours of audio (8,950 voice samples) from 127
participants. In addition, we create a second spoofed voice dataset to evaluate
its security. To balance controlled recordings against real-world
applications, the audio recordings are collected in two quiet rooms with 8
different recording devices, including 7 smartphones and an ultrasound
microphone. Our evaluation shows that SUPERVOICE achieves a 0.58% equal error
rate in the speaker verification task and takes only 120 ms to test an
incoming utterance, outperforming all existing speaker verification systems.
Moreover, within 91 ms of processing time, SUPERVOICE achieves a 0% equal error rate
in detecting replay attacks launched by 5 different loudspeakers.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 18:00:50 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Guo",
"Hanqing",
""
],
[
"Yan",
"Qiben",
""
],
[
"Ivanov",
"Nikolay",
""
],
[
"Zhu",
"Ying",
""
],
[
"Xiao",
"Li",
""
],
[
"Hunter",
"Eric J.",
""
]
] |
new_dataset
| 0.999696 |
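As a rough illustration of the two-stream idea in the abstract above, the sketch below splits a 96 kHz recording into an audible stream and an above-20 kHz ultrasound stream with simple Butterworth filters. The filter orders, cutoff placement, and everything downstream of the split are assumptions, not the paper's architecture.

```python
# Sketch: split speech sampled at 96 kHz into an audible stream (< 20 kHz)
# and an ultrasound stream (> 20 kHz), mirroring the two-stream input
# described in the abstract. Filter choices here are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 96_000  # high enough to capture content up to 48 kHz

sos_audible = butter(6, 20_000, btype="lowpass", fs=FS, output="sos")
sos_ultra = butter(6, 20_000, btype="highpass", fs=FS, output="sos")

def split_streams(audio: np.ndarray):
    """Return (audible, ultrasound) components of a mono waveform."""
    return sosfilt(sos_audible, audio), sosfilt(sos_ultra, audio)

# Toy usage: a 1-second test signal with audible and ultrasound tones.
t = np.arange(FS) / FS
test = np.sin(2 * np.pi * 5_000 * t) + 0.1 * np.sin(2 * np.pi * 30_000 * t)
audible, ultrasound = split_streams(test)
print(audible.shape, ultrasound.shape)
```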
2205.14543
|
Charles McGuffey
|
Nathan Beckmann, Phillip B Gibbons, and Charles McGuffey
|
Spatial Locality and Granularity Change in Caching
|
13 pages (including references), 6 figures, and 2 tables
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Caches exploit temporal and spatial locality to allow a small memory to
provide fast access to data stored in large, slow memory. The temporal aspect
of locality is extremely well studied and understood, but the spatial aspect
much less so. We seek to gain an increased understanding of spatial locality by
defining and studying the Granularity-Change Caching Problem. This problem
modifies the traditional caching setup by grouping data items into blocks, such
that a cache can choose any subset of a block to load for the same cost as
loading any individual item in the block.
We show that modeling such spatial locality significantly changes the caching
problem. This begins with a proof that Granularity-Change Caching is
NP-Complete in the offline setting, even when all items have unit size and all
blocks have unit load cost. In the online setting, we show a lower bound for
competitive ratios of deterministic policies that is significantly worse than
traditional caching. Moreover, we present a deterministic replacement policy
called Item-Block Layered Partitioning and show that it obtains a competitive
ratio close to that lower bound. Furthermore, our bounds reveal a new issue
arising in the Granularity-Change Caching Problem where the choice of offline
cache size affects the competitiveness of different online algorithms relative
to one another. To deal with this issue, we extend a prior (temporal) locality
model to account for spatial locality, and provide a general lower bound in
addition to an upper bound for Item-Block Layered Partitioning.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 23:45:52 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Beckmann",
"Nathan",
""
],
[
"Gibbons",
"Phillip B",
""
],
[
"McGuffey",
"Charles",
""
]
] |
new_dataset
| 0.98657 |
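To make the cost model in the abstract above concrete, here is a small sketch (our illustration, not the paper's construction) of an LRU-style cache under the Granularity-Change setup: fetching any subset of a block costs the same as fetching a single item of that block.

```python
# Sketch of the Granularity-Change cost model: items are grouped into
# blocks, and loading any subset of a block costs one block load.
# The LRU policy and block assignment below are illustrative only.
from collections import OrderedDict

class GranularityChangeLRU:
    def __init__(self, capacity: int, block_of):
        self.capacity = capacity
        self.block_of = block_of          # maps item -> block id
        self.cache = OrderedDict()        # item -> True, in LRU order
        self.load_cost = 0

    def access(self, item, prefetch=()):
        if item in self.cache:
            self.cache.move_to_end(item)  # hit: refresh recency
            return
        # Miss: one load cost covers the item plus any same-block
        # prefetches, per the granularity-change cost model.
        self.load_cost += 1
        group = [item] + [p for p in prefetch
                          if self.block_of(p) == self.block_of(item)]
        for it in group:
            self.cache[it] = True
            self.cache.move_to_end(it)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

# Items 0..7 grouped into blocks of 4; prefetching a neighbor is free.
cache = GranularityChangeLRU(4, block_of=lambda i: i // 4)
for x in [0, 1, 4, 5, 0, 1]:
    cache.access(x, prefetch=[x ^ 1])
print("block loads:", cache.load_cost)
```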
2205.14584
|
Shahar Kvatinsky Prof.
|
Shahar Kvatinsky
|
Making Real Memristive Processing-in-Memory Faster and Reliable
| null | null |
10.1109/CNNA49188.2021.9610786
| null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Memristive technologies are attractive candidates to replace conventional
memory technologies, and can also be used to perform logic and arithmetic
operations using a technique called 'stateful logic.' Combining data storage
and computation in the memory array enables a novel non-von Neumann
architecture, where both operations are performed within a memristive
Memory Processing Unit (mMPU). The mMPU relies on adding computing capabilities
to the memristive memory cells without changing the basic memory array
structure. The use of an mMPU alleviates the primary restriction on performance
and energy in a von Neumann machine, which is the data transfer between CPU and
memory. Here, the various aspects of the mMPU are discussed, including its
architecture and its implications for the computing system and software, as
well as the microarchitectural aspects. We show how the mMPU can be improved to
accelerate different applications and how the poor reliability of memristors
can be improved as part of the mMPU operation.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2022 06:50:49 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Kvatinsky",
"Shahar",
""
]
] |
new_dataset
| 0.969496 |
2205.14601
|
Kaspar Rosager Ludvigsen
|
Kaspar Rosager Ludvigsen, Shishir Nagaraja, Angela Daly
|
YASM (Yet Another Surveillance Mechanism)
|
16 pages
| null | null | null |
cs.CY cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Client-Side Scanning (CSS), as seen in Child Sexual Abuse Material Detection
(CSAMD), represents ubiquitous mass scanning. Apple proposed to scan its
systems for such imagery. CSAMD has since been pushed back, but the European
Union decided to propose forced CSS to combat and prevent child sexual abuse
and to weaken encryption. CSS is mass surveillance of personal property,
pictures and text, without consideration of privacy, cybersecurity, or the law.
We first argue why CSS should be limited or not used, and discuss issues with
the way pictures are cryptographically handled and how CSAMD preserves privacy. In
the second part, we analyse the possible human rights violations which CSS in
general can cause within the regime of the European Convention on Human Rights.
The focus is the harm which the system may cause to individuals, and we also
comment on the proposed Child Abuse Regulation. We find that CSS is problematic
because such systems can rarely fulfil their purposes, as seen with antivirus
software. The costs of attempting to solve issues such as CSAM outweigh the
benefits, and this is not likely to change. CSAMD as proposed is not likely to
preserve privacy or security in the way described in its source materials. We
also find that CSS in general would likely violate the Right to a Fair Trial,
Right to Privacy and Freedom of Expression. Pictures could have been obtained
in a way that would render any trial against a legitimate perpetrator
inadmissible or violate their right to a fair trial; the lack of safeguards to
protect privacy at the national legal level would violate the Right to
Privacy; and it is unclear whether this kind of scanning could pass the
legal test that Freedom of Expression requires. Finally, we find significant
issues with the proposed Regulation, as it relies on techno-solutionist
arguments and disregards knowledge on cybersecurity.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2022 08:42:59 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Ludvigsen",
"Kaspar Rosager",
""
],
[
"Nagaraja",
"Shishir",
""
],
[
"Daly",
"Angela",
""
]
] |
new_dataset
| 0.954029 |
2205.14611
|
Nhien-An Le-Khac
|
Eugene Chang and Paul Darcy and Kim-Kwang Raymond Choo and Nhien-An
Le-Khac
|
Forensic Artefact Discovery and Attribution from Android Cryptocurrency
Wallet Applications
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Cryptocurrency has been (ab)used to purchase illicit goods and services such
as drugs, weapons and child pornography (also referred to as child sexual abuse
materials), and thus mobile devices (where cryptocurrency wallet applications
are installed) are a potential source of evidence in a criminal investigation.
Not surprisingly, there has been increased focus on the security of
cryptocurrency wallets, although forensic extraction and attribution of
forensic artefacts from such wallets is understudied. In this paper, we examine
Bitcoin and Dogecoin. The latter is increasingly popular partly due to
endorsements from celebrities and being positioned as an introductory path to
cryptocurrency for newcomers. Specifically, we demonstrate how one can acquire
forensic artefacts from Android Bitcoin and Dogecoin cryptocurrency wallets,
such as wallet IDs, transaction IDs, timestamp information, email addresses,
cookies, and OAuth tokens.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2022 09:23:02 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Chang",
"Eugene",
""
],
[
"Darcy",
"Paul",
""
],
[
"Choo",
"Kim-Kwang Raymond",
""
],
[
"Le-Khac",
"Nhien-An",
""
]
] |
new_dataset
| 0.994088 |
2205.14657
|
Wamiq Reyaz Para
|
Wamiq Reyaz Para, Paul Guerrero, Niloy Mitra, Peter Wonka
|
COFS: Controllable Furniture layout Synthesis
|
Initial Version
| null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Scalable generation of furniture layouts is essential for many applications
in virtual reality, augmented reality, game development and synthetic data
generation. Many existing methods tackle this problem as a sequence generation
problem, which imposes a specific ordering on the elements of the layout,
making such methods impractical for interactive editing or scene completion.
Additionally, most methods focus on generating layouts unconditionally and
offer minimal control over the generated layouts. We propose COFS, an
architecture based on standard transformer blocks from language
modeling. The proposed model is invariant to object order by design, removing
the unnatural requirement of specifying an object generation order.
Furthermore, the model allows for user interaction at multiple levels, enabling
fine-grained control over the generation process. Our model consistently
outperforms other methods, which we verify through quantitative
evaluations. Our method is also faster to train and sample from, compared to
existing methods.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2022 13:31:18 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Para",
"Wamiq Reyaz",
""
],
[
"Guerrero",
"Paul",
""
],
[
"Mitra",
"Niloy",
""
],
[
"Wonka",
"Peter",
""
]
] |
new_dataset
| 0.993452 |
2205.14693
|
Xintong Yu
|
Xintong Yu, Hongming Zhang, Ruixin Hong, Yangqiu Song, Changshui Zhang
|
VD-PCR: Improving Visual Dialog with Pronoun Coreference Resolution
|
The manuscript version of the paper. The published version is
available at https://doi.org/10.1016/j.patcog.2022.108540 . The data, code
and models are available at: https://github.com/HKUST-KnowComp/VD-PCR
|
Pattern Recognition, 125, 108540 (2022)
|
10.1016/j.patcog.2022.108540
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The visual dialog task requires an AI agent to interact with humans in
multi-round dialogs based on a visual environment. As a common linguistic
phenomenon, pronouns are often used in dialogs to improve the communication
efficiency. As a result, resolving pronouns (i.e., grounding pronouns to the
noun phrases they refer to) is an essential step towards understanding dialogs.
In this paper, we propose VD-PCR, a novel framework to improve Visual Dialog
understanding with Pronoun Coreference Resolution in both implicit and explicit
ways. First, to implicitly help models understand pronouns, we design novel
methods to perform the joint training of the pronoun coreference resolution and
visual dialog tasks. Second, after observing that the coreference relationship
of pronouns and their referents indicates the relevance between dialog rounds,
we propose to explicitly prune the irrelevant history rounds in visual dialog
models' input. With pruned input, the models can focus on relevant dialog
history and ignore the distraction in the irrelevant one. With the proposed
implicit and explicit methods, VD-PCR achieves state-of-the-art experimental
results on the VisDial dataset.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2022 15:29:50 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Yu",
"Xintong",
""
],
[
"Zhang",
"Hongming",
""
],
[
"Hong",
"Ruixin",
""
],
[
"Song",
"Yangqiu",
""
],
[
"Zhang",
"Changshui",
""
]
] |
new_dataset
| 0.95448 |
2205.14727
|
Yirong Chen PhD
|
Yirong Chen, Weiquan Fan, Xiaofen Xing, Jianxin Pang, Minlie Huang,
Wenjing Han, Qianfeng Tie, Xiangmin Xu
|
CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset
for Conversational AI
| null | null | null | null |
cs.CL cs.AI cs.HC cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Human language expression is based on the subjective construal of the
situation instead of the objective truth conditions, which means that speakers'
personalities and emotions after cognitive processing have an important
influence on conversation. However, most existing datasets for conversational
AI ignore human personalities and emotions, or consider only part of them.
Although large-scale pre-trained language models are widely used, it remains
difficult for dialogue systems to understand speakers' personalities and
emotions. To consider both personalities and emotions in the process of
conversation generation, we propose CPED, a large-scale Chinese personalized
and emotional dialogue dataset, which consists of multi-source knowledge
related to empathy and personal characteristics. This knowledge covers gender,
Big Five personality traits, 13 emotions, 19 dialogue acts, and 10 scenes. CPED
contains more than 12K dialogues of 392 speakers from 40 TV shows. We release
the textual dataset with audio and video features in accordance with copyright
claims, privacy considerations, and the terms of service of video platforms. We
provide a detailed description of the CPED construction process and introduce
three tasks for conversational AI, including personality recognition, emotion
recognition in conversations as well as personalized and emotional conversation
generation. Finally, we provide baseline systems for these tasks and examine
the influence of speakers' personalities and emotions on conversation. Our
motivation is to propose a dataset to be widely adopted by the NLP community as
a new open benchmark for conversational AI research. The full dataset is
available at https://github.com/scutcyr/CPED.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2022 17:45:12 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Chen",
"Yirong",
""
],
[
"Fan",
"Weiquan",
""
],
[
"Xing",
"Xiaofen",
""
],
[
"Pang",
"Jianxin",
""
],
[
"Huang",
"Minlie",
""
],
[
"Han",
"Wenjing",
""
],
[
"Tie",
"Qianfeng",
""
],
[
"Xu",
"Xiangmin",
""
]
] |
new_dataset
| 0.998776 |
2205.14769
|
Andrei Paraschiv
|
Andrei Paraschiv, Mihai Dascalu, Dumitru-Clementin Cercel
|
UPB at SemEval-2022 Task 5: Enhancing UNITER with Image Sentiment and
Graph Convolutional Networks for Multimedia Automatic Misogyny Identification
|
SemEval-2022, Task 5 submission; 8 pages, 3 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent times, the detection of hate speech and offensive or abusive language
in online media has become an important topic in NLP research due to the
exponential growth of social media and the propagation of such messages, as
well as their impact. Misogyny detection, even though it plays an important
part in hate-speech detection, has not received the same attention. In this
paper, we describe our classification systems submitted to the SemEval-2022
Task 5: MAMI - Multimedia Automatic Misogyny Identification. The shared task
aimed to identify misogynous content in a multi-modal setting by analysing meme
images together with their textual captions. To this end, we propose two models
based on the pre-trained UNITER model, one enhanced with an image sentiment
classifier, whereas the second leverages a Vocabulary Graph Convolutional
Network (VGCN). Additionally, we explore an ensemble using the aforementioned
models. Our best model reaches an F1-score of 71.4% in Sub-task A and 67.3% for
Sub-task B, positioning our team in the upper third of the leaderboard. We
release the code and experiments for our models on GitHub.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2022 21:12:36 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Paraschiv",
"Andrei",
""
],
[
"Dascalu",
"Mihai",
""
],
[
"Cercel",
"Dumitru-Clementin",
""
]
] |
new_dataset
| 0.991719 |
2205.14833
|
Chaoyue Niu
|
Chengfei Lv, Chaoyue Niu, Renjie Gu, Xiaotang Jiang, Zhaode Wang, Bin
Liu, Ziqi Wu, Qiulin Yao, Congyu Huang, Panos Huang, Tao Huang, Hui Shu,
Jinde Song, Bin Zou, Peng Lan, Guohuan Xu, Fei Wu, Shaojie Tang, Fan Wu,
Guihai Chen
|
Walle: An End-to-End, General-Purpose, and Large-Scale Production System
for Device-Cloud Collaborative Machine Learning
|
Accepted by OSDI 2022
| null | null | null |
cs.LG cs.DC cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
To break the bottlenecks of mainstream cloud-based machine learning (ML)
paradigm, we adopt device-cloud collaborative ML and build the first end-to-end
and general-purpose system, called Walle, as the foundation. Walle consists of
a deployment platform, distributing ML tasks to billion-scale devices in time;
a data pipeline, efficiently preparing task input; and a compute container,
providing a cross-platform and high-performance execution environment, while
facilitating daily task iteration. Specifically, the compute container is based
on Mobile Neural Network (MNN), a tensor compute engine along with the data
processing and model execution libraries, which are exposed through a refined
Python thread-level virtual machine (VM) to support diverse ML tasks and
concurrent task execution. The core of MNN is the novel mechanisms of operator
decomposition and semi-auto search, sharply reducing the workload in manually
optimizing hundreds of operators for tens of hardware backends and further
quickly identifying the best backend with runtime optimization for a
computation graph. The data pipeline introduces an on-device stream processing
framework to enable processing user behavior data at source. The deployment
platform releases ML tasks with an efficient push-then-pull method and supports
multi-granularity deployment policies. We evaluate Walle in practical
e-commerce application scenarios to demonstrate its effectiveness, efficiency,
and scalability. Extensive micro-benchmarks also highlight the superior
performance of MNN and the Python thread-level VM. Walle has been in
large-scale production use in Alibaba, while MNN has been open source with a
broad impact in the community.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 03:43:35 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Lv",
"Chengfei",
""
],
[
"Niu",
"Chaoyue",
""
],
[
"Gu",
"Renjie",
""
],
[
"Jiang",
"Xiaotang",
""
],
[
"Wang",
"Zhaode",
""
],
[
"Liu",
"Bin",
""
],
[
"Wu",
"Ziqi",
""
],
[
"Yao",
"Qiulin",
""
],
[
"Huang",
"Congyu",
""
],
[
"Huang",
"Panos",
""
],
[
"Huang",
"Tao",
""
],
[
"Shu",
"Hui",
""
],
[
"Song",
"Jinde",
""
],
[
"Zou",
"Bin",
""
],
[
"Lan",
"Peng",
""
],
[
"Xu",
"Guohuan",
""
],
[
"Wu",
"Fei",
""
],
[
"Tang",
"Shaojie",
""
],
[
"Wu",
"Fan",
""
],
[
"Chen",
"Guihai",
""
]
] |
new_dataset
| 0.999744 |
2205.14852
|
Xiang Wang
|
Ye Zheng, Xiang Wang, Yu Qi, Wei Li, Liwei Wu
|
Benchmarking Unsupervised Anomaly Detection and Localization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised anomaly detection and localization, as one of the most practical
and challenging problems in computer vision, has received great attention in
recent years. Since the MVTec AD dataset was proposed, constantly emerging
methods have pushed its precision toward saturation. It is time to conduct a
comprehensive comparison of existing methods to inspire further research. This
paper extensively compares 13 papers in terms of performance on unsupervised
anomaly detection and localization tasks, and adds a comparison of inference
efficiency previously ignored by the community. We also analyze the MVTec AD
dataset, especially the label ambiguity that prevents models from achieving
full marks. Moreover, considering the proposal of the new MVTec 3D-AD dataset, this
paper also conducts experiments using the existing state-of-the-art 2D methods
on this new dataset, and reports the corresponding results with analysis.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 04:57:25 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Zheng",
"Ye",
""
],
[
"Wang",
"Xiang",
""
],
[
"Qi",
"Yu",
""
],
[
"Li",
"Wei",
""
],
[
"Wu",
"Liwei",
""
]
] |
new_dataset
| 0.988378 |
2205.14853
|
Jiunn-Kai Huang
|
Jiunn-Kai Huang, Yingwen Tan, Dongmyeong Lee, Vishnu R. Desaraju, and
Jessy W. Grizzle
|
Informable Multi-Objective and Multi-Directional RRT* System for Robot
Path Planning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-objective or multi-destination path planning is crucial for mobile
robotics applications such as mobility as a service, robotics inspection, and
electric vehicle charging for long trips. This work proposes an anytime
iterative system to concurrently solve the multi-objective path planning
problem and determine the visiting order of destinations. The system
comprises an anytime informable multi-objective and multi-directional RRT*
algorithm to form a simple connected graph, and a proposed solver that consists
of an enhanced cheapest insertion algorithm and a genetic algorithm to solve
the relaxed traveling salesman problem in polynomial time. Moreover, a list of
waypoints is often provided for robotics inspection and vehicle routing so that
the robot can preferentially visit certain equipment or areas of interest. We
show that the proposed system can inherently incorporate such knowledge, and
can navigate through challenging topology. The proposed anytime system is
evaluated on large and complex graphs built for real-world driving
applications. All implementations are coded in multi-threaded C++ and are
available at: https://github.com/UMich-BipedLab/IMOMD-RRTStar.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 05:00:28 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Huang",
"Jiunn-Kai",
""
],
[
"Tan",
"Yingwen",
""
],
[
"Lee",
"Dongmyeong",
""
],
[
"Desaraju",
"Vishnu R.",
""
],
[
"Grizzle",
"Jessy W.",
""
]
] |
new_dataset
| 0.987057 |
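The solver's cheapest-insertion component named in the abstract above is a classical heuristic; the sketch below shows the basic idea on a symmetric distance matrix. The enhancements and the genetic-algorithm stage described in the abstract are not reproduced here.

```python
# Sketch of the classical cheapest-insertion heuristic for ordering
# destination visits. This is a textbook baseline, not the enhanced
# version or the genetic-algorithm refinement used by the IMOMD system.
import numpy as np

def cheapest_insertion(dist: np.ndarray, start: int = 0):
    n = len(dist)
    # Begin with the start node and its nearest neighbor.
    tour = [start, min((j for j in range(n) if j != start),
                       key=lambda j: dist[start][j])]
    remaining = set(range(n)) - set(tour)
    while remaining:
        best = None  # (extra cost, node, insertion position)
        for k in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                extra = dist[a][k] + dist[k][b] - dist[a][b]
                if best is None or extra < best[0]:
                    best = (extra, k, i + 1)
        _, k, pos = best
        tour.insert(pos, k)
        remaining.remove(k)
    return tour

rng = np.random.default_rng(0)
pts = rng.random((8, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(cheapest_insertion(dist))
```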
2205.14882
|
Peixuan Li
|
Peixuan Li, Jieyu Jin
|
Time3D: End-to-End Joint Monocular 3D Object Detection and Tracking for
Autonomous Driving
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
While monocular 3D object detection and 2D multi-object tracking can be
straightforwardly applied to image sequences in a frame-by-frame fashion, a
stand-alone tracker cuts off the transmission of uncertainty from the 3D
detector to tracking and cannot pass tracking error
differentials back to the 3D detector. In this work, we propose jointly
training 3D detection and 3D tracking from only monocular videos in an
end-to-end manner. The key component is a novel spatial-temporal information
flow module that aggregates geometric and appearance features to predict robust
similarity scores across all objects in current and past frames. Specifically,
we leverage the attention mechanism of the transformer, in which self-attention
aggregates the spatial information in a specific frame, and cross-attention
exploits relation and affinities of all objects in the temporal domain of
sequence frames. The affinities are then supervised to estimate the trajectory
and guide the flow of information between corresponding 3D objects. In
addition, we propose a temporal-consistency loss that explicitly incorporates
3D target motion modeling into the learning, making the 3D trajectory smooth
in the world coordinate system.
Time3D achieves 21.4\% AMOTA, 13.6\% AMOTP on the nuScenes 3D tracking
benchmark, surpassing all published competitors, and running at 38 FPS, while
Time3D achieves 31.2\% mAP, 39.4\% NDS on the nuScenes 3D detection benchmark.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 06:41:10 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Li",
"Peixuan",
""
],
[
"Jin",
"Jieyu",
""
]
] |
new_dataset
| 0.999005 |
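The abstract above does not give the exact form of the temporal-consistency loss; one plausible reading, sketched below, penalizes abrupt changes in a tracked object's world-frame trajectory via second-order differences. The formulation is our assumption, not Time3D's actual loss.

```python
# Sketch of a temporal-consistency penalty on 3D trajectories: small
# second-order differences mean near-constant velocity, i.e., smooth
# motion in the world frame. An illustrative reading of the abstract,
# not the paper's loss.
import torch

def temporal_consistency_loss(traj: torch.Tensor) -> torch.Tensor:
    """traj: (T, 3) world-frame centers of one tracked object."""
    accel = traj[2:] - 2 * traj[1:-1] + traj[:-2]  # discrete acceleration
    return accel.pow(2).sum(dim=-1).mean()

# Smooth (constant-velocity) track vs. a jittery one.
t = torch.arange(10, dtype=torch.float32).unsqueeze(1)
smooth = t * torch.tensor([[1.0, 0.5, 0.0]])
jittery = smooth + 0.3 * torch.randn_like(smooth)
print(temporal_consistency_loss(smooth).item())   # ~0
print(temporal_consistency_loss(jittery).item())  # larger
```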
2205.14886
|
Yun-Chun Chen
|
Yun-Chun Chen, Haoda Li, Dylan Turpin, Alec Jacobson, Animesh Garg
|
Neural Shape Mating: Self-Supervised Object Assembly with Adversarial
Shape Priors
|
CVPR 2022
| null | null | null |
cs.CV cs.GR cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning to autonomously assemble shapes is a crucial skill for many robotic
applications. While the majority of existing part assembly methods focus on
correctly posing semantic parts to recreate a whole object, we interpret
assembly more literally: as mating geometric parts together to achieve a snug
fit. By focusing on shape alignment rather than semantic cues, we can achieve
across-category generalization. In this paper, we introduce a novel task,
pairwise 3D geometric shape mating, and propose Neural Shape Mating (NSM) to
tackle this problem. Given the point clouds of two object parts of an unknown
category, NSM learns to reason about the fit of the two parts and predict a
pair of 3D poses that tightly mate them together. We couple the training of NSM
with an implicit shape reconstruction task to make NSM more robust to imperfect
point cloud observations. To train NSM, we present a self-supervised data
collection pipeline that generates pairwise shape mating data with ground truth
by randomly cutting an object mesh into two parts, resulting in a dataset that
consists of 200K shape mating pairs from numerous object meshes with diverse
cut types. We train NSM on the collected dataset and compare it with several
point cloud registration methods and one part assembly baseline. Extensive
experimental results and ablation studies under various settings demonstrate
the effectiveness of the proposed algorithm. Additional material is available
at: https://neural-shape-mating.github.io/
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 06:58:01 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Chen",
"Yun-Chun",
""
],
[
"Li",
"Haoda",
""
],
[
"Turpin",
"Dylan",
""
],
[
"Jacobson",
"Alec",
""
],
[
"Garg",
"Animesh",
""
]
] |
new_dataset
| 0.988183 |
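The self-supervised pipeline in the abstract above cuts object meshes into two parts; as a simplified stand-in (points instead of meshes, a single planar cut instead of diverse cut types), the sketch below splits a point cloud by a random plane.

```python
# Simplified sketch of the shape-mating data generation idea: split a
# shape into two parts with a random cut. The real pipeline cuts meshes
# with diverse cut types; here we cut a point cloud with one random
# plane for illustration.
import numpy as np

def random_plane_cut(points: np.ndarray, rng: np.random.Generator):
    normal = rng.normal(size=3)
    normal /= np.linalg.norm(normal)
    offset = points.mean(axis=0) @ normal  # plane through the centroid
    side = points @ normal - offset
    return points[side >= 0], points[side < 0]

rng = np.random.default_rng(42)
cloud = rng.uniform(-1, 1, size=(2048, 3))   # stand-in for an object
part_a, part_b = random_plane_cut(cloud, rng)
print(part_a.shape, part_b.shape)  # two mating parts with a planar seam
```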
2205.14950
|
Wei-Chang Yeh
|
Wei-Chang Yeh
|
QB-II for Evaluating the Reliability of Binary-State Networks
| null | null | null | null |
cs.DS cs.DM
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Various networks are used in current real-life applications, such as utility
(gas, water, electric, 4G/5G) networks, the Internet of Things, social
networks, and supply chains. Reliability is one of the most popular tools for
evaluating network performance. The fundamental structure of these networks is
the binary-state network. Distinct methods have been proposed to efficiently
assess binary-state network reliability. A new algorithm called QB-II (quick
binary-addition tree algorithm II) is proposed to improve the efficiency of
quick BAT, which is based on BAT and outperforms many algorithms. The proposed
QB-II implements the shortest minimum cuts (MCs) to separate the entire BAT
into main-BAT and sub-BATs, and the source-target matrix convolution products
to connect these subgraphs intelligently to improve the efficiency. Twenty
benchmark problems were used to validate the performance of the QB-II.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 09:35:03 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Yeh",
"Wei-Chang",
""
]
] |
new_dataset
| 0.997812 |
2205.14951
|
Kaicheng Yu
|
Kaicheng Yu, Tang Tao, Hongwei Xie, Zhiwei Lin, Zhongwei Wu, Zhongyu
Xia, Tingting Liang, Haiyang Sun, Jiong Deng, Dayang Hao, Yongtao Wang,
Xiaodan Liang, Bing Wang
|
Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object
Detection
|
Technical report. The first three authors contribute equally
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
There are two critical sensors for 3D perception in autonomous driving: the
camera and the LiDAR. The camera provides rich semantic information such as
color and texture, while the LiDAR captures the 3D shape and locations of
surrounding objects. It has been found that fusing these two modalities can
significantly boost the performance of 3D perception models, as each modality
carries information complementary to the other. However, we observe that current
datasets are captured from expensive vehicles that are explicitly designed for
data collection purposes, and cannot truly reflect the realistic data
distribution due to various reasons. To this end, we collect a series of
real-world cases with noisy data distribution, and systematically formulate a
robustness benchmark toolkit, that simulates these cases on any clean
autonomous driving datasets. We showcase the effectiveness of our toolkit by
establishing the robustness benchmark on two widely-adopted autonomous driving
datasets, nuScenes and Waymo, then, to the best of our knowledge, holistically
benchmark the state-of-the-art fusion methods for the first time. We observe
that: i) most fusion methods, when solely developed on these data, tend to fail
inevitably when there is a disruption to the LiDAR input; ii) the improvement
of the camera input is significantly inferior to the LiDAR one. We further
propose an efficient robust training strategy to improve the robustness of the
current fusion method. The benchmark and code are available at
https://github.com/kcyu2014/lidar-camera-robust-benchmark
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 09:35:37 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Yu",
"Kaicheng",
""
],
[
"Tao",
"Tang",
""
],
[
"Xie",
"Hongwei",
""
],
[
"Lin",
"Zhiwei",
""
],
[
"Wu",
"Zhongwei",
""
],
[
"Xia",
"Zhongyu",
""
],
[
"Liang",
"Tingting",
""
],
[
"Sun",
"Haiyang",
""
],
[
"Deng",
"Jiong",
""
],
[
"Hao",
"Dayang",
""
],
[
"Wang",
"Yongtao",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Wang",
"Bing",
""
]
] |
new_dataset
| 0.999764 |
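The robustness toolkit in the abstract above simulates noisy real-world cases on clean datasets; the actual corruption suite is in the released code, but a minimal stand-in for LiDAR disruption (random point dropout plus range noise) could look like the sketch below.

```python
# Minimal stand-in for simulating LiDAR disruption on a clean scan:
# random point dropout plus additive noise. The benchmark's actual
# corruption suite is richer; see the released toolkit.
import numpy as np

def corrupt_lidar(points: np.ndarray, drop_ratio: float = 0.3,
                  noise_std: float = 0.02,
                  rng: np.random.Generator | None = None) -> np.ndarray:
    """points: (N, 3) xyz scan; returns a corrupted copy."""
    rng = rng or np.random.default_rng()
    keep = rng.random(len(points)) > drop_ratio           # dropout
    return points[keep] + rng.normal(0, noise_std,        # jitter
                                     size=(int(keep.sum()), 3))

scan = np.random.default_rng(0).uniform(-50, 50, size=(10_000, 3))
print(corrupt_lidar(scan).shape)  # fewer, noisier points
```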
2205.14986
|
Sara Atito
|
Sara Atito and Muhammad Awais and Josef Kittler
|
GMML is All you Need
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Vision transformers have generated significant interest in the computer
vision community because of their flexibility in exploiting contextual
information, whether it is sharply confined local, or long range global.
However, they are known to be data hungry. This has motivated the research in
self-supervised transformer pretraining, which does not need to decode the
semantic information conveyed by labels to link it to the image properties, but
rather focuses directly on extracting a concise representation of the image
data that reflects the notion of similarity, and is invariant to nuisance
factors. The key vehicle for the self-learning process used by the majority of
self-learning methods is the generation of multiple views of the training data
and the creation of pretext tasks which use these views to define the notion of
image similarity, and data integrity. However, this approach lacks the natural
propensity to extract contextual information. We propose group masked model
learning (GMML), a self-supervised learning (SSL) mechanism for pretraining
vision transformers with the ability to extract the contextual information
present in all the concepts in an image. GMML achieves this by randomly
manipulating groups of connected tokens, thereby covering a meaningful part of
a semantic concept, and then recovering the hidden semantic information from the
visible part of the concept. GMML implicitly introduces a novel data
augmentation process. Unlike most existing SSL approaches, GMML does not
require a momentum encoder, nor does it rely on careful implementation details such as
large batches and gradient stopping, which are all artefacts of most of the
current self-supervised learning techniques. The source code is publicly
available for the community to train on bigger corpora:
https://github.com/Sara-Ahmed/GMML.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 10:36:55 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Atito",
"Sara",
""
],
[
"Awais",
"Muhammad",
""
],
[
"Kittler",
"Josef",
""
]
] |
new_dataset
| 0.996023 |
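GMML masks randomly chosen groups of connected tokens; the sketch below grows connected regions on a ViT-style token grid until a target ratio is covered. Region growth by random walk is our illustrative choice, not the authors' exact implementation.

```python
# Sketch of group masking on a ViT token grid: grow connected blobs
# from random seeds until ~50% of tokens are hidden, approximating
# GMML's "groups of connected tokens".
import numpy as np

STEPS = ((0, 1), (0, -1), (1, 0), (-1, 0))  # 4-connected moves

def group_mask(h: int, w: int, ratio: float = 0.5,
               rng: np.random.Generator | None = None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    mask = np.zeros((h, w), dtype=bool)
    target = int(ratio * h * w)
    while mask.sum() < target:
        y, x = int(rng.integers(h)), int(rng.integers(w))  # new seed
        for _ in range(int(rng.integers(4, 16))):          # grow a blob
            mask[y, x] = True
            dy, dx = STEPS[int(rng.integers(4))]
            y = int(np.clip(y + dy, 0, h - 1))
            x = int(np.clip(x + dx, 0, w - 1))
    return mask

m = group_mask(14, 14)  # 14x14 tokens: a 224px image with 16px patches
print(m.mean())         # fraction of masked tokens, ~0.5
```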
2205.14988
|
Zongqi Wan
|
Zongqi Wan, Zhijie Zhang, Tongyang Li, Jialin Zhang, Xiaoming Sun
|
Quantum Multi-Armed Bandits and Stochastic Linear Bandits Enjoy
Logarithmic Regrets
|
14+6 pages
| null | null | null |
cs.LG quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-armed bandit (MAB) and stochastic linear bandit (SLB) are important
models in reinforcement learning, and it is well-known that classical
algorithms for bandits with time horizon $T$ suffer $\Omega(\sqrt{T})$ regret.
In this paper, we study MAB and SLB with quantum reward oracles and propose
quantum algorithms for both models with $O(\mbox{poly}(\log T))$ regrets,
exponentially improving the dependence in terms of $T$. To the best of our
knowledge, this is the first provable quantum speedup for regrets of bandit
problems and in general exploitation in reinforcement learning. Compared to
previous literature on quantum exploration algorithms for MAB and reinforcement
learning, our quantum input model is simpler and only assumes quantum oracles
for each individual arm.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 10:54:53 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Wan",
"Zongqi",
""
],
[
"Zhang",
"Zhijie",
""
],
[
"Li",
"Tongyang",
""
],
[
"Zhang",
"Jialin",
""
],
[
"Sun",
"Xiaoming",
""
]
] |
new_dataset
| 0.972986 |
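The headline comparison can be written compactly; the following LaTeX restates the regret bounds from the abstract above (the classical lower bound versus the proposed quantum upper bound).

```latex
% Regret comparison restated from the abstract, horizon T:
% classical MAB/SLB lower bound vs. quantum upper bound with
% quantum reward oracles.
\[
  \underbrace{\Omega\bigl(\sqrt{T}\bigr)}_{\text{classical lower bound}}
  \quad\longrightarrow\quad
  \underbrace{O\!\bigl(\mathrm{poly}(\log T)\bigr)}_{\text{quantum upper bound}}
\]
```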
2205.15011
|
Nick Zhang
|
Nick Zhang
|
Moore's Law is dead, long live Moore's Law!
| null | null | null | null |
cs.GL
|
http://creativecommons.org/licenses/by/4.0/
|
Moore's Law has been used by the semiconductor industry as a predictive
indicator, and it has become a self-fulfilling prophecy. Now more people tend
to agree that the original Moore's Law has started to falter. This paper
proposes a possible quantitative modification to Moore's Law. It can cover
other derivative laws of Moore's Law as well. It intends to predict the
roadmap of chips' performance and energy consumption more accurately.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 05:51:43 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Zhang",
"Nick",
""
]
] |
new_dataset
| 0.993582 |
2205.15037
|
Gargi Mitra
|
Gargi Mitra, Prasanna Karthik Vairam, Sandip Saha, Nitin
Chandrachoodan, V. Kamakoti
|
Snoopy: A Webpage Fingerprinting Framework with Finite Query Model for
Mass-Surveillance
|
The code used for the analyses presented in the paper will be made
available online only after the manuscript is accepted for publication at a
conference/journal
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Internet users are vulnerable to privacy attacks despite the use of
encryption. Webpage fingerprinting, an attack that analyzes encrypted traffic,
can identify the webpages visited by a user in a given website. Recent research
works have been successful in demonstrating webpage fingerprinting attacks on
individual users, but have been unsuccessful in extending their attack for
mass-surveillance. The key challenges in performing mass-scale webpage
fingerprinting arises from (i) the sheer number of combinations of user
behavior and preferences to account for, and; (ii) the bound on the number of
website queries imposed by the defense mechanisms (e.g., DDoS defense) deployed
at the website. These constraints preclude the use of conventional
data-intensive ML-based techniques. In this work, we propose Snoopy, a
first-of-its-kind framework, that performs webpage fingerprinting for a large
number of users visiting a website. Snoopy caters to the generalization
requirements of mass-surveillance while complying with a bound on the number of
website accesses (finite query model) for traffic sample collection. For this,
Snoopy uses a feature (i.e., sequence of encrypted resource sizes) that is
either unaffected or predictably affected by different browsing contexts (OS,
browser, caching, cookie settings). Snoopy uses static analysis techniques to
predict the variations caused by factors such as header sizes, MTU, and User
Agent String that arise from the diversity in browsing contexts. We show that
Snoopy achieves approximately 90% accuracy when evaluated on most websites,
across various browsing contexts. A simple ensemble of Snoopy and an ML-based
technique achieves approximately 97% accuracy while adhering to the finite
query model, in cases when Snoopy alone does not perform well.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 12:14:43 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Mitra",
"Gargi",
""
],
[
"Vairam",
"Prasanna Karthik",
""
],
[
"Saha",
"Sandip",
""
],
[
"Chandrachoodan",
"Nitin",
""
],
[
"Kamakoti",
"V.",
""
]
] |
new_dataset
| 0.997845 |
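Snoopy's feature, per the abstract above, is the sequence of encrypted resource sizes; as a toy illustration of fingerprint matching on that feature (not the paper's static-analysis pipeline), the sketch below matches an observed sequence to the closest reference by L1 distance. The reference sequences are hypothetical.

```python
# Toy illustration of matching an observed sequence of encrypted
# resource sizes against per-page reference fingerprints using L1
# distance. Snoopy itself predicts context-dependent size variations
# via static analysis; that logic is not reproduced here.
import numpy as np

def pad(seq, length):
    return np.pad(np.asarray(seq, dtype=float), (0, length - len(seq)))

def match_page(observed, references: dict):
    longest = max(len(observed), *(len(v) for v in references.values()))
    obs = pad(observed, longest)
    return min(references,
               key=lambda page: np.abs(obs - pad(references[page],
                                                 longest)).sum())

refs = {                     # hypothetical per-page size sequences
    "/home": [1200, 530, 8800, 310],
    "/about": [900, 4100, 220],
    "/contact": [700, 650, 640, 600],
}
print(match_page([1210, 540, 8790, 300], refs))  # -> "/home"
```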
2205.15137
|
Alexandre Girard
|
Alexandre Girard and H. Harry Asada
|
A Fast Gear-Shifting Actuator for Robotic Tasks with Contacts
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Vehicle power-trains use a variable transmission (multiple gear-ratios) to
minimize motor size and maximize efficiency while meeting a wide-range of
operating points. Robots could similarly benefit from variable transmission to
save weight and improve energy efficiency; leading to potentially
groundbreaking improvements for mobile and wearable robotic systems. However,
variable transmissions in a robotic context lead to new challenges regarding
the gear-shifting methodology: 1) order-of-magnitude variations of reduction
ratios are desired, and 2) contact situations during manipulation/locomotion
tasks lead to impulsive behavior at the moment when gear-shifting is required.
This paper presents an actuator with a gear-shifting methodology that can
seamlessly change between two very different reduction ratios during dynamic
contact situations. Experimental results demonstrate the ability to execute a
gear shift from a 1:23 reduction to a 1:474 reduction in less than 30 ms during
contact with a rigid object.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 14:30:04 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Girard",
"Alexandre",
""
],
[
"Asada",
"H. Harry",
""
]
] |
new_dataset
| 0.998151 |
2205.15163
|
Haewoon Kwak
|
Haewoon Kwak
|
You Have Earned a Trophy: Characterize In-Game Achievements and Their
Completions
|
Preprint of the paper accepted at the 14th International ACM
Conference on Web Science in 2022 (WebSci'22)
| null | null | null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Achievement systems have been actively adopted in gaming platforms to
maintain players' interests. Among them, trophies in PlayStation games are one
of the most successful achievement systems. While the importance of trophy
design has been casually discussed in many game developers' forums, there has
been no systematic study of the historical dataset of trophies yet. In this
work, we construct a complete dataset of PlayStation games and their trophies
and investigate them from both the developers' and players' perspectives.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 15:10:12 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Kwak",
"Haewoon",
""
]
] |
new_dataset
| 0.999863 |
2205.15175
|
Jinshan Pan
|
Long Sun, Jinshan Pan, Jinhui Tang
|
ShuffleMixer: An Efficient ConvNet for Image Super-Resolution
|
Winner of the model complexity track in NTIRE2022 Efficient
Super-Resolution Challenge, CVPR 2022. The code is available at
https://github.com/sunny2109/MobileSR-NTIRE2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Light weight and efficiency are critical drivers for the practical application
of image super-resolution (SR) algorithms. We propose a simple and effective
approach, ShuffleMixer, for lightweight image super-resolution that explores
large convolutions and channel split-shuffle operations. In contrast to previous
SR models that simply stack multiple small kernel convolutions or complex
operators to learn representations, we explore a large kernel ConvNet for
mobile-friendly SR design. Specifically, we develop a large depth-wise
convolution and two projection layers based on channel splitting and shuffling
as the basic component to mix features efficiently. Since the contexts of
natural images are strongly locally correlated, using large depth-wise
convolutions only is insufficient to reconstruct fine details. To overcome this
problem while maintaining the efficiency of the proposed module, we introduce
Fused-MBConvs into the proposed network to model the local connectivity of
different features. Experimental results demonstrate that the proposed
ShuffleMixer is about 6x smaller than the state-of-the-art methods in terms of
model parameters and FLOPs while achieving competitive performance. In NTIRE
2022, our primary method won the model complexity track of the Efficient
Super-Resolution Challenge [23]. The code is available at
https://github.com/sunny2109/MobileSR-NTIRE2022.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 15:26:52 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Sun",
"Long",
""
],
[
"Pan",
"Jinshan",
""
],
[
"Tang",
"Jinhui",
""
]
] |
new_dataset
| 0.997337 |
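A shuffle-mixer-style feature mixing block, as described in the abstract above, can be sketched in a few lines of PyTorch: a large depth-wise convolution for spatial mixing, then point-wise projections with a channel shuffle. Kernel size, expansion, and layer ordering here are assumptions; the real block is in the official repository.

```python
# Sketch of a ShuffleMixer-style block: large depth-wise convolution
# for spatial mixing, then channel split-shuffle point-wise layers.
# Kernel size and ordering are assumptions; the official code is at
# https://github.com/sunny2109/MobileSR-NTIRE2022.
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    b, c, h, w = x.shape
    return (x.view(b, groups, c // groups, h, w)
             .transpose(1, 2).reshape(b, c, h, w))

class MixerBlock(nn.Module):
    def __init__(self, dim: int, kernel: int = 7):
        super().__init__()
        self.spatial = nn.Conv2d(dim, dim, kernel,
                                 padding=kernel // 2, groups=dim)
        self.point1 = nn.Conv2d(dim, dim, 1)
        self.point2 = nn.Conv2d(dim, dim, 1)
        self.act = nn.SiLU()

    def forward(self, x):
        x = x + self.spatial(x)                  # large depth-wise conv
        y = self.act(self.point1(x))
        y = channel_shuffle(y)                   # mix channel groups
        return x + self.point2(y)                # point-wise projection

x = torch.randn(1, 32, 64, 64)
print(MixerBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```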
2205.15210
|
Ronghan Chen
|
Ronghan Chen, Yang Cong
|
The Devil is in the Pose: Ambiguity-free 3D Rotation-invariant Learning
via Pose-aware Convolution
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rotation-invariant (RI) 3D deep learning methods suffer performance
degradation because they typically design RI representations as input that
lose critical global information compared to 3D coordinates. Most
state-of-the-art methods address this by adding extra blocks or complex global
representations in a heavy and ineffective manner. In this paper, we reveal that the global
information loss stems from an unexplored pose information loss problem, which
can be solved more efficiently and effectively as we only need to restore more
lightweight local pose in each layer, and the global information can be
hierarchically aggregated in the deep networks without extra efforts. To
address this problem, we develop a Pose-aware Rotation Invariant Convolution
(i.e., PaRI-Conv), which dynamically adapts its kernels based on the relative
poses. To implement it, we propose an Augmented Point Pair Feature (APPF) to
fully encode the RI relative pose information, and a factorized dynamic kernel
for pose-aware kernel generation, which can further reduce the computational
cost and memory burden by decomposing the kernel into a shared basis matrix and
a pose-aware diagonal matrix. Extensive experiments on shape classification and
part segmentation tasks show that our PaRI-Conv surpasses the state-of-the-art
RI methods while being more compact and efficient.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 16:11:55 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Chen",
"Ronghan",
""
],
[
"Cong",
"Yang",
""
]
] |
new_dataset
| 0.998026 |
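The factorized dynamic kernel from the abstract above admits a compact sketch: the pose-dependent kernel is a shared basis matrix modulated by a pose-aware diagonal. The dimensions and the small MLP below are illustrative assumptions, not PaRI-Conv's exact design.

```python
# Sketch of a factorized dynamic kernel: W(pose) = B @ diag(d(pose)),
# a shared basis matrix scaled by a pose-aware diagonal, which avoids
# predicting a full kernel per point. Dimensions and the MLP are
# illustrative assumptions.
import torch
import torch.nn as nn

class FactorizedDynamicKernel(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, pose_dim: int = 7):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)
        self.diag_mlp = nn.Sequential(          # pose -> diagonal scales
            nn.Linear(pose_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, feats: torch.Tensor, pose: torch.Tensor):
        """feats: (N, in_dim) neighbor features; pose: (N, pose_dim)."""
        d = self.diag_mlp(pose)                 # (N, in_dim) diagonal
        return (feats * d) @ self.basis.T       # == feats @ diag(d) @ B^T

layer = FactorizedDynamicKernel(in_dim=16, out_dim=32)
out = layer(torch.randn(100, 16), torch.randn(100, 7))
print(out.shape)  # torch.Size([100, 32])
```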
2205.15237
|
Wangchunshu Zhou
|
Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang
|
VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models
|
ICML 2022, Benchmark website at https://vlue-benchmark.github.io
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in vision-language pre-training (VLP) have demonstrated
impressive performance in a range of vision-language (VL) tasks. However, there
exist several challenges for measuring the community's progress in building
general multi-modal intelligence. First, most of the downstream VL datasets are
annotated using raw images that are already seen during pre-training, which may
result in an overestimation of current VLP models' generalization ability.
Second, recent VLP work mainly focuses on absolute performance but overlooks
the efficiency-performance trade-off, which is also an important indicator for
measuring progress.
To this end, we introduce the Vision-Language Understanding Evaluation (VLUE)
benchmark, a multi-task multi-dimension benchmark for evaluating the
generalization capabilities and the efficiency-performance trade-off (``Pareto
SOTA'') of VLP models. We demonstrate that there is a sizable generalization
gap for all VLP models when testing on out-of-distribution test sets annotated
on images from a more diverse distribution that spreads across cultures.
Moreover, we find that measuring the efficiency-performance trade-off of VLP
models leads to complementary insights for several design choices of VLP. We
release the VLUE benchmark to promote research on building vision-language
models that generalize well to more diverse images and concepts unseen during
pre-training, and are practical in terms of efficiency-performance trade-off.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 16:52:30 GMT"
}
] | 2022-05-31T00:00:00 |
[
[
"Zhou",
"Wangchunshu",
""
],
[
"Zeng",
"Yan",
""
],
[
"Diao",
"Shizhe",
""
],
[
"Zhang",
"Xinsong",
""
]
] |
new_dataset
| 0.983598 |
2005.00179
|
Daniel Frishberg
|
David Eppstein, Daniel Frishberg, and William Maxwell
|
On the treewidth of Hanoi graphs
|
To be published in the Proceedings of the Tenth International Conference on
Fun with Algorithms (FUN 2020); 22 pages (including title page, bibliography,
and appendix); five figures
|
Theor. Comput. Sci. 906: 1-17, 2022
|
10.1016/j.tcs.2021.12.014
| null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
The objective of the well-known Towers of Hanoi puzzle is to move a set of
disks one at a time from one of a set of pegs to another, while keeping the
disks sorted on each peg. We propose an adversarial variation in which the
first player forbids a set of states in the puzzle, and the second player must
then convert one randomly-selected state to another without passing through
forbidden states. Analyzing this version raises the question of the treewidth
of Hanoi graphs. We find this number exactly for three-peg puzzles and provide
nearly-tight asymptotic bounds for larger numbers of pegs.
|
[
{
"version": "v1",
"created": "Fri, 1 May 2020 02:14:44 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Eppstein",
"David",
""
],
[
"Frishberg",
"Daniel",
""
],
[
"Maxwell",
"William",
""
]
] |
new_dataset
| 0.999716 |
2105.05763
|
Marko Schmellenkamp
|
Gaetano Geck, Christine Quenkert, Marko Schmellenkamp, Jonas Schmidt,
Felix Tschirbs, Fabian Vehlken, Thomas Zeume
|
Iltis: Learning Logic in the Web
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
The Iltis project provides an interactive, web-based system for teaching the
foundations of formal methods. It is designed with the objective to allow for
simple inclusion of new educational tasks; to pipeline such tasks into more
complex exercises; and to allow simple inclusion and cascading of feedback
mechanisms. Currently, exercises for many typical automated reasoning workflows
for propositional logic, modal logic, and some parts of first-order logic are
covered.
Recently, Iltis has reached a level of maturity where large parts of
introductory logic courses can be supplemented with interactive exercises.
Sample interactive course material has been designed and used in courses over
the past several years, many of them with more than 300 students.
We invite all readers to try out Iltis: https://iltis.cs.tu-dortmund.de
|
[
{
"version": "v1",
"created": "Wed, 12 May 2021 16:30:53 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jan 2022 19:10:33 GMT"
},
{
"version": "v3",
"created": "Fri, 27 May 2022 11:42:35 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Geck",
"Gaetano",
""
],
[
"Quenkert",
"Christine",
""
],
[
"Schmellenkamp",
"Marko",
""
],
[
"Schmidt",
"Jonas",
""
],
[
"Tschirbs",
"Felix",
""
],
[
"Vehlken",
"Fabian",
""
],
[
"Zeume",
"Thomas",
""
]
] |
new_dataset
| 0.999346 |
2201.06638
|
Nicolas Pröllochs
|
Kirill Solovev, Nicolas Pröllochs
|
Hate Speech in the Political Discourse on Social Media: Disparities
Across Parties, Gender, and Ethnicity
| null | null |
10.1145/3485447.3512261
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media has become an indispensable channel for political communication.
However, the political discourse is increasingly characterized by hate speech,
which affects not only the reputation of individual politicians but also the
functioning of society at large. In this work, we empirically analyze how the
amount of hate speech in replies to posts from politicians on Twitter depends
on personal characteristics, such as their party affiliation, gender, and
ethnicity. For this purpose, we employ Twitter's Historical API to collect
every tweet posted by members of the 117th U.S. Congress for an observation
period of more than six months. Additionally, we gather replies for each tweet
and use machine learning to predict the amount of hate speech they embed.
Subsequently, we implement hierarchical regression models to analyze whether
politicians with certain characteristics receive more hate speech. We find that
tweets are particularly likely to receive hate speech in replies if they are
authored by (i) persons of color from the Democratic party, (ii) white
Republicans, and (iii) women. Furthermore, our analysis reveals that more
negative sentiment (in the source tweet) is associated with more hate speech
(in replies). However, the association varies across parties: negative
sentiment attracts more hate speech for Democrats (vs. Republicans).
Altogether, our empirical findings imply significant differences in how
politicians are treated on social media depending on their party affiliation,
gender, and ethnicity.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 21:41:12 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Solovev",
"Kirill",
""
],
[
"Pröllochs",
"Nicolas",
""
]
] |
new_dataset
| 0.999695 |
2203.14057
|
Lizhen Wang
|
Lizhen Wang, Zhiyuan Chen, Tao Yu, Chenguang Ma, Liang Li, Yebin Liu
|
FaceVerse: a Fine-grained and Detail-controllable 3D Face Morphable
Model from a Hybrid Dataset
|
https://github.com/LizhenWangT/FaceVerse
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present FaceVerse, a fine-grained 3D Neural Face Model, which is built
from hybrid East Asian face datasets containing 60K fused RGB-D images and 2K
high-fidelity 3D head scan models. A novel coarse-to-fine structure is proposed
to take better advantage of our hybrid dataset. In the coarse module, we
generate a base parametric model from large-scale RGB-D images, which is able
to predict accurate rough 3D face models in different genders, ages, etc. Then
in the fine module, a conditional StyleGAN architecture trained with
high-fidelity scan models is introduced to enrich elaborate facial geometric
and texture details. Note that, unlike previous methods, our base and
detail modules are both changeable, which enables an innovative application
of adjusting both the basic attributes and the facial details of 3D face
models. Furthermore, we propose a single-image fitting framework based on
differentiable rendering. Rich experiments show that our method outperforms the
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 12:13:14 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 03:13:33 GMT"
},
{
"version": "v3",
"created": "Fri, 27 May 2022 13:39:22 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Wang",
"Lizhen",
""
],
[
"Chen",
"Zhiyuan",
""
],
[
"Yu",
"Tao",
""
],
[
"Ma",
"Chenguang",
""
],
[
"Li",
"Liang",
""
],
[
"Liu",
"Yebin",
""
]
] |
new_dataset
| 0.999827 |
2204.14217
|
Ming Ding
|
Ming Ding, Wendi Zheng, Wenyi Hong, Jie Tang
|
CogView2: Faster and Better Text-to-Image Generation via Hierarchical
Transformers
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of transformer-based text-to-image models is impeded by
their slow generation and the complexity of high-resolution images. In this
work, we put forward a solution based on hierarchical transformers and local
parallel auto-regressive generation. We pretrain a 6B-parameter transformer
with a simple and flexible self-supervised task, the Cross-modal General
Language Model (CogLM), and finetune it for fast super-resolution. The new text-to-image
system, CogView2, shows very competitive generation compared to concurrent
state-of-the-art DALL-E-2, and naturally supports interactive text-guided
editing on images.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 15:51:11 GMT"
},
{
"version": "v2",
"created": "Fri, 27 May 2022 14:40:07 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Ding",
"Ming",
""
],
[
"Zheng",
"Wendi",
""
],
[
"Hong",
"Wenyi",
""
],
[
"Tang",
"Jie",
""
]
] |
new_dataset
| 0.997918 |
2205.13582
|
Cibele Cristina Trinca Watanabe Mrs.
|
Cibele Cristina Trinca, J. Carmelo Interlando, Reginaldo Palazzo Jr.,
Antonio Aparecido de Andrade, Ricardo Augusto Watanabe
|
On the Construction of New Toric Quantum Codes and Quantum Burst-Error
Correcting Codes
|
Submitted to "Journal of Algebra, Combinatorics, Discrete Structures
and Applications"
| null | null | null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A toric quantum error-correcting code construction procedure is presented in
this work. A new class of an infinite family of toric quantum codes is provided
by constructing a classical cyclic code on the square lattice
$\mathbb{Z}_{q}\times \mathbb{Z}_{q}$ for all odd integers $q\geq 5$ and,
consequently, new toric quantum codes are constructed on such square lattices
regardless of whether $q$ can be represented as a sum of two squares.
Furthermore, this work supplies for each $q$ the polyomino shapes that
tessellate the corresponding square lattices and, consequently, tile the
lattice $\mathbb{Z}^{2}$. The channel without memory to be considered for these
constructed toric quantum codes is symmetric, since the
$\mathbb{Z}^{2}$-lattice is autodual. Moreover, we propose a quantum
interleaving technique by using the constructed toric quantum codes which shows
that the code rate and the coding gain of the interleaved toric quantum codes
are better than the code rate and the coding gain of Kitaev's toric quantum
codes for $q=2n+1$, where $n\geq 2$, and of an infinite class of Bombin and
Martin-Delgado's toric quantum codes. In addition to improving such
parameters, the proposed quantum interleaving technique can be used for
burst-error correction of located errors, for quantum data storage, and over
quantum channels with memory.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 18:58:29 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Trinca",
"Cibele Cristina",
""
],
[
"Interlando",
"J. Carmelo",
""
],
[
"Palazzo",
"Reginaldo",
"Jr."
],
[
"de Andrade",
"Antonio Aparecido",
""
],
[
"Watanabe",
"Ricardo Augusto",
""
]
] |
new_dataset
| 0.999635 |
2205.13685
|
Jiahe Lan
|
Jiahe Lan, Rui Zhang, Zheng Yan, Jie Wang, Yu Chen, Ronghui Hou
|
Adversarial attacks and defenses in Speaker Recognition Systems: A
survey
|
38pages, 2 figures, 2 tables. Journal of Systems Architecture,2022
| null | null | null |
cs.CR cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speaker recognition has become very popular in many application scenarios,
such as smart homes and smart assistants, due to ease of use for remote control
and economy-friendly features. The rapid development of Speaker Recognition
Systems (SRSs) is inseparable from the advancement of machine learning,
especially neural networks. However, previous work has shown that machine
learning models are vulnerable to adversarial attacks in the image domain,
which inspired researchers to explore adversarial attacks and defenses in
SRSs.
Unfortunately, existing literature lacks a thorough review of this topic. In
this paper, we fill this gap by performing a comprehensive survey on
adversarial attacks and defenses in SRSs. We first introduce the basics of SRSs
and concepts related to adversarial attacks. Then, we propose two sets of
criteria to evaluate the performance of attack methods and defense methods in
SRSs, respectively. After that, we provide taxonomies of existing attack
methods and defense methods, and further review them by employing our proposed
criteria. Finally, based on our review, we identify some open issues and
specify a number of future directions to motivate further research on SRS
security.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 00:14:29 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Lan",
"Jiahe",
""
],
[
"Zhang",
"Rui",
""
],
[
"Yan",
"Zheng",
""
],
[
"Wang",
"Jie",
""
],
[
"Chen",
"Yu",
""
],
[
"Hou",
"Ronghui",
""
]
] |
new_dataset
| 0.998591 |
2205.13713
|
Hehe Fan
|
Hehe Fan, Xin Yu, Yuhang Ding, Yi Yang, Mohan Kankanhalli
|
PSTNet: Point Spatio-Temporal Convolution on Point Cloud Sequences
|
Accepted to ICLR2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud sequences are irregular and unordered in the spatial dimension
while exhibiting regularities and order in the temporal dimension. Therefore,
existing grid-based convolutions for conventional video processing cannot be
directly applied to spatio-temporal modeling of raw point cloud sequences. In
this paper, we propose a point spatio-temporal (PST) convolution to achieve
informative representations of point cloud sequences. The proposed PST
convolution first disentangles space and time in point cloud sequences. Then, a
spatial convolution is employed to capture the local structure of points in the
3D space, and a temporal convolution is used to model the dynamics of the
spatial regions along the time dimension. Furthermore, we incorporate the
proposed PST convolution into a deep network, namely PSTNet, to extract
features of point cloud sequences in a hierarchical manner. Extensive
experiments on widely-used 3D action recognition and 4D semantic segmentation
datasets demonstrate the effectiveness of PSTNet to model point cloud
sequences.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 02:14:43 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Fan",
"Hehe",
""
],
[
"Yu",
"Xin",
""
],
[
"Ding",
"Yuhang",
""
],
[
"Yang",
"Yi",
""
],
[
"Kankanhalli",
"Mohan",
""
]
] |
new_dataset
| 0.991685 |
2205.13770
|
Haoxin Wang
|
Haoxin Wang, BaekGyu Kim, Jiang Xie, Zhu Han
|
LEAF + AIO: Edge-Assisted Energy-Aware Object Detection for Mobile
Augmented Reality
|
This is a personal copy of the authors. Not for redistribution. The
final version of this paper was accepted by IEEE Transactions on Mobile
Computing
| null | null | null |
cs.CV cs.MM cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today, very few deep learning-based mobile augmented reality (MAR)
applications run on mobile devices because they are significantly
energy-hungry. In this paper, we design an edge-based energy-aware MAR system
that enables MAR devices to dynamically change their configurations, such as
CPU frequency, computation model size, and image offloading frequency based on
user preferences, camera sampling rates, and available radio resources. Our
proposed dynamic MAR configuration adaptations can minimize the per frame
energy consumption of multiple MAR clients without degrading their preferred
MAR performance metrics, such as latency and detection accuracy. To thoroughly
analyze the interactions among MAR configurations, user preferences, camera
sampling rate, and energy consumption, we propose, to the best of our
knowledge, the first comprehensive analytical energy model for MAR devices.
Based on the proposed analytical model, we design a LEAF optimization algorithm
to guide the MAR configuration adaptation and server radio resource allocation.
An image offloading frequency orchestrator, coordinating with the LEAF, is
developed to adaptively regulate the edge-based object detection invocations
and to further improve the energy efficiency of MAR devices. Extensive
evaluations are conducted to validate the performance of the proposed
analytical model and algorithms.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 06:11:50 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Wang",
"Haoxin",
""
],
[
"Kim",
"BaekGyu",
""
],
[
"Xie",
"Jiang",
""
],
[
"Han",
"Zhu",
""
]
] |
new_dataset
| 0.994274 |
2205.13771
|
Julia Kiseleva
|
Julia Kiseleva and Alexey Skrynnik and Artem Zholus and Shrestha
Mohanty and Negar Arabzadeh and Marc-Alexandre C\^ot\'e and Mohammad
Aliannejadi and Milagro Teruel and Ziming Li and Mikhail Burtsev and Maartje
ter Hoeve and Zoya Volovikova and Aleksandr Panov and Yuxuan Sun and Kavya
Srinet and Arthur Szlam and Ahmed Awadallah
|
IGLU 2022: Interactive Grounded Language Understanding in a
Collaborative Environment at NeurIPS 2022
|
arXiv admin note: text overlap with arXiv:2110.06536
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Human intelligence has the remarkable ability to adapt to new tasks and
environments quickly. Starting from a very young age, humans acquire new skills
and learn how to solve new tasks either by imitating the behavior of others or
by following provided natural language instructions. To facilitate research in
this direction, we propose IGLU: Interactive Grounded Language Understanding in
a Collaborative Environment. The primary goal of the competition is to approach
the problem of how to develop interactive embodied agents that learn to solve a
task while provided with grounded natural language instructions in a
collaborative environment. Understanding the complexity of the challenge, we
split it into sub-tasks to make it feasible for participants.
This research challenge is naturally related, but not limited, to two fields
of study that are highly relevant to the NeurIPS community: Natural Language
Understanding and Generation (NLU/G) and Reinforcement Learning (RL).
Therefore, the suggested challenge can bring two communities together to
approach one of the crucial challenges in AI. Another critical aspect of the
challenge is its commitment to a human-in-the-loop evaluation as the final
evaluation of the agents developed by contestants.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 06:12:48 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Kiseleva",
"Julia",
""
],
[
"Skrynnik",
"Alexey",
""
],
[
"Zholus",
"Artem",
""
],
[
"Mohanty",
"Shrestha",
""
],
[
"Arabzadeh",
"Negar",
""
],
[
"Côté",
"Marc-Alexandre",
""
],
[
"Aliannejadi",
"Mohammad",
""
],
[
"Teruel",
"Milagro",
""
],
[
"Li",
"Ziming",
""
],
[
"Burtsev",
"Mikhail",
""
],
[
"ter Hoeve",
"Maartje",
""
],
[
"Volovikova",
"Zoya",
""
],
[
"Panov",
"Aleksandr",
""
],
[
"Sun",
"Yuxuan",
""
],
[
"Srinet",
"Kavya",
""
],
[
"Szlam",
"Arthur",
""
],
[
"Awadallah",
"Ahmed",
""
]
] |
new_dataset
| 0.992663 |
2205.13808
|
Alessandro Brighente
|
Alessandro Brighente, Mauro Conti, Savio Sciancalepore
|
Hide and Seek -- Preserving Location Privacy and Utility in the Remote
Identification of Unmanned Aerial Vehicles
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Due to the frequent unauthorized access by commercial drones to Critical
Infrastructures (CIs) such as airports and oil refineries, the US Federal
Aviation Administration (FAA) recently published a new specification, namely
RemoteID. The aforementioned rule mandates that all Unmanned Aerial Vehicles
(UAVs) have to broadcast information about their identity and location
wirelessly to allow for immediate invasion attribution. However, the
enforcement of such a rule raises severe concerns for UAV operators, especially
in terms of location privacy and tracking threats. Indeed, by
simply eavesdropping on the wireless channel, an adversary could know the
precise location of the UAV and track it, as well as obtaining sensitive
information on path source and destination of the UAV. In this paper, we
investigate the trade-off between location privacy and data utility that can be
provided to UAVs when obfuscating the broadcasted location through differential
privacy techniques. Leveraging the concept of Geo-Indistinguishability
(Geo-Ind), already adopted in the context of Location-Based Services (LBS), we
show that it is possible to enhance the privacy of the UAVs without preventing
CI operators from timely detecting unauthorized invasions. In particular, our
experiments showed that when the location of a UAV is obfuscated with an
average distance of 1.959 km, a carefully designed UAV detection system can
detect 97.9% of invasions, with an average detection delay of 303.97 msec. The
UAVs have to trade off such enhanced location privacy against a non-negligible
probability of false positives, i.e., being detected as invading while not
actually invading the no-fly zone. UAVs and CI operators can resolve such
ambiguous situations later on with the help of the FAA, the latter being the
only party that can unveil the actual location of the UAV.
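For illustration, the standard mechanism for achieving Geo-Ind is planar
Laplace noise. The sketch below follows the well-known mechanism of Andrés et
al. (CCS 2013); the epsilon value, the use of scipy, and the flat 2D setting
are our assumptions, not details taken from this paper:

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace(loc, epsilon, rng=None):
    """Planar Laplace noise for Geo-Indistinguishability: draw an angle
    uniformly and a radius via the inverse CDF, which involves the -1
    branch of the Lambert W function."""
    rng = np.random.default_rng(rng)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    p = rng.uniform(0.0, 1.0)
    r = -(1.0 / epsilon) * (lambertw((p - 1.0) / np.e, k=-1).real + 1.0)
    return loc + r * np.array([np.cos(theta), np.sin(theta)])

# The expected radius is 2/epsilon, so epsilon = 1e-3 (per meter) gives an
# average displacement of about 2 km, in line with the 1.959 km reported above.
noisy = planar_laplace(np.array([0.0, 0.0]), epsilon=1e-3)
```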
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 07:51:10 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Brighente",
"Alessandro",
""
],
[
"Conti",
"Mauro",
""
],
[
"Sciancalepore",
"Savio",
""
]
] |
new_dataset
| 0.980747 |
2205.13882
|
Sarah Meiklejohn
|
George Kappos, Haaroon Yousaf, Rainer St\"utz, Sofia Rollet, Bernhard
Haslhofer, Sarah Meiklejohn
|
How to Peel a Million: Validating and Expanding Bitcoin Clusters
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
One of the defining features of Bitcoin and the thousands of cryptocurrencies
that have been derived from it is a globally visible transaction ledger. While
Bitcoin uses pseudonyms as a way to hide the identity of its participants, a
long line of research has demonstrated that Bitcoin is not anonymous. This has
been perhaps best exemplified by the development of clustering heuristics,
which have in turn given rise to the ability to track the flow of bitcoins as
they are sent from one entity to another.
In this paper, we design a new heuristic to track a certain type of flow,
called a peel chain, that represents many transactions performed by the same
entity; in doing this, we implicitly cluster these transactions and
their associated pseudonyms together. We then use this heuristic to both
validate and expand the results of existing clustering heuristics. We also
develop a machine learning-based validation method and, using a ground-truth
dataset, evaluate all our approaches and compare them with the state of the
art. Ultimately, our goal is to not only enable more powerful tracking
techniques but also call attention to the limits of anonymity in these systems.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 10:32:41 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Kappos",
"George",
""
],
[
"Yousaf",
"Haaroon",
""
],
[
"Stütz",
"Rainer",
""
],
[
"Rollet",
"Sofia",
""
],
[
"Haslhofer",
"Bernhard",
""
],
[
"Meiklejohn",
"Sarah",
""
]
] |
new_dataset
| 0.970688 |
2205.13885
|
Nicolas Kourtellis Ph.D.
|
Myrsini Gkolemi, Panagiotis Papadopoulos, Evangelos P. Markatos,
Nicolas Kourtellis
|
YouTubers Not madeForKids: Detecting Channels Sharing Inappropriate
Videos Targeting Children
|
12 pages, 10 Tables, 23 Figures. In Proceedings of 14th ACM Web
Science Conference 2022, Barcelona, Spain
| null |
10.1145/3501247.3531556
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, hundreds of new YouTube channels have been creating and
sharing videos targeting children, with themes related to animation, superhero
movies, comics, etc. Unfortunately, many of these videos are inappropriate for
consumption by their target audience, due to disturbing, violent, or sexual
scenes. In this paper, we study YouTube channels found to post suitable or
disturbing videos targeting kids in the past. We identify a clear discrepancy
between what YouTube flags as inappropriate content and channels, and what is
found to be disturbing content that is still available on the platform,
targeting kids. In particular, we find that almost 60% of videos that were
manually annotated and classified as disturbing by an older study in 2019 (a
collection bootstrapped with Elsa and other keywords related to children
videos), are still available on YouTube in mid-2021. In the meantime, 44% of
the channels that uploaded such disturbing videos have yet to be suspended and
their videos to be removed. For the first time in the literature, we also study
the "madeForKids" flag, a new feature that YouTube introduced at the end of
2019,
and compare its application to the channels that shared disturbing videos, as
flagged from the previous study. Apparently, these channels are less likely to
be set as "madeForKids" than those sharing suitable content. In addition,
channels posting disturbing videos utilize their channel features such as
keywords, description, topics, posts, etc., to appeal to kids (e.g., using
game-related keywords). Finally, we use a collection of such channel and
content features to train ML classifiers able to detect, at channel creation
time, when a channel will be related to disturbing content uploads. These
classifiers can help YouTube moderators reduce such incidents, pointing to
potentially suspicious accounts without analyzing actual videos.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 10:34:15 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Gkolemi",
"Myrsini",
""
],
[
"Papadopoulos",
"Panagiotis",
""
],
[
"Markatos",
"Evangelos P.",
""
],
[
"Kourtellis",
"Nicolas",
""
]
] |
new_dataset
| 0.999522 |
2205.13908
|
Priyanshu Priya
|
Gopendra Vikram Singh, Priyanshu Priya, Mauajama Firdaus, Asif Ekbal,
Pushpak Bhattacharyya
|
EmoInHindi: A Multi-label Emotion and Intensity Annotated Dataset in
Hindi for Emotion Recognition in Dialogues
|
This paper is accepted at LREC 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The long-standing goal of Artificial Intelligence (AI) has been to create
human-like conversational systems. Such systems should have the ability to
develop an emotional connection with the users, hence emotion recognition in
dialogues is an important task. Emotion detection in dialogues is a challenging
task because humans usually convey multiple emotions with varying degrees of
intensities in a single utterance. Moreover, emotion in an utterance of a
dialogue may be dependent on previous utterances making the task more complex.
Emotion recognition has always been in great demand. However, most of the
existing datasets for multi-label emotion and intensity detection in
conversations are in English. To this end, we create a large conversational
dataset in Hindi named EmoInHindi for multi-label emotion and intensity
recognition in conversations containing 1,814 dialogues with a total of 44,247
utterances. We prepare our dataset in a Wizard-of-Oz manner for mental health
and legal counselling of crime victims. Each utterance of the dialogue is
annotated with one or more emotion categories from the 16 emotion classes
including neutral, and their corresponding intensity values. We further propose
strong contextual baselines that can detect emotion(s) and the corresponding
intensity of an utterance given the conversational context.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 11:23:50 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Singh",
"Gopendra Vikram",
""
],
[
"Priya",
"Priyanshu",
""
],
[
"Firdaus",
"Mauajama",
""
],
[
"Ekbal",
"Asif",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
new_dataset
| 0.999806 |
2205.13981
|
Minjia Shi
|
Minjia Shi, Shukai Wang, Xiaoxiao Li
|
$\mathbb{Z}_p\mathbb{Z}_{p^2}$-linear codes: rank and kernel
| null | null | null | null |
cs.IT cs.CR math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
A code $C$ is called $\mathbb{Z}_p\mathbb{Z}_{p^2}$-linear if it is the Gray
image of a $\mathbb{Z}_p\mathbb{Z}_{p^2}$-additive code, where $p>2$ is prime.
In this paper, the rank and the dimension of the kernel of
$\mathbb{Z}_p\mathbb{Z}_{p^2}$-linear codes are studied. Bounds on the rank of
a $\mathbb{Z}_3\mathbb{Z}_{9}$-linear code and on the dimension of the kernel
of a $\mathbb{Z}_p\mathbb{Z}_{p^2}$-linear code are given, respectively. For
each value of these bounds, we give a detailed construction of the
corresponding code. Finally, pairs of the rank and the dimension of the kernel
of $\mathbb{Z}_3\mathbb{Z}_{9}$-linear codes are also considered.
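For illustration, the Gray map underlying such codes can be written in a few
lines. The snippet below is our own minimal sketch of the standard generalized
(Carlet-style) Gray map from $\mathbb{Z}_{p^2}$ to $\mathbb{Z}_p^p$; the
paper's exact map and conventions may differ:

```python
def gray_map(u: int, p: int) -> tuple:
    """Generalized Gray map Z_{p^2} -> Z_p^p: write u = a + p*b with
    a, b in Z_p and return (b, b + a, b + 2a, ..., b + (p-1)a) mod p.
    For p = 2 this reduces to the classical Z_4 Gray map
    0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10."""
    a, b = u % p, u // p
    return tuple((b + i * a) % p for i in range(p))

# The p = 3 case (Z_9 -> Z_3^3) is the setting of the Z_3 Z_9-linear codes
# considered above:
for u in range(9):
    print(u, gray_map(u, 3))
```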
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 13:52:13 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Shi",
"Minjia",
""
],
[
"Wang",
"Shukai",
""
],
[
"Li",
"Xiaoxiao",
""
]
] |
new_dataset
| 0.997441 |
2205.13992
|
Zhe Liu
|
Zhe Liu and Chunyang Chen and Junjie Wang and Yuhui Su and Qing Wang
|
NaviDroid: A Tool for Guiding Manual Android Testing via Hint Moves
|
Accepted by ICSE 2022. arXiv admin note: substantial text overlap
with arXiv:2201.12085
| null |
10.1145/3510454.3516848
| null |
cs.SE
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Manual testing, as a complement to automated GUI testing, is the last line of
defense for app quality especially in spotting usability and accessibility
issues. However, the repeated actions and easy missing of some functionalities
make manual testing time-consuming, labor-extensive and inefficient. Inspired
by the game candy crush with flashy candies as hint moves for players, we
develop a tool named NaviDroid for navigating human testers via highlighted
next operations for more effective and efficient testing. Within NaviDroid, it
constructs an enriched state transition graph (STG) with the trigger actions as
the edges for two involved states. Based on the STG, NaviDroid utilizes the
dynamic programming algorithm to plan the exploration path, and augment the
run-time GUI with visualized hint moves for testers to quickly explore untested
states and avoid duplication. The automated experiments demonstrate the high
coverage and efficient path planning of NaviDroid. A user study further
confirms its usefulness in the participants covering more states and
activities, detecting more bugs within less time compared with the control
group. NaviDroid demo video: https://youtu.be/lShFyg_nTA0.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 14:10:12 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Liu",
"Zhe",
""
],
[
"Chen",
"Chunyang",
""
],
[
"Wang",
"Junjie",
""
],
[
"Su",
"Yuhui",
""
],
[
"Wang",
"Qing",
""
]
] |
new_dataset
| 0.993585 |
2205.14018
|
Didier Caucal
|
Didier Caucal and Chlo\'e Rispal
|
Synchronizable functions on integers
|
23 pages, 15 figures
| null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For all natural numbers a,b and d > 0, we consider the function f_{a,b,d}
which associates n/d to any integer n when it is a multiple of d, and an + b
otherwise; in particular f_{3,1,2} is the Collatz function. Coding in base a >
1 with b < a, we realize these functions by input-deterministic
letter-to-letter transducers with additional output final words. This
particular form makes explicit, for any integer n, the n-fold composition of
such a transducer to compute f^n_{a,b,d}. We even realize the closure under
composition f^*_{a,b,d} by an infinite input-deterministic letter-to-letter
transducer with a regular set of initial states and a length recurrent terminal
function.
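For illustration, the function family and its iterates are straightforward to
state directly; the following minimal Python sketch is ours, not code from the
paper:

```python
def f(n: int, a: int, b: int, d: int) -> int:
    """f_{a,b,d}: maps n to n/d when d divides n, and to a*n + b otherwise."""
    return n // d if n % d == 0 else a * n + b

def orbit(n: int, a: int, b: int, d: int, steps: int) -> list:
    """The values n, f(n), ..., i.e. the composition f^steps_{a,b,d} of f."""
    values = [n]
    for _ in range(steps):
        n = f(n, a, b, d)
        values.append(n)
    return values

# f_{3,1,2} is the Collatz function:
print(orbit(7, 3, 1, 2, 10))  # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10]
```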
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 14:39:23 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Caucal",
"Didier",
""
],
[
"Rispal",
"Chloé",
""
]
] |
new_dataset
| 0.998685 |
2205.14065
|
Gautam Singh
|
Gautam Singh, Yi-Fu Wu, Sungjin Ahn
|
Simple Unsupervised Object-Centric Learning for Complex and Naturalistic
Videos
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised object-centric learning aims to represent the modular,
compositional, and causal structure of a scene as a set of object
representations and thereby promises to resolve many critical limitations of
traditional single-vector representations such as poor systematic
generalization. Although there have been many remarkable advances in recent
years, one of the most critical problems in this direction has been that
previous methods work only with simple and synthetic scenes but not with
complex and naturalistic images or videos. In this paper, we propose STEVE, an
unsupervised model for object-centric learning in videos. Our proposed model
makes a significant advancement by demonstrating its effectiveness on various
complex and naturalistic videos unprecedented in this line of research.
Interestingly, this is achieved by neither adding complexity to the model
architecture nor introducing a new objective or weak supervision. Rather, it is
achieved by a surprisingly simple architecture that uses a transformer-based
image decoder conditioned on slots and the learning objective is simply to
reconstruct the observation. Our experiment results on various complex and
naturalistic videos show significant improvements compared to the previous
state-of-the-art.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 15:50:44 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Singh",
"Gautam",
""
],
[
"Wu",
"Yi-Fu",
""
],
[
"Ahn",
"Sungjin",
""
]
] |
new_dataset
| 0.990317 |
2205.14068
|
Anmoal Porwal
|
Anmoal Porwal, Lukas Holzbaur, Hedongliang Liu, Julian Renner, Antonia
Wachter-Zeh, Violetta Weger
|
Interleaved Prange: A New Generic Decoder for Interleaved Codes
| null | null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the recent challenges in post-quantum cryptography, several new
approaches for code-based cryptography have been proposed. For example, a
variant of the McEliece cryptosystem based on interleaved codes was proposed.
In order to deem such new settings secure, we first need to understand and
analyze the complexity of the underlying problem, in this case the problem of
decoding a random interleaved code. A simple approach to decoding such codes
would be to randomly choose a vector in the row span of the received matrix and
run a classical information set decoding algorithm on this erroneous codeword.
In this paper, we propose a new generic decoder for interleaved codes, which is
an adaptation of the classical idea of information set decoding by Prange and
perfectly fits the interleaved setting. We then analyze the cost of the new
algorithm and a comparison to the simple approach described above shows the
superiority of Interleaved Prange.
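For illustration, the "simple approach" used as the baseline above can be made
concrete. The sketch below is our own minimal GF(2) implementation of classical
Prange decoding applied to one row-span vector; it is not the paper's
Interleaved Prange algorithm:

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination; None if singular."""
    A, b = A.copy() % 2, b.copy() % 2
    n = A.shape[0]
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r, col]), None)
        if piv is None:
            return None
        A[[col, piv]], b[[col, piv]] = A[[piv, col]].copy(), b[[piv, col]].copy()
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                b[r] ^= b[col]
    return b

def prange(G, y, t, iters=20000, seed=0):
    """Plain Prange ISD: repeatedly guess an error-free information set."""
    rng = np.random.default_rng(seed)
    k, n = G.shape
    for _ in range(iters):
        info = rng.choice(n, size=k, replace=False)
        m = gf2_solve(G[:, info].T, y[info])  # solve m @ G_info = y_info
        if m is None:
            continue
        e = (y + m @ G) % 2                   # candidate error vector
        if e.sum() <= t:
            return m, e
    return None

# Simple approach for an interleaved code with received l x n matrix Y:
# pick a random vector in its row span and decode it, e.g.
#   c = rng.integers(0, 2, size=l); y = (c @ Y) % 2; prange(G, y, t)
```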
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 15:55:50 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Porwal",
"Anmoal",
""
],
[
"Holzbaur",
"Lukas",
""
],
[
"Liu",
"Hedongliang",
""
],
[
"Renner",
"Julian",
""
],
[
"Wachter-Zeh",
"Antonia",
""
],
[
"Weger",
"Violetta",
""
]
] |
new_dataset
| 0.987627 |
2205.14106
|
Andrea Passarella
|
Umair Sadiq, Mohan Kumar, Andrea Passarella and Marco Conti
|
Service Composition in Opportunistic Networks: A Load and Mobility Aware
Solution
| null |
in IEEE Transactions on Computers, vol. 64, no. 8, pp. 2308-2322,
1 Aug. 2015
|
10.1109/TC.2014.2360544
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pervasive networks formed by users' mobile devices have the potential to
exploit a rich set of distributed service components that can be composed to
provide each user with a multitude of application level services. However, in
many challenging scenarios, opportunistic networking techniques are required to
enable communication as devices suffer from intermittent connectivity,
disconnections and partitions. This poses novel challenges to service
composition techniques. While several works have discussed middleware and
architectures for service composition in well-connected wired networks and in
stable MANET environments, the underlying mechanism for selecting and
forwarding service requests in the significantly challenging networking
environment of opportunistic networks has not been entirely addressed. The
problem comprises three stages: i) selecting an appropriate service sequence
set out of available services to obtain the required application level service;
ii) routing results of a previous stage in the composition to the next one
through a multi-hop opportunistic path; and iii) routing final service outcomes
back to the requester. The proposed algorithm derives efficiency and
effectiveness by taking into account the estimated load at service providers
and expected time to opportunistically route information between devices. Based
on this information the algorithm estimates the best composition to obtain a
required service. It is shown that using only local knowledge collected in a
distributed manner, performance close to a real-time centralized system can be
achieved. Applicability and performance guarantee of the service composition
algorithm in a range of mobility characteristics are established through
extensive simulations on real/synthetic traces.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 17:18:20 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Sadiq",
"Umair",
""
],
[
"Kumar",
"Mohan",
""
],
[
"Passarella",
"Andrea",
""
],
[
"Conti",
"Marco",
""
]
] |
new_dataset
| 0.970902 |
2205.14136
|
Hai Lin
|
Kevin Smith, Hai Lin, Praveen Tiwari, Marjorie Sayer, Claudionor
Coelho
|
PSL is Dead. Long Live PSL
|
7 pages, 16 figures
| null | null | null |
cs.LG cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Property Specification Language (PSL) is a form of temporal logic that has
been mainly used in discrete domains (e.g. formal hardware verification). In
this paper, we show that by merging machine learning techniques with PSL
monitors, we can extend PSL to work on continuous domains. We apply this
technique in machine learning-based anomaly detection to analyze scenarios of
real-time streaming events from continuous variables in order to detect
abnormal behaviors of a system. By using machine learning with formal models,
we leverage the strengths of both machine learning methods and formal semantics
of time. On one hand, machine learning techniques can produce distributions on
continuous variables, where abnormalities can be captured as deviations from
the distributions. On the other hand, formal methods can characterize discrete
temporal behaviors and relations that cannot be easily learned by machine
learning techniques. Interestingly, the anomalies detected by machine learning
and the underlying time representation used are discrete events. We implemented
a temporal monitoring package (TEF) that operates in conjunction with normal
data science packages for anomaly detection machine learning systems, and we
show that TEF can be used to perform accurate interpretation of temporal
correlation between events.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 17:55:54 GMT"
}
] | 2022-05-30T00:00:00 |
[
[
"Smith",
"Kevin",
""
],
[
"Lin",
"Hai",
""
],
[
"Tiwari",
"Praveen",
""
],
[
"Sayer",
"Marjorie",
""
],
[
"Coelho",
"Claudionor",
""
]
] |
new_dataset
| 0.999718 |
2011.04087
|
Yun Chang
|
Yun Chang, Yulun Tian, Jonathan P. How, Luca Carlone
|
Kimera-Multi: a System for Distributed Multi-Robot Metric-Semantic
Simultaneous Localization and Mapping
|
9 pages
| null |
10.1109/ICRA48506.2021.9561090
| null |
cs.RO cs.CV cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first fully distributed multi-robot system for dense
metric-semantic Simultaneous Localization and Mapping (SLAM). Our system,
dubbed Kimera-Multi, is implemented by a team of robots equipped with
visual-inertial sensors, and builds a 3D mesh model of the environment in
real-time, where each face of the mesh is annotated with a semantic label
(e.g., building, road, objects). In Kimera-Multi, each robot builds a local
trajectory estimate and a local mesh using Kimera. Then, when two robots are
within communication range, they initiate a distributed place recognition and
robust pose graph optimization protocol with a novel incremental maximum clique
outlier rejection; the protocol allows the robots to improve their local
trajectory estimates by leveraging inter-robot loop closures. Finally, each
robot uses its improved trajectory estimate to correct the local mesh using
mesh deformation techniques. We demonstrate Kimera-Multi in photo-realistic
simulations and real data. Kimera-Multi (i) is able to build accurate 3D
metric-semantic meshes, (ii) is robust to incorrect loop closures while
requiring less computation than state-of-the-art distributed SLAM back-ends,
and (iii) is efficient, both in terms of computation at each robot as well as
communication bandwidth.
|
[
{
"version": "v1",
"created": "Sun, 8 Nov 2020 21:38:12 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Chang",
"Yun",
""
],
[
"Tian",
"Yulun",
""
],
[
"How",
"Jonathan P.",
""
],
[
"Carlone",
"Luca",
""
]
] |
new_dataset
| 0.997124 |
2011.13307
|
Wu Weijia
|
Weijia Wu, Enze Xie, Ruimao Zhang, Wenhai Wang, Hong Zhou, Ping Luo
|
Polygon-free: Unconstrained Scene Text Detection with Box Annotations
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Although a polygon is a more accurate representation than an upright bounding
box for text detection, the annotations of polygons are extremely expensive and
challenging. Unlike existing works that employ fully-supervised training with
polygon annotations, this study proposes an unconstrained text detection system
termed Polygon-free (PF), in which most existing polygon-based text detectors
(e.g., PSENet [33],DB [16]) are trained with only upright bounding box
annotations. Our core idea is to transfer knowledge from synthetic data to real
data to enhance the supervision information of upright bounding boxes. This is
made possible with a simple segmentation network, namely Skeleton Attention
Segmentation Network (SASN), that includes three vital components (i.e.,
channel attention, spatial attention and skeleton attention map) and one soft
cross-entropy loss. Experiments demonstrate that the proposed Polygon-free
system can combine general detectors (e.g., EAST, PSENet, DB) to yield
surprisingly high-quality pixel-level results with only upright bounding box
annotations on a variety of datasets (e.g., ICDAR2019-Art, TotalText,
ICDAR2015). For example, without using polygon annotations, PSENet achieves an
80.5% F-score on TotalText [3] (vs. 80.9% of fully supervised counterpart),
31.1% better than training directly with upright bounding box annotations, and
saves 80%+ labeling costs. We hope that PF can provide a new perspective for
text detection to reduce the labeling costs. The code can be found at
https://github.com/weijiawu/Unconstrained-Text-Detection-with-Box-Supervisionand-Dynamic-Self-Training.
|
[
{
"version": "v1",
"created": "Thu, 26 Nov 2020 14:19:33 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Dec 2020 07:58:55 GMT"
},
{
"version": "v3",
"created": "Thu, 26 May 2022 10:47:26 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Wu",
"Weijia",
""
],
[
"Xie",
"Enze",
""
],
[
"Zhang",
"Ruimao",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Zhou",
"Hong",
""
],
[
"Luo",
"Ping",
""
]
] |
new_dataset
| 0.987598 |
2110.13825
|
Nicholas Rypkema
|
Nicholas R. Rypkema, Henrik Schmidt, Erin M. Fischell
|
Synchronous-Clock Range-Angle Relative Acoustic Navigation: A Unified
Approach to Multi-AUV Localization, Command, Control and Coordination
|
34 pages, 17 figures, to be published in Field Robotics Special Issue
on Unmanned Marine Systems
|
Field Robotics 2 (2022) 774-806
|
10.55417/fr.2022026
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a scalable acoustic navigation approach for the unified
command, control and coordination of multiple autonomous underwater vehicles
(AUVs). Existing multi-AUV operations typically achieve coordination manually,
by programming individual vehicles on the surface via radio communications,
which becomes impractical with large vehicle numbers; or they require
bi-directional inter-vehicle acoustic communications to achieve limited
coordination when submerged, with limited scalability due to the physical
properties of the acoustic channel. Our approach utilizes a single,
periodically-broadcasting beacon acting as a navigation reference for the group
of AUVs, each of which carries a chip-scale atomic clock (CSAC) and fixed
ultra-short baseline (USBL) array of acoustic receivers. One-way travel-time
(OWTT) from synchronized clocks and time delays between signals received by
each array element allow any number of vehicles within receiving distance to
determine range and angle, and thus their position relative to the
beacon. The operator can command different vehicle behaviors by selecting
between broadcast signals from a predetermined set, while coordination between
AUVs is achieved without inter-vehicle communication, by defining individual
vehicle behaviors within the context of the group. Vehicle behaviors are
designed within a beacon-centric moving frame of reference, allowing the
operator to control the absolute position of the AUV group by re-positioning
the navigation beacon to survey the area of interest. Multiple deployments with
a fleet of three miniature, low-cost SandShark AUVs performing closed-loop
acoustic navigation in real-time provide experimental results validated against
a secondary long-baseline (LBL) positioning system, demonstrating the
capabilities and robustness of our approach with real-world data.
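For illustration, the core range-angle geometry reduces to a few lines. The
following is a simplified 2D sketch of ours; the nominal sound speed and the
two-receiver bearing model are assumptions, not the paper's full USBL
processing:

```python
import numpy as np

SOUND_SPEED = 1500.0  # m/s, nominal seawater value (assumption)

def owtt_range(t_emit, t_recv):
    """One-way travel-time ranging: synchronized clocks (CSACs) make the
    emission time known, so range is simply c * (t_recv - t_emit)."""
    return SOUND_SPEED * (t_recv - t_emit)

def bearing_from_delay(dt, baseline):
    """Far-field bearing from the time delay between two receivers
    separated by `baseline` meters (one pair of the USBL array)."""
    return np.arcsin(np.clip(SOUND_SPEED * dt / baseline, -1.0, 1.0))

def position_relative_to_beacon(t_emit, t_recv, dt, baseline):
    """Range plus bearing gives a 2D position in a beacon-centric frame,
    with no reply transmission from the vehicle required."""
    r = owtt_range(t_emit, t_recv)
    theta = bearing_from_delay(dt, baseline)
    return np.array([r * np.cos(theta), r * np.sin(theta)])
```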
|
[
{
"version": "v1",
"created": "Tue, 26 Oct 2021 16:20:11 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Rypkema",
"Nicholas R.",
""
],
[
"Schmidt",
"Henrik",
""
],
[
"Fischell",
"Erin M.",
""
]
] |
new_dataset
| 0.99596 |
2201.08054
|
Yihang Li
|
Yihang Li, Shuichiro Shimizu, Weiqi Gu, Chenhui Chu, Sadao Kurohashi
|
VISA: An Ambiguous Subtitles Dataset for Visual Scene-Aware Machine
Translation
|
Accepted by LREC2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing multimodal machine translation (MMT) datasets consist of images and
video captions or general subtitles, which rarely contain linguistic ambiguity,
making visual information not so effective to generate appropriate
translations. We introduce VISA, a new dataset that consists of 40k
Japanese-English parallel sentence pairs and corresponding video clips with the
following key features: (1) the parallel sentences are subtitles from movies
and TV episodes; (2) the source subtitles are ambiguous, which means they have
multiple possible translations with different meanings; (3) we divide the
dataset into Polysemy and Omission according to the cause of ambiguity. We show
that VISA is challenging for the latest MMT system, and we hope that the
dataset can facilitate MMT research. The VISA dataset is available at:
https://github.com/ku-nlp/VISA.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 08:38:31 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 06:54:02 GMT"
},
{
"version": "v3",
"created": "Thu, 26 May 2022 04:35:49 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Li",
"Yihang",
""
],
[
"Shimizu",
"Shuichiro",
""
],
[
"Gu",
"Weiqi",
""
],
[
"Chu",
"Chenhui",
""
],
[
"Kurohashi",
"Sadao",
""
]
] |
new_dataset
| 0.999784 |
2202.05240
|
Benedek Rozemberczki
|
Benedek Rozemberczki, Charles Tapley Hoyt, Anna Gogleva, Piotr
Grabowski, Klas Karis, Andrej Lamov, Andriy Nikolov, Sebastian Nilsson,
Michael Ughetto, Yu Wang, Tyler Derr, Benjamin M Gyori
|
ChemicalX: A Deep Learning Library for Drug Pair Scoring
|
https://github.com/AstraZeneca/chemicalx
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce ChemicalX, a PyTorch-based deep learning library
designed for providing a range of state of the art models to solve the drug
pair scoring task. The primary objective of the library is to make deep drug
pair scoring models accessible to machine learning researchers and
practitioners in a streamlined framework. The design of ChemicalX reuses
existing high level model training utilities, geometric deep learning, and deep
chemistry layers from the PyTorch ecosystem. Our system provides neural network
layers, custom pair scoring architectures, data loaders, and batch iterators
for end users. We showcase these features with example code snippets and case
studies to highlight the characteristics of ChemicalX. A range of experiments
on real world drug-drug interaction, polypharmacy side effect, and combination
synergy prediction tasks demonstrate that the models available in ChemicalX are
effective at solving the pair scoring task. Finally, we show that ChemicalX
could be used to train and score machine learning models on large drug pair
datasets with hundreds of thousands of compounds on commodity hardware.
|
[
{
"version": "v1",
"created": "Thu, 10 Feb 2022 18:49:01 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Feb 2022 09:26:43 GMT"
},
{
"version": "v3",
"created": "Thu, 26 May 2022 14:44:32 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Rozemberczki",
"Benedek",
""
],
[
"Hoyt",
"Charles Tapley",
""
],
[
"Gogleva",
"Anna",
""
],
[
"Grabowski",
"Piotr",
""
],
[
"Karis",
"Klas",
""
],
[
"Lamov",
"Andrej",
""
],
[
"Nikolov",
"Andriy",
""
],
[
"Nilsson",
"Sebastian",
""
],
[
"Ughetto",
"Michael",
""
],
[
"Wang",
"Yu",
""
],
[
"Derr",
"Tyler",
""
],
[
"Gyori",
"Benjamin M",
""
]
] |
new_dataset
| 0.975076 |
2203.06749
|
David Freire-Obreg\'on
|
David Freire-Obreg\'on, Javier Lorenzo-Navarro, Modesto
Castrill\'on-Santana
|
Decontextualized I3D ConvNet for ultra-distance runners performance
analysis at a glance
|
Accepted at 21st International Conference on Image Analysis and
Processing (ICIAP 2021)
| null |
10.1007/978-3-031-06433-3_21
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In May 2021, the site runnersworld.com published that participation in
ultra-distance races has increased by 1,676% in the last 23 years. Moreover,
nearly 41% of those runners participate in more than one race per year. The
development of wearable devices has undoubtedly contributed to motivating
participants by providing performance measures in real-time. However, we
believe there is room for improvement, particularly from the organizers'
point of view. This work aims to determine how runners' performance can be
quantified and predicted by considering a non-invasive technique focusing on
the ultra-running scenario. In this sense, participants are captured when they
pass through a set of locations placed along the race track. Each video clip
is fed as input to an I3D ConvNet that extracts the participant's running
gait. Furthermore, weather and illumination capture conditions or
occlusions may affect these recordings due to the race staff and other runners.
To address this challenging task, we have tracked and codified the
participant's running gait at some RPs and removed the context intending to
ensure a runner-of-interest proper evaluation. The evaluation suggests that the
features extracted by an I3D ConvNet provide enough information to estimate the
participant's performance along the different race tracks.
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 20:11:10 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2022 16:55:40 GMT"
},
{
"version": "v3",
"created": "Thu, 26 May 2022 10:24:49 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Freire-Obregón",
"David",
""
],
[
"Lorenzo-Navarro",
"Javier",
""
],
[
"Castrillón-Santana",
"Modesto",
""
]
] |
new_dataset
| 0.954205 |
2205.06457
|
Long Phan
|
Long Phan, Hieu Tran, Hieu Nguyen, Trieu H. Trinh
|
ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language
Generation
|
NAACL SRW 2022. arXiv admin note: text overlap with arXiv:2110.04257
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present ViT5, a pretrained Transformer-based encoder-decoder model for the
Vietnamese language. With T5-style self-supervised pretraining, ViT5 is trained
on a large corpus of high-quality and diverse Vietnamese texts. We benchmark
ViT5 on two downstream text generation tasks, Abstractive Text Summarization
and Named Entity Recognition. Although Abstractive Text Summarization has been
widely studied for the English language thanks to its rich and large source of
data, there has been minimal research into the same task in Vietnamese, a much
lower resource language. In this work, we perform exhaustive experiments on
both Vietnamese Abstractive Summarization and Named Entity Recognition,
validating the performance of ViT5 against many other pretrained
Transformer-based encoder-decoder models. Our experiments show that ViT5
significantly outperforms existing models and achieves state-of-the-art results
on Vietnamese Text Summarization. On the task of Named Entity Recognition, ViT5
is competitive against previous best results from pretrained encoder-based
Transformer models. Further analysis shows the importance of context length
during the self-supervised pretraining on downstream performance across
different settings.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 06:08:35 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2022 08:23:38 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Phan",
"Long",
""
],
[
"Tran",
"Hieu",
""
],
[
"Nguyen",
"Hieu",
""
],
[
"Trinh",
"Trieu H.",
""
]
] |
new_dataset
| 0.99937 |
2205.07410
|
Harideep Nair
|
Harideep Nair, Prabhu Vellaisamy, Santha Bhasuthkar, and John Paul
Shen
|
TNN7: A Custom Macro Suite for Implementing Highly Optimized Designs of
Neuromorphic TNNs
|
To be published in ISVLSI 2022
| null | null | null |
cs.AR cs.ET cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal Neural Networks (TNNs), inspired from the mammalian neocortex,
exhibit energy-efficient online sensory processing capabilities. Recent works
have proposed a microarchitecture framework for implementing TNNs and
demonstrated competitive performance on vision and time-series applications.
Building on these previous works, this work proposes TNN7, a suite of nine
highly optimized custom macros developed using a predictive 7nm Process Design
Kit (PDK), to enhance the efficiency, modularity and flexibility of the TNN
design framework. TNN prototypes for two applications are used for evaluation
of TNN7. An unsupervised time-series clustering TNN delivering competitive
performance can be implemented within 40 uW power and 0.05 mm^2 area, while a
4-layer TNN that achieves an MNIST error rate of 1% consumes only 18 mW and
24.63 mm^2. On average, the proposed macros reduce power, delay, area, and
energy-delay product by 14%, 16%, 28%, and 45%, respectively. Furthermore,
employing TNN7 significantly reduces the synthesis runtime of TNN designs (by
more than 3x), allowing for highly-scaled TNN implementations to be realized.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 01:03:41 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2022 20:14:07 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Nair",
"Harideep",
""
],
[
"Vellaisamy",
"Prabhu",
""
],
[
"Bhasuthkar",
"Santha",
""
],
[
"Shen",
"John Paul",
""
]
] |
new_dataset
| 0.987971 |
2205.08712
|
Andrey Pak
|
Andrey Pak, Hemanth Manjunatha, Dimitar Filev, Panagiotis Tsiotras
|
CARNet: A Dynamic Autoencoder for Learning Latent Dynamics in Autonomous
Driving Tasks
|
13 pages, 14 figures, 8 tables, removed submission info, bios
| null | null | null |
cs.LG cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Autonomous driving has received a lot of attention in the automotive industry
and is often seen as the future of transportation. Passenger vehicles equipped
with a wide array of sensors (e.g., cameras, front-facing radars, LiDARs, and
IMUs) capable of continuous perception of the environment are becoming
increasingly prevalent. These sensors provide a stream of high-dimensional,
temporally correlated data that is essential for reliable autonomous driving.
An autonomous driving system should effectively use the information collected
from the various sensors in order to form an abstract description of the world
and maintain situational awareness. Deep learning models, such as autoencoders,
can be used for that purpose, as they can learn compact latent representations
from a stream of incoming data. However, most autoencoder models process the
data independently, without assuming any temporal interdependencies. Thus,
there is a need for deep learning models that explicitly consider the temporal
dependence of the data in their architecture. This work proposes CARNet, a
Combined dynAmic autoencodeR NETwork architecture that utilizes an autoencoder
combined with a recurrent neural network to learn the current latent
representation and, in addition, also predict future latent representations in
the context of autonomous driving. We demonstrate the efficacy of the proposed
model in both imitation and reinforcement learning settings using both
simulated and real datasets. Our results show that the proposed model
outperforms the baseline state-of-the-art model, while having significantly
fewer trainable parameters.
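For illustration, the general pattern of an autoencoder whose latent codes are
rolled forward by a recurrent network can be sketched as follows; the layer
sizes, the GRU choice, and the loss pairing are our assumptions rather than
CARNet's exact design:

```python
import torch
import torch.nn as nn

class LatentDynamicsAE(nn.Module):
    """Autoencoder plus RNN over latents: reconstruct the current
    observation and predict the next latent state."""
    def __init__(self, obs_dim=256, z_dim=32, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, obs_dim))
        self.rnn = nn.GRU(z_dim, h_dim, batch_first=True)
        self.pred = nn.Linear(h_dim, z_dim)    # next-step latent head

    def forward(self, obs_seq):                # obs_seq: (B, T, obs_dim)
        z = self.enc(obs_seq)                  # (B, T, z_dim)
        recon = self.dec(z)                    # reconstruction branch
        h, _ = self.rnn(z)
        z_next = self.pred(h)                  # prediction branch
        return recon, z, z_next

# Training would combine a reconstruction loss on `recon` with a prediction
# loss matching z_next[:, :-1] against z[:, 1:].detach().
```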
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 04:15:42 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2022 17:43:32 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Pak",
"Andrey",
""
],
[
"Manjunatha",
"Hemanth",
""
],
[
"Filev",
"Dimitar",
""
],
[
"Tsiotras",
"Panagiotis",
""
]
] |
new_dataset
| 0.999577 |
2205.12635
|
Hailong Ma
|
Hailong Ma, Xin Xia, Xing Wang, Xuefeng Xiao, Jiashi Li, Min Zheng
|
MoCoViT: Mobile Convolutional Vision Transformer
|
After evaluation, the relevant technical details cannot be disclosed for now,
so the manuscript is temporarily withdrawn. We will wait for the right time to
resubmit
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Transformer networks have achieved impressive results on a variety
of vision tasks. However, most of them are computationally expensive and not
suitable for real-world mobile applications. In this work, we present Mobile
Convolutional Vision Transformer (MoCoViT), which improves in performance and
efficiency by introducing transformer into mobile convolutional networks to
leverage the benefits of both architectures. Different from recent works on
vision transformer, the mobile transformer block in MoCoViT is carefully
designed for mobile devices and is very lightweight, accomplished through two
primary modifications: the Mobile Self-Attention (MoSA) module and the Mobile
Feed Forward Network (MoFFN). MoSA simplifies the calculation of the attention
map through Branch Sharing scheme while MoFFN serves as a mobile version of MLP
in the transformer, further reducing the computation by a large margin.
Comprehensive experiments verify that our proposed MoCoViT family outperform
state-of-the-art portable CNNs and transformer neural architectures on various
vision tasks. On ImageNet classification, it achieves 74.5% top-1 accuracy at
147M FLOPs, gaining 1.2% over MobileNetV3 with less computation. On the COCO
object detection task, MoCoViT outperforms GhostNet by 2.1 AP in the RetinaNet
framework.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 10:21:57 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2022 13:40:26 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Ma",
"Hailong",
""
],
[
"Xia",
"Xin",
""
],
[
"Wang",
"Xing",
""
],
[
"Xiao",
"Xuefeng",
""
],
[
"Li",
"Jiashi",
""
],
[
"Zheng",
"Min",
""
]
] |
new_dataset
| 0.999529 |
2205.13095
|
Faraz Waseem
|
Faraz Waseem, Sanjit Menon, Haotian Xu, Debashis Mondal
|
VizInspect Pro -- Automated Optical Inspection (AOI) solution
| null | null | null | null |
cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional vision-based Automated Optical Inspection (referred to as AOI in
this paper) systems present multiple challenges in factory settings, including
inability to scale across multiple product lines, requirement of vendor
programming expertise, little tolerance to variations and lack of cloud
connectivity for aggregated insights. The lack of flexibility in these systems
presents a unique opportunity for a deep learning based AOI system specifically
for factory automation. The proposed solution, VizInspect pro is a generic
computer vision based AOI solution built on top of Leo - An edge AI platform.
Innovative features that overcome challenges of traditional vision systems
include deep learning based image analysis which combines the power of
self-learning with high speed and accuracy, an intuitive user interface to
configure inspection profiles in minutes without ML or vision expertise and the
ability to solve complex inspection challenges while being tolerant to
deviations and unpredictable defects. This solution has been validated by
multiple external enterprise customers with confirmed value propositions. In
this paper, we show how this solution and platform solved problems around
model development, deployment, scaling multiple inferences and visualizations.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 00:38:48 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Waseem",
"Faraz",
""
],
[
"Menon",
"Sanjit",
""
],
[
"Xu",
"Haotian",
""
],
[
"Mondal",
"Debashis",
""
]
] |
new_dataset
| 0.997112 |
2205.13229
|
Federica Vinella
|
Isabella Saccardi, Duygu Sezen Islakoglu, Anouk Neerincx, Federica
Lucia Vinella
|
Symbiotic Child Emotional Support with Social Robots and Temporal
Knowledge Graphs
|
Human-Centered Design of Symbiotic Hybrid Intelligence Workshop HHAI
2022
| null | null | null |
cs.RO cs.AI cs.CL cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In current youth-care programs, children with needs (mental health, family
issues, learning disabilities, and autism) receive support from youth and
family experts as one-to-one assistance at schools or hospitals. Occasionally,
social robots have featured in such settings as support roles in a one-to-one
interaction with the child. In this paper, we suggest the development of a
symbiotic framework for real-time Emotional Support (ES) with social robots
and temporal Knowledge Graphs (KGs). By augmenting a domain-specific corpus
from the
literature on ES for children (between the age of 8 and 12) and providing
scenario-driven context including the history of events, we suggest developing
an experimental knowledge-aware ES framework. The framework both guides the
social robot in providing ES statements to the child and assists the expert in
tracking and interpreting the child's emotional state and related events over
time.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 08:44:31 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Saccardi",
"Isabella",
""
],
[
"Islakoglu",
"Duygu Sezen",
""
],
[
"Neerincx",
"Anouk",
""
],
[
"Vinella",
"Federica Lucia",
""
]
] |
new_dataset
| 0.999489 |
2205.13256
|
Lianna Zhao
|
Lianna Zhao, Pietro Ferraro and Robert Shorten
|
A DLT enabled smart mask system to enable social compliance
| null | null | null | null |
cs.CY cs.CR cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As COVID-19 remains a cause of concern, especially due to its mutations,
wearing masks correctly and efficiently remains a priority in order to limit
the spread of the disease. In this paper we present a wearable smart-mask
prototype using concepts from Internet of Things, Control Theory and
Distributed Ledger Technologies. Its purpose is to encourage people to comply
with social distancing norms, through the use of incentives. The smart mask is
designed to monitor Carbon Dioxide and Total Volatile Organic Compounds
concentrations. The detected data is appended to a DAG-based DLT, named the
IOTA Tangle. The IOTA Tangle ensures that the data is secure and immutable and
acts as a communication backbone for the incentive mechanism. A
hardware-in-the-loop simulation, based on indoor positioning, is developed to
validate the effectiveness of the designed prototype.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 09:49:58 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Zhao",
"Lianna",
""
],
[
"Ferraro",
"Pietro",
""
],
[
"Shorten",
"Robert",
""
]
] |
new_dataset
| 0.999758 |
2205.13322
|
Mayank Raikwar
|
Mayank Raikwar and Danilo Gligoroski
|
DoS Attacks on Blockchain Ecosystem
|
Accepted at 4TH INTERNATIONAL WORKSHOP ON FUTURE PERSPECTIVE OF
DECENTRALIZED APPLICATIONS (FPDAPP), Euro-Par 2021: Parallel Processing
Workshops
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Denial of Service (DoS) attacks are a growing threat in network services. The
frequency and intensity of DoS attacks are rapidly increasing day by day. The
immense financial potential of the cryptocurrency market makes it a prevalent
target of DoS attacks. DoS attack events keep happening in cryptocurrencies
and the blockchain ecosystem. To the best of our knowledge, there has not been
any systematic study of DoS attacks on the blockchain ecosystem. In
this paper, we identify ten entities in the blockchain ecosystem and we
scrutinize the DoS attacks on them. We also present the DoS mitigation
techniques applicable to blockchain services. Additionally, we propose a
DoS mitigation technique based on verifiable delay functions (VDFs).
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 12:53:40 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Raikwar",
"Mayank",
""
],
[
"Gligoroski",
"Danilo",
""
]
] |
new_dataset
| 0.994252 |
2205.13399
|
Mayowa Ayodele
|
Mayowa Ayodele and Richard Allmendinger and Manuel L\'opez-Ib\'a\~nez
and Matthieu Parizy
|
Multi-objective QUBO Solver: Bi-objective Quadratic Assignment
|
The Genetic and Evolutionary Computation Conference 2022 (GECCO22)
| null |
10.1145/3512290.3528698
| null |
cs.AI cs.DM physics.comp-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Quantum and quantum-inspired optimisation algorithms are designed to solve
problems represented in binary, quadratic and unconstrained form. Combinatorial
optimisation problems are therefore often formulated as Quadratic Unconstrained
Binary Optimisation Problems (QUBO) to solve them with these algorithms.
Moreover, these QUBO solvers are often implemented using specialised hardware
to achieve enormous speedups, e.g. Fujitsu's Digital Annealer (DA) and D-Wave's
Quantum Annealer. However, these are single-objective solvers, while many
real-world problems feature multiple conflicting objectives. Thus, a common
practice when using these QUBO solvers is to scalarise such multi-objective
problems into a sequence of single-objective problems. Due to design trade-offs
of these solvers, formulating each scalarisation may require more time than
finding a local optimum. We present the first attempt to extend the algorithm
supporting a commercial QUBO solver as a multi-objective solver that is not
based on scalarisation. The proposed multi-objective DA algorithm is validated
on the bi-objective Quadratic Assignment Problem. We observe that algorithm
performance significantly depends on the archiving strategy adopted, and that
combining DA with non-scalarisation methods to optimise multiple objectives
outperforms the current scalarised version of the DA in terms of final solution
quality.
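For context, the sketch below shows the scalarisation practice the paper contrasts itself with: each weight produces one single-objective QUBO solved separately, and the resulting solutions are filtered into a nondominated archive. The greedy bit-flip search is a cheap stand-in for a hardware annealer, and this simple archiving rule is only one of the strategies such a solver might adopt.

```python
import numpy as np

rng = np.random.default_rng(0)

def qubo_value(Q, x):
    """Evaluate the QUBO objective x^T Q x for a binary vector x."""
    return float(x @ Q @ x)

def local_search(Q, x, iters=200):
    """Greedy bit-flip descent; a cheap stand-in for a hardware annealer."""
    for _ in range(iters):
        i = rng.integers(len(x))
        y = x.copy(); y[i] ^= 1
        if qubo_value(Q, y) < qubo_value(Q, x):
            x = y
    return x

def nondominated(archive, f):
    """Archive update for minimisation: keep mutually nondominated points."""
    if any(np.all(g <= f) and np.any(g < f) for g in archive):
        return archive
    return [g for g in archive if not (np.all(f <= g) and np.any(f < g))] + [f]

n = 12
Q1, Q2 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
archive = []
for w in np.linspace(0, 1, 11):            # one solve per scalarisation
    Qw = w * Q1 + (1 - w) * Q2             # weighted-sum scalarisation
    x = local_search(Qw, rng.integers(0, 2, n))
    archive = nondominated(archive,
                           np.array([qubo_value(Q1, x), qubo_value(Q2, x)]))
print(len(archive), "nondominated solutions found")
```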
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 14:48:03 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Ayodele",
"Mayowa",
""
],
[
"Allmendinger",
"Richard",
""
],
[
"López-Ibáñez",
"Manuel",
""
],
[
"Parizy",
"Matthieu",
""
]
] |
new_dataset
| 0.984458 |
2205.13426
|
Charalampos Tsourakakis
|
Tianyi Chen and Charalampos E. Tsourakakis
|
AntiBenford Subgraphs: Unsupervised Anomaly Detection in Financial
Networks
|
Accepted at KDD'22
| null | null | null |
cs.SI cs.CE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Benford's law describes the distribution of the first digit of numbers
appearing in a wide variety of numerical data, including tax records and
election outcomes, and has been used to raise "red flags" about potential
anomalies in the data, such as tax evasion. In this work, we ask the
following novel question: given a large transaction or financial graph, how
do we find a set of nodes that perform many transactions among each other
and that also deviate significantly from Benford's law?
We propose the AntiBenford subgraph framework, which is founded on
well-established statistical principles. Furthermore, we design an efficient
algorithm that finds AntiBenford subgraphs in near-linear time on real data.
We evaluate our framework on both real and synthetic data against a variety
of competitors. We show empirically that our proposed framework enables the
detection of anomalous subgraphs in cryptocurrency transaction networks that
go undetected by state-of-the-art graph-based anomaly detection methods. Our
empirical findings show that our AntiBenford framework is able to mine
anomalous subgraphs and provide novel insights into financial transaction
data. The code and the datasets are available at
\url{https://github.com/tsourakakis-lab/antibenford-subgraphs}.
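A minimal sketch of the underlying digit test, assuming a chi-squared-style deviation score: it compares the first-digit histogram of a set of transaction amounts against Benford's distribution. The paper's framework applies a statistically principled version of such a score to dense subgraphs rather than to a flat list of values.

```python
import math
from collections import Counter

BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """Leading significant digit of a number, or None for zero."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0]) if s and s[0].isdigit() else None

def benford_chi2(values):
    """Chi-squared-style deviation of the first-digit histogram from
    Benford's distribution (larger = more anomalous)."""
    digits = [d for d in map(first_digit, values) if d is not None]
    n = len(digits)
    counts = Counter(digits)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in BENFORD.items())

txn_amounts = [120, 33, 1750, 910, 27, 18, 305, 41, 96, 220, 13, 150]
print(f"deviation from Benford: {benford_chi2(txn_amounts):.2f}")
```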
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 15:30:40 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Chen",
"Tianyi",
""
],
[
"Tsourakakis",
"Charalampos E.",
""
]
] |
new_dataset
| 0.998936 |
2205.13440
|
Robert Liz\'ee
|
Robert Liz\'ee
|
The Neuro-Symbolic Brain
|
32 pages, 11 figures
| null | null | null |
cs.NE cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Neural networks promote a distributed representation with no clear place for
symbols. Despite this, we propose that symbols are manufactured simply by
training a sparse random noise as a self-sustaining attractor in a feedback
spiking neural network. This way, we can generate many of what we shall call
prime attractors, and the networks that support them are like registers holding
a symbolic value, and we call them registers. Like symbols, prime attractors
are atomic and devoid of any internal structure. Moreover, the winner-take-all
mechanism naturally implemented by spiking neurons enables registers to recover
a prime attractor within a noisy signal. Using this faculty, when considering
two connected registers, an input one and an output one, it is possible to bind
in one shot using a Hebbian rule the attractor active on the output to the
attractor active on the input. Thus, whenever an attractor is active on the
input, it induces its bound attractor on the output; even though the signal
gets blurrier with more bindings, the winner-take-all filtering faculty can
recover the bound prime attractor. However, the capacity is still limited. It
is also possible to unbind in one shot, restoring the capacity taken by that
binding. This mechanism serves as a basis for working memory, turning prime
attractors into variables. Also, we use a random second-order network to
amalgamate the prime attractors held by two registers to bind the prime
attractor held by a third register to them in one shot, de facto implementing a
hash table. Furthermore, we introduce the register switch box composed of
registers to move the content of one register to another. Then, we use spiking
neurons to build a toy symbolic computer based on the above. The techniques
used suggest ways to design extrapolating, reusable, sample-efficient deep
learning networks at the cost of structural priors.
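A toy sketch of the one-shot Hebbian binding and winner-take-all cleanup described above, with sparse random binary vectors standing in for trained prime attractors. The sizes and the dot-product cleanup rule are illustrative assumptions, not the paper's spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 2000, 40        # neurons per register, active units per attractor

def prime_attractor():
    """Sparse random binary pattern standing in for a trained attractor."""
    v = np.zeros(N)
    v[rng.choice(N, K, replace=False)] = 1.0
    return v

def wta_cleanup(signal, attractors):
    """Winner-take-all recovery: snap a noisy signal onto the known
    attractor it overlaps most."""
    return max(attractors, key=lambda a: float(a @ signal))

a_in, a_out = prime_attractor(), prime_attractor()
W = np.outer(a_out, a_in)                # one-shot Hebbian binding
# later: W -= np.outer(a_out, a_in) would unbind in one shot

candidates = [a_out, prime_attractor(), prime_attractor()]
recovered = wta_cleanup(W @ a_in, candidates)
print("bound attractor recovered:", np.array_equal(recovered, a_out))
```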
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 00:39:19 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Lizée",
"Robert",
""
]
] |
new_dataset
| 0.992208 |
2205.13457
|
Manish Shetty Molahalli
|
Manish Shetty, Chetan Bansal, Sai Pramod Upadhyayula, Arjun
Radhakrishna, Anurag Gupta
|
AutoTSG: Learning and Synthesis for Incident Troubleshooting
| null | null | null | null |
cs.SE cs.AI cs.DC cs.LG cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Incident management is a key aspect of operating large-scale cloud services.
To aid with faster and efficient resolution of incidents, engineering teams
document frequent troubleshooting steps in the form of Troubleshooting Guides
(TSGs), to be used by on-call engineers (OCEs). However, TSGs are siloed,
unstructured, and often incomplete, requiring developers to manually understand
and execute necessary steps. This results in a plethora of issues such as
on-call fatigue, reduced productivity, and human errors. In this work, we
conduct a large-scale empirical study of over 4,000 TSGs mapped to thousands of
incidents and find that TSGs are widely used and help significantly reduce
mitigation efforts. We then analyze feedback on TSGs provided by 400+ OCEs and
propose a taxonomy of issues that highlights significant gaps in TSG quality.
To alleviate these gaps, we investigate the automation of TSGs and propose
AutoTSG -- a novel framework for automation of TSGs to executable workflows by
combining machine learning and program synthesis. Our evaluation of AutoTSG on
50 TSGs shows the effectiveness in both identifying TSG statements (accuracy
0.89) and parsing them for execution (precision 0.94 and recall 0.91). Lastly,
we survey ten Microsoft engineers and show the importance of TSG automation and
the usefulness of AutoTSG.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 16:05:11 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Shetty",
"Manish",
""
],
[
"Bansal",
"Chetan",
""
],
[
"Upadhyayula",
"Sai Pramod",
""
],
[
"Radhakrishna",
"Arjun",
""
],
[
"Gupta",
"Anurag",
""
]
] |
new_dataset
| 0.97184 |
2205.13485
|
Mritunjay Musale
|
Mritunjay Musale, Vaibhav Vasani
|
Benchmarking of Deep Learning models on 2D Laminar Flow behind Cylinder
|
5 figures, 8 pages
| null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
The rapidly advancing field of Fluid Mechanics has recently employed Deep
Learning to solve various problems within it. In the same spirit, we perform
Direct Numerical Simulation (DNS), one of the tasks in Computational Fluid
Dynamics, using three fundamental Deep Learning architectures that have each
been used to solve various high-dimensional problems. We train these three
models in an autoencoder manner; to this end, the dataset is treated as
sequential frames given to the model as input. We observe that the recently
introduced Transformer architecture significantly outperforms its
counterparts on the selected dataset. Furthermore, we conclude that using
Transformers for DNS in the field of CFD is an interesting research area
worth exploring.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 16:49:09 GMT"
}
] | 2022-05-27T00:00:00 |
[
[
"Musale",
"Mritunjay",
""
],
[
"Vasani",
"Vaibhav",
""
]
] |
new_dataset
| 0.995647 |
1905.09226
|
Boyuan Ma
|
Boyuan Ma, Chuni Liu, Xiaojuan Ban, Hao Wang, Weihua Xue, Haiyou Huang
|
WPU-Net: Boundary Learning by Using Weighted Propagation in Convolution
Network
|
technical report
|
Journal of Computational Science, 2022
|
10.1016/j.jocs.2022.101709
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning has driven great progress in natural and biological image
processing. However, in materials science and engineering, material
microscopic images often contain flaws and indistinct regions, induced by
complex sample preparation or even by the material itself, which hinder the
detection of target objects. In this work, we propose WPU-Net, which
redesigns the architecture and weighted loss of U-Net, forcing the network
to integrate information from adjacent slices and to pay more attention to
topology in the boundary detection task. WPU-Net is then applied to a
typical material example, i.e., grain boundary detection in polycrystalline
material. Experiments demonstrate that the proposed method achieves
promising performance and outperforms state-of-the-art methods. Besides, we
propose a new method for object tracking between adjacent slices, which can
effectively reconstruct the 3D structure of the whole material. Finally, we
present a material microscopic image dataset with the goal of advancing the
state of the art in image processing for materials science.
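As a generic illustration of the weighted-loss idea (not the paper's exact weighted propagation loss), the sketch below up-weights boundary pixels in a binary cross-entropy so that mistakes on thin boundary structures dominate the loss.

```python
import numpy as np

def weighted_bce(pred, target, weights):
    """Per-pixel weighted binary cross-entropy: boundary pixels carry a
    larger weight so mistakes on thin structures dominate the loss."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    loss = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((weights * loss).sum() / weights.sum())

target = np.zeros((64, 64))
target[31:33, :] = 1.0                        # a thin grain boundary
weights = np.where(target > 0, 10.0, 1.0)     # emphasise the boundary
pred = np.full((64, 64), 0.1)
pred[31:33, :] = 0.8
print(f"weighted BCE: {weighted_bce(pred, target, weights):.4f}")
```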
|
[
{
"version": "v1",
"created": "Wed, 22 May 2019 16:23:23 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Aug 2019 15:52:09 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Ma",
"Boyuan",
""
],
[
"Liu",
"Chuni",
""
],
[
"Ban",
"Xiaojuan",
""
],
[
"Wang",
"Hao",
""
],
[
"Xue",
"Weihua",
""
],
[
"Huang",
"Haiyou",
""
]
] |
new_dataset
| 0.966658 |
2002.10371
|
George Alexandropoulos
|
George C. Alexandropoulos and Evangelos Vlachos
|
A Hardware Architecture for Reconfigurable Intelligent Surfaces with
Minimal Active Elements for Explicit Channel Estimation
|
5 pages, 2 figures, invited/accepted to IEEE ICASSP 2020
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intelligent surfaces comprising cost-effective, nearly passive, and
reconfigurable unit elements are lately gaining increasing interest due to
their potential in enabling fully programmable wireless environments. They are
envisioned to offer environmental intelligence for diverse communication
objectives, when coated on various objects of the deployment area of interest.
To achieve this overarching goal, the channels where the Reconfigurable
Intelligent Surfaces (RISs) are involved need to be in principle estimated.
However, this is a challenging task with the currently available hardware RIS
architectures requiring lengthy training periods among the network nodes
utilizing RIS-assisted wireless communication. In this paper, we present a
novel RIS architecture comprising any number of passive reflecting elements,
a simple controller for their adjustable configuration, and a single Radio
Frequency (RF) chain for baseband measurements. Capitalizing on this
architecture and assuming sparse wireless channels in the beamspace domain, we
present an alternating optimization approach for explicit estimation of the
channel gains at the RIS elements attached to the single RF chain.
Representative simulation results demonstrate the channel estimation accuracy
and achievable end-to-end performance for various training lengths and numbers
of reflecting unit elements.
|
[
{
"version": "v1",
"created": "Mon, 24 Feb 2020 16:55:59 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2022 15:55:12 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Alexandropoulos",
"George C.",
""
],
[
"Vlachos",
"Evangelos",
""
]
] |
new_dataset
| 0.999377 |
2004.09679
|
Weizhe Hua
|
Weizhe Hua, Muhammad Umar, Zhiru Zhang, G. Edward Suh
|
MGX: Near-Zero Overhead Memory Protection for Data-Intensive
Accelerators
|
Accepted to the 49th International Symposium on Computer Architecture
(ISCA'22)
| null | null | null |
cs.CR cs.AR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces MGX, a near-zero overhead memory protection scheme for
hardware accelerators. MGX minimizes the performance overhead of off-chip
memory encryption and integrity verification by exploiting the
application-specific properties of the accelerator execution. In particular,
accelerators tend to explicitly manage data movement between on-chip and
off-chip memories. Therefore, the general memory access pattern of an
accelerator can largely be determined for a given application. Exploiting these
characteristics, MGX generates version numbers used in memory encryption and
integrity verification using on-chip accelerator state rather than storing them
in the off-chip memory; it also customizes the granularity of the memory
protection to match the granularity used by the accelerator. To demonstrate the
efficacy of MGX, we present an in-depth study of MGX for DNN and graph
algorithms. Experimental results show that on average, MGX lowers the
performance overhead of memory protection from 28% and 33% to 4% and 5% for DNN
and graph processing accelerators in a wide range of benchmarks, respectively.
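A schematic sketch of the core trick, under the assumption that the accelerator's deterministic loop state (layer, tile, pass) uniquely identifies each off-chip block: the version number is derived on the fly rather than stored off-chip. The actual derivation in MGX is accelerator-specific; hashing is used here only for illustration.

```python
import hashlib

def version_number(layer_id: int, tile_idx: int, pass_idx: int) -> bytes:
    """Derive a per-block version from deterministic on-chip loop state
    instead of storing counters off-chip. Hashing is for illustration;
    MGX's actual derivation is accelerator-specific."""
    state = f"{layer_id}:{tile_idx}:{pass_idx}".encode()
    return hashlib.sha256(state).digest()[:8]   # 64-bit version/nonce

# The version feeds the encryption nonce and integrity MAC for the block,
# so no off-chip version storage (and no extra memory traffic) is needed.
print(version_number(layer_id=3, tile_idx=17, pass_idx=0).hex())
```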
|
[
{
"version": "v1",
"created": "Mon, 20 Apr 2020 23:46:22 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2022 17:59:36 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Hua",
"Weizhe",
""
],
[
"Umar",
"Muhammad",
""
],
[
"Zhang",
"Zhiru",
""
],
[
"Suh",
"G. Edward",
""
]
] |
new_dataset
| 0.957295 |
2011.12954
|
Peng Jiang
|
Peng Jiang, Philip Osteen, Maggie Wigness, Srikanth Saripalli
|
RELLIS-3D Dataset: Data, Benchmarks and Analysis
|
7 pages, 7 figures
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic scene understanding is crucial for robust and safe autonomous
navigation, particularly so in off-road environments. Recent deep learning
advances for 3D semantic segmentation rely heavily on large sets of training
data; however, existing autonomy datasets either represent urban environments or
lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal
dataset collected in an off-road environment, which contains annotations for
13,556 LiDAR scans and 6,235 images. The data was collected on the Rellis
Campus of Texas A\&M University and presents challenges to existing algorithms
related to class imbalance and environmental topography. Additionally, we
evaluate the current state-of-the-art deep learning semantic segmentation
models on this dataset. Experimental results show that RELLIS-3D presents
challenges for algorithms designed for segmentation in urban environments. This
novel dataset provides the resources needed by researchers to continue to
develop more advanced algorithms and investigate new research directions to
enhance autonomous navigation in off-road environments. RELLIS-3D is available
at https://github.com/unmannedlab/RELLIS-3D
|
[
{
"version": "v1",
"created": "Tue, 17 Nov 2020 18:28:01 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Dec 2020 02:45:03 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Apr 2021 19:44:12 GMT"
},
{
"version": "v4",
"created": "Wed, 25 May 2022 15:11:10 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Jiang",
"Peng",
""
],
[
"Osteen",
"Philip",
""
],
[
"Wigness",
"Maggie",
""
],
[
"Saripalli",
"Srikanth",
""
]
] |
new_dataset
| 0.999673 |
2112.08609
|
Jing Yan
|
Hongyu Zhu, Yan Chen, Jing Yan, Jing Liu, Yu Hong, Ying Chen, Hua Wu,
Haifeng Wang
|
DuQM: A Chinese Dataset of Linguistically Perturbed Natural Questions
for Evaluating the Robustness of Question Matching Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we focus on the robustness evaluation of Chinese question
matching. Most previous work on analyzing robustness issues focuses on just
one or a few types of artificial adversarial examples. Instead, we argue
that a comprehensive evaluation of the linguistic capabilities of models on
natural texts is necessary. For this purpose, we create a Chinese dataset,
DuQM, which contains natural questions with linguistic perturbations to
evaluate the robustness of question matching models. DuQM contains 3
categories and 13 subcategories with 32 linguistic perturbations. Extensive
experiments demonstrate that DuQM has a better ability to distinguish
different models. Importantly, the detailed breakdown of evaluation by
linguistic phenomenon in DuQM helps us easily diagnose the strengths and
weaknesses of different models. Additionally, our experimental results show
that the effects observed on artificial adversarial examples do not carry
over to natural texts.
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 04:16:39 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2022 11:12:02 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Zhu",
"Hongyu",
""
],
[
"Chen",
"Yan",
""
],
[
"Yan",
"Jing",
""
],
[
"Liu",
"Jing",
""
],
[
"Hong",
"Yu",
""
],
[
"Chen",
"Ying",
""
],
[
"Wu",
"Hua",
""
],
[
"Wang",
"Haifeng",
""
]
] |
new_dataset
| 0.999644 |
2112.09924
|
Aleksandra Piktus
|
Aleksandra Piktus and Fabio Petroni and Vladimir Karpukhin and Dmytro
Okhonko and Samuel Broscheit and Gautier Izacard and Patrick Lewis and Barlas
O\u{g}uz and Edouard Grave and Wen-tau Yih and Sebastian Riedel
|
The Web Is Your Oyster - Knowledge-Intensive NLP against a Very Large
Web Corpus
| null | null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to address increasing demands of real-world applications, the
research for knowledge-intensive NLP (KI-NLP) should advance by capturing the
challenges of a truly open-domain environment: web-scale knowledge, lack of
structure, inconsistent quality and noise. To this end, we propose a new setup
for evaluating existing knowledge intensive tasks in which we generalize the
background corpus to a universal web snapshot. We investigate a slate of NLP
tasks that rely on knowledge, either factual or commonsense, and ask systems
to use a subset of CCNet - the Sphere corpus - as a knowledge source. In
contrast to Wikipedia, otherwise a common background corpus in KI-NLP, Sphere
is orders of magnitude larger and better reflects the full diversity of
knowledge on the web. Despite potential gaps in coverage, challenges of scale,
lack of structure and lower quality, we find that retrieval from Sphere enables
a state of the art system to match and even outperform Wikipedia-based models
on several tasks. We also observe that while a dense index can outperform a
sparse BM25 baseline on Wikipedia, on Sphere this is not yet possible. To
facilitate further research and minimise the community's reliance on
proprietary, black-box search engines, we share our indices, evaluation metrics
and infrastructure.
|
[
{
"version": "v1",
"created": "Sat, 18 Dec 2021 13:15:34 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 18:16:24 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Piktus",
"Aleksandra",
""
],
[
"Petroni",
"Fabio",
""
],
[
"Karpukhin",
"Vladimir",
""
],
[
"Okhonko",
"Dmytro",
""
],
[
"Broscheit",
"Samuel",
""
],
[
"Izacard",
"Gautier",
""
],
[
"Lewis",
"Patrick",
""
],
[
"Oğuz",
"Barlas",
""
],
[
"Grave",
"Edouard",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"Riedel",
"Sebastian",
""
]
] |
new_dataset
| 0.995144 |
2204.02491
|
Rafail Fridman
|
Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, Tali Dekel
|
Text2LIVE: Text-Driven Layered Image and Video Editing
|
Project page: https://text2live.github.io
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a method for zero-shot, text-driven appearance manipulation in
natural images and videos. Given an input image or video and a target text
prompt, our goal is to edit the appearance of existing objects (e.g., object's
texture) or augment the scene with visual effects (e.g., smoke, fire) in a
semantically meaningful manner. We train a generator using an internal dataset
of training examples, extracted from a single input (image or video and target
text prompt), while leveraging an external pre-trained CLIP model to establish
our losses. Rather than directly generating the edited output, our key idea is
to generate an edit layer (color+opacity) that is composited over the original
input. This allows us to constrain the generation process and maintain high
fidelity to the original input via novel text-driven losses that are applied
directly to the edit layer. Our method neither relies on a pre-trained
generator nor requires user-provided edit masks. We demonstrate localized,
semantic edits on high-resolution natural images and videos across a variety of
objects and scenes.
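The compositing step described above is plain alpha blending of the generated edit layer over the input; a minimal sketch follows (array shapes and the rectangular edit region are illustrative assumptions, and the generator producing the layer is not shown).

```python
import numpy as np

def composite(base, edit_rgb, edit_alpha):
    """Alpha-composite a generated edit layer (color + opacity) over the
    original image; losses can then constrain the edit layer directly."""
    a = edit_alpha[..., None]
    return a * edit_rgb + (1 - a) * base

base = np.random.rand(256, 256, 3)            # original input frame
edit_rgb = np.zeros_like(base)
edit_rgb[..., 0] = 1.0                        # e.g. a red texture edit
edit_alpha = np.zeros(base.shape[:2])
edit_alpha[64:192, 64:192] = 0.6              # edit only one region
out = composite(base, edit_rgb, edit_alpha)
print(out.shape, float(out.min()) >= 0.0, float(out.max()) <= 1.0)
```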
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 21:17:34 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2022 15:05:28 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Bar-Tal",
"Omer",
""
],
[
"Ofri-Amar",
"Dolev",
""
],
[
"Fridman",
"Rafail",
""
],
[
"Kasten",
"Yoni",
""
],
[
"Dekel",
"Tali",
""
]
] |
new_dataset
| 0.999775 |
2205.08938
|
Ines Messadi
|
Ines Messadi, Markus Horst Becker, Kai Bleeke, Leander Jehl, Sonia Ben
Mokhtar, R\"udiger Kapitza
|
SplitBFT: Improving Byzantine Fault Tolerance Safety Using Trusted
Compartments
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Byzantine fault-tolerant agreement (BFT) in a partially synchronous system
usually requires 3f + 1 nodes to tolerate f faulty replicas. Due to their high
throughput and finality property, BFT algorithms form the core of recent
permissioned blockchains. As a complex and resource-demanding infrastructure,
multiple cloud providers have started offering Blockchain-as-a-Service. This
eases the deployment of permissioned blockchains but places the cloud provider
in a central controlling position, thereby questioning blockchains' fault
tolerance and decentralization properties and their underlying BFT algorithm.
This paper presents SplitBFT, a new way to utilize trusted execution technology
(TEEs), such as Intel SGX, to harden the safety and confidentiality guarantees
of BFT systems, thereby strengthening trust in cloud-based deployments of
permissioned blockchains. Deviating from standard assumptions, SplitBFT
acknowledges that code protected by trusted execution may fail. We address this
by splitting and isolating the core logic of BFT protocols into multiple
compartments resulting in a more resilient architecture. We apply SplitBFT to
the traditional Practical Byzantine Fault Tolerance algorithm (PBFT) and
evaluate it using SGX. Our results show that SplitBFT adds only a reasonable
overhead compared to the non-compartmentalized variant.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 14:05:48 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 20:25:18 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Messadi",
"Ines",
""
],
[
"Becker",
"Markus Horst",
""
],
[
"Bleeke",
"Kai",
""
],
[
"Jehl",
"Leander",
""
],
[
"Mokhtar",
"Sonia Ben",
""
],
[
"Kapitza",
"Rüdiger",
""
]
] |
new_dataset
| 0.996282 |
2205.10963
|
Liwei Guo
|
Liwei Guo, Kaiyang Zhao, Yiying Zhang, Felix Xiaozhu Lin
|
Protecting File Activities via Deception for ARM TrustZone
|
Under submission
| null | null | null |
cs.CR cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
A TrustZone TEE often invokes an external filesystem. While file data can be
encrypted, the revealed file activities can leak secrets. To hide the file
activities from the filesystem and its OS, we propose Enigma, a deception-based
defense injecting sybil file activities as the cover of the actual file
activities.
Enigma contributes three new designs. (1) To make the deception credible, the
TEE generates sybil calls by replaying file calls from the TEE code under
protection. (2) To make sybil activities cheap, the TEE requests the OS to run
K filesystem images simultaneously. Concealing the disk, the TEE backs only one
image with the actual disk while backing other images by only storing their
metadata. (3) To protect filesystem image identities, the TEE shuffles the
images frequently, preventing the OS from observing any image for long.
Enigma works with unmodified filesystems shipped with Linux. On a low-cost Arm
SoC with EXT4 and F2FS, our system can concurrently run as many as 50
filesystem images with 1% of disk overhead per additional image. Compared to
common obfuscation for hiding addresses in a flat space, Enigma hides file
activities with richer semantics. Its cost is lower by one order of magnitude
while achieving the same level of probabilistic security guarantees.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 23:55:23 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 18:57:20 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Guo",
"Liwei",
""
],
[
"Zhao",
"Kaiyang",
""
],
[
"Zhang",
"Yiying",
""
],
[
"Lin",
"Felix Xiaozhu",
""
]
] |
new_dataset
| 0.995777 |
2205.11191
|
Yadian Zhao
|
Yadian Zhao and Zhenglin Yang and Chao Xu
|
NPU-BOLT: A Dataset for Bolt Object Detection in Natural Scene Images
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bolt joints are very common and important in engineering structures. Due to
extreme service environments and load factors, bolts often become loose or
even disengaged. Detecting loose or disengaged bolts in real time, or in a
timely manner, is an urgent need in practical engineering and is critical to
structural safety and service life. In recent years, many bolt loosening
detection methods using deep learning and machine learning techniques have
been proposed and are attracting more and more attention. However, most of
these studies use bolt images captured in the laboratory for deep learning
model training. The images are obtained under well-controlled light,
distance, and view-angle conditions. Also, the bolted structures are
well-designed experimental structures with brand-new bolts, and the bolts
are exposed without any shelter nearby. In practical engineering, such
well-controlled laboratory conditions are not easily realized, and real bolt
images often have blurred edges, oblique perspectives, partial occlusion,
and indistinguishable colors, which make models trained under laboratory
conditions lose accuracy or fail. Therefore, the aim of this study is to
develop a dataset named NPU-BOLT for bolt object detection in natural scene
images and to open it to researchers for public use and further development.
In its first version, the dataset contains 337 samples of bolt joint images,
mainly in natural environments, with image sizes ranging from 400*400 to
6000*4000 pixels, totaling approximately 1275 bolt targets. The bolt targets
are annotated into four categories: blur bolt, bolt head, bolt nut, and bolt
side. The dataset is tested with advanced object detection models including
YOLOv5, Faster R-CNN, and CenterNet, and its effectiveness is validated.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 10:51:33 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2022 15:08:51 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Zhao",
"Yadian",
""
],
[
"Yang",
"Zhenglin",
""
],
[
"Xu",
"Chao",
""
]
] |
new_dataset
| 0.99981 |
2205.12261
|
Nguyen Huu Phong
|
Nguyen Huu Phong, Bernardete Ribeiro
|
Action Recognition for American Sign Language
|
2 pages
|
RECPAD 2017
| null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this research, we present our findings on recognizing American Sign
Language from series of hand gestures. While most research in the literature
focuses only on static handshapes, our work targets dynamic hand gestures.
Since datasets of dynamic signs are scarce, we collect an initial dataset of
150 videos for 10 signs and an extension of 225 videos for 15 signs. We
apply transfer learning models in combination with deep neural networks and
background subtraction for videos in different temporal settings. Our
preliminary results show that we can achieve accuracies of $0.86$ and $0.71$
using DenseNet201 and LSTM with video sequences of 12 frames, respectively.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 23:53:19 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Phong",
"Nguyen Huu",
""
],
[
"Ribeiro",
"Bernardete",
""
]
] |
new_dataset
| 0.999763 |
2205.12301
|
Fan-Keng Sun
|
Fan-Keng Sun and Duane S. Boning
|
FreDo: Frequency Domain-based Long-Term Time Series Forecasting
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to forecast far into the future is highly beneficial to many
applications, including but not limited to climatology, energy consumption, and
logistics. However, due to noise or measurement error, it is questionable how
far into the future one can reasonably predict. In this paper, we first
mathematically show that due to error accumulation, sophisticated models might
not outperform baseline models for long-term forecasting. To demonstrate, we
show that a non-parametric baseline model based on periodicity can actually
achieve comparable performance to a state-of-the-art Transformer-based model on
various datasets. We further propose FreDo, a frequency domain-based neural
network model that is built on top of the baseline model to enhance its
performance and which greatly outperforms the state-of-the-art model. Finally,
we validate that the frequency domain is indeed better by comparing univariate
models trained in the frequency vs. the time domain.
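A minimal sketch of a periodicity baseline in the spirit described above (the paper's exact non-parametric baseline may differ): average each phase of the cycle over past periods and tile the resulting template forward as the forecast.

```python
import numpy as np

def periodic_baseline(history, period, horizon):
    """Average each phase of the cycle over past periods, then tile the
    resulting template forward as the forecast."""
    usable = (len(history) // period) * period
    template = history[-usable:].reshape(-1, period).mean(axis=0)
    reps = -(-horizon // period)               # ceiling division
    return np.tile(template, reps)[:horizon]

t = np.arange(1000)
series = np.sin(2 * np.pi * t / 24) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)
forecast = periodic_baseline(series, period=24, horizon=96)
print(forecast.shape)                          # (96,)
```

Note that taking the tail of the history keeps the template phase-aligned with the first forecast step, since the discarded prefix has length equal to the history length modulo the period.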
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 18:19:15 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Sun",
"Fan-Keng",
""
],
[
"Boning",
"Duane S.",
""
]
] |
new_dataset
| 0.966373 |
2205.12323
|
Juntao Yu
|
Silviu Paun and Juntao Yu and Nafise Sadat Moosavi and Massimo Poesio
|
Scoring Coreference Chains with Split-Antecedent Anaphors
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anaphoric reference is an aspect of language interpretation covering a
variety of types of interpretation beyond the simple case of identity reference
to entities introduced via nominal expressions covered by the traditional
coreference task in its most recent incarnation in ONTONOTES and similar
datasets. One of these cases that go beyond simple coreference is anaphoric
reference to entities that must be added to the discourse model via
accommodation, and in particular split-antecedent references to entities
constructed out of other entities, as in split-antecedent plurals and in some
cases of discourse deixis. Although this type of anaphoric reference is now
annotated in many datasets, systems interpreting such references cannot be
evaluated using the Reference Coreference Scorer of Pradhan et al. (2014). As part
of the work towards a new scorer for anaphoric reference able to evaluate all
aspects of anaphoric interpretation in the coverage of the Universal Anaphora
initiative, we propose in this paper a solution to the technical problem of
generalizing existing metrics for identity anaphora so that they can also be
used to score cases of split-antecedents. This is the first such proposal in
the literature on anaphora or coreference, and has been successfully used to
score both split-antecedent plural references and discourse deixis in the
recent CODI/CRAC anaphora resolution in dialogue shared tasks.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 19:07:36 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Paun",
"Silviu",
""
],
[
"Yu",
"Juntao",
""
],
[
"Moosavi",
"Nafise Sadat",
""
],
[
"Poesio",
"Massimo",
""
]
] |
new_dataset
| 0.973704 |
2205.12446
|
Ankur Bapna
|
Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod,
Siddharth Dalmia, Jason Riesa, Clara Rivera, Ankur Bapna
|
FLEURS: Few-shot Learning Evaluation of Universal Representations of
Speech
| null | null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce FLEURS, the Few-shot Learning Evaluation of Universal
Representations of Speech benchmark. FLEURS is an n-way parallel speech dataset
in 102 languages built on top of the machine translation FLoRes-101 benchmark,
with approximately 12 hours of speech supervision per language. FLEURS can be
used for a variety of speech tasks, including Automatic Speech Recognition
(ASR), Speech Language Identification (Speech LangID), Translation and
Retrieval. In this paper, we provide baselines for the tasks based on
multilingual pre-trained models like mSLAM. The goal of FLEURS is to enable
speech technology in more languages and catalyze research in low-resource
speech understanding.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 02:29:03 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Conneau",
"Alexis",
""
],
[
"Ma",
"Min",
""
],
[
"Khanuja",
"Simran",
""
],
[
"Zhang",
"Yu",
""
],
[
"Axelrod",
"Vera",
""
],
[
"Dalmia",
"Siddharth",
""
],
[
"Riesa",
"Jason",
""
],
[
"Rivera",
"Clara",
""
],
[
"Bapna",
"Ankur",
""
]
] |
new_dataset
| 0.997981 |
2205.12464
|
Yoones Rezaei
|
Yoones Rezaei, Stephen Lee
|
sat2pc: Estimating Point Cloud of Building Roofs from 2D Satellite
Images
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Three-dimensional (3D) urban models have gained interest because of their
applications in many use-cases such as urban planning and virtual reality.
However, generating these 3D representations requires LiDAR data, which are not
always readily available. Thus, the applicability of automated 3D model
generation algorithms is limited to a few locations. In this paper, we propose
sat2pc, a deep learning architecture that predicts the point cloud of a
building roof from a single 2D satellite image. Our architecture combines
Chamfer distance and EMD loss, resulting in better 2D to 3D performance. We
extensively evaluate our model and perform ablation studies on a building roof
dataset. Our results show that sat2pc was able to outperform existing baselines
by at least 18.6%. Further, we show that the predicted point cloud captures
more detail and geometric characteristics than other baselines.
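For reference, a brute-force sketch of the symmetric Chamfer distance used in the loss (EMD, the other component, requires an optimal-transport solver and is omitted here):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3):
    mean nearest-neighbour distance in both directions."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # (N, M)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(0)
pred_roof = rng.random((128, 3))     # predicted roof point cloud
gt_roof = rng.random((200, 3))       # LiDAR ground truth
print(f"Chamfer: {chamfer_distance(pred_roof, gt_roof):.4f}")
```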
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 03:24:40 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Rezaei",
"Yoones",
""
],
[
"Lee",
"Stephen",
""
]
] |
new_dataset
| 0.986059 |
2205.12484
|
Pedram Hosseini
|
Pedram Hosseini and Christopher R. Wolfe and Mona Diab and David A.
Broniatowski
|
GisPy: A Tool for Measuring Gist Inference Score in Text
|
Accepted to the 4th Workshop on Narrative Understanding @ NAACL 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Decision making theories such as Fuzzy-Trace Theory (FTT) suggest that
individuals tend to rely on gist, or bottom-line meaning, in the text when
making decisions. In this work, we delineate the process of developing GisPy,
an open-source tool in Python for measuring the Gist Inference Score (GIS) in
text. Evaluation of GisPy on documents in three benchmarks from the news and
scientific text domains demonstrates that scores generated by our tool
significantly distinguish low vs. high gist documents. Our tool is publicly
available to use at: https://github.com/phosseini/GisPy.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 04:17:09 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Hosseini",
"Pedram",
""
],
[
"Wolfe",
"Christopher R.",
""
],
[
"Diab",
"Mona",
""
],
[
"Broniatowski",
"David A.",
""
]
] |
new_dataset
| 0.959908 |
2205.12494
|
Alex Jones
|
Prayash Dutta (1), Albert Lee (2), Kang L. Wang (2), Alex K. Jones
(3), and Sanjukta Bhanja (1) ((1) University of South Florida, (2) UCLA, (3)
University of Pittsburgh)
|
A Multi-domain Magneto Tunnel Junction for Racetrack Nanowire Strips
|
This paper is under review for possible publication by the IEEE
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain-wall memory (DWM) has SRAM class access performance, low energy, high
endurance, high density, and CMOS compatibility. Recently, shift reliability
and processing-using-memory (PuM) proposals developed a need to count the
number of parallel or anti-parallel domains in a portion of the DWM nanowire.
In this paper we propose a multi-domain magneto-tunnel junction (MTJ) that can
detect different resistance levels as a function of a the number of parallel or
anti-parallel domains. Using detailed micromagnetic simulation with LLG, we
demonstrate the multi-domain MTJ, study the benefit of its macro-size on
resilience to process variation and present a macro-model for scaling the size
of the multi-domain MTJ. Our results indicate scalability to seven-domains
while maintaining a 16.3mV sense margin.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 05:08:43 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Dutta",
"Prayash",
""
],
[
"Lee",
"Albert",
""
],
[
"Wang",
"Kang L.",
""
],
[
"Jones",
"Alex K.",
""
],
[
"Bhanja",
"Sanjukta",
""
]
] |
new_dataset
| 0.999789 |
2205.12562
|
Eugenio Cuniato
|
Eugenio Cuniato, Nicholas Lawrance, Marco Tognon, Roland Siegwart
|
Power-based Safety Layer for Aerial Vehicles in Physical Interaction
using Lyapunov Exponents
| null |
IEEE Robotics and Automation Letters
|
10.1109/LRA.2022.3176959
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
As the performance of autonomous systems increases, safety concerns arise,
especially when operating in non-structured environments. To deal with these
concerns, this work presents a safety layer for mechanical systems that detects
and responds to unstable dynamics caused by external disturbances. The safety
layer is implemented independently and on top of already present nominal
controllers, like pose or wrench tracking, and limits power flow when the
system's response would lead to instability. This approach is based on the
computation of the Largest Lyapunov Exponent (LLE) of the system's error
dynamics, which represent a measure of the dynamics' divergence or convergence
rate. By actively computing this metric, divergent and possibly dangerous
system behaviors can be promptly detected. The LLE is then used in combination
with Control Barrier Functions (CBFs) to impose power limit constraints on a
jerk controlled system. The proposed architecture is experimentally validated
on an Omnidirectional Micro Aerial Vehicle (OMAV) both in free flight and
interaction tasks.
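A Benettin-style sketch of estimating the LLE from the divergence rate of a perturbed trajectory, shown here on toy linear error dynamics. The paper computes this quantity online for the real vehicle and feeds it into CBF power constraints, which is not shown.

```python
import numpy as np

def estimate_lle(x0, step, eps=1e-8, dt=0.01, n_steps=200, renorm=10):
    """Benettin-style estimate: evolve a reference and a perturbed
    trajectory, accumulate the log of their divergence, and pull the
    perturbation back to size eps every few steps."""
    x = np.asarray(x0, dtype=float)
    y = x + eps
    log_growth = 0.0
    for k in range(1, n_steps + 1):
        x, y = step(x, dt), step(y, dt)
        if k % renorm == 0:
            d = np.linalg.norm(y - x)
            log_growth += np.log(d / eps)
            y = x + (y - x) * (eps / d)      # renormalise the perturbation
    return log_growth / (n_steps * dt)

a = 1.5                                  # divergent error dynamics for a > 0
step = lambda x, dt: x + dt * a * x      # toy linear system, Euler step
print(f"estimated LLE: {estimate_lle([0.1], step):.2f} (true rate {a})")
```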
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 08:20:47 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Cuniato",
"Eugenio",
""
],
[
"Lawrance",
"Nicholas",
""
],
[
"Tognon",
"Marco",
""
],
[
"Siegwart",
"Roland",
""
]
] |
new_dataset
| 0.954375 |
2205.12570
|
Nora Kassner
|
Nora Kassner, Fabio Petroni, Mikhail Plekhanov, Sebastian Riedel,
Nicola Cancedda
|
EDIN: An End-to-end Benchmark and Pipeline for Unknown Entity Discovery
and Indexing
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Existing work on Entity Linking mostly assumes that the reference knowledge
base is complete, and therefore all mentions can be linked. In practice this is
hardly ever the case, as knowledge bases are incomplete and because novel
concepts arise constantly. This paper created the Unknown Entity Discovery and
Indexing (EDIN) benchmark where unknown entities, that is entities without a
description in the knowledge base and labeled mentions, have to be integrated
into an existing entity linking system. By contrasting EDIN with zero-shot
entity linking, we provide insight on the additional challenges it poses.
Building on dense-retrieval based entity linking, we introduce the end-to-end
EDIN pipeline that detects, clusters, and indexes mentions of unknown entities
in context. Experiments show that indexing a single embedding per entity
unifying the information of multiple mentions works better than indexing
mentions independently.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 08:29:39 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Kassner",
"Nora",
""
],
[
"Petroni",
"Fabio",
""
],
[
"Plekhanov",
"Mikhail",
""
],
[
"Riedel",
"Sebastian",
""
],
[
"Cancedda",
"Nicola",
""
]
] |
new_dataset
| 0.972765 |
2205.12579
|
Ross Greer
|
Ross Greer and Mohan Trivedi
|
From Pedestrian Detection to Crosswalk Estimation: An EM Algorithm and
Analysis on Diverse Datasets
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this work, we contribute an EM algorithm for estimation of corner points
and linear crossing segments for both marked and unmarked pedestrian crosswalks
using the detections of pedestrians from processed LiDAR point clouds or camera
images. We demonstrate the algorithmic performance by analyzing three
real-world datasets containing multiple periods of data collection for
four-corner and two-corner intersections with marked and unmarked crosswalks.
Additionally, we include a Python video tool to visualize the crossing
parameter estimation, pedestrian trajectories, and phase intervals in our
public source code.
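A hard-assignment sketch of the alternation behind such an EM procedure, assuming pedestrian detections are already projected to 2D points: assign each point to the nearest of k candidate crossing segments, then refit each segment by PCA. The paper's algorithm additionally estimates corner points and handles marked vs. unmarked crosswalks more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_segment(pts):
    """M-step: fit a segment to 2D points via PCA, return its endpoints."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    t = (pts - c) @ vt[0]
    return c + t.min() * vt[0], c + t.max() * vt[0]

def dist_to_segment(p, a, b):
    ab = b - a
    t = np.clip((p - a) @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def em_crossings(points, k=2, iters=20):
    segs = [fit_segment(points[rng.choice(len(points), 10, replace=False)])
            for _ in range(k)]
    for _ in range(iters):
        d = np.array([[dist_to_segment(p, *s) for s in segs] for p in points])
        z = d.argmin(axis=1)                       # E-step: nearest segment
        segs = [fit_segment(points[z == j]) if (z == j).sum() >= 2 else segs[j]
                for j in range(k)]                 # M-step: refit segments
    return segs

# Two noisy crossing directions, standing in for pedestrian detections.
pts = np.vstack([np.c_[np.linspace(0, 10, 50), 0.2 * rng.normal(size=50)],
                 np.c_[0.2 * rng.normal(size=50), np.linspace(0, 8, 50)]])
for a, b in em_crossings(pts):
    print("segment:", a.round(1), "->", b.round(1))
```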
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 08:40:38 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Greer",
"Ross",
""
],
[
"Trivedi",
"Mohan",
""
]
] |
new_dataset
| 0.999046 |
2205.12587
|
Yong Xu
|
Yong Xu, Zhihua Xia, Zichi Wang, Xinpeng Zhang, and Jian Weng
|
Deniable Steganography
| null | null | null | null |
cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Steganography conceals the secret message into the cover media, generating a
stego media which can be transmitted on public channels without drawing
suspicion. As its countermeasure, steganalysis mainly aims to detect whether
the secret message is hidden in a given media. Although the steganography
techniques are improving constantly, the sophisticated steganalysis can always
break a known steganographic method to some extent. With a stego media
discovered, the adversary could find out the sender or receiver and coerce them
to disclose the secret message, which we name as coercive attack in this paper.
Inspired by the idea of deniable encryption, we build up the concepts of
deniable steganography for the first time and discuss the feasible
constructions for it. As an example, we propose a receiver-deniable
steganographic scheme to deal with the receiver-side coercive attack using deep
neural networks (DNN). Specifically, besides the real secret message, a piece
of fake message is also embedded into the cover. On the receiver side, the real
message can be extracted with an extraction module; while once the receiver has
to surrender a piece of secret message under coercive attack, he can extract
the fake message to deceive the adversary with another extraction module.
Experiments demonstrate the scalability and sensitivity of the DNN-based
receiver-deniable steganographic scheme.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 09:00:30 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Xu",
"Yong",
""
],
[
"Xia",
"Zhihua",
""
],
[
"Wang",
"Zichi",
""
],
[
"Zhang",
"Xinpeng",
""
],
[
"Weng",
"Jian",
""
]
] |
new_dataset
| 0.991654 |
2205.12595
|
Milad Ramezani
|
Milad Ramezani, Kasra Khosoussi, Gavin Catt, Peyman Moghadam, Jason
Williams, Paulo Borges, Fred Pauling, Navinda Kottege
|
Wildcat: Online Continuous-Time 3D Lidar-Inertial SLAM
|
13 pages, 18 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Wildcat, a novel online 3D lidar-inertial SLAM system with
exceptional versatility and robustness. At its core, Wildcat combines a robust
real-time lidar-inertial odometry module, utilising a continuous-time
trajectory representation, with an efficient pose-graph optimisation module
that seamlessly supports both the single- and multi-agent settings. The
robustness of Wildcat was recently demonstrated in the DARPA Subterranean
Challenge where it outperformed other SLAM systems across various types of
sensing-degraded and perceptually challenging environments. In this paper, we
extensively evaluate Wildcat in a diverse set of new and publicly available
real-world datasets and showcase its superior robustness and versatility over
two existing state-of-the-art lidar-inertial SLAM systems.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 09:15:27 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Ramezani",
"Milad",
""
],
[
"Khosoussi",
"Kasra",
""
],
[
"Catt",
"Gavin",
""
],
[
"Moghadam",
"Peyman",
""
],
[
"Williams",
"Jason",
""
],
[
"Borges",
"Paulo",
""
],
[
"Pauling",
"Fred",
""
],
[
"Kottege",
"Navinda",
""
]
] |
new_dataset
| 0.992588 |
2205.12617
|
Liunian Harold Li
|
Jingnong Qu, Liunian Harold Li, Jieyu Zhao, Sunipa Dev, Kai-Wei Chang
|
DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally
Spreading Out Disinformation
| null | null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Disinformation has become a serious problem on social media. In particular,
given their short format, visual attraction, and humorous nature, memes have a
significant advantage in dissemination among online communities, making them an
effective vehicle for the spread of disinformation. We present DisinfoMeme to
help detect disinformation memes. The dataset contains memes mined from Reddit
covering three current topics: the COVID-19 pandemic, the Black Lives Matter
movement, and veganism/vegetarianism. The dataset poses multiple unique
challenges: limited data and label imbalance, reliance on external knowledge,
multimodal reasoning, layout dependency, and noise from OCR. We test multiple
widely-used unimodal and multimodal models on this dataset. The experiments
show that substantial room for improvement remains for current models.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 09:54:59 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Qu",
"Jingnong",
""
],
[
"Li",
"Liunian Harold",
""
],
[
"Zhao",
"Jieyu",
""
],
[
"Dev",
"Sunipa",
""
],
[
"Chang",
"Kai-Wei",
""
]
] |
new_dataset
| 0.999258 |
2205.12627
|
Henghui Ding
|
Xinke Li, Henghui Ding, Zekun Tong, Yuwei Wu, Yeow Meng Chee
|
Primitive3D: 3D Object Dataset Synthesis from Randomly Assembled
Primitives
|
CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerous advancements in deep learning can be attributed to the access to
large-scale and well-annotated datasets. However, such a dataset is
prohibitively expensive in 3D computer vision due to the substantial collection
cost. To alleviate this issue, we propose a cost-effective method for
automatically generating a large number of 3D objects with annotations. In
particular, we synthesize objects simply by assembling multiple random
primitives. These objects are thus auto-annotated with part labels originating
from primitives. This allows us to perform multi-task learning by combining the
supervised segmentation with unsupervised reconstruction. Considering the large
overhead of learning on the generated dataset, we further propose a dataset
distillation strategy to remove redundant samples regarding a target dataset.
We conduct extensive experiments for the downstream tasks of 3D object
classification. The results indicate that our dataset, together with multi-task
pretraining on its annotations, achieves the best performance compared to other
commonly used datasets. Further study suggests that our strategy can improve
the model performance by pretraining and fine-tuning scheme, especially for the
dataset with a small scale. In addition, pretraining with the proposed dataset
distillation method can save 86\% of the pretraining time with negligible
performance degradation. We expect that our attempt provides a new data-centric
perspective for training 3D deep models.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 10:07:07 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Li",
"Xinke",
""
],
[
"Ding",
"Henghui",
""
],
[
"Tong",
"Zekun",
""
],
[
"Wu",
"Yuwei",
""
],
[
"Chee",
"Yeow Meng",
""
]
] |
new_dataset
| 0.99969 |
2205.12633
|
Eduardo Perez-Pellitero
|
Eduardo P\'erez-Pellitero, Sibi Catley-Chandar, Richard Shaw, Ale\v{s}
Leonardis, Radu Timofte, Zexin Zhang, Cen Liu, Yunbo Peng, Yue Lin, Gaocheng
Yu, Jin Zhang, Zhe Ma, Hongbin Wang, Xiangyu Chen, Xintao Wang, Haiwei Wu,
Lin Liu, Chao Dong, Jiantao Zhou, Qingsen Yan, Song Zhang, Weiye Chen, Yuhang
Liu, Zhen Zhang, Yanning Zhang, Javen Qinfeng Shi, Dong Gong, Dan Zhu, Mengdi
Sun, Guannan Chen, Yang Hu, Haowei Li, Baozhu Zou, Zhen Liu, Wenjie Lin, Ting
Jiang, Chengzhi Jiang, Xinpeng Li, Mingyan Han, Haoqiang Fan, Jian Sun,
Shuaicheng Liu, Juan Mar\'in-Vega, Michael Sloth, Peter Schneider-Kamp,
Richard R\"ottger, Chunyang Li, Long Bao, Gang He, Ziyao Xu, Li Xu, Gen Zhan,
Ming Sun, Xing Wen, Junlin Li, Jinjing Li, Chenghua Li, Ruipeng Gang, Fangya
Li, Chenming Liu, Shuang Feng, Fei Lei, Rui Liu, Junxiang Ruan, Tianhong Dai,
Wei Li, Zhan Lu, Hengyan Liu, Peian Huang, Guangyu Ren, Yonglin Luo, Chang
Liu, Qiang Tu, Fangya Li, Ruipeng Gang, Chenghua Li, Jinjing Li, Sai Ma,
Chenming Liu, Yizhen Cao, Steven Tel, Barthelemy Heyrman, Dominique Ginhac,
Chul Lee, Gahyeon Kim, Seonghyun Park, An Gia Vien, Truong Thanh Nhat Mai,
Howoon Yoon, Tu Vo, Alexander Holston, Sheir Zaheer and Chan Y. Park
|
NTIRE 2022 Challenge on High Dynamic Range Imaging: Methods and Results
|
CVPR Workshops 2022. 15 pages, 21 figures, 2 tables
|
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) Workshops, 2022
| null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reviews the challenge on constrained high dynamic range (HDR)
imaging that was part of the New Trends in Image Restoration and Enhancement
(NTIRE) workshop, held in conjunction with CVPR 2022. This manuscript focuses
on the competition set-up, datasets, the proposed methods and their results.
The challenge aims at estimating an HDR image from multiple respective low
dynamic range (LDR) observations, which might suffer from under- or
over-exposed regions and different sources of noise. The challenge is composed
of two tracks with an emphasis on fidelity and complexity constraints: In Track
1, participants are asked to optimize objective fidelity scores while imposing
a low-complexity constraint (i.e. solutions can not exceed a given number of
operations). In Track 2, participants are asked to minimize the complexity of
their solutions while imposing a constraint on fidelity scores (i.e. solutions
are required to obtain a higher fidelity score than the prescribed baseline).
Both tracks use the same data and metrics: Fidelity is measured by means of
PSNR with respect to a ground-truth HDR image (computed both directly and with
a canonical tonemapping operation), while complexity metrics include the number
of Multiply-Accumulate (MAC) operations and runtime (in seconds).
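For concreteness, a sketch of the two fidelity scores, assuming the canonical tonemapping is the commonly used mu-law operator (the challenge's exact operator may differ):

```python
import numpy as np

MU = 5000.0

def tonemap_mu(h):
    """Mu-law tonemapping commonly used for HDR scoring (assumed operator)."""
    return np.log(1 + MU * h) / np.log(1 + MU)

def psnr(x, y, peak=1.0):
    mse = np.mean((x - y) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

gt = np.random.rand(64, 64, 3)                       # ground-truth HDR
pred = np.clip(gt + 0.01 * np.random.randn(64, 64, 3), 0, None)
print(f"PSNR-L:  {psnr(pred, gt):.2f} dB")           # computed directly
print(f"PSNR-mu: {psnr(tonemap_mu(pred), tonemap_mu(gt)):.2f} dB")
```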
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 10:20:06 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Pérez-Pellitero",
"Eduardo",
""
],
[
"Catley-Chandar",
"Sibi",
""
],
[
"Shaw",
"Richard",
""
],
[
"Leonardis",
"Aleš",
""
],
[
"Timofte",
"Radu",
""
],
[
"Zhang",
"Zexin",
""
],
[
"Liu",
"Cen",
""
],
[
"Peng",
"Yunbo",
""
],
[
"Lin",
"Yue",
""
],
[
"Yu",
"Gaocheng",
""
],
[
"Zhang",
"Jin",
""
],
[
"Ma",
"Zhe",
""
],
[
"Wang",
"Hongbin",
""
],
[
"Chen",
"Xiangyu",
""
],
[
"Wang",
"Xintao",
""
],
[
"Wu",
"Haiwei",
""
],
[
"Liu",
"Lin",
""
],
[
"Dong",
"Chao",
""
],
[
"Zhou",
"Jiantao",
""
],
[
"Yan",
"Qingsen",
""
],
[
"Zhang",
"Song",
""
],
[
"Chen",
"Weiye",
""
],
[
"Liu",
"Yuhang",
""
],
[
"Zhang",
"Zhen",
""
],
[
"Zhang",
"Yanning",
""
],
[
"Shi",
"Javen Qinfeng",
""
],
[
"Gong",
"Dong",
""
],
[
"Zhu",
"Dan",
""
],
[
"Sun",
"Mengdi",
""
],
[
"Chen",
"Guannan",
""
],
[
"Hu",
"Yang",
""
],
[
"Li",
"Haowei",
""
],
[
"Zou",
"Baozhu",
""
],
[
"Liu",
"Zhen",
""
],
[
"Lin",
"Wenjie",
""
],
[
"Jiang",
"Ting",
""
],
[
"Jiang",
"Chengzhi",
""
],
[
"Li",
"Xinpeng",
""
],
[
"Han",
"Mingyan",
""
],
[
"Fan",
"Haoqiang",
""
],
[
"Sun",
"Jian",
""
],
[
"Liu",
"Shuaicheng",
""
],
[
"Marín-Vega",
"Juan",
""
],
[
"Sloth",
"Michael",
""
],
[
"Schneider-Kamp",
"Peter",
""
],
[
"Röttger",
"Richard",
""
],
[
"Li",
"Chunyang",
""
],
[
"Bao",
"Long",
""
],
[
"He",
"Gang",
""
],
[
"Xu",
"Ziyao",
""
],
[
"Xu",
"Li",
""
],
[
"Zhan",
"Gen",
""
],
[
"Sun",
"Ming",
""
],
[
"Wen",
"Xing",
""
],
[
"Li",
"Junlin",
""
],
[
"Li",
"Jinjing",
""
],
[
"Li",
"Chenghua",
""
],
[
"Gang",
"Ruipeng",
""
],
[
"Li",
"Fangya",
""
],
[
"Liu",
"Chenming",
""
],
[
"Feng",
"Shuang",
""
],
[
"Lei",
"Fei",
""
],
[
"Liu",
"Rui",
""
],
[
"Ruan",
"Junxiang",
""
],
[
"Dai",
"Tianhong",
""
],
[
"Li",
"Wei",
""
],
[
"Lu",
"Zhan",
""
],
[
"Liu",
"Hengyan",
""
],
[
"Huang",
"Peian",
""
],
[
"Ren",
"Guangyu",
""
],
[
"Luo",
"Yonglin",
""
],
[
"Liu",
"Chang",
""
],
[
"Tu",
"Qiang",
""
],
[
"Li",
"Fangya",
""
],
[
"Gang",
"Ruipeng",
""
],
[
"Li",
"Chenghua",
""
],
[
"Li",
"Jinjing",
""
],
[
"Ma",
"Sai",
""
],
[
"Liu",
"Chenming",
""
],
[
"Cao",
"Yizhen",
""
],
[
"Tel",
"Steven",
""
],
[
"Heyrman",
"Barthelemy",
""
],
[
"Ginhac",
"Dominique",
""
],
[
"Lee",
"Chul",
""
],
[
"Kim",
"Gahyeon",
""
],
[
"Park",
"Seonghyun",
""
],
[
"Vien",
"An Gia",
""
],
[
"Mai",
"Truong Thanh Nhat",
""
],
[
"Yoon",
"Howoon",
""
],
[
"Vo",
"Tu",
""
],
[
"Holston",
"Alexander",
""
],
[
"Zaheer",
"Sheir",
""
],
[
"Park",
"Chan Y.",
""
]
] |
new_dataset
| 0.998749 |
2205.12682
|
Haoyu Dong
|
Fan Zhou, Mengkang Hu, Haoyu Dong, Zhoujun Cheng, Shi Han, Dongmei
Zhang
|
TaCube: Pre-computing Data Cubes for Answering Numerical-Reasoning
Questions over Tabular Data
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing auto-regressive pre-trained language models (PLMs) like T5 and BART,
have been well applied to table question answering by UNIFIEDSKG and TAPEX,
respectively, and demonstrated state-of-the-art results on multiple benchmarks.
However, auto-regressive PLMs are challenged by recent emerging numerical
reasoning datasets, such as TAT-QA, due to the error-prone implicit
calculation. In this paper, we present TaCube, to pre-compute
aggregation/arithmetic results for the table in advance, so that they are handy
and readily available for PLMs to answer numerical reasoning questions. TaCube
systematically and comprehensively covers a collection of computational
operations over table segments. By simply concatenating TaCube to the input
sequence of PLMs, it shows significant experimental effectiveness. TaCube
promotes the F1 score from 49.6% to 66.2% on TAT-QA and achieves new
state-of-the-art results on WikiTQ (59.6% denotation accuracy). TaCube's
improvements on numerical reasoning cases are even more notable: on TAT-QA,
TaCube promotes the exact match accuracy of BART-large by 39.6% on sum, 52.5%
on average, 36.6% on subtraction, and 22.2% on division. We believe that
TaCube is a general and portable pre-computation solution that can be
potentially integrated into various numerical reasoning frameworks.
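An illustrative sketch of the pre-computation idea: aggregate contiguous column segments ahead of time and serialise the results as text that can be concatenated to the PLM input. The operation set, span limit, marker token, and serialisation format here are assumptions, not TaCube's exact design.

```python
import itertools

def precompute_cube(column, max_span=3):
    """Pre-compute aggregates over contiguous column segments and serialise
    them as text a PLM can copy from (operations and format are assumed)."""
    facts = []
    for i, j in itertools.combinations(range(len(column) + 1), 2):
        if j - i > max_span:
            continue
        seg = column[i:j]
        facts.append(f"sum(rows {i}-{j - 1})={sum(seg)}")
        facts.append(f"avg(rows {i}-{j - 1})={sum(seg) / len(seg):.2f}")
    return " ; ".join(facts)

revenue = [120, 95, 180, 60]
model_input = ("question: total revenue of rows 0-2? table: ... "
               "[TACUBE] " + precompute_cube(revenue))
print(model_input[:140])
```

Because the aggregates appear verbatim in the input, the model can copy the correct number instead of performing error-prone implicit arithmetic.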
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 11:44:11 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Zhou",
"Fan",
""
],
[
"Hu",
"Mengkang",
""
],
[
"Dong",
"Haoyu",
""
],
[
"Cheng",
"Zhoujun",
""
],
[
"Han",
"Shi",
""
],
[
"Zhang",
"Dongmei",
""
]
] |
new_dataset
| 0.964656 |
2205.12698
|
Jo\~ao Sedoc
|
Damilola Omitaomu, Shabnam Tafreshi, Tingting Liu, Sven Buechel, Chris
Callison-Burch, Johannes Eichstaedt, Lyle Ungar, Jo\~ao Sedoc
|
Empathic Conversations: A Multi-level Dataset of Contextualized
Conversations
|
21 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Empathy is a cognitive and emotional reaction to an observed situation of
others. Empathy has recently attracted interest because it has numerous
applications in psychology and AI, but it is unclear how different forms of
empathy (e.g., self-report vs counterpart other-report, concern vs. distress)
interact with other affective phenomena or demographics like gender and age. To
better understand this, we created the {\it Empathic Conversations} dataset of
annotated negative, empathy-eliciting dialogues in which pairs of participants
converse about news articles. People differ in their perception of the empathy
of others. These differences are associated with certain characteristics such
as personality and demographics. Hence, we collected detailed characterization
of the participants' traits, their self-reported empathetic response to news
articles, their conversational partner other-report, and turn-by-turn
third-party assessments of the level of self-disclosure, emotion, and empathy
expressed. This dataset is the first to present empathy in multiple forms along
with personal distress, emotion, personality characteristics, and person-level
demographic information. We present baseline models for predicting some of
these features from conversations.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 11:56:29 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Omitaomu",
"Damilola",
""
],
[
"Tafreshi",
"Shabnam",
""
],
[
"Liu",
"Tingting",
""
],
[
"Buechel",
"Sven",
""
],
[
"Callison-Burch",
"Chris",
""
],
[
"Eichstaedt",
"Johannes",
""
],
[
"Ungar",
"Lyle",
""
],
[
"Sedoc",
"João",
""
]
] |
new_dataset
| 0.999664 |
2205.12713
|
Hao Wang
|
Hao Wang, Wenjie Qu, Gilad Katz, Wenyu Zhu, Zeyu Gao, Han Qiu, Jianwei
Zhuge, Chao Zhang
|
jTrans: Jump-Aware Transformer for Binary Code Similarity
|
In Proceedings of the 31st ACM SIGSOFT International Symposium on
Software Testing and Analysis (ISSTA) 2022
| null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Binary code similarity detection (BCSD) has important applications in various
fields such as vulnerability detection, software component analysis, and
reverse engineering. Recent studies have shown that deep neural networks (DNNs)
can comprehend instructions or control-flow graphs (CFG) of binary code and
support BCSD. In this study, we propose a novel Transformer-based approach,
namely jTrans, to learn representations of binary code. It is the first
solution that embeds control flow information of binary code into
Transformer-based language models, by using a novel jump-aware representation
of the analyzed binaries and a newly-designed pre-training task. Additionally,
we release to the community a newly-created large dataset of binaries,
BinaryCorp, which is the most diverse to date. Evaluation results show that
jTrans outperforms state-of-the-art (SOTA) approaches on this more challenging
dataset by 30.5% (i.e., from 32.0% to 62.5%). In a real-world task of known
vulnerability searching, jTrans achieves a recall that is 2X higher than
existing SOTA baselines.
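As a rough sketch of what a jump-aware input representation can look like
(the instruction format and token names here are hypothetical; the actual
jTrans design, which ties jump tokens to positional embeddings through a
dedicated pre-training task, is described in the paper):

```python
# Hedged sketch: rewrite jump operands as position tokens so that a
# Transformer can relate a jump to the instruction it targets.
def tokenize_with_jumps(instructions):
    # instructions: list of (address, mnemonic, operand) tuples (assumed format)
    addr_to_pos = {addr: i for i, (addr, _, _) in enumerate(instructions)}
    tokens = []
    for addr, mnemonic, operand in instructions:
        tokens.append(mnemonic)
        if mnemonic.startswith("j") and operand in addr_to_pos:
            tokens.append(f"JUMP_{addr_to_pos[operand]}")  # jump-aware token
        elif operand:
            tokens.append(operand)
    return tokens

print(tokenize_with_jumps([("0x0", "mov", "eax"), ("0x1", "jz", "0x0")]))
# ['mov', 'eax', 'jz', 'JUMP_0']
```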
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 12:28:31 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Wang",
"Hao",
""
],
[
"Qu",
"Wenjie",
""
],
[
"Katz",
"Gilad",
""
],
[
"Zhu",
"Wenyu",
""
],
[
"Gao",
"Zeyu",
""
],
[
"Qiu",
"Han",
""
],
[
"Zhuge",
"Jianwei",
""
],
[
"Zhang",
"Chao",
""
]
] |
new_dataset
| 0.995556 |
2205.12737
|
Elias Rohrer
|
Niklas G\"ogge, Elias Rohrer, Florian Tschorsch
|
On the Routing Convergence Delay in the Lightning Network
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Nodes in the Lightning Network synchronise routing information through a
gossip protocol that makes use of a staggered broadcast mechanism. In this
work, we show that the convergence delay in the network is larger than what
would be expected from the protocol's specification, and that payment-attempt
failures caused by the delay become more frequent as the delay grows. To
this end, we measure the convergence delay incurred in the network and analyse
what its primary causes are. Moreover, we further investigate and confirm our
findings through a time-discrete simulation of the Lightning Network gossip
protocol. We explore the use of alternative gossip protocols as well as
parameter variations of the current protocol and evaluate them by the resulting
bandwidth usage and convergence delay. Our research shows that there are
multiple ways of lowering the convergence delay, ranging from simple parameter
changes to overhauling the entire protocol.
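The effect of staggering can be reproduced in a toy time-discrete model; the
sketch below is a deliberate caricature (random topology, a single update,
batching every `stagger` ticks), not the actual Lightning gossip protocol:

```python
import random

def simulate(n_nodes=50, degree=4, stagger=3, ticks=200, seed=0):
    rng = random.Random(seed)
    peers = {i: rng.sample([j for j in range(n_nodes) if j != i], degree)
             for i in range(n_nodes)}
    informed = {0}   # node 0 originates one channel update
    pending = {0}    # nodes holding the update but not yet flushed
    for t in range(1, ticks + 1):
        if t % stagger == 0:  # staggered broadcast: flush buffered gossip in batches
            newly = set()
            for node in pending:
                for p in peers[node]:
                    if p not in informed:
                        newly.add(p)
            informed |= newly
            pending = newly
        if len(informed) == n_nodes:
            return t  # convergence delay in ticks
    return None       # did not converge within the horizon

print(simulate(stagger=1), simulate(stagger=6))  # larger stagger, slower convergence
```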
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 12:42:57 GMT"
}
] | 2022-05-26T00:00:00 |
[
[
"Gögge",
"Niklas",
""
],
[
"Rohrer",
"Elias",
""
],
[
"Tschorsch",
"Florian",
""
]
] |
new_dataset
| 0.991545 |
1908.01887
|
Yusuke Urakami
|
Yusuke Urakami, Alec Hodgkinson, Casey Carlin, Randall Leu, Luca
Rigazio, Pieter Abbeel
|
DoorGym: A Scalable Door Opening Environment And Baseline Agent
|
Accepted to NeurIPS2019 Deep Reinforcement Learning Workshop. Full
version
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the door-opening task to be practical, a policy ought to be robust to a
wide distribution of door types and environment settings. Reinforcement
Learning (RL) with Domain Randomization (DR) is a promising technique for
enforcing policy generalization; however, only a few accessible training
environments are inherently designed to train agents under domain
randomization. We introduce DoorGym, an open-source door-opening simulation
framework designed to utilize domain randomization to train a stable policy.
We intend for our environment to lie at the intersection of domain transfer,
practical tasks, and realism. We also provide baseline Proximal Policy
Optimization and Soft Actor-Critic implementations, which achieve success
rates ranging from 0% to 95% for opening various types of doors in this
environment. Moreover, a real-world transfer experiment shows that the trained
policy is able to work in the real world. Environment kit available
here: https://github.com/PSVL/DoorGym/
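As an illustration of the per-episode domain randomization such an environment
performs (the parameter names and ranges below are invented for the example,
not DoorGym's actual configuration):

```python
import random

def sample_door_params(rng):
    # Randomize physical door properties before each episode.
    return {
        "knob_height": rng.uniform(0.8, 1.2),    # metres
        "door_mass": rng.uniform(5.0, 40.0),     # kg
        "hinge_friction": rng.uniform(0.0, 2.0),
        "handle_type": rng.choice(["lever", "round", "pull"]),
    }

rng = random.Random(42)
for episode in range(3):
    params = sample_door_params(rng)
    # env.reset(**params)  # a DR-enabled env would rebuild the scene here
    print(episode, params)
```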
|
[
{
"version": "v1",
"created": "Mon, 5 Aug 2019 22:20:32 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Aug 2019 17:21:36 GMT"
},
{
"version": "v3",
"created": "Wed, 13 May 2020 07:56:55 GMT"
},
{
"version": "v4",
"created": "Tue, 24 May 2022 07:15:00 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Urakami",
"Yusuke",
""
],
[
"Hodgkinson",
"Alec",
""
],
[
"Carlin",
"Casey",
""
],
[
"Leu",
"Randall",
""
],
[
"Rigazio",
"Luca",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
new_dataset
| 0.994287 |
2108.08716
|
Vitaly Skachek
|
Irina E. Bocharova, Boris D. Kudryashov, Evgenii P. Ovsyannikov, and
Vitaly Skachek
|
NB QC-LDPC Coded QAM Signals with Optimized Mapping: Bounds and
Simulation Results
|
arXiv admin note: text overlap with arXiv:2006.12147
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of the paper is to study specific properties of nonbinary
low-density parity-check (NB LDPC) codes when used in coded modulation systems.
The paper is focused on the practically important NB LDPC codes over extensions
of the Galois field GF$(2^m)$ with $m \le 6$ used with QAM signaling.
The performance of NB QC-LDPC coded transmission strongly depends on the
mapping of nonbinary symbols to signal constellation points. We obtain a random coding
bound on the maximum-likelihood decoding error probability for an ensemble of
random irregular NB LDPC codes used with QAM signaling for specific
symbol-to-signal point mappings. This bound is based on the ensemble average
Euclidean distance spectra derived for these mappings. The simulation results
for the belief-propagation decoding in the coded modulation schemes with the NB
quasi-cyclic (QC)-LDPC codes under different mappings are given. Comparisons
with the optimized binary QC-LDPC codes in the WiFi and 5G standards, as well
as with the new bound, are performed.
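As a concrete (unoptimized) example of a symbol-to-signal-point mapping of the
kind studied here, a GF(2^6) symbol can be Gray-labeled onto a square 64-QAM
constellation; the sketch below shows one such labeling only, since the paper's
point is precisely that different mappings change the Euclidean distance
spectrum and hence performance:

```python
def gray(n):
    # binary-reflected Gray code of n
    return n ^ (n >> 1)

def qam_point(sym, m=6):
    # Split the m-bit symbol label into I and Q halves and Gray-label
    # the amplitude levels on each axis (one mapping among many).
    half = m // 2
    i_bits, q_bits = sym >> half, sym & ((1 << half) - 1)
    levels = 1 << half
    level_of_label = {gray(k): k for k in range(levels)}
    to_amp = lambda bits: 2 * level_of_label[bits] - (levels - 1)
    return complex(to_amp(i_bits), to_amp(q_bits))

print(qam_point(0b101011))  # one 64-QAM point under this particular labeling
```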
|
[
{
"version": "v1",
"created": "Thu, 19 Aug 2021 14:35:49 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 20:40:07 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Bocharova",
"Irina E.",
""
],
[
"Kudryashov",
"Boris D.",
""
],
[
"Ovsyannikov",
"Evgenii P.",
""
],
[
"Skachek",
"Vitaly",
""
]
] |
new_dataset
| 0.972706 |
2109.14470
|
Ishaan Desai
|
Gerasimos Chourdakis, Kyle Davis, Benjamin Rodenberg, Miriam Schulte,
Fr\'ed\'eric Simonis, Benjamin Uekermann, Georg Abrams, Hans-Joachim
Bungartz, Lucia Cheung Yau, Ishaan Desai, Konrad Eder, Richard Hertrich,
Florian Lindner, Alexander Rusch, Dmytro Sashko, David Schneider, Amin
Totounferoush, Dominik Volland, Peter Vollmer, Oguz Ziya Koseomur
|
preCICE v2: A Sustainable and User-Friendly Coupling Library
|
added missing author, added author contributions, changed license
| null |
10.12688/openreseurope.14445.1
| null |
cs.MS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
preCICE is a free/open-source coupling library. It enables creating
partitioned multi-physics simulations by gluing together separate software
packages. This paper summarizes the development efforts in preCICE of the past
five years. During this time span, we have turned the software from a working
prototype -- sophisticated numerical coupling methods and scalability on tens
of thousands of compute cores -- into a sustainable and user-friendly software
project with a steadily growing community. Today, through forum discussions,
conferences, workshops, and publications, we know of more than 100 research
groups using preCICE. We cover the fundamentals of the software alongside a
performance and accuracy analysis of different data mapping methods.
Afterwards, we describe ready-to-use integration with widely-used external
simulation software packages, tests and continuous integration from unit to
system level, and community building measures, drawing an overview of the
current preCICE ecosystem.
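A schematic coupling loop for a single participant looks roughly as follows;
the method names follow the v2 Python bindings as we understand them (treat
the details as assumptions and consult the preCICE documentation), and running
it requires a matching precice-config.xml plus a second coupled participant:

```python
import numpy as np
import precice

def solve_one_step(dt):
    return np.zeros(4)  # placeholder for your solver's time step (hypothetical)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
interface = precice.Interface("FluidSolver", "precice-config.xml", 0, 1)
mesh_id = interface.get_mesh_id("Fluid-Mesh")
vertex_ids = interface.set_mesh_vertices(mesh_id, coords)
data_id = interface.get_data_id("Temperature", mesh_id)

dt = interface.initialize()
while interface.is_coupling_ongoing():
    values = solve_one_step(dt)
    interface.write_block_scalar_data(data_id, vertex_ids, values)
    dt = interface.advance(dt)   # exchange data and get the next time-step size
interface.finalize()
```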
|
[
{
"version": "v1",
"created": "Wed, 29 Sep 2021 15:01:34 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Sep 2021 09:42:19 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Chourdakis",
"Gerasimos",
""
],
[
"Davis",
"Kyle",
""
],
[
"Rodenberg",
"Benjamin",
""
],
[
"Schulte",
"Miriam",
""
],
[
"Simonis",
"Frédéric",
""
],
[
"Uekermann",
"Benjamin",
""
],
[
"Abrams",
"Georg",
""
],
[
"Bungartz",
"Hans-Joachim",
""
],
[
"Yau",
"Lucia Cheung",
""
],
[
"Desai",
"Ishaan",
""
],
[
"Eder",
"Konrad",
""
],
[
"Hertrich",
"Richard",
""
],
[
"Lindner",
"Florian",
""
],
[
"Rusch",
"Alexander",
""
],
[
"Sashko",
"Dmytro",
""
],
[
"Schneider",
"David",
""
],
[
"Totounferoush",
"Amin",
""
],
[
"Volland",
"Dominik",
""
],
[
"Vollmer",
"Peter",
""
],
[
"Koseomur",
"Oguz Ziya",
""
]
] |
new_dataset
| 0.999318 |
2111.14690
|
Peize Sun
|
Peize Sun, Jinkun Cao, Yi Jiang, Zehuan Yuan, Song Bai, Kris Kitani,
Ping Luo
|
DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse
Motion
|
add change log
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A typical pipeline for multi-object tracking (MOT) is to use a detector for
object localization, followed by re-identification (re-ID) for object
association. This pipeline is partially motivated by recent progress in both
object detection and re-ID, and partially motivated by biases in existing
tracking datasets, where most objects tend to have distinguishing appearance
and re-ID models are sufficient for establishing associations. In response to
such bias, we would like to re-emphasize that methods for multi-object tracking
should also work when object appearance is not sufficiently discriminative. To
this end, we propose a large-scale dataset for multi-human tracking, where
humans have similar appearance, diverse motion and extreme articulation. As the
dataset contains mostly group dancing videos, we name it "DanceTrack". We
expect DanceTrack to provide a better platform to develop more MOT algorithms
that rely less on visual discrimination and depend more on motion analysis. We
benchmark several state-of-the-art trackers on our dataset and observe a
significant performance drop on DanceTrack when compared against existing
benchmarks. The dataset, project code and competition server are released at:
\url{https://github.com/DanceTrack}.
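To make the appearance-vs-motion point concrete: when all targets look alike,
association has to rely on geometric cues such as box overlap. The greedy IoU
matcher below is a deliberately minimal sketch (real trackers add Kalman
prediction and Hungarian matching), not any of the benchmarked methods:

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3):
    # Greedily match each track to its best-overlapping unused detection.
    matches, used = [], set()
    for t_id, t_box in tracks.items():
        cands = [(iou(t_box, d), i) for i, d in enumerate(detections) if i not in used]
        score, idx = max(cands, default=(0.0, None))
        if idx is not None and score >= thresh:
            matches.append((t_id, idx))
            used.add(idx)
    return matches

print(associate({1: (0, 0, 10, 10)}, [(1, 1, 11, 11), (50, 50, 60, 60)]))  # [(1, 0)]
```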
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 16:49:06 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2022 06:28:44 GMT"
},
{
"version": "v3",
"created": "Tue, 24 May 2022 15:14:23 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Sun",
"Peize",
""
],
[
"Cao",
"Jinkun",
""
],
[
"Jiang",
"Yi",
""
],
[
"Yuan",
"Zehuan",
""
],
[
"Bai",
"Song",
""
],
[
"Kitani",
"Kris",
""
],
[
"Luo",
"Ping",
""
]
] |
new_dataset
| 0.995899 |
2112.06061
|
Fabio Pardo
|
Vittorio La Barbera, Fabio Pardo, Yuval Tassa, Monica Daley,
Christopher Richards, Petar Kormushev, John Hutchinson
|
OstrichRL: A Musculoskeletal Ostrich Simulation to Study Bio-mechanical
Locomotion
|
https://github.com/vittorione94/ostrichrl
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Muscle-actuated control is a research topic that spans multiple domains,
including biomechanics, neuroscience, reinforcement learning, robotics, and
graphics. This type of control is particularly challenging as bodies are often
overactuated and dynamics are delayed and non-linear. It is, however, a very
well-tested and well-tuned actuation mechanism that has undergone millions of
years of evolution, with interesting properties such as the exploitation of
passive forces and efficient energy storage in muscle-tendon units. To
facilitate research on
muscle-actuated simulation, we release a 3D musculoskeletal simulation of an
ostrich based on the MuJoCo physics engine. The ostrich is one of the fastest
bipeds on earth and therefore makes an excellent model for studying
muscle-actuated bipedal locomotion. The model is based on CT scans and
dissections used to collect actual muscle data, such as insertion sites,
lengths, and pennation angles. Along with this model, we also provide a set of
reinforcement learning tasks, including reference motion tracking, running, and
neck control, used to infer muscle actuation patterns. The reference motion
data is based on motion capture clips of various behaviors that we preprocessed
and adapted to our model. This paper describes how the model was built and
iteratively improved using the tasks. We also evaluate the accuracy of the
muscle actuation patterns by comparing them to experimentally collected
electromyographic data from locomoting birds. The results demonstrate the need
for rich reward signals or regularization techniques to constrain muscle
excitations and produce realistic movements. Overall, we believe that this work
can provide a useful bridge between fields of research interested in muscle
actuation.
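For intuition about why muscle actuation is hard to control, consider the
first-order activation dynamics commonly used as a simplification in
musculoskeletal models: excitation acts through a lag, so the controller's
signal arrives delayed and asymmetrically. The time constants below are
illustrative, not values from the OstrichRL model:

```python
def activation_step(a, excitation, dt, tau_act=0.01, tau_deact=0.04):
    # Activation rises quickly under excitation and decays more slowly.
    tau = tau_act if excitation > a else tau_deact
    return min(1.0, max(0.0, a + (excitation - a) * dt / tau))

a = 0.0
for _ in range(100):               # 100 ms of full excitation at dt = 1 ms
    a = activation_step(a, 1.0, 0.001)
print(round(a, 3))                 # approaches 1.0 with a lag
```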
|
[
{
"version": "v1",
"created": "Sat, 11 Dec 2021 19:58:11 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 15:06:31 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"La Barbera",
"Vittorio",
""
],
[
"Pardo",
"Fabio",
""
],
[
"Tassa",
"Yuval",
""
],
[
"Daley",
"Monica",
""
],
[
"Richards",
"Christopher",
""
],
[
"Kormushev",
"Petar",
""
],
[
"Hutchinson",
"John",
""
]
] |
new_dataset
| 0.998391 |