id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
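Before the rows themselves, here is a minimal sketch of how one record of this table can be handled in Python. The field names and value ranges (a single `new_dataset` prediction class, probabilities between 0.95 and 1, several nullable fields) come from the schema above; the example values are abbreviated from the first row below, and modeling a record as a plain dict is an assumption for illustration, not the dataset's actual packaging.

```python
# Sketch only: field names follow the header above; a plain dict stands in for
# whatever container the dataset actually ships in.
from datetime import datetime

record = {
    "id": "2204.03071",
    "submitter": "Muhammad Humayoun",        # nullable, like comments/journal-ref/doi/report-no
    "title": "Urdu Morphology, Orthography and Lexicon Extraction",
    "categories": "cs.CL",                   # space-separated arXiv categories
    "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/",
    "versions": [{"version": "v1", "created": "Wed, 6 Apr 2022 20:14:01 GMT"}],
    "update_date": datetime(2022, 4, 8),
    "prediction": "new_dataset",             # the only class present in this table
    "probability": 0.999451,                 # float64 in the range 0.95 to 1
}

def keep(rec, threshold=0.99):
    """Filter: keep records predicted 'new_dataset' with high confidence."""
    return rec["prediction"] == "new_dataset" and rec["probability"] >= threshold

print(keep(record))  # True
```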
2204.03071
|
Muhammad Humayoun
|
Muhammad Humayoun and Harald Hammarstr\"om and Aarne Ranta
|
Urdu Morphology, Orthography and Lexicon Extraction
|
Published in CAASL-2: The Second Workshop on Computational Approaches
to Arabic Script-based Languages, July 21-22, 2007, LSA 2007 Linguistic
Institute, Stanford University
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Urdu is a challenging language because of, first, its Perso-Arabic script and,
second, its morphological system, which inherits grammatical forms and
vocabulary from Arabic, Persian and the native languages of South Asia. This
paper describes an implementation of the Urdu language as a software API, and
we deal with orthography, morphology and the extraction of the lexicon. The
morphology is implemented in a toolkit called Functional Morphology (Forsberg &
Ranta, 2004), which is based on the idea of treating grammars as software
libraries. Therefore this implementation could be reused in applications such
as intelligent search of keywords, language training and infrastructure for
syntax. We also present an implementation of a small part of Urdu syntax to
demonstrate this reusability.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 20:14:01 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Humayoun",
"Muhammad",
""
],
[
"Hammarström",
"Harald",
""
],
[
"Ranta",
"Aarne",
""
]
] |
new_dataset
| 0.999451 |
2204.03112
|
Lihang Feng
|
Lihang Feng, Xu Jiang, Aiguo Song
|
An Instrumented Wheel-On-Limb System of Planetary Rovers for
Wheel-Terrain Interactions: System Conception and Preliminary Design
|
2nd International Conference on Robotics and Control Engineering, ACM
RobCE 2022, March 25, 2022, Nanjing, China
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the wheel-terrain interaction is of great importance to improve
the maneuverability and traversability of the rovers. A well-developed sensing
device carried by the rover would greatly facilitate the complex risk-reducing
operations on sandy terrains. In this paper, an instrumented wheel-on-limb
(WOL) system of planetary rovers for wheel-terrain interaction characterization
is presented. Assuming the function of a passive suspension for the wheel, the
WOL system can follow the terrain contour and keep the wheel lowered onto the
ground during rover motion, including climbing and descending, as well as
deploy and place the wheel on the ground before a drive command. The system
concept, functional requirements, and pre-design work,
as well as the system integration are presented.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 21:57:45 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Feng",
"Lihang",
""
],
[
"Jiang",
"Xu",
""
],
[
"Song",
"Aiguo",
""
]
] |
new_dataset
| 0.999623 |
2204.03139
|
Priya Sundaresan
|
Priya Sundaresan, Rika Antonova, Jeannette Bohg
|
DiffCloud: Real-to-Sim from Point Clouds with Differentiable Simulation
and Rendering of Deformable Objects
| null | null | null | null |
cs.RO cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Research in manipulation of deformable objects is typically conducted on a
limited range of scenarios, because handling each scenario on hardware takes
significant effort. Realistic simulators with support for various types of
deformations and interactions have the potential to speed up experimentation
with novel tasks and algorithms. However, for highly deformable objects it is
challenging to align the output of a simulator with the behavior of real
objects. Manual tuning is not intuitive, hence automated methods are needed. We
view this alignment problem as a joint perception-inference challenge and
demonstrate how to use recent neural network architectures to successfully
perform simulation parameter inference from real point clouds. We analyze the
performance of various architectures, comparing their data and training
requirements. Furthermore, we propose to leverage differentiable point cloud
sampling and differentiable simulation to significantly reduce the time to
achieve the alignment. We employ an efficient way to propagate gradients from
point clouds to simulated meshes and further through to the physical simulation
parameters, such as mass and stiffness. Experiments with highly deformable
objects show that our method can achieve comparable or better alignment with
real object behavior, while reducing the time needed to achieve this by more
than an order of magnitude. Videos and supplementary material are available at
https://tinyurl.com/diffcloud.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 00:45:26 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Sundaresan",
"Priya",
""
],
[
"Antonova",
"Rika",
""
],
[
"Bohg",
"Jeannette",
""
]
] |
new_dataset
| 0.995855 |
2204.03156
|
Ramy Taki ElDin F.
|
Ramy Taki ElDin
|
Multi-twisted codes as free modules over principal ideal domains
|
35 pages
| null | null | null |
cs.IT math.IT math.RA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We begin this chapter by introducing the simple algebraic structure of cyclic
codes over finite fields. This structure undergoes a series of generalizations
to present algebraic descriptions of constacyclic, quasi-cyclic (QC),
quasi-twisted (QT), generalized quasi-cyclic (GQC), and multi-twisted (MT)
codes. The correspondence between these codes and submodules of the free
$\mathbb{F}_q[x]$-module $\left(\mathbb{F}_q[x]\right)^\ell$ is established.
Thus, any of these codes corresponds to a free linear code over the principal
ideal domain (PID) $\mathbb{F}_q[x]$. A basis of this code exists and is used
to build a generator matrix with polynomial entries, called the generator
polynomial matrix (GPM). The Hermite normal form of matrices over PIDs is
exploited to achieve the reduced GPMs of MT codes. Some properties of the
reduced GPM are introduced, for example, the identical equation. A formula for
a GPM of the dual code $\mathcal{C}^\perp$ of a MT code is given. At this
point, special attention is paid to QC codes. For a QC code $\mathcal{C}$, we
define its reversed code $\mathcal{R}$. We call $\mathcal{C}$ reversible or
self-dual if $\mathcal{R}=\mathcal{C}$ or $\mathcal{C}^\perp=\mathcal{C}$,
respectively. A formula for a GPM of $\mathcal{R}$ is given. We characterize
GPMs for QC codes that combine reversibility and
self-duality/self-orthogonality. For the reader interested in running computer
search for optimal codes, we show the existence of binary self-orthogonal
reversible QC codes that have the best known parameters as linear codes. These
results can be obtained by brute-force search using GPMs that meet the above
characterization.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 01:37:39 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"ElDin",
"Ramy Taki",
""
]
] |
new_dataset
| 0.998413 |
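As background for the chapter abstract above, the cyclic-code special case it generalizes can be written down explicitly. This is a standard textbook fact rather than an excerpt from the chapter, and the remark about the $\ell = 1$ generator polynomial matrix is my own reading of the abstract.

```latex
% Standard fact: a cyclic code of length n over F_q is exactly an ideal of
% F_q[x]/(x^n - 1), generated by a unique monic divisor g(x) of x^n - 1.
\[
  \mathcal{C} \;\longleftrightarrow\; \langle g(x) \rangle \subseteq \mathbb{F}_q[x]/(x^n-1),
  \qquad g(x) \mid x^n - 1, \qquad \dim_{\mathbb{F}_q} \mathcal{C} \;=\; n - \deg g(x).
\]
% In the module picture of the abstract, this is the \ell = 1 case: the reduced
% generator polynomial matrix (GPM) presumably collapses to the 1x1 matrix (g(x)).
```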
2204.03241
|
Jianfeng Zhan
|
Jianfeng Zhan
|
Three Laws of Technology Rise or Fall
| null |
BenchCouncil Transactions on Benchmarks, Standards and Evaluations
(TBench), 2022
| null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Newton's laws of motion perfectly explain or approximate physical phenomena
in our everyday life. Are there any laws that explain or approximate
technology's rise or fall? After reviewing thirteen information technologies
that succeeded, this article concludes three laws of technology and derives
five corollaries to explain or approximate the rise or fall of technology.
Three laws are the laws of technology inertia, technology change force, and
technology action and reaction. Five corollaries are the corollaries of
measurement of technology change force, technology breakthrough, technology
monopoly, technology openness, and technology business opportunity. I present
how to use the laws and the corollaries to analyze an emerging technology --
the open-source RISC-V processor. Also, I elaborate on benchmarks' role in
applying those laws.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 06:17:44 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.996412 |
2204.03243
|
Yu Meng
|
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul Bennett,
Jiawei Han, Xia Song
|
Pretraining Text Encoders with Adversarial Mixture of Training Signal
Generators
|
ICLR 2022. (Code and Models: https://github.com/microsoft/AMOS)
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new framework AMOS that pretrains text encoders with an
Adversarial learning curriculum via a Mixture Of Signals from multiple
auxiliary generators. Following ELECTRA-style pretraining, the main encoder is
trained as a discriminator to detect replaced tokens generated by auxiliary
masked language models (MLMs). Different from ELECTRA which trains one MLM as
the generator, we jointly train multiple MLMs of different sizes to provide
training signals at various levels of difficulty. To push the discriminator to
learn better with challenging replaced tokens, we learn mixture weights over
the auxiliary MLMs' outputs to maximize the discriminator loss by
backpropagating the gradient from the discriminator via Gumbel-Softmax. For
better pretraining efficiency, we propose a way to assemble multiple MLMs into
one unified auxiliary model. AMOS outperforms ELECTRA and recent
state-of-the-art pretrained models by about 1 point on the GLUE benchmark for
BERT base-sized models.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 06:19:06 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Meng",
"Yu",
""
],
[
"Xiong",
"Chenyan",
""
],
[
"Bajaj",
"Payal",
""
],
[
"Tiwary",
"Saurabh",
""
],
[
"Bennett",
"Paul",
""
],
[
"Han",
"Jiawei",
""
],
[
"Song",
"Xia",
""
]
] |
new_dataset
| 0.954755 |
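The mixture-of-generators step described above can be illustrated with a short PyTorch sketch. This is not the AMOS implementation (that lives in the linked Microsoft repository); it only shows how Gumbel-Softmax weights can blend the output distributions of several masked language models differentiably, with all tensor shapes assumed for the example.

```python
# Illustrative sketch, not the AMOS code: blend logits from several auxiliary
# MLM generators with differentiable Gumbel-Softmax mixture weights.
import torch
import torch.nn.functional as F

def mix_generator_logits(logits_list, mixture_logits, tau=1.0):
    # logits_list: list of (batch, seq_len, vocab) tensors, one per generator
    # mixture_logits: (batch, seq_len, num_generators) learnable mixing scores
    weights = F.gumbel_softmax(mixture_logits, tau=tau, hard=False)   # differentiable sample
    stacked = torch.stack(logits_list, dim=-1)            # (batch, seq_len, vocab, num_generators)
    return (stacked * weights.unsqueeze(2)).sum(dim=-1)   # weighted mixture of distributions

# Toy shapes: 3 generators, batch 2, sequence length 8, vocabulary 100.
gens = [torch.randn(2, 8, 100) for _ in range(3)]
mixed = mix_generator_logits(gens, torch.randn(2, 8, 3))
print(mixed.shape)  # torch.Size([2, 8, 100])
```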
2204.03249
|
Juheon Lee
|
Juheon Lee, Hyeong-Seok Choi, Kyogu Lee
|
Expressive Singing Synthesis Using Local Style Token and Dual-path Pitch
Encoder
|
4 pages, Submitted to Interspeech 2022
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper proposes a controllable singing voice synthesis system capable of
generating expressive singing voice with two novel methodologies. First, a
local style token module, which predicts frame-level style tokens from an input
pitch and text sequence, is proposed to allow the singing voice system to
control musical expression often unspecified in sheet music (e.g., breathing
and intensity). Second, we propose a dual-path pitch encoder with a choice of
two different pitch inputs: MIDI pitch sequence or f0 contour. Because the
initial generation of a singing voice is usually executed by taking a MIDI
pitch sequence, one can later extract an f0 contour from the generated singing
voice and modify the f0 contour to a finer level as desired. Through
quantitative and qualitative evaluations, we confirmed that the proposed model
could control various musical expressions while not sacrificing the sound
quality of the singing voice synthesis system.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 06:44:11 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Lee",
"Juheon",
""
],
[
"Choi",
"Hyeong-Seok",
""
],
[
"Lee",
"Kyogu",
""
]
] |
new_dataset
| 0.997563 |
2204.03254
|
Stefano Zacchiroli
|
Zeinab Abou Khalil (DGD-I), Stefano Zacchiroli (IP Paris, LTCI)
|
The General Index of Software Engineering Papers
|
MSR 2022 - The 2022 Mining Software Repositories Conference, May
2022, Pittsburgh, Pennsylvania, United States
| null |
10.1145/3524842.3528494
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the General Index of Software Engineering Papers, a dataset of
fulltext-indexed papers from the most prominent scientific venues in the field
of Software Engineering. The dataset includes both complete bibliographic
information and indexed n-grams (sequences of contiguous words after removal of
stopwords and non-words, for a total of 577 276 382 unique n-grams in this
release) with length 1 to 5 for 44 581 papers retrieved from 34 venues over the
1971-2020 period. The dataset serves use cases in the field of meta-research,
allowing one to introspect the output of software engineering research even when
access to papers or scholarly search engines is not possible (e.g., due to
contractual reasons). The dataset also contributes to making such analyses
reproducible and independently verifiable, as opposed to what happens when they
are conducted using 3rd-party and non-open scholarly indexing services. The
dataset is available as a portable Postgres database dump and released as open
data.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 06:52:35 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Khalil",
"Zeinab Abou",
"",
"DGD-I"
],
[
"Zacchiroli",
"Stefano",
"",
"IP Paris, LTCI"
]
] |
new_dataset
| 0.999437 |
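Since the dataset ships as a Postgres dump, a typical meta-research query might look like the sketch below. The table and column names (papers, ngrams, paper_id, venue, year) are hypothetical placeholders; the real schema comes with the dump, and only the psycopg2 calls themselves are standard.

```python
# Sketch with a hypothetical schema: the actual table/column names are defined
# by the released Postgres dump, not by this example.
import psycopg2

conn = psycopg2.connect("dbname=gi_se_papers user=postgres")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT p.title, p.venue, p.year
        FROM papers p
        JOIN ngrams n ON n.paper_id = p.id
        WHERE n.ngram = %s
        ORDER BY p.year
        """,
        ("technical debt",),
    )
    for title, venue, year in cur.fetchall():
        print(year, venue, title)
```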
2204.03289
|
Derrick Greenspan
|
Derrick Greenspan, Naveed Ul Mustafa, Zoran Kolega, Mark Heinrich, Yan
Solihin
|
Persistent Memory Objects: Fast and Easy Crash Consistency for
Persistent Memory
|
12 pages, 15 figures
| null | null | null |
cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
DIMM-compatible persistent memory unites memory and storage. Prior works
utilize persistent memory either by combining the filesystem with direct access
on memory mapped files or by managing it as a collection of objects while
abolishing the POSIX abstraction. In contrast, we propose retaining the POSIX
abstraction and extending it to provide support for persistent memory, using
Persistent Memory Objects (PMOs). In this work, we design and implement PMOs, a
crash-consistent abstraction for managing persistent memory. We introduce
psync, a single system call, that a programmer can use to specify crash
consistency points in their code, without needing to orchestrate durability
explicitly. When rendering data crash consistent, our design incurs an overhead
of $\approx 25\%$ and $\approx 21\%$ for parallel workloads and FileBench,
respectively, compared to a system without crash consistency. Compared to
NOVA-Fortis, our design provides a speedup of $\approx 1.67\times$ and $\approx
3\times$ for the two sets of benchmarks, respectively.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 08:38:37 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Greenspan",
"Derrick",
""
],
[
"Mustafa",
"Naveed Ul",
""
],
[
"Kolega",
"Zoran",
""
],
[
"Heinrich",
"Mark",
""
],
[
"Solihin",
"Yan",
""
]
] |
new_dataset
| 0.996619 |
2204.03340
|
Jiale Cao
|
Jiale Cao and Yanwei Pang and Rao Muhammad Anwer and Hisham Cholakkal
and Jin Xie and Mubarak Shah and Fahad Shahbaz Khan
|
PSTR: End-to-End One-Step Person Search With Transformers
|
CVPR2022, Code: https://github.com/JialeCao001/PSTR
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel one-step transformer-based person search framework, PSTR,
that jointly performs person detection and re-identification (re-id) in a
single architecture. PSTR comprises a person search-specialized (PSS) module
that contains a detection encoder-decoder for person detection along with a
discriminative re-id decoder for person re-id. The discriminative re-id decoder
utilizes a multi-level supervision scheme with a shared decoder for
discriminative re-id feature learning and also comprises a part attention block
to encode the relationship between different parts of a person. We further
introduce a simple multi-scale scheme to support re-id across person instances
at different scales. PSTR jointly achieves the diverse objectives of
object-level recognition (detection) and instance-level matching (re-id). To
the best of our knowledge, we are the first to propose an end-to-end one-step
transformer-based person search framework. Experiments are performed on two
popular benchmarks: CUHK-SYSU and PRW. Our extensive ablations reveal the
merits of the proposed contributions. Further, the proposed PSTR sets a new
state-of-the-art on both benchmarks. On the challenging PRW benchmark, PSTR
achieves a mean average precision (mAP) score of 56.5%. The source code is
available at \url{https://github.com/JialeCao001/PSTR}.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 10:22:33 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Cao",
"Jiale",
""
],
[
"Pang",
"Yanwei",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Xie",
"Jin",
""
],
[
"Shah",
"Mubarak",
""
],
[
"Khan",
"Fahad Shahbaz",
""
]
] |
new_dataset
| 0.9994 |
2204.03371
|
Anwesh Reddy Paduri
|
Narayana Darapaneni, Jai Arora, MoniShankar Hazra, Naman Vig,
Simrandeep Singh Gandhi, Saurabh Gupta, Anwesh Reddy Paduri
|
Detection of Distracted Driver using Convolution Neural Network
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With over 50 million car sales annually and over 1.3 million deaths every
year due to motor accidents, we have chosen this space. India accounts for 11
per cent of global deaths in road accidents. Drivers are held responsible for
78% of accidents. Road safety problems in developing countries are a major
concern, and human behavior is ascribed as one of the main causes and
accelerators of road safety problems. Driver distraction has been identified as
the main reason for accidents. Distractions can be caused by mobile usage,
drinking, operating instruments, applying facial makeup, or social
interaction. For the scope of this project, we will focus on building a highly
efficient ML model to classify different driver distractions at runtime using
computer vision. We would also analyze the overall speed and scalability of the
model in order to be able to set it up on an edge device. We use CNN, VGG-16,
ResNet50, and an ensemble of CNNs to predict the classes.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 11:41:19 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Darapaneni",
"Narayana",
""
],
[
"Arora",
"Jai",
""
],
[
"Hazra",
"MoniShankar",
""
],
[
"Vig",
"Naman",
""
],
[
"Gandhi",
"Simrandeep Singh",
""
],
[
"Gupta",
"Saurabh",
""
],
[
"Paduri",
"Anwesh Reddy",
""
]
] |
new_dataset
| 0.99941 |
2204.03401
|
Maja Hanne Kirkeby
|
Maja H. Kirkeby and Thomas Krabben and Mathias Larsen and Maria B.
Mikkelsen and Tjark Petersen and Mads Rosendahl and Martin Schoeberl and
Martin Sundman
|
Energy Consumption and Performance of Heapsort in Hardware and Software
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
In this poster abstract we will report on a case study on implementing the
Heapsort algorithm in hardware and software and comparing their time and energy
consumption. Our experiment shows that the Hardware implementation is more
energy efficient, but slower than the Software implementation due to a low
clock frequency. It also indicates that the optimal degree of parallelization
differs when optimizing for time compared to optimizing for energy.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 12:38:37 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Kirkeby",
"Maja H.",
""
],
[
"Krabben",
"Thomas",
""
],
[
"Larsen",
"Mathias",
""
],
[
"Mikkelsen",
"Maria B.",
""
],
[
"Petersen",
"Tjark",
""
],
[
"Rosendahl",
"Mads",
""
],
[
"Schoeberl",
"Martin",
""
],
[
"Sundman",
"Martin",
""
]
] |
new_dataset
| 0.994945 |
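For reference alongside the poster abstract above, a plain software heapsort looks as follows; this is a generic textbook version, not the implementation that was benchmarked in hardware or software.

```python
def heapsort(a):
    """Generic in-place heapsort: build a max-heap, then repeatedly pop the maximum."""
    n = len(a)

    def sift_down(root, end):
        while (child := 2 * root + 1) <= end:
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                         # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    for start in range(n // 2 - 1, -1, -1):        # heapify the whole array
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):                # move the current max to the end
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a

print(heapsort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```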
2204.03506
|
Firoj Alam
|
Preslav Nakov, Firoj Alam, Yifan Zhang, Animesh Prakash, Fahim Dalvi
|
QCRI's COVID-19 Disinformation Detector: A System to Fight the COVID-19
Infodemic in Social Media
|
disinformation, misinformation, factuality, fact-checking,
fact-checkers, check-worthiness, Social Media Platforms, COVID-19, social
media
| null | null | null |
cs.CL cs.AI cs.CY cs.IR cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Fighting the ongoing COVID-19 infodemic has been declared as one of the most
important focus areas by the World Health Organization since the onset of the
COVID-19 pandemic. While the information that is consumed and disseminated
ranges from promoting fake cures, rumors, and conspiracy theories to spreading
xenophobia and panic, at the same time there is information (e.g., containing
advice, promoting a cure) that can help different stakeholders such as
policy-makers. Social media platforms enable the infodemic, and there have been
efforts to curate, analyze, and debunk the content on such platforms.
While a majority of the research efforts consider one or two aspects (e.g.,
detecting factuality) of such information, in this study we focus on a
multifaceted approach, including an
API,\url{https://app.swaggerhub.com/apis/yifan2019/Tanbih/0.8.0/} and a demo
system,\url{https://covid19.tanbih.org}, which we made freely and publicly
available. We believe that this will facilitate researchers and different
stakeholders. A screencast of the API services and demo is
available at \url{https://youtu.be/zhbcSvxEKMk}.
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2022 12:30:35 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Nakov",
"Preslav",
""
],
[
"Alam",
"Firoj",
""
],
[
"Zhang",
"Yifan",
""
],
[
"Prakash",
"Animesh",
""
],
[
"Dalvi",
"Fahim",
""
]
] |
new_dataset
| 0.996552 |
2204.03563
|
Xinyu Wang
|
Xinyu Wang
|
Transfinite Modal Logic: a Semi-quantitative Explanation for Bayesian
Reasoning
| null | null | null | null |
cs.AI cs.LO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Bayesian reasoning plays a significant role both in human rationality and in
machine learning. In this paper, we introduce transfinite modal logic, which
combines modal logic with ordinal arithmetic, in order to formalize Bayesian
reasoning semi-quantitatively. Technically, we first investigate some
nontrivial properties of ordinal arithmetic, which then enable us to expand
normal modal logic's semantics naturally and elegantly onto the novel
transfinite modal logic, while still keeping the ordinary definition of Kripke
models totally intact. Despite all the transfinite mathematical definitions, we
argue that in practice, this logic can actually fit into a completely finite
interpretation as well. We suggest that transfinite modal logic captures the
essence of Bayesian reasoning in a rather clear and simple form, in particular,
it provides a perfect explanation for Sherlock Holmes' famous saying, "When you
have eliminated the impossible, whatever remains, however improbable, must be
the truth." We also prove a counterpart of finite model property theorem for
our logic.
|
[
{
"version": "v1",
"created": "Sat, 2 Apr 2022 17:58:14 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Wang",
"Xinyu",
""
]
] |
new_dataset
| 0.999548 |
2204.03593
|
Norman M\"uller
|
Norman M\"uller, Andrea Simonelli, Lorenzo Porzi, Samuel Rota Bul\`o,
Matthias Nie{\ss}ner, Peter Kontschieder
|
AutoRF: Learning 3D Object Radiance Fields from Single View Observations
|
CVPR 2022. Project page: https://sirwyver.github.io/AutoRF/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce AutoRF - a new approach for learning neural 3D object
representations where each object in the training set is observed by only a
single view. This setting is in stark contrast to the majority of existing
works that leverage multiple views of the same object, employ explicit priors
during training, or require pixel-perfect annotations. To address this
challenging setting, we propose to learn a normalized, object-centric
representation whose embedding describes and disentangles shape, appearance,
and pose. Each encoding provides well-generalizable, compact information about
the object of interest, which is decoded in a single-shot into a new target
view, thus enabling novel view synthesis. We further improve the reconstruction
quality by optimizing shape and appearance codes at test time by fitting the
representation tightly to the input image. In a series of experiments, we show
that our method generalizes well to unseen objects, even across different
datasets of challenging real-world street scenes such as nuScenes, KITTI, and
Mapillary Metropolis.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 17:13:39 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Müller",
"Norman",
""
],
[
"Simonelli",
"Andrea",
""
],
[
"Porzi",
"Lorenzo",
""
],
[
"Bulò",
"Samuel Rota",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Kontschieder",
"Peter",
""
]
] |
new_dataset
| 0.975614 |
2204.03645
|
Mingyu Ding
|
Mingyu Ding, Bin Xiao, Noel Codella, Ping Luo, Jingdong Wang, Lu Yuan
|
DaViT: Dual Attention Vision Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we introduce Dual Attention Vision Transformers (DaViT), a
simple yet effective vision transformer architecture that is able to capture
global context while maintaining computational efficiency. We propose
approaching the problem from an orthogonal angle: exploiting self-attention
mechanisms with both "spatial tokens" and "channel tokens". With spatial
tokens, the spatial dimension defines the token scope, and the channel
dimension defines the token feature dimension. With channel tokens, we have the
inverse: the channel dimension defines the token scope, and the spatial
dimension defines the token feature dimension. We further group tokens along
the sequence direction for both spatial and channel tokens to maintain the
linear complexity of the entire model. We show that these two self-attentions
complement each other: (i) since each channel token contains an abstract
representation of the entire image, the channel attention naturally captures
global interactions and representations by taking all spatial positions into
account when computing attention scores between channels; (ii) the spatial
attention refines the local representations by performing fine-grained
interactions across spatial locations, which in turn helps the global
information modeling in channel attention. Extensive experiments show our DaViT
achieves state-of-the-art performance on four different tasks with efficient
computations. Without extra data, DaViT-Tiny, DaViT-Small, and DaViT-Base
achieve 82.8%, 84.2%, and 84.6% top-1 accuracy on ImageNet-1K with 28.3M,
49.7M, and 87.9M parameters, respectively. When we further scale up DaViT with
1.5B weakly supervised image and text pairs, DaViT-Giant reaches 90.4% top-1
accuracy on ImageNet-1K. Code is available at https://github.com/dingmyu/davit.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 17:59:32 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Ding",
"Mingyu",
""
],
[
"Xiao",
"Bin",
""
],
[
"Codella",
"Noel",
""
],
[
"Luo",
"Ping",
""
],
[
"Wang",
"Jingdong",
""
],
[
"Yuan",
"Lu",
""
]
] |
new_dataset
| 0.999386 |
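The channel-token idea described above can be caricatured in a few lines of PyTorch: once channels act as the tokens, every attention score aggregates over all spatial positions. This is only a schematic of the idea, not DaViT's actual block, which also includes grouping for linear complexity, projections, and multiple heads.

```python
# Schematic of channel-wise self-attention: channels act as tokens, so each
# score is computed over all spatial positions. Not the DaViT implementation.
import torch

def channel_self_attention(x):
    # x: (batch, num_spatial_tokens, channels)
    t = x.transpose(1, 2)                                          # (batch, channels, spatial)
    scale = t.shape[-1] ** 0.5
    attn = torch.softmax(t @ t.transpose(-2, -1) / scale, dim=-1)  # (batch, C, C)
    return (attn @ t).transpose(1, 2)                              # back to (batch, spatial, channels)

x = torch.randn(2, 49, 96)                    # e.g. a 7x7 patch grid with 96 channels
print(channel_self_attention(x).shape)        # torch.Size([2, 49, 96])
```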
2204.03646
|
Jinglin Xu
|
Jinglin Xu, Yongming Rao, Xumin Yu, Guangyi Chen, Jie Zhou, Jiwen Lu
|
FineDiving: A Fine-grained Dataset for Procedure-aware Action Quality
Assessment
|
Computer Vision and Pattern Recognition 2022 (Oral presentation)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing action quality assessment methods rely on the deep features of
an entire video to predict the score, which is less reliable due to the
non-transparent inference process and poor interpretability. We argue that
understanding both high-level semantics and internal temporal structures of
actions in competitive sports videos is the key to making predictions accurate
and interpretable. Towards this goal, we construct a new fine-grained dataset,
called FineDiving, developed on diverse diving events with detailed annotations
on action procedures. We also propose a procedure-aware approach for action
quality assessment, learned by a new Temporal Segmentation Attention module.
Specifically, we propose to parse pairwise query and exemplar action instances
into consecutive steps with diverse semantic and temporal correspondences. The
procedure-aware cross-attention is proposed to learn embeddings between query
and exemplar steps to discover their semantic, spatial, and temporal
correspondences, and further serve for fine-grained contrastive regression to
derive a reliable scoring mechanism. Extensive experiments demonstrate that our
approach achieves substantial improvements over state-of-the-art methods with
better interpretability. The dataset and code are available at
\url{https://github.com/xujinglin/FineDiving}.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 17:59:32 GMT"
}
] | 2022-04-08T00:00:00 |
[
[
"Xu",
"Jinglin",
""
],
[
"Rao",
"Yongming",
""
],
[
"Yu",
"Xumin",
""
],
[
"Chen",
"Guangyi",
""
],
[
"Zhou",
"Jie",
""
],
[
"Lu",
"Jiwen",
""
]
] |
new_dataset
| 0.998887 |
2004.10703
|
Arun Maiya
|
Arun S. Maiya
|
ktrain: A Low-Code Library for Augmented Machine Learning
|
9 pages
| null | null | null |
cs.LG cs.CL cs.CV cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ktrain, a low-code Python library that makes machine learning more
accessible and easier to apply. As a wrapper to TensorFlow and many other
libraries (e.g., transformers, scikit-learn, stellargraph), it is designed to
make sophisticated, state-of-the-art machine learning models simple to build,
train, inspect, and apply by both beginners and experienced practitioners.
Featuring modules that support text data (e.g., text classification, sequence
tagging, open-domain question-answering), vision data (e.g., image
classification), graph data (e.g., node classification, link prediction), and
tabular data, ktrain presents a simple unified interface enabling one to
quickly solve a wide range of tasks in as little as three or four "commands" or
lines of code.
|
[
{
"version": "v1",
"created": "Sun, 19 Apr 2020 14:18:20 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Apr 2020 11:48:35 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Jun 2020 15:50:12 GMT"
},
{
"version": "v4",
"created": "Fri, 31 Jul 2020 21:25:30 GMT"
},
{
"version": "v5",
"created": "Tue, 5 Apr 2022 18:49:01 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Maiya",
"Arun S.",
""
]
] |
new_dataset
| 0.999447 |
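The "three or four commands" claim above can be illustrated with a hedged sketch of a text-classification run. The call names follow the ktrain tutorials as I recall them, so exact signatures and defaults should be checked against the library's documentation; the toy training data is a placeholder.

```python
# Hedged sketch of a ktrain text-classification workflow; verify call signatures
# against the ktrain documentation. The tiny dataset below is a placeholder.
import ktrain
from ktrain import text

x_train = ["great movie", "terrible plot", "loved it", "waste of time"]
y_train = ["pos", "neg", "pos", "neg"]

trn, val, preproc = text.texts_from_array(x_train, y_train,
                                          class_names=["neg", "pos"],
                                          preprocess_mode="bert", maxlen=64)
model = text.text_classifier("bert", train_data=trn, preproc=preproc)
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=2)
learner.fit_onecycle(2e-5, 1)
predictor = ktrain.get_predictor(learner.model, preproc)
print(predictor.predict("an instant classic"))
```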
2012.02328
|
Vijay Janapa Reddi
|
Vijay Janapa Reddi, David Kanter, Peter Mattson, Jared Duke, Thai
Nguyen, Ramesh Chukka, Ken Shiring, Koan-Sin Tan, Mark Charlebois, William
Chou, Mostafa El-Khamy, Jungwook Hong, Tom St. John, Cindy Trinh, Michael
Buch, Mark Mazumder, Relia Markovic, Thomas Atta, Fatih Cakir, Masoud
Charkhabi, Xiaodong Chen, Cheng-Ming Chiang, Dave Dexter, Terry Heo, Gunther
Schmuelling, Maryam Shabani, Dylan Zika
|
MLPerf Mobile Inference Benchmark
| null | null | null | null |
cs.LG cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the first industry-standard open-source machine learning
(ML) benchmark to allow performance and accuracy evaluation of mobile devices
with different AI chips and software stacks. The benchmark draws from the
expertise of leading mobile-SoC vendors, ML-framework providers, and model
producers. It comprises a suite of models that operate with standard data sets,
quality metrics and run rules. We describe the design and implementation of
this domain-specific ML benchmark. The current benchmark version comes as a
mobile app for different computer vision and natural language processing tasks.
The benchmark also supports non-smartphone devices, such as laptops and mobile
PCs. Benchmark results from the first two rounds reveal the overwhelming
complexity of the underlying mobile ML system stack, emphasizing the need for
transparency in mobile ML performance analysis. The results also show that the
strides being made all through the ML stack improve performance. Within six
months, offline throughput improved by 3x, while latency reduced by as much as
12x. ML is an evolving field with changing use cases, models, data sets and
quality targets. MLPerf Mobile will evolve and serve as an open-source
community framework to guide research and innovation for mobile AI.
|
[
{
"version": "v1",
"created": "Thu, 3 Dec 2020 23:29:03 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Feb 2021 14:34:51 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Apr 2022 13:13:47 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Apr 2022 15:54:44 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Reddi",
"Vijay Janapa",
""
],
[
"Kanter",
"David",
""
],
[
"Mattson",
"Peter",
""
],
[
"Duke",
"Jared",
""
],
[
"Nguyen",
"Thai",
""
],
[
"Chukka",
"Ramesh",
""
],
[
"Shiring",
"Ken",
""
],
[
"Tan",
"Koan-Sin",
""
],
[
"Charlebois",
"Mark",
""
],
[
"Chou",
"William",
""
],
[
"El-Khamy",
"Mostafa",
""
],
[
"Hong",
"Jungwook",
""
],
[
"John",
"Tom St.",
""
],
[
"Trinh",
"Cindy",
""
],
[
"Buch",
"Michael",
""
],
[
"Mazumder",
"Mark",
""
],
[
"Markovic",
"Relia",
""
],
[
"Atta",
"Thomas",
""
],
[
"Cakir",
"Fatih",
""
],
[
"Charkhabi",
"Masoud",
""
],
[
"Chen",
"Xiaodong",
""
],
[
"Chiang",
"Cheng-Ming",
""
],
[
"Dexter",
"Dave",
""
],
[
"Heo",
"Terry",
""
],
[
"Schmuelling",
"Gunther",
""
],
[
"Shabani",
"Maryam",
""
],
[
"Zika",
"Dylan",
""
]
] |
new_dataset
| 0.999443 |
2107.03601
|
Shuang Deng
|
Shuang Deng, Qiulei Dong, Bo Liu, and Zhanyi Hu
|
Superpoint-guided Semi-supervised Semantic Segmentation of 3D Point
Clouds
| null |
IEEE Conference on Robotics and Automation (ICRA), 2022
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
3D point cloud semantic segmentation is a challenging topic in the computer
vision field. Most of the existing methods in literature require a large amount
of fully labeled training data, but it is extremely time-consuming to obtain
these training data by manually labeling massive point clouds. Addressing this
problem, we propose a superpoint-guided semi-supervised segmentation network
for 3D point clouds, which jointly utilizes a small portion of labeled scene
point clouds and a large number of unlabeled point clouds for network training.
The proposed network is iteratively updated with its predicted pseudo labels,
where a superpoint generation module is introduced for extracting superpoints
from 3D point clouds, and a pseudo-label optimization module is explored for
automatically assigning pseudo labels to the unlabeled points under the
constraint of the extracted superpoints. Additionally, there are some 3D points
without pseudo-label supervision. We propose an edge prediction module to
constrain features of edge points. A superpoint feature aggregation module and
a superpoint feature consistency loss function are introduced to smooth
superpoint features. Extensive experimental results on two 3D public datasets
demonstrate that our method can achieve better performance than several
state-of-the-art point cloud segmentation networks and several popular
semi-supervised segmentation methods with few labeled scenes.
|
[
{
"version": "v1",
"created": "Thu, 8 Jul 2021 04:43:21 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jul 2021 07:18:51 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Sep 2021 11:18:18 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Deng",
"Shuang",
""
],
[
"Dong",
"Qiulei",
""
],
[
"Liu",
"Bo",
""
],
[
"Hu",
"Zhanyi",
""
]
] |
new_dataset
| 0.991132 |
2107.11965
|
Elif Surer
|
Sinan Ariyurek, Elif Surer, Aysu Betin-Can
|
Playtesting: What is Beyond Personas
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Playtesting is an essential step in the game design process. Game designers
use the feedback from playtests to refine their designs. Game designers may
employ procedural personas to automate the playtesting process. In this paper,
we present two approaches to improve automated playtesting. First, we propose
developing persona, which allows a persona to progress to different goals. In
contrast, the procedural persona is fixed to a single goal. Second, a human
playtester knows which paths she has tested before, and during the consequent
tests, she may test different paths. However, Reinforcement Learning (RL)
agents disregard these previous paths. We propose a novel methodology that we
refer to as Alternative Path Finder (APF). We train APF with previous paths and
employ APF during the training of an RL agent. APF modulates the reward
structure of the environment while preserving the agent's goal. When evaluated,
the agent generates a different trajectory that achieves the same goal. We use
the General Video Game Artificial Intelligence (GVG-AI) and VizDoom frameworks
to test our proposed methodologies. We use Proximal Policy Optimization (PPO)
RL agent during experiments. First, we compare the playtest data generated by
developing and procedural persona. Our experiments show that developing persona
provides better insight into the game and how different players would play.
Second, we present the alternative paths found using APF and argue why
traditional RL agents cannot learn those paths.
|
[
{
"version": "v1",
"created": "Mon, 26 Jul 2021 05:23:45 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 16:51:08 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Ariyurek",
"Sinan",
""
],
[
"Surer",
"Elif",
""
],
[
"Betin-Can",
"Aysu",
""
]
] |
new_dataset
| 0.998269 |
2111.12294
|
Yehui Tang
|
Yehui Tang, Kai Han, Jianyuan Guo, Chang Xu, Yanxi Li, Chao Xu, Yunhe
Wang
|
An Image Patch is a Wave: Phase-Aware Vision MLP
|
This paper is accepted by CVPR 2022 (oral presentation)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the field of computer vision, recent works show that a pure MLP
architecture mainly stacked by fully-connected layers can achieve competing
performance with CNN and transformer. An input image of vision MLP is usually
split into multiple tokens (patches), while the existing MLP models directly
aggregate them with fixed weights, neglecting the varying semantic information
of tokens from different images. To dynamically aggregate tokens, we propose to
represent each token as a wave function with two parts, amplitude and phase.
Amplitude is the original feature and the phase term is a complex value
changing according to the semantic contents of input images. Introducing the
phase term can dynamically modulate the relationship between tokens and fixed
weights in MLP. Based on the wave-like token representation, we establish a
novel Wave-MLP architecture for vision tasks. Extensive experiments demonstrate
that the proposed Wave-MLP is superior to the state-of-the-art MLP
architectures on various vision tasks such as image classification, object
detection and semantic segmentation. The source code is available at
https://github.com/huawei-noah/CV-Backbones/tree/master/wavemlp_pytorch and
https://gitee.com/mindspore/models/tree/master/research/cv/wave_mlp.
|
[
{
"version": "v1",
"created": "Wed, 24 Nov 2021 06:25:49 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Nov 2021 02:49:10 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Mar 2022 14:09:26 GMT"
},
{
"version": "v4",
"created": "Fri, 11 Mar 2022 02:41:30 GMT"
},
{
"version": "v5",
"created": "Wed, 6 Apr 2022 07:37:02 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Tang",
"Yehui",
""
],
[
"Han",
"Kai",
""
],
[
"Guo",
"Jianyuan",
""
],
[
"Xu",
"Chang",
""
],
[
"Li",
"Yanxi",
""
],
[
"Xu",
"Chao",
""
],
[
"Wang",
"Yunhe",
""
]
] |
new_dataset
| 0.999723 |
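The amplitude/phase picture above can be paraphrased in a few lines: each token is treated as a wave whose content-dependent phase modulates how fixed MLP mixing weights combine it with the others. This is only a paraphrase of the idea with assumed shapes; the paper's actual Phase-Aware Token Mixing block should be taken from the released code.

```python
# Paraphrase of the wave-like token mixing idea (not the Wave-MLP code): the
# predicted phase modulates how fixed weights aggregate token amplitudes.
import torch

def wave_token_mix(amplitude, phase, weights):
    # amplitude: (num_tokens, dim) original token features
    # phase:     (num_tokens, dim) content-dependent phases predicted per input
    # weights:   (num_out, num_tokens) fixed token-mixing weights of the MLP
    real = weights @ (amplitude * torch.cos(phase))
    imag = weights @ (amplitude * torch.sin(phase))
    return torch.sqrt(real ** 2 + imag ** 2)      # magnitude of the aggregated wave

mixed = wave_token_mix(torch.randn(16, 8), torch.randn(16, 8), torch.randn(16, 16))
print(mixed.shape)  # torch.Size([16, 8])
```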
2111.12580
|
Taeyeop Lee
|
Taeyeop Lee, Byeong-Uk Lee, Inkyu Shin, Jaesung Choe, Ukcheol Shin, In
So Kweon, Kuk-Jin Yoon
|
UDA-COPE: Unsupervised Domain Adaptation for Category-level Object Pose
Estimation
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning to estimate object pose often requires ground-truth (GT) labels,
such as CAD model and absolute-scale object pose, which is expensive and
laborious to obtain in the real world. To tackle this problem, we propose an
unsupervised domain adaptation (UDA) for category-level object pose estimation,
called UDA-COPE. Inspired by recent multi-modal UDA techniques, the proposed
method exploits a teacher-student self-supervised learning scheme to train a
pose estimation network without using target domain pose labels. We also
introduce a bidirectional filtering method between the predicted normalized
object coordinate space (NOCS) map and observed point cloud, to not only make
our teacher network more robust to the target domain but also to provide more
reliable pseudo labels for the student network training. Extensive experimental
results demonstrate the effectiveness of our proposed method both
quantitatively and qualitatively. Notably, without leveraging target-domain GT
labels, our proposed method achieved comparable or sometimes superior
performance to existing methods that depend on the GT labels.
|
[
{
"version": "v1",
"created": "Wed, 24 Nov 2021 16:00:48 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 02:13:28 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Lee",
"Taeyeop",
""
],
[
"Lee",
"Byeong-Uk",
""
],
[
"Shin",
"Inkyu",
""
],
[
"Choe",
"Jaesung",
""
],
[
"Shin",
"Ukcheol",
""
],
[
"Kweon",
"In So",
""
],
[
"Yoon",
"Kuk-Jin",
""
]
] |
new_dataset
| 0.993072 |
2111.15222
|
Zhirong Ye
|
Zhirong Ye, Xiangdong Wang, Hong Liu, Yueliang Qian, Rui Tao, Long
Yan, Kazushige Ouchi
|
SP-SEDT: Self-supervised Pre-training for Sound Event Detection
Transformer
|
Submitted to interspeech 2022; added experiments for section 4
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, an event-based end-to-end model (SEDT) has been proposed for sound
event detection (SED) and achieves competitive performance. However, compared
with the frame-based model, it requires more training data with temporal
annotations to improve the localization ability. Synthetic data is an
alternative, but it suffers from a great domain gap with real recordings.
Inspired by the great success of UP-DETR in object detection, we propose to
self-supervisedly pre-train SEDT (SP-SEDT) by detecting random patches (only
cropped along the time axis). Experiments on the DCASE2019 task4 dataset show
the proposed SP-SEDT can outperform a fine-tuned frame-based model. An ablation
study is also conducted to investigate the impact of different loss functions
and patch size.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 09:14:07 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 10:27:59 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Ye",
"Zhirong",
""
],
[
"Wang",
"Xiangdong",
""
],
[
"Liu",
"Hong",
""
],
[
"Qian",
"Yueliang",
""
],
[
"Tao",
"Rui",
""
],
[
"Yan",
"Long",
""
],
[
"Ouchi",
"Kazushige",
""
]
] |
new_dataset
| 0.996505 |
2111.15491
|
Stefano Zorzi
|
Stefano Zorzi, Shabab Bazrafkan, Stefan Habenschuss, Friedrich
Fraundorfer
|
PolyWorld: Polygonal Building Extraction with Graph Neural Networks in
Satellite Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While most state-of-the-art instance segmentation methods produce binary
segmentation masks, geographic and cartographic applications typically require
precise vector polygons of extracted objects instead of rasterized output. This
paper introduces PolyWorld, a neural network that directly extracts building
vertices from an image and connects them correctly to create precise polygons.
The model predicts the connection strength between each pair of vertices using
a graph neural network and estimates the assignments by solving a
differentiable optimal transport problem. Moreover, the vertex positions are
optimized by minimizing a combined segmentation and polygonal angle difference
loss. PolyWorld significantly outperforms the state of the art in building
polygonization and achieves not only notable quantitative results, but also
produces visually pleasing building polygons. Code and trained weights are
publicly available at https://github.com/zorzi-s/PolyWorldPretrainedNetwork.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 15:23:17 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Dec 2021 13:44:22 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Apr 2022 10:46:37 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Zorzi",
"Stefano",
""
],
[
"Bazrafkan",
"Shabab",
""
],
[
"Habenschuss",
"Stefan",
""
],
[
"Fraundorfer",
"Friedrich",
""
]
] |
new_dataset
| 0.991467 |
2112.02244
|
Zhao Yang
|
Zhao Yang, Jiaqi Wang, Yansong Tang, Kai Chen, Hengshuang Zhao, Philip
H.S. Torr
|
LAVT: Language-Aware Vision Transformer for Referring Image Segmentation
|
CVPR 2022
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Referring image segmentation is a fundamental vision-language task that aims
to segment out an object referred to by a natural language expression from an
image. One of the key challenges behind this task is leveraging the referring
expression for highlighting relevant positions in the image. A paradigm for
tackling this problem is to leverage a powerful vision-language ("cross-modal")
decoder to fuse features independently extracted from a vision encoder and a
language encoder. Recent methods have made remarkable advancements in this
paradigm by exploiting Transformers as cross-modal decoders, concurrent to the
Transformer's overwhelming success in many other vision-language tasks.
Adopting a different approach in this work, we show that significantly better
cross-modal alignments can be achieved through the early fusion of linguistic
and visual features in intermediate layers of a vision Transformer encoder
network. By conducting cross-modal feature fusion in the visual feature
encoding stage, we can leverage the well-proven correlation modeling power of a
Transformer encoder for excavating helpful multi-modal context. This way,
accurate segmentation results are readily harvested with a light-weight mask
predictor. Without bells and whistles, our method surpasses the previous
state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by large margins.
|
[
{
"version": "v1",
"created": "Sat, 4 Dec 2021 04:53:35 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 21:42:27 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Yang",
"Zhao",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Tang",
"Yansong",
""
],
[
"Chen",
"Kai",
""
],
[
"Zhao",
"Hengshuang",
""
],
[
"Torr",
"Philip H. S.",
""
]
] |
new_dataset
| 0.963558 |
2203.08392
|
Yonggan Fu
|
Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, Yingyan Lin
|
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial
Perturbations?
|
Accepted at ICLR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision transformers (ViTs) have recently set off a new wave in neural
architecture design thanks to their record-breaking performance in various
vision tasks. In parallel, to fulfill the goal of deploying ViTs into
real-world vision applications, their robustness against potential malicious
attacks has gained increasing attention. In particular, recent works show that
ViTs are more robust against adversarial attacks as compared with convolutional
neural networks (CNNs), and conjecture that this is because ViTs focus more on
capturing global interactions among different input/feature patches, leading to
their improved robustness to local perturbations imposed by adversarial
attacks. In this work, we ask an intriguing question: "Under what kinds of
perturbations do ViTs become more vulnerable learners compared to CNNs?" Driven
by this question, we first conduct a comprehensive experiment regarding the
robustness of both ViTs and CNNs under various existing adversarial attacks to
understand the underlying reason favoring their robustness. Based on the drawn
insights, we then propose a dedicated attack framework, dubbed Patch-Fool, that
fools the self-attention mechanism by attacking its basic component (i.e., a
single patch) with a series of attention-aware optimization techniques.
Interestingly, our Patch-Fool framework shows for the first time that ViTs are
not necessarily more robust than CNNs against adversarial perturbations. In
particular, we find that ViTs are more vulnerable learners compared with CNNs
against our Patch-Fool attack which is consistent across extensive experiments,
and the observations from Sparse/Mild Patch-Fool, two variants of Patch-Fool,
indicate an intriguing insight that the perturbation density and strength on
each patch seem to be the key factors that influence the robustness ranking
between ViTs and CNNs.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 04:45:59 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 23:38:57 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Fu",
"Yonggan",
""
],
[
"Zhang",
"Shunyao",
""
],
[
"Wu",
"Shang",
""
],
[
"Wan",
"Cheng",
""
],
[
"Lin",
"Yingyan",
""
]
] |
new_dataset
| 0.999091 |
2204.01028
|
Wenqing Zhu
|
Wenqing Zhu, Norihiro Yoshida, Toshihiro Kamiya, Eunjong Choi, Hiroaki
Takada
|
MSCCD: Grammar Pluggable Clone Detection Based on ANTLR Parser
Generation
|
ICPC2022
| null |
10.1145/3524610.3529161
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
For various reasons, programming languages continue to multiply and evolve.
It has become necessary to have a multilingual clone detection tool that can
easily expand the set of supported programming languages and detect various code
clones. However, research on multilingual code clone detection has not received
sufficient attention. In this study, we propose MSCCD (Multilingual Syntactic
Code Clone Detector), a grammar pluggable code clone detection tool that uses a
parser generator to generate a code block extractor for the target language.
The extractor then extracts the semantic code blocks from a parse tree. MSCCD
can detect Type-3 clones at various granularities. We evaluated MSCCD's
language extensibility by applying MSCCD to 20 modern languages. Sixteen
languages were perfectly supported, and the remaining four were provided with
the same detection capabilities at the expense of execution time. We evaluated
MSCCD's recall by using BigCloneEval and conducted a manual experiment to
evaluate precision. MSCCD achieved detection performance equivalent to that of
state-of-the-art tools.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 08:31:07 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 06:27:40 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Zhu",
"Wenqing",
""
],
[
"Yoshida",
"Norihiro",
""
],
[
"Kamiya",
"Toshihiro",
""
],
[
"Choi",
"Eunjong",
""
],
[
"Takada",
"Hiroaki",
""
]
] |
new_dataset
| 0.967132 |
2204.01734
|
Ming Shan Hee
|
Ming Shan Hee, Roy Ka-Wei Lee, Wen-Haw Chong
|
On Explaining Multimodal Hateful Meme Detection Models
| null | null | null | null |
cs.CV cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Hateful meme detection is a new multimodal task that has gained significant
traction in academic and industry research communities. Recently, researchers
have applied pre-trained visual-linguistic models to perform the multimodal
classification task, and some of these solutions have yielded promising
results. However, what these visual-linguistic models learn for the hateful
meme classification task remains unclear. For instance, it is unclear if these
models are able to capture the derogatory or slur references in the multiple
modalities (i.e., image and text) of the hateful memes. To fill this research gap, this
paper proposes three research questions to improve our understanding of these
visual-linguistic models performing the hateful meme classification task. We
found that the image modality contributes more to the hateful meme
classification task, and the visual-linguistic models are able to perform
visual-text slurs grounding to a certain extent. Our error analysis also shows
that the visual-linguistic models have acquired biases, which resulted in
false-positive predictions.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 15:35:41 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 08:56:40 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Hee",
"Ming Shan",
""
],
[
"Lee",
"Roy Ka-Wei",
""
],
[
"Chong",
"Wen-Haw",
""
]
] |
new_dataset
| 0.989717 |
2204.02411
|
Yawar Siddiqui
|
Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias
Nie{\ss}ner, Angela Dai
|
Texturify: Generating Textures on 3D Shape Surfaces
|
Project Page: https://nihalsid.github.io/texturify
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Texture cues on 3D objects are key to compelling visual representations, with
the possibility to create high visual fidelity with inherent spatial
consistency across different views. Since the availability of textured 3D
shapes remains very limited, learning a 3D-supervised data-driven method that
predicts a texture based on the 3D input is very challenging. We thus propose
Texturify, a GAN-based method that leverages a 3D shape dataset of an object
class and learns to reproduce the distribution of appearances observed in real
images by generating high-quality textures. In particular, our method does not
require any 3D color supervision or correspondence between shape geometry and
images to learn the texturing of 3D objects. Texturify operates directly on the
surface of the 3D objects by introducing face convolutional operators on a
hierarchical 4-RoSy parametrization to generate plausible object-specific
textures. Employing differentiable rendering and adversarial losses that
critique individual views and consistency across views, we effectively learn
the high-quality surface texturing distribution from real-world images.
Experiments on car and chair shape collections show that our approach
outperforms state of the art by an average of 22% in FID score.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 18:00:04 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Siddiqui",
"Yawar",
""
],
[
"Thies",
"Justus",
""
],
[
"Ma",
"Fangchang",
""
],
[
"Shan",
"Qi",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Dai",
"Angela",
""
]
] |
new_dataset
| 0.999851 |
2204.02460
|
Patrick Lancaster
|
Patrick Lancaster, Christoforos Mavrogiannis, Siddhartha Srinivasa,
Joshua Smith
|
Electrostatic Brakes Enable Individual Joint Control of Underactuated,
Highly Articulated Robots
|
17 pages, 15 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Highly articulated organisms serve as blueprints for incredibly dexterous
mechanisms, but building similarly capable robotic counterparts has been
hindered by the difficulties of developing electromechanical actuators with
both the high strength and compactness of biological muscle. We develop a
stackable electrostatic brake that has comparable specific tension and weight
to that of muscles and integrate it into a robotic joint. Compared to
electromechanical motors, our brake-equipped joint is four times lighter and
one thousand times more power efficient while exerting similar holding torques.
Our joint design enables a ten degree-of-freedom robot equipped with only one
motor to manipulate multiple objects simultaneously. We also show that the use
of brakes allows a two-fingered robot to perform in-hand re-positioning of an
object 45% more quickly and with 53% lower positioning error than without
brakes. Relative to fully actuated robots, our findings suggest that robots
equipped with such electrostatic brakes will have lower weight, volume, and
power consumption yet retain the ability to reach arbitrary joint
configurations.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 19:31:57 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Lancaster",
"Patrick",
""
],
[
"Mavrogiannis",
"Christoforos",
""
],
[
"Srinivasa",
"Siddhartha",
""
],
[
"Smith",
"Joshua",
""
]
] |
new_dataset
| 0.998318 |
2204.02464
|
Stefan Bosse
|
Stefan Bosse
|
BeeTS: Smart Distributed Sensor Tuple Spaces combined with Agents using
Bluetooth and IP Broadcasting
| null | null | null | null |
cs.NI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most Internet-of-Things (IoT) devices and smart sensors are connected via the
Internet using IP communication and directly accessed by a server that collects
sensor information periodically or event-based. Although Internet access is
widely available, there are places that are not covered, and WLAN and mobile
cell communication requires a decent amount of power that is not always available.
Finally, the spatial context (the environment in which the sensor or device is
situated) is not considered (or is lost) by Internet connectivity. In this work,
smart devices communicate connectionless and ad-hoc by using low-energy
Bluetooth broadcasting available in any smartphone and in most embedded
computers, e.g., the Raspberry PI devices. Bi-directional connectionless
communication is established via the advertisements and scanning modes. The
communication nodes can exchange data via functional tuples using a tuple space
service on each node. Tuple space access is performed by simple evenat-based
agents. Mobile devices act as tuple carriers that can carry tuples between
different locations. Additionally, UDP-based Intranet communication can be used
to access tuple spaces on a wider spatial range. The Bluetooth Low Energy Tuple
Space (BeeTS) service enables opportunistic, ad-hoc and loosely coupled device
communication with a spatial context.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 19:47:21 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Bosse",
"Stefan",
""
]
] |
new_dataset
| 0.999116 |
2204.02475
|
Nathan Lepora
|
Nicholas Pestell and Nathan F. Lepora
|
Artificial SA-I, RA-I and RA-II/Vibrotactile Afferents for Tactile
Sensing of Texture
| null |
J. R. Soc. Interface 20210603 (2022)
|
10.1098/rsif.2021.0603
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Robot touch can benefit from how humans perceive tactile textural
information, from the stimulation mode to which tactile channels respond, then
the tactile cues and encoding. Using a soft biomimetic tactile sensor (the
TacTip) based on the physiology of the dermal-epidermal boundary, we construct
two biomimetic tactile channels based on slowly-adapting SA-I and
rapidly-adapting RA-I afferents, and introduce an additional sub-modality for
vibrotactile information with an embedded microphone interpreted as an
artificial RA-II channel. These artificial tactile channels are stimulated
dynamically with a set of 13 artificial rigid textures comprising raised-bump
patterns on a rotating drum that vary systematically in roughness. Methods
employing spatial, spatio-temporal and temporal codes are assessed for texture
classification insensitive to stimulation speed. We find: (i) spatially-encoded
frictional cues provide a salient representation of texture; (ii) a simple
transformation of spatial tactile features to model natural afferent responses
improves the temporal coding; and (iii) the harmonic structure of induced
vibrations provides a pertinent code for speed-invariant texture
classification. Just as human touch relies on an interplay between
slowly-adapting (SA-I), rapidly-adapting (RA-I) and vibrotactile (RA-II)
channels, this tripartite structure may be needed for future robot applications
with human-like dexterity, from prosthetics to materials testing, handling and
manipulation.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 20:18:38 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Pestell",
"Nicholas",
""
],
[
"Lepora",
"Nathan F.",
""
]
] |
new_dataset
| 0.984539 |
2204.02515
|
Jessy Lin
|
Jessy Lin, Daniel Fried, Dan Klein, Anca Dragan
|
Inferring Rewards from Language in Context
|
ACL 2022. Code and dataset:
https://github.com/jlin816/rewards-from-language
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In classic instruction following, language like "I'd like the JetBlue flight"
maps to actions (e.g., selecting that flight). However, language also conveys
information about a user's underlying reward function (e.g., a general
preference for JetBlue), which can allow a model to carry out desirable actions
in new contexts. We present a model that infers rewards from language
pragmatically: reasoning about how speakers choose utterances not only to
elicit desired actions, but also to reveal information about their preferences.
On a new interactive flight-booking task with natural language, our model more
accurately infers rewards and predicts optimal actions in unseen environments,
in comparison to past work that first maps language to actions (instruction
following) and then maps actions to rewards (inverse reinforcement learning).
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 23:04:18 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Lin",
"Jessy",
""
],
[
"Fried",
"Daniel",
""
],
[
"Klein",
"Dan",
""
],
[
"Dragan",
"Anca",
""
]
] |
new_dataset
| 0.995292 |
2204.02549
|
Yanran Li
|
Dawei Li and Yanran Li and Jiayi Zhang and Ke Li and Chen Wei and
Jianwei Cui and Bin Wang
|
C3KG: A Chinese Commonsense Conversation Knowledge Graph
|
Accepted by ACL 2022 Findings. Our code and data could be found in
https://github.com/XiaoMi/C3KG
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Existing commonsense knowledge bases often organize tuples in an isolated
manner, which is deficient for commonsense conversational models to plan the
next steps. To fill the gap, we curate a large-scale multi-turn human-written
conversation corpus, and create the first Chinese commonsense conversation
knowledge graph which incorporates both social commonsense knowledge and dialog
flow information. To show the potential of our graph, we develop a
graph-conversation matching approach, and benchmark two graph-grounded
conversational tasks.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 02:59:34 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Li",
"Dawei",
""
],
[
"Li",
"Yanran",
""
],
[
"Zhang",
"Jiayi",
""
],
[
"Li",
"Ke",
""
],
[
"Wei",
"Chen",
""
],
[
"Cui",
"Jianwei",
""
],
[
"Wang",
"Bin",
""
]
] |
new_dataset
| 0.999445 |
2204.02569
|
Xinchen Liu
|
Jinkai Zheng, Xinchen Liu, Wu Liu, Lingxiao He, Chenggang Yan, Tao Mei
|
Gait Recognition in the Wild with Dense 3D Representations and A
Benchmark
|
16 pages, 11 figures, CVPR 2022 accepted, project page:
https://gait3d.github.io/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing studies for gait recognition are dominated by 2D representations
like the silhouette or skeleton of the human body in constrained scenes.
However, humans live and walk in the unconstrained 3D space, so projecting the
3D human body onto the 2D plane will discard a lot of crucial information like
the viewpoint, shape, and dynamics for gait recognition. Therefore, this paper
aims to explore dense 3D representations for gait recognition in the wild,
which is a practical yet neglected problem. In particular, we propose a novel
framework to explore the 3D Skinned Multi-Person Linear (SMPL) model of the
human body for gait recognition, named SMPLGait. Our framework has two
elaborately-designed branches of which one extracts appearance features from
silhouettes, the other learns knowledge of 3D viewpoints and shapes from the 3D
SMPL model. In addition, due to the lack of suitable datasets, we build the
first large-scale 3D representation-based gait recognition dataset, named
Gait3D. It contains 4,000 subjects and over 25,000 sequences extracted from 39
cameras in an unconstrained indoor scene. More importantly, it provides 3D SMPL
models recovered from video frames which can provide dense 3D information of
body shape, viewpoint, and dynamics. Based on Gait3D, we comprehensively
compare our method with existing gait recognition approaches, which reflects
the superior performance of our framework and the potential of 3D
representations for gait recognition in the wild. The code and dataset are
available at https://gait3d.github.io.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 03:54:06 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Zheng",
"Jinkai",
""
],
[
"Liu",
"Xinchen",
""
],
[
"Liu",
"Wu",
""
],
[
"He",
"Lingxiao",
""
],
[
"Yan",
"Chenggang",
""
],
[
"Mei",
"Tao",
""
]
] |
new_dataset
| 0.999148 |
2204.02573
|
Anwesh Reddy Paduri
|
Narayana Darapaneni, Prashant Kumar, Nikhil Malhotra, Vigneswaran
Sundaramurthy, Abhaya Thakur, Shivam Chauhan, Krishna Chaitanya Thangeda,
Anwesh Reddy Paduri
|
Detecting key Soccer match events to create highlights using Computer
Vision
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The research and data science community has been fascinated with the
development of automatic systems for the detection of key events in a video.
Special attention in this field is given to sports video analytics which could
help in identifying key events during a match and help in preparing a strategy
for the games going forward. For this paper, we have chosen Football (soccer)
as the sport for which we want to create highlights of a given match video,
through a computer vision model that aims to identify important events in a
soccer match. We built models based on the Faster RCNN and YoloV5 architectures
and noticed that, for the amount of data we used for training, Faster RCNN did
better than YoloV5 in detecting the events in the match, though it was much
slower. Within Faster RCNN, using ResNet50 as the base model gave a better
class accuracy of 95.5% compared to 92% with VGG16 as the base model,
completely outperforming YoloV5 on our training dataset. We tested with an
original 23-minute video, and our model could reduce it to 4:50 minutes of
highlights, capturing almost all important events in the match.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 04:28:27 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Darapaneni",
"Narayana",
""
],
[
"Kumar",
"Prashant",
""
],
[
"Malhotra",
"Nikhil",
""
],
[
"Sundaramurthy",
"Vigneswaran",
""
],
[
"Thakur",
"Abhaya",
""
],
[
"Chauhan",
"Shivam",
""
],
[
"Thangeda",
"Krishna Chaitanya",
""
],
[
"Paduri",
"Anwesh Reddy",
""
]
] |
new_dataset
| 0.994124 |
2204.02624
|
Xueliang Zhao
|
Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan
|
There Are a Thousand Hamlets in a Thousand People's Eyes: Enhancing
Knowledge-grounded Dialogue with Personal Memory
|
Accepted by ACL 2022 (main conference). First two authors contributed
equally
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge-grounded conversation (KGC) shows great potential in building an
engaging and knowledgeable chatbot, and knowledge selection is a key ingredient
in it. However, previous methods for knowledge selection only concentrate on
the relevance between knowledge and dialogue context, ignoring the fact that
age, hobby, education and life experience of an interlocutor have a major
effect on his or her personal preference over external knowledge. Without
taking the personalization issue into account, it is difficult to select the
proper knowledge and generate persona-consistent responses. In this work, we
introduce personal memory into knowledge selection in KGC to address the
personalization issue. We propose a variational method to model the underlying
relationship between one's personal memory and his or her selection of
knowledge, and devise a learning scheme in which the forward mapping from
personal memory to knowledge and its inverse mapping are included in a closed
loop so that they can teach each other. Experimental results show that our
method significantly outperforms existing KGC methods on both automatic and
human evaluation.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 07:06:37 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Fu",
"Tingchen",
""
],
[
"Zhao",
"Xueliang",
""
],
[
"Tao",
"Chongyang",
""
],
[
"Wen",
"Ji-Rong",
""
],
[
"Yan",
"Rui",
""
]
] |
new_dataset
| 0.985885 |
2204.02632
|
Yuhao Zhou
|
Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv
|
DeFTA: A Plug-and-Play Decentralized Replacement for FedAvg
|
12 pages, 5 figures
| null | null | null |
cs.DC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated learning (FL) is identified as a crucial enabler for large-scale
distributed machine learning (ML) without the need for local raw dataset
sharing, substantially reducing privacy concerns and alleviating the isolated
data problem. In reality, the prosperity of FL is largely due to a centralized
framework called FedAvg, in which workers are in charge of model training and
servers are in control of model aggregation. However, FedAvg's centralized
worker-server architecture has raised new concerns, such as the low scalability
of the cluster, the risk of data leakage, and the failure or even defection of
the central server. To overcome these problems, we propose Decentralized
Federated Trusted Averaging (DeFTA), a decentralized FL framework that serves
as a plug-and-play replacement for FedAvg, instantly bringing better security,
scalability, and fault-tolerance to the federated learning process after
installation. In principle, it fundamentally resolves the above-mentioned
issues from an architectural perspective without compromises or tradeoffs,
primarily consisting of a new model aggregating formula with theoretical
performance analysis, and a decentralized trust system (DTS) to greatly improve
system robustness. Note that since DeFTA is an alternative to FedAvg at the
framework level, \textit{prevalent algorithms published for FedAvg can also be
utilized in DeFTA with ease}. Extensive experiments on six datasets and six
basic models suggest that DeFTA not only has comparable performance with FedAvg
in a more realistic setting, but also achieves great resilience even when 66%
of workers are malicious. Furthermore, we also present an asynchronous variant
of DeFTA to endow it with more powerful usability.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 07:20:31 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Zhou",
"Yuhao",
""
],
[
"Shi",
"Minjia",
""
],
[
"Tian",
"Yuxin",
""
],
[
"Ye",
"Qing",
""
],
[
"Lv",
"Jiancheng",
""
]
] |
new_dataset
| 0.973251 |
2204.02658
|
Yingwen Fu
|
Yingwen Fu and Jinyi Chen and Nankai Lin and Xixuan Huang and Xinying
Qiu and Shengyi Jiang
|
Yunshan Cup 2020: Overview of the Part-of-Speech Tagging Task for
Low-resourced Languages
| null | null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The Yunshan Cup 2020 track focused on creating a framework for evaluating
different methods of part-of-speech (POS) tagging. There were two tasks for
this track: (1) POS tagging for the Indonesian language, and (2) POS tagging
for the Lao language. The Indonesian dataset comprises 10,000 sentences from
Indonesian news annotated with 29 tags, and the Lao dataset consists of 8,000
sentences annotated with 27 tags. 25 teams registered for the task. The methods
of participants ranged from feature-based approaches to neural networks, using
either classical machine learning techniques or ensemble methods. The best
performing results achieve an accuracy of 95.82% for Indonesian and 93.03% for
Lao, showing that neural sequence labeling models significantly outperform
classic feature-based and rule-based methods.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 08:16:22 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Fu",
"Yingwen",
""
],
[
"Chen",
"Jinyi",
""
],
[
"Lin",
"Nankai",
""
],
[
"Huang",
"Xixuan",
""
],
[
"Qiu",
"Xinying",
""
],
[
"Jiang",
"Shengyi",
""
]
] |
new_dataset
| 0.999515 |
2204.02659
|
Zhumin Chu
|
Zhumin Chu, Zhihong Wang, Yiqun Liu, Yingye Huang, Min Zhang, Shaoping
Ma
|
ConvSearch: An Open-Domain Conversational Search Behavior Dataset
|
10 pages
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conversational search has received much attention recently with the increasing
popularity of intelligent user interfaces. However, compared with the endeavour
in designing effective conversational search algorithms, relatively few
researchers have focused on the construction of benchmark datasets. For most
existing datasets, the information needs are defined by researchers and the
search requests are not proposed by actual users. Meanwhile, these datasets
usually focus on the conversations between users and agents (systems), while
largely ignoring the search behaviors of agents before they return a response
to users. To overcome these problems, we construct a Chinese Open-Domain
Conversational Search Behavior Dataset (ConvSearch) based on the Wizard-of-Oz
paradigm in a field study scenario. We develop a novel conversational search
platform to collect dialogue contents, annotate dialogue quality and candidate
search results, and record agent search behaviors. 25 search agents and 51
users were recruited for the field study, which lasted about 45 days. The
ConvSearch dataset contains 1,131 dialogues together with annotated search
results and corresponding search behaviors. We also provide the intent labels
of each search behavior iteration to support intent understanding related
research. The dataset is already open to the public for academic use.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 08:20:51 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Chu",
"Zhumin",
""
],
[
"Wang",
"Zhihong",
""
],
[
"Liu",
"Yiqun",
""
],
[
"Huang",
"Yingye",
""
],
[
"Zhang",
"Min",
""
],
[
"Ma",
"Shaoping",
""
]
] |
new_dataset
| 0.997059 |
2204.02688
|
Shimin Chen
|
Shimin Chen, Wei Li, Chen Chen, Jianyang Gu, Jiaming Chu, Xunqiang
Tao, Yandong Guo
|
SEAL: A Large-scale Video Dataset of Multi-grained Spatio-temporally
Action Localization
|
17 pages,6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In spite of many dataset efforts for human action recognition, current
computer vision algorithms are still limited to coarse-grained spatial and
temporal annotations of human daily life. In this paper, we introduce a
novel large-scale video dataset dubbed SEAL for multi-grained Spatio-tEmporal
Action Localization. SEAL consists of two kinds of annotations, SEAL Tubes and
SEAL Clips. We observe that atomic actions can be combined into many complex
activities. SEAL Tubes provide both atomic action and complex activity
annotations at the tubelet level, producing 49.6k atomic actions spanning 172
action categories and 17.7k complex activities spanning 200 activity
categories. SEAL Clips localizes atomic actions in space during two-second
clips, producing 510.4k action labels with multiple labels per person.
Extensive experimental results show that SEAL significantly helps to advance
video understanding.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 09:27:52 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Chen",
"Shimin",
""
],
[
"Li",
"Wei",
""
],
[
"Chen",
"Chen",
""
],
[
"Gu",
"Jianyang",
""
],
[
"Chu",
"Jiaming",
""
],
[
"Tao",
"Xunqiang",
""
],
[
"Guo",
"Yandong",
""
]
] |
new_dataset
| 0.999801 |
2204.02712
|
Udo Kruschwitz
|
Miriam Schirmer, Udo Kruschwitz, Gregor Donabauer
|
A New Dataset for Topic-Based Paragraph Classification in
Genocide-Related Court Transcripts
|
Preprint. Accepted to appear in Proceedings of LREC 2022
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Recent progress in natural language processing has been impressive in many
different areas with transformer-based approaches setting new benchmarks for a
wide range of applications. This development has also lowered the barriers for
people outside the NLP community to tap into the tools and resources applied to
a variety of domain-specific applications. The bottleneck however still remains
the lack of annotated gold-standard collections as soon as one's research or
professional interest falls outside the scope of what is readily available. One
such area is genocide-related research (also including the work of experts who
have a professional interest in accessing, exploring and searching large-scale
document collections on the topic, such as lawyers). We present GTC (Genocide
Transcript Corpus), the first annotated corpus of genocide-related court
transcripts which serves three purposes: (1) to provide a first reference
corpus for the community, (2) to establish benchmark performances (using
state-of-the-art transformer-based approaches) for the new classification task
of paragraph identification of violence-related witness statements, (3) to
explore first steps towards transfer learning within the domain. We consider
our contribution to be addressing in particular this year's hot topic on
Language Technology for All.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 10:24:19 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Schirmer",
"Miriam",
""
],
[
"Kruschwitz",
"Udo",
""
],
[
"Donabauer",
"Gregor",
""
]
] |
new_dataset
| 0.999 |
2204.02718
|
Taichi Murayama
|
Taichi Murayama, Shohei Hisada, Makoto Uehara, Shoko Wakamiya, Eiji
Aramaki
|
Annotation-Scheme Reconstruction for "Fake News" and Japanese Fake News
Dataset
|
13th International Conference on Language Resources and Evaluation
(LREC), 2022
| null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fake news provokes many societal problems; therefore, there has been
extensive research on fake news detection tasks to counter it. Many fake news
datasets were constructed as resources to facilitate this task. Contemporary
research focuses almost exclusively on the factuality aspect of the news.
However, this aspect alone is insufficient to explain "fake news," which is a
complex phenomenon that involves a wide range of issues. To fully understand
the nature of each instance of fake news, it is important to observe it from
various perspectives, such as the intention of the false news disseminator, the
harmfulness of the news to our society, and the target of the news. We propose
a novel annotation scheme with fine-grained labeling based on detailed
investigations of existing fake news datasets to capture these various aspects
of fake news. Using the annotation scheme, we construct and publish the first
Japanese fake news dataset. The annotation scheme is expected to provide an
in-depth understanding of fake news. We plan to build datasets for both
Japanese and other languages using our scheme. Our Japanese dataset is
published at https://hkefka385.github.io/dataset/fakenews-japanese/.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 10:42:39 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Murayama",
"Taichi",
""
],
[
"Hisada",
"Shohei",
""
],
[
"Uehara",
"Makoto",
""
],
[
"Wakamiya",
"Shoko",
""
],
[
"Aramaki",
"Eiji",
""
]
] |
new_dataset
| 0.999316 |
2204.02739
|
Csaba Gy\"orgyi
|
Csaba Gy\"orgyi (1 and 2), S\'andor Laki (1), Stefan Schmid (2 and 3)
((1) E\"otv\"os Lor\'and University, Hungary, (2) University of Vienna,
Austria, (3) TU Berlin, Germany)
|
P4RROT: Generating P4 Code for the Application Layer
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Throughput- and latency-critical applications can often benefit from performing
computations close to the client. To enable this, distributed computing
paradigms such as edge computing have recently emerged. However, with the
advent of programmable data planes, computations can not only be performed by
servers but can also be offloaded to network switches. Languages like P4 make
it possible to flexibly reprogram the entire packet processing pipeline. Though
these devices promise high throughput and ultra-low response times,
implementing application-layer tasks in the data plane programming language P4
is still challenging for an application developer who is not familiar with the
networking domain. In this paper, we first identify and examine obstacles and
pain points one can experience when offloading server-based computations to the
network. Then we present P4RROT, a code generator (in the form of a library)
that allows these limitations to be overcome by providing a user-friendly API
to describe the computations to be offloaded. After discussing the design
choices behind P4RROT, we introduce our proof-of-concept implementation for two
P4 targets: Netronome SmartNIC and BMv2.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 11:32:47 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Györgyi",
"Csaba",
"",
"1 and 2"
],
[
"Laki",
"Sándor",
"",
"2 and 3"
],
[
"Schmid",
"Stefan",
"",
"2 and 3"
]
] |
new_dataset
| 0.997741 |
2204.02777
|
Jan Philipp Portisch
|
Jan Portisch, Heiko Paulheim
|
Walk this Way! Entity Walks and Property Walks for RDF2vec
|
accepted at the ESWC Posters and Demos Track
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
RDF2vec is a knowledge graph embedding mechanism which first extracts
sequences from knowledge graphs by performing random walks, then feeds those
into the word embedding algorithm word2vec for computing vector representations
for entities. In this poster, we introduce two new flavors of walk extraction
coined e-walks and p-walks, which put an emphasis on the structure or the
neighborhood of an entity respectively, and thereby allow for creating
embeddings which focus on similarity or relatedness. By combining the walk
strategies with order-aware and classic RDF2vec, as well as CBOW and skip-gram
word2vec embeddings, we conduct a preliminary evaluation with a total of 12
RDF2vec variants.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 14:51:34 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Portisch",
"Jan",
""
],
[
"Paulheim",
"Heiko",
""
]
] |
new_dataset
| 0.974224 |
2204.02814
|
Ritesh Kumar
|
Ritesh Kumar, Atul Kr. Ojha, Bornini Lahiri, Chingrimnng Lungleng
|
Aggression in Hindi and English Speech: Acoustic Correlates and
Automatic Identification
|
To appear in the Proceedings of Conference on Sanskrit and Indian
Languages: Technology
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the present paper, we will present the results of an acoustic analysis of
political discourse in Hindi and discuss some of the conventionalised acoustic
features of aggressive speech regularly employed by the speakers of Hindi and
English. The study is based on a corpus of slightly over 10 hours of political
discourse and includes debates on news channels and political speeches. Using
this study, we develop two automatic classification systems for identifying
aggression in English and Hindi speech, based solely on an acoustic model. The
Hindi classifier, trained using 50 hours of annotated speech, and the English
classifier, trained using 40 hours of annotated speech, achieve respectable
accuracies of over 73% and 66%, respectively. In this paper, we discuss the
development of this annotated dataset and the experiments for developing the
classifiers, and discuss the errors that they make.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 13:29:25 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Kumar",
"Ritesh",
""
],
[
"Ojha",
"Atul Kr.",
""
],
[
"Lahiri",
"Bornini",
""
],
[
"Lungleng",
"Chingrimnng",
""
]
] |
new_dataset
| 0.988756 |
2204.02822
|
Ritesh Kumar
|
Ritesh Kumar, Bornini Lahiri
|
Language Resources and Technologies for Non-Scheduled and Endangered
Indian Languages
|
To appear in Proceedings of Conference on Sanskrit and Indian
Languages: Technology
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the present paper, we will present a survey of the language resources and
technologies available for the non-scheduled and endangered languages of India.
While there have been different estimates from different sources about the
number of languages in India, it could be assumed that there are more than
1,000 languages currently being spoken in India. However, barring some of the 22
languages included in the 8th Schedule of the Indian Constitution (called the
scheduled languages), there is hardly any substantial resource or technology
available for the rest of the languages. Nonetheless, there have been some
individual attempts at developing resources and technologies for the different
languages across the country. Of late, some financial support has also become
available for the endangered languages. In this paper, we give a summary of the
resources and technologies for those Indian languages which are not included in
the 8th schedule of the Indian Constitution and/or which are endangered.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 13:33:24 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Kumar",
"Ritesh",
""
],
[
"Lahiri",
"Bornini",
""
]
] |
new_dataset
| 0.996574 |
2204.02905
|
Vil\'em Zouhar
|
Sunit Bhattacharya, V\v{e}ra Kloudov\'a, Vil\'em Zouhar, Ond\v{r}ej
Bojar
|
EMMT: A simultaneous eye-tracking, 4-electrode EEG and audio corpus for
multi-modal reading and translation scenarios
|
Submitted to Nature Scientific Data
| null | null | null |
cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Eyetracked Multi-Modal Translation (EMMT) corpus, a dataset
containing monocular eye movement recordings, audio and 4-electrode
electroencephalogram (EEG) data of 43 participants. The objective was to
collect cognitive signals as responses of participants engaged in a number of
language intensive tasks involving different text-image stimuli settings when
translating from English to Czech.
Each participant was exposed to 32 text-image stimuli pairs and asked to (1)
read the English sentence, (2) translate it into Czech, (3) consult the image,
(4) translate again, either updating or repeating the previous translation. The
text stimuli consisted of 200 unique sentences with 616 unique words coupled
with 200 unique images as the visual stimuli.
The recordings were collected over a two week period and all the participants
included in the study were Czech natives with strong English skills. Due to the
nature of the tasks involved in the study and the relatively large number of
participants involved, the corpus is well suited for research in Translation
Process Studies and Cognitive Sciences, among other disciplines.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 15:47:55 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Bhattacharya",
"Sunit",
""
],
[
"Kloudová",
"Věra",
""
],
[
"Zouhar",
"Vilém",
""
],
[
"Bojar",
"Ondřej",
""
]
] |
new_dataset
| 0.999766 |
2204.02944
|
Avishkar Saha
|
Avishkar Saha, Oscar Mendez, Chris Russell, Richard Bowden
|
"The Pedestrian next to the Lamppost" Adaptive Object Graphs for Better
Instantaneous Mapping
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Estimating a semantically segmented bird's-eye-view (BEV) map from a single
image has become a popular technique for autonomous control and navigation.
However, these methods show an increase in localization error with distance from the
camera. While such an increase in error is entirely expected - localization is
harder at distance - much of the drop in performance can be attributed to the
cues used by current texture-based models, in particular, they make heavy use
of object-ground intersections (such as shadows), which become increasingly
sparse and uncertain for distant objects. In this work, we address these
shortcomings in BEV-mapping by learning the spatial relationship between
objects in a scene. We propose a graph neural network which predicts BEV
objects from a monocular image by spatially reasoning about an object within
the context of other objects. Our approach sets a new state-of-the-art in BEV
estimation from monocular images across three large-scale datasets, including a
50% relative improvement for objects on nuScenes.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 17:23:13 GMT"
}
] | 2022-04-07T00:00:00 |
[
[
"Saha",
"Avishkar",
""
],
[
"Mendez",
"Oscar",
""
],
[
"Russell",
"Chris",
""
],
[
"Bowden",
"Richard",
""
]
] |
new_dataset
| 0.994531 |
1907.05538
|
Weiying Wang
|
Weiying Wang, Ninad Jadhav, Paul Vohs, Nathan Hughes, Mark Mazumder,
and Stephanie Gil
|
Active Rendezvous for Multi-Robot Pose Graph Optimization using Sensing
over Wi-Fi
| null |
International Symposium on Robotics Research (ISRR), Hanoi, 2019
|
10.1007/978-3-030-95459-8_51
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel framework for collaboration amongst a team of robots
performing Pose Graph Optimization (PGO) that addresses two important
challenges for multi-robot SLAM: i) that of enabling information exchange
"on-demand" via Active Rendezvous without using a map or the robot's location,
and ii) that of rejecting outlying measurements. Our key insight is to exploit
relative position data present in the communication channel between robots to
improve groundtruth accuracy of PGO. We develop an algorithmic and experimental
framework for integrating Channel State Information (CSI) with multi-robot PGO;
it is distributed, and applicable in low-lighting or featureless environments
where traditional sensors often fail. We present extensive experimental results
on actual robots and observe that using Active Rendezvous results in a 64%
reduction in ground truth pose error and that using CSI observations to aid
outlier rejection reduces ground truth pose error by 32%. These results show
the potential of integrating communication as a novel sensor for SLAM.
|
[
{
"version": "v1",
"created": "Fri, 12 Jul 2019 01:07:39 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Dec 2019 02:26:24 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Nov 2021 21:07:19 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Wang",
"Weiying",
""
],
[
"Jadhav",
"Ninad",
""
],
[
"Vohs",
"Paul",
""
],
[
"Hughes",
"Nathan",
""
],
[
"Mazumder",
"Mark",
""
],
[
"Gil",
"Stephanie",
""
]
] |
new_dataset
| 0.987268 |
1912.04616
|
Matthias Samwald
|
Anna Breit, Simon Ott, Asan Agibetov, Matthias Samwald
|
OpenBioLink: A benchmarking framework for large-scale biomedical link
prediction
| null |
Bioinformatics, Volume 36, Issue 13, July 2020
|
10.1093/bioinformatics/btaa274
| null |
cs.AI cs.IR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
SUMMARY: Recently, novel machine-learning algorithms have shown potential for
predicting undiscovered links in biomedical knowledge networks. However,
dedicated benchmarks for measuring algorithmic progress have not yet emerged.
With OpenBioLink, we introduce a large-scale, high-quality and highly
challenging biomedical link prediction benchmark to transparently and
reproducibly evaluate such algorithms. Furthermore, we present preliminary
baseline evaluation results. AVAILABILITY AND IMPLEMENTATION: Source code, data
and supplementary files are openly available at
https://github.com/OpenBioLink/OpenBioLink CONTACT: matthias.samwald ((at))
meduniwien.ac.at
|
[
{
"version": "v1",
"created": "Tue, 10 Dec 2019 10:26:13 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2020 14:53:06 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Breit",
"Anna",
""
],
[
"Ott",
"Simon",
""
],
[
"Agibetov",
"Asan",
""
],
[
"Samwald",
"Matthias",
""
]
] |
new_dataset
| 0.991912 |
2001.09193
|
Anjany Kumar Sekuboyina
|
Anjany Sekuboyina, Malek E. Husseini, Amirhossein Bayat, Maximilian
L\"offler, Hans Liebl, Hongwei Li, Giles Tetteh, Jan Kuka\v{c}ka, Christian
Payer, Darko \v{S}tern, Martin Urschler, Maodong Chen, Dalong Cheng, Nikolas
Lessmann, Yujin Hu, Tianfu Wang, Dong Yang, Daguang Xu, Felix Ambellan, Tamaz
Amiranashvili, Moritz Ehlke, Hans Lamecker, Sebastian Lehnert, Marilia Lirio,
Nicol\'as P\'erez de Olaguer, Heiko Ramm, Manish Sahu, Alexander Tack, Stefan
Zachow, Tao Jiang, Xinjun Ma, Christoph Angerman, Xin Wang, Kevin Brown,
Alexandre Kirszenberg, \'Elodie Puybareau, Di Chen, Yiwei Bai, Brandon H.
Rapazzo, Timyoas Yeah, Amber Zhang, Shangliang Xu, Feng Hou, Zhiqiang He,
Chan Zeng, Zheng Xiangshang, Xu Liming, Tucker J. Netherton, Raymond P.
Mumme, Laurence E. Court, Zixun Huang, Chenhang He, Li-Wen Wang, Sai Ho Ling,
L\^e Duy Hu\`ynh, Nicolas Boutry, Roman Jakubicek, Jiri Chmelik, Supriti
Mulay, Mohanasankar Sivaprakasam, Johannes C. Paetzold, Suprosanna Shit, Ivan
Ezhov, Benedikt Wiestler, Ben Glocker, Alexander Valentinitsch, Markus
Rempfler, Bj\"orn H. Menze and Jan S. Kirschke
|
VerSe: A Vertebrae Labelling and Segmentation Benchmark for
Multi-detector CT Images
|
Challenge report for the VerSe 2019 and 2020. Published in Medical
Image Analysis (DOI: https://doi.org/10.1016/j.media.2021.102166)
|
Medical Image Analysis, Volume 73, October 2021, 102166
|
10.1016/j.media.2021.102166
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vertebral labelling and segmentation are two fundamental tasks in an
automated spine processing pipeline. Reliable and accurate processing of spine
images is expected to benefit clinical decision-support systems for diagnosis,
surgery planning, and population-based analysis on spine and bone health.
However, designing automated algorithms for spine processing is challenging
predominantly due to considerable variations in anatomy and acquisition
protocols and due to a severe shortage of publicly available data. Addressing
these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was
organised in conjunction with the International Conference on Medical Image
Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a
call for algorithms towards labelling and segmentation of vertebrae. Two
datasets containing a total of 374 multi-detector CT scans from 355 patients
were prepared and 4505 vertebrae have individually been annotated at
voxel-level by a human-machine hybrid algorithm (https://osf.io/nqjyw/,
https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these
datasets. In this work, we present the results of this evaluation and
further investigate the performance-variation at vertebra-level, scan-level,
and at different fields-of-view. We also evaluate the generalisability of the
approaches to an implicit domain shift in data by evaluating the top performing
algorithms of one challenge iteration on data from the other iteration. The
principal takeaway from VerSe: the performance of an algorithm in labelling and
segmenting a spine scan hinges on its ability to correctly identify vertebrae
in cases of rare anatomical variations. The content and code concerning VerSe
can be accessed at: https://github.com/anjany/verse.
|
[
{
"version": "v1",
"created": "Fri, 24 Jan 2020 21:09:18 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Jun 2020 16:41:14 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Dec 2020 10:36:03 GMT"
},
{
"version": "v4",
"created": "Mon, 22 Mar 2021 16:58:59 GMT"
},
{
"version": "v5",
"created": "Fri, 30 Jul 2021 12:58:27 GMT"
},
{
"version": "v6",
"created": "Tue, 5 Apr 2022 08:17:55 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Sekuboyina",
"Anjany",
""
],
[
"Husseini",
"Malek E.",
""
],
[
"Bayat",
"Amirhossein",
""
],
[
"Löffler",
"Maximilian",
""
],
[
"Liebl",
"Hans",
""
],
[
"Li",
"Hongwei",
""
],
[
"Tetteh",
"Giles",
""
],
[
"Kukačka",
"Jan",
""
],
[
"Payer",
"Christian",
""
],
[
"Štern",
"Darko",
""
],
[
"Urschler",
"Martin",
""
],
[
"Chen",
"Maodong",
""
],
[
"Cheng",
"Dalong",
""
],
[
"Lessmann",
"Nikolas",
""
],
[
"Hu",
"Yujin",
""
],
[
"Wang",
"Tianfu",
""
],
[
"Yang",
"Dong",
""
],
[
"Xu",
"Daguang",
""
],
[
"Ambellan",
"Felix",
""
],
[
"Amiranashvili",
"Tamaz",
""
],
[
"Ehlke",
"Moritz",
""
],
[
"Lamecker",
"Hans",
""
],
[
"Lehnert",
"Sebastian",
""
],
[
"Lirio",
"Marilia",
""
],
[
"de Olaguer",
"Nicolás Pérez",
""
],
[
"Ramm",
"Heiko",
""
],
[
"Sahu",
"Manish",
""
],
[
"Tack",
"Alexander",
""
],
[
"Zachow",
"Stefan",
""
],
[
"Jiang",
"Tao",
""
],
[
"Ma",
"Xinjun",
""
],
[
"Angerman",
"Christoph",
""
],
[
"Wang",
"Xin",
""
],
[
"Brown",
"Kevin",
""
],
[
"Kirszenberg",
"Alexandre",
""
],
[
"Puybareau",
"Élodie",
""
],
[
"Chen",
"Di",
""
],
[
"Bai",
"Yiwei",
""
],
[
"Rapazzo",
"Brandon H.",
""
],
[
"Yeah",
"Timyoas",
""
],
[
"Zhang",
"Amber",
""
],
[
"Xu",
"Shangliang",
""
],
[
"Hou",
"Feng",
""
],
[
"He",
"Zhiqiang",
""
],
[
"Zeng",
"Chan",
""
],
[
"Xiangshang",
"Zheng",
""
],
[
"Liming",
"Xu",
""
],
[
"Netherton",
"Tucker J.",
""
],
[
"Mumme",
"Raymond P.",
""
],
[
"Court",
"Laurence E.",
""
],
[
"Huang",
"Zixun",
""
],
[
"He",
"Chenhang",
""
],
[
"Wang",
"Li-Wen",
""
],
[
"Ling",
"Sai Ho",
""
],
[
"Huynh",
"Lê Duy",
""
],
[
"Boutry",
"Nicolas",
""
],
[
"Jakubicek",
"Roman",
""
],
[
"Chmelik",
"Jiri",
""
],
[
"Mulay",
"Supriti",
""
],
[
"Sivaprakasam",
"Mohanasankar",
""
],
[
"Paetzold",
"Johannes C.",
""
],
[
"Shit",
"Suprosanna",
""
],
[
"Ezhov",
"Ivan",
""
],
[
"Wiestler",
"Benedikt",
""
],
[
"Glocker",
"Ben",
""
],
[
"Valentinitsch",
"Alexander",
""
],
[
"Rempfler",
"Markus",
""
],
[
"Menze",
"Björn H.",
""
],
[
"Kirschke",
"Jan S.",
""
]
] |
new_dataset
| 0.999829 |
2006.11561
|
Aviv Rosenberg
|
Aviv Rosenberg and Yishay Mansour
|
Stochastic Shortest Path with Adversarially Changing Costs
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stochastic shortest path (SSP) is a well-known problem in planning and
control, in which an agent has to reach a goal state in minimum total expected
cost. In this paper we present the adversarial SSP model that also accounts for
adversarial changes in the costs over time, while the underlying transition
function remains unchanged. Formally, an agent interacts with an SSP
environment for $K$ episodes, the cost function changes arbitrarily between
episodes, and the transitions are unknown to the agent. We develop the first
algorithms for adversarial SSPs and prove high probability regret bounds of
$\widetilde O (\sqrt{K})$ assuming all costs are strictly positive, and
$\widetilde O (K^{3/4})$ in the general case. We are the first to consider this
natural setting of adversarial SSP and obtain sub-linear regret for it.
|
[
{
"version": "v1",
"created": "Sat, 20 Jun 2020 12:10:35 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Nov 2020 13:19:00 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Apr 2021 14:15:34 GMT"
},
{
"version": "v4",
"created": "Tue, 5 Apr 2022 10:29:29 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Rosenberg",
"Aviv",
""
],
[
"Mansour",
"Yishay",
""
]
] |
new_dataset
| 0.993886 |
2010.07237
|
Raji Ghawi
|
Wienke Strathern, Mirco Schoenfeld, Raji Ghawi, Juergen Pfeffer
|
Against the Others! Detecting Moral Outrage in Social Media Networks
| null | null |
10.1109/ASONAM49781.2020.9381415
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online firestorms on Twitter are seemingly arbitrarily occurring outrages
towards people, companies, media campaigns and politicians. Moral outrages can
create an excessive collective aggressiveness against one single argument, one
single word, or one action of a person resulting in hateful speech. With a
collective "against the others" the negative dynamics often start. Using data
from Twitter, we explored the starting points of several firestorm outbreaks.
As a social media platform with hundreds of millions of users interacting in
real-time on topics and events all over the world, Twitter serves as a social
sensor for online discussions and is known for quick and often emotional
disputes. The main question we pose in this article is whether we can detect
the outbreak of a firestorm. Given 21 online firestorms on Twitter, the key
questions regarding the anomaly detection are: 1) How can we detect the change
point? 2) How can we distinguish the features that cause a moral outrage? In
this paper we examine these challenges by developing a method to detect the
point of change that systematically focuses on linguistic cues in
tweets. We are able to detect outbreaks of firestorms early and precisely only
by applying linguistic cues. The results of our work can help detect negative
dynamics and may have the potential for individuals, companies, and governments
to mitigate hate in social media networks.
|
[
{
"version": "v1",
"created": "Wed, 14 Oct 2020 16:53:35 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Strathern",
"Wienke",
""
],
[
"Schoenfeld",
"Mirco",
""
],
[
"Ghawi",
"Raji",
""
],
[
"Pfeffer",
"Juergen",
""
]
] |
new_dataset
| 0.981209 |
2106.06604
|
Mario Gleirscher
|
Mario Gleirscher, Radu Calinescu, James Douthwaite, Benjamin Lesage,
Colin Paterson, Jonathan Aitken, Rob Alexander, James Law
|
Verified Synthesis of Optimal Safety Controllers for Human-Robot
Collaboration
|
34 pages, 31 figures
| null |
10.1016/j.scico.2022.102809
| null |
cs.RO cs.HC cs.SE cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a tool-supported approach for the synthesis, verification and
validation of the control software responsible for the safety of the
human-robot interaction in manufacturing processes that use collaborative
robots. In human-robot collaboration, software-based safety controllers are
used to improve operational safety, e.g., by triggering shutdown mechanisms or
emergency stops to avoid accidents. Complex robotic tasks and increasingly
close human-robot interaction pose new challenges to controller developers and
certification authorities. Key among these challenges is the need to assure the
correctness of safety controllers under explicit (and preferably weak)
assumptions. Our controller synthesis, verification and validation approach is
informed by the process, risk analysis, and relevant safety regulations for the
target application. Controllers are selected from a design space of feasible
controllers according to a set of optimality criteria, are formally verified
against correctness criteria, and are translated into executable code and
validated in a digital twin. The resulting controller can detect the occurrence
of hazards, move the process into a safe state, and, in certain circumstances,
return the process to an operational state from which it can resume its
original task. We show the effectiveness of our software engineering approach
through a case study involving the development of a safety controller for a
manufacturing work cell equipped with a collaborative robot.
|
[
{
"version": "v1",
"created": "Fri, 11 Jun 2021 20:38:40 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Gleirscher",
"Mario",
""
],
[
"Calinescu",
"Radu",
""
],
[
"Douthwaite",
"James",
""
],
[
"Lesage",
"Benjamin",
""
],
[
"Paterson",
"Colin",
""
],
[
"Aitken",
"Jonathan",
""
],
[
"Alexander",
"Rob",
""
],
[
"Law",
"James",
""
]
] |
new_dataset
| 0.999246 |
2107.07253
|
Asier Guti\'errez-Fandi\~no
|
Asier Guti\'errez-Fandi\~no, Jordi Armengol-Estap\'e, Marc P\`amies,
Joan Llop-Palao, Joaqu\'in Silveira-Ocampo, Casimiro Pio Carrino, Aitor
Gonzalez-Agirre, Carme Armentano-Oller, Carlos Rodriguez-Penagos, Marta
Villegas
|
MarIA: Spanish Language Models
| null |
Procesamiento del Lenguaje Natural, v. 68, p. 39-60, mar. 2022.
ISSN 1989-7553
|
10.26342/2022-68-3
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents MarIA, a family of Spanish language models and associated
resources made available to the industry and the research community. Currently,
MarIA includes RoBERTa-base, RoBERTa-large, GPT2 and GPT2-large Spanish
language models, which can arguably be presented as the largest and most
proficient language models in Spanish. The models were pretrained using a
massive corpus of 570GB of clean and deduplicated texts with 135 billion words
extracted from the Spanish Web Archive crawled by the National Library of Spain
between 2009 and 2019. We assessed the performance of the models with nine
existing evaluation datasets and with a novel extractive Question Answering
dataset created ex novo. Overall, MarIA models outperform the existing Spanish
models across a variety of NLU tasks and training settings.
|
[
{
"version": "v1",
"created": "Thu, 15 Jul 2021 11:23:05 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Aug 2021 13:47:44 GMT"
},
{
"version": "v3",
"created": "Fri, 1 Apr 2022 13:03:32 GMT"
},
{
"version": "v4",
"created": "Mon, 4 Apr 2022 16:25:12 GMT"
},
{
"version": "v5",
"created": "Tue, 5 Apr 2022 11:13:46 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Gutiérrez-Fandiño",
"Asier",
""
],
[
"Armengol-Estapé",
"Jordi",
""
],
[
"Pàmies",
"Marc",
""
],
[
"Llop-Palao",
"Joan",
""
],
[
"Silveira-Ocampo",
"Joaquín",
""
],
[
"Carrino",
"Casimiro Pio",
""
],
[
"Gonzalez-Agirre",
"Aitor",
""
],
[
"Armentano-Oller",
"Carme",
""
],
[
"Rodriguez-Penagos",
"Carlos",
""
],
[
"Villegas",
"Marta",
""
]
] |
new_dataset
| 0.997444 |
2108.04539
|
Teakgyu Hong
|
Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and
Sungrae Park
|
BROS: A Pre-trained Language Model Focusing on Text and Layout for
Better Key Information Extraction from Documents
|
AAAI 2022 - Main Technical Track
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Key information extraction (KIE) from document images requires understanding
the contextual and spatial semantics of texts in two-dimensional (2D) space.
Many recent studies try to solve the task by developing pre-trained language
models focusing on combining visual features from document images with texts
and their layout. On the other hand, this paper tackles the problem by going
back to the basics: the effective combination of text and layout. Specifically, we
propose a pre-trained language model, named BROS (BERT Relying On Spatiality),
that encodes relative positions of texts in 2D space and learns from unlabeled
documents with area-masking strategy. With this optimized training scheme for
understanding texts in 2D space, BROS shows comparable or better performance
compared to previous methods on four KIE benchmarks (FUNSD, SROIE*, CORD, and
SciTSR) without relying on visual features. This paper also reveals two
real-world challenges in KIE tasks-(1) minimizing the error from incorrect text
ordering and (2) efficient learning from fewer downstream examples-and
demonstrates the superiority of BROS over previous methods. Code is available
at https://github.com/clovaai/bros.
|
[
{
"version": "v1",
"created": "Tue, 10 Aug 2021 09:30:23 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Aug 2021 03:22:35 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Sep 2021 05:58:10 GMT"
},
{
"version": "v4",
"created": "Fri, 10 Sep 2021 10:51:20 GMT"
},
{
"version": "v5",
"created": "Tue, 5 Apr 2022 13:51:47 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Hong",
"Teakgyu",
""
],
[
"Kim",
"Donghyun",
""
],
[
"Ji",
"Mingi",
""
],
[
"Hwang",
"Wonseok",
""
],
[
"Nam",
"Daehyun",
""
],
[
"Park",
"Sungrae",
""
]
] |
new_dataset
| 0.984342 |
2109.01528
|
Alexander Ryzhkov
|
Anton Vakhrushev, Alexander Ryzhkov, Maxim Savchenko, Dmitry Simakov,
Rinchin Damdinov, Alexander Tuzhilin
|
LightAutoML: AutoML Solution for a Large Financial Services Ecosystem
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We present an AutoML system called LightAutoML developed for a large European
financial services company and its ecosystem satisfying the set of
idiosyncratic requirements that this ecosystem has for AutoML solutions. Our
framework was piloted and deployed in numerous applications and performed at
the level of experienced data scientists while building high-quality ML
models significantly faster than these data scientists. We also compare the
performance of our system with various general-purpose open source AutoML
solutions and show that it performs better for most of the ecosystem and OpenML
problems. We also present the lessons that we learned while developing the
AutoML system and moving it into production.
|
[
{
"version": "v1",
"created": "Fri, 3 Sep 2021 13:52:32 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 13:45:00 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Vakhrushev",
"Anton",
""
],
[
"Ryzhkov",
"Alexander",
""
],
[
"Savchenko",
"Maxim",
""
],
[
"Simakov",
"Dmitry",
""
],
[
"Damdinov",
"Rinchin",
""
],
[
"Tuzhilin",
"Alexander",
""
]
] |
new_dataset
| 0.997274 |
2111.11709
|
Alessandro Betti
|
Antonio Di Tommaso, Alessandro Betti, Giacomo Fontanelli, Benedetto
Michelozzi
|
A Multi-Stage model based on YOLOv3 for defect detection in PV panels
based on IR and Visible Imaging by Unmanned Aerial Vehicle
|
Submitted to Elsevier. Under Review
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As solar capacity installed worldwide continues to grow, there is an
increasing awareness that advanced inspection systems are becoming of utmost
importance to schedule smart interventions and minimize downtime likelihood. In
this work we propose a novel automatic multi-stage model to detect panel
defects on aerial images captured by unmanned aerial vehicle by using the
YOLOv3 network and Computer Vision techniques. The model combines detections of
panels and defects to refine its accuracy and exhibits an average inference
time per image of 0.98 s. The main novelties are its versatility to process
either thermographic or visible images and detect a large variety of defects,
its ability to prescribe recommended actions to the O&M crew for a more
efficient data-driven maintenance strategy, and its portability to both rooftop
and ground-mounted PV systems and different panel types. The proposed model has
been validated on two big PV plants in the south of Italy with an outstanding
AP@0.5 exceeding 98% for panel detection, a remarkable AP@0.4 (AP@0.5) of
roughly 88.3% (66.9%) for hotspots by means of infrared thermography and a
mAP@0.5 of almost 70% in the visible spectrum for detection of anomalies
including panel shading induced by soiling and bird dropping, delamination,
presence of puddles and raised rooftop panels. The model also predicts the
severity of hotspot areas based on the estimated temperature gradients and
computes the soiling coverage based on visual images. Finally, an analysis of
the influence of the different YOLOv3 output scales on the detection is
discussed.
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 08:04:32 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 17:32:20 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Di Tommaso",
"Antonio",
""
],
[
"Betti",
"Alessandro",
""
],
[
"Fontanelli",
"Giacomo",
""
],
[
"Michelozzi",
"Benedetto",
""
]
] |
new_dataset
| 0.998373 |
2201.06302
|
Kristina Gligoric
|
Justyna Czestochowska, Kristina Gligoric, Maxime Peyrard, Yann Mentha,
Michal Bien, Andrea Grutter, Anita Auer, Aris Xanthos, Robert West
|
On the Context-Free Ambiguity of Emoji
| null | null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Emojis come with prepacked semantics, making them great candidates to create
new forms of more accessible communication. Yet, little is known about how much
of this emoji semantics is agreed upon by humans outside of textual contexts.
Thus, we collected a crowdsourced dataset of one-word emoji
descriptions for 1,289 emojis presented to participants with no surrounding
text. The emojis and their interpretations were then examined for ambiguity. We
find that with 30 annotations per emoji, 16 emojis (1.2%) are completely
unambiguous, whereas 55 emojis (4.3%) are so ambiguous that their descriptions
are indistinguishable from randomly chosen descriptions. Most of the studied emojis
are spread out between the two extremes. Furthermore, investigating the
ambiguity of different types of emojis, we find that an important factor is the
extent to which an emoji has an embedded symbolical meaning drawn from an
established code-book of symbols. We conclude by discussing design
implications.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 09:33:29 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 09:42:32 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Czestochowska",
"Justyna",
""
],
[
"Gligoric",
"Kristina",
""
],
[
"Peyrard",
"Maxime",
""
],
[
"Mentha",
"Yann",
""
],
[
"Bien",
"Michal",
""
],
[
"Grutter",
"Andrea",
""
],
[
"Auer",
"Anita",
""
],
[
"Xanthos",
"Aris",
""
],
[
"West",
"Robert",
""
]
] |
new_dataset
| 0.999739 |
2201.12792
|
Boyi Jiang
|
Boyi Jiang, Yang Hong, Hujun Bao, Juyong Zhang
|
SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video
|
CVPR 2022, Oral. Project page: https://jby1993.github.io/SelfRecon/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose SelfRecon, a clothed human body reconstruction method that
combines implicit and explicit representations to recover space-time coherent
geometries from a monocular self-rotating human video. Explicit methods require
a predefined template mesh for a given sequence, while the template is hard to
acquire for a specific subject. Meanwhile, the fixed topology limits the
reconstruction accuracy and clothing types. Implicit representation supports
arbitrary topology and can represent high-fidelity geometry shapes due to its
continuous nature. However, it is difficult to integrate multi-frame
information to produce a consistent registration sequence for downstream
applications. We propose to combine the advantages of both representations. We
utilize differential mask loss of the explicit mesh to obtain the coherent
overall shape, while the details on the implicit surface are refined with the
differentiable neural rendering. Meanwhile, the explicit mesh is updated
periodically to adjust its topology changes, and a consistency loss is designed
to match both representations. Compared with existing methods, SelfRecon can
produce high-fidelity surfaces for arbitrary clothed humans with
self-supervised optimization. Extensive experimental results demonstrate its
effectiveness on real captured monocular videos. The source code is available
at https://github.com/jby1993/SelfReconCode.
|
[
{
"version": "v1",
"created": "Sun, 30 Jan 2022 11:49:29 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 13:47:11 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Jiang",
"Boyi",
""
],
[
"Hong",
"Yang",
""
],
[
"Bao",
"Hujun",
""
],
[
"Zhang",
"Juyong",
""
]
] |
new_dataset
| 0.994712 |
2203.14072
|
Guangyao Li
|
Guangyao Li, Yake Wei, Yapeng Tian, Chenliang Xu, Ji-Rong Wen and Di
Hu
|
Learning to Answer Questions in Dynamic Audio-Visual Scenarios
|
Accepted by CVPR2022 (Oral presentation)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we focus on the Audio-Visual Question Answering (AVQA) task,
which aims to answer questions regarding different visual objects, sounds, and
their associations in videos. The problem requires comprehensive multimodal
understanding and spatio-temporal reasoning over audio-visual scenes. To
benchmark this task and facilitate our study, we introduce a large-scale
MUSIC-AVQA dataset, which contains more than 45K question-answer pairs covering
33 different question templates spanning over different modalities and question
types. We develop several baselines and introduce a spatio-temporal grounded
audio-visual network for the AVQA problem. Our results demonstrate that AVQA
benefits from multisensory perception and our model outperforms recent A-, V-,
and AVQA approaches. We believe that our dataset has the potential to
serve as a testbed for evaluating and promoting progress in audio-visual scene
understanding and spatio-temporal reasoning. Code and dataset:
http://gewu-lab.github.io/MUSIC-AVQA/
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 13:03:42 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 12:04:15 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Li",
"Guangyao",
""
],
[
"Wei",
"Yake",
""
],
[
"Tian",
"Yapeng",
""
],
[
"Xu",
"Chenliang",
""
],
[
"Wen",
"Ji-Rong",
""
],
[
"Hu",
"Di",
""
]
] |
new_dataset
| 0.999177 |
2203.15125
|
Qunjie Zhou
|
Manuel Kolmet, Qunjie Zhou, Aljosa Osep, Laura Leal-Taixe
|
Text2Pos: Text-to-Point-Cloud Cross-Modal Localization
|
CVPR2022 Camera Ready Version
| null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language-based communication with mobile devices and home appliances
is becoming increasingly popular and has the potential to become natural for
communicating with mobile robots in the future. Towards this goal, we
investigate cross-modal text-to-point-cloud localization that will allow us to
specify, for example, a vehicle pick-up or goods delivery location. In
particular, we propose Text2Pos, a cross-modal localization module that learns
to align textual descriptions with localization cues in a coarse-to-fine
manner. Given a point cloud of the environment, Text2Pos locates a position
that is specified via a natural language-based description of the immediate
surroundings. To train Text2Pos and study its performance, we construct
KITTI360Pose, the first dataset for this task based on the recently introduced
KITTI360 dataset. Our experiments show that we can localize 65% of textual
queries within 15m distance to query locations for top-10 retrieved locations.
This is a starting point that we hope will spark future developments towards
language-based navigation.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 22:06:00 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 12:10:59 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Kolmet",
"Manuel",
""
],
[
"Zhou",
"Qunjie",
""
],
[
"Osep",
"Aljosa",
""
],
[
"Leal-Taixe",
"Laura",
""
]
] |
new_dataset
| 0.991761 |
2204.01725
|
Minsu Kim
|
Minsu Kim, Jeong Hun Yeo, Yong Man Ro
|
Distinguishing Homophenes Using Multi-Head Visual-Audio Memory for Lip
Reading
|
Published at AAAI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing speech from silent lip movement, which is called lip reading, is
a challenging task due to 1) the inherent information insufficiency of lip
movement to fully represent the speech, and 2) the existence of homophenes that
have similar lip movements but different pronunciations. In this paper, we try
to alleviate the aforementioned two challenges in lip reading by proposing a
Multi-head Visual-audio Memory (MVM). Firstly, MVM is trained with audio-visual
datasets and remembers audio representations by modelling the
inter-relationships of paired audio-visual representations. At the inference
stage, visual input alone can extract the saved audio representation from the
memory by examining the learned inter-relationships. Therefore, the lip reading
model can complement the insufficient visual information with the extracted
audio representations. Secondly, MVM is composed of multi-head key memories for
saving visual features and one value memory for saving audio knowledge, which
is designed to distinguish the homophenes. With the multi-head key memories,
MVM extracts possible candidate audio features from the memory, which allows
the lip reading model to consider the possibility of which pronunciations can
be represented from the input lip movement. This also can be viewed as an
explicit implementation of the one-to-many mapping of viseme-to-phoneme.
Moreover, MVM is employed in multi-temporal levels to consider the context when
retrieving the memory and distinguish the homophenes. Extensive experimental
results verify the effectiveness of the proposed method in lip reading and in
distinguishing the homophenes.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 06:29:35 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Kim",
"Minsu",
""
],
[
"Yeo",
"Jeong Hun",
""
],
[
"Ro",
"Yong Man",
""
]
] |
new_dataset
| 0.996802 |
2204.01827
|
Md Sabbir Hossain
|
Md Sabbir Hossain, Nishat Nayla, Annajiat Alim Rasel
|
Product Market Demand Analysis Using NLP in Banglish Text with Sentiment
Analysis and Named Entity Recognition
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Product market demand analysis plays a significant role in formulating
business strategies due to its noticeable impact on the competitive business
field. Furthermore, there are roughly 228 million native Bengali speakers, the
majority of whom use Banglish text to interact with one another on social
media. Consumers are buying and evaluating items on social media with Banglish
text as social media emerges as an online marketplace for entrepreneurs. People
use social media to find preferred smartphone brands and models by sharing
their positive and negative experiences with them. For this reason, our goal is to
gather Banglish text data and use sentiment analysis and named entity
identification to assess Bangladeshi market demand for smartphones in order to
determine the most popular smartphones by gender. We scraped product related
data from social media with instant data scrapers and crawled data from
Wikipedia and other sites for product information with python web scrapers.
Using Python's Pandas and Seaborn libraries, the raw data is filtered using NLP
methods. To train our datasets for named entity recognition, we utilized
spaCy's custom NER model and Amazon Comprehend Custom NER. A TensorFlow
sequential model was deployed with parameter tweaking for sentiment analysis.
Meanwhile, we used the Google Cloud Translation API to estimate the gender of
the reviewers using the BanglaLinga library. In this article, we use natural
language processing (NLP) approaches and several machine learning models to
identify the most in-demand items and services in the Bangladeshi market. Our
model has an accuracy of 87.99% in spaCy custom named entity recognition,
95.51% in Amazon Comprehend Custom NER, and 87.02% in the Sequential model for
demand analysis. Following the spaCy analysis, we were able to handle 80% of the
mistakes related to misspelled words using a mix of Levenshtein distance and ratio
algorithms.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 20:21:31 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Hossain",
"Md Sabbir",
""
],
[
"Nayla",
"Nishat",
""
],
[
"Rasel",
"Annajiat Alim",
""
]
] |
new_dataset
| 0.994149 |
2204.01861
|
Zhe Shen
|
Zhe Shen, Yudong Ma, Takeshi Tsuchiya
|
Four-dimensional Gait Surfaces for A Tilt-rotor -- Two Color Map Theorem
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents the four-dimensional surfaces which instruct the gait
plan for a tilt-rotor. The gaits previously analyzed in tilt-rotor research
are inspired by animals; no theoretical basis backs the robustness of these
gaits. This research, for the first time, deduces the gaits by diminishing the
effect of the tilt-rotor's attitude. Four-dimensional gait surfaces
are subsequently found, on which the gaits are expected to be robust to the
attitude. These surfaces provide the region where the gait is suggested to be
planned. However, a discontinuous region may hinder the gait plan process while
utilizing the proposed gait surfaces. A Two Color Map Theorem is then
established to guarantee the continuity of each designed gait. The robustness
of typical gaits that obey the Two Color Map Theorem and lie on the gait surface
is demonstrated by comparing the singular curve in attitude with that of gaits
not on the gait surface. The results show that the range of acceptable attitudes
is enlarged for gaits on the gait surface.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 21:36:01 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Shen",
"Zhe",
""
],
[
"Ma",
"Yudong",
""
],
[
"Tsuchiya",
"Takeshi",
""
]
] |
new_dataset
| 0.995117 |
2204.01906
|
Tristan Thrush
|
Tristan Thrush, Kushal Tirumala, Anmol Gupta, Max Bartolo, Pedro
Rodriguez, Tariq Kane, William Gaviria Rojas, Peter Mattson, Adina Williams,
Douwe Kiela
|
Dynatask: A Framework for Creating Dynamic AI Benchmark Tasks
|
ACL System Demos 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Dynatask: an open source system for setting up custom NLP tasks
that aims to greatly lower the technical knowledge and effort required for
hosting and evaluating state-of-the-art NLP models, as well as for conducting
model in the loop data collection with crowdworkers. Dynatask is integrated
with Dynabench, a research platform for rethinking benchmarking in AI that
facilitates human and model in the loop data collection and evaluation. To
create a task, users only need to write a short task configuration file from
which the relevant web interfaces and model hosting infrastructure are
automatically generated. The system is available at https://dynabench.org/ and
the full library can be found at https://github.com/facebookresearch/dynabench.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 00:32:04 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Thrush",
"Tristan",
""
],
[
"Tirumala",
"Kushal",
""
],
[
"Gupta",
"Anmol",
""
],
[
"Bartolo",
"Max",
""
],
[
"Rodriguez",
"Pedro",
""
],
[
"Kane",
"Tariq",
""
],
[
"Rojas",
"William Gaviria",
""
],
[
"Mattson",
"Peter",
""
],
[
"Williams",
"Adina",
""
],
[
"Kiela",
"Douwe",
""
]
] |
new_dataset
| 0.999378 |
2204.01918
|
Xiang Zhang
|
Xiang Zhang, Yongwen Su, Subarna Tripathi, Zhuowen Tu
|
Text Spotting Transformers
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present TExt Spotting TRansformers (TESTR), a generic
end-to-end text spotting framework using Transformers for text detection and
recognition in the wild. TESTR builds upon a single encoder and dual decoders
for the joint text-box control point regression and character recognition.
Unlike most existing literature, our method is free from Region-of-Interest
operations and heuristics-driven post-processing procedures; TESTR is
particularly effective when dealing with curved text-boxes, where special care
is needed to adapt the traditional bounding-box representations.
We show our canonical representation of control points suitable for text
instances in both Bezier curve and polygon annotations. In addition, we design
a bounding-box guided polygon detection (box-to-polygon) process. Experiments
on curved and arbitrarily shaped datasets demonstrate state-of-the-art
performances of the proposed TESTR algorithm.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 01:05:31 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Zhang",
"Xiang",
""
],
[
"Su",
"Yongwen",
""
],
[
"Tripathi",
"Subarna",
""
],
[
"Tu",
"Zhuowen",
""
]
] |
new_dataset
| 0.994843 |
2204.01964
|
Yijing Lin
|
Yijing Lin, Zhipeng Gao, Qian Wang, Lanlan Rui, Yang Yang
|
BcMON: Blockchain Middleware for Offline Networks
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain is becoming a new generation of information infrastructures.
However, the current blockchain solutions rely on a continuous connectivity
network to query and modify the state of the blockchain. The emerging satellite
technology seems to be a good catalyst to forward offline transactions to the
blockchain. However, this approach suffers expensive costs, difficult
interoperability, and limited computation problems. Therefore, we propose
BcMON, the first blockchain middleware for offline networks. BcMON incorporates
three innovative designs: 1) it reduces the costs of offline transactions
accessing the blockchain through Short Message Service (SMS), 2) it validates
the authenticity of offline cross-chain transactions by two-phase consensus, 3)
it supports offline clients to perform complex queries and computations on the
blockchains. The prototype of BcMON has been implemented to evaluate the
performance of the proposed middleware, which can show its stability,
efficiency, and scalability.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 03:43:05 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Lin",
"Yijing",
""
],
[
"Gao",
"Zhipeng",
""
],
[
"Wang",
"Qian",
""
],
[
"Rui",
"Lanlan",
""
],
[
"Yang",
"Yang",
""
]
] |
new_dataset
| 0.999125 |
2204.01975
|
Shulong Hu
|
Jinyin Chen, Shulong Hu, Haibin Zheng, Changyou Xing, Guomin Zhang
|
GAIL-PT: A Generic Intelligent Penetration Testing Framework with
Generative Adversarial Imitation Learning
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Penetration testing (PT) is an efficient network testing and vulnerability
mining tool that simulates a hacker's attack to obtain valuable information, and
it has been applied in several areas. Compared with manual PT, intelligent PT has
become the dominant approach because it is less time-consuming and has lower
labor costs. Unfortunately, RL-based PT remains challenging in real exploitation
scenarios because the agent's action space is usually high-dimensional and
discrete, thus making algorithm convergence difficult. Besides, most PT methods still rely on the
decisions of security experts. Addressing the challenges, for the first time,
we introduce expert knowledge to guide the agent to make better decisions in
RL-based PT and propose a Generative Adversarial Imitation Learning-based
generic intelligent Penetration testing framework, denoted as GAIL-PT, to solve
the problems of higher labor costs due to the involvement of security experts
and high-dimensional discrete action space. Specifically, first, we manually
collect the state-action pairs to construct an expert knowledge base when the
pre-trained RL / DRL model executes successful penetration tests. Second, we
input the expert knowledge and the state-action pairs generated online by the
different RL / DRL models into the discriminator of GAIL for training. At last,
we apply the output reward of the discriminator to guide the agent to perform
the action with a higher penetration success rate to improve PT's performance.
Extensive experiments conducted on the real target host and simulated network
scenarios show that GAIL-PT achieves the SOTA penetration performance against
DeepExploit in exploiting actual target Metasploitable2 and Q-learning in
optimizing penetration path, not only in small-scale with or without honey-pot
network environments but also in the large-scale virtual network environment.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 04:01:17 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Chen",
"Jinyin",
""
],
[
"Hu",
"Shulong",
""
],
[
"Zheng",
"Haibin",
""
],
[
"Xing",
"Changyou",
""
],
[
"Zhang",
"Guomin",
""
]
] |
new_dataset
| 0.978915 |
2204.02000
|
Suzan Verberne
|
Yanfang Hou, Peter van der Putten, Suzan Verberne
|
The COVMis-Stance dataset: Stance Detection on Twitter for COVID-19
Misinformation
| null | null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
During the COVID-19 pandemic, large amounts of COVID-19 misinformation are
spreading on social media. We are interested in the stance of Twitter users
towards COVID-19 misinformation. However, due to the relatively recent nature of
the pandemic, only a few stance detection datasets fit our task. We have
constructed a new stance dataset consisting of 2631 tweets annotated with the
stance towards COVID-19 misinformation. In contexts with limited labeled data,
we fine-tune our models by leveraging the MNLI dataset and two existing stance
detection datasets (RumourEval and COVIDLies), and evaluate the model
performance on our dataset. Our experimental results show that the model
performs the best when fine-tuned sequentially on the MNLI dataset and the
combination of the undersampled RumourEval and COVIDLies datasets. Our code and
dataset are publicly available at
https://github.com/yanfangh/covid-rumor-stance
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 05:47:15 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Hou",
"Yanfang",
""
],
[
"van der Putten",
"Peter",
""
],
[
"Verberne",
"Suzan",
""
]
] |
new_dataset
| 0.984971 |
2204.02035
|
Stanislav Frolov
|
Stanislav Frolov, Prateek Bansal, J\"orn Hees, Andreas Dengel
|
DT2I: Dense Text-to-Image Generation from Region Descriptions
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Despite astonishing progress, generating realistic images of complex scenes
remains a challenging problem. Recently, layout-to-image synthesis approaches
have attracted much interest by conditioning the generator on a list of
bounding boxes and corresponding class labels. However, previous approaches are
very restrictive because the set of labels is fixed a priori. Meanwhile,
text-to-image synthesis methods have substantially improved and provide a
flexible way for conditional image generation. In this work, we introduce dense
text-to-image (DT2I) synthesis as a new task to pave the way toward more
intuitive image generation. Furthermore, we propose DTC-GAN, a novel method to
generate images from semantically rich region descriptions, and a multi-modal
region feature matching loss to encourage semantic image-text matching. Our
results demonstrate the capability of our approach to generate plausible images
of complex scenes using region captions.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 07:57:11 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Frolov",
"Stanislav",
""
],
[
"Bansal",
"Prateek",
""
],
[
"Hees",
"Jörn",
""
],
[
"Dengel",
"Andreas",
""
]
] |
new_dataset
| 0.997246 |
2204.02091
|
Vaishakh Patil
|
Vaishakh Patil, Christos Sakaridis, Alexander Liniger, Luc Van Gool
|
P3Depth: Monocular Depth Estimation with a Piecewise Planarity Prior
|
Accepted at CVPR 2022
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular depth estimation is vital for scene understanding and downstream
tasks. We focus on the supervised setup, in which ground-truth depth is
available only at training time. Based on knowledge about the high regularity
of real 3D scenes, we propose a method that learns to selectively leverage
information from coplanar pixels to improve the predicted depth. In particular,
we introduce a piecewise planarity prior which states that for each pixel,
there is a seed pixel which shares the same planar 3D surface with the former.
Motivated by this prior, we design a network with two heads. The first head
outputs pixel-level plane coefficients, while the second one outputs a dense
offset vector field that identifies the positions of seed pixels. The plane
coefficients of seed pixels are then used to predict depth at each position.
The resulting prediction is adaptively fused with the initial prediction from
the first head via a learned confidence to account for potential deviations
from precise local planarity. The entire architecture is trained end-to-end
thanks to the differentiability of the proposed modules and it learns to
predict regular depth maps, with sharp edges at occlusion boundaries. An
extensive evaluation of our method shows that we set the new state of the art
in supervised monocular depth estimation, surpassing prior methods on NYU
Depth-v2 and on the Garg split of KITTI. Our method delivers depth maps that
yield plausible 3D reconstructions of the input scenes. Code is available at:
https://github.com/SysCV/P3Depth
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 10:03:52 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Patil",
"Vaishakh",
""
],
[
"Sakaridis",
"Christos",
""
],
[
"Liniger",
"Alexander",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.96295 |
2204.02143
|
Dongchao Yang
|
Dongchao Yang, Helin Wang, Zhongjie Ye, Yuexian Zou, Wenwu Wang
|
RaDur: A Reference-aware and Duration-robust Network for Target Sound
Detection
|
submitted to interspeech2022
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Target sound detection (TSD) aims to detect the target sound from a mixture
audio given the reference information. Previous methods use a conditional
network to extract a sound-discriminative embedding from the reference audio,
and then use it to detect the target sound from the mixture audio. However, the
network performs very differently when using different reference audios (e.g.
performs poorly for noisy and short-duration reference audios), and tends to
make wrong decisions for transient events (i.e. shorter than $1$ second). To
overcome these problems, in this paper, we present a reference-aware and
duration-robust network (RaDur) for TSD. More specifically, in order to make
the network more aware of the reference information, we propose an embedding
enhancement module to take into account the mixture audio while generating the
embedding, and apply the attention pooling to enhance the features of target
sound-related frames and weaken the features of noisy frames. In addition, a
duration-robust focal loss is proposed to help model different-duration events.
To evaluate our method, we build two TSD datasets based on UrbanSound and
Audioset. Extensive experiments show the effectiveness of our methods.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 12:08:13 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Yang",
"Dongchao",
""
],
[
"Wang",
"Helin",
""
],
[
"Ye",
"Zhongjie",
""
],
[
"Zou",
"Yuexian",
""
],
[
"Wang",
"Wenwu",
""
]
] |
new_dataset
| 0.972948 |
2204.02165
|
Markus Ryll
|
Markus Ryll, Robert K. Katzschmann
|
SMORS: A soft multirotor UAV for multimodal locomotion and robust
interaction
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present SMORS, the first Soft fully actuated MultirOtoR System for
multimodal locomotion. Unlike conventional hexarotors, SMORS is equipped with
three rigid and three continuously soft arms, with each arm hosting a
propeller. We create a bridge between the fields of soft and aerial robotics by
mechanically coupling the actuation of a fully actuated flying platform with
the actuation of a soft robotic manipulator. Each rotor is slightly tilted,
allowing for full actuation of the platform. The soft components combined with
the platform's full actuation allow for a robust interaction, in the form of
efficient multimodal locomotion. In this work, we present the dynamical model
of the platform, derive a closed-loop control, and present simulation results
fortifying the robustness of the platform under a jumping-flying maneuver. We
demonstrate in simulations that our multimodal locomotion approach can be more
energy-efficient than flight with a hexarotor.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 12:47:02 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Ryll",
"Markus",
""
],
[
"Katzschmann",
"Robert K.",
""
]
] |
new_dataset
| 0.995726 |
2204.02236
|
Robin Amar
|
Robin Amar, Mohammad Alaee-Kerahroodi, Prabhu Babu, Bhavani Shankar M.
R
|
Designing Interference-Immune Doppler-TolerantWaveforms for Automotive
Radar Applications
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic target detection using FMCW waveform is challenging in the presence
of interference for different radar applications. Degradation in SNR is
irreparable and interference is difficult to mitigate in the time and frequency
domains. In this paper, a waveform design problem is addressed using the
Majorization-Minimization (MM) framework by considering PSL/ISL cost functions,
resulting in a code sequence with the Doppler-tolerance characteristics of an FMCW
waveform and the interference-immune characteristics of a tailored PMCW waveform
(unique phase code + minimal ISL/PSL). The optimal design sequences possess
polynomial phase behavior of degree Q amongst its sub-sequences and obtain
optimal ISL and PSL solutions with guaranteed convergence. By tuning the
optimization parameters such as degree Q of the polynomial phase behavior,
sub-sequence length M and the total number of sub-sequences L, the optimized
sequences can be as Doppler tolerant as an FMCW waveform on one end, and they can
possess small cross-correlation values similar to random-phase sequences in
a PMCW waveform on the other end. If required in the event of acute interference,
new codes can be generated at runtime that have low cross-correlation with
the interferers. The performance analysis indicates that the proposed method
outperforms the state-of-the-art counterparts.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 14:18:11 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Amar",
"Robin",
""
],
[
"Alaee-Kerahroodi",
"Mohammad",
""
],
[
"Babu",
"Prabhu",
""
],
[
"R",
"Bhavani Shankar M.",
""
]
] |
new_dataset
| 0.98599 |
2204.02308
|
Kiyosu Maeda
|
Kiyosu Maeda, Riku Arakawa, Jun Rekimoto
|
CalmResponses: Displaying Collective Audience Reactions in Remote
Communication
|
To appear in ACM International Conference on Interactive Media
Experiences
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a system displaying audience eye gaze and nod reactions for
enhancing synchronous remote communication. Recently, we have had increasing
opportunities to speak to others remotely. In contrast to offline situations,
however, speakers often have difficulty observing audience reactions at once in
remote communication, which makes them feel more anxious and less confident in
their speeches. Recent studies have proposed methods of presenting various
audience reactions to speakers. Since these methods require additional devices
to measure audience reactions, they are not appropriate for practical
situations. Moreover, these methods do not present overall audience reactions.
In contrast, we design and develop CalmResponses, a browser-based system which
measures audience eye gaze and nod reactions only with a built-in webcam and
collectively presents them to speakers. The results of our two user studies
indicated that the number of fillers in the speaker's speech decreases when
the audience's eye gaze is presented, and their self-rating score increases when
the audience's nodding is presented. Moreover, comments from audience members
suggested that CalmResponses also benefits them in terms of co-presence and
privacy concerns.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 16:04:13 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Maeda",
"Kiyosu",
""
],
[
"Arakawa",
"Riku",
""
],
[
"Rekimoto",
"Jun",
""
]
] |
new_dataset
| 0.989446 |
2204.02316
|
Xiaotong Guo
|
Hongmou Zhang, Xiaotong Guo, Jinhua Zhao
|
Economies and Diseconomies of Scale in Segmented Mobility Sharing
Markets
|
9 pages, 3 figures
| null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
On-demand mobility sharing, provided by one or several transportation network
companies (TNCs), is realized by real-time optimization algorithms to connect
trips among tens of thousands of drivers and fellow passengers. In a market of
mobility sharing comprised of TNCs, there are two competing principles, the
economies of network scale and the healthy competition between TNCs, which can
lead to "segmentation" of the market. To understand the substantiality and
relationship of the two competing principles, we need to answer how much
efficiency loss is generated due to the segmentation of the market, and which
factors are related to it. Here we show how four critical factors of market
structure and characteristics of mobility sharing services -- density of trips
(thickness), maximum detour allowed for sharing (tightness), market shares
(unevenness), and spatial segregation of the TNCs (dissolvedness) -- are
associated with the efficiency loss, represented as the difference in vehicle
miles traveled (VMT) under different market structures. We found that 1) while
VMT shows a simple power function with thickness, the corresponding exponent
term can be expressed as a non-monotonic function with tightness -- essentially
showing how economies and diseconomies of scale in this market arise, and
taking a form very similar to the Lennard-Jones model of inter-molecular
potentials; and 2) the efficiency loss is higher when unevenness is closer to
0.5 (50-50 market share) and dissolvedness is larger. Our results give a
comprehensive analysis of how the inefficiency of market segmentation is
generated, and how potentially it may be avoided through market mechanism
design.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 16:16:10 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Zhang",
"Hongmou",
""
],
[
"Guo",
"Xiaotong",
""
],
[
"Zhao",
"Jinhua",
""
]
] |
new_dataset
| 0.995361 |
2204.02325
|
Alessandro Betti
|
Alessandro Betti
|
A lightweight and accurate YOLO-like network for small target detection
in Aerial Imagery
|
Submitted to Elsevier. Under Review
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the breakthrough deep learning performances achieved for automatic
object detection, small target detection is still a challenging problem,
especially when looking at fast and accurate solutions suitable for mobile or
edge applications. In this work we present YOLO-S, a simple, fast and efficient
network for small target detection. The architecture exploits a small feature
extractor based on Darknet20, as well as skip connections, via both bypass and
concatenation, and a reshape-passthrough layer to alleviate the vanishing
gradient problem, promote feature reuse across the network, and combine low-level
positional information with more meaningful high-level information. To verify
the performance of YOLO-S, we build "AIRES", a novel dataset for cAr detectIon
fRom hElicopter imageS acquired in Europe, and set up experiments on both AIRES
and VEDAI datasets, benchmarking this architecture with four baseline
detectors. Furthermore, in order to handle efficiently the issue of data
insufficiency and domain gap when dealing with a transfer learning strategy, we
introduce a transitional learning task over a combined dataset based on DOTAv2
and VEDAI and demonstrate that can enhance the overall accuracy with respect to
more general features transferred from COCO data. YOLO-S is from 25% to 50%
faster than YOLOv3 and only 15-25% slower than Tiny-YOLOv3, while also
outperforming YOLOv3 in terms of accuracy in a wide range of experiments. Further
simulations performed on the SARD dataset also demonstrate its applicability to different
scenarios such as for search and rescue operations. Besides, YOLO-S has an 87%
decrease in parameter size and almost half the FLOPs of YOLOv3, making
deployment practical for low-power industrial applications.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 16:29:49 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Betti",
"Alessandro",
""
]
] |
new_dataset
| 0.998301 |
2204.02336
|
Tamas David-Barrett
|
Tamas David-Barrett
|
Kinship Is a Network Tracking Social Technology, Not an Evolutionary
Phenomenon
|
29 pages, 10 figures
| null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
On one hand, kinship is a universal human phenomenon that tends to align with
biological relatedness, which might suggest evolutionary foundations. On the
other hand, kinship has exceptional variation across the human populations,
which points to cultural foundations. Furthermore, even if its foundation was
biological, kinship is often too imprecise to track genetic relatedness
efficiently, while inclusive fitness theory would suggest focusing only on the
closest relatives, which is not the case in most human cultures. It was the
parallel validity of these contradicting arguments that led to decades of
fierce debate about the definition and measurement of the phenomenon. This
paper offers a new approach to kinship. First, the model demonstrates that it
is possible to generate kinship networks (a) derived from the kind of basic kin
connections that our species shares with other apes, but (b) driven by network
rather than biological logic beyond the immediate family. Second, the model
demonstrates that kinship as a network heuristic works efficiently only in high
fertility societies, and gives way to similarity-based friendship with
demographic transition. The results explain (i) why kinship labelling is unique
to our species, (ii) why kinship is universal among human cultures, (iii) why
kinship terminology systems are varied across cultures, (iv) why linguistic kin
assignment is imprecise, and (v) why kinship is replaced by homophily when
relatives are scarce. The model offers a unifying framework to the debate
between social and evolutionary anthropology concerning the concept of human
kinship.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 18:32:00 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"David-Barrett",
"Tamas",
""
]
] |
new_dataset
| 0.982618 |
2204.02342
|
Lea Matlekovic
|
Lea Matlekovic and Peter Schneider-Kamp
|
From Monolith to Microservices: Software Architecture for Autonomous UAV
Infrastructure Inspection
|
11th International Conference on Cloud Computing: Services and
Architecture (CLOUD 2022)
| null |
10.5121/csit.2022.120622
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Linear-infrastructure Mission Control (LiMiC) is an application for
autonomous Unmanned Aerial Vehicle (UAV) infrastructure inspection mission
planning developed in monolithic software architecture. The application
calculates routes along the infrastructure based on the users' inputs, the
number of UAVs participating in the mission, and UAVs' locations. LiMiC1.0 is
the latest application version migrated from monolith to microservices,
continuously integrated, and deployed using DevOps tools to facilitate future
feature development, enable better traffic management, and improve the route
calculation processing time. Processing time was improved by refactoring the
route calculation algorithm into services, scaling them in the Kubernetes
cluster, and enabling asynchronous communication in between. In this paper, we
discuss the differences between the monolith and microservice architecture to
justify our decision for migration. We describe the methodology for the
application's migration and implementation processes, technologies we use for
continuous integration and deployment, and we present the microservices' improved
performance results compared with the monolithic application.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 16:57:14 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Matlekovic",
"Lea",
""
],
[
"Schneider-Kamp",
"Peter",
""
]
] |
new_dataset
| 0.99698 |
2204.02380
|
Leonard Salewski
|
Leonard Salewski and A. Sophia Koepke and Hendrik P. A. Lensch and
Zeynep Akata
|
CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations
| null | null |
10.1007/978-3-031-04083-2_5
| null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Providing explanations in the context of Visual Question Answering (VQA)
presents a fundamental problem in machine learning. To obtain detailed insights
into the process of generating natural language explanations for VQA, we
introduce the large-scale CLEVR-X dataset that extends the CLEVR dataset with
natural language explanations. For each image-question pair in the CLEVR
dataset, CLEVR-X contains multiple structured textual explanations which are
derived from the original scene graphs. By construction, the CLEVR-X
explanations are correct and describe the reasoning and visual information that
is necessary to answer a given question. We conducted a user study to confirm
that the ground-truth explanations in our proposed dataset are indeed complete
and relevant. We present baseline results for generating natural language
explanations in the context of VQA using two state-of-the-art frameworks on the
CLEVR-X dataset. Furthermore, we provide a detailed analysis of the explanation
generation quality for different question and answer types. Additionally, we
study the influence of using different numbers of ground-truth explanations on
the convergence of natural language generation (NLG) metrics. The CLEVR-X
dataset is publicly available at
\url{https://explainableml.github.io/CLEVR-X/}.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 17:38:04 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Salewski",
"Leonard",
""
],
[
"Koepke",
"A. Sophia",
""
],
[
"Lensch",
"Hendrik P. A.",
""
],
[
"Akata",
"Zeynep",
""
]
] |
new_dataset
| 0.999837 |
2204.02389
|
Ruohan Gao
|
Ruohan Gao, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li
Fei-Fei, Wenzhen Yuan, Jiajun Wu
|
ObjectFolder 2.0: A Multisensory Object Dataset for Sim2Real Transfer
|
In CVPR 2022. Gao, Si, and Chang contributed equally to this work.
Project page: https://ai.stanford.edu/~rhgao/objectfolder2.0/
| null | null | null |
cs.CV cs.LG cs.RO cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objects play a crucial role in our everyday activities. Though multisensory
object-centric learning has shown great potential lately, the modeling of
objects in prior work is rather unrealistic. ObjectFolder 1.0 is a recent
dataset that introduces 100 virtualized objects with visual, acoustic, and
tactile sensory data. However, the dataset is small in scale and the
multisensory data is of limited quality, hampering generalization to real-world
scenarios. We present ObjectFolder 2.0, a large-scale, multisensory dataset of
common household objects in the form of implicit neural representations that
significantly enhances ObjectFolder 1.0 in three aspects. First, our dataset is
10 times larger in the amount of objects and orders of magnitude faster in
rendering time. Second, we significantly improve the multisensory rendering
quality for all three modalities. Third, we show that models learned from
virtual objects in our dataset successfully transfer to their real-world
counterparts in three challenging tasks: object scale estimation, contact
localization, and shape reconstruction. ObjectFolder 2.0 offers a new path and
testbed for multisensory learning in computer vision and robotics. The dataset
is available at https://github.com/rhgao/ObjectFolder.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 17:55:01 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Gao",
"Ruohan",
""
],
[
"Si",
"Zilin",
""
],
[
"Chang",
"Yen-Yu",
""
],
[
"Clarke",
"Samuel",
""
],
[
"Bohg",
"Jeannette",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Yuan",
"Wenzhen",
""
],
[
"Wu",
"Jiajun",
""
]
] |
new_dataset
| 0.999696 |
2204.02397
|
Babak Ehteshami Bejnordi
|
Babak Ehteshami Bejnordi, Amirhossein Habibian, Fatih Porikli, Amir
Ghodrati
|
SALISA: Saliency-based Input Sampling for Efficient Video Object
Detection
|
20 pages, 7 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
High-resolution images are widely adopted for high-performance object
detection in videos. However, processing high-resolution inputs comes with high
computation costs, and naive down-sampling of the input to reduce the
computation costs quickly degrades the detection performance. In this paper, we
propose SALISA, a novel non-uniform SALiency-based Input SAmpling technique for
video object detection that allows for heavy down-sampling of unimportant
background regions while preserving the fine-grained details of a
high-resolution image. The resulting image is spatially smaller, leading to
reduced computational costs while enabling a performance comparable to a
high-resolution input. To achieve this, we propose a differentiable resampling
module based on a thin plate spline spatial transformer network (TPS-STN). This
module is regularized by a novel loss to provide an explicit supervision signal
to learn to "magnify" salient regions. We report state-of-the-art results in
the low compute regime on the ImageNet-VID and UA-DETRAC video object detection
datasets. We demonstrate that on both datasets, the mAP of an EfficientDet-D1
(EfficientDet-D2) is on par with that of EfficientDet-D2 (EfficientDet-D3) at a much
lower computational cost. We also show that SALISA significantly improves the
detection of small objects. In particular, SALISA with an EfficientDet-D1
detector improves the detection of small objects by $77\%$, and remarkably also
outperforms the EfficientDet-D3 baseline.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 17:59:51 GMT"
}
] | 2022-04-06T00:00:00 |
[
[
"Bejnordi",
"Babak Ehteshami",
""
],
[
"Habibian",
"Amirhossein",
""
],
[
"Porikli",
"Fatih",
""
],
[
"Ghodrati",
"Amir",
""
]
] |
new_dataset
| 0.991507 |
1602.05059
|
Dmytro Gavinsky
|
Dmytro Gavinsky
|
Entangled simultaneity versus classical interactivity in communication
complexity
| null | null | null | null |
cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 1999 Raz demonstrated a partial function that had an efficient quantum
two-way communication protocol but no efficient classical two-way protocol and
asked, whether there existed a function with an efficient quantum one-way
protocol, but still no efficient classical two-way protocol. In 2010 Klartag
and Regev demonstrated such a function and asked, whether there existed a
function with an efficient quantum simultaneous-messages protocol, but still no
efficient classical two-way protocol.
In this work we answer the latter question affirmatively and present a
partial function Shape, which can be computed by a protocol sending entangled
simultaneous messages of poly-logarithmic size, and whose classical two-way
complexity is lower bounded by a polynomial.
|
[
{
"version": "v1",
"created": "Tue, 16 Feb 2016 15:42:55 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Feb 2020 21:32:26 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Gavinsky",
"Dmytro",
""
]
] |
new_dataset
| 0.986221 |
1812.11448
|
Sixie Yu
|
Sixie Yu, Yevgeniy Vorobeychik
|
Removing Malicious Nodes from Networks
| null | null | null | null |
cs.SI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
A fundamental challenge in networked systems is detection and removal of
suspected malicious nodes. In reality, detection is always imperfect, and the
decision about which potentially malicious nodes to remove must trade off false
positives (erroneously removing benign nodes) and false negatives (mistakenly
failing to remove malicious nodes). However, in network settings this
conventional tradeoff must now account for node connectivity. In particular,
malicious nodes may exert malicious influence, so that mistakenly leaving some
of these in the network may cause damage to spread. On the other hand, removing
benign nodes causes direct harm to these, and indirect harm to their benign
neighbors who would wish to communicate with them.
We formalize the problem of removing potentially malicious nodes from a
network under uncertainty through an objective that takes connectivity into
account. We show that optimally solving the resulting problem is NP-Hard. We
then propose a tractable solution approach based on a convex relaxation of the
objective. Finally, we experimentally demonstrate that our approach
significantly outperforms both a simple baseline that ignores network
structure, as well as a state-of-the-art approach for a related problem, on
both synthetic and real-world datasets.
|
[
{
"version": "v1",
"created": "Sun, 30 Dec 2018 00:15:19 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Jan 2019 00:07:17 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Feb 2019 17:46:43 GMT"
},
{
"version": "v4",
"created": "Tue, 12 Mar 2019 23:07:58 GMT"
},
{
"version": "v5",
"created": "Thu, 28 Mar 2019 15:19:35 GMT"
},
{
"version": "v6",
"created": "Fri, 29 Mar 2019 00:59:54 GMT"
},
{
"version": "v7",
"created": "Sat, 2 Apr 2022 03:00:42 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Yu",
"Sixie",
""
],
[
"Vorobeychik",
"Yevgeniy",
""
]
] |
new_dataset
| 0.964138 |
1907.04640
|
Dmytro Gavinsky
|
Dmytro Gavinsky, Pavel Pudl\'ak
|
Santha-Vazirani sources, deterministic condensers and very strong
extractors
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The notion of semi-random sources, also known as Santha-Vazirani (SV)
sources, stands for a sequence of n bits, where the dependence of the i'th bit
on the previous i-1 bits is limited for every $i\in[n]$. If the dependence of
the i'th bit on the remaining n-1 bits is limited, then this is a strong
SV-source. Even the strong SV-sources are known not to admit (universal)
deterministic extractors, but they have seeded extractors, as their min-entropy
is $\Omega(n)$. It is intuitively obvious that strong SV-sources are more than
just high-min-entropy sources, and this work explores the intuition.
Deterministic condensers are known not to exist for general high-min-entropy
sources, and we construct for any constants $\epsilon, \delta \in (0,1)$ a
deterministic condenser that maps n bits coming from a strong SV-source with
bias at most $\delta$ to $\Omega(n)$ bits of min-entropy rate at least
$1-\epsilon$. In conclusion we observe that deterministic condensers are
closely related to very strong extractors - a proposed strengthening of the
notion of strong (seeded) extractors: in particular, our constructions can be
viewed as very strong extractors for the family of strong Santha-Vazirani
distributions. The notion of very strong extractors requires that the output
remains unpredictable even to someone who knows not only the seed value (as in
the case of strong extractors), but also the extractor's outputs corresponding
to the same input value with each of the preceding seed values (say, under the
lexicographic ordering). Very strong extractors closely resemble the original
notion of SV-sources, except that the bits must satisfy the unpredictability
requirement only on average.
|
[
{
"version": "v1",
"created": "Mon, 8 Jul 2019 22:58:04 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Feb 2020 18:41:11 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Gavinsky",
"Dmytro",
""
],
[
"Pudlák",
"Pavel",
""
]
] |
new_dataset
| 0.980362 |
2005.07531
|
Ripon Patgiri
|
Sabuzima Nayak and Ripon Patgiri
|
6G Communications: A Vision on the Potential Applications
|
This manuscript is submitted to IEEE for possible publications
|
Edge Analytics, Lecture Notes in Electrical Engineering, 2022
|
10.1007/978-981-19-0019-8_16
|
869
|
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
6G communication technology is a revolutionary technology that will
revolutionize many technologies and applications. Furthermore, it will be truly
AI-driven and will carry on intelligent space. Hence, it will enable Internet
of Everything (IoE) which will also impact many technologies and applications.
6G communication technology promises high Quality of Services (QoS) and high
Quality of Experiences (QoE). With the combination of IoE and 6G communication
technology, the number of applications will explode in the coming future,
particularly for vehicles, drones, homes, cities, hospitals, and so on, and there
will be no untouched area. Hence, it is expected that many existing
technologies will fully depend on 6G communication technology and enhance their
performance. 6G communication technology will prove to be a game-changing
communication technology in many fields and will be capable of influencing many
applications. Therefore, we envision the potential applications of 6G
communication technology in the near future.
|
[
{
"version": "v1",
"created": "Thu, 23 Apr 2020 06:26:46 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Nayak",
"Sabuzima",
""
],
[
"Patgiri",
"Ripon",
""
]
] |
new_dataset
| 0.997954 |
2102.05470
|
Alberto Bracci
|
Alberto Bracci, Matthieu Nadini, Maxwell Aliapoulios, Damon McCoy, Ian
Gray, Alexander Teytelboym, Angela Gallo, Andrea Baronchelli
|
The illicit trade of COVID-19 vaccines on the dark web
|
For the "before the vaccine" report see
https://doi.org/10.1140/epjds/s13688-021-00259-w
| null | null | null |
cs.CY cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Early analyses revealed that dark web marketplaces (DWMs) started offering
COVID-19 related products (e.g., masks and COVID-19 tests) as soon as the
COVID-19 pandemic started, when these goods were in shortage in the traditional
economy. Here, we broaden the scope and depth of previous investigations by
analysing 194 DWMs until July 2021, including the crucial period in which
vaccines became available, and by considering the wider impact of the pandemic
on DWMs. First, we focus on vaccines. We find 250 listings offering approved
vaccines, like Pfizer/BioNTech and AstraZeneca, as well as vendors offering
fabricated proofs of vaccination and COVID-19 passports. Second, we consider
COVID-19 related products. We reveal that, as the regular economy has become
able to satisfy the demand of these goods, DWMs have decreased their offer.
Third, we analyse the profile of vendors of COVID-19 related products and
vaccines. We find that most of them are specialized in a single type of
listings and are willing to ship worldwide. Finally, we consider a broader set
of listings mentioning COVID-19 as a proxy for the general impact of the pandemic
on these DWMs. Among 10,330 such listings, we show that recreational drugs are
the most affected among traditional DWM products, with COVID-19 mentions
steadily increasing since March 2020. We anticipate that our effort is of
interest to researchers, practitioners, and law enforcement agencies focused on
the study and safeguard of public health.
|
[
{
"version": "v1",
"created": "Wed, 10 Feb 2021 14:52:54 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Mar 2021 11:10:24 GMT"
},
{
"version": "v3",
"created": "Mon, 10 May 2021 14:58:09 GMT"
},
{
"version": "v4",
"created": "Tue, 10 Aug 2021 17:48:12 GMT"
},
{
"version": "v5",
"created": "Mon, 4 Apr 2022 16:59:58 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Bracci",
"Alberto",
""
],
[
"Nadini",
"Matthieu",
""
],
[
"Aliapoulios",
"Maxwell",
""
],
[
"McCoy",
"Damon",
""
],
[
"Gray",
"Ian",
""
],
[
"Teytelboym",
"Alexander",
""
],
[
"Gallo",
"Angela",
""
],
[
"Baronchelli",
"Andrea",
""
]
] |
new_dataset
| 0.978435 |
2104.08704
|
Tianyu Liu
|
Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu
Chen and Bill Dolan
|
A Token-level Reference-free Hallucination Detection Benchmark for
Free-form Text Generation
|
Accepted by ACL2022 main conference
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Large pretrained generative models like GPT-3 often suffer from hallucinating
non-existent or incorrect content, which undermines their potential merits in
real applications. Existing work usually attempts to detect these
hallucinations based on a corresponding oracle reference at a sentence or
document level. However, ground-truth references may not be readily available
for many free-form text generation applications, and sentence- or
document-level detection may fail to provide the fine-grained signals that
would prevent fallacious content in real time. As a first step to addressing
these issues, we propose a novel token-level, reference-free hallucination
detection task and an associated annotated dataset named HaDes (HAllucination
DEtection dataSet). To create this dataset, we first perturb a large number of
text segments extracted from English language Wikipedia, and then verify these
with crowd-sourced annotations. To mitigate label imbalance during annotation,
we utilize an iterative model-in-loop strategy. We conduct comprehensive data
analyses and create multiple baseline models.
|
[
{
"version": "v1",
"created": "Sun, 18 Apr 2021 04:09:48 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Apr 2022 15:23:44 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Liu",
"Tianyu",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Brockett",
"Chris",
""
],
[
"Mao",
"Yi",
""
],
[
"Sui",
"Zhifang",
""
],
[
"Chen",
"Weizhu",
""
],
[
"Dolan",
"Bill",
""
]
] |
new_dataset
| 0.999438 |
2105.05085
|
Heejin Park
|
Heejin Park, Felix Xiaozhu Lin
|
GPUReplay: A 50-KB GPU Stack for Client ML
|
in Proc. ASPLOS, Mar. 2022
| null | null | null |
cs.DC cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
GPUReplay (GR) is a novel way for deploying GPU-accelerated computation on
mobile and embedded devices. It addresses the high complexity of a modern GPU stack
for deployment ease and security. The idea is to record GPU executions on the
full GPU stack ahead of time and replay the executions on new input at run
time. We address key challenges towards making GR feasible, sound, and
practical to use. The resultant replayer is a drop-in replacement of the
original GPU stack. It is tiny (50 KB of executable), robust (replaying long
executions without divergence), portable (running in a commodity OS, in TEE,
and baremetal), and quick to launch (speeding up startup by up to two orders of
magnitude). We show that GPUReplay works with a variety of integrated GPU
hardware, GPU APIs, ML frameworks, and 33 neural network (NN) implementations
for inference or training. The code is available at
https://github.com/bakhi/GPUReplay.
|
[
{
"version": "v1",
"created": "Tue, 4 May 2021 07:55:19 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Aug 2021 05:26:23 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Dec 2021 22:36:11 GMT"
},
{
"version": "v4",
"created": "Sun, 3 Apr 2022 19:16:43 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Park",
"Heejin",
""
],
[
"Lin",
"Felix Xiaozhu",
""
]
] |
new_dataset
| 0.996895 |
2106.11485
|
Yutong He
|
Yutong He, Dingjie Wang, Nicholas Lai, William Zhang, Chenlin Meng,
Marshall Burke, David B. Lobell, Stefano Ermon
|
Spatial-Temporal Super-Resolution of Satellite Imagery via Conditional
Pixel Synthesis
| null |
Advances in Neural Information Processing Systems 35 (2021)
27903-27915
| null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
High-resolution satellite imagery has proven useful for a broad range of
tasks, including measurement of global human population, local economic
livelihoods, and biodiversity, among many others. Unfortunately,
high-resolution imagery is both infrequently collected and expensive to
purchase, making it hard to efficiently and effectively scale these downstream
tasks over both time and space. We propose a new conditional pixel synthesis
model that uses abundant, low-cost, low-resolution imagery to generate accurate
high-resolution imagery at locations and times in which it is unavailable. We
show that our model attains photo-realistic sample quality and outperforms
competing baselines on a key downstream task -- object counting -- particularly
in geographic locations where conditions on the ground are changing rapidly.
|
[
{
"version": "v1",
"created": "Tue, 22 Jun 2021 02:16:24 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Nov 2021 00:29:55 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Apr 2022 16:39:41 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"He",
"Yutong",
""
],
[
"Wang",
"Dingjie",
""
],
[
"Lai",
"Nicholas",
""
],
[
"Zhang",
"William",
""
],
[
"Meng",
"Chenlin",
""
],
[
"Burke",
"Marshall",
""
],
[
"Lobell",
"David B.",
""
],
[
"Ermon",
"Stefano",
""
]
] |
new_dataset
| 0.98115 |
2109.04275
|
Xunlin Zhan
|
Xiao Dong, Xunlin Zhan, Yangxin Wu, Yunchao Wei, Michael C.
Kampffmeyer, Xiaoyong Wei, Minlong Lu, Yaowei Wang, Xiaodan Liang
|
M5Product: Self-harmonized Contrastive Learning for E-commercial
Multi-modal Pretraining
|
CVPR2022
| null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the potential of multi-modal pre-training to learn highly
discriminative feature representations from complementary data modalities,
current progress is being slowed by the lack of large-scale modality-diverse
datasets. By leveraging the natural suitability of E-commerce, where different
modalities capture complementary semantic information, we contribute a
large-scale multi-modal pre-training dataset M5Product. The dataset comprises 5
modalities (image, text, table, video, and audio), covers over 6,000 categories
and 5,000 attributes, and is 500 times larger than the largest publicly available
dataset with a similar number of modalities. Furthermore, M5Product contains
incomplete modality pairs and noise while also having a long-tailed
distribution, resembling most real-world problems. We further propose
Self-harmonized ContrAstive LEarning (SCALE), a novel pretraining framework
that integrates the different modalities into a unified model through an
adaptive feature fusion mechanism, where the importance of each modality is
learned directly from the modality embeddings and impacts the inter-modality
contrastive learning and masked tasks within a multi-modal transformer model.
We evaluate current state-of-the-art multi-modal pre-training approaches
and benchmark their ability to learn from unlabeled data when faced with the
large number of modalities in the M5Product dataset. We conduct extensive
experiments on four downstream tasks and demonstrate the superiority of our
SCALE model, providing insights into the importance of dataset scale and
diversity.
|
[
{
"version": "v1",
"created": "Thu, 9 Sep 2021 13:50:22 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 12:42:49 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Mar 2022 07:26:16 GMT"
},
{
"version": "v4",
"created": "Fri, 18 Mar 2022 06:56:04 GMT"
},
{
"version": "v5",
"created": "Sat, 2 Apr 2022 13:01:13 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Dong",
"Xiao",
""
],
[
"Zhan",
"Xunlin",
""
],
[
"Wu",
"Yangxin",
""
],
[
"Wei",
"Yunchao",
""
],
[
"Kampffmeyer",
"Michael C.",
""
],
[
"Wei",
"Xiaoyong",
""
],
[
"Lu",
"Minlong",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
new_dataset
| 0.999019 |
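A minimal sketch of the two ingredients the SCALE abstract names -- learning a per-modality importance weight from the modality embeddings, and an inter-modality contrastive loss over paired embeddings -- might look as follows. Shapes, module names, and the temperature are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    # Learns one scalar importance score per modality embedding and fuses by softmax weighting.
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, mod_embs: torch.Tensor):
        # mod_embs: (batch, num_modalities, dim)
        weights = torch.softmax(self.score(mod_embs).squeeze(-1), dim=-1)   # (batch, M)
        fused = (weights.unsqueeze(-1) * mod_embs).sum(dim=1)               # (batch, dim)
        return fused, weights

def inter_modality_infonce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    # Contrastive loss between two modalities; embeddings of the same product are positives.
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)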
2110.07592
|
Sreyan Ghosh
|
Sreyan Ghosh and Samden Lepcha and S Sakshi and Rajiv Ratn Shah and S.
Umesh
|
DeToxy: A Large-Scale Multimodal Dataset for Toxicity Classification in
Spoken Utterances
|
Submitted to Interspeech 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Toxic speech, also known as hate speech, is regarded as one of the crucial
issues plaguing online social media today. Most recent work on toxic speech
detection is constrained to the modality of text and written conversations with
very limited work on toxicity detection from spoken utterances or using the
modality of speech. In this paper, we introduce a new dataset DeToxy, the first
publicly available toxicity annotated dataset for the English language. DeToxy
is sourced from various openly available speech databases and consists of over
2 million utterances. We believe that our dataset would act as a benchmark for
the relatively new and un-explored Spoken Language Processing task of detecting
toxicity from spoken utterances and boost further research in this space.
Finally, we also provide strong unimodal baselines for our dataset and compare
traditional two-step and E2E approaches. Our experiments show that in the case
of spoken utterances, text-based approaches are largely dependent on gold
human-annotated transcripts for their performance and also suffer from the
problem of keyword bias. However, the presence of speech files in DeToxy helps
facilitate the development of E2E speech models, which alleviate both the
above-stated problems by better capturing speech clues.
|
[
{
"version": "v1",
"created": "Thu, 14 Oct 2021 17:51:04 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Nov 2021 18:27:09 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Apr 2022 14:16:04 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Ghosh",
"Sreyan",
""
],
[
"Lepcha",
"Samden",
""
],
[
"Sakshi",
"S",
""
],
[
"Shah",
"Rajiv Ratn",
""
],
[
"Umesh",
"S.",
""
]
] |
new_dataset
| 0.999833 |
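For readers unfamiliar with the "traditional two-step" baseline the DeToxy abstract contrasts with E2E speech models, it amounts to chaining an ASR system with a text toxicity classifier, so transcription errors and keyword bias propagate into the toxicity decision. The checkpoint names below are illustrative public models, not necessarily the ones used in the paper.

from transformers import pipeline

# Step 1: speech -> text; step 2: text -> toxicity. Errors from step 1 propagate to step 2.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def two_step_toxicity(wav_path: str) -> dict:
    transcript = asr(wav_path)["text"]
    return toxicity(transcript)[0]   # e.g. {"label": ..., "score": ...}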
2110.08466
|
Hao Sun
|
Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao
Zhou, Nanyun Peng, Xiaoyan Zhu, Minlie Huang
|
On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark
|
Accepted to Findings of ACL 2022 (Long Paper)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialogue safety problems severely limit the real-world deployment of neural
conversational models and have attracted great research interest recently.
However, dialogue safety problems remain under-defined and the corresponding
dataset is scarce. We propose a taxonomy for dialogue safety specifically
designed to capture unsafe behaviors in human-bot dialogue settings, with
a focus on context-sensitive unsafety, which is under-explored in prior work.
To spur research in this direction, we compile DiaSafety, a dataset with rich
context-sensitive unsafe examples. Experiments show that existing safety
guarding tools fail severely on our dataset. As a remedy, we train a dialogue
safety classifier to provide a strong baseline for context-sensitive dialogue
unsafety detection. With our classifier, we perform safety evaluations on
popular conversational models and show that existing dialogue systems still
exhibit concerning context-sensitive safety problems.
|
[
{
"version": "v1",
"created": "Sat, 16 Oct 2021 04:17:12 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 06:17:40 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Sun",
"Hao",
""
],
[
"Xu",
"Guangxuan",
""
],
[
"Deng",
"Jiawen",
""
],
[
"Cheng",
"Jiale",
""
],
[
"Zheng",
"Chujie",
""
],
[
"Zhou",
"Hao",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Zhu",
"Xiaoyan",
""
],
[
"Huang",
"Minlie",
""
]
] |
new_dataset
| 0.999382 |
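A context-sensitive safety classifier baseline of the kind the abstract describes can be approximated by a sentence-pair classifier over (context, response) inputs; the sketch below shows inference only. The checkpoint name and the label convention (1 = unsafe) are assumptions made for illustration.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model.eval()

def is_unsafe(context: str, response: str) -> bool:
    # Encode context and bot response as a sentence pair so the decision can depend on context.
    inputs = tokenizer(context, response, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item() == 1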
2110.09144
|
Christian Rathgeb
|
Jannis Priesnitz, Christian Rathgeb, Nicolas Buchmann, Christoph Busch
|
SynCoLFinGer: Synthetic Contactless Fingerprint Generator
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first method for synthetic generation of contactless
fingerprint images, referred to as SynCoLFinGer. To this end, the constituent
components of contactless fingerprint images regarding capturing, subject
characteristics, and environmental influences are modeled and applied to a
synthetically generated ridge pattern using the SFinGe algorithm. The proposed
method is able to generate different synthetic samples corresponding to a
single finger and it can be parameterized to generate contactless fingerprint
images of various quality levels. The resemblance of the synthetically
generated contactless fingerprints to real fingerprints is confirmed by
evaluating biometric sample quality using an adapted NFIQ 2.0 algorithm and
biometric utility using a state-of-the-art contactless fingerprint recognition
system.
|
[
{
"version": "v1",
"created": "Mon, 18 Oct 2021 09:56:07 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 14:42:51 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Priesnitz",
"Jannis",
""
],
[
"Rathgeb",
"Christian",
""
],
[
"Buchmann",
"Nicolas",
""
],
[
"Busch",
"Christoph",
""
]
] |
new_dataset
| 0.998212 |
2110.13981
|
Yang Sui
|
Yang Sui, Miao Yin, Yi Xie, Huy Phan, Saman Zonouz, Bo Yuan
|
CHIP: CHannel Independence-based Pruning for Compact Neural Networks
|
Accepted by NeurIPS 2021. Model Compression, Channel Pruning, Filter
Pruning, Deep Learning
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Filter pruning has been widely used for neural network compression because of
the practical acceleration it enables. To date, most of the existing filter
pruning works explore the importance of filters via using intra-channel
information. In this paper, starting from an inter-channel perspective, we
propose to perform efficient filter pruning using Channel Independence, a
metric that measures the correlations among different feature maps. The less
independent feature map is interpreted as containing less useful
information$/$knowledge, and hence its corresponding filter can be pruned
without affecting model capacity. We systematically investigate the
quantification metric, measuring scheme and sensitiveness$/$reliability of
channel independence in the context of filter pruning. Our evaluation results
for different models on various datasets show the superior performance of our
approach. Notably, on CIFAR-10 dataset our solution can bring $0.90\%$ and
$0.94\%$ accuracy increase over baseline ResNet-56 and ResNet-110 models,
respectively, and meanwhile the model size and FLOPs are reduced by $42.8\%$
and $47.4\%$ (for ResNet-56) and $48.3\%$ and $52.1\%$ (for ResNet-110),
respectively. On ImageNet dataset, our approach can achieve $40.8\%$ and
$44.8\%$ storage and computation reductions, respectively, with $0.15\%$
accuracy increase over the baseline ResNet-50 model. The code is available at
https://github.com/Eclipsess/CHIP_NeurIPS2021.
|
[
{
"version": "v1",
"created": "Tue, 26 Oct 2021 19:35:56 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Dec 2021 09:29:29 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Apr 2022 08:11:33 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Sui",
"Yang",
""
],
[
"Yin",
"Miao",
""
],
[
"Xie",
"Yi",
""
],
[
"Phan",
"Huy",
""
],
[
"Zonouz",
"Saman",
""
],
[
"Yuan",
"Bo",
""
]
] |
new_dataset
| 0.950075 |
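To make the inter-channel idea concrete, the sketch below scores each channel by how strongly its feature map correlates with the other channels in the same layer and marks the least independent ones for pruning. This is a simplified stand-in for illustration, not the exact channel-independence metric used in CHIP.

import torch

def independence_scores(feats: torch.Tensor) -> torch.Tensor:
    # feats: (channels, H*W), one flattened feature map per channel.
    f = feats - feats.mean(dim=1, keepdim=True)
    f = f / (f.norm(dim=1, keepdim=True) + 1e-8)
    corr = (f @ f.t()).abs()          # (C, C) absolute cross-channel correlations
    corr.fill_diagonal_(0.0)
    # A channel that is highly correlated with another channel adds little independent information.
    return 1.0 - corr.max(dim=1).values

def channels_to_prune(feats: torch.Tensor, ratio: float = 0.4) -> torch.Tensor:
    scores = independence_scores(feats)
    k = int(ratio * feats.size(0))
    return torch.topk(-scores, k).indices   # indices of the least independent channels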
2111.13489
|
Rasmus Haugaard
|
Rasmus Laurvig Haugaard, Anders Glent Buch
|
SurfEmb: Dense and Continuous Correspondence Distributions for Object
Pose Estimation with Learnt Surface Embeddings
| null | null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an approach to learn dense, continuous 2D-3D correspondence
distributions over the surface of objects from data with no prior knowledge of
visual ambiguities like symmetry. We also present a new method for 6D pose
estimation of rigid objects using the learnt distributions to sample, score and
refine pose hypotheses. The correspondence distributions are learnt with a
contrastive loss, represented in object-specific latent spaces by an
encoder-decoder query model and a small fully connected key model. Our method
is unsupervised with respect to visual ambiguities, yet we show that the query-
and key models learn to represent accurate multi-modal surface distributions.
Our pose estimation method improves the state-of-the-art significantly on the
comprehensive BOP Challenge, trained purely on synthetic data, even compared
with methods trained on real data. The project site is at
https://surfemb.github.io/ .
|
[
{
"version": "v1",
"created": "Fri, 26 Nov 2021 13:39:38 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 07:38:44 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Haugaard",
"Rasmus Laurvig",
""
],
[
"Buch",
"Anders Glent",
""
]
] |
new_dataset
| 0.986225 |
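The contrastive objective the SurfEmb abstract mentions can be sketched as an InfoNCE loss in which each pixel's query embedding is scored against key embeddings of sampled surface points, with the true corresponding point as the positive. Shapes and names below are assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def correspondence_infonce(query: torch.Tensor,   # (N, D) query embeddings for N object pixels
                           keys: torch.Tensor,    # (M, D) key embeddings for M sampled surface points
                           pos_idx: torch.Tensor, # (N,) index into keys of each pixel's true 3D point
                           temperature: float = 0.1) -> torch.Tensor:
    q = F.normalize(query, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature               # per-pixel similarity to every surface point
    return F.cross_entropy(logits, pos_idx)        # softmax over surface points; positive = true correspondence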
2111.14821
|
Evgenii Zheltonozhskii
|
Adam Botach, Evgenii Zheltonozhskii, Chaim Baskin
|
End-to-End Referring Video Object Segmentation with Multimodal
Transformers
|
Accepted to CVPR 2022
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The referring video object segmentation task (RVOS) involves segmentation of
a text-referred object instance in the frames of a given video. Due to the
complex nature of this multimodal task, which combines text reasoning, video
understanding, instance segmentation and tracking, existing approaches
typically rely on sophisticated pipelines in order to tackle it. In this paper,
we propose a simple Transformer-based approach to RVOS. Our framework, termed
Multimodal Tracking Transformer (MTTR), models the RVOS task as a sequence
prediction problem. Following recent advancements in computer vision and
natural language processing, MTTR is based on the realization that video and
text can be processed together effectively and elegantly by a single multimodal
Transformer model. MTTR is end-to-end trainable, free of text-related inductive
bias components and requires no additional mask-refinement post-processing
steps. As such, it simplifies the RVOS pipeline considerably compared to
existing methods. Evaluation on standard benchmarks reveals that MTTR
significantly outperforms previous art across multiple metrics. In particular,
MTTR shows impressive +5.7 and +5.0 mAP gains on the A2D-Sentences and
JHMDB-Sentences datasets respectively, while processing 76 frames per second.
In addition, we report strong results on the public validation set of
Refer-YouTube-VOS, a more challenging RVOS dataset that has yet to receive the
attention of researchers. The code to reproduce our experiments is available at
https://github.com/mttr2021/MTTR
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 18:59:32 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Apr 2022 09:22:36 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Botach",
"Adam",
""
],
[
"Zheltonozhskii",
"Evgenii",
""
],
[
"Baskin",
"Chaim",
""
]
] |
new_dataset
| 0.96925 |