Dataset schema (field name, type, and observed value range; ⌀ marks a nullable field):
id (string, length 9-10) | submitter (string, length 2-52, ⌀) | authors (string, length 4-6.51k) | title (string, length 4-246) | comments (string, length 1-523, ⌀) | journal-ref (string, length 4-345, ⌀) | doi (string, length 11-120, ⌀) | report-no (string, length 2-243, ⌀) | categories (string, length 5-98) | license (string, 9 classes) | abstract (string, length 33-3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
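The records below follow this schema, with one field per `|`-delimited cell. As a minimal sketch of how such rows might be consumed, the Python snippet below assumes the table has been exported to a JSON-lines file (the file name `arxiv_new_dataset_predictions.jsonl` and that export format are assumptions, not part of the dataset itself) and filters for entries labelled `new_dataset` with probability at or above 0.95, matching the probability range reported in the schema:

```python
# Minimal sketch, assuming a JSON-lines export of the table above.
# Only the column names and the prediction/probability semantics come from the schema;
# the file name and export format are placeholders.
import json
from typing import Iterator

COLUMNS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref", "doi",
    "report-no", "categories", "license", "abstract", "versions",
    "update_date", "authors_parsed", "prediction", "probability",
]


def iter_records(path: str) -> Iterator[dict]:
    """Yield one metadata record per non-empty line of a JSON-lines export."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)


def has_expected_fields(record: dict) -> bool:
    """Check that a record carries every column listed in the schema above."""
    return all(col in record for col in COLUMNS)


def confident_new_datasets(path: str, threshold: float = 0.95) -> list[dict]:
    """Keep records labelled 'new_dataset' whose probability meets the threshold."""
    return [
        record
        for record in iter_records(path)
        if has_expected_fields(record)
        and record["prediction"] == "new_dataset"
        and float(record["probability"]) >= threshold
    ]


if __name__ == "__main__":
    for record in confident_new_datasets("arxiv_new_dataset_predictions.jsonl"):
        print(record["id"], record["title"], record["probability"])
```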
2206.05107
|
Yiming Zhu
|
Yiming Zhu, Ehsan-ul Haq, Lik-Hang Lee, Gareth Tyson, Pan Hui
|
A Reddit Dataset for the Russo-Ukrainian Conflict in 2022
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reddit consists of sub-communities, each covering a focused topic. This paper
provides a list of relevant subreddits for the ongoing Russo-Ukrainian crisis.
We perform an exhaustive subreddit exploration using keyword search and
shortlist 12 subreddits as potential candidates that contain nominal discourse
related to the crisis. These subreddits contain over 300,000 posts and 8
million comments collectively. We provide an additional categorization of
content into two categories, "R-U Conflict" and "Military Related", based on
their primary focus. We further perform content characterization of those
subreddits. The results show a surge of posts and comments soon after Russia
launched the invasion. "Military Related" posts tend to receive more replies
than "R-U Conflict" posts. Our textual analysis shows an apparent preference
for the Pro-Ukraine stance in "R-U Conflict", while "Military Related" retains
a neutral stance.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 13:52:51 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jun 2022 17:27:19 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Zhu",
"Yiming",
""
],
[
"Haq",
"Ehsan-ul",
""
],
[
"Lee",
"Lik-Hang",
""
],
[
"Tyson",
"Gareth",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.999853 |
2206.05286
|
Danial Nasir
|
Asfand Ali, Danial Nasir, Mohammad Hassan Jawad
|
AHD ConvNet for Speech Emotion Classification
|
Wrong authors quoted
| null | null | null |
cs.SD cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Accomplishments in the field of artificial intelligence are utilized in the
advancement of computing and the making of intelligent machines for facilitating
mankind and improving user experience. Emotions are rudimentary for people,
affecting thinking and ordinary exercises like correspondence, learning and
direction. Speech emotion recognition is a domain of interest in this regard, and
in this work we propose a novel mel-spectrogram learning approach in which our
model uses the data points to learn emotions from the given waveform voice notes
in the popular CREMA-D dataset. Our model uses the log mel-spectrogram as a
feature with the number of mels = 64. It takes less training time than other
approaches used to address the problem of speech emotion recognition.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 11:57:28 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2022 12:25:51 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Ali",
"Asfand",
""
],
[
"Nasir",
"Danial",
""
],
[
"Jawad",
"Mohammad Hassan",
""
]
] |
new_dataset
| 0.997242 |
2206.08929
|
Ruilong Li
|
Ruilong Li, Julian Tanke, Minh Vo, Michael Zollhofer, Jurgen Gall,
Angjoo Kanazawa, Christoph Lassner
|
TAVA: Template-free Animatable Volumetric Actors
|
Code: https://github.com/facebookresearch/tava; Project Website:
https://www.liruilong.cn/projects/tava/
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Coordinate-based volumetric representations have the potential to generate
photo-realistic virtual avatars from images. However, virtual avatars also need
to be controllable even to a novel pose that may not have been observed.
Traditional techniques, such as LBS, provide such a function; yet it usually
requires a hand-designed body template, 3D scan data, and limited appearance
models. On the other hand, neural representations have been shown to be powerful
in representing visual details, but are underexplored for deforming dynamic
articulated actors. In this paper, we propose TAVA, a method to create
Template-free Animatable Volumetric Actors, based on neural representations. We
rely solely on multi-view data and a tracked skeleton to create a volumetric
model of an actor, which can be animated at test time given a novel pose.
Since TAVA does not require a body template, it is applicable to humans as well
as other creatures such as animals. Furthermore, TAVA is designed such that it
can recover accurate dense correspondences, making it amenable to
content-creation and editing tasks. Through extensive experiments, we
demonstrate that the proposed method generalizes well to novel poses as well as
unseen views and showcase basic editing capabilities.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 17:59:59 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2022 03:14:02 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Li",
"Ruilong",
""
],
[
"Tanke",
"Julian",
""
],
[
"Vo",
"Minh",
""
],
[
"Zollhofer",
"Michael",
""
],
[
"Gall",
"Jurgen",
""
],
[
"Kanazawa",
"Angjoo",
""
],
[
"Lassner",
"Christoph",
""
]
] |
new_dataset
| 0.999612 |
2206.08930
|
Sreela Kodali
|
Sreela Kodali, Allison M. Okamura, Thomas C. Bulea, Alexander T.
Chesler, Carsten G. Bönnemann
|
Wearable Haptic Device for Individuals with Congenital Absence of
Proprioception
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A rare genetic condition, PIEZO2 loss of function (LOF) is characterized by
absence of proprioception and light touch, which makes functional tasks (e.g.,
walking, manipulation) difficult. There are no pharmacological treatments or
assistive technologies available for individuals with PIEZO2-LOF. We propose a
sensory substitution device that communicates proprioceptive feedback via
detectable haptic stimuli. We created a wearable prototype that maps
measurements of elbow movement to deep pressure applied to the forearm. The
prototype applies up to 18 N, includes an embedded force sensor, and is
programmable to allow for various angle-to-pressure mappings. Future work
includes comparing proprioceptive acuity and movement ability with and without
the device in healthy and PIEZO2-LOF individuals, developing low-profile
devices using soft robotics, providing sensory substitution for multiple joints
simultaneously, and encoding additional aspects of joint dynamics.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 22:18:29 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Kodali",
"Sreela",
""
],
[
"Okamura",
"Allison M.",
""
],
[
"Bulea",
"Thomas C.",
""
],
[
"Chesler",
"Alexander T.",
""
],
[
"Bönnemann",
"Carsten G.",
""
]
] |
new_dataset
| 0.999534 |
2206.08932
|
Claire Stevenson
|
Claire Stevenson, Iris Smal, Matthijs Baas, Raoul Grasman and Han van
der Maas
|
Putting GPT-3's Creativity to the (Alternative Uses) Test
|
5 pages, 6 figures, accepted at the International Conference on
Computational Creativity (ICCC) 2022 as a Short Paper. See
https://osf.io/vmk3c/ for data, analyses and code
| null | null | null |
cs.AI cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
AI large language models have (co-)produced amazing written works from
newspaper articles to novels and poetry. These works meet the standards of the
standard definition of creativity: being original and useful, and sometimes
even the additional element of surprise. But can a large language model
designed to predict the next text fragment provide creative, out-of-the-box,
responses that still solve the problem at hand? We put OpenAI's generative
natural language model, GPT-3, to the test. Can it provide creative solutions
to one of the most commonly used tests in creativity research? We assessed
GPT-3's creativity on Guilford's Alternative Uses Test and compared its
performance to previously collected human responses on expert ratings of
originality, usefulness and surprise of responses, flexibility of each set of
ideas as well as an automated method to measure creativity based on the
semantic distance between a response and the AUT object in question. Our
results show that -- on the whole -- humans currently outperform GPT-3 when it
comes to creative output. But, we believe it is only a matter of time before
GPT-3 catches up on this particular task. We discuss what this work reveals
about human and AI creativity, creativity testing and our definition of
creativity.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 15:36:45 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Stevenson",
"Claire",
""
],
[
"Smal",
"Iris",
""
],
[
"Baas",
"Matthijs",
""
],
[
"Grasman",
"Raoul",
""
],
[
"van der Maas",
"Han",
""
]
] |
new_dataset
| 0.97571 |
2206.08977
|
Md Ataur Rahman
|
Md. Ataur Rahman, Nazifa Tabassum, Mitu Paul, Riya Pal, Mohammad
Khairul Islam
|
BN-HTRd: A Benchmark Dataset for Document Level Offline Bangla
Handwritten Text Recognition (HTR) and Line Segmentation
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new dataset for offline Handwritten Text Recognition (HTR)
from images of Bangla scripts comprising words, lines, and document-level
annotations. The BN-HTRd dataset is based on the BBC Bangla News corpus, meant
to act as ground truth texts. These texts were subsequently used to generate
the annotations that were filled out by people with their handwriting. Our
dataset includes 788 images of handwritten pages produced by approximately 150
different writers. It can be adopted as a basis for various handwriting
classification tasks such as end-to-end document recognition, word-spotting,
word or line segmentation, and so on. We also propose a scheme to segment
Bangla handwritten document images into corresponding lines in an unsupervised
manner. Our line segmentation approach takes care of the variability involved
in different writing styles, accurately segmenting complex handwritten text
lines of curvilinear nature. Along with a set of pre-processing and
morphological operations, both Hough line and circle transforms were employed
to distinguish different linear components. In order to arrange those
components into their corresponding lines, we followed an unsupervised
clustering approach. The average success rate of our segmentation technique is
81.57% in terms of FM metrics (similar to F-measure) with a mean Average
Precision (mAP) of 0.547.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2022 22:56:26 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Rahman",
"Md. Ataur",
""
],
[
"Tabassum",
"Nazifa",
""
],
[
"Paul",
"Mitu",
""
],
[
"Pal",
"Riya",
""
],
[
"Islam",
"Mohammad Khairul",
""
]
] |
new_dataset
| 0.999869 |
2206.08990
|
Ruoshi Liu
|
Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Simon Stent, Carl
Vondrick
|
Shadows Shed Light on 3D Objects
|
19 pages, 10 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D reconstruction is a fundamental problem in computer vision, and the task
is especially challenging when the object to reconstruct is partially or fully
occluded. We introduce a method that uses the shadows cast by an unobserved
object in order to infer the possible 3D volumes behind the occlusion. We
create a differentiable image formation model that allows us to jointly infer
the 3D shape of an object, its pose, and the position of a light source. Since
the approach is end-to-end differentiable, we are able to integrate learned
priors of object geometry in order to generate realistic 3D shapes of different
object categories. Experiments and visualizations show that the method is able
to generate multiple possible solutions that are consistent with the
observation of the shadow. Our approach works even when the position of the
light source and the object pose are both unknown. Our approach is also robust to
real-world images where the ground-truth shadow mask is unknown.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 19:58:11 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Liu",
"Ruoshi",
""
],
[
"Menon",
"Sachit",
""
],
[
"Mao",
"Chengzhi",
""
],
[
"Park",
"Dennis",
""
],
[
"Stent",
"Simon",
""
],
[
"Vondrick",
"Carl",
""
]
] |
new_dataset
| 0.974318 |
2206.09010
|
Peter Eckmann
|
Peter Eckmann, Kunyang Sun, Bo Zhao, Mudong Feng, Michael K. Gilson,
Rose Yu
|
LIMO: Latent Inceptionism for Targeted Molecule Generation
|
16 pages, 5 figures, ICML 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generation of drug-like molecules with high binding affinity to target
proteins remains a difficult and resource-intensive task in drug discovery.
Existing approaches primarily employ reinforcement learning, Markov sampling,
or deep generative models guided by Gaussian processes, which can be
prohibitively slow when generating molecules with high binding affinity
calculated by computationally-expensive physics-based methods. We present
Latent Inceptionism on Molecules (LIMO), which significantly accelerates
molecule generation with an inceptionism-like technique. LIMO employs a
variational autoencoder-generated latent space and property prediction by two
neural networks in sequence to enable faster gradient-based
reverse-optimization of molecular properties. Comprehensive experiments show
that LIMO performs competitively on benchmark tasks and markedly outperforms
state-of-the-art techniques on the novel task of generating drug-like compounds
with high binding affinity, reaching nanomolar range against two protein
targets. We corroborate these docking-based results with more accurate
molecular dynamics-based calculations of absolute binding free energy and show
that one of our generated drug-like compounds has a predicted $K_D$ (a measure
of binding affinity) of $6 \cdot 10^{-14}$ M against the human estrogen
receptor, well beyond the affinities of typical early-stage drug candidates and
most FDA-approved drugs to their respective targets. Code is available at
https://github.com/Rose-STL-Lab/LIMO.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 21:05:58 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Eckmann",
"Peter",
""
],
[
"Sun",
"Kunyang",
""
],
[
"Zhao",
"Bo",
""
],
[
"Feng",
"Mudong",
""
],
[
"Gilson",
"Michael K.",
""
],
[
"Yu",
"Rose",
""
]
] |
new_dataset
| 0.983617 |
2206.09011
|
Jacques Bou Abdo
|
Jacques Bou Abdo, Shuvalaxmi Dass, Basheer Qolomany, Liaquat Hossain
|
Evolutionary Random Graph for Bitcoin Overlay and Blockchain Mining
Networks
|
12 pages, 12 figures, 13 equations
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The world economy is experiencing the novel adoption of distributed
currencies that are free from the control of central banks. Distributed
currencies suffer from extreme volatility, and this can lead to catastrophic
implications during future economic crises. Understanding the dynamics of this
new type of currencies is vital for empowering supervisory bodies from current
reactive and manual incident responders to more proactive and well-informed
planners. Bitcoin, the first and dominant distributed cryptocurrency, is still
notoriously vague, especially for a financial instrument with market value
exceeding 1 trillion. Modeling of bitcoin overlay network poses a number of
important theoretical and methodological challenges. Current measuring
approaches, for example, fail to identify the real network size of bitcoin
miners. This drastically undermines the ability to predict forks, the suitable
mining difficulty and most importantly the resilience of the network supporting
bitcoin. In this work, we developed Evolutionary Random Graph, a theoretical
model that describes the network of bitcoin miners. The correctness of this
model has been validated using simulated and measured real bitcoin data. We then
predicted forking, the optimal mining difficulty, the network size and, consequently,
the network's inability to withstand a drastic drop in bitcoin price using the current
mining configuration.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 21:10:19 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Abdo",
"Jacques Bou",
""
],
[
"Dass",
"Shuvalaxmi",
""
],
[
"Qolomany",
"Basheer",
""
],
[
"Hossain",
"Liaquat",
""
]
] |
new_dataset
| 0.995128 |
2206.09117
|
Mustafa Burak Gurbuz
|
Mustafa Burak Gurbuz and Constantine Dovrolis
|
NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual
Learning in Sparse Networks
|
International Conference on Machine Learning 2022
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The goal of continual learning (CL) is to learn different tasks over time.
The main desiderata associated with CL are to maintain performance on older
tasks, leverage the latter to improve learning of future tasks, and to
introduce minimal overhead in the training process (for instance, to not
require a growing model or retraining). We propose the Neuro-Inspired
Stability-Plasticity Adaptation (NISPA) architecture that addresses these
desiderata through a sparse neural network with fixed density. NISPA forms
stable paths to preserve learned knowledge from older tasks. Also, NISPA uses
connection rewiring to create new plastic paths that reuse existing knowledge
on novel tasks. Our extensive evaluation on EMNIST, FashionMNIST, CIFAR10, and
CIFAR100 datasets shows that NISPA significantly outperforms representative
state-of-the-art continual learning baselines, and it uses up to ten times
fewer learnable parameters compared to baselines. We also make the case that
sparsity is an essential ingredient for continual learning. The NISPA code is
available at https://github.com/BurakGurbuz97/NISPA.
|
[
{
"version": "v1",
"created": "Sat, 18 Jun 2022 04:56:49 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Gurbuz",
"Mustafa Burak",
""
],
[
"Dovrolis",
"Constantine",
""
]
] |
new_dataset
| 0.999551 |
2206.09166
|
Yijian Qin
|
Yijian Qin, Ziwei Zhang, Xin Wang, Zeyang Zhang, Wenwu Zhu
|
NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Graph neural architecture search (GraphNAS) has recently aroused considerable
attention in both academia and industry. However, two key challenges seriously
hinder the further research of GraphNAS. First, since there is no consensus for
the experimental setting, the empirical results in different research papers
are often not comparable and even not reproducible, leading to unfair
comparisons. Second, GraphNAS often needs extensive computation, which makes
it highly inefficient and inaccessible to researchers without access to
large-scale computation. To solve these challenges, we propose NAS-Bench-Graph,
a tailored benchmark that supports unified, reproducible, and efficient
evaluations for GraphNAS. Specifically, we construct a unified, expressive yet
compact search space, covering 26,206 unique graph neural network (GNN)
architectures and propose a principled evaluation protocol. To avoid
unnecessary repetitive training, we have trained and evaluated all of these
architectures on nine representative graph datasets, recording detailed metrics
including train, validation, and test performance in each epoch, the latency,
the number of parameters, etc. Based on our proposed benchmark, the performance
of GNN architectures can be directly obtained by a look-up table without any
further computation, which enables fair, fully reproducible, and efficient
comparisons. To demonstrate its usage, we make in-depth analyses of our
proposed NAS-Bench-Graph, revealing several interesting findings for GraphNAS.
We also showcase how the benchmark can be easily compatible with GraphNAS open
libraries such as AutoGL and NNI. To the best of our knowledge, our work is the
first benchmark for graph neural architecture search.
|
[
{
"version": "v1",
"created": "Sat, 18 Jun 2022 10:17:15 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Qin",
"Yijian",
""
],
[
"Zhang",
"Ziwei",
""
],
[
"Wang",
"Xin",
""
],
[
"Zhang",
"Zeyang",
""
],
[
"Zhu",
"Wenwu",
""
]
] |
new_dataset
| 0.963233 |
2206.09167
|
Randa Zarnoufi
|
Randa Zarnoufi, Walid Bachri, Hamid Jaafar and Mounia Abik
|
MANorm: A Normalization Dictionary for Moroccan Arabic Dialect Written
in Latin Script
|
The Fifth Arabic Natural Language Processing Workshop/COLING 2020
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Social media user-generated text is actually the main resource for many NLP
tasks. This text, however, does not follow the standard rules of writing.
Moreover, the use of a dialect such as Moroccan Arabic in written communications
further increases the complexity of NLP tasks. A dialect is a verbal language that
does not have a standard orthography, which leads users to improvise spelling
while writing. Thus, for the same word we can find multiple forms of
transliteration. Consequently, it is mandatory to normalize these different
transliterations to one canonical word form. To reach this goal, we have
exploited the power of word embedding models generated with a corpus of
YouTube comments. Besides, using a Moroccan Arabic dialect dictionary that
provides the canonical forms, we have built a normalization dictionary that we
refer to as MANorm. We have conducted several experiments to demonstrate the
efficiency of MANorm, which have shown its usefulness in dialect normalization.
|
[
{
"version": "v1",
"created": "Sat, 18 Jun 2022 10:17:46 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Zarnoufi",
"Randa",
""
],
[
"Bachri",
"Walid",
""
],
[
"Jaafar",
"Hamid",
""
],
[
"Abik",
"Mounia",
""
]
] |
new_dataset
| 0.999449 |
2206.09178
|
Jaehyuk Heo
|
Jaehyuk Heo, YongGi Jeong, Sunwoo Kim, Jaehee Kim, Pilsung Kang
|
REVECA -- Rich Encoder-decoder framework for Video Event CAptioner
|
The IEEE/CVF Computer Vision and Pattern Recognition Conference
(CVPR). LOng-form VidEo Understanding (LOVEU) workshop
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We describe an approach used in the Generic Boundary Event Captioning
challenge at the Long-Form Video Understanding Workshop held at CVPR 2022. We
designed a Rich Encoder-decoder framework for Video Event CAptioner (REVECA)
that utilizes spatial and temporal information from the video to generate a
caption for the corresponding event boundary. REVECA uses frame position
embedding to incorporate information before and after the event boundary.
Furthermore, it employs features extracted using the temporal segment network
and temporal-based pairwise difference method to learn temporal information. A
semantic segmentation mask for the attentional pooling process is adopted to
learn the subject of an event. Finally, LoRA is applied to fine-tune the image
encoder to enhance the learning efficiency. REVECA yielded an average score of
50.97 on the Kinetics-GEBC test data, which is an improvement of 10.17 over the
baseline method. Our code is available in https://github.com/TooTouch/REVECA.
|
[
{
"version": "v1",
"created": "Sat, 18 Jun 2022 11:10:12 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Heo",
"Jaehyuk",
""
],
[
"Jeong",
"YongGi",
""
],
[
"Kim",
"Sunwoo",
""
],
[
"Kim",
"Jaehee",
""
],
[
"Kang",
"Pilsung",
""
]
] |
new_dataset
| 0.986162 |
2206.09256
|
Zunayed Mahmud
|
Zunayed Mahmud, Paul Hungler, Ali Etemad
|
Multistream Gaze Estimation with Anatomical Eye Region Isolation by
Synthetic to Real Transfer Learning
|
14 pages, 10 figures, 12 tables. This work has been submitted to the
IEEE for possible publication. Copyright may be transferred without notice
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel neural pipeline, MSGazeNet, that learns gaze
representations by taking advantage of the eye anatomy information through a
multistream framework. Our proposed solution comprises two components, first a
network for isolating anatomical eye regions, and a second network for
multistream gaze estimation. The eye region isolation is performed with a U-Net
style network which we train using a synthetic dataset that contains eye region
masks for the visible eyeball and the iris region. The synthetic dataset used
in this stage is a new dataset consisting of 60,000 eye images, which we create
using an eye-gaze simulator, UnityEyes. Following training, the eye region
isolation network is transferred to the real domain to generate masks
for the real-world eye images. In order to successfully make the transfer, we
exploit domain randomization in the training process, which allows for the
synthetic images to benefit from a larger variance with the help of
augmentations that resemble artifacts. The generated eye region masks along
with the raw eye images are then used together as a multistream input to our
gaze estimation network. We evaluate our framework on three benchmark gaze
estimation datasets, MPIIGaze, Eyediap, and UTMultiview, where we set a new
state-of-the-art on Eyediap and UTMultiview datasets by obtaining a performance
gain of 7.57% and 1.85% respectively, while achieving competitive performance
on MPIIGaze. We also study the robustness of our method with respect to the
noise in the data and demonstrate that our model is less sensitive to noisy
data. Lastly, we perform a variety of experiments including ablation studies to
evaluate the contribution of different components and design choices in our
solution.
|
[
{
"version": "v1",
"created": "Sat, 18 Jun 2022 17:57:32 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Mahmud",
"Zunayed",
""
],
[
"Hungler",
"Paul",
""
],
[
"Etemad",
"Ali",
""
]
] |
new_dataset
| 0.999384 |
2206.09286
|
Zhengyi Luo
|
Zhengyi Luo, Ye Yuan, Kris M. Kitani
|
From Universal Humanoid Control to Automatic Physically Valid Character
Creation
|
Project page: https://zhengyiluo.github.io/projects/agent_design/
| null | null | null |
cs.GR cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Automatically designing virtual humans and humanoids holds great potential in
aiding the character creation process in games, movies, and robots. In some
cases, a character creator may wish to design a humanoid body customized for
certain motions such as karate kicks and parkour jumps. In this work, we
propose a humanoid design framework to automatically generate physically valid
humanoid bodies conditioned on sequence(s) of pre-specified human motions.
First, we learn a generalized humanoid controller trained on a large-scale
human motion dataset that features diverse human motion and body shapes.
Second, we use a design-and-control framework to optimize a humanoid's physical
attributes to find body designs that can better imitate the pre-specified human
motion sequence(s). Leveraging the pre-trained humanoid controller and physics
simulation as guidance, our method is able to discover new humanoid designs
that are customized to perform pre-specified human motions.
|
[
{
"version": "v1",
"created": "Sat, 18 Jun 2022 22:04:44 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Luo",
"Zhengyi",
""
],
[
"Yuan",
"Ye",
""
],
[
"Kitani",
"Kris M.",
""
]
] |
new_dataset
| 0.999505 |
2206.09310
|
Susmit Shannigrahi
|
Robert Thompson, Muhammad Ismail, Susmit Shannigrahi
|
Vehicle-to-Vehicle Charging Coordination over Information Centric
Networking
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Cities around the world are increasingly promoting electric vehicles (EV) to
reduce and ultimately eliminate greenhouse gas emissions. For example, the city
of San Francisco aims to increase the number of EVs from tens of thousands to
over a quarter of a million by 2025. This huge number of EVs will put
unprecedented stress on the power grid. To efficiently serve the increased
charging load, these EVs need to be charged in a coordinated fashion. One
promising coordination strategy is vehicle-to-vehicle (V2V) charging
coordination, enabling EVs to sell their surplus energy in an ad-hoc, peer to
peer manner.
Enabling V2V charging coordination requires new communication network
protocols that can facilitate such charging coordination in a peer-to-peer
fashion. This paper introduces an Information Centric Networking (ICN)-based
protocol to support ad-hoc V2V charging coordination (V2V-CC). Our evaluations
demonstrate that V2V-CC can provide added flexibility, fault tolerance, and
reduced communication latency compared to a conventional centralized cloud-based
approach. We show that V2V-CC can achieve a 93% reduction in protocol
completion time compared to a conventional approach. We also show that V2V-CC
works well under extreme packet loss, making it ideal for V2V charging
coordination.
|
[
{
"version": "v1",
"created": "Sun, 19 Jun 2022 02:27:30 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Thompson",
"Robert",
""
],
[
"Ismail",
"Muhammad",
""
],
[
"Shannigrahi",
"Susmit",
""
]
] |
new_dataset
| 0.996506 |
2206.09476
|
Lear Bahack
|
Lear Bahack
|
The Game of Tumbleweed is PSPACE-complete
| null | null | null | null |
cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tumbleweed is a popular new two-player perfect-information territorial game
played at the prestigious Mind Sport Olympiad. We define a generalized version
of the game, where the board size is arbitrary and so is the possible number of
neutral stones.
Our result: the complexity of deciding for a given configuration which of the
players has a winning strategy is PSPACE-complete. The proof is by a log-space
reduction from a Boolean formula game of T.J. Schaefer, known to be
PSPACE-complete.
We embed the non-planar Schaefer game within the planar Tumbleweed board
without using proper "bridges", which are impossible due to the board's
topology. Instead, our new technique uses a one-move tight race that forces the
players to move only according to the protocol of playing the embedded 4-CNF
game.
|
[
{
"version": "v1",
"created": "Sun, 19 Jun 2022 19:45:55 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Bahack",
"Lear",
""
]
] |
new_dataset
| 0.999769 |
2206.09570
|
Ko-Wei Tai
|
Ko-Wei Tai, HuaYen Lee, Hsin-Huei Chen, Jeng-Sheng Yeh, Ming Ouhyoung
|
Guardian Angel: A Novel Walking Aid for the Visually Impaired
|
2 pages, 1 figure
| null | null | null |
cs.HC cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This work introduces Guardian Angel, an Android App that assists visually
impaired people to avoid danger in complex traffic environment. The system,
consisting of object detection by pretrained YOLO model, distance estimation
and moving direction estimation, provides information about surrounding
vehicles and alarms users of potential danger without expensive special purpose
device. In an experiment with 8 subjects, we corroborate that the satisfaction
score in a pedestrian-crossing task is higher with the assistance of our
smartphone App than without it, at the 99% confidence level. The time needed to
cross a road is shorter on average with the assistance of our system, although
the difference did not reach statistical significance in our experiment. The
App has been released on the Google Play Store, open to the public for free.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 04:57:40 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Tai",
"Ko-Wei",
""
],
[
"Lee",
"HuaYen",
""
],
[
"Chen",
"Hsin-Huei",
""
],
[
"Yeh",
"Jeng-Sheng",
""
],
[
"Ouhyoung",
"Ming",
""
]
] |
new_dataset
| 0.997809 |
2206.09600
|
Phuong Phan-Dieu Ha
|
Nhung Thi-Hong Nguyen, Phuong Phan-Dieu Ha, Luan Thanh Nguyen, Kiet
Van Nguyen, Ngan Luu-Thuy Nguyen
|
SPBERTQA: A Two-Stage Question Answering System Based on Sentence
Transformers for Medical Texts
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Question answering (QA) systems have gained explosive attention in recent
years. However, there are few datasets for QA tasks in Vietnamese, and there is
almost no dataset in the medical domain. Therefore, we built a Vietnamese
Healthcare Question Answering dataset (ViHealthQA), including 10,015
question-answer passage pairs for this task, in which questions were asked by
health-interested users on prestigious health websites and answers were given
by highly qualified experts. This paper proposes a
two-stage QA system based on Sentence-BERT (SBERT) using multiple negatives
ranking (MNR) loss combined with BM25. Then, we conduct diverse experiments
with many bag-of-words models to assess our system's performance. With the
obtained results, this system achieves better performance than traditional
methods.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 07:07:59 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Nguyen",
"Nhung Thi-Hong",
""
],
[
"Ha",
"Phuong Phan-Dieu",
""
],
[
"Nguyen",
"Luan Thanh",
""
],
[
"Van Nguyen",
"Kiet",
""
],
[
"Nguyen",
"Ngan Luu-Thuy",
""
]
] |
new_dataset
| 0.99964 |
2206.09667
|
Seongdeok Bang Dr.
|
Ehtesham Iqbal, Sirojbek Safarov, Seongdeok Bang
|
MSANet: Multi-Similarity and Attention Guidance for Boosting Few-Shot
Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Few-shot segmentation (FSS) aims to segment unseen-class objects given only a
handful of densely labeled samples. Prototype learning, where the support
feature yields a single or several prototypes by averaging global and local
object information, has been widely used in FSS. However, utilizing only
prototype vectors may be insufficient to represent the features for all
training data. To extract abundant features and make more precise predictions,
we propose a Multi-Similarity and Attention Network (MSANet) including two
novel modules, a multi-similarity module and an attention module. The
multi-similarity module exploits multiple feature-maps of support images and
query images to estimate accurate semantic relationships. The attention module
instructs the network to concentrate on class-relevant information. The network
is tested on standard FSS datasets, PASCAL-5i 1-shot, PASCAL-5i 5-shot,
COCO-20i 1-shot, and COCO-20i 5-shot. The MSANet with the backbone of
ResNet-101 achieves the state-of-the-art performance for all 4-benchmark
datasets with mean intersection over union (mIoU) of 69.13%, 73.99%, 51.09%,
56.80%, respectively. Code is available at
https://github.com/AIVResearch/MSANet
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 09:14:17 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Iqbal",
"Ehtesham",
""
],
[
"Safarov",
"Sirojbek",
""
],
[
"Bang",
"Seongdeok",
""
]
] |
new_dataset
| 0.998335 |
2206.09699
|
Konstantinos Sfikas
|
K. Sfikas, P. Perakis and T. Theoharis
|
FoR$^2$M: Recognition and Repair of Foldings in Mesh Surfaces.
Application to 3D Object Degradation
| null | null | null | null |
cs.CG cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Triangular meshes are the most popular representations of 3D objects, but
many mesh surfaces contain topological singularities that represent a challenge
for displaying or further processing them properly. One such singularity is the
self-intersections that may be present in mesh surfaces that have been created
by a scanning procedure or by a deformation transformation, such as
off-setting.
Mesh foldings comprise a special case of mesh surface self-intersections,
where the faces of the 3D model intersect and become reversed, with respect to
the unfolded part of the mesh surface. A novel method for the recognition and
repair of mesh surface foldings is presented, which exploits the structural
characteristics of the foldings in order to efficiently detect the folded
regions. Following detection, the foldings are removed and any gaps so created
are filled based on the geometry of the 3D model. The proposed method is
directly applicable to simple mesh surface representations while it does not
perform any embedding of the 3D mesh (i.e. voxelization, projection). The target of
the proposed method is to facilitate mesh degradation procedures in a fashion
that retains the original structure, given the operator, in the most efficient
manner.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 10:43:32 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Sfikas",
"K.",
""
],
[
"Perakis",
"P.",
""
],
[
"Theoharis",
"T.",
""
]
] |
new_dataset
| 0.995734 |
2206.09782
|
Martianus Frederic Ezerman
|
Gaojun Luo, Martianus Frederic Ezerman, and San Ling
|
Entanglement-Assisted and Subsystem Quantum Codes: New Propagation Rules
and Constructions
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes new propagation rules on quantum codes in the
entanglement-assisted and in quantum subsystem scenarios. The rules lead to new
families of such quantum codes whose parameters are demonstrably optimal. To
obtain the results, we devise tools to puncture and shorten codes in ways that
ensure their Hermitian hulls have certain desirable properties. More
specifically, we give a general framework to construct $k$-dimensional
generalized Reed-Solomon codes whose Hermitian hulls are $(k-1)$-dimensional
maximum distance separable codes.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 14:02:06 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Luo",
"Gaojun",
""
],
[
"Ezerman",
"Martianus Frederic",
""
],
[
"Ling",
"San",
""
]
] |
new_dataset
| 0.999757 |
2206.09790
|
Jonathan Mukiibi
|
Jonathan Mukiibi, Andrew Katumba, Joyce Nakatumba-Nabende, Ali
Hussein, Josh Meyer
|
The Makerere Radio Speech Corpus: A Luganda Radio Corpus for Automatic
Speech Recognition
|
Proceedings of the 13th Conference on Language Resources and
Evaluation (LREC 2022), pages 1945-1954, Marseille, 20-25 June 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Building a usable radio monitoring automatic speech recognition (ASR) system
is a challenging task for under-resourced languages and yet this is paramount
in societies where radio is the main medium of public communication and
discussions. Initial efforts by the United Nations in Uganda have proved how
understanding the perceptions of rural people who are excluded from social
media is important in national planning. However, these efforts are being
challenged by the absence of transcribed speech datasets. In this paper, the
Makerere Artificial Intelligence research lab releases a Luganda radio speech
corpus of 155 hours. To our knowledge, this is the first publicly available
radio dataset in sub-Saharan Africa. The paper describes the development of the
voice corpus and presents baseline Luganda ASR performance results using the
Coqui STT toolkit, an open-source speech recognition toolkit.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 14:19:35 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Mukiibi",
"Jonathan",
""
],
[
"Katumba",
"Andrew",
""
],
[
"Nakatumba-Nabende",
"Joyce",
""
],
[
"Hussein",
"Ali",
""
],
[
"Meyer",
"Josh",
""
]
] |
new_dataset
| 0.99968 |
2206.09853
|
Haoning Wu Mr
|
Haoning Wu, Chaofeng Chen, Liang Liao, Jingwen Hou, Wenxiu Sun, Qiong
Yan, Weisi Lin
|
DisCoVQA: Temporal Distortion-Content Transformers for Video Quality
Assessment
| null | null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
The temporal relationships between frames and their influences on video
quality assessment (VQA) are still under-studied in existing works. These
relationships lead to two important types of effects for video quality.
Firstly, some temporal variations (such as shaking, flicker, and abrupt scene
transitions) cause temporal distortions and lead to extra quality
degradations, while other variations (e.g. those related to meaningful
happenings) do not. Secondly, the human visual system often has different
attention to frames with different contents, resulting in their different
importance to the overall video quality. Based on the prominent time-series
modeling ability of transformers, we propose a novel and effective
transformer-based VQA method to tackle these two issues. To better
differentiate temporal variations and thus capture the temporal distortions, we
design a transformer-based Spatial-Temporal Distortion Extraction (STDE)
module. To tackle temporal quality attention, we propose the
encoder-decoder-like temporal content transformer (TCT). We also introduce the
temporal sampling on features to reduce the input length for the TCT, so as to
improve the learning effectiveness and efficiency of this module. Consisting of
the STDE and the TCT, the proposed Temporal Distortion-Content Transformers for
Video Quality Assessment (DisCoVQA) reaches state-of-the-art performance on
several VQA benchmarks without any extra pre-training datasets and up to 10%
better generalization ability than existing methods. We also conduct extensive
ablation experiments to prove the effectiveness of each part in our proposed
model, and provide visualizations to prove that the proposed modules achieve
our intention of modeling these temporal issues. We will publish our code and
pretrained weights later.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 15:31:27 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Wu",
"Haoning",
""
],
[
"Chen",
"Chaofeng",
""
],
[
"Liao",
"Liang",
""
],
[
"Hou",
"Jingwen",
""
],
[
"Sun",
"Wenxiu",
""
],
[
"Yan",
"Qiong",
""
],
[
"Lin",
"Weisi",
""
]
] |
new_dataset
| 0.995637 |
2206.09885
|
Abhilasha Nanda
|
Abhilasha Nanda, Sung Won Cho, Hyeopwoo Lee, Jin Hyoung Park
|
KOLOMVERSE: KRISO open large-scale image dataset for object detection in
the maritime universe
|
13 Pages, 12 figures, submitted to NeurIPS 2022 Datasets and
Benchmarks Track (Under Review)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Over the years, datasets have been developed for various object detection
tasks. Object detection in the maritime domain is essential for the safety and
navigation of ships. However, there is still a lack of publicly available
large-scale datasets in the maritime domain. To overcome this challenge, we
present KOLOMVERSE, an open large-scale image dataset for object detection in
the maritime domain by KRISO (Korea Research Institute of Ships and Ocean
Engineering). We collected 5,845 hours of video data captured from 21
territorial waters of South Korea. Through an elaborate data quality assessment
process, we gathered around 2,151,470 4K resolution images from the video data.
This dataset considers various environments: weather, time, illumination,
occlusion, viewpoint, background, wind speed, and visibility. The KOLOMVERSE
consists of five classes (ship, buoy, fishnet buoy, lighthouse and wind farm)
for maritime object detection. The dataset has images of 3840$\times$2160
pixels and to our knowledge, it is by far the largest publicly available
dataset for object detection in the maritime domain. We performed object
detection experiments and evaluated our dataset on several pre-trained
state-of-the-art architectures to show the effectiveness and usefulness of our
dataset. The dataset is available at:
\url{https://github.com/MaritimeDataset/KOLOMVERSE}.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 16:45:12 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Nanda",
"Abhilasha",
""
],
[
"Cho",
"Sung Won",
""
],
[
"Lee",
"Hyeopwoo",
""
],
[
"Park",
"Jin Hyoung",
""
]
] |
new_dataset
| 0.999902 |
2206.09894
|
Noble Mathews
|
Noble Saji Mathews, Sridhar Chimalakonda
|
NoteG: A Computational Notebook to Facilitate Rapid Game Prototyping
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Game development-based approaches are increasingly used to design curricula
that can engage students, as these can help them apply and practice learnt
computer science concepts. However, it can become complex to develop a minimum
working game or a prototype with the help of high-end game engines. Game
prototyping is one of the most essential parts of the game design and
development cycle as it allows developers to continuously test and improve
their ideas. In recent years, computational notebooks have gained widespread
popularity among developers. They can help run individual code snippets,
visualize the output, consolidate the source code, and share live code easily.
However, its use has not been explored in the field of game development and
prototyping. In this paper, we propose NoteG, a computational notebook towards
rapid game prototyping. We evaluated the tool with 18 novice game developers
through a questionnaire-based user survey. A majority of the volunteers (66%)
found it easy to use and were of the opinion that it saves time. A few of the
participants successfully extended the existing framework to implement new game
mechanics within their prototypes.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 17:05:00 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Mathews",
"Noble Saji",
""
],
[
"Chimalakonda",
"Sridhar",
""
]
] |
new_dataset
| 0.997137 |
2206.09917
|
Paul Röttger
|
Paul Röttger, Haitham Seelawi, Debora Nozza, Zeerak Talat, Bertie
Vidgen
|
Multilingual HateCheck: Functional Tests for Multilingual Hate Speech
Detection Models
|
Accepted at WOAH (NAACL 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Hate speech detection models are typically evaluated on held-out test sets.
However, this risks painting an incomplete and potentially misleading picture
of model performance because of increasingly well-documented systematic gaps
and biases in hate speech datasets. To enable more targeted diagnostic
insights, recent research has thus introduced functional tests for hate speech
detection models. However, these tests currently only exist for
English-language content, which means that they cannot support the development
of more effective models in other languages spoken by billions across the
world. To help address this issue, we introduce Multilingual HateCheck (MHC), a
suite of functional tests for multilingual hate speech detection models. MHC
covers 34 functionalities across ten languages, which is more languages than
any other hate speech dataset. To illustrate MHC's utility, we train and test a
high-performing multilingual hate speech detection model, and reveal critical
model weaknesses for monolingual and cross-lingual applications.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 17:54:39 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Röttger",
"Paul",
""
],
[
"Seelawi",
"Haitham",
""
],
[
"Nozza",
"Debora",
""
],
[
"Talat",
"Zeerak",
""
],
[
"Vidgen",
"Bertie",
""
]
] |
new_dataset
| 0.997196 |
2206.09920
|
Yi Shi
|
Yi Wang, Yi Si
|
WOLONet: Wave Outlooker for Efficient and High Fidelity Speech Synthesis
| null | null | null | null |
cs.SD cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, GAN-based neural vocoders such as Parallel WaveGAN, MelGAN,
HiFiGAN, and UnivNet have become popular due to their lightweight and parallel
structure, resulting in a real-time synthesized waveform with high fidelity,
even on a CPU. HiFiGAN and UnivNet are two SOTA vocoders. Despite their high
quality, there is still room for improvement. In this paper, motivated by the
structure of Vision Outlooker from computer vision, we adopt a similar idea and
propose an effective and lightweight neural vocoder called WOLONet. In this
network, we develop a novel lightweight block that uses a location-variable,
channel-independent, and depthwise dynamic convolutional kernel with
sinusoidally activated dynamic kernel weights. To demonstrate the effectiveness
and generalizability of our method, we perform an ablation study to verify our
novel design and make a subjective and objective comparison with typical
GAN-based vocoders. The results show that our WOLONet achieves the best
generation quality while requiring fewer parameters than the two neural SOTA
vocoders, HiFiGAN and UnivNet.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 17:58:52 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Wang",
"Yi",
""
],
[
"Si",
"Yi",
""
]
] |
new_dataset
| 0.993759 |
2206.09946
|
Xin Jin
|
Yanru Jiang, Xin Jin, Qinhao Deng
|
Short Video Uprising: How #BlackLivesMatter Content on TikTok Challenges
the Protest Paradigm
|
Workshop Proceedings of the 16th International AAAI Conference on Web
and Social Media
| null |
10.36190/2022.42
| null |
cs.CY cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study uses TikTok (N = 8,173) to examine how short-form video platforms
challenge the protest paradigm in the recent Black Lives Matter movement. A
computer-mediated visual analysis, computer vision, is employed to identify the
presence of four visual frames of protest (riot, confrontation, spectacle, and
debate) in multimedia content. Results of descriptive statistics and the t-test
indicate that the three delegitimizing frames - riot, confrontation, and
spectacle - are rarely found on TikTok, whereas the debate frame, which empowers
marginalized communities, dominates the public sphere. However, although the
three delegitimizing frames receive lower social media visibility, as measured
by views, likes, shares, followers, and durations, legitimizing elements, such
as the debate frame, minority identities, and unofficial sources, are not
generally favored by TikTok audiences. This study concludes that while
short-form video platforms could potentially challenge the protest paradigm on
the content creators' side, the audiences' preference as measured by social
media visibility might still be moderately associated with the protest
paradigm.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 18:05:07 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Jiang",
"Yanru",
""
],
[
"Jin",
"Xin",
""
],
[
"Deng",
"Qinhao",
""
]
] |
new_dataset
| 0.999359 |
2206.09983
|
Bibek Bhattarai
|
Bibek Bhattarai and Howie Huang
|
Mnemonic: A Parallel Subgraph Matching System for Streaming Graphs
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Finding patterns in large highly connected datasets is critical for value
discovery in business development and scientific research. This work focuses on
the problem of subgraph matching on streaming graphs, which provides utility in
a myriad of real-world applications ranging from social network analysis to
cybersecurity. Each application poses a different set of control parameters,
including the restrictions for a match, type of data stream, and search
granularity. The problem-driven design of existing subgraph matching systems
makes them challenging to apply for different problem domains. This paper
presents Mnemonic, a programmable system that provides a high-level API and
democratizes the development of a wide variety of subgraph matching solutions.
Importantly, Mnemonic also delivers key data management capabilities and
optimizations to support real-time processing on long-running, high-velocity
multi-relational graph streams. The experiments demonstrate the versatility of
Mnemonic, as it outperforms several state-of-the-art systems by up to two
orders of magnitude.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 20:05:39 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Bhattarai",
"Bibek",
""
],
[
"Huang",
"Howie",
""
]
] |
new_dataset
| 0.997134 |
2206.10041
|
Stepan Konev
|
Stepan Konev
|
MPA: MultiPath++ Based Architecture for Motion Prediction
|
CVPR 2022, Workshop on Autonomous Driving
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous driving technology is developing rapidly and nowadays first
autonomous rides are being provided in city areas. This requires the highest
standards for the safety and reliability of the technology. Motion prediction
part of the general self-driving pipeline plays a crucial role in providing
these qualities. In this work we present one of the solutions for the Waymo Motion
Prediction Challenge 2022, based on MultiPath++, which ranked 3rd as of May 26,
2022. Our source code is publicly available on GitHub.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 23:06:55 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Konev",
"Stepan",
""
]
] |
new_dataset
| 0.99945 |
2206.10064
|
Hossein Rastgoftar
|
Aeris El Asslouj, Harshvardhan Uppaluru, and Hossein Rastgoftar
|
Fast and Safe Aerial Payload Transport in Urban Areas
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the problem of fast and safe aerial payload transport by a
single quadcopter in urban areas. The quadcopter payload system (QPS) is
considered a rigid body and modeled with nonlinear dynamics. The urban
area is modeled as an obstacle-laden environment with obstacle geometries
obtained by incorporating realistic LIDAR data. Our approach for payload
transport is decomposed into high-level motion planning and low-level
trajectory control. For the low-level trajectory tracking, a feedback
linearization control is applied to stably track the desired trajectory of the
quadcopter. For high-level motion planning, we integrate A* search and
polynomial planning to define a safe trajectory for the quadcopter assuring
collision avoidance, boundedness of the quadcopter rotor speeds and tracking
error, and fast arrival to a target destination from an arbitrary initial
location.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 01:20:40 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Asslouj",
"Aeris El",
""
],
[
"Uppaluru",
"Harshvardhan",
""
],
[
"Rastgoftar",
"Hossein",
""
]
] |
new_dataset
| 0.997331 |
2206.10110
|
Nguyen Khoi Tran
|
Nguyen Khoi Tran, Bushra Sabir, M. Ali Babar, Nini Cui, Mehran
Abolhasan, Justin Lipman
|
ProML: A Decentralised Platform for Provenance Management of Machine
Learning Software Systems
|
Accepted as full paper in ECSA 2022 conference. To be presented
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Large-scale Machine Learning (ML) based Software Systems are increasingly
developed by distributed teams situated in different trust domains. Insider
threats can launch attacks from any domain to compromise ML assets (models and
datasets). Therefore, practitioners require information about how and by whom
ML assets were developed to assess their quality attributes such as security,
safety, and fairness. Unfortunately, it is challenging for ML teams to access
and reconstruct such historical information of ML assets (ML provenance)
because it is generally fragmented across distributed ML teams and threatened
by the same adversaries that attack ML assets. This paper proposes ProML, a
decentralised platform that leverages blockchain and smart contracts to empower
distributed ML teams to jointly manage a single source of truth about
circulated ML assets' provenance without relying on a third party, which is
vulnerable to insider threats and presents a single point of failure. We
propose a novel architectural approach called Artefact-as-a-State-Machine to
leverage blockchain transactions and smart contracts for managing ML provenance
information and introduce a user-driven provenance capturing mechanism to
integrate existing scripts and tools to ProML without compromising
participants' control over their assets and toolchains. We evaluate the
performance and overheads of ProML by benchmarking a proof-of-concept system on
a global blockchain. Furthermore, we assessed ProML's security against a threat
model of a distributed ML workflow.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 04:58:09 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Tran",
"Nguyen Khoi",
""
],
[
"Sabir",
"Bushra",
""
],
[
"Babar",
"M. Ali",
""
],
[
"Cui",
"Nini",
""
],
[
"Abolhasan",
"Mehran",
""
],
[
"Lipman",
"Justin",
""
]
] |
new_dataset
| 0.997215 |
2206.10177
|
Rui-Jie Zhu
|
Rui-Jie Zhu, Qihang Zhao, Tianjing Zhang, Haoyu Deng, Yule Duan, Malu
Zhang, Liang-Jian Deng
|
TCJA-SNN: Temporal-Channel Joint Attention for Spiking Neural Networks
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Spiking Neural Networks (SNNs) are a practical approach toward more
data-efficient deep learning, achieved by simulating how neurons leverage temporal
information. In this paper, we propose the Temporal-Channel Joint Attention
(TCJA) architectural unit, an efficient SNN technique that relies on attention
mechanisms to effectively enforce the relevance of the spike sequence along both
spatial and temporal dimensions. Our essential technical contribution lies in:
1) compressing the spike stream into an average matrix by employing the squeeze
operation, then using two local attention mechanisms with an efficient 1-D
convolution to establish temporal-wise and channel-wise relations for feature
extraction in a flexible fashion. 2) utilizing the Cross Convolutional Fusion
(CCF) layer for modeling inter-dependencies between temporal and channel scope,
which breaks the independence of the two dimensions and realizes the
interaction between features. By virtue of jointly exploring and recalibrating
data stream, our method outperforms the state-of-the-art (SOTA) by up to 15.7%
in terms of top-1 classification accuracy on all tested mainstream static and
neuromorphic datasets, including Fashion-MNIST, CIFAR10-DVS, N-Caltech 101, and
DVS128 Gesture.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 08:16:08 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Zhu",
"Rui-Jie",
""
],
[
"Zhao",
"Qihang",
""
],
[
"Zhang",
"Tianjing",
""
],
[
"Deng",
"Haoyu",
""
],
[
"Duan",
"Yule",
""
],
[
"Zhang",
"Malu",
""
],
[
"Deng",
"Liang-Jian",
""
]
] |
new_dataset
| 0.950339 |
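The TCJA record above describes squeezing the spike stream into an average matrix and applying two 1-D convolutional attention branches along the temporal and channel axes. A minimal PyTorch sketch of that general idea follows; it is not the authors' implementation, and the tensor shapes, module names, and sigmoid gating are assumptions made for illustration.

```python
# Minimal sketch of a temporal-channel joint attention idea for a spike tensor.
# Shapes and module names are assumptions, not the paper's released code.
import torch
import torch.nn as nn

class TemporalChannelAttention(nn.Module):
    def __init__(self, timesteps: int, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # 1-D convolutions acting along the temporal and channel axes of the
        # squeezed (T x C) average matrix.
        self.conv_t = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.conv_c = nn.Conv1d(timesteps, timesteps, kernel_size, padding=pad)

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (batch, T, C, H, W) spike tensor.
        squeezed = spikes.mean(dim=(-2, -1))            # (B, T, C) average matrix
        attn_t = self.conv_t(squeezed.transpose(1, 2))  # conv along time -> (B, C, T)
        attn_c = self.conv_c(squeezed)                  # conv along channels -> (B, T, C)
        # Fuse the two attention maps and gate the original spike tensor.
        attn = torch.sigmoid(attn_t.transpose(1, 2) * attn_c)   # (B, T, C)
        return spikes * attn[..., None, None]

x = torch.rand(2, 4, 8, 16, 16)                         # (B, T, C, H, W)
out = TemporalChannelAttention(timesteps=4, channels=8)(x)
print(out.shape)                                        # torch.Size([2, 4, 8, 16, 16])
```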
2206.10192
|
Leonardo Rossi
|
Leonardo Rossi, Marco Valenti, Sara Elisabetta Legler, Andrea Prati
|
LDD: A Dataset for Grape Diseases Object Detection and Instance
Segmentation
| null |
International Conference on Image Analysis and Processing.
Springer, Cham, 2022
|
10.1007/978-3-031-06430-2_32
| null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Instance Segmentation task, an extension of the well-known Object
Detection task, is of great help in many areas such as precision agriculture:
being able to automatically identify plant organs and the possible diseases
associated with them makes it possible to effectively scale and automate crop
monitoring and disease control. To address the problem of early disease
detection and diagnosis on vine plants, a new dataset has been created with
the goal of advancing the state of the art in disease recognition via instance
segmentation approaches. This was achieved by gathering images of leaves and
clusters of grapes affected by diseases in their natural context. The dataset
contains photos of 10 object types which include leaves and grapes with and
without symptoms of the eight most common grape diseases, with a total of
17,706 labeled instances in 1,092 images. Multiple statistical measures are
proposed in order to offer a complete view on the characteristics of the
dataset. Preliminary results for the object detection and instance segmentation
tasks obtained with the models Mask R-CNN and R^3-CNN are provided as
baselines, demonstrating that the procedure is able to achieve promising
results on the task of automatic disease-symptom recognition.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 08:50:13 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Rossi",
"Leonardo",
""
],
[
"Valenti",
"Marco",
""
],
[
"Legler",
"Sara Elisabetta",
""
],
[
"Prati",
"Andrea",
""
]
] |
new_dataset
| 0.999882 |
2206.10295
|
Mang Li
|
Mang Li
|
Dynamic Reserve Price Design for Lazada Sponsored Search
| null | null | null | null |
cs.GT cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
On an e-commerce platform, users will be less likely to use organic search if
sponsored search shows them unexpected advertising items, which is a
hidden cost for the platform. To incorporate this hidden cost into the
auction mechanism and help create positive growth for the platform, we turn
to a reserve price design that decides whether to sell the traffic and
builds a healthy relationship between revenue and user experience. We propose a
dynamic reserve price design framework to sell traffic more efficiently with
minimal cost of user experience while keeping long term incentives to the
advertisers to reveal their valuations truthfully. A distributed algorithm is
also proposed to compute the reserve price with billion scale data in the
production environment. Experiments with offline evaluations and online AB
testing demonstrate that it is a simple and efficient method to be suitably
used in industrial production. It has already been fully deployed in the
production of Lazada sponsored search.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 12:20:09 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Li",
"Mang",
""
]
] |
new_dataset
| 0.998066 |
2206.10312
|
Michal Nazarczuk
|
Michal Nazarczuk and Tony Ng and Krystian Mikolajczyk
|
SAMPLE-HD: Simultaneous Action and Motion Planning Learning Environment
|
CVPRW, 2 pages
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans exhibit incredibly high levels of multi-modal understanding -
combining visual cues with read or heard knowledge comes easily to us and allows
for very accurate interaction with the surrounding environment. Various
simulation environments focus on providing data for tasks related to scene
understanding, question answering, space exploration, and visual navigation. In
this work, we provide a solution that encompasses both the visual and
behavioural aspects of simulation in a new environment for learning interactive
reasoning in a manipulation setup. The SAMPLE-HD environment allows users to
generate various scenes composed of small household objects, to procedurally
generate language instructions for manipulation, and to generate ground-truth
paths serving as training data.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 15:42:05 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Nazarczuk",
"Michal",
""
],
[
"Ng",
"Tony",
""
],
[
"Mikolajczyk",
"Krystian",
""
]
] |
new_dataset
| 0.995502 |
2206.10375
|
Mansi Sharma
|
Rohit Choudhary and Mansi Sharma and Uma T V and Rithvik Anil
|
MEStereo-Du2CNN: A Novel Dual Channel CNN for Learning Robust Depth
Estimates from Multi-exposure Stereo Images for HDR 3D Applications
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Display technologies have evolved over the years. It is critical to develop
practical HDR capturing, processing, and display solutions to bring 3D
technologies to the next level. Depth estimation of multi-exposure stereo image
sequences is an essential task in the development of cost-effective 3D HDR
video content. In this paper, we develop a novel deep architecture for
multi-exposure stereo depth estimation. The proposed architecture has two novel
components. First, the stereo matching technique used in traditional stereo
depth estimation is revamped. For the stereo depth estimation component of our
architecture, a mono-to-stereo transfer learning approach is deployed. The
proposed formulation circumvents the cost volume construction requirement,
which is replaced by a ResNet based dual-encoder single-decoder CNN with
different weights for feature fusion. EfficientNet based blocks are used to
learn the disparity. Secondly, we combine disparity maps obtained from the
stereo images at different exposure levels using a robust disparity feature
fusion approach. The disparity maps obtained at different exposures are merged
using weight maps calculated for different quality measures. The final
predicted disparity map is more robust and retains the best features that
preserve the depth discontinuities. The proposed CNN offers flexibility to
train using standard dynamic range stereo data or with multi-exposure low
dynamic range stereo sequences. In terms of performance, the proposed model
surpasses state-of-the-art monocular and stereo depth estimation methods, both
quantitatively and qualitatively, on challenging Scene flow and differently
exposed Middlebury stereo datasets. The architecture performs exceedingly well
on complex natural scenes, demonstrating its usefulness for diverse 3D HDR
applications.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 13:23:22 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Choudhary",
"Rohit",
""
],
[
"Sharma",
"Mansi",
""
],
[
"T",
"Uma",
"V"
],
[
"Anil",
"Rithvik",
""
]
] |
new_dataset
| 0.979714 |
2206.10390
|
Oliver Bendel
|
Martin Spathelf and Oliver Bendel
|
The SPACE THEA Project
|
Accepted paper of the AAAI 2022 Spring Symposium "How Fair is Fair?
Achieving Wellbeing AI" (Stanford University)
| null | null | null |
cs.HC cs.AI cs.CY cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In some situations, no professional human contact can be available.
Accordingly, one remains alone with one's problems and fears. A manned Mars
flight is certainly such a situation. A voice assistant that shows empathy and
assists the astronauts could be a solution. In the SPACE THEA project, a
prototype with such capabilities was developed using Google Assistant and
Dialogflow Essentials. The voice assistant has a personality based on
characteristics such as functional intelligence, sincerity, creativity, and
emotional intelligence. It proves itself in seven different scenarios designed
to represent the daily lives of astronauts, addressing operational crises and
human problems. The paper describes the seven scenarios in detail, and lists
technical and conceptual foundations of the voice assistant. Finally, the most
important results are stated and the chapters are summarized.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 12:33:33 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Spathelf",
"Martin",
""
],
[
"Bendel",
"Oliver",
""
]
] |
new_dataset
| 0.999448 |
2206.10418
|
Hongjun Wang
|
Zhiwen Zhang, Hongjun Wang, Zipei Fan, Jiyuan Chen, Xuan Song, and
Ryosuke Shibasaki
|
Route to Time and Time to Route: Travel Time Estimation from Sparse
Trajectories
| null | null | null | null |
cs.AI cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the rapid development of Internet of Things (IoT) technologies, many
online web apps (e.g., Google Maps and Uber) estimate the travel time of
trajectory data collected by mobile devices. However, in reality, complex
factors such as network communication and energy constraints cause many
trajectories to be collected at a low sampling rate. This paper therefore aims
to resolve the problem of travel time estimation (TTE) and route recovery in
such sparse scenarios, where the travel time and route between consecutively
sampled GPS points are only uncertainly labeled. We formulate this as an
inexact supervision problem in which the training data has coarse-grained
labels, and we jointly solve the tasks of TTE and route recovery. We argue that
the two tasks are complementary to each other in the model-learning procedure
and hold the following relation: a more precise travel time leads to better
route inference, which in turn yields a more accurate time estimation. Based on
this assumption, we propose an EM algorithm that, for sparse trajectories,
alternately estimates the travel time of the inferred route through weak
supervision in the E-step and recovers the route based on the estimated travel
time in the M-step. We
conducted experiments on three real-world trajectory datasets and demonstrated
the effectiveness of the proposed method.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 14:16:58 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Zhang",
"Zhiwen",
""
],
[
"Wang",
"Hongjun",
""
],
[
"Fan",
"Zipei",
""
],
[
"Chen",
"Jiyuan",
""
],
[
"Song",
"Xuan",
""
],
[
"Shibasaki",
"Ryosuke",
""
]
] |
new_dataset
| 0.972571 |
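The abstract above alternates between route inference and travel-time estimation under weak supervision. The toy sketch below illustrates such an EM-style alternation on a single origin-destination pair; the candidate-route list, per-edge speed model, and update rule are placeholder assumptions, not the paper's formulation.

```python
# Toy EM-style alternation between route inference (E-step) and travel-time
# estimation (M-step) for one sparsely sampled trip. All structures are
# placeholders; this is not the paper's model.

def em_travel_time(pair, candidate_routes, edge_speed, n_iters=10):
    """pair: (origin, destination, observed_total_time_in_hours);
    candidate_routes: list of routes, each a list of (edge_id, length_km);
    edge_speed: dict edge_id -> km/h estimate, updated in place."""
    origin, dest, observed_time = pair          # origin/dest kept for readability
    best_route = candidate_routes[0]
    for _ in range(n_iters):
        def predicted(route):                   # predicted travel time of a route
            return sum(length / edge_speed[e] for e, length in route)
        # E-step: pick the route whose predicted time best matches the observation.
        best_route = min(candidate_routes,
                         key=lambda r: abs(predicted(r) - observed_time))
        # M-step: rescale speeds on the chosen route so its predicted time
        # matches the weakly supervised total travel time.
        scale = predicted(best_route) / observed_time
        for e, _ in best_route:
            edge_speed[e] *= scale
    return best_route, edge_speed

speeds = {"e1": 40.0, "e2": 50.0, "e3": 30.0}
routes = [[("e1", 2.0), ("e2", 3.0)], [("e3", 4.0)]]
route, speeds = em_travel_time(("A", "B", 0.12), routes, speeds)
```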
2206.10459
|
Serge Kernbach
|
Serge Kernbach
|
Device for measuring the plant physiology and electrophysiology
| null | null | null | null |
cs.OH cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper briefly describes the device - the phytosensor - for measuring
physiological and electrophysiological parameters of plants. This system is
developed as a bio-physiological sensor in precise agriculture, as a tool in
plant research and environmental biology, and for plant enthusiasts in smart
home or entertainment applications. The phytosensor measures the main physiological
parameters such as the leaf transpiration rate, sap flow, tissue conductivity
and frequency response, biopotentials (action potentials and variation
potentials), and can conduct electrochemical impedance spectroscopy with
organic tissues. Soil moisture and temperature, air quality (CO2, NO2, O3 and
other sensors on I2C bus), and general environmental parameters (light,
temperature, humidity, air pressure, electromagnetic and magnetic fields) are
also recorded in real time. In addition to phytosensing, the device can also
perform phytoactuation, i.e. execute electrical or light stimulation of plants,
control irrigation and lighting modes, conduct fully autonomous experiments
with complex feedback-based and adaptive scenarios in robotic or biohybrid
systems. This article represents the revised and extended version of the original
paper and includes some descriptions and images from the FloraRobotica and
BioHybrids projects.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 08:48:17 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Kernbach",
"Serge",
""
]
] |
new_dataset
| 0.992906 |
2206.10520
|
Fadi Boutros
|
Fadi Boutros, Marco Huber, Patrick Siebke, Tim Rieber, Naser Damer
|
SFace: Privacy-friendly and Accurate Face Recognition using Synthetic
Data
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent deep face recognition models proposed in the literature utilized
large-scale public datasets such as MS-Celeb-1M and VGGFace2 for training very
deep neural networks, achieving state-of-the-art performance on mainstream
benchmarks. Recently, many of these datasets, e.g., MS-Celeb-1M and VGGFace2,
have been retracted due to credible privacy and ethical concerns. This motivates our
work to propose and investigate the feasibility of using a privacy-friendly
synthetically generated face dataset to train face recognition models. Towards
this end, we utilize a class-conditional generative adversarial network to
generate class-labeled synthetic face images, namely SFace. To address the
privacy aspect of using such data to train a face recognition model, we provide
extensive evaluation experiments on the identity relation between the synthetic
dataset and the original authentic dataset used to train the generative model.
Our reported evaluation proved that associating an identity of the authentic
dataset to one with the same class label in the synthetic dataset is hardly
possible. We also propose to train face recognition on our privacy-friendly
dataset, SFace, using three different learning strategies, multi-class
classification, label-free knowledge transfer, and combined learning of
multi-class classification and knowledge transfer. The reported evaluation
results on five authentic face benchmarks demonstrated that the
privacy-friendly synthetic dataset has high potential to be used for training
face recognition models, achieving, for example, a verification accuracy of
91.87\% on LFW using multi-class classification and 99.13\% using the combined
learning strategy.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 16:42:04 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Boutros",
"Fadi",
""
],
[
"Huber",
"Marco",
""
],
[
"Siebke",
"Patrick",
""
],
[
"Rieber",
"Tim",
""
],
[
"Damer",
"Naser",
""
]
] |
new_dataset
| 0.964555 |
2206.10532
|
Mohammad Dehghani Soltani
|
Mohammad Dehghani Soltani, Hossein Kazemi, Elham Sarbazi, Ahmad Adnan
Qidan, Barzan Yosuf, Sanaa Mohamed, Ravinder Singh, Bela Berde, Dominique
Chiaroni, Bastien B\'echadergue, Fathi Abdeldayem, Hardik Soni, Jose Tabu,
Micheline Perrufel, Nikola Serafimovski, Taisir E. H. El-Gorashi, Jaafar
Elmirghani, Richard Penty, Ian H. White, Harald Haas and Majid Safari
|
Terabit Indoor Laser-Based Wireless Communications: LiFi 2.0 for 6G
|
7 pages, 7 figures
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper provides a summary of available technologies required for
implementing indoor laser-based wireless networks capable of achieving
aggregate data-rates of terabits per second, which is widely accepted as a sixth
generation (6G) key performance indicator. The main focus of this paper is on
the technologies supporting the near infrared region of the optical spectrum.
The main challenges in the design of the transmitter and receiver systems and
communication/networking schemes are identified and new insights are provided.
This paper also covers the previous and recent standards as well as industrial
applications for optical wireless communications (OWC) and LiFi.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 17:04:14 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Soltani",
"Mohammad Dehghani",
""
],
[
"Kazemi",
"Hossein",
""
],
[
"Sarbazi",
"Elham",
""
],
[
"Qidan",
"Ahmad Adnan",
""
],
[
"Yosuf",
"Barzan",
""
],
[
"Mohamed",
"Sanaa",
""
],
[
"Singh",
"Ravinder",
""
],
[
"Berde",
"Bela",
""
],
[
"Chiaroni",
"Dominique",
""
],
[
"Béchadergue",
"Bastien",
""
],
[
"Abdeldayem",
"Fathi",
""
],
[
"Soni",
"Hardik",
""
],
[
"Tabu",
"Jose",
""
],
[
"Perrufel",
"Micheline",
""
],
[
"Serafimovski",
"Nikola",
""
],
[
"El-Gorashi",
"Taisir E. H.",
""
],
[
"Elmirghani",
"Jaafar",
""
],
[
"Penty",
"Richard",
""
],
[
"White",
"Ian H.",
""
],
[
"Haas",
"Harald",
""
],
[
"Safari",
"Majid",
""
]
] |
new_dataset
| 0.994734 |
2206.10533
|
Abhish Khanal
|
Abhish Khanal
|
RRT and RRT* Using Vehicle Dynamics
|
5 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The trajectory derived from RRT and RRT* is jagged. A holonomic drive is able
to follow such a trajectory, but a real-life vehicle with dynamical
constraints cannot. In this work, we modify the RRT and RRT* algorithms to
generate a trajectory that a vehicle with dynamical constraints can follow. The
continuous nature of steering and acceleration control in a real-world vehicle
introduces complexity into its model. To introduce constraints on the vehicle's
motion while reducing the number of controls and hence the complexity, we model
our vehicle as a Dubins car. A Dubins car has only three controls (turning
left, turning right, and moving forward) at a fixed velocity, which keeps our
model simple. We use Dubins curves (paths that a Dubins car can follow) to
trace the trajectory in the RRT and RRT* algorithms.
|
[
{
"version": "v1",
"created": "Sun, 29 May 2022 18:43:38 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Khanal",
"Abhish",
""
]
] |
new_dataset
| 0.999017 |
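The RRT/RRT* abstract above models the vehicle as a Dubins car with three controls at fixed velocity. The sketch below shows how such a Dubins step could serve as the steering primitive inside an RRT extension; the speed, turn rate, time step, and greedy steering loop are illustrative assumptions rather than the author's implementation.

```python
# Minimal Dubins-car step used as a steering primitive in a kinodynamic
# RRT/RRT*. Velocity, turn rate, and time step are arbitrary example values.
import math
import random

V, OMEGA, DT = 1.0, 1.0, 0.1          # fixed speed, turn rate, integration step

def dubins_step(state, control):
    """state = (x, y, heading); control in {'L', 'R', 'S'}."""
    x, y, th = state
    w = {"L": OMEGA, "R": -OMEGA, "S": 0.0}[control]
    return (x + V * math.cos(th) * DT,
            y + V * math.sin(th) * DT,
            th + w * DT)

def steer(from_state, to_xy, n_steps=10):
    """Greedy steering: repeatedly pick the control that moves closest to the goal."""
    state = from_state
    for _ in range(n_steps):
        state = min((dubins_step(state, c) for c in "LRS"),
                    key=lambda s: math.hypot(s[0] - to_xy[0], s[1] - to_xy[1]))
    return state

# In an RRT loop this replaces the straight-line extension toward a random sample:
rand = (random.uniform(0, 10), random.uniform(0, 10))
new_state = steer((0.0, 0.0, 0.0), rand)
```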
2206.10544
|
Esmaeil Seraj
|
Esmaeil Seraj and Andrew Silva and Matthew Gombolay
|
Multi-UAV Planning for Cooperative Wildfire Coverage and Tracking with
Quality-of-Service Guarantees
|
To appear in the journal of Autonomous Agents and Multi-Agent Systems
(AAMAS)
| null | null | null |
cs.RO cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, teams of robots and Unmanned Aerial Vehicles (UAVs) have been
commissioned by researchers to enable accurate, online wildfire coverage and
tracking. While the majority of prior work focuses on the coordination and
control of such multi-robot systems, to date, these UAV teams have not been
given the ability to reason about a fire's track (i.e., location and
propagation dynamics) to provide performance guarantee over a time horizon.
Motivated by the problem of aerial wildfire monitoring, we propose a predictive
framework which enables cooperation in multi-UAV teams towards collaborative
field coverage and fire tracking with probabilistic performance guarantee. Our
approach enables UAVs to infer the latent fire propagation dynamics for
time-extended coordination in safety-critical conditions. We derive a set of
novel, analytical temporal, and tracking-error bounds to enable the UAV-team to
distribute their limited resources and cover the entire fire area according to
the case-specific estimated states and provide a probabilistic performance
guarantee. Our results are not limited to the aerial wildfire monitoring
case-study and are generally applicable to problems, such as search-and-rescue,
target tracking and border patrol. We evaluate our approach in simulation and
provide demonstrations of the proposed framework on a physical multi-robot
testbed to account for real robot dynamics and restrictions. Our quantitative
evaluations validate the performance of our method accumulating 7.5x and 9.0x
smaller tracking-error than state-of-the-art model-based and reinforcement
learning benchmarks, respectively.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 17:20:54 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Seraj",
"Esmaeil",
""
],
[
"Silva",
"Andrew",
""
],
[
"Gombolay",
"Matthew",
""
]
] |
new_dataset
| 0.980144 |
2206.10573
|
Gabriele Campanella
|
Gabriele Campanella, David Ho, Ida H\"aggstr\"om, Anton S Becker,
Jason Chang, Chad Vanderbilt, Thomas J Fuchs
|
H&E-based Computational Biomarker Enables Universal EGFR Screening for
Lung Adenocarcinoma
| null | null | null | null |
cs.CV q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Lung cancer is the leading cause of cancer death worldwide, with lung
adenocarcinoma being the most prevalent form of lung cancer. EGFR positive lung
adenocarcinomas have been shown to have high response rates to TKI therapy,
underlying the essential nature of molecular testing for lung cancers. Although
current guidelines consider testing necessary, a large portion of patients are
not routinely profiled, resulting in millions of people not receiving the
optimal treatment for their lung cancer. Sequencing is the gold standard for
molecular testing of EGFR mutations, but it can take several weeks for results
to come back, which is not ideal in a time constrained scenario. The
development of alternative screening tools capable of detecting EGFR mutations
quickly and cheaply while preserving tissue for sequencing could help reduce
the number of sub-optimally treated patients. We propose a multi-modal approach
which integrates pathology images and clinical variables to predict EGFR
mutational status achieving an AUC of 84% on the largest clinical cohort to
date. Such a computational model could be deployed at large at little
additional cost. Its clinical application could reduce the number of patients
who receive sub-optimal treatments by 53.1% in China, and up to 96.6% in the
US.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 17:52:58 GMT"
}
] | 2022-06-22T00:00:00 |
[
[
"Campanella",
"Gabriele",
""
],
[
"Ho",
"David",
""
],
[
"Häggström",
"Ida",
""
],
[
"Becker",
"Anton S",
""
],
[
"Chang",
"Jason",
""
],
[
"Vanderbilt",
"Chad",
""
],
[
"Fuchs",
"Thomas J",
""
]
] |
new_dataset
| 0.995288 |
2103.01910
|
Josiah Wang
|
Josiah Wang, Pranava Madhyastha, Josiel Figueiredo, Chiraag Lala,
Lucia Specia
|
MultiSubs: A Large-scale Multimodal and Multilingual Dataset
|
Added an n-gram with back-off baseline model to the lexical
translation task (Section 7.2.4). Also synchronised the paper structure to
the LREC2022 version of this work. This arxiv version is a longer version of
the LREC2022 version including more experiments and an additional lexical
translation task
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a large-scale multimodal and multilingual dataset that
aims to facilitate research on grounding words to images in their contextual
usage in language. The dataset consists of images selected to unambiguously
illustrate concepts expressed in sentences from movie subtitles. The dataset is
a valuable resource as (i) the images are aligned to text fragments rather than
whole sentences; (ii) multiple images are possible for a text fragment and a
sentence; (iii) the sentences are free-form and real-world like; (iv) the
parallel texts are multilingual. We set up a fill-in-the-blank game for humans
to evaluate the quality of the automatic image selection process of our
dataset. We show the utility of the dataset on two automatic tasks: (i)
fill-in-the-blank; (ii) lexical translation. Results of the human evaluation
and automatic models demonstrate that images can be a useful complement to the
textual context. The dataset will benefit research on visual grounding of words
especially in the context of free-form sentences, and can be obtained from
https://doi.org/10.5281/zenodo.5034604 under a Creative Commons licence.
|
[
{
"version": "v1",
"created": "Tue, 2 Mar 2021 18:09:07 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Jun 2021 14:56:02 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Jun 2022 20:41:38 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Wang",
"Josiah",
""
],
[
"Madhyastha",
"Pranava",
""
],
[
"Figueiredo",
"Josiel",
""
],
[
"Lala",
"Chiraag",
""
],
[
"Specia",
"Lucia",
""
]
] |
new_dataset
| 0.999877 |
2103.16446
|
Richard Plant
|
Richard Plant, Amir Hussain
|
CovidTracker: A comprehensive Covid-related social media dataset for NLP
tasks
| null | null | null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Covid-19 pandemic presented an unprecedented global public health
emergency, and concomitantly an unparalleled opportunity to investigate public
responses to adverse social conditions. The widespread ability to post messages
to social media platforms provided an invaluable outlet for such an outpouring
of public sentiment, including not only expressions of social solidarity, but
also the spread of misinformation and misconceptions around the effect and
potential risks of the pandemic. This archive of message content therefore
represents a key resource in understanding public responses to health crises,
analysis of which could help to inform public policy interventions to better
respond to similar events in future. We present a benchmark database of public
social media postings from the United Kingdom related to the Covid-19 pandemic
for academic research purposes, along with some initial analysis, including a
taxonomy of key themes organised by keyword. This release supports the findings
of a research study funded by the Scottish Government Chief Scientists' Office
that aims to investigate social sentiment in order to understand the response
to public health measures implemented during the pandemic.
|
[
{
"version": "v1",
"created": "Tue, 30 Mar 2021 15:44:48 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2022 11:40:35 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Plant",
"Richard",
""
],
[
"Hussain",
"Amir",
""
]
] |
new_dataset
| 0.997938 |
2111.00228
|
Jingyao Yang
|
Yanrui Niu, Jingyao Yang, Ankang Lu, Baojin Huang, Yue Zhang, Ji
Huang, Shishi Wen, Dongshu Xu, Chao Liang, Zhongyuan Wang, Jun Chen
|
whu-nercms at trecvid2021:instance search task
|
9 pages, 4 figures
| null | null | null |
cs.CV cs.IR cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we briefly introduce the experimental methods and results of
WHU-NERCMS in TRECVID2021. This year we participate in the
automatic and interactive tasks of Instance Search (INS). For the automatic
task, the retrieval target is divided into two parts, person retrieval, and
action retrieval. We adopt a two-stage method including face detection and face
recognition for person retrieval and two kinds of action detection methods
consisting of three frame-based human-object interaction detection methods and
two video-based general action detection methods for action retrieval. After
that, the person retrieval results and action retrieval results are fused to
initialize the result ranking lists. In addition, we make attempts to use
complementary methods to further improve search performance. For interactive
tasks, we test two different interaction strategies on the fusion results. We
submit 4 runs for automatic and interactive tasks respectively. The
introduction of each run is shown in Table 1. The official evaluations show
that the proposed strategies rank 1st in both automatic and interactive tracks.
|
[
{
"version": "v1",
"created": "Sat, 30 Oct 2021 11:00:47 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2022 15:32:52 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Niu",
"Yanrui",
""
],
[
"Yang",
"Jingyao",
""
],
[
"Lu",
"Ankang",
""
],
[
"Huang",
"Baojin",
""
],
[
"Zhang",
"Yue",
""
],
[
"Huang",
"Ji",
""
],
[
"Wen",
"Shishi",
""
],
[
"Xu",
"Dongshu",
""
],
[
"Liang",
"Chao",
""
],
[
"Wang",
"Zhongyuan",
""
],
[
"Chen",
"Jun",
""
]
] |
new_dataset
| 0.995639 |
2111.14813
|
Jeya Maria Jose Valanarasu
|
Jeya Maria Jose Valanarasu, Rajeev Yasarla, and Vishal M. Patel
|
TransWeather: Transformer-based Restoration of Images Degraded by
Adverse Weather Conditions
|
CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Removing adverse weather conditions like rain, fog, and snow from images is
an important problem in many applications. Most methods proposed in the
literature have been designed to deal with just removing one type of
degradation. Recently, a CNN-based method using neural architecture search
(All-in-One) was proposed to remove all the weather conditions at once.
However, it has a large number of parameters as it uses multiple encoders to
cater to each weather removal task and still has scope for improvement in its
performance. In this work, we focus on developing an efficient solution for the
all adverse weather removal problem. To this end, we propose TransWeather, a
transformer-based end-to-end model with just a single encoder and a decoder
that can restore an image degraded by any weather condition. Specifically, we
utilize a novel transformer encoder using intra-patch transformer blocks to
enhance attention inside the patches to effectively remove smaller weather
degradations. We also introduce a transformer decoder with learnable weather
type embeddings to adjust to the weather degradation at hand. TransWeather
achieves improvements across multiple test datasets over both All-in-One
network as well as methods fine-tuned for specific tasks. TransWeather is also
validated on real world test images and found to be more effective than
previous methods. Implementation code can be accessed at
https://github.com/jeya-maria-jose/TransWeather .
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 18:57:09 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2022 15:51:31 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Valanarasu",
"Jeya Maria Jose",
""
],
[
"Yasarla",
"Rajeev",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
new_dataset
| 0.996778 |
2112.01921
|
Saikat Ray Majumder
|
Sarah Felix, Saikat Ray Majumder, H. Kirk Mathews, Michael Lexa,
Gabriel Lipsa, Xiaohu Ping, Subhrajit Roychowdhury, Thomas Spears
|
In situ process quality monitoring and defect detection for direct metal
laser melting
|
16 pages, 4 figures
|
Sci Rep 12, 8503 (2022)
|
10.1038/s41598-022-12381-4
| null |
cs.LG cs.SY eess.SY physics.data-an physics.ins-det
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Quality control and quality assurance are challenges in Direct Metal Laser
Melting (DMLM). Intermittent machine diagnostics and downstream part
inspections catch problems after undue cost has been incurred processing
defective parts. In this paper we demonstrate two methodologies for in-process
fault detection and part quality prediction that can be readily deployed on
existing commercial DMLM systems with minimal hardware modification. Novel
features were derived from the time series of common photodiode sensors along
with standard machine control signals. A Bayesian approach attributes
measurements to one of multiple process states and a least squares regression
model predicts severity of certain material defects.
|
[
{
"version": "v1",
"created": "Fri, 3 Dec 2021 14:05:31 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Felix",
"Sarah",
""
],
[
"Majumder",
"Saikat Ray",
""
],
[
"Mathews",
"H. Kirk",
""
],
[
"Lexa",
"Michael",
""
],
[
"Lipsa",
"Gabriel",
""
],
[
"Ping",
"Xiaohu",
""
],
[
"Roychowdhury",
"Subhrajit",
""
],
[
"Spears",
"Thomas",
""
]
] |
new_dataset
| 0.964665 |
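The abstract above mentions attributing sensor measurements to process states with a Bayesian approach and predicting defect severity with least-squares regression. The fragment below is only a generic illustration of those two textbook building blocks on synthetic numbers; it is not the paper's model or data.

```python
# Generic illustration on synthetic data: least-squares severity regression and
# Bayesian attribution of a feature vector to one of two process states.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                              # photodiode-derived features
severity = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.05, size=200)

coef, *_ = np.linalg.lstsq(X, severity, rcond=None)        # least-squares severity model

# Bayesian attribution with unit-variance Gaussian likelihoods and class priors.
means = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
priors = np.array([0.7, 0.3])
x = X[0]
log_lik = -0.5 * ((x - means) ** 2).sum(axis=1)
posterior = np.exp(log_lik) * priors
posterior /= posterior.sum()
print(coef, posterior)
```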
2201.12005
|
Zeyu Lu
|
Zeyu Lu, Xingyu Gao, Haoyong Yu
|
GTac: A Biomimetic Tactile Sensor with Skin-like Heterogeneous Force
Feedback for Robots
| null | null |
10.1109/JSEN.2022.3181128
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The tactile sensing capabilities of human hands are essential in performing
daily activities. Simultaneously perceiving normal and shear forces via the
mechanoreceptors integrated into the hands enables humans to achieve daily
tasks like grasping delicate objects. In this paper, we design and fabricate a
novel biomimetic tactile sensor with skin-like heterogeneity that perceives
normal and shear contact forces simultaneously. It mimics the multilayers of
mechanoreceptors by combining an extrinsic layer (piezoresistive sensors) and
an intrinsic layer (a Hall sensor) so that it can perform estimation of contact
force directions, locations, and joint-level torque. By integrating our
sensors, a robotic gripper can obtain contact force feedback at fingertips;
accordingly, robots can perform challenging tasks, such as tweezers usage, and
egg grasping. This insightful sensor design can be customized and applied in
different areas of robots and provide them with heterogeneous force sensing,
potentially supporting robotics in acquiring skin-like tactile feedback.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 09:33:00 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Lu",
"Zeyu",
""
],
[
"Gao",
"Xingyu",
""
],
[
"Yu",
"Haoyong",
""
]
] |
new_dataset
| 0.99872 |
2202.03077
|
Xilie Xu
|
Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan
Kankanhalli
|
Adversarial Attack and Defense for Non-Parametric Two-Sample Tests
|
Accepted by ICML 2022
| null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Non-parametric two-sample tests (TSTs) that judge whether two sets of samples
are drawn from the same distribution, have been widely used in the analysis of
critical data. People tend to employ TSTs as trusted basic tools and rarely
have any doubt about their reliability. This paper systematically uncovers the
failure mode of non-parametric TSTs through adversarial attacks and then
proposes corresponding defense strategies. First, we theoretically show that an
adversary can upper-bound the distributional shift which guarantees the
attack's invisibility. Furthermore, we theoretically find that the adversary
can also degrade the lower bound of a TST's test power, which enables us to
iteratively minimize the test criterion in order to search for adversarial
pairs. To enable TST-agnostic attacks, we propose an ensemble attack (EA)
framework that jointly minimizes the different types of test criteria. Second,
to robustify TSTs, we propose a max-min optimization that iteratively generates
adversarial pairs to train the deep kernels. Extensive experiments on both
simulated and real-world datasets validate the adversarial vulnerabilities of
non-parametric TSTs and the effectiveness of our proposed defense. Source code
is available at https://github.com/GodXuxilie/Robust-TST.git.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 11:18:04 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2022 04:33:26 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Xu",
"Xilie",
""
],
[
"Zhang",
"Jingfeng",
""
],
[
"Liu",
"Feng",
""
],
[
"Sugiyama",
"Masashi",
""
],
[
"Kankanhalli",
"Mohan",
""
]
] |
new_dataset
| 0.991916 |
2202.05628
|
Haimin Luo
|
Haimin Luo, Teng Xu, Yuheng Jiang, Chenglin Zhou, Qiwei Qiu, Yingliang
Zhang, Wei Yang, Lan Xu, Jingyi Yu
|
Artemis: Articulated Neural Pets with Appearance and Motion synthesis
|
Accepted to ACM SIGGRAPH 2022 (Journal track)
| null |
10.1145/3528223.3530086
| null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We humans are entering a virtual era and indeed want to bring animals
into the virtual world as well, for companionship. Yet, computer-generated (CGI) furry
animals are limited by tedious off-line rendering, let alone interactive motion
control. In this paper, we present ARTEMIS, a novel neural modeling and
rendering pipeline for generating ARTiculated neural pets with appEarance and
Motion synthesIS. Our ARTEMIS enables interactive motion control, real-time
animation, and photo-realistic rendering of furry animals. The core of our
ARTEMIS is a neural-generated (NGI) animal engine, which adopts an efficient
octree-based representation for animal animation and fur rendering. The
animation then becomes equivalent to voxel-level deformation based on explicit
skeletal warping. We further use a fast octree indexing and efficient
volumetric rendering scheme to generate appearance and density features maps.
Finally, we propose a novel shading network to generate high-fidelity details
of appearance and opacity under novel poses from appearance and density feature
maps. For the motion control module in ARTEMIS, we combine state-of-the-art
animal motion capture approach with recent neural character control scheme. We
introduce an effective optimization scheme to reconstruct the skeletal motion
of real animals captured by a multi-view RGB and Vicon camera array. We feed
all the captured motion into a neural character control scheme to generate
abstract control signals with motion styles. We further integrate ARTEMIS into
existing engines that support VR headsets, providing an unprecedented immersive
experience where a user can intimately interact with a variety of virtual
animals with vivid movements and photo-realistic appearance. We make available
our ARTEMIS model and dynamic furry animal dataset at
https://haiminluo.github.io/publication/artemis/.
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 14:07:20 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 08:14:06 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Jun 2022 04:06:33 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Luo",
"Haimin",
""
],
[
"Xu",
"Teng",
""
],
[
"Jiang",
"Yuheng",
""
],
[
"Zhou",
"Chenglin",
""
],
[
"Qiu",
"Qiwei",
""
],
[
"Zhang",
"Yingliang",
""
],
[
"Yang",
"Wei",
""
],
[
"Xu",
"Lan",
""
],
[
"Yu",
"Jingyi",
""
]
] |
new_dataset
| 0.999697 |
2204.08582
|
Jack FitzGerald
|
Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay
Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa
Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gokhan
Tur, Prem Natarajan
|
MASSIVE: A 1M-Example Multilingual Natural Language Understanding
Dataset with 51 Typologically-Diverse Languages
|
Preprint; 8 pages
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the MASSIVE dataset--Multilingual Amazon Slu resource package
(SLURP) for Slot-filling, Intent classification, and Virtual assistant
Evaluation. MASSIVE contains 1M realistic, parallel, labeled virtual assistant
utterances spanning 51 languages, 18 domains, 60 intents, and 55 slots. MASSIVE
was created by tasking professional translators to localize the English-only
SLURP dataset into 50 typologically diverse languages from 29 genera. We also
present modeling results on XLM-R and mT5, including exact match accuracy,
intent classification accuracy, and slot-filling F1 score. We have released our
dataset, modeling code, and models publicly.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2022 22:40:52 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2022 17:19:15 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"FitzGerald",
"Jack",
""
],
[
"Hench",
"Christopher",
""
],
[
"Peris",
"Charith",
""
],
[
"Mackie",
"Scott",
""
],
[
"Rottmann",
"Kay",
""
],
[
"Sanchez",
"Ana",
""
],
[
"Nash",
"Aaron",
""
],
[
"Urbach",
"Liam",
""
],
[
"Kakarala",
"Vishesh",
""
],
[
"Singh",
"Richa",
""
],
[
"Ranganath",
"Swetha",
""
],
[
"Crist",
"Laurie",
""
],
[
"Britan",
"Misha",
""
],
[
"Leeuwis",
"Wouter",
""
],
[
"Tur",
"Gokhan",
""
],
[
"Natarajan",
"Prem",
""
]
] |
new_dataset
| 0.999533 |
2204.08775
|
Simon Christ
|
Simon Christ, Daniel Schwabeneder, Christopher Rackauckas, Michael
Krabbe Borregaard, Thomas Breloff
|
Plots.jl -- a user extendable plotting API for the julia programming
language
|
22 pages, 6 figures, 6 code listings
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
There are plenty of excellent plotting libraries. Each excels at a different
use case: one is good for printed 2D publication figures, the other at
interactive 3D graphics, a third has excellent LaTeX integration or is good
for creating dashboards on the web. The aim of Plots.jl is to enable the user
to use the same syntax to interact with many different plotting libraries, such
that it is possible to change the library "backend" without needing to touch
the code that creates the content -- and without having to learn yet another
application programming interface (API). This is achieved by the separation of
the plot specification from the implementation of the actual graphical backend.
These plot specifications may be extended by a "recipe" system, which allows
package authors and users to define how to plot any new type (be it a
statistical model, a map, a phylogenetic tree or the solution to a system of
differential equations) and create new types of plots -- without depending on
the Plots.jl package. This supports a modular ecosystem structure for plotting
and yields a high reuse potential across the entire julia package ecosystem.
Plots.jl is publicly available at https://github.com/JuliaPlots/Plots.jl.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 09:44:46 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Jun 2022 14:43:50 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Jun 2022 08:43:11 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Christ",
"Simon",
""
],
[
"Schwabeneder",
"Daniel",
""
],
[
"Rackauckas",
"Christopher",
""
],
[
"Borregaard",
"Michael Krabbe",
""
],
[
"Breloff",
"Thomas",
""
]
] |
new_dataset
| 0.999142 |
2204.09634
|
Parthasaarathy Sudarsanam
|
Samuel Lipping, Parthasaarathy Sudarsanam, Konstantinos Drossos,
Tuomas Virtanen
|
Clotho-AQA: A Crowdsourced Dataset for Audio Question Answering
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Audio question answering (AQA) is a multimodal translation task where a
system analyzes an audio signal and a natural language question, to generate a
desirable natural language answer. In this paper, we introduce Clotho-AQA, a
dataset for Audio question answering consisting of 1991 audio files each
between 15 and 30 seconds in duration selected from the Clotho dataset. For each
audio file, we collect six different questions and corresponding answers by
crowdsourcing using Amazon Mechanical Turk. The questions and answers are
produced by different annotators. Out of the six questions for each audio, two
questions each are designed to have 'yes' and 'no' as answers, while the
remaining two questions have other single-word answers. For each question, we
collect answers from three different annotators. We also present two baseline
experiments to describe the usage of our dataset for the AQA task - an
LSTM-based multimodal binary classifier for 'yes' or 'no' type answers and an
LSTM-based multimodal multi-class classifier for 828 single-word answers. The
binary classifier achieved an accuracy of 62.7% and the multi-class classifier
achieved a top-1 accuracy of 54.2% and a top-5 accuracy of 93.7%. Clotho-AQA
dataset is freely available online at https://zenodo.org/record/6473207.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 17:28:53 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2022 07:35:08 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Lipping",
"Samuel",
""
],
[
"Sudarsanam",
"Parthasaarathy",
""
],
[
"Drossos",
"Konstantinos",
""
],
[
"Virtanen",
"Tuomas",
""
]
] |
new_dataset
| 0.999816 |
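The Clotho-AQA record above describes an LSTM-based multimodal binary classifier for yes/no answers. A rough PyTorch sketch of such a two-branch (audio plus question) classifier is given below; the feature dimensions, vocabulary size, and concatenation-based fusion are assumptions, not the published baseline.

```python
# Rough two-branch LSTM classifier sketch for audio question answering with
# yes/no answers. Dimensions and fusion choice are illustrative assumptions.
import torch
import torch.nn as nn

class AudioQuestionBinary(nn.Module):
    def __init__(self, audio_dim=64, word_emb=128, vocab=10000, hidden=256):
        super().__init__()
        self.audio_lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, word_emb)
        self.text_lstm = nn.LSTM(word_emb, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)       # fused features -> yes/no logit

    def forward(self, audio_feats, question_ids):
        # audio_feats: (B, T_audio, audio_dim); question_ids: (B, T_text)
        _, (a_h, _) = self.audio_lstm(audio_feats)
        _, (q_h, _) = self.text_lstm(self.embed(question_ids))
        fused = torch.cat([a_h[-1], q_h[-1]], dim=-1)
        return self.head(fused).squeeze(-1)        # use with BCEWithLogitsLoss

model = AudioQuestionBinary()
logit = model(torch.randn(4, 100, 64), torch.randint(0, 10000, (4, 12)))
```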
2205.01833
|
Jason Priem
|
Jason Priem, Heather Piwowar, Richard Orr
|
OpenAlex: A fully-open index of scholarly works, authors, venues,
institutions, and concepts
|
Submitted to the 26th International Conference on Science, Technology
and Innovation Indicators (STI 2022)
| null | null | null |
cs.DL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
OpenAlex is a new, fully-open scientific knowledge graph (SKG), launched to
replace the discontinued Microsoft Academic Graph (MAG). It contains metadata
for 209M works (journal articles, books, etc); 2013M disambiguated authors;
124k venues (places that host works, such as journals and online repositories);
109k institutions; and 65k Wikidata concepts (linked to works via an automated
hierarchical multi-tag classifier). The dataset is fully and freely available
via a web-based GUI, a full data dump, and high-volume REST API. The resource
is under active development and future work will improve accuracy and coverage
of citation information and author/institution parsing and deduplication.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 00:57:11 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jun 2022 00:34:23 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Priem",
"Jason",
""
],
[
"Piwowar",
"Heather",
""
],
[
"Orr",
"Richard",
""
]
] |
new_dataset
| 0.999383 |
2206.08415
|
Abdelkader El Mahdaouy
|
Abdelkader El Mahdaouy, Abdellah El Mekki, Kabil Essefar, Abderrahman
Skiredj, Ismail Berrada
|
CS-UM6P at SemEval-2022 Task 6: Transformer-based Models for Intended
Sarcasm Detection in English and Arabic
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Sarcasm is a form of figurative language where the intended meaning of a
sentence differs from its literal meaning. This poses a serious challenge to
several Natural Language Processing (NLP) applications such as Sentiment
Analysis, Opinion Mining, and Author Profiling. In this paper, we present our
participating system to the intended sarcasm detection task in English and
Arabic languages. Our system\footnote{The source code of our system is
available at \url{https://github.com/AbdelkaderMH/iSarcasmEval}} consists of
three deep learning-based models leveraging two existing pre-trained language
models for Arabic and English. We have participated in all sub-tasks. Our
official submissions achieve the best performance on sub-task A for Arabic
language and rank second in sub-task B. For sub-task C, our system is ranked
7th and 11th on Arabic and English datasets, respectively.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 19:14:54 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Mahdaouy",
"Abdelkader El",
""
],
[
"Mekki",
"Abdellah El",
""
],
[
"Essefar",
"Kabil",
""
],
[
"Skiredj",
"Abderrahman",
""
],
[
"Berrada",
"Ismail",
""
]
] |
new_dataset
| 0.999752 |
2206.08425
|
Patr\'icia Schmidtov\'a
|
Patr\'icia Schmidtov\'a, D\'avid Javorsk\'y, Christi\'an Mikl\'a\v{s},
Tom\'a\v{s} Musil, Rudolf Rosa, Ond\v{r}ej Du\v{s}ek
|
DialogueScript: Using Dialogue Agents to Produce a Script
|
Non-archival paper at the 4th Workshop on Narrative Understanding
(WNU 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel approach to generating scripts by using agents with
different personality types. To manage character interaction in the script, we
employ simulated dramatic networks. Automatic and human evaluation on multiple
criteria shows that our approach outperforms a vanilla-GPT2-based baseline. We
further introduce a new metric to evaluate dialogue consistency based on
natural language inference and demonstrate its validity.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 19:57:01 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Schmidtová",
"Patrícia",
""
],
[
"Javorský",
"Dávid",
""
],
[
"Mikláš",
"Christián",
""
],
[
"Musil",
"Tomáš",
""
],
[
"Rosa",
"Rudolf",
""
],
[
"Dušek",
"Ondřej",
""
]
] |
new_dataset
| 0.999172 |
2206.08427
|
Ajay Subramanian
|
Ajay Subramanian, Sara Price, Omkar Kumbhar, Elena Sizikova, Najib J.
Majaj, Denis G. Pelli
|
SATBench: Benchmarking the speed-accuracy tradeoff in object recognition
by humans and dynamic neural networks
|
19 pages, 12 figures. Under Review at NeurIPS Datasets and Benchmarks
Track 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The core of everyday tasks like reading and driving is active object
recognition. Attempts to model such tasks are currently stymied by the
inability to incorporate time. People show a flexible tradeoff between speed
and accuracy and this tradeoff is a crucial human skill. Deep neural networks
have emerged as promising candidates for predicting peak human object
recognition performance and neural activity. However, modeling the temporal
dimension i.e., the speed-accuracy tradeoff (SAT), is essential for them to
serve as useful computational models for how humans recognize objects. To this
end, we here present the first large-scale (148 observers, 4 neural networks, 8
tasks) dataset of the speed-accuracy tradeoff (SAT) in recognizing ImageNet
images. In each human trial, a beep, indicating the desired reaction time,
sounds at a fixed delay after the image is presented, and the observer's response
counts only if it occurs near the time of the beep. In a series of blocks, we
test many beep latencies, i.e., reaction times. We observe that human accuracy
increases with reaction time and proceed to compare its characteristics with
the behavior of several dynamic neural networks that are capable of
inference-time adaptive computation. Using FLOPs as an analog for reaction
time, we compare networks with humans on curve-fit error, category-wise
correlation, and curve steepness, and conclude that cascaded dynamic neural
networks are a promising model of human reaction time in object recognition
tasks.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 20:03:31 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Subramanian",
"Ajay",
""
],
[
"Price",
"Sara",
""
],
[
"Kumbhar",
"Omkar",
""
],
[
"Sizikova",
"Elena",
""
],
[
"Majaj",
"Najib J.",
""
],
[
"Pelli",
"Denis G.",
""
]
] |
new_dataset
| 0.999189 |
2206.08474
|
Ming Zhu
|
Ming Zhu, Aneesh Jain, Karthik Suresh, Roshan Ravindran, Sindhu
Tipirneni, Chandan K. Reddy
|
XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence
|
20 pages, 11 tables, 2 figures
| null | null | null |
cs.SE cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent advances in machine learning have significantly improved the
understanding of source code data and achieved good performance on a number of
downstream tasks. Open source repositories like GitHub enable this process with
rich unlabeled code data. However, the lack of high quality labeled data has
largely hindered the progress of several code related tasks, such as program
translation, summarization, synthesis, and code search. This paper introduces
XLCoST, Cross-Lingual Code SnippeT dataset, a new benchmark dataset for
cross-lingual code intelligence. Our dataset contains fine-grained parallel
data from 8 languages (7 commonly used programming languages and English), and
supports 10 cross-lingual code tasks. To the best of our knowledge, it is the
largest parallel dataset for source code both in terms of size and the number
of languages. We also provide the performance of several state-of-the-art
baseline models for each task. We believe this new dataset can be a valuable
asset for the research community and facilitate the development and validation
of new methods for cross-lingual code intelligence.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 22:49:39 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Zhu",
"Ming",
""
],
[
"Jain",
"Aneesh",
""
],
[
"Suresh",
"Karthik",
""
],
[
"Ravindran",
"Roshan",
""
],
[
"Tipirneni",
"Sindhu",
""
],
[
"Reddy",
"Chandan K.",
""
]
] |
new_dataset
| 0.999784 |
2206.08497
|
Xianghao Xu
|
Xianghao Xu, Yifan Ruan, Srinath Sridhar, Daniel Ritchie
|
Unsupervised Kinematic Motion Detection for Part-segmented 3D Shape
Collections
|
SIGGRAPH 2022
| null | null | null |
cs.GR cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
3D models of manufactured objects are important for populating virtual worlds
and for synthetic data generation for vision and robotics. To be most useful,
such objects should be articulated: their parts should move when interacted
with. While articulated object datasets exist, creating them is
labor-intensive. Learning-based prediction of part motions can help, but all
existing methods require annotated training data. In this paper, we present an
unsupervised approach for discovering articulated motions in a part-segmented
3D shape collection. Our approach is based on a concept we call category
closure: any valid articulation of an object's parts should keep the object in
the same semantic category (e.g. a chair stays a chair). We operationalize this
concept with an algorithm that optimizes a shape's part motion parameters such
that it can transform into other shapes in the collection. We evaluate our
approach by using it to re-discover part motions from the PartNet-Mobility
dataset. For almost all shape categories, our method's predicted motion
parameters have low error with respect to ground truth annotations,
outperforming two supervised motion prediction methods.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 00:50:36 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Xu",
"Xianghao",
""
],
[
"Ruan",
"Yifan",
""
],
[
"Sridhar",
"Srinath",
""
],
[
"Ritchie",
"Daniel",
""
]
] |
new_dataset
| 0.983888 |
2206.08513
|
Hieu Tran
|
Hieu Tran, Son Nguyen, I-Ling Yen, Farokh Bastani
|
TLETA: Deep Transfer Learning and Integrated Cellular Knowledge for
Estimated Time of Arrival Prediction
|
8 pages, 3 figures, 3 tables. The 25th IEEE International Conference
on Intelligent Transportation Systems (IEEE ITSC 2022)
| null | null | null |
cs.LG cs.DC cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Vehicle arrival time prediction has been studied widely. With the emergence
of IoT devices and deep learning techniques, estimated time of arrival (ETA)
has become a critical component in intelligent transportation systems. Though
many tools exist for ETA, ETA for special vehicles, such as ambulances, fire
engines, etc., is still challenging due to the limited amount of traffic data
for special vehicles. Existing works use one model for all types of vehicles,
which can lead to low accuracy. To tackle this, as the first in the field, we
propose a deep transfer learning framework, TLETA, for driving time
prediction. TLETA constructs cellular spatial-temporal knowledge grids for
extracting driving patterns, combined with the road network structure embedding
to build a deep neural network for ETA. TLETA contains transferable layers to
support knowledge transfer between different categories of vehicles.
Importantly, our transfer models only train the last layers to map the
transferred knowledge, which reduces the training time significantly. The
experimental studies show that our model predicts travel time with high
accuracy and outperforms many state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 02:20:44 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Tran",
"Hieu",
""
],
[
"Nguyen",
"Son",
""
],
[
"Yen",
"I-Ling",
""
],
[
"Bastani",
"Farokh",
""
]
] |
new_dataset
| 0.982303 |
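The TLETA abstract above notes that its transfer models train only the last layers to map knowledge to a new vehicle category. The snippet below sketches that freezing pattern on a placeholder network; the layer structure and names are illustrative, not the paper's architecture.

```python
# Sketch of a transfer step: freeze the shared layers of a network trained on
# ordinary vehicles and retrain only the tail layers for a special-vehicle
# category. The base network here is a placeholder.
import torch.nn as nn

def make_transferable(model: nn.Sequential, n_trainable_tail: int = 1):
    layers = list(model.children())
    for layer in layers[:-n_trainable_tail]:       # freeze transferred-knowledge layers
        for p in layer.parameters():
            p.requires_grad = False
    # return only the parameters of the trainable tail layers
    return [p for layer in layers[-n_trainable_tail:] for p in layer.parameters()]

base_eta_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
trainable = make_transferable(base_eta_net, n_trainable_tail=1)
# optimizer = torch.optim.Adam(trainable, lr=1e-3)  # fine-tune on special-vehicle trips
```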
2206.08517
|
Xin Zheng
|
Xin Zheng, Jianke Zhu
|
Effective Solid State LiDAR Odometry Using Continuous-time Filter
Registration
|
8 pages, 6 figures
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Solid-state LiDARs are more compact and cheaper than the conventional
mechanical multi-line spinning LiDARs, which have become increasingly popular
in autonomous driving recently. However, there are several challenges for these
new LiDAR sensors, including severe motion distortions, small field of view and
sparse point cloud, which hinder them from being widely used in LiDAR odometry.
To tackle these problems, we present an effective continuous-time LiDAR
odometry (ECTLO) method for the Risley prism-based LiDARs with non-repetitive
scanning patterns. To account for the noisy data, a filter-based point-to-plane
Gaussian Mixture Model is used for robust registration. Moreover, a LiDAR-only
continuous-time motion model is employed to relieve the inevitable distortions.
To facilitate the implicit data association in parallel, we maintain all map
points within a single range image. Extensive experiments have been conducted
on various testbeds using the solid-state LiDARs with different scanning
patterns, whose promising results demonstrate the efficacy of our proposed
approach.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 02:41:48 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Zheng",
"Xin",
""
],
[
"Zhu",
"Jianke",
""
]
] |
new_dataset
| 0.983865 |
2206.08524
|
RuiLong Dan
|
Ruilong Dan, Yunxiang Li, Yijie Wang, Gangyong Jia, Ruiquan Ge, Juan
Ye, Qun Jin, Yaqi Wang
|
CDNet: Contrastive Disentangled Network for Fine-Grained Image
Categorization of Ocular B-Scan Ultrasound
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Precise and rapid categorization of images in the B-scan ultrasound modality
is vital for diagnosing ocular diseases. Nevertheless, distinguishing various
diseases in ultrasound still challenges experienced ophthalmologists. Thus a
novel contrastive disentangled network (CDNet) is developed in this work,
aiming to tackle the fine-grained image categorization (FGIC) challenges of
ocular abnormalities in ultrasound images, including intraocular tumor (IOT),
retinal detachment (RD), posterior scleral staphyloma (PSS), and vitreous
hemorrhage (VH). Three essential components of CDNet are the weakly-supervised
lesion localization module (WSLL), contrastive multi-zoom (CMZ) strategy, and
hyperspherical contrastive disentangled loss (HCD-Loss), respectively. These
components facilitate feature disentanglement for fine-grained recognition in
both the input and output aspects. The proposed CDNet is validated on our ZJU
Ocular Ultrasound Dataset (ZJUOUSD), consisting of 5213 samples. Furthermore,
the generalization ability of CDNet is validated on two public and widely-used
chest X-ray FGIC benchmarks. Quantitative and qualitative results demonstrate
the efficacy of our proposed CDNet, which achieves state-of-the-art performance
in the FGIC task. Code is available at:
https://github.com/ZeroOneGame/CDNet-for-OUS-FGIC .
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 03:12:52 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Dan",
"Ruilong",
""
],
[
"Li",
"Yunxiang",
""
],
[
"Wang",
"Yijie",
""
],
[
"Jia",
"Gangyong",
""
],
[
"Ge",
"Ruiquan",
""
],
[
"Ye",
"Juan",
""
],
[
"Jin",
"Qun",
""
],
[
"Wang",
"Yaqi",
""
]
] |
new_dataset
| 0.964817 |
2206.08610
|
Rui He
|
Rui He, Yuanxi Sun, Youzeng Li, Zuwei Huang, Feng Hu, Xu Cheng, Jie
Tang
|
Masked Autoencoders for Generic Event Boundary Detection CVPR'2022
Kinetics-GEBD Challenge
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generic Event Boundary Detection (GEBD) tasks aim at detecting generic,
taxonomy-free event boundaries that segment a whole video into chunks. In this
paper, we apply Masked Autoencoders to improve algorithm performance on the
GEBD tasks. Our approach mainly adopted the ensemble of Masked Autoencoders
fine-tuned on the GEBD task as a self-supervised learner with other base
models. Moreover, we also use a semi-supervised pseudo-label method to take
full advantage of the abundant unlabeled Kinetics-400 data while training. In
addition, we propose a soft-label method to partially balance the positive and
negative samples and alleviate the problem of ambiguous labeling in this task.
Lastly, a tricky segmentation alignment policy is implemented to refine
boundaries predicted by our models to more accurate locations. With our
approach, we achieved 85.94% on the F1-score on the Kinetics-GEBD test set,
which improved the F1-score by 2.31% compared to the winner of the 2021
Kinetics-GEBD Challenge. Our code is available at
https://github.com/ContentAndMaterialPortrait/MAE-GEBD.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 08:10:27 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"He",
"Rui",
""
],
[
"Sun",
"Yuanxi",
""
],
[
"Li",
"Youzeng",
""
],
[
"Huang",
"Zuwei",
""
],
[
"Hu",
"Feng",
""
],
[
"Cheng",
"Xu",
""
],
[
"Tang",
"Jie",
""
]
] |
new_dataset
| 0.954376 |
2206.08680
|
Shaz Furniturewala
|
Shaz Furniturewala, Vijay Kumari, Amulya Ratna Dash, Hriday Kedia,
Yashvardhan Sharma
|
BITS Pilani at HinglishEval: Quality Evaluation for Code-Mixed Hinglish
Text Using Transformers
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Code-Mixed text data consists of sentences having words or phrases from more
than one language. Most multi-lingual communities worldwide communicate using
multiple languages, with English usually one of them. Hinglish is a Code-Mixed
text composed of Hindi and English but written in Roman script. This paper aims
to determine the factors influencing the quality of Code-Mixed text data
generated by the system. For the HinglishEval task, the proposed model uses
multi-lingual BERT to find the similarity between synthetically generated and
human-generated sentences to predict the quality of synthetically generated
Hinglish sentences.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 10:36:50 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Furniturewala",
"Shaz",
""
],
[
"Kumari",
"Vijay",
""
],
[
"Dash",
"Amulya Ratna",
""
],
[
"Kedia",
"Hriday",
""
],
[
"Sharma",
"Yashvardhan",
""
]
] |
new_dataset
| 0.971812 |
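For entry 2206.08680 above, the abstract's core operation is scoring the similarity between a synthetically generated and a human-generated Hinglish sentence from their embeddings. The sketch below shows only the cosine-similarity step on made-up placeholder vectors; producing real multilingual-BERT embeddings is outside its scope and the 768-dimensional size is an assumption for illustration.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-9):
    # Similarity between two sentence embeddings; values near 1 mean the
    # synthetic sentence lies close to the human-generated one in embedding space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

rng = np.random.default_rng(0)
synthetic_emb = rng.normal(size=768)              # placeholder for a BERT vector
human_emb = synthetic_emb + 0.1 * rng.normal(size=768)
print(cosine_similarity(synthetic_emb, human_emb))  # close to 1 for similar sentences
```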
2206.08720
|
Roman Novak
|
Roman Novak, Jascha Sohl-Dickstein, Samuel S. Schoenholz
|
Fast Finite Width Neural Tangent Kernel
|
Published as a conference paper at ICML 2022
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Neural Tangent Kernel (NTK), defined as $\Theta_\theta^f(x_1, x_2) =
\left[\partial f(\theta, x_1)\big/\partial \theta\right] \left[\partial
f(\theta, x_2)\big/\partial \theta\right]^T$ where $\left[\partial f(\theta,
\cdot)\big/\partial \theta\right]$ is a neural network (NN) Jacobian, has
emerged as a central object of study in deep learning. In the infinite width
limit, the NTK can sometimes be computed analytically and is useful for
understanding training and generalization of NN architectures. At finite
widths, the NTK is also used to better initialize NNs, compare the conditioning
across models, perform architecture search, and do meta-learning.
Unfortunately, the finite width NTK is notoriously expensive to compute, which
severely limits its practical utility. We perform the first in-depth analysis
of the compute and memory requirements for NTK computation in finite width
networks. Leveraging the structure of neural networks, we further propose two
novel algorithms that change the exponent of the compute and memory
requirements of the finite width NTK, dramatically improving efficiency. Our
algorithms can be applied in a black box fashion to any differentiable
function, including those implementing neural networks. We open-source our
implementations within the Neural Tangents package (arXiv:1912.02803) at
https://github.com/google/neural-tangents.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 12:18:22 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Novak",
"Roman",
""
],
[
"Sohl-Dickstein",
"Jascha",
""
],
[
"Schoenholz",
"Samuel S.",
""
]
] |
new_dataset
| 0.98174 |
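Entry 2206.08720 above defines the finite-width NTK as a contraction of parameter Jacobians. Purely as an illustration of that definition, and not of the paper's optimized algorithms, the following sketch evaluates the kernel for a hypothetical toy model with a scalar output using numerical Jacobians; the model, its shapes, and the finite-difference step are all assumptions made for the example.

```python
import numpy as np

# Hypothetical toy model f(theta, x) with scalar output; shapes are assumptions.
def f(theta, x):
    W1 = theta[:8].reshape(4, 2)
    w2 = theta[8:12]
    return np.tanh(W1 @ x) @ w2

def jacobian(theta, x, eps=1e-6):
    # Numerical gradient of the scalar output w.r.t. theta (central differences).
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (f(theta + d, x) - f(theta - d, x)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
theta = rng.normal(size=12)
x1, x2 = rng.normal(size=2), rng.normal(size=2)
# Finite-width NTK at (x1, x2): [df/dtheta](x1) . [df/dtheta](x2)
ntk_value = jacobian(theta, x1) @ jacobian(theta, x2)
print(ntk_value)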
2206.08723
|
Yiwei Jiang
|
Yiwei Jiang, Klim Zaporojets, Johannes Deleu, Thomas Demeester, Chris
Develder
|
CookDial: A dataset for task-oriented dialogs grounded in procedural
documents
|
The dataset and codes are available at
https://github.com/YiweiJiang2015/CookDial
|
Applied Intelligence, 1-19 (2022)
|
10.1007/s10489-022-03692-0
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work presents a new dialog dataset, CookDial, that facilitates research
on task-oriented dialog systems with procedural knowledge understanding. The
corpus contains 260 human-to-human task-oriented dialogs in which an agent,
given a recipe document, guides the user to cook a dish. Dialogs in CookDial
exhibit two unique features: (i) procedural alignment between the dialog flow
and supporting document; (ii) complex agent decision-making that involves
segmenting long sentences, paraphrasing hard instructions and resolving
coreference in the dialog context. In addition, we identify three challenging
(sub)tasks in the assumed task-oriented dialog system: (1) User Question
Understanding, (2) Agent Action Frame Prediction, and (3) Agent Response
Generation. For each of these tasks, we develop a neural baseline model, which
we evaluate on the CookDial dataset. We publicly release the CookDial dataset,
comprising rich annotations of both dialogs and recipe documents, to stimulate
further research on domain-specific document-grounded dialog systems.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 12:23:53 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Jiang",
"Yiwei",
""
],
[
"Zaporojets",
"Klim",
""
],
[
"Deleu",
"Johannes",
""
],
[
"Demeester",
"Thomas",
""
],
[
"Develder",
"Chris",
""
]
] |
new_dataset
| 0.999746 |
2206.08725
|
Astha Agrawal
|
Astha Agrawal, Gyanendra K. Verma and R. K. Sharma
|
Galois LCD Codes Over Fq + uFq + vFq + uvFq
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In \cite{anote}, Wu and Shi studied $ l $-Galois LCD codes over finite chain
ring $\mathcal{R}=\mathbb{F}_q+u\mathbb{F}_q$, where $u^2=0$ and $ q=p^e$ for
some prime $p$ and positive integer $e$. In this work, we extend the results to
the finite non-chain ring $ \mathcal{R}
=\mathbb{F}_q+u\mathbb{F}_q+v\mathbb{F}_q+uv\mathbb{F}_q$, where $u^2=u,v^2=v $
and $ uv=vu $. We define a correspondence between $ l $-Galois dual of linear
codes over $ \mathcal{R} $ and $ l $-Galois dual of its component codes over $
\mathbb{F}_q .$ Further, we construct Euclidean LCD and $ l $-Galois LCD codes
from linear code over $ \mathcal{R} $. This consequently leads us to prove that
any linear code over $ \mathcal{R} $ is equivalent to Euclidean ($ q>3 $) and $
l $-Galois LCD ($0<l<e$, and $p^{e-l}+1\mid p^e-1$) code over $ \mathcal{R} .$
Finally, we investigate MDS codes over $ \mathcal{R} .$
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 12:24:09 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Agrawal",
"Astha",
""
],
[
"Verma",
"Gyanendra K.",
""
],
[
"Sharma",
"R. K.",
""
]
] |
new_dataset
| 0.999089 |
2206.08727
|
Leon Derczynski
|
Leon Derczynski, Annika Solveig Hedegaard Isfeldt, Signhild Djurhuus
|
The ITU Faroese Pairs Dataset
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This article documents a dataset of sentence pairs between Faroese and
Danish, produced at ITU Copenhagen. The data covers translation from both
source languages, and is intended for use as training data for machine
translation systems in this language pair.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 12:27:20 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Derczynski",
"Leon",
""
],
[
"Isfeldt",
"Annika Solveig Hedegaard",
""
],
[
"Djurhuus",
"Signhild",
""
]
] |
new_dataset
| 0.99972 |
2206.08768
|
Pedro Orvalho
|
Pedro Orvalho and Mikol\'a\v{s} Janota and Vasco Manquinho
|
C-Pack of IPAs: A C90 Program Benchmark of Introductory Programming
Assignments
|
3 pages, 3 tables, 1 GitHub url:
https://github.com/pmorvalho/C-Pack-IPAs
| null | null | null |
cs.SE cs.AI cs.CY cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Due to the vast number of students enrolled in Massive Open Online Courses
(MOOCs), there has been an increasing number of automated program repair
techniques focused on introductory programming assignments (IPAs). Such
techniques take advantage of previous correct student implementations in order
to provide automated, comprehensive, and personalized feedback to students.
This paper presents C-Pack-IPAs, a publicly available benchmark of students'
programs submitted for 25 different IPAs. C-Pack-IPAs contains semantically
correct, semantically incorrect, and syntactically incorrect programs plus a
test suite for each IPA. Hence, C-Pack-IPAs can be used to help evaluate the
development of novel semantic, as well as syntactic, automated program repair
frameworks, focused on providing feedback to novice programmers.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 13:30:45 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Orvalho",
"Pedro",
""
],
[
"Janota",
"Mikoláš",
""
],
[
"Manquinho",
"Vasco",
""
]
] |
new_dataset
| 0.978186 |
2206.08776
|
Xuchuang Wang
|
Xuchuang Wang, Hong Xie, John C.S. Lui
|
Multiple-Play Stochastic Bandits with Shareable Finite-Capacity Arms
|
to appear in ICML 2022
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We generalize the multiple-play multi-armed bandits (MP-MAB) problem with a
shareable arm setting, in which several plays can share the same arm.
Furthermore, each shareable arm has a finite reward capacity and a ''per-load''
reward distribution, both of which are unknown to the learner. The reward from
a shareable arm is load-dependent, which is the "per-load" reward multiplying
either the number of plays pulling the arm, or its reward capacity when the
number of plays exceeds the capacity limit. When the "per-load" reward follows
a Gaussian distribution, we prove a sample complexity lower bound of learning
the capacity from load-dependent rewards and also a regret lower bound of this
new MP-MAB problem. We devise a capacity estimator whose sample complexity
upper bound matches the lower bound in terms of reward means and capacities. We
also propose an online learning algorithm to address the problem and prove its
regret upper bound. The first term of this regret upper bound matches that of
the regret lower bound, and its second and third terms also clearly correspond
to those of the lower bound. Extensive experiments validate our algorithm's performance and
also its gain in 5G & 4G base station selection.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 13:47:27 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Wang",
"Xuchuang",
""
],
[
"Xie",
"Hong",
""
],
[
"Lui",
"John C. S.",
""
]
] |
new_dataset
| 0.990335 |
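For entry 2206.08776 above, the load-dependent reward described in the abstract (a "per-load" Gaussian reward multiplied by the number of plays, capped at the arm's finite capacity) can be mimicked with a small simulation. This is only a hedged sketch of the reward model as stated in the abstract; the per-load mean, capacity, and noise level below are made-up values.

```python
import numpy as np

def shareable_arm_reward(per_load_mean, capacity, num_plays, rng, sigma=1.0):
    # Draw one "per-load" Gaussian reward and multiply it by the effective load:
    # the number of plays on the arm, capped at the arm's finite reward capacity.
    per_load = rng.normal(per_load_mean, sigma)
    return per_load * min(num_plays, capacity)

rng = np.random.default_rng(0)
# With capacity 3, pulling the arm with 5 plays yields the same expected
# reward as pulling it with 3 plays.
print(shareable_arm_reward(2.0, capacity=3, num_plays=5, rng=rng))
print(shareable_arm_reward(2.0, capacity=3, num_plays=3, rng=rng))
```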
2206.08778
|
Weiwei Cui
|
Weiwei Cui, Yaqi Wang, Qianni Zhang, Huiyu Zhou, Dan Song, Xingyong
Zuo, Gangyong Jia, Liaoyuan Zeng
|
CTooth: A Fully Annotated 3D Dataset and Benchmark for Tooth Volume
Segmentation on Cone Beam Computed Tomography Images
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
3D tooth segmentation is a prerequisite for computer-aided dental diagnosis
and treatment. However, segmenting all tooth regions manually is subjective and
time-consuming. Recently, deep learning-based segmentation methods produce
convincing results and reduce manual annotation efforts, but it requires a
large quantity of ground truth for training. To our knowledge, there are few
tooth data available for the 3D segmentation study. In this paper, we establish
a fully annotated cone beam computed tomography dataset CTooth with tooth gold
standard. This dataset contains 22 volumes (7363 slices) with fine tooth labels
annotated by experienced radiographic interpreters. To ensure a relatively even
data sampling distribution, data variance, including missing teeth and dental
restorations, is included in CTooth. Several state-of-the-art segmentation
methods are evaluated on this dataset. Afterwards, we further summarise and
apply a series of 3D attention-based Unet variants for segmenting tooth
volumes. This work provides a new benchmark for the tooth volume segmentation
task. Experimental evidence proves that attention modules of the 3D UNet
structure boost responses in tooth areas and inhibit the influence of
background and noise. The best performance is achieved by the 3D UNet with the
SKNet attention module, at 88.04 \% Dice and 78.71 \% IoU. The
attention-based Unet framework outperforms other state-of-the-art methods on
the CTooth dataset. The codebase and dataset are released.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 13:48:35 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Cui",
"Weiwei",
""
],
[
"Wang",
"Yaqi",
""
],
[
"Zhang",
"Qianni",
""
],
[
"Zhou",
"Huiyu",
""
],
[
"Song",
"Dan",
""
],
[
"Zuo",
"Xingyong",
""
],
[
"Jia",
"Gangyong",
""
],
[
"Zeng",
"Liaoyuan",
""
]
] |
new_dataset
| 0.999791 |
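Entry 2206.08778 above reports Dice and IoU for tooth segmentation. As a reminder of what those two metrics compute, independent of the paper's models, here is a minimal NumPy sketch on toy binary masks; the 4x4 masks are invented for illustration.

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    # Binary masks. Dice = 2|A∩B| / (|A|+|B|); IoU = |A∩B| / |A∪B|.
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True   # 4 predicted pixels
gt   = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4]  = True    # 6 ground-truth pixels
print(dice_iou(pred, gt))  # overlap of 4 px -> Dice = 0.8, IoU ≈ 0.667
```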
2206.08874
|
Ayush Gupta
|
Ayush Gupta, Ekaterina Dorzhieva, Ahmed Baza, Mert Alper, Aleksey
Fedoseev, and Dzmitry Tsetserukou
|
SwarmHawk: Self-Sustaining Multi-Agent System for Landing on a Moving
Platform through an Agent Supervision
|
Accepted paper at IEEE International Conference on Unmanned Aircraft
System (ICUAS 2022), IEEE copyright
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Heterogeneous teams of mobile robots and UAVs offer a substantial benefit in
autonomous exploration of the environment. Nevertheless, although joint
exploration scenarios for such systems are widely discussed, they still suffer
from low adaptability to changes in external conditions and to faults of swarm
agents during UAV docking. We propose a novel vision-based
the agents lost its position signal. The proposed SwarmHawk system relies on
vision-based detection for the mobile platform tracking and navigation of its
agents. Each drone of the swarm carries an RGB camera and AprilTag3 QR-code
marker on board. SwarmHawk can switch between two modes of operation, acting as
a homogeneous swarm in case of global UAV localization or assigning leader
drones to navigate its neighbors in case of a camera fault in one of the drones
or global localization failure. Two experiments were performed to evaluate
SwarmHawk's performance under the global and local localization with static and
moving platforms. The experimental results revealed a sufficient accuracy in
the swarm landing task on a static mobile platform (error of 4.2 cm in
homogeneous formation and 1.9 cm in leader-follower formation) and on a moving
platform (error of 6.9 cm in homogeneous formation and 4.7 cm in
leader-follower formation). Moreover, the drones showed a good landing on a
platform moving along a complex trajectory (average error of 19.4 cm) in
leader-follower formation. The proposed SwarmHawk technology can be potentially
applied in various swarm scenarios, including complex environment exploration,
inspection, and drone delivery.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 16:21:10 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Gupta",
"Ayush",
""
],
[
"Dorzhieva",
"Ekaterina",
""
],
[
"Baza",
"Ahmed",
""
],
[
"Alper",
"Mert",
""
],
[
"Fedoseev",
"Aleksey",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.992352 |
2206.08882
|
Rui Song
|
Rui Song, Anupama Hegde, Numan Senel, Alois Knoll, Andreas Festag
|
Edge-Aided Sensor Data Sharing in Vehicular Communication Networks
|
Accepted for IEEE 95th Vehicular Technology Conference
(VTC2022-Spring)
| null | null | null |
cs.MA cs.CV eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Sensor data sharing in vehicular networks can significantly improve the range
and accuracy of environmental perception for connected automated vehicles.
Different concepts and schemes for dissemination and fusion of sensor data have
been developed. It is common to these schemes that measurement errors of the
sensors impair the perception quality and can result in road traffic accidents.
Specifically, when the measurement error from the sensors (also referred to as
measurement noise) is unknown and time varying, the performance of the data
fusion process is restricted, which represents a major challenge in the
calibration of sensors. In this paper, we consider sensor data sharing and
fusion in a vehicular network with both, vehicle-to-infrastructure and
vehicle-to-vehicle communication. We propose a method, named Bidirectional
Feedback Noise Estimation (BiFNoE), in which an edge server collects and caches
sensor measurement data from vehicles. The edge estimates the noise and the
targets alternately in double dynamic sliding time windows and enhances the
distributed cooperative environment sensing at each vehicle with low
communication costs. We evaluate the proposed algorithm and data dissemination
strategy in an application scenario by simulation and show that the perception
accuracy is on average improved by around 80 % with only 12 kbps uplink and 28
kbps downlink bandwidth.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 16:30:56 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Song",
"Rui",
""
],
[
"Hegde",
"Anupama",
""
],
[
"Senel",
"Numan",
""
],
[
"Knoll",
"Alois",
""
],
[
"Festag",
"Andreas",
""
]
] |
new_dataset
| 0.988826 |
2206.08898
|
Soroush Abbasi Koohpayegani
|
Soroush Abbasi Koohpayegani, Hamed Pirsiavash
|
SimA: Simple Softmax-free Attention for Vision Transformers
|
Code is available here:
$\href{https://github.com/UCDvision/sima}{\text{This https URL}}$
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, vision transformers have become very popular. However, deploying
them in many applications is computationally expensive partly due to the
Softmax layer in the attention block. We introduce a simple but effective
Softmax-free attention block, SimA, which normalizes query and key matrices
with a simple $\ell_1$-norm instead of using a Softmax layer. Then, the attention
block in SimA is a simple multiplication of three matrices, so SimA can
dynamically change the ordering of the computation at test time to achieve
computation linear in the number of tokens or the number of channels. We
empirically show that SimA applied to three SOTA variations of transformers,
DeiT, XCiT, and CvT, results in on-par accuracy compared to the SOTA models,
without any need for Softmax layer. Interestingly, changing SimA from
multi-head to single-head has only a small effect on the accuracy, which
simplifies the attention block further. The code is available here:
$\href{https://github.com/UCDvision/sima}{\text{This https URL}}$
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 17:15:01 GMT"
}
] | 2022-06-20T00:00:00 |
[
[
"Koohpayegani",
"Soroush Abbasi",
""
],
[
"Pirsiavash",
"Hamed",
""
]
] |
new_dataset
| 0.998405 |
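For entry 2206.08898 above, the core idea, replacing Softmax with an $\ell_1$ normalization of Q and K so that attention becomes a plain product of three matrices, can be sketched in a few lines of NumPy. The normalization axis and the rule for choosing the multiplication order below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def sima_attention(Q, K, V, eps=1e-6):
    # Normalize query and key with a simple l1 norm (assumed here to be taken
    # over the channel axis) instead of applying a Softmax.
    Qn = Q / (np.abs(Q).sum(axis=-1, keepdims=True) + eps)
    Kn = K / (np.abs(K).sum(axis=-1, keepdims=True) + eps)
    n, d = Q.shape
    # Attention is now a product of three matrices, so the multiplication order
    # can be picked at test time: (Qn Kn^T) V costs O(n^2 d); Qn (Kn^T V) costs O(n d^2).
    if n <= d:
        return (Qn @ Kn.T) @ V
    return Qn @ (Kn.T @ V)

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(16, 8)) for _ in range(3))
print(sima_attention(Q, K, V).shape)  # (16, 8)
```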
2003.03048
|
Fei Li
|
Fei Li and Xiumei Li
|
Weight hierarchies and weight distributions of a family of $p$-ary
linear codes
|
20
|
Designs, Codes and Cryptography, 2021
|
10.1007/s10623-021-00962-9
| null |
cs.IT math.IT math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The weight distribution and weight hierarchy of linear codes are two
important research topics in coding theory. In this paper, by choosing proper
defining sets from inhomogeneous quadratic functions over $\mathbb{F}_{q}^{2},$
we construct a family of $3$-weight $p$-ary linear codes and determine their
weight distributions and weight hierarchies. Most of the codes can be used in
secret sharing schemes.
|
[
{
"version": "v1",
"created": "Fri, 6 Mar 2020 06:21:59 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Sep 2020 14:45:22 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Li",
"Fei",
""
],
[
"Li",
"Xiumei",
""
]
] |
new_dataset
| 0.997747 |
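Entry 2003.03048 above concerns weight distributions of $p$-ary linear codes. For readers unfamiliar with the notion, the brute-force sketch below enumerates all codewords of a tiny code over F_p and tallies their Hamming weights; the generator matrix is a made-up example with no connection to the codes constructed in the paper.

```python
import itertools
from collections import Counter

def weight_distribution(G, p):
    # Enumerate all messages over F_p, encode with generator matrix G (rows = basis
    # vectors), and count Hamming weights: the weight distribution of the code.
    k = len(G)
    dist = Counter()
    for msg in itertools.product(range(p), repeat=k):
        cw = [sum(m * g for m, g in zip(msg, col)) % p for col in zip(*G)]
        dist[sum(c != 0 for c in cw)] += 1
    return dict(sorted(dist.items()))

# Ternary [4,2] code with a hypothetical generator matrix (illustration only).
G = [[1, 0, 1, 2],
     [0, 1, 2, 2]]
print(weight_distribution(G, p=3))
```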
2004.04331
|
An-An Lu
|
An-An Lu, Xiqi Gao, and Chengshan Xiao
|
Robust Linear Precoder Design for 3D Massive MIMO Downlink with A
Posteriori Channel Model
|
29 pages, 6 figures
| null |
10.1109/TVT.2022.3163392
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the robust linear precoder design for three
dimensional (3D) massive multi-input multi-output (MIMO) downlink with uniform
planar array (UPA) and imperfect channel state information (CSI). In practical
massive MIMO with UPAs, the number of antennas in each column or row is usually
limited. The straightforward extension of the conventional DFT based beam
domain channel model widely used in massive MIMO with uniform linear arrays
(ULAs) cannot be applied directly. To overcome this issue, we establish a new beam domain
channel model by using sampled steering vectors. Then, a novel method to obtain
the beam domain channel power matrices and the instantaneous beam domain
channel coefficients is proposed, and an a posteriori beam domain channel model
which includes the channel aging and the spatial correlation is established. On
the basis of the a posteriori channel model, we consider the robust precoder
design with the expected weighted sum-rate maximization under a total power
constraint. By viewing the power constraint as a Riemannian manifold, we
transform the constrained optimization problem into an unconstrained
optimization problem on the Riemannian manifold. Then, we derive an iterative
algorithm to obtain the optimal precoders by setting the Riemannian gradient of
the objective function to zero. Furthermore, we propose a low complexity robust
precoder design by replacing the expected rates in the objective function with
their upper bounds. Simulation results show that the proposed precoders can
achieve significant performance gain than the widely used regularized zero
forcing (RZF) precoder and signal to leakage noise ratio (SLNR) precoder.
|
[
{
"version": "v1",
"created": "Thu, 9 Apr 2020 02:23:55 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 00:01:50 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Lu",
"An-An",
""
],
[
"Gao",
"Xiqi",
""
],
[
"Xiao",
"Chengshan",
""
]
] |
new_dataset
| 0.971975 |
2012.03476
|
Rui Yang
|
Rui Yang, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
|
NCGNN: Node-Level Capsule Graph Neural Network for Semisupervised
Classification
|
accepted by TNNLS
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Message passing has evolved as an effective tool for designing Graph Neural
Networks (GNNs). However, most existing methods for message passing simply sum
or average all the neighboring features to update node representations. They
are restricted by two problems, i.e., (i) lack of interpretability to identify
node features significant to the prediction of GNNs, and (ii) feature
over-mixing that leads to the over-smoothing issue in capturing long-range
dependencies and inability to handle graphs under heterophily or low homophily.
In this paper, we propose a Node-level Capsule Graph Neural Network (NCGNN) to
address these problems with an improved message passing scheme. Specifically,
NCGNN represents nodes as groups of node-level capsules, in which each capsule
extracts distinctive features of its corresponding node. For each node-level
capsule, a novel dynamic routing procedure is developed to adaptively select
appropriate capsules for aggregation from a subgraph identified by the designed
graph filter. NCGNN aggregates only the advantageous capsules and restrains
irrelevant messages to avoid over-mixing features of interacting nodes.
Therefore, it can relieve the over-smoothing issue and learn effective node
representations over graphs with homophily or heterophily. Furthermore, our
proposed message passing scheme is inherently interpretable and exempt from
complex post-hoc explanations, as the graph filter and the dynamic routing
procedure identify a subset of node features that are most significant to the
model prediction from the extracted subgraph. Extensive experiments on
synthetic as well as real-world graphs demonstrate that NCGNN can well address
the over-smoothing issue and produce better node representations for
semisupervised node classification. It outperforms the state of the art under
both homophily and heterophily.
|
[
{
"version": "v1",
"created": "Mon, 7 Dec 2020 06:46:17 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 04:16:44 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Yang",
"Rui",
""
],
[
"Dai",
"Wenrui",
""
],
[
"Li",
"Chenglin",
""
],
[
"Zou",
"Junni",
""
],
[
"Xiong",
"Hongkai",
""
]
] |
new_dataset
| 0.986586 |
2112.11790
|
Junjie Huang
|
Junjie Huang, Guan Huang, Zheng Zhu, Yun Ye, and Dalong Du
|
BEVDet: High-performance Multi-camera 3D Object Detection in
Bird-Eye-View
|
Multi-camera 3D Object Detection
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous driving perceives its surroundings for decision making, which is
one of the most complex scenarios in visual perception. The success of paradigm
innovation in solving the 2D object detection task inspires us to seek an
elegant, feasible, and scalable paradigm for fundamentally pushing the
performance boundary in this area. To this end, we contribute the BEVDet
paradigm in this paper. BEVDet performs 3D object detection in Bird-Eye-View
(BEV), where most target values are defined and route planning can be handily
performed. We merely reuse existing modules to build its framework but
substantially develop its performance by constructing an exclusive data
augmentation strategy and upgrading the Non-Maximum Suppression strategy. In
the experiment, BEVDet offers an excellent trade-off between accuracy and
time-efficiency. As a fast version, BEVDet-Tiny scores 31.2% mAP and 39.2% NDS
on the nuScenes val set. It is comparable with FCOS3D, but requires just 11% of
the computational budget (215.3 GFLOPs) and runs 9.2 times faster at 15.6 FPS.
Another high-precision version dubbed BEVDet-Base scores 39.3% mAP and 47.2%
NDS, significantly exceeding all published results. With a comparable inference
speed, it surpasses FCOS3D by a large margin of +9.8% mAP and +10.0% NDS. The
source code is publicly available for further research at
https://github.com/HuangJunJie2017/BEVDet .
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 10:48:06 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 15:47:13 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Jun 2022 09:15:52 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Huang",
"Junjie",
""
],
[
"Huang",
"Guan",
""
],
[
"Zhu",
"Zheng",
""
],
[
"Ye",
"Yun",
""
],
[
"Du",
"Dalong",
""
]
] |
new_dataset
| 0.994478 |
2201.01285
|
Dominik Kempa
|
Dominik Kempa, Tomasz Kociumaka
|
Dynamic Suffix Array with Polylogarithmic Queries and Updates
|
83 pages
| null |
10.1145/3519935.3520061
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The suffix array $SA[1..n]$ of a text $T$ of length $n$ is a permutation of
$\{1,\ldots,n\}$ describing the lexicographical ordering of suffixes of $T$,
and it is considered to be among the most important data structures in
string algorithms, with dozens of applications in data compression,
bioinformatics, and information retrieval. One of the biggest drawbacks of the
suffix array is that it is very difficult to maintain under text updates: even
a single character substitution can completely change the contents of the
suffix array. Thus, the suffix array of a dynamic text is modelled using suffix
array queries, which return the value $SA[i]$ given any $i\in[1..n]$.
Prior to this work, the fastest dynamic suffix array implementations were by
Amir and Boneh. At ISAAC 2020, they showed how to answer suffix array queries
in $\tilde{O}(k)$ time, where $k\in[1..n]$ is a trade-off parameter, with
$\tilde{O}(\frac{n}{k})$-time text updates. In a very recent preprint [2021],
they also provided a solution with $O(\log^5 n)$-time queries and
$\tilde{O}(n^{2/3})$-time updates.
We propose the first data structure that supports both suffix array queries
and text updates in $O({\rm polylog}\,n)$ time (achieving $O(\log^4 n)$ and
$O(\log^{3+o(1)} n)$ time, respectively). Our data structure is deterministic
and the running times for all operations are worst-case. In addition to the
standard single-character edits (character insertions, deletions, and
substitutions), we support (also in $O(\log^{3+o(1)} n)$ time) the "cut-paste"
operation that moves any (arbitrarily long) substring of $T$ to any place in
$T$. We complement our structure by a hardness result: unless the Online
Matrix-Vector Multiplication (OMv) Conjecture fails, no data structure with
$O({\rm polylog}\,n)$-time suffix array queries can support the "copy-paste"
operation in $O(n^{1-\epsilon})$ time for any $\epsilon>0$.
|
[
{
"version": "v1",
"created": "Tue, 4 Jan 2022 18:28:45 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Kempa",
"Dominik",
""
],
[
"Kociumaka",
"Tomasz",
""
]
] |
new_dataset
| 0.992582 |
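Entry 2201.01285 above is about maintaining a suffix array under text updates. Purely as a reminder of what SA and a "suffix array query" mean, here is a naive static construction in Python (quadratic-time sorting of suffixes, nothing like the paper's polylogarithmic data structure); the 1-based indexing mirrors the abstract's convention.

```python
def suffix_array(text):
    # Naive construction: sort suffix start positions lexicographically.
    return sorted(range(1, len(text) + 1), key=lambda i: text[i - 1:])

def sa_query(sa, i):
    # "Suffix array query": return SA[i] for a given 1-based index i.
    return sa[i - 1]

T = "banana"
SA = suffix_array(T)      # 1-based starting positions of sorted suffixes
print(SA)                 # [6, 4, 2, 1, 5, 3]
print(sa_query(SA, 1))    # 6 -> the lexicographically smallest suffix "a"
```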
2201.05544
|
Jiongzhi Zheng
|
Jiongzhi Zheng and Kun He and Jianrong Zhou and Yan Jin and Chu-Min Li
and Felip Manya
|
BandMaxSAT: A Local Search MaxSAT Solver with Multi-armed Bandit
|
Accepted by IJCAI 2022
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We address Partial MaxSAT (PMS) and Weighted PMS (WPMS), two practical
generalizations of the MaxSAT problem, and propose a local search algorithm for
these problems, called BandMaxSAT, that applies a multi-armed bandit model to
guide the search direction. The bandit in our method is associated with all the
soft clauses in the input (W)PMS instance. Each arm corresponds to a soft
clause. The bandit model can help BandMaxSAT to select a good direction to
escape from local optima by selecting a soft clause to be satisfied in the
current step, that is, selecting an arm to be pulled. We further propose an
initialization method for (W)PMS that prioritizes both unit and binary clauses
when producing the initial solutions. Extensive experiments demonstrate that
BandMaxSAT significantly outperforms the state-of-the-art (W)PMS local search
algorithm SATLike3.0. Specifically, the number of instances in which BandMaxSAT
obtains better results is about twice that obtained by SATLike3.0. Moreover, we
combine BandMaxSAT with the complete solver TT-Open-WBO-Inc. The resulting
solver BandMaxSAT-c also outperforms some of the best state-of-the-art complete
(W)PMS solvers, including SATLike-c, Loandra and TT-Open-WBO-Inc.
|
[
{
"version": "v1",
"created": "Fri, 14 Jan 2022 16:32:39 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 06:28:00 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Zheng",
"Jiongzhi",
""
],
[
"He",
"Kun",
""
],
[
"Zhou",
"Jianrong",
""
],
[
"Jin",
"Yan",
""
],
[
"Li",
"Chu-Min",
""
],
[
"Manya",
"Felip",
""
]
] |
new_dataset
| 0.996178 |
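For entry 2201.05544 above, the bandit component associates one arm with each soft clause and picks an arm (a clause to satisfy) at every escape step. The abstract does not spell out the selection rule, so the snippet below uses a generic UCB1-style score purely as a hedged stand-in for that component; the counts and accumulated values are toy numbers.

```python
import math

def ucb_select(counts, values, c=1.4):
    # Generic UCB1-style arm selection: each arm corresponds to a soft clause,
    # and the chosen arm suggests which clause to try to satisfy next.
    total = sum(counts)
    best, best_score = None, float("-inf")
    for arm, (n, v) in enumerate(zip(counts, values)):
        score = float("inf") if n == 0 else v / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = arm, score
    return best

counts = [3, 1, 0, 5]            # pulls per soft clause
values = [2.0, 1.0, 0.0, 3.5]    # accumulated reward per soft clause
print(ucb_select(counts, values))  # arm 2 (never pulled) is explored first
```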
2203.02654
|
Sifeng He
|
Sifeng He, Xudong Yang, Chen Jiang, Gang Liang, Wei Zhang, Tan Pan,
Qing Wang, Furong Xu, Chunguang Li, Jingxiong Liu, Hui Xu, Kaiming Huang,
Yuan Cheng, Feng Qian, Xiaobo Zhang, Lei Yang
|
A Large-scale Comprehensive Dataset and Copy-overlap Aware Evaluation
Protocol for Segment-level Video Copy Detection
|
Accepted by CVPR 2022. Codes are all publicly available at
https://github.com/alipay/VCSL
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce VCSL (Video Copy Segment Localization), a new
comprehensive segment-level annotated video copy dataset. Compared with
existing copy detection datasets restricted by either video-level annotation or
small-scale, VCSL not only has two orders of magnitude more segment-level
labelled data, with 160k realistic video copy pairs containing more than 280k
localized copied segment pairs, but also covers a variety of video categories
and a wide range of video duration. All the copied segments inside each
collected video pair are manually extracted and accompanied by precisely
annotated starting and ending timestamps. Alongside the dataset, we also
propose a novel evaluation protocol that better measures the prediction
accuracy of copy overlapping segments between a video pair and shows improved
adaptability in different scenarios. By benchmarking several baseline and
state-of-the-art segment-level video copy detection methods with the proposed
dataset and evaluation metric, we provide a comprehensive analysis that
uncovers the strengths and weaknesses of current approaches, hoping to open up
promising directions for future works. The VCSL dataset, metric and benchmark
codes are all publicly available at https://github.com/alipay/VCSL.
|
[
{
"version": "v1",
"created": "Sat, 5 Mar 2022 04:39:34 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 08:55:27 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"He",
"Sifeng",
""
],
[
"Yang",
"Xudong",
""
],
[
"Jiang",
"Chen",
""
],
[
"Liang",
"Gang",
""
],
[
"Zhang",
"Wei",
""
],
[
"Pan",
"Tan",
""
],
[
"Wang",
"Qing",
""
],
[
"Xu",
"Furong",
""
],
[
"Li",
"Chunguang",
""
],
[
"Liu",
"Jingxiong",
""
],
[
"Xu",
"Hui",
""
],
[
"Huang",
"Kaiming",
""
],
[
"Cheng",
"Yuan",
""
],
[
"Qian",
"Feng",
""
],
[
"Zhang",
"Xiaobo",
""
],
[
"Yang",
"Lei",
""
]
] |
new_dataset
| 0.999743 |
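Entry 2203.02654 above introduces a segment-level evaluation protocol for video copy detection. The helper below computes a plain temporal IoU between one predicted and one ground-truth copied segment, which is only a simplified building block rather than the VCSL protocol itself; the timestamps are invented.

```python
def segment_iou(pred, gt):
    # pred and gt are (start, end) timestamps of copied segments, in seconds.
    start = max(pred[0], gt[0])
    end = min(pred[1], gt[1])
    inter = max(0.0, end - start)
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

print(segment_iou((10.0, 20.0), (15.0, 30.0)))  # 5 / 20 = 0.25
```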
2203.14757
|
Yuki Saito
|
Yuki Saito, Yuto Nishimura, Shinnosuke Takamichi, Kentaro Tachibana,
Hiroshi Saruwatari
|
STUDIES: Corpus of Japanese Empathetic Dialogue Speech Towards Friendly
Voice Agent
|
5 pages, 2 figures, Accepted for INTERSPEECH2022, project page:
http://sython.org/Corpus/STUDIES
| null | null | null |
cs.SD cs.AI cs.CL cs.HC cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present STUDIES, a new speech corpus for developing a voice agent that can
speak in a friendly manner. Humans naturally control their speech prosody to
empathize with each other. By incorporating this "empathetic dialogue" behavior
into a spoken dialogue system, we can develop a voice agent that can respond to
a user more naturally. We designed the STUDIES corpus to include a speaker who
speaks with empathy for the interlocutor's emotion explicitly. We describe our
methodology to construct an empathetic dialogue speech corpus and report the
analysis results of the STUDIES corpus. We conducted a text-to-speech
experiment as an initial investigation of how we can develop a more natural voice
agent that can tune its speaking style according to the interlocutor's emotion.
The results show that the use of the interlocutor's emotion label and
conversational context embedding can produce speech with the same degree of
naturalness as that synthesized by using the agent's emotion label. Our project
page of the STUDIES corpus is http://sython.org/Corpus/STUDIES.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 13:49:59 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 09:19:01 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Saito",
"Yuki",
""
],
[
"Nishimura",
"Yuto",
""
],
[
"Takamichi",
"Shinnosuke",
""
],
[
"Tachibana",
"Kentaro",
""
],
[
"Saruwatari",
"Hiroshi",
""
]
] |
new_dataset
| 0.997188 |
2203.16618
|
Brian Davis
|
Brian Davis, Bryan Morse, Bryan Price, Chris Tensmeyer, Curtis
Wigington, and Vlad Morariu
|
End-to-end Document Recognition and Understanding with Dessurt
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Dessurt, a relatively simple document understanding transformer
capable of being fine-tuned on a greater variety of document tasks than prior
methods. It receives a document image and task string as input and generates
arbitrary text autoregressively as output. Because Dessurt is an end-to-end
architecture that performs text recognition in addition to the document
understanding, it does not require an external recognition model as prior
methods do. Dessurt is a more flexible model than prior methods and is able to
handle a variety of document domains and tasks. We show that this model is
effective at 9 different dataset-task combinations.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 19:02:53 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2022 12:58:32 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Jun 2022 19:51:01 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Davis",
"Brian",
""
],
[
"Morse",
"Bryan",
""
],
[
"Price",
"Bryan",
""
],
[
"Tensmeyer",
"Chris",
""
],
[
"Wigington",
"Curtis",
""
],
[
"Morariu",
"Vlad",
""
]
] |
new_dataset
| 0.998711 |
2204.09610
|
Noah Daniels
|
Polina Shpilker, John Freeman, Hailey McKelvie, Jill Ashey, Jay-Miguel
Fonticella, Hollie Putnam, Jane Greenberg, Lenore J. Cowen, Alva Couch, Noah
M. Daniels
|
MEDFORD: A human and machine readable metadata markup language
|
10 pages, no figures
| null | null | null |
cs.DL cs.DB q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reproducibility of research is essential for science. However, in the way
modern computational biology research is done, it is easy to lose track of
small, but extremely critical, details. Key details, such as the specific
version of a software used or iteration of a genome can easily be lost in the
shuffle, or perhaps not noted at all. Much work is being done on the database
and storage side of things, ensuring that there exists a space to store
experiment-specific details, but current mechanisms for recording details are
cumbersome for scientists to use. We propose a new metadata description
language, named MEDFORD, in which scientists can record all details relevant to
their research. Human-readable, easily-editable, and templatable, MEDFORD
serves as a collection point for all notes that a researcher could find
relevant to their research, be it for internal use or for future replication.
MEDFORD has been applied to coral research, documenting research from RNA-seq
analyses to photo collections.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 16:45:03 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 16:46:57 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Shpilker",
"Polina",
""
],
[
"Freeman",
"John",
""
],
[
"McKelvie",
"Hailey",
""
],
[
"Ashey",
"Jill",
""
],
[
"Fonticella",
"Jay-Miguel",
""
],
[
"Putnam",
"Hollie",
""
],
[
"Greenberg",
"Jane",
""
],
[
"Cowen",
"Lenore J.",
""
],
[
"Couch",
"Alva",
""
],
[
"Daniels",
"Noah M.",
""
]
] |
new_dataset
| 0.997444 |
2206.05503
|
Ayman Alahmar Dr.
|
William Bugden and Ayman Alahmar
|
Rust: The Programming Language for Safety and Performance
|
9 pages, 3 figures, 2 programming code listings
|
2nd International Graduate Studies Congress (IGSCONG'22), Turkey,
June 8-11, 2022. https://www.igscong.net/?lang=en
| null | null |
cs.PL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Rust is a young programming language gaining increased attention from
software developers since it was introduced to the world by Mozilla in 2010. In
this study, we attempt to answer several research questions. Does Rust deserve
such increased attention? What is there in Rust that is attracting programmers
to this new language? Safety and performance were among the very first promises
of Rust, as claimed by its early developers. Is Rust a safe language with
high performance? Have these claims been achieved? To answer these questions,
we surveyed and analyzed recent research on Rust and research that benchmarks
Rust with other available prominent programming languages. The results show
that Rust deserves the increased interest by programmers, and recent
experimental results in benchmarking research show Rust's overall superiority
over other well-established languages in terms of performance, safety, and
security. Even though this study was not comprehensive (and more work must be
done in this area), it informs the programming and research communities on the
promising features of Rust as the language of choice for the future.
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 11:12:32 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Bugden",
"William",
""
],
[
"Alahmar",
"Ayman",
""
]
] |
new_dataset
| 0.99893 |
2206.06556
|
Naoki Fukaya Ph.D.
|
Naoki Fukaya, Avinash Ummadisingu, Guilherme Maeda, Shin-ichi Maeda
|
F3 Hand: A Versatile Robot Hand Inspired by Human Thumb and Index
Fingers
|
8 pages. Accepted at IEEE RO-MAN 2022. An accompanying video is
available at https://www.youtube.com/watch?v=l6GK5XTbty8
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
It is challenging to grasp numerous objects with varying sizes and shapes
with a single robot hand. To address this, we propose a new robot hand called
the 'F3 hand' inspired by the complex movements of human index finger and
thumb. The F3 hand attempts to realize complex human-like grasping movements by
combining a parallel motion finger and a rotational motion finger with an
adaptive function. In order to confirm the performance of our hand, we attached
it to a mobile manipulator - the Toyota Human Support Robot (HSR) and conducted
grasping experiments. In our results, we show that it is able to grasp all YCB
objects (82 in total), including washers with outer diameters as small as
6.4mm. We also built a system for intuitive operation with a 3D mouse and grasp
an additional 24 objects, including small toothpicks and paper clips and large
pitchers and cracker boxes. The F3 hand is able to achieve a 98% success rate
in grasping even under imprecise control and positional offsets. Furthermore,
owing to the finger's adaptive function, we demonstrate characteristics of the
F3 hand that facilitate the grasping of soft objects such as strawberries in a
desirable posture.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 02:15:17 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 10:07:10 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Fukaya",
"Naoki",
""
],
[
"Ummadisingu",
"Avinash",
""
],
[
"Maeda",
"Guilherme",
""
],
[
"Maeda",
"Shin-ichi",
""
]
] |
new_dataset
| 0.999766 |
2206.07896
|
Ruobing Han
|
Ruobing Han, Jun Chen, Bhanu Garg, Jeffrey Young, Jaewoong Sim, and
Hyesoon Kim
|
CuPBoP: CUDA for Parallelized and Broad-range Processors
| null | null | null | null |
cs.DC cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CUDA is one of the most popular choices for GPU programming, but it can only
be executed on NVIDIA GPUs. Executing CUDA on non-NVIDIA devices not only
benefits the hardware community, but also allows data-parallel computation in
heterogeneous systems. To make CUDA programs portable, some researchers have
proposed using source-to-source translators to translate CUDA to portable
programming languages that can be executed on non-NVIDIA devices. However, most
CUDA translators require additional manual modifications on the translated
code, which imposes a heavy workload on developers. In this paper, CuPBoP is
proposed to execute CUDA on non-NVIDIA devices without relying on any portable
programming languages. Compared with existing work that executes CUDA on
non-NVIDIA devices, CuPBoP does not require manual modification of the CUDA
source code, but it still achieves the highest coverage (69.6%), much higher
than existing frameworks (56.6%) on the Rodinia benchmark. In particular, for
CPU backends, CuPBoP supports several ISAs (e.g., X86, RISC-V, AArch64) and has
close or even higher performance compared with other projects. We also compare
and analyze the performance among CuPBoP, manually optimized OpenMP/MPI
programs, and CUDA programs on the latest Ampere architecture GPU, and show
future directions for supporting CUDA programs on non-NVIDIA devices with high
performance.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 03:14:30 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Han",
"Ruobing",
""
],
[
"Chen",
"Jun",
""
],
[
"Garg",
"Bhanu",
""
],
[
"Young",
"Jeffrey",
""
],
[
"Sim",
"Jaewoong",
""
],
[
"Kim",
"Hyesoon",
""
]
] |
new_dataset
| 0.999199 |
2206.07898
|
Hung Le
|
Hung Le, Nancy F. Chen, Steven C.H. Hoi
|
Multimodal Dialogue State Tracking
|
Accepted at NAACL 2022 (Oral)
| null | null | null |
cs.AI cs.CL cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Designed for tracking user goals in dialogues, a dialogue state tracker is an
essential component in a dialogue system. However, the research of dialogue
state tracking has largely been limited to unimodality, in which slots and slot
values are limited by knowledge domains (e.g. restaurant domain with slots of
restaurant name and price range) and are defined by specific database schema.
In this paper, we propose to extend the definition of dialogue state tracking
to multimodality. Specifically, we introduce a novel dialogue state tracking
task to track the information of visual objects that are mentioned in
video-grounded dialogues. Each new dialogue utterance may introduce a new video
segment, new visual objects, or new object attributes, and a state tracker is
required to update these information slots accordingly. We created a new
synthetic benchmark and designed a novel baseline, Video-Dialogue Transformer
Network (VDTN), for this task. VDTN combines both object-level features and
segment-level features and learns contextual dependencies between videos and
dialogues to generate multimodal dialogue states. We optimized VDTN for a state
generation task as well as a self-supervised video understanding task which
recovers video segment or object representations. Finally, we trained VDTN to
use the decoded states in a response prediction task. Together with
comprehensive ablation and qualitative analysis, we discovered interesting
insights towards building more capable multimodal dialogue systems.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 03:18:42 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Le",
"Hung",
""
],
[
"Chen",
"Nancy F.",
""
],
[
"Hoi",
"Steven C. H.",
""
]
] |
new_dataset
| 0.998923 |
2206.07956
|
Ziqian Dai
|
Ziqian Dai, Jianwei Yu, Yan Wang, Nuo Chen, Yanyao Bian, Guangzhi Li,
Deng Cai, Dong Yu
|
Automatic Prosody Annotation with Pre-Trained Text-Speech Model
|
accepted by INTERSPEECH2022
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prosodic boundary plays an important role in text-to-speech synthesis (TTS)
in terms of naturalness and readability. However, the acquisition of prosodic
boundary labels relies on manual annotation, which is costly and
time-consuming. In this paper, we propose to automatically extract prosodic
boundary labels from text-audio data via a neural text-speech model with
pre-trained audio encoders. This model is pre-trained on text and speech data
separately and jointly fine-tuned on TTS data in a triplet format: {speech,
text, prosody}. The experimental results on both automatic evaluation and human
evaluation demonstrate that: 1) the proposed text-speech prosody annotation
framework significantly outperforms text-only baselines; 2) the quality of
automatic prosodic boundary annotations is comparable to human annotations; 3)
TTS systems trained with model-annotated boundaries are slightly better than
systems that use manual ones.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 06:54:16 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Dai",
"Ziqian",
""
],
[
"Yu",
"Jianwei",
""
],
[
"Wang",
"Yan",
""
],
[
"Chen",
"Nuo",
""
],
[
"Bian",
"Yanyao",
""
],
[
"Li",
"Guangzhi",
""
],
[
"Cai",
"Deng",
""
],
[
"Yu",
"Dong",
""
]
] |
new_dataset
| 0.974837 |
2206.08026
|
Min H. Kim
|
Mustafa B. Yaldiz, Andreas Meuleman, Hyeonjoong Jang, Hyunho Ha, Min
H. Kim
|
DeepFormableTag: End-to-end Generation and Recognition of Deformable
Fiducial Markers
| null |
ACM Transactions on Graphics 40, 4, Article 67 (August 2021)
|
10.1145/3450626.3459762
| null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fiducial markers have been broadly used to identify objects or embed messages
that can be detected by a camera. Primarily, existing detection methods assume
that markers are printed on ideally planar surfaces. Markers often fail to be
recognized due to various imaging artifacts of optical/perspective distortion
and motion blur. To overcome these limitations, we propose a novel deformable
fiducial marker system that consists of three main parts: First, a fiducial
marker generator creates a set of free-form color patterns to encode
significantly large-scale information in unique visual codes. Second, a
differentiable image simulator creates a training dataset of photorealistic
scene images with the deformed markers, which are rendered during optimization in a
differentiable manner. The rendered images include realistic shading with
specular reflection, optical distortion, defocus and motion blur, color
alteration, imaging noise, and shape deformation of markers. Lastly, a trained
marker detector seeks the regions of interest and recognizes multiple marker
patterns simultaneously via inverse deformation transformation. The deformable
marker creator and detector networks are jointly optimized via the
differentiable photorealistic renderer in an end-to-end manner, allowing us to
robustly recognize a wide range of deformable markers with high accuracy. Our
deformable marker system is capable of decoding 36-bit messages successfully at
~29 fps with severe shape deformation. Results validate that our system
significantly outperforms the traditional and data-driven marker methods. Our
learning-based marker system opens up new interesting applications of fiducial
markers, including cost-effective motion capture of the human body, active 3D
scanning using our fiducial markers' array as structured light patterns, and
robust augmented reality rendering of virtual objects on dynamic surfaces.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 09:29:26 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Yaldiz",
"Mustafa B.",
""
],
[
"Meuleman",
"Andreas",
""
],
[
"Jang",
"Hyeonjoong",
""
],
[
"Ha",
"Hyunho",
""
],
[
"Kim",
"Min H.",
""
]
] |
new_dataset
| 0.975702 |
2206.08081
|
Nishtha Madaan
|
Nishtha Madaan, Prateek Chaudhury, Nishant Kumar, Srikanta Bedathur
|
TransDrift: Modeling Word-Embedding Drift using Transformer
|
10 pages
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In modern NLP applications, word embeddings are a crucial backbone that can
be readily shared across a number of tasks. However, as the text distributions
change and word semantics evolve over time, the downstream applications using
the embeddings can suffer if the word representations do not conform to the
data drift. Thus, maintaining word embeddings to be consistent with the
underlying data distribution is a key problem. In this work, we tackle this
problem and propose TransDrift, a transformer-based prediction model for word
embeddings. Leveraging the flexibility of transformer, our model accurately
learns the dynamics of the embedding drift and predicts the future embedding.
In experiments, we compare with existing methods and show that our model makes
significantly more accurate predictions of the word embedding than the
baselines. Crucially, by applying the predicted embeddings as a backbone for
downstream classification tasks, we show that our embeddings lead to superior
performance compared to the previous methods.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 10:48:26 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Madaan",
"Nishtha",
""
],
[
"Chaudhury",
"Prateek",
""
],
[
"Kumar",
"Nishant",
""
],
[
"Bedathur",
"Srikanta",
""
]
] |
new_dataset
| 0.986591 |
2206.08141
|
Chaojian Li
|
Yang Zhao, Ziyun Li, Yonggan Fu, Yongan Zhang, Chaojian Li, Cheng Wan,
Haoran You, Shang Wu, Xu Ouyang, Vivek Boominathan, Ashok Veeraraghavan,
Yingyan Lin
|
i-FlatCam: A 253 FPS, 91.49 $\mu$J/Frame Ultra-Compact Intelligent
Lensless Camera for Real-Time and Efficient Eye Tracking in VR/AR
|
Accepted by VLSI 2022
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a first-of-its-kind ultra-compact intelligent camera system,
dubbed i-FlatCam, including a lensless camera with a computational (Comp.)
chip. It highlights (1) a predict-then-focus eye tracking pipeline for boosted
efficiency without compromising the accuracy, (2) a unified compression scheme
for single-chip processing and improved frame rate per second (FPS), and (3)
dedicated intra-channel reuse design for depth-wise convolutional layers
(DW-CONV) to increase utilization. i-FlatCam demonstrates the first eye
tracking pipeline with a lensless camera and achieves 3.16 degrees of accuracy,
253 FPS, 91.49 $\mu$J/Frame, and 6.7mm x 8.9mm x 1.2mm camera form factor,
paving the way for next-generation Augmented Reality (AR) and Virtual Reality
(VR) devices.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 08:55:55 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Zhao",
"Yang",
""
],
[
"Li",
"Ziyun",
""
],
[
"Fu",
"Yonggan",
""
],
[
"Zhang",
"Yongan",
""
],
[
"Li",
"Chaojian",
""
],
[
"Wan",
"Cheng",
""
],
[
"You",
"Haoran",
""
],
[
"Wu",
"Shang",
""
],
[
"Ouyang",
"Xu",
""
],
[
"Boominathan",
"Vivek",
""
],
[
"Veeraraghavan",
"Ashok",
""
],
[
"Lin",
"Yingyan",
""
]
] |
new_dataset
| 0.952827 |
2206.08172
|
Heqian Qiu
|
Heqian Qiu, Hongliang Li, Taijin Zhao, Lanxiao Wang, Qingbo Wu and
Fanman Meng
|
RefCrowd: Grounding the Target in Crowd with Referring Expressions
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crowd understanding has aroused the widespread interest in vision domain due
to its important practical significance. Unfortunately, there is no effort to
explore crowd understanding in the multi-modal domain that bridges natural language
and computer vision. Referring expression comprehension (REF) is such a
representative multi-modal task. Current REF studies focus more on grounding
the target object from multiple distinctive categories in general scenarios, which
is difficult to apply to complex real-world crowd understanding. To fill this
gap, we propose a new challenging dataset, called RefCrowd, which targets
grounding the target person in a crowd with referring expressions. It not only
requires sufficiently mining the natural language information, but also
carefully focusing on subtle differences between the target and a
crowd of persons with similar appearance, so as to realize the fine-grained
mapping from language to vision. Furthermore, we propose a Fine-grained
Multi-modal Attribute Contrastive Network (FMAC) to deal with REF in crowd
understanding. It first decomposes the intricate visual and language features
into attribute-aware multi-modal features, and then captures discriminative yet
robust fine-grained attribute features to effectively distinguish these
subtle differences between similar persons. The proposed method outperforms
existing state-of-the-art (SoTA) methods on our RefCrowd dataset and existing
REF datasets. In addition, we implement an end-to-end REF toolbox for deeper
research in the multi-modal domain. Our dataset and code are available
at: \url{https://qiuheqian.github.io/datasets/refcrowd/}.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 13:39:26 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Qiu",
"Heqian",
""
],
[
"Li",
"Hongliang",
""
],
[
"Zhao",
"Taijin",
""
],
[
"Wang",
"Lanxiao",
""
],
[
"Wu",
"Qingbo",
""
],
[
"Meng",
"Fanman",
""
]
] |
new_dataset
| 0.999556 |
2206.08219
|
Alexander Kapitanov
|
Alexander Kapitanov, Andrew Makhlyarchuk, Karina Kvanchiani
|
HaGRID - HAnd Gesture Recognition Image Dataset
|
11 pages, 9 figures, open-source dataset for computer vision
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper, we introduce an enormous dataset HaGRID (HAnd Gesture
Recognition Image Dataset) for hand gesture recognition (HGR) systems. This
dataset contains 552,992 samples divided into 18 classes of gestures. The
annotations consist of bounding boxes of hands with gesture labels and markups
of leading hands. The proposed dataset allows for building HGR systems, which
can be used in video conferencing services, home automation systems, the
automotive sector, services for people with speech and hearing impairments,
etc. We focus especially on interaction with devices in order to manage them.
That is why all 18 chosen gestures are functional, familiar to the majority of
people, and may serve as an incentive to take some action. In addition, we used
crowdsourcing platforms to collect the dataset and took into account various
parameters to ensure data diversity. We describe the challenges of using
existing HGR datasets for our task and provide a detailed overview of them.
Furthermore, we propose baselines for the hand detection and gesture
classification tasks.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 14:41:32 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Kapitanov",
"Alexander",
""
],
[
"Makhlyarchuk",
"Andrew",
""
],
[
"Kvanchiani",
"Karina",
""
]
] |
new_dataset
| 0.999644 |
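The HaGRID record above describes bounding-box annotations with per-box
gesture labels and leading-hand markup for 552,992 images across 18 classes.
As a rough, hedged illustration only, the Python sketch below shows how
annotations of that shape might be loaded and paired with images; the
directory layout, file names, and JSON field names are assumptions made for
illustration, since the actual HaGRID annotation schema is not specified in
this record.

```python
import json
from pathlib import Path

# Hypothetical layout: one JSON file per gesture class, mapping image ids to
# bounding boxes and labels. This mirrors the abstract's description (hand
# boxes + gesture labels + leading-hand markup); the real schema may differ.
ANNOTATION_DIR = Path("hagrid/annotations")
IMAGE_DIR = Path("hagrid/images")

def load_annotations(annotation_dir: Path):
    """Collect one dict per annotated image from per-class JSON files."""
    samples = []
    for ann_file in sorted(annotation_dir.glob("*.json")):
        gesture = ann_file.stem                      # e.g. "like", "stop", ...
        with ann_file.open() as f:
            records = json.load(f)
        for image_id, record in records.items():
            samples.append({
                "image": IMAGE_DIR / gesture / f"{image_id}.jpg",
                "boxes": record.get("bboxes", []),   # assumed [x, y, w, h]
                "labels": record.get("labels", []),  # per-box gesture labels
                "leading_hand": record.get("leading_hand"),
            })
    return samples

if __name__ == "__main__":
    samples = load_annotations(ANNOTATION_DIR)
    print(f"loaded {len(samples)} annotated images")
```

A loader of this shape could then feed a standard detection or classification
training loop for the baselines mentioned in the abstract.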
2206.08261
|
Shugang Hao
|
Shugang Hao and Lingjie Duan
|
To Help or Disturb: Introduction of Crowdsourced WiFi to 5G Networks
| null | null |
10.1109/TMC.2022.3171181
| null |
cs.NI cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
After upgrading to 5G, a network operator still faces congestion when
providing the ubiquitous wireless service to the crowd. To meet users'
ever-increasing demand, some other operators (e.g., Fon) have been developing
another crowdsourced WiFi network to combine many users' home WiFi access
points and provide enlarged WiFi coverage to them. While the 5G network
experiences negative network externality, the crowdsourced WiFi network helps
offload traffic from 5G and its service coverage exhibits positive externality
with its subscription number. To the best of our knowledge, we are the first to
investigate how these two heterogeneous networks of diverse network
externalities co-exist from an economic perspective. We propose a dynamic game
theoretic model to analyze the hybrid interaction among the 5G operator, the
crowdsourced WiFi operator, and users. Our user choice model with WiFi's
complementarity for 5G allows users to choose both services, departing from the
traditional economics literature where a user chooses one over another
alternative. Despite the non-convexity of the operators' pricing problems, we
prove that a 5G operator facing severe congestion may purposely lower his
price to encourage users to add on WiFi for offloading, and he benefits from
the introduction of crowdsourced WiFi. However, a 5G operator with mild
congestion tends to charge users more, and all the users' payoffs may decrease.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 10:48:55 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Hao",
"Shugang",
""
],
[
"Duan",
"Lingjie",
""
]
] |
new_dataset
| 0.994749 |
2206.08266
|
Timotej Knez
|
Timotej Knez, Marko Bajec, Slavko \v{Z}itnik
|
ANGLEr: A Next-Generation Natural Language Exploratory Framework
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language processing is used for solving a wide variety of problems.
Some scholars and interest groups working with language resources are not well
versed in programming, so there is a need for a good graphical framework that
allows users to quickly design and test natural language processing pipelines
without the need for programming. The existing frameworks do not satisfy all
the requirements for such a tool. We, therefore, propose a new framework that
provides a simple way for its users to build language processing pipelines. It
also offers a simple, programming-language-agnostic way of adding new modules,
which will help adoption by natural language processing developers and
researchers. The main parts of the proposed framework consist of (a) a
pluggable Docker-based architecture, (b) a general data model, and (c) API
descriptions along with the graphical user interface. The proposed design is
being used for the implementation of a new natural language processing
framework, called ANGLEr.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 13:32:13 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Knez",
"Timotej",
""
],
[
"Bajec",
"Marko",
""
],
[
"Žitnik",
"Slavko",
""
]
] |
new_dataset
| 0.989273 |
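The ANGLEr record above mentions a pluggable Docker-based architecture in
which new modules can be added in a programming-language-agnostic way, which
suggests that modules are containers exposing some processing API. Purely as a
hedged sketch under that assumption (the actual ANGLEr module contract,
endpoint names, and data model are not given in this record), a minimal Python
processing module might look like this:

```python
# Minimal sketch of a hypothetical ANGLEr-style processing module.
# Assumption: the framework discovers modules as containers exposing a
# JSON-over-HTTP endpoint; the endpoint name and payload shape below are
# invented for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    payload = request.get_json(force=True)
    text = payload.get("text", "")
    # Toy annotation step: whitespace tokenization with character offsets.
    tokens, offset = [], 0
    for word in text.split():
        start = text.index(word, offset)
        tokens.append({"text": word, "start": start, "end": start + len(word)})
        offset = start + len(word)
    return jsonify({"tokens": tokens})

if __name__ == "__main__":
    # Inside a container this would typically bind all interfaces on a fixed port.
    app.run(host="0.0.0.0", port=8080)
```

Because the contract is plain JSON over HTTP, an equivalent module could be
written in any language and packaged as its own Docker image, which is the
language-agnostic property the abstract emphasizes.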