Dataset schema:

| Column | Type |
|---|---|
| id | string (9–10 chars) |
| submitter | string (2–52 chars), nullable |
| authors | string (4–6.51k chars) |
| title | string (4–246 chars) |
| comments | string (1–523 chars), nullable |
| journal-ref | string (4–345 chars), nullable |
| doi | string (11–120 chars), nullable |
| report-no | string (2–243 chars), nullable |
| categories | string (5–98 chars) |
| license | string (9 classes) |
| abstract | string (33–3.33k chars) |
| versions | list |
| update_date | timestamp[s] |
| authors_parsed | list |
| prediction | string (1 class) |
| probability | float64 (0.95–1) |

Each record below lists these 16 fields in this order, separated by `|`, with empty fields shown as `null`; a minimal parsing sketch follows the schema.
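The records are flat JSON-like objects with the fields listed above. Below is a minimal sketch of reading and filtering such records, assuming they are exported as JSON Lines with these exact field names; the file name `arxiv_predictions.jsonl` and the 0.99 cutoff are illustrative assumptions, not part of the dataset:

```python
import json

def iter_records(path):
    """Yield one arXiv metadata record (dict) per line of a JSON Lines export."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

# Keep only high-confidence "new_dataset" predictions (threshold chosen arbitrarily).
high_conf = [
    rec for rec in iter_records("arxiv_predictions.jsonl")  # hypothetical file name
    if rec["prediction"] == "new_dataset" and float(rec["probability"]) >= 0.99
]
print(len(high_conf), "records at probability >= 0.99")
```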
2108.06955
|
Leonhard Hennig
|
Leonhard Hennig and Phuc Tran Truong and Aleksandra Gabryszak
|
MobIE: A German Dataset for Named Entity Recognition, Entity Linking and
Relation Extraction in the Mobility Domain
|
Accepted at KONVENS 2021. 5 pages, 3 figures, 5 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present MobIE, a German-language dataset, which is human-annotated with 20
coarse- and fine-grained entity types and entity linking information for
geographically linkable entities. The dataset consists of 3,232 social media
texts and traffic reports with 91K tokens, and contains 20.5K annotated
entities, 13.1K of which are linked to a knowledge base. A subset of the
dataset is human-annotated with seven mobility-related, n-ary relation types,
while the remaining documents are annotated using a weakly-supervised labeling
approach implemented with the Snorkel framework. To the best of our knowledge,
this is the first German-language dataset that combines annotations for NER, EL
and RE, and thus can be used for joint and multi-task learning of these
fundamental information extraction tasks. We make MobIE public at
https://github.com/dfki-nlp/mobie.
|
[
{
"version": "v1",
"created": "Mon, 16 Aug 2021 08:21:50 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 09:40:12 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Hennig",
"Leonhard",
""
],
[
"Truong",
"Phuc Tran",
""
],
[
"Gabryszak",
"Aleksandra",
""
]
] |
new_dataset
| 0.999898 |
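For illustration, here is a hedged sketch of how the nested `versions` and `authors_parsed` fields of a record such as the one above could be turned into typed values; the record literal is abridged from the row above, and the helper functions are my own naming, not part of any dataset API:

```python
from email.utils import parsedate_to_datetime

# Abridged copy of the record above (2108.06955, MobIE).
record = {
    "id": "2108.06955",
    "versions": [
        {"version": "v1", "created": "Mon, 16 Aug 2021 08:21:50 GMT"},
        {"version": "v2", "created": "Mon, 28 Mar 2022 09:40:12 GMT"},
    ],
    "authors_parsed": [
        ["Hennig", "Leonhard", ""],
        ["Truong", "Phuc Tran", ""],
        ["Gabryszak", "Aleksandra", ""],
    ],
}

def latest_revision(rec):
    """Return the most recent 'created' timestamp across all versions."""
    return max(parsedate_to_datetime(v["created"]) for v in rec["versions"])

def author_names(rec):
    """Join the [last, first, suffix] triples into display names."""
    return [
        " ".join(part for part in (first, last, suffix) if part)
        for last, first, suffix in rec["authors_parsed"]
    ]

print(latest_revision(record))  # 2022-03-28 09:40:12+00:00
print(author_names(record))     # ['Leonhard Hennig', 'Phuc Tran Truong', 'Aleksandra Gabryszak']
```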
2109.00590
|
Yingshan Chang
|
Yingshan Chang, Mridu Narang, Hisami Suzuki, Guihong Cao, Jianfeng
Gao, Yonatan Bisk
|
WebQA: Multihop and Multimodal QA
|
CVPR Camera ready
| null | null | null |
cs.CL cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Scaling Visual Question Answering (VQA) to the open-domain and multi-hop
nature of web searches, requires fundamental advances in visual representation
learning, knowledge aggregation, and language generation. In this work, we
introduce WebQA, a challenging new benchmark that proves difficult for
large-scale state-of-the-art models which lack language groundable visual
representations for novel objects and the ability to reason, yet trivial for
humans. WebQA mirrors the way humans use the web: 1) Ask a question, 2) Choose
sources to aggregate, and 3) Produce a fluent language response. This is the
behavior we should be expecting from IoT devices and digital assistants.
Existing work prefers to assume that a model can either reason about knowledge
in images or in text. WebQA includes a secondary text-only QA task to ensure
improved visual performance does not come at the cost of language
understanding. Our challenge for the community is to create unified multimodal
reasoning models that answer questions regardless of the source modality,
moving us closer to digital assistants that not only query language knowledge,
but also the richer visual online world.
|
[
{
"version": "v1",
"created": "Wed, 1 Sep 2021 19:43:59 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Sep 2021 20:18:55 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Sep 2021 20:56:01 GMT"
},
{
"version": "v4",
"created": "Mon, 28 Mar 2022 02:42:56 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Chang",
"Yingshan",
""
],
[
"Narang",
"Mridu",
""
],
[
"Suzuki",
"Hisami",
""
],
[
"Cao",
"Guihong",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Bisk",
"Yonatan",
""
]
] |
new_dataset
| 0.998008 |
2109.01049
|
Cas Widdershoven
|
Stefan Kiefer and Cas Widdershoven
|
Image-Binary Automata
|
Journal version of paper published at DCFS'21
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a certain restriction of weighted automata over the rationals,
called image-binary automata. We show that such automata accept the regular
languages, can be exponentially more succinct than corresponding NFAs, and
allow for polynomial complementation, union, and intersection. This compares
favourably with unambiguous automata whose complementation requires a
superpolynomial state blowup. We also study an infinite-word version,
image-binary B\"uchi automata. We show that such automata are amenable to
probabilistic model checking, similarly to unambiguous B\"uchi automata. We
provide algorithms to translate $k$-ambiguous B\"uchi automata to image-binary
B\"uchi automata, leading to model-checking algorithms with optimal
computational complexity.
|
[
{
"version": "v1",
"created": "Thu, 2 Sep 2021 16:06:25 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Sep 2021 15:56:05 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Mar 2022 13:20:39 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Kiefer",
"Stefan",
""
],
[
"Widdershoven",
"Cas",
""
]
] |
new_dataset
| 0.976174 |
2109.06072
|
Kshitij Gulati
|
Kshitij Gulati, Gaurav Verma, Mukesh Mohania, Ashish Kundu
|
BeautifAI -- A Personalised Occasion-oriented Makeup Recommendation
System
|
Withdrawing due to issues with training the Makeup Style Transfer
(section about style transfer). This renders the current methodology invalid
| null | null | null |
cs.IR cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the global metamorphosis of the beauty industry and the rising demand
for beauty products worldwide, the need for an efficacious makeup
recommendation system has never been greater. Despite the significant advancements
made towards personalised makeup recommendation, the current research still
falls short of incorporating the context of occasion in makeup recommendation
and integrating feedback for users. In this work, we propose BeautifAI, a novel
makeup recommendation system, delivering personalised occasion-oriented makeup
recommendations to users while providing real-time previews and continuous
feedback. The proposed work's novel contributions, including the incorporation
of occasion context, region-wise makeup recommendation, real-time makeup
previews and continuous makeup feedback, set our system apart from the current
work in makeup recommendation. We also demonstrate our proposed system's
efficacy in providing personalised makeup recommendation by conducting a user
study.
|
[
{
"version": "v1",
"created": "Mon, 13 Sep 2021 15:48:10 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 05:16:59 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Gulati",
"Kshitij",
""
],
[
"Verma",
"Gaurav",
""
],
[
"Mohania",
"Mukesh",
""
],
[
"Kundu",
"Ashish",
""
]
] |
new_dataset
| 0.98014 |
2109.07323
|
Haoyu Dong
|
Zhoujun Cheng, Haoyu Dong, Ran Jia, Pengfei Wu, Shi Han, Fan Cheng,
Dongmei Zhang
|
FORTAP: Using Formulas for Numerical-Reasoning-Aware Table Pretraining
|
Accepted by ACL'22 main track
| null | null | null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tables store rich numerical data, but numerical reasoning over tables is
still a challenge. In this paper, we find that the spreadsheet formula, which
performs calculations on numerical values in tables, is naturally a strong
supervision of numerical reasoning. More importantly, large amounts of
spreadsheets with expert-made formulae are available on the web and can be
obtained easily. FORTAP is the first method for numerical-reasoning-aware table
pretraining by leveraging large corpus of spreadsheet formulae. We design two
formula pretraining tasks to explicitly guide FORTAP to learn numerical
reference and calculation in semi-structured tables. FORTAP achieves
state-of-the-art results on two representative downstream tasks, cell type
classification and formula prediction, showing great potential of
numerical-reasoning-aware pretraining.
|
[
{
"version": "v1",
"created": "Wed, 15 Sep 2021 14:31:17 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Mar 2022 01:18:36 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Cheng",
"Zhoujun",
""
],
[
"Dong",
"Haoyu",
""
],
[
"Jia",
"Ran",
""
],
[
"Wu",
"Pengfei",
""
],
[
"Han",
"Shi",
""
],
[
"Cheng",
"Fan",
""
],
[
"Zhang",
"Dongmei",
""
]
] |
new_dataset
| 0.997779 |
2109.10852
|
Ting Chen
|
Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, Geoffrey Hinton
|
Pix2seq: A Language Modeling Framework for Object Detection
|
ICLR'22. Code and pretrained models at
https://github.com/google-research/pix2seq
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Pix2Seq, a simple and generic framework for object detection.
Unlike existing approaches that explicitly integrate prior knowledge about the
task, we cast object detection as a language modeling task conditioned on the
observed pixel inputs. Object descriptions (e.g., bounding boxes and class
labels) are expressed as sequences of discrete tokens, and we train a neural
network to perceive the image and generate the desired sequence. Our approach
is based mainly on the intuition that if a neural network knows about where and
what the objects are, we just need to teach it how to read them out. Beyond the
use of task-specific data augmentations, our approach makes minimal assumptions
about the task, yet it achieves competitive results on the challenging COCO
dataset, compared to highly specialized and well optimized detection
algorithms.
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 17:26:36 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 14:44:00 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Chen",
"Ting",
""
],
[
"Saxena",
"Saurabh",
""
],
[
"Li",
"Lala",
""
],
[
"Fleet",
"David J.",
""
],
[
"Hinton",
"Geoffrey",
""
]
] |
new_dataset
| 0.983405 |
2109.12983
|
Soeren Becker
|
Soeren Becker, Florian Schmidt, Odej Kao
|
EdgePier: P2P-based Container Image Distribution in Edge Computing
Environments
|
40th IEEE International Performance Computing and Communications
Conference 2021
| null |
10.1109/IPCCC51483.2021.9679447
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Edge and fog computing architectures utilize container technologies in order
to offer a lightweight application deployment. Container images are stored in
registry services and operated by orchestration platforms to download and start
the respective applications on nodes of the infrastructure. During large
application rollouts, the connection to the registry is prone to become a
bottleneck, which results in longer provisioning times and deployment
latencies. Previous work has mainly addressed this problem by proposing
scalable registries, leveraging the BitTorrent protocol or distributed storage
to host container images. However, for lightweight and dynamic edge
environments the overhead of several dedicated components is not feasible in
regard to its interference of the actual workload and is subject to failures
due to the introduced complexity.
In this paper we introduce a fully decentralized container registry called
EdgePier, that can be deployed across edge sites and is able to decrease
container deployment times by utilizing peer-to-peer connections between
participating nodes. Image layers are shared without the need for further
centralized orchestration entities. The conducted evaluation shows that the
provisioning times are improved by up to 65% in comparison to a baseline
registry, even with limited bandwidth to the cloud.
|
[
{
"version": "v1",
"created": "Mon, 27 Sep 2021 12:17:53 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Becker",
"Soeren",
""
],
[
"Schmidt",
"Florian",
""
],
[
"Kao",
"Odej",
""
]
] |
new_dataset
| 0.996449 |
2110.03555
|
Sangwoon Kim
|
Sangwoon Kim, Alberto Rodriguez
|
Active Extrinsic Contact Sensing: Application to General Peg-in-Hole
Insertion
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method that actively estimates contact location between a
grasped rigid object and its environment and uses this as input to a
peg-in-hole insertion policy. An estimation model and an active tactile
feedback controller work collaboratively to estimate the external contacts
accurately. The controller helps the estimation model get a better estimate by
regulating a consistent contact mode. The better estimation makes it easier for
the controller to regulate the contact. We then train an object-agnostic
insertion policy that learns to use the series of contact estimates to guide
the insertion of an unseen peg into a hole. In contrast with previous works
that learn a policy directly from tactile signals, since this policy is in
contact configuration space, it can be learned directly in simulation. Lastly,
we demonstrate and evaluate the active extrinsic contact line estimation and
the trained insertion policy together in a real experiment. We show that the
proposed method inserts various-shaped test objects with higher success rates
and fewer insertion attempts than previous work with end-to-end approaches. See
supplementary video and results at
https://sites.google.com/view/active-extrinsic-contact.
|
[
{
"version": "v1",
"created": "Thu, 7 Oct 2021 15:22:33 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 12:51:45 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Kim",
"Sangwoon",
""
],
[
"Rodriguez",
"Alberto",
""
]
] |
new_dataset
| 0.984676 |
2110.09215
|
Tobias Kallehauge MSc
|
Tobias Kallehauge, Pablo Ram\'irez-Espinosa, Kimmo Kansanen, Henk
Wymeersch, Petar Popovski
|
A Primer on the Statistical Relation between Wireless Ultra-Reliability
and Location Estimation
|
6 pages and 3 figures. This is an extended version of the article
submitted to IEEE Wireless Communication Letters. The extension differs from
the letter in section V, which here contain some derivations
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Location information is often used as a proxy to infer the performance of a
wireless communication link. Using a very simple model, this letter unveils a
basic statistical relation between the location estimation uncertainty and
wireless link reliability. First, a Cram\'er-Rao bound for the localization
error is derived. Then, wireless link reliability is characterized by how
likely the outage probability is to be above a target threshold. We show that
the reliability is sensitive to location errors, especially when the channel
statistics are also sensitive to the location. Finally, we highlight the
difficulty of choosing a rate that meets target reliability while accounting
for the location uncertainty.
|
[
{
"version": "v1",
"created": "Mon, 18 Oct 2021 12:02:44 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Feb 2022 09:18:08 GMT"
},
{
"version": "v3",
"created": "Sun, 27 Mar 2022 18:27:01 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Kallehauge",
"Tobias",
""
],
[
"Ramírez-Espinosa",
"Pablo",
""
],
[
"Kansanen",
"Kimmo",
""
],
[
"Wymeersch",
"Henk",
""
],
[
"Popovski",
"Petar",
""
]
] |
new_dataset
| 0.99277 |
2110.12610
|
Zhenyu Xiao
|
Zhenyu Xiao, Zhu Han, Arumugam Nallanathan, Octavia A. Dobre, Bruno
Clerckx, Jinho Choi, Chong He, Wen Tong
|
Antenna Array Enabled Space/Air/Ground Communications and Networking for
6G
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Antenna arrays have a long history of more than 100 years and have evolved
closely with the development of electronic and information technologies,
playing an indispensable role in wireless communications and radar. With the
rapid development of electronic and information technologies, the demand for
all-time, all-domain, and full-space network services has exploded, and new
communication requirements have been put forward on various space/air/ground
platforms. To meet the ever increasing requirements of the future sixth
generation (6G) wireless communications, such as high capacity, wide coverage,
low latency, and strong robustness, it is promising to employ different types
of antenna arrays with various beamforming technologies in space/air/ground
communication networks, bringing in advantages such as considerable antenna
gains, multiplexing gains, and diversity gains. However, enabling antenna array
for space/air/ground communication networks poses specific, distinctive and
tricky challenges, which has aroused extensive research attention. This paper
aims to overview the field of antenna array enabled space/air/ground
communications and networking. The technical potentials and challenges of
antenna array enabled space/air/ground communications and networking are
presented first. Subsequently, the antenna array structures and designs are
discussed. We then discuss various emerging technologies facilitated by antenna
arrays to meet the new communication requirements of space/air/ground
communication systems. Enabled by these emerging technologies, the distinct
characteristics, challenges, and solutions for space communications, airborne
communications, and ground communications are reviewed. Finally, we present
promising directions for future research in antenna array enabled
space/air/ground communications and networking.
|
[
{
"version": "v1",
"created": "Mon, 25 Oct 2021 02:45:58 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 01:30:45 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Xiao",
"Zhenyu",
""
],
[
"Han",
"Zhu",
""
],
[
"Nallanathan",
"Arumugam",
""
],
[
"Dobre",
"Octavia A.",
""
],
[
"Clerckx",
"Bruno",
""
],
[
"Choi",
"Jinho",
""
],
[
"He",
"Chong",
""
],
[
"Tong",
"Wen",
""
]
] |
new_dataset
| 0.998008 |
2111.12077
|
Jonathan Barron
|
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan,
Peter Hedman
|
Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
|
https://jonbarron.info/mipnerf360/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Though neural radiance fields (NeRF) have demonstrated impressive view
synthesis results on objects and small bounded regions of space, they struggle
on "unbounded" scenes, where the camera may point in any direction and content
may exist at any distance. In this setting, existing NeRF-like models often
produce blurry or low-resolution renderings (due to the unbalanced detail and
scale of nearby and distant objects), are slow to train, and may exhibit
artifacts due to the inherent ambiguity of the task of reconstructing a large
scene from a small set of images. We present an extension of mip-NeRF (a NeRF
variant that addresses sampling and aliasing) that uses a non-linear scene
parameterization, online distillation, and a novel distortion-based regularizer
to overcome the challenges presented by unbounded scenes. Our model, which we
dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees
around a point, reduces mean-squared error by 57% compared to mip-NeRF, and is
able to produce realistic synthesized views and detailed depth maps for highly
intricate, unbounded real-world scenes.
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 18:51:18 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Nov 2021 18:51:06 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Mar 2022 23:05:20 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Barron",
"Jonathan T.",
""
],
[
"Mildenhall",
"Ben",
""
],
[
"Verbin",
"Dor",
""
],
[
"Srinivasan",
"Pratul P.",
""
],
[
"Hedman",
"Peter",
""
]
] |
new_dataset
| 0.999092 |
2111.14292
|
Ma Li
|
Li Ma and Xiaoyu Li and Jing Liao and Qi Zhang and Xuan Wang and Jue
Wang and Pedro V. Sander
|
Deblur-NeRF: Neural Radiance Fields from Blurry Images
|
accepted in CVPR2022
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Radiance Field (NeRF) has gained considerable attention recently for
3D scene reconstruction and novel view synthesis due to its remarkable
synthesis quality. However, image blurriness caused by defocus or motion, which
often occurs when capturing scenes in the wild, significantly degrades its
reconstruction quality. To address this problem, we propose Deblur-NeRF, the
first method that can recover a sharp NeRF from blurry input. We adopt an
analysis-by-synthesis approach that reconstructs blurry views by simulating the
blurring process, thus making NeRF robust to blurry inputs. The core of this
simulation is a novel Deformable Sparse Kernel (DSK) module that models
spatially-varying blur kernels by deforming a canonical sparse kernel at each
spatial location. The ray origin of each kernel point is jointly optimized,
inspired by the physical blurring process. This module is parameterized as an
MLP that has the ability to be generalized to various blur types. Jointly
optimizing the NeRF and the DSK module allows us to restore a sharp NeRF. We
demonstrate that our method can be used on both camera motion blur and defocus
blur: the two most common types of blur in real scenes. Evaluation results on
both synthetic and real-world data show that our method outperforms several
baselines. The synthetic and real datasets, along with the source code, are
publicly available at https://limacv.github.io/deblurnerf/
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 01:49:15 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 15:48:02 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Ma",
"Li",
""
],
[
"Li",
"Xiaoyu",
""
],
[
"Liao",
"Jing",
""
],
[
"Zhang",
"Qi",
""
],
[
"Wang",
"Xuan",
""
],
[
"Wang",
"Jue",
""
],
[
"Sander",
"Pedro V.",
""
]
] |
new_dataset
| 0.982595 |
2111.15341
|
Georg B\"okman
|
Georg B\"okman, Fredrik Kahl and Axel Flinth
|
ZZ-Net: A Universal Rotation Equivariant Architecture for 2D Point
Clouds
|
CVPR 2022 camera ready
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we are concerned with rotation equivariance on 2D point cloud
data. We describe a particular set of functions able to approximate any
continuous rotation equivariant and permutation invariant function. Based on
this result, we propose a novel neural network architecture for processing 2D
point clouds and we prove its universality for approximating functions
exhibiting these symmetries.
We also show how to extend the architecture to accept a set of 2D-2D
correspondences as input data, while maintaining similar equivariance properties.
Experiments are presented on the estimation of essential matrices in stereo
vision.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 12:37:36 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 08:47:26 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Bökman",
"Georg",
""
],
[
"Kahl",
"Fredrik",
""
],
[
"Flinth",
"Axel",
""
]
] |
new_dataset
| 0.955447 |
2111.15669
|
Manuel Rey-Area
|
Manuel Rey-Area and Mingze Yuan and Christian Richardt
|
360MonoDepth: High-Resolution 360{\deg} Monocular Depth Estimation
|
CVPR 2022. Project page: https://manurare.github.io/360monodepth/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
360{\deg} cameras can capture complete environments in a single shot, which
makes 360{\deg} imagery alluring in many computer vision tasks. However,
monocular depth estimation remains a challenge for 360{\deg} data, particularly
for high resolutions like 2K (2048x1024) and beyond that are important for
novel-view synthesis and virtual reality applications. Current CNN-based
methods do not support such high resolutions due to limited GPU memory. In this
work, we propose a flexible framework for monocular depth estimation from
high-resolution 360{\deg} images using tangent images. We project the 360{\deg}
input image onto a set of tangent planes that produce perspective views, which
are suitable for the latest, most accurate state-of-the-art perspective
monocular depth estimators. To achieve globally consistent disparity estimates,
we recombine the individual depth estimates using deformable multi-scale
alignment followed by gradient-domain blending. The result is a dense,
high-resolution 360{\deg} depth map with a high level of detail, also for
outdoor scenes which are not supported by existing methods. Our source code and
data are available at https://manurare.github.io/360monodepth/.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 18:57:29 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 17:26:24 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Rey-Area",
"Manuel",
""
],
[
"Yuan",
"Mingze",
""
],
[
"Richardt",
"Christian",
""
]
] |
new_dataset
| 0.992429 |
2112.00431
|
Mattia Soldan
|
Mattia Soldan, Alejandro Pardo, Juan Le\'on Alc\'azar, Fabian Caba
Heilbron, Chen Zhao, Silvio Giancola, Bernard Ghanem
|
MAD: A Scalable Dataset for Language Grounding in Videos from Movie
Audio Descriptions
|
12 Pages, 6 Figures, 7 Tables
|
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition CVPR 2022
| null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The recent and increasing interest in video-language research has driven the
development of large-scale datasets that enable data-intensive machine learning
techniques. In comparison, limited effort has been made at assessing the
fitness of these datasets for the video-language grounding task. Recent works
have begun to discover significant limitations in these datasets, suggesting
that state-of-the-art techniques commonly overfit to hidden dataset biases. In
this work, we present MAD (Movie Audio Descriptions), a novel benchmark that
departs from the paradigm of augmenting existing video datasets with text
annotations and focuses on crawling and aligning available audio descriptions
of mainstream movies. MAD contains over 384,000 natural language sentences
grounded in over 1,200 hours of videos and exhibits a significant reduction in
the currently diagnosed biases for video-language grounding datasets. MAD's
collection strategy enables a novel and more challenging version of
video-language grounding, where short temporal moments (typically seconds long)
must be accurately grounded in diverse long-form videos that can last up to
three hours. We have released MAD's data and baselines code at
https://github.com/Soldelli/MAD.
|
[
{
"version": "v1",
"created": "Wed, 1 Dec 2021 11:47:09 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 16:35:52 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Soldan",
"Mattia",
""
],
[
"Pardo",
"Alejandro",
""
],
[
"Alcázar",
"Juan León",
""
],
[
"Heilbron",
"Fabian Caba",
""
],
[
"Zhao",
"Chen",
""
],
[
"Giancola",
"Silvio",
""
],
[
"Ghanem",
"Bernard",
""
]
] |
new_dataset
| 0.999873 |
2112.01041
|
Junho Kim
|
Junho Kim, Jaehyeok Bae, Gangin Park, Dongsu Zhang, and Young Min Kim
|
N-ImageNet: Towards Robust, Fine-Grained Object Recognition with Event
Cameras
|
Accepted to ICCV 2021
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce N-ImageNet, a large-scale dataset targeted for robust,
fine-grained object recognition with event cameras. The dataset is collected
using programmable hardware in which an event camera consistently moves around
a monitor displaying images from ImageNet. N-ImageNet serves as a challenging
benchmark for event-based object recognition, due to its large number of
classes and samples. We empirically show that pretraining on N-ImageNet
improves the performance of event-based classifiers and helps them learn with
few labeled data. In addition, we present several variants of N-ImageNet to
test the robustness of event-based classifiers under diverse camera
trajectories and severe lighting conditions, and propose a novel event
representation to alleviate the performance degradation. To the best of our
knowledge, we are the first to quantitatively investigate the consequences
caused by various environmental conditions on event-based object recognition
algorithms. N-ImageNet and its variants are expected to guide practical
implementations for deploying event-based object recognition algorithms in the
real world.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 08:08:32 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 01:49:08 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Kim",
"Junho",
""
],
[
"Bae",
"Jaehyeok",
""
],
[
"Park",
"Gangin",
""
],
[
"Zhang",
"Dongsu",
""
],
[
"Kim",
"Young Min",
""
]
] |
new_dataset
| 0.998642 |
2201.02558
|
Ana Cardenas Gasca
|
Ella Dagan, Ana C\'ardenas Gasca, Ava Robinson, Anwar Noriega, Yu
Jiang Tham, Rajan Vaish, Andr\'es Monroy-Hern\'andez
|
Project IRL: Playful Co-Located Interactions with Mobile Augmented
Reality
| null | null |
10.1145/3512909
| null |
cs.HC cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
We present Project IRL (In Real Life), a suite of five mobile apps we created
to explore novel ways of supporting in-person social interactions with
augmented reality. In recent years, the tone of public discourse surrounding
digital technology has become increasingly critical, and technology's influence
on the way people relate to each other has been blamed for making people feel
"alone together," diverting their attention from truly engaging with one
another when they interact in person. Motivated by this challenge, we focus on
an under-explored design space: playful co-located interactions. We evaluated
the apps through a deployment study that involved interviews and participant
observations with 101 people. We synthesized the results into a series of
design guidelines that focus on four themes: (1) device arrangement (e.g., are
people sharing one phone, or does each person have their own?), (2) enablers
(e.g., should the activity focus on an object, body part, or pet?), (3)
affordances of modifying reality (i.e., features of the technology that enhance
its potential to encourage various aspects of social interaction), and (4)
co-located play (i.e., using technology to make in-person play engaging and
inviting). We conclude by presenting our design guidelines for future work on
embodied social AR.
|
[
{
"version": "v1",
"created": "Fri, 7 Jan 2022 17:31:43 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2022 21:42:35 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Dagan",
"Ella",
""
],
[
"Gasca",
"Ana Cárdenas",
""
],
[
"Robinson",
"Ava",
""
],
[
"Noriega",
"Anwar",
""
],
[
"Tham",
"Yu Jiang",
""
],
[
"Vaish",
"Rajan",
""
],
[
"Monroy-Hernández",
"Andrés",
""
]
] |
new_dataset
| 0.999532 |
2201.08215
|
Mingye Xu
|
Mingye Xu, Yali Wang, Zhipeng Zhou, Hongbin Xu, and Yu Qiao
|
CP-Net: Contour-Perturbed Reconstruction Network for Self-Supervised
Point Cloud Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning has not been fully explored for point cloud
analysis. Current frameworks are mainly based on point cloud reconstruction.
Given only 3D coordinates, such approaches tend to learn local geometric
structures and contours, while failing in understanding high level semantic
content. Consequently, they achieve unsatisfactory performance in downstream
tasks such as classification, segmentation, etc. To fill this gap, we propose a
generic Contour-Perturbed Reconstruction Network (CP-Net), which can
effectively guide self-supervised reconstruction to learn semantic content in
the point cloud, and thus promote discriminative power of point cloud
representation. First, we introduce a concise contour-perturbed augmentation
module for point cloud reconstruction. With guidance of geometry disentangling,
we divide point cloud into contour and content components. Subsequently, we
perturb the contour components and preserve the content components on the point
cloud. As a result, self supervisor can effectively focus on semantic content,
by reconstructing the original point cloud from such perturbed one. Second, we
use this perturbed reconstruction as an assistant branch, to guide the learning
of basic reconstruction branch via a distinct dual-branch consistency loss. In
this case, our CP-Net not only captures structural contours but also learns
semantic content for discriminative downstream tasks. Finally, we perform
extensive experiments on a number of point cloud benchmarks. Part segmentation
results demonstrate that our CP-Net (81.5% of mIoU) outperforms the previous
self-supervised models, and narrows the gap with the fully-supervised methods.
For classification, we get a competitive result with the fully-supervised
methods on ModelNet40 (92.5% accuracy) and ScanObjectNN (87.9% accuracy). The
codes and models will be released afterwards.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 15:04:12 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 05:48:05 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Xu",
"Mingye",
""
],
[
"Wang",
"Yali",
""
],
[
"Zhou",
"Zhipeng",
""
],
[
"Xu",
"Hongbin",
""
],
[
"Qiao",
"Yu",
""
]
] |
new_dataset
| 0.95493 |
2202.07295
|
Yaoyu Tao
|
Yaoyu Tao, Qi Wu
|
An Automated FPGA-based Framework for Rapid Prototyping of Nonbinary
LDPC Codes
|
Published in ISCAS 2019
| null | null | null |
cs.IT cs.AR math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Nonbinary LDPC codes have shown superior performance close to the Shannon
limit. Compared to binary LDPC codes of similar lengths, they can reach orders
of magnitude lower error rates. However, the multitude of design freedoms of
nonbinary LDPC codes complicates the practical code and decoder design process.
Fast simulations are critically important to evaluate the pros and cons. Rapid
prototyping on FPGA is attractive but takes significant design efforts due to
its high design complexity. We propose a high-throughput reconfigurable
hardware emulation architecture with decoder and peripheral co-design. The
architecture enables a library and script-based framework that automates the
construction of FPGA emulations. Code and decoder design parameters are
programmed either during run time or by script in design time. We demonstrate
the capability of the framework in evaluating practical code and decoder design
by experimenting with two popular nonbinary LDPC codes, regular (2, dc) codes
and quasi-cyclic codes: each emulation model can be auto-constructed within
hours and the decoder delivers excellent error-correcting performance on a
Xilinx Virtex-5 FPGA with throughput of up to hundreds of Mbps.
|
[
{
"version": "v1",
"created": "Tue, 15 Feb 2022 10:22:16 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 06:05:19 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Tao",
"Yaoyu",
""
],
[
"Wu",
"Qi",
""
]
] |
new_dataset
| 0.950217 |
2203.00758
|
Xingyu Fu
|
Xingyu Fu, Ben Zhou, Ishaan Preetam Chandratreya, Carl Vondrick, Dan
Roth
|
There is a Time and Place for Reasoning Beyond the Image
|
Article accepted to the ACL 2022 Main conference
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Images are often more significant than only the pixels to human eyes, as we
can infer, associate, and reason with contextual information from other sources
to establish a more complete picture. For example, in Figure 1, we can find a
way to identify the news articles related to the picture through segment-wise
understandings of the signs, the buildings, the crowds, and more. This
reasoning could provide the time and place the image was taken, which will help
us in subsequent tasks, such as automatic storyline construction, correction of
image source in intended effect photographs, and upper-stream processing such
as image clustering for certain location or time.
In this work, we formulate this problem and introduce TARA: a dataset with
16k images with their associated news, time, and location, automatically
extracted from New York Times, and an additional 61k examples as distant
supervision from WIT. On top of the extractions, we present a crowdsourced
subset in which we believe it is possible to find the images' spatio-temporal
information for evaluation purpose. We show that there exists a $70\%$ gap
between a state-of-the-art joint model and human performance, which is slightly
filled by our proposed model that uses segment-wise reasoning, motivating
higher-level vision-language joint models that can conduct open-ended reasoning
with world knowledge. The data and code are publicly available at
https://github.com/zeyofu/TARA.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 21:52:08 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 04:47:22 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Fu",
"Xingyu",
""
],
[
"Zhou",
"Ben",
""
],
[
"Chandratreya",
"Ishaan Preetam",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Roth",
"Dan",
""
]
] |
new_dataset
| 0.99723 |
2203.01885
|
Ziang Cao
|
Ziang Cao, Ziyuan Huang, Liang Pan, Shiwei Zhang, Ziwei Liu, Changhong
Fu
|
TCTrack: Temporal Contexts for Aerial Tracking
|
To appear in CVPR2022. Code:
https://github.com/vision4robotics/TCTrack
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal contexts among consecutive frames are far from being fully utilized
in existing visual trackers. In this work, we present TCTrack, a comprehensive
framework to fully exploit temporal contexts for aerial tracking. The temporal
contexts are incorporated at \textbf{two levels}: the extraction of
\textbf{features} and the refinement of \textbf{similarity maps}. Specifically,
for feature extraction, an online temporally adaptive convolution is proposed
to enhance the spatial features using temporal information, which is achieved
by dynamically calibrating the convolution weights according to the previous
frames. For similarity map refinement, we propose an adaptive temporal
transformer, which first effectively encodes temporal knowledge in a
memory-efficient way, before the temporal knowledge is decoded for accurate
adjustment of the similarity map. TCTrack is effective and efficient:
evaluation on four aerial tracking benchmarks shows its impressive performance;
real-world UAV tests show its high speed of over 27 FPS on NVIDIA Jetson AGX
Xavier.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 18:04:20 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Mar 2022 05:13:29 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Mar 2022 07:35:29 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Cao",
"Ziang",
""
],
[
"Huang",
"Ziyuan",
""
],
[
"Pan",
"Liang",
""
],
[
"Zhang",
"Shiwei",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Fu",
"Changhong",
""
]
] |
new_dataset
| 0.997573 |
2203.02104
|
Bo Wang
|
Bo Wang, Tao Wu, Minfeng Zhu, Peng Du
|
Interactive Image Synthesis with Panoptic Layout Generation
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive image synthesis from user-guided input is a challenging task when
users wish to control the scene structure of a generated image with
ease. Although remarkable progress has been made on layout-based image synthesis
approaches, in order to get a realistic fake image in an interactive scene, existing
methods require high-precision inputs, which probably need adjustment several
times and are unfriendly to novice users. When placement of bounding boxes is
subject to perturbation, layout-based models suffer from "missing regions" in
the constructed semantic layouts and hence undesirable artifacts in the
generated images. In this work, we propose Panoptic Layout Generative
Adversarial Networks (PLGAN) to address this challenge. The PLGAN employs
panoptic theory which distinguishes object categories between "stuff" with
amorphous boundaries and "things" with well-defined shapes, such that stuff and
instance layouts are constructed through separate branches and later fused into
panoptic layouts. In particular, the stuff layouts can take amorphous shapes
and fill up the missing regions left out by the instance layouts. We
experimentally compare our PLGAN with state-of-the-art layout-based models on
the COCO-Stuff, Visual Genome, and Landscape datasets. The advantages of PLGAN
are not only visually demonstrated but quantitatively verified in terms of
inception score, Fr\'echet inception distance, classification accuracy score,
and coverage.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 02:45:27 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Mar 2022 02:23:30 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Mar 2022 11:20:20 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Wang",
"Bo",
""
],
[
"Wu",
"Tao",
""
],
[
"Zhu",
"Minfeng",
""
],
[
"Du",
"Peng",
""
]
] |
new_dataset
| 0.964575 |
2203.02503
|
Wele Gedara Chaminda Bandara
|
Wele Gedara Chaminda Bandara, Vishal M. Patel
|
HyperTransformer: A Textural and Spectral Feature Fusion Transformer for
Pansharpening
|
Accepted at CVPR'22. Project page:
https://www.wgcban.com/research#h.ar24vwqlm021 Code available at:
https://github.com/wgcban/HyperTransformer
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Pansharpening aims to fuse a registered high-resolution panchromatic image
(PAN) with a low-resolution hyperspectral image (LR-HSI) to generate an
enhanced HSI with high spectral and spatial resolution. Existing pansharpening
approaches neglect using an attention mechanism to transfer HR texture features
from PAN to LR-HSI features, resulting in spatial and spectral distortions. In
this paper, we present a novel attention mechanism for pansharpening called
HyperTransformer, in which features of LR-HSI and PAN are formulated as queries
and keys in a transformer, respectively. HyperTransformer consists of three
main modules, namely two separate feature extractors for PAN and HSI, a
multi-head feature soft attention module, and a spatial-spectral feature fusion
module. Such a network improves both spatial and spectral quality measures of
the pansharpened HSI by learning cross-feature space dependencies and
long-range details of PAN and LR-HSI. Furthermore, HyperTransformer can be
utilized across multiple spatial scales at the backbone for obtaining improved
performance. Extensive experiments conducted on three widely used datasets
demonstrate that HyperTransformer achieves significant improvement over the
state-of-the-art methods on both spatial and spectral quality measures.
Implementation code and pre-trained weights can be accessed at
https://github.com/wgcban/HyperTransformer.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 18:59:08 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Mar 2022 20:02:11 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Mar 2022 17:41:23 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Bandara",
"Wele Gedara Chaminda",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
new_dataset
| 0.98997 |
2203.03014
|
Saghir Alfasly
|
Saghir Alfasly, Jian Lu, Chen Xu, Yuru Zou
|
Learnable Irrelevant Modality Dropout for Multimodal Action Recognition
on Modality-Specific Annotated Videos
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
With the assumption that a video dataset is multimodality annotated in which
auditory and visual modalities both are labeled or class-relevant, current
multimodal methods apply modality fusion or cross-modality attention. However,
effectively leveraging the audio modality in vision-specific annotated videos
for action recognition is of particular challenge. To tackle this challenge, we
propose a novel audio-visual framework that effectively leverages the audio
modality in any solely vision-specific annotated dataset. We adopt the language
models (e.g., BERT) to build a semantic audio-video label dictionary (SAVLD)
that maps each video label to its most K-relevant audio labels in which SAVLD
serves as a bridge between audio and video datasets. Then, SAVLD along with a
pretrained audio multi-label model are used to estimate the audio-visual
modality relevance during the training phase. Accordingly, a novel learnable
irrelevant modality dropout (IMD) is proposed to completely drop out the
irrelevant audio modality and fuse only the relevant modalities. Moreover, we
present a new two-stream video Transformer for efficiently modeling the visual
modalities. Results on several vision-specific annotated datasets including
Kinetics400 and UCF-101 validated our framework as it outperforms most relevant
action recognition methods.
|
[
{
"version": "v1",
"created": "Sun, 6 Mar 2022 17:31:06 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 03:26:40 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Alfasly",
"Saghir",
""
],
[
"Lu",
"Jian",
""
],
[
"Xu",
"Chen",
""
],
[
"Zou",
"Yuru",
""
]
] |
new_dataset
| 0.997919 |
2203.06604
|
Yatian Pang
|
Yatian Pang, Wenxiao Wang, Francis E.H. Tay, Wei Liu, Yonghong Tian,
Li Yuan
|
Masked Autoencoders for Point Cloud Self-supervised Learning
|
https://github.com/Pang-Yatian/Point-MAE
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a promising scheme of self-supervised learning, masked autoencoding has
significantly advanced natural language processing and computer vision.
Inspired by this, we propose a neat scheme of masked autoencoders for point
cloud self-supervised learning, addressing the challenges posed by point
cloud's properties, including leakage of location information and uneven
information density. Concretely, we divide the input point cloud into irregular
point patches and randomly mask them at a high ratio. Then, a standard
Transformer based autoencoder, with an asymmetric design and a shifting mask
tokens operation, learns high-level latent features from unmasked point
patches, aiming to reconstruct the masked point patches. Extensive experiments
show that our approach is efficient during pre-training and generalizes well on
various downstream tasks. Specifically, our pre-trained models achieve 85.18%
accuracy on ScanObjectNN and 94.04% accuracy on ModelNet40, outperforming all
the other self-supervised learning methods. We show with our scheme, a simple
architecture entirely based on standard Transformers can surpass dedicated
Transformer models from supervised learning. Our approach also advances
state-of-the-art accuracies by 1.5%-2.3% in the few-shot object classification.
Furthermore, our work inspires the feasibility of applying unified
architectures from languages and images to the point cloud.
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 09:23:39 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 05:01:22 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Pang",
"Yatian",
""
],
[
"Wang",
"Wenxiao",
""
],
[
"Tay",
"Francis E. H.",
""
],
[
"Liu",
"Wei",
""
],
[
"Tian",
"Yonghong",
""
],
[
"Yuan",
"Li",
""
]
] |
new_dataset
| 0.988682 |
2203.06751
|
Sunita Chandrasekaran
|
Holger Brunst, Sunita Chandrasekaran, Florina Ciorba, Nick Hagerty,
Robert Henschel, Guido Juckeland, Junjie Li, Veronica G. Melesse Vergara,
Sandra Wienke, Miguel Zavala
|
First Experiences in Performance Benchmarking with the New SPEChpc 2021
Suites
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Modern HPC systems are built with innovative system architectures and novel
programming models to further push the speed limit of computing. The increased
complexity poses challenges for performance portability and performance
evaluation. The Standard Performance Evaluation Corporation (SPEC) has a long
history of producing industry-standard benchmarks for modern computer systems.
Its newly released SPEChpc 2021 benchmark suites, developed by the High
Performance Group, are a bold attempt to provide a fair and objective
benchmarking tool designed for state-of-the-art HPC systems. With the support
of multiple host and accelerator programming models, the suites are portable
across both homogeneous and heterogeneous architectures. Different workloads
are developed to fit system sizes ranging from a few compute nodes to a few
hundred compute nodes. In this manuscript, we take a first glance at these
benchmark suites and evaluate their portability and basic performance
characteristics on various popular and emerging HPC architectures, including
x86 CPU, NVIDIA GPU, and AMD GPU. This study provides a first-hand experience
of executing the SPEChpc 2021 suites at scale on production HPC systems,
discusses real-world use cases, and serves as an initial guideline for using
the benchmark suites.
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 20:17:01 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 16:49:07 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Mar 2022 14:06:21 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Brunst",
"Holger",
""
],
[
"Chandrasekaran",
"Sunita",
""
],
[
"Ciorba",
"Florina",
""
],
[
"Hagerty",
"Nick",
""
],
[
"Henschel",
"Robert",
""
],
[
"Juckeland",
"Guido",
""
],
[
"Li",
"Junjie",
""
],
[
"Vergara",
"Veronica G. Melesse",
""
],
[
"Wienke",
"Sandra",
""
],
[
"Zavala",
"Miguel",
""
]
] |
new_dataset
| 0.991859 |
2203.08936
|
Sangeeth Kochanthara
|
Sangeeth Kochanthara, Yanja Dajsuren, Loek Cleophas, Mark van den
Brand
|
Painting the Landscape of Automotive Software in GitHub
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The automotive industry has transitioned from being an electro-mechanical to
a software-intensive industry. A current high-end production vehicle contains
100 million+ lines of code surpassing modern airplanes, the Large Hadron
Collider, the Android OS, and Facebook's front-end software, in code size by a
huge margin. Today, software companies worldwide, including Apple, Google,
Huawei, Baidu, and Sony are reportedly working to bring their vehicles to the
road. This paper ventures into the automotive software landscape in open
source, providing the first glimpse into this multi-disciplinary industry with
a long history of closed source development. We paint the landscape of
automotive software on GitHub by describing its characteristics and development
styles.
The landscape is defined by 15,000+ users contributing to ~600
actively-developed automotive software projects created in a span of 12 years
from 2010 until 2021. These projects range from vehicle dynamics-related
software; firmware and drivers for sensors like LiDAR and camera; algorithms
for perception and motion control; to complete operating systems integrating
the above. Developments in the field are spearheaded by industry and academia
alike, with one in three actively developed automotive software repositories
owned by an organization. We observe shifts along multiple dimensions,
including preferred language from MATLAB to Python and prevalence of perception
and decision-related software over traditional automotive software. This study
witnesses the open-source automotive software boom in its infancy with many
implications for future research and practice.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 20:49:07 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 08:43:23 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Kochanthara",
"Sangeeth",
""
],
[
"Dajsuren",
"Yanja",
""
],
[
"Cleophas",
"Loek",
""
],
[
"Brand",
"Mark van den",
""
]
] |
new_dataset
| 0.99933 |
2203.09707
|
Chen Lyu
|
Yuexiu Gao, Chen Lyu
|
M2TS: Multi-Scale Multi-Modal Approach Based on Transformer for Source
Code Summarization
|
Accepted by ICPC 2022
| null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Source code summarization aims to generate natural language descriptions of
code snippets. Many existing studies learn the syntactic and semantic knowledge
of code snippets from their token sequences and Abstract Syntax Trees (ASTs).
They use the learned code representations as input to code summarization
models, which can accordingly generate summaries describing source code.
Traditional models traverse ASTs as sequences or split ASTs into paths as
input. However, the former loses the structural properties of ASTs, and the
latter destroys the overall structure of ASTs. Therefore, comprehensively
capturing the structural features of ASTs in learning code representations for
source code summarization remains a challenging problem to be solved. In this
paper, we propose M2TS, a Multi-scale Multi-modal approach based on Transformer
for source code Summarization. M2TS uses a multi-scale AST feature extraction
method, which can extract the structures of ASTs more completely and accurately
at multiple local and global levels. To complement missing semantic information
in ASTs, we also obtain code token features, and further combine them with the
extracted AST features using a cross modality fusion method that not only fuses
the syntactic and contextual semantic information of source code, but also
highlights the key features of each modality. We conduct experiments on two
Java and one Python datasets, and the experimental results demonstrate that
M2TS outperforms current state-of-the-art methods. We release our code at
https://github.com/TranSMS/M2TS.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 02:54:06 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 13:05:06 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Gao",
"Yuexiu",
""
],
[
"Lyu",
"Chen",
""
]
] |
new_dataset
| 0.999232 |
2203.09887
|
Tianchen Zhao
|
Tianchen Zhao, Niansong Zhang, Xuefei Ning, He Wang, Li Yi, Yu Wang
|
CodedVTR: Codebook-based Sparse Voxel Transformer with Geometric
Guidance
|
Published at CVPR2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Transformers have gained much attention by outperforming convolutional neural
networks in many 2D vision tasks. However, they are known to have
generalization problems and rely on massive-scale pre-training and
sophisticated training techniques. When applying to 3D tasks, the irregular
data structure and limited data scale add to the difficulty of transformer's
application. We propose CodedVTR (Codebook-based Voxel TRansformer), which
improves data efficiency and generalization ability for 3D sparse voxel
transformers. On the one hand, we propose the codebook-based attention that
projects an attention space into its subspace represented by the combination of
"prototypes" in a learnable codebook. It regularizes attention learning and
improves generalization. On the other hand, we propose geometry-aware
self-attention that utilizes geometric information (geometric pattern, density)
to guide attention learning. CodedVTR could be embedded into existing sparse
convolution-based methods, and bring consistent performance improvements for
indoor and outdoor 3D semantic segmentation tasks.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 11:50:25 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 05:21:32 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Zhao",
"Tianchen",
""
],
[
"Zhang",
"Niansong",
""
],
[
"Ning",
"Xuefei",
""
],
[
"Wang",
"He",
""
],
[
"Yi",
"Li",
""
],
[
"Wang",
"Yu",
""
]
] |
new_dataset
| 0.9916 |
2203.10473
|
Jinlong Xue
|
Jinlong Xue, Yayue Deng, Yichen Han, Ya Li, Jianqing Sun, Jiaen Liang
|
ECAPA-TDNN for Multi-speaker Text-to-speech Synthesis
|
5 pages, 2 figures, submitted to interspeech2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, neural network based methods for multi-speaker
text-to-speech synthesis (TTS) have made significant progress. However, the
current speaker encoder models used in these methods still cannot capture
enough speaker information. In this paper, we focus on accurate speaker encoder
modeling and propose an end-to-end method that can generate high-quality speech
and better similarity for both seen and unseen speakers. The proposed
architecture consists of three separately trained components: a speaker encoder
based on the state-of-the-art ECAPA-TDNN model which is derived from speaker
verification task, a FastSpeech2 based synthesizer, and a HiFi-GAN vocoder. The
comparison among different speaker encoder models shows our proposed method can
achieve better naturalness and similarity. To efficiently evaluate our
synthesized speech, we are the first to adopt deep learning based automatic MOS
evaluation methods to assess our results, and these methods show great
potential in automatic speech quality assessment.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 07:04:26 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Mar 2022 16:39:46 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Xue",
"Jinlong",
""
],
[
"Deng",
"Yayue",
""
],
[
"Han",
"Yichen",
""
],
[
"Li",
"Ya",
""
],
[
"Sun",
"Jianqing",
""
],
[
"Liang",
"Jiaen",
""
]
] |
new_dataset
| 0.99522 |
2203.10981
|
Kuan-Chih Huang
|
Kuan-Chih Huang, Tsung-Han Wu, Hung-Ting Su, Winston H. Hsu
|
MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular 3D object detection is an important yet challenging task in
autonomous driving. Some existing methods leverage depth information from an
off-the-shelf depth estimator to assist 3D detection, but suffer from an
additional computational burden and achieve limited performance due to
inaccurate depth priors. To alleviate this, we propose MonoDTR, a novel
end-to-end depth-aware transformer network for monocular 3D object detection.
It mainly consists of two components: (1) the Depth-Aware Feature Enhancement
(DFE) module that implicitly learns depth-aware features with auxiliary
supervision without requiring extra computation, and (2) the Depth-Aware
Transformer (DTR) module that globally integrates context- and depth-aware
features. Moreover, different from conventional pixel-wise positional
encodings, we introduce a novel depth positional encoding (DPE) to inject depth
positional hints into transformers. Our proposed depth-aware modules can be
easily plugged into existing image-only monocular 3D object detectors to
improve the performance. Extensive experiments on the KITTI dataset demonstrate
that our approach outperforms previous state-of-the-art monocular-based methods
and achieves real-time detection. Code is available at
https://github.com/kuanchihhuang/MonoDTR
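One plausible form of a depth positional encoding is a sinusoidal embedding indexed by a discretized depth bin rather than a pixel position; the sketch below is an assumption about that form, not the released MonoDTR code.

```python
import torch

def depth_positional_encoding(depth_bins: torch.Tensor, dim: int) -> torch.Tensor:
    """Sinusoidal embedding indexed by per-pixel depth-bin indices (illustrative)."""
    pos = depth_bins.float().unsqueeze(-1)                     # (..., 1)
    i = torch.arange(0, dim, 2, dtype=torch.float32)
    freq = torch.pow(torch.tensor(10000.0), -i / dim)          # (dim // 2,)
    angles = pos * freq                                        # (..., dim // 2)
    return torch.cat([angles.sin(), angles.cos()], dim=-1)     # (..., dim)
```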
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 13:40:10 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 17:56:53 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Huang",
"Kuan-Chih",
""
],
[
"Wu",
"Tsung-Han",
""
],
[
"Su",
"Hung-Ting",
""
],
[
"Hsu",
"Winston H.",
""
]
] |
new_dataset
| 0.998448 |
2203.12188
|
Jun Chen
|
Jun Chen, Zilin Wang, Deyi Tuo, Zhiyong Wu, Shiyin Kang, Helen Meng
|
FullSubNet+: Channel Attention FullSubNet with Complex Spectrograms for
Speech Enhancement
|
Accepted by ICASSP 2022
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Previously proposed FullSubNet has achieved outstanding performance in Deep
Noise Suppression (DNS) Challenge and attracted much attention. However, it
still encounters issues such as input-output mismatch and coarse processing for
frequency bands. In this paper, we propose an extended single-channel real-time
speech enhancement framework called FullSubNet+ with the following significant
improvements. First, we design a lightweight multi-scale time-sensitive channel
attention (MulCA) module, which adopts multi-scale convolutions and a channel
attention mechanism to help the network focus on more discriminative frequency
bands for noise reduction. Then, to make full use of the phase information in
noisy speech, our model takes all the magnitude, real and imaginary
spectrograms as inputs. Moreover, by replacing the long short-term memory
(LSTM) layers in original full-band model with stacked temporal convolutional
network (TCN) blocks, we design a more efficient full-band module called
full-band extractor. The experimental results on the DNS Challenge dataset show the
superior performance of our FullSubNet+, which reaches the state-of-the-art
(SOTA) performance and outperforms other existing speech enhancement
approaches.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 04:33:09 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Mar 2022 19:20:53 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Chen",
"Jun",
""
],
[
"Wang",
"Zilin",
""
],
[
"Tuo",
"Deyi",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Kang",
"Shiyin",
""
],
[
"Meng",
"Helen",
""
]
] |
new_dataset
| 0.992713 |
2203.12247
|
Junho Kim
|
Junho Kim, Inwoo Hwang, and Young Min Kim
|
Ev-TTA: Test-Time Adaptation for Event-Based Object Recognition
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Ev-TTA, a simple, effective test-time adaptation algorithm for
event-based object recognition. While event cameras are proposed to provide
measurements of scenes with fast motions or drastic illumination changes, many
existing event-based recognition algorithms suffer from performance
deterioration under extreme conditions due to significant domain shifts. Ev-TTA
mitigates the severe domain gaps by fine-tuning the pre-trained classifiers
during the test phase using loss functions inspired by the spatio-temporal
characteristics of events. Since the event data is a temporal stream of
measurements, our loss function enforces similar predictions for adjacent
events to quickly adapt to the changed environment online. Also, we utilize the
spatial correlations between two polarities of events to handle noise under
extreme illumination, where different polarities of events exhibit distinctive
noise distributions. Ev-TTA demonstrates a large amount of performance gain on
a wide range of event-based object recognition tasks without extensive
additional training. Our formulation can be successfully applied regardless of
input representations and further extended into regression tasks. We expect
Ev-TTA to provide the key technique to deploy event-based vision algorithms in
challenging real-world applications where significant domain shift is
inevitable.
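A hedged sketch of the kind of consistency objective described above: predictions for temporally adjacent event slices are pulled together during test-time fine-tuning. The exact loss used by Ev-TTA may differ; this is only an interpretation.

```python
import torch.nn.functional as F

def temporal_consistency_loss(logits_t, logits_next):
    """Encourage similar class predictions for adjacent slices of the event stream."""
    log_p_t = F.log_softmax(logits_t, dim=-1)
    p_next = F.softmax(logits_next, dim=-1).detach()  # treat the neighbor as a soft target
    return F.kl_div(log_p_t, p_next, reduction="batchmean")
```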
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 07:43:44 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 06:59:03 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Kim",
"Junho",
""
],
[
"Hwang",
"Inwoo",
""
],
[
"Kim",
"Young Min",
""
]
] |
new_dataset
| 0.998939 |
2203.13859
|
Weihua He
|
Weihua He, Kaichao You, Zhendong Qiao, Xu Jia, Ziyang Zhang, Wenhui
Wang, Huchuan Lu, Yaoyuan Wang, Jianxing Liao
|
TimeReplayer: Unlocking the Potential of Event Cameras for Video
Interpolation
|
Accepted to CVPR 2022, project page
https://sites.google.com/view/timereplayer/
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recording fast motion in a high FPS (frame-per-second) requires expensive
high-speed cameras. As an alternative, interpolating low-FPS videos from
commodity cameras has attracted significant attention. If only low-FPS videos
are available, motion assumptions (linear or quadratic) are necessary to infer
intermediate frames, which fail to model complex motions. The event camera, a new
sensor whose pixels produce events of brightness change at the temporal
resolution of $\mu s$ $(10^{-6}$ second$)$, is a game-changing device that
enables video interpolation in the presence of arbitrarily complex motion. Since
the event camera is a novel sensor, its potential has not been fulfilled due to the
lack of processing algorithms. The pioneering work Time Lens introduced event
cameras to video interpolation by designing optical devices to collect a large
amount of paired training data of high-speed frames and events, which is too
costly to scale. To fully unlock the potential of event cameras, this paper
proposes a novel TimeReplayer algorithm to interpolate videos captured by
commodity cameras with events. It is trained in an unsupervised
cycle-consistent style, canceling the necessity of high-speed training data and
bringing the additional ability of video extrapolation. Its state-of-the-art
results and demo videos in supplementary reveal the promising future of
event-based vision.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 18:57:42 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"He",
"Weihua",
""
],
[
"You",
"Kaichao",
""
],
[
"Qiao",
"Zhendong",
""
],
[
"Jia",
"Xu",
""
],
[
"Zhang",
"Ziyang",
""
],
[
"Wang",
"Wenhui",
""
],
[
"Lu",
"Huchuan",
""
],
[
"Wang",
"Yaoyuan",
""
],
[
"Liao",
"Jianxing",
""
]
] |
new_dataset
| 0.991998 |
2203.13901
|
Aditi Chaudhary
|
Aditi Chaudhary, Zaid Sheikh, David R Mortensen, Antonios
Anastasopoulos, Graham Neubig
|
AUTOLEX: An Automatic Framework for Linguistic Exploration
|
9 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Each language has its own complex systems of word, phrase, and sentence
construction, the guiding principles of which are often summarized in grammar
descriptions for the consumption of linguists or language learners. However,
manual creation of such descriptions is a fraught process, as creating
descriptions which describe the language in "its own terms" without bias or
error requires both a deep understanding of the language at hand and
linguistics as a whole. We propose an automatic framework AutoLEX that aims to
ease linguists' discovery and extraction of concise descriptions of linguistic
phenomena. Specifically, we apply this framework to extract descriptions for
three phenomena: morphological agreement, case marking, and word order, across
several languages. We evaluate the descriptions with the help of language
experts and propose a method for automated evaluation when human evaluation is
infeasible.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 20:37:30 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Chaudhary",
"Aditi",
""
],
[
"Sheikh",
"Zaid",
""
],
[
"Mortensen",
"David R",
""
],
[
"Anastasopoulos",
"Antonios",
""
],
[
"Neubig",
"Graham",
""
]
] |
new_dataset
| 0.998918 |
2203.13916
|
Oscar Fontanelli
|
Oscar Fontanelli and Plinio Guzm\'an and Am\'ilcar Meneses and Alfredo
Hern\'andez and Marisol Flores-Garrido and Maribel Hern\'andez-Rosales and
Guillermo de Anda-J\'auregui
|
Intermunicipal Travel Networks of Mexico (2020-2021)
|
13 pages, 8 figures
| null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We present a collection of networks that describe the travel patterns between
municipalities in Mexico between 2020 and 2021. Using anonymized mobile device
geo-location data we constructed directed, weighted networks representing the
(normalized) volume of travels between municipalities. We analysed changes in
global (graph total weight sum), local (centrality measures), and mesoscale
(community structure) network features. We observe that changes in these
features are associated with factors such as Covid-19 restrictions and
population size. In general, events in early 2020 (when initial Covid-19
restrictions were implemented) induced more intense changes in network
features, whereas later events had a less notable impact on network features.
We believe these networks will be useful for researchers and decision makers in
the areas of transportation, infrastructure planning, epidemic control and
network science at large.
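A minimal sketch of how such a directed, weighted travel network could be assembled and summarized with networkx; the trip tuples and municipality names below are illustrative assumptions, not the released data.

```python
import networkx as nx

def build_travel_network(trips):
    """trips: iterable of (origin, destination, normalized_volume) tuples."""
    G = nx.DiGraph()
    for origin, destination, volume in trips:
        G.add_edge(origin, destination, weight=volume)
    return G

G = build_travel_network([("CDMX", "Puebla", 0.8), ("Puebla", "CDMX", 0.7)])
total_weight = G.size(weight="weight")      # global feature: total weight sum
centrality = nx.in_degree_centrality(G)     # local feature: centrality per municipality
```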
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 21:36:24 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Fontanelli",
"Oscar",
""
],
[
"Guzmán",
"Plinio",
""
],
[
"Meneses",
"Amílcar",
""
],
[
"Hernández",
"Alfredo",
""
],
[
"Flores-Garrido",
"Marisol",
""
],
[
"Hernández-Rosales",
"Maribel",
""
],
[
"de Anda-Jáuregui",
"Guillermo",
""
]
] |
new_dataset
| 0.984465 |
2203.13947
|
Bingsheng Yao
|
Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang
Wu, Zheng Zhang, Toby Jia-Jun Li, Nora Bradford, Branda Sun, Tran Bao Hoang,
Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark
Warschauer
|
Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic
Dataset for Narrative Comprehension
|
Accepted to ACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Question answering (QA) is a fundamental means to facilitate assessment and
training of narrative comprehension skills for both machines and young
children, yet there is scarcity of high-quality QA datasets carefully designed
to serve this purpose. In particular, existing datasets rarely distinguish
fine-grained reading skills, such as the understanding of varying narrative
elements. Drawing on the reading education research, we introduce FairytaleQA,
a dataset focusing on narrative comprehension of kindergarten to eighth-grade
students. Generated by educational experts based on an evidence-based
theoretical framework, FairytaleQA consists of 10,580 explicit and implicit
questions derived from 278 children-friendly stories, covering seven types of
narrative elements or relations. Our dataset is valuable in two ways: First,
we ran existing QA models on our dataset and confirmed that this annotation
helps assess models' fine-grained learning skills. Second, the dataset supports
the question generation (QG) task in the education domain. Through benchmarking
with QG models, we show that the QG model trained on FairytaleQA is capable of
asking high-quality and more diverse questions.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 00:20:05 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Xu",
"Ying",
""
],
[
"Wang",
"Dakuo",
""
],
[
"Yu",
"Mo",
""
],
[
"Ritchie",
"Daniel",
""
],
[
"Yao",
"Bingsheng",
""
],
[
"Wu",
"Tongshuang",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Li",
"Toby Jia-Jun",
""
],
[
"Bradford",
"Nora",
""
],
[
"Sun",
"Branda",
""
],
[
"Hoang",
"Tran Bao",
""
],
[
"Sang",
"Yisi",
""
],
[
"Hou",
"Yufang",
""
],
[
"Ma",
"Xiaojuan",
""
],
[
"Yang",
"Diyi",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Yu",
"Zhou",
""
],
[
"Warschauer",
"Mark",
""
]
] |
new_dataset
| 0.999826 |
2203.13953
|
Liang Zhang
|
Liang Zhang, Yidong Cheng
|
A Densely Connected Criss-Cross Attention Network for Document-level
Relation Extraction
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Document-level relation extraction (RE) aims to identify relations between
two entities in a given document. Compared with its sentence-level counterpart,
document-level RE requires complex reasoning. Previous research normally
completed reasoning through information propagation on the mention-level or
entity-level document-graph, but rarely considered reasoning at the
entity-pair-level. In this paper, we propose a novel model, called Densely
Connected Criss-Cross Attention Network (Dense-CCNet), for document-level RE,
which can complete logical reasoning at the entity-pair-level. Specifically,
the Dense-CCNet performs entity-pair-level logical reasoning through the
Criss-Cross Attention (CCA), which can collect contextual information in
horizontal and vertical directions on the entity-pair matrix to enhance the
corresponding entity-pair representation. In addition, we densely connect
multiple layers of the CCA to simultaneously capture the features of single-hop
and multi-hop logical reasoning. We evaluate our Dense-CCNet model on three
public document-level RE datasets, DocRED, CDR, and GDA. Experimental results
demonstrate that our model achieves state-of-the-art performance on these three
datasets.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 01:01:34 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Zhang",
"Liang",
""
],
[
"Cheng",
"Yidong",
""
]
] |
new_dataset
| 0.961036 |
2203.14007
|
Behrouz Bolourian Haghighi
|
Hengameh Mirhajianmoghadam, Behrouz Bolourian Haghighi
|
EYNet: Extended YOLO for Airport Detection in Remote Sensing Images
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, airport detection in remote sensing images has attracted
considerable attention due to its strategic role in civilian and military
scopes. In particular, uncrewed aerial vehicles must immediately detect safe
areas to land in emergencies. Previous schemes struggled with several issues,
including complicated backgrounds and the varied scales and shapes of airports.
Meanwhile, both the speed and the accuracy of detection remain significant
concerns. Hence, this study proposes an effective scheme by
extending YOLOV3 and ShearLet transform. In this way, MobileNet and ResNet18,
with fewer layers and parameters retrained on a similar dataset, are trained in
parallel as base networks. According to airport geometrical characteristics, the
ShearLet filters with different scales and directions are considered in the
first convolution layers of ResNet18 as a visual attention mechanism. Besides,
the major extension in YOLOV3 concerns the detection sub-networks, whose novel
structures boost object expression ability and training efficiency. In
addition, novel augmentation and negative mining strategies are presented to
significantly increase the localization phase's performance. The experimental
results on the DIOR dataset reveal that the framework reliably detects
different types of airports in a varied area and acquires robust results in
complex scenes compared to traditional YOLOV3 and state-of-the-art schemes.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 07:07:53 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Mirhajianmoghadam",
"Hengameh",
""
],
[
"Haghighi",
"Behrouz Bolourian",
""
]
] |
new_dataset
| 0.999106 |
2203.14049
|
Anirudh Sriram
|
Emil Biju, Anirudh Sriram, Mitesh M. Khapra, Pratyush Kumar
|
Joint Transformer/RNN Architecture for Gesture Typing in Indic Languages
|
Published at COLING 2020, 12 pages, 4 Tables and 5 Figures
| null | null | null |
cs.LG cs.CL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gesture typing is a method of typing words on a touch-based keyboard by
creating a continuous trace passing through the relevant keys. This work is
aimed at developing a keyboard that supports gesture typing in Indic languages.
We begin by noting that when dealing with Indic languages, one needs to cater
to two different sets of users: (i) users who prefer to type in the native
Indic script (Devanagari, Bengali, etc.) and (ii) users who prefer to type in
the English script but want the output transliterated into the native script.
In both cases, we need a model that takes a trace as input and maps it to the
intended word. To enable the development of these models, we create and release
two datasets. First, we create a dataset containing keyboard traces for 193,658
words from 7 Indic languages. Second, we curate 104,412 English-Indic
transliteration pairs from Wikidata across these languages. Using these
datasets we build a model that performs path decoding, transliteration, and
transliteration correction. Unlike prior approaches, our proposed model does
not make co-character independence assumptions during decoding. The overall
accuracy of our model across the 7 languages varies from 70-95%.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 11:14:23 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Biju",
"Emil",
""
],
[
"Sriram",
"Anirudh",
""
],
[
"Khapra",
"Mitesh M.",
""
],
[
"Kumar",
"Pratyush",
""
]
] |
new_dataset
| 0.998452 |
2203.14065
|
Buzhen Huang
|
Buzhen Huang, Liang Pan, Yuan Yang, Jingyi Ju, Yangang Wang
|
Neural MoCon: Neural Motion Control for Physically Plausible Human
Motion Capture
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the visual ambiguity, purely kinematic formulations on monocular human
motion capture are often physically incorrect, biomechanically implausible, and
can not reconstruct accurate interactions. In this work, we focus on exploiting
the high-precision and non-differentiable physics simulator to incorporate
dynamical constraints in motion capture. Our key idea is to use real physical
supervision to train a target pose distribution prior for sampling-based
motion control to capture physically plausible human motion. To obtain accurate
reference motion with terrain interactions for the sampling, we first introduce
an interaction constraint based on SDF (Signed Distance Field) to enforce
appropriate ground contact modeling. We then design a novel two-branch decoder
to avoid stochastic error from pseudo ground-truth and train a distribution
prior with the non-differentiable physics simulator. Finally, we regress the
sampling distribution from the current state of the physical character with the
trained prior and sample satisfied target poses to track the estimated
reference motion. Qualitative and quantitative results show that we can obtain
physically plausible human motion with complex terrain interactions, human
shape variations, and diverse behaviors. More information can be found
at~\url{https://www.yangangwang.com/papers/HBZ-NM-2022-03.html}
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 12:48:41 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Huang",
"Buzhen",
""
],
[
"Pan",
"Liang",
""
],
[
"Yang",
"Yuan",
""
],
[
"Ju",
"Jingyi",
""
],
[
"Wang",
"Yangang",
""
]
] |
new_dataset
| 0.954497 |
2203.14074
|
Arti Keshari
|
Arti Keshari, Sonam Gupta and Sukhendu Das
|
V3GAN: Decomposing Background, Foreground and Motion for Video
Generation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Video generation is a challenging task that requires modeling plausible
spatial and temporal dynamics in a video. Inspired by how humans perceive a
video by grouping a scene into moving and stationary components, we propose a
method that decomposes the task of video generation into the synthesis of
foreground, background and motion. Foreground and background together describe
the appearance, whereas motion specifies how the foreground moves in a video
over time. We propose V3GAN, a novel three-branch generative adversarial
network where two branches model foreground and background information, while
the third branch models the temporal information without any supervision. The
foreground branch is augmented with our novel feature-level masking layer that
aids in learning an accurate mask for foreground and background separation. To
encourage motion consistency, we further propose a shuffling loss for the video
discriminator. Extensive quantitative and qualitative analysis on synthetic as
well as real-world benchmark datasets demonstrates that V3GAN outperforms the
state-of-the-art methods by a significant margin.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 13:17:45 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Keshari",
"Arti",
""
],
[
"Gupta",
"Sonam",
""
],
[
"Das",
"Sukhendu",
""
]
] |
new_dataset
| 0.997657 |
2203.14109
|
Richard Mortier
|
Derek McAuley, Jiahong Chen, Tom Lodge, Richard Mortier, Stanislaw
Piasecki, Diana Andreea Popescu, Lachlan Urquhart
|
Human-centred home network security
|
Preprint of Chapter 9 of Privacy by Design for the Internet of
Things: Building accountability and security
| null | null | null |
cs.CR cs.HC cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This chapter draws from across the foregoing chapters discussing many core
HDI approaches and disciplinary perspectives to consider the specific
application of HDI in home network security. While much work has considered the
challenges of securing in-home IoT devices and their communications, especially
for those with limited power or computational capacity, scant attention has
been paid by the research community to home network security, and its
acceptability and usability, from the viewpoint of ordinary citizens. It will
be clear that we need a radical transformation in our approach to designing
domestic networking infrastructure to guard against widespread cyber-attacks
that threaten to counter the benefits of the IoT. Our aim has to be to defend
against enemies inside the walls, to protect critical functionality in the home
against rogue devices and prevent the proliferation of disruptive wide-scale
IoT DDOS attacks that are already occurring [1].
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 16:23:05 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"McAuley",
"Derek",
""
],
[
"Chen",
"Jiahong",
""
],
[
"Lodge",
"Tom",
""
],
[
"Mortier",
"Richard",
""
],
[
"Piasecki",
"Stanislaw",
""
],
[
"Popescu",
"Diana Andreea",
""
],
[
"Urquhart",
"Lachlan",
""
]
] |
new_dataset
| 0.967611 |
2203.14129
|
Jason Milionis
|
Jason Milionis, Christos Papadimitriou, Georgios Piliouras, Kelly
Spendlove
|
Nash, Conley, and Computation: Impossibility and Incompleteness in Game
Dynamics
|
25 pages
| null | null | null |
cs.GT cs.LG econ.TH math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Under what conditions do the behaviors of players, who play a game
repeatedly, converge to a Nash equilibrium? If one assumes that the players'
behavior is a discrete-time or continuous-time rule whereby the current mixed
strategy profile is mapped to the next, this becomes a problem in the theory of
dynamical systems. We apply this theory, and in particular the concepts of
chain recurrence, attractors, and Conley index, to prove a general
impossibility result: there exist games for which any dynamics is bound to have
starting points that do not end up at a Nash equilibrium. We also prove a
stronger result for $\epsilon$-approximate Nash equilibria: there are games
such that no game dynamics can converge (in an appropriate sense) to
$\epsilon$-Nash equilibria, and in fact the set of such games has positive
measure. Further numerical results demonstrate that this holds for any
$\epsilon$ between zero and $0.09$. Our results establish that, although the
notions of Nash equilibria (and their computation-inspired approximations) are
universally applicable in all games, they are also fundamentally incomplete as
predictors of long term behavior, regardless of the choice of dynamics.
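The non-convergence phenomenon can be felt in a toy simulation: under replicator dynamics, Matching Pennies orbits its unique mixed Nash equilibrium instead of converging. This is only a standard textbook illustration, not the construction used in the paper.

```python
import numpy as np

# Matching Pennies payoff matrix for the row player (zero-sum game).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])

def replicator_step(x, y, dt=0.01):
    """One Euler step of two-population replicator dynamics."""
    fx, fy = A @ y, -A.T @ x                 # payoffs to row / column strategies
    x = x + dt * x * (fx - x @ fx)
    y = y + dt * y * (fy - y @ fy)
    return x / x.sum(), y / y.sum()

x, y = np.array([0.9, 0.1]), np.array([0.2, 0.8])
for _ in range(5000):
    x, y = replicator_step(x, y)
print(x, y)  # orbits the mixed equilibrium (0.5, 0.5) rather than converging to it
```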
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 18:27:40 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Milionis",
"Jason",
""
],
[
"Papadimitriou",
"Christos",
""
],
[
"Piliouras",
"Georgios",
""
],
[
"Spendlove",
"Kelly",
""
]
] |
new_dataset
| 0.976808 |
2203.14186
|
Luming Liang
|
Zhicheng Geng, Luming Liang, Tianyu Ding, Ilya Zharkov
|
RSTT: Real-time Spatial Temporal Transformer for Space-Time Video
Super-Resolution
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Space-time video super-resolution (STVSR) is the task of interpolating videos
with both Low Frame Rate (LFR) and Low Resolution (LR) to produce
High-Frame-Rate (HFR) and also High-Resolution (HR) counterparts. Existing
methods based on Convolutional Neural Networks (CNNs) succeed in achieving
visually satisfying results but suffer from slow inference speed due to their
heavy architectures. We propose to resolve this issue by using a
spatial-temporal transformer that naturally incorporates the spatial and
temporal super resolution modules into a single model. Unlike CNN-based
methods, we do not explicitly use separate building blocks for temporal
interpolation and spatial super-resolution; instead, we only use a single
end-to-end transformer architecture. Specifically, a reusable dictionary is
built by encoders based on the input LFR and LR frames, which is then utilized
in the decoder part to synthesize the HFR and HR frames. Compared with the
state-of-the-art TMNet \cite{xu2021temporal}, our network is $60\%$ smaller
(4.5M vs 12.3M parameters) and $80\%$ faster (26.2fps vs 14.3fps on
$720\times576$ frames) without sacrificing much performance. The source code is
available at https://github.com/llmpass/RSTT.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 02:16:26 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Geng",
"Zhicheng",
""
],
[
"Liang",
"Luming",
""
],
[
"Ding",
"Tianyu",
""
],
[
"Zharkov",
"Ilya",
""
]
] |
new_dataset
| 0.980331 |
2203.14188
|
Toyotaro Suzumura Prof
|
Toyotaro Suzumura, Akiyoshi Sugiki, Hiroyuki Takizawa, Akira Imakura,
Hiroshi Nakamura, Kenjiro Taura, Tomohiro Kudoh, Toshihiro Hanawa, Yuji
Sekiya, Hiroki Kobayashi, Shin Matsushima, Yohei Kuga, Ryo Nakamura, Renhe
Jiang, Junya Kawase, Masatoshi Hanai, Hiroshi Miyazaki, Tsutomu Ishizaki,
Daisuke Shimotoku, Daisuke Miyamoto, Kento Aida, Atsuko Takefusa, Takashi
Kurimoto, Koji Sasayama, Naoya Kitagawa, Ikki Fujiwara, Yusuke Tanimura,
Takayuki Aoki, Toshio Endo, Satoshi Ohshima, Keiichiro Fukazawa, Susumu Date,
Toshihiro Uchibayashi
|
mdx: A Cloud Platform for Supporting Data Science and Cross-Disciplinary
Research Collaborations
| null | null | null | null |
cs.LG cs.CY cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
The growing amount of data and advances in data science have created a need
for a new kind of cloud platform that provides users with flexibility, strong
security, and the ability to couple with supercomputers and edge devices
through high-performance networks. We have built such a nation-wide cloud
platform, called "mdx" to meet this need. The mdx platform's virtualization
service, jointly operated by 9 national universities and 2 national research
institutes in Japan, launched in 2021, and more features are in development.
Currently mdx is used by researchers in a wide variety of domains, including
materials informatics, geo-spatial information science, life science,
astronomical science, economics, social science, and computer science. This
paper provides an overview of the mdx platform, details the motivation for
its development, reports its current status, and outlines its future plans.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 02:22:42 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Suzumura",
"Toyotaro",
""
],
[
"Sugiki",
"Akiyoshi",
""
],
[
"Takizawa",
"Hiroyuki",
""
],
[
"Imakura",
"Akira",
""
],
[
"Nakamura",
"Hiroshi",
""
],
[
"Taura",
"Kenjiro",
""
],
[
"Kudoh",
"Tomohiro",
""
],
[
"Hanawa",
"Toshihiro",
""
],
[
"Sekiya",
"Yuji",
""
],
[
"Kobayashi",
"Hiroki",
""
],
[
"Matsushima",
"Shin",
""
],
[
"Kuga",
"Yohei",
""
],
[
"Nakamura",
"Ryo",
""
],
[
"Jiang",
"Renhe",
""
],
[
"Kawase",
"Junya",
""
],
[
"Hanai",
"Masatoshi",
""
],
[
"Miyazaki",
"Hiroshi",
""
],
[
"Ishizaki",
"Tsutomu",
""
],
[
"Shimotoku",
"Daisuke",
""
],
[
"Miyamoto",
"Daisuke",
""
],
[
"Aida",
"Kento",
""
],
[
"Takefusa",
"Atsuko",
""
],
[
"Kurimoto",
"Takashi",
""
],
[
"Sasayama",
"Koji",
""
],
[
"Kitagawa",
"Naoya",
""
],
[
"Fujiwara",
"Ikki",
""
],
[
"Tanimura",
"Yusuke",
""
],
[
"Aoki",
"Takayuki",
""
],
[
"Endo",
"Toshio",
""
],
[
"Ohshima",
"Satoshi",
""
],
[
"Fukazawa",
"Keiichiro",
""
],
[
"Date",
"Susumu",
""
],
[
"Uchibayashi",
"Toshihiro",
""
]
] |
new_dataset
| 0.998877 |
2203.14253
|
Amir Reza Asadi
|
Amir Reza Asadi, Reza Hemadi
|
Understanding Currencies in Video Games: A Review
|
"Published" 1st International Digital Games Research Conference:
Trends, Technologies, and Applications (DGRC)
| null |
10.1109/DGRC.2018.8712047
| null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a review of the status of currencies in video games. The
business of video games is a multibillion-dollar industry, and its internal
economy design is an important field to investigate. In this study, we have
distinguished virtual currencies in terms of game mechanics and virtual
currency schema. We have examined 11 games that use virtual currencies in a
significant way, and we provide insight for game designers on the internal game
economy by showing tangible examples of the game mechanics presented in our
model.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 09:16:39 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Asadi",
"Amir Reza",
""
],
[
"Hemadi",
"Reza",
""
]
] |
new_dataset
| 0.976268 |
2203.14298
|
Gissel Velarde
|
Marcel Del Castillo Velarde and Gissel Velarde
|
Benchmarking Algorithms for Automatic License Plate Recognition
|
6 pages, 10 Figures, 5 Tables, Technical Report
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We evaluated a lightweight Convolutional Neural Network (CNN) called LPRNet
[1] for automatic License Plate Recognition (LPR). We evaluated the algorithm
on two datasets, one composed of real license plate images and the other of
synthetic license plate images. In addition, we compared its performance
against Tesseract [2], an Optical Character Recognition engine. We measured
performance based on recognition accuracy and Levenshtein Distance. LPRNet is
an end-to-end framework and demonstrated robust performance on both datasets,
delivering 90 and 89 percent recognition accuracy on test sets of 1000 real and
synthetic license plate images, respectively. Tesseract was not trained using
real license plate images and performed well only on the synthetic dataset
after pre-processing steps, delivering 93 percent recognition accuracy. Finally,
Pareto analysis for frequency analysis of misclassified characters allowed us
to find in detail which characters were the most conflicting ones according to
the percentage of accumulated error. Depending on the region, license plate
images possess particular characteristics. Once properly trained, LPRNet can be
used to recognize characters from a specific region and dataset. Future work
can focus on applying transfer learning to utilize the features learned by
LPRNet and fine-tune it given a smaller, newer dataset of license plates.
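For reference, a minimal sketch of the Levenshtein (edit) distance used as the second metric; this is the standard dynamic-programming formulation, not the authors' evaluation script.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = curr
    return prev[-1]

assert levenshtein("ABC123", "ABC128") == 1  # one substituted character
```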
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 13:21:29 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Velarde",
"Marcel Del Castillo",
""
],
[
"Velarde",
"Gissel",
""
]
] |
new_dataset
| 0.999638 |
2203.14325
|
Chien-Yi Wang
|
Chien-Yi Wang, Yu-Ding Lu, Shang-Ta Yang, Shang-Hong Lai
|
PatchNet: A Simple Face Anti-Spoofing Framework via Fine-Grained Patch
Recognition
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Face anti-spoofing (FAS) plays a critical role in securing face recognition
systems from different presentation attacks. Previous works leverage auxiliary
pixel-level supervision and domain generalization approaches to address unseen
spoof types. However, the local characteristics of image captures, i.e.,
capturing devices and presenting materials, are ignored in existing works and
we argue that such information is required for networks to discriminate between
live and spoof images. In this work, we propose PatchNet which reformulates
face anti-spoofing as a fine-grained patch-type recognition problem. To be
specific, our framework recognizes the combination of capturing devices and
presenting materials based on the patches cropped from non-distorted face
images. This reformulation can largely improve the data variation and encourage
the network to learn discriminative features from local capture patterns. In
addition, to further improve the generalization ability of the spoof feature,
we propose the novel Asymmetric Margin-based Classification Loss and
Self-supervised Similarity Loss to regularize the patch embedding space. Our
experimental results verify our assumption and show that the model is capable
of recognizing unseen spoof types robustly by only looking at local regions.
Moreover, the fine-grained and patch-level reformulation of FAS outperforms the
existing approaches on intra-dataset, cross-dataset, and domain generalization
benchmarks. Furthermore, our PatchNet framework can enable practical
applications like Few-Shot Reference-based FAS and facilitate future
exploration of spoof-related intrinsic cues.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 15:16:17 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Wang",
"Chien-Yi",
""
],
[
"Lu",
"Yu-Ding",
""
],
[
"Yang",
"Shang-Ta",
""
],
[
"Lai",
"Shang-Hong",
""
]
] |
new_dataset
| 0.998833 |
2203.14338
|
Yu Zhang
|
Baijiong Lin and Yu Zhang
|
LibMTL: A Python Library for Multi-Task Learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents LibMTL, an open-source Python library built on PyTorch,
which provides a unified, comprehensive, reproducible, and extensible
implementation framework for Multi-Task Learning (MTL). LibMTL considers
different settings and approaches in MTL, and it supports a large number of
state-of-the-art MTL methods, including 12 loss weighting strategies, 7
architectures, and 84 combinations of different architectures and loss
weighting methods. Moreover, the modular design of LibMTL makes it easy to use
and highly extensible; thus, users can quickly develop new MTL methods,
compare fairly with existing MTL methods, or apply MTL algorithms to real-world
applications with the support of LibMTL. The source code and detailed
documentation of LibMTL are available at
https://github.com/median-research-group/LibMTL and
https://libmtl.readthedocs.io, respectively.
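As a flavour of what a loss-weighting strategy looks like, the sketch below implements plain equal weighting in PyTorch; it is illustrative only and does not reproduce LibMTL's actual API.

```python
import torch

def weighted_mtl_loss(task_losses, weights=None):
    """Combine per-task losses; equal weighting is the simplest strategy."""
    losses = torch.stack(list(task_losses))
    if weights is None:
        weights = torch.full_like(losses, 1.0 / len(losses))
    return (weights * losses).sum()

loss = weighted_mtl_loss([torch.tensor(0.7), torch.tensor(1.3)])
```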
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 16:00:48 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Lin",
"Baijiong",
""
],
[
"Zhang",
"Yu",
""
]
] |
new_dataset
| 0.999712 |
2203.14358
|
Majid Ahmadi Dr.
|
Khalid Alammari, Majid Ahmadi, and Arash Ahmadi
|
A Memristive Based Design of a Core Digital Circuit for Elliptic Curve
Cryptography
|
This paper has neither been published nor is it being considered in any
other journal
| null | null | null |
cs.CR cs.ET
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Emerging non-volatile memory (NVM) devices known as memristors are a promising
candidate for future digital architectures, owing to their nanoscale size and
their ability to integrate with existing CMOS technology. In this paper, a
combination of memristor devices and CMOS transistors work together to form a
hybrid CMOS-memristor circuit for the XAX module, a core element of the finite
field multiplier. The proposed design was implemented using a Pt/TaOx/Ta
memristor device and simulated in Cadence
Virtuoso. The simulation results demonstrate the design functionality. The
proposed module appears to be efficient in terms of layout area, delay and
power consumption since the design utilizes the hybrid CMOS/memristor gates.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 17:50:41 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Alammari",
"Khalid",
""
],
[
"Ahmadi",
"Majid",
""
],
[
"Ahmadi",
"Arash",
""
]
] |
new_dataset
| 0.999766 |
2203.14371
|
Ankit Pal
|
Ankit Pal, Logesh Kumar Umapathi and Malaikannan Sankarasubbu
|
MedMCQA : A Large-scale Multi-Subject Multi-Choice Dataset for Medical
domain Question Answering
|
Proceedings of Machine Learning Research (PMLR), ACM Conference on
Health, Inference, and Learning (CHIL) 2022
|
ACM Conference on Health, Inference, and Learning (CHIL) 2022
| null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question
Answering (MCQA) dataset designed to address real-world medical entrance exam
questions. More than 194k high-quality AIIMS \& NEET PG entrance exam MCQs
covering 2.4k healthcare topics and 21 medical subjects are collected with an
average token length of 12.77 and high topical diversity. Each sample contains
a question, correct answer(s), and other options which requires a deeper
language understanding as it tests the 10+ reasoning abilities of a model
across a wide range of medical subjects \& topics. A detailed explanation of
the solution, along with the above information, is provided in this study.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 18:59:16 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Pal",
"Ankit",
""
],
[
"Umapathi",
"Logesh Kumar",
""
],
[
"Sankarasubbu",
"Malaikannan",
""
]
] |
new_dataset
| 0.999861 |
2203.14401
|
Zhenishbek Zhakypov
|
Zhenishbek Zhakypov and Allison M. Okamura
|
FingerPrint: A 3-D Printed Soft Monolithic 4-Degree-of-Freedom Fingertip
Haptic Device with Embedded Actuation
|
For accompanying video, visit https://youtu.be/s0oR8Z6bjQc
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Wearable fingertip haptic interfaces provide tactile stimuli on the
fingerpads by applying skin pressure, linear and rotational shear, and
vibration. Designing and fabricating a compact, multi-degree-of-freedom, and
forceful fingertip haptic interface is challenging due to trade-offs among
miniaturization, multifunctionality, and manufacturability. Downsizing
electromagnetic actuators that produce high torques is infeasible, and
integrating multiple actuators, links, joints, and transmission elements
increases device size and weight. 3-D printing enables rapid manufacturing of
complex devices with minimal assembly in large batches. However, it requires a
careful arrangement of material properties, geometry, scale, and printer
capabilities. Here we present a fully 3-D printed, soft, monolithic fingertip
haptic device based on an origami pattern known as the "waterbomb" base that
embeds foldable vacuum actuation and produces 4-DoF of motion on the fingerpad
with tunable haptic forces (up to 1.3 N shear and 7 N normal) and torque (up to
25 N-mm). Including the thimble mounting, the compact device is 40 mm long and
20 mm wide. This demonstrates the efficacy of origami design and soft material
3D printing for designing and rapidly fabricating miniature yet complex
wearable mechanisms with force output appropriate for haptic interaction.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 21:44:29 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Zhakypov",
"Zhenishbek",
""
],
[
"Okamura",
"Allison M.",
""
]
] |
new_dataset
| 0.99802 |
2203.14456
|
Trisha Mittal
|
Vikram Gupta, Trisha Mittal, Puneet Mathur, Vaibhav Mishra, Mayank
Maheshwari, Aniket Bera, Debdoot Mukherjee, Dinesh Manocha
|
3MASSIV: Multilingual, Multimodal and Multi-Aspect dataset of Social
Media Short Videos
|
Accepted in CVPR 2022
| null | null | null |
cs.CV cs.AI cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
We present 3MASSIV, a multilingual, multimodal and multi-aspect,
expertly-annotated dataset of diverse short videos extracted from the
short-video social media platform Moj. 3MASSIV comprises 50k short videos (20 seconds
average duration) and 100K unlabeled videos in 11 different languages and
captures popular short video trends like pranks, fails, romance, and comedy,
expressed via unique audio-visual formats like self-shot videos, reaction
videos, lip-synching, self-sung songs, etc. 3MASSIV presents an opportunity for
multimodal and multilingual semantic understanding on these unique videos by
annotating them for concepts, affective states, media types, and audio
language. We present a thorough analysis of 3MASSIV and highlight the variety
and unique aspects of our dataset compared to other contemporary popular
datasets with strong baselines. We also show how the social media content in
3MASSIV is dynamic and temporal in nature, which can be used for semantic
understanding tasks and cross-lingual analysis.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 02:47:01 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Gupta",
"Vikram",
""
],
[
"Mittal",
"Trisha",
""
],
[
"Mathur",
"Puneet",
""
],
[
"Mishra",
"Vaibhav",
""
],
[
"Maheshwari",
"Mayank",
""
],
[
"Bera",
"Aniket",
""
],
[
"Mukherjee",
"Debdoot",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.999859 |
2203.14470
|
Toshihiro Nishimura
|
Toshihiro Nishimura, Kensuke Shimizu, Seita Nojiri, Kenjiro Tadakuma,
Yosuke Suzuki, Tokuo Tsuji, Tetsuyou Watanabe
|
Soft robotic hand with finger-bending/friction-reduction switching
mechanism through 1-degree-of-freedom flow control
| null |
IEEE Robotics and Automation Letters (2022)(Early Access)
|
10.1109/LRA.2022.3157964
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel pneumatic soft robotic hand that incorporates a
mechanism that can switch the airflow path using a single airflow control. The
developed hand can control the finger motion and operate the surface friction
variable mechanism. In the friction variable mechanism, a lubricant is injected
onto the high-friction finger surface to reduce surface friction. To inject the
lubrication using a positive-pressure airflow, the Venturi effect is applied.
The design and evaluation of the airflow-path switching and friction variable
mechanisms are described. Moreover, the entire design of a soft robotic hand
equipped with these mechanisms is presented. The performance was validated
through grasping, placing, and manipulation tests.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 03:23:39 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Nishimura",
"Toshihiro",
""
],
[
"Shimizu",
"Kensuke",
""
],
[
"Nojiri",
"Seita",
""
],
[
"Tadakuma",
"Kenjiro",
""
],
[
"Suzuki",
"Yosuke",
""
],
[
"Tsuji",
"Tokuo",
""
],
[
"Watanabe",
"Tetsuyou",
""
]
] |
new_dataset
| 0.998484 |
2203.14498
|
Soroush Vosoughi Dr
|
Weicheng Ma, Samiha Datta, Lili Wang, Soroush Vosoughi
|
EnCBP: A New Benchmark Dataset for Finer-Grained Cultural Background
Prediction in English
|
In Findings of ACL 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While cultural backgrounds have been shown to affect linguistic expressions,
existing natural language processing (NLP) research on culture modeling is
overly coarse-grained and does not examine cultural differences among speakers
of the same language. To address this problem and augment NLP models with
cultural background features, we collect, annotate, manually validate, and
benchmark EnCBP, a finer-grained news-based cultural background prediction
dataset in English. Through language modeling (LM) evaluations and manual
analyses, we confirm that there are noticeable differences in linguistic
expressions among five English-speaking countries and across four states in the
US. Additionally, our evaluations on nine syntactic (CoNLL-2003), semantic
(PAWS-Wiki, QNLI, STS-B, and RTE), and psycholinguistic tasks (SST-5, SST-2,
Emotion, and Go-Emotions) show that, while introducing cultural background
information does not benefit the Go-Emotions task due to text domain conflicts,
it noticeably improves deep learning (DL) model performance on other tasks. Our
findings strongly support the importance of cultural background modeling to a
wide variety of NLP tasks and demonstrate the applicability of EnCBP in
culture-related research.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 04:57:17 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Ma",
"Weicheng",
""
],
[
"Datta",
"Samiha",
""
],
[
"Wang",
"Lili",
""
],
[
"Vosoughi",
"Soroush",
""
]
] |
new_dataset
| 0.999705 |
2203.14499
|
MinhDuc Vo
|
Duc Minh Vo, Hong Chen, Akihiro Sugimoto, Hideki Nakayama
|
NOC-REK: Novel Object Captioning with Retrieved Vocabulary from External
Knowledge
|
Accepted at CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Novel object captioning aims at describing objects absent from training data,
with the key ingredient being the provision of object vocabulary to the model.
Although existing methods heavily rely on an object detection model, we view
the detection step as vocabulary retrieval from external knowledge, in the form
of embeddings of object definitions from Wiktionary, where the retrieval uses
image region features learned from a transformer model. We
propose an end-to-end Novel Object Captioning with Retrieved vocabulary from
External Knowledge method (NOC-REK), which simultaneously learns vocabulary
retrieval and caption generation, successfully describing novel objects outside
of the training dataset. Furthermore, our model eliminates the requirement for
model retraining by simply updating the external knowledge whenever a novel
object appears. Our comprehensive experiments on held-out COCO and Nocaps
datasets show that our NOC-REK is considerably effective against SOTAs.
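A rough sketch of the retrieval step as described: region features are matched against definition embeddings by cosine similarity and the top-k words form the vocabulary. All names below are assumptions; the actual NOC-REK model learns this retrieval end to end.

```python
import torch
import torch.nn.functional as F

def retrieve_vocabulary(region_feats, definition_embeds, vocab, k=5):
    """Return the k most similar vocabulary entries for each image region."""
    sims = F.normalize(region_feats, dim=-1) @ F.normalize(definition_embeds, dim=-1).t()
    topk = sims.topk(k, dim=-1).indices               # (num_regions, k)
    return [[vocab[j] for j in row] for row in topk.tolist()]
```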
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 04:59:16 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Vo",
"Duc Minh",
""
],
[
"Chen",
"Hong",
""
],
[
"Sugimoto",
"Akihiro",
""
],
[
"Nakayama",
"Hideki",
""
]
] |
new_dataset
| 0.976407 |
2203.14501
|
Ferhat Bayar
|
Ferhat Bayar, Onur Salan, Haci Ilhan, Erdogan Aydin
|
Space-Time Block Coded Reconfigurable Intelligent Surface-Based Received
Spatial Modulation
|
8 pages, 5 figures, 4 tables
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surface (RIS) structures reflect the incident
signals by adaptively adjusting their phase according to the channel conditions
over which transmission takes place, in order to increase signal quality at the receiver.
Besides, the spatial modulation (SM) technique is a possible candidate for
future energy-efficient wireless communications due to providing better
throughput, low-cost implementation and good error performance. Also,
Alamouti's space-time block coding (ASBC) is an important space and time coding
technique in terms of diversity gain and simplified ML detection. In this
paper, we propose the RIS assisted received spatial modulation (RSM) scheme
with ASBC, namely RIS-RSM-ASBC. The RIS is partitioned into two parts in the
proposed system model. Each part is utilized as an access point (AP) to transmit
its Alamouti-coded information while passively reflecting signals to the selected
receive antenna. The optimal maximum likelihood (ML) detector is designed for
the proposed RIS-RSM-ASBC scheme. Extensive computer simulations are conducted
to corroborate theoretical derivations. Results show that RIS-RSM-ASBC system
is highly reliable and provides data rate enhancement in contrast to
conventional RIS assisted transmit SM (RIS-TSM), RIS assisted transmit
quadrature SM (RIS-TQSM), RIS assisted received SM (RIS-RSM), RIS assisted
transmit space shift keying with ASBC (RIS-TSSK-ASBC) and RIS-TSSK-VBLAST
schemes.
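For context, Alamouti's scheme transmits [s1, s2] in the first slot and [-s2*, s1*] in the second, which allows simple linear combining at the receiver. The sketch below covers only the classical 2x1 case; how the two RIS parts realize this in the proposed scheme is not shown.

```python
import numpy as np

def alamouti_2x1(s1, s2, h1, h2, noise=0.0):
    """Classical 2x1 Alamouti transmission and linear combining."""
    r1 = h1 * s1 + h2 * s2 + noise                       # slot 1
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise    # slot 2
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    # Noise-free, each estimate equals (|h1|^2 + |h2|^2) times the sent symbol.
    return s1_hat, s2_hat

s1_hat, s2_hat = alamouti_2x1(1 + 0j, -1 + 0j, 0.8 + 0.3j, 0.5 - 0.2j)
```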
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 05:08:02 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Bayar",
"Ferhat",
""
],
[
"Salan",
"Onur",
""
],
[
"Ilhan",
"Haci",
""
],
[
"Aydin",
"Erdogan",
""
]
] |
new_dataset
| 0.995348 |
2203.14564
|
Joonkyu Park
|
JoonKyu Park, Yeonguk Oh, Gyeongsik Moon, Hongsuk Choi, Kyoung Mu Lee
|
HandOccNet: Occlusion-Robust 3D Hand Mesh Estimation Network
|
also attached the supplementary material
|
Computer Vision and Pattern Recognition (CVPR), 2022
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hands are often severely occluded by objects, which makes 3D hand mesh
estimation challenging. Previous works often have disregarded information at
occluded regions. However, we argue that occluded regions have strong
correlations with hands so that they can provide highly beneficial information
for complete 3D hand mesh estimation. Thus, in this work, we propose a novel 3D
hand mesh estimation network, HandOccNet, that can fully exploit the
information in occluded regions as a secondary means to enhance image features
and make them much richer. To this end, we design two successive
Transformer-based modules, called feature injecting transformer (FIT) and
self-enhancing transformer (SET). FIT injects hand information into the occluded region
by considering their correlation. SET refines the output of FIT by using a
self-attention mechanism. By injecting the hand information to the occluded
region, our HandOccNet reaches the state-of-the-art performance on 3D hand mesh
benchmarks that contain challenging hand-object occlusions. The codes are
available in: https://github.com/namepllet/HandOccNet.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 08:12:16 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Park",
"JoonKyu",
""
],
[
"Oh",
"Yeonguk",
""
],
[
"Moon",
"Gyeongsik",
""
],
[
"Choi",
"Hongsuk",
""
],
[
"Lee",
"Kyoung Mu",
""
]
] |
new_dataset
| 0.989948 |
2203.14628
|
Yisheng He
|
Yisheng He, Yao Wang, Haoqiang Fan, Jian Sun, Qifeng Chen
|
FS6D: Few-Shot 6D Pose Estimation of Novel Objects
|
Accepted by CVPR 2022
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
6D object pose estimation networks are limited in their capability to scale
to large numbers of object instances due to the close-set assumption and their
reliance on high-fidelity object CAD models. In this work, we study a new
open-set problem, few-shot 6D object pose estimation: estimating the 6D pose of
an unknown object from a few support views without extra training. To tackle the
problem, we point out the importance of fully exploring the appearance and
geometric relationship between the given support views and query scene patches
and propose a dense prototypes matching framework by extracting and matching
dense RGBD prototypes with transformers. Moreover, we show that the priors from
diverse appearances and shapes are crucial to the generalization capability
under the problem setting and thus propose a large-scale RGBD photorealistic
dataset (ShapeNet6D) for network pre-training. A simple and effective online
texture blending approach is also introduced to eliminate the domain gap from
the synthesis dataset, which enriches appearance diversity at a low cost.
Finally, we discuss possible solutions to this problem and establish benchmarks
on popular datasets to facilitate future research. The project page is at
\url{https://fs6d.github.io/}.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 10:31:29 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"He",
"Yisheng",
""
],
[
"Wang",
"Yao",
""
],
[
"Fan",
"Haoqiang",
""
],
[
"Sun",
"Jian",
""
],
[
"Chen",
"Qifeng",
""
]
] |
new_dataset
| 0.984479 |
2203.14672
|
Daniel Gehrig
|
Daniel Gehrig and Davide Scaramuzza
|
Are High-Resolution Event Cameras Really Needed?
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to their outstanding properties in challenging conditions, event cameras
have become indispensable in a wide range of applications, including
automotive, computational photography, and SLAM. However, as further
improvements are made to the sensor design, modern event cameras are trending
toward higher and higher sensor resolutions, which result in higher bandwidth
and computational requirements on downstream tasks. Despite this trend, the
benefits of using high-resolution event cameras to solve standard computer
vision tasks are still not clear. In this work, we report the surprising
discovery that, in low-illumination conditions and at high speeds,
low-resolution cameras can outperform high-resolution ones, while requiring a
significantly lower bandwidth. We provide both empirical and theoretical
evidence for this claim, which indicates that high-resolution event cameras
exhibit higher per-pixel event rates, leading to higher temporal noise in
low-illumination conditions and at high speeds. As a result, in most cases,
high-resolution event cameras show a lower task performance, compared to lower
resolution sensors in these conditions. We empirically validate our findings
across several tasks, namely image reconstruction, optical flow estimation, and
camera pose tracking, both on synthetic and real data. We believe that these
findings will provide important guidelines for future trends in event camera
development.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 12:06:20 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Gehrig",
"Daniel",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.972105 |
2203.14679
|
Wenshuo Li
|
Wenshuo Li, Hanting Chen, Jianyuan Guo, Ziyang Zhang, Yunhe Wang
|
Brain-inspired Multilayer Perceptron with Spiking Neurons
|
This paper is accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Multilayer Perceptron (MLP) becomes the hotspot in the field of
computer vision tasks. Without inductive bias, MLPs perform well on feature
extraction and achieve amazing results. However, due to the simplicity of their
structures, the performance highly depends on the local features communication
machenism. To further improve the performance of MLP, we introduce information
communication mechanisms from brain-inspired neural networks. Spiking Neural
Network (SNN) is the most famous brain-inspired neural network, and achieve
great success on dealing with sparse data. Leaky Integrate and Fire (LIF)
neurons in SNNs are used to communicate between different time steps. In this
paper, we incorporate the machanism of LIF neurons into the MLP models, to
achieve better accuracy without extra FLOPs. We propose a full-precision LIF
operation to communicate between patches, including horizontal LIF and vertical
LIF in different directions. We also propose to use group LIF to extract better
local features. With LIF modules, our SNN-MLP model achieves 81.9%, 83.3% and
83.5% top-1 accuracy on ImageNet dataset with only 4.4G, 8.5G and 15.2G FLOPs,
respectively, which are state-of-the-art results as far as we know.
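A generic discrete-time LIF update for intuition; the paper's full-precision, patch-wise variant differs, so treat the threshold-and-reset below as the textbook formulation rather than the SNN-MLP module.

```python
import torch

def lif_step(x_t, v_prev, tau=0.25, v_threshold=1.0):
    """One discrete Leaky Integrate-and-Fire step: leak, integrate, fire, reset."""
    v = tau * v_prev + x_t                 # leaky integration of the input
    spike = (v >= v_threshold).float()     # fire when the membrane potential crosses the threshold
    v = v * (1.0 - spike)                  # hard reset for neurons that fired
    return spike, v

x = torch.randn(4, 8)
spike, v = lif_step(x, torch.zeros_like(x))
```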
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 12:21:47 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Li",
"Wenshuo",
""
],
[
"Chen",
"Hanting",
""
],
[
"Guo",
"Jianyuan",
""
],
[
"Zhang",
"Ziyang",
""
],
[
"Wang",
"Yunhe",
""
]
] |
new_dataset
| 0.992255 |
2203.14698
|
Zhang Jingyi
|
Jialian Li, Jingyi Zhang, Zhiyong Wang, Siqi Shen, Chenglu Wen, Yuexin
Ma, Lan Xu, Jingyi Yu, Cheng Wang
|
LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR
Point Clouds
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing motion capture datasets are largely short-range and cannot yet fit
the need of long-range applications. We propose LiDARHuman26M, a new human
motion capture dataset captured by LiDAR at a much longer range to overcome
this limitation. Our dataset also includes the ground truth human motions
acquired by the IMU system and the synchronous RGB images. We further present a
strong baseline method, LiDARCap, for LiDAR point cloud human motion capture.
Specifically, we first utilize PointNet++ to encode features of points and then
employ an inverse kinematics solver and SMPL optimizer to regress the pose
by hierarchically aggregating the temporally encoded features.
Quantitative and qualitative experiments show that our method outperforms the
techniques based only on RGB images. Ablation experiments demonstrate that our
dataset is challenging and worthy of further research. Finally, the experiments
on the KITTI Dataset and the Waymo Open Dataset show that our method can be
generalized to different LiDAR sensor settings.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 12:52:45 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Li",
"Jialian",
""
],
[
"Zhang",
"Jingyi",
""
],
[
"Wang",
"Zhiyong",
""
],
[
"Shen",
"Siqi",
""
],
[
"Wen",
"Chenglu",
""
],
[
"Ma",
"Yuexin",
""
],
[
"Xu",
"Lan",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Wang",
"Cheng",
""
]
] |
new_dataset
| 0.999818 |
2203.14708
|
Rui Fukushima
|
Rui Fukushima, Kei Ota, Asako Kanezaki, Yoko Sasaki, Yusuke Yoshiyasu
|
Object Memory Transformer for Object Goal Navigation
|
7 pages, 3 figures, Accepted at ICRA 2022
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a reinforcement learning method for object goal
navigation (ObjNav) where an agent navigates in 3D indoor environments to reach
a target object based on long-term observations of objects and scenes. To this
end, we propose the Object Memory Transformer (OMT), which consists of two key ideas:
1) an Object-Scene Memory (OSM) that enables the agent to store long-term scene and object
semantics, and 2) a Transformer that attends to salient objects in the sequence
of previously observed scenes and objects stored in OSM. This mechanism allows
the agent to efficiently navigate in the indoor environment without prior
knowledge about the environments, such as topological maps or 3D meshes. To the
best of our knowledge, this is the first work that uses a long-term memory of
object semantics in a goal-oriented navigation task. Experimental results
conducted on the AI2-THOR dataset show that OMT outperforms previous approaches
in navigating in unknown environments. In particular, we show that utilizing
the long-term object semantics information improves the efficiency of
navigation.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 09:16:56 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Fukushima",
"Rui",
""
],
[
"Ota",
"Kei",
""
],
[
"Kanezaki",
"Asako",
""
],
[
"Sasaki",
"Yoko",
""
],
[
"Yoshiyasu",
"Yusuke",
""
]
] |
new_dataset
| 0.955044 |
2203.14709
|
Bumsoo Kim
|
Bumsoo Kim, Jonghwan Mun, Kyoung-Woon On, Minchul Shin, Junhyun Lee,
Eun-Sol Kim
|
MSTR: Multi-Scale Transformer for End-to-End Human-Object Interaction
Detection
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human-Object Interaction (HOI) detection is the task of identifying a set of
<human, object, interaction> triplets from an image. Recent work proposed
transformer encoder-decoder architectures that successfully eliminated the need
for many hand-designed components in HOI detection through end-to-end training.
However, they are limited to single-scale feature resolution, providing
suboptimal performance in scenes containing humans, objects and their
interactions with vastly different scales and distances. To tackle this
problem, we propose a Multi-Scale TRansformer (MSTR) for HOI detection powered
by two novel HOI-aware deformable attention modules called Dual-Entity
attention and Entity-conditioned Context attention. While existing deformable
attention comes at a huge cost in HOI detection performance, our proposed
attention modules of MSTR learn to effectively attend to sampling points that
are essential to identify interactions. In experiments, we achieve the new
state-of-the-art performance on two HOI detection benchmarks.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 12:58:59 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Kim",
"Bumsoo",
""
],
[
"Mun",
"Jonghwan",
""
],
[
"On",
"Kyoung-Woon",
""
],
[
"Shin",
"Minchul",
""
],
[
"Lee",
"Junhyun",
""
],
[
"Kim",
"Eun-Sol",
""
]
] |
new_dataset
| 0.984365 |
2203.14725
|
Yoshifumi Nakano
|
Yoshifumi Nakano, Takaaki Saeki, Shinnosuke Takamichi, Katsuhito
Sudoh, Hiroshi Saruwatari
|
vTTS: visual-text to speech
|
Submitted to Interspeech 2022
| null | null | null |
cs.SD
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes visual-text to speech (vTTS), a method for synthesizing
speech from visual text (i.e., text as an image). Conventional TTS converts
phonemes or characters into discrete symbols and synthesizes a speech waveform
from them, thus losing the visual features that the characters essentially
have. Therefore, our method synthesizes speech not from discrete symbols but
from visual text. The proposed vTTS extracts visual features with a
convolutional neural network and then generates acoustic features with a
non-autoregressive model inspired by FastSpeech2. Experimental results show
that 1) vTTS is capable of generating speech with naturalness comparable to or
better than a conventional TTS, 2) it can transfer emphasis and emotion
attributes in visual text to speech without additional labels and
architectures, and 3) it can synthesize more natural and intelligible speech
from unseen and rare characters than conventional TTS.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 13:10:11 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Nakano",
"Yoshifumi",
""
],
[
"Saeki",
"Takaaki",
""
],
[
"Takamichi",
"Shinnosuke",
""
],
[
"Sudoh",
"Katsuhito",
""
],
[
"Saruwatari",
"Hiroshi",
""
]
] |
new_dataset
| 0.999227 |
2203.14782
|
Marcos Faundez-Zanuy
|
Manuel-Vicente Garnacho-Casta\~no, Marcos Faundez-Zanuy, Josep
Lopez-Xarbau
|
On the Handwriting Tasks' Analysis to Detect Fatigue
|
16 pages, published in Applied Sciences 10, no. 21: 7630, 2020
|
Applied Sciences 10, no. 21: 7630 (2020)
|
10.3390/app10217630
| null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Practical determination of physical recovery after intense exercise is a
challenging topic that must include mechanical aspects as well as cognitive
ones, because most physical sport activities, as well as professional
activities (including brain-computer-interface-operated systems), require good
condition in both. This paper presents a new online handwritten database of
20 healthy subjects. The main goal was to study the influence of several
physical exercise stimuli in different handwritten tasks and to evaluate the
recovery after strenuous exercise. To this aim, they performed different
handwritten tasks before and after physical exercise as well as other
measurements such as metabolic and mechanical fatigue assessment. Experimental
results showed that although a fast mechanical recovery happens and can be
measured by lactate concentrations and mechanical fatigue, this is not the case
when cognitive effort is required. Handwriting analysis revealed that
statistical differences exist on handwriting performance even after lactate
concentration and mechanical assessment recovery. Conclusions: this points to
the need for longer recovery times in sport and professional activities than
those indicated by classic measures.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 14:15:07 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Garnacho-Castaño",
"Manuel-Vicente",
""
],
[
"Faundez-Zanuy",
"Marcos",
""
],
[
"Lopez-Xarbau",
"Josep",
""
]
] |
new_dataset
| 0.97925 |
2203.14876
|
Anja Virkkunen
|
Anja Virkkunen and Aku Rouhe and Nhan Phan and Mikko Kurimo
|
Finnish Parliament ASR corpus - Analysis, benchmarks and statistics
|
Submitted to Language Resources and Evaluation
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Public sources like parliament meeting recordings and transcripts provide
ever-growing material for the training and evaluation of automatic speech
recognition (ASR) systems. In this paper, we publish and analyse the Finnish
parliament ASR corpus, the largest publicly available collection of manually
transcribed speech data for Finnish with over 3000 hours of speech and 449
speakers for which it provides rich demographic metadata. This corpus builds on
earlier initial work, and as a result the corpus has a natural split into two
training subsets from two periods of time. Similarly, there are two official,
corrected test sets covering different times, setting an ASR task with
longitudinal distribution-shift characteristics. An official development set is
also provided. We develop a complete Kaldi-based data preparation pipeline, and
hidden Markov model (HMM), hybrid deep neural network (HMM-DNN) and
attention-based encoder-decoder (AED) ASR recipes. We set benchmarks on the
official test sets, as well as multiple other recently used test sets. Both
temporal corpus subsets are already large, and we observe that beyond their
scale, ASR performance on the official test sets plateaus, whereas other
domains benefit from added data. The HMM-DNN and AED approaches are compared in
a carefully matched equal data setting, with the HMM-DNN system consistently
performing better. Finally, the variation of the ASR accuracy is compared
between the speaker categories available in the parliament metadata to detect
potential biases based on factors such as gender, age, and education.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 16:29:49 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Virkkunen",
"Anja",
""
],
[
"Rouhe",
"Aku",
""
],
[
"Phan",
"Nhan",
""
],
[
"Kurimo",
"Mikko",
""
]
] |
new_dataset
| 0.999409 |
2203.14888
|
Amitabh Priyadarshi
|
Amitabh Priyadarshi, Krzysztof J. Kochut
|
WawPart: Workload-Aware Partitioning of Knowledge Graphs
|
12 pages, 8 figures, Springer International Publishing, in Advances
and Trends in Artificial Intelligence. Artificial Intelligence Practices, pp.
383-395
| null | null | null |
cs.DB cs.AI cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale datasets in the form of knowledge graphs are often used in
numerous domains today. A knowledge graph's size often exceeds the capacity of
a single computer system, especially if the graph must be stored in main
memory. To overcome this, knowledge graphs can be partitioned into multiple
sub-graphs and distributed as shards among many computing nodes. However, the
performance of many common tasks performed on graphs, such as querying, suffers
as a result. This is due to distributed joins mandated by graph edges
crossing (cutting) the partitions. In this paper, we propose a method of
knowledge graph partitioning that takes into account a set of queries
(workload). The resulting partitioning aims to reduce the number of
distributed joins and improve the workload performance. Critical features
identified in the query workload and the knowledge graph are used to cluster
the queries and then partition the graph. Queries are rewritten to account for
the graph partitioning. Our evaluation results demonstrate the performance
improvement in workload processing time.
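
The following sketch illustrates the general idea of workload-aware partitioning under simple assumptions (predicates that co-occur in a query are co-located to keep their joins local); it is not the paper's algorithm, clustering scheme, or query-rewriting step.

```python
from collections import defaultdict

def workload_aware_partition(triples, queries, num_parts):
    """Greedy sketch: co-locate triples whose predicates appear together
    in the same query, so the joins those queries need stay on one shard."""
    copred = defaultdict(int)
    for q in queries:                           # each query: set of predicates
        preds = sorted(q)
        for i, a in enumerate(preds):
            for b in preds[i + 1:]:
                copred[(a, b)] += 1

    part_of, load = {}, [0] * num_parts
    all_preds = {p for _, p, _ in triples} | {p for q in queries for p in q}
    for p in sorted(all_preds):
        best, best_score = None, 0
        for (a, b), c in copred.items():
            other = b if a == p else (a if b == p else None)
            if other is not None and other in part_of and c > best_score:
                best, best_score = part_of[other], c
        target = best if best is not None else load.index(min(load))
        part_of[p] = target
        load[target] += 1

    shards = [[] for _ in range(num_parts)]
    for s, p, o in triples:
        shards[part_of[p]].append((s, p, o))
    return shards

triples = [("a", "knows", "b"), ("b", "worksAt", "x"), ("a", "livesIn", "y")]
queries = [{"knows", "worksAt"}, {"livesIn"}]
print([len(s) for s in workload_aware_partition(triples, queries, 2)])  # [2, 1]
```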
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 16:45:39 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Priyadarshi",
"Amitabh",
""
],
[
"Kochut",
"Krzysztof J.",
""
]
] |
new_dataset
| 0.999462 |
2203.14891
|
Patricia Johann
|
Patricia Johann and Pierre Cagne
|
How Functorial Are (Deep) GADTs?
|
Accompanying code at
https://www.normalesup.org/~cagne/gadts/adm/README.html
| null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
It is well-known that GADTs do not admit standard map functions of the kind
supported by ADTs and nested types. In addition, standard map functions are
insufficient to distribute their data-changing argument functions over all of
the structure present in elements of deep GADTs, even just deep ADTs or nested
types. This paper develops an algorithm for detecting exactly which functions
are mappable over data whose types are (deep) GADTs. The algorithm takes as
input a term t whose type is an instance of a deep GADT D and a function f to
be mapped over t. It detects a minimal possible shape of t as an element of D,
and returns a minimal set of constraints f must satisfy to be mappable over t.
The crux of the algorithm is its ability to separate t's essential structure as
an element of D -- i.e., the part of t that is essential for it to have the
shape of an element of D -- from its incidental structure as an element of D --
i.e., the part of t that is simply data in the positions of this shape. The
algorithm ensures that the constraints on f come only from t's essential
structure. This work is part of an ongoing effort to define initial algebra
semantics for GADTs that properly generalizes the usual semantics for ADTs and
nested types as least fixpoints of higher-order endofunctors.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 16:49:48 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Johann",
"Patricia",
""
],
[
"Cagne",
"Pierre",
""
]
] |
new_dataset
| 0.983207 |
2203.14954
|
Yuheng Li
|
Yang Xue, Yuheng Li, Krishna Kumar Singh, Yong Jae Lee
|
GIRAFFE HD: A High-Resolution 3D-aware Generative Model
|
CVPR 2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D-aware generative models have shown that the introduction of 3D information
can lead to more controllable image generation. In particular, the current
state-of-the-art model GIRAFFE can control each object's rotation, translation,
scale, and scene camera pose without corresponding supervision. However,
GIRAFFE only operates well when the image resolution is low. We propose GIRAFFE
HD, a high-resolution 3D-aware generative model that inherits all of GIRAFFE's
controllable features while generating high-quality, high-resolution images
($512^2$ resolution and above). The key idea is to leverage a style-based
neural renderer, and to independently generate the foreground and background to
force their disentanglement while imposing consistency constraints to stitch
them together to composite a coherent final image. We demonstrate
state-of-the-art 3D controllable high-resolution image generation on multiple
natural image datasets.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 17:58:20 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Xue",
"Yang",
""
],
[
"Li",
"Yuheng",
""
],
[
"Singh",
"Krishna Kumar",
""
],
[
"Lee",
"Yong Jae",
""
]
] |
new_dataset
| 0.999528 |
2012.15370
|
Baris Gecer
|
Baris Gecer, Jiankang Deng, Stefanos Zafeiriou
|
OSTeC: One-Shot Texture Completion
|
Project page: https://github.com/barisgecer/OSTeC
|
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2021, pp. 7628-7638
|
10.1109/CVPR46437.2021.00754
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The last few years have witnessed the great success of non-linear generative
models in synthesizing high-quality photorealistic face images. Many recent
approaches to 3D facial texture reconstruction and pose manipulation from a
single image still rely on large and clean face datasets to train image-to-image
Generative Adversarial Networks (GANs). Yet collecting such a large-scale,
high-resolution 3D texture dataset is very costly, and it is difficult to
maintain age/ethnicity balance. Moreover, regression-based approaches struggle
to generalize to in-the-wild conditions and cannot be fine-tuned
to a target image. In this work, we propose an unsupervised approach for
one-shot 3D facial texture completion that does not require large-scale texture
datasets, but rather harnesses the knowledge stored in 2D face generators. The
proposed approach rotates an input image in 3D and fills in the unseen regions
by reconstructing the rotated image in a 2D face generator, based on the
visible parts. Finally, we stitch the most visible textures at different angles
in the UV image-plane. Further, we frontalize the target image by projecting
the completed texture into the generator. The qualitative and quantitative
experiments demonstrate that the completed UV textures and frontalized images
are of high quality, resemble the original identity, and can be used to train a
texture GAN model for 3DMM fitting and to improve pose-invariant face recognition.
|
[
{
"version": "v1",
"created": "Wed, 30 Dec 2020 23:53:26 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Gecer",
"Baris",
""
],
[
"Deng",
"Jiankang",
""
],
[
"Zafeiriou",
"Stefanos",
""
]
] |
new_dataset
| 0.963407 |
2109.02409
|
Sai Anurudh Reddy Peduri
|
Anurudh Peduri and Siddharth Bhat
|
QSSA: An SSA-based IR for Quantum Computing
|
20 pages, 16 figures
| null |
10.1145/3497776.3517772
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Quantum computing hardware has progressed rapidly. Simultaneously, there has
been a proliferation of programming languages and program optimization tools
for quantum computing. Existing quantum compilers use intermediate
representations (IRs) where quantum programs are described as circuits. Such
IRs fail to leverage existing work on compiler optimizations. In such IRs, it
is non-trivial to statically check for physical constraints such as the
no-cloning theorem, which states that qubits cannot be copied. We introduce
QSSA, a novel quantum IR based on static single assignment (SSA) that enables
decades of research in compiler optimizations to be applied to quantum
compilation. QSSA models quantum operations as being side-effect-free. The
inputs and outputs of the operation are in one-to-one correspondence; qubits
cannot be created or destroyed. As a result, our IR supports a static analysis
pass that verifies no-cloning at compile-time. The quantum circuit is fully
encoded within the def-use chain of the IR, allowing us to leverage existing
optimization passes on SSA representations such as redundancy elimination and
dead-code elimination. Running our QSSA-based compiler on the QASMBench and IBM
Quantum Challenge datasets, we show that our optimizations perform comparably
to IBM's Qiskit quantum compiler infrastructure. QSSA allows us to represent,
analyze, and transform quantum programs using the robust theory of SSA
representations, bringing quantum compilation into the realm of well-understood
theory and practice.
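
A minimal sketch of the kind of static no-cloning check such an SSA representation enables, assuming a toy program encoding (result names and operand names per operation) rather than QSSA/MLIR syntax:

```python
def check_no_cloning(ops):
    """Every qubit SSA value may be defined once and consumed at most once.

    ops: list of (result_names, operand_names) pairs in program order;
    the names and encoding here are hypothetical, for illustration only.
    """
    defined, consumed = set(), set()
    for results, operands in ops:
        for q in operands:
            if q not in defined:
                raise ValueError(f"use of undefined value {q}")
            if q in consumed:
                raise ValueError(f"no-cloning violation: {q} used twice")
            consumed.add(q)
        defined.update(results)
    return True

program = [
    (("q0",), ()),                  # %q0 = alloc
    (("q1",), ()),                  # %q1 = alloc
    (("q2",), ("q0",)),             # %q2 = H %q0
    (("q3", "q4"), ("q2", "q1")),   # %q3, %q4 = CNOT %q2, %q1
]
print(check_no_cloning(program))    # True; reusing %q0 again would raise
```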
|
[
{
"version": "v1",
"created": "Mon, 6 Sep 2021 12:45:02 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Peduri",
"Anurudh",
""
],
[
"Bhat",
"Siddharth",
""
]
] |
new_dataset
| 0.999903 |
2109.05830
|
Kazuhiko Kawamoto
|
Nariki Tanaka, Hiroshi Kera, Kazuhiko Kawamoto
|
Adversarial Bone Length Attack on Action Recognition
|
12 pages, 8 figures, accepted to AAAI2022
| null | null | null |
cs.CV cs.AI cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Skeleton-based action recognition models have recently been shown to be
vulnerable to adversarial attacks. Compared to adversarial attacks on images,
perturbations to skeletons are typically bounded to a lower dimension of
approximately 100 per frame. This lower-dimensional setting makes it more
difficult to generate imperceptible perturbations. Existing attacks resolve
this by exploiting the temporal structure of the skeleton motion so that the
perturbation dimension increases to thousands. In this paper, we show that
adversarial attacks can be performed on skeleton-based action recognition
models, even in a significantly low-dimensional setting without any temporal
manipulation. Specifically, we restrict the perturbations to the lengths of the
skeleton's bones, which allows an adversary to manipulate only approximately 30
effective dimensions. We conducted experiments on the NTU RGB+D and HDM05
datasets and demonstrate that the proposed attack successfully deceived models
with success rates sometimes greater than 90% using only small perturbations.
Furthermore, we discovered an interesting phenomenon: in our low-dimensional
setting, the adversarial training with the bone length attack shares a similar
property with data augmentation, and it not only improves the adversarial
robustness but also improves the classification accuracy on the original data.
This is an interesting counterexample of the trade-off between adversarial
robustness and clean accuracy, which has been widely observed in studies on
adversarial training in the high-dimensional regime.
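
A small sketch of what a bone-length perturbation might look like under assumed conventions (bone vectors rescaled and the change propagated down the kinematic chain); the paper's attack optimizes these scales adversarially rather than sampling them at random.

```python
import numpy as np

def perturb_bone_lengths(joints, parents, eps=0.05, seed=None):
    """Rescale each bone by a small factor while keeping joint angles.

    joints:  (J, 3) joint positions
    parents: length-J list; parents[i] is the parent joint index (-1 for root).
    Assumes parents appear before their children in the joint ordering.
    """
    rng = np.random.default_rng(seed)
    scales = 1.0 + rng.uniform(-eps, eps, size=len(parents))
    out = joints.copy()
    for j, p in enumerate(parents):
        if p < 0:
            continue                                   # root joint is fixed
        bone = joints[j] - joints[p]                   # original bone vector
        out[j] = out[p] + scales[j] * bone             # rescaled, re-rooted bone
    return out

skel = np.random.rand(5, 3)
parents = [-1, 0, 1, 1, 3]
print(np.linalg.norm(perturb_bone_lengths(skel, parents) - skel, axis=1))
```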
|
[
{
"version": "v1",
"created": "Mon, 13 Sep 2021 09:59:44 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2022 10:21:50 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Tanaka",
"Nariki",
""
],
[
"Kera",
"Hiroshi",
""
],
[
"Kawamoto",
"Kazuhiko",
""
]
] |
new_dataset
| 0.997895 |
2111.13087
|
Duy-Kien Nguyen
|
Duy-Kien Nguyen, Jihong Ju, Olaf Booij, Martin R. Oswald, Cees G. M.
Snoek
|
BoxeR: Box-Attention for 2D and 3D Transformers
|
In Proceeding of CVPR'2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a simple attention mechanism, which we call
box-attention. It enables spatial interaction between grid features, as sampled
from boxes of interest, and improves the learning capability of transformers
for several vision tasks. Specifically, we present BoxeR, short for Box
Transformer, which attends to a set of boxes by predicting their transformation
from a reference window on an input feature map. The BoxeR computes attention
weights on these boxes by considering its grid structure. Notably, BoxeR-2D
naturally reasons about box information within its attention module, making it
suitable for end-to-end instance detection and segmentation tasks. By learning
invariance to rotation in the box-attention module, BoxeR-3D is capable of
generating discriminative information from a bird's-eye view plane for 3D
end-to-end object detection. Our experiments demonstrate that the proposed
BoxeR-2D achieves state-of-the-art results on COCO detection and instance
segmentation. Besides, BoxeR-3D improves over the end-to-end 3D object
detection baseline and already obtains a compelling performance for the vehicle
category of Waymo Open, without any class-specific optimization. Code is
available at https://github.com/kienduynguyen/BoxeR.
|
[
{
"version": "v1",
"created": "Thu, 25 Nov 2021 13:54:25 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2022 09:42:32 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Nguyen",
"Duy-Kien",
""
],
[
"Ju",
"Jihong",
""
],
[
"Booij",
"Olaf",
""
],
[
"Oswald",
"Martin R.",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] |
new_dataset
| 0.998589 |
2201.08368
|
Lloyd Montgomery
|
Lloyd Montgomery, Clara L\"uders, Walid Maalej
|
An Alternative Issue Tracking Dataset of Public Jira Repositories
|
5 pages
| null |
10.1145/3524842.3528486
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Organisations use issue tracking systems (ITSs) to track and document their
projects' work in units called issues. This style of documentation encourages
evolutionary refinement, as each issue can be independently improved, commented
on, linked to other issues, and progressed through the organisational workflow.
Commonly studied ITSs so far include GitHub, GitLab, and Bugzilla, while Jira,
one of the most popular ITS in practice with a wealth of additional
information, has yet to receive similar attention. Unfortunately, diverse
public Jira datasets are rare, likely due to the difficulty in finding and
accessing these repositories. With this paper, we release a dataset of 16
public Jiras with 1822 projects, spanning 2.7 million issues with a combined
total of 32 million changes, 9 million comments, and 1 million issue links. We
believe this Jira dataset will lead to many fruitful research projects
investigating issue evolution, issue linking, cross-project analysis, as well
as cross-tool analysis when combined with existing well-studied ITS datasets.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 18:52:36 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jan 2022 16:09:20 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Mar 2022 16:17:18 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Montgomery",
"Lloyd",
""
],
[
"Lüders",
"Clara",
""
],
[
"Maalej",
"Walid",
""
]
] |
new_dataset
| 0.988794 |
2202.02169
|
George Alexandropoulos
|
Konstantinos D. Katsanos and Nir Shlezinger and Mohammadreza F. Imani
and George C. Alexandropoulos
|
Wideband Multi-User MIMO Communications with Frequency Selective RISs:
Element Response Modeling and Sum-Rate Maximization
|
6 pages; 4 figures; to be presented in IEEE ICC 2022
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Reconfigurable Intelligent Surfaces (RISs) are an emerging technology for
future wireless communication systems, enabling improved coverage in an energy
efficient manner. RISs are usually metasurfaces, consisting of
two-dimensional arrangements of metamaterial elements, whose individual
response is commonly modeled in the literature as an adjustable phase shifter.
However, this model holds only for narrowband communications, and when wideband
transmissions are utilized, one has to account for the frequency selectivity of
metamaterials, whose response usually follows a Lorentzian-like profile. In
this paper, we consider the uplink of a wideband RIS-empowered multi-user
Multiple-Input Multiple-Output (MIMO) wireless system with Orthogonal Frequency
Division Multiplexing (OFDM) signaling, while accounting for the frequency
selectivity of RISs. In particular, we focus on designing the controllable
parameters dictating the Lorentzian response of each RIS metamaterial element,
in order to maximize the achievable sum rate. We devise a scheme combining
block coordinate descent with penalty dual decomposition to tackle the
resulting challenging optimization framework. Our simulation results reveal the
achievable rates one can achieve using realistically frequency selective RISs
in wideband settings, and quantify the performance loss that occurs when using
state-of-the-art methods which assume that the RIS elements behave as
frequency-flat phase shifters.
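
For illustration, the snippet below evaluates a Lorentzian-like element response over a frequency band; the exact parametrization and the parameter values are assumptions chosen for the example, not necessarily the model used in the paper.

```python
import numpy as np

def lorentzian_response(f, f0, F, kappa):
    """Illustrative Lorentzian-like reflection profile of a metamaterial element:
        r(f) = F * f**2 / (f0**2 - f**2 + 1j * kappa * f)
    f: frequencies [Hz]; f0: resonance; F: oscillator strength; kappa: damping.
    """
    return F * f**2 / (f0**2 - f**2 + 1j * kappa * f)

f = np.linspace(27e9, 29e9, 5)                 # a 2 GHz band around 28 GHz
r = lorentzian_response(f, f0=28e9, F=1.0, kappa=5e8)
print(np.abs(r))                               # magnitude varies across the band
print(np.angle(r))                             # so does the induced phase shift
```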
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 14:55:27 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2022 06:38:35 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Katsanos",
"Konstantinos D.",
""
],
[
"Shlezinger",
"Nir",
""
],
[
"Imani",
"Mohammadreza F.",
""
],
[
"Alexandropoulos",
"George C.",
""
]
] |
new_dataset
| 0.995266 |
2203.01824
|
Zhi-Gang Jiang
|
Zhigang Jiang, Zhongzheng Xiang, Jinhua Xu, Ming Zhao
|
LGT-Net: Indoor Panoramic Room Layout Estimation with Geometry-Aware
Transformer Network
|
To Appear in CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D room layout estimation by a single panorama using deep neural networks has
made great progress. However, previous approaches cannot obtain efficient
geometry awareness of the room layout from the boundary latitude or
horizon-depth alone. We show that using horizon-depth along with room height
provides omnidirectional geometry awareness of the room layout in both horizontal
and vertical directions. In addition, we propose a planar-geometry aware loss
function with normals and gradients of normals to supervise the planeness of
walls and turning of corners. We propose an efficient network, LGT-Net, for
room layout estimation, which contains a novel Transformer architecture called
SWG-Transformer to model geometry relations. SWG-Transformer consists of
(Shifted) Window Blocks and Global Blocks to combine the local and global
geometry relations. Moreover, we design a novel relative position embedding of
Transformer to enhance the spatial identification ability for the panorama.
Experiments show that the proposed LGT-Net achieves better performance than
current state-of-the-arts (SOTA) on benchmark datasets.
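
A toy sketch of how per-column horizon-depth plus a room height can be lifted to 3D floor and ceiling boundaries, under assumed conventions (camera at the origin, z-axis up, equirectangular longitudes); LGT-Net's actual parametrization, network, and losses are more involved.

```python
import numpy as np

def layout_from_horizon_depth(depth, room_height, cam_height=1.6):
    """Convert per-column horizon-depth and a scalar room height into 3D
    floor/ceiling boundary points (simplified, with assumed conventions).

    depth: (W,) horizontal distance from the camera to the wall at each
           panorama column; room_height, cam_height: same length units.
    """
    w = len(depth)
    lon = np.linspace(-np.pi, np.pi, w, endpoint=False)   # panorama longitudes
    x, y = depth * np.cos(lon), depth * np.sin(lon)
    floor = np.stack([x, y, np.full(w, -cam_height)], axis=1)
    ceiling = np.stack([x, y, np.full(w, room_height - cam_height)], axis=1)
    return floor, ceiling

floor, ceiling = layout_from_horizon_depth(np.full(256, 3.0), room_height=2.8)
print(floor.shape, ceiling.shape)   # (256, 3) (256, 3)
```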
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 16:28:10 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2022 16:14:33 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Jiang",
"Zhigang",
""
],
[
"Xiang",
"Zhongzheng",
""
],
[
"Xu",
"Jinhua",
""
],
[
"Zhao",
"Ming",
""
]
] |
new_dataset
| 0.983958 |
2203.04132
|
Tim Salzmann
|
Tim Salzmann, Marco Pavone, Markus Ryll
|
Motron: Multimodal Probabilistic Human Motion Forecasting
|
CVPR 2022
| null | null | null |
cs.CV cs.HC cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous systems and humans are increasingly sharing the same space. Robots
work side by side or even hand in hand with humans to balance each other's
limitations. Such cooperative interactions are ever more sophisticated. Thus,
the ability to reason not just about a human's center of gravity position, but
also its granular motion is an important prerequisite for human-robot
interaction. However, many algorithms ignore the multimodal nature of humans or
neglect uncertainty in their motion forecasts. We present Motron, a multimodal,
probabilistic, graph-structured model that captures human multimodality
using probabilistic methods while being able to output deterministic
maximum-likelihood motions and corresponding confidence values for each mode.
Our model aims to be tightly integrated with the robotic
planning-control-interaction loop; outputting physically feasible human motions
and being computationally efficient. We demonstrate the performance of our
model on several challenging real-world motion forecasting datasets,
outperforming a wide array of generative/variational methods while providing
state-of-the-art single-output motions if required, both while using significantly
less computational power than state-of-the-art algorithms.
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2022 14:58:41 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Mar 2022 14:40:26 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Mar 2022 08:33:57 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Salzmann",
"Tim",
""
],
[
"Pavone",
"Marco",
""
],
[
"Ryll",
"Markus",
""
]
] |
new_dataset
| 0.981265 |
2203.05181
|
Huaming Chen
|
David Hin, Andrey Kan, Huaming Chen, M. Ali Babar
|
LineVD: Statement-level Vulnerability Detection using Graph Neural
Networks
|
Accepted in the 19th International Conference on Mining Software
Repositories Technical Papers
| null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Current machine-learning based software vulnerability detection methods are
primarily conducted at the function-level. However, a key limitation of these
methods is that they do not indicate the specific lines of code contributing to
vulnerabilities. This limits the ability of developers to efficiently inspect
and interpret the predictions from a learnt model, which is crucial for
integrating machine-learning based tools into the software development
workflow. Graph-based models have shown promising performance in function-level
vulnerability detection, but their capability for statement-level vulnerability
detection has not been extensively explored. While interpreting function-level
predictions through explainable AI is one promising direction, we herein
consider the statement-level software vulnerability detection task from a fully
supervised learning perspective. We propose a novel deep learning framework,
LineVD, which formulates statement-level vulnerability detection as a node
classification task. LineVD leverages control and data dependencies between
statements using graph neural networks, and a transformer-based model to encode
the raw source code tokens. In particular, by addressing the conflicting
outputs between function-level and statement-level information, LineVD
significantly improves the prediction performance without requiring the
vulnerability status of the function code. We have conducted extensive experiments against a
large-scale collection of real-world C/C++ vulnerabilities obtained from
multiple real-world projects, and demonstrate an increase of 105\% in F1-score
over the current state-of-the-art.
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 06:24:15 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2022 04:28:37 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Hin",
"David",
""
],
[
"Kan",
"Andrey",
""
],
[
"Chen",
"Huaming",
""
],
[
"Babar",
"M. Ali",
""
]
] |
new_dataset
| 0.99143 |
2203.13055
|
Li Siyao
|
Li Siyao, Weijiang Yu, Tianpei Gu, Chunze Lin, Quan Wang, Chen Qian,
Chen Change Loy, Ziwei Liu
|
Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic
Memory
|
Accepted by CVPR 2022. Code and video link:
https://github.com/lisiyao21/Bailando/
| null | null | null |
cs.SD cs.CV eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Driving 3D characters to dance following a piece of music is highly
challenging due to the spatial constraints applied to poses by choreography
norms. In addition, the generated dance sequence also needs to maintain
temporal coherency with different music genres. To tackle these challenges, we
propose a novel music-to-dance framework, Bailando, with two powerful
components: 1) a choreographic memory that learns to summarize meaningful
dancing units from 3D pose sequence to a quantized codebook, 2) an actor-critic
Generative Pre-trained Transformer (GPT) that composes these units to a fluent
dance coherent to the music. With the learned choreographic memory, dance
generation is realized on the quantized units that meet high choreography
standards, such that the generated dancing sequences are confined within the
spatial constraints. To achieve synchronized alignment between diverse motion
tempos and music beats, we introduce an actor-critic-based reinforcement
learning scheme to the GPT with a newly-designed beat-align reward function.
Extensive experiments on the standard benchmark demonstrate that our proposed
framework achieves state-of-the-art performance both qualitatively and
quantitatively. Notably, the learned choreographic memory is shown to discover
human-interpretable dancing-style poses in an unsupervised manner.
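
The core of a "choreographic memory" is vector quantization: continuous pose features are snapped to the nearest learned dancing unit in a codebook. The sketch below shows generic nearest-neighbour VQ with made-up shapes, not the paper's training objective, GPT component, or reward design.

```python
import numpy as np

def quantize_poses(features, codebook):
    """Nearest-neighbour vector quantization of pose features.

    features: (T, D) continuous pose features; codebook: (K, D) dancing units.
    Returns the code index per frame and the quantized features.
    """
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    codes = d2.argmin(axis=1)                                          # best unit per frame
    return codes, codebook[codes]

feats = np.random.randn(8, 16)      # 8 frames of 16-dim pose features
book = np.random.randn(32, 16)      # 32 learned dancing units
codes, quantized = quantize_poses(feats, book)
print(codes)                        # the discrete "dance word" sequence
```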
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 13:06:43 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2022 03:07:26 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Siyao",
"Li",
""
],
[
"Yu",
"Weijiang",
""
],
[
"Gu",
"Tianpei",
""
],
[
"Lin",
"Chunze",
""
],
[
"Wang",
"Quan",
""
],
[
"Qian",
"Chen",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.975717 |
2203.13291
|
Bowen Shi
|
Bowen Shi and Diane Brentari and Greg Shakhnarovich and Karen Livescu
|
Searching for fingerspelled content in American Sign Language
|
ACL 2022
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language processing for sign language video - including tasks like
recognition, translation, and search - is crucial for making artificial
intelligence technologies accessible to deaf individuals, and is gaining
research interest in recent years. In this paper, we address the problem of
searching for fingerspelled keywords or key phrases in raw sign language
videos. This is an important task since significant content in sign language is
often conveyed via fingerspelling, and to our knowledge the task has not been
studied before. We propose an end-to-end model for this task, FSS-Net, that
jointly detects fingerspelling and matches it to a text sequence. Our
experiments, done on a large public dataset of ASL fingerspelling in the wild,
show the importance of fingerspelling detection as a component of a search and
retrieval model. Our model significantly outperforms baseline methods adapted
from prior work on related tasks.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 18:36:22 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Shi",
"Bowen",
""
],
[
"Brentari",
"Diane",
""
],
[
"Shakhnarovich",
"Greg",
""
],
[
"Livescu",
"Karen",
""
]
] |
new_dataset
| 0.997102 |
2203.13312
|
Xuanye Zhang
|
Chenming Zhu, Xuanye Zhang, Yanran Li, Liangdong Qiu, Kai Han,
Xiaoguang Han
|
SharpContour: A Contour-based Boundary Refinement Approach for Efficient
and Accurate Instance Segmentation
|
10pages, 5 figures, accepted by CVPR 2022, project page: see this
https://xyzhang17.github.io/SharpContour/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Excellent performance has been achieved on instance segmentation but the
quality on the boundary area remains unsatisfactory, which has led to rising
attention on boundary refinement. For practical use, an ideal post-processing
refinement scheme should be accurate, generic, and efficient. However,
most existing approaches perform pixel-wise refinement, which either
introduces a massive computation cost or is designed specifically for particular
backbone models. Contour-based models are efficient and generic enough to be
incorporated with any existing segmentation method, but they often generate
over-smoothed contour and tend to fail on corner areas. In this paper, we
propose an efficient contour-based boundary refinement approach, named
SharpContour, to tackle the segmentation of boundary area. We design a novel
contour evolution process together with an Instance-aware Point Classifier. Our
method deforms the contour iteratively by updating offsets in a discrete
manner. Differing from existing contour evolution methods, SharpContour
estimates each offset more independently so that it predicts much sharper and
accurate contours. Notably, our method is generic to seamlessly work with
diverse existing models with a small computational cost. Experiments show that
SharpContour achieves competitive gains whilst preserving high efficiency.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 19:37:20 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Zhu",
"Chenming",
""
],
[
"Zhang",
"Xuanye",
""
],
[
"Li",
"Yanran",
""
],
[
"Qiu",
"Liangdong",
""
],
[
"Han",
"Kai",
""
],
[
"Han",
"Xiaoguang",
""
]
] |
new_dataset
| 0.995879 |
2203.13387
|
Mohammed Hassanin
|
Mohammed Hassanin, Abdelwahed Khamiss, Mohammed Bennamoun, Farid
Boussaid, and Ibrahim Radwan
|
CrossFormer: Cross Spatio-Temporal Transformer for 3D Human Pose
Estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D human pose estimation can be handled by encoding the geometric
dependencies between the body parts and enforcing the kinematic constraints.
Recently, Transformer has been adopted to encode the long-range dependencies
between the joints in the spatial and temporal domains. While Transformers have shown
excellence at capturing long-range dependencies, studies have noted the need for
improving the locality of vision Transformers. In this direction, we propose a
novel pose estimation Transformer featuring rich representations of body joints
critical for capturing subtle changes across frames (i.e., inter-feature
representation). Specifically, through two novel interaction modules;
Cross-Joint Interaction and Cross-Frame Interaction, the model explicitly
encodes the local and global dependencies between the body joints. The proposed
architecture achieved state-of-the-art performance on two popular 3D human pose
estimation datasets, Human3.6M and MPI-INF-3DHP. In particular, our proposed
CrossFormer method boosts performance by 0.9% and 0.3%, compared to the closest
counterpart, PoseFormer, using the detected 2D poses and ground-truth settings
respectively.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 23:40:11 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Hassanin",
"Mohammed",
""
],
[
"Khamiss",
"Abdelwahed",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Boussaid",
"Farid",
""
],
[
"Radwan",
"Ibrahim",
""
]
] |
new_dataset
| 0.989935 |
2203.13394
|
Yujing Xue
|
Yujing Xue, Jiageng Mao, Minzhe Niu, Hang Xu, Michael Bi Mi, Wei
Zhang, Xiaogang Wang, Xinchao Wang
|
Point2Seq: Detecting 3D Objects as Sequences
|
To appear in CVPR2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a simple and effective framework, named Point2Seq, for 3D object
detection from point clouds. In contrast to previous methods that normally
{predict attributes of 3D objects all at once}, we expressively model the
interdependencies between attributes of 3D objects, which in turn enables a
better detection accuracy. Specifically, we view each 3D object as a sequence
of words and reformulate the 3D object detection task as decoding words from 3D
scenes in an auto-regressive manner. We further propose a lightweight
scene-to-sequence decoder that can auto-regressively generate words conditioned
on features from a 3D scene as well as cues from the preceding words. The
predicted words eventually constitute a set of sequences that completely
describe the 3D objects in the scene, and all the predicted sequences are then
automatically assigned to the respective ground truths through similarity-based
sequence matching. Our approach is conceptually intuitive and can be readily
plugged upon most existing 3D-detection backbones without adding too much
computational overhead; the sequential decoding paradigm we proposed, on the
other hand, can better exploit information from complex 3D scenes with the aid
of preceding predicted words. Without bells and whistles, our method
significantly outperforms previous anchor- and center-based 3D object detection
frameworks, yielding the new state of the art on the challenging ONCE dataset
as well as the Waymo Open Dataset. Code is available at
\url{https://github.com/ocNflag/point2seq}.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 00:20:31 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Xue",
"Yujing",
""
],
[
"Mao",
"Jiageng",
""
],
[
"Niu",
"Minzhe",
""
],
[
"Xu",
"Hang",
""
],
[
"Mi",
"Michael Bi",
""
],
[
"Zhang",
"Wei",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Wang",
"Xinchao",
""
]
] |
new_dataset
| 0.999689 |
2203.13435
|
Yuni Iwamasa
|
Takehiro Ito, Yuni Iwamasa, Yasuaki Kobayashi, Yu Nakahata, Yota
Otachi, Masahiro Takahashi, Kunihiro Wasa
|
Independent set reconfiguration on directed graphs
|
19 pages
| null | null | null |
cs.DS cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
\textsc{Directed Token Sliding} asks, given a directed graph and two sets of
pairwise nonadjacent vertices, whether one can reach from one set to the other
by repeatedly applying a local operation that exchanges a vertex in the current
set with one of its out-neighbors, while keeping the nonadjacency. It can be
seen as a reconfiguration process where a token is placed on each vertex in the
current set, and the local operation slides a token along an arc respecting its
direction. Previously, such a problem was extensively studied on undirected
graphs, where the edges have no directions and thus the local operation is
symmetric. \textsc{Directed Token Sliding} is a generalization of its
undirected variant since an undirected edge can be simulated by two arcs of
opposite directions.
In this paper, we initiate the algorithmic study of \textsc{Directed Token
Sliding}. We first observe that the problem is PSPACE-complete even if we
forbid parallel arcs in opposite directions and that the problem on directed
acyclic graphs is NP-complete and W[1]-hard parameterized by the size of the
sets in consideration. We then show our main result: a linear-time algorithm
for the problem on directed graphs whose underlying undirected graphs are
trees, which are called polytrees. Such a result is also known for the
undirected variant of the problem on trees~[Demaine et al.~TCS 2015], but the
techniques used here are quite different because of the asymmetric nature of
the directed problem. We present a characterization of yes-instances based on
the existence of a certain set of directed paths, and then derive simple
equivalent conditions from it by some observations, which admits an efficient
algorithm. For the polytree case, we also present a quadratic-time algorithm
that outputs, if the input is a yes-instance, one of the shortest
reconfiguration sequences.
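
A small sketch of checking a single Directed Token Sliding move, assuming adjacency is taken in the underlying undirected sense (an arc in either direction makes two vertices adjacent); this checks one move only, not the full reconfiguration algorithm on polytrees.

```python
def valid_slide(arcs, tokens, u, v):
    """Is sliding the token on u along the arc (u, v) a legal move?

    arcs: set of directed arcs (a, b); tokens: set of vertices holding tokens.
    Legal iff the arc exists, u holds a token, v is empty, and v is
    nonadjacent to every other token (keeping the set independent).
    """
    if (u, v) not in arcs or u not in tokens or v in tokens:
        return False
    others = tokens - {u}
    return all((v, w) not in arcs and (w, v) not in arcs for w in others)

arcs = {(1, 2), (2, 3), (3, 4), (4, 1), (3, 5)}
tokens = {1, 3}
print(valid_slide(arcs, tokens, 1, 2))   # False: 2 is adjacent to token 3
print(valid_slide(arcs, tokens, 3, 5))   # True: 5 is nonadjacent to token 1
```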
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 03:51:18 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Ito",
"Takehiro",
""
],
[
"Iwamasa",
"Yuni",
""
],
[
"Kobayashi",
"Yasuaki",
""
],
[
"Nakahata",
"Yu",
""
],
[
"Otachi",
"Yota",
""
],
[
"Takahashi",
"Masahiro",
""
],
[
"Wasa",
"Kunihiro",
""
]
] |
new_dataset
| 0.998854 |
2203.13437
|
Jiachen Li
|
Jiachen Li, Bin Wang, Shiqiang Zhu, Xin Cao, Fan Zhong, Wenxuan Chen,
Te Li, Jason Gu, Xueying Qin
|
BCOT: A Markerless High-Precision 3D Object Tracking Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Template-based 3D object tracking still lacks a high-precision benchmark of
real scenes due to the difficulty of annotating the accurate 3D poses of real
moving video objects without using markers. In this paper, we present a
multi-view approach to estimate the accurate 3D poses of real moving objects,
and then use binocular data to construct a new benchmark for monocular
textureless 3D object tracking. The proposed method requires no markers, and
the cameras only need to be synchronous, relatively fixed as cross-view and
calibrated. Based on our object-centered model, we jointly optimize the object
pose by minimizing shape re-projection constraints in all views, which greatly
improves the accuracy compared with the single-view approach, and is even more
accurate than the depth-based method. Our new benchmark dataset contains 20
textureless objects, 22 scenes, 404 video sequences and 126K images captured in
real scenes. The annotation error is guaranteed to be less than 2mm, according
to both theoretical analysis and validation experiments. We re-evaluate the
state-of-the-art 3D object tracking methods with our dataset, reporting their
performance ranking in real scenes. Our BCOT benchmark and code can be found at
https://ar3dv.github.io/BCOT-Benchmark/.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 03:55:03 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Li",
"Jiachen",
""
],
[
"Wang",
"Bin",
""
],
[
"Zhu",
"Shiqiang",
""
],
[
"Cao",
"Xin",
""
],
[
"Zhong",
"Fan",
""
],
[
"Chen",
"Wenxuan",
""
],
[
"Li",
"Te",
""
],
[
"Gu",
"Jason",
""
],
[
"Qin",
"Xueying",
""
]
] |
new_dataset
| 0.999711 |
2203.13472
|
Jun-Hwa Kim
|
Jun-Hwa Kim, Namho Kim, Chee Sun Won
|
Facial Expression Recognition with Swin Transformer
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of recognizing human facial expressions plays a vital role in
various human-related systems, including health care and medical fields. With
the recent success of deep learning and the accessibility of a large amount of
annotated data, facial expression recognition research has become mature enough
to be utilized in real-world scenarios with audio-visual datasets. In this
paper, we introduce a Swin transformer-based facial expression recognition approach for
the in-the-wild audio-visual Aff-Wild2 Expression dataset.
Specifically, we employ a three-stream network (i.e., Visual stream, Temporal
stream, and Audio stream) for the audio-visual videos to fuse the multi-modal
information into facial expression recognition. Experimental results on the
Aff-Wild2 dataset show the effectiveness of our proposed multi-modal
approaches.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 06:42:31 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Kim",
"Jun-Hwa",
""
],
[
"Kim",
"Namho",
""
],
[
"Won",
"Chee Sun",
""
]
] |
new_dataset
| 0.998647 |
2203.13478
|
George Alexandropoulos
|
George C. Alexandropoulos and Maurizio Crozzoli and Dinh-Thuy Phan-Huy
and Konstantinos D. Katsanos and Henk Wymeersch and Petar Popovski and
Philippe Ratajczak and Yohann B\'en\'edic and Marie-Helene Hamon and
Sebastien Herraiz Gonzalez and Raffaele D'Errico and Emilio Calvanese
Strinati
|
Smart Wireless Environments Enabled by RISs: Deployment Scenarios and
Two Key Challenges
|
6 pages, 6 figures, international conference
| null | null | null |
cs.IT cs.NI math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Reconfigurable Intelligent Surfaces (RISs) constitute the enabler for
programmable propagation of electromagnetic signals, and are lately being
considered as a candidate physical-layer technology for the demanding
connectivity, reliability, localization, and sustainability requirements of
next generation wireless communications networks. In this paper, we present
various deployment scenarios for RIS-enabled smart wireless environments that
have been recently designed by the ongoing EU H2020 RISE-6G project. The
scenarios are taxonomized according to performance objectives, in particular,
connectivity and reliability, localization and sensing, as well as
sustainability and secrecy. We identify various deployment strategies and
sketch the core architectural requirements in terms of RIS control and
signaling, depending on the RIS hardware architectures and their respective
capabilities. Furthermore, we introduce and discuss, via preliminary simulation
results and reflectarray measurements, two key novel challenges with
RIS-enabled smart wireless environments, namely, the area of influence and the
bandwidth of influence of RISs, which corroborate the need for careful
deployment and planning of this new technology.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 06:59:24 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Alexandropoulos",
"George C.",
""
],
[
"Crozzoli",
"Maurizio",
""
],
[
"Phan-Huy",
"Dinh-Thuy",
""
],
[
"Katsanos",
"Konstantinos D.",
""
],
[
"Wymeersch",
"Henk",
""
],
[
"Popovski",
"Petar",
""
],
[
"Ratajczak",
"Philippe",
""
],
[
"Bénédic",
"Yohann",
""
],
[
"Hamon",
"Marie-Helene",
""
],
[
"Gonzalez",
"Sebastien Herraiz",
""
],
[
"D'Errico",
"Raffaele",
""
],
[
"Strinati",
"Emilio Calvanese",
""
]
] |
new_dataset
| 0.998987 |
2203.13497
|
Yunjie Ge
|
Yunjie Ge, Qian Wang, Jingfeng Zhang, Juntao Zhou, Yunzhu Zhang, and
Chao Shen
|
WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice
| null | null | null | null |
cs.SD cs.CR eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
People are not always receptive to their voice data being collected and
misused. Training audio intelligence systems needs these data to build
useful features, but the cost of obtaining permissions or purchasing data is
very high, which inevitably encourages hackers to collect these voice data
without people's awareness. To discourage hackers from proactively
collecting people's voice data, we are the first to propose a clean-label
poisoning attack, called WaveFuzz, which can prevent audio intelligence models
from building useful features from protected (poisoned) voice data while still
preserving the semantic information for humans. Specifically, WaveFuzz
perturbs the voice data to cause Mel Frequency Cepstral Coefficients (MFCC)
(typical representations of audio signals) to generate the poisoned frequency
features. These poisoned features are then fed to audio prediction models,
which degrades the performance of audio intelligence systems. Empirically, we
show the efficacy of WaveFuzz by attacking two representative types of
intelligent audio systems, i.e., speaker recognition system (SR) and speech
command recognition system (SCR). For example, model accuracies decline by
$19.78\%$ when only $10\%$ of the fine-tuning voice data is poisoned, and by
$6.07\%$ when only
$10\%$ of the training voice data is poisoned. Consequently, WaveFuzz is an
effective technique that enables people to fight back to protect their own
voice data, which sheds new light on ameliorating privacy issues.
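
To see the effect WaveFuzz targets, one can measure how much a waveform perturbation distorts the MFCC features that downstream models consume. The sketch below assumes librosa is available and uses a synthetic tone as a stand-in for voice data; it only measures MFCC distortion and is not the poisoning optimization itself.

```python
import numpy as np
import librosa

def mfcc_distortion(wave, perturbed, sr=16000, n_mfcc=13):
    """Relative change in MFCC features caused by a waveform perturbation."""
    m_clean = librosa.feature.mfcc(y=wave, sr=sr, n_mfcc=n_mfcc)
    m_pert = librosa.feature.mfcc(y=perturbed, sr=sr, n_mfcc=n_mfcc)
    return float(np.linalg.norm(m_pert - m_clean) / np.linalg.norm(m_clean))

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t).astype(np.float32)   # stand-in signal
noise = 0.01 * np.random.randn(sr).astype(np.float32)          # small perturbation
print(mfcc_distortion(voice, voice + noise))
```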
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 08:14:37 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Ge",
"Yunjie",
""
],
[
"Wang",
"Qian",
""
],
[
"Zhang",
"Jingfeng",
""
],
[
"Zhou",
"Juntao",
""
],
[
"Zhang",
"Yunzhu",
""
],
[
"Shen",
"Chao",
""
]
] |
new_dataset
| 0.994568 |
2203.13501
|
Eito Sato
|
Eito Sato, Hailong Liu, Norimitsu Sakagami and Takahiro Wada
|
Cooperative Path-following Control of Remotely Operated Underwater
Robots for Human Visual Inspection Task
|
8 pages, 11figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remotely operated vehicles (ROVs) have drawn much attention to underwater
tasks, such as the inspection and maintenance of infrastructure. The workload
of ROV operators tends to be high, even for the skilled ones. Therefore,
assistance methods for the operators are desired. This study focuses on a task
in which a human operator controls an underwater robot to follow a certain path
while visually inspecting objects in the vicinity of the path. In such a task,
it is desirable to let the operator control the speed along the trajectory manually,
because the visual inspection is performed by a human. However, to free up
resources for visual inspection, it is desirable to minimize the workload of
path-following by assisting it with automatic control. Therefore, the
objective of this study was to develop a cooperative path-following control
method that achieves the above-mentioned task by expanding a robust
path-following control law of nonholonomic wheeled vehicles. To simplify this
problem, we considered a path-following and visual objects recognition task in
a two-dimensional plane. We conducted an experiment with participants (n=16)
who completed the task using the proposed method and manual control. The
results showed that both the path-following errors and the workload of the
participants were significantly smaller with the proposed method than with
manual control. In addition, subjective responses demonstrated that operator
attention tended to be allocated to object recognition rather than robot
operation tasks with the proposed method. These results indicate the
effectiveness of the proposed cooperative path-following control method.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 08:31:05 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Sato",
"Eito",
""
],
[
"Liu",
"Hailong",
""
],
[
"Sakagami",
"Norimitsu",
""
],
[
"Wada",
"Takahiro",
""
]
] |
new_dataset
| 0.987977 |
2203.13504
|
Zaijing Li
|
Zaijing Li, Fengxiao Tang, Ming Zhao, Yusen Zhu
|
EmoCaps: Emotion Capsule based Model for Conversational Emotion
Recognition
|
9 pages, 5 figures, accepted by Finding of ACL 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emotion recognition in conversation (ERC) aims to analyze the speaker's state
and identify their emotion in the conversation. Recent works in ERC focus on
context modeling but ignore the representation of contextual emotional
tendency. In order to extract multi-modal information and the emotional
tendency of the utterance effectively, we propose a new structure named
Emoformer to extract multi-modal emotion vectors from different modalities and
fuse them with the sentence vector to form an emotion capsule. Furthermore, we design
an end-to-end ERC model called EmoCaps, which extracts emotion vectors through
the Emoformer structure and obtains the emotion classification results from a
context analysis model. In experiments on two benchmark datasets,
our model shows better performance than the existing state-of-the-art models.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 08:42:57 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Li",
"Zaijing",
""
],
[
"Tang",
"Fengxiao",
""
],
[
"Zhao",
"Ming",
""
],
[
"Zhu",
"Yusen",
""
]
] |
new_dataset
| 0.997004 |
2203.13592
|
Haoran Xie
|
Hange Wang, Haoran Xie, Kazunori Miyata
|
ILoveEye: Eyeliner Makeup Guidance System with Eye Shape Features
|
17 pages, 13 figures. Accepted in proceedings of HCII 2022
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Drawing eyeliner is not an easy task for those who lack experience in eye makeup.
Everyone has a unique pair of eyes, so they need to draw eyeliner in a style
that suits their eyes. We proposed ILoveEye, an interactive system that
supports eye-makeup novices to draw natural and suitable eyeliner. The proposed
system analyzes the shape of the user's eyes and classifies the eye types from
the camera frame. The system can recommend an eyeliner style to the user based on
the designed recommendation rules. Then, the system can generate the original
patterns corresponding to the eyeliner style, and the user can draw the
eyeliner while observing the real-time makeup guidance. User evaluation
experiments were conducted to verify that the proposed ILoveEye system can help
some users draw reasonable eyeliner based on their specific eye shapes and
improve their eye makeup skills.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 11:42:55 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Wang",
"Hange",
""
],
[
"Xie",
"Haoran",
""
],
[
"Miyata",
"Kazunori",
""
]
] |
new_dataset
| 0.995473 |
2203.13608
|
Xiaoqing Ye
|
Xiaoqing Ye, Mao Shu, Hanyu Li, Yifeng Shi, Yingying Li, Guangjie
Wang, Xiao Tan, Errui Ding
|
Rope3D: The Roadside Perception Dataset for Autonomous Driving and
Monocular 3D Object Detection Task
|
To appear in CVPR2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Concurrent perception datasets for autonomous driving are mainly limited to
frontal view with sensors mounted on the vehicle. None of them is designed for
the overlooked roadside perception tasks. On the other hand, the data captured
from roadside cameras have strengths over frontal-view data, which is believed
to facilitate a safer and more intelligent autonomous driving system. To
accelerate the progress of roadside perception, we present the first
high-diversity, challenging Roadside Perception 3D dataset, Rope3D, from a novel
view. The dataset consists of 50k images and over 1.5M 3D objects in various
scenes, which are captured under different settings including various cameras
with ambiguous mounting positions, camera specifications, viewpoints, and
different environmental conditions. We conduct strict 2D-3D joint annotation
and comprehensive data analysis, as well as set up a new 3D roadside perception
benchmark with metrics and evaluation devkit. Furthermore, we tailor the
existing frontal-view monocular 3D object detection approaches and propose to
leverage the geometry constraint to solve the inherent ambiguities caused by
various sensors and viewpoints. Our dataset is available at
https://thudair.baai.ac.cn/rope.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 12:13:23 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Ye",
"Xiaoqing",
""
],
[
"Shu",
"Mao",
""
],
[
"Li",
"Hanyu",
""
],
[
"Shi",
"Yifeng",
""
],
[
"Li",
"Yingying",
""
],
[
"Wang",
"Guangjie",
""
],
[
"Tan",
"Xiao",
""
],
[
"Ding",
"Errui",
""
]
] |
new_dataset
| 0.999888 |
2203.13652
|
Angus Dempster
|
Angus Dempster, Daniel F. Schmidt, Geoffrey I. Webb
|
HYDRA: Competing convolutional kernels for fast and accurate time series
classification
|
27 pages, 18 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate a simple connection between dictionary methods for time series
classification, which involve extracting and counting symbolic patterns in time
series, and methods based on transforming input time series using convolutional
kernels, namely ROCKET and its variants. We show that by adjusting a single
hyperparameter it is possible to move by degrees between models resembling
dictionary methods and models resembling ROCKET. We present HYDRA, a simple,
fast, and accurate dictionary method for time series classification using
competing convolutional kernels, combining key aspects of both ROCKET and
conventional dictionary methods. HYDRA is faster and more accurate than the
most accurate existing dictionary methods, and can be combined with ROCKET and
its variants to further improve the accuracy of these methods.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 13:58:10 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Dempster",
"Angus",
""
],
[
"Schmidt",
"Daniel F.",
""
],
[
"Webb",
"Geoffrey I.",
""
]
] |
new_dataset
| 0.960666 |
2203.13691
|
Michael Alexander Beck
|
Michael A. Beck, Christopher P. Bidinosti, Christopher J. Henry,
Manisha Ajmani
|
The TerraByte Client: providing access to terabytes of plant data
| null | null | null | null |
cs.CV cs.AI cs.DB cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we demonstrate the TerraByte Client, a software tool to download
user-defined plant datasets from a data portal hosted at Compute Canada. To
that end the client offers two key functionalities: (1) It allows the user to
get an overview of what data is available and a quick way to visually check
samples of that data. For this, the client receives the results of queries to a
database and displays the number of images that fulfill the search criteria.
Furthermore, a sample can be downloaded within seconds to confirm that the data
suits the user's needs. (2) The user can then download the specified data to
their own drive. This data is prepared into chunks server-side and sent to the
user's end-system, where it is automatically extracted into individual files.
The first chunks of data are available for inspection after a brief waiting
period of a minute or less depending on available bandwidth and type of data.
The TerraByte Client has a full graphical user interface for easy usage and
uses end-to-end encryption. The user interface is built on top of a low-level
client. This architecture in combination of offering the client program
open-source makes it possible for the user to develop their own user interface
or use the client's functionality directly. An example for direct usage could
be to download specific data on demand within a larger application, such as
training machine learning models.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 14:55:25 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Beck",
"Michael A.",
""
],
[
"Bidinosti",
"Christopher P.",
""
],
[
"Henry",
"Christopher J.",
""
],
[
"Ajmani",
"Manisha",
""
]
] |
new_dataset
| 0.999815 |
2203.13792
|
Diego Alberto Mercado-Ravell Dr.
|
Hector Tovanche-Picon, Javier Gonzalez-Trejo, Angel Flores-Abad and
Diego Mercado-Ravell
|
Visual-based Safe Landing for UAVs in Populated Areas: Real-time
Validation in Virtual Environments
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Safe autonomous landing for Unmanned Aerial Vehicles (UAVs) in populated
areas is a crucial aspect for successful urban deployment, particularly in
emergency landing situations. Nonetheless, validating autonomous landing in
real scenarios is a challenging task involving a high risk of injuring people.
In this work, we propose a framework for real-time safe and thorough evaluation
of vision-based autonomous landing in populated scenarios, using
photo-realistic virtual environments. We propose to use the Unreal graphics
engine coupled with the AirSim plugin for drone simulation, and evaluate
autonomous landing strategies based on visual detection of Safe Landing Zones
(SLZ) in populated scenarios. Then, we study two different criteria for
selecting the "best" SLZ, and evaluate them during autonomous landing of a
virtual drone in different scenarios and conditions, under different
distributions of people in urban scenes, including moving people. We evaluate
different metrics to quantify the performance of the landing strategies,
establishing a baseline for comparison with future works in this challenging
task, and analyze them over a large number of randomized iterations.
The study suggests that the use of autonomous landing algorithms
considerably helps to prevent accidents involving humans, which may help to
unleash the full potential of drones in urban environments close to people.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 17:22:24 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Tovanche-Picon",
"Hector",
""
],
[
"Gonzalez-Trejo",
"Javier",
""
],
[
"Flores-Abad",
"Angel",
""
],
[
"Mercado-Ravell",
"Diego",
""
]
] |
new_dataset
| 0.997272 |
2203.13803
|
Abhishek Kulkarni
|
Abhishek Ninad Kulkarni and Jie Fu
|
Opportunistic Qualitative Planning in Stochastic Systems with
Preferences over Temporal Logic Objectives
|
6 pages, 3 figure, submitted to IEEE L-CSS
| null | null | null |
cs.FL cs.GT cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Preferences play a key role in determining what goals/constraints to satisfy
when not all constraints can be satisfied simultaneously. In this work, we
study preference-based planning in a stochastic system modeled as a Markov
decision process, subject to a possibly incomplete preference over temporally
extended goals. Our contributions are threefold: First, we introduce a
preference language to specify preferences over temporally extended goals.
Second, we define a novel automata-theoretic model to represent the preorder
induced by a given preference relation. The automata representation of
preferences enables us to develop a preference-based planning algorithm for
stochastic systems. Finally, we show how to synthesize opportunistic strategies
that achieve an outcome that improves upon the current satisfiable outcome,
with positive probability or with probability one, in a stochastic system. We
illustrate our solution approaches using a robot motion planning example.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 17:51:02 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Kulkarni",
"Abhishek Ninad",
""
],
[
"Fu",
"Jie",
""
]
] |
new_dataset
| 0.958667 |
2203.13809
|
Simon Jones
|
Simon Jones, Emma Milner, Mahesh Sooriyabandara, Sabine Hauert
|
DOTS: An Open Testbed for Industrial Swarm Robotic Solutions
|
16 pages, 17 figures, for associated video, see
https://drive.google.com/file/d/1EuA8PS1qpqK6LIfPwCNXtQ3hHNWPDvtN/view?usp=sharing
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present DOTS, a new open access testbed for industrial swarm robotics
experimentation. It consists of 20 fast agile robots with high sensing and
computational performance, and real-world payload capability. They are housed
in an arena equipped with private 5G, motion capture, multiple cameras, and
openly accessible via an online portal. We reduce barriers to entry by
providing a complete platform-agnostic pipeline to develop, simulate, and
deploy experimental applications to the swarm. We showcase the testbed
capabilities with a swarm logistics application, autonomously and reliably
searching for and retrieving multiple cargo carriers.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 17:52:49 GMT"
}
] | 2022-03-28T00:00:00 |
[
[
"Jones",
"Simon",
""
],
[
"Milner",
"Emma",
""
],
[
"Sooriyabandara",
"Mahesh",
""
],
[
"Hauert",
"Sabine",
""
]
] |
new_dataset
| 0.999875 |
1704.03065
|
Luca De Nardis
|
Luca De Nardis and Maria-Gabriella Di Benedetto
|
Mo3: a Modular Mobility Model for future generation mobile wireless
networks
|
33 pages, 32 figures. Accepted for publication on IEEE Access
| null |
10.1109/ACCESS.2022.3161541
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Mobility modeling in 5G and beyond 5G must address typical features such as
time-varying correlation between mobility patterns of different nodes, and
their variation ranging from macro-mobility (kilometer range) to micro-mobility
(sub-meter range). Current models have strong limitations in doing so: the
widely used reference-based models, such as the Reference Point Group Mobility
(RPGM), lack flexibility and accuracy, while the more sophisticated rule-based
(i.e. behavioral) models are complex to set up and tune. This paper introduces
a new rule-based Modular Mobility Model, named Mo3, that provides accuracy and
flexibility on par with behavioral models, while preserving the intuitiveness
of the reference-based approach, and is based on five rules: 1) Individual
Mobility, 2) Correlated Mobility, 3) Collision Avoidance, 4) Obstacle Avoidance
and 5) Upper Bounds Enforcement. Mo3 avoids introducing acceleration vectors to
define rules, as behavioral models do, and this significantly reduces
complexity. Rules are mapped one-to-one onto five modules, that can be
independently enabled or replaced. Comparison of time-correlation features
obtained with Mo3 vs. reference-based models, and in particular RPGM, in pure
micro-mobility and mixed macro-mobility / micro-mobility scenarios, shows that
Mo3 and RPGM generate mobility patterns with similar topological properties
(intra-group and inter-group distances), but that Mo3 preserves a spatial
correlation that is lost in RPGM - at no price in terms of complexity - making
it suitable for adoption in 5G and beyond 5G.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2017 21:52:15 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Sep 2018 12:04:27 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Mar 2022 16:15:54 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"De Nardis",
"Luca",
""
],
[
"Di Benedetto",
"Maria-Gabriella",
""
]
] |
new_dataset
| 0.979723 |