id (string) | submitter (string, nullable) | authors (string) | title (string) | comments (string, nullable) | journal-ref (string, nullable) | doi (string, nullable) | report-no (string, nullable) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string) | probability (float64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
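The header above describes the per-row schema. The Python sketch below is only one plausible way to hold such a record and parse its nested fields; the dataclass, helper names, and the assumption that nullable fields arrive as `None` are illustrative, not part of the dataset export itself.

```python
from dataclasses import dataclass
from datetime import datetime
from email.utils import parsedate_to_datetime
from typing import List, Optional

@dataclass
class ArxivRecord:
    """One row of the table above; field names mirror the column header."""
    id: str
    submitter: Optional[str]
    authors: str
    title: str
    comments: Optional[str]
    journal_ref: Optional[str]
    doi: Optional[str]
    report_no: Optional[str]
    categories: str                 # space-separated arXiv categories, e.g. "cs.DM cs.DS"
    license: Optional[str]
    abstract: str
    versions: List[dict]            # [{"version": "v1", "created": "... GMT"}, ...]
    update_date: str                # e.g. "2022-07-22T00:00:00"
    authors_parsed: List[list]      # [[last, first, suffix], ...]
    prediction: str                 # classifier label, here always "new_dataset"
    probability: float              # classifier confidence

def first_submission(record: ArxivRecord) -> datetime:
    """Return the creation time of version v1 (an RFC 2822 date string)."""
    return parsedate_to_datetime(record.versions[0]["created"])

def primary_category(record: ArxivRecord) -> str:
    """First listed arXiv category, e.g. 'cs.DM'."""
    return record.categories.split()[0]
```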
2207.10254
|
Billy Jin
|
Samuel C. Gutekunst, Billy Jin, David P. Williamson
|
The Two-Stripe Symmetric Circulant TSP is in P
|
72 pages, 26 figures. A preliminary version appeared in IPCO 2022
| null | null | null |
cs.DM cs.DS math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The symmetric circulant TSP is a special case of the traveling salesman
problem in which edge costs are symmetric and obey circulant symmetry. Despite
the substantial symmetry of the input, remarkably little is known about the
symmetric circulant TSP, and the complexity of the problem has been an
often-cited open question. Considerable effort has been made to understand the
case in which only edges of two lengths are allowed to have finite cost: the
two-stripe symmetric circulant TSP. In this paper, we resolve the complexity of
the two-stripe symmetric circulant TSP. To do so, we reduce two-stripe
symmetric circulant TSP to the problem of finding certain minimum-cost
Hamiltonian paths on cylindrical graphs. We then solve this Hamiltonian path
problem. Our results show that the two-stripe symmetric circulant TSP is in P.
Note that a two-stripe symmetric circulant TSP instance consists of a constant
number of inputs (including $n$, the number of cities), so that a
polynomial-time algorithm for the decision problem must run in time
polylogarithmic in $n$, and a polynomial-time algorithm for the optimization
problem cannot output the tour. We address this latter difficulty by showing
that the optimal tour must fall into one of two parameterized classes of tours,
and that we can output the class and the parameters in polynomial time. Thus we
make a substantial contribution to the set of polynomial-time solvable special
cases of the TSP, and take an important step towards resolving the complexity
of the general symmetric circulant TSP.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 01:32:19 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Gutekunst",
"Samuel C.",
""
],
[
"Jin",
"Billy",
""
],
[
"Williamson",
"David P.",
""
]
] |
new_dataset
| 0.99768 |
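A brief note on the input-size point made in the abstract above. For concreteness (this encoding is an assumption, since the abstract does not spell it out), suppose an instance is given by five integers: the number of cities $n$, the two stripe lengths, and their two costs $c_1, c_2$. Then

```latex
|\text{input}| \;=\; \Theta\!\left(\log n + \log c_1 + \log c_2\right),
\qquad
\mathrm{poly}\!\left(|\text{input}|\right) \;=\; \mathrm{polylog}(n),
```

while an explicit tour is a permutation of the $n$ cities and already needs $\Omega(n \log n)$ bits, which is why the algorithm outputs a parameterized class of tours rather than the tour itself.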
2207.10297
|
Huy Kang Kim
|
Junho Jang, Ji Young Woo, Huy Kang Kim
|
Action2Score: An Embedding Approach To Score Player Action
|
20 pages, 8 figures, 4 tables; accepted to ACM CHIPLAY 2022, and PACM
on Human-Computer Interaction
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiplayer Online Battle Arena (MOBA) is one of the most successful game
genres. MOBA games such as League of Legends have competitive environments
where players race for their rank. In most MOBA games, a player's rank is
determined by the match result (win or lose). This seems natural given the
nature of team play, but in some sense it is unfair: players who put in a lot
of effort lose rank simply because of a loss, while some players even get a
free ride on their teammates' efforts in the case of a win. To reduce the
side-effects of the team-based ranking system and evaluate a player's
performance impartially, we propose a novel embedding model that converts a
player's actions into quantitative scores based on the actions' respective
contribution to the team's victory. Our model is built using a sequence-based
deep learning model with a novel loss function defined over the team match.
The sequence-based model processes each player's action sequence from the
start of the game to its end using a GRU unit that selectively combines the
hidden state from the previous step with the current input. The loss function
is designed so that the action scores reflect the final score and the success
of the team. We show that our model can evaluate a player's
individual performance fairly and analyze the contributions of the player's
respective actions.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 04:23:14 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Jang",
"Junho",
""
],
[
"Woo",
"Ji Young",
""
],
[
"Kim",
"Huy Kang",
""
]
] |
new_dataset
| 0.993282 |
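The abstract above describes a GRU-based model that turns each in-game action into a score and ties the scores to the match outcome. The PyTorch-style sketch below only illustrates that idea; the module names, the scoring head, and the outcome-consistency loss are assumptions, not the authors' implementation (which also relates scores to the final team score).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActionScorer(nn.Module):
    """Score each action in a player's action sequence (illustrative sketch)."""
    def __init__(self, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.gru = nn.GRU(action_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)      # one scalar score per time step

    def forward(self, actions: torch.Tensor) -> torch.Tensor:
        # actions: (batch, seq_len, action_dim) -> scores: (batch, seq_len)
        hidden_states, _ = self.gru(actions)
        return self.head(hidden_states).squeeze(-1)

def outcome_consistency_loss(scores: torch.Tensor, won: torch.Tensor) -> torch.Tensor:
    """Push a player's summed action scores toward the match result (1 = win)."""
    match_prob = torch.sigmoid(scores.sum(dim=1))      # (batch,)
    return F.binary_cross_entropy(match_prob, won.float())
```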
2207.10315
|
Haoran Zhou
|
Haoran Zhou, Yun Cao, Wenqing Chu, Junwei Zhu, Tong Lu, Ying Tai and
Chengjie Wang
|
SeedFormer: Patch Seeds based Point Cloud Completion with Upsample
Transformer
|
Camera-ready, to be published in ECCV 2022, with supplementary
material
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud completion has become increasingly popular among generation tasks
of 3D point clouds, as it is a challenging yet indispensable problem to recover
the complete shape of a 3D object from its partial observation. In this paper,
we propose SeedFormer, a novel method that improves detail preservation and
recovery in point cloud completion. Unlike previous methods based on a global
feature vector, we introduce a new shape representation, namely Patch Seeds,
which not only captures general structures from partial inputs but also
preserves regional information of local patterns. Then, by integrating seed
features into the generation process, we can recover faithful details for
complete point clouds in a coarse-to-fine manner. Moreover, we devise an
Upsample Transformer by extending the transformer structure into basic
operations of point generators, which effectively incorporates spatial and
semantic relationships between neighboring points. Qualitative and quantitative
evaluations demonstrate that our method outperforms state-of-the-art completion
networks on several benchmark datasets. Our code is available at
https://github.com/hrzhou2/seedformer.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 06:15:59 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Zhou",
"Haoran",
""
],
[
"Cao",
"Yun",
""
],
[
"Chu",
"Wenqing",
""
],
[
"Zhu",
"Junwei",
""
],
[
"Lu",
"Tong",
""
],
[
"Tai",
"Ying",
""
],
[
"Wang",
"Chengjie",
""
]
] |
new_dataset
| 0.975767 |
2207.10353
|
Chintan Patel
|
Chintan Patel
|
Secure Lightweight Authentication for Multi User IoT Environment
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The Internet of Things (IoT) is giving a boost to a plethora of new
opportunities for the robust and sustainable deployment of cyber physical
systems. The cornerstone of any IoT system is the sensing devices. These
sensing devices have considerable resource constraints, including insufficient
battery capacity, CPU capability, and physical security. Because of such
resource constraints, designing lightweight cryptographic protocols presents
an opportunity. Remote user authentication ensures that two parties establish
a secure and durable session key. This study presents a lightweight and secure
authentication strategy for the user-gateway (U-GW) IoT network model. The
proposed system is designed leveraging Elliptic Curve Cryptography (ECC). We
undertake a formal security analysis with both the Automated Validation of
Internet Security Protocols and Applications (AVISPA) tool and
Burrows-Abadi-Needham (BAN) logic, and an information security assessment
under the Dolev-Yao channel model. We use the publish-subscribe based Message
Queuing Telemetry Transport (MQTT) protocol for communication. Additionally,
the performance analysis and comparison of security features show that the
proposed scheme is resilient to well-known cryptographic threats.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 08:15:54 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Patel",
"Chintan",
""
]
] |
new_dataset
| 0.998747 |
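The abstract above proposes an ECC-based scheme for establishing a session key between a user and a gateway. The sketch below only shows the generic building block such schemes rest on (an ECDH exchange followed by key derivation), using recent versions of the `cryptography` package; it is not the paper's protocol, and the curve, info label, and roles are assumptions.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral EC key pair (curve choice is illustrative).
user_key = ec.generate_private_key(ec.SECP256R1())
gateway_key = ec.generate_private_key(ec.SECP256R1())

# ECDH: both sides compute the same shared secret from the peer's public key.
user_shared = user_key.exchange(ec.ECDH(), gateway_key.public_key())
gateway_shared = gateway_key.exchange(ec.ECDH(), user_key.public_key())
assert user_shared == gateway_shared

# Derive a 256-bit session key from the shared secret with HKDF.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"u-gw session key",   # context label is an assumption
).derive(user_shared)
```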
2207.10398
|
Pei Lv
|
Yuzhen Zhang, Wentong Wang, Weizhi Guo, Pei Lv, Mingliang Xu, Wei Chen
and Dinesh Manocha
|
D2-TPred: Discontinuous Dependency for Trajectory Prediction under
Traffic Lights
|
Accepted to ECCV2022, 17 pages, 6 figures. Project page:
https://github.com/VTP-TL/D2-TPred
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A profound understanding of inter-agent relationships and motion behaviors is
important to achieve high-quality planning when navigating in complex
scenarios, especially at urban traffic intersections. We present a trajectory
prediction approach with respect to traffic lights, D2-TPred, which uses a
spatial dynamic interaction graph (SDG) and a behavior dependency graph (BDG)
to handle the problem of discontinuous dependency in the spatial-temporal
space. Specifically, the SDG is used to capture spatial interactions by
reconstructing sub-graphs for different agents with dynamic and changeable
characteristics during each frame. The BDG is used to infer motion tendency by
modeling the implicit dependency of the current state on prior behaviors,
especially the discontinuous motions corresponding to acceleration,
deceleration, or turning direction. Moreover, we present a new dataset for
vehicle trajectory prediction under traffic lights called VTP-TL. Our
experimental results show that our model achieves more than 20.45% and 20.78%
improvement in terms of ADE and FDE, respectively, on VTP-TL as compared to
other trajectory prediction algorithms. The dataset and code are available at:
https://github.com/VTP-TL/D2-TPred.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 10:19:07 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Zhang",
"Yuzhen",
""
],
[
"Wang",
"Wentong",
""
],
[
"Guo",
"Weizhi",
""
],
[
"Lv",
"Pei",
""
],
[
"Xu",
"Mingliang",
""
],
[
"Chen",
"Wei",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.990307 |
2207.10433
|
Jinrong Yang
|
Jinrong Yang, Songtao Liu, Zeming Li, Xiaoping Li, Jian Sun
|
StreamYOLO: Real-time Object Detection for Streaming Perception
|
Extended version of arXiv:2203.12338
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The perception models of autonomous driving require fast inference at low
latency for safety. While existing works ignore the inevitable environmental
changes that occur during processing, streaming perception jointly evaluates
latency and accuracy in a single metric for online video perception, guiding
previous works to search for trade-offs between accuracy and speed. In this
paper, we explore the performance of real-time models on this metric and
endow the models with the capacity to predict the future, significantly
improving the results for streaming perception. Specifically, we build a simple
framework with two effective modules. One is a Dual Flow Perception module
(DFP), which consists of a dynamic flow and a static flow in parallel to
capture the moving tendency and basic detection features, respectively. The
other is a Trend Aware Loss (TAL), which adaptively generates a loss weight for
each object according to its moving speed. Realistically, we consider driving
scenes with multiple velocities and further propose a velocity-aware streaming
AP (VsAP) to jointly evaluate the accuracy. In this realistic setting, we
design an efficient mixed-velocity training strategy to guide the detector to
perceive objects at any velocity. Our simple method achieves state-of-the-art
performance on the Argoverse-HD dataset and improves the sAP and VsAP by 4.7%
and 8.2%, respectively, compared to the strong baseline, validating its
effectiveness.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 12:03:02 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Yang",
"Jinrong",
""
],
[
"Liu",
"Songtao",
""
],
[
"Li",
"Zeming",
""
],
[
"Li",
"Xiaoping",
""
],
[
"Sun",
"Jian",
""
]
] |
new_dataset
| 0.998436 |
2207.10482
|
Bestami Gunay
|
Bestami Günay, Sefa Burak Okcu and Hasan Şakir Bilge
|
LPYOLO: Low Precision YOLO for Face Detection on FPGA
|
Accepted to MVML2022
|
Proceedings of the 8th World Congress on Electrical Engineering
and Computer Systems and Sciences (2022)
|
10.11159/mvml22.108
| null |
cs.CV cs.AR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, the number of edge computing devices and of the artificial
intelligence applications running on them has grown rapidly. In edge computing,
decision-making processes and computations are moved from servers to edge
devices. Hence, cheap and low-power devices are required. FPGAs are very
low-power devices, well suited to parallel operations, and therefore a good fit
for running Convolutional Neural Networks (CNNs), the fundamental unit of many
artificial intelligence applications. Face detection in surveillance systems is
among the most demanded applications in the security market. In this work, the
TinyYolov3 architecture, a CNN-based object detection method developed for
embedded systems, is redesigned and deployed for face detection. The PYNQ-Z2,
which carries a low-end Xilinx Zynq-7020 System-on-Chip (SoC), is selected as
the target board. The redesigned TinyYolov3 model is defined at several
bit-width precisions with the Brevitas library, which provides fundamental CNN
layers and activations in integer-quantized form. The model is then trained in
this quantized structure on the WiderFace dataset. In order to decrease latency
and power consumption, the on-chip memory of the FPGA is configured to store
all network parameters, and the last activation function is changed from a
Sigmoid to a rescaled HardTanh. A high degree of parallelism is also applied to
the logic resources of the FPGA. The model is converted to an HLS-based
application using the FINN framework and the FINN-HLS library, which includes
the layer definitions in C++, and is then synthesized and deployed. The CPU of
the SoC is employed with a multithreading mechanism and is responsible for
preprocessing, postprocessing, and TCP/IP streaming operations. Consequently, a
total board power consumption of 2.4 W, a throughput of 18 frames per second
(FPS), and an accuracy of 0.757 mAP on the Easy category of WiderFace are
achieved with the 4-bit precision model.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 13:54:52 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Günay",
"Bestami",
""
],
[
"Okcu",
"Sefa Burak",
""
],
[
"Bilge",
"Hasan Şakir",
""
]
] |
new_dataset
| 0.999506 |
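The abstract above depends on defining the network at several integer bit-width precisions. As a toy illustration of what, say, 4-bit weight quantization means, here is a small NumPy sketch of a generic symmetric uniform quantizer; it is not Brevitas's exact formulation, and the scaling scheme is an assumption.

```python
import numpy as np

def fake_quantize(weights: np.ndarray, bits: int = 4) -> np.ndarray:
    """Symmetric uniform quantization to 2**bits levels and back: a generic
    stand-in for what integer-quantized training libraries simulate."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for signed 4-bit values
    scale = float(np.max(np.abs(weights))) / qmax
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q * scale                        # dequantized ("fake quant") values

w = np.random.randn(16, 3, 3, 3).astype(np.float32)          # a toy conv kernel
w4 = fake_quantize(w, bits=4)
print("distinct 4-bit weight values:", np.unique(w4).size)   # at most 16
```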
2207.10506
|
Aline Sindel
|
Aline Sindel, Bettina Hohberger, Andreas Maier, Vincent Christlein
|
Multi-modal Retinal Image Registration Using a Keypoint-Based Vessel
Structure Aligning Network
|
11 pages, 3 figures, 3 tables, accepted to MICCAI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In ophthalmological imaging, multiple imaging systems, such as color fundus,
infrared, fluorescein angiography, optical coherence tomography (OCT) or OCT
angiography, are often involved to make a diagnosis of retinal disease.
Multi-modal retinal registration techniques can assist ophthalmologists by
providing a pixel-based comparison of aligned vessel structures in images from
different modalities or acquisition times. To this end, we propose an
end-to-end trainable deep learning method for multi-modal retinal image
registration. Our method extracts convolutional features from the vessel
structure for keypoint detection and description and uses a graph neural
network for feature matching. The keypoint detection and description network
and graph neural network are jointly trained in a self-supervised manner using
synthetic multi-modal image pairs and are guided by synthetically sampled
ground truth homographies. Our method demonstrates higher registration accuracy
than competing methods on our synthetic retinal dataset and generalizes well to
our real macula dataset and a public fundus dataset.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 14:36:51 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Sindel",
"Aline",
""
],
[
"Hohberger",
"Bettina",
""
],
[
"Maier",
"Andreas",
""
],
[
"Christlein",
"Vincent",
""
]
] |
new_dataset
| 0.998682 |
2207.10614
|
Paul Upchurch
|
Paul Upchurch and Ransen Niu
|
A Dense Material Segmentation Dataset for Indoor and Outdoor Scene
Parsing
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key algorithm for understanding the world is material segmentation, which
assigns a label (metal, glass, etc.) to each pixel. We find that a model
trained on existing data underperforms in some settings and propose to address
this with a large-scale dataset of 3.2 million dense segments on 44,560 indoor
and outdoor images, which is 23x more segments than existing data. Our data
covers a more diverse set of scenes, objects, viewpoints and materials, and
contains a fairer distribution of skin types. We show that a model trained
on our data outperforms a state-of-the-art model across datasets and
viewpoints. We propose a large-scale scene parsing benchmark and baseline of
0.729 per-pixel accuracy, 0.585 mean class accuracy and 0.420 mean IoU across
46 materials.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 17:15:41 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Upchurch",
"Paul",
""
],
[
"Niu",
"Ransen",
""
]
] |
new_dataset
| 0.999736 |
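The abstract above reports per-pixel accuracy, mean class accuracy, and mean IoU. For reference, here is a short NumPy sketch of the standard way these three metrics are computed from a confusion matrix; this is the common definition, not code from the paper.

```python
import numpy as np

def segmentation_metrics(conf: np.ndarray):
    """conf[i, j] = number of pixels with ground-truth class i predicted as j."""
    tp = np.diag(conf).astype(float)
    per_pixel_acc = tp.sum() / conf.sum()
    class_acc = tp / np.maximum(conf.sum(axis=1), 1)                    # per-class recall
    iou = tp / np.maximum(conf.sum(axis=1) + conf.sum(axis=0) - tp, 1)  # per-class IoU
    return per_pixel_acc, class_acc.mean(), iou.mean()

# Tiny 3-class example.
conf = np.array([[50,  2,  1],
                 [ 3, 40,  5],
                 [ 0,  4, 30]])
print(segmentation_metrics(conf))
```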
2207.10644
|
Kunhong Liu Dr
|
Xin-Cheng Wen, Jia-Xin Ye, Yan Luo, Yong Xu, Xuan-Ze Wang, Chang-Li Wu
and Kun-Hong Liu
|
CTL-MTNet: A Novel CapsNet and Transfer Learning-Based Mixed Task Net
for the Single-Corpus and Cross-Corpus Speech Emotion Recognition
|
this paper has been accepted by IJCAI 2022. Please cite it by:
Xin-Cheng Wen#, JiaXin Ye#, Yan Luo, Yong Xu, Xuan-Ze WANG, Chang-Li Wu,
Kun-Hong Liu*, CTL-MTNet: A Novel CapsNet and Transfer Learning-Based Mixed
Task Net for the Single-Corpus and Cross-Corpus Speech Emotion Recognition,
IJCAI 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Speech Emotion Recognition (SER) has become a growing focus of research in
human-computer interaction. An essential challenge in SER is to extract common
attributes from different speakers or languages, especially when a specific
source corpus has to be trained to recognize the unknown data coming from
another speech corpus. To address this challenge, a Capsule Network (CapsNet)
and Transfer Learning based Mixed Task Net (CTL-MTNet) is proposed in this
paper to deal with both the single-corpus and cross-corpus SER tasks
simultaneously. For the single-corpus task, a combined Convolution-Pooling and
Attention CapsNet module (CPAC) is designed by embedding a self-attention
mechanism into the CapsNet, guiding the module to focus on the important
features that can be fed into different capsules. The high-level features
extracted by CPAC provide sufficient discriminative ability. Furthermore, to
handle the cross-corpus task, CTL-MTNet employs a Corpus Adaptation Adversarial
Module (CAAM) that combines CPAC with Margin Disparity Discrepancy (MDD), which
can learn domain-invariant emotion representations by extracting the strong
emotion commonness. Experiments including ablation studies and visualizations
on both single- and cross-corpus tasks using four well-known SER datasets in
different languages are conducted for performance evaluation and comparison.
The results indicate that in both tasks CTL-MTNet shows better performance in
all cases compared to a number of state-of-the-art methods. The
source code and the supplementary materials are available at:
https://github.com/MLDMXM2017/CTLMTNet
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 09:09:23 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Wen",
"Xin-Cheng",
""
],
[
"Ye",
"Jia-Xin",
""
],
[
"Luo",
"Yan",
""
],
[
"Xu",
"Yong",
""
],
[
"Wang",
"Xuan-Ze",
""
],
[
"Wu",
"Chang-Li",
""
],
[
"Liu",
"Kun-Hong",
""
]
] |
new_dataset
| 0.999019 |
2207.10663
|
Aayush Bansal
|
Aayush Bansal and Michael Zollhoefer
|
Neural Pixel Composition: 3D-4D View Synthesis from Multi-Views
|
A technical report on 3D-4D view synthesis (40 pages, 22 figures and
18 tables). High-resolution version of paper:
http://www.aayushbansal.xyz/npc/npc_hi-res.pdf. Project page (containing
video results): http://www.aayushbansal.xyz/npc/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We present Neural Pixel Composition (NPC), a novel approach for continuous
3D-4D view synthesis given only a discrete set of multi-view observations as
input. Existing state-of-the-art approaches require dense multi-view
supervision and an extensive computational budget. The proposed formulation
reliably operates on sparse and wide-baseline multi-view imagery and can be
trained efficiently within a few seconds to 10 minutes for hi-res (12MP)
content, i.e., 200-400X faster convergence than existing methods. Crucial to
our approach are two core novelties: 1) a representation of a pixel that
contains color and depth information accumulated from multi-views for a
particular location and time along a line of sight, and 2) a multi-layer
perceptron (MLP) that enables the composition of this rich information provided
for a pixel location to obtain the final color output. We experiment with a
large variety of multi-view sequences, compare to existing approaches, and
achieve better results in diverse and challenging settings. Finally, our
approach enables dense 3D reconstruction from sparse multi-views, where COLMAP,
a state-of-the-art 3D reconstruction approach, struggles.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 17:58:02 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Bansal",
"Aayush",
""
],
[
"Zollhoefer",
"Michael",
""
]
] |
new_dataset
| 0.960001 |
2207.10664
|
Rui Qian
|
Grant Van Horn, Rui Qian, Kimberly Wilber, Hartwig Adam, Oisin Mac
Aodha and Serge Belongie
|
Exploring Fine-Grained Audiovisual Categorization with the SSW60 Dataset
|
ECCV 2022 Camera Ready
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new benchmark dataset, Sapsucker Woods 60 (SSW60), for advancing
research on audiovisual fine-grained categorization. While our community has
made great strides in fine-grained visual categorization on images, the
counterparts in audio and video fine-grained categorization are relatively
unexplored. To encourage advancements in this space, we have carefully
constructed the SSW60 dataset to enable researchers to experiment with
classifying the same set of categories in three different modalities: images,
audio, and video. The dataset covers 60 species of birds and comprises
images from existing datasets and brand-new, expert-curated audio and video
datasets. We thoroughly benchmark audiovisual classification performance and
modality fusion experiments through the use of state-of-the-art transformer
methods. Our findings show that the performance of audiovisual fusion methods
is better than that of exclusively image- or audio-based methods for the task
of video classification. We also present interesting modality transfer
experiments, enabled by the unique construction of SSW60 to encompass three
different modalities. We hope the SSW60 dataset and accompanying baselines spur
research in this fascinating area.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 17:59:06 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Van Horn",
"Grant",
""
],
[
"Qian",
"Rui",
""
],
[
"Wilber",
"Kimberly",
""
],
[
"Adam",
"Hartwig",
""
],
[
"Mac Aodha",
"Oisin",
""
],
[
"Belongie",
"Serge",
""
]
] |
new_dataset
| 0.994211 |
2207.10666
|
Kan Wu
|
Kan Wu, Jinnian Zhang, Houwen Peng, Mengchen Liu, Bin Xiao, Jianlong
Fu, Lu Yuan
|
TinyViT: Fast Pretraining Distillation for Small Vision Transformers
|
Accepted by ECCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The vision transformer (ViT) has recently drawn great attention in computer
vision due to its remarkable model capability. However, most prevailing ViT
models suffer from a huge number of parameters, restricting their applicability
on devices with limited resources. To alleviate this issue, we propose TinyViT,
a new family of tiny and efficient vision transformers pretrained on
large-scale datasets with our proposed fast distillation framework. The central
idea is to transfer knowledge from large pretrained models to small ones, while
enabling small models to get the dividends of massive pretraining data. More
specifically, we apply distillation during pretraining for knowledge transfer.
The logits of large teacher models are sparsified and stored on disk in advance
to save memory cost and computation overhead. The tiny student transformers are
automatically scaled down from a large pretrained model under computation and
parameter constraints. Comprehensive experiments demonstrate the efficacy of
TinyViT. It achieves a top-1 accuracy of 84.8% on ImageNet-1k with only 21M
parameters, being comparable to Swin-B pretrained on ImageNet-21k while using
4.2 times fewer parameters. Moreover, with increased image resolution, TinyViT
can reach 86.5% accuracy, being slightly better than Swin-L while using only
11% of the parameters. Last but not least, we demonstrate the good transfer
ability of TinyViT on various downstream tasks. Code and models are available
at https://github.com/microsoft/Cream/tree/main/TinyViT.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 17:59:56 GMT"
}
] | 2022-07-22T00:00:00 |
[
[
"Wu",
"Kan",
""
],
[
"Zhang",
"Jinnian",
""
],
[
"Peng",
"Houwen",
""
],
[
"Liu",
"Mengchen",
""
],
[
"Xiao",
"Bin",
""
],
[
"Fu",
"Jianlong",
""
],
[
"Yuan",
"Lu",
""
]
] |
new_dataset
| 0.999034 |
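The abstract above mentions sparsifying the teacher's logits and storing them in advance so distillation does not need the teacher online. A minimal PyTorch sketch of that idea follows (top-k logits saved to disk, then reused in a KD loss restricted to the stored classes); the value of K, the file layout, and the loss simplification are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

K = 10    # number of teacher logits kept per image (assumed value)
T = 1.0   # distillation temperature (assumed value)

@torch.no_grad()
def save_sparse_teacher_logits(teacher, images, path):
    """Run the teacher once, keep only the top-K logits per image, store on disk."""
    logits = teacher(images)                       # (batch, num_classes)
    values, indices = torch.topk(logits, K, dim=1)
    torch.save({"values": values.cpu(), "indices": indices.cpu()}, path)

def sparse_kd_loss(student_logits, values, indices):
    """KL divergence restricted to the stored top-K classes: teacher and student
    are both renormalized on that support (a simplification for illustration)."""
    student_topk = student_logits.gather(1, indices)
    teacher_prob = F.softmax(values / T, dim=1)
    student_logprob = F.log_softmax(student_topk / T, dim=1)
    return F.kl_div(student_logprob, teacher_prob, reduction="batchmean")
```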
2012.02973
|
Matheus Cavalcante
|
Matheus Cavalcante, Samuel Riedel, Antonio Pullini, Luca Benini
|
MemPool: A Shared-L1 Memory Many-Core Cluster with a Low-Latency
Interconnect
|
Accepted for publication in the Design, Automation and Test in Europe
(DATE) Conference 2021
| null |
10.23919/DATE51398.2021.9474087
| null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
A key challenge in scaling shared-L1 multi-core clusters towards many-core
(more than 16 cores) configurations is to ensure low-latency and efficient
access to the L1 memory. In this work we demonstrate that it is possible to
scale up the shared-L1 architecture: We present MemPool, a 32 bit many-core
system with 256 fast RV32IMA "Snitch" cores featuring application-tunable
execution units, running at 700 MHz in typical conditions (TT/0.80
V/25 °C). MemPool is easy to program, with all the cores sharing a global
view of a large L1 scratchpad memory pool, accessible within at most 5 cycles.
In MemPool's physical-aware design, we emphasized the exploration, design, and
optimization of the low-latency processor-to-L1-memory interconnect. We compare
three candidate topologies, analyzing them in terms of latency, throughput, and
back-end feasibility. The chosen topology keeps the average latency at fewer
than 6 cycles, even for a heavy injected load of 0.33 request/core/cycle. We
also propose a lightweight addressing scheme that maps each core's private data
to a memory bank accessible within one cycle, which leads to performance gains
of up to 20% in real-world signal processing benchmarks. The addressing scheme
is also highly efficient in terms of energy consumption since requests to local
banks consume only half of the energy required to access remote banks. Our
design achieves competitive performance with respect to an ideal,
non-implementable full-crossbar baseline.
|
[
{
"version": "v1",
"created": "Sat, 5 Dec 2020 08:18:47 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Cavalcante",
"Matheus",
""
],
[
"Riedel",
"Samuel",
""
],
[
"Pullini",
"Antonio",
""
],
[
"Benini",
"Luca",
""
]
] |
new_dataset
| 0.982422 |
2111.04476
|
Nirmalya Thakur
|
Nirmalya Thakur
|
Twitter Big Data as a Resource for Exoskeleton Research: A Large-Scale
Dataset of about 140,000 Tweets and 100 Research Questions
| null | null | null | null |
cs.CY cs.IR cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The exoskeleton technology has been rapidly advancing in the recent past due
to its multitude of applications and diverse use-cases in assisted living,
military, healthcare, firefighting, and Industry 4.0. The exoskeleton market is
projected to grow to multiple times its current value within the next
two years. Therefore, it is crucial to study the degree and trends of user
interest, views, opinions, perspectives, attitudes, acceptance, feedback,
engagement, buying behavior, and satisfaction, towards exoskeletons, for which
the availability of Big Data of conversations about exoskeletons is necessary.
The Internet of Everything style of today's living, characterized by people
spending more time on the internet than ever before, with a specific focus on
social media platforms, holds the potential for the development of such a
dataset by the mining of relevant social media conversations. Twitter, one such
social media platform, is highly popular amongst all age groups, where the
topics found in the conversation paradigms include emerging technologies such
as exoskeletons. To address this research challenge, this work makes two
scientific contributions to this field. First, it presents an open-access
dataset of about 140,000 tweets about exoskeletons that were posted in a 5-year
period from May 21, 2017, to May 21, 2022. Second, based on a comprehensive
review of the recent works in the fields of Big Data, Natural Language
Processing, Information Retrieval, Data Mining, Pattern Recognition, and
Artificial Intelligence that may be applied to relevant Twitter data for
advancing research, innovation, and discovery in the field of exoskeleton
research, a total of 100 Research Questions are presented for researchers to
study, analyze, evaluate, ideate, and investigate based on this dataset.
|
[
{
"version": "v1",
"created": "Thu, 4 Nov 2021 19:36:01 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 05:58:07 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Apr 2022 05:20:29 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Jul 2022 16:52:35 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Thakur",
"Nirmalya",
""
]
] |
new_dataset
| 0.998245 |
2111.11187
|
Jaesung Choe
|
Jaesung Choe, Chunghyun Park, Francois Rameau, Jaesik Park, In So
Kweon
|
PointMixer: MLP-Mixer for Point Cloud Understanding
|
Accepted to ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MLP-Mixer has recently emerged as a new challenger to CNNs and transformers.
Despite its simplicity compared to the transformer, the concept of
channel-mixing MLPs and token-mixing MLPs achieves noticeable performance in
visual recognition tasks. Unlike images, point clouds are inherently sparse,
unordered and irregular, which limits the direct use of MLP-Mixer for point
cloud understanding. In this paper, we propose PointMixer, a universal point
set operator that facilitates information sharing among unstructured 3D points.
By simply replacing token-mixing MLPs with a softmax function, PointMixer can
"mix" features within/between point sets. By doing so, PointMixer can be
broadly used in the network as inter-set mixing, intra-set mixing, and pyramid
mixing. Extensive experiments show the competitive or superior performance of
PointMixer in semantic segmentation, classification, and point reconstruction
against transformer-based methods.
|
[
{
"version": "v1",
"created": "Mon, 22 Nov 2021 13:25:54 GMT"
},
{
"version": "v2",
"created": "Sat, 27 Nov 2021 09:21:43 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Mar 2022 17:09:30 GMT"
},
{
"version": "v4",
"created": "Wed, 16 Mar 2022 05:11:06 GMT"
},
{
"version": "v5",
"created": "Wed, 20 Jul 2022 15:37:39 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Choe",
"Jaesung",
""
],
[
"Park",
"Chunghyun",
""
],
[
"Rameau",
"Francois",
""
],
[
"Park",
"Jaesik",
""
],
[
"Kweon",
"In So",
""
]
] |
new_dataset
| 0.963646 |
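The abstract above replaces MLP-Mixer's token-mixing MLP with a softmax over points so that features can be mixed within an unordered, variable-size point set. The PyTorch sketch below only conveys that general pattern of softmax-weighted intra-set mixing; the scoring layer, shapes, and residual wiring are illustrative assumptions, not the PointMixer layer itself.

```python
import torch
import torch.nn as nn

class SoftmaxSetMixing(nn.Module):
    """Mix features within a point set via softmax weights instead of a
    fixed-size token-mixing MLP (illustrative sketch only)."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)        # one mixing score per point
        self.value = nn.Linear(channels, channels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_points, channels); num_points may vary across sets
        weights = torch.softmax(self.score(feats), dim=1)        # (B, N, 1)
        mixed = (weights * self.value(feats)).sum(dim=1, keepdim=True)
        return feats + mixed          # broadcast the mixed summary back to each point
```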
2112.01168
|
Matheus Cavalcante
|
Matheus Cavalcante, Anthony Agnesina, Samuel Riedel, Moritz Brunion,
Alberto Garcia-Ortiz, Dragomir Milojevic, Francky Catthoor, Sung Kyu Lim and
Luca Benini
|
MemPool-3D: Boosting Performance and Efficiency of Shared-L1 Memory
Many-Core Clusters with 3D Integration
|
Accepted for publication in DATE 2022 -- Design, Automation and Test
in Europe Conference
| null |
10.23919/DATE54114.2022.9774726
| null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Three-dimensional integrated circuits promise power, performance, and
footprint gains compared to their 2D counterparts, thanks to drastic reductions
in the interconnects' length through their smaller form factor. We can leverage
the potential of 3D integration by enhancing MemPool, an open-source many-core
design with 256 cores and a shared pool of L1 scratchpad memory connected with
a low-latency interconnect. MemPool's baseline 2D design is severely limited by
routing congestion and wire propagation delay, making the design ideal for 3D
integration. In architectural terms, we increase MemPool's scratchpad memory
capacity beyond the sweet spot for 2D designs, improving performance in a
common digital signal processing kernel. We propose a 3D MemPool design that
leverages a smart partitioning of the memory resources across two layers to
balance the size and utilization of the stacked dies. In this paper, we explore
the architectural and the technology parameter spaces by analyzing the power,
performance, area, and energy efficiency of MemPool instances in 2D and 3D with
1 MiB, 2 MiB, 4 MiB, and 8 MiB of scratchpad memory in a commercial 28 nm
technology node. We observe a performance gain of 9.1% when running a matrix
multiplication on the MemPool-3D design with 4 MiB of scratchpad memory
compared to the MemPool 2D counterpart. In terms of energy efficiency, we can
implement the MemPool-3D instance with 4 MiB of L1 memory on an energy budget
15% smaller than its 2D counterpart, and even 3.7% smaller than the MemPool-2D
instance with one-fourth of the L1 scratchpad memory capacity.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 12:39:17 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Cavalcante",
"Matheus",
""
],
[
"Agnesina",
"Anthony",
""
],
[
"Riedel",
"Samuel",
""
],
[
"Brunion",
"Moritz",
""
],
[
"Garcia-Ortiz",
"Alberto",
""
],
[
"Milojevic",
"Dragomir",
""
],
[
"Catthoor",
"Francky",
""
],
[
"Lim",
"Sung Kyu",
""
],
[
"Benini",
"Luca",
""
]
] |
new_dataset
| 0.995197 |
2112.13548
|
Mohan Zhou
|
Mohan Zhou, Yalong Bai, Wei Zhang, Ting Yao, Tiejun Zhao, Tao Mei
|
Responsive Listening Head Generation: A Benchmark Dataset and Baseline
|
Accepted by ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new listening head generation benchmark for synthesizing
responsive feedback of a listener (e.g., nod, smile) during a face-to-face
conversation. As the indispensable complement to talking head generation,
listening head generation has seldom been studied in the literature.
Automatically synthesizing listening behavior that actively responds to a
talking head, is critical to applications such as digital human, virtual agents
and social robots. In this work, we propose a novel dataset "ViCo",
highlighting the listening head generation during a face-to-face conversation.
A total of 92 identities (67 speakers and 76 listeners) are involved in
ViCo, featuring 483 clips in a paired "speaking-listening" pattern, where
listeners show three listening styles based on their attitudes: positive,
neutral, negative. Different from traditional speech-to-gesture or talking-head
generation, listening head generation takes as input both the audio and visual
signals from the speaker, and gives non-verbal feedbacks (e.g., head motions,
facial expressions) in a real-time manner. Our dataset supports a wide range of
applications such as human-to-human interaction, video-to-video translation,
cross-modal understanding and generation. To encourage further research, we
also release a listening head generation baseline, conditioning on different
listening attitudes. Code & ViCo dataset: https://project.mhzhou.com/vico.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 07:18:50 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 05:48:18 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Jul 2022 10:54:23 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Zhou",
"Mohan",
""
],
[
"Bai",
"Yalong",
""
],
[
"Zhang",
"Wei",
""
],
[
"Yao",
"Ting",
""
],
[
"Zhao",
"Tiejun",
""
],
[
"Mei",
"Tao",
""
]
] |
new_dataset
| 0.968119 |
2201.07412
|
Chunhua Shen
|
Weian Mao and Yongtao Ge and Chunhua Shen and Zhi Tian and Xinlong
Wang and Zhibin Wang and Anton van den Hengel
|
Poseur: Direct Human Pose Regression with Transformers
|
Accepted to Proc. Eur. Conf. Comp. Vision (ECCV) 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a direct, regression-based approach to 2D human pose estimation
from single images. We formulate the problem as a sequence prediction task,
which we solve using a Transformer network. This network directly learns a
regression mapping from images to the keypoint coordinates, without resorting
to intermediate representations such as heatmaps. This approach avoids much of
the complexity associated with heatmap-based approaches. To overcome the
feature misalignment issues of previous regression-based methods, we propose an
attention mechanism that adaptively attends to the features that are most
relevant to the target keypoints, considerably improving the accuracy.
Importantly, our framework is end-to-end differentiable, and naturally learns
to exploit the dependencies between keypoints. Experiments on MS-COCO and MPII,
two predominant pose-estimation datasets, demonstrate that our method
significantly improves upon the state-of-the-art in regression-based pose
estimation. More notably, ours is the first regression-based approach to
perform favorably compared to the best heatmap-based pose estimation methods.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 04:31:57 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 12:25:18 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Mao",
"Weian",
""
],
[
"Ge",
"Yongtao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Tian",
"Zhi",
""
],
[
"Wang",
"Xinlong",
""
],
[
"Wang",
"Zhibin",
""
],
[
"Hengel",
"Anton van den",
""
]
] |
new_dataset
| 0.987317 |
2203.03450
|
Leandro Lanzieri
|
Leandro Lanzieri, Peter Kietzmann, Thomas C. Schmidt, Matthias
Wählisch
|
Secure and Authorized Client-to-Client Communication for LwM2M
| null |
Proceedings of ACM/IEEE International Conference on Information
Processing in Sensor Networks (IPSN) 2022
|
10.1109/IPSN54338.2022.00020
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constrained devices on the Internet of Things (IoT) continuously produce and
consume data. LwM2M manages millions of these devices in a server-centric
architecture, which challenges edge networks with expensive uplinks and
time-sensitive use cases. In this paper, we contribute two LwM2M extensions to
enable client-to-client (C2C) communication: (i) an authorization mechanism for
clients, and (ii) an extended management interface to allow secure C2C access
to resources. We analyse the security properties of the proposed extensions and
show that they are compliant with LwM2M security requirements. Our performance
evaluation on off-the-shelf IoT hardware shows that C2C communication
outperforms server-centric deployments. First, LwM2M deployments with edge C2C
communication yield a ~90% faster notification delivery and ~8x greater
throughput compared to common server-centric scenarios, while keeping a small
memory overhead of ~8%. Second, in server-centric communication, the delivery
rate degrades when resource update intervals drop below 100 ms.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 15:10:14 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Lanzieri",
"Leandro",
""
],
[
"Kietzmann",
"Peter",
""
],
[
"Schmidt",
"Thomas C.",
""
],
[
"Wählisch",
"Matthias",
""
]
] |
new_dataset
| 0.950328 |
2203.09440
|
Mutian Xu
|
Mutian Xu, Pei Chen, Haolin Liu, Xiaoguang Han
|
TO-Scene: A Large-scale Dataset for Understanding 3D Tabletop Scenes
|
ECCV 2022 (Oral Presentation)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many basic indoor activities such as eating or writing are typically conducted
on tabletops (e.g., coffee tables, writing desks). Understanding tabletop
scenes is therefore indispensable in 3D indoor scene parsing applications.
Unfortunately, it is hard to meet this demand by directly
deploying data-driven algorithms, since 3D tabletop scenes are rarely available
in current datasets. To remedy this defect, we introduce TO-Scene, a
large-scale dataset focusing on tabletop scenes, which contains 20,740 scenes
with three variants. To acquire the data, we design an efficient and scalable
framework, where a crowdsourcing UI is developed to transfer CAD objects from
ModelNet and ShapeNet onto tables from ScanNet, then the output tabletop scenes
are simulated into real scans and annotated automatically.
Further, a tabletop-aware learning strategy is proposed for better perceiving
the small-sized tabletop instances. Notably, we also provide a real scanned
test set TO-Real to verify the practical value of TO-Scene. Experiments show
that the algorithms trained on TO-Scene indeed work on the realistic test data,
and our proposed tabletop-aware learning strategy greatly improves the
state-of-the-art results on both 3D semantic segmentation and object detection
tasks. Dataset and code are available at
https://github.com/GAP-LAB-CUHK-SZ/TO-Scene.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 17:00:55 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Mar 2022 06:18:32 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Jul 2022 09:29:02 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Xu",
"Mutian",
""
],
[
"Chen",
"Pei",
""
],
[
"Liu",
"Haolin",
""
],
[
"Han",
"Xiaoguang",
""
]
] |
new_dataset
| 0.999905 |
2203.12119
|
Menglin Jia
|
Menglin Jia and Luming Tang and Bor-Chun Chen and Claire Cardie and
Serge Belongie and Bharath Hariharan and Ser-Nam Lim
|
Visual Prompt Tuning
|
ECCV2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The current modus operandi in adapting pre-trained models involves updating
all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual
Prompt Tuning (VPT) as an efficient and effective alternative to full
fine-tuning for large-scale Transformer models in vision. Taking inspiration
from recent advances in efficiently tuning large language models, VPT
introduces only a small amount (less than 1% of model parameters) of trainable
parameters in the input space while keeping the model backbone frozen. Via
extensive experiments on a wide variety of downstream recognition tasks, we
show that VPT achieves significant performance gains compared to other
parameter efficient tuning protocols. Most importantly, VPT even outperforms
full fine-tuning in many cases across model capacities and training data
scales, while reducing per-task storage cost.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 01:17:16 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 15:47:22 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Jia",
"Menglin",
""
],
[
"Tang",
"Luming",
""
],
[
"Chen",
"Bor-Chun",
""
],
[
"Cardie",
"Claire",
""
],
[
"Belongie",
"Serge",
""
],
[
"Hariharan",
"Bharath",
""
],
[
"Lim",
"Ser-Nam",
""
]
] |
new_dataset
| 0.972097 |
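The abstract above adds a small number of trainable tokens in the input space of a frozen Transformer backbone. The PyTorch sketch below shows that general shallow-prompt pattern (learnable tokens prepended to patch embeddings, backbone parameters frozen); the wrapper, prompt count, and pooling are assumptions rather than the official VPT code.

```python
import torch
import torch.nn as nn

class PromptedViT(nn.Module):
    """Wrap a frozen ViT-style backbone with learnable input-space prompts."""
    def __init__(self, backbone: nn.Module, embed_dim: int,
                 num_prompts: int = 10, num_classes: int = 100):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():       # keep the backbone frozen
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)   # the task head is also trained

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (batch, num_patches, embed_dim), already embedded
        prompts = self.prompts.expand(patch_embeddings.size(0), -1, -1)
        tokens = torch.cat([prompts, patch_embeddings], dim=1)
        features = self.backbone(tokens)            # assumed: encoder over token sequence
        return self.head(features.mean(dim=1))      # pool tokens, then classify
```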
2204.00298
|
Fangyi Chen
|
Fangyi Chen, Han Zhang, Zaiwang Li, Jiachen Dou, Shentong Mo, Hao
Chen, Yongxin Zhang, Uzair Ahmed, Chenchen Zhu, Marios Savvides
|
Unitail: Detecting, Reading, and Matching in Retail Scene
|
ECCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
To make full use of computer vision technology in stores, it is required to
consider the actual needs that fit the characteristics of the retail scene.
Pursuing this goal, we introduce the United Retail Datasets (Unitail), a
large-scale benchmark of basic visual tasks on products that challenges
algorithms for detecting, reading, and matching. With 1.8M quadrilateral-shaped
instances annotated, the Unitail offers a detection dataset to align product
appearance better. Furthermore, it provides a gallery-style OCR dataset
containing 1454 product categories, 30k text regions, and 21k transcriptions to
enable robust reading on products and motivate enhanced product matching.
Besides benchmarking the datasets using various state-of-the-art methods, we customize
a new detector for product detection and provide a simple OCR-based matching
solution that verifies its effectiveness.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 09:06:48 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2022 07:45:44 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Jul 2022 07:13:58 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Jul 2022 07:16:14 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Chen",
"Fangyi",
""
],
[
"Zhang",
"Han",
""
],
[
"Li",
"Zaiwang",
""
],
[
"Dou",
"Jiachen",
""
],
[
"Mo",
"Shentong",
""
],
[
"Chen",
"Hao",
""
],
[
"Zhang",
"Yongxin",
""
],
[
"Ahmed",
"Uzair",
""
],
[
"Zhu",
"Chenchen",
""
],
[
"Savvides",
"Marios",
""
]
] |
new_dataset
| 0.999404 |
2206.07458
|
Yong Man Ro
|
Joanna Hong, Minsu Kim, Yong Man Ro
|
VisageSynTalk: Unseen Speaker Video-to-Speech Synthesis via
Speech-Visage Feature Selection
|
Accepted by ECCV 2022
| null | null | null |
cs.CV cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of this work is to reconstruct speech from a silent talking face
video. Recent studies have shown impressive performance on synthesizing speech
from silent talking face videos. However, they have not explicitly considered
the varying identity characteristics of different speakers, which pose a
challenge for video-to-speech synthesis, and this becomes more critical in
unseen-speaker settings. Our approach is to separate the speech content and the
visage-style from a given silent talking face video. By guiding the model to
independently focus on modeling the two representations, we can obtain highly
intelligible speech from the model even when the input video of an
unseen subject is given. To this end, we introduce speech-visage selection that
separates the speech content and the speaker identity from the visual features
of the input video. The disentangled representations are jointly incorporated
to synthesize speech through visage-style based synthesizer which generates
speech by coating the visage-styles while maintaining the speech content. Thus,
the proposed framework brings the advantage of synthesizing the speech
containing the right content even with the silent talking face video of an
unseen subject. We validate the effectiveness of the proposed framework on the
GRID, TCD-TIMIT volunteer, and LRW datasets.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 11:29:58 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 13:03:18 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Hong",
"Joanna",
""
],
[
"Kim",
"Minsu",
""
],
[
"Ro",
"Yong Man",
""
]
] |
new_dataset
| 0.99622 |
2206.09465
|
Feras Batarseh
|
Feras A. Batarseh
|
Cybersecurity Law: Legal Jurisdiction and Authority
|
This report is developed for partial fulfillment of the requirements
for the degree of Juris Masters of Law at GMU's Antonin Scalia Law School
| null | null | null |
cs.SI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cybersecurity threats affect all aspects of society; critical infrastructures
(such as networks, corporate systems, water supply systems, and intelligent
transportation systems) are especially prone to attacks and can have tangible
negative consequences on society. However, these critical cyber systems are
generally governed by multiple jurisdictions, for instance the Metro in the
Washington, D.C. area is managed by the states of Virginia and Maryland, as
well as the District of Columbia (DC) through Washington Metropolitan Area
Transit Authority (WMATA). Additionally, the water treatment infrastructure
managed by DC Water consists of wastewater input from Fairfax and Arlington
counties, and the district (i.e., DC). Moreover, cyber attacks are usually
launched from unknown sources, through unknown switches and servers, and end
up at the destination without much knowledge of their source or path. Certain
infrastructures are shared amongst multiple countries, another idiosyncrasy
that exacerbates the issue of governance. This law paper, however, is not
concerned with the general governance of these infrastructures, rather with the
ambiguity in the relevant laws or doctrines about which authority would prevail
in the context of a cyber threat or a cyber-attack, with a focus on federal vs.
state issues, international law involvement, federal preemption, technical
aspects that could affect lawmaking, and conflicting responsibilities in cases
of cyber crime. A legal analysis of previous cases is presented, as well as an
extended discussion addressing different sides of the argument.
|
[
{
"version": "v1",
"created": "Sun, 19 Jun 2022 18:35:00 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jun 2022 01:25:57 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Jul 2022 16:10:30 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Batarseh",
"Feras A.",
""
]
] |
new_dataset
| 0.994812 |
2207.00974
|
Yiwen Wu
|
Youjia Wang, Teng Xu, Yiwen Wu, Minzhang Li, Wenzheng Chen, Lan Xu,
Jingyi Yu
|
NARRATE: A Normal Assisted Free-View Portrait Stylizer
|
14 pages, 13 figures https://youtu.be/mP4FV3evmyw
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose NARRATE, a novel pipeline that enables
simultaneously editing portrait lighting and perspective in a photorealistic
manner. As a hybrid neural-physical face model, NARRATE leverages complementary
benefits of geometry-aware generative approaches and normal-assisted physical
face models. In a nutshell, NARRATE first inverts the input portrait to a
coarse geometry and employs neural rendering to generate images resembling the
input, as well as producing convincing pose changes. However, the inversion
step introduces mismatch, yielding low-quality images with fewer facial details. As
such, we further estimate portrait normal to enhance the coarse geometry,
creating a high-fidelity physical face model. In particular, we fuse the neural
and physical renderings to compensate for the imperfect inversion, resulting in
both realistic and view-consistent novel perspective images. In the relighting
stage, previous works focus on single-view portrait relighting but ignore
consistency between different perspectives, leading to unstable and
inconsistent lighting effects under view changes. We extend Total Relighting to
fix this problem by unifying its multi-view input normal maps with the physical
face model. NARRATE conducts relighting with consistent normal maps, imposing
cross-view constraints and exhibiting stable and coherent illumination effects.
We experimentally demonstrate that NARRATE achieves more photorealistic,
reliable results over prior works. We further bridge NARRATE with animation and
style transfer tools, supporting pose change, light change, facial animation,
and style transfer, either separately or in combination, all at a photographic
quality. We showcase vivid free-view facial animations as well as 3D-aware
relightable stylization, which help facilitate various AR/VR applications like
virtual cinematography, 3D video conferencing, and post-production.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 07:54:05 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 09:10:39 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Wang",
"Youjia",
""
],
[
"Xu",
"Teng",
""
],
[
"Wu",
"Yiwen",
""
],
[
"Li",
"Minzhang",
""
],
[
"Chen",
"Wenzheng",
""
],
[
"Xu",
"Lan",
""
],
[
"Yu",
"Jingyi",
""
]
] |
new_dataset
| 0.999532 |
2207.03401
|
R Jaberi
|
Raed Jaberi
|
Minimum $2$-edge strongly biconnected spanning directed subgraph problem
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Wu and Grumbach introduced the concept of strongly biconnected directed
graphs. A directed graph $G=(V,E)$ is called strongly biconnected if the
directed graph $G$ is strongly connected and the underlying undirected graph of
$G$ is biconnected. A strongly biconnected directed graph $G=(V,E)$ is said to
be $2$-edge strongly biconnected if it has at least three vertices and the
directed subgraph $(V,E\setminus\left\lbrace e\right\rbrace )$ is strongly
biconnected for all $e \in E$. Let $G=(V,E)$ be a $2$-edge-strongly biconnected
directed graph. In this paper we study the problem of computing a minimum size
subset $H \subseteq E$ such that the directed subgraph $(V,H)$ is $2$-edge
strongly biconnected.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 16:11:43 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 03:20:15 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Jaberi",
"Raed",
""
]
] |
new_dataset
| 0.957 |
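As a companion to the definitions in the abstract above, here is a direct, brute-force NetworkX check of the property: a directed graph is strongly biconnected if it is strongly connected and its underlying undirected graph is biconnected, and it is 2-edge strongly biconnected if it stays so after deleting any single edge. This is just the definition, not the paper's algorithm for the minimization problem.

```python
import networkx as nx

def is_strongly_biconnected(D: nx.DiGraph) -> bool:
    return nx.is_strongly_connected(D) and nx.is_biconnected(D.to_undirected())

def is_2edge_strongly_biconnected(D: nx.DiGraph) -> bool:
    """Definition-level check: remove each edge in turn and retest."""
    if D.number_of_nodes() < 3 or not is_strongly_biconnected(D):
        return False
    for u, v in list(D.edges()):
        D.remove_edge(u, v)
        ok = is_strongly_biconnected(D)
        D.add_edge(u, v)
        if not ok:
            return False
    return True

# Example: a directed 4-cycle is strongly biconnected, but removing any single
# arc destroys strong connectivity, so it is not 2-edge strongly biconnected.
C = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 0)])
print(is_strongly_biconnected(C), is_2edge_strongly_biconnected(C))  # True False
```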
2207.08978
|
Joydeep Mitra
|
Joydeep Mitra
|
A Security & Privacy Analysis of US-based Contact Tracing Apps
| null | null | null | null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
With the onset of COVID-19, governments worldwide planned to develop and
deploy contact tracing (CT) apps to help speed up the contact tracing process.
However, experts raised concerns about the long-term privacy and security
implications of using these apps. Consequently, several proposals were made to
design privacy-preserving CT apps. To this end, Google and Apple developed the
Google/Apple Exposure Notification (GAEN) framework to help public health
authorities develop privacy-preserving CT apps. In the United States, 26 states
used the GAEN framework to develop their CT apps. In this paper, we empirically
evaluate the US-based GAEN apps to determine 1) the privileges they have, 2) if
the apps comply with their defined privacy policies, and 3) if they contain
known vulnerabilities that can be exploited to compromise privacy. The results
show that all apps violate their stated privacy policy and contain several
known vulnerabilities.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 23:14:49 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 16:34:47 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Mitra",
"Joydeep",
""
]
] |
new_dataset
| 0.992972 |
2207.09459
|
Andrea Zanini
|
Daniele Secci, Laura Molino, Andrea Zanini
|
Contaminant source identification in groundwater by means of artificial
neural network
|
Published on Journal of Hydrology
|
Volume 611, 2022, 128003, ISSN 0022-1694
|
10.1016/j.jhydrol.2022.128003
| null |
cs.LG cs.AI physics.geo-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In a comprehensive environmental protection system, groundwater cannot be
excluded. In addition to the problem of over-exploitation, which is in total
disagreement with the concept of sustainable development, another
non-negligible issue is groundwater contamination. This is mainly due to
intensive agricultural activities or industrialized areas. In the literature,
several papers have dealt with the transport problem, especially with inverse
problems in which the release history or the source location is identified.
The innovative aim of this paper is to develop a data-driven model
that is able to analyze multiple scenarios, even strongly non-linear, in order
to solve forward and inverse transport problems, preserving the reliability of
the results and reducing the uncertainty. Furthermore, this tool has the
characteristic of providing extremely fast responses, essential to identify
remediation strategies immediately. The advantages produced by the model were
compared with literature studies. In this regard, a feedforward artificial
neural network, which has been trained to handle different cases, represents
the data-driven model. Firstly, to identify the concentration of the pollutant
at specific observation points in the study area (forward problem); secondly,
to deal with inverse problems identifying the release history at known source
location; then, in case of one contaminant source, identifying the release
history and, at the same time, the location of the source in a specific
sub-domain of the investigated area. At last, the observation error is
investigated and estimated. The results are satisfactorily achieved,
highlighting the capability of the ANN to deal with multiple scenarios by
approximating nonlinear functions without the physical point of view that
describes the phenomenon, providing reliable results, with very low
computational burden and uncertainty.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 14:51:30 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Secci",
"Daniele",
""
],
[
"Molino",
"Laura",
""
],
[
"Zanini",
"Andrea",
""
]
] |
new_dataset
| 0.992288 |
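The abstract above uses a feedforward ANN as a fast surrogate mapping source parameters to concentrations at observation points (the forward problem). A generic scikit-learn sketch of such a surrogate is given below; the synthetic data, feature/target layout, and network size are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds source parameters (x, y location
# and release intensity); targets are concentrations at three observation
# wells, here produced by a made-up analytic stand-in for a transport model.
X = rng.uniform(size=(500, 3))
Y = np.column_stack([np.exp(-5.0 * (X[:, 0] - w) ** 2) * X[:, 2]
                     for w in (0.2, 0.5, 0.8)])

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, Y)

# Fast forward evaluations for a new candidate source:
print(surrogate.predict([[0.4, 0.6, 1.0]]))
```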
2207.09506
|
Ian Levy
|
Ian Levy and Crispin Robinson
|
Thoughts on child safety on commodity platforms
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The explosion of global social media and online communication platforms has
changed how we interact with each other and as a society, bringing with it new
security and privacy challenges. Like all technologies, these platforms can be
abused and they are routinely used to attempt to cause harm at scale. One of
the most significant offence types that is enabled by these platforms is child
sexual abuse - both scaling existing abuse and enabling entirely new types of
online-only abuse where the impacts on the victim are equally catastrophic.
Many platforms invest significantly in combating this crime, referring
confirmed evidence of illegality to law enforcement. The introduction of
end-to-end encryption and similar technologies breaks many of the mitigations
in place today and this has led to a debate around the apparent dichotomy of
good child safety and good general user privacy and security. This debate has
concentrated on the problem of detecting offenders sharing known abuse imagery
using a technique known as client side scanning. We will show that the real
problem of online child sexual abuse is much more complex than offender image
sharing, providing a new set of 'harm archetypes' to better group harms into
categories that have similar technical characteristics and, as far as we are
able, bring more clarity to the processes currently used by platforms and law
enforcement in relation to child sexual abuse content and the real world
impacts. We explore, at a high level, a variety of techniques that could be
used as part of any potential solution and examine the benefits and disbenefits
that may accrue in various use cases, and use a hypothetical service as an
example of how various techniques could be brought together to provide both
user privacy and security, while protecting child safety and enabling law
enforcement action.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 18:36:21 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Levy",
"Ian",
""
],
[
"Robinson",
"Crispin",
""
]
] |
new_dataset
| 0.995736 |
2207.09507
|
Dominik Kossmann
|
Dominik Ko{\ss}mann and Viktor Brack and Thorsten Wilhelm
|
SeasoNet: A Seasonal Scene Classification, segmentation and Retrieval
dataset for satellite Imagery over Germany
|
Accepted at IEEE International Geoscience and Remote Sensing
Symposium (IGARSS) 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents SeasoNet, a new large-scale multi-label land cover and
land use scene understanding dataset. It includes $1\,759\,830$ images from
Sentinel-2 tiles, with 12 spectral bands and patch sizes of up to $ 120 \
\mathrm{px} \times 120 \ \mathrm{px}$. Each image is annotated with large scale
pixel level labels from the German land cover model LBM-DE2018 with land cover
classes based on the CORINE Land Cover database (CLC) 2018 and a five times
smaller minimum mapping unit (MMU) than the original CLC maps. We provide pixel
synchronous examples from all four seasons, plus an additional snowy set. These
properties make SeasoNet the currently most versatile and biggest remote
sensing scene understanding dataset with possible applications ranging from
scene classification over land cover mapping to content-based cross season
image retrieval and self-supervised feature learning. We provide baseline
results by evaluating state-of-the-art deep networks on the new dataset in
scene classification and semantic segmentation scenarios.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 18:37:00 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Koßmann",
"Dominik",
""
],
[
"Brack",
"Viktor",
""
],
[
"Wilhelm",
"Thorsten",
""
]
] |
new_dataset
| 0.999836 |
2207.09519
|
Renrui Zhang
|
Renrui Zhang, Zhang Wei, Rongyao Fang, Peng Gao, Kunchang Li, Jifeng
Dai, Yu Qiao, Hongsheng Li
|
Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification
|
Accepted by ECCV 2022. arXiv admin note: substantial text overlap
with arXiv:2111.03930
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contrastive Vision-Language Pre-training, known as CLIP, has provided a new
paradigm for learning visual representations using large-scale image-text
pairs. It shows impressive performance on downstream tasks by zero-shot
knowledge transfer. To further enhance CLIP's adaption capability, existing
methods proposed to fine-tune additional learnable modules, which significantly
improves the few-shot performance but introduces extra training time and
computational resources. In this paper, we propose a training-free adaption
method for CLIP to conduct few-shot classification, termed as Tip-Adapter,
which not only inherits the training-free advantage of zero-shot CLIP but also
performs comparably to those training-required approaches. Tip-Adapter
constructs the adapter via a key-value cache model from the few-shot training
set, and updates the prior knowledge encoded in CLIP by feature retrieval. On
top of that, the performance of Tip-Adapter can be further boosted to be
state-of-the-art on ImageNet by fine-tuning the cache model for 10$\times$
fewer epochs than existing methods, which is both effective and efficient. We
conduct extensive experiments of few-shot classification on 11 datasets to
demonstrate the superiority of our proposed methods. Code is released at
https://github.com/gaopengcuhk/Tip-Adapter.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 19:12:11 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Zhang",
"Renrui",
""
],
[
"Wei",
"Zhang",
""
],
[
"Fang",
"Rongyao",
""
],
[
"Gao",
"Peng",
""
],
[
"Li",
"Kunchang",
""
],
[
"Dai",
"Jifeng",
""
],
[
"Qiao",
"Yu",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.955675 |
2207.09562
|
Tin Kuculo
|
Tin Kuculo, Simon Gottschalk and Elena Demidova
|
QuoteKG: A Multilingual Knowledge Graph of Quotes
| null | null | null | null |
cs.CL cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Quotes of public figures can mark turning points in history. A quote can
explain its originator's actions, foreshadowing political or personal decisions
and revealing character traits. Impactful quotes cross language barriers and
influence the general population's reaction to specific stances, always facing
the risk of being misattributed or taken out of context. The provision of a
cross-lingual knowledge graph of quotes that establishes the authenticity of
quotes and their contexts is of great importance to allow the exploration of
the lives of important people as well as topics from the perspective of what
was actually said. In this paper, we present QuoteKG, the first multilingual
knowledge graph of quotes. We propose the QuoteKG creation pipeline that
extracts quotes from Wikiquote, a free and collaboratively created collection
of quotes in many languages, and aligns different mentions of the same quote.
QuoteKG includes nearly one million quotes in $55$ languages, said by more than
$69,000$ people of public interest across a wide range of topics. QuoteKG is
publicly available and can be accessed via a SPARQL endpoint.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 21:32:59 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Kuculo",
"Tin",
""
],
[
"Gottschalk",
"Simon",
""
],
[
"Demidova",
"Elena",
""
]
] |
new_dataset
| 0.995219 |
2207.09580
|
Jodie Crocker
|
Aser Abbas (1), Joseph P. Vantassel (2), Brady R. Cox (1), Krishna
Kumar (3), Jodie Crocker (3) ((1) Utah State University, (2) Virginia Tech,
(3) The University of Texas at Austin)
|
A Frequency-Velocity CNN for Developing Near-Surface 2D Vs Images from
Linear-Array, Active-Source Wavefield Measurements
|
34 pages, 13 figures, 2 tables
| null | null | null |
cs.LG eess.SP physics.geo-ph
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a frequency-velocity convolutional neural network (CNN)
for rapid, non-invasive 2D shear wave velocity (Vs) imaging of near-surface
geo-materials. Operating in the frequency-velocity domain allows for
significant flexibility in the linear-array, active-source experimental testing
configurations used for generating the CNN input, which are normalized
dispersion images. Unlike wavefield images, normalized dispersion images are
relatively insensitive to the experimental testing configuration, accommodating
various source types, source offsets, numbers of receivers, and receiver
spacings. We demonstrate the effectiveness of the frequency-velocity CNN by
applying it to a classic near-surface geophysics problem, namely, imaging a
two-layer, undulating, soil-over-bedrock interface. This problem was recently
investigated in our group by developing a time-distance CNN, which showed great
promise but lacked flexibility in utilizing different field-testing
configurations. Herein, the new frequency-velocity CNN is shown to have
comparable accuracy to the time-distance CNN while providing greater
flexibility to handle varied field applications. The frequency-velocity CNN was
trained, validated, and tested using 100,000 synthetic near-surface models. The
ability of the proposed frequency-velocity CNN to generalize across various
acquisition configurations is first tested using synthetic near-surface models
with different acquisition configurations from that of the training set, and
then applied to experimental field data collected at the Hornsby Bend site in
Austin, Texas, USA. When fully developed for a wider range of geological
conditions, the proposed CNN may ultimately be used as a rapid, end-to-end
alternative for current pseudo-2D surface wave imaging techniques or to develop
starting models for full waveform inversion.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 22:48:43 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Abbas",
"Aser",
""
],
[
"Vantassel",
"Joseph P.",
""
],
[
"Cox",
"Brady R.",
""
],
[
"Kumar",
"Krishna",
""
],
[
"Crocker",
"Jodie",
""
]
] |
new_dataset
| 0.989554 |
2207.09610
|
Dongliang Cao
|
Dongliang Cao, Florian Bernard
|
Unsupervised Deep Multi-Shape Matching
|
to be published in ECCV2022
| null | null | null |
cs.CV cs.AI cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
3D shape matching is a long-standing problem in computer vision and computer
graphics. While deep neural networks were shown to lead to state-of-the-art
results in shape matching, existing learning-based approaches are limited in
the context of multi-shape matching: (i) either they focus on matching pairs of
shapes only and thus suffer from cycle-inconsistent multi-matchings, or (ii)
they require an explicit template shape to address the matching of a collection
of shapes. In this paper, we present a novel approach for deep multi-shape
matching that ensures cycle-consistent multi-matchings while not depending on
an explicit template shape. To this end, we utilise a shape-to-universe
multi-matching representation that we combine with powerful functional map
regularisation, so that our multi-shape matching neural network can be trained
in a fully unsupervised manner. While the functional map regularisation is only
considered during training time, functional maps are not computed for
predicting correspondences, thereby allowing for fast inference. We demonstrate
that our method achieves state-of-the-art results on several challenging
benchmark datasets, and, most remarkably, that our unsupervised method even
outperforms recent supervised methods.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 01:22:08 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Cao",
"Dongliang",
""
],
[
"Bernard",
"Florian",
""
]
] |
new_dataset
| 0.968359 |
2207.09627
|
Shayan Taheri
|
Md Mahfuz Al Hasan, Mohammad Tahsin Mostafiz, Thomas An Le, Jake
Julia, Nidish Vashistha, Shayan Taheri, and Navid Asadizanjani
|
EVHA: Explainable Vision System for Hardware Testing and Assurance -- An
Overview
|
Please contact Dr. Shayan Taheri for any questions and/or comments
regarding the paper arXiv submission at: "www.shayan-taheri.com". The Paper
Initial Submission: The ACM Journal on Emerging Technologies in Computing
Systems (JETC)
| null | null | null |
cs.CR cs.AI cs.CV cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the ever-growing demand for electronic chips in different sectors,
semiconductor companies have been mandated to offshore their manufacturing
processes. This unwanted situation has made the security and trustworthiness of
their fabricated chips a concern and has led to the creation of hardware
attacks. Under these conditions, different entities in the semiconductor supply chain can act
maliciously and execute an attack on the design computing layers, from devices
to systems. Our attack is a hardware Trojan that is inserted during mask
generation/fabrication in an untrusted foundry. The Trojan leaves a footprint
in the fabricated chip through the addition, deletion, or change of design cells. In
order to tackle this problem, we propose Explainable Vision System for Hardware
Testing and Assurance (EVHA) in this work that can detect the smallest possible
change to a design in a low-cost, accurate, and fast manner. The inputs to this
system are Scanning Electron Microscopy (SEM) images acquired from the
Integrated Circuits (ICs) under examination. The system output is determination
of IC status in terms of having any defect and/or hardware Trojan through
addition, deletion, or change in the design cells at the cell-level. This
article provides an overview on the design, development, implementation, and
analysis of our defense system.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 02:58:46 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Hasan",
"Md Mahfuz Al",
""
],
[
"Mostafiz",
"Mohammad Tahsin",
""
],
[
"Le",
"Thomas An",
""
],
[
"Julia",
"Jake",
""
],
[
"Vashistha",
"Nidish",
""
],
[
"Taheri",
"Shayan",
""
],
[
"Asadizanjani",
"Navid",
""
]
] |
new_dataset
| 0.999876 |
2207.09708
|
EPTCS
|
Debora C. Engelmann (PUCRS and UniGe), Angelo Ferrando (UniGe), Alison
R. Panisson (UFSC), Davide Ancona (UniGe), Rafael H. Bordini (PUCRS), Viviana
Mascardi (UniGe)
|
RV4JaCa -- Runtime Verification for Multi-Agent Systems
|
In Proceedings AREA 2022, arXiv:2207.09058
|
EPTCS 362, 2022, pp. 23-36
|
10.4204/EPTCS.362.5
| null |
cs.MA cs.AI cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a Runtime Verification (RV) approach for Multi-Agent
Systems (MAS) using the JaCaMo framework. Our objective is to bring a layer of
security to the MAS. This layer is capable of controlling events during the
execution of the system without needing a specific implementation in the
behaviour of each agent to recognise the events. MAS have been used in the
context of hybrid intelligence. This use requires communication between
software agents and human beings. In some cases, communication takes place via
natural language dialogues. However, this kind of communication brings us to a
concern related to controlling the flow of dialogue so that agents can prevent
any change in the topic of discussion that could impair their reasoning. We
demonstrate the implementation of a monitor that aims to control this dialogue
flow in a MAS that communicates with the user through natural language to aid
decision-making in hospital bed allocation.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 07:25:47 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Engelmann",
"Debora C.",
"",
"PUCRS and UniGe"
],
[
"Ferrando",
"Angelo",
"",
"UniGe"
],
[
"Panisson",
"Alison R.",
"",
"UFSC"
],
[
"Ancona",
"Davide",
"",
"UniGe"
],
[
"Bordini",
"Rafael H.",
"",
"PUCRS"
],
[
"Mascardi",
"Viviana",
"",
"UniGe"
]
] |
new_dataset
| 0.980593 |
2207.09730
|
Alexander V. Evako
|
Alexander Evako
|
Contractible Spaces, Homotopy Equivalence and Homeomorphism in Digital
Topology
|
11 pages
| null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
This article provides a brief overview of the main results in the field of
contractible digital spaces and contractible transformations of digital spaces
and contains new results. We introduce new types of contractible digital spaces
such as the cone and the double cone. Based on this, we introduce new
contractible transformations that convert the digital space into one homotopy
equivalent to the first. We group these transformations together and obtain 6
types of contractible transformations. These transformations can be used to
convert a closed digital n-dimensional manifold into another closed
n-dimensional manifold homeomorphic to the first one.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 08:15:31 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Evako",
"Alexander",
""
]
] |
new_dataset
| 0.997675 |
2207.09830
|
Andrey Rudenko
|
Andrey Rudenko, Luigi Palmieri, Wanting Huang, Achim J. Lilienthal,
and Kai O. Arras
|
The Atlas Benchmark: an Automated Evaluation Framework for Human Motion
Prediction
|
Accepted to and will be presented at the IEEE RO-MAN 2022 conference
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human motion trajectory prediction, an essential task for autonomous systems
in many domains, has been on the rise in recent years. With a multitude of new
methods proposed by different communities, the lack of standardized benchmarks
and objective comparisons is increasingly becoming a major limitation to assess
progress and guide further research. Existing benchmarks are limited in their
scope and flexibility to conduct relevant experiments and to account for
contextual cues of agents and environments. In this paper we present Atlas, a
benchmark to systematically evaluate human motion trajectory prediction
algorithms in a unified framework. Atlas offers data preprocessing functions,
hyperparameter optimization, comes with popular datasets and has the
flexibility to set up and conduct underexplored yet relevant experiments to
analyze a method's accuracy and robustness. In an example application of Atlas,
we compare five popular model- and learning-based predictors and find that,
when properly applied, early physics-based approaches are still remarkably
competitive. Such results confirm the necessity of benchmarks like Atlas.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 11:33:12 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Rudenko",
"Andrey",
""
],
[
"Palmieri",
"Luigi",
""
],
[
"Huang",
"Wanting",
""
],
[
"Lilienthal",
"Achim J.",
""
],
[
"Arras",
"Kai O.",
""
]
] |
new_dataset
| 0.998137 |
2207.09835
|
Shenhan Qian
|
Shenhan Qian, Jiale Xu, Ziwei Liu, Liqian Ma, Shenghua Gao
|
UNIF: United Neural Implicit Functions for Clothed Human Reconstruction
and Animation
|
Accepted to ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose united implicit functions (UNIF), a part-based method for clothed
human reconstruction and animation with raw scans and skeletons as the input.
Previous part-based methods for human reconstruction rely on ground-truth part
labels from SMPL and thus are limited to minimal-clothed humans. In contrast,
our method learns to separate parts from body motions instead of part
supervision, thus can be extended to clothed humans and other articulated
objects. Our Partition-from-Motion is achieved by a bone-centered
initialization, a bone limit loss, and a section normal loss that ensure stable
part division even when the training poses are limited. We also present a
minimal perimeter loss for SDF to suppress extra surfaces and part overlapping.
Another core of our method is an adjacent part seaming algorithm that produces
non-rigid deformations to maintain the connection between parts, which
significantly relieves the part-based artifacts. Under this algorithm, we
further propose "Competing Parts", a method that defines blending weights by
the relative position of a point to bones instead of the absolute position,
avoiding the generalization problem of neural implicit functions with inverse
LBS (linear blend skinning). We demonstrate the effectiveness of our method by
clothed human body reconstruction and animation on the CAPE and the ClothSeq
datasets.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 11:41:29 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Qian",
"Shenhan",
""
],
[
"Xu",
"Jiale",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Ma",
"Liqian",
""
],
[
"Gao",
"Shenghua",
""
]
] |
new_dataset
| 0.994068 |
2207.09918
|
Luke Boegner
|
Luke Boegner, Manbir Gulati, Garrett Vanhoy, Phillip Vallance, Bradley
Comar, Silvija Kokalj-Filipovic, Craig Lennon, Robert D. Miller
|
Large Scale Radio Frequency Signal Classification
| null | null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Existing datasets used to train deep learning models for narrowband radio
frequency (RF) signal classification lack enough diversity in signal types and
channel impairments to sufficiently assess model performance in the real world.
We introduce the Sig53 dataset consisting of 5 million synthetically-generated
samples from 53 different signal classes and expertly chosen impairments. We
also introduce TorchSig, a signals processing machine learning toolkit that can
be used to generate this dataset. TorchSig incorporates data handling
principles that are common to the vision domain, and it is meant to serve as an
open-source foundation for future signals machine learning research. Initial
experiments using the Sig53 dataset are conducted using state of the art (SoTA)
convolutional neural networks (ConvNets) and Transformers. These experiments
reveal Transformers outperform ConvNets without the need for additional
regularization or a ConvNet teacher, which is contrary to results from the
vision domain. Additional experiments demonstrate that TorchSig's
domain-specific data augmentations facilitate model training, which ultimately
benefits model performance. Finally, TorchSig supports on-the-fly synthetic
data creation at training time, thus enabling massive scale training sessions
with virtually unlimited datasets.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 14:03:57 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Boegner",
"Luke",
""
],
[
"Gulati",
"Manbir",
""
],
[
"Vanhoy",
"Garrett",
""
],
[
"Vallance",
"Phillip",
""
],
[
"Comar",
"Bradley",
""
],
[
"Kokalj-Filipovic",
"Silvija",
""
],
[
"Lennon",
"Craig",
""
],
[
"Miller",
"Robert D.",
""
]
] |
new_dataset
| 0.994647 |
2207.09965
|
Zhaoyangfan Huang
|
Zhaoyangfan Huang and Kun Hu and Xingjun Wang
|
M2-Net: Multi-stages Specular Highlight Detection and Removal in
Multi-scenes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel uniformity framework for highlight
detection and removal in multi-scenes, including synthetic images, face images,
natural images, and text images. The framework consists of three main
components, highlight feature extractor module, highlight coarse removal
module, and highlight refine removal module. Firstly, the highlight feature
extractor module can directly separate the highlight feature and non-highlight
feature from the original highlight image. Then a highlight removal image is
obtained using a coarse highlight removal network. To further improve the
highlight removal effect, the refined highlight removal image is finally
obtained using the refine highlight removal module based on contextual highlight
attention mechanisms. Extensive experimental results in multiple scenes
indicate that the proposed framework can obtain excellent visual effects of
highlight removal and achieve state-of-the-art results in several quantitative
evaluation metrics. Our algorithm is applied for the first time in video
highlight removal with promising results.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 15:18:43 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Huang",
"Zhaoyangfan",
""
],
[
"Hu",
"Kun",
""
],
[
"Wang",
"Xingjun",
""
]
] |
new_dataset
| 0.99889 |
2207.10008
|
Yanyan Li
|
Yanyan Li and Federico Tombari
|
E-Graph: Minimal Solution for Rigid Rotation with Extensibility Graphs
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Minimal solutions for relative rotation and translation estimation tasks have
been explored in different scenarios, typically relying on the so-called
co-visibility graph. However, how to build direct rotation relationships
between two frames without overlap is still an open topic, which, if solved,
could greatly improve the accuracy of visual odometry.
In this paper, a new minimal solution is proposed to solve relative rotation
estimation between two images without overlapping areas by exploiting a new
graph structure, which we call Extensibility Graph (E-Graph). Differently from
a co-visibility graph, high-level landmarks, including vanishing directions and
plane normals, are stored in our E-Graph, which are geometrically extensible.
Based on E-Graph, the rotation estimation problem becomes simpler and more
elegant, as it can deal with pure rotational motion and requires fewer
assumptions, e.g. Manhattan/Atlanta World, planar/vertical motion. Finally, we
embed our rotation estimation strategy into a complete camera tracking and
mapping system which obtains 6-DoF camera poses and a dense 3D mesh model.
Extensive experiments on public benchmarks demonstrate that the proposed
method achieves state-of-the-art tracking performance.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 16:11:48 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Li",
"Yanyan",
""
],
[
"Tombari",
"Federico",
""
]
] |
new_dataset
| 0.971707 |
2207.10016
|
Taylor J. Smith
|
Taylor J. Smith
|
Two-Dimensional Typewriter Automata
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A typewriter automaton is a special variant of a two-dimensional automaton
that receives two-dimensional words as input and is only capable of moving its
input head through its input word in three directions: downward, leftward, and
rightward. In addition, downward and leftward moves may only be made via a
special "reset" move that simulates the action of a typewriter's carriage
return.
In this paper, we initiate the study of the typewriter automaton model and
relate it to similar models, including three-way two-dimensional automata,
boustrophedon automata, and returning automata. We study the recognition powers
of the typewriter automaton model, establish closure properties of the class of
languages recognized by the model, and consider operational state complexity
bounds for the specific operation of row concatenation. We also provide a
variety of potential future research directions pertaining to the model.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 16:25:42 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Smith",
"Taylor J.",
""
]
] |
new_dataset
| 0.998732 |
2207.10031
|
Joakim Bruslund Haurum
|
Malte Pedersen, Joakim Bruslund Haurum, Patrick Dendorfer, Thomas B.
Moeslund
|
MOTCOM: The Multi-Object Tracking Dataset Complexity Metric
|
ECCV 2022. Project webpage https://vap.aau.dk/motcom
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There exists no comprehensive metric for describing the complexity of
Multi-Object Tracking (MOT) sequences. This lack of metrics decreases
explainability, complicates comparison of datasets, and reduces the
conversation on tracker performance to a matter of leader board position. As a
remedy, we present the novel MOT dataset complexity metric (MOTCOM), which is a
combination of three sub-metrics inspired by key problems in MOT: occlusion,
erratic motion, and visual similarity. The insights of MOTCOM can open nuanced
discussions on tracker performance and may lead to a wider acknowledgement of
novel contributions developed for either less known datasets or those aimed at
solving sub-problems. We evaluate MOTCOM on the comprehensive MOT17, MOT20, and
MOTSynth datasets and show that MOTCOM is far better at describing the
complexity of MOT sequences compared to the conventional density and number of
tracks. Project page at https://vap.aau.dk/motcom
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 16:46:02 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Pedersen",
"Malte",
""
],
[
"Haurum",
"Joakim Bruslund",
""
],
[
"Dendorfer",
"Patrick",
""
],
[
"Moeslund",
"Thomas B.",
""
]
] |
new_dataset
| 0.999778 |
2207.10032
|
Jamell Dacon
|
Jamell Dacon, Harry Shomer, Shaylynn Crum-Dacon, Jiliang Tang
|
Detecting Harmful Online Conversational Content towards LGBTQIA+
Individuals
|
Accepted to NAACL 2022 Queer in AI Workshop
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online discussions, panels, talk page edits, etc., often contain harmful
conversational content, i.e., hate speech, death threats, and offensive language,
especially towards certain demographic groups. For example, individuals who
identify as members of the LGBTQIA+ community and/or BIPOC (Black, Indigenous,
People of Color) are at higher risk for abuse and harassment online. In this
work, we first introduce a real-world dataset that will enable us to study and
understand harmful online conversational content. Then, we conduct several
exploratory data analysis experiments to gain deeper insights from the dataset.
We later describe our approach for detecting harmful online Anti-LGBTQIA+
conversational content, and finally, we implement two baseline machine learning
models (i.e., Support Vector Machine and Logistic Regression), and fine-tune 3
pre-trained large language models (BERT, RoBERTa, and HateBERT). Our findings
verify that large language models can achieve very promising performance on
detecting online Anti-LGBTQIA+ conversational content detection tasks.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 20:14:02 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Dacon",
"Jamell",
""
],
[
"Shomer",
"Harry",
""
],
[
"Crum-Dacon",
"Shaylynn",
""
],
[
"Tang",
"Jiliang",
""
]
] |
new_dataset
| 0.992732 |
2207.10053
|
Hyeongjin Nam
|
Gyeongsik Moon, Hyeongjin Nam, Takaaki Shiratori, Kyoung Mu Lee
|
3D Clothed Human Reconstruction in the Wild
|
Accepted to ECCV 2022, 25 pages including the supplementary material
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although much progress has been made in 3D clothed human reconstruction, most
of the existing methods fail to produce robust results from in-the-wild images,
which contain diverse human poses and appearances. This is mainly due to the
large domain gap between training datasets and in-the-wild datasets. The
training datasets are usually synthetic ones, which contain rendered images
from GT 3D scans. However, such datasets contain simple human poses and less
natural image appearances compared to those of real in-the-wild datasets, which
makes generalization to in-the-wild images extremely challenging. To
resolve this issue, in this work, we propose ClothWild, a 3D clothed human
reconstruction framework that first addresses the robustness on in-the-wild
images. First, for the robustness to the domain gap, we propose a weakly
supervised pipeline that is trainable with 2D supervision targets of
in-the-wild datasets. Second, we design a DensePose-based loss function to
reduce ambiguities of the weak supervision. Extensive empirical tests on
several public in-the-wild datasets demonstrate that our proposed ClothWild
produces much more accurate and robust results than the state-of-the-art
methods. The codes are available in here:
https://github.com/hygenie1228/ClothWild_RELEASE.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 17:33:19 GMT"
}
] | 2022-07-21T00:00:00 |
[
[
"Moon",
"Gyeongsik",
""
],
[
"Nam",
"Hyeongjin",
""
],
[
"Shiratori",
"Takaaki",
""
],
[
"Lee",
"Kyoung Mu",
""
]
] |
new_dataset
| 0.994424 |
1906.00478
|
Matheus Cavalcante
|
Matheus Cavalcante, Fabian Schuiki, Florian Zaruba, Michael Schaffner,
Luca Benini
|
Ara: A 1 GHz+ Scalable and Energy-Efficient RISC-V Vector Processor with
Multi-Precision Floating Point Support in 22 nm FD-SOI
|
13 pages. Accepted for publication in IEEE Transactions on Very Large
Scale Integration Systems
| null |
10.1109/TVLSI.2019.2950087
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present Ara, a 64-bit vector processor based on the version
0.5 draft of RISC-V's vector extension, implemented in GlobalFoundries 22FDX
FD-SOI technology. Ara's microarchitecture is scalable, as it is composed of a
set of identical lanes, each containing part of the processor's vector register
file and functional units. It achieves up to 97% FPU utilization when running a
256 x 256 double precision matrix multiplication on sixteen lanes. Ara runs at
more than 1 GHz in the typical corner (TT/0.80V/25 °C) achieving a performance
up to 33 DP-GFLOPS. In terms of energy efficiency, Ara achieves up to 41
DP-GFLOPS/W under the same conditions, which is slightly superior to similar
vector processors found in literature. An analysis on several vectorizable
linear algebra computation kernels for a range of different matrix and vector
sizes gives insight into performance limitations and bottlenecks for vector
processors and outlines directions to maintain high energy efficiency even for
small matrix sizes where the vector architecture achieves suboptimal
utilization of the available FPUs.
|
[
{
"version": "v1",
"created": "Sun, 2 Jun 2019 20:33:22 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Oct 2019 09:53:17 GMT"
},
{
"version": "v3",
"created": "Sun, 27 Oct 2019 17:30:24 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Cavalcante",
"Matheus",
""
],
[
"Schuiki",
"Fabian",
""
],
[
"Zaruba",
"Florian",
""
],
[
"Schaffner",
"Michael",
""
],
[
"Benini",
"Luca",
""
]
] |
new_dataset
| 0.997606 |
2002.11892
|
Zherong Pan
|
Liang He, Zherong Pan, Kiril Solovey, Biao Jia, and Dinesh Manocha
|
Multi-Robot Path Planning Using Medial-Axis-Based Pebble-Graph Embedding
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a centralized algorithm for labeled, disk-shaped Multi-Robot Path
Planning (MPP) in a continuous planar workspace with polygonal boundaries. Our
method automatically transforms the continuous problem into a discrete,
graph-based variant termed the pebble motion problem, which can be solved
efficiently. To construct the underlying pebble graph, we identify inscribed
circles in the workspace via a medial axis transform and organize robots into
layers within each inscribed circle. We show that our layered pebble-graph
enables collision-free motions, allowing all graph-restricted MPP instances to
be feasible. MPP instances with continuous start and goal positions can then be
solved via local navigations that route robots from and to graph vertices. We
tested our method on several environments with high robot-packing densities (up
to $61.6\%$ of the workspace). For environments with narrow passages, such
density violates the well-separated assumptions made by state-of-the-art MPP
planners, while our method achieves an average success rate of $83\%$.
|
[
{
"version": "v1",
"created": "Thu, 27 Feb 2020 03:05:30 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 15:25:05 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Apr 2022 00:53:45 GMT"
},
{
"version": "v4",
"created": "Tue, 19 Jul 2022 16:39:46 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"He",
"Liang",
""
],
[
"Pan",
"Zherong",
""
],
[
"Solovey",
"Kiril",
""
],
[
"Jia",
"Biao",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.999267 |
2009.04338
|
Mikhail Kamenev
|
Mikhail Kamenev
|
On Decoding of Reed-Muller Codes Using a Local Graph Search
|
Accepted for Publication in IEEE Transactions on Communications. This
paper has been presented in part at the 2020 IEEE Information Theory Workshop
(https://ieeexplore.ieee.org/document/9457605)
| null |
10.1109/TCOMM.2021.3128541
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel iterative decoding algorithm for Reed-Muller (RM) codes,
which takes advantage of a graph representation of the code. Vertices of the
considered graph correspond to codewords, with two vertices being connected by
an edge if and only if the Hamming distance between the corresponding codewords
equals the minimum distance of the code. The algorithm uses a greedy local
search to find a node optimizing a metric, e.g. the correlation between the
received vector and the corresponding codeword. In addition, the cyclic
redundancy check can be used to terminate the search as soon as a valid
codeword is found, leading to an improvement in the average computational
complexity of the algorithm. Simulation results for both binary symmetric
channel and additive white Gaussian noise channel show that the presented
decoder approaches the performance of maximum likelihood decoding for RM codes
of length less than 1024 and for the second-order RM codes of length less than
4096. Moreover, it is demonstrated that the considered decoding approach
outperforms state-of-the-art decoding algorithms of RM codes with similar
computational complexity for a wide range of block lengths and rates.
|
[
{
"version": "v1",
"created": "Wed, 9 Sep 2020 14:52:17 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Nov 2021 11:41:44 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Kamenev",
"Mikhail",
""
]
] |
new_dataset
| 0.971945 |
2105.02711
|
Chaoqi Yang
|
Chaoqi Yang, Cao Xiao, Fenglong Ma, Lucas Glass, Jimeng Sun
|
SafeDrug: Dual Molecular Graph Encoders for Recommending Effective and
Safe Drug Combinations
|
Accepted in IJCAI 2021, this is the full version with appendix
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Medication recommendation is an essential task of AI for healthcare. Existing
works focused on recommending drug combinations for patients with complex
health conditions solely based on their electronic health records. Thus, they
have the following limitations: (1) some important data such as drug molecule
structures have not been utilized in the recommendation process. (2) drug-drug
interactions (DDI) are modeled implicitly, which can lead to sub-optimal
results. To address these limitations, we propose a DDI-controllable drug
recommendation model named SafeDrug to leverage drugs' molecule structures and
model DDIs explicitly. SafeDrug is equipped with a global message passing
neural network (MPNN) module and a local bipartite learning module to fully
encode the connectivity and functionality of drug molecules. SafeDrug also has
a controllable loss function to control DDI levels in the recommended drug
combinations effectively. On a benchmark dataset, our SafeDrug is shown to
reduce DDI by 19.43% (relative) and to improve the Jaccard similarity between
recommended and actually prescribed drug combinations by 2.88% over previous approaches.
Moreover, SafeDrug also requires much fewer parameters than previous deep
learning-based approaches, leading to faster training by about 14% and around
2x speed-up in inference.
|
[
{
"version": "v1",
"created": "Wed, 5 May 2021 00:20:48 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jul 2022 00:41:01 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Yang",
"Chaoqi",
""
],
[
"Xiao",
"Cao",
""
],
[
"Ma",
"Fenglong",
""
],
[
"Glass",
"Lucas",
""
],
[
"Sun",
"Jimeng",
""
]
] |
new_dataset
| 0.995509 |
2105.03247
|
Tiancai Wang
|
Fangao Zeng, Bin Dong, Yuang Zhang, Tiancai Wang, Xiangyu Zhang,
Yichen Wei
|
MOTR: End-to-End Multiple-Object Tracking with Transformer
|
Accepted by ECCV 2022. Code is available at
https://github.com/megvii-research/MOTR
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal modeling of objects is a key challenge in multiple object tracking
(MOT). Existing methods track by associating detections through motion-based
and appearance-based similarity heuristics. The post-processing nature of
association prevents end-to-end exploitation of temporal variations in the video
sequence. In this paper, we propose MOTR, which extends DETR and introduces
track query to model the tracked instances in the entire video. Track query is
transferred and updated frame-by-frame to perform iterative prediction over
time. We propose tracklet-aware label assignment to train track queries and
newborn object queries. We further propose temporal aggregation network and
collective average loss to enhance temporal relation modeling. Experimental
results on DanceTrack show that MOTR significantly outperforms state-of-the-art
method, ByteTrack by 6.5% on HOTA metric. On MOT17, MOTR outperforms our
concurrent works, TrackFormer and TransTrack, on association performance. MOTR
can serve as a stronger baseline for future research on temporal modeling and
Transformer-based trackers. Code is available at
https://github.com/megvii-research/MOTR.
|
[
{
"version": "v1",
"created": "Fri, 7 May 2021 13:27:01 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Sep 2021 06:33:49 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Mar 2022 08:41:09 GMT"
},
{
"version": "v4",
"created": "Tue, 19 Jul 2022 08:56:21 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Zeng",
"Fangao",
""
],
[
"Dong",
"Bin",
""
],
[
"Zhang",
"Yuang",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Wei",
"Yichen",
""
]
] |
new_dataset
| 0.99885 |
2107.04286
|
Yilin Liu
|
Liqiang Lin and Yilin Liu and Yue Hu and Xingguang Yan and Ke Xie and
Hui Huang
|
Capturing, Reconstructing, and Simulating: the UrbanScene3D Dataset
|
ECCV 2022 camera ready; Project page: https://vcc.tech/UrbanScene3D/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present UrbanScene3D, a large-scale data platform for research of urban
scene perception and reconstruction. UrbanScene3D contains over 128k
high-resolution images covering 16 scenes including large-scale real urban
regions and synthetic cities with 136 km^2 area in total. The dataset also
contains high-precision LiDAR scans and hundreds of image sets with different
observation patterns, which provide a comprehensive benchmark to design and
evaluate aerial path planning and 3D reconstruction algorithms. In addition,
the dataset, which is built on Unreal Engine and Airsim simulator together with
the manually annotated unique instance label for each building in the dataset,
enables the generation of all kinds of data, e.g., 2D depth maps, 2D/3D
bounding boxes, and 3D point cloud/mesh segmentations, etc. The simulator with
physics engine and lighting system not only produces a variety of data but also
enables users to simulate cars or drones in the proposed urban environment for
future research.
|
[
{
"version": "v1",
"created": "Fri, 9 Jul 2021 07:56:46 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 15:29:54 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Jul 2022 05:20:26 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Lin",
"Liqiang",
""
],
[
"Liu",
"Yilin",
""
],
[
"Hu",
"Yue",
""
],
[
"Yan",
"Xingguang",
""
],
[
"Xie",
"Ke",
""
],
[
"Huang",
"Hui",
""
]
] |
new_dataset
| 0.99984 |
2111.02709
|
Xu Chen
|
Xu Chen, Erik G. Larsson, Kaibin Huang
|
Analog MIMO Communication for One-shot Distributed Principal Component
Analysis
| null | null |
10.1109/TSP.2022.3182484
| null |
cs.IT cs.DC math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A fundamental algorithm for data analytics at the edge of wireless networks
is distributed principal component analysis (DPCA), which finds the most
important information embedded in a distributed high-dimensional dataset by
distributed computation of a reduced-dimension data subspace, called principal
components (PCs). In this paper, to support one-shot DPCA in wireless systems,
we propose a framework of analog MIMO transmission featuring the uncoded analog
transmission of local PCs for estimating the global PCs. To cope with channel
distortion and noise, two maximum-likelihood (global) PC estimators are
presented corresponding to the cases with and without receive channel state
information (CSI). The first design, termed coherent PC estimator, is derived
by solving a Procrustes problem and reveals the form of regularized channel
inversion where the regulation attempts to alleviate the effects of both
receiver noise and data noise. The second one, termed blind PC estimator, is
designed based on the subspace channel-rotation-invariance property and
computes a centroid of received local PCs on a Grassmann manifold. Using the
manifold-perturbation theory, tight bounds on the mean square subspace distance
(MSSD) of both estimators are derived for performance evaluation. The results
reveal simple scaling laws of MSSD concerning device population, data and
channel signal-to-noise ratios (SNRs), and array sizes. More importantly, both
estimators are found to have identical scaling laws, suggesting the
dispensability of CSI to accelerate DPCA. Simulation results validate the
derived results and demonstrate the promising latency performance of the
proposed analog MIMO framework.
|
[
{
"version": "v1",
"created": "Thu, 4 Nov 2021 09:38:57 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2022 06:54:45 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Chen",
"Xu",
""
],
[
"Larsson",
"Erik G.",
""
],
[
"Huang",
"Kaibin",
""
]
] |
new_dataset
| 0.999101 |
2112.00969
|
Zihang Meng
|
Zihang Meng, David Yang, Xuefei Cao, Ashish Shah, Ser-Nam Lim
|
Object-Centric Unsupervised Image Captioning
|
ECCV 2022
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image captioning is a longstanding problem in the field of computer vision
and natural language processing. To date, researchers have produced impressive
state-of-the-art performance in the age of deep learning. Most of these
state-of-the-art methods, however, require a large volume of annotated
image-caption pairs in order to train their models. When given an image dataset
of interest, practitioners need to annotate the caption for each image in the
training set, and this process needs to happen for each newly collected image dataset. In
this paper, we explore the task of unsupervised image captioning which utilizes
unpaired images and texts to train the model so that the texts can come from
different sources than the images. A main school of research on this topic that
has been shown to be effective is to construct pairs from the images and texts
in the training set according to their overlap of objects. Unlike in the
supervised setting, these constructed pairings are however not guaranteed to
have fully overlapping set of objects. Our work in this paper overcomes this by
harvesting objects corresponding to a given sentence from the training set,
even if they don't belong to the same image. When used as input to a
transformer, such a mixture of objects enables larger if not full object
coverage, and when supervised by the corresponding sentence, produces results
that outperform current state-of-the-art unsupervised methods by a significant
margin. Building upon this finding, we further show that (1) additional
information on relationship between objects and attributes of objects also
helps in boosting performance; and (2) our method also extends well to
non-English image captioning, which usually suffers from a scarcer level of
annotations. Our findings are supported by strong empirical results. Our code
is available at https://github.com/zihangm/obj-centric-unsup-caption.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 03:56:09 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 17:43:05 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Meng",
"Zihang",
""
],
[
"Yang",
"David",
""
],
[
"Cao",
"Xuefei",
""
],
[
"Shah",
"Ashish",
""
],
[
"Lim",
"Ser-Nam",
""
]
] |
new_dataset
| 0.998128 |
2112.02758
|
Yiming Tang
|
Yiming Tang, Allan Spektor, Raffi Khatchadourian, Mehdi Bagherzadeh
|
A Tool for Rejuvenating Feature Logging Levels via Git Histories and
Degree of Interest
|
4 pages, ICSE '22 (tool demo track)
|
International Conference on Software Engineering, ICSE 2022.
ACM/IEEE, ACM, May 2022
|
10.1109/ICSE-Companion55297.2022.9793736
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Logging is a significant programming practice. Due to the highly
transactional nature of modern software applications, massive amounts of logs
are generated every day, which may overwhelm developers. Logging information
overload can be dangerous to software applications. Using log levels,
developers can print the useful information while hiding the verbose logs
during software runtime. As software evolves, the log levels of logging
statements associated with the surrounding software feature implementation may
also need to be altered. Maintaining log levels necessitates a significant
amount of manual effort. In this paper, we demonstrate an automated approach
that can rejuvenate feature log levels by matching the interest level of
developers in the surrounding features. The approach is implemented as an
open-source Eclipse plugin, using two external plug-ins (JGit and Mylyn). It
was tested on 18 open-source Java projects consisting of ~3 million lines of
code and ~4K log statements. Our tool successfully analyzes 99.22% of logging
statements, increases log level distributions by ~20%, and increases the focus
of logs in bug fix contexts ~83% of the time. For further details, interested
readers can watch our demonstration video
(https://www.youtube.com/watch?v=qIULoAXoDv4).
|
[
{
"version": "v1",
"created": "Mon, 6 Dec 2021 03:19:20 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 15:11:39 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Tang",
"Yiming",
""
],
[
"Spektor",
"Allan",
""
],
[
"Khatchadourian",
"Raffi",
""
],
[
"Bagherzadeh",
"Mehdi",
""
]
] |
new_dataset
| 0.9579 |
2112.04966
|
Lu Qi
|
Lu Qi, Jason Kuen, Zhe Lin, Jiuxiang Gu, Fengyun Rao, Dian Li, Weidong
Guo, Zhen Wen, Ming-Hsuan Yang, Jiaya Jia
|
CA-SSL: Class-Agnostic Semi-Supervised Learning for Detection and
Segmentation
|
Appeared in ECCV2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
To improve instance-level detection/segmentation performance, existing
self-supervised and semi-supervised methods extract either task-unrelated or
task-specific training signals from unlabeled data. We show that these two
approaches, at the two extreme ends of the task-specificity spectrum, are
suboptimal for the task performance. Utilizing too little task-specific
training signals causes underfitting to the ground-truth labels of downstream
tasks, while the opposite causes overfitting to the ground-truth labels. To
this end, we propose a novel Class-Agnostic Semi-Supervised Learning (CA-SSL)
framework to achieve a more favorable task-specificity balance in extracting
training signals from unlabeled data. CA-SSL has three training stages that act
on either ground-truth labels (labeled data) or pseudo labels (unlabeled data).
This decoupling strategy avoids the complicated scheme in traditional SSL
methods that balances the contributions from both data types. In particular, we
introduce a warmup training stage to achieve a better balance in task
specificity by ignoring class information in the pseudo labels, while
preserving localization training signals. As a result, our warmup model can
better avoid underfitting/overfitting when fine-tuned on the ground-truth
labels in detection and segmentation tasks. Using 3.6M unlabeled data, we
achieve a significant performance gain of 4.7% over ImageNet-pretrained
baseline on FCOS object detection. In addition, our warmup model demonstrates
excellent transferability to other detection and segmentation frameworks.
|
[
{
"version": "v1",
"created": "Thu, 9 Dec 2021 14:54:59 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 11:52:47 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Qi",
"Lu",
""
],
[
"Kuen",
"Jason",
""
],
[
"Lin",
"Zhe",
""
],
[
"Gu",
"Jiuxiang",
""
],
[
"Rao",
"Fengyun",
""
],
[
"Li",
"Dian",
""
],
[
"Guo",
"Weidong",
""
],
[
"Wen",
"Zhen",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Jia",
"Jiaya",
""
]
] |
new_dataset
| 0.977768 |
2202.01644
|
Masarah Paquet-Clouston
|
Masarah Paquet-Clouston, Serge-Olivier Paquette, Sebasti\'an Garc\'ia,
Mar\'ia Jos\'e Erquiaga
|
Entanglement: Cybercrime Connections of an Internet Marketing Forum
Population
|
18 pages, 4 figures
|
Journal of Cybersecurity 8-1 (2022) 1-14
|
10.1093/cybsec/tyac010
|
tyac010
|
cs.CY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many activities related to cybercrime operations do not require much secrecy,
such as developing websites or translating texts. This research provides
indications that many users of a popular public internet marketing forum have
connections to cybercrime. It does so by investigating the involvement in
cybercrime of a population of users interested in internet marketing, both at a
micro and macro scale. The research starts with a case study of three users
confirmed to be involved in cybercrime and their use of the public forum where
users share information about online advertising. It provides a first glimpse
that some business with cybercrime connection is being conducted in the clear.
The study then pans out to investigate the forum population's ties with
cybercrime by finding crossover users, who are users from the public forum who
also comment on cybercrime forums. The cybercrime forums on which they discuss
are analyzed and crossover users' strength of participation is reported. Also,
to assess if they represent a sub-group of the forum population, their posting
behavior on the public forum is compared with that of non-crossover users. This
blend of analyses shows that (i) a minimum of 7.2% of the public forum
population are crossover users that have ties with cybercrime forums; (ii)
their participation in cybercrime forums is limited; and (iii) their posting
behavior is relatively indistinguishable from that of non-crossover users. This
is the first study to formally quantify how users of an internet marketing
public forum, a space for informal exchanges, have ties to cybercrime
activities. We conclude that crossover users are a substantial part of the
population in the public forum, and, even though they have thus far been
overlooked, their aggregated effect in the ecosystem must be considered.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 15:40:55 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Paquet-Clouston",
"Masarah",
""
],
[
"Paquette",
"Serge-Olivier",
""
],
[
"García",
"Sebastián",
""
],
[
"Erquiaga",
"María José",
""
]
] |
new_dataset
| 0.998445 |
2203.04099
|
Juan F. Montesinos
|
Juan F. Montesinos, Venkatesh S. Kadandale, Gloria Haro
|
VoViT: Low Latency Graph-based Audio-Visual Voice Separation Transformer
|
Accepted to ECCV 2022
| null | null | null |
cs.SD cs.CV cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents an audio-visual approach for voice separation which
produces state-of-the-art results at a low latency in two scenarios: speech and
singing voice. The model is based on a two-stage network. Motion cues are
obtained with a lightweight graph convolutional network that processes face
landmarks. Then, both audio and motion features are fed to an audio-visual
transformer which produces a fairly good estimation of the isolated target
source. In a second stage, the predominant voice is enhanced with an audio-only
network. We present different ablation studies and comparison to
state-of-the-art methods. Finally, we explore the transferability of models
trained for speech separation in the task of singing voice separation. The
demos, code, and weights are available in https://ipcv.github.io/VoViT/
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2022 14:08:47 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 16:54:03 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Montesinos",
"Juan F.",
""
],
[
"Kadandale",
"Venkatesh S.",
""
],
[
"Haro",
"Gloria",
""
]
] |
new_dataset
| 0.995919 |
2203.05625
|
Tiancai Wang
|
Yingfei Liu, Tiancai Wang, Xiangyu Zhang, Jian Sun
|
PETR: Position Embedding Transformation for Multi-View 3D Object
Detection
|
Accepted by ECCV 2022. Code is available at
\url{https://github.com/megvii-research/PETR}
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we develop position embedding transformation (PETR) for
multi-view 3D object detection. PETR encodes the position information of 3D
coordinates into image features, producing the 3D position-aware features.
Object query can perceive the 3D position-aware features and perform end-to-end
object detection. PETR achieves state-of-the-art performance (50.4% NDS and
44.1% mAP) on standard nuScenes dataset and ranks 1st place on the benchmark.
It can serve as a simple yet strong baseline for future research. Code is
available at \url{https://github.com/megvii-research/PETR}.
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 20:33:28 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 14:04:28 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Jul 2022 08:30:57 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Liu",
"Yingfei",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Sun",
"Jian",
""
]
] |
new_dataset
| 0.999389 |
2203.11089
|
Li Chen
|
Li Chen, Chonghao Sima, Yang Li, Zehan Zheng, Jiajie Xu, Xiangwei
Geng, Hongyang Li, Conghui He, Jianping Shi, Yu Qiao, Junchi Yan
|
PersFormer: 3D Lane Detection via Perspective Transformer and the
OpenLane Benchmark
|
Accepted by ECCV 2022 (Oral). Project page:
https://github.com/OpenPerceptionX/PersFormer_3DLane | OpenLane dataset:
https://github.com/OpenPerceptionX/OpenLane
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Methods for 3D lane detection have been recently proposed to address the
issue of inaccurate lane layouts in many autonomous driving scenarios
(uphill/downhill, bump, etc.). Previous works struggled in complex cases due to
their simple designs of the spatial transformation between front view and
bird's eye view (BEV) and the lack of a realistic dataset. To address these
issues, we present PersFormer: an end-to-end monocular 3D lane detector with a
novel Transformer-based spatial feature transformation module. Our model
generates BEV features by attending to related front-view local regions with
camera parameters as a reference. PersFormer adopts a unified 2D/3D anchor
design and an auxiliary task to detect 2D/3D lanes simultaneously, enhancing
the feature consistency and sharing the benefits of multi-task learning.
Moreover, we release one of the first large-scale real-world 3D lane datasets:
OpenLane, with high-quality annotation and scenario diversity. OpenLane
contains 200,000 frames, over 880,000 instance-level lanes, 14 lane categories,
along with scene tags and the closest-in-path object annotations to encourage
the development of lane detection and more industrial-related autonomous
driving methods. We show that PersFormer significantly outperforms competitive
baselines in the 3D lane detection task on our new OpenLane dataset as well as
Apollo 3D Lane Synthetic dataset, and is also on par with state-of-the-art
algorithms in the 2D task on OpenLane. The project page is available at
https://github.com/OpenPerceptionX/PersFormer_3DLane and OpenLane dataset is
provided at https://github.com/OpenPerceptionX/OpenLane.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 16:12:53 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2022 08:24:02 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Jul 2022 10:00:22 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Chen",
"Li",
""
],
[
"Sima",
"Chonghao",
""
],
[
"Li",
"Yang",
""
],
[
"Zheng",
"Zehan",
""
],
[
"Xu",
"Jiajie",
""
],
[
"Geng",
"Xiangwei",
""
],
[
"Li",
"Hongyang",
""
],
[
"He",
"Conghui",
""
],
[
"Shi",
"Jianping",
""
],
[
"Qiao",
"Yu",
""
],
[
"Yan",
"Junchi",
""
]
] |
new_dataset
| 0.999863 |
2204.09443
|
Yang Zheng
|
Yang Zheng, Yanchao Yang, Kaichun Mo, Jiaman Li, Tao Yu, Yebin Liu, C.
Karen Liu, Leonidas J. Guibas
|
GIMO: Gaze-Informed Human Motion Prediction in Context
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting human motion is critical for assistive robots and AR/VR
applications, where the interaction with humans needs to be safe and
comfortable. Meanwhile, an accurate prediction depends on understanding both
the scene context and human intentions. Even though many works study
scene-aware human motion prediction, the latter is largely underexplored due to
the lack of ego-centric views that disclose human intent and the limited
diversity in motion and scenes. To reduce the gap, we propose a large-scale
human motion dataset that delivers high-quality body pose sequences, scene
scans, as well as ego-centric views with the eye gaze that serves as a
surrogate for inferring human intent. By employing inertial sensors for motion
capture, our data collection is not tied to specific scenes, which further
boosts the motion dynamics observed from our subjects. We perform an extensive
study of the benefits of leveraging the eye gaze for ego-centric human motion
prediction with various state-of-the-art architectures. Moreover, to realize
the full potential of the gaze, we propose a novel network architecture that
enables bidirectional communication between the gaze and motion branches. Our
network achieves the top performance in human motion prediction on the proposed
dataset, thanks to the intent information from eye gaze and the denoised gaze
feature modulated by the motion. Code and data can be found at
https://github.com/y-zheng18/GIMO.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 13:17:39 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 16:01:02 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Zheng",
"Yang",
""
],
[
"Yang",
"Yanchao",
""
],
[
"Mo",
"Kaichun",
""
],
[
"Li",
"Jiaman",
""
],
[
"Yu",
"Tao",
""
],
[
"Liu",
"Yebin",
""
],
[
"Liu",
"C. Karen",
""
],
[
"Guibas",
"Leonidas J.",
""
]
] |
new_dataset
| 0.999558 |
2204.13317
|
Xue Yang
|
Yue Zhou, Xue Yang, Gefan Zhang, Jiabao Wang, Yanyi Liu, Liping Hou,
Xue Jiang, Xingzhao Liu, Junchi Yan, Chengqi Lyu, Wenwei Zhang, Kai Chen
|
MMRotate: A Rotated Object Detection Benchmark using PyTorch
|
5 pages, 2 tables, MMRotate is accepted by ACM MM 2022 (OS Track).
Yue Zhou and Xue Yang provided equal contribution. The code is publicly
released at https://github.com/open-mmlab/mmrotate
| null |
10.1145/3503161.3548541
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an open-source toolbox, named MMRotate, which provides a coherent
algorithm framework for training, inference, and evaluation of popular
deep learning-based rotated object detection algorithms. MMRotate implements
18 state-of-the-art algorithms and supports the three most frequently used
angle definition methods. To facilitate future research and industrial
applications of rotated object detection-related problems, we also provide a
large number of trained models and detailed benchmarks to give insights into
the performance of rotated object detection. MMRotate is publicly released at
https://github.com/open-mmlab/mmrotate.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 07:31:00 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 07:04:53 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Jul 2022 00:44:42 GMT"
},
{
"version": "v4",
"created": "Tue, 19 Jul 2022 08:05:58 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Zhou",
"Yue",
""
],
[
"Yang",
"Xue",
""
],
[
"Zhang",
"Gefan",
""
],
[
"Wang",
"Jiabao",
""
],
[
"Liu",
"Yanyi",
""
],
[
"Hou",
"Liping",
""
],
[
"Jiang",
"Xue",
""
],
[
"Liu",
"Xingzhao",
""
],
[
"Yan",
"Junchi",
""
],
[
"Lyu",
"Chengqi",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Chen",
"Kai",
""
]
] |
new_dataset
| 0.999048 |
2205.03961
|
Hongkai Chen
|
Hongkai Chen (1), Shan Lin (1), Scott A. Smolka (1), Nicola Paoletti
(2) ((1) Stony Brook University, Stony Brook, USA, (2) Royal Holloway,
University of London, UK)
|
An STL-based Formulation of Resilience in Cyber-Physical Systems
|
16 pages excluding references and appendix (23 pages in total), 6
figures
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Resiliency is the ability to quickly recover from a violation and avoid
future violations for as long as possible. Such a property is of fundamental
importance for Cyber-Physical Systems (CPS), and yet, to date, there is no
widely agreed-upon formal treatment of CPS resiliency. We present an STL-based
framework for reasoning about resiliency in CPS in which resiliency has a
syntactic characterization in the form of an STL-based Resiliency Specification
(SRS). Given an arbitrary STL formula $\varphi$, time bounds $\alpha$ and
$\beta$, the SRS of $\varphi$, $R_{\alpha,\beta}(\varphi)$, is the STL formula
$\neg\varphi\mathbf{U}_{[0,\alpha]}\mathbf{G}_{[0,\beta)}\varphi$, specifying
that recovery from a violation of $\varphi$ occur within time $\alpha$
(recoverability), and subsequently that $\varphi$ be maintained for duration
$\beta$ (durability). These $R$-expressions, which are atoms in our SRS logic,
can be combined using STL operators, allowing one to express composite
resiliency specifications, e.g., multiple SRSs must hold simultaneously, or the
system must eventually be resilient. We define a quantitative semantics for
SRSs in the form of a Resilience Satisfaction Value (ReSV) function $r$ and
prove its soundness and completeness w.r.t. STL's Boolean semantics. The
$r$-value for $R_{\alpha,\beta}(\varphi)$ atoms is a singleton set containing a
pair quantifying recoverability and durability. The $r$-value for a composite
SRS formula results in a set of non-dominated recoverability-durability pairs,
given that the ReSVs of subformulas might not be directly comparable (e.g., one
subformula has superior durability but worse recoverability than another). To
the best of our knowledge, this is the first multi-dimensional quantitative
semantics for an STL-based logic. Two case studies demonstrate the practical
utility of our approach.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 21:55:35 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2022 18:55:09 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Chen",
"Hongkai",
""
],
[
"Lin",
"Shan",
""
],
[
"Smolka",
"Scott A.",
""
],
[
"Paoletti",
"Nicola",
""
]
] |
new_dataset
| 0.991796 |
2207.07388
|
Kyrill Schmid
|
Kyrill Schmid, Lenz Belzner, Robert M\"uller, Johannes Tochtermann,
Claudia Linnhoff-Popien
|
Stochastic Market Games
|
IJCAI-21
| null |
10.24963/ijcai.2021/54
| null |
cs.MA cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Some of the most relevant future applications of multi-agent systems like
autonomous driving or factories as a service display mixed-motive scenarios,
where agents might have conflicting goals. In these settings agents are likely
to learn undesirable outcomes in terms of cooperation under independent
learning, such as overly greedy behavior. Motivated by real-world societies,
in this work we propose to utilize market forces to provide incentives for
agents to become cooperative. As demonstrated in an iterated version of the
Prisoner's Dilemma, the proposed market formulation can change the dynamics of
the game to consistently learn cooperative policies. Further we evaluate our
approach in spatially and temporally extended settings for varying numbers of
agents. We empirically find that the presence of markets can improve both the
overall result and agent individual returns via their trading activities.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2022 10:37:16 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2022 11:27:56 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Jul 2022 05:52:24 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Schmid",
"Kyrill",
""
],
[
"Belzner",
"Lenz",
""
],
[
"Müller",
"Robert",
""
],
[
"Tochtermann",
"Johannes",
""
],
[
"Linnhoff-Popien",
"Claudia",
""
]
] |
new_dataset
| 0.968787 |
2207.07795
|
Daqian Shi
|
Daqian Shi, Xiaolei Diao, Hao Tang, Xiaomin Li, Hao Xing, Hao Xu
|
RCRN: Real-world Character Image Restoration Network via Skeleton
Extraction
|
Accepted to ACM MM 2022
| null |
10.1145/3503161.3548344
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constructing high-quality character image datasets is challenging because
real-world images are often affected by image degradation. There are
limitations when applying current image restoration methods to such real-world
character images, since (i) the categories of noise in character images are
different from those in general images; (ii) real-world character images
usually contain more complex image degradation, e.g., mixed noise at different
noise levels. To address these problems, we propose a real-world character
restoration network (RCRN) to effectively restore degraded character images,
where character skeleton information and scale-ensemble feature extraction are
utilized to obtain better restoration performance. The proposed method consists
of a skeleton extractor (SENet) and a character image restorer (CiRNet). SENet
aims to preserve the structural consistency of the character and normalize
complex noise. Then, CiRNet reconstructs clean images from degraded character
images and their skeletons. Due to the lack of benchmarks for real-world
character image restoration, we constructed a dataset containing 1,606
character images with real-world degradation to evaluate the validity of the
proposed method. The experimental results demonstrate that RCRN outperforms
state-of-the-art methods quantitatively and qualitatively.
|
[
{
"version": "v1",
"created": "Sat, 16 Jul 2022 01:02:52 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 17:52:13 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Shi",
"Daqian",
""
],
[
"Diao",
"Xiaolei",
""
],
[
"Tang",
"Hao",
""
],
[
"Li",
"Xiaomin",
""
],
[
"Xing",
"Hao",
""
],
[
"Xu",
"Hao",
""
]
] |
new_dataset
| 0.998258 |
2207.08818
|
Haoyu Ren
|
Haoyu Ren, Kirill Dorofeev, Darko Anicic, Youssef Hammad, Roland Eckl,
Thomas A. Runkler
|
SeLoC-ML: Semantic Low-Code Engineering for Machine Learning
Applications in Industrial IoT
|
Accepted by the 21st International Semantic Web Conference (ISWC2022)
| null | null | null |
cs.SE cs.AI cs.DB cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Internet of Things (IoT) is transforming the industry by bridging the gap
between Information Technology (IT) and Operational Technology (OT). Machines
are being integrated with connected sensors and managed by intelligent
analytics applications, accelerating digital transformation and business
operations. Bringing Machine Learning (ML) to industrial devices is an
advancement aiming to promote the convergence of IT and OT. However, developing
an ML application in industrial IoT (IIoT) presents various challenges,
including hardware heterogeneity, non-standardized representations of ML
models, device and ML model compatibility issues, and slow application
development. Successful deployment in this area requires a deep understanding
of hardware, algorithms, software tools, and applications. Therefore, this
paper presents a framework called Semantic Low-Code Engineering for ML
Applications (SeLoC-ML), built on a low-code platform to support the rapid
development of ML applications in IIoT by leveraging Semantic Web technologies.
SeLoC-ML enables non-experts to easily model, discover, reuse, and matchmake ML
models and devices at scale. The project code can be automatically generated
for deployment on hardware based on the matching results. Developers can
benefit from semantic application templates, called recipes, to fast prototype
end-user applications. The evaluations confirm an engineering effort reduction
by a factor of at least three compared to traditional approaches on an
industrial ML classification case study, showing the efficiency and usefulness
of SeLoC-ML. We share the code and welcome any contributions.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 13:06:21 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Ren",
"Haoyu",
""
],
[
"Dorofeev",
"Kirill",
""
],
[
"Anicic",
"Darko",
""
],
[
"Hammad",
"Youssef",
""
],
[
"Eckl",
"Roland",
""
],
[
"Runkler",
"Thomas A.",
""
]
] |
new_dataset
| 0.996649 |
2207.08967
|
Eduardo Adam Navas-L\'opez
|
Navas-L\'opez, Eduardo Adam
|
Low Cost Portable Touch Screen Technology Applied to University Teaching
|
in Spanish language. Presented at "Congreso de Electr\'onica e
Inform\'atica 2010", Universidad Centroamericana "Jos\'e Sime\'on Ca\~nas"
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This article describes an implementation of low-cost portable touch screen
technology applied to university teaching, built around the remote control
of the Nintendo Wii console (known as the Wiimote), a standard projector, a
computer, and free software. The purpose is to show the feasibility of such
implementation to improve teaching/learning processes, without incurring high
costs associated with unaffordable technological equipment, special
infrastructure in classrooms, or expensive computer programs. Also included is
a summary of a test of the system in two college courses.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 22:40:45 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Navas-López",
"",
""
],
[
"Adam",
"Eduardo",
""
]
] |
new_dataset
| 0.988918 |
2207.08972
|
Eduardo Adam Navas-L\'opez
|
Navas-L\'opez, Eduardo Adam
|
Implementation of a Didactic Compiler for a superset of PL/0
|
in Spanish language. Presented at "Congreso de Electr\'onica e
Inform\'atica 2010", Universidad Centroamericana "Jos\'e Sime\'on Ca\~nas"
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This article describes the features of a compiler for a superset language of
the well-known PL/0 created by Niklaus Wirth. Its main feature is that it
implements the compilation phases in such a way that the information passed
between each phase is reflected in an XML file.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 23:01:35 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Navas-López",
"",
""
],
[
"Adam",
"Eduardo",
""
]
] |
new_dataset
| 0.960879 |
2207.09046
|
Lei Tan
|
Lei Tan, Pingyang Dai, Rongrong Ji, Yongjian Wu
|
Dynamic Prototype Mask for Occluded Person Re-Identification
|
Accepted by ACM MM 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although person re-identification has achieved an impressive improvement in
recent years, the common occlusion case caused by different obstacles is still
an unsettled issue in real application scenarios. Existing methods mainly
address this issue by employing body clues provided by an extra network to
distinguish the visible part. Nevertheless, the inevitable domain gap between
the assistant model and the ReID datasets has greatly increased the difficulty
of obtaining an effective and efficient model. To dispense with the extra
pre-trained networks and achieve automatic alignment in an end-to-end
trainable network, we propose a novel Dynamic Prototype Mask (DPM) based on two
pieces of self-evident prior knowledge. Specifically, we first devise a Hierarchical Mask
Generator which utilizes the hierarchical semantic to select the visible
pattern space between the high-quality holistic prototype and the feature
representation of the occluded input image. Under this condition, the occluded
representation could be well aligned in a selected subspace spontaneously.
Then, to enrich the feature representation of the high-quality holistic
prototype and provide a more complete feature space, we introduce a Head Enrich
Module to encourage different heads to aggregate different pattern
representations over the whole image. Extensive experimental evaluations conducted
on occluded and holistic person re-identification benchmarks demonstrate the
superior performance of the DPM over the state-of-the-art methods. The code is
released at https://github.com/stone96123/DPM.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 03:31:13 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Tan",
"Lei",
""
],
[
"Dai",
"Pingyang",
""
],
[
"Ji",
"Rongrong",
""
],
[
"Wu",
"Yongjian",
""
]
] |
new_dataset
| 0.955999 |
2207.09127
|
Rourab Paul
|
Amrutanshu Panigrahi, Ajit Kumar Nayak, Rourab Paul
|
Smart Contract Assisted Blockchain based PKI System
|
manuscript
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The proposed smart contract can prevent seven cyber attacks, namely Denial
of Service (DoS), Man in the Middle (MITM), Distributed Denial of
Service (DDoS), 51\%, Injection, Routing, and Eclipse attacks.
The Delegated Proof of Stake (DPoS) consensus algorithm used in this model
reduces the number of validators for each transaction, which makes it suitable
for lightweight applications. The timing complexity of the key/certificate
validation and signature/certificate revocation processes does not depend on the
number of transactions. Comparisons of various timing parameters show that the
proposed PKI compares favorably with existing solutions.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 09:00:33 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Panigrahi",
"Amrutanshu",
""
],
[
"Nayak",
"Ajit Kumar",
""
],
[
"Paul",
"Rourab",
""
]
] |
new_dataset
| 0.998389 |
2207.09152
|
Christophe Servan
|
Oralie Cattan, Sahar Ghannay, Christophe Servan and Sophie Rosset
|
Benchmarking Transformers-based models on French Spoken Language
Understanding tasks
|
Accepted paper at INTERSPEECH 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In the last five years, the rise of the self-attentional Transformer-based
architectures led to state-of-the-art performances over many natural language
tasks. Although these approaches are increasingly popular, they require large
amounts of data and computational resources. There is still a substantial need
for benchmarking methodologies for under-resourced languages in
data-scarce application conditions. Most pre-trained language models have been
studied extensively on English, and only a few of them have been
evaluated on French. In this paper, we propose a unified benchmark focused on
evaluating model quality and ecological impact on two well-known French
spoken language understanding tasks. In particular, we benchmark thirteen
well-established Transformer-based models on the two available spoken language
understanding tasks for French: MEDIA and ATIS-FR. Within this framework, we
show that compact models can reach comparable results to bigger ones while
their ecological impact is considerably lower. However, this assumption is
nuanced and depends on the considered compression method.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 09:47:08 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Cattan",
"Oralie",
""
],
[
"Ghannay",
"Sahar",
""
],
[
"Servan",
"Christophe",
""
],
[
"Rosset",
"Sophie",
""
]
] |
new_dataset
| 0.975118 |
2207.09277
|
Chengfei Xie
|
Bingchen Qian, Xin Wang, Chengfei Xie and Gennian Ge
|
Covering Grassmannian Codes: Bounds and Constructions
|
17 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grassmannian $\mathcal{G}_q(n,k)$ is the set of all $k$-dimensional subspaces
of the vector space $\mathbb{F}_q^n.$ Recently, Etzion and Zhang introduced a
new notion called covering Grassmannian code which can be used in network
coding solutions for generalized combination networks. An
$\alpha$-$(n,k,\delta)_q^c$ covering Grassmannian code $\mathcal{C}$ is a
subset of $\mathcal{G}_q(n,k)$ such that every set of $\alpha$ codewords of
$\mathcal{C}$ spans a subspace of dimension at least $\delta +k$ in
$\mathbb{F}_q^n.$ In this paper, we derive new upper and lower bounds on the
size of covering Grassmannian codes. These bounds improve and extend the
parameter range of known bounds.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 13:41:58 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Qian",
"Bingchen",
""
],
[
"Wang",
"Xin",
""
],
[
"Xie",
"Chengfei",
""
],
[
"Ge",
"Gennian",
""
]
] |
new_dataset
| 0.994501 |
2207.09295
|
Justin Kay
|
Justin Kay, Peter Kulits, Suzanne Stathatos, Siqi Deng, Erik Young,
Sara Beery, Grant Van Horn, Pietro Perona
|
The Caltech Fish Counting Dataset: A Benchmark for Multiple-Object
Tracking and Counting
|
ECCV 2022. 33 pages, 12 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Caltech Fish Counting Dataset (CFC), a large-scale dataset for
detecting, tracking, and counting fish in sonar videos. We identify sonar
videos as a rich source of data for advancing low signal-to-noise computer
vision applications and tackling domain generalization in multiple-object
tracking (MOT) and counting. In comparison to existing MOT and counting
datasets, which are largely restricted to videos of people and vehicles in
cities, CFC is sourced from a natural-world domain where targets are not easily
resolvable and appearance features cannot be easily leveraged for target
re-identification. With over half a million annotations in over 1,500 videos
sourced from seven different sonar cameras, CFC allows researchers to train MOT
and counting algorithms and evaluate generalization performance at unseen test
locations. We perform extensive baseline experiments and identify key
challenges and opportunities for advancing the state of the art in
generalization in MOT and counting.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 14:26:12 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Kay",
"Justin",
""
],
[
"Kulits",
"Peter",
""
],
[
"Stathatos",
"Suzanne",
""
],
[
"Deng",
"Siqi",
""
],
[
"Young",
"Erik",
""
],
[
"Beery",
"Sara",
""
],
[
"Van Horn",
"Grant",
""
],
[
"Perona",
"Pietro",
""
]
] |
new_dataset
| 0.999839 |
2207.09378
|
Debobroto Das Robin
|
Debobroto Das Robin, Javed I. Khan
|
P4TE: PISA Switch Based Traffic Engineering in Fat-Tree Data Center
Networks
| null |
Elsevier Computer Networks 2022
| null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents P4TE, an in-band traffic monitoring, load-aware packet
forwarding, and flow rate controlling mechanism for traffic engineering in
fat-tree topology-based data center networks using PISA switches. It achieves
sub-RTT reaction time to changes in network conditions, improved flow completion
time, and balanced link utilization. Unlike the classical probe-based
monitoring approach, P4TE uses an in-band monitoring approach to identify
traffic events in the data plane. Based on these events, it re-adjusts the
priorities of the paths. It uses a heuristic-based load-aware forwarding path
selection mechanism to respond to changing network conditions and control the
flow rate by sending feedback to the end hosts. It is implementable on emerging
v1model.p4 architecture-based programmable switches and capable of maintaining
line-rate performance. Our evaluation shows that P4TE uses a small amount
of resources in the PISA pipeline and achieves better flow completion times
than ECMP and HULA.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 16:23:08 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Robin",
"Debobroto Das",
""
],
[
"Khan",
"Javed I.",
""
]
] |
new_dataset
| 0.996399 |
2207.09412
|
Junyuan Ouyang
|
Junyuan Ouyang, Haoyao Chen
|
Det6D: A Ground-Aware Full-Pose 3D Object Detector for Improving Terrain
Robustness
|
8 pages, 9 figures, submit to RA-L
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate 3D object detection with LiDAR is critical for autonomous driving.
Existing research is all based on the flat-world assumption. However, the
actual road can be complex with steep sections, which breaks the premise.
Current methods suffer from performance degradation in this case due to the
difficulty of correctly detecting objects on sloped terrain. In this work, we
propose Det6D, the first full-degree-of-freedom 3D object detector without
spatial and postural limitations, to improve terrain robustness. We choose a
point-based framework owing to its capability of detecting objects in the
entire spatial range. To predict full-degree poses, including pitch and roll,
we design a ground-aware orientation branch that leverages the local ground
constraints. Given the difficulty of long-tail non-flat scene data collection
and 6D pose annotation, we present Slope-Aug, a data augmentation method for
synthesizing non-flat terrain from existing datasets recorded in flat scenes.
Experiments on various datasets demonstrate the effectiveness and robustness of
our method in different terrains. We further conducted an extended experiment
to explore how the network predicts the two extra poses. The proposed modules
are plug-and-play for existing point-based frameworks. The code is available at
https://github.com/HITSZ-NRSL/De6D.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 17:12:48 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Ouyang",
"Junyuan",
""
],
[
"Chen",
"Haoyao",
""
]
] |
new_dataset
| 0.996874 |
2207.09439
|
Petra Wolf
|
Kevin Goergen, Henning Fernau, Esther Oest, Petra Wolf
|
All Paths Lead to Rome
| null | null | null | null |
cs.CC cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
All roads lead to Rome is the core idea of the puzzle game Roma. It is played
on an $n \times n$ grid consisting of quadratic cells. Those cells are grouped
into boxes of at most four neighboring cells and are either filled, or to be
filled, with arrows pointing in cardinal directions. The goal of the game is to
fill the empty cells with arrows such that each box contains at most one arrow
of each direction and, regardless of where we start, if we follow the arrows in the
cells, we will always end up in the special Roma-cell. In this work, we study
the computational complexity of the puzzle game Roma and show that completing a
Roma board according to the rules is an NP-complete task, counting the number
of valid completions is #Ptime-complete, and determining the number of preset
arrows needed to make the instance \emph{uniquely} solvable is
$\Sigma_2^P$-complete. We further show that the problem of completing a given
Roma instance on an $n\times n$ board cannot be solved in time
$\mathcal{O}\left(2^{{o}(n)}\right)$ under ETH and give a matching dynamic
programming algorithm based on the idea of Catalan structures.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 17:52:35 GMT"
}
] | 2022-07-20T00:00:00 |
[
[
"Goergen",
"Kevin",
""
],
[
"Fernau",
"Henning",
""
],
[
"Oest",
"Esther",
""
],
[
"Wolf",
"Petra",
""
]
] |
new_dataset
| 0.996373 |
1409.6182
|
Josep Silva
|
Juli\'an Alarte and Josep Silva
|
A Benchmark Suite for Template Detection and Content Extraction
|
13 pages, 3 tables
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Template detection and content extraction are two of the main areas of
information retrieval applied to the Web. They perform different analyses over
the structure and content of webpages to extract some part of the document.
However, their objective is different. While template detection identifies the
template of a webpage (usually comparing with other webpages of the same
website), content extraction identifies the main content of the webpage
discarding the other part. Therefore, they are somehow complementary, because
the main content is not part of the template. It has been measured that
templates represent between 40% and 50% of data on the Web. Therefore,
identifying templates is essential for indexing tasks because templates usually
contain irrelevant information such as advertisements, menus and banners.
Processing and storing this information is likely to lead to a waste of
resources (storage space, bandwidth, etc.). Similarly, identifying the main
content is essential for many information retrieval tasks. In this paper, we
present a benchmark suite to test different approaches for template detection
and content extraction. The suite is public, and it contains real heterogeneous
webpages that have been labelled so that different techniques can be suitable
(and automatically) compared.
|
[
{
"version": "v1",
"created": "Mon, 22 Sep 2014 14:21:33 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Sep 2014 23:05:29 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Aug 2018 14:33:01 GMT"
},
{
"version": "v4",
"created": "Mon, 30 Nov 2020 23:19:23 GMT"
},
{
"version": "v5",
"created": "Sun, 17 Jul 2022 01:22:34 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Alarte",
"Julián",
""
],
[
"Silva",
"Josep",
""
]
] |
new_dataset
| 0.999306 |
2003.11461
|
Shihao Xu
|
Shihao Xu, Jing Fang, Xiping Hu, Edith Ngai, Wei Wang, Yi Guo, Victor
C.M. Leung
|
Emotion Recognition From Gait Analyses: Current Research and Future
Directions
| null | null | null | null |
cs.HC cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human gait refers to a daily motion that not only reflects mobility but
can also be used to identify the walker by either human observers or computers.
Recent studies reveal that gait even conveys information about the walker's
emotion. Individuals in different emotion states may show different gait
patterns. The mapping between various emotions and gait patterns provides a new
source for automated emotion recognition. Compared to traditional emotion
detection biometrics, such as facial expression, speech and physiological
parameters, gait is remotely observable, more difficult to imitate, and
requires less cooperation from the subject. These advantages make gait a
promising source for emotion detection. This article reviews current research
on gait-based emotion detection, particularly on how gait parameters can be
affected by different emotion states and how the emotion states can be
recognized through distinct gait patterns. We focus on the detailed methods and
techniques applied in the whole process of emotion recognition: data
collection, preprocessing, and classification. Finally, we discuss possible
future developments of efficient and effective gait-based emotion recognition
using state-of-the-art techniques in intelligent computation and big data.
|
[
{
"version": "v1",
"created": "Fri, 13 Mar 2020 08:22:33 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Aug 2020 11:28:02 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Aug 2020 01:39:01 GMT"
},
{
"version": "v4",
"created": "Sat, 16 Jul 2022 02:18:32 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Xu",
"Shihao",
""
],
[
"Fang",
"Jing",
""
],
[
"Hu",
"Xiping",
""
],
[
"Ngai",
"Edith",
""
],
[
"Wang",
"Wei",
""
],
[
"Guo",
"Yi",
""
],
[
"Leung",
"Victor C. M.",
""
]
] |
new_dataset
| 0.967147 |
2011.04178
|
Mostafa Hussien
|
Mostafa Hussien, Kim Khoa Nguyen, Mohamed Cheriet
|
PRVNet: A Novel Partially-Regularized Variational Autoencoders for
Massive MIMO CSI Feedback
| null | null | null | null |
cs.NI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a multiple-input multiple-output frequency-division duplexing (MIMO-FDD)
system, the user equipment (UE) sends the downlink channel state information
(CSI) to the base station to report link status. Due to the complexity of MIMO
systems, the overhead incurred in sending this information negatively affects
the system bandwidth. Although this problem has been widely considered in the
literature, prior work generally assumes an ideal feedback channel. In this
paper, we introduce PRVNet, a neural network architecture inspired by
variational autoencoders (VAE) to compress the CSI matrix before sending it
back to the base station under noisy channel conditions. Moreover, we propose a
customized loss function that best suits the special characteristics of the
problem being addressed. We also introduce an additional regularization
hyperparameter for the learning objective, which is crucial for achieving
competitive performance. In addition, we provide an efficient way to tune this
hyperparameter using KL-annealing. Experimental results show the proposed model
outperforms the benchmark models, including two deep learning-based models, under a
noise-free feedback channel assumption. In addition, the proposed model
achieves an outstanding performance under different noise levels for additive
white Gaussian noise feedback channels.
|
[
{
"version": "v1",
"created": "Mon, 9 Nov 2020 04:07:45 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2022 14:50:23 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Hussien",
"Mostafa",
""
],
[
"Nguyen",
"Kim Khoa",
""
],
[
"Cheriet",
"Mohamed",
""
]
] |
new_dataset
| 0.95328 |
2106.05306
|
Yifei Li
|
Yifei Li, Tao Du, Kui Wu, Jie Xu, Wojciech Matusik
|
DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact
| null |
ACM Transactions on Graphics (TOG), 2022
|
10.1145/3527660
| null |
cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Cloth simulation has wide applications in computer animation, garment design,
and robot-assisted dressing. This work presents a differentiable cloth
simulator whose additional gradient information facilitates cloth-related
applications. Our differentiable simulator extends a state-of-the-art cloth
simulator based on Projective Dynamics (PD) and with dry frictional contact. We
draw inspiration from previous work to propose a fast and novel method for
deriving gradients in PD-based cloth simulation with dry frictional contact.
Furthermore, we conduct a comprehensive analysis and evaluation of the
usefulness of gradients in contact-rich cloth simulation. Finally, we
demonstrate the efficacy of our simulator in a number of downstream
applications, including system identification, trajectory optimization for
assisted dressing, closed-loop control, inverse design, and real-to-sim
transfer. We observe a substantial speedup obtained from using our gradient
information in solving most of these applications.
|
[
{
"version": "v1",
"created": "Wed, 9 Jun 2021 18:02:10 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Oct 2021 18:58:01 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Jul 2022 16:07:04 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Li",
"Yifei",
""
],
[
"Du",
"Tao",
""
],
[
"Wu",
"Kui",
""
],
[
"Xu",
"Jie",
""
],
[
"Matusik",
"Wojciech",
""
]
] |
new_dataset
| 0.999018 |
2107.10836
|
Mohammad Javad Amiri
|
Mohammad Javad Amiri, Boon Thau Loo, Divyakant Agrawal, Amr El Abbadi
|
Qanaat: A Scalable Multi-Enterprise Permissioned Blockchain System with
Confidentiality Guarantees
| null |
Proceedings of the VLDB Endowment 15, no. 11 (2022)
| null | null |
cs.DB
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Today's large-scale data management systems need to address distributed
applications' confidentiality and scalability requirements among a set of
collaborative enterprises. This paper presents Qanaat, a scalable
multi-enterprise permissioned blockchain system that guarantees the
confidentiality of enterprises in collaboration workflows. Qanaat presents data
collections that enable any subset of enterprises involved in a collaboration
workflow to keep their collaboration private from other enterprises. A
transaction ordering scheme is also presented to enforce only the necessary and
sufficient constraints on transaction order to guarantee data consistency.
Furthermore, Qanaat supports data consistency across collaboration workflows
where an enterprise can participate in different collaboration workflows with
different sets of enterprises. Finally, Qanaat presents a suite of consensus
protocols to support intra-shard and cross-shard transactions within or across
enterprises.
|
[
{
"version": "v1",
"created": "Thu, 22 Jul 2021 17:50:31 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jul 2022 21:09:29 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Amiri",
"Mohammad Javad",
""
],
[
"Loo",
"Boon Thau",
""
],
[
"Agrawal",
"Divyakant",
""
],
[
"Abbadi",
"Amr El",
""
]
] |
new_dataset
| 0.991738 |
2108.04212
|
Daochen Zha
|
Daochen Zha, Zaid Pervaiz Bhat, Yi-Wei Chen, Yicheng Wang, Sirui Ding,
Jiaben Chen, Kwei-Herng Lai, Mohammad Qazim Bhat, Anmoll Kumar Jain, Alfredo
Costilla Reyes, Na Zou, Xia Hu
|
AutoVideo: An Automated Video Action Recognition System
|
Accepted by IJCAI https://github.com/datamllab/autovideo
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Action recognition is an important task for video understanding with broad
applications. However, developing an effective action recognition solution
often requires extensive engineering efforts in building and testing different
combinations of the modules and their hyperparameters. In this demo, we present
AutoVideo, a Python system for automated video action recognition. AutoVideo
features 1) a highly modular and extendable infrastructure following the
standard pipeline language, 2) an exhaustive list of primitives for pipeline
construction, 3) data-driven tuners to save the effort of pipeline tuning, and
4) an easy-to-use Graphical User Interface (GUI). AutoVideo is released under the MIT
license at https://github.com/datamllab/autovideo
|
[
{
"version": "v1",
"created": "Mon, 9 Aug 2021 17:53:32 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Aug 2021 00:49:59 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Oct 2021 15:38:31 GMT"
},
{
"version": "v4",
"created": "Sun, 17 Jul 2022 00:17:49 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Zha",
"Daochen",
""
],
[
"Bhat",
"Zaid Pervaiz",
""
],
[
"Chen",
"Yi-Wei",
""
],
[
"Wang",
"Yicheng",
""
],
[
"Ding",
"Sirui",
""
],
[
"Chen",
"Jiaben",
""
],
[
"Lai",
"Kwei-Herng",
""
],
[
"Bhat",
"Mohammad Qazim",
""
],
[
"Jain",
"Anmoll Kumar",
""
],
[
"Reyes",
"Alfredo Costilla",
""
],
[
"Zou",
"Na",
""
],
[
"Hu",
"Xia",
""
]
] |
new_dataset
| 0.996448 |
2109.05729
|
Yunfan Shao
|
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Hang Yan, Fei Yang,
Li Zhe, Hujun Bao, Xipeng Qiu
|
CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language
Understanding and Generation
|
Code is available at https://github.com/fastnlp/CPT
| null |
10.1007/s11432-021-3536-5
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we take advantage of previous pre-trained models (PTMs)
and propose a novel Chinese Pre-trained Unbalanced Transformer (CPT). Different
from previous Chinese PTMs, CPT is designed to utilize the shared knowledge
between natural language understanding (NLU) and natural language generation
(NLG) to boost the performance. CPT consists of three parts: a shared encoder,
an understanding decoder, and a generation decoder. Two specific decoders with
a shared encoder are pre-trained with masked language modeling (MLM) and
denoising auto-encoding (DAE) tasks, respectively. With the partially shared
architecture and multi-task pre-training, CPT can (1) learn specific knowledge
of both NLU and NLG tasks with its two decoders and (2) be flexibly fine-tuned in ways
that fully exploit the potential of the model. Moreover, the unbalanced Transformer
saves the computational and storage cost, which makes CPT competitive and
greatly accelerates the inference of text generation. Experimental results on a
wide range of Chinese NLU and NLG tasks show the effectiveness of CPT.
|
[
{
"version": "v1",
"created": "Mon, 13 Sep 2021 06:25:45 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Sep 2021 08:35:14 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Oct 2021 13:22:19 GMT"
},
{
"version": "v4",
"created": "Mon, 18 Jul 2022 08:19:30 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Shao",
"Yunfan",
""
],
[
"Geng",
"Zhichao",
""
],
[
"Liu",
"Yitao",
""
],
[
"Dai",
"Junqi",
""
],
[
"Yan",
"Hang",
""
],
[
"Yang",
"Fei",
""
],
[
"Zhe",
"Li",
""
],
[
"Bao",
"Hujun",
""
],
[
"Qiu",
"Xipeng",
""
]
] |
new_dataset
| 0.957988 |
2110.15621
|
Shaoxiong Ji
|
Shaoxiong Ji, Tianlin Zhang, Luna Ansari, Jie Fu, Prayag Tiwari, Erik
Cambria
|
MentalBERT: Publicly Available Pretrained Language Models for Mental
Healthcare
| null |
Proceedings of the Language Resources and Evaluation Conference
(LREC), 2022
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Mental health is a critical issue in modern society, and mental disorders
could sometimes turn to suicidal ideation without adequate treatment. Early
detection of mental disorders and suicidal ideation from social content
provides a potential way for effective social intervention. Recent advances in
pretrained contextualized language representations have promoted the
development of several domain-specific pretrained models and facilitated
several downstream applications. However, there are no existing pretrained
language models for mental healthcare. This paper trains and releases two
pretrained masked language models, i.e., MentalBERT and MentalRoBERTa, to
benefit machine learning for the mental healthcare research community. Besides,
we evaluate our trained domain-specific models and several variants of
pretrained language models on several mental disorder detection benchmarks and
demonstrate that language representations pretrained in the target domain
improve the performance of mental health detection tasks.
|
[
{
"version": "v1",
"created": "Fri, 29 Oct 2021 08:36:47 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Ji",
"Shaoxiong",
""
],
[
"Zhang",
"Tianlin",
""
],
[
"Ansari",
"Luna",
""
],
[
"Fu",
"Jie",
""
],
[
"Tiwari",
"Prayag",
""
],
[
"Cambria",
"Erik",
""
]
] |
new_dataset
| 0.997518 |
2111.02291
|
Hamidreza Kamkari
|
Yuan Gao, Hamidreza Kamkari, Andreas Karrenbauer, Kurt Mehlhorn,
Mohammadamin Sharifi
|
Physarum Inspired Dynamics to Solve Semi-Definite Programs
| null | null | null | null |
cs.DS math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Physarum Polycephalum is a slime mold that can solve shortest path problems.
A mathematical model based on Physarum's behavior, known as the Physarum
Directed Dynamics, can solve positive linear programs. In this paper, we
present a family of Physarum-based dynamics extending the previous work and
introduce a new algorithm to solve positive Semi-Definite Programs (SDP). The
Physarum dynamics are governed by orthogonal projections (w.r.t. time-dependent
scalar products) on the affine subspace defined by the linear constraints. We
present a natural generalization of the scalar products used in the LP case to
the matrix space for SDPs, which boils down to the linear case when all
matrices in the SDP are diagonal, thus representing an LP. We investigate the
behavior of the induced dynamics theoretically and experimentally, highlight
challenges arising from the non-commutative nature of matrix products, and
prove soundness and convergence under mild conditions. Moreover, we consider a
more abstract view on the dynamics that suggests a slight variation to
guarantee unconditional soundness and convergence-to-optimality. By simulating
these dynamics using suitable discretizations, one obtains numerical algorithms
for solving positive SDPs, which have applications in discrete optimization,
e.g., for computing the Goemans-Williamson approximation for MaxCut or the
Lovasz theta number for determining the clique/chromatic number in perfect
graphs.
|
[
{
"version": "v1",
"created": "Wed, 3 Nov 2021 15:23:31 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Jul 2022 13:01:12 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Jul 2022 15:49:19 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Gao",
"Yuan",
""
],
[
"Kamkari",
"Hamidreza",
""
],
[
"Karrenbauer",
"Andreas",
""
],
[
"Mehlhorn",
"Kurt",
""
],
[
"Sharifi",
"Mohammadamin",
""
]
] |
new_dataset
| 0.999371 |
2111.06705
|
Chenghao Feng
|
Chenghao Feng, Jiaqi Gu, Hanqing Zhu, Zhoufeng Ying, Zheng Zhao, David
Z. Pan and Ray T. Chen
|
A compact butterfly-style silicon photonic-electronic neural chip for
hardware-efficient deep learning
|
17 pages,5 figures
| null | null | null |
cs.ET cs.LG physics.app-ph physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The optical neural network (ONN) is a promising hardware platform for
next-generation neurocomputing due to its high parallelism, low latency, and
low energy consumption. Previous ONN architectures are mainly designed for
general matrix multiplication (GEMM), leading to unnecessarily large area cost
and high control complexity. Here, we move beyond classical GEMM-based ONNs and
propose an optical subspace neural network (OSNN) architecture, which trades
the universality of weight representation for lower optical component usage,
area cost, and energy consumption. We devise a butterfly-style
photonic-electronic neural chip to implement our OSNN with up to 7x fewer
trainable optical components compared to GEMM-based ONNs. Additionally, a
hardware-aware training framework is provided to minimize the required device
programming precision, lessen the chip area, and boost the noise robustness. We
experimentally demonstrate the utility of our neural chip in practical image
recognition tasks, showing that a measured accuracy of 94.16% can be achieved
in hand-written digit recognition tasks with 3-bit weight programming
precision.
|
[
{
"version": "v1",
"created": "Thu, 11 Nov 2021 06:34:05 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jul 2022 05:14:21 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Feng",
"Chenghao",
""
],
[
"Gu",
"Jiaqi",
""
],
[
"Zhu",
"Hanqing",
""
],
[
"Ying",
"Zhoufeng",
""
],
[
"Zhao",
"Zheng",
""
],
[
"Pan",
"David Z.",
""
],
[
"Chen",
"Ray T.",
""
]
] |
new_dataset
| 0.997654 |
2111.14448
|
Eric Zhongcong Xu
|
Eric Zhongcong Xu, Zeyang Song, Satoshi Tsutsui, Chao Feng, Mang Ye,
Mike Zheng Shou
|
AVA-AVD: Audio-Visual Speaker Diarization in the Wild
|
ACMMM 2022
| null |
10.1145/3503161.3548027
| null |
cs.CV cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Audio-visual speaker diarization aims at detecting "who spoke when" using
both auditory and visual signals. Existing audio-visual diarization datasets
are mainly focused on indoor environments like meeting rooms or news studios,
which are quite different from in-the-wild videos in many scenarios such as
movies, documentaries, and audience sitcoms. To develop diarization methods for
these challenging videos, we create the AVA Audio-Visual Diarization (AVA-AVD)
dataset. Our experiments demonstrate that adding AVA-AVD into training set can
produce significantly better diarization models for in-the-wild videos despite
that the data is relatively small. Moreover, this benchmark is challenging due
to the diverse scenes, complicated acoustic conditions, and completely
off-screen speakers. As a first step towards addressing the challenges, we
design the Audio-Visual Relation Network (AVR-Net) which introduces a simple
yet effective modality mask to capture discriminative information based on face
visibility. Experiments show that our method can not only outperform
state-of-the-art methods but is also more robust as the ratio of off-screen
speakers varies. Our data and code have been made publicly available at
https://github.com/showlab/AVA-AVD.
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 11:02:41 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Dec 2021 11:17:30 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Dec 2021 09:38:10 GMT"
},
{
"version": "v4",
"created": "Wed, 13 Jul 2022 02:55:35 GMT"
},
{
"version": "v5",
"created": "Sat, 16 Jul 2022 14:40:40 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Xu",
"Eric Zhongcong",
""
],
[
"Song",
"Zeyang",
""
],
[
"Tsutsui",
"Satoshi",
""
],
[
"Feng",
"Chao",
""
],
[
"Ye",
"Mang",
""
],
[
"Shou",
"Mike Zheng",
""
]
] |
new_dataset
| 0.999167 |
2112.04054
|
Pranav Kadam
|
Pranav Kadam, Min Zhang, Jiahao Gu, Shan Liu, C.-C. Jay Kuo
|
GreenPCO: An Unsupervised Lightweight Point Cloud Odometry Method
|
10 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual odometry aims to track the incremental motion of an object using the
information captured by visual sensors. In this work, we study the point cloud
odometry problem, where only the point cloud scans obtained by the LiDAR (Light
Detection And Ranging) are used to estimate the object's motion trajectory. A
lightweight point cloud odometry solution is proposed and named the green point
cloud odometry (GreenPCO) method. GreenPCO is an unsupervised learning method
that predicts object motion by matching features of consecutive point cloud
scans. It consists of three steps. First, a geometry-aware point sampling
scheme is used to select discriminant points from the large point cloud.
Second, the view is partitioned into four regions surrounding the object, and
the PointHop++ method is used to extract point features. Third, point
correspondences are established to estimate object motion between two
consecutive scans. Experiments on the KITTI dataset are conducted to
demonstrate the effectiveness of the GreenPCO method. It is observed that
GreenPCO outperforms benchmarking deep learning methods in accuracy while it
has a significantly smaller model size and less training time.
|
[
{
"version": "v1",
"created": "Wed, 8 Dec 2021 00:24:03 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jul 2022 21:44:47 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Kadam",
"Pranav",
""
],
[
"Zhang",
"Min",
""
],
[
"Gu",
"Jiahao",
""
],
[
"Liu",
"Shan",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
new_dataset
| 0.992262 |
2201.03967
|
Kun Zhou
|
Kun Zhou, Berrak Sisman, Rajib Rana, Bj\"orn W. Schuller, Haizhou Li
|
Emotion Intensity and its Control for Emotional Voice Conversion
|
Accepted by IEEE Transactions on Affective Computing
| null |
10.1109/TAFFC.2022.3175578
| null |
cs.SD cs.CL cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Emotional voice conversion (EVC) seeks to convert the emotional state of an
utterance while preserving the linguistic content and speaker identity. In EVC,
emotions are usually treated as discrete categories overlooking the fact that
speech also conveys emotions with various intensity levels that the listener
can perceive. In this paper, we aim to explicitly characterize and control the
intensity of emotion. We propose to disentangle the speaker style from
linguistic content and encode the speaker style into a style embedding in a
continuous space that forms the prototype of emotion embedding. We further
learn the actual emotion encoder from an emotion-labelled database and study
the use of relative attributes to represent fine-grained emotion intensity. To
ensure emotional intelligibility, we incorporate emotion classification loss
and emotion embedding similarity loss into the training of the EVC network. As
desired, the proposed network controls the fine-grained emotion intensity in
the output speech. Through both objective and subjective evaluations, we
validate the effectiveness of the proposed network for emotional expressiveness
and emotion intensity control.
|
[
{
"version": "v1",
"created": "Mon, 10 Jan 2022 02:11:25 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2022 11:54:25 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Jul 2022 07:50:21 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Zhou",
"Kun",
""
],
[
"Sisman",
"Berrak",
""
],
[
"Rana",
"Rajib",
""
],
[
"Schuller",
"Björn W.",
""
],
[
"Li",
"Haizhou",
""
]
] |
new_dataset
| 0.993555 |
2201.11732
|
Emanuele Bugliarello
|
Emanuele Bugliarello and Fangyu Liu and Jonas Pfeiffer and Siva Reddy
and Desmond Elliott and Edoardo Maria Ponti and Ivan Vuli\'c
|
IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and
Languages
|
ICML 2022
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reliable evaluation benchmarks designed for replicability and
comprehensiveness have driven progress in machine learning. Due to the lack of
a multilingual benchmark, however, vision-and-language research has mostly
focused on English language tasks. To fill this gap, we introduce the
Image-Grounded Language Understanding Evaluation benchmark. IGLUE brings
together - by both aggregating pre-existing datasets and creating new ones -
visual question answering, cross-modal retrieval, grounded reasoning, and
grounded entailment tasks across 20 diverse languages. Our benchmark enables
the evaluation of multilingual multimodal models for transfer learning, not
only in a zero-shot setting, but also in newly defined few-shot learning
setups. Based on the evaluation of the available state-of-the-art models, we
find that translate-test transfer is superior to zero-shot transfer and that
few-shot learning is hard to harness for many tasks. Moreover, downstream
performance is partially explained by the amount of available unlabelled
textual data for pretraining, and only weakly by the typological distance of
target-source languages. We hope to encourage future research efforts in this
area by releasing the benchmark to the community.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 18:53:22 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jul 2022 13:01:43 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Bugliarello",
"Emanuele",
""
],
[
"Liu",
"Fangyu",
""
],
[
"Pfeiffer",
"Jonas",
""
],
[
"Reddy",
"Siva",
""
],
[
"Elliott",
"Desmond",
""
],
[
"Ponti",
"Edoardo Maria",
""
],
[
"Vulić",
"Ivan",
""
]
] |
new_dataset
| 0.999093 |
2202.08449
|
Yiming Li
|
Yiming Li, Dekun Ma, Ziyan An, Zixun Wang, Yiqi Zhong, Siheng Chen,
Chen Feng
|
V2X-Sim: Multi-Agent Collaborative Perception Dataset and Benchmark for
Autonomous Driving
|
2022 IEEE Robotics and Automation Letters (RA-L) (The extended
abstract is presented at 2021 IEEE International Conference on Computer
Vision (ICCV) Simulation Technology for Embodied AI Workshop)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicle-to-everything (V2X) communication techniques enable the collaboration
between vehicles and many other entities in the neighboring environment, which
could fundamentally improve the perception system for autonomous driving.
However, the lack of a public dataset significantly restricts the research
progress of collaborative perception. To fill this gap, we present V2X-Sim, a
comprehensive simulated multi-agent perception dataset for V2X-aided autonomous
driving. V2X-Sim provides: (1) multi-agent sensor recordings from the
road-side unit (RSU) and multiple vehicles that enable collaborative
perception, (2) multi-modality sensor streams that facilitate multi-modality
perception, and (3) diverse ground truths that support various perception
tasks. Meanwhile, we build an open-source testbed and provide a benchmark for
the state-of-the-art collaborative perception algorithms on three tasks,
including detection, tracking and segmentation. V2X-Sim seeks to stimulate
collaborative perception research for autonomous driving before realistic
datasets become widely available. Our dataset and code are available at
https://ai4ce.github.io/V2X-Sim/.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 05:14:02 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Jul 2022 02:56:25 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Li",
"Yiming",
""
],
[
"Ma",
"Dekun",
""
],
[
"An",
"Ziyan",
""
],
[
"Wang",
"Zixun",
""
],
[
"Zhong",
"Yiqi",
""
],
[
"Chen",
"Siheng",
""
],
[
"Feng",
"Chen",
""
]
] |
new_dataset
| 0.999795 |
2202.11094
|
Jiarui Xu
|
Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel,
Jan Kautz, Xiaolong Wang
|
GroupViT: Semantic Segmentation Emerges from Text Supervision
|
CVPR 2022. Project page and code: https://jerryxu.net/GroupViT
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Grouping and recognition are important components of visual scene
understanding, e.g., for object detection and semantic segmentation. With
end-to-end deep learning systems, grouping of image regions usually happens
implicitly via top-down supervision from pixel-level recognition labels.
Instead, in this paper, we propose to bring back the grouping mechanism into
deep networks, which allows semantic segments to emerge automatically with only
text supervision. We propose a hierarchical Grouping Vision Transformer
(GroupViT), which goes beyond the regular grid structure representation and
learns to group image regions into progressively larger arbitrary-shaped
segments. We train GroupViT jointly with a text encoder on a large-scale
image-text dataset via contrastive losses. With only text supervision and
without any pixel-level annotations, GroupViT learns to group together semantic
regions and successfully transfers to the task of semantic segmentation in a
zero-shot manner, i.e., without any further fine-tuning. It achieves a
zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on
PASCAL Context datasets, and performs competitively with state-of-the-art
transfer-learning methods requiring greater levels of supervision. We
open-source our code at https://github.com/NVlabs/GroupViT.
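For readers unfamiliar with contrastive image-text training, the sketch below shows a generic CLIP-style symmetric InfoNCE loss over a batch of paired image and text embeddings; it illustrates the general recipe rather than GroupViT's exact losses, and the temperature value and function name are assumptions.

import torch
import torch.nn.functional as F

def image_text_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Normalise embeddings so the dot product is a cosine similarity.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Symmetric cross-entropy: match each image to its caption and vice versa.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)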
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 18:56:04 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 00:43:22 GMT"
},
{
"version": "v3",
"created": "Mon, 23 May 2022 00:57:19 GMT"
},
{
"version": "v4",
"created": "Tue, 5 Jul 2022 23:05:39 GMT"
},
{
"version": "v5",
"created": "Mon, 18 Jul 2022 05:04:01 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Xu",
"Jiarui",
""
],
[
"De Mello",
"Shalini",
""
],
[
"Liu",
"Sifei",
""
],
[
"Byeon",
"Wonmin",
""
],
[
"Breuel",
"Thomas",
""
],
[
"Kautz",
"Jan",
""
],
[
"Wang",
"Xiaolong",
""
]
] |
new_dataset
| 0.969877 |
2202.12613
|
Kailun Yang
|
Ze Wang, Kailun Yang, Hao Shi, Peng Li, Fei Gao, Kaiwei Wang
|
LF-VIO: A Visual-Inertial-Odometry Framework for Large Field-of-View
Cameras with Negative Plane
|
Accepted to IROS 2022. Dataset and code are publicly available at
https://github.com/flysoaryun/LF-VIO
| null | null | null |
cs.CV cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual-inertial-odometry has attracted extensive attention in the field of
autonomous driving and robotics. The size of Field of View (FoV) plays an
important role in Visual-Odometry (VO) and Visual-Inertial-Odometry (VIO), as a
large FoV enables the perception of a wide range of surrounding scene elements and
features. However, when the camera's field of view reaches the negative half
plane, one cannot simply use [u,v,1]^T to represent the image feature points
anymore. To tackle this issue, we propose LF-VIO, a real-time VIO framework for
cameras with extremely large FoV. We leverage a three-dimensional vector with
unit length to represent feature points, and design a series of algorithms to
overcome this challenge. To address the scarcity of panoramic visual odometry
datasets with ground-truth location and pose, we present the PALVIO dataset,
collected with a Panoramic Annular Lens (PAL) system with an entire FoV of
360{\deg}x(40{\deg}-120{\deg}) and an IMU sensor. With a comprehensive variety
of experiments, the proposed LF-VIO is verified on both the established PALVIO
benchmark and a public fisheye camera dataset with a FoV of
360{\deg}x(0{\deg}-93.5{\deg}). LF-VIO outperforms state-of-the-art
visual-inertial-odometry methods. Our dataset and code are made publicly
available at https://github.com/flysoaryun/LF-VIO.
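As a small illustration of the representation issue mentioned above, the sketch below converts a camera-frame point into a unit-length 3D bearing vector; it is a generic sketch of the idea, not LF-VIO's implementation, and the function name and example point are made up.

import numpy as np

def bearing_vector(point_cam):
    # A landmark in camera coordinates is represented by its direction only,
    # as a unit-length 3D vector. Unlike the homogeneous form [u, v, 1]^T,
    # this stays valid when the point lies on the negative half plane (z <= 0).
    p = np.asarray(point_cam, dtype=float)
    return p / np.linalg.norm(p)

# A point behind the image plane is still representable:
print(bearing_vector([0.2, -0.1, -0.5]))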
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 11:03:31 GMT"
},
{
"version": "v2",
"created": "Sun, 8 May 2022 14:26:45 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Jul 2022 12:27:59 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Wang",
"Ze",
""
],
[
"Yang",
"Kailun",
""
],
[
"Shi",
"Hao",
""
],
[
"Li",
"Peng",
""
],
[
"Gao",
"Fei",
""
],
[
"Wang",
"Kaiwei",
""
]
] |
new_dataset
| 0.999565 |
2203.07553
|
Jisan Mahmud
|
Jisan Mahmud, Jan-Michael Frahm
|
VPFusion: Joint 3D Volume and Pixel-Aligned Feature Fusion for Single
and Multi-view 3D Reconstruction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a unified single and multi-view neural implicit 3D
reconstruction framework VPFusion. VPFusion attains high-quality reconstruction
using both a 3D feature volume to capture 3D-structure-aware context and
pixel-aligned image features to capture fine local detail. Existing approaches
use RNN, feature pooling, or attention computed independently in each view for
multi-view fusion. RNNs suffer from long-term memory loss and permutation
variance, while feature pooling or independently computed attention leads to
the representation in each view being unaware of other views before the final
pooling step. In contrast, we show improved multi-view feature fusion by
establishing transformer-based pairwise view association. In particular, we
propose a novel interleaved 3D reasoning and pairwise view association
architecture for feature volume fusion across different views. Using this
structure-aware and multi-view-aware feature volume, we show improved 3D
reconstruction performance compared to existing methods. VPFusion improves the
reconstruction quality further by also incorporating pixel-aligned local image
features to capture fine detail. We verify the effectiveness of VPFusion on the
ShapeNet and ModelNet datasets, where we outperform or perform on par with the
state-of-the-art single and multi-view 3D shape reconstruction methods.
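To illustrate what transformer-based pairwise view association looks like in code, here is a minimal sketch in which every view attends to every other view before fusion; the class name, layer sizes, and residual connection are assumptions, not VPFusion's exact module.

import torch
import torch.nn as nn

class PairwiseViewAssociation(nn.Module):
    # Each view's features attend to every other view's features, so the
    # per-view representations become mutually aware before any pooling.
    def __init__(self, channels, num_heads=4):
        super().__init__()
        # num_heads must divide channels.
        self.attn = nn.MultiheadAttention(channels, num_heads)

    def forward(self, view_feats):
        # view_feats: (num_views, batch, channels)
        fused, _ = self.attn(view_feats, view_feats, view_feats)
        return fused + view_feats   # residual keeps per-view detail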
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 23:30:58 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Jul 2022 21:46:06 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Mahmud",
"Jisan",
""
],
[
"Frahm",
"Jan-Michael",
""
]
] |
new_dataset
| 0.997976 |
2203.10694
|
Tianrui Guan
|
Divya Kothandaraman, Tianrui Guan, Xijun Wang, Sean Hu, Ming Lin,
Dinesh Manocha
|
FAR: Fourier Aerial Video Recognition
|
ECCV 2022 Poster paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an algorithm, Fourier Activity Recognition (FAR), for UAV video
activity recognition. Our formulation uses a novel Fourier object
disentanglement method to innately separate out the human agent (which is
typically small) from the background. Our disentanglement technique operates in
the frequency domain to characterize the extent of temporal change of spatial
pixels, and exploits the convolution-multiplication properties of the Fourier transform
to map this representation to the corresponding object-background entangled
features obtained from the network. To encapsulate contextual information and
long-range space-time dependencies, we present a novel Fourier Attention
algorithm, which emulates the benefits of self-attention by modeling the
weighted outer product in the frequency domain. Our Fourier attention
formulation uses much fewer computations than self-attention. We have evaluated
our approach on multiple UAV datasets including UAV Human RGB, UAV Human Night,
Drone Action, and NEC Drone. We demonstrate a relative improvement of
8.02%-38.69% in top-1 accuracy over prior works, while running up to 3 times faster.
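The snippet below is a generic frequency-domain token-mixing sketch (in the spirit of FNet-like layers), not the paper's exact Fourier attention or object disentanglement; it only conveys why mixing in the frequency domain avoids forming a quadratic-cost attention matrix. The function name and tensor layout are assumptions.

import torch

def fourier_token_mixing(x):
    # x: (batch, time, channels) real-valued video features.
    # A 2D FFT over the time and channel dimensions mixes information
    # globally without computing pairwise attention weights.
    freq = torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2)
    return freq.real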
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 01:24:53 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2022 04:15:43 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Kothandaraman",
"Divya",
""
],
[
"Guan",
"Tianrui",
""
],
[
"Wang",
"Xijun",
""
],
[
"Hu",
"Sean",
""
],
[
"Lin",
"Ming",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.991359 |
2203.12257
|
Liying Cheng
|
Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si
|
IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument
Mining Tasks
|
11 pages, 3 figures, accepted by ACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditionally, a debate usually requires a manual preparation process,
including reading plenty of articles, selecting the claims, identifying the
stances of the claims, seeking the evidence for the claims, etc. As AI debate
has attracted growing attention in recent years, it is worth exploring methods
to automate the tedious preparation process involved in debating systems. In this work,
we introduce a comprehensive and large dataset named IAM, which can be applied
to a series of argument mining tasks, including claim extraction, stance
classification, evidence extraction, etc. Our dataset is collected from over 1k
articles related to 123 topics. Nearly 70k sentences in the dataset are fully
annotated based on their argument properties (e.g., claims, stances, evidence,
etc.). We further propose two new integrated argument mining tasks associated
with the debate preparation process: (1) claim extraction with stance
classification (CESC) and (2) claim-evidence pair extraction (CEPE). We adopt a
pipeline approach and an end-to-end method for each integrated task separately.
Promising experimental results are reported to show the values and challenges
of our proposed tasks, and motivate future research on argument mining.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 08:07:32 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 03:27:52 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Jul 2022 05:41:40 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Cheng",
"Liying",
""
],
[
"Bing",
"Lidong",
""
],
[
"He",
"Ruidan",
""
],
[
"Yu",
"Qian",
""
],
[
"Zhang",
"Yan",
""
],
[
"Si",
"Luo",
""
]
] |
new_dataset
| 0.999166 |
2204.06535
|
Adithya Pratapa
|
Adithya Pratapa, Rishubh Gupta, Teruko Mitamura
|
Multilingual Event Linking to Wikidata
|
Camera-ready for Multilingual Information Access workshop at NAACL
2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a task of multilingual linking of events to a knowledge base. We
automatically compile a large-scale dataset for this task, comprising 1.8M
mentions across 44 languages referring to over 10.9K events from Wikidata. We
propose two variants of the event linking task: 1) multilingual, where event
descriptions are from the same language as the mention, and 2) crosslingual,
where all event descriptions are in English. On the two proposed tasks, we
compare multiple event linking systems including BM25+ (Lv and Zhai, 2011) and
multilingual adaptations of the biencoder and crossencoder architectures from
BLINK (Wu et al., 2020). In our experiments on the two task variants, we find
both biencoder and crossencoder models significantly outperform the BM25+
baseline. Our results also indicate that the crosslingual task is in general
more challenging than the multilingual task. To test the out-of-domain
generalization of the proposed linking systems, we additionally create a
Wikinews-based evaluation set. We present qualitative analysis highlighting
various aspects captured by the proposed dataset, including the need for
temporal reasoning over context and tackling diverse event descriptions across
languages.
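As a rough sketch of how a biencoder linker scores candidates (a generic illustration in the spirit of BLINK-style retrieval, not the paper's exact model), assuming precomputed mention and event-description embeddings and hypothetical names:

import torch

def rank_candidate_events(mention_emb, event_desc_embs):
    # Biencoder-style retrieval: score every event description by its dot
    # product with the mention embedding and return events best-first.
    scores = event_desc_embs @ mention_emb        # (num_events,)
    return torch.argsort(scores, descending=True)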
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 17:28:23 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jun 2022 03:27:51 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Jul 2022 18:53:32 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Pratapa",
"Adithya",
""
],
[
"Gupta",
"Rishubh",
""
],
[
"Mitamura",
"Teruko",
""
]
] |
new_dataset
| 0.999112 |
2205.12992
|
Alishba Imran
|
David Hanson, Alishba Imran, Gerardo Morales, Vytas Krisciunas, Aditya
Sagi, Aman Malali, Rushali Mohbe, Raviteja Upadrashta
|
Open Arms: Open-Source Arms, Hands & Control
|
Submitted to 36th Conference on Neural Information Processing Systems
(NeurIPS 2022)
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Open Arms is a novel open-source platform of realistic, human-like robotic
hand and arm hardware with 28 degrees of freedom (DoF), designed to extend the
capabilities and accessibility of humanoid robotic grasping and manipulation.
The Open Arms framework includes an open SDK and development environment,
simulation tools, and application development tools to build and operate Open
Arms. This paper describes the hands' controls, sensing, mechanisms, aesthetic
design, and manufacturing, as well as their real-world applications with a teleoperated
nursing robot. From 2015 to 2022, the authors have designed and established the
manufacturing of Open Arms as a low-cost, high-functionality robotic arm
hardware and software framework to serve both humanoid robot applications and
the urgent demand for low-cost prosthetics, as part of the Hanson Robotics
Sophia Robot platform. Using the techniques of consumer product manufacturing,
we set out to define modular, low-cost techniques for approximating the
dexterity and sensitivity of human hands. To demonstrate the dexterity and
control of our hands, we present a Generative Grasping Residual CNN (GGR-CNN)
model that can generate robust antipodal grasps from input images of various
objects at real-time speeds (22 ms). We achieved state-of-the-art accuracy of
92.4% using our model architecture on the standard Cornell Grasping Dataset,
which contains a diverse set of household objects.
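To give a flavour of how a generative grasping network's output becomes a grasp, here is a minimal sketch of the usual pixel-wise parameterisation (quality, angle encoded as cos 2θ / sin 2θ, and width maps); the map names and decoding are assumptions in the style of GG-CNN-type models, not GGR-CNN's exact interface.

import numpy as np

def decode_grasp(quality_map, cos2theta_map, sin2theta_map, width_map):
    # Pick the pixel with the highest predicted grasp quality, then read off
    # the grasp angle (encoded as cos/sin of twice the angle to handle the
    # 180-degree symmetry of antipodal grasps) and the gripper width there.
    idx = np.unravel_index(np.argmax(quality_map), quality_map.shape)
    angle = 0.5 * np.arctan2(sin2theta_map[idx], cos2theta_map[idx])
    width = width_map[idx]
    return idx[0], idx[1], angle, width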
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 15:26:41 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Jul 2022 23:27:06 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Hanson",
"David",
""
],
[
"Imran",
"Alishba",
""
],
[
"Morales",
"Gerardo",
""
],
[
"Krisciunas",
"Vytas",
""
],
[
"Sagi",
"Aditya",
""
],
[
"Malali",
"Aman",
""
],
[
"Mohbe",
"Rushali",
""
],
[
"Upadrashta",
"Raviteja",
""
]
] |
new_dataset
| 0.999621 |
2206.13657
|
Nathan Lepora
|
Nathan F. Lepora, Yijiong Lin, Ben Money-Coomes, John Lloyd
|
DigiTac: A DIGIT-TacTip Hybrid Tactile Sensor for Comparing Low-Cost
High-Resolution Robot Touch
|
7 pages. Published in RA-L and accepted in IROS 2022
|
IEEE Robotics and Automation Letters, 2022
|
10.1109/LRA.2022.3190641
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning combined with high-resolution tactile sensing could lead to
highly capable dexterous robots. However, progress is slow because of the
specialist equipment and expertise. The DIGIT tactile sensor offers low-cost
entry to high-resolution touch using GelSight-type sensors. Here we customize
the DIGIT to have a 3D-printed sensing surface based on the TacTip family of
soft biomimetic optical tactile sensors. The DIGIT-TacTip (DigiTac) enables
direct comparison between these distinct tactile sensor types. For this
comparison, we introduce a tactile robot system comprising a desktop arm,
mounts and 3D-printed test objects. We use tactile servo control with a PoseNet
deep learning model to compare the DIGIT, DigiTac and TacTip for edge- and
surface-following over 3D-shapes. All three sensors performed similarly at pose
prediction, but their constructions led to differing performances at servo
control, offering guidance for researchers selecting or innovating tactile
sensors. All hardware and software for reproducing this study will be openly
released. Project website: www.lepora.com/digitac. Project repository:
www.github.com/nlepora/digitac-design.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 22:53:49 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2022 07:13:25 GMT"
}
] | 2022-07-19T00:00:00 |
[
[
"Lepora",
"Nathan F.",
""
],
[
"Lin",
"Yijiong",
""
],
[
"Money-Coomes",
"Ben",
""
],
[
"Lloyd",
"John",
""
]
] |
new_dataset
| 0.993351 |