Dataset schema (16 columns per record; ⌀ in the original viewer marks nullable fields):

| column | type | notes |
|---|---|---|
| id | string | 9-10 chars |
| submitter | string | 2-52 chars, nullable |
| authors | string | 4-6.51k chars |
| title | string | 4-246 chars |
| comments | string | 1-523 chars, nullable |
| journal-ref | string | 4-345 chars, nullable |
| doi | string | 11-120 chars, nullable |
| report-no | string | 2-243 chars, nullable |
| categories | string | 5-98 chars |
| license | string | 9 classes |
| abstract | string | 33-3.33k chars |
| versions | list | version tag and creation date |
| update_date | timestamp[s] | |
| authors_parsed | list | [last name, first name, suffix] triples |
| prediction | string | 1 class (new_dataset) |
| probability | float64 | 0.95-1 |

Each record below gives these fields in this order, with `|` separating the field values.
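To make the schema concrete, here is a minimal sketch of how such records could be consumed, assuming the table has been exported as JSON lines. The file name `arxiv_predictions.jsonl` and the `load_records` helper are illustrative assumptions, not part of the dataset:

```python
import json

# Field names from the schema above, in record order.
FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "abstract",
    "versions", "update_date", "authors_parsed", "prediction", "probability",
]

def load_records(path):
    """Yield one metadata record (dict) per line of a JSON-lines export."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # Nullable fields (submitter, comments, journal-ref, doi,
            # report-no) may be None; the rest are expected to be present.
            yield {k: record.get(k) for k in FIELDS}

# Example: list high-confidence "new_dataset" predictions.
for rec in load_records("arxiv_predictions.jsonl"):  # hypothetical file name
    if rec["prediction"] == "new_dataset" and rec["probability"] > 0.99:
        print(rec["id"], rec["title"])
```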
2203.10123
|
Ali Hürriyetoğlu
|
Ali Hürriyetoğlu, Osman Mutlu, Fatih Beyhan, Fırat
Duruşan, Ali Safaya, Reyyan Yeniterzi, Erdem Yörük
|
Event Coreference Resolution for Contentious Politics Events
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a dataset for event coreference resolution, which is based on
random samples drawn from multiple sources, languages, and countries. Early
scholarship on event information collection has not quantified the contribution
of event coreference resolution. We prepared and analyzed a representative
multilingual corpus and measured the performance and contribution of the
state-of-the-art event coreference resolution approaches. We found that almost
half of the event mentions in documents co-occur with other event mentions,
which makes it inevitable to obtain erroneous or partial event information. We
showed that event coreference resolution could help improve this situation.
Our contribution sheds light on a challenge that has been overlooked or hard to
study to date. Future event information collection studies can be designed
based on the results we present in this report. The repository for this study
is available at https://github.com/emerging-welfare/ECR4-Contentious-Politics.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 18:50:45 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Hürriyetoğlu",
"Ali",
""
],
[
"Mutlu",
"Osman",
""
],
[
"Beyhan",
"Fatih",
""
],
[
"Duruşan",
"Fırat",
""
],
[
"Safaya",
"Ali",
""
],
[
"Yeniterzi",
"Reyyan",
""
],
[
"Yörük",
"Erdem",
""
]
] |
new_dataset
| 0.999773 |
2203.10209
|
Yuliang Liu
|
Mingxin Huang, Yuliang Liu, Zhenghao Peng, Chongyu Liu, Dahua Lin,
Shenggao Zhu, Nicholas Yuan, Kai Ding, Lianwen Jin
|
SwinTextSpotter: Scene Text Spotting via Better Synergy between Text
Detection and Text Recognition
|
Accepted to appear in CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
End-to-end scene text spotting has attracted great attention in recent years
due to the success of exploiting the intrinsic synergy between scene text
detection and recognition. However, recent state-of-the-art methods usually
incorporate detection and recognition simply by sharing the backbone, which
does not directly take advantage of the feature interaction between the two
tasks. In this paper, we propose a new end-to-end scene text spotting framework
termed SwinTextSpotter. Using a transformer encoder with dynamic head as the
detector, we unify the two tasks with a novel Recognition Conversion mechanism
to explicitly guide text localization through recognition loss. The
straightforward design results in a concise framework that requires neither
additional rectification module nor character-level annotation for the
arbitrarily-shaped text. Qualitative and quantitative experiments on
multi-oriented datasets RoIC13 and ICDAR 2015, arbitrarily-shaped datasets
Total-Text and CTW1500, and multi-lingual datasets ReCTS (Chinese) and VinText
(Vietnamese) demonstrate SwinTextSpotter significantly outperforms existing
methods. Code is available at https://github.com/mxin262/SwinTextSpotter.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 01:14:42 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Huang",
"Mingxin",
""
],
[
"Liu",
"Yuliang",
""
],
[
"Peng",
"Zhenghao",
""
],
[
"Liu",
"Chongyu",
""
],
[
"Lin",
"Dahua",
""
],
[
"Zhu",
"Shenggao",
""
],
[
"Yuan",
"Nicholas",
""
],
[
"Ding",
"Kai",
""
],
[
"Jin",
"Lianwen",
""
]
] |
new_dataset
| 0.991939 |
2203.10213
|
Stefan Zellmann
|
Stefan Zellmann and Giovanni Aguirre and Jürgen P. Schulze
|
Volkit: A Performance-Portable Computer Vision Library for 3D Volumetric
Data
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We present volkit, an open source library with high performance
implementations of image manipulation and computer vision algorithms that focus
on 3D volumetric representations. Volkit implements a cross-platform,
performance-portable API targeting both CPUs and GPUs that defers data and
resource movement and hides them from the application developer using a managed
API. We use volkit to process medical and simulation data that is rendered in
VR, and we have consequently integrated the library into the C++ virtual
reality software CalVR. The paper presents case studies and performance
results, and thereby demonstrates the library's effectiveness and the
efficiency of this
approach.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 01:52:08 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Zellmann",
"Stefan",
""
],
[
"Aguirre",
"Giovanni",
""
],
[
"Schulze",
"Jürgen P.",
""
]
] |
new_dataset
| 0.993898 |
2203.10217
|
Stefan Scherzinger
|
Stefan Scherzinger and Jakob Weinland and Robert Wilbrandt and Pascal
Becker and Arne Roennau and Rüdiger Dillmann
|
A Walking Space Robot for On-Orbit Satellite Servicing: The ReCoBot
|
7 pages, 9 figures, submitted to the 18th IEEE International
Conference on Automation Science and Engineering (CASE)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
A key factor in the economic efficiency of satellites is their availability
in orbit. Replacing standardized building blocks, such as empty fuel tanks or
outdated electronic modules, could greatly extend the satellites' lifetime.
This, however, requires flexible robots that can locomote on the surface of
these satellites for optimal accessibility and manipulation. This paper
introduces ReCoBot, a 7-axis walking space manipulator for locomotion and
manipulation. The robot can connect to compatible structures with its symmetric
ends and provides interfaces for manual teleoperation and motion planning with
a constantly changing base and tip. We build on open-source robotics software
and easily available components to evaluate the overall concept with an
early-stage demonstrator. The proposed manipulator has a length of 1.20 m and a
weight of 10.4 kg and successfully locomotes over a satellite mockup in our lab
environment.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 02:29:11 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Scherzinger",
"Stefan",
""
],
[
"Weinland",
"Jakob",
""
],
[
"Wilbrandt",
"Robert",
""
],
[
"Becker",
"Pascal",
""
],
[
"Roennau",
"Arne",
""
],
[
"Dillmann",
"Rüdiger",
""
]
] |
new_dataset
| 0.999603 |
2203.10244
|
Ahmed Masry
|
Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, Enamul Hoque
|
ChartQA: A Benchmark for Question Answering about Charts with Visual and
Logical Reasoning
|
Accepted by ACL 2022 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Charts are very popular for analyzing data. When exploring charts, people
often ask a variety of complex reasoning questions that involve several logical
and arithmetic operations. They also commonly refer to visual features of a
chart in their questions. However, most existing datasets do not focus on such
complex reasoning questions, as their questions are template-based and answers
come from a fixed vocabulary. In this work, we present a large-scale benchmark
covering 9.6K human-written questions as well as 23.1K questions generated from
human-written chart summaries. To address the unique challenges in our
benchmark involving visual and logical reasoning over charts, we present two
transformer-based models that combine visual features and the data table of the
chart in a unified way to answer questions. While our models achieve the
state-of-the-art results on the previous datasets as well as on our benchmark,
the evaluation also reveals several challenges in answering complex reasoning
questions.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 05:00:30 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Masry",
"Ahmed",
""
],
[
"Long",
"Do Xuan",
""
],
[
"Tan",
"Jia Qing",
""
],
[
"Joty",
"Shafiq",
""
],
[
"Hoque",
"Enamul",
""
]
] |
new_dataset
| 0.999795 |
2203.10324
|
Marcelo Fernandes
|
Marcelo Fernandes, Samuel Ferino, Anny Fernandes, Uira Kulesza,
Eduardo Aranha, Christoph Treude
|
DevOps Education: An Interview Study of Challenges and Recommendations
|
12 pages, 6 figures, 5 tables, ICSE 2022 SEET
| null |
10.1145/3510456.3514152
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Over the last years, the software industry has adopted several DevOps
technologies related to practices such as continuous integration and continuous
delivery. The high demand for DevOps practitioners requires non-trivial
adjustments in traditional software engineering courses and educational
methodologies. This work presents an interview study with 14 DevOps educators
from different universities and countries, aiming to identify the main
challenges and recommendations for DevOps teaching. Our study identified 83
challenges, 185 recommendations, and several association links and conflicts
between them. Our findings can help educators plan, execute and evaluate DevOps
courses. They also highlight several opportunities for researchers to propose
new methods and tools for teaching DevOps.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 13:17:00 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Fernandes",
"Marcelo",
""
],
[
"Ferino",
"Samuel",
""
],
[
"Fernandes",
"Anny",
""
],
[
"Kulesza",
"Uira",
""
],
[
"Aranha",
"Eduardo",
""
],
[
"Treude",
"Christoph",
""
]
] |
new_dataset
| 0.998381 |
2203.10338
|
Hasham Ul Haq
|
Ali Emre Varol, Veysel Kocaman, Hasham Ul Haq, David Talby
|
Understanding COVID-19 News Coverage using Medical NLP
|
Proceedings of the Text2Story'22 Workshop, Stavanger (Norway),
10-April-2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Being a global pandemic, the COVID-19 outbreak received global media
attention. In this study, we analyze news publications from CNN and The
Guardian - two of the world's most influential media organizations. The dataset
includes more than 36,000 articles, analyzed using the clinical and biomedical
Natural Language Processing (NLP) models from the Spark NLP for Healthcare
library, which enables a deeper analysis of medical concepts than previously
achieved. The analysis covers key entities and phrases, observed biases, and
change over time in news coverage by correlating mined medical symptoms,
procedures, drugs, and guidance with commonly mentioned demographic and
occupational groups. Another analysis covers Adverse Drug Events extracted
about drugs and vaccine manufacturers, which, when reported by major news
outlets, have an impact on vaccine hesitancy.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 15:07:46 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Varol",
"Ali Emre",
""
],
[
"Kocaman",
"Veysel",
""
],
[
"Haq",
"Hasham Ul",
""
],
[
"Talby",
"David",
""
]
] |
new_dataset
| 0.9964 |
2203.10346
|
Thai Le
|
Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, Dongwon Lee
|
Perturbations in the Wild: Leveraging Human-Written Text Perturbations
for Realistic Adversarial Attack and Defense
|
Accepted to the 60th Annual Meeting of the Association for
Computational Linguistics (ACL'22), Findings
| null | null | null |
cs.LG cs.CL cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel algorithm, ANTHRO, that inductively extracts over 600K
human-written text perturbations in the wild and leverages them for realistic
adversarial attacks. Unlike existing character-based attacks, which often
deductively hypothesize a set of manipulation strategies, our work is grounded
in actual observations from real-world texts. We find that adversarial texts
generated by ANTHRO achieve the best trade-off between (1) attack success rate,
(2) semantic preservation of the original text, and (3) stealthiness, i.e.,
being indistinguishable from human writing and hence harder to flag as
suspicious. Specifically, our attacks accomplished around 83% and 91% attack
success rates on BERT and RoBERTa, respectively. Moreover, ANTHRO outperformed
the TextBugger baseline with increases of 50% and 40% in semantic preservation
and stealthiness when evaluated by both lay and professional human workers.
ANTHRO can further enhance a BERT classifier's performance in understanding
different variations of human-written toxic texts via adversarial training when
compared to the Perspective API.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 16:00:01 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Le",
"Thai",
""
],
[
"Lee",
"Jooyoung",
""
],
[
"Yen",
"Kevin",
""
],
[
"Hu",
"Yifan",
""
],
[
"Lee",
"Dongwon",
""
]
] |
new_dataset
| 0.974261 |
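Although the mined ANTHRO perturbation dictionary itself is not reproduced in this record, the mechanism the abstract describes can be sketched: look up each word in a table of human-written variants and substitute one at random. The `PERTURBATIONS` table below is a tiny invented stand-in for the real 600K-entry resource:

```python
import random

# Toy stand-in for a mined table of human-written perturbations; the real
# ANTHRO resource contains over 600K entries extracted from web text.
PERTURBATIONS = {
    "you": ["u", "yu", "yoou"],
    "please": ["plz", "pls", "pleease"],
}

def perturb(sentence, rate=0.5, seed=0):
    """Replace known words with a randomly chosen human-written variant."""
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        variants = PERTURBATIONS.get(word.lower())
        if variants and rng.random() < rate:
            out.append(rng.choice(variants))
        else:
            out.append(word)
    return " ".join(out)

print(perturb("could you please review this"))  # e.g. "could u plz review this"
```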
2203.10350
|
Tu Zheng
|
Tu Zheng, Yifei Huang, Yang Liu, Wenjian Tang, Zheng Yang, Deng Cai,
Xiaofei He
|
CLRNet: Cross Layer Refinement Network for Lane Detection
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lanes are critical in the vision navigation system of an intelligent vehicle.
Naturally, a lane is a traffic sign with high-level semantics, yet it has a
specific local pattern that needs detailed low-level features to be localized
accurately. Using different feature levels is of great importance for accurate
lane detection, but this is still under-explored. In this work, we present Cross
Layer Refinement Network (CLRNet) aiming at fully utilizing both high-level and
low-level features in lane detection. In particular, it first detects lanes
with high-level semantic features and then performs refinement based on low-level
features. In this way, we can exploit more contextual information to detect
lanes while leveraging local detailed lane features to improve localization
accuracy. We present ROIGather to gather global context, which further enhances
the feature representation of lanes. In addition to our novel network design,
we introduce Line IoU loss which regresses the lane line as a whole unit to
improve the localization accuracy. Experiments demonstrate that the proposed
method greatly outperforms the state-of-the-art lane detection approaches.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 16:11:35 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Zheng",
"Tu",
""
],
[
"Huang",
"Yifei",
""
],
[
"Liu",
"Yang",
""
],
[
"Tang",
"Wenjian",
""
],
[
"Yang",
"Zheng",
""
],
[
"Cai",
"Deng",
""
],
[
"He",
"Xiaofei",
""
]
] |
new_dataset
| 0.995675 |
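The Line IoU idea mentioned in this abstract can be illustrated with a small sketch: a lane is a sequence of x-coordinates sampled at fixed rows, each point is widened into a short horizontal segment, and overlaps are accumulated over all rows so the lane is regressed as a whole unit. The radius `e` and the loss form `1 - LIoU` are plausible simplifications here, not the official CLRNet implementation:

```python
def line_iou(xs_pred, xs_gt, e=1.5):
    """Simplified Line IoU between two lanes given as per-row x-coordinates.

    Each point is widened into the segment [x - e, x + e]; intersections and
    unions are accumulated over all sampled rows, so the lane is treated as a
    whole unit rather than as independent points.
    """
    inter, union = 0.0, 0.0
    for xp, xg in zip(xs_pred, xs_gt):
        lo = max(xp - e, xg - e)
        hi = min(xp + e, xg + e)
        inter += max(hi - lo, 0.0)
        union += max(xp + e, xg + e) - min(xp - e, xg - e)
    return inter / union if union > 0 else 0.0

# A regression loss would then be 1 - LIoU (a sketch, not the official code).
print(1.0 - line_iou([10.0, 11.0, 12.0], [10.5, 11.5, 12.5]))
```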
2203.10390
|
Song Han
|
Zelin Yun, Peng Wu, Shengli Zhou, Aloysius K. Mok, Mark Nixon, Song
Han
|
RT-WiFi on Software-Defined Radio: Design and Implementation
|
16 pages
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Applying high-speed real-time wireless technologies in industrial
applications has great potential to reduce the deployment and maintenance
costs compared to their wired counterparts. Wireless technologies enhance the
mobility and reduce the communication jitter and delay for mobile industrial
equipment, such as mobile collaborative robots. Unfortunately, most existing
wireless solutions employed in industrial fields either cannot support the
desired high-speed communications or cannot guarantee deterministic, real-time
performance. A more recent wireless technology, RT-WiFi, achieves a good
balance between high-speed data rates and deterministic communication
performance. It is, however, developed on commercial off-the-shelf (COTS)
hardware, and takes considerable effort and hardware expertise to maintain and
upgrade. To address these problems, this paper introduces the software-defined
radio (SDR)-based RT-WiFi solution which we call SRT-WiFi. SRT-WiFi provides
full-stack configurability for high-speed real-time wireless communications. We
present the overall system architecture of SRT-WiFi and discuss its key
functions which achieve better timing performance and solve the queue
management and rate adaptation issues compared to COTS hardware-based RT-WiFi.
To achieve effective network management with rate adaptation in multi-cluster
SRT-WiFi, a novel scheduling problem is formulated and an effective algorithm
is proposed to solve the problem. A multi-cluster SRT-WiFi testbed is developed
to validate the design, and extensive experiments are performed to evaluate the
performance at both device and system levels.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 20:36:36 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Yun",
"Zelin",
""
],
[
"Wu",
"Peng",
""
],
[
"Zhou",
"Shengli",
""
],
[
"Mok",
"Aloysius K.",
""
],
[
"Nixon",
"Mark",
""
],
[
"Han",
"Song",
""
]
] |
new_dataset
| 0.999336 |
2203.10426
|
Qingkai Fang
|
Qingkai Fang, Rong Ye, Lei Li, Yang Feng, Mingxuan Wang
|
STEMM: Self-learning with Speech-text Manifold Mixup for Speech
Translation
|
ACL 2022 main conference
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
How to learn a better speech representation for end-to-end speech-to-text
translation (ST) with limited labeled data? Existing techniques often attempt
to transfer powerful machine translation (MT) capabilities to ST, but neglect
the representation discrepancy across modalities. In this paper, we propose the
Speech-TExt Manifold Mixup (STEMM) method to calibrate such discrepancy.
Specifically, we mix up the representation sequences of different modalities,
and take both unimodal speech sequences and multimodal mixed sequences as input
to the translation model in parallel, and regularize their output predictions
with a self-learning framework. Experiments on MuST-C speech translation
benchmark and further analysis show that our method effectively alleviates the
cross-modal representation discrepancy, and achieves significant improvements
over a strong baseline on eight translation directions.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 01:49:53 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Fang",
"Qingkai",
""
],
[
"Ye",
"Rong",
""
],
[
"Li",
"Lei",
""
],
[
"Feng",
"Yang",
""
],
[
"Wang",
"Mingxuan",
""
]
] |
new_dataset
| 0.980359 |
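The mixup step described in the STEMM abstract can be sketched in a few lines. This toy version assumes the speech and text embedding sequences are already aligned to the same length (the paper handles alignment explicitly), and simply swaps in the text embedding at randomly chosen positions:

```python
import numpy as np

def manifold_mixup(speech_emb, text_emb, p=0.5, seed=0):
    """Toy speech-text manifold mixup over aligned embedding sequences.

    Each position keeps the speech embedding with probability 1 - p and
    takes the corresponding text embedding with probability p.
    """
    rng = np.random.default_rng(seed)
    take_text = rng.random(len(speech_emb)) < p  # per-position choice
    return np.where(take_text[:, None], text_emb, speech_emb)

speech = np.random.randn(8, 4)  # 8 frames, 4-dim embeddings
text = np.random.randn(8, 4)    # aligned word embeddings (assumed)
mixed = manifold_mixup(speech, text)
# Per the abstract, both the pure speech sequence and the mixed sequence
# would then be fed to the translation model in parallel.
print(mixed.shape)
```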
2203.10456
|
Xiaoke Shen
|
Xiaoke Shen, Ioannis Stamos
|
simCrossTrans: A Simple Cross-Modality Transfer Learning for Object
Detection with ConvNets or Vision Transformers
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Transfer learning is widely used in computer vision (CV) and natural language
processing (NLP), where it achieves great success. Most transfer learning systems are
based on the same modality (e.g. RGB image in CV and text in NLP). However, the
cross-modality transfer learning (CMTL) systems are scarce. In this work, we
study CMTL from 2D to 3D sensors to explore the upper-bound performance of 3D
sensor only systems, which play critical roles in robotic navigation and
perform well in low light scenarios. While most CMTL pipelines from 2D to 3D
vision are complicated and based on Convolutional Neural Networks (ConvNets),
ours is easy to implement and extend, and is based on both ConvNets and Vision
Transformers (ViTs): 1) By converting point clouds to pseudo-images, we can use
an almost identical network from pre-trained models based on 2D images. This
makes our system easy to implement and expand. 2) Recently ViTs have been
showing good performance and robustness to occlusions, one of the key reasons
for poor performance of 3D vision systems. We explored both ViT and ConvNet
with similar model sizes to investigate the performance difference. We name our
approach simCrossTrans: simple cross-modality transfer learning with ConvNets
or ViTs. Experiments on SUN RGB-D dataset show: with simCrossTrans we achieve
$13.2\%$ and $16.1\%$ absolute performance gain based on ConvNets and ViTs
separately. We also observed that the ViT-based model performs $9.7\%$ better
than the ConvNet-based one, showing the power of simCrossTrans with ViT.
simCrossTrans with ViTs surpasses the previous state-of-the-art (SOTA) by a
large margin of $+15.4\%$ mAP50. Compared with the previous 2D detection SOTA
based on RGB images, our depth-image-only system has only a $1\%$ gap. The
code, training/inference
logs and models are publicly available at
https://github.com/liketheflower/simCrossTrans
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 05:03:29 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Shen",
"Xiaoke",
""
],
[
"Stamos",
"Ioannis",
""
]
] |
new_dataset
| 0.999198 |
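The pseudo-image conversion in step 1) can be sketched as a simple pinhole projection of the point cloud into a depth image that a 2D pretrained network can consume. The intrinsics below are made-up placeholders; the paper's actual conversion on SUN RGB-D may differ in detail:

```python
import numpy as np

def to_pseudo_image(points, fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                    width=640, height=480):
    """Project a 3D point cloud (N x 3, camera coordinates) into a 2D depth
    "pseudo-image" so that a 2D pretrained network can consume it.

    Pinhole intrinsics are illustrative placeholders. Points that project to
    the same pixel simply overwrite each other in this sketch.
    """
    img = np.zeros((height, width), dtype=np.float32)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    valid = z > 0
    u = np.round(fx * x[valid] / z[valid] + cx).astype(int)
    v = np.round(fy * y[valid] / z[valid] + cy).astype(int)
    keep = (0 <= u) & (u < width) & (0 <= v) & (v < height)
    img[v[keep], u[keep]] = z[valid][keep]  # keep depth as the pixel value
    return img
```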
2203.10536
|
Anoop Kumar Sinha
|
Fok-Chi-Seng Fok Kow, Anoop Kumar Sinha, Zhang Jin Ming, Bao Songyu,
Jake Tan Jun Kang, Hong Yan Jack Jeffrey, Galina Mihaleva, Nadia Magnenat
Thalmann and Yiyu Cai
|
MIDAS: Multi-sensorial Immersive Dynamic Autonomous System Improves
Motivation of Stroke Affected Patients for Hand Rehabilitation
| null | null |
10.36227/techrxiv.17006641.v1, 2021
| null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The majority of stroke survivors are left with poorly functioning paretic hands.
Current rehabilitation devices have failed to motivate the patients enough to
continue rehabilitation exercises. The objective of this project, MIDAS
(Multi-sensorial Immersive Dynamic Autonomous System) is a proof of concept by
using an immersive system to improve motivation of stroke patients for hand
rehabilitation. MIDAS is intended for stroke patients who suffer from light to
mild stroke. MIDAS is lightweight and portable. It consists of a hand
exoskeleton subsystem, a Virtual Reality (VR) subsystem, and an olfactory
subsystem. Altogether, MIDAS engages four out of five senses during
rehabilitation. To evaluate the efficacy of MIDAS a pilot study consisting of
three sessions is carried out on five stroke affected patients. Subsystems of
MIDAS are added progressively in each session. The game environment, sonic
effects, and scent released is carefully chosen to enhance the immersive
experience. 60% of the scores of user experience are above 40 (out of 56). 96%
Self Rehabilitation Motivation Scale (SRMS) rating shows that the participants
are motivated to use MIDAS and 87% rating shows that MIDAS is exciting for
rehabilitation. Participants experienced elevated motivation to continue stroke
rehabilitation using MIDAS and no undesired side effects were reported.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 12:00:05 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Kow",
"Fok-Chi-Seng Fok",
""
],
[
"Sinha",
"Anoop Kumar",
""
],
[
"Ming",
"Zhang Jin",
""
],
[
"Songyu",
"Bao",
""
],
[
"Kang",
"Jake Tan Jun",
""
],
[
"Jeffrey",
"Hong Yan Jack",
""
],
[
"Mihaleva",
"Galina",
""
],
[
"Thalmann",
"Nadia Magnenat",
""
],
[
"Cai",
"Yiyu",
""
]
] |
new_dataset
| 0.99772 |
2203.10584
|
Xiaoqing Tan
|
Shentong Mo, Jingfei Xia, Xiaoqing Tan, Bhiksha Raj
|
Point3D: tracking actions as moving points with 3D CNNs
|
Accepted by the 32nd British Machine Vision Conference (BMVC 2021)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatio-temporal action recognition has been a challenging task that involves
detecting where and when actions occur. Current state-of-the-art action
detectors are mostly anchor-based, requiring sensitive anchor designs and huge
computations due to calculating large numbers of anchor boxes. Motivated by
nascent anchor-free approaches, we propose Point3D, a flexible and
computationally efficient network with high precision for spatio-temporal
action recognition. Our Point3D consists of a Point Head for action
localization and a 3D Head for action classification. Firstly, Point Head is
used to track center points and knot key points of humans to localize the
bounding box of an action. These location features are then piped into a
time-wise attention to learn long-range dependencies across frames. The 3D Head
is later deployed for the final action classification. Our Point3D achieves
state-of-the-art performance on the JHMDB, UCF101-24, and AVA benchmarks in
terms of frame-mAP and video-mAP. Comprehensive ablation studies also
demonstrate the effectiveness of each module proposed in our Point3D.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 15:41:47 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Mo",
"Shentong",
""
],
[
"Xia",
"Jingfei",
""
],
[
"Tan",
"Xiaoqing",
""
],
[
"Raj",
"Bhiksha",
""
]
] |
new_dataset
| 0.999397 |
2203.10585
|
Weiwei Wan
|
Shogo Hayakawa, Weiwei Wan, Keisuke Koyama, Kensuke Harada
|
A Dual-Arm Robot that Manipulates Heavy Plates Cooperatively with a
Vacuum Lifter
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
A vacuum lifter is widely used to hold and pick up large, heavy, and flat
objects. Conventionally, when using a vacuum lifter, a human worker watches the
state of a running vacuum lifter and adjusts the object's pose to maintain
balance. In this work, we propose using a dual-arm robot to replace the human
workers and develop planning and control methods for a dual-arm robot to raise
a heavy plate with the help of a vacuum lifter. The methods help the robot
determine its actions by considering the vacuum lifter's suction position and
suction force limits. The essence of the methods is two-fold. First, we build a
Manipulation State Graph (MSG) to store the weighted logical relations of
various plate contact states and robot/vacuum lifter configurations, and search
the graph to plan efficient and low-cost robot manipulation sequences. Second,
we develop a velocity-based impedance controller to coordinate the robot and
the vacuum lifter when lifting an object. With its help, a robot can follow the
vacuum lifter's motion and realize compliant robot-vacuum lifter collaboration.
The proposed planning and control methods are investigated using real-world
experiments. The results show that a robot can effectively and flexibly work
together with a vacuum lifter to manipulate large and heavy plate-like objects
with the methods' support.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 15:47:38 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Hayakawa",
"Shogo",
""
],
[
"Wan",
"Weiwei",
""
],
[
"Koyama",
"Keisuke",
""
],
[
"Harada",
"Kensuke",
""
]
] |
new_dataset
| 0.973395 |
2203.10621
|
Wanshui Li
|
Wanshui Li, Yifan Bai, Jiaxuan Lu, Kexin Yi
|
Immersive Text Game and Personality Classification
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We designed and built a game called \textit{Immersive Text Game}, which
allows the player to choose a story and a character, and interact with other
characters in the story in an immersive manner of dialogues. The game is based
on several latest models, including text generation language model, information
extraction model, commonsense reasoning model, and psychology evaluation model.
In the past, similar text games usually let players choose from a limited set
of actions instead of answering on their own, and what the characters say is
not always determined by the player. Through the combination of these models
with elaborate game mechanics and modes, the player will find novel experiences
while being driven through the storyline.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 18:37:03 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Li",
"Wanshui",
""
],
[
"Bai",
"Yifan",
""
],
[
"Lu",
"Jiaxuan",
""
],
[
"Yi",
"Kexin",
""
]
] |
new_dataset
| 0.994199 |
2203.10626
|
Delmiro Fernandez-Reyes Prof.
|
Petru Manescu, Priya Narayanan, Christopher Bendkowski, Muna Elmi,
Remy Claveau, Vijay Pawar, Biobele J. Brown, Mike Shaw, Anupama Rao, and
Delmiro Fernandez-Reyes
|
Automated Detection of Acute Promyelocytic Leukemia in Blood Films and
Bone Marrow Aspirates with Annotation-free Deep Learning
|
13 pages, 2 tables, 5 figures
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While optical microscopy inspection of blood films and bone marrow aspirates
by a hematologist is a crucial step in establishing diagnosis of acute
leukemia, especially in low-resource settings where other diagnostic modalities
might not be available, the task remains time-consuming and prone to human
inconsistencies. This has an impact especially in cases of Acute Promyelocytic
Leukemia (APL) that require urgent treatment. Integration of automated
computational hematopathology into clinical workflows can improve the
throughput of these services and reduce cognitive human error. However, a major
bottleneck in deploying such systems is a lack of sufficient cell-morphology
object-label annotations to train deep learning models. We overcome this by
leveraging patient diagnostic labels to train weakly-supervised models that
detect different types of acute leukemia. We introduce a deep learning
approach, Multiple Instance Learning for Leukocyte Identification (MILLIE),
able to perform automated reliable analysis of blood films with minimal
supervision. Without being trained to classify individual cells, MILLIE
differentiates between acute lymphoblastic and myeloblastic leukemia in blood
films. More importantly, MILLIE detects APL in blood films (AUC 0.94+/-0.04)
and in bone marrow aspirates (AUC 0.99+/-0.01). MILLIE is a viable solution to
augment the throughput of clinical pathways that require assessment of blood
film microscopy.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 18:53:09 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Manescu",
"Petru",
""
],
[
"Narayanan",
"Priya",
""
],
[
"Bendkowski",
"Christopher",
""
],
[
"Elmi",
"Muna",
""
],
[
"Claveau",
"Remy",
""
],
[
"Pawar",
"Vijay",
""
],
[
"Brown",
"Biobele J.",
""
],
[
"Shaw",
"Mike",
""
],
[
"Rao",
"Anupama",
""
],
[
"Fernandez-Reyes",
"Delmiro",
""
]
] |
new_dataset
| 0.998808 |
2203.10823
|
Marc Schlichting
|
Marc R. Schlichting, Stefan Notter, and Walter Fichter
|
Long Short-Term Memory for Spatial Encoding in Multi-Agent Path Planning
|
For associated source code, see
https://github.com/MarcSchlichting/LSTMSpatialEncoding , For associated video
of flight test, see https://schlichting.page.link/lstm_flight_test , 17
pages, 11 figures
|
AIAA Journal of Guidance, Control, and Dynamics, March 2022
|
10.2514/1.G006129
| null |
cs.RO cs.AI cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning-based path planning for multi-agent systems of varying
size constitutes a research topic with increasing significance as progress in
domains such as urban air mobility and autonomous aerial vehicles continues.
Reinforcement learning with continuous state and action spaces is used to train
a policy network that accommodates desirable path planning behaviors and can be
used for time-critical applications. A Long Short-Term Memory module is
proposed to encode an unspecified number of states for a varying, indefinite
number of agents. The described training strategies and policy architecture
lead to a guidance that scales to an infinite number of agents and unlimited
physical dimensions, although training takes place at a smaller scale. The
guidance is implemented on a low-cost, off-the-shelf onboard computer. The
feasibility of the proposed approach is validated by presenting flight test
results of up to four drones, autonomously navigating collision-free in a
real-world environment.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 09:16:56 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Schlichting",
"Marc R.",
""
],
[
"Notter",
"Stefan",
""
],
[
"Fichter",
"Walter",
""
]
] |
new_dataset
| 0.980378 |
2203.10830
|
Marcos Faundez-Zanuy
|
Jiri Mekyska, Zdenek Smekal, Zoltan Galaz, Zdenek Mzourek, Irena
Rektorova, Marcos Faundez-Zanuy, Karmele Lopez-De-Ipina
|
Perceptual Features as Markers of Parkinson's Disease: The Issue of
Clinical Interpretability
|
8 pages, published in International Conference on NONLINEAR SPEECH
PROCESSING, NOLISP 2015 jointly organized with the 25th Italian Workshop on
Neural Networks, WIRN 2015, held in May 2015, Vietri sul Mare, Salerno, Italy
|
NOLISP 2015, In Recent Advances in Nonlinear Speech Processing.
Smart Innovation, Systems and Technologies, vol 48. Springer, Cham
|
10.1007/978-3-319-28109-4_9
| null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Up to 90% of patients with Parkinson's disease (PD) suffer from hypokinetic
dysarthria (HD), which also manifests in phonation. Clinical
signs of HD like monoloudness, monopitch or hoarse voice are usually quantified
by conventional clinically interpretable features (jitter, shimmer,
harmonic-to-noise ratio, etc.). This paper provides a large and robust insight
into the perceptual analysis of 5 Czech vowels of 84 PD patients and proves that,
despite their clinical inexplicability, the perceptual features outperform the
conventional ones, especially in terms of discrimination power (classification
accuracy ACC = 92 %, sensitivity SEN = 93 %, specificity SPE = 92 %) and
partial correlation with clinical scores like UPDRS (Unified Parkinson's
disease rating scale), MMSE (Mini-mental state examination) or FOG (Freezing of
gait questionnaire), where p < 0.0001.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 09:46:48 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Mekyska",
"Jiri",
""
],
[
"Smekal",
"Zdenek",
""
],
[
"Galaz",
"Zoltan",
""
],
[
"Mzourek",
"Zdenek",
""
],
[
"Rektorova",
"Irena",
""
],
[
"Faundez-Zanuy",
"Marcos",
""
],
[
"Lopez-De-Ipina",
"Karmele",
""
]
] |
new_dataset
| 0.979869 |
2203.10938
|
Jingyue Li Prof.
|
Elnaz Namazi and Rudolf Mester and Chaoru Lu and Jingyue Li
|
Geolocation estimation of target vehicles using image processing and
geometric computation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Estimating vehicles' locations is one of the key components in intelligent
traffic management systems (ITMSs) for increasing traffic scene awareness.
Traditionally, stationary sensors have been employed in this regard. The
development of advanced sensing and communication technologies on modern
vehicles (MVs) makes it feasible to use such vehicles as mobile sensors to
estimate the traffic data of observed vehicles. This study aims to explore the
capabilities of a monocular camera mounted on an MV in order to estimate the
geolocation of the observed vehicle in a global positioning system (GPS)
coordinate system. We proposed a new methodology by integrating deep learning,
image processing, and geometric computation to address the observed-vehicle
localization problem. To evaluate our proposed methodology, we developed new
algorithms and tested them using real-world traffic data. The results indicated
that our proposed methodology and algorithms could effectively estimate the
observed vehicle's latitude and longitude dynamically.
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2022 13:15:29 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Namazi",
"Elnaz",
""
],
[
"Mester",
"Rudolf",
""
],
[
"Lu",
"Chaoru",
""
],
[
"Li",
"Jingyue",
""
]
] |
new_dataset
| 0.987159 |
2203.10945
|
Moussa Kamal Eddine
|
Moussa Kamal Eddine, Nadi Tomeh, Nizar Habash, Joseph Le Roux,
Michalis Vazirgiannis
|
AraBART: a Pretrained Arabic Sequence-to-Sequence Model for Abstractive
Summarization
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
As in most natural language understanding and generation tasks,
state-of-the-art models for summarization are transformer-based
sequence-to-sequence architectures that are pretrained on large corpora. While
most existing models have focused on English, Arabic has remained understudied. In this
paper we propose AraBART, the first Arabic model in which the encoder and the
decoder are pretrained end-to-end, based on BART. We show that AraBART achieves
the best performance on multiple abstractive summarization datasets,
outperforming strong baselines including a pretrained Arabic BERT-based model
and multilingual mBART and mT5 models.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 13:11:41 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Eddine",
"Moussa Kamal",
""
],
[
"Tomeh",
"Nadi",
""
],
[
"Habash",
"Nizar",
""
],
[
"Roux",
"Joseph Le",
""
],
[
"Vazirgiannis",
"Michalis",
""
]
] |
new_dataset
| 0.998219 |
2203.10970
|
Gabriella Pizzuto
|
Gabriella Pizzuto, Jacopo de Berardinis, Louis Longley, Hatem
Fakhruldeen, and Andrew I. Cooper
|
SOLIS: Autonomous Solubility Screening using Deep Neural Networks
|
7 pages, 4 figures
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Accelerating material discovery has tremendous societal and industrial
impact, particularly for pharmaceuticals and clean energy production. Many
experimental instruments have some degree of automation, facilitating
continuous running and higher throughput. However, it is common that sample
preparation is still carried out manually. This can result in researchers
spending a significant amount of their time on repetitive tasks, which
introduces errors and can prohibit production of statistically relevant data.
Crystallisation experiments are common in many chemical fields, both for
purification and in polymorph screening experiments. The initial step often
involves a solubility screen of the molecule; that is, understanding whether
molecular compounds have dissolved in a particular solvent. This usually can be
time-consuming and labor-intensive. Moreover, accurate knowledge of the precise
solubility limit of the molecule is often not required, and simply measuring a
threshold of solubility in each solvent would be sufficient. To address this,
we propose a novel cascaded deep model that is inspired by how a human chemist
would visually assess a sample to determine whether the solid has completely
dissolved in the solution. In this paper, we design, develop, and evaluate the
first fully autonomous solubility screening framework, which leverages
state-of-the-art methods for image segmentation and convolutional neural
networks for image classification. To realise that, we first create a dataset
comprising different molecules and solvents, which is collected in a real-world
chemistry laboratory. We then evaluated our method on the data recorded through
an eye-in-hand camera mounted on a seven degree-of-freedom robotic manipulator,
and show that our model can achieve 99.13% test accuracy across various setups.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 09:38:23 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Pizzuto",
"Gabriella",
""
],
[
"de Berardinis",
"Jacopo",
""
],
[
"Longley",
"Louis",
""
],
[
"Fakhruldeen",
"Hatem",
""
],
[
"Cooper",
"Andrew I.",
""
]
] |
new_dataset
| 0.991432 |
2203.11079
|
Fabian Egidy
|
Anton Ehrmanntraut, Fabian Egidy, Christian Glaßer
|
Oracle with $\mathrm{P=NP\cap coNP}$, but no Many-One Completeness in
UP, DisjNP, and DisjCoNP
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct an oracle relative to which $\mathrm{P} = \mathrm{NP} \cap
\mathrm{coNP}$, but there are no many-one complete sets in $\mathrm{UP}$, no
many-one complete disjoint $\mathrm{NP}$-pairs, and no many-one complete
disjoint $\mathrm{coNP}$-pairs. This contributes to a research program
initiated by Pudlák [Pud17], which studies incompleteness in the finite
domain and which mentions the construction of such oracles as an open problem.
The oracle shows that $\mathrm{NP} \cap \mathrm{coNP}$ is indispensable in the
list of hypotheses studied by Pudlák. Hence one should consider stronger
hypotheses in order to find a universal one.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 15:58:52 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Ehrmanntraut",
"Anton",
""
],
[
"Egidy",
"Fabian",
""
],
[
"Glaßer",
"Christian",
""
]
] |
new_dataset
| 0.986513 |
2203.11087
|
Nicolas Tempelmeier
|
Nicolas Tempelmeier, Elena Demidova
|
Ovid: A Machine Learning Approach for Automated Vandalism Detection in
OpenStreetMap
|
arXiv admin note: substantial text overlap with arXiv:2201.10406
|
SIGSPATIAL 2021
|
10.1145/3474717.3484204
| null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
OpenStreetMap is a unique source of openly available worldwide map data,
increasingly adopted in real-world applications. Vandalism detection in
OpenStreetMap is critical and remarkably challenging due to the large scale of
the dataset, the sheer number of contributors, various vandalism forms, and the
lack of annotated data to train machine learning algorithms. This paper
presents Ovid, a novel machine learning method for vandalism detection in
OpenStreetMap. Ovid relies on a neural network architecture that adopts a
multi-head attention mechanism to effectively summarize information indicating
vandalism from OpenStreetMap changesets. To facilitate automated vandalism
detection, we introduce a set of original features that capture changeset,
user, and edit information. Our evaluation results on real-world vandalism data
demonstrate that the proposed Ovid method outperforms the baselines by 4.7
percentage points in F1 score.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 16:07:46 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Tempelmeier",
"Nicolas",
""
],
[
"Demidova",
"Elena",
""
]
] |
new_dataset
| 0.970944 |
2203.11117
|
Jason Chen
|
Jason Chen, Yang Xi
|
L-MAC: Location-aware MAC Protocol for Wireless Sensor Networks
|
in progress
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents the design, implementation and performance evaluation of
a location-aware MAC protocol, called L-MAC, for wireless sensor networks.
L-MAC combines TDMA and CSMA, offsetting the high overhead of time slot
assignment by allocating the time slots to sensor nodes based on their location
information. This design avoids the high computational complexity of time slot
assignment incurred by node mobility and node failure. The area which the
wireless sensor network occupies is divided into blocks and each block is
associated with an inter-block time slot and an intra-block time slot. In the
inter-block time slot, the sensor nodes stay active and receive the packets
from nodes outside of the block. In the intra-block time slot, the sensor nodes
communicate with peer nodes in the same block under CSMA. Sensor nodes stay
sleep in all other time slots unless they have traffic to send. L-MAC is
implemented and evaluated in NS-2.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 16:46:19 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Chen",
"Jason",
""
],
[
"Xi",
"Yang",
""
]
] |
new_dataset
| 0.999479 |
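The location-to-slot mapping that the L-MAC abstract describes can be sketched concretely. The grid dimensions and slot numbering below are invented for illustration (the abstract does not specify them): each block gets one inter-block slot, in which its nodes stay awake to receive from outside the block, and one intra-block slot, in which peers contend under CSMA:

```python
BLOCK_SIZE = 50.0   # metres per block side (illustrative)
NUM_BLOCKS_X = 4    # grid of blocks covering the deployment area
NUM_BLOCKS_Y = 4

def block_of(x, y):
    """Map a node's location to its block index in a regular grid."""
    return int(x // BLOCK_SIZE) % NUM_BLOCKS_X, int(y // BLOCK_SIZE) % NUM_BLOCKS_Y

def slots_of(block):
    """Derive the two time slots associated with a block.

    Slot numbering is invented for illustration: each block gets one
    inter-block slot (node awake, receiving from outside the block) and
    one intra-block slot (CSMA among peers inside the block).
    """
    bx, by = block
    base = 2 * (by * NUM_BLOCKS_X + bx)
    return {"inter_block": base, "intra_block": base + 1}

print(slots_of(block_of(120.0, 75.0)))  # {'inter_block': 12, 'intra_block': 13}
```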
2203.11136
|
Darja Smite
|
Darja Smite, Nils Brede Moe, Jarle Hildrum, Javier Gonzalez Huerta,
Daniel Mendez
|
Work-From-Home is Here to Stay: Call for Flexibility in Post-Pandemic
Work Policies
|
Submitted to the Journal of Systems and Software, New Ideas and
Trends track
| null | null | null |
cs.SE cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In early 2020, the Covid-19 pandemic forced employees in tech companies
worldwide to abruptly transition from working in offices to working from their
homes. During two years of predominantly working from home, employees and
managers alike formed expectations about what post-pandemic working life should
look like. Many companies are currently experimenting with new work policies
that balance both employee and manager expectations of where, when and how
work should be done in the future. In this article, we gather experiences from
17 companies and their sites, covering 12 countries. We share the results of
corporate surveys of employee preferences for working from home and analyse new
work policies. Our results are threefold. First, through the new work policies
all companies are formally giving more flexibility to the employees with
regards to working time and work location. Second, there is a great variation
in how much flexibility the companies are willing to yield to the employees.
The variation is related both to industry type, size of the companies, and
company culture. Third, we document a change in the psychological contract
between employees and managers, where the option of working from home is
converted from an exclusive perk that managers could choose to give to the few,
to a core privilege that all employees feel they are entitled to. Finally,
there are indications that, as the companies learn and solicit feedback
regarding the efficiency of the chosen strategies, we might see further
developments and changes of the work policies with respect to how much
flexibility they grant to work whenever and from anywhere. Through these
findings, the paper contributes to a growing literature about the new trends
emerging from the pandemic in tech companies and spells out practical
implications going forward.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 17:11:20 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Smite",
"Darja",
""
],
[
"Moe",
"Nils Brede",
""
],
[
"Hildrum",
"Jarle",
""
],
[
"Huerta",
"Javier Gonzalez",
""
],
[
"Mendez",
"Daniel",
""
]
] |
new_dataset
| 0.970558 |
2203.11174
|
Chethan M Parameshwara
|
Chethan M. Parameshwara, Gokul Hari, Cornelia Fermüller, Nitin J.
Sanket, Yiannis Aloimonos
|
DiffPoseNet: Direct Differentiable Camera Pose Estimation
|
10 pages, 5 figures, Accepted to CVPR 2022
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current deep neural network approaches for camera pose estimation rely on
scene structure for 3D motion estimation, but this decreases the robustness and
thereby makes cross-dataset generalization difficult. In contrast, classical
approaches to structure from motion estimate 3D motion utilizing optical flow
and then compute depth. Their accuracy, however, depends strongly on the
quality of the optical flow. To avoid this issue, direct methods have been
proposed, which separate 3D motion from depth estimation but compute 3D motion
using only image gradients in the form of normal flow. In this paper, we
introduce NFlowNet, a network for normal flow estimation, which is used to
enforce robust and direct constraints. In particular, normal flow is used to
estimate relative camera pose based on the cheirality (depth positivity)
constraint. We achieve this by formulating the optimization problem as a
differentiable cheirality layer, which allows for end-to-end learning of camera
pose. We perform extensive qualitative and quantitative evaluation of the
proposed DiffPoseNet's sensitivity to noise and its generalization across
datasets. We compare our approach to existing state-of-the-art methods on
KITTI, TartanAir, and TUM-RGBD datasets.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 17:54:30 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Parameshwara",
"Chethan M.",
""
],
[
"Hari",
"Gokul",
""
],
[
"Fermüller",
"Cornelia",
""
],
[
"Sanket",
"Nitin J.",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] |
new_dataset
| 0.9987 |
2011.11961
|
Zhanghan Ke
|
Zhanghan Ke, Jiayu Sun, Kaican Li, Qiong Yan, Rynson W.H. Lau
|
MODNet: Real-Time Trimap-Free Portrait Matting via Objective
Decomposition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing portrait matting methods either require auxiliary inputs that are
costly to obtain or involve multiple stages that are computationally expensive,
making them less suitable for real-time applications. In this work, we present
a light-weight matting objective decomposition network (MODNet) for portrait
matting in real-time with a single input image. The key idea behind our
efficient design is by optimizing a series of sub-objectives simultaneously via
explicit constraints. In addition, MODNet includes two novel techniques for
improving model efficiency and robustness. First, an Efficient Atrous Spatial
Pyramid Pooling (e-ASPP) module is introduced to fuse multi-scale features for
semantic estimation. Second, a self-supervised sub-objectives consistency (SOC)
strategy is proposed to adapt MODNet to real-world data to address the domain
shift problem common to trimap-free methods. MODNet is easy to train in an
end-to-end manner. It is much faster than contemporaneous methods and runs at
67 frames per second on a 1080Ti GPU. Experiments show that MODNet outperforms
prior trimap-free methods by a large margin on both Adobe Matting Dataset and a
carefully designed photographic portrait matting (PPM-100) benchmark proposed
by us. Further, MODNet achieves remarkable results on daily photos and videos.
Our code and models are available at https://github.com/ZHKKKe/MODNet, and the
PPM-100 benchmark is released at https://github.com/ZHKKKe/PPM.
|
[
{
"version": "v1",
"created": "Tue, 24 Nov 2020 08:38:36 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Nov 2020 03:27:58 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Jan 2022 09:17:31 GMT"
},
{
"version": "v4",
"created": "Fri, 18 Mar 2022 04:49:53 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Ke",
"Zhanghan",
""
],
[
"Sun",
"Jiayu",
""
],
[
"Li",
"Kaican",
""
],
[
"Yan",
"Qiong",
""
],
[
"Lau",
"Rynson W. H.",
""
]
] |
new_dataset
| 0.998255 |
2104.13100
|
Pietro Liguori
|
Pietro Liguori, Erfan Al-Hossami, Domenico Cotroneo, Roberto Natella,
Bojan Cukic and Samira Shaikh
|
Shellcode_IA32: A Dataset for Automatic Shellcode Generation
|
Paper accepted to NLP4Prog Workshop 2021 co-located with ACL-IJCNLP
2021. Extended journal version of this work has been published in the
Automated Software Engineering journal, Volume 29, Article no. 30, March
2022, DOI: 10.1007/s10515-022-00331-3
| null |
10.18653/v1/2021.nlp4prog-1.7
| null |
cs.SE cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We take the first step to address the task of automatically generating
shellcodes, i.e., small pieces of code used as a payload in the exploitation of
a software vulnerability, starting from natural language comments. We assemble
and release a novel dataset (Shellcode_IA32), consisting of challenging but
common assembly instructions with their natural language descriptions. We
experiment with standard methods in neural machine translation (NMT) to
establish baseline performance levels on this task.
|
[
{
"version": "v1",
"created": "Tue, 27 Apr 2021 10:50:47 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Jun 2021 07:41:21 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Jun 2021 09:23:08 GMT"
},
{
"version": "v4",
"created": "Fri, 18 Mar 2022 10:28:57 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Liguori",
"Pietro",
""
],
[
"Al-Hossami",
"Erfan",
""
],
[
"Cotroneo",
"Domenico",
""
],
[
"Natella",
"Roberto",
""
],
[
"Cukic",
"Bojan",
""
],
[
"Shaikh",
"Samira",
""
]
] |
new_dataset
| 0.999844 |
2105.14898
|
Igor Mozetič
|
Bojan Evkoski, Andraz Pelicon, Igor Mozetic, Nikola Ljubesic, Petra
Kralj Novak
|
Retweet communities reveal the main sources of hate speech
| null |
B. Evkoski, A. Pelicon, I. Mozetič, N. Ljubešić, P.
Kralj Novak. Retweet communities reveal the main sources of hate speech, PLoS
ONE 17(3): e0265602, 2022
|
10.1371/journal.pone.0265602
| null |
cs.SI cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the challenging problem of identifying the main sources of hate speech
on Twitter. On the one hand, we carefully annotate a large set of tweets for hate
speech, and deploy advanced deep learning to produce high quality hate speech
classification models. On the other hand, we create retweet networks, detect
communities and monitor their evolution through time. This combined approach is
applied to three years of Slovenian Twitter data. We report a number of
interesting results. Hate speech is dominated by offensive tweets, related to
political and ideological issues. The share of unacceptable tweets is
moderately increasing with time, from the initial 20% to 30% by the end of
2020. Unacceptable tweets are retweeted significantly more often than
acceptable tweets. About 60% of unacceptable tweets are produced by a single
right-wing community of only moderate size. Institutional Twitter accounts and
media accounts post significantly less unacceptable tweets than individual
accounts. In fact, the main sources of unacceptable tweets are anonymous
accounts, and accounts that were suspended or closed during the years
2018-2020.
|
[
{
"version": "v1",
"created": "Mon, 31 May 2021 11:43:19 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 18:11:55 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Evkoski",
"Bojan",
""
],
[
"Pelicon",
"Andraz",
""
],
[
"Mozetic",
"Igor",
""
],
[
"Ljubesic",
"Nikola",
""
],
[
"Novak",
"Petra Kralj",
""
]
] |
new_dataset
| 0.995119 |
2107.06307
|
Qi Li
|
Qi Li, Yue Wang, Yilun Wang, Hang Zhao
|
HDMapNet: An Online HD Map Construction and Evaluation Framework
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Constructing HD semantic maps is a central component of autonomous driving.
However, traditional pipelines require a vast amount of human effort and
resources in annotating and maintaining the semantics in the map, which limits
its scalability. In this paper, we introduce the problem of HD semantic map
learning, which dynamically constructs the local semantics based on onboard
sensor observations. Meanwhile, we introduce a semantic map learning method,
dubbed HDMapNet. HDMapNet encodes image features from surrounding cameras
and/or point clouds from LiDAR, and predicts vectorized map elements in the
bird's-eye view. We benchmark HDMapNet on the nuScenes dataset and show that in all
settings, it performs better than baseline methods. Of note, our camera-LiDAR
fusion-based HDMapNet outperforms existing methods by more than 50% in all
metrics. In addition, we develop semantic-level and instance-level metrics to
evaluate the map learning performance. Finally, we showcase our method is
capable of predicting a locally consistent map. By introducing the method and
metrics, we invite the community to study this novel map learning problem.
|
[
{
"version": "v1",
"created": "Tue, 13 Jul 2021 18:06:46 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jul 2021 01:54:14 GMT"
},
{
"version": "v3",
"created": "Sun, 24 Oct 2021 03:03:13 GMT"
},
{
"version": "v4",
"created": "Fri, 18 Mar 2022 08:15:56 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Li",
"Qi",
""
],
[
"Wang",
"Yue",
""
],
[
"Wang",
"Yilun",
""
],
[
"Zhao",
"Hang",
""
]
] |
new_dataset
| 0.984964 |
2107.07150
|
Tongshuang Wu
|
Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, Matt Gardner
|
Tailor: Generating and Perturbing Text with Semantic Controls
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Controlled text perturbation is useful for evaluating and improving model
generalizability. However, current techniques rely on training a model for
every target perturbation, which is expensive and hard to generalize. We
present Tailor, a semantically-controlled text generation system. Tailor builds
on a pretrained seq2seq model and produces textual outputs conditioned on
control codes derived from semantic representations. We craft a set of
operations to modify the control codes, which in turn steer generation towards
targeted attributes. These operations can be further composed into higher-level
ones, allowing for flexible perturbation strategies. We demonstrate the
effectiveness of these perturbations in multiple applications. First, we use
Tailor to automatically create high-quality contrast sets for four distinct
natural language processing (NLP) tasks. These contrast sets contain fewer
spurious artifacts and are complementary to manually annotated ones in their
lexical diversity. Second, we show that Tailor perturbations can improve model
generalization through data augmentation. Perturbing just 2% of training data
leads to a 5.8-point gain on an NLI challenge set measuring reliance on
syntactic heuristics.
|
[
{
"version": "v1",
"created": "Thu, 15 Jul 2021 06:38:59 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 20:02:12 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Ross",
"Alexis",
""
],
[
"Wu",
"Tongshuang",
""
],
[
"Peng",
"Hao",
""
],
[
"Peters",
"Matthew E.",
""
],
[
"Gardner",
"Matt",
""
]
] |
new_dataset
| 0.998561 |
2109.01896
|
Rohan Chandra
|
Rohan Chandra, Dinesh Manocha
|
GamePlan: Game-Theoretic Multi-Agent Planning with Human Drivers at
Intersections, Roundabouts, and Merging
|
Published in RA-L and ICRA 2022
| null | null | null |
cs.RO cs.GT cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new method for multi-agent planning involving human drivers and
autonomous vehicles (AVs) in unsignaled intersections, roundabouts, and during
merging. In multi-agent planning, the main challenge is to predict the actions
of other agents, especially human drivers, as their intentions are hidden from
other agents. Our algorithm uses game theory to develop a new auction, called
GamePlan, that directly determines the optimal action for each agent based on
their driving style (which is observable via commonly available sensors).
GamePlan assigns a higher priority to more aggressive or impatient drivers and
a lower priority to more conservative or patient drivers; we theoretically
prove that such an approach is game-theoretically optimal and prevents collisions
and deadlocks. We compare our approach with prior state-of-the-art auction
techniques including economic auctions, time-based auctions (first-in
first-out), and random bidding and show that each of these methods results in
collisions among agents when taking into account driver behavior. We compare
with methods based on DRL, deep learning, and game theory and present our
benefits over these approaches. Finally, we show that our approach can be
implemented in the real world with human drivers.
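A minimal sketch of the ordering step, assuming each agent carries a scalar
aggressiveness score estimated from sensor data; the bidding mechanics,
tie-breaking, and trajectory execution of the full method are omitted.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    aggressiveness: float  # assumed observable via commonly available sensors

def gameplan_priority(agents):
    """Entrance order: more aggressive/impatient drivers go first."""
    return sorted(agents, key=lambda a: a.aggressiveness, reverse=True)

order = gameplan_priority([
    Agent("car_A", 0.2),  # conservative
    Agent("car_B", 0.9),  # aggressive
    Agent("car_C", 0.5),
])
print([a.name for a in order])  # ['car_B', 'car_C', 'car_A']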
|
[
{
"version": "v1",
"created": "Sat, 4 Sep 2021 16:26:31 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Oct 2021 02:47:52 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Oct 2021 17:14:06 GMT"
},
{
"version": "v4",
"created": "Tue, 30 Nov 2021 03:10:57 GMT"
},
{
"version": "v5",
"created": "Fri, 18 Mar 2022 04:24:39 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Chandra",
"Rohan",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.999366 |
2109.03926
|
Lisa Bylinina
|
Lisa Bylinina, Alexey Tikhonov
|
Transformers in the loop: Polarity in neural models of language
|
Accepted to ACL 2022 main conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Representation of linguistic phenomena in computational language models is
typically assessed against the predictions of existing linguistic theories of
these phenomena. Using the notion of polarity as a case study, we show that
this is not always the most adequate set-up. We probe polarity via so-called
'negative polarity items' (in particular, English 'any') in two pre-trained
Transformer-based models (BERT and GPT-2). We show that - at least for polarity
- metrics derived from language models are more consistent with data from
psycholinguistic experiments than linguistic theory predictions. Establishing
this allows us to more adequately evaluate the performance of language models
and also to use language models to discover new insights into natural language
grammar beyond existing linguistic theories. This work contributes to
establishing closer ties between psycholinguistic experiments and experiments
with language models.
|
[
{
"version": "v1",
"created": "Wed, 8 Sep 2021 20:56:32 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 20:58:14 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Bylinina",
"Lisa",
""
],
[
"Tikhonov",
"Alexey",
""
]
] |
new_dataset
| 0.985008 |
2109.07648
|
Rohan Chandra
|
Rohan Chandra, Xijun Wang, Mridul Mahajan, Rahul Kala, Rishitha
Palugulla, Chandrababu Naidu, Alok Jain, and Dinesh Manocha
|
METEOR: A Dense, Heterogeneous, and Unstructured Traffic Dataset With
Rare Behaviors
|
Under review at IROS 2022
| null | null | null |
cs.CV cs.AI cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new traffic dataset, METEOR, which captures traffic patterns and
multi-agent driving behaviors in unstructured scenarios. METEOR consists of
more than 1000 one-minute videos, over 2 million annotated frames with bounding
boxes and GPS trajectories for 16 unique agent categories, and more than 13
million bounding boxes for traffic agents. METEOR is a dataset of rare and
interesting multi-agent driving behaviors that are grouped into traffic
violations, atypical interactions, and diverse scenarios. Every video in METEOR
is tagged using a diverse range of factors corresponding to weather, time of
the day, road conditions, and traffic density. We use METEOR to benchmark
perception methods for object detection and multi-agent behavior prediction.
Our key finding is that state-of-the-art models for object detection and
behavior prediction, which otherwise succeed on existing datasets such as
Waymo, fail on the METEOR dataset. METEOR marks the first step towards the
development of more sophisticated perception models for dense, heterogeneous,
and unstructured scenarios.
|
[
{
"version": "v1",
"created": "Thu, 16 Sep 2021 01:01:55 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Sep 2021 15:25:08 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Mar 2022 04:14:32 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Chandra",
"Rohan",
""
],
[
"Wang",
"Xijun",
""
],
[
"Mahajan",
"Mridul",
""
],
[
"Kala",
"Rahul",
""
],
[
"Palugulla",
"Rishitha",
""
],
[
"Naidu",
"Chandrababu",
""
],
[
"Jain",
"Alok",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.999446 |
2109.10172
|
Shengzhe Hou
|
Shengzhe Hou, Bruce H. Thomas
|
VRMenuDesigner: A toolkit for automatically generating and modifying VR
menus
| null | null |
10.1109/AIVR52153.2021.00036
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
With the rapid development of Virtual Reality (VR) technology, the research
of User Interface (UI), especially menus, in the VR environment has attracted
more and more attention. However, it is very tedious for researchers to develop
UI from scratch or modify existing functions, and there are no easy-to-use tools
for efficient development. This paper aims to present VRMenuDesigner, a
flexible and modular toolkit for automatically generating/modifying VR menus.
This toolkit is provided as an open-source library and is easy to extend to adapt to
various requirements. The main contribution of this work is to organize the
menus and functions with object-oriented thinking, which makes the system very
understandable and extensible. VRMenuDesigner includes two key tools: Creator
and Modifier for quickly generating and modifying elements. Moreover, we
developed several built-in menus and discussed their usability. After a brief
review and taxonomy of 3D menus, the architecture and implementation of the
toolkit are introduced.
|
[
{
"version": "v1",
"created": "Tue, 21 Sep 2021 13:39:15 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Hou",
"Shengzhe",
""
],
[
"Thomas",
"Bruce H.",
""
]
] |
new_dataset
| 0.980957 |
2109.11087
|
Yunxiang Zhang
|
Yunxiang Zhang, Xiaojun Wan
|
BiRdQA: A Bilingual Dataset for Question Answering on Tricky Riddles
|
AAAI 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A riddle is a question or statement with double or veiled meanings, followed
by an unexpected answer. Solving riddles is a challenging task for both machines
and humans, testing the capability to understand figurative, creative natural
language and reasoning with commonsense knowledge. We introduce BiRdQA, a
bilingual multiple-choice question answering dataset with 6614 English riddles
and 8751 Chinese riddles. For each riddle-answer pair, we provide four
distractors with additional information from Wikipedia. The distractors are
automatically generated at scale with minimal bias. Existing monolingual and
multilingual QA models fail to perform well on our dataset, indicating that
there is a long way to go before machines can beat humans at solving tricky
riddles. The dataset has been released to the community.
|
[
{
"version": "v1",
"created": "Thu, 23 Sep 2021 00:46:47 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 09:30:34 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Zhang",
"Yunxiang",
""
],
[
"Wan",
"Xiaojun",
""
]
] |
new_dataset
| 0.999855 |
2112.09312
|
Yusong Wu
|
Yusong Wu, Ethan Manilow, Yi Deng, Rigel Swavely, Kyle Kastner, Tim
Cooijmans, Aaron Courville, Cheng-Zhi Anna Huang, Jesse Engel
|
MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical
Modeling
|
Accepted by International Conference on Learning Representations
(ICLR) 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Musical expression requires control of both what notes are played, and how
they are performed. Conventional audio synthesizers provide detailed expressive
controls, but at the cost of realism. Black-box neural audio synthesis and
concatenative samplers can produce realistic audio, but have few mechanisms for
control. In this work, we introduce MIDI-DDSP, a hierarchical model of musical
instruments that enables both realistic neural audio synthesis and detailed
user control. Starting from interpretable Differentiable Digital Signal
Processing (DDSP) synthesis parameters, we infer musical notes and high-level
properties of their expressive performance (such as timbre, vibrato, dynamics,
and articulation). This creates a 3-level hierarchy (notes, performance,
synthesis) that affords individuals the option to intervene at each level, or
utilize trained priors (performance given notes, synthesis given performance)
for creative assistance. Through quantitative experiments and listening tests,
we demonstrate that this hierarchy can reconstruct high-fidelity audio,
accurately predict performance attributes for a note sequence, independently
manipulate the attributes of a given performance, and as a complete system,
generate realistic audio from a novel note sequence. By utilizing an
interpretable hierarchy, with multiple levels of granularity, MIDI-DDSP opens
the door to assistive tools to empower individuals across a diverse range of
musical experience.
|
[
{
"version": "v1",
"created": "Fri, 17 Dec 2021 04:15:42 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 22:33:35 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Wu",
"Yusong",
""
],
[
"Manilow",
"Ethan",
""
],
[
"Deng",
"Yi",
""
],
[
"Swavely",
"Rigel",
""
],
[
"Kastner",
"Kyle",
""
],
[
"Cooijmans",
"Tim",
""
],
[
"Courville",
"Aaron",
""
],
[
"Huang",
"Cheng-Zhi Anna",
""
],
[
"Engel",
"Jesse",
""
]
] |
new_dataset
| 0.999084 |
2112.12219
|
Majid Farhadloo
|
Majid Farhadloo, Carl Molnar, Gaoxiang Luo, Yan Li, Shashi Shekhar,
Rachel L. Maus, Svetomir N. Markovic, Raymond Moore, and Alexey Leontovich
|
SAMCNet for Spatial-configuration-based Classification: A Summary of
Results
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of spatial-configuration-based classification is to build a
classifier to distinguish two classes (e.g., responder, non-responder) based on
the spatial arrangements (e.g., spatial interactions between different point
categories) given multi-category point data from two classes. This problem is
important for generating hypotheses in medical pathology towards discovering
new immunotherapies for cancer treatment as well as for other applications in
biomedical research and microbial ecology. This problem is challenging due to
an exponential number of category subsets which may vary in the strength of
spatial interactions. Most prior efforts rely on human-selected spatial
association measures, which may not be sufficient for capturing the relevant
(e.g., surrounded-by) spatial interactions of biological significance. In
addition, the related deep neural networks are limited to category pairs and do
not explore larger subsets of point categories. To overcome these limitations,
we propose a Spatial-interaction Aware Multi-Category deep neural Network
(SAMCNet) architecture and contribute novel local reference frame
characterization and point pair prioritization layers for
spatial-configuration-based classification. Extensive experimental results on
multiple cancer datasets show that the proposed architecture provides higher
prediction accuracy over baseline methods.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 20:45:24 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 16:23:50 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Farhadloo",
"Majid",
""
],
[
"Molnar",
"Carl",
""
],
[
"Luo",
"Gaoxiang",
""
],
[
"Li",
"Yan",
""
],
[
"Shekhar",
"Shashi",
""
],
[
"Maus",
"Rachel L.",
""
],
[
"Markovic",
"Svetomir N.",
""
],
[
"Moore",
"Raymond",
""
],
[
"Leontovich",
"Alexey",
""
]
] |
new_dataset
| 0.957335 |
2202.11572
|
Rohan Chandra
|
Nilesh Suriyarachchi, Rohan Chandra, John S. Baras, Dinesh Manocha
|
GAMEOPT: Optimal Real-time Multi-Agent Planning and Control for Dynamic
Intersections
|
Submitted to ITSC 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We propose GameOpt: a novel hybrid approach to cooperative intersection
control for dynamic, multi-lane, unsignalized intersections. Safely navigating
these complex and accident-prone intersections requires simultaneous trajectory
planning and negotiation among drivers. GameOpt is a hybrid formulation that
first uses an auction mechanism to generate a priority entrance sequence for
every agent, followed by an optimization-based trajectory planner that computes
velocity controls that satisfy the priority sequence. This coupling operates at
real-time speeds of less than 10 milliseconds in high-density traffic of more
than 10,000 vehicles/hr, 100 times faster than other fully optimization-based
methods, while providing guarantees in terms of fairness, safety, and
efficiency. Tested on the SUMO simulator, our algorithm improves throughput by
at least 25%, time taken to reach the goal by 75%, and fuel consumption by 33%
compared to auction-based approaches and signaled approaches using
traffic-lights and stop signs.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 15:42:55 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 05:35:19 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Mar 2022 04:19:42 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Suriyarachchi",
"Nilesh",
""
],
[
"Chandra",
"Rohan",
""
],
[
"Baras",
"John S.",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.99921 |
2203.07182
|
Yao Yao None
|
Yao Yao, Jingyang Zhang, Jingbo Liu, Yihang Qu, Tian Fang, David
McKinnon, Yanghai Tsin, Long Quan
|
NeILF: Neural Incident Light Field for Physically-based Material
Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a differentiable rendering framework for material and lighting
estimation from multi-view images and a reconstructed geometry. In the
framework, we represent scene lightings as the Neural Incident Light Field
(NeILF) and material properties as the surface BRDF modelled by multi-layer
perceptrons. Compared with recent approaches that approximate scene lightings
as the 2D environment map, NeILF is a fully 5D light field that is capable of
modelling illuminations of any static scene. In addition, occlusions and
indirect lights can be handled naturally by the NeILF representation without
requiring multiple bounces of ray tracing, making it possible to estimate
material properties even for scenes with complex lightings and geometries. We
also propose a smoothness regularization and a Lambertian assumption to reduce
the material-lighting ambiguity during the optimization. Our method strictly
follows the physically-based rendering equation, and jointly optimizes material
and lighting through the differentiable rendering process. We have intensively
evaluated the proposed method on our in-house synthetic dataset, the DTU MVS
dataset, and real-world BlendedMVS scenes. Our method is able to outperform
previous methods by a significant margin in terms of novel view rendering
quality, setting a new state-of-the-art for image-based material and lighting
estimation.
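For reference, the physically-based rendering equation the method follows is,
in standard notation,

\[
L_o(\mathbf{x}, \boldsymbol{\omega}_o) = \int_{\Omega} f(\mathbf{x}, \boldsymbol{\omega}_i, \boldsymbol{\omega}_o)\, L_i(\mathbf{x}, \boldsymbol{\omega}_i)\, (\mathbf{n} \cdot \boldsymbol{\omega}_i)\, \mathrm{d}\boldsymbol{\omega}_i ,
\]

where $f$ is the surface BRDF (here an MLP), $L_i$ is the incident light field
played by NeILF, and $\mathbf{n}$ is the surface normal from the reconstructed
geometry.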
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 15:23:04 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 04:41:55 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Yao",
"Yao",
""
],
[
"Zhang",
"Jingyang",
""
],
[
"Liu",
"Jingbo",
""
],
[
"Qu",
"Yihang",
""
],
[
"Fang",
"Tian",
""
],
[
"McKinnon",
"David",
""
],
[
"Tsin",
"Yanghai",
""
],
[
"Quan",
"Long",
""
]
] |
new_dataset
| 0.9979 |
2203.09446
|
Fabian Bongratz
|
Fabian Bongratz, Anne-Marie Rickmann, Sebastian P\"olsterl, Christian
Wachinger
|
Vox2Cortex: Fast Explicit Reconstruction of Cortical Surfaces from 3D
MRI Scans with Geometric Deep Neural Networks
|
Accepted at CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The reconstruction of cortical surfaces from brain magnetic resonance imaging
(MRI) scans is essential for quantitative analyses of cortical thickness and
sulcal morphology. Although traditional and deep learning-based algorithmic
pipelines exist for this purpose, they have two major drawbacks: lengthy
runtimes of multiple hours (traditional) or intricate post-processing, such as
mesh extraction and topology correction (deep learning-based). In this work, we
address both of these issues and propose Vox2Cortex, a deep learning-based
algorithm that directly yields topologically correct, three-dimensional meshes
of the boundaries of the cortex. Vox2Cortex leverages convolutional and graph
convolutional neural networks to deform an initial template to the densely
folded geometry of the cortex represented by an input MRI scan. We show in
extensive experiments on three brain MRI datasets that our meshes are as
accurate as the ones reconstructed by state-of-the-art methods in the field,
without the need for time- and resource-intensive post-processing. To
accurately reconstruct the tightly folded cortex, we work with meshes
containing about 168,000 vertices at test time, scaling deep explicit
reconstruction methods to a new level.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 17:06:00 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 11:10:19 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Bongratz",
"Fabian",
""
],
[
"Rickmann",
"Anne-Marie",
""
],
[
"Pölsterl",
"Sebastian",
""
],
[
"Wachinger",
"Christian",
""
]
] |
new_dataset
| 0.999228 |
2203.09642
|
Dawei Du
|
Rui Yu, Dawei Du, Rodney LaLonde, Daniel Davila, Christopher Funk,
Anthony Hoogs, Brian Clipp
|
Cascade Transformers for End-to-End Person Search
|
Accepted to CVPR 2022. Code can be found at
https://github.com/Kitware/COAT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of person search is to localize a target person from a gallery set
of scene images, which is extremely challenging due to large scale variations,
pose/viewpoint changes, and occlusions. In this paper, we propose the Cascade
Occluded Attention Transformer (COAT) for end-to-end person search. Our
three-stage cascade design focuses on detecting people in the first stage,
while later stages simultaneously and progressively refine the representation
for person detection and re-identification. At each stage the occluded
attention transformer applies tighter intersection over union thresholds,
forcing the network to learn coarse-to-fine pose/scale invariant features.
Meanwhile, we calculate each detection's occluded attention to differentiate a
person's tokens from other people or the background. In this way, we simulate
the effect of other objects occluding a person of interest at the token-level.
Through comprehensive experiments, we demonstrate the benefits of our method by
achieving state-of-the-art performance on two benchmark datasets.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 22:42:12 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Yu",
"Rui",
""
],
[
"Du",
"Dawei",
""
],
[
"LaLonde",
"Rodney",
""
],
[
"Davila",
"Daniel",
""
],
[
"Funk",
"Christopher",
""
],
[
"Hoogs",
"Anthony",
""
],
[
"Clipp",
"Brian",
""
]
] |
new_dataset
| 0.995906 |
2203.09673
|
Emily Ohman
|
Elissa Nakajima Wickham, Emily \"Ohman
|
Hate speech, Censorship, and Freedom of Speech: The Changing Policies of
Reddit
|
Submitted to Journal of Data Mining and Digital Humanities
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper examines the shift in content policies and user attitudes
on the social media platform Reddit. We do this by focusing on comments from
general Reddit users from five posts made by admins (moderators) on updates to
Reddit Content Policy. All five concern the nature of what kind of content is
allowed to be posted on Reddit, and which measures will be taken against
content that violates these policies. We use topic modeling to probe how the
general discourse for Redditors has changed around limitations on content, and
later, limitations on hate speech, or speech that incites violence against a
particular group. We show that there is a clear shift in both the contents and
the user attitudes that can be linked to contemporary societal upheaval as well
as newly passed laws and regulations, and contribute to the wider discussion on
hate speech moderation.
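As a rough illustration of the topic-modeling step, a sketch with
scikit-learn's LDA on toy comments; the study's actual model, corpus, and
preprocessing may differ.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

comments = [
    "the new content policy bans hate speech",
    "free speech matters on this platform",
    "moderators should remove violent content",
    "this policy update limits what we can post",
]
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(comments)  # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")  # top words per discovered topic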
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 00:46:58 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Wickham",
"Elissa Nakajima",
""
],
[
"Öhman",
"Emily",
""
]
] |
new_dataset
| 0.985388 |
2203.09830
|
Jianhua Han
|
Jianhua Han, Xiajun Deng, Xinyue Cai, Zhen Yang, Hang Xu, Chunjing Xu,
Xiaodan Liang
|
Laneformer: Object-aware Row-Column Transformers for Lane Detection
|
AAAI2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present Laneformer, a conceptually simple yet powerful transformer-based
architecture tailored for lane detection that is a long-standing research topic
for visual perception in autonomous driving. The dominant paradigms rely on
purely CNN-based architectures which often fail in incorporating relations of
long-range lane points and global contexts induced by surrounding objects
(e.g., pedestrians, vehicles). Inspired by recent advances of the transformer
encoder-decoder architecture in various vision tasks, we move forwards to
design a new end-to-end Laneformer architecture that revolutionizes the
conventional transformers into better capturing the shape and semantic
characteristics of lanes, with minimal overhead in latency. First, coupling
with deformable pixel-wise self-attention in the encoder, Laneformer presents
two new row and column self-attention operations to efficiently mine point
context along with the lane shapes. Second, motivated by the observation that
appearing objects affect the decision of predicting lane segments, Laneformer further
includes the detected object instances as extra inputs of multi-head attention
blocks in the encoder and decoder to facilitate the lane point detection by
sensing semantic contexts. Specifically, the bounding box locations of objects
are added into the Key module to provide interaction with each pixel and query,
while the ROI-aligned features are inserted into the Value module. Extensive
experiments demonstrate that our Laneformer achieves state-of-the-art performance
on the CULane benchmark with a 77.1% F1 score. We hope our simple and
effective Laneformer will serve as a strong baseline for future research in
self-attention models for lane detection.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 10:14:35 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Han",
"Jianhua",
""
],
[
"Deng",
"Xiajun",
""
],
[
"Cai",
"Xinyue",
""
],
[
"Yang",
"Zhen",
""
],
[
"Xu",
"Hang",
""
],
[
"Xu",
"Chunjing",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
new_dataset
| 0.996198 |
2203.09831
|
Naufal Suryanto
|
Naufal Suryanto, Yongsu Kim, Hyoeun Kang, Harashta Tatimma Larasati,
Youngyeo Yun, Thi-Thu-Huong Le, Hunmin Yang, Se-Yoon Oh, Howon Kim
|
DTA: Physical Camouflage Attacks using Differentiable Transformation
Network
|
Accepted for CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To perform adversarial attacks in the physical world, many studies have
proposed adversarial camouflage, a method to hide a target object by applying
camouflage patterns on 3D object surfaces. For obtaining optimal physical
adversarial camouflage, previous studies have utilized the so-called neural
renderer, as it supports differentiability. However, existing neural renderers
cannot fully represent various real-world transformations due to a lack of
control of scene parameters compared to the legacy photo-realistic renderers.
In this paper, we propose the Differentiable Transformation Attack (DTA), a
framework for generating a robust physical adversarial pattern on a target
object to camouflage it against object detection models with a wide range of
transformations. It utilizes our novel Differentiable Transformation Network
(DTN), which learns the expected transformation of a rendered object when the
texture is changed while preserving the original properties of the target
object. Using our attack framework, an adversary can gain both the advantages
of the legacy photo-realistic renderers including various physical-world
transformations and the benefit of white-box access by offering
differentiability. Our experiments show that our camouflaged 3D vehicles can
successfully evade state-of-the-art object detection models in the
photo-realistic environment (i.e., CARLA on Unreal Engine). Furthermore, our
demonstration on a scaled Tesla Model 3 proves the applicability and
transferability of our method to the real world.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 10:15:02 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Suryanto",
"Naufal",
""
],
[
"Kim",
"Yongsu",
""
],
[
"Kang",
"Hyoeun",
""
],
[
"Larasati",
"Harashta Tatimma",
""
],
[
"Yun",
"Youngyeo",
""
],
[
"Le",
"Thi-Thu-Huong",
""
],
[
"Yang",
"Hunmin",
""
],
[
"Oh",
"Se-Yoon",
""
],
[
"Kim",
"Howon",
""
]
] |
new_dataset
| 0.998296 |
2203.09910
|
Chuhui Xue
|
Chuhui Xue, Zichen Tian, Fangneng Zhan, Shijian Lu, Song Bai
|
Fourier Document Restoration for Robust Document Dewarping and
Recognition
|
Accepted by CVPR2022
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
State-of-the-art document dewarping techniques learn to predict 3-dimensional
information of documents, an approach that is prone to errors when dealing with
documents with irregular distortions or large variations in depth. This paper presents
FDRNet, a Fourier Document Restoration Network that can restore documents with
different distortions and improve document recognition in a reliable and
simpler manner. FDRNet focuses on high-frequency components in the Fourier
space that capture most structural information but are largely free of
degradation in appearance. It dewarps documents by a flexible Thin-Plate Spline
transformation which can handle various deformations effectively without
requiring deformation annotations in training. These features allow FDRNet to
learn from a small amount of simply labeled training images, and the learned
model can dewarp documents with complex geometric distortion and recognize the
restored texts accurately. To facilitate document restoration research, we
create a benchmark dataset consisting of over one thousand camera documents
with different types of geometric and photometric distortion. Extensive
experiments show that FDRNet outperforms the state-of-the-art by large margins
on both dewarping and text recognition tasks. In addition, FDRNet requires a
small amount of simply labeled training data and is easy to deploy.
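A minimal sketch of isolating high-frequency Fourier components of a grayscale
document image; the square high-pass mask and cutoff below are generic
assumptions, not FDRNet's exact filtering.

import numpy as np

def high_frequency_component(img: np.ndarray, cutoff: int = 8) -> np.ndarray:
    """Zero a low-frequency block around the spectrum center and
    reconstruct, keeping mostly structural (high-frequency) content."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    spec[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

doc = np.random.rand(64, 64)  # stand-in for a grayscale document image
edges = high_frequency_component(doc)
print(edges.shape)  # (64, 64)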
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 12:39:31 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Xue",
"Chuhui",
""
],
[
"Tian",
"Zichen",
""
],
[
"Zhan",
"Fangneng",
""
],
[
"Lu",
"Shijian",
""
],
[
"Bai",
"Song",
""
]
] |
new_dataset
| 0.992231 |
2203.10013
|
Diego Romeres
|
Arvind Raghunathan, Devesh K. Jha, Diego Romeres
|
PYROBOCOP: Python-based Robotic Control & Optimization Package for
Manipulation
|
7 pages, ICRA22. arXiv admin note: substantial text overlap with
arXiv:2106.03220
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
PYROBOCOP is a Python-based package for control, optimization and estimation
of robotic systems described by nonlinear Differential Algebraic Equations
(DAEs). In particular, the package can handle systems with contacts that are
described by complementarity constraints and provides a general framework for
specifying obstacle avoidance constraints. The package performs direct
transcription of the DAEs into a set of nonlinear equations by performing
orthogonal collocation on finite elements. PYROBOCOP provides automatic
reformulation of the complementarity constraints that are tractable to NLP
solvers to perform optimization of robotic systems. The package is interfaced
with ADOL-C[1] for obtaining sparse derivatives by automatic differentiation
and IPOPT[2] for performing optimization. We evaluate PYROBOCOP on several
manipulation problems for control and estimation.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 15:24:47 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Raghunathan",
"Arvind",
""
],
[
"Jha",
"Devesh K.",
""
],
[
"Romeres",
"Diego",
""
]
] |
new_dataset
| 0.999628 |
2203.10024
|
Kheireddine Abainia
|
Oussama Boucherit and Kheireddine Abainia
|
Offensive Language Detection in Under-resourced Algerian Dialectal
Arabic Language
|
BigDML 2021
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper addresses the problem of detecting the offensive and abusive
content in Facebook comments, where we focus on the Algerian dialectal Arabic
which is one of under-resourced languages. The latter has a variety of dialects
mixed with different languages (i.e. Berber, French and English). In addition,
we deal with texts written in both Arabic and Roman scripts (i.e. Arabizi). Due
to the scarcity of works on the same language, we have built a new corpus
comprising more than 8.7k texts manually annotated as normal, abusive and
offensive. We have conducted a series of experiments using the state-of-the-art
classifiers of text categorisation, namely: BiLSTM, CNN, FastText, SVM and NB.
The results showed acceptable performance, but the problem requires further
investigation of linguistic features to increase the identification accuracy.
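For one of the named baselines, a hedged sketch of a TF-IDF plus linear SVM
pipeline; character n-grams are a common choice for dialectal, code-switched
text, but the paper's exact features and preprocessing may differ.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for annotated comments (normal / abusive / offensive).
texts = ["example normal comment", "example abusive comment",
         "example offensive comment", "another normal comment"]
labels = ["normal", "abusive", "offensive", "normal"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(texts, labels)
print(clf.predict(["yet another comment"]))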
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 15:42:21 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Boucherit",
"Oussama",
""
],
[
"Abainia",
"Kheireddine",
""
]
] |
new_dataset
| 0.99927 |
2203.10070
|
Jeroen Schols
|
Jeroen L.G. Schols
|
Kernelization for Treewidth-2 Vertex Deletion
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The Treewidth-2 Vertex Deletion problem asks whether a set of at most $t$
vertices can be removed from a graph, such that the resulting graph has
treewidth at most two. A graph has treewidth at most two if and only if it does
not contain a $K_4$ minor. Hence, this problem corresponds to the NP-hard
$\mathcal{F}$-Minor Cover problem with $\mathcal{F} = \{K_4\}$. For any variant
of the $\mathcal{F}$-Minor Cover problem where $\mathcal{F}$ contains a planar
graph, it is known that a polynomial kernel exists. I.e., a preprocessing
routine that in polynomial time outputs an equivalent instance of size
$t^{O(1)}$. However, this proof is non-constructive, meaning that this proof
does not yield an explicit bound on the kernel size. The $\{K_4\}$-Minor Cover
problem is the simplest variant of the $\mathcal{F}$-Minor Cover problem with
an unknown kernel size.
To develop a constructive kernelization algorithm, we present a new method to
decompose graphs into near-protrusions, such that near-protrusions in this new
decomposition can be reduced using elementary reduction rules. Our method
extends the `approximation and tidying' framework by van Bevern et al.
[Algorithmica 2012] to provide guarantees stronger than those provided by both
this framework and a regular protrusion decomposition. Furthermore, we provide
extensions of the elementary reduction rules used by the $\{K_4,
K_{2,3}\}$-Minor Cover kernelization algorithm introduced by Donkers et al.
[IPEC 2021].
Using the new decomposition method and reduction rules, we obtain a kernel
consisting of $O(t^{41})$ vertices, which is the first constructive kernel.
This kernel is a step towards more concrete kernelization bounds for the
$\mathcal{F}$-Minor Cover problem where $\mathcal{F}$ contains a planar graph,
and our decomposition provides a potential direction to achieve these new
bounds.
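For background, treewidth at most two can be recognized by exhaustive
series-parallel reductions; the sketch below (assuming networkx) implements
that classical test only, not the kernelization machinery of the paper.

import networkx as nx

def has_treewidth_at_most_two(g: nx.MultiGraph) -> bool:
    """True iff g has no K4 minor: repeatedly drop self-loops and
    parallel edges, delete degree-<=1 vertices, and suppress
    degree-2 vertices; getting stuck at minimum degree >= 3
    certifies a K4 minor."""
    g = nx.MultiGraph(g)
    while g.number_of_nodes() > 0:
        g.remove_edges_from(list(nx.selfloop_edges(g)))
        g = nx.MultiGraph(nx.Graph(g))   # collapse parallel edges
        low = [v for v in g if g.degree(v) <= 2]
        if not low:
            return False
        v = low[0]
        nbrs = list(g.neighbors(v))
        g.remove_node(v)
        if len(nbrs) == 2:               # suppress: reconnect the neighbors
            g.add_edge(nbrs[0], nbrs[1])
    return True

print(has_treewidth_at_most_two(nx.MultiGraph(nx.cycle_graph(5))))     # True
print(has_treewidth_at_most_two(nx.MultiGraph(nx.complete_graph(4))))  # False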
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 17:30:31 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Schols",
"Jeroen L. G.",
""
]
] |
new_dataset
| 0.995937 |
2203.10073
|
Shreyansh Daftry
|
Larry Matthies, Shreyansh Daftry, Scott Tepsuporn, Yang Cheng, Deegan
Atha, R. Michael Swan, Sanjna Ravichandar, Masahiro Ono
|
Lunar Rover Localization Using Craters as Landmarks
|
IEEE Aerospace Conference, 2022
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Onboard localization capabilities for planetary rovers to date have used
relative navigation, by integrating combinations of wheel odometry, visual
odometry, and inertial measurements during each drive to track position
relative to the start of each drive. At the end of each drive, a
ground-in-the-loop (GITL) interaction is used to get a position update from
human operators in a more global reference frame, by matching images or local
maps from onboard the rover to orbital reconnaissance images or maps of a large
region around the rover's current position. Autonomous rover drives are limited
in distance so that accumulated relative navigation error does not risk the
possibility of the rover driving into hazards known from orbital images.
However, several rover mission concepts have recently been studied that require
much longer drives between GITL cycles, particularly for the Moon. These
concepts require greater autonomy to minimize GITL cycles to enable such large
range; onboard global localization is a key element of such autonomy. Multiple
techniques have been studied in the past for onboard rover global localization,
but a satisfactory solution has not yet emerged. For the Moon, the ubiquitous
craters offer a new possibility, which involves mapping craters from orbit,
then recognizing crater landmarks with cameras and-or a lidar onboard the
rover. This approach is applicable everywhere on the Moon, does not require
high resolution stereo imaging from orbit as some other approaches do, and has
potential to enable position knowledge with order of 5 to 10 m accuracy at all
times. This paper describes our technical approach to crater-based lunar rover
localization and presents initial results on crater detection using 3D point
cloud data from onboard lidar or stereo cameras, as well as using shading cues
in monocular onboard imagery.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 17:38:52 GMT"
}
] | 2022-03-21T00:00:00 |
[
[
"Matthies",
"Larry",
""
],
[
"Daftry",
"Shreyansh",
""
],
[
"Tepsuporn",
"Scott",
""
],
[
"Cheng",
"Yang",
""
],
[
"Atha",
"Deegan",
""
],
[
"Swan",
"R. Michael",
""
],
[
"Ravichandar",
"Sanjna",
""
],
[
"Ono",
"Masahiro",
""
]
] |
new_dataset
| 0.998867 |
1711.01981
|
Isabel Campos Dr.
|
INDIGO-DataCloud Collaboration: Davide Salomoni, Isabel Campos,
Luciano Gaido, Jesus Marco de Lucas, Peter Solagna, Jorge Gomes, Ludek
Matyska, Patrick Fuhrman, Marcus Hardt, Giacinto Donvito, Lukasz Dutka,
Marcin Plociennik, Roberto Barbera, Ignacio Blanquer, Andrea Ceccanti, Mario
David, Cristina Duma, Alvaro L\'opez-Garc\'ia, Germ\'an Molt\'o, Pablo Orviz,
Zdenek Sustr, Matthew Viljoen, Fernando Aguilar, Luis Alves, Marica
Antonacci, Lucio Angelo Antonelli, Stefano Bagnasco, Alexandre M.J.J. Bonvin,
Riccardo Bruno, Eva Cetinic, Yin Chen, Fabrizio Chiarello, Alessandro Costa,
Stefano Dal Pra, Davor Davidovic, Alvise Dorigo, Benjamin Ertl, Federica
Fanzago, Marco Fargetta, Sandro Fiore, Stefano Gallozzi, Zeynep Kurkcuoglu,
Lara Lloret, Joao Martins, Alessandra Nuzzo, Paola Nassisi, Cosimo Palazzo,
Joao Pina, Eva Sciacca, Matteo Segatta, Massimo Sgaravatto, Daniele Spiga,
Sonia Taneja, Marco Antonio Tangaro, Michal Urbaniak, Sara Vallero, Marco
Verlato, Bas Wegh, Valentina Zaccolo, Federico Zambelli, Lisa Zangrando,
Stefano Zani and Tomasz Zok
|
INDIGO-DataCloud: A data and computing platform to facilitate seamless
access to e-infrastructures
|
39 pages, 15 figures. Version accepted in Journal of Grid Computing
| null |
10.1007/s10723-018-9453-3
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the achievements of the H2020 project INDIGO-DataCloud.
The project has provided e-infrastructures with tools, applications and cloud
framework enhancements to manage the demanding requirements of scientific
communities, either locally or through enhanced interfaces. The middleware
developed makes it possible to federate hybrid resources and to easily write,
port, and run scientific applications in the cloud. In particular, we have extended existing
PaaS (Platform as a Service) solutions, allowing public and private
e-infrastructures, including those provided by EGI, EUDAT, and Helix Nebula, to
integrate their existing services and make them available through AAI services
compliant with GEANT interfederation policies, thus guaranteeing transparency
and trust in the provisioning of such services. Our middleware facilitates the
execution of applications using containers on Cloud and Grid based
infrastructures, as well as on HPC clusters. Our developments are freely
downloadable as open source components, and are already being integrated into
many scientific applications.
|
[
{
"version": "v1",
"created": "Mon, 6 Nov 2017 16:06:49 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Nov 2017 14:39:04 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Nov 2017 14:06:56 GMT"
},
{
"version": "v4",
"created": "Sat, 25 Nov 2017 17:09:24 GMT"
},
{
"version": "v5",
"created": "Wed, 25 Jul 2018 08:58:50 GMT"
},
{
"version": "v6",
"created": "Thu, 26 Jul 2018 09:31:53 GMT"
},
{
"version": "v7",
"created": "Tue, 5 Feb 2019 18:00:33 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"DataCloud Collaboration",
"",
""
],
[
"Salomoni",
"Davide",
""
],
[
"Campos",
"Isabel",
""
],
[
"Gaido",
"Luciano",
""
],
[
"de Lucas",
"Jesus Marco",
""
],
[
"Solagna",
"Peter",
""
],
[
"Gomes",
"Jorge",
""
],
[
"Matyska",
"Ludek",
""
],
[
"Fuhrman",
"Patrick",
""
],
[
"Hardt",
"Marcus",
""
],
[
"Donvito",
"Giacinto",
""
],
[
"Dutka",
"Lukasz",
""
],
[
"Plociennik",
"Marcin",
""
],
[
"Barbera",
"Roberto",
""
],
[
"Blanquer",
"Ignacio",
""
],
[
"Ceccanti",
"Andrea",
""
],
[
"David",
"Mario",
""
],
[
"Duma",
"Cristina",
""
],
[
"López-García",
"Alvaro",
""
],
[
"Moltó",
"Germán",
""
],
[
"Orviz",
"Pablo",
""
],
[
"Sustr",
"Zdenek",
""
],
[
"Viljoen",
"Matthew",
""
],
[
"Aguilar",
"Fernando",
""
],
[
"Alves",
"Luis",
""
],
[
"Antonacci",
"Marica",
""
],
[
"Antonelli",
"Lucio Angelo",
""
],
[
"Bagnasco",
"Stefano",
""
],
[
"Bonvin",
"Alexandre M. J. J.",
""
],
[
"Bruno",
"Riccardo",
""
],
[
"Cetinic",
"Eva",
""
],
[
"Chen",
"Yin",
""
],
[
"Chiarello",
"Fabrizio",
""
],
[
"Costa",
"Alessandro",
""
],
[
"Pra",
"Stefano Dal",
""
],
[
"Davidovic",
"Davor",
""
],
[
"Dorigo",
"Alvise",
""
],
[
"Ertl",
"Benjamin",
""
],
[
"Fanzago",
"Federica",
""
],
[
"Fargetta",
"Marco",
""
],
[
"Fiore",
"Sandro",
""
],
[
"Gallozzi",
"Stefano",
""
],
[
"Kurkcuoglu",
"Zeynep",
""
],
[
"Lloret",
"Lara",
""
],
[
"Martins",
"Joao",
""
],
[
"Nuzzo",
"Alessandra",
""
],
[
"Nassisi",
"Paola",
""
],
[
"Palazzo",
"Cosimo",
""
],
[
"Pina",
"Joao",
""
],
[
"Sciacca",
"Eva",
""
],
[
"Segatta",
"Matteo",
""
],
[
"Sgaravatto",
"Massimo",
""
],
[
"Spiga",
"Daniele",
""
],
[
"Taneja",
"Sonia",
""
],
[
"Tangaro",
"Marco Antonio",
""
],
[
"Urbaniak",
"Michal",
""
],
[
"Vallero",
"Sara",
""
],
[
"Verlato",
"Marco",
""
],
[
"Wegh",
"Bas",
""
],
[
"Zaccolo",
"Valentina",
""
],
[
"Zambelli",
"Federico",
""
],
[
"Zangrando",
"Lisa",
""
],
[
"Zani",
"Stefano",
""
],
[
"Zok",
"Tomasz",
""
]
] |
new_dataset
| 0.994681 |
1903.04646
|
Dimitri Schreiber
|
Dimitri A. Schreiber, Daniel B. Shak, Alexander M. Norbash, Michael C.
Yip
|
An Open-Source 7-Axis, Robotic Platform to Enable Dexterous Procedures
within CT Scanners
|
8 pages, 9 figures, final submission to IROS 2019
| null |
10.1109/IROS40897.2019.8968552
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the design, manufacture, and performance of a highly
dexterous, low-profile, 7 Degree-of-Freedom (DOF) robotic arm for CT-guided
percutaneous needle biopsy. Direct CT guidance allows physicians to localize
tumours quickly; however, needle insertion is still performed by hand. This
system is mounted to a fully active gantry superior to the patient's head and
teleoperated by a radiologist. Unlike other similar robots, this robot's fully
serial-link approach uses a unique combination of belt and cable drives for
high-transparency and minimal-backlash, allowing for an expansive working area
and numerous approach angles to targets all while maintaining a small in-bore
cross-section of less than $16cm^2$. Simulations verified the system's
expansive collision free work-space and ability to hit targets across the
entire chest, as required for lung cancer biopsy. Targeting error is on average
$<1mm$ on a teleoperated accuracy task, illustrating the system's sufficient
accuracy to perform biopsy procedures. The system is designed for lung biopsies
due to the large working volume that is required for reaching peripheral lung
lesions, though, with its large working volume and small in-bore
cross-sectional area, the robotic system is effectively a general-purpose
CT-compatible manipulation device for percutaneous procedures. Finally, with
the considerable development time undertaken in designing a precise and
flexible-use system and with the desire to reduce the burden of other
researchers in developing algorithms for image-guided surgery, this system
provides open-access, and to the best of our knowledge, is the first
open-hardware image-guided biopsy robot of its kind.
|
[
{
"version": "v1",
"created": "Mon, 11 Mar 2019 23:04:36 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Aug 2019 16:20:20 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Schreiber",
"Dimitri A.",
""
],
[
"Shak",
"Daniel B.",
""
],
[
"Norbash",
"Alexander M.",
""
],
[
"Yip",
"Michael C.",
""
]
] |
new_dataset
| 0.996441 |
2002.11503
|
Manuel Fernandez-Carmona
|
Manuel Fernandez-Carmona, Nicola Bellotto
|
Wavelet-based Temporal Models of Human Activity for Anomaly Detection in
Smart Robot-assisted Environments
|
14 pages, 6 figures
| null | null | null |
cs.AI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new approach for temporal modelling of long-term human
activities with smart-home sensors, which is used to detect anomalous
situations in a robot-assisted environment. The model is based on wavelet
transforms and used to forecast smart sensor data, providing a temporal prior
to detect unexpected events in human environments. To this end, a new extension
of Hybrid Markov Logic Networks has been developed that merges different
anomaly indicators, including activities detected by binary sensors, expert
logic rules, and wavelet-based temporal models. The latter in particular allow
the inference system to discover deviations from long-term activity patterns,
which cannot be detected by simpler frequency-based models. Two new publicly
available datasets were collected using several smart-sensors to evaluate the
approach in office and domestic scenarios. The experimental results demonstrate
the effectiveness of the proposed solutions and their successful deployment in
complex human environments, showing their potential for future smart-home and
robot integrated services.
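A minimal sketch of the wavelet side of such a temporal model, assuming
PyWavelets and an hourly activity-count series; the forecasting model and the
Hybrid Markov Logic Network fusion are not reproduced here.

import numpy as np
import pywt

# Toy stand-in: two weeks of hourly activity counts with a daily pattern.
t = np.arange(24 * 14)
counts = 5 + 3 * np.sin(2 * np.pi * t / 24) + np.random.poisson(1, t.size)

# Multi-level discrete wavelet decomposition of the sensor history.
coeffs = pywt.wavedec(counts, "db4", level=3)

# Long-term trend: reconstruct from approximation coefficients only,
# discarding the detail (high-frequency) bands.
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(approx_only, "db4")
print(trend.shape, counts.shape)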
|
[
{
"version": "v1",
"created": "Wed, 26 Feb 2020 14:08:46 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 18:02:29 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Fernandez-Carmona",
"Manuel",
""
],
[
"Bellotto",
"Nicola",
""
]
] |
new_dataset
| 0.998047 |
2006.08812
|
Xiongjie Chen
|
Xiongjie Chen, Yongxin Yang, Yunpeng Li
|
Augmented Sliced Wasserstein Distances
|
37 pages, 19 figures, published as a conference paper at ICLR 2022
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While theoretically appealing, the application of the Wasserstein distance to
large-scale machine learning problems has been hampered by its prohibitive
computational cost. The sliced Wasserstein distance and its variants improve
the computational efficiency through the random projection, yet they suffer
from low accuracy if the number of projections is not sufficiently large,
because the majority of projections result in trivially small values. In this
work, we propose a new family of distance metrics, called augmented sliced
Wasserstein distances (ASWDs), constructed by first mapping samples to
higher-dimensional hypersurfaces parameterized by neural networks. It is
derived from a key observation that (random) linear projections of samples
residing on these hypersurfaces would translate to much more flexible nonlinear
projections in the original sample space, so they can capture complex
structures of the data distribution. We show that the hypersurfaces can be
optimized by gradient ascent efficiently. We provide the condition under which
the ASWD is a valid metric and show that this can be obtained by an injective
neural network architecture. Numerical results demonstrate that the ASWD
significantly outperforms other Wasserstein variants for both synthetic and
real-world problems.
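For orientation, a numpy sketch of the vanilla sliced Wasserstein distance
that the ASWD augments; the neural hypersurface mapping is omitted, and equal
sample sizes are assumed.

import numpy as np

def sliced_wasserstein(x, y, n_proj=256, p=2, seed=0):
    """Monte Carlo estimate of SW_p between equal-size samples x, y (n, d)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    xp = np.sort(x @ theta.T, axis=0)   # sorted 1D projections
    yp = np.sort(y @ theta.T, axis=0)
    return float(np.mean(np.abs(xp - yp) ** p)) ** (1.0 / p)

x = np.random.default_rng(1).normal(size=(500, 5))
y = np.random.default_rng(2).normal(loc=1.0, size=(500, 5))
print(sliced_wasserstein(x, y))  # clearly positive: the samples differ by a mean shift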
|
[
{
"version": "v1",
"created": "Mon, 15 Jun 2020 23:00:08 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jun 2020 21:40:23 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Oct 2020 21:41:02 GMT"
},
{
"version": "v4",
"created": "Thu, 1 Jul 2021 14:19:11 GMT"
},
{
"version": "v5",
"created": "Mon, 11 Oct 2021 21:00:54 GMT"
},
{
"version": "v6",
"created": "Wed, 16 Mar 2022 13:23:45 GMT"
},
{
"version": "v7",
"created": "Thu, 17 Mar 2022 12:14:25 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Chen",
"Xiongjie",
""
],
[
"Yang",
"Yongxin",
""
],
[
"Li",
"Yunpeng",
""
]
] |
new_dataset
| 0.966176 |
2103.08361
|
Minghui Xu
|
Minghui Xu, Feng Zhao, Yifei Zou, Chunchi Liu, Xiuzhen Cheng, Falko
Dressler
|
BLOWN: A Blockchain Protocol for Single-Hop Wireless Networks under
Adversarial SINR
|
18 pages, 11 figures, journal paper
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Known as a distributed ledger technology (DLT), blockchain has attracted much
attention due to its properties such as decentralization, security,
immutability and transparency, and its potential to serve as an
infrastructure for various applications. Blockchain can empower wireless
networks with identity management, data integrity, access control, and
high-level security. However, previous studies on blockchain-enabled wireless
networks mostly focus on proposing architectures or building systems with
popular blockchain protocols. Nevertheless, such existing protocols have
obvious shortcomings when adopted in wireless networks where nodes may have
limited physical resources, may lack well-established reliable
channels, or may suffer from variable bandwidths impacted by environments or
jamming attacks. In this paper, we propose a novel consensus protocol named
Proof-of-Channel (PoC) leveraging the natural properties of wireless
communications, and develop a permissioned BLOWN protocol (BLOckchain protocol
for Wireless Networks) for single-hop wireless networks under an adversarial
SINR model. We formalize BLOWN with the universal composition framework and
prove its security properties, namely persistence and liveness, as well as its
strengths in countering adversarial jamming, double-spending, and Sybil
attacks, which are also demonstrated by extensive simulation studies.
|
[
{
"version": "v1",
"created": "Mon, 15 Mar 2021 13:01:04 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Apr 2021 11:58:57 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Mar 2022 03:13:50 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Xu",
"Minghui",
""
],
[
"Zhao",
"Feng",
""
],
[
"Zou",
"Yifei",
""
],
[
"Liu",
"Chunchi",
""
],
[
"Cheng",
"Xiuzhen",
""
],
[
"Dressler",
"Falko",
""
]
] |
new_dataset
| 0.998575 |
2105.07122
|
Qingxiu Dong
|
Qingxiu Dong, Ziwei Qin, Heming Xia, Tian Feng, Shoujie Tong, Haoran
Meng, Lin Xu, Weidong Zhan, Sujian Li and Zhongyu Wei, Tianyu Liu, Zuifang
Sui
|
Premise-based Multimodal Reasoning: Conditional Inference on Joint
Textual and Visual Clues
|
ACL 2022 Main conference (Long Paper)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is a common practice for recent works in vision language cross-modal
reasoning to adopt a binary or multi-choice classification formulation taking
as input a set of source image(s) and textual query. In this work, we take a
sober look at such an unconditional formulation in the sense that no prior
knowledge is specified with respect to the source image(s). Inspired by the
designs of both visual commonsense reasoning and natural language inference
tasks, we propose a new task termed Premise-based Multi-modal Reasoning(PMR)
where a textual premise is the background presumption on each source image. The
PMR dataset contains 15,360 manually annotated samples which are created by a
multi-phase crowd-sourcing process. With selected high-quality movie
screenshots and human-curated premise templates from 6 pre-defined categories,
we ask crowd-source workers to write one true hypothesis and three distractors
(4 choices) given the premise and image through a cross-check procedure.
Besides, we generate adversarial samples to alleviate the annotation artifacts
and double the size of PMR. We benchmark various state-of-the-art (pretrained)
multi-modal inference models on PMR and conduct comprehensive experimental
analyses to showcase the utility of our dataset.
|
[
{
"version": "v1",
"created": "Sat, 15 May 2021 03:25:42 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 11:20:04 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Mar 2022 04:11:58 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Dong",
"Qingxiu",
""
],
[
"Qin",
"Ziwei",
""
],
[
"Xia",
"Heming",
""
],
[
"Feng",
"Tian",
""
],
[
"Tong",
"Shoujie",
""
],
[
"Meng",
"Haoran",
""
],
[
"Xu",
"Lin",
""
],
[
"Zhan",
"Weidong",
""
],
[
"Li",
"Sujian",
""
],
[
"Wei",
"Zhongyu",
""
],
[
"Liu",
"Tianyu",
""
],
[
"Sui",
"Zuifang",
""
]
] |
new_dataset
| 0.991457 |
2105.13236
|
Sascha Saralajew
|
Sascha Saralajew and Lars Ohnemus and Lukas Ewecker and Ebubekir Asan
and Simon Isele and Stefan Roos
|
A Dataset for Provident Vehicle Detection at Night
|
to be published in the proceedings of the 2021 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS 2021)
| null |
10.1109/IROS51168.2021.9636162
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In current object detection, algorithms require the object to be directly
visible in order to be detected. As humans, however, we intuitively use visual
cues caused by the respective object to already make assumptions about its
appearance. In the context of driving, such cues can be shadows during the day
and often light reflections at night. In this paper, we study the problem of
how to map this intuitive human behavior to computer vision algorithms to
detect oncoming vehicles at night just from the light reflections they cause by
their headlights. For that, we present an extensive open-source dataset
containing 59746 annotated grayscale images out of 346 different scenes in a
rural environment at night. In these images, all oncoming vehicles, their
corresponding light objects (e.g., headlamps), and their respective light
reflections (e.g., light reflections on guardrails) are labeled. In this
context, we discuss the characteristics of the dataset and the challenges in
objectively describing visual cues such as light reflections. We provide
different metrics for different ways to approach the task and report the
results we achieved using state-of-the-art and custom object detection models
as a first benchmark. With that, we want to bring attention to a new and so far
neglected field in computer vision research, encourage more researchers to
tackle the problem, and thereby further close the gap between human performance
and computer vision systems.
|
[
{
"version": "v1",
"created": "Thu, 27 May 2021 15:31:33 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Aug 2021 10:00:40 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Saralajew",
"Sascha",
""
],
[
"Ohnemus",
"Lars",
""
],
[
"Ewecker",
"Lukas",
""
],
[
"Asan",
"Ebubekir",
""
],
[
"Isele",
"Simon",
""
],
[
"Roos",
"Stefan",
""
]
] |
new_dataset
| 0.999692 |
2106.02773
|
Jiaming Wang
|
Zhenfeng Shao, Jiaming Wang, Lianbing Deng, Xiao Huang, Tao Lu, Fang
Luo, Ruiqian Zhang, Xianwei Lv, Chaoya Dang, Qing Ding, and Zhiqiang Wang
|
GLSD: The Global Large-Scale Ship Database and Baseline Evaluations
|
11 pages, 7 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a challenging global large-scale ship database
(called GLSD), designed specifically for ship detection tasks. The
GLSD database includes a total of 212,357 annotated instances from 152,576
images. Based on the collected images, we propose 13 ship categories that
widely exist in international routes. These categories include Sailing boat,
Fishing boat, Passenger ship, Warship, General cargo ship, Container ship, Bulk
cargo carrier, Barge, Ore carrier, Speed boat, Canoe, Oil carrier, and Tug. The
motivations of developing GLSD include the following: 1) providing a refined and
extensive ship detection database that benefits the object detection community,
2) establishing a database with exhaustive labels (bounding boxes and ship
class categories) in a uniform classification scheme, and 3) providing a
large-scale ship database with geographic information (covering more than 3000
ports and 33 routes) that benefits multi-modal analysis. In addition, we
discuss the evaluation protocols corresponding to image characteristics in GLSD
and analyze the performance of selected state-of-the-art object detection
algorithms on GSLD, aiming to establish baselines for future studies. More
information regarding the designed GLSD can be found at
https://github.com/jiaming-wang/GLSD.
|
[
{
"version": "v1",
"created": "Sat, 5 Jun 2021 01:49:41 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 03:28:28 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Shao",
"Zhenfeng",
""
],
[
"Wang",
"Jiaming",
""
],
[
"Deng",
"Lianbing",
""
],
[
"Huang",
"Xiao",
""
],
[
"Lu",
"Tao",
""
],
[
"Luo",
"Fang",
""
],
[
"Zhang",
"Ruiqian",
""
],
[
"Lv",
"Xianwei",
""
],
[
"Dang",
"Chaoya",
""
],
[
"Ding",
"Qing",
""
],
[
"Wang",
"Zhiqiang",
""
]
] |
new_dataset
| 0.999511 |
2107.08391
|
Dongze Lian
|
Dongze Lian, Zehao Yu, Xing Sun, Shenghua Gao
|
AS-MLP: An Axial Shifted MLP Architecture for Vision
|
Accepted by ICLR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An Axial Shifted MLP architecture (AS-MLP) is proposed in this paper.
Different from MLP-Mixer, where the global spatial feature is encoded for
information flow through matrix transposition and one token-mixing MLP, we pay
more attention to local feature interactions. By axially shifting channels
of the feature map, AS-MLP is able to obtain the information flow from
different axial directions, which captures the local dependencies. Such an
operation enables us to utilize a pure MLP architecture to achieve the same
local receptive field as CNN-like architecture. We can also design the
receptive field size and dilation of blocks of AS-MLP, etc, in the same spirit
of convolutional neural networks. With the proposed AS-MLP architecture, our
model obtains 83.3% Top-1 accuracy with 88M parameters and 15.2 GFLOPs on the
ImageNet-1K dataset. Such a simple yet effective architecture outperforms all
MLP-based architectures and achieves competitive performance compared to the
transformer-based architectures (e.g., Swin Transformer) even with slightly
lower FLOPs. In addition, AS-MLP is also the first MLP-based architecture to be
applied to the downstream tasks (e.g., object detection and semantic
segmentation). The experimental results are also impressive. Our proposed
AS-MLP obtains 51.5 mAP on the COCO validation set and 49.5 MS mIoU on the
ADE20K dataset, which is competitive compared to the transformer-based
architectures. Our AS-MLP establishes a strong baseline of MLP-based
architecture. Code is available at https://github.com/svip-lab/AS-MLP.
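A hedged sketch of the axial shift operation: channels are chunked into groups
and each group is shifted by a different offset along one spatial axis.
torch.roll (cyclic) is used here for brevity, whereas the paper zero-pads;
shift size and grouping are illustrative.

import torch

def axial_shift(x: torch.Tensor, shift: int = 1, dim: int = 2) -> torch.Tensor:
    """Shift channel groups by offsets -shift..+shift along one axis
    (height: dim=2, width: dim=3) of an (N, C, H, W) tensor."""
    offsets = range(-shift, shift + 1)            # e.g. -1, 0, +1
    groups = x.chunk(len(offsets), dim=1)         # split along channels
    shifted = [torch.roll(g, o, dims=dim) for g, o in zip(groups, offsets)]
    return torch.cat(shifted, dim=1)

x = torch.randn(1, 6, 8, 8)                       # (batch, C, H, W)
y = axial_shift(axial_shift(x, dim=2), dim=3)     # shift along H, then W
print(y.shape)  # torch.Size([1, 6, 8, 8])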
|
[
{
"version": "v1",
"created": "Sun, 18 Jul 2021 08:56:34 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 06:59:03 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Lian",
"Dongze",
""
],
[
"Yu",
"Zehao",
""
],
[
"Sun",
"Xing",
""
],
[
"Gao",
"Shenghua",
""
]
] |
new_dataset
| 0.997532 |
2109.12405
|
Rajesh Kedia
|
Lokesh Siddhu, Rajesh Kedia, Shailja Pandey, Martin Rapp, Anuj
Pathania, J\"org Henkel, and Preeti Ranjan Panda
|
CoMeT: An Integrated Interval Thermal Simulation Toolchain for 2D, 2.5D,
and 3D Processor-Memory Systems
|
https://github.com/marg-tools/CoMeT
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Processing cores and the accompanying main memory work in tandem to enable
modern processors. Dissipating the heat produced by computation and memory
access remains a significant problem for processors. Therefore, processor
thermal management continues to be an active research topic. Most thermal
management research takes place using simulations, given the challenges of
measuring temperature in real processors. Since core and memory are fabricated
on separate packages in most existing processors, with the memory having lower
power densities, thermal management research in processors has primarily
focused on the cores.
Memory bandwidth limitations associated with 2D processors lead to
high-density 2.5D and 3D packaging technology. 2.5D packaging places cores and
memory on the same package. 3D packaging technology takes it further by
stacking layers of memory on the top of cores themselves. Such packagings
significantly increase the power density, making processors prone to heating.
Therefore, mitigating thermal issues in high-density processors (packaged with
stacked memory) becomes an even more pressing problem. However, given the lack
of thermal modeling for memories in existing interval thermal simulation
toolchains, they are unsuitable for studying thermal management for
high-density processors.
To address this issue, we present CoMeT, the first integrated Core and Memory
interval Thermal simulation toolchain. CoMeT comprehensively supports thermal
simulation of high- and low-density processors corresponding to four different
core-memory configurations - off-chip DDR memory, off-chip 3D memory, 2.5D, and
3D. CoMeT supports several novel features that facilitate overlying system
research. Compared to an equivalent state-of-the-art core-only toolchain, CoMeT
adds only a ~5% simulation-time overhead. The source code of CoMeT has been
made open for public use under the MIT license.
|
[
{
"version": "v1",
"created": "Sat, 25 Sep 2021 17:23:51 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Sep 2021 13:11:14 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Mar 2022 17:07:14 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Mar 2022 03:25:41 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Siddhu",
"Lokesh",
""
],
[
"Kedia",
"Rajesh",
""
],
[
"Pandey",
"Shailja",
""
],
[
"Rapp",
"Martin",
""
],
[
"Pathania",
"Anuj",
""
],
[
"Henkel",
"Jörg",
""
],
[
"Panda",
"Preeti Ranjan",
""
]
] |
new_dataset
| 0.999693 |
2109.13407
|
Dimitri Schreiber
|
Dimitri A. Schreiber, Zhaowei Yu, Hanpeng Jiang, Taylor Henderson,
Guosong Li, Julie Yu, Renjie Zhu, Alexander M. Norbash, Michael C. Yip
|
CRANE: a 10 Degree-of-Freedom, Tele-surgical System for Dexterous
Manipulation within Imaging Bores
|
6+2 pages, 8 figures, ICRA 2022
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physicians perform minimally invasive percutaneous procedures under Computed
Tomography (CT) image guidance both for the diagnosis and treatment of numerous
diseases. For these procedures performed within Computed Tomography Scanners,
robots can enable physicians to more accurately target sub-dermal lesions while
increasing safety. However, existing robots for this application have limited
dexterity, workspace, or accuracy. This paper describes the design,
manufacture, and performance of a highly dexterous, low-profile, 8+2
Degree-of-Freedom (DoF) robotic arm for CT-guided percutaneous needle biopsy. In
this article, we propose CRANE: CT Robot and Needle Emplacer. The design
focuses on system dexterity with high accuracy: extending physicians' ability
to manipulate and insert needles within the scanner bore while providing the
high accuracy possible with a robot. We also propose and validate a system
architecture and control scheme for low-profile and highly accurate
image-guided robotics that meets the clinical requirements for target accuracy
during an in-situ evaluation. The accuracy is additionally evaluated through a
trajectory tracking evaluation resulting in <0.2 mm and <0.71 degree tracking
error. Finally, we present a novel needle driving and grasping mechanism with
controlling electronics that provides simple manufacturing, sterilization, and
adaptability to accommodate different sizes and types of needles.
|
[
{
"version": "v1",
"created": "Tue, 28 Sep 2021 00:23:16 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 17:17:11 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Schreiber",
"Dimitri A.",
""
],
[
"Yu",
"Zhaowei",
""
],
[
"Jiang",
"Hanpeng",
""
],
[
"Henderson",
"Taylor",
""
],
[
"Li",
"Guosong",
""
],
[
"Yu",
"Julie",
""
],
[
"Zhu",
"Renjie",
""
],
[
"Norbash",
"Alexander M.",
""
],
[
"Yip",
"Michael C.",
""
]
] |
new_dataset
| 0.993811 |
2112.05329
|
Yingruo Fan
|
Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura
|
FaceFormer: Speech-Driven 3D Facial Animation with Transformers
|
Accepted to CVPR 2022
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech-driven 3D facial animation is challenging due to the complex geometry
of human faces and the limited availability of 3D audio-visual data. Prior
works typically focus on learning phoneme-level features of short audio windows
with limited context, occasionally resulting in inaccurate lip movements. To
tackle this limitation, we propose a Transformer-based autoregressive model,
FaceFormer, which encodes the long-term audio context and autoregressively
predicts a sequence of animated 3D face meshes. To cope with the data scarcity
issue, we integrate the self-supervised pre-trained speech representations.
Also, we devise two biased attention mechanisms well suited to this specific
task, including the biased cross-modal multi-head (MH) attention and the biased
causal MH self-attention with a periodic positional encoding strategy. The
former effectively aligns the audio-motion modalities, whereas the latter
offers abilities to generalize to longer audio sequences. Extensive experiments
and a perceptual user study show that our approach outperforms the existing
state-of-the-arts. The code will be made available.
|
[
{
"version": "v1",
"created": "Fri, 10 Dec 2021 04:21:59 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Dec 2021 06:31:45 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Mar 2022 09:48:12 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Mar 2022 00:51:05 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Fan",
"Yingruo",
""
],
[
"Lin",
"Zhaojiang",
""
],
[
"Saito",
"Jun",
""
],
[
"Wang",
"Wenping",
""
],
[
"Komura",
"Taku",
""
]
] |
new_dataset
| 0.993824 |
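The FaceFormer abstract above mentions a periodic positional encoding strategy; below is a minimal sketch of one plausible reading, in which positions are wrapped modulo a fixed period. The period, dimensionality, and sinusoidal form are our own assumptions, not the paper's exact design.

import math
import torch

def periodic_positional_encoding(seq_len, d_model, period=25):
    # Standard sinusoidal encoding with positions wrapped modulo a
    # fixed period, so the pattern repeats every `period` frames.
    pos = (torch.arange(seq_len) % period).float().unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2).float()
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

pe = periodic_positional_encoding(seq_len=120, d_model=64, period=25)

Repeating the encoding keeps positional values bounded for sequences longer than those seen in training, which matches the abstract's claim of generalizing to longer audio.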
2112.12535
|
Hamd Ul Moqeet Riaz
|
Hamd ul Moqeet Riaz, Nuri Benbarka, Timon Hoefer, and Andreas Zell
|
FourierMask: Instance Segmentation using Fourier Mapping in Implicit
Neural Networks
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present FourierMask, which employs Fourier series combined with implicit
neural representations to generate instance segmentation masks. We apply a
Fourier mapping (FM) to the coordinate locations and utilize the mapped
features as inputs to an implicit representation (coordinate-based multi-layer
perceptron (MLP)). FourierMask learns to predict the coefficients of the FM for
a particular instance, and therefore adapts the FM to a specific object. This
allows FourierMask to be generalized to predict instance segmentation masks
from natural images. Since implicit functions are continuous in the domain of
input coordinates, we illustrate that by sub-sampling the input pixel
coordinates, we can generate higher resolution masks during inference.
Furthermore, we train a renderer MLP (FourierRend) on the uncertain predictions
of FourierMask and illustrate that it significantly improves the quality of the
masks. FourierMask shows competitive results on the MS COCO dataset compared to
the baseline Mask R-CNN at the same output resolution and surpasses it on
higher resolution.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 13:42:32 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 14:48:47 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Riaz",
"Hamd ul Moqeet",
""
],
[
"Benbarka",
"Nuri",
""
],
[
"Hoefer",
"Timon",
""
],
[
"Zell",
"Andreas",
""
]
] |
new_dataset
| 0.999274 |
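A minimal sketch of the coordinate Fourier mapping described in the FourierMask abstract above; the frequency-matrix shape, scaling, and sin/cos concatenation are generic Fourier-feature assumptions, not the paper's exact formulation.

import math
import torch

def fourier_mapping(coords, coeffs):
    # coords: (N, 2) pixel locations in [0, 1]; coeffs: (2, F) per-instance
    # frequency matrix (predicted per object in FourierMask's setting).
    proj = 2.0 * math.pi * coords @ coeffs           # (N, F)
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

coords = torch.rand(1024, 2)                # denser coords -> finer masks
coeffs = torch.randn(2, 64)
features = fourier_mapping(coords, coeffs)  # (1024, 128), fed to the MLP

Because the downstream MLP is continuous in its input coordinates, evaluating it on a denser coordinate grid at inference yields higher-resolution masks, as the abstract notes.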
2201.09613
|
Patrick Ebel
|
Patrick Ebel and Yajin Xu and Michael Schmitt and Xiaoxiang Zhu
|
SEN12MS-CR-TS: A Remote Sensing Data Set for Multi-modal Multi-temporal
Cloud Removal
| null |
IEEE Transactions on Geoscience and Remote Sensing, 2022
|
10.1109/TGRS.2022.3146246
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
About half of all optical observations collected via spaceborne satellites
are affected by haze or clouds. Consequently, cloud coverage affects the remote
sensing practitioner's capability for continuous and seamless monitoring of
our planet. This work addresses the challenge of optical satellite image
reconstruction and cloud removal by proposing a novel multi-modal and
multi-temporal data set called SEN12MS-CR-TS. We propose two models
highlighting the benefits and use cases of SEN12MS-CR-TS: First, a multi-modal
multi-temporal 3D-Convolution Neural Network that predicts a cloud-free image
from a sequence of cloudy optical and radar images. Second, a
sequence-to-sequence translation model that predicts a cloud-free time series
from a cloud-covered time series. Both approaches are evaluated experimentally,
with their respective models trained and tested on SEN12MS-CR-TS. The conducted
experiments highlight the contribution of our data set to the remote sensing
community as well as the benefits of multi-modal and multi-temporal information
to reconstruct noisy information. Our data set is available at
https://patrickTUM.github.io/cloud_removal
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 11:38:49 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Ebel",
"Patrick",
""
],
[
"Xu",
"Yajin",
""
],
[
"Schmitt",
"Michael",
""
],
[
"Zhu",
"Xiaoxiang",
""
]
] |
new_dataset
| 0.999641 |
2203.04737
|
Gourav Datta
|
Gourav Datta, Souvik Kundu, Zihan Yin, Ravi Teja Lakkireddy, Joe
Mathai, Ajey Jacob, Peter A. Beerel, Akhilesh R. Jaiswal
|
P2M: A Processing-in-Pixel-in-Memory Paradigm for Resource-Constrained
TinyML Applications
|
15 pages, 8 figures
| null | null | null |
cs.LG cs.AR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The demand to process vast amounts of data generated from state-of-the-art
high-resolution cameras has motivated novel energy-efficient on-device AI
solutions. Visual data in such cameras are usually captured in the form of
analog voltages by a sensor pixel array, and then converted to the digital
domain for subsequent AI processing using analog-to-digital converters (ADC).
Recent research has tried to take advantage of massively parallel low-power
analog/digital computing in the form of near- and in-sensor processing, in
which the AI computation is performed partly in the periphery of the pixel
array and partly in a separate on-board CPU/accelerator. Unfortunately,
high-resolution input images still need to be streamed between the camera and
the AI processing unit, frame by frame, causing energy, bandwidth, and security
bottlenecks. To mitigate this problem, we propose a novel
Processing-in-Pixel-in-memory (P2M) paradigm, that customizes the pixel array
by adding support for analog multi-channel, multi-bit convolution, batch
normalization, and ReLU (Rectified Linear Units). Our solution includes a
holistic algorithm-circuit co-design approach and the resulting P2M paradigm
can be used as a drop-in replacement for embedding memory-intensive first few
layers of convolutional neural network (CNN) models within
foundry-manufacturable CMOS image sensor platforms. Our experimental results
indicate that P2M reduces data transfer bandwidth from sensors and analog to
digital conversions by ~21x, and the energy-delay product (EDP) incurred in
processing a MobileNetV2 model on a TinyML use case for visual wake words
dataset (VWW) by up to ~11x compared to standard near-processing or in-sensor
implementations, without any significant drop in test accuracy.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 04:15:29 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 01:55:36 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Datta",
"Gourav",
""
],
[
"Kundu",
"Souvik",
""
],
[
"Yin",
"Zihan",
""
],
[
"Lakkireddy",
"Ravi Teja",
""
],
[
"Mathai",
"Joe",
""
],
[
"Jacob",
"Ajey",
""
],
[
"Beerel",
"Peter A.",
""
],
[
"Jaiswal",
"Akhilesh R.",
""
]
] |
new_dataset
| 0.997058 |
2203.07918
|
Ruida Zhang
|
Yan Di, Ruida Zhang, Zhiqiang Lou, Fabian Manhardt, Xiangyang Ji,
Nassir Navab and Federico Tombari
|
GPV-Pose: Category-level Object Pose Estimation via Geometry-guided
Point-wise Voting
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
While 6D object pose estimation has recently made a huge leap forward, most
methods can still only handle a single or a handful of different objects, which
limits their applications. To circumvent this problem, category-level object
pose estimation has recently been revamped, which aims at predicting the 6D
pose as well as the 3D metric size for previously unseen instances from a given
set of object classes. This is, however, a much more challenging task due to
severe intra-class shape variations. To address this issue, we propose
GPV-Pose, a novel framework for robust category-level pose estimation,
harnessing geometric insights to enhance the learning of category-level
pose-sensitive features. First, we introduce a decoupled confidence-driven
rotation representation, which allows geometry-aware recovery of the associated
rotation matrix. Second, we propose a novel geometry-guided point-wise voting
paradigm for robust retrieval of the 3D object bounding box. Finally,
leveraging these different output streams, we can enforce several geometric
consistency terms, further increasing performance, especially for non-symmetric
categories. GPV-Pose produces superior results to state-of-the-art competitors
on common public benchmarks, whilst almost achieving real-time inference speed
at 20 FPS.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 13:58:50 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 14:12:21 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Di",
"Yan",
""
],
[
"Zhang",
"Ruida",
""
],
[
"Lou",
"Zhiqiang",
""
],
[
"Manhardt",
"Fabian",
""
],
[
"Ji",
"Xiangyang",
""
],
[
"Navab",
"Nassir",
""
],
[
"Tombari",
"Federico",
""
]
] |
new_dataset
| 0.993839 |
2203.08069
|
Rohan Yadav
|
Rohan Yadav and Alex Aiken and Fredrik Kjolstad
|
DISTAL: The Distributed Tensor Algebra Compiler
| null | null |
10.1145/3519939.3523437
| null |
cs.PL cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce DISTAL, a compiler for dense tensor algebra that targets modern
distributed and heterogeneous systems. DISTAL lets users independently describe
how tensors and computation map onto target machines through separate format
and scheduling languages. The combination of choices for data and computation
distribution creates a large design space that includes many algorithms from
both the past (e.g., Cannon's algorithm) and the present (e.g., COSMA). DISTAL
compiles a tensor algebra domain specific language to a distributed task-based
runtime system and supports nodes with multi-core CPUs and multiple GPUs. Code
generated by DISTAL is competitive with optimized codes for matrix multiply on
256 nodes of the Lassen supercomputer and outperforms existing systems by
between 1.8x to 3.7x (with a 45.7x outlier) on higher order tensor operations.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 16:59:56 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 16:42:25 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Yadav",
"Rohan",
""
],
[
"Aiken",
"Alex",
""
],
[
"Kjolstad",
"Fredrik",
""
]
] |
new_dataset
| 0.999615 |
2203.08528
|
Xinyu Yi
|
Xinyu Yi, Yuxiao Zhou, Marc Habermann, Soshi Shimada, Vladislav
Golyanik, Christian Theobalt, Feng Xu
|
Physical Inertial Poser (PIP): Physics-aware Real-time Human Motion
Tracking from Sparse Inertial Sensors
|
Accepted by CVPR 2022 with 3 strong accepts. Project page:
https://xinyu-yi.github.io/PIP/
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion capture from sparse inertial sensors has shown great potential
compared to image-based approaches since occlusions do not lead to a reduced
tracking quality and the recording space is not restricted to be within the
viewing frustum of the camera. However, capturing the motion and global
position only from a sparse set of inertial sensors is inherently ambiguous and
challenging. In consequence, recent state-of-the-art methods can barely handle
motions over very long periods, and unrealistic artifacts are common due to the
unawareness of physical constraints. To this end, we present the first method
which combines a neural kinematics estimator and a physics-aware motion
optimizer to track body motions with only 6 inertial sensors. The kinematics
module first regresses the motion status as a reference, and then the physics
module refines the motion to satisfy the physical constraints. Experiments
demonstrate a clear improvement over the state of the art in terms of capture
accuracy, temporal stability, and physical correctness.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 10:53:24 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 02:41:30 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Yi",
"Xinyu",
""
],
[
"Zhou",
"Yuxiao",
""
],
[
"Habermann",
"Marc",
""
],
[
"Shimada",
"Soshi",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Xu",
"Feng",
""
]
] |
new_dataset
| 0.995811 |
2203.08815
|
Christian Bauckhage
|
Christian Bauckhage, Thore Gerlach, Nico Piatkowski
|
QUBOs for Sorting Lists and Building Trees
| null | null | null | null |
cs.DS cs.LG quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We show that the fundamental tasks of sorting lists and building search trees
or heaps can be modeled as quadratic unconstrained binary optimization problems
(QUBOs). The idea is to understand these tasks as permutation problems and to
devise QUBOs whose solutions represent appropriate permutation matrices. We
discuss how to construct such QUBOs and how to solve them using Hopfield nets
or (adiabatic) quantum computing. In short, we show that neurocomputing methods
or quantum computers can solve problems usually associated with abstract data
structures.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 11:58:17 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Bauckhage",
"Christian",
""
],
[
"Gerlach",
"Thore",
""
],
[
"Piatkowski",
"Nico",
""
]
] |
new_dataset
| 0.995296 |
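The abstract above frames sorting as finding a permutation matrix via a QUBO; below is a small sketch of how such a QUBO can be assembled. The linear objective (a rearrangement-inequality argument) and the penalty weight are our own illustrative choices, not necessarily the paper's construction.

import numpy as np

def sorting_qubo(values, penalty=100.0):
    # Binary variables x[i*n+j] encode entries P[i, j] of an n x n
    # permutation matrix; minimizing -sum_i i * (P v)_i sorts v ascending,
    # provided the penalty dominates the objective coefficients.
    v = np.asarray(values, dtype=float)
    n = len(v)
    Q = np.zeros((n * n, n * n))
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            a = idx(i, j)
            Q[a, a] += -i * v[j]        # objective (rearrangement term)
            Q[a, a] += -2.0 * penalty   # -penalty each from row/column (sum-1)^2
            for k in range(n):
                if k != j:
                    Q[a, idx(i, k)] += penalty   # row sums to one
                if k != i:
                    Q[a, idx(k, j)] += penalty   # column sums to one
    return Q

Q = sorting_qubo([3.0, 1.0, 2.0])
# Q can now be handed to a Hopfield net, simulated annealing, or an
# adiabatic quantum solver; the minimizer reshapes to a permutation matrix.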
2203.08889
|
Elizabeth Vasquez
|
Elizabeth D. Vasquez, Allison M. Okamura, Sean Follmer
|
Social-Cultural Factors in the Design of Technology for Hispanic People
with Stroke
|
6 pages, 1 figure
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stroke is a leading cause of serious, long-term disability in the United
States. There exist disparities in both stroke prevalence and outcomes between
people with stroke in Hispanic and Latinx communities and the general stroke
population. Current stroke technology - which aims to improve quality of life
and bring people with stroke to the most functional, independent state possible
- has shown promising results for the general stroke population, but has failed
to close the recovery outcome gap for underserved Hispanic and Latinx people
with stroke. Previous work in health education, digital health, and HRI has
improved human health outcomes by incorporating social-cultural factors, though
not for stroke. In this position paper, we aim to justify accounting for unique
cultural factors in stroke technology design for the Hispanic and Latinx
community. We review examples of successful culturally appropriate
interventions and suggest design considerations (mutually beneficial community
consultation, accommodating barriers beforehand, building on culture, and
incorporating education of the family) to provide more culturally appropriate
design of Hispanic and Latinx stroke technology and reduce the disparity gap.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 19:04:36 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Vasquez",
"Elizabeth D.",
""
],
[
"Okamura",
"Allison M.",
""
],
[
"Follmer",
"Sean",
""
]
] |
new_dataset
| 0.988078 |
2203.08890
|
Gitta Kutyniok
|
Gitta Kutyniok
|
The Mathematics of Artificial Intelligence
|
16 pages, 7 figures
| null | null | null |
cs.LG math.HO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We currently witness the spectacular success of artificial intelligence in
both science and public life. However, the development of a rigorous
mathematical foundation is still at an early stage. In this survey article,
which is based on an invited lecture at the International Congress of
Mathematicians 2022, we will in particular focus on the current "workhorse" of
artificial intelligence, namely deep neural networks. We will present the main
theoretical directions along with several exemplary results and discuss key
open problems.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 19:04:53 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Kutyniok",
"Gitta",
""
]
] |
new_dataset
| 0.967821 |
2203.08903
|
Wonse Jo
|
Wonse Jo, Jaeeun Kim, Ruiqi Wang, Jeremy Pan, Revanth Krishna
Senthilkumaran and Byung-Cheol Min
|
SMARTmBOT: A ROS2-based Low-cost and Open-source Mobile Robot Platform
|
6 pages, 7 figures, and this paper was submitted to the 2022 IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS 2022)
| null | null | null |
cs.RO cs.AR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces SMARTmBOT, an open-source mobile robot platform based
on Robot Operating System 2 (ROS2). The characteristics of the SMARTmBOT,
including low-cost, modular-typed, customizable and expandable design, make it
an easily achievable and effective robot platform to support broad robotics
research and education involving either single-robot or multi-robot systems.
The total cost per robot is approximately $210, and most hardware components
can be fabricated by a generic 3D printer, hence allowing users to build the
robots or replace any broken parts conveniently. The SMARTmBOT is also equipped
with a rich range of sensors, making it competent for general task scenarios,
such as point-to-point navigation and obstacle avoidance. We validated the
mobility and function of SMARTmBOT through various robot navigation experiments
and applications with tasks including go-to-goal, pure-pursuit, line following,
and swarming. All source code necessary for reading sensors, streaming from an
embedded camera, and controlling the robot including robot navigation
controllers is available through an online repository that can be found at
https://github.com/SMARTlab-Purdue/SMARTmBOT.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 19:27:32 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Jo",
"Wonse",
""
],
[
"Kim",
"Jaeeun",
""
],
[
"Wang",
"Ruiqi",
""
],
[
"Pan",
"Jeremy",
""
],
[
"Senthilkumaran",
"Revanth Krishna",
""
],
[
"Min",
"Byung-Cheol",
""
]
] |
new_dataset
| 0.999404 |
2203.08913
|
Yuhuai (Tony) Wu
|
Yuhuai Wu and Markus N. Rabe and DeLesley Hutchins and Christian
Szegedy
|
Memorizing Transformers
|
Published as a conference paper at ICLR 2022 (spotlight)
| null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Language models typically need to be trained or finetuned in order to acquire
new knowledge, which involves updating their weights. We instead envision
language models that can simply read and memorize new data at inference time,
thus acquiring new knowledge immediately. In this work, we extend language
models with the ability to memorize the internal representations of past
inputs. We demonstrate that an approximate kNN lookup into a non-differentiable
memory of recent (key, value) pairs improves language modeling across various
benchmarks and tasks, including generic webtext (C4), math papers (arXiv),
books (PG-19), code (Github), as well as formal theorems (Isabelle). We show
that the performance steadily improves when we increase the size of memory up
to 262K tokens. On benchmarks including code and mathematics, we find that the
model is capable of making use of newly defined functions and theorems during
test time.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 19:54:35 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Wu",
"Yuhuai",
""
],
[
"Rabe",
"Markus N.",
""
],
[
"Hutchins",
"DeLesley",
""
],
[
"Szegedy",
"Christian",
""
]
] |
new_dataset
| 0.950705 |
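A minimal sketch of the kNN memory lookup described in the Memorizing Transformers abstract above; the dot-product scoring, top-k softmax, and tensor shapes are plausible assumptions, and the paper's combination of this retrieved context with local attention is not reproduced here.

import torch

def knn_memory_attention(q, mem_k, mem_v, k=32):
    # q: (T, d) queries; mem_k, mem_v: (M, d) stored (key, value) pairs
    # from past inputs. Attend only over each query's top-k neighbors.
    scores = q @ mem_k.T                             # (T, M)
    top, idx = scores.topk(k, dim=-1)                # (T, k)
    weights = torch.softmax(top / q.shape[-1] ** 0.5, dim=-1)
    retrieved = mem_v[idx]                           # (T, k, d)
    return (weights.unsqueeze(-1) * retrieved).sum(dim=1)

q = torch.randn(8, 64)
mem_k, mem_v = torch.randn(4096, 64), torch.randn(4096, 64)
out = knn_memory_attention(q, mem_k, mem_v)          # (8, 64)

Because the memory is non-differentiable, new (key, value) pairs can simply be appended at inference time, which is what lets the model pick up newly defined functions and theorems without weight updates.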
2203.08946
|
Oliver Gasser
|
Said Jawad Saidi, Oliver Gasser, Georgios Smaragdakis
|
One Bad Apple Can Spoil Your IPv6 Privacy
|
Accepted at ACM SIGCOMM Computer Communication Review, to appear in
the April 2022 issue
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
IPv6 is being adopted more and more, in part to facilitate the millions of
smart devices that have already been installed at home. Unfortunately, we find
that the privacy of a substantial fraction of end-users is still at risk,
despite the efforts by ISPs and electronic vendors to improve end-user
security, e.g., by adopting prefix rotation and IPv6 privacy extensions. By
analyzing passive data from a large ISP, we find that around 19% of end-users'
privacy can be at risk. When we investigate the root causes, we notice that a
single device at home that encodes its MAC address into the IPv6 address can be
utilized as a tracking identifier for the entire end-user prefix -- even if
other devices use IPv6 privacy extensions. Our results show that IoT devices
contribute the most to this privacy leakage and, to a lesser extent, personal
computers and mobile devices. To our surprise, some of the most popular IoT
manufacturers have not yet adopted privacy extensions that could otherwise
mitigate this privacy risk. Finally, we show that third-party providers, e.g.,
hypergiants, can track up to 17% of subscriber lines in our study.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 21:13:57 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Saidi",
"Said Jawad",
""
],
[
"Gasser",
"Oliver",
""
],
[
"Smaragdakis",
"Georgios",
""
]
] |
new_dataset
| 0.998077 |
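The leakage mechanism in the abstract above, a MAC address embedded via EUI-64, can be checked mechanically; a small sketch follows (the example address is hypothetical, built on the 2001:db8::/32 documentation prefix).

import ipaddress

def embeds_mac(addr):
    # EUI-64 interface identifiers carry the bytes 0xff 0xfe in the
    # middle of the lower 64 bits, marking a MAC-derived suffix.
    iid = int(ipaddress.IPv6Address(addr)) & ((1 << 64) - 1)
    return (iid >> 24) & 0xFFFF == 0xFFFE

def mac_from_eui64(addr):
    # Drop the ff:fe marker and flip the universal/local bit back.
    iid = int(ipaddress.IPv6Address(addr)) & ((1 << 64) - 1)
    b = iid.to_bytes(8, "big")
    mac = bytes([b[0] ^ 0x02]) + b[1:3] + b[5:8]
    return ":".join(f"{x:02x}" for x in mac)

print(embeds_mac("2001:db8::211:22ff:fe33:4455"))      # True
print(mac_from_eui64("2001:db8::211:22ff:fe33:4455"))  # 00:11:22:33:44:55

A single such device makes the whole prefix trackable across prefix rotations, since the stable MAC-derived suffix re-identifies the subscriber line even when other devices use privacy extensions.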
2203.09057
|
Ojas Kanhere
|
Aditya Chopra, Andrew Thornburg, Ojas Kanhere, Saeed S. Ghassemzadeh,
Milap Majmundar, and Theodore S. Rappaport
|
A Real-Time Millimeter Wave V2V Channel Sounder
|
2022 IEEE Wireless Communications and Networking Conference (WCNC)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless communication in millimeter wave spectrum is poised to provide the
latency and bandwidth needed for advanced use cases unfeasible at lower
frequencies. Despite the market potential of vehicular communication networks,
investigations into the millimeter wave vehicular channel are lacking. In this
paper, we present a detailed overview of a novel 1 GHz wide, multi-antenna
vehicle-to-vehicle directional channel sounding and measurement platform
operating at 28 GHz. The channel sounder uses two 256-element phased arrays at
the transmitter vehicle and four 64-element arrays at the receiver vehicle,
with the receiver measuring 116 different directional beams in less than 1
millisecond. By measuring the full multi-beam channel impulse response at large
bandwidths, our system provides unprecedented insight in instantaneous mobile
vehicle to vehicle channels. The system also uses centimeter-level global
position tracking and 360 degree video capture to provide additional contextual
information for joint communication and sensing applications. An initial
measurement campaign was conducted on highway and surface streets in Austin,
Texas. We show example data that highlights the sensing capability of the
system. Preliminary results from the measurement campaign show that bumper
mounted mmWave arrays provide rich scattering in traffic as well as provide
significant directional diversity, aiding high-reliability vehicular
communication. Additionally, potential waveguide effects from high traffic in
lanes can also extend the range of mmWave signals significantly.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 03:38:28 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Chopra",
"Aditya",
""
],
[
"Thornburg",
"Andrew",
""
],
[
"Kanhere",
"Ojas",
""
],
[
"Ghassemzadeh",
"Saeed S.",
""
],
[
"Majmundar",
"Milap",
""
],
[
"Rappaport",
"Theodore S.",
""
]
] |
new_dataset
| 0.99984 |
2203.09072
|
Shaolei Zhang
|
Shaolei Zhang, Yang Feng
|
Gaussian Multi-head Attention for Simultaneous Machine Translation
|
Accept to ACL 2022 findings. 12 pages, 8 figures
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simultaneous machine translation (SiMT) outputs translation while receiving
the streaming source inputs, and hence needs a policy to determine where to
start translating. The alignment between target and source words often implies
the most informative source word for each target word, and hence provides the
unified control over translation quality and latency, but unfortunately the
existing SiMT methods do not explicitly model the alignment to perform the
control. In this paper, we propose Gaussian Multi-head Attention (GMA) to
develop a new SiMT policy by modeling alignment and translation in a unified
manner. For SiMT policy, GMA models the aligned source position of each target
word, and accordingly waits until its aligned position to start translating. To
integrate the learning of alignment into the translation model, a Gaussian
distribution centered on predicted aligned position is introduced as an
alignment-related prior, which cooperates with translation-related soft
attention to determine the final attention. Experiments on En-Vi and De-En
tasks show that our method outperforms strong baselines on the trade-off
between translation and latency.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 04:01:25 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Zhang",
"Shaolei",
""
],
[
"Feng",
"Yang",
""
]
] |
new_dataset
| 0.989327 |
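A minimal sketch of the Gaussian alignment prior described in the GMA abstract above; the fixed sigma and the additive log-domain combination with the soft-attention logits are simplifying assumptions (the paper predicts aligned positions and trains them jointly with translation).

import torch

def gaussian_biased_attention(logits, aligned_pos, sigma=1.0):
    # logits: (T_tgt, T_src) raw cross-attention scores
    # aligned_pos: (T_tgt,) predicted source position for each target word
    src = torch.arange(logits.shape[1], dtype=torch.float)
    prior = -((src.unsqueeze(0) - aligned_pos.unsqueeze(1)) ** 2) / (2 * sigma ** 2)
    return torch.softmax(logits + prior, dim=-1)

attn = gaussian_biased_attention(torch.randn(4, 10),
                                 torch.tensor([1.0, 3.0, 5.0, 8.0]))

For the simultaneous policy, the same predicted positions tell the decoder how many source words must have arrived before each target word can be emitted.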
2203.09100
|
Zhe Hu
|
Zhe Hu, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Hua Wu, Lifu Huang
|
PLANET: Dynamic Content Planning in Autoregressive Transformers for
Long-form Text Generation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Despite recent progress of pre-trained language models on generating fluent
text, existing methods still suffer from incoherence problems in long-form text
generation tasks that require proper content control and planning to form a
coherent high-level logical flow. In this work, we propose PLANET, a novel
generation framework leveraging autoregressive self-attention mechanism to
conduct content planning and surface realization dynamically. To guide the
generation of output sentences, our framework enriches the Transformer decoder
with latent representations to maintain sentence-level semantic plans grounded
by bag-of-words. Moreover, we introduce a new coherence-based contrastive
learning objective to further improve the coherence of output. Extensive
experiments are conducted on two challenging long-form text generation tasks
including counterargument generation and opinion article generation. Both
automatic and human evaluations show that our method significantly outperforms
strong baselines and generates more coherent texts with richer contents.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 05:52:35 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Hu",
"Zhe",
""
],
[
"Chan",
"Hou Pong",
""
],
[
"Liu",
"Jiachen",
""
],
[
"Xiao",
"Xinyan",
""
],
[
"Wu",
"Hua",
""
],
[
"Huang",
"Lifu",
""
]
] |
new_dataset
| 0.980821 |
2203.09138
|
Yang Ding
|
Yang Ding, Jing Yu, Bang Liu, Yue Hu, Mingxin Cui, Qi Wu
|
MuKEA: Multimodal Knowledge Extraction and Accumulation for
Knowledge-based Visual Question Answering
|
Accepted by CVPR2022
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge-based visual question answering requires the ability to associate
external knowledge for open-ended cross-modal scene understanding. One
limitation of existing solutions is that they capture relevant knowledge from
text-only knowledge bases, which merely contain facts expressed by first-order
predicates or language descriptions while lacking complex but indispensable
multimodal knowledge for visual understanding. How to construct vision-relevant
and explainable multimodal knowledge for the VQA scenario has been less
studied. In this paper, we propose MuKEA to represent multimodal knowledge by
an explicit triplet to correlate visual objects and fact answers with implicit
relations. To bridge the heterogeneous gap, we propose three objective losses
to learn the triplet representations from complementary views: embedding
structure, topological relation and semantic space. By adopting a pre-training
and fine-tuning learning strategy, both basic and domain-specific multimodal
knowledge are progressively accumulated for answer prediction. We outperform
the state-of-the-art by 3.35% and 6.08% respectively on two challenging
knowledge-required datasets: OK-VQA and KRVQA. Experimental results prove the
complementary benefits of the multimodal knowledge with existing knowledge
bases and the advantages of our end-to-end framework over the existing pipeline
methods. The code is available at https://github.com/AndersonStra/MuKEA.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 07:42:14 GMT"
}
] | 2022-03-18T00:00:00 |
[
[
"Ding",
"Yang",
""
],
[
"Yu",
"Jing",
""
],
[
"Liu",
"Bang",
""
],
[
"Hu",
"Yue",
""
],
[
"Cui",
"Mingxin",
""
],
[
"Wu",
"Qi",
""
]
] |
new_dataset
| 0.994573 |
2104.03547
|
Morris Gu Mr
|
Morris Gu, Akansel Cosgun, Wesley P. Chan, Tom Drummond and Elizabeth
Croft
|
Seeing Thru Walls: Visualizing Mobile Robots in Augmented Reality
|
Accepted at RO-MAN 2021 "30th IEEE International Conference on Robot
and Human Interactive Communication", 6 pages, 5 figures, 5 Tables
| null |
10.1109/RO-MAN50785.2021.9515322
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an approach for visualizing mobile robots through an Augmented
Reality headset when there is no line-of-sight visibility between the robot and
the human. Three elements are visualized in Augmented Reality: 1) Robot's 3D
model to indicate its position, 2) An arrow emanating from the robot to
indicate its planned movement direction, and 3) A 2D grid to represent the
ground plane. We conduct a user study with 18 participants, in which each
participant is asked to retrieve objects, one at a time, from stations at the
two sides of a T-junction at the end of a hallway where a mobile robot is
roaming. The results show that visualizations improved the perceived safety and
efficiency of the task and led to participants being more comfortable with the
robot within their personal spaces. Furthermore, visualizing the motion intent
in addition to the robot model was found to be more effective than visualizing
the robot model alone. The proposed system can improve the safety of automated
warehouses by increasing the visibility and predictability of robots.
|
[
{
"version": "v1",
"created": "Thu, 8 Apr 2021 06:54:37 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 04:03:21 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Gu",
"Morris",
""
],
[
"Cosgun",
"Akansel",
""
],
[
"Chan",
"Wesley P.",
""
],
[
"Drummond",
"Tom",
""
],
[
"Croft",
"Elizabeth",
""
]
] |
new_dataset
| 0.979838 |
2105.08621
|
Thorben Funke
|
Thorben Funke, Megha Khosla, Mandeep Rathee, Avishek Anand
|
Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With the ever-increasing popularity and applications of graph neural
networks, several proposals have been made to explain and understand the
decisions of a graph neural network. Explanations for graph neural networks
differ in principle from other input settings. It is important to attribute the
decision to input features and other related instances connected by the graph
structure. We find the previous explanation generation approaches, which
maximize the mutual information between the label distribution produced by the
model and the explanation, to be restrictive. Specifically, existing approaches
do not enforce explanations to be valid, sparse, or robust to input
perturbations. In this paper, we lay down some of the fundamental principles
that an explanation method for graph neural networks should follow and
introduce a metric RDT-Fidelity as a measure of the explanation's
effectiveness. We propose a novel approach Zorro based on the principles from
rate-distortion theory that uses a simple combinatorial procedure to optimize
for RDT-Fidelity. Extensive experiments on real and synthetic datasets reveal
that Zorro produces sparser, stable, and more faithful explanations than
existing graph neural network explanation approaches.
|
[
{
"version": "v1",
"created": "Tue, 18 May 2021 15:53:09 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 12:55:29 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Funke",
"Thorben",
""
],
[
"Khosla",
"Megha",
""
],
[
"Rathee",
"Mandeep",
""
],
[
"Anand",
"Avishek",
""
]
] |
new_dataset
| 0.988161 |
2105.09453
|
Yukui Luo
|
Yukui Luo, Cheng Gongye, Yunsi Fei, Xiaolin Xu
|
DeepStrike: Remotely-Guided Fault Injection Attacks on DNN Accelerator
in Cloud-FPGA
|
6 pages, 6 figures
| null |
10.1109/DAC18074.2021.9586262
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As Field-programmable gate arrays (FPGAs) are widely adopted in clouds to
accelerate Deep Neural Networks (DNN), such virtualization environments have
posed many new security issues. This work investigates the integrity of DNN
FPGA accelerators in clouds. It proposes DeepStrike, a remotely-guided attack
based on power glitching fault injections targeting DNN execution. We
characterize the vulnerabilities of different DNN layers against fault
injections on FPGAs and leverage time-to-digital converter (TDC) sensors to
precisely control the timing of fault injections. Experimental results show
that our proposed attack can successfully disrupt the FPGA DSP kernel and
misclassify the target victim DNN application.
|
[
{
"version": "v1",
"created": "Thu, 20 May 2021 01:59:54 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Luo",
"Yukui",
""
],
[
"Gongye",
"Cheng",
""
],
[
"Fei",
"Yunsi",
""
],
[
"Xu",
"Xiaolin",
""
]
] |
new_dataset
| 0.976223 |
2105.11589
|
Karthik Gopalakrishnan
|
Ayush Shrivastava, Karthik Gopalakrishnan, Yang Liu, Robinson
Piramuthu, Gokhan T\"ur, Devi Parikh, Dilek Hakkani-T\"ur
|
VISITRON: Visual Semantics-Aligned Interactively Trained
Object-Navigator
|
Accepted at Findings of the Annual Meeting of the Association for
Computational Linguistics (ACL) 2022, previous version accepted at Visually
Grounded Interaction and Language (ViGIL) Workshop at NAACL 2021
| null | null | null |
cs.CV cs.AI cs.CL cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive robots navigating photo-realistic environments need to be trained
to effectively leverage and handle the dynamic nature of dialogue in addition
to the challenges underlying vision-and-language navigation (VLN). In this
paper, we present VISITRON, a multi-modal Transformer-based navigator better
suited to the interactive regime inherent to Cooperative Vision-and-Dialog
Navigation (CVDN). VISITRON is trained to: i) identify and associate
object-level concepts and semantics between the environment and dialogue
history, ii) identify when to interact vs. navigate via imitation learning of a
binary classification head. We perform extensive pre-training and fine-tuning
ablations with VISITRON to gain empirical insights and improve performance on
CVDN. VISITRON's ability to identify when to interact leads to a natural
generalization of the game-play mode introduced by Roman et al.
(arXiv:2005.00728) for enabling the use of such models in different
environments. VISITRON is competitive with models on the static CVDN
leaderboard and attains state-of-the-art performance on the Success weighted by
Path Length (SPL) metric.
|
[
{
"version": "v1",
"created": "Tue, 25 May 2021 00:21:54 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 03:03:00 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Shrivastava",
"Ayush",
""
],
[
"Gopalakrishnan",
"Karthik",
""
],
[
"Liu",
"Yang",
""
],
[
"Piramuthu",
"Robinson",
""
],
[
"Tür",
"Gokhan",
""
],
[
"Parikh",
"Devi",
""
],
[
"Hakkani-Tür",
"Dilek",
""
]
] |
new_dataset
| 0.974503 |
2105.11827
|
Alberto Sonnino
|
George Danezis, Eleftherios Kokoris Kogias, Alberto Sonnino, Alexander
Spiegelman
|
Narwhal and Tusk: A DAG-based Mempool and Efficient BFT Consensus
| null | null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose separating the task of reliable transaction dissemination from
transaction ordering, to enable high-performance Byzantine fault-tolerant
quorum-based consensus. We design and evaluate a mempool protocol, Narwhal,
specializing in high-throughput reliable dissemination and storage of causal
histories of transactions. Narwhal tolerates an asynchronous network and
maintains high performance despite failures. Narwhal is designed to easily
scale-out using multiple workers at each validator, and we demonstrate that
there is no foreseeable limit to the throughput we can achieve. Composing
Narwhal with a partially synchronous consensus protocol (Narwhal-HotStuff)
yields significantly better throughput even in the presence of faults or
intermittent loss of liveness due to asynchrony. However, loss of liveness can
result in higher latency. To achieve overall good performance when faults occur,
we design Tusk, a zero-message overhead asynchronous consensus protocol, to
work with Narwhal. We demonstrate its high performance under a variety of
configurations and faults. As a summary of results, on a WAN, Narwhal-Hotstuff
achieves over 130,000 tx/sec at less than 2-sec latency compared with 1,800
tx/sec at 1-sec latency for Hotstuff. Additional workers increase throughput
linearly to 600,000 tx/sec without any latency increase. Tusk achieves 160,000
tx/sec with about 3 seconds latency. Under faults, both protocols maintain high
throughput, but Narwhal-HotStuff suffers from increased latency.
|
[
{
"version": "v1",
"created": "Tue, 25 May 2021 10:53:41 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Jun 2021 12:10:22 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Oct 2021 16:24:02 GMT"
},
{
"version": "v4",
"created": "Wed, 16 Mar 2022 09:55:20 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Danezis",
"George",
""
],
[
"Kogias",
"Eleftherios Kokoris",
""
],
[
"Sonnino",
"Alberto",
""
],
[
"Spiegelman",
"Alexander",
""
]
] |
new_dataset
| 0.990704 |
2106.06920
|
Kavindie Katuwandeniya
|
Kavindie Katuwandeniya, Stefan H. Kiss, Lei Shi, and Jaime Valls Miro
|
Multi-modal Scene-compliant User Intention Estimation in Navigation
|
Published in 2021 IROS
|
2021 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS)
|
10.1109/IROS51168.2021.9636142
| null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A multi-modal framework to generate user intention distributions when
operating a mobile vehicle is proposed in this work. The model learns from past
observed trajectories and leverages traversability information derived from the
visual surroundings to produce a set of future trajectories, suitable to be
directly embedded into a perception-action shared control strategy on a mobile
agent, or as a safety layer to supervise the prudent operation of the vehicle.
We base our solution on a conditional Generative Adversarial Network with
Long-Short Term Memory cells to capture trajectory distributions conditioned on
past trajectories, further fused with traversability probabilities derived from
visual segmentation with a Convolutional Neural Network. The proposed
data-driven framework results in a significant reduction in error of the
predicted trajectories (versus the ground truth) from comparable strategies in
the literature (e.g. Social-GAN) that fail to account for information other
than the agent's past history. Experiments were conducted on a dataset
collected with a custom wheelchair model built onto the open-source urban
driving simulator CARLA, proving also that the proposed framework can be used
with a small, un-annotated dataset.
|
[
{
"version": "v1",
"created": "Sun, 13 Jun 2021 05:11:33 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 03:07:22 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Katuwandeniya",
"Kavindie",
""
],
[
"Kiss",
"Stefan H.",
""
],
[
"Shi",
"Lei",
""
],
[
"Miro",
"Jaime Valls",
""
]
] |
new_dataset
| 0.985118 |
2108.09509
|
Caciano Machado
|
Caciano Machado and Renan R. S. dos Santos and Carla Merkle Westphall
|
Hop-by-hop Accounting and Rewards for Packet dIspAtching
| null |
2021 IEEE 20th International Conference on Trust, Security and
Privacy in Computing and Communications (TrustCom), 2021, pp. 1116-1123
|
10.1109/TrustCom53373.2021.00152
| null |
cs.NI cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Community networks are prone to free-riders, i.e., participants who take
advantage of cooperation from others' routers but do not contribute
reciprocally. In this paper, we present HARPIA, a system for credit-based
incentive mechanisms for data forwarding in community networks, aimed at
preventing selfish behavior. HARPIA does not require a trusted third party or
tamper-resistant security modules as in other incentive mechanisms. Instead, it
uses a distributed accounting scheme (DPIFA) to estimate the balance of data
forwarding contribution and consumption of each network router and settle
correspondent cryptocurrency debts on an Ethereum smart contract. On-chain
settlement transactions are performed every HARPIA cycle (e.g., daily, weekly,
monthly) and must be validated by at least m-of-n network routers using a
multi-signature scheme (MuSig). We also conducted a performance evaluation, a
security threat assessment, and a cryptocurrency cost estimation. Results show
that our proposal is suitable for community networks with up to 64
infrastructure routers under specific m-of-n MuSig thresholds.
|
[
{
"version": "v1",
"created": "Sat, 21 Aug 2021 13:40:03 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 19:42:15 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Machado",
"Caciano",
""
],
[
"Santos",
"Renan R. S. dos",
""
],
[
"Westphall",
"Carla Merkle",
""
]
] |
new_dataset
| 0.983908 |
2108.13048
|
Jianwei Yu
|
Lingyun Feng, Jianwei Yu, Deng Cai, Songxiang Liu, Haitao Zheng, Yan
Wang
|
ASR-GLUE: A New Multi-task Benchmark for ASR-Robust Natural Language
Understanding
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language understanding in speech-based systems has attracted much attention
in recent years with the growing demand for voice interface applications.
However, the robustness of natural language understanding (NLU) systems to
errors introduced by automatic speech recognition (ASR) is under-examined. In
this paper, we propose the ASR-GLUE benchmark, a new collection of 6 different
NLU tasks
for evaluating the performance of models under ASR error across 3 different
levels of background noise and 6 speakers with various voice characteristics.
Based on the proposed benchmark, we systematically investigate the effect of
ASR error on NLU tasks in terms of noise intensity, error type and speaker
variants. We further propose two approaches, a correction-based method and a
data augmentation-based method, to improve the robustness of NLU systems.
Extensive experimental results and analyses show that the proposed methods are
effective to some extent, but still far from human performance, demonstrating
that NLU under ASR error is still very challenging and requires further
research.
|
[
{
"version": "v1",
"created": "Mon, 30 Aug 2021 08:11:39 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 16:24:41 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Feng",
"Lingyun",
""
],
[
"Yu",
"Jianwei",
""
],
[
"Cai",
"Deng",
""
],
[
"Liu",
"Songxiang",
""
],
[
"Zheng",
"Haitao",
""
],
[
"Wang",
"Yan",
""
]
] |
new_dataset
| 0.997758 |
2110.07152
|
Riddhish Bhalodia
|
Riddhish Bhalodia, Shireen Elhabian, Jadie Adams, Wenzheng Tao,
Ladislav Kavan, Ross Whitaker
|
DeepSSM: A Blueprint for Image-to-Shape Deep Learning Models
|
pre-print
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Statistical shape modeling (SSM) characterizes anatomical variations in a
population of shapes generated from medical images. SSM requires consistent
shape representation across samples in shape cohort. Establishing this
representation entails a processing pipeline that includes anatomy
segmentation, re-sampling, registration, and non-linear optimization. These
shape representations are then used to extract low-dimensional shape
descriptors that facilitate subsequent analyses in different applications.
However, the current process of obtaining these shape descriptors from imaging
data relies on human and computational resources, requiring domain expertise
for segmenting anatomies of interest. Moreover, this same taxing pipeline needs
to be repeated to infer shape descriptors for new image data using a
pre-trained/existing shape model. Here, we propose DeepSSM, a deep
learning-based framework for learning the functional mapping from images to
low-dimensional shape descriptors and their associated shape representations,
thereby inferring statistical representation of anatomy directly from 3D
images. Once trained using an existing shape model, DeepSSM circumvents the
heavy and manual pre-processing and segmentation and significantly improves the
computational time, making it a viable solution for fully end-to-end SSM
applications. In addition, we introduce a model-based data-augmentation
strategy to address data scarcity. Finally, this paper presents and analyzes
two different architectural variants of DeepSSM with different loss functions
using three medical datasets and their downstream clinical application.
Experiments showcase that DeepSSM performs comparably or better to the
state-of-the-art SSM both quantitatively and on application-driven downstream
tasks. Therefore, DeepSSM aims to provide a comprehensive blueprint for deep
learning-based image-to-shape models.
|
[
{
"version": "v1",
"created": "Thu, 14 Oct 2021 04:52:37 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 15:46:08 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Bhalodia",
"Riddhish",
""
],
[
"Elhabian",
"Shireen",
""
],
[
"Adams",
"Jadie",
""
],
[
"Tao",
"Wenzheng",
""
],
[
"Kavan",
"Ladislav",
""
],
[
"Whitaker",
"Ross",
""
]
] |
new_dataset
| 0.975735 |
2110.08193
|
Alicia Parrish
|
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar,
Jason Phang, Jana Thompson, Phu Mon Htut, Samuel R. Bowman
|
BBQ: A Hand-Built Bias Benchmark for Question Answering
|
Accepted to ACL 2022 Findings. 20 pages, 10 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
It is well documented that NLP models learn social biases, but little work
has been done on how these biases manifest in model outputs for applied tasks
like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a
dataset of question sets constructed by the authors that highlight attested
social biases against people belonging to protected classes along nine social
dimensions relevant for U.S. English-speaking contexts. Our task evaluates
model responses at two levels: (i) given an under-informative context, we test
how strongly responses reflect social biases, and (ii) given an adequately
informative context, we test whether the model's biases override a correct
answer choice. We find that models often rely on stereotypes when the context
is under-informative, meaning the model's outputs consistently reproduce
harmful biases in this setting. Though models are more accurate when the
context provides an informative answer, they still rely on stereotypes and
average up to 3.4 percentage points higher accuracy when the correct answer
aligns with a social bias than when it conflicts, with this difference widening
to over 5 points on examples targeting gender for most models tested.
|
[
{
"version": "v1",
"created": "Fri, 15 Oct 2021 16:43:46 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 01:35:45 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Parrish",
"Alicia",
""
],
[
"Chen",
"Angelica",
""
],
[
"Nangia",
"Nikita",
""
],
[
"Padmakumar",
"Vishakh",
""
],
[
"Phang",
"Jason",
""
],
[
"Thompson",
"Jana",
""
],
[
"Htut",
"Phu Mon",
""
],
[
"Bowman",
"Samuel R.",
""
]
] |
new_dataset
| 0.998365 |
2201.08812
|
Tao Han
|
Yongjie Guan and Xueyu Hou and Nan Wu and Bo Han and Tao Han
|
DeepMix: Mobility-aware, Lightweight, and Hybrid 3D Object Detection for
Headsets
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile headsets should be capable of understanding 3D physical environments
to offer a truly immersive experience for augmented/mixed reality (AR/MR).
However, their small form-factor and limited computation resources make it
extremely challenging to execute 3D vision algorithms in real time, as they
are known to be more compute-intensive than their 2D counterparts. In this paper,
we propose DeepMix, a mobility-aware, lightweight, and hybrid 3D object
detection framework for improving the user experience of AR/MR on mobile
headsets. Motivated by our analysis and evaluation of state-of-the-art 3D
object detection models, DeepMix intelligently combines edge-assisted 2D object
detection and novel, on-device 3D bounding box estimations that leverage depth
data captured by headsets. This leads to low end-to-end latency and
significantly boosts detection accuracy in mobile scenarios. A unique feature
of DeepMix is that it fully exploits the mobility of headsets to fine-tune
detection results and boost detection accuracy. To the best of our knowledge,
DeepMix is the first 3D object detection framework that achieves 30 FPS (an end-to-end
latency much lower than the 100 ms stringent requirement of interactive AR/MR).
We implement a prototype of DeepMix on Microsoft HoloLens and evaluate its
performance via both extensive controlled experiments and a user study with 30+
participants. DeepMix not only improves detection accuracy by 9.1--37.3% but
also reduces end-to-end latency by 2.68--9.15x, compared to the baseline that
uses existing 3D object detection models.
|
[
{
"version": "v1",
"created": "Sat, 15 Jan 2022 05:50:18 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 03:15:09 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Guan",
"Yongjie",
""
],
[
"Hou",
"Xueyu",
""
],
[
"Wu",
"Nan",
""
],
[
"Han",
"Bo",
""
],
[
"Han",
"Tao",
""
]
] |
new_dataset
| 0.985148 |
2202.05531
|
Himmet Toprak Kesgin
|
H. Toprak Kesgin, M. Fatih Amasyali
|
Cyclical Curriculum Learning
|
Added references, corrected typos
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial neural networks (ANN) are inspired by human learning. However,
unlike human education, classical ANN does not use a curriculum. Curriculum
Learning (CL) refers to the process of ANN training in which examples are used
in a meaningful order. When using CL, training begins with a subset of the
dataset and new samples are added throughout the training, or training begins
with the entire dataset and the number of samples used is reduced. With these
changes in training dataset size, better results can be obtained with
curriculum, anti-curriculum, or random-curriculum methods than with the
vanilla method. However, a generally efficient CL method for various
architectures and data sets has not been found. In this paper, we propose
cyclical curriculum learning
(CCL), in which the data size used during training changes cyclically rather
than simply increasing or decreasing. Instead of using only the vanilla method
or only the curriculum method, using both methods cyclically, as in CCL,
provides more successful results. We tested the method on 18 different data
sets and 15 architectures in image and text classification tasks and obtained
more successful results than no-CL and existing CL methods. We have also shown
theoretically that it is less erroneous to apply CL and the vanilla method
cyclically than to use only CL or only the vanilla method. The code of Cyclical
Curriculum is available at
https://github.com/CyclicalCurriculum/Cyclical-Curriculum.
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 10:09:29 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 18:03:20 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Kesgin",
"H. Toprak",
""
],
[
"Amasyali",
"M. Fatih",
""
]
] |
new_dataset
| 0.952932 |
2203.01215
|
Ali Safaya
|
Ali Safaya, Emirhan Kurtulu\c{s}, Arda G\"okto\u{g}an, Deniz Yuret
|
Mukayese: Turkish NLP Strikes Back
|
Accepted at Findings of ACL 2022 (Camera Ready)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Having sufficient resources for language X lifts it from the under-resourced
languages class, but not necessarily from the under-researched class. In this
paper, we address the problem of the absence of organized benchmarks in the
Turkish language. We demonstrate that languages such as Turkish lag behind
the state of the art in NLP applications. As a solution, we present Mukayese, a
set of NLP benchmarks for the Turkish language that contains several NLP tasks.
We work on one or more datasets for each benchmark and present two or more
baselines. Moreover, we present four new benchmarking datasets in Turkish for
language modeling, sentence segmentation, and spell checking. All datasets and
baselines are available under: https://github.com/alisafaya/mukayese
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 16:18:44 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 12:19:45 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Safaya",
"Ali",
""
],
[
"Kurtuluş",
"Emirhan",
""
],
[
"Göktoğan",
"Arda",
""
],
[
"Yuret",
"Deniz",
""
]
] |
new_dataset
| 0.997581 |
2203.01914
|
Willi Menapace
|
Willi Menapace, St\'ephane Lathuili\`ere, Aliaksandr Siarohin,
Christian Theobalt, Sergey Tulyakov, Vladislav Golyanik, Elisa Ricci
|
Playable Environments: Video Manipulation in Space and Time
|
CVPR 2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Playable Environments - a new representation for interactive video
generation and manipulation in space and time. With a single image at inference
time, our novel framework allows the user to move objects in 3D while
generating a video by providing a sequence of desired actions. The actions are
learnt in an unsupervised manner. The camera can be controlled to get the
desired viewpoint. Our method builds an environment state for each frame, which
can be manipulated by our proposed action module and decoded back to the image
space with volumetric rendering. To support diverse appearances of objects, we
extend neural radiance fields with style-based modulation. Our method trains on
a collection of various monocular videos requiring only the estimated camera
parameters and 2D object locations. To set a challenging benchmark, we
introduce two large-scale video datasets with significant camera movements. As
evidenced by our experiments, playable environments enable several creative
applications not attainable by prior video synthesis works, including playable
3D video generation, stylization and manipulation. Further details, code and
examples are available at
https://willi-menapace.github.io/playable-environments-website
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 18:51:05 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 18:13:26 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Menapace",
"Willi",
""
],
[
"Lathuilière",
"Stéphane",
""
],
[
"Siarohin",
"Aliaksandr",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Tulyakov",
"Sergey",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Ricci",
"Elisa",
""
]
] |
new_dataset
| 0.974159 |
2203.08184
|
Qingchao Li
|
Qingchao Li, Mohammed El-Hajjar, Ibrahim Hemadeh, Arman Shojaeifard,
Alain A. M. Mourad, Bruno Clerckx, Lajos Hanzo
|
Reconfigurable Intelligent Surfaces Relying on Non-Diagonal Phase Shift
Matrices
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Reconfigurable intelligent surfaces (RIS) have been actively researched as a
potential technique for future wireless communications, which intelligently
ameliorate the signal propagation environment. In the conventional design, each
RIS element configures and reflects its received signal independently of all
other RIS elements, which results in a diagonal phase shift matrix. By
contrast, we propose a novel RIS architecture, where the incident signal
impinging on one element can be reflected from another element after an
appropriate phase shift adjustment, which increases the flexibility in the
design of RIS phase shifts, hence, potentially improving the system
performance. The resultant RIS phase shift matrix also has off-diagonal
elements, as opposed to the pure diagonal structure of the conventional design.
Compared to the state-of-the-art fully-connected/group-connected RIS structures,
our proposed RIS architecture has lower complexity, while attaining a higher
channel gain than the group-connected RIS structure, and approaching that of
the fully-connected RIS structure. We formulate and solve the problem of
maximizing the achievable rate of our proposed RIS architecture by jointly
optimizing the transmit beamforming and the non-diagonal phase shift matrix
based on alternating optimization and semidefinite relaxation (SDR) methods.
Moreover, closed-form expressions for the channel gain, the outage
probability, and the bit error ratio (BER) are derived. Simulation results
demonstrate that our proposed RIS architecture improves the achievable rate
compared to the conventional architecture, in both single-user and
multi-user scenarios.
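The intuition for why off-diagonal freedom helps can be sketched numerically. Assuming a permuted-diagonal structure purely for illustration (the actual matrix is obtained via the AO/SDR procedure above), pairing strong incident channels with strong reflective channels can only increase the co-phased channel gain, by the rearrangement inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                    # number of RIS elements
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # Tx-RIS
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # RIS-Rx

# Conventional (diagonal) RIS: element i reflects its own signal, so the
# best phases simply co-phase each product term g_i * h_i.
gain_diag = (np.abs(g) * np.abs(h)).sum() ** 2

# Non-diagonal sketch: if the signal hitting element i may leave from a
# different element pi(i), strong incident channels can additionally be
# paired with strong reflective channels by sorting magnitudes.
gain_nondiag = (np.sort(np.abs(g)) * np.sort(np.abs(h))).sum() ** 2

print(gain_nondiag >= gain_diag)          # pairing can only help: True
```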
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 18:21:59 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Li",
"Qingchao",
""
],
[
"El-Hajjar",
"Mohammed",
""
],
[
"Hemadeh",
"Ibrahim",
""
],
[
"Shojaeifard",
"Arman",
""
],
[
"Mourad",
"Alain A. M.",
""
],
[
"Clerckx",
"Bruno",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.996017 |
2203.08355
|
Hao Xu
|
Hao Xu, Zihao Li, Zongyao Li, Xiaoshuai Zhang, Yao Sun, Lei Zhang
|
Metaverse Native Communication: A Blockchain and Spectrum Prospective
| null | null | null | null |
cs.DC cs.CY cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The metaverse depicts a vista of constructing a virtual environment parallel to
the real world, so that people can communicate with others and with objects
through digital entities. In the real world, communication relies on identities
and addresses that are recognized by authorities, whether the link is
established via post, email, mobile phone, or landline. The metaverse, however,
differs from the real world, which requires that a single identity belong to
the individual. In the metaverse, this identity can be an encrypted virtual
address that no one can trace or verify. To realize such addresses and hide
individuals in the metaverse, two things are needed: a re-mapping of the
virtual address to the individual's identity, and a specific spectrum to
support address-based communication for the metaverse. Therefore,
metaverse-native (or meta-native) communications based on blockchain could be a
promising solution for directly connecting entities through their native
encrypted addresses, dispensing with the existing network services based on IP,
cellular, HTTP, etc. This paper proposes a vision of blockchain, encrypted
addresses, and an address-based access model for all users, devices, services,
etc. that contribute to the metaverse. Furthermore, an allocation architecture
for a designated metaverse spectrum is proposed to remove the barrier to
accessing the metaverse/blockchain, in response to the initiatives of the
metaverse and the decentralized Internet.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 02:25:39 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Xu",
"Hao",
""
],
[
"Li",
"Zihao",
""
],
[
"Li",
"Zongyao",
""
],
[
"Zhang",
"Xiaoshuai",
""
],
[
"Sun",
"Yao",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.99974 |
2203.08364
|
Salman Parsa
|
Salman Parsa, Tim Ophelders
|
Minimum Height Drawings of Ordered Trees in Polynomial Time: Homotopy
Height of Tree Duals
| null | null | null | null |
cs.CG cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We consider drawings of graphs in the plane, in which vertices are assigned
distinct points and edges are drawn as simple curves connecting the vertices,
such that the edges intersect only at their common endpoints.
There is an intuitive quality measure for drawings of a graph that measures the
height of a drawing $\phi : G \rightarrow \mathbb{R}^2$ as follows. For a
vertical line $\ell$ in $\mathbb{R}^2$, let the height of $\ell$ be the
cardinality of the set $\ell \cap \phi(G)$. The height of a drawing of $G$ is
the maximum height over all vertical lines. In this paper, instead of abstract
graphs, we fix a drawing and consider plane graphs. In other words, we are
looking for a homeomorphism of the plane that minimizes the height of the
resulting drawing. This problem is equivalent to the homotopy height problem in
the plane, and the homotopic Fr\'echet distance problem. These problems were
recently shown to lie in NP, but no polynomial-time algorithm or NP-hardness
proof has been found since their formulation in 2009. We present the first
polynomial-time algorithm for drawing trees with optimal height. This
corresponds to a polynomial-time algorithm for the homotopy height where the
triangulation has only one vertex (that is, a set of loops incident to a single
vertex), so that its dual is a tree.
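For intuition, the height of a straight-line drawing can be computed directly from the definition by probing between consecutive endpoint x-coordinates, since the intersection count only changes there. The sketch below is illustrative only: it assumes straight-line edges, vertices in general position (distinct x-coordinates, no vertical edges), and ignores degenerate cases such as isolated vertices; the paper's drawings use simple curves and its algorithm for trees is far more involved.

```python
def drawing_height(segments):
    """Illustrative: max over vertical lines of the number of edges met.

    segments: list of ((x1, y1), (x2, y2)) straight-line edges of a
    plane drawing in general position (an assumption for brevity).
    """
    xs = sorted({p[0] for s in segments for p in s})
    best = 0
    # The count is piecewise constant between endpoint x-coordinates,
    # so probing midway between consecutive breakpoints suffices.
    for a, b in zip(xs, xs[1:]):
        c = (a + b) / 2.0
        crossing = sum(min(x1, x2) < c < max(x1, x2)
                       for (x1, _), (x2, _) in segments)
        best = max(best, crossing)
    return best
```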
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 03:00:55 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Parsa",
"Salman",
""
],
[
"Ophelders",
"Tim",
""
]
] |
new_dataset
| 0.998754 |
2203.08408
|
Zifan Chen
|
Zifan Chen, Jie Zhao, Hao Yu, Yue Zhang, Li Zhang
|
Multi-Scale Context-Guided Lumbar Spine Disease Identification with
Coarse-to-fine Localization and Classification
|
Accepted at ISBI 2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate and efficient lumbar spine disease identification is crucial for
clinical diagnosis. However, existing deep learning models with millions of
parameters often fail to learn with only hundreds or dozens of medical images.
These models also ignore the contextual relationship between adjacent objects,
such as between vertebras and intervertebral discs. This work introduces a
multi-scale context-guided network with coarse-to-fine localization and
classification, named CCF-Net, for lumbar spine disease identification.
Specifically, during learning, we divide the localization objective into two
parallel tasks, coarse and fine, which are more straightforward and effectively
reduce the number of parameters and computational cost. The experimental
results show that the coarse-to-fine design presents the potential to achieve
high performance with fewer parameters and data requirements. Moreover, the
multi-scale context-guided module can significantly improve the performance by
6.45% and 5.51% with ResNet18 and ResNet50, respectively. Our code is available
at https://github.com/czifan/CCFNet.pytorch.
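A minimal sketch of how such a coarse-to-fine split might look as two parallel heads (an illustrative reading with assumed names and channel counts, not the released CCF-Net code): a coarse branch scores a low-resolution heatmap per anatomical structure, while a fine branch regresses sub-cell offsets that refine each coarse peak.

```python
import torch.nn as nn

class CoarseToFineHead(nn.Module):
    """Illustrative coarse-to-fine localization head: coarse per-structure
    heatmaps plus fine (dx, dy) offsets, trained as two parallel tasks."""

    def __init__(self, in_ch=256, n_structures=11):  # counts are assumptions
        super().__init__()
        self.coarse = nn.Conv2d(in_ch, n_structures, kernel_size=1)
        self.fine = nn.Conv2d(in_ch, 2 * n_structures, kernel_size=1)

    def forward(self, fmap):
        heat = self.coarse(fmap)    # B x K x h x w coarse scores
        offset = self.fine(fmap)    # B x 2K x h x w sub-cell refinements
        return heat, offset

# A typical loss would combine a heatmap term with an offset term, e.g.
# loss = bce(heat, target_heat) + l1(offset_at_peaks, target_offsets)
```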
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 05:51:16 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Chen",
"Zifan",
""
],
[
"Zhao",
"Jie",
""
],
[
"Yu",
"Hao",
""
],
[
"Zhang",
"Yue",
""
],
[
"Zhang",
"Li",
""
]
] |
new_dataset
| 0.996934 |
2203.08491
|
Lior Rokach
|
Shir Chorev, Philip Tannor, Dan Ben Israel, Noam Bressler, Itay
Gabbay, Nir Hutnik, Jonatan Liberman, Matan Perlmutter, Yurii Romanyshyn,
Lior Rokach
|
Deepchecks: A Library for Testing and Validating Machine Learning Models
and Data
| null | null | null | null |
cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents Deepchecks, a Python library for comprehensively
validating machine learning models and data. Our goal is to provide an
easy-to-use library comprising many checks related to various types of
issues, such as model predictive performance, data integrity, data distribution
mismatches, and more. The package is distributed under the GNU Affero General
Public License (AGPL) and relies on core libraries from the scientific Python
ecosystem: scikit-learn, PyTorch, NumPy, pandas, and SciPy. Source code,
documentation, examples, and an extensive user guide can be found at
\url{https://github.com/deepchecks/deepchecks} and
\url{https://docs.deepchecks.com/}.
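A minimal usage sketch follows; the module paths reflect the project's documentation at the time of writing and may differ between deepchecks versions (older releases exposed Dataset and full_suite at the package top level).

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Wrap the frames so deepchecks knows which column holds the label.
train_ds = Dataset(X_train.assign(target=y_train), label="target")
test_ds = Dataset(X_test.assign(target=y_test), label="target")

# Run performance, data-integrity, and distribution-mismatch checks in
# one go; the result renders in a notebook or can be saved to HTML.
result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds,
                          model=model)
result.save_as_html("report.html")
```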
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 09:37:22 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Chorev",
"Shir",
""
],
[
"Tannor",
"Philip",
""
],
[
"Israel",
"Dan Ben",
""
],
[
"Bressler",
"Noam",
""
],
[
"Gabbay",
"Itay",
""
],
[
"Hutnik",
"Nir",
""
],
[
"Liberman",
"Jonatan",
""
],
[
"Perlmutter",
"Matan",
""
],
[
"Romanyshyn",
"Yurii",
""
],
[
"Rokach",
"Lior",
""
]
] |
new_dataset
| 0.965191 |
2203.08534
|
Jen-Chun Lin
|
Wen-Li Wei, Jen-Chun Lin, Tyng-Luh Liu, and Hong-Yuan Mark Liao
|
Capturing Humans in Motion: Temporal-Attentive 3D Human Pose and Shape
Estimation from Monocular Video
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning to capture human motion is essential to 3D human pose and shape
estimation from monocular video. However, the existing methods mainly rely on
recurrent or convolutional operations to model such temporal information, which
limits the ability to capture non-local context relations of human motion. To
address this problem, we propose a motion pose and shape network (MPS-Net) to
effectively capture humans in motion to estimate accurate and temporally
coherent 3D human pose and shape from a video. Specifically, we first propose a
motion continuity attention (MoCA) module that leverages visual cues observed
from human motion to adaptively recalibrate the range that needs attention in
the sequence to better capture the motion continuity dependencies. Then, we
develop a hierarchical attentive feature integration (HAFI) module to
effectively combine adjacent past and future feature representations to
strengthen temporal correlation and refine the feature representation of the
current frame. By coupling the MoCA and HAFI modules, the proposed MPS-Net
excels in estimating 3D human pose and shape in the video. Though conceptually
simple, our MPS-Net not only outperforms the state-of-the-art methods on the
3DPW, MPI-INF-3DHP, and Human3.6M benchmark datasets, but also uses fewer
network parameters. The video demos can be found at
https://mps-net.github.io/MPS-Net/.
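For a flavor of the non-local temporal modeling being argued for (a generic sketch, not the paper's MoCA or HAFI modules; all names are illustrative), every frame's feature can attend to every other frame, so motion context is not limited by a recurrent or convolutional receptive field:

```python
import torch
import torch.nn.functional as F

def temporal_self_attention(feats):
    """Generic non-local attention over per-frame features (T x D):
    each frame is recalibrated by a softmax-weighted mix of all
    frames, capturing long-range motion dependencies."""
    attn = feats @ feats.t() / feats.shape[-1] ** 0.5  # T x T affinities
    attn = F.softmax(attn, dim=-1)
    return attn @ feats                                # T x D output

seq = torch.randn(16, 256)           # 16 frames, 256-d features
out = temporal_self_attention(seq)   # same shape, globally mixed
```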
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 11:00:24 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Wei",
"Wen-Li",
""
],
[
"Lin",
"Jen-Chun",
""
],
[
"Liu",
"Tyng-Luh",
""
],
[
"Liao",
"Hong-Yuan Mark",
""
]
] |
new_dataset
| 0.990912 |
2203.08556
|
Feng Yao
|
Feng Yao, Chaojun Xiao, Xiaozhi Wang, Zhiyuan Liu, Lei Hou, Cunchao
Tu, Juanzi Li, Yun Liu, Weixing Shen, Maosong Sun
|
LEVEN: A Large-Scale Chinese Legal Event Detection Dataset
|
Accepted to ACL2022 Findings
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing facts is the most fundamental step in making judgments, hence
detecting events in the legal documents is important to legal case analysis
tasks. However, existing Legal Event Detection (LED) datasets only concern
incomprehensive event types and have limited annotated data, which restricts
the development of LED methods and their downstream applications. To alleviate
these issues, we present LEVEN, a large-scale Chinese LEgal eVENt detection
dataset with 8,116 legal documents and 150,977 human-annotated event mentions
across 108 event types. In addition to charge-related events, LEVEN also covers general
events, which are critical for legal case understanding but neglected in
existing LED datasets. To our knowledge, LEVEN is the largest LED dataset and
has dozens of times the data scale of others, which shall significantly promote
the training and evaluation of LED methods. The results of extensive
experiments indicate that LED is challenging and needs further effort.
Moreover, we simply utilize legal events as side information to promote
downstream applications. This approach achieves average improvements of 2.2
points in precision for low-resource judgment prediction and 1.5 points in
mean average precision for unsupervised case retrieval, which suggests the
fundamental value of LED. The source code and dataset can be obtained from
https://github.com/thunlp/LEVEN.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 11:40:02 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Yao",
"Feng",
""
],
[
"Xiao",
"Chaojun",
""
],
[
"Wang",
"Xiaozhi",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Hou",
"Lei",
""
],
[
"Tu",
"Cunchao",
""
],
[
"Li",
"Juanzi",
""
],
[
"Liu",
"Yun",
""
],
[
"Shen",
"Weixing",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.999863 |