Dataset schema (one row per arXiv record; ⌀ marks nullable fields):

| column | type |
|---|---|
| id | string (9-10 chars) |
| submitter | string (2-52 chars, ⌀) |
| authors | string (4-6.51k chars) |
| title | string (4-246 chars) |
| comments | string (1-523 chars, ⌀) |
| journal-ref | string (4-345 chars, ⌀) |
| doi | string (11-120 chars, ⌀) |
| report-no | string (2-243 chars, ⌀) |
| categories | string (5-98 chars) |
| license | string (9 classes) |
| abstract | string (33-3.33k chars) |
| versions | list |
| update_date | timestamp[s] |
| authors_parsed | list |
| prediction | string (1 class: new_dataset) |
| probability | float64 (0.95-1) |

The records below follow as pipe-separated rows in the column order above.
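A minimal sketch of how a record with this schema can be consumed, assuming each row has been parsed into a Python dictionary (the field names and sample values are taken from the first record below; this is illustrative, not a packaged loader):

```python
# One record; nullable fields (submitter, comments, journal-ref, doi,
# report-no) may be None. Values abbreviated from the first row below.
record = {
    "id": "1804.03547",
    "submitter": "Yujiang Wang",
    "title": "A real-time and unsupervised face Re-Identification system ...",
    "categories": "cs.CV",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "versions": [{"version": "v1", "created": "Tue, 10 Apr 2018 14:07:45 GMT"}],
    "update_date": "2022-03-25T00:00:00",
    "authors_parsed": [["Wang", "Yujiang", ""], ["Shen", "Jie", ""]],
    "prediction": "new_dataset",
    "probability": 0.982922,
}

# Typical access patterns: latest version timestamp and author surnames.
latest_created = record["versions"][-1]["created"]
surnames = [last for last, first, affiliation in record["authors_parsed"]]
print(latest_created, surnames, record["probability"])
```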
1804.03547
|
Yujiang Wang
|
Yujiang Wang, Jie Shen, Stavros Petridis, Maja Pantic
|
A real-time and unsupervised face Re-Identification system for
Human-Robot Interaction
|
Code implementation in Python is available at:
https://github.com/ibug-group/face_reid
|
Pattern Recognition Letters, Volume 128, 1 December 2019, Pages
559-568
|
10.1016/j.patrec.2018.04.009
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the context of Human-Robot Interaction (HRI), face Re-Identification (face
Re-ID) aims to verify whether certain detected faces have already been observed
by robots. The ability to distinguish between different users is crucial for
social robots, as it enables the robot to tailor its interaction strategy to
each user's individual preferences. Face recognition research has achieved
great success so far; however, little attention has been paid to realistic
applications of face Re-ID in social robots. In this paper, we present an
effective and unsupervised face Re-ID system which simultaneously re-identifies
multiple faces for HRI. This Re-ID system employs Deep Convolutional Neural
Networks to extract features, and an online clustering algorithm to determine
each face's ID. Its performance is evaluated on two datasets: the TERESA video
dataset collected by the TERESA robot, and the YouTube Face Dataset (YTF
Dataset). We demonstrate that the optimised combination of techniques achieves
an overall accuracy of 93.55% on the TERESA dataset and 90.41% on the YTF
dataset. We have implemented the proposed method as a software module in the
HCI^2 Framework for further integration into the TERESA robot, achieving
real-time performance at 10-26 frames per second.
|
[
{
"version": "v1",
"created": "Tue, 10 Apr 2018 14:07:45 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Apr 2018 15:20:31 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Mar 2022 11:04:37 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Wang",
"Yujiang",
""
],
[
"Shen",
"Jie",
""
],
[
"Petridis",
"Stavros",
""
],
[
"Pantic",
"Maja",
""
]
] |
new_dataset
| 0.982922 |
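The abstract above pairs CNN feature extraction with an online clustering step that assigns face IDs. A minimal sketch of one plausible online rule — nearest-centroid matching with a cosine-similarity threshold; the rule, threshold, and embedding size are assumptions, not the paper's algorithm:

```python
import numpy as np

def assign_face_id(embedding, centroids, threshold=0.6):
    """Threshold-based online clustering: match a face embedding to the
    closest existing identity by cosine similarity, or open a new one.
    The threshold value is an illustrative assumption."""
    embedding = embedding / np.linalg.norm(embedding)
    if centroids:
        sims = [float(embedding @ c) for c in centroids]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            # Running update of the matched identity's centroid.
            c = centroids[best] + embedding
            centroids[best] = c / np.linalg.norm(c)
            return best
    centroids.append(embedding)       # unseen face: open a new identity
    return len(centroids) - 1

centroids = []
for emb in np.random.randn(5, 128):  # stand-in for CNN face features
    print(assign_face_id(emb, centroids))
```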
2012.13168
|
Kyuwon Kim
|
Kyuwon Kim, Junhyuck Im, Gyuin Jee
|
Tunnel Facility-based Vehicle Localization in Highway Tunnel using 3D
LIDAR
|
16 pages, 25 figures
| null |
10.1109/TITS.2022.3160235
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vehicle localization in highway tunnels is a challenging problem for autonomous
vehicle navigation. Since GPS signals from satellites cannot be received inside
a highway tunnel, map-aided localization is essential. However, the tunnel
environment consists mostly of an elliptical wall, so the unique feature points
available for map matching are few compared with outdoor settings. As a result,
performing vehicle navigation in a tunnel with existing map-aided localization
is very difficult. In this paper, we propose tunnel facility-based precise
vehicle localization in highway tunnels using 3D LIDAR. For vehicle
localization in a highway tunnel, a point landmark map that stores the center
points of tunnel facilities and a probability distribution map that stores the
probability distributions of the lane markings are used. Point landmark-based
localization is possible regardless of the number of feature points, provided
that representative points of an object can be extracted. It is therefore a
suitable localization method for highway tunnels, where feature points are few.
The tunnel facility points are extracted using 3D LIDAR, and position
estimation is conducted using an EKF-based navigation filter. The proposed
localization algorithm is verified through experiments using actual highway
driving data. The experimental results verify that tunnel facility-based
vehicle localization yields precise results in real time.
|
[
{
"version": "v1",
"created": "Thu, 24 Dec 2020 08:37:23 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Kim",
"Kyuwon",
""
],
[
"Im",
"Junhyuck",
""
],
[
"Jee",
"Gyuin",
""
]
] |
new_dataset
| 0.999296 |
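As a sketch of the EKF-based update mentioned in the abstract above, the following fuses a single point-landmark observation into a 2D position estimate. The state layout, noise values, and simplified measurement model (relative landmark position, vehicle heading ignored) are assumptions for illustration, not the paper's filter:

```python
import numpy as np

# Minimal EKF position update using one point-landmark measurement.
# State: vehicle position [x, y].
x = np.array([0.0, 0.0])              # predicted vehicle position
P = np.eye(2) * 1.0                   # state covariance
R = np.eye(2) * 0.05                  # measurement noise (assumed)
landmark_map = np.array([10.0, 2.0])  # stored center point of a facility
z = np.array([9.4, 2.3])              # landmark measured relative to vehicle

# Measurement model: h(x) = landmark_map - x, so the Jacobian is H = -I.
H = -np.eye(2)
y = z - (landmark_map - x)            # innovation
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
x = x + K @ y                         # corrected position
P = (np.eye(2) - K @ H) @ P           # corrected covariance
print(x)
```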
2103.08573
|
Udit Singh Parihar
|
Udit Singh Parihar, Aniket Gujarathi, Kinal Mehta, Satyajit Tourani,
Sourav Garg, Michael Milford and K. Madhava Krishna
|
RoRD: Rotation-Robust Descriptors and Orthographic Views for Local
Feature Matching
|
Accepted to IROS 2021. Project Page:
https://uditsinghparihar.github.io/RoRD/
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The use of local detectors and descriptors in typical computer vision
pipelines works well until variations in viewpoint and appearance become
extreme. Past research in this area has typically focused on one of two
approaches to this challenge: the use of projections into spaces more suitable
for feature matching under extreme viewpoint changes, and attempting to learn
features that are inherently more robust to viewpoint change. In this paper, we
present a novel framework that combines learning of invariant descriptors
through data augmentation and orthographic viewpoint projection. We propose
rotation-robust local descriptors, learnt through training data augmentation
based on rotation homographies, and a correspondence ensemble technique that
combines vanilla feature correspondences with those obtained through
rotation-robust features. Using a range of benchmark datasets as well as
contributing a new bespoke dataset for this research domain, we evaluate the
effectiveness of the proposed approach on key tasks including pose estimation
and visual place recognition. Our system outperforms a range of baseline and
state-of-the-art techniques, enabling higher place recognition precision
across opposing viewpoints and achieving practically useful performance even
under extreme viewpoint changes.
|
[
{
"version": "v1",
"created": "Mon, 15 Mar 2021 17:40:25 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jul 2021 20:28:12 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Mar 2022 16:58:12 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Mar 2022 09:01:27 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Parihar",
"Udit Singh",
""
],
[
"Gujarathi",
"Aniket",
""
],
[
"Mehta",
"Kinal",
""
],
[
"Tourani",
"Satyajit",
""
],
[
"Garg",
"Sourav",
""
],
[
"Milford",
"Michael",
""
],
[
"Krishna",
"K. Madhava",
""
]
] |
new_dataset
| 0.99906 |
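The abstract above trains rotation-robust descriptors via augmentation based on rotation homographies. A minimal sketch of generating one such training pair, assuming OpenCV is available; the angle range and rotation pivot are assumptions:

```python
import cv2
import numpy as np

def random_rotation_homography(image, max_angle=180):
    """Produce a rotation-augmented training pair: the rotated image and
    the 3x3 homography mapping original pixels to rotated pixels."""
    h, w = image.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    A = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)  # 2x3 affine
    H = np.vstack([A, [0, 0, 1]])                            # lift to 3x3
    rotated = cv2.warpAffine(image, A, (w, h))
    return rotated, H

img = np.zeros((480, 640, 3), np.uint8)
rot, H = random_rotation_homography(img)
# Descriptor training can then require that a feature at pixel p in `img`
# matches the feature at H @ [p, 1] in `rot`.
```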
2106.07545
|
Qi Chen
|
Qi Chen, Sourabh Vora and Oscar Beijbom
|
PolarStream: Streaming Lidar Object Detection and Segmentation with
Polar Pillars
|
NeurIPS 2021; code and pretrained models available at
https://github.com/motional/polarstream
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent works recognized lidars as an inherently streaming data source and
showed that the end-to-end latency of lidar perception models can be reduced
significantly by operating on wedge-shaped point cloud sectors rather than the
full point cloud. However, due to their use of Cartesian coordinate systems,
these methods represent the sectors as rectangular regions, wasting memory and
compute. In this work we propose using a polar coordinate system and make two
key improvements on this design. First, we increase the spatial context by
using multi-scale padding from neighboring sectors: preceding sector from the
current scan and/or the following sector from the past scan. Second, we improve
the core polar convolutional architecture by introducing feature undistortion
and range stratified convolutions. Experimental results on the nuScenes dataset
show significant improvements over other streaming-based methods. We also
achieve comparable results to existing non-streaming methods but with lower
latencies. The code and pretrained models are available at
\url{https://github.com/motional/polarstream}.
|
[
{
"version": "v1",
"created": "Mon, 14 Jun 2021 16:11:28 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 01:33:38 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Chen",
"Qi",
""
],
[
"Vora",
"Sourabh",
""
],
[
"Beijbom",
"Oscar",
""
]
] |
new_dataset
| 0.998819 |
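To make the wedge-shaped streaming idea above concrete, here is a minimal sketch that partitions a lidar sweep into azimuthal sectors so each sector can be processed as soon as it arrives; the sector count and array layout are assumptions, not the PolarStream implementation:

```python
import numpy as np

def split_into_polar_sectors(points, n_sectors=8):
    """Partition a lidar sweep (N x 3 array of x, y, z) into wedge-shaped
    sectors by azimuth angle around the sensor."""
    azimuth = np.arctan2(points[:, 1], points[:, 0])            # [-pi, pi]
    sector_ids = ((azimuth + np.pi) / (2 * np.pi) * n_sectors).astype(int)
    sector_ids = np.clip(sector_ids, 0, n_sectors - 1)
    return [points[sector_ids == s] for s in range(n_sectors)]

# Each sector can be fed to the model as soon as the sensor finishes
# sweeping it, instead of waiting for the full 360-degree scan.
cloud = np.random.randn(1000, 3)
sectors = split_into_polar_sectors(cloud, n_sectors=8)
print([len(s) for s in sectors])
```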
2106.12373
|
Jiajie Zou
|
Jiajie Zou, Yuran Zhang, Peiqing Jin, Cheng Luo, Xunyi Pan, Nai Ding
|
PALRACE: Reading Comprehension Dataset with Human Data and Labeled
Rationales
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-trained language models achieve high performance on machine reading
comprehension (MRC) tasks, but their results are hard to explain. An appealing
approach to making models explainable is to provide rationales for their decisions.
To investigate whether human rationales can further improve current models and
to facilitate supervised learning of human rationales, here we present PALRACE
(Pruned And Labeled RACE), a new MRC dataset with human labeled rationales for
800 passages selected from the RACE dataset. We further classified the question
for each passage into 6 types. Each passage was read by at least 26 human
readers, who labeled the rationales behind their answers. We demonstrate that
models such as RoBERTa-large outperform human readers on all 6 question types,
including inference questions, but that their performance can be further
improved with access to the human rationales. Simpler models and
pre-trained models that are not fine-tuned based on the task benefit more from
human rationales, and their performance can be boosted by more than 30% by
rationales. With access to human rationales, a simple model based on the GloVe
word embedding can reach the performance of BERT-base.
|
[
{
"version": "v1",
"created": "Wed, 23 Jun 2021 13:12:40 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 04:59:12 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Zou",
"Jiajie",
""
],
[
"Zhang",
"Yuran",
""
],
[
"Jin",
"Peiqing",
""
],
[
"Luo",
"Cheng",
""
],
[
"Pan",
"Xunyi",
""
],
[
"Ding",
"Nai",
""
]
] |
new_dataset
| 0.999834 |
2109.04386
|
Koushik Biswas
|
Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey
|
ErfAct and Pserf: Non-monotonic Smooth Trainable Activation Functions
|
AAAI 2022
| null | null | null |
cs.NE cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
An activation function is a crucial component of a neural network, introducing
non-linearity into the network. The state-of-the-art performance of a neural
network also depends on choosing the right activation function. We propose two
novel non-monotonic, smooth, trainable activation functions, called ErfAct and
Pserf. Experiments suggest that the proposed functions improve network
performance significantly compared to widely used activations like ReLU,
Swish, and Mish. Replacing ReLU with ErfAct and Pserf yields improvements of
5.68% and 5.42% in top-1 accuracy with the ShuffleNet V2 (2.0x) network on the
CIFAR-100 dataset, 2.11% and 1.96% in top-1 accuracy with the ShuffleNet V2
(2.0x) network on the CIFAR-10 dataset, and 1.0% and 1.0% in mean average
precision (mAP) with the SSD300 model on the Pascal VOC dataset.
|
[
{
"version": "v1",
"created": "Thu, 9 Sep 2021 16:17:38 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Sep 2021 09:56:58 GMT"
},
{
"version": "v3",
"created": "Sun, 19 Sep 2021 18:59:11 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Mar 2022 12:46:15 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Biswas",
"Koushik",
""
],
[
"Kumar",
"Sandeep",
""
],
[
"Banerjee",
"Shilpak",
""
],
[
"Pandey",
"Ashish Kumar",
""
]
] |
new_dataset
| 0.985346 |
2109.14490
|
Bijeeta Pal
|
Bijeeta Pal, Mazharul Islam, Marina Sanusi, Nick Sullivan, Luke
Valenta, Tara Whalen, Christopher Wood, Thomas Ristenpart, Rahul Chattejee
|
Might I Get Pwned: A Second Generation Compromised Credential Checking
Service
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Credential stuffing attacks use stolen passwords to log into victim accounts.
To defend against these attacks, recently deployed compromised credential
checking (C3) services provide APIs that help users and companies check whether
a username-password pair is exposed. These services, however, only check
whether the exact password has been leaked, and therefore do not mitigate
credential tweaking attacks: attempts to compromise a user account with
variants of the user's leaked passwords. Recent work has shown that credential
tweaking attacks can compromise accounts quite effectively even when
credential stuffing countermeasures are in place. We initiate work on C3
services that protect
users from credential tweaking attacks. The core underlying challenge is how to
identify passwords that are similar to their leaked passwords while preserving
honest clients' privacy and also preventing malicious clients from extracting
breach data from the service. We formalize the problem and explore ways to
measure password similarity that balance efficacy, performance, and security.
Based on this study, we design "Might I Get Pwned" (MIGP), a new kind of breach
alerting service. Our simulations show that MIGP reduces the efficacy of
state-of-the-art 1000-guess credential tweaking attacks by 94%. MIGP preserves
user privacy and limits potential exposure of sensitive breach entries. We show
that the protocol is fast, with response time close to existing C3 services. We
worked with Cloudflare to deploy MIGP in practice.
|
[
{
"version": "v1",
"created": "Wed, 29 Sep 2021 15:16:59 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 23:34:29 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Pal",
"Bijeeta",
""
],
[
"Islam",
"Mazharul",
""
],
[
"Sanusi",
"Marina",
""
],
[
"Sullivan",
"Nick",
""
],
[
"Valenta",
"Luke",
""
],
[
"Whalen",
"Tara",
""
],
[
"Wood",
"Christopher",
""
],
[
"Ristenpart",
"Thomas",
""
],
[
"Chattejee",
"Rahul",
""
]
] |
new_dataset
| 0.988442 |
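The abstract above centers on flagging passwords that are merely similar to leaked ones, not just exact matches. A toy sketch of that idea with a hand-picked variant list; MIGP's actual similarity measures and privacy-preserving protocol are far more involved:

```python
def tweak_variants(password):
    """A few common credential-tweaking transformations. MIGP's real
    similarity measures are richer; this list is an illustrative
    assumption, not the deployed design."""
    variants = {password, password.capitalize(), password.lower()}
    variants.update(password + s for s in ("1", "!", "123"))
    if password and password[-1].isdigit():
        variants.add(password[:-1])   # drop a trailing digit
    return variants

def breach_alert(username, password, breached):
    """Flag a login if the submitted password matches, or is a close
    variant of, any breached password recorded for this user."""
    leaked = breached.get(username, set())
    return any(password in tweak_variants(p) for p in leaked)

breached = {"alice": {"hunter2"}}
print(breach_alert("alice", "hunter2!", breached))      # True: tweaked variant
print(breach_alert("alice", "correcthorse", breached))  # False
```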
2110.08222
|
Prakhar Gupta
|
Prakhar Gupta, Chien-Sheng Wu, Wenhao Liu and Caiming Xiong
|
DialFact: A Benchmark for Fact-Checking in Dialogue
|
ACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fact-checking is an essential tool to mitigate the spread of misinformation
and disinformation. We introduce the task of fact-checking in dialogue, which
is a relatively unexplored area. We construct DialFact, a testing benchmark
dataset of 22,245 annotated conversational claims, paired with pieces of
evidence from Wikipedia. There are three sub-tasks in DialFact: 1) the
verifiable claim detection task distinguishes whether a response carries
verifiable factual information; 2) the evidence retrieval task retrieves the
most relevant Wikipedia snippets as evidence; 3) the claim verification task
predicts whether a dialogue response is supported, refuted, or lacks enough
information. We found that
existing fact-checking models trained on non-dialogue data like FEVER fail to
perform well on our task, and thus, we propose a simple yet data-efficient
solution to effectively improve fact-checking performance in dialogue. In the
error analysis, we point out unique challenges in DialFact, such as handling
colloquialisms, coreferences, and retrieval ambiguities, to shed light on
future research in this direction.
|
[
{
"version": "v1",
"created": "Fri, 15 Oct 2021 17:34:35 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 17:26:00 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Gupta",
"Prakhar",
""
],
[
"Wu",
"Chien-Sheng",
""
],
[
"Liu",
"Wenhao",
""
],
[
"Xiong",
"Caiming",
""
]
] |
new_dataset
| 0.999203 |
2110.09663
|
Daniel Acuna
|
Daniel E. Acuna, Kartik Nagre, Priya Matnani
|
EILEEN: A recommendation system for scientific publications and grants
|
16 pages, 3 figures, 2 tables
| null | null | null |
cs.IR cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finding relevant scientific articles is crucial for advancing knowledge.
Recommendation systems are helpful for this purpose, although they have only
been applied to science recently. This article describes EILEEN (Exploratory
Innovator of LitEraturE Networks), a recommendation system for scientific
publications and grants with open source code and datasets. We describe
EILEEN's architecture for ingesting and processing documents and modeling the
recommendation system and keyphrase estimator. Using a unique dataset of
logged-in user behavior, we validate our recommendation system against Latent
Semantic Analysis (LSA) and the standard ranking from Elasticsearch (Lucene
scoring). We find that a learning-to-rank approach with Random Forest achieves
an AUC of 0.9,
significantly outperforming both baselines. Our results suggest that we can
substantially improve science recommendations and learn about scientists'
behavior through their search behavior. We make our system available at
eileen.io.
|
[
{
"version": "v1",
"created": "Tue, 19 Oct 2021 00:12:25 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 01:59:44 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Acuna",
"Daniel E.",
""
],
[
"Nagre",
"Kartik",
""
],
[
"Matnani",
"Priya",
""
]
] |
new_dataset
| 0.991836 |
2111.00207
|
Cheng Zhang
|
Zhang Cheng, Haocheng Wan, Xinyi Shen, Zizhao Wu
|
PatchFormer: An Efficient Point Transformer with Patch Attention
|
10 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The point cloud learning community is witnessing a modeling shift from CNNs to
Transformers, where pure Transformer architectures have achieved top accuracy
on the major learning benchmarks. However, existing point Transformers are
computationally expensive since they need to generate a large attention map,
which has quadratic complexity (both in space and time) with respect to input
size. To address this shortcoming, we introduce Patch ATtention (PAT) to
adaptively learn a much smaller set of bases upon which the attention maps are
computed. By a weighted summation upon these bases, PAT not only captures the
global shape context but also achieves linear complexity in the input size. In
addition, we propose a lightweight Multi-Scale aTtention (MST) block to build
attention among features of different scales, providing the model with
multi-scale features. Equipped with the PAT and MST, we construct our neural
architecture called PatchFormer that integrates both modules into a joint
framework for point cloud learning. Extensive experiments demonstrate that our
network achieves comparable accuracy on general point cloud learning tasks
with a 9.2x speed-up over previous point Transformers.
|
[
{
"version": "v1",
"created": "Sat, 30 Oct 2021 08:39:55 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Dec 2021 06:54:02 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Mar 2022 09:15:14 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Cheng",
"Zhang",
""
],
[
"Wan",
"Haocheng",
""
],
[
"Shen",
"Xinyi",
""
],
[
"Wu",
"Zizhao",
""
]
] |
new_dataset
| 0.988901 |
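The linear-complexity idea above — attending to a small set of bases rather than all n tokens — can be sketched as follows; here the bases are built by uniform pooling of keys and values, a stand-in for the learned bases described in the abstract:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patch_attention(q, k, v, m=32):
    """Attention against m aggregated bases instead of all n tokens:
    cost O(n*m) rather than O(n^2). Basis construction here is plain
    uniform pooling, an illustrative stand-in for learned bases."""
    n, d = k.shape
    idx = np.array_split(np.arange(n), m)
    kb = np.stack([k[i].mean(0) for i in idx])   # m x d basis keys
    vb = np.stack([v[i].mean(0) for i in idx])   # m x d basis values
    attn = softmax(q @ kb.T / np.sqrt(d))        # n x m attention map
    return attn @ vb                             # n x d output

q = k = v = np.random.randn(1024, 64)
print(patch_attention(q, k, v).shape)            # (1024, 64)
```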
2111.10139
|
Tal Remez
|
Michael Hassid, Michelle Tadmor Ramanovich, Brendan Shillingford,
Miaosen Wang, Ye Jia, Tal Remez
|
More than Words: In-the-Wild Visually-Driven Prosody for Text-to-Speech
| null | null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present VDTTS, a Visually-Driven Text-to-Speech model.
Motivated by dubbing, VDTTS takes advantage of video frames as an additional
input alongside text, and generates speech that matches the video signal. We
demonstrate how this allows VDTTS to, unlike plain TTS models, generate speech
that not only has prosodic variations like natural pauses and pitch, but is
also synchronized to the input video. Experimentally, we show our model
produces well-synchronized outputs, approaching the video-speech
synchronization quality of the ground-truth, on several challenging benchmarks
including "in-the-wild" content from VoxCeleb2. Supplementary demo videos
demonstrating video-speech synchronization, robustness to speaker ID swapping,
and prosody, presented at the project page.
|
[
{
"version": "v1",
"created": "Fri, 19 Nov 2021 10:23:38 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 22:20:57 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Hassid",
"Michael",
""
],
[
"Ramanovich",
"Michelle Tadmor",
""
],
[
"Shillingford",
"Brendan",
""
],
[
"Wang",
"Miaosen",
""
],
[
"Jia",
"Ye",
""
],
[
"Remez",
"Tal",
""
]
] |
new_dataset
| 0.980648 |
2112.00322
|
Danila Rukhovich
|
Danila Rukhovich, Anna Vorontsova, Anton Konushin
|
FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, promising applications in robotics and augmented reality have
attracted considerable attention to 3D object detection from point clouds. In
this paper, we present FCAF3D - a first-in-class fully convolutional
anchor-free indoor 3D object detection method. It is a simple yet effective
method that uses a voxel representation of a point cloud and processes voxels
with sparse convolutions. FCAF3D can handle large-scale scenes with minimal
runtime through a single fully convolutional feed-forward pass. Existing 3D
object detection methods make prior assumptions about the geometry of objects,
and we argue that this limits their generalization ability. To get rid of any prior
assumptions, we propose a novel parametrization of oriented bounding boxes that
allows obtaining better results in a purely data-driven way. The proposed
method achieves state-of-the-art 3D object detection results in terms of
mAP@0.5 on ScanNet V2 (+4.5), SUN RGB-D (+3.5), and S3DIS (+20.5) datasets. The
code and models are available at https://github.com/samsunglabs/fcaf3d.
|
[
{
"version": "v1",
"created": "Wed, 1 Dec 2021 07:28:52 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 06:12:39 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Rukhovich",
"Danila",
""
],
[
"Vorontsova",
"Anna",
""
],
[
"Konushin",
"Anton",
""
]
] |
new_dataset
| 0.998712 |
2201.02767
|
Shitao Tang
|
Shitao Tang, Jiahui Zhang, Siyu Zhu, Ping Tan
|
QuadTree Attention for Vision Transformers
|
ICLR2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers have been successful in many vision tasks, thanks to their
capability of capturing long-range dependency. However, their quadratic
computational complexity poses a major obstacle for applying them to vision
tasks requiring dense predictions, such as object detection, feature matching,
stereo, etc. We introduce QuadTree Attention, which reduces the computational
complexity from quadratic to linear. Our quadtree transformer builds token
pyramids and computes attention in a coarse-to-fine manner. At each level, the
top K patches with the highest attention scores are selected, such that at the
next level, attention is only evaluated within the relevant regions
corresponding to these top K patches. We demonstrate that quadtree attention
achieves state-of-the-art performance in various vision tasks, e.g. with 4.0%
improvement in feature matching on ScanNet, about 50% flops reduction in stereo
matching, 0.4-1.5% improvement in top-1 accuracy on ImageNet classification,
1.2-1.8% improvement on COCO object detection, and 0.7-2.4% improvement on
semantic segmentation over previous state-of-the-art transformers. The code
is available at https://github.com/Tangshitao/QuadtreeAttention.
|
[
{
"version": "v1",
"created": "Sat, 8 Jan 2022 05:45:32 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 19:10:58 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Tang",
"Shitao",
""
],
[
"Zhang",
"Jiahui",
""
],
[
"Zhu",
"Siyu",
""
],
[
"Tan",
"Ping",
""
]
] |
new_dataset
| 0.999483 |
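The coarse-to-fine top-K scheme above can be sketched in one dimension: score pooled key groups first, then evaluate fine attention only inside the top-K groups. The paper's 2D quadtree structure and token pyramid construction are omitted here, so this is a conceptual sketch only:

```python
import numpy as np

def quadtree_style_attention(q, k, v, top_k=4, pool=4):
    """Two-level coarse-to-fine attention (1D for brevity): each query
    scores pooled key groups, then attends only within the top-K groups,
    dropping cost from O(n^2) toward O(n * top_k * pool)."""
    n, d = k.shape
    groups = np.arange(n).reshape(-1, pool)      # n/pool groups of tokens
    k_coarse = k[groups].mean(1)                 # pooled (coarse) keys
    out = np.empty_like(q)
    for i, qi in enumerate(q):
        coarse = qi @ k_coarse.T                 # score each group
        keep = np.argsort(coarse)[-top_k:]       # keep top-K groups
        idx = groups[keep].ravel()               # fine tokens retained
        s = qi @ k[idx].T / np.sqrt(d)
        w = np.exp(s - s.max())
        out[i] = (w / w.sum()) @ v[idx]
    return out

q = k = v = np.random.randn(64, 16)
print(quadtree_style_attention(q, k, v).shape)   # (64, 16)
```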
2202.13657
|
Nicol\`o Lucchesi
|
Nicol\`o Lucchesi, Antonio Carta, Vincenzo Lomonaco and Davide Bacciu
|
Avalanche RL: a Continual Reinforcement Learning Library
|
Presented at the 21st International Conference on Image Analysis and
Processing (ICIAP 2021)
| null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Continual Reinforcement Learning (CRL) is a challenging setting where an
agent learns to interact with an environment that is constantly changing over
time (the stream of experiences). In this paper, we describe Avalanche RL, a
library for Continual Reinforcement Learning that makes it easy to train
agents on a continuous stream of tasks. Avalanche RL is based on PyTorch and
supports any OpenAI Gym environment. Its design is based on Avalanche, one of
the more popular continual learning libraries, which allows us to reuse a
large number of continual learning strategies and improves the interaction
between reinforcement learning and continual learning researchers. Additionally, we
propose Continual Habitat-Lab, a novel benchmark and a high-level library which
enables the usage of the photorealistic simulator Habitat-Sim for CRL research.
Overall, Avalanche RL attempts to unify continual reinforcement learning
applications under a common framework, which we hope will foster the growth of
the field.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 10:01:22 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 14:32:41 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Lucchesi",
"Nicolò",
""
],
[
"Carta",
"Antonio",
""
],
[
"Lomonaco",
"Vincenzo",
""
],
[
"Bacciu",
"Davide",
""
]
] |
new_dataset
| 0.998826 |
2203.10765
|
Anurag Jain
|
Anurag Jain, Sanidhay Arora, Sankarshan Damle and Sujit Gujar
|
Tiramisu: Layering Consensus Protocols for Scalable and Secure
Blockchains
| null | null | null | null |
cs.CR cs.GT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Cryptocurrencies are poised to revolutionize the modern economy by
democratizing commerce. These currencies operate on top of blockchain-based
distributed ledgers. Existing permissionless blockchain-based protocols offer
unparalleled benefits like decentralization, anonymity, and transparency.
However, these protocols suffer in performance which hinders their widespread
adoption. In particular, high time-to-finality and low transaction rates keep
them from replacing centralized payment systems such as the Visa network.
Permissioned blockchain protocols offer attractive performance guarantees, but
they are not considered suitable for deploying decentralized cryptocurrencies
due to their centralized nature. Researchers have developed several
multi-layered blockchain protocols that combine both permissioned and
permissionless blockchain protocols to achieve high performance along with
decentralization. The key idea with existing layered blockchain protocols in
literature is to divide blockchain operations into two layers and use different
types of blockchain protocols to manage each layer. However, many such works
rely on honest-majority assumptions, which may not accurately reflect the real
world, where participants may be self-interested or rational. These
assumptions may render the protocols susceptible to security threats in the
real world, as highlighted by the literature focused on exploring
game-theoretic attacks on these protocols. We generalize the "layered" approach
taken by existing protocols in the literature and present a framework to
analyze the system in the BAR Model and provide a generalized game-theoretic
analysis of such protocols. Using our analysis, we identify the critical system
parameters required for a distributed ledger's secure operation in a more
realistic setting.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 07:14:06 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2022 10:37:42 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Mar 2022 04:22:59 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Jain",
"Anurag",
""
],
[
"Arora",
"Sanidhay",
""
],
[
"Damle",
"Sankarshan",
""
],
[
"Gujar",
"Sujit",
""
]
] |
new_dataset
| 0.974889 |
2203.12633
|
Tolga Birdal
|
Alp Yurtsever and Tolga Birdal and Vladislav Golyanik
|
Q-FW: A Hybrid Classical-Quantum Frank-Wolfe for Quadratic Binary
Optimization
|
26 pages with supplementary material
| null | null | null |
cs.CV cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a hybrid classical-quantum framework based on the Frank-Wolfe
algorithm, Q-FW, for solving quadratic, linearly-constrained, binary
optimization problems on quantum annealers (QA). The computational premise of
quantum computers has cultivated the re-design of various existing vision
problems into quantum-friendly forms. Experimental QA realizations can solve a
particular non-convex problem known as the quadratic unconstrained binary
optimization (QUBO). Yet a naive-QUBO cannot take into account the restrictions
on the parameters. To introduce additional structure in the parameter space,
researchers have crafted ad-hoc solutions incorporating (linear) constraints in
the form of regularizers. However, this comes at the expense of a
hyper-parameter balancing the impact of regularization. To date, a true
constrained solver for quadratic binary optimization (QBO) problems has been lacking.
Q-FW first reformulates constrained-QBO as a copositive program (CP), then
employs Frank-Wolfe iterations to solve CP while satisfying linear (in)equality
constraints. This procedure unrolls the original constrained-QBO into a set of
unconstrained QUBOs all of which are solved, in a sequel, on a QA. We use
D-Wave Advantage QA to conduct synthetic and real experiments on two important
computer vision problems, graph matching and permutation synchronization, which
demonstrate that our approach is effective in alleviating the need for an
explicit regularization coefficient.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 18:00:03 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Yurtsever",
"Alp",
""
],
[
"Birdal",
"Tolga",
""
],
[
"Golyanik",
"Vladislav",
""
]
] |
new_dataset
| 0.99064 |
2203.12712
|
Bolun Li
|
Bolun Li, Hao Xu, Qidong Zhao, Pengfei Su, Milind Chabbi, Shuyin Jiao,
Xu Liu
|
OJXPerf: Featherlight Object Replica Detection for Java Programs
| null |
44th International Conference on Software Engineering (ICSE 2022)
|
10.1145/3510003.3510083
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memory bloat is an important source of inefficiency in complex production
software, especially in software written in managed languages such as Java.
Prior approaches to this problem have focused on identifying objects that
outlive their life span. Few studies have, however, looked into whether and to
what extent myriad objects of the same type are identical. A quantitative
assessment of identical objects with code-level attribution can assist
developers in refactoring code to eliminate object bloat, and favor reuse of
existing object(s). The result is reduced memory pressure, reduced allocation
and garbage collection, enhanced data locality, and reduced re-computation, all
of which result in superior performance.
We develop OJXPerf, a lightweight sampling-based profiler, which
probabilistically identifies identical objects. OJXPerf employs hardware
performance monitoring units (PMU) in conjunction with hardware debug registers
to sample and compare field values of different objects of the same type
allocated at the same calling context but potentially accessed at different
program points. The result is a lightweight measurement, a combination of
object allocation contexts and usage contexts ordered by duplication frequency.
This class of duplicated objects is relatively easier to optimize. OJXPerf
incurs 9% runtime and 6% memory overheads on average. We empirically show the
benefit of OJXPerf by using its profiles to instruct us to optimize a number of
Java programs, including well-known benchmarks and real-world applications. The
results show a noticeable reduction in memory usage (up to 11%) and a
significant speedup (up to 25%).
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 20:20:07 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Li",
"Bolun",
""
],
[
"Xu",
"Hao",
""
],
[
"Zhao",
"Qidong",
""
],
[
"Su",
"Pengfei",
""
],
[
"Chabbi",
"Milind",
""
],
[
"Jiao",
"Shuyin",
""
],
[
"Liu",
"Xu",
""
]
] |
new_dataset
| 0.999374 |
2203.12751
|
Mehrad Moradshahi
|
Monica S. Lam, Giovanni Campagna, Mehrad Moradshahi, Sina J. Semnani,
Silei Xu
|
ThingTalk: An Extensible, Executable Representation Language for
Task-Oriented Dialogues
|
8 pages, 3 figures
| null | null | null |
cs.PL cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Task-oriented conversational agents rely on semantic parsers to translate
natural language to formal representations. In this paper, we present the
design and rationale of the ThingTalk formal representation, and show how the
design improves the development of transactional task-oriented agents.
ThingTalk is built on four core principles: (1) representing user requests
directly as executable statements, covering all the functionality of the agent,
(2) representing dialogues formally and succinctly to support accurate
contextual semantic parsing, (3) standardizing types and interfaces to maximize
reuse between agents, and (4) allowing multiple, independently-developed agents
to be composed in a single virtual assistant. ThingTalk is developed as part of
the Genie Framework that allows developers to quickly build transactional
agents given a database and APIs.
We compare ThingTalk to existing representations: SMCalFlow, SGD, TreeDST.
Compared to the others, the ThingTalk design is both more general and more
cost-effective. Evaluated on the MultiWOZ benchmark, using ThingTalk and
associated tools yields a new state-of-the-art accuracy of 79% turn-by-turn.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 22:40:50 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Lam",
"Monica S.",
""
],
[
"Campagna",
"Giovanni",
""
],
[
"Moradshahi",
"Mehrad",
""
],
[
"Semnani",
"Sina J.",
""
],
[
"Xu",
"Silei",
""
]
] |
new_dataset
| 0.994745 |
2203.12752
|
Calogero Maria Oddo
|
Luca Massari, Giulia Fransvea, Jessica D'Abbraccio, Mariangela Filosa,
Giuseppe Terruso, Andrea Aliperta, Giacomo D'Alesio, Martina Zaltieri,
Emiliano Schena, Eduardo Palermo, Edoardo Sinibaldi, Calogero Maria Oddo
|
Functional mimicry of Ruffini receptors with Fiber Bragg Gratings and
Deep Neural Networks enables a bio-inspired large-area tactile sensitive skin
|
6 figures, 4 extended data figures, 2 extended data tables, 39 pages
| null | null | null |
cs.RO cs.SY eess.SP eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Collaborative robots are expected to physically interact with humans in daily
living and in the workplace, including industrial and healthcare settings. A related
key enabling technology is tactile sensing, which currently requires addressing
the outstanding scientific challenge to simultaneously detect contact location
and intensity by means of soft conformable artificial skins adapting over large
areas to the complex curved geometries of robot embodiments. In this work, the
development of a large-area sensitive soft skin with a curved geometry is
presented, allowing for robot total-body coverage through modular patches. The
biomimetic skin consists of a soft polymeric matrix, resembling a human
forearm, embedded with photonic Fiber Bragg Grating (FBG) transducers, which
partially mimics Ruffini mechanoreceptor functionality with diffuse,
overlapping receptive fields. A Convolutional Neural Network deep learning
algorithm and a multigrid Neuron Integration Process were implemented to decode
the FBG sensor outputs for inferring contact force magnitude and localization
through the skin surface. Results achieved 35 mN (IQR = 56 mN) and 3.2 mm (IQR
= 2.3 mm) median errors, for force and localization predictions, respectively.
Demonstrations with an anthropomorphic arm pave the way towards AI-based
integrated skins enabling safe human-robot cooperation via machine
intelligence.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 22:42:29 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Massari",
"Luca",
""
],
[
"Fransvea",
"Giulia",
""
],
[
"D'Abbraccio",
"Jessica",
""
],
[
"Filosa",
"Mariangela",
""
],
[
"Terruso",
"Giuseppe",
""
],
[
"Aliperta",
"Andrea",
""
],
[
"D'Alesio",
"Giacomo",
""
],
[
"Zaltieri",
"Martina",
""
],
[
"Schena",
"Emiliano",
""
],
[
"Palermo",
"Eduardo",
""
],
[
"Sinibaldi",
"Edoardo",
""
],
[
"Oddo",
"Calogero Maria",
""
]
] |
new_dataset
| 0.996079 |
2203.12776
|
Michele Tufano
|
Michele Tufano, Shao Kun Deng, Neel Sundaresan, Alexey Svyatkovskiy
|
Methods2Test: A dataset of focal methods mapped to test cases
|
Accepted for publication in the proceedings of The 2022 Mining
Software Repositories Conference (MSR 2022) - Data and Tool track
| null |
10.1145/3524842.3528009
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Unit testing is an essential part of the software development process, which
helps to identify issues with source code in early stages of development and
prevent regressions. Machine learning has emerged as a viable approach to help
software developers generate automated unit tests. However, generating reliable
unit test cases that are semantically correct and capable of catching software
bugs or unintended behavior via machine learning requires large, metadata-rich,
datasets. In this paper we present Methods2Test, a large, supervised dataset
of test cases mapped to their corresponding methods under test (i.e., focal
methods). This dataset contains
780,944 pairs of JUnit tests and focal methods, extracted from a total of
91,385 Java open source projects hosted on GitHub with licenses permitting
re-distribution. The main challenge behind the creation of Methods2Test was
to establish a reliable mapping between a test case and the relevant focal
method. To this aim, we designed a set of heuristics, based on developers' best
practices in software testing, which identify the likely focal method for a
given test case. To facilitate further analysis, we store a rich set of
metadata for each method-test pair in JSON-formatted files. Additionally, we
extract textual corpus from the dataset at different context levels, which we
provide both in raw and tokenized forms, in order to enable researchers to
train and evaluate machine learning models for Automated Test Generation.
Methods2Test is publicly available at:
https://github.com/microsoft/methods2test
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 23:59:02 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Tufano",
"Michele",
""
],
[
"Deng",
"Shao Kun",
""
],
[
"Sundaresan",
"Neel",
""
],
[
"Svyatkovskiy",
"Alexey",
""
]
] |
new_dataset
| 0.999838 |
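The abstract above maps each test case to its likely focal method via heuristics based on developer best practices. One such heuristic, greatly simplified and assumed here for illustration — matching a JUnit test name against candidate method names:

```python
import re

def likely_focal_method(test_name, candidate_methods):
    """Strip the 'test' prefix/suffix from a JUnit test name and look
    for a method with the matching name. The real dataset uses a richer
    heuristic set; this is only an illustrative sketch."""
    core = re.sub(r"^test_?|_?test$", "", test_name, flags=re.IGNORECASE)
    for method in candidate_methods:
        if method.lower() == core.lower():
            return method
    return None  # no confident mapping; the pair would be discarded

# Methods defined in a hypothetical class under test.
methods = ["parseHeader", "readBody", "close"]
print(likely_focal_method("testParseHeader", methods))  # -> parseHeader
```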
2203.12831
|
Bowen Wang
|
Bowen Wang, Guibao Shen, Dong Li, Jianye Hao, Wulong Liu, Yu Huang,
Hongzhong Wu, Yibo Lin, Guangyong Chen, Pheng Ann Heng
|
LHNN: Lattice Hypergraph Neural Network for VLSI Congestion Prediction
|
Accepted as a conference paper in DAC 2022; 6 pages, 4 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Precise congestion prediction from a placement solution plays a crucial role
in circuit placement. This work proposes the lattice hypergraph (LH-graph), a
novel graph formulation for circuits, which preserves netlist data during the
whole learning process, and enables the congestion information propagated
geometrically and topologically. Based on the formulation, we further developed
a heterogeneous graph neural network architecture, LHNN, which jointly
performs routing demand regression to support congestion spot classification.
LHNN consistently achieves more than 35% improvement over U-nets and Pix2Pix
on the F1 score. We expect our work to highlight essential procedures for
machine learning-based congestion prediction.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 03:31:18 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Wang",
"Bowen",
""
],
[
"Shen",
"Guibao",
""
],
[
"Li",
"Dong",
""
],
[
"Hao",
"Jianye",
""
],
[
"Liu",
"Wulong",
""
],
[
"Huang",
"Yu",
""
],
[
"Wu",
"Hongzhong",
""
],
[
"Lin",
"Yibo",
""
],
[
"Chen",
"Guangyong",
""
],
[
"Heng",
"Pheng Ann",
""
]
] |
new_dataset
| 0.9644 |
2203.12876
|
EPTCS
|
Ilaria Castellani (INRIA, Universit\'e C\^ote d'Azur), Mariangiola
Dezani-Ciancaglini (Universit\`a di Torino), Paola Giannini (Universit\`a del
Piemonte Orientale)
|
Asynchronous Sessions with Input Races
|
In Proceedings PLACES 2022, arXiv:2203.12142
|
EPTCS 356, 2022, pp. 12-23
|
10.4204/EPTCS.356.2
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a calculus for asynchronous multiparty sessions where input
choices with different senders are allowed in processes. We present a type
system that accepts such input races provided they do not hinder lock-freedom.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 06:37:11 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Castellani",
"Ilaria",
"",
"INRIA, Université Côte d'Azur"
],
[
"Dezani-Ciancaglini",
"Mariangiola",
"",
"Università di Torino"
],
[
"Giannini",
"Paola",
"",
"Università del\n Piemonte Orientale"
]
] |
new_dataset
| 0.987237 |
2203.12878
|
EPTCS
|
Dennis Liew (University of Massachusetts Boston, Boston, USA), Tiago
Cogumbreiro (University of Massachusetts Boston, Boston, USA), Julien Lange
(Royal Holloway, University of London, Egham, UK)
|
Provable GPU Data-Races in Static Race Detection
|
In Proceedings PLACES 2022, arXiv:2203.12142
|
EPTCS 356, 2022, pp. 36-45
|
10.4204/EPTCS.356.4
| null |
cs.PL cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We extend the theory behind the Faial tool-chain, which can soundly prove
that CUDA programs (aka, kernels) are data-race free using specialized
behavioral types called memory access protocols (MAPs). In this paper we extend
the theory of MAPs to characterize kernels for which alarms can be identified
as true alarms. We introduce a core calculus for CUDA, which we named BabyCUDA,
and a behavioral type system for it. We show that if a BabyCUDA program can be
assigned a MAP, then any alarm raised by Faial for this program is a true
alarm.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 06:37:51 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Liew",
"Dennis",
"",
"University of Massachusetts Boston, Boston, USA"
],
[
"Cogumbreiro",
"Tiago",
"",
"University of Massachusetts Boston, Boston, USA"
],
[
"Lange",
"Julien",
"",
"Royal Holloway, University of London, Egham, UK"
]
] |
new_dataset
| 0.978755 |
2203.12879
|
EPTCS
|
Matteo Cimini (University of Massachusetts Lowell, USA)
|
Lang-n-Send: Processes That Send Languages
|
In Proceedings PLACES 2022, arXiv:2203.12142
|
EPTCS 356, 2022, pp. 46-56
|
10.4204/EPTCS.356.5
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We present Lang-n-Send, a pi-calculus that is equipped with language
definitions. Processes can define languages in operational semantics, and use
them to execute programs. Furthermore, processes can send and receive pieces of
operational semantics through channels.
We present a reduction semantics for Lang-n-Send, and we offer examples that
demonstrate some of the scenarios that Lang-n-Send captures.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 06:38:12 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Cimini",
"Matteo",
"",
"University of Massachusetts Lowell, USA"
]
] |
new_dataset
| 0.998619 |
2203.12906
|
Anssi Moisio
|
Anssi Moisio, Dejan Porjazovski, Aku Rouhe, Yaroslav Getman, Anja
Virkkunen, Tam\'as Gr\'osz, Krister Lind\'en and Mikko Kurimo
|
Lahjoita puhetta -- a large-scale corpus of spoken Finnish with some
benchmarks
|
Submitted to Language Resources and Evaluation
| null | null | null |
cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The Donate Speech campaign has so far succeeded in gathering approximately
3600 hours of ordinary, colloquial Finnish speech into the Lahjoita puhetta
(Donate Speech) corpus. The corpus includes over twenty thousand speakers from
all the regions of Finland and from all age brackets. The primary goals of the
collection were to create a representative, large-scale resource to study
spontaneous spoken Finnish and to accelerate the development of language
technology and speech-based services. In this paper, we present the collection
process and the collected corpus, and showcase its versatility through multiple
use cases. The evaluated use cases include automatic speech recognition of
spontaneous speech; detection of age, gender, dialect, and topic; and metadata
analysis. We provide benchmarks for the use cases, as well as downloadable,
trained baseline systems with open-source code for reproducibility. One further
use case is to verify the metadata and transcripts given in this corpus itself,
and to suggest artificial metadata and transcripts for the part of the corpus
where it is missing.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 07:50:25 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Moisio",
"Anssi",
""
],
[
"Porjazovski",
"Dejan",
""
],
[
"Rouhe",
"Aku",
""
],
[
"Getman",
"Yaroslav",
""
],
[
"Virkkunen",
"Anja",
""
],
[
"Grósz",
"Tamás",
""
],
[
"Lindén",
"Krister",
""
],
[
"Kurimo",
"Mikko",
""
]
] |
new_dataset
| 0.999802 |
2203.12921
|
Luoxiao Yang
|
Luoxiao Yang, Zhong Zheng, and Zijun Zhang
|
Rubik's Cube Operator: A Plug And Play Permutation Module for Better
Arranging High Dimensional Industrial Data in Deep Convolutional Processes
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The convolutional neural network (CNN) has been widely applied to process
industrial-data-based tensor inputs, which integrate data records of
distributed industrial systems along the spatial, temporal, and
system-dynamics aspects. However, unlike images, information in an
industrial-data-based tensor is not necessarily spatially ordered, so directly
applying CNNs is ineffective. To tackle this issue, we propose a plug-and-play
module, the Rubik's Cube Operator (RCO), which adaptively permutes the data
organization of the industrial-data-based tensor into an optimal or suboptimal
order of attributes before it is processed by CNNs, and which can be updated
together with subsequent CNNs via a gradient-based optimizer. The proposed RCO
maintains K binary, right-stochastic permutation matrices to permute the
attributes of the K axes of the input tensor. A novel learning process is
proposed to enable learning permutation matrices from data, where the
Gumbel-Softmax is employed to reparameterize elements of permutation matrices,
and the soft regularization loss is proposed and added to the task-specific
loss to ensure the feature diversity of the permuted data. We verify the
effectiveness of the proposed RCO by considering two representative learning
tasks that process industrial data via CNNs: wind power prediction (WPP) and
wind speed prediction (WSP) from the renewable energy domain. Computational
experiments are conducted based on four datasets collected from different wind
farms and the results demonstrate that the proposed RCO can improve the
performance of CNN-based networks significantly.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 08:13:56 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Yang",
"Luoxiao",
""
],
[
"Zheng",
"Zhong",
""
],
[
"Zhang",
"Zijun",
""
]
] |
new_dataset
| 0.971214 |
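The Gumbel-Softmax reparameterization named in the abstract above can be sketched as a row-wise relaxation that produces a right-stochastic soft permutation; the temperature, its annealing, and the soft regularization loss are assumptions or omissions relative to the paper's full training procedure:

```python
import numpy as np

def gumbel_softmax_permutation(logits, tau=0.5, rng=np.random):
    """Row-wise Gumbel-Softmax over an n x n logit matrix, yielding a
    right-stochastic 'soft permutation' (each row sums to 1). Annealing
    tau toward 0 pushes rows toward one-hot, i.e. a hard permutation."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + g) / tau
    y = y - y.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=1, keepdims=True)

P = gumbel_softmax_permutation(np.random.randn(5, 5), tau=0.1)
x = np.random.randn(5, 3)   # 5 attributes along one tensor axis
print(P @ x)                # attributes softly permuted along that axis
print(P.sum(axis=1))        # each row sums to 1 (right stochastic)
```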
2203.12987
|
Daniel Mitchell MEng
|
Daniel Mitchell, Jamie Blanche, Sam T. Harper, Theodore Lim, Valentin
Robu, Ikuo Yamamoto and David Flynn
|
Millimeter-wave Foresight Sensing for Safety and Resilience in
Autonomous Operations
|
7 pages, 4 figures
| null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Robotic platforms are highly programmable, scalable, and versatile, able to
complete several tasks including Inspection, Maintenance and Repair (IMR).
Mobile robotics face fewer restrictions in operating environments, resulting
in greater flexibility: operation at height, in dangerous areas, and on
repetitive tasks. Cyber-physical infrastructures have been identified by the
UK Robotics Growth Partnership as a key enabler in how we utilize and interact
with sensors and machines via the virtual and physical worlds. Cyber Physical
Systems (CPS) allow robotics and artificial intelligence to adapt and be
repurposed at pace, allowing new challenges in CPS to be addressed. A
challenge exists within robotics to secure an effective partnership in a wide
range of areas, including shared workspaces and Beyond Visual Line of Sight
(BVLOS) operation. Robotic manipulation abilities have improved a robot's
accessibility via the ability to open doorways; however, challenges exist in
how a robot decides whether it is safe to move into a new workspace. Current
sensing methods are limited to line of sight and are unable to capture data
beyond doorways or walls; therefore, a robot is unable to sense whether it is
safe to open a door. Another limitation is that robots are unable to detect
whether a human is within a shared workspace. If a human is detected, extended
safety precautions can be taken to ensure the safe autonomous operation of the
robot. These challenges
are represented as safety, trust and resilience, inhibiting the successful
advancement of CPS. This paper evaluates the use of frequency modulated
continuous wave radar sensing for human detection and through-wall detection to
increase situational awareness. The results validate the use of the sensor to
detect the difference between a person and infrastructure, and increased
situational awareness for navigation via foresight monitoring through walls.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 11:25:24 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Mitchell",
"Daniel",
""
],
[
"Blanche",
"Jamie",
""
],
[
"Harper",
"Sam T.",
""
],
[
"Lim",
"Theodore",
""
],
[
"Robu",
"Valentin",
""
],
[
"Yamamoto",
"Ikuo",
""
],
[
"Flynn",
"David",
""
]
] |
new_dataset
| 0.999782 |
2203.12998
|
Linda Freienthal
|
Marit Asula, Jane Makke, Linda Freienthal, Hele-Andra Kuulmets and
Raul Sirel
|
Kratt: Developing an Automatic Subject Indexing Tool for The National
Library of Estonia
|
This is a preprint version. It has 12 pages, 5 figures, 3 tables
|
Cataloging & Classification Quarterly (2021), 59:8, 775-793
|
10.1080/01639374.2021.1998283
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Manual subject indexing in libraries is a time-consuming and costly process
and the quality of the assigned subjects is affected by the cataloguer's
knowledge of the specific topics contained in the book. To address these
issues, we exploited the opportunities arising from artificial intelligence to
develop Kratt: a prototype of an automatic subject indexing tool. Kratt is
able to subject index a book, independently of its extent and genre, with a
set of keywords from the Estonian Subject Thesaurus. It takes Kratt
approximately 1 minute to subject index a book, 10-15 times faster than a
human. Although the resulting keywords were not considered satisfactory by the
cataloguers, the ratings of a small sample of regular library users showed more
promise. We also argue that the results can be enhanced by including a bigger
corpus for training the model and applying more careful preprocessing
techniques.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 11:45:44 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Asula",
"Marit",
""
],
[
"Makke",
"Jane",
""
],
[
"Freienthal",
"Linda",
""
],
[
"Kuulmets",
"Hele-Andra",
""
],
[
"Sirel",
"Raul",
""
]
] |
new_dataset
| 0.992378 |
2203.13158
|
Daniel Harasim
|
Daniel Harasim, Giovanni Affatato and Fabian C. Moss
|
midiVERTO: A Web Application to Visualize Tonality in Real Time
| null | null | null | null |
cs.SD
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a web application for visualizing the tonality of a piece
of music -- the organization of its chords and scales -- at a high level of
abstraction and with coordinated playback. The application applies the discrete
Fourier transform to the pitch-class domain of a user-specified segmentation of
a MIDI file and visualizes the Fourier coefficients' trajectories. Since the
coefficients indicate different musical properties, such as harmonic function,
triadicity, and diatonicity, the application isolates aspects of a piece's
tonality and shows their development in time. The aim of the application is to
bridge a gap between mathematical music theory, musicology, and the general
public by making the discrete Fourier transform as applied to the pitch-class
domain accessible without requiring advanced mathematical knowledge or
programming skills up front.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 16:26:48 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Harasim",
"Daniel",
""
],
[
"Affatato",
"Giovanni",
""
],
[
"Moss",
"Fabian C.",
""
]
] |
new_dataset
| 0.998392 |
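The core operation described above — the discrete Fourier transform applied to the pitch-class domain — can be sketched in a few lines. The C-major example is an assumption for illustration, and the reading of which coefficient reflects which musical property follows the mathematical music theory literature rather than anything specific to the tool:

```python
import numpy as np

# DFT of a pitch-class distribution, the quantity the application
# visualizes. Coefficients 1..6 carry the information (7..11 are
# complex conjugates of 5..1).
pc = np.zeros(12)
for p in (0, 2, 4, 5, 7, 9, 11):   # C major scale pitch classes
    pc[p] = 1.0
pc /= pc.sum()                      # normalize to a distribution

coeffs = np.fft.fft(pc)             # 12 Fourier coefficients
for n in range(1, 7):
    print(n, round(abs(coeffs[n]), 3))
```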
2203.13185
|
Willi Menapace
|
Federica Arrigoni, Willi Menapace, Marcel Seelbach Benkner, Elisa
Ricci, Vladislav Golyanik
|
Quantum Motion Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion segmentation is a challenging problem that seeks to identify
independent motions in two or several input images. This paper introduces the
first algorithm for motion segmentation that relies on adiabatic quantum
optimization of the objective function. The proposed method achieves on-par
performance with the state of the art on problem instances which can be mapped
to modern quantum annealers.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 17:02:43 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Arrigoni",
"Federica",
""
],
[
"Menapace",
"Willi",
""
],
[
"Benkner",
"Marcel Seelbach",
""
],
[
"Ricci",
"Elisa",
""
],
[
"Golyanik",
"Vladislav",
""
]
] |
new_dataset
| 0.97162 |
2203.13249
|
Likun Cai
|
Likun Cai, Zhi Zhang, Yi Zhu, Li Zhang, Mu Li and Xiangyang Xue
|
BigDetection: A Large-scale Benchmark for Improved Object Detector
Pre-training
|
Technical report, code is released at
https://github.com/amazon-research/bigdetection
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple datasets and open challenges for object detection have been
introduced in recent years. To build more general and powerful object detection
systems, in this paper, we construct a new large-scale benchmark termed
BigDetection. Our goal is to simply leverage the training data from existing
datasets (LVIS, OpenImages and Object365) with carefully designed principles,
and curate a larger dataset for improved detector pre-training. Specifically,
we generate a new taxonomy which unifies the heterogeneous label spaces from
different sources. Our BigDetection dataset has 600 object categories and
contains over 3.4M training images with 36M bounding boxes. It is much larger
in multiple dimensions than previous benchmarks, which offers both
opportunities and challenges. Extensive experiments demonstrate its validity as
a new benchmark for evaluating different object detection methods, and its
effectiveness as a pre-training dataset.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 17:57:29 GMT"
}
] | 2022-03-25T00:00:00 |
[
[
"Cai",
"Likun",
""
],
[
"Zhang",
"Zhi",
""
],
[
"Zhu",
"Yi",
""
],
[
"Zhang",
"Li",
""
],
[
"Li",
"Mu",
""
],
[
"Xue",
"Xiangyang",
""
]
] |
new_dataset
| 0.988896 |
1611.06301
|
Haofu Liao
|
Haofu Liao, Yuncheng Li, Tianran Hu and Jiebo Luo
|
Inferring Restaurant Styles by Mining Crowd Sourced Photos from
User-Review Websites
|
10 pages, Accepted by IEEE BigData 2016
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
When looking for a restaurant online, user-uploaded photos often give people
an immediate and tangible impression of the restaurant. Due to their
informativeness, such user-contributed photos are leveraged by restaurant
review websites to provide their users an intuitive and effective search
experience. In this paper, we present a novel approach to inferring restaurant
types or styles (ambiance, dish styles, suitability for different occasions)
from user uploaded photos on user-review websites. To that end, we first
collect a novel restaurant photo dataset associating user-contributed photos
with restaurant styles from TripAdvisor. We then propose a deep
multi-instance multi-label learning (MIML) framework to deal with the unique
problem setting of the restaurant style classification task. We employ a
two-step bootstrap strategy to train a multi-label convolutional neural network
(CNN). The multi-label CNN is then used to compute the confidence scores of
restaurant styles for all the images associated with a restaurant. The computed
confidence scores are further used to train a final binary classifier for each
restaurant style tag. Upon training, the styles of a restaurant can be profiled
by analyzing restaurant photos with the trained multi-label CNN and SVM models.
Experimental evaluation has demonstrated that our crowd sourcing-based approach
can effectively infer the restaurant style when there are a sufficient number
of user uploaded photos for a given restaurant.
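As a rough illustration of the final aggregation step described above — turning per-image style confidences into a restaurant-level decision — consider the sketch below. It assumes hypothetical per-image score arrays and uses max-pooling over a restaurant's images followed by a per-style linear SVM; this mirrors the description but is not the authors' exact pipeline.
```python
import numpy as np
from sklearn.svm import LinearSVC

def restaurant_features(image_scores):
    """Aggregate per-image CNN style confidences (n_images, n_styles)
    into one restaurant-level feature vector via max-pooling."""
    return image_scores.max(axis=0)

# Hypothetical data: 3 restaurants, variable image counts, 4 style tags.
rng = np.random.default_rng(0)
score_sets = [rng.random((n, 4)) for n in (12, 5, 30)]
X = np.stack([restaurant_features(s) for s in score_sets])
y_romantic = np.array([1, 0, 1])  # illustrative labels for one style tag

clf = LinearSVC().fit(X, y_romantic)  # one binary classifier per style tag
print(clf.predict(X))
```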
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2016 04:27:28 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Dec 2020 21:01:03 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Mar 2022 16:27:31 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Liao",
"Haofu",
""
],
[
"Li",
"Yuncheng",
""
],
[
"Hu",
"Tianran",
""
],
[
"Luo",
"Jiebo",
""
]
] |
new_dataset
| 0.996028 |
1812.03507
|
Haofu Liao
|
Haofu Liao, Yucheng Tang, Gareth Funka-Lea, Jiebo Luo, Shaohua Kevin
Zhou
|
More Knowledge is Better: Cross-Modality Volume Completion and 3D+2D
Segmentation for Intracardiac Echocardiography Contouring
| null |
Medical Image Computing and Computer Assisted Intervention
(MICCAI) 2018. Lecture Notes in Computer Science, vol 11071. Springer, Cham
|
10.1007/978-3-030-00934-2_60
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Using catheter ablation to treat atrial fibrillation increasingly relies on
intracardiac echocardiography (ICE) for an anatomical delineation of the left
atrium and the pulmonary veins that enter the atrium. However, it is a
challenge to build an automatic contouring algorithm because ICE is noisy and
provides only a limited 2D view of the 3D anatomy. This work provides the first
automatic solution to segment the left atrium and the pulmonary veins from ICE.
In this solution, we demonstrate the benefit of building a cross-modality
framework that can leverage a database of diagnostic images to supplement the
less available interventional images. To this end, we develop a novel deep
neural network approach that uses the (i) 3D geometrical information provided
by a position sensor embedded in the ICE catheter and the (ii) 3D image
appearance information from a set of computed tomography cardiac volumes. We
evaluate the proposed approach over 11,000 ICE images collected from 150
clinical patients. Experimental results show that our model is significantly
better than a direct 2D image-to-image deep neural network segmentation,
especially for less-observed structures.
|
[
{
"version": "v1",
"created": "Sun, 9 Dec 2018 16:03:38 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Nov 2019 03:14:49 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Mar 2022 16:25:19 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Liao",
"Haofu",
""
],
[
"Tang",
"Yucheng",
""
],
[
"Funka-Lea",
"Gareth",
""
],
[
"Luo",
"Jiebo",
""
],
[
"Zhou",
"Shaohua Kevin",
""
]
] |
new_dataset
| 0.96345 |
2004.11824
|
Alex Levering
|
Alex Levering, Martin Tomko, Devis Tuia, Kourosh Khoshelham
|
Detecting Unsigned Physical Road Incidents from Driver-View Images
|
Preprint to T-IV paper
| null |
10.1109/TIV.2020.2991963
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Safety on roads is of utmost importance, especially in the context of
autonomous vehicles. A critical need is to detect and communicate disruptive
incidents early and effectively. In this paper we propose a system based on an
off-the-shelf deep neural network architecture that is able to detect and
recognize types of unsigned (non-placarded, i.e., not marked by signage such
as traffic signs), physical (visible in images) road incidents. We develop a
taxonomy for unsigned physical incidents to provide a means of organizing and
grouping related incidents.
After selecting eight target types of incidents, we collect a dataset of twelve
thousand images gathered from publicly-available web sources. We subsequently
fine-tune a convolutional neural network to recognize the eight types of road
incidents. The proposed model is able to recognize incidents with a high level
of accuracy (higher than 90%). We further show that while our system
generalizes well across spatial context by training a classifier on
geostratified data in the United Kingdom (with an accuracy of over 90%), the
translation to visually less similar environments requires spatially
distributed data collection.
|
[
{
"version": "v1",
"created": "Fri, 24 Apr 2020 16:02:17 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Levering",
"Alex",
""
],
[
"Tomko",
"Martin",
""
],
[
"Tuia",
"Devis",
""
],
[
"Khoshelham",
"Kourosh",
""
]
] |
new_dataset
| 0.973602 |
2011.13045
|
R. Kenny Jones
|
R. Kenny Jones and Homer Walke and Daniel Ritchie
|
PLAD: Learning to Infer Shape Programs with Pseudo-Labels and
Approximate Distributions
|
CVPR 2022; https://github.com/rkjones4/PLAD
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inferring programs which generate 2D and 3D shapes is important for reverse
engineering, editing, and more. Training models to perform this task is
complicated because paired (shape, program) data is not readily available for
many domains, making exact supervised learning infeasible. However, it is
possible to get paired data by compromising the accuracy of either the assigned
program labels or the shape distribution. Wake-sleep methods use samples from a
generative model of shape programs to approximate the distribution of real
shapes. In self-training, shapes are passed through a recognition model, which
predicts programs that are treated as "pseudo-labels" for those shapes. Related
to these approaches, we introduce a novel self-training variant unique to
program inference, where program pseudo-labels are paired with their executed
output shapes, avoiding label mismatch at the cost of an approximate shape
distribution. We propose to group these regimes under a single conceptual
framework, where training is performed with maximum likelihood updates sourced
from either Pseudo-Labels or an Approximate Distribution (PLAD). We evaluate
these techniques on multiple 2D and 3D shape program inference domains.
Compared with policy gradient reinforcement learning, we show that PLAD
techniques infer more accurate shape programs and converge significantly
faster. Finally, we propose to combine updates from different PLAD methods
within the training of a single model, and find that this approach outperforms
any individual technique.
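The PLAD-style update described above can be sketched as a training loop in which the recognition model's predicted programs are executed, and the (executed shape, program) pairs — rather than (real shape, predicted program) — drive the maximum-likelihood update. Everything named below (`infer`, `execute`, `mle_update`) is a hypothetical placeholder, not the authors' API.
```python
def plad_self_train(model, real_shapes, executor, mle_update, rounds=10):
    """Self-training variant unique to program inference (sketch).

    Program pseudo-labels are paired with their *executed* output shapes,
    avoiding label mismatch at the cost of an approximate shape
    distribution over the training inputs.
    """
    for _ in range(rounds):
        batch = []
        for shape in real_shapes:
            program = model.infer(shape)             # pseudo-label
            exec_shape = executor.execute(program)   # its executed output
            batch.append((exec_shape, program))      # matched by construction
        for exec_shape, program in batch:
            mle_update(model, exec_shape, program)   # maximum-likelihood step
    return model
```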
|
[
{
"version": "v1",
"created": "Wed, 25 Nov 2020 22:10:32 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Sep 2021 18:49:50 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Dec 2021 01:20:42 GMT"
},
{
"version": "v4",
"created": "Tue, 22 Mar 2022 19:16:20 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Jones",
"R. Kenny",
""
],
[
"Walke",
"Homer",
""
],
[
"Ritchie",
"Daniel",
""
]
] |
new_dataset
| 0.996475 |
2103.09927
|
Evrard Garcelon
|
Evrard Garcelon and Vianney Perchet and Matteo Pirotta
|
Encrypted Linear Contextual Bandit
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contextual bandit is a general framework for online learning in sequential
decision-making problems that has found application in a wide range of domains,
including recommendation systems, online advertising, and clinical trials.
A critical aspect of bandit methods is that they require to observe the
contexts --i.e., individual or group-level data-- and rewards in order to solve
the sequential problem. The large deployment in industrial applications has
increased interest in methods that preserve the users' privacy. In this paper,
we introduce a privacy-preserving bandit framework based on homomorphic
encryption, which allows computations over encrypted data. The
algorithm \textit{only} observes encrypted information (contexts and rewards)
and has no ability to decrypt it. Leveraging the properties of homomorphic
encryption, we show that, despite the complexity of the setting, it is
possible to solve any linear contextual bandit problem over encrypted data
with a $\widetilde{O}(d\sqrt{T})$ regret bound, while keeping the data
encrypted throughout.
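The paper's contribution is performing the bandit updates homomorphically; the underlying plaintext algorithm is a standard linear contextual bandit. For orientation only, here is a minimal plaintext LinUCB-style sketch of the computations that would have to be carried out over ciphertexts — the encryption layer is omitted, and the exact algorithm in the paper may differ.
```python
import numpy as np

class LinUCB:
    """Plaintext reference for the linear contextual bandit updates."""

    def __init__(self, d, alpha=1.0, lam=1.0):
        self.A = lam * np.eye(d)   # regularized Gram matrix
        self.b = np.zeros(d)       # reward-weighted context sum
        self.alpha = alpha         # exploration width (assumed value)

    def choose(self, contexts):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        ucb = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)
               for x in contexts]
        return int(np.argmax(ucb))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x
```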
|
[
{
"version": "v1",
"created": "Wed, 17 Mar 2021 21:49:21 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 16:16:58 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Garcelon",
"Evrard",
""
],
[
"Perchet",
"Vianney",
""
],
[
"Pirotta",
"Matteo",
""
]
] |
new_dataset
| 0.992369 |
2104.04182
|
Santiago Castro
|
Santiago Castro, Ruoyao Wang, Pingxuan Huang, Ian Stewart, Oana Ignat,
Nan Liu, Jonathan C. Stroud, Rada Mihalcea
|
FIBER: Fill-in-the-Blanks as a Challenging Video Understanding
Evaluation Framework
|
Accepted at ACL 2022 Main conference. Camera-ready version
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose fill-in-the-blanks as a video understanding evaluation framework
and introduce FIBER -- a novel dataset consisting of 28,000 videos and
descriptions in support of this evaluation framework. The fill-in-the-blanks
setting tests a model's understanding of a video by requiring it to predict a
masked noun phrase in the caption of the video, given the video and the
surrounding text. The FIBER benchmark does not share the weaknesses of the
current state-of-the-art language-informed video understanding tasks, namely:
(1) video question answering using multiple-choice questions, where models
perform relatively well because they exploit linguistic biases in the task
formulation, thus making our framework challenging for the current
state-of-the-art systems to solve; and (2) video captioning, which relies on an
open-ended evaluation framework that is often inaccurate because system answers
may be perceived as incorrect if they differ in form from the ground truth. The
FIBER dataset and our code are available at https://lit.eecs.umich.edu/fiber/.
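The fill-in-the-blanks construction — masking a noun phrase in a video caption — is easy to prototype. The sketch below uses spaCy's noun-chunk detector to blank out one noun phrase; it assumes the small English model is installed, and it is a plausible reconstruction of the task format rather than the authors' annotation pipeline.
```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this pipeline is installed

def make_blank(caption, rng=random):
    """Return (caption with one noun phrase masked, the masked answer)."""
    doc = nlp(caption)
    chunks = list(doc.noun_chunks)
    if not chunks:
        return None
    span = rng.choice(chunks)
    blanked = caption[:span.start_char] + "_____" + caption[span.end_char:]
    return blanked, span.text

print(make_blank("A man pours batter into a hot frying pan."))
# e.g. ('A man pours _____ into a hot frying pan.', 'batter')
```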
|
[
{
"version": "v1",
"created": "Fri, 9 Apr 2021 04:00:10 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 18:05:18 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Mar 2022 21:24:09 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Castro",
"Santiago",
""
],
[
"Wang",
"Ruoyao",
""
],
[
"Huang",
"Pingxuan",
""
],
[
"Stewart",
"Ian",
""
],
[
"Ignat",
"Oana",
""
],
[
"Liu",
"Nan",
""
],
[
"Stroud",
"Jonathan C.",
""
],
[
"Mihalcea",
"Rada",
""
]
] |
new_dataset
| 0.993052 |
2104.07407
|
Chuhan Wu
|
Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
|
MM-Rec: Multimodal News Recommendation
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate news representation is critical for news recommendation. Most
existing news representation methods learn news representations only from
news texts while ignoring visual information in news, such as images. In fact, users
may click news not only because of the interest in news titles but also due to
the attraction of news images. Thus, images are useful for representing news
and predicting user behaviors. In this paper, we propose a multimodal news
recommendation method, which can incorporate both textual and visual
information of news to learn multimodal news representations. We first extract
region-of-interests (ROIs) from news images via object detection. Then we use a
pre-trained visiolinguistic model to encode both news texts and news image ROIs
and model their inherent relatedness using co-attentional Transformers. In
addition, we propose a crossmodal candidate-aware attention network to select
relevant historical clicked news for accurate user modeling by measuring the
crossmodal relatedness between clicked news and candidate news. Experiments
validate that incorporating multimodal news information can effectively improve
news recommendation.
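The candidate-aware attention described above boils down to a scaled dot-product attention in which the candidate news embedding queries the user's clicked-news embeddings. The numpy sketch below shows only that scoring step, with made-up embeddings; the real model uses co-attentional Transformers over text and image ROIs.
```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def candidate_aware_score(candidate, clicked):
    """candidate: (d,) embedding; clicked: (n, d) clicked-news embeddings.

    Weighs the user's history by its relatedness to the candidate,
    then scores the candidate against the resulting user vector.
    """
    d = candidate.shape[0]
    attn = softmax(clicked @ candidate / np.sqrt(d))  # (n,) relatedness
    user_vec = attn @ clicked                         # (d,) user model
    return float(user_vec @ candidate)

rng = np.random.default_rng(0)
print(candidate_aware_score(rng.normal(size=8), rng.normal(size=(5, 8))))
```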
|
[
{
"version": "v1",
"created": "Thu, 15 Apr 2021 12:11:50 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 12:06:42 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Wu",
"Chuhan",
""
],
[
"Wu",
"Fangzhao",
""
],
[
"Qi",
"Tao",
""
],
[
"Huang",
"Yongfeng",
""
]
] |
new_dataset
| 0.998762 |
2106.12122
|
Fariba Abbasi
|
Fariba Abbasi, Hessam Mahdavifar, and Emanuele Viterbo
|
Hybrid Non-Binary Repeated Polar Codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Concatenating the state-of-the-art codes at moderate rates with repetition
codes has emerged as a practical solution deployed in various standards for
ultra-low-power devices such as in Internet-of-Things (IoT) networks. In this
paper, we propose a novel concatenation mechanism for such applications which
need to operate at very low signal-to-noise ratio (SNR) regime. In the proposed
scheme, the outer code is a hybrid polar code constructed in two stages, one
with a binary kernel and another also with a binary kernel but applied over a
binary extension field. The inner code is a non-binary multiplicative
repetition code. This particular structure inherits low-complexity decoding
structures of polar codes while enabling concatenation with an inner non-binary
multiplicative repetition scheme. The decoding for the proposed scheme is done
using cyclic redundancy check (CRC) aided successive cancellation list (SCL)
decoder over AWGN channel. Simulation results demonstrate that the proposed
hybrid non-binary repeated polar code provides performance gain compared to a
polar-repetition scheme with comparable decoding complexity.
|
[
{
"version": "v1",
"created": "Wed, 23 Jun 2021 01:59:33 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 17:03:06 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Abbasi",
"Fariba",
""
],
[
"Mahdavifar",
"Hessam",
""
],
[
"Viterbo",
"Emanuele",
""
]
] |
new_dataset
| 0.995752 |
2202.12024
|
Chuhan Wu
|
Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
|
NoisyTune: A Little Noise Can Help You Finetune Pretrained Language
Models Better
|
ACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Effectively finetuning pretrained language models (PLMs) is critical for
their success in downstream tasks. However, PLMs risk overfitting the
pretraining tasks and data, which usually have a gap with the target
downstream tasks. Such a gap may be difficult for existing PLM finetuning
methods to overcome and can lead to suboptimal performance. In this paper, we propose a
very simple yet effective method named NoisyTune to help better finetune PLMs
on downstream tasks by adding some noise to the parameters of PLMs before
fine-tuning. More specifically, we propose a matrix-wise perturbing method
which adds different uniform noises to different parameter matrices based on
their standard deviations. In this way, the varied characteristics of different
types of parameters in PLMs can be considered. Extensive experiments on both
GLUE English benchmark and XTREME multilingual benchmark show NoisyTune can
consistently empower the finetuning of different PLMs on different downstream
tasks.
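The matrix-wise perturbation is simple enough to state in a few lines of PyTorch. The sketch below adds uniform noise to each parameter matrix, scaled by that matrix's own standard deviation and a relative intensity `lam`; this follows the description above, though the exact hyperparameter values in the paper may differ.
```python
import torch

def noisytune(model, lam=0.15):
    """Perturb each parameter matrix before finetuning (sketch).

    The noise range for a matrix W is [-lam * std(W), lam * std(W)],
    so differently scaled parameter types receive proportionate noise.
    """
    with torch.no_grad():
        for _, p in model.named_parameters():
            if p.numel() <= 1:
                continue  # std of a single scalar is undefined
            scale = lam * p.std().item()
            p.add_(torch.empty_like(p).uniform_(-scale, scale))
    return model
```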
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 11:08:02 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 12:13:07 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Wu",
"Chuhan",
""
],
[
"Wu",
"Fangzhao",
""
],
[
"Qi",
"Tao",
""
],
[
"Huang",
"Yongfeng",
""
],
[
"Xie",
"Xing",
""
]
] |
new_dataset
| 0.991713 |
2203.09032
|
Yi Huang
|
Yi Huang, Yuan Fang, Xinmin Li, Jie Xu
|
Coordinated Power Control for Network Integrated Sensing and
Communication
|
5 pages, 4 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This correspondence paper studies a network integrated sensing and
communication (ISAC) system that unifies the interference channel for
communication and distributed radar sensing. In this system, a set of
distributed ISAC transmitters send individual messages to their respective
communication users (CUs), and at the same time cooperate with multiple sensing
receivers to estimate the location of one target. We exploit the coordinated
power control among ISAC transmitters to minimize their total transmit power
while ensuring the minimum signal-to-interference-plus-noise ratio (SINR)
constraints at individual CUs and the maximum Cram\'{e}r-Rao lower bound (CRLB)
requirement for target location estimation. Although the formulated coordinated
power control problem is non-convex and difficult to solve in general, we
propose two efficient algorithms to obtain high-quality solutions based on the
semi-definite relaxation (SDR) and CRLB approximation, respectively. Numerical
results show that the proposed designs achieve substantial performance gains in
terms of power reduction, as compared to the benchmark with a heuristic
separate communication-sensing design.
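The paper's SDR- and CRLB-based algorithms are involved, but the communication side of the problem — minimum total power under per-user SINR constraints — has a classic fixed-point solution that is useful for intuition. The sketch below implements that Foschini-Miljanic-style iteration only; the sensing (CRLB) constraint from the paper is deliberately omitted, and the channel values are made up.
```python
import numpy as np

def min_power_sinr(G, gamma, noise, iters=200):
    """Fixed-point power control for an interference channel (sketch).

    G[i, j]: channel gain from transmitter j to user i.
    gamma[i]: SINR target of user i. Converges when the targets are feasible.
    """
    n = len(gamma)
    p = np.ones(n)
    for _ in range(iters):
        interf = G @ p - np.diag(G) * p          # interference at each user
        p = gamma * (interf + noise) / np.diag(G)
    return p

G = np.array([[1.0, 0.1], [0.2, 1.0]])           # illustrative gains
print(min_power_sinr(G, gamma=np.array([2.0, 2.0]), noise=0.01))
```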
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 02:15:11 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 10:15:06 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Huang",
"Yi",
""
],
[
"Fang",
"Yuan",
""
],
[
"Li",
"Xinmin",
""
],
[
"Xu",
"Jie",
""
]
] |
new_dataset
| 0.956935 |
2203.12057
|
Zhe Shen
|
Zhe Shen and Takeshi Tsuchiya
|
Cat-inspired Gaits for A Tilt-rotor -- from Symmetrical to Asymmetrical
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Among the tilt-rotors (quadrotors) developed in the last decades, Ryll's
model with eight inputs (four magnitudes of the thrusts and four tilting
angles) attracted great attention. Typical feedback linearization maneuvers
all the eight inputs with a united control rule to stabilize this tilt-rotor.
Instead of assigning the tilting angles by the control rule, recent research
predetermined the tilting angles and left the magnitudes of the thrusts as the
only control signals. These tilting angles are designed to mimic the cat-trot
gait, avoiding a singular decoupling matrix in feedback linearization. To
complete the discussion of the cat-gait-inspired tilt-rotor gaits, this
research analyzes the rest of the common cat gaits: walk, run, transverse
gallop, and rotary gallop. It is found that the decoupling matrix becomes
singular in the walk gait and the rotary gallop. Further modifications are
made to these two gaits to accommodate the application of feedback
linearization. The modified gaits with different periods are then applied to
the tilt-rotor in tracking experiments, in which the references are uniform
rectilinear motion and uniform circular motion. All the experiments are
simulated in Simulink, MATLAB.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 21:36:37 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Shen",
"Zhe",
""
],
[
"Tsuchiya",
"Takeshi",
""
]
] |
new_dataset
| 0.98022 |
2203.12065
|
Eric Horton
|
Eric Horton, Chris Parnin
|
Dozer: Migrating Shell Commands to Ansible Modules via Execution
Profiling and Synthesis
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software developers frequently use the system shell to perform configuration
management tasks. Unfortunately, the shell does not scale well to large
systems, and configuration management tools like Ansible are more difficult to
learn. We address this problem with Dozer, a technique to help developers push
their shell commands into Ansible task definitions. It operates by tracing and
comparing system calls to find Ansible modules with similar behaviors to shell
commands, then generating and validating migrations to find the task which
produces the most similar changes to the system. Dozer is syntax agnostic,
which should allow it to generalize to other configuration management
platforms. We evaluate Dozer using datasets from open source configuration
scripts.
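The core matching idea — comparing the system-call footprint of a shell command against those of candidate Ansible modules — can be approximated with a set-similarity ranking. The sketch below uses Jaccard similarity over syscall-name sets; the trace format is hypothetical and Dozer's real comparison and validation are richer than this.
```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_modules(cmd_trace, module_traces):
    """Rank Ansible modules by syscall-set similarity to a shell command."""
    return sorted(module_traces,
                  key=lambda m: jaccard(cmd_trace, module_traces[m]),
                  reverse=True)

# Hypothetical traces: syscall names observed while executing each candidate.
cmd = ["openat", "write", "chmod", "close"]
modules = {
    "ansible.builtin.copy": ["openat", "read", "write", "close"],
    "ansible.builtin.file": ["openat", "chmod", "close"],
    "ansible.builtin.user": ["openat", "getuid", "write", "close"],
}
print(rank_modules(cmd, modules))
```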
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 21:54:44 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Horton",
"Eric",
""
],
[
"Parnin",
"Chris",
""
]
] |
new_dataset
| 0.987559 |
2203.12081
|
Hongrun Zhang
|
Hongrun Zhang, Yanda Meng, Yitian Zhao, Yihong Qiao, Xiaoyun Yang,
Sarah E. Coupland, Yalin Zheng
|
DTFD-MIL: Double-Tier Feature Distillation Multiple Instance Learning
for Histopathology Whole Slide Image Classification
|
Accepted to CVPR2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple instance learning (MIL) has been increasingly used in the
classification of histopathology whole slide images (WSIs). However, MIL
approaches for this specific classification problem still face unique
challenges, particularly those related to small sample cohorts. In these, there
are limited number of WSI slides (bags), while the resolution of a single WSI
is huge, which leads to a large number of patches (instances) cropped from this
slide. To address this issue, we propose to virtually enlarge the number of
bags by introducing the concept of pseudo-bags, on which a double-tier MIL
framework is built to effectively use the intrinsic features. Besides, we also
contribute to deriving the instance probability under the framework of
attention-based MIL, and utilize the derivation to help construct and analyze
the proposed framework. The proposed method outperforms other latest methods on
the CAMELYON-16 by substantially large margins, and is also better in
performance on the TCGA lung cancer dataset. The proposed framework is ready to
be extended for wider MIL applications. The code is available at:
https://github.com/hrzhang1123/DTFD-MIL
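The pseudo-bag construction is the easiest part of the framework to sketch: the patches of one slide are randomly split into several smaller bags that all inherit the slide-level label. The helper below is a minimal version of that idea under our own naming; the double-tier feature distillation built on top of it is not shown.
```python
import random

def make_pseudo_bags(instances, bag_label, n_pseudo, seed=None):
    """Split one bag's instances into n_pseudo pseudo-bags (sketch).

    Each pseudo-bag inherits the parent bag's label, virtually
    enlarging the number of training bags available for MIL.
    """
    rng = random.Random(seed)
    idx = list(range(len(instances)))
    rng.shuffle(idx)
    pseudo = [[] for _ in range(n_pseudo)]
    for k, i in enumerate(idx):
        pseudo[k % n_pseudo].append(instances[i])
    return [(bag, bag_label) for bag in pseudo if bag]

patches = [f"patch_{i}" for i in range(10)]  # stand-ins for WSI patches
print(make_pseudo_bags(patches, bag_label=1, n_pseudo=3, seed=0))
```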
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 22:33:42 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Zhang",
"Hongrun",
""
],
[
"Meng",
"Yanda",
""
],
[
"Zhao",
"Yitian",
""
],
[
"Qiao",
"Yihong",
""
],
[
"Yang",
"Xiaoyun",
""
],
[
"Coupland",
"Sarah E.",
""
],
[
"Zheng",
"Yalin",
""
]
] |
new_dataset
| 0.999247 |
2203.12111
|
Alexander Neuwirth
|
Alex Moran, Bart Gebka, Joshua Goldshteyn, Autumn Beyer, Nathan
Johnson, and Alexander Neuwirth
|
Muscle Vision: Real Time Keypoint Based Pose Classification of Physical
Exercises
|
Published in MICS 2022
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in machine learning technology have enabled highly portable
and performant models for many common tasks, especially in image recognition.
One emerging field, 3D human pose recognition extrapolated from video, has now
advanced to the point of enabling real-time software applications with robust
enough output to support downstream machine learning tasks. In this work we
propose a new machine learning pipeline and web interface that performs human
pose recognition on a live video feed to detect when common exercises are
performed and classify them accordingly. We present a model interface capable
of webcam input with live display of classification results. Our main
contributions include a keypoint and time series based lightweight approach for
classifying a selected set of fitness exercises and a web-based software
application for obtaining and visualizing the results in real time.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 00:55:07 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Moran",
"Alex",
""
],
[
"Gebka",
"Bart",
""
],
[
"Goldshteyn",
"Joshua",
""
],
[
"Beyer",
"Autumn",
""
],
[
"Johnson",
"Nathan",
""
],
[
"Neuwirth",
"Alexander",
""
]
] |
new_dataset
| 0.997459 |
2203.12186
|
Nathan Young
|
Nathan Young, Qiming Bao, Joshua Bensemann, Michael Witbrock
|
AbductionRules: Training Transformers to Explain Unexpected Inputs
|
Findings of ACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Transformers have recently been shown to be capable of reliably performing
logical reasoning over facts and rules expressed in natural language, but
abductive reasoning - inference to the best explanation of an unexpected
observation - has been underexplored despite significant applications to
scientific discovery, common-sense reasoning, and model interpretability.
We present AbductionRules, a group of natural language datasets designed to
train and test generalisable abduction over natural-language knowledge bases.
We use these datasets to finetune pretrained Transformers and discuss their
performance, finding that our models learned generalisable abductive techniques
but also learned to exploit the structure of our data. Finally, we discuss the
viability of this approach to abductive reasoning and ways in which it may be
improved in future work.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 04:18:30 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Young",
"Nathan",
""
],
[
"Bao",
"Qiming",
""
],
[
"Bensemann",
"Joshua",
""
],
[
"Witbrock",
"Michael",
""
]
] |
new_dataset
| 0.998875 |
2203.12207
|
Shirshendu Das
|
Jaspinder Kaur, Shirshendu Das
|
TPPD: Targeted Pseudo Partitioning based Defence for Cross-Core Covert
Channel Attacks
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Contemporary computing employs cache hierarchy to fill the speed gap between
processors and main memories. In order to optimise system performance, Last
Level Caches (LLC) are shared among all the cores. Cache sharing has made them
an attractive surface for cross-core timing channel attacks. In these attacks,
an attacker running on another core can exploit the access timing of the victim
process to infiltrate the secret information. One such attack is called
cross-core Covert Channel Attack (CCA). Timely detection and then prevention of
cross-core CCA is critical for maintaining the integrity and security of users,
especially in a shared computing environment. In this work, we have proposed an
efficient cross-core CCA mitigation technique. We propose a way-wise cache
partitioning on targeted sets, only for the processes suspected to be
attackers. In this way, the performance impact on the entire LLC is minimised,
and benign applications can utilise the LLC to its full capacity. We have used
a cycle-accurate simulator (gem5) to analyse the performance of the proposed
method and its security effectiveness. It has been successful in abolishing the
cross-core covert timing channel attack with no significant performance impact
on benign applications. It causes 23% fewer cache misses in comparison to
existing partitioning based solutions while requiring 0.26% storage overhead.
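The defence's key mechanism — way-wise partitioning applied only on targeted sets and only for suspected attacker processes — can be expressed as a small replacement-policy filter. The Python sketch below is a behavioural model with hypothetical parameters, not the gem5 implementation.
```python
class TargetedWayPartition:
    """Behavioural sketch of targeted pseudo-partitioning (TPPD-like)."""

    def __init__(self, n_ways=16, reserved_ways=2):
        self.n_ways = n_ways
        self.reserved = set(range(reserved_ways))  # ways kept for suspects
        self.targeted_sets = set()                 # sets under suspicion
        self.suspects = set()                      # suspected attacker cores

    def allowed_ways(self, set_idx, core):
        if set_idx not in self.targeted_sets:
            return set(range(self.n_ways))          # full LLC capacity
        if core in self.suspects:
            return set(self.reserved)                # confined to few ways
        return set(range(self.n_ways)) - self.reserved

part = TargetedWayPartition()
part.targeted_sets.add(42)
part.suspects.add(3)
print(sorted(part.allowed_ways(42, core=3)))   # suspect: reserved ways only
print(len(part.allowed_ways(42, core=0)))      # benign core keeps the rest
```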
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 05:49:51 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Kaur",
"Jaspinder",
""
],
[
"Das",
"Shirshendu",
""
]
] |
new_dataset
| 0.998795 |
2203.12301
|
Jiacheng Han
|
Jun Xie, Jiacheng Han, Dezhen Qi, Feng Chen, Kaer Huang, Jianwei Shuai
|
Lane detection with Position Embedding
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Recently, lane detection has made great progress in autonomous driving. RESA
(REcurrent Feature-Shift Aggregator) is based on image segmentation. It
presents a novel module to enrich lane features after preliminary feature
extraction with an ordinary CNN. On the Tusimple dataset, the scenes are not
overly complicated and the lanes exhibit prominent spatial features. On the
basis of RESA, we introduce position embedding to enhance these spatial
features. The experimental results show that this method achieves a best
accuracy of 96.93% on the Tusimple dataset.
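The abstract does not spell out the embedding, so the sketch below shows one common choice: a fixed 2D sinusoidal position embedding added channel-wise to a CNN feature map (channel count divisible by 4). Treat it as an assumption about how "position embedding" could be wired in, not as the authors' exact design.
```python
import math
import torch

def sinusoid(n, d):
    """Standard 1D sinusoidal table of shape (n, d); d must be even."""
    pos = torch.arange(n, dtype=torch.float32).unsqueeze(1)
    i = torch.arange(d // 2, dtype=torch.float32)
    ang = pos * torch.exp(-math.log(10000.0) * 2 * i / d)
    return torch.cat([torch.sin(ang), torch.cos(ang)], dim=1)

def pos_embed_2d(c, h, w):
    """(c, h, w) embedding: half the channels encode rows, half columns."""
    pe_h = sinusoid(h, c // 2).unsqueeze(1).expand(h, w, c // 2)
    pe_w = sinusoid(w, c // 2).unsqueeze(0).expand(h, w, c // 2)
    return torch.cat([pe_h, pe_w], dim=2).permute(2, 0, 1)

feat = torch.randn(1, 64, 36, 100)         # CNN lane features (N, C, H, W)
feat = feat + pos_embed_2d(64, 36, 100)    # broadcast over the batch
print(feat.shape)
```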
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 09:48:59 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Xie",
"Jun",
""
],
[
"Han",
"Jiacheng",
""
],
[
"Qi",
"Dezhen",
""
],
[
"Chen",
"Feng",
""
],
[
"Huang",
"Kaer",
""
],
[
"Shuai",
"Jianwei",
""
]
] |
new_dataset
| 0.970743 |
2203.12304
|
Shang-Fu Chen
|
Shang-Fu Chen, Yu-Min Liu, Chia-Ching Lin, Trista Pei-Chun Chen,
Yu-Chiang Frank Wang
|
Domain-Generalized Textured Surface Anomaly Detection
|
Accepted by IEEE International Conference on Multimedia and Expo
(ICME) 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anomaly detection aims to identify abnormal data that deviates from the
normal ones, while typically requiring a sufficient amount of normal data to
train the model for performing this task. Despite the success of recent anomaly
detection methods, performing anomaly detection in an unseen domain remains a
challenging task. In this paper, we address the task of domain-generalized
textured surface anomaly detection. By observing normal and abnormal surface
data across multiple source domains, our model is expected to be generalized to
an unseen textured surface of interest, in which only a small number of normal
data can be observed during testing. Although with only image-level labels
observed in the training data, our patch-based meta-learning model exhibits
promising generalization ability: not only can it generalize to unseen image
domains, but it can also localize abnormal regions in the query image. Our
experiments verify that our model performs favorably against state-of-the-art
anomaly detection and domain generalization approaches in various settings.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 10:01:35 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Chen",
"Shang-Fu",
""
],
[
"Liu",
"Yu-Min",
""
],
[
"Lin",
"Chia-Ching",
""
],
[
"Chen",
"Trista Pei-Chun",
""
],
[
"Wang",
"Yu-Chiang Frank",
""
]
] |
new_dataset
| 0.988251 |
2203.12321
|
Shijie Lin
|
Shijie Lin and Yinqiang Zhang and Lei Yu and Bin Zhou and Xiaowei Luo
and Jia Pan
|
Autofocus for Event Cameras
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Focus control (FC) is crucial for cameras to capture sharp images in
challenging real-world scenarios. The autofocus (AF) facilitates the FC by
automatically adjusting the focus settings. However, due to the lack of
effective AF methods for the recently introduced event cameras, their FC still
relies on naive AF like manual focus adjustments, leading to poor adaptation in
challenging real-world conditions. In particular, the inherent differences
between event and frame data in terms of sensing modality, noise, temporal
resolutions, etc., bring many challenges in designing an effective AF method
for event cameras. To address these challenges, we develop a novel event-based
autofocus framework consisting of an event-specific focus measure called event
rate (ER) and a robust search strategy called event-based golden search (EGS).
To verify the performance of our method, we have collected an event-based
autofocus dataset (EAD) containing well-synchronized frames, events, and focal
positions in a wide variety of challenging scenes with severe lighting and
motion conditions. The experiments on this dataset and additional real-world
scenarios demonstrated the superiority of our method over state-of-the-art
approaches in terms of efficiency and accuracy.
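The two building blocks named above — an event-rate focus measure and an event-based golden search — map naturally onto the classic golden-section search, maximizing event rate over focal positions. The sketch below assumes a hypothetical `event_rate_at(pos)` callback that would move the lens and return events per second; the paper's EGS contains event-specific refinements beyond this.
```python
import math

INV_PHI = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618

def golden_max(f, lo, hi, tol=1.0):
    """Golden-section search for the maximizer of a unimodal f on [lo, hi]."""
    a, b = lo, hi
    c, d = b - INV_PHI * (b - a), a + INV_PHI * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:                 # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - INV_PHI * (b - a)
            fc = f(c)
        else:                       # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + INV_PHI * (b - a)
            fd = f(d)
    return (a + b) / 2

def event_rate_at(pos):
    """Hypothetical: move lens to `pos`, count events in a short window."""
    return -(pos - 137.0) ** 2      # toy unimodal stand-in, peak at 137

print(golden_max(event_rate_at, 0.0, 255.0))
```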
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 10:46:33 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Lin",
"Shijie",
""
],
[
"Zhang",
"Yinqiang",
""
],
[
"Yu",
"Lei",
""
],
[
"Zhou",
"Bin",
""
],
[
"Luo",
"Xiaowei",
""
],
[
"Pan",
"Jia",
""
]
] |
new_dataset
| 0.995831 |
2203.12323
|
Deepal Tennakoon
|
Deepal Tennakoon, Yiding Hua, Vincent Gramoli
|
CollaChain: A BFT Collaborative Middleware for Decentralized
Applications
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The sharing economy is centralizing services, leading to misuses of the
Internet: growing damage from data hacks, global outages, and even the use of
data to manipulate its owners. Unfortunately, there is no decentralized web
where users can interact peer-to-peer in a secure way.
Blockchains incentivize participants to individually validate every transaction
and impose their block to the network. As a result, the validation of smart
contract requests is computationally intensive while the agreement on a unique
state does not make full use of the network. In this paper, we propose
CollaChain, a new Byzantine fault-tolerant blockchain compatible with the
largest ecosystem of DApps that leverages collaboration. First, the
participants executing smart contracts collaborate to validate the
transactions, hence halving the number of validations required by modern
blockchains (e.g., Ethereum, Libra). Second, the participants in the consensus
collaborate to combine their block proposals into a superblock, hence improving
throughput as the system grows to hundreds of nodes. In addition, CollaChain
offers its users the possibility to interact securely with each other
without downloading the blockchain, hence allowing interactions via mobile
devices. CollaChain is effective at outperforming the Concord and Quorum
blockchains and its throughput peaks at 4500 TPS under a Twitter DApp
(Decentralized Application) workload. Finally, we demonstrate CollaChain's
scalability by deploying it on 200 nodes located in 10 countries over 5
continents.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 10:58:50 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Tennakoon",
"Deepal",
""
],
[
"Hua",
"Yiding",
""
],
[
"Gramoli",
"Vincent",
""
]
] |
new_dataset
| 0.963518 |
2203.12350
|
Maya Aghaei
|
Guillem Martinez, Maya Aghaei, Martin Dijkstra, Bhalaji Nagarajan,
Femke Jaarsma, Jaap van de Loosdrecht, Petia Radeva, Klaas Dijkstra
|
Hyper-Spectral Imaging for Overlapping Plastic Flakes Segmentation
|
Submitted to ICIP2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given hyper-spectral imaging's unique potential to capture the polymer
characteristics of different materials, it is commonly used in sorting
procedures. In a practical plastic sorting scenario, multiple plastic flakes
may overlap and, depending on their characteristics, the overlap can be
reflected in their spectral signature. In this work, we use hyper-spectral
imaging for the segmentation of three types of plastic flakes and their
possible overlapping combinations. We propose an intuitive and simple
multi-label encoding approach, bitfield encoding, to account for the
overlapping regions. With our experiments, we show that the bitfield encoding
improves over the baseline single-label approach and we further demonstrate its
potential in predicting multiple labels for overlapping classes even when the
model is only trained with non-overlapping classes.
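Bitfield encoding of overlaps is simple to demonstrate: each base class owns one bit, and an overlapping region's label is the bitwise OR of the classes present. The sketch below uses hypothetical flake-type names; the paper's actual plastic types and class count may differ.
```python
# Hypothetical base classes: one bit per plastic flake type.
BITS = {"PET": 0b001, "PP": 0b010, "PS": 0b100}

def encode(present):
    """Set of class names -> single integer label for a pixel."""
    code = 0
    for name in present:
        code |= BITS[name]
    return code

def decode(code):
    """Integer label -> the set of classes present (handles overlaps)."""
    return {name for name, bit in BITS.items() if code & bit}

overlap = encode({"PET", "PP"})   # two flakes overlapping at this pixel
print(overlap, decode(overlap))   # 3 and {'PET', 'PP'} (set order may vary)
```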
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 12:02:10 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Martinez",
"Guillem",
""
],
[
"Aghaei",
"Maya",
""
],
[
"Dijkstra",
"Martin",
""
],
[
"Nagarajan",
"Bhalaji",
""
],
[
"Jaarsma",
"Femke",
""
],
[
"van de Loosdrecht",
"Jaap",
""
],
[
"Radeva",
"Petia",
""
],
[
"Dijkstra",
"Klaas",
""
]
] |
new_dataset
| 0.98803 |
2203.12352
|
Alexander Steen
|
Alexander Steen
|
An Extensible Logic Embedding Tool for Lightweight Non-Classical
Reasoning
|
10 pages, 1 figure, 1 table
| null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The logic embedding tool provides a procedural encoding for non-classical
reasoning problems into classical higher-order logic. It is extensible and can
support an increasing number of different non-classical logics as reasoning
targets. When used as a pre-processor or library for higher-order theorem
provers, the tool admits off-the-shelf automation for logics for which
otherwise few to no provers are currently available.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 12:08:51 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Steen",
"Alexander",
""
]
] |
new_dataset
| 0.978691 |
2203.12441
|
Ziqi Yuan
|
Huisheng Mao and Ziqi Yuan and Hua Xu and Wenmeng Yu and Yihe Liu and
Kai Gao
|
M-SENA: An Integrated Platform for Multimodal Sentiment Analysis
|
11 pages, 4 figures, to be published in ACL 2022 System Demonstration
Track
| null | null | null |
cs.AI cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
M-SENA is an open-sourced platform for Multimodal Sentiment Analysis. It aims
to facilitate advanced research by providing flexible toolkits, reliable
benchmarks, and intuitive demonstrations. The platform features a fully modular
video sentiment analysis framework consisting of data management, feature
extraction, model training, and result analysis modules. In this paper, we
first illustrate the overall architecture of the M-SENA platform and then
introduce features of the core modules. Reliable baseline results of different
modality features and MSA benchmarks are also reported. Moreover, we use model
evaluation and analysis tools provided by M-SENA to present intermediate
representation visualization, on-the-fly instance test, and generalization
ability test results. The source code of the platform is publicly available at
https://github.com/thuiar/M-SENA.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 14:28:08 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Mao",
"Huisheng",
""
],
[
"Yuan",
"Ziqi",
""
],
[
"Xu",
"Hua",
""
],
[
"Yu",
"Wenmeng",
""
],
[
"Liu",
"Yihe",
""
],
[
"Gao",
"Kai",
""
]
] |
new_dataset
| 0.999019 |
2203.12553
|
Yuanzhe Jin
|
Yuanzhe Jin, Xiangguo Liu, Qi Zhu
|
DSRC & C-V2X Comparison for Connected and Automated Vehicles in
Different Traffic Scenarios
| null | null | null | null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research efforts have been devoted to making connected and automated vehicles
(CAVs) faster in different traffic scenarios. By using the C-V2X or DSRC
communication protocol, CAVs can work more effectively. In this paper, we
compare these two communication protocols on CAVs in three different traffic
scenarios including ramp merging, intersection, and platoon brake. Our results
show that there is a trade-off between communication range and interval when
leveraging C-V2X or DSRC for CAVs. These results can help support further
application designs in which CAVs autonomously choose communication protocols
in different traffic scenarios.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 17:12:14 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Jin",
"Yuanzhe",
""
],
[
"Liu",
"Xiangguo",
""
],
[
"Zhu",
"Qi",
""
]
] |
new_dataset
| 0.98318 |
2203.12560
|
Aysim Toker
|
Aysim Toker, Lukas Kondmann, Mark Weber, Marvin Eisenberger, Andr\'es
Camero, Jingliang Hu, Ariadna Pregel Hoderlein, \c{C}a\u{g}lar \c{S}enaras,
Timothy Davis, Daniel Cremers, Giovanni Marchisio, Xiao Xiang Zhu, Laura
Leal-Taix\'e
|
DynamicEarthNet: Daily Multi-Spectral Satellite Dataset for Semantic
Change Segmentation
|
Accepted to CVPR 2022, evaluation webpage:
https://codalab.lisn.upsaclay.fr/competitions/2882
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Earth observation is a fundamental tool for monitoring the evolution of land
use in specific areas of interest. Observing and precisely defining change, in
this context, requires both time-series data and pixel-wise segmentations. To
that end, we propose the DynamicEarthNet dataset that consists of daily,
multi-spectral satellite observations of 75 selected areas of interest
distributed over the globe with imagery from Planet Labs. These observations
are paired with pixel-wise monthly semantic segmentation labels of 7 land use
and land cover (LULC) classes. DynamicEarthNet is the first dataset that
provides this unique combination of daily measurements and high-quality labels.
In our experiments, we compare several established baselines that either
utilize the daily observations as additional training data (semi-supervised
learning) or multiple observations at once (spatio-temporal learning) as a
point of reference for future research. Finally, we propose a new evaluation
metric SCS that addresses the specific challenges associated with time-series
semantic change segmentation. The data is available at:
https://mediatum.ub.tum.de/1650201.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 17:22:22 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Toker",
"Aysim",
""
],
[
"Kondmann",
"Lukas",
""
],
[
"Weber",
"Mark",
""
],
[
"Eisenberger",
"Marvin",
""
],
[
"Camero",
"Andrés",
""
],
[
"Hu",
"Jingliang",
""
],
[
"Hoderlein",
"Ariadna Pregel",
""
],
[
"Şenaras",
"Çağlar",
""
],
[
"Davis",
"Timothy",
""
],
[
"Cremers",
"Daniel",
""
],
[
"Marchisio",
"Giovanni",
""
],
[
"Zhu",
"Xiao Xiang",
""
],
[
"Leal-Taixé",
"Laura",
""
]
] |
new_dataset
| 0.999783 |
2203.12573
|
Jin Yang
|
Jin Yang and Yue Yin and Alexander K. Landauer and Selda Buyuktozturk
and Jing Zhang and Luke Summey and Alexander McGhee and Matt K. Fu and John
O. Dabiri and Christian Franck
|
SerialTrack: ScalE and Rotation Invariant Augmented Lagrangian Particle
Tracking
| null | null | null | null |
cs.RO physics.data-an q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new particle tracking algorithm to accurately resolve large
deformation and rotational motion fields, which takes advantage of both local
and global particle tracking algorithms. We call this method the ScalE and
Rotation Invariant Augmented Lagrangian Particle Tracking (SerialTrack). This
method builds an iterative scale and rotation invariant topology-based feature
for each particle within a multi-scale tracking algorithm. The global kinematic
compatibility condition is applied as a global augmented Lagrangian constraint
to enhance the tracking accuracy. An open source software package implementing
this numerical approach to track both 2D and 3D, incremental and cumulative
deformation fields is provided.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 17:33:20 GMT"
}
] | 2022-03-24T00:00:00 |
[
[
"Yang",
"Jin",
""
],
[
"Yin",
"Yue",
""
],
[
"Landauer",
"Alexander K.",
""
],
[
"Buyuktozturk",
"Selda",
""
],
[
"Zhang",
"Jing",
""
],
[
"Summey",
"Luke",
""
],
[
"McGhee",
"Alexander",
""
],
[
"Fu",
"Matt K.",
""
],
[
"Dabiri",
"John O.",
""
],
[
"Franck",
"Christian",
""
]
] |
new_dataset
| 0.996965 |
2008.07912
|
Andrew Cropper
|
Andrew Cropper and Sebastijan Duman\v{c}i\'c
|
Inductive logic programming at 30: a new introduction
|
Preprint of a paper accepted for JAIR
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inductive logic programming (ILP) is a form of machine learning. The goal of
ILP is to induce a hypothesis (a set of logical rules) that generalises
training examples. As ILP turns 30, we provide a new introduction to the field.
We introduce the necessary logical notation and the main learning settings;
describe the building blocks of an ILP system; compare several systems on
several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol);
highlight key application areas; and, finally, summarise current limitations
and directions for future research.
|
[
{
"version": "v1",
"created": "Tue, 18 Aug 2020 13:09:25 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Oct 2020 12:52:09 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Oct 2020 16:35:41 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Dec 2021 15:46:50 GMT"
},
{
"version": "v5",
"created": "Tue, 22 Mar 2022 10:44:16 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Cropper",
"Andrew",
""
],
[
"Dumančić",
"Sebastijan",
""
]
] |
new_dataset
| 0.990229 |
2010.10805
|
Jianlei Chi
|
Jianlei Chi, Yu Qu, Ting Liu, Qinghua Zheng, Heng Yin
|
SeqTrans: Automatic Vulnerability Fix via Sequence to Sequence Learning
|
22 pages, 20 figures, 7 tables
| null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software vulnerabilities are now reported at an unprecedented speed due to
the recent development of automated vulnerability hunting tools. However,
fixing vulnerabilities still mainly depends on programmers' manual efforts.
Developers need to deeply understand the vulnerability and try to affect the
system's functions as little as possible.
In this paper, with the advancement of Neural Machine Translation (NMT)
techniques, we provide a novel approach called SeqTrans to exploit historical
vulnerability fixes to provide suggestions and automatically fix the source
code. To capture the contextual information around the vulnerable code, we
propose to leverage data flow dependencies to construct code sequences and feed
them into the state-of-the-art transformer model. The fine-tuning strategy has
been introduced to overcome the small sample size problem. We evaluate SeqTrans
on a dataset containing 1,282 commits that fix 624 vulnerabilities in 205 Java
projects. Results show that the accuracy of SeqTrans outperforms the latest
techniques and achieves 23.3% in statement-level fix and 25.3% in CVE-level
fix. In the meantime, we look deeper into the results and observe that the NMT
model performs very well on certain kinds of vulnerabilities like CWE-287
(Improper Authentication) and CWE-863 (Incorrect Authorization).
|
[
{
"version": "v1",
"created": "Wed, 21 Oct 2020 07:49:08 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Jun 2021 06:17:30 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Mar 2022 12:45:39 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Chi",
"Jianlei",
""
],
[
"Qu",
"Yu",
""
],
[
"Liu",
"Ting",
""
],
[
"Zheng",
"Qinghua",
""
],
[
"Yin",
"Heng",
""
]
] |
new_dataset
| 0.996374 |
2012.04886
|
Weikang Wang
|
Jing Liu, Jiaxiang Wang, Weikang Wang and Yuting Su
|
DS-Net: Dynamic Spatiotemporal Network for Video Salient Object
Detection
|
The article has made some format changes
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As moving objects always draw more attention from human eyes, temporal motion
information is often exploited complementarily with spatial information to
detect salient objects in videos. Although efficient tools such as optical
flow have been proposed to extract temporal motion information, they often
encounter difficulties when used for saliency detection due to the movement of
the camera or the partial movement of salient objects. In this paper, we
investigate the complementary roles of spatial and temporal information and
propose a novel dynamic spatiotemporal network (DS-Net) for more effective
fusion of spatiotemporal information. We construct a symmetric two-bypass
network to explicitly extract spatial and temporal features. A dynamic weight
generator (DWG) is designed to automatically learn the reliability of
corresponding saliency branch. And a top-down cross attentive aggregation (CAA)
procedure is designed so as to facilitate dynamic complementary aggregation of
spatiotemporal features. Finally, the features are modified by spatial
attention with the guidance of coarse saliency map and then go through decoder
part for final saliency map. Experimental results on five benchmarks VOS,
DAVIS, FBMS, SegTrack-v2, and ViSal demonstrate that the proposed method
achieves superior performance than state-of-the-art algorithms. The source code
is available at https://github.com/TJUMMG/DS-Net.
|
[
{
"version": "v1",
"created": "Wed, 9 Dec 2020 06:42:30 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Sep 2021 03:24:23 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Mar 2022 07:40:38 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Liu",
"Jing",
""
],
[
"Wang",
"Jiaxiang",
""
],
[
"Wang",
"Weikang",
""
],
[
"Su",
"Yuting",
""
]
] |
new_dataset
| 0.98724 |
2102.08804
|
Carlton Shepherd
|
Carlton Shepherd, Konstantinos Markantonakis, Georges-Axel Jaloyan
|
LIRA-V: Lightweight Remote Attestation for Constrained RISC-V Devices
|
Published in the proceedings of the IEEE Security and Privacy
Workshops, 2021
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents LIRA-V, a lightweight system for performing remote
attestation between constrained devices using the RISC-V architecture. We
propose using read-only memory and the RISC-V Physical Memory Protection (PMP)
primitive to build a trust anchor for remote attestation and secure channel
creation. Moreover, we show how LIRA-V can be used for trusted communication
between two devices using mutual attestation. We present the design,
implementation and evaluation of LIRA-V using an off-the-shelf RISC-V
microcontroller and present performance results to demonstrate its suitability.
To our knowledge, we present the first remote attestation mechanism suitable
for constrained RISC-V devices, with applications to cyber-physical systems and
Internet of Things (IoT) devices.
|
[
{
"version": "v1",
"created": "Wed, 17 Feb 2021 15:04:29 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Mar 2021 17:45:26 GMT"
},
{
"version": "v3",
"created": "Thu, 11 Mar 2021 14:20:16 GMT"
},
{
"version": "v4",
"created": "Tue, 22 Mar 2022 13:48:54 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Shepherd",
"Carlton",
""
],
[
"Markantonakis",
"Konstantinos",
""
],
[
"Jaloyan",
"Georges-Axel",
""
]
] |
new_dataset
| 0.993175 |
2105.04454
|
Carlton Shepherd
|
Carlton Shepherd, Konstantinos Markantonakis, Nico van Heijningen,
Driss Aboulkassimi, Cl\'ement Gaine, Thibaut Heckmann, David Naccache
|
Physical Fault Injection and Side-Channel Attacks on Mobile Devices: A
Comprehensive Analysis
| null |
Computers & Security. 111 (2021) 102471
|
10.1016/j.cose.2021.102471
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Today's mobile devices contain densely packaged system-on-chips (SoCs) with
multi-core, high-frequency CPUs and complex pipelines. In parallel,
sophisticated SoC-assisted security mechanisms have become commonplace for
protecting device data, such as trusted execution environments, full-disk and
file-based encryption. Both advancements have dramatically complicated the use
of conventional physical attacks, requiring the development of specialised
attacks. In this survey, we consolidate recent developments in physical fault
injections and side-channel attacks on modern mobile devices. In total, we
comprehensively survey over 50 fault injection and side-channel attack papers
published between 2009-2021. We evaluate the prevailing methods, compare
existing attacks using a common set of criteria, identify several challenges
and shortcomings, and suggest future directions of research.
|
[
{
"version": "v1",
"created": "Mon, 10 May 2021 15:37:09 GMT"
},
{
"version": "v2",
"created": "Wed, 12 May 2021 13:54:56 GMT"
},
{
"version": "v3",
"created": "Fri, 14 May 2021 12:36:30 GMT"
},
{
"version": "v4",
"created": "Mon, 9 Aug 2021 08:35:25 GMT"
},
{
"version": "v5",
"created": "Tue, 28 Sep 2021 12:03:37 GMT"
},
{
"version": "v6",
"created": "Tue, 22 Mar 2022 13:23:58 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Shepherd",
"Carlton",
""
],
[
"Markantonakis",
"Konstantinos",
""
],
[
"van Heijningen",
"Nico",
""
],
[
"Aboulkassimi",
"Driss",
""
],
[
"Gaine",
"Clément",
""
],
[
"Heckmann",
"Thibaut",
""
],
[
"Naccache",
"David",
""
]
] |
new_dataset
| 0.99977 |
2107.00396
|
Konstantin Bulatov
|
Konstantin Bulatov, Ekaterina Emelianova, Daniil Tropin, Natalya
Skoryukina, Yulia Chernyshova, Alexander Sheshkus, Sergey Usilin, Zuheng
Ming, Jean-Christophe Burie, Muhammad Muzzamil Luqman, Vladimir V. Arlazarov
|
MIDV-2020: A Comprehensive Benchmark Dataset for Identity Document
Analysis
| null |
Computer Optics, volume 46, issue 2, p. 252-270, 2022
|
10.18287/2412-6179-CO-1006
| null |
cs.CV cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identity documents recognition is an important sub-field of document
analysis, which deals with tasks of robust document detection, type
identification, text fields recognition, as well as identity fraud prevention
and document authenticity validation given photos, scans, or video frames of an
identity document capture. A significant amount of research has been published
on this topic in recent years; however, a chief difficulty for such research is
the scarcity of datasets, due to the subject matter being protected by security
requirements. A few datasets of identity documents which are available lack
diversity of document types, capturing conditions, or variability of document
field values. In addition, the published datasets were typically designed only
for a subset of document recognition problems, not for a complex identity
document analysis. In this paper, we present a dataset MIDV-2020 which consists
of 1000 video clips, 2000 scanned images, and 1000 photos of 1000 unique mock
identity documents, each with unique text field values and unique artificially
generated faces, with rich annotation. For the presented benchmark dataset
baselines are provided for such tasks as document location and identification,
text fields recognition, and face detection. With 72409 annotated images in
total, to the date of publication the proposed dataset is the largest publicly
available identity documents dataset with variable artificially generated data,
and we believe that it will prove invaluable for advancement of the field of
document analysis and recognition. The dataset is available for download at
ftp://smartengines.com/midv-2020 and http://l3i-share.univ-lr.fr .
|
[
{
"version": "v1",
"created": "Thu, 1 Jul 2021 12:14:17 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Bulatov",
"Konstantin",
""
],
[
"Emelianova",
"Ekaterina",
""
],
[
"Tropin",
"Daniil",
""
],
[
"Skoryukina",
"Natalya",
""
],
[
"Chernyshova",
"Yulia",
""
],
[
"Sheshkus",
"Alexander",
""
],
[
"Usilin",
"Sergey",
""
],
[
"Ming",
"Zuheng",
""
],
[
"Burie",
"Jean-Christophe",
""
],
[
"Luqman",
"Muhammad Muzzamil",
""
],
[
"Arlazarov",
"Vladimir V.",
""
]
] |
new_dataset
| 0.999843 |
2108.02281
|
Guilherme Rotth Zibetti
|
Guilherme Rotth Zibetti and Juliano Araujo Wickboldt and Edison
Pignaton de Freitas
|
Context-Aware Environment Monitoring to Support LPWAN-based Battlefield
Applications
| null |
Computer Communications 189C (2022) pp. 18-27
|
10.1016/j.comcom.2022.02.020
|
189C
|
cs.NI cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The use of IoT-related technologies is growing in several areas.
Environmental monitoring, logistics, and smart cities are examples of
applications that benefit from advances in IoT. In the military context, IoT
applications can support the decision-making process by delivering information
collected directly from the battlefield to Command, Control, Communications,
Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) systems.
Taking advantage of the IoT network installed on the battlefield, using the
data collected by the IoT nodes is a way to improve the resiliency and
increase the survivability of networks, as well as to optimize the use of
available resources. Towards improving the communication network present on the
battlefield, this work presents a context-aware environmental monitoring system
that uses real-time battlefield information to increase military networks'
resilience and survivability. The proposed approach is validated by a
proof-of-concept experiment. The obtained results show that the implementation
of this system can improve the communication process even when the network is
exposed to unfavorable climatic factors.
|
[
{
"version": "v1",
"created": "Wed, 4 Aug 2021 20:41:30 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2022 11:57:17 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Zibetti",
"Guilherme Rotth",
""
],
[
"Wickboldt",
"Juliano Araujo",
""
],
[
"de Freitas",
"Edison Pignaton",
""
]
] |
new_dataset
| 0.997128 |
2203.04041
|
Qidong Huang
|
Qidong Huang and Xiaoyi Dong and Dongdong Chen and Hang Zhou and
Weiming Zhang and Nenghai Yu
|
Shape-invariant 3D Adversarial Point Clouds
|
Accepted at CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial strength and invisibility are two fundamental but conflicting
characteristics of adversarial perturbations. Previous adversarial attacks on
3D point cloud recognition have often been criticized for their noticeable
point outliers, since they involve only an "implicit constraint" like a global
distance loss in the time-consuming optimization to limit the generated noise.
While point cloud
is a highly structured data format, it is hard to constrain its perturbation
with a simple loss or metric properly. In this paper, we propose a novel
Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility
of point perturbations. This map reveals the vulnerability of point cloud
recognition models when encountering shape-invariant adversarial noises. These
noises are designed along the shape surface with an "explicit constraint"
instead of extra distance loss. Specifically, we first apply a reversible
coordinate transformation on each point of the point cloud input, to reduce one
degree of point freedom and limit its movement on the tangent plane. Then we
calculate the best attacking direction with the gradients of the transformed
point cloud obtained on the white-box model. Finally we assign each point with
a non-negative score to construct the sensitivity map, which benefits both
white-box adversarial invisibility and black-box query-efficiency extended in
our work. Extensive evaluations prove that our method can achieve the superior
performance on various point cloud recognition models, with its satisfying
adversarial imperceptibility and strong resistance to different point cloud
defense settings. Our code is available at: https://github.com/shikiw/SI-Adv.
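The "reversible coordinate transformation" step amounts to removing the gradient component along each point's surface normal, so perturbations slide along the tangent plane. The numpy sketch below shows that projection in isolation, with assumed per-point normals; the sensitivity-map construction and the attack loop are not reproduced.
```python
import numpy as np

def project_to_tangent(grad, normals, eps=1e-12):
    """Project per-point gradients onto each point's tangent plane.

    grad, normals: (n, 3) arrays. Removing the normal component keeps
    the perturbation on the shape surface (to first order).
    """
    n = normals / (np.linalg.norm(normals, axis=1, keepdims=True) + eps)
    return grad - np.sum(grad * n, axis=1, keepdims=True) * n

rng = np.random.default_rng(0)
g = rng.normal(size=(4, 3))
nrm = rng.normal(size=(4, 3))
t = project_to_tangent(g, nrm)
print(np.abs(np.sum(t * nrm, axis=1)).max())  # ~0: tangential by construction
```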
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2022 12:21:35 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2022 14:43:14 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Huang",
"Qidong",
""
],
[
"Dong",
"Xiaoyi",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Zhou",
"Hang",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Yu",
"Nenghai",
""
]
] |
new_dataset
| 0.995854 |
2203.10562
|
Matheus Souza
|
Matheus Souza, Wolfgang Heidrich
|
CRISPnet: Color Rendition ISP Net
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Image signal processors (ISPs) are historically grown legacy software systems
for reconstructing color images from noisy raw sensor measurements. They are
usually composed of many heuristic blocks for denoising, demosaicking, and
color restoration. Color reproduction in this context is of particular
importance, since the raw colors are often severely distorted, and each smart
phone manufacturer has developed their own characteristic heuristics for
improving the color rendition, for example of skin tones and other visually
important colors.
In recent years there has been strong interest in replacing the historically
grown ISP systems with deep learned pipelines. Much progress has been made in
approximating legacy ISPs with such learned models. However, so far the focus
of these efforts has been on reproducing the structural features of the images,
with less attention paid to color rendition.
Here we present CRISPnet, the first learned ISP model to specifically target
color rendition accuracy relative to a complex, legacy smart phone ISP. We
achieve this by utilizing both image metadata (like a legacy ISP would), as
well as by learning simple global semantics based on image classification --
similar to what a legacy ISP does to determine the scene type. We also
contribute a new ISP image dataset consisting of both high dynamic range
monitor data, as well as real-world data, both captured with an actual cell
phone ISP pipeline under a variety of lighting conditions, exposure times, and
gain settings.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 14:28:38 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2022 09:34:04 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Souza",
"Matheus",
""
],
[
"Heidrich",
"Wolfgang",
""
]
] |
new_dataset
| 0.993963 |
2203.11216
|
Stephen Clark
|
Razin A. Shaikh, Sara Sabrina Zemljic, Sean Tull and Stephen Clark
|
The Conceptual VAE
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this report we present a new model of concepts, based on the framework of
variational autoencoders, which is designed to have attractive properties such
as factored conceptual domains, and at the same time be learnable from data.
The model is inspired by, and closely related to, the Beta-VAE model of
concepts, but is designed to be more closely connected with language, so that
the names of concepts form part of the graphical model. We provide evidence
that our model -- which we call the Conceptual VAE -- is able to learn
interpretable conceptual representations from simple images of coloured shapes
together with the corresponding concept labels. We also show how the model can
be used as a concept classifier, and how it can be adapted to learn from fewer
labels per instance. Finally, we formally relate our model to Gardenfors'
theory of conceptual spaces, showing how the Gaussians we use to represent
concepts can be formalised in terms of "fuzzy concepts" in such a space.
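As a hedged illustration of the last point, the sketch below scores an
instance's latent code against diagonal-Gaussian concepts and classifies by
log-density; the concept names, dimensions, and parameters are illustrative
assumptions, not the paper's trained model.

```python
# Sketch: concepts as diagonal Gaussians in a latent space; classify an
# instance by log-density, in the spirit of "fuzzy concepts" in a
# conceptual space. All values below are toy assumptions.
import numpy as np

def concept_log_density(z, mu, log_var):
    # Diagonal-Gaussian log density of latent code z under one concept.
    var = np.exp(log_var)
    return -0.5 * np.sum(log_var + (z - mu) ** 2 / var + np.log(2 * np.pi))

concepts = {
    "red":  (np.array([1.0, 0.0]), np.array([-2.0, -2.0])),
    "blue": (np.array([-1.0, 0.0]), np.array([-2.0, -2.0])),
}
z = np.array([0.8, 0.1])  # latent code of some instance
scores = {name: concept_log_density(z, mu, lv)
          for name, (mu, lv) in concepts.items()}
print(max(scores, key=scores.get))  # -> "red"
```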
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 17:27:28 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Shaikh",
"Razin A.",
""
],
[
"Zemljic",
"Sara Sabrina",
""
],
[
"Tull",
"Sean",
""
],
[
"Clark",
"Stephen",
""
]
] |
new_dataset
| 0.987962 |
2203.11265
|
Paolo Pistone
|
Melissa Antonelli, Ugo Dal Lago, Paolo Pistone
|
Curry and Howard Meet Borel
| null | null | null | null |
cs.LO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We show that an intuitionistic version of counting propositional logic
corresponds, in the sense of Curry and Howard, to an expressive type system for
the probabilistic event lambda-calculus, a vehicle calculus in which both
call-by-name and call-by-value evaluation of discrete randomized functional
programs can be simulated. Remarkably, proofs (respectively, types) do not only
guarantee that validity (respectively, termination) holds, but also reveal the
underlying probability. We finally show that by endowing the type system with
an intersection operator, one obtains a system precisely capturing the
probabilistic behavior of lambda-terms.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 18:48:49 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Antonelli",
"Melissa",
""
],
[
"Lago",
"Ugo Dal",
""
],
[
"Pistone",
"Paolo",
""
]
] |
new_dataset
| 0.987946 |
2203.11274
|
Isabella Huang
|
Isabella Huang, Yashraj Narang, Clemens Eppner, Balakumar
Sundaralingam, Miles Macklin, Ruzena Bajcsy, Tucker Hermans, Dieter Fox
|
DefGraspSim: Physics-based simulation of grasp outcomes for 3D
deformable objects
|
For associated web page, see
\url{https://sites.google.com/nvidia.com/defgraspsim}. To be published in the
IEEE Robotics and Automation Letters (RA-L) special issue on Robotic Handling
of Deformable Objects, 2022. arXiv admin note: substantial text overlap with
arXiv:2107.05778
| null |
10.1109/LRA.2022.3158725
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic grasping of 3D deformable objects (e.g., fruits/vegetables, internal
organs, bottles/boxes) is critical for real-world applications such as food
processing, robotic surgery, and household automation. However, developing
grasp strategies for such objects is uniquely challenging. Unlike rigid
objects, deformable objects have infinite degrees of freedom and require field
quantities (e.g., deformation, stress) to fully define their state. As these
quantities are not easily accessible in the real world, we propose studying
interaction with deformable objects through physics-based simulation. As such,
we simulate grasps on a wide range of 3D deformable objects using a GPU-based
implementation of the corotational finite element method (FEM). To facilitate
future research, we open-source our simulated dataset (34 objects, 1e5 Pa
elasticity range, 6800 grasp evaluations, 1.1M grasp measurements), as well as
a code repository that allows researchers to run our full FEM-based grasp
evaluation pipeline on arbitrary 3D object models of their choice. Finally, we
demonstrate good correspondence between grasp outcomes on simulated objects and
their real counterparts.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 18:53:03 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Huang",
"Isabella",
""
],
[
"Narang",
"Yashraj",
""
],
[
"Eppner",
"Clemens",
""
],
[
"Sundaralingam",
"Balakumar",
""
],
[
"Macklin",
"Miles",
""
],
[
"Bajcsy",
"Ruzena",
""
],
[
"Hermans",
"Tucker",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.999587 |
2203.11341
|
\v{S}t\v{e}p\'an Holub
|
\v{S}t\v{e}p\'an Holub and Martin Ra\v{s}ka and \v{S}t\v{e}p\'an
Starosta
|
Binary codes that do not preserve primitivity
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A code $X$ is not primitivity preserving if there is a primitive list
${\mathbf w} \in {\tt lists} X$ whose concatenation is imprimitive. We
formalize a full characterization of such codes in the binary case in the proof
assistant Isabelle/HOL. Part of the formalization, interesting on its own, is a
description of $\{x,y\}$-interpretations of the square $xx$ if $|y| \leq |x|$.
We also provide a formalized parametric solution of the related equation
$x^jy^k = z^\ell$.
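For readers outside combinatorics on words, the short Python check below
illustrates the central property: a word is primitive iff it is not a proper
power, which is equivalent to it occurring in its own square only at positions
0 and |w|. The function name is illustrative.

```python
# Sketch: standard primitivity test for a word.
def is_primitive(w: str) -> bool:
    # w is primitive iff w occurs in w + w only at positions 0 and len(w).
    return (w + w).find(w, 1) == len(w)

# "ababa" is primitive, while "abab" = ("ab")^2 is an (imprimitive) power.
assert is_primitive("ababa")
assert not is_primitive("abab")
```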
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 21:13:18 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Holub",
"Štěpán",
""
],
[
"Raška",
"Martin",
""
],
[
"Starosta",
"Štěpán",
""
]
] |
new_dataset
| 0.996575 |
2203.11420
|
Munindar Singh
|
Munindar P. Singh
|
Consent as a Foundation for Responsible Autonomy
|
6 pages; 1 table Proceedings of the 36th AAAI Conference on
Artificial Intelligence (AAAI), Blue Sky Track
|
Proceedings of the 36th AAAI Conference on Artificial Intelligence
(AAAI), Blue Sky Track, 2022
| null | null |
cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper focuses on a dynamic aspect of responsible autonomy, namely, to
make intelligent agents be responsible at run time. That is, it considers
settings where decision making by agents impinges upon the outcomes perceived
by other agents. For an agent to act responsibly, it must accommodate the
desires and other attitudes of its users and, through other agents, of their
users.
The contribution of this paper is twofold. First, it provides a conceptual
analysis of consent, its benefits and misuses, and how understanding consent
can help achieve responsible autonomy. Second, it outlines challenges for AI
(in particular, for agents and multiagent systems) that merit investigation to
form a basis for modeling consent in multiagent systems and applying consent
to achieve responsible autonomy.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 02:25:27 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Singh",
"Munindar P.",
""
]
] |
new_dataset
| 0.993752 |
2203.11443
|
Ritesh Kumar
|
Siddharth Singh and Ritesh Kumar and Shyam Ratan and Sonal Sinha
|
Demo of the Linguistic Field Data Management and Analysis System -- LiFE
|
Accepted in the 19th International Conference on Natural Language
Processing (ICON-2021)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the proposed demo, we will present a new software - Linguistic Field Data
Management and Analysis System - LiFE (https://github.com/kmi-linguistics/life)
- an open-source, web-based linguistic data management and analysis application
that allows for systematic storage, management, sharing and usage of linguistic
data collected from the field. The application allows users to store lexical
items, sentences, paragraphs, audio-visual content with rich glossing /
annotation; generate interactive and print dictionaries; and also train and use
natural language processing tools and models for various purposes using this
data. Since it is a web-based application, it also allows for seamless
collaboration among multiple persons and sharing the data, models, etc with
each other.
The system uses the Python-based Flask framework and MongoDB in the backend
and HTML, CSS and Javascript at the frontend. The interface allows creation of
multiple projects that could be shared with the other users. At the backend,
the application stores the data in RDF format so as to allow its release as
Linked Data over the web using semantic web technologies - as of now it makes
use of the OntoLex-Lemon for storing the lexical data and Ligt for storing the
interlinear glossed text and then internally linking it to the other linked
lexicons and databases such as DBpedia and WordNet. Furthermore, it provides
support for training NLP systems using the scikit-learn and HuggingFace
Transformers libraries, as well as making use of any model trained using these
libraries - while the user interface itself provides limited options for tuning
the system, an externally-trained model could be easily incorporated within the
application; similarly the dataset itself could be easily exported into a
standard machine-readable format like JSON or CSV that could be consumed by
other programs and pipelines.
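As a hedged sketch of the export step mentioned above, the snippet below writes
annotated lexical items to machine-readable JSON and CSV; the record layout is
an assumption for illustration, not LiFE's actual schema.

```python
# Sketch: exporting lexical entries to JSON and CSV (assumed record layout).
import csv, json

lexicon = [
    {"lemma": "pani", "gloss": "water", "pos": "NOUN", "language": "hi"},
    {"lemma": "ghar", "gloss": "house", "pos": "NOUN", "language": "hi"},
]

with open("lexicon.json", "w", encoding="utf-8") as f:
    json.dump(lexicon, f, ensure_ascii=False, indent=2)

with open("lexicon.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["lemma", "gloss", "pos", "language"])
    writer.writeheader()
    writer.writerows(lexicon)
```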
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 03:34:10 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Singh",
"Siddharth",
""
],
[
"Kumar",
"Ritesh",
""
],
[
"Ratan",
"Shyam",
""
],
[
"Sinha",
"Sonal",
""
]
] |
new_dataset
| 0.967159 |
2203.11496
|
Xuyang Bai Mr.
|
Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu,
Chiew-Lan Tai
|
TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with
Transformers
|
Accepted to CVPR2022; Code at
\url{https://github.com/XuyangBai/TransFusion}; Based on this work, we
achieve the 1st place in the leaderboard of nuScenes tracking
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR and camera are two important sensors for 3D object detection in
autonomous driving. Despite the increasing popularity of sensor fusion in this
field, the robustness against inferior image conditions, e.g., bad illumination
and sensor misalignment, is under-explored. Existing fusion methods are easily
affected by such conditions, mainly due to a hard association of LiDAR points
and image pixels, established by calibration matrices. We propose TransFusion,
a robust solution to LiDAR-camera fusion with a soft-association mechanism to
handle inferior image conditions. Specifically, our TransFusion consists of
convolutional backbones and a detection head based on a transformer decoder.
The first layer of the decoder predicts initial bounding boxes from a LiDAR
point cloud using a sparse set of object queries, and its second decoder layer
adaptively fuses the object queries with useful image features, leveraging both
spatial and contextual relationships. The attention mechanism of the
transformer enables our model to adaptively determine where and what
information should be taken from the image, leading to a robust and effective
fusion strategy. We additionally design an image-guided query initialization
strategy to deal with objects that are difficult to detect in point clouds.
TransFusion achieves state-of-the-art performance on large-scale datasets. We
provide extensive experiments to demonstrate its robustness against degenerated
image quality and calibration errors. We also extend the proposed method to the
3D tracking task and achieve the 1st place in the leaderboard of nuScenes
tracking, showing its effectiveness and generalization capability.
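The soft-association mechanism can be read as plain scaled dot-product
cross-attention from object queries to image features, replacing a hard
calibration-based point-to-pixel mapping. The sketch below is a simplified
illustration with assumed shapes and names, not the released implementation.

```python
# Sketch: object queries (from the LiDAR branch) attend over image features.
import numpy as np

def cross_attention(queries, img_feats, d_k=64):
    # queries: (Q, d) object queries; img_feats: (P, d) flattened image map.
    scores = queries @ img_feats.T / np.sqrt(d_k)            # (Q, P)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)            # softmax over pixels
    return weights @ img_feats                               # fused (Q, d)

q = np.random.randn(200, 64)    # 200 object queries
f = np.random.randn(4096, 64)   # 64x64 feature map, flattened
fused = cross_attention(q, f)
```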
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 07:15:13 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Bai",
"Xuyang",
""
],
[
"Hu",
"Zeyu",
""
],
[
"Zhu",
"Xinge",
""
],
[
"Huang",
"Qingqiu",
""
],
[
"Chen",
"Yilun",
""
],
[
"Fu",
"Hongbo",
""
],
[
"Tai",
"Chiew-Lan",
""
]
] |
new_dataset
| 0.995151 |
2203.11540
|
Ahmet Caner Y\"uz\"ug\"uler
|
Ahmet Caner Y\"uz\"ug\"uler, Canberk S\"onmez, Mario Drumond, Yunho
Oh, Babak Falsafi, and Pascal Frossard
|
Scale-out Systolic Arrays
| null | null | null | null |
cs.AR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-pod systolic arrays are emerging as the architecture of choice in DNN
inference accelerators. Despite their potential, designing multi-pod systolic
arrays to maximize effective throughput/Watt (i.e., throughput/Watt adjusted
when accounting for array utilization) poses a unique set of challenges. In
this work, we study three key pillars in multi-pod systolic array designs,
namely array granularity, interconnect, and tiling. We identify optimal array
granularity across workloads and show that state-of-the-art commercial
accelerators use suboptimal array sizes for single-tenancy workloads. We then
evaluate the bandwidth/latency trade-offs in interconnects and show that
Butterfly networks offer a scalable topology for accelerators with a large
number of pods. Finally, we introduce a novel data tiling scheme with custom
partition size to maximize utilization in optimally sized pods. We propose
Scale-out Systolic Arrays (SOSA), a multi-pod inference accelerator for both
single- and multi-tenancy based on these three pillars. We show that SOSA
exhibits
scaling of up to 600 TeraOps/s in effective throughput for state-of-the-art DNN
inference workloads, and outperforms state-of-the-art multi-pod accelerators by
a factor of 1.5x.
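A hedged sketch of the "effective throughput" notion used above: peak
throughput scaled by array utilization under a simple full-tile/padded-tile
model. The utilization model is a simplifying assumption, not the paper's
analytical framework.

```python
# Sketch: effective throughput = peak throughput x utilization for a pod
# of size pod_rows x pod_cols processing an m x n operand tiled with padding.
import math

def effective_throughput(peak_tops, m, n, pod_rows, pod_cols):
    tiles = math.ceil(m / pod_rows) * math.ceil(n / pod_cols)
    utilization = (m * n) / (tiles * pod_rows * pod_cols)
    return peak_tops * utilization

# A 256x256 pod wastes capacity on a 300x300 layer; a 128x128 pod does better.
print(effective_throughput(100, 300, 300, 256, 256))  # ~34.3 effective TOPS
print(effective_throughput(100, 300, 300, 128, 128))  # ~61.0 effective TOPS
```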
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 08:46:11 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Yüzügüler",
"Ahmet Caner",
""
],
[
"Sönmez",
"Canberk",
""
],
[
"Drumond",
"Mario",
""
],
[
"Oh",
"Yunho",
""
],
[
"Falsafi",
"Babak",
""
],
[
"Frossard",
"Pascal",
""
]
] |
new_dataset
| 0.993859 |
2203.11567
|
Hongwei Zhu
|
Hongwei Zhu, Minjia Shi
|
The b-symbol weight distribution of irreducible cyclic codes and related
consequences
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The $b$-symbol read channel is motivated by the limitations of the reading
process in high density data storage systems. The corresponding new metric is a
generalization of the Hamming metric known as the $b$-symbol weight metric and
has become an important object in coding theory. In this paper, the general
$b$-symbol weight enumerator formula for irreducible cyclic codes is presented
by using the Gaussian period and a new invariant $\#U(b,j,N_1)$. The related
$b$-symbol weight hierarchies $\{d_1(\C),d_2(\C),\ldots,d_K(\C)\}$
($K=\dim(\C)$) are given for some cases. Optimal shortened codes obtained from
some classes of irreducible cyclic codes are given, where the shortening set
$\mathcal{T}$ is the complement of the $b$-symbol support of a codeword with
minimal $b$-symbol weight.
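For concreteness, a small Python sketch of the $b$-symbol weight of a codeword:
the number of non-zero length-$b$ windows. Cyclic reading of the windows is
assumed here as a convention.

```python
# Sketch: b-symbol weight of a codeword c, with cyclic length-b windows.
def b_symbol_weight(c, b):
    n = len(c)
    return sum(
        any(c[(i + j) % n] != 0 for j in range(b))
        for i in range(n)
    )

# Hamming weight is the b = 1 case.
c = [1, 0, 0, 1, 0]
print(b_symbol_weight(c, 1))  # 2 (Hamming weight)
print(b_symbol_weight(c, 2))  # 4 (windows at i = 0, 2, 3, 4 are non-zero)
```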
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 09:41:45 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Zhu",
"Hongwei",
""
],
[
"Shi",
"Minjia",
""
]
] |
new_dataset
| 0.964337 |
2203.11573
|
Yuanbo Hou
|
Yuanbo Hou, Zhaoyi Liu, Bo Kang, Yun Wang, Dick Botteldooren
|
CT-SAT: Contextual Transformer for Sequential Audio Tagging
|
Submitted to interspeech 2022
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sequential audio event tagging can provide not only the type information of
audio events, but also the order information between events and the number of
events that occur in an audio clip. Most previous works on audio event sequence
analysis rely on connectionist temporal classification (CTC). However, CTC's
conditional independence assumption prevents it from effectively learning
correlations between diverse audio events. This paper first attempts to
introduce Transformer into sequential audio tagging, since Transformers perform
well in sequence-related tasks. To better utilize contextual information of
audio event sequences, we draw on the idea of bidirectional recurrent neural
networks, and propose a contextual Transformer (cTransformer) with a
bidirectional decoder that could exploit the forward and backward information
of event sequences. Experiments on the real-life polyphonic audio dataset show
that, compared to CTC-based methods, the cTransformer can effectively combine
the fine-grained acoustic representations from the encoder and coarse-grained
audio event cues to exploit contextual information to successfully recognize
and predict audio event sequences.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 09:53:02 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Hou",
"Yuanbo",
""
],
[
"Liu",
"Zhaoyi",
""
],
[
"Kang",
"Bo",
""
],
[
"Wang",
"Yun",
""
],
[
"Botteldooren",
"Dick",
""
]
] |
new_dataset
| 0.998857 |
2203.11600
|
Pawel Sroka
|
Pawe{\l} Sroka, Pawe{\l} Kryszkiewicz, Micha{\l} Sybis, Adrian Kliks,
Kuldeep S. Gill, Alexander Wyglinski
|
Distributed Vehicular Dynamic Spectrum Access for Platooning
Environments
| null |
2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring),
2020
|
10.1109/VTC2020-Spring48590.2020.9128929
| null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a distributed Vehicular Dynamic Spectrum Access
(VDSA) framework for vehicles operating in platoon formations. Given the
potential for significant congestion in licensed frequency bands for vehicular
applications, such as the 5.9 GHz band, our approach proposes to offload part
of the intra-platoon data traffic to spectral white-spaces in order to enhance
vehicular connectivity in support of on-road operations. To enable VDSA, a
Bumblebee-inspired decision-making process, based on behavioral models of
animals, is employed to provide a means of distributed transmission band
selection. Simulation results show the distributed VDSA framework improves the
leader packet reception ratio by 5%, indicating its potential to increase the
reliability of intra-platoon communications.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 10:34:36 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Sroka",
"Paweł",
""
],
[
"Kryszkiewicz",
"Paweł",
""
],
[
"Sybis",
"Michał",
""
],
[
"Kliks",
"Adrian",
""
],
[
"Gill",
"Kuldeep S.",
""
],
[
"Wyglinski",
"Alexander",
""
]
] |
new_dataset
| 0.997343 |
2203.11764
|
Antonios Maronikolakis
|
Antonis Maronikolakis, Axel Wisiorek, Leah Nann, Haris Jabbar, Sahana
Udupa, Hinrich Schuetze
|
Listening to Affected Communities to Define Extreme Speech: Dataset and
Experiments
|
Accepted to ACL 2022 Findings
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Building on current work on multilingual hate speech (e.g., Ousidhoum et al.
(2019)) and hate speech reduction (e.g., Sap et al. (2020)), we present
XTREMESPEECH, a new hate speech dataset containing 20,297 social media passages
from Brazil, Germany, India and Kenya. The key novelty is that we directly
involve the affected communities in collecting and annotating the data - as
opposed to giving companies and governments control over defining and
combatting hate speech. This inclusive approach results in datasets more
representative of actually occurring online speech and is likely to facilitate
the removal of the social media content that marginalized communities view as
causing the most harm. Based on XTREMESPEECH, we establish novel tasks with
accompanying baselines, provide evidence that cross-country training is
generally not feasible due to cultural differences between countries and
perform an interpretability analysis of BERT's predictions.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 14:24:56 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Maronikolakis",
"Antonis",
""
],
[
"Wisiorek",
"Axel",
""
],
[
"Nann",
"Leah",
""
],
[
"Jabbar",
"Haris",
""
],
[
"Udupa",
"Sahana",
""
],
[
"Schuetze",
"Hinrich",
""
]
] |
new_dataset
| 0.952529 |
2203.11777
|
Feng Han
|
Feng Han, Xinyan Huang, Zenghao Wang, Jingang Yi, and Tao Liu
|
Autonomous Bikebot Control for Crossing Obstacles with Assistive Leg
Impulsive Actuation
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
As a single-track mobile platform, bikebot (i.e., bicycle-based robot) has
attractive navigation capability to pass through narrow, off-road terrain with
high speed and high energy efficiency. However, running across step-like
obstacles creates challenges for intrinsically unstable, underactuated
bikebots. This paper presents a novel autonomous bikebot control with assistive
leg actuation to navigate crossing obstacles. The proposed design integrates
the external/internal convertible-based control with leg-assisted impulse
control. The leg-terrain interaction generates assistive impulsive torques to
help maintain the navigation and balance capability when running across
obstacles. The control performance is analyzed and guaranteed. The experimental
results confirm that, under the proposed control design, the bikebot can
smoothly run across multiple step-like obstacles with heights of more than one
third of the wheel radius. Comparison results demonstrate superior performance
relative to velocity and steering control alone, without leg-assisted impulsive
actuation.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 14:41:42 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Han",
"Feng",
""
],
[
"Huang",
"Xinyan",
""
],
[
"Wang",
"Zenghao",
""
],
[
"Yi",
"Jingang",
""
],
[
"Liu",
"Tao",
""
]
] |
new_dataset
| 0.997338 |
2203.11914
|
Jayasree Sengupta
|
Jayasree Sengupta and Sushmita Ruj and Sipra Das Bit
|
SPRITE: A Scalable Privacy-Preserving and Verifiable Collaborative
Learning for Industrial IoT
|
Accepted for publication at The 22nd IEEE/ACM International Symposium
on Cluster, Cloud and Internet Computing (CCGrid 2022). 5 figures and 6
tables
| null | null | null |
cs.CR cs.DC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recently collaborative learning is widely applied to model sensitive data
generated in Industrial IoT (IIoT). It enables a large number of devices to
collectively train a global model by collaborating with a server while keeping
the datasets on their respective premises. However, existing approaches are
limited by high overheads and may also suffer from falsified aggregated results
returned by a malicious server. Hence, we propose a Scalable,
Privacy-preserving and veRIfiable collaboraTive lEarning (SPRITE) algorithm to
train linear and logistic regression models for IIoT. We aim to reduce the
burden on resource-constrained IIoT devices and the trust dependence on the
cloud by introducing fog as a middleware layer. SPRITE employs threshold secret
sharing to guarantee privacy preservation and robustness to IIoT device
dropout, and verifiable additive homomorphic secret sharing to ensure
verifiability during
model aggregation. We prove the security of SPRITE in an honest-but-curious
setting where the cloud is untrustworthy. We validate SPRITE to be scalable and
lightweight through theoretical overhead analysis and extensive testbed
experimentation on an IIoT use-case with two real-world industrial datasets.
For a large-scale industrial setup, SPRITE records 65% and 55% improved
performance over its competitor for linear and logistic regressions
respectively while reducing communication overhead for an IIoT device by 90%.
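To illustrate the secret-sharing building block, the sketch below shows plain
n-out-of-n additive sharing over a prime field; this is a simplification of
SPRITE's threshold and verifiable schemes, and the modulus is an assumption.

```python
# Sketch: additive secret sharing over a prime field.
import random

P = 2**61 - 1  # a large prime modulus (assumption)

def share(secret, n):
    # Split `secret` into n shares that sum to it modulo P.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

s = 123456789
parts = share(s, 5)
assert reconstruct(parts) == s  # any n-1 shares alone reveal nothing about s
```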
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 17:34:27 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Sengupta",
"Jayasree",
""
],
[
"Ruj",
"Sushmita",
""
],
[
"Bit",
"Sipra Das",
""
]
] |
new_dataset
| 0.998429 |
2203.11931
|
Agrim Gupta
|
Agrim Gupta, Linxi Fan, Surya Ganguli, Li Fei-Fei
|
MetaMorph: Learning Universal Controllers with Transformers
|
ICLR 2022
| null | null | null |
cs.LG cs.NE cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Multiple domains like vision, natural language, and audio are witnessing
tremendous progress by leveraging Transformers for large scale pre-training
followed by task specific fine tuning. In contrast, in robotics we primarily
train a single robot for a single task. However, modular robot systems now
allow for the flexible combination of general-purpose building blocks into task
optimized morphologies. However, given the exponentially large number of
possible robot morphologies, training a controller for each new design is
impractical. In this work, we propose MetaMorph, a Transformer based approach
to learn a universal controller over a modular robot design space. MetaMorph is
based on the insight that robot morphology is just another modality on which we
can condition the output of a Transformer. Through extensive experiments we
demonstrate that large scale pre-training on a variety of robot morphologies
results in policies with combinatorial generalization capabilities, including
zero shot generalization to unseen robot morphologies. We further demonstrate
that our pre-trained policy can be used for sample-efficient transfer to
completely new robot morphologies and tasks.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 17:58:31 GMT"
}
] | 2022-03-23T00:00:00 |
[
[
"Gupta",
"Agrim",
""
],
[
"Fan",
"Linxi",
""
],
[
"Ganguli",
"Surya",
""
],
[
"Fei-Fei",
"Li",
""
]
] |
new_dataset
| 0.996058 |
0904.1538
|
P{\aa}l Anders Floor Dr
|
P{\aa}l Anders Floor and Tor A. Ramstad
|
Shannon-Kotel'nikov Mappings for Analog Point-to-Point Communications
|
Revision of old manuscript to be submitted
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper an approach to joint source-channel coding (JSCC) named
Shannon-Kotel'nikov mappings (S-K mappings) is presented. S-K mappings are
continuous, or piecewise continuous direct source-to-channel mappings operating
directly on amplitude continuous and discrete time signals. Such mappings
include several existing JSCC schemes as special cases. Many existing
approaches to analog or hybrid discrete-analog JSCC provide both excellent
performance and robustness to variations in noise level, and do so at low
delay and relatively low complexity. However, a theory explaining their
performance and behaviour on a general basis, as well as guidelines on how to
construct close-to-optimal mappings in general, does not currently exist.
Therefore, such mappings are often found based on educated guesses inspired by
configurations known in advance to produce good solutions, combinations of
already existing mappings, numerical optimization, or machine learning
methods. The objective of this paper is to introduce a theoretical framework
for analysis of analog- or hybrid discrete analog S-K mappings. This framework
will enable calculation of distortion when applying such schemes on
point-to-point links, reveal more about their fundamental nature, and provide
guidelines on how they should be constructed in order to perform well at both
low and arbitrary complexity and delay. Such guidelines will likely help
constrain solutions to numerical approaches and help explain why machine
learning approaches find the solutions they do. This task is difficult and we
do not provide a complete framework at this stage: We focus on high SNR and
memoryless sources with an arbitrary continuous unimodal density function and
memoryless Gaussian channels. We also provide examples of mappings based on
surfaces, which are chosen based on the provided theory.
|
[
{
"version": "v1",
"created": "Thu, 9 Apr 2009 14:54:19 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Apr 2012 22:52:12 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Jul 2021 21:49:38 GMT"
},
{
"version": "v4",
"created": "Sun, 20 Mar 2022 19:25:42 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Floor",
"Pål Anders",
""
],
[
"Ramstad",
"Tor A.",
""
]
] |
new_dataset
| 0.982668 |
1804.06039
|
Xuepeng Shi Mr
|
Xuepeng Shi, Shiguang Shan, Meina Kan, Shuzhe Wu, Xilin Chen
|
Real-Time Rotation-Invariant Face Detection with Progressive Calibration
Networks
|
Accepted to CVPR 2018. Code: https://github.com/Rock-100/FaceKit
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rotation-invariant face detection, i.e. detecting faces with arbitrary
rotation-in-plane (RIP) angles, is widely required in unconstrained
applications but still remains as a challenging task, due to the large
variations of face appearances. Most existing methods compromise with speed or
accuracy to handle the large RIP variations. To address this problem more
efficiently, we propose Progressive Calibration Networks (PCN) to perform
rotation-invariant face detection in a coarse-to-fine manner. PCN consists of
three stages, each of which not only distinguishes the faces from non-faces,
but also calibrates the RIP orientation of each face candidate to upright
progressively. By dividing the calibration process into several progressive
steps and only predicting coarse orientations in early stages, PCN can achieve
precise and fast calibration. By performing binary classification of face vs.
non-face with gradually decreasing RIP ranges, PCN can accurately detect faces
with full $360^{\circ}$ RIP angles. Such designs lead to a real-time
rotation-invariant face detector. The experiments on multi-oriented FDDB and a
challenging subset of WIDER FACE containing rotated faces in the wild show that
our PCN achieves quite promising performance.
|
[
{
"version": "v1",
"created": "Tue, 17 Apr 2018 04:27:14 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Dec 2021 16:52:48 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Mar 2022 22:42:29 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Shi",
"Xuepeng",
""
],
[
"Shan",
"Shiguang",
""
],
[
"Kan",
"Meina",
""
],
[
"Wu",
"Shuzhe",
""
],
[
"Chen",
"Xilin",
""
]
] |
new_dataset
| 0.99753 |
1809.07258
|
Guillaume Gautier
|
Guillaume Gautier, Guillermo Polito, R\'emi Bardenet, Michal Valko
|
DPPy: Sampling DPPs with Python
|
Code at http://github.com/guilgautier/DPPy/ Documentation at
http://dppy.readthedocs.io/
|
Journal of Machine Learning Research 20 (2019) 1-7
| null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Determinantal point processes (DPPs) are specific probability distributions
over clouds of points that are used as models and computational tools across
physics, probability, statistics, and more recently machine learning. Sampling
from DPPs is a challenge and therefore we present DPPy, a Python toolbox that
gathers known exact and approximate sampling algorithms for both finite and
continuous DPPs. The project is hosted on GitHub and equipped with extensive
documentation.
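For intuition, the standard spectral algorithm for exact sampling from a finite
DPP with correlation kernel K (eigenvalues in [0, 1]) looks as follows. This is
a generic NumPy sketch and deliberately does not reproduce DPPy's own API; see
the documentation linked above for that.

```python
# Sketch: exact spectral sampling from a finite DPP with correlation kernel K.
import numpy as np

def sample_dpp(K, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    vals, vecs = np.linalg.eigh(K)
    # Phase 1: keep each eigenvector independently with prob. its eigenvalue.
    keep = rng.random(vals.shape[0]) < np.clip(vals, 0.0, 1.0)
    V = vecs[:, keep]
    sample = []
    # Phase 2: sequentially pick items, then project out the chosen coordinate.
    while V.shape[1] > 0:
        probs = (V ** 2).sum(axis=1) / V.shape[1]
        i = int(rng.choice(K.shape[0], p=probs))
        sample.append(i)
        j = int(np.argmax(np.abs(V[i, :])))          # column with V[i, j] != 0
        V = V - np.outer(V[:, j], V[i, :] / V[i, j]) # zero out row i
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)                   # re-orthonormalize
    return sorted(sample)

# Toy usage: a projection DPP onto a random 3-D subspace of R^8 always
# returns exactly 3 items.
Q, _ = np.linalg.qr(np.random.randn(8, 3))
print(sample_dpp(Q @ Q.T))
```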
|
[
{
"version": "v1",
"created": "Wed, 19 Sep 2018 15:53:00 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Aug 2019 16:58:41 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Gautier",
"Guillaume",
""
],
[
"Polito",
"Guillermo",
""
],
[
"Bardenet",
"Rémi",
""
],
[
"Valko",
"Michal",
""
]
] |
new_dataset
| 0.997028 |
2006.14884
|
Yikai Zhao
|
Tong Yang, Jizhou Li, Yikai Zhao, Kaicheng Yang, Hao Wang, Jie Jiang,
Yinda Zhang, Nicholas Zhang
|
QCluster: Clustering Packets for Flow Scheduling
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Flow scheduling is crucial in data centers, as it directly influences user
experience of applications. According to different assumptions and design
goals, there are four typical flow scheduling problems/solutions: SRPT, LAS,
Fair Queueing, and Deadline-Aware scheduling. When implementing these solutions
in commodity switches with limited number of queues, they need to set static
parameters by measuring traffic in advance, while optimal parameters vary
across time and space. This paper proposes a generic framework, namely
QCluster, to adapt all scheduling problems for limited number of queues. The
key idea of QCluster is to cluster packets with similar weights/properties into
the same queue. QCluster is implemented in Tofino switches, and can cluster
packets at a speed of 3.2 Tbps. To the best of our knowledge, QCluster is the
fastest clustering algorithm. Experimental results in testbed with programmable
switches and ns-2 show that QCluster reduces the average flow completion time
(FCT) for short flows by up to 56.6%, and reduces the overall average FCT by up
to 21.7% over the state of the art. All the source code in ns-2 is available on
GitHub.
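A hedged sketch of the clustering idea: map packets with similar weights into
the same queue, here via quantile boundaries over observed weights. The
quantile rule is a stand-in for illustration, not the paper's Tofino
implementation.

```python
# Sketch: cluster packet weights into a limited number of queues.
import numpy as np

def assign_queues(weights, n_queues=8):
    # Quantile boundaries give n_queues buckets of similar-weight packets.
    qs = np.quantile(weights, np.linspace(0, 1, n_queues + 1)[1:-1])
    return np.searchsorted(qs, weights)   # queue id in [0, n_queues)

w = np.random.pareto(a=1.5, size=10_000)  # heavy-tailed flow sizes (toy data)
queues = assign_queues(w)
print(np.bincount(queues))  # roughly balanced packet counts per queue
```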
|
[
{
"version": "v1",
"created": "Fri, 26 Jun 2020 09:38:43 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Mar 2022 17:38:29 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Yang",
"Tong",
""
],
[
"Li",
"Jizhou",
""
],
[
"Zhao",
"Yikai",
""
],
[
"Yang",
"Kaicheng",
""
],
[
"Wang",
"Hao",
""
],
[
"Jiang",
"Jie",
""
],
[
"Zhang",
"Yinda",
""
],
[
"Zhang",
"Nicholas",
""
]
] |
new_dataset
| 0.987079 |
2101.11796
|
Tsu-Jui Fu
|
Tsu-Jui Fu, William Yang Wang, Daniel McDuff, Yale Song
|
DOC2PPT: Automatic Presentation Slides Generation from Scientific
Documents
|
AAAI'22
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Creating presentation materials requires complex multimodal reasoning skills
to summarize key concepts and arrange them in a logical and visually pleasing
manner. Can machines learn to emulate this laborious process? We present a
novel task and approach for document-to-slide generation. Solving this involves
document summarization, image and text retrieval, slide structure and layout
prediction to arrange key elements in a form suitable for presentation. We
propose a hierarchical sequence-to-sequence approach to tackle our task in an
end-to-end manner. Our approach exploits the inherent structures within
documents and slides and incorporates paraphrasing and layout prediction
modules to generate slides. To help accelerate research in this domain, we
release a dataset of about 6K paired documents and slide decks used in our
experiments. We show that our approach outperforms strong baselines and
produces slides with rich content and aligned imagery.
|
[
{
"version": "v1",
"created": "Thu, 28 Jan 2021 03:21:17 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Feb 2021 05:41:08 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Jan 2022 19:02:37 GMT"
},
{
"version": "v4",
"created": "Sat, 19 Mar 2022 18:19:35 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Fu",
"Tsu-Jui",
""
],
[
"Wang",
"William Yang",
""
],
[
"McDuff",
"Daniel",
""
],
[
"Song",
"Yale",
""
]
] |
new_dataset
| 0.992899 |
2103.07356
|
Akira Taniguchi
|
Akira Taniguchi, Ayako Fukawa, Hiroshi Yamakawa
|
Hippocampal formation-inspired probabilistic generative model
|
Submitted to Neural Networks
| null | null | null |
cs.AI cs.NE q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In building artificial intelligence (AI) agents, referring to how brains
function in real environments can accelerate development by reducing the design
space. In this study, we propose a probabilistic generative model (PGM) for
navigation in uncertain environments by integrating the neuroscientific
knowledge of hippocampal formation (HF) and the engineering knowledge in
robotics and AI, namely, simultaneous localization and mapping (SLAM). We
follow the approach of brain reference architecture (BRA) (Yamakawa, 2021) to
compose the PGM and outline how to verify the model. To this end, we survey and
discuss the relationship between the HF findings and SLAM models. The proposed
hippocampal formation-inspired probabilistic generative model (HF-PGM) is
designed to be highly consistent with the anatomical structure and functions of
the HF. By referencing the brain, we elaborate on the importance of integration
of egocentric/allocentric information from the entorhinal cortex to the
hippocampus and the use of discrete-event queues.
|
[
{
"version": "v1",
"created": "Fri, 12 Mar 2021 15:46:52 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Nov 2021 08:19:20 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Mar 2022 08:15:09 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Taniguchi",
"Akira",
""
],
[
"Fukawa",
"Ayako",
""
],
[
"Yamakawa",
"Hiroshi",
""
]
] |
new_dataset
| 0.993788 |
2104.01122
|
Tsu-Jui Fu
|
Tsu-Jui Fu, Xin Eric Wang, Scott T. Grafton, Miguel P. Eckstein,
William Yang Wang
|
M3L: Language-based Video Editing via Multi-Modal Multi-Level
Transformers
|
CVPR'22
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video editing tools are widely used nowadays for digital design. Although the
demand for these tools is high, the prior knowledge required makes it difficult
for novices to get started. Systems that could follow natural language
instructions to perform automatic editing would significantly improve
accessibility. This paper introduces the language-based video editing (LBVE)
task, which allows the model to edit, guided by text instruction, a source
video into a target video. LBVE contains two features: 1) the scenario of the
source video is preserved instead of generating a completely different video;
2) the semantic is presented differently in the target video, and all changes
are controlled by the given instruction. We propose a Multi-Modal Multi-Level
Transformer (M$^3$L) to carry out LBVE. M$^3$L dynamically learns the
correspondence between video perception and language semantic at different
levels, which benefits both the video understanding and video frame synthesis.
We build three new datasets for evaluation, including two diagnostic and one
from natural videos with human-labeled text. Extensive experimental results
show that M$^3$L is effective for video editing and that LBVE opens a new
direction for vision-and-language research.
|
[
{
"version": "v1",
"created": "Fri, 2 Apr 2021 15:59:52 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 20:08:30 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Fu",
"Tsu-Jui",
""
],
[
"Wang",
"Xin Eric",
""
],
[
"Grafton",
"Scott T.",
""
],
[
"Eckstein",
"Miguel P.",
""
],
[
"Wang",
"William Yang",
""
]
] |
new_dataset
| 0.999728 |
2108.00054
|
Alireza Javaheri
|
Alireza Javaheri, Catarina Brites, Fernando Pereira, and Jo\~ao
Ascenso
|
A Point-to-Distribution Joint Geometry and Color Metric for Point Cloud
Quality Assessment
|
This paper has been accepted for publication in IEEE Workshop on
Multimedia Signal Processing
| null |
10.1109/MMSP53017.2021.9733670
| null |
cs.MM eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Point clouds (PCs) are a powerful 3D visual representation paradigm for many
emerging application domains, especially virtual and augmented reality, and
autonomous vehicles. However, the large amount of PC data required for highly
immersive and realistic experiences makes the availability of efficient, lossy
PC coding solutions critical. Recently, two MPEG PC coding standards
have been developed to address the relevant application requirements and
further developments are expected in the future. In this context, the
assessment of PC quality, notably for decoded PCs, is critical and asks for the
design of efficient objective PC quality metrics. In this paper, a novel
point-to-distribution metric is proposed for PC quality assessment considering
both the geometry and texture. This new quality metric exploits the
scale-invariance property of the Mahalanobis distance to assess first the
geometry and color point-to-distribution distortions, which are after fused to
obtain a joint geometry and color quality metric. The proposed quality metric
significantly outperforms the best PC quality assessment metrics in the
literature.
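A minimal sketch of a point-to-distribution Mahalanobis distance follows;
neighbourhood selection and the geometry/colour fusion step are omitted, and
all names are illustrative.

```python
# Sketch: Mahalanobis distance from a degraded point to the distribution of a
# reference neighbourhood (the Mahalanobis distance is scale-invariant, which
# is the property exploited above).
import numpy as np

def point_to_distribution(p, neighbourhood, eps=1e-9):
    mu = neighbourhood.mean(axis=0)
    cov = np.cov(neighbourhood.T) + eps * np.eye(p.shape[0])  # regularized
    diff = p - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

ref = np.random.randn(50, 3)               # reference neighbourhood (geometry)
deg = ref[0] + 0.05 * np.random.randn(3)   # a slightly displaced point
print(point_to_distribution(deg, ref))
```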
|
[
{
"version": "v1",
"created": "Fri, 30 Jul 2021 19:33:47 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Javaheri",
"Alireza",
""
],
[
"Brites",
"Catarina",
""
],
[
"Pereira",
"Fernando",
""
],
[
"Ascenso",
"João",
""
]
] |
new_dataset
| 0.976155 |
2109.01429
|
Tom Gilat
|
Tom Gilat
|
Smooth Surfaces via Nets of Geodesics
|
Added explanations in multiple places
| null | null | null |
cs.CG cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents an algorithm for the computation and visualization of an
underlying unknown surface from a given net of geodesics. It is based on a
theoretical result by the author regarding minimal Gaussian curvature surfaces
with geodesic boundary conditions. The novelty of the method is that it
consists of the computation of each patch in the net independently with the
union of the patches being a smooth surface. This communicates with a seminal
work by the late David Knill which suggests that the human visual system infers
different objects by markings along geodesics on their surface. It also
provides a complete program to tackle the reconstruction problem raised in: N.
Sprynski, N. Szafran, B. Lacolle, and L. Biard. Surface reconstruction via
geodesic interpolation. Comput. Aided Des., 40(4):480-492, April 2008.
|
[
{
"version": "v1",
"created": "Fri, 3 Sep 2021 10:43:26 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Mar 2022 10:45:11 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Gilat",
"Tom",
""
]
] |
new_dataset
| 0.998058 |
2109.05491
|
Min Wang
|
Libing Wu, Min Wang, Dan Wu, Jia Wu
|
DynSTGAT: Dynamic Spatial-Temporal Graph Attention Network for Traffic
Signal Control
|
I need to revise it
| null |
10.1145/3459637.3482254
| null |
cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adaptive traffic signal control plays a significant role in the construction
of smart cities. This task is challenging because of many essential factors,
such as cooperation among neighboring intersections and dynamic traffic
scenarios. First, to facilitate cooperation of traffic signals, existing work
adopts graph neural networks to incorporate the temporal and spatial influences
of the surrounding intersections into the target intersection, where
spatial-temporal information is used separately. However, one drawback of these
methods is that the spatial-temporal correlations are not adequately exploited
to obtain a better control scheme. Second, in a dynamic traffic environment,
the historical state of the intersection is also critical for predicting future
signal switching. Previous work mainly solves this problem using the current
intersection's state, neglecting the fact that traffic flow changes
continuously in both space and time, and does not handle the historical
state.
In this paper, we propose a novel neural network framework named DynSTGAT,
which integrates dynamic historical state into a new spatial-temporal graph
attention network to address the above two problems. More specifically, our
DynSTGAT model employs a novel multi-head graph attention mechanism, which aims
to adequately exploit the joint relations of spatial-temporal information.
Then, to efficiently utilize the historical state information of the
intersection, we design a sequence model with the temporal convolutional
network (TCN) to capture the historical information and further merge it with
the spatial information to improve its performance. Extensive experiments
conducted in the multi-intersection scenario on synthetic data and real-world
data confirm that our method can achieve superior performance in travel time
and throughput against the state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 12 Sep 2021 11:27:27 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Mar 2022 03:10:52 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Wu",
"Libing",
""
],
[
"Wang",
"Min",
""
],
[
"Wu",
"Dan",
""
],
[
"Wu",
"Jia",
""
]
] |
new_dataset
| 0.999121 |
2109.06165
|
Weihua Chen
|
Tongkun Xu, Weihua Chen, Pichao Wang, Fan Wang, Hao Li, Rong Jin
|
CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from
a labeled source domain to a different unlabeled target domain. Most existing
UDA methods focus on learning domain-invariant feature representation, either
from the domain level or category level, using convolution neural networks
(CNNs)-based frameworks. One fundamental problem for the category level based
UDA is the production of pseudo labels for samples in target domain, which are
usually too noisy for accurate domain alignment, inevitably compromising the
UDA performance. With the success of the Transformer in various tasks, we find
that the cross-attention in the Transformer is robust to noisy input pairs and
enables better feature alignment; thus, in this paper, the Transformer is
adopted for the
challenging UDA task. Specifically, to generate accurate input pairs, we design
a two-way center-aware labeling algorithm to produce pseudo labels for target
samples. Along with the pseudo labels, a weight-sharing triple-branch
transformer framework is proposed to apply self-attention and cross-attention
for source/target feature learning and source-target domain alignment,
respectively. Such design explicitly enforces the framework to learn
discriminative domain-specific and domain-invariant representations
simultaneously. The proposed method is dubbed CDTrans (cross-domain
transformer), and it provides one of the first attempts to solve UDA tasks with
a pure transformer solution. Experiments show that our proposed method achieves
the best performance on public UDA datasets, e.g. VisDA-2017 and DomainNet.
Code and models are available at https://github.com/CDTrans/CDTrans.
|
[
{
"version": "v1",
"created": "Mon, 13 Sep 2021 17:59:07 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Oct 2021 02:53:33 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Dec 2021 03:49:14 GMT"
},
{
"version": "v4",
"created": "Sat, 19 Mar 2022 11:02:21 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Xu",
"Tongkun",
""
],
[
"Chen",
"Weihua",
""
],
[
"Wang",
"Pichao",
""
],
[
"Wang",
"Fan",
""
],
[
"Li",
"Hao",
""
],
[
"Jin",
"Rong",
""
]
] |
new_dataset
| 0.99328 |
2110.07365
|
Ayon Chakraborty
|
Md. Shaifur Rahman, Ayon Chakraborty, Karthikeyan Sunderasan, Sampath
Rangarajan
|
DynoLoc: Infrastructure-free RF Tracking in Dynamic Indoor Environments
|
The work was done when all the authors were employees of NEC
Laboratories America and is protected by the patent applications:
US20210306977A1 and US20210185491A1 available in the public domain
| null | null | null |
cs.NI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Promising solutions exist today that can accurately track mobile entities
indoor using visual inertial odometry in favorable visual conditions, or by
leveraging fine-grained ranging (RF, ultrasonic, IR, etc.) to reference
anchors. However, they are unable to directly cater to "dynamic" indoor
environments (e.g. first responder scenarios, multi-player AR/VR gaming in
everyday spaces, etc.) that are devoid of such favorable conditions. Indeed, we
show that the need for "infrastructure-free", and robustness to "node mobility"
and "visual conditions" in such environments, motivates a robust RF-based
approach along with the need to address a novel and challenging variant of its
infrastructure-free (i.e. peer-to-peer) localization problem that is
latency-bounded - accurate tracking of mobile entities imposes a latency budget
that not only affects the solution computation but also the collection of
peer-to-peer ranges themselves.
In this work, we present the design and deployment of DynoLoc that addresses
this latency-bounded infrastructure-free RF localization problem. To this end,
DynoLoc unravels the fundamental tradeoff between latency and localization
accuracy and incorporates design elements that judiciously leverage the
available ranging resources to adaptively estimate the joint topology of nodes,
coupled with robust algorithm that maximizes the localization accuracy even in
the face of practical environmental artifacts (wireless connectivity and
multipath, node mobility, etc.). This allows DynoLoc to track (every second) a
network of few tens of mobile entities even at speeds of 1-2 m/s with median
accuracies under 1-2 m (compared to 5m+ with baselines), without infrastructure
support. We demonstrate DynoLoc's potential in a real-world firefighters'
drill, as well as two other use cases of (i) multi-player AR/VR gaming, and
(ii) active shooter tracking by first responders.
|
[
{
"version": "v1",
"created": "Thu, 14 Oct 2021 13:51:45 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Mar 2022 04:35:43 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Rahman",
"Md. Shaifur",
""
],
[
"Chakraborty",
"Ayon",
""
],
[
"Sunderasan",
"Karthikeyan",
""
],
[
"Rangarajan",
"Sampath",
""
]
] |
new_dataset
| 0.997737 |
2111.11017
|
Nan Liu
|
Feng Xie, Jun Zhou, Jin Wee Lee, Mingrui Tan, Siqi Li, Logasan S/O
Rajnthern, Marcel Lucas Chee, Bibhas Chakraborty, An-Kwok Ian Wong, Alon
Dagan, Marcus Eng Hock Ong, Fei Gao, Nan Liu
|
Benchmarking emergency department triage prediction models with machine
learning and large public electronic health records
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The demand for emergency department (ED) services is increasing across the
globe, particularly during the current COVID-19 pandemic. Clinical triage and
risk assessment have become increasingly challenging due to the shortage of
medical resources and the strain on hospital infrastructure caused by the
pandemic. As a result of the widespread use of electronic health records
(EHRs), we now have access to a vast amount of clinical data, which allows us
to develop predictive models and decision support systems to address these
challenges. To date, however, there are no widely accepted benchmark ED triage
prediction models based on large-scale public EHR data. An open-source
benchmarking platform would streamline research workflows by eliminating
cumbersome data preprocessing, and facilitate comparisons among different
studies and methodologies. In this paper, based on the Medical Information Mart
for Intensive Care IV Emergency Department (MIMIC-IV-ED) database, we developed
a publicly available benchmark suite for ED triage predictive models and
created a benchmark dataset that contains over 400,000 ED visits from 2011 to
2019. We introduced three ED-based outcomes (hospitalization, critical
outcomes, and 72-hour ED reattendance) and implemented a variety of popular
methodologies, ranging from machine learning methods to clinical scoring
systems. We evaluated and compared the performance of these methods against
benchmark tasks. Our codes are open-source, allowing anyone with MIMIC-IV-ED
data access to perform the same steps in data processing, benchmark model
building, and experiments. This study provides future researchers with
insights, suggestions, and protocols for managing raw data and developing risk
triaging tools for emergency care.
|
[
{
"version": "v1",
"created": "Mon, 22 Nov 2021 06:51:11 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Mar 2022 07:12:13 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Xie",
"Feng",
""
],
[
"Zhou",
"Jun",
""
],
[
"Lee",
"Jin Wee",
""
],
[
"Tan",
"Mingrui",
""
],
[
"Li",
"Siqi",
""
],
[
"Rajnthern",
"Logasan S/O",
""
],
[
"Chee",
"Marcel Lucas",
""
],
[
"Chakraborty",
"Bibhas",
""
],
[
"Wong",
"An-Kwok Ian",
""
],
[
"Dagan",
"Alon",
""
],
[
"Ong",
"Marcus Eng Hock",
""
],
[
"Gao",
"Fei",
""
],
[
"Liu",
"Nan",
""
]
] |
new_dataset
| 0.975781 |
2112.04536
|
Maria Vittoria Minniti
|
Maria Vittoria Minniti, Ruben Grandia, Farbod Farshidian, Marco Hutter
|
Adaptive CLF-MPC With Application To Quadrupedal Robots
| null |
IEEE Robotics and Automation Letters (Volume: 7, Issue: 1, Jan.
2022)
|
10.1109/LRA.2021.3128697
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern robotic systems are endowed with superior mobility and mechanical
skills that make them suited to be employed in real-world scenarios, where
interactions with heavy objects and precise manipulation capabilities are
required. For instance, legged robots with high payload capacity can be used in
disaster scenarios to remove dangerous material or carry injured people. It is
thus essential to develop planning algorithms that can enable complex robots to
perform motion and manipulation tasks accurately. In addition, online
adaptation mechanisms with respect to new, unknown environments are needed. In
this work, we impose that the optimal state-input trajectories generated by
Model Predictive Control (MPC) satisfy the Lyapunov function criterion derived
in adaptive control for robotic systems. As a result, we combine the stability
guarantees provided by Control Lyapunov Functions (CLFs) and the optimality
offered by MPC in a unified adaptive framework, yielding an improved
performance during the robot's interaction with unknown objects. We validate
the proposed approach in simulation and hardware tests on a quadrupedal robot
carrying un-modeled payloads and pulling heavy boxes.
|
[
{
"version": "v1",
"created": "Wed, 8 Dec 2021 19:28:02 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Mar 2022 16:29:20 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Minniti",
"Maria Vittoria",
""
],
[
"Grandia",
"Ruben",
""
],
[
"Farshidian",
"Farbod",
""
],
[
"Hutter",
"Marco",
""
]
] |
new_dataset
| 0.997146 |
2112.12692
|
Alex Jones
|
Arifa Hoque (1), Alex K. Jones (2), Sanjukta Bhanja (1) ((1)
University of South Florida, (2) University of Pittsburgh)
|
XDWM: A 2D Domain Wall Memory
|
in IEEE Transactions on Nanotechnology
|
IEEE Transactions on Nanotechnology
|
10.1109/TNANO.2022.3158889
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain-Wall Memory (DWM) structures typically bundle nanowires shifted
together for parallel access. Ironically, this organization does not allow the
natural shifting of DWM to realize \textit{logical shifting} within data
elements. We describe a novel 2-D DWM cross-point (X-Cell) that allows two
individual nanowires placed orthogonally to share the X-Cell. Each nanowire can
operate independently while sharing the value at the X-Cell. Using X-Cells, we
propose an orthogonal nanowire in the Y dimension overlaid on a bundle of X
dimension nanowires for a cross-DWM or XDWM. We demonstrate that the bundle
shifts correctly in the X-Direction, and that data can be logically shifted in
the Y-direction providing novel data movement and supporting
processing-in-memory. We conducted studies on the requirements for physical
cell dimensions and shift currents for XDWM. Due to the non-standard domain,
our micro-magnetic studies demonstrate that XDWM introduces a shift-current
penalty of 6.25% when shifting a single nanowire, compared to a standard
nanowire. We also demonstrate correct shifting using nanowire bundles in both
the X- and Y- dimensions. Using magnetic simulation to derive the values for
SPICE simulation we show the maximum leakage current between nanowires when
shifting the bundle together is $\le3$% indicating that sneak paths are not
problematic for XDWM.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 16:40:10 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Hoque",
"Arifa",
""
],
[
"Jones",
"Alex K.",
""
],
[
"Bhanja",
"Sanjukta",
""
]
] |
new_dataset
| 0.999138 |
2201.04402
|
Ekrem \c{C}etinkaya
|
Ekrem \c{C}etinkaya and Minh Nguyen and Christian Timmerer
|
MoViDNN: A Mobile Platform for Evaluating Video Quality Enhancement with
Deep Neural Networks
|
8 pages, 3 figures
|
MMM 2022: MultiMedia Modeling pp 465-472
|
10.1007/978-3-030-98355-0_40
| null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep neural network (DNN) based approaches have been studied intensively in
recent years to improve video quality. Owing to their high computational cost,
these approaches have been designed mainly for desktop devices. However, with
the increasing performance of mobile devices, it has become possible to execute
DNN based approaches on mobile devices as well. Even so, utilizing DNNs to
improve video quality on mobile devices remains an active research area.
In this paper, we propose an open-source mobile platform, namely MoViDNN, to
evaluate DNN based video quality enhancement methods, such as super-resolution,
denoising, and deblocking. Our proposed platform can be used to evaluate the
DNN based approaches both objectively and subjectively. For objective
evaluation, we report common metrics such as execution time, PSNR, and SSIM.
For subjective evaluation, the Mean Opinion Score (MOS) is reported. The proposed
platform is available publicly at https://github.com/cd-athena/MoViDNN
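  For reference, a minimal sketch of the objective metrics named above, with
PSNR computed directly and SSIM via scikit-image; this is generic metric code
over assumed toy frames, not code from the MoViDNN repository:
```python
# Hedged sketch: PSNR and SSIM between a reference and an enhanced frame.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)      # toy frames
test = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(psnr(ref, test), structural_similarity(ref, test, data_range=255))
```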
|
[
{
"version": "v1",
"created": "Wed, 12 Jan 2022 10:38:04 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Çetinkaya",
"Ekrem",
""
],
[
"Nguyen",
"Minh",
""
],
[
"Timmerer",
"Christian",
""
]
] |
new_dataset
| 0.9885 |
2201.11871
|
Zhengwei Bai
|
Zhengwei Bai, Guoyuan Wu, Xuewei Qi, Yongkang Liu, Kentaro Oguchi,
Matthew J. Barth
|
Infrastructure-Based Object Detection and Tracking for Cooperative
Driving Automation: A Survey
| null | null | null | null |
cs.CV eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Object detection plays a fundamental role in enabling Cooperative Driving
Automation (CDA), which is regarded as the revolutionary solution to addressing
safety, mobility, and sustainability issues of contemporary transportation
systems. Although current computer vision technologies could provide
satisfactory object detection results in occlusion-free scenarios, the
perception performance of onboard sensors could be inevitably limited by the
range and occlusion. Owing to flexible position and pose for sensor
installation, infrastructure-based detection and tracking systems can enhance
the perception capability for connected vehicles and thus quickly become one of
the most popular research topics. In this paper, we review the research
progress for infrastructure-based object detection and tracking systems.
Architectures of roadside perception systems based on different types of
sensors are reviewed to show a high-level description of the workflows for
infrastructure-based perception systems. Roadside sensors and different
perception methodologies are then reviewed and analyzed in detail to provide a
low-level explanation of specific methods, followed by a review of datasets and
simulators that draws an overall landscape of infrastructure-based object
detection and tracking methods. We close with a discussion of current
opportunities, open problems, and anticipated future trends.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 00:55:24 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Mar 2022 23:02:57 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Bai",
"Zhengwei",
""
],
[
"Wu",
"Guoyuan",
""
],
[
"Qi",
"Xuewei",
""
],
[
"Liu",
"Yongkang",
""
],
[
"Oguchi",
"Kentaro",
""
],
[
"Barth",
"Matthew J.",
""
]
] |
new_dataset
| 0.981433 |
2203.08512
|
Wenpeng Yin
|
Wenpeng Yin, Jia Li, Caiming Xiong
|
ConTinTin: Continual Learning from Task Instructions
|
ACL'2022 camera-ready
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The mainstream machine learning paradigms for NLP often work with two
underlying presumptions. First, the target task is predefined and static; a
system merely needs to learn to solve it exclusively. Second, the supervision
of a task mainly comes from a set of labeled examples. A question arises: how
to build a system that can keep learning new tasks from their instructions?
This work defines a new learning paradigm ConTinTin (Continual Learning from
Task Instructions), in which a system should learn a sequence of new tasks one
by one, where each task is explained by a piece of textual instruction. The system is
required to (i) generate the expected outputs of a new task by learning from
its instruction, (ii) transfer the knowledge acquired from upstream tasks to
help solve downstream tasks (i.e., forward-transfer), and (iii) retain or even
improve the performance on earlier tasks after learning new tasks (i.e.,
backward-transfer). This new problem is studied on a stream of more than 60
tasks, each equipped with an instruction. Technically, our method
InstructionSpeak contains two strategies that make full use of task
instructions to improve forward-transfer and backward-transfer: one is to learn
from negative outputs, and the other is to revisit the instructions of previous
tasks. To our knowledge, this is the first study of ConTinTin in NLP. In addition
to the problem formulation and our promising approach, this work also
contributes to providing rich analyses for the community to better understand
this novel learning problem.
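  A schematic sketch of the ConTinTin evaluation protocol as described above;
the model API (`fit_on_instruction`, `evaluate`) and the task objects are
hypothetical stand-ins, and the actual InstructionSpeak method is not
reproduced here:
```python
# Hedged sketch: tasks arrive one by one as (instruction, examples); after
# each update we evaluate on all tasks seen so far, so the history captures
# both forward-transfer and backward-transfer.
def contintin_loop(model, task_stream, replay_instructions=True):
    seen, history = [], []
    for task in task_stream:                       # tasks arrive sequentially
        model.fit_on_instruction(task.instruction,
                                 negatives=task.negative_outputs)
        if replay_instructions:                    # revisit past instructions
            for past in seen:
                model.fit_on_instruction(past.instruction)
        seen.append(task)
        history.append([model.evaluate(t) for t in seen])  # backward transfer
    return history
```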
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 10:27:18 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 19:15:47 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Yin",
"Wenpeng",
""
],
[
"Li",
"Jia",
""
],
[
"Xiong",
"Caiming",
""
]
] |
new_dataset
| 0.992436 |
2203.09463
|
Yan Wang
|
Yan Wang, Yixuan Sun, Yiwen Huang, Zhongying Liu, Shuyong Gao, Wei
Zhang, Weifeng Ge and Wenqiang Zhang
|
FERV39k: A Large-Scale Multi-Scene Dataset for Facial Expression
Recognition in Videos
|
Accepted for CVPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current benchmarks for facial expression recognition (FER) mainly focus on
static images, while datasets for FER in videos remain limited. It is thus
unclear whether the performance of existing methods remains satisfactory in
real-world, application-oriented scenes. For example, the
"Happy" expression with high intensity in Talk-Show is more discriminating than
the same expression with low intensity in Official-Event. To fill this gap, we
build a large-scale multi-scene dataset, coined as FERV39k. We analyze the
important ingredients of constructing such a novel dataset in three aspects:
(1) multi-scene hierarchy and expression class, (2) generation of candidate
video clips, (3) trusted manual labelling process. Based on these guidelines,
we select 4 scenarios subdivided into 22 scenes, annotate 86k samples
automatically obtained from 4k videos based on the well-designed workflow, and
finally build 38,935 video clips labeled with 7 classic expressions. Benchmark
experiments on four kinds of baseline frameworks are also provided, together
with further analysis of their performance across different scenes and a
discussion of challenges for future research. In addition, we systematically investigate key
components of DFER through ablation studies. The baseline framework and our project
will be made publicly available.
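  As an illustration, a minimal sketch of reading a clip index for a dataset
organized like FERV39k (scenario → scene → clip, 7 expression labels); the CSV
layout, column names, and label order are assumptions, not the released format:
```python
# Hedged sketch: build (clip_path, scenario, scene, label_id) tuples from an
# assumed index file; the label set below is the 7 classic expressions.
import csv

EXPRESSIONS = ["Angry", "Disgust", "Fear", "Happy",
               "Neutral", "Sad", "Surprise"]        # assumed label order

def load_index(csv_path):
    """Read a hypothetical clip index CSV into a flat list of samples."""
    rows = []
    with open(csv_path, newline="") as f:
        for r in csv.DictReader(f):
            rows.append((r["clip_path"], r["scenario"], r["scene"],
                         EXPRESSIONS.index(r["expression"])))
    return rows
```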
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 17:25:33 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Mar 2022 09:43:16 GMT"
}
] | 2022-03-22T00:00:00 |
[
[
"Wang",
"Yan",
""
],
[
"Sun",
"Yixuan",
""
],
[
"Huang",
"Yiwen",
""
],
[
"Liu",
"Zhongying",
""
],
[
"Gao",
"Shuyong",
""
],
[
"Zhang",
"Wei",
""
],
[
"Ge",
"Weifeng",
""
],
[
"Zhang",
"Wenqiang",
""
]
] |
new_dataset
| 0.999882 |