Columns (⌀ = nullable): id (string), submitter (string ⌀), authors (string), title (string), comments (string ⌀), journal-ref (string ⌀), doi (string ⌀), report-no (string ⌀), categories (string), license (string, 9 classes), abstract (string), versions (list), update_date (timestamp[s]), authors_parsed (list), prediction (string, 1 class), probability (float64, 0.95 to 1)

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
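In each record below, the `versions` and `authors_parsed` cells hold JSON-encoded lists, and `categories` is a space-separated string. A minimal sketch of decoding one such row in plain Python; the field values are copied from the first record, and `parse_row` is a hypothetical helper for illustration, not part of any dataset API:

```python
import json

# One flattened row of the table, keyed by the header columns.
# Values are copied from the first record (arXiv 2202.00448).
row = {
    "id": "2202.00448",
    "categories": "cs.CV cs.AI cs.RO",
    "versions": '[{"version": "v1", "created": "Tue, 1 Feb 2022 15:00:20 GMT"}]',
    "authors_parsed": '[["Zhong", "Chengliang", ""], ["Yang", "Chao", ""]]',
    "prediction": "new_dataset",
    "probability": 0.999195,
}

def parse_row(row: dict) -> dict:
    """Decode the JSON-valued cells and split the space-separated categories."""
    out = dict(row)
    out["versions"] = json.loads(row["versions"])
    out["authors_parsed"] = json.loads(row["authors_parsed"])
    out["categories"] = row["categories"].split()
    return out

record = parse_row(row)
print(record["categories"])              # ['cs.CV', 'cs.AI', 'cs.RO']
print(record["versions"][0]["version"])  # v1
```

The same per-row decoding applies to every record that follows, since all rows share the schema above.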
2202.00448
|
Chengliang Zhong
|
Chengliang Zhong, Chao Yang, Jinshan Qi, Fuchun Sun, Huaping Liu,
Xiaodong Mu, Wenbing Huang
|
Sim2Real Object-Centric Keypoint Detection and Description
|
accepted to AAAI2022
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Keypoint detection and description play a central role in computer vision.
Most existing methods are in the form of scene-level prediction, without
returning the object classes of different keypoints. In this paper, we propose
the object-centric formulation, which, beyond the conventional setting,
requires further identifying which object each interest point belongs to. With
such fine-grained information, our framework enables more downstream
applications, such as object-level matching and pose estimation in a cluttered
environment. To get around the difficulty of label collection in the real
world, we develop a sim2real contrastive learning mechanism that can generalize
the model trained in simulation to real-world applications. The novelties of
our training method are three-fold: (i) we integrate the uncertainty into the
learning framework to improve feature description of hard cases, e.g.,
less-textured or symmetric patches; (ii) we decouple the object descriptor into
two output branches -- intra-object salience and inter-object distinctness,
resulting in a better pixel-wise description; (iii) we enforce cross-view
semantic consistency for enhanced robustness in representation learning.
Comprehensive experiments on image matching and 6D pose estimation verify the
encouraging generalization ability of our method from simulation to reality.
Particularly for 6D pose estimation, our method significantly outperforms
typical unsupervised/sim2real methods, achieving a closer gap with the fully
supervised counterpart. Additional results and videos can be found at
https://zhongcl-thu.github.io/rock/
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 15:00:20 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 10:37:09 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Zhong",
"Chengliang",
""
],
[
"Yang",
"Chao",
""
],
[
"Qi",
"Jinshan",
""
],
[
"Sun",
"Fuchun",
""
],
[
"Liu",
"Huaping",
""
],
[
"Mu",
"Xiaodong",
""
],
[
"Huang",
"Wenbing",
""
]
] |
new_dataset
| 0.999195 |
2202.00738
|
\c{C}a\u{g}kan Yapar
|
\c{C}a\u{g}kan Yapar, Ron Levie, Gitta Kutyniok, Giuseppe Caire
|
LocUNet: Fast Urban Positioning Using Radio Maps and Deep Learning
|
To appear in ICASSP 2022. arXiv admin note: substantial text overlap
with arXiv:2106.12556
| null | null | null |
cs.LG cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper deals with the problem of localization in a cellular network in a
dense urban scenario. Global Navigation Satellite Systems (GNSS) typically
perform poorly in urban environments, where the likelihood of line-of-sight
conditions is low, and thus alternative localization methods are required for
good accuracy. We present LocUNet: A deep learning method for localization,
based merely on Received Signal Strength (RSS) from Base Stations (BSs), which
does not require any increase in computational complexity at the user device
relative to its standard operations, unlike methods that rely on
time of arrival or angle of arrival information. In the proposed method, the
user to be localized reports the RSS from BSs to a Central Processing Unit
(CPU), which may be located in the cloud. Alternatively, the localization can
be performed locally at the user. Using estimated pathloss radio maps of the
BSs, LocUNet can localize users with state-of-the-art accuracy and enjoys high
robustness to inaccuracies in the radio maps. The proposed method does not
require pre-sampling of the environment; and is suitable for real-time
applications, thanks to the RadioUNet, a neural network-based radio map
estimator. We also introduce two datasets that allow numerical comparisons of
RSS and Time of Arrival (ToA) methods in realistic urban environments.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 20:27:46 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 02:16:57 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Yapar",
"Çağkan",
""
],
[
"Levie",
"Ron",
""
],
[
"Kutyniok",
"Gitta",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
new_dataset
| 0.995162 |
2202.01246
|
Pranav Madadi Dr
|
Pranav Madadi, Jeongho Jeon, Joonyoung Cho, Caleb Lo, Juho Lee,
Jianzhong Zhang
|
PolarDenseNet: A Deep Learning Model for CSI Feedback in MIMO Systems
| null | null | null | null |
cs.IT cs.AI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In multiple-input multiple-output (MIMO) systems, high-resolution channel
state information (CSI) is required at the base station (BS) to ensure optimal
performance, especially in the case of multi-user MIMO (MU-MIMO) systems. In
the absence of channel reciprocity in frequency division duplex (FDD) systems,
the user needs to send the CSI to the BS. Often the large overhead associated
with this CSI feedback in FDD systems becomes the bottleneck in improving the
system performance. In this paper, we propose an AI-based CSI feedback scheme
based on an auto-encoder architecture that encodes the CSI at the user
equipment (UE) into a low-dimensional latent space and decodes it back at the
BS, effectively reducing the feedback
overhead while minimizing the loss during recovery. Our simulation results show
that the AI-based proposed architecture outperforms the state-of-the-art
high-resolution linear combination codebook using the DFT basis adopted in the
5G New Radio (NR) system.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 19:04:49 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Madadi",
"Pranav",
""
],
[
"Jeon",
"Jeongho",
""
],
[
"Cho",
"Joonyoung",
""
],
[
"Lo",
"Caleb",
""
],
[
"Lee",
"Juho",
""
],
[
"Zhang",
"Jianzhong",
""
]
] |
new_dataset
| 0.985029 |
2202.01256
|
Xijun Li
|
Jianye Hao, Jiawen Lu, Xijun Li, Xialiang Tong, Xiang Xiang, Mingxuan
Yuan and Hankz Hankui Zhuo
|
Introduction to The Dynamic Pickup and Delivery Problem Benchmark --
ICAPS 2021 Competition
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The Dynamic Pickup and Delivery Problem (DPDP) is an essential problem within
the logistics domain. So far, research on this problem has mainly focused on
using artificial data which fails to reflect the complexity of real-world
problems. In this draft, we would like to introduce a new benchmark from real
business scenarios as well as a simulator supporting the dynamic evaluation.
The benchmark and simulator have been published and successfully supported the
ICAPS 2021 Dynamic Pickup and Delivery Problem competition, in which 152 teams
participated.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 00:52:16 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Hao",
"Jianye",
""
],
[
"Lu",
"Jiawen",
""
],
[
"Li",
"Xijun",
""
],
[
"Tong",
"Xialiang",
""
],
[
"Xiang",
"Xiang",
""
],
[
"Yuan",
"Mingxuan",
""
],
[
"Zhuo",
"Hankz Hankui",
""
]
] |
new_dataset
| 0.966617 |
2202.01299
|
Yinbin Ma
|
Yinbin Ma and Daniela Tuninetti
|
On Coded Caching Systems with Offline Users
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coded caching is a technique that leverages locally cached contents at the
users to reduce the network's peak-time communication load. Coded caching
achieves significant performance gains compared to uncoded caching schemes and
is thus a promising technique to boost performance in future networks. In the
original model introduced by Maddah-Ali and Niesen (MAN), a server stores
multiple files and is connected to multiple cache-aided users through an
error-free shared link; once the local caches have been filled and all users
have sent their demand to the server, the server can start sending coded
multicast messages to satisfy all users' demands. A practical limitation of the
original MAN model is that it halts if the server does not receive all users'
demands, which is the limiting case of asynchronous coded caching when the
requests of some users arrive with infinite delay. In this paper we formally
define a coded caching system where some users are offline. We propose
achievable and converse bounds for this novel setting and show under which
conditions they meet, thus providing an optimal solution, and when they are to
within a constant multiplicative gap of two. Interestingly, when optimality can
be shown, the optimal load-memory tradeoff depends only on the number of active
users, and not on the total (active plus offline) number of users.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 21:44:00 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Ma",
"Yinbin",
""
],
[
"Tuninetti",
"Daniela",
""
]
] |
new_dataset
| 0.996595 |
2202.01323
|
Yuyan Li
|
Yuyan Li, Zhixin Yan, Ye Duan, Liu Ren
|
PanoDepth: A Two-Stage Approach for Monocular Omnidirectional Depth
Estimation
|
Accepted by International Conference on 3D Vision (3DV). IEEE, 2021
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Omnidirectional 3D information is essential for a wide range of applications
such as Virtual Reality, Autonomous Driving, Robotics, etc. In this paper, we
propose a novel, model-agnostic, two-stage pipeline for omnidirectional
monocular depth estimation. Our proposed framework PanoDepth takes one 360
image as input, produces one or more synthesized views in the first stage, and
feeds the original image and the synthesized images into the subsequent stereo
matching stage. In the second stage, we propose a differentiable Spherical
Warping Layer to handle omnidirectional stereo geometry efficiently and
effectively. By utilizing the explicit stereo-based geometric constraints in
the stereo matching stage, PanoDepth can generate dense high-quality depth. We
conducted extensive experiments and ablation studies to evaluate PanoDepth with
both the full pipeline as well as the individual modules in each stage. Our
results show that PanoDepth outperforms the state-of-the-art approaches by a
large margin for 360 monocular depth estimation.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 23:08:06 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Li",
"Yuyan",
""
],
[
"Yan",
"Zhixin",
""
],
[
"Duan",
"Ye",
""
],
[
"Ren",
"Liu",
""
]
] |
new_dataset
| 0.956617 |
2202.01365
|
Jingyi Xie
|
Jingyi Xie, Rui Yu, Sooyeon Lee, Yao Lyu, Syed Masum Billah, John M.
Carroll
|
Feasibility of Interactive 3D Map for Remote Sighted Assistance
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remote sighted assistance (RSA) has emerged as a conversational assistive
technology, where remote sighted workers, i.e., agents, provide real-time
assistance to users with vision impairments via video-chat-like communication.
Researchers found that agents' lack of environmental knowledge, the difficulty
of orienting users in their surroundings, and the inability to estimate
distances from users' camera feeds are key challenges to sighted agents. To
address these challenges, researchers have suggested assisting agents with
computer vision technologies, especially 3D reconstruction. This paper presents
a high-fidelity prototype of such an RSA, where agents use interactive 3D maps
with localization capability. We conducted a walkthrough study with thirteen
agents and one user with simulated vision impairment using this prototype. The
study revealed that, compared to baseline RSA, the agents were significantly
faster in providing navigational assistance to users, and their mental workload
was significantly reduced -- all indicating the feasibility and promise of 3D
maps in RSA.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 01:38:38 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Xie",
"Jingyi",
""
],
[
"Yu",
"Rui",
""
],
[
"Lee",
"Sooyeon",
""
],
[
"Lyu",
"Yao",
""
],
[
"Billah",
"Syed Masum",
""
],
[
"Carroll",
"John M.",
""
]
] |
new_dataset
| 0.981056 |
2202.01414
|
Wenzhen Zhu
|
Wenzhen Zhu, Negin Sokhandan, Guang Yang, Sujitha Martin, Suchitra
Sathyanarayana
|
DocBed: A Multi-Stage OCR Solution for Documents with Complex Layouts
|
7 pages, 6 figures, The Thirty-Fourth Annual Conference on Innovative
Applications of Artificial Intelligence (IAAI-22), Collocated with AAAI-22
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Digitization of newspapers is of interest for many reasons, including
preservation of history, accessibility, and searchability. While digitization
of documents such as scientific articles and magazines is well covered in the
literature, one of the main challenges for newspapers lies in analyzing their
complex layouts (e.g., articles spanning multiple columns, text interrupted by
images), which is necessary to preserve human reading order. This work
provides a major breakthrough in the digitization of
newspapers on three fronts: first, releasing a dataset of 3000 fully-annotated,
real-world newspaper images from 21 different U.S. states representing an
extensive variety of complex layouts for document layout analysis; second,
proposing layout segmentation as a precursor to existing optical character
recognition (OCR) engines, where multiple state-of-the-art image segmentation
models and several post-processing methods are explored for document layout
segmentation; third, providing a thorough and structured evaluation protocol
for isolated layout segmentation and end-to-end OCR.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 05:21:31 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Zhu",
"Wenzhen",
""
],
[
"Sokhandan",
"Negin",
""
],
[
"Yang",
"Guang",
""
],
[
"Martin",
"Sujitha",
""
],
[
"Sathyanarayana",
"Suchitra",
""
]
] |
new_dataset
| 0.999594 |
2202.01473
|
Peiying Zhang
|
Peiying Zhang, Xue Pang, Yongjing Ni, Haipeng Yao, Xin Li
|
A multi-domain virtual network embedding algorithm with delay prediction
| null | null | null | null |
cs.NI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Virtual network embedding (VNE) is a crucial part of network virtualization
(NV), which aims to map the virtual networks (VNs) to a shared substrate
network (SN). With the emergence of various delay-sensitive applications, how
to improve the delay performance of the system has become a hot topic in
academic circles. Based on extensive research, we propose a multi-domain
virtual network embedding algorithm based on delay prediction (DP-VNE).
Firstly, the candidate physical nodes are selected by estimating the delay of
virtual requests, then particle swarm optimization (PSO) algorithm is used to
optimize the mapping process, so as to reduce the delay of the system. The
simulation results show that compared with the other three advanced algorithms,
the proposed algorithm can significantly reduce the system delay while keeping
other indicators unaffected.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 08:58:49 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Zhang",
"Peiying",
""
],
[
"Pang",
"Xue",
""
],
[
"Ni",
"Yongjing",
""
],
[
"Yao",
"Haipeng",
""
],
[
"Li",
"Xin",
""
]
] |
new_dataset
| 0.977258 |
2202.01619
|
Benyamin Ghojogh
|
Benyamin Ghojogh, Fakhri Karray, Mark Crowley
|
On Manifold Hypothesis: Hypersurface Submanifold Embedding Using
Osculating Hyperspheres
| null | null | null | null |
cs.LG math.AT math.DG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consider a set of $n$ data points in the Euclidean space $\mathbb{R}^d$. This
set is called a dataset in machine learning and data science. The manifold
hypothesis states that the dataset lies on a low-dimensional submanifold with
high probability. All dimensionality reduction and manifold learning methods
rely on the manifold hypothesis. In this paper, we show that the dataset
lies on an embedded hypersurface submanifold which is locally
$(d-1)$-dimensional. Hence, we show that the manifold hypothesis holds at least
for the embedding dimensionality $d-1$. Using an induction in a pyramid
structure, we also extend the embedding dimensionality to lower embedding
dimensionalities to show the validity of manifold hypothesis for embedding
dimensionalities $\{1, 2, \dots, d-1\}$. For embedding the hypersurface, we
first construct the $d$ nearest neighbors graph for data. For every point, we
fit an osculating hypersphere $S^{d-1}$ using its neighbors where this
hypersphere is osculating to a hypothetical hypersurface. Then, using surgery
theory, we apply surgery on the osculating hyperspheres to obtain $n$
hyper-caps. We connect the hyper-caps to one another using partial
hyper-cylinders. By connecting all parts, the embedded hypersurface is obtained
as the disjoint union of these elements. We discuss the geometrical
characteristics of the embedded hypersurface, such as having boundary, its
topology, smoothness, boundedness, orientability, compactness, and injectivity.
Some discussion is also provided on the linearity and structure of the data. This
paper is the intersection of several fields of science including machine
learning, differential geometry, and algebraic topology.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 14:46:34 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Ghojogh",
"Benyamin",
""
],
[
"Karray",
"Fakhri",
""
],
[
"Crowley",
"Mark",
""
]
] |
new_dataset
| 0.995138 |
2202.01747
|
Nikolaos-Antonios Ypsilantis
|
Nikolaos-Antonios Ypsilantis, Noa Garcia, Guangxing Han, Sarah
Ibrahimi, Nanne Van Noord, Giorgos Tolias
|
The Met Dataset: Instance-level Recognition for Artworks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work introduces a dataset for large-scale instance-level recognition in
the domain of artworks. The proposed benchmark exhibits a number of different
challenges such as large inter-class similarity, long tail distribution, and
many classes. We rely on the open access collection of The Met museum to form a
large training set of about 224k classes, where each class corresponds to a
museum exhibit with photos taken under studio conditions. Testing is primarily
performed on photos taken by museum guests depicting exhibits, which introduces
a distribution shift between training and testing. Testing is additionally
performed on a set of images not related to Met exhibits making the task
resemble an out-of-distribution detection problem. The proposed benchmark
follows the paradigm of other recent datasets for instance-level recognition on
different domains to encourage research on domain independent approaches. A
number of suitable approaches are evaluated to offer a testbed for future
comparisons. Self-supervised and supervised contrastive learning are
effectively combined to train the backbone, which is used for non-parametric
classification and shown to be a promising direction. Dataset webpage:
http://cmp.felk.cvut.cz/met/
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 18:13:30 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Ypsilantis",
"Nikolaos-Antonios",
""
],
[
"Garcia",
"Noa",
""
],
[
"Han",
"Guangxing",
""
],
[
"Ibrahimi",
"Sarah",
""
],
[
"Van Noord",
"Nanne",
""
],
[
"Tolias",
"Giorgos",
""
]
] |
new_dataset
| 0.999739 |
2202.01764
|
Byunghoon So
|
ByungHoon So, Kyuhong Byun, Kyungwon Kang, Seongjin Cho
|
JaQuAD: Japanese Question Answering Dataset for Machine Reading
Comprehension
|
11 pages, 3 figures, 6 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Question Answering (QA) is a task in which a machine understands a given
document and a question to find an answer. Despite impressive progress in the
NLP area, QA is still a challenging problem, especially for non-English
languages due to the lack of annotated datasets. In this paper, we present the
Japanese Question Answering Dataset, JaQuAD, which is annotated by humans.
JaQuAD consists of 39,696 extractive question-answer pairs on Japanese
Wikipedia articles. We finetuned a baseline model, which achieves an F1 score
of 78.92% and an exact match (EM) score of 63.38% on the test set. The dataset
and our experiments are
available at https://github.com/SkelterLabsInc/JaQuAD.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 18:40:25 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"So",
"ByungHoon",
""
],
[
"Byun",
"Kyuhong",
""
],
[
"Kang",
"Kyungwon",
""
],
[
"Cho",
"Seongjin",
""
]
] |
new_dataset
| 0.999803 |
1909.03212
|
Praneet Dutta
|
Praneet Dutta, Joe Cheuk, Jonathan S Kim, Massimo Mascaro
|
AutoML for Contextual Bandits
|
Presented(peer-reviewed) at the REVEAL Workshop at the ACM RecSys
Conference Copenhagen'19
[https://sites.google.com/view/reveal2019/proceedings]
| null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Contextual bandits are one of the most widely used techniques in applications
such as personalization, recommendation systems, mobile health, and causal
marketing. As a dynamic approach, they can be more efficient than standard A/B
testing in minimizing regret. We propose an end-to-end automated
meta-learning pipeline to approximate the optimal Q function for contextual
bandits problems. We see that our model is able to perform much better than
random exploration, being more regret efficient and able to converge with a
limited number of samples, while remaining very general and easy to use due to
the meta-learning approach. We used a linearly annealed e-greedy exploration
policy to define the exploration vs exploitation schedule. We tested the system
on a synthetic environment to characterize it fully and we evaluated it on some
open source datasets to benchmark against prior work. We see that our model
outperforms or performs comparatively to other models while requiring no tuning
nor feature engineering.
|
[
{
"version": "v1",
"created": "Sat, 7 Sep 2019 08:18:03 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 01:44:30 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Dutta",
"Praneet",
""
],
[
"Cheuk",
"Joe",
""
],
[
"Kim",
"Jonathan S",
""
],
[
"Mascaro",
"Massimo",
""
]
] |
new_dataset
| 0.996597 |
2007.10529
|
Jinyue Song
|
Jinyue Song, Tianbo Gu, Zheng Fang, Xiaotao Feng, Yunjie Ge, Hao Fu,
Pengfei Hu, Prasant Mohapatra
|
Blockchain Meets COVID-19: A Framework for Contact Information Sharing
and Risk Notification System
|
11 pages, 7 figures, this work has been accepted by IEEE
International Conference on Mobile Ad-Hoc and Smart Systems (MASS) 2021
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
COVID-19 is one of the most severe global epidemics in human history. Even
though there are specific medications and vaccines to curb the epidemic,
tracing and isolating the infection source is the best option to slow the
virus spread and reduce infection and death rates. There are three
disadvantages to existing contact tracing systems: 1. User data is stored in a
centralized database that
could be stolen and tampered with, 2. User's confidential personal identity may
be revealed to a third party or organization, 3. Existing contact tracing
systems only focus on information sharing from one dimension, such as
location-based tracing, which significantly limits the effectiveness of such
systems.
We propose a global COVID-19 information sharing and risk notification system
that utilizes the Blockchain, Smart Contract, and Bluetooth. To protect user
privacy, we design a novel Blockchain-based platform that can share consistent
and non-tampered contact tracing information from multiple dimensions, such as
location-based for indirect contact and Bluetooth-based for direct contact.
Hierarchical smart contract architecture is also designed to achieve global
agreements from users about how to process and utilize user data, thereby
enhancing the data usage transparency. Furthermore, we propose a mechanism to
protect user identity privacy from multiple aspects. More importantly, our
system can notify the users about the exposure risk via smart contracts. We
implement a prototype system to conduct extensive measurements to demonstrate
the feasibility and effectiveness of our system.
|
[
{
"version": "v1",
"created": "Mon, 20 Jul 2020 23:36:46 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Feb 2022 19:14:55 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Song",
"Jinyue",
""
],
[
"Gu",
"Tianbo",
""
],
[
"Fang",
"Zheng",
""
],
[
"Feng",
"Xiaotao",
""
],
[
"Ge",
"Yunjie",
""
],
[
"Fu",
"Hao",
""
],
[
"Hu",
"Pengfei",
""
],
[
"Mohapatra",
"Prasant",
""
]
] |
new_dataset
| 0.997464 |
2101.07312
|
Tobias Huber
|
Tobias Huber, Benedikt Limmer, Elisabeth Andr\'e
|
Benchmarking Perturbation-based Saliency Maps for Explaining Atari
Agents
| null | null | null | null |
cs.LG cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the most prominent methods for explaining the behavior of Deep
Reinforcement Learning (DRL) agents is the generation of saliency maps that
show how much each pixel contributed to the agent's decision. However, there is
no work that computationally evaluates and compares the fidelity of different
saliency map approaches specifically for DRL agents. It is particularly
challenging to computationally evaluate saliency maps for DRL agents since
their decisions are part of an overarching policy. For instance, the output
neurons of value-based DRL algorithms encode both the value of the current
state as well as the value of doing each action in this state. This ambiguity
should be considered when evaluating saliency maps for such agents. In this
paper, we compare five popular perturbation-based approaches to create saliency
maps for DRL agents trained on four different Atari 2600 games. The approaches
are compared using two computational metrics: dependence on the learned
parameters of the agent (sanity checks) and fidelity to the agent's reasoning
(input degradation). During the sanity checks, we encounter issues with one
approach and propose a solution to fix these issues. For fidelity, we identify
two main factors that influence which saliency approach should be chosen in
which situation.
|
[
{
"version": "v1",
"created": "Mon, 18 Jan 2021 19:57:52 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Jun 2021 09:02:25 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Feb 2022 16:46:07 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Huber",
"Tobias",
""
],
[
"Limmer",
"Benedikt",
""
],
[
"André",
"Elisabeth",
""
]
] |
new_dataset
| 0.998165 |
2105.09827
|
Luca Ferrarini
|
Luca Ferrarini, Stefano Gualandi
|
Total Coloring and Total Matching: Polyhedra and Facets
|
29 pages, 5 figures
| null | null | null |
cs.DM math.CO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A total coloring of a graph $G = (V, E)$ is an assignment of colors to
vertices and edges such that neither two adjacent vertices nor two incident
edges get the same color, and, for each edge, the end-points and the edge
itself receive different colors. Any valid total coloring induces a partition
of the elements of $G$ into total matchings, which are defined as subsets of
vertices and edges that can take the same color. In this paper, we propose
Integer Linear Programming models for both the Total Coloring and the Total
Matching problems, and we study the strength of the corresponding Linear
Programming relaxations. The total coloring is formulated as the problem of
finding the minimum number of total matchings that cover all the graph
elements. This covering formulation can be solved by a Column Generation
algorithm, where the pricing subproblem corresponds to the Weighted Total
Matching Problem. Hence, we study the Total Matching Polytope. We introduce
three families of nontrivial valid inequalities: vertex-clique inequalities
based on standard clique inequalities of the Stable Set Polytope,
congruent-$2k3$ cycle inequalities based on the parity of the vertex set
induced by the cycle, and even-clique inequalities induced by complete
subgraphs of even order. We prove that congruent-$2k3$ cycle inequalities are
facet-defining only when $k = 4$, while the vertex-clique and even-cliques are
always facet-defining. Finally, we present preliminary computational results of
a Column Generation algorithm for the Total Coloring Problem and a Cutting
Plane algorithm for the Total Matching Problem.
|
[
{
"version": "v1",
"created": "Thu, 20 May 2021 15:21:31 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 15:26:37 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Ferrarini",
"Luca",
""
],
[
"Gualandi",
"Stefano",
""
]
] |
new_dataset
| 0.959074 |
2106.13896
|
Alireza Khodaei
|
Alireza Khodaei and Jitender Deogun
|
Optical MIMO Communication Using Holographic Spectral Multiplexing of
Pulsed Ultrashort Laser
|
Needs more improvement
| null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we introduce Holographic Spectral Multiplexing (HSM) as a
novel technique to enable multiple-input multiple-output (MIMO) communication
in optical networks. HSM uses the spectral space of ultrashort laser pulses to
create line codes in the form of 2D holograms. The pulse processing is
performed in the temporal Fourier domain by spatially dispersing the pulse
frequency components in a spectral processing device (SPD). The 2D holograms
are composed of the patterns of intensity disparities that an SLM inscribes on
the spectrally-decomposed Fourier plane of the pulse. The holographic line
codes defined in this way transform the ultrashort laser pulses into
high-entropy data symbols, hence enhancing the communication's spectral
efficiency. Unlike conventional optical multiplexing schemes (e.g., TDM, WDM,
or SDM), HSM does not physically or abstractly separate the communication
propagation space into subchannels. Rather, HSM realizes a MIMO communication
paradigm by allowing the photonic waves under the pulse envelope to propagate
in the same space so they scatter and interfere by chromatic dispersion. This
allows HSM to form beams between the pixels of SLM at the sender and receiver
sides and optimize the beam to adapt to channel scattering situations. In this
way, HSM delivers a rate gain that in the best case exponentially increases the
information rate of communication.
|
[
{
"version": "v1",
"created": "Fri, 25 Jun 2021 21:57:55 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 17:39:58 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Khodaei",
"Alireza",
""
],
[
"Deogun",
"Jitender",
""
]
] |
new_dataset
| 0.984136 |
2108.01466
|
Md. Shirajum Munir
|
Md. Shirajum Munir, Ki Tae Kim, Kyi Thar, Dusit Niyato, and Choong
Seon Hong
|
Risk Adversarial Learning System for Connected and Autonomous Vehicle
Charging
|
Accepted Article By IEEE Internet of Things Journal,
DOI:10.1109/JIOT.2022.3149038 (In Press)
| null |
10.1109/JIOT.2022.3149038
| null |
cs.AI cs.CE cs.LG cs.MA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, the design of a rational decision support system (RDSS) for a
connected and autonomous vehicle charging infrastructure (CAV-CI) is studied.
In the considered CAV-CI, the distribution system operator (DSO) deploys
electric vehicle supply equipment (EVSE) to provide an EV charging facility for
human-driven connected vehicles (CVs) and autonomous vehicles (AVs). The
charging request by a human-driven EV becomes irrational when it demands more
energy and a longer charging period than it actually needs. Therefore, the
scheduling policy of each EVSE must adaptively accommodate irrational charging
requests to satisfy the charging demand of both CVs and AVs. To tackle this, we
formulate an RDSS problem for the DSO, where the objective is to maximize the
charging capacity utilization by satisfying the laxity risk of the DSO. Thus,
we devise a rational reward maximization problem to adapt the irrational
behavior by CVs in a data-informed manner. We propose a novel risk adversarial
multi-agent learning system (RAMALS) for CAV-CI to solve the formulated RDSS
problem. In RAMALS, the DSO acts as a centralized risk adversarial agent (RAA)
for informing the laxity risk to each EVSE. Subsequently, each EVSE plays the
role of a self-learner agent to adaptively schedule its own EV sessions by
adopting advice from the RAA. Experimental results show that the proposed RAMALS
affords around 46.6% improvement in charging rate, about 28.6% improvement in
the EVSE's active charging time and at least 33.3% more energy utilization, as
compared to a currently deployed ACN EVSE system, and other baselines.
|
[
{
"version": "v1",
"created": "Mon, 2 Aug 2021 02:38:15 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 11:33:35 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Munir",
"Md. Shirajum",
""
],
[
"Kim",
"Ki Tae",
""
],
[
"Thar",
"Kyi",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Hong",
"Choong Seon",
""
]
] |
new_dataset
| 0.967876 |
2112.12693
|
Martin Vassor
|
Zak Cutner and Nobuko Yoshida and Martin Vassor
|
Deadlock-free asynchronous message reordering in Rust with multiparty
session types
|
Full-version, 24 pages. Short version to appear in PPoPP 2022.
Updated according to the latest modifications of the camera-ready conference
version
| null | null | null |
cs.PL cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Rust is a modern systems language focused on performance and reliability.
Complementing Rust's promise to provide "fearless concurrency", developers
frequently exploit asynchronous message passing. Unfortunately, arbitrarily
ordering sending and receiving messages to maximise computation-communication
overlap (a popular optimisation to message-passing applications) opens up a
Pandora's box of further subtle concurrency bugs.
To guarantee deadlock-freedom by construction, we present Rumpsteak: a new
Rust framework based on multiparty session types. Previous session type
implementations in Rust are either built upon synchronous and blocking
communication and/or limited to two-party interactions. Crucially, none support
the arbitrary ordering of messages for efficiency.
Rumpsteak instead targets asynchronous async/await code. Its unique ability
is allowing developers to arbitrarily order send/receive messages while
preserving deadlock-freedom. For this, Rumpsteak incorporates two recent
advanced session type theories: (1) k-multiparty compatibility (kmc), which
globally verifies the safety of a set of participants, and (2) asynchronous
multiparty session subtyping, which locally verifies optimisations in the
context of a single participant. Specifically, we propose a novel algorithm for
asynchronous subtyping that is both sound and decidable.
We first evaluate the performance and expressiveness of Rumpsteak against
three previous Rust implementations. We discover that Rumpsteak is around
1.7--8.6x more efficient and can safely express many more examples by virtue of
offering arbitrary message ordering. Secondly, we analyse the complexity of our
new algorithm and benchmark it against kmc and a binary session subtyping
algorithm. We find they are exponentially slower than Rumpsteak's.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 16:40:58 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 17:00:53 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Cutner",
"Zak",
""
],
[
"Yoshida",
"Nobuko",
""
],
[
"Vassor",
"Martin",
""
]
] |
new_dataset
| 0.997613 |
2201.11040
|
Stephanie Weirich
|
Pritam Choudhury and Harley Eades III and Stephanie Weirich
|
A Dependent Dependency Calculus (Extended Version)
|
Extended version of paper published in ESOP 2022, 2-7 April 2022,
Munich, Germany
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Over twenty years ago, Abadi et al. established the Dependency Core Calculus
(DCC) as a general purpose framework for analyzing dependency in typed
programming languages. Since then, dependency analysis has shown many practical
benefits to language design: its results can help users and compilers enforce
security constraints, eliminate dead code, among other applications. In this
work, we present a Dependent Dependency Calculus (DDC), which extends this
general idea to the setting of a dependently-typed language. We use this
calculus to track both run-time and compile-time irrelevance, enabling faster
type-checking and program execution.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 16:28:05 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 14:59:20 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Choudhury",
"Pritam",
""
],
[
"Eades",
"Harley",
"III"
],
[
"Weirich",
"Stephanie",
""
]
] |
new_dataset
| 0.997236 |
2202.00788
|
Jiawei Xu
|
Jiawei Xu, Diego S. D'Antonio, David Salda\~na
|
Modular Multi-Rotors: From Quadrotors to Fully-Actuated Aerial Vehicles
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Traditional aerial vehicles have specific characteristics to perform specific
tasks. For instance, in aerial transportation, the vehicles are limited with a
maximum payload that cannot be extended to transport heavier objects. We
propose a versatile modular robotic system that can increase its payload and
controllable degrees of freedom by reconfiguring heterogeneous modules; we call
it H-ModQuad. The system consists of cuboid modules, propelled by quadrotors
with tilted propellers that can generate forces in different directions. We
present two module designs with different actuation properties that enhance the
capabilities of the assembled robot. By assembling different types of modules,
H-ModQuad can increase its controllable degrees of freedom from 4 to 5 and 6
depending on its configuration. We model the modular vehicle and propose a
general control strategy for all possible numbers of controllable degrees of
freedom. We extend the concept of the actuation ellipsoid to find the best
reference orientation that can maximize the performance of the structure. Our
approach is validated with experiments using actual robots, showing that the
structure can perform independent actuation for rotation and translation.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 22:12:19 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Xu",
"Jiawei",
""
],
[
"D'Antonio",
"Diego S.",
""
],
[
"Saldaña",
"David",
""
]
] |
new_dataset
| 0.990548 |
2202.00830
|
Wesley Joon-Wie Tann
|
Wesley Joon-Wie Tann
|
Quantum Remote Entanglement for Medium-Free Secure Communication?
| null | null | null | null |
cs.ET quant-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Present-day quantum communication predominantly depends on trusted relays
(e.g., quantum repeaters, low-Earth-orbit satellite) connected by optical fiber
cables to transmit information. However, recent evidence supports a decades-old
concept that quantum entanglement, harnessed by current quantum communication
systems, does not necessarily rely on a physical relay medium. In modern
quantum communication networks, this trusted relay infrastructure is (1)
susceptible to security attacks, (2) limited by the channel capacity, (3)
subject to decoherence loss, and (4) expensive to set up. The instantaneous and
faster-than-light activities of quantum entanglement occurring in quantum
communication suggest that it is governed by some non-local nature. By
contrast, neither ground nor space relays have been demonstrated to
embody it. It is proposed in this paper that the non-locality nature of quantum
theory governs quantum entanglement; elementary particles, components of a
universal quantum body, can achieve remote entanglement regardless of a
physical medium or spatial proximity. Evidence and theory supporting remote
entanglement in superconducting quantum systems (entanglement fidelities for
communication in particular) are presented. One such particle, the photon,
representing a basic unit of quantum information, qubit $|\psi\rangle = \alpha
|0\rangle + \beta |1\rangle$, consists of real continuous values in complex
numbers $(\alpha, \beta)$ with infinite precision. These values $(\alpha,
\beta)$ can account for the distinctiveness of qubits and result in an identity
$QuID$ that possibly supports remote entanglement. New approaches to
medium-free secure quantum communication are suggested by running simulations
and actual quantum computations on a quantum circuit.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 00:53:19 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Tann",
"Wesley Joon-Wie",
""
]
] |
new_dataset
| 0.996296 |
2202.00874
|
Ke Chen
|
Ke Chen, Xingjian Du, Bilei Zhu, Zejun Ma, Taylor Berg-Kirkpatrick,
Shlomo Dubnov
|
HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound
Classification and Detection
|
Preprint version for ICASSP 2022, Singapore
| null | null | null |
cs.SD cs.AI cs.IR cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Audio classification is an important task of mapping audio samples into their
corresponding labels. Recently, the transformer model with self-attention
mechanisms has been adopted in this field. However, existing audio transformers
require large GPU memory and long training times, while also relying on
pretrained vision models to achieve high performance, which limits the model's
scalability in audio tasks. To combat these problems, we introduce HTS-AT: an
audio transformer with a hierarchical structure to reduce the model size and
training time. It is further combined with a token-semantic module to map final
outputs into class featuremaps, thus enabling the model for the audio event
detection (i.e. localization in time). We evaluate HTS-AT on three datasets of
audio classification where it achieves new state-of-the-art (SOTA) results on
AudioSet and ESC-50, and equals the SOTA on Speech Command V2. It also achieves
better performance in event localization than the previous CNN-based models.
Moreover, HTS-AT requires only 35% model parameters and 15% training time of
the previous audio transformer. These results demonstrate the high performance
and high efficiency of HTS-AT.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 04:49:14 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Chen",
"Ke",
""
],
[
"Du",
"Xingjian",
""
],
[
"Zhu",
"Bilei",
""
],
[
"Ma",
"Zejun",
""
],
[
"Berg-Kirkpatrick",
"Taylor",
""
],
[
"Dubnov",
"Shlomo",
""
]
] |
new_dataset
| 0.997509 |
2202.00878
|
Younes Karimi
|
Younes Karimi, Anna Squicciarini, Peter K. Forster, Kira M. Leavitt
|
A Longitudinal Dataset of Twitter ISIS Users
|
10 pages, 7 figures; Submitted to the 16th International Conference
on Web and Social Media (AAAI ICWSM-2022)
| null | null | null |
cs.SI cs.AI cs.CL cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a large longitudinal dataset of tweets from two sets of users that
are suspected to be affiliated with ISIS. These sets of users are identified
based on a prior study and a campaign aimed at shutting down ISIS Twitter
accounts. These users have engaged with known ISIS accounts at least once
during 2014-2015 and are still active as of 2021. Some of them have directly
supported the ISIS users and their tweets by retweeting them, and some of the
users that have quoted ISIS tweets have uncertain connections to ISIS seed
accounts. This study and the dataset represent a unique approach to analyzing
ISIS data. Although much research exists on ISIS online activities, few studies
have focused on individual accounts. Our approach to validating accounts as
well as developing a framework for differentiating accounts' functionality
(e.g., propaganda versus operational planning) offers a foundation for future
research. We perform some descriptive statistics and preliminary analyses on
our collected data to provide deeper insight and highlight the significance and
practicality of such analyses. We further discuss several cross-disciplinary
potential use cases and research directions.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 05:03:05 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Karimi",
"Younes",
""
],
[
"Squicciarini",
"Anna",
""
],
[
"Forster",
"Peter K.",
""
],
[
"Leavitt",
"Kira M.",
""
]
] |
new_dataset
| 0.999821 |
2202.00890
|
Mikhail V. Saramud
|
K.A. Bashmur, V.S. Tynchenko, V.V. Bukhtoyarov, M.V. Saramud
|
Robot-printer for creating elements of technological equipment for the
production of components of biofuel compositions
|
10 pages, in Russian, 6 figures
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study is devoted to the search for new scientific and technical
solutions in the field of renewable energy sources, in particular biofuels.
Biomass is the main fuel for green energy, accounting for two thirds of the
energy produced from renewable sources. The further development of the industry
depends on the improvement of the equipment and technologies used in it. On the
example of a cleaning apparatus, a new technology for prototyping its parts
using a robotic module is shown and tested. The use of plastics for parts of
technological equipment is a modern trend, attributable to the low adhesion
strength of various substances to the surfaces of these parts owing to poor
wettability and the low surface energy of these materials compared to
metals.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 06:03:15 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Bashmur",
"K. A.",
""
],
[
"Tynchenko",
"V. S.",
""
],
[
"Bukhtoyarov",
"V. V.",
""
],
[
"Saramud",
"M. V.",
""
]
] |
new_dataset
| 0.99663 |
2202.00979
|
Vali Tawosi
|
Vali Tawosi, Afnan Al-Subaihin, Rebecca Moussa, Federica Sarro
|
A Versatile Dataset of Agile Open Source Software Projects
|
5 pages, 1 figure
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Agile software development is nowadays a widely adopted practise in both
open-source and industrial software projects. Agile teams typically heavily
rely on issue management tools to document new issues and keep track of
outstanding ones, in addition to storing their technical details, effort
estimates, assignment to developers, and more. Previous work utilised the
historical information stored in issue management systems for various purposes;
however, when researchers make their empirical data public, it is usually
relevant solely to the study's objective. In this paper, we present a more
holistic and versatile dataset containing a wealth of information on more than
500,000 issues from 44 open-source Agile software, making it well-suited to
several research avenues, and cross-analyses therein, including effort
estimation, issue prioritization, issue assignment and many more. We make this
data publicly available on GitHub to facilitate ease of use, maintenance, and
extensibility.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 11:51:14 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Tawosi",
"Vali",
""
],
[
"Al-Subaihin",
"Afnan",
""
],
[
"Moussa",
"Rebecca",
""
],
[
"Sarro",
"Federica",
""
]
] |
new_dataset
| 0.999274 |
2202.01031
|
Cise Midoglu
|
Cise Midoglu, Steven A. Hicks, Vajira Thambawita, Tomas Kupka, P{\aa}l
Halvorsen
|
MMSys'22 Grand Challenge on AI-based Video Production for Soccer
| null | null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Soccer has a considerable market share of the global sports industry, and the
interest in viewing videos from soccer games continues to grow. In this
respect, it is important to provide game summaries and highlights of the main
game events. However, annotating and producing events and summaries often
require expensive equipment and a lot of tedious, cumbersome, manual labor.
Therefore, automating the video production pipeline providing fast game
highlights at a much lower cost is seen as the "holy grail". In this context,
recent developments in Artificial Intelligence (AI) technology have shown great
potential. Still, state-of-the-art approaches are far from being adequate for
practical scenarios that have demanding real-time requirements, as well as
strict performance criteria (where at least the detection of official events
such as goals and cards must be 100% accurate). In addition, event detection
should be thoroughly enhanced by annotation and classification, proper
clipping, generating short descriptions, selecting appropriate thumbnails for
highlight clips, and finally, combining the event highlights into an overall
game summary, similar to what is commonly aired during sports news. Even though
the event tagging operation has by far received the most attention, an
end-to-end video production pipeline also includes various other operations
which serve the overall purpose of automated soccer analysis. This challenge
aims to assist the automation of such a production pipeline using AI. In
particular, we focus on the enhancement operations that take place after an
event has been detected, namely event clipping (Task 1), thumbnail selection
(Task 2), and game summarization (Task 3). Challenge website:
https://mmsys2022.ie/authors/grand-challenge.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 13:53:42 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Midoglu",
"Cise",
""
],
[
"Hicks",
"Steven A.",
""
],
[
"Thambawita",
"Vajira",
""
],
[
"Kupka",
"Tomas",
""
],
[
"Halvorsen",
"Pål",
""
]
] |
new_dataset
| 0.999292 |
2202.01037
|
Monica M. Wilhelmus
|
Sara Oliveira Santos, Francisco Cuenca-Jim\'enez, P. Antonio
Gomez-Valdez, Oscar Morales-Lopez, Monica M. Wilhelmus
|
RoboKrill : a metachronal drag-based swimmer robot
| null | null | null | null |
cs.RO physics.bio-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Marine exploration is essential to understanding ocean processes and
organisms. While the use of current unmanned underwater vehicles has enabled
many discoveries, there are still plenty of limitations toward exploring
complex environments. Bio-inspired robots are a promising solution for highly
maneuverable underwater swimming at moderate speeds. Krill, especially, are
efficient swimmers in the intermediate Reynolds number regime and can inform
engineering solutions for ocean exploration. In this paper, we present the
design, manufacture, and validation of a new krill-inspired, metachronal,
drag-based robotic system. By combining active and passive actuation of the
joints with 3D printed parts, our unique design recreates the swimming
kinematics of Euphausia superba in a compact and reproducible robotic platform.
The motion of the anterior and posterior appendage segments is achieved using
servo motors and a multi-link mechanism, while the out-of-plane motion of the
biramous distal segments is attained via fluid-structure interactions. Going
forward, our platform will be leveraged to study metachronal, drag-based
swimmers across taxa to identify unifying success mechanisms at different
scales, facilitating the development of a new generation of underwater robots.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 14:02:33 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Santos",
"Sara Oliveira",
""
],
[
"Cuenca-Jiménez",
"Francisco",
""
],
[
"Gomez-Valdez",
"P. Antonio",
""
],
[
"Morales-Lopez",
"Oscar",
""
],
[
"Wilhelmus",
"Monica M.",
""
]
] |
new_dataset
| 0.999392 |
2202.01155
|
Jana G\"otze
|
Jana G\"otze, Maike Paetzel-Pr\"usmann, Wencke Liermann, Tim Diekmann,
David Schlangen
|
The slurk Interaction Server Framework: Better Data for Better Dialog
Models
|
submitted to LREC 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the slurk software, a lightweight interaction server for
setting up dialog data collections and running experiments. Slurk enables a
multitude of settings including text-based, speech and video interaction
between two or more humans or humans and bots, and a multimodal display area
for presenting shared or private interactive context. The software is
implemented in Python with an HTML and JS frontend that can easily be adapted
to individual needs. It also provides a setup for pairing participants on
common crowdworking platforms such as Amazon Mechanical Turk and some example
bot scripts for common interaction scenarios.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 17:30:33 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Götze",
"Jana",
""
],
[
"Paetzel-Prüsmann",
"Maike",
""
],
[
"Liermann",
"Wencke",
""
],
[
"Diekmann",
"Tim",
""
],
[
"Schlangen",
"David",
""
]
] |
new_dataset
| 0.96232 |
2202.01176
|
Sanja \v{S}\'cepanovi\'c
|
Sanja \v{S}\'cepanovi\'c, Luca Maria Aiello, Deirdre Barrett, Daniele
Quercia
|
Epidemic Dreams: Dreaming about health during the COVID-19 pandemic
| null | null |
10.1098/rsos.211080
| null |
cs.SI cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The continuity hypothesis of dreams suggests that the content of dreams is
continuous with the dreamer's waking experiences. Given the unprecedented
nature of the experiences during COVID-19, we studied the continuity hypothesis
in the context of the pandemic. We implemented a deep-learning algorithm that
can extract mentions of medical conditions from text and applied it to two
datasets collected during the pandemic: 2,888 dream reports (dreaming life
experiences), and 57M tweets mentioning the pandemic (waking life experiences).
The health expressions common to both sets were typical COVID-19 symptoms
(e.g., cough, fever, and anxiety), suggesting that dreams reflected people's
real-world experiences. The health expressions that distinguished the two sets
reflected differences in thought processes: expressions in waking life
reflected a linear and logical thought process and, as such, described
realistic symptoms or related disorders (e.g., nasal pain, SARS, H1N1); those
in dreaming life reflected a thought process closer to the visual and emotional
spheres and, as such, described either conditions unrelated to the virus (e.g.,
maggots, deformities, snakebites), or conditions of surreal nature (e.g., teeth
falling out, body crumbling into sand). Our results confirm that dream reports
represent an understudied yet valuable source of people's health experiences in
the real world.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 18:09:06 GMT"
}
] | 2022-02-03T00:00:00 |
[
[
"Šćepanović",
"Sanja",
""
],
[
"Aiello",
"Luca Maria",
""
],
[
"Barrett",
"Deirdre",
""
],
[
"Quercia",
"Daniele",
""
]
] |
new_dataset
| 0.997748 |
1506.05068
|
Kazuhisa Fujita Dr.
|
Kazuhisa Fujita
|
Extract an essential skeleton of a character as a graph from a character
image
| null |
International Journal of Computer Science Issues 10, 5, 35-39,
2013
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims to make a graph representing an essential skeleton of a
character from an image that includes a machine printed or a handwritten
character using growing neural gas (GNG) method and relative network graph
(RNG) algorithm. The visual system in our brain can recognize printed
characters and handwritten characters easily, robustly, and precisely. How does
our brain robustly recognize characters? The visual processing in our brain
uses the essential features of an object, such as crosses and corners. These
features will be helpful for character recognition by a computer. However,
extraction of the features is difficult. If the skeleton of a character is
represented as a graph, we can more easily extract the features. To extract the
skeleton of a character as a graph from an image, this paper proposes a new
approach using the GNG and RNG algorithms. I succeeded in extracting skeleton
graphs from images of distorted, noisy, and handwritten characters.
|
[
{
"version": "v1",
"created": "Sat, 13 Jun 2015 14:25:54 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Feb 2022 02:21:03 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Fujita",
"Kazuhisa",
""
]
] |
new_dataset
| 0.99212 |
1902.11186
|
Andreas Varga
|
Andreas Varga
|
Fault detection and diagnosis: computational issues and tools
|
12 pages. A shorter version of this article appeared in the
Encyclopedia of Systems and Control (2019)
| null |
10.1007/978-1-4471-5102-9_100055-1
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A representative set of fault diagnosis problems is formulated for linear
time-invariant systems with additive faults. For all formulated problems,
general existence conditions of their solutions are given. An overview of
recent developments of computational methods for the synthesis of fault
detection filters is presented and available software tools are described.
|
[
{
"version": "v1",
"created": "Thu, 28 Feb 2019 16:12:08 GMT"
},
{
"version": "v2",
"created": "Fri, 3 May 2019 11:02:55 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Nov 2019 13:19:31 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Feb 2022 11:33:11 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Varga",
"Andreas",
""
]
] |
new_dataset
| 0.996195 |
2012.08909
|
Subhrangsu Mandal
|
Subhrangsu Mandal, Arobinda Gupta
|
Maximum 0-1 Timed Matching on Temporal Graphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal graphs are graphs where the topology and/or other properties of the
graph change with time. They have been used to model applications with temporal
information in various domains. Problems on static graphs become more
challenging to solve in temporal graphs because of dynamically changing
topology, and many recent works have explored graph problems on temporal
graphs. In this paper, we define a type of matching called {\em 0-1 timed
matching} for temporal graphs, and investigate the problem of finding a {\em
maximum 0-1 timed matching} for different classes of temporal graphs. We first
prove that the problem is NP-Complete for rooted temporal trees when each edge
is associated with two or more time intervals. We then propose an $O(n \log n)$
time algorithm for the problem on a rooted temporal tree with $n$ nodes when
each edge is associated with exactly one time interval. The problem is then
shown to be NP-Complete also for bipartite temporal graphs even when each edge
is associated with a single time interval and degree of each node is bounded by
a constant $k \geq 3$. We next investigate approximation algorithms for the
problem for temporal graphs where each edge is associated with more than one
time intervals. It is first shown that there is no
$\frac{1}{n^{1-\epsilon}}$-factor approximation algorithm for the problem for
any $\epsilon > 0$ even on a rooted temporal tree with $n$ nodes unless NP =
ZPP. We then present a $\frac{5}{2\mathcal{N}^* + 3}$-factor approximation
algorithm for the problem for general temporal graphs where $\mathcal{N^*}$ is
the average number of edges overlapping in time with each edge in the temporal
graph. The same algorithm is also a constant-factor approximation algorithm for
degree bounded temporal graphs.
|
[
{
"version": "v1",
"created": "Wed, 16 Dec 2020 12:40:21 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Feb 2022 13:12:55 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Mandal",
"Subhrangsu",
""
],
[
"Gupta",
"Arobinda",
""
]
] |
new_dataset
| 0.970688 |
2109.10252
|
Surya Kant Sahu
|
Surya Kant Sahu, Sai Mitheran, Juhi Kamdar, Meet Gandhi
|
Audiomer: A Convolutional Transformer For Keyword Spotting
|
The results and claims made are incorrect due to data leakage and an
erroneous split of datasets
| null | null | null |
cs.LG cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers have seen an unprecedented rise in Natural Language Processing
and Computer Vision tasks. However, in audio tasks, they are either infeasible
to train due to extremely large sequence length of audio waveforms or incur a
performance penalty when trained on Fourier-based features. In this work, we
introduce an architecture, Audiomer, where we combine 1D Residual Networks with
Performer Attention to achieve state-of-the-art performance in keyword spotting
with raw audio waveforms, outperforming all previous methods while being
computationally cheaper and parameter-efficient. Additionally, our model has
practical advantages for speech processing, such as inference on arbitrarily
long audio clips owing to the absence of positional encoding. The code is
available at https://github.com/The-Learning-Machines/Audiomer-PyTorch.
|
[
{
"version": "v1",
"created": "Tue, 21 Sep 2021 15:28:41 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Dec 2021 00:17:07 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Jan 2022 06:11:48 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Feb 2022 09:32:15 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Sahu",
"Surya Kant",
""
],
[
"Mitheran",
"Sai",
""
],
[
"Kamdar",
"Juhi",
""
],
[
"Gandhi",
"Meet",
""
]
] |
new_dataset
| 0.971083 |
2110.03331
|
Martin Mundt
|
Martin Mundt, Steven Lang, Quentin Delfosse, Kristian Kersting
|
CLEVA-Compass: A Continual Learning EValuation Assessment Compass to
Promote Research Transparency and Comparability
|
International Conference on Learning Representations (ICLR) 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
What is the state of the art in continual machine learning? Although a
natural question for predominant static benchmarks, the notion to train systems
in a lifelong manner entails a plethora of additional challenges with respect
to set-up and evaluation. The latter have recently sparked a growing amount of
critiques on prominent algorithm-centric perspectives and evaluation protocols
being too narrow, resulting in several attempts at constructing guidelines in
favor of specific desiderata or arguing against the validity of prevalent
assumptions. In this work, we depart from this mindset and argue that the goal
of a precise formulation of desiderata is an ill-posed one, as diverse
applications may always warrant distinct scenarios. Instead, we introduce the
Continual Learning EValuation Assessment Compass: the CLEVA-Compass. The
compass provides the visual means to both identify how approaches are
practically reported and how works can simultaneously be contextualized in the
broader literature landscape. In addition to promoting compact specification in
the spirit of recent replication trends, it thus provides an intuitive chart to
understand the priorities of individual systems, where they resemble each
other, and what elements are missing towards a fair comparison.
|
[
{
"version": "v1",
"created": "Thu, 7 Oct 2021 10:53:26 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Feb 2022 10:31:29 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Mundt",
"Martin",
""
],
[
"Lang",
"Steven",
""
],
[
"Delfosse",
"Quentin",
""
],
[
"Kersting",
"Kristian",
""
]
] |
new_dataset
| 0.97517 |
2202.00005
|
Daryll DCosta
|
Daryll Ralph D'Costa, Dr. Robert Abbas
|
5G enabled Mobile Edge Computing security for Autonomous Vehicles
|
9 pages, 8 figures
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The world is moving into a new era with the deployment of 5G communication
infrastructure. Many new developments are deployed centred around this
technology. One such advancement is 5G Vehicle to Everything communication.
This technology can be used for applications such as driverless delivery of
goods, immediate response to emergencies and improving traffic efficiency. The
concept of Intelligent Transport Systems (ITS) is built around this system
which is completely autonomous. This paper studies the Distributed Denial of
Service (DDoS) attack carried out over a 5G network and analyses security
attacks, particularly the DDoS attack. The aim is to implement a machine
learning model capable of classifying different types of DDoS attacks and
predicting the quality of 5G latency. The initial steps of implementation
involved the synthetic addition of 5G parameters into the dataset.
Subsequently, the data was label-encoded, and minority classes were oversampled
to match the other classes. Finally, the data was split into training and
testing sets, and machine learning models were applied. Although the paper resulted
in a model that predicted DDoS attacks, the acquired dataset significantly
lacked 5G-related information. Furthermore, the 5G classification model needed
further modification. The research was based largely on quantitative research
methods in a simulated environment. Hence, the biggest limitation of this
research has been the lack of resources for data collection and sole reliance
on online data sets. Ideally, a Vehicle to Everything (V2X) project would
greatly benefit from an autonomous 5G enabled vehicle connected to a mobile
edge cloud. However, this project was conducted solely online on a single PC
which further limits the outcomes. Although the model underperformed, this
paper can be used as a framework for future research in Intelligent Transport
System development.
|
[
{
"version": "v1",
"created": "Sun, 30 Jan 2022 05:36:32 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"D'Costa",
"Daryll Ralph",
""
],
[
"Abbas",
"Dr. Robert",
""
]
] |
new_dataset
| 0.989665 |
2202.00164
|
Priyanka Mandikal
|
Priyanka Mandikal and Kristen Grauman
|
DexVIP: Learning Dexterous Grasping with Human Hand Pose Priors from
Video
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dexterous multi-fingered robotic hands have a formidable action space, yet
their morphological similarity to the human hand holds immense potential to
accelerate robot learning. We propose DexVIP, an approach to learn dexterous
robotic grasping from human-object interactions present in in-the-wild YouTube
videos. We do this by curating grasp images from human-object interaction
videos and imposing a prior over the agent's hand pose when learning to grasp
with deep reinforcement learning. A key advantage of our method is that the
learned policy is able to leverage free-form in-the-wild visual data. As a
result, it can easily scale to new objects, and it sidesteps the standard
practice of collecting human demonstrations in a lab -- a much more expensive
and indirect way to capture human expertise. Through experiments on 27 objects
with a 30-DoF simulated robot hand, we demonstrate that DexVIP compares
favorably to existing approaches that lack a hand pose prior or rely on
specialized tele-operation equipment to obtain human demonstrations, while also
being faster to train. Project page:
https://vision.cs.utexas.edu/projects/dexvip-dexterous-grasp-pose-prior
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 00:45:57 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Mandikal",
"Priyanka",
""
],
[
"Grauman",
"Kristen",
""
]
] |
new_dataset
| 0.997642 |
2202.00168
|
Emre Sariyildiz
|
Emre Sariyildiz
|
A Unified Robust Motion Controller Synthesis for Compliant Robots Driven
by Series Elastic Actuators
|
IEEE International Workshop on Advanced Motion Control
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a unified robust motion controller for the position and
force control problems of compliant robot manipulators driven by Series Elastic
Actuators (SEAs). It is shown that the dynamic model of the compliant robot
includes not only matched but also mismatched disturbances that act on the
system through a different channel from the control input. To tackle this
complex robust control problem, the unified robust motion controller is
synthesised by employing a second-order Disturbance Observer (DOb), which
allows us to estimate not only disturbances but also their first and second
order derivatives, and a novel controller design approach in state space. By
using the Brunovsky canonical form transformation and the estimations of
disturbances and their first and second order derivatives, the dynamic model of
the robot is reconstructed so that a new system model that includes only
matched disturbances is obtained for compliant robots driven by SEAs. The
robust position and force controllers are simply designed by eliminating the
matched disturbances of the reconstructed system model via the conventional
DOb-based robust control method. The stability and performance of the proposed
robust motion controllers are verified by simulations.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 00:51:54 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Sariyildiz",
"Emre",
""
]
] |
new_dataset
| 0.997056 |
2202.00176
|
Tao Yu
|
Tao Yu, Kiyomichi Araki, Kei Sakaguchi
|
Full-Duplex Aerial Communication System for Multiple UAVs with
Directional Antennas
|
The paper was accepted by IEEE Consumer Communications & Networking
Conference (CCNC) 2022
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
UAV-based wireless systems, such as wireless relay and remote sensing, have
attracted great attention from academia and industry. To realize them, a
high-performance wireless aerial communication system, which bridges UAVs and
ground stations, is one of the key enablers. However, there are still issues
hindering its development, such as the severe co-channel interference among
UAVs, and the limited payload/battery-life of UAVs. To address the challenges,
we propose an aerial communication system which enables system-level
full-duplex communication of multiple UAVs with lower hardware complexities
than ideal full-duplex communication systems. In the proposed system, each
channel is re-assigned to the uplink and downlink of a pair of UAVs, and each
UAV employs a pair of separated channels for its uplink and downlink. The
co-channel interference between UAVs that reuse the same channels is eliminated by
exploiting advantages of UAVs' maneuverability and high-gain directional
antennas equipped in UAVs and ground stations, so that dedicated cancellers are
not necessary in the proposed system. The system design and performance
analysis are given, and the simulation results well agree with the designs.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 01:20:09 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Yu",
"Tao",
""
],
[
"Araki",
"Kiyomichi",
""
],
[
"Sakaguchi",
"Kei",
""
]
] |
new_dataset
| 0.998566 |
2202.00210
|
Masaki Yasuhara
|
Masaki Yasuhara, Tomoya Takahashi, Hiroki Maruta, Hiroyuki Saito,
Shota Higuchi, Takaaki Nara, Keitaro Takeuchi, Yota Sakai, Kazuki Ishibashi
|
INPUT Team Description Paper in 2022
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
INPUT is a team participating in the RoboCup Soccer Small League (SSL). It
aims to show the world the technological capabilities of the Nagaoka region of
Niigata Prefecture, which is where the team members are from. For this purpose,
we are working on one of the projects from the Nagaoka Activation Zone of
Energy (NAZE). Herein, we introduce two robots, v2019 and v2022, as well as AI
systems that will be used in RoboCup 2022. In addition, we describe our efforts
to develop robots in collaboration with companies in the Nagaoka area.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 04:17:44 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Yasuhara",
"Masaki",
""
],
[
"Takahashi",
"Tomoya",
""
],
[
"Maruta",
"Hiroki",
""
],
[
"Saito",
"Hiroyuki",
""
],
[
"Higuchi",
"Shota",
""
],
[
"Nara",
"Takaaki",
""
],
[
"Takeuchi",
"Keitaro",
""
],
[
"Sakai",
"Yota",
""
],
[
"Ishibashi",
"Kazuki",
""
]
] |
new_dataset
| 0.95136 |
2202.00367
|
Uday Kusupati
|
Uday Kusupati and Venkata Ravi Teja Ailavarapu
|
Natural Language to Code Using Transformers
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We tackle the problem of generating code snippets from natural language
descriptions using the CoNaLa dataset. We use the self-attention based
transformer architecture and show that it performs better than a recurrent
attention-based encoder-decoder. Furthermore, we develop a modified form of
back-translation and use cycle-consistent losses to train the model in an
end-to-end fashion. We achieve a BLEU score of 16.99, beating the previously
reported baseline of the CoNaLa challenge.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 12:17:52 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Kusupati",
"Uday",
""
],
[
"Ailavarapu",
"Venkata Ravi Teja",
""
]
] |
new_dataset
| 0.99332 |
2202.00379
|
Thien-Minh Nguyen
|
Thien-Minh Nguyen, Shenghai Yuan, Muqing Cao, Yang Lyu, Thien Hoang
Nguyen, Lihua Xie
|
NTU VIRAL: A Visual-Inertial-Ranging-Lidar Dataset, From an Aerial
Vehicle Viewpoint
|
IJRR 2021
| null |
10.1177/02783649211052312
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, autonomous robots have become ubiquitous in research and
daily life. Among many factors, public datasets play an important role in the
progress of this field, as they waive the tall order of initial investment in
hardware and manpower. However, for research on autonomous aerial systems,
there appears to be a relative lack of public datasets on par with those used
for autonomous driving and ground robots. Thus, to fill in this gap, we conduct
a data collection exercise on an aerial platform equipped with an extensive and
unique set of sensors: two 3D lidars, two hardware-synchronized global-shutter
cameras, multiple Inertial Measurement Units (IMUs), and especially, multiple
Ultra-wideband (UWB) ranging units. The comprehensive sensor suite resembles
that of an autonomous driving car, but features distinct and challenging
characteristics of aerial operations. We record multiple datasets in several
challenging indoor and outdoor conditions. Calibration results and ground truth
from a high-accuracy laser tracker are also included in each package. All
resources can be accessed via our webpage
https://ntu-aris.github.io/ntu_viral_dataset.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 12:46:52 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Nguyen",
"Thien-Minh",
""
],
[
"Yuan",
"Shenghai",
""
],
[
"Cao",
"Muqing",
""
],
[
"Lyu",
"Yang",
""
],
[
"Nguyen",
"Thien Hoang",
""
],
[
"Xie",
"Lihua",
""
]
] |
new_dataset
| 0.999841 |
2202.00480
|
Zhiyuan Chen Dr
|
Alvin Chaidrata, Mariyam Imtha Shafeeu, Sze Ker Chew, Zhiyuan Chen,
Jin Sheng Cham, Zi Li Yong, Uen Hsieh Yap, Dania Imanina Binti Kamarul Bahrin
|
Intent Matching based Customer Services Chatbot with Natural Language
Understanding
|
Accepted by "the 5th International Conference on Communication and
Information Systems (ICCIS 2021)"
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Customer service is the lifeblood of any business. Excellent customer service
not only generates return business but also creates new customers. To provide
a 24/7 service in the current demanding market, many organisations are
increasingly engaging with popular social media and text messaging platforms
such as WhatsApp and Facebook Messenger. In this paper, we present an intent
matching based customer services chatbot (IMCSC), which is capable of replacing
the customer service work of sales personnel, whilst interacting in a more
natural and human-like manner through the employment of Natural Language
Understanding (NLU). The bot is able to answer the most common frequently asked
questions and we have also integrated features for the processing and exporting
of customer orders to a Google Sheet.
|
[
{
"version": "v1",
"created": "Fri, 7 Jan 2022 08:30:32 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Chaidrata",
"Alvin",
""
],
[
"Shafeeu",
"Mariyam Imtha",
""
],
[
"Chew",
"Sze Ker",
""
],
[
"Chen",
"Zhiyuan",
""
],
[
"Cham",
"Jin Sheng",
""
],
[
"Yong",
"Zi Li",
""
],
[
"Yap",
"Uen Hsieh",
""
],
[
"Bahrin",
"Dania Imanina Binti Kamarul",
""
]
] |
new_dataset
| 0.999749 |
2202.00481
|
Asadullah Al Galib
|
Asadullah Al Galib
|
RabindraNet, Creating Literary Works in the Style of Rabindranath Tagore
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Bengali literature has a rich history of hundreds of years with luminary
figures such as Rabindranath Tagore and Kazi Nazrul Islam. However, analytical
works involving the most recent advancements in NLP have barely scratched the
surface utilizing the enormous volume of the collected works from the writers
of the language. In order to bring attention to the analytical study involving
the works of Bengali writers and spearhead the text generation endeavours in
the style of existing literature, we are introducing RabindraNet, a character
level RNN model with stacked-LSTM layers trained on the works of Rabindranath
Tagore to produce literary works in his style for multiple genres. We also
created an extensive dataset by compiling the digitized works of Rabindranath
Tagore from authentic online sources and published it as an open-source dataset
on the data science platform Kaggle.
|
[
{
"version": "v1",
"created": "Wed, 5 Jan 2022 16:23:37 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Galib",
"Asadullah Al",
""
]
] |
new_dataset
| 0.999585 |
2202.00504
|
Deshan Gong
|
Deshan Gong, Zhanxing Zhu, Andrew J.Bulpitt, He Wang
|
Fine-grained differentiable physics: a yarn-level model for fabrics
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Differentiable physics modeling combines physics models with gradient-based
learning to provide model explicability and data efficiency. It has been used
to learn dynamics, solve inverse problems and facilitate design, and is only at
the inception of its impact. Current successes have concentrated on general physics
models such as rigid bodies, deformable sheets, etc., assuming relatively
simple structures and forces. Their granularity is intrinsically coarse and
therefore incapable of modelling complex physical phenomena. Fine-grained
models are still to be developed to incorporate sophisticated material
structures and force interactions with gradient-based learning. Following this
motivation, we propose a new differentiable fabrics model for composite
materials such as cloths, where we dive into the granularity of yarns and model
individual yarn physics and yarn-to-yarn interactions. To this end, we propose
several differentiable forces, whose counterparts in empirical physics are
non-differentiable, to facilitate gradient-based learning. These forces, albeit
applied to cloths, are ubiquitous in various physical systems. Through
comprehensive evaluation and comparison, we demonstrate our model's
explicability in learning meaningful physical parameters, versatility in
incorporating complex physical structures and heterogeneous materials,
data-efficiency in learning, and high-fidelity in capturing subtle dynamics.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 16:01:01 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Gong",
"Deshan",
""
],
[
"Zhu",
"Zhanxing",
""
],
[
"Bulpitt",
"Andrew J.",
""
],
[
"Wang",
"He",
""
]
] |
new_dataset
| 0.998284 |
2202.00617
|
Thomas Kingsford
|
Thomas Kingsford
|
A General, Evolution-Inspired Reward Function for Social Robotics
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of social robotics will likely need to depart from a paradigm of
designed behaviours and imitation learning and adopt modern reinforcement
learning (RL) methods to enable robots to interact fluidly and efficaciously
with humans. In this paper, we present the Social Reward Function as a
mechanism to provide (1) a real-time, dense reward function necessary for the
deployment of RL agents in social robotics, and (2) a standardised objective
metric for comparing the efficacy of different social robots. The Social Reward
Function is designed to closely mimic the genetically endowed social
perception capabilities of humans in an effort to provide a simple, stable and
culture-agnostic reward function. Presently, datasets used in social robotics
are either small or significantly out-of-domain with respect to social
robotics. The use of the Social Reward Function will allow larger in-domain
datasets to be collected close to the behaviour policy of social robots, which
will allow both further improvements to reward functions and to the behaviour
policies of social robots. We believe this will be the key enabler to
developing efficacious social robots in the future.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 18:05:31 GMT"
}
] | 2022-02-02T00:00:00 |
[
[
"Kingsford",
"Thomas",
""
]
] |
new_dataset
| 0.959397 |
1905.12461
|
Yu Nakahata
|
Yu Nakahata
|
On the Clique-Width of Unigraphs
| null | null | null | null |
cs.DS cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clique-width is a well-studied graph parameter. For graphs of bounded
clique-width, many problems that are NP-hard in general can be polynomial-time
solvable. This fact motivates several studies to investigate whether the
clique-width of graphs in a certain class is bounded or not. We focus on
unigraphs, that is, graphs that are uniquely determined by their degree
sequences up to isomorphism. We show that every unigraph has clique-width at
most 4. It follows that many problems that are NP-hard in general are
polynomial-time solvable for unigraphs.
|
[
{
"version": "v1",
"created": "Wed, 29 May 2019 13:58:35 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jan 2022 02:40:57 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Nakahata",
"Yu",
""
]
] |
new_dataset
| 0.995618 |
2010.08936
|
Han Xiao
|
Han Xiao and Qizhi Fang
|
Arboricity games: the core and the nucleolus
| null |
Mathematical Programming (2022)
|
10.1007/s10107-021-01752-w
| null |
cs.GT cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The arboricity of a graph is the minimum number of forests required to cover
all its edges. In this paper, we examine arboricity from a game-theoretic
perspective and investigate cost-sharing in the minimum forest cover problem.
We introduce the arboricity game as a cooperative cost game defined on a graph.
The players are edges, and the cost of each coalition is the arboricity of the
subgraph induced by the coalition. We study properties of the core and propose
an efficient algorithm for computing the nucleolus when the core is not empty.
In order to compute the nucleolus in the core, we introduce the prime partition
which is built on the densest subgraph lattice. The prime partition decomposes
the edge set of a graph into a partially ordered set defined from minimal
densest minors and their invariant precedence relation. Moreover, edges from
the same partition always have the same value in a core allocation.
Consequently, when the core is not empty, the prime partition significantly
reduces the number of variables and constraints required in the linear programs
of Maschler's scheme and allows us to compute the nucleolus in polynomial time.
Besides, the prime partition provides a graph decomposition analogous to the
celebrated core decomposition and the density-friendly decomposition, which may
be of independent interest.
|
[
{
"version": "v1",
"created": "Sun, 18 Oct 2020 08:11:50 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Sep 2021 04:30:18 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Jan 2022 12:29:29 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Xiao",
"Han",
""
],
[
"Fang",
"Qizhi",
""
]
] |
new_dataset
| 0.99813 |
2012.07139
|
Niclas V\"odisch
|
Niclas V\"odisch, David Dodel, Michael Sch\"otz
|
FSOCO: The Formula Student Objects in Context Dataset
| null |
SAE International Journal of Connected and Automated Vehicles
5.12-05-01-0003 (2022)
|
10.4271/12-05-01-0003
| null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the FSOCO dataset, a collaborative dataset for
vision-based cone detection systems in Formula Student Driverless competitions.
It contains human annotated ground truth labels for both bounding boxes and
instance-wise segmentation masks. The data buy-in philosophy of FSOCO asks
student teams to contribute to the database first before being granted access,
ensuring continuous growth. By providing clear labeling guidelines and tools
for a sophisticated raw image selection, new annotations are guaranteed to meet
the desired quality. The effectiveness of the approach is shown by comparing
prediction results of a network trained on FSOCO and its unregulated
predecessor. The FSOCO dataset can be found at fsoco-dataset.com.
|
[
{
"version": "v1",
"created": "Sun, 13 Dec 2020 20:24:48 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Mar 2021 09:19:44 GMT"
},
{
"version": "v3",
"created": "Tue, 25 May 2021 16:34:19 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Jan 2022 11:22:59 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Vödisch",
"Niclas",
""
],
[
"Dodel",
"David",
""
],
[
"Schötz",
"Michael",
""
]
] |
new_dataset
| 0.99985 |
2103.06426
|
Stephen McAleer
|
Stephen McAleer, John Lanier, Kevin Wang, Pierre Baldi, Roy Fox
|
XDO: A Double Oracle Algorithm for Extensive-Form Games
|
35th Conference on Neural Information Processing Systems (NeurIPS
2021)
| null | null | null |
cs.GT cs.AI cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Policy Space Response Oracles (PSRO) is a reinforcement learning (RL)
algorithm for two-player zero-sum games that has been empirically shown to find
approximate Nash equilibria in large games. Although PSRO is guaranteed to
converge to an approximate Nash equilibrium and can handle continuous actions,
it may take an exponential number of iterations as the number of information
states (infostates) grows. We propose Extensive-Form Double Oracle (XDO), an
extensive-form double oracle algorithm for two-player zero-sum games that is
guaranteed to converge to an approximate Nash equilibrium linearly in the
number of infostates. Unlike PSRO, which mixes best responses at the root of
the game, XDO mixes best responses at every infostate. We also introduce Neural
XDO (NXDO), where the best response is learned through deep RL. In tabular
experiments on Leduc poker, we find that XDO achieves an approximate Nash
equilibrium in a number of iterations an order of magnitude smaller than PSRO.
Experiments on a modified Leduc poker game and Oshi-Zumo show that tabular XDO
achieves a lower exploitability than CFR with the same amount of computation.
We also find that NXDO outperforms PSRO and NFSP on a sequential
multidimensional continuous-action game. NXDO is the first deep RL method that
can find an approximate Nash equilibrium in high-dimensional continuous-action
sequential games. Experiment code is available at
https://github.com/indylab/nxdo.
|
[
{
"version": "v1",
"created": "Thu, 11 Mar 2021 03:05:44 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jan 2022 23:50:30 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"McAleer",
"Stephen",
""
],
[
"Lanier",
"John",
""
],
[
"Wang",
"Kevin",
""
],
[
"Baldi",
"Pierre",
""
],
[
"Fox",
"Roy",
""
]
] |
new_dataset
| 0.997902 |
2105.07583
|
Ziqiang Shi
|
Shoule Wu and Ziqiang Shi
|
It\^oTTS and It\^oWave: Linear Stochastic Differential Equation Is All
You Need For Audio Generation
|
The generated audio samples are available at
https://wushoule.github.io/ItoAudio/
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose to unify the two aspects of voice synthesis, namely
text-to-speech (TTS) and vocoder, into one framework based on a pair of forward
and reverse-time linear stochastic differential equations (SDE). The solutions
of this SDE pair are two stochastic processes, one of which turns the
distribution of mel spectrogram (or wave), that we want to generate, into a
simple and tractable distribution. The other is the generation procedure that
turns this tractable simple signal into the target mel spectrogram (or wave).
The model that generates mel spectrogram is called It\^oTTS, and the model that
generates wave is called It\^oWave. It\^oTTS and It\^oWave use the Wiener
process as a driver to gradually subtract the excess signal from the noise
signal, generating realistic and meaningful mel spectrograms and audio
respectively, conditioned on the original text or mel spectrogram.
The results of the experiment show that the mean opinion scores (MOS) of
It\^oTTS and It\^oWave exceed the current state-of-the-art methods, reaching
3.925$\pm$0.160 and 4.35$\pm$0.115 respectively. The generated audio
samples are available at https://wushoule.github.io/ItoAudio/. All authors
contribute equally to this work.
|
[
{
"version": "v1",
"created": "Mon, 17 May 2021 02:46:15 GMT"
},
{
"version": "v2",
"created": "Thu, 20 May 2021 08:19:06 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Aug 2021 05:48:17 GMT"
},
{
"version": "v4",
"created": "Thu, 12 Aug 2021 03:25:22 GMT"
},
{
"version": "v5",
"created": "Sat, 29 Jan 2022 07:08:54 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Wu",
"Shoule",
""
],
[
"Shi",
"Ziqiang",
""
]
] |
new_dataset
| 0.993477 |
2106.00188
|
Rowan Zellers
|
Rowan Zellers, Ari Holtzman, Matthew Peters, Roozbeh Mottaghi,
Aniruddha Kembhavi, Ali Farhadi, Yejin Choi
|
PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D
World
|
ACL 2021 camera ready, project page at
https://rowanzellers.com/piglet/
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We propose PIGLeT: a model that learns physical commonsense knowledge through
interaction, and then uses this knowledge to ground language. We factorize
PIGLeT into a physical dynamics model, and a separate language model. Our
dynamics model learns not just what objects are but also what they do: glass
cups break when thrown, plastic ones don't. We then use it as the interface to
our language model, giving us a unified model of linguistic form and grounded
meaning. PIGLeT can read a sentence, simulate neurally what might happen next,
and then communicate that result through a literal symbolic representation, or
natural language.
Experimental results show that our model effectively learns world dynamics,
along with how to communicate them. It is able to correctly forecast "what
happens next" given an English sentence over 80% of the time, outperforming a
100x larger, text-to-text approach by over 10%. Likewise, its natural language
summaries of physical interactions are also judged by humans as more accurate
than LM alternatives. We present comprehensive analysis showing room for future
work.
|
[
{
"version": "v1",
"created": "Tue, 1 Jun 2021 02:32:12 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Jan 2022 15:52:25 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Zellers",
"Rowan",
""
],
[
"Holtzman",
"Ari",
""
],
[
"Peters",
"Matthew",
""
],
[
"Mottaghi",
"Roozbeh",
""
],
[
"Kembhavi",
"Aniruddha",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Choi",
"Yejin",
""
]
] |
new_dataset
| 0.998237 |
2107.10446
|
Siqi Fan
|
Siqi Fan, I-Hong Hou, Van Sy Mai, Lotfi Benmohamed
|
Online Service Caching and Routing at the Edge with Unknown Arrivals
|
This paper is accepted for publication in IEEE ICC 2022
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies a problem of jointly optimizing two important operations
in mobile edge computing without knowing future requests, namely service
caching, which determines which services to host at the edge, and service
routing, which determines which requests to process locally at the edge.
We aim to address several practical challenges, including limited storage and
computation capacities of edge servers and unknown future request arrival
patterns. To this end, we formulate the problem as an online optimization
problem, in which the objective function includes costs of forwarding requests,
processing requests, and reconfiguring edge servers. By leveraging a natural
timescale separation between service routing and service caching, namely, the
former happens faster than the latter, we propose an online two-stage algorithm
and its randomized variant. Both algorithms have low complexity, and our
fractional solution achieves sublinear regret. Simulation results show that our
algorithms significantly outperform other state-of-the-art online policies.
|
[
{
"version": "v1",
"created": "Thu, 22 Jul 2021 03:58:59 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jan 2022 02:37:59 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Fan",
"Siqi",
""
],
[
"Hou",
"I-Hong",
""
],
[
"Mai",
"Van Sy",
""
],
[
"Benmohamed",
"Lotfi",
""
]
] |
new_dataset
| 0.996607 |
2108.04424
|
Junke Wang
|
Junke Wang, Shaoxiang Chen, Zuxuan Wu, Yu-Gang Jiang
|
FT-TDR: Frequency-guided Transformer and Top-Down Refinement Network for
Blind Face Inpainting
| null |
IEEE Transactions on Multimedia, 2022
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blind face inpainting refers to the task of reconstructing visual contents
without explicitly indicating the corrupted regions in a face image.
Inherently, this task faces two challenges: (1) how to detect various mask
patterns of different shapes and contents; (2) how to restore visually
plausible and pleasing contents in the masked regions. In this paper, we
propose a novel two-stage blind face inpainting method named Frequency-guided
Transformer and Top-Down Refinement Network (FT-TDR) to tackle these
challenges. Specifically, we first use a transformer-based network to detect
the corrupted regions to be inpainted as masks by modeling the relation among
different patches. We also exploit the frequency modality as complementary
information for improved detection results and capture the local contextual
incoherence to enhance boundary consistency. Then a top-down refinement network
is proposed to hierarchically restore features at different levels and generate
contents that are semantically consistent with the unmasked face regions.
Extensive experiments demonstrate that our method outperforms current
state-of-the-art blind and non-blind face inpainting methods qualitatively and
quantitatively.
|
[
{
"version": "v1",
"created": "Tue, 10 Aug 2021 03:12:01 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jan 2022 15:45:01 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Wang",
"Junke",
""
],
[
"Chen",
"Shaoxiang",
""
],
[
"Wu",
"Zuxuan",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] |
new_dataset
| 0.992321 |
2108.13376
|
Yimin Wang
|
Yimin Wang, Yixian Chen, Guilong Li, Yuhuan Lu, Zhi Yu, and Zhaocheng
He
|
City-Scale Holographic Traffic Flow Data based on Vehicular Trajectory
Resampling
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Despite abundant accessible traffic data, research on traffic flow
estimation and optimization still faces the dilemma of detailedness versus
integrity in the measurement. A dataset of city-scale continuous vehicular
trajectories featuring the finest resolution and integrity, also known as
holographic traffic data, would be a breakthrough, for it could reproduce every
detail of the traffic flow evolution and reveal the personal mobility pattern
within the city. Due to the high coverage of Automatic Vehicle Identification
(AVI) devices in Xuancheng city, we constructed one month of continuous
trajectories for 80,000 daily vehicles in the city, with accurate intersection
passing times and no travel path estimation bias. With such holographic traffic
data, it is possible to reproduce every detail of the traffic flow evolution.
We presented a set of traffic flow data based on the holographic trajectories
resampling, covering the whole 482 road segments in the city round the clock,
including stationary average speed and flow data of 5-minute intervals and
dynamic floating car data.
|
[
{
"version": "v1",
"created": "Mon, 30 Aug 2021 16:59:04 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jan 2022 09:35:23 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Wang",
"Yimin",
""
],
[
"Chen",
"Yixian",
""
],
[
"Li",
"Guilong",
""
],
[
"Lu",
"Yuhuan",
""
],
[
"Yu",
"Zhi",
""
],
[
"He",
"Zhaocheng",
""
]
] |
new_dataset
| 0.99937 |
2109.05633
|
Maria Korosteleva
|
Maria Korosteleva, Sung-Hee Lee
|
Generating Datasets of 3D Garments with Sewing Patterns
|
To appear in NeurIPS 2021 Datasets and Benchmarks Track
|
Proceedings of the Neural Information Processing Systems Track on
Datasets and Benchmarks 1 (NeurIPS Datasets and Benchmarks 2021)
| null | null |
cs.CV cs.AI cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Garments are ubiquitous in both the real world and many virtual worlds. They are
highly deformable objects, exhibit an immense variety of designs and shapes,
and yet, most garments are created from a set of regularly shaped flat pieces.
Exploration of garment structure presents a peculiar case for an object
structure estimation task and might prove useful for downstream tasks of neural
3D garment modeling and reconstruction by providing strong prior on garment
shapes. To facilitate research in these directions, we propose a method for
generating large synthetic datasets of 3D garment designs and their sewing
patterns. Our method consists of a flexible description structure for
specifying parametric sewing pattern templates and the automatic generation
pipeline to produce garment 3D models with little-to-no manual intervention.
To add realism, the pipeline additionally creates corrupted versions of the
final meshes that imitate artifacts of 3D scanning.
With this pipeline, we created the first large-scale synthetic dataset of 3D
garment models with their sewing patterns. The dataset contains more than 20000
garment design variations produced from 19 different base types. Seven of these
garment types are specifically designed to target evaluation of the
generalization across garment sewing pattern topologies.
|
[
{
"version": "v1",
"created": "Sun, 12 Sep 2021 23:03:48 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Korosteleva",
"Maria",
""
],
[
"Lee",
"Sung-Hee",
""
]
] |
new_dataset
| 0.996346 |
2201.07700
|
Stephen McAleer
|
Stephen McAleer, Kevin Wang, John Lanier, Marc Lanctot, Pierre Baldi,
Tuomas Sandholm, Roy Fox
|
Anytime PSRO for Two-Player Zero-Sum Games
|
Published in AAAI Reinforcement Learning in Games Workshop
| null | null | null |
cs.GT cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Policy space response oracles (PSRO) is a multi-agent reinforcement learning
algorithm that has achieved state-of-the-art performance in very large
two-player zero-sum games. PSRO is based on the tabular double oracle (DO)
method, an algorithm that is guaranteed to converge to a Nash equilibrium, but
may increase exploitability from one iteration to the next. We propose anytime
double oracle (ADO), a tabular double oracle algorithm for 2-player zero-sum
games that is guaranteed to converge to a Nash equilibrium while decreasing
exploitability from one iteration to the next. Unlike DO, in which the
restricted distribution is based on the restricted game formed by each player's
strategy sets, ADO finds the restricted distribution for each player that
minimizes its exploitability against any policy in the full, unrestricted game.
We also propose a method of finding this restricted distribution via a
no-regret algorithm updated against best responses, called RM-BR DO. Finally,
we propose anytime PSRO (APSRO), a version of ADO that calculates best
responses via reinforcement learning. In experiments on Leduc poker and random
normal form games, we show that our methods achieve far lower exploitability
than DO and PSRO and decrease exploitability monotonically.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 16:34:11 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jan 2022 23:31:19 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"McAleer",
"Stephen",
""
],
[
"Wang",
"Kevin",
""
],
[
"Lanier",
"John",
""
],
[
"Lanctot",
"Marc",
""
],
[
"Baldi",
"Pierre",
""
],
[
"Sandholm",
"Tuomas",
""
],
[
"Fox",
"Roy",
""
]
] |
new_dataset
| 0.99921 |
2201.10526
|
Thomas Chen
|
Thomas Y. Chen
|
MonarchNet: Differentiating Monarch Butterflies from Butterfly Species
with Similar Phenotypes
|
5 pages, 2 figures, Proceedings of NeurIPS 2020 - Learning Meaningful
Representations of Life (LMRL) Workshop. The FASEB Journal
|
CVPR 2021 Workshop on CV4Animals (Computer Vision for Animal
Behavior Tracking and Modeling)
|
10.1096/fasebj.2021.35.S1.05504
| null |
cs.CV cs.AI q-bio.PE stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, the monarch butterfly's iconic migration patterns have come
under threat from a number of factors, from climate change to pesticide use. To
track trends in their populations, scientists as well as citizen scientists
must identify individuals accurately. This is uniquely key for the study of
monarch butterflies because there exist other species of butterfly, such as
viceroy butterflies, that are "look-alikes" (coined by the Convention on
International Trade in Endangered Species of Wild Fauna and Flora), having
similar phenotypes. To tackle this problem and to aid in more efficient
identification, we present MonarchNet, the first comprehensive dataset
consisting of butterfly imagery for monarchs and five look-alike species. We
train a baseline deep-learning classification model to serve as a tool for
differentiating monarch butterflies from their various look-alikes. We seek to
contribute to the study of biodiversity and butterfly ecology by providing a
novel method for computational classification of these particular butterfly
species. The ultimate aim is to help scientists track monarch butterfly
population and migration trends in the most precise and efficient manner
possible.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 17:51:42 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Chen",
"Thomas Y.",
""
]
] |
new_dataset
| 0.966115 |
2201.11187
|
Ashar Ali
|
Ashar Ali, Upal Mahbub, Gokce Dane, Gerhard Reitmayr
|
DIREG3D: DIrectly REGress 3D Hands from Multiple Cameras
| null |
ICCV 2021 Fifth Workshop on Computer Vision for AR/VR
| null | null |
cs.CV cs.HC cs.RO eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present DIREG3D, a holistic framework for 3D Hand Tracking.
The proposed framework is capable of utilizing camera intrinsic parameters, 3D
geometry, intermediate 2D cues, and visual information to regress parameters
for accurately representing a Hand Mesh model. Our experiments show that
information like the size of the 2D hand, its distance from the optical center,
and radial distortion is useful for deriving highly reliable 3D poses in camera
space from just monocular information. Furthermore, we extend these results to
a multi-view camera setup by fusing features from different viewpoints.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 21:03:56 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Ali",
"Ashar",
""
],
[
"Mahbub",
"Upal",
""
],
[
"Dane",
"Gokce",
""
],
[
"Reitmayr",
"Gerhard",
""
]
] |
new_dataset
| 0.994966 |
2201.11994
|
Guillaume Sartoretti
|
Yutong Wang and Guillaume Sartoretti
|
FCMNet: Full Communication Memory Net for Team-Level Cooperation in
Multi-Agent Systems
|
To appear in the International Conference on Autonomous Agents and
Multiagent Systems (AAMAS 2022)
| null | null | null |
cs.RO cs.AI cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decentralized cooperation in partially-observable multi-agent systems
requires effective communications among agents. To support this effort, this
work focuses on the class of problems where global communications are available
but may be unreliable, thus precluding differentiable communication learning
methods. We introduce FCMNet, a reinforcement learning based approach that
allows agents to simultaneously learn a) an effective multi-hop communications
protocol and b) a common, decentralized policy that enables team-level
decision-making. Specifically, our proposed method utilizes the hidden states
of multiple directional recurrent neural networks as communication messages
among agents. Using a simple multi-hop topology, we endow each agent with the
ability to receive information sequentially encoded by every other agent at
each time step, leading to improved global cooperation. We demonstrate FCMNet
on a challenging set of StarCraft II micromanagement tasks with shared rewards,
as well as a collaborative multi-agent pathfinding task with individual
rewards. There, our comparison results show that FCMNet outperforms
state-of-the-art communication-based reinforcement learning methods in all
StarCraft II micromanagement tasks, and value decomposition methods in certain
tasks. We further investigate the robustness of FCMNet under realistic
communication disturbances, such as random message loss or binarized messages
(i.e., non-differentiable communication channels), to showcase FCMNet's
potential applicability to robotic tasks under a variety of real-world
conditions.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 09:12:01 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jan 2022 08:01:14 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Wang",
"Yutong",
""
],
[
"Sartoretti",
"Guillaume",
""
]
] |
new_dataset
| 0.962234 |
2201.11999
|
Shuang Wu
|
Shuang Wu, Zhenguang Li, Shijian Lu, Li Cheng
|
Dual Learning Music Composition and Dance Choreography
|
ACMMM 2021 (Oral)
| null |
10.1145/3474085.3475180
| null |
cs.SD cs.CV cs.LG cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Music and dance have always co-existed as pillars of human activities,
contributing immensely to the cultural, social, and entertainment functions in
virtually all societies. Notwithstanding the gradual systematization of music
and dance into two independent disciplines, their intimate connection is
undeniable and one art-form often appears incomplete without the other. Recent
research works have studied generative models for dance sequences conditioned
on music. The dual task of composing music for given dances, however, has been
largely overlooked. In this paper, we propose a novel extension, where we
jointly model both tasks in a dual learning approach. To leverage the duality
of the two modalities, we introduce an optimal transport objective to align
feature embeddings, as well as a cycle consistency loss to foster overall
consistency. Experimental results demonstrate that our dual learning framework
improves individual task performance, delivering generated music compositions
and dance choreographies that are realistic and faithful to the conditioned
inputs.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 09:20:28 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Wu",
"Shuang",
""
],
[
"Li",
"Zhenguang",
""
],
[
"Lu",
"Shijian",
""
],
[
"Cheng",
"Li",
""
]
] |
new_dataset
| 0.995801 |
2201.12376
|
Herbert Roitblat
|
Herbert L. Roitblat
|
Probably Reasonable Search in eDiscovery
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In eDiscovery, a party to a lawsuit or similar action must search through
available information to identify those documents and files that are relevant
to the suit. Search efforts tend to identify less than 100% of the relevant
documents and courts are frequently asked to adjudicate whether the search
effort has been reasonable, or whether additional effort to find more of the
relevant documents is justified. This article provides a method for estimating
the probability that significant additional information will be found from
extended effort. Modeling and two data sets indicate that the probability is
low, even at moderate levels of Recall, that the so-far unidentified documents
contain facts/topics not already observed in the identified documents.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 19:13:32 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Roitblat",
"Herbert L.",
""
]
] |
new_dataset
| 0.988834 |
2201.12394
|
Mitchell Terrell
|
Mitch Terrell, Yixuan Wang, Matt Dorow, Soumya Agrawal, Bhaargav
Sriraman, Zach Leidall, Abhishek Chandra, Jon Weissman
|
Constellation: An Edge-Based Semantic Runtime System for Internet of
Things Applications
|
15 pages, 11 figures, 2 tables
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the global Internet of Things (IoT) market size predicted to grow to over
1 trillion dollars in the next 5 years, many large corporations are scrambling
to solidify their product line as the de facto device suite for consumers. This
has led to each corporation developing its devices in a siloed environment
with unique protocols and runtime frameworks that explicitly exclude the
ability to work with the competition's devices. This development silo has
created problems with programming complexity for application developers as well
as concurrency and scalability limitations for applications that involve a
network of IoT devices. The Constellation project is a distributed IoT runtime
system that attempts to address these challenges by creating an operating
system layer that decouples applications from devices. This layer provides
mechanisms designed to allow applications to interface with an underlying
substrate of IoT devices while abstracting away the complexities of application
concurrency, device interoperability, and system scalability. This paper
provides an overview of the Constellation system as well as details four new
project expansions to improve system scalability.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 19:51:53 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Terrell",
"Mitch",
""
],
[
"Wang",
"Yixuan",
""
],
[
"Dorow",
"Matt",
""
],
[
"Agrawal",
"Soumya",
""
],
[
"Sriraman",
"Bhaargav",
""
],
[
"Leidall",
"Zach",
""
],
[
"Chandra",
"Abhishek",
""
],
[
"Weissman",
"Jon",
""
]
] |
new_dataset
| 0.9917 |
2201.12408
|
Han Ching Ou
|
Han-Ching Ou, Christoph Siebenbrunner, Jackson Killian, Meredith B
Brooks, David Kempe, Yevgeniy Vorobeychik, Milind Tambe
|
Networked Restless Multi-Armed Bandits for Mobile Interventions
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Motivated by a broad class of mobile intervention problems, we propose and
study restless multi-armed bandits (RMABs) with network effects. In our model,
arms are partially recharging and connected through a graph, so that pulling
one arm also improves the state of neighboring arms, significantly extending
the previously studied setting of fully recharging bandits with no network
effects. In mobile interventions, network effects may arise due to regular
population movements (such as commuting between home and work). We show that
network effects in RMABs induce strong reward coupling that is not accounted
for by existing solution methods. We propose a new solution approach for
networked RMABs, exploiting concavity properties which arise under natural
assumptions on the structure of intervention effects. We provide sufficient
conditions for optimality of our approach in idealized settings and demonstrate
that it empirically outperforms state-of-the-art baselines in three mobile
intervention domains using real-world graphs.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 20:38:01 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Ou",
"Han-Ching",
""
],
[
"Siebenbrunner",
"Christoph",
""
],
[
"Killian",
"Jackson",
""
],
[
"Brooks",
"Meredith B",
""
],
[
"Kempe",
"David",
""
],
[
"Vorobeychik",
"Yevgeniy",
""
],
[
"Tambe",
"Milind",
""
]
] |
new_dataset
| 0.992702 |
2201.12425
|
Ruofan Liang
|
Ruofan Liang, Hongyi Sun, Nandita Vijaykumar
|
CoordX: Accelerating Implicit Neural Representation with a Split MLP
Architecture
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Implicit neural representations with multi-layer perceptrons (MLPs) have
recently gained prominence for a wide variety of tasks such as novel view
synthesis and 3D object representation and rendering. However, a significant
challenge with these representations is that both training and inference with
an MLP over a large number of input coordinates to learn and represent an
image, video, or 3D object, require large amounts of computation and incur long
processing times. In this work, we aim to accelerate inference and training of
coordinate-based MLPs for implicit neural representations by proposing a new
split MLP architecture, CoordX. With CoordX, the initial layers are split to
learn each dimension of the input coordinates separately. The intermediate
features are then fused by the last layers to generate the learned signal at
the corresponding coordinate point. This significantly reduces the amount of
computation required and leads to large speedups in training and inference,
while achieving similar accuracy as the baseline MLP. This approach thus aims
at first learning functions that are a decomposition of the original signal and
then fusing them to generate the learned signal. Our proposed architecture can
be generally used for many implicit neural representation tasks with no
additional memory overheads. We demonstrate a speedup of up to 2.92x compared
to the baseline model for image, video, and 3D shape representation and
rendering tasks.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 21:30:42 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Liang",
"Ruofan",
""
],
[
"Sun",
"Hongyi",
""
],
[
"Vijaykumar",
"Nandita",
""
]
] |
new_dataset
| 0.966954 |
2201.12506
|
Ziyan Luo
|
Yunfang Fu, Qiuqi Ruan, Ziyan Luo, Gaoyun An, Yi Jin, Jun Wan
|
2D+3D facial expression recognition via embedded tensor manifold
regularization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a novel approach via embedded tensor manifold regularization
for 2D+3D facial expression recognition (FERETMR) is proposed. Firstly, 3D
tensors are constructed from 2D face images and 3D face shape models to keep
the structural information and correlations. To maintain the local structure
(geometric information) of 3D tensor samples in the low-dimensional tensors
space during the dimensionality reduction, the $\ell_0$-norm of the core
tensors and a tensor manifold regularization scheme embedded on core tensors
are adopted via a low-rank truncated Tucker decomposition on the generated
tensors. As a result, the obtained factor matrices will be used for facial
expression classification prediction. To make the resulting tensor optimization
more tractable, $\ell_1$-norm surrogate is employed to relax $\ell_0$-norm and
hence the resulting tensor optimization problem has a nonsmooth objective
function due to the $\ell_1$-norm and orthogonal constraints from the
orthogonal Tucker decomposition. To efficiently tackle this tensor optimization
problem, we establish the first-order optimality condition in terms of
stationary points, and then design a block coordinate descent (BCD) algorithm
with convergence analysis and the computational complexity. Numerical results
on BU-3DFE database and Bosphorus databases demonstrate the effectiveness of
our proposed approach.
|
[
{
"version": "v1",
"created": "Sat, 29 Jan 2022 06:11:00 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Fu",
"Yunfang",
""
],
[
"Ruan",
"Qiuqi",
""
],
[
"Luo",
"Ziyan",
""
],
[
"An",
"Gaoyun",
""
],
[
"Jin",
"Yi",
""
],
[
"Wan",
"Jun",
""
]
] |
new_dataset
| 0.952104 |
2201.12542
|
Sinan Wang
|
Sinan Wang, Yibo Wang, Xian Zhan, Ying Wang, Yepang Liu, Xiapu Luo,
Shing-Chi Cheung
|
Aper: Evolution-Aware Runtime Permission Misuse Detection for Android
Apps
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Android platform introduces the runtime permission model in version 6.0.
The new model greatly improves data privacy and user experience, but brings new
challenges for app developers. First, it allows users to freely revoke granted
permissions. Hence, developers cannot assume that the permissions granted to an
app would keep being granted. Instead, they should make their apps carefully
check the permission status before invoking dangerous APIs. Second, the
permission specification keeps evolving, bringing new types of compatibility
issues into the ecosystem. To understand the impact of the challenges, we
conducted an empirical study on 13,352 popular Google Play apps. We found that
86.0% of the apps used dangerous APIs asynchronously after permission management
and 61.2% of the apps used evolving dangerous APIs. If an app does not properly
handle permission revocations or platform differences, unexpected runtime
issues may happen and even cause app crashes. We call such Android Runtime
Permission issues ARP bugs. Unfortunately, existing runtime permission issue detection
tools cannot effectively deal with the ARP bugs induced by asynchronous
permission management and permission specification evolution. To fill the gap,
we designed a static analyzer, Aper, that performs reaching definition and
dominator analysis on Android apps to detect the two types of ARP bugs. To
compare Aper with existing tools, we built a benchmark, ARPfix, from 60 real
ARP bugs. Our experiment results show that Aper significantly outperforms two
academic tools, ARPDroid and RevDroid, and an industrial tool, Lint, on ARPfix,
with an average improvement of 46.3% on F1-score. In addition, Aper
successfully found 34 ARP bugs in 214 open-source Android apps, most of which
can result in abnormal app behaviors (such as app crashes) according to our
manual validation.
|
[
{
"version": "v1",
"created": "Sat, 29 Jan 2022 09:57:55 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Wang",
"Sinan",
""
],
[
"Wang",
"Yibo",
""
],
[
"Zhan",
"Xian",
""
],
[
"Wang",
"Ying",
""
],
[
"Liu",
"Yepang",
""
],
[
"Luo",
"Xiapu",
""
],
[
"Cheung",
"Shing-Chi",
""
]
] |
new_dataset
| 0.984945 |
2201.12544
|
Alexander Hernandez
|
Dexter I. Mercurio, Alexander A. Hernandez
|
An Open Data and Geo-based Information Systems
|
None
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The barangay is the smallest unit of government in the Philippines, and it is
driven and represented by its barangay authorities. The barangay officials are
accountable for keeping records of citizens' health and crime incidents. The
barangay is also the first-hand source of information for the national
government to develop government programs and community services, and to
maintain peace and order. This paper presents a web-based information system
incorporating open data and geo-based features, developed for a pilot community
in the Philippines. The system serves as a platform for information collection
and is used for planning, analysis, and decision-making, and increases the
effectiveness and efficiency of government services in the community.
|
[
{
"version": "v1",
"created": "Sat, 29 Jan 2022 10:05:58 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Mercurio",
"Dexter I.",
""
],
[
"Hernandez",
"Alexander A.",
""
]
] |
new_dataset
| 0.999463 |
2201.12660
|
Asil Koc
|
Asil Koc, Ahmed Masmoudi, Tho Le-Ngoc
|
Full-Duplex Non-Coherent Communications for Massive MIMO Systems with
Analog Beamforming
|
6 pages, 6 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, a novel full-duplex non-coherent (FD-NC) transmission scheme
is developed for massive multiple-input multiple-output (mMIMO) systems using
analog beamforming (ABF). We propose to use a structured Grassmannian
constellation for the non-coherent communications that does not require channel
estimation. Then, we design the transmit and receive ABF via the slow
time-varying angle-of-departure (AoD) and angle-of-arrival (AoA) information,
respectively. The ABF design targets maximizing the intended signal power while
suppressing the strong self-interference (SI) that occurs in FD transmission.
Also, the proposed ABF technique only needs a single transmit and receive RF
chain to support large antenna arrays, thus, it reduces hardware
cost/complexity in the mMIMO systems. It is shown that the proposed FD-NC
offers a great improvement in bit error rate (BER) in comparison to both
half-duplex non-coherent (HD-NC) and HD coherent schemes. We also observe that
the proposed FD-NC both reduces the error floor resulting from the residual SI
in FD transmission, and provides lower BER compared to the FD coherent
transmission.
|
[
{
"version": "v1",
"created": "Sat, 29 Jan 2022 21:07:40 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Koc",
"Asil",
""
],
[
"Masmoudi",
"Ahmed",
""
],
[
"Le-Ngoc",
"Tho",
""
]
] |
new_dataset
| 0.986995 |
2201.12664
|
Richard Sutcliffe
|
Mustafa Mhamed, Richard Sutcliffe, Xia Sun, Jun Feng, Eiad Almekhlafi,
Ephrem A. Retta
|
A Deep CNN Architecture with Novel Pooling Layer Applied to Two Sudanese
Arabic Sentiment Datasets
|
19 pages, 11 tables, 11 figures
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Arabic sentiment analysis has become an important research field in recent
years. Initially, work focused on Modern Standard Arabic (MSA), which is the
most widely-used form. Since then, work has been carried out on several
different dialects, including Egyptian, Levantine and Moroccan. Moreover, a
number of datasets have been created to support such work. However, up until
now, less work has been carried out on Sudanese Arabic, a dialect which has 32
million speakers. In this paper, two new publicly available datasets are
introduced, the 2-Class Sudanese Sentiment Dataset (SudSenti2) and the 3-Class
Sudanese Sentiment Dataset (SudSenti3). Furthermore, a CNN architecture, SCM,
is proposed, comprising five CNN layers together with a novel pooling layer,
MMA, to extract the best features. This SCM+MMA model is applied to SudSenti2
and SudSenti3 with accuracies of 92.75% and 84.39%. Next, the model is compared
to other deep learning classifiers and shown to be superior on these new
datasets. Finally, the proposed model is applied to the existing Saudi
Sentiment Dataset and to the MSA Hotel Arabic Review Dataset, with accuracies
of 85.55% and 90.01%.
|
[
{
"version": "v1",
"created": "Sat, 29 Jan 2022 21:33:28 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Mhamed",
"Mustafa",
""
],
[
"Sutcliffe",
"Richard",
""
],
[
"Sun",
"Xia",
""
],
[
"Feng",
"Jun",
""
],
[
"Almekhlafi",
"Eiad",
""
],
[
"Retta",
"Ephrem A.",
""
]
] |
new_dataset
| 0.99201 |
2201.12700
|
Jeongyeol Kwon
|
Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor
|
Coordinated Attacks against Contextual Bandits: Fundamental Limits and
Defense Mechanisms
| null | null | null | null |
cs.LG cs.CR cs.IT math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by online recommendation systems, we propose the problem of finding
the optimal policy in multitask contextual bandits when a small fraction
$\alpha < 1/2$ of tasks (users) are arbitrary and adversarial. The remaining
fraction of good users share the same instance of contextual bandits with $S$
contexts and $A$ actions (items). Naturally, whether a user is good or
adversarial is not known in advance. The goal is to robustly learn the policy
that maximizes rewards for good users with as few user interactions as
possible. Without adversarial users, established results in collaborative
filtering show that $O(1/\epsilon^2)$ per-user interactions suffice to learn a
good policy, precisely because information can be shared across users. This
parallelization gain is fundamentally altered by the presence of adversarial
users: unless there is a super-polynomial number of users, we show a lower bound
of $\tilde{\Omega}(\min(S,A) \cdot \alpha^2 / \epsilon^2)$ {\it per-user}
interactions to learn an $\epsilon$-optimal policy for the good users. We then
show we can achieve an $\tilde{O}(\min(S,A)\cdot \alpha/\epsilon^2)$
upper-bound, by employing efficient robust mean estimators for both uni-variate
and high-dimensional random variables. We also show that this can be improved
depending on the distributions of contexts.
|
[
{
"version": "v1",
"created": "Sun, 30 Jan 2022 01:45:13 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Kwon",
"Jeongyeol",
""
],
[
"Efroni",
"Yonathan",
""
],
[
"Caramanis",
"Constantine",
""
],
[
"Mannor",
"Shie",
""
]
] |
new_dataset
| 0.99131 |
2201.12809
|
Vijeth Aradhya
|
Vijeth Aradhya, Seth Gilbert, Aquinas Hobor
|
OverChain: Building a robust overlay with a blockchain
|
47 pages, 2 figures
| null | null | null |
cs.DC cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchains use peer-to-peer networks for disseminating information among
peers, but these networks currently do not have any provable guarantees for
desirable properties such as Byzantine fault tolerance, good connectivity and
small diameter. This is not just a theoretical problem, as recent works have
exploited unsafe peer connection policies and weak network synchronization to
mount partitioning attacks on Bitcoin. Cryptocurrency blockchains are safety
critical systems, so we need principled algorithms to maintain their networks.
Our key insight is that we can leverage the blockchain itself to share
information among the peers, and thus simplify the network maintenance process.
Given that the peers have restricted computational resources, and at most a
constant fraction of them are Byzantine, we provide communication-efficient
protocols to maintain a hypercubic network for blockchains, where peers can
join and leave over time. Interestingly, we discover that our design can
\emph{recover} from substantial adversarial failures. Moreover, these
properties hold despite significant churn.
A key contribution is a secure mechanism for joining the network that uses
the blockchain to help new peers to contact existing peers. Furthermore, by
examining how peers join the network, i.e., the "bootstrapping service," we
give a lower bound showing that (within log factors) our network tolerates the
maximum churn rate possible. In fact, we can give a lower bound on churn for
any fully distributed service that requires connectivity.
|
[
{
"version": "v1",
"created": "Sun, 30 Jan 2022 13:21:17 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Aradhya",
"Vijeth",
""
],
[
"Gilbert",
"Seth",
""
],
[
"Hobor",
"Aquinas",
""
]
] |
new_dataset
| 0.985455 |
2201.12888
|
Deepak Gupta
|
Deepak Gupta, Kush Attal, and Dina Demner-Fushman
|
A Dataset for Medical Instructional Video Classification and Question
Answering
| null | null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper introduces a new challenge and datasets to foster research toward
designing systems that can understand medical videos and provide visual answers
to natural language questions. We believe medical videos may provide the best
possible answers to many first aid, medical emergency, and medical education
questions. Toward this, we created the MedVidCL and MedVidQA datasets and
introduce the tasks of Medical Video Classification (MVC) and Medical Visual
Answer Localization (MVAL), two tasks that focus on cross-modal (medical
language and medical video) understanding. The proposed tasks and datasets have
the potential to support the development of sophisticated downstream
applications that can benefit the public and medical practitioners. Our
datasets consist of 6,117 annotated videos for the MVC task and 3,010 annotated
questions and answers timestamps from 899 videos for the MVAL task. These
datasets have been verified and corrected by medical informatics experts. We
have also benchmarked each task with the created MedVidCL and MedVidQA datasets
and proposed the multimodal learning methods that set competitive baselines for
future research.
|
[
{
"version": "v1",
"created": "Sun, 30 Jan 2022 18:06:31 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Gupta",
"Deepak",
""
],
[
"Attal",
"Kush",
""
],
[
"Demner-Fushman",
"Dina",
""
]
] |
new_dataset
| 0.999665 |
2201.12901
|
Shubham Chandel
|
Shubham Chandel, Colin B. Clement, Guillermo Serrato, and Neel
Sundaresan
|
Training and Evaluating a Jupyter Notebook Data Science Assistant
| null | null | null | null |
cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the feasibility of a Data Science assistant powered by a
sequence-to-sequence transformer by training a new model JuPyT5 on all publicly
available Jupyter Notebook GitHub repositories and developing a new metric:
Data Science Problems (DSP). DSP is a collection of 1119 problems curated from
306 pedagogical notebooks with 92 dataset dependencies, natural language and
Markdown problem descriptions, and assert-based unit tests. These notebooks
were designed to test university students' mastery of various Python
implementations of Math and Data Science, and we now leverage them to study the
ability of JuPyT5 to understand and pass the tests. We analyze the content of
DSP, validate its quality, and we find that given 100 sampling attempts JuPyT5
is able to solve 77.5\% of the DSP problems. We further present various
ablation and statistical analyses and compare DSP to other recent natural
language to code benchmarks.
|
[
{
"version": "v1",
"created": "Sun, 30 Jan 2022 19:56:37 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Chandel",
"Shubham",
""
],
[
"Clement",
"Colin B.",
""
],
[
"Serrato",
"Guillermo",
""
],
[
"Sundaresan",
"Neel",
""
]
] |
new_dataset
| 0.98477 |
2201.13020
|
Xiaodong Yu
|
Xiaodong Yu, Sheng Di, Kai Zhao, Jiannan Tian, Dingwen Tao, Xin Liang,
Franck Cappello
|
SZx: an Ultra-fast Error-bounded Lossy Compressor for Scientific
Datasets
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today's scientific high performance computing (HPC) applications or advanced
instruments are producing vast volumes of data across a wide range of domains,
which introduces a serious burden on data transfer and storage. Error-bounded
lossy compression has been developed and widely used in the scientific community,
because not only can it significantly reduce the data volumes but it can also
strictly control the data distortion based on the user-specified error bound.
Existing lossy compressors, however, cannot offer ultra-fast compression speed,
which is highly demanded by quite a few applications or use-cases (such as
in-memory compression and online instrument data compression). In this paper,
we propose a novel ultra-fast error-bounded lossy compressor, which can obtain
fairly high compression performance on both CPU and GPU, also with reasonably
high compression ratios. The key contributions are three-fold: (1) We propose a
novel, generic ultra-fast error-bounded lossy compression framework -- called
UFZ, by confining our design to be composed of only super-lightweight
operations such as bitwise and addition/subtraction operation, still keeping a
certain high compression ratio. (2) We implement UFZ on both CPU and GPU and
optimize the performance according to their architectures carefully. (3) We
perform a comprehensive evaluation with 6 real-world production-level
scientific datasets on both CPU and GPU. Experiments show that UFZ is 2~16X as
fast as the second-fastest existing error-bounded lossy compressor (either SZ
or ZFP) on CPU and GPU, with respect to both compression and decompression.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 06:46:16 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Yu",
"Xiaodong",
""
],
[
"Di",
"Sheng",
""
],
[
"Zhao",
"Kai",
""
],
[
"Tian",
"Jiannan",
""
],
[
"Tao",
"Dingwen",
""
],
[
"Liang",
"Xin",
""
],
[
"Cappello",
"Franck",
""
]
] |
new_dataset
| 0.994619 |
2201.13128
|
Federico Fusco
|
Paul D\"utting, Federico Fusco, Silvio Lattanzi, Ashkan Norouzi-Fard,
Morteza Zadimoghaddam
|
Deletion Robust Submodular Maximization over Matroids
| null | null | null | null |
cs.DS cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Maximizing a monotone submodular function is a fundamental task in machine
learning. In this paper, we study the deletion robust version of the problem
under the classic matroids constraint. Here the goal is to extract a small size
summary of the dataset that contains a high value independent set even after an
adversary deleted some elements. We present constant-factor approximation
algorithms, whose space complexity depends on the rank $k$ of the matroid and
the number $d$ of deleted elements. In the centralized setting we present a
$(3.582+O(\varepsilon))$-approximation algorithm with summary size $O(k +
\frac{d \log k}{\varepsilon^2})$. In the streaming setting we provide a
$(5.582+O(\varepsilon))$-approximation algorithm with summary size and memory
$O(k + \frac{d \log k}{\varepsilon^2})$. We complement our theoretical results
with an in-depth experimental analysis showing the effectiveness of our
algorithms on real-world datasets.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 11:15:56 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Dütting",
"Paul",
""
],
[
"Fusco",
"Federico",
""
],
[
"Lattanzi",
"Silvio",
""
],
[
"Norouzi-Fard",
"Ashkan",
""
],
[
"Zadimoghaddam",
"Morteza",
""
]
] |
new_dataset
| 0.99096 |
2201.13144
|
Carlos Eduardo Cancino-Chac\'on
|
Maarten Grachten, Carlos Cancino-Chac\'on, Thassilo Gadermaier
|
partitura: A Python Package for Handling Symbolic Musical Data
|
This preprint is a slightly updated and reformatted version of the
work presented at the Late Breaking/Demo Session of the 20th International
Society for Music Information Retrieval Conference (ISMIR 2019), Delft, The
Netherlands
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This demo paper introduces partitura, a Python package for handling symbolic
musical information. The principal aim of this package is to handle richly
structured musical information as conveyed by modern staff music notation. It
provides a much wider range of possibilities to deal with music than the more
reductive (but very common) piano roll-oriented approach inspired by the MIDI
standard. The package is an open source project and is available on GitHub.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 11:40:17 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Grachten",
"Maarten",
""
],
[
"Cancino-Chacón",
"Carlos",
""
],
[
"Gadermaier",
"Thassilo",
""
]
] |
new_dataset
| 0.999769 |
2201.13158
|
Anja Rey
|
Anja Rey (1) and Lisa Rey (2) ((1) Universit\"at zu K\"oln, Germany,
(2) Heinrich-Heine-Universit\"at D\"usseldorf, Germany)
|
FEN-Hedonic Games with Distance-Based Preferences
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Hedonic games formalize coalition formation scenarios where players evaluate
an outcome based on the coalition they are contained in. Due to a large number
of possible coalitions, compact representations of these games are crucial. We
complement known compact representation models by a distance-based approach:
Players' preferences are encoded in a bipolar manner by ordinal preferences
over a small set of known neighbouring players, coalitions are represented by
adequate preference orders from a player's perspective, and preferences over
coalitions are extended based on a directed form of Hausdorff-Kendall-tau
distance between individual preferences and coalitions. We show that this model
satisfies desirable axiomatic properties and has reasonable computational
complexity in terms of selected individual-based stability notions.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 12:07:16 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Rey",
"Anja",
""
],
[
"Rey",
"Lisa",
""
]
] |
new_dataset
| 0.999879 |
2201.13248
|
Rituraj Kaushik
|
Rituraj Kaushik, Karol Arndt and Ville Kyrki
|
SafeAPT: Safe Simulation-to-Real Robot Learning using Diverse Policies
Learned in Simulation
|
Under review. For video of the paper http://tiny.cc/safeAPT
| null | null | null |
cs.RO cs.AI cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
The framework of simulation-to-real learning, i.e., learning policies in
simulation and transferring those policies to the real world, is one of the most
promising approaches towards data-efficient learning in robotics. However, due
to the inevitable reality gap between the simulation and the real world, a
policy learned in the simulation may not always generate a safe behaviour on
the real robot. As a result, during adaptation of the policy in the real world,
the robot may damage itself or cause harm to its surroundings. In this work, we
introduce a novel learning algorithm called SafeAPT that leverages a diverse
repertoire of policies evolved in the simulation and transfers the most
promising safe policy to the real robot through episodic interaction. To
achieve this, SafeAPT iteratively learns a probabilistic reward model as well
as a safety model using real-world observations combined with simulated
experiences as priors. Then, it performs Bayesian optimization on the
repertoire with the reward model while maintaining the specified safety
constraint using the safety model. SafeAPT allows a robot to adapt to a wide
range of goals safely with the same repertoire of policies evolved in the
simulation. We compare SafeAPT with several baselines, both in simulated and
real robotic experiments and show that SafeAPT finds high-performance policies
within a few minutes in the real world while minimizing safety violations
during the interactions.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 16:40:36 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Kaushik",
"Rituraj",
""
],
[
"Arndt",
"Karol",
""
],
[
"Kyrki",
"Ville",
""
]
] |
new_dataset
| 0.977397 |
2201.13284
|
Maged Shoman Mr
|
Maged Shoman and Ana Tsui Moreno
|
Exploring Preferences for Transportation Modes in the City of Munich
after the Recent Incorporation of Ride-Hailing Companies
| null |
Transportation Research Record 2021
|
10.1177/0361198121989726
|
Vol. 2675(5) 329--338
|
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The growth of ride-hailing (RH) companies over the past few years has affected
urban mobility in numerous ways. Despite widespread claims about the benefits
of such services, limited research has been conducted on the topic. This paper
assesses the willingness of Munich transportation users to pay for RH services.
Realizing the difficulty of obtaining data directly from RH companies, a stated
preference survey was designed. The dataset includes responses from 500
commuters. Sociodemographic attributes, current travel behavior and
transportation mode preference in an 8 km trip scenario using RH service and
its similar modes (auto and transit), were collected. A multinomial logit model
was used to estimate the time and cost coefficients for using RH services
across income groups, which was then used to estimate the value of time (VOT)
for RH. The model results indicate RH services' popularity among those aged 18
to 39, larger households and households with fewer autos. Higher income groups
are also willing to pay more for using RH services. To examine the impact of RH
services on modal split in the city of Munich, we incorporated RH as a new mode
into an existing nested logit mode choice model using an incremental logit.
Travel time, travel cost and VOT were used as measures for the choice commuters
make when choosing between RH and its closest mode, metro. A total of 20
scenarios were evaluated at four different congestion levels and four price
levels to reflect the demand in response to acceptable costs and time
tradeoffs.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 16:03:24 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Shoman",
"Maged",
""
],
[
"Moreno",
"Ana Tsui",
""
]
] |
new_dataset
| 0.990695 |
2201.13292
|
Chryssis Georgiou
|
Chryssis Georgiou, Nicolas Nicolaou, and Andria Trigeorgi
|
Fragmented ARES: Dynamic Storage for Large Objects
|
18 pages (in two-column IEEE format), 12 figures, 5 algorithm codes,
Technical Report
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Data availability is one of the most important features in distributed
storage systems, made possible by data replication. Nowadays data are generated
rapidly and the goal to develop efficient, scalable and reliable storage
systems has become one of the major challenges for high performance computing.
In this work, we develop a dynamic, robust and strongly consistent distributed
storage implementation suitable for handling large objects (such as files). We
do so by integrating an Adaptive, Reconfigurable, Atomic Storage framework,
called ARES, with a distributed file system, called COBFS, which relies on a
block fragmentation technique to handle large objects. With the addition of
ARES, we also enable the use of an erasure-coded algorithm to further split our
data and to potentially improve storage efficiency at the replica servers and
reduce operation latency. To put the practicality of our outcomes to the test, we conduct
an in-depth experimental evaluation on the Emulab and AWS EC2 testbeds,
illustrating the benefits of our approaches, as well as other interesting
tradeoffs.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 15:00:40 GMT"
}
] | 2022-02-01T00:00:00 |
[
[
"Georgiou",
"Chryssis",
""
],
[
"Nicolaou",
"Nicolas",
""
],
[
"Trigeorgi",
"Andria",
""
]
] |
new_dataset
| 0.977864 |
2103.03500
|
Mark Zhao
|
Mark Zhao, Mingyu Gao, and Christos Kozyrakis
|
ShEF: Shielded Enclaves for Cloud FPGAs
| null | null |
10.1145/3503222.3507733
| null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
FPGAs are now used in public clouds to accelerate a wide range of
applications, including many that operate on sensitive data such as financial
and medical records. We present ShEF, a trusted execution environment (TEE) for
cloud-based reconfigurable accelerators. ShEF is independent from CPU-based
TEEs and allows secure execution under a threat model where the adversary can
control all software running on the CPU connected to the FPGA, has physical
access to the FPGA, and can compromise the FPGA interface logic of the cloud
provider. ShEF provides a secure boot and remote attestation process that
relies solely on existing FPGA mechanisms for root of trust. It also includes a
Shield component that provides secure access to data while the accelerator is
in use. The Shield is highly customizable and extensible, allowing users to
craft a bespoke security solution that fits their accelerator's memory access
patterns, bandwidth, and security requirements at minimum performance and area
overheads. We describe a prototype implementation of ShEF for existing cloud
FPGAs, map ShEF to a performant and secure storage application, and measure the
performance benefits of customizable security using five additional
accelerators.
|
[
{
"version": "v1",
"created": "Fri, 5 Mar 2021 07:02:26 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jan 2022 00:01:04 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Zhao",
"Mark",
""
],
[
"Gao",
"Mingyu",
""
],
[
"Kozyrakis",
"Christos",
""
]
] |
new_dataset
| 0.98961 |
2103.15552
|
Jamie Shelley Mr
|
Jamie Nicholas Shelley, Optishell Consultancy
|
Energy Decay Network (EDeN)
| null | null |
10.31224/osf.io/dfyzn
| null |
cs.NE cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper and the accompanying Python and C++ framework are the product of the
author's perceived problems with narrow (discrimination-based) AI (Artificial
Intelligence). The framework attempts to develop a genetic transfer of
experience through potential structural expressions using a common
regulation/exchange value (energy) to create a model whereby neural
architecture and all unit processes are co-dependently developed by genetic and
real time signal processing influences; successful routes are defined by
stability of the spike distribution per epoch which is influenced by
genetically encoded morphological development biases. These principles are aimed
towards creating a diverse and robust network that is capable of adapting to
general tasks by training within a simulation designed for transfer learning to
other mediums at scale.
|
[
{
"version": "v1",
"created": "Wed, 10 Mar 2021 23:17:59 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jul 2021 22:48:26 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Dec 2021 19:54:57 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Jan 2022 01:55:39 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Shelley",
"Jamie Nicholas",
""
],
[
"Consultancy",
"Optishell",
""
]
] |
new_dataset
| 0.976821 |
2104.07855
|
Rui Liu
|
Rui Liu and Alex Olshevsky
|
Distributed TD(0) with Almost No Communication
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide a new non-asymptotic analysis of distributed TD(0) with linear
function approximation. Our approach relies on "one-shot averaging," where $N$
agents run local copies of TD(0) and average the outcomes only once at the very
end. We consider two models: one in which the agents interact with an
environment they can observe and whose transitions depend on all of their
actions (which we call the global state model), and one in which each agent can
run a local copy of an identical Markov Decision Process, which we call the
local state model.
In the global state model, we show that the convergence rate of our
distributed one-shot averaging method matches the known convergence rate of
TD(0). By contrast, the best convergence rate in the previous literature showed
a rate which, according to the worst-case bounds given, could underperform the
non-distributed version by $O(N^3)$ in terms of the number of agents $N$. In
the local state model, we demonstrate a version of the linear time speedup
phenomenon, where the convergence time of the distributed process is a factor
of $N$ faster than the convergence time of TD(0). As far as we are aware, this
is the first result rigorously showing benefits from parallelism for temporal
difference methods.
|
[
{
"version": "v1",
"created": "Fri, 16 Apr 2021 02:21:11 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jan 2022 21:56:06 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Liu",
"Rui",
""
],
[
"Olshevsky",
"Alex",
""
]
] |
new_dataset
| 0.998613 |
2104.10029
|
Dongnan Liu
|
Yang Ma, Chaoyi Zhang, Mariano Cabezas, Yang Song, Zihao Tang, Dongnan
Liu, Weidong Cai, Michael Barnett, Chenyu Wang
|
Multiple Sclerosis Lesion Analysis in Brain Magnetic Resonance Images:
Techniques and Clinical Applications
|
Accepted to appear in IEEE Journal of Biomedical And Health
Informatics
| null | null | null |
cs.CV eess.IV stat.AP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multiple sclerosis (MS) is a chronic inflammatory and degenerative disease of
the central nervous system, characterized by the appearance of focal lesions in
the white and gray matter that topographically correlate with an individual
patient's neurological symptoms and signs. Magnetic resonance imaging (MRI)
provides detailed in-vivo structural information, permitting the quantification
and categorization of MS lesions that critically inform disease management.
Traditionally, MS lesions have been manually annotated on 2D MRI slices, a
process that is inefficient and prone to inter-/intra-observer errors.
Recently, automated statistical imaging analysis techniques have been proposed
to detect and segment MS lesions based on MRI voxel intensity. However, their
effectiveness is limited by the heterogeneity of both MRI data acquisition
techniques and the appearance of MS lesions. By learning complex lesion
representations directly from images, deep learning techniques have achieved
remarkable breakthroughs in the MS lesion segmentation task. Here, we provide a
comprehensive review of state-of-the-art automatic statistical and
deep-learning MS segmentation methods and discuss current and future clinical
applications. Further, we review technical strategies, such as domain
adaptation, to enhance MS lesion segmentation in real-world clinical settings.
|
[
{
"version": "v1",
"created": "Tue, 20 Apr 2021 15:08:51 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Dec 2021 10:14:03 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jan 2022 00:43:12 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Ma",
"Yang",
""
],
[
"Zhang",
"Chaoyi",
""
],
[
"Cabezas",
"Mariano",
""
],
[
"Song",
"Yang",
""
],
[
"Tang",
"Zihao",
""
],
[
"Liu",
"Dongnan",
""
],
[
"Cai",
"Weidong",
""
],
[
"Barnett",
"Michael",
""
],
[
"Wang",
"Chenyu",
""
]
] |
new_dataset
| 0.998345 |
2108.07597
|
Yingqian Wang
|
Zhengyu Liang, Yingqian Wang, Longguang Wang, Jungang Yang, Shilin
Zhou
|
Light Field Image Super-Resolution with Transformers
|
This paper has been accepted by IEEE Signal Processing Letters. The
current version on arXiv is identical to the final accepted version in
content, but integrates the supplemental material (i.e., related work and
visual comparisons) to the main body of the paper. Moreover, figures and
tables of the arxiv version were zoomed for better visualization
| null |
10.1109/LSP.2022.3146798
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Light field (LF) image super-resolution (SR) aims at reconstructing
high-resolution LF images from their low-resolution counterparts. Although
CNN-based methods have achieved remarkable performance in LF image SR, these
methods cannot fully model the non-local properties of the 4D LF data. In this
paper, we propose a simple but effective Transformer-based method for LF image
SR. In our method, an angular Transformer is designed to incorporate
complementary information among different views, and a spatial Transformer is
developed to capture both local and long-range dependencies within each
sub-aperture image. With the proposed angular and spatial Transformers, the
beneficial information in an LF can be fully exploited and the SR performance
is boosted. We validate the effectiveness of our angular and spatial
Transformers through extensive ablation studies, and compare our method to
recent state-of-the-art methods on five public LF datasets. Our method achieves
superior SR performance with a small model size and low computational cost.
Code is available at https://github.com/ZhengyuLiang24/LFT.
|
[
{
"version": "v1",
"created": "Tue, 17 Aug 2021 12:58:11 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jan 2022 03:16:29 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Liang",
"Zhengyu",
""
],
[
"Wang",
"Yingqian",
""
],
[
"Wang",
"Longguang",
""
],
[
"Yang",
"Jungang",
""
],
[
"Zhou",
"Shilin",
""
]
] |
new_dataset
| 0.976442 |
2110.13790
|
Giovanni Colavizza
|
Puyu Yang and Giovanni Colavizza
|
A Map of Science in Wikipedia
| null | null | null | null |
cs.DL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
In recent decades, the rapid growth of Internet adoption is offering
opportunities for convenient and inexpensive access to scientific information.
Wikipedia, one of the largest encyclopedias worldwide, has become a reference
in this respect, and has attracted widespread attention from scholars. However,
a clear understanding of the scientific sources underpinning Wikipedia's
contents remains elusive. In this work, we rely on an open dataset of citations
from Wikipedia to map the relationship between Wikipedia articles and
scientific journal articles. We find that most journal articles cited from
Wikipedia belong to STEM fields, in particular biology and medicine ($47.6$\%
of citations; $46.1$\% of cited articles). Furthermore, Wikipedia's biographies
play an important role in connecting STEM fields with the humanities,
especially history. These results contribute to our understanding of
Wikipedia's reliance on scientific sources, and its role as knowledge broker to
the public.
|
[
{
"version": "v1",
"created": "Tue, 26 Oct 2021 15:44:32 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jan 2022 17:05:48 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Yang",
"Puyu",
""
],
[
"Colavizza",
"Giovanni",
""
]
] |
new_dataset
| 0.991443 |
2110.15717
|
Sourav Ghosh
|
Vibhav Agarwal, Sudeep Deepak Shivnikar, Sourav Ghosh, Himanshu Arora,
Yashwant Saini
|
LIDSNet: A Lightweight on-device Intent Detection model using Deep
Siamese Network
|
Accepted for publication in 2021 IEEE 20th International Conference
on Machine Learning and Applications (ICMLA)
|
2021 20th IEEE International Conference on Machine Learning and
Applications (ICMLA), Pasadena, CA, USA, 2021, pp. 1112-1117
|
10.1109/ICMLA52953.2021.00182
| null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Intent detection is a crucial task in any Natural Language Understanding
(NLU) system and forms the foundation of a task-oriented dialogue system. To
build high-quality real-world conversational solutions for edge devices, there
is a need for deploying intent detection model on device. This necessitates a
light-weight, fast, and accurate model that can perform efficiently in a
resource-constrained environment. To this end, we propose LIDSNet, a novel
lightweight on-device intent detection model, which accurately predicts the
message intent by utilizing a Deep Siamese Network for learning better sentence
representations. We use character-level features to enrich the sentence-level
representations and empirically demonstrate the advantage of transfer learning
by utilizing pre-trained embeddings. Furthermore, to investigate the efficacy
of the modules in our architecture, we conduct an ablation study and arrive at
our optimal model. Experimental results prove that LIDSNet achieves
state-of-the-art competitive accuracy of 98.00% and 95.97% on SNIPS and ATIS
public datasets respectively, with under 0.59M parameters. We further benchmark
LIDSNet against fine-tuned BERTs and show that our model is at least 41x
lighter and 30x faster during inference than MobileBERT on Samsung Galaxy S20
device, justifying its efficiency on resource-constrained edge devices.
|
[
{
"version": "v1",
"created": "Wed, 6 Oct 2021 18:20:37 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Agarwal",
"Vibhav",
""
],
[
"Shivnikar",
"Sudeep Deepak",
""
],
[
"Ghosh",
"Sourav",
""
],
[
"Arora",
"Himanshu",
""
],
[
"Saini",
"Yashwant",
""
]
] |
new_dataset
| 0.998445 |
2111.05196
|
David Alfonso-Hermelo
|
David Alfonso-Hermelo, Ahmad Rashid, Abbas Ghaddar, Philippe Langlais,
Mehdi Rezagholizadeh
|
NATURE: Natural Auxiliary Text Utterances for Realistic Spoken Language
Evaluation
|
20 pages, 4 figures, accepted to NeurIPS 2021 Track Datasets and
Benchmarks
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Slot-filling and intent detection are the backbone of conversational agents
such as voice assistants, and are active areas of research. Even though
state-of-the-art techniques on publicly available benchmarks show impressive
performance, their ability to generalize to realistic scenarios is yet to be
demonstrated. In this work, we present NATURE, a set of simple spoken-language
oriented transformations, applied to the evaluation set of datasets, to
introduce human spoken language variations while preserving the semantics of an
utterance. We apply NATURE to common slot-filling and intent detection
benchmarks and demonstrate that simple perturbations from the standard
evaluation set by NATURE can deteriorate model performance significantly.
Through our experiments we demonstrate that when NATURE operators are applied
to the evaluation set of popular benchmarks, the model accuracy can drop by up to
40%.
|
[
{
"version": "v1",
"created": "Tue, 9 Nov 2021 15:09:06 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jan 2022 17:40:16 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Alfonso-Hermelo",
"David",
""
],
[
"Rashid",
"Ahmad",
""
],
[
"Ghaddar",
"Abbas",
""
],
[
"Langlais",
"Philippe",
""
],
[
"Rezagholizadeh",
"Mehdi",
""
]
] |
new_dataset
| 0.999083 |
2201.11369
|
Ting-Chun Lin
|
Ting-Chun Lin, Min-Hsiu Hsieh
|
$c^3$-Locally Testable Codes from Lossless Expanders
| null | null | null | null |
cs.IT cs.CC math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A locally testable code (LTC) is an error correcting code with a property
tester. The tester tests whether a word is a codeword by reading a constant number of random bits
and rejects the word with probability proportional to the distance from the
word to the closest codeword. An important open question until recently was
whether there exist $c^3$-LTCs which are LTCs with constant rate, constant
relative distance and constant locality. In this work, we construct a new LTC
family using 1-sided lossless expanders and balanced products.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 08:10:13 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jan 2022 07:33:50 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Lin",
"Ting-Chun",
""
],
[
"Hsieh",
"Min-Hsiu",
""
]
] |
new_dataset
| 0.999382 |
2201.11539
|
Ali Gholami
|
Ali Gholami, Kai Wan, Hua Sun, Mingyue Ji, Giuseppe Caire
|
Coded Caching with Private Demands and Caches
|
7 pages, 4 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coded caching has been shown to be a promising method to reduce the network
load in peak-traffic hours. In the coded caching literature, the notion of
privacy is considered only against demands. On the motivation that multi-round
transmissions appear almost everywhere in real communication systems, this
paper formulates the coded caching problem with private demands and caches.
Only one existing private caching scheme, which is based on introducing virtual
users, can preserve the privacy of demands and caches simultaneously, but with
an extremely large subpacketization exponential to the product of the numbers
of users and files in the system. In order to reduce the subpacketization while
satisfying the privacy constraint, we propose a novel approach which constructs
private coded caching schemes through private information retrieval (PIR).
Based on this approach, we propose novel schemes with private demands and
caches which have a subpacketization level exponential in $K$ (the number of
users), against $NK$ in the virtual user scheme, where $N$ stands for the
number of files. As a by-product, for the coded caching problem with
private demands, a private coded caching scheme could be obtained from the
proposed approach, which generally improves the memory-load tradeoff of the
private coded caching scheme by Yan and Tuninetti.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 14:27:25 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jan 2022 11:25:43 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Gholami",
"Ali",
""
],
[
"Wan",
"Kai",
""
],
[
"Sun",
"Hua",
""
],
[
"Ji",
"Mingyue",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
new_dataset
| 0.959586 |
2201.11764
|
Stefan Hristozov
|
Stefan Hristozov, Moritz Wettermann, Manuel Huber
|
A TOCTOU Attack on DICE Attestation
|
10 pages, 3 figures, to appear at CODASPY'22
| null |
10.1145/3508398.3511507
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A major security challenge for modern Internet of Things (IoT) deployments is
to ensure that the devices run legitimate firmware free from malware. This
challenge can be addressed through a security primitive called attestation
which allows a remote backend to verify the firmware integrity of the devices
it manages. In order to accelerate broad attestation adoption in the IoT domain
the Trusted Computing Group (TCG) has introduced the Device Identifier
Composition Engine (DICE) series of specifications. DICE is a hardware-software
architecture for constrained, e.g., microcontroller-based IoT devices where the
firmware is divided into successively executed layers.
In this paper, we demonstrate a remote Time-Of-Check Time-Of-Use (TOCTOU)
attack on DICE-based attestation. We demonstrate that it is possible to install
persistent malware in the flash memory of a constrained microcontroller that
cannot be detected through DICE-based attestation. The main idea of our attack
is to install malware during runtime of application logic in the top firmware
layer. The malware reads the valid attestation key and stores it on the
device's flash memory. After reboot, the malware uses the previously stored key
for all subsequent attestations to the backend. We conduct the installation of
malware and copying of the key through Return-Oriented Programming (ROP). As a
platform for our demonstration, we use the Cortex-M-based nRF52840
microcontroller. We provide a discussion of several possible countermeasures
which can mitigate the shortcomings of the DICE specifications.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 19:05:53 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Hristozov",
"Stefan",
""
],
[
"Wettermann",
"Moritz",
""
],
[
"Huber",
"Manuel",
""
]
] |
new_dataset
| 0.999556 |
2201.11828
|
Sarah Ostadabbas
|
Shuangjun Liu, Sarah Ostadabbas
|
Pressure Eye: In-bed Contact Pressure Estimation via Contact-less
Imaging
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Computer vision has achieved great success in interpreting semantic meanings
from images, yet estimating underlying (non-visual) physical properties of an
object is often limited to their bulk values rather than reconstructing a dense
map. In this work, we present our pressure eye (PEye) approach to estimate
contact pressure between a human body and the surface she is lying on with high
resolution from vision signals directly. The PEye approach could ultimately
enable the prediction and early detection of pressure ulcers in bed-bound
patients, which currently depends on the use of expensive pressure mats. Our PEye network
is configured in a dual encoding shared decoding form to fuse visual cues and
some relevant physical parameters in order to reconstruct high resolution
pressure maps (PMs). We also present a pixel-wise resampling approach based on
a naive Bayes assumption to further enhance the PM regression performance. A
percentage of correct sensing (PCS) tailored for sensing estimation accuracy
evaluation is also proposed which provides another perspective for performance
evaluation under varying error tolerances. We tested our approach via a series
of extensive experiments using multimodal sensing technologies to collect data
from 102 subjects while lying on a bed. The individual's high resolution
contact pressure data could be estimated from their RGB or long wavelength
infrared (LWIR) images with 91.8% and 91.2% estimation accuracies in
$PCS_{efs0.1}$ criteria, superior to state-of-the-art methods in the related
image regression/translation tasks.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 22:22:17 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Liu",
"Shuangjun",
""
],
[
"Ostadabbas",
"Sarah",
""
]
] |
new_dataset
| 0.995769 |
2201.11844
|
Qi Zhao
|
Qi Zhao, Huanhao Li, Zhipeng Yu, Chi Man Woo, Tianting Zhong, Shengfu
Cheng, Yuanjin Zheng, Honglin Liu, Jie Tian, and Puxiang Lai
|
Speckle-based optical cryptosystem and its application for human face
recognition via deep learning
| null | null | null | null |
cs.CR cs.CV physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face recognition has recently become ubiquitous in many scenes for
authentication or security purposes. Meanwhile, there are increasing concerns
about the privacy of face images, which are sensitive biometric data that
should be carefully protected. Software-based cryptosystems are widely adopted
nowadays to encrypt face images, but the security level is limited by
insufficient digital secret key length or computing power. Hardware-based
optical cryptosystems can generate enormously longer secret keys and enable
encryption at light speed, but most reported optical methods, such as double
random phase encryption, are less compatible with other systems due to system
complexity. In this study, a plain yet highly efficient speckle-based optical
cryptosystem is proposed and implemented. A scattering ground glass is
exploited to generate physical secret keys of gigabit length and encrypt face
images via seemingly random optical speckles at light speed. Face images can
then be decrypted from the random speckles by a well-trained decryption neural
network, such that face recognition can be realized with up to 98% accuracy.
The proposed cryptosystem has wide applicability, and it may open a new avenue
for high-security complex information encryption and decryption by utilizing
optical speckles.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 07:18:02 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Zhao",
"Qi",
""
],
[
"Li",
"Huanhao",
""
],
[
"Yu",
"Zhipeng",
""
],
[
"Woo",
"Chi Man",
""
],
[
"Zhong",
"Tianting",
""
],
[
"Cheng",
"Shengfu",
""
],
[
"Zheng",
"Yuanjin",
""
],
[
"Liu",
"Honglin",
""
],
[
"Tian",
"Jie",
""
],
[
"Lai",
"Puxiang",
""
]
] |
new_dataset
| 0.998023 |
2201.11852
|
Hendrico Burger
|
C.V. Vletter, H.L. Burger, H. Alers, N. Sourlos, Z. Al-Ars
|
Towards an Automatic Diagnosis of Peripheral and Central Palsy Using
Machine Learning on Facial Features
|
9 pages, 10 tables, 10 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Central palsy is a form of facial paralysis that requires urgent medical
attention and has to be differentiated from other, similar conditions such as
peripheral palsy. To aid in fast and accurate diagnosis of this condition, we
propose a machine learning approach to automatically classify peripheral and
central facial palsy. The Palda dataset is used, which contains 103 peripheral
palsy images, 40 central palsy images, and 60 images of healthy people.
Experiments are run on
five machine learning algorithms. The best performing algorithms were found to
be the SVM (total accuracy of 85.1%) and the Gaussian naive Bayes (80.7%). The
lowest false negative rate on central palsy was achieved by the naive Bayes
approach (80% compared to 70%). This condition could prove to be the most
severe, and thus its sensitivity is another good way to compare algorithms. By
extrapolation, a dataset size of 334 total pictures is estimated to achieve a
central palsy sensitivity of 95%. All code used for these machine learning
experiments is freely available online at https://github.com/cvvletter/palsy.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 23:07:02 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Vletter",
"C. V.",
""
],
[
"Burger",
"H. L.",
""
],
[
"Alers",
"H.",
""
],
[
"Sourlos",
"N.",
""
],
[
"Al-Ars",
"Z.",
""
]
] |
new_dataset
| 0.986782 |
2201.12046
|
Cedric Richter
|
Cedric Richter and Heike Wehrheim
|
TSSB-3M: Mining single statement bugs at massive scale
|
7 pages, 2 figures
| null | null | null |
cs.SE cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Single statement bugs are one of the most important ingredients in the
evaluation of modern bug detection and automatic program repair methods. By
affecting only a single statement, single statement bugs represent a type of
bug often overlooked by developers, while still being small enough to be
detected and fixed by automatic methods. With the rise of data-driven automatic
repair, the availability of single statement bugs at the scale of millions of
examples is more important than ever; not only for testing these methods but
also for providing sufficient real world examples for training. To provide
access to bug fix datasets of this scale, we are releasing two datasets called
SSB-9M and TSSB-3M. While SSB-9M provides access to a collection of over 9M
general single statement bug fixes from over 500K open source Python projects,
TSSB-3M focuses on over 3M single statement bugs which can be fixed solely by a
single statement change. To facilitate future research and empirical
investigations, we annotated each bug fix with one of 20 single statement bug
(SStuB) patterns typical for Python together with a characterization of the
code change as a sequence of AST modifications. Our initial investigation shows
that at least 40% of all single statement bug fixes mined fit at least one
SStuB pattern, and that a majority (72%) of all bugs can be fixed with the
same syntactic modifications needed for fixing SStuBs.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 11:21:24 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Richter",
"Cedric",
""
],
[
"Wehrheim",
"Heike",
""
]
] |
new_dataset
| 0.999104 |
2201.12085
|
Zhe Liu
|
Zhe Liu, Chunyang Chen, Junjie Wang, Yuekai Huang, Jun Hu, Qing Wang
|
Guided Bug Crush: Assist Manual GUI Testing of Android Apps via Hint
Moves
|
Accepted to CHI Conference on Human Factors in Computing Systems
(CHI'22)
| null |
10.1145/3491102.3501903
|
CHI 2022 3315
|
cs.SE cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Mobile apps are indispensable in people's daily lives. Complementing automated
GUI testing, manual testing is the last line of defence for app quality.
However, repeated actions and easily missed functionalities make manual testing
time-consuming and inefficient. Inspired by the game Candy Crush, which uses
flashy candies as hint moves for players, we propose an approach
named NaviDroid for navigating testers via highlighted next operations for more
effective and efficient testing. Within NaviDroid, we construct an enriched
state transition graph with the triggering actions as the edges for two
involved states. Based on it, we utilize the dynamic programming algorithm to
plan the exploration path, and augment the GUI with visualized hints for
testers to quickly explore untested activities and avoid duplicate
explorations. The automated experiments demonstrate the high coverage and
efficient path planning of NaviDroid and a user study further confirms its
usefulness. NaviDroid can help developers build more robust software that works
in more mission-critical settings, not only by performing more thorough testing
with the same effort as before, but also by integrating these
techniques into different parts of the development pipeline.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 12:45:56 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Liu",
"Zhe",
""
],
[
"Chen",
"Chunyang",
""
],
[
"Wang",
"Junjie",
""
],
[
"Huang",
"Yuekai",
""
],
[
"Hu",
"Jun",
""
],
[
"Wang",
"Qing",
""
]
] |
new_dataset
| 0.993979 |
2201.12123
|
Camille Gontier
|
Camille Gontier, Jakob Jordan, Mihai A. Petrovici
|
DELAUNAY: a dataset of abstract art for psychophysical and machine
learning research
| null | null | null | null |
cs.LG cs.CV q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Image datasets are commonly used in psychophysical experiments and in machine
learning research. Most publicly available datasets consist of images of
realistic and natural objects. However, while typical machine learning models
lack any domain specific knowledge about natural objects, humans can leverage
prior experience for such data, making comparisons between artificial and
natural learning challenging. Here, we introduce DELAUNAY, a dataset of
abstract paintings and non-figurative art objects labelled by the artists'
names. This dataset provides a middle ground between natural images and
artificial patterns and can thus be used in a variety of contexts, for example
to investigate the sample efficiency of humans and artificial neural networks.
Finally, we train an off-the-shelf convolutional neural network on DELAUNAY,
highlighting several of its intriguing features.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 13:57:32 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Gontier",
"Camille",
""
],
[
"Jordan",
"Jakob",
""
],
[
"Petrovici",
"Mihai A.",
""
]
] |
new_dataset
| 0.999829 |
2201.12177
|
Ipek Ozkaya
|
Ipek Ozkaya, Zachary Kurtz, Robert L. Nord, Raghvinder S. Sangwan,
Satish M. Srinivasan
|
Detecting Discussions of Technical Debt
|
12 pages, 5 figures, 5 tables
| null | null |
DM18-0447
|
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Technical debt (TD) refers to suboptimal choices during software development
that achieve short-term goals at the expense of long-term quality. Although
developers often informally discuss TD, the concept has not yet crystalized
into a consistently applied label when describing issues in most repositories.
We apply machine learning to understand developer insights into TD when
discussing tickets in an issue tracker. We generate expert labels that indicate
whether discussion of TD occurs in the free text associated with each ticket in
a sample of more than 1,900 tickets in the Chromium issue tracker. We then use
these labels to train a classifier that estimates labels for the remaining
475,000 tickets. We conclude that discussion of TD appears in about 16% of the
tracked Chromium issues. If we can effectively classify TD-related issues, we
can focus on what practices could be most useful for their timely resolution.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 15:20:20 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Ozkaya",
"Ipek",
""
],
[
"Kurtz",
"Zachary",
""
],
[
"Nord",
"Robert L.",
""
],
[
"Sangwan",
"Raghvinder S.",
""
],
[
"Srinivasan",
"Satish M.",
""
]
] |
new_dataset
| 0.96189 |
2201.12200
|
Rachel Arredondo
|
Rachel Arredondo, Ofri Dar, Kylon Chiang, Arielle Blonder, Linning Yao
|
Blue Ceramics: Co-designing Morphing Ceramics for Seagrass Meadow
Restoration
|
12 pages with 32 figures, ACM C&C Pictorial
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Seagrass meadows are twice as efficient as forests at capturing and storing
carbon, but over the last two decades they have been disappearing due to human
activities. We take a nature-centered design approach using contextual inquiry
and iterative participatory design methods to consolidate knowledge from the
marine and material sciences into industrial design. The documented sketches and
renders evolved into design and fabrication guidelines. This pictorial
documents a dialogue between designers and scientists to design an ecological
intervention using digital fabrication to manufacture morphing ceramics for
seagrass meadow restoration.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 15:57:56 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Arredondo",
"Rachel",
""
],
[
"Dar",
"Ofri",
""
],
[
"Chiang",
"Kylon",
""
],
[
"Blonder",
"Arielle",
""
],
[
"Yao",
"Linning",
""
]
] |
new_dataset
| 0.999582 |