id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
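The rows below can be read back with a short sketch, assuming the table has been exported as JSON Lines with the field names from the header above (the path `arxiv_predictions.jsonl` and the helper names are hypothetical):

```python
import json

def load_records(path="arxiv_predictions.jsonl"):
    # Hypothetical export of the rows below as JSON Lines, one record per line,
    # with keys matching the column names in the header.
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)

def high_confidence(records, threshold=0.99):
    # Keep rows whose `prediction` is "new_dataset" and whose
    # `probability` meets the threshold.
    return [r for r in records
            if r.get("prediction") == "new_dataset"
            and float(r.get("probability", 0.0)) >= threshold]
```

This is only a sketch of one plausible consumption path; the dump shown here could equally be parsed from its pipe-separated form.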
2201.12265
|
Haixin Sun
|
Haixin Sun, Minh-Quan Dao, Vincent Fremont
|
3D-FlowNet: Event-based optical flow estimation with 3D representation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Event-based cameras can overcome the limitations of frame-based cameras for
important tasks such as high-speed motion detection during self-driving car
navigation in low-illumination conditions. The event cameras' high temporal
resolution and high dynamic range allow them to work in fast-motion and
extreme-light scenarios. However, conventional computer vision methods, such as
Deep Neural Networks, are not well adapted to event data, which are
asynchronous and discrete. Moreover, traditional 2D-encoding representation
methods for event data sacrifice time resolution. In this paper, we first
improve the 2D-encoding representation by expanding it into three dimensions to
better preserve the temporal distribution of the events. We then propose
3D-FlowNet, a novel network architecture that can process the 3D input
representation and output optical flow estimations according to the new
encoding methods. A self-supervised training strategy is adopted to compensate
for the lack of labeled datasets for event-based cameras. Finally, the proposed
network is trained and evaluated on the Multi-Vehicle Stereo Event Camera
(MVSEC) dataset. The results show that our 3D-FlowNet outperforms
state-of-the-art approaches with fewer training epochs (30, compared to 100 for
Spike-FlowNet).
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 17:28:15 GMT"
}
] | 2022-01-31T00:00:00 |
[
[
"Sun",
"Haixin",
""
],
[
"Dao",
"Minh-Quan",
""
],
[
"Fremont",
"Vincent",
""
]
] |
new_dataset
| 0.997911 |
2105.12931
|
Weijun Tan
|
Delong Qi, Weijun Tan, Qi Yao, Jingfeng Liu
|
YOLO5Face: Why Reinventing a Face Detector
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Tremendous progress has been made on face detection in recent years using
convolutional neural networks. While many face detectors use designs tailored
specifically to detecting faces, we treat face detection as a generic object
detection task. We implement a face detector based on the YOLOv5 object
detector and call it YOLO5Face. We make a few key modifications to YOLOv5 and
optimize it for face detection. These modifications include adding a five-point
landmark regression head, using a stem block at the input of the backbone,
using smaller-size kernels in the SPP, and adding a P6 output in the PAN block.
We design detectors of different model sizes, from an extra-large model that
achieves the best performance to a super-small model for real-time detection on
embedded or mobile devices. Experimental results on the WiderFace dataset show
that on VGA images our face detectors achieve state-of-the-art performance in
almost all of the Easy, Medium, and Hard subsets, exceeding the more complex
designated face detectors. The code is available at
\url{https://github.com/deepcam-cn/yolov5-face}
|
[
{
"version": "v1",
"created": "Thu, 27 May 2021 03:54:38 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Dec 2021 04:40:13 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Jan 2022 16:26:17 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Qi",
"Delong",
""
],
[
"Tan",
"Weijun",
""
],
[
"Yao",
"Qi",
""
],
[
"Liu",
"Jingfeng",
""
]
] |
new_dataset
| 0.986694 |
2110.03072
|
Travis Munyer
|
Travis Munyer, Pei-Chi Huang, Chenyu Huang, Xin Zhong
|
FOD-A: A Dataset for Foreign Object Debris in Airports
|
This paper has been accepted for publication by 20th IEEE
International Conference on Machine Learning and Applications. The copyright
is with the IEEE
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Foreign Object Debris (FOD) detection has attracted increased attention in
the areas of machine learning and computer vision. However, a robust, publicly
available image dataset for FOD has not yet been established. To this end,
this paper introduces an image dataset of FOD, named FOD in Airports (FOD-A).
FOD-A object categories were selected based on guidance from prior
documentation and related research by the Federal Aviation Administration
(FAA). In addition to the primary bounding-box annotations for object
detection, FOD-A provides labeled environmental conditions: each annotation
instance is further categorized into three light-level categories (bright, dim,
and dark) and two weather categories (dry and wet). Currently, FOD-A has
released 31 object categories and over 30,000 annotation instances. This paper
presents the creation methodology, discusses the publicly available dataset
extension process, and demonstrates the practicality of FOD-A with widely used
machine learning models for object detection.
|
[
{
"version": "v1",
"created": "Wed, 6 Oct 2021 21:11:50 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jan 2022 20:38:41 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Munyer",
"Travis",
""
],
[
"Huang",
"Pei-Chi",
""
],
[
"Huang",
"Chenyu",
""
],
[
"Zhong",
"Xin",
""
]
] |
new_dataset
| 0.9998 |
2112.12248
|
David Anisi
|
Yvonne Murray, Martin Sirev{\aa}g, Pedro Ribeiro, David A. Anisi,
Morten Mossige
|
Safety assurance of an industrial robotic control system using
hardware/software co-verification
|
preprint, Author Accepted Manuscript
| null |
10.1016/j.scico.2021.102766
| null |
cs.RO cs.FL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As a general trend in industrial robotics, an increasing number of safety
functions are being developed or re-engineered to be handled in software rather
than by physical hardware such as safety relays or interlock circuits. This
trend reinforces the importance of supplementing traditional, input-based
testing and quality procedures, which are widely used in industry today, with
formal verification and model-checking methods. To this end, this paper focuses
on a representative safety-critical system in an ABB industrial paint robot,
namely the High-Voltage electrostatic Control system (HVC). The practical
convergence of the high-voltage produced by the HVC, essential for safe
operation, is formally verified using a novel and general co-verification
framework where hardware and software models are related via platform mappings.
This approach enables the pragmatic combination of highly diverse and
specialised tools. The paper's main contribution includes details on how
hardware abstraction and verification results can be transferred between tools
in order to verify system-level safety properties. It is noteworthy that the
HVC application considered in this paper has a rather generic form of a
feedback controller. Hence, the co-verification framework and experiences
reported here are also highly relevant for any cyber-physical system tracking a
setpoint reference.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 22:29:40 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Dec 2021 10:29:25 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Murray",
"Yvonne",
""
],
[
"Sirevåg",
"Martin",
""
],
[
"Ribeiro",
"Pedro",
""
],
[
"Anisi",
"David A.",
""
],
[
"Mossige",
"Morten",
""
]
] |
new_dataset
| 0.99654 |
2201.09149
|
Juncheng Dong
|
Juncheng Dong, Suya Wu, Mohammadreza Sultani, Vahid Tarokh
|
Multi-Agent Adversarial Attacks for Multi-Channel Communications
| null | null | null | null |
cs.MA cs.IT cs.LG math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, Reinforcement Learning (RL) has been applied as an anti-adversarial
remedy in wireless communication networks. However, studying RL-based
approaches from the adversary's perspective has received little attention.
Additionally, RL-based approaches in an anti-adversary or adversarial paradigm
mostly consider single-channel communication (either channel selection or
single channel power control), while multi-channel communication is more common
in practice. In this paper, we propose a multi-agent adversary system (MAAS)
for modeling and analyzing adversaries in a wireless communication scenario by
careful design of the reward function under realistic communication scenarios.
In particular, by modeling the adversaries as learning agents, we show that the
proposed MAAS is able to successfully choose the transmitted channel(s) and
their respective allocated power(s) without any prior knowledge of the sender
strategy. Compared to the single-agent adversary (SAA), multi-agents in MAAS
can achieve a significant reduction in the signal-to-interference-plus-noise
ratio (SINR) under the same power constraints and partial observability, while
providing improved stability and a more efficient learning process. Moreover,
through empirical studies we show that the results in simulation are close to
those in real-world communication, a conclusion that is pivotal to the validity
of the performance of agents evaluated in simulations.
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 23:57:00 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jan 2022 15:51:28 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Dong",
"Juncheng",
""
],
[
"Wu",
"Suya",
""
],
[
"Sultani",
"Mohammadreza",
""
],
[
"Tarokh",
"Vahid",
""
]
] |
new_dataset
| 0.971643 |
2201.11192
|
Gyri Reiersen
|
Gyri Reiersen, David Dao, Bj\"orn L\"utjens, Konstantin Klemmer, Kenza
Amara, Attila Steinegger, Ce Zhang, Xiaoxiang Zhu
|
ReforesTree: A Dataset for Estimating Tropical Forest Carbon Stock with
Deep Learning and Aerial Imagery
|
Accepted paper for the AI for Social Impact Track at the AAAI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Forest biomass is a key influence on future climate, and the world urgently
needs highly scalable financing schemes, such as carbon offsetting
certifications, to protect and restore forests. Current manual forest carbon
stock inventory methods, which measure single trees by hand, are time-,
labour-, and cost-intensive and have been shown to be subjective. They can lead
to substantial overestimation of the carbon stock and, ultimately, distrust in
forest financing. Leveraging advancements in machine learning and remote
sensing technologies has promising potential for impact and scale, but such
methods need to be of high quality in order to replace the current forest stock
protocols for certifications.
  In this paper, we present ReforesTree, a benchmark dataset of forest carbon
stock in six agro-forestry carbon offsetting sites in Ecuador. Furthermore, we
show that a deep learning-based end-to-end model using individual tree
detection from low-cost RGB-only drone imagery accurately estimates forest
carbon stock within official carbon offsetting certification standards.
Additionally, our baseline CNN model outperforms state-of-the-art
satellite-based forest biomass and carbon stock estimates for this type of
small-scale, tropical agro-forestry site. We present this dataset to encourage
machine learning research in this area, to increase the accountability and
transparency of monitoring, verification and reporting (MVR) in carbon
offsetting projects, and to scale global reforestation financing through
accurate remote sensing.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 21:27:57 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Reiersen",
"Gyri",
""
],
[
"Dao",
"David",
""
],
[
"Lütjens",
"Björn",
""
],
[
"Klemmer",
"Konstantin",
""
],
[
"Amara",
"Kenza",
""
],
[
"Steinegger",
"Attila",
""
],
[
"Zhang",
"Ce",
""
],
[
"Zhu",
"Xiaoxiang",
""
]
] |
new_dataset
| 0.99979 |
2201.11227
|
Ashish Tiwari
|
Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Tiwari, Gustavo
Soares, Christopher Meek, Sumit Gulwani
|
Synchromesh: Reliable code generation from pre-trained language models
|
10 pages, 9 additional pages of Appendix
| null | null | null |
cs.LG cs.PL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Large pre-trained language models have been used to generate code, providing a
flexible interface for synthesizing programs from natural language
specifications. However, they often violate syntactic and semantic rules of
their output language, limiting their practical usability. In this paper, we
propose Synchromesh: a framework for substantially improving the reliability of
pre-trained models for code generation. Synchromesh comprises two components.
First, it retrieves few-shot examples from a training bank using Target
Similarity Tuning (TST), a novel method for semantic example selection. TST
learns to recognize utterances that describe similar target programs despite
differences in surface natural language features. Then, Synchromesh feeds the
examples to a pre-trained language model and samples programs using Constrained
Semantic Decoding (CSD): a general framework for constraining the output to a
set of valid programs in the target language. CSD leverages constraints on
partial outputs to sample complete correct programs, and needs neither
re-training nor fine-tuning of the language model. We evaluate our methods by
synthesizing code from natural language descriptions using GPT-3 and Codex in
three real-world languages: SQL queries, Vega-Lite visualizations and SMCalFlow
programs. These domains showcase rich constraints that CSD is able to enforce,
including syntax, scope, typing rules, and contextual logic. We observe
substantial complementary gains from CSD and TST in prediction accuracy and in
effectively preventing run-time errors.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 22:57:44 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Poesia",
"Gabriel",
""
],
[
"Polozov",
"Oleksandr",
""
],
[
"Le",
"Vu",
""
],
[
"Tiwari",
"Ashish",
""
],
[
"Soares",
"Gustavo",
""
],
[
"Meek",
"Christopher",
""
],
[
"Gulwani",
"Sumit",
""
]
] |
new_dataset
| 0.982816 |
2201.11275
|
Amani Abusafia
|
Jessica Yao, Amani Abusafia, Abdallah Lakhdari, and Athman Bouguettaya
|
Wireless IoT Energy Sharing Platform
|
3 pages, 3 figures, PERCOM 2022 , Demo Paper
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Wireless energy sharing is a novel, convenient alternative for charging IoT
devices. In this demo paper, we present a peer-to-peer wireless energy sharing
platform. The platform enables users to exchange energy wirelessly with nearby
IoT devices, allowing IoT users to send and receive energy wirelessly. The
platform consists of (i) a mobile application that monitors and synchronizes
the energy transfer between two IoT devices and (ii) a backend to register
energy providers and consumers and store their energy transfer transactions.
The developed framework allows the collection of a real wireless energy sharing
dataset. A set of preliminary experiments has been conducted on the collected
dataset to analyze and demonstrate the behavior of the current wireless energy
sharing technology.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 02:03:06 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Yao",
"Jessica",
""
],
[
"Abusafia",
"Amani",
""
],
[
"Lakhdari",
"Abdallah",
""
],
[
"Bouguettaya",
"Athman",
""
]
] |
new_dataset
| 0.998697 |
2201.11330
|
Geng Liu
|
Geng Liu, Saumil Patel, Ramesh Balakrishnan and Taehun Lee
|
IMEXLBM 1.0: A Proxy Application based on the Lattice Boltzmann Method
for solving Computational Fluid Dynamic problems on GPUs
| null | null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The US Department of Energy launched the Exascale Computing Project (ECP) in
2016 as part of a coordinated effort to achieve the next generation of
high-performance computing (HPC) and to accelerate scientific discovery. The
Exascale Proxy Applications Project began within the ECP to: (1) improve the
quality of proxies created by the ECP; (2) provide small, simplified codes that
share important features of large applications; and (3) capture programming
methods and styles that drive requirements for compilers and other elements of
the tool chain. This article describes one Proxy Application (or "proxy app")
suite called IMEXLBM, an open-source, self-contained code unit with minimal
dependencies that is capable of running on heterogeneous platforms, such as
those with graphics processing units (GPUs), to accelerate the calculation. In
particular, we demonstrate functionality by solving a benchmark
problem in computational fluid dynamics (CFD) on the ThetaGPU machine at the
Argonne Leadership Computing Facility (ALCF). Our method makes use of a
domain-decomposition technique in conjunction with the message-passing
interface (MPI) standard for distributed memory systems. The OpenMP application
programming interface (API) is employed for shared-memory multi-processing and
offloading critical kernels to the device (i.e. GPU). We also verify our effort
by comparing data generated via CPU-only calculations with data generated with
CPU+GPU calculations. While we demonstrate efficacy for single-phase fluid
problems, the code unit is designed to be versatile and to enable new physical
models that can capture complex phenomena such as two-phase flow with interface
capture.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 05:31:16 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Liu",
"Geng",
""
],
[
"Patel",
"Saumil",
""
],
[
"Balakrishnan",
"Ramesh",
""
],
[
"Lee",
"Taehun",
""
]
] |
new_dataset
| 0.958821 |
2201.11342
|
Dimitrios Sikeridis
|
Dimitrios Sikeridis, Michael Devetsikiotis
|
Smart City Defense Game: Strategic Resource Management during
Socio-Cyber-Physical Attacks
| null | null | null | null |
cs.GT cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Ensuring public safety in a Smart City (SC) environment is a critical and
increasingly complicated task due to the involvement of multiple agencies and
the city's expansion across cyber and social layers. In this paper, we propose
an extensive form perfect information game to model interactions and optimal
city resource allocations when a Terrorist Organization (TO) performs attacks
on multiple targets across two conceptual SC levels, a physical, and a
cyber-social. The Smart City Defense Game (SCDG) considers three players that
initially are entitled to a specific finite budget. Two SC agencies that have
to defend their physical or social territories respectively, fight against a
common enemy, the TO. Each layer consists of multiple targets, and the attack
outcome depends on whether or not the resources allocated there by the
associated agency exceed the TO's. Each player's utility is equal to the number
of
successfully defended targets. The two agencies are allowed to make budget
transfers provided that it is beneficial for both. We completely characterize
the Sub-game Perfect Nash Equilibrium (SPNE) of the SCDG that consists of
strategies for optimal resource exchanges between SC agencies and accounts for
the TO's budget allocation across the physical and social targets. Also, we
present numerical and comparative results demonstrating that when the SC
players act according to the SPNE, they maximize the number of successfully
defended targets. The SCDG is shown to be a promising solution for modeling
critical resource allocations between SC parties in the face of multi-layer
simultaneous terrorist attacks.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 06:41:12 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Sikeridis",
"Dimitrios",
""
],
[
"Devetsikiotis",
"Michael",
""
]
] |
new_dataset
| 0.999526 |
2201.11370
|
Hajar Moudoud
|
Hajar Moudoud, Soumaya Cherkaoui and Lyes Khoukhi
|
An IoT Blockchain Architecture Using Oracles and Smart Contracts: the
Use-Case of a Food Supply Chain
|
This paper has been accepted for publication by IEEE 30th Annual
International Symposium on Personal, Indoor and Mobile Radio Communications
(PIMRC). The final version will be published by the IEEE
| null |
10.1109/PIMRC.2019.8904404
| null |
cs.NI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The blockchain is a distributed technology which allows establishing trust
among unreliable users who interact and perform transactions with each other.
While blockchain technology has been mainly used for crypto-currency, it has
emerged as an enabling technology for establishing trust in the realm of the
Internet of Things (IoT). Nevertheless, a naive usage of the blockchain for IoT
leads to high delays and extensive computational demands. In this paper, we
propose a blockchain architecture designed for use in a supply chain that
comprises different distributed IoT entities. We propose a lightweight
consensus for this architecture, called LC4IoT. The consensus is evaluated
through extensive simulations. The results show that the proposed consensus
uses low computational power, storage capability and latency.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 08:10:37 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Moudoud",
"Hajar",
""
],
[
"Cherkaoui",
"Soumaya",
""
],
[
"Khoukhi",
"Lyes",
""
]
] |
new_dataset
| 0.968962 |
2201.11662
|
Parand Akbari
|
Parand Akbari, Francis Ogoke, Ning-Yu Kao, Kazem Meidani, Chun-Yu Yeh,
William Lee, Amir Barati Farimani
|
MeltpoolNet: Melt pool Characteristic Prediction in Metal Additive
Manufacturing Using Machine Learning
| null | null | null | null |
cs.LG cond-mat.mtrl-sci cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Characterizing meltpool shape and geometry is essential in metal Additive
Manufacturing (MAM) to control the printing process and avoid defects.
Predicting meltpool flaws based on process parameters and powder material is
difficult due to the complex nature of the MAM process. Machine learning (ML)
techniques can be useful in connecting process parameters to the type of flaws
in the meltpool. In this work, we introduce a comprehensive framework for
benchmarking ML for melt pool characterization. An extensive experimental
dataset has been collected from more than 80 MAM articles, containing MAM
processing conditions, materials, meltpool dimensions, meltpool modes, and flaw
types. We introduce physics-aware MAM featurization, versatile ML models, and
evaluation metrics to create a comprehensive learning framework for meltpool
defect and geometry prediction. This benchmark can serve as a basis for melt
pool control and process optimization. In addition, data-driven explicit models
have been identified that estimate meltpool geometry from process parameters
and material properties, outperforming the Rosenthal estimation for meltpool
geometry while maintaining interpretability.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 04:08:56 GMT"
}
] | 2022-01-28T00:00:00 |
[
[
"Akbari",
"Parand",
""
],
[
"Ogoke",
"Francis",
""
],
[
"Kao",
"Ning-Yu",
""
],
[
"Meidani",
"Kazem",
""
],
[
"Yeh",
"Chun-Yu",
""
],
[
"Lee",
"William",
""
],
[
"Farimani",
"Amir Barati",
""
]
] |
new_dataset
| 0.999589 |
1908.06751
|
Guillaume Theyssier
|
Nicolas Ollinger (LIFO), Guillaume Theyssier (I2M)
|
Freezing, Bounded-Change and Convergent Cellular Automata
| null | null | null | null |
cs.DM cs.CC nlin.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies three classes of cellular automata from a computational
point of view: freezing cellular automata where the state of a cell can only
decrease according to some order on states, cellular automata where each cell
only makes a bounded number of state changes in any orbit, and finally cellular
automata where each orbit converges to some fixed point. Many examples studied
in the literature fit into these definitions, in particular the works on
crystal growth started by S. Ulam in the 1960s. The central question addressed
here is how the computational power and computational hardness of basic
properties are affected by the constraints of convergence, a bounded number of
changes, or local decreasing of states in each cell. By studying various
benchmark problems (short-term prediction, long term reachability, limits) and
considering various complexity measures and scales (LOGSPACE vs. PTIME,
communication complexity, Turing computability and arithmetical hierarchy) we
give a rich and nuanced answer: the overall computational complexity of such
cellular automata depends on the class considered (among the three above), the
dimension, and the precise problem studied. In particular, we show that all
settings can achieve universality in the sense of Blondel-Delvenne-K\r{u}rka,
although short term predictability varies from NLOGSPACE to P-complete.
Besides, the computability of limit configurations starting from computable
initial configurations separates bounded-change from convergent cellular
automata in dimension~1, but also dimension~1 versus higher dimensions for
freezing cellular automata. Another surprising dimension-sensitive result
obtained is that nilpotency becomes decidable in dimension~1 for all three
classes, while it stays undecidable even for freezing cellular automata in
higher dimension.
|
[
{
"version": "v1",
"created": "Mon, 19 Aug 2019 12:39:10 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Feb 2020 09:05:12 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Apr 2021 07:55:33 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Jan 2022 10:02:31 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Ollinger",
"Nicolas",
"",
"LIFO"
],
[
"Theyssier",
"Guillaume",
"",
"I2M"
]
] |
new_dataset
| 0.974441 |
2007.10773
|
Irena Rusu Ph.D.
|
Irena Rusu
|
Stick graphs: examples and counter-examples
|
15 pages, 5 figures
| null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grid intersection graphs are the intersection graphs of vertical and
horizontal segments in the plane. When the bottom endpoints of the vertical
segments and the left endpoints of the horizontal segments belong to a line
with negative slope, the graph is called a Stick graph. Very few results exist
on
Stick graphs: only small classes of Stick graphs have been identified;
recognizing Stick graphs is an open problem; and even building examples of
graphs that are not Stick graphs is quite tricky.
In this paper, we first prove that the complements of circle graphs and of
circular arc graphs are Stick graphs. Then, we propose two certificates
allowing one to decide that a graph is not a Stick graph, and use them to build new
examples of non-Stick graphs. It turns out that these examples of non-Stick
graphs, as well as all those from literature, have long holes. We thus also
investigate the place of chordal grid intersection graphs in the hierarchy of
classes built around Stick graphs.
|
[
{
"version": "v1",
"created": "Tue, 21 Jul 2020 13:14:50 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Aug 2020 13:04:50 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Jan 2022 08:56:20 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Rusu",
"Irena",
""
]
] |
new_dataset
| 0.999819 |
2009.13018
|
Ishan Karunanayake
|
Ishan Karunanayake, Nadeem Ahmed, Robert Malaney, Rafiqul Islam,
Sanjay Jha
|
De-anonymisation attacks on Tor: A Survey
|
This work is published in IEEE Communications Surveys & Tutorials and
is licensed under a Creative Commons Attribution 4.0 License. For more
information, see https://creativecommons.org/licenses/by/4.0/. Link to the
article https://ieeexplore.ieee.org/abstract/document/9471821
|
IEEE Communications Surveys & Tutorials, vol. 23, no. 4, pp.
2324-2350, Fourthquarter 2021
|
10.1109/COMST.2021.3093615
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anonymity networks are becoming increasingly popular in today's online world
as more users attempt to safeguard their online privacy. Tor is currently the
most popular anonymity network in use and provides anonymity to both users and
services (hidden services). However, the anonymity provided by Tor is also
being misused in various ways. Hosting illegal sites for selling drugs, hosting
command and control servers for botnets, and distributing censored content are
but a few such examples. As a result, various parties, including governments
and law enforcement agencies, are interested in attacks that assist in
de-anonymising the Tor network, disrupting its operations, and bypassing its
censorship circumvention mechanisms. In this survey paper, we review known Tor
attacks and identify current techniques for the de-anonymisation of Tor users
and hidden services. We discuss these techniques and analyse the practicality
of their execution method. We conclude by discussing improvements to the Tor
framework that help prevent the surveyed de-anonymisation attacks.
|
[
{
"version": "v1",
"created": "Mon, 28 Sep 2020 02:16:12 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Sep 2020 23:50:16 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Jan 2022 00:00:37 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Karunanayake",
"Ishan",
""
],
[
"Ahmed",
"Nadeem",
""
],
[
"Malaney",
"Robert",
""
],
[
"Islam",
"Rafiqul",
""
],
[
"Jha",
"Sanjay",
""
]
] |
new_dataset
| 0.994672 |
2105.00327
|
Kuan Xu
|
Kuan Xu, Chen Wang, Chao Chen, Wei Wu, Sebastian Scherer
|
AirCode: A Robust Object Encoding Method
|
IEEE Robotics and Automation Letters (RA-L), 2022
| null |
10.1109/LRA.2022.3141221
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object encoding and identification are crucial for many robotic tasks such as
autonomous exploration and semantic relocalization. Existing works heavily rely
on the tracking of detected objects but have difficulty recalling revisited
objects precisely. In this paper, we propose a novel object encoding method,
named AirCode, based on a graph of key-points. To be robust to the number of
key-points detected, we propose a feature sparse encoding and object dense
encoding method to ensure that each key-point can only affect a small part of
the object descriptors, making it robust to viewpoint changes,
scaling, occlusion, and even object deformation. In the experiments, we show
that it achieves superior performance for object identification than the
state-of-the-art algorithms and is able to provide reliable semantic
relocalization. It is a plug-and-play module and we expect that it will play an
important role in various applications.
|
[
{
"version": "v1",
"created": "Sat, 1 May 2021 18:56:15 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Sep 2021 20:06:58 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Jan 2022 18:49:35 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Jan 2022 06:44:22 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Xu",
"Kuan",
""
],
[
"Wang",
"Chen",
""
],
[
"Chen",
"Chao",
""
],
[
"Wu",
"Wei",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
new_dataset
| 0.968409 |
2106.05596
|
Sanka Rasnayaka
|
Sachith Seneviratne, Nuran Kasthuriaarachchi, Sanka Rasnayaka
|
Multi-Dataset Benchmarks for Masked Identification using Contrastive
Representation Learning
| null | null |
10.1109/DICTA52665.2021.9647194
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The COVID-19 pandemic has drastically changed accepted norms globally. Within
the past year, masks have been used as a public health response to limit the
spread of the virus. This sudden change has rendered many face recognition
based access control, authentication and surveillance systems ineffective.
Official documents such as passports, driving license and national identity
cards are enrolled with fully uncovered face images. However, in the current
global situation, face matching systems should be able to match these reference
images with masked face images. As an example, in an airport or security
checkpoint it is safer to match the unmasked image of the identifying document
to the masked person rather than asking them to remove the mask. We find that
current facial recognition techniques are not robust to this form of occlusion.
To address this unique requirement presented due to the current circumstance,
we propose a set of re-purposed datasets and a benchmark for researchers to
use. We also propose a contrastive visual representation learning based
pre-training workflow which is specialized to masked vs unmasked face matching.
We ensure that our method learns robust features to differentiate people across
varying data collection scenarios. We achieve this by training over many
different datasets and validating our result by testing on various holdout
datasets. The specialized weights trained by our method outperform standard
face recognition features for masked to unmasked face matching. We believe the
provided synthetic mask generating code, our novel training approach and the
trained weights from the masked face models will help in adapting existing face
recognition systems to operate in the current global environment. We
open-source all contributions for broader use by the research community.
|
[
{
"version": "v1",
"created": "Thu, 10 Jun 2021 08:58:10 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Seneviratne",
"Sachith",
""
],
[
"Kasthuriaarachchi",
"Nuran",
""
],
[
"Rasnayaka",
"Sanka",
""
]
] |
new_dataset
| 0.999728 |
2110.04934
|
Yiming Wang
|
Yiming Wang, Jinyu Li, Heming Wang, Yao Qian, Chengyi Wang, Yu Wu
|
Wav2vec-Switch: Contrastive Learning from Original-noisy Speech Pairs
for Robust Speech Recognition
|
Accepted at IEEE ICASSP 2022. 5 pages, 1 figure
| null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of self-supervised learning (SSL) for automatic speech recognition
(ASR) is to learn good speech representations from a large amount of unlabeled
speech for the downstream ASR task. However, most SSL frameworks do not
consider noise robustness which is crucial for real-world applications. In this
paper we propose wav2vec-Switch, a method to encode noise robustness into
contextualized representations of speech via contrastive learning.
Specifically, we feed original-noisy speech pairs simultaneously into the
wav2vec 2.0 network. In addition to the existing contrastive learning task, we
switch the quantized representations of the original and noisy speech as
additional prediction targets of each other. By doing this, we force the
network to make consistent predictions for the original and noisy speech, thus
allowing it to learn noise-robust contextualized representations. Our
experiments on synthesized and real noisy data show the effectiveness of our
method: it achieves 2.9--4.9% relative word error rate (WER) reduction on the
synthesized noisy LibriSpeech data without deterioration on the original data,
and 5.7% on CHiME-4 real 1-channel noisy data compared to a data augmentation
baseline even with a strong language model for decoding. Our results on CHiME-4
can match or even surpass those with well-designed speech enhancement
components.
|
[
{
"version": "v1",
"created": "Mon, 11 Oct 2021 00:08:48 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jan 2022 00:18:29 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Wang",
"Yiming",
""
],
[
"Li",
"Jinyu",
""
],
[
"Wang",
"Heming",
""
],
[
"Qian",
"Yao",
""
],
[
"Wang",
"Chengyi",
""
],
[
"Wu",
"Yu",
""
]
] |
new_dataset
| 0.953963 |
2112.01591
|
Andr\'e Seidel Oliveira
|
Andr\'e Seidel Oliveira, Anna Helena Reali Costa
|
PLSUM: Generating PT-BR Wikipedia by Summarizing Multiple Websites
|
Published on Encontro Nacional de Intelig\^encia Artificial e
Computacional (ENIAC) 2021 conference
|
2021: Anais do XVIII Encontro Nacional de Intelig\^encia
Artificial e Computacional
|
10.5753/eniac.2021.18300
| null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wikipedia is an important free source of intelligible knowledge. Despite
that, Brazilian Portuguese Wikipedia still lacks descriptions for many
subjects. In an effort to expand the Brazilian Wikipedia, we contribute PLSum,
a framework for generating wiki-like abstractive summaries from multiple
descriptive websites. The framework has an extractive stage followed by an
abstractive one. In particular, for the abstractive stage, we fine-tune and
compare two recent variations of the Transformer neural network: PTT5 and
Longformer. To fine-tune and evaluate the model, we created a dataset with
thousands of examples, linking reference websites to Wikipedia. Our results
show that it is possible to generate meaningful abstractive summaries from
Brazilian Portuguese web content.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 20:16:17 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Oliveira",
"André Seidel",
""
],
[
"Costa",
"Anna Helena Reali",
""
]
] |
new_dataset
| 0.993431 |
2201.02013
|
Wentu Song
|
Wentu Song, Kui Cai, and Tuan Thanh Nguyen
|
List-decodable Codes for Single-deletion Single-substitution with
List-size Two
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present an explicit construction of list-decodable codes
for single-deletion and single-substitution with list size two and redundancy
3log n+4, where n is the block length of the code. Our construction has lower
redundancy than the best known explicit construction by Gabrys et al. (arXiv
2021), whose redundancy is 4log n+O(1).
|
[
{
"version": "v1",
"created": "Thu, 6 Jan 2022 11:08:50 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jan 2022 08:55:21 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Song",
"Wentu",
""
],
[
"Cai",
"Kui",
""
],
[
"Nguyen",
"Tuan Thanh",
""
]
] |
new_dataset
| 0.999522 |
2201.06753
|
Chongxin Zhong
|
Chongxin Zhong, Qidong Zhao, and Xu Liu
|
BinGo: Pinpointing Concurrency Bugs in Go via Binary Analysis
| null | null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Golang (also known as Go for short) has become popular in building
concurrency programs in distributed systems. Among its unique features, Go
employs lightweight goroutines to support high parallelism in user space. Moreover,
Go leverages channels to enable explicit communication among threads. However,
recent studies show that concurrency bugs are not uncommon in Go applications.
Pinpointing these concurrency bugs in real Go applications is both important
and challenging. Existing approaches are mostly based on compiler-aided static
or dynamic analysis, which have two limitations. First, existing approaches
require the availability and recompilation of the source code, which work well
on testing rather than production environments with no source code available
for both applications and external libraries. Second, existing approaches work
on pure Go code bases only, not programs mixed with Go and other languages. To
address these limitations, we develop BinGo, the first tool to identify
concurrency bugs in Go applications via dynamic binary analysis. BinGo
correlates binary execution with Go semantics and employs novel bug detection
algorithms. BinGo is an end-to-end tool that is ready for deployment in the
production environment with no modification on source code, compilers, and
runtimes in the Go eco-system. Our experiments show that BinGo has a high
coverage of concurrency bugs with no false positives. We are able to use BinGo
to identify concurrency bugs in real applications with moderate overhead.
|
[
{
"version": "v1",
"created": "Tue, 18 Jan 2022 05:33:22 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jan 2022 17:39:13 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Zhong",
"Chongxin",
""
],
[
"Zhao",
"Qidong",
""
],
[
"Liu",
"Xu",
""
]
] |
new_dataset
| 0.988673 |
2201.10474
|
Suchin Gururangan
|
Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy
Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith
|
Whose Language Counts as High Quality? Measuring Language Ideologies in
Text Data Selection
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language models increasingly rely on massive web dumps for diverse text data.
However, these sources are rife with undesirable content. As such, resources
like Wikipedia, books, and newswire often serve as anchors for automatically
selecting web text most suitable for language modeling, a process typically
referred to as quality filtering. Using a new dataset of U.S. high school
newspaper articles -- written by students from across the country -- we
investigate whose language is preferred by the quality filter used for GPT-3.
We find that newspapers from larger schools, located in wealthier, educated,
and urban ZIP codes are more likely to be classified as high quality. We then
demonstrate that the filter's measurement of quality is unaligned with other
sensible metrics, such as factuality or literary acclaim. We argue that
privileging any corpus as high quality entails a language ideology, and more
care is needed to construct training corpora for language models, with better
transparency and justification for the inclusion or exclusion of various texts.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 17:20:04 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jan 2022 18:46:26 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Gururangan",
"Suchin",
""
],
[
"Card",
"Dallas",
""
],
[
"Dreier",
"Sarah K.",
""
],
[
"Gade",
"Emily K.",
""
],
[
"Wang",
"Leroy Z.",
""
],
[
"Wang",
"Zeyu",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Smith",
"Noah A.",
""
]
] |
new_dataset
| 0.969879 |
2201.10585
|
B.Sundar Rajan
|
K. K. Krishnan Namboodiri, Elizabath Peter and B. Sundar Rajan
|
Extended Placement Delivery Arrays for Multi-Antenna Coded Caching
Scheme
|
10 pages, 1 figure
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The multi-antenna coded caching problem, in which a server with $L$ transmit
antennas communicates with $K$ users through a wireless broadcast link, is
addressed. In the problem setting, the server has a library of $N$ files, and
each user is equipped with a dedicated cache of capacity $M$. The idea of
extended placement delivery array (EPDA), an array which consists of a special
symbol $\star$ and integers in a set $\{1,2,\dots,S\}$, is proposed to obtain a
novel solution for the aforementioned multi-antenna coded caching problem. From
a $(K,L,F,Z,S)$ EPDA, a multi-antenna coded caching scheme with $K$ users, and
the server with $L$ transmit antennas, can be obtained in which the normalized
memory $\frac{M}{N}=\frac{Z}{F}$, and the delivery time $T=\frac{S}{F}$. The
placement delivery array (for single-antenna coded caching scheme) is a special
class of EPDAs with $L=1$. For the multi-antenna coded caching schemes
constructed from EPDAs, it is shown that the maximum possible Degree of Freedom
(DoF) that can be achieved is $t+L$, where $t=\frac{KM}{N}$ is an integer.
Furthermore, two constructions of EPDAs are proposed: a) $ K=t+L$, and b)
$K=nt+(n-1)L, \hspace{0.1cm}L\geq t$, where $n\geq 2$ is an integer. The
resulting multi-antenna schemes from those EPDAs achieve the full DoF, while
requiring a subpacketization number $\frac{K}{\text{gcd}(K,t,L)}$. This
subpacketization number is less than that required by previously known schemes
in the literature.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 19:10:20 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Namboodiri",
"K. K. Krishnan",
""
],
[
"Peter",
"Elizabath",
""
],
[
"Rajan",
"B. Sundar",
""
]
] |
new_dataset
| 0.997586 |
2201.10608
|
Xiang Deng
|
Xiang Deng, Prashant Shiralkar, Colin Lockard, Binxuan Huang, Huan Sun
|
DOM-LM: Learning Generalizable Representations for HTML Documents
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
HTML documents are an important medium for disseminating information on the
Web for human consumption. An HTML document presents information in multiple
text formats including unstructured text, structured key-value pairs, and
tables. Effective representation of these documents is essential for machine
understanding to enable a wide range of applications, such as Question
Answering, Web Search, and Personalization. Existing work has either
represented these documents using visual features extracted by rendering them
in a browser, which is typically computationally expensive, or has simply
treated them as plain text documents, thereby failing to capture useful
information presented in their HTML structure. We argue that the text and HTML
structure together convey important semantics of the content and therefore
warrant a special treatment for their representation learning. In this paper,
we introduce a novel representation learning approach for web pages, dubbed
DOM-LM, which addresses the limitations of existing approaches by encoding both
text and DOM tree structure with a transformer-based encoder and learning
generalizable representations for HTML documents via self-supervised
pre-training. We evaluate DOM-LM on a variety of webpage understanding tasks,
including Attribute Extraction, Open Information Extraction, and Question
Answering. Our extensive experiments show that DOM-LM consistently outperforms
all baselines designed for these tasks. In particular, DOM-LM demonstrates
better generalization performance both in few-shot and zero-shot settings,
making it well suited for real-world application settings
with limited labeled data.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 20:10:32 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Deng",
"Xiang",
""
],
[
"Shiralkar",
"Prashant",
""
],
[
"Lockard",
"Colin",
""
],
[
"Huang",
"Binxuan",
""
],
[
"Sun",
"Huan",
""
]
] |
new_dataset
| 0.991263 |
2201.10655
|
Dilara Kek\"ull\"uo\u{g}lu
|
Dilara Kek\"ull\"uo\u{g}lu, Walid Magdy, Kami Vaniea
|
From an Authentication Question to a Public Social Event: Characterizing
Birthday Sharing on Twitter
|
Proceedings of The 16th International AAAI Conference on Weblogs and
Social Media (ICWSM'22)
| null | null | null |
cs.SI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Date of birth (DOB) has historically been considered as private information
and safe to use for authentication, but recent years have seen a shift towards
wide public sharing. In this work we characterize how modern social media users
are approaching the sharing of birthday wishes publicly online. Over 45 days,
we collected over 2.8M tweets wishing happy birthday to 724K Twitter accounts.
For 50K accounts, their age was likely mentioned, revealing their DOB, and 10%
were protected accounts. Our findings show that the majority of both public and
protected accounts seem to be accepting of their birthdays and DOB being
revealed online by their friends even when they do not have it listed on their
profiles. We further complemented our findings through a survey to measure
awareness of DOB disclosure issues and how people think about sharing different
types of birthday-related information. Our analysis shows that giving birthday
wishes to others online is considered a celebration and many users are quite
comfortable with it. This view matches the trend also seen in security where
the use of DOB in authentication process is no longer considered best practice.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 22:27:03 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Keküllüoğlu",
"Dilara",
""
],
[
"Magdy",
"Walid",
""
],
[
"Vaniea",
"Kami",
""
]
] |
new_dataset
| 0.982071 |
2201.10656
|
Peixi Xiong
|
Peixi Xiong, Yilin Shen, Hongxia Jin
|
MGA-VQA: Multi-Granularity Alignment for Visual Question Answering
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning to answer visual questions is a challenging task since the
multi-modal inputs are within two feature spaces. Moreover, reasoning in visual
question answering requires the model to understand both image and question,
and align them in the same space, rather than simply memorize statistics about
the question-answer pairs. Thus, it is essential to find component connections
between different modalities and within each modality to achieve better
attention. Previous works learned attention weights directly on the features.
However, the improvement is limited since these two modality features are in
two domains: image features are highly diverse, lacking the structure and
grammatical rules of language, while natural language features have a higher
probability of missing detailed information. To better learn the attention
between visual and text, we focus on how to construct input stratification and
embed structural information to improve the alignment between different level
components. We propose Multi-Granularity Alignment architecture for Visual
Question Answering task (MGA-VQA), which learns intra- and inter-modality
correlations by multi-granularity alignment, and outputs the final result by
the decision fusion module. In contrast to previous works, our model splits
alignment into different levels to achieve learning better correlations without
needing additional data and annotations. The experiments on the VQA-v2 and GQA
datasets demonstrate that our model significantly outperforms non-pretrained
state-of-the-art methods on both datasets without extra pretraining data and
annotations. Moreover, it even achieves better results over the pre-trained
methods on GQA.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 22:30:54 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Xiong",
"Peixi",
""
],
[
"Shen",
"Yilin",
""
],
[
"Jin",
"Hongxia",
""
]
] |
new_dataset
| 0.986402 |
2201.10829
|
Haifan Yin
|
Haifan Yin and David Gesbert
|
A Partial Channel Reciprocity-based Codebook for Wideband FDD Massive
MIMO
|
15 pages, 8 figures, submitted to IEEE Transactions on Wireless
Communications
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The acquisition of channel state information (CSI) in Frequency Division
Duplex (FDD) massive MIMO has been a formidable challenge. In this paper, we
address this problem with a novel CSI feedback framework enabled by the partial
reciprocity of uplink and downlink channels in the wideband regime. We first
derive the closed-form expression of the rank of the wideband massive MIMO
channel covariance matrix for a given angle-delay distribution. A low-rankness
property is identified, which generalizes the well-known result of the
narrow-band uniform linear array setting. Then we propose a partial channel
reciprocity (PCR) codebook, inspired by the low-rankness behavior and the fact
that the uplink and downlink channels have similar angle-delay distributions.
Compared to the latest codebook in 5G, the proposed PCR codebook scheme
achieves higher performance, lower complexity at the user side, and requires a
smaller amount of feedback. We derive the feedback overhead necessary to
achieve asymptotically error-free CSI feedback. Two low-complexity alternatives
are also proposed to further reduce the complexity at the base station side.
Simulations with the practical 3GPP channel model show the significant gains
over the latest 5G codebook, which prove that our proposed methods are
practical solutions for 5G and beyond.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 09:19:02 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Yin",
"Haifan",
""
],
[
"Gesbert",
"David",
""
]
] |
new_dataset
| 0.962322 |
2201.10830
|
Xinzhu Ma
|
Zhiyu Chong, Xinzhu Ma, Hong Zhang, Yuxin Yue, Haojie Li, Zhihui Wang,
Wanli Ouyang
|
MonoDistill: Learning Spatial Features for Monocular 3D Object Detection
|
Accepted by ICLR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D object detection is a fundamental and challenging task for 3D scene
understanding, and the monocular-based methods can serve as an economical
alternative to the stereo-based or LiDAR-based methods. However, accurately
detecting objects in the 3D space from a single image is extremely difficult
due to the lack of spatial cues. To mitigate this issue, we propose a simple
and effective scheme to introduce the spatial information from LiDAR signals to
the monocular 3D detectors, without introducing any extra cost in the inference
phase. In particular, we first project the LiDAR signals into the image plane
and align them with the RGB images. After that, we use the resulting data to
train a 3D detector (LiDAR Net) with the same architecture as the baseline
model. Finally, this LiDAR Net can serve as the teacher to transfer the learned
knowledge to the baseline model. Experimental results show that the proposed
method can significantly boost the performance of the baseline model and ranks
the $1^{st}$ place among all monocular-based methods on the KITTI benchmark.
Besides, extensive ablation studies are conducted, which further prove the
effectiveness of each part of our designs and illustrate what the baseline
model has learned from the LiDAR Net. Our code will be released at
\url{https://github.com/monster-ghost/MonoDistill}.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 09:21:41 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Chong",
"Zhiyu",
""
],
[
"Ma",
"Xinzhu",
""
],
[
"Zhang",
"Hong",
""
],
[
"Yue",
"Yuxin",
""
],
[
"Li",
"Haojie",
""
],
[
"Wang",
"Zhihui",
""
],
[
"Ouyang",
"Wanli",
""
]
] |
new_dataset
| 0.996973 |
2201.10873
|
Jiaqi Kang
|
Jiaqi Kang, Su Yang, Weishan Zhang
|
TransPPG: Two-stream Transformer for Remote Heart Rate Estimate
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Non-contact facial video-based heart rate estimation using remote
photoplethysmography (rPPG) has shown great potential in many applications
(e.g., remote health care) and achieved creditable results in constrained
scenarios. However, practical applications require results to be accurate even
under complex environment with head movement and unstable illumination.
Therefore, improving the performance of rPPG in complex environment has become
a key challenge. In this paper, we propose a novel video embedding method that
embeds each facial video sequence into a feature map referred to as Multi-scale
Adaptive Spatial and Temporal Map with Overlap (MAST_Mop), which contains not
only vital information but also surrounding information as a reference, which
acts as a mirror to reveal homogeneous perturbations, such as illumination
instability, imposed on the foreground and background simultaneously.
Correspondingly, we propose a two-stream Transformer model to map the MAST_Mop
into heart rate (HR), where one stream follows the pulse signal in the facial
area while the other figures out the perturbation signal from the surrounding
region such that the difference of the two channels leads to adaptive noise
cancellation. Our approach significantly outperforms all current
state-of-the-art methods on two public datasets, MAHNOB-HCI and VIPL-HR. To the
best of our knowledge, this is the first work to use a Transformer backbone to
capture the temporal dependencies in rPPG signals and to apply a two-stream
scheme that treats background interference as a mirror of the corresponding
perturbation on foreground signals for noise tolerance.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 11:11:14 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Kang",
"Jiaqi",
""
],
[
"Yang",
"Su",
""
],
[
"Zhang",
"Weishan",
""
]
] |
new_dataset
| 0.99063 |
2201.10896
|
Shinnosuke Takamichi
|
Shinnosuke Takamichi, Wataru Nakata, Naoko Tanji, Hiroshi Saruwatari
|
J-MAC: Japanese multi-speaker audiobook corpus for speech synthesis
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper, we construct a Japanese audiobook speech corpus called "J-MAC"
for speech synthesis research. With the success of reading-style speech
synthesis, the research target is shifting to tasks that use complicated
contexts. Audiobook speech synthesis is a good example that requires
cross-sentence context, expressiveness, etc. Unlike reading-style speech,
speaker-specific expressiveness in audiobook speech also becomes the context.
To enhance this research, we propose a method of constructing a corpus from
audiobooks read by professional speakers. From many audiobooks and their texts,
our method can automatically extract and refine the data without any language
dependency. Specifically, we use vocal-instrumental separation to extract clean
data, connectionist temporal classification to roughly align text and audio,
and voice activity detection to refine the alignment. J-MAC is open-sourced on
our project page. We also conduct audiobook speech synthesis evaluations, and
the results give insights into audiobook speech synthesis.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 12:22:53 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Takamichi",
"Shinnosuke",
""
],
[
"Nakata",
"Wataru",
""
],
[
"Tanji",
"Naoko",
""
],
[
"Saruwatari",
"Hiroshi",
""
]
] |
new_dataset
| 0.999294 |
2201.11111
|
Alexander Kott
|
Alexander Kott, Paul Theron
|
Doers, not Watchers: Intelligent Autonomous Agents are a Path to Cyber
Resilience
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Today's cyber defense tools are mostly watchers. They are not active doers.
To be sure, watching too is a demanding affair. These tools monitor the traffic
and events; they detect malicious signatures, patterns and anomalies; they
might classify and characterize what they observe; they issue alerts, and they
might even learn while doing all this. But they don't act. They do little to
plan and execute responses to attacks, and they don't plan and execute recovery
activities. Response and recovery - core elements of cyber resilience - are
left to the human cyber analysts, incident responders and system administrators. We
believe things should change. Cyber defense tools should not be merely
watchers. They need to become doers - active fighters in maintaining a system's
resilience against cyber threats. This means that their capabilities should
include a significant degree of autonomy and intelligence for the purposes of
rapid response to a compromise - either incipient or already successful - and
rapid recovery that aids the resilience of the overall system. Often, the
response and recovery efforts need to be undertaken in absence of any human
involvement, and with an intelligent consideration of risks and ramifications
of such efforts. Recently an international team published a report that
proposes a vision of an autonomous intelligent cyber defense agent (AICA) and
offers a high-level reference architecture of such an agent. In this paper we
explore this vision.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 18:41:39 GMT"
}
] | 2022-01-27T00:00:00 |
[
[
"Kott",
"Alexander",
""
],
[
"Theron",
"Paul",
""
]
] |
new_dataset
| 0.998548 |
2007.12284
|
Andr\'e Coelho
|
Hugo Rodrigues, Andr\'e Coelho, Manuel Ricardo, Rui Campos
|
Energy-aware Relay Positioning in Flying Networks
| null | null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to move and hover has made rotary-wing Unmanned Aerial Vehicles
(UAVs) suitable platforms to act as Flying Communications Relays (FCR), aiming
at providing on-demand, temporary wireless connectivity when there is no
network infrastructure available or a need to reinforce the capacity of
existing networks. However, since UAVs rely on their on-board batteries, which
can be drained quickly, they typically need to land frequently for recharging
or replacing them, limiting their endurance and the flying network
availability. The problem is exacerbated when a single FCR UAV is used. The FCR
UAV energy is used for two main tasks: communications and propulsion. The
literature has been focused on optimizing both the flying network performance
and energy-efficiency from the communications point of view, overlooking the
energy spent for the UAV propulsion. Yet, the energy spent for communications
is typically negligible when compared with the energy spent for the UAV
propulsion.
In this article we propose Energy-aware RElay Positioning (EREP), an
algorithm for positioning the FCR taking into account the energy spent for the
UAV propulsion. Building upon the conclusion that hovering is not the most
energy-efficient state, EREP defines the trajectory and speed that minimize the
energy spent by the FCR UAV on propulsion, without compromising in practice the
Quality of Service offered by the flying network. The EREP algorithm is
evaluated using simulations. The obtained results show gains up to 26% in the
FCR UAV endurance for negligible throughput and delay degradation.
|
[
{
"version": "v1",
"created": "Thu, 23 Jul 2020 22:45:17 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Dec 2020 16:46:51 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Dec 2021 10:41:15 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Jan 2022 11:43:38 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Rodrigues",
"Hugo",
""
],
[
"Coelho",
"André",
""
],
[
"Ricardo",
"Manuel",
""
],
[
"Campos",
"Rui",
""
]
] |
new_dataset
| 0.985442 |
2104.09958
|
Martin Engelcke
|
Martin Engelcke, Oiwi Parker Jones, Ingmar Posner
|
GENESIS-V2: Inferring Unordered Object Representations without Iterative
Refinement
|
NeurIPS 2021 camera-ready version; 26 pages, 19 figures
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in unsupervised learning of object-representations have culminated
in the development of a broad range of methods for unsupervised object
segmentation and interpretable object-centric scene generation. These methods,
however, are limited to simulated and real-world datasets with limited visual
complexity. Moreover, object representations are often inferred using RNNs,
which do not scale well to large images, or iterative refinement, which avoids
imposing an unnatural ordering on objects in an image but requires the a priori
initialisation of a fixed number of object representations. In contrast to
established paradigms, this work proposes an embedding-based approach in which
embeddings of pixels are clustered in a differentiable fashion using a
stochastic stick-breaking process. Similar to iterative refinement, this
clustering procedure also leads to randomly ordered object representations, but
without the need of initialising a fixed number of clusters a priori. This is
used to develop a new model, GENESIS-v2, which can infer a variable number of
object representations without using RNNs or iterative refinement. We show that
GENESIS-v2 performs strongly in comparison to recent baselines in terms of
unsupervised image segmentation and object-centric scene generation on
established synthetic datasets as well as more complex real-world datasets.
|
[
{
"version": "v1",
"created": "Tue, 20 Apr 2021 14:59:27 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Apr 2021 14:52:11 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Jan 2022 18:15:16 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Engelcke",
"Martin",
""
],
[
"Jones",
"Oiwi Parker",
""
],
[
"Posner",
"Ingmar",
""
]
] |
new_dataset
| 0.99897 |
2104.14988
|
Noemi Passing
|
Bernd Finkbeiner, Philippe Heim, Noemi Passing
|
Temporal Stream Logic modulo Theories (Full Version)
|
Full version of the corresponding FoSSaCS 2022 paper
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal stream logic (TSL) extends LTL with updates and predicates over
arbitrary function terms. This allows for specifying data-intensive systems for
which LTL is not expressive enough. In the semantics of TSL, functions and
predicates are left uninterpreted. In this paper, we extend TSL with
first-order theories, enabling us to specify systems using interpreted
functions and predicates such as incrementation or equality. We investigate the
satisfiability problem of TSL modulo the standard underlying theory of
uninterpreted functions as well as with respect to Presburger arithmetic and
the theory of equality: For all three theories, TSL satisfiability is highly
undecidable. Nevertheless, we identify three fragments of TSL for which the
satisfiability problem is (semi-)decidable in the theory of uninterpreted
functions. Despite the high undecidability, we present an algorithm - which is
not guaranteed to terminate - for checking the satisfiability of a TSL formula
in the theory of uninterpreted functions and evaluate it: It scales well and is
able to validate assumptions in a real-world system design.
|
[
{
"version": "v1",
"created": "Fri, 30 Apr 2021 13:22:41 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jan 2022 16:00:02 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Finkbeiner",
"Bernd",
""
],
[
"Heim",
"Philippe",
""
],
[
"Passing",
"Noemi",
""
]
] |
new_dataset
| 0.95048 |
2107.03200
|
Johannes Wachs
|
Johannes Wachs, Mariusz Nitecki, William Schueller, Axel Polleres
|
The Geography of Open Source Software: Evidence from GitHub
| null |
Technological Forecasting and Social Change (2022)
|
10.1016/j.techfore.2022.121478
| null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Open Source Software (OSS) plays an important role in the digital economy.
Yet although software production is amenable to remote collaboration and its
outputs are easily shared across distances, software development seems to
cluster geographically in places such as Silicon Valley, London, or Berlin. And
while recent work indicates that OSS activity creates positive externalities
which accrue locally through knowledge spillovers and information effects,
up-to-date data on the geographic distribution of active open source developers
is limited. This presents a significant blindspot for policymakers, who tend to
promote OSS at the national level as a cost-saving tool for public sector
institutions. We address this gap by geolocating more than half a million
active contributors to GitHub in early 2021 at various spatial scales. Compared
to results from 2010, we find a significant increase in the share of developers
based in Asia, Latin America and Eastern Europe, suggesting a more even spread
of OSS developers globally. Within countries, however, we find significant
concentration in regions, exceeding the concentration of workers in high-tech
fields. Social and economic development indicators predict at most half of
regional variation in OSS activity in the EU, suggesting that clusters of OSS
have idiosyncratic roots. We argue that policymakers seeking to foster OSS
should focus locally rather than nationally, using the tools of cluster policy
to support networks of OSS developers.
|
[
{
"version": "v1",
"created": "Wed, 7 Jul 2021 13:18:17 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Oct 2021 08:25:28 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Wachs",
"Johannes",
""
],
[
"Nitecki",
"Mariusz",
""
],
[
"Schueller",
"William",
""
],
[
"Polleres",
"Axel",
""
]
] |
new_dataset
| 0.980697 |
2110.10661
|
Victor Zhong
|
Victor Zhong and Austin W. Hanjie and Sida I. Wang and Karthik
Narasimhan and Luke Zettlemoyer
|
SILG: The Multi-environment Symbolic Interactive Language Grounding
Benchmark
|
NeurIPS 2021. 14 pages, 8 figures
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Existing work in language grounding typically studies single environments. How
do we build unified models that apply across multiple environments? We propose
the multi-environment Symbolic Interactive Language Grounding benchmark (SILG),
which unifies a collection of diverse grounded language learning environments
under a common interface. SILG consists of grid-world environments that require
generalization to new dynamics, entities, and partially observed worlds (RTFM,
Messenger, NetHack), as well as symbolic counterparts of visual worlds that
require interpreting rich natural language with respect to complex scenes
(ALFWorld, Touchdown). Together, these environments provide diverse grounding
challenges in richness of observation space, action space, language
specification, and plan complexity. In addition, we propose the first shared
model architecture for RL on these environments, and evaluate recent advances
such as egocentric local convolution, recurrent state-tracking, entity-centric
attention, and pretrained LM using SILG. Our shared architecture achieves
comparable performance to environment-specific architectures. Moreover, we find
that many recent modelling advances do not result in significant gains on
environments other than the one they were designed for. This highlights the
need for a multi-environment benchmark. Finally, the best models significantly
underperform humans on SILG, which suggests ample room for future work. We hope
SILG enables the community to quickly identify new methodologies for language
grounding that generalize to a diverse set of environments and their associated
challenges.
|
[
{
"version": "v1",
"created": "Wed, 20 Oct 2021 17:02:06 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jan 2022 19:16:12 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Zhong",
"Victor",
""
],
[
"Hanjie",
"Austin W.",
""
],
[
"Wang",
"Sida I.",
""
],
[
"Narasimhan",
"Karthik",
""
],
[
"Zettlemoyer",
"Luke",
""
]
] |
new_dataset
| 0.998694 |
2110.14164
|
Geunseong Jung
|
Geunseong Jung, Sungjae Han, Hansung Kim, Kwanguk Kim, Jaehyuk Cha
|
Don't read, just look: Main content extraction from web pages using
visual features
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extracting the main content from web pages yields the primary informative
blocks, removing a web page's minor areas such as navigation menus, ads, and
site templates. Main content extraction has various applications: information
retrieval, search engine optimization, and browser reader mode. We assessed
four existing main content extraction methods (Readability.js, Chrome DOM
Distiller, Web2Text, and Boilernet) on web pages from two English datasets of
global websites (2017 and 2020) and seven non-English datasets grouped by
language (2020). The results showed that performance was up to 40% lower on
non-English datasets than on English datasets. Thus, this paper proposes a
multilingual main content extraction method using visual features: the
elements' positions, size, and distances from three centers. These centers were
derived from the browser window, web document, and the first browsing area. We
propose this first browsing area, which is the top side of a web document for
simulating situations where a user first encountered a web page. Because web
page authors placed their main contents in the central area for the web page's
usability, we can assume the center of this area is close to the main content.
Our grid-centering-expanding (GCE) method suggests the three centroids as
hypothetical user foci. Traversing the DOM tree from each of the leaf nodes
closest to these centroids, our method inspects which ancestor node can serve
as a main content candidate. Finally, it extracts the main content by selecting
the best among the three main content candidates. Our method performed 14%
better than the existing methods on average in Longest Common Subsequence F1
score. In particular, it improved performance by up to 25% on the English
datasets and 16% on the non-English datasets. Therefore, our method shows that
visual and basic HTML features are effective in extracting the main content.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 04:43:12 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jan 2022 00:54:09 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Jung",
"Geunseong",
""
],
[
"Han",
"Sungjae",
""
],
[
"Kim",
"Hansung",
""
],
[
"Kim",
"Kwanguk",
""
],
[
"Cha",
"Jaehyuk",
""
]
] |
new_dataset
| 0.995909 |
2111.14295
|
Vu Phi Tran Dr
|
Vu Phi Tran, Matthew A. Garratt, Kathryn Kasmarik, Sreenatha G.
Anavatti
|
Frontier-led Swarming: Robust Multi-Robot Coverage of Unknown
Environments
| null | null | null | null |
cs.RO cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel swarm-based control algorithm for exploration and
coverage of unknown environments, while maintaining a formation that permits
short-range communication. The algorithm combines two elements: swarm rules for
maintaining a close-knit formation and frontier search for driving exploration
and coverage. Inspired by natural systems in which large numbers of simple
agents (e.g., schooling fish, flocking birds, swarming insects) perform
complicated collective behaviors for efficiency and safety, the first element
uses three simple rules to maintain a swarm formation. The second element
provides a means to select promising regions to explore (and cover) by
minimising a cost function involving the robots' relative distances to frontier
cells and the frontier's size. We tested the performance of our approach on
heterogeneous and homogeneous groups of mobile robots in different
environments. We measure both coverage performance and swarm formation
statistics as indicators of the robots' ability to explore effectively while
maintaining a formation conducive to short-range communication. Through a
series of comparison experiments, we demonstrate that our proposed strategy has
superior performance to recently presented map coverage methodologies and
conventional swarming methods.
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 01:56:21 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jan 2022 11:55:55 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Tran",
"Vu Phi",
""
],
[
"Garratt",
"Matthew A.",
""
],
[
"Kasmarik",
"Kathryn",
""
],
[
"Anavatti",
"Sreenatha G.",
""
]
] |
new_dataset
| 0.999422 |
2201.06796
|
Mina Lee
|
Mina Lee, Percy Liang, Qian Yang
|
CoAuthor: Designing a Human-AI Collaborative Writing Dataset for
Exploring Language Model Capabilities
|
Published as a conference paper at CHI 2022
| null |
10.1145/3491102.3502030
| null |
cs.HC cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LMs) offer unprecedented language generation
capabilities and exciting opportunities for interaction design. However, their
highly context-dependent capabilities are difficult to grasp and are often
subjectively interpreted. In this paper, we argue that by curating and
analyzing large interaction datasets, the HCI community can foster more
incisive examinations of LMs' generative capabilities. Exemplifying this
approach, we present CoAuthor, a dataset designed for revealing GPT-3's
capabilities in assisting creative and argumentative writing. CoAuthor captures
rich interactions between 63 writers and four instances of GPT-3 across 1445
writing sessions. We demonstrate that CoAuthor can address questions about
GPT-3's language, ideation, and collaboration capabilities, and reveal its
contribution as a writing "collaborator" under various definitions of good
collaboration. Finally, we discuss how this work may facilitate a more
principled discussion around LMs' promises and pitfalls in relation to
interaction design. The dataset and an interface for replaying the writing
sessions are publicly available at https://coauthor.stanford.edu.
|
[
{
"version": "v1",
"created": "Tue, 18 Jan 2022 07:51:57 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jan 2022 05:29:58 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Lee",
"Mina",
""
],
[
"Liang",
"Percy",
""
],
[
"Yang",
"Qian",
""
]
] |
new_dataset
| 0.979679 |
2201.09956
|
Naif Mehanna
|
Tomer Laor (1), Naif Mehanna (2 and 3 and 4), Antonin Durey (2 and 3
and 4), Vitaly Dyadyuk (1), Pierre Laperdrix (2 and 3 and 4), Cl\'ementine
Maurice (2 and 3 and 4), Yossi Oren (1), Romain Rouvoy (2 and 3 and 4),
Walter Rudametkin (2 and 3 and 4), Yuval Yarom (5) ((1) Ben-Gurion University
of the Negev, (2) University of Lille, (3) CNRS, (4) Inria, (5) University of
Adelaide)
|
DRAWNAPART: A Device Identification Technique based on Remote GPU
Fingerprinting
|
Network and Distributed System Security Symposium, Feb 2022, San
Diego, United States
| null |
10.14722/ndss.2022.24093
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Browser fingerprinting aims to identify users or their devices, through
scripts that execute in the users' browser and collect information on software
or hardware characteristics. It is used to track users or as an additional
means of identification to improve security. In this paper, we report on a new
technique that can significantly extend the tracking time of fingerprint-based
tracking methods. Our technique, which we call DrawnApart, is a new GPU
fingerprinting technique that identifies a device based on the unique
properties of its GPU stack. Specifically, we show that variations in speed
among the multiple execution units that comprise a GPU can serve as a reliable
and robust device signature, which can be collected using unprivileged
JavaScript. We investigate the accuracy of DrawnApart under two scenarios. In
the first scenario, our controlled experiments confirm that the technique is
effective in distinguishing devices with similar hardware and software
configurations, even when they are considered identical by current
state-of-the-art fingerprinting algorithms. In the second scenario, we
integrate a one-shot learning version of our technique into a state-of-the-art
browser fingerprint tracking algorithm. We verify our technique through a
large-scale experiment involving data collected from over 2,500 crowd-sourced
devices over a period of several months and show it provides a boost of up to
67% to the median tracking duration, compared to the state-of-the-art method.
DrawnApart makes two contributions to the state of the art in browser
fingerprinting. On the conceptual front, it is the first work that explores the
manufacturing differences between identical GPUs and the first to exploit these
differences in a privacy context. On the practical front, it demonstrates a
robust technique for distinguishing between machines with identical hardware
and software configurations.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 21:16:24 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Laor",
"Tomer",
"",
"2 and 3 and 4"
],
[
"Mehanna",
"Naif",
"",
"2 and 3 and 4"
],
[
"Durey",
"Antonin",
"",
"2 and 3\n and 4"
],
[
"Dyadyuk",
"Vitaly",
"",
"2 and 3 and 4"
],
[
"Laperdrix",
"Pierre",
"",
"2 and 3 and 4"
],
[
"Maurice",
"Clémentine",
"",
"2 and 3 and 4"
],
[
"Oren",
"Yossi",
"",
"2 and 3 and 4"
],
[
"Rouvoy",
"Romain",
"",
"2 and 3 and 4"
],
[
"Rudametkin",
"Walter",
"",
"2 and 3 and 4"
],
[
"Yarom",
"Yuval",
""
]
] |
new_dataset
| 0.966323 |
2201.09992
|
Eugene Yang
|
Dawn Lawrie and James Mayfield and Douglas Oard and Eugene Yang
|
HC4: A New Suite of Test Collections for Ad Hoc CLIR
|
16 pages, 2 figures, accepted at ECIR 2022
| null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
HC4 is a new suite of test collections for ad hoc Cross-Language Information
Retrieval (CLIR), with Common Crawl News documents in Chinese, Persian, and
Russian, topics in English and in the document languages, and graded relevance
judgments. New test collections are needed because existing CLIR test
collections built using pooling of traditional CLIR runs have systematic gaps
in their relevance judgments when used to evaluate neural CLIR methods. The HC4
collections contain 60 topics and about half a million documents for each of
Chinese and Persian, and 54 topics and five million documents for Russian.
Active learning was used to determine which documents to annotate after being
seeded using interactive search and judgment. Documents were judged on a
three-grade relevance scale. This paper describes the design and construction
of the new test collections and provides baseline results for demonstrating
their utility for evaluating systems.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 22:52:11 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Lawrie",
"Dawn",
""
],
[
"Mayfield",
"James",
""
],
[
"Oard",
"Douglas",
""
],
[
"Yang",
"Eugene",
""
]
] |
new_dataset
| 0.999469 |
2201.09996
|
Eugene Yang
|
Cash Costello and Eugene Yang and Dawn Lawrie and James Mayfield
|
Patapsco: A Python Framework for Cross-Language Information Retrieval
Experiments
|
5 pages, accepted at ECIR 2022 as a demo paper
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
While there are high-quality software frameworks for information retrieval
experimentation, they do not explicitly support cross-language information
retrieval (CLIR). To fill this gap, we have created Patapsco, a Python CLIR
framework. This framework specifically addresses the complexity that comes with
running experiments in multiple languages. Patapsco is designed to be
extensible to many language pairs, to be scalable to large document
collections, and to support reproducible experiments driven by a configuration
file. We include Patapsco results on standard CLIR collections using multiple
settings.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 23:03:36 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Costello",
"Cash",
""
],
[
"Yang",
"Eugene",
""
],
[
"Lawrie",
"Dawn",
""
],
[
"Mayfield",
"James",
""
]
] |
new_dataset
| 0.999345 |
2201.09997
|
Oleg Serikov
|
Timofey Atnashev, Veronika Ganeeva, Roman Kazakov, Daria Matyash,
Michael Sonkin, Ekaterina Voloshina, Oleg Serikov, Ekaterina Artemova
|
Razmecheno: Named Entity Recognition from Digital Archive of Diaries
"Prozhito"
|
Submitted to LREC 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The vast majority of existing datasets for Named Entity Recognition (NER) are
built primarily on news, research papers and Wikipedia with a few exceptions,
created from historical and literary texts. What is more, English is the main
source of data for further labelling. This paper aims to fill in multiple gaps
by creating a novel dataset "Razmecheno", gathered from the diary texts of the
project "Prozhito" in Russian. Our dataset is of interest for multiple research
lines: literary studies of diary texts, transfer learning from other domains,
low-resource or cross-lingual named entity recognition. Razmecheno comprises
1331 sentences and 14119 tokens, sampled from diaries, written during the
Perestroika. The annotation schema consists of five commonly used entity tags:
person, characteristics, location, organisation, and facility. The labelling is
carried out on the crowdsourcing platform Yandex.Toloka in two stages. First,
workers selected sentences that contain an entity of a particular type. Second,
they marked up entity spans. As a result, 1113 entities were obtained. Empirical
evaluation of Razmecheno is carried out with off-the-shelf NER tools and by
fine-tuning pre-trained contextualized encoders. We release the annotated
dataset for open access.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 23:06:01 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Atnashev",
"Timofey",
""
],
[
"Ganeeva",
"Veronika",
""
],
[
"Kazakov",
"Roman",
""
],
[
"Matyash",
"Daria",
""
],
[
"Sonkin",
"Michael",
""
],
[
"Voloshina",
"Ekaterina",
""
],
[
"Serikov",
"Oleg",
""
],
[
"Artemova",
"Ekaterina",
""
]
] |
new_dataset
| 0.999826 |
2201.10060
|
Arash Mohammadi
|
Mansooreh Montazerin, Soheil Zabihi, Elahe Rahimian, Arash Mohammadi,
Farnoosh Naderkhani
|
ViT-HGR: Vision Transformer-based Hand Gesture Recognition from High
Density Surface EMG Signals
| null | null | null | null |
cs.CV cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, there has been a surge of significant interest on application of
Deep Learning (DL) models to autonomously perform hand gesture recognition
using surface Electromyogram (sEMG) signals. DL models are, however, mainly
designed to be applied on sparse sEMG signals. Furthermore, due to their
complex structure, typically, we are faced with memory constraints; require
large training times and a large number of training samples, and; there is the
need to resort to data augmentation and/or transfer learning. In this paper,
for the first time (to the best of our knowledge), we investigate and design a
Vision Transformer (ViT) based architecture to perform hand gesture recognition
from High Density (HD-sEMG) signals. Intuitively speaking, we capitalize on the
recent breakthrough role of the transformer architecture in tackling different
complex problems together with its potential for employing more input
parallelization via its attention mechanism. The proposed Vision
Transformer-based Hand Gesture Recognition (ViT-HGR) framework can overcome the
aforementioned training time problems and can accurately classify a large
number of hand gestures from scratch without any need for data augmentation
and/or transfer learning. The efficiency of the proposed ViT-HGR framework is
evaluated using a recently-released HD-sEMG dataset consisting of 65 isometric
hand gestures. Our experiments with a 64-sample (31.25 ms) window size yield an
average test accuracy of 84.62 +/- 3.07%, utilizing only 78,210 parameters. The
compact structure of the proposed ViT-HGR framework (i.e., its significantly
reduced number of trainable parameters) shows great potential for practical
application in prosthetic control.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 02:42:50 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Montazerin",
"Mansooreh",
""
],
[
"Zabihi",
"Soheil",
""
],
[
"Rahimian",
"Elahe",
""
],
[
"Mohammadi",
"Arash",
""
],
[
"Naderkhani",
"Farnoosh",
""
]
] |
new_dataset
| 0.99546 |
2201.10107
|
Quan Nguyen Minh
|
Quan Nguyen Minh, Bang Le Van, Can Nguyen, Anh Le and Viet Dung Nguyen
|
ARPD: Anchor-free Rotation-aware People Detection using Topview Fisheye
Camera
|
2021 17th IEEE International Conference on Advanced Video and Signal
Based Surveillance (AVSS)
| null |
10.1109/AVSS52988.2021.9663768
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
People detection in top-view, fish-eye images is challenging as people in
fish-eye images often appear in arbitrary directions and are distorted
differently. Due to this unique radial geometry, axis-aligned people detectors
often work poorly on fish-eye frames. Recent works account for this variability
by modifying existing anchor-based detectors or relying on complex
pre/post-processing. Anchor-based methods spread a set of pre-defined bounding
boxes on the input image, most of which are invalid. In addition to being
inefficient, this approach could lead to a significant imbalance between the
positive and negative anchor boxes. In this work, we propose ARPD, a
single-stage anchor-free fully convolutional network to detect arbitrarily
rotated people in fish-eye images. Our network uses keypoint estimation to find
the center point of each object and regress the object's other properties
directly. To capture the various orientations of people in fish-eye cameras, in
addition to the center and size, ARPD also predicts the angle of each bounding
box. We also propose a periodic loss function that accounts for angle
periodicity and relieves the difficulty of learning small-angle oscillations.
Experimental results show that our method competes favorably with
state-of-the-art algorithms while running significantly faster.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 05:49:50 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Minh",
"Quan Nguyen",
""
],
[
"Van",
"Bang Le",
""
],
[
"Nguyen",
"Can",
""
],
[
"Le",
"Anh",
""
],
[
"Nguyen",
"Viet Dung",
""
]
] |
new_dataset
| 0.997898 |
2201.10111
|
Weiqian Tan
|
Weiqian Tan, Binwei Wu, Shuo Wang, Tao Huang
|
Large-scale Deterministic Transmission among IEEE 802.1Qbv
Time-Sensitive Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
IEEE 802.1Qbv (TAS) is the most widely used technique in Time-Sensitive
Networking (TSN), which aims to provide bounded transmission delays and
ultra-low jitters in industrial local area networks. With the development of
emerging technologies (e.g., cloud computing), many wide-range time-sensitive
network services emerge, such as factory automation, connected vehicles, and
smart grids. Nevertheless, TAS is a Layer 2 technique for local networks, and
cannot provide large-scale deterministic transmission. To tackle this problem,
this paper proposes a hierarchical network containing access networks and a
core network. Access networks perform TAS to aggregate time-sensitive traffic.
In the core network, we exploit DIP (a well-known deterministic networking
mechanism for backbone networks) to achieve long-distance deterministic
transmission. Due to the differences between TAS and DIP, we design
cross-domain transmission mechanisms at the edge of access networks and the
core network to achieve seamless deterministic transmission. We also formulate
the end-to-end scheduling to maximize the amount of accepted time-sensitive
traffic. Experimental simulations show that the proposed network can achieve
end-to-end deterministic transmission even in high-loaded scenarios.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 06:05:00 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Tan",
"Weiqian",
""
],
[
"Wu",
"Binwei",
""
],
[
"Wang",
"Shuo",
""
],
[
"Huang",
"Tao",
""
]
] |
new_dataset
| 0.993623 |
2201.10165
|
Grigoriy Korolev
|
Grigory Korolev, Aleksey Kureev, Evgeny Khorov, and Andrey Lyakhov
|
Enabling Synchronous Uplink NOMA in Wi-Fi Networks
|
International Conference "Engineering & Telecommunication 2021"
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-Orthogonal Multiple Access (NOMA) is a promising technology for future
Wi-Fi. In uplink NOMA, stations with different channel conditions transmit
simultaneously at the same frequency by splitting the signal by power level.
Since Wi-Fi uses random access, the implementation of uplink NOMA in Wi-Fi
faces many challenges. The paper presents a data transmission mechanism in
Wi-Fi networks that enables synchronous uplink NOMA, where multiple stations
start data transmission to the access point simultaneously. The developed
mechanism can work with the legacy Enhanced Distributed Channel Access (EDCA)
mechanism in Wi-Fi. With simulation, it is shown that the developed mechanism
can double the total throughput and geometric mean throughput compared with the
legacy EDCA.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 08:08:41 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Korolev",
"Grigory",
""
],
[
"Kureev",
"Aleksey",
""
],
[
"Khorov",
"Evgeny",
""
],
[
"Lyakhov",
"Andrey",
""
]
] |
new_dataset
| 0.967334 |
2201.10175
|
Zhi Wu
|
Zhi Wu, Dongheng Zhang, Chunyang Xie, Cong Yu, Jinbo Chen, Yang Hu,
Yan Chen
|
RFMask: A Simple Baseline for Human Silhouette Segmentation with Radio
Signals
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Human silhouette segmentation, which is originally defined in computer
vision, has achieved promising results for understanding human activities.
However, the physical limitation makes existing systems based on optical
cameras suffer from severe performance degradation under low illumination,
smoke, and/or opaque obstruction conditions. To overcome such limitations, in
this paper, we propose to utilize the radio signals, which can traverse
obstacles and are unaffected by the lighting conditions to achieve silhouette
segmentation. The proposed RFMask framework is composed of three modules. It
first transforms RF signals captured by millimeter-wave radar on two planes
into the spatial domain and suppresses interference with a signal processing
module. Then, it locates human reflections on RF frames and extracts features
from surrounding signals with a human detection module. Finally, the extracted
features from RF frames are aggregated with an attention-based mask generation
module. To verify our proposed framework, we collect a dataset containing
804,760 radio frames and 402,380 camera frames with human activities under
various scenes. Experimental results show that the proposed framework can
achieve impressive human silhouette segmentation even under challenging
scenarios (such as low-light and occlusion scenarios) where traditional
optical-camera-based methods fail. To the best of our knowledge, this is the
first investigation towards segmenting human silhouettes based on
millimeter-wave signals. We hope that our work can serve as a baseline and
inspire further research that performs vision tasks with radio signals. The
dataset and code will be made publicly available.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 08:43:01 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Wu",
"Zhi",
""
],
[
"Zhang",
"Dongheng",
""
],
[
"Xie",
"Chunyang",
""
],
[
"Yu",
"Cong",
""
],
[
"Chen",
"Jinbo",
""
],
[
"Hu",
"Yang",
""
],
[
"Chen",
"Yan",
""
]
] |
new_dataset
| 0.99949 |
2201.10252
|
Mohamed Ali Souibgui
|
Mohamed Ali Souibgui, Sanket Biswas, Sana Khamekhem Jemni, Yousri
Kessentini, Alicia Forn\'es, Josep Llad\'os, Umapada Pal
|
DocEnTr: An End-to-End Document Image Enhancement Transformer
|
submitted to ICPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Document images can be affected by many degradation scenarios, which cause
recognition and processing difficulties. In this age of digitization, it is
important to denoise them for proper usage. To address this challenge, we
present a new encoder-decoder architecture based on vision transformers to
enhance both machine-printed and handwritten document images, in an end-to-end
fashion. The encoder operates directly on the pixel patches with their
positional information without the use of any convolutional layers, while the
decoder reconstructs a clean image from the encoded patches. Conducted
experiments show the superiority of the proposed model compared to
state-of-the-art methods on several DIBCO benchmarks. Code and models will be publicly
available at: \url{https://github.com/dali92002/DocEnTR}.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 11:45:35 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Souibgui",
"Mohamed Ali",
""
],
[
"Biswas",
"Sanket",
""
],
[
"Jemni",
"Sana Khamekhem",
""
],
[
"Kessentini",
"Yousri",
""
],
[
"Fornés",
"Alicia",
""
],
[
"Lladós",
"Josep",
""
],
[
"Pal",
"Umapada",
""
]
] |
new_dataset
| 0.980038 |
2201.10349
|
Sudeep Pasricha
|
Vipin Kumar Kukkala, Sooryaa Vignesh Thiruloga, Sudeep Pasricha
|
Roadmap for Cybersecurity in Autonomous Vehicles
| null | null | null | null |
cs.CR cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Autonomous vehicles are on the horizon and will be transforming
transportation safety and comfort. These vehicles will be connected to various
external systems and utilize advanced embedded systems to perceive their
environment and make intelligent decisions. However, this increased
connectivity makes these vehicles vulnerable to various cyber-attacks that can
have catastrophic effects. Attacks on automotive systems are already on the
rise in today's vehicles and are expected to become more commonplace in future
autonomous vehicles. Thus, there is a need to strengthen cybersecurity in
future autonomous vehicles. In this article, we discuss major automotive
cyber-attacks over the past decade and present state-of-the-art solutions that
leverage artificial intelligence (AI). We propose a roadmap towards building
secure autonomous vehicles and highlight key open challenges that need to be
addressed.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 16:42:18 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Kukkala",
"Vipin Kumar",
""
],
[
"Thiruloga",
"Sooryaa Vignesh",
""
],
[
"Pasricha",
"Sudeep",
""
]
] |
new_dataset
| 0.999295 |
2201.10366
|
Matthew Brown
|
Daniel Davila, Joseph VanPelt, Alexander Lynch, Adam Romlein, Peter
Webley, Matthew S. Brown
|
ADAPT: An Open-Source sUAS Payload for Real-Time Disaster Prediction and
Response with AI
|
To be published in Workshop on Practical Deep Learning in the Wild at
AAAI Conference on Artificial Intelligence 2022, 9 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Small unmanned aircraft systems (sUAS) are becoming prominent components of
many humanitarian assistance and disaster response (HADR) operations. Pairing
sUAS with onboard artificial intelligence (AI) substantially extends their
utility in covering larger areas with fewer support personnel. A variety of
missions, such as search and rescue, assessing structural damage, and
monitoring forest fires, floods, and chemical spills, can be supported simply
by deploying the appropriate AI models. However, adoption by
resource-constrained groups, such as local municipalities, regulatory agencies,
and researchers, has been hampered by the lack of a cost-effective,
readily-accessible baseline platform that can be adapted to their unique
missions. To fill this gap, we have developed the free and open-source ADAPT
multi-mission payload for deploying real-time AI and computer vision onboard a
sUAS during local and beyond-line-of-sight missions. We have emphasized a
modular design with low-cost, readily-available components, open-source
software, and thorough documentation (https://kitware.github.io/adapt/). The
system integrates an inertial navigation system, high-resolution color camera,
computer, and wireless downlink to process imagery and broadcast georegistered
analytics back to a ground station. Our goal is to make it easy for the HADR
community to build their own copies of the ADAPT payload and leverage the
thousands of hours of engineering we have devoted to developing and testing. In
this paper, we detail the development and testing of the ADAPT payload. We
demonstrate the example mission of real-time, in-flight ice segmentation to
monitor river ice state and provide timely predictions of catastrophic flooding
events. We deploy a novel active learning workflow to annotate river ice
imagery, train a real-time deep neural network for ice segmentation, and
demonstrate operation in the field.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 14:51:19 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Davila",
"Daniel",
""
],
[
"VanPelt",
"Joseph",
""
],
[
"Lynch",
"Alexander",
""
],
[
"Romlein",
"Adam",
""
],
[
"Webley",
"Peter",
""
],
[
"Brown",
"Matthew S.",
""
]
] |
new_dataset
| 0.998588 |
2201.10371
|
Johan Mazel
|
Johan Mazel, Matthieu Saudrais, Antoine Hervieu
|
ML-based tunnel detection and tunneled application classification
| null | null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Encrypted tunneling protocols are widely used. Beyond business and personal
uses, malicious actors also deploy tunneling to hinder the detection of Command
and Control and data exfiltration. A common approach to maintain visibility on
tunneling is to rely on network traffic metadata and machine learning to
analyze tunnel occurrence without actually decrypting data. However, existing
work that addresses tunneling protocols exhibits several weaknesses: it aims to
detect applications inside tunnels rather than to identify the tunnels
themselves, it offers limited protocol coverage (e.g. OpenVPN and WireGuard are
not addressed), and it uses inconsistent features and diverse machine learning
techniques, which makes performance comparison difficult.
Our work makes four contributions that address these limitations and provide
further analysis. First, we address OpenVPN and WireGuard. Second, we propose a
complete pipeline to detect and classify tunneling protocols and tunneled
applications. Third, we present a thorough analysis of the performance of both
network traffic metadata features and machine learning techniques. Fourth, we
provide a novel analysis of domain generalization regarding background
untunneled traffic, and, both domain generalization and adversarial learning
regarding Maximum Transmission Unit (MTU).
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 15:02:07 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Mazel",
"Johan",
""
],
[
"Saudrais",
"Matthieu",
""
],
[
"Hervieu",
"Antoine",
""
]
] |
new_dataset
| 0.966977 |
2201.10406
|
Nicolas Tempelmeier
|
Nicolas Tempelmeier, Elena Demidova
|
Attention-Based Vandalism Detection in OpenStreetMap
| null |
Proceedings of The Webconference 2022
|
10.1145/3485447.3512224
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
OpenStreetMap (OSM), a collaborative, crowdsourced Web map, is a unique
source of openly available worldwide map data, increasingly adopted in Web
applications. Vandalism detection is a critical task to support trust and
maintain OSM transparency. This task is remarkably challenging due to the large
scale of the dataset, the sheer number of contributors, various vandalism
forms, and the lack of annotated data. This paper presents Ovid - a novel
attention-based method for vandalism detection in OSM. Ovid relies on a novel
neural architecture that adopts a multi-head attention mechanism to summarize
information indicating vandalism from OSM changesets effectively. To facilitate
automated vandalism detection, we introduce a set of original features that
capture changeset, user, and edit information. Furthermore, we extract a
dataset of real-world vandalism incidents from the OSM edit history for the
first time and provide this dataset as open data. Our evaluation conducted on
real-world vandalism data demonstrates the effectiveness of Ovid.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 15:52:54 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Tempelmeier",
"Nicolas",
""
],
[
"Demidova",
"Elena",
""
]
] |
new_dataset
| 0.999571 |
2201.10409
|
Melika Payvand
|
Matteo Cartiglia, Arianna Rubino, Shyam Narayanan, Charlotte Frenkel,
Germain Haessig, Giacomo Indiveri, Melika Payvand
|
Stochastic dendrites enable online learning in mixed-signal neuromorphic
processing systems
| null | null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
The stringent memory and power constraints required in edge-computing
sensory-processing applications have made event-driven neuromorphic systems a
promising technology. On-chip online learning provides such systems the ability
to learn the statistics of the incoming data and to adapt to their changes.
Implementing online learning on event-driven neuromorphic systems requires (i)
a spike-based learning algorithm that calculates the weight updates using only
local information from streaming data, (ii) mapping these weight updates onto
limited bit precision memory and (iii) doing so in a robust manner that does
not lead to unnecessary updates as the system is reaching its optimal output.
Recent neuroscience studies have shown how dendritic compartments of cortical
neurons can solve these problems in biological neural networks. Inspired by
these studies we propose spike-based learning circuits to implement stochastic
dendritic online learning. The circuits are embedded in a prototype spiking
neural network fabricated in a 180 nm process. Following an algorithm-circuit
co-design approach, we present circuits and behavioral simulation results that
demonstrate the learning rule's features. We validate the proposed method using
behavioral simulations of a single-layer network with 4-bit precision weights
applied to the MNIST benchmark, demonstrating accuracy levels above 85%.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 15:58:08 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Cartiglia",
"Matteo",
""
],
[
"Rubino",
"Arianna",
""
],
[
"Narayanan",
"Shyam",
""
],
[
"Frenkel",
"Charlotte",
""
],
[
"Haessig",
"Germain",
""
],
[
"Indiveri",
"Giacomo",
""
],
[
"Payvand",
"Melika",
""
]
] |
new_dataset
| 0.99659 |
2201.10430
|
Amal Alqahtani
|
Amal Alqahtani, Efsun Sarioglu Kay, Sardar Hamidian, Michael Compton,
Mona Diab
|
A Quantitative and Qualitative Analysis of Schizophrenia Language
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Schizophrenia is one of the most disabling mental health conditions to live
with. Approximately one percent of the population has schizophrenia, making it
fairly common and affecting many people and their families. Patients with
schizophrenia suffer different symptoms: formal thought disorder (FTD),
delusions, and emotional flatness. In this paper, we quantitatively and
qualitatively analyze the language of patients with schizophrenia measuring
various linguistic features in two modalities: speech and written text. We
examine the following features: coherence and cohesion of thoughts, emotions,
specificity, level of committed belief (LCB), and personality traits. Our
results show that patients with schizophrenia score high in fear and
neuroticism compared to healthy controls. In addition, they are more committed
to their beliefs, and their writing lacks details. They score lower in most of
the linguistic features of cohesion with significant p-values.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 16:25:58 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Alqahtani",
"Amal",
""
],
[
"Kay",
"Efsun Sarioglu",
""
],
[
"Hamidian",
"Sardar",
""
],
[
"Compton",
"Michael",
""
],
[
"Diab",
"Mona",
""
]
] |
new_dataset
| 0.998355 |
2201.10453
|
Laurens Bliek
|
Laurens Bliek, Paulo da Costa, Reza Refaei Afshar, Yingqian Zhang, Tom
Catshoek, Dani\"el Vos, Sicco Verwer, Fynn Schmitt-Ulms, Andr\'e Hottung,
Tapan Shah, Meinolf Sellmann, Kevin Tierney, Carl Perreault-Lafleur, Caroline
Leboeuf, Federico Bobbio, Justine Pepin, Warley Almeida Silva, Ricardo Gama,
Hugo L. Fernandes, Martin Zaefferer, Manuel L\'opez-Ib\'a\~nez, Ekhine
Irurozki
|
The First AI4TSP Competition: Learning to Solve Stochastic Routing
Problems
|
21 pages
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper reports on the first international competition on AI for the
traveling salesman problem (TSP) at the International Joint Conference on
Artificial Intelligence 2021 (IJCAI-21). The TSP is one of the classical
combinatorial optimization problems, with many variants inspired by real-world
applications. This first competition asked the participants to develop
algorithms to solve a time-dependent orienteering problem with stochastic
weights and time windows (TD-OPSWTW). It focused on two types of learning
approaches: surrogate-based optimization and deep reinforcement learning. In
this paper, we describe the problem, the setup of the competition, the winning
methods, and give an overview of the results. The winning methods described in
this work have advanced the state-of-the-art in using AI for stochastic routing
problems. Overall, by organizing this competition we have introduced routing
problems as an interesting problem setting for AI researchers. The simulator of
the problem has been made open-source and can be used by other researchers as a
benchmark for new AI methods.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 16:55:33 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Bliek",
"Laurens",
""
],
[
"da Costa",
"Paulo",
""
],
[
"Afshar",
"Reza Refaei",
""
],
[
"Zhang",
"Yingqian",
""
],
[
"Catshoek",
"Tom",
""
],
[
"Vos",
"Daniël",
""
],
[
"Verwer",
"Sicco",
""
],
[
"Schmitt-Ulms",
"Fynn",
""
],
[
"Hottung",
"André",
""
],
[
"Shah",
"Tapan",
""
],
[
"Sellmann",
"Meinolf",
""
],
[
"Tierney",
"Kevin",
""
],
[
"Perreault-Lafleur",
"Carl",
""
],
[
"Leboeuf",
"Caroline",
""
],
[
"Bobbio",
"Federico",
""
],
[
"Pepin",
"Justine",
""
],
[
"Silva",
"Warley Almeida",
""
],
[
"Gama",
"Ricardo",
""
],
[
"Fernandes",
"Hugo L.",
""
],
[
"Zaefferer",
"Martin",
""
],
[
"López-Ibáñez",
"Manuel",
""
],
[
"Irurozki",
"Ekhine",
""
]
] |
new_dataset
| 0.971438 |
2201.10477
|
Yawen Wang
|
Yawen Wang, Daniel Crankshaw, Neeraja J. Yadwadkar, Daniel Berger,
Christos Kozyrakis, Ricardo Bianchini
|
SOL: Safe On-Node Learning in Cloud Platforms
| null | null | null | null |
cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud platforms run many software agents on each server node. These agents
manage all aspects of node operation, and in some cases frequently collect data
and make decisions. Unfortunately, their behavior is typically based on
pre-defined static heuristics or offline analysis; they do not leverage on-node
machine learning (ML). In this paper, we first characterize the spectrum of
node agents in Azure, and identify the classes of agents that are most likely
to benefit from on-node ML. We then propose SOL, an extensible framework for
designing ML-based agents that are safe and robust to the range of failure
conditions that occur in production. SOL provides a simple API to agent
developers and manages the scheduling and running of the agent-specific
functions they write. We illustrate the use of SOL by implementing three
ML-based agents that manage CPU cores, node power, and memory placement. Our
experiments show that (1) ML substantially improves our agents, and (2) SOL
ensures that agents operate safely under a variety of failure conditions. We
conclude that ML-based agents show significant potential and that SOL can help
build them.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 17:21:58 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Wang",
"Yawen",
""
],
[
"Crankshaw",
"Daniel",
""
],
[
"Yadwadkar",
"Neeraja J.",
""
],
[
"Berger",
"Daniel",
""
],
[
"Kozyrakis",
"Christos",
""
],
[
"Bianchini",
"Ricardo",
""
]
] |
new_dataset
| 0.955625 |
2201.10489
|
Gengchen Mai
|
Gengchen Mai, Yao Xuan, Wenyun Zuo, Krzysztof Janowicz, Ni Lao
|
Sphere2Vec: Multi-Scale Representation Learning over a Spherical Surface
for Geospatial Predictions
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
  Generating learning-friendly representations for points in a 2D space is a
fundamental and long-standing problem in machine learning. Recently,
multi-scale encoding schemes (such as Space2Vec) were proposed to directly
encode any point in 2D space as a high-dimensional vector, and have been
successfully applied to various (geo)spatial prediction tasks. However, a map
projection distortion problem arises when applying location encoding models to
large-scale real-world GPS coordinate datasets (e.g., species images taken all
over the world) - all current location encoding models are designed to encode
points in a 2D (Euclidean) space but not on a spherical surface such as the
Earth's. To solve this problem, we propose a multi-scale location encoding
model called Sphere2Vec which directly encodes point coordinates on a spherical
surface while avoiding the map projection distortion problem. We provide a
theoretical proof that the Sphere2Vec encoding preserves the spherical surface
distance between any two points. We also develop a unified view of
distance-preserving encoding on spheres based on the Double Fourier Sphere
(DFS). We apply Sphere2Vec to the geo-aware image classification task. Our
analysis shows that Sphere2Vec outperforms other 2D location encoder models,
especially in polar regions and data-sparse areas, owing to its preservation of
spherical surface distance.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 17:34:29 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Mai",
"Gengchen",
""
],
[
"Xuan",
"Yao",
""
],
[
"Zuo",
"Wenyun",
""
],
[
"Janowicz",
"Krzysztof",
""
],
[
"Lao",
"Ni",
""
]
] |
new_dataset
| 0.99875 |
2201.10517
|
Moustafa Gharamti Dr
|
Moustafa Gharamti, Maciej Jarema, Samuel Kirwin-Jones
|
DFORMPY: A Python Library for visualising and zooming on differential
forms
| null | null | null | null |
cs.SC physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the v1.0.1 release of DFormPy, the first Python library providing
an interactive visualisation of differential forms. DFormPy is also capable of
exterior algebra and vector calculus, building on the capabilities of NumPy and
matplotlib. This short paper will demonstrate the functionalities of the
library, briefly outlining the mathematics involved with our objects and the
methods available to the user. DFormPy is an open-source library with an
interactive GUI, released under the MIT license at
https://github.com/MostaphaG/Summer_project-df
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 10:51:04 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Gharamti",
"Moustafa",
""
],
[
"Jarema",
"Maciej",
""
],
[
"Kirwin-Jones",
"Samuel",
""
]
] |
new_dataset
| 0.970311 |
2201.10531
|
Gourav Takhar
|
Gourav Takhar, Ramesh Karri, Christian Pilato, and Subhajit Roy
|
HOLL: Program Synthesis for Higher Order Logic Locking
|
Accepted at the TACAS-22 conference. 24 pages in LLNCS format (without
references), 11 figures, 5 tables
| null | null | null |
cs.CR cs.FL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Logic locking "hides" the functionality of a digital circuit to protect it
from counterfeiting, piracy, and malicious design modifications. The original
design is transformed into a "locked" design such that the circuit reveals its
correct functionality only when it is "unlocked" with a secret sequence of
bits--the key bit-string. However, strong attacks, especially the SAT attack
that uses a SAT solver to recover the key bitstring, have been profoundly
effective at breaking the locked circuit and recovering the circuit
functionality.
We lift logic locking to Higher Order Logic Locking (HOLL) by hiding a
higher-order relation, instead of a key of independent values, challenging the
attacker to discover this key relation to recreate the circuit functionality.
Our technique uses program synthesis to construct the locked design and
synthesize a corresponding key relation. HOLL has low overhead, and existing
attacks on logic locking do not apply, as the entity to be recovered is no
longer a value. To evaluate our proposal, we propose a new attack (SynthAttack) that
uses an inductive synthesis algorithm guided by an operational circuit as an
input-output oracle to recover the hidden functionality. SynthAttack is
inspired by the SAT attack, and similar to the SAT attack, it is verifiably
correct, i.e., if the correct functionality is revealed, a verification check
guarantees the same. Our empirical analysis shows that SynthAttack can break
HOLL for small circuits and small key relations, but it is ineffective for
real-life designs.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 18:39:00 GMT"
}
] | 2022-01-26T00:00:00 |
[
[
"Takhar",
"Gourav",
""
],
[
"Karri",
"Ramesh",
""
],
[
"Pilato",
"Christian",
""
],
[
"Roy",
"Subhajit",
""
]
] |
new_dataset
| 0.966814 |
1806.00276
|
Paul Schmitt
|
Paul Schmitt, Anne Edmundson, Nick Feamster
|
Oblivious DNS: Practical Privacy for DNS Queries
| null | null |
10.1145/3340301.3341128
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Virtually every Internet communication typically involves a Domain Name
System (DNS) lookup for the destination server that the client wants to
communicate with. Operators of DNS recursive resolvers---the machines that
receive a client's query for a domain name and resolve it to a corresponding IP
address---can learn significant information about client activity. Past work,
for example, indicates that DNS queries reveal information ranging from web
browsing activity to the types of devices that a user has in their home.
Recognizing the privacy vulnerabilities associated with DNS queries, various
third parties have created alternate DNS services that obscure a user's DNS
queries from his or her Internet service provider. Yet, these systems merely
transfer trust to a different third party. We argue that no single party ought
to be able to associate DNS queries with a client IP address that issues those
queries. To this end, we present Oblivious DNS (ODNS), which introduces an
additional layer of obfuscation between clients and their queries. To do so,
ODNS uses its own authoritative namespace; the authoritative servers for the
ODNS namespace act as recursive resolvers for the DNS queries that they
receive, but they never see the IP addresses for the clients that initiated
these queries. We present an initial deployment of ODNS; our experiments show
that ODNS introduces minimal performance overhead, both for individual queries
and for web page loads. We design ODNS to be compatible with existing DNS
protocols and infrastructure, and we are actively working on an open standard
with the IETF.
|
[
{
"version": "v1",
"created": "Fri, 1 Jun 2018 10:35:12 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Dec 2018 17:32:09 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Schmitt",
"Paul",
""
],
[
"Edmundson",
"Anne",
""
],
[
"Feamster",
"Nick",
""
]
] |
new_dataset
| 0.996186 |
1903.02252
|
Arjun Akula
|
Arjun R. Akula, Song-Chun Zhu
|
Discourse Parsing in Videos: A Multi-modal Approach
|
Accepted in CVPR 2019 Workshop on Language and Vision (Oral
Presentation)
|
CVPR 2019 Workshop on Language and Vision (Oral Presentation)
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text-level discourse parsing aims to unmask how two sentences in the text are
related to each other. We propose the task of Visual Discourse Parsing, which
requires understanding discourse relations among scenes in a video. Here we use
the term scene to refer to a subset of video frames that can better summarize
the video. In order to collect a dataset for learning discourse cues from
videos, one needs to manually identify the scenes from a large pool of video
frames and then annotate the discourse relations between them. This is clearly
a time consuming, expensive and tedious task. In this work, we propose an
approach to identify discourse cues from the videos without the need to
explicitly identify and annotate the scenes. We also present a novel dataset
containing 310 videos and the corresponding discourse cues to evaluate our
approach. We believe that many of the multi-discipline AI problems such as
Visual Dialog and Visual Storytelling would greatly benefit from the use of
visual discourse cues.
|
[
{
"version": "v1",
"created": "Wed, 6 Mar 2019 09:09:47 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Mar 2019 21:39:16 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Jan 2022 09:05:32 GMT"
},
{
"version": "v4",
"created": "Sat, 22 Jan 2022 18:46:14 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Akula",
"Arjun R.",
""
],
[
"Zhu",
"Song-Chun",
""
]
] |
new_dataset
| 0.999857 |
1911.03858
|
Andrii Riazanov
|
Venkatesan Guruswami, Andrii Riazanov, Min Ye
|
Ar{\i}kan meets Shannon: Polar codes with near-optimal convergence to
channel capacity
| null | null | null | null |
cs.IT cs.CC cs.DS math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $W$ be a binary-input memoryless symmetric (BMS) channel with Shannon
capacity $I(W)$ and fix any $\alpha > 0$. We construct, for any sufficiently
small $\delta > 0$, binary linear codes of block length
$O(1/\delta^{2+\alpha})$ and rate $I(W)-\delta$ that enable reliable
communication on $W$ with quasi-linear time encoding and decoding. Shannon's
noisy coding theorem established the \emph{existence} of such codes (without
efficient constructions or decoding) with block length $O(1/\delta^2)$. This
quadratic dependence on the gap $\delta$ to capacity is known to be best
possible. Our result thus yields a constructive version of Shannon's theorem
with near-optimal convergence to capacity as a function of the block length.
This resolves a central theoretical challenge associated with the attainment of
Shannon capacity. Previously such a result was only known for the erasure
channel.
Our codes are a variant of Ar{\i}kan's polar codes based on multiple
carefully constructed local kernels, one for each intermediate channel that
arises in the decoding. A crucial ingredient in the analysis is a strong
converse of the noisy coding theorem when communicating using random linear
codes on arbitrary BMS channels. Our converse theorem shows extreme
unpredictability of even a single message bit for random coding at rates
slightly above capacity.
|
[
{
"version": "v1",
"created": "Sun, 10 Nov 2019 05:45:33 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jul 2020 20:51:11 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Jan 2022 16:32:20 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Guruswami",
"Venkatesan",
""
],
[
"Riazanov",
"Andrii",
""
],
[
"Ye",
"Min",
""
]
] |
new_dataset
| 0.999745 |
2004.03656
|
Nathana\"el Eon
|
Pablo Arrighi, Giuseppe Di Molfetta, Nathana\"el Eon
|
Gauge-invariance in cellular automata
|
This article supersedes arXiv:1802.07644 and arXiv:1908.01229
| null |
10.1007/s11047-022-09879-1
| null |
cs.FL nlin.CG quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gauge-invariance is a fundamental concept in Physics -- known to provide
mathematical justification for the fundamental forces. In this paper, we
provide discrete counterparts to the main gauge theoretical concepts directly
in terms of Cellular Automata. More precisely, the notions of gauge-invariance
and gauge-equivalence in Cellular Automata are formalized. A step-by-step
gauging procedure to enforce this symmetry upon a given Cellular Automaton is
developed, and three examples of gauge-invariant Cellular Automata are
examined.
|
[
{
"version": "v1",
"created": "Tue, 7 Apr 2020 19:20:26 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jan 2022 09:51:49 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Arrighi",
"Pablo",
""
],
[
"Di Molfetta",
"Giuseppe",
""
],
[
"Eon",
"Nathanaël",
""
]
] |
new_dataset
| 0.97184 |
2009.09035
|
Paul Schmitt
|
Paul Schmitt and Barath Raghavan
|
Pretty Good Phone Privacy
| null |
Proceedings of the 30th USENIX Security Symposium, August 2021
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To receive service in today's cellular architecture, phones uniquely identify
themselves to towers and thus to operators. This is now a cause of major
privacy violations, as operators sell and leak identity and location data
of hundreds of millions of mobile users.
In this paper, we take an end-to-end perspective on the cellular architecture
and find key points of decoupling that enable us to protect user identity and
location privacy with no changes to physical infrastructure, no added latency,
and no requirement of direct cooperation from existing operators.
We describe Pretty Good Phone Privacy (PGPP) and demonstrate how our modified
backend stack (NGC) works with real phones to provide ordinary yet
privacy-preserving connectivity. We explore inherent privacy and efficiency
tradeoffs in a simulation of a large metropolitan region. We show how PGPP
maintains today's control overheads while significantly improving user identity
and location privacy.
|
[
{
"version": "v1",
"created": "Fri, 18 Sep 2020 19:27:49 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Sep 2020 12:47:16 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Dec 2020 19:49:34 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Schmitt",
"Paul",
""
],
[
"Raghavan",
"Barath",
""
]
] |
new_dataset
| 0.979114 |
2012.04581
|
Shiv Ram Dubey
|
Viswanatha Reddy Gajjala, Sai Prasanna Teja Reddy, Snehasis Mukherjee,
Shiv Ram Dubey
|
MERANet: Facial Micro-Expression Recognition using 3D Residual Attention
Network
|
Published in Twelfth Indian Conference on Computer Vision, Graphics
and Image Processing (ICVGIP), 2021
| null |
10.1145/3490035.3490260
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Micro-expression has emerged as a promising modality in affective computing
due to its high objectivity in emotion detection. Despite the high recognition
accuracy provided by deep learning models, there is still significant scope for
improvement in micro-expression recognition techniques.
The presence of micro-expressions in small-local regions of the face, as well
as the limited size of available databases, continue to limit the accuracy in
recognizing micro-expressions. In this work, we propose a facial
micro-expression recognition model using a 3D residual attention network, named
MERANet, to tackle such challenges. The proposed model takes advantage of
spatial-temporal attention and channel attention together, to learn deeper
fine-grained subtle features for classification of emotions. Further, the
proposed model encompasses both spatial and temporal information simultaneously
using the 3D kernels and residual connections. Moreover, the channel features
and spatio-temporal features are re-calibrated using the channel and
spatio-temporal attentions, respectively in each residual module. Our attention
mechanism enables the model to learn to focus on different facial areas of
interest. The experiments are conducted on benchmark facial micro-expression
datasets. A superior performance is observed as compared to the
state-of-the-art for facial micro-expression recognition on benchmark data.
|
[
{
"version": "v1",
"created": "Mon, 7 Dec 2020 16:41:42 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jan 2022 05:45:59 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Gajjala",
"Viswanatha Reddy",
""
],
[
"Reddy",
"Sai Prasanna Teja",
""
],
[
"Mukherjee",
"Snehasis",
""
],
[
"Dubey",
"Shiv Ram",
""
]
] |
new_dataset
| 0.99897 |
2101.10759
|
Xutan Peng
|
Xutan Peng, Yi Zheng, Chenghua Lin, Advaith Siddharthan
|
Summarising Historical Text in Modern Languages
|
To appear at EACL 2021
|
EACL 2021
|
10.18653/v1/2021.eacl-main.273
| null |
cs.CL cs.AI cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the task of historical text summarisation, where documents in
historical forms of a language are summarised in the corresponding modern
language. This is a fundamentally important task for historians and digital
humanities researchers, but it has never been automated. We compile a high-quality
gold-standard text summarisation dataset, which consists of historical German
and Chinese news from hundreds of years ago summarised in modern German or
Chinese. Based on cross-lingual transfer learning techniques, we propose a
summarisation model that can be trained even with no cross-lingual (historical
to modern) parallel data, and further benchmark it against state-of-the-art
algorithms. We report automatic and human evaluations that distinguish the
historic to modern language summarisation task from standard cross-lingual
summarisation (i.e., modern to modern language), highlight the distinctness and
value of our dataset, and demonstrate that our transfer learning approach
outperforms standard cross-lingual benchmarks on this task.
|
[
{
"version": "v1",
"created": "Tue, 26 Jan 2021 13:00:07 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jan 2021 04:17:02 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Peng",
"Xutan",
""
],
[
"Zheng",
"Yi",
""
],
[
"Lin",
"Chenghua",
""
],
[
"Siddharthan",
"Advaith",
""
]
] |
new_dataset
| 0.995124 |
2103.08514
|
Sara Ramezanian
|
Sara Ramezanian, Tommi Meskanen, and Valtteri Niemi
|
Multi-party Private Set Operations with an External Decider
| null |
Data and Applications Security and Privacy XXXV. DBSec 2021.
Lecture Notes in Computer Science, vol 12840
|
10.1007/978-3-030-81242-3_7
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  A Private Set Operation (PSO) protocol involves at least two parties with
their private input sets. The goal of the protocol is for the parties to learn
the output of a set operation, e.g. set intersection, on their input sets,
without revealing any information about the items that are not in the output
set. Commonly, the outcome of the set operation is revealed to the parties and
no-one else. However, in many application areas of PSO the result of the set
operation should be learned by an external participant who does not have an
input set. We call this participant the decider. In this paper, we present new
variants of multi-party PSO, where there is a decider who gets the result. All
parties except the decider have a private set. The other parties learn neither
the result nor anything else from the protocol. Moreover, we present a generic
solution to the problem of PSO.
|
[
{
"version": "v1",
"created": "Mon, 15 Mar 2021 16:36:33 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Ramezanian",
"Sara",
""
],
[
"Meskanen",
"Tommi",
""
],
[
"Niemi",
"Valtteri",
""
]
] |
new_dataset
| 0.974144 |
2103.10726
|
Yasunori Toshimitsu
|
Yasunori Toshimitsu, Ki Wan Wong, Thomas Buchner, Robert Katzschmann
|
SoPrA: Fabrication & Dynamical Modeling of a Scalable Soft Continuum
Robotic Arm with Integrated Proprioceptive Sensing
|
8 pages, 8 figures, 2021 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2021). For associated video, see
https://youtu.be/bTD2H4qhzpg
| null |
10.1109/IROS51168.2021.9636539
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to their inherent compliance, soft robots are more versatile than rigid
linked robots when interacting with their environment, for example in object
manipulation or biomimetic motion, and are considered a key element in
introducing robots to everyday environments. Although various soft robotic
actuators exist, past research has focused primarily on designing and analyzing
single components. Limited effort has been made to combine each component to
create an overall capable, integrated soft robot. Ideally, the behavior of such
a robot can be accurately modeled, and its motion within an environment can be
tracked through its proprioception, without requiring external sensors. This
work presents a
design and modeling process for a Soft continuum Proprioceptive Arm (SoPrA)
actuated by pneumatics. The integrated design is suitable for an analytical
model due to its internal capacitive flex sensor for proprioceptive
measurements and its fiber-reinforced fluidic elastomer actuators. The proposed
analytical dynamical model accounts for the inertial effects of the actuator's
mass and the material properties, and predicts in real-time the soft robot's
behavior. Our estimation method integrates the analytical model with
proprioceptive sensors to calculate external forces, all without relying on an
external motion capture system. SoPrA is validated in a series of experiments
demonstrating the model's and sensor's accuracy in estimation. SoPrA will
enable soft arm manipulation, including force sensing, while operating in
obstructed environments that disallow exteroceptive measurements.
|
[
{
"version": "v1",
"created": "Fri, 19 Mar 2021 10:44:18 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Mar 2021 16:22:27 GMT"
},
{
"version": "v3",
"created": "Fri, 6 Aug 2021 04:06:30 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Toshimitsu",
"Yasunori",
""
],
[
"Wong",
"Ki Wan",
""
],
[
"Buchner",
"Thomas",
""
],
[
"Katzschmann",
"Robert",
""
]
] |
new_dataset
| 0.996952 |
2106.00221
|
Yong Liu
|
Yong Liu, Xiangning Chen, Minhao Cheng, Cho-Jui Hsieh, Yang You
|
Concurrent Adversarial Learning for Large-Batch Training
|
Accepted to ICLR 2022
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-batch training has become a commonly used technique when training
neural networks with a large number of GPU/TPU processors. As batch size
increases, stochastic optimizers tend to converge to sharp local minima,
leading to degraded test performance. Current methods usually use extensive
data augmentation to increase the batch size, but we find that the performance
gain from data augmentation decreases as batch size increases and that data
augmentation becomes insufficient after a certain point. In this paper, we
propose to use
adversarial learning to increase the batch size in large-batch training.
Despite being a natural choice for smoothing the decision surface and biasing
towards a flat region, adversarial learning has not been successfully applied
in large-batch training since it requires at least two sequential gradient
computations at each step, which will at least double the running time compared
with vanilla training even with a large number of processors. To overcome this
issue, we propose a novel Concurrent Adversarial Learning (ConAdv) method that
decouples the sequential gradient computations in adversarial learning by
utilizing stale parameters. Experimental results demonstrate that ConAdv can
successfully increase the batch size on ResNet-50 training on ImageNet while
maintaining high accuracy. In particular, we show that ConAdv alone can achieve
75.3\% top-1 accuracy on ImageNet ResNet-50 training with a 96K batch size, and
the accuracy can be further improved to 76.2\% when combining ConAdv with data
augmentation. This is the first work that successfully scales the ResNet-50
training batch size to 96K.
|
[
{
"version": "v1",
"created": "Tue, 1 Jun 2021 04:26:02 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jan 2022 14:12:53 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Liu",
"Yong",
""
],
[
"Chen",
"Xiangning",
""
],
[
"Cheng",
"Minhao",
""
],
[
"Hsieh",
"Cho-Jui",
""
],
[
"You",
"Yang",
""
]
] |
new_dataset
| 0.989197 |
2106.04048
|
Jiawei Xu
|
Jiawei Xu, Diego S. D'Antonio, David Salda\~na
|
H-ModQuad: Modular Multi-Rotors with 4, 5, and 6 Controllable DOF
|
6 pages plus reference, ICRA 2021
| null |
10.1109/ICRA48506.2021.9561016
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Traditional aerial vehicles are usually custom-designed for specific tasks.
Although they offer an efficient solution, they are not always able to adapt to
changes in the task specification, e.g., increasing the payload. This applies
to quadrotors, which have a maximum payload and only four controllable degrees
of freedom, limiting their adaptability to task variations. We propose a
versatile modular robotic system that can increase its payload and degrees of
freedom by assembling heterogeneous modules; we call it H-ModQuad. It consists
of cuboid modules propelled by quadrotors with tilted propellers that can
generate forces in different directions. By connecting different types of
modules, an H-ModQuad can increase its controllable degrees of freedom from 4
to 5 and 6. We model the general structure and propose three controllers, one
for each number of controllable degrees of freedom. We extend the concept of
the actuation ellipsoid to find the best reference orientation that can
maximize the performance of the structure. Our approach is validated with
experiments using actual robots, showing the independence of the translation
and orientation of a structure.
|
[
{
"version": "v1",
"created": "Tue, 8 Jun 2021 01:53:30 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Xu",
"Jiawei",
""
],
[
"D'Antonio",
"Diego S.",
""
],
[
"Saldaña",
"David",
""
]
] |
new_dataset
| 0.996927 |
2107.06945
|
Sven Puchinger
|
Peter Beelen, Sven Puchinger, Johan Rosenkilde
|
Twisted Reed-Solomon Codes
|
15 pages, accepted for publication in IEEE Transactions on
Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we present a new construction of evaluation codes in the
Hamming metric, which we call twisted Reed-Solomon codes. Whereas Reed-Solomon
(RS) codes are MDS codes, this need not be the case for twisted RS codes.
Nonetheless, we show that our construction yields several families of MDS
codes. Further, for a large subclass of (MDS) twisted RS codes, we show that
the new codes are not generalized RS codes. To achieve this, we use properties
of Schur squares of codes as well as an explicit description of the dual of a
large subclass of our codes. We conclude the paper with a description of a
decoder that performs very well in practice, as shown by extensive simulation
results.
|
[
{
"version": "v1",
"created": "Wed, 14 Jul 2021 19:20:15 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jan 2022 11:19:08 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Beelen",
"Peter",
""
],
[
"Puchinger",
"Sven",
""
],
[
"Rosenkilde",
"Johan",
""
]
] |
new_dataset
| 0.999754 |
2111.01354
|
Siddhartha Gairola
|
Siddhartha Gairola, Murtuza Bohra, Nadeem Shaheer, Navya Jayaprakash,
Pallavi Joshi, Anand Balasubramaniam, Kaushik Murali, Nipun Kwatra, Mohit
Jain
|
SmartKC: Smartphone-based Corneal Topographer for Keratoconus Detection
|
Change Log: + Fixed sim-K computation (updated Section 5.5.3); re-ran
our pipeline with the updated sim-K values (updated Figure 7); + Conducted
the comparative evaluation with doctors again (total 4 doctors), and got
improved results (updated Section 7.2 and Table 2); [Note: This is an updated
version of the paper that was accepted for publication in IMWUT 2021.]
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Keratoconus is a severe eye disease affecting the cornea (the clear,
dome-shaped outer surface of the eye), causing it to become thin and develop a
conical bulge. The diagnosis of keratoconus requires sophisticated ophthalmic
devices which are non-portable and very expensive. This makes early detection
of keratoconus inaccessible to large populations in low- and middle-income
countries, making it a leading cause of partial/complete blindness among such
populations. We propose SmartKC, a low-cost, smartphone-based keratoconus
diagnosis system comprising a 3D-printed placido's disc attachment, an LED
light strip, and an intelligent smartphone app to capture the reflection of the
placido rings on the cornea. An image processing pipeline analyzes the corneal
image and uses the smartphone's camera parameters, the placido rings' 3D
location, the pixel location of the reflected placido rings and the setup's
working distance to construct the corneal surface, via the Arc-Step method and
Zernike polynomials based surface fitting. In a clinical study with 101
distinct eyes, we found that SmartKC achieves a sensitivity of 94.1% and a
specificity of 100.0%. Moreover, the quantitative curvature estimates (sim-K)
strongly correlate with a gold-standard medical device (Pearson correlation
coefficient =0.78). Our results indicate that SmartKC has the potential to be
used as a keratoconus screening tool under real-world medical settings.
|
[
{
"version": "v1",
"created": "Tue, 2 Nov 2021 03:30:40 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jan 2022 04:24:59 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Gairola",
"Siddhartha",
""
],
[
"Bohra",
"Murtuza",
""
],
[
"Shaheer",
"Nadeem",
""
],
[
"Jayaprakash",
"Navya",
""
],
[
"Joshi",
"Pallavi",
""
],
[
"Balasubramaniam",
"Anand",
""
],
[
"Murali",
"Kaushik",
""
],
[
"Kwatra",
"Nipun",
""
],
[
"Jain",
"Mohit",
""
]
] |
new_dataset
| 0.999655 |
2111.04875
|
Senthil Yogamani
|
Sambit Mohapatra, Mona Hodaei, Senthil Yogamani, Stefan Milz, Heinrich
Gotzig, Martin Simon, Hazem Rashed, Patrick Maeder
|
LiMoSeg: Real-time Bird's Eye View based LiDAR Motion Segmentation
|
Accepted for Presentation at International Conference on Computer
Vision Theory and Applications (VISAPP 2022)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Moving object detection and segmentation is an essential task in the
Autonomous Driving pipeline. Detecting and isolating static and moving
components of a vehicle's surroundings are particularly crucial in path
planning and localization tasks. This paper proposes a novel real-time
architecture for motion segmentation of Light Detection and Ranging (LiDAR)
data. We use three successive scans of LiDAR data in 2D Bird's Eye View (BEV)
representation to perform pixel-wise classification as static or moving.
Furthermore, we propose a novel data augmentation technique to reduce the
significant class imbalance between static and moving objects. We achieve this
by artificially synthesizing moving objects by cutting and pasting static
vehicles. We demonstrate a low latency of 8 ms on a commonly used automotive
embedded platform, namely Nvidia Jetson Xavier. To the best of our knowledge,
this is the first work directly performing motion segmentation in LiDAR BEV
space. We provide quantitative results on the challenging SemanticKITTI
dataset, and qualitative results are provided at https://youtu.be/2aJ-cL8b0LI.
|
[
{
"version": "v1",
"created": "Mon, 8 Nov 2021 23:40:55 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Dec 2021 18:59:44 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Jan 2022 21:46:40 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Mohapatra",
"Sambit",
""
],
[
"Hodaei",
"Mona",
""
],
[
"Yogamani",
"Senthil",
""
],
[
"Milz",
"Stefan",
""
],
[
"Gotzig",
"Heinrich",
""
],
[
"Simon",
"Martin",
""
],
[
"Rashed",
"Hazem",
""
],
[
"Maeder",
"Patrick",
""
]
] |
new_dataset
| 0.999514 |
2112.00621
|
Gabriel Ammes
|
Gabriel Ammes, Walter Lau Neto, Paulo Butzen, Pierre-Emmanuel
Gaillardon and Renato P. Ribas
|
A Two-Level Approximate Logic Synthesis Combining Cube Insertion and
Removal
|
5 Pages, submitted to IEEE Transactions on Computer-Aided Design of
Integrated Circuits and Systems
| null |
10.1109/TCAD.2022.3143489
| null |
cs.OH cs.AR cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Approximate computing is an attractive paradigm for reducing the design
complexity of error-resilient systems, therefore improving performance and
saving power consumption. In this work, we propose a new two-level approximate
logic synthesis method based on cube insertion and removal procedures.
Experimental results have shown significant literal count and runtime reduction
compared to the state-of-the-art approach. The method scalability is
illustrated for a high error threshold over large benchmark circuits. The
obtained solutions present literal count reductions of up to 38%, 56%, and
93% for error rates of 1%, 3%, and 5%, respectively.
|
[
{
"version": "v1",
"created": "Mon, 22 Nov 2021 00:37:50 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Ammes",
"Gabriel",
""
],
[
"Neto",
"Walter Lau",
""
],
[
"Butzen",
"Paulo",
""
],
[
"Gaillardon",
"Pierre-Emmanuel",
""
],
[
"Ribas",
"Renato P.",
""
]
] |
new_dataset
| 0.96095 |
2112.10038
|
Peng Xu Mr
|
Peng Xu
|
Android-COCO: Android Malware Detection with Graph Neural Network for
Byte- and Native-Code
|
10 pages, 3 figures, 3 tables
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the popularity of Android growing exponentially, the amount of malware
has exploded. It is arguably one of the most prevalent problems on
mobile platforms. Recently, various approaches have been introduced to detect
Android malware, the majority of these are either based on the Manifest File
features or the structural information, such as control flow graph and API
calls. Among those methods, nearly all of them only consider the Java byte-code
as the target to detect malicious behaviors. However, recent research and our
own statistics show that native payloads are commonly used in both benign and
malicious apps. Current state-of-the-art Android static analysis tools avoid
handling native method invocations. None of these tools is capable of
capturing inter-language behaviors.
In this work, we explore an ensemble mechanism, which shows how the
combination of byte-code and native-code analysis of Android applications can
be efficiently used to cope with the advanced sophistication of Android
malware. We, therefore, present a multi-layer approach that utilizes deep
learning, natural language processing (NLP), as well as graph embedding
techniques to handle the threats of Android malware, both from the Java
byte-code and native code. After that, we design an ensemble algorithm to
produce the final result of the malware detection system. To be specific, the
first layer of our detection approach operates on the application's byte-code
and native code, whereas the second layer applies the ensemble algorithm.
Large-scale experiments on 100,113 samples (35,113 malware and 65,000 benign)
show that the byte-code sub-system alone yields 99.8% accuracy and the
native-code sub-system yields an accuracy of 96.6%, whereas the Android-COCO
method attains an accuracy of 99.86%, outperforming various related works.
|
[
{
"version": "v1",
"created": "Sun, 19 Dec 2021 01:46:01 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jan 2022 14:11:00 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Xu",
"Peng",
""
]
] |
new_dataset
| 0.98682 |
2112.10469
|
Jordan Samhi
|
Jordan Samhi, Jun Gao, Nadia Daoudi, Pierre Graux, Henri Hoyez, Xiaoyu
Sun, Kevin Allix, Tegawend\'e F. Bissyand\'e, Jacques Klein
|
JuCify: A Step Towards Android Code Unification for Enhanced Static
Analysis
|
In the proceedings of the 44th International Conference on Software
Engineering 2022 (ICSE 2022)
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Native code is now commonplace within Android app packages where it co-exists
and interacts with Dex bytecode through the Java Native Interface to deliver
rich app functionalities. Yet, state-of-the-art static analysis approaches have
mostly overlooked the presence of such native code, which, however, may
implement some key sensitive, or even malicious, parts of the app behavior.
This limitation of the state of the art is a severe threat to validity in a
large range of static analyses that do not have a complete view of the
executable code in apps. To address this issue, we propose a new advance in the
ambitious research direction of building a unified model of all code in Android
apps. The JuCify approach presented in this paper is a significant step towards
such a model, where we extract and merge call graphs of native code and
bytecode to make the final model readily-usable by a common Android analysis
framework: in our implementation, JuCify builds on the Soot internal
intermediate representation. We performed empirical investigations to highlight
how, without the unified model, a significant number of Java methods called
from the native code are "unreachable" in apps' call-graphs, both in goodware
and malware. Using JuCify, we were able to enable static analyzers to reveal
cases where malware relied on native code to hide invocation of payment library
code or of other sensitive code in the Android framework. Additionally,
JuCify's model enables state-of-the-art tools to achieve better precision and
recall in detecting data leaks through native code. Finally, we show that by
using JuCify we can find sensitive data leaks that pass through native code.
|
[
{
"version": "v1",
"created": "Mon, 20 Dec 2021 12:08:57 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jan 2022 08:31:25 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Samhi",
"Jordan",
""
],
[
"Gao",
"Jun",
""
],
[
"Daoudi",
"Nadia",
""
],
[
"Graux",
"Pierre",
""
],
[
"Hoyez",
"Henri",
""
],
[
"Sun",
"Xiaoyu",
""
],
[
"Allix",
"Kevin",
""
],
[
"Bissyandé",
"Tegawendé F.",
""
],
[
"Klein",
"Jacques",
""
]
] |
new_dataset
| 0.987978 |
2112.10470
|
Jordan Samhi
|
Jordan Samhi, Li Li, Tegawend\'e F. Bissyand\'e, Jacques Klein
|
Difuzer: Uncovering Suspicious Hidden Sensitive Operations in Android
Apps
|
In the proceedings of the 44th International Conference on Software
Engineering 2022 (ICSE 2022)
| null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One prominent tactic used to keep malicious behavior from being detected
during dynamic test campaigns is logic bombs, where malicious operations are
triggered only when specific conditions are satisfied. Defusing logic bombs
remains an unsolved problem in the literature. In this work, we propose to
investigate Suspicious Hidden Sensitive Operations (SHSOs) as a step towards
triaging logic bombs. To that end, we develop a novel hybrid approach that
combines static analysis and anomaly detection techniques to uncover SHSOs,
which we predict as likely implementations of logic bombs. Concretely, Difuzer
identifies SHSO entry-points using an instrumentation engine and an
inter-procedural data-flow analysis. Then, it extracts trigger-specific
features to characterize SHSOs and leverages One-Class SVM to implement an
unsupervised learning model for detecting abnormal triggers.
We evaluate our prototype and show that it yields a precision of 99.02% in
detecting SHSOs, among which 29.7% are logic bombs. Difuzer outperforms the
state-of-the-art in revealing more logic bombs while yielding fewer false
positives in about one order of magnitude less time. All our artifacts are
released to the community.
|
[
{
"version": "v1",
"created": "Mon, 20 Dec 2021 12:11:27 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jan 2022 08:28:13 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Samhi",
"Jordan",
""
],
[
"Li",
"Li",
""
],
[
"Bissyandé",
"Tegawendé F.",
""
],
[
"Klein",
"Jacques",
""
]
] |
new_dataset
| 0.950689 |
2201.08158
|
Tiansong Zhou
|
Tiansong Zhou, Tao Yu, Ruizhi Shao, Kun Li
|
HDhuman: High-quality Human Performance Capture with Sparse Views
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce HDhuman, a method that addresses the challenge of
novel view rendering of human performers that wear clothes with complex texture
patterns using a sparse set of camera views. Although some recent works have
achieved remarkable rendering quality on humans with relatively uniform
textures using sparse views, the rendering quality remains limited when dealing
with complex texture patterns as they are unable to recover the high-frequency
geometry details observed in the input views. To this end, the proposed
HDhuman uses a human reconstruction network with a pixel-aligned spatial
transformer and a rendering network that uses geometry-guided pixel-wise
feature integration to achieve high-quality human reconstruction and rendering.
The designed pixel-aligned spatial transformer calculates the correlations
between the input views, producing human reconstruction results with
high-frequency details. Based on the surface reconstruction results, the
geometry-guided pixel-wise visibility reasoning provides guidance for
multi-view feature integration, enabling the rendering network to render
high-quality images at 2k resolution on novel views. Unlike previous neural
rendering works that always need to train or fine-tune an independent network
for a different scene, our method is a general framework that is able to
generalize to novel subjects. Experiments show that our approach outperforms
all the prior generic or specific methods on both synthetic data and real-world
data.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 13:04:59 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jan 2022 12:49:11 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Zhou",
"Tiansong",
""
],
[
"Yu",
"Tao",
""
],
[
"Shao",
"Ruizhi",
""
],
[
"Li",
"Kun",
""
]
] |
new_dataset
| 0.995201 |
2201.08461
|
Ioannis Agadakos
|
Ioannis Agadakos, Manuel Egele, and William Robertson
|
Polytope: Practical Memory Access Control for C++ Applications
| null | null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing and implementing secure software is inarguably more important than
ever. However, despite years of research into privilege-separating programs, it
remains difficult to actually do so and such efforts can take years of
labor-intensive engineering to reach fruition. At the same time, new
intra-process isolation primitives make strong data isolation and privilege
separation more attractive from a performance perspective. Yet, substituting
intra-process security boundaries for time-tested process boundaries opens the
door to subtle but devastating privilege leaks. In this work, we present
Polytope, a language extension to C++ that aims to make efficient privilege
separation accessible to a wider audience of developers. Polytope defines a
policy language encoded as C++11 attributes that separate code and data into
distinct program partitions. A modified Clang front-end embeds source-level
policy as metadata nodes in the LLVM IR. An LLVM pass interprets embedded
policy and instruments an IR with code to enforce the source-level policy using
Intel MPK. A run-time support library manages partitions, protection keys,
dynamic memory operations, and indirect call target privileges. An evaluation
demonstrates that Polytope provides equivalent protection to prior systems with
a low annotation burden and comparable performance overhead. Polytope also
renders privilege leaks that contradict intended policy impossible to express.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 21:40:56 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jan 2022 16:49:51 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Agadakos",
"Ioannis",
""
],
[
"Egele",
"Manuel",
""
],
[
"Robertson",
"William",
""
]
] |
new_dataset
| 0.993872 |
2201.08889
|
Young-Ho Kim
|
Young-Ho Kim and Jarrod Collins and Zhongyu Li and Ponraj Chinnadurai
and Ankur Kapoor and C. Huie Lin and Tommaso Mansi
|
Automated Catheter Tip Repositioning for Intra-cardiac Echocardiography
|
arXiv admin note: substantial text overlap with arXiv:2009.05859
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: Intra-Cardiac Echocardiography (ICE) is a powerful imaging modality
for guiding cardiac electrophysiology and structural heart interventions. ICE
provides real-time observation of anatomy and devices, while enabling direct
monitoring of potential complications. In single operator settings, the
physician needs to switch back-and-forth between the ICE catheter and therapy
device, making continuous ICE support impossible. Two-operator setups are
therefore sometimes implemented, but they increase procedural costs and room
occupation.
Methods: An ICE catheter robotic control system is developed with an automated
catheter tip repositioning (i.e., view recovery) method, which can reproduce
important views previously navigated to and saved by the user. The performance
of the proposed method is demonstrated and evaluated in a combination of heart
phantom and animal experiments.
Results: Automated ICE view recovery achieved catheter tip position accuracy
of 2.09 +/- 0.90 mm and catheter image orientation accuracy of 3.93 +/- 2.07
degree in animal studies, and 0.67 +/- 0.79 mm and 0.37 +/- 0.19 degree in
heart phantom studies, respectively. Our proposed method is also successfully
used during transeptal puncture in animals without complications, showing the
possibility for fluoro-less transeptal puncture with ICE catheter robot.
Conclusion: Robotic ICE imaging has the potential to provide precise and
reproducible anatomical views, which can reduce overall execution time, labor
burden of procedures, and x-ray usage for a range of cardiac procedures.
Keywords: Automated View Recovery, Path Planning, Intra-cardiac
echocardiography (ICE), Catheter, Tendon-driven manipulator, Cardiac Imaging
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 21:18:57 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Kim",
"Young-Ho",
""
],
[
"Collins",
"Jarrod",
""
],
[
"Li",
"Zhongyu",
""
],
[
"Chinnadurai",
"Ponraj",
""
],
[
"Kapoor",
"Ankur",
""
],
[
"Lin",
"C. Huie",
""
],
[
"Mansi",
"Tommaso",
""
]
] |
new_dataset
| 0.997684 |
2201.08950
|
Zhuoran Zeng
|
Zhuoran Zeng and Ernest Davis
|
Physical Reasoning in an Open World
|
Presented at The Ninth Advances in Cognitive Systems (ACS) Conference
2021 (arXiv:2201.06134)
| null | null |
ACS2021/07
|
cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Most work on physical reasoning, both in artificial intelligence and in
cognitive science, has focused on closed-world reasoning, in which it is
assumed that the problem specification specifies all relevant objects and
substance, all their relations in an initial situation, and all exogenous
events. However, in many situations, it is important to do open-world
reasoning; that is, making valid conclusions from very incomplete information.
We have implemented in Prolog an open-world reasoner for a toy microworld of
containers that can be loaded, unloaded, sealed, unsealed, carried, and dumped.
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 02:35:16 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Zeng",
"Zhuoran",
""
],
[
"Davis",
"Ernest",
""
]
] |
new_dataset
| 0.990215 |
2201.08968
|
Huang Huang
|
Huang Huang, Michael Danielczuk, Chung Min Kim, Letian Fu, Zachary
Tam, Jeffrey Ichnowski, Anelia Angelova, Brian Ichter, and Ken Goldberg
|
Mechanical Search on Shelves using a Novel "Bluction" Tool
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shelves are common in homes, warehouses, and commercial settings due to their
storage efficiency. However, this efficiency comes at the cost of reduced
visibility and accessibility. When looking from a side (lateral) view of a
shelf, most objects will be fully occluded, resulting in a constrained
lateral-access mechanical search problem. To address this problem, we
introduce: (1) a novel bluction tool, which combines a thin pushing blade and
suction cup gripper, (2) an improved LAX-RAY simulation pipeline and perception
model that combines ray-casting with 2D Minkowski sums to efficiently generate
target occupancy distributions, and (3) a novel SLAX-RAY search policy, which
optimally reduces target object distribution support area using the bluction
tool. Experimental data from 2000 simulated shelf trials and 18 trials with a
physical Fetch robot equipped with the bluction tool suggest that using suction
grasping actions improves the success rate over the highest performing
push-only policy by 26% in simulation and 67% in physical environments.
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 05:47:30 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Huang",
"Huang",
""
],
[
"Danielczuk",
"Michael",
""
],
[
"Kim",
"Chung Min",
""
],
[
"Fu",
"Letian",
""
],
[
"Tam",
"Zachary",
""
],
[
"Ichnowski",
"Jeffrey",
""
],
[
"Angelova",
"Anelia",
""
],
[
"Ichter",
"Brian",
""
],
[
"Goldberg",
"Ken",
""
]
] |
new_dataset
| 0.954093 |
2201.08969
|
Hongjia Wu
|
Hongjia Wu, Ozgu Alay, Anna Brunstrom, Giuseppe Caso, Simone Ferlin
|
FALCON: Fast and Accurate Multipath Scheduling using Offline and Online
Learning
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multipath transport protocols enable the concurrent use of different network
paths, benefiting a fast and reliable data transmission. The scheduler of a
multipath transport protocol determines how to distribute data packets over
different paths. Existing multipath schedulers either conform to predefined
policies or to online trained policies. The adoption of millimeter wave
(mmWave) paths in 5th Generation (5G) networks and Wireless Local Area Networks
(WLANs) introduces time-varying network conditions, under which the existing
schedulers struggle to achieve fast and accurate adaptation. In this paper, we
propose FALCON, a learning-based multipath scheduler that can adapt fast and
accurately to time-varying network conditions. FALCON builds on the idea of
meta-learning where offline learning is used to create a set of meta-models
that represent coarse-grained network conditions, and online learning is used
to bootstrap a specific model for the current fine-grained network conditions
towards deriving the scheduling policy to deal with such conditions. Using
trace-driven emulation experiments, we demonstrate that FALCON outperforms the
best state-of-the-art scheduler by up to 19.3% and 23.6% in static and mobile
networks, respectively. Furthermore, we show that FALCON is flexible enough to
work with different types of applications, such as bulk transfer and web
services. Moreover, we observe that FALCON adapts much faster than all the
other learning-based schedulers, reaching almost an 8-fold speedup compared
to the best of them. Finally, we have validated the emulation results in
real-world settings illustrating that FALCON adapts well to the dynamicity of
real networks, consistently outperforming all other schedulers.
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 05:56:32 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Wu",
"Hongjia",
""
],
[
"Alay",
"Ozgu",
""
],
[
"Brunstrom",
"Anna",
""
],
[
"Caso",
"Giuseppe",
""
],
[
"Ferlin",
"Simone",
""
]
] |
new_dataset
| 0.997411 |
2201.08970
|
Siyuan Liang
|
Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao
|
Parallel Rectangle Flip Attack: A Query-based Black-box Attack against
Object Detection
|
8 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection has been widely used in many safety-critical tasks, such as
autonomous driving. However, its vulnerability to adversarial examples has not
been sufficiently studied, especially under the practical scenario of black-box
attacks, where the attacker can only access the query feedback of predicted
bounding-boxes and top-1 scores returned by the attacked model. Compared with
black-box attack to image classification, there are two main challenges in
black-box attack to detection. Firstly, even if one bounding-box is
successfully attacked, another sub-optimal bounding-box may be detected near
the attacked bounding-box. Secondly, there are multiple bounding-boxes, leading
to very high attack cost. To address these challenges, we propose a Parallel
Rectangle Flip Attack (PRFA) via random search. We explain the difference
between our method and other attacks in Fig.~\ref{fig1}. Specifically, we
generate perturbations in each rectangle patch to avoid sub-optimal detection
near the attacked region. Besides, utilizing the observation that adversarial
perturbations mainly locate around objects' contours and critical points under
white-box attacks, the search space of attacked rectangles is reduced to
improve the attack efficiency. Moreover, we develop a parallel mechanism of
attacking multiple rectangles simultaneously to further accelerate the attack
process. Extensive experiments demonstrate that our method can effectively and
efficiently attack various popular object detectors, including anchor-based and
anchor-free, and generate transferable adversarial examples.
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 06:00:17 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Liang",
"Siyuan",
""
],
[
"Wu",
"Baoyuan",
""
],
[
"Fan",
"Yanbo",
""
],
[
"Wei",
"Xingxing",
""
],
[
"Cao",
"Xiaochun",
""
]
] |
new_dataset
| 0.998826 |
2201.09001
|
Jiayi Zhang
|
Yan Zhang, Jiayi Zhang, Marco Di Renzo, Huahua Xiao, and Bo Ai
|
Reconfigurable Intelligent Surfaces with Outdated Channel State
Information: Centralized vs. Distributed Deployments
|
to appear in IEEE Transactions on Communications, 2022
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the performance of an RIS-aided wireless
communication system subject to outdated channel state information that may
operate in both the near- and far-field regions. In particular, we take two RIS
deployment strategies into consideration: (i) the centralized deployment, where
all the reflecting elements are installed on a single RIS and (ii) the
distributed deployment, where the same number of reflecting elements are placed
on multiple RISs. For both deployment strategies, we derive accurate
closed-form approximations for the ergodic capacity, and we introduce tight
upper and lower bounds for the ergodic capacity to obtain useful design
insights. From this analysis, we unveil that an increase of the transmit power,
the Rician-K factor, the accuracy of the channel state information and the
number of reflecting elements helps improve the system performance. Moreover, we
prove that the centralized RIS-aided deployment may achieve a higher ergodic
capacity as compared with the distributed RIS-aided deployment when the RIS is
located near the base station or near the user. In different setups, on the
other hand, we prove that the distributed deployment outperforms the
centralized deployment. Finally, the analytical results are verified by using
Monte Carlo simulations.
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 08:59:28 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Zhang",
"Yan",
""
],
[
"Zhang",
"Jiayi",
""
],
[
"Di Renzo",
"Marco",
""
],
[
"Xiao",
"Huahua",
""
],
[
"Ai",
"Bo",
""
]
] |
new_dataset
| 0.995353 |
2201.09034
|
Dmitry Zaitsev
|
Dmitry A. Zaitsev
|
Strong Sleptsov Net is Turing-Complete
|
21 pages, 8 figures, 2 tables, 43 references
| null | null | null |
cs.CC cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is known that a Sleptsov net, with multiple firing a transition at a step,
runs exponentially faster than a Petri net opening prospects for its
application as a graphical language of concurrent programming. We provide
classification of place-transition nets based on firability rules considering
general definitions and their strong and weak variants. We introduce and study
a strong Sleptsov net, where a transition with the maximal firing multiplicity
fires at a step, and prove that it is Turing-complete. We follow the proof
pattern of Peterson applied to prove that an inhibitor Petri net is
Turing-complete simulating a Shepherdson and Sturgis register machine. The
central construct of our proof is a strong Sleptsov net that checks whether a
register value (place marking) equals zero.
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 12:20:20 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Zaitsev",
"Dmitry A.",
""
]
] |
new_dataset
| 0.998133 |
2201.09082
|
Xinrui Li
|
Xinrui Li, Haiquan Lu, Yong Zeng, Shi Jin, and Rui Zhang
|
Near-Field Modelling and Performance Analysis of Modular Extremely
Large-Scale Array Communications
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter studies a new array architecture, termed as modular extremely
large-scale array (XL-array), for which a large number of array elements are
arranged in a modular manner. Each module consists of a moderate number of
array elements and the modules are regularly arranged with the inter-module
space typically much larger than signal wavelength to cater to the actual
mounting structure. We study the mathematical modelling and conduct the
performance analysis for modular XL-array communications, by considering the
non-uniform spherical wave (NUSW) characteristic that is more suitable than the
conventional uniform plane wave (UPW) assumption for physically large arrays. A
closed-form expression is derived for the maximum signal-to-noise ratio (SNR)
in terms of the geometries of the modular XL-array, including the total array
size and module separation, as well as the user's location. The asymptotic SNR
scaling law is revealed as the size of modular array goes to infinity.
Furthermore, we show that the developed modelling and performance analysis
include the existing results for collocated XL-array or far-field UPW
assumption as special cases. Numerical results demonstrate the importance of
near-field modelling for modular XL-array communications since it leads to
significantly different results from the conventional far-field UPW modelling.
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 15:39:41 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Li",
"Xinrui",
""
],
[
"Lu",
"Haiquan",
""
],
[
"Zeng",
"Yong",
""
],
[
"Jin",
"Shi",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.995302 |
2201.09208
|
I-Hsi Kao
|
I-Hsi Kao, Ya-Zhu Yian, Jian-An Su, Yi-Horng Lai, Jau-Woei Perng,
Tung-Li Hsieh, Yi-Shueh Tsai, and Min-Shiu Hsieh
|
Design of Sensor Fusion Driver Assistance System for Active Pedestrian
Safety
|
The 14th International Conference on Automation Technology
(Automation 2017), December 8-10, 2017, Kaohsiung, Taiwan
| null | null | null |
cs.CV eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a parallel architecture for a sensor fusion
detection system that combines a camera and 1D light detection and ranging
(lidar) sensor for object detection. The system contains two object detection
methods, one based on optical flow and the other using lidar. The two
sensors can effectively compensate for each other's weaknesses. The accurate
longitudinal accuracy of the object's location and its lateral movement
information can be achieved simultaneously. Using a spatio-temporal alignment
and a policy of sensor fusion, we completed the development of a fusion
detection system with high reliability at distances of up to 20 m. Test results
show that the proposed system achieves a high level of accuracy for pedestrian
or object detection in front of a vehicle, and has high robustness to special
environments.
|
[
{
"version": "v1",
"created": "Sun, 23 Jan 2022 08:52:32 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Kao",
"I-Hsi",
""
],
[
"Yian",
"Ya-Zhu",
""
],
[
"Su",
"Jian-An",
""
],
[
"Lai",
"Yi-Horng",
""
],
[
"Perng",
"Jau-Woei",
""
],
[
"Hsieh",
"Tung-Li",
""
],
[
"Tsai",
"Yi-Shueh",
""
],
[
"Hsieh",
"Min-Shiu",
""
]
] |
new_dataset
| 0.961947 |
2201.09329
|
Olga Kononova
|
Zheren Wang, Kevin Cruse, Yuxing Fei, Ann Chia, Yan Zeng, Haoyan Huo,
Tanjin He, Bowen Deng, Olga Kononova and Gerbrand Ceder
|
ULSA: Unified Language of Synthesis Actions for Representation of
Synthesis Protocols
| null | null | null | null |
cs.LG cond-mat.mtrl-sci
|
http://creativecommons.org/licenses/by/4.0/
|
Applying AI power to predict syntheses of novel materials requires
high-quality, large-scale datasets. Extraction of synthesis information from
scientific publications is still challenging, especially for extracting
synthesis actions, because of the lack of a comprehensive labeled dataset using
a solid, robust, and well-established ontology for describing synthesis
procedures. In this work, we propose the first Unified Language of Synthesis
Actions (ULSA) for describing ceramics synthesis procedures. We created a
dataset of 3,040 synthesis procedures annotated by domain experts according to
the proposed ULSA scheme. To demonstrate the capabilities of ULSA, we built a
neural network-based model to map arbitrary ceramics synthesis paragraphs into
ULSA and used it to construct synthesis flowcharts for synthesis procedures.
Analysis of the flowcharts showed that (a) ULSA covers essential vocabulary
used by researchers when describing synthesis procedures and (b) it can capture
important features of synthesis protocols. This work is an important step
towards creating a synthesis ontology and a solid foundation for autonomous
robotic synthesis.
|
[
{
"version": "v1",
"created": "Sun, 23 Jan 2022 17:44:48 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Wang",
"Zheren",
""
],
[
"Cruse",
"Kevin",
""
],
[
"Fei",
"Yuxing",
""
],
[
"Chia",
"Ann",
""
],
[
"Zeng",
"Yan",
""
],
[
"Huo",
"Haoyan",
""
],
[
"He",
"Tanjin",
""
],
[
"Deng",
"Bowen",
""
],
[
"Kononova",
"Olga",
""
],
[
"Ceder",
"Gerbrand",
""
]
] |
new_dataset
| 0.999848 |
2201.09338
|
Yohan Beugin
|
Yohan Beugin, Quinn Burke, Blaine Hoak, Ryan Sheatsley, Eric Pauley,
Gang Tan, Syed Rafiul Hussain, Patrick McDaniel
|
Building a Privacy-Preserving Smart Camera System
|
Accepted to PETS (Privacy Enhancing Technologies Symposium) 2022
|
PoPETS (Proceedings on Privacy Enhancing Technologies Symposium)
2022
| null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Millions of consumers depend on smart camera systems to remotely monitor
their homes and businesses. However, the architecture and design of popular
commercial systems require users to relinquish control of their data to
untrusted third parties, such as service providers (e.g., the cloud). Third
parties can therefore access (and in some instances have accessed) the video
footage without the users' knowledge or consent -- violating the core tenet of user
privacy. In this paper, we present CaCTUs, a privacy-preserving smart Camera
system Controlled Totally by Users. CaCTUs returns control to the user; the
root of trust begins with the user and is maintained through a series of
cryptographic protocols, designed to support popular features, such as sharing,
deleting, and viewing videos live. We show that the system can support live
streaming with a latency of 2s at a frame rate of 10fps and a resolution of
480p. In so doing, we demonstrate that it is feasible to implement a performant
smart-camera system that leverages the convenience of a cloud-based model while
retaining the ability to control access to (private) data.
|
[
{
"version": "v1",
"created": "Sun, 23 Jan 2022 18:26:35 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Beugin",
"Yohan",
""
],
[
"Burke",
"Quinn",
""
],
[
"Hoak",
"Blaine",
""
],
[
"Sheatsley",
"Ryan",
""
],
[
"Pauley",
"Eric",
""
],
[
"Tan",
"Gang",
""
],
[
"Hussain",
"Syed Rafiul",
""
],
[
"McDaniel",
"Patrick",
""
]
] |
new_dataset
| 0.983793 |
2201.09410
|
Yi Geng
|
Yi Geng
|
Map-Assisted Material Identification at 100 GHz and Above Using Radio
Access Technology
|
Submitted to EUCNC & 6G Summit 2022, 6 pages
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The inclusion of material identification in wireless communication systems is
an emerging area that offers many opportunities for 6G systems. By using
reflected radio waves to determine the material of a reflecting surface, not
only can the performance of 6G networks be improved, but also some exciting
applications can be developed. In this paper, we recap a few prior methods for
material identification, then analyze the impact of thickness of reflecting
surface on reflection coefficient and present a new concept "settling
thickness", which indicates the minimum thickness of reflecting surface to
induce steady reflection coefficient. Finally, we propose a novel material
identification method based on ray-tracing and 3D-map. Compared to some prior
methods that can be implemented in single-bounce-reflection scenarios only, we
extend the capability of the method to multiple-bounce-reflection scenarios.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 01:38:55 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Geng",
"Yi",
""
]
] |
new_dataset
| 0.96294 |
2201.09448
|
Ankit Kulshrestha
|
Ankit Kulshrestha, Vishwas Lele
|
Cobol2Vec: Learning Representations of Cobol code
|
Initial draft
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
There has been steadily growing interest in the development of novel methods to
learn representations of given input data and subsequently use them for
several downstream tasks. The field of natural language processing has seen a
significant improvement in different tasks by incorporating pre-trained
embeddings into their pipelines. Recently, these methods have been applied to
programming languages with a view to improve developer productivity. In this
paper, we present an unsupervised learning approach to encode old mainframe
languages into a fixed dimensional vector space. We use COBOL as our motivating
example and create a corpus and demonstrate the efficacy of our approach in a
code-retrieval task on our corpus.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 04:27:35 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Kulshrestha",
"Ankit",
""
],
[
"Lele",
"Vishwas",
""
]
] |
new_dataset
| 0.951527 |
2201.09514
|
Aman Rangapur
|
Aman Rangapur, Tarun Kanakam, Ajith Jubilson
|
DDoSDet: An approach to Detect DDoS attacks using Neural Networks
|
6 figures, 2 tables, 10 pages
| null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Cyber-attacks have been one of the deadliest attacks in today's world. One of
them is DDoS (Distributed Denial of Service). It is a cyber-attack in which
the attacker makes a network or a machine unavailable to its
intended users temporarily or indefinitely, interrupting services of the host
that are connected to a network. To define it in simple terms, it is an attack
accomplished by flooding the target machine with unnecessary requests in an
attempt to overload and make the systems crash and make the users unable to use
that network or a machine. In this research paper, we present the detection of
DDoS attacks using neural networks, that would flag malicious and legitimate
data flow, preventing network performance degradation. We compared and assessed
our suggested system against current models in the field. Our proposed approach
achieved an accuracy of 99.7\%.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 08:16:16 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Rangapur",
"Aman",
""
],
[
"Kanakam",
"Tarun",
""
],
[
"Jubilson",
"Ajith",
""
]
] |
new_dataset
| 0.998655 |
2201.09521
|
Simon Vandevelde
|
Simon Vandevelde and Joost Vennekens
|
Problife: a Probabilistic Game of Life
|
This paper was presented at BNAIC 2021
| null | null | null |
cs.AI nlin.CG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a probabilistic extension of the well-known cellular
automaton, Game of Life. In Game of Life, cells are placed in a grid and then
watched as they evolve throughout subsequent generations, as dictated by the
rules of the game. In our extension, called ProbLife, these rules now have
probabilities associated with them. Instead of cells being either dead or
alive, they are denoted by their chance to live. After presenting the rules of
ProbLife and its underlying characteristics, we show a concrete implementation
in ProbLog, a probabilistic logic programming system. We use this to generate
different images, as a form of rule-based generative art.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 08:29:00 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Vandevelde",
"Simon",
""
],
[
"Vennekens",
"Joost",
""
]
] |
new_dataset
| 0.999619 |
2201.09652
|
Zeyu Mi
|
Jiahao Chen, Dingji Li, Zeyu Mi, Yuxuan Liu, Binyu Zang, Haibing Guan,
Haibo Chen
|
DuVisor: a User-level Hypervisor Through Delegated Virtualization
|
17 pages, 9 figures
| null | null | null |
cs.OS cs.AR cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Today's mainstream virtualization systems comprise two cooperative
components: a kernel-resident driver that accesses virtualization hardware and
a user-level helper process that provides VM management and I/O virtualization.
However, this virtualization architecture has intrinsic issues in both security
(a large attack surface) and performance. While there is a long thread of work
trying to minimize the kernel-resident driver by offloading functions to user
mode, they face a fundamental tradeoff between security and performance: more
offloading may reduce the kernel attack surface, yet increase the runtime ring
crossings between the helper process and the driver, and thus incur higher
performance cost.
This paper explores a new design called delegated virtualization, which
completely separates the control plane (the kernel driver) from the data plane
(the helper process) and thus eliminates the kernel driver from runtime
intervention. The resulting user-level hypervisor, called DuVisor, can handle
all VM operations without trapping into the kernel once the kernel driver has
done the initialization. DuVisor retrofits existing hardware virtualization
support with a new delegated virtualization extension to directly handle VM
exits, configure virtualization registers, manage the stage-2 page table and
virtual devices in user mode. We have implemented the hardware extension on an
open-source RISC-V CPU and built a Rust-based hypervisor atop the hardware.
Evaluation on FireSim shows that DuVisor outperforms KVM by up to 47.96\% in a
variety of real-world applications and significantly reduces the attack
surface.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 13:17:51 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Chen",
"Jiahao",
""
],
[
"Li",
"Dingji",
""
],
[
"Mi",
"Zeyu",
""
],
[
"Liu",
"Yuxuan",
""
],
[
"Zang",
"Binyu",
""
],
[
"Guan",
"Haibing",
""
],
[
"Chen",
"Haibo",
""
]
] |
new_dataset
| 0.999324 |
2201.09863
|
Jagdeep Bhatia S
|
Jagdeep Singh Bhatia, Holly Jackson, Yunsheng Tian, Jie Xu, Wojciech
Matusik
|
Evolution Gym: A Large-Scale Benchmark for Evolving Soft Robots
|
Accepted to NeurIPS 2021, Website with documentation is available at
https://evolutiongym.github.io/
| null | null | null |
cs.RO cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Both the design and control of a robot play equally important roles in its
task performance. However, while optimal control is well studied in the machine
learning and robotics community, less attention is placed on finding the
optimal robot design. This is mainly because co-optimizing design and control
in robotics is characterized as a challenging problem, and more importantly, a
comprehensive evaluation benchmark for co-optimization does not exist. In this
paper, we propose Evolution Gym, the first large-scale benchmark for
co-optimizing the design and control of soft robots. In our benchmark, each
robot is composed of different types of voxels (e.g., soft, rigid, actuators),
resulting in a modular and expressive robot design space. Our benchmark
environments span a wide range of tasks, including locomotion on various types
of terrains and manipulation. Furthermore, we develop several robot
co-evolution algorithms by combining state-of-the-art design optimization
methods and deep reinforcement learning techniques. Evaluating the algorithms
on our benchmark platform, we observe robots exhibiting increasingly complex
behaviors as evolution progresses, with the best evolved designs solving many
of our proposed tasks. Additionally, even though robot designs are evolved
autonomously from scratch without prior knowledge, they often grow to resemble
existing natural creatures while outperforming hand-designed robots.
Nevertheless, all tested algorithms fail to find robots that succeed in our
hardest environments. This suggests that more advanced algorithms are required
to explore the high-dimensional design space and evolve increasingly
intelligent robots -- an area of research in which we hope Evolution Gym will
accelerate progress. Our website with code, environments, documentation, and
tutorials is available at http://evogym.csail.mit.edu.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 18:39:22 GMT"
}
] | 2022-01-25T00:00:00 |
[
[
"Bhatia",
"Jagdeep Singh",
""
],
[
"Jackson",
"Holly",
""
],
[
"Tian",
"Yunsheng",
""
],
[
"Xu",
"Jie",
""
],
[
"Matusik",
"Wojciech",
""
]
] |
new_dataset
| 0.999535 |
1906.11721
|
Parwat Singh Anjana
|
Shrey Baheti, Parwat Singh Anjana, Sathya Peri, Yogesh Simmhan
|
DiPETrans: A Framework for Distributed Parallel Execution of
Transactions of Blocks in Blockchain
|
38 Pages, 25 Figures, and 6 Tables
|
2022
|
10.1002/cpe.6804
|
cpe.6804
|
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Contemporary blockchains such as Bitcoin and Ethereum execute transactions
serially by miners and validators and determine the Proof-of-Work (PoW). Such
serial execution is unable to exploit modern multi-core resources efficiently,
hence limiting the system throughput and increasing the transaction acceptance
latency. The objective of this work is to increase the transaction throughput
by introducing parallel transaction execution using a static analysis
technique. We propose a framework DiPETrans for the distributed execution of
the transactions in a block. Here, peers in the blockchain network form a
community to execute the transactions and find the PoW in parallel, using a
leader-follower approach. During mining, the leader statically analyzes the
transactions, creates different groups (shards) of independent transactions,
and distributes them to followers to execute them in parallel. After the
transaction executes, the community's compute power is utilized to solve the
PoW concurrently. When a block is successfully created, the leader broadcasts
the proposed block to other peers in the network for validation. On receiving a
block, validators re-execute the block transactions and accept the block if
they reach the same state as shared by the miner. Validation can also be done
as a community, in parallel, following the same leader-follower approach as
mining. We report experiments using over 5 Million real transactions from the
Ethereum blockchain and execute them using our DiPETrans framework to
empirically validate the benefits of our techniques over traditional sequential
execution. We achieve a maximum speedup of 2.2x for the miner and 2.0x for the
validator, with 100 to 500 transactions per block. Further, we achieve a peak
of 5x end-to-end block creation speedup using a parallel miner over a serial
miner when using 6 machines in the community.
|
[
{
"version": "v1",
"created": "Thu, 27 Jun 2019 15:09:11 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Jun 2019 14:30:45 GMT"
},
{
"version": "v3",
"created": "Sun, 27 Sep 2020 04:56:48 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Oct 2020 16:21:43 GMT"
},
{
"version": "v5",
"created": "Sat, 13 Mar 2021 06:00:04 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Baheti",
"Shrey",
""
],
[
"Anjana",
"Parwat Singh",
""
],
[
"Peri",
"Sathya",
""
],
[
"Simmhan",
"Yogesh",
""
]
] |
new_dataset
| 0.994493 |
2102.08026
|
Nabil Ibtehaz
|
Nabil Ibtehaz, Muhammad E. H. Chowdhury, Amith Khandakar, Serkan
Kiranyaz, M. Sohel Rahman, Anas Tahir, Yazan Qiblawey, and Tawsifur Rahman
|
EDITH :ECG biometrics aided by Deep learning for reliable Individual
auTHentication
|
Preprint
| null |
10.1109/TETCI.2021.3131374
| null |
cs.LG cs.AI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, physiological signal-based authentication has shown great
promise, owing to its inherent robustness against forgery. Electrocardiogram (ECG)
signal, being the most widely studied biosignal, has also received the highest
level of attention in this regard. It has been proven with numerous studies
that by analyzing ECG signals from different persons, it is possible to
identify them with acceptable accuracy. In this work, we present EDITH, a
deep learning-based framework for ECG biometric authentication.
Moreover, we hypothesize and demonstrate that Siamese architectures can be used
over typical distance metrics for improved performance. We have evaluated EDITH
using 4 commonly used datasets and outperformed prior works using fewer
beats. EDITH performs competitively using just a single heartbeat
(96-99.75% accuracy) and can be further enhanced by fusing multiple beats (100%
accuracy from 3 to 6 beats). Furthermore, the proposed Siamese architecture
manages to reduce the identity verification Equal Error Rate (EER) to 1.29%. A
limited case study of EDITH with real-world experimental data also suggests its
potential as a practical authentication system.
|
[
{
"version": "v1",
"created": "Tue, 16 Feb 2021 08:45:17 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Nov 2021 01:37:03 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Ibtehaz",
"Nabil",
""
],
[
"Chowdhury",
"Muhammad E. H.",
""
],
[
"Khandakar",
"Amith",
""
],
[
"Kiranyaz",
"Serkan",
""
],
[
"Rahman",
"M. Sohel",
""
],
[
"Tahir",
"Anas",
""
],
[
"Qiblawey",
"Yazan",
""
],
[
"Rahman",
"Tawsifur",
""
]
] |
new_dataset
| 0.976116 |
2104.10889
|
Takuma Kogo
|
Takuma Kogo, Kei Takaya, Hiroyuki Oyama
|
Fast MILP-based Task and Motion Planning for Pick-and-Place with
Hard/Soft Constraints of Collision-Free Route
|
IEEE International Conference on Systems, Man, and Cybernetics (SMC)
2021 - accepted
| null |
10.1109/SMC52423.2021.9659097
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present new models of optimization-based task and motion planning (TAMP)
for robotic pick-and-place (P&P), which plan action sequences and motion
trajectory with low computational costs. We improved an existing
state-of-the-art TAMP model integrated with the collision avoidance, which is
formulated as a mixed-integer linear programming (MILP) problem. To enable the
MILP solver to search for solutions efficiently, we introduced two approaches
leveraging features of collision avoidance in robotic P&P. The first approach
reduces the number of binary variables, which are related to the collision
avoidance of delivery objects, by reformulating them as continuous variables
with additional hard constraints. These hard constraints maintain consistency
by conditionally propagating binary values, which are related to the carry
action state and collision avoidance of robots, to the reformulated continuous
variables. The second approach is more aware of the branch-and-bound method,
which is the fundamental algorithm of modern MILP solvers. This approach guides
the MILP solver to find integer solutions with shallower branching by adding a
soft constraint, which softly restricts a robot's routes around delivery
objects. We demonstrate the effectiveness of the proposed approaches with a
modern MILP solver.
|
[
{
"version": "v1",
"created": "Thu, 22 Apr 2021 06:29:58 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Sep 2021 05:05:57 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Kogo",
"Takuma",
""
],
[
"Takaya",
"Kei",
""
],
[
"Oyama",
"Hiroyuki",
""
]
] |
new_dataset
| 0.994881 |