id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2208.04204
|
Yu-Ki Lee
|
Yu-Ki Lee, Yue Hao, Zhonghua Xi, Woongbae Kim, Youngmin Park, Kyu-Jin
Cho, Jyh-Ming Lien, In-Suk Choi
|
Origami-based Zygote structure enables pluripotent shape-transforming
deployable structure
| null | null | null | null |
cs.CE cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an algorithmic framework of a pluripotent structure evolving from
a simple compact structure into diverse complex 3-D structures for designing
shape-transformable, reconfigurable, and deployable structures and robots.
Our algorithmic approach suggests a way of transforming a compact structure
consisting of uniform building blocks into a large, desired 3-D shape.
Analogous to the pluripotent stem cells that can grow into a preprogrammed
shape according to coded information, which we call DNA, compactly stacked
panels named the zygote structure can evolve into arbitrary 3-D structures by
programming their connection path. Our stacking algorithm obtains this coded
sequence by inversely stacking the voxelized surface of the desired structure
into a tree. Applying the connection path obtained by the stacking algorithm,
the compactly stacked panels named the zygote structure can be deployed into
diverse large 3-D structures. We conceptually demonstrated our pluripotent
evolving structure using energy-releasing commercial spring hinges and
thermally actuated shape memory alloy (SMA) hinges. We also show that the
proposed concept enables the fabrication of large structures in a significantly
smaller workspace.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 15:09:52 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Lee",
"Yu-Ki",
""
],
[
"Hao",
"Yue",
""
],
[
"Xi",
"Zhonghua",
""
],
[
"Kim",
"Woongbae",
""
],
[
"Park",
"Youngmin",
""
],
[
"Cho",
"Kyu-Jin",
""
],
[
"Lien",
"Jyh-Ming",
""
],
[
"Choi",
"In-Suk",
""
]
] |
new_dataset
| 0.999579 |
2208.04223
|
Nicolas Garneau
|
Jean-Thomas Baillargeon and Nicolas Garneau
|
Beer2Vec: Extracting Flavors from Reviews for Thirst-Quenching
Recommendations
| null | null | null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces the Beer2Vec model that allows the most popular
alcoholic beverage in the world to be encoded into vectors enabling flavorful
recommendations. We present our algorithm using a unique dataset focused on the
analysis of craft beers. We thoroughly explain how we encode the flavors and
how useful, from an empirical point of view, the beer vectors are to generate
meaningful recommendations. We also present three different ways to use
Beer2Vec in a real-world environment to enlighten the pool of craft beer
consumers. Finally, we make our model and functionalities available to
everybody through a web application.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 13:33:23 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Baillargeon",
"Jean-Thomas",
""
],
[
"Garneau",
"Nicolas",
""
]
] |
new_dataset
| 0.997255 |
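The Beer2Vec record above describes a word2vec-style embedding of flavors extracted from review text. As a rough sketch of that general idea (the reviews, tokenization, and hyperparameters below are hypothetical assumptions; the paper's actual pipeline is not given in the abstract), such a model can be trained with gensim:

```python
# Word2vec-style flavor embedding in the spirit of the Beer2Vec abstract.
# The tiny corpus and all hyperparameters are illustrative assumptions.
from gensim.models import Word2Vec

reviews = [
    ["citrus", "hoppy", "bitter", "ipa"],
    ["roasted", "coffee", "chocolate", "stout"],
    ["citrus", "tropical", "juicy", "ipa"],
]

# Small skip-gram model over tokenized reviews.
model = Word2Vec(sentences=reviews, vector_size=16, window=2,
                 min_count=1, sg=1, epochs=50, seed=0)

# A beer could then be represented by aggregating the vectors of the flavor
# words in its reviews, and recommendations made by cosine similarity.
print(model.wv.most_similar("citrus", topn=2))
```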
2208.04231
|
Mahdi Nikdast
|
Ebadollah Taheri and Sudeep Pasricha and Mahdi Nikdast
|
ReSiPI: A Reconfigurable Silicon-Photonic 2.5D Chiplet Network with PCMs
for Energy-Efficient Interposer Communication
|
This paper is accepted and will appear in IEEE/ACM ICCAD 2022
proceedings
| null | null | null |
cs.AR cs.ET physics.optics
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
2.5D chiplet systems have been proposed to improve the low manufacturing
yield of large-scale chips. However, connecting the chiplets through an
electronic interposer imposes a high traffic load on the interposer network.
Silicon photonics technology has shown great promise towards handling a high
volume of traffic with low latency in intra-chip network-on-chip (NoC) fabrics.
Although recent advances in silicon photonic devices have extended photonic
NoCs to enable high bandwidth communication in 2.5D chiplet systems, such
interposer-based photonic networks still suffer from high power consumption. In
this work, we design and analyze a novel Reconfigurable power-efficient and
congestion-aware Silicon Photonic 2.5D Interposer network, called ReSiPI.
Considering run-time traffic, ReSiPI is able to dynamically deploy
inter-chiplet photonic gateways to alleviate the overall network congestion.
ReSiPI also employs switching elements based on phase change materials (PCMs)
to dynamically reconfigure and power-gate the photonic interposer network,
thereby improving the network power efficiency. Compared to the best prior
state-of-the-art 2.5D photonic network, ReSiPI demonstrates, on average, 37%
lower latency, 25% power reduction, and 53% energy minimization in the network.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 16:00:37 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Taheri",
"Ebadollah",
""
],
[
"Pasricha",
"Sudeep",
""
],
[
"Nikdast",
"Mahdi",
""
]
] |
new_dataset
| 0.9994 |
2208.04243
|
Dat Quoc Nguyen
|
Linh The Nguyen, Nguyen Luong Tran, Long Doan, Manh Luong, Dat Quoc
Nguyen
|
A High-Quality and Large-Scale Dataset for English-Vietnamese Speech
Translation
|
In Proceedings of INTERSPEECH 2022, to appear. The first three
authors contributed equally to this work
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a high-quality and large-scale benchmark dataset
for English-Vietnamese speech translation with 508 audio hours, consisting of
331K triplets of (sentence-lengthed audio, English source transcript sentence,
Vietnamese target subtitle sentence). We also conduct empirical experiments
using strong baselines and find that the traditional "Cascaded" approach still
outperforms the modern "End-to-End" approach. To the best of our knowledge,
this is the first large-scale English-Vietnamese speech translation study. We
hope both our publicly available dataset and study can serve as a starting
point for future research and applications on English-Vietnamese speech
translation. Our dataset is available at https://github.com/VinAIResearch/PhoST
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 16:11:26 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Nguyen",
"Linh The",
""
],
[
"Tran",
"Nguyen Luong",
""
],
[
"Doan",
"Long",
""
],
[
"Luong",
"Manh",
""
],
[
"Nguyen",
"Dat Quoc",
""
]
] |
new_dataset
| 0.999799 |
2108.07707
|
Cheng Zhang
|
Cheng Zhang, Arthur Azevedo de Amorim, Marco Gaboardi
|
On Incorrectness Logic and Kleene Algebra with Top and Tests
| null |
Proc. ACM Program. Lang. 6, POPL, Article 29 (January 2022), 30
pages (2022)
| null | null |
cs.PL cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Kleene algebra with tests (KAT) is a foundational equational framework for
reasoning about programs, which has found applications in program
transformations, networking and compiler optimizations, among many other areas.
In his seminal work, Kozen proved that KAT subsumes propositional Hoare logic,
showing that one can reason about the (partial) correctness of while programs
by means of the equational theory of KAT. In this work, we investigate instead
the support that KAT provides for reasoning about incorrectness, as embodied
by O'Hearn's recently proposed incorrectness logic. We show that KAT
cannot directly express incorrectness logic. The main reason for this
limitation can be traced to the fact that KAT cannot express explicitly the
notion of codomain, which is essential to express incorrectness triples. To
address this issue, we study Kleene Algebra with Top and Tests (TopKAT), an
extension of KAT with a top element. We show that TopKAT is powerful enough to
express a codomain operation, to express incorrectness triples, and to prove
all the rules of incorrectness logic sound. This shows that one can reason
about the incorrectness of while-like programs by means of the equational
theory of TopKAT.
|
[
{
"version": "v1",
"created": "Tue, 17 Aug 2021 15:50:21 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Nov 2021 03:12:07 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Feb 2022 18:51:54 GMT"
},
{
"version": "v4",
"created": "Thu, 4 Aug 2022 20:16:12 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Zhang",
"Cheng",
""
],
[
"de Amorim",
"Arthur Azevedo",
""
],
[
"Gaboardi",
"Marco",
""
]
] |
new_dataset
| 0.996636 |
2108.09376
|
Thomas Verelst
|
Thomas Verelst, Tinne Tuytelaars
|
BlockCopy: High-Resolution Video Processing with Block-Sparse Feature
Propagation and Online Policies
|
Accepted for International Conference on Computer Vision (ICCV 2021)
| null |
10.1109/ICCV48922.2021.00511
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose BlockCopy, a scheme that accelerates pretrained
frame-based CNNs to process video more efficiently, compared to standard
frame-by-frame processing. To this end, a lightweight policy network determines
important regions in an image, and operations are applied on selected regions
only, using custom block-sparse convolutions. Features of non-selected regions
are simply copied from the preceding frame, reducing the number of computations
and latency. The execution policy is trained using reinforcement learning in an
online fashion without requiring ground truth annotations. Our universal
framework is demonstrated on dense prediction tasks such as pedestrian
detection, instance segmentation and semantic segmentation, using both
state-of-the-art (Center and Scale Predictor, MGAN, SwiftNet) and standard baseline
networks (Mask-RCNN, DeepLabV3+). BlockCopy achieves significant FLOPS savings
and inference speedup with minimal impact on accuracy.
|
[
{
"version": "v1",
"created": "Fri, 20 Aug 2021 21:16:01 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Aug 2022 14:21:05 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Verelst",
"Thomas",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] |
new_dataset
| 0.995643 |
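The BlockCopy record above hinges on one operation: features of non-selected blocks are copied from the preceding frame, and only policy-selected blocks are recomputed. A minimal sketch of that propagation step, assuming hypothetical feature shapes, an 8x8 block grid, and a random stand-in for the learned policy:

```python
# Block-sparse feature propagation as described in the BlockCopy abstract:
# reuse previous-frame features wherever the policy did not select a block.
# Shapes, block size, and the random "policy" are illustrative assumptions.
import numpy as np

def propagate(prev_feat, new_feat, block_mask, block=8):
    """Take new_feat inside selected blocks, copy prev_feat elsewhere.
    Feature maps are (C, H, W); block_mask is (H//block, W//block) bool."""
    mask = np.kron(block_mask.astype(np.uint8),
                   np.ones((block, block), dtype=np.uint8)).astype(bool)
    return np.where(mask[None, :, :], new_feat, prev_feat)

C, H, W = 4, 32, 32
prev_feat = np.zeros((C, H, W))
new_feat = np.ones((C, H, W))
rng = np.random.default_rng(0)
block_mask = rng.random((H // 8, W // 8)) > 0.5  # stand-in policy output

out = propagate(prev_feat, new_feat, block_mask)
print("fraction of pixels recomputed:", out.mean())
```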
2109.12709
|
Elena Ivanova
|
Elena Alexander, Kam W. Leong, and Andrew F. Laine
|
Automated Multi-Process CTC Detection using Deep Learning
| null | null | null | null |
cs.CV q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Circulating Tumor Cells (CTCs) bear great promise as biomarkers in tumor
prognosis. However, the process of identification and later enumeration of CTCs
require manual labor, which is error-prone and time-consuming. The recent
developments in object detection via Deep Learning using Mask-RCNNs and wider
availability of pre-trained models have enabled sensitive tasks with limited
data to be tackled with unprecedented accuracy. In this report, we
present a novel 3-stage detection model for automated identification of
Circulating Tumor Cells in multi-channel darkfield microscopic images,
comprising: RetinaNet-based identification of Cytokeratin (CK) stains, Mask-RCNN-based
cell detection of DAPI cell nuclei and Otsu thresholding to detect CD-45s. The
training dataset is composed of 46 high variance data points, with 10 Negative
and 36 Positive data points. The test set is composed of 420 negative data
points. The final accuracy of the pipeline is 98.81%.
|
[
{
"version": "v1",
"created": "Sun, 26 Sep 2021 21:56:34 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Alexander",
"Elena",
""
],
[
"Leong",
"Kam W.",
""
],
[
"Laine",
"Andrew F.",
""
]
] |
new_dataset
| 0.974379 |
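The third stage of the pipeline in the record above is Otsu thresholding for CD-45 detection. A minimal sketch of that stage with scikit-image, run on a synthetic stand-in for a darkfield channel (the real channels and any post-processing are not specified in the abstract):

```python
# Otsu-thresholding stage sketched after the CTC-detection abstract above.
# The synthetic image stands in for a real CD-45 darkfield channel.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
channel = rng.normal(0.2, 0.05, (64, 64))  # dim background
channel[20:30, 20:30] += 0.6               # one bright cell-like region

t = threshold_otsu(channel)                # global Otsu threshold
cd45_mask = channel > t
print(f"threshold={t:.3f}, positive pixels={int(cd45_mask.sum())}")
```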
2110.12320
|
Keval Morabia
|
Anurendra Kumar, Keval Morabia, Jingjin Wang, Kevin Chen-Chuan Chang,
Alexander Schwing
|
CoVA: Context-aware Visual Attention for Webpage Information Extraction
|
11 Pages, 6 Figures, 3 Tables
| null |
10.18653/v1/2022.ecnlp-1.11
| null |
cs.CV cs.AI cs.CL cs.HC cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Webpage information extraction (WIE) is an important step to create knowledge
bases. For this, classical WIE methods leverage the Document Object Model (DOM)
tree of a website. However, use of the DOM tree poses significant challenges as
context and appearance are encoded in an abstract manner. To address this
challenge we propose to reformulate WIE as a context-aware Webpage Object
Detection task. Specifically, we develop a Context-aware Visual Attention-based
(CoVA) detection pipeline which combines appearance features with syntactical
structure from the DOM tree. To study the approach we collect a new large-scale
dataset of e-commerce websites for which we manually annotate every web element
with four labels: product price, product title, product image and background.
On this dataset we show that the proposed CoVA approach is a new challenging
baseline which improves upon prior state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 24 Oct 2021 00:21:46 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Kumar",
"Anurendra",
""
],
[
"Morabia",
"Keval",
""
],
[
"Wang",
"Jingjin",
""
],
[
"Chang",
"Kevin Chen-Chuan",
""
],
[
"Schwing",
"Alexander",
""
]
] |
new_dataset
| 0.999335 |
2205.02866
|
Hongyu Li
|
Hongyu Li, Shanpu Shen, and Bruno Clerckx
|
Beyond Diagonal Reconfigurable Intelligent Surfaces: From Transmitting
and Reflecting Modes to Single-, Group-, and Fully-Connected Architectures
|
14 pages, 11 figures, 2 tables, submitted to Transactions on Wireless
Communications
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surfaces (RISs) are envisioned as a promising
technology for future wireless communications. With various hardware
realizations, RISs can work under different modes
(reflective/transmissive/hybrid) or have different architectures
(single/group/fully-connected). However, most existing research focused on
single-connected reflective RISs, mathematically characterized by diagonal
phase shift matrices, while there is a lack of a comprehensive study for RISs
unifying different modes/architectures. In this paper, we solve this issue by
analyzing and proposing a general RIS-aided communication model. Specifically,
we establish an RIS model not limited to diagonal phase shift matrices, a novel
branch referred to as beyond diagonal RIS (BD-RIS), unifying modes and
architectures. With the proposed model, we develop efficient algorithms to
jointly design the transmit precoder and the BD-RIS matrix to maximize the sum-rate for
RIS-aided systems. We also provide simulation results to compare the
performance of BD-RISs with different modes/architectures. Simulation results
show that under the same mode, fully- and group-connected RIS can effectively
increase the sum-rate performance compared with single-connected RIS, and that
hybrid RIS outperforms reflective/transmissive RIS with the same architecture.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 18:03:47 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Aug 2022 10:31:26 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Li",
"Hongyu",
""
],
[
"Shen",
"Shanpu",
""
],
[
"Clerckx",
"Bruno",
""
]
] |
new_dataset
| 0.950169 |
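The record above contrasts single-connected RISs, modeled by diagonal phase-shift matrices, with group- and fully-connected BD-RIS architectures whose matrices go beyond diagonal. A sketch of what those matrices look like, assuming a unitary (lossless) constraint per block; the paper's exact circuit-theoretic constraints may differ:

```python
# Diagonal vs. block-diagonal RIS matrices, following the taxonomy in the
# BD-RIS abstract above. Sizes and the per-block unitary constraint are
# illustrative assumptions, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)
n, g = 8, 4  # n RIS elements, group size g

# Single-connected: diagonal matrix of unit-modulus phase shifts.
Theta_single = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n)))

# Group-connected: block-diagonal with a random unitary block per group
# (fully-connected is the special case of a single n x n block).
Theta_group = np.zeros((n, n), dtype=complex)
for i in range(n // g):
    z = rng.normal(size=(g, g)) + 1j * rng.normal(size=(g, g))
    q, _ = np.linalg.qr(z)  # orthonormalize -> unitary block
    Theta_group[i * g:(i + 1) * g, i * g:(i + 1) * g] = q

# Both satisfy Theta^H Theta = I (no power dissipated by the surface).
print(np.allclose(Theta_group.conj().T @ Theta_group, np.eye(n)))
```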
2206.08355
|
Ang Cao
|
Ang Cao, Chris Rockwell, Justin Johnson
|
FWD: Real-time Novel View Synthesis with Forward Warping and Depth
|
CVPR 2022. Project website https://caoang327.github.io/FWD/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Novel view synthesis (NVS) is a challenging task requiring systems to
generate photorealistic images of scenes from new viewpoints, where both
quality and speed are important for applications. Previous image-based
rendering (IBR) methods are fast, but have poor quality when input views are
sparse. Recent Neural Radiance Fields (NeRF) and generalizable variants give
impressive results but are not real-time. In our paper, we propose a
generalizable NVS method with sparse inputs, called FWD, which gives
high-quality synthesis in real-time. With explicit depth and differentiable
rendering, it achieves results competitive with the SOTA methods, with a
130-1000x speedup and better perceptual quality. If available, we can seamlessly
integrate sensor depth during either training or inference to improve image
quality while retaining real-time speed. With the growing prevalence of depth
sensors, we hope that methods making use of depth will become increasingly
useful.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 17:56:48 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jun 2022 03:05:16 GMT"
},
{
"version": "v3",
"created": "Fri, 5 Aug 2022 11:32:01 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Cao",
"Ang",
""
],
[
"Rockwell",
"Chris",
""
],
[
"Johnson",
"Justin",
""
]
] |
new_dataset
| 0.975692 |
2206.12037
|
Albert Gu
|
Albert Gu, Isys Johnson, Aman Timalsina, Atri Rudra, Christopher Ré
|
How to Train Your HiPPO: State Space Models with Generalized Orthogonal
Basis Projections
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear time-invariant state space models (SSM) are a classical model from
engineering and statistics, that have recently been shown to be very promising
in machine learning through the Structured State Space sequence model (S4). A
core component of S4 involves initializing the SSM state matrix to a particular
matrix called a HiPPO matrix, which was empirically important for S4's ability
to handle long sequences. However, the specific matrix that S4 uses was
actually derived in previous work for a particular time-varying dynamical
system, and the use of this matrix as a time-invariant SSM had no known
mathematical interpretation. Consequently, the theoretical mechanism by which
S4 models long-range dependencies actually remains unexplained. We derive a
more general and intuitive formulation of the HiPPO framework, which provides a
simple mathematical interpretation of S4 as a decomposition onto
exponentially-warped Legendre polynomials, explaining its ability to capture
long dependencies. Our generalization introduces a theoretically rich class of
SSMs that also lets us derive more intuitive S4 variants for other bases such
as the Fourier basis, and explains other aspects of training S4, such as how to
initialize the important timescale parameter. These insights improve S4's
performance to 86% on the Long Range Arena benchmark, with 96% on the most
difficult Path-X task.
|
[
{
"version": "v1",
"created": "Fri, 24 Jun 2022 02:24:41 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Aug 2022 17:35:04 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Gu",
"Albert",
""
],
[
"Johnson",
"Isys",
""
],
[
"Timalsina",
"Aman",
""
],
[
"Rudra",
"Atri",
""
],
[
"Ré",
"Christopher",
""
]
] |
new_dataset
| 0.995852 |
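The record above turns on the HiPPO matrix used to initialize the S4 state matrix. For reference, a sketch of the HiPPO-LegS matrix as defined in the earlier HiPPO work that S4 builds on (sign and scaling conventions vary between papers, so treat this as illustrative):

```python
# HiPPO-LegS state matrix, per the original HiPPO paper that the S4
# abstract above refers to, under the convention x'(t) = -A x(t) + B u(t).
import numpy as np

def hippo_legs(N):
    A = np.zeros((N, N))
    for n in range(N):
        for k in range(N):
            if n > k:
                A[n, k] = np.sqrt((2 * n + 1) * (2 * k + 1))
            elif n == k:
                A[n, k] = n + 1
    B = np.sqrt(2 * np.arange(N) + 1.0)
    return A, B

A, B = hippo_legs(4)
print(A)  # lower-triangular; S4 initializes its SSM with (a variant of) -A
```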
2207.01696
|
Chuan Guo
|
Chuan Guo, Xinxin Zuo, Sen Wang, Li Cheng
|
TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of
3D Human Motions and Texts
|
Accepted to ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspired by the strong ties between vision and language, the two intimate
human sensing and communication modalities, our paper aims to explore the
generation of 3D human full-body motions from texts, as well as its reciprocal
task, shorthanded as text2motion and motion2text, respectively. To tackle the
existing challenges, especially to enable the generation of multiple distinct
motions from the same text, and to avoid the undesirable production of trivial
motionless pose sequences, we propose the use of the motion token, a discrete
and compact motion representation. This provides a level playing field for
motion and text signals, which are handled as motion and text tokens,
respectively. Moreover, our motion2text module is integrated into the inverse
alignment process of our text2motion training pipeline, where a significant
deviation of synthesized text from the input text would be penalized by a large
training loss; empirically this is shown to effectively improve performance.
Finally, the mappings in-between the two modalities of motions and texts are
facilitated by adapting the neural model for machine translation (NMT) to our
context. This autoregressive modeling of the distribution over discrete motion
tokens further enables non-deterministic production of pose sequences, of
variable lengths, from an input text. Our approach is flexible and can be used
for both text2motion and motion2text tasks. Empirical evaluations on two
benchmark datasets demonstrate the superior performance of our approach on both
tasks over a variety of state-of-the-art methods. Project page:
https://ericguo5513.github.io/TM2T/
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 19:52:18 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 18:31:20 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Guo",
"Chuan",
""
],
[
"Zuo",
"Xinxin",
""
],
[
"Wang",
"Sen",
""
],
[
"Cheng",
"Li",
""
]
] |
new_dataset
| 0.999416 |
2207.11938
|
Jiezhang Cao
|
Jiezhang Cao, Jingyun Liang, Kai Zhang, Yawei Li, Yulun Zhang, Wenguan
Wang, Luc Van Gool
|
Reference-based Image Super-Resolution with Deformable Attention
Transformer
|
ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reference-based image super-resolution (RefSR) aims to exploit auxiliary
reference (Ref) images to super-resolve low-resolution (LR) images. Recently,
RefSR has been attracting great attention as it provides an alternative way to
surpass single image SR. However, addressing the RefSR problem has two critical
challenges: (i) It is difficult to match the correspondence between LR and Ref
images when they are significantly different; (ii) How to transfer the relevant
texture from Ref images to compensate the details for LR images is very
challenging. To address these issues of RefSR, this paper proposes a deformable
attention Transformer, namely DATSR, with multiple scales, each of which
consists of a texture feature encoder (TFE) module, a reference-based
deformable attention (RDA) module and a residual feature aggregation (RFA)
module. Specifically, TFE first extracts image transformation (e.g.,
brightness) insensitive features for LR and Ref images, RDA then can exploit
multiple relevant textures to compensate more information for LR features, and
RFA lastly aggregates LR features and relevant textures to get a more visually
pleasant result. Extensive experiments demonstrate that our DATSR achieves
state-of-the-art performance on benchmark datasets quantitatively and
qualitatively.
|
[
{
"version": "v1",
"created": "Mon, 25 Jul 2022 07:07:00 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 23:06:18 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Cao",
"Jiezhang",
""
],
[
"Liang",
"Jingyun",
""
],
[
"Zhang",
"Kai",
""
],
[
"Li",
"Yawei",
""
],
[
"Zhang",
"Yulun",
""
],
[
"Wang",
"Wenguan",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.960466 |
2208.02884
|
Nicolaas Kaashoek
|
Nicolaas Kaashoek and Robert Morris
|
CheckSync: Using Runtime-Integrated Checkpoints to Achieve High
Availability
|
14 pages, 6 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CheckSync provides applications with high availability via runtime-integrated
checkpointing. This allows CheckSync to take checkpoints of a process running
in a memory-managed language (Go, for now), which can be resumed on another
machine after a failure. CheckSync uses the runtime to checkpoint only the
process' live memory, doing without requiring significant changes to
applications.
CheckSync maintains the ease of use provided by virtual machines for the
applications it supports without requiring that an entire virtual machine image
be snapshotted. Because CheckSync captures only the memory used by an
application, it produces checkpoints that are smaller (by an order of
magnitude) than virtual machine snapshots if the memory footprint of the
application is relatively small compared to the state of the rest of the
operating system. Additionally, when running go-cache, a popular in-memory
key/value store, CheckSync reduces throughput by only 12% compared to the 78%
throughput loss when using go-cache's snapshot functionality, the 45% loss when
using CRIU, and the 68% loss when using virtual machine live migration.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 20:53:50 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Kaashoek",
"Nicolaas",
""
],
[
"Morris",
"Robert",
""
]
] |
new_dataset
| 0.998921 |
2208.02920
|
Franck Cassez
|
Franck Cassez and Joanne Fuller and Horacio Mijail Anton Quiles
|
Deductive Verification of Smart Contracts with Dafny
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a methodology to develop verified smart contracts. We write smart
contracts, their specifications and implementations in the
verification-friendly language Dafny. In our methodology the ability to write
specifications, implementations and to reason about correctness is a primary
concern. We propose a simple, concise yet powerful solution to reasoning about
contracts that have external calls. This includes arbitrary re-entrancy which
is a major source of bugs and attacks in smart contracts. Although we do not
yet have a compiler from Dafny to EVM bytecode, the results we obtain on the
Dafny code can reasonably be assumed to hold on Solidity code: the translation
of the Dafny code to Solidity is straightforward. As a result our approach can
readily be used to develop and deploy safer contracts.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 22:48:30 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Cassez",
"Franck",
""
],
[
"Fuller",
"Joanne",
""
],
[
"Quiles",
"Horacio Mijail Anton",
""
]
] |
new_dataset
| 0.997695 |
2208.03030
|
Bingning Wang Dr.
|
Bingning Wang, Feiyang Lv, Ting Yao, Yiming Yuan, Jin Ma, Yu Luo and
Haijin Liang
|
ChiQA: A Large Scale Image-based Real-World Question Answering Dataset
for Multi-Modal Understanding
|
CIKM2022 camera ready version
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Visual question answering is an important task in both natural language and
vision understanding. However, in most public visual question answering
datasets, such as VQA and CLEVR, the questions are human-generated and
specific to the given image, such as `What color are her eyes?'. These
human-generated crowdsourced questions are relatively simple and sometimes
biased toward certain entities or attributes. In this paper, we introduce a
new image-based question answering dataset, ChiQA. It contains real-world
queries issued by internet users, combined with several related open-domain
images. The system should determine whether an image can answer the question
or not. Different from previous VQA datasets, the questions are real-world,
image-independent queries that are more varied and unbiased. Compared with
previous image-retrieval or image-captioning datasets, ChiQA measures not
only relatedness but also answerability, which demands more fine-grained
vision and language reasoning. ChiQA contains more than 40K questions and
more than 200K question-image pairs. A three-level 2/1/0 label is assigned to
each pair, indicating a perfect answer, a partial answer, or irrelevance.
Data analysis shows ChiQA requires a deep understanding of both
language and vision, including grounding, comparisons, and reading. We evaluate
several state-of-the-art visual-language models such as ALBEF, demonstrating
that there is still large room for improvement on ChiQA.
|
[
{
"version": "v1",
"created": "Fri, 5 Aug 2022 07:55:28 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Wang",
"Bingning",
""
],
[
"Lv",
"Feiyang",
""
],
[
"Yao",
"Ting",
""
],
[
"Yuan",
"Yiming",
""
],
[
"Ma",
"Jin",
""
],
[
"Luo",
"Yu",
""
],
[
"Liang",
"Haijin",
""
]
] |
new_dataset
| 0.995823 |
2208.03092
|
EPTCS
|
Marco Alberti (Dipartimento di Matematica e Informatica, University of
Ferrara), Riccardo Zese (Dipartimento di Scienze Chimiche, Farmaceutiche ed
Agrarie, University of Ferrara), Fabrizio Riguzzi (Dipartimento di Matematica
e Informatica, University of Ferrara), Evelina Lamma (Dipartimento di
Ingegneria, University of Ferrara)
|
An Iterative Fixpoint Semantics for MKNF Hybrid Knowledge Bases with
Function Symbols
|
In Proceedings ICLP 2022, arXiv:2208.02685
|
EPTCS 364, 2022, pp. 65-78
|
10.4204/EPTCS.364.7
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Hybrid Knowledge Bases based on Lifschitz's logic of Minimal Knowledge with
Negation as Failure are a successful approach to combine the expressivity of
Description Logics and Logic Programming in a single language. Their syntax,
defined by Motik and Rosati, disallows function symbols. In order to define a
well-founded semantics for MKNF HKBs, Knorr et al. define a partition of the
modal atoms occurring in the HKB, called the alternating fixpoint partition.
In this paper, we propose an iterated fixpoint semantics for HKBs with
function symbols. We prove that our semantics extends Knorr et al.'s, in
that, for a function-free HKB, it coincides with its alternating fixpoint
partition. The proposed semantics lends itself well to a probabilistic
extension with a distribution semantics approach, which is the subject of
future work.
|
[
{
"version": "v1",
"created": "Fri, 5 Aug 2022 10:49:02 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Alberti",
"Marco",
"",
"Dipartimento di Matematica e Informatica, University of\n Ferrara"
],
[
"Zese",
"Riccardo",
"",
"Dipartimento di Scienze Chimiche, Farmaceutiche ed\n Agrarie, University of Ferrara"
],
[
"Riguzzi",
"Fabrizio",
"",
"Dipartimento di Matematica\n e Informatica, University of Ferrara"
],
[
"Lamma",
"Evelina",
"",
"Dipartimento di\n Ingegneria, University of Ferrara"
]
] |
new_dataset
| 0.990022 |
2208.03110
|
Iurii Medvedev
|
Iurii Medvedev, Farhad Shadmand, Nuno Gonçalves
|
MorDeephy: Face Morphing Detection Via Fused Classification
|
10 pages, 5 figures, 4 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Face morphing attack detection (MAD) is one of the most challenging tasks in
the field of face recognition nowadays. In this work, we introduce a novel deep
learning strategy for single-image face morphing detection, which implies the
discrimination of morphed face images along with a sophisticated face
recognition task in a complex classification scheme. It is directed toward
learning deep facial features, which carry information about the
authenticity of these features. Our work also introduces several additional
contributions: the public and easy-to-use face morphing detection benchmark and
the results of our wild datasets filtering strategy. Our method, which we call
MorDeephy, achieved state-of-the-art performance and demonstrated a
prominent ability to generalise the task of morphing detection to unseen
scenarios.
|
[
{
"version": "v1",
"created": "Fri, 5 Aug 2022 11:39:22 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Medvedev",
"Iurii",
""
],
[
"Shadmand",
"Farhad",
""
],
[
"Gonçalves",
"Nuno",
""
]
] |
new_dataset
| 0.999435 |
2208.03130
|
Richard Marcus
|
Richard Marcus, Niklas Knoop, Bernhard Egger and Marc Stamminger
|
A Lightweight Machine Learning Pipeline for LiDAR-simulation
|
Conference: DeLTA 22; ISBN 978-989-758-584-5; ISSN 2184-9277;
publisher: SciTePress, organization: INSTICC
|
Proceedings of the 3rd International Conference on Deep Learning
Theory and Applications - DeLTA, 2022, pages 176-183
|
10.5220/0011309100003277
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Virtual testing is a crucial task to ensure safety in autonomous driving, and
sensor simulation is an important task in this domain. Most current LiDAR
simulations are very simplistic and are mainly used to perform initial tests,
while the majority of insights are gathered on the road. In this paper, we
propose a lightweight approach for more realistic LiDAR simulation that learns
a real sensor's behavior from test drive data and transforms this to the
virtual domain. The central idea is to cast the simulation into an
image-to-image translation problem. We train our pix2pix based architecture on
two real world data sets, namely the popular KITTI data set and the Audi
Autonomous Driving Dataset, which provide both RGB and LiDAR images. We apply
this network on synthetic renderings and show that it generalizes sufficiently
from real images to simulated images. This strategy makes it possible to skip the
sensor-specific, expensive and complex LiDAR physics simulation in our
synthetic world and avoids oversimplification and a large domain-gap through
the clean synthetic environment.
|
[
{
"version": "v1",
"created": "Fri, 5 Aug 2022 12:45:53 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Marcus",
"Richard",
""
],
[
"Knoop",
"Niklas",
""
],
[
"Egger",
"Bernhard",
""
],
[
"Stamminger",
"Marc",
""
]
] |
new_dataset
| 0.992066 |
2208.03138
|
Aidan Boyd
|
Aidan Boyd, Daniel Moreira, Andrey Kuehlkamp, Kevin Bowyer, Adam
Czajka
|
Human Saliency-Driven Patch-based Matching for Interpretable Post-mortem
Iris Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Forensic iris recognition, as opposed to live iris recognition, is an
emerging research area that leverages the discriminative power of iris
biometrics to aid human examiners in their efforts to identify deceased
persons. As a machine learning-based technique in a predominantly
human-controlled task, forensic recognition serves as "back-up" to human
expertise in the task of post-mortem identification. As such, the machine
learning model must be (a) interpretable, and (b) post-mortem-specific, to
account for changes in decaying eye tissue. In this work, we propose a method
that satisfies both requirements, and that approaches the creation of a
post-mortem-specific feature extractor in a novel way employing human
perception. We first train a deep learning-based feature detector on
post-mortem iris images, using annotations of image regions highlighted by
humans as salient for their decision making. In effect, the method learns
interpretable features directly from humans, rather than purely data-driven
features. Second, regional iris codes (again, with human-driven filtering
kernels) are used to pair detected iris patches, which are translated into
pairwise, patch-based comparison scores. In this way, our method presents human
examiners with human-understandable visual cues in order to justify the
identification decision and corresponding confidence score. When tested on a
dataset of post-mortem iris images collected from 259 deceased subjects, the
proposed method places among the three best iris matchers, demonstrating better
results than the commercial (non-human-interpretable) VeriEye approach. We
propose a unique post-mortem iris recognition method trained with human
saliency to give fully-interpretable comparison outcomes for use in the context
of forensic examination, achieving state-of-the-art recognition performance.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 19:40:44 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Boyd",
"Aidan",
""
],
[
"Moreira",
"Daniel",
""
],
[
"Kuehlkamp",
"Andrey",
""
],
[
"Bowyer",
"Kevin",
""
],
[
"Czajka",
"Adam",
""
]
] |
new_dataset
| 0.990288 |
2208.03142
|
Vadim Borisov
|
Michael Gröger and Vadim Borisov and Gjergji Kasneci
|
BoxShrink: From Bounding Boxes to Segmentation Masks
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the core challenges facing the medical image computing community is
fast and efficient data sample labeling. Obtaining fine-grained labels for
segmentation is particularly demanding since it is expensive, time-consuming,
and requires sophisticated tools. On the contrary, applying bounding boxes is
fast and takes significantly less time than fine-grained labeling, but does not
produce detailed results. In response, we propose a novel framework for
weakly-supervised tasks with the rapid and robust transformation of bounding
boxes into segmentation masks without training any machine learning model,
coined BoxShrink. The proposed framework comes in two variants -
rapid-BoxShrink for fast label transformations, and robust-BoxShrink for more
precise label transformations. An average of four percent improvement in IoU is
found across several models when trained using BoxShrink in a
weakly-supervised setting, compared to using only bounding box annotations as
inputs on a colonoscopy image data set. We open-sourced the code for the
proposed framework and published it online.
|
[
{
"version": "v1",
"created": "Fri, 5 Aug 2022 13:07:51 GMT"
}
] | 2022-08-08T00:00:00 |
[
[
"Gröger",
"Michael",
""
],
[
"Borisov",
"Vadim",
""
],
[
"Kasneci",
"Gjergji",
""
]
] |
new_dataset
| 0.999215 |
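The BoxShrink record above reports an average four-percent IoU improvement from tightening box labels into masks. A small synthetic illustration of the metric at play; the "shrunk" mask here is a hypothetical stand-in for BoxShrink's output, not its algorithm:

```python
# Why shrinking a loose box label toward the object raises IoU, shown on
# synthetic masks. The "shrunk" label is a made-up stand-in.
import numpy as np

def iou(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

gt = np.zeros((64, 64), dtype=bool)
gt[24:40, 24:40] = True        # ground-truth object

box = np.zeros_like(gt)
box[16:48, 16:48] = True       # loose bounding-box pseudo-label

shrunk = np.zeros_like(gt)
shrunk[22:42, 22:42] = True    # hypothetical tightened pseudo-label

print(f"box IoU:    {iou(box, gt):.2f}")
print(f"shrunk IoU: {iou(shrunk, gt):.2f}")
```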
2007.00558
|
Daniel Berjón
|
Pablo Carballeira, Carlos Carmona, César Díaz, Daniel Berjón,
Daniel Corregidor, Julián Cabrera, Francisco Morán, Carmen Doblado,
Sergio Arnaldo, María del Mar Martín, Narciso García
|
FVV Live: A real-time free-viewpoint video system with consumer
electronics hardware
| null | null |
10.1109/TMM.2021.3079711
| null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
FVV Live is a novel end-to-end free-viewpoint video system, designed for low
cost and real-time operation, based on off-the-shelf components. The system has
been designed to yield high-quality free-viewpoint video using consumer-grade
cameras and hardware, which enables low deployment costs and easy installation
for immersive event-broadcasting or videoconferencing.
The paper describes the architecture of the system, including acquisition and
encoding of multiview plus depth data in several capture servers and virtual
view synthesis on an edge server. All the blocks of the system have been
designed to overcome the limitations imposed by hardware and network, which
impact directly on the accuracy of depth data and thus on the quality of
virtual view synthesis. The design of FVV Live allows for an arbitrary number
of cameras and capture servers, and the results presented in this paper
correspond to an implementation with nine stereo-based depth cameras.
FVV Live presents low motion-to-photon and end-to-end delays, which enables
seamless free-viewpoint navigation and bilateral immersive communications.
Moreover, the visual quality of FVV Live has been assessed through subjective
assessment with satisfactory results, and additional comparative tests show
that it is preferred over state-of-the-art DIBR alternatives.
|
[
{
"version": "v1",
"created": "Wed, 1 Jul 2020 15:40:28 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Carballeira",
"Pablo",
""
],
[
"Carmona",
"Carlos",
""
],
[
"Díaz",
"César",
""
],
[
"Berjón",
"Daniel",
""
],
[
"Corregidor",
"Daniel",
""
],
[
"Cabrera",
"Julián",
""
],
[
"Morán",
"Francisco",
""
],
[
"Doblado",
"Carmen",
""
],
[
"Arnaldo",
"Sergio",
""
],
[
"Martín",
"María del Mar",
""
],
[
"García",
"Narciso",
""
]
] |
new_dataset
| 0.999148 |
2009.01228
|
John Christian
|
John A. Christian, Harm Derksen, and Ryan Watkins
|
Lunar Crater Identification in Digital Images
| null | null |
10.1007/s40295-021-00287-8
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
It is often necessary to identify a pattern of observed craters in a single
image of the lunar surface and without any prior knowledge of the camera's
location. This so-called "lost-in-space" crater identification problem is
common in both crater-based terrain relative navigation (TRN) and in automatic
registration of scientific imagery. Past work on crater identification has
largely been based on heuristic schemes, with poor performance outside of a
narrowly defined operating regime (e.g., nadir pointing images, small search
areas). This work provides the first mathematically rigorous treatment of the
general crater identification problem. It is shown when it is (and when it is
not) possible to recognize a pattern of elliptical crater rims in an image
formed by perspective projection. For the cases when it is possible to
recognize a pattern, descriptors are developed using invariant theory that
provably capture all of the viewpoint invariant information. These descriptors
may be pre-computed for known crater patterns and placed in a searchable index
for fast recognition. New techniques are also developed for computing pose from
crater rim observations and for evaluating crater rim correspondences. These
techniques are demonstrated on both synthetic and real images.
|
[
{
"version": "v1",
"created": "Wed, 2 Sep 2020 17:59:51 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Sep 2020 16:25:05 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Christian",
"John A.",
""
],
[
"Derksen",
"Harm",
""
],
[
"Watkins",
"Ryan",
""
]
] |
new_dataset
| 0.996252 |
2104.04637
|
Abdelhaliem Babiker
|
Abdelhaliem Babiker
|
New Quantum-Safe Versions of Decisional Diffie-Hellman Assumption in the
General Linear Group and Their Applications: Two New Key-agreements
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Diffie-Hellman key-agreement and RSA cryptosystem are widely used to provide
security in internet protocols. But both algorithms are totally
breakable using Shor's algorithms. This paper proposes two connected
matrix-based key-agreements: (a) Diffie-Hellman Key-Agreement with Errors and
(b) RSA-Resemble Key-agreement, which, respectively, bear resemblance to
Diffie-Hellman key-agreement and RSA cryptosystem and thereby they gain some of
the well-known security characteristics of these two algorithms, but without
being subject to Shor's algorithms attacks. That is, the new schemes avoid the
direct reliance on the hardness of Discrete Logarithm and Integer Factoring
problems which are solvable by Shor's algorithms. The paper introduces a new
family of quantum-safe hardness assumptions which consist of taking noisy
powers of binary matrices. The new assumptions are derived from the
Decisional Diffie-Hellman (DDH) assumption in the general linear group
GL(n,2) by introducing random noise into a quadruple similar to the one that
defines the DDH assumption in GL(n,2). Thereby we make certain that the
resulting quadruple is secure against Shor's algorithm attack and any other
DLP-based attack. The resulting assumptions are then used as the basis for
the two key-agreement schemes. We prove that these key-agreements are secure,
in the key-indistinguishability notion, under the new assumptions.
|
[
{
"version": "v1",
"created": "Fri, 9 Apr 2021 23:15:23 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Jun 2021 17:35:07 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Jun 2021 13:33:47 GMT"
},
{
"version": "v4",
"created": "Thu, 4 Aug 2022 01:08:12 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Babiker",
"Abdelhaliem",
""
]
] |
new_dataset
| 0.98312 |
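The record above works with noisy powers of binary matrices in GL(n,2). As a toy illustration of that algebraic setting only: without the paper's added noise, matrix powers give a plain DLP-style exchange, which the abstract itself notes is breakable, and all parameters here are made up:

```python
# Toy Diffie-Hellman-style exchange with powers of a matrix over GF(2),
# illustrating only the GL(n,2) setting named in the abstract above.
# It omits the paper's noise, so it is NOT the proposed (quantum-safe)
# scheme and is not secure.
import numpy as np

def matpow_gf2(M, e):
    """Square-and-multiply matrix exponentiation, arithmetic modulo 2."""
    R = np.eye(M.shape[0], dtype=np.int64)
    M = M % 2
    while e:
        if e & 1:
            R = (R @ M) % 2
        M = (M @ M) % 2
        e >>= 1
    return R

rng = np.random.default_rng(0)
n = 8
while True:  # sample a random invertible base matrix in GL(n, 2)
    G = rng.integers(0, 2, (n, n), dtype=np.int64)
    if round(abs(np.linalg.det(G))) % 2 == 1:  # odd det <=> invertible mod 2
        break

a, b = 12345, 67890  # the two parties' secret exponents
A, B = matpow_gf2(G, a), matpow_gf2(G, b)
print(np.array_equal(matpow_gf2(B, a), matpow_gf2(A, b)))  # shared G^(ab)
```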
2109.07652
|
Hanjia Lyu
|
Yangxin Fan, Hanjia Lyu, Jin Xiao, Jiebo Luo
|
American Twitter Users Revealed Social Determinants-related Oral Health
Disparities amid the COVID-19 Pandemic
|
Accepted for publication in Quintessence International
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objectives: To assess self-reported population oral health conditions amid
the COVID-19 pandemic using user reports on Twitter. Method and Material: We
collected oral health-related tweets during the COVID-19 pandemic from 9,104
Twitter users across 26 states (with sufficient samples) in the United States
between November 12, 2020 and June 14, 2021. We inferred user demographics by
leveraging the visual information from the user profile images. Other
characteristics including income, population density, poverty rate, health
insurance coverage rate, community water fluoridation rate, and relative change
in the number of daily confirmed COVID-19 cases were acquired or inferred based
on retrieved information from user profiles. We performed logistic regression
to examine whether discussions vary across user characteristics. Results:
Overall, 26.70% of the Twitter users discuss wisdom tooth pain/jaw hurt, 23.86%
tweet about dental service/cavity, 18.97% discuss chipped tooth/tooth break,
16.23% talk about dental pain, and the rest are about tooth decay/gum bleeding.
Women and younger adults (19-29) are more likely to talk about oral health
problems. Health insurance coverage rate is the most significant predictor in
logistic regression for topic prediction. Conclusion: Tweets inform social
disparities in oral health during the pandemic. For instance, people from
counties at a higher risk of COVID-19 talk more about tooth decay/gum bleeding
and chipped tooth/tooth break. Older adults, who are vulnerable to COVID-19,
are more likely to discuss dental pain. Topics of interest vary across user
characteristics. Through the lens of social media, our findings may provide
insights for oral health practitioners and policy makers.
|
[
{
"version": "v1",
"created": "Thu, 16 Sep 2021 01:10:06 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 17:04:14 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Fan",
"Yangxin",
""
],
[
"Lyu",
"Hanjia",
""
],
[
"Xiao",
"Jin",
""
],
[
"Luo",
"Jiebo",
""
]
] |
new_dataset
| 0.991536 |
2110.08633
|
Kabir Nagrecha
|
Kabir Nagrecha, Arun Kumar
|
Hydra: A System for Large Multi-Model Deep Learning
|
3 figures, 1 table, 11 pages including references
| null | null | null |
cs.DC cs.DB cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scaling up model depth and size is now a common approach to raise accuracy in
many deep learning (DL) applications, as evidenced by the widespread success of
multi-billion or even trillion parameter models in natural language processing
(NLP) research. Despite success in DL research and at major technology
companies, broader practical adoption of such large models among domain
scientists and businesses is still bottlenecked by GPU memory limits, high
training costs, and low GPU availability, even on public clouds. Model
selection needs further compound these resource challenges: users often need to
compare dozens of models with different hyper-parameters or neural
architectures to suit their specific task and dataset. In this paper, we
present Hydra, a system designed to tackle such challenges by enabling
out-of-the-box scaling for multi-large-model DL workloads on even commodity
GPUs in a resource-efficient manner. Hydra is the first approach to
holistically optimize the execution of multi-model workloads for large DL
models. We do this by adapting prior "model-parallel" execution schemes to work
with scalable parameter offloading across the memory hierarchy and further
hybridizing this approach with task-parallel job scheduling techniques. Hydra
decouples scalability of model parameters from parallelism of execution, thus
enabling DL users to train even a 6-billion parameter model on a single
commodity GPU. It also fully exploits the speedup potential of task parallelism
in multi-GPU setups, yielding near-linear strong scaling and making rigorous
model selection perhaps more practical for such models. We evaluate end-to-end
performance by fine-tuning GPT-2 for language modeling. We find that Hydra
offers between 50% and 100% higher training throughput than even the best
settings of state-of-the-art industrial frameworks such as DeepSpeed and GPipe
for multi-large-model training.
|
[
{
"version": "v1",
"created": "Sat, 16 Oct 2021 18:13:57 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Oct 2021 18:04:29 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Jan 2022 18:58:32 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Feb 2022 18:53:35 GMT"
},
{
"version": "v5",
"created": "Sat, 30 Apr 2022 00:31:09 GMT"
},
{
"version": "v6",
"created": "Fri, 3 Jun 2022 16:32:51 GMT"
},
{
"version": "v7",
"created": "Wed, 3 Aug 2022 18:50:20 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Nagrecha",
"Kabir",
""
],
[
"Kumar",
"Arun",
""
]
] |
new_dataset
| 0.990952 |
2111.15205
|
Berkant Düzgün
|
Berkant Düzgün, Aykut Çayır, Ferhat Demirkıran, Ceyda
Nur Kahya, Buket Gençaydın and Hasan Dağ
|
Benchmark Static API Call Datasets for Malware Family Classification
|
5 pages, 7 figures, 5 tables
| null | null | null |
cs.CR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, malware and malware incidents are increasing daily, even with
various antivirus systems and malware detection or classification
methodologies. Machine learning techniques have been the main focus of the
security experts to detect malware and determine their families. Many static,
dynamic, and hybrid techniques have been presented for that purpose. In this
study, the static analysis technique has been applied to malware samples to
extract API calls, which is one of the most used features in machine/deep
learning models as it represents the behavior of malware samples.
Since the rapid increase and continuous evolution of malware affect the
detection capacity of antivirus scanners, recent and updated datasets of
malicious software became necessary to overcome this drawback. This paper
introduces two new datasets: One with 14,616 samples obtained and compiled from
VirusShare and one with 9,795 samples from VirusSample. In addition, benchmark
results based on static API calls of malware samples are presented using
several machine and deep learning models on these datasets. We believe that
these two datasets and benchmark results enable researchers to test and
validate their methods and approaches in this field.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 08:31:16 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 10:10:15 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Düzgün",
"Berkant",
""
],
[
"Çayır",
"Aykut",
""
],
[
"Demirkıran",
"Ferhat",
""
],
[
"Kahya",
"Ceyda Nur",
""
],
[
"Gençaydın",
"Buket",
""
],
[
"Dağ",
"Hasan",
""
]
] |
new_dataset
| 0.999855 |
2205.04534
|
Mohammad Javad Amiri
|
Mohammad Javad Amiri, Chenyuan Wu, Divyakant Agrawal, Amr El Abbadi,
Boon Thau Loo, Mohammad Sadoghi
|
The Bedrock of Byzantine Fault Tolerance: A Unified Platform for BFT
Protocol Design and Implementation
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Byzantine Fault-Tolerant (BFT) protocols have recently been extensively used
by decentralized data management systems with non-trustworthy infrastructures,
e.g., permissioned blockchains. BFT protocols cover a broad spectrum of design
dimensions from infrastructure settings such as the communication topology, to
more technical features such as commitment strategy and even fundamental social
choice properties like order-fairness. The proliferation of different BFT
protocols has rendered it difficult to navigate the BFT landscape, let alone
determine the protocol that best meets application needs. This paper presents
Bedrock, a unified platform for BFT protocol design, analysis, implementation,
and experiments. Bedrock proposes a design space consisting of a set of design
choices capturing the trade-offs between different design space dimensions and
providing fundamentally new insights into the strengths and weaknesses of BFT
protocols. Bedrock enables users to analyze and experiment with BFT protocols
within the space of plausible choices, evolve current protocols to design new
ones, and even uncover previously unknown protocols. Our experimental results
demonstrate the capability of Bedrock to uniformly evaluate BFT protocols in
new ways that were not possible before due to the diverse assumptions made by
these protocols. The results validate Bedrock's ability to analyze and derive
BFT protocols.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 20:18:30 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 23:35:08 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Amiri",
"Mohammad Javad",
""
],
[
"Wu",
"Chenyuan",
""
],
[
"Agrawal",
"Divyakant",
""
],
[
"Abbadi",
"Amr El",
""
],
[
"Loo",
"Boon Thau",
""
],
[
"Sadoghi",
"Mohammad",
""
]
] |
new_dataset
| 0.99683 |
2207.11615
|
Jérémie Decouchant
|
Oğuzhan Ersoy and Jérémie Decouchant and Satwik Prabhu Kimble
and Stefanie Roos
|
SyncPCN/PSyncPCN: Payment Channel Networks without Blockchain Synchrony
|
Preprint of a paper accepted at the ACM conference on Advances in
Financial Technologies (AFT 2022)
| null | null | null |
cs.CR cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Payment channel networks (PCNs) enhance the scalability of blockchains by
allowing parties to conduct transactions off-chain, i.e, without broadcasting
every transaction to all blockchain participants. To conduct transactions, a
sender and a receiver can either establish a direct payment channel with a
funding blockchain transaction or leverage existing channels in a multi-hop
payment. The security of PCNs usually relies on the synchrony of the underlying
blockchain, i.e., evidence of misbehavior needs to be published on the
blockchain within a time limit. Alternative payment channel proposals that do
not require blockchain synchrony rely on quorum certificates and use a
committee to register the transactions of a channel. However, these proposals
do not support multi-hop payments, a limitation we aim to overcome. In this
paper, we demonstrate that it is in fact impossible to design a multi-hop
payment protocol with both network asynchrony and faulty channels, i.e.,
channels that may not correctly follow the protocol. We then detail two
committee-based multi-hop payment protocols that respectively assume
synchronous communications and possibly faulty channels, or asynchronous
communication and correct channels. The first protocol relies on possibly
faulty committees instead of the blockchain to resolve channel disputes, and
enforces privacy properties within a synchronous network. The second one relies
on committees that contain at most f faulty members out of 3f+1 and
successively delegate to each other the role of eventually completing a
multi-hop payment. We show that both protocols satisfy the security
requirements of a multi-hop payment and compare their communication complexity
and latency.
|
[
{
"version": "v1",
"created": "Sat, 23 Jul 2022 22:16:37 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 12:58:36 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Ersoy",
"Oğuzhan",
""
],
[
"Decouchant",
"Jérémie",
""
],
[
"Kimble",
"Satwik Prabhu",
""
],
[
"Roos",
"Stefanie",
""
]
] |
new_dataset
| 0.997141 |
2208.00467
|
Shohreh Deldari
|
Shohreh Deldari, Hao Xue, Aaqib Saeed, Daniel V. Smith, Flora D. Salim
|
COCOA: Cross Modality Contrastive Learning for Sensor Data
|
27 pages, 10 figures, 6 tables, Accepted with minor revision at IMWUT
Vol. 6 No. 3
| null |
10.1145/3550316
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-Supervised Learning (SSL) is a new paradigm for learning discriminative
representations without labelled data, and has reached results comparable or
even superior to supervised counterparts. Contrastive
Learning (CL) is one of the most well-known approaches in SSL that attempts to
learn general, informative representations of data. CL methods have been mostly
developed for applications in computer vision and natural language processing
where only a single sensor modality is used. A majority of pervasive computing
applications, however, exploit data from a range of different sensor
modalities. While existing CL methods are limited to learning from one or two
data sources, we propose COCOA (Cross mOdality COntrastive leArning), a
self-supervised model that employs a novel objective function to learn quality
representations from multisensor data by computing the cross-correlation
between different data modalities and minimizing the similarity between
irrelevant instances. We evaluate the effectiveness of COCOA against eight
recently introduced state-of-the-art self-supervised models, and two supervised
baselines across five public datasets. We show that COCOA achieves superior
classification performance to all other approaches. Also, COCOA is far more
label-efficient than the other baselines including the fully supervised model
using only one-tenth of available labelled data.
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 16:36:13 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 22:52:59 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Deldari",
"Shohreh",
""
],
[
"Xue",
"Hao",
""
],
[
"Saeed",
"Aaqib",
""
],
[
"Smith",
"Daniel V.",
""
],
[
"Salim",
"Flora D.",
""
]
] |
new_dataset
| 0.955759 |
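The COCOA record above describes pulling together embeddings of the same instance across sensor modalities while pushing apart irrelevant instances. A generic sketch of such a cross-modal contrastive objective (an NT-Xent-style stand-in, not COCOA's actual cross-correlation loss):

```python
# Generic cross-modality contrastive objective in the spirit of the COCOA
# abstract above: aligned pairs sit on the diagonal of the similarity
# matrix and act as positives; off-diagonal entries are negatives.
import numpy as np

def cross_modal_loss(za, zb, tau=0.1):
    """za, zb: (batch, dim) embeddings of the same batch in two modalities."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / tau                        # cross-modality similarities
    logsumexp = np.log(np.exp(sim).sum(axis=1))  # per-anchor normalizer
    return float(np.mean(logsumexp - np.diag(sim)))

rng = np.random.default_rng(0)
za = rng.normal(size=(8, 16))
zb = za + 0.1 * rng.normal(size=(8, 16))  # roughly aligned second modality
print(f"loss: {cross_modal_loss(za, zb):.3f}")
```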
2208.00928
|
Weijia Li
|
Weijia Li, Yawen Lai, Linning Xu, Yuanbo Xiangli, Jinhua Yu, Conghui
He, Gui-Song Xia, Dahua Lin
|
OmniCity: Omnipotent City Understanding with Multi-level and Multi-view
Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents OmniCity, a new dataset for omnipotent city understanding
from multi-level and multi-view images. More precisely, the OmniCity contains
multi-view satellite images as well as street-level panorama and mono-view
images, constituting over 100K pixel-wise annotated images that are
well-aligned and collected from 25K geo-locations in New York City. To
alleviate the substantial pixel-wise annotation efforts, we propose an
efficient street-view image annotation pipeline that leverages the existing
label maps of satellite view and the transformation relations between different
views (satellite, panorama, and mono-view). With the new OmniCity dataset, we
provide benchmarks for a variety of tasks including building footprint
extraction, height estimation, and building plane/instance/fine-grained
segmentation. Compared with the existing multi-level and multi-view benchmarks,
OmniCity contains a larger number of images with richer annotation types and
more views, provides more benchmark results of state-of-the-art models, and
introduces a novel task for fine-grained building instance segmentation on
street-level panorama images. Moreover, OmniCity provides new problem settings
for existing tasks, such as cross-view image matching, synthesis, segmentation,
detection, etc., and facilitates the development of new methods for large-scale
city understanding, reconstruction, and simulation. The OmniCity dataset as
well as the benchmarks will be available at
https://city-super.github.io/omnicity.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 15:19:25 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 08:03:12 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Li",
"Weijia",
""
],
[
"Lai",
"Yawen",
""
],
[
"Xu",
"Linning",
""
],
[
"Xiangli",
"Yuanbo",
""
],
[
"Yu",
"Jinhua",
""
],
[
"He",
"Conghui",
""
],
[
"Xia",
"Gui-Song",
""
],
[
"Lin",
"Dahua",
""
]
] |
new_dataset
| 0.962608 |
2208.01815
|
Leyang Cui
|
Shuming Shi, Enbo Zhao, Duyu Tang, Yan Wang, Piji Li, Wei Bi, Haiyun
Jiang, Guoping Huang, Leyang Cui, Xinting Huang, Cong Zhou, Yong Dai,
Dongyang Ma
|
Effidit: Your AI Writing Assistant
|
Technical report for Effidit. arXiv admin note: text overlap with
arXiv:2202.06417
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this technical report, we introduce Effidit (Efficient and Intelligent
Editing), a digital writing assistant that facilitates users to write
higher-quality text more efficiently by using artificial intelligence (AI)
technologies. Previous writing assistants typically provide the function of
error checking (to detect and correct spelling and grammatical errors) and
limited text-rewriting functionality. With the emergence of large-scale neural
language models, some systems support automatically completing a sentence or a
paragraph. In Effidit, we significantly expand the capabilities of a writing
assistant by providing functions in five categories: text completion, error
checking, text polishing, keywords to sentences (K2S), and cloud input methods
(cloud IME). In the text completion category, Effidit supports generation-based
sentence completion, retrieval-based sentence completion, and phrase
completion. In contrast, many other writing assistants so far only provide one
or two of the three functions. For text polishing, we have three functions:
(context-aware) phrase polishing, sentence paraphrasing, and sentence
expansion, whereas many other writing assistants often support one or two
functions in this category. The main contents of this report include major
modules of Effidit, methods for implementing these modules, and evaluation
results of some key methods.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 02:24:45 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 12:13:43 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Shi",
"Shuming",
""
],
[
"Zhao",
"Enbo",
""
],
[
"Tang",
"Duyu",
""
],
[
"Wang",
"Yan",
""
],
[
"Li",
"Piji",
""
],
[
"Bi",
"Wei",
""
],
[
"Jiang",
"Haiyun",
""
],
[
"Huang",
"Guoping",
""
],
[
"Cui",
"Leyang",
""
],
[
"Huang",
"Xinting",
""
],
[
"Zhou",
"Cong",
""
],
[
"Dai",
"Yong",
""
],
[
"Ma",
"Dongyang",
""
]
] |
new_dataset
| 0.999167 |
2208.02019
|
Hauck Huang
|
Ziping Yu, Hongbo Huang, Weijun Chen, Yongxin Su, Yahui Liu, Xiuying
Wang
|
YOLO-FaceV2: A Scale and Occlusion Aware Face Detector
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, face detection algorithms based on deep learning have made
great progress. These algorithms can be generally divided into two categories,
i.e. two-stage detector like Faster R-CNN and one-stage detector like YOLO.
Because of the better balance between accuracy and speed, one-stage detectors
have been widely used in many applications. In this paper, we propose a
real-time face detector based on the one-stage detector YOLOv5, named
YOLO-FaceV2. We design a Receptive Field Enhancement module called RFE to
enhance the receptive field of small faces, and use NWD Loss to compensate for
the sensitivity of IoU to the location deviation of tiny objects. For face
occlusion, we present an attention module named SEAM and introduce Repulsion
Loss to address it. Moreover, we use a weight function Slide to mitigate the
imbalance between easy and hard samples, and use the information of the
effective receptive field to design the anchor. The experimental results on the
WiderFace dataset show that our face detector outperforms YOLO and its variants
on all of the easy, medium, and hard subsets. Source code is available at
https://github.com/Krasjet-Yu/YOLO-FaceV2
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 12:40:00 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 16:29:08 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Yu",
"Ziping",
""
],
[
"Huang",
"Hongbo",
""
],
[
"Chen",
"Weijun",
""
],
[
"Su",
"Yongxin",
""
],
[
"Liu",
"Yahui",
""
],
[
"Wang",
"Xiuying",
""
]
] |
new_dataset
| 0.998778 |
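The NWD loss is only named above; the sketch below follows the normalized Gaussian Wasserstein distance commonly used for tiny-object detection (boxes modelled as 2-D Gaussians) and should be read as an assumption about the intended formulation, not the authors' code. The constant c is dataset-dependent.

```python
# Hedged sketch of a Normalized Wasserstein Distance (NWD) between two
# axis-aligned boxes modelled as Gaussians N([cx,cy], diag(w^2/4, h^2/4));
# an assumption about the intended loss, not the paper's exact code.
import math

def nwd(box_a, box_b, c: float = 12.8) -> float:
    """Boxes given as (cx, cy, w, h); c is a dataset-dependent constant."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Squared 2-Wasserstein distance between the two Gaussians.
    w2_sq = (ax - bx) ** 2 + (ay - by) ** 2 \
          + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2
    return math.exp(-math.sqrt(w2_sq) / c)

# Unlike IoU, NWD degrades smoothly when a tiny box is slightly shifted.
print(nwd((10, 10, 4, 4), (12, 10, 4, 4)))
nwd_loss = 1.0 - nwd((10, 10, 4, 4), (12, 10, 4, 4))
```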
2208.02148
|
Benyuan Sun
|
Benyuan Sun, Jin Dai, Zihao Liang, Congying Liu, Yi Yang, Bo Bai
|
GPPF: A General Perception Pre-training Framework via Sparsely Activated
Multi-Task Learning
|
22 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Pre-training over mixtured multi-task, multi-domain, and multi-modal data
remains an open challenge in vision perception pre-training. In this paper, we
propose GPPF, a General Perception Pre-training Framework, that pre-trains a
task-level dynamic network, which is composed of knowledge "legos" in each
layer, on labeled multi-task and multi-domain datasets. By inspecting humans'
innate ability to learn in complex environments, we recognize and transfer
three critical elements to deep networks: (1) simultaneous exposure to diverse
cross-task and cross-domain information in each batch. (2) partitioned
knowledge storage in separate lego units driven by knowledge sharing. (3)
sparse activation of a subset of lego units for both pre-training and
downstream tasks. Noteworthy, the joint training of disparate vision tasks is
non-trivial due to their differences in input shapes, loss functions, output
formats, data distributions, etc. Therefore, we innovatively develop a
plug-and-play multi-task training algorithm, which supports Single Iteration
Multiple Tasks (SIMT) concurrent training. SIMT lays the foundation of
pre-training with large-scale multi-task multi-domain datasets and proves
essential for stable training in our GPPF experiments. Excitingly, the
exhaustive experiments show that our GPPF-R50 model achieves significant
improvements of 2.5-5.8 over a strong baseline on the 8 pre-training tasks in
GPPF-15M and harvests a range of SOTAs over the 22 downstream tasks with
similar computation budgets. We also validate the generalization ability of
GPPF to SOTA vision transformers with consistent improvements. These solid
experimental results fully prove the effective knowledge learning, storing,
sharing, and transfer provided by our novel GPPF framework.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 15:34:35 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 04:39:23 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Sun",
"Benyuan",
""
],
[
"Dai",
"Jin",
""
],
[
"Liang",
"Zihao",
""
],
[
"Liu",
"Congying",
""
],
[
"Yang",
"Yi",
""
],
[
"Bai",
"Bo",
""
]
] |
new_dataset
| 0.999711 |
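A rough numpy sketch of what task-level sparse activation over shared "lego" units could look like within one layer; the unit count, masks, and task names are illustrative assumptions, not GPPF's actual routing:

```python
# Illustrative sketch of sparse task-level activation over shared units;
# names and routing are assumptions, not the GPPF implementation.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_UNITS = 16, 4
units = [rng.normal(scale=0.1, size=(DIM, DIM)) for _ in range(N_UNITS)]

# Each task activates only a sparse subset of the shared lego units.
TASK_MASKS = {
    "detection":      [1, 0, 1, 0],
    "segmentation":   [1, 1, 0, 0],
    "classification": [0, 0, 1, 1],
}

def layer_forward(x: np.ndarray, task: str) -> np.ndarray:
    out = np.zeros_like(x)
    for unit, on in zip(units, TASK_MASKS[task]):
        if on:                       # inactive units are skipped entirely
            out += x @ unit
    return np.maximum(out, 0.0)      # ReLU

x = rng.normal(size=(2, DIM))
print(layer_forward(x, "detection").shape)   # (2, 16)
```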
2208.02250
|
Xiao Zhang
|
Xiao Zhang, Hao Tan, Xuan Huang, Denghui Zhang, Keke Tang, Zhaoquan Gu
|
Adversarial Attacks on ASR Systems: An Overview
| null | null | null | null |
cs.SD cs.AI cs.CL cs.CR eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of hardware and algorithms, ASR (Automatic Speech
Recognition) systems have evolved significantly. As models become simpler,
development and deployment become easier, and ASR systems move ever closer to
our daily lives. On the one hand, we often use apps or APIs of ASR to generate
subtitles and record meetings. On the other hand, smart speakers and
self-driving cars rely on ASR systems to control AIoT devices. In the past few
years, there has been a great deal of work on adversarial example attacks
against ASR systems: by adding a small perturbation to the waveform, the
recognition result can be changed drastically. In this paper, we describe the
development of ASR systems, the different attack assumptions, and how to
evaluate these attacks. Next, we survey current work on adversarial example
attacks under the two attack assumptions: white-box attacks and black-box
attacks. Different from other surveys, we pay more attention to the layer of
the ASR system at which the waveform is perturbed, the relationships between
these attacks, and their implementation methods, and we discuss the
effectiveness of these works.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 06:46:42 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Zhang",
"Xiao",
""
],
[
"Tan",
"Hao",
""
],
[
"Huang",
"Xuan",
""
],
[
"Zhang",
"Denghui",
""
],
[
"Tang",
"Keke",
""
],
[
"Gu",
"Zhaoquan",
""
]
] |
new_dataset
| 0.996508 |
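To make the white-box perturbation idea concrete, a toy fast-gradient-sign step against a linear scorer is sketched below; real ASR attacks backpropagate through the full recognition pipeline, so this is only a schematic of the perturbation mechanics, with the waveform and model as synthetic placeholders:

```python
# Toy white-box attack sketch on a waveform: one fast-gradient-sign step
# against a linear scorer. Illustrates the perturbation idea only.
import numpy as np

rng = np.random.default_rng(0)
waveform = rng.uniform(-1, 1, size=16000)   # 1 s of fake 16 kHz audio
w = rng.normal(size=16000)                  # toy linear "model": score = w @ x

def fgsm_step(x, grad, eps=2e-3):
    """Move the input by eps in the sign of the score gradient."""
    return np.clip(x + eps * np.sign(grad), -1.0, 1.0)

grad = w                                    # d(w @ x)/dx = w for the toy model
adv = fgsm_step(waveform, grad)
print(float(w @ adv - w @ waveform))          # score shifts noticeably
print(float(np.max(np.abs(adv - waveform))))  # perturbation stays tiny
```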
2208.02286
|
Thomas Kahl
|
Thomas Kahl
|
On the homology language of HDA models of transition systems
|
17 pages
| null | null | null |
cs.FL math.AT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a transition system with an independence relation on the alphabet of
labels, one can associate with it a usually very large symmetric
higher-dimensional automaton. The purpose of this paper is to show that by
choosing an acyclic relation whose symmetric closure is the given independence
relation, it is possible to construct a much smaller nonsymmetric HDA with the
same homology language.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 18:08:45 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Kahl",
"Thomas",
""
]
] |
new_dataset
| 0.998366 |
2208.02330
|
Yuanyuan Tang
|
Yuanyuan Tang, Shuche Wang, Hao Lou, Ryan Gabrys, and Farzad Farnoud
|
Low-redundancy codes for correcting multiple short-duplication and edit
errors
|
21 pages. The paper has been submitted to IEEE Transaction on
Information Theory. Furthermore, the paper was presented in part at the
ISIT2021 and ISIT2022
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to its higher data density, longevity, energy efficiency, and ease of
generating copies, DNA is considered a promising storage technology for
satisfying future needs. However, a diverse set of errors including deletions,
insertions, duplications, and substitutions may arise in DNA at different
stages of data storage and retrieval. The current paper constructs
error-correcting codes for simultaneously correcting short (tandem)
duplications and at most $p$ edits, where a short duplication generates a copy
of a substring with length $\leq 3$ and inserts the copy following the original
substring, and an edit is a substitution, deletion, or insertion. Compared to
the state-of-the-art codes for duplications only, the proposed codes correct up
to $p$ edits (in addition to duplications) at the additional cost of roughly
$8p(\log_q n)(1+o(1))$ symbols of redundancy, thus achieving the same
asymptotic rate, where $q\ge 4$ is the alphabet size and $p$ is a constant.
Furthermore, the time complexities of both the encoding and decoding processes
are polynomial when $p$ is a constant with respect to the code length.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 20:13:18 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Tang",
"Yuanyuan",
""
],
[
"Wang",
"Shuche",
""
],
[
"Lou",
"Hao",
""
],
[
"Gabrys",
"Ryan",
""
],
[
"Farnoud",
"Farzad",
""
]
] |
new_dataset
| 0.984807 |
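A minimal simulator of the short tandem duplication channel described above (a substring of length at most 3 is copied in place right after itself) makes the error model concrete:

```python
# Minimal short tandem duplication channel: a substring of length <= 3
# is duplicated immediately after its original occurrence.
import random

def short_duplication(seq: str, rng: random.Random) -> str:
    k = rng.randint(1, 3)                  # duplication length <= 3
    i = rng.randint(0, len(seq) - k)       # start of the duplicated block
    return seq[: i + k] + seq[i : i + k] + seq[i + k :]

rng = random.Random(7)
s = "ACGTACGT"
for _ in range(3):
    s = short_duplication(s, rng)
    print(s)
```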
2208.02332
|
Nitpreet Bamra
|
Nitpreet Bamra, Vikram Voleti, Alexander Wong, Jason Deglint
|
Towards Generating Large Synthetic Phytoplankton Datasets for Efficient
Monitoring of Harmful Algal Blooms
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Climate change is increasing the frequency and severity of harmful algal
blooms (HABs), which cause significant fish deaths in aquaculture farms. This
contributes to ocean pollution and greenhouse gas (GHG) emissions since dead
fish are either dumped into the ocean or taken to landfills, which in turn
negatively impacts the climate. Currently, the standard method to enumerate
harmful algae and other phytoplankton is to manually observe and count them
under a microscope. This is a time-consuming, tedious and error-prone process,
resulting in compromised management decisions by farmers. Hence, automating
this process for quick and accurate HAB monitoring is extremely helpful.
However, this requires large and diverse datasets of phytoplankton images, and
such datasets are hard to produce quickly. In this work, we explore the
feasibility of generating novel high-resolution photorealistic synthetic
phytoplankton images, containing multiple species in the same image, given a
small dataset of real images. To this end, we employ Generative Adversarial
Networks (GANs) to generate synthetic images. We evaluate three different GAN
architectures: ProjectedGAN, FastGAN, and StyleGANv2 using standard image
quality metrics. We empirically show the generation of high-fidelity synthetic
phytoplankton images using a training dataset of only 961 real images. Thus,
this work demonstrates the ability of GANs to create large synthetic datasets
of phytoplankton from small training datasets, accomplishing a key step towards
sustainable systematic monitoring of harmful algal blooms.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 20:15:55 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Bamra",
"Nitpreet",
""
],
[
"Voleti",
"Vikram",
""
],
[
"Wong",
"Alexander",
""
],
[
"Deglint",
"Jason",
""
]
] |
new_dataset
| 0.997455 |
2208.02335
|
Finlay Macklon
|
Finlay Macklon, Mohammad Reza Taesiri, Markos Viggiato, Stefan
Antoszko, Natalia Romanova, Dale Paas, Cor-Paul Bezemer
|
Automatically Detecting Visual Bugs in HTML5 <canvas> Games
|
Accepted at ASE 2022 conference
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The HTML5 <canvas> is used to display high quality graphics in web
applications such as web games (i.e., <canvas> games). However, automatically
testing <canvas> games is not possible with existing web testing techniques and
tools, and manual testing is laborious. Many widely used web testing tools rely
on the Document Object Model (DOM) to drive web test automation, but the
contents of the <canvas> are not represented in the DOM. The main alternative
approach, snapshot testing, involves comparing oracle snapshot images with
test-time snapshot images using an image similarity metric to catch visual
bugs, i.e., bugs in the graphics of the web application. However, creating and
maintaining oracle snapshot images for <canvas> games is onerous, defeating the
purpose of test automation. In this paper, we present a novel approach to
automatically detect visual bugs in <canvas> games. By leveraging an internal
representation of objects on the <canvas>, we decompose snapshot images into a
set of object images, each of which is compared with a respective oracle asset
(e.g., a sprite) using four similarity metrics: percentage overlap, mean
squared error, structural similarity, and embedding similarity. We evaluate our
approach by injecting 24 visual bugs into a custom <canvas> game, and find that
our approach achieves an accuracy of 100%, compared to an accuracy of 44.6%
with traditional snapshot testing.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 20:27:18 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Macklon",
"Finlay",
""
],
[
"Taesiri",
"Mohammad Reza",
""
],
[
"Viggiato",
"Markos",
""
],
[
"Antoszko",
"Stefan",
""
],
[
"Romanova",
"Natalia",
""
],
[
"Paas",
"Dale",
""
],
[
"Bezemer",
"Cor-Paul",
""
]
] |
new_dataset
| 0.978406 |
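Two of the four similarity metrics named above are available off the shelf; a minimal sketch using scikit-image follows (percentage overlap and embedding similarity are omitted for brevity, and the image shapes are illustrative assumptions):

```python
# Sketch of comparing a rendered <canvas> object image with its oracle
# asset using MSE and SSIM. Requires scikit-image.
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

def compare(asset: np.ndarray, rendered: np.ndarray) -> dict:
    """Grayscale float images in [0, 1] of identical shape."""
    return {
        "mse": mean_squared_error(asset, rendered),
        "ssim": structural_similarity(asset, rendered, data_range=1.0),
    }

rng = np.random.default_rng(0)
asset = rng.uniform(size=(64, 64))
buggy = np.clip(asset + rng.normal(scale=0.1, size=(64, 64)), 0, 1)
print(compare(asset, buggy))   # a visual bug shows as high MSE / low SSIM
```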
2208.02376
|
Yuan Zhou
|
Wangyang Yue, Yuan Zhou, Xiaochuan Zhang, Yuchen Hua, Zhiyuan Wang,
Guang Kou
|
AACC: Asymmetric Actor-Critic in Contextual Reinforcement Learning
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement Learning (RL) techniques have drawn great attention in many
challenging tasks, but their performance deteriorates dramatically when applied
to real-world problems. Various methods, such as domain randomization, have
been proposed to deal with such situations by training agents under different
environmental setups, and therefore they can be generalized to different
environments during deployment. However, they usually do not properly
incorporate information about the underlying environmental factors that the
agents interact with, and thus can be overly conservative when facing changes
in the surroundings. In this paper, we first formalize the task of adapting to
changing environmental dynamics in RL as a generalization problem using
Contextual Markov Decision Processes (CMDPs). We then propose the Asymmetric
Actor-Critic in Contextual RL (AACC) as an end-to-end actor-critic method to
deal with such generalization tasks. We demonstrate the essential improvements
in the performance of AACC over existing baselines experimentally in a range of
simulated environments.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 22:52:26 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Yue",
"Wangyang",
""
],
[
"Zhou",
"Yuan",
""
],
[
"Zhang",
"Xiaochuan",
""
],
[
"Hua",
"Yuchen",
""
],
[
"Wang",
"Zhiyuan",
""
],
[
"Kou",
"Guang",
""
]
] |
new_dataset
| 0.998864 |
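The asymmetry in such actor-critic setups can be summarized structurally: the critic may condition on privileged environment context (e.g., friction, mass) while the actor sees only the observation. A schematic numpy sketch, with sizes and linear maps as placeholder assumptions rather than the paper's networks:

```python
# Structural sketch of an asymmetric actor-critic: the critic sees the
# hidden context, the deployable actor does not. Not the AACC code.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, CTX_DIM, ACT_DIM = 8, 3, 2
W_actor = rng.normal(size=(OBS_DIM, ACT_DIM))
W_critic = rng.normal(size=(OBS_DIM + CTX_DIM, 1))

def actor(obs: np.ndarray) -> np.ndarray:
    # Deployable policy: context-free by construction.
    return np.tanh(obs @ W_actor)

def critic(obs: np.ndarray, context: np.ndarray) -> float:
    # Training-time value estimate: conditions on the privileged context.
    return float(np.concatenate([obs, context]) @ W_critic)

obs, ctx = rng.normal(size=OBS_DIM), rng.normal(size=CTX_DIM)
print(actor(obs), critic(obs, ctx))
```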
2208.02378
|
Ivelisse Rubio
|
Ivelisse Rubio and Jaziel Torres
|
Multidimensional Costas Arrays and Their Periodicity
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A novel higher-dimensional definition for Costas arrays is introduced. This
definition works for arbitrary dimensions and avoids some limitations of
previous definitions. Some non-existence results are presented for
multidimensional Costas arrays preserving the Costas condition when the array
is extended periodically throughout the whole space. In particular, it is shown
that three-dimensional arrays with this property must have the least possible
order, extending an analogous two-dimensional result by H. Taylor. This result
is conjectured to extend to Costas arrays of arbitrary dimensions.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 22:57:53 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Rubio",
"Ivelisse",
""
],
[
"Torres",
"Jaziel",
""
]
] |
new_dataset
| 0.95804 |
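For reference, the classical Costas property, phrased for dot positions of any dimension, requires all difference vectors between distinct dots to be distinct; the paper's novel definition refines this, so the check below covers only the classical condition:

```python
# Classical Costas property for dots in any dimension: all difference
# vectors between distinct dots must be distinct. Only the classical
# condition; the paper's new higher-dimensional definition refines it.
from itertools import combinations

def is_costas(dots) -> bool:
    """dots: list of equal-length integer tuples (dot positions)."""
    diffs = set()
    for p, q in combinations(dots, 2):
        for a, b in ((p, q), (q, p)):        # both ordered differences
            d = tuple(x - y for x, y in zip(a, b))
            if d in diffs:
                return False
            diffs.add(d)
    return True

# A classical 2-D Costas array of order 4, given as (column, row) dots.
print(is_costas([(1, 2), (2, 1), (3, 3), (4, 4)]))   # True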
2208.02403
|
M Rasel Mahmud
|
M. Rasel Mahmud, Michael Stewart, Alberto Cordova, John Quarles
|
Vibrotactile Feedback to Make Real Walking in Virtual Reality More
Accessible
|
13 pages, 7 figures
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This research aims to examine the effects of various vibrotactile feedback
techniques on gait (i.e., walking patterns) in virtual reality (VR). Prior
studies have demonstrated that gait disturbances in VR users are significant
usability barriers. However, adequate research has not been performed to
address this problem. In our study, 39 participants (with mobility impairments:
18, without mobility impairments: 21) performed timed walking tasks in a
real-world environment and identical activities in a VR environment with
different forms of vibrotactile feedback (spatial, static, and rhythmic).
Within-group results revealed that each form of vibrotactile feedback improved
gait performance in VR significantly (p < .001) relative to the no vibrotactile
condition in VR for individuals with and without mobility impairments.
Moreover, spatial vibrotactile feedback increased gait performance
significantly (p < .001) in both participant groups compared to other
vibrotactile conditions. The findings of this research will help to make real
walking in VR more accessible for those with and without mobility impairments.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 02:13:58 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Mahmud",
"M. Rasel",
""
],
[
"Stewart",
"Michael",
""
],
[
"Cordova",
"Alberto",
""
],
[
"Quarles",
"John",
""
]
] |
new_dataset
| 0.98344 |
2208.02417
|
MyeongAh Cho
|
MyeongAh Cho, Tae-young Chung, Taeoh Kim, Sangyoun Lee
|
NIR-to-VIS Face Recognition via Embedding Relations and Coordinates of
the Pairwise Features
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
NIR-to-VIS face recognition is identifying faces of two different domains by
extracting domain-invariant features. However, this is a challenging problem
due to the two different domain characteristics and the lack of NIR face
datasets. In order to reduce the domain discrepancy while using the existing
face recognition models, we propose a 'Relation Module' which can simply be
added on to any face recognition model. The local features extracted from a
face image contain information on each component of the face. Based on the two
different domain characteristics, using the relationships between local
features is more domain-invariant than using them directly. In addition to
these relationships, positional information, such as the distance from lips to
chin or eye to eye, also provides domain-invariant information. In our Relation
Module, the Relation Layer implicitly captures relationships, and the
Coordinates Layer models the positional information. Also, our proposed Triplet
loss with conditional margin reduces intra-class variation in training,
resulting in additional performance improvements. Different from the general
face recognition models, our add-on module does not need to be pre-trained with
a large-scale dataset. The proposed module is fine-tuned only with the CASIA
NIR-VIS 2.0 database. With the proposed module, we achieve improvements of
14.81% in rank-1 accuracy and 15.47% in verification rate at 0.1% FAR compared
to two baseline models.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 02:53:44 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Cho",
"MyeongAh",
""
],
[
"Chun",
"Tae-young",
""
],
[
"Kim",
"g Taeoh",
""
],
[
"Lee",
"Sangyoun",
""
]
] |
new_dataset
| 0.998683 |
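A rough numpy sketch of pairing local features with coordinate offsets before a relation network; the pairing scheme and dimensions are assumptions for illustration, not the paper's exact Relation and Coordinates Layers:

```python
# Illustrative pairing of local face features with relative coordinates;
# sizes and the pairing scheme are assumptions, not the paper's module.
import numpy as np

rng = np.random.default_rng(0)
K, D = 4, 8                                  # local features per face, dims
feats = rng.normal(size=(K, D))
coords = rng.uniform(size=(K, 2))            # normalized 2-D positions

pairs = []
for i in range(K):
    for j in range(K):
        if i != j:
            # Relation input: both features plus their coordinate offset,
            # so relative position (e.g., eye-to-eye distance) is available.
            pairs.append(np.concatenate([feats[i], feats[j],
                                         coords[i] - coords[j]]))
pair_feats = np.stack(pairs)                 # (K*(K-1), 2*D + 2)
print(pair_feats.shape)
```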
2208.02436
|
Ming Cheng
|
Ming Cheng, Yiling Xu, Wang Shen, M. Salman Asif, Chao Ma, Jun Sun,
Zhan Ma
|
H2-Stereo: High-Speed, High-Resolution Stereoscopic Video System
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
High-speed, high-resolution stereoscopic (H2-Stereo) video allows us to
perceive dynamic 3D content at fine granularity. The acquisition of H2-Stereo
video, however, remains challenging with commodity cameras. Existing spatial
super-resolution or temporal frame interpolation methods provide compromised
solutions that lack temporal or spatial details, respectively. To alleviate
this problem, we propose a dual camera system, in which one camera captures
high-spatial-resolution low-frame-rate (HSR-LFR) videos with rich spatial
details, and the other captures low-spatial-resolution high-frame-rate
(LSR-HFR) videos with smooth temporal details. We then devise a Learned
Information Fusion network (LIFnet) that exploits the cross-camera redundancies
to enhance both camera views to high spatiotemporal resolution (HSTR) for
reconstructing the H2-Stereo video effectively. We utilize a disparity network
to transfer spatiotemporal information across views even in large disparity
scenes, based on which, we propose disparity-guided flow-based warping for
LSR-HFR view and complementary warping for HSR-LFR view. A multi-scale fusion
method in feature domain is proposed to minimize occlusion-induced warping
ghosts and holes in HSR-LFR view. The LIFnet is trained in an end-to-end manner
using our collected high-quality Stereo Video dataset from YouTube. Extensive
experiments demonstrate that our model outperforms existing state-of-the-art
methods for both views on synthetic data and camera-captured real data with
large disparity. Ablation studies explore various aspects, including
spatiotemporal resolution, camera baseline, camera desynchronization,
long/short exposures and applications, of our system to fully understand its
capability for potential applications.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 04:06:01 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Cheng",
"Ming",
""
],
[
"Xu",
"Yiling",
""
],
[
"Shen",
"Wang",
""
],
[
"Asif",
"M. Salman",
""
],
[
"Ma",
"Chao",
""
],
[
"Sun",
"Jun",
""
],
[
"Ma",
"Zhan",
""
]
] |
new_dataset
| 0.999536 |
2208.02615
|
Ruffin White
|
Victor Mayoral Vilches, Ruffin White, Gianluca Caiazza, Mikael
Arguedas
|
SROS2: Usable Cyber Security Tools for ROS 2
|
Accepted, IROS 2022, 7 pages, 2 figures, 5 code listings, 5 sections
plus references
| null | null | null |
cs.CR cs.DC cs.NI cs.RO cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
ROS 2 is rapidly becoming a standard in the robotics industry. Built upon DDS
as its default communication middleware and used in safety-critical scenarios,
adding security to robots and ROS computational graphs is increasingly becoming
a concern. The present work introduces SROS2, a series of developer tools and
libraries that facilitate adding security to ROS 2 graphs. Focusing on a
usability-centric approach in SROS2, we present a methodology for securing
graphs systematically while following the DevSecOps model. We also demonstrate
the use of our security tools by presenting an application case study that
considers securing a graph using the popular Navigation2 and SLAM Toolbox
stacks applied in a TurtleBot3 robot. We analyse the current capabilities of
SROS2 and discuss the shortcomings, which provides insights for future
contributions and extensions. Ultimately, we present SROS2 as usable security
tools for ROS 2 and argue that without usability, security in robotics will be
greatly impaired.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 12:28:17 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Vilches",
"Victor Mayoral",
""
],
[
"White",
"Ruffin",
""
],
[
"Caiazza",
"Gianluca",
""
],
[
"Arguedas",
"Mikael",
""
]
] |
new_dataset
| 0.998903 |
2208.02626
|
Xi Xie
|
Xi Xie, Sihem Mesnager, Nian Li, Debiao He, Xiangyong Zeng
|
On the Niho type locally-APN power functions and their boomerang
spectrum
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we focus on the concept of locally-APN-ness (``APN" is the
abbreviation of the well-known notion of Almost Perfect Nonlinear) introduced
by Blondeau, Canteaut, and Charpin, which makes the corpus of S-boxes somehow
larger regarding their differential uniformity and, therefore, possibly, more
suitable candidates against the differential attack (or their variants).
Specifically, given two coprime positive integers $m$ and $k$ such that
$\gcd(2^m+1,2^k+1)=1$, we investigate the locally-APN-ness property of an
infinite family of Niho type power functions in the form $F(x)=x^{s(2^m-1)+1}$
over the finite field ${\mathbb F}_{2^{2m}}$ for $s=(2^k+1)^{-1}$, where
$(2^k+1)^{-1}$ denotes the multiplicative inverse modulo $2^m+1$.
By employing finer studies of the number of solutions of certain equations
over finite fields (with even characteristic) as well as some subtle
manipulations of solving some equations, we prove that $F(x)$ is locally APN
and determine its differential spectrum. It is worth noting that computer
experiments show that this class of locally-APN power functions covers all Niho
type locally-APN power functions for $2\leq m\leq10$. In addition, we also
determine the boomerang spectrum of $F(x)$ by using its differential spectrum,
which particularly generalizes a recent result by Yan, Zhang, and Li.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 12:35:50 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Xie",
"Xi",
""
],
[
"Mesnager",
"Sihem",
""
],
[
"Li",
"Nian",
""
],
[
"He",
"Debiao",
""
],
[
"Zeng",
"Xiangyong",
""
]
] |
new_dataset
| 0.998356 |
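The differential spectrum named above can be brute-forced for small fields. The sketch below does so for a power function over GF(2^6) (i.e., m = 3) with an illustrative Gold exponent rather than the paper's Niho exponent:

```python
# Brute-force differential spectrum of F(x) = x^d over GF(2^6); the
# exponent d = 3 (a Gold exponent, APN here) is illustrative only.
from collections import Counter

N = 6
POLY = 0b1000011            # x^6 + x + 1, irreducible over GF(2)

def gf_mul(a: int, b: int) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= POLY
    return r

def gf_pow(x: int, d: int) -> int:
    r = 1
    while d:
        if d & 1:
            r = gf_mul(r, x)
        x = gf_mul(x, x)
        d >>= 1
    return r

def differential_spectrum(d: int) -> Counter:
    F = [gf_pow(x, d) for x in range(1 << N)]
    counts = Counter()
    for a in range(1, 1 << N):              # nonzero input difference
        deltas = Counter(F[x ^ a] ^ F[x] for x in range(1 << N))
        counts.update(deltas.values())      # multiplicities of each output
    return counts

print(differential_spectrum(3))             # APN: every multiplicity is 2
```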
2208.02683
|
Giovanni Geraci
|
Mohamed Benzaghta, Giovanni Geraci, Rasoul Nikbakht, and David
Lopez-Perez
|
UAV Communications in Integrated Terrestrial and Non-terrestrial
Networks
| null | null | null | null |
cs.IT cs.NI eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
With growing interest in integrating terrestrial networks (TNs) and
non-terrestrial networks (NTNs) to connect the unconnected, a key question is
whether this new paradigm could also be opportunistically exploited to augment
service in urban areas. We assess this possibility in the context of an
integrated TN-NTN, comprising a ground cellular deployment paired with a Low
Earth Orbit (LEO) satellite constellation, providing sub-6 GHz connectivity to
an urban area populated by ground users (GUEs) and uncrewed aerial vehicles
(UAVs). Our study reveals that offloading UAV traffic to the NTN segment
drastically reduces the downlink outage of UAVs from 70% to nearly zero, also
boosting their uplink signal quality as long as the LEO satellite constellation
is sufficiently dense to guarantee a minimum elevation angle. Offloading UAVs
to the NTN also benefits coexisting GUEs, preventing uplink outages of around
12% that GUEs would otherwise incur. Despite the limited bandwidth available
below 6 GHz, NTN-offloaded UAVs meet command and control rate requirements even
across an area the size of Barcelona with as many as one active UAV per cell.
Smaller UAV populations yield proportionally higher rates, potentially enabling
aerial broadband applications.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 14:27:19 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Benzaghta",
"Mohamed",
""
],
[
"Geraci",
"Giovanni",
""
],
[
"Nikbakht",
"Rasoul",
""
],
[
"Lopez-Perez",
"David",
""
]
] |
new_dataset
| 0.968506 |
2208.02685
|
EPTCS
|
Yuliya Lierler, Jose F. Morales, Carmine Dodaro, Veronica Dahl, Martin
Gebser, Tuncay Tekle
|
Proceedings 38th International Conference on Logic Programming
| null |
EPTCS 364, 2022
|
10.4204/EPTCS.364
| null |
cs.LO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
ICLP is the premier international event for presenting research in logic
programming. Contributions to ICLP 2022 were sought in all areas of logic
programming, including but not limited to: Foundations: Semantics, Formalisms,
Nonmonotonic reasoning, Knowledge representation. Languages issues:
Concurrency, Objects, Coordination, Mobility, Higher order, Types, Modes,
Assertions, Modules, Meta-programming, Logic-based domain-specific languages,
Programming techniques. Programming support: Program analysis, Transformation,
Validation, Verification, Debugging, Profiling, Testing, Execution
visualization. Implementation: Compilation, Virtual machines, Memory
management, Parallel and Distributed execution, Constraint handling rules,
Tabling, Foreign interfaces, User interfaces. Related Paradigms and Synergies:
Inductive and coinductive logic programming, Constraint logic programming,
Answer set programming, Interaction with SAT, SMT and CSP solvers, Theorem
proving, Argumentation, Probabilistic programming, Machine learning.
Applications: Databases, Big data, Data integration and federation, Software
engineering, Natural language processing, Web and semantic web, Agents,
Artificial intelligence, Computational life sciences, Cyber-security, Robotics,
Education.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 14:36:47 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Lierler",
"Yuliya",
""
],
[
"Morales",
"Jose F.",
""
],
[
"Dodaro",
"Carmine",
""
],
[
"Dahl",
"Veronica",
""
],
[
"Gebser",
"Martin",
""
],
[
"Tekle",
"Tuncay",
""
]
] |
new_dataset
| 0.994532 |
2208.02697
|
Jose Emilio Labra Gayo
|
Jose Emilio Labra Gayo
|
WShEx: A language to describe and validate Wikibase entities
|
arXiv admin note: substantial text overlap with arXiv:2110.11709
| null | null | null |
cs.DB cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wikidata is one of the most successful Semantic Web projects. Its underlying
Wikibase data model departs from RDF with the inclusion of several features
like qualifiers and references, built-in datatypes, etc. Those features are
serialized to RDF for content negotiation, RDF dumps and in the SPARQL
endpoint. Wikidata adopted the entity schemas namespace using the ShEx language
to describe and validate the RDF serialization of Wikidata entities. In this
paper we propose WShEx, a language inspired by ShEx that directly supports the
Wikibase data model and can be used to describe and validate Wikibase entities.
The paper presents the abstract syntax and semantics of the WShEx language.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 14:51:35 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Gayo",
"Jose Emilio Labra",
""
]
] |
new_dataset
| 0.999583 |
2208.02792
|
Yiheng Feng
|
Hanlin Chen, Brian Liu, Xumiao Zhang, Feng Qian, Z. Morley Mao, and
Yiheng Feng
|
A Cooperative Perception Environment for Traffic Operations and Control
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Existing data collection methods for traffic operations and control usually
rely on infrastructure-based loop detectors or probe vehicle trajectories.
Connected and automated vehicles (CAVs) not only can report data about
themselves but also can provide the status of all detected surrounding
vehicles. Integration of perception data from multiple CAVs as well as
infrastructure sensors (e.g., LiDAR) can provide richer information even under
a very low penetration rate. This paper aims to develop a cooperative data
collection system, which integrates Lidar point cloud data from both
infrastructure and CAVs to create a cooperative perception environment for
various transportation applications. The state-of-the-art 3D detection models
are applied to detect vehicles in the merged point cloud. We test the proposed
cooperative perception environment with the max pressure adaptive signal
control model in a co-simulation platform with CARLA and SUMO. Results show
that very low penetration rates of CAV plus an infrastructure sensor are
sufficient to achieve comparable performance with 30% or higher penetration
rates of connected vehicles (CV). We also show the equivalent CV penetration
rate (E-CVPR) under different CAV penetration rates to demonstrate the data
collection efficiency of the cooperative perception environment.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 17:48:20 GMT"
}
] | 2022-08-05T00:00:00 |
[
[
"Chen",
"Hanlin",
""
],
[
"Liu",
"Brian",
""
],
[
"Zhang",
"Xumiao",
""
],
[
"Qian",
"Feng",
""
],
[
"Mao",
"Z. Morley",
""
],
[
"Feng",
"Yiheng",
""
]
] |
new_dataset
| 0.967087 |
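The core fusion step, registering each agent's LiDAR points into a common frame before detection, can be sketched minimally (the poses and point clouds below are synthetic placeholders):

```python
# Minimal point-cloud fusion sketch: transform each agent's points into a
# common world frame with its 4x4 pose, then concatenate. Detection on the
# merged cloud (the 3D models mentioned above) is out of scope here.
import numpy as np

def to_world(points: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """points: (N, 3) in sensor frame; pose: (4, 4) sensor-to-world."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ pose.T)[:, :3]

rng = np.random.default_rng(0)
cav_pts = rng.uniform(-10, 10, size=(100, 3))
infra_pts = rng.uniform(-10, 10, size=(100, 3))
cav_pose = np.eye(4); cav_pose[:3, 3] = [50.0, 20.0, 0.5]
infra_pose = np.eye(4); infra_pose[:3, 3] = [60.0, 20.0, 5.0]

merged = np.vstack([to_world(cav_pts, cav_pose),
                    to_world(infra_pts, infra_pose)])
print(merged.shape)    # (200, 3) cooperative point cloud
```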
2012.13093
|
Yu-Huan Wu
|
Yu-Huan Wu, Yun Liu, Le Zhang, Ming-Ming Cheng, Bo Ren
|
EDN: Salient Object Detection via Extremely-Downsampled Network
|
Accepted by IEEE Transactions on Image Processing, 12 pages
|
IEEE Transactions on Image Processing, vol. 31, pp. 3125-3136,
2022
|
10.1109/TIP.2022.3164550
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent progress on salient object detection (SOD) mainly benefits from
multi-scale learning, where the high-level and low-level features collaborate
in locating salient objects and discovering fine details, respectively.
However, most efforts are devoted to low-level feature learning by fusing
multi-scale features or enhancing boundary representations. High-level
features, which have long proven effective for many other tasks, have barely
been studied for SOD. In this paper, we tap into this gap and show that
enhancing high-level features is essential for SOD as well. To this end,
we introduce an Extremely-Downsampled Network (EDN), which employs an extreme
downsampling technique to effectively learn a global view of the whole image,
leading to accurate salient object localization. To accomplish better
multi-level feature fusion, we construct the Scale-Correlated Pyramid
Convolution (SCPC) to build an elegant decoder for recovering object details
from the above extreme downsampling. Extensive experiments demonstrate that EDN
achieves state-of-the-art performance with real-time speed. Our efficient
EDN-Lite also achieves competitive performance with a speed of 316fps. Hence,
this work is expected to spark some new thinking in SOD. Code is available at
https://github.com/yuhuan-wu/EDN.
|
[
{
"version": "v1",
"created": "Thu, 24 Dec 2020 04:23:48 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Aug 2021 13:13:30 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Mar 2022 13:09:40 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Wu",
"Yu-Huan",
""
],
[
"Liu",
"Yun",
""
],
[
"Zhang",
"Le",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Ren",
"Bo",
""
]
] |
new_dataset
| 0.970286 |
2112.00468
|
Vihanga Jayawickrama
|
Vihanga Jayawickrama, Gihan Weeraprameshwara, Nisansa de Silva,
Yudhanjaya Wijeratne
|
Seeking Sinhala Sentiment: Predicting Facebook Reactions of Sinhala
Posts
| null | null |
10.1109/ICter53630.2021.9774796
| null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Facebook network allows its users to record their reactions to text via a
typology of emotions. This network, taken at scale, is therefore a prime data
set of annotated sentiment data. This paper uses millions of such reactions,
derived from a decade worth of Facebook post data centred around a Sri Lankan
context, to model an eye of the beholder approach to sentiment detection for
online Sinhala textual content. Three different sentiment analysis models are
built, taking into account a limited subset of reactions, all reactions, and
another that derives a positive/negative star rating value. The efficacy of
these models in capturing the reactions of the observers are then computed and
discussed. The analysis reveals that binary classification of reactions, for
Sinhala content, is significantly more accurate than the other approaches.
Furthermore, the inclusion of the like reaction hinders the capability of
accurately predicting other reactions.
|
[
{
"version": "v1",
"created": "Wed, 1 Dec 2021 13:05:05 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Jayawickrama",
"Vihanga",
""
],
[
"Weeraprameshwara",
"Gihan",
""
],
[
"de Silva",
"Nisansa",
""
],
[
"Wijeratne",
"Yudhanjaya",
""
]
] |
new_dataset
| 0.994568 |
2203.05352
|
Lojze \v{Z}ust
|
Lojze \v{Z}ust and Matej Kristan
|
Temporal Context for Robust Maritime Obstacle Detection
|
7 pages, 6 figures, accepted to IROS 2022, for code & data visit
https://github.com/lojzezust/WaSR-T
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust maritime obstacle detection is essential for fully autonomous unmanned
surface vehicles (USVs). The currently widely adopted segmentation-based
obstacle detection methods are prone to misclassification of object reflections
and sun glitter as obstacles, producing many false positive detections,
effectively rendering the methods impractical for USV navigation. However,
water-turbulence-induced temporal appearance changes on object reflections are
very distinctive from the appearance dynamics of true objects. We harness this
property to design WaSR-T, a novel maritime obstacle detection network, that
extracts the temporal context from a sequence of recent frames to reduce
ambiguity. By learning the local temporal characteristics of object reflection
on the water surface, WaSR-T substantially improves obstacle detection accuracy
in the presence of reflections and glitter. Compared with existing single-frame
methods, WaSR-T reduces the number of false positive detections by 41% overall
and by over 53% within the danger zone of the boat, while preserving a high
recall, and achieving new state-of-the-art performance on the challenging MODS
maritime obstacle detection benchmark. The code, pretrained models and extended
datasets are available at https://github.com/lojzezust/WaSR-T
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 12:58:14 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 12:08:40 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Žust",
"Lojze",
""
],
[
"Kristan",
"Matej",
""
]
] |
new_dataset
| 0.984457 |
2203.10839
|
Mucheng Ren
|
Mucheng Ren, Heyan Huang, Yuxiang Zhou, Qianwen Cao, Yuan Bu, Yang Gao
|
TCM-SD: A Benchmark for Probing Syndrome Differentiation via Natural
Language Processing
|
10 main pages + 2 reference pages, to appear at CCL2022
| null | null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Traditional Chinese Medicine (TCM) is a natural, safe, and effective therapy
that has spread and been applied worldwide. The unique TCM diagnosis and
treatment system requires a comprehensive analysis of a patient's symptoms
hidden in the clinical record written in free text. Prior studies have shown
that this system can be informationized and intelligentized with the aid of
artificial intelligence (AI) technology, such as natural language processing
(NLP). However, existing datasets are not of sufficient quality nor quantity to
support the further development of data-driven AI technology in TCM. Therefore,
in this paper, we focus on the core task of the TCM diagnosis and treatment
system -- syndrome differentiation (SD) -- and we introduce the first public
large-scale dataset for SD, called TCM-SD. Our dataset contains 54,152
real-world clinical records covering 148 syndromes. Furthermore, we collect a
large-scale unlabelled textual corpus in the field of TCM and propose a
domain-specific pre-trained language model, called ZY-BERT. We conducted
experiments using deep neural networks to establish a strong performance
baseline, reveal various challenges in SD, and prove the potential of
domain-specific pre-trained language model. Our study and analysis reveal
opportunities for incorporating computer science and linguistics knowledge to
explore the empirical validity of TCM theories.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 09:59:54 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 03:18:00 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Ren",
"Mucheng",
""
],
[
"Huang",
"Heyan",
""
],
[
"Zhou",
"Yuxiang",
""
],
[
"Cao",
"Qianwen",
""
],
[
"Bu",
"Yuan",
""
],
[
"Gao",
"Yang",
""
]
] |
new_dataset
| 0.999783 |
2205.04281
|
Kunhan Lu
|
Changhong Fu, Kunhan Lu, Guangze Zheng, Junjie Ye, Ziang Cao, Bowen
Li, and Geng Lu
|
Siamese Object Tracking for Unmanned Aerial Vehicle: A Review and
Comprehensive Analysis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicle (UAV)-based visual object tracking has enabled a wide
range of applications and attracted increasing attention in the field of
intelligent transportation systems because of its versatility and
effectiveness. As an emerging force in the revolutionary trend of deep
learning, Siamese networks shine in UAV-based object tracking with their
promising balance of accuracy, robustness, and speed. Thanks to the development
of embedded processors and the gradual optimization of deep neural networks,
Siamese trackers receive extensive research and realize preliminary
combinations with UAVs. However, due to the UAV's limited onboard computational
resources and the complex real-world circumstances, aerial tracking with
Siamese networks still faces severe obstacles in many aspects. To further
explore the deployment of Siamese networks in UAV-based tracking, this work
presents a comprehensive review of leading-edge Siamese trackers, along with an
exhaustive UAV-specific analysis based on the evaluation using a typical UAV
onboard processor. Then, the onboard tests are conducted to validate the
feasibility and efficacy of representative Siamese trackers in real-world UAV
deployment. Furthermore, to better promote the development of the tracking
community, this work analyzes the limitations of existing Siamese trackers and
conducts additional experiments represented by low-illumination evaluations. In
the end, prospects for the development of Siamese tracking for UAV-based
intelligent transportation systems are deeply discussed. The unified framework
of leading-edge Siamese trackers, i.e., code library, and the results of their
experimental evaluations are available at
https://github.com/vision4robotics/SiameseTracking4UAV .
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 13:53:34 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 10:23:58 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Fu",
"Changhong",
""
],
[
"Lu",
"Kunhan",
""
],
[
"Zheng",
"Guangze",
""
],
[
"Ye",
"Junjie",
""
],
[
"Cao",
"Ziang",
""
],
[
"Li",
"Bowen",
""
],
[
"Lu",
"Geng",
""
]
] |
new_dataset
| 0.96368 |
2205.05627
|
Patrizio Angelini
|
Patrizio Angelini, Steven Chaplick, Sabine Cornelsen, Giordano Da
Lozzo
|
On Upward-Planar L-Drawings of Graphs
|
Extended abstract appeared at MFCS 2022
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
In an upward-planar L-drawing of a directed acyclic graph (DAG) each edge $e$
is represented as a polyline composed of a vertical segment with its lowest
endpoint at the tail of $e$ and of a horizontal segment ending at the head of
$e$. Distinct edges may overlap, but not cross. Recently, upward-planar
L-drawings have been studied for $st$-graphs, i.e., planar DAGs with a single
source $s$ and a single sink $t$ containing an edge directed from $s$ to $t$.
It is known that a plane $st$-graph, i.e., an embedded $st$-graph in which the
edge $(s,t)$ is incident to the outer face, admits an upward-planar L-drawing
if and only if it admits a bitonic $st$-ordering, which can be tested in linear
time.
We study upward-planar L-drawings of DAGs that are not necessarily
$st$-graphs. On the combinatorial side, we show that a plane DAG admits an
upward-planar L-drawing if and only if it is a subgraph of a plane $st$-graph
admitting a bitonic $st$-ordering. This allows us to show that not every tree
with a fixed bimodal embedding admits an upward-planar L-drawing. Moreover, we
prove that any acyclic cactus with a single source (or a single sink) admits an
upward-planar L-drawing, which respects a given outerplanar embedding if there
are no transitive edges. On the algorithmic side, we consider DAGs with a
single source (or a single sink). We give linear-time testing algorithms for
these DAGs in two cases: (i) when the drawing must respect a prescribed
embedding and (ii) when no restriction is given on the embedding, but it is
biconnected and series-parallel.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 16:53:07 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 14:11:36 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Angelini",
"Patrizio",
""
],
[
"Chaplick",
"Steven",
""
],
[
"Cornelsen",
"Sabine",
""
],
[
"Da Lozzo",
"Giordano",
""
]
] |
new_dataset
| 0.989242 |
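The "bitonic" notion above refers to sequences that increase and then decrease (either part possibly empty); in a bitonic st-ordering the relevant successor lists must form such sequences. A small checker for the sequence property itself, as an aside:

```python
# Helper for the "bitonic" notion: a sequence is bitonic if it increases
# and then decreases (either part may be empty). This only checks the
# sequence property, not a full bitonic st-ordering of a graph.
def is_bitonic(seq) -> bool:
    i, n = 0, len(seq)
    while i + 1 < n and seq[i] < seq[i + 1]:
        i += 1                     # ascending prefix
    while i + 1 < n and seq[i] > seq[i + 1]:
        i += 1                     # descending suffix
    return i == n - 1

print(is_bitonic([1, 4, 6, 5, 2]))   # True
print(is_bitonic([1, 5, 2, 6]))      # False
```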
2206.06147
|
Adrien Cassagne
|
Adrien Cassagne (ALSOC), Romain Tajan (IMS, Bordeaux INP), Olivier
Aumage (STORM), Camille Leroux (IMS, Bordeaux INP), Denis Barthou (STORM,
Bordeaux INP), Christophe J\'ego (IMS, Bordeaux INP)
|
A DSEL for High Throughput and Low Latency Software-Defined Radio on
Multicore CPUs
| null | null | null | null |
cs.CL cs.DC eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article presents a new Domain Specific Embedded Language (DSEL)
dedicated to Software-Defined Radio (SDR). From a set of carefully designed
components, it enables to build efficient software digital communication
systems, able to take advantage of the parallelism of modern processor
architectures, in a straightforward and safe manner for the programmer. In
particular, the proposed DSEL enables the combination of pipelining and sequence
duplication techniques to extract both temporal and spatial parallelism from
digital communication systems. We leverage the DSEL capabilities on a real use
case: a fully digital transceiver for the widely used DVB-S2 standard designed
entirely in software. Through evaluation, we show how the proposed software DVB-S2
transceiver is able to get the most from modern, high-end multicore CPU
targets.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 13:30:14 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 07:02:02 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Cassagne",
"Adrien",
"",
"ALSOC"
],
[
"Tajan",
"Romain",
"",
"IMS, Bordeaux INP"
],
[
"Aumage",
"Olivier",
"",
"STORM"
],
[
"Leroux",
"Camille",
"",
"IMS, Bordeaux INP"
],
[
"Barthou",
"Denis",
"",
"STORM,\n Bordeaux INP"
],
[
"Jégo",
"Christophe",
"",
"IMS, Bordeaux INP"
]
] |
new_dataset
| 0.990669 |
2206.10234
|
Kostia Chardonnet
|
Kostia Chardonnet, Marc de Visme, Beno\^it Valiron, Renaud Vilmart
|
The Many-Worlds Calculus
| null | null | null | null |
cs.LO quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a new typed graphical language for quantum computation, based on
compact categories with biproducts. Our language generalizes existing
approaches such as ZX-calculus and quantum circuits, while offering a natural
framework to support quantum control: it natively supports "quantum tests". The
language comes equipped with a denotational semantics based on linear
applications, and an equational theory. Through the use of normal forms for the
diagrams, we prove the language to be universal, and the equational theory to
be complete with respect to the semantics.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 10:10:26 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 14:44:15 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Chardonnet",
"Kostia",
""
],
[
"de Visme",
"Marc",
""
],
[
"Valiron",
"Benoît",
""
],
[
"Vilmart",
"Renaud",
""
]
] |
new_dataset
| 0.998434 |
2207.00186
|
Qingwen Zhang
|
Qingwen Zhang, Mingkai Tang, Ruoyu Geng, Feiyi Chen, Ren Xin, Lujia
Wang
|
MMFN: Multi-Modal-Fusion-Net for End-to-End Driving
|
7 pages, 5 figures, accepted by IROS 2022
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by the fact that humans use diverse sensory organs to perceive the
world, sensors with different modalities are deployed in end-to-end driving to
obtain the global context of the 3D scene. In previous works, camera and LiDAR
inputs are fused through transformers for better driving performance. These
inputs are normally further interpreted as high-level map information to assist
navigation tasks. Nevertheless, extracting useful information from the complex
map input is challenging, for redundant information may mislead the agent and
negatively affect driving performance. We propose a novel approach to
efficiently extract features from vectorized High-Definition (HD) maps and
utilize them in the end-to-end driving tasks. In addition, we design a new
expert to further enhance the model performance by considering multi-road
rules. Experimental results prove that both of the proposed improvements enable
our agent to achieve superior performance compared with other methods.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2022 03:30:48 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 07:34:22 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Zhang",
"Qingwen",
""
],
[
"Tang",
"Mingkai",
""
],
[
"Geng",
"Ruoyu",
""
],
[
"Chen",
"Feiyi",
""
],
[
"Xin",
"Ren",
""
],
[
"Wang",
"Lujia",
""
]
] |
new_dataset
| 0.998588 |
2207.01334
|
Qinghong Lin
|
Kevin Qinghong Lin, Alex Jinpeng Wang, Rui Yan, Eric Zhongcong Xu,
Rongcheng Tu, Yanru Zhu, Wenzhe Zhao, Weijie Kong, Chengfei Cai, Hongfa Wang,
Wei Liu, Mike Zheng Shou
|
Egocentric Video-Language Pretraining @ EPIC-KITCHENS-100 Multi-Instance
Retrieval Challenge 2022
|
To appeared in CVPRW22. 5 pages, 2 figures, 2 tables. Code:
https://github.com/showlab/EgoVLP. The EPIC challenge technical report of
EgoVLP arXiv:2206.01670. See Ego4D challenge technical report
arXiv:2207.01622
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this report, we propose a video-language pretraining (VLP) based solution
\cite{kevin2022egovlp} for the EPIC-KITCHENS-100 Multi-Instance Retrieval (MIR)
challenge. Especially, we exploit the recently released Ego4D dataset
\cite{grauman2021ego4d} to pioneer Egocentric VLP from pretraining dataset,
pretraining objective, and development set. Based on the above three designs,
we develop a pretrained video-language model that is able to transfer its
egocentric video-text representation to MIR benchmark. Furthermore, we devise
an adaptive multi-instance max-margin loss to effectively fine-tune the model
and equip the dual-softmax technique for reliable inference. Our best single
model obtains strong performance on the challenge test set with 47.39% mAP and
61.44% nDCG. The code is available at https://github.com/showlab/EgoVLP.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 11:32:48 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 12:08:50 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Lin",
"Kevin Qinghong",
""
],
[
"Wang",
"Alex Jinpeng",
""
],
[
"Yan",
"Rui",
""
],
[
"Xu",
"Eric Zhongcong",
""
],
[
"Tu",
"Rongcheng",
""
],
[
"Zhu",
"Yanru",
""
],
[
"Zhao",
"Wenzhe",
""
],
[
"Kong",
"Weijie",
""
],
[
"Cai",
"Chengfei",
""
],
[
"Wang",
"Hongfa",
""
],
[
"Liu",
"Wei",
""
],
[
"Shou",
"Mike Zheng",
""
]
] |
new_dataset
| 0.996831 |
2207.01622
|
Qinghong Lin
|
Kevin Qinghong Lin, Alex Jinpeng Wang, Mattia Soldan, Michael Wray,
Rui Yan, Eric Zhongcong Xu, Difei Gao, Rongcheng Tu, Wenzhe Zhao, Weijie
Kong, Chengfei Cai, Hongfa Wang, Dima Damen, Bernard Ghanem, Wei Liu, Mike
Zheng Shou
|
Egocentric Video-Language Pretraining @ Ego4D Challenge 2022
|
Preprint. 4 pages, 2 figures, 5 tables. Code:
https://github.com/showlab/EgoVLP. The Ego4D challenge technical report of
EgoVLP arXiv:2206.01670. See EPIC challenge technical report arXiv:2207.01334
for overlap
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this report, we propose a video-language pretraining (VLP) based solution
\cite{kevin2022egovlp} for four Ego4D challenge tasks, including Natural
Language Query (NLQ), Moment Query (MQ), Object State Change Classification
(OSCC), and PNR Localization (PNR). Especially, we exploit the recently
released Ego4D dataset \cite{grauman2021ego4d} to pioneer Egocentric VLP from
pretraining dataset, pretraining objective, and development set. Based on the
above three designs, we develop a pretrained video-language model that is able
to transfer its egocentric video-text representation or video-only
representation to several video downstream tasks. Our Egocentric VLP achieves
10.46R@1&IoU @0.3 on NLQ, 10.33 mAP on MQ, 74% Acc on OSCC, 0.67 sec error on
PNR. The code is available at https://github.com/showlab/EgoVLP.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 12:47:16 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 12:03:39 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Lin",
"Kevin Qinghong",
""
],
[
"Wang",
"Alex Jinpeng",
""
],
[
"Soldan",
"Mattia",
""
],
[
"Wray",
"Michael",
""
],
[
"Yan",
"Rui",
""
],
[
"Xu",
"Eric Zhongcong",
""
],
[
"Gao",
"Difei",
""
],
[
"Tu",
"Rongcheng",
""
],
[
"Zhao",
"Wenzhe",
""
],
[
"Kong",
"Weijie",
""
],
[
"Cai",
"Chengfei",
""
],
[
"Wang",
"Hongfa",
""
],
[
"Damen",
"Dima",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Liu",
"Wei",
""
],
[
"Shou",
"Mike Zheng",
""
]
] |
new_dataset
| 0.99702 |
2208.01393
|
Raula Gaikovina Kula Dr
|
Raula Gaikovina Kula and Christoph Treude
|
In War and Peace: The Impact of World Politics on Software Ecosystems
|
Accepted to ESEC/FSE as a vision paper
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Reliance on third-party libraries is now commonplace in contemporary software
engineering. Being open source in nature, these libraries should advocate for a
world where the freedoms and opportunities of open source software can be
enjoyed by all. Yet, there is a growing concern related to maintainers using
their influence to make political stances (i.e., referred to as protestware).
In this paper, we reflect on the impact of world politics on software
ecosystems, especially in the context of the ongoing War in Ukraine. We show
three cases where world politics has had an impact on a software ecosystem, and
how these incidents may result in either benign or malignant consequences. We
further point to specific opportunities for research, and conclude with a
research agenda with ten research questions to guide future research
directions.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 00:44:01 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2022 03:27:13 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Kula",
"Raula Gaikovina",
""
],
[
"Treude",
"Christoph",
""
]
] |
new_dataset
| 0.980358 |
2208.01636
|
Vivek Sharma
|
Chris Clifton, Bradley Malin, Anna Oganian, Ramesh Raskar, Vivek
Sharma
|
A Roadmap for Greater Public Use of Privacy-Sensitive Government Data:
Workshop Report
|
23 pages
| null | null | null |
cs.CR cs.CV cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Government agencies collect and manage a wide range of ever-growing datasets.
While such data has the potential to support research and evidence-based policy
making, there are concerns that the dissemination of such data could infringe
upon the privacy of the individuals (or organizations) from whom such data was
collected. To appraise the current state of data sharing, as well as learn
about opportunities for stimulating such sharing at a faster pace, a virtual
workshop was held on May 21st and 26th, 2021, sponsored by the National Science
Foundation and National Institute of Standards and Technologies, where a
multinational collection of researchers and practitioners were brought together
to discuss their experiences and learn about recently developed technologies
for managing privacy while sharing data. The workshop specifically focused on
challenges and successes in government data sharing at various levels. The
first day focused on successful examples of new technology applied to sharing
of public data, including formal privacy techniques, synthetic data, and
cryptographic approaches. Day two emphasized brainstorming sessions on some of
the challenges and directions to address them.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 17:20:29 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Clifton",
"Chris",
""
],
[
"Malin",
"Bradley",
""
],
[
"Oganian",
"Anna",
""
],
[
"Raskar",
"Ramesh",
""
],
[
"Sharma",
"Vivek",
""
]
] |
new_dataset
| 0.995657 |
2208.01703
|
Tim Finin
|
Sai Sree Laya Chukkapalli, Anupam Joshi, Tim Finin, Robert F. Erbacher
|
CAPD: A Context-Aware, Policy-Driven Framework for Secure and Resilient
IoBT Operations
| null | null | null | null |
cs.CR cs.AI cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
The Internet of Battlefield Things (IoBT) will advance the operational
effectiveness of infantry units. However, this requires autonomous assets such
as sensors, drones, combat equipment, and uncrewed vehicles to collaborate,
securely share information, and be resilient to adversary attacks in contested
multi-domain operations. CAPD addresses this problem by providing a
context-aware, policy-driven framework supporting data and knowledge exchange
among autonomous entities in a battlespace. We propose an IoBT ontology that
facilitates controlled information sharing to enable semantic interoperability
between systems. Its key contributions include providing a knowledge graph with
a shared semantic schema, integration with background knowledge, efficient
mechanisms for enforcing data consistency and drawing inferences, and
supporting attribute-based access control. The sensors in the IoBT provide data
that create populated knowledge graphs based on the ontology. This paper
describes using CAPD to detect and mitigate adversary actions. CAPD enables
situational awareness using reasoning over the sensed data and SPARQL queries.
For example, adversaries can cause sensor failure or hijacking and disrupt the
tactical networks to degrade video surveillance. In such instances, CAPD uses
an ontology-based reasoner to see how alternative approaches can still support
the mission. Depending on bandwidth availability, the reasoner initiates the
creation of a reduced frame rate grayscale video by active transcoding or
transmits only still images. This ability to reason over the mission sensed
environment and attack context permits the autonomous IoBT system to exhibit
resilience in contested conditions.
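To make the reasoning step above concrete, the sketch below runs a SPARQL query over a toy RDF knowledge graph with the rdflib library, flagging video sensors whose link bandwidth has degraded. The `iobt` vocabulary, node names, and the 500 kbps threshold are hypothetical placeholders, not the actual CAPD ontology.

```python
from rdflib import RDF, Graph, Literal, Namespace

IOBT = Namespace("http://example.org/iobt#")  # hypothetical vocabulary

g = Graph()
g.bind("iobt", IOBT)

# Populate a toy knowledge graph from "sensed" facts.
g.add((IOBT.cam1, RDF.type, IOBT.VideoSensor))
g.add((IOBT.cam1, IOBT.linkBandwidthKbps, Literal(120)))
g.add((IOBT.cam2, RDF.type, IOBT.VideoSensor))
g.add((IOBT.cam2, IOBT.linkBandwidthKbps, Literal(2000)))

# Find degraded video sensors that should fall back to still images.
QUERY = """
PREFIX iobt: <http://example.org/iobt#>
SELECT ?sensor ?bw WHERE {
    ?sensor a iobt:VideoSensor ;
            iobt:linkBandwidthKbps ?bw .
    FILTER (?bw < 500)
}
"""
for sensor, bw in g.query(QUERY):
    print(f"{sensor} degraded ({bw} kbps): transmit still images instead")
```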
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 19:27:51 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Chukkapalli",
"Sai Sree Laya",
""
],
[
"Joshi",
"Anupam",
""
],
[
"Finin",
"Tim",
""
],
[
"Erbacher",
"Robert F.",
""
]
] |
new_dataset
| 0.996341 |
2208.01710
|
Ziwei Wang
|
Ziwei Wang and Yonhon Ng and Jack Henderson and Robert Mahony
|
Smart Visual Beacons with Asynchronous Optical Communications using
Event Cameras
|
7 pages, 8 figures, accepted by IEEE International Conference on
Intelligent Robots and Systems (IROS) 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event cameras are bio-inspired dynamic vision sensors that respond to changes
in image intensity with a high temporal resolution, high dynamic range and low
latency. These sensor characteristics are ideally suited to enable visual
target tracking in concert with a broadcast visual communication channel for
smart visual beacons with applications in distributed robotics. Visual beacons
can be constructed by high-frequency modulation of Light Emitting Diodes (LEDs)
such as vehicle headlights, Internet of Things (IoT) LEDs, smart building
lights, etc., that are already present in many real-world scenarios. The high
temporal resolution characteristic of the event cameras allows them to capture
visual signals at far higher data rates compared to classical frame-based
cameras. In this paper, we propose a novel smart visual beacon architecture
with both LED modulation and event camera demodulation algorithms. We
quantitatively evaluate the relationship between LED transmission rate,
communication distance and the message transmission accuracy for the smart
visual beacon communication system that we prototyped. The proposed method
achieves up to 4 kbps in an indoor environment and lossless transmission over a
distance of 100 meters, at a transmission rate of 500 bps, in full sunlight,
demonstrating the potential of the technology in an outdoor environment.
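As a toy illustration of the modulation/demodulation idea, the sketch below encodes a bit string as intensity-change events, roughly as an idealized event camera would see an on-off-keyed LED, and decodes it back from event polarities. It is a simplified stand-in, not the paper's algorithms.

```python
import numpy as np

def encode_events(bits, bit_period=2e-3):
    """Emit (timestamp, polarity) pairs: +1 when the LED turns on, -1 off."""
    events, level = [], 0
    for i, b in enumerate(bits):
        if b != level:                       # intensity change -> one event
            events.append((i * bit_period, 1 if b else -1))
            level = b
    return np.array(events)

def decode_events(events, n_bits, bit_period=2e-3):
    """Reconstruct each bit slot from the last polarity seen before it."""
    bits, level, idx = [], 0, 0
    for i in range(n_bits):
        t0 = i * bit_period
        while idx < len(events) and events[idx][0] <= t0:
            level = 1 if events[idx][1] > 0 else 0
            idx += 1
        bits.append(level)
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode_events(encode_events(payload), len(payload)) == payload
```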
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 19:46:32 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Wang",
"Ziwei",
""
],
[
"Ng",
"Yonhon",
""
],
[
"Henderson",
"Jack",
""
],
[
"Mahony",
"Robert",
""
]
] |
new_dataset
| 0.956731 |
2208.01757
|
Ruibo Wang
|
Ruibo Wang, Anna Talgat, Mustafa A. Kishk and Mohamed-Slim Alouini
|
Conditional Contact Angle Distribution in LEO Satellite-Relayed
Transmission
| null | null | null | null |
cs.IT cs.NI math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This letter characterizes the contact angle distribution based on the
condition that the relay low earth orbit (LEO) satellite is in the
communication range of both the ground transmitter and the ground receiver. As
one of the core distributions in stochastic geometry-based routing analysis,
the analytical expression of the cumulative distribution function (CDF) of the conditional contact angle is
derived. Furthermore, the conditional contact angle is applied to analyze the
inaccessibility of common satellites between the ground transmitter and
receiver. Finally, with the help of the conditional contact angle, coverage
probability and achievable data rate in LEO satellite-relayed transmission are
studied.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 21:14:05 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Wang",
"Ruibo",
""
],
[
"Talgat",
"Anna",
""
],
[
"Kishk",
"Mustafa A.",
""
],
[
"Alouini",
"Mohamed-Slim",
""
]
] |
new_dataset
| 0.993411 |
2208.01919
|
Sicheng Zhang
|
Sicheng Zhang (1), Jiarun Yu (1), Zhida Bao (1), Shiwen Mao (2), Yun
Lin (1) ((1) College of Information and Communication Engineering, Harbin
Engineering University, Harbin, (2) Department of Electrical & Computer
Engineering, Auburn University, Auburn)
|
Spectrum Focused Frequency Adversarial Attacks for Automatic Modulation
Classification
|
6 pages, 9 figures
| null | null | null |
cs.CR cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial intelligence (AI) technology has provided a potential solution for
automatic modulation recognition (AMC). Unfortunately, AI-based AMC models are
vulnerable to adversarial examples, which seriously threatens the efficient,
secure and trusted application of AI in AMC. This issue has attracted the
attention of researchers. Various studies on adversarial attacks and defenses
evolve in a spiral. However, the existing adversarial attack methods are all
designed in the time domain. They introduce more high-frequency components in
the frequency domain, due to abrupt updates in the time domain. To address
this issue, we propose a spectrum-focused frequency adversarial attack (SFFAA)
for AMC models from the frequency-domain perspective and, drawing on the idea
of meta-learning, further propose a Meta-SFFAA algorithm to improve
transferability in black-box attacks. Extensive experiments, qualitative
and quantitative metrics demonstrate that the proposed algorithm can
concentrate the adversarial energy on the spectrum where the signal is located,
and significantly improve the adversarial attack performance while maintaining
concealment in the frequency domain.
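A minimal numpy sketch of the spectrum-focusing step follows: a perturbation is projected onto the spectral bins occupied by the signal via an FFT mask. The full SFFAA method is a gradient-based attack; only the band-confinement idea is shown here, and the keep-fraction and toy signal are illustrative.

```python
import numpy as np

def spectrum_mask(signal, keep_fraction=0.2):
    """Boolean mask over FFT bins, keeping only the strongest ones."""
    mag = np.abs(np.fft.fft(signal))
    return mag >= np.quantile(mag, 1.0 - keep_fraction)

def project_to_band(perturbation, mask):
    """Zero the perturbation's energy outside the masked bins."""
    return np.real(np.fft.ifft(np.fft.fft(perturbation) * mask))

rng = np.random.default_rng(0)
t = np.arange(1024) / 1024
signal = np.cos(2 * np.pi * 50 * t)           # toy narrowband "signal"
delta = 0.05 * rng.standard_normal(1024)      # raw (broadband) perturbation
delta_focused = project_to_band(delta, spectrum_mask(signal))

out_of_band = np.abs(np.fft.fft(delta_focused))[~spectrum_mask(signal)]
print("max out-of-band magnitude:", out_of_band.max())  # ~0
```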
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 08:54:56 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Zhang",
"Sicheng",
""
],
[
"Yu",
"Jiarun",
""
],
[
"Bao",
"Zhida",
""
],
[
"Mao",
"Shiwen",
""
],
[
"Lin",
"Yun",
""
]
] |
new_dataset
| 0.996103 |
2208.01925
|
Xiangrui Zhao
|
Xiangrui Zhao, Sheng Yang, Tianxin Huang, Jun Chen, Teng Ma, Mingyang
Li and Yong Liu
|
SuperLine3D: Self-supervised Line Segmentation and Description for LiDAR
Point Cloud
|
17 pages, ECCV 2022 Accepted
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Poles and building edges are frequently observable objects on urban roads,
conveying reliable hints for various computer vision tasks. To repetitively
extract them as features and perform association between discrete LiDAR frames
for registration, we propose the first learning-based feature segmentation and
description model for 3D lines in LiDAR point cloud. To train our model without
the time consuming and tedious data labeling process, we first generate
synthetic primitives for the basic appearance of target lines, and build an
iterative line auto-labeling process to gradually refine line labels on real
LiDAR scans. Our segmentation model can extract lines under arbitrary scale
perturbations, and we use shared EdgeConv encoder layers to train the two
segmentation and descriptor heads jointly. Based on this model, we can build a
highly available global registration module for point cloud registration, in
conditions without initial transformation hints. Experiments have demonstrated
that our line-based registration method is highly competitive to
state-of-the-art point-based approaches. Our code is available at
https://github.com/zxrzju/SuperLine3D.git.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 09:06:14 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Zhao",
"Xiangrui",
""
],
[
"Yang",
"Sheng",
""
],
[
"Huang",
"Tianxin",
""
],
[
"Chen",
"Jun",
""
],
[
"Ma",
"Teng",
""
],
[
"Li",
"Mingyang",
""
],
[
"Liu",
"Yong",
""
]
] |
new_dataset
| 0.999736 |
2208.01933
|
Bing Han
|
Bing Han, Zhengyang Chen, Zhikai Zhou, Yanmin Qian
|
The SJTU System for Short-duration Speaker Verification Challenge 2021
|
Published by Interspeech 2021
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the SJTU system for both text-dependent and
text-independent tasks in short-duration speaker verification (SdSV) challenge
2021. In this challenge, we explored different strong embedding extractors to
extract robust speaker embeddings. For the text-independent task,
language-dependent adaptive s-norm is explored to improve the system
performance under the cross-lingual verification condition. For the
text-dependent task, we mainly focus
on the in-domain fine-tuning strategies based on the model pre-trained on
large-scale out-of-domain data. In order to improve the distinction between
different speakers uttering the same phrase, we proposed several novel
phrase-aware fine-tuning strategies and phrase-aware neural PLDA. With such
strategies, the system performance is further improved. Finally, we fused the
scores of different systems, and our fusion systems achieved 0.0473 in Task1
(rank 3) and 0.0581 in Task2 (rank 8) on the primary evaluation metric.
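For reference, symmetric score normalization (s-norm) has a standard closed form; the sketch below applies it with cohort scores keyed by the test language, which is one plausible reading of the language-dependent adaptive s-norm mentioned above. The cohort statistics and scores are synthetic.

```python
import numpy as np

def s_norm(raw, enroll_cohort_scores, test_cohort_scores):
    """Symmetric score normalization of a single trial score."""
    ze = (raw - enroll_cohort_scores.mean()) / (enroll_cohort_scores.std() + 1e-8)
    zt = (raw - test_cohort_scores.mean()) / (test_cohort_scores.std() + 1e-8)
    return 0.5 * (ze + zt)

rng = np.random.default_rng(0)
# In the adaptive variant, these would be the enrollment/test embeddings
# scored against a cohort drawn from the matching language.
cohort_by_lang = {"fa": rng.normal(0.1, 0.3, 500),
                  "en": rng.normal(0.0, 0.2, 500)}
raw_score, lang = 0.62, "fa"
print(s_norm(raw_score, cohort_by_lang[lang], cohort_by_lang[lang]))
```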
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 09:19:22 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Han",
"Bing",
""
],
[
"Chen",
"Zhengyang",
""
],
[
"Zhou",
"Zhikai",
""
],
[
"Qian",
"Yanmin",
""
]
] |
new_dataset
| 0.983772 |
2208.01946
|
Mingyuan Gao
|
Mingyuan Gao (1), Hung Dang (2), Ee-Chien Chang (1), Jialin Li (1)
((1) National University of Singapore, Singapore (2) FPT Blockchain Lab,
Vietnam)
|
Mixed Fault Tolerance Protocols with Trusted Execution Environment
|
12 pages, 3 figures
| null | null | null |
cs.DC cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Blockchain systems are designed, built and operated in the presence of
failures. There are two dominant failure models, namely crash fault and
Byzantine fault. Byzantine fault tolerance (BFT) protocols offer stronger
security guarantees, and thus are widely used in blockchain systems. However,
their security guarantees come at a dear cost to their performance and
scalability. Several works have improved BFT protocols, and Trusted Execution
Environment (TEE) has been shown to be an effective solution. However, existing
such works typically assume that each participating node is equipped with TEE.
For blockchain systems wherein participants typically have different hardware
configurations, i.e., some nodes feature TEE while others do not, existing
TEE-based BFT protocols are not applicable.
This work studies the setting wherein not all participating nodes feature
TEE, under which we propose a new fault model called mixed fault. We explore a
new approach to designing efficient distributed fault-tolerant protocols under
the mixed fault model. In general, mixed fault tolerance (MFT) protocols assume
a network of $n$ nodes, among which up to $f = \frac{n-2}{3}$ can be subject to
mixed faults. We identify two key principles for designing efficient MFT
protocols, namely, (i) prioritizing non-equivocating nodes in leading the
protocol, and (ii) advocating the use of public-key cryptographic primitives
that allow authenticated messages to be aggregated. We showcase these design
principles by prescribing an MFT protocol, namely MRaft.
We implemented a prototype of MRaft using Intel SGX, integrated it into the
CCF blockchain framework, conducted experiments, and showed that MFT protocols
can obtain the same security guarantees as their BFT counterparts while still
providing better performance (both transaction throughput and latency) and
scalability.
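The fault bound quoted above is easy to encode; the snippet below computes it and restricts leader candidacy to TEE-equipped (non-equivocating) nodes, following design principle (i). The cluster layout is a made-up example.

```python
def max_mixed_faults(n: int) -> int:
    """Up to f = (n - 2) / 3 nodes may exhibit mixed faults."""
    return (n - 2) // 3

def leader_candidates(nodes: dict) -> list:
    """Prefer TEE-equipped (non-equivocating) nodes as leaders."""
    return [nid for nid, has_tee in nodes.items() if has_tee]

cluster = {"a": True, "b": False, "c": True, "d": False, "e": True}
print("tolerated mixed faults:", max_mixed_faults(len(cluster)))  # 1
print("preferred leaders:", leader_candidates(cluster))           # a, c, e
```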
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 09:48:03 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Gao",
"Mingyuan",
""
],
[
"Dang",
"Hung",
""
],
[
"Chang",
"Ee-Chien",
""
],
[
"Li",
"Jialin",
""
]
] |
new_dataset
| 0.994102 |
2208.01957
|
Aleksandr Kim
|
Aleksandr Kim (1), Guillem Brasó (1), Aljoša Ošep (1), Laura
Leal-Taixé (1) ((1) Technical University of Munich)
|
PolarMOT: How Far Can Geometric Relations Take Us in 3D Multi-Object
Tracking?
|
ECCV 2022, 17 pages, 5 pages of supplementary, 3 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most (3D) multi-object tracking methods rely on appearance-based cues for
data association. By contrast, we investigate how far we can get by only
encoding geometric relationships between objects in 3D space as cues for
data-driven data association. We encode 3D detections as nodes in a graph,
where spatial and temporal pairwise relations among objects are encoded via
localized polar coordinates on graph edges. This representation makes our
geometric relations invariant to global transformations and smooth trajectory
changes, especially under non-holonomic motion. This allows our graph neural
network to learn to effectively encode temporal and spatial interactions and
fully leverage contextual and motion cues to obtain final scene interpretation
by posing data association as edge classification. We establish a new
state-of-the-art on nuScenes dataset and, more importantly, show that our
method, PolarMOT, generalizes remarkably well across different locations
(Boston, Singapore, Karlsruhe) and datasets (nuScenes and KITTI).
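The localized polar edge encoding can be sketched in a few lines; the snippet below computes a (range, bearing) relation in the reference object's own frame and checks that it is unchanged under a global rigid transform. This is a schematic of the idea, not the released PolarMOT code.

```python
import numpy as np

def polar_edge(ref_xy, ref_yaw, other_xy):
    """(range, bearing) of `other` in the reference object's local frame."""
    d = np.asarray(other_xy) - np.asarray(ref_xy)
    rng = np.hypot(d[0], d[1])
    bearing = np.arctan2(d[1], d[0]) - ref_yaw          # relative to heading
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return rng, bearing

# The same pair of objects, before and after a global rigid transform:
edge_a = polar_edge((0.0, 0.0), 0.3, (4.0, 3.0))
theta = 1.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
edge_b = polar_edge(R @ [0.0, 0.0] + [5.0, -2.0], 0.3 + theta,
                    R @ [4.0, 3.0] + [5.0, -2.0])
print(np.allclose(edge_a, edge_b))  # True: the edge feature is invariant
```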
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 10:06:56 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Kim",
"Aleksandr",
"",
"Technical University of Munich"
],
[
"Brasó",
"Guillem",
"",
"Technical University of Munich"
],
[
"Ošep",
"Aljoša",
"",
"Technical University of Munich"
],
[
"Leal-Taixé",
"Laura",
"",
"Technical University of Munich"
]
] |
new_dataset
| 0.9987 |
2208.01968
|
Jyoti Prakash
|
Abhishek Tiwari, Jyoti Prakash, Alimerdan Rahimov, Christian Hammer
|
Our fingerprints don't fade from the Apps we touch: Fingerprinting the
Android WebView
| null | null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Numerous studies demonstrated that browser fingerprinting is detrimental to
users' security and privacy. However, little is known about the effects of
browser fingerprinting on Android hybrid apps -- where a stripped-down Chromium
browser is integrated into an app. These apps expand the attack surface by
employing two-way communication between native apps and the web. This paper
studies the impact of browser fingerprinting on these embedded browsers. To
this end, we instrument the Android framework to record and extract information
leveraged for fingerprinting. We study over 20,000 apps, including the most
popular apps from the Google Play Store. We exemplify security flaws and severe
information leaks in popular apps like Instagram. Our study reveals that
fingerprints in hybrid apps potentially contain account-specific and
device-specific information that identifies users across multiple devices
uniquely. Besides, our results show that the hybrid app browser does not always
adhere to standard browser-specific privacy policies.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 10:34:30 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Tiwari",
"Abhishek",
""
],
[
"Prakash",
"Jyoti",
""
],
[
"Rahimov",
"Alimerdan",
""
],
[
"Hammer",
"Christian",
""
]
] |
new_dataset
| 0.981027 |
2208.02010
|
Lina María Amaya-Mejía
|
Lina María Amaya-Mejía, Nicolás Duque-Suárez, Daniel
Jaramillo-Ramírez, Carol Martinez
|
Vision-Based Safety System for Barrierless Human-Robot Collaboration
|
Accepted for publication at the 2022 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS)
| null | null | null |
cs.RO cs.CV cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human safety has always been the main priority when working near an
industrial robot. With the rise of Human-Robot Collaborative environments,
physical barriers for collision avoidance have been disappearing, increasing the
risk of accidents and the need for solutions that ensure a safe Human-Robot
Collaboration. This paper proposes a safety system that implements Speed and
Separation Monitoring (SSM) type of operation. For this, safety zones are
defined in the robot's workspace following current standards for industrial
collaborative robots. A deep learning-based computer vision system detects,
tracks, and estimates the 3D position of operators close to the robot. The
robot control system receives the operator's 3D position and generates 3D
representations of them in a simulation environment. Depending on the zone
where the closest operator was detected, the robot stops or changes its
operating speed. Three different operation modes in which the human and robot
interact are presented. Results show that the vision-based system can correctly
detect and classify in which safety zone an operator is located and that the
different proposed operation modes ensure that the robot's reaction and stop
time are within the required time limits to guarantee safety.
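A miniature version of the SSM-style zone logic is shown below: the closest detected operator distance is mapped to a speed scale. The zone thresholds are placeholders, not values from the paper or the underlying standards.

```python
def speed_command(min_human_distance_m: float) -> float:
    """Map the closest detected operator distance to a robot speed scale."""
    if min_human_distance_m < 0.5:      # stop zone
        return 0.0
    if min_human_distance_m < 1.5:      # reduced-speed zone
        return 0.25
    if min_human_distance_m < 3.0:      # warning zone
        return 0.6
    return 1.0                          # free zone: full speed

for d in (0.3, 1.0, 2.0, 5.0):
    print(f"operator at {d:.1f} m -> speed scale {speed_command(d)}")
```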
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 12:31:03 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Amaya-Mejía",
"Lina María",
""
],
[
"Duque-Suárez",
"Nicolás",
""
],
[
"Jaramillo-Ramírez",
"Daniel",
""
],
[
"Martinez",
"Carol",
""
]
] |
new_dataset
| 0.989629 |
2208.02020
|
Yilei Jiang
|
Yilei Jiang and Dongkun Han
|
Finite-time Motion Planning of Multi-agent Systems with Collision
Avoidance
| null |
2022 13th Asian Control Conference (ASCC)
|
10.23919/ASCC56756.2022.9828361
| null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finite-time motion planning with collision avoidance is a challenging issue
in multi-agent systems. This paper proposes a novel distributed controller
based on a new Lyapunov barrier function which guarantees finite-time stability
for multi-agent systems without collisions. First, the problem of finite-time
motion planning of multi-agent systems is formulated. Then, a novel finite-time
distributed controller is developed based on a Lyapunov barrier function.
Finally, numerical simulations demonstrate the effectiveness of proposed
method.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 12:43:24 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Jiang",
"Yilei",
""
],
[
"Han",
"Dongkun",
""
]
] |
new_dataset
| 0.972596 |
2208.02030
|
Laurens Martin Tetzlaff
|
Laurens Martin Tetzlaff
|
BPMN4sML: A BPMN Extension for Serverless Machine Learning. Technology
Independent and Interoperable Modeling of Machine Learning Workflows and
their Serverless Deployment Orchestration
|
105 pages 3 tables 33 figures
| null | null | null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning (ML) continues to permeate all layers of academia, industry
and society. Despite its successes, mental frameworks to capture and represent
machine learning workflows in a consistent and coherent manner are lacking. For
instance, the de facto process modeling standard, Business Process Model and
Notation (BPMN), managed by the Object Management Group, is widely accepted and
applied. However, it is short of specific support to represent machine learning
workflows. Further, the number of heterogeneous tools for deployment of machine
learning solutions can easily overwhelm practitioners. Research is needed to
align the process from modeling to deploying ML workflows.
We analyze requirements for standards-based conceptual modeling of machine
learning workflows and their serverless deployment. Confronting the
shortcomings with respect to consistent and coherent modeling of ML workflows
in a technology independent and interoperable manner, we extend BPMN's
Meta-Object Facility (MOF) metamodel and the corresponding notation and
introduce BPMN4sML (BPMN for serverless machine learning). Our extension
BPMN4sML follows the same outline referenced by the Object Management Group
(OMG) for BPMN. We further address the heterogeneity in deployment by proposing
a conceptual mapping to convert BPMN4sML models to corresponding deployment
models using TOSCA.
BPMN4sML allows technology-independent and interoperable modeling of machine
learning workflows of various granularity and complexity across the entire
machine learning lifecycle. It aids in arriving at a shared and standardized
language to communicate ML solutions. Moreover, it takes the first steps toward
enabling conversion of ML workflow model diagrams to corresponding deployment
models for serverless deployment via TOSCA.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 10:36:00 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Tetzlaff",
"Laurens Martin",
""
]
] |
new_dataset
| 0.982967 |
2208.02031
|
Lisa Raithel
|
Lisa Raithel, Philippe Thomas, Roland Roller, Oliver Sapina, Sebastian
Möller, Pierre Zweigenbaum
|
Cross-lingual Approaches for the Detection of Adverse Drug Reactions in
German from a Patient's Perspective
|
Accepted at LREC 2022
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this work, we present the first corpus for German Adverse Drug Reaction
(ADR) detection in patient-generated content. The data consists of 4,169 binary
annotated documents from a German patient forum, where users talk about health
issues and get advice from medical doctors. As is common in social media data
in this domain, the class labels of the corpus are very imbalanced. This and a
high topic imbalance make it a very challenging dataset, since often, the same
symptom can have several causes and is not always related to a medication
intake. We aim to encourage further multi-lingual efforts in the domain of ADR
detection and provide preliminary experiments for binary classification using
different methods of zero- and few-shot learning based on a multi-lingual
model. When fine-tuning XLM-RoBERTa first on English patient forum data and
then on the new German data, we achieve an F1-score of 37.52 for the positive
class. We make the dataset and models publicly available for the community.
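The two-stage fine-tuning recipe (English patient-forum data first, then the German corpus) can be outlined with the Hugging Face transformers API, as below. Dataset objects, checkpoint paths, and hyperparameters are placeholders.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)   # binary: ADR vs. no ADR

def fine_tune(model, train_dataset, output_dir):
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=3,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=train_dataset).train()
    return model

# Stage 1: English patient-forum data; stage 2: the new German corpus.
# `english_ds` and `german_ds` stand in for tokenized torch datasets.
# model = fine_tune(model, english_ds, "ckpt/en")
# model = fine_tune(model, german_ds, "ckpt/de")
```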
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 12:52:01 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Raithel",
"Lisa",
""
],
[
"Thomas",
"Philippe",
""
],
[
"Roller",
"Roland",
""
],
[
"Sapina",
"Oliver",
""
],
[
"Möller",
"Sebastian",
""
],
[
"Zweigenbaum",
"Pierre",
""
]
] |
new_dataset
| 0.99395 |
2208.02049
|
Ziyi Wang
|
Ziyi Wang, Bo Lu, Yonghao Long, Fangxun Zhong, Tak-Hong Cheung, Qi
Dou, Yunhui Liu
|
AutoLaparo: A New Dataset of Integrated Multi-tasks for Image-guided
Surgical Automation in Laparoscopic Hysterectomy
|
Accepted at MICCAI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer-assisted minimally invasive surgery has great potential in
benefiting modern operating theatres. The video data streamed from the
endoscope provides rich information to support context-awareness for
next-generation intelligent surgical systems. To achieve accurate perception
and automatic manipulation during the procedure, learning based technique is a
promising way, which enables advanced image analysis and scene understanding in
recent years. However, learning such models highly relies on large-scale,
high-quality, and multi-task labelled data. This is currently a bottleneck for
the topic, as publicly available datasets are still extremely limited in the
field of computer-assisted intervention (CAI). In this paper, we present and
release the first integrated dataset
(named AutoLaparo) with multiple image-based perception tasks to facilitate
learning-based automation in hysterectomy surgery. Our AutoLaparo dataset is
developed based on full-length videos of entire hysterectomy procedures.
Specifically, three different yet highly correlated tasks are formulated in the
dataset, including surgical workflow recognition, laparoscope motion
prediction, and instrument and key anatomy segmentation. In addition, we
provide experimental results with state-of-the-art models as reference
benchmarks for further model developments and evaluations on this dataset. The
dataset is available at https://autolaparo.github.io.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 13:17:23 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Wang",
"Ziyi",
""
],
[
"Lu",
"Bo",
""
],
[
"Long",
"Yonghao",
""
],
[
"Zhong",
"Fangxun",
""
],
[
"Cheung",
"Tak-Hong",
""
],
[
"Dou",
"Qi",
""
],
[
"Liu",
"Yunhui",
""
]
] |
new_dataset
| 0.999785 |
2208.02121
|
Diego Paez Granados PhD
|
Diego Paez-Granados, Yujie He, David Gonon, Dan Jia, Bastian Leibe,
Kenji Suzuki, Aude Billard
|
Pedestrian-Robot Interactions on Autonomous Crowd Navigation: Reactive
Control Methods and Evaluation Metrics
|
©IEEE. All rights reserved. IEEE-IROS-2022, Oct. 23-27.
Kyoto, Japan
|
IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS-2022)
| null | null |
cs.RO cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous navigation in highly populated areas remains a challenging task
for robots because of the difficulty in guaranteeing safe interactions with
pedestrians in unstructured situations. In this work, we present a crowd
navigation control framework that delivers continuous obstacle avoidance and
post-contact control evaluated on an autonomous personal mobility vehicle. We
propose evaluation metrics that account for efficiency, controller response, and
crowd interactions in natural crowds. We report the results of over 110 trials
in different crowd types: sparse, flows, and mixed traffic, with low- (< 0.15
ppsm), mid- (< 0.65 ppsm), and high- (< 1 ppsm) pedestrian densities. We
present comparative results between two low-level obstacle avoidance methods
and a baseline of shared control. Results show a 10% drop in relative time to
goal on the highest density tests, and no other efficiency metric decrease.
Moreover, autonomous navigation showed to be comparable to shared-control
navigation with a lower relative jerk and significantly higher fluency in
commands indicating high compatibility with the crowd. We conclude that the
reactive controller fulfils a necessary task of fast and continuous adaptation
to crowd navigation, and it should be coupled with high-level planners for
environmental and situational awareness.
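Two of the metrics referenced above, relative time to goal and jerk, are straightforward to compute from logged trajectories; a sketch follows. The exact definitions used in the paper may differ, so treat these as one illustrative reading.

```python
import numpy as np

def relative_time_to_goal(t_actual, path_length, v_max):
    """Time to goal, normalized by the ideal straight-line time."""
    return t_actual / (path_length / v_max)

def mean_jerk(velocity, dt):
    """Mean magnitude of jerk from a sampled velocity trace."""
    accel = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(accel, dt, axis=0)
    return np.linalg.norm(jerk, axis=1).mean()

dt = 0.1
v = np.stack([np.linspace(0.0, 1.0, 50), np.zeros(50)], axis=1)  # smooth ramp
print(relative_time_to_goal(t_actual=12.0, path_length=10.0, v_max=1.0))  # 1.2
print(mean_jerk(v, dt))  # ~0 for a smooth velocity ramp
```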
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 14:56:03 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Paez-Granados",
"Diego",
""
],
[
"He",
"Yujie",
""
],
[
"Gonon",
"David",
""
],
[
"Jia",
"Dan",
""
],
[
"Leibe",
"Bastian",
""
],
[
"Suzuki",
"Kenji",
""
],
[
"Billard",
"Aude",
""
]
] |
new_dataset
| 0.996306 |
2208.02140
|
Lars Hillebrand
|
Lars Hillebrand, Tobias Deußer, Tim Dilmaghani, Bernd Kliem,
Rüdiger Loitz, Christian Bauckhage, Rafet Sifa
|
KPI-BERT: A Joint Named Entity Recognition and Relation Extraction Model
for Financial Reports
|
Accepted at ICPR 2022, 8 pages, 1 figure, 6 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present KPI-BERT, a system which employs novel methods of named entity
recognition (NER) and relation extraction (RE) to extract and link key
performance indicators (KPIs), e.g. "revenue" or "interest expenses", of
companies from real-world German financial documents. Specifically, we
introduce an end-to-end trainable architecture that is based on Bidirectional
Encoder Representations from Transformers (BERT) combining a recurrent neural
network (RNN) with conditional label masking to sequentially tag entities
before it classifies their relations. Our model also introduces a learnable
RNN-based pooling mechanism and incorporates domain expert knowledge by
explicitly filtering impossible relations. We achieve a substantially higher
prediction performance on a new practical dataset of German financial reports,
outperforming several strong baselines including a competing state-of-the-art
span-based entity tagging approach.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 15:21:28 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Hillebrand",
"Lars",
""
],
[
"Deußer",
"Tobias",
""
],
[
"Dilmaghani",
"Tim",
""
],
[
"Kliem",
"Bernd",
""
],
[
"Loitz",
"Rüdiger",
""
],
[
"Bauckhage",
"Christian",
""
],
[
"Sifa",
"Rafet",
""
]
] |
new_dataset
| 0.981491 |
2208.02159
|
Andrei Costin
|
Hannu Turtiainen, Andrei Costin, Timo Hamalainen
|
CCTV-Exposure: An open-source system for measuring user's privacy
exposure to mapped CCTV cameras based on geo-location (Extended Version)
| null | null | null | null |
cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present CCTV-Exposure -- the first CCTV-aware solution to
evaluate potential privacy exposure to closed-circuit television (CCTV)
cameras. The objective was to develop a toolset for quantifying human exposure
to CCTV cameras from a privacy perspective. Our novel approach is trajectory
analysis of the individuals, coupled with a database of geo-location mapped
CCTV cameras annotated with minimal yet sufficient meta-information. For this
purpose, the CCTV-Exposure model, based on Global Positioning System (GPS)
tracking, was applied to estimate individual privacy exposure in different
scenarios. The current investigation provides an application example and
validation of the modeling approach. The methodology and toolset developed and
implemented in this work provide the time sequence and location sequence of
the exposure events, thus making it possible to associate the exposure with
individual activities and cameras, and deliver the main statistics on an
individual's exposure to CCTV cameras with high spatio-temporal resolution.
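The core matching step, intersecting a GPS track with a camera database, is illustrated below. Camera coverage is reduced to a plain radius, which is a simplifying assumption; the coordinates and radius are made up.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0                          # Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

cameras = [{"id": "cam-17", "lat": 62.2410, "lon": 25.7209, "radius_m": 40}]
track = [(1660000000, 62.2408, 25.7205),   # (unix time, lat, lon) GPS fixes
         (1660000005, 62.2411, 25.7210),
         (1660000010, 62.2430, 25.7300)]

for ts, lat, lon in track:
    for cam in cameras:
        d = haversine_m(lat, lon, cam["lat"], cam["lon"])
        if d <= cam["radius_m"]:
            print(f"t={ts}: exposure event near {cam['id']} ({d:.0f} m)")
```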
|
[
{
"version": "v1",
"created": "Sat, 2 Jul 2022 14:43:44 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Turtiainen",
"Hannu",
""
],
[
"Costin",
"Andrei",
""
],
[
"Hamalainen",
"Timo",
""
]
] |
new_dataset
| 0.997058 |
2208.02210
|
Michail Christos Doukas
|
Michail Christos Doukas, Evangelos Ververas, Viktoriia Sharmanska,
Stefanos Zafeiriou
|
Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present Free-HeadGAN, a person-generic neural talking head synthesis
system. We show that modeling faces with sparse 3D facial landmarks is
sufficient for achieving state-of-the-art generative performance, without
relying on strong statistical priors of the face, such as 3D Morphable Models.
Apart from 3D pose and facial expressions, our method is capable of fully
transferring the eye gaze, from a driving actor to a source identity. Our
complete pipeline consists of three components: a canonical 3D key-point
estimator that regresses 3D pose and expression-related deformations, a gaze
estimation network and a generator that is built upon the architecture of
HeadGAN. We further experiment with an extension of our generator to
accommodate few-shot learning using an attention mechanism, in case more than
one source image is available. Compared to the latest models for reenactment
and motion transfer, our system achieves higher photo-realism combined with
superior identity preservation, while offering explicit gaze control.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 16:46:08 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Doukas",
"Michail Christos",
""
],
[
"Ververas",
"Evangelos",
""
],
[
"Sharmanska",
"Viktoriia",
""
],
[
"Zafeiriou",
"Stefanos",
""
]
] |
new_dataset
| 0.988221 |
2208.02245
|
De-An Huang
|
De-An Huang, Zhiding Yu, Anima Anandkumar
|
MinVIS: A Minimal Video Instance Segmentation Framework without
Video-based Training
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose MinVIS, a minimal video instance segmentation (VIS) framework that
achieves state-of-the-art VIS performance with neither video-based
architectures nor training procedures. By only training a query-based image
instance segmentation model, MinVIS outperforms the previous best result on the
challenging Occluded VIS dataset by over 10% AP. Since MinVIS treats frames in
training videos as independent images, we can drastically sub-sample the
annotated frames in training videos without any modifications. With only 1% of
labeled frames, MinVIS outperforms or is comparable to fully-supervised
state-of-the-art approaches on YouTube-VIS 2019/2021. Our key observation is
that queries trained to be discriminative between intra-frame object instances
are temporally consistent and can be used to track instances without any
manually designed heuristics. MinVIS thus has the following inference pipeline:
we first apply the trained query-based image instance segmentation to video
frames independently. The segmented instances are then tracked by bipartite
matching of the corresponding queries. This inference is done in an online
fashion and does not need to process the whole video at once. MinVIS thus has
the practical advantages of reducing both the labeling costs and the memory
requirements, while not sacrificing the VIS performance. Code is available at:
https://github.com/NVlabs/MinVIS
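The inference-time tracking step, bipartite matching of per-frame query embeddings, can be sketched with scipy's Hungarian solver, as below. This is a schematic of the idea; see the linked repository for the actual implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_queries(prev_q, curr_q):
    """Assign current-frame queries to existing tracks by maximizing
    cosine similarity between query embeddings."""
    prev_n = prev_q / np.linalg.norm(prev_q, axis=1, keepdims=True)
    curr_n = curr_q / np.linalg.norm(curr_q, axis=1, keepdims=True)
    cost = -prev_n @ curr_n.T              # negate: the solver minimizes
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))           # (track id, current query idx)

rng = np.random.default_rng(0)
tracks = rng.standard_normal((3, 8))                   # frame t embeddings
shuffled = tracks[[2, 0, 1]] + 0.01 * rng.standard_normal((3, 8))
print(match_queries(tracks, shuffled))   # [(0, 1), (1, 2), (2, 0)]
```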
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 17:50:42 GMT"
}
] | 2022-08-04T00:00:00 |
[
[
"Huang",
"De-An",
""
],
[
"Yu",
"Zhiding",
""
],
[
"Anandkumar",
"Anima",
""
]
] |
new_dataset
| 0.999201 |
1606.02738
|
Matthieu Schaller
|
Matthieu Schaller (1), Pedro Gonnet (2,3), Aidan B. G. Chalk (2),
Peter W. Draper (1) ((1) ICC, Durham University, (2) ECS, Durham University,
(3) Google Switzerland GmbH)
|
SWIFT: Using task-based parallelism, fully asynchronous communication,
and graph partition-based domain decomposition for strong scaling on more
than 100,000 cores
|
9 pages, 7 figures. Code, scripts and examples available at
http://icc.dur.ac.uk/swift/
| null |
10.1145/2929908.2929916
| null |
cs.DC astro-ph.IM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new open-source cosmological code, called SWIFT, designed to
solve the equations of hydrodynamics using a particle-based approach (Smooth
Particle Hydrodynamics) on hybrid shared/distributed-memory architectures.
SWIFT was designed from the bottom up to provide excellent strong scaling on
both commodity clusters (Tier-2 systems) and Top100-supercomputers (Tier-0
systems), without relying on architecture-specific features or specialized
accelerator hardware. This performance is due to three main computational
approaches: (1) Task-based parallelism for shared-memory parallelism, which
provides fine-grained load balancing and thus strong scaling on large numbers
of cores. (2) Graph-based domain decomposition, which uses the task graph to
decompose the simulation domain such that the work, as opposed to just the
data, as is the case with most partitioning schemes, is equally distributed
across all nodes. (3) Fully dynamic and asynchronous communication, in which
communication is modelled as just another task in the task-based scheme,
sending data whenever it is ready and deferring on tasks that rely on data from
other nodes until it arrives. In order to use these approaches, the code had to
be re-written from scratch, and the algorithms therein adapted to the
task-based paradigm. As a result, we can show upwards of 60% parallel
efficiency for moderate-sized problems when increasing the number of cores
512-fold, on both x86-based and Power8-based architectures.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2016 20:22:15 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Schaller",
"Matthieu",
""
],
[
"Gonnet",
"Pedro",
""
],
[
"Chalk",
"Aidan B. G.",
""
],
[
"Draper",
"Peter W.",
""
]
] |
new_dataset
| 0.999663 |
1710.00273
|
Jason Dou
|
Jason Xiaotian Dou, Michelle Liu, Haaris Muneer, Adam Schlussel
|
What Words Do We Use to Lie?: Word Choice in Deceptive Messages
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text messaging is the most widely used form of computer-mediated
communication (CMC). Previous findings have shown that linguistic factors can
reliably indicate messages as deceptive. For example, users take longer and use
more words to craft deceptive messages than they do truthful messages. Existing
research has also examined how factors, such as student status and gender,
affect rates of deception and word choice in deceptive messages. However, this
research has been limited by small sample sizes and has returned contradictory
findings. This paper aims to address these issues by using a dataset of text
messages collected from a large and varied set of participants using an Android
messaging application. The results of this paper show significant differences
in word choice and frequency of deceptive messages between male and female
participants, as well as between students and non-students.
|
[
{
"version": "v1",
"created": "Sun, 1 Oct 2017 00:04:10 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 21:35:09 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Dou",
"Jason Xiaotian",
""
],
[
"Liu",
"Michelle",
""
],
[
"Muneer",
"Haaris",
""
],
[
"Schlussel",
"Adam",
""
]
] |
new_dataset
| 0.975846 |
2109.06238
|
Gerry Chen
|
Gerry Chen, Sereym Baek, Juan-Diego Florez, Wanli Qian, Sang-won
Leigh, Seth Hutchinson, and Frank Dellaert
|
Extended Version of GTGraffiti: Spray Painting Graffiti Art from Human
Painting Motions with a Cable Driven Parallel Robot
|
Accompanying Details to ICRA 2022 Submission Number 2016
|
2022 International Conference on Robotics and Automation (ICRA),
2022, pp. 4065-4072
|
10.1109/ICRA46639.2022.9812008
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present GTGraffiti, a graffiti painting system from Georgia Tech that
tackles challenges in art, hardware, and human-robot collaboration. The problem
of painting graffiti in a human style is particularly challenging and requires
a system-level approach because the robotics and art must be designed around
each other. The robot must be highly dynamic over a large workspace while the
artist must work within the robot's limitations. Our approach consists of three
stages: artwork capture, robot hardware, and planning & control. We use motion
capture to capture collaborator painting motions which are then composed and
processed into a time-varying linear feedback controller for a cable-driven
parallel robot (CDPR) to execute. In this work, we will describe the capturing
process, the design and construction of a purpose-built CDPR, and the software
for turning an artist's vision into control commands. Our work represents an
important step towards faithfully recreating human graffiti artwork by
demonstrating that we can reproduce artist motions up to 2 m/s and 20 m/s$^2$
within 9.3 mm RMSE to paint artworks. Changes to the submitted manuscript are
colored in blue.
|
[
{
"version": "v1",
"created": "Mon, 13 Sep 2021 18:14:26 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Sep 2021 01:03:48 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Oct 2021 16:38:48 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Chen",
"Gerry",
""
],
[
"Baek",
"Sereym",
""
],
[
"Florez",
"Juan-Diego",
""
],
[
"Qian",
"Wanli",
""
],
[
"Leigh",
"Sang-won",
""
],
[
"Hutchinson",
"Seth",
""
],
[
"Dellaert",
"Frank",
""
]
] |
new_dataset
| 0.998233 |
2109.11011
|
Jarrett Holtz
|
Jarrett Holtz, Joydeep Biswas
|
SOCIALGYM: A Framework for Benchmarking Social Robot Navigation
|
Published in IROS2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots moving safely and in a socially compliant manner in dynamic human
environments is an essential benchmark for long-term robot autonomy. However,
it is not feasible to learn and benchmark social navigation behaviors entirely
in the real world, as learning is data-intensive, and it is challenging to make
safety guarantees during training. Therefore, simulation-based benchmarks that
provide abstractions for social navigation are required. A framework for these
benchmarks would need to support a wide variety of learning approaches, be
extensible to the broad range of social navigation scenarios, and abstract away
the perception problem to focus on social navigation explicitly. While there
have been many proposed solutions, including high fidelity 3D simulators and
grid world approximations, no existing solution satisfies all of the
aforementioned properties for learning and evaluating social navigation
behaviors. In this work, we propose SOCIALGYM, a lightweight 2D simulation
environment for robot social navigation designed with extensibility in mind,
and a benchmark scenario built on SOCIALGYM. Further, we present benchmark
results that compare and contrast human-engineered and model-based learning
approaches to a suite of off-the-shelf Learning from Demonstration (LfD) and
Reinforcement Learning (RL) approaches applied to social robot navigation.
These results demonstrate the data efficiency, task performance, social
compliance, and environment transfer capabilities for each of the policies
evaluated to provide a solid grounding for future social navigation research.
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 19:58:44 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 18:42:50 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Holtz",
"Jarrett",
""
],
[
"Biswas",
"Joydeep",
""
]
] |
new_dataset
| 0.999535 |
2109.12855
|
Jari Pronold
|
Jari Pronold, Jakob Jordan, Brian J. N. Wylie, Itaru Kitayama, Markus
Diesmann, Susanne Kunkel
|
Routing brain traffic through the von Neumann bottleneck: Efficient
cache usage in spiking neural network simulation code on general purpose
computers
| null | null |
10.1016/j.parco.2022.102952
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simulation is a third pillar next to experiment and theory in the study of
complex dynamic systems such as biological neural networks. Contemporary
brain-scale networks correspond to directed graphs of a few million nodes, each
with an in-degree and out-degree of several thousands of edges, where nodes and
edges correspond to the fundamental biological units, neurons and synapses,
respectively. When considering a random graph, each node's edges are
distributed across thousands of parallel processes. The activity in neuronal
networks is also sparse. Each neuron occasionally transmits a brief signal,
called spike, via its outgoing synapses to the corresponding target neurons.
This spatial and temporal sparsity represents an inherent bottleneck for
simulations on conventional computers: Fundamentally irregular memory-access
patterns cause poor cache utilization. Using an established neuronal network
simulation code as a reference implementation, we investigate how common
techniques to recover cache performance such as software-induced prefetching
and software pipelining can benefit a real-world application. The algorithmic
changes reduce simulation time by up to 50%. The study exemplifies that
many-core systems assigned with an intrinsically parallel computational problem
can overcome the von Neumann bottleneck of conventional computer architectures.
|
[
{
"version": "v1",
"created": "Mon, 27 Sep 2021 07:57:11 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Mar 2022 08:59:02 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Pronold",
"Jari",
""
],
[
"Jordan",
"Jakob",
""
],
[
"Wylie",
"Brian J. N.",
""
],
[
"Kitayama",
"Itaru",
""
],
[
"Diesmann",
"Markus",
""
],
[
"Kunkel",
"Susanne",
""
]
] |
new_dataset
| 0.996705 |
2111.04851
|
Andrew Sabelhaus
|
Andrew P. Sabelhaus, Rohan K. Mehta, Anthony T. Wertz, Carmel Majidi
|
In-Situ Sensing and Dynamics Predictions for Electrothermally-Actuated
Soft Robot Limbs
|
17 pages, 8 figures
|
Frontiers in Robotics and AI, 17 May 2022
|
10.3389/frobt.2022.888261
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Untethered soft robots that locomote using electrothermally-responsive
materials like shape memory alloy (SMA) face challenging design constraints for
sensing actuator states. At the same time, modeling of actuator behaviors faces
steep challenges, even with available sensor data, due to complex
electrical-thermal-mechanical interactions and hysteresis. This article
proposes a framework for in-situ sensing and dynamics modeling of actuator
states, particularly temperature of SMA wires, which is used to predict robot
motions. A planar soft limb is developed, actuated by a pair of SMA coils, that
includes compact and robust sensors for temperature and angular deflection.
Data from these sensors are used to train a neural network based on the long
short-term memory (LSTM) architecture to model both unidirectional (single SMA)
and bidirectional (both SMAs) motion. Predictions from the model demonstrate
that data from the temperature sensor, combined with control inputs, allow for
dynamics predictions over extraordinarily long open-loop timescales (10
minutes) with little drift. Prediction errors are on the order of the soft
deflection sensor's accuracy. This architecture allows for compact designs of
electrothermally-actuated soft robots that include sensing sufficient for
motion predictions, helping to bring these robots into practical application.
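In outline, the dynamics model maps control inputs and sensed SMA temperatures to predicted deflection; a minimal PyTorch LSTM of that shape is sketched below. Layer sizes and input dimensions are placeholders, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LimbDynamicsLSTM(nn.Module):
    def __init__(self, n_inputs=4, hidden=64):
        # n_inputs: e.g., 2 PWM duty cycles + 2 SMA wire temperatures
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted bending angle

    def forward(self, x):                  # x: (batch, time, n_inputs)
        h, _ = self.lstm(x)
        return self.head(h).squeeze(-1)    # (batch, time)

model = LimbDynamicsLSTM()
rollouts = torch.randn(8, 600, 4)          # 8 sequences, 600 time steps
print(model(rollouts).shape)               # torch.Size([8, 600])
```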
|
[
{
"version": "v1",
"created": "Mon, 8 Nov 2021 22:19:10 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Mar 2022 16:40:20 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Sabelhaus",
"Andrew P.",
""
],
[
"Mehta",
"Rohan K.",
""
],
[
"Wertz",
"Anthony T.",
""
],
[
"Majidi",
"Carmel",
""
]
] |
new_dataset
| 0.997656 |
2112.03360
|
Tahiya Chowdhury
|
Tahiya Chowdhury, Murtadha Aldeer, Shantanu Laghate, Jorge Ortiz
|
Cadence: A Practical Time-series Partitioning Algorithm for Unlabeled
IoT Sensor Streams
|
28 pages, 13 figures
| null | null | null |
cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Time-series partitioning is an essential step in most machine-learning-driven,
sensor-based IoT applications. This paper introduces a sample-efficient,
robust, time-series segmentation model and algorithm. We show that by learning
a representation specifically with the segmentation objective based on maximum
mean discrepancy (MMD), our algorithm can robustly detect time-series events
across different applications. Our loss function allows us to infer whether
consecutive sequences of samples are drawn from the same distribution (null
hypothesis) and determines the change-point between pairs that reject the null
hypothesis (i.e., come from different distributions). We demonstrate its
applicability in a real-world IoT deployment for ambient-sensing based activity
recognition. Moreover, while many works on change-point detection exist in the
literature, our model is significantly simpler and can be fully trained in 9-93
seconds on average with little variation in hyperparameters for data across
different applications. We empirically evaluate Cadence on four popular change
point detection (CPD) datasets where Cadence matches or outperforms existing
CPD techniques.
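The MMD statistic underlying the segmentation objective has a simple form; the sketch below computes the (biased) squared MMD with an RBF kernel between adjacent windows of a stream, and it peaks near a simulated change point. Window size, kernel bandwidth, and the detection rule are illustrative, not Cadence's learned variant.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of squared MMD between samples x and y."""
    return (rbf(x, x, gamma).mean() + rbf(y, y, gamma).mean()
            - 2.0 * rbf(x, y, gamma).mean())

rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 1, (200, 3)),
                         rng.normal(2, 1, (200, 3))])   # change at t = 200
w = 50
scores = [mmd2(stream[t - w:t], stream[t:t + w]) for t in range(w, 400 - w)]
print("detected change near t =", w + int(np.argmax(scores)))  # ~200
```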
|
[
{
"version": "v1",
"created": "Mon, 6 Dec 2021 21:13:18 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 19:36:43 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Chowdhury",
"Tahiya",
""
],
[
"Aldeer",
"Murtadha",
""
],
[
"Laghate",
"Shantanu",
""
],
[
"Ortiz",
"Jorge",
""
]
] |
new_dataset
| 0.96598 |
2112.08928
|
Andres Lombo
|
Andres E. Lombo, Jesus E. Lares, Matteo Castellani, Chi-Ning Chou,
Nancy Lynch and Karl K. Berggren
|
A Superconducting Nanowire-based Architecture for Neuromorphic Computing
|
29 pages, 10 figures
| null | null | null |
cs.ET cond-mat.supr-con physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neuromorphic computing is poised to further the success of software-based
neural networks by utilizing improved customized hardware. However, the
translation of neuromorphic algorithms to hardware specifications is a problem
that has been seldom explored. Building superconducting neuromorphic systems
requires extensive expertise in both superconducting physics and theoretical
neuroscience. In this work, we aim to bridge this gap by presenting a tool and
methodology to translate algorithmic parameters into circuit specifications. We
first show the correspondence between theoretical neuroscience models and the
dynamics of our circuit topologies. We then apply this tool to solve linear
systems by implementing a spiking neural network with our superconducting
nanowire-based hardware.
|
[
{
"version": "v1",
"created": "Wed, 15 Dec 2021 18:22:46 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Aug 2022 14:02:44 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Lombo",
"Andres E.",
""
],
[
"Lares",
"Jesus E.",
""
],
[
"Castellani",
"Matteo",
""
],
[
"Chou",
"Chi-Ning",
""
],
[
"Lynch",
"Nancy",
""
],
[
"Berggren",
"Karl K.",
""
]
] |
new_dataset
| 0.998263 |
2112.13889
|
Phong Nguyen-Ha
|
Phong Nguyen-Ha, Nikolaos Sarafianos, Christoph Lassner, Janne
Heikkila, Tony Tung
|
Free-Viewpoint RGB-D Human Performance Capture and Rendering
|
Accepted at ECCV 2022, Project page:
https://www.phongnhhn.info/HVS_Net/index.html
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Capturing and faithfully rendering photo-realistic humans from novel views is
a fundamental problem for AR/VR applications. While prior work has shown
impressive performance capture results in laboratory settings, it is
non-trivial to achieve casual free-viewpoint human capture and rendering for
unseen identities with high fidelity, especially for facial expressions, hands,
and clothes. To tackle these challenges we introduce a novel view synthesis
framework that generates realistic renders from unseen views of any human
captured from a single-view and sparse RGB-D sensor, similar to a low-cost
depth camera, and without actor-specific models. We propose an architecture to
create dense feature maps in novel views obtained by sphere-based neural
rendering, and create complete renders using a global context inpainting model.
Additionally, an enhancer network leverages the overall fidelity, even in
occluded areas from the original view, producing crisp renders with fine
details. We show that our method generates high-quality novel views of
synthetic and real human actors given a single-stream, sparse RGB-D input. It
generalizes to unseen identities, and new poses and faithfully reconstructs
facial expressions. Our approach outperforms prior view synthesis methods and
is robust to different levels of depth sparsity.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 20:13:53 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Dec 2021 13:24:37 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Jul 2022 14:19:00 GMT"
},
{
"version": "v4",
"created": "Tue, 2 Aug 2022 10:58:01 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Nguyen-Ha",
"Phong",
""
],
[
"Sarafianos",
"Nikolaos",
""
],
[
"Lassner",
"Christoph",
""
],
[
"Heikkila",
"Janne",
""
],
[
"Tung",
"Tony",
""
]
] |
new_dataset
| 0.990597 |
2203.05955
|
Ryo Okumura
|
Ryo Okumura, Nobuki Nishio and Tadahiro Taniguchi
|
Tactile-Sensitive NewtonianVAE for High-Accuracy Industrial Connector
Insertion
|
7 pages, 4 figures
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An industrial connector insertion task requires submillimeter positioning and
grasp pose compensation for a plug. Thus, highly accurate estimation of the
relative pose between a plug and socket is fundamental for achieving the task.
World models are promising technologies for visuomotor control because they
obtain appropriate state representation to jointly optimize feature extraction
and latent dynamics models. Recent studies show that the NewtonianVAE, a type
of world model, acquires a latent space equivalent to a mapping from images to
physical coordinates. Proportional control can be achieved in the latent space
of NewtonianVAE. However, applying NewtonianVAE to high-accuracy industrial
tasks in physical environments is an open problem. Moreover, the existing
framework does not consider the grasp pose compensation in the obtained latent
space. In this work, we proposed tactile-sensitive NewtonianVAE and applied it
to a USB connector insertion with grasp pose variation in the physical
environments. We adopted a GelSight-type tactile sensor and estimated the
insertion position compensated by the grasp pose of the plug. Our method trains
the latent space in an end-to-end manner, and no additional engineering and
annotation are required. Simple proportional control is available in the
obtained latent space. Moreover, we showed that the original NewtonianVAE fails
in some situations, and demonstrated that domain knowledge induction improves
model accuracy. This domain knowledge can be easily obtained using robot
specification and grasp pose error measurement. We demonstrated that our
proposed method achieved a 100\% success rate and 0.3 mm positioning accuracy
in the USB connector insertion task in the physical environment. It
outperformed SOTA CNN-based two-stage goal pose regression with grasp pose
compensation using coordinate transformation.
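Proportional control in a learned latent space reduces to a one-line law once observations are encoded; a schematic follows, with the learned (tactile-sensitive) encoder stubbed out by a placeholder.

```python
import numpy as np

def encode(observation):
    """Stand-in for the learned encoder; the real system uses the trained
    tactile-sensitive NewtonianVAE here."""
    return np.asarray(observation, dtype=float)[:2]   # pretend 2-D latent

def p_control_step(obs, goal_obs, k_p=0.5):
    z, z_goal = encode(obs), encode(goal_obs)
    return k_p * (z_goal - z)            # action = Kp * latent error

obs, goal = [0.0, 0.0, 9.9], [1.0, -0.5, 9.9]
for _ in range(5):
    u = p_control_step(obs, goal)
    obs = [obs[0] + u[0], obs[1] + u[1], obs[2]]      # toy plant: x += u
print(np.round(obs, 3))   # approaches the goal's first two coordinates
```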
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 09:53:13 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Aug 2022 09:13:35 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Okumura",
"Ryo",
""
],
[
"Nishio",
"Nobuki",
""
],
[
"Taniguchi",
"Tadahiro",
""
]
] |
new_dataset
| 0.999458 |
2207.12503
|
Philip Darke
|
Philip Darke, Paolo Missier and Jaume Bacardit
|
Benchmark time series data sets for PyTorch -- the torchtime package
|
15 pages
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of models for Electronic Health Record data is an area of
active research featuring a small number of public benchmark data sets.
Researchers typically write custom data processing code but this hinders
reproducibility and can introduce errors. The Python package torchtime provides
reproducible implementations of commonly used PhysioNet and UEA & UCR time
series classification repository data sets for PyTorch. Features are provided
for working with irregularly sampled and partially observed time series of
unequal length. It aims to simplify access to PhysioNet data and enable fair
comparisons of models in this exciting area of research.
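
A minimal usage sketch, assuming the torchtime.data module, the UEA class with
dataset/split/train_prop/seed arguments, and dictionary-style batches as
described in the package documentation; names may differ between releases:

from torch.utils.data import DataLoader
from torchtime.data import UEA  # assumed module path; verify against the installed release

# Reproducible 70/30 train split of a UEA/UCR classification data set.
# The argument names (dataset, split, train_prop, seed) follow the package
# documentation as recalled here and may differ between versions.
arrowhead = UEA(dataset="ArrowHead", split="train", train_prop=0.7, seed=123)

loader = DataLoader(arrowhead, batch_size=32, shuffle=True)
for batch in loader:
    X, y, length = batch["X"], batch["y"], batch["length"]  # assumed batch keys
    break  # one batch is enough for this sketch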
|
[
{
"version": "v1",
"created": "Mon, 25 Jul 2022 20:06:36 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 18:33:12 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Darke",
"Philip",
""
],
[
"Missier",
"Paolo",
""
],
[
"Bacardit",
"Jaume",
""
]
] |
new_dataset
| 0.995291 |
2208.00929
|
Cheng Kang
|
Cheng Kang, Jindřich Prokop, Lei Tong, Huiyu Zhou, Yong Hu, Daniel Novak
|
giMLPs: Gate with Inhibition Mechanism in MLPs
|
It needs to be replaced in the future because some extra experiments
should be added
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a new model architecture, the gate with inhibition MLP
(giMLP). The gate with inhibition on CycleMLP (gi-CycleMLP) matches the
original model's performance on the ImageNet classification task, and the
mechanism also improves BERT, RoBERTa, and DeBERTaV3 models by means of two
novel techniques. The first is the gating MLP, where matrix multiplications
between the MLP branch and the trunk attention input further adjust the
model's adaptation. The second is inhibition, which suppresses or enhances the
branch adjustment; as the inhibition level increases, it imposes a stronger
restriction on the model's features. We show that the gi-CycleMLP with a lower
inhibition level can be competitive with the original CycleMLP in terms of
ImageNet classification accuracy. In addition, we show through a comprehensive
empirical study that these techniques significantly improve performance when
fine-tuning on NLU downstream tasks. For fine-tuning the gate-with-inhibition
MLPs on DeBERTa (giDeBERTa), we find that appealing results can be achieved on
most NLU tasks without any additional pretraining. We also find that, with the
gate with inhibition, the activation function should have a short and smooth
negative tail, so that unimportant features, or features that hurt the model,
can be moderately inhibited. Experiments on ImageNet and twelve language
downstream tasks demonstrate the effectiveness of the gate with inhibition,
both for image classification and for enhancing the capability of natural
language fine-tuning without any extra pretraining.
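
The abstract does not spell out the gating formula, so the following numpy
sketch is only one speculative reading of "gate with inhibition" — a sigmoid
gate computed from the MLP branch, with gate values below an inhibition level
alpha zeroed out before modulating the trunk; it is illustrative, not the
paper's implementation:

import numpy as np

def gate_with_inhibition(trunk, branch, alpha):
    """Hypothetical gate-with-inhibition combination (not the paper's exact form).

    trunk:  attention/trunk activations, shape (n, d)
    branch: MLP branch activations, shape (n, d)
    alpha:  inhibition level in [0, 1); larger values suppress more gates
    """
    gate = 1.0 / (1.0 + np.exp(-branch))      # sigmoid gate from the MLP branch
    gate = np.where(gate > alpha, gate, 0.0)  # inhibit weak gate activations
    return trunk * gate                       # element-wise modulation of the trunk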
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 15:23:51 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Aug 2022 09:51:47 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Kang",
"Cheng",
""
],
[
"Prokop",
"Jindich",
""
],
[
"Tong",
"Lei",
""
],
[
"Zhou",
"Huiyu",
""
],
[
"Hu",
"Yong",
""
],
[
"Novak",
"Daneil",
""
]
] |
new_dataset
| 0.992405 |
2208.01100
|
Jicheng Li
|
Jicheng Li, Anjana Bhat, Roghayeh Barmaki
|
Dyadic Movement Synchrony Estimation Under Privacy-preserving Conditions
|
IEEE ICPR 2022. 8 pages, 3 figures
| null | null | null |
cs.CV cs.LG cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Movement synchrony refers to the dynamic temporal connection between the
motions of interacting people. The applications of movement synchrony are
wide-ranging. For example, as a measure of coordination between teammates,
synchrony scores are often reported in sports. The autism community also
identifies movement synchrony as a key indicator of children's social and
developmental achievements. In general, raw video recordings are often used for
movement synchrony estimation, with the drawback that they may reveal people's
identities. Furthermore, such privacy concerns also hinder data sharing, a
major roadblock to a fair comparison between different approaches in autism
research. To address the issue, this paper proposes an ensemble method for
movement synchrony estimation, one of the first deep-learning-based methods for
automatic movement synchrony assessment under privacy-preserving conditions.
Our method relies entirely on publicly shareable, identity-agnostic secondary
data, such as skeleton data and optical flow. We validate our method on two
datasets: (1) PT13 dataset collected from autism therapy interventions and (2)
TASD-2 dataset collected from synchronized diving competitions. In this
context, our method outperforms counterpart approaches, both deep neural
networks and alternative methods.
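
The proposed method is a deep ensemble over skeleton and optical-flow inputs;
purely as an illustration of what a synchrony score computed from
identity-agnostic features can look like, here is a classical windowed
cross-correlation baseline (not the paper's model):

import numpy as np

def synchrony_score(motion_a, motion_b, window=96, max_lag=12):
    """Peak windowed cross-correlation between two 1-D motion series.

    motion_a, motion_b: per-frame, identity-agnostic motion magnitudes,
    e.g. summed skeleton joint displacements or mean optical-flow norms.
    """
    scores = []
    for start in range(0, len(motion_a) - window + 1, window):
        a = motion_a[start:start + window]
        best = -1.0
        for lag in range(-max_lag, max_lag + 1):
            lo = start + lag
            if lo < 0 or lo + window > len(motion_b):
                continue  # lagged window falls outside the recording
            best = max(best, np.corrcoef(a, motion_b[lo:lo + window])[0, 1])
        scores.append(best)
    return float(np.mean(scores))  # mean over windows of the best-lag correlation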
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 18:59:05 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Li",
"Jicheng",
""
],
[
"Bhat",
"Anjana",
""
],
[
"Barmaki",
"Roghayeh",
""
]
] |
new_dataset
| 0.985497 |
2208.01106
|
Amjed Tahir
|
Jens Dietrich, Shawn Rasheed, Amjed Tahir
|
Flaky Test Sanitisation via On-the-Fly Assumption Inference for Tests
with Network Dependencies
|
to appear at IEEE International Working Conference on Source Code
Analysis and Manipulation (SCAM)
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Flaky tests cause significant problems as they can interrupt automated build
processes that rely on all tests succeeding and undermine the trustworthiness
of tests. Numerous causes of test flakiness have been identified, and program
analyses exist to detect such tests. Typically, these methods produce advice to
developers on how to refactor tests in order to make test outcomes
deterministic. We argue that one source of flakiness is the lack of assumptions
that precisely describe under which circumstances a test is meaningful. We
devise a sanitisation technique that can isolate flaky tests quickly by
inferring such assumptions on-the-fly, allowing automated builds to proceed as
flaky tests are ignored. We demonstrate this approach for Java and Groovy
programs by implementing it as extensions for three popular testing frameworks
(JUnit4, JUnit5 and Spock) that can transparently inject the inferred
assumptions. If JUnit5 is used, those extensions can be deployed without
refactoring project source code. We demonstrate and evaluate the utility of our
approach using a set of six popular real-world programs, addressing known test
flakiness issues in these programs caused by dependencies of tests on network
availability. We find that our method effectively sanitises failures induced by
network connectivity problems with high precision and recall.
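
The extensions themselves target JUnit4, JUnit5, and Spock; since this
document's examples are in Python, here is a pytest analogue of the core
idea — an inferred network-reachability assumption that turns a
connectivity-induced failure into a skip (an illustrative translation with a
hypothetical host, not the authors' tool):

import socket
import pytest

def _host_reachable(host, port=443, timeout=2.0):
    """Probe the network dependency a test was inferred to rely on."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The inferred assumption, made explicit: when the dependency is down the
# test is skipped instead of failing, so automated builds can proceed.
@pytest.mark.skipif(not _host_reachable("api.example.com"),  # hypothetical host
                    reason="inferred assumption: api.example.com is reachable")
def test_fetch_remote_resource():
    ...  # the actual network-dependent test body goes here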
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 19:18:24 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Dietrich",
"Jens",
""
],
[
"Rasheed",
"Shawn",
""
],
[
"Tahir",
"Amjed",
""
]
] |
new_dataset
| 0.991119 |
2208.01166
|
Carlos Diaz-Ruiz
|
Carlos A. Diaz-Ruiz (1), Youya Xia (1), Yurong You (1), Jose Nino (1),
Junan Chen (1), Josephine Monica (1), Xiangyu Chen (1), Katie Luo (1), Yan
Wang (1), Marc Emond (1), Wei-Lun Chao (2), Bharath Hariharan (1), Kilian Q.
Weinberger (1), Mark Campbell (1) ((1) Cornell University, (2) The Ohio State
University)
|
Ithaca365: Dataset and Driving Perception under Repeated and Challenging
Weather Conditions
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Advances in perception for self-driving cars have accelerated in recent years
due to the availability of large-scale datasets, typically collected at
specific locations and under favorable weather conditions. Yet, to meet the
high safety requirements, these perceptual systems must operate robustly under
a wide
variety of weather conditions including snow and rain. In this paper, we
present a new dataset to enable robust autonomous driving via a novel data
collection process - data is repeatedly recorded along a 15 km route under
diverse scene (urban, highway, rural, campus), weather (snow, rain, sun), time
(day/night), and traffic conditions (pedestrians, cyclists and cars). The
dataset includes images and point clouds from cameras and LiDAR sensors, along
with high-precision GPS/INS to establish correspondence across routes. The
dataset includes road and object annotations using amodal masks to capture
partial occlusions and 3D bounding boxes. We demonstrate the uniqueness of this
dataset by analyzing the performance of baselines in amodal segmentation of
road and objects, depth estimation, and 3D object detection. The repeated
routes open new research directions in object discovery, continual learning,
and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 22:55:32 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Diaz-Ruiz",
"Carlos A.",
""
],
[
"Xia",
"Youya",
""
],
[
"You",
"Yurong",
""
],
[
"Nino",
"Jose",
""
],
[
"Chen",
"Junan",
""
],
[
"Monica",
"Josephine",
""
],
[
"Chen",
"Xiangyu",
""
],
[
"Luo",
"Katie",
""
],
[
"Wang",
"Yan",
""
],
[
"Emond",
"Marc",
""
],
[
"Chao",
"Wei-Lun",
""
],
[
"Hariharan",
"Bharath",
""
],
[
"Weinberger",
"Kilian Q.",
""
],
[
"Campbell",
"Mark",
""
]
] |
new_dataset
| 0.999909 |
2208.01171
|
Grigor Aslanyan
|
Grigor Aslanyan, Ian Wetherbee
|
Patents Phrase to Phrase Semantic Matching Dataset
|
Presented at the SIGIR PatentSemTech 2022 Workshop. The dataset can
be accessed at
https://www.kaggle.com/datasets/google/google-patent-phrase-similarity-dataset
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
There are many general purpose benchmark datasets for Semantic Textual
Similarity, but none of them focuses on technical concepts found in patents
and scientific publications. This work aims to fill this gap by presenting a
new human rated contextual phrase to phrase matching dataset. The entire
dataset contains close to $50,000$ rated phrase pairs, each with a CPC
(Cooperative Patent Classification) class as a context. This paper describes
the dataset and some baseline models.
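
A hedged sketch of the kind of baseline such a dataset supports — cosine
similarity of phrase embeddings with the CPC class prepended as context; the
encoder checkpoint and the context-injection scheme are illustrative
assumptions, not the paper's reported baselines:

from sentence_transformers import SentenceTransformer, util

# Any general-purpose sentence encoder works for a first baseline; this
# particular checkpoint is an illustrative choice, not the paper's.
model = SentenceTransformer("all-MiniLM-L6-v2")

def phrase_similarity(anchor, target, cpc_class):
    # Prepending the CPC class is an assumed way to inject context, so that
    # e.g. "mold" scores differently under a casting class than a biology class.
    a = model.encode(f"{cpc_class}: {anchor}")
    b = model.encode(f"{cpc_class}: {target}")
    return float(util.cos_sim(a, b))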
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 23:33:30 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Aslanyan",
"Grigor",
""
],
[
"Wetherbee",
"Ian",
""
]
] |
new_dataset
| 0.999772 |
2208.01172
|
Fabian Duffhauss
|
Fabian Duffhauss, Tobias Demmler, Gerhard Neumann
|
MV6D: Multi-View 6D Pose Estimation on RGB-D Frames Using a Deep
Point-wise Voting Network
|
Accepted at IROS 2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating 6D poses of objects is an essential computer vision task. However,
most conventional approaches rely on camera data from a single perspective and
therefore suffer from occlusions. We overcome this issue with our novel
multi-view 6D pose estimation method called MV6D which accurately predicts the
6D poses of all objects in a cluttered scene based on RGB-D images from
multiple perspectives. We base our approach on the PVN3D network that uses a
single RGB-D image to predict keypoints of the target objects. We extend this
approach by using a combined point cloud from multiple views and fusing the
images from each view with a DenseFusion layer. In contrast to current
multi-view pose detection networks such as CosyPose, our MV6D can learn the
fusion of multiple perspectives in an end-to-end manner and does not require
multiple prediction stages or subsequent fine-tuning of the prediction.
Furthermore, we present three novel photorealistic datasets of cluttered scenes
with heavy occlusions. All of them contain RGB-D images from multiple
perspectives and the ground truth for instance semantic segmentation and 6D
pose estimation. MV6D significantly outperforms the state-of-the-art in
multi-view 6D pose estimation, even in cases where the camera poses are known
only inaccurately. Furthermore, we show that our approach is robust to dynamic
camera setups and that its accuracy increases incrementally with a growing
number of perspectives.
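
A minimal numpy sketch of the multi-view combination step mentioned above —
each view's point cloud is mapped into a common world frame with known
extrinsics and the clouds are concatenated; the DenseFusion-based network
itself is not reproduced here:

import numpy as np

def combine_point_clouds(clouds, extrinsics):
    """Merge per-view point clouds into a single cloud in the world frame.

    clouds:     list of (N_i, 3) arrays, one per camera, in camera coordinates
    extrinsics: list of (4, 4) camera-to-world transforms (assumed known)
    """
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
        merged.append((homo @ T.T)[:, :3])               # rigid transform per view
    return np.vstack(merged)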
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 23:34:43 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Duffhauss",
"Fabian",
""
],
[
"Demmler",
"Tobias",
""
],
[
"Neumann",
"Gerhard",
""
]
] |
new_dataset
| 0.991877 |
2208.01201
|
Maria Nyamukuru
|
Kofi Odame, Maria Nyamukuru, Mohsen Shahghasemi, Shengjie Bi, David
Kotz
|
Analog Gated Recurrent Neural Network for Detecting Chewing Events
|
11 pages, 16 figures
| null | null | null |
cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel gated recurrent neural network to detect when a person is
chewing on food. We implemented the neural network as a custom analog
integrated circuit in a 0.18 um CMOS technology. The neural network was trained
on 6.4 hours of data collected from a contact microphone that was mounted on
volunteers' mastoid bones. When tested on 1.6 hours of previously-unseen data,
the neural network identified chewing events at a 24-second time resolution. It
achieved a recall of 91% and an F1-score of 94% while consuming 1.1 uW of
power. A system for detecting whole eating episodes -- like meals and snacks --
that is based on the novel analog neural network consumes an estimated 18.8 uW
of power.
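
For reference, a numpy sketch of the textbook GRU update, the digital
reference point that such an analog gated recurrent cell approximates; the
chip's actual gating circuits, precision, and nonlinearities will differ from
this idealized form:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One textbook GRU update: h' = (1 - z) * h + z * h_tilde."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde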
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 01:57:49 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Odame",
"Kofi",
""
],
[
"Nyamukuru",
"Maria",
""
],
[
"Shahghasemi",
"Mohsen",
""
],
[
"Bi",
"Shengjie",
""
],
[
"Kotz",
"David",
""
]
] |
new_dataset
| 0.998285 |