id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2205.07056
|
Hengcan Shi
|
Hengcan Shi, Munawar Hayat, Jianfei Cai
|
Transformer Scale Gate for Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Effectively encoding multi-scale contextual information is crucial for
accurate semantic segmentation. Existing transformer-based segmentation models
combine features across scales without any selection, where features on
sub-optimal scales may degrade segmentation outcomes. Leveraging the
inherent properties of Vision Transformers, we propose a simple yet effective
module, Transformer Scale Gate (TSG), to optimally combine multi-scale
features. TSG exploits cues in the self- and cross-attentions of Vision
Transformers for scale selection. TSG is a highly flexible plug-and-play
module and can easily be incorporated with any encoder-decoder-based
hierarchical vision Transformer architecture. Extensive experiments on the
Pascal Context and ADE20K datasets demonstrate that our feature selection
strategy achieves consistent gains.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 13:11:39 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Shi",
"Hengcan",
""
],
[
"Hayat",
"Munawar",
""
],
[
"Cai",
"Jianfei",
""
]
] |
new_dataset
| 0.96967 |
2205.07060
|
Anssi Kanervisto
|
Anssi Kanervisto, Tomi Kinnunen, Ville Hautamäki
|
GAN-Aimbots: Using Machine Learning for Cheating in First Person
Shooters
|
Accepted to IEEE Transactions on Games. Source code available at
https://github.com/miffyli/gan-aimbots
| null |
10.1109/TG.2022.3173450
| null |
cs.AI cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Playing games with cheaters is not fun, and in a multi-billion-dollar video
game industry with hundreds of millions of players, game developers aim to
improve the security and, consequently, the user experience of their games by
preventing cheating. Both traditional software-based methods and statistical
systems have been successful in protecting against cheating, but recent
advances in the automatic generation of content, such as images or speech,
threaten the video game industry; they could be used to generate artificial
gameplay indistinguishable from that of legitimate human players. To better
understand this threat, we begin by reviewing the current state of multiplayer
video game cheating, and then proceed to build a proof-of-concept method,
GAN-Aimbot. By gathering data from various players in a first-person shooter
game, we show that the method improves players' performance while remaining
hidden from automatic and manual protection mechanisms. By sharing this work we
hope to raise awareness of this issue and encourage further research into
protecting gaming communities.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 13:33:23 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Kanervisto",
"Anssi",
""
],
[
"Kinnunen",
"Tomi",
""
],
[
"Hautamäki",
"Ville",
""
]
] |
new_dataset
| 0.990761 |
2205.07066
|
Guilherme Maeda
|
Guilherme Maeda, Naoki Fukaya, Shin-ichi Maeda
|
F1 Hand: A Versatile Fixed-Finger Gripper for Delicate Teleoperation and
Autonomous Grasping
|
Accepted for publication at the IEEE Robotics and Automation Letters
(RA-L)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Teleoperation is often limited by the ability of an operator to react to and
predict the behavior of the robot as it interacts with the environment. For
example, to grasp small objects on a table, the teleoperator needs to predict
the position of the fingertips before the fingers are closed to avoid them
hitting the table. For that reason, we developed the F1 hand, a single-motor
gripper that facilitates teleoperation with the use of a fixed finger. The hand
is capable of grasping objects as thin as a paper clip, and as heavy and large
as a cordless drill. The applicability of the hand can be expanded by replacing
the fixed finger with different shapes. This flexibility makes the hand highly
versatile while being easy and cheap to develop. However, due to the atypical
asymmetric structure and actuation of the hand, usual grasping strategies no
longer apply. Thus, we propose a controller that approximates actuation
symmetry by using the motion of the whole arm. The F1 hand and its controller
are compared side-by-side with the original Toyota Human Support Robot (HSR)
gripper in teleoperation using 22 objects from the YCB dataset in addition to
small objects. Grasping time and peak contact forces were decreased by
20% and 70%, respectively, while success rates increased by 5%. Using an
off-the-shelf grasp pose estimator for autonomous grasping, the system achieved
success rates similar to the original HSR gripper, on the order of 80%.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 13:40:19 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Maeda",
"Guilherme",
""
],
[
"Fukaya",
"Naoki",
""
],
[
"Maeda",
"Shin-ichi",
""
]
] |
new_dataset
| 0.999338 |
2205.07075
|
Dennis Haitz
|
Dennis Haitz, Boris Jutzi, Patrick Huebner, Markus Ulrich
|
Corrosion Detection for Industrial Objects: From Multi-Sensor System to
5D Feature Space
|
8 pages, 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Corrosion is a form of damage that often appears on the surface of metal
objects used in industrial applications. Such damage can be critical
depending on the purpose of the object. Optical testing systems
provide a form of non-contact data acquisition, where the acquired data can
then be used to analyse the surface of an object. In the field of industrial
image processing, this is called surface inspection. Our multi-sensor testing
setup consists of a rotary table that rotates the object by 360 degrees, as
well as industrial RGB cameras and laser triangulation sensors for the
acquisition of 2D and 3D data. These sensors acquire data while the object
under test completes a full rotation. Furthermore, data augmentation is
applied to prepare new data or enhance already acquired data. In order to
evaluate the impact of a laser triangulation sensor on corrosion detection,
one challenge is first to fuse the data of both domains. After the data
fusion process, 5 different channels can be utilized to create a 5D feature
space. Besides the red, green, and blue channels of the image (1-3), additional
range data from the laser triangulation sensor is incorporated (4). As a fifth
channel, the sensor provides additional intensity data (5). With
multi-channel image classification, the 5D feature space leads to slightly
superior results compared to a 3D feature space composed of only the RGB
channels of the image.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 14:45:58 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Haitz",
"Dennis",
""
],
[
"Jutzi",
"Boris",
""
],
[
"Huebner",
"Patrick",
""
],
[
"Ulrich",
"Markus",
""
]
] |
new_dataset
| 0.997737 |
2205.07096
|
Sandipan Das
|
Sandipan Das, Navid Mahabadi, Saikat Chatterjee, Maurice Fallon
|
Multi-modal curb detection and filtering
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Reliable knowledge of road boundaries is critical for autonomous vehicle
navigation. We propose a robust curb detection and filtering technique based on
the fusion of camera semantics and dense lidar point clouds. The lidar point
clouds are collected by fusing multiple lidars for robust feature detection.
The camera semantics are based on a modified EfficientNet architecture which is
trained with labeled data collected from onboard fisheye cameras. The point
clouds are associated with the closest curb segment with $L_2$-norm analysis
after projecting into the image space with the fisheye model projection. Next,
the selected points are clustered using unsupervised density-based spatial
clustering to detect different curb regions. As new curb points are detected in
consecutive frames they are associated with the existing curb clusters using
temporal reachability constraints. If no reachability constraints are found, a
new curb cluster is formed from these new points. This ensures we can detect
multiple curbs present in road segments consisting of multiple lanes if they
are in the sensors' field of view. Finally, Delaunay filtering is applied for
outlier removal and its performance is compared to traditional RANSAC-based
filtering. An objective evaluation of the proposed solution is done using a
high-definition map containing ground truth curb points obtained from a
commercial map supplier. The proposed system has proven capable of detecting
curbs of any orientation in complex urban road scenarios comprising straight
roads, curved roads, and intersections with traffic isles.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 17:03:41 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Das",
"Sandipan",
""
],
[
"Mahabadi",
"Navid",
""
],
[
"Chatterjee",
"Saikat",
""
],
[
"Fallon",
"Maurice",
""
]
] |
new_dataset
| 0.993854 |
2205.07204
|
Liuyue Jiang
|
Liuyue Jiang, Nguyen Khoi Tran, M. Ali Babar
|
Mod2Dash: A Framework for Model-Driven Dashboards Generation
| null | null |
10.1145/3534526
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The construction of an interactive dashboard involves deciding what
information to present and how to display it, and then implementing those design
decisions to create an operational dashboard. Traditionally, a dashboard's
design is implied in the deployed dashboard rather than captured explicitly as
a digital artifact, preventing it from being backed up, version-controlled, and
shared. Moreover, practitioners have to implement this implicit design manually
by coding or configuring it on a dashboard platform. This paper proposes
Mod2Dash, a software framework that enables practitioners to capture their
dashboard designs as models and generate operational dashboards automatically
from these models. The framework also provides a GUI-driven customization
approach for practitioners to fine-tune the auto-generated dashboards and
update their models. With these abilities, Mod2Dash enables practitioners to
rapidly prototype and deploy dashboards for both operational and research
purposes. We evaluated the framework's effectiveness in a case study on cyber
security visualization for decision support. A proof-of-concept of Mod2Dash was
employed to model and reconstruct 31 diverse real-world cyber security
dashboards. A human-assisted comparison between the Mod2Dash-generated
dashboards and the baseline dashboards shows a close match, indicating the
framework's effectiveness for real-world scenarios.
|
[
{
"version": "v1",
"created": "Sun, 15 May 2022 07:20:08 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Jiang",
"Liuyue",
""
],
[
"Tran",
"Nguyen Khoi",
""
],
[
"Babar",
"M. Ali",
""
]
] |
new_dataset
| 0.993545 |
2205.07303
|
Yuan Sun
|
Yuan Sun, Sisi Liu, Junjie Deng, Xiaobing Zhao
|
TiBERT: Tibetan Pre-trained Language Model
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained language models are trained on large-scale unlabeled text and
can achieve state-of-the-art results on many different downstream tasks.
However, current pre-trained language models are mainly concentrated on
Chinese and English. For low-resource languages such as Tibetan, there is a
lack of monolingual pre-trained models. To promote the development of Tibetan
natural language processing, this paper collects large-scale training data
from Tibetan websites and constructs a vocabulary that can cover 99.95$\%$
of the words in the corpus by using SentencePiece. We then train a Tibetan
monolingual pre-trained language model, named TiBERT, on this data and
vocabulary. Finally, we apply TiBERT to the downstream tasks of text
classification and question generation, and compare it with classic models and
multilingual pre-trained models. The experimental results show that TiBERT
achieves the best performance. Our model is published at
http://tibert.cmli-nlp.com/
|
[
{
"version": "v1",
"created": "Sun, 15 May 2022 14:45:08 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Sun",
"Yuan",
""
],
[
"Liu",
"Sisi",
""
],
[
"Deng",
"Junjie",
""
],
[
"Zhao",
"Xiaobing",
""
]
] |
new_dataset
| 0.999326 |
2205.07309
|
Yinan Huang
|
Yinan Huang, Xingang Peng, Jianzhu Ma, Muhan Zhang
|
3DLinker: An E(3) Equivariant Variational Autoencoder for Molecular
Linker Design
| null | null | null | null |
cs.LG cs.AI q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning has achieved tremendous success in designing novel chemical
compounds with desirable pharmaceutical properties. In this work, we focus on a
new type of drug design problem -- generating a small "linker" to physically
attach two independent molecules with their distinct functions. The main
computational challenges include: 1) the generation of linkers is conditional
on the two given molecules, in contrast to generating full molecules from
scratch in previous works; 2) linkers heavily depend on the anchor atoms of the
two molecules to be connected, which are not known beforehand; 3) 3D structures
and orientations of the molecules need to be considered to avoid atom clashes,
for which equivariance to the E(3) group is necessary. To address these
problems, we propose a conditional generative model, named 3DLinker, which is
able to predict anchor atoms and jointly generate linker graphs and their 3D
structures based on an E(3) equivariant graph variational autoencoder. To the
best of our knowledge, no previous model can achieve this task. We compare our
model with multiple conditional generative models adapted from other molecular
design tasks and find that our model has a significantly higher rate of
recovering molecular graphs and, more importantly, of accurately predicting the
3D coordinates of all the atoms.
|
[
{
"version": "v1",
"created": "Sun, 15 May 2022 15:26:29 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Huang",
"Yinan",
""
],
[
"Peng",
"Xingang",
""
],
[
"Ma",
"Jianzhu",
""
],
[
"Zhang",
"Muhan",
""
]
] |
new_dataset
| 0.990732 |
2205.07394
|
Gagandeep Singh
|
Gagandeep Singh, Rakesh Nadig, Jisung Park, Rahul Bera, Nastaran
Hajinazar, David Novo, Juan G\'omez-Luna, Sander Stuijk, Henk Corporaal, Onur
Mutlu
|
Sibyl: Adaptive and Extensible Data Placement in Hybrid Storage Systems
Using Online Reinforcement Learning
| null | null | null | null |
cs.AR cs.AI cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hybrid storage systems (HSS) use multiple different storage devices to
provide high and scalable storage capacity at high performance. Recent research
proposes various techniques that aim to accurately identify
performance-critical data to place it in a "best-fit" storage device.
Unfortunately, most of these techniques are rigid, which (1) limits their
adaptivity to perform well for a wide range of workloads and storage device
configurations, and (2) makes it difficult for designers to extend these
techniques to different storage system configurations (e.g., with a different
number or different types of storage devices) than the configuration they are
designed for. We introduce Sibyl, the first technique that uses reinforcement
learning for data placement in hybrid storage systems. Sibyl observes different
features of the running workload as well as the storage devices to make
system-aware data placement decisions. For every decision it makes, Sibyl
receives a reward from the system that it uses to evaluate the long-term
performance impact of its decision and continuously optimizes its data
placement policy online. We implement Sibyl on real systems with various HSS
configurations. Our results show that Sibyl provides 21.6%/19.9% performance
improvement in a performance-oriented/cost-oriented HSS configuration compared
to the best previous data placement technique. Our evaluation using an HSS
configuration with three different storage devices shows that Sibyl outperforms
the state-of-the-art data placement policy by 23.9%-48.2%, while significantly
reducing the system architect's burden in designing a data placement mechanism
that can simultaneously incorporate three storage devices. We show that Sibyl
achieves 80% of the performance of an oracle policy that has complete knowledge
of future access patterns while incurring a very modest storage overhead of
only 124.4 KiB.
|
[
{
"version": "v1",
"created": "Sun, 15 May 2022 22:53:36 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Singh",
"Gagandeep",
""
],
[
"Nadig",
"Rakesh",
""
],
[
"Park",
"Jisung",
""
],
[
"Bera",
"Rahul",
""
],
[
"Hajinazar",
"Nastaran",
""
],
[
"Novo",
"David",
""
],
[
"Gómez-Luna",
"Juan",
""
],
[
"Stuijk",
"Sander",
""
],
[
"Corporaal",
"Henk",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.96331 |
2205.07407
|
Xiaohan Yang
|
Xiaohan Yang, Eduardo Peynetti, Vasco Meerman, Chris Tanner
|
What GPT Knows About Who is Who
|
Accepted by ACL 2022 Workshop on Insights from Negative Results in
NLP
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Coreference resolution -- which is a crucial task for understanding discourse
and language at large -- has yet to witness widespread benefits from large
language models (LLMs). Moreover, coreference resolution systems largely rely
on supervised labels, which are highly expensive and difficult to annotate,
thus making it ripe for prompt engineering. In this paper, we introduce a
QA-based prompt-engineering method and discern \textit{generative}, pre-trained
LLMs' abilities and limitations toward the task of coreference resolution. Our
experiments show that GPT-2 and GPT-Neo can return valid answers, but that
their capabilities to identify coreferent mentions are limited and
prompt-sensitive, leading to inconsistent results.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 00:59:37 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Yang",
"Xiaohan",
""
],
[
"Peynetti",
"Eduardo",
""
],
[
"Meerman",
"Vasco",
""
],
[
"Tanner",
"Chris",
""
]
] |
new_dataset
| 0.997509 |
2205.07437
|
Jiahao Li
|
Jiahao Li, Alexis Samoylov, Jeeeun Kim, Xiang 'Anthony' Chen
|
Roman: Making Everyday Objects Robotically Manipulable with 3D-Printable
Add-on Mechanisms
| null | null |
10.1145/3491102.3501818
| null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
One important vision of robotics is to provide physical assistance by
manipulating different everyday objects, e.g., hand tools, kitchen utensils.
However, many objects designed for dexterous hand-control are not easily
manipulable by a single robotic arm with a generic parallel gripper.
Complementary to existing research on developing grippers and control
algorithms, we present Roman, a suite of hardware design and software tool
support for robotic engineers to create 3D printable mechanisms attached to
everyday handheld objects, making them easier to manipulate with conventional
robotic arms. The Roman hardware comes with a versatile magnetic gripper that
can snap on/off handheld objects and drive add-on mechanisms to perform tasks.
Roman also provides software support to register and author control programs.
To validate our approach, we designed and fabricated Roman mechanisms for 14
everyday objects/tasks presented within a design space and conducted expert
interviews with robotic engineers indicating that Roman serves as a practical
alternative for enabling robotic manipulation of everyday objects.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 04:19:43 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Li",
"Jiahao",
""
],
[
"Samoylov",
"Alexis",
""
],
[
"Kim",
"Jeeeun",
""
],
[
"Chen",
"Xiang 'Anthony'",
""
]
] |
new_dataset
| 0.999523 |
2205.07441
|
Hengwei Zhang
|
Hengwei Zhang, Hua Yang, Haitao Wang, Zhigang Wang, Shengmin Zhang,
Ming Chen
|
Autonomous Electric Vehicle Battery Disassembly Based on NeuroSymbolic
Computing
|
Accepted to IntelliSys 2022(Intelligent Systems Conference),15 pages
with 6 figures
| null | null |
CONF-220910
|
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The booming of electric vehicles demands efficient battery disassembly for
recycling to be environment-friendly. Due to the unstructured environment and
high uncertainties, battery disassembly is still primarily done by humans,
probably assisted by robots. It is highly desirable to design autonomous
solutions to improve work efficiency and lower human risks in high voltage and
toxic environments. This paper proposes a novel NeuroSymbolic task and motion
planning framework for automatically disassembling batteries in an
unstructured environment with robots. It enables robots to independently
locate and disassemble battery bolts, with or without obstacles. This study not
only provides a solution for intelligently disassembling electric vehicle
batteries but also verifies its feasibility through a set of test results with
the robot accomplishing the disassembly tasks in a complex and dynamic
environment.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 04:32:53 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Zhang",
"Hengwei",
""
],
[
"Yang",
"Hua",
""
],
[
"Wang",
"Haitao",
""
],
[
"Wang",
"Zhigang",
""
],
[
"Zhang",
"Shengmin",
""
],
[
"Chen",
"Ming",
""
]
] |
new_dataset
| 0.991449 |
2205.07446
|
Yen-Ting Lin
|
Yen-Ting Lin, Hui-Chi Kuo, Ze-Song Xu, Ssu Chiu, Chieh-Chi Hung,
Yi-Cheng Chen, Chao-Wei Huang, Yun-Nung Chen
|
Miutsu: NTU's TaskBot for the Alexa Prize
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces Miutsu, National Taiwan University's Alexa Prize
TaskBot, which is designed to assist users in completing tasks requiring
multiple steps and decisions in two different domains -- home improvement and
cooking. We overview our system design and architectural goals, and detail the
proposed core elements, including question answering, task retrieval, social
chatting, and various conversational modules. A dialogue flow is proposed to
provide a robust and engaging conversation when handling complex tasks. We
discuss the challenges faced during the competition and potential future work.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 04:56:55 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Lin",
"Yen-Ting",
""
],
[
"Kuo",
"Hui-Chi",
""
],
[
"Xu",
"Ze-Song",
""
],
[
"Chiu",
"Ssu",
""
],
[
"Hung",
"Chieh-Chi",
""
],
[
"Chen",
"Yi-Cheng",
""
],
[
"Huang",
"Chao-Wei",
""
],
[
"Chen",
"Yun-Nung",
""
]
] |
new_dataset
| 0.999477 |
2205.07452
|
Mike Wu
|
Mike Wu, Will McTighe
|
Constant Power Root Market Makers
|
16 pages; proofs inline
| null | null | null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
The paper introduces a new type of constant function market maker, the
constant power root market maker. We show that the constant sum (used by
mStable), constant product (used by Uniswap and Balancer), constant reserve
(HOLD-ing), and constant harmonic mean trading functions are special cases of
the constant power root trading function. We derive the value function for
liquidity providers, marginal price function, price impact function,
impermanent loss function, and greeks for constant power root market makers.
In particular, we find that as the power q varies over the range from -infinity
to 1, the power root function interpolates between the harmonic (q=-1),
geometric (q=0), and arithmetic (q=1) means. This provides a toggle that trades
off between price slippage for traders and impermanent loss for liquidity
providers. As the power q approaches 1, slippage is low and impermanent loss is
high. As q approaches -1, price slippage increases and impermanent loss
decreases.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 05:38:13 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Wu",
"Mike",
""
],
[
"McTighe",
"Will",
""
]
] |
new_dataset
| 0.987459 |
2205.07500
|
Giacomo Ortali
|
Walter Didimo, Michael Kaufmann, Giuseppe Liotta, Giacomo Ortali
|
Computing Bend-Minimum Orthogonal Drawings of Plane Series-Parallel
Graphs in Linear Time
|
arXiv admin note: text overlap with arXiv:2008.03784
| null | null | null |
cs.CG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A planar orthogonal drawing of a planar 4-graph $G$ (i.e., a planar graph with
vertex-degree at most four) is a crossing-free drawing that maps each vertex of
$G$ to a distinct point of the plane and each edge of $G$ to a sequence of
horizontal and vertical segments between its end-points. A longstanding open
question in Graph Drawing, dating back over 30 years, is whether there exists a
linear-time algorithm to compute an orthogonal drawing of a plane 4-graph with
the minimum number of bends. The term "plane" indicates that the input graph
comes together with a planar embedding, which must be preserved by the drawing
(i.e., the drawing must have the same set of faces as the input graph). In this
paper, we positively answer the question above for the widely-studied class of
series-parallel graphs. Our linear-time algorithm is based on a
characterization of the planar series-parallel graphs that admit an orthogonal
drawing without bends. This characterization is given in terms of the
orthogonal spirality that each type of triconnected component of the graph can
take; the orthogonal spirality of a component measures how much that component
is "rolled-up" in an orthogonal drawing of the graph.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 08:23:09 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Didimo",
"Walter",
""
],
[
"Kaufmann",
"Michael",
""
],
[
"Liotta",
"Giuseppe",
""
],
[
"Ortali",
"Giacomo",
""
]
] |
new_dataset
| 0.995919 |
2205.07502
|
Lei Zhang
|
Lei Zhang, Yu Pan, Yi Liu, Qibin Zheng, Zhisong Pan
|
KGRGRL: A User's Permission Reasoning Method Based on Knowledge Graph
Reward Guidance Reinforcement Learning
|
8 pages, 2 figures
| null | null | null |
cs.AI cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In general, multiple-domain cyberspace security assessments can be
implemented by reasoning about users' permissions. However, while existing
methods include some information from the physical and social domains, they do
not provide a comprehensive representation of cyberspace. Existing reasoning
methods are also based on expert-given rules, resulting in inefficiency and a
low degree of intelligence. To address this challenge, we create a Knowledge
Graph (KG) of multiple-domain cyberspace in order to provide a standard
semantic description of it. Following that, we propose a user-permission
reasoning method based on reinforcement learning. All permissions in cyberspace
are represented as nodes, and an agent is trained to find all permissions that
a user can have according to the user's initial permissions and the cyberspace
KG. We set 10 reward rules based on the features of the cyberspace KG for the
reward setting in reinforcement learning, so that the agent can better locate
all of a user's permissions and avoid blindly searching for them. The results
of the experiments show that the proposed method can successfully reason about
users' permissions and increase the intelligence level of permission reasoning.
At the same time, the F1 value of the proposed method is 6% greater than that
of the Translating Embedding (TransE) method.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 08:28:23 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Zhang",
"Lei",
""
],
[
"Pan",
"Yu",
""
],
[
"Liu",
"Yi",
""
],
[
"Zheng",
"Qibin",
""
],
[
"Pan",
"Zhisong",
""
]
] |
new_dataset
| 0.997772 |
2205.07529
|
Pedro Antonino
|
Pedro Antonino and Juliandson Ferreira and Augusto Sampaio and A. W.
Roscoe
|
Specification is Law: Safe Creation and Upgrade of Ethereum Smart
Contracts
| null | null | null | null |
cs.SE cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Smart contracts are the building blocks of the "code is law" paradigm: the
smart contract's code indisputably describes how its assets are to be managed -
once it is created, its code is typically immutable. Faulty smart contracts
present the most significant evidence against the practicality of this
paradigm; they are well-documented and resulted in assets worth vast sums of
money being compromised. To address this issue, the Ethereum community proposed
(i) tools and processes to audit/analyse smart contracts, and (ii) design
patterns implementing a mechanism to make contract code mutable. Individually,
(i) and (ii) only partially address the challenges raised by the "code is law"
paradigm. In this paper, we combine elements from (i) and (ii) to create a
systematic framework that moves away from "code is law" and gives rise to a new
"specification is law" paradigm. It allows contracts to be created and upgraded
but only if they meet a corresponding formal specification. The framework is
centered around \emph{a trusted deployer}: an off-chain service that formally
verifies and enforces this notion of conformance. We have prototyped this
framework, and investigated its applicability to contracts implementing two
widely used Ethereum standards: the ERC20 Token Standard and ERC1155 Multi
Token Standard, with promising results.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 09:08:48 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Antonino",
"Pedro",
""
],
[
"Ferreira",
"Juliandson",
""
],
[
"Sampaio",
"Augusto",
""
],
[
"Roscoe",
"A. W.",
""
]
] |
new_dataset
| 0.950821 |
2205.07548
|
Johannes Oetsch
|
Thomas Eiter, Nelson Higuera, Johannes Oetsch, and Michael Pritz
|
A Neuro-Symbolic ASP Pipeline for Visual Question Answering
|
Paper presented at the 38th International Conference on Logic
Programming (ICLP 2022), 15 pages
| null | null | null |
cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a neuro-symbolic visual question answering (VQA) pipeline for
CLEVR, which is a well-known dataset that consists of pictures showing scenes
with objects and questions related to them. Our pipeline covers (i) training
neural networks for object classification and bounding-box prediction of the
CLEVR scenes, (ii) statistical analysis on the distribution of prediction
values of the neural networks to determine a threshold for high-confidence
predictions, and (iii) a translation of CLEVR questions and network predictions
that pass confidence thresholds into logic programs so that we can compute the
answers using an ASP solver. By exploiting choice rules, we consider
deterministic and non-deterministic scene encodings. Our experiments show that
the non-deterministic scene encoding achieves good results even if the neural
networks are trained rather poorly in comparison with the deterministic
approach. This is important for building robust VQA systems if network
predictions are less than perfect. Furthermore, we show that restricting
non-determinism to reasonable choices allows for more efficient implementations
in comparison with related neuro-symbolic approaches without losing much
accuracy. This work is under consideration for acceptance in TPLP.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 09:50:37 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Eiter",
"Thomas",
""
],
[
"Higuera",
"Nelson",
""
],
[
"Oetsch",
"Johannes",
""
],
[
"Pritz",
"Michael",
""
]
] |
new_dataset
| 0.995765 |
2205.07627
|
Nour Ramzy
|
Nour Ramzy, Soren Auer, Javad Chamanara, Hans Ehm
|
KnowGraph-PM: a Knowledge Graph based Pricing Model for Semiconductors
Supply Chains
| null | null | null | null |
cs.DB cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Semiconductor supply chains are characterized by significant demand
fluctuation that increases as one moves up the supply chain, the so-called
bullwhip effect. To counteract it, semiconductor manufacturers aim to optimize
capacity utilization, to deliver with shorter lead times, and to exploit this to
generate revenue. Additionally, in a competitive market, firms seek to maintain
customer relationships while applying revenue management strategies such as
dynamic pricing. Price changes potentially generate conflicts with customers. In
this paper, we present KnowGraph-PM, a knowledge graph-based dynamic pricing
model. The semantic model uses the potential of faster delivery and shorter lead
times to define premium prices, thus entailing increased profits based on the customer
profile. The knowledge graph enables the integration of customer-related
information, e.g., customer class and location to customer order data. The
pricing algorithm is realized as a SPARQL query that relies on customer profile
and order behavior to determine the corresponding price premium. We evaluate
the approach by calculating the revenue generated after applying the pricing
algorithm. Based on competency questions that translate to SPARQL queries, we
validate the created knowledge graph. We demonstrate that semantic data
integration enables customer-tailored revenue management.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 10:34:57 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Ramzy",
"Nour",
""
],
[
"Auer",
"Soren",
""
],
[
"Chamanara",
"Javad",
""
],
[
"Ehm",
"Hans",
""
]
] |
new_dataset
| 0.996946 |
2205.07646
|
Senjie Liang
|
Liang Huang, Senjie Liang, Feiyang Ye, Nan Gao
|
A Fast Attention Network for Joint Intent Detection and Slot Filling on
Edge Devices
|
9 pages, 4 figures
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intent detection and slot filling are two main tasks in natural language
understanding and play an essential role in task-oriented dialogue systems. The
joint learning of both tasks can improve inference accuracy and is popular in
recent works. However, most joint models ignore the inference latency and
cannot meet the need to deploy dialogue systems at the edge. In this paper, we
propose a Fast Attention Network (FAN) for joint intent detection and slot
filling tasks, guaranteeing both accuracy and latency. Specifically, we
introduce a clean and parameter-refined attention module to enhance the
information exchange between intent and slot, improving semantic accuracy by
more than 2%. FAN can be implemented on different encoders and delivers more
accurate models at every speed level. Our experiments on the Jetson Nano
platform show that FAN infers fifteen utterances per second with a small
accuracy drop, demonstrating its effectiveness and efficiency on edge devices.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 13:06:51 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Huang",
"Liang",
""
],
[
"Liang",
"Senjie",
""
],
[
"Ye",
"Feiyang",
""
],
[
"Gao",
"Nan",
""
]
] |
new_dataset
| 0.977344 |
2205.07683
|
Alin Popa
|
Ionut-Catalin Sandu and Daniel Voinea and Alin-Ionut Popa
|
CONSENT: Context Sensitive Transformer for Bold Words Classification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present CONSENT, a simple yet effective CONtext SENsitive Transformer
framework for context-dependent object classification within a fully-trainable
end-to-end deep learning pipeline. We exemplify the proposed framework on the
task of bold word detection, achieving state-of-the-art results. Given an image
containing text of unknown font-types (e.g. Arial, Calibri, Helvetica), unknown
language, taken under various degrees of illumination, angle distortion and
scale variation, we extract all the words and learn a context-dependent binary
classification (i.e. bold versus non-bold) using an end-to-end
transformer-based neural network ensemble. To prove the extensibility of our
framework, we demonstrate competitive results against state-of-the-art for the
game of rock-paper-scissors by training the model to determine the winner given
a sequence with $2$ pictures depicting hand poses.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 13:50:33 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Sandu",
"Ionut-Catalin",
""
],
[
"Voinea",
"Daniel",
""
],
[
"Popa",
"Alin-Ionut",
""
]
] |
new_dataset
| 0.996712 |
2205.07752
|
Vasileios Sitokonstantinou
|
Thanassis Drivas, Vasileios Sitokonstantinou, Iason Tsardanidis,
Alkiviadis Koukos, Charalampos Kontoes, Vassilia Karathanassi
|
A Data Cube of Big Satellite Image Time-Series for Agriculture
Monitoring
|
This work has been accepted for publication in IEEE 14th Image,
Video, and Multidimensional Signal Processing Workshop (IVMSP 2022)
| null | null | null |
cs.CV cs.DB cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The modernization of the Common Agricultural Policy (CAP) requires the large
scale and frequent monitoring of agricultural land. Towards this direction, the
free and open satellite data (i.e., Sentinel missions) have been extensively
used as the sources for the required high spatial and temporal resolution Earth
observations. Nevertheless, monitoring the CAP at large scales constitutes a
big data problem and puts a strain on CAP paying agencies that need to adapt
fast in terms of infrastructure and know-how. Hence, there is a need for
efficient and easy-to-use tools for the acquisition, storage, processing and
exploitation of big satellite data. In this work, we present the Agriculture
monitoring Data Cube (ADC), which is an automated, modular, end-to-end
framework for discovering, pre-processing and indexing optical and Synthetic
Aperture Radar (SAR) images into a multidimensional cube. We also offer a set
of powerful tools on top of the ADC, including i) the generation of
analysis-ready feature spaces of big satellite data to feed downstream machine
learning tasks and ii) the support of Satellite Image Time-Series (SITS)
analysis via services pertinent to the monitoring of the CAP (e.g., detecting
trends and events, monitoring the growth status etc.). The knowledge extracted
from the SITS analyses and the machine learning tasks returns to the data cube,
building scalable country-specific knowledge bases that can efficiently answer
complex and multi-faceted geospatial queries.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 15:26:23 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Drivas",
"Thanassis",
""
],
[
"Sitokonstantinou",
"Vasileios",
""
],
[
"Tsardanidis",
"Iason",
""
],
[
"Koukos",
"Alkiviadis",
""
],
[
"Kontoes",
"Charalampos",
""
],
[
"Karathanassi",
"Vassilia",
""
]
] |
new_dataset
| 0.964653 |
2205.07769
|
Jianfeng Zhan
|
Jianfeng Zhan
|
A BenchCouncil View on Benchmarking Emerging and Future Computing
|
To appear BenchCouncil Transactions on Benchmarks, Standards and
Evaluation (TBench)
| null | null | null |
cs.ET cs.PF
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The measurable properties of the artifacts or objects in the computer,
management, or finance disciplines are extrinsic, not inherent -- dependent on
their problem definitions and solution instantiations. Only after the
instantiation can the solutions to the problem be measured. The processes of
definition, instantiation, and measurement are entangled, and they have complex
mutual influences. Meanwhile, technology inertia brings instantiation bias
-- solutions become trapped in a subspace, or even a point, of a
high-dimensional solution space. These daunting challenges, which emerging
computing aggravates, mean that metrology cannot work for benchmark
communities. It is pressing to establish an independent discipline of
benchmark science and engineering.
This article presents a unifying benchmark definition, a conceptual
framework, and a traceable and supervised learning-based benchmarking
methodology, laying the foundation for benchmark science and engineering. I
also discuss BenchCouncil's plans for emerging and future computing. The
ongoing projects include defining the challenges of intelligence, instinct,
quantum computers, Metaverse, planet-scale computers, and reformulating data
centers, artificial intelligence for science, and CPU benchmark suites. Also,
BenchCouncil will collaborate with ComputerCouncil on open-source computer
systems for planet-scale computing, AI for science systems, and Metaverse.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 15:47:59 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.997849 |
2205.07815
|
Wardah Saleh
|
Shafin Talukder, SK. Tasnim Bari Ira, Aseya Khanom, Prantika Biswas
Sneha and Wardah Saleh
|
Vehicle Collision Detection & Prevention Using VANET Based IoT With V2V
|
10 pages, 5 figures , 2 tables
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An emergency alert in case of an accident is vital for rescuing the
victims. Accordingly, this paper presents the results of a major analysis
of emergency alert conditions at the time of an automobile collision.
In this study, the authors investigate modern Internet of Things (IoT)
and VANET (Vehicular Ad hoc Network) technologies and develop a collection
of modern, specialized techniques along with their characteristics. The system
has sensors that detect unbalanced circumstances and provide a warning to the
microcontroller if a collision occurs. Additionally, the technique can be
implemented in such a way that vehicles are alerted to possibly approaching
barriers. Vehicle-to-Vehicle (V2V) communication has a huge impact since it
allows vehicles in proximity to communicate with each other, while the
buzzer together with the LEDs serves as a safety feature. The primary goal of
the system is to carry out the microcontroller functions in every environment;
moreover, the concept aims to detect and prevent collisions especially
in foggy weather, at night, and in other adverse circumstances. The
Internet of Things (IoT) and the Vehicular Ad-Hoc Network (VANET) have now been
merged as the fundamental and central components of Intelligent Transportation
Systems (ITS). Furthermore, while the procedure of obtaining insurance may
take longer for certain people, others may evade the law after
being involved in severe collisions, which makes it difficult for the
authorities to discriminate between criminal and non-criminal evidence.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 17:14:23 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Talukder",
"Shafin",
""
],
[
"Ira",
"SK. Tasnim Bari",
""
],
[
"Khanom",
"Aseya",
""
],
[
"Sneha",
"Prantika Biswas",
""
],
[
"Saleh",
"Wardah",
""
]
] |
new_dataset
| 0.999009 |
2205.07824
|
Jordi Vila-Perez
|
Jordi Vila-P\'erez, R. Loek Van Heyningen, Ngoc-Cuong Nguyen, Jaume
Peraire
|
Exasim: Generating Discontinuous Galerkin Codes for Numerical Solutions
of Partial Differential Equations on Graphics Processors
|
19 pages, 4 figures, 3 tables
| null | null | null |
cs.MS cs.CE cs.NA math.NA physics.comp-ph physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an overview of the functionalities and applications of
Exasim, an open-source code for generating high-order discontinuous Galerkin
codes to numerically solve parametrized partial differential equations (PDEs).
The software combines high-level and low-level languages to construct
parametrized PDE models via Julia, Python or Matlab scripts and produce
high-performance C++ codes for solving the PDE models on CPU and Nvidia GPU
processors with distributed memory. Exasim provides matrix-free discontinuous
Galerkin discretization schemes together with scalable reduced basis
preconditioners and Newton-GMRES solvers, making it suitable for accurate and
efficient approximation of wide-ranging classes of PDEs.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 17:28:28 GMT"
}
] | 2022-05-17T00:00:00 |
[
[
"Vila-Pérez",
"Jordi",
""
],
[
"Van Heyningen",
"R. Loek",
""
],
[
"Nguyen",
"Ngoc-Cuong",
""
],
[
"Peraire",
"Jaume",
""
]
] |
new_dataset
| 0.999721 |
1803.10106
|
Gregor Lenz
|
Gregor Lenz, Sio-Hoi Ieng, Ryad Benosman
|
Event-based Face Detection and Tracking in the Blink of an Eye
| null |
Frontiers in Neuroscience 2020 volume 14
|
10.3389/fnins.2020.00587
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first purely event-based method for face detection using the
high temporal resolution of an event-based camera. We rely on a new
feature that has never been used for such a task: the detection of eye
blinks. Eye blinks are a unique natural dynamic signature of human faces that
is captured well by event-based sensors that rely on relative changes of
luminance. Although an eye blink can be captured with conventional cameras, we
will show that the dynamics of eye blinks combined with the fact that two eyes
act simultaneously allows to derive a robust methodology for face detection at
a low computational cost and high temporal resolution. We show that eye blinks
have a unique temporal signature over time that can be easily detected by
correlating the acquired local activity with a generic temporal model of eye
blinks that has been generated from a wide population of users. We furthermore
show that once the face is reliably detected it is possible to apply a
probabilistic framework to track the spatial position of a face for each
incoming event while updating the position of trackers. Results are shown for
several indoor and outdoor experiments. We will also release an annotated data
set that can be used for future work on the topic.
|
[
{
"version": "v1",
"created": "Tue, 27 Mar 2018 14:27:26 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Nov 2018 16:53:22 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Apr 2019 19:05:55 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Lenz",
"Gregor",
""
],
[
"Ieng",
"Sio-Hoi",
""
],
[
"Benosman",
"Ryad",
""
]
] |
new_dataset
| 0.991124 |
1903.05918
|
Carsten Kutzner
|
Carsten Kutzner, Szil\'ard P\'all, Martin Fechner, Ansgar Esztermann,
Bert L. de Groot, Helmut Grubm\"uller
|
More Bang for Your Buck: Improved use of GPU Nodes for GROMACS 2018
|
41 pages, 13 figures, 4 tables. This updated version includes the
following improvements: - most notably, added benchmarks for two coarse grain
MARTINI systems VES and BIG, resulting in a new Figure 13 - fixed typos -
made text clearer in some places - added two more benchmarks for MEM and RIB
systems (E3-1240v6 + RTX 2080 / 2080Ti)
|
Journal of Computational Chemistry, 2019, 40, 2418-2431
|
10.1002/jcc.26011
| null |
cs.DC cs.PF physics.bio-ph physics.comp-ph q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We identify hardware that is optimal to produce molecular dynamics
trajectories on Linux compute clusters with the GROMACS 2018 simulation
package. Therefore, we benchmark the GROMACS performance on a diverse set of
compute nodes and relate it to the costs of the nodes, which may include their
lifetime costs for energy and cooling. In agreement with our earlier
investigation using GROMACS 4.6 on hardware of 2014, the performance to price
ratio of consumer GPU nodes is considerably higher than that of CPU nodes.
However, with GROMACS 2018, the optimal CPU to GPU processing power balance has
shifted even more towards the GPU. Hence, nodes optimized for GROMACS 2018 and
later versions enable a significantly higher performance to price ratio than
nodes optimized for older GROMACS versions. Moreover, the shift towards GPU
processing allows to cheaply upgrade old nodes with recent GPUs, yielding
essentially the same performance as comparable brand-new hardware.
|
[
{
"version": "v1",
"created": "Thu, 14 Mar 2019 11:06:54 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jun 2019 09:59:55 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Kutzner",
"Carsten",
""
],
[
"Páll",
"Szilárd",
""
],
[
"Fechner",
"Martin",
""
],
[
"Esztermann",
"Ansgar",
""
],
[
"de Groot",
"Bert L.",
""
],
[
"Grubmüller",
"Helmut",
""
]
] |
new_dataset
| 0.987232 |
1912.10013
|
Maura Pintor
|
Maura Pintor, Luca Demetrio, Angelo Sotgiu, Marco Melis, Ambra
Demontis, Battista Biggio
|
secml: A Python Library for Secure and Explainable Machine Learning
|
Accepted for publication to SoftwareX. Published version can be found
at: https://doi.org/10.1016/j.softx.2022.101095
|
SoftwareX 18 (2022)
|
10.1016/j.softx.2022.101095
| null |
cs.LG cs.CR cs.CV cs.GT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present \texttt{secml}, an open-source Python library for secure and
explainable machine learning. It implements the most popular attacks against
machine learning, including test-time evasion attacks to generate adversarial
examples against deep neural networks and training-time poisoning attacks
against support vector machines and many other algorithms. These attacks enable
evaluating the security of learning algorithms and the corresponding defenses
under both white-box and black-box threat models. To this end, \texttt{secml}
provides built-in functions to compute security evaluation curves, showing how
quickly classification performance decreases against increasing adversarial
perturbations of the input data. \texttt{secml} also includes explainability
methods to help understand why adversarial attacks succeed against a given
model, by visualizing the most influential features and training prototypes
contributing to each decision. It is distributed under the Apache License 2.0
and hosted at \url{https://github.com/pralab/secml}.
|
[
{
"version": "v1",
"created": "Fri, 20 Dec 2019 18:41:37 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2022 16:15:10 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Pintor",
"Maura",
""
],
[
"Demetrio",
"Luca",
""
],
[
"Sotgiu",
"Angelo",
""
],
[
"Melis",
"Marco",
""
],
[
"Demontis",
"Ambra",
""
],
[
"Biggio",
"Battista",
""
]
] |
new_dataset
| 0.999318 |
2101.08943
|
Jun Muramatsu
|
Jun Muramatsu
|
Binary Polar Codes Based on Bit Error Probability
|
(v1) 36 pages, this is the extended version of the paper submitted to
2021 IEEE ISIT, (v2) 37 pages, this is the full version of the paper
re-submitted to 2021 IEEE ITW, (v3) 41 pages, this is the full version of the
paper re-submitted to 2022 IEEE ISIT, slight improvement of main theorems and
additional experimental results comparing with conventional methods, (v4)
correcting typos
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces techniques to construct binary polar source/channel
codes based on the bit error probability of successive-cancellation decoding.
The polarization lemma is reconstructed based on the bit error probability and
then techniques to compute the bit error probability are introduced. These
techniques can be applied to the construction of polar codes and the
computation of lower and upper bounds of the block decoding error probability.
|
[
{
"version": "v1",
"created": "Fri, 22 Jan 2021 04:12:58 GMT"
},
{
"version": "v2",
"created": "Thu, 13 May 2021 08:07:28 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Jan 2022 11:23:56 GMT"
},
{
"version": "v4",
"created": "Fri, 13 May 2022 02:57:12 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Muramatsu",
"Jun",
""
]
] |
new_dataset
| 0.998745 |
2105.10382
|
Fabio Poiesi
|
Fabio Poiesi and Davide Boscaini
|
Learning general and distinctive 3D local deep descriptors for point
cloud registration
|
Accepted in IEEE Transactions on Pattern Analysis and Machine
Intelligence
| null |
10.1109/TPAMI.2022.3175371
| null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
An effective 3D descriptor should be invariant to different geometric
transformations, such as scale and rotation, robust to occlusions and clutter,
and capable of generalising to different application domains. We present a
simple yet effective method to learn general and distinctive 3D local
descriptors that can be used to register point clouds that are captured in
different domains. Point cloud patches are extracted, canonicalised with
respect to their local reference frame, and encoded into scale and
rotation-invariant compact descriptors by a deep neural network that is
invariant to permutations of the input points. This design is what enables our
descriptors to generalise across domains. We evaluate and compare our
descriptors with alternative handcrafted and deep learning-based descriptors on
several indoor and outdoor datasets that are reconstructed by using both RGBD
sensors and laser scanners. Our descriptors outperform most recent descriptors
by a large margin in terms of generalisation, and also become the state of the
art in benchmarks where training and testing are performed in the same domain.
|
[
{
"version": "v1",
"created": "Fri, 21 May 2021 14:47:55 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Sep 2021 16:56:54 GMT"
},
{
"version": "v3",
"created": "Thu, 12 May 2022 19:47:29 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Poiesi",
"Fabio",
""
],
[
"Boscaini",
"Davide",
""
]
] |
new_dataset
| 0.964387 |
2106.13139
|
Andre Rochow
|
Andre Rochow, Max Schwarz, Michael Weinmann, Sven Behnke
|
FaDIV-Syn: Fast Depth-Independent View Synthesis using Soft Masks and
Implicit Blending
|
Accepted to Robotics: Science and Systems (RSS) 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Novel view synthesis is required in many robotic applications, such as VR
teleoperation and scene reconstruction. Existing methods are often too slow for
these contexts, cannot handle dynamic scenes, and are limited by their explicit
depth estimation stage, where incorrect depth predictions can lead to large
projection errors. Our proposed method runs in real time on live streaming data
and avoids explicit depth estimation by efficiently warping input images into
the target frame for a range of assumed depth planes. The resulting plane sweep
volume (PSV) is directly fed into our network, which first estimates soft PSV
masks in a self-supervised manner, and then directly produces the novel output
view. This improves efficiency and performance on transparent, reflective,
thin, and feature-less scene parts. FaDIV-Syn can perform both interpolation
and extrapolation tasks at 540p in real-time and outperforms state-of-the-art
extrapolation methods on the large-scale RealEstate10k dataset. We thoroughly
evaluate ablations, such as removing the Soft-Masking network, training from
fewer examples as well as generalization to higher resolutions and stronger
depth discretization. Our implementation is available.
|
[
{
"version": "v1",
"created": "Thu, 24 Jun 2021 16:14:01 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Dec 2021 14:03:55 GMT"
},
{
"version": "v3",
"created": "Fri, 13 May 2022 11:29:44 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Rochow",
"Andre",
""
],
[
"Schwarz",
"Max",
""
],
[
"Weinmann",
"Michael",
""
],
[
"Behnke",
"Sven",
""
]
] |
new_dataset
| 0.998702 |
2111.08799
|
Ruben Wiersma
|
Ruben Wiersma, Ahmad Nasikun, Elmar Eisemann, Klaus Hildebrandt
|
DeltaConv: Anisotropic Operators for Geometric Deep Learning on Point
Clouds
|
8 pages, 5 figures, 7 tables; ACM Transactions on Graphics 41, 4,
Article 105 (SIGGRAPH 2022)
| null |
10.1145/3528223.3530166
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Learning from 3D point-cloud data has rapidly gained momentum, motivated by
the success of deep learning on images and the increased availability of
3D data. In this paper, we aim to construct anisotropic convolution layers that
work directly on the surface derived from a point cloud. This is challenging
because of the lack of a global coordinate system for tangential directions on
surfaces. We introduce DeltaConv, a convolution layer that combines geometric
operators from vector calculus to enable the construction of anisotropic
filters on point clouds. Because these operators are defined on scalar- and
vector-fields, we separate the network into a scalar- and a vector-stream,
which are connected by the operators. The vector stream enables the network to
explicitly represent, evaluate, and process directional information. Our
convolutions are robust and simple to implement and match or improve on
state-of-the-art approaches on several benchmarks, while also speeding up
training and inference.
|
[
{
"version": "v1",
"created": "Tue, 16 Nov 2021 21:58:55 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Nov 2021 15:28:44 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Nov 2021 13:33:48 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Jan 2022 10:48:30 GMT"
},
{
"version": "v5",
"created": "Thu, 12 May 2022 20:38:25 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Wiersma",
"Ruben",
""
],
[
"Nasikun",
"Ahmad",
""
],
[
"Eisemann",
"Elmar",
""
],
[
"Hildebrandt",
"Klaus",
""
]
] |
new_dataset
| 0.997929 |
2201.02639
|
Rowan Zellers
|
Rowan Zellers and Jiasen Lu and Ximing Lu and Youngjae Yu and Yanpeng
Zhao and Mohammadreza Salehi and Aditya Kusupati and Jack Hessel and Ali
Farhadi and Yejin Choi
|
MERLOT Reserve: Neural Script Knowledge through Vision and Language and
Sound
|
CVPR 2022. Project page at https://rowanzellers.com/merlotreserve
| null | null | null |
cs.CV cs.CL cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
As humans, we navigate a multimodal world, building a holistic understanding
from all our senses. We introduce MERLOT Reserve, a model that represents
videos jointly over time -- through a new training objective that learns from
audio, subtitles, and video frames. Given a video, we replace snippets of text
and audio with a MASK token; the model learns by choosing the correct
masked-out snippet. Our objective learns faster than alternatives, and performs
well at scale: we pretrain on 20 million YouTube videos.
Empirical results show that MERLOT Reserve learns strong multimodal
representations. When finetuned, it sets state-of-the-art on Visual Commonsense
Reasoning (VCR), TVQA, and Kinetics-600; outperforming prior work by 5%, 7%,
and 1.5% respectively. Ablations show that these tasks benefit from audio
pretraining -- even VCR, a QA task centered around images (without sound).
Moreover, our objective enables out-of-the-box prediction, revealing strong
multimodal commonsense understanding. In a fully zero-shot setting, our model
obtains competitive results on four video tasks, even outperforming supervised
approaches on the recently proposed Situated Reasoning (STAR) benchmark.
We analyze why audio enables better vision-language representations,
suggesting significant opportunities for future research. We conclude by
discussing ethical and societal implications of multimodal pretraining.
|
[
{
"version": "v1",
"created": "Fri, 7 Jan 2022 19:00:21 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 02:37:30 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Mar 2022 23:47:29 GMT"
},
{
"version": "v4",
"created": "Fri, 13 May 2022 14:25:04 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Zellers",
"Rowan",
""
],
[
"Lu",
"Jiasen",
""
],
[
"Lu",
"Ximing",
""
],
[
"Yu",
"Youngjae",
""
],
[
"Zhao",
"Yanpeng",
""
],
[
"Salehi",
"Mohammadreza",
""
],
[
"Kusupati",
"Aditya",
""
],
[
"Hessel",
"Jack",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Choi",
"Yejin",
""
]
] |
new_dataset
| 0.999388 |
2201.06372
|
Carsten Kutzner
|
Carsten Kutzner, Christian Kniep, Austin Cherian, Ludvig Nordstrom,
Helmut Grubm\"uller, Bert L. de Groot, Vytautas Gapsys
|
GROMACS in the cloud: A global supercomputer to speed up alchemical drug
design
|
59 pages, 11 figures, 11 tables v2 fixed a typo in the abstract
|
Journal of Chemical Information and Modelling, 2022, 62, 1691-1711
|
10.1021/acs.jcim.2c00044
| null |
cs.DC physics.bio-ph physics.comp-ph q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
We assess costs and efficiency of state-of-the-art high performance cloud
computing compared to a traditional on-premises compute cluster. Our use case
is atomistic simulations carried out with the GROMACS molecular dynamics (MD)
toolkit with a focus on alchemical protein-ligand binding free energy
calculations.
We set up a compute cluster in the Amazon Web Services (AWS) cloud that
incorporates various different instances with Intel, AMD, and ARM CPUs, some
with GPU acceleration. Using representative biomolecular simulation systems we
benchmark how GROMACS performs on individual instances and across multiple
instances. Thereby we assess which instances deliver the highest performance
and which are the most cost-efficient ones for our use case.
We find that, in terms of total costs including hardware, personnel, room,
energy and cooling, producing MD trajectories in the cloud can be as
cost-efficient as an on-premises cluster given that optimal cloud instances are
chosen. Further, we find that high-throughput ligand-screening for
protein-ligand binding affinity estimation can be accelerated dramatically by
using global cloud resources. For a ligand screening study consisting of 19,872
independent simulations, we used all hardware that was available in the cloud
at the time of the study. The computations scaled-up to reach peak performances
using more than 4,000 instances, 140,000 cores, and 3,000 GPUs simultaneously
around the globe. Our simulation ensemble finished in about two days in the
cloud, while weeks would be required to complete the task on a typical
on-premises cluster consisting of several hundred nodes. We demonstrate that
the costs of such and similar studies can be drastically reduced with a
checkpoint-restart protocol that allows the use of cheap Spot pricing and by using
instance types with optimal cost-efficiency.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 12:10:31 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2022 14:41:27 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Kutzner",
"Carsten",
""
],
[
"Kniep",
"Christian",
""
],
[
"Cherian",
"Austin",
""
],
[
"Nordstrom",
"Ludvig",
""
],
[
"Grubmüller",
"Helmut",
""
],
[
"de Groot",
"Bert L.",
""
],
[
"Gapsys",
"Vytautas",
""
]
] |
new_dataset
| 0.99344 |
2204.06527
|
Walter Zimmer
|
Christian Cre{\ss}, Walter Zimmer, Leah Strand, Venkatnarayanan
Lakshminarasimhan, Maximilian Fortkord, Siyi Dai and Alois Knoll
|
A9-Dataset: Multi-Sensor Infrastructure-Based Dataset for Mobility
Research
|
Accepted for IEEE Intelligent Vehicles Symposium 2022 (IV22)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Data-intensive machine learning based techniques increasingly play a
prominent role in the development of future mobility solutions - from driver
assistance and automation functions in vehicles, to real-time traffic
management systems realized through dedicated infrastructure. The availability
of high quality real-world data is often an important prerequisite for the
development and reliable deployment of such systems in large scale. Towards
this endeavour, we present the A9-Dataset based on roadside sensor
infrastructure from the 3 km long Providentia++ test field near Munich in
Germany. The dataset includes anonymized and precision-timestamped multi-modal
sensor and object data in high resolution, covering a variety of traffic
situations. As part of the first set of data, which we describe in this paper,
we provide camera and LiDAR frames from two overhead gantry bridges on the A9
autobahn with the corresponding objects labeled with 3D bounding boxes. The
first set includes in total more than 1000 sensor frames and 14000 traffic
objects. The dataset is available for download at https://a9-dataset.com.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 17:12:16 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2022 16:27:37 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Creß",
"Christian",
""
],
[
"Zimmer",
"Walter",
""
],
[
"Strand",
"Leah",
""
],
[
"Lakshminarasimhan",
"Venkatnarayanan",
""
],
[
"Fortkord",
"Maximilian",
""
],
[
"Dai",
"Siyi",
""
],
[
"Knoll",
"Alois",
""
]
] |
new_dataset
| 0.999867 |
2205.01686
|
Alexander Angus
|
Zoran Kosti\'c, Alex Angus, Zhengye Yang, Zhuoxu Duan, Ivan Seskar,
Gil Zussman, Dipankar Raychaudhuri
|
Smart City Intersections: Intelligence Nodes for Future Metropolises
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Traffic intersections are the most suitable locations for the deployment of
computing, communications, and intelligence services for smart cities of the
future. The abundance of data to be collected and processed, in combination
with privacy and security concerns, motivates the use of the edge-computing
paradigm which aligns well with physical intersections in metropolises. This
paper focuses on high-bandwidth, low-latency applications, and in that context
it describes: (i) system design considerations for smart city intersection
intelligence nodes; (ii) key technological components including sensors,
networking, edge computing, low latency design, and AI-based intelligence; and
(iii) applications such as privacy preservation, cloud-connected vehicles, a
real-time "radar-screen", traffic management, and monitoring of pedestrian
behavior during pandemics. The results of the experimental studies performed on
the COSMOS testbed located in New York City are illustrated. Future challenges
in designing human-centered smart city intersections are summarized.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 17:22:57 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2022 12:25:06 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Kostić",
"Zoran",
""
],
[
"Angus",
"Alex",
""
],
[
"Yang",
"Zhengye",
""
],
[
"Duan",
"Zhuoxu",
""
],
[
"Seskar",
"Ivan",
""
],
[
"Zussman",
"Gil",
""
],
[
"Raychaudhuri",
"Dipankar",
""
]
] |
new_dataset
| 0.99916 |
2205.05783
|
Trenton Ford
|
Trenton W. Ford, William Theisen, Michael Yankoski, Tom Henry, Farah
Khashman, Katherine R. Dearstyne and Tim Weninger
|
MEWS: Real-time Social Media Manipulation Detection and Analysis
| null | null | null | null |
cs.CV cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents a beta-version of MEWS (Misinformation Early Warning
System). It describes the various aspects of the ingestion, manipulation
detection, and graphing algorithms employed to determine--in near
real-time--the relationships between social media images as they emerge and
spread on social media platforms. By combining these various technologies into
a single processing pipeline, MEWS can identify manipulated media items as they
arise and identify when these particular items begin trending on individual
social media platforms or even across multiple platforms. The emergence of a
novel manipulation followed by rapid diffusion of the manipulated content
suggests a disinformation campaign.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 21:44:26 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2022 00:37:18 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Ford",
"Trenton W.",
""
],
[
"Theisen",
"William",
""
],
[
"Yankoski",
"Michael",
""
],
[
"Henry",
"Tom",
""
],
[
"Khashman",
"Farah",
""
],
[
"Dearstyne",
"Katherine R.",
""
],
[
"Weninger",
"Tim",
""
]
] |
new_dataset
| 0.984913 |
2205.06395
|
Alireza Ramezani
|
Eric Sihite, Xintao Hu, Bozhen Li, Adarsh Salagame, Paul Ghanem, and
Alireza Ramezani
|
Bang-Bang Control Of A Tail-less Morphing Wing Flight
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bats' dynamic morphing wings are known to be extremely high-dimensional, and
they employ the combination of inertial dynamics and aerodynamics manipulations
to showcase extremely agile maneuvers. Bats heavily rely on their highly
flexible wings and are capable of dynamically morphing their wings to adjust
aerodynamic and inertial forces applied to their wing and perform sharp banking
turns. There are technical hardware and control challenges in copying the
morphing wing flight capabilities of flying animals. This work focuses
primarily on the modeling and control aspects of stable, tail-less, morphing wing
flight. A classical control approach using bang-bang control is proposed to
stabilize a bio-inspired morphing wing robot called Aerobat. Robot-environment
interactions based on horseshoe vortex shedding and Wagner functions are derived
to realistically evaluate the feasibility of the bang-bang control, which is
then implemented on the robot in experiments to demonstrate first-time
closed-loop stable flights of Aerobat.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 23:33:18 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Sihite",
"Eric",
""
],
[
"Hu",
"Xintao",
""
],
[
"Li",
"Bozhen",
""
],
[
"Salagame",
"Adarsh",
""
],
[
"Ghanem",
"Paul",
""
],
[
"Ramezani",
"Alireza",
""
]
] |
new_dataset
| 0.996348 |
2205.06397
|
Srivatsa Kundurthy
|
Srivatsa Kundurthy
|
LANTERN-RD: Enabling Deep Learning for Mitigation of the Invasive
Spotted Lanternfly
|
Under Review at IEEE Conference on Computer Vision and Pattern
Recognition, CV4Animals: Computer Vision for Animal Behavior Tracking and
Modeling Workshop, 2022
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The Spotted Lanternfly (SLF) is an invasive planthopper that threatens the
local biodiversity and agricultural economy of regions such as the Northeastern
United States and Japan. As researchers scramble to study the insect, there is
a great potential for computer vision tasks such as detection, pose estimation,
and accurate identification to have important downstream implications in
containing the SLF. However, there is currently no publicly available dataset
for training such AI models. To enable computer vision applications and
motivate advancements to challenge the invasive SLF problem, we propose
LANTERN-RD, the first curated image dataset of the spotted lanternfly and its
look-alikes, featuring images with varied lighting conditions, diverse
backgrounds, and subjects in assorted poses. A VGG16-based baseline CNN
validates the potential of this dataset for stimulating fresh computer vision
applications to accelerate invasive SLF research. Additionally, we implement
the trained model in a simple mobile classification application in order to
directly empower responsible public mitigation efforts. The overarching mission
of this work is to introduce a novel SLF image dataset and release a
classification framework that enables computer vision applications, boosting
studies surrounding the invasive SLF and assisting in minimizing its
agricultural and economic damage.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 23:37:29 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Kundurthy",
"Srivatsa",
""
]
] |
new_dataset
| 0.998851 |
2205.06403
|
Thanh Tung Vu
|
Mohammadali Mohammadi, Tung T. Vu, Behnaz Naderi Beni, Hien Quoc Ngo,
and Michail Matthaiou
|
Virtually Full-duplex Cell-Free Massive MIMO with Access Point Mode
Assignment
|
accepted to appear in IEEE SPAWC'22, Oulu, Finland
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a cell-free massive multiple-input multiple-output (MIMO) network
utilizing a virtually full-duplex (vFD) mode, where access points (APs) with a
downlink (DL) mode and those with an uplink (UL) mode simultaneously serve DL
and UL users (UEs). In order to maximize the sum spectral efficiency (SE) of
the DL and UL transmissions, we formulate a mixed-integer optimization problem
to jointly design the AP mode assignment and power control. This problem is
subject to minimum per-UE SE requirements, per-AP power control, and per-UL UE
power constraints. By employing the successive convex approximation technique,
we propose an algorithm to obtain a stationary solution of the formulated
problem. Numerical results show that the proposed vFD approach can provide a
sum SE that is $2.5$ and $1.5$ times larger than the traditional half-duplex
and heuristic baseline schemes, respectively, in terms of $95\%$-likely sum SE.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 00:26:04 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Mohammadi",
"Mohammadali",
""
],
[
"Vu",
"Tung T.",
""
],
[
"Beni",
"Behnaz Naderi",
""
],
[
"Ngo",
"Hien Quoc",
""
],
[
"Matthaiou",
"Michail",
""
]
] |
new_dataset
| 0.973568 |
2205.06415
|
Sandeep Kumar
|
Sandeep Kumar, Abhisek Panda, Smruti R. Sarangi
|
A Comprehensive Benchmark Suite for Intel SGX
| null | null | null | null |
cs.CR cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
Trusted execution environments (TEEs) such as Intel SGX facilitate the secure
execution of an application on untrusted machines. Sadly, such environments
suffer from serious limitations and performance overheads in terms of writing
back data to the main memory, their interaction with the OS, and the ability to
issue I/O instructions. There is thus a plethora of work that focuses on
improving the performance of such environments -- this necessitates the need
for a standard, widely accepted benchmark suite (something similar to SPEC and
PARSEC). To the best of our knowledge, such a suite does not exist.
Our suite, SGXGauge, contains a diverse set of workloads such as blockchain
codes, secure machine learning algorithms, lightweight web servers, secure
key-value stores, etc. We thoroughly characterize the behavior of the
benchmark suite on a native platform and on a platform that uses a library
OS-based shimming layer (GrapheneSGX). We observe that the most important
metrics of interest are performance counters related to paging, memory, and TLB
accesses. There is an abrupt change in performance when the memory footprint
starts to exceed the EPC size in Intel SGX, and the library OS does not add a
significant overhead (~ ±10%).
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 01:42:42 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Kumar",
"Sandeep",
""
],
[
"Panda",
"Abhisek",
""
],
[
"Sarangi",
"Smruti R.",
""
]
] |
new_dataset
| 0.993813 |
2205.06497
|
Marcos Nieto
|
Marcos Nieto, Mikel Garcia, Itziar Urbieta, Oihana Otaegui
|
RTMaps-based Local Dynamic Map for multi-ADAS data fusion
|
9 pages. To be published in 14th ITS European Congress 2022
| null | null | null |
cs.DB cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Work on Local Dynamic Maps (LDM) implementation is still in its early stages,
as the LDM standards only define how information shall be structured in
databases, while the mechanism to fuse or link information across different
layers is left undefined. A working LDM component, as a real-time database
inside the vehicle, is an attractive solution to multi-ADAS systems, which may
feed a real-time LDM database that serves as a central point of information
inside the vehicle, exposing fused and structured information to other
components (e.g., decision-making systems). In this paper we describe our
approach implementing a real-time LDM component using the RTMaps middleware, as
a database deployed in a vehicle, but also at road-side units (RSU), making use
of the three pillars that guide a successful fusion strategy: utilisation of
standards (with conversions between domains), middlewares to unify multiple
ADAS sources, and linkage of data via semantic concepts.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 08:07:16 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Nieto",
"Marcos",
""
],
[
"Garcia",
"Mikel",
""
],
[
"Urbieta",
"Itziar",
""
],
[
"Otaegui",
"Oihana",
""
]
] |
new_dataset
| 0.99689 |
2205.06513
|
Christin Katharina Kreutz
|
Christin Katharina Kreutz, Martin Blum, Ralf Schenkel
|
SchenQL: A query language for bibliographic data with aggregations and
domain-specific functions
|
Accepted at JCDL'22 as a demo, 5 pages, 4 figures
| null |
10.1145/3529372.3533282
| null |
cs.DL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current search interfaces of digital libraries are not suitable to satisfy
complex or convoluted information needs directly, when it comes to cases such
as "Find authors who only recently started working on a topic". They might
offer possibilities to obtain this information only by requiring vast user
interaction. We present SchenQL, a web interface of a domain-specific query
language on bibliographic metadata, which offers information search and
exploration by query formulation and navigation in the system. Our system
focuses on supporting aggregation of data and providing specialised domain
dependent functions while being suitable for domain experts as well as casual
users of digital libraries.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 08:40:23 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Kreutz",
"Christin Katharina",
""
],
[
"Blum",
"Martin",
""
],
[
"Schenkel",
"Ralf",
""
]
] |
new_dataset
| 0.999069 |
2205.06564
|
Alan Winfield
|
Alan F.T. Winfield, Anouk van Maris, Pericle Salvini, Marina Jirotka
|
An Ethical Black Box for Social Robots: a draft Open Standard
|
Submitted to the International Conference on Robot Ethics and
Standards (ICRES 2022)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper introduces a draft open standard for the robot equivalent of an
aircraft flight data recorder, which we call an ethical black box. This is a
device, or software module, capable of securely recording operational data
(sensor, actuator and control decisions) for a social robot, in order to
support the investigation of accidents or near-miss incidents. The open
standard, presented as an annex to this paper, is offered as a first draft for
discussion within the robot ethics community. Our intention is to publish
further drafts following feedback, in the hope that the standard will become a
useful reference for social robot designers, operators and robot
accident/incident investigators.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 11:32:33 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Winfield",
"Alan F. T.",
""
],
[
"van Maris",
"Anouk",
""
],
[
"Salvini",
"Pericle",
""
],
[
"Jirotka",
"Marina",
""
]
] |
new_dataset
| 0.999446 |
2205.06567
|
Mihai Ordean
|
Mihai Ordean and Flavio D. Garcia
|
Millimeter-Wave Automotive Radar Spoofing
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Millimeter-wave radar systems are one of the core components of the
safety-critical Advanced Driver Assistant System (ADAS) of a modern vehicle.
Due to their ability to operate efficiently despite bad weather conditions and
poor visibility, they are often the only reliable sensor a car has to detect
and evaluate potential dangers in the surrounding environment. In this paper,
we propose several attacks against automotive radars for the purposes of
assessing their reliability in real-world scenarios. Using COTS hardware, we
are able to successfully interfere with automotive-grade FMCW radars operating
in the commonly used 77GHz frequency band, deployed in real-world, truly
wireless environments. Our strongest type of interference is able to trick the
victim into detecting virtual (moving) objects. We also extend this attack with
a novel method that leverages noise to remove real-world objects, thus
complementing the aforementioned object spoofing attack. We evaluate the
viability of our attacks in two ways. First, we establish a baseline by
implementing and evaluating an unrealistically powerful adversary which
requires synchronization to the victim in a limited setup that uses wire-based
chirp synchronization. Later, we implement, for the first time, a truly
wireless attack that evaluates a weaker but realistic adversary which is
non-synchronized and does not require any adjustment feedback from the victim.
Finally, we provide theoretical fundamentals for our findings, and discuss the
efficiency of potential countermeasures against the proposed attacks. We plan
to release our software as open-source.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 11:37:17 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Ordean",
"Mihai",
""
],
[
"Garcia",
"Flavio D.",
""
]
] |
new_dataset
| 0.999559 |
2205.06584
|
Gidon Ernst
|
Gidon Ernst, Alexander Knapp, Toby Murray
|
A Hoare Logic with Regular Behavioral Specifications
| null | null | null | null |
cs.LO cs.FL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a Hoare logic that extends program specifications with regular
expressions that capture behaviors in terms of sequences of events that arise
during the execution. The idea is similar to session types or process-like
behavioral contracts, two currently popular research directions. The approach
presented here strikes a particular balance between expressiveness and proof
automation, notably, it can capture interesting sequential behavior across
multiple iterations of loops. The approach is modular and integrates well with
auto-active deductive verification tools. We describe and demonstrate our
prototype implementation in SecC using two case studies: A matcher for E-Mail
addresses and a specification of the game steps in the VerifyThis Casino
challenge.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 12:16:22 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Ernst",
"Gidon",
""
],
[
"Knapp",
"Alexander",
""
],
[
"Murray",
"Toby",
""
]
] |
new_dataset
| 0.984709 |
2205.06611
|
Gunhee Lee
|
Gunhee Lee, Jonghwa Yim, Chanran Kim, Minjae Kim
|
StyLandGAN: A StyleGAN based Landscape Image Synthesis using Depth-map
|
AI for Content Creation Workshop, CVPR 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite recent success in conditional image synthesis, prevalent input
conditions such as semantics and edges are not clear enough to express `Linear
(Ridges)' and `Planar (Scale)' representations. To address this problem, we
propose a novel framework StyLandGAN, which synthesizes desired landscape
images using a depth map, which has higher expressive power. Our StyLandGAN is
extended from the unconditional generation model to accept input conditions. We
also propose a '2-phase inference' pipeline which generates diverse depth maps
and shifts local parts so that it can easily reflect the user's intent. As a
comparison, we modified the existing semantic image synthesis models to accept
a depth map as well. Experimental results show that our method is superior to
existing methods in quality, diversity, and depth-accuracy.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 13:05:33 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Lee",
"Gunhee",
""
],
[
"Yim",
"Jonghwa",
""
],
[
"Kim",
"Chanran",
""
],
[
"Kim",
"Minjae",
""
]
] |
new_dataset
| 0.999463 |
2205.06678
|
Pradeep Murukannaiah
|
Pradeep K. Murukannaiah and Catholijn M. Jonker
|
MOPaC: The Multiple Offers Protocol for Multilateral Negotiations with
Partial Consensus
| null | null | null | null |
cs.MA cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Existing protocols for multilateral negotiation require a full consensus
among the negotiating parties. In contrast, we propose a protocol for
multilateral negotiation that allows partial consensus, wherein only a subset
of the negotiating parties can reach an agreement. We motivate problems that
require such a protocol and describe the protocol formally.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 14:27:11 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Murukannaiah",
"Pradeep K.",
""
],
[
"Jonker",
"Catholijn M.",
""
]
] |
new_dataset
| 0.999256 |
2205.06691
|
Frank D. Zamora-Reina
|
Frank D. Zamora-Reina, Felipe Bravo-Marquez, Dominik Schlechtweg
|
LSCDiscovery: A shared task on semantic change discovery and detection
in Spanish
|
Accepted for publication in the 3rd International Workshop on
Computational Approaches to Historical Language Change 2022 (LChange'22)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first shared task on semantic change discovery and detection
in Spanish and create the first dataset of Spanish words manually annotated for
semantic change using the DURel framework (Schlechtweg et al., 2018). The task
is divided into two phases: 1) Graded Change Discovery, and 2) Binary Change
Detection. In addition to introducing a new language, the main novelty with
respect to the previous tasks consists in predicting and evaluating changes for
all vocabulary words in the corpus. Six teams participated in phase 1 and seven
teams in phase 2 of the shared task, and the best system obtained a Spearman
rank correlation of 0.735 for phase 1 and an F1 score of 0.716 for phase 2. We
describe the systems developed by the competing teams, highlighting the
techniques that were particularly useful and discuss the limits of these
approaches.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 14:52:18 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Zamora-Reina",
"Frank D.",
""
],
[
"Bravo-Marquez",
"Felipe",
""
],
[
"Schlechtweg",
"Dominik",
""
]
] |
new_dataset
| 0.999176 |
2205.06703
|
Yahui Liu
|
Yahui Liu and Haoping Yang and Chen Gong and Qingrong Xia and Zhenghua
Li and Min Zhang
|
MuCPAD: A Multi-Domain Chinese Predicate-Argument Dataset
|
Accepted by NAACL2022 (Main conference)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
During the past decade, neural network models have made tremendous progress
on in-domain semantic role labeling (SRL). However, performance drops
dramatically under the out-of-domain setting. In order to facilitate research
on cross-domain SRL, this paper presents MuCPAD, a multi-domain Chinese
predicate-argument dataset, which consists of 30,897 sentences and 92,051
predicates from six different domains. MuCPAD exhibits three important
features. 1) Based on a frame-free annotation methodology, we avoid writing
complex frames for new predicates. 2) We explicitly annotate omitted core
arguments to recover more complete semantic structure, considering that
omission of content words is ubiquitous in multi-domain Chinese texts. 3) We
compile 53 pages of annotation guidelines and adopt strict double annotation
for improving data quality. This paper describes in detail the annotation
methodology and annotation process of MuCPAD, and presents in-depth data
analysis. We also give benchmark results on cross-domain SRL based on MuCPAD.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 15:17:24 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Liu",
"Yahui",
""
],
[
"Yang",
"Haoping",
""
],
[
"Gong",
"Chen",
""
],
[
"Xia",
"Qingrong",
""
],
[
"Li",
"Zhenghua",
""
],
[
"Zhang",
"Min",
""
]
] |
new_dataset
| 0.999814 |
2205.06799
|
Bj\"orn Schuller
|
Bj\"orn W. Schuller, Anton Batliner, Shahin Amiriparian, Christian
Bergler, Maurice Gerczuk, Natalie Holz, Pauline Larrouy-Maestri, Sebastian P.
Bayerl, Korbinian Riedhammer, Adria Mallol-Ragolta, Maria Pateraki, Harry
Coppock, Ivan Kiskin, Marianne Sinka, Stephen Roberts
|
The ACM Multimedia 2022 Computational Paralinguistics Challenge:
Vocalisations, Stuttering, Activity, & Mosquitoes
|
5 pages, part of the ACM Multimedia 2022 Grand Challenge "The ACM
Multimedia 2022 Computational Paralinguistics Challenge (ComParE 2022)"
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ACM Multimedia 2022 Computational Paralinguistics Challenge addresses
four different problems for the first time in a research competition under
well-defined conditions: In the Vocalisations and Stuttering Sub-Challenges, a
classification on human non-verbal vocalisations and speech has to be made; the
Activity Sub-Challenge aims at beyond-audio human activity recognition from
smartwatch sensor data; and in the Mosquitoes Sub-Challenge, mosquitoes need to
be detected. We describe the Sub-Challenges, baseline feature extraction, and
classifiers based on the usual ComParE and BoAW features, the auDeep toolkit,
and deep feature extraction from pre-trained CNNs using the DeepSpectrum
toolkit; in addition, we add end-to-end sequential modelling, and a
log-mel-128-BNN.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2022 17:51:45 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Schuller",
"Björn W.",
""
],
[
"Batliner",
"Anton",
""
],
[
"Amiriparian",
"Shahin",
""
],
[
"Bergler",
"Christian",
""
],
[
"Gerczuk",
"Maurice",
""
],
[
"Holz",
"Natalie",
""
],
[
"Larrouy-Maestri",
"Pauline",
""
],
[
"Bayerl",
"Sebastian P.",
""
],
[
"Riedhammer",
"Korbinian",
""
],
[
"Mallol-Ragolta",
"Adria",
""
],
[
"Pateraki",
"Maria",
""
],
[
"Coppock",
"Harry",
""
],
[
"Kiskin",
"Ivan",
""
],
[
"Sinka",
"Marianne",
""
],
[
"Roberts",
"Stephen",
""
]
] |
new_dataset
| 0.992091 |
2205.06801
|
Zahra Movahedi Nia
|
Zahra Movahedi Nia, Ali Ahmadi, Bruce Mellado, Jianhong Wu, James
Orbinski, Ali Agary, Jude Dzevela Kong
|
Twitter-Based Gender Recognition Using Transformers
| null | null | null | null |
cs.CL cs.AI cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Social media contains useful information about people and society that
could help advance research in many different areas (e.g. by applying opinion
mining, emotion/sentiment analysis, and statistical analysis) such as business
and finance, health, socio-economic inequality and gender vulnerability. User
demographics provide rich information that could help study the subject
further. However, user demographics such as gender are considered private and
are not freely available. In this study, we propose a model based on
transformers to predict the user's gender from their images and tweets. We
fine-tune a model based on Vision Transformers (ViT) to stratify female and
male images. Next, we fine-tune another model based on Bidirectional Encoders
Representations from Transformers (BERT) to recognize the user's gender by
their tweets. This is highly beneficial, because not all users provide an image
that indicates their gender. The gender of such users could be detected from
their tweets. The combination model improves the accuracy of image and text
classification models by 6.98% and 4.43%, respectively. This shows that the
image and text classification models are capable of complementing each other by
providing additional information to one another. We apply our method to the
PAN-2018 dataset, and obtain an accuracy of 85.52%.
|
[
{
"version": "v1",
"created": "Sun, 24 Apr 2022 19:58:42 GMT"
}
] | 2022-05-16T00:00:00 |
[
[
"Nia",
"Zahra Movahedi",
""
],
[
"Ahmadi",
"Ali",
""
],
[
"Mellado",
"Bruce",
""
],
[
"Wu",
"Jianhong",
""
],
[
"Orbinski",
"James",
""
],
[
"Agary",
"Ali",
""
],
[
"Kong",
"Jude Dzevela",
""
]
] |
new_dataset
| 0.991988 |
2012.03094
|
Siddhant Gangapurwala
|
Siddhant Gangapurwala, Mathieu Geisert, Romeo Orsolino, Maurice Fallon
and Ioannis Havoutis
|
RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and
Optimal Control
|
26 pages, 19 figures, 16 tables, 2 algorithms, accepted for
publication to IEEE T-RO
| null |
10.1109/TRO.2022.3172469
| null |
cs.RO cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a unified model-based and data-driven approach for quadrupedal
planning and control to achieve dynamic locomotion over uneven terrain. We
utilize on-board proprioceptive and exteroceptive feedback to map sensory
information and desired base velocity commands into footstep plans using a
reinforcement learning (RL) policy. This RL policy is trained in simulation
over a wide range of procedurally generated terrains. When run online, the
system tracks the generated footstep plans using a model-based motion
controller. We evaluate the robustness of our method over a wide variety of
complex terrains. It exhibits behaviors which prioritize stability over
aggressive locomotion. Additionally, we introduce two ancillary RL policies for
corrective whole-body motion tracking and recovery control. These policies
account for changes in physical parameters and external perturbations. We train
and evaluate our framework on a complex quadrupedal system, ANYmal version B,
and demonstrate transferability to a larger and heavier robot, ANYmal C,
without requiring retraining.
|
[
{
"version": "v1",
"created": "Sat, 5 Dec 2020 18:30:23 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2022 10:00:44 GMT"
},
{
"version": "v3",
"created": "Wed, 11 May 2022 18:48:32 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Gangapurwala",
"Siddhant",
""
],
[
"Geisert",
"Mathieu",
""
],
[
"Orsolino",
"Romeo",
""
],
[
"Fallon",
"Maurice",
""
],
[
"Havoutis",
"Ioannis",
""
]
] |
new_dataset
| 0.999376 |
2012.15425
|
Guy Lapalme
|
Guy Lapalme
|
The jsRealB Text Realizer: Organization and Use Cases -- Revised version
|
Revision that presents the new dependency notation and the Python
implementation of the approach. It updates the bibliography and discusses new
demonstrations and applications developed since the previous version of this
paper. 31 pages, 11 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the design principles behind jsRealB (Version 4.0), a
surface realizer written in JavaScript for English or French sentences from a
specification inspired by the constituent syntax formalism but for which a
dependency-based input notation is also available. jsRealB can be used either
within a web page or as a node.js module. We show that the seemingly simple
process of text realization involves many interesting implementation challenges
in order to take into account the specifics of each language. jsRealB has a
large coverage of English and French and has been used to develop realistic
data-to-text applications and to reproduce existing literary texts and
sentences from Universal Dependency annotations. Its source code and that of
its applications are available on GitHub. The port of this approach to Python
(pyrealb) is also presented.
|
[
{
"version": "v1",
"created": "Thu, 31 Dec 2020 03:32:58 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 15:11:39 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Lapalme",
"Guy",
""
]
] |
new_dataset
| 0.997054 |
2104.02180
|
Xue Bin Peng
|
Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa
|
AMP: Adversarial Motion Priors for Stylized Physics-Based Character
Control
| null | null |
10.1145/3450626.3459670
| null |
cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Synthesizing graceful and life-like behaviors for physically simulated
characters has been a fundamental challenge in computer animation. Data-driven
methods that leverage motion tracking are a prominent class of techniques for
producing high fidelity motions for a wide range of behaviors. However, the
effectiveness of these tracking-based methods often hinges on carefully
designed objective functions, and when applied to large and diverse motion
datasets, these methods require significant additional machinery to select the
appropriate motion for the character to track in a given scenario. In this
work, we propose to obviate the need to manually design imitation objectives
and mechanisms for motion selection by utilizing a fully automated approach
based on adversarial imitation learning. High-level task objectives that the
character should perform can be specified by relatively simple reward
functions, while the low-level style of the character's behaviors can be
specified by a dataset of unstructured motion clips, without any explicit clip
selection or sequencing. These motion clips are used to train an adversarial
motion prior, which specifies style-rewards for training the character through
reinforcement learning (RL). The adversarial RL procedure automatically selects
which motion to perform, dynamically interpolating and generalizing from the
dataset. Our system produces high-quality motions that are comparable to those
achieved by state-of-the-art tracking-based techniques, while also being able
to easily accommodate large datasets of unstructured motion clips. Composition
of disparate skills emerges automatically from the motion prior, without
requiring a high-level motion planner or other task-specific annotations of the
motion clips. We demonstrate the effectiveness of our framework on a diverse
cast of complex simulated characters and a challenging suite of motor control
tasks.
|
[
{
"version": "v1",
"created": "Mon, 5 Apr 2021 22:43:14 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 04:38:30 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Peng",
"Xue Bin",
""
],
[
"Ma",
"Ze",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Levine",
"Sergey",
""
],
[
"Kanazawa",
"Angjoo",
""
]
] |
new_dataset
| 0.999188 |
2107.00416
|
Christian G\"ottel
|
Christian G\"ottel, Konstantinos Parasyris, Osman Unsal, Pascal
Felber, Marcelo Pasin, Valerio Schiavoni
|
Scrooge Attack: Undervolting ARM Processors for Profit
|
European Commission Project: LEGaTO - Low Energy Toolset for
Heterogeneous Computing (EC-H2020-780681)
|
2021 40th International Symposium on Reliable Distributed Systems
(SRDS) (2021) 187-197
|
10.1109/SRDS53918.2021.00027
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Latest ARM processors are approaching the computational power of x86
architectures while consuming much less energy. Consequently, supply follows
demand with Amazon EC2, Equinix Metal and Microsoft Azure offering ARM-based
instances, while Oracle Cloud Infrastructure is about to add such support. We
expect this trend to continue, with an increasing number of cloud providers
offering ARM-based cloud instances.
ARM processors are more energy-efficient leading to substantial electricity
savings for cloud providers. However, a malicious cloud provider could
intentionally reduce the CPU voltage to further lower its costs. Running
applications malfunction when the undervolting goes below critical thresholds.
By avoiding critical voltage regions, a cloud provider can run undervolted
instances in a stealthy manner.
This practical experience report describes a novel attack scenario: an attack
launched by the cloud provider against its users to aggressively reduce the
processor voltage for saving energy to the last penny. We call it the Scrooge
Attack and show how it could be executed using ARM-based computing instances.
We mimic ARM-based cloud instances by deploying our own ARM-based devices using
different generations of Raspberry Pi. Using realistic and synthetic workloads,
we demonstrate to which degree of aggressiveness the attack is relevant. The
attack is unnoticeable by our detection method up to an offset of -50mV. We
show that the attack may even remain completely stealthy for certain workloads.
Finally, we propose a set of client-based detection methods that can identify
undervolted instances. We support experimental reproducibility and provide
instructions to reproduce our results.
|
[
{
"version": "v1",
"created": "Thu, 1 Jul 2021 12:58:23 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jul 2021 06:41:58 GMT"
},
{
"version": "v3",
"created": "Thu, 12 May 2022 13:46:13 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Göttel",
"Christian",
""
],
[
"Parasyris",
"Konstantinos",
""
],
[
"Unsal",
"Osman",
""
],
[
"Felber",
"Pascal",
""
],
[
"Pasin",
"Marcelo",
""
],
[
"Schiavoni",
"Valerio",
""
]
] |
new_dataset
| 0.991059 |
2109.06161
|
Stan Birchfield
|
Yunzhi Lin, Jonathan Tremblay, Stephen Tyree, Patricio A. Vela, Stan
Birchfield
|
Single-Stage Keypoint-Based Category-Level Object Pose Estimation from
an RGB Image
|
ICRA 2022. Project page at https://sites.google.com/view/centerpose
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior work on 6-DoF object pose estimation has largely focused on
instance-level processing, in which a textured CAD model is available for each
object being detected. Category-level 6-DoF pose estimation represents an
important step toward developing robotic vision systems that operate in
unstructured, real-world scenarios. In this work, we propose a single-stage,
keypoint-based approach for category-level object pose estimation that operates
on unknown object instances within a known category using a single RGB image as
input. The proposed network performs 2D object detection, detects 2D keypoints,
estimates 6-DoF pose, and regresses relative bounding cuboid dimensions. These
quantities are estimated in a sequential fashion, leveraging the recent idea of
convGRU for propagating information from easier tasks to those that are more
difficult. We favor simplicity in our design choices: generic cuboid vertex
coordinates, single-stage network, and monocular RGB input. We conduct
extensive experiments on the challenging Objectron benchmark, outperforming
state-of-the-art methods on the 3D IoU metric (27.6% higher than the MobilePose
single-stage approach and 7.1% higher than the related two-stage approach).
|
[
{
"version": "v1",
"created": "Mon, 13 Sep 2021 17:55:00 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 08:50:48 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Lin",
"Yunzhi",
""
],
[
"Tremblay",
"Jonathan",
""
],
[
"Tyree",
"Stephen",
""
],
[
"Vela",
"Patricio A.",
""
],
[
"Birchfield",
"Stan",
""
]
] |
new_dataset
| 0.997488 |
2112.08854
|
Lintong Zhang
|
Lintong Zhang, Marco Camurri, David Wisth and Maurice Fallon
|
Multi-Camera LiDAR Inertial Extension to the Newer College Dataset
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a multi-camera LiDAR inertial dataset of 4.5 km walking distance
as an expansion of the Newer College Dataset. The global shutter multi-camera
device is hardware synchronized with both the IMU and LiDAR, which is more
accurate than the original dataset with software synchronization. This dataset
also provides six Degrees of Freedom (DoF) ground truth poses at LiDAR
frequency (10 Hz). Three data collections are described and an example use case
of multi-camera visual-inertial odometry is demonstrated. This expansion
dataset contains small and narrow passages, large scale open spaces, as well as
vegetated areas, to test localization and mapping systems. Furthermore, some
sequences present challenging situations such as abrupt lighting change,
textureless surfaces, and aggressive motion. The dataset is available at:
https://ori-drs.github.io/newer-college-dataset/
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 13:02:59 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2022 15:21:46 GMT"
},
{
"version": "v3",
"created": "Thu, 12 May 2022 13:24:42 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Zhang",
"Lintong",
""
],
[
"Camurri",
"Marco",
""
],
[
"Wisth",
"David",
""
],
[
"Fallon",
"Maurice",
""
]
] |
new_dataset
| 0.999615 |
2201.06777
|
Ana Brassard
|
Ana Brassard, Benjamin Heinzerling, Pride Kavumba, Kentaro Inui
|
COPA-SSE: Semi-structured Explanations for Commonsense Reasoning
|
6 pages, 6 figures, LREC 2022. Data available at
https://github.com/a-brassard/copa-sse
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present Semi-Structured Explanations for COPA (COPA-SSE), a new
crowdsourced dataset of 9,747 semi-structured, English common sense
explanations for Choice of Plausible Alternatives (COPA) questions. The
explanations are formatted as a set of triple-like common sense statements with
ConceptNet relations but freely written concepts. This semi-structured format
strikes a balance between the high quality but low coverage of structured data
and the lower quality but high coverage of free-form crowdsourcing. Each
explanation also includes a set of human-given quality ratings. With their
familiar format, the explanations are geared towards commonsense reasoners
operating on knowledge graphs and serve as a starting point for ongoing work on
improving such systems. The dataset is available at
https://github.com/a-brassard/copa-sse.
|
[
{
"version": "v1",
"created": "Tue, 18 Jan 2022 07:20:57 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jan 2022 02:20:31 GMT"
},
{
"version": "v3",
"created": "Thu, 12 May 2022 03:15:05 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Brassard",
"Ana",
""
],
[
"Heinzerling",
"Benjamin",
""
],
[
"Kavumba",
"Pride",
""
],
[
"Inui",
"Kentaro",
""
]
] |
new_dataset
| 0.997069 |
2201.09670
|
Yu Wang
|
Yu Wang, Wujun Xie, Haochang Chen, and David Day-Uei Li
|
Low hardware consumption, resolution-configurable Gray code oscillator
time-to-digital converters implemented in 16nm, 20nm and 28nm FPGAs
|
9 pages, 9 figures
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a low hardware consumption, resolution-configurable,
automatically calibrating Gray code oscillator time-to-digital converter (TDC)
in Xilinx 16nm UltraScale+, 20nm UltraScale and 28nm Virtex-7
field-programmable gate arrays (FPGAs). The proposed TDC has several
innovations: 1) a sampling matrix to improve the resolution. 2) a virtual bin
calibration method (VBCM) to realize resolution configuration and automatic
calibration. 3) a hardware implementation of the VBCM in standard FPGA devices.
We implemented and evaluated a 16-channel TDC system in all three FPGAs. The
UltraScale+ version achieved the best resolution (least significant bit, LSB)
of 20.97 ps with 0.09 LSB averaged peak-peak differential linearity (DNLpk-pk).
The UltraScale and Virtex-7 versions achieved the best resolutions of 36.01 ps
with 0.10 LSB averaged DNLpk-pk and 34.84 ps with 0.08 LSB averaged DNLpk-pk,
respectively.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 16:30:07 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 10:02:37 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Xie",
"Wujun",
""
],
[
"Chen",
"Haochang",
""
],
[
"Li",
"David Day-Uei",
""
]
] |
new_dataset
| 0.99909 |
2204.03475
|
Tal Ridnik
|
Tal Ridnik, Hussam Lawen, Emanuel Ben-Baruch, Asaf Noy
|
Solving ImageNet: a Unified Scheme for Training any Backbone to Top
Results
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
ImageNet serves as the primary dataset for evaluating the quality of
computer-vision models. The common practice today is training each architecture
with a tailor-made scheme, designed and tuned by an expert. In this paper, we
present a unified scheme for training any backbone on ImageNet. The scheme,
named USI (Unified Scheme for ImageNet), is based on knowledge distillation and
modern tricks. It requires no adjustments or hyper-parameters tuning between
different models, and is efficient in terms of training times. We test USI on a
wide variety of architectures, including CNNs, Transformers, Mobile-oriented
and MLP-only. On all models tested, USI outperforms previous state-of-the-art
results. Hence, we are able to transform training on ImageNet from an
expert-oriented task to an automatic seamless routine. Since USI accepts any
backbone and trains it to top results, it also enables to perform methodical
comparisons, and identify the most efficient backbones along the speed-accuracy
Pareto curve. Implementation is available
at: https://github.com/Alibaba-MIIL/Solving_ImageNet
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 14:43:58 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 05:44:38 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Ridnik",
"Tal",
""
],
[
"Lawen",
"Hussam",
""
],
[
"Ben-Baruch",
"Emanuel",
""
],
[
"Noy",
"Asaf",
""
]
] |
new_dataset
| 0.996434 |
2205.05738
|
Shivam Sharma
|
Shivam Sharma, Md. Shad Akhtar, Preslav Nakov, Tanmoy Chakraborty
|
DISARM: Detecting the Victims Targeted by Harmful Memes
|
Accepted at NAACL 2022 (Findings)
| null | null | null |
cs.CL cs.AI cs.CV cs.CY cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Internet memes have emerged as an increasingly popular means of communication
on the Web. Although typically intended to elicit humour, they have been
increasingly used to spread hatred, trolling, and cyberbullying, as well as to
target specific individuals, communities, or society on political,
socio-cultural, and psychological grounds. While previous work has focused on
detecting harmful, hateful, and offensive memes, identifying whom they attack
remains a challenging and underexplored area. Here we aim to bridge this gap.
In particular, we create a dataset where we annotate each meme with its
victim(s) such as the name of the targeted person(s), organization(s), and
community(ies). We then propose DISARM (Detecting vIctimS targeted by hARmful
Memes), a framework that uses named entity recognition and person
identification to detect all entities a meme is referring to, and then,
incorporates a novel contextualized multimodal deep neural network to classify
whether the meme intends to harm these entities. We perform several systematic
experiments on three test setups, corresponding to entities that are (a) all
seen while training, (b) not seen as a harmful target on training, and (c) not
seen at all on training. The evaluation results show that DISARM significantly
outperforms ten unimodal and multimodal systems. Finally, we show that DISARM
is interpretable and comparatively more generalizable and that it can reduce
the relative error rate for harmful target identification by up to 9 points
absolute over several strong multimodal rivals.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 19:14:26 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Sharma",
"Shivam",
""
],
[
"Akhtar",
"Md. Shad",
""
],
[
"Nakov",
"Preslav",
""
],
[
"Chakraborty",
"Tanmoy",
""
]
] |
new_dataset
| 0.9995 |
2205.05823
|
Vladimir Saveljev
|
Vladimir Saveljev
|
Continuous wavelet transform of multiview images using wavelets based on
voxel patterns
|
19 pages, 27 figures, 35 equations, 21 references
| null | null | null |
cs.CV cs.HC eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose the multiview wavelets based on voxel patterns of autostereoscopic
multiview displays. Direct and inverse continuous wavelet transforms of binary
and gray-scale images were performed. The input to the inverse wavelet
transform was the array of wavelet coefficients of the direct transform. A
restored image reproduces the structure of the multiview image correctly. Also,
we modified the dimension of the parallax and the depth of 3D images. The
restored and modified images were displayed in 3D using lenticular plates. In
each case, the visual 3D picture corresponds to the applied modifications. The
results can be applied to the autostereoscopic 3D displays.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 01:22:02 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Saveljev",
"Vladimir",
""
]
] |
new_dataset
| 0.999044 |
2205.05849
|
Li Du
|
Li Du, Xiao Ding, Kai Xiong, Ting Liu, and Bing Qin
|
e-CARE: a New Dataset for Exploring Explainable Causal Reasoning
| null | null | null | null |
cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding causality has vital importance for various Natural Language
Processing (NLP) applications. Beyond the labeled instances, conceptual
explanations of the causality can provide deep understanding of the causal
facts to facilitate the causal reasoning process. However, such explanation
information still remains absent in existing causal reasoning resources. In
this paper, we fill this gap by presenting a human-annotated explainable CAusal
REasoning dataset (e-CARE), which contains over 21K causal reasoning questions,
together with natural language formed explanations of the causal questions.
Experimental results show that generating valid explanations for causal facts
still remains especially challenging for the state-of-the-art models, and the
explanation information can be helpful for promoting the accuracy and stability
of causal reasoning models.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 02:41:48 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Du",
"Li",
""
],
[
"Ding",
"Xiao",
""
],
[
"Xiong",
"Kai",
""
],
[
"Liu",
"Ting",
""
],
[
"Qin",
"Bing",
""
]
] |
new_dataset
| 0.999332 |
2205.05853
|
Zhong Sun
|
Zhong Sun, Daniele Ielmini
|
Tutorial: Analog Matrix Computing (AMC) with Crosspoint Resistive Memory
Arrays
|
6 pages, 6 figures, 2 tables, IEEE TCAS-II Tutorial 2022, accepted
for publication
| null | null | null |
cs.ET eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Matrix computation is ubiquitous in modern scientific and engineering fields.
Due to the high computational complexity in conventional digital computers,
matrix computation represents a heavy workload in many data-intensive
applications, e.g., machine learning, scientific computing, and wireless
communications. For fast, efficient matrix computations, analog computing with
resistive memory arrays has been proven to be a promising solution. In this
Tutorial, we present analog matrix computing (AMC) circuits based on crosspoint
resistive memory arrays. AMC circuits are able to carry out basic matrix
computations, including matrix multiplication, matrix inversion, pseudoinverse
and eigenvector computation, all with one single operation. We describe the
main design principles of the AMC circuits, such as local/global or
negative/positive feedback configurations, with/without external inputs.
Mapping strategies for matrices containing negative values will be presented.
The underlying requirements for circuit stability will be described via the
transfer function analysis, which also defines time complexity of the circuits
towards steady-state results. Lastly, typical applications, challenges, and
future trends of AMC circuits will be discussed.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 02:59:18 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Sun",
"Zhong",
""
],
[
"Ielmini",
"Daniele",
""
]
] |
new_dataset
| 0.993403 |
2205.05918
|
Thanh Binh Nguyen
|
Thao V. Ha, Hoang Nguyen, Son T. Huynh, Trung T. Nguyen, Binh T.
Nguyen
|
Fall detection using multimodal data
|
12 pages, 5 figures, 6 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In recent years, the occurrence of falls has increased and has had
detrimental effects on older adults. Therefore, various machine learning
approaches and datasets have been introduced to construct an efficient fall
detection algorithm for the social community. This paper studies the fall
detection problem based on a large public dataset, namely the UP-Fall Detection
Dataset. This dataset was collected from a dozen of volunteers using different
sensors and two cameras. We propose several techniques to obtain valuable
features from these sensors and cameras and then construct suitable models for
the main problem. The experimental results show that our proposed methods can
bypass the state-of-the-art methods on this dataset in terms of accuracy,
precision, recall, and F1 score.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 07:13:34 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Ha",
"Thao V.",
""
],
[
"Nguyen",
"Hoang",
""
],
[
"Huynh",
"Son T.",
""
],
[
"Nguyen",
"Trung T.",
""
],
[
"Nguyen",
"Binh T.",
""
]
] |
new_dataset
| 0.998736 |
2205.05960
|
Junjia Liu
|
Junjia Liu, Yiting Chen, Zhipeng Dong, Shixiong Wang, Sylvain Calinon,
Miao Li, and Fei Chen
|
Robot Cooking with Stir-fry: Bimanual Non-prehensile Manipulation of
Semi-fluid Objects
|
8 pages, 8 figures, published to RA-L
|
IEEE Robotics and Automation Letters, vol. 7, no. 2, pp.
5159-5166, April 2022
|
10.1109/LRA.2022.3153728
| null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter describes an approach to achieve well-known Chinese cooking art
stir-fry on a bimanual robot system. Stir-fry requires a sequence of highly
dynamic coordinated movements, which is usually difficult to learn for a chef,
let alone transfer to robots. In this letter, we define a canonical stir-fry
movement, and then propose a decoupled framework for learning this deformable
object manipulation from human demonstration. First, the dual arms of the robot
are decoupled into different roles (a leader and follower) and learned with
classical and neural network-based methods separately, then the bimanual task
is transformed into a coordination problem. To obtain general bimanual
coordination, we secondly propose a Graph and Transformer based model --
Structured-Transformer, to capture the spatio-temporal relationship between
dual-arm movements. Finally, by adding visual feedback of content deformation,
our framework can adjust the movements automatically to achieve the desired
stir-fry effect. We verify the framework by a simulator and deploy it on a real
bimanual Panda robot system. The experimental results validate our framework
can realize the bimanual robot stir-fry motion and have the potential to extend
to other deformable objects with bimanual coordination.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 08:58:30 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Liu",
"Junjia",
""
],
[
"Chen",
"Yiting",
""
],
[
"Dong",
"Zhipeng",
""
],
[
"Wang",
"Shixiong",
""
],
[
"Calinon",
"Sylvain",
""
],
[
"Li",
"Miao",
""
],
[
"Chen",
"Fei",
""
]
] |
new_dataset
| 0.997247 |
2205.05976
|
Thanh Binh Nguyen
|
Quynh Nguyen, Dac H. Nguyen, Son T. Huynh, Hoa K. Dam, Binh T. Nguyen
|
TaDeR: A New Task Dependency Recommendation for Project Management
Platform
|
28 pages, 1 figure, 18 tables
| null | null | null |
cs.IR cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Many startups and companies worldwide have been using project management
software and tools to monitor, track and manage their projects. For software
projects, the number of tasks from the beginning to the end is quite a large
number that sometimes takes a lot of time and effort to search and link the
current task to a group of previous ones for further references. This paper
proposes an efficient task dependency recommendation algorithm to suggest tasks
dependent on a given task that the user has just created. We present an
efficient feature engineering step and construct a deep neural network to this
aim. We performed extensive experiments on two different large projects
(MDLSITE from moodle.org and FLUME from apache.org) to find the best features
in 28 combinations of features and the best performance model using two
embedding methods (GloVe and FastText). We consider three types of models (GRU,
CNN, LSTM) using Accuracy@K, MRR@K, and Recall@K (where K = 1, 2, 3, and 5) and
baseline models using traditional methods: TF-IDF with various matching score
calculating such as cosine similarity, Euclidean distance, Manhattan distance,
and Chebyshev distance. After many experiments, the GloVe Embedding and CNN
model reached the best result in our dataset, so we chose this model as our
proposed method. In addition, adding the time filter in the post-processing
step can significantly improve the recommendation system's performance. The
experimental results show that our proposed method can reach 0.2335 in
Accuracy@1 and MRR@1 and 0.2011 in Recall@1 of dataset FLUME. With the MDLSITE
dataset, we obtained 0.1258 in Accuracy@1 and MRR@1 and 0.1141 in Recall@1. In
the top 5, our model reached 0.3040 in Accuracy@5, 0.2563 MRR@5, and 0.2651
Recall@5 in FLUME. In the MDLSITE dataset, our model got 0.5270 Accuracy@5,
0.2689 MRR@5, and 0.2651 Recall@5.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 09:30:23 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Nguyen",
"Quynh",
""
],
[
"Nguyen",
"Dac H.",
""
],
[
"Huynh",
"Son T.",
""
],
[
"Dam",
"Hoa K.",
""
],
[
"Nguyen",
"Binh T.",
""
]
] |
new_dataset
| 0.997819 |
2205.06025
|
Damith Premasiri Dola Mullage
|
Damith Premasiri, Tharindu Ranasinghe, Wajdi Zaghouani, Ruslan Mitkov
|
DTW at Qur'an QA 2022: Utilising Transfer Learning with Transformers for
Question Answering in a Low-resource Domain
|
Accepted to OSACT5 Co-located with LREC 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The task of machine reading comprehension (MRC) is a useful benchmark to
evaluate the natural language understanding of machines. It has gained
popularity in the natural language processing (NLP) field mainly due to the
large number of datasets released for many languages. However, the research in
MRC has been understudied in several domains, including religious texts. The
goal of the Qur'an QA 2022 shared task is to fill this gap by producing
state-of-the-art question answering and reading comprehension research on
Qur'an. This paper describes the DTW entry to the Quran QA 2022 shared task.
Our methodology uses transfer learning to take advantage of available Arabic
MRC data. We further improve the results using various ensemble learning
strategies. Our approach provided a partial Reciprocal Rank (pRR) score of 0.49
on the test set, proving its strong performance on the task.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 11:17:23 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Premasiri",
"Damith",
""
],
[
"Ranasinghe",
"Tharindu",
""
],
[
"Zaghouani",
"Wajdi",
""
],
[
"Mitkov",
"Ruslan",
""
]
] |
new_dataset
| 0.997576 |
2205.06059
|
Kaicheng Zhang
|
Kaicheng Zhang, Ziyang Hong, Shida Xu, Sen Wang
|
CURL: Continuous, Ultra-compact Representation for LiDAR
| null |
Robotics: Science and Systems (RSS), 2022
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Increasing the density of the 3D LiDAR point cloud is appealing for many
applications in robotics. However, high-density LiDAR sensors are usually
costly and still limited to a level of coverage per scan (e.g., 128 channels).
Meanwhile, denser point cloud scans and maps mean larger volumes to store and
longer times to transmit. Existing works focus on either improving point cloud
density or compressing its size. This paper aims to design a novel 3D point
cloud representation that can continuously increase point cloud density while
reducing its storage and transmitting size. The pipeline of the proposed
Continuous, Ultra-compact Representation of LiDAR (CURL) includes four main
steps: meshing, upsampling, encoding, and continuous reconstruction. It is
capable of transforming a 3D LiDAR scan or map into a compact spherical
harmonics representation which can be used or transmitted in low latency to
continuously reconstruct a much denser 3D point cloud. Extensive experiments on
four public datasets, covering college gardens, city streets, and indoor rooms,
demonstrate that much denser 3D point clouds can be accurately reconstructed
using the proposed CURL representation while achieving up to 80% storage
space-saving. We open-source the CURL codes for the community.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 12:50:02 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Zhang",
"Kaicheng",
""
],
[
"Hong",
"Ziyang",
""
],
[
"Xu",
"Shida",
""
],
[
"Wang",
"Sen",
""
]
] |
new_dataset
| 0.996962 |
2205.06064
|
Philipp Jeitner
|
Tomas Hlavacek, Philipp Jeitner, Donika Mirdita, Haya Shulman and
Michael Waidner
|
Stalloris: RPKI Downgrade Attack
| null |
31th USENIX Security Symposium (USENIX Security 22), 2022
| null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate the first downgrade attacks against RPKI. The key design
property in RPKI that allows our attacks is the tradeoff between connectivity
and security: when networks cannot retrieve RPKI information from publication
points, they make routing decisions in BGP without validating RPKI. We exploit
this tradeoff to develop attacks that prevent the retrieval of the RPKI objects
from the public repositories, thereby disabling RPKI validation and exposing
the RPKI-protected networks to prefix hijack attacks.
We demonstrate experimentally that at least 47% of the public repositories
are vulnerable against a specific version of our attacks, a rate-limiting
off-path downgrade attack. We also show that all the current RPKI relying party
implementations are vulnerable to attacks by a malicious publication point.
This translates to 20.4% of the IPv4 address space.
We provide recommendations for preventing our downgrade attacks. However,
resolving the fundamental problem is not straightforward: if the relying
parties prefer security over connectivity and insist on RPKI validation when
ROAs cannot be retrieved, the victim AS may become disconnected from many more
networks than just the one that the adversary wishes to hijack. Our work shows
that the publication points are a critical infrastructure for Internet
connectivity and security. Our main recommendation is therefore that the
publication points should be hosted on robust platforms guaranteeing a high
degree of connectivity.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 12:55:01 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Hlavacek",
"Tomas",
""
],
[
"Jeitner",
"Philipp",
""
],
[
"Mirdita",
"Donika",
""
],
[
"Shulman",
"Haya",
""
],
[
"Waidner",
"Michael",
""
]
] |
new_dataset
| 0.994792 |
2205.06072
|
Elmurod Kuriyozov
|
Ulugbek Salaev, Elmurod Kuriyozov, Carlos G\'omez-Rodr\'iguez
|
SimRelUz: Similarity and Relatedness scores as a Semantic Evaluation
dataset for Uzbek language
|
Final version, published in the proceedings of SIGUL workshop of LREC
2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic relatedness between words is one of the core concepts in natural
language processing, thus making semantic evaluation an important task. In this
paper, we present a semantic model evaluation dataset: SimRelUz - a collection
of similarity and relatedness scores of word pairs for the low-resource Uzbek
language. The dataset consists of more than a thousand pairs of words carefully
selected based on their morphological features, occurrence frequency, semantic
relation, as well as annotated by eleven native Uzbek speakers from different
age groups and gender. We also paid attention to the problem of dealing with
rare words and out-of-vocabulary words to thoroughly evaluate the robustness of
semantic models.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 13:11:28 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Salaev",
"Ulugbek",
""
],
[
"Kuriyozov",
"Elmurod",
""
],
[
"Gómez-Rodríguez",
"Carlos",
""
]
] |
new_dataset
| 0.999775 |
2205.06114
|
Jiska Classen
|
Jiska Classen, Alexander Heinrich, Robert Reith, Matthias Hollick
|
Evil Never Sleeps: When Wireless Malware Stays On After Turning Off
iPhones
| null |
WiSec 2022: Proceedings of the 15th ACM Conference on Security and
Privacy in Wireless and Mobile Networks
|
10.1145/3507657.3528547
| null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When an iPhone is turned off, most wireless chips stay on. For instance, upon
user-initiated shutdown, the iPhone remains locatable via the Find My network.
If the battery runs low, the iPhone shuts down automatically and enters a power
reserve mode. Yet, users can still access credit cards, student passes, and
other items in their Wallet. We analyze how Apple implements these standalone
wireless features, working while iOS is not running, and determine their
security boundaries. On recent iPhones, Bluetooth, Near Field Communication
(NFC), and Ultra-wideband (UWB) keep running after power off, and all three
wireless chips have direct access to the secure element. As a practical example
what this means to security, we demonstrate the possibility to load malware
onto a Bluetooth chip that is executed while the iPhone is off.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 14:29:49 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Classen",
"Jiska",
""
],
[
"Heinrich",
"Alexander",
""
],
[
"Reith",
"Robert",
""
],
[
"Hollick",
"Matthias",
""
]
] |
new_dataset
| 0.999768 |
2205.06142
|
Ferdian Jovan
|
Ferdian Jovan, Ryan McConville, Catherine Morgan, Emma Tonkin, Alan
Whone, Ian Craddock
|
Multimodal Indoor Localisation for Measuring Mobility in Parkinson's
Disease using Transformers
|
17 pages, 1 figure, 3 tables
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parkinson's disease (PD) is a slowly progressive debilitating
neurodegenerative disease which is prominently characterised by motor symptoms.
Indoor localisation, including number and speed of room to room transitions,
provides a proxy outcome which represents mobility and could be used as a
digital biomarker to quantify how mobility changes as this disease progresses.
We use data collected from 10 people with Parkinson's, and 10 controls, each of
whom lived for five days in a smart home with various sensors. In order to more
effectively localise them indoors, we propose a transformer-based approach
utilizing two data modalities, Received Signal Strength Indicator (RSSI) and
accelerometer data from wearable devices, which provide complementary views of
movement. Our approach makes asymmetric and dynamic correlations by a) learning
temporal correlations at different scales and levels, and b) utilizing various
gating mechanisms to select relevant features within modality and suppress
unnecessary modalities. On a dataset with real patients, we demonstrate that
our proposed method gives an average accuracy of 89.9%, outperforming
competitors. We also show that our model is able to better predict in-home
mobility for people with Parkinson's with an average offset of 1.13 seconds to
ground truth.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 15:05:57 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Jovan",
"Ferdian",
""
],
[
"McConville",
"Ryan",
""
],
[
"Morgan",
"Catherine",
""
],
[
"Tonkin",
"Emma",
""
],
[
"Whone",
"Alan",
""
],
[
"Craddock",
"Ian",
""
]
] |
new_dataset
| 0.999076 |
2205.06181
|
Flora Sakketou Dr
|
Flora Sakketou, Joan Plepi, Riccardo Cervero, Henri-Jacques Geiss,
Paolo Rosso, Lucie Flek
|
FACTOID: A New Dataset for Identifying Misinformation Spreaders and
Political Bias
|
Accepted to LREC 2022
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Proactively identifying misinformation spreaders is an important step towards
mitigating the impact of fake news on our society. In this paper, we introduce
a new contemporary Reddit dataset for fake news spreader analysis, called
FACTOID, monitoring political discussions on Reddit since the beginning of
2020. The dataset contains over 4K users with 3.4M Reddit posts, and includes,
beyond the users' binary labels, also their fine-grained credibility level
(very low to very high) and their political bias strength (extreme right to
extreme left). As far as we are aware, this is the first fake news spreader
dataset that simultaneously captures both the long-term context of users'
historical posts and the interactions between them. To create the first
benchmark on our data, we provide methods for identifying misinformation
spreaders by utilizing the social connections between the users along with
their psycho-linguistic features. We show that the users' social interactions
can, on their own, indicate misinformation spreading, while the
psycho-linguistic features are mostly informative in non-neural classification
settings. In a qualitative analysis, we observe that detecting affective mental
processes correlates negatively with right-biased users, and that the openness
to experience factor is lower for those who spread fake news.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 07:42:56 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Sakketou",
"Flora",
""
],
[
"Plepi",
"Joan",
""
],
[
"Cervero",
"Riccardo",
""
],
[
"Geiss",
"Henri-Jacques",
""
],
[
"Rosso",
"Paolo",
""
],
[
"Flek",
"Lucie",
""
]
] |
new_dataset
| 0.999425 |
2205.06255
|
Qianqian Wang
|
Qianqian Wang, Zhengqi Li, David Salesin, Noah Snavely, Brian Curless,
Janne Kontkanen
|
3D Moments from Near-Duplicate Photos
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce 3D Moments, a new computational photography effect. As input we
take a pair of near-duplicate photos, i.e., photos of moving subjects from
similar viewpoints, common in people's photo collections. As output, we produce
a video that smoothly interpolates the scene motion from the first photo to the
second, while also producing camera motion with parallax that gives a
heightened sense of 3D. To achieve this effect, we represent the scene as a
pair of feature-based layered depth images augmented with scene flow. This
representation enables motion interpolation along with independent control of
the camera viewpoint. Our system produces photorealistic space-time videos with
motion parallax and scene dynamics, while plausibly recovering regions occluded
in the original views. We conduct extensive experiments demonstrating superior
performance over baselines on public datasets and in-the-wild photos. Project
page: https://3d-moments.github.io/
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 17:56:18 GMT"
}
] | 2022-05-13T00:00:00 |
[
[
"Wang",
"Qianqian",
""
],
[
"Li",
"Zhengqi",
""
],
[
"Salesin",
"David",
""
],
[
"Snavely",
"Noah",
""
],
[
"Curless",
"Brian",
""
],
[
"Kontkanen",
"Janne",
""
]
] |
new_dataset
| 0.98638 |
2001.10071
|
Lucas Oliveira E S
|
Lucas Emanuel Silva e Oliveira, Ana Carolina Peters, Adalniza Moura
Pucca da Silva, Caroline P. Gebeluca, Yohan Bonescki Gumiel, Lilian Mie Mukai
Cintho, Deborah Ribeiro Carvalho, Sadid A. Hasan, Claudia Maria Cabral Moro
|
SemClinBr -- a multi institutional and multi specialty semantically
annotated corpus for Portuguese clinical NLP tasks
| null | null |
10.1186/s13326-022-00269-1
| null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The high volume of research focusing on extracting patient's information from
electronic health records (EHR) has led to an increase in the demand for
annotated corpora, which are a very valuable resource for both the development
and evaluation of natural language processing (NLP) algorithms. The absence of
a multi-purpose clinical corpus outside the scope of the English language,
especially in Brazilian Portuguese, is glaring and severely impacts scientific
progress in the biomedical NLP field. In this study, we developed a
semantically annotated corpus using clinical texts from multiple medical
specialties, document types, and institutions. We present the following: (1) a
survey listing common aspects and lessons learned from previous research, (2) a
fine-grained annotation schema which could be replicated and guide other
annotation initiatives, (3) a web-based annotation tool focusing on an
annotation suggestion feature, and (4) both intrinsic and extrinsic evaluation
of the annotations. The result of this work is SemClinBr, a corpus that has
1,000 clinical notes, labeled with 65,117 entities and 11,263 relations, and
can support a variety of clinical NLP tasks and boost the EHR's secondary use
for the Portuguese language.
|
[
{
"version": "v1",
"created": "Mon, 27 Jan 2020 20:39:32 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Oliveira",
"Lucas Emanuel Silva e",
""
],
[
"Peters",
"Ana Carolina",
""
],
[
"da Silva",
"Adalniza Moura Pucca",
""
],
[
"Gebeluca",
"Caroline P.",
""
],
[
"Gumiel",
"Yohan Bonescki",
""
],
[
"Cintho",
"Lilian Mie Mukai",
""
],
[
"Carvalho",
"Deborah Ribeiro",
""
],
[
"Hasan",
"Sadid A.",
""
],
[
"Moro",
"Claudia Maria Cabral",
""
]
] |
new_dataset
| 0.991765 |
2103.05204
|
Zihan Zhang
|
Zihan Zhang
|
A New Metric on Symmetric Group and Applications to Block Permutation
Codes
|
arXiv admin note: text overlap with arXiv:1710.09638 by other authors
| null | null | null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Permutation codes have received a great attention due to various
applications. For different applications, one needs permutation codes under
different metrics. The generalized Cayley metric was introduced by Chee and Vu
[4] and this metric includes several other metrics as special cases. However,
the generalized Cayley metric is not easily computable in general. Therefore,
the block permutation metric was introduced by Yang et al. [22], since the
generalized Cayley metric and the block permutation metric have the same
magnitude. However, the block permutation metric lacks the symmetry property,
which prevents more advanced algebraic tools from being applied. In this paper, by
introducing a novel metric closely related to the block permutation metric, we
build a bridge between some advanced algebraic methods and codes in the block
permutation metric. More specifically, based on some techniques from algebraic
function fields originated in [19], we give an algebraic-geometric construction
of codes in the novel metric with reasonably good parameters. By observing a
trivial relation between the novel metric and block permutation metric, we then
produce non-systematic codes in block permutation metric that improve all known
results given in [21, 22]. More importantly, based on our non-systematic codes,
we provide an explicit and systematic construction of codes in block
permutation metric which improves the systematic result shown in [22]. In the
end, we demonstrate that our codes in the novel metric itself have reasonably
good parameters by showing that our construction beats the corresponding
Gilbert-Varshamov bound.
|
[
{
"version": "v1",
"created": "Tue, 9 Mar 2021 03:46:25 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Mar 2021 15:03:37 GMT"
},
{
"version": "v3",
"created": "Thu, 11 Mar 2021 06:40:25 GMT"
},
{
"version": "v4",
"created": "Sun, 18 Apr 2021 08:59:21 GMT"
},
{
"version": "v5",
"created": "Sat, 23 Oct 2021 09:52:39 GMT"
},
{
"version": "v6",
"created": "Mon, 9 May 2022 09:00:56 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Zhang",
"Zihan",
""
]
] |
new_dataset
| 0.997609 |
2103.07584
|
Shizuo Kaji
|
Shizuo Kaji, Jingyao Zhang
|
Free-form Design of Discrete Architectural Surfaces by use of Circle
Packing
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an efficient approach for the conceptual design of
architectural surfaces which are composed of triangular panels. In the
free-form design of discrete architectural surfaces, the Gaussian curvature
plays an important role not only aesthetically but also in terms of stiffness
and constructability. However, designing a surface manually with specific
Gaussian curvatures can be a time-consuming task. We propose a method to find a
triangulated surface with user-specified Gaussian curvatures (not limited to
constant Gaussian curvatures) and boundary vertex positions. In addition, the
conformal class of the final design can be specified; that is, the user has
control over the shape (the corner angles) of each triangular panel. The panels
could be encouraged to form a regular tessellation or kept close to those of
the initial design. The controllability of the conformal class suppresses
possible distortion of the panels, resulting in higher structural performance
and aesthetics. Our method relies on the idea in computational conformal
geometry called circle packing. In this line of research, the discrete Ricci
flow has been widely used for surface modelling. However, it is not trivial to
incorporate constraints such as boundary locations and convexity of the spanned
surface, which are essential to architectural applications. We propose a
perturbation of the discrete Ricci energy and develop a least-squares-based
optimisation scheme to address these problems with an open-source
implementation available online.
|
[
{
"version": "v1",
"created": "Sat, 13 Mar 2021 00:31:50 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2022 08:01:41 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Kaji",
"Shizuo",
""
],
[
"Zhang",
"Jingyao",
""
]
] |
new_dataset
| 0.997569 |
2104.02477
|
Madhurananda Pahar
|
Madhurananda Pahar, Marisa Klopper, Robin Warren and Thomas Niesler
|
COVID-19 Detection in Cough, Breath and Speech using Deep Transfer
Learning and Bottleneck Features
| null |
Computers in Biology and Medicine, 2022
|
10.1016/j.compbiomed.2021.105153
| null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present an experimental investigation into the effectiveness of transfer
learning and bottleneck feature extraction in detecting COVID-19 from audio
recordings of cough, breath and speech.
This type of screening is non-contact, does not require specialist medical
expertise or laboratory facilities and can be deployed on inexpensive consumer
hardware.
We use datasets that contain recordings of coughing, sneezing, speech and
other noises, but do not contain COVID-19 labels, to pre-train three deep
neural networks: a CNN, an LSTM and a Resnet50.
These pre-trained networks are subsequently either fine-tuned using smaller
datasets of coughing with COVID-19 labels in the process of transfer learning,
or are used as bottleneck feature extractors.
Results show that a Resnet50 classifier trained by this transfer learning
process delivers optimal or near-optimal performance across all datasets
achieving areas under the receiver operating characteristic (ROC AUC) of 0.98,
0.94 and 0.92 respectively for all three sound classes (coughs, breaths and
speech).
This indicates that coughs carry the strongest COVID-19 signature, followed
by breath and speech.
Our results also show that applying transfer learning and extracting
bottleneck features using the larger datasets without COVID-19 labels led not
only to improved performance, but also to a smaller standard deviation of the
classifier AUCs among the outer folds of the leave-$p$-out cross-validation,
indicating better generalisation.
We conclude that deep transfer learning and bottleneck feature extraction can
improve COVID-19 cough, breath and speech audio classification, yielding
automatic classifiers with higher accuracy.
|
[
{
"version": "v1",
"created": "Fri, 2 Apr 2021 23:21:24 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Apr 2021 22:14:59 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Jul 2021 15:03:56 GMT"
},
{
"version": "v4",
"created": "Wed, 18 Aug 2021 00:16:14 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Pahar",
"Madhurananda",
""
],
[
"Klopper",
"Marisa",
""
],
[
"Warren",
"Robin",
""
],
[
"Niesler",
"Thomas",
""
]
] |
new_dataset
| 0.977017 |
2104.12250
|
Jose Camacho-Collados
|
Francesco Barbieri and Luis Espinosa Anke and Jose Camacho-Collados
|
XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis
and Beyond
|
LREC 2022. Code and data available at
https://github.com/cardiffnlp/xlm-t
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language models are ubiquitous in current NLP, and their multilingual
capacity has recently attracted considerable attention. However, current
analyses have almost exclusively focused on (multilingual variants of) standard
benchmarks, and have relied on clean pre-training and task-specific corpora as
multilingual signals. In this paper, we introduce XLM-T, a model to train and
evaluate multilingual language models in Twitter. Specifically, we provide: (1)
a new strong multilingual baseline consisting of an XLM-R (Conneau et al. 2020)
model pre-trained on millions of tweets in over thirty languages, alongside
starter code to subsequently fine-tune on a target task; and (2) a set of
unified sentiment analysis Twitter datasets in eight different languages and a
XLM-T model fine-tuned on them.
|
[
{
"version": "v1",
"created": "Sun, 25 Apr 2021 20:28:53 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2022 08:06:39 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Barbieri",
"Francesco",
""
],
[
"Anke",
"Luis Espinosa",
""
],
[
"Camacho-Collados",
"Jose",
""
]
] |
new_dataset
| 0.992405 |
2108.02605
|
Alexey Tikhonov
|
Alexey Tikhonov, Alex Malkhasov, Andrey Manoshin, George Dima, R\'eka
Cserh\'ati, Md.Sadek Hossain Asif, Matt S\'ardi
|
EENLP: Cross-lingual Eastern European NLP Index
|
Accepted for LREC 2022. 5 pages, 1 figure. Originally EEML 2021
project
| null | null | null |
cs.CL cs.AI cs.NE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Motivated by the sparsity of NLP resources for Eastern European languages, we
present a broad index of existing Eastern European language resources (90+
datasets and 45+ models) published as a GitHub repository open to updates from
the community. Furthermore, to support the evaluation of commonsense reasoning
tasks, we provide hand-crafted cross-lingual datasets for five different
semantic tasks (namely news categorization, paraphrase detection, Natural
Language Inference (NLI) task, tweet sentiment detection, and news sentiment
detection) for some of the Eastern European languages. We perform several
experiments with the existing multilingual models on these datasets to define
the performance baselines and compare them to the existing results for other
languages.
|
[
{
"version": "v1",
"created": "Thu, 5 Aug 2021 13:24:30 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Aug 2021 15:00:19 GMT"
},
{
"version": "v3",
"created": "Tue, 10 May 2022 19:16:32 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Tikhonov",
"Alexey",
""
],
[
"Malkhasov",
"Alex",
""
],
[
"Manoshin",
"Andrey",
""
],
[
"Dima",
"George",
""
],
[
"Cserháti",
"Réka",
""
],
[
"Asif",
"Md. Sadek Hossain",
""
],
[
"Sárdi",
"Matt",
""
]
] |
new_dataset
| 0.999642 |
2109.00103
|
Madhurananda Pahar
|
Madhurananda Pahar, Igor Miranda, Andreas Diacon, Thomas Niesler
|
Automatic non-invasive Cough Detection based on Accelerometer and Audio
Signals
|
arXiv admin note: text overlap with arXiv:2102.04997
|
Journal of Signal Processing Systems, 2022
|
10.1007/s11265-022-01748-5
| null |
cs.SD cs.AI cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present an automatic non-invasive way of detecting cough events based on
both accelerometer and audio signals.
The acceleration signals are captured by a smartphone firmly attached to the
patient's bed, using its integrated accelerometer.
The audio signals are captured simultaneously by the same smartphone using an
external microphone.
We have compiled a manually-annotated dataset containing such
simultaneously-captured acceleration and audio signals for approximately 6000
cough and 68000 non-cough events from 14 adult male patients in a tuberculosis
clinic.
LR, SVM and MLP are evaluated as baseline classifiers and compared with deep
architectures such as CNN, LSTM, and Resnet50 using a leave-one-out
cross-validation scheme.
We find that the studied classifiers can use either acceleration or audio
signals to distinguish between coughing and other activities including
sneezing, throat-clearing, and movement on the bed with high accuracy.
However, in all cases, the deep neural networks outperform the shallow
classifiers by a clear margin and the Resnet50 offers the best performance by
achieving an AUC exceeding 0.98 and 0.99 for acceleration and audio signals
respectively.
While audio-based classification consistently offers a better performance
than acceleration-based classification, we observe that the difference is very
small for the best systems.
Since the acceleration signal requires less processing power, and since the
need to record audio is sidestepped and thus privacy is inherently secured, and
since the recording device is attached to the bed and not worn, an
accelerometer-based highly accurate non-invasive cough detector may represent a
more convenient and readily accepted method in long-term cough monitoring.
|
[
{
"version": "v1",
"created": "Tue, 31 Aug 2021 22:44:56 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Pahar",
"Madhurananda",
""
],
[
"Miranda",
"Igor",
""
],
[
"Diacon",
"Andreas",
""
],
[
"Niesler",
"Thomas",
""
]
] |
new_dataset
| 0.998901 |
2203.09993
|
Xinyu Wang
|
Rui Dong, Zhicheng Huang, Ian Iong Lam, Yan Chen, Xinyu Wang
|
WebRobot: Web Robotic Process Automation using Interactive
Programming-by-Demonstration
|
Published at PLDI 2022
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
It is imperative to democratize robotic process automation (RPA), as RPA has
become a main driver of the digital transformation but is still technically
very demanding to construct, especially for non-experts. In this paper, we
study how to automate an important class of RPA tasks, dubbed web RPA, which
are concerned with constructing software bots that automate interactions across
data and a web browser. Our main contributions are twofold. First, we develop a
formal foundation which allows semantically reasoning about web RPA programs
and formulate its synthesis problem in a principled manner. Second, we propose
a web RPA program synthesis algorithm based on a new idea called speculative
rewriting. This leads to a novel speculate-and-validate methodology in the
context of rewrite-based program synthesis, which has also been shown to be both
theoretically simple and practically efficient for synthesizing programs from
demonstrations. We have built these ideas in a new interactive synthesizer
called WebRobot and evaluate it on 76 web RPA benchmarks. Our results show that
WebRobot automated a majority of them effectively. Furthermore, we show that
WebRobot compares favorably with a conventional rewrite-based synthesis
baseline implemented using egg. Finally, we conduct a small user study
demonstrating that WebRobot is also usable.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 14:43:37 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2022 20:36:51 GMT"
},
{
"version": "v3",
"created": "Wed, 11 May 2022 15:30:39 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Dong",
"Rui",
""
],
[
"Huang",
"Zhicheng",
""
],
[
"Lam",
"Ian Iong",
""
],
[
"Chen",
"Yan",
""
],
[
"Wang",
"Xinyu",
""
]
] |
new_dataset
| 0.991544 |
2203.10965
|
Junda He
|
Junda He, Bowen Xu, Zhou Yang, DongGyun Han, Chengran Yang, and David
Lo
|
PTM4Tag: Sharpening Tag Recommendation of Stack Overflow Posts with
Pre-trained Models
|
Accepted for Research Track ICPC 2022
| null |
10.1145/3524610.3527897
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Stack Overflow is often viewed as the most influential Software Question
Answer (SQA) website with millions of programming-related questions and
answers. Tags play a critical role in efficiently structuring the contents in
Stack Overflow and are vital to support a range of site operations, e.g.,
querying relevant contents. Poorly selected tags often introduce extra noise
and redundancy, which leads to tag synonym and tag explosion problems. Thus, an
automated tag recommendation technique that can accurately recommend
high-quality tags is desired to alleviate the problems mentioned above.
Inspired by the recent success of pre-trained language models (PTMs) in natural
language processing (NLP), we present PTM4Tag, a tag recommendation framework
for Stack Overflow posts that utilizes PTMs with a triplet architecture, which
models the components of a post, i.e., Title, Description, and Code with
independent language models. To the best of our knowledge, this is the first
work that leverages PTMs in the tag recommendation task of SQA sites. We
comparatively evaluate the performance of PTM4Tag based on five popular
pre-trained models: BERT, RoBERTa, ALBERT, CodeBERT, and BERTOverflow. Our
results show that leveraging the software engineering (SE) domain-specific PTM
CodeBERT in PTM4Tag achieves the best performance among the five considered
PTMs and outperforms the state-of-the-art deep learning (Convolutional Neural
Network-based) approach by a large margin in terms of average $Precision@k$,
$Recall@k$, and $F1$-$score@k$. We conduct an ablation study to quantify the
contribution of a post's constituent components (Title, Description, and Code
Snippets) to the performance of PTM4Tag. Our results show that Title is the
most important in predicting the most relevant tags, and utilizing all the
components achieves the best performance.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 13:24:59 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 07:05:37 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Mar 2022 01:04:47 GMT"
},
{
"version": "v4",
"created": "Wed, 11 May 2022 07:56:55 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"He",
"Junda",
""
],
[
"Xu",
"Bowen",
""
],
[
"Yang",
"Zhou",
""
],
[
"Han",
"DongGyun",
""
],
[
"Yang",
"Chengran",
""
],
[
"Lo",
"David",
""
]
] |
new_dataset
| 0.996512 |
2205.00328
|
Mithun Das
|
Mithun Das and Punyajoy Saha and Binny Mathew and Animesh Mukherjee
|
HateCheckHIn: Evaluating Hindi Hate Speech Detection Models
|
Accepted at: 13th Edition of its Language Resources and Evaluation
Conference. arXiv admin note: text overlap with arXiv:2012.15606 by other
authors
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the sheer volume of online hate, the AI and NLP communities have
started building models to detect such hateful content. Recently, multilingual
hate has emerged as a major challenge for automated detection, where
code-mixing or more than one language is used in social media conversations.
Typically, hate speech detection models are evaluated by measuring their
performance on the held-out test data using metrics such as accuracy and
F1-score. While these metrics are useful, it becomes difficult to identify
using them where the model is failing, and how to resolve it. To enable more
targeted diagnostic insights into such multilingual hate speech models, we
introduce a set of functionalities for evaluation. We were inspired to design
these functionalities by real-world conversations on social media. Considering
Hindi as a base language, we craft test cases for each functionality. We name
our evaluation dataset HateCheckHIn. To illustrate the utility of these
functionalities, we test the state-of-the-art transformer-based m-BERT model
and the Perspective API.
|
[
{
"version": "v1",
"created": "Sat, 30 Apr 2022 19:09:09 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Das",
"Mithun",
""
],
[
"Saha",
"Punyajoy",
""
],
[
"Mathew",
"Binny",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.999184 |
2205.04684
|
Yanyan Huang
|
Yu Fu, Yanyan Huang, Yalin Wang, Shunjie Dong, Le Xue, Xunzhao Yin,
Qianqian Yang, Yiyu Shi, Cheng Zhuo
|
OTFPF: Optimal Transport-Based Feature Pyramid Fusion Network for Brain
Age Estimation with 3D Overlapped ConvNeXt
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The chronological age of a healthy brain can be predicted from T1-weighted
magnetic resonance images (T1 MRIs) using deep neural networks, and the
predicted brain age could serve as an effective biomarker for detecting
aging-related diseases or disorders. In this paper, we propose an end-to-end
neural network architecture, referred to as optimal transport based feature
pyramid fusion (OTFPF) network, for the brain age estimation with T1 MRIs. The
OTFPF consists of three types of modules: Optimal Transport based Feature
Pyramid Fusion (OTFPF) module, 3D overlapped ConvNeXt (3D OL-ConvNeXt) module
and fusion module. These modules strengthen the OTFPF network's understanding
of each brain's semi-multimodal and multi-level feature pyramid information,
and significantly improve its estimation performance. Compared with recent
state-of-the-art models, the proposed OTFPF converges faster and performs
better. The experiments with 11,728 MRIs aged 3-97 years show that OTFPF
network could provide accurate brain age estimation, yielding mean absolute
error (MAE) of 2.097, Pearson's correlation coefficient (PCC) of 0.993 and
Spearman's rank correlation coefficient (SRCC) of 0.989, between the estimated
and chronological ages. Widespread quantitative experiments and ablation
experiments demonstrate the superiority and rationality of OTFPF network. The
codes and implement details will be released on GitHub:
https://github.com/ZJU-Brain/OTFPF after final decision.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 05:39:35 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2022 04:30:32 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Fu",
"Yu",
""
],
[
"Huang",
"Yanyan",
""
],
[
"Wang",
"Yalin",
""
],
[
"Dong",
"Shunjie",
""
],
[
"Xue",
"Le",
""
],
[
"Yin",
"Xunzhao",
""
],
[
"Yang",
"Qianqian",
""
],
[
"Shi",
"Yiyu",
""
],
[
"Zhuo",
"Cheng",
""
]
] |
new_dataset
| 0.959248 |
2205.04966
|
Jeanette Falk
|
Jeanette Falk
|
How Game Jams and Hackathons Accelerate Design Processes
|
PhD thesis, handed in 12th November 2020, defended 8th March 2021 at
Aarhus University, Denmark. Please cite as: Falk, Jeanette (2021). How Game
Jams and Hackathons Accelerate Design Processes. PhD thesis. Aarhus
University. doi: https://doi.org/10.48550/arXiv.2205.04966
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This dissertation presents three years of research on how design processes in
game jams and hackathons can be understood as accelerated. Hackathons and game
jams can both be described as formats where participants engage in designing
and developing prototypes during an intentionally short time frame, such as 48
hours, which is meant to facilitate creativity, and encourage fast decision
making and rapid prototyping. Game jams and hackathons are organised in many
different contexts and for many different purposes, such as: internally in
companies to spark new ideas; for fostering citizen innovation for
municipalities; in cultural and governmental agencies; as integral parts of
education; and as entry points for developers wanting to enter especially the
game industry (Olesen, 2020; Kultima, 2015). During the past decade, game jams and
hackathons have been introduced to academia as well, as formats for teaching
and learning, and as research platforms. Only a few research contributions
engage with understanding how accelerated design processes in game jams and
hackathons unfold, or how the organisation of game jam and hackathon formats
influences these accelerated design processes.
The main contributions of my PhD project are: 1) Descriptive process-level
knowledge, which contextualise and solidify how accelerated design processes
unfold under the circumstances of a game jam and a hackathon. 2) Overviews of
how game jams have been organised for supporting participants' creativity and
of how hackathons have been used as means and as research focus within
academia. 3) Exploring how game jam and hackathon formats may be organised in
order to support knowledge generation such as within academia, and in order to
support creativity.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 15:24:54 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2022 11:54:58 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Falk",
"Jeanette",
""
]
] |
new_dataset
| 0.996102 |
2205.05122
|
Hoover H. F. Yin
|
Hoover H. F. Yin, Harry W. H. Wong, Mehrdad Tahernia, Russell W. F.
Lai
|
Multichannel Optimal Tree-Decodable Codes are Not Always Optimal Prefix
Codes
|
Full version of the conference version in ISIT'22
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The theory of multichannel prefix codes aims to generalize the classical
theory of prefix codes. Although single- and two-channel prefix codes always
have decoding trees, the same cannot be said when there are more than two
channels. One question is of theoretical interest: Do there exist optimal
tree-decodable codes that are not optimal prefix codes? Existing literature,
which focused on generalizing single-channel results, covered little about
non-tree-decodable prefix codes since they have no single-channel counterparts.
In this work, we study the fundamental reason behind the non-tree-decodability
of prefix codes. By investigating the simplest non-tree-decodable structure, we
obtain a general sufficient condition on the channel alphabets for the
existence of optimal tree-decodable codes that are not optimal prefix codes.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 18:58:10 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Yin",
"Hoover H. F.",
""
],
[
"Wong",
"Harry W. H.",
""
],
[
"Tahernia",
"Mehrdad",
""
],
[
"Lai",
"Russell W. F.",
""
]
] |
new_dataset
| 0.954149 |
2205.05245
|
Mengqi He
|
Mengqi He, Jing Zhang, Wenxin Yu
|
Salient Object Detection via Bounding-box Supervision
|
5 pages,4 figures,submitted to ICIP 2022
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The success of fully supervised saliency detection models depends on a large
amount of pixel-wise labeling. In this paper, we work on bounding-box-based
weakly-supervised saliency detection to relieve the labeling effort. Given the
bounding box annotation, we observe that pixels inside the bounding box may
contain extensive labeling noise. However, as a large amount of background is
excluded, the foreground bounding box region contains a less complex
background, making it possible to perform handcrafted-feature-based saliency
detection with only the cropped foreground region. As conventional
handcrafted features are not representative enough and lead to noisy saliency
maps, we further introduce a structure-aware self-supervised loss to regularize
the structure of the prediction. Further, we argue that pixels outside the
bounding box should be background; thus a partial cross-entropy loss function
can be used to accurately localize the background region. Experimental
results on six benchmark RGB saliency datasets illustrate the effectiveness of
our model.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 03:03:26 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"He",
"Mengqi",
""
],
[
"Zhang",
"Jing",
""
],
[
"Yu",
"Wenxin",
""
]
] |
new_dataset
| 0.993289 |
2205.05300
|
Duyoung Jeon
|
Duyoung Jeon and Junho Lee and Cheongtag Kim
|
User Guide for KOTE: Korean Online Comments Emotions Dataset
|
16 pages, 4 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sentiment analysis that classifies data into positive or negative has been
dominantly used to recognize the emotional aspects of texts, despite the lack
of thorough examination of emotional meanings. Recently, corpora labeled with
more than just valence have been built to overcome this limit. However, most
Korean emotion corpora are small in the number of instances and cover a limited
range of emotions. We introduce the KOTE dataset. KOTE contains 50k (250k
cases) Korean online comments, each of which is manually labeled for 43 emotion
labels or one special label (NO EMOTION) by crowdsourcing (Ps = 3,048). The
emotion taxonomy of the 43 emotions is systematically established by cluster
analysis of Korean emotion concepts expressed in word embedding space. After
explaining how KOTE is developed, we also discuss the results of finetuning and
analysis for social discrimination in the corpus.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 06:54:10 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Jeon",
"Duyoung",
""
],
[
"Lee",
"Junho",
""
],
[
"Kim",
"Cheongtag",
""
]
] |
new_dataset
| 0.999874 |
2205.05348
|
Ye Tang
|
Ye Tang, Xuesong Yang, Xinrui Liu, Xiwei Zhao, Zhangang Lin, Changping
Peng
|
NDGGNET-A Node Independent Gate based Graph Neural Networks
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Neural Networks (GNNs) are an architecture for structural data that has
been adopted in a wide range of tasks and achieved remarkable results, such as
link prediction, node classification, graph classification and so on.
Generally, for a certain node in a given graph, a traditional GNN layer can be
regarded as an aggregation from one-hop neighbors; thus a set of stacked layers
is able to fetch and update node status within multiple hops. For nodes with
sparse connectivity, it is difficult to obtain enough information through a
single GNN layer, as not only are few nodes directly connected to them, but
high-order neighbor information also cannot be propagated. However, as the
number of layers increases, the GNN model is prone to over-smoothing for nodes
with dense connectivity, which results in a decrease of accuracy. To tackle
this issue, in this work, we define a novel framework that allows a normal GNN
model to accommodate more layers. Specifically, a node-degree-based gate is
employed to adjust the weight of layers dynamically, aiming to enhance the
information aggregation ability and reduce the probability of over-smoothing.
Experimental results show that our proposed model can effectively increase the
model depth and perform well on several datasets.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 08:51:04 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Tang",
"Ye",
""
],
[
"Yang",
"Xuesong",
""
],
[
"Liu",
"Xinrui",
""
],
[
"Zhao",
"Xiwei",
""
],
[
"Lin",
"Zhangang",
""
],
[
"Peng",
"Changping",
""
]
] |
new_dataset
| 0.99898 |
2205.05369
|
Chenyu Zheng
|
Chenyu Zheng, Junjue Wang, Ailong Ma, Yanfei Zhong
|
AutoLC: Search Lightweight and Top-Performing Architecture for Remote
Sensing Image Land-Cover Classification
|
Early accepted by ICPR 2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Land-cover classification has long been a hot and difficult challenge in the
remote sensing community. With massive High-resolution Remote Sensing (HRS)
images available, manually and automatically designed Convolutional Neural
Networks (CNNs) have already shown their great latent capacity on HRS
land-cover classification in recent years. Especially, the former can achieve
better performance while the latter is able to generate lightweight
architectures. Unfortunately, both have shortcomings. On the one hand, because
manual CNNs are mostly proposed for natural image processing, they become
redundant and inefficient when processing HRS images. On the other hand,
nascent Neural Architecture Search (NAS) techniques for dense prediction
tasks are mainly based on encoder-decoder architectures and focus only on the
automatic design of the encoder, which makes it still difficult to recover a
refined mapping when confronting complicated HRS scenes.
  To overcome their defects and better tackle the HRS land-cover
classification problems, we propose AutoLC, which combines the advantages of
the two methods. First, we devise a hierarchical search space and obtain a
lightweight encoder using a gradient-based search strategy. Second, we
meticulously design a lightweight but top-performing decoder that is adaptive
to the searched encoder. Finally, experimental results on the LoveDA
land-cover dataset demonstrate that our AutoLC method outperforms
state-of-the-art manual and automatic methods with much less computational
consumption.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 09:30:36 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Zheng",
"Chenyu",
""
],
[
"Wang",
"Junjue",
""
],
[
"Ma",
"Ailong",
""
],
[
"Zhong",
"Yanfei",
""
]
] |
new_dataset
| 0.993287 |
2205.05387
|
Tom\'a\v{s} Jakl
|
Tom\'a\v{s} Jakl, Dan Marsden, Nihil Shah
|
A game comonadic account of Courcelle and Feferman-Vaught-Mostowski
theorems
| null | null | null | null |
cs.LO math.CT math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Game comonads, introduced by Abramsky, Dawar and Wang, and developed by
Abramsky and Shah, give a categorical semantics for model comparison games. We
present an axiomatic account of Feferman-Vaught-Mostowski (FVM) composition
theorems within the game comonad framework, parameterized by the model
comparison game. In a uniform way, we produce compositionality results for the
logic in question, and its positive existential and counting quantifier
variants.
Secondly, we extend game comonads to the second order setting, specifically
in the case of Monadic Second Order (MSO) logic. We then generalize our FVM
theorems to the second order case. We conclude with an abstract formulation of
Courcelle's algorithmic meta-theorem, exploiting our earlier developments. This
is instantiated to recover well-known bounded tree-width and bounded
clique-width Courcelle theorems for MSO on graphs.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 10:23:07 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Jakl",
"Tomáš",
""
],
[
"Marsden",
"Dan",
""
],
[
"Shah",
"Nihil",
""
]
] |
new_dataset
| 0.997099 |
2205.05439
|
Philipp Jeitner
|
Philipp Jeitner and Haya Shulman
|
Injection Attacks Reloaded: Tunnelling Malicious Payloads over DNS
| null |
30th USENIX Security Symposium (USENIX Security 21), 2021, pages
3165-3182, ISBN 978-1-939133-24-3
| null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The traditional design principle for Internet protocols indicates: "Be strict
when sending and tolerant when receiving" [RFC1958], and DNS is no exception to
this. The transparency of DNS in handling the DNS records, also standardised
specifically for DNS [RFC3597], is one of the key features that made it such a
popular platform facilitating a constantly increasing number of new
applications. An application simply creates a new DNS record and can instantly
start distributing it over DNS without requiring any changes to the DNS servers
and platforms. Our Internet-wide study confirms that more than 1.3M (96% of
tested) open DNS resolvers are standard compliant and treat DNS records
transparently.
In this work we show that this `transparency' introduces a severe
vulnerability in the Internet: we demonstrate a new method to launch string
injection attacks by encoding malicious payloads into DNS records. We show how
to weaponise such DNS records to attack popular applications. For instance, we
apply string injection to launch a new type of DNS cache poisoning attack,
which we evaluated against a population of open resolvers and found 105K to be
vulnerable. Such cache poisoning cannot be prevented with common setups of
DNSSEC. Our attacks apply to internal as well as to public services, for
instance, we reveal that all eduroam services are vulnerable to our injection
attacks, allowing us to launch exploits ranging from unauthorised access to
eduroam networks to resource starvation. Depending on the application, our
attacks cause system crashes, data corruption and leakage, degradation of
security, and can introduce remote code execution and arbitrary errors.
In our evaluation of the attacks in the Internet we find that all the
standard compliant open DNS resolvers we tested allow our injection attacks
against applications and users on their networks.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 12:39:21 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Jeitner",
"Philipp",
""
],
[
"Shulman",
"Haya",
""
]
] |
new_dataset
| 0.980742 |
2205.05473
|
Philipp Jeitner
|
Tianxiang Dai, Philipp Jeitner, Haya Shulman and Michael Waidner
|
The Hijackers Guide To The Galaxy: Off-Path Taking Over Internet
Resources
| null |
30th USENIX Security Symposium (USENIX Security 21), 2021, pages
3147-3164, ISBN 978-1-939133-24-3
| null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Internet resources form the basic fabric of the digital society. They provide
the fundamental platform for digital services and assets, e.g., for critical
infrastructures, financial services, government. Whoever controls that fabric
effectively controls the digital society.
In this work we demonstrate that the current practices of Internet resources
management, of IP addresses, domains, certificates and virtual platforms are
insecure. Over long periods of time adversaries can maintain control over
Internet resources which they do not own and perform stealthy manipulations,
leading to devastating attacks. We show that network adversaries can take over
and manipulate at least 68% of the assigned IPv4 address space as well as 31%
of the top Alexa domains. We demonstrate such attacks by hijacking the accounts
associated with the digital resources.
For hijacking the accounts we launch off-path DNS cache poisoning attacks, to
redirect the password recovery link to the adversarial hosts. We then
demonstrate that the adversaries can manipulate the resources associated with
these accounts. We find all the tested providers vulnerable to our attacks.
We recommend mitigations for blocking the attacks that we present in this
work. Nevertheless, the countermeasures cannot solve the fundamental problem:
the management of Internet resources should be revised to ensure that
transactions cannot be applied as easily and stealthily as is currently
possible.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 13:17:33 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Dai",
"Tianxiang",
""
],
[
"Jeitner",
"Philipp",
""
],
[
"Shulman",
"Haya",
""
],
[
"Waidner",
"Michael",
""
]
] |
new_dataset
| 0.981963 |
2205.05477
|
Paolo De Petris
|
Paolo De Petris, Shehryar Khattak, Mihir Dharmadhikari, Gabriel
Waibel, Huan Nguyen, Markus Montenegro, Nikhil Khedekar, Kostas Alexis, Marco
Hutter
|
Marsupial Walking-and-Flying Robotic Deployment for Collaborative
Exploration of Unknown Environments
|
6 pages, 6 figures, Submitted to the IEEE/RSJ International
Conference on Intelligent Robots and Systems, 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work contributes a marsupial robotic system-of-systems involving a
legged and an aerial robot capable of collaborative mapping and exploration
path planning that exploits the heterogeneous properties of the two systems and
the ability to selectively deploy the aerial system from the ground robot.
Exploiting the dexterous locomotion capabilities and long endurance of
quadruped robots, the marsupial combination can explore within large-scale and
confined environments involving rough terrain. However, as certain types of
terrain or vertical geometries can render any ground system unable to continue
its exploration, the marsupial system can - when needed - deploy the flying
robot which, by exploiting its 3D navigation capabilities, can undertake a
focused exploration task within its endurance limitations. Focusing on
autonomy, the two systems can co-localize and map together by sharing
LiDAR-based maps and plan exploration paths individually, while a tailored
graph search onboard the legged robot allows it to identify where and when the
ferried aerial platform should be deployed. The system is verified within
multiple experimental studies demonstrating the expanded exploration
capabilities of the marsupial system-of-systems and facilitating the
exploration of otherwise individually unreachable areas.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 13:21:11 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"De Petris",
"Paolo",
""
],
[
"Khattak",
"Shehryar",
""
],
[
"Dharmadhikari",
"Mihir",
""
],
[
"Waibel",
"Gabriel",
""
],
[
"Nguyen",
"Huan",
""
],
[
"Montenegro",
"Markus",
""
],
[
"Khedekar",
"Nikhil",
""
],
[
"Alexis",
"Kostas",
""
],
[
"Hutter",
"Marco",
""
]
] |
new_dataset
| 0.999452 |
2205.05580
|
Vedant Kalbag
|
Vedant Kalbag, Alexander Lerch
|
Scream Detection in Heavy Metal Music
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Harsh vocal effects such as screams or growls are far more common in heavy
metal vocals than traditionally sung vocals. This paper explores the problem
of detecting and classifying extreme vocal techniques in heavy metal
music, specifically the identification of different scream techniques. We
investigate the suitability of various feature representations, including
cepstral, spectral, and temporal features, as input representations for
classification. The main contributions of this work are (i) a manually
annotated dataset comprising over 280 minutes of heavy metal songs of various
genres, with a statistical analysis of occurrences of different extreme vocal
techniques in heavy metal music, and (ii) a systematic study of different input
feature representations for the classification of heavy metal vocals.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 15:48:56 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Kalbag",
"Vedant",
""
],
[
"Lerch",
"Alexander",
""
]
] |
new_dataset
| 0.999754 |
2205.05583
|
Shuzhi Yu
|
Shuzhi Yu, Guanhang Wu, Chunhui Gu, Mohammed E. Fathy
|
TDT: Teaching Detectors to Track without Fully Annotated Videos
|
Workshop on Learning with Limited Labelled Data for Image and Video
Understanding (L3D-IVU), CVPR2022 Workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, one-stage trackers that use a joint model to predict both
detections and appearance embeddings in one forward pass have received much
attention and achieved state-of-the-art results on Multi-Object Tracking
(MOT) benchmarks.
that are fully annotated with tracking data, which is expensive and hard to
obtain. This can limit the model generalization. In comparison, the two-stage
approach, which performs detection and embedding separately, is slower but
easier to train as their data are easier to annotate. We propose to combine the
best of the two worlds through a data distillation approach. Specifically, we
use a teacher embedder, trained on Re-ID datasets, to generate pseudo
appearance embedding labels for the detection datasets. Then, we use the
augmented dataset to train a detector that is also capable of regressing these
pseudo-embeddings in a fully-convolutional fashion. Our proposed one-stage
solution matches the two-stage counterpart in quality but is 3 times faster.
Even though the teacher embedder has not seen any tracking data during
training, our proposed tracker achieves competitive performance with some
popular trackers (e.g. JDE) trained with fully labeled tracking data.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 15:56:17 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Yu",
"Shuzhi",
""
],
[
"Wu",
"Guanhang",
""
],
[
"Gu",
"Chunhui",
""
],
[
"Fathy",
"Mohammed E.",
""
]
] |
new_dataset
| 0.982433 |
2205.05589
|
Zhiyu Chen
|
Zhiyu Chen, Bing Liu, Seungwhan Moon, Chinnadhurai Sankar, Paul Crook,
William Yang Wang
|
KETOD: Knowledge-Enriched Task-Oriented Dialogue
|
NAACL 2022 Findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing studies in dialogue system research mostly treat task-oriented
dialogue and chit-chat as separate domains. Towards building a human-like
assistant that can converse naturally and seamlessly with users, it is
important to build a dialogue system that conducts both types of conversations
effectively. In this work, we investigate how task-oriented dialogue and
knowledge-grounded chit-chat can be effectively integrated into a single model.
To this end, we create a new dataset, KETOD (Knowledge-Enriched Task-Oriented
Dialogue), where we naturally enrich task-oriented dialogues with chit-chat
based on relevant entity knowledge. We also propose two new models,
SimpleToDPlus and Combiner, for the proposed task. Experimental results on both
automatic and human evaluations show that the proposed methods can
significantly improve the performance in knowledge-enriched response generation
while maintaining a competitive task-oriented dialog performance. We believe
our new dataset will be a valuable resource for future studies. Our dataset and
code are publicly available at \url{https://github.com/facebookresearch/ketod}.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 16:01:03 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Chen",
"Zhiyu",
""
],
[
"Liu",
"Bing",
""
],
[
"Moon",
"Seungwhan",
""
],
[
"Sankar",
"Chinnadhurai",
""
],
[
"Crook",
"Paul",
""
],
[
"Wang",
"William Yang",
""
]
] |
new_dataset
| 0.998363 |