id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
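Each row below is one arXiv metadata record carrying the sixteen fields named in the header above. The following is a minimal sketch of how such records could be loaded and filtered with pandas, assuming the rows have been exported to a JSON Lines file named `arxiv_metadata.jsonl` (the file name and export format are assumptions, not part of this dump):

```python
import json

import pandas as pd

# Read one record per line; each record carries the sixteen fields from
# the header (id, submitter, authors, ..., prediction, probability).
records = []
with open("arxiv_metadata.jsonl", encoding="utf-8") as f:  # assumed file name
    for line in f:
        records.append(json.loads(line))

df = pd.DataFrame(records)

# Keep only confident "new_dataset" predictions, mirroring the last two
# columns of this dump (probability ranges from 0.95 to 1 here).
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]
print(confident[["id", "title", "probability"]].head())
```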
2208.01296
|
Shantipriya Parida
|
Shantipriya Parida, Subhadarshi Panda, Stig-Arne Grönroos, Mark
Granroth-Wilding, Mika Koistinen
|
Silo NLP's Participation at WAT2022
|
Submitted to Workshop on Asian Translation (WAT2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper provides the system description of "Silo NLP's" submission to the
Workshop on Asian Translation (WAT2022). We have participated in the Indic
Multimodal tasks (English->Hindi, English->Malayalam, and English->Bengali
Multimodal Translation). For text-only translation, we trained Transformers
from scratch and fine-tuned mBART-50 models. For multimodal translation, we
used the same mBART architecture and extracted object tags from the images to
use as visual features concatenated with the text sequence.
Our submission tops many tasks including English->Hindi multimodal
translation (evaluation test), English->Malayalam text-only and multimodal
translation (evaluation test), English->Bengali multimodal translation
(challenge test), and English->Bengali text-only translation (evaluation test).
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 07:49:33 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Parida",
"Shantipriya",
""
],
[
"Panda",
"Subhadarshi",
""
],
[
"Grönroos",
"Stig-Arne",
""
],
[
"Granroth-Wilding",
"Mark",
""
],
[
"Koistinen",
"Mika",
""
]
] |
new_dataset
| 0.997609 |
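The `versions` and `authors_parsed` fields above are nested JSON lists rather than flat strings. Below is a small sketch of flattening one record into plain strings, continuing the assumed setup from the loading example:

```python
# Flatten the nested fields of a single record into plain strings.
# `records` comes from the loading sketch above; index 0 is assumed here.
record = records[0]

# authors_parsed holds [last, first, suffix] triples.
authors = ", ".join(
    f"{first} {last}".strip() for last, first, _suffix in record["authors_parsed"]
)

# versions holds dicts with a "version" tag and a "created" timestamp.
latest = max(record["versions"], key=lambda v: int(v["version"].lstrip("v")))

print(record["id"], "-", record["title"])
print("authors:", authors)
print("latest:", latest["version"], "created", latest["created"])
```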
2208.01312
|
Liangyu Chen
|
Yong Deng, Chenxiao Dou, Liangyu Chen, Deqiang Miao, Xianghui Sun,
Baochang Ma, Xiangang Li
|
BEIKE NLP at SemEval-2022 Task 4: Prompt-Based Paragraph Classification
for Patronizing and Condescending Language Detection
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The PCL detection task aims at identifying and categorizing language that is
patronizing or condescending towards vulnerable communities in the general
media. Compared to other paragraph-classification tasks in NLP, the negative
language targeted by PCL detection is usually more implicit and subtle to
recognize, which leaves common text-classification approaches with
disappointing performance. Targeting the PCL detection problem in SemEval-2022
Task 4, in this paper, we introduce our team's solution, which exploits the
power of prompt-based learning on paragraph classification. We reformulate the
task as an appropriate cloze prompt and use pre-trained Masked Language Models
to fill the cloze slot. For the two subtasks, binary classification and
multi-label classification, the DeBERTa model is adopted and fine-tuned to
predict masked label words of task-specific prompts. On the evaluation
dataset, our approach achieves an F1-score of 0.6406 for binary classification
and a macro-F1-score of 0.4689 for multi-label classification, ranking first
on the leaderboard.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 08:38:47 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Deng",
"Yong",
""
],
[
"Dou",
"Chenxiao",
""
],
[
"Chen",
"Liangyu",
""
],
[
"Miao",
"Deqiang",
""
],
[
"Sun",
"Xianghui",
""
],
[
"Ma",
"Baochang",
""
],
[
"Li",
"Xiangang",
""
]
] |
new_dataset
| 0.965818 |
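The abstract above recasts paragraph classification as a cloze task for a masked language model. The sketch below illustrates that general idea only; it is not the team's code, and the model choice, prompt template, and verbalizer words are all assumptions (their system fine-tunes DeBERTa on task-specific prompts):

```python
from transformers import pipeline

# Generic cloze-prompt classification with an off-the-shelf masked LM.
# Model, template, and label words are illustrative assumptions.
fill_mask = pipeline("fill-mask", model="roberta-base")

paragraph = "They are helpless victims who need our pity."
prompt = (
    f"{paragraph} The attitude of this text towards vulnerable groups "
    f"is {fill_mask.tokenizer.mask_token}."
)

# Verbalizer: label words standing in for the two classes. In practice,
# words that map to single tokens are preferred so the MLM scores them directly.
targets = ["condescending", "respectful"]
predictions = fill_mask(prompt, targets=targets)

best = max(predictions, key=lambda p: p["score"])
print(best["token_str"].strip(), round(best["score"], 4))
```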
2208.01356
|
Pascal Nasahl
|
Pascal Nasahl, Martin Unterguggenberger, Rishub Nagpal, Robert
Schilling, David Schrammel, Stefan Mangard
|
SCFI: State Machine Control-Flow Hardening Against Fault Attacks
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fault injection (FI) is a powerful attack methodology allowing an adversary
to entirely break the security of a target device. As finite-state machines
(FSMs) are fundamental hardware building blocks responsible for controlling
systems, inducing faults into these controllers enables an adversary to hijack
the execution of the integrated circuit. A common defense strategy mitigating
these attacks is to manually instantiate FSMs multiple times and detect faults
using a majority voting logic. However, as each additional FSM instance only
provides security against one additional induced fault, this approach scales
poorly in a multi-fault attack scenario.
In this paper, we present SCFI: a strong, probabilistic FSM protection
mechanism ensuring that control-flow deviations from the intended control-flow
are detected even in the presence of multiple faults. At its core, SCFI
consists of a hardened next-state function absorbing the execution history as
well as the FSM's control signals to derive the next state. When either the
absorbed inputs, the state registers, or the function itself are affected by
faults, SCFI triggers an error with no detection latency. We integrate SCFI
into a synthesis tool capable of automatically hardening arbitrary unprotected
FSMs without user interaction and open-source the tool. Our evaluation shows
that SCFI provides strong protection guarantees with a better area-time product
than FSMs protected using classical redundancy-based approaches. Finally, we
formally verify the resilience of the protected state machines using a
pre-silicon fault analysis tool.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 10:54:48 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Nasahl",
"Pascal",
""
],
[
"Unterguggenberger",
"Martin",
""
],
[
"Nagpal",
"Rishub",
""
],
[
"Schilling",
"Robert",
""
],
[
"Schrammel",
"David",
""
],
[
"Mangard",
"Stefan",
""
]
] |
new_dataset
| 0.998722 |
2208.01380
|
Beibei Lin
|
Beibei Lin, Shunli Zhang, Ming Wang, Lincheng Li, and Xin Yu
|
GaitGL: Learning Discriminative Global-Local Feature Representations for
Gait Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing gait recognition methods either directly establish Global Feature
Representation (GFR) from original gait sequences or generate Local Feature
Representation (LFR) from several local parts. However, GFR tends to neglect
local details of human postures as the receptive fields become larger in the
deeper network layers. Although LFR allows the network to focus on the detailed
posture information of each local region, it neglects the relations among
different local parts and thus only exploits limited local information of
several specific regions. To solve these issues, we propose a global-local
based gait recognition network, named GaitGL, to generate more discriminative
feature representations. To be specific, a novel Global and Local Convolutional
Layer (GLCL) is developed to take full advantage of both global visual
information and local region details in each layer. GLCL is a dual-branch
structure that consists of a GFR extractor and a mask-based LFR extractor. The
GFR extractor aims to extract contextual information, e.g., the relationship
among various body parts, while the mask-based LFR extractor is presented to exploit
the detailed posture changes of local regions. In addition, we introduce a
novel mask-based strategy to improve the local feature extraction capability.
Specifically, we design pairs of complementary masks to randomly occlude
feature maps, and then train our mask-based LFR extractor on various occluded
feature maps. In this manner, the LFR extractor will learn to fully exploit
local information. Extensive experiments demonstrate that GaitGL achieves
better performance than state-of-the-art gait recognition methods. The average
rank-1 accuracy on CASIA-B, OU-MVLP, GREW and Gait3D is 93.6%, 98.7%, 68.0% and
63.8%, respectively, significantly outperforming the competing methods. The
proposed method has won the first prize in two competitions: HID 2020 and HID
2021.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 11:50:21 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Lin",
"Beibei",
""
],
[
"Zhang",
"Shunli",
""
],
[
"Wang",
"Ming",
""
],
[
"Li",
"Lincheng",
""
],
[
"Yu",
"Xin",
""
]
] |
new_dataset
| 0.986149 |
2208.01412
|
Lucia Moura
|
André Guerino Castoldi, Emerson Luiz do Monte Carmelo, Lucia Moura,
Daniel Panario, Brett Stevens
|
Bounds on Covering Codes in RT spaces using Ordered Covering Arrays
|
12 pages
|
CAI 2019. Lecture Notes in Computer Science, vol 11545. Springer
|
10.1007/978-3-030-21363-3_9
| null |
cs.DM cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, constructions of ordered covering arrays are discussed and
applied to obtain new upper bounds on covering codes in Rosenbloom-Tsfasman
spaces (RT spaces), improving or extending some previous results.
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 02:43:53 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Castoldi",
"André Guerino",
""
],
[
"Carmelo",
"Emerson Luiz do Monte",
""
],
[
"Moura",
"Lucia",
""
],
[
"Panario",
"Daniel",
""
],
[
"Stevens",
"Brett",
""
]
] |
new_dataset
| 0.999297 |
2208.01422
|
Konstantinos Dovelos
|
Konstantinos Dovelos, Stylianos D. Assimonis, Hien Quoc Ngo, Michail
Matthaiou
|
Superdirective Arrays with Finite-Length Dipoles: Modeling and New
Perspectives
|
To appear in 2022 IEEE GLOBECOM
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dense arrays can facilitate the integration of multiple antennas into finite
volumes. In addition to the compact size, sub-wavelength spacing enables
superdirectivity for endfire operation, a phenomenon that has been mainly
studied for isotropic and infinitesimal radiators. In this work, we focus on
linear dipoles of arbitrary yet finite length. Specifically, we first introduce
an array model that accounts for the sinusoidal current distribution (SCD) on
very thin dipoles. Based on the SCD, the loss resistance of each dipole antenna
is precisely determined. Capitalizing on the derived model, we next investigate
the maximum achievable rate under a fixed power constraint. The optimal design
entails conjugate power matching along with maximizing the array gain. Our
theoretical analysis is corroborated by the method of moments under the
thin-wire approximation, as well as by full-wave simulations. Numerical results
showcase that a super-gain is attainable with high radiation efficiency when
the dipole antennas are not too short and thin.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 12:58:41 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Dovelos",
"Konstantinos",
""
],
[
"Assimonis",
"Stylianos D.",
""
],
[
"Ngo",
"Hien Quoc",
""
],
[
"Matthaiou",
"Michail",
""
]
] |
new_dataset
| 0.999253 |
2208.01529
|
Giovanni Menegozzo
|
Giovanni Menegozzo, Diego Dall'Alba, Paolo Fiorini
|
CIPCaD-Bench: Continuous Industrial Process datasets for benchmarking
Causal Discovery methods
|
Supplementary Materials at:
https://github.com/giovanniMen/CPCaD-Bench
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Causal relationships are commonly examined in manufacturing processes to
support faults investigations, perform interventions, and make strategic
decisions. Industry 4.0 has made available an increasing amount of data that
enable data-driven Causal Discovery (CD). Considering the growing number of
recently proposed CD methods, it is necessary to introduce strict benchmarking
procedures on publicly available datasets since they represent the foundation
for a fair comparison and validation of different methods. This work introduces
two novel public datasets for CD in continuous manufacturing processes. The
first dataset employs the well-known Tennessee Eastman simulator for fault
detection and process control. The second dataset is extracted from an
ultra-processed food manufacturing plant, and it includes a description of the
plant, as well as multiple ground truths. These datasets are used to propose a
benchmarking procedure based on different metrics and evaluated on a wide
selection of CD algorithms. This work allows testing CD methods in realistic
conditions enabling the selection of the most suitable method for specific
target applications. The datasets are available at the following link:
https://github.com/giovanniMen
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 15:30:10 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Menegozzo",
"Giovanni",
""
],
[
"Dall'Alba",
"Diego",
""
],
[
"Fiorini",
"Paolo",
""
]
] |
new_dataset
| 0.999767 |
2208.01547
|
Andrew Sabelhaus
|
Andrew P. Sabelhaus, Zach J. Patterson, Anthony T. Wertz, Carmel
Majidi
|
Safe Supervisory Control of Soft Robot Actuators
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although soft robots show safer interactions with their environment than
traditional robots, soft mechanisms and actuators still have significant
potential for damage or degradation, particularly during unmodeled contact. This
article introduces a feedback strategy for safe soft actuator operation during
control of a soft robot. To do so, a supervisory controller monitors actuator
state and dynamically saturates control inputs to avoid conditions that could
lead to physical damage. We prove that, under certain conditions, the
supervisory controller is stable and verifiably safe. We then demonstrate
completely onboard operation of the supervisory controller using a soft
thermally-actuated robot limb with embedded shape memory alloy (SMA) actuators
and sensing. Tests performed with the supervisor verify its theoretical
properties and show stabilization of the robot limb's pose in free space.
Finally, experiments show that our approach prevents overheating during contact
(including environmental constraints and human contact) or when infeasible
motions are commanded. This supervisory controller, and its ability to be
executed with completely onboard sensing, has the potential to make soft robot
actuators reliable enough for practical use.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 15:53:42 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Sabelhaus",
"Andrew P.",
""
],
[
"Patterson",
"Zach J.",
""
],
[
"Wertz",
"Anthony T.",
""
],
[
"Majidi",
"Carmel",
""
]
] |
new_dataset
| 0.998543 |
2208.01633
|
Vladislav Golyanik
|
Hiroyasu Akada and Jian Wang and Soshi Shimada and Masaki Takahashi
and Christian Theobalt and Vladislav Golyanik
|
UnrealEgo: A New Dataset for Robust Egocentric 3D Human Motion Capture
|
21 pages, 10 figures, 10 tables; project page:
https://4dqv.mpi-inf.mpg.de/UnrealEgo/
|
European Conference on Computer Vision (ECCV) 2022
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present UnrealEgo, a new large-scale naturalistic dataset for
egocentric 3D human pose estimation. UnrealEgo is based on an advanced concept
of eyeglasses equipped with two fisheye cameras that can be used in
unconstrained environments. We design their virtual prototype and attach them
to 3D human models for stereo view capture. We next generate a large corpus of
human motions. As a consequence, UnrealEgo is the first dataset to provide
in-the-wild stereo images with the largest variety of motions among existing
egocentric datasets. Furthermore, we propose a new benchmark method with a
simple but effective idea of devising a 2D keypoint estimation module for
stereo inputs to improve 3D human pose estimation. The extensive experiments
show that our approach outperforms the previous state-of-the-art methods
qualitatively and quantitatively. UnrealEgo and our source codes are available
on our project web page.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 17:59:54 GMT"
}
] | 2022-08-03T00:00:00 |
[
[
"Akada",
"Hiroyasu",
""
],
[
"Wang",
"Jian",
""
],
[
"Shimada",
"Soshi",
""
],
[
"Takahashi",
"Masaki",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Golyanik",
"Vladislav",
""
]
] |
new_dataset
| 0.99973 |
2002.00171
|
Gayatri Venugopal-Wairagade
|
Gayatri Venugopal-Wairagade, Jatinderkumar R. Saini, Dhanya Pramod
|
Novel Language Resources for Hindi: An Aesthetics Text Corpus and a
Comprehensive Stop Lemma List
|
7 pages, 3 figures
| null |
10.14569/IJACSA.2020.0110130
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper is an effort to complement the contributions made by researchers
working toward the inclusion of non-English languages in natural language
processing studies. Two novel Hindi language resources have been created and
released for public consumption. The first resource is a corpus consisting of
nearly a thousand pre-processed fictional and nonfictional texts spanning over
a hundred years. The second resource is an exhaustive list of stop lemmas created
from 12 corpora across multiple domains, consisting of over 13 million words,
from which more than 200,000 lemmas were generated, and 11 publicly available
stop word lists comprising over 1000 words, from which nearly 400 unique lemmas
were generated. This research lays emphasis on the use of stop lemmas instead
of stop words owing to the presence of various, but not all morphological forms
of a word in stop word lists, as opposed to the presence of only the root form
of the word, from which variations could be derived if required. It was also
observed that stop lemmas were more consistent across multiple sources as
compared to stop words. In order to generate a stop lemma list, the parts of
speech of the lemmas were investigated but rejected as it was found that there
was no significant correlation between the rank of a word in the frequency list
and its part of speech. The stop lemma list was assessed using a comparative
method. A formal evaluation method is suggested as future work arising from
this study.
|
[
{
"version": "v1",
"created": "Sat, 1 Feb 2020 08:49:17 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Venugopal-Wairagade",
"Gayatri",
""
],
[
"Saini",
"Jatinderkumar R.",
""
],
[
"Pramod",
"Dhanya",
""
]
] |
new_dataset
| 0.999274 |
2005.11508
|
Xincao Xu
|
Xincao Xu and Kai Liu and Ke Xiao and Liang Feng and Zhou Wu and
Songtao Guo
|
Vehicular Fog Computing Enabled Real-time Collision Warning via
Trajectory Calibration
| null |
Mobile Networks and Applications, 25(6), 2482-2494 (2020)
|
10.1007/s11036-020-01591-7
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicular fog computing (VFC) has been envisioned as a promising paradigm for
enabling a variety of emerging intelligent transportation systems (ITS).
However, due to inevitable as well as non-negligible issues in wireless
communication, including transmission latency and packet loss, it is still
challenging to implement safety-critical applications, such as real-time
collision warning in vehicular networks. In this paper, we present a vehicular
fog computing architecture, aiming at supporting effective and real-time
collision warning by offloading computation and communication overheads to
distributed fog nodes. With the system architecture, we further propose a
trajectory calibration based collision warning (TCCW) algorithm along with
tailored communication protocols. Specifically, an application-layer
vehicular-to-infrastructure (V2I) communication delay is fitted by the Stable
distribution with real-world field testing data. Then, a packet loss detection
mechanism is designed. Finally, TCCW calibrates real-time vehicle trajectories
based on received vehicle status including GPS coordinates, velocity,
acceleration, heading direction, as well as the estimation of communication
delay and the detection of packet loss. For performance evaluation, we build
the simulation model and implement conventional solutions including cloud-based
warning and fog-based warning without calibration for comparison. Real-vehicle
trajectories are extracted as the input, and the simulation results demonstrate
the effectiveness of TCCW in terms of the highest precision and recall in
a wide range of scenarios.
|
[
{
"version": "v1",
"created": "Sat, 23 May 2020 10:26:47 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Xu",
"Xincao",
""
],
[
"Liu",
"Kai",
""
],
[
"Xiao",
"Ke",
""
],
[
"Feng",
"Liang",
""
],
[
"Wu",
"Zhou",
""
],
[
"Guo",
"Songtao",
""
]
] |
new_dataset
| 0.99859 |
2007.09262
|
Caitlyn Seim
|
Caitlyn E. Seim, Steven L. Wolf, and Thad E. Starner
|
Wearable vibrotactile stimulation for upper extremity rehabilitation in
chronic stroke: clinical feasibility trial using the VTS Glove
| null |
Journal Neuroengineering and Rehabilitation 18, 14 (2021)
|
10.1186/s12984-021-00813-7
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objective: Evaluate the feasibility and potential impacts on hand function
using a wearable stimulation device (the VTS Glove) which provides mechanical,
vibratory input to the affected limb of chronic stroke survivors.
Methods: A double-blind, randomized, controlled feasibility study including
sixteen chronic stroke survivors (mean age: 54; 1-13 years post-stroke) with
diminished movement and tactile perception in their affected hand. Participants
were given a wearable device to take home and asked to wear it for three hours
daily over eight weeks. The device intervention was either (1) the VTS Glove,
which provided vibrotactile stimulation to the hand, or (2) an identical glove
with vibration disabled. Participants were equally randomly assigned to each
condition. Hand and arm function were measured weekly at home and in local
physical therapy clinics.
Results: Participants using the VTS Glove showed significantly improved
Semmes-Weinstein monofilament exam, reduction in Modified Ashworth measures in
the fingers, and some increased voluntary finger flexion, elbow and shoulder
range of motion.
Conclusions: Vibrotactile stimulation applied to the disabled limb may impact
tactile perception, tone and spasticity, and voluntary range of motion.
Wearable devices allow extended application and study of stimulation methods
outside of a clinical setting.
|
[
{
"version": "v1",
"created": "Fri, 17 Jul 2020 22:17:30 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Seim",
"Caitlyn E.",
""
],
[
"Wolf",
"Steven L.",
""
],
[
"Starner",
"Thad E.",
""
]
] |
new_dataset
| 0.99923 |
2009.01302
|
Mao Ye
|
Mao Ye, Lin Guan, Mohammed Quddus
|
TDMP-Reliable Target Driven and Mobility Prediction based Routing
Protocol in Complex VANET
|
35 pages,16 Figures
| null |
10.1016/j.vehcom.2021.100361
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicle-to-everything (V2X) communication in the vehicular ad hoc network
(VANET), an infrastructure-free mechanism, has emerged as a crucial component
in the advanced Intelligent Transport System (ITS) for special information
transmission and inter-vehicular communications. One of the main research
challenges in VANET is the design and implementation of network routing
protocols which manage to trigger V2X communication with the reliable
end-to-end connectivity and efficient packet transmission. The organically
changing nature of road transport vehicles poses a significant threat to VANET
with respect to the accuracy and reliability of packet delivery. Therefore, a
position-based routing protocol tends to be the predominant method in VANET as
it overcomes rapid changes in vehicle movements effectively. However, existing
routing protocols have some limitations, such as (i) inaccuracy in highly
dynamic network topologies, (ii) defective link-state estimation, and (iii)
poor movement prediction in heterogeneous road layouts. In this paper, a target-driven and
mobility prediction (TDMP) based routing protocol is therefore developed for
high-speed mobility and dynamic topology of vehicles, fluctuant traffic flow
and diverse road layouts in VANET. The primary idea in TDMP is that the
destination target of a driver is included in the mobility prediction to assist
the implementation of the routing protocol. Compared to existing geographic
routing protocols which mainly greedily forward the packet to the next-hop
based on its current position and partial road layout, TDMP is developed to
enhance the packet transmission with the consideration of the estimation of
inter-vehicles link status, and the prediction of vehicle positions dynamically
in fluctuant mobility and global road layout.
|
[
{
"version": "v1",
"created": "Wed, 2 Sep 2020 19:01:51 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Sep 2020 14:13:54 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Dec 2020 14:53:10 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Ye",
"Mao",
""
],
[
"Guan",
"Lin",
""
],
[
"Quddus",
"Mohammed",
""
]
] |
new_dataset
| 0.99973 |
2010.07497
|
Jianheng Tang
|
Wenge Liu, Jianheng Tang, Yi Cheng, Wenjie Li, Yefeng Zheng, Xiaodan
Liang
|
MedDG: An Entity-Centric Medical Consultation Dataset for Entity-Aware
Medical Dialogue Generation
|
Data and code are available at https://github.com/lwgkzl/MedDG
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing conversational agents to interact with patients and provide
primary clinical advice has attracted increasing attention due to its huge
application potential, especially in the time of COVID-19 Pandemic. However,
the training of end-to-end neural medical dialogue systems is restricted
by the insufficient quantity of medical dialogue corpora. In this work, we make
the first attempt to build and release a large-scale high-quality Medical
Dialogue dataset related to 12 types of common Gastrointestinal diseases named
MedDG, with more than 17K conversations collected from the online health
consultation community. Five different categories of entities, including
diseases, symptoms, attributes, tests, and medicines, are annotated in each
conversation of MedDG as additional labels. To push forward future research
on building expert-sensitive medical dialogue systems, we propose two medical
dialogue tasks based on the MedDG dataset: one is next-entity prediction and
the other is doctor response generation. To acquire a clear comprehension of
these two medical dialogue tasks, we implement several
state-of-the-art benchmarks, as well as design two dialogue models with a
further consideration on the predicted entities. Experimental results show that
pre-trained language models and other baselines struggle on both tasks with
poor performance in our dataset, and the response quality can be enhanced with
the help of auxiliary entity information. From human evaluation, the simple
retrieval model outperforms several state-of-the-art generative models,
indicating that there still remains large room for improvement in generating
medically meaningful responses.
|
[
{
"version": "v1",
"created": "Thu, 15 Oct 2020 03:34:33 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Jul 2022 06:04:16 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Liu",
"Wenge",
""
],
[
"Tang",
"Jianheng",
""
],
[
"Cheng",
"Yi",
""
],
[
"Li",
"Wenjie",
""
],
[
"Zheng",
"Yefeng",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
new_dataset
| 0.999703 |
2109.03670
|
Lennart Schneider
|
Florian Pfisterer, Lennart Schneider, Julia Moosbauer, Martin Binder,
Bernd Bischl
|
YAHPO Gym -- An Efficient Multi-Objective Multi-Fidelity Benchmark for
Hyperparameter Optimization
|
Accepted at the First Conference on Automated Machine Learning (Main
Track). 39 pages, 12 tables, 10 figures, 1 listing
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
When developing and analyzing new hyperparameter optimization methods, it is
vital to empirically evaluate and compare them on well-curated benchmark
suites. In this work, we propose a new set of challenging and relevant
benchmark problems motivated by desirable properties and requirements for such
benchmarks. Our new surrogate-based benchmark collection consists of 14
scenarios that in total constitute over 700 multi-fidelity hyperparameter
optimization problems, which all enable multi-objective hyperparameter
optimization. Furthermore, we empirically compare surrogate-based benchmarks to
the more widely-used tabular benchmarks, and demonstrate that the latter may
produce unfaithful results regarding the performance ranking of HPO methods. We
examine and compare our benchmark collection with respect to defined
requirements and propose a single-objective as well as a multi-objective
benchmark suite on which we compare 7 single-objective and 7 multi-objective
optimizers in a benchmark experiment. Our software is available at
[https://github.com/slds-lmu/yahpo_gym].
|
[
{
"version": "v1",
"created": "Wed, 8 Sep 2021 14:16:31 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Oct 2021 09:41:20 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Apr 2022 14:49:43 GMT"
},
{
"version": "v4",
"created": "Sat, 30 Jul 2022 12:33:47 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Pfisterer",
"Florian",
""
],
[
"Schneider",
"Lennart",
""
],
[
"Moosbauer",
"Julia",
""
],
[
"Binder",
"Martin",
""
],
[
"Bischl",
"Bernd",
""
]
] |
new_dataset
| 0.965861 |
2109.12696
|
Ren Liu
|
Ren Liu, Nitish Sontakke, Sehoon Ha
|
PM-FSM: Policies Modulating Finite State Machine for Robust Quadrupedal
Locomotion
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep reinforcement learning (deep RL) has emerged as an effective tool for
developing controllers for legged robots. However, vanilla deep RL often
requires a tremendous number of training samples and is not feasible for
achieving robust behaviors. Instead, to take advantage of human experts'
knowledge while eliminating time-consuming interactive teaching, researchers
have investigated a novel policy architecture, Policies Modulating Trajectory
Generators (PMTG), which builds a recurrent control loop by combining a
parametric trajectory generator (TG) and a feedback policy network to achieve
more robust behaviors using intuitive prior knowledge. In this work,
we propose Policies Modulating Finite State Machine (PM-FSM) by replacing TGs
with contact-aware finite state machines (FSM), which offer more flexible
control of each leg. Compared with the TGs, FSMs offer high-level management on
each leg motion generator and enable a flexible state arrangement, which makes
the learned behavior less vulnerable to unseen perturbations or challenging
terrains. This design offers the policy an explicit notion of contact events
to negotiate unexpected perturbations. We demonstrate that the proposed
architecture could achieve more robust behaviors in various scenarios, such as
challenging terrains or external perturbations, on both simulated and real
robots. The supplemental video can be found at: https://youtu.be/78cboMqTkJQ.
|
[
{
"version": "v1",
"created": "Sun, 26 Sep 2021 20:27:53 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 05:52:47 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Liu",
"Ren",
""
],
[
"Sontakke",
"Nitish",
""
],
[
"Ha",
"Sehoon",
""
]
] |
new_dataset
| 0.987529 |
2111.11046
|
Feng Liu
|
Wentian Zhang, Haozhe Liu, Feng Liu, Raghavendra Ramachandra,
Christoph Busch
|
FRT-PAD: Effective Presentation Attack Detection Driven by Face Related
Task
|
Accepted by ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The robustness and generalization ability of Presentation Attack Detection
(PAD) methods is critical to ensure the security of Face Recognition Systems
(FRSs). However, in a real scenario, Presentation Attacks (PAs) are various and
it is hard to predict the Presentation Attack Instrument (PAI) species that
will be used by the attacker. Existing PAD methods are highly dependent on the
limited training set and cannot generalize well to unknown PAI species. Unlike
this specific PAD task, other face-related tasks trained on huge amounts of
real faces (e.g. face recognition and attribute editing) can be effectively
adopted into different application scenarios. Inspired by this, we propose to
exchange the position of PAD and face-related work in a face system and apply
the freely acquired prior knowledge from face-related tasks to solve face PAD,
so as to improve the generalization ability in detecting PAs. The proposed
method first introduces task-specific features from another face-related task;
then, we design a Cross-Modal Adapter using a Graph Attention Network (GAT) to
re-map such features to adapt to the PAD task. Finally, face PAD is achieved by using the
hierarchical features from a CNN-based PA detector and the re-mapped features.
The experimental results show that the proposed method can achieve significant
improvements in the complicated and hybrid datasets, when compared with the
state-of-the-art methods. In particular, when training on the datasets
OULU-NPU, CASIA-FASD, and Idiap Replay-Attack, we obtain HTER (Half Total Error
Rate) of 5.48% for the testing dataset MSU-MFSD, outperforming the baseline by
7.39%.
|
[
{
"version": "v1",
"created": "Mon, 22 Nov 2021 08:35:26 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Jul 2022 10:34:30 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Zhang",
"Wentian",
""
],
[
"Liu",
"Haozhe",
""
],
[
"Liu",
"Feng",
""
],
[
"Ramachandra",
"Raghavendra",
""
],
[
"Busch",
"Christoph",
""
]
] |
new_dataset
| 0.98359 |
2201.06997
|
Mredulraj Pandianchery
|
Mredulraj S. Pandianchery, Gopalakrishnan E.A, Sowmya V, Vinayakumar
Ravi, Soman K.P
|
Explainable AI Framework for COVID-19 Prediction in Different Provinces
of India
|
12 pages
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 2020, the COVID-19 virus had reached more than 200 countries. By December
20th, 2021, 221 nations in the world had collectively reported 275M confirmed
cases of COVID-19 and a total death toll of 5.37M. Many countries, including
the United States, India, Brazil, the United Kingdom, and Russia, were badly
affected by the COVID-19 pandemic due to their large populations. The total
confirmed cases reported in these countries are 51.7M, 34.7M, 22.2M, 11.3M,
and 10.2M respectively as of December 20, 2021. The pandemic can be controlled
with the help of precautionary steps by the government and civilians of each
country. The early prediction of COVID-19 cases helps to track the
transmission dynamics and alert the government to take the necessary
precautions. Recurrent deep learning algorithms are data-driven models that
play a key role in capturing the patterns present in time series data. In the
literature, Recurrent Neural Network (RNN) based models have been proposed for
the efficient prediction of COVID-19 cases in different provinces, but these
studies do not involve the interpretation of model behavior and robustness. In
this study, an LSTM model is proposed for the efficient prediction of active
cases in each province of India. The active-cases dataset for each province in
India is taken from the publicly available Johns Hopkins dataset for the
duration from 10th June, 2020 to 4th August, 2021. The proposed LSTM model is
trained on one state, i.e., Maharashtra, and tested for the rest of the
provinces in India. The concept of Explainable AI is involved in this study
for better interpretation and understanding of the model behavior. The
proposed model is used to forecast the active cases in India from 16th
December, 2021 to 5th March, 2022, and indicates the emergence of a third wave
in January, 2022 in India.
|
[
{
"version": "v1",
"created": "Wed, 12 Jan 2022 16:26:14 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jul 2022 06:55:48 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Pandianchery",
"Mredulraj S.",
""
],
[
"A",
"Gopalakrishnan E.",
""
],
[
"V",
"Sowmya",
""
],
[
"Ravi",
"Vinayakumar",
""
],
[
"P",
"Soman K.",
""
]
] |
new_dataset
| 0.991134 |
2201.07379
|
Omid Abbasi
|
Omid Abbasi and Halim Yanikomeroglu
|
UxNB-Enabled Cell-Free Massive MIMO with HAPS-Assisted Sub-THz
Backhauling
|
32 pages, 13 figures
| null | null | null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a cell-free scheme for unmanned aerial vehicle
(UAV) base stations (BSs) to manage the severe intercell interference between
terrestrial users and UAV-BSs of neighboring cells. Since the cell-free scheme
requires enormous bandwidth for backhauling, we propose to use the
sub-terahertz (sub-THz) band for the backhaul links between UAV-BSs and central
processing unit (CPU). Also, because the sub-THz band requires a reliable
line-of-sight link, we propose to use a high altitude platform station (HAPS)
as a CPU. At the first time-slot of the proposed scheme, users send their
messages to UAVs at the sub-6 GHz band. The UAVs then apply match-filtering and
power allocation. At the second time-slot, at each UAV, orthogonal resource
blocks are allocated for each user at the sub-THz band, and the signals are
sent to the HAPS after analog beamforming. In the HAPS receiver, after analog
beamforming, the message of each user is decoded. We formulate an optimization
problem that maximizes the minimum signal-to-interference-plus-noise ratio of
users by finding the optimum allocated power as well as the optimum locations
of UAVs. Simulation results demonstrate the superiority of the proposed scheme
compared with aerial cellular and terrestrial cell-free baseline schemes.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 01:50:38 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jul 2022 01:51:33 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Abbasi",
"Omid",
""
],
[
"Yanikomeroglu",
"Halim",
""
]
] |
new_dataset
| 0.998321 |
2203.01588
|
Alexander Badri-Spröwitz
|
Bernadett Kiss and Emre Cemal Gonen and An Mo and Alexandra Buchmann
and Daniel Renjewski and Alexander Badri-Spröwitz
|
Gastrocnemius and Power Amplifier Soleus Spring-Tendons Achieve Fast
Human-like Walking in a Bipedal Robot
|
Data and code repository at https://doi.org/10.17617/3.BQ2PZ9. Video
on youtube at https://youtu.be/T79pKLQ47XU
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Legged locomotion in humans is governed by natural dynamics of the human body
and neural control. One mechanism that is assumed to contribute to the high
efficiency of human walking is the impulsive ankle push-off, which potentially
powers the swing leg catapult. However, the mechanics of the human lower leg
with its complex muscle-tendon units spanning over single and multiple joints
is not yet understood. Legged robots allow testing the interaction between
complex leg mechanics, control, and environment in real-world walking gait. We
developed a 0.49m tall, 2.2kg anthropomorphic bipedal robot with Soleus and
Gastrocnemius muscle-tendon units represented by linear springs, acting as
mono- and biarticular elastic structures around the robot's ankle and knee
joints. We tested the influence of three Soleus and Gastrocnemius spring-tendon
configurations on the ankle power curves, the coordination of the ankle and
knee joint movements, the total cost of transport, and walking speed. We
controlled the robot with a feed-forward central pattern generator, leading to
walking speeds between 0.35m/s and 0.57m/s at 1.0Hz locomotion frequency, at
0.35m leg length. We found differences between all three configurations; the
Soleus spring-tendon modulates the robot's speed and energy efficiency likely
by ankle power amplification, while the Gastrocnemius spring-tendon changes the
movement coordination between ankle and knee joints during push-off.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 09:31:04 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 20:03:44 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Kiss",
"Bernadett",
""
],
[
"Gonen",
"Emre Cemal",
""
],
[
"Mo",
"An",
""
],
[
"Buchmann",
"Alexandra",
""
],
[
"Renjewski",
"Daniel",
""
],
[
"Badri-Spröwitz",
"Alexander",
""
]
] |
new_dataset
| 0.999143 |
2203.07548
|
Sebastian Risi
|
Kathryn Walker, Rasmus Berg Palm, Rodrigo Moreno Garcia, Andres Faina,
Kasper Stoy, Sebastian Risi
|
Physical Neural Cellular Automata for 2D Shape Classification
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Materials with the ability to self-classify their own shape have the
potential to advance a wide range of engineering applications and industries.
Biological systems possess the ability not only to self-reconfigure but also to
self-classify themselves to determine a general shape and function. Previous
work into modular robotics systems has only enabled self-recognition and
self-reconfiguration into a specific target shape, missing the inherent
robustness present in nature to self-classify. In this paper we therefore take
advantage of recent advances in deep learning and neural cellular automata, and
present a simple modular 2D robotic system that can infer its own class of
shape through the local communication of its components. Furthermore, we show
that our system can be successfully transferred to hardware which thus opens
opportunities for future self-classifying machines. Code available at
https://github.com/kattwalker/projectcube. Video available at
https://youtu.be/0TCOkE4keyc.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 23:18:13 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Jul 2022 20:27:30 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Walker",
"Kathryn",
""
],
[
"Palm",
"Rasmus Berg",
""
],
[
"Garcia",
"Rodrigo Moreno",
""
],
[
"Faina",
"Andres",
""
],
[
"Stoy",
"Kasper",
""
],
[
"Risi",
"Sebastian",
""
]
] |
new_dataset
| 0.958751 |
2203.11130
|
Samuel Yu
|
Samuel Yu, Peter Wu, Paul Pu Liang, Ruslan Salakhutdinov,
Louis-Philippe Morency
|
PACS: A Dataset for Physical Audiovisual CommonSense Reasoning
|
ECCV 2022, 51 pages, 23 figures, 4 tables
| null | null | null |
cs.LG cs.AI cs.CL cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order for AI to be safely deployed in real-world scenarios such as
hospitals, schools, and the workplace, it must be able to robustly reason about
the physical world. Fundamental to this reasoning is physical common sense:
understanding the physical properties and affordances of available objects, how
they can be manipulated, and how they interact with other objects. Physical
commonsense reasoning is fundamentally a multi-sensory task, since physical
properties are manifested through multiple modalities - two of them being
vision and acoustics. Our paper takes a step towards real-world physical
commonsense reasoning by contributing PACS: the first audiovisual benchmark
annotated for physical commonsense attributes. PACS contains 13,400
question-answer pairs, involving 1,377 unique physical commonsense questions
and 1,526 videos. Our dataset provides new opportunities to advance the
research field of physical reasoning by bringing audio as a core component of
this multimodal problem. Using PACS, we evaluate multiple state-of-the-art
models on our new challenging task. While some models show promising results
(70% accuracy), they all fall short of human performance (95% accuracy). We
conclude the paper by demonstrating the importance of multimodal reasoning and
providing possible avenues for future research.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 17:05:23 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2022 01:09:24 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Aug 2022 05:23:54 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Yu",
"Samuel",
""
],
[
"Wu",
"Peter",
""
],
[
"Liang",
"Paul Pu",
""
],
[
"Salakhutdinov",
"Ruslan",
""
],
[
"Morency",
"Louis-Philippe",
""
]
] |
new_dataset
| 0.999842 |
2203.11423
|
Mattia Nicolella
|
Mattia Nicolella (1), Shahin Roozkhosh (1), Denis Hoornaert (2),
Andrea Bastoni (2), Renato Mancuso (1) ((1) Boston University Boston USA, (2)
TU München Germany)
|
RT-Bench: an Extensible Benchmark Framework for the Analysis and
Management of Real-Time Applications
|
11 pages, 12 figures; code available at
https://gitlab.com/rt-bench/rt-bench, documentation available at
https://rt-bench.gitlab.io/rt-bench/
|
RTNS 2022: Proceedings of the 30th International Conference on
Real-Time Networks and Systems June 2022 Pages 184-195
|
10.1145/3534879.3534888
| null |
cs.SE cs.PF cs.SY eess.SY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Benchmarking is crucial for testing and validating any system, even more so
in real-time systems. Typical real-time applications adhere to well-understood
abstractions: they exhibit a periodic behavior, operate on a well-defined
working set, and strive for stable response times, avoiding unpredictable
factors such as page faults. Unfortunately, available benchmark suites fail to
reflect key characteristics of real-time applications. Practitioners and
researchers must resort either to benchmarking heavily approximated real-time
environments, or to re-engineering available benchmarks to add -- if possible --
the sought-after features.
provided by most benchmark suites are not tailored "out-of-the-box" to
real-time environments, and changing basic parameters such as the scheduling
policy often becomes a tiring and error-prone exercise.
In this paper, we present RT-bench, an open-source framework adding standard
real-time features to virtually any existing benchmark. Furthermore, RT-bench
provides an easy-to-use, unified command line interface to customize key
aspects of the real-time execution of a set of benchmarks. Our framework is
guided by four main criteria: 1) cohesive interface, 2) support for periodic
application behavior and deadline semantics, 3) controllable memory footprint,
and 4) extensibility and portability. We have integrated within the framework
applications from the widely used SD-VBS and IsolBench suites. We showcase a
set of use-cases that are representative of typical real-time system evaluation
scenarios and that can be easily conducted via RT-Bench.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 02:40:47 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jul 2022 23:34:03 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Nicolella",
"Mattia",
""
],
[
"Roozkhosh",
"Shahin",
""
],
[
"Hoornaert",
"Denis",
""
],
[
"Bastoni",
"Andrea",
""
],
[
"Mancuso",
"Renato",
""
]
] |
new_dataset
| 0.998937 |
2203.14221
|
Fida Mohammad Thoker
|
Fida Mohammad Thoker, Hazel Doughty, Piyush Bagad, Cees Snoek
|
How Severe is Benchmark-Sensitivity in Video Self-Supervised Learning?
|
Accepted in ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the recent success of video self-supervised learning models, there is
much still to be understood about their generalization capability. In this
paper, we investigate how sensitive video self-supervised learning is to the
current conventional benchmark and whether methods generalize beyond the
canonical evaluation setting. We do this across four different factors of
sensitivity: domain, samples, actions and task. Our study which encompasses
over 500 experiments on 7 video datasets, 9 self-supervised methods and 6 video
understanding tasks, reveals that current benchmarks in video self-supervised
learning are not good indicators of generalization along these sensitivity
factors. Further, we find that self-supervised methods considerably lag behind
vanilla supervised pre-training, especially when domain shift is large and the
amount of available downstream samples are low. From our analysis, we distill
the SEVERE-benchmark, a subset of our experiments, and discuss its implication
for evaluating the generalizability of representations obtained by existing and
future self-supervised video learning methods.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 06:32:55 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jul 2022 10:58:42 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Thoker",
"Fida Mohammad",
""
],
[
"Doughty",
"Hazel",
""
],
[
"Bagad",
"Piyush",
""
],
[
"Snoek",
"Cees",
""
]
] |
new_dataset
| 0.981498 |
2204.08453
|
Hanyu Wang
|
Hanyu Wang, Kamal Gupta, Larry Davis, Abhinav Shrivastava
|
Neural Space-filling Curves
|
ECCV 2022. Project page:
https://hywang66.github.io/publication/neuralsfc/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Neural Space-filling Curves (SFCs), a data-driven approach to
infer a context-based scan order for a set of images. Linear ordering of pixels
forms the basis for many applications such as video scrambling, compression,
and auto-regressive models that are used in generative modeling for images.
Existing algorithms resort to a fixed scanning algorithm such as Raster scan or
Hilbert scan. Instead, our work learns a spatially coherent linear ordering of
pixels from the dataset of images using a graph-based neural network. The
resulting Neural SFC is optimized for an objective suitable for the downstream
task when the image is traversed along the scan line order. We show the
advantage of using Neural SFCs in downstream applications such as image
compression. Code and additional results will be made available at
https://hywang66.github.io/publication/neuralsfc.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2022 17:59:01 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Jul 2022 01:12:49 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Wang",
"Hanyu",
""
],
[
"Gupta",
"Kamal",
""
],
[
"Davis",
"Larry",
""
],
[
"Shrivastava",
"Abhinav",
""
]
] |
new_dataset
| 0.999047 |
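For contrast with the learned orderings in the abstract above, the following is a sketch of the fixed Hilbert scan order it mentions, using the standard distance-to-coordinate conversion (a generic textbook routine, not code from the paper):

```python
def hilbert_d2xy(order: int, d: int) -> tuple[int, int]:
    """Map a distance d along the Hilbert curve to (x, y) on a 2**order grid.

    Standard iterative d2xy conversion; a generic illustration of the fixed
    Hilbert scan mentioned in the abstract, not the Neural SFC method.
    """
    n = 1 << order
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        # Rotate/reflect the quadrant so the sub-curve is oriented correctly.
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Scan order for a 4x4 image: pixel coordinates visited along the curve.
print([hilbert_d2xy(2, d) for d in range(16)])
```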
2205.02837
|
Dave Epstein
|
Dave Epstein, Taesung Park, Richard Zhang, Eli Shechtman, Alexei A.
Efros
|
BlobGAN: Spatially Disentangled Scene Representations
|
ECCV 2022. Project webpage available at https://www.dave.ml/blobgan
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose an unsupervised, mid-level representation for a generative model
of scenes. The representation is mid-level in that it is neither per-pixel nor
per-image; rather, scenes are modeled as a collection of spatial, depth-ordered
"blobs" of features. Blobs are differentiably placed onto a feature grid that
is decoded into an image by a generative adversarial network. Due to the
spatial uniformity of blobs and the locality inherent to convolution, our
network learns to associate different blobs with different entities in a scene
and to arrange these blobs to capture scene layout. We demonstrate this
emergent behavior by showing that, despite training without any supervision,
our method enables applications such as easy manipulation of objects within a
scene (e.g., moving, removing, and restyling furniture), creation of feasible
scenes given constraints (e.g., plausible rooms with drawers at a particular
location), and parsing of real-world images into constituent parts. On a
challenging multi-category dataset of indoor scenes, BlobGAN outperforms
StyleGAN2 in image quality as measured by FID. See our project page for video
results and interactive demo: https://www.dave.ml/blobgan
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 17:59:55 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 20:48:05 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Epstein",
"Dave",
""
],
[
"Park",
"Taesung",
""
],
[
"Zhang",
"Richard",
""
],
[
"Shechtman",
"Eli",
""
],
[
"Efros",
"Alexei A.",
""
]
] |
new_dataset
| 0.996499 |
2206.06171
|
Sivan Toledo
|
Sivan Toledo, Shai Mendel, Anat Levi, Yoni Vortman, Wiebke Ullmann,
Lena-Rosa Scherer, Jan Pufelski, Frank van Maarseveen, Bas Denissen, Allert
Bijleveld, Yotam Orchan, Yoav Bartan, Sivan Margalit, Idan Talmon, Ran Nathan
|
Vildehaye: A Family of Versatile, Widely-Applicable, and Field-Proven
Lightweight Wildlife Tracking and Sensing Tags
|
Accepted version of IPSN 2022 paper
|
Proceedings of the 21st ACM/IEEE International Conference on
Information Processing in Sensor Networks (IPSN), 2022
|
10.1109/IPSN54338.2022.00008
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe the design and implementation of Vildehaye, a family of
versatile, widely-applicable, and field-proven tags for wildlife sensing and
radio tracking. The family includes 6 distinct hardware designs for tags, 3
add-on boards, a programming adapter, and base stations; modular firmware for
tags and base stations (both standalone low-power embedded base stations and
base stations tethered to a computer running Linux or Windows); and desktop
software for programming and configuring tags, monitoring tags, and downloading
and processing sensor data. The tags are versatile: they support multiple
packet formats, data rates, and frequency bands; they can be configured for
minimum mass (down to less than 1g), making them applicable to a wide range of
flying and terrestrial animals, or for inclusion of important sensors and large
memories; they can transmit packets compatible with time-of-arrival
transmitter-localization systems, tag identification and state packets, and
they can reliably upload sensor data through their radio link. The system has
been designed, upgraded, and maintained as an academic research project, but it
has been extensively used by 5 different groups of ecologists in 4 countries
over a period of 5 years. More than 7100 tags have been produced and most of
these have been deployed. Production used 41 manufacturing runs. The tags have
been used in studies that so far resulted in 9 scientific publications in
ecology (including in Science). The paper describes innovative design aspects
of Vildehaye, field-use experiences, and lessons from the design,
implementation, and maintenance of the system. Both the hardware and software
of the system are open.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 05:34:51 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Toledo",
"Sivan",
""
],
[
"Mendel",
"Shai",
""
],
[
"Levi",
"Anat",
""
],
[
"Vortman",
"Yoni",
""
],
[
"Ullmann",
"Wiebke",
""
],
[
"Scherer",
"Lena-Rosa",
""
],
[
"Pufelski",
"Jan",
""
],
[
"van Maarseveen",
"Frank",
""
],
[
"Denissen",
"Bas",
""
],
[
"Bijleveld",
"Allert",
""
],
[
"Orchan",
"Yotam",
""
],
[
"Bartan",
"Yoav",
""
],
[
"Margalit",
"Sivan",
""
],
[
"Talmon",
"Idan",
""
],
[
"Nathan",
"Ran",
""
]
] |
new_dataset
| 0.999596 |
2206.11022
|
Pierre Nugues
|
Pierre Nugues
|
Connecting a French Dictionary from the Beginning of the 20th Century to
Wikidata
| null |
Proceedings of the 13th Language Resources and Evaluation
Conference (LREC), Marseille, France pp. 2548-2555 (2022)
| null | null |
cs.CL cs.DL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Petit Larousse illustré is a French dictionary first published
in 1905. Its division in two main parts on language and on history and
geography corresponds to a major milestone in French lexicography as well as a
repository of general knowledge from this period. Although the value of many
entries from 1905 remains intact, some descriptions now have a dimension that
is more historical than contemporary. They are nonetheless significant to
analyze and understand cultural representations from this time. A comparison
with more recent information or a verification of these entries would require a
tedious manual work. In this paper, we describe a new lexical resource, where
we connected all the dictionary entries of the history and geography part to
current data sources. For this, we linked each of these entries to a Wikidata
identifier. Using the Wikidata links, we can more easily automate the
identification, comparison, and verification of historically-situated
representations. We give a few examples of how to process Wikidata identifiers
and we carried out a small analysis of the entities described in the dictionary
to outline possible applications. The resource, i.e. the annotation of 20,245
dictionary entries with Wikidata links, is available from GitHub:
https://github.com/pnugues/petit_larousse_1905/
|
[
{
"version": "v1",
"created": "Wed, 22 Jun 2022 12:45:21 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jun 2022 14:08:00 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Aug 2022 15:05:31 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Nugues",
"Pierre",
""
]
] |
new_dataset
| 0.974855 |
2207.01180
|
Yusuke Tanaka
|
Yusuke Tanaka, Yuki Shirai, Xuan Lin, Alexander Schperberg, Hayato
Kato, Alexander Swerdlow, Naoya Kumagai, and Dennis Hong
|
SCALER: A Tough Versatile Quadruped Free-Climber Robot
|
Proceeding to IROS 2022, Preprint and not a final version
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces SCALER, a quadrupedal robot that demonstrates climbing
on bouldering walls, overhangs, ceilings and trotting on the ground. SCALER is
one of the first high-degree-of-freedom four-limbed robots that can free-climb
under the Earth's gravity and one of the most mechanically efficient quadrupeds
on the ground. Where other state-of-the-art climbers specialize in climbing,
SCALER promises practical free-climbing with payload \textit{and} ground
locomotion, which realizes true versatile mobility. A new climbing gait, SKATE
gait, increases the payload by utilizing the SCALER body linkage mechanism.
SCALER achieves a maximum normalized locomotion speed of $1.87$ /s, or $0.56$
m/s on the ground and $1.0$ /min, or $0.35$ m/min in bouldering wall climbing.
Payload capacity reaches $233$ % of the SCALER weight on the ground and $35$ %
on the vertical wall. Our GOAT gripper, a mechanically adaptable underactuated
two-finger gripper, successfully grasps convex and non-convex objects and
supports SCALER.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 03:43:57 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2022 15:40:23 GMT"
},
{
"version": "v3",
"created": "Sat, 30 Jul 2022 14:56:23 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Tanaka",
"Yusuke",
""
],
[
"Shirai",
"Yuki",
""
],
[
"Lin",
"Xuan",
""
],
[
"Schperberg",
"Alexander",
""
],
[
"Kato",
"Hayato",
""
],
[
"Swerdlow",
"Alexander",
""
],
[
"Kumagai",
"Naoya",
""
],
[
"Hong",
"Dennis",
""
]
] |
new_dataset
| 0.999242 |
2207.06782
|
Zilong Wang
|
Erzhong Xue, Zilong Wang, Jinjin Chai
|
Boolean Functions of Binary Type-II and Type-II/III Complementary Array
Pair
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The sequence pairs of length $2^{m}$ projected from complementary array pairs
of Type-II of size $\mathbf{2}^{(m)}$ and mixed Type-II/III and of size
$\mathbf{2}^{(m-1)}\times2$ are complementary sequence pairs of Type-II and
Type-III, respectively. An exhaustive search for binary Type-II and Type-III
complementary sequence pairs of small lengths $2^{m}$ ($m=1,2,3,4$) shows that
they are all projected from the aforementioned complementary array pairs, whose
algebraic normal forms satisfy specified expressions. It is natural to ask
whether the conclusion holds for all $m$. In this paper, we prove that these
expressions of algebraic normal forms determine all the binary complementary
array pairs of Type-II of size $\mathbf{2}^{(m)}$ and mixed Type-II/III of size
$\mathbf{2}^{(m-1)}\times2$ respectively.
|
[
{
"version": "v1",
"created": "Thu, 14 Jul 2022 09:45:51 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 02:42:41 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Xue",
"Erzhong",
""
],
[
"Wang",
"Zilong",
""
],
[
"Chai",
"Jinjin",
""
]
] |
new_dataset
| 0.999164 |
2207.10524
|
Samuel Cahyawijaya
|
Samuel Cahyawijaya, Alham Fikri Aji, Holy Lovenia, Genta Indra Winata,
Bryan Wilie, Rahmad Mahendra, Fajri Koto, David Moeljadi, Karissa Vincentio,
Ade Romadhony, Ayu Purwarianti
|
NusaCrowd: A Call for Open and Reproducible NLP Research in Indonesian
Languages
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
At the center of the underlying issues that halt Indonesian natural language
processing (NLP) research advancement, we find data scarcity. Resources in
Indonesian languages, especially the local ones, are extremely scarce and
underrepresented. Many Indonesian researchers do not publish their datasets.
Furthermore, the few public datasets that we have are scattered across
different platforms, thus making performing reproducible and data-centric
research in Indonesian NLP even more arduous. Rising to this challenge, we
initiate the first Indonesian NLP crowdsourcing effort, NusaCrowd. NusaCrowd
strives to provide the largest datasheets aggregation with standardized data
loading for NLP tasks in all Indonesian languages. By enabling open and
centralized access to Indonesian NLP resources, we hope NusaCrowd can tackle
the data scarcity problem hindering NLP progress in Indonesia and bring NLP
practitioners together to move towards closer collaboration.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 15:05:42 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 16:55:04 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Cahyawijaya",
"Samuel",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Lovenia",
"Holy",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Wilie",
"Bryan",
""
],
[
"Mahendra",
"Rahmad",
""
],
[
"Koto",
"Fajri",
""
],
[
"Moeljadi",
"David",
""
],
[
"Vincentio",
"Karissa",
""
],
[
"Romadhony",
"Ade",
""
],
[
"Purwarianti",
"Ayu",
""
]
] |
new_dataset
| 0.999545 |
2208.00003
|
Claude Kl\"ockl
|
Viktor Zobernig, Richard A. Saldanha, Jinke He, Erica van der Sar,
Jasper van Doorn, Jia-Chen Hua, Lachlan R. Mason, Aleksander Czechowski,
Drago Indjic, Tomasz Kosmala, Alessandro Zocca, Sandjai Bhulai, Jorge
Montalvo Arvizu, Claude Kl\"ockl, John Moriarty
|
RangL: A Reinforcement Learning Competition Platform
|
Documents the RangL competition platform in general and, in particular,
its 2022 competition "Pathways to Net Zero". 10 pages, 2
figures, 1 table. Comments welcome!
| null | null | null |
cs.LG cs.AI cs.GL cs.SY eess.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The RangL project hosted by The Alan Turing Institute aims to encourage the
wider uptake of reinforcement learning by supporting competitions relating to
real-world dynamic decision problems. This article describes the reusable code
repository developed by the RangL team and deployed for the 2022 Pathways to
Net Zero Challenge, supported by the UK Net Zero Technology Centre. The winning
solutions to this particular Challenge seek to optimize the UK's energy
transition policy to net zero carbon emissions by 2050. The RangL repository
includes an OpenAI Gym reinforcement learning environment and code that
supports both submission to, and evaluation in, a remote instance of the open
source EvalAI platform as well as all winning learning agent strategies. The
repository is an illustrative example of RangL's capability to provide a
reusable structure for future challenges.
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 09:44:21 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Zobernig",
"Viktor",
""
],
[
"Saldanha",
"Richard A.",
""
],
[
"He",
"Jinke",
""
],
[
"van der Sar",
"Erica",
""
],
[
"van Doorn",
"Jasper",
""
],
[
"Hua",
"Jia-Chen",
""
],
[
"Mason",
"Lachlan R.",
""
],
[
"Czechowski",
"Aleksander",
""
],
[
"Indjic",
"Drago",
""
],
[
"Kosmala",
"Tomasz",
""
],
[
"Zocca",
"Alessandro",
""
],
[
"Bhulai",
"Sandjai",
""
],
[
"Arvizu",
"Jorge Montalvo",
""
],
[
"Klöckl",
"Claude",
""
],
[
"Moriarty",
"John",
""
]
] |
new_dataset
| 0.997912 |
2208.00169
|
Szymon P{\l}otka
|
Przemys{\l}aw Korzeniowski, Szymon P{\l}otka, Robert
Brawura-Biskupski-Samaha, Arkadiusz Sitek
|
Virtual Reality Simulator for Fetoscopic Spina Bifida Repair Surgery
|
Accepted for IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS) 2022, Kyoto, Japan
| null | null | null |
cs.CV cs.MM cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Spina Bifida (SB) is a birth defect developed during the early stage of
pregnancy in which there is incomplete closing of the spine around the spinal
cord. The growing interest in fetoscopic Spina-Bifida repair, which is
performed in fetuses who are still in the pregnant uterus, prompts the need for
appropriate training. The learning curve for such procedures is steep and
requires excellent procedural skills. Computer-based virtual reality (VR)
simulation systems offer a safe, cost-effective, and configurable training
environment free from ethical and patient safety issues. However, to the best
of our knowledge, there are currently no commercial or experimental VR training
simulation systems available for fetoscopic SB-repair procedures. In this
paper, we propose a novel VR simulator for core manual skills training for
SB-repair. An initial simulation realism validation study was carried out by
obtaining subjective feedback (face and content validity) from 14 clinicians.
The overall simulation realism was on average marked 4.07 on a 5-point Likert
scale (1 - very unrealistic, 5 - very realistic). Its usefulness as a training
tool for SB-repair as well as in learning fundamental laparoscopic skills was
marked 4.63 and 4.80, respectively. These results indicate that VR simulation
of fetoscopic procedures may contribute to surgical training without putting
fetuses and their mothers at risk. It could also facilitate wider adoption of
fetoscopic procedures in place of much more invasive open fetal surgeries.
|
[
{
"version": "v1",
"created": "Sat, 30 Jul 2022 08:51:11 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Korzeniowski",
"Przemysław",
""
],
[
"Płotka",
"Szymon",
""
],
[
"Brawura-Biskupski-Samaha",
"Robert",
""
],
[
"Sitek",
"Arkadiusz",
""
]
] |
new_dataset
| 0.987888 |
2208.00192
|
Joao Barbosa
|
Jo\~ao Barbosa, M\'ario Florido, V\'itor Santos Costa
|
Typed SLD-Resolution: Dynamic Typing for Logic Programming
|
17 pages
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
The semantic foundations for logic programming are usually separated into two
different approaches: the operational semantics, which uses SLD-resolution, the
proof method that computes answers in logic programming, and the declarative
semantics, which sees logic programs as formulas and their semantics as models.
Here, we define a new operational semantics called TSLD-resolution, which
stands for Typed SLD-resolution, where we include a value "wrong", that
corresponds to the detection of a type error at run-time. For this we define a
new typed unification algorithm. Finally we prove the correctness of
TSLD-resolution with respect to a typed declarative semantics.
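To make the idea concrete, the following minimal Python sketch shows one
plausible form of typed unification, assuming a toy term representation
(variables as ('var', name) tuples) and type maps that are not part of the
paper; the essential point is only that a type clash during variable binding
returns the distinguished value "wrong" rather than plain failure.

```python
def walk(t, s):
    # Follow variable bindings in the substitution s.
    while isinstance(t, tuple) and t[0] == 'var' and t in s:
        t = s[t]
    return t

def type_of(t, vtype, ttype):
    # Declared type of a variable, or result type of a functor.
    return vtype[t[1]] if t[0] == 'var' else ttype[t[0]]

def unify(t1, t2, s, vtype, ttype):
    # Returns an extended substitution, None (ordinary failure), or
    # 'wrong' (a detected run-time type error), echoing TSLD-resolution.
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    for a, b in ((t1, t2), (t2, t1)):
        if a[0] == 'var':
            if type_of(a, vtype, ttype) != type_of(b, vtype, ttype):
                return 'wrong'
            return {**s, a: b}
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for u, v in zip(t1[1:], t2[1:]):
        s = unify(u, v, s, vtype, ttype)
        if s in ('wrong', None):
            return s
    return s

vtype = {'X': 'int'}                      # hypothetical variable typing
ttype = {'zero': 'int', 'nil': 'list'}    # hypothetical functor types
assert unify(('var', 'X'), ('zero',), {}, vtype, ttype) == {('var', 'X'): ('zero',)}
assert unify(('var', 'X'), ('nil',), {}, vtype, ttype) == 'wrong'
```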
|
[
{
"version": "v1",
"created": "Sat, 30 Jul 2022 11:37:00 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Barbosa",
"João",
""
],
[
"Florido",
"Mário",
""
],
[
"Costa",
"Vítor Santos",
""
]
] |
new_dataset
| 0.998388 |
2208.00223
|
Aoran Xiao
|
Aoran Xiao, Jiaxing Huang, Dayan Guan, Kaiwen Cui, Shijian Lu, Ling
Shao
|
PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR point clouds, which are usually scanned by rotating LiDAR sensors
continuously, capture precise geometry of the surrounding environment and are
crucial to many autonomous detection and navigation tasks. Though many 3D deep
architectures have been developed, efficient collection and annotation of large
amounts of point clouds remains a major challenge in the analysis and
understanding of point cloud data. This paper presents PolarMix, a point cloud
augmentation technique that is simple and generic but can mitigate the data
constraint effectively across different perception tasks and scenarios.
PolarMix enriches point cloud distributions and preserves point cloud fidelity
via two cross-scan augmentation strategies that cut, edit, and mix point clouds
along the scanning direction. The first is scene-level swapping which exchanges
point cloud sectors of two LiDAR scans that are cut along the azimuth axis. The
second is instance-level rotation and paste which crops point instances from
one LiDAR scan, rotates them by multiple angles (to create multiple copies),
and pastes the rotated point instances into other scans. Extensive experiments
show that PolarMix achieves superior performance consistently across different
perception tasks and scenarios. In addition, it can work as plug-and-play for
various 3D deep architectures and also performs well for unsupervised domain
adaptation.
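As an illustration of the two strategies (a sketch, not the authors' released
implementation), the NumPy fragment below operates on (N, 3) xyz arrays; real
scans also carry labels and intensities, and the sector bounds and rotation
angles are sampled randomly during training.

```python
import numpy as np

def scene_swap(pts_a, pts_b, az_lo, az_hi):
    # Scene-level swapping: exchange the azimuth sector [az_lo, az_hi)
    # (radians) between two LiDAR scans given as (N, 3) xyz arrays.
    def in_sector(pts):
        az = np.arctan2(pts[:, 1], pts[:, 0])
        return (az >= az_lo) & (az < az_hi)
    ma, mb = in_sector(pts_a), in_sector(pts_b)
    mixed_a = np.concatenate([pts_a[~ma], pts_b[mb]])
    mixed_b = np.concatenate([pts_b[~mb], pts_a[ma]])
    return mixed_a, mixed_b

def rotate_paste(instance_pts, target_pts, angles):
    # Instance-level rotate-and-paste: rotate copies of a cropped
    # instance about the z-axis and paste them into another scan.
    copies = [target_pts]
    for t in angles:
        c, s = np.cos(t), np.sin(t)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        copies.append(instance_pts @ R.T)
    return np.concatenate(copies)
```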
|
[
{
"version": "v1",
"created": "Sat, 30 Jul 2022 13:52:19 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Xiao",
"Aoran",
""
],
[
"Huang",
"Jiaxing",
""
],
[
"Guan",
"Dayan",
""
],
[
"Cui",
"Kaiwen",
""
],
[
"Lu",
"Shijian",
""
],
[
"Shao",
"Ling",
""
]
] |
new_dataset
| 0.966746 |
2208.00235
|
Roberto Dillon
|
Roberto Dillon, Arushi
|
'PeriHack': Designing a Serious Game for Cybersecurity Awareness
|
5 pages, 6 figures, 2 tables. For associated files see
https://github.com/rdillon73/PeriHack
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper describes the design process for the cybersecurity serious game
'PeriHack'. Publicly released under a CC (BY-NC-SA) license, PeriHack is a
board and card game for two players or teams that simulates the struggle
between a red team (attackers) and a blue team (defenders). The game requires
players to explore a sample network looking for vulnerabilities and then chain
different attacks to exploit possible weaknesses of different nature, which may
include both technical and social engineering exploits. At the same time, it
also simulates budget level constraints for the blue team by providing limited
resources to evaluate and prioritize different critical vulnerabilities. The
game is discussed via the lenses of the AGE and 6-11 Frameworks and was
primarily designed as a learning tool for students in the cybersecurity and
technology-related fields.
|
[
{
"version": "v1",
"created": "Sat, 30 Jul 2022 14:41:10 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Dillon",
"Roberto",
""
],
[
"Arushi",
"",
""
]
] |
new_dataset
| 0.997649 |
2208.00324
|
Djoko Suprijanto -
|
Hopein Christofen Tang and Djoko Suprijanto
|
A general family of Plotkin-optimal two-weight codes over $\mathbb{Z}_4$
|
16 pages
| null | null | null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We obtain all possible parameters of Plotkin-optimal two-Lee weight
projective codes over $\mathbb{Z}_4,$ together with their weight distributions.
We show the existence of codes with these parameters as well as their weight
distributions by constructing an infinite family of two-weight codes.
Previously known codes constructed by Shi et al. (\emph{Des Codes Cryptogr.}
{\bf 88}(3):1-13, 2020) can be derived as a special case of our results. We
also prove that the Gray image of any Plotkin-optimal two-Lee weight projective
code over $\mathbb{Z}_4$ has the same parameters and weight distribution as
some two-weight binary projective codes of type SU1 in the sense of Calderbank
and Kantor (\emph{Bull. Lond. Math. Soc.} {\bf 18}:97-122, 1986).
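The Gray map behind this correspondence is the standard one (0 -> 00, 1 -> 01,
2 -> 11, 3 -> 10); the small Python check below, illustrative rather than taken
from the paper, confirms that it carries Lee weight over $\mathbb{Z}_4$ to
Hamming weight over $\mathbb{F}_2$.

```python
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # standard Gray map
LEE_W = {0: 0, 1: 1, 2: 2, 3: 1}                      # Lee weights on Z_4

def gray_image(codeword):
    # Component-wise Gray map from Z_4^n to F_2^{2n}.
    return [b for c in codeword for b in GRAY[c % 4]]

def lee_weight(codeword):
    return sum(LEE_W[c % 4] for c in codeword)

cw = [1, 2, 3, 0, 2]
# The Gray map is an isometry from (Z_4^n, Lee) to (F_2^{2n}, Hamming).
assert sum(gray_image(cw)) == lee_weight(cw)
```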
|
[
{
"version": "v1",
"created": "Sat, 30 Jul 2022 23:53:02 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Tang",
"Hopein Christofen",
""
],
[
"Suprijanto",
"Djoko",
""
]
] |
new_dataset
| 0.999403 |
2208.00332
|
Maleknaz Nayebi
|
Sk Golam Saroar, Waseefa Ahmed, Maleknaz Nayebi
|
GitHub Marketplace for Practitioners and Researchers to Date: A
Systematic Analysis of the Knowledge Mobilization Gap in Open Source Software
Automation
|
The paper is under review in a journal
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Marketplaces for distributing software products and services have been
gaining popularity. GitHub, which is best known for its version
control functionality through Git, launched its own marketplace in 2017. GitHub
Marketplace hosts third party apps and actions to automate workflows in
software teams. Currently, this marketplace hosts 440 Apps and 7,878 Actions
across 32 different categories. Overall, 419 third-party developers released
their apps on this platform which 111 distinct customers adopted. The
popularity and accessibility of GitHub projects have made this platform and the
projects hosted on it one of the most frequent subjects for experimentation in
software engineering research. A simple Google Scholar search shows that
24,100 research papers have discussed GitHub within the Software Engineering
field since 2017, but none have looked into the marketplace. The GitHub
Marketplace provides a unique source of information on the tools used by the
practitioners in the Open Source Software (OSS) ecosystem for automating their
project's workflow. In this study, we (i) mine and provide a descriptive
overview of the GitHub Marketplace, (ii) perform a systematic mapping of
research studies in automation for open source software, and (iii) compare the
state of the art with the state of the practice on the automation tools. We
conclude the paper by discussing the potential of GitHub Marketplace for
knowledge mobilization and collaboration within the field. This is the first
study on the GitHub Marketplace in the field.
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 01:48:19 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Saroar",
"Sk Golam",
""
],
[
"Ahmed",
"Waseefa",
""
],
[
"Nayebi",
"Maleknaz",
""
]
] |
new_dataset
| 0.999663 |
2208.00333
|
Lucia Moura
|
Andr\'e Guerino Castoldi, Lucia Moura, Daniel Panario, Brett Stevens
|
Ordered Orthogonal Array Construction Using LFSR Sequences
|
12 pages
|
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 63, NO. 2, FEBRUARY
2017
|
10.1109/TIT.2016.2634010
| null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new construction of ordered orthogonal arrays (OOA) of strength
$t$ with $(q + 1)t$ columns over a finite field $\mathbb{F}_{q}$ using linear
feedback shift register sequences (LFSRs). OOAs are naturally related to $(t,
m, s)$-nets, linear codes, and MDS codes. Our construction selects suitable
columns from the array formed by all subintervals of length
$\frac{q^{t}-1}{q-1}$ of an LFSR sequence generated by a primitive polynomial
of degree $t$ over $\mathbb{F}_{q}$. We prove properties about the relative
positions of runs in an LFSR which guarantee that the constructed OOA has
strength $t$. The set of parameters of our OOAs are the same as the ones given
by Rosenbloom and Tsfasman (1997) and Skriganov (2002), but the constructed
arrays are different. We experimentally verify that our OOAs are stronger than
the Rosenbloom-Tsfasman-Skriganov OOAs in the sense that ours are "closer" to
being a "full" orthogonal array. We also discuss how our OOA construction
relates to previous techniques to build OOAs from a set of linearly independent
vectors over $\mathbb{F}_{q}$, as well as to hypergraph homomorphisms.
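To make the ingredients concrete, the following Python sketch, written for the
binary case $q=2$, $t=4$ (an assumption chosen for illustration), generates an
m-sequence from the primitive polynomial $x^4+x+1$ and forms the array of all
length-15 subintervals of the periodic sequence; the construction in the paper
then selects suitable columns of such an array using the run-position
properties, which are omitted here.

```python
def m_sequence(n=4):
    # m-sequence from the primitive polynomial x^4 + x + 1 over F_2:
    # recurrence s[i] = s[i-3] XOR s[i-4], period 2^n - 1 = 15.
    s = [1, 0, 0, 0]
    while len(s) < 2 ** n - 1:
        s.append(s[-3] ^ s[-4])
    return s

def subinterval_array(seq):
    # Rows are all subintervals of length |seq| of the periodic LFSR
    # sequence (its cyclic shifts); columns are candidates for the OOA.
    n = len(seq)
    return [[seq[(i + j) % n] for j in range(n)] for i in range(n)]

arr = subinterval_array(m_sequence())
# Every cyclic shift of this m-sequence is balanced: 8 ones, 7 zeros.
assert len(arr) == 15 and all(sum(row) == 8 for row in arr)
```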
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 01:49:49 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Castoldi",
"André Guerino",
""
],
[
"Moura",
"Lucia",
""
],
[
"Panario",
"Daniel",
""
],
[
"Stevens",
"Brett",
""
]
] |
new_dataset
| 0.998751 |
2208.00388
|
Gabriel Chen
|
Gabriel Chen, Rick Wanner
|
Secure Email Transmission Protocols -- A New Architecture Design
|
8 pages, 5 figures, SANS Institute
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
In today's digital age, emails have become a crucial part of
communications for both personal and enterprise usage. However, email
transmission protocols were not designed with security in mind, and this has
always been a challenge while trying to make email transmission more secure. On
top of the basic layer of SMTP, POP3, and IMAP protocols to send and retrieve
emails, there are several other major security protocols used in current days
to secure email transmission such as TLS/SSL, STARTTLS, and PGP/GPG encryption.
The most general design used in email transmission architecture is SMTP with
PGP/GPG encryption sent through a TLS/SSL secure channel. Regardless of
vulnerabilities within these security protocols and encryption methods, there
is still work to be done regarding the architecture design. In this paper, we
discuss the challenges among current email transmission security protocols and
architectures. We explore some new techniques and propose a new email
transmission architecture using EEKS structure and Schnorr Signature to
eliminate the usage of PGP/GPG for encryption while achieving Perfect Forward
Secrecy.
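Since the proposal builds on Schnorr signatures, the toy Python sketch below
shows textbook Schnorr over a deliberately tiny group (p = 23, q = 11, g = 2);
this is purely illustrative -- real deployments use standardized groups or
curves, constant-time arithmetic, and careful nonce handling -- and the EEKS
structure itself is not modeled here.

```python
import hashlib
import secrets

# Toy Schnorr group: q divides p - 1 and g has order q modulo p.
p, q, g = 23, 11, 2

def H(*parts):
    # Fiat-Shamir challenge: hash into Z_q.
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # private key in [1, q-1]
    return x, pow(g, x, p)                # public key y = g^x mod p

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1      # fresh per-signature nonce
    r = pow(g, k, p)
    e = H(r, msg)
    s = (k + x * e) % q
    return e, s

def verify(y, msg, sig):
    e, s = sig
    r = (pow(g, s, p) * pow(y, -e, p)) % p    # recomputes g^k
    return H(r, msg) == e

x, y = keygen()
assert verify(y, "hello", sign(x, "hello"))
```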
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 07:56:01 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Chen",
"Gabriel",
""
],
[
"Wanner",
"Rick",
""
]
] |
new_dataset
| 0.998023 |
2208.00392
|
Jonathan Fhima
|
Jonathan Fhima, Jan Van Eijgen, Ingeborg Stalmans, Yevgeniy Men, Moti
Freiman, Joachim A. Behar
|
PVBM: A Python Vasculature Biomarker Toolbox Based On Retinal Blood
Vessel Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Introduction: Blood vessels can be non-invasively visualized from a digital
fundus image (DFI). Several studies have shown an association between
cardiovascular risk and vascular features obtained from DFI. Recent advances in
computer vision and image segmentation enable automating DFI blood vessel
segmentation. There is a need for a resource that can automatically compute
digital vasculature biomarkers (VBM) from these segmented DFI. Methods: In this
paper, we introduce a Python Vasculature BioMarker toolbox, denoted PVBM. A
total of 11 VBMs were implemented. In particular, we introduce new algorithmic
methods to estimate tortuosity and branching angles. Using PVBM, and as a proof
of usability, we analyze geometric vascular differences between glaucomatous
patients and healthy controls. Results: We built a fully automated vasculature
biomarker toolbox based on DFI segmentations and provided a proof of usability
to characterize the vascular changes in glaucoma. For arterioles and venules,
all biomarkers were significant and lower in glaucoma patients compared to
healthy controls except for tortuosity, venular singularity length and venular
branching angles.
Conclusion: We have automated the computation of 11 VBMs from retinal blood
vessel segmentation. The PVBM toolbox is made open source under a GNU GPL 3
license and is available on physiozoo.com (following publication).
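As one example of the kind of biomarker involved, the classical distance-metric
tortuosity of a vessel centerline (arc length over chord length) is sketched
below in Python; PVBM introduces its own, more refined tortuosity and
branching-angle estimators, so this is a baseline illustration rather than the
toolbox's algorithm.

```python
import numpy as np

def tortuosity(points):
    # Classical distance-metric tortuosity of a vessel centerline:
    # arc length divided by chord length (1.0 for a straight segment).
    pts = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]
wiggly = [(0, 0), (1, 1), (2, 0)]
assert tortuosity(straight) == 1.0 and tortuosity(wiggly) > 1.0
```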
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 08:22:59 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Fhima",
"Jonathan",
""
],
[
"Van Eijgen",
"Jan",
""
],
[
"Stalmans",
"Ingeborg",
""
],
[
"Men",
"Yevgeniy",
""
],
[
"Freiman",
"Moti",
""
],
[
"Behar",
"Joachim A.",
""
]
] |
new_dataset
| 0.997504 |
2208.00408
|
Guangyao Zhai
|
Guangyao Zhai, Yu Zheng, Ziwei Xu, Xin Kong, Yong Liu, Benjamin Busam,
Yi Ren, Nassir Navab, Zhengyou Zhang
|
DA$^2$ Dataset: Toward Dexterity-Aware Dual-Arm Grasping
|
RAL+IROS'22
| null |
10.1109/LRA.2022.3189959
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce DA$^2$, the first large-scale dual-arm
dexterity-aware dataset for the generation of optimal bimanual grasping pairs
for arbitrary large objects. The dataset contains about 9M pairs of
parallel-jaw grasps, generated from more than 6000 objects and each labeled
with various grasp dexterity measures. In addition, we propose an end-to-end
dual-arm grasp evaluation model trained on the rendered scenes from this
dataset. We utilize the evaluation model as our baseline to show the value of
this novel and nontrivial dataset by both online analysis and real robot
experiments. All data and related code will be open-sourced at
https://sites.google.com/view/da2dataset.
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 10:02:27 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Zhai",
"Guangyao",
""
],
[
"Zheng",
"Yu",
""
],
[
"Xu",
"Ziwei",
""
],
[
"Kong",
"Xin",
""
],
[
"Liu",
"Yong",
""
],
[
"Busam",
"Benjamin",
""
],
[
"Ren",
"Yi",
""
],
[
"Navab",
"Nassir",
""
],
[
"Zhang",
"Zhengyou",
""
]
] |
new_dataset
| 0.999552 |
2208.00449
|
Yabo Chen
|
Yabo Chen, Yuchen Liu, Dongsheng Jiang, Xiaopeng Zhang, Wenrui Dai,
Hongkai Xiong, Qi Tian
|
SdAE: Self-distillated Masked Autoencoder
|
Accepted to ECCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
With the development of generative-based self-supervised learning (SSL)
approaches like BeiT and MAE, how to learn good representations by masking
random patches of the input image and reconstructing the missing information
has attracted growing attention. However, BeiT and PeCo need a "pre-pretraining"
stage to produce discrete codebooks for representing masked patches. MAE does not
require a pre-training codebook process, but setting pixels as reconstruction
targets may introduce an optimization gap between pre-training and downstream
tasks, in that good reconstruction quality may not always lead to high
descriptive capability for the model. Considering the above issues, in this
paper, we propose a simple Self-distillated masked AutoEncoder network, namely
SdAE. SdAE consists of a student branch using an encoder-decoder structure to
reconstruct the missing information, and a teacher branch producing latent
representation of masked tokens. We also analyze how to build good views for
the teacher branch to produce latent representation from the perspective of
information bottleneck. After that, we propose a multi-fold masking strategy to
provide multiple masked views with balanced information for boosting the
performance, which can also reduce the computational complexity. Our approach
generalizes well: with only 300 epochs pre-training, a vanilla ViT-Base model
achieves an 84.1% fine-tuning accuracy on ImageNet-1k classification, 48.6 mIOU
on ADE20K segmentation, and 48.9 mAP on COCO detection, which surpasses other
methods by a considerable margin. Code is available at
https://github.com/AbrahamYabo/SdAE.
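A minimal PyTorch-style sketch of one plausible reading of the multi-fold
masking strategy -- partitioning the patch indices into k disjoint folds so
that the k masked views carry balanced, non-overlapping information; the
paper's exact sampling scheme may differ.

```python
import torch

def multifold_masks(num_patches, k):
    # Split the patch indices into k disjoint folds; each fold serves
    # as the masked set of one view, so the k views carry balanced,
    # non-overlapping information.
    perm = torch.randperm(num_patches)
    return [perm[i::k] for i in range(k)]

views = multifold_masks(196, 4)   # e.g., 14x14 ViT patches, 4 masked views
assert sum(len(v) for v in views) == 196
```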
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 15:07:25 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Chen",
"Yabo",
""
],
[
"Liu",
"Yuchen",
""
],
[
"Jiang",
"Dongsheng",
""
],
[
"Zhang",
"Xiaopeng",
""
],
[
"Dai",
"Wenrui",
""
],
[
"Xiong",
"Hongkai",
""
],
[
"Tian",
"Qi",
""
]
] |
new_dataset
| 0.995953 |
2208.00493
|
Debanjan Datta
|
Debanjan Datta, Sathappan Muthiah, John Simeone, Amelia Meadows, Naren
Ramakrishnan
|
Scrutinizing Shipment Records To Thwart Illegal Timber Trade
|
Accepted in Proceedings of 6th Outlier Detection and Description
Workshop, ACM SigKDD 2021 https://oddworkshop.github.io/assets/papers/7.pdf.
arXiv admin note: substantial text overlap with arXiv:2104.01156
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Timber and forest products made from wood, like furniture, are valuable
commodities, and like the global trade of many highly-valued natural resources,
face challenges of corruption, fraud, and illegal harvesting. These grey and
black market activities in the wood and forest products sector are not limited
to the countries where the wood was harvested, but extend throughout the global
supply chain and have been tied to illicit financial flows, like trade-based
money laundering, document fraud, species mislabeling, and other illegal
activities. The task of finding such fraudulent activities using trade data, in
the absence of ground truth, can be modelled as an unsupervised anomaly
detection problem. However, existing approaches suffer from certain shortcomings
in their applicability to large-scale trade data. Trade data is
heterogeneous, with both categorical and numerical attributes in a tabular
format. The overall challenge lies in the complexity, volume, and velocity of
the data, with a large number of entities and a lack of ground-truth labels. To
mitigate these, we propose a novel unsupervised anomaly detection method --
Contrastive Learning based Heterogeneous Anomaly Detection (CHAD) that is
generally applicable to large-scale heterogeneous tabular data. We demonstrate
that our model CHAD performs favorably against multiple comparable baselines on
public benchmark datasets, and outperforms them in the case of trade data. More
importantly, we demonstrate that our approach reduces the assumptions and effort
required for hyperparameter tuning, which is a key challenge in an
unsupervised training paradigm. Specifically, our overarching objective
pertains to detecting suspicious timber shipments and patterns using Bill of
Lading trade record data. Detecting anomalous transactions in shipment records
can enable further investigation by government agencies and supply chain
constituents.
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 18:54:52 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Datta",
"Debanjan",
""
],
[
"Muthiah",
"Sathappan",
""
],
[
"Simeone",
"John",
""
],
[
"Meadows",
"Amelia",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] |
new_dataset
| 0.999066 |
2208.00496
|
Michael Xieyang Liu
|
Michael Xieyang Liu, Andrew Kuznetsov, Yongsung Kim, Joseph Chee
Chang, Aniket Kittur, Brad A. Myers
|
Wigglite: Low-cost Information Collection and Triage
| null | null |
10.1145/3526113.3545661
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Consumers conducting comparison shopping, researchers making sense of
competitive space, and developers looking for code snippets online all face the
challenge of capturing the information they find for later use without
interrupting their current flow. In addition, during many learning and
exploration tasks, people need to externalize their mental context, such as
estimating how urgent a topic is to follow up on, or rating a piece of evidence
as a "pro" or "con," which helps scaffold subsequent deeper exploration.
However, current approaches incur a high cost, often requiring users to select,
copy, context switch, paste, and annotate information in a separate document
without offering specific affordances that capture their mental context. In
this work, we explore a new interaction technique called "wiggling," which can
be used to fluidly collect, organize, and rate information during early
sensemaking stages with a single gesture. Wiggling involves rapid
back-and-forth movements of a pointer or up-and-down scrolling on a smartphone,
which can indicate the information to be collected and its valence, using a
single, light-weight gesture that does not interfere with other interactions
that are already available. Through implementation and user evaluation, we
found that wiggling helped participants accurately collect information and
encode their mental context with a 58% reduction in operational cost while
being 24% faster compared to a common baseline.
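A hypothetical detector for the gesture could simply count direction reversals
of the pointer's x-trajectory; the Python sketch below is an assumption-laden
illustration, with placeholder thresholds rather than the values used in the
system, and a real detector would also bound the gesture's duration.

```python
def is_wiggle(xs, min_reversals=3, min_amplitude=10):
    # Hypothetical heuristic: count direction reversals of the pointer's
    # x-trajectory whose preceding leg moved at least min_amplitude px.
    reversals, direction, leg = 0, 0, 0
    for prev, cur in zip(xs, xs[1:]):
        dx = cur - prev
        if dx == 0:
            continue
        d = 1 if dx > 0 else -1
        if direction and d != direction:
            if abs(leg) >= min_amplitude:
                reversals += 1
            leg = 0
        direction = d
        leg += dx
    return reversals >= min_reversals

assert is_wiggle([0, 20, -5, 15, -10, 10])      # rapid back-and-forth
assert not is_wiggle([0, 5, 10, 15, 20, 25])    # ordinary movement
```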
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 19:23:59 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Liu",
"Michael Xieyang",
""
],
[
"Kuznetsov",
"Andrew",
""
],
[
"Kim",
"Yongsung",
""
],
[
"Chang",
"Joseph Chee",
""
],
[
"Kittur",
"Aniket",
""
],
[
"Myers",
"Brad A.",
""
]
] |
new_dataset
| 0.998964 |
2208.00639
|
Kaicheng Pang
|
Kaicheng Pang, Xingxing Zou, Waikeung Wong
|
Dress Well via Fashion Cognitive Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fashion compatibility models enable online retailers to easily obtain a large
number of outfit compositions with good quality. However, effective fashion
recommendation demands precise service for each customer with a deeper
cognition of fashion. In this paper, we conduct the first study on fashion
cognitive learning, which is fashion recommendations conditioned on personal
physical information. To this end, we propose a Fashion Cognitive Network (FCN)
to learn the relationships among visual-semantic embedding of outfit
composition and appearance features of individuals. FCN contains two
submodules, namely an outfit encoder and a Multi-label Graph Convolutional Network
(ML-GCN). The outfit encoder uses a convolutional layer to encode an outfit
into an outfit embedding. The latter module learns label classifiers via
stacked GCN. We conducted extensive experiments on the newly collected O4U
dataset, and the results provide strong qualitative and quantitative evidence
that our framework outperforms alternative methods.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 06:52:37 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Pang",
"Kaicheng",
""
],
[
"Zou",
"Xingxing",
""
],
[
"Wong",
"Waikeung",
""
]
] |
new_dataset
| 0.981872 |
2208.00737
|
Joaquin Taverner
|
Joaquin Taverner, Emilio Vivancos, and Vicente Botti
|
e-Genia3 An AgentSpeak extension for empathic agents
| null | null | null | null |
cs.MA cs.AI cs.PL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we present e-Genia3 an extension of AgentSpeak to provide
support to the development of empathic agents. The new extension modifies the
agent's reasoning processes to select plans according to the analyzed event and
the affective state and personality of the agent. In addition, our proposal
allows a software agent to simulate the distinction between self and other
agents through two different event appraisal processes: the empathic appraisal
process, for eliciting emotions as a response to other agents' emotions, and the
regular affective appraisal process for other non-empathic affective events.
The empathic regulation process adapts the elicited empathic emotion based on
intrapersonal factors (e.g., the agent's personality and affective memory) and
interpersonal characteristics of the agent (e.g., the affective link between
the agents). The use of a memory of past events and their corresponding
elicited emotions allows maintaining an affective link to support
long-term empathic interaction between agents.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 10:53:25 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Taverner",
"Joaquin",
""
],
[
"Vivancos",
"Emilio",
""
],
[
"Botti",
"Vicente",
""
]
] |
new_dataset
| 0.9877 |
2208.00741
|
Barak Hoffer
|
Barak Hoffer and Shahar Kvatinsky
|
Performing Stateful Logic Using Spin-Orbit Torque (SOT) MRAM
|
Published in 2022 22nd IEEE International Conference on
Nanotechnology (NANO)
| null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Stateful logic is a promising processing-in-memory (PIM) paradigm to perform
logic operations using emerging nonvolatile memory cells. While most stateful
logic circuits to date have focused on technologies such as resistive RAM, we
propose two approaches to designing stateful logic using spin orbit torque
(SOT) MRAM. The first approach utilizes the separation of read and write paths
in SOT devices to perform logic operations. In contrast to previous work, our
method utilizes a standard memory structure, and each row can be used as input
or output. The second approach uses voltage-gated SOT switching to allow
stateful logic in denser memory arrays. We present array structures to support
the two approaches and evaluate their functionality using SPICE simulations in
the presence of process variation and device mismatch.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 10:57:38 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Hoffer",
"Barak",
""
],
[
"Kvatinsky",
"Shahar",
""
]
] |
new_dataset
| 0.999156 |
2208.00771
|
Vladan Majerech Dr.
|
Vladan Majerech
|
100 prisoners and a lightbulb -- looking back
|
12 pages, 1 table, 1 graph
| null | null | null |
cs.DM math.HO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
100 prisoners and a light bulb is a long-standing mathematical puzzle. The
problem was studied mostly in 2002 [5], 2003 [1], and 2004 [3]. Solutions in
published articles had an average number of visits above 3850, but the best
solutions on forums had a (declared) average number of visits around 3500. I
spent some time in 2007-2009 optimizing the communication strategy and pushed
the average number of visits below 3390; no new ideas seem to have appeared
since. Recently I have met several people familiar with the published papers
from 2002-2003 but unaware of the newer results. Even after 2009, several
papers on the topic were published without mentioning the new results [4]. A
whole book was written about the problem [2]. This is why I am writing this
summary.
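For context, the classic single-counter strategy is easy to simulate and
averages roughly 10,400 days for 100 prisoners, which shows how large the gap
to the ~3390-visit protocols is; the Python sketch below implements only this
simple baseline, not the optimized strategy.

```python
import random

def single_counter_run(n=100):
    # Classic baseline: prisoner 0 is the counter; every other prisoner
    # turns the light on once; the counter switches it off and declares
    # success after n - 1 toggles. Returns the total number of visits.
    light, count, signaled, days = False, 0, [False] * n, 0
    while count < n - 1:
        days += 1
        p = random.randrange(n)
        if p == 0:
            if light:
                light, count = False, count + 1
        elif not light and not signaled[p]:
            light, signaled[p] = True, True
    return days

runs = [single_counter_run() for _ in range(200)]
print(sum(runs) / len(runs))   # roughly 10,000+ days, far above ~3390
```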
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 22:22:53 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Majerech",
"Vladan",
""
]
] |
new_dataset
| 0.981328 |
2208.00772
|
Shehbaz Jaffer
|
Shehbaz Jaffer and Kaveh Mahdaviani and Bianca Schroeder
|
Improving the Reliability of Next Generation SSDs using WOM-v Codes
|
15 pages, 13 Figures, Published at USENIX FAST'22
|
20th USENIX Conference on File and Storage Technologies (FAST)
2022
| null | null |
cs.AR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
High-density Solid State Drives, such as QLC drives, offer increased storage
capacity, but an order of magnitude fewer Program and Erase (P/E) cycles, limiting their
endurance and hence usability. We present the design and implementation of
non-binary, Voltage-Based Write-Once-Memory (WOM-v) Codes to improve the
lifetime of QLC drives. First, we develop a FEMU based simulator test-bed to
evaluate the gains of WOM-v codes on real world workloads. Second, we propose
and implement two optimizations, an efficient garbage collection mechanism and
an encoding optimization to drastically improve WOM-v code endurance without
compromising performance. A careful evaluation, including microbenchmarks and
trace-driven evaluation, demonstrates that WOM-v codes can reduce Erase cycles
for QLC drives by 4.4x-11.1x for real world workloads with minimal performance
overheads resulting in improved QLC SSD lifetime.
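As background on the rewrite-without-erase idea, the sketch below implements
the classic binary Rivest-Shamir write-once-memory code (2 bits written twice
into 3 cells using only 0 -> 1 flips); the WOM-v codes of the paper are
non-binary and voltage-based, so this is a simpler stand-in rather than the
proposed construction.

```python
FIRST = {'00': (0, 0, 0), '01': (1, 0, 0), '10': (0, 1, 0), '11': (0, 0, 1)}

def decode(cells):
    # Weight <= 1: first-generation codeword; weight >= 2: complement
    # of a first-generation codeword (second generation).
    if sum(cells) <= 1:
        return next(v for v, c in FIRST.items() if c == cells)
    comp = tuple(1 - b for b in cells)
    return next(v for v, c in FIRST.items() if c == comp)

def second_write(cells, value):
    # Rewrite without erasing: bits may only flip 0 -> 1.
    if decode(cells) == value:
        return cells
    new = tuple(1 - b for b in FIRST[value])
    assert all(o <= n for o, n in zip(cells, new))   # monotone update
    return new

cells = FIRST['10']                  # first write stores '10' in 3 cells
cells = second_write(cells, '01')    # second write stores '01', no erase
assert decode(cells) == '01'
```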
|
[
{
"version": "v1",
"created": "Sat, 23 Jul 2022 06:00:48 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Jaffer",
"Shehbaz",
""
],
[
"Mahdaviani",
"Kaveh",
""
],
[
"Schroeder",
"Bianca",
""
]
] |
new_dataset
| 0.999013 |
2208.00775
|
Zheng Tong
|
Zheng Tong, Tao Ma, Ju Huyan, Weiguang Zhang
|
Pavementscapes: a large-scale hierarchical image dataset for asphalt
pavement damage segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pavement damage segmentation has benefited enormously from deep learning.
However, the scarcity of current public datasets limits the potential
exploration of deep learning in the application of pavement damage
segmentation. To address this problem, this study proposes Pavementscapes,
a large-scale dataset to develop and evaluate methods for pavement damage
segmentation. Pavementscapes is comprised of 4,000 images with a resolution of
$1024 \times 2048$, which have been recorded in the real-world pavement
inspection projects with 15 different pavements. A total of 8,680 damage
instances are manually labeled with six damage classes at the pixel level. The
statistical study gives a thorough investigation and analysis of the proposed
dataset. The numerical experiments present the top-performing deep neural
networks capable of segmenting pavement damage, which provide the baselines
for the open challenge of pavement inspection. The experimental results also
indicate the existing problems for damage segmentation using deep learning, and
this study provides potential solutions.
|
[
{
"version": "v1",
"created": "Sun, 24 Jul 2022 03:40:27 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Tong",
"Zheng",
""
],
[
"Ma",
"Tao",
""
],
[
"Huyan",
"Ju",
""
],
[
"Zhang",
"Weiguang",
""
]
] |
new_dataset
| 0.999909 |
2208.00792
|
Carey Bunks
|
C. Bunks and T. Weyde
|
Jazz Contrafact Detection
|
8 pages, 6 figures, 4 tables
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In jazz, a contrafact is a new melody composed over an existing, but often
reharmonized chord progression. Because reharmonization can introduce a wide
range of variations, detecting contrafacts is a challenging task. This paper
develops a novel vector-space model to represent chord progressions, and uses
it for contrafact detection. The process applies principles from music theory
to reduce the dimensionality of chord space, determine a common key signature
representation, and compute a chordal co-occurrence matrix. The rows of the
matrix form a basis for the vector space in which chord progressions are
represented as piecewise linear functions, and harmonic similarity is evaluated
by computing the membrane area, a novel distance metric. To illustrate our
method's effectiveness, we apply it to the Impro-Visor corpus of 2,612 chord
progressions, and present examples demonstrating its ability to account for
reharmonizations and find contrafacts.
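A simplified sketch of the distance computation: once two progressions are
represented as piecewise linear functions over a common grid, the membrane area
reduces to the area enclosed between the curves; the toy coordinates below
stand in for the paper's chordal co-occurrence basis, and the trapezoidal rule
is exact only where the difference does not change sign within a segment.

```python
def membrane_area(f, g, xs):
    # Area enclosed between two piecewise linear functions f and g
    # sampled on a common grid xs (trapezoidal approximation of the
    # integral of |f - g|).
    d = [abs(a - b) for a, b in zip(f, g)]
    return sum((d[i] + d[i + 1]) / 2.0 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

xs = list(range(9))                 # e.g., bar positions
f = [0, 1, 1, 2, 3, 3, 2, 1, 0]     # progression A (toy coordinates)
g = [0, 1, 2, 2, 3, 4, 2, 1, 0]     # reharmonized progression B (toy)
print(membrane_area(f, g, xs))      # small area => harmonically close
```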
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 12:07:20 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Bunks",
"C.",
""
],
[
"Weyde",
"T.",
""
]
] |
new_dataset
| 0.99931 |
2208.00802
|
Trygve Olav Fossum
|
Trygve Olav Fossum, {\O}ystein Sture, Petter Norgren-Aamot, Ingrid
Myrnes Hansen, Bj{\o}rn Christian Kvisvik, Anne Christine Knag
|
Underwater autonomous mapping and characterization of marine debris in
urban water bodies
|
Read more on https://skarvtech.com
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Marine debris originating from human activity has been accumulating in
underwater environments such as oceans, lakes, and rivers for decades. The
extent, type, and amount of waste are hard to assess as the exact mechanisms of
spread are not understood, yielding unknown consequences for the marine
environment and human health. Methods for detecting and mapping marine debris
are therefore vital in order to gain insight into pollution dynamics, which in
turn can be used to effectively plan and execute physical removal. Using an
autonomous underwater vehicle (AUV), equipped with an underwater hyperspectral
imager (UHI) and stereo-camera, marine debris was autonomously detected, mapped
and quantified in the sheltered bay Store Lungegaardsvann in Bergen, Norway.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 12:36:38 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Fossum",
"Trygve Olav",
""
],
[
"Sture",
"Øystein",
""
],
[
"Norgren-Aamot",
"Petter",
""
],
[
"Hansen",
"Ingrid Myrnes",
""
],
[
"Kvisvik",
"Bjørn Christian",
""
],
[
"Knag",
"Anne Christine",
""
]
] |
new_dataset
| 0.984918 |
2208.00861
|
Florian Weigand
|
Florian Weigand, Andreas H\"ohl, Julian Zeiss, Ulrich Konigorski and
Martin Grimmer
|
Continuous locomotion mode recognition and gait phase estimation based
on a shank-mounted IMU with artificial neural networks
|
Copyright 2022 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To improve the control of wearable robotics for gait assistance, we present
an approach for continuous locomotion mode recognition as well as gait phase
and stair slope estimation based on artificial neural networks that include
time history information. The input features consist exclusively of processed
variables that can be measured with a single shank-mounted inertial measurement
unit. We introduce a wearable device to acquire real-world environment test
data to demonstrate the performance and the robustness of the approach. Mean
absolute error (gait phase, stair slope) and accuracy (locomotion mode) were
determined for steady level walking and steady stair ambulation. Robustness was
assessed using test data from different sensor hardware, sensor fixations,
ambulation environments and subjects. The mean absolute error on the steady
gait test data was 2.0-3.5 % for gait phase estimation and
3.3-3.8{\deg} for stair slope estimation. The accuracy of classifying the
correct locomotion mode on the test data with the utilization of time history
information was between 98.51 % and 99.67 %. Results show high performance
and robustness for continuously predicting gait phase, stair slope and
locomotion mode during steady gait. As hypothesized, time history information
improves the locomotion mode recognition. However, while the gait phase
estimation performed well for untrained transitions between locomotion modes,
our qualitative analysis revealed that it may be beneficial to include
transition data into the training of the neural network to improve the
prediction of the slope and the locomotion mode. Our results suggest that
artificial neural networks could be used for high level control of wearable
lower limb robotics.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 13:45:31 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Weigand",
"Florian",
""
],
[
"Höhl",
"Andreas",
""
],
[
"Zeiss",
"Julian",
""
],
[
"Konigorski",
"Ulrich",
""
],
[
"Grimmer",
"Martin",
""
]
] |
new_dataset
| 0.961147 |
2208.00949
|
Marek Kowalski
|
Stephan J. Garbin, Marek Kowalski, Virginia Estellers, Stanislaw
Szymanowicz, Shideh Rezaeifar, Jingjing Shen, Matthew Johnson, Julien
Valentin
|
VolTeMorph: Realtime, Controllable and Generalisable Animation of
Volumetric Representations
|
18 pages, 21 figures
| null | null | null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent increase in popularity of volumetric representations for scene
reconstruction and novel view synthesis has put renewed focus on animating
volumetric content at high visual quality and in real-time. While implicit
deformation methods based on learned functions can produce impressive results,
they are `black boxes' to artists and content creators, they require large
amounts of training data to generalise meaningfully, and they do not produce
realistic extrapolations outside the training data. In this work we solve these
issues by introducing a volume deformation method which is real-time, easy to
edit with off-the-shelf software and can extrapolate convincingly. To
demonstrate the versatility of our method, we apply it in two scenarios:
physics-based object deformation and telepresence where avatars are controlled
using blendshapes. We also perform thorough experiments showing that our method
compares favourably to both volumetric approaches combined with implicit
deformation and methods based on mesh deformation.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 16:04:38 GMT"
}
] | 2022-08-02T00:00:00 |
[
[
"Garbin",
"Stephan J.",
""
],
[
"Kowalski",
"Marek",
""
],
[
"Estellers",
"Virginia",
""
],
[
"Szymanowicz",
"Stanislaw",
""
],
[
"Rezaeifar",
"Shideh",
""
],
[
"Shen",
"Jingjing",
""
],
[
"Johnson",
"Matthew",
""
],
[
"Valentin",
"Julien",
""
]
] |
new_dataset
| 0.998263 |
2102.05981
|
Abdullah Giray Ya\u{g}l{\i}k\c{c}{\i}
|
Abdullah Giray Ya\u{g}l{\i}k\c{c}{\i}, Minesh Patel, Jeremie S. Kim,
Roknoddin Azizi, Ataberk Olgun, Lois Orosa, Hasan Hassan, Jisung Park,
Konstantinos Kanellopoulos, Taha Shahroodi, Saugata Ghose, Onur Mutlu
|
BlockHammer: Preventing RowHammer at Low Cost by Blacklisting
Rapidly-Accessed DRAM Rows
|
A shorter version of this work is to appear at the 27th IEEE
International Symposium on High-Performance Computer Architecture (HPCA-27),
2021
| null |
10.1109/HPCA51647.2021.00037
| null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Aggressive memory density scaling causes modern DRAM devices to suffer from
RowHammer, a phenomenon where rapidly activating a DRAM row can cause bit-flips
in physically-nearby rows. Recent studies demonstrate that modern DRAM chips,
including chips previously marketed as RowHammer-safe, are even more vulnerable
to RowHammer than older chips. Many works show that attackers can exploit
RowHammer bit-flips to reliably mount system-level attacks to escalate
privilege and leak private data. Therefore, it is critical to ensure
RowHammer-safe operation on all DRAM-based systems. Unfortunately,
state-of-the-art RowHammer mitigation mechanisms face two major challenges.
First, they incur increasingly higher performance and/or area overheads when
applied to more vulnerable DRAM chips. Second, they require either proprietary
information about or modifications to the DRAM chip design. In this paper, we
show that it is possible to efficiently and scalably prevent RowHammer
bit-flips without knowledge of or modification to DRAM internals. We introduce
BlockHammer, a low-cost, effective, and easy-to-adopt RowHammer mitigation
mechanism that overcomes the two key challenges by selectively throttling
memory accesses that could otherwise cause RowHammer bit-flips. The key idea of
BlockHammer is to (1) track row activation rates using area-efficient Bloom
filters and (2) use the tracking data to ensure that no row is ever activated
rapidly enough to induce RowHammer bit-flips. By doing so, BlockHammer (1)
makes it impossible for a RowHammer bit-flip to occur and (2) greatly reduces a
RowHammer attack's impact on the performance of co-running benign applications.
Compared to state-of-the-art RowHammer mitigation mechanisms, BlockHammer
provides competitive performance and energy when the system is not under a
RowHammer attack and significantly better performance and energy when the
system is under attack.
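A Python sketch of the core tracking idea -- a counting Bloom filter that
over-approximates per-row activation counts and blacklists rows whose estimate
crosses a threshold; window management, the dual-filter rotation, and the
derivation of the threshold from the chip's RowHammer tolerance are omitted,
and all parameters below are placeholders.

```python
import hashlib

class ActivationFilter:
    # Counting Bloom filter that over-approximates per-row activation
    # counts; rows whose estimated count crosses the threshold are
    # blacklisted and their activations must be delayed (throttled).
    def __init__(self, size=4096, hashes=4, threshold=1024):
        self.counters = [0] * size
        self.size, self.k, self.threshold = size, hashes, threshold

    def _slots(self, row):
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{row}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def activate(self, row):
        slots = list(self._slots(row))
        # Count-min style estimate never under-counts a row.
        if min(self.counters[s] for s in slots) >= self.threshold:
            return False                  # throttle: delay this activation
        for s in slots:
            self.counters[s] += 1
        return True
```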
|
[
{
"version": "v1",
"created": "Thu, 11 Feb 2021 12:56:45 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 12:48:44 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Yağlıkçı",
"Abdullah Giray",
""
],
[
"Patel",
"Minesh",
""
],
[
"Kim",
"Jeremie S.",
""
],
[
"Azizi",
"Roknoddin",
""
],
[
"Olgun",
"Ataberk",
""
],
[
"Orosa",
"Lois",
""
],
[
"Hassan",
"Hasan",
""
],
[
"Park",
"Jisung",
""
],
[
"Kanellopoulos",
"Konstantinos",
""
],
[
"Shahroodi",
"Taha",
""
],
[
"Ghose",
"Saugata",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.989287 |
2111.10367
|
Suwon Shon
|
Suwon Shon, Ankita Pasad, Felix Wu, Pablo Brusco, Yoav Artzi, Karen
Livescu, Kyu J. Han
|
SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation
on Natural Speech
|
Updated preprint for SLUE Benchmark v0.2; Toolkit link
https://github.com/asappresearch/slue-toolkit
| null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Progress in speech processing has been facilitated by shared datasets and
benchmarks. Historically these have focused on automatic speech recognition
(ASR), speaker identification, or other lower-level tasks. Interest has been
growing in higher-level spoken language understanding tasks, including using
end-to-end models, but there are fewer annotated datasets for such tasks. At
the same time, recent work shows the possibility of pre-training generic
representations and then fine-tuning for several tasks using relatively little
labeled data. We propose to create a suite of benchmark tasks for Spoken
Language Understanding Evaluation (SLUE) consisting of limited-size labeled
training sets and corresponding evaluation sets. This resource would allow the
research community to track progress, evaluate pre-trained representations for
higher-level tasks, and study open questions such as the utility of pipeline
versus end-to-end approaches. We present the first phase of the SLUE benchmark
suite, consisting of named entity recognition, sentiment analysis, and ASR on
the corresponding datasets. We focus on naturally produced (not read or
synthesized) speech, and freely available datasets. We provide new
transcriptions and annotations on subsets of the VoxCeleb and VoxPopuli
datasets, evaluation metrics and results for baseline models, and an
open-source toolkit to reproduce the baselines and evaluate new models.
|
[
{
"version": "v1",
"created": "Fri, 19 Nov 2021 18:59:23 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 02:53:35 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Jul 2022 15:52:35 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Shon",
"Suwon",
""
],
[
"Pasad",
"Ankita",
""
],
[
"Wu",
"Felix",
""
],
[
"Brusco",
"Pablo",
""
],
[
"Artzi",
"Yoav",
""
],
[
"Livescu",
"Karen",
""
],
[
"Han",
"Kyu J.",
""
]
] |
new_dataset
| 0.99277 |
2202.13830
|
Patrik Christen
|
Patrik Christen
|
Curb Your Self-Modifying Code
|
6 pages, 1 figure
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Self-modifying code has many intriguing applications in a broad range of
fields including software security, artificial general intelligence, and
open-ended evolution. Having control over self-modifying code, however, is
still an open challenge since it is a balancing act between providing as much
freedom as possible so as not to limit possible solutions, while at the same
time imposing restriction to avoid security issues and invalid code or
solutions. In the present study, I provide a prototype implementation of how
one might curb self-modifying code by introducing control mechanisms for code
modifications within specific regions and for specific transitions between code
and data. I show that this is possible to achieve with the so-called allagmatic
method - a framework to formalise, model, implement, and interpret complex
systems inspired by Gilbert Simondon's philosophy of individuation and Alfred
North Whitehead's philosophy of organism. Thereby, the allagmatic method serves
as guidance for self-modification based on concepts defined in a metaphysical
framework. I conclude that the allagmatic method seems to be a suitable
framework for control mechanisms in self-modifying code and that there are
intriguing analogies between the presented control mechanisms and gene
regulation.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 14:39:34 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 14:17:55 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Christen",
"Patrik",
""
]
] |
new_dataset
| 0.992112 |
2203.07628
|
Wenkang Shan
|
Wenkang Shan, Zhenhua Liu, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Wen
Gao
|
P-STMO: Pre-Trained Spatial Temporal Many-to-One Model for 3D Human Pose
Estimation
|
ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel Pre-trained Spatial Temporal Many-to-One
(P-STMO) model for 2D-to-3D human pose estimation task. To reduce the
difficulty of capturing spatial and temporal information, we divide this task
into two stages: pre-training (Stage I) and fine-tuning (Stage II). In Stage I,
a self-supervised pre-training sub-task, termed masked pose modeling, is
proposed. The human joints in the input sequence are randomly masked in both
spatial and temporal domains. A general form of denoising auto-encoder is
exploited to recover the original 2D poses and the encoder is capable of
capturing spatial and temporal dependencies in this way. In Stage II, the
pre-trained encoder is loaded into the STMO model and fine-tuned. The encoder is
followed by a many-to-one frame aggregator to predict the 3D pose in the
current frame. Especially, an MLP block is utilized as the spatial feature
extractor in STMO, which yields better performance than other methods. In
addition, a temporal downsampling strategy is proposed to diminish data
redundancy. Extensive experiments on two benchmarks show that our method
outperforms state-of-the-art methods with fewer parameters and less
computational overhead. For example, our P-STMO model achieves 42.1mm MPJPE on
Human3.6M dataset when using 2D poses from CPN as inputs. Meanwhile, it brings
a 1.5-7.1 times speedup over state-of-the-art methods. Code is available at
https://github.com/paTRICK-swk/P-STMO.
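A minimal PyTorch sketch of the masked pose modeling pre-text task as
described -- randomly masking whole frames (temporal) and individual joints
(spatial) from a sequence of 2D poses; the mask ratios here are placeholders,
not the paper's settings.

```python
import torch

def mask_pose_sequence(poses, p_spatial=0.1, p_temporal=0.1):
    # Masked pose modeling (sketch): poses has shape (T, J, 2).
    # Temporal masking zeroes whole frames; spatial masking zeroes
    # individual joints. The denoising auto-encoder is trained to
    # recover the original 2D poses from this corrupted input.
    T, J, _ = poses.shape
    frame_mask = torch.rand(T) < p_temporal
    joint_mask = torch.rand(T, J) < p_spatial
    masked = poses.clone()
    masked[frame_mask] = 0.0
    masked[joint_mask] = 0.0
    return masked, frame_mask, joint_mask

poses = torch.randn(243, 17, 2)   # e.g., 243 frames, 17 joints
masked, fm, jm = mask_pose_sequence(poses)
assert masked.shape == poses.shape
```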
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 04:00:59 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 03:59:40 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Shan",
"Wenkang",
""
],
[
"Liu",
"Zhenhua",
""
],
[
"Zhang",
"Xinfeng",
""
],
[
"Wang",
"Shanshe",
""
],
[
"Ma",
"Siwei",
""
],
[
"Gao",
"Wen",
""
]
] |
new_dataset
| 0.996504 |
2204.05208
|
David Beauchemin
|
David Beauchemin and Julien Laumonier and Yvan Le Ster and Marouane
Yassine
|
"FIJO": a French Insurance Soft Skill Detection Dataset
|
Accepted in CAIA 2022 https://caiac.pubpub.org/pub/72bhunl6
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Understanding the evolution of job requirements is becoming more important
for workers, companies and public organizations to follow the fast
transformation of the employment market. Fortunately, recent natural language
processing (NLP) approaches allow for the development of methods to
automatically extract information from job ads and recognize skills more
precisely. However, these efficient approaches need a large amount of annotated
data from the studied domain, which is difficult to access, mainly due to
intellectual property concerns. This article proposes a new public dataset, FIJO,
containing insurance job offers, including many soft skill annotations. To
understand the potential of this dataset, we detail some characteristics and
some limitations. Then, we present the results of skill detection algorithms
using a named entity recognition approach and show that transformers-based
models have good token-wise performances on this dataset. Lastly, we analyze
some errors made by our best model to emphasize the difficulties that may arise
when applying NLP approaches.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 15:54:22 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2022 20:58:38 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Beauchemin",
"David",
""
],
[
"Laumonier",
"Julien",
""
],
[
"Ster",
"Yvan Le",
""
],
[
"Yassine",
"Marouane",
""
]
] |
new_dataset
| 0.999764 |
2204.05880
|
Daniel Gehrig
|
Florian Mahlknecht, Daniel Gehrig, Jeremy Nash, Friedrich M.
Rockenbauer, Benjamin Morrell, Jeff Delaune, Davide Scaramuzza
|
Exploring Event Camera-based Odometry for Planetary Robots
| null |
IEEE Robotics and Automation Letters (RA-L), 2022
| null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to their resilience to motion blur and high robustness in low-light and
high dynamic range conditions, event cameras are poised to become enabling
sensors for vision-based exploration on future Mars helicopter missions.
However, existing event-based visual-inertial odometry (VIO) algorithms either
suffer from high tracking errors or are brittle, since they cannot cope with
significant depth uncertainties caused by an unforeseen loss of tracking or
other effects. In this work, we introduce EKLT-VIO, which addresses both
limitations by combining a state-of-the-art event-based frontend with a
filter-based backend. This makes it both accurate and robust to uncertainties,
outperforming event- and frame-based VIO algorithms on challenging benchmarks
by 32%. In addition, we demonstrate accurate performance in hover-like
conditions (outperforming existing event-based methods) as well as high
robustness in newly collected Mars-like and high-dynamic-range sequences, where
existing frame-based methods fail. In doing so, we show that event-based VIO is
the way forward for vision-based exploration on Mars.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 15:19:50 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 14:26:36 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Mahlknecht",
"Florian",
""
],
[
"Gehrig",
"Daniel",
""
],
[
"Nash",
"Jeremy",
""
],
[
"Rockenbauer",
"Friedrich M.",
""
],
[
"Morrell",
"Benjamin",
""
],
[
"Delaune",
"Jeff",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.959851 |
2204.05972
|
Hadi Jahanshahi
|
Hadi Jahanshahi, Mucahit Cevik
|
S-DABT: Schedule and Dependency-Aware Bug Triage in Open-Source Bug
Tracking Systems
|
An extension of "DABT: A Dependency-aware Bug Triaging Method"
arXiv:2104.12744
|
Information and Software Technology, 28 July 2022, 107025
|
10.1016/j.infsof.2022.107025
| null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fixing bugs in a timely manner lowers various potential costs in software
maintenance. However, manual bug fixing scheduling can be time-consuming,
cumbersome, and error-prone. In this paper, we propose the Schedule and
Dependency-aware Bug Triage (S-DABT), a bug triaging method that utilizes
integer programming and machine learning techniques to assign bugs to suitable
developers. Unlike prior works that largely focus on a single component of the
bug reports, our approach takes into account the textual data, bug fixing
costs, and bug dependencies. We further incorporate the schedule of developers
in our formulation to have a more comprehensive model for this multifaceted
problem. As a result, this complete formulation considers developers' schedules
and the blocking effects of the bugs while covering the most significant
aspects of the previously proposed methods. Our numerical study on four
open-source software systems, namely, EclipseJDT, LibreOffice, GCC, and
Mozilla, shows that taking into account the schedules of the developers
decreases the average bug fixing times. We find that S-DABT leads to a high
level of developer utilization through a fair distribution of the tasks among
the developers and efficient use of the free spots in their schedules. Via the
simulation of the issue tracking system, we also show how incorporating the
schedule in the model formulation reduces the bug fixing time, improves the
assignment accuracy, and utilizes the capability of each developer without much
compromise in model run times. We find that S-DABT decreases the complexity
of the bug dependency graph by prioritizing blocking bugs and effectively
reduces the infeasible assignment ratio due to bug dependencies. Consequently,
we recommend considering developers' schedules while automating bug triage.
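  As a rough sketch of the integer-programming core behind such triage models,
the following PuLP example assigns bugs to developers under schedule
capacities; the costs, capacities, and the omission of dependency and
scheduling details are placeholder assumptions, not the paper's exact
formulation.

import pulp

bugs = ["b1", "b2", "b3"]
devs = ["d1", "d2"]
cost = {("b1", "d1"): 3, ("b1", "d2"): 5, ("b2", "d1"): 2,
        ("b2", "d2"): 4, ("b3", "d1"): 6, ("b3", "d2"): 1}
capacity = {"d1": 6, "d2": 5}    # free slots in each developer's schedule

prob = pulp.LpProblem("bug_triage", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (bugs, devs), cat="Binary")
prob += pulp.lpSum(cost[b, d] * x[b][d] for b in bugs for d in devs)
for b in bugs:                   # every bug gets exactly one developer
    prob += pulp.lpSum(x[b][d] for d in devs) == 1
for d in devs:                   # respect each developer's schedule
    prob += pulp.lpSum(cost[b, d] * x[b][d] for b in bugs) <= capacity[d]
prob.solve()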
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 17:36:43 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Jahanshahi",
"Hadi",
""
],
[
"Cevik",
"Mucahit",
""
]
] |
new_dataset
| 0.998702 |
2205.02908
|
Sebastian J\"ager
|
Sebastian J\"ager, Jessica Greene, Max Jakob, Ruben Korenke, Tilman
Santarius, Felix Biessmann
|
GreenDB: Toward a Product-by-Product Sustainability Database
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The production, shipping, usage, and disposal of consumer goods have a
substantial impact on greenhouse gas emissions and the depletion of resources.
Modern retail platforms rely heavily on Machine Learning (ML) for their search
and recommender systems. Thus, ML can potentially support efforts towards more
sustainable consumption patterns, for example, by accounting for sustainability
aspects in product search or recommendations. However, leveraging ML potential
for reaching sustainability goals requires data on sustainability.
Unfortunately, no open and publicly available database integrates
sustainability information on a product-by-product basis. In this work, we
present the GreenDB, which fills this gap. Based on search logs of millions of
users, we prioritize which products users care about most. The GreenDB schema
extends the well-known schema.org Product definition and can be readily
integrated into existing product catalogs to improve sustainability information
available for search and recommendation experiences. We present our proof of
concept implementation of a scraping system that creates the GreenDB dataset.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 20:24:16 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 09:02:02 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Jäger",
"Sebastian",
""
],
[
"Greene",
"Jessica",
""
],
[
"Jakob",
"Max",
""
],
[
"Korenke",
"Ruben",
""
],
[
"Santarius",
"Tilman",
""
],
[
"Biessmann",
"Felix",
""
]
] |
new_dataset
| 0.984219 |
2205.06118
|
Manikandan Ravikiran
|
Manikandan Ravikiran, Bharathi Raja Chakravarthi, Anand Kumar
Madasamy, Sangeetha Sivanesan, Ratnavel Rajalakshmi, Sajeetha Thavareesan,
Rahul Ponnusamy, Shankar Mahadevan
|
Findings of the Shared Task on Offensive Span Identification from
Code-Mixed Tamil-English Comments
|
System Description of Shared Task
https://competitions.codalab.org/competitions/36395
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Offensive content moderation is vital in social media platforms to support
healthy online discussions. However, moderation in code-mixed Dravidian
languages is limited to classifying whole comments, without identifying the
part that contributes to offensiveness. This limitation is primarily due to the lack
of annotated data for offensive spans. Accordingly, in this shared task, we
provide Tamil-English code-mixed social comments with offensive spans. This
paper outlines the released dataset, the methods used, and the results of the
submitted systems.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 14:31:57 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Ravikiran",
"Manikandan",
""
],
[
"Chakravarthi",
"Bharathi Raja",
""
],
[
"Madasamy",
"Anand Kumar",
""
],
[
"Sivanesan",
"Sangeetha",
""
],
[
"Rajalakshmi",
"Ratnavel",
""
],
[
"Thavareesan",
"Sajeetha",
""
],
[
"Ponnusamy",
"Rahul",
""
],
[
"Mahadevan",
"Shankar",
""
]
] |
new_dataset
| 0.995343 |
2207.01129
|
Shin-Ichi Nakano
|
Shin-ichi Nakano
|
A Gray Code of Ordered Trees
|
14 pages
| null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
A combinatorial Gray code for a set of combinatorial objects is a sequence of
all combinatorial objects in the set so that each object is derived from the
preceding object by changing a small part.
In this paper we design a Gray code for ordered trees with n vertices such
that each ordered tree is derived from the preceding ordered tree by removing a
leaf and then appending a leaf elsewhere. Thus each change is a single
remove-and-append of a leaf, which is the minimum possible.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 22:00:09 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jul 2022 00:13:20 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Jul 2022 22:34:19 GMT"
},
{
"version": "v4",
"created": "Sat, 23 Jul 2022 03:54:04 GMT"
},
{
"version": "v5",
"created": "Thu, 28 Jul 2022 19:56:25 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Nakano",
"Shin-ichi",
""
]
] |
new_dataset
| 0.999364 |
2207.02335
|
Manuel Alejandro Rodriguez Rivera
|
M. A. Rodriguez, H. AlMarzouqi and P. Liatsis (Department of
Electrical Engineering and Computer Science, Khalifa University)
|
Multi-Label Retinal Disease Classification using Transformers
|
13 pages, 4 figures, 12 tables. Submitted to IEEE Journal of
Biomedical and Health Informatics. Dataset:
https://data.mendeley.com/datasets/pc4mb3h8hz/1
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Early detection of retinal diseases is one of the most important means of
preventing partial or permanent blindness in patients. In this research, a
novel multi-label classification system is proposed for the detection of
multiple retinal diseases, using fundus images collected from a variety of
sources. First, a new multi-label retinal disease dataset, the MuReD dataset,
is constructed, using a number of publicly available datasets for fundus
disease classification. Next, a sequence of post-processing steps is applied to
ensure the quality of the image data and the range of diseases, present in the
dataset. For the first time in fundus multi-label disease classification, a
transformer-based model optimized through extensive experimentation is used for
image analysis and decision making. Numerous experiments are performed to
optimize the configuration of the proposed system. It is shown that the
approach performs better than state-of-the-art works on the same task by 7.9%
and 8.1% in terms of AUC score for disease detection and disease
classification, respectively. The obtained results further support the
potential applications of transformer-based architectures in the medical
imaging field.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 22:06:52 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2022 15:19:45 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Jul 2022 19:35:22 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Rodriguez",
"M. A.",
"",
"Department of\n Electrical Engineering and Computer Science, Khalifa University"
],
[
"AlMarzouqi",
"H.",
"",
"Department of\n Electrical Engineering and Computer Science, Khalifa University"
],
[
"Liatsis",
"P.",
"",
"Department of\n Electrical Engineering and Computer Science, Khalifa University"
]
] |
new_dataset
| 0.999123 |
2207.10562
|
Remi Desmartin
|
Remi Desmartin, Grant Passmore, Ekaterina Komendantskaya, Matthew
Daggitt
|
CheckINN: Wide Range Neural Network Verification in Imandra (Extended)
|
PPDP 2022, 24th International Symposium on Principles and Practice of
Declarative Programming
| null | null | null |
cs.LO cs.AI cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural networks are increasingly relied upon as components of complex
safety-critical systems such as autonomous vehicles. There is high demand for
tools and methods that embed neural network verification in a larger
verification cycle. However, neural network verification is difficult due to a
wide range of verification properties of interest, each typically only amenable
to verification in specialised solvers. In this paper, we show how Imandra, a
functional programming language and a theorem prover originally designed for
verification, validation and simulation of financial infrastructure, can offer a
holistic infrastructure for neural network verification. We develop a novel
library CheckINN that formalises neural networks in Imandra, and covers
different important facets of neural network verification.
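  CheckINN formalises networks in Imandra's own functional language; purely as
an illustration of the style of property being verified, here is a hedged
Python sketch that states a simple output bound for a tiny hand-built network
and checks it by sampling (a prover such as Imandra would establish it
symbolically for all inputs).

import itertools

def relu(v):
    return [max(0.0, x) for x in v]

def dense(w, b, v):
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(w, b)]

def net(x):  # a tiny fixed 2-2-1 network, weights chosen by hand
    h = relu(dense([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0], x))
    return dense([[1.0, 1.0]], [0.0], h)[0]

# Property: the output is non-negative on the unit box.
grid = [i / 10 for i in range(11)]
assert all(net([a, b]) >= 0.0 for a, b in itertools.product(grid, grid))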
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 16:06:58 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2022 18:15:15 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Desmartin",
"Remi",
""
],
[
"Passmore",
"Grant",
""
],
[
"Komendantskaya",
"Ekaterina",
""
],
[
"Daggitt",
"Matthew",
""
]
] |
new_dataset
| 0.99537 |
2207.14436
|
Seung Yeon Shin
|
Seung Yeon Shin, Sungwon Lee, and Ronald M. Summers
|
Graph-Based Small Bowel Path Tracking with Cylindrical Constraints
|
Published in: 2022 IEEE 19th International Symposium on Biomedical
Imaging (ISBI)
| null |
10.1109/ISBI52829.2022.9761423
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new graph-based method for small bowel path tracking based on
cylindrical constraints. A distinctive characteristic of the small bowel
compared to other organs is the contact between parts of itself along its
course, which makes the path tracking difficult together with the indistinct
appearance of the wall. It causes the tracked path to easily cross over the
walls when relying on low-level features like the wall detection. To circumvent
this, a series of cylinders that are fitted along the course of the small bowel
are used to guide the tracking to more reliable directions. It is implemented
as soft constraints using a new cost function. The proposed method is evaluated
against ground-truth paths that are all connected from start to end of the
small bowel for 10 abdominal CT scans. The proposed method showed clear
improvements compared to the baseline method in tracking the path without
making an error. Improvements of 6.6% and 17.0%, in terms of the tracked
length, were observed for two different settings related to the small bowel
segmentation.
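  The cylindrical soft constraint can be sketched as a cost term that is free
inside a fitted cylinder and grows quadratically outside it; the weight and
the wall-detection cost below are illustrative placeholders, not the paper's
exact cost function.

import numpy as np

def cylinder_cost(p, axis_a, axis_b, radius, w=10.0):
    # Penalty for point p relative to a cylinder fitted along a -> b:
    # zero inside the cylinder, quadratic outside (a soft constraint).
    a, b = np.asarray(axis_a, float), np.asarray(axis_b, float)
    d = (b - a) / np.linalg.norm(b - a)
    r = np.asarray(p, float) - a
    radial = np.linalg.norm(r - np.dot(r, d) * d)   # distance to the axis
    return w * max(0.0, radial - radius) ** 2

def edge_cost(p, q, wall_cost, cyl):
    # wall_cost: user-supplied low-level cost from the wall detection;
    # cyl: (axis_a, axis_b, radius) of the locally fitted cylinder.
    return wall_cost(p, q) + cylinder_cost(q, *cyl)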
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 02:17:56 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Shin",
"Seung Yeon",
""
],
[
"Lee",
"Sungwon",
""
],
[
"Summers",
"Ronald M.",
""
]
] |
new_dataset
| 0.986258 |
2207.14444
|
Theodore Steiner
|
Theo Steiner and Rui Zhang
|
Code Comment Inconsistency Detection with BERT and Longformer
|
8 pages, 5 tables, 4 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Comments, or natural language descriptions of source code, are standard
practice among software developers. By communicating important aspects of the
code such as functionality and usage, comments help with software project
maintenance. However, when the code is modified without an accompanying
correction to the comment, an inconsistency between the comment and code can
arise, which opens up the possibility for developer confusion and bugs. In this
paper, we propose two models based on BERT (Devlin et al., 2019) and Longformer
(Beltagy et al., 2020) to detect such inconsistencies in a natural language
inference (NLI) context. Through an evaluation on a previously established
corpus of comment-method pairs both during and after code changes, we
demonstrate that our models outperform multiple baselines and yield comparable
results to the state-of-the-art models that exclude linguistic and lexical
features. We further discuss ideas for future research in using pretrained
language models for both inconsistency detection and automatic comment
updating.
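  A minimal sketch of the NLI-style setup with Hugging Face Transformers is
shown below; the base checkpoint and the label mapping are assumptions, and
the classification head would still need fine-tuning on labelled comment-code
pairs before its predictions mean anything.

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # 0: consistent, 1: inconsistent

comment = "Returns the list sorted in ascending order."
code = "def f(xs): return sorted(xs, reverse=True)"
inputs = tok(comment, code, truncation=True, return_tensors="pt")
with torch.no_grad():
    label = model(**inputs).logits.argmax(dim=-1).item()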
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 02:43:51 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Steiner",
"Theo",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.962377 |
2207.14447
|
Huihui Fang Miss
|
Huihui Fang, Fei Li, Huazhu Fu, Junde Wu, Xiulan Zhang, Yanwu Xu
|
Dataset and Evaluation algorithm design for GOALS Challenge
|
8 pages, 3 figures, OMIA9 (MICCAI 2022) workshop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Glaucoma causes irreversible vision loss due to damage to the optic nerve,
and there is no cure for glaucoma. OCT imaging is an essential
technique for assessing glaucomatous damage, since it aids in quantifying fundus
structures. To promote the research of AI technology in the field of
OCT-assisted diagnosis of glaucoma, we held a Glaucoma OCT Analysis and Layer
Segmentation (GOALS) Challenge in conjunction with the International Conference
on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2022 to
provide data and corresponding annotations for researchers studying layer
segmentation from OCT images and the classification of glaucoma. This paper
describes the released 300 circumpapillary OCT images, the baselines of the two
sub-tasks, and the evaluation methodology. The GOALS Challenge is accessible at
https://aistudio.baidu.com/aistudio/competition/detail/230.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 02:51:26 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Fang",
"Huihui",
""
],
[
"Li",
"Fei",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Wu",
"Junde",
""
],
[
"Zhang",
"Xiulan",
""
],
[
"Xu",
"Yanwu",
""
]
] |
new_dataset
| 0.99958 |
2207.14460
|
Xinjie Yao
|
Xinjie Yao, Ji Zhang, Jean Oh
|
RCA: Ride Comfort-Aware Visual Navigation via Self-Supervised Learning
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Under shared autonomy, wheelchair users expect vehicles to provide safe and
comfortable rides while following users' high-level navigation plans. To find
such a path, vehicles negotiate with different terrains and assess their
traversal difficulty. Most prior works model surroundings either through
geometric representations or semantic classifications, which do not reflect
perceived motion intensity and ride comfort in downstream navigation tasks. We
propose to model ride comfort explicitly in traversability analysis using
proprioceptive sensing. We develop a self-supervised learning framework to
predict traversability costmap from first-person-view images by leveraging
vehicle states as training signals. Our approach estimates how traversal would
feel to the vehicle based on terrain appearance. We then show that our
navigation system provides human-preferred ride comfort through robot
experiments together with a human evaluation study.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 03:38:41 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Yao",
"Xinjie",
""
],
[
"Zhang",
"Ji",
""
],
[
"Oh",
"Jean",
""
]
] |
new_dataset
| 0.973754 |
2207.14473
|
William Chen
|
Chih-Chen Chen, William Chen
|
Benchmarking Azerbaijani Neural Machine Translation
|
Published in The International Conference and Workshop on
Agglutinative Language Technologies as a Challenge for NLP (ALTNLP)
https://www.altnlp.org
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Little research has been done on Neural Machine Translation (NMT) for
Azerbaijani. In this paper, we benchmark the performance of Azerbaijani-English
NMT systems on a range of techniques and datasets. We evaluate which
segmentation techniques work best on Azerbaijani translation and benchmark the
performance of Azerbaijani NMT models across several domains of text. Our
results show that while Unigram segmentation improves NMT performance and
Azerbaijani translation models scale better with dataset quality than quantity,
cross-domain generalization remains a challenge.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 04:29:43 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Chen",
"Chih-Chen",
""
],
[
"Chen",
"William",
""
]
] |
new_dataset
| 0.999105 |
2207.14483
|
Xinglei Dou
|
Lei Liu, Xinglei Dou
|
QuCloud+: A Holistic Qubit Mapping Scheme for Single/Multi-programming
on 2D/3D NISQ Quantum Computers
|
arXiv admin note: text overlap with arXiv:2004.12854
| null | null | null |
cs.AR quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Qubit mapping is essential to quantum computing's fidelity and quantum
computers' resource utilization. Yet, existing qubit mapping schemes face
several challenges (e.g., crosstalk, SWAP overheads, diverse device topologies),
leading to qubit resource under-utilization, high error rates, and low
fidelity in computing results. This paper presents QuCloud+, a new qubit
mapping scheme capable of handling these challenges. QuCloud+ has several new
designs. (1) QuCloud+ enables multi-programming quantum computing on quantum
chips with 2D/3D topology. (2) It partitions physical qubits for concurrent
quantum programs with the crosstalk-aware community detection technique and
further allocates qubits according to qubit degree, improving fidelity and
resource utilization. (3) QuCloud+ includes an X-SWAP mechanism that avoids
SWAPs with high crosstalk errors and enables inter-program SWAPs to reduce the
SWAP overheads. (4) QuCloud+ schedules concurrent quantum programs to be mapped
and executed based on estimated fidelity for the best practice. QuCloud+
outperforms the previous multi-programming work on various devices by 6.84% on
fidelity and saves 40.9% additional gates required during mapping transition.
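  For the crosstalk-aware partitioning step, a hedged sketch with networkx
community detection on a toy coupling graph might look as follows (QuCloud+'s
actual detection and degree-based allocation are more involved).

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# edges: (physical qubit, physical qubit, weight favouring low crosstalk)
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 0.9), (2, 3, 0.2),
                           (3, 4, 1.0), (4, 5, 0.8), (5, 0, 0.1)])
partitions = greedy_modularity_communities(G, weight="weight")
# each community is a candidate qubit region for one concurrent program
print([sorted(c) for c in partitions])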
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 05:19:56 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Liu",
"Lei",
""
],
[
"Dou",
"Xinglei",
""
]
] |
new_dataset
| 0.997855 |
2207.14498
|
Taorong Liu
|
Taorong Liu, Liang Liao, Zheng Wang, Shin'ichi Satoh
|
Reference-Guided Texture and Structure Inference for Image Inpainting
|
IEEE International Conference on Image Processing(ICIP 2022)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing learning-based image inpainting methods still struggle when
facing complex semantic environments and diverse hole patterns. The prior
information learned from the large scale training data is still insufficient
for these situations. Reference images captured covering the same scenes share
similar texture and structure priors with the corrupted images, which offers
new prospects for the image inpainting tasks. Inspired by this, we first build
a benchmark dataset containing 10K pairs of input and reference images for
reference-guided inpainting. Then we adopt an encoder-decoder structure to
separately infer the texture and structure features of the input image,
accounting for the discrepancy between texture and structure patterns during
inpainting. A feature alignment module is further designed to refine these
features of the input image with the guidance of a reference image. Both
quantitative and qualitative evaluations demonstrate the superiority of our
method over the state-of-the-art methods in terms of completing complex holes.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 06:26:03 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Liu",
"Taorong",
""
],
[
"Liao",
"Liang",
""
],
[
"Wang",
"Zheng",
""
],
[
"Satoh",
"Shin'ichi",
""
]
] |
new_dataset
| 0.996282 |
2207.14507
|
Andrea Montibeller
|
Andrea Montibeller, Cecilia Pasquini, Giulia Boato, Stefano Dell'Anna,
Fernando P\'erez-Gonz\'alez
|
GPU-accelerated SIFT-aided source identification of stabilized videos
| null | null | null | null |
cs.CV cs.MM eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Video stabilization is an in-camera processing commonly applied by modern
acquisition devices. While significantly improving the visual quality of the
resulting videos, it has been shown that such operation typically hinders the
forensic analysis of video signals. In fact, the correct identification of the
acquisition source usually based on Photo Response non-Uniformity (PRNU) is
subject to the estimation of the transformation applied to each frame in the
stabilization phase. A number of techniques have been proposed for dealing with
this problem, which however typically suffer from a high computational burden
due to the grid search in the space of inversion parameters. Our work attempts
to alleviate these shortcomings by exploiting the parallelization capabilities
of Graphics Processing Units (GPUs), typically used for deep learning
applications, in the framework of stabilised frames inversion. Moreover, we
propose to exploit SIFT features to estimate the camera momentum and
identify less stabilized temporal segments, thus enabling a more accurate
identification analysis, and to efficiently initialize the frame-wise parameter
search of consecutive frames. Experiments on a consolidated benchmark dataset
confirm the effectiveness of the proposed approach in reducing the required
computational time and improving the source identification accuracy. The code
is available at https://github.com/AMontiB/GPU-PRNU-SIFT.
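  A hedged sketch of a SIFT-based camera-momentum estimate with OpenCV is
given below: the mean displacement of matched keypoints between consecutive
frames. The ratio-test threshold and the momentum definition are illustrative,
not the paper's exact procedure.

import cv2
import numpy as np

def camera_momentum(frame_a, frame_b):
    # frames: grayscale images from consecutive video positions
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(frame_a, None)
    k2, d2 = sift.detectAndCompute(frame_b, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if not good:
        return float("inf")                 # no reliable motion estimate
    shifts = [np.subtract(k2[m.trainIdx].pt, k1[m.queryIdx].pt)
              for m in good]
    return float(np.linalg.norm(np.mean(shifts, axis=0)))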
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 07:01:31 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Montibeller",
"Andrea",
""
],
[
"Pasquini",
"Cecilia",
""
],
[
"Boato",
"Giulia",
""
],
[
"Dell'Anna",
"Stefano",
""
],
[
"Pérez-González",
"Fernando",
""
]
] |
new_dataset
| 0.975352 |
2207.14745
|
Maria Vittoria Minniti
|
Jessie van Dam, Andreea Tulbure, Maria Vittoria Minniti, Firas
Abi-Farraj, Marco Hutter
|
Collision detection and identification for a legged manipulator
|
International Conference on Intelligent Robots and Systems, IROS
2022, Kyoto, Japan, Oct 23 - Oct. 27, 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To safely deploy legged robots in the real world, it is necessary to provide
them with the ability to reliably detect unexpected contacts and accurately
estimate the corresponding contact force. In this paper, we propose a collision
detection and identification pipeline for a quadrupedal manipulator. We first
introduce an approach to estimate the collision time span based on band-pass
filtering and show that this information is key for obtaining accurate
collision force estimates. We then improve the accuracy of the identified force
magnitude by compensating for model inaccuracies, unmodeled loads, and any
other potential source of quasi-static disturbances acting on the robot. We
validate our framework with extensive hardware experiments in various
scenarios, including trotting and additional unmodeled load on the robot.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 15:37:23 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"van Dam",
"Jessie",
""
],
[
"Tulbure",
"Andreea",
""
],
[
"Minniti",
"Maria Vittoria",
""
],
[
"Abi-Farraj",
"Firas",
""
],
[
"Hutter",
"Marco",
""
]
] |
new_dataset
| 0.983026 |
2207.14757
|
Nicola Messina
|
Nicola Messina, Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi,
Fabrizio Falchi, Giuseppe Amato, Rita Cucchiara
|
ALADIN: Distilling Fine-grained Alignment Scores for Efficient
Image-Text Matching and Retrieval
|
CBMI 2022
| null | null | null |
cs.CV cs.AI cs.CL cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-text matching is gaining a leading role among tasks involving the joint
understanding of vision and language. In the literature, this task is often used
as a pre-training objective to forge architectures able to jointly deal with
images and texts. Nonetheless, it has a direct downstream application:
cross-modal retrieval, which consists in finding images related to a given
query text or vice-versa. Solving this task is of critical importance in
cross-modal search engines. Many recent methods proposed effective solutions to
the image-text matching problem, mostly using recent large vision-language (VL)
Transformer networks. However, these models are often computationally
expensive, especially at inference time. This prevents their adoption in
large-scale cross-modal retrieval scenarios, where results should be provided
to the user almost instantaneously. In this paper, we propose to fill in the
gap between effectiveness and efficiency by proposing an ALign And DIstill
Network (ALADIN). ALADIN first produces highly effective scores by aligning
images and texts at a fine-grained level. Then, it learns a shared embedding space -
where an efficient kNN search can be performed - by distilling the relevance
scores obtained from the fine-grained alignments. We obtained remarkable
results on MS-COCO, showing that our method can compete with state-of-the-art
VL Transformers while being almost 90 times faster. The code for reproducing
our results is available at https://github.com/mesnico/ALADIN.
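  The align-and-distill idea can be sketched as training a cheap dot-product
score to mimic the fine-grained alignment scores, so retrieval reduces to kNN
search in the shared space; the MSE loss over in-batch score matrices is an
assumption for illustration, not the paper's exact objective.

import torch
import torch.nn.functional as F

def distill_loss(img_emb, txt_emb, fine_scores):
    # img_emb, txt_emb: (B, D) embeddings in the shared space;
    # fine_scores: (B, B) scores from the fine-grained alignment head.
    coarse = img_emb @ txt_emb.t()                   # kNN-searchable score
    return F.mse_loss(coarse, fine_scores.detach())  # teacher is frozen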
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 16:01:48 GMT"
}
] | 2022-08-01T00:00:00 |
[
[
"Messina",
"Nicola",
""
],
[
"Stefanini",
"Matteo",
""
],
[
"Cornia",
"Marcella",
""
],
[
"Baraldi",
"Lorenzo",
""
],
[
"Falchi",
"Fabrizio",
""
],
[
"Amato",
"Giuseppe",
""
],
[
"Cucchiara",
"Rita",
""
]
] |
new_dataset
| 0.99104 |
2103.09043
|
Jacob Eeuwe Kooi
|
Jacob E. Kooi and Robert Babu\v{s}ka
|
Inclined Quadrotor Landing using Deep Reinforcement Learning
|
8 pages, 4 figures. Published in IROS 2021
| null |
10.1109/IROS51168.2021.9636096
| null |
cs.RO cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Landing a quadrotor on an inclined surface is a challenging maneuver. The
final state of any inclined landing trajectory is not an equilibrium, which
precludes the use of most conventional control methods. We propose a deep
reinforcement learning approach to design an autonomous landing controller for
inclined surfaces. Using the proximal policy optimization (PPO) algorithm with
sparse rewards and a tailored curriculum learning approach, an inclined landing
policy can be trained in simulation in less than 90 minutes on a standard
laptop. The policy then directly runs on a real Crazyflie 2.1 quadrotor and
successfully performs real inclined landings in a flying arena. A single policy
evaluation takes approximately 2.5 ms, which makes it suitable for a future
embedded implementation on the quadrotor.
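  A minimal Stable-Baselines3 training sketch of the PPO setup is given below;
"InclinedLanding-v0" is a hypothetical environment id standing in for the
authors' simulator, and the defaults do not reflect the paper's sparse-reward
and curriculum design.

import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("InclinedLanding-v0")      # hypothetical custom environment
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)    # would be staged via a curriculum
model.save("inclined_landing_policy")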
|
[
{
"version": "v1",
"created": "Tue, 16 Mar 2021 13:22:51 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2022 13:06:49 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Kooi",
"Jacob E.",
""
],
[
"Babuška",
"Robert",
""
]
] |
new_dataset
| 0.980119 |
2103.12826
|
Zachary Kingston
|
Zachary Kingston and Lydia E. Kavraki
|
Robowflex: Robot Motion Planning with MoveIt Made Easy
|
7 pages, 8 figures. Accepted at the 2022 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS 2022). Software available
at https://github.com/KavrakiLab/robowflex
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Robowflex is a software library for robot motion planning in industrial and
research applications, leveraging the popular MoveIt library and Robot
Operating System (ROS) middleware. Robowflex provides an augmented API for
crafting and manipulating motion planning queries within a single program,
making motion planning with MoveIt easy. Robowflex's high-level API simplifies
many common use-cases while still providing low-level access to the MoveIt
library when needed. Robowflex is particularly useful for 1) developing new
motion planners, 2) evaluating motion planners, and 3) complex problems that
use motion planning as a subroutine (e.g., task and motion planning). Robowflex
also provides visualization capabilities, integrations to other robotics
libraries (e.g., DART and Tesseract), and is complementary to other robotics
packages. With our library, the user does not need to be an expert at ROS or
MoveIt to set up motion planning queries, extract information from results, and
directly interface with a variety of software components. We demonstrate its
efficacy through several example use-cases.
|
[
{
"version": "v1",
"created": "Tue, 23 Mar 2021 20:41:20 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2022 20:02:28 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Kingston",
"Zachary",
""
],
[
"Kavraki",
"Lydia E.",
""
]
] |
new_dataset
| 0.999599 |
2104.02643
|
David Melhart
|
David Melhart, Antonios Liapis, Georgios N. Yannakakis
|
The Arousal video Game AnnotatIoN (AGAIN) Dataset
|
Published in the IEEE Transactions on Affective Computing (2022).
Available on IEEE Xplore: https://ieeexplore.ieee.org/document/9816018
| null |
10.1109/TAFFC.2022.3188851
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can we model affect in a general fashion, across dissimilar tasks, and to
which degree are such general representations of affect even possible? To
address such questions and enable research towards general affective computing,
this paper introduces The Arousal video Game AnnotatIoN (AGAIN) dataset. AGAIN
is a large-scale affective corpus that features over 1,100 in-game videos (with
corresponding gameplay data) from nine different games, which are annotated for
arousal from 124 participants in a first-person continuous fashion. Even though
AGAIN is created for the purpose of investigating the generality of affective
computing across dissimilar tasks, affect modelling can be studied within each
of its 9 specific interactive games. To the best of our knowledge AGAIN is the
largest -- over 37 hours of annotated video and game logs -- and most diverse
publicly available affective dataset based on games as interactive affect
elicitors.
|
[
{
"version": "v1",
"created": "Tue, 6 Apr 2021 16:27:21 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2022 10:55:15 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Melhart",
"David",
""
],
[
"Liapis",
"Antonios",
""
],
[
"Yannakakis",
"Georgios N.",
""
]
] |
new_dataset
| 0.974804 |
2107.06495
|
Peter Xenopoulos
|
Peter Xenopoulos, Joao Rulff, Claudio Silva
|
ggViz: Accelerating Large-Scale Esports Game Analysis
|
Accepted to CHI Play 2022 Full Papers
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While esports organizations are increasingly adopting practices of
conventional sports teams, such as dedicated analysts and data-driven
decision-making, video-based game review is still the primary mode of game
analysis. In conventional sports, advances in data collection have introduced
systems that allow for sketch-based querying of game situations. However, due
to data limitations, as well as differences in the sport itself, esports has
seen a dearth of such systems. In this paper, we leverage player tracking data
for Counter-Strike: Global Offensive (CSGO) to develop ggViz, a visual
analytics system that allows users to query a large esports data set through
game state sketches to find similar game states. Users are guided to game
states of interest using win probability charts and round icons, and can
summarize collections of states through heatmaps. We motivate our design
through interviews with esports experts to especially address the issue of game
review. We demonstrate ggViz's utility through detailed case studies and expert
interviews with coaches, managers, and analysts from professional esports
teams.
|
[
{
"version": "v1",
"created": "Wed, 14 Jul 2021 05:48:26 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jul 2021 16:47:07 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Jan 2022 05:13:04 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Jul 2022 20:17:22 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Xenopoulos",
"Peter",
""
],
[
"Rulff",
"Joao",
""
],
[
"Silva",
"Claudio",
""
]
] |
new_dataset
| 0.993173 |
2201.00693
|
Hao Peng
|
Hao Peng, Hang Li, Lei Hou, Juanzi Li, Chao Qiao
|
Multimodal Entity Tagging with Multimodal Knowledge Base
|
11 pages, 4 figures
| null | null | null |
cs.IR cs.AI cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
To enhance research on multimodal knowledge base and multimodal information
processing, we propose a new task called multimodal entity tagging (MET) with a
multimodal knowledge base (MKB). We also develop a dataset for the problem
using an existing MKB. In an MKB, there are entities and their associated texts
and images. In MET, given a text-image pair, one uses the information in the
MKB to automatically identify the related entity in the text-image pair. We
solve the task by using the information retrieval paradigm and implement
several baselines using state-of-the-art methods in NLP and CV. We conduct
extensive experiments and make analyses on the experimental results. The
results show that the task is challenging, but current technologies can achieve
relatively high performance. We will release the dataset, code, and models for
future research.
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 15:04:57 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2022 07:56:08 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Peng",
"Hao",
""
],
[
"Li",
"Hang",
""
],
[
"Hou",
"Lei",
""
],
[
"Li",
"Juanzi",
""
],
[
"Qiao",
"Chao",
""
]
] |
new_dataset
| 0.999464 |
2201.03339
|
Jinqi Huang
|
Jinqi Huang, Spyros Stathopoulos, Alex Serb, and Themis Prodromakis
|
NeuroPack: An Algorithm-level Python-based Simulator for
Memristor-empowered Neuro-inspired Computing
| null | null |
10.3389/fnano.2022.851856
| null |
cs.ET cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Emerging two terminal nanoscale memory devices, known as memristors, have
over the past decade demonstrated great potential for implementing energy
efficient neuro-inspired computing architectures. As a result, a wide-range of
technologies have been developed that in turn are described via distinct
empirical models. This diversity of technologies requires the establishment of
versatile tools that can enable designers to translate memristors' attributes
in novel neuro-inspired topologies. In this paper, we present NeuroPack, a
modular, algorithm level Python-based simulation platform that can support
studies of memristor neuro-inspired architectures for performing online
learning or offline classification. The NeuroPack environment is designed with
versatility as a central goal, allowing the user to choose from a variety of
neuron models, learning rules and memristor models. Its hierarchical structure
empowers NeuroPack to predict memristor state changes and the corresponding
neural network behavior across a variety of design decisions and user
parameters options. The use of NeuroPack is demonstrated herein via an
application example of performing handwritten digit classification with the
MNIST dataset and an existing empirical model for metal-oxide memristors.
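  The kind of loop such algorithm-level simulators run can be sketched as a
spiking neuron driving a memristive weight update; the device rule below is a
generic placeholder, not one of NeuroPack's fitted empirical models.

def memristor_update(R, spike_pre, spike_post, dR=50.0,
                     R_min=1e3, R_max=1e5):
    # Placeholder device rule: coincident spikes potentiate the synapse
    # (lower resistance), lone spikes depress it.
    if spike_pre and spike_post:
        return max(R_min, R - dR)
    if spike_pre or spike_post:
        return min(R_max, R + dR)
    return R

def lif_step(v, i_in, v_th=1.0, leak=0.9):
    # Leaky integrate-and-fire neuron: returns (new potential, spiked?).
    v = leak * v + i_in
    return (0.0, True) if v >= v_th else (v, False)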
|
[
{
"version": "v1",
"created": "Mon, 10 Jan 2022 13:35:25 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 16:05:47 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Huang",
"Jinqi",
""
],
[
"Stathopoulos",
"Spyros",
""
],
[
"Serb",
"Alex",
""
],
[
"Prodromakis",
"Themis",
""
]
] |
new_dataset
| 0.986828 |
2201.07198
|
Dennis Aumiller
|
Dennis Aumiller and Michael Gertz
|
Klexikon: A German Dataset for Joint Summarization and Simplification
|
Code and data are available on Github:
https://github.com/dennlinger/klexikon
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Traditionally, Text Simplification is treated as a monolingual translation
task where sentences between source texts and their simplified counterparts are
aligned for training. However, especially for longer input documents,
summarizing the text (or dropping less relevant content altogether) plays an
important role in the simplification process, which is currently not reflected
in existing datasets. Simultaneously, resources for non-English languages are
scarce in general and prohibitive for training new solutions. To tackle this
problem, we pose core requirements for a system that can jointly summarize and
simplify long source documents. We further describe the creation of a new
dataset for joint Text Simplification and Summarization based on German
Wikipedia and the German children's lexicon "Klexikon", consisting of almost
2900 documents. We release a document-aligned version that particularly
highlights the summarization aspect, and provide statistical evidence that this
resource is well suited to simplification as well. Code and data are available
on Github: https://github.com/dennlinger/klexikon
|
[
{
"version": "v1",
"created": "Tue, 18 Jan 2022 18:50:43 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2022 08:39:54 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Aumiller",
"Dennis",
""
],
[
"Gertz",
"Michael",
""
]
] |
new_dataset
| 0.999742 |
2202.08152
|
Taegyun Noh
|
Taegyun Noh, Junil Choi
|
Cell-Free MIMO Systems Powered by Intelligent Reflecting Surfaces
|
5 pages, 4 figures, accepted to IEEE Communications Letters
|
IEEE Communications Letters, vol. 26, no. 5, pp. 1076-1080, May
2022
|
10.1109/LCOMM.2022.3152616
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cell-free massive multiple-input multiple-output (MIMO) and intelligent
reflecting surface (IRS) are considered as the prospective multiple antenna
technologies for beyond the fifth-generation (5G) networks. Cell-free MIMO
systems powered by IRSs, combining both technologies, can further improve the
performance of cell-free MIMO systems at low cost and energy consumption. Prior
works focused on instantaneous performance metrics and relied on alternating
optimization algorithms, which impose huge computational complexity and
signaling overhead. To address these challenges, we propose a novel two-step
algorithm that provides the long-term passive beamformers at the IRSs using
statistical channel state information (S-CSI) and short-term active precoders
and long-term power allocation at the access points (APs) to maximize the
minimum achievable rate. Simulation results verify that the proposed scheme
outperforms benchmark schemes and brings a significant performance gain to the
cell-free MIMO systems powered by IRSs.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 15:48:38 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Noh",
"Taegyun",
""
],
[
"Choi",
"Junil",
""
]
] |
new_dataset
| 0.99456 |
2204.02863
|
Eugene Valassakis
|
Eugene Valassakis, Georgios Papagiannis, Norman Di Palo and Edward
Johns
|
Demonstrate Once, Imitate Immediately (DOME): Learning Visual Servoing
for One-Shot Imitation Learning
|
To be published at IROS 2022. 7 figures, 8 pages. Videos and
supplementary material are available at: https://www.robot-learning.uk/dome
| null | null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present DOME, a novel method for one-shot imitation learning, where a task
can be learned from just a single demonstration and then be deployed
immediately, without any further data collection or training. DOME does not
require prior task or object knowledge, and can perform the task in novel
object configurations and with distractors. At its core, DOME uses an
image-conditioned object segmentation network followed by a learned visual
servoing network, to move the robot's end-effector to the same relative pose to
the object as during the demonstration, after which the task can be completed
by replaying the demonstration's end-effector velocities. We show that DOME
achieves near 100% success rate on 7 real-world everyday tasks, and we perform
several studies to thoroughly understand each individual component of DOME.
Videos and supplementary material are available at:
https://www.robot-learning.uk/dome .
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 14:32:51 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2022 19:24:54 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Valassakis",
"Eugene",
""
],
[
"Papagiannis",
"Georgios",
""
],
[
"Di Palo",
"Norman",
""
],
[
"Johns",
"Edward",
""
]
] |
new_dataset
| 0.974534 |
2205.03043
|
Chen Zui
|
Zui Chen, Yansen Jing, Shengcheng Yuan, Yifei Xu, Jian Wu and Hang
Zhao
|
Sound2Synth: Interpreting Sound via FM Synthesizer Parameters Estimation
|
8 pages, 8 figures. v2: IJCAI2022 published, format revisions and
bugfixes
| null |
10.24963/ijcai.2022/682
| null |
cs.SD cs.AI cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Synthesizer is a type of electronic musical instrument that is now widely
used in modern music production and sound design. Each parameter configuration
of a synthesizer produces a unique timbre and can be viewed as a unique
instrument. Estimating the parameter configuration that best restores a given
sound timbre is an important yet complicated problem: the synthesizer
parameter estimation problem. We proposed a multi-modal
deep-learning-based pipeline Sound2Synth, together with a network structure
Prime-Dilated Convolution (PDC) specially designed to solve this problem. Our
method achieved not only SOTA but also the first real-world applicable results
on Dexed synthesizer, a popular FM synthesizer.
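  The input side of such a pipeline can be sketched as a mel spectrogram fed
to a CNN regressor with one output per synth parameter; the file name, the
parameter count, and the generic architecture below are assumptions (the
paper's Prime-Dilated Convolution design is more specialised).

import librosa
import numpy as np
import torch
import torch.nn as nn

y, sr = librosa.load("patch_note.wav", sr=22050)   # assumed input file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
x = torch.tensor(np.log1p(mel), dtype=torch.float32)[None, None]

n_params = 100                   # set to the target synth's parameter count
regressor = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(16 * 8 * 8, n_params), nn.Sigmoid())  # normalised parameters
params = regressor(x)            # (1, n_params) values in [0, 1]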
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 06:55:29 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2022 10:08:12 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Chen",
"Zui",
""
],
[
"Jing",
"Yansen",
""
],
[
"Yuan",
"Shengcheng",
""
],
[
"Xu",
"Yifei",
""
],
[
"Wu",
"Jian",
""
],
[
"Zhao",
"Hang",
""
]
] |
new_dataset
| 0.985905 |
2207.13165
|
Siddharth Ganjoo
|
Siddharth Ganjoo
|
YOLO and Mask R-CNN for Vehicle Number Plate Identification
|
Correction regarding the data
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
License plate scanners have grown in popularity in parking lots during the
past few years. In order to quickly identify license plates, traditional plate
recognition devices used in parking lots employ a fixed source of light and
shooting angles. For skewed angles, such as license plate images taken with
ultra-wide-angle or fisheye lenses, deformation of the license plate can be
quite severe, impairing the ability of standard license plate recognition
systems to identify the plate. This work proposes a Mask R-CNN-based approach
that can handle oblique pictures and various shooting angles. Experimental
results show that the suggested design is capable of recognising license
plates with bevel angles larger than 0/60. The proposed Mask R-CNN method also
achieves significant progress in recognising characters tilted more than 45
degrees, compared to the strategy of employing the YOLOv2 model. Experimental
results further suggest that the presented methodology outperforms other
techniques on the open license plate dataset (the AOLP dataset).
|
[
{
"version": "v1",
"created": "Tue, 26 Jul 2022 19:41:59 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2022 07:48:11 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Ganjoo",
"Siddharth",
""
]
] |
new_dataset
| 0.99984 |
2207.13784
|
Jiaxi Jiang
|
Jiaxi Jiang, Paul Streli, Huajian Qiu, Andreas Fender, Larissa Laich,
Patrick Snape, Christian Holz
|
AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion
Sensing
|
Accepted by ECCV 2022, Code:
https://github.com/eth-siplab/AvatarPoser
| null | null | null |
cs.CV cs.AI cs.GR cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today's Mixed Reality head-mounted displays track the user's head pose in
world space as well as the user's hands for interaction in both Augmented
Reality and Virtual Reality scenarios. While this is adequate to support user
input, it unfortunately limits users' virtual representations to just their
upper bodies. Current systems thus resort to floating avatars, whose limitation
is particularly evident in collaborative settings. To estimate full-body poses
from the sparse input sources, prior work has incorporated additional trackers
and sensors at the pelvis or lower body, which increases setup complexity and
limits practical application in mobile settings. In this paper, we present
AvatarPoser, the first learning-based method that predicts full-body poses in
world coordinates using only motion input from the user's head and hands. Our
method builds on a Transformer encoder to extract deep features from the input
signals and decouples global motion from the learned local joint orientations
to guide pose estimation. To obtain accurate full-body motions that resemble
motion capture animations, we refine the arm joints' positions using an
optimization routine with inverse kinematics to match the original tracking
input. In our evaluation, AvatarPoser achieved new state-of-the-art results in
evaluations on large motion capture datasets (AMASS). At the same time, our
method's inference speed supports real-time operation, providing a practical
interface to support holistic avatar control and representation for Metaverse
applications.
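  The sparse-to-full-body mapping can be sketched as a Transformer encoder
over per-frame head and hand signals followed by a joint-rotation regression
head; all dimensions below are assumptions, and the paper's global/local
motion decoupling and inverse-kinematics refinement are omitted.

import torch
import torch.nn as nn

class SparsePoser(nn.Module):
    def __init__(self, in_dim=54, d_model=256, n_joints=22):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(d_model, n_joints * 6)  # 6D rotations

    def forward(self, x):            # x: (B, T, in_dim) head+hand signals
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])   # pose for the current frame

poses = SparsePoser()(torch.randn(2, 40, 54))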
|
[
{
"version": "v1",
"created": "Wed, 27 Jul 2022 20:52:39 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Jiang",
"Jiaxi",
""
],
[
"Streli",
"Paul",
""
],
[
"Qiu",
"Huajian",
""
],
[
"Fender",
"Andreas",
""
],
[
"Laich",
"Larissa",
""
],
[
"Snape",
"Patrick",
""
],
[
"Holz",
"Christian",
""
]
] |
new_dataset
| 0.999437 |
2207.13807
|
Garvita Tiwari
|
Garvita Tiwari, Dimitrije Antic, Jan Eric Lenssen, Nikolaos
Sarafianos, Tony Tung, Gerard Pons-Moll
|
Pose-NDF: Modeling Human Pose Manifolds with Neural Distance Fields
|
Project page: https://virtualhumans.mpi-inf.mpg.de/posendf
|
European Conference on Computer Vision (ECCV 2022), Oral
Presentation
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Pose-NDF, a continuous model for plausible human poses based on
neural distance fields (NDFs). Pose or motion priors are important for
generating realistic new poses and for reconstructing accurate poses from noisy
or partial observations. Pose-NDF learns a manifold of plausible poses as the
zero level set of a neural implicit function, extending the idea of modeling
implicit surfaces in 3D to the high-dimensional domain SO(3)^K, where a human
pose is defined by a single data point, represented by K quaternions. The
resulting high-dimensional implicit function can be differentiated with respect
to the input poses and thus can be used to project arbitrary poses onto the
manifold by using gradient descent on the set of 3-dimensional hyperspheres. In
contrast to previous VAE-based human pose priors, which transform the pose
space into a Gaussian distribution, we model the actual pose manifold,
preserving the distances between poses. We demonstrate that Pose-NDF outperforms
existing state-of-the-art methods as a prior in various downstream tasks,
ranging from denoising real-world human mocap data, pose recovery from occluded
data to 3D pose reconstruction from images. Furthermore, we show that it can be
used to generate more diverse poses by random sampling and projection than
VAE-based methods.
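  Projection onto the learned manifold can be sketched as gradient descent on
the predicted distance with quaternion renormalisation after each step;
`pose_ndf` is a stand-in for the trained distance network, and the step count
and learning rate are assumptions.

import torch

def project(pose, pose_ndf, steps=50, lr=0.05):
    # pose: (K, 4) unit quaternions describing one body pose.
    q = pose.clone().requires_grad_(True)
    for _ in range(steps):
        dist = pose_ndf(q).sum()               # scalar distance to manifold
        grad, = torch.autograd.grad(dist, q)
        with torch.no_grad():
            q -= lr * grad                     # descend the distance field
            q /= q.norm(dim=-1, keepdim=True)  # back onto the hyperspheres
    return q.detach()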
|
[
{
"version": "v1",
"created": "Wed, 27 Jul 2022 21:46:47 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Tiwari",
"Garvita",
""
],
[
"Antic",
"Dimitrije",
""
],
[
"Lenssen",
"Jan Eric",
""
],
[
"Sarafianos",
"Nikolaos",
""
],
[
"Tung",
"Tony",
""
],
[
"Pons-Moll",
"Gerard",
""
]
] |
new_dataset
| 0.99741 |
2207.13835
|
Mark Minor
|
Nathaniel G. Luttmer, Takara E. Truong, Alicia M. Boynton, Andrew S.
Merryweather, David R. Carrier, and Mark A. Minor
|
Impactful Robots: Evaluating Visual and Audio Warnings to Help Users
Brace for Impact in Human Robot Interaction
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wearable robotic devices have potential to assist and protect their users.
Toward the design of a Smart Helmet, this article examines the effectiveness of
audio and visual warnings to help participants brace for impacts. A user study
examines different warnings and impacts applied to users while running.
Perturbation forces scaled to user mass are applied from different directions
and user displacement is measured to characterize effectiveness of the warning.
This is accomplished using the TreadPort Active Wind Tunnel adapted to deliver
forward, rearward, right, or left perturbation forces at precise moments during
the locomotor cycle. The article presents an overview of the system and
demonstrates the ability to precisely deliver consistent warnings and
perturbations during gait. User study results highlight effectiveness of visual
and audio warnings to help users brace for impact, resulting in guidelines that
will inform future human-robot warning systems.
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 00:09:45 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Luttmer",
"Nathaniel G.",
""
],
[
"Truong",
"Takara E.",
""
],
[
"Boynton",
"Alicia M.",
""
],
[
"Merryweather",
"Andrew S.",
""
],
[
"Carrier",
"David R.",
""
],
[
"Minor",
"Mark A.",
""
]
] |
new_dataset
| 0.995292 |
2207.13845
|
Adolfo Ramirez-Aristizabal
|
Adolfo G. Ramirez-Aristizabal, Chris Kello
|
EEG2Mel: Reconstructing Sound from Brain Responses to Music
|
5 figures, 2 tables, listening examples and code provided
| null | null | null |
cs.SD cs.CV cs.IR eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Information retrieval from brain responses to auditory and visual stimuli has
shown success through classification of song names and image classes presented
to participants while recording EEG signals. Information retrieval in the form
of reconstructing auditory stimuli has also shown some success, but here we
improve on previous methods by reconstructing music stimuli well enough to be
perceived and identified independently. Furthermore, deep learning models were
trained on time-aligned music stimulus spectra for each corresponding
one-second window of EEG recording, which greatly reduces feature extraction
steps needed when compared to prior studies. The NMED-Tempo and NMED-Hindi
datasets of participants passively listening to full length songs were used to
train and validate Convolutional Neural Network (CNN) regressors. The efficacy
of raw voltage versus power spectrum inputs and linear versus mel spectrogram
outputs were tested, and all inputs and outputs were converted into 2D images.
The quality of reconstructed spectrograms was assessed by training classifiers
which showed 81% accuracy for mel-spectrograms and 72% for linear spectrograms
(10% chance accuracy). Lastly, reconstructions of auditory music stimuli were
discriminated by listeners at an 85% success rate (50% chance) in a
two-alternative match-to-sample task.
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 01:06:51 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Ramirez-Aristizabal",
"Adolfo G.",
""
],
[
"Kello",
"Chris",
""
]
] |
new_dataset
| 0.990188 |
2207.13862
|
Wenzhi Gao
|
Wenzhi Gao, Dongdong Ge, Yinyu Ye
|
HDSDP: Software for Semidefinite Programming
| null | null | null | null |
cs.MS math.OC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
HDSDP is numerical software for solving semidefinite programming problems.
The main framework of HDSDP resembles the dual-scaling interior point solver
DSDP[2], and several new features, especially a dual method based on the
simplified homogeneous self-dual embedding, have been implemented. The
embedding enhances the stability of the dual method, and several new heuristics
and computational techniques are designed to accelerate its convergence. HDSDP aims
to show how dual-scaling algorithms benefit from the self-dual embedding and it
is developed in parallel to DSDP5.8. Numerical experiments over several
classical benchmark datasets exhibit its robustness and efficiency, and
particularly its advantages on SDP instances featuring low-rank structure and
sparsity. The pre-built binary of HDSDP is currently freely available at
https://github.com/COPT-Public/HDSDP.
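  For orientation, a minimal semidefinite program of the kind HDSDP targets
can be written in a few lines of CVXPY (HDSDP itself is a standalone solver
with its own interface; this only shows the problem class).

import cvxpy as cp
import numpy as np

C = np.array([[2.0, 1.0], [1.0, 2.0]])
X = cp.Variable((2, 2), PSD=True)           # the matrix variable
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(X) == 1])       # a single linear constraint
prob.solve()
print(prob.value, X.value)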
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 02:35:08 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Gao",
"Wenzhi",
""
],
[
"Ge",
"Dongdong",
""
],
[
"Ye",
"Yinyu",
""
]
] |
new_dataset
| 0.999451 |
2207.13940
|
Alena Otto
|
Catherine Lorenz, Nicola Mimmo, Alena Otto, Daniele Vigo
|
Very large-scale neighborhood search for drone routing with energy
replenishment
|
30 pages (22 pages main text and 8 pages appendix), 8 figures
| null | null | null |
cs.DM math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Drone Routing Problem with Energy replenishment (DRP-E) belongs to a
general class of routing problems with intermediate stops and synchronization
constraints. In DRP-E, the drone has to visit a set of nodes and routinely
requires battery swaps from a (potentially) mobile replenishment station.
Contrary to widespread restrictions in the drone routing literature, several
destinations may be visited in between two consecutive battery swaps. In this
paper, we propose a nontrivial very large-scale neighborhood for DRP-E, which
synergistically leverages two large-sized polynomially solvable DRP-E
SubProblems (SP1 and SP2). The number of feasible solutions in the resulting
neighborhood is a multiple of those in SP1 and SP2, and, thus, exponential in
the input size of the problem, whereas the computational time to search it
remains polynomial. The proposed polynomial two-stage dynamic programming
algorithm VLSN to search this neighborhood can be flexibly adjusted to the
desired trade-off between accuracy and computational time. For instance, the
search procedure can be converted into an exact algorithm of competitive
runtime for DRP-E. In computational tests, the developed solution methods
outperform current state-of-the-art heuristics for DRP-E by a significant
margin. A case study based on a search for missing persons demonstrates that
VLSN easily accommodates additional practice relevant features and outperforms
the state-of-the-art solution in disaster relief by 20%.
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 08:04:51 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Lorenz",
"Catherine",
""
],
[
"Mimmo",
"Nicola",
""
],
[
"Otto",
"Alena",
""
],
[
"Vigo",
"Daniele",
""
]
] |
new_dataset
| 0.976937 |
2207.13941
|
Georgios Mylonas
|
Agorakis Bompotas, Christos Anagnostopoulos, Athanasios Kalogeras,
Georgios Kalogeras, Georgios Mylonas, Kyriakos Stefanidis, Christos Alexakos,
Miranda Dandoulaki
|
A Civil Protection Early Warning System to Improve the Resilience of
Adriatic-Ionian Territories to Natural and Man-made Risk
|
Preprint submitted to the 27th IEEE International Conference on
Emerging Technologies and Factory Automation, ETFA 2022
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We are currently witnessing an increased occurrence of extreme weather
events, causing a great deal of disruption and distress across the globe. In
this setting, the importance and utility of Early Warning Systems are
becoming increasingly obvious. In this work, we present the design of an
early warning system called TransCPEarlyWarning, aimed at seven countries in
the Adriatic-Ionian area of Europe. The overall objective is to increase the
level of cooperation among national civil protection institutions in these
countries, addressing natural and man-made risks from the early warning stage
onwards and improving the intervention capabilities of civil protection
mechanisms. The system takes an innovative approach intended to have a lever
effect, while also aiming to support the Civil Protection system as a whole.
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 08:05:37 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Bompotas",
"Agorakis",
""
],
[
"Anagnostopoulos",
"Christos",
""
],
[
"Kalogeras",
"Athanasios",
""
],
[
"Kalogeras",
"Georgios",
""
],
[
"Mylonas",
"Georgios",
""
],
[
"Stefanidis",
"Kyriakos",
""
],
[
"Alexakos",
"Christos",
""
],
[
"Dandoulaki",
"Miranda",
""
]
] |
new_dataset
| 0.965912 |
2207.13970
|
Miguel Arana-Catania
|
John Dougrez-Lewis, Elena Kochkina, M. Arana-Catania, Maria Liakata,
Yulan He
|
PHEMEPlus: Enriching Social Media Rumour Verification with External
Evidence
|
10 pages, 1 figure, 5 tables, presented in the Fifth Fact Extraction
and VERification Workshop (FEVER). 2022
| null | null | null |
cs.CL cs.AI cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Work on social media rumour verification utilises signals from posts, their
propagation, and the users involved. Other lines of work target identifying
and fact-checking claims based on information from Wikipedia or trustworthy
news articles, without considering social media context. However, works
combining information from social media with external evidence from the
wider web are lacking. To facilitate research in this direction, we release a
novel dataset, PHEMEPlus, an extension of the PHEME benchmark, which contains
social media conversations as well as relevant external evidence for each
rumour. We demonstrate the effectiveness of incorporating such evidence in
improving rumour verification models. Additionally, as part of the evidence
collection, we evaluate various ways of query formulation to identify the
most effective method.
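The abstract does not state which query formulations were compared, so the
following sketch merely illustrates one common baseline for turning a
rumour's source post into a web-search query; the stopword list and function
name are assumptions.

```python
# Hypothetical baseline: build an evidence-retrieval query by keeping the
# content words of the rumour's source post.
import re

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "to", "of",
             "in", "on", "at", "and", "or", "that", "this", "it", "has"}

def formulate_query(source_post: str, max_terms: int = 8) -> str:
    tokens = re.findall(r"[A-Za-z0-9']+", source_post.lower())
    content = [t for t in tokens if t not in STOPWORDS]
    return " ".join(content[:max_terms])

print(formulate_query("BREAKING: The bridge in the city centre has collapsed"))
# -> "breaking bridge city centre collapsed"
```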
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 09:21:05 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Dougrez-Lewis",
"John",
""
],
[
"Kochkina",
"Elena",
""
],
[
"Arana-Catania",
"M.",
""
],
[
"Liakata",
"Maria",
""
],
[
"He",
"Yulan",
""
]
] |
new_dataset
| 0.9995 |
2207.13989
|
Linda Kleist
|
Eva Stehr and Linda Kleist
|
Folding Polyiamonds into Octahedra
| null | null | null | null |
cs.CG math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study polyiamonds (polygons arising from the triangular grid) that fold
into the smallest yet unstudied Platonic solid -- the octahedron. We show a
number of results. Firstly, we characterize foldable polyiamonds containing a
hole of positive area: all but one such polyiamond are foldable. Secondly, we
show that a convex polyiamond folds into the octahedron if and only if it
contains one of five specific polyiamonds. Thirdly, we present a sharp size
bound: while there exist unfoldable polyiamonds of size 14, every polyiamond
of size at least 15 folds into the octahedron. This immediately implies that
one can test in polynomial time whether a given polyiamond folds into the
octahedron. Lastly, we show that for any assignment of positive integers to
the faces, there exists a polyiamond that folds into the octahedron such that
the number of triangles covering each face equals the assigned number.
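A minimal sketch of the polynomial-time test implied by the sharp size bound:
any polyiamond with at least 15 triangles folds, so only the finitely many
shapes of size at most 14 need an explicit lookup. The table argument and the
flat representation of a polyiamond are assumptions (canonicalization up to
translation and symmetry is omitted for brevity).

```python
def folds_into_octahedron(polyiamond, foldable_small_shapes=frozenset()):
    """polyiamond: frozenset of triangle coordinates; foldable_small_shapes:
    hypothetical precomputed table of the foldable shapes of size <= 14."""
    if len(polyiamond) >= 15:   # sharp bound: size >= 15 always folds
        return True
    # Finitely many shapes of size <= 14 remain, so a table lookup (or a
    # brute-force folding check) runs in constant time.
    return polyiamond in foldable_small_shapes

# Any 15-triangle polyiamond is foldable without further checking
# (placeholder coordinates stand in for real triangle positions):
print(folds_into_octahedron(frozenset(range(15))))  # -> True
```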
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 10:11:36 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Stehr",
"Eva",
""
],
[
"Kleist",
"Linda",
""
]
] |
new_dataset
| 0.989248 |
2207.13999
|
Alireza Madani
|
Alireza Madani, Pouya P. Niaz, Berk Guler, Yusuf Aydin, Cagatay
Basdogan
|
Robot-Assisted Drilling on Curved Surfaces with Haptic Guidance under
Adaptive Admittance Control
|
RA-L IROS 2022
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Drilling a hole on a curved surface at a desired angle is prone to failure
when done manually, due to the difficulties of drill alignment and the
inherent instabilities of the task, potentially causing injury and fatigue to
the workers. On the other hand, it can be impractical to fully automate such
a task in real manufacturing environments, because the parts arriving at an
assembly line can have various complex shapes in which drill point locations
are not easily accessible, making automated path planning difficult. In this
work, an adaptive admittance controller with 6 degrees of freedom is
developed and deployed on a KUKA LBR iiwa 7 cobot such that the operator can
comfortably manipulate a drill mounted on the robot with one hand and open
holes on a curved surface with the haptic guidance of the cobot and visual
guidance provided through an AR interface. Real-time adaptation of the
admittance damping provides more transparency when driving the robot in free
space while ensuring stability during drilling. After the user brings the
drill sufficiently close to the drill target and roughly aligns it to the
desired drilling angle, the haptic guidance module first fine-tunes the
alignment and then constrains the user's movement to the drilling axis only,
after which the operator simply pushes the drill into the workpiece with
minimal effort. Two sets of experiments were conducted to investigate the
potential benefits of the haptic guidance module quantitatively (Experiment
I) and the practical value of the proposed pHRI system for real manufacturing
settings based on the subjective opinions of the participants (Experiment
II).
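A minimal one-degree-of-freedom admittance-control sketch follows; the
paper's controller is 6-DoF and its exact damping-adaptation law is not given
in the abstract, so the force-based blending rule, the parameter values, and
the function name here are illustrative assumptions.

```python
# Hypothetical 1-DoF admittance controller: integrate m*dv/dt + d*v = f_ext,
# with damping d adapted to the measured interaction force.
def admittance_step(v, f_ext, dt, m=2.0, d_min=5.0, d_max=80.0,
                    f_contact=15.0):
    # Blend damping: near d_min in free space (transparent hand guiding),
    # near d_max once contact forces suggest drilling (stability).
    blend = min(abs(f_ext) / f_contact, 1.0)
    d = d_min + blend * (d_max - d_min)
    dv = (f_ext - d * v) / m
    return v + dv * dt              # commanded velocity for the robot

v = 0.0
for f in [2.0, 2.5, 3.0, 20.0, 22.0]:   # operator force readings [N]
    v = admittance_step(v, f, dt=0.01)
    print(round(v, 4))
```

Low damping keeps the robot easy to push when forces are small, while the
high-damping regime damps out the oscillations that drilling contact would
otherwise excite, mirroring the transparency-versus-stability trade-off the
abstract describes.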
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 10:44:17 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Madani",
"Alireza",
""
],
[
"Niaz",
"Pouya P.",
""
],
[
"Guler",
"Berk",
""
],
[
"Aydin",
"Yusuf",
""
],
[
"Basdogan",
"Cagatay",
""
]
] |
new_dataset
| 0.99296 |
2207.14043
|
Sebastian Stock
|
Sebastian Stock, Atif Mashkoor, Michael Leuschel, Alexander Egyed
|
Trace Refinement in B and Event-B
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Traces are used to show whether a model complies with the intended behavior.
A modeler can use trace checking to ensure the preservation of model behavior
during the refinement process. In this paper, we present a trace refinement
technique and a tool called BERT that allow designers to ensure the
behavioral integrity of high-level traces at the concrete level. The proposed
technique is evaluated within the context of the B and Event-B methods on
industrial-strength case studies from the automotive domain.
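As an illustration of the core check behind trace refinement (not BERT
itself), the sketch below replays an abstract trace against a concrete run
via an event-refinement map; the models, the map, and all names are
assumptions.

```python
# Hypothetical check: does a concrete run preserve an abstract trace?
def trace_refined(abstract_trace, concrete_steps, refines):
    """concrete_steps: concrete events executable in order; refines: maps a
    concrete event to the abstract event it refines, or None for new
    (stuttering) events introduced by the refinement."""
    want = list(abstract_trace)
    for ev in concrete_steps:
        a = refines.get(ev)
        if a is None:
            continue                  # refinement-introduced event: skip
        if not want or want[0] != a:
            return False              # concrete run diverges from the trace
        want.pop(0)
    return not want                   # every abstract event was matched

print(trace_refined(["open", "close"],
                    ["init", "open_lo", "log", "close_lo"],
                    {"open_lo": "open", "close_lo": "close",
                     "init": None, "log": None}))  # -> True
```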
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 12:31:12 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Stock",
"Sebastian",
""
],
[
"Mashkoor",
"Atif",
""
],
[
"Leuschel",
"Michael",
""
],
[
"Egyed",
"Alexander",
""
]
] |
new_dataset
| 0.984283 |
2207.14072
|
Chenning Li
|
Chenning Li, Li Liu, Zhichao Cao, Mi Zhang
|
WiVelo: Fine-grained Walking Velocity Estimation for Wi-Fi Passive
Tracking
|
Proceedings of IEEE SECON, 2022
| null | null | null |
cs.HC eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Passive human tracking via Wi-Fi has been researched broadly in the past
decade. Besides straightforward anchor-point localization, velocity is
another vital signal adopted by existing approaches to infer the user
trajectory. However, state-of-the-art Wi-Fi velocity estimation relies on the
Doppler frequency shift (DFS), which suffers from inevitable signal noise
that incurs unbounded velocity errors, further degrading the tracking
accuracy. In this paper, we present WiVelo\footnote{Code\&datasets are
available at \textit{https://github.com/liecn/WiVelo\_SECON22}}, which
explores new spatial-temporal signal correlation features observed from
different antennas to achieve accurate velocity estimation. First, we use the
subcarrier shift distribution (SSD) extracted from channel state information
(CSI) to define two correlation features for direction and speed estimation,
separately. Then, we design a mesh model calculated from the antennas'
locations to enable fine-grained velocity estimation with bounded direction
error. Finally, with the continuously estimated velocity, we develop an
end-to-end trajectory recovery algorithm that mitigates velocity outliers by
exploiting the continuity of walking velocity. We implement WiVelo on
commodity Wi-Fi hardware and extensively evaluate its tracking accuracy in
various environments. The experimental results show that our median and 90\%
tracking errors are 0.47~m and 1.06~m, which are half and a quarter of those
of the state of the art, respectively.
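WiVelo's SSD-based features are more involved than what fits here, so the
sketch below only conveys the underlying spatial-temporal correlation idea:
find the time lag at which two antennas' feature streams best align, then
convert that lag into a speed estimate using the known antenna spacing. All
parameter values and names are assumptions.

```python
import numpy as np

def speed_from_lag(feat_a, feat_b, antenna_spacing_m, fs_hz, max_lag=50):
    """feat_a/feat_b: 1-D feature series from two antennas (e.g., per-frame
    subcarrier-shift statistics); returns a crude speed estimate in m/s."""
    best_lag, best_corr = 1, -np.inf
    for lag in range(1, max_lag):
        c = np.corrcoef(feat_a[:-lag], feat_b[lag:])[0, 1]
        if c > best_corr:
            best_corr, best_lag = c, lag
    return antenna_spacing_m / (best_lag / fs_hz)

# Toy example: antenna B sees the same pattern 10 frames later.
rng = np.random.default_rng(0)
sig = rng.standard_normal(500)
print(speed_from_lag(sig, np.roll(sig, 10), 0.05, fs_hz=1000))  # ~5 m/s
```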
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 13:24:07 GMT"
}
] | 2022-07-29T00:00:00 |
[
[
"Li",
"Chenning",
""
],
[
"Liu",
"Li",
""
],
[
"Cao",
"Zhichao",
""
],
[
"Zhang",
"Mi",
""
]
] |
new_dataset
| 0.997048 |