(Schema: 16 columns; all are strings unless noted. ⌀ marks nullable columns:
submitter, comments, journal-ref, doi, report-no. license takes one of 9
classes; versions and authors_parsed are lists; update_date is timestamp[s];
prediction takes a single class; probability is float64 in [0.95, 1].)

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2209.09034
|
Kai North
|
Kai North, Marcos Zampieri, Tharindu Ranasinghe
|
ALEXSIS-PT: A New Resource for Portuguese Lexical Simplification
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Lexical simplification (LS) is the task of automatically replacing complex
words with simpler ones, making texts more accessible to various target
populations (e.g., individuals with low literacy, individuals with learning
disabilities, second language learners). To train and test models, LS systems
usually require corpora that feature complex words in context along with their
candidate substitutions. To continue improving the performance of LS systems, we
introduce ALEXSIS-PT, a novel multi-candidate dataset for Brazilian Portuguese
LS containing 9,605 candidate substitutions for 387 complex words. ALEXSIS-PT
has been compiled following the ALEXSIS protocol for Spanish, opening exciting
new avenues for cross-lingual models. ALEXSIS-PT is the first LS
multi-candidate dataset that contains Brazilian newspaper articles. We
evaluated four models for substitute generation on this dataset, namely
mDistilBERT, mBERT, XLM-R, and BERTimbau. BERTimbau achieved the highest
performance across all evaluation metrics.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 14:10:21 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"North",
"Kai",
""
],
[
"Zampieri",
"Marcos",
""
],
[
"Ranasinghe",
"Tharindu",
""
]
] |
new_dataset
| 0.998039 |
2209.09035
|
Meiling Fang
|
Meiling Fang and Wufei Yang and Arjan Kuijper and Vitomir Struc and
Naser Damer
|
Fairness in Face Presentation Attack Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face presentation attack detection (PAD) is critical to secure face
recognition (FR) applications from presentation attacks. FR performance has
been shown to be unfair to certain demographic and non-demographic groups.
However, the fairness of face PAD is an understudied issue, mainly due to the
lack of appropriately annotated data. To address this issue, this work first
presents a Combined Attribute Annotated PAD Dataset (CAAD-PAD) by combining
several well-known PAD datasets, where we provide seven human-annotated
attribute labels. This work then comprehensively analyses the fairness of a set
of face PADs and its relation to the nature of training data and the
Operational Decision Threshold Assignment (ODTA) on different data groups by
studying four face PAD approaches on our CAAD-PAD. To simultaneously represent
both the PAD fairness and the absolute PAD performance, we introduce a novel
metric, namely the Accuracy Balanced Fairness (ABF). Extensive experiments on
CAAD-PAD show that the training data and ODTA induce unfairness on gender,
occlusion, and other attribute groups. Based on these analyses, we propose a
data augmentation method, FairSWAP, which aims to disrupt the identity/semantic
information and guide models to mine attack cues rather than attribute-related
information. Detailed experimental results demonstrate that FairSWAP generally
enhances both the PAD performance and the fairness of face PAD.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 14:12:09 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Fang",
"Meiling",
""
],
[
"Yang",
"Wufei",
""
],
[
"Kuijper",
"Arjan",
""
],
[
"Struc",
"Vitomir",
""
],
[
"Damer",
"Naser",
""
]
] |
new_dataset
| 0.956876 |
2209.09094
|
Amir Ziaee
|
Amir Ziaee and Georg Suter
|
SFS-A68: a dataset for the segmentation of space functions in apartment
buildings
|
Published in proceedings of the 29th International Workshop on
Intelligent Computing in Engineering, EG-ICE 2022, Aarhus, Denmark.
https://doi.org/10.7146/aul.455.c222
|
Teizer, Jochen & Schultz, Carl. (2022). Proceedings of the 29th
EG-ICE International Workshop on Intelligent Computing in Engineering:
Frontmatter and Backmatter. 1-8. 10.7146/aul.455.c191
| null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analyzing building models for usable area, building safety, or energy
analysis requires function classification data of spaces and related objects.
Automated space function classification is desirable to reduce input model
preparation effort and errors. Existing space function classifiers use space
feature vectors or space connectivity graphs as input. The application of deep
learning (DL) image segmentation methods to space function classification has
not been studied. As an initial step towards addressing this gap, we present a
dataset, SFS-A68, that consists of input and ground truth images generated from
68 digital 3D models of space layouts of apartment buildings. The dataset is
suitable for developing DL models for space function segmentation. We use the
dataset to train and evaluate an experimental space function segmentation
network based on transfer learning and training from scratch. Test results
confirm the applicability of DL image segmentation for space function
classification. The code and the dataset of the experiments are publicly
available online (https://github.com/A2Amir/SFS-A68).
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 07:49:54 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Ziaee",
"Amir",
""
],
[
"Suter",
"Georg",
""
]
] |
new_dataset
| 0.999668 |
2209.09118
|
Dr. Mohammed Javed
|
Dikshit Sharma and Mohammed Javed
|
OCR for TIFF Compressed Document Images Directly in Compressed Domain
Using Text segmentation and Hidden Markov Model
|
The paper has 14 figures and 1 table
| null | null | null |
cs.CV cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In today's technological era, document images play an important and integral
part in our day-to-day life, and specifically with the surge of Covid-19,
digitally scanned documents have become a key source of communication, thus
avoiding any sort of infection through physical contact. Storage and
transmission of scanned document images is a very memory-intensive task, hence
compression techniques are being used to reduce the image size before archival
and transmission. There are two ways to extract information from or operate on
compressed images. The first is to decompress the image, operate on it, and
subsequently compress it again for efficient storage and transmission. The
other is to use the characteristics of the underlying compression algorithm
to directly process the images in their compressed form without involving
decompression and re-compression. In this paper, we propose a novel idea of
developing an OCR for CCITT (The International Telegraph and Telephone
Consultative Committee) compressed machine-printed TIFF document images
directly in the compressed domain. After segmenting text regions into lines
and words, an HMM is applied for recognition using three coding modes of
CCITT: horizontal, vertical, and pass mode. Experimental results show that
OCR on the pass mode gives promising results.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 06:34:26 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Sharma",
"Dikshit",
""
],
[
"Javed",
"Mohammed",
""
]
] |
new_dataset
| 0.999569 |
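The record above (2209.09118) applies an HMM over CCITT coding modes but does
not spell out the decoder. The sketch below is a generic Viterbi decoder in
Python; the mode states, observation alphabet, and all probabilities are
invented placeholders, not the paper's model.

```python
import numpy as np

# Hypothetical HMM over CCITT coding modes; states, observations, and
# probabilities are illustrative placeholders, not the paper's model.
states = ["pass", "horizontal", "vertical"]
start = np.log([0.2, 0.4, 0.4])
trans = np.log([[0.5, 0.3, 0.2],   # P(next mode | current mode)
                [0.3, 0.5, 0.2],
                [0.2, 0.3, 0.5]])
emit = np.log([[0.7, 0.3],         # P(observed symbol | mode)
               [0.4, 0.6],
               [0.5, 0.5]])

def viterbi(obs):
    """Return the most likely mode sequence for observation indices."""
    V = start + emit[:, obs[0]]          # log-prob of best length-1 paths
    back = []
    for o in obs[1:]:
        scores = V[:, None] + trans      # extend each path by one step
        back.append(scores.argmax(axis=0))
        V = scores.max(axis=0) + emit[:, o]
    path = [int(V.argmax())]
    for ptr in reversed(back):           # trace the best path backwards
        path.append(int(ptr[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 1, 0]))
```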
2209.09127
|
Nikolay Ivanov
|
Nikolay Ivanov
|
Is Rust C++-fast? Benchmarking System Languages on Everyday Routines
|
Michigan State University
| null | null | null |
cs.PL cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rust is a relatively new system programming language that has experienced
rapid adoption in the past 10 years. Rust incorporates a memory
ownership model enforced at compile time. Since this model involves zero
runtime overhead, programs written in Rust are not only memory-safe but also
fast, leading to performance comparable to C and C++. Multiple existing
benchmarks comparing the performance of Rust with other languages focus on
rarely used superficial algorithms, leading to somewhat inconclusive results.
In this work, we conduct a comparative performance benchmark of Rust and C++
using commonly used algorithms and data structures rather than exotic ones. Our
evaluation shows that the overall performance of Rust is similar to that of
C++, with only a minor disadvantage. We also demonstrate that some Rust
routines are slightly faster than their C++ counterparts.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 15:45:50 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Ivanov",
"Nikolay",
""
]
] |
new_dataset
| 0.999377 |
2209.09157
|
Ricardo M\"uller
|
Ricardo M\"uller, Marco Schreyer, Timur Sattarov, Damian Borth
|
RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits
by enhancing SHapley Additive exPlanations
|
9 pages, 4 figures, 5 tables, preprint version, currently under
review
| null | null | null |
cs.LG cs.CE q-fin.ST
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Detecting accounting anomalies is a recurrent challenge in financial
statement audits. Recently, novel methods derived from Deep-Learning (DL) have
been proposed to audit the large volumes of a statement's underlying accounting
records. However, due to their vast number of parameters, such models exhibit
the drawback of being inherently opaque. At the same time, the concealing of a
model's inner workings often hinders its real-world application. This
observation holds particularly true in financial audits since auditors must
reasonably explain and justify their audit decisions. Nowadays, various
Explainable AI (XAI) techniques have been proposed to address this challenge,
e.g., SHapley Additive exPlanations (SHAP). However, in unsupervised DL as
often applied in financial audits, these methods explain the model output at
the level of encoded variables. As a result, the explanations of Autoencoder
Neural Networks (AENNs) are often hard to comprehend by human auditors. To
mitigate this drawback, we propose RESHAPE, which explains the model output
at an aggregated attribute level. In addition, we introduce an evaluation
framework to compare the versatility of XAI methods in auditing. Our
experimental results show empirical evidence that RESHAPE results in versatile
explanations compared to state-of-the-art baselines. We envision such
attribute-level explanations as a necessary next step in the adoption of
unsupervised DL techniques in financial auditing.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 16:23:43 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Müller",
"Ricardo",
""
],
[
"Schreyer",
"Marco",
""
],
[
"Sattarov",
"Timur",
""
],
[
"Borth",
"Damian",
""
]
] |
new_dataset
| 0.972379 |
2209.09171
|
Nipun Dhananjaya Weerakkodi Mudalige
|
Nipun Dhananjaya Weerakkodi Mudalige, Iana Zhura, Ildar Babataev,
Elena Nazarova, Aleksey Fedoseev and Dzmitry Tsetserukou
|
HyperDog: An Open-Source Quadruped Robot Platform Based on ROS2 and
micro-ROS
|
6 pages, 13 figures, IEEE SMC 2022 conference
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Nowadays, the design and development of legged quadruped robots is a highly
active area of scientific research. Legged robots have become popular due
to their ability to adapt to harsh terrain and diverse environmental
conditions in comparison to other mobile robots. With the growing demand for
legged robot experiments, more researchers and engineers need an affordable and
quick way to develop locomotion algorithms. In this paper, we present HyperDog,
a new open-source quadruped robot platform, which features 12 RC servo
motors, an onboard NVIDIA Jetson Nano computer, and an STM32F4 Discovery board.
HyperDog is an open-source platform for quadruped robotic software development,
which is based on Robot Operating System 2 (ROS2) and micro-ROS. Moreover,
HyperDog is a quadrupedal robotic dog built entirely from 3D-printed parts and
carbon fiber, which makes the robot lightweight and strong. The idea of this
work is to demonstrate an affordable and customizable way of robot development
and to provide researchers and engineers with a legged robot platform on which
different algorithms can be tested and validated in simulation and in real
environments. The developed project with code is available on GitHub
(https://github.com/NDHANA94/hyperdog_ros2).
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 16:47:18 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Mudalige",
"Nipun Dhananjaya Weerakkodi",
""
],
[
"Zhura",
"Iana",
""
],
[
"Babataev",
"Ildar",
""
],
[
"Nazarova",
"Elena",
""
],
[
"Fedoseev",
"Aleksey",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.966317 |
2209.09191
|
Franco Coltraro
|
Franco Coltraro, Josep Fontana, Jaume Amor\'os, Maria
Alberich-Carrami\~nana, J\'ulia Borr\`as, Carme Torras
|
The dGLI Cloth Coordinates: A Topological Representation for Semantic
Classification of Cloth States
|
24 pages, 34 references, 6 figures, 1 table
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Robotic manipulation of cloth is a highly complex task because of its
infinite-dimensional shape-state space that makes cloth state estimation very
difficult. In this paper, we introduce the dGLI Cloth Coordinates, a
low-dimensional representation of the state of a rectangular piece of cloth
that allows us to efficiently distinguish key topological changes in a folding
sequence, opening the door to efficient learning methods for cloth manipulation
planning and control. Our representation is based on a directional derivative
of the Gauss Linking Integral and allows us to represent both planar and
spatial configurations in a consistent unified way. The proposed dGLI Cloth
Coordinates are shown to be more accurate in the classification of cloth states
and significantly more sensitive to changes in grasping affordances than other
classic shape distance methods. Finally, we apply our representation to real
images of a cloth, showing we can identify the different states using a simple
distance-based classifier.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 15:16:45 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Coltraro",
"Franco",
""
],
[
"Fontana",
"Josep",
""
],
[
"Amorós",
"Jaume",
""
],
[
"Alberich-Carramiñana",
"Maria",
""
],
[
"Borràs",
"Júlia",
""
],
[
"Torras",
"Carme",
""
]
] |
new_dataset
| 0.999256 |
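As background for the dGLI record above (2209.09191), the classical Gauss
Linking Integral of two curves $\gamma_1,\gamma_2$ is the textbook double line
integral below; the paper's directional derivative of it and its discretization
are not reproduced here.

```latex
% Classical Gauss Linking Integral (textbook form); the dGLI coordinates
% are built from a directional derivative of this quantity.
\mathrm{GLI}(\gamma_1,\gamma_2)
  = \frac{1}{4\pi}\oint_{\gamma_1}\!\oint_{\gamma_2}
    \frac{\left(\mathrm{d}\gamma_1 \times \mathrm{d}\gamma_2\right)\cdot
          \left(\gamma_1-\gamma_2\right)}
         {\left\lVert \gamma_1-\gamma_2 \right\rVert^{3}}
```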
2209.09207
|
Mrinal Haloi
|
Mrinal Haloi, Shashank Shekhar, Nikhil Fande, Siddhant Swaroop Dash,
Sanjay G
|
Table Detection in the Wild: A Novel Diverse Table Detection Dataset and
Method
|
Open source Table detection dataset and baseline results
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent deep learning approaches in table detection achieved outstanding
performance and proved to be effective in identifying document layouts.
Currently available table detection benchmarks have many limitations,
including a lack of sample diversity, simple table structures, few
training cases, and poor sample quality. In this paper, we introduce a diverse
large-scale dataset for table detection with more than seven thousand samples
containing a wide variety of table structures collected from many diverse
sources. In addition, we present baseline results using a
convolutional neural network-based method to detect table structure in
documents. Experimental results show the superiority of applying convolutional
deep learning methods over classical computer vision-based methods. The
introduction of this diverse table detection dataset will enable the community
to develop high throughput deep learning methods for understanding document
layout and tabular data processing.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 14:20:30 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Haloi",
"Mrinal",
""
],
[
"Shekhar",
"Shashank",
""
],
[
"Fande",
"Nikhil",
""
],
[
"Dash",
"Siddhant Swaroop",
""
],
[
"G",
"Sanjay",
""
]
] |
new_dataset
| 0.99956 |
2209.09217
|
Agrim Gupta
|
Agrim Gupta, Daegue Park, Shayaun Bashar, Cedric Girerd, Tania
Morimoto, Dinesh Bharadia
|
WiForceSticker: Batteryless, Thin Sticker-like Flexible Force Sensor
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Any two objects in contact with each other exert a force that could be simply
due to gravity or mechanical contact, such as a robotic arm gripping an object
or even the contact between two bones at our knee joints. The ability to
naturally measure and monitor these contact forces allows a plethora of
applications, from warehouse management (detecting faulty packages based on
weight) to robotics (making a robotic arm's grip as sensitive as human skin)
and healthcare (knee implants). It is challenging to design a ubiquitous force
sensor that can be used naturally for all these applications. First, the sensor
should be small enough to fit in narrow spaces. Next, we don't want to lay
cumbersome cables to read the force values from the sensors. Finally, we need
to have a battery-free design to meet the in-vivo applications. We develop
WiForceSticker, a wireless, battery-free, sticker-like force sensor that can be
ubiquitously deployed on any surface, such as all warehouse packages, robotic
arms, and knee joints. WiForceSticker first presents a tiny
$4$~mm~$\times$~$2$~mm~$\times$~$0.4$~mm capacitive sensor design equipped
with a $10$~mm~$\times$~$10$~mm antenna fabricated on a flexible PCB substrate.
Secondly, it introduces a new mechanism to transduce the force information
onto ambient RF radiation that can be read wirelessly by a remotely located
reader without requiring any battery or active components at the force sensor,
by interfacing the sensors with COTS RFID systems. The sensor can detect forces in
the range of $0$-$6$~N with a sensing accuracy of $<0.5$~N across multiple
testing environments, evaluated with over $10,000$ presses on the sensor at
varying force levels. We also showcase two application case studies with our
designed sensors, weighing warehouse packages and sensing forces applied by
bone joints.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 17:33:58 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Gupta",
"Agrim",
""
],
[
"Park",
"Daegue",
""
],
[
"Bashar",
"Shayaun",
""
],
[
"Girerd",
"Cedric",
""
],
[
"Morimoto",
"Tania",
""
],
[
"Bharadia",
"Dinesh",
""
]
] |
new_dataset
| 0.999838 |
1904.01497
|
Srushti Rath
|
Srushti Rath, Joseph Y.J. Chow
|
Air Taxi Skyport Location Problem for Airport Access
|
25 pages
|
Journal of Air Transport Management. 105 (2022) 102294
|
10.1016/j.jairtraman.2022.102294
| null |
cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Witnessing the rapid progress and accelerated commercialization made in
recent years for the introduction of air taxi services in near future across
metropolitan cities, our research focuses on one of the most important
consideration for such services, i.e., infrastructure planning (also known as
skyports). We consider design of skyport locations for air taxis accessing
airports, where we present the skyport location problem as a modified
single-allocation p-hub median location problem integrating choice-constrained
user mode choice behavior into the decision process. Our approach focuses on
two alternative objectives, i.e., maximizing air taxi ridership and maximizing
air taxi revenue. The proposed models in the study incorporate trade-offs
between trip length and trip cost based on mode choice behavior of travelers to
determine optimal choices of skyports in an urban city. We examine the
sensitivity of skyport locations based on two objectives, three air taxi
pricing strategies, and varying transfer times at skyports. A case study of New
York City is conducted considering a network of 149 taxi zones and 3 airports,
with over 20 million for-hire-vehicle trips to the airports, to discuss
insights around the choice of skyport locations in the city, and demand
allocation to different skyports under various parameter settings. Results
suggest that a minimum of 9 skyports located between Manhattan, Queens and
Brooklyn can adequately accommodate the airport access travel needs and are
sufficiently stable against transfer time increases. Findings from this study
can help air taxi providers strategize infrastructure design options and
investment decisions based on skyport location choices.
|
[
{
"version": "v1",
"created": "Mon, 1 Apr 2019 01:00:49 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Apr 2019 01:12:50 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Mar 2020 03:21:53 GMT"
},
{
"version": "v4",
"created": "Mon, 27 Sep 2021 23:21:08 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Rath",
"Srushti",
""
],
[
"Chow",
"Joseph Y. J.",
""
]
] |
new_dataset
| 0.97968 |
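The skyport model in the record above (1904.01497) is a modified
single-allocation p-hub median problem. For orientation only, the classical
p-median skeleton that such hub-location models build on is shown below; the
paper's actual formulation additionally integrates choice-constrained mode
choice behavior, which is not reproduced here.

```latex
% Classical p-median skeleton (background only): open p facilities y_j and
% assign each demand node i to one open facility x_{ij} at minimum cost.
\min_{x,y}\; \sum_{i}\sum_{j} d_{ij}\, x_{ij}
\quad\text{s.t.}\quad
\sum_{j} x_{ij} = 1 \;\;\forall i,\qquad
x_{ij} \le y_j \;\;\forall i,j,\qquad
\sum_{j} y_j = p,\qquad
x_{ij},\, y_j \in \{0,1\}
```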
2011.00617
|
Brittany Story
|
Henry Adams, Elin Farnell, Brittany Story
|
Support vector machines and Radon's theorem
| null | null | null | null |
cs.LG math.CO math.GN math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A support vector machine (SVM) is an algorithm that finds a hyperplane which
optimally separates labeled data points in $\mathbb{R}^n$ into positive and
negative classes. The data points on the margin of this separating hyperplane
are called support vectors. We connect the possible configurations of support
vectors to Radon's theorem, which provides guarantees for when a set of points
can be divided into two classes (positive and negative) whose convex hulls
intersect. If the convex hulls of the positive and negative support vectors are
projected onto a separating hyperplane, then the projections intersect if and
only if the hyperplane is optimal. Further, with a particular type of general
position, we show that (a) the projected convex hulls of the support vectors
intersect in exactly one point, (b) the support vectors are stable under
perturbation, (c) there are at most $n+1$ support vectors, and (d) every number
of support vectors from 2 up to $n+1$ is possible. Finally, we perform computer
simulations studying the expected number of support vectors, and their
configurations, for randomly generated data. We observe that as the distance
between classes of points increases for this type of randomly generated data,
configurations with fewer support vectors become more likely.
|
[
{
"version": "v1",
"created": "Sun, 1 Nov 2020 19:57:46 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jan 2022 21:44:04 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Sep 2022 17:38:22 GMT"
},
{
"version": "v4",
"created": "Fri, 16 Sep 2022 14:39:12 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Adams",
"Henry",
""
],
[
"Farnell",
"Elin",
""
],
[
"Story",
"Brittany",
""
]
] |
new_dataset
| 0.972747 |
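The bound in the record above (2011.00617) — at most $n+1$ support vectors
under a particular type of general position — is easy to probe empirically.
Below is a minimal scikit-learn sketch of our own; the random data and the
large-C hard-margin approximation are assumptions, not the authors' simulation
code.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 4                                    # ambient dimension R^n
# Two well-separated Gaussian clouds (illustrative data only).
X = np.vstack([rng.normal(-2.0, 1.0, (50, n)),
               rng.normal(+2.0, 1.0, (50, n))])
y = np.array([0] * 50 + [1] * 50)

# A large C makes the soft-margin linear SVM approximate a hard margin.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# For separable data in general position we expect 2..n+1 support vectors.
print("support vectors:", clf.n_support_.sum(), "| n + 1 =", n + 1)
```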
2011.00753
|
Subangkar Karmaker Shanto
|
Sarkar Snigdha Sarathi Das, Subangkar Karmaker Shanto, Masum Rahman,
Md. Saiful Islam, Atif Rahman, Mohammad Mehedy Masud, Mohammed Eunus Ali
|
BayesBeat: Reliable Atrial Fibrillation Detection from Noisy
Photoplethysmography Data
|
IMWUT March 2022, Vol 6 Article 8 (UbiComp 2022)
|
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 1,
Article 8 (March 2022), 21 pages
|
10.1145/3517247
| null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smartwatches or fitness trackers have garnered a lot of popularity as
potential health tracking devices due to their affordable and longitudinal
monitoring capabilities. To further widen their health tracking capabilities,
in recent years researchers have started to look into the possibility of Atrial
Fibrillation (AF) detection in real-time leveraging photoplethysmography (PPG)
data, an inexpensive sensor widely available in almost all smartwatches. A
significant challenge in AF detection from PPG signals comes from the inherent
noise in the smartwatch PPG signals. In this paper, we propose a novel deep
learning based approach, BayesBeat that leverages the power of Bayesian deep
learning to accurately infer AF risks from noisy PPG signals, and at the same
time provides an uncertainty estimate of the prediction. Extensive experiments
on two publicly available datasets reveal that our proposed method BayesBeat
outperforms the existing state-of-the-art methods. Moreover, BayesBeat is
substantially more efficient, having 40-200X fewer parameters than
state-of-the-art baseline approaches, making it suitable for deployment in
resource-constrained wearable devices.
|
[
{
"version": "v1",
"created": "Mon, 2 Nov 2020 05:20:32 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 12:45:01 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Das",
"Sarkar Snigdha Sarathi",
""
],
[
"Shanto",
"Subangkar Karmaker",
""
],
[
"Rahman",
"Masum",
""
],
[
"Islam",
"Md. Saiful",
""
],
[
"Rahman",
"Atif",
""
],
[
"Masud",
"Mohammad Mehedy",
""
],
[
"Ali",
"Mohammed Eunus",
""
]
] |
new_dataset
| 0.999095 |
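The BayesBeat abstract above does not specify its network, so the sketch below
only illustrates the general recipe it alludes to: a Bayesian-style predictor
that returns both an AF probability and an uncertainty estimate. Monte Carlo
dropout is used here as a stand-in technique, with made-up layer sizes; none of
this is the paper's architecture.

```python
import torch
import torch.nn as nn

# Toy 1-D conv classifier over a PPG window; sizes are made up for the sketch.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5), nn.ReLU(), nn.Dropout(0.2),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 1),
)

def mc_dropout_predict(x, passes=30):
    """Mean AF probability and its spread over stochastic forward passes."""
    model.train()                        # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(passes)])
    return probs.mean(0), probs.std(0)   # prediction, uncertainty estimate

ppg = torch.randn(4, 1, 256)             # batch of 4 fake PPG windows
mean, std = mc_dropout_predict(ppg)
print(mean.squeeze(), std.squeeze())
```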
2111.10153
|
Mohammed Alghazwi
|
Mohammed Alghazwi, Fatih Turkmen, Joeri van der Velde, Dimka
Karastoyanova
|
Blockchain for Genomics: A Systematic Literature Review
|
Literature review updated to cover recently published papers on
blockchain and genomics
| null |
10.1145/3563044
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Human genomic data carry unique information about an individual and offer
unprecedented opportunities for healthcare. The clinical interpretations
derived from large genomic datasets can greatly improve healthcare and pave the
way for personalized medicine. Sharing genomic datasets, however, poses major
challenges, as genomic data is different from traditional medical data,
indirectly revealing information about descendants and relatives of the data
owner and carrying valid information even after the owner passes away.
Therefore, stringent data ownership and control measures are required when
dealing with genomic data. In order to provide secure and accountable
infrastructure, blockchain technologies offer a promising alternative to
traditional distributed systems. Indeed, the research on blockchain-based
infrastructures tailored to genomics is on the rise. However, there is a lack
of a comprehensive literature review that summarizes the current
state-of-the-art methods in the applications of blockchain in genomics. In this
paper, we systematically look at the existing work, both commercial and
academic, and discuss the major opportunities and challenges. Our study is
driven by five research questions that we aim to answer in our review. We also
present our projections of future research directions which we hope the
researchers interested in the area can benefit from.
|
[
{
"version": "v1",
"created": "Fri, 19 Nov 2021 10:59:32 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 10:06:09 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Alghazwi",
"Mohammed",
""
],
[
"Turkmen",
"Fatih",
""
],
[
"van der Velde",
"Joeri",
""
],
[
"Karastoyanova",
"Dimka",
""
]
] |
new_dataset
| 0.956517 |
2203.14455
|
Margaret Coad
|
Nelson G. Badillo Perez and Margaret M. Coad
|
Self-Propelled Soft Everting Toroidal Robot for Navigation and Climbing
in Confined Spaces
|
7 pages and 8 figures. Accepted to IEEE Conference on Intelligent
Robots and Systems (IROS 2022). Video available at
https://youtu.be/R0TlKPLbM9Y
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are many spaces inaccessible to humans where robots could help deliver
sensors and equipment. Many of these spaces contain three-dimensional
passageways and uneven terrain that pose challenges for robot design and
control. Everting toroidal robots, which move via simultaneous eversion and
inversion of their body material, are promising for navigation in these types
of spaces. We present a novel soft everting toroidal robot that propels itself
using a motorized device inside an air-filled membrane. Our robot requires only
a single control signal to move, can conform to its environment, and can climb
vertically with a motor torque that is independent of the force used to brace
the robot against its environment. We derive and validate models of the forces
involved in its motion, and we demonstrate the robot's ability to navigate a
maze and climb a pipe.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 02:44:47 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2022 05:37:16 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Sep 2022 21:12:54 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Perez",
"Nelson G. Badillo",
""
],
[
"Coad",
"Margaret M.",
""
]
] |
new_dataset
| 0.99841 |
2203.16995
|
Sajjad Heydari
|
Sajjad Heydari, Lorenzo Livi
|
Message Passing Neural Networks for Hypergraphs
| null | null |
10.1007/978-3-031-15931-2_48
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hypergraph representations are both more efficient and better suited to
describe data characterized by relations between two or more objects. In this
work, we present a new graph neural network based on message passing capable of
processing hypergraph-structured data. We show that the proposed model defines
a design space for neural network models for hypergraphs, thus generalizing
existing models for hypergraphs. We report experiments on a benchmark dataset
for node classification, highlighting the effectiveness of the proposed model
with respect to other state-of-the-art methods for graphs and hypergraphs. We
also discuss the benefits of using hypergraph representations and, at the same
time, highlight the limitation of using equivalent graph representations when
the underlying problem has relations among more than two objects.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 12:38:22 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 15:25:01 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Heydari",
"Sajjad",
""
],
[
"Livi",
"Lorenzo",
""
]
] |
new_dataset
| 0.997554 |
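The record above (2203.16995) does not detail its message-passing scheme; a
common two-stage pattern for hypergraphs — aggregate node features into
hyperedge messages, then back to nodes via the incidence matrix — is sketched
below as an illustrative baseline, not the authors' model.

```python
import numpy as np

# Incidence matrix H: H[v, e] = 1 if node v belongs to hyperedge e.
# (Toy hypergraph: 4 nodes, 2 hyperedges, one 3-way relation.)
H = np.array([[1, 0],
              [1, 1],
              [1, 0],
              [0, 1]], dtype=float)
X = np.random.randn(4, 8)            # node features

def hypergraph_mp(X, H):
    """One round of node -> hyperedge -> node mean aggregation."""
    edge_deg = H.sum(axis=0, keepdims=True)   # nodes per hyperedge
    node_deg = H.sum(axis=1, keepdims=True)   # hyperedges per node
    E = (H / edge_deg).T @ X                  # hyperedge messages
    return (H / node_deg) @ E                 # back to node features

print(hypergraph_mp(X, H).shape)      # (4, 8)
```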
2204.00743
|
David Wadden
|
David Wadden, Nikita Gupta, Kenton Lee, Kristina Toutanova
|
Entity-Centric Query Refinement
|
AKBC 2022
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the task of entity-centric query refinement. Given an input
query whose answer is a (potentially large) collection of entities, the task
output is a small set of query refinements meant to assist the user in
efficient domain exploration and entity discovery. We propose a method to
create a training dataset for this task. For a given input query, we use an
existing knowledge base taxonomy as a source of candidate query refinements,
and choose a final set of refinements from among these candidates using a
search procedure designed to partition the set of entities answering the input
query. We demonstrate that our approach identifies refinement sets which human
annotators judge to be interesting, comprehensive, and non-redundant. In
addition, we find that a text generation model trained on our newly-constructed
dataset is able to offer refinements for novel queries not covered by an
existing taxonomy. Our code and data are available at
https://github.com/google-research/language/tree/master/language/qresp.
|
[
{
"version": "v1",
"created": "Sat, 2 Apr 2022 02:19:47 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 22:09:48 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Wadden",
"David",
""
],
[
"Gupta",
"Nikita",
""
],
[
"Lee",
"Kenton",
""
],
[
"Toutanova",
"Kristina",
""
]
] |
new_dataset
| 0.99788 |
2207.14686
|
Denise Moussa
|
Denise Moussa, Anatol Maier, Andreas Spruck, J\"urgen Seiler,
Christian Riess
|
Forensic License Plate Recognition with Compression-Informed
Transformers
|
Accepted at ICIP 2022, Code:
https://faui1-gitlab.cs.fau.de/denise.moussa/forensic-license-plate-transformer/
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Forensic license plate recognition (FLPR) remains an open challenge in legal
contexts such as criminal investigations, where unreadable license plates (LPs)
need to be deciphered from highly compressed and/or low resolution footage,
e.g., from surveillance cameras. In this work, we propose a side-informed
Transformer architecture that embeds knowledge on the input compression level
to improve recognition under strong compression. We show the effectiveness of
Transformers for license plate recognition (LPR) on a low-quality real-world
dataset. We also provide a synthetic dataset that includes strongly degraded,
illegible LP images and analyze the impact of knowledge embedding on it. The
network outperforms existing FLPR methods and standard state-of-the-art image
recognition models while requiring fewer parameters. For the most severely
degraded images, we can improve recognition by up to 8.9 percentage points.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 13:58:24 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 13:45:56 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Moussa",
"Denise",
""
],
[
"Maier",
"Anatol",
""
],
[
"Spruck",
"Andreas",
""
],
[
"Seiler",
"Jürgen",
""
],
[
"Riess",
"Christian",
""
]
] |
new_dataset
| 0.999871 |
2209.07216
|
Daniel Loureiro
|
Daniel Loureiro, Aminette D'Souza, Areej Nasser Muhajab, Isabella A.
White, Gabriel Wong, Luis Espinosa Anke, Leonardo Neves, Francesco Barbieri,
Jose Camacho-Collados
|
TempoWiC: An Evaluation Benchmark for Detecting Meaning Shift in Social
Media
|
Accepted to COLING 2022. Used to create the TempoWiC Shared Task for
EvoNLP
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language evolves over time, and word meaning changes accordingly. This is
especially true in social media, since its dynamic nature leads to faster
semantic shifts, making it challenging for NLP models to deal with new content
and trends. However, the number of datasets and models that specifically
address the dynamic nature of these social platforms is scarce. To bridge this
gap, we present TempoWiC, a new benchmark especially aimed at accelerating
research in social media-based meaning shift. Our results show that TempoWiC is
a challenging benchmark, even for recently-released language models specialized
in social media.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 11:17:56 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 16:54:46 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Loureiro",
"Daniel",
""
],
[
"D'Souza",
"Aminette",
""
],
[
"Muhajab",
"Areej Nasser",
""
],
[
"White",
"Isabella A.",
""
],
[
"Wong",
"Gabriel",
""
],
[
"Anke",
"Luis Espinosa",
""
],
[
"Neves",
"Leonardo",
""
],
[
"Barbieri",
"Francesco",
""
],
[
"Camacho-Collados",
"Jose",
""
]
] |
new_dataset
| 0.999545 |
2209.07299
|
Chen Chen
|
Chen Chen, Yufei Wang, Bing Li and Kwok-Yan Lam
|
Knowledge Is Flat: A Seq2Seq Generative Framework for Various Knowledge
Graph Completion
|
COLING 2022 Main Conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge Graph Completion (KGC) has been recently extended to multiple
knowledge graph (KG) structures, initiating new research directions, e.g.
static KGC, temporal KGC and few-shot KGC. Previous works often design KGC
models closely coupled with specific graph structures, which inevitably results
in two drawbacks: 1) structure-specific KGC models are mutually incompatible;
2) existing KGC methods are not adaptable to emerging KGs. In this paper, we
propose KG-S2S, a Seq2Seq generative framework that could tackle different
verbalizable graph structures by unifying the representation of KG facts into
"flat" text, regardless of their original form. To remedy the KG structure
information loss from the "flat" text, we further improve the input
representations of entities and relations, and the inference algorithm in
KG-S2S. Experiments on five benchmarks show that KG-S2S outperforms many
competitive baselines, setting new state-of-the-art performance. Finally, we
analyze KG-S2S's ability on the different relations and the Non-entity
Generations.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 13:49:40 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 08:15:55 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Chen",
"Chen",
""
],
[
"Wang",
"Yufei",
""
],
[
"Li",
"Bing",
""
],
[
"Lam",
"Kwok-Yan",
""
]
] |
new_dataset
| 0.997885 |
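The central idea of KG-S2S above — unifying KG facts as "flat" text for a
seq2seq model — can be illustrated with a trivial verbalizer. The template and
placeholder token below are purely hypothetical; the paper's actual input
representations are richer.

```python
# Hypothetical verbalizer turning KG-completion queries into flat text;
# the real KG-S2S templates/special tokens may differ.
def flatten_query(head, relation, timestamp=None):
    """Serialize a (head, relation, ?tail) query as plain text."""
    parts = [f"head: {head}", f"relation: {relation}", "tail: <extra_id_0>"]
    if timestamp is not None:              # temporal KGC adds a time field
        parts.insert(2, f"time: {timestamp}")
    return " | ".join(parts)

print(flatten_query("Barack Obama", "born in"))
print(flatten_query("Barack Obama", "president of", timestamp="2009"))
# A seq2seq model is then trained to generate the tail entity as text.
```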
2209.07550
|
Steven Kapturowski
|
Steven Kapturowski, V\'ictor Campos, Ray Jiang, Nemanja Raki\'cevi\'c,
Hado van Hasselt, Charles Blundell, Adri\`a Puigdom\`enech Badia
|
Human-level Atari 200x faster
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The task of building general agents that perform well over a wide range of
tasks has been an important goal in reinforcement learning since its inception.
The problem has been the subject of a large body of research, with
performance frequently measured by observing scores over the wide range of
environments contained in the Atari 57 benchmark. Agent57 was the first agent
to surpass the human benchmark on all 57 games, but this came at the cost of
poor data-efficiency, requiring nearly 80 billion frames of experience to
achieve. Taking Agent57 as a starting point, we employ a diverse set of
strategies to achieve a 200-fold reduction in the experience needed to outperform
the human baseline. We investigate a range of instabilities and bottlenecks we
encountered while reducing the data regime, and propose effective solutions to
build a more robust and efficient agent. We also demonstrate competitive
performance with high-performing methods such as Muesli and MuZero. The four
key components of our approach are (1) an approximate trust region method which
enables stable bootstrapping from the online network, (2) a normalisation
scheme for the loss and priorities which improves robustness when learning a
set of value functions with a wide range of scales, (3) an improved
architecture employing techniques from NFNets in order to leverage deeper
networks without the need for normalization layers, and (4) a policy
distillation method which serves to smooth out the instantaneous greedy policy
over time.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 18:08:48 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Kapturowski",
"Steven",
""
],
[
"Campos",
"Víctor",
""
],
[
"Jiang",
"Ray",
""
],
[
"Rakićević",
"Nemanja",
""
],
[
"van Hasselt",
"Hado",
""
],
[
"Blundell",
"Charles",
""
],
[
"Badia",
"Adrià Puigdomènech",
""
]
] |
new_dataset
| 0.999041 |
2209.07552
|
Jieyang Chen
|
Jieyang Chen, Chenhao Xie, Jesun S Firoz, Jiajia Li, Shuaiwen Leon
Song, Kevin Barker, Mark Raugas, and Ang Li
|
MSREP: A Fast yet Light Sparse Matrix Framework for Multi-GPU Systems
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Sparse linear algebra kernels play a critical role in numerous applications,
ranging from exascale scientific simulation to large-scale data analytics.
Offloading linear algebra kernels onto one GPU will no longer be viable in these
applications, simply because the rapidly growing data volume may exceed the
memory capacity and computing power of a single GPU. Multi-GPU systems, nowadays
ubiquitous in supercomputers and data centers, present great potential for
scaling up large sparse linear algebra kernels. In this work, we design a novel
sparse matrix representation framework for multi-GPU systems called MSREP, to
scale sparse linear algebra operations based on our augmented sparse matrix
formats in a balanced pattern. Different from dense operations, sparsity
significantly intensifies the difficulty of distributing the computation
workload among multiple GPUs in a balanced manner. We enhance three mainstream
sparse data formats -- CSR, CSC, and COO, to enable fine-grained data
distribution. We take sparse matrix-vector multiplication (SpMV) as an example
to demonstrate the efficiency of our MSREP framework. In addition, MSREP can be
easily extended to support other sparse linear algebra kernels based on the
three fundamental formats (i.e., CSR, CSC and COO).
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 18:14:29 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Chen",
"Jieyang",
""
],
[
"Xie",
"Chenhao",
""
],
[
"Firoz",
"Jesun S",
""
],
[
"Li",
"Jiajia",
""
],
[
"Song",
"Shuaiwen Leon",
""
],
[
"Barker",
"Kevin",
""
],
[
"Raugas",
"Mark",
""
],
[
"Li",
"Ang",
""
]
] |
new_dataset
| 0.97032 |
2209.07582
|
Ashok Urlana
|
Chakravarthi J, Vinod Babu P, Pavan B, Ashok U, Marek Kolencik, Martin
\v{S}ebesta, Ramakanth Illa
|
Bflier's: A Novel Butterfly Inspired Multi-robotic Model in Search of
Signal Sources
|
12 pages, 17 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Nature's diversified ecology exhibits various forms of swarm behavior in
many species. Butterflies are among the most prominent, and their seemingly
random flights offer insight; converting them into an artificial metaphor
opens enormous possibilities. This paper considers one such metaphor, known
as Butterfly Mating Optimization (BMO). In BMO, the Bfly follows the
patrolling mating phenomenon and simultaneously captures all the local
optima of multimodal functions. To imitate this algorithm, a mobile robot
(Bflybot) was designed to meet the features of the Bfly in the BMO algorithm.
A multi-Bflybot swarm was also designed to act like butterflies in nature and
follow the algorithm's rules. Real-time experiments on the BMO algorithm were
performed in a multi-robot arena, with a light source serving as the signal
source. The experimental results show that the BMO algorithm is able to
detect multiple signal sources with significant variations in their
movements, i.e., static and dynamic. In the case of static signal sources,
varying the initial locations of the Bflybots affects convergence in terms of
time and smoothness, whereas varying the step size changes the execution time
and speed of the bots. Experiments were also performed in a dynamic
environment where the signal source moves in both maneuvering and
non-maneuvering scenarios. The Bflybot swarm is able to detect single and
multiple signal sources moving linearly between two fixed points, in circles,
and up and down. To evaluate the BMO phenomenon, various ongoing and
prospective applications such as mid-sea ship detection, aerial search, and
earthquake prediction are discussed.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 19:32:57 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"J",
"Chakravarthi",
""
],
[
"P",
"Vinod Babu",
""
],
[
"B",
"Pavan",
""
],
[
"U",
"Ashok",
""
],
[
"Kolencik",
"Marek",
""
],
[
"Šebesta",
"Martin",
""
],
[
"Illa",
"Ramakanth",
""
]
] |
new_dataset
| 0.998386 |
2209.07619
|
Jaka \v{S}ircelj
|
Jaka \v{S}ircelj, Peter Peer, Franc Solina, Vitomir \v{S}truc
|
Hierarchical Superquadric Decomposition with Implicit Space Separation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new method to reconstruct 3D objects using a set of volumetric
primitives, i.e., superquadrics. The method hierarchically decomposes a target
3D object into pairs of superquadrics recovering finer and finer details. While
such hierarchical methods have been studied before, we introduce a new way of
splitting the object space using only properties of the predicted
superquadrics. The method is trained and evaluated on the ShapeNet dataset. The
results of our experiments suggest that reasonable reconstructions can be
obtained with the proposed approach for a diverse set of objects with complex
geometry.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 21:34:46 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Šircelj",
"Jaka",
""
],
[
"Peer",
"Peter",
""
],
[
"Solina",
"Franc",
""
],
[
"Štruc",
"Vitomir",
""
]
] |
new_dataset
| 0.9941 |
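For reference alongside the record above (2209.07619), the standard
superquadric inside-outside function, with size parameters $a_1,a_2,a_3$ and
shape exponents $\epsilon_1,\epsilon_2$, is the classical form below; the
paper's hierarchical splitting criterion is not reproduced.

```latex
% Standard superquadric inside-outside function (classical form):
% F = 1 on the surface, F < 1 inside, F > 1 outside.
F(x,y,z) =
  \left( \left(\frac{x}{a_1}\right)^{\frac{2}{\epsilon_2}}
       + \left(\frac{y}{a_2}\right)^{\frac{2}{\epsilon_2}}
  \right)^{\frac{\epsilon_2}{\epsilon_1}}
  + \left(\frac{z}{a_3}\right)^{\frac{2}{\epsilon_1}}
```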
2209.07620
|
Pino Caballero-Gil
|
J Toledo-Castro, I Santos-Gonz\'alez, P Caballero-Gil, C
Hern\'andez-Goya, N Rodr\'iguez-P\'erez, R Aguasca-Colomo
|
Fuzzy-based forest fire prevention and detection by wireless sensor
networks
| null |
The 13th International Conference on Soft Computing Models in
Industrial and Environmental Applications, 2018
| null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Forest fires may cause considerable damage both to ecosystems and to lives.
This proposal describes the application of the Internet of Things and wireless
sensor networks, jointly with multi-hop routing, in a real-time, dynamic
monitoring system for forest fire prevention. It is based on gathering and
analyzing information related to meteorological conditions, concentrations of
polluting gases, and the oxygen level around forest areas of particular interest.
Unusual measurements of these environmental variables may help to prevent
wildfire incidents and make their detection more efficient. A forest fire risk
controller based on fuzzy logic has been implemented in order to activate
environmental risk alerts through a Web service and a mobile application. For
this purpose, security mechanisms have been proposed for ensuring integrity and
confidentiality in the transmission of measured environmental information.
Lamport's signature and a block cipher algorithm are used to achieve this
objective.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 21:37:02 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Toledo-Castro",
"J",
""
],
[
"Santos-González",
"I",
""
],
[
"Caballero-Gil",
"P",
""
],
[
"Hernández-Goya",
"C",
""
],
[
"Rodríguez-Pérez",
"N",
""
],
[
"Aguasca-Colomo",
"R",
""
]
] |
new_dataset
| 0.984249 |
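The record above (2209.07620) says only that the risk controller is based on
fuzzy logic. A minimal sketch of that style of controller follows, with
triangular membership functions and a single max-min rule; all variable
choices and breakpoints are invented for illustration.

```python
# Minimal fuzzy-style fire-risk sketch; membership breakpoints and the rule
# are invented for illustration, not taken from the deployed controller.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fire_risk(temp_c, humidity_pct, co_ppm):
    hot = tri(temp_c, 25, 40, 55)
    dry = tri(100 - humidity_pct, 40, 70, 100)
    smoky = tri(co_ppm, 5, 30, 60)
    # Rule: risk is high if it is hot AND dry, OR if CO is elevated.
    return max(min(hot, dry), smoky)

print(fire_risk(temp_c=38, humidity_pct=20, co_ppm=12))
```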
2209.07654
|
Shuo Yang
|
Shuo Yang, Zixin Zhang, Zhengyu Fu, and Zachary Manchester
|
Cerberus: Low-Drift Visual-Inertial-Leg Odometry For Agile Locomotion
|
7 pages, 6 figures, submitted to IEEE ICRA 2023
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an open-source Visual-Inertial-Leg Odometry (VILO) state
estimation solution, Cerberus, for legged robots that estimates position
precisely on various terrains in real time using a set of standard sensors,
including stereo cameras, IMU, joint encoders, and contact sensors. In addition
to estimating robot states, we also perform online kinematic parameter
calibration and contact outlier rejection to substantially reduce position
drift. Hardware experiments in various indoor and outdoor environments validate
that calibrating kinematic parameters within Cerberus can reduce estimation
drift to below 1% during long-distance, high-speed locomotion. Our drift
results are better than any other state estimation method using the same set of
sensors reported in the literature. Moreover, our state estimator performs well
even when the robot is experiencing large impacts and camera occlusion. The
implementation of the state estimator, along with the datasets used to compute
our results, are available at https://github.com/ShuoYangRobotics/Cerberus.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 00:21:37 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Yang",
"Shuo",
""
],
[
"Zhang",
"Zixin",
""
],
[
"Fu",
"Zhengyu",
""
],
[
"Manchester",
"Zachary",
""
]
] |
new_dataset
| 0.995232 |
2209.07678
|
Dawei Zhu
|
Dawei Zhu, Qiusi Zhan, Zhejian Zhou, Yifan Song, Jiebin Zhang, Sujian
Li
|
ConFiguRe: Exploring Discourse-level Chinese Figures of Speech
|
Accepted to Coling 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Figures of speech, such as metaphor and irony, are ubiquitous in literary
works and colloquial conversations. This poses a great challenge for natural
language understanding since figures of speech usually deviate from their
ostensible meanings to express deeper semantic implications. Previous research
lays emphasis on the literary aspect of figures and seldom provides a
comprehensive exploration from the viewpoint of computational linguistics. In this
paper, we first propose the concept of figurative unit, which is the carrier of
a figure. Then we select 12 types of figures commonly used in Chinese, and
build a Chinese corpus for Contextualized Figure Recognition (ConFiguRe).
Different from previous token-level or sentence-level counterparts, ConFiguRe
aims at extracting a figurative unit from discourse-level context, and
classifying the figurative unit into the right figure type. On ConFiguRe, three
tasks, i.e., figure extraction, figure type classification and figure
recognition, are designed and the state-of-the-art techniques are utilized to
implement the benchmarks. We conduct thorough experiments and show that all
three tasks are challenging for existing models, thus requiring further
research. Our dataset and code are publicly available at
https://github.com/pku-tangent/ConFiguRe.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 02:31:48 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Zhu",
"Dawei",
""
],
[
"Zhan",
"Qiusi",
""
],
[
"Zhou",
"Zhejian",
""
],
[
"Song",
"Yifan",
""
],
[
"Zhang",
"Jiebin",
""
],
[
"Li",
"Sujian",
""
]
] |
new_dataset
| 0.997274 |
2209.07683
|
Dayang Wang
|
Dayang Wang, Boce Zhang, Yongshun Xu, Yaguang Luo, Hengyong Yu
|
SQ-Swin: a Pretrained Siamese Quadratic Swin Transformer for Lettuce
Browning Prediction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Packaged fresh-cut lettuce is widely consumed as a major component of
vegetable salad owing to its high nutrition, freshness, and convenience.
However, enzymatic browning discoloration on lettuce cut edges significantly
reduces product quality and shelf life. While there are many research and
breeding efforts underway to minimize browning, the progress is hindered by the
lack of a rapid and reliable methodology to evaluate browning. Current methods
to identify and quantify browning are either too subjective, labor intensive,
or inaccurate. In this paper, we report a deep learning model for lettuce
browning prediction. To the best of our knowledge, it is the first deep
learning model for lettuce browning prediction, using a pretrained Siamese
Quadratic Swin (SQ-Swin) transformer with several highlights. First, our model
includes quadratic features in the transformer, which are more powerful than
linear features for incorporating real-world representations. Second, a
multi-scale training strategy is proposed to augment the data and explore more
of the inherent self-similarity of the lettuce images. Third, the proposed
model uses a siamese architecture which learns the inter-relations among the
limited training samples. Fourth, the model is pretrained on ImageNet and
then trained with the reptile meta-learning algorithm to learn higher-order
gradients than a regular one. Experiment results on the fresh-cut lettuce
datasets show that the proposed SQ-Swin outperforms the traditional methods and
other deep learning-based backbones.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 02:45:28 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Wang",
"Dayang",
""
],
[
"Zhang",
"Boce",
""
],
[
"Xu",
"Yongshun",
""
],
[
"Luo",
"Yaguang",
""
],
[
"Yu",
"Hengyong",
""
]
] |
new_dataset
| 0.999365 |
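The SQ-Swin abstract above states that quadratic features are added to the
transformer without giving their form. One quadratic-neuron formulation from
the literature is sketched below purely to illustrate why such units are more
expressive than linear ones; SQ-Swin's actual quadratic features may differ.

```python
import numpy as np

# One quadratic-neuron formulation from the literature (illustrative only):
#   f(x) = (w1.x + b1)(w2.x + b2) + w3.(x*x) + b3
# It reduces to an ordinary linear neuron when w2 = 0, b2 = 1, w3 = 0.
rng = np.random.default_rng(0)
d = 16
w1, w2, w3 = rng.normal(size=(3, d))
b1, b2, b3 = 0.1, -0.2, 0.05

def quadratic_neuron(x):
    """Pre-activation of a quadratic neuron on input vector x."""
    return (w1 @ x + b1) * (w2 @ x + b2) + w3 @ (x * x) + b3

x = rng.normal(size=d)
print(quadratic_neuron(x))
```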
2209.07742
|
Jihyun Lee
|
Jihyun Lee, Gary Geunbae Lee
|
SF-DST: Few-Shot Self-Feeding Reading Comprehension Dialogue State
Tracking with Auxiliary Task
|
Accepted in INTERSPEECH 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A few-shot dialogue state tracking (DST) model tracks user requests in dialogue
with reliable accuracy even with a small amount of data. In this paper, we
introduce an ontology-free few-shot DST with self-feeding belief state input.
The self-feeding belief state input increases the accuracy in multi-turn
dialogue by summarizing the previous dialogue. We also developed a new
slot-gate auxiliary task, which helps classify whether a slot is
mentioned in the dialogue. Our model achieved the best score in a few-shot
setting for four domains on MultiWOZ 2.0.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 06:54:25 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Lee",
"Jihyun",
""
],
[
"Lee",
"Gary Geunbae",
""
]
] |
new_dataset
| 0.984719 |
2209.07760
|
Saku Sugawara
|
Mana Ashida, Saku Sugawara
|
Possible Stories: Evaluating Situated Commonsense Reasoning under
Multiple Possible Scenarios
|
Accepted to COLING 2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The possible consequences for the same context may vary depending on the
situation we refer to. However, current studies in natural language processing
do not focus on situated commonsense reasoning under multiple possible
scenarios. This study frames this task by asking multiple questions with the
same set of possible endings as candidate answers, given a short story text.
Our resulting dataset, Possible Stories, consists of more than 4.5K questions
over 1.3K story texts in English. We discover that even current strong
pretrained language models struggle to answer the questions consistently,
highlighting that the highest accuracy in an unsupervised setting (60.2%) is
far behind human accuracy (92.5%). Through a comparison with existing datasets,
we observe that the questions in our dataset contain minimal annotation
artifacts in the answer options. In addition, our dataset includes examples
that require counterfactual reasoning, as well as those requiring readers'
reactions and fictional information, suggesting that our dataset can serve as a
challenging testbed for future studies on situated commonsense reasoning.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 07:38:51 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Ashida",
"Mana",
""
],
[
"Sugawara",
"Saku",
""
]
] |
new_dataset
| 0.993283 |
2209.07775
|
Daniel Bermuth
|
Daniel Bermuth, Alexander Poeppel, Wolfgang Reif
|
Jaco: An Offline Running Privacy-aware Voice Assistant
| null |
In Proceedings of the 2022 ACM-IEEE International Conference on
Human-Robot Interaction (HRI 2022). IEEE Press, 618-622
|
10.5555/3523760.3523842
| null |
cs.CR cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
With the recent advance in speech technology, smart voice assistants have
been improved and are now used by many people. However, these assistants often
run online as a cloud service and are not always known for good
protection of users' privacy. This paper presents the architecture of a novel
voice assistant, called Jaco, with the following features: (a) It can run
completely offline, even on low-resource devices like a Raspberry Pi. (b)
Through a skill concept it can be easily extended. (c) The architectural focus
is on protecting users' privacy, but without restricting capabilities for
developers. (d) It supports multiple languages. (e) It is competitive with
other voice assistant solutions. In this respect the assistant combines and
extends the advantages of other approaches.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 08:03:46 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Bermuth",
"Daniel",
""
],
[
"Poeppel",
"Alexander",
""
],
[
"Reif",
"Wolfgang",
""
]
] |
new_dataset
| 0.99711 |
2209.07796
|
MD Romael Haque
|
MD Romael Haque and Sabirat Rubya
|
"For an App Supposed to Make Its Users Feel Better, It Sure is a Joke"
-- An Analysis of User Reviews of Mobile Mental Health Applications
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile mental health applications are seen as a promising way to fulfill the
growing need for mental health care. Although there are more than ten thousand
mental health apps available on app marketplaces, such as Google Play and Apple
App Store, many of them are not evidence-based, or have been minimally
evaluated or regulated. The real-life experience and concerns of the app users
are largely unknown. To address this knowledge gap, we analyzed 2159 user
reviews from 117 Android apps and 2764 user reviews from 76 iOS apps. Our
findings include critiques of inconsistent moderation standards and a
lack of transparency. App-embedded social features and chatbots were criticized
for providing little support during crises. We provide research and design
implications for future mental health app developers, discuss the necessity of
developing a comprehensive and centralized app development guideline, and
highlight the opportunities of incorporating existing AI technology in mental health
chatbots.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 08:53:26 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Haque",
"MD Romael",
""
],
[
"Rubya",
"Sabirat",
""
]
] |
new_dataset
| 0.998481 |
2209.07818
|
Leon Abdillah
|
Leon A. Abdillah, Azka Kurniasti
|
Mobile-Based COVID-19 Vaccination Registration Application Prototype
|
8 pages
|
SinkrOn, vol. 7, no. 3, pp. 2152-2159, 2022
|
10.33395/sinkron.v7i3.
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Information technology-based applications have entered the era of mobile
phones and smartphones, such as those running the Android or iOS operating
systems, and mobile application development has become a trend in today's
society. During the global COVID-19 pandemic in particular, almost all
activities are carried out remotely through mobile-based applications. To
prevent the spread of COVID-19, mass vaccination is offered to the public; so
that administering the vaccine does not create crowds, a mobile-based
application is needed, and so that the application can be developed properly,
a prototype must be made first. The prototyping process consists of 5 steps:
1) Quick plan, 2) Modeling quick design, 3) Construction of prototype, 4)
Deployment, delivery & feedback, and 5) Communication. This research uses the
InVision design tool, which supports prototype design for both mobile and web
versions and is widely used by digital companies around the world. The result
is a prototype application for registering vaccine participants via mobile
phones as well as the web. Programmers can readily translate the prototype
into a mobile-based application for online vaccine registration.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 09:35:32 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Abdillah",
"Leon A.",
""
],
[
"Kurniasti",
"Azka",
""
]
] |
new_dataset
| 0.998444 |
2209.07886
|
Deyou Zhang
|
Deyou Zhang, Ming Xiao, and Mikael Skoglund
|
Beam Tracking for Dynamic mmWave Channels: A New Training Beam Sequence
Design Approach
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we develop an efficient training beam sequence design approach
for millimeter wave MISO tracking systems. We impose a discrete state Markov
process assumption on the evolution of the angle of departure and introduce the
maximum a posteriori criterion to track it in each beam training period. Since
it is infeasible to derive an explicit expression for the resultant tracking
error probability, we turn to its upper bound, which possesses a closed-form
expression and is therefore leveraged as the objective function to optimize the
training beam sequence. Considering the complicated objective function and the
unit modulus constraints imposed by analog phase shifters, we resort to the
particle swarm algorithm to solve the formulated optimization problem.
Numerical results validate the superiority of the proposed training beam
sequence design approach.
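As background for the unit-modulus constraint mentioned above, the following is a minimal, self-contained sketch of particle swarm optimization over analog phase shifts. The antenna count, target angle, and the simple beamforming-gain objective are illustrative stand-ins for the paper's closed-form tracking-error upper bound, not its actual formulation.

```python
import numpy as np

# Toy particle swarm optimization (PSO) over unit-modulus analog
# beamforming weights. Each weight is parametrized as exp(j*theta), so
# the unit-modulus constraint imposed by phase shifters holds by
# construction. The objective is a plain beamforming gain toward one
# hypothetical angle of departure.

N = 16                                # number of antennas (illustrative)
target = 0.3                          # hypothetical AoD in radians
steer = np.exp(1j * np.pi * np.arange(N) * np.sin(target))

def objective(theta):
    """Negative beamforming gain; PSO minimizes this."""
    f = np.exp(1j * theta) / np.sqrt(N)
    return -np.abs(np.vdot(steer, f)) ** 2

rng = np.random.default_rng(0)
P, iters = 30, 300
pos = rng.uniform(0, 2 * np.pi, size=(P, N))      # particle phases
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((P, N)), rng.random((P, N))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel                    # phases are periodic, no clipping needed
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("achieved gain:", -objective(gbest))   # approaches the array gain N
```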
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 12:27:07 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Zhang",
"Deyou",
""
],
[
"Xiao",
"Ming",
""
],
[
"Skoglund",
"Mikael",
""
]
] |
new_dataset
| 0.964368 |
2209.07919
|
Yuhang Ming
|
Yuhang Ming, Weicai Ye, Andrew Calway
|
iDF-SLAM: End-to-End RGB-D SLAM with Neural Implicit Mapping and Deep
Feature Tracking
|
7 pages, 6 figures, 3 tables
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a novel end-to-end RGB-D SLAM, iDF-SLAM, which adopts a
feature-based deep neural tracker as the front-end and a NeRF-style neural
implicit mapper as the back-end. The neural implicit mapper is trained
on-the-fly, while the neural tracker, though pretrained on the ScanNet
dataset, is also finetuned along with the training of the neural implicit
mapper. Under such a design, our iDF-SLAM is capable of learning to use
scene-specific features for camera tracking, thus enabling lifelong learning of
the SLAM system. Both the training for the tracker and the mapper are
self-supervised without introducing ground truth poses. We test the performance
of our iDF-SLAM on the Replica and ScanNet datasets and compare the results to
the two recent NeRF-based neural SLAM systems. The proposed iDF-SLAM
demonstrates state-of-the-art results in terms of scene reconstruction and
competitive performance in camera tracking.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 13:32:57 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Ming",
"Yuhang",
""
],
[
"Ye",
"Weicai",
""
],
[
"Calway",
"Andrew",
""
]
] |
new_dataset
| 0.999667 |
2209.07936
|
Mingshuai Chen
|
Zhuoruo Zhang, Chenyang Yu, He Huang, Rui Chang, Mingshuai Chen,
Qinming Dai, Wenbo Shen, Yongwang Zhao, Kui Ren
|
PA-Boot: A Formally Verified Authentication Protocol for Multiprocessor
Secure Boot
|
Manuscript submitted to IEEE Trans. Dependable Secure Comput
| null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hardware supply-chain attacks are raising significant security threats to the
boot process of multiprocessor systems. This paper identifies a new, prevalent
hardware supply-chain attack surface that can bypass multiprocessor secure boot
due to the absence of processor-authentication mechanisms. To defend against
such attacks, we present PA-Boot, the first formally verified
processor-authentication protocol for secure boot in multiprocessor systems.
PA-Boot is proved functionally correct and is guaranteed to detect multiple
adversarial behaviors, e.g., processor replacements, man-in-the-middle attacks,
and tampering with certificates. The fine-grained formalization of PA-Boot and
its fully mechanized security proofs are carried out in the Isabelle/HOL
theorem prover with 306 lemmas/theorems and ~7,100 LoC. Experiments on a
proof-of-concept implementation indicate that PA-Boot can effectively identify
boot-process attacks with minor overhead and thereby improve the
security of multiprocessor systems.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 13:54:43 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Zhang",
"Zhuoruo",
""
],
[
"Yu",
"Chenyang",
""
],
[
"Huang",
"He",
""
],
[
"Chang",
"Rui",
""
],
[
"Chen",
"Mingshuai",
""
],
[
"Dai",
"Qinming",
""
],
[
"Shen",
"Wenbo",
""
],
[
"Zhao",
"Yongwang",
""
],
[
"Ren",
"Kui",
""
]
] |
new_dataset
| 0.993911 |
2209.07937
|
Yunliang Zhuang
|
Yunliang Zhuang, Zhuoran Zheng, Chen Lyu
|
DPFNet: A Dual-branch Dilated Network with Phase-aware Fourier
Convolution for Low-light Image Enhancement
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low-light image enhancement is a classical computer vision problem aiming to
recover normal-exposure images from low-light images. However, convolutional
neural networks commonly used in this field are good at sampling low-frequency
local structural features in the spatial domain, which leads to unclear texture
details of the reconstructed images. To alleviate this problem, we propose a
novel module using the Fourier coefficients, which can recover high-quality
texture details under the constraint of semantics in the frequency phase and
supplement the spatial domain. In addition, we design a simple and efficient
module for the image spatial domain using dilated convolutions with different
receptive fields to alleviate the loss of detail caused by frequent
downsampling. We integrate the above parts into an end-to-end dual branch
network and design a novel loss committee and an adaptive fusion module to
guide the network to flexibly combine spatial and frequency domain features to
generate more pleasing visual effects. Finally, we evaluate the proposed
network on public benchmarks. Extensive experimental results show that our
method outperforms many existing state-of-the-art ones, showing outstanding
performance and potential.
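For context on the phase-aware idea, here is a minimal numpy sketch of splitting an image spectrum into amplitude and phase and modifying only the amplitude. The global gain used here is a deliberately simple stand-in for a learned per-frequency module, and the random input stands in for a real low-light image.

```python
import numpy as np

# Amplitude/phase decomposition via the 2D FFT, the basic operation
# behind phase-aware frequency-domain modules: the phase carries the
# semantic structure, so only the amplitude is modified.

img = 0.2 * np.random.rand(64, 64).astype(np.float32)  # "low-light" input
spec = np.fft.fft2(img)
amplitude, phase = np.abs(spec), np.angle(spec)

enhanced_spec = (2.0 * amplitude) * np.exp(1j * phase)  # brighten amplitude
enhanced = np.real(np.fft.ifft2(enhanced_spec))         # phase unchanged

print(img.mean(), enhanced.mean())  # brightness doubles, structure intact
```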
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 13:56:09 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Zhuang",
"Yunliang",
""
],
[
"Zheng",
"Zhuoran",
""
],
[
"Lyu",
"Chen",
""
]
] |
new_dataset
| 0.988327 |
2209.07951
|
Junyi Ma
|
Junyi Ma, Xieyuanli Chen, Jingyi Xu, Guangming Xiong
|
SeqOT: A Spatial-Temporal Transformer Network for Place Recognition
Using Sequential LiDAR Data
|
Submitted to IEEE Transactions on Industrial Electronics
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Place recognition is an important component for autonomous vehicles to
achieve loop closing or global localization. In this paper, we tackle the
problem of place recognition based on sequential 3D LiDAR scans obtained by an
onboard LiDAR sensor. We propose a transformer-based network named SeqOT to
exploit the temporal and spatial information provided by sequential range
images generated from the LiDAR data. It uses multi-scale transformers to
generate a global descriptor for each sequence of LiDAR range images in an
end-to-end fashion. During online operation, our SeqOT finds similar places by
matching such descriptors between the current query sequence and those stored
in the map. We evaluate our approach on four datasets collected with different
types of LiDAR sensors in different environments. The experimental results show
that our method outperforms the state-of-the-art LiDAR-based place recognition
methods and generalizes well across different environments. Furthermore, our
method operates online faster than the frame rate of the sensor. The
implementation of our method is released as open source at:
https://github.com/BIT-MJY/SeqOT.
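A hedged sketch of what descriptor matching during online operation can look like; the random descriptors and the 0.8 threshold below are placeholders for the network's actual outputs and tuned settings, not SeqOT's.

```python
import numpy as np

# Place retrieval at query time: compare the global descriptor of the
# current sequence against all descriptors stored in the map and accept
# the best match above a similarity threshold.

rng = np.random.default_rng(1)
map_desc = rng.normal(size=(1000, 256))                  # map descriptors
map_desc /= np.linalg.norm(map_desc, axis=1, keepdims=True)
query = map_desc[42] + 0.05 * rng.normal(size=256)       # revisited place
query /= np.linalg.norm(query)

scores = map_desc @ query            # cosine similarity (unit vectors)
best = int(scores.argmax())
if scores[best] > 0.8:               # threshold tuned on validation data
    print(f"loop closure with map place {best}, score {scores[best]:.3f}")
else:
    print("no confident match")
```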
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 14:08:11 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Ma",
"Junyi",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Xu",
"Jingyi",
""
],
[
"Xiong",
"Guangming",
""
]
] |
new_dataset
| 0.999445 |
2209.07974
|
Carlos Hernandez-Olivan
|
Carlos Hernandez-Olivan, Jose R. Beltran
|
musicaiz: A Python Library for Symbolic Music Generation, Analysis and
Visualization
| null | null | null | null |
cs.SD cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, we present musicaiz, an object-oriented library for
analyzing, generating and evaluating symbolic music. The submodules of the
package allow the user to create symbolic music data from scratch, build
algorithms to analyze symbolic music, encode MIDI data as tokens to train deep
learning sequence models, modify existing music data and evaluate music
generation systems. The evaluation submodule builds on previous work to
objectively measure music generation systems and to be able to reproduce the
results of music generation models. The library is publicly available online.
We encourage the community to contribute and provide feedback.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 14:42:47 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Hernandez-Olivan",
"Carlos",
""
],
[
"Beltran",
"Jose R.",
""
]
] |
new_dataset
| 0.998684 |
2209.08000
|
Davide Salvi
|
Davide Salvi, Brian Hosler, Paolo Bestagini, Matthew C. Stamm, Stefano
Tubaro
|
TIMIT-TTS: a Text-to-Speech Dataset for Multimodal Synthetic Media
Detection
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of deep learning techniques, the generation and
counterfeiting of multimedia material are becoming increasingly straightforward
to perform. At the same time, sharing fake content on the web has become so
simple that malicious users can create unpleasant situations with minimal
effort. Also, forged media are getting more and more complex, with manipulated
videos overtaking still images. The multimedia forensic
community has addressed the possible threats that this situation could imply by
developing detectors that verify the authenticity of multimedia objects.
However, the vast majority of these tools only analyze one modality at a time.
This was not a problem as long as still images were considered the most widely
edited media, but now, since manipulated videos are becoming customary,
performing monomodal analyses could be reductive. Nonetheless, the literature
on multimodal detectors remains sparse, mainly due to the scarcity of
datasets containing forged multimodal data to train and test the designed
algorithms. In this paper we focus on the generation of an audio-visual
deepfake dataset. First, we present a general pipeline for synthesizing speech
deepfake content from a given real or fake video, facilitating the creation of
counterfeit multimodal material. The proposed method uses Text-to-Speech (TTS)
and Dynamic Time Warping techniques to achieve realistic speech tracks. Then,
we use the pipeline to generate and release TIMIT-TTS, a synthetic speech
dataset containing the most cutting-edge methods in the TTS field. This can be
used as a standalone audio dataset, or combined with other state-of-the-art
sets to perform multimodal research. Finally, we present numerous experiments
to benchmark the proposed dataset in both mono and multimodal conditions,
showing the need for multimodal forensic detectors and more suitable data.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 15:27:35 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Salvi",
"Davide",
""
],
[
"Hosler",
"Brian",
""
],
[
"Bestagini",
"Paolo",
""
],
[
"Stamm",
"Matthew C.",
""
],
[
"Tubaro",
"Stefano",
""
]
] |
new_dataset
| 0.999877 |
2209.08035
|
Arthur Juliani
|
Arthur Juliani, Margaret Sereno
|
A Biologically-Inspired Dual Stream World Model
| null | null | null | null |
cs.LG cs.NE q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The medial temporal lobe (MTL), a brain region containing the hippocampus and
nearby areas, is hypothesized to be an experience-construction system in
mammals, supporting both recall and imagination of temporally-extended
sequences of events. Such capabilities are also core to many recently proposed
"world models" in the field of AI research. Taking inspiration from this
connection, we propose a novel variant, the Dual Stream World Model (DSWM),
which learns from high-dimensional observations and dissociates them into
context and content streams. DSWM can reliably generate imagined trajectories
in novel 2D environments after only a single exposure, outperforming a standard
world model. DSWM also learns latent representations which bear a strong
resemblance to place cells found in the hippocampus. We show that this
representation is useful as a reinforcement learning basis function, and that
the generative model can be used to aid the policy learning process using
Dyna-like updates.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 16:27:48 GMT"
}
] | 2022-09-19T00:00:00 |
[
[
"Juliani",
"Arthur",
""
],
[
"Sereno",
"Margaret",
""
]
] |
new_dataset
| 0.993712 |
1811.11660
|
Michiel de Bondt
|
Michiel de Bondt
|
A short and elegant proof of a theorem of J.-E. Pin
|
11 pages, major update with new proof
| null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give a short proof of a theorem of J.-E. Pin (theorem 1.1 below), which
can be found in his thesis. The part of the proof which is my own (not Pin's)
is a complete replacement of the same part in an earlier version of this paper.
|
[
{
"version": "v1",
"created": "Wed, 28 Nov 2018 16:36:15 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2022 11:52:28 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Sep 2022 11:44:46 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"de Bondt",
"Michiel",
""
]
] |
new_dataset
| 0.994142 |
2101.10775
|
Leonardo Parisi
|
Andrea Cavagna, Xiao Feng, Stefania Melillo, Leonardo Parisi, Lorena
Postiglione, Pablo Villegas
|
CoMo: A novel co-moving 3D camera system
| null |
IEEE Trans. Instrum. Meas. 70: 1-16 (2021)
|
10.1109/TIM.2021.3074388
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the theoretical interest in reconstructing long 3D trajectories
of individual birds in large flocks, we developed CoMo, a co-moving camera
system of two synchronized high speed cameras coupled with rotational stages,
which allows us to dynamically follow the motion of a target flock. With the
rotation of the cameras we overcome the limitations of standard static systems
that restrict the duration of the collected data to the short interval of time
in which targets are in the cameras' common field of view, but at the same time
we change in time the external parameters of the system, which have then to be
calibrated frame-by-frame. We address the calibration of the external
parameters measuring the position of the cameras and their three angles of yaw,
pitch and roll in the system "home" configuration (rotational stage at an angle
equal to 0 deg), and combining this static information with the time-dependent
rotation due to the stages. We evaluate the robustness and accuracy of the
system by comparing reconstructed and measured 3D distances in what we call 3D
tests, which show a relative error of the order of 1%. The novelty of the work
presented in this paper is not only on the system itself, but also on the
approach we use in the tests, which we show to be a very powerful tool in
detecting and fixing calibration inaccuracies and that, for this reason, may be
relevant for a broad audience.
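To make the frame-by-frame calibration concrete, here is a small sketch of composing the static home orientation with the time-dependent stage rotation. The angle values are illustrative placeholders, and the stage is assumed to rotate about the vertical axis.

```python
import numpy as np

# Frame-by-frame external orientation of a co-moving camera: compose
# the static "home" orientation (yaw, pitch, roll measured once) with
# the rotation commanded to the stage at each frame.

def rot_z(a):  # yaw
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):  # pitch
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(a):  # roll
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

yaw, pitch, roll = 0.02, -0.01, 0.005          # home configuration (rad)
R_home = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)

def R_at_frame(stage_angle):
    """Camera orientation after the stage has rotated by stage_angle (rad)."""
    return rot_z(stage_angle) @ R_home         # stage assumed to spin about z

print(R_at_frame(np.deg2rad(15.0)))
```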
|
[
{
"version": "v1",
"created": "Tue, 26 Jan 2021 13:29:13 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Cavagna",
"Andrea",
""
],
[
"Feng",
"Xiao",
""
],
[
"Melillo",
"Stefania",
""
],
[
"Parisi",
"Leonardo",
""
],
[
"Postiglione",
"Lorena",
""
],
[
"Villegas",
"Pablo",
""
]
] |
new_dataset
| 0.999472 |
2104.14686
|
Fabio Zanasi
|
Filippo Bonchi, Fabio Gadducci, Aleks Kissinger, Pawel Sobocinski,
Fabio Zanasi
|
String Diagram Rewrite Theory II: Rewriting with Symmetric Monoidal
Structure
| null | null | null | null |
cs.LO math.CT math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Symmetric monoidal theories (SMTs) generalise algebraic theories in a way
that makes them suitable for expressing resource-sensitive systems, in which
variables cannot be copied or discarded at will.
In SMTs, traditional tree-like terms are replaced by string diagrams,
topological entities that can be intuitively thought of as diagrams of wires and
boxes. Recently, string diagrams have become increasingly popular as a
graphical syntax to reason about computational models across diverse fields,
including programming language semantics, circuit theory, quantum mechanics,
linguistics, and control theory. In applications, it is often convenient to
implement the equations appearing in SMTs as rewriting rules. This poses the
challenge of extending the traditional theory of term rewriting, which has been
developed for algebraic theories, to string diagrams.
In this paper, we develop a mathematical theory of string diagram rewriting
for SMTs. Our approach exploits the correspondence between string diagram
rewriting and double pushout (DPO) rewriting of certain graphs, introduced in
the first paper of this series. Such a correspondence is only sound when the
SMT includes a Frobenius algebra structure. In the present work, we show how an
analogous correspondence may be established for arbitrary SMTs, once an
appropriate notion of DPO rewriting (which we call convex) is identified.
As proof of concept, we use our approach to show termination of two SMTs of
interest: Frobenius semi-algebras and bialgebras.
|
[
{
"version": "v1",
"created": "Thu, 29 Apr 2021 22:39:54 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2022 20:52:32 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Bonchi",
"Filippo",
""
],
[
"Gadducci",
"Fabio",
""
],
[
"Kissinger",
"Aleks",
""
],
[
"Sobocinski",
"Pawel",
""
],
[
"Zanasi",
"Fabio",
""
]
] |
new_dataset
| 0.995552 |
2107.00857
|
Yu Min Park
|
Yu Min Park, Yan Kyaw Tun, Zhu Han, Choong Seon Hong
|
Trajectory Optimization and Phase-Shift Design in IRS Assisted UAV
Network for High Speed Trains
|
This paper has been submitted to IEEE Wireless Communications Letters
| null |
10.1109/TVT.2022.3189024
| null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent trend towards the high-speed transportation system has spurred the
development of high-speed trains (HSTs). However, enabling HST users with
seamless wireless connectivity using the roadside units (RSUs) is extremely
challenging, mostly due to the lack of a line of sight link. To address this
issue, we propose a novel framework that uses intelligent reflecting surfaces
(IRS)-enabled unmanned aerial vehicles (UAVs) to provide line of sight
communication to HST users. First, we formulate the optimization problem where
the objective is to maximize the minimum achievable rate of HSTs by jointly
optimizing the trajectory of UAV and the phase-shift of IRS. Due to the
non-convex nature of the formulated problem, it is decomposed into two
subproblems: IRS phase-shift problem and UAV trajectory optimization problem.
Next, a Binary Integer Linear Programming (BILP) and a Soft Actor-Critic (SAC)
are constructed in order to solve our decomposed problems. Finally,
comprehensive numerical results are provided in order to show the effectiveness
of our proposed framework.
|
[
{
"version": "v1",
"created": "Fri, 2 Jul 2021 06:15:31 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Park",
"Yu Min",
""
],
[
"Tun",
"Yan Kyaw",
""
],
[
"Han",
"Zhu",
""
],
[
"Hong",
"Choong Seon",
""
]
] |
new_dataset
| 0.990014 |
2109.09824
|
Christian Joppi
|
Geri Skenderi, Christian Joppi, Matteo Denitto, Marco Cristani
|
Well Googled is Half Done: Multimodal Forecasting of New Fashion Product
Sales with Image-based Google Trends
|
Paper submitted at Wiley Journal of Forecasting
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
New fashion product sales forecasting is a challenging problem that involves
many business dynamics and cannot be solved by classical forecasting
approaches. In this paper, we investigate the effectiveness of systematically
probing exogenous knowledge in the form of Google Trends time series and
combining it with multi-modal information related to a brand-new fashion item,
in order to effectively forecast its sales despite the lack of past data. In
particular, we propose a neural network-based approach, where an encoder learns
a representation of the exogenous time series, while the decoder forecasts the
sales based on the Google Trends encoding and the available visual and metadata
information. Our model works in a non-autoregressive manner, avoiding the
compounding effect of large first-step errors. As a second contribution, we
present VISUELLE, a publicly available dataset for the task of new fashion
product sales forecasting, containing multimodal information for 5577 real, new
products sold between 2016 and 2019 by Nunalie, an Italian fast-fashion company.
The dataset is equipped with images of products, metadata, related sales, and
associated Google Trends. We use VISUELLE to compare our approach against
state-of-the-art alternatives and several baselines, showing that our neural
network-based approach is the most accurate in terms of both percentage and
absolute error. It is worth noting that the addition of exogenous knowledge
boosts the forecasting accuracy by 1.5% in terms of WAPE, revealing the importance of
exploiting informative external information. The code and dataset are both
available at https://github.com/HumaticsLAB/GTM-Transformer.
|
[
{
"version": "v1",
"created": "Mon, 20 Sep 2021 20:15:08 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Sep 2021 07:17:51 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Oct 2021 09:33:18 GMT"
},
{
"version": "v4",
"created": "Tue, 26 Oct 2021 07:47:50 GMT"
},
{
"version": "v5",
"created": "Thu, 15 Sep 2022 12:06:59 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Skenderi",
"Geri",
""
],
[
"Joppi",
"Christian",
""
],
[
"Denitto",
"Matteo",
""
],
[
"Cristani",
"Marco",
""
]
] |
new_dataset
| 0.981176 |
2112.03258
|
Ankan Kumar Bhunia
|
Ankan Kumar Bhunia, Salman Khan, Hisham Cholakkal, Rao Muhammad Anwer,
Fahad Shahbaz Khan, Jorma Laaksonen, Michael Felsberg
|
DoodleFormer: Creative Sketch Drawing with Transformers
|
Accepted to ECCV-2022. Project webpage:
https://ankanbhunia.github.io/doodleformer/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Creative sketching or doodling is an expressive activity, where imaginative
and previously unseen depictions of everyday visual objects are drawn. Creative
sketch image generation is a challenging vision problem, where the task is to
generate diverse, yet realistic creative sketches possessing the unseen
composition of the visual-world objects. Here, we propose a novel
coarse-to-fine two-stage framework, DoodleFormer, that decomposes the creative
sketch generation problem into the creation of coarse sketch composition
followed by the incorporation of fine-details in the sketch. We introduce
graph-aware transformer encoders that effectively capture global dynamic as
well as local static structural relations among different body parts. To ensure
diversity of the generated creative sketches, we introduce a probabilistic
coarse sketch decoder that explicitly models the variations of each sketch body
part to be drawn. Experiments are performed on two creative sketch datasets:
Creative Birds and Creative Creatures. Our qualitative, quantitative and
human-based evaluations show that DoodleFormer outperforms the state-of-the-art
on both datasets, yielding realistic and diverse creative sketches. On Creative
Creatures, DoodleFormer achieves an absolute gain of 25 in terms of Fréchet
inception distance (FID) over the state-of-the-art. We also demonstrate the
effectiveness of DoodleFormer for related applications of text to creative
sketch generation and sketch completion.
|
[
{
"version": "v1",
"created": "Mon, 6 Dec 2021 18:59:59 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Jul 2022 06:21:04 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Sep 2022 17:59:49 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Bhunia",
"Ankan Kumar",
""
],
[
"Khan",
"Salman",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Laaksonen",
"Jorma",
""
],
[
"Felsberg",
"Michael",
""
]
] |
new_dataset
| 0.999455 |
2112.04596
|
Tuan-Phong Nguyen
|
Tuan-Phong Nguyen, Simon Razniewski, Julien Romero, Gerhard Weikum
|
Refined Commonsense Knowledge from Large-Scale Web Contents
|
This is a substantial extension of the previous WWW paper:
arXiv:2011.00905
|
IEEE Transactions on Knowledge and Data Engineering, 2022
|
10.1109/TKDE.2022.3206505
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Commonsense knowledge (CSK) about concepts and their properties is helpful
for AI applications. Prior works, such as ConceptNet, have compiled large CSK
collections. However, they are restricted in their expressiveness to
subject-predicate-object (SPO) triples with simple concepts for S and strings
for P and O. This paper presents a method called ASCENT++ to automatically
build a large-scale knowledge base (KB) of CSK assertions, with refined
expressiveness and both better precision and recall than prior works. ASCENT++
goes beyond SPO triples by capturing composite concepts with subgroups and
aspects, and by refining assertions with semantic facets. The latter is
essential to express the temporal and spatial validity of assertions and
further qualifiers. Furthermore, ASCENT++ combines open information extraction
(OpenIE) with judicious cleaning and ranking by typicality and saliency scores.
For high coverage, our method taps into the large-scale crawl C4 with broad web
contents. The evaluation with human judgments shows the superior quality of the
ASCENT++ KB, and an extrinsic evaluation for QA-support tasks underlines the
benefits of ASCENT++. A web interface, data, and code can be accessed at
https://ascentpp.mpi-inf.mpg.de/.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 20:26:09 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Jun 2022 12:12:18 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Nguyen",
"Tuan-Phong",
""
],
[
"Razniewski",
"Simon",
""
],
[
"Romero",
"Julien",
""
],
[
"Weikum",
"Gerhard",
""
]
] |
new_dataset
| 0.996278 |
2201.03101
|
Xin Miao
|
Xin Miao, Jiayi Liu, Huayan Wang, Jun Fu
|
ImageSubject: A Large-scale Dataset for Subject Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Main subjects usually exist in the images or videos, as they are the objects
that the photographer wants to highlight. Human viewers can easily identify
them but algorithms often confuse them with other objects. Detecting the main
subjects is an important technique to help machines understand the content of
images and videos. We present a new dataset with the goal of training models to
understand the layout of the objects and the context of the image then to find
the main subjects among them. This is achieved in three aspects. By gathering
images from movie shots created by directors with professional shooting skills,
we collect the dataset with strong diversity, specifically, it contains
107\,700 images from 21\,540 movie shots. We labeled them with the bounding box
labels for two classes: subject and non-subject foreground object. We present a
detailed analysis of the dataset and compare the task with saliency detection
and object detection. ImageSubject is the first dataset that tries to localize
the subject in an image that the photographer wants to highlight. Moreover, we
find the transformer-based detection model offers the best result among other
popular model architectures. Finally, we discuss the potential applications and
conclude with the importance of the dataset.
|
[
{
"version": "v1",
"created": "Sun, 9 Jan 2022 22:49:59 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 07:30:48 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Miao",
"Xin",
""
],
[
"Liu",
"Jiayi",
""
],
[
"Wang",
"Huayan",
""
],
[
"Fu",
"Jun",
""
]
] |
new_dataset
| 0.999862 |
2202.02281
|
Momona Yamagami
|
Momona Yamagami, Kelly Mack, Jennifer Mankoff, Katherine M. Steele
|
"I'm Just Overwhelmed": Investigating Physical Therapy Accessibility and
Technology Interventions for People with Disabilities and/or Chronic
Conditions
|
22 pages, 2 tables
| null |
10.1145/3563396
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Many individuals with disabilities and/or chronic conditions (da/cc)
experience symptoms that may require intermittent or on-going medical care.
However, healthcare is an often-overlooked domain for accessibility work, where
access needs associated with temporary and long-term disability must be
addressed to increase the utility of physical and digital interactions with
healthcare workers and spaces. Our work focuses on a specific domain of
healthcare often used by individuals with da/cc: physical therapy (PT). Through
a twelve-person interview study, we examined how people's access to PT for
their da/cc is hampered by social (e.g., physically visiting a PT clinic) and
physiological (e.g., chronic pain) barriers, and how technology could improve
PT access. In-person PT is often inaccessible to our participants due to lack
of transportation and insufficient insurance coverage. As such, many of our
participants relied on at-home PT to manage their da/cc symptoms and work
towards PT goals. Participants felt that PT barriers, such as having
particularly bad symptoms or feeling short on time, could be addressed with
well-designed technology that flexibly adapts to the person's dynamically
changing needs while supporting their PT goals. We introduce core design
principles (adaptability, movement tracking, community building) and tensions
(insurance) to consider when developing technology to support PT access.
Rethinking da/cc access to PT from a lens that includes social and
physiological barriers presents opportunities to integrate accessibility and
adaptability into PT technology.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 18:11:58 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 15:30:31 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Yamagami",
"Momona",
""
],
[
"Mack",
"Kelly",
""
],
[
"Mankoff",
"Jennifer",
""
],
[
"Steele",
"Katherine M.",
""
]
] |
new_dataset
| 0.985832 |
2203.07094
|
Yuqiang Han
|
Zhenfeng He and Yuqiang Han and Zhenqiu Ouyang and Wei Gao and Hongxu
Chen and Guandong Xu and Jian Wu
|
DialMed: A Dataset for Dialogue-based Medication Recommendation
|
Accepted as a long paper at COLING 2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Medication recommendation is a crucial task for intelligent healthcare
systems. Previous studies mainly recommend medications with electronic health
records (EHRs). However, some details of interactions between doctors and
patients may be ignored or omitted in EHRs, which are essential for automatic
medication recommendation. Therefore, we make the first attempt to recommend
medications with the conversations between doctors and patients. In this work,
we construct DIALMED, the first high-quality dataset for medical dialogue-based
medication recommendation task. It contains 11,996 medical dialogues related to
16 common diseases from 3 departments and 70 corresponding common medications.
Furthermore, we propose a Dialogue structure and Disease knowledge aware
Network (DDN), where a QA Dialogue Graph mechanism is designed to model the
dialogue structure and the knowledge graph is used to introduce external
disease knowledge. The extensive experimental results demonstrate that the
proposed method is a promising solution to recommend medications with medical
dialogues. The dataset and code are available at
https://github.com/f-window/DialMed.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 05:12:29 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 02:52:27 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"He",
"Zhenfeng",
""
],
[
"Han",
"Yuqiang",
""
],
[
"Ouyang",
"Zhenqiu",
""
],
[
"Gao",
"Wei",
""
],
[
"Chen",
"Hongxu",
""
],
[
"Xu",
"Guandong",
""
],
[
"Wu",
"Jian",
""
]
] |
new_dataset
| 0.99975 |
2205.13124
|
Heng Zhou
|
Heng Zhou, Chunna Tian, Zhenxi Zhang, Chengyang Li, Yongqiang Xie,
Zhongbo Li
|
PixelGame: Infrared small target segmentation as a Nash equilibrium
| null | null |
10.1109/JSTARS.2022.3206062
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key challenge of infrared small target segmentation (ISTS) is to balance
false negative pixels (FNs) and false positive pixels (FPs). Traditional
methods combine FNs and FPs into a single objective by weighted sum, and the
optimization process is decided by one actor. Minimizing FNs and FPs with the
same strategy leads to antagonistic decisions. To address this problem, we
propose a competitive game framework (pixelGame) from a novel perspective for
ISTS. In pixelGame, FNs and FPs are controlled by different players, each
aiming to minimize its own utility function. The FNs-player and FPs-player are
designed with different strategies: One is to minimize FNs and the other is to
minimize FPs. The utility function drives the evolution of the two participants
in competition. We consider the Nash equilibrium of pixelGame as the optimal
solution. In addition, we propose maximum information modulation (MIM) to
highlight the target information. MIM effectively focuses on the salient
region including small targets. Extensive experiments on two standard public
datasets prove the effectiveness of our method. Compared with other
state-of-the-art methods, our method achieves better performance in terms of
F1-measure (F1) and the intersection over union (IoU).
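To illustrate the game-theoretic framing, here is a toy best-response iteration converging to a Nash equilibrium. The quadratic utilities are stand-ins for the actual FN and FP losses, chosen only so that the best responses have closed form.

```python
# Iterated best responses in a two-player game. Player 1 (the
# "FN-player") controls a, player 2 (the "FP-player") controls b; each
# minimizes its own utility, and the fixed point of the iteration is a
# Nash equilibrium of this toy game.

def best_response_a(b):   # argmin_a (a - b/2 - 1)^2
    return b / 2 + 1

def best_response_b(a):   # argmin_b (b - a/2 - 1)^2
    return a / 2 + 1

a, b = 0.0, 0.0
for _ in range(50):
    a, b = best_response_a(b), best_response_b(a)

# At the fixed point neither player can improve unilaterally:
print(a, b)   # both converge to 2.0, the equilibrium of this toy game
```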
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 03:13:27 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Zhou",
"Heng",
""
],
[
"Tian",
"Chunna",
""
],
[
"Zhang",
"Zhenxi",
""
],
[
"Li",
"Chengyang",
""
],
[
"Xie",
"Yongqiang",
""
],
[
"Li",
"Zhongbo",
""
]
] |
new_dataset
| 0.998546 |
2206.15407
|
Andrey Malinin Dr.
|
Andrey Malinin, Andreas Athanasopoulos, Muhamed Barakovic, Meritxell
Bach Cuadra, Mark J. F. Gales, Cristina Granziera, Mara Graziani, Nikolay
Kartashev, Konstantinos Kyriakopoulos, Po-Jui Lu, Nataliia Molchanova,
Antonis Nikitakis, Vatsal Raina, Francesco La Rosa, Eli Sivena, Vasileios
Tsarsitalidis, Efi Tsompopoulou, Elena Volf
|
Shifts 2.0: Extending The Dataset of Real Distributional Shifts
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributional shift, or the mismatch between training and deployment data,
is a significant obstacle to the usage of machine learning in high-stakes
industrial applications, such as autonomous driving and medicine. This creates
a need to be able to assess how robustly ML models generalize as well as the
quality of their uncertainty estimates. Standard ML baseline datasets do not
allow these properties to be assessed, as the training, validation and test
data are often identically distributed. Recently, a range of dedicated
benchmarks have appeared, featuring both distributionally matched and shifted
data. Among these benchmarks, the Shifts dataset stands out in terms of the
diversity of tasks as well as the data modalities it features. While most of
the benchmarks are heavily dominated by 2D image classification tasks, Shifts
contains tabular weather forecasting, machine translation, and vehicle motion
prediction tasks. This enables the robustness properties of models to be
assessed on a diverse set of industrial-scale tasks and either universal or
directly applicable task-specific conclusions to be reached. In this paper, we
extend the Shifts Dataset with two datasets sourced from industrial, high-risk
applications of high societal importance. Specifically, we consider the tasks
of segmentation of white matter Multiple Sclerosis lesions in 3D magnetic
resonance brain images and the estimation of power consumption in marine cargo
vessels. Both tasks feature ubiquitous distributional shifts and a strict
safety requirement due to the high cost of errors. These new datasets will
allow researchers to further explore robust generalization and uncertainty
estimation in new situations. In this work, we provide a description of the
dataset and baseline results for both tasks.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 16:51:52 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 09:52:12 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Malinin",
"Andrey",
""
],
[
"Athanasopoulos",
"Andreas",
""
],
[
"Barakovic",
"Muhamed",
""
],
[
"Cuadra",
"Meritxell Bach",
""
],
[
"Gales",
"Mark J. F.",
""
],
[
"Granziera",
"Cristina",
""
],
[
"Graziani",
"Mara",
""
],
[
"Kartashev",
"Nikolay",
""
],
[
"Kyriakopoulos",
"Konstantinos",
""
],
[
"Lu",
"Po-Jui",
""
],
[
"Molchanova",
"Nataliia",
""
],
[
"Nikitakis",
"Antonis",
""
],
[
"Raina",
"Vatsal",
""
],
[
"La Rosa",
"Francesco",
""
],
[
"Sivena",
"Eli",
""
],
[
"Tsarsitalidis",
"Vasileios",
""
],
[
"Tsompopoulou",
"Efi",
""
],
[
"Volf",
"Elena",
""
]
] |
new_dataset
| 0.999578 |
2207.14741
|
Zelin Zhao
|
Zelin Zhao, Jiaya Jia
|
End-to-end View Synthesis via NeRF Attention
|
Fixed reference formatting issues
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a simple seq2seq formulation for view synthesis
where we take a set of ray points as input and output colors corresponding to
the rays. Directly applying a standard transformer on this seq2seq formulation
has two limitations. First, the standard attention cannot successfully fit the
volumetric rendering procedure, and therefore high-frequency components are
missing in the synthesized views. Second, applying global attention to all rays
and pixels is extremely inefficient. Inspired by the neural radiance field
(NeRF), we propose the NeRF attention (NeRFA) to address the above problems. On
the one hand, NeRFA considers the volumetric rendering equation as a soft
feature modulation procedure. In this way, the feature modulation enhances the
transformers with the NeRF-like inductive bias. On the other hand, NeRFA
performs multi-stage attention to reduce the computational overhead.
Furthermore, the NeRFA model adopts the ray and pixel transformers to learn the
interactions between rays and pixels. NeRFA demonstrates superior performance
over NeRF and NerFormer on four datasets: DeepVoxels, Blender, LLFF, and CO3D.
Besides, NeRFA establishes a new state-of-the-art under two settings: the
single-scene view synthesis and the category-centric novel view synthesis.
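For reference, below is the standard discrete volumetric rendering rule from NeRF that NeRFA reinterprets as a soft feature modulation: each sample along a ray contributes with weight w_i = T_i (1 - exp(-sigma_i * delta_i)), where T_i is the transmittance accumulated before sample i. Densities and colors here are toy values.

```python
import numpy as np

# Per-sample volumetric rendering weights along one ray, given
# predicted densities sigma_i and sample spacings delta_i.

def render_weights(sigma, delta):
    alpha = 1.0 - np.exp(-sigma * delta)               # opacity per sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    return trans * alpha                               # per-sample weights

sigma = np.array([0.1, 0.5, 3.0, 0.2])                 # toy densities
delta = np.full(4, 0.25)                               # uniform spacing
w = render_weights(sigma, delta)
color = w @ np.array([[0.9, 0.1, 0.1],                 # per-sample colors
                      [0.8, 0.2, 0.2],
                      [0.1, 0.9, 0.1],
                      [0.1, 0.1, 0.9]])
print(w, color)                                        # rendered ray color
```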
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 15:26:16 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 03:53:27 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Sep 2022 03:04:10 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Zhao",
"Zelin",
""
],
[
"Jia",
"Jiaya",
""
]
] |
new_dataset
| 0.960315 |
2208.07049
|
Sachith Seneviratne PhD
|
Sachith Seneviratne, Ridwan Shariffdeen, Sanka Rasnayaka and Nuran
Kasthuriarachchi
|
Self-Supervised Vision Transformers for Malware Detection
| null | null |
10.1109/ACCESS.2022.3206445
| null |
cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Malware detection plays a crucial role in cyber-security with the increase in
malware growth and advancements in cyber-attacks. Previously unseen malware
that has not been identified by security vendors is often used in these
attacks, and it is becoming essential to find a solution that can self-learn
from unlabeled sample data. This paper presents SHERLOCK, a self-supervision based deep
learning model to detect malware based on the Vision Transformer (ViT)
architecture. SHERLOCK is a novel malware detection method which learns unique
features to differentiate malware from benign programs with the use of
image-based binary representation. Experimental results using 1.2 million
Android applications across a hierarchy of 47 types and 696 families show
that self-supervised learning can achieve an accuracy of 97% for the binary
classification of malware which is higher than existing state-of-the-art
techniques. Our proposed model is also able to outperform state-of-the-art
techniques for multi-class malware classification of types and family with
macro-F1 score of .497 and .491 respectively.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 07:49:58 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Seneviratne",
"Sachith",
""
],
[
"Shariffdeen",
"Ridwan",
""
],
[
"Rasnayaka",
"Sanka",
""
],
[
"Kasthuriarachchi",
"Nuran",
""
]
] |
new_dataset
| 0.964764 |
2208.10844
|
Hongyin Tang
|
Borun Chen, Hongyin Tang, Jiahao Bu, Kai Zhang, Jingang Wang, Qifan
Wang, Hai-Tao Zheng, Wei Wu and Liqian Yu
|
CLOWER: A Pre-trained Language Model with Contrastive Learning over Word
and Character Representations
|
Accepted in COLING 2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained Language Models (PLMs) have achieved remarkable performance gains
across numerous downstream tasks in natural language understanding. Various
Chinese PLMs have been successively proposed for learning better Chinese
language representation. However, most current models use Chinese characters as
inputs and are not able to encode semantic information contained in Chinese
words. While recent pre-trained models incorporate both words and characters
simultaneously, they usually suffer from deficient semantic interactions and
fail to capture the semantic relation between words and characters. To address
the above issues, we propose a simple yet effective PLM CLOWER, which adopts
the Contrastive Learning Over Word and charactER representations. In
particular, CLOWER implicitly encodes the coarse-grained information (i.e.,
words) into the fine-grained representations (i.e., characters) through
contrastive learning on multi-grained information. CLOWER is of great value in
realistic scenarios since it can be easily incorporated into any existing
fine-grained based PLMs without modifying the production pipelines. Extensive
experiments conducted on a range of downstream tasks demonstrate the superior
performance of CLOWER over several state-of-the-art baselines.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 09:52:34 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 03:07:05 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Chen",
"Borun",
""
],
[
"Tang",
"Hongyin",
""
],
[
"Bu",
"Jiahao",
""
],
[
"Zhang",
"Kai",
""
],
[
"Wang",
"Jingang",
""
],
[
"Wang",
"Qifan",
""
],
[
"Zheng",
"Hai-Tao",
""
],
[
"Wu",
"Wei",
""
],
[
"Yu",
"Liqian",
""
]
] |
new_dataset
| 0.997448 |
2209.06955
|
Mia Filić
|
Mia Filić, Kenneth G. Paterson, Anupama Unnikrishnan and Fernando
Virdia
|
Adversarial Correctness and Privacy for Probabilistic Data Structures
|
The full version of the paper accepted at ACM CCS '22. The latest
version is available at https://eprint.iacr.org/2022/1186
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We study the security of Probabilistic Data Structures (PDS) for handling
Approximate Membership Queries (AMQ); prominent examples of AMQ-PDS are Bloom
and Cuckoo filters. AMQ-PDS are increasingly being deployed in environments
where adversaries can gain benefit from carefully selecting inputs, for example
to increase the false positive rate of an AMQ-PDS. They are also being used in
settings where the inputs are sensitive and should remain private in the face
of adversaries who can access an AMQ-PDS through an API or who can learn its
internal state by compromising the system running the AMQ-PDS.
We develop simulation-based security definitions that speak to correctness
and privacy of AMQ-PDS. Our definitions are general and apply to a broad range
of adversarial settings. We use our definitions to analyse the behaviour of
both Bloom filters and insertion-only Cuckoo filters. We show that these
AMQ-PDS can be provably protected through replacement or composition of hash
functions with keyed pseudorandom functions in their construction. We also
examine the practical impact on storage size and computation of providing
secure instances of Bloom and insertion-only Cuckoo filters.
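A minimal sketch of the hash-replacement idea, assuming HMAC-SHA256 as the keyed PRF; the filter size, number of index functions, and the key are illustrative, not the paper's parameter choices.

```python
import hmac, hashlib

# Bloom filter whose index functions are derived from a keyed PRF
# (HMAC-SHA256) instead of public hash functions, so an adversary who
# does not know the key cannot predict which bits an input sets.

M, K = 1 << 16, 7                       # bits, number of index functions
key = b"secret-prf-key"                 # hypothetical secret key
bits = bytearray(M // 8)

def indexes(item: bytes):
    for i in range(K):
        tag = hmac.new(key, bytes([i]) + item, hashlib.sha256).digest()
        yield int.from_bytes(tag[:8], "big") % M

def insert(item: bytes):
    for ix in indexes(item):
        bits[ix // 8] |= 1 << (ix % 8)

def query(item: bytes) -> bool:
    return all(bits[ix // 8] & (1 << (ix % 8)) for ix in indexes(item))

insert(b"alice@example.com")
print(query(b"alice@example.com"), query(b"bob@example.com"))
# True, then False except with the usual small false-positive probability.
```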
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 22:10:36 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Filić",
"Mia",
""
],
[
"Paterson",
"Kenneth G.",
""
],
[
"Unnikrishnan",
"Anupama",
""
],
[
"Virdia",
"Fernando",
""
]
] |
new_dataset
| 0.951408 |
2209.06964
|
Youngwoo Sim
|
Guillermo Colin, Youngwoo Sim, and Joao Ramos
|
Bipedal Robot Walking Control Using Human Whole-Body Dynamic
Telelocomotion
|
Submitted to ICRA 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
For humanoids to be deployed in demanding situations, such as search and
rescue, highly intelligent decision making and proficient sensorimotor skills are
expected. A promising solution is to leverage human prowess by interconnecting
robot and human via teleoperation. Towards creating seamless operation, this
paper presents a dynamic telelocomotion framework that synchronizes the gait of
a human pilot with the walking of a bipedal robot. First, we introduce a method
to generate a virtual human walking model from the stepping behavior of a human
pilot which serves as a reference for the robot to walk. Second, the dynamics
of the walking reference and robot walking are synchronized by applying forces
to the human pilot and the robot to achieve dynamic similarity between the two
systems. This enables the human pilot to continuously perceive and cancel any
asynchrony between the walking reference and robot. A consistent step placement
strategy for the robot is derived to maintain dynamic similarity through step
transitions. Using our human-machine interface, we demonstrate that the human
pilot can achieve stable and synchronous teleoperation of a simulated robot
through stepping-in-place, walking, and disturbance rejection experiments. This
work provides a fundamental step towards transferring human intelligence and
reflexes to humanoid robots.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 22:31:44 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Colin",
"Guillermo",
""
],
[
"Sim",
"Youngwoo",
""
],
[
"Ramos",
"Joao",
""
]
] |
new_dataset
| 0.98631 |
2209.06967
|
Swarna Sethu Dr
|
Swarna Sethu (1), Dongyi Wang (1 and 2) ((1) Department of Biological
& Agricultural engineering, University of Arkansas, Fayetteville, (2)
Department of Food & Science and Department of Biological & Agricultural
engineering, University of Arkansas, Fayetteville)
|
A novel illumination condition varied image dataset-Food Vision Dataset
(FVD) for fair and reliable consumer acceptability predictions from food
|
8 pages, 4 figures, 1 table
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in artificial intelligence promote a wide range of computer
vision applications in many different domains. Digital cameras, acting as human
eyes, can perceive fundamental object properties, such as shapes and colors,
and can be further used for conducting high-level tasks, such as image
classification, and object detections. Human perceptions have been widely
recognized as the ground truth for training and evaluating computer vision
models. However, in some cases, humans can be deceived by what they have seen.
Well-functioning human vision relies on stable external lighting, while unnatural
illumination influences human perception of the essential characteristics of
goods. To evaluate the illumination effects on human and computer perceptions,
the group presents a novel dataset, the Food Vision Dataset (FVD), to create an
evaluation benchmark to quantify illumination effects, and to push forward
developments of illumination estimation methods for fair and reliable consumer
acceptability prediction from food appearances. FVD consists of 675 images
captured under 3 different power and 5 different temperature settings every
alternate day for five such days.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 22:46:42 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Sethu",
"Swarna",
"",
"1 and 2"
],
[
"Wang",
"Dongyi",
"",
"1 and 2"
]
] |
new_dataset
| 0.99321 |
2209.06997
|
Ruoxi Sun
|
Pingyi Hu, Zihan Wang, Ruoxi Sun, Hu Wang, Minhui Xue
|
M^4I: Multi-modal Models Membership Inference
|
Accepted to NeurIPS 2022
| null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of machine learning techniques, the attention of
research has been moved from single-modal learning to multi-modal learning, as
real-world data exist in the form of different modalities. However, multi-modal
models often carry more information than single-modal models and they are
usually applied in sensitive scenarios, such as medical report generation or
disease identification. Compared with the existing membership inference against
machine learning classifiers, we focus on the problem that the input and output
of the multi-modal models are in different modalities, such as image
captioning. This work studies the privacy leakage of multi-modal models through
the lens of membership inference attack, a process of determining whether a
data record is involved in the model training process or not. To achieve this, we
propose Multi-modal Models Membership Inference (M^4I) with two attack methods
to infer the membership status, named metric-based (MB) M^4I and feature-based
(FB) M^4I, respectively. More specifically, MB M^4I adopts similarity metrics
while attacking to infer target data membership. FB M^4I uses a pre-trained
shadow multi-modal feature extractor to achieve the purpose of data inference
attack by comparing the similarities from extracted input and output features.
Extensive experimental results show that both attack methods achieve strong
performance, with average attack success rates of 72.5% and 94.83%,
respectively, under unrestricted scenarios. Moreover, we evaluate multiple
defense mechanisms against our attacks. The source code of M^4I attacks is
publicly available at
https://github.com/MultimodalMI/Multimodal-membership-inference.git.
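A hedged sketch of the metric-based flavor of such an attack for image captioning; the token-overlap F1 and the 0.6 threshold below stand in for the similarity metrics and shadow-model calibration used in the paper.

```python
# Metric-based membership decision for an image-captioning model: if
# the model's caption is unusually close to the ground-truth caption,
# the record was plausibly seen during training.

def token_f1(candidate: str, reference: str) -> float:
    c, r = set(candidate.lower().split()), set(reference.lower().split())
    overlap = len(c & r)
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(c), overlap / len(r)
    return 2 * prec * rec / (prec + rec)

def infer_membership(model_caption, true_caption, threshold=0.6):
    # Threshold would be calibrated on shadow-model data in practice.
    return token_f1(model_caption, true_caption) >= threshold

print(infer_membership("a dog runs on the beach",
                       "a dog runs along the beach"))   # likely "member"
```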
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 01:57:37 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Hu",
"Pingyi",
""
],
[
"Wang",
"Zihan",
""
],
[
"Sun",
"Ruoxi",
""
],
[
"Wang",
"Hu",
""
],
[
"Xue",
"Minhui",
""
]
] |
new_dataset
| 0.991675 |
2209.07023
|
Atsuya Kobayashi
|
Atsuya Kobayashi, Ryogo Ishino, Ryuku Nobusue, Takumi Inoue, Keisuke
Okazaki, Shoma Sawa and Nao Tokui
|
MR4MR: Mixed Reality for Melody Reincarnation
|
Accepted paper at the 3rd Conference on AI Music Creativity
(September 2022)
| null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
There is a long history of an effort made to explore musical elements with
the entities and spaces around us, such as musique concrète and ambient
music. In the context of computer music and digital art, interactive
experiences that concentrate on the surrounding objects and physical spaces
have also been designed. In recent years, with the development and
popularization of devices, an increasing number of works have been designed in
Extended Reality to create such musical experiences. In this paper, we describe
MR4MR, a sound installation work that allows users to experience melodies
produced from interactions with their surrounding space in the context of Mixed
Reality (MR). Using HoloLens, an MR head-mounted display, users can bump
virtual objects that emit sound against real objects in their surroundings.
Then, by continuously creating a melody that follows the sound made by the
object and re-generating a randomly and gradually changing melody using music
generation machine learning models, users can feel their ambient melody
"reincarnating".
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 03:23:29 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Kobayashi",
"Atsuya",
""
],
[
"Ishino",
"Ryogo",
""
],
[
"Nobusue",
"Ryuku",
""
],
[
"Inoue",
"Takumi",
""
],
[
"Okazaki",
"Keisuke",
""
],
[
"Sawa",
"Shoma",
""
],
[
"Tokui",
"Nao",
""
]
] |
new_dataset
| 0.999757 |
2209.07057
|
Chongyi Li
|
Wenxiu Sun, Qingpeng Zhu, Chongyi Li, Ruicheng Feng, Shangchen Zhou,
Jun Jiang, Qingyu Yang, Chen Change Loy, Jinwei Gu
|
MIPI 2022 Challenge on RGB+ToF Depth Completion: Dataset and Report
|
ECCV 2022 Mobile Intelligent Photography and Imaging (MIPI)
Workshop--RGB+ToF Depth Completion Challenge Report. MIPI workshop website:
http://mipi-challenge.org/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing and integrating advanced image sensors with novel algorithms in
camera systems is prevalent with the increasing demand for computational
photography and imaging on mobile platforms. However, the lack of high-quality
data for research and the rare opportunity for in-depth exchange of views from
industry and academia constrain the development of mobile intelligent
photography and imaging (MIPI). To bridge the gap, we introduce the first MIPI
challenge including five tracks focusing on novel image sensors and imaging
algorithms. In this paper, RGB+ToF Depth Completion, one of the five tracks,
working on the fusion of RGB sensor and ToF sensor (with spot illumination) is
introduced. The participants were provided with a new dataset called
TetrasRGBD, which contains 18k pairs of high-quality synthetic RGB+Depth
training data and 2.3k pairs of testing data from mixed sources. All the data
are collected in an indoor scenario. We require that the running time of all
methods should be real-time on desktop GPUs. The final results are evaluated
using objective metrics and Mean Opinion Score (MOS) subjectively. A detailed
description of all models developed in this challenge is provided in this
paper. More details of this challenge and the link to the dataset can be found
at https://github.com/mipi-challenge/MIPI2022.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 05:31:53 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Sun",
"Wenxiu",
""
],
[
"Zhu",
"Qingpeng",
""
],
[
"Li",
"Chongyi",
""
],
[
"Feng",
"Ruicheng",
""
],
[
"Zhou",
"Shangchen",
""
],
[
"Jiang",
"Jun",
""
],
[
"Yang",
"Qingyu",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Gu",
"Jinwei",
""
]
] |
new_dataset
| 0.994308 |
2209.07068
|
Piji Li
|
Piji Li
|
uChecker: Masked Pretrained Language Models as Unsupervised Chinese
Spelling Checkers
|
COLING2022,11 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of Chinese Spelling Check (CSC) aims to detect and correct spelling
errors found in text. Since manually annotating a high-quality dataset is
expensive and time-consuming, the scale of the training dataset is usually
very small (e.g., SIGHAN15 only contains 2339 samples for training), so
supervised-learning based models usually suffer from the data sparsity
limitation and over-fitting issue, especially in the
era of big language models. In this paper, we are dedicated to investigating
the \textbf{unsupervised} paradigm to address the CSC problem and we propose a
framework named \textbf{uChecker} to conduct unsupervised spelling error
detection and correction. Masked pretrained language models such as BERT are
introduced as the backbone model considering their powerful language diagnosis
capability. Benefiting from the various and flexible MASKing operations, we
propose a Confusionset-guided masking strategy to fine-train the masked
language model to further improve the performance of unsupervised detection and
correction. Experimental results on standard datasets demonstrate the
effectiveness of our proposed model uChecker in terms of character-level and
sentence-level Accuracy, Precision, Recall, and F1-Measure on tasks of spelling
error detection and correction respectively.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 05:57:12 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Li",
"Piji",
""
]
] |
new_dataset
| 0.998617 |
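The unsupervised detection idea in the uChecker abstract above can be sketched with an off-the-shelf masked language model. The snippet below is a minimal illustration, not the authors' implementation: the confusion set is a toy stand-in, and bert-base-chinese is simply a common public checkpoint.

```python
# Illustrative sketch of unsupervised Chinese spelling check with a masked
# LM, in the spirit of uChecker (not the authors' code).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")

# Hypothetical confusion set: characters that are plausible misspellings
# of each other (real confusion sets are much larger).
CONFUSIONS = {"她": ["他", "它"], "在": ["再"], "再": ["在"]}

def check_sentence(sentence):
    corrections = []
    for i, ch in enumerate(sentence):
        candidates = CONFUSIONS.get(ch)
        if not candidates:
            continue  # only re-score characters with known confusions
        masked = sentence[:i] + "[MASK]" + sentence[i + 1:]
        # Restrict predictions to the original character and its confusions
        preds = fill_mask(masked, targets=[ch] + candidates)
        best = max(preds, key=lambda p: p["score"])
        if best["token_str"] != ch:
            corrections.append((i, ch, best["token_str"]))
    return corrections

print(check_sentence("他们再公园里跑步"))  # likely flags 再 -> 在
```

Restricting the fill-mask targets to a character's confusion set mirrors the confusionset-guided masking strategy described in the abstract.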
2209.07136
|
María Chara
|
M. Chara, F. Galluccio and E. Martínez-Moro
|
Locally recoverable codes from towers of function fields
| null | null | null | null |
cs.IT math.IT math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
In this work we construct sequences of locally recoverable AG codes arising
from a tower of function fields and give bounds for the parameters of the
obtained codes. In a particular case of a tower over $\mathbb{F}_{q^2}$ for any
odd $q$, defined by Garcia and Stichtenoth in [GS2007], we show that the bound
is sharp for the first code in the sequence, and we include a detailed analysis
for the following codes in the sequence based on the distribution of rational
places that split completely in the considered function field extension.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 08:29:33 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Chara",
"M.",
""
],
[
"Galluccio",
"F.",
""
],
[
"Martínez-Moro",
"E.",
""
]
] |
new_dataset
| 0.999557 |
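As background for the parameter bounds discussed in the abstract above: a locally recoverable code of length n, dimension k, minimum distance d and locality r obeys the classical Singleton-type bound below. The paper's tower-specific bound is not reproduced here.

```latex
% Classical Singleton-type bound for a locally recoverable code with
% parameters [n, k, d] and locality r (general background only):
d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2
```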
2209.07215
|
Motahareh Dehghan
|
Motahareh Dehghan, Babak Sadeghiyan, Erfan Khosravian, Alireza Sedighi
Moghaddam, Farshid Nooshi
|
ProAPT: Projection of APT Threats with Deep Reinforcement Learning
| null | null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The highest level in the Endsley situation awareness model, called
projection, is reached when the status of elements in the environment in the
near future is predicted. In cybersecurity situation awareness, projection for
an Advanced Persistent Threat (APT) requires predicting the next step of the
APT. The
threats are constantly changing and becoming more complex. As supervised and
unsupervised learning methods require APT datasets for projecting the next step
of APTs, they are unable to identify unknown APT threats. In reinforcement
learning methods, the agent interacts with the environment, and so it might
project the next step of known and unknown APTs. So far, reinforcement learning
has not been used to project the next step for APTs. In reinforcement learning,
the agent uses the previous states and actions to approximate the best action
in the current state. When the number of states and actions is large, the
agent employs a deep neural network to approximate the best action of each
state. In this paper, we present a deep reinforcement learning system to
project the next step of APTs. As there exist relations between attack steps,
we employ the Long Short-Term Memory (LSTM) method to approximate the best
action of each state. In our proposed system, based on the
current situation, we project the next steps of APT threats.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 11:16:40 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Dehghan",
"Motahareh",
""
],
[
"Sadeghiyan",
"Babak",
""
],
[
"Khosravian",
"Erfan",
""
],
[
"Moghaddam",
"Alireza Sedighi",
""
],
[
"Nooshi",
"Farshid",
""
]
] |
new_dataset
| 0.999188 |
2209.07252
|
Stepan Dergachev
|
Stepan Dergachev and Kirill Muravyev and Konstantin Yakovlev
|
2.5D Mapping, Pathfinding and Path Following For Navigation Of A
Differential Drive Robot In Uneven Terrain
|
This is a preprint of the paper accepted to IFAC SYROCO'21/22. It
contains 6 pages, 4 figures and 2 tables. The supplementary video available
at https://youtu.be/LGhKaxnL8xA
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safe navigation in uneven terrains is an important problem in robotic
research. In this paper we propose a 2.5D navigation system which consists of
elevation map building, path planning and local path following with obstacle
avoidance. For local path following we use Model Predictive Path Integral
(MPPI) control method. We propose novel cost-functions for MPPI in order to
adapt it to elevation maps and motion through unevenness. We evaluate our
system on multiple synthetic tests and in a simulated environment with
different types of obstacles and rough surfaces.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 12:39:04 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Dergachev",
"Stepan",
""
],
[
"Muravyev",
"Kirill",
""
],
[
"Yakovlev",
"Konstantin",
""
]
] |
new_dataset
| 0.95704 |
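The MPPI controller named in the 2.5D-navigation abstract above admits a compact sketch. The toy version below uses a placeholder differential-drive model and a goal-distance cost; the paper's elevation-map cost terms are deliberately omitted, and all constants are illustrative.

```python
# Minimal MPPI sketch (not the authors' implementation): sample K perturbed
# control sequences, roll them out, and exponentially weight them by cost.
import numpy as np

def mppi_step(x0, u_nominal, dynamics, cost, K=256, sigma=0.3, lam=1.0):
    H, udim = u_nominal.shape
    noise = np.random.normal(0.0, sigma, size=(K, H, udim))
    costs = np.zeros(K)
    for k in range(K):
        x = x0
        for t in range(H):
            u = u_nominal[t] + noise[k, t]
            x = dynamics(x, u)
            costs[k] += cost(x, u)
    w = np.exp(-(costs - costs.min()) / lam)  # softmax-style weights
    w /= w.sum()
    return u_nominal + np.einsum("k,khu->hu", w, noise)

# Toy differential drive: state (x, y, heading), control (v, omega).
def dynamics(x, u, dt=0.1):
    return x + dt * np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])

goal = np.array([2.0, 1.0])
cost = lambda x, u: np.sum((x[:2] - goal) ** 2)  # elevation terms omitted
u = mppi_step(np.zeros(3), np.zeros((20, 2)), dynamics, cost)
```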
2209.07268
|
\"Ozg\"ur Aslan
|
\"Ozg\"ur Aslan, Burak Bolat, Batuhan Bal, Tu\u{g}ba T\"umer, Erol
\c{S}ahin, and Sinan Kalkan
|
AssembleRL: Learning to Assemble Furniture from Their Point Clouds
|
6 pages, 6 figures, IROS 2022
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The rise of simulation environments has enabled learning-based approaches for
assembly planning, which is otherwise a labor-intensive and daunting task.
Assembling furniture is especially interesting since furniture items are
intricate and pose challenges for learning-based approaches. Surprisingly,
humans can
solve furniture assembly mostly given a 2D snapshot of the assembled product.
Although recent years have witnessed promising learning-based approaches for
furniture assembly, they assume the availability of correct connection labels
for each assembly step, which are expensive to obtain in practice. In this
paper, we alleviate this assumption and aim to solve furniture assembly with as
little human expertise and supervision as possible. To be specific, we assume
the availability of the assembled point cloud, and comparing the point cloud of
the current assembly and the point cloud of the target product, obtain a novel
reward signal based on two measures: incorrectness and incompleteness. We show
that our novel reward signal can train a deep network to successfully assemble
different types of furniture. Code and networks available here:
https://github.com/METU-KALFA/AssembleRL
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 13:04:45 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Aslan",
"Özgür",
""
],
[
"Bolat",
"Burak",
""
],
[
"Bal",
"Batuhan",
""
],
[
"Tümer",
"Tuğba",
""
],
[
"Şahin",
"Erol",
""
],
[
"Kalkan",
"Sinan",
""
]
] |
new_dataset
| 0.998696 |
2209.07278
|
Milan Straka
|
Milan Straka and Jana Straková
|
ÚFAL CorPipe at CRAC 2022: Effectivity of Multilingual Models for
Coreference Resolution
|
Accepted to CRAC 2022 (Fifth Workshop on Computational Models of
Reference, Anaphora and Coreference)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe the winning submission to the CRAC 2022 Shared Task on
Multilingual Coreference Resolution. Our system first solves mention detection
and then coreference linking on the retrieved spans with an
antecedent-maximization approach, and both tasks are fine-tuned jointly with
shared Transformer weights. We report results of fine-tuning a wide range of
pretrained models. At the center of this contribution are fine-tuned
multilingual models. We found that one large multilingual model with a
sufficiently large encoder increases performance on all datasets across the
board, with the benefit not limited to underrepresented languages or groups of
typologically related languages. The source code is available at
https://github.com/ufal/crac2022-corpipe.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 13:11:39 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Straka",
"Milan",
""
],
[
"Straková",
"Jana",
""
]
] |
new_dataset
| 0.996714 |
2209.07424
|
Junghun Kim
|
Junghun Kim, Jihie Kim
|
CMSBERT-CLR: Context-driven Modality Shifting BERT with Contrastive
Learning for linguistic, visual, acoustic Representations
|
Accepted by IJCNN 2022
| null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal sentiment analysis has become an increasingly popular research
area as the demand for multimodal online content is growing. For multimodal
sentiment analysis, words can have different meanings depending on the
linguistic context and non-verbal information, so it is crucial to understand
the meaning of the words accordingly. In addition, the word meanings should be
interpreted within the whole utterance context that includes non-verbal
information. In this paper, we present a Context-driven Modality Shifting BERT
with Contrastive Learning for linguistic, visual, acoustic Representations
(CMSBERT-CLR), which incorporates the whole context's non-verbal and verbal
information and aligns modalities more effectively through contrastive
learning. First, we introduce a Context-driven Modality Shifting (CMS) to
incorporate the non-verbal and verbal information within the whole context of
the sentence utterance. Then, for improving the alignment of different
modalities within a common embedding space, we apply contrastive learning.
Furthermore, we use an exponential moving average parameter and label smoothing
as optimization strategies, which can make the convergence of the network more
stable and increase the flexibility of the alignment. In our experiments, we
demonstrate that our approach achieves state-of-the-art results.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 08:21:43 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Kim",
"Junghun",
""
],
[
"Kim",
"Jihie",
""
]
] |
new_dataset
| 0.999495 |
2209.07440
|
Michael McKay
|
Ágnes Cseh, Michael McKay, David Manlove
|
Envy-freeness in 3D Hedonic Games
|
78 pages, 6 figures
| null | null | null |
cs.GT cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We study the problem of partitioning a set of agents into coalitions based on
the agents' additively separable preferences, which can also be viewed as a
hedonic game. We apply three successively weaker solution concepts, namely
envy-freeness, weakly justified envy-freeness, and justified envy-freeness.
In a model in which coalitions may have any size, trivial solutions exist for
these concepts, which provides a strong motivation for placing restrictions on
coalition size. In this paper, we require feasible coalitions to have size
three. We study the existence of partitions that are envy-free, weakly
justified envy-free, and justified envy-free, and the computational complexity
of finding such partitions, if they exist.
We present a comprehensive complexity classification, in terms of the
restrictions placed on the agents' preferences. From this, we identify a
general trend that for the three successively weaker solution concepts,
existence and polynomial-time solvability hold under successively weaker
restrictions.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 16:42:07 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Cseh",
"Ágnes",
""
],
[
"McKay",
"Michael",
""
],
[
"Manlove",
"David",
""
]
] |
new_dataset
| 0.985359 |
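The strongest solution concept in the hedonic-games abstract above, envy-freeness under additively separable preferences with coalitions of size three, can be checked directly. The sketch below implements only that check; the valuation instance is an illustrative assumption, and the weakly justified and justified variants are not implemented.

```python
# Minimal envy-freeness check for a partition into triples under
# additively separable preferences (sketch of the setting, not the paper's
# algorithms).
from itertools import chain

def value(i, coalition, v):
    """Additively separable value of a coalition for agent i."""
    return sum(v[i][j] for j in coalition if j != i)

def is_envy_free(partition, v):
    """Agent i envies agent j in another coalition if i would prefer
    j's coalition with j replaced by i over i's own coalition."""
    home = {i: C for C in partition for i in C}
    agents = list(chain.from_iterable(partition))
    for i in agents:
        for j in agents:
            if i == j or j in home[i]:
                continue
            swapped = (home[j] - {j}) | {i}
            if value(i, swapped, v) > value(i, home[i], v):
                return False
    return True

# Toy instance: 6 agents with +/-1 valuations, partitioned into two triples.
v = {i: {j: (1 if (i + j) % 2 == 0 else -1) for j in range(6) if j != i}
     for i in range(6)}
P = [frozenset({0, 1, 2}), frozenset({3, 4, 5})]
print(is_envy_free(P, v))
```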
2209.07491
|
Asm Rizvi
|
A S M Rizvi, Jelena Mirkovic, John Heidemann, Wesley Hardaker, and
Robert Story
|
Defending Root DNS Servers Against DDoS Using Layered Defenses
|
9 pages, 3 figures
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributed Denial-of-Service (DDoS) attacks exhaust resources, leaving a
server unavailable to legitimate clients. The Domain Name System (DNS) is a
frequent target of DDoS attacks. Since DNS is a critical infrastructure
service, protecting it from DoS is imperative. Many prior approaches have
focused on specific filters or anti-spoofing techniques to protect generic
services. DNS root nameservers are more challenging to protect, since they use
fixed IP addresses, serve very diverse clients and requests, receive
predominantly UDP traffic that can be spoofed, and must guarantee high quality
of service. In this paper we propose a layered DDoS defense for DNS root
nameservers. Our defense uses a library of defensive filters, which can be
optimized for different attack types, with different levels of selectivity. We
further propose a method that automatically and continuously evaluates and
selects the best combination of filters throughout the attack. We show that
this layered defense approach provides exceptional protection against all
attack types using traces of ten real attacks from a DNS root nameserver. Our
automated system can select the best defense within seconds and quickly reduces
traffic to the server within a manageable range, while keeping collateral
damage lower than 2%. We can handle millions of filtering rules without
noticeable operational overhead.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 17:32:45 GMT"
}
] | 2022-09-16T00:00:00 |
[
[
"Rizvi",
"A S M",
""
],
[
"Mirkovic",
"Jelena",
""
],
[
"Heidemann",
"John",
""
],
[
"Hardaker",
"Wesley",
""
],
[
"Story",
"Robert",
""
]
] |
new_dataset
| 0.995279 |
2101.08819
|
Mohammad Javad Amiri
|
Mohammad Javad Amiri, Ziliang Lai, Liana Patel, Boon Thau Loo, Eric
Lo, Wenchao Zhou
|
Saguaro: An Edge Computing-Enabled Hierarchical Permissioned Blockchain
| null | null | null | null |
cs.DB cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present Saguaro, a permissioned blockchain system designed specifically
for edge computing networks. Saguaro leverages the hierarchical structure of
edge computing networks to reduce the overhead of wide-area communication
through several techniques. First, Saguaro proposes coordinator-based and
optimistic protocols to process cross-domain transactions with low latency,
where the lowest common ancestor of the involved domains coordinates the
protocol or detects inconsistency. Second, data are collected over the
hierarchy, enabling higher-level domains to aggregate their sub-domain data.
Finally, transactions initiated by mobile edge devices are processed without
relying on
high-level fog and cloud servers. Our experimental results across a wide range
of workloads demonstrate the scalability of Saguaro in supporting a range of
cross-domain and mobile transactions.
|
[
{
"version": "v1",
"created": "Thu, 21 Jan 2021 19:16:22 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2022 15:15:48 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Amiri",
"Mohammad Javad",
""
],
[
"Lai",
"Ziliang",
""
],
[
"Patel",
"Liana",
""
],
[
"Loo",
"Boon Thau",
""
],
[
"Lo",
"Eric",
""
],
[
"Zhou",
"Wenchao",
""
]
] |
new_dataset
| 0.995173 |
2104.06641
|
Gabin An
|
Gabin An, Shin Yoo
|
FDG: A Precise Measurement of Fault Diagnosability Gain of Test Cases
|
13 pages, 6 figures (to be published in ISSTA'22)
| null |
10.1145/3533767.3534370
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The performance of many Fault Localisation (FL) techniques directly depends
on the quality of the used test suites. Consequently, it is extremely useful to
be able to precisely measure how much diagnostic power each test case can
introduce when added to a test suite used for FL. Such a measure can help us
not only to prioritise and select test cases to be used for FL, but also to
effectively augment test suites that are too weak to be used with FL
techniques. We propose FDG, a new measure of Fault Diagnosability Gain for
individual test cases. The design of FDG is based on our analysis of existing
metrics that are designed to prioritise test cases for better FL. Unlike other
metrics, FDG exploits the ongoing FL results to emphasise the parts of the
program for which more information is needed. Our evaluation of FDG with
Defects4J shows that it can successfully help the augmentation of test suites
for better FL. When given only a few failing test cases (2.3 test cases on
average), FDG can effectively augment the given test suite by prioritising the
test cases generated automatically by EvoSuite: the augmentation can improve
the acc@1 and acc@10 of the FL results by 11.6x and 2.2x on average, after
requiring only ten human judgements on the correctness of the assertions
EvoSuite generates.
|
[
{
"version": "v1",
"created": "Wed, 14 Apr 2021 06:06:29 GMT"
},
{
"version": "v2",
"created": "Thu, 6 May 2021 02:18:55 GMT"
},
{
"version": "v3",
"created": "Tue, 24 May 2022 12:45:38 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"An",
"Gabin",
""
],
[
"Yoo",
"Shin",
""
]
] |
new_dataset
| 0.987846 |
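FDG builds on spectrum-based fault localisation, where each program element receives a suspiciousness score from test coverage. As background, here is the standard Ochiai metric on a toy coverage spectrum; it is not FDG itself, whose definition is given in the paper.

```python
# Standard Ochiai suspiciousness from a coverage spectrum (background for
# the fault-localisation setting, not the FDG measure).
import math

def ochiai(coverage, failing):
    """coverage: {test: set(elements)}; failing: set of failing tests.
    Returns a suspiciousness score per program element."""
    elements = set().union(*coverage.values())
    total_fail = len(failing)
    scores = {}
    for e in elements:
        ef = sum(1 for t in failing if e in coverage[t])
        ep = sum(1 for t in coverage if t not in failing and e in coverage[t])
        denom = math.sqrt(total_fail * (ef + ep))
        scores[e] = ef / denom if denom else 0.0
    return scores

cov = {"t1": {"a", "b"}, "t2": {"b", "c"}, "t3": {"a", "c"}}
print(ochiai(cov, failing={"t2"}))  # 'b' and 'c' tie as most suspicious
```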
2109.03891
|
Wentao Yuan
|
Wentao Yuan, Chris Paxton, Karthik Desingh, Dieter Fox
|
SORNet: Spatial Object-Centric Representations for Sequential
Manipulation
|
CoRL 2021 Best Systems Paper Finalist; Code and data available at
https://github.com/wentaoyuan/sornet
| null | null | null |
cs.RO cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sequential manipulation tasks require a robot to perceive the state of an
environment and plan a sequence of actions leading to a desired goal state. In
such tasks, the ability to reason about spatial relations among object entities
from raw sensor inputs is crucial in order to determine when a task has been
completed and which actions can be executed. In this work, we propose SORNet
(Spatial Object-Centric Representation Network), a framework for learning
object-centric representations from RGB images conditioned on a set of object
queries, represented as image patches called canonical object views. With only
a single canonical view per object and no annotation, SORNet generalizes
zero-shot to object entities whose shape and texture are both unseen during
training. We evaluate SORNet on various spatial reasoning tasks such as spatial
relation classification and relative direction regression in complex tabletop
manipulation scenarios and show that SORNet significantly outperforms baselines
including state-of-the-art representation learning techniques. We also
demonstrate the application of the representation learned by SORNet on
visual-servoing and task planning for sequential manipulation on a real robot.
|
[
{
"version": "v1",
"created": "Wed, 8 Sep 2021 19:36:29 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Nov 2021 08:25:07 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Sep 2022 02:33:44 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Yuan",
"Wentao",
""
],
[
"Paxton",
"Chris",
""
],
[
"Desingh",
"Karthik",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.995797 |
2111.09451
|
Nikolaos Ioannis Bountos
|
Ioannis Papoutsis, Nikolaos-Ioannis Bountos, Angelos Zavras, Dimitrios
Michail, Christos Tryfonopoulos
|
Benchmarking and scaling of deep learning models for land cover image
classification
|
25 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The availability of the sheer volume of Copernicus Sentinel-2 imagery has
created new opportunities for exploiting deep learning (DL) methods for land
use land cover (LULC) image classification. However, an extensive set of
benchmark experiments is currently lacking, i.e. DL models tested on the same
dataset, with a common and consistent set of metrics, and in the same hardware.
In this work, we use the BigEarthNet Sentinel-2 dataset to benchmark for the
first time different state-of-the-art DL models for the multi-label,
multi-class LULC image classification problem, contributing with an exhaustive
zoo of 60 trained models. Our benchmark includes standard CNNs, as well as
non-convolutional methods. We put to the test EfficientNets and Wide Residual
Networks (WRN) architectures, and leverage classification accuracy, training
time and inference rate. Furthermore, we propose to use the EfficientNet
framework for the compound scaling of a lightweight WRN. Enhanced with an
Efficient Channel Attention mechanism, our scaled lightweight model emerged as
the new state-of-the-art. It achieves 4.5% higher averaged F-Score
classification accuracy for all 19 LULC classes compared to a standard ResNet50
baseline model, with an order of magnitude less trainable parameters. We
provide access to all trained models, along with our code for distributed
training on multiple GPU nodes. This model zoo of pre-trained encoders can be
used for transfer learning and rapid prototyping in different remote sensing
tasks that use Sentinel-2 data, instead of exploiting backbone models trained
with data from a different domain, e.g., from ImageNet. We validate their
suitability for transfer learning in different datasets of diverse volumes. Our
top-performing WRN achieves state-of-the-art performance (71.1% F-Score) on the
SEN12MS dataset while being exposed to only a small fraction of the training
dataset.
|
[
{
"version": "v1",
"created": "Thu, 18 Nov 2021 00:03:14 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jan 2022 17:04:45 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Sep 2022 08:54:07 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Papoutsis",
"Ioannis",
""
],
[
"Bountos",
"Nikolaos-Ioannis",
""
],
[
"Zavras",
"Angelos",
""
],
[
"Michail",
"Dimitrios",
""
],
[
"Tryfonopoulos",
"Christos",
""
]
] |
new_dataset
| 0.999159 |
2205.10726
|
Ruofan Hu
|
Ruofan Hu, Dongyu Zhang, Dandan Tao, Thomas Hartvigsen, Hao Feng, Elke
Rundensteiner
|
TWEET-FID: An Annotated Dataset for Multiple Foodborne Illness Detection
Tasks
|
LREC 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Foodborne illness is a serious but preventable public health problem -- with
delays in detecting the associated outbreaks resulting in productivity loss,
expensive recalls, public safety hazards, and even loss of life. While social
media is a promising source for identifying unreported foodborne illnesses,
there is a dearth of labeled datasets for developing effective outbreak
detection models. To accelerate the development of machine learning-based
models for foodborne outbreak detection, we thus present TWEET-FID
(TWEET-Foodborne Illness Detection), the first publicly available annotated
dataset for multiple foodborne illness incident detection tasks. TWEET-FID,
collected from Twitter, is annotated with three facets: tweet class, entity
type, and slot type, with labels produced by experts as well as by crowdsource
workers. We introduce several domain tasks leveraging these three facets: text
relevance classification (TRC), entity mention detection (EMD), and slot
filling (SF). We describe the end-to-end methodology for dataset design,
creation, and labeling for supporting model development for these tasks. A
comprehensive set of results for these tasks leveraging state-of-the-art
single- and multi-task deep learning methods on the TWEET-FID dataset are
provided. This dataset opens opportunities for future research in foodborne
outbreak detection.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 03:47:18 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2022 03:18:41 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Hu",
"Ruofan",
""
],
[
"Zhang",
"Dongyu",
""
],
[
"Tao",
"Dandan",
""
],
[
"Hartvigsen",
"Thomas",
""
],
[
"Feng",
"Hao",
""
],
[
"Rundensteiner",
"Elke",
""
]
] |
new_dataset
| 0.999832 |
2206.01589
|
Peize Li
|
Peize Li, Kaiwen Cai, Muhamad Risqi U. Saputra, Zhuangzhuang Dai,
Chris Xiaoxuan Lu, Andrew Markham and Niki Trigoni
|
OdomBeyondVision: An Indoor Multi-modal Multi-platform Odometry Dataset
Beyond the Visible Spectrum
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a multimodal indoor odometry dataset, OdomBeyondVision,
featuring multiple sensors across different spectra and collected with
different mobile platforms. Not only does OdomBeyondVision contain traditional
navigation sensors such as IMUs, mechanical LiDAR and an RGBD camera, it also
includes several emerging sensors such as the single-chip mmWave radar, LWIR
thermal camera and solid-state LiDAR. With the above sensors
on UAV, UGV and handheld platforms, we respectively recorded the multimodal
odometry data and their movement trajectories in various indoor scenes and
different illumination conditions. We release the exemplar radar,
radar-inertial and thermal-inertial odometry implementations to demonstrate
their results for future works to compare against and improve upon. The full
dataset including toolkit and documentation is publicly available at:
https://github.com/MAPS-Lab/OdomBeyondVision.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 14:19:40 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2022 11:54:24 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Sep 2022 11:44:11 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Li",
"Peize",
""
],
[
"Cai",
"Kaiwen",
""
],
[
"Saputra",
"Muhamad Risqi U.",
""
],
[
"Dai",
"Zhuangzhuang",
""
],
[
"Lu",
"Chris Xiaoxuan",
""
],
[
"Markham",
"Andrew",
""
],
[
"Trigoni",
"Niki",
""
]
] |
new_dataset
| 0.999887 |
2208.05446
|
Jiyang Zhang
|
Jiyang Zhang, Sheena Panthaplackel, Pengyu Nie, Junyi Jessy Li, Milos
Gligoric
|
CoditT5: Pretraining for Source Code and Natural Language Editing
|
ASE 2022 (camera ready)
| null | null | null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Pretrained language models have been shown to be effective in many
software-related generation tasks; however, they are not well-suited for
editing tasks as they are not designed to reason about edits. To address this,
we propose a novel pretraining objective which explicitly models edits and use
it to build CoditT5, a large language model for software-related editing tasks
that is pretrained on large amounts of source code and natural language
comments. We fine-tune it on various downstream editing tasks, including
comment updating, bug fixing, and automated code review. By outperforming
standard generation-based models, we demonstrate the generalizability of our
approach and its suitability for editing tasks. We also show how a standard
generation model and our edit-based model can complement one another through
simple reranking strategies, with which we achieve state-of-the-art performance
for the three downstream editing tasks.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 16:59:40 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2022 16:42:24 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Zhang",
"Jiyang",
""
],
[
"Panthaplackel",
"Sheena",
""
],
[
"Nie",
"Pengyu",
""
],
[
"Li",
"Junyi Jessy",
""
],
[
"Gligoric",
"Milos",
""
]
] |
new_dataset
| 0.993154 |
2209.05376
|
Bon Adriel Aseniero
|
Bon Adriel Aseniero, Sheelagh Carpendale, George Fitzmaurice, Justin
Matejka
|
SkyGlyphs: Reflections on the Design of a Delightful Visualization
|
Accepted to IEEE VIS Arts Program 2022
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In creating SkyGlyphs, our goal was to develop a data visualization that
could possibly capture people's attention and spark their curiosity to explore
a dataset. This work was inspired by a mingling of research including
serendipitous interactions, visualizations for public displays, and personal
visualizations. SkyGlyphs is a nonconventional whimsical visualization,
depicting datapoints as animated balloons in space. We designed it to encourage
non-experts to casually browse the contents of a repository through visual
interactions like linking and grouping of datapoints. Our contributions include
SkyGlyphs' representation and our design reflection that reveals a perspective
on how to design delightful visualizations.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 16:26:07 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Sep 2022 15:24:09 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Aseniero",
"Bon Adriel",
""
],
[
"Carpendale",
"Sheelagh",
""
],
[
"Fitzmaurice",
"George",
""
],
[
"Matejka",
"Justin",
""
]
] |
new_dataset
| 0.998837 |
2209.06322
|
Mojtaba Kolahdouzi
|
Mojtaba Kolahdouzi, Alireza Sepas-Moghaddam, Ali Etemad
|
FaceTopoNet: Facial Expression Recognition using Face Topology Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior work has shown that the order in which different components of the face
are learned using a sequential learner can play an important role in the
performance of facial expression recognition systems. We propose FaceTopoNet,
an end-to-end deep model for facial expression recognition, which is capable of
learning an effective tree topology of the face. Our model then traverses the
learned tree to generate a sequence, which is then used to form an embedding to
feed a sequential learner. The devised model adopts one stream for learning
structure and one stream for learning texture. The structure stream focuses on
the positions of the facial landmarks, while the main focus of the texture
stream is on the patches around the landmarks to learn textural information. We
then fuse the outputs of the two streams by utilizing an effective
attention-based fusion strategy. We perform extensive experiments on four
large-scale in-the-wild facial expression datasets - namely AffectNet, FER2013,
ExpW, and RAF-DB - and one lab-controlled dataset (CK+) to evaluate our
approach. FaceTopoNet achieves state-of-the-art performance on three of the
five datasets and obtains competitive results on the other two datasets. We
also perform rigorous ablation and sensitivity experiments to evaluate the
impact of different components and parameters in our model. Lastly, we perform
robustness experiments and demonstrate that FaceTopoNet is more robust against
occlusions in comparison to other leading methods in the area.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 22:02:54 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Kolahdouzi",
"Mojtaba",
""
],
[
"Sepas-Moghaddam",
"Alireza",
""
],
[
"Etemad",
"Ali",
""
]
] |
new_dataset
| 0.994633 |
2209.06334
|
Pritam Choudhury
|
Pritam Choudhury
|
Monadic and Comonadic Aspects of Dependency Analysis
|
Extended version of paper (with same title) to be published at SPLASH
2022
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Dependency analysis is vital to several applications in computer science. It
lies at the essence of secure information flow analysis, binding-time analysis,
etc. Various calculi have been proposed in the literature for analysing
individual dependencies. Abadi et al., by extending Moggi's monadic
metalanguage, unified several of these calculi into the Dependency Core
Calculus (DCC). DCC has served as a foundational framework for dependency
analysis for the last two decades. However, in spite of its success, DCC has
its limitations. First, the monadic bind rule of the calculus is nonstandard
and relies upon an auxiliary protection judgement. Second, being of a monadic
nature, the calculus cannot capture dependency analyses that possess a
comonadic nature, for example, the binding-time calculus, $\lambda^{\circ}$, of
Davies. In this paper, we address these limitations by designing an alternative
dependency calculus that is inspired by standard ideas from category theory.
Our calculus is both monadic and comonadic in nature and subsumes both DCC and
$\lambda^{\circ}$. Our construction explains the nonstandard bind rule and the
protection judgement of DCC in terms of standard categorical concepts. It also
leads to a novel technique for proving correctness of dependency analysis. We
use this technique to present alternative proofs of correctness for DCC and
$\lambda^{\circ}$.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 22:42:21 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Choudhury",
"Pritam",
""
]
] |
new_dataset
| 0.951376 |
2209.06376
|
Peng Yin
|
Peng Yin, Ivan Cisneros, Ji Zhang, Howie Choset, and Sebastian Scherer
|
iSimLoc: Visual Global Localization for Previously Unseen Environments
with Simulated Images
|
17 pages, 16 figures, conditionally accepted by IEEE Transactions on
Robotics
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual cameras are attractive devices for beyond visual line of sight
(B-VLOS) drone operation, since they are low in size, weight, power, and cost,
and can provide a redundant modality to GPS failures. However, state-of-the-art
visual localization algorithms are unable to match visual data that have a
significantly different appearance due to illuminations or viewpoints. This
paper presents iSimLoc, a condition/viewpoint consistent hierarchical global
re-localization approach. The place features of iSimLoc can be utilized to
search target images under changing appearances and viewpoints. Additionally,
our hierarchical global re-localization module refines in a coarse-to-fine
manner, allowing iSimLoc to perform a fast and accurate estimation. We evaluate
our method on one dataset with appearance variations and one dataset that
focuses on demonstrating large-scale matching over a long flight in complicated
environments. On our two datasets, iSimLoc achieves 88.7% and 83.8%
successful retrieval rates with 1.5s inference time, compared to 45.8% and
39.7% using the next best method. These results demonstrate robust localization
in a range of environments.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 02:40:50 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Yin",
"Peng",
""
],
[
"Cisneros",
"Ivan",
""
],
[
"Zhang",
"Ji",
""
],
[
"Choset",
"Howie",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
new_dataset
| 0.963502 |
2209.06416
|
Zhexiong Liu
|
Zhexiong Liu, Meiqi Guo, Yue Dai, Diane Litman
|
ImageArg: A Multi-modal Tweet Dataset for Image Persuasiveness Mining
|
In Argument Mining Workshop, held in conjunction with the
International Conference on Computational Linguistics (COLING), October 2022
| null | null | null |
cs.CL cs.AI cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
The growing interest in developing corpora of persuasive texts has promoted
applications in automated systems, e.g., debating and essay scoring systems;
however, there is little prior work mining image persuasiveness from an
argumentative perspective. To expand persuasiveness mining into a multi-modal
realm, we present a multi-modal dataset, ImageArg, consisting of annotations of
image persuasiveness in tweets. The annotations are based on a persuasion
taxonomy we developed to explore image functionalities and the means of
persuasion. We benchmark image persuasiveness tasks on ImageArg using
widely-used multi-modal learning methods. The experimental results show that
our dataset offers a useful resource for this rich and challenging topic, and
there is ample room for modeling improvement.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 05:03:10 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Liu",
"Zhexiong",
""
],
[
"Guo",
"Meiqi",
""
],
[
"Dai",
"Yue",
""
],
[
"Litman",
"Diane",
""
]
] |
new_dataset
| 0.999378 |
2209.06418
|
Seyun Bae
|
Seyun Bae, Hoyoon Byun, Changdae Oh, Yoon-Sik Cho, Kyungwoo Song
|
Graph Perceiver IO: A General Architecture for Graph Structured Data
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal machine learning has been widely studied for the development of
general intelligence. Recently, the remarkable multimodal algorithms, the
Perceiver and Perceiver IO, show competitive results for diverse dataset
domains and tasks. However, recent works, Perceiver and Perceiver IO, have
focused on heterogeneous modalities, including image, text, and speech, and
there are few research works for graph structured datasets. A graph is one of
the most generalized dataset structures, and we can represent the other
dataset, including images, text, and speech, as graph structured data. A graph
has an adjacency matrix different from other dataset domains such as text and
image, and it is not trivial to handle the topological information, relational
information, and canonical positional information. In this study, we provide a
Graph Perceiver IO, the Perceiver IO for the graph structured dataset. We keep
the main structure of the Graph Perceiver IO as the Perceiver IO because the
Perceiver IO already handles the diverse dataset well, except for the graph
structured dataset. The Graph Perceiver IO is a general method, and it can
handle diverse datasets such as graph structured data as well as text and
images. Compared with graph neural networks, the Graph Perceiver IO has lower
complexity and can incorporate local and global information efficiently. We
show that the Graph Perceiver IO achieves competitive results for
diverse graph-related tasks, including node classification, graph
classification, and link prediction.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 05:05:55 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Bae",
"Seyun",
""
],
[
"Byun",
"Hoyoon",
""
],
[
"Oh",
"Changdae",
""
],
[
"Cho",
"Yoon-Sik",
""
],
[
"Song",
"Kyungwoo",
""
]
] |
new_dataset
| 0.979133 |
2209.06452
|
Luigy Alex Machaca Arcana
|
Luigy Machaca, F. Oliver Sumari H, Jose Huaman, Esteban Clua, Joris
Guerin
|
TrADe Re-ID -- Live Person Re-Identification using Tracking and Anomaly
Detection
|
6 pages, 4 figures, Accepted on ICMLA 2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Person Re-Identification (Re-ID) aims to search for a person of interest
(query) in a network of cameras. In the classic Re-ID setting the query is
sought in a gallery containing properly cropped images of entire bodies.
Recently, the live Re-ID setting was introduced to represent the practical
application context of Re-ID better. It consists in searching for the query in
short videos, containing whole scene frames. The initial live Re-ID baseline
used a pedestrian detector to build a large search gallery and a classic Re-ID
model to find the query in the gallery. However, the galleries generated were
too large and contained low-quality images, which decreased the live Re-ID
performance. Here, we present a new live Re-ID approach called TrADe, to
generate smaller, higher-quality galleries. TrADe first uses a Tracking
algorithm to identify sequences of images of the same individual in the
gallery. Then, an Anomaly Detection model is used to select a single good
representative of
each tracklet. TrADe is validated on the live Re-ID version of the PRID-2011
dataset and shows significant improvements over the baseline.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 07:00:35 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Machaca",
"Luigy",
""
],
[
"H",
"F. Oliver Sumari",
""
],
[
"Huaman",
"Jose",
""
],
[
"Clua",
"Esteban",
""
],
[
"Guerin",
"Joris",
""
]
] |
new_dataset
| 0.999609 |
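The two-stage idea in the TrADe abstract above, tracking followed by anomaly detection, can be sketched as follows. The inputs (track ids and appearance embeddings) are hypothetical stand-ins, and IsolationForest is one possible anomaly detector, not necessarily the one used by the authors.

```python
# Sketch of tracklet filtering: keep the least anomalous crop per tracklet
# as its gallery representative (illustrative, not the TrADe code).
import numpy as np
from sklearn.ensemble import IsolationForest

def pick_representatives(track_ids, features):
    """track_ids: (N,) int array; features: (N, D) appearance embeddings."""
    model = IsolationForest(random_state=0).fit(features)
    normality = model.score_samples(features)  # higher = less anomalous
    best = {}
    for idx, tid in enumerate(track_ids):
        if tid not in best or normality[idx] > normality[best[tid]]:
            best[tid] = idx
    return best  # tracklet id -> index of its representative detection

rng = np.random.default_rng(0)
tids = np.repeat(np.arange(5), 20)           # 5 tracklets, 20 crops each
feats = rng.normal(size=(100, 128))          # stand-in Re-ID embeddings
gallery = pick_representatives(tids, feats)  # one image per tracklet
```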
2209.06496
|
Ziya Zhou
|
Yu Zhang, Ziya Zhou, Xiaobing Li, Feng Yu, Maosong Sun
|
CCOM-HuQin: an Annotated Multimodal Chinese Fiddle Performance Dataset
|
14 pages, 11 figures
| null | null | null |
cs.MM cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
HuQin is a family of traditional Chinese bowed string instruments. Playing
techniques (PTs) embodied in various playing styles add abundant emotional
coloring and aesthetic feelings to HuQin performance. The complex applied
techniques make HuQin music a challenging source for fundamental MIR tasks such
as pitch analysis, transcription and score-audio alignment. In this paper, we
present a multimodal performance dataset of HuQin music that contains
audio-visual recordings of 11,992 single PT clips and 57 annotated musical
pieces of classical excerpts. We systematically describe the HuQin PT taxonomy
based on musicological theory and practical use cases. Then we introduce the
dataset creation methodology and highlight the annotation principles featuring
PTs. We analyze the statistics in different aspects to demonstrate the variety
of PTs played in HuQin subcategories and perform preliminary experiments to
show the potential applications of the dataset in various MIR tasks and
cross-cultural music studies. Finally, we propose future work to be extended on
the dataset.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 08:51:15 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Zhang",
"Yu",
""
],
[
"Zhou",
"Ziya",
""
],
[
"Li",
"Xiaobing",
""
],
[
"Yu",
"Feng",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.999791 |
2209.06598
|
Oliver Karras
|
Hartmut Schmitt, Gerald Heller, Anne Hess, Oliver Karras
|
Ermittlung und Kommunikation von Anforderungen in etablierten
UX-Prozessen
|
in German, Gesellschaft für Informatik, Fachgruppentreffen
Requirements Engineering 2022
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a strong overlap between requirements engineering (RE) and user
experience (UX). Nevertheless, in practice both disciplines are often performed
by separate roles and there are deficits in collaboration. In order to provide
starting points for the further development of roles, activities and artifacts
of the disciplines, the Requirements Engineering and User Experience Working
Group (AK REUX) has been conducting a series of case studies since 2021,
analyzing the UX processes of different companies from a RE perspective. We
presented interim results of this investigation at the RE specialist group
meeting in 2022 and compared them with the experiences of the participants.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 06:01:36 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Schmitt",
"Hartmut",
""
],
[
"Heller",
"Gerald",
""
],
[
"Hess",
"Anne",
""
],
[
"Karras",
"Oliver",
""
]
] |
new_dataset
| 0.997726 |
2209.06641
|
Dhanalaxmi Gaddam
|
Dhanalaxmi Gaddam, Jean Lahoud, Fahad Shahbaz Khan, Rao Muhammad
Anwer, Hisham Cholakkal
|
CMR3D: Contextualized Multi-Stage Refinement for 3D Object Detection
|
5 figures, 10 pages including references
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing deep learning-based 3D object detectors typically rely on the
appearance of individual objects and do not explicitly pay attention to the
rich contextual information of the scene. In this work, we propose the
Contextualized Multi-Stage Refinement for 3D Object Detection (CMR3D)
framework, which takes a 3D scene as input and strives to explicitly integrate
useful contextual information of the scene at multiple levels to predict a set
of object bounding-boxes along with their corresponding semantic labels. To
this end, we propose to utilize a context enhancement network that captures the
contextual information at different levels of granularity followed by a
multi-stage refinement module to progressively refine the box positions and
class predictions. Extensive experiments on the large-scale ScanNetV2 benchmark
reveal the benefits of our proposed method, leading to an absolute improvement
of 2.0% over the baseline. In addition to 3D object detection, we investigate
the effectiveness of our CMR3D framework for the problem of 3D object counting.
Our source code will be publicly released.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 05:26:09 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Gaddam",
"Dhanalaxmi",
""
],
[
"Lahoud",
"Jean",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Cholakkal",
"Hisham",
""
]
] |
new_dataset
| 0.999694 |
2209.06650
|
Naihao Deng
|
Santiago Castro, Naihao Deng, Pingxuan Huang, Mihai Burzo, Rada
Mihalcea
|
WildQA: In-the-Wild Video Question Answering
|
*: Equal contribution; COLING 2022 oral; project webpage:
https://lit.eecs.umich.edu/wildqa/
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Existing video understanding datasets mostly focus on human interactions,
with little attention being paid to the "in the wild" settings, where the
videos are recorded outdoors. We propose WILDQA, a video understanding dataset
of videos recorded in outside settings. In addition to video question answering
(Video QA), we also introduce the new task of identifying visual support for a
given question and answer (Video Evidence Selection). Through evaluations using
a wide range of baseline models, we show that WILDQA poses new challenges to
the vision and language research communities. The dataset is available at
https://lit.eecs.umich.edu/wildqa/.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 13:54:07 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Castro",
"Santiago",
""
],
[
"Deng",
"Naihao",
""
],
[
"Huang",
"Pingxuan",
""
],
[
"Burzo",
"Mihai",
""
],
[
"Mihalcea",
"Rada",
""
]
] |
new_dataset
| 0.999745 |
2209.06668
|
Son T. Luu
|
Triet Minh Thai, Ngan Ha-Thao Chu, Anh Tuan Vo, Son T. Luu
|
UIT-ViCoV19QA: A Dataset for COVID-19 Community-based Question Answering
on Vietnamese Language
|
Accepted as poster paper at The 36th annual Meeting of Pacific Asia
Conference on Language, Information and Computation (PACLIC 36). The dataset
and code are available at https://github.com/minhtriet2397/UIT-ViCoV19QA
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
For the last two years, from 2020 to 2021, COVID-19 has broken through
disease prevention measures in many countries, including Vietnam, and has
negatively impacted various aspects of human life and the social community. In
addition, misleading information and fake news about the pandemic circulating
in the community are a serious problem. Therefore, we present the first
Vietnamese community-based question answering dataset for developing question
answering
systems for COVID-19 called UIT-ViCoV19QA. The dataset comprises 4,500
question-answer pairs collected from trusted medical sources, with at least one
answer and at most four unique paraphrased answers per question. Along with the
dataset, we set up various deep learning models as baselines to assess the
quality of our dataset and initiate the benchmark results for further research
through commonly used metrics such as BLEU, METEOR, and ROUGE-L. We also
illustrate the positive effects of having multiple paraphrased answers
experimented on these models, especially on Transformer - a dominant
architecture in the field of study.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 14:24:23 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Thai",
"Triet Minh",
""
],
[
"Chu",
"Ngan Ha-Thao",
""
],
[
"Vo",
"Anh Tuan",
""
],
[
"Luu",
"Son T.",
""
]
] |
new_dataset
| 0.999862 |
2209.06675
|
Junhao Cai
|
Junhao Cai, Jingcheng Su, Zida Zhou, Hui Cheng, Qifeng Chen, Michael Y
Wang
|
Volumetric-based Contact Point Detection for 7-DoF Grasping
|
Accepted to Conference on Robot Learning (CoRL) 2022. Supplementary
materials: https://openreview.net/forum?id=SrSCqW4dq9
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel grasp pipeline based on contact point
detection on the truncated signed distance function (TSDF) volume to achieve
closed-loop 7-degree-of-freedom (7-DoF) grasping on cluttered environments. The
key aspects of our method are that 1) the proposed pipeline exploits the TSDF
volume in terms of multi-view fusion, contact-point sampling and evaluation,
and collision checking, which provides reliable and collision-free 7-DoF
gripper poses with real-time performance; 2) the contact-based pose
representation effectively eliminates the ambiguity introduced by the
normal-based methods, which provides a more precise and flexible solution.
Extensive simulated and real-robot experiments demonstrate that the proposed
pipeline can select more antipodal and stable grasp poses and outperforms
normal-based baselines in terms of the grasp success rate in both simulated and
physical scenarios.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 14:30:51 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Cai",
"Junhao",
""
],
[
"Su",
"Jingcheng",
""
],
[
"Zhou",
"Zida",
""
],
[
"Cheng",
"Hui",
""
],
[
"Chen",
"Qifeng",
""
],
[
"Wang",
"Michael Y",
""
]
] |
new_dataset
| 0.97982 |
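The TSDF-based contact sampling named in the grasping abstract above can be illustrated on a toy volume. The sketch below extracts near-surface voxels and gradient normals only; gripper geometry, antipodal pairing, collision checking and the scoring network are omitted, and all sizes are assumptions.

```python
# Illustrative sketch of sampling contact candidates from a TSDF volume
# (not the authors' pipeline).
import numpy as np

def contact_candidates(tsdf, voxel_size, band=0.5):
    """tsdf: (X, Y, Z) truncated signed distances in units of truncation.
    Returns surface points and (unnormalised-sign) outward normals near
    the zero level set."""
    gx, gy, gz = np.gradient(tsdf, voxel_size)
    normals = np.stack([gx, gy, gz], axis=-1)
    surface = np.abs(tsdf) < band            # voxels close to the surface
    idx = np.argwhere(surface)
    pts = idx * voxel_size
    n = normals[surface]
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9
    return pts, n

# Toy volume: a sphere of radius 0.1 m centred in a 0.3 m cube.
grid = np.indices((30, 30, 30)).transpose(1, 2, 3, 0) * 0.01
sdf = np.linalg.norm(grid - 0.15, axis=-1) - 0.1
tsdf = np.clip(sdf / 0.03, -1, 1)            # truncate at 3 cm
points, normals = contact_candidates(tsdf, 0.01)
```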
2209.06681
|
Philipp Schr\"oppel
|
Philipp Schr\"oppel and Jan Bechtold and Artemij Amiranashvili and
Thomas Brox
|
A Benchmark and a Baseline for Robust Multi-view Depth Estimation
|
Accepted at 3DV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent deep learning approaches for multi-view depth estimation are employed
either in a depth-from-video or a multi-view stereo setting. Despite different
settings, these approaches are technically similar: they correlate multiple
source views with a keyview to estimate a depth map for the keyview. In this
work, we introduce the Robust Multi-View Depth Benchmark that is built upon a
set of public datasets and allows evaluation in both settings on data from
different domains. We evaluate recent approaches and find imbalanced
performances across domains. Further, we consider a third setting, where camera
poses are available and the objective is to estimate the corresponding depth
maps with their correct scale. We show that recent approaches do not generalize
across datasets in this setting. This is because their cost volume output runs
out of distribution. To resolve this, we present the Robust MVD Baseline model
for multi-view depth estimation, which is built upon existing components but
employs a novel scale augmentation procedure. It can be applied for robust
multi-view depth estimation, independent of the target data. We provide code
for the proposed benchmark and baseline model at
https://github.com/lmb-freiburg/robustmvd.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 17:44:16 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Schröppel",
"Philipp",
""
],
[
"Bechtold",
"Jan",
""
],
[
"Amiranashvili",
"Artemij",
""
],
[
"Brox",
"Thomas",
""
]
] |
new_dataset
| 0.998296 |
2209.06750
|
Oscar Araque
|
Oscar Araque, Lorenzo Gatti and Kyriaki Kalimeri
|
LibertyMFD: A Lexicon to Assess the Moral Foundation of Liberty
|
GoodIT '22: Proceedings of the 2022 ACM Conference on Information
Technology for Social Good. GoodIT'22, September 7-9, 2022, Limassol, Cyprus
|
Conference on Information Technology for Social Good (GoodIT'22),
September 7-9, 2022, Limassol, Cyprus. ACM, New York, NY, USA, 7 pages
|
10.1145/3524458.3547264
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Quantifying the moral narratives expressed in user-generated text, news,
or public discourse is fundamental for understanding individuals' concerns and
viewpoints and preventing violent protests and social polarisation. The Moral
Foundation Theory (MFT) was developed to operationalise morality in a
five-dimensional scale system. Recent developments of the theory urged the
introduction of a new foundation, the Liberty Foundation. Since this foundation
was only recently added to the theory, no linguistic resources are available to
assess whether liberty is present in text corpora. Given its importance to
current
social issues such as the vaccination debate, we propose two data-driven
approaches, deriving two candidate lexicons generated based on aligned
documents from online news sources with different worldviews. After extensive
experimentation, we contribute to the research community a novel lexicon that
assesses the liberty moral foundation in the way individuals with contrasting
viewpoints express themselves through written text. The LibertyMFD dictionary
can be a valuable tool for policymakers to understand diverse viewpoints on
controversial social issues such as vaccination, abortion, or even uprisings,
as they happen and on a large scale.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 16:14:54 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Araque",
"Oscar",
""
],
[
"Gatti",
"Lorenzo",
""
],
[
"Kalimeri",
"Kyriaki",
""
]
] |
new_dataset
| 0.999861 |
2209.06792
|
Geoffrey Cideron
|
Geoffrey Cideron, Sertan Girgin, Anton Raichuk, Olivier Pietquin,
Olivier Bachem, Léonard Hussenot
|
vec2text with Round-Trip Translations
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We investigate models that can generate arbitrary natural language text (e.g.
all English sentences) from a bounded, convex and well-behaved control space.
We call them universal vec2text models. Such models would allow making semantic
decisions in the vector space (e.g. via reinforcement learning) while the
natural language generation is handled by the vec2text model. We propose four
desired properties: universality, diversity, fluency, and semantic structure,
that such vec2text models should possess and we provide quantitative and
qualitative methods to assess them. We implement a vec2text model by adding a
bottleneck to a 250M parameters Transformer model and training it with an
auto-encoding objective on 400M sentences (10B tokens) extracted from a massive
web corpus. We propose a simple data augmentation technique based on round-trip
translations and show in extensive experiments that the resulting vec2text
model surprisingly leads to vector spaces that fulfill our four desired
properties and that this model strongly outperforms both standard and denoising
auto-encoders.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 17:20:18 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Cideron",
"Geoffrey",
""
],
[
"Girgin",
"Sertan",
""
],
[
"Raichuk",
"Anton",
""
],
[
"Pietquin",
"Olivier",
""
],
[
"Bachem",
"Olivier",
""
],
[
"Hussenot",
"Léonard",
""
]
] |
new_dataset
| 0.997749 |
2209.06812
|
Mao Ye
|
Mao Ye, Nicolette Formosa and Mohammed Quddus
|
Developing a Vehicle Re-routing Algorithm using Connected Vehicle (CV)
Technology
|
19 pages, 11 figures
| null | null | null |
cs.NI cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vehicle Ad-hoc Networks (VANETs) act as the core of vehicular communications
and provide the fundamental wireless communication architecture to support both
vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication.
Therefore, by leveraging only communication technologies, Connected Vehicles
(CVs) can navigate through the dynamic road network. However, such vehicles are
still in their infancy but are expected to have a significant impact on safety
and mobility such as reducing non-recurrent congestion in case of a vehicle
breakdown or other roadway incidents. To evaluate their impacts, this research
examines the benefits of having CVs when a vehicle breakdown occurs by
developing an intelligent proactive re-routing algorithm. Due to a lack of
real-world data, this paper adopts an integrated simulation framework
consisting of a V2X (OMNeT++) communication simulator and a microscopic
traffic simulator
(SUMO). The developed algorithm functions such that when a vehicle is broken
down within a live traffic lane, the system detects the breakdown, generates
warning messages immediately and transmits them to approaching vehicles. Based
on the real-time notification, informed vehicles proactively re-route to
alternative roads to avoid the breakdown zone. Two scenarios were developed
where a breakdown occurs within and outside a junction for both V2X-enabled and
disabled systems. Results show that V2X-enabled CV re-routing mechanism can
improve traffic efficiency by reducing congestion and enhance traffic safety by
smoothing accelerations and decelerations of affected vehicles with low
infrastructure costs. The algorithm would be useful to highway agencies
(Department for Transport) and vehicle manufacturers in introducing CVs onto
existing road networks.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 10:33:52 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Ye",
"Mao",
""
],
[
"Formosa",
"Nicolette",
""
],
[
"Quddus",
"Mohammed",
""
]
] |
new_dataset
| 0.992699 |
2209.06820
|
EPTCS
|
Bas van den Heuvel (University of Groningen), Jorge A. Pérez
(University of Groningen)
|
Asynchronous Functional Sessions: Cyclic and Concurrent
|
In Proceedings EXPRESS/SOS 2022, arXiv:2208.14777. arXiv admin note:
substantial text overlap with arXiv:2208.07644
|
EPTCS 368, 2022, pp. 75-94
|
10.4204/EPTCS.368.5
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Concurrent GV (CGV), a functional calculus with message-passing
concurrency governed by session types. With respect to prior calculi, CGV has
increased support for concurrent evaluation and for cyclic network topologies.
The design of CGV draws on APCP, a session-typed asynchronous pi-calculus
developed in prior work. Technical contributions are (i) the syntax, semantics,
and type system of CGV; (ii) a correct translation of CGV into APCP; (iii) a
technique for establishing deadlock-free CGV programs, by resorting to APCP's
priority-based type system.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 10:36:44 GMT"
}
] | 2022-09-15T00:00:00 |
[
[
"Heuvel",
"Bas van den",
"",
"University of Groningen"
],
[
"Pérez",
"Jorge A.",
"",
"University of Groningen"
]
] |
new_dataset
| 0.999072 |
2102.05851
|
Joseph Chow
|
Bingqing Liu, Theodoros P. Pantelidis, Stephanie Tam, Joseph Y. J.
Chow
|
An electric vehicle charging station access equilibrium model with M/D/C
queueing
| null |
International Journal of Sustainable Transportation (2022)
|
10.1080/15568318.2022.2029633
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the dependency of electric vehicle (EV) fleets on charging station
availability, charging infrastructure remains limited in many cities. Three
contributions are made. First, we propose an EV-to-charging station user
equilibrium (UE) assignment model with an M/D/C queue approximation as a
nondifferentiable nonlinear program. Second, to address the
non-differentiability of the queue delay function, we propose an original
solution algorithm based on the derivative-free Method of Successive Averages.
Computational tests with a toy network show that the model converges to a UE.
Working code in Python is provided free on GitHub with detailed test cases.
Third, the model is applied to the large-scale case study of New York City
Department of Citywide Administrative Services (NYC DCAS) fleet and EV charging
station configuration as of July 8, 2020, which includes unique, real data for
563 Level 2 chargers and 4 Direct Current Fast Chargers (DCFCs) and 1484 EVs
distributed over 512 Traffic Analysis Zones. The arrival rates of the
assignment model are calibrated in the base scenario to fit an observed average
utilization ratio of 7.6% in NYC. The model is then applied to compare charging
station investment policies of DCFCs to Level 2 charging stations based on two
alternative criteria. Results suggest a policy based on selecting locations
with a high utilization ratio rather than locations with high queue delay.
|
[
{
"version": "v1",
"created": "Thu, 11 Feb 2021 05:23:36 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Sep 2021 19:41:03 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Liu",
"Bingqing",
""
],
[
"Pantelidis",
"Theodoros P.",
""
],
[
"Tam",
"Stephanie",
""
],
[
"Chow",
"Joseph Y. J.",
""
]
] |
new_dataset
| 0.966952 |
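As a rough illustration of the solution approach named in the abstract, the sketch below splits a single EV demand rate between two stations with the Method of Successive Averages (MSA), using a crude M/D/c delay approximation — half the M/M/c (Erlang C) wait, which is exact only for c = 1. This is not the authors' model or code; all rates, capacities, and access times are invented.

```python
# Minimal sketch, not the authors' code: MSA assignment of EV demand to two
# charging stations with queue delay from a rough M/D/c approximation.
import math

def mdc_wait(lam, mu, c):
    """Approximate M/D/c mean queue wait as half the M/M/c (Erlang C) wait."""
    a = lam / mu                    # offered load
    if a >= c:                      # oversaturated: return a large delay
        return 1e6
    erlang_b = a**c / math.factorial(c) / sum(a**k / math.factorial(k)
                                              for k in range(c + 1))
    p_wait = erlang_b / (1 - (a / c) * (1 - erlang_b))  # Erlang C from B
    return 0.5 * p_wait / (c * mu - lam)

access = [0.10, 0.25]        # hours of travel to stations 1 and 2 (invented)
servers = [2, 4]             # chargers per station
mu = 2.0                     # service rate per charger (vehicles/hour)
demand = 6.0                 # total EV arrival rate (vehicles/hour)

x = [demand / 2, demand / 2]             # initial split
for k in range(1, 200):
    cost = [access[i] + mdc_wait(x[i], mu, servers[i]) for i in range(2)]
    best = min(range(2), key=lambda i: cost[i])
    y = [demand if i == best else 0.0 for i in range(2)]  # all-or-nothing
    x = [x[i] + (y[i] - x[i]) / k for i in range(2)]      # MSA step 1/k

print([round(v, 2) for v in x])   # equilibrium split of the 6 veh/h demand
```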
2103.00597
|
Jean Marie Tshimula
|
Jean Marie Tshimula, Belkacem Chikhaoui, Shengrui Wang
|
COVID-19: Detecting Depression Signals during Stay-At-Home Period
| null |
Health Informatics Journal, 2022
|
10.1177/14604582221094931
|
28(2): 14604582221094931
|
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The new coronavirus outbreak has been officially declared a global pandemic
by the World Health Organization. To grapple with the rapid spread of this
ongoing pandemic, most countries have banned indoor and outdoor gatherings and
ordered their residents to stay home. Given the developing situation with
coronavirus, mental health is an important challenge in our society today. In
this paper, we investigate social media postings to detect
signals relevant to depression. To this end, we utilize topic modeling features
and a collection of psycholinguistic and mental-well-being attributes to
develop statistical models to characterize and facilitate representation of the
more subtle aspects of depression. Furthermore, we predict whether signals
relevant to depression are likely to grow significantly as time moves forward.
Our best classifier yields F1 scores as high as 0.8 and surpasses the baseline
by a considerable margin (0.173). In closing, we propose several future
research avenues.
|
[
{
"version": "v1",
"created": "Sun, 28 Feb 2021 19:30:20 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Tshimula",
"Jean Marie",
""
],
[
"Chikhaoui",
"Belkacem",
""
],
[
"Wang",
"Shengrui",
""
]
] |
new_dataset
| 0.977112 |
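The sketch below shows the kind of pipeline the abstract describes: topic-model features feeding a statistical classifier, scored with F1. It is not the authors' model — their feature set also includes psycholinguistic and mental-well-being attributes — and the toy posts, labels, and scores are invented and not meaningful at this scale.

```python
# Minimal sketch of a topic-features-to-classifier pipeline (not the authors'
# model): bag-of-words counts -> LDA topic proportions -> logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

posts = ["feeling hopeless and tired again",
         "great run in the park this morning",
         "can't sleep, everything feels pointless",
         "cooked dinner with friends, lovely evening"]
labels = [1, 0, 1, 0]   # 1 = depression-relevant signal (toy labels)

clf = make_pipeline(
    CountVectorizer(),                              # bag-of-words counts
    LatentDirichletAllocation(n_components=2,       # topic proportions
                              random_state=0),
    LogisticRegression())                           # classify on topic mix
clf.fit(posts, labels)
print(f1_score(labels, clf.predict(posts)))         # in-sample F1 (toy)
```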
2107.02625
|
Marsel Faizullin
|
Marsel Faizullin, Anastasiia Kornilova, Gonzalo Ferrer
|
Open-Source LiDAR Time Synchronization System by Mimicking GNSS-clock
|
Accepted to IEEE ISPCS 2022 Conference (International Symposium on
Precision Clock Synchronization for Measurement, Control and Communication)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Data fusion algorithms that employ LiDAR measurements, such as Visual-LiDAR,
LiDAR-Inertial, or Multiple LiDAR Odometry and simultaneous localization and
mapping (SLAM), rely on precise timestamping schemes that grant synchronicity
to data from LiDAR and other sensors. Poor synchronization performance, due to
an incorrect timestamping procedure, may negatively affect the algorithms'
state estimation results. To provide highly accurate and precise
synchronization between the sensors, we introduce an open-source
hardware-software system for synchronizing a LiDAR with other sensors. It
exploits the LiDAR's dedicated hardware time synchronization interface by
providing an emulated GNSS clock to this interface; no physical GNSS receiver
is needed. The emulator is based on a general-purpose microcontroller and, due
to its concise hardware and software architecture, can easily be modified or
extended to synchronize sets of different sensors such as cameras, inertial
measurement units (IMUs), wheel encoders, other LiDARs, etc. In the paper, we
provide an example of such a system with synchronized LiDAR and IMU sensors. We
evaluated the synchronization accuracy and precision of the sensors and report
1-microsecond performance. We compared our results with timestamping provided
by ROS software and by the LiDAR's internal clocking scheme to underline clear
advantages over these two baseline methods.
|
[
{
"version": "v1",
"created": "Tue, 6 Jul 2021 14:03:30 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Aug 2022 11:51:41 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Sep 2022 12:18:26 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Faizullin",
"Marsel",
""
],
[
"Kornilova",
"Anastasiia",
""
],
[
"Ferrer",
"Gonzalo",
""
]
] |
new_dataset
| 0.955623 |
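To make the GNSS-clock emulation idea concrete: many LiDARs synchronize by accepting a pulse-per-second (PPS) signal plus an NMEA sentence (e.g. GPRMC) carrying the time of that pulse. The Python sketch below builds such a sentence with its checksum. Real firmware runs on a microcontroller and also drives the PPS line, and the exact sentence fields a given LiDAR expects may differ; the dummy position is an assumption.

```python
# Minimal sketch of the NMEA side of a GNSS-clock emulator (not the project's
# microcontroller firmware): build one GPRMC sentence per emulated PPS edge.
import datetime

def nmea_checksum(body: str) -> str:
    """XOR of all characters between '$' and '*', as two hex digits."""
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return f"{cs:02X}"

def gprmc_sentence(t: datetime.datetime) -> str:
    """Build a GPRMC sentence for time t (dummy fixed position)."""
    body = ("GPRMC,{:%H%M%S}.00,A,5230.00,N,01322.00,E,0.0,0.0,"
            "{:%d%m%y},,,A").format(t, t)
    return f"${body}*{nmea_checksum(body)}"

# One sentence per second, matching each emulated PPS edge:
now = datetime.datetime(2022, 9, 13, 12, 18, 26)
print(gprmc_sentence(now))
# -> $GPRMC,121826.00,A,5230.00,N,01322.00,E,0.0,0.0,130922,,,A*<checksum>
```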
2107.08217
|
Joseph Chow
|
Qi Liu, Joseph Y. J. Chow
|
A congested schedule-based dynamic transit passenger flow estimator
using stop count data
| null |
Transportmetrica B: Transport Dynamics (2022)
|
10.1080/21680566.2022.2060370
| null |
cs.CY math.OC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A dynamic transit flow estimation model based on congested schedule-based
transit equilibrium assignment is proposed using observations from stop count
data. A solution algorithm is proposed for the mathematical program with
schedule-based transit equilibrium constraints (MPEC) with polynomial
computational complexity. The equilibrium constraints corresponding to the
schedule-based hyperpath flow are modified from the literature to fit into an
estimation problem. Computational experiments are conducted first to verify the
methodology with two synthetic data sets (one of which is Sioux Falls),
followed by a validation of the method using bus data from Qingpu District in
Shanghai, China, with 4 bus lines, 120 segments, 55 bus stops, and 120
one-minute intervals. The estimation model converged to a 0.005 tolerance of
relative change within 10 iterations. The estimated average of segment flows is
only 2.5% off from the average of the observed segment flows; the relative
error among individual segments is 42.5%.
|
[
{
"version": "v1",
"created": "Sat, 17 Jul 2021 10:52:57 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Aug 2021 16:04:51 GMT"
}
] | 2022-09-14T00:00:00 |
[
[
"Liu",
"Qi",
""
],
[
"Chow",
"Joseph Y. J.",
""
]
] |
new_dataset
| 0.998725 |
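The sketch below is a drastically simplified stand-in for the estimation idea: recover route-level passenger flows from observed segment counts through a route-to-segment incidence matrix, via non-negative least squares. The paper's estimator instead embeds congested schedule-based equilibrium constraints (an MPEC); the matrix and counts here are invented.

```python
# Minimal sketch (not the paper's MPEC estimator): fit non-negative route
# flows to observed segment counts and report per-segment relative error.
import numpy as np
from scipy.optimize import nnls

# Rows = segments, columns = routes; 1 if the route traverses the segment.
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
observed_counts = np.array([30.0, 45.0, 35.0, 10.0])   # passengers/segment

route_flows, residual = nnls(A, observed_counts)
estimated_counts = A @ route_flows
rel_error = np.abs(estimated_counts - observed_counts) / observed_counts

print(np.round(route_flows, 1))     # estimated flow on each route
print(np.round(rel_error, 3))       # per-segment relative error
```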