id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2211.07817
|
Shivakumar Mahesh
|
Shivakumar Mahesh, Anshuka Rangi, Haifeng Xu and Long Tran-Thanh
|
Multi-Player Bandits Robust to Adversarial Collisions
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Motivated by cognitive radios, stochastic Multi-Player Multi-Armed Bandits
have been extensively studied in recent years. In this setting, each player
pulls an arm and receives the corresponding reward if there is no collision,
namely if the arm was selected by a single player; otherwise, the player
receives no reward. In this paper, we consider the presence of malicious
players (or attackers) who obstruct the cooperative players (or defenders)
from maximizing their rewards by deliberately colliding with them. We provide
the first decentralized and robust algorithm, RESYNC, for defenders whose
performance deteriorates gracefully as $\tilde{O}(C)$ as the number of
collisions $C$ from the attackers increases. We show that this algorithm is
order-optimal by proving a lower bound that scales as $\Omega(C)$. The
algorithm is agnostic both to the algorithm used by the attackers and to the
number of collisions $C$ faced from the attackers.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 00:43:26 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Mahesh",
"Shivakumar",
""
],
[
"Rangi",
"Anshuka",
""
],
[
"Xu",
"Haifeng",
""
],
[
"Tran-Thanh",
"Long",
""
]
] |
new_dataset
| 0.960649 |
2211.07818
|
Shen Sang
|
Shen Sang, Tiancheng Zhi, Guoxian Song, Minghao Liu, Chunpong Lai,
Jing Liu, Xiang Wen, James Davis, Linjie Luo
|
AgileAvatar: Stylized 3D Avatar Creation via Cascaded Domain Bridging
|
ACM SIGGRAPH Asia 2022 Conference Proceedings
| null |
10.1145/3550469.3555402
| null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Stylized 3D avatars have become increasingly prominent in our modern life.
Creating these avatars manually usually involves laborious selection and
adjustment of continuous and discrete parameters and is time-consuming for
average users. Self-supervised approaches to automatically create 3D avatars
from user selfies promise high quality with little annotation cost but fall
short in application to stylized avatars due to a large style domain gap. We
propose a novel self-supervised learning framework to create high-quality
stylized 3D avatars with a mix of continuous and discrete parameters. Our
cascaded domain bridging framework first leverages a modified portrait
stylization approach to translate input selfies into stylized avatar renderings
as the targets for desired 3D avatars. Next, we find the best parameters of the
avatars to match the stylized avatar renderings through a differentiable
imitator we train to mimic the avatar graphics engine. To ensure we can
effectively optimize the discrete parameters, we adopt a cascaded
relaxation-and-search pipeline. We use a human preference study to evaluate how
well our method preserves user identity compared to previous work as well as
manual creation. Our results achieve much higher preference scores than
previous work and close to those of manual creation. We also provide an
ablation study to justify the design choices in our pipeline.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 00:43:45 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Sang",
"Shen",
""
],
[
"Zhi",
"Tiancheng",
""
],
[
"Song",
"Guoxian",
""
],
[
"Liu",
"Minghao",
""
],
[
"Lai",
"Chunpong",
""
],
[
"Liu",
"Jing",
""
],
[
"Wen",
"Xiang",
""
],
[
"Davis",
"James",
""
],
[
"Luo",
"Linjie",
""
]
] |
new_dataset
| 0.996769 |
2211.07843
|
Xunjian Yin
|
Xunjian Yin and Xinyu Hu and Xiaojun Wan
|
Chinese Spelling Check with Nearest Neighbors
|
work in progress
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Chinese Spelling Check (CSC) aims to detect and correct error tokens in
Chinese contexts, which has a wide range of applications. In this paper, we
introduce InfoKNN-CSC, extending the standard CSC model by linearly
interpolating it with a k-nearest neighbors (kNN) model. Moreover, the
phonetic, graphic, and contextual information (info) of tokens and contexts are
elaborately incorporated into the design of the query and key of kNN, according
to the characteristics of the task. After retrieval, in order to match the
candidates more accurately, we also rerank them based on the n-gram overlap
between candidates and inputs. Experiments on the SIGHAN benchmarks
demonstrate that the proposed model achieves state-of-the-art performance with
substantial improvements over existing work.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 01:55:34 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Yin",
"Xunjian",
""
],
[
"Hu",
"Xinyu",
""
],
[
"Wan",
"Xiaojun",
""
]
] |
new_dataset
| 0.997857 |
2211.07912
|
Chih-Hui Ho
|
Chih-Hui Ho, Srikar Appalaraju, Bhavan Jasani, R. Manmatha, Nuno
Vasconcelos
|
YORO -- Lightweight End to End Visual Grounding
|
Accepted to ECCVW on International Challenge on Compositional and
Multimodal Perception
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present YORO - a multi-modal transformer encoder-only architecture for the
Visual Grounding (VG) task. This task involves localizing, in an image, an
object referred via natural language. Unlike the recent trend in the literature
of using multi-stage approaches that sacrifice speed for accuracy, YORO seeks a
better trade-off between speed and accuracy by embracing a single-stage design,
without a CNN backbone. YORO consumes natural language queries, image patches,
and learnable detection tokens and predicts coordinates of the referred object,
using a single transformer encoder. To assist the alignment between text and
visual objects, a novel patch-text alignment loss is proposed. Extensive
experiments are conducted on 5 different datasets with ablations on
architecture design choices. YORO is shown to support real-time inference and
outperform all approaches in this class (single-stage methods) by large
margins. It is also the fastest VG model and achieves the best speed/accuracy
trade-off in the literature.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 05:34:40 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Ho",
"Chih-Hui",
""
],
[
"Appalaraju",
"Srikar",
""
],
[
"Jasani",
"Bhavan",
""
],
[
"Manmatha",
"R.",
""
],
[
"Vasconcelos",
"Nuno",
""
]
] |
new_dataset
| 0.999113 |
2211.07980
|
Ayush Maheshwari
|
Ayush Maheshwari, Nikhil Singh, Amrith Krishna, Ganesh Ramakrishnan
|
A Benchmark and Dataset for Post-OCR text correction in Sanskrit
|
Findings of EMNLP, 2022. Code and Data:
https://github.com/ayushbits/pe-ocr-sanskrit
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sanskrit is a classical language with about 30 million extant manuscripts fit
for digitisation, available in written, printed or scanned-image forms. However,
it is still considered to be a low-resource language when it comes to available
digital resources. In this work, we release a post-OCR text correction dataset
containing around 218,000 sentences, with 1.5 million words, from 30 different
books. Texts in Sanskrit are known to be diverse in terms of their linguistic
and stylistic usage since Sanskrit was the 'lingua franca' for discourse in the
Indian subcontinent for about 3 millennia. Keeping this in mind, we release a
multi-domain dataset, from areas as diverse as astronomy, medicine and
mathematics, with some of them as old as 18 centuries. Further, we release
multiple strong baselines as benchmarks for the task, based on pre-trained
Seq2Seq language models. We find that our best-performing model, consisting of
byte level tokenization in conjunction with phonetic encoding (Byt5+SLP1),
yields a 23% point increase over the OCR output in terms of word and character
error rates. Moreover, we perform extensive experiments in evaluating these
models on their performance and analyse common causes of mispredictions both at
the graphemic and lexical levels. Our code and dataset is publicly available at
https://github.com/ayushbits/pe-ocr-sanskrit.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 08:32:18 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Maheshwari",
"Ayush",
""
],
[
"Singh",
"Nikhil",
""
],
[
"Krishna",
"Amrith",
""
],
[
"Ramakrishnan",
"Ganesh",
""
]
] |
new_dataset
| 0.999865 |
2211.08042
|
Golsa Tahmasebzadeh
|
Golsa Tahmasebzadeh, Eric M\"uller-Budack, Sherzod Hakimov, Ralph
Ewerth
|
MM-Locate-News: Multimodal Focus Location Estimation in News
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The consumption of news has changed significantly as the Web has become the
most influential medium for information. To analyze and contextualize the large
amount of news published every day, the geographic focus of an article is an
important aspect in order to enable content-based news retrieval. There are
methods and datasets for geolocation estimation from text or photos, but they
are typically considered as separate tasks. However, the photo might lack
geographical cues and text can include multiple locations, making it
challenging to recognize the focus location using a single modality. In this
paper, a novel dataset called Multimodal Focus Location of News
(MM-Locate-News) is introduced. We evaluate state-of-the-art methods on the new
benchmark dataset and suggest novel models to predict the focus location of
news using both textual and image content. The experimental results show that
the multimodal model outperforms unimodal models.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 10:47:45 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Tahmasebzadeh",
"Golsa",
""
],
[
"Müller-Budack",
"Eric",
""
],
[
"Hakimov",
"Sherzod",
""
],
[
"Ewerth",
"Ralph",
""
]
] |
new_dataset
| 0.999766 |
2211.08144
|
Wenxi Liu
|
Wenxi Liu, Qi Li, Weixiang Yang, Jiaxin Cai, Yuanlong Yu, Yuexin Ma,
Shengfeng He, Jia Pan
|
Monocular BEV Perception of Road Scenes via Front-to-Top View Projection
|
Extension to CVPR'21 paper "Projecting Your View Attentively:
Monocular Road Scene Layout Estimation via Cross-View Transformation"
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
HD map reconstruction is crucial for autonomous driving. LiDAR-based methods
are limited due to expensive sensors and time-consuming computation.
Camera-based methods usually need to perform road segmentation and view
transformation separately, which often causes distortion and missing content.
To push the limits of the technology, we present a novel framework that
reconstructs a local map formed by road layout and vehicle occupancy in the
bird's-eye view given a front-view monocular image only. We propose a
front-to-top view projection (FTVP) module, which takes the constraint of cycle
consistency between views into account and makes full use of their correlation
to strengthen the view transformation and scene understanding. In addition, we
also apply multi-scale FTVP modules to propagate the rich spatial information
of low-level features to mitigate spatial deviation of the predicted object
location. Experiments on public benchmarks show that our method achieves
state-of-the-art performance in the tasks of road layout estimation, vehicle
occupancy estimation, and multi-class semantic estimation. For multi-class
semantic estimation, in particular, our model outperforms all competitors by a
large margin. Furthermore, our model runs at 25 FPS on a single GPU, which is
efficient and applicable for real-time panorama HD map reconstruction.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 13:52:41 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Liu",
"Wenxi",
""
],
[
"Li",
"Qi",
""
],
[
"Yang",
"Weixiang",
""
],
[
"Cai",
"Jiaxin",
""
],
[
"Yu",
"Yuanlong",
""
],
[
"Ma",
"Yuexin",
""
],
[
"He",
"Shengfeng",
""
],
[
"Pan",
"Jia",
""
]
] |
new_dataset
| 0.990686 |
2211.08158
|
Yue Zhang
|
Yue Zhang, Zhenghua Li
|
CSynGEC: Incorporating Constituent-based Syntax for Grammatical Error
Correction with a Tailored GEC-Oriented Parser
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently, Zhang et al. (2022) propose a syntax-aware grammatical error
correction (GEC) approach, named SynGEC, showing that incorporating tailored
dependency-based syntax of the input sentence is quite beneficial to GEC. This
work considers another mainstream syntax formalism, i.e., constituent-based
syntax. By drawing on the successful experience of SynGEC, we first propose an
extended constituent-based syntax scheme to accommodate errors in ungrammatical
sentences. Then, we automatically obtain constituency trees of ungrammatical
sentences to train a GEC-oriented constituency parser by using parallel GEC
data as a pivot. For syntax encoding, we employ the graph convolutional network
(GCN). Experimental results show that our method, named CSynGEC, yields
substantial improvements over strong baselines. Moreover, we investigate the
integration of constituent-based and dependency-based syntax for GEC in two
ways: 1) intra-model combination, which means using separate GCNs to encode
both kinds of syntax for decoding in a single model; 2) inter-model combination,
which means gathering and selecting edits predicted by different models to
achieve final corrections. We find that the former method improves recall over
using one standalone syntax formalism while the latter improves precision, and
both lead to better F0.5 values.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 14:11:39 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Zhang",
"Yue",
""
],
[
"Li",
"Zhenghua",
""
]
] |
new_dataset
| 0.974582 |
2211.08188
|
Ricardo Grando
|
Martin Mattos, Ricardo Grando, Andr\'e Kelbouscas
|
Desarrollo de un Dron Low-Cost para Tareas Indoor
|
in Spanish language. Articulo aceptado para la FEBITEC 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Commercial drones are not yet designed to perform autonomous indoor tasks,
since they rely on GPS for their location in the environment. In a space with
physical obstacles (walls, metal, etc.) between the drone and the satellites
that provide its precise location, it is difficult to acquire the satellites,
or the obstacles generate interference that degrades the localization. This
problem can cause unexpected drone behavior, and a collision and a possible
accident can occur. The work presented here describes the development of a
drone capable of operating in an indoor physical space without the need for
GPS. In this proposal, a prototype system for measuring the distance (lidar)
between the drone and the walls is also developed, with the aim of using this
information to localize the drone.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 21:30:29 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Mattos",
"Martin",
""
],
[
"Grando",
"Ricardo",
""
],
[
"Kelbouscas",
"André",
""
]
] |
new_dataset
| 0.999746 |
2211.08190
|
Ricardo Grando
|
Agustina Marion de Freitas Vidal, Anthony Rodriguez, Richard Suarez,
Andr\'e Kelbouscas, Ricardo Grando
|
Reconocimiento de Objetos a partir de Nube de Puntos en un Veh\'iculo
A\'ereo no Tripulado
|
in Spanish language. Articulo aceptado en la FEBITEC 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Currently, research in robotics, artificial intelligence, and drones is
advancing rapidly, and these fields are directly or indirectly related to
various areas of the economy, from agriculture to industry. In this context,
this project covers these topics with the aim of providing a framework capable
of helping to develop new future researchers. To this end, we use an aerial
vehicle that operates autonomously and is capable of mapping the scenario and
providing useful information to the end user. This is achieved through
communication between a simple programming language (Scratch) and one of the
most important and efficient robot operating systems available today (ROS).
In this way, we developed a tool capable of generating a 3D map and detecting
objects using the camera attached to the drone. Although this tool can be used
in advanced industrial settings, it is also an important advance for the
research sector. Deploying this tool in intermediate-level institutions is
intended to provide the ability to carry out high-level projects from a simple
programming language.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 21:28:03 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Vidal",
"Agustina Marion de Freitas",
""
],
[
"Rodriguez",
"Anthony",
""
],
[
"Suarez",
"Richard",
""
],
[
"Kelbouscas",
"André",
""
],
[
"Grando",
"Ricardo",
""
]
] |
new_dataset
| 0.999721 |
2211.08221
|
Jennifer Andreoli-Fang
|
Jennifer Andreoli-Fang, George Kondylis
|
A Synchronous, Reservation Based Medium Access Control Protocol for
Multihop Wireless Networks
|
5 pages, 5 figures, IEEE Wireless Communication and Networking
Conference 2003. Author Jennifer Andreoli-Fang was previously known as
Jennifer Fang
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a new synchronous and distributed medium access control (MAC)
protocol for multihop wireless networks that provides bandwidth guarantees to
unicast connections. Our MAC protocol is based on a slotted time division
multiple access (TDMA) architecture, with a multi-mini-slotted signaling phase
scheduling data transmissions over slots in the following data phase. Resolving
contentions at the beginning of a frame allows for effective utilization of
bandwidth. Our protocol essentially combines the benefits of TDMA architecture
with the distributed reservation mechanism of IEEE 802.11 MAC protocol, thereby
performing well even at high loads. We implement a two-way handshake before
each data slot to avoid deadlocks, a phenomenon that plagues 802.11. Through
theoretical analysis, we derive the system throughput achieved by our MAC
protocol. We implemented our MAC protocol in the ns-2 simulator, and demonstrate
its vast superiority to IEEE 802.11 and a synchronous MAC protocol CATA through
extensive simulations.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 15:41:19 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Andreoli-Fang",
"Jennifer",
""
],
[
"Kondylis",
"George",
""
]
] |
new_dataset
| 0.999493 |
2211.08245
|
Hanchen David Wang
|
Hanchen David Wang, Meiyi Ma
|
PhysiQ: Off-site Quality Assessment of Exercise in Physical Therapy
|
22 pages
| null |
10.1145/3570349
| null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical therapy (PT) is crucial for patients to restore and maintain
mobility, function, and well-being. Many on-site activities and body exercises
are performed under the supervision of therapists or clinicians. However, the
postures of some exercises at home cannot be performed accurately due to the
lack of supervision, quality assessment, and self-correction. Therefore, in
this paper, we design a new framework, PhysiQ, that continuously tracks and
quantitatively measures people's off-site exercise activity through passive
sensory detection. In the framework, we create a novel multi-task
spatio-temporal Siamese Neural Network that measures the absolute quality
through classification and relative quality based on an individual's PT
progress through similarity comparison. PhysiQ digitizes and evaluates
exercises on three different metrics: range of motion, stability, and
repetition.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 01:53:38 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Wang",
"Hanchen David",
""
],
[
"Ma",
"Meiyi",
""
]
] |
new_dataset
| 0.999066 |
2211.08248
|
Ting Yao
|
Qi Cai and Yingwei Pan and Ting Yao and Tao Mei
|
3D Cascade RCNN: High Quality Object Detection in Point Clouds
|
IEEE Transactions on Image Processing (TIP) 2022. The source code is
publicly available at \url{https://github.com/caiqi/Cascasde-3D}
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent progress on 2D object detection has featured Cascade RCNN, which
capitalizes on a sequence of cascade detectors to progressively improve
proposal quality, towards high-quality object detection. However, there has not
been evidence in support of building such cascade structures for 3D object
detection, a challenging detection scenario with highly sparse LiDAR point
clouds. In this work, we present a simple yet effective cascade architecture,
named 3D Cascade RCNN, that allocates multiple detectors based on the voxelized
point clouds in a cascade paradigm, pursuing a higher-quality 3D object detector
progressively. Furthermore, we quantitatively define the sparsity level of the
points within 3D bounding box of each object as the point completeness score,
which is exploited as the task weight for each proposal to guide the learning
of each stage detector. The spirit behind this is to assign higher weights to
high-quality proposals with relatively complete point distributions, while
down-weighting the proposals with extremely sparse points that often incur noise
during training. This design of completeness-aware re-weighting elegantly
upgrades the cascade paradigm to be better applicable for the sparse input
data, without increasing any FLOP budgets. Through extensive experiments on
both the KITTI dataset and Waymo Open Dataset, we validate the superiority of
our proposed 3D Cascade RCNN, when comparing to state-of-the-art 3D object
detection techniques. The source code is publicly available at
\url{https://github.com/caiqi/Cascasde-3D}.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 15:58:36 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Cai",
"Qi",
""
],
[
"Pan",
"Yingwei",
""
],
[
"Yao",
"Ting",
""
],
[
"Mei",
"Tao",
""
]
] |
new_dataset
| 0.998348 |
2211.08387
|
Hayate Iso
|
Hayate Iso
|
AutoTemplate: A Simple Recipe for Lexically Constrained Text Generation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lexically constrained text generation is one of the constrained text
generation tasks, which aims to generate text that covers all the given
constraint lexicons. While the existing approaches tackle this problem using a
lexically constrained beam search algorithm or dedicated model using
non-autoregressive decoding, there is a trade-off between the generated text
quality and the hard constraint satisfaction. We introduce AutoTemplate, a
simple yet effective lexically constrained text generation framework divided
into template generation and lexicalization tasks. Template generation produces
text with placeholders, and lexicalization replaces the placeholders with the
constraint lexicons to perform lexically constrained text generation. We
conducted experiments on two tasks: keywords-to-sentence generation and
entity-guided summarization. Experimental results show that the AutoTemplate
outperforms the competitive baselines on both tasks while satisfying the hard
lexical constraints.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 18:36:18 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Iso",
"Hayate",
""
]
] |
new_dataset
| 0.979425 |
2211.08400
|
Yawen Zhang
|
Yawen Zhang, Michael Hannigan, Qin Lv
|
Air Pollution Hotspot Detection and Source Feature Analysis using
Cross-domain Urban Data
|
10 pages
|
ACM SIGSPATIAL 2021
|
10.1145/3474717.3484263
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Air pollution is a major global environmental health threat, in particular
for people who live or work near pollution sources. Areas adjacent to pollution
sources often have high ambient pollution concentrations, and those areas are
commonly referred to as air pollution hotspots. Detecting and characterizing
pollution hotspots are of great importance for air quality management, but are
challenging due to the high spatial and temporal variability of air pollutants.
In this work, we explore the use of mobile sensing data (i.e., air quality
sensors installed on vehicles) to detect pollution hotspots. One major
challenge with mobile sensing data is uneven sampling, i.e., data collection
can vary by both space and time. To address this challenge, we propose a
two-step approach to detect hotspots from mobile sensing data, which includes
local spike detection and sample-weighted clustering. Essentially, this
approach tackles the uneven sampling issue by weighting samples based on their
spatial frequency and temporal hit rate, so as to identify robust and
persistent hotspots. To contextualize the hotspots and discover potential
pollution source characteristics, we explore a variety of cross-domain urban
data and extract features from them. As a soft-validation of the extracted
features, we build hotspot inference models for cities with and without mobile
sensing data. Evaluation results using real-world mobile sensing air quality
data as well as cross-domain urban data demonstrate the effectiveness of our
approach in detecting and inferring pollution hotspots. Furthermore, the
empirical analysis of hotspots and source features yields useful insights
regarding neighborhood pollution sources.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 18:44:03 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Zhang",
"Yawen",
""
],
[
"Hannigan",
"Michael",
""
],
[
"Lv",
"Qin",
""
]
] |
new_dataset
| 0.999609 |
2211.08401
|
Evgenii Vinogradov A
|
Nesrine Cherif and Wael Jaafar and Evgenii Vinogradov and Halim
Yanikomeroglu and Sofie Pollin and Abbas Yongacoglu
|
iTUAVs: Intermittently Tethered UAVs for Future Wireless Networks
| null | null |
10.1109/MWC.018.2100720
| null |
cs.NI cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
We propose the intermittently tethered unmanned aerial vehicle (iTUAV) as a
tradeoff between the power availability of a tethered UAV (TUAV) and the
flexibility of an untethered UAV. An iTUAV can provide cellular connectivity
while being temporarily tethered to the most adequate ground anchor. Also, it
can flexibly detach from one anchor, travel, then attach to another one to
maintain/improve the coverage quality for mobile users. Hence, we discuss here
the existing UAV-based cellular networking technologies, followed by a detailed
description of the iTUAV system, its components, and mode of operation.
Subsequently, we present a comparative study of the existing and proposed
systems highlighting the differences in key features such as mobility and
energy. To emphasize the potential of iTUAV systems, we conduct a case study,
evaluate the iTUAV performance, and compare it to benchmarks. Obtained results
show that with only 10 anchors in the area, the iTUAV system can serve up to
90% of the users covered by the untethered UAV swapping system. Moreover,
results from a small case study show that the iTUAV makes it possible to balance
performance and cost and can be implemented realistically. For instance, when
user locations are clustered, with only 2 active iTUAVs and 4 anchors the
achieved performance is superior to that of a system with 3 TUAVs, while when
considering a single UAV over a 100-minute event, a system with only 6 anchors
outperforms the untethered UAV as it combines location flexibility with
increased mission time.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 08:51:29 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Cherif",
"Nesrine",
""
],
[
"Jaafar",
"Wael",
""
],
[
"Vinogradov",
"Evgenii",
""
],
[
"Yanikomeroglu",
"Halim",
""
],
[
"Pollin",
"Sofie",
""
],
[
"Yongacoglu",
"Abbas",
""
]
] |
new_dataset
| 0.972897 |
1804.05105
|
Hossein Boomari
|
Hossein Boomari, Mojtaba Ostovari and Alireza Zarei
|
Recognizing Visibility Graphs of Polygons with Holes and
Internal-External Visibility Graphs of Polygons
|
Submitted to COCOON 2018 Conference
| null | null | null |
cs.CG cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
Visibility graph of a polygon corresponds to its internal diagonals and
boundary edges. For each vertex on the boundary of the polygon, we have a
vertex in this graph and if two vertices of the polygon see each other there is
an edge between their corresponding vertices in the graph. Two vertices of a
polygon see each other if and only if their connecting line segment completely
lies inside the polygon, and they are externally visible if and only if this
line segment completely lies outside the polygon. Recognizing visibility graphs
is the problem of deciding whether there is a simple polygon whose visibility
graph is isomorphic to a given input graph. This problem is well known and
well studied, but still widely open in geometric graph theory and computational
geometry.
Existential Theory of the Reals is the complexity class of problems that can
be reduced to the problem of deciding whether there exists a solution to a
quantifier-free formula F(X1,X2,...,Xn), involving equalities and inequalities
of real polynomials with real variables. The complete problems for this
complexity class are called Existential Theory of the Reals Complete.
In this paper we show that recognizing visibility graphs of polygons with
holes is Existential Theory of the Reals Complete. Moreover, we show that
recognizing visibility graphs of simple polygons when we have the internal and
external visibility graphs, is also Existential Theory of the Reals Complete.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 20:07:57 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Nov 2022 06:15:17 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Boomari",
"Hossein",
""
],
[
"Ostovari",
"Mojtaba",
""
],
[
"Zarei",
"Alireza",
""
]
] |
new_dataset
| 0.999583 |
2002.02717
|
Dmitry V. Dylov
|
Nikolay Shvetsov and Nazar Buzun and Dmitry V. Dylov
|
Unsupervised non-parametric change point detection in quasi-periodic
signals
|
8 pages, 7 figures, 1 table
|
SSDBM 2020
|
10.1145/3400903.3400917
| null |
cs.LG math.ST stat.ML stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new unsupervised and non-parametric method to detect change
points in intricate quasi-periodic signals. The detection relies on optimal
transport theory combined with topological analysis and the bootstrap
procedure. The algorithm is designed to detect changes in virtually any
harmonic or a partially harmonic signal and is verified on three different
sources of physiological data streams. We successfully find abnormal or
irregular cardiac cycles in the waveforms for six of the most frequent
types of clinical arrhythmias using a single algorithm. The validation and the
efficiency of the method are shown both on synthetic and on real time series.
Our unsupervised approach reaches the level of performance of the supervised
state-of-the-art techniques. We provide conceptual justification for the
efficiency of the method and prove the convergence of the bootstrap procedure
theoretically.
|
[
{
"version": "v1",
"created": "Fri, 7 Feb 2020 11:24:50 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Shvetsov",
"Nikolay",
""
],
[
"Buzun",
"Nazar",
""
],
[
"Dylov",
"Dmitry V.",
""
]
] |
new_dataset
| 0.988392 |
2102.02729
|
Dongrui Wu
|
Dongrui Wu, Jiaxin Xu, Weili Fang, Yi Zhang, Liuqing Yang, Xiaodong
Xu, Hanbin Luo and Xiang Yu
|
Adversarial Attacks and Defenses in Physiological Computing: A
Systematic Review
|
National Science Open, 2022
| null | null | null |
cs.LG cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physiological computing uses human physiological data as system inputs in
real time. It includes, or significantly overlaps with, brain-computer
interfaces, affective computing, adaptive automation, health informatics, and
physiological signal based biometrics. Physiological computing increases the
communication bandwidth from the user to the computer, but is also subject to
various types of adversarial attacks, in which the attacker deliberately
manipulates the training and/or test examples to hijack the machine learning
algorithm output, leading to possible user confusion, frustration, injury, or
even death. However, the vulnerability of physiological computing systems has
not received enough attention, and no comprehensive review of adversarial
attacks on them exists. This paper fills this gap by providing
a systematic review on the main research areas of physiological computing,
different types of adversarial attacks and their applications to physiological
computing, and the corresponding defense strategies. We hope this review will
attract more research interest in the vulnerability of physiological computing
systems and, more importantly, in defense strategies to make them more secure.
|
[
{
"version": "v1",
"created": "Thu, 4 Feb 2021 16:40:12 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Feb 2021 22:24:25 GMT"
},
{
"version": "v3",
"created": "Thu, 11 Feb 2021 17:15:30 GMT"
},
{
"version": "v4",
"created": "Sun, 13 Nov 2022 06:33:23 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Wu",
"Dongrui",
""
],
[
"Xu",
"Jiaxin",
""
],
[
"Fang",
"Weili",
""
],
[
"Zhang",
"Yi",
""
],
[
"Yang",
"Liuqing",
""
],
[
"Xu",
"Xiaodong",
""
],
[
"Luo",
"Hanbin",
""
],
[
"Yu",
"Xiang",
""
]
] |
new_dataset
| 0.999582 |
2104.08252
|
Rune Krauss
|
Rune Krauss, Marcel Merten, Mirco Bockholt, Rolf Drechsler
|
ALF -- A Fitness-Based Artificial Life Form for Evolving Large-Scale
Neural Networks
| null | null |
10.1145/3449726.3459545
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine Learning (ML) is becoming increasingly important in daily life. In
this context, Artificial Neural Networks (ANNs) are a popular approach within
ML methods to realize artificial intelligence. Usually, the topology of ANNs
is predetermined. However, there are problems where it is difficult to find a
suitable topology. Therefore, Topology and Weight Evolving Artificial Neural
Network (TWEANN) algorithms have been developed that can find ANN topologies
and weights using genetic algorithms. A well-known downside for large-scale
problems is that TWEANN algorithms often evolve inefficient ANNs and require
long runtimes.
To address this issue, we propose a new TWEANN algorithm called Artificial
Life Form (ALF) with the following technical advancements: (1) speciation via
structural and semantic similarity to form better candidate solutions, (2)
dynamic adaptation of the observed candidate solutions for better convergence
properties, and (3) integration of solution quality into genetic reproduction
to increase the probability of optimization success. Experiments on large-scale
ML problems confirm that these approaches allow the fast solving of these
problems and lead to efficient evolved ANNs.
|
[
{
"version": "v1",
"created": "Fri, 16 Apr 2021 17:36:41 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Krauss",
"Rune",
""
],
[
"Merten",
"Marcel",
""
],
[
"Bockholt",
"Mirco",
""
],
[
"Drechsler",
"Rolf",
""
]
] |
new_dataset
| 0.982877 |
2107.02275
|
Wenting Li
|
Wenting Li, Deepjyoti Deka
|
PPGN: Physics-Preserved Graph Networks for Real-Time Fault Location in
Distribution Systems with Limited Observation and Labels
|
10 pages, 4 figures
| null | null | null |
cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electrical faults may trigger blackouts or wildfires without timely
monitoring and control strategy. Traditional solutions for locating faults in
distribution systems are not real-time when network observability is low, while
novel black-box machine learning methods are vulnerable to stochastic
environments. We propose a novel Physics-Preserved Graph Network (PPGN)
architecture to accurately locate faults at the node level with limited
observability and labeled training data. PPGN has a unique two-stage graph
neural network architecture. The first stage learns the graph embedding to
represent the entire network using a few measured nodes. The second stage finds
relations between the labeled and unlabeled data samples to further improve the
location accuracy. We explain the benefits of the two-stage graph configuration
through a random walk equivalence. We numerically validate the proposed method
in the IEEE 123-node and 37-node test feeders, demonstrating the superior
performance over three baseline classifiers when labeled training data is
limited, and loads and topology are allowed to vary.
|
[
{
"version": "v1",
"created": "Mon, 5 Jul 2021 21:18:37 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Oct 2021 19:10:00 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Nov 2022 16:39:24 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Li",
"Wenting",
""
],
[
"Deka",
"Deepjyoti",
""
]
] |
new_dataset
| 0.998985 |
2107.12226
|
Ivan P Yamshchikov
|
Anastasia Malysheva, Alexey Tikhonov, Ivan P. Yamshchikov
|
DYPLODOC: Dynamic Plots for Document Classification
| null |
in Modern Management based on Big Data II and Machine Learning and
Intelligent Systems III 2021 (pp. 511-519). IOS Press
|
10.3233/FAIA210283
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Narrative generation and analysis are still on the fringe of modern natural
language processing yet are crucial in a variety of applications. This paper
proposes a feature extraction method for plot dynamics. We present a dataset
that consists of the plot descriptions for thirteen thousand TV shows alongside
meta-information on their genres and dynamic plots extracted from them. We
validate the proposed tool for plot dynamics extraction and discuss possible
applications of this method to the tasks of narrative analysis and generation.
|
[
{
"version": "v1",
"created": "Mon, 26 Jul 2021 14:12:45 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Malysheva",
"Anastasia",
""
],
[
"Tikhonov",
"Alexey",
""
],
[
"Yamshchikov",
"Ivan P.",
""
]
] |
new_dataset
| 0.999843 |
2108.06862
|
Ashraful Islam
|
Md Imran Hossen, Ashraful Islam, Farzana Anowar, Eshtiak Ahmed,
Mohammad Masudur Rahman, Xiali (Sharon) Hei
|
Generating Cyber Threat Intelligence to Discover Potential Security
Threats Using Classification and Topic Modeling
| null | null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the variety of cyber-attacks or threats, the cybersecurity community
enhances the traditional security control mechanisms to an advanced level so
that automated tools can encounter potential security threats. Very recently,
Cyber Threat Intelligence (CTI) has been presented as one of the proactive and
robust mechanisms because of its automated cybersecurity threat prediction.
Generally, CTI collects and analyses data from various sources, e.g., online
security forums and social media, where cyber enthusiasts, analysts, and even
cybercriminals discuss cyber or computer security-related topics, and discovers
potential threats based on the analysis. As the manual analysis of every such
discussion (posts on online platforms) is time-consuming, inefficient, and
susceptible to errors, CTI as an automated tool can perform uniquely to detect
cyber threats. In this paper, we identify and explore relevant CTI from hacker
forums utilizing different supervised (classification) and unsupervised
learning (topic modeling) techniques. To this end, we collected data from a real
hacker forum and constructed two datasets: a binary dataset and a multi-class
dataset. We then apply several classifiers along with deep neural network-based
classifiers and use them on the datasets to compare their performances. We also
employ the classifiers on a labeled leaked dataset as our ground truth. We
further explore the datasets using unsupervised techniques. For this purpose,
we leverage two topic modeling algorithms namely Latent Dirichlet Allocation
(LDA) and Non-negative Matrix Factorization (NMF).
|
[
{
"version": "v1",
"created": "Mon, 16 Aug 2021 02:30:29 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Aug 2021 22:24:16 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Nov 2022 15:20:28 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Hossen",
"Md Imran",
""
],
[
"Islam",
"Ashraful",
""
],
[
"Anowar",
"Farzana",
""
],
[
"Ahmed",
"Eshtiak",
""
],
[
"Rahman",
"Mohammad Masudur",
""
],
[
"Hei",
"Xiali",
""
]
] |
new_dataset
| 0.99282 |
2109.06593
|
Darya Melnyk
|
Amirreza Akbari, Navid Eslami, Henrik Lievonen, Darya Melnyk, Joona
S\"arkij\"arvi and Jukka Suomela
|
Locality in online, dynamic, sequential, and distributed graph
algorithms
| null | null | null | null |
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we give a unifying view of locality in four settings:
distributed algorithms, sequential greedy algorithms, dynamic algorithms, and
online algorithms. We introduce a new model of computing, called the
online-LOCAL model: the adversary reveals the nodes of the input graph one by
one, in the same way as in classical online algorithms, but for each new node
we get to see its radius-T neighborhood before choosing the output. We compare
the online-LOCAL model with three other models: the LOCAL model of distributed
computing, where each node produces its output based on its radius-T
neighborhood, its sequential counterpart SLOCAL, and the dynamic-LOCAL model,
where changes in the dynamic input graph only influence the radius-T
neighborhood of the point of change. The SLOCAL and dynamic-LOCAL models are
sandwiched between the LOCAL and online-LOCAL models, with LOCAL being the
weakest and online-LOCAL the strongest model. In general, all models are
distinct, but we study in particular locally checkable labeling problems
(LCLs), a family of graph problems studied in the context of
distributed graph algorithms. We prove that for LCL problems in paths, cycles,
and rooted trees, all models are roughly equivalent: the locality of any LCL
problem falls in the same broad class - $O(\log^* n)$, $\Theta(\log n)$, or
$n^{\Theta(1)}$ - in all four models. In particular, this result enables one to
generalize prior lower-bound results from the LOCAL model to all four models,
and it also allows one to simulate e.g. dynamic-LOCAL algorithms efficiently in
the LOCAL model. We also show that this equivalence does not hold in general
bipartite graphs. We provide an online-LOCAL algorithm with locality $O(\log
n)$ for the $3$-coloring problem in bipartite graphs - this is a problem with
locality $\Omega(n^{1/2})$ in the LOCAL model and $\Omega(n^{1/10})$ in the
SLOCAL model.
|
[
{
"version": "v1",
"created": "Tue, 14 Sep 2021 11:29:42 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 10:26:22 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Sep 2022 14:51:22 GMT"
},
{
"version": "v4",
"created": "Sat, 12 Nov 2022 21:43:51 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Akbari",
"Amirreza",
""
],
[
"Eslami",
"Navid",
""
],
[
"Lievonen",
"Henrik",
""
],
[
"Melnyk",
"Darya",
""
],
[
"Särkijärvi",
"Joona",
""
],
[
"Suomela",
"Jukka",
""
]
] |
new_dataset
| 0.980259 |
2109.08975
|
Dasong Gao
|
Dasong Gao, Chen Wang, Sebastian Scherer
|
AirLoop: Lifelong Loop Closure Detection
| null |
2022 International Conference on Robotics and Automation (ICRA)
|
10.1109/ICRA46639.2022.9811658
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Loop closure detection is an important building block that ensures the
accuracy and robustness of simultaneous localization and mapping (SLAM)
systems. Due to their generalization ability, CNN-based approaches have
received increasing attention. Although they normally benefit from training on
datasets that are diverse and reflective of the environments, new environments
often emerge after the model is deployed. It is therefore desirable to
incorporate the data newly collected during operation for incremental learning.
Nevertheless, simply finetuning the model on new data is infeasible since it
may cause the model's performance on previously learned data to degrade over
time, which is also known as the problem of catastrophic forgetting. In this
paper, we present AirLoop, a method that leverages techniques from lifelong
learning to minimize forgetting when training loop closure detection models
incrementally. We experimentally demonstrate the effectiveness of AirLoop on
TartanAir, Nordland, and RobotCar datasets. To the best of our knowledge,
AirLoop is one of the first works to achieve lifelong learning of deep loop
closure detectors.
|
[
{
"version": "v1",
"created": "Sat, 18 Sep 2021 17:28:47 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Feb 2022 19:46:16 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Mar 2022 03:49:51 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Gao",
"Dasong",
""
],
[
"Wang",
"Chen",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
new_dataset
| 0.998866 |
2109.09617
|
Zeqian Ju
|
Zeqian Ju, Peiling Lu, Xu Tan, Rui Wang, Chen Zhang, Songruoyao Wu,
Kejun Zhang, Xiangyang Li, Tao Qin, Tie-Yan Liu
|
TeleMelody: Lyric-to-Melody Generation with a Template-Based Two-Stage
Method
| null | null | null | null |
cs.SD cs.AI cs.CL cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lyric-to-melody generation is an important task in automatic songwriting.
Previous lyric-to-melody generation systems usually adopt end-to-end models
that directly generate melodies from lyrics, which suffer from several issues:
1) lack of paired lyric-melody training data; 2) lack of control on generated
melodies. In this paper, we develop TeleMelody, a two-stage lyric-to-melody
generation system with music template (e.g., tonality, chord progression,
rhythm pattern, and cadence) to bridge the gap between lyrics and melodies
(i.e., the system consists of a lyric-to-template module and a
template-to-melody module). TeleMelody has two advantages. First, it is data
efficient. The template-to-melody module is trained in a self-supervised way
(i.e., the source template is extracted from the target melody) that does not
need any lyric-melody paired data. The lyric-to-template module is made up of
some rules and a lyric-to-rhythm model, which is trained with paired
lyric-rhythm data that is easier to obtain than paired lyric-melody data.
Second, it is controllable. The design of template ensures that the generated
melodies can be controlled by adjusting the musical elements in template. Both
subjective and objective experimental evaluations demonstrate that TeleMelody
generates melodies with higher quality, better controllability, and less
requirement on paired lyric-melody data than previous generation systems.
|
[
{
"version": "v1",
"created": "Mon, 20 Sep 2021 15:19:33 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 03:29:54 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Ju",
"Zeqian",
""
],
[
"Lu",
"Peiling",
""
],
[
"Tan",
"Xu",
""
],
[
"Wang",
"Rui",
""
],
[
"Zhang",
"Chen",
""
],
[
"Wu",
"Songruoyao",
""
],
[
"Zhang",
"Kejun",
""
],
[
"Li",
"Xiangyang",
""
],
[
"Qin",
"Tao",
""
],
[
"Liu",
"Tie-Yan",
""
]
] |
new_dataset
| 0.998525 |
2109.11835
|
Min Zhang
|
Min Zhang, Pranav Kadam, Shan Liu, C.-C. Jay Kuo
|
GSIP: Green Semantic Segmentation of Large-Scale Indoor Point Clouds
|
10 pages, 3 figures
|
Pattern Recognition Letters, Volume 164, 2022, Pages 9-15
|
10.1016/j.patrec.2022.10.014
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An efficient solution to semantic segmentation of large-scale indoor scene
point clouds is proposed in this work. It is named GSIP (Green Segmentation of
Indoor Point clouds) and its performance is evaluated on a representative
large-scale benchmark -- the Stanford 3D Indoor Segmentation (S3DIS) dataset.
GSIP has two novel components: 1) a room-style data pre-processing method that
selects a proper subset of points for further processing, and 2) a new feature
extractor which is extended from PointHop. For the former, sampled points of
each room form an input unit. For the latter, the weaknesses of PointHop's
feature extraction when extending it to large-scale point clouds are identified
and fixed with a simpler processing pipeline. As compared with PointNet, which
is a pioneering deep-learning-based solution, GSIP is green since it has
significantly lower computational complexity and a much smaller model size.
Furthermore, experiments show that GSIP outperforms PointNet in segmentation
performance for the S3DIS dataset.
|
[
{
"version": "v1",
"created": "Fri, 24 Sep 2021 09:26:53 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 05:03:44 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Zhang",
"Min",
""
],
[
"Kadam",
"Pranav",
""
],
[
"Liu",
"Shan",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
new_dataset
| 0.998438 |
2109.14396
|
Ivan P Yamshchikov
|
Alexey Tikhonov and Igor Samenko and Ivan P. Yamshchikov
|
StoryDB: Broad Multi-language Narrative Dataset
| null |
In Proceedings of the 2nd Workshop on Evaluation and Comparison of
NLP Systems 2021 Nov (pp. 32-39)
|
10.18653/v1/2021.eval4nlp-1.4
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents StoryDB - a broad multi-language dataset of narratives.
StoryDB is a corpus of texts that includes stories in 42 different languages.
Every language includes 500+ stories. Some of the languages include more than
20 000 stories. Every story is indexed across languages and labeled with tags
such as a genre or a topic. The corpus shows rich topical and language
variation and can serve as a resource for the study of the role of narrative in
natural language processing across various languages including low resource
ones. We also demonstrate how the dataset could be used to benchmark three
modern multilanguage models, namely, mDistillBERT, mBERT, and XLM-RoBERTa.
|
[
{
"version": "v1",
"created": "Wed, 29 Sep 2021 12:59:38 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Tikhonov",
"Alexey",
""
],
[
"Samenko",
"Igor",
""
],
[
"Yamshchikov",
"Ivan P.",
""
]
] |
new_dataset
| 0.999901 |
2110.10018
|
Noemie Perivier
|
Vineet Goyal and Noemie Perivier
|
Dynamic pricing and assortment under a contextual MNL demand
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider dynamic multi-product pricing and assortment problems under an
unknown demand over T periods, where in each period, the seller decides on the
price for each product or the assortment of products to offer to a customer who
chooses according to an unknown Multinomial Logit Model (MNL). Such problems
arise in many applications, including online retail and advertising. We propose
a randomized dynamic pricing policy based on a variant of the Online Newton
Step algorithm (ONS) that achieves a $O(d\sqrt{T}\log(T))$ regret guarantee
under an adversarial arrival model. We also present a new optimistic algorithm
for the adversarial MNL contextual bandits problem, which achieves a better
dependency than the state-of-the-art algorithms in a problem-dependent constant
$\kappa_2$ (potentially exponentially small). Our regret upper bound scales as
$\tilde{O}(d\sqrt{\kappa_2 T}+ \log(T)/\kappa_2)$, which gives a stronger bound
than the existing $\tilde{O}(d\sqrt{T}/\kappa_2)$ guarantees.
|
[
{
"version": "v1",
"created": "Tue, 19 Oct 2021 14:37:10 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 21:33:53 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Goyal",
"Vineet",
""
],
[
"Perivier",
"Noemie",
""
]
] |
new_dataset
| 0.992484 |
2203.03182
|
Pengjin Wei
|
Pengjin Wei, Guohang Yan, Yikang Li, Kun Fang, Xinyu Cai, Jie Yang,
Wei Liu
|
CROON: Automatic Multi-LiDAR Calibration and Refinement Method in Road
Scene
|
7 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sensor-based environmental perception is a crucial part of the autonomous
driving system. In order to get an excellent perception of the surrounding
environment, an intelligent system would configure multiple LiDARs (3D Light
Detection and Ranging) to cover the distant and near space of the car. The
precision of perception relies on the quality of sensor calibration. This
research aims at developing an accurate, automatic, and robust calibration
strategy for multiple LiDAR systems in the general road scene. We thus propose
CROON (automatiC multi-LiDAR CalibratiOn and Refinement method in rOad sceNe),
a two-stage method including rough and refinement calibration. The first stage
can calibrate the sensor from an arbitrary initial pose, and the second stage
is able to precisely calibrate the sensor iteratively. Specifically, CROON
utilizes the natural characteristics of road scenes, so it is independent and
easy to apply in large-scale conditions. Experimental results on real-world and
simulated data sets demonstrate the reliability and accuracy of our method. All
the related data sets and codes are open-sourced on the Github website
https://github.com/OpenCalib/LiDAR2LiDAR.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 07:36:31 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Nov 2022 13:15:27 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Wei",
"Pengjin",
""
],
[
"Yan",
"Guohang",
""
],
[
"Li",
"Yikang",
""
],
[
"Fang",
"Kun",
""
],
[
"Cai",
"Xinyu",
""
],
[
"Yang",
"Jie",
""
],
[
"Liu",
"Wei",
""
]
] |
new_dataset
| 0.999224 |
2203.07540
|
Peter Jansen
|
Ruoyao Wang, Peter Jansen, Marc-Alexandre C\^ot\'e, Prithviraj
Ammanabrolu
|
ScienceWorld: Is your Agent Smarter than a 5th Grader?
|
Accepted to EMNLP 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present ScienceWorld, a benchmark to test agents' scientific reasoning
abilities in a new interactive text environment at the level of a standard
elementary school science curriculum. Despite the transformer-based progress
seen in question-answering and scientific text processing, we find that current
models cannot reason about or explain learned science concepts in novel
contexts. For instance, models can easily answer what the conductivity of a
known material is but struggle when asked how they would conduct an experiment
in a grounded environment to find the conductivity of an unknown material. This
begs the question of whether current models are simply retrieving answers by
way of seeing a large number of similar examples or if they have learned to
reason about concepts in a reusable manner. We hypothesize that agents need to
be grounded in interactive environments to achieve such reasoning capabilities.
Our experiments provide empirical evidence supporting this hypothesis --
showing that a 1.5 million parameter agent trained interactively for 100k steps
outperforms an 11 billion parameter model statically trained for scientific
question-answering and reasoning from millions of expert demonstrations.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 22:52:34 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 17:52:27 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Wang",
"Ruoyao",
""
],
[
"Jansen",
"Peter",
""
],
[
"Côté",
"Marc-Alexandre",
""
],
[
"Ammanabrolu",
"Prithviraj",
""
]
] |
new_dataset
| 0.994662 |
2204.00239
|
Toru Tamaki
|
Jun Kimata, Tomoya Nitta, Toru Tamaki
|
ObjectMix: Data Augmentation by Copy-Pasting Objects in Videos for
Action Recognition
|
ACM Multimedia Asia (MMAsia '22), December 13--16, 2022, Tokyo, Japan
| null |
10.1145/3551626.3564941
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a data augmentation method for action recognition
using instance segmentation. Although many data augmentation methods have been
proposed for image recognition, few of them are tailored for action
recognition. Our proposed method, ObjectMix, extracts each object region from
two videos using instance segmentation and combines them to create new videos.
Experiments on two action recognition datasets, UCF101 and HMDB51, demonstrate
the effectiveness of the proposed method and show its superiority over
VideoMix, a prior work.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 06:58:44 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 01:41:17 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Kimata",
"Jun",
""
],
[
"Nitta",
"Tomoya",
""
],
[
"Tamaki",
"Toru",
""
]
] |
new_dataset
| 0.95882 |
2204.07741
|
Xingbo Wang
|
Meng Xia, Qian Zhu, Xingbo Wang, Fei Nie, Huamin Qu, Xiaojuan Ma
|
Persua: A Visual Interactive System to Enhance the Persuasiveness of
Arguments in Online Discussion
|
This paper will appear in CSCW 2022
|
Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 319 (November
2022)
|
10.1145/3555210
| null |
cs.HC cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Persuading people to change their opinions is a common practice in online
discussion forums on topics ranging from political campaigns to relationship
consultation. Enhancing people's ability to write persuasive arguments could
not only practice their critical thinking and reasoning but also contribute to
the effectiveness and civility in online communication. It is, however, not an
easy task in online discussion settings where written words are the primary
communication channel. In this paper, we derived four design goals for a tool
that helps users improve the persuasiveness of arguments in online discussions
through a survey with 123 online forum users and interviews with five debating
experts. To satisfy these design goals, we analyzed and built a labeled dataset
of fine-grained persuasive strategies (i.e., logos, pathos, ethos, and
evidence) in 164 arguments with high ratings on persuasiveness from
ChangeMyView, a popular online discussion forum. We then designed an
interactive visual system, Persua, which provides example-based guidance on
persuasive strategies to enhance the persuasiveness of arguments. In
particular, the system constructs portfolios of arguments based on different
persuasive strategies applied to a given discussion topic. It then presents
concrete examples based on the difference between the portfolios of user input
and high-quality arguments in the dataset. A between-subjects study shows
suggestive evidence that Persua encourages users to submit their arguments for
feedback more often and helps them improve the persuasiveness of their arguments
than a baseline system. Finally, a set of design considerations was summarized
to guide future intelligent systems that improve persuasiveness in text.
|
[
{
"version": "v1",
"created": "Sat, 16 Apr 2022 08:07:53 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 13:19:56 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Xia",
"Meng",
""
],
[
"Zhu",
"Qian",
""
],
[
"Wang",
"Xingbo",
""
],
[
"Nie",
"Fei",
""
],
[
"Qu",
"Huamin",
""
],
[
"Ma",
"Xiaojuan",
""
]
] |
new_dataset
| 0.997805 |
2204.08669
|
Raviraj Joshi
|
Abhishek Velankar, Hrushikesh Patil, Raviraj Joshi
|
Mono vs Multilingual BERT for Hate Speech Detection and Text
Classification: A Case Study in Marathi
| null | null |
10.1007/978-3-031-20650-4_10
| null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers are the most eminent architectures used for a vast range of
Natural Language Processing tasks. These models are pre-trained over a large
text corpus and are meant to deliver state-of-the-art results on tasks like
text classification. In this work, we conduct a comparative study between
monolingual and multilingual BERT models. We focus on the Marathi language and
evaluate the models on the datasets for hate speech detection, sentiment
analysis and simple text classification in Marathi. We use standard
multilingual models such as mBERT, indicBERT and xlm-RoBERTa and compare with
MahaBERT, MahaALBERT and MahaRoBERTa, the monolingual models for Marathi. We
further show that Marathi monolingual models outperform the multilingual BERT
variants on five different downstream fine-tuning experiments. We also evaluate
sentence embeddings from these models by freezing the BERT encoder layers. We
show that monolingual MahaBERT based models provide rich representations as
compared to sentence embeddings from multi-lingual counterparts. However, we
observe that these embeddings are not generic enough and do not work well on
out of domain social media datasets. We consider two Marathi hate speech
datasets (L3Cube-MahaHate and HASOC-2021), a Marathi sentiment classification
dataset (L3Cube-MahaSent), and Marathi headline and article classification
datasets.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 05:07:58 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Velankar",
"Abhishek",
""
],
[
"Patil",
"Hrushikesh",
""
],
[
"Joshi",
"Raviraj",
""
]
] |
new_dataset
| 0.995825 |
2204.13601
|
Ali Yazdani
|
Ali Yazdani, Hossein Simchi, Yasser Shekofteh
|
Emotion Recognition In Persian Speech Using Deep Neural Networks
|
5 pages, 1 figure, 3 tables
|
11th International Conference on Computer and Knowledge
Engineering (ICCKE 2021)
|
10.1109/ICCKE54056.2021.9721504
| null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech Emotion Recognition (SER) is of great importance in Human-Computer
Interaction (HCI), as it provides a deeper understanding of the situation and
results in better interaction. In recent years, various machine learning and
Deep Learning (DL) algorithms have been developed to improve SER techniques.
Recognition of the spoken emotions depends on the type of expression that
varies between different languages. In this paper, to further study important
factors in the Farsi language, we examine various DL techniques on a
Farsi/Persian dataset, Sharif Emotional Speech Database (ShEMO), which was
released in 2018. Using signal features in low- and high-level descriptions and
different deep neural networks and machine learning techniques, Unweighted
Accuracy (UA) of 65.20% and Weighted Accuracy (WA) of 78.29% are achieved.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 16:02:05 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Nov 2022 08:16:35 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Yazdani",
"Ali",
""
],
[
"Simchi",
"Hossein",
""
],
[
"Shekofteh",
"Yasser",
""
]
] |
new_dataset
| 0.99592 |
2205.05467
|
Zhiwu Huang
|
Chuqiao Li, Zhiwu Huang, Danda Pani Paudel, Yabin Wang, Mohamad
Shahbazi, Xiaopeng Hong, Luc Van Gool
|
A Continual Deepfake Detection Benchmark: Dataset, Methods, and
Essentials
|
Accepted to WACV 2023
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A number of benchmarks and techniques have been emerging for the
detection of deepfakes. However, very few works study the detection of
incrementally appearing deepfakes in real-world scenarios. To simulate the
wild scenes, this paper suggests a continual deepfake detection benchmark
(CDDB) over a new collection of deepfakes from both known and unknown
generative models. The suggested CDDB designs multiple evaluations on the
detection over easy, hard, and long sequence of deepfake tasks, with a set of
appropriate measures. In addition, we exploit multiple approaches to adapt
multiclass incremental learning methods, commonly used in the continual visual
recognition, to the continual deepfake detection problem. We evaluate existing
methods, including their adapted ones, on the proposed CDDB. Within the
proposed benchmark, we explore some commonly known essentials of standard
continual learning. Our study provides new insights on these essentials in the
context of continual deepfake detection. The suggested CDDB is clearly more
challenging than the existing benchmarks, which thus offers a suitable
evaluation avenue to the future research. Both data and code are available at
https://github.com/Coral79/CDDB.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 13:07:19 GMT"
},
{
"version": "v2",
"created": "Sat, 14 May 2022 03:19:38 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Nov 2022 14:36:43 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Li",
"Chuqiao",
""
],
[
"Huang",
"Zhiwu",
""
],
[
"Paudel",
"Danda Pani",
""
],
[
"Wang",
"Yabin",
""
],
[
"Shahbazi",
"Mohamad",
""
],
[
"Hong",
"Xiaopeng",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.995567 |
2205.05832
|
Shuang Wu
|
Shuang Wu, Xiaoning Song, Zhenhua Feng, Xiao-Jun Wu
|
NFLAT: Non-Flat-Lattice Transformer for Chinese Named Entity Recognition
|
13 pages, 6 figures, 9 tables
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Flat-LAttice Transformer (FLAT) has achieved great success in
Chinese Named Entity Recognition (NER). FLAT performs lexical enhancement by
constructing flat lattices, which mitigates the difficulties posed by blurred
word boundaries and the lack of word semantics. In FLAT, the positions of
starting and ending characters are used to connect a matching word. However,
this method is likely to match more words when dealing with long texts,
resulting in long input sequences. Therefore, it significantly increases the
memory and computational costs of the self-attention module. To deal with this
issue, we advocate a novel lexical enhancement method, InterFormer, that
effectively reduces the amount of computational and memory costs by
constructing non-flat lattices. Furthermore, with InterFormer as the backbone,
we implement NFLAT for Chinese NER. NFLAT decouples lexicon fusion and context
feature encoding. Compared with FLAT, it reduces unnecessary attention
calculations in "word-character" and "word-word". This reduces the memory usage
by about 50% and can use more extensive lexicons or higher batches for network
training. The experimental results obtained on several well-known benchmarks
demonstrate the superiority of the proposed method over the state-of-the-art
hybrid (character-word) models.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 01:55:37 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 07:24:58 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Nov 2022 13:47:16 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Wu",
"Shuang",
""
],
[
"Song",
"Xiaoning",
""
],
[
"Feng",
"Zhenhua",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] |
new_dataset
| 0.986875 |
2205.13412
|
Yanjie Li Mr.
|
Yanjie Li, Yiquan Li, Xuelong Dai, Songtao Guo, Bin Xiao
|
Physical-World Optical Adversarial Attacks on 3D Face Recognition
|
Submitted to CVPR 2023
| null | null | null |
cs.CV cs.CR eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
2D face recognition has been proven insecure against physical adversarial
attacks. However, few studies have investigated the possibility of attacking
real-world 3D face recognition systems. 3D-printed attacks recently proposed
cannot generate adversarial points in the air. In this paper, we attack 3D face
recognition systems through elaborate optical noises. We took structured light
3D scanners as our attack target. End-to-end attack algorithms are designed to
generate adversarial illumination for 3D faces through the inherent or an
additional projector to produce adversarial points at arbitrary positions.
Nevertheless, face reflectance is a complex procedure because the skin is
translucent. To involve this projection-and-capture procedure in optimization
loops, we model it with the Lambertian rendering model and use SfSNet to estimate the
albedo. Moreover, to improve the resistance to distance and angle changes while
maintaining the perturbation unnoticeable, a 3D transform invariant loss and
two kinds of sensitivity maps are introduced. Experiments are conducted in both
simulated and physical worlds. We successfully attacked point-cloud-based and
depth-image-based 3D face recognition algorithms while needing fewer
perturbations than previous state-of-the-art physical-world 3D adversarial
attacks.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 15:06:14 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 05:41:39 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Nov 2022 11:52:04 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Li",
"Yanjie",
""
],
[
"Li",
"Yiquan",
""
],
[
"Dai",
"Xuelong",
""
],
[
"Guo",
"Songtao",
""
],
[
"Xiao",
"Bin",
""
]
] |
new_dataset
| 0.999357 |
2206.01256
|
Tiancai Wang
|
Yingfei Liu, Junjie Yan, Fan Jia, Shuailin Li, Aqi Gao, Tiancai Wang,
Xiangyu Zhang, Jian Sun
|
PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images
|
Adding 3D lane detection results on OpenLane Dataset
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose PETRv2, a unified framework for 3D perception from
multi-view images. Based on PETR, PETRv2 explores the effectiveness of temporal
modeling, which utilizes the temporal information of previous frames to boost
3D object detection. More specifically, we extend the 3D position embedding (3D
PE) in PETR for temporal modeling. The 3D PE achieves temporal alignment of
object positions across different frames. A feature-guided position encoder is
further introduced to improve the data adaptability of 3D PE. To support
multi-task learning (e.g., BEV segmentation and 3D lane detection), PETRv2
provides a simple yet effective solution by introducing task-specific queries,
which are initialized under different spaces. PETRv2 achieves state-of-the-art
performance on 3D object detection, BEV segmentation and 3D lane detection.
Detailed robustness analysis is also conducted on PETR framework. We hope
PETRv2 can serve as a strong baseline for 3D perception. Code is available at
\url{https://github.com/megvii-research/PETR}.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 19:13:03 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2022 15:16:15 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Nov 2022 07:58:14 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Liu",
"Yingfei",
""
],
[
"Yan",
"Junjie",
""
],
[
"Jia",
"Fan",
""
],
[
"Li",
"Shuailin",
""
],
[
"Gao",
"Aqi",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Sun",
"Jian",
""
]
] |
new_dataset
| 0.974922 |
2207.00412
|
Yanick Schraner
|
Yanick Schraner, Christian Scheller, Michel Pl\"uss, Manfred Vogel
|
Swiss German Speech to Text system evaluation
|
arXiv admin note: text overlap with arXiv:2205.09501
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present an in-depth evaluation of four commercially available
Speech-to-Text (STT) systems for Swiss German. The systems are anonymized and
referred to as system a-d in this report. We compare the four systems to our
STT model, referred to as FHNW hereafter, and provide details on how we
trained our model. To evaluate the models, we use two STT datasets from
different domains: the Swiss Parliament Corpus (SPC) test set and a private
dataset in the news domain with an even distribution across seven dialect
regions. We provide a detailed error analysis to detect the three systems'
strengths and weaknesses. This analysis is limited by the characteristics of
the two test sets. Our model scored the highest bilingual evaluation understudy
(BLEU) on both datasets. On the SPC test set, we obtain a BLEU score of 0.607,
whereas the best commercial system reaches a BLEU score of 0.509. On our
private test set, we obtain a BLEU score of 0.722 and the best commercial
system a BLEU score of 0.568.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2022 13:43:06 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 10:35:45 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Schraner",
"Yanick",
""
],
[
"Scheller",
"Christian",
""
],
[
"Plüss",
"Michel",
""
],
[
"Vogel",
"Manfred",
""
]
] |
new_dataset
| 0.998197 |
2207.04034
|
Ranjit Jhala
|
Nico Lehmann, Adam Geller, Niki Vazou, Ranjit Jhala
|
Flux: Liquid Types for Rust
| null | null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Flux, which shows how logical refinements can work hand in glove
with Rust's ownership mechanisms to yield ergonomic type-based verification of
low-level pointer manipulating programs. First, we design a novel refined type
system for Rust that indexes mutable locations, with pure (immutable) values
that can appear in refinements, and then exploits Rust's ownership mechanisms
to abstract sub-structural reasoning about locations within Rust's polymorphic
type constructors, while supporting strong updates. We formalize the crucial
dependency upon Rust's strong aliasing guarantees by exploiting the stacked
borrows aliasing model to prove that "well-borrowed evaluations of well-typed
programs do not get stuck". Second, we implement our type system in Flux, a
plug-in to the Rust compiler that exploits the factoring of complex invariants
into types and refinements to efficiently synthesize loop annotations --
including complex quantified invariants describing the contents of containers
-- via liquid inference. Third, we evaluate Flux with a benchmark suite of
vector manipulating programs and parts of a previously verified secure
sandboxing library to demonstrate the advantages of refinement types over
program logics as implemented in the state-of-the-art Prusti verifier. While
Prusti's more expressive program logic can, in general, verify deep functional
correctness specifications, for the lightweight but ubiquitous and important
verification use-cases covered by our benchmarks, liquid typing makes
verification ergonomic by slashing specification lines by a factor of two,
verification time by an order of magnitude, and annotation overhead from up to
24% of code size (average 9%) to nothing at all.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 17:44:36 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 18:59:56 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Lehmann",
"Nico",
""
],
[
"Geller",
"Adam",
""
],
[
"Vazou",
"Niki",
""
],
[
"Jhala",
"Ranjit",
""
]
] |
new_dataset
| 0.996836 |
2208.09174
|
Travis Greene
|
Travis Greene, Amit Dhurandhar, Galit Shmueli
|
Atomist or Holist? A Diagnosis and Vision for More Productive
Interdisciplinary AI Ethics Dialogue
|
9 pages, 1 figure, 2 tables. To be published in Patterns by Cell
Press
| null | null | null |
cs.CY cs.AI stat.OT
|
http://creativecommons.org/licenses/by/4.0/
|
In response to growing recognition of the social impact of new AI-based
technologies, major AI and ML conferences and journals now encourage or require
papers to include ethics impact statements and undergo ethics reviews. This
move has sparked heated debate concerning the role of ethics in AI research, at
times devolving into name-calling and threats of "cancellation." We diagnose
this conflict as one between atomist and holist ideologies. Among other things,
atomists believe facts are and should be kept separate from values, while
holists believe facts and values are and should be inextricable from one
another. With the goal of reducing disciplinary polarization, we draw on
numerous philosophical and historical sources to describe each ideology's core
beliefs and assumptions. Finally, we call on atomists and holists within the
ever-expanding data science community to exhibit greater empathy during ethical
disagreements and propose four targeted strategies to ensure AI research
benefits society.
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 06:51:27 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 04:38:42 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Nov 2022 05:27:28 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Greene",
"Travis",
""
],
[
"Dhurandhar",
"Amit",
""
],
[
"Shmueli",
"Galit",
""
]
] |
new_dataset
| 0.997213 |
2208.12914
|
Himarsha R Jayanetti
|
Himarsha R. Jayanetti, Kritika Garg, Sawood Alam, Michael L. Nelson,
Michele C. Weigle
|
Robots Still Outnumber Humans in Web Archives, But Less Than Before
| null | null |
10.1007/978-3-031-16802-4_19
| null |
cs.DL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To identify robots and humans and analyze their respective access patterns,
we used the Internet Archive's (IA) Wayback Machine access logs from 2012 and
2019, as well as Arquivo.pt's (Portuguese Web Archive) access logs from 2019.
We identified user sessions in the access logs and classified those sessions as
human or robot based on their browsing behavior. To better understand how users
navigate through the web archives, we evaluated these sessions to discover user
access patterns. Based on the two archives and between the two years of IA
access logs (2012 vs. 2019), we present a comparison of detected robots vs.
humans and their user access patterns and temporal preferences. The total
number of robots detected in IA 2012 is greater than in IA 2019 (21% more in
requests and 18% more in sessions). Robots account for 98% of requests (97% of
sessions) in Arquivo.pt (2019). We found that the robots are almost entirely
limited to "Dip" and "Skim" access patterns in IA 2012, but exhibit all the
patterns and their combinations in IA 2019. Both humans and robots show a
preference for web pages archived in the near past.
|
[
{
"version": "v1",
"created": "Sat, 27 Aug 2022 02:51:06 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Jayanetti",
"Himarsha R.",
""
],
[
"Garg",
"Kritika",
""
],
[
"Alam",
"Sawood",
""
],
[
"Nelson",
"Michael L.",
""
],
[
"Weigle",
"Michele C.",
""
]
] |
new_dataset
| 0.976529 |
2208.13615
|
Fabrizio Frati
|
Michael A. Bekos and Giordano Da Lozzo and Fabrizio Frati and Martin
Gronemann and Tamara Mchedlidze and Chrysanthi N. Raftopoulou
|
Recognizing DAGs with Page-Number 2 is NP-complete
|
Appears in the Proceedings of the 30th International Symposium on
Graph Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.CG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The page-number of a directed acyclic graph (a DAG, for short) is the minimum
$k$ for which the DAG has a topological order and a $k$-coloring of its edges
such that no two edges of the same color cross, i.e., have alternating
endpoints along the topological order. In 1999, Heath and Pemmaraju conjectured
that the recognition of DAGs with page-number $2$ is NP-complete and proved
that recognizing DAGs with page-number $6$ is NP-complete [SIAM J. Computing,
1999]. Binucci et al. recently strengthened this result by proving that
recognizing DAGs with page-number $k$ is NP-complete, for every $k\geq 3$ [SoCG
2019]. In this paper, we finally resolve Heath and Pemmaraju's conjecture in
the affirmative. In particular, our NP-completeness result holds even for
$st$-planar graphs and planar posets.
|
[
{
"version": "v1",
"created": "Mon, 29 Aug 2022 14:06:06 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Aug 2022 10:16:56 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Nov 2022 19:16:44 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Bekos",
"Michael A.",
""
],
[
"Da Lozzo",
"Giordano",
""
],
[
"Frati",
"Fabrizio",
""
],
[
"Gronemann",
"Martin",
""
],
[
"Mchedlidze",
"Tamara",
""
],
[
"Raftopoulou",
"Chrysanthi N.",
""
]
] |
new_dataset
| 0.997642 |
2209.03594
|
Cheng Da
|
Cheng Da, Peng Wang, Cong Yao
|
Levenshtein OCR
|
Accepted by ECCV2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel scene text recognizer based on Vision-Language Transformer (VLT) is
presented. Inspired by Levenshtein Transformer in the area of NLP, the proposed
method (named Levenshtein OCR, and LevOCR for short) explores an alternative
way for automatically transcribing textual content from cropped natural images.
Specifically, we cast the problem of scene text recognition as an iterative
sequence refinement process. The initial prediction sequence produced by a pure
vision model is encoded and fed into a cross-modal transformer to interact and
fuse with the visual features, to progressively approximate the ground truth.
The refinement process is accomplished via two basic character-level
operations: deletion and insertion, which are learned with imitation learning
and allow for parallel decoding, dynamic length change and good
interpretability. The quantitative experiments clearly demonstrate that LevOCR
achieves state-of-the-art performances on standard benchmarks and the
qualitative analyses verify the effectiveness and advantage of the proposed
LevOCR algorithm. Code is available at
https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/OCR/LevOCR.
|
[
{
"version": "v1",
"created": "Thu, 8 Sep 2022 06:46:50 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 06:09:39 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Da",
"Cheng",
""
],
[
"Wang",
"Peng",
""
],
[
"Yao",
"Cong",
""
]
] |
new_dataset
| 0.995963 |
2210.03690
|
Nghia T. Le
|
Nghia T. Le, Fan Bai, and Alan Ritter
|
Few-Shot Anaphora Resolution in Scientific Protocols via Mixtures of
In-Context Experts
|
Findings of EMNLP 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Anaphora resolution is an important task for information extraction across a
range of languages, text genres, and domains, motivating the need for methods
that do not require large annotated datasets. In-context learning has emerged
as a promising approach, yet there are a number of challenges in applying
in-context learning to resolve anaphora. For example, encoding a single
in-context demonstration that consists of: an anaphor, a paragraph-length
context, and a list of corresponding antecedents, requires conditioning a
language model on a long sequence of tokens, limiting the number of
demonstrations per prompt. In this paper, we present MICE (Mixtures of
In-Context Experts), which we demonstrate is effective for few-shot anaphora
resolution in scientific protocols (Tamari et al., 2021). Given only a handful
of training examples, MICE combines the predictions of hundreds of in-context
experts, yielding a 30% increase in F1 score over a competitive prompt
retrieval baseline. Furthermore, we show MICE can be used to train compact
student models without sacrificing performance. As far as we are aware, this is
the first work to present experimental results demonstrating the effectiveness
of in-context learning on the task of few-shot anaphora resolution in
scientific protocols.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 16:51:45 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 18:31:12 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Le",
"Nghia T.",
""
],
[
"Bai",
"Fan",
""
],
[
"Ritter",
"Alan",
""
]
] |
new_dataset
| 0.999665 |
2210.11235
|
Rawshan Ara Mowri
|
Rawshan Ara Mowri, Madhuri Siddula, Kaushik Roy
|
Application of Explainable Machine Learning in Detecting and Classifying
Ransomware Families Based on API Call Analysis
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Ransomware has emerged as one of the major global threats in recent years.
The alarming increasing rate of ransomware attacks and new ransomware variants
intrigue the researchers to constantly examine the distinguishing traits of
ransomware and refine their detection strategies. Application Programming
Interface (API) is a way for one program to collaborate with another; API calls
are the medium by which they communicate. Ransomware uses this strategy to
interact with the OS and makes a significantly higher number of calls in
different sequences to request actions. This research work utilizes the
frequencies of different API calls to detect and classify ransomware families.
First, a Web-Crawler is developed to automate collecting the Windows Portable
Executable (PE) files of 15 different ransomware families. By extracting
different frequencies of 68 API calls, we develop our dataset in the first
phase of the two-phase feature engineering process. After selecting the most
significant features in the second phase of the feature engineering process, we
deploy six Supervised Machine Learning models: Naïve Bayes, Logistic
Regression, Random Forest, Stochastic Gradient Descent, K-Nearest Neighbor, and
Support Vector Machine. Then, the performances of all the classifiers are
compared to select the best model. The results reveal that Logistic Regression
can efficiently classify ransomware into their corresponding families securing
99.15% overall accuracy. Finally, instead of relying on the 'Black box'
characteristic of the Machine Learning models, we present the post-hoc analysis
of our best-performing model using 'SHapley Additive exPlanations' or SHAP
values to ascertain the transparency and trustworthiness of the model's
prediction.
|
[
{
"version": "v1",
"created": "Sun, 16 Oct 2022 15:54:45 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Oct 2022 01:26:44 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Nov 2022 19:08:28 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Mowri",
"Rawshan Ara",
""
],
[
"Siddula",
"Madhuri",
""
],
[
"Roy",
"Kaushik",
""
]
] |
new_dataset
| 0.984074 |
2211.06451
|
Ioanna Kantzavelou
|
Athanasios Kalogiratos (1) and Ioanna Kantzavelou (1) ((1) University
of West Attica)
|
Blockchain Technology to Secure Bluetooth
|
7 pages, 6 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Bluetooth is a communication technology used to wirelessly exchange data
between devices. In the last few years there have been found a great number of
security vulnerabilities, and adversaries are taking advantage of them causing
harm and significant loss. Numerous system security updates have been approved
and installed in order to sort out security holes and bugs, and prevent attacks
that could expose personal or other valuable information. But those updates are
not sufficient and appropriate and new bugs keep showing up. In Bluetooth
technology, pairing is identified as the step where most bugs are found, and
most attacks target this particular part of the Bluetooth process. A new technology
that has been proved bulletproof when it comes to security and the exchange of
sensitive information is Blockchain. Blockchain technology is promising to be
incorporated well in a network of smart devices, and secure an Internet of
Things (IoT), where Bluetooth technology is being extensively used. This work
presents a vulnerability discovered in Bluetooth pairing process, and proposes
a Blockchain solution approach to secure pairing and mitigate this
vulnerability. The paper first introduces the Bluetooth technology and delves
into how Blockchain technology can be a solution to certain security problems.
Then a solution approach shows how Blockchain can be integrated and implemented
to ensure the required level of security. Certain attack incidents on Bluetooth
vulnerable points are examined and discussion and conclusions give the
extension of the security related problems.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 19:20:33 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Kalogiratos",
"Athanasios",
""
],
[
"Kantzavelou",
"Ioanna",
""
]
] |
new_dataset
| 0.999637 |
2211.06543
|
Yuki Yada
|
Yuki Yada, Jiaying Feng, Tsuneo Matsumoto, Nao Fukushima, Fuyuko Kido,
Hayato Yamana
|
Dark patterns in e-commerce: a dataset and its baseline evaluations
|
Accepted at 5th International Workshop on Big Data for Cybersecurity
(BigCyber) in conjunction with the 2022 IEEE International Conference on Big
Data (IEEE BigData 2022)
| null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dark patterns, which are user interface designs in online services, induce
users to take unintended actions. Recently, dark patterns have been raised as
an issue of privacy and fairness. Thus, a wide range of research on detecting
dark patterns is eagerly awaited. In this work, we constructed a dataset for
dark pattern detection and prepared its baseline detection performance with
state-of-the-art machine learning methods. The original dataset was obtained
from Mathur et al.'s study in 2019, which consists of 1,818 dark pattern texts
from shopping sites. Then, we added negative samples, i.e., non-dark pattern
texts, by retrieving texts from the same websites as Mathur et al.'s dataset.
We also applied state-of-the-art machine learning methods to show the automatic
detection accuracy as baselines, including BERT, RoBERTa, ALBERT, and XLNet. As
a result of 5-fold cross-validation, we achieved the highest accuracy of 0.975
with RoBERTa. The dataset and baseline source codes are available at
https://github.com/yamanalab/ec-darkpattern.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 01:53:49 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Yada",
"Yuki",
""
],
[
"Feng",
"Jiaying",
""
],
[
"Matsumoto",
"Tsuneo",
""
],
[
"Fukushima",
"Nao",
""
],
[
"Kido",
"Fuyuko",
""
],
[
"Yamana",
"Hayato",
""
]
] |
new_dataset
| 0.998914 |
2211.06565
|
Guangtao Lyu
|
Guangtao Lyu (School of Computer Science and Artificial Intelligence,
Wuhan University of Technology, China)
|
MSLKANet: A Multi-Scale Large Kernel Attention Network for Scene Text
Removal
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene text removal aims to remove the text and fill the regions with
perceptually plausible background information in natural images. It has
attracted increasing attention due to its various applications in privacy
protection, scene text retrieval, and text editing. With the development of
deep learning, the previous methods have achieved significant improvements.
However, most of the existing methods seem to ignore large receptive
fields and global information. The pioneer method can get significant
improvements by only changing training data from the cropped image to the full
image. In this paper, we present a single-stage multi-scale network MSLKANet
for scene text removal in full images. For obtaining large receptive fields
and global information, we propose multi-scale large kernel attention (MSLKA)
to obtain long-range dependencies between the text regions and the backgrounds
at various granularity levels. Furthermore, we combine the large kernel
decomposition mechanism and atrous spatial pyramid pooling to build a large
kernel spatial pyramid pooling (LKSPP), which can perceive more valid pixels in
the spatial dimension while maintaining large receptive fields and low cost of
computation. Extensive experimental results indicate that the proposed method
achieves state-of-the-art performance on both synthetic and real-world datasets
and the effectiveness of the proposed components MSLKA and LKSPP.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 04:04:55 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Lyu",
"Guangtao",
"",
"School of Computer Science and Artificial Intelligence,\n Wuhan University of Technology, China"
]
] |
new_dataset
| 0.95562 |
2211.06571
|
Shuhan Yuan
|
Xingyi Zhao, Lu Zhang, Depeng Xu, Shuhan Yuan
|
Generating Textual Adversaries with Minimal Perturbation
|
To appear in EMNLP Findings 2022. The code is available at
https://github.com/xingyizhao/TAMPERS
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Many word-level adversarial attack approaches for textual data have been
proposed in recent studies. However, due to the massive search space consisting
of combinations of candidate words, the existing approaches face the problem of
preserving the semantics of texts when crafting adversarial counterparts. In
this paper, we develop a novel attack strategy to find adversarial texts with
high similarity to the original texts while introducing minimal perturbation.
The rationale is that adversarial texts with small perturbations are expected to
better preserve the semantic meaning of the original texts. Experiments show
that, compared with state-of-the-art attack approaches, our approach achieves
higher success rates and lower perturbation rates in four benchmark datasets.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 04:46:07 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Zhao",
"Xingyi",
""
],
[
"Zhang",
"Lu",
""
],
[
"Xu",
"Depeng",
""
],
[
"Yuan",
"Shuhan",
""
]
] |
new_dataset
| 0.968006 |
2211.06654
|
Jie Li
|
Jie Li, Xiaohu Tang, Hanxu Hou, Yunghsiang S. Han, Bo Bai, and Gong
Zhang
|
PMDS Array Codes With Small Sub-packetization, Small Repair
Bandwidth/Rebuilding Access
|
Accepted for publication in the IEEE Transactions on Information
Theory
| null |
10.1109/TIT.2022.3220227
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Partial maximum distance separable (PMDS) codes are a kind of erasure codes
where the nodes are divided into multiple groups with each forming an MDS code
with a smaller code length, thus they allow repairing a failed node with only a
few helper nodes and can correct all erasure patterns that are
information-theoretically correctable. However, the repair of a failed node of
PMDS codes still requires a large amount of communication if the group size is
large. Recently, PMDS array codes with each local code being an MSR code were
introduced to reduce the repair bandwidth further. However, they require
extensive rebuilding access and unavoidably a significant sub-packetization
level. In this paper, we first propose two constructions of PMDS array codes
with two global parities that have smaller sub-packetization levels and much
smaller finite fields than the existing one. One construction can support an
arbitrary number of local parities and has $(1+\epsilon)$-optimal repair
bandwidth (i.e., $(1+\epsilon)$ times the optimal repair bandwidth), while the
other one is limited to two local parities but has significantly smaller
rebuilding access and its sub-packetization level is only $2$. In addition, we
present a construction of PMDS array code with three global parities, which has
a smaller sub-packetization level as well as $(1+\epsilon)$-optimal repair
bandwidth; the required finite field is significantly smaller than existing
ones.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 12:51:36 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Li",
"Jie",
""
],
[
"Tang",
"Xiaohu",
""
],
[
"Hou",
"Hanxu",
""
],
[
"Han",
"Yunghsiang S.",
""
],
[
"Bai",
"Bo",
""
],
[
"Zhang",
"Gong",
""
]
] |
new_dataset
| 0.999871 |
2211.06696
|
Akinobu Mizutani
|
Tomoya Shiba, Tomohiro Ono, Shoshi Tokuno, Issei Uchino, Masaya
Okamoto, Daiju Kanaoka, Kazutaka Takahashi, Kenta Tsukamoto, Yoshiaki
Tsutsumi, Yugo Nakamura, Yukiya Fukuda, Yusuke Hoji, Hayato Amano, Yuma
Kubota, Mayu Koresawa, Yoshifumi Sakai, Ryogo Takemoto, Katsunori Tamai,
Kazuo Nakahara, Hiroyuki Hayashi, Satsuki Fujimatsu, Akinobu Mizutani, Yusuke
Mizoguchi, Yuhei Yoshimitsu, Mayo Suzuka, Ikuya Matsumoto, Yuga Yano,
Yuichiro Tanaka, Takashi Morie, and Hakaru Tamukoh
|
Hibikino-Musashi@Home 2022 Team Description Paper
|
arXiv admin note: substantial text overlap with arXiv:2005.14451,
arXiv:2006.01233
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our team, Hibikino-Musashi@Home (HMA), was founded in 2010. It is based in
Japan in the Kitakyushu Science and Research Park. Since 2010, we have annually
participated in the RoboCup@Home Japan Open competition in the open platform
league (OPL). We participated as an open platform league team in the 2017 Nagoya
RoboCup competition and as a domestic standard platform league (DSPL) team in
the 2017 Nagoya, 2018 Montreal, 2019 Sydney, and 2021 Worldwide RoboCup
competitions. We also participated in the World Robot Challenge (WRC) 2018 in the
service-robotics category of the partner-robot challenge (real space) and won
first place. Currently, we have 27 members from nine different laboratories
within the Kyushu Institute of Technology and the University of Kitakyushu. In
this paper, we introduce the activities that have been performed by our team
and the technologies that we use.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 16:10:05 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Shiba",
"Tomoya",
""
],
[
"Ono",
"Tomohiro",
""
],
[
"Tokuno",
"Shoshi",
""
],
[
"Uchino",
"Issei",
""
],
[
"Okamoto",
"Masaya",
""
],
[
"Kanaoka",
"Daiju",
""
],
[
"Takahashi",
"Kazutaka",
""
],
[
"Tsukamoto",
"Kenta",
""
],
[
"Tsutsumi",
"Yoshiaki",
""
],
[
"Nakamura",
"Yugo",
""
],
[
"Fukuda",
"Yukiya",
""
],
[
"Hoji",
"Yusuke",
""
],
[
"Amano",
"Hayato",
""
],
[
"Kubota",
"Yuma",
""
],
[
"Koresawa",
"Mayu",
""
],
[
"Sakai",
"Yoshifumi",
""
],
[
"Takemoto",
"Ryogo",
""
],
[
"Tamai",
"Katsunori",
""
],
[
"Nakahara",
"Kazuo",
""
],
[
"Hayashi",
"Hiroyuki",
""
],
[
"Fujimatsu",
"Satsuki",
""
],
[
"Mizutani",
"Akinobu",
""
],
[
"Mizoguchi",
"Yusuke",
""
],
[
"Yoshimitsu",
"Yuhei",
""
],
[
"Suzuka",
"Mayo",
""
],
[
"Matsumoto",
"Ikuya",
""
],
[
"Yano",
"Yuga",
""
],
[
"Tanaka",
"Yuichiro",
""
],
[
"Morie",
"Takashi",
""
],
[
"Tamukoh",
"Hakaru",
""
]
] |
new_dataset
| 0.999737 |
2211.06716
|
Linshan Jiang
|
Linshan Jiang, Qun Song, Rui Tan, Mo Li
|
PriMask: Cascadable and Collusion-Resilient Data Masking for Mobile
Cloud Inference
|
13 pages, best paper candidate, Sensys 2022
| null |
10.1145/3560905.3568531
| null |
cs.CR cs.DC cs.LG cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Mobile cloud offloading is indispensable for inference tasks based on
large-scale deep models. However, transmitting privacy-rich inference data to
the cloud incurs concerns. This paper presents the design of a system called
PriMask, in which the mobile device uses a secret small-scale neural network
called MaskNet to mask the data before transmission. PriMask significantly
weakens the cloud's capability to recover the data or extract certain private
attributes. The MaskNet is {\em cascadable} in that the mobile can opt in to or
out of its use seamlessly without any modifications to the cloud's inference
service. Moreover, the mobiles use different MaskNets, such that the collusion
between the cloud and some mobiles does not weaken the protection for other
mobiles. We devise a {\em split adversarial learning} method to train a neural
network that generates a new MaskNet quickly (within two seconds) at run time.
We apply PriMask to three mobile sensing applications with diverse modalities
and complexities, i.e., human activity recognition, urban environment
crowdsensing, and driver behavior recognition. Results show PriMask's
effectiveness in all three applications.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 17:54:13 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Jiang",
"Linshan",
""
],
[
"Song",
"Qun",
""
],
[
"Tan",
"Rui",
""
],
[
"Li",
"Mo",
""
]
] |
new_dataset
| 0.98621 |
2211.06719
|
Hao Tang
|
Hao Tang, Ling Shao, Philip H.S. Torr, Nicu Sebe
|
Bipartite Graph Reasoning GANs for Person Pose and Facial Image
Synthesis
|
Accepted to IJCV, an extended version of a paper published in BMVC
2020. arXiv admin note: substantial text overlap with arXiv:2008.04381
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel bipartite graph reasoning Generative Adversarial Network
(BiGraphGAN) for two challenging tasks: person pose and facial image synthesis.
The proposed graph generator consists of two novel blocks that aim to model the
pose-to-pose and pose-to-image relations, respectively. Specifically, the
proposed bipartite graph reasoning (BGR) block aims to reason the long-range
cross relations between the source and target pose in a bipartite graph, which
mitigates some of the challenges caused by pose deformation. Moreover, we
propose a new interaction-and-aggregation (IA) block to effectively update and
enhance the feature representation capability of both a person's shape and
appearance in an interactive way. To further capture the change in pose of each
part more precisely, we propose a novel part-aware bipartite graph reasoning
(PBGR) block to decompose the task of reasoning the global structure
transformation with a bipartite graph into learning different local
transformations for different semantic body/face parts. Experiments on two
challenging generation tasks with three public datasets demonstrate the
effectiveness of the proposed methods in terms of objective quantitative scores
and subjective visual realness. The source code and trained models are
available at https://github.com/Ha0Tang/BiGraphGAN.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 18:27:00 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Tang",
"Hao",
""
],
[
"Shao",
"Ling",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Sebe",
"Nicu",
""
]
] |
new_dataset
| 0.994986 |
2211.06770
|
Andrey Ignatov
|
Andrey Ignatov and Anastasia Sycheva and Radu Timofte and Yu Tseng and
Yu-Syuan Xu and Po-Hsiang Yu and Cheng-Ming Chiang and Hsien-Kai Kuo and
Min-Hung Chen and Chia-Ming Cheng and Luc Van Gool
|
MicroISP: Processing 32MP Photos on Mobile Devices with Deep Learning
|
arXiv admin note: text overlap with arXiv:2211.06263
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While neural network-based photo processing solutions can provide better
image quality than traditional ISP systems, their application to
mobile devices is still very limited due to their very high computational
complexity. In this paper, we present a novel MicroISP model designed
specifically for edge devices, taking into account their computational and
memory limitations. The proposed solution is capable of processing up to 32MP
photos on recent smartphones using the standard mobile ML libraries and
requiring less than 1 second to perform the inference, while for FullHD images
it achieves real-time performance. The architecture of the model is flexible,
allowing its complexity to be adjusted to devices of different computational power.
To evaluate the performance of the model, we collected a novel Fujifilm
UltraISP dataset consisting of thousands of paired photos captured with a
normal mobile camera sensor and a professional 102MP medium-format FujiFilm
GFX100 camera. The experiments demonstrated that, despite its compact size, the
MicroISP model is able to provide comparable or better visual results than the
traditional mobile ISP systems, while outperforming the previously proposed
efficient deep learning based solutions. Finally, this model is also compatible
with the latest mobile AI accelerators, achieving good runtime and low power
consumption on smartphone NPUs and APUs. The code, dataset and pre-trained
models are available on the project website:
https://people.ee.ethz.ch/~ihnatova/microisp.html
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 17:40:50 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Ignatov",
"Andrey",
""
],
[
"Sycheva",
"Anastasia",
""
],
[
"Timofte",
"Radu",
""
],
[
"Tseng",
"Yu",
""
],
[
"Xu",
"Yu-Syuan",
""
],
[
"Yu",
"Po-Hsiang",
""
],
[
"Chiang",
"Cheng-Ming",
""
],
[
"Kuo",
"Hsien-Kai",
""
],
[
"Chen",
"Min-Hung",
""
],
[
"Cheng",
"Chia-Ming",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.983806 |
2211.06783
|
Abhijit Suprem
|
Abhijit Suprem, Sanjyot Vaidya, Avinash Venugopal, Joao Eduardo
Ferreira, and Calton Pu
|
EdnaML: A Declarative API and Framework for Reproducible Deep Learning
| null | null | null | null |
cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Machine Learning has become the bedrock of recent advances in text, image,
video, and audio processing and generation. Most production systems deal with
several models during deployment and training, each with a variety of tuned
hyperparameters. Furthermore, data collection and processing aspects of ML
pipelines are receiving increasing interest due to their importance in creating
sustainable high-quality classifiers. We present EdnaML, a framework with a
declarative API for reproducible deep learning. EdnaML provides low-level
building blocks that can be composed manually, as well as a high-level pipeline
orchestration API to automate data collection, data processing, classifier
training, classifier deployment, and model monitoring. Our layered API allows
users to manage ML pipelines at high-level component abstractions, while
providing flexibility to modify any part of it through the building blocks. We
present several examples of ML pipelines with EdnaML, including a large-scale
fake news labeling and classification system with six sub-pipelines managed by
EdnaML.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 01:27:06 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Suprem",
"Abhijit",
""
],
[
"Vaidya",
"Sanjyot",
""
],
[
"Venugopal",
"Avinash",
""
],
[
"Ferreira",
"Joao Eduardo",
""
],
[
"Pu",
"Calton",
""
]
] |
new_dataset
| 0.999486 |
2211.06801
|
Zhaoliang Zheng
|
Zhaoliang Zheng, Thomas R. Bewley, Falko Kuester, Jiaqi Ma
|
BTO-RRT: A rapid, optimal, smooth and point cloud-based path planning
algorithm
|
12 Pages, 16 figures, submitted to T-IV and in review
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper explores a rapid, optimal smooth path-planning algorithm for
robots (e.g., autonomous vehicles) in point cloud environments. Derivative maps
such as dense point clouds, mesh maps, Octomaps, etc. are frequently used for
path planning purposes. A bi-directional target-oriented point planning
algorithm, directly using point clouds to compute the optimized and dynamically
feasible trajectories, is presented in this paper. This approach searches for
obstacle-free, low computational cost, smooth, and dynamically feasible paths
by analyzing a point cloud of the target environment, using a modified
bi-directional and RRT-connect-based path planning algorithm, with a k-d
tree-based obstacle avoidance strategy and three-step optimization. This
presented approach bypasses the common 3D map discretization, directly
leveraging point cloud data and it can be separated into two parts: modified
RRT-based algorithm core and the three-step optimization. Simulations on 8 2D
maps with different configurations and characteristics are presented to show
the efficiency and 2D performance of the proposed algorithm. Benchmark
comparison and evaluation with other RRT-based algorithms like RRT, B-RRT, and
RRT star are also shown in the paper. Finally, the proposed algorithm
successfully achieved different levels of mission goals on three 3D point cloud
maps with different densities. The whole simulation proves that not only can
our algorithm achieve better performance on 2D maps compared with other
algorithms, but it can also handle different tasks (ground vehicles and UAV
applications) on different 3D point cloud maps, which shows the high
performance and robustness of the proposed algorithm. The algorithm is
open-sourced at \url{https://github.com/zhz03/BTO-RRT}
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 03:46:00 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Zheng",
"Zhaoliang",
""
],
[
"Bewley",
"Thomas R.",
""
],
[
"Kuester",
"Falko",
""
],
[
"Ma",
"Jiaqi",
""
]
] |
new_dataset
| 0.966251 |
2211.06838
|
Minrui Xu
|
Minrui Xu, Dusit Niyato, Benjamin Wright, Hongliang Zhang, Jiawen
Kang, Zehui Xiong, Shiwen Mao, and Zhu Han
|
EPViSA: Efficient Auction Design for Real-time Physical-Virtual
Synchronization in the Metaverse
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Metaverse can obscure the boundary between the physical and virtual worlds.
Specifically, for the Metaverse in vehicular networks, i.e., the vehicular
Metaverse, vehicles are no longer isolated physical spaces but interfaces that
extend the virtual worlds to the physical world. Accessing the Metaverse via
autonomous vehicles (AVs), drivers and passengers can immerse in and interact
with 3D virtual objects overlaying views of streets on head-up displays (HUD)
via augmented reality (AR). The seamless, immersive, and interactive experience
rather relies on real-time multi-dimensional data synchronization between
physical entities, i.e., AVs, and virtual entities, i.e., Metaverse billboard
providers (MBPs). However, mechanisms to allocate and match synchronizing AV
and MBP pairs to roadside units (RSUs) in a synchronization service market,
which consists of the physical and virtual submarkets, are vulnerable to
adverse selection. In this paper, we propose an enhanced second-score
auction-based mechanism, named EPViSA, to allocate physical and virtual
entities in the synchronization service market of the vehicular Metaverse. The
EPViSA mechanism can determine synchronizing AV and MBP pairs simultaneously
while protecting participants from adverse selection and thus achieving high
total social welfare. We propose a synchronization scoring rule to eliminate
the external effects from the virtual submarkets. Then, a price scaling factor
is introduced to enhance the allocation of synchronizing virtual entities in
the virtual submarkets. Finally, rigorous analysis and extensive experimental
results demonstrate that EPViSA can achieve at least 96\% of the social welfare
compared to the omniscient benchmark while ensuring strategy-proof and adverse
selection free through a simulation testbed.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 07:42:03 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Xu",
"Minrui",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Wright",
"Benjamin",
""
],
[
"Zhang",
"Hongliang",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Xiong",
"Zehui",
""
],
[
"Mao",
"Shiwen",
""
],
[
"Han",
"Zhu",
""
]
] |
new_dataset
| 0.984505 |
2211.06913
|
Kaveh Akbari Hamed
|
Jeeseop Kim, Randall T Fawcett, Vinay R Kamidi, Aaron D Ames, Kaveh
Akbari Hamed
|
Layered Control for Cooperative Locomotion of Two Quadrupedal Robots:
Centralized and Distributed Approaches
| null | null | null | null |
cs.RO math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a layered control approach for real-time trajectory
planning and control of robust cooperative locomotion by two holonomically
constrained quadrupedal robots. A novel interconnected network of reduced-order
models, based on the single rigid body (SRB) dynamics, is developed for
trajectory planning purposes. At the higher level of the control architecture,
two different model predictive control (MPC) algorithms are proposed to address
the optimal control problem of the interconnected SRB dynamics: centralized and
distributed MPCs. The distributed MPC assumes two local quadratic programs that
share their optimal solutions according to a one-step communication delay and
an agreement protocol. At the lower level of the control scheme, distributed
nonlinear controllers are developed to impose the full-order dynamics to track
the prescribed reduced-order trajectories generated by MPCs. The effectiveness
of the control approach is verified with extensive numerical simulations and
experiments for the robust and cooperative locomotion of two holonomically
constrained A1 robots with different payloads on variable terrains and in the
presence of disturbances. It is shown that the distributed MPC has a
performance similar to that of the centralized MPC, while the computation time
is reduced significantly.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 14:26:32 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Kim",
"Jeeseop",
""
],
[
"Fawcett",
"Randall T",
""
],
[
"Kamidi",
"Vinay R",
""
],
[
"Ames",
"Aaron D",
""
],
[
"Hamed",
"Kaveh Akbari",
""
]
] |
new_dataset
| 0.959054 |
2211.06920
|
Merav Parter
|
Shimon Kogan and Merav Parter
|
Having Hope in Hops: New Spanners, Preservers and Lower Bounds for
Hopsets
|
FOCS 2022
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Hopsets and spanners are fundamental graph structures, playing a key role in
shortest path computation, distributed communication, and more. A (near-exact)
hopset for a given graph $G$ is a (small) subset of weighted edges $H$ that
when added to the graph $G$ reduces the number of hops (edges) of near-exact
shortest paths. Spanners and distance preservers, on the other hand, ask for
removing many edges from the graph while approximately preserving shortest path
distances.
We provide a general reduction scheme from graph hopsets to the known metric
compression schemes of spanners, emulators and distance preservers.
Consequently, we get new and improved upper bound constructions for the latter,
as well as, new lower bound results for hopsets. Our work makes a significant
progress on the tantalizing open problem concerning the formal connection
between hopsets and spanners, e.g., as posed by Elkin and Neiman [Bull. EATCS
2020].
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 15:00:22 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Kogan",
"Shimon",
""
],
[
"Parter",
"Merav",
""
]
] |
new_dataset
| 0.997152 |
2211.06977
|
Jiaxin Jiang
|
Jiaxin Jiang and Yuan Li and Bingsheng He and Bryan Hooi and Jia Chen
and Johan Kok Zhi Kang
|
Spade: A Real-Time Fraud Detection Framework on Evolving Graphs
(Complete Version)
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Real-time fraud detection is a challenge for most financial and electronic
commercial platforms. To identify fraudulent communities, Grab, one of the
largest technology companies in Southeast Asia, forms a graph from a set of
transactions and detects dense subgraphs arising from abnormally large numbers
of connections among fraudsters. Existing dense subgraph detection approaches
focus on static graphs without considering the fact that transaction graphs are
highly dynamic. Moreover, detecting dense subgraphs from scratch with graph
updates is time consuming and cannot meet the real-time requirement in
industry. To address this problem, we introduce an incremental real-time fraud
detection framework called Spade. Spade can detect fraudulent communities in
hundreds of microseconds on million-scale graphs by incrementally maintaining
dense subgraphs. Furthermore, Spade supports batch updates and edge grouping to
reduce response latency. Lastly, Spade provides simple but expressive APIs for
the design of evolving fraud detection semantics. Developers plug their
customized suspiciousness functions into Spade which incrementalizes their
semantics without recasting their algorithms. Extensive experiments show that
Spade detects fraudulent communities in real time on million-scale graphs.
Peeling algorithms incrementalized by Spade are up to a million times faster
than the static version.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 18:06:36 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Jiang",
"Jiaxin",
""
],
[
"Li",
"Yuan",
""
],
[
"He",
"Bingsheng",
""
],
[
"Hooi",
"Bryan",
""
],
[
"Chen",
"Jia",
""
],
[
"Kang",
"Johan Kok Zhi",
""
]
] |
new_dataset
| 0.986358 |
2211.06992
|
Aron Wussler
|
Francisco Vial-Prado and Aron Wussler
|
OpenPGP Email Forwarding Via Diverted Elliptic Curve Diffie-Hellman Key
Exchanges
|
12 pages, presented at ICMC 2021
| null |
10.1007/978-981-16-6890-6_12
| null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
An offline OpenPGP user might want to forward part or all of their email
messages to third parties. Given that messages are encrypted, this requires
transforming them into ciphertexts decryptable by the intended forwarded
parties, while maintaining confidentiality and authentication. It is shown in
recent lines of work that this can be achieved by means of proxy-re-encryption
schemes, however, while encrypted email forwarding is the most mentioned
application of proxy-re-encryption, it has not been implemented in the OpenPGP
context, to the best of our knowledge. In this paper, we adapt the seminal
technique introduced by Blaze, Bleumer and Strauss in EUROCRYPT'98, allowing a
Mail Transfer Agent to transform and forward OpenPGP messages without access to
decryption keys or plaintexts. We also provide implementation details and a
security analysis.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 18:58:20 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Vial-Prado",
"Francisco",
""
],
[
"Wussler",
"Aron",
""
]
] |
new_dataset
| 0.977325 |
2211.07052
|
Adam Dejl
|
Adam Dejl, Harsh Deep, Jonathan Fei, Ardavan Saeedi and Li-wei H.
Lehman
|
Treatment-RSPN: Recurrent Sum-Product Networks for Sequential Treatment
Regimes
|
Extended Abstract presented at Machine Learning for Health (ML4H)
symposium 2022, November 28th, 2022, New Orleans, United States & Virtual,
http://www.ml4h.cc, 14 pages
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Sum-product networks (SPNs) have recently emerged as a novel deep learning
architecture enabling highly efficient probabilistic inference. Since their
introduction, SPNs have been applied to a wide range of data modalities and
extended to time-sequence data. In this paper, we propose a general framework
for modelling sequential treatment decision-making behaviour and treatment
response using recurrent sum-product networks (RSPNs). Models developed using
our framework benefit from the full range of RSPN capabilities, including the
abilities to model the full distribution of the data, to seamlessly handle
latent variables, missing values and categorical data, and to efficiently
perform marginal and conditional inference. Our methodology is complemented by
a novel variant of the expectation-maximization algorithm for RSPNs, enabling
efficient training of our models. We evaluate our approach on a synthetic
dataset as well as real-world data from the MIMIC-IV intensive care unit
medical database. Our evaluation demonstrates that our approach can closely
match the ground-truth data generation process on synthetic data and achieve
results close to neural and probabilistic baselines while using a tractable and
interpretable model.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 00:18:44 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Dejl",
"Adam",
""
],
[
"Deep",
"Harsh",
""
],
[
"Fei",
"Jonathan",
""
],
[
"Saeedi",
"Ardavan",
""
],
[
"Lehman",
"Li-wei H.",
""
]
] |
new_dataset
| 0.967506 |
2211.07066
|
Nianlong Gu
|
Nianlong Gu, Richard H.R. Hahnloser
|
Controllable Citation Text Generation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of citation generation is usually to automatically generate a
citation sentence that refers to a chosen paper in the context of a manuscript.
However, a rigid citation generation process is at odds with an author's desire
to control the generated text based on certain attributes, such as 1) the
citation intent of e.g. either introducing background information or comparing
results; 2) keywords that should appear in the citation text; or 3) specific
sentences in the cited paper that characterize the citation content. To provide
these degrees of freedom, we present a controllable citation generation system.
In data from a large corpus, we first parse the attributes of each citation
sentence and use these as additional input sources during training of the
BART-based abstractive summarizer. We further develop an attribute suggestion
module that infers the citation intent and suggests relevant keywords and
sentences that users can select to tune the generation. Our framework gives
users more control over generated citations, outperforming citation generation
models without attribute awareness in both ROUGE and human evaluations.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 01:54:08 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Gu",
"Nianlong",
""
],
[
"Hahnloser",
"Richard H. R.",
""
]
] |
new_dataset
| 0.999317 |
2211.07089
|
Yunfeng Fan
|
Yunfeng Fan, Wenchao Xu, Haozhao Wang, Junxiao Wang, and Song Guo
|
PMR: Prototypical Modal Rebalance for Multimodal Learning
|
10 pages,4 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal learning (MML) aims to jointly exploit the common priors of
different modalities to compensate for their inherent limitations. However,
existing MML methods often optimize a uniform objective for different
modalities, leading to the notorious "modality imbalance" problem and
counterproductive MML performance. To address the problem, some existing
methods modulate the learning pace based on the fused modality, which is
dominated by the better modality and eventually results in a limited
improvement on the worse modality. To better exploit multimodal features,
we propose Prototypical Modality Rebalance (PMR) to perform stimulation on the
particular slow-learning modality without interference from other modalities.
Specifically, we introduce the prototypes that represent general features for
each class, to build the non-parametric classifiers for uni-modal performance
evaluation. Then, we try to accelerate the slow-learning modality by enhancing
its clustering toward prototypes. Furthermore, to alleviate the suppression
from the dominant modality, we introduce a prototype-based entropy
regularization term during the early training stage to prevent premature
convergence. Besides, our method relies only on the representations of each
modality, without restrictions from model structures or fusion methods,
giving it great application potential for various scenarios.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 03:36:05 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Fan",
"Yunfeng",
""
],
[
"Xu",
"Wenchao",
""
],
[
"Wang",
"Haozhao",
""
],
[
"Wang",
"Junxiao",
""
],
[
"Guo",
"Song",
""
]
] |
new_dataset
| 0.994903 |
2211.07090
|
Yuwei Ren
|
Yuwei Ren, Jiuyuan Lu, Andrian Beletchi, Yin Huang, Ilia Karmanov,
Daniel Fontijne, Chirag Patel and Hao Xu
|
Hand gesture recognition using 802.11ad mmWave sensor in the mobile
device
|
6 pages, 12 figures
|
2021 IEEE Wireless Communications and Networking Conference
Workshops (WCNCW)
| null | null |
cs.IT cs.LG math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We explore the feasibility of AI assisted hand-gesture recognition using
802.11ad 60GHz (mmWave) technology in smartphones. Range-Doppler information
(RDI) is obtained by using pulse Doppler radar for gesture recognition. We
built a prototype system, where radar sensing and WLAN communication waveform
can coexist by time-division duplex (TDD), to demonstrate the real-time
hand-gesture inference. It can gather sensing data and predict gestures within
100 milliseconds. First, we build the pipeline for the real-time feature
processing, which is robust to occasional frame drops in the data stream. RDI
sequence restoration is implemented to handle the frame dropping in the
continuous data stream, and also applied to data augmentation. Second,
the RDIs of different gestures are analyzed, where finger and hand motions can clearly
show distinctive features. Third, five typical gestures (swipe, palm-holding,
pull-push, finger-sliding and noise) are experimented with, and a
classification framework is explored to segment the different gestures in the
continuous gesture sequence with arbitrary inputs. We evaluate our architecture
on a large multi-person dataset and report > 95% accuracy with one CNN + LSTM
model. Further, a pure CNN model is developed to fit to on-device
implementation, which minimizes the inference latency, power consumption and
computation cost. And the accuracy of this CNN model is more than 93% with only
2.29K parameters.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 03:36:17 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Ren",
"Yuwei",
""
],
[
"Lu",
"Jiuyuan",
""
],
[
"Beletchi",
"Andrian",
""
],
[
"Huang",
"Yin",
""
],
[
"Karmanov",
"Ilia",
""
],
[
"Fontijne",
"Daniel",
""
],
[
"Patel",
"Chirag",
""
],
[
"Xu",
"Hao",
""
]
] |
new_dataset
| 0.999361 |
2211.07131
|
Eunjin Choi
|
Eunjin Choi, Yoonjin Chung, Seolhee Lee, JongIk Jeon, Taegyun Kwon,
Juhan Nam
|
YM2413-MDB: A Multi-Instrumental FM Video Game Music Dataset with
Emotion Annotations
|
The paper has been accepted for publication at ISMIR 2022
| null | null | null |
cs.SD cs.LG cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Existing multi-instrumental datasets tend to be biased toward pop and
classical music. In addition, they generally lack high-level annotations such
as emotion tags. In this paper, we propose YM2413-MDB, an 80s FM video game
music dataset with multi-label emotion annotations. It includes 669 audio and
MIDI files of music from Sega and MSX PC games in the 80s using YM2413, a
programmable sound generator based on FM. The collected game music is arranged
with a subset of 15 monophonic instruments and one drum instrument. They were
converted from binary commands of the YM2413 sound chip. Each song was labeled
with 19 emotion tags by two annotators and validated by three verifiers to
obtain refined tags. We provide the baseline models and results for emotion
recognition and emotion-conditioned symbolic music generation using YM2413-MDB.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 06:18:25 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Choi",
"Eunjin",
""
],
[
"Chung",
"Yoonjin",
""
],
[
"Lee",
"Seolhee",
""
],
[
"Jeon",
"JongIk",
""
],
[
"Kwon",
"Taegyun",
""
],
[
"Nam",
"Juhan",
""
]
] |
new_dataset
| 0.999726 |
2211.07161
|
Kemal Bicakci
|
Kemal Bicakci and Yusuf Uzunay
|
Is FIDO2 Passwordless Authentication a Hype or for Real?: A Position
Paper
|
Published in proceedings of the 15th International Information
Security and Cryptology Conference, 6 pages
| null |
10.1109/ISCTURKEY56345.2022.9931832
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Operating system and browser support that comes with the FIDO2 standard, along
with the biometric user verification options increasingly available on
smartphones, has excited everyone, especially big tech companies, about the
passwordless future. Has a dream come true? Are we finally getting rid of
passwords? In this position paper, we argue that although passwordless
authentication may be preferable in certain situations, it will still not be
possible to eliminate passwords on the web in the foreseeable future. We defend
our position with five main reasons, supported either by the results from the
recent literature or by our own technical and business experience. We believe
our discussion could also serve as a research agenda comprising promising
future work directions on (passwordless) user authentication.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 07:47:40 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Bicakci",
"Kemal",
""
],
[
"Uzunay",
"Yusuf",
""
]
] |
new_dataset
| 0.992182 |
2211.07190
|
Lab SmartImaging
|
Baoshun Shi, Ke Jiang, Shaolei Zhang, Qiusheng Lian, and Yanwei Qin
|
TriDoNet: A Triple Domain Model-driven Network for CT Metal Artifact
Reduction
|
6 pages, 3 figures
| null | null | null |
cs.NI cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent deep learning-based methods have achieved promising performance for
computed tomography metal artifact reduction (CTMAR). However, most of them
suffer from two limitations: (i) the domain knowledge is not fully embedded
into the network training; (ii) metal artifacts lack effective representation
models. The aforementioned limitations leave room for further performance
improvement. Against these issues, we propose a novel triple domain
model-driven CTMAR network, termed as TriDoNet, whose network training exploits
triple domain knowledge, i.e., the knowledge of the sinogram, CT image, and
metal artifact domains. Specifically, to explore the non-local repetitive
streaking patterns of metal artifacts, we encode them as an explicit tight
frame sparse representation model with adaptive thresholds. Furthermore, we
design a contrastive regularization (CR) built upon contrastive learning to
exploit clean CT images and metal-affected images as positive and negative
samples, respectively. Experimental results show that our TriDoNet can generate
superior artifact-reduced CT images.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 08:28:57 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Shi",
"Baoshun",
""
],
[
"Jiang",
"Ke",
""
],
[
"Zhang",
"Shaolei",
""
],
[
"Lian",
"Qiusheng",
""
],
[
"Qin",
"Yanwei",
""
]
] |
new_dataset
| 0.995519 |
2211.07290
|
Siddique Latif
|
Siddique Latif, Hafiz Shehbaz Ali, Muhammad Usama, Rajib Rana, Bj\"orn
Schuller, and Junaid Qadir
|
AI-Based Emotion Recognition: Promise, Peril, and Prescriptions for
Prosocial Path
|
Under review in IEEE TAC
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated emotion recognition (AER) technology can detect humans' emotional
states in real-time using facial expressions, voice attributes, text, body
movements, and neurological signals and has a broad range of applications
across many sectors. It helps businesses get a much deeper understanding of
their customers, enables monitoring of individuals' moods in healthcare,
education, or the automotive industry, and enables identification of violence
and threat in forensics, to name a few. However, AER technology also risks
using artificial intelligence (AI) to interpret sensitive human emotions. It
can be used for economic and political power and against individual rights.
Human emotions are highly personal, and users have justifiable concerns about
privacy invasion, emotional manipulation, and bias. In this paper, we present
the promises and perils of AER applications. We discuss the ethical challenges
related to the data and AER systems and highlight the prescriptions for
prosocial perspectives for future AER applications. We hope this work will help
AI researchers and developers design prosocial AER applications.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 11:43:10 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Latif",
"Siddique",
""
],
[
"Ali",
"Hafiz Shehbaz",
""
],
[
"Usama",
"Muhammad",
""
],
[
"Rana",
"Rajib",
""
],
[
"Schuller",
"Björn",
""
],
[
"Qadir",
"Junaid",
""
]
] |
new_dataset
| 0.970884 |
2211.07342
|
Xiaozhi Wang
|
Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin,
Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou
|
MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference,
Temporal, Causal, and Subevent Relation Extraction
|
Accepted at EMNLP 2022. Camera-ready version
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The diverse relationships among real-world events, including coreference,
temporal, causal, and subevent relations, are fundamental to understanding
natural languages. However, two drawbacks of existing datasets limit event
relation extraction (ERE) tasks: (1) Small scale. Due to the annotation
complexity, the data scale of existing datasets is limited, which cannot well
train and evaluate data-hungry models. (2) Absence of unified annotation.
Different types of event relations naturally interact with each other, but
existing datasets only cover limited relation types at once, which prevents
models from taking full advantage of relation interactions. To address these
issues, we construct a unified large-scale human-annotated ERE dataset
MAVEN-ERE with improved annotation schemes. It contains 103,193 event
coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and
15,841 subevent relations, which is larger than existing datasets of all the
ERE tasks by at least an order of magnitude. Experiments show that ERE on
MAVEN-ERE is quite challenging, and considering relation interactions with
joint learning can improve performances. The dataset and source codes can be
obtained from https://github.com/THU-KEG/MAVEN-ERE.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 13:34:49 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Wang",
"Xiaozhi",
""
],
[
"Chen",
"Yulin",
""
],
[
"Ding",
"Ning",
""
],
[
"Peng",
"Hao",
""
],
[
"Wang",
"Zimu",
""
],
[
"Lin",
"Yankai",
""
],
[
"Han",
"Xu",
""
],
[
"Hou",
"Lei",
""
],
[
"Li",
"Juanzi",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Li",
"Peng",
""
],
[
"Zhou",
"Jie",
""
]
] |
new_dataset
| 0.999811 |
2211.07459
|
Zirui Wu
|
Zirui Wu, Yuantao Chen, Runyi Yang, Zhenxin Zhu, Chao Hou, Yongliang
Shi, Hao Zhao, Guyue Zhou
|
AsyncNeRF: Learning Large-scale Radiance Fields from Asynchronous RGB-D
Sequences with Time-Pose Function
|
10 pages, 6 figures, 4 tables
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale radiance fields are promising mapping tools for smart
transportation applications like autonomous driving or drone delivery. But for
large-scale scenes, compact synchronized RGB-D cameras are not applicable due
to limited sensing range, and using separate RGB and depth sensors inevitably
leads to unsynchronized sequences. Inspired by the recent success of
self-calibrating radiance field training methods that do not require known
intrinsic or extrinsic parameters, we propose the first solution that
self-calibrates the mismatch between RGB and depth frames. We leverage the
important domain-specific fact that RGB and depth frames are actually sampled
from the same trajectory and develop a novel implicit network called the
time-pose function. Combining it with a large-scale radiance field leads to an
architecture that cascades two implicit representation networks. To validate
its effectiveness, we construct a diverse and photorealistic dataset that
covers various RGB-D mismatch scenarios. Through a comprehensive benchmarking
on this dataset, we demonstrate the flexibility of our method in different
scenarios and superior performance over applicable prior counterparts. Codes,
data, and models will be made publicly available.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 15:37:27 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Wu",
"Zirui",
""
],
[
"Chen",
"Yuantao",
""
],
[
"Yang",
"Runyi",
""
],
[
"Zhu",
"Zhenxin",
""
],
[
"Hou",
"Chao",
""
],
[
"Shi",
"Yongliang",
""
],
[
"Zhao",
"Hao",
""
],
[
"Zhou",
"Guyue",
""
]
] |
new_dataset
| 0.996638 |
2211.07491
|
Yigit Baran Can
|
Yigit Baran Can, Alexander Liniger, Danda Pani Paudel, Luc Van Gool
|
Piecewise Planar Hulls for Semi-Supervised Learning of 3D Shape and Pose
from 2D Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of estimating 3D shape and pose of an object in terms of
keypoints, from a single 2D image.
The shape and pose are learned directly from images collected by category
and their partial 2D keypoint annotations. In this work, we first propose an
end-to-end training framework for intermediate 2D keypoints extraction and
final 3D shape and pose estimation. The proposed framework is then trained
using only the weak supervision of the intermediate 2D keypoints. Additionally,
we devise a semi-supervised training framework that benefits from both labeled
and unlabeled data. To leverage the unlabeled data, we introduce and exploit
the \emph{piece-wise planar hull} prior of the canonical object shape. These
planar hulls are defined manually once per object category, with the help of
the keypoints. On the one hand, the proposed method learns to segment these
planar hulls from the labeled data. On the other hand, it simultaneously
enforces the consistency between predicted keypoints and the segmented hulls on
the unlabeled data. The enforced consistency allows us to efficiently use the
unlabeled data for the task at hand. The proposed method achieves comparable
results with fully supervised state-of-the-art methods by using only half of
the annotations. Our source code will be made publicly available.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 16:18:11 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Can",
"Yigit Baran",
""
],
[
"Liniger",
"Alexander",
""
],
[
"Paudel",
"Danda Pani",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.9649 |
2211.07545
|
Amir Rasouli
|
Amir Rasouli, Randy Goebel, Matthew E. Taylor, Iuliia Kotseruba,
Soheil Alizadeh, Tianpei Yang, Montgomery Alban, Florian Shkurti, Yuzheng
Zhuang, Adam Scibior, Kasra Rezaee, Animesh Garg, David Meger, Jun Luo, Liam
Paull, Weinan Zhang, Xinyu Wang, and Xi Chen
|
NeurIPS 2022 Competition: Driving SMARTS
|
10 pages, 8 figures
| null | null | null |
cs.RO cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Driving SMARTS is a regular competition designed to tackle problems caused by
the distribution shift in dynamic interaction contexts that are prevalent in
real-world autonomous driving (AD). The proposed competition supports
methodologically diverse solutions, such as reinforcement learning (RL) and
offline learning methods, trained on a combination of naturalistic AD data and
the open-source simulation platform SMARTS. The two-track structure allows focusing
on different aspects of the distribution shift. Track 1 is open to any method
and will give ML researchers with different backgrounds an opportunity to solve
a real-world autonomous driving challenge. Track 2 is designed for strictly
offline learning methods. Therefore, direct comparisons can be made between
different methods with the aim to identify new promising research directions.
The proposed setup consists of 1) realistic traffic generated using real-world
data and micro simulators to ensure fidelity of the scenarios, 2) framework
accommodating diverse methods for solving the problem, and 3) a baseline method.
As such it provides a unique opportunity for the principled investigation into
various aspects of autonomous vehicle deployment.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 17:10:53 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Rasouli",
"Amir",
""
],
[
"Goebel",
"Randy",
""
],
[
"Taylor",
"Matthew E.",
""
],
[
"Kotseruba",
"Iuliia",
""
],
[
"Alizadeh",
"Soheil",
""
],
[
"Yang",
"Tianpei",
""
],
[
"Alban",
"Montgomery",
""
],
[
"Shkurti",
"Florian",
""
],
[
"Zhuang",
"Yuzheng",
""
],
[
"Scibior",
"Adam",
""
],
[
"Rezaee",
"Kasra",
""
],
[
"Garg",
"Animesh",
""
],
[
"Meger",
"David",
""
],
[
"Luo",
"Jun",
""
],
[
"Paull",
"Liam",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Chen",
"Xi",
""
]
] |
new_dataset
| 0.998054 |
2211.07546
|
Shizheng Zhou
|
Shizheng Zhou, Juntao Jiang, Xiaohan Hong, Yajun Fang, Yan Hong,
Pengcheng Fu
|
Marine Microalgae Detection in Microscopy Images: A New Dataset
| null | null | null | null |
cs.CV cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Marine microalgae are widespread in the ocean and play a crucial role in the
ecosystem. Automatic identification and location of marine microalgae in
microscopy images would help establish marine ecological environment monitoring
and water quality evaluation system. A new dataset for marine microalgae
detection is proposed in this paper. Six classes of microalgae commonly found in
the ocean (Bacillariophyta, Chlorella pyrenoidosa, Platymonas, Dunaliella
salina, Chrysophyta, Symbiodiniaceae) are microscopically imaged in real-time.
Images of Symbiodiniaceae in three physiological states known as normal,
bleaching, and translating are also included. We annotated these images with
bounding boxes using Labelme software and split them into the training and
testing sets. The total number of images in the dataset is 937 and all the
objects in these images were annotated. The total number of annotated objects
is 4201. The training set contains 537 images and the testing set contains 430
images. Baselines of different object detection algorithms are trained,
validated and tested on this dataset. The dataset can be accessed via
tianchi.aliyun.com/competition/entrance/532036/information.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 17:11:15 GMT"
}
] | 2022-11-15T00:00:00 |
[
[
"Zhou",
"Shizheng",
""
],
[
"Jiang",
"Juntao",
""
],
[
"Hong",
"Xiaohan",
""
],
[
"Fang",
"Yajun",
""
],
[
"Hong",
"Yan",
""
],
[
"Fu",
"Pengcheng",
""
]
] |
new_dataset
| 0.999597 |
2008.04008
|
Rafael Kiesel
|
Thomas Eiter and Rafael Kiesel
|
ASP(AC): Answer Set Programming with Algebraic Constraints
|
32 pages, 16 pages are appendix
| null |
10.1017/S1471068420000393
| null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weighted Logic is a powerful tool for the specification of calculations over
semirings that depend on qualitative information. Using a novel combination of
Weighted Logic and Here-and-There (HT) Logic, in which this dependence is based
on intuitionistic grounds, we introduce Answer Set Programming with Algebraic
Constraints (ASP(AC)), where rules may contain constraints that compare
semiring values to weighted formula evaluations. Such constraints provide
streamlined access to a manifold of constructs available in ASP, like
aggregates, choice constraints, and arithmetic operators. They extend some of
them and provide a generic framework for defining programs with algebraic
computation, which can be fruitfully used e.g. for provenance semantics of
datalog programs. While undecidable in general, expressive fragments of ASP(AC)
can be exploited for effective problem-solving in a rich framework. This work
is under consideration for acceptance in Theory and Practice of Logic
Programming.
|
[
{
"version": "v1",
"created": "Mon, 10 Aug 2020 10:20:49 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Eiter",
"Thomas",
""
],
[
"Kiesel",
"Rafael",
""
]
] |
new_dataset
| 0.993444 |
2204.12917
|
Gloria Mittmann
|
Gloria Mittmann, Adam Barnard, Ina Krammer, Diogo Martins, Jo\~ao Dias
|
LINA -- A social augmented reality game around mental health, supporting
real-world connection and sense of belonging for early adolescents
|
21 pages, 10 figures, 2 tables
|
Proceedings of the ACM on Human-Computer Interaction 6(CHI PLAY)
(2022) 1-21
|
10.1145/3549505
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Early adolescence is a time of major social change; a strong sense of
belonging and peer connectedness is an essential protective factor in mental
health during that period. In this paper we introduce LINA, an augmented
reality (AR) smartphone-based serious game played in school by an entire class
(age 10+) together with their teacher, which aims to facilitate and improve
peer interaction, sense of belonging and class climate, while creating a safe
space to reflect on mental health and external stressors related to family
circumstance. LINA was developed through an interdisciplinary collaboration
involving a playwright, software developers, psychologists, and artists, via an
iterative co-development process with young people. A prototype has been
evaluated quantitatively for usability and qualitatively for efficacy in a
study with 91 early adolescents (mean age = 11.41). Results from the Game User
Experience Satisfaction Scale (GUESS-18) and data from qualitative focus groups
showed high acceptability and preliminary efficacy of the game. Using AR, a
shared immersive narrative and collaborative gameplay in a shared physical
space offers an opportunity to harness adolescent affinity for digital
technology towards improving real-world social connection and sense of
belonging.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 13:24:41 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Mittmann",
"Gloria",
""
],
[
"Barnard",
"Adam",
""
],
[
"Krammer",
"Ina",
""
],
[
"Martins",
"Diogo",
""
],
[
"Dias",
"João",
""
]
] |
new_dataset
| 0.999484 |
2204.13746
|
Soham Poddar
|
Soham Poddar, Azlaan Mustafa Samad, Rajdeep Mukherjee, Niloy Ganguly,
Saptarshi Ghosh
|
CAVES: A Dataset to facilitate Explainable Classification and
Summarization of Concerns towards COVID Vaccines
|
Accepted at SIGIR'22 (Resource Track)
| null | null | null |
cs.CL cs.CY cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Convincing people to get vaccinated against COVID-19 is a key societal
challenge in the present times. As a first step towards this goal, many prior
works have relied on social media analysis to understand the specific concerns
that people have towards these vaccines, such as potential side-effects,
ineffectiveness, political factors, and so on. Though there are datasets that
broadly classify social media posts into Anti-vax and Pro-Vax labels, there is
no dataset (to our knowledge) that labels social media posts according to the
specific anti-vaccine concerns mentioned in the posts. In this paper, we have
curated CAVES, the first large-scale dataset containing about 10k COVID-19
anti-vaccine tweets labelled into various specific anti-vaccine concerns in a
multi-label setting. This is also the first multi-label classification dataset
that provides explanations for each of the labels. Additionally, the dataset
also provides class-wise summaries of all the tweets. We also perform
preliminary experiments on the dataset and show that this is a very challenging
dataset for multi-label explainable classification and tweet summarization, as
is evident by the moderate scores achieved by some state-of-the-art models. Our
dataset and codes are available at: https://github.com/sohampoddar26/caves-data
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 19:26:54 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 14:16:46 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Poddar",
"Soham",
""
],
[
"Samad",
"Azlaan Mustafa",
""
],
[
"Mukherjee",
"Rajdeep",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"Ghosh",
"Saptarshi",
""
]
] |
new_dataset
| 0.999266 |
2205.06175
|
Konrad Zolna
|
Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo,
Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay,
Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards,
Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar,
Nando de Freitas
|
A Generalist Agent
|
Published at TMLR, 42 pages
|
Transactions on Machine Learning Research, 11/2022,
https://openreview.net/forum?id=1ikK0kHjvj
| null | null |
cs.AI cs.CL cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by progress in large-scale language modeling, we apply a similar
approach towards building a single generalist agent beyond the realm of text
outputs. The agent, which we refer to as Gato, works as a multi-modal,
multi-task, multi-embodiment generalist policy. The same network with the same
weights can play Atari, caption images, chat, stack blocks with a real robot
arm and much more, deciding based on its context whether to output text, joint
torques, button presses, or other tokens. In this report we describe the model
and the data, and document the current capabilities of Gato.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 16:03:26 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 13:32:28 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Nov 2022 10:04:29 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Reed",
"Scott",
""
],
[
"Zolna",
"Konrad",
""
],
[
"Parisotto",
"Emilio",
""
],
[
"Colmenarejo",
"Sergio Gomez",
""
],
[
"Novikov",
"Alexander",
""
],
[
"Barth-Maron",
"Gabriel",
""
],
[
"Gimenez",
"Mai",
""
],
[
"Sulsky",
"Yury",
""
],
[
"Kay",
"Jackie",
""
],
[
"Springenberg",
"Jost Tobias",
""
],
[
"Eccles",
"Tom",
""
],
[
"Bruce",
"Jake",
""
],
[
"Razavi",
"Ali",
""
],
[
"Edwards",
"Ashley",
""
],
[
"Heess",
"Nicolas",
""
],
[
"Chen",
"Yutian",
""
],
[
"Hadsell",
"Raia",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Bordbar",
"Mahyar",
""
],
[
"de Freitas",
"Nando",
""
]
] |
new_dataset
| 0.962023 |
2206.04617
|
Joshua Springer
|
Joshua Springer and Marcel Kyas
|
Autonomous Drone Landing with Fiducial Markers and a Gimbal-Mounted
Camera for Active Tracking
|
Update for IRC
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Precision landing is a remaining challenge in autonomous drone flight.
Fiducial markers provide a computationally cheap way for a drone to locate a
landing pad and autonomously execute precision landings. However, most work in
this field depends on either rigidly-mounted or downward-facing cameras which
restrict the drone's ability to detect the marker. We present a method of
autonomous landing that uses a gimbal-mounted camera to quickly search for the
landing pad by simply spinning in place while tilting the camera up and down,
and to continually aim the camera at the landing pad during approach and
landing. This method demonstrates successful search, tracking, and landing with
4 of 5 tested fiducial systems on a physical drone with no human intervention.
Per fiducial system, we present the distributions of the distances from the
drone to the center of the landing pad after each successful landing. We also
show representative examples of flight trajectories, marker tracking
performance, and control outputs for each channel during the landing. Finally,
we discuss qualitative strengths and weaknesses underlying each system.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 17:09:16 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2022 12:04:11 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Nov 2022 16:04:45 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Springer",
"Joshua",
""
],
[
"Kyas",
"Marcel",
""
]
] |
new_dataset
| 0.999463 |
2207.00246
|
Zihan Lin
|
Zihan Lin, Jincheng Yu, Lipu Zhou, Xudong Zhang, Jian Wang, Yu Wang
|
Point Cloud Change Detection With Stereo V-SLAM:Dataset, Metrics and
Baseline
| null |
IEEE Robotics and Automation Letters, vol. 7, no. 4, pp.
12443-12450, Oct. 2022
|
10.1109/LRA.2022.3219018
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Localization and navigation are basic robotic tasks requiring an accurate and
up-to-date map to finish these tasks, with crowdsourced data to detect map
changes posing an appealing solution. Collecting and processing crowdsourced
data requires low-cost sensors and algorithms, but existing methods rely on
expensive sensors or computationally expensive algorithms. Additionally, there
is no existing dataset to evaluate point cloud change detection. Thus, this
paper proposes a novel framework using low-cost sensors like stereo cameras and
IMU to detect changes in a point cloud map. Moreover, we create a dataset and
the corresponding metrics to evaluate point cloud change detection with the
help of the high-fidelity simulator Unreal Engine 4. Experiments show that our
visual-based framework can effectively detect the changes in our dataset.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2022 07:31:40 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 04:02:38 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Lin",
"Zihan",
""
],
[
"Yu",
"Jincheng",
""
],
[
"Zhou",
"Lipu",
""
],
[
"Zhang",
"Xudong",
""
],
[
"Wang",
"Jian",
""
],
[
"Wang",
"Yu",
""
]
] |
new_dataset
| 0.995405 |
2207.02355
|
Sebastian Wolff
|
Roland Meyer, Thomas Wies, Sebastian Wolff
|
A Concurrent Program Logic with a Future and History
| null |
Proc. ACM Program. Lang. 6, OOPSLA2, Article 174 (October 2022),
30 pages
|
10.1145/3563337
| null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Verifying fine-grained optimistic concurrent programs remains an open
problem. Modern program logics provide abstraction mechanisms and compositional
reasoning principles to deal with the inherent complexity. However, their use
is mostly confined to pencil-and-paper or mechanized proofs. We devise a new
separation logic geared towards this missing automation. While local reasoning
is known to be crucial for automation, we are the first to show how to retain
this locality for (i) reasoning about inductive properties without the need for
ghost code, and (ii) reasoning about computation histories in hindsight. We
implemented our new logic in a tool and used it to automatically verify
challenging concurrent search structures that require inductive properties and
hindsight reasoning, such as the Harris set.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 23:17:35 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 16:56:24 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Meyer",
"Roland",
""
],
[
"Wies",
"Thomas",
""
],
[
"Wolff",
"Sebastian",
""
]
] |
new_dataset
| 0.997227 |
2207.11171
|
Musard Balliu
|
Mikhail Shcherbakov, Musard Balliu, Cristian-Alexandru Staicu
|
Silent Spring: Prototype Pollution Leads to Remote Code Execution in
Node.js
|
USENIX Security'23
| null | null | null |
cs.CR cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Prototype pollution is a dangerous vulnerability affecting prototype-based
languages like JavaScript and the Node.js platform. It refers to the ability of
an attacker to inject properties into an object's root prototype at runtime and
subsequently trigger the execution of legitimate code gadgets that access these
properties on the object's prototype, leading to attacks such as Denial of
Service (DoS), privilege escalation, and Remote Code Execution (RCE). While
there is anecdotal evidence that prototype pollution leads to RCE, current
research does not tackle the challenge of gadget detection, thus only showing
feasibility of DoS attacks, mainly against Node.js libraries.
In this paper, we set out to study the problem in a holistic way, from the
detection of prototype pollution to detection of gadgets, with the ambitious
goal of finding end-to-end exploits beyond DoS, in full-fledged Node.js
applications. We build the first multi-staged framework that uses multi-label
static taint analysis to identify prototype pollution in Node.js libraries and
applications, as well as a hybrid approach to detect universal gadgets,
notably, by analyzing the Node.js source code. We implement our framework on
top of GitHub's static analysis framework CodeQL to find 11 universal gadgets
in core Node.js APIs, leading to code execution. Furthermore, we use our
methodology in a study of 15 popular Node.js applications to identify prototype
pollutions and gadgets. We manually exploit eight RCE vulnerabilities in three
high-profile applications such as NPM CLI, Parse Server, and Rocket.Chat. Our
results provide alarming evidence that prototype pollution in combination with
powerful universal gadgets leads to RCE in Node.js.
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2022 16:16:28 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 21:04:36 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Shcherbakov",
"Mikhail",
""
],
[
"Balliu",
"Musard",
""
],
[
"Staicu",
"Cristian-Alexandru",
""
]
] |
new_dataset
| 0.999645 |
2207.14668
|
Pasquale Claudio Africa
|
Pasquale Claudio Africa
|
lifex: a flexible, high performance library for the numerical solution
of complex finite element problems
| null |
SoftwareX 20 (2022), p. 101252. issn: 2352-7110
|
10.1016/j.softx.2022.101252
| null |
cs.MS cs.DC cs.NA math.NA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Numerical simulations are ubiquitous in mathematics and computational
science. Several industrial and clinical applications entail modeling complex
multiphysics systems that evolve over a variety of spatial and temporal scales.
This study introduces the design and capabilities of lifex, an open source
C++ library for high performance finite element simulations of multiphysics,
multiscale, and multidomain problems. lifex meets the emerging need for
versatile, efficient computational tools that are easily accessed by users and
developers. We showcase its flexibility and effectiveness on a number of
illustrative examples and advanced applications, and demonstrate its
parallel performance up to thousands of cores.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 13:24:30 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Oct 2022 14:36:27 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Nov 2022 13:25:20 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Africa",
"Pasquale Claudio",
""
]
] |
new_dataset
| 0.996274 |
2208.01263
|
Basireddy Swaroopa Reddy
|
B Swaroopa Reddy
|
A ZK-SNARK based Proof of Assets Protocol for Bitcoin Exchanges
|
9 pages, 2 figures, 6 tables
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a protocol for Proof of Assets of a bitcoin exchange
using the Zero-Knowledge Succinct Non-Interactive Argument of Knowledge
(ZK-SNARK) without revealing either the bitcoin addresses of the exchange or
balances associated with those addresses. The proof of assets is a mechanism to
prove the total value of bitcoins the exchange has authority to spend using its
private keys. We construct a privacy-preserving ZK-SNARK proof system to prove
the knowledge of the private keys corresponding to the bitcoin assets of an
exchange. The ZK-SNARK tool-chain helps to convert an NP-Statement for proving
the knowledge of the private keys (known to the exchange) into a circuit
satisfiability problem. In this protocol, the exchange creates a Pedersen
commitment to the value of bitcoins associated with each address without
revealing the balance. The simulation results show that the proof generation
time, size, and verification time are efficient in practice.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 06:20:44 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 10:40:15 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Reddy",
"B Swaroopa",
""
]
] |
new_dataset
| 0.994501 |
2208.14615
|
Jonathan Shafer
|
Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya
Tolstikhin
|
Fine-Grained Distribution-Dependent Learning Curves
| null | null | null | null |
cs.LG cs.CC stat.ML
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Learning curves plot the expected error of a learning algorithm as a function
of the number of labeled samples it receives from a target distribution. They
are widely used as a measure of an algorithm's performance, but classic PAC
learning theory cannot explain their behavior.
As observed by Antos and Lugosi (1996, 1998), the classic `No Free Lunch'
lower bounds only trace the upper envelope above all learning curves of
specific target distributions. For a concept class with VC dimension $d$ the
classic bound decays like $d/n$, yet it is possible that the learning curve for
\emph{every} specific distribution decays exponentially. In this case, for each
$n$ there exists a different `hard' distribution requiring $d/n$ samples. Antos
and Lugosi asked which concept classes admit a `strong minimax lower bound' --
a lower bound of $d'/n$ that holds for a fixed distribution for infinitely many
$n$.
We solve this problem in a principled manner, by introducing a combinatorial
dimension called VCL that characterizes the best $d'$ for which $d'/n$ is a
strong minimax lower bound. Our characterization strengthens the lower bounds
of Bousquet, Hanneke, Moran, van Handel, and Yehudayoff (2021), and it refines
their theory of learning curves, by showing that for classes with finite VCL
the learning rate can be decomposed into a linear component that depends only
on the hypothesis class and an exponential component that depends also on the
target distribution. As a corollary, we recover the lower bound of Antos and
Lugosi (1996, 1998) for half-spaces in $\mathbb{R}^d$.
Finally, to provide another viewpoint on our work and how it compares to
traditional PAC learning bounds, we also present an alternative formulation of
our results in a language that is closer to the PAC setting.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 03:29:21 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 21:35:25 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Bousquet",
"Olivier",
""
],
[
"Hanneke",
"Steve",
""
],
[
"Moran",
"Shay",
""
],
[
"Shafer",
"Jonathan",
""
],
[
"Tolstikhin",
"Ilya",
""
]
] |
new_dataset
| 0.991796 |
2209.05451
|
Mohit Shridhar
|
Mohit Shridhar, Lucas Manuelli, Dieter Fox
|
Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation
|
CoRL 2022. Project Website: https://peract.github.io/
| null | null | null |
cs.RO cs.AI cs.CL cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers have revolutionized vision and natural language processing with
their ability to scale with large datasets. But in robotic manipulation, data
is both limited and expensive. Can manipulation still benefit from Transformers
with the right problem formulation? We investigate this question with PerAct, a
language-conditioned behavior-cloning agent for multi-task 6-DoF manipulation.
PerAct encodes language goals and RGB-D voxel observations with a Perceiver
Transformer, and outputs discretized actions by ``detecting the next best voxel
action''. Unlike frameworks that operate on 2D images, the voxelized 3D
observation and action space provides a strong structural prior for efficiently
learning 6-DoF actions. With this formulation, we train a single multi-task
Transformer for 18 RLBench tasks (with 249 variations) and 7 real-world tasks
(with 18 variations) from just a few demonstrations per task. Our results show
that PerAct significantly outperforms unstructured image-to-action agents and
3D ConvNet baselines for a wide range of tabletop tasks.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 17:51:05 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 08:14:32 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Shridhar",
"Mohit",
""
],
[
"Manuelli",
"Lucas",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.994155 |
2210.00438
|
Thanh Pham
|
Thanh V. Pham, Steve Hranilovic, Susumu Ishihara
|
Design of Artificial Noise for Physical Layer Security in Visible Light
Systems with Clipping
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Though visible light communication (VLC) systems are contained to a given
room, ensuring their security amongst users in a room is essential. In this
paper, the design of artificial noise (AN) to enhance physical layer security
in VLC systems is studied in the context of input signals with no explicit
amplitude constraint (such as multicarrier systems). In such systems, clipping
is needed to constrain the input signals within the limited linear ranges of
the LEDs. However, this clipping process gives rise to non-linear clipping
distortion, which must be incorporated into the AN design. To facilitate the
solution of this problem, a sub-optimal design approach is presented using the
Charnes-Cooper transformation and the convex-concave procedure (CCP). Numerical
results show that the clipping distortion significantly reduces the secrecy
level, and using AN is advantageous over the no-AN scheme in improving the
secrecy performance.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 06:33:42 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 05:06:43 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Pham",
"Thanh V.",
""
],
[
"Hranilovic",
"Steve",
""
],
[
"Ishihara",
"Susumu",
""
]
] |
new_dataset
| 0.955754 |
2210.11923
|
Levente Csikor PhD
|
Levente Csikor, Hoon Wei Lim, Jun Wen Wong, Soundarya Ramesh, Rohini
Poolat Parameswarath, and Mun Choon Chan
|
RollBack: A New Time-Agnostic Replay Attack Against the Automotive
Remote Keyless Entry Systems
|
24 pages, 5 figures. Under submission to a journal
|
BlackHat USA 2022
| null | null |
cs.CR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today's RKE systems implement disposable rolling codes, making every key fob
button press unique, effectively preventing simple replay attacks. However, a
prior attack called RollJam was proven to break all rolling code-based systems
in general. By a careful sequence of signal jamming, capturing, and replaying,
an attacker can become aware of the subsequent valid unlock signal that has not
been used yet. RollJam, however, requires continuous deployment indefinitely
until it is exploited. Otherwise, the captured signals become invalid if the
key fob is used again without RollJam in place. We introduce RollBack, a new
replay-and-resynchronize attack against most of today's RKE systems. In
particular, we show that even though the one-time code becomes invalid in
rolling code systems, replaying a few previously captured signals consecutively
can trigger a rollback-like mechanism in the RKE system. Put differently, the
rolling codes become resynchronized back to a previous code used in the past
from where all subsequent yet already used signals work again. Moreover, the
victim can still use the key fob without noticing any difference before and
after the attack. Unlike RollJam, RollBack does not necessitate jamming at all.
Furthermore, it requires signal capturing only once and can be exploited at any
time in the future as many times as desired. This time-agnostic property is
particularly attractive to attackers, especially in car-sharing/renting
scenarios where accessing the key fob is straightforward. However, while
RollJam defeats virtually any rolling code-based system, vehicles might have
additional anti-theft measures against malfunctioning key fobs, hence against
RollBack. Our ongoing analysis (covering Asian vehicle manufacturers for the
time being) against different vehicle makes and models has revealed that ~70%
of them are vulnerable to RollBack.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 04:12:58 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Csikor",
"Levente",
""
],
[
"Lim",
"Hoon Wei",
""
],
[
"Wong",
"Jun Wen",
""
],
[
"Ramesh",
"Soundarya",
""
],
[
"Parameswarath",
"Rohini Poolat",
""
],
[
"Chan",
"Mun Choon",
""
]
] |
new_dataset
| 0.992718 |
2211.04944
|
Xuda Ding
|
Xuda Ding, Han Wang, Yi Ren, Yu Zheng, Cailian Chen, Jianping He
|
Safety-Critical Optimal Control for Robotic Manipulators in A Cluttered
Environment
|
Submitted to IEEE RA-L
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Designing safety-critical control for robotic manipulators is challenging,
especially in a cluttered environment. First, the actual trajectory of a
manipulator might deviate from the planned one due to the complex collision
environments and non-trivial dynamics, leading to collision; Second, the
feasible space for the manipulator is hard to obtain since the explicit
distance functions between collision meshes are unknown. By analyzing the
relationship between the safe set and the controlled invariant set, this paper
proposes a data-driven control barrier function (CBF) construction method,
which extracts CBF from distance samples. Specifically, the CBF guarantees the
controlled invariant property for considering the system dynamics. The
data-driven method samples the distance function and determines the safe set.
Then, the CBF is synthesized based on the safe set by a scenario-based sum of
square (SOS) program. Unlike most existing linearization-based approaches, our
method reserves the volume of the feasible space for planning without
approximation, which helps find a solution in a cluttered environment. The
control law is obtained by solving a CBF-based quadratic program in real time,
which works as a safe filter for the desired planning-based controller.
Moreover, our method guarantees safety with the proven probabilistic result.
Our method is validated on a 7-DOF manipulator in both real and virtual
cluttered environments. The experiments show that the manipulator is able to
execute tasks where the clearance between obstacles is in millimeters.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 15:12:43 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 03:43:49 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Ding",
"Xuda",
""
],
[
"Wang",
"Han",
""
],
[
"Ren",
"Yi",
""
],
[
"Zheng",
"Yu",
""
],
[
"Chen",
"Cailian",
""
],
[
"He",
"Jianping",
""
]
] |
new_dataset
| 0.993771 |
2211.05809
|
Caner Hazirbas
|
Caner Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali,
V\'itor Albiero, Stefan Hermanek, Jacqueline Pan, Emily McReynolds, Miranda
Bogen, Pascale Fung, Cristian Canton Ferrer
|
Casual Conversations v2: Designing a large consent-driven dataset to
measure algorithmic bias and robustness
| null | null | null | null |
cs.CV cs.AI cs.CL cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Developing robust and fair AI systems requires datasets with a comprehensive
set of labels that can help ensure the validity and legitimacy of relevant
measurements. Recent efforts, therefore, focus on collecting person-related
datasets that have carefully selected labels, including sensitive
characteristics, and consent forms in place to use those attributes for model
testing and development. Responsible data collection involves several stages,
including but not limited to determining use-case scenarios, selecting
categories (annotations) such that the data are fit for the purpose of
measuring algorithmic bias for subgroups and most importantly ensure that the
selected categories/subcategories are robust to regional diversities and
inclusive of as many subgroups as possible.
Meta, in a continuation of our efforts to measure AI algorithmic bias and
robustness
(https://ai.facebook.com/blog/shedding-light-on-fairness-in-ai-with-a-new-data-set),
is working on collecting a large consent-driven dataset with a comprehensive
list of categories. This paper describes our proposed design of such categories
and subcategories for Casual Conversations v2.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 19:06:21 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Hazirbas",
"Caner",
""
],
[
"Bang",
"Yejin",
""
],
[
"Yu",
"Tiezheng",
""
],
[
"Assar",
"Parisa",
""
],
[
"Porgali",
"Bilal",
""
],
[
"Albiero",
"Vítor",
""
],
[
"Hermanek",
"Stefan",
""
],
[
"Pan",
"Jacqueline",
""
],
[
"McReynolds",
"Emily",
""
],
[
"Bogen",
"Miranda",
""
],
[
"Fung",
"Pascale",
""
],
[
"Ferrer",
"Cristian Canton",
""
]
] |
new_dataset
| 0.994074 |
2211.05824
|
Hassan Khan
|
Jason Ceci, Jonah Stegman, Hassan Khan
|
No Privacy in the Electronics Repair Industry
|
This paper has been accepted to appear at the 44th IEEE Symposium on
Security and Privacy (IEEE S&P 2023)
| null | null | null |
cs.CR cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Electronics repair and service providers offer a range of services to
computing device owners across North America -- from software installation to
hardware repair. Device owners obtain these services and leave their device
along with their access credentials at the mercy of technicians, which leads to
privacy concerns for owners' personal data. We conduct a comprehensive
four-part study to measure the state of privacy in the electronics repair
industry. First, through a field study with 18 service providers, we uncover
that most service providers do not have any privacy policy or controls to
safeguard device owners' personal data from snooping by technicians. Second, we
drop rigged devices for repair at 16 service providers and collect data on
widespread privacy violations by technicians, including snooping on personal
data, copying data off the device, and removing tracks of snooping activities.
Third, we conduct an online survey (n=112) to collect data on customers'
experiences when getting devices repaired. Fourth, we invite a subset of survey
respondents (n=30) for semi-structured interviews to establish a deeper
understanding of their experiences and identify potential solutions to curtail
privacy violations by technicians. We apply our findings to discuss possible
controls and actions different stakeholders and regulatory agencies should take
to improve the state of privacy in the repair industry.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 19:27:21 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Ceci",
"Jason",
""
],
[
"Stegman",
"Jonah",
""
],
[
"Khan",
"Hassan",
""
]
] |
new_dataset
| 0.999298 |
2211.05902
|
Runze Cheng
|
Runze Cheng, Yao Sun, Lina Mohjazi, Ying-Chang Liang and Muhammad Ali
Imran
|
Blockchain-Assisted Intelligent Symbiotic Radio in Space-Air-Ground
Integrated Networks
|
8 pages, 6 figures, submitted to IEEE Network
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a space-air-ground integrated network (SAGIN), managing resources for the
growing number of highly-dynamic and heterogeneous radios is a challenging
task. Symbiotic communication (SC) is a novel paradigm, which leverages the
analogy of the natural ecosystem in biology to create a radio ecosystem in
wireless networks that achieves cooperative service exchange and resource
sharing, i.e., service/resource trading, among numerous radios. As a result,
the potential of symbiotic communication can be exploited to enhance resource
management in SAGIN. Despite the fact that different radio resource bottlenecks
can complement each other via symbiotic relationships, unreliable information
sharing among heterogeneous radios and multi-dimensional resources managing
under diverse service requests impose critical challenges on trusted trading
and intelligent decision-making. In this article, we propose a secure and smart
symbiotic SAGIN (S^4) framework by using blockchain for ensuring trusted
trading among heterogeneous radios and machine learning (ML) for guiding
complex service/resource trading. A case study demonstrates that our proposed
S^4 framework provides better service with rational resource management when
compared with existing schemes. Finally, we discuss several potential research
directions for future symbiotic SAGIN.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 21:59:18 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Cheng",
"Runze",
""
],
[
"Sun",
"Yao",
""
],
[
"Mohjazi",
"Lina",
""
],
[
"Liang",
"Ying-Chang",
""
],
[
"Imran",
"Muhammad Ali",
""
]
] |
new_dataset
| 0.996921 |
2211.05967
|
Motoi Omachi
|
Motoi Omachi, Brian Yan, Siddharth Dalmia, Yuya Fujita, Shinji
Watanabe
|
Align, Write, Re-order: Explainable End-to-End Speech Translation via
Operation Sequence Generation
| null | null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The black-box nature of end-to-end speech translation (E2E ST) systems makes
it difficult to understand how source language inputs are being mapped to the
target language. To solve this problem, we would like to simultaneously
generate automatic speech recognition (ASR) and ST predictions such that each
source language word is explicitly mapped to a target language word. A major
challenge arises from the fact that translation is a non-monotonic sequence
transduction task due to word ordering differences between languages -- this
clashes with the monotonic nature of ASR. Therefore, we propose to generate ST
tokens out-of-order while remembering how to re-order them later. We achieve
this by predicting a sequence of tuples consisting of a source word, the
corresponding target words, and post-editing operations dictating the correct
insertion points for the target word. We examine two variants of such operation
sequences which enable generation of monotonic transcriptions and non-monotonic
translations from the same speech input simultaneously. We apply our approach
to offline and real-time streaming models, demonstrating that we can provide
explainable translations without sacrificing quality or latency. In fact, the
delayed re-ordering ability of our approach improves performance during
streaming. As an added benefit, our method performs ASR and ST simultaneously,
making it faster than using two separate systems to perform these tasks.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 02:29:28 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Omachi",
"Motoi",
""
],
[
"Yan",
"Brian",
""
],
[
"Dalmia",
"Siddharth",
""
],
[
"Fujita",
"Yuya",
""
],
[
"Watanabe",
"Shinji",
""
]
] |
new_dataset
| 0.967747 |
2211.05970
|
Jiashu Lou
|
Jiashu Lou, Jie zou, Baohua Wang
|
Palm Vein Recognition via Multi-task Loss Function and Attention Layer
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the improvement of computing power and algorithm accuracy of personal
devices, biological features are increasingly widely used in personal
identification, and palm vein recognition has rich extractable features and has
been widely studied in recent years. However, traditional recognition methods
are poorly robust and susceptible to environmental influences such as
reflections and noise. In this paper, a convolutional neural network based on
VGG-16 transfer learning fused attention mechanism is used as the feature
extraction network on the infrared palm vein dataset. The palm vein
classification task is first trained using palmprint classification methods,
followed by matching using a similarity function, in which we propose the
multi-task loss function to improve the accuracy of the matching task. In order
to verify the robustness of the model, some experiments were carried out on
datasets from different sources. Then, we used K-means clustering to determine
the adaptive matching threshold and finally achieved an accuracy rate of 98.89%
on the prediction set. At the same time, matching is highly efficient, taking
an average of 0.13 seconds per palm vein pair, which means our method can be
adopted in practice.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 02:32:49 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Lou",
"Jiashu",
""
],
[
"zou",
"Jie",
""
],
[
"Wang",
"Baohua",
""
]
] |
new_dataset
| 0.999707 |
2211.05987
|
Yang Li
|
Yang Li, Canran Xu, Tao Shen, Jing Jiang and Guodong Long
|
CCPrompt: Counterfactual Contrastive Prompt-Tuning for Many-Class
Classification
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the success of the prompt-tuning paradigm in Natural Language Processing
(NLP), various prompt templates have been proposed to further stimulate
specific knowledge for serving downstream tasks, e.g., machine translation,
text generation, relation extraction, and so on. Existing prompt templates are
mainly shared among all training samples with the information of task
description. However, training samples are quite diverse. The sharing task
description is unable to stimulate the unique task-related information in each
training sample, especially for tasks with the finite-label space. To exploit
the unique task-related information, we imitate the human decision process
which aims to find the contrastive attributes between the objective factual and
their potential counterfactuals. Thus, we propose the \textbf{C}ounterfactual
\textbf{C}ontrastive \textbf{Prompt}-Tuning (CCPrompt) approach for many-class
classification, e.g., relation classification, topic classification, and entity
typing. Compared with simple classification tasks, these tasks have more
complex finite-label spaces and place more rigorous demands on prompts. First of all, we
prune the finite label space to construct fact-counterfactual pairs. Then, we
exploit the contrastive attributes by projecting training instances onto every
fact-counterfactual pair. We further set up global prototypes corresponding
with all contrastive attributes for selecting valid contrastive attributes as
additional tokens in the prompt template. Finally, a simple Siamese
representation learning is employed to enhance the robustness of the model. We
conduct experiments on relation classification, topic classification, and
entity typing tasks in both fully supervised setting and few-shot setting. The
results indicate that our model outperforms former baselines.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 03:45:59 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Li",
"Yang",
""
],
[
"Xu",
"Canran",
""
],
[
"Shen",
"Tao",
""
],
[
"Jiang",
"Jing",
""
],
[
"Long",
"Guodong",
""
]
] |
new_dataset
| 0.95962 |
2211.06053
|
Ravi Shekhar
|
Ravi Shekhar, Mladen Karan, Matthew Purver
|
CoRAL: a Context-aware Croatian Abusive Language Dataset
|
Findings of the ACL: AACL-IJCNLP, 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In light of unprecedented increases in the popularity of the internet and
social media, comment moderation has never been a more relevant task.
Semi-automated comment moderation systems greatly aid human moderators by
either automatically classifying the examples or allowing the moderators to
prioritize which comments to consider first. However, the concept of
inappropriate content is often subjective, and such content can be conveyed in
many subtle and indirect ways. In this work, we propose CoRAL -- a language and
culturally aware Croatian Abusive dataset covering phenomena of implicitness
and reliance on local and global context. We show experimentally that current
models degrade when comments are not explicit and further degrade when language
skill and context knowledge are required to interpret the comment.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 08:10:13 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Shekhar",
"Ravi",
""
],
[
"Karan",
"Mladen",
""
],
[
"Purver",
"Matthew",
""
]
] |
new_dataset
| 0.998675 |
2211.06056
|
Wei Song
|
Wei Song and Rui Hou and Peng Liu and Xiaoxin Li and Peinan Li and
Lutan Zhao and Xiaofei Fu and Yifei Sun and Dan Meng
|
Remapped Cache Layout: Thwarting Cache-Based Side-Channel Attacks with a
Hardware Defense
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As cache-based side-channel attacks become serious security problems, various
defenses have been proposed and deployed in both software and hardware.
Consequently, cache-based side-channel attacks on processes co-residing on the
same core are becoming extremely difficult. Most of recent attacks then shift
their focus to the last-level cache (LLC). Although cache partitioning is
currently the most promising defense against the attacks abusing LLC, it is
ineffective in thwarting the side-channel attacks that automatically create
eviction sets or bypass the user address space layout randomization. In fact,
these attacks are largely undefended in current computer systems.
We propose Remapped Cache Layout (\textsf{RCL}) -- a pure hardware defense
against a broad range of conflict-based side-channel attacks. \textsf{RCL}
obfuscates the mapping from address to cache sets; therefore, an attacker
cannot accurately infer the location of her data in caches or using a cache set
to infer her victim's data. To the best of our knowledge, it is the first
defense to thwart the aforementioned largely undefended side-channel attacks.
\textsf{RCL} has been implemented in a superscalar processor and detailed
evaluation results show that \textsf{RCL} incurs only small costs in area,
frequency and execution time.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 08:17:35 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Song",
"Wei",
""
],
[
"Hou",
"Rui",
""
],
[
"Liu",
"Peng",
""
],
[
"Li",
"Xiaoxin",
""
],
[
"Li",
"Peinan",
""
],
[
"Zhao",
"Lutan",
""
],
[
"Fu",
"Xiaofei",
""
],
[
"Sun",
"Yifei",
""
],
[
"Meng",
"Dan",
""
]
] |
new_dataset
| 0.997166 |
2211.06073
|
Jiangyan Yi
|
Jiangyan Yi and Chenglong Wang and Jianhua Tao and Zhengkun Tian and
Cunhang Fan and Haoxin Ma and Ruibo Fu
|
SceneFake: An Initial Dataset and Benchmarks for Scene Fake Audio
Detection
| null | null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous databases have been designed to further the development of fake
audio detection. However, fake utterances are mostly generated by altering
timbre, prosody, linguistic content or channel noise of original audios. They
ignore a fake situation, in which the attacker manipulates an acoustic scene of
the original audio with a forged one. It will pose a major threat to our
society if people misuse the manipulated audio for malicious purposes.
Therefore, this motivates us to fill in the gap. This paper designs such a
dataset for scene fake audio detection (SceneFake). A manipulated audio in the
SceneFake dataset involves only tampering the acoustic scene of an utterance by
using speech enhancement technologies. We can not only detect fake utterances
on a seen test set but also evaluate the generalization of fake detection
models to unseen manipulation attacks. Some benchmark results are described on
the SceneFake dataset. Besides, an analysis of fake attacks with different
speech enhancement technologies and signal-to-noise ratios are presented on the
dataset. The results show that scene-manipulated utterances cannot be detected
reliably by the existing baseline models of ASVspoof 2019. Furthermore, the
detection of unseen scene manipulation audio is still challenging.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 09:05:50 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Yi",
"Jiangyan",
""
],
[
"Wang",
"Chenglong",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Tian",
"Zhengkun",
""
],
[
"Fan",
"Cunhang",
""
],
[
"Ma",
"Haoxin",
""
],
[
"Fu",
"Ruibo",
""
]
] |
new_dataset
| 0.999833 |