Dataset schema (column: type, observed range; ⌀ = contains nulls):

- id: string, length 9-10
- submitter: string, length 2-52, ⌀
- authors: string, length 4-6.51k
- title: string, length 4-246
- comments: string, length 1-523, ⌀
- journal-ref: string, length 4-345, ⌀
- doi: string, length 11-120, ⌀
- report-no: string, length 2-243, ⌀
- categories: string, length 5-98
- license: string, 9 classes
- abstract: string, length 33-3.33k
- versions: list
- update_date: timestamp[s]
- authors_parsed: list
- prediction: string, 1 class
- probability: float64, 0.95-1

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2212.05914
|
Jiale Cheng
|
Jiale Cheng, Nan Liu, and Wei Kang
|
On the Asymptotic Capacity of Information Theoretical Privacy-preserving
Epidemiological Data Collection
| null | null |
10.3390/e25040625
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We formulate a new secure distributed computation problem, where a simulation
center can require any linear combination of $K$ users' data through a
caching layer consisting of $N$ servers. The users, servers, and data
collector do not trust each other. For users, any data must be
protected from up to $E$ servers; for servers, no more information than the
desired linear combination may be leaked to the data collector; and for the
data collector, any single server knows nothing about the coefficients of the
linear combination. Our goal is to find the optimal download cost, defined as
the ratio of the size of the messages uploaded to the simulation center by the
servers to the size of the desired linear combination. We propose a scheme with
the optimal download cost when $E < N-1$. We also prove that when $E \geq N-1$,
the scheme is not feasible.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 14:24:26 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Cheng",
"Jiale",
""
],
[
"Liu",
"Nan",
""
],
[
"Kang",
"Wei",
""
]
] |
new_dataset
| 0.993112 |
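As a usage illustration for records like the one above, here is a minimal sketch for loading and filtering this dataset, assuming the records are stored as JSON lines; the filename and the 0.99 threshold are illustrative assumptions, not part of the dataset:

```python
# Minimal sketch: stream records with the schema described at the top of this
# dump and keep confident "new_dataset" predictions. The filename is
# hypothetical; adjust it to wherever the JSON-lines export lives.
import json

def load_records(path):
    """Yield one arXiv metadata record per line of a JSON-lines file."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

confident = [
    r for r in load_records("arxiv_predictions.jsonl")  # hypothetical path
    if r["prediction"] == "new_dataset" and r["probability"] >= 0.99
]
for r in confident:
    print(r["id"], r["title"])
```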
2301.07405
|
Zongwei Wu
|
Zongwei Wu, Guillaume Allibert, Fabrice Meriaudeau, Chao Ma, and
C\'edric Demonceaux
|
HiDAnet: RGB-D Salient Object Detection via Hierarchical Depth Awareness
| null | null |
10.1109/TIP.2023.3263111
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
RGB-D saliency detection aims to fuse multi-modal cues to accurately localize
salient regions. Existing works often adopt attention modules for feature
modeling, with few methods explicitly leveraging fine-grained details to merge
with semantic cues. Thus, despite the auxiliary depth information, it is still
challenging for existing models to distinguish objects with similar appearances
but at distinct camera distances. In this paper, from a new perspective, we
propose a novel Hierarchical Depth Awareness network (HiDAnet) for RGB-D
saliency detection. Our motivation comes from the observation that the
multi-granularity properties of geometric priors correlate well with the neural
network hierarchies. To realize multi-modal and multi-level fusion, we first
use a granularity-based attention scheme to strengthen the discriminatory power
of RGB and depth features separately. Then we introduce a unified cross
dual-attention module for multi-modal and multi-level fusion in a
coarse-to-fine manner. The encoded multi-modal features are gradually
aggregated into a shared decoder. Further, we exploit a multi-scale loss to
take full advantage of the hierarchical information. Extensive experiments on
challenging benchmark datasets demonstrate that our HiDAnet performs favorably
over the state-of-the-art methods by large margins.
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 10:00:59 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Wu",
"Zongwei",
""
],
[
"Allibert",
"Guillaume",
""
],
[
"Meriaudeau",
"Fabrice",
""
],
[
"Ma",
"Chao",
""
],
[
"Demonceaux",
"Cédric",
""
]
] |
new_dataset
| 0.99863 |
2302.01452
|
M. Hammad Mazhar
|
M. Hammad Mazhar, Li Li, Endadul Hoque, Omar Chowdhury
|
MAVERICK: An App-independent and Platform-agnostic Approach to Enforce
Policies in IoT Systems at Runtime
|
13 pages, full version with material cut from version accepted at ACM
WiSec 2023
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many solutions have been proposed to curb unexpected behavior of automation
apps installed on programmable IoT platforms by enforcing safety policies at
runtime. However, all prior work addresses a weaker version of the actual
problem due to a simpler, unrealistic threat model. These solutions are not
general enough as they are heavily dependent on the installed apps and catered
to specific IoT platforms. Here, we address a stronger version of the problem
via a realistic threat model, where (i) undesired cyber actions can come from
not only automation platform backends (e.g., SmartThings) but also
closed-source third-party services (e.g., IFTTT), and (ii) physical actions
(e.g., user interactions) on devices can move the IoT system to an undesirable
state. We propose a runtime mechanism, dubbed Maverick, which employs an
app-independent, platform-agnostic mediator to enforce policies against all
undesired cyber actions and applies corrective actions to bring the IoT system
back to a safe state from an unsafe state transition. Maverick is equipped with
a policy language capable of expressing rich temporal invariants and an
automated toolchain that includes a policy synthesizer and a policy analyzer
for user assistance. We implemented Maverick in a prototype and showed its
efficacy in both physical and virtual testbeds, incurring minimal overhead.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 22:39:48 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 16:45:46 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Mazhar",
"M. Hammad",
""
],
[
"Li",
"Li",
""
],
[
"Hoque",
"Endadul",
""
],
[
"Chowdhury",
"Omar",
""
]
] |
new_dataset
| 0.983106 |
2302.02997
|
Yftah Ziser
|
Shun Shao, Yftah Ziser and Shay Cohen
|
Erasure of Unaligned Attributes from Neural Representations
|
Accepted to Transactions of the Association for Computational
Linguistics, 22 pages (pre-MIT Press publication version)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Assignment-Maximization Spectral Attribute removaL (AMSAL)
algorithm, which erases information from neural representations when the
information to be erased is implicit rather than directly aligned with each
input example. Our algorithm works by alternating between two steps. In one, it
finds an assignment of the input representations to the information to be
erased, and in the other, it creates projections of both the input
representations and the information to be erased into a joint latent space. We
test our algorithm on an extensive array of datasets, including a Twitter
dataset with multiple guarded attributes, the BiasBios dataset and the
BiasBench benchmark. The last benchmark includes four datasets with various
types of protected attributes. Our results demonstrate that bias can often be
removed in our setup. We also discuss the limitations of our approach when
there is a strong entanglement between the main task and the information to be
erased.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 18:32:17 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 10:34:01 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Shao",
"Shun",
""
],
[
"Ziser",
"Yftah",
""
],
[
"Cohen",
"Shay",
""
]
] |
new_dataset
| 0.95818 |
2303.16727
|
Limin Wang
|
Limin Wang, Bingkun Huang, Zhiyu Zhao, Zhan Tong, Yinan He, Yi Wang,
Yali Wang, Yu Qiao
|
VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
|
CVPR 2023 camera-ready version
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Scale is the primary factor for building a powerful foundation model that
could well generalize to a variety of downstream tasks. However, it is still
challenging to train video foundation models with billions of parameters. This
paper shows that video masked autoencoder (VideoMAE) is a scalable and general
self-supervised pre-trainer for building video foundation models. We scale the
VideoMAE in both model and data with a core design. Specifically, we present a
dual masking strategy for efficient pre-training, with an encoder operating on
a subset of video tokens and a decoder processing another subset of video
tokens. Although VideoMAE is already very efficient due to the high masking
ratio in the encoder, masking the decoder can still further reduce the overall
computational cost. This enables the efficient pre-training of
billion-parameter models on video.
We also use a progressive training paradigm that involves an initial
pre-training on a diverse multi-sourced unlabeled dataset, followed by a
post-pre-training on a mixed labeled dataset. Finally, we successfully train a
video ViT model with a billion parameters, which achieves a new
state-of-the-art performance on the datasets of Kinetics (90.0% on K400 and
89.9% on K600) and Something-Something (68.7% on V1 and 77.0% on V2). In
addition, we extensively verify the pre-trained video ViT models on a variety
of downstream tasks, demonstrating its effectiveness as a general video
representation learner. The code and models are available at
\url{https://github.com/OpenGVLab/VideoMAEv2}.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 14:28:41 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 11:46:41 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Wang",
"Limin",
""
],
[
"Huang",
"Bingkun",
""
],
[
"Zhao",
"Zhiyu",
""
],
[
"Tong",
"Zhan",
""
],
[
"He",
"Yinan",
""
],
[
"Wang",
"Yi",
""
],
[
"Wang",
"Yali",
""
],
[
"Qiao",
"Yu",
""
]
] |
new_dataset
| 0.96477 |
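The dual-masking design described in the VideoMAE V2 abstract above (an encoder that sees only a small visible subset of tokens while the decoder reconstructs another sampled subset rather than all masked tokens) can be sketched as index selection. This is a hedged sketch; the keep ratios below are illustrative assumptions, not the paper's settings:

```python
# Sketch of dual masking: sample which tokens the encoder sees and which of
# the remaining (masked) tokens the decoder reconstructs. Ratios are
# illustrative, not the values used in VideoMAE V2.
import numpy as np

def dual_masks(num_tokens, encoder_keep=0.1, decoder_keep=0.5, seed=0):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_tokens)
    n_enc = int(num_tokens * encoder_keep)
    visible = perm[:n_enc]            # tokens processed by the encoder
    masked = perm[n_enc:]             # everything the encoder never sees
    n_dec = int(len(masked) * decoder_keep)
    targets = rng.permutation(masked)[:n_dec]  # subset the decoder reconstructs
    return visible, targets

visible, targets = dual_masks(1568)   # 1568 = e.g. 8x14x14 video tokens
print(len(visible), len(targets))
```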
2304.06970
|
Qijie Bai
|
Qijie Bai, Jiawen Guo, Haiwei Zhang, Changli Nie, Lin Zhang, Xiaojie
Yuan
|
H2TNE: Temporal Heterogeneous Information Network Embedding in
Hyperbolic Spaces
| null |
The Semantic Web-ISWC 2022: 21st International Semantic Web
Conference, Virtual Event, October 23-27, 2022, Proceedings (pp. 179-195)
|
10.1007/978-3-031-19433-7_11
| null |
cs.SI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal heterogeneous information network (temporal HIN) embedding, aiming
to represent various types of nodes of different timestamps into low
dimensional spaces while preserving structural and semantic information, is of
vital importance in diverse real-life tasks. Researchers have made great
efforts on temporal HIN embedding in Euclidean spaces and achieved considerable
results. However, there is a fundamental conflict: many real-world networks
exhibit hierarchical properties and power-law distributions, and are not
isometric to Euclidean spaces. Recently, representation learning in hyperbolic
spaces has been shown to be effective for data with hierarchical and power-law
structure. Inspired by this characteristic, we propose a hyperbolic
heterogeneous temporal network embedding (H2TNE) model for temporal HINs.
Specifically, we leverage a temporally and heterogeneously double-constrained
random walk strategy to capture the structural and semantic information, and
then calculate the embedding by exploiting hyperbolic distance in proximity
measurement. Experimental results show that our method has superior performance
on temporal link prediction and node classification compared with SOTA models.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 07:39:52 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 06:12:02 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Bai",
"Qijie",
""
],
[
"Guo",
"Jiawen",
""
],
[
"Zhang",
"Haiwei",
""
],
[
"Nie",
"Changli",
""
],
[
"Zhang",
"Lin",
""
],
[
"Yuan",
"Xiaojie",
""
]
] |
new_dataset
| 0.980458 |
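The H2TNE abstract above relies on hyperbolic distance for proximity measurement but does not specify the model of hyperbolic space; a common choice is the Poincaré-ball distance, sketched below under that assumption (the paper's exact variant may differ):

```python
# Hedged sketch of the Poincare-ball distance, a standard hyperbolic proximity
# measure: d(u, v) = arcosh(1 + 2*|u-v|^2 / ((1-|u|^2)(1-|v|^2))).
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Distance between two points strictly inside the unit Poincare ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u * u)) * (1.0 - np.sum(v * v))
    return np.arccosh(1.0 + 2.0 * sq_dist / (denom + eps))

u = np.array([0.1, 0.2])
v = np.array([-0.3, 0.05])
print(poincare_distance(u, v))
```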
2304.08494
|
Iuliana Marin
|
Mohammad Rasras, Iuliana Marin, Serban Radu
|
Smart Home Environment Modelled with a Multi-Agent System
|
12 pages, 8 figures, journal article
|
U.P.B. Sci. Bull., Series C, Vol. 85, Iss. 1, 2023, ISSN 2286-3540
| null | null |
cs.MA cs.AI cs.CY cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
A smart home can be considered a place of residence that enables the
management of appliances and systems to help with day-to-day life through
automated technology. The current paper describes a prototype that simulates a
context-aware environment, developed in a designed smart home. The smart home
environment has been simulated using three agents and five locations in a
house. The context-aware agents behave based on predefined rules designed for
daily activities. Our proposal aims to reduce the operational cost of running
devices. In the future, monitoring the health of home residents will help
sustain their healthy daily life.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 08:09:08 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Rasras",
"Mohammad",
""
],
[
"Marin",
"Iuliana",
""
],
[
"Radu",
"Serban",
""
]
] |
new_dataset
| 0.99665 |
2304.08504
|
Shubham Patil
|
Shubham Patil, Jayatika Sakhuja, Ajay Kumar Singh, Anmol Biswas, Vivek
Saraswat, Sandeep Kumar, Sandip Lashkare, Udayan Ganguly
|
Schottky Barrier MOSFET Enabled Ultra-Low Power Real-Time Neuron for
Neuromorphic Computing
| null | null | null | null |
cs.ET physics.app-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Energy-efficient real-time synapses and neurons are essential to enable
large-scale neuromorphic computing. In this paper, we propose and demonstrate
the Schottky-Barrier MOSFET-based ultra-low power voltage-controlled current
source to enable real-time neurons for neuromorphic computing. Schottky-Barrier
MOSFET is fabricated on a Silicon-on-insulator platform with polycrystalline
Silicon as the channel and Nickel/Platinum as the source/drain. The Poly-Si and
Nickel form a back-to-back Schottky junction, enabling the ultra-low ON current
required for energy-efficient neurons.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 12:39:21 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Patil",
"Shubham",
""
],
[
"Sakhuja",
"Jayatika",
""
],
[
"Singh",
"Ajay Kumar",
""
],
[
"Biswas",
"Anmol",
""
],
[
"Saraswat",
"Vivek",
""
],
[
"Kumar",
"Sandeep",
""
],
[
"Lashkare",
"Sandip",
""
],
[
"Ganguly",
"Udayan",
""
]
] |
new_dataset
| 0.999496 |
2304.08580
|
Pooya Fayyazsanavi
|
Pooya Fayyazsanavi, Zhiqiang Wan, Will Hutchcroft, Ivaylo Boyadzhiev,
Yuguang Li, Jana Kosecka, Sing Bing Kang
|
U2RLE: Uncertainty-Guided 2-Stage Room Layout Estimation
|
To appear at CVPR 2023
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
While existing deep learning-based room layout estimation techniques
demonstrate good overall accuracy, they are less effective for distant
floor-wall boundaries. To tackle this problem, we propose a novel
uncertainty-guided approach for layout boundary estimation, introducing a new
two-stage CNN architecture termed U2RLE. The initial stage predicts both the
floor-wall boundary and its uncertainty and is followed by the refinement of
boundaries with high positional uncertainty using a different, distance-aware
loss. Finally, outputs from the two stages are merged to produce the room
layout. Experiments using ZInD and Structure3D datasets show that U2RLE
improves over current state-of-the-art, being able to handle both near and far
walls better. In particular, U2RLE outperforms current state-of-the-art
techniques for the most distant walls.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 19:43:08 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Fayyazsanavi",
"Pooya",
""
],
[
"Wan",
"Zhiqiang",
""
],
[
"Hutchcroft",
"Will",
""
],
[
"Boyadzhiev",
"Ivaylo",
""
],
[
"Li",
"Yuguang",
""
],
[
"Kosecka",
"Jana",
""
],
[
"Kang",
"Sing Bing",
""
]
] |
new_dataset
| 0.977282 |
2304.08595
|
Zicong Hong
|
Zicong Hong and Song Guo and Enyuan Zhou and Jianting Zhang and Wuhui
Chen and Jinwen Liang and Jie Zhang and Albert Zomaya
|
Prophet: Conflict-Free Sharding Blockchain via Byzantine-Tolerant
Deterministic Ordering
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sharding scales throughput by splitting blockchain nodes into parallel
groups. However, different shards' independent and random scheduling for
cross-shard transactions results in numerous conflicts and aborts, since
cross-shard transactions from different shards may access the same account. A
deterministic ordering can eliminate conflicts by determining a global order
for transactions before processing, as proved in the database field.
Unfortunately, due to the intertwining of the Byzantine environment and
information isolation among shards, there is no trusted party able to
predetermine such an order for cross-shard transactions. To tackle this
challenge, this paper proposes Prophet, a conflict-free sharding blockchain
based on Byzantine-tolerant deterministic ordering. It first depends on
untrusted self-organizing coalitions of nodes from different shards to
pre-execute cross-shard transactions for prerequisite information about
ordering. It then determines a trusted global order based on stateless ordering
and post-verification for pre-executed results, through shard cooperation.
Following the order, the shards thus orderly execute and commit transactions
without conflicts. Prophet orchestrates the pre-execution, ordering, and
execution processes in the sharding consensus for minimal overhead. We
rigorously prove the determinism and serializability of transactions under the
Byzantine and sharded environment. An evaluation of our prototype shows that
Prophet improves the throughput by $3.11\times$ and achieves nearly no aborts
on 1 million Ethereum transactions compared with state-of-the-art sharding.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 20:20:44 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Hong",
"Zicong",
""
],
[
"Guo",
"Song",
""
],
[
"Zhou",
"Enyuan",
""
],
[
"Zhang",
"Jianting",
""
],
[
"Chen",
"Wuhui",
""
],
[
"Liang",
"Jinwen",
""
],
[
"Zhang",
"Jie",
""
],
[
"Zomaya",
"Albert",
""
]
] |
new_dataset
| 0.995301 |
2304.08630
|
Anran Hu
|
Xin Guo, Anran Hu, Matteo Santamaria, Mahan Tajrobehkar, Junzi Zhang
|
MFGLib: A Library for Mean-Field Games
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mean-field games (MFGs) are limiting models to approximate $N$-player games,
with a number of applications. Despite the ever-growing numerical literature on
computation of MFGs, there is no library that allows researchers and
practitioners to easily create and solve their own MFG problems. The purpose of
this document is to introduce MFGLib, an open-source Python library for solving
general MFGs with a user-friendly and customizable interface. It serves as a
handy tool for creating and analyzing generic MFG environments, along with
embedded auto-tuners for all implemented algorithms. The package is distributed
under the MIT license and the source code and documentation can be found at
https://github.com/radar-research-lab/MFGLib/.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 21:54:22 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Guo",
"Xin",
""
],
[
"Hu",
"Anran",
""
],
[
"Santamaria",
"Matteo",
""
],
[
"Tajrobehkar",
"Mahan",
""
],
[
"Zhang",
"Junzi",
""
]
] |
new_dataset
| 0.99903 |
2304.08639
|
Ankur Ankan
|
Ankur Ankan and Johannes Textor
|
pgmpy: A Python Toolkit for Bayesian Networks
| null | null | null | null |
cs.LG stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
Bayesian Networks (BNs) are used in various fields for modeling, prediction,
and decision making. pgmpy is a Python package that provides a collection of
algorithms and tools to work with BNs and related models. It implements
algorithms for structure learning, parameter estimation, approximate and exact
inference, causal inference, and simulations. These implementations focus on
modularity and easy extensibility to allow users to quickly modify/add to
existing algorithms, or to implement new algorithms for different use cases.
pgmpy is released under the MIT License; the source code is available at:
https://github.com/pgmpy/pgmpy, and the documentation at: https://pgmpy.org.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 22:17:53 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Ankan",
"Ankur",
""
],
[
"Textor",
"Johannes",
""
]
] |
new_dataset
| 0.997456 |
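As a concrete illustration of the workflow the pgmpy abstract above describes (structure definition, parameter specification, exact inference), here is a minimal discrete Bayesian network using pgmpy's documented API; class names can vary slightly across versions:

```python
# Minimal sprinkler-style network: Rain and Sprinkler both influence WetGrass.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])
cpd_rain = TabularCPD("Rain", 2, [[0.8], [0.2]])
cpd_sprinkler = TabularCPD("Sprinkler", 2, [[0.6], [0.4]])
cpd_wet = TabularCPD(
    "WetGrass", 2,
    [[1.00, 0.10, 0.20, 0.01],   # P(WetGrass=0 | Rain, Sprinkler)
     [0.00, 0.90, 0.80, 0.99]],  # P(WetGrass=1 | Rain, Sprinkler)
    evidence=["Rain", "Sprinkler"], evidence_card=[2, 2],
)
model.add_cpds(cpd_rain, cpd_sprinkler, cpd_wet)
assert model.check_model()

# Exact inference: posterior over Rain given that the grass is wet.
infer = VariableElimination(model)
print(infer.query(["Rain"], evidence={"WetGrass": 1}))
```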
2304.08640
|
Baixiang Huang
|
Baixiang Huang, Bryan Hooi, Kai Shu
|
TAP: A Comprehensive Data Repository for Traffic Accident Prediction in
Road Networks
|
10 pages, 5 figures
| null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Road safety is a major global public health concern. Effective traffic crash
prediction can play a critical role in reducing road traffic accidents.
However, existing machine learning approaches tend to focus on predicting
traffic accidents in isolation, without considering the potential relationships
between different accident locations within road networks. To incorporate graph
structure information, graph-based approaches such as Graph Neural Networks
(GNNs) can be naturally applied. However, applying GNNs to the accident
prediction problem faces challenges due to the lack of suitable
graph-structured traffic accident datasets. To bridge this gap, we have
constructed a real-world graph-based Traffic Accident Prediction (TAP) data
repository, along with two representative tasks: accident occurrence prediction
and accident severity prediction. With nationwide coverage, real-world network
topology, and rich geospatial features, this data repository can be used for a
variety of traffic-related tasks. We further comprehensively evaluate eleven
state-of-the-art GNN variants and two non-graph-based machine learning methods
using the created datasets. Significantly facilitated by the proposed data, we
develop a novel Traffic Accident Vulnerability Estimation via Linkage (TRAVEL)
model, which is designed to capture angular and directional information from
road networks. We demonstrate that the proposed model consistently outperforms
the baselines. The data and code are available on GitHub
(https://github.com/baixianghuang/travel).
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 22:18:58 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Huang",
"Baixiang",
""
],
[
"Hooi",
"Bryan",
""
],
[
"Shu",
"Kai",
""
]
] |
new_dataset
| 0.980707 |
2304.08650
|
Berk \c{C}ilo\u{g}lu
|
Abdullah Taha \c{C}a\u{g}an, G\"orkem Berkay Ko\c{c}, Handan Yak{\i}n,
Berk \c{C}ilo\u{g}lu, Muhammad Zeeshan Ashgar, \"Ozg\"un Ersoy, Jyri
H\"am\"al\"ainen, Metin \"Ozt\"urk
|
UAV-based Maritime Communications: Relaying to Enhance the Link Quality
| null | null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Providing a stable connectivity in maritime communications is of utmost
importance to unleash the full potential of smart ports. Nonetheless, due to
the crowded nature of harbor environments, it is likely that some ships are
shadowed by others, resulting in reduced received power that subsequently
diminishes their data rates and even threatens basic connectivity requirements.
Given that UAVs have been regarded as an integral part of future generations of
wireless communication networks, they can be employed in maritime
communications as well. In this paper, we investigate the use of UAV-mounted
relays in order to help mitigate the reduced data rates of blocked links in
maritime communications. Various communication architectures are considered
based on the positioning mechanism of the UAV; in this regard, fixed, k-means
algorithm-based, and landing spot-based positioning approaches are examined. On
the other hand, since UAVs are predominantly battery-operated, the energy
consumption performances of these approaches are also measured. Results reveal
that the landing spot-based UAV relay positioning approach finds the best
trade-off between the data rate and energy consumption.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 22:54:05 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Çağan",
"Abdullah Taha",
""
],
[
"Koç",
"Görkem Berkay",
""
],
[
"Yakın",
"Handan",
""
],
[
"Çiloğlu",
"Berk",
""
],
[
"Ashgar",
"Muhammad Zeeshan",
""
],
[
"Ersoy",
"Özgün",
""
],
[
"Hämäläinen",
"Jyri",
""
],
[
"Öztürk",
"Metin",
""
]
] |
new_dataset
| 0.978395 |
2304.08655
|
Shuo Chen
|
Nikolaj Bj{\o}rner, Shuo Chen, Yang Chen, Zhongxin Guo, Peng Liu,
Nanqing Luo
|
An Ethereum-compatible blockchain that explicates and ensures
design-level safety properties for smart contracts
| null | null | null | null |
cs.CR cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Smart contracts are crucial elements of decentralized technologies, but they
face significant obstacles to trustworthiness due to security bugs and
trapdoors. To address the core issue, we propose a technology that enables
programmers to focus on design-level properties rather than specific low-level
attack patterns. Our proposed technology, called Theorem-Carrying-Transaction
(TCT), combines the benefits of runtime checking and symbolic proof. Under the
TCT protocol, every transaction must carry a theorem that proves its adherence
to the safety properties in the invoked contracts, and the blockchain checks
the proof before executing the transaction. The unique design of TCT ensures
that the theorems are provable and checkable in an efficient manner. We believe
that TCT holds a great promise for enabling provably secure smart contracts in
the future. As such, we call for collaboration toward this vision.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 23:14:45 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Bjørner",
"Nikolaj",
""
],
[
"Chen",
"Shuo",
""
],
[
"Chen",
"Yang",
""
],
[
"Guo",
"Zhongxin",
""
],
[
"Liu",
"Peng",
""
],
[
"Luo",
"Nanqing",
""
]
] |
new_dataset
| 0.999029 |
2304.08660
|
Alex Junho Lee
|
Alex Junho Lee, Seungwon Song, Hyungtae Lim, Woojoo Lee and Hyun Myung
|
(LC)$^2$: LiDAR-Camera Loop Constraints For Cross-Modal Place
Recognition
|
8 pages, 11 figures, Accepted to IEEE Robotics and Automation Letters
(RA-L)
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Localization has been a challenging task for autonomous navigation. A loop
detection algorithm must overcome environmental changes for the place
recognition and re-localization of robots. Therefore, deep learning has been
extensively studied for the consistent transformation of measurements into
localization descriptors. Street view images are easily accessible; however,
images are vulnerable to appearance changes. LiDAR can robustly provide precise
structural information. However, constructing a point cloud database is
expensive, and point clouds exist only in limited places. Different from
previous works that train networks to produce shared embedding directly between
the 2D image and 3D point cloud, we transform both data into 2.5D depth images
for matching. In this work, we propose a novel cross-matching method, called
(LC)$^2$, for achieving LiDAR localization without a prior point cloud map. To
this end, LiDAR measurements are expressed in the form of range images before
matching them to reduce the modality discrepancy. Subsequently, the network is
trained to extract localization descriptors from disparity and range images.
Next, the best matches are employed as a loop factor in a pose graph. Using
public datasets that include multiple sessions in significantly different
lighting conditions, we demonstrated that LiDAR-based navigation systems could
be optimized from image databases and vice versa.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 23:20:16 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Lee",
"Alex Junho",
""
],
[
"Song",
"Seungwon",
""
],
[
"Lim",
"Hyungtae",
""
],
[
"Lee",
"Woojoo",
""
],
[
"Myung",
"Hyun",
""
]
] |
new_dataset
| 0.952613 |
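The (LC)$^2$ abstract above expresses LiDAR measurements as range images before matching them against camera-derived disparity; a standard spherical-projection sketch follows. The image resolution and vertical field of view are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: project an Nx3 LiDAR point cloud into an HxW range image.
# FOV and resolution below are typical for a 64-beam sensor, assumed here.
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    yaw = np.arctan2(y, x)                                  # azimuth, [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = (((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int)) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r                                           # last return wins
    return img
```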
2304.08665
|
Tanish Jain
|
Tanish Jain
|
Insta(nt) Pet Therapy: GAN-generated Images for Therapeutic Social Media
Content
|
7 pages, 7 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The positive therapeutic effect of viewing pet images online has been
well-studied. However, it is difficult to obtain large-scale production of such
content since it relies on pet owners to capture photographs and upload them. I
use a Generative Adversarial Network-based framework for the creation of fake
pet images at scale. These images are uploaded on an Instagram account where
they drive user engagement at levels comparable to those seen with images from
accounts with traditional pet photographs, underlining the framework's
applicability for generating pet-therapy social media content.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 23:43:29 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Jain",
"Tanish",
""
]
] |
new_dataset
| 0.989248 |
2304.08695
|
Yifan Wang Mr
|
Yifan Wang, Meng Yuan, Lei Li, Karen Sui Geok Chua, Seng Kwee Wee, Wei
Tech Ang
|
Graceful User Following for Mobile Balance Assistive Robot in Daily
Activities Assistance
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerous diseases and aging can cause degeneration of people's balance
ability, resulting in limited mobility and even a high risk of falls. Robotic
technologies can provide more intensive rehabilitation exercises or be used as
assistive devices to compensate for balance ability. However, with the new
healthcare paradigm shifting from hospital care to home care, there is a gap in
robotic systems that can provide care at home. This paper introduces the Mobile
Robotic Balance Assistant (MRBA), a compact and cost-effective balance
assistive robot that can provide both rehabilitation training and activities of
daily living (ADLs) assistance at home. A three degrees of freedom (3-DoF)
robotic arm was designed to mimic a therapist's arm function to provide balance
assistance to the user. To minimize interference with the user's natural pelvis
movements and gait patterns, the robot must have a Human-Robot Interface (HRI)
that can detect user intention accurately and follow the user's movement
smoothly and in a timely manner. Thus, a graceful user following control rule was proposed.
The overall control architecture consists of two parts: an observer for human
inputs estimation and an LQR-based controller with disturbance rejection. The
proposed controller is validated in high-fidelity simulation with actual human
trajectories, and the results successfully show the effectiveness of the method
in different walking modes.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 02:01:10 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Wang",
"Yifan",
""
],
[
"Yuan",
"Meng",
""
],
[
"Li",
"Lei",
""
],
[
"Chua",
"Karen Sui Geok",
""
],
[
"Wee",
"Seng Kwee",
""
],
[
"Ang",
"Wei Tech",
""
]
] |
new_dataset
| 0.997602 |
2304.08754
|
Jie Shao
|
Xin Man, Chenghong Zhang, Changyu Li, Jie Shao
|
W-MAE: Pre-trained weather model with masked autoencoder for
multi-variable weather forecasting
| null | null | null | null |
cs.LG cs.AI physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weather forecasting is a long-standing computational challenge with direct
societal and economic impacts. This task involves a large amount of continuous
data collection and exhibits rich spatiotemporal dependencies over long
periods, making it highly suitable for deep learning models. In this paper, we
apply pre-training techniques to weather forecasting and propose W-MAE, a
Weather model with Masked AutoEncoder pre-training for multi-variable weather
forecasting. W-MAE is pre-trained in a self-supervised manner to reconstruct
spatial correlations within meteorological variables. On the temporal scale, we
fine-tune the pre-trained W-MAE to predict the future states of meteorological
variables, thereby modeling the temporal dependencies present in weather data.
We pre-train W-MAE using the fifth-generation ECMWF Reanalysis (ERA5) data,
with samples selected every six hours and using only two years of data. Under
the same training data conditions, we compare W-MAE with FourCastNet, and W-MAE
outperforms FourCastNet in precipitation forecasting. In the setting where the
training data is far less than that of FourCastNet, our model still performs
much better in precipitation prediction (0.80 vs. 0.98). Additionally,
experiments show that our model has a stable and significant advantage in
short-to-medium-range forecasting (i.e., forecasting time ranges from 6 hours
to one week), and the longer the prediction time, the more evident the
performance advantage of W-MAE, further proving its robustness.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 06:25:11 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Man",
"Xin",
""
],
[
"Zhang",
"Chenghong",
""
],
[
"Li",
"Changyu",
""
],
[
"Shao",
"Jie",
""
]
] |
new_dataset
| 0.991083 |
2304.08893
|
Manoj Kumar Rajagopal
|
Aswin Iyer, Santosh Narayan, Naren M, Manoj kumar Rajagopal
|
Autonomous Systems: Indoor Drone Navigation
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Drones are a promising technology for autonomous data collection and indoor
sensing. In situations when human-controlled UAVs may not be practical or
dependable, such as in uncharted or dangerous locations, the usage of
autonomous UAVs offers flexibility, cost savings, and reduced risk. The system
creates a simulated quadcopter capable of autonomously travelling in an indoor
environment using the Gazebo simulation tool and the ROS navigation framework
known as Navigation2 (Nav2). While Nav2 has successfully demonstrated
autonomous navigation for terrestrial robots and vehicles, the same has not yet
been accomplished with unmanned aerial vehicles. The goal is to use the SLAM
toolbox for ROS and the Nav2 navigation framework to construct a simulated
drone that can move autonomously in an indoor (GPS-less) environment.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 10:40:00 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Iyer",
"Aswin",
""
],
[
"Narayan",
"Santosh",
""
],
[
"M",
"Naren",
""
],
[
"Rajagopal",
"Manoj kumar",
""
]
] |
new_dataset
| 0.99504 |
2304.08901
|
Alpay Sabuncuo\u{g}lu
|
Alpay Sabuncuoglu, T. Metin Sezgin
|
Multimodal Group Activity Dataset for Classroom Engagement Level
Prediction
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We collected a new dataset that includes approximately eight hours of
audiovisual recordings of a group of students and their self-evaluation scores
for classroom engagement. The dataset and data analysis scripts are available
on our open-source repository. We developed baseline face-based and
group-activity-based image and video recognition models. Our image models yield
45-85% test accuracy with face-area inputs on the person-based classification
task. Our video models achieve up to 71% test accuracy on group-level
prediction using group activity video inputs. In this technical report, we
share the details of our end-to-end human-centered engagement analysis
pipeline, from data collection to model development.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 11:09:02 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Sabuncuoglu",
"Alpay",
""
],
[
"Sezgin",
"T. Metin",
""
]
] |
new_dataset
| 0.999529 |
2304.08908
|
Mario Saucedo
|
Mario A.V. Saucedo, Akash Patel, Rucha Sawlekar, Akshit Saradagi,
Christoforos Kanellakis, Ali-Akbar Agha-Mohammadi and George Nikolakopoulos
|
Event Camera and LiDAR based Human Tracking for Adverse Lighting
Conditions in Subterranean Environments
|
Accepted at IFAC World Congress 2023
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we propose a novel LiDAR and event camera fusion modality
for subterranean (SubT) environments for fast and precise object and human
detection in a wide variety of adverse lighting conditions, such as low or no
light, high-contrast zones and in the presence of blinding light sources. In
the proposed approach, information from the event camera and LiDAR are fused to
localize a human or an object-of-interest in a robot's local frame. The local
detection is then transformed into the inertial frame and used to set
references for a Nonlinear Model Predictive Controller (NMPC) for reactive
tracking of humans or objects in SubT environments. The proposed novel fusion
uses intensity filtering and K-means clustering on the LiDAR point cloud and
frequency filtering and connectivity clustering on the events induced in an
event camera by the returning LiDAR beams. The centroids of the clusters in the
event camera and LiDAR streams are then paired to localize reflective markers
present on safety vests and signs in SubT environments. The efficacy of the
proposed scheme has been experimentally validated in a real SubT environment (a
mine) with a Pioneer 3AT mobile robot. The experimental results show real-time
performance for human detection and the NMPC-based controller allows for
reactive tracking of a human or object of interest, even in complete darkness.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 11:27:41 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Saucedo",
"Mario A. V.",
""
],
[
"Patel",
"Akash",
""
],
[
"Sawlekar",
"Rucha",
""
],
[
"Saradagi",
"Akshit",
""
],
[
"Kanellakis",
"Christoforos",
""
],
[
"Agha-Mohammadi",
"Ali-Akbar",
""
],
[
"Nikolakopoulos",
"George",
""
]
] |
new_dataset
| 0.999119 |
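The LiDAR half of the fusion described in the abstract above (intensity filtering followed by k-means clustering of reflective returns) can be sketched as below; the intensity threshold and cluster count are illustrative assumptions, and scikit-learn's KMeans stands in for the paper's clustering step:

```python
# Hedged sketch: keep high-intensity LiDAR returns (reflective markers on
# vests/signs) and return the centroids of their k-means clusters.
import numpy as np
from sklearn.cluster import KMeans

def marker_centroids(points, intensities, threshold=0.9, k=2):
    """points: Nx3 array; intensities: length-N array normalized to [0, 1]."""
    bright = points[intensities > threshold]
    if len(bright) < k:
        return bright                  # too few reflective returns to cluster
    km = KMeans(n_clusters=k, n_init=10).fit(bright)
    return km.cluster_centers_
```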
2304.08940
|
Isabella Gra{\ss}l
|
Isabella Gra{\ss}l and Gordon Fraser
|
The ABC of Pair Programming: Gender-dependent Attitude, Behavior and
Code of Young Learners
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Young learners are increasingly introduced to programming, and one of the
main challenges for educators is to achieve learning success while also
creating enthusiasm. As it is particularly difficult to achieve this enthusiasm
initially in young females, prior work has identified gender-specific
differences in the programming behavior of young learners. Since pair
programming, which turns programming into a more sociable activity, has been
proposed as an approach to support programming education, in this paper we aim
to investigate whether similar gender-specific characteristics can also be
observed during pair programming. Therefore, we designed a gender-neutral
introductory SCRATCH programming course tailored for integrating pair
programming principles, and conducted it with a total of 139 students aged
between 8 and 14 years. To identify gender-dependent differences and
similarities, we measure the attitude towards programming and the course
setting, observe the behavior of the students while programming, and analyze
the code of the programs for different gender-combinations. Overall, our study
demonstrates that pair programming is well suited for young learners and
results in a positive attitude. While the resulting programs are similar in
quality and complexity independent of gender, differences are evident when it
comes to the compliance to pair programming roles, the exploration of code, and
the creative customization of programs. These findings contribute to an
in-depth understanding of social and technical gender specifics of pair
programming, and provide educators with resources and guidance for implementing
gender-sensitive pair programming in the classroom.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 12:30:20 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Graßl",
"Isabella",
""
],
[
"Fraser",
"Gordon",
""
]
] |
new_dataset
| 0.997728 |
2304.08956
|
Naiyu Fang
|
Naiyu Fang, Lemiao Qiu, Shuyou Zhang, Zili Wang, Kerui Hu
|
PG-VTON: A Novel Image-Based Virtual Try-On Method via Progressive
Inference Paradigm
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Virtual try-on is a promising computer vision topic with a high commercial
value wherein a new garment is visually worn on a person with a photo-realistic
effect. Previous studies conduct their shape and content inference at one
stage, employing a single-scale warping mechanism and a relatively
unsophisticated content inference mechanism. These approaches have led to
suboptimal results in terms of garment warping and skin reservation under
challenging try-on scenarios. To address these limitations, we propose a novel
virtual try-on method via progressive inference paradigm (PGVTON) that
leverages a top-down inference pipeline and a general garment try-on strategy.
Specifically, we propose a robust try-on parsing inference method by
disentangling semantic categories and introducing consistency. Exploiting the
try-on parsing as the shape guidance, we implement the garment try-on via
warping-mapping-composition. To facilitate adaptation to a wide range of try-on
scenarios, we adopt a covering-more-and-selecting-one warping strategy and
explicitly distinguish tasks based on alignment. Additionally, we regulate
StyleGAN2 to implement re-naked skin inpainting, conditioned on the target skin
shape and spatial-agnostic skin features. Experiments demonstrate that our
method has state-of-the-art performance under two challenging scenarios. The
code will be available at https://github.com/NerdFNY/PGVTON.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 12:47:26 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Fang",
"Naiyu",
""
],
[
"Qiu",
"Lemiao",
""
],
[
"Zhang",
"Shuyou",
""
],
[
"Wang",
"Zili",
""
],
[
"Hu",
"Kerui",
""
]
] |
new_dataset
| 0.999256 |
2304.08994
|
Alexander Naumann
|
Alexander Naumann, Felix Hertlein, Laura D\"orr, Kai Furmans
|
Parcel3D: Shape Reconstruction from Single RGB Images for Applications
in Transportation Logistics
|
Accepted at CVPR workshop on Vision-based InduStrial InspectiON
(VISION) 2023, see
https://vision-based-industrial-inspection.github.io/cvpr-2023/
| null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We focus on enabling damage and tampering detection in logistics and tackle
the problem of 3D shape reconstruction of potentially damaged parcels. As input
we utilize single RGB images, which corresponds to use-cases where only simple
handheld devices are available, e.g. for postmen during delivery or clients on
delivery. We present a novel synthetic dataset, named Parcel3D, that is based
on the Google Scanned Objects (GSO) dataset and consists of more than 13,000
images of parcels with full 3D annotations. The dataset contains intact, i.e.
cuboid-shaped, parcels and damaged parcels, which were generated in
simulations. We work towards detecting mishandling of parcels by presenting a
novel architecture called CubeRefine R-CNN, which combines estimating a 3D
bounding box with an iterative mesh refinement. We benchmark our approach on
Parcel3D and an existing dataset of cuboid-shaped parcels in real-world
scenarios. Our results show that while training on Parcel3D enables transfer
to the real world, enabling reliable deployment in real-world scenarios is
still challenging. CubeRefine R-CNN yields competitive performance in terms of
Mesh AP and is the only model that directly enables deformation assessment by
3D mesh comparison and tampering detection by comparing viewpoint invariant
parcel side surface representations. Dataset and code are available at
https://a-nau.github.io/parcel3d.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 13:55:51 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Naumann",
"Alexander",
""
],
[
"Hertlein",
"Felix",
""
],
[
"Dörr",
"Laura",
""
],
[
"Furmans",
"Kai",
""
]
] |
new_dataset
| 0.999867 |
2304.09012
|
Andrey Sobolevsky
|
Andrey Sobolevsky, Guillaume-Alexandre Bilodeau, Jinghui Cheng, Jin
L.C. Guo
|
GUILGET: GUI Layout GEneration with Transformer
|
12 pages, 5 figures, Canadian AI Conference 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Sketching out Graphical User Interface (GUI) layout is part of the pipeline
of designing a GUI and a crucial task for the success of a software
application. Arranging all components inside a GUI layout manually is a
time-consuming task. In order to assist designers, we developed a method named
GUILGET to automatically generate GUI layouts from positional constraints
represented as GUI arrangement graphs (GUI-AGs). The goal is to support the
initial step of GUI design by producing realistic and diverse GUI layouts. The
existing image layout generation techniques often cannot incorporate GUI design
constraints. Thus, GUILGET needs to adapt existing techniques to generate GUI
layouts that obey constraints specific to GUI designs. GUILGET is based on
transformers in order to capture the semantics of relationships between
elements in a GUI-AG. Moreover, the model learns constraints through the minimization of
losses responsible for placing each component inside its parent layout, for not
letting components overlap if they are inside the same parent, and for
component alignment. Our experiments, which are conducted on the CLAY dataset,
reveal that our model has the best understanding of relationships in GUI-AGs
and performs best on most evaluation metrics. Therefore, our work contributes
to improved GUI layout generation by proposing a novel method that effectively
accounts for the constraints on GUI elements and paves the way for a more
efficient GUI design pipeline.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 14:27:34 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Sobolevsky",
"Andrey",
""
],
[
"Bilodeau",
"Guillaume-Alexandre",
""
],
[
"Cheng",
"Jinghui",
""
],
[
"Guo",
"Jin L. C.",
""
]
] |
new_dataset
| 0.997557 |
2304.09048
|
Ningyu Zhang
|
Zhen Bi, Jing Chen, Yinuo Jiang, Feiyu Xiong, Wei Guo, Huajun Chen,
Ningyu Zhang
|
CodeKGC: Code Language Model for Generative Knowledge Graph Construction
|
Work in progress
| null | null | null |
cs.CL cs.AI cs.IR cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current generative knowledge graph construction approaches usually fail to
capture structural knowledge by simply flattening natural language into
serialized texts or a specification language. However, large generative
language models trained on structured data such as code have demonstrated
impressive capabilities in understanding natural language for structural
prediction and reasoning tasks. Intuitively, we address the task of generative
knowledge graph construction with code language model: given a code-format
natural language input, the target is to generate triples which can be
represented as code completion tasks. Specifically, we develop schema-aware
prompts that effectively utilize the semantic structure within the knowledge
graph. As code inherently possesses structure, such as class and function
definitions, it serves as a useful model for prior semantic structural
knowledge. Furthermore, we employ a rationale-enhanced generation method to
boost the performance. Rationales provide intermediate steps, thereby improving
knowledge extraction abilities. Experimental results indicate that the proposed
approach can obtain better performance on benchmark datasets compared with
baselines. Code and datasets are available in
https://github.com/zjunlp/DeepKE/tree/main/example/llm.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 15:12:34 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Bi",
"Zhen",
""
],
[
"Chen",
"Jing",
""
],
[
"Jiang",
"Yinuo",
""
],
[
"Xiong",
"Feiyu",
""
],
[
"Guo",
"Wei",
""
],
[
"Chen",
"Huajun",
""
],
[
"Zhang",
"Ningyu",
""
]
] |
new_dataset
| 0.998659 |
2304.09071
|
Andrea Ferraguti
|
Andrea Ferraguti and Dorian Goldfeld and Giacomo Micheli
|
Number Theoretical Locally Recoverable Codes
| null | null | null | null |
cs.IT math.IT math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we give constructions for infinite sequences of finite
non-linear locally recoverable codes $\mathcal C\subseteq
\prod\limits^N_{i=1}\mathbb F_{q_i}$ over a product of finite fields arising
from basis expansions in algebraic number fields. The codes in our sequences
have increasing length and size, constant rate, fixed locality, and minimum
distance going to infinity.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 15:45:12 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Ferraguti",
"Andrea",
""
],
[
"Goldfeld",
"Dorian",
""
],
[
"Micheli",
"Giacomo",
""
]
] |
new_dataset
| 0.998888 |
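For context on the record above: the standard definition of locality, which the paper's non-linear, mixed-alphabet construction generalizes, is the following (a textbook statement, not quoted from the paper):

```latex
% Locality of a code over a product of (possibly different) finite fields.
A code $\mathcal{C} \subseteq \prod_{i=1}^{N} \mathbb{F}_{q_i}$ has locality $r$
if for every coordinate $i$ there exists a recovery set
$R_i \subseteq \{1,\dots,N\} \setminus \{i\}$ with $|R_i| \le r$ such that the
$i$-th symbol of every codeword is determined by its symbols in positions $R_i$.
```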
2101.06549
|
Jingkang Wang
|
Jingkang Wang, Ava Pun, James Tu, Sivabalan Manivasagam, Abbas Sadat,
Sergio Casas, Mengye Ren, Raquel Urtasun
|
AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles
|
CVPR 2021. Corrected typos in the adversarial objective
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As self-driving systems become better, simulating scenarios where the
autonomy stack may fail becomes more important. Traditionally, those scenarios
are generated for a few scenes with respect to the planning module that takes
ground-truth actor states as input. This does not scale and cannot identify all
possible autonomy failures, such as perception failures due to occlusion. In
this paper, we propose AdvSim, an adversarial framework to generate
safety-critical scenarios for any LiDAR-based autonomy system. Given an initial
traffic scenario, AdvSim modifies the actors' trajectories in a physically
plausible manner and updates the LiDAR sensor data to match the perturbed
world. Importantly, by simulating directly from sensor data, we obtain
adversarial scenarios that are safety-critical for the full autonomy stack. Our
experiments show that our approach is general and can identify thousands of
semantically meaningful safety-critical scenarios for a wide range of modern
self-driving systems. Furthermore, we show that the robustness and safety of
these systems can be further improved by training them with scenarios generated
by AdvSim.
|
[
{
"version": "v1",
"created": "Sat, 16 Jan 2021 23:23:12 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Apr 2021 03:42:18 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Jan 2022 21:50:56 GMT"
},
{
"version": "v4",
"created": "Sun, 16 Apr 2023 20:22:41 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Wang",
"Jingkang",
""
],
[
"Pun",
"Ava",
""
],
[
"Tu",
"James",
""
],
[
"Manivasagam",
"Sivabalan",
""
],
[
"Sadat",
"Abbas",
""
],
[
"Casas",
"Sergio",
""
],
[
"Ren",
"Mengye",
""
],
[
"Urtasun",
"Raquel",
""
]
] |
new_dataset
| 0.999583 |
2104.02438
|
Bartek Klin
|
Miko{\l}aj Boja\'nczyk, Joanna Fijalkow, Bartek Klin and Joshua
Moerman
|
Orbit-Finite-Dimensional Vector Spaces and Weighted Register Automata
| null | null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We develop a theory of vector spaces spanned by orbit-finite sets. Using this
theory, we give a decision procedure for equivalence of weighted register
automata, which are the common generalization of weighted automata and register
automata for infinite alphabets. The algorithm runs in exponential time, and in
polynomial time for a fixed number of registers. As a special case, we can
decide, with the same complexity, language equivalence for unambiguous register
automata, which improves previous results in three ways: (a) we allow for order
comparisons on atoms, and not just equality; (b) the complexity is
exponentially better; and (c) we allow automata with guessing.
|
[
{
"version": "v1",
"created": "Tue, 6 Apr 2021 11:54:51 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Apr 2021 14:45:04 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Apr 2023 19:14:28 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Bojańczyk",
"Mikołaj",
""
],
[
"Fijalkow",
"Joanna",
""
],
[
"Klin",
"Bartek",
""
],
[
"Moerman",
"Joshua",
""
]
] |
new_dataset
| 0.999309 |
2106.13797
|
Wenhai Wang
|
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding
Liang, Tong Lu, Ping Luo, Ling Shao
|
PVT v2: Improved Baselines with Pyramid Vision Transformer
|
Accepted to CVMJ 2022
|
Computational Visual Media, 2022, Vol. 8, No. 3, Pages: 415-424
|
10.1007/s41095-022-0274-8
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers have recently shown encouraging progress in computer vision.
In this work, we present new baselines by improving the original Pyramid Vision
Transformer (PVT v1) by adding three designs, including (1) linear complexity
attention layer, (2) overlapping patch embedding, and (3) convolutional
feed-forward network. With these modifications, PVT v2 reduces the
computational complexity of PVT v1 to linear and achieves significant
improvements on fundamental vision tasks such as classification, detection, and
segmentation. Notably, the proposed PVT v2 achieves comparable or better
performance than recent works such as Swin Transformer. We hope this work will
facilitate state-of-the-art Transformer research in computer vision. Code is
available at https://github.com/whai362/PVT.
|
[
{
"version": "v1",
"created": "Fri, 25 Jun 2021 17:51:09 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Jun 2021 15:07:07 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Jul 2021 08:04:40 GMT"
},
{
"version": "v4",
"created": "Sat, 17 Jul 2021 15:12:25 GMT"
},
{
"version": "v5",
"created": "Wed, 9 Feb 2022 03:51:39 GMT"
},
{
"version": "v6",
"created": "Thu, 30 Jun 2022 15:31:56 GMT"
},
{
"version": "v7",
"created": "Mon, 17 Apr 2023 12:49:29 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Wang",
"Wenhai",
""
],
[
"Xie",
"Enze",
""
],
[
"Li",
"Xiang",
""
],
[
"Fan",
"Deng-Ping",
""
],
[
"Song",
"Kaitao",
""
],
[
"Liang",
"Ding",
""
],
[
"Lu",
"Tong",
""
],
[
"Luo",
"Ping",
""
],
[
"Shao",
"Ling",
""
]
] |
new_dataset
| 0.976022 |
2108.05271
|
Georgi Karadzhov
|
Georgi Karadzhov, Tom Stafford, Andreas Vlachos
|
DeliData: A dataset for deliberation in multi-party problem solving
| null | null | null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Group deliberation enables people to collaborate and solve problems; however,
it is understudied due to a lack of resources. To this end, we introduce the
first publicly available dataset containing collaborative conversations on
solving a well-established cognitive task, consisting of 500 group dialogues
and 14k utterances. In 64% of these conversations, the group members are able
to find a better solution than they had identified individually, and in 43.8%
of the groups who had a correct answer as their final solution, none of the
participants had solved the task correctly by themselves. Furthermore, we
propose a novel annotation schema that captures deliberation cues and release
all 14k utterances annotated with it. Finally, we use the proposed dataset to
develop and evaluate two methods for generating deliberation utterances. The
data collection platform, dataset and annotated corpus are publicly available
at https://delibot.xyz.
|
[
{
"version": "v1",
"created": "Wed, 11 Aug 2021 15:13:07 GMT"
},
{
"version": "v2",
"created": "Sat, 7 May 2022 18:18:00 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Apr 2023 13:11:25 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Karadzhov",
"Georgi",
""
],
[
"Stafford",
"Tom",
""
],
[
"Vlachos",
"Andreas",
""
]
] |
new_dataset
| 0.999847 |
2110.11073
|
Kai Wang
|
Kai Wang, Zhene Zou, Minghao Zhao, Qilin Deng, Yue Shang, Yile Liang,
Runze Wu, Xudong Shen, Tangjie Lyu, Changjie Fan
|
RL4RS: A Real-World Dataset for Reinforcement Learning based Recommender
System
|
4th version, SIGIR 2023
| null | null | null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning based recommender systems (RL-based RS) aim at
learning a good policy from a batch of collected data, by casting
recommendations to multi-step decision-making tasks. However, current RL-based
RS research commonly has a large reality gap. In this paper, we introduce the
first open-source real-world dataset, RL4RS, hoping to replace the artificial
datasets and semi-simulated RS datasets previous studies used due to the
resource limitation of the RL-based RS domain. Unlike academic RL research,
RL-based RS suffers from the difficulties of being well-validated before
deployment. We attempt to propose a new systematic evaluation framework,
including evaluation of environment simulation, evaluation on environments,
counterfactual policy evaluation, and evaluation on environments built from
test set. In summary, the RL4RS (Reinforcement Learning for Recommender
Systems), a new resource with special concerns on the reality gaps, contains
two real-world datasets, data understanding tools, tuned simulation
environments, related advanced RL baselines, batch RL baselines, and
counterfactual policy evaluation algorithms. The RL4RS suite can be found at
https://github.com/fuxiAIlab/RL4RS. In addition to the RL-based recommender
systems, we expect the resource to contribute to research in applied
reinforcement learning.
|
[
{
"version": "v1",
"created": "Mon, 18 Oct 2021 12:48:02 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 09:12:53 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Feb 2022 03:31:15 GMT"
},
{
"version": "v4",
"created": "Sun, 20 Feb 2022 13:08:08 GMT"
},
{
"version": "v5",
"created": "Mon, 17 Apr 2023 10:37:38 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Wang",
"Kai",
""
],
[
"Zou",
"Zhene",
""
],
[
"Zhao",
"Minghao",
""
],
[
"Deng",
"Qilin",
""
],
[
"Shang",
"Yue",
""
],
[
"Liang",
"Yile",
""
],
[
"Wu",
"Runze",
""
],
[
"Shen",
"Xudong",
""
],
[
"Lyu",
"Tangjie",
""
],
[
"Fan",
"Changjie",
""
]
] |
new_dataset
| 0.999552 |
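Aside: the counterfactual policy evaluation that the RL4RS abstract lists can be illustrated with textbook inverse-propensity scoring. This is a minimal sketch under assumed names and tuple layout, not the suite's actual API:

    # Off-policy value estimate from logged interactions via inverse-propensity
    # scoring (IPS): reweight each logged reward by how much more (or less)
    # likely the target policy is to take the logged action than the behavior
    # policy was.
    def ips_value(logged, target_prob):
        # logged: iterable of (state, action, reward, behavior_prob) tuples
        total, n = 0.0, 0
        for s, a, r, p_b in logged:
            total += (target_prob(s, a) / p_b) * r
            n += 1
        return total / n

    logged = [("s0", 1, 1.0, 0.5), ("s1", 0, 0.0, 0.5), ("s0", 0, 1.0, 0.5)]
    uniform = lambda s, a: 0.5           # target identical to behavior policy
    print(ips_value(logged, uniform))    # 0.666..., the plain logged average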
2111.10756
|
Keng Ji Chow
|
Keng Ji Chow, Samson Tan, Min-Yen Kan
|
TraVLR: Now You See It, Now You Don't! A Bimodal Dataset for Evaluating
Visio-Linguistic Reasoning
|
The first two authors contributed equally
| null | null | null |
cs.CL cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerous visio-linguistic (V+L) representation learning methods have been
developed, yet existing datasets do not adequately evaluate the extent to which
they represent visual and linguistic concepts in a unified space. We propose
several novel evaluation settings for V+L models, including cross-modal
transfer. Furthermore, existing V+L benchmarks often report global accuracy
scores on the entire dataset, making it difficult to pinpoint the specific
reasoning tasks that models fail and succeed at. We present TraVLR, a synthetic
dataset comprising four V+L reasoning tasks. TraVLR's synthetic nature allows
us to constrain its training and testing distributions along task-relevant
dimensions, enabling the evaluation of out-of-distribution generalisation. Each
example in TraVLR redundantly encodes the scene in two modalities, allowing
either to be dropped or added during training or testing without losing
relevant information. We compare the performance of four state-of-the-art V+L
models, finding that while they perform well on test examples from the same
modality, they all fail at cross-modal transfer and have limited success
accommodating the addition or deletion of one modality. We release TraVLR as an
open challenge for the research community.
|
[
{
"version": "v1",
"created": "Sun, 21 Nov 2021 07:22:44 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Mar 2023 12:57:41 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Apr 2023 09:48:44 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Chow",
"Keng Ji",
""
],
[
"Tan",
"Samson",
""
],
[
"Kan",
"Min-Yen",
""
]
] |
new_dataset
| 0.999682 |
2111.11969
|
Qiang Nie
|
Qiang Nie, Ziwei Liu, Yunhui Liu
|
Lifting 2D Human Pose to 3D with Domain Adapted 3D Body Concept
|
15 pages, a paper submitted to IJCV
|
Int J Comput Vis 131 (2023) 1250 - 1268
|
10.1007/s11263-023-01749-2
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lifting the 2D human pose to the 3D pose is an important yet challenging
task. Existing 3D pose estimation suffers from 1) the inherent ambiguity
between 2D and 3D data, and 2) the lack of well-labeled 2D-3D pose pairs in
the wild. Human beings are able to imagine the 3D human pose from a 2D image or
a set of 2D body key-points with minimal ambiguity, which should be attributed
to the prior knowledge of the human body that we have acquired in our minds.
Inspired by this, we propose a new framework that leverages the
labeled 3D human poses to learn a 3D concept of the human body to reduce the
ambiguity. To have consensus on the body concept from 2D pose, our key insight
is to treat the 2D human pose and the 3D human pose as two different domains.
By adapting the two domains, the body knowledge learned from 3D poses is
applied to 2D poses and guides the 2D pose encoder to generate informative 3D
"imagination" as embedding in pose lifting. Benefiting from the domain
adaptation perspective, the proposed framework unifies the supervised and
semi-supervised 3D pose estimation in a principled framework. Extensive
experiments demonstrate that the proposed approach can achieve state-of-the-art
performance on standard benchmarks. More importantly, it is validated that the
explicitly learned 3D body concept effectively alleviates the 2D-3D ambiguity
in 2D pose lifting, improves the generalization, and enables the network to
exploit the abundant unlabeled 2D data.
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 16:02:12 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Nie",
"Qiang",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Liu",
"Yunhui",
""
]
] |
new_dataset
| 0.988816 |
2201.11221
|
Alejandro D\'iaz-Caro
|
Alejandro D\'iaz-Caro and Gilles Dowek
|
Linear lambda-calculus is linear
|
This is the full revised journal version of the paper accepted at
FSCD 2022 and published at LIPIcs 228:21, 2022
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove a linearity theorem for an extension of linear logic with addition
and multiplication by a scalar: the proofs of some propositions in this logic
are linear in the algebraic sense. This work is part of a wider research
program that aims at defining a logic whose proof language is a quantum
programming language.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 22:48:04 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 16:45:51 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Apr 2022 15:19:40 GMT"
},
{
"version": "v4",
"created": "Sat, 15 Apr 2023 16:48:27 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Díaz-Caro",
"Alejandro",
""
],
[
"Dowek",
"Gilles",
""
]
] |
new_dataset
| 0.987198 |
2201.11460
|
Yuren Cong
|
Yuren Cong, Michael Ying Yang, Bodo Rosenhahn
|
RelTR: Relation Transformer for Scene Graph Generation
|
accepted by IEEE Transactions on Pattern Analysis and Machine
Intelligence
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Different objects in the same scene are more or less related to each other,
but only a limited number of these relationships are noteworthy. Inspired by
DETR, which excels in object detection, we view scene graph generation as a set
prediction problem and propose an end-to-end scene graph generation model RelTR
which has an encoder-decoder architecture. The encoder reasons about the visual
feature context while the decoder infers a fixed-size set of triplets
subject-predicate-object using different types of attention mechanisms with
coupled subject and object queries. We design a set prediction loss performing
the matching between the ground truth and predicted triplets for the end-to-end
training. In contrast to most existing scene graph generation methods, RelTR is
a one-stage method that predicts a set of relationships directly only using
visual appearance without combining entities and labeling all possible
predicates. Extensive experiments on the Visual Genome and Open Images V6
datasets demonstrate the superior performance and fast inference of our model.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 11:53:41 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Aug 2022 20:17:58 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Apr 2023 21:44:13 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Cong",
"Yuren",
""
],
[
"Yang",
"Michael Ying",
""
],
[
"Rosenhahn",
"Bodo",
""
]
] |
new_dataset
| 0.994143 |
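Aside: RelTR's set-prediction loss relies on matching predicted triplets to ground truth. A hedged sketch of the generic Hungarian-matching step used by DETR-style models follows (illustration only; RelTR's actual cost also involves boxes and predicate scores, and the shapes here are assumptions):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    np.random.seed(0)
    pred_scores = np.random.rand(5, 3)  # 5 predicted triplets, 3 classes
    gt_labels = [0, 2]                  # classes of 2 ground-truth triplets

    # Cost of assigning prediction i to ground truth j: negative class score.
    cost = -pred_scores[:, gt_labels]   # shape (num_preds, num_gt)
    pred_idx, gt_idx = linear_sum_assignment(cost)
    print(list(zip(gt_idx, pred_idx)))  # each ground truth gets one prediction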
2203.08897
|
Swathikiran Sudhakaran
|
Swathikiran Sudhakaran, Sergio Escalera, Oswald Lanz
|
Gate-Shift-Fuse for Video Action Recognition
|
Accepted to TPAMI. arXiv admin note: text overlap with
arXiv:1912.00381
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Convolutional Neural Networks are the de facto models for image recognition.
However, 3D CNNs, the straightforward extension of 2D CNNs for video
recognition, have not achieved the same success on standard action recognition
benchmarks. One of the main reasons for this reduced performance of 3D CNNs is
the increased computational complexity, which requires large-scale annotated
datasets to train them at scale. 3D kernel factorization approaches have been proposed
to reduce the complexity of 3D CNNs. Existing kernel factorization approaches
follow hand-designed and hard-wired techniques. In this paper we propose
Gate-Shift-Fuse (GSF), a novel spatio-temporal feature extraction module which
controls interactions in spatio-temporal decomposition and learns to adaptively
route features through time and combine them in a data-dependent manner. GSF
leverages grouped spatial gating to decompose the input tensor and channel
weighting to fuse the decomposed tensors. GSF can be inserted into existing 2D
CNNs to convert them into an efficient and high performing spatio-temporal
feature extractor, with negligible parameter and compute overhead. We perform
an extensive analysis of GSF using two popular 2D CNN families and achieve
state-of-the-art or competitive performance on five standard action recognition
benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 19:19:04 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 17:27:02 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Apr 2023 13:06:27 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Sudhakaran",
"Swathikiran",
""
],
[
"Escalera",
"Sergio",
""
],
[
"Lanz",
"Oswald",
""
]
] |
new_dataset
| 0.99549 |
2203.10642
|
Xuanyao Chen
|
Xuanyao Chen, Tianyuan Zhang, Yue Wang, Yilun Wang, Hang Zhao
|
FUTR3D: A Unified Sensor Fusion Framework for 3D Detection
| null |
CVPR 2023 workshop on autonomous driving
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sensor fusion is an essential topic in many perception systems, such as
autonomous driving and robotics. Existing multi-modal 3D detection models
usually involve customized designs depending on the sensor combinations or
setups. In this work, we propose the first unified end-to-end sensor fusion
framework for 3D detection, named FUTR3D, which can be used in (almost) any
sensor configuration. FUTR3D employs a query-based Modality-Agnostic Feature
Sampler (MAFS), together with a transformer decoder with a set-to-set loss for
3D detection, thus avoiding using late fusion heuristics and post-processing
tricks. We validate the effectiveness of our framework on various combinations
of cameras, low-resolution LiDARs, high-resolution LiDARs, and Radars. On the
nuScenes dataset, FUTR3D achieves better performance than specifically designed
methods across different sensor combinations. Moreover, FUTR3D achieves great
flexibility with different sensor configurations and enables low-cost
autonomous driving. For example, only using a 4-beam LiDAR with cameras, FUTR3D
(58.0 mAP) achieves on par performance with state-of-the-art 3D detection model
CenterPoint (56.6 mAP) using a 32-beam LiDAR.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 20:41:55 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Apr 2023 13:05:44 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Chen",
"Xuanyao",
""
],
[
"Zhang",
"Tianyuan",
""
],
[
"Wang",
"Yue",
""
],
[
"Wang",
"Yilun",
""
],
[
"Zhao",
"Hang",
""
]
] |
new_dataset
| 0.998944 |
2203.16828
|
Sihan Ma
|
Sihan Ma, Jizhizi Li, Jing Zhang, He Zhang, Dacheng Tao
|
Rethinking Portrait Matting with Privacy Preserving
|
Accepted to the International Journal of Computer Vision (IJCV). The
code, dataset, and models are available at
https://github.com/ViTAE-Transformer/P3M-Net. arXiv admin note: substantial
text overlap with arXiv:2104.14222
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, there has been an increasing concern about the privacy issue raised
by identifiable information in machine learning. However, previous portrait
matting methods were all based on identifiable images. To fill the gap, we
present P3M-10k, which is the first large-scale anonymized benchmark for
Privacy-Preserving Portrait Matting (P3M). P3M-10k consists of 10,421 high
resolution face-blurred portrait images along with high-quality alpha mattes,
which enables us to systematically evaluate both trimap-free and trimap-based
matting methods and obtain some useful findings about model generalization
ability under the privacy preserving training (PPT) setting. We also present a
unified matting model dubbed P3M-Net that is compatible with both CNN and
transformer backbones. To further mitigate the cross-domain performance gap
issue under the PPT setting, we devise a simple yet effective Copy and Paste
strategy (P3M-CP), which borrows facial information from public celebrity
images and directs the network to reacquire the face context at both data and
feature level. Extensive experiments on P3M-10k and public benchmarks
demonstrate the superiority of P3M-Net over state-of-the-art methods and the
effectiveness of P3M-CP in improving the cross-domain generalization ability,
implying a great significance of P3M for future research and real-world
applications.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 06:26:07 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 00:19:30 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Ma",
"Sihan",
""
],
[
"Li",
"Jizhizi",
""
],
[
"Zhang",
"Jing",
""
],
[
"Zhang",
"He",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.982907 |
2204.13686
|
Zhongang Cai
|
Zhongang Cai, Daxuan Ren, Ailing Zeng, Zhengyu Lin, Tao Yu, Wenjia
Wang, Xiangyu Fan, Yang Gao, Yifan Yu, Liang Pan, Fangzhou Hong, Mingyuan
Zhang, Chen Change Loy, Lei Yang, Ziwei Liu
|
HuMMan: Multi-Modal 4D Human Dataset for Versatile Sensing and Modeling
|
Homepage: https://caizhongang.github.io/projects/HuMMan/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
4D human sensing and modeling are fundamental tasks in vision and graphics
with numerous applications. With the advances of new sensors and algorithms,
there is an increasing demand for more versatile datasets. In this work, we
contribute HuMMan, a large-scale multi-modal 4D human dataset with 1000 human
subjects, 400k sequences and 60M frames. HuMMan has several appealing
properties: 1) multi-modal data and annotations including color images, point
clouds, keypoints, SMPL parameters, and textured meshes; 2) a popular mobile
device is included in the sensor suite; 3) a set of 500 actions, designed to
cover fundamental movements; 4) multiple tasks such as action recognition, pose
estimation, parametric human recovery, and textured mesh reconstruction are
supported and evaluated. Extensive experiments on HuMMan voice the need for
further study on challenges such as fine-grained action recognition, dynamic
human mesh reconstruction, point cloud-based parametric human recovery, and
cross-device domain gaps.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 17:54:25 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Apr 2023 12:26:14 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Cai",
"Zhongang",
""
],
[
"Ren",
"Daxuan",
""
],
[
"Zeng",
"Ailing",
""
],
[
"Lin",
"Zhengyu",
""
],
[
"Yu",
"Tao",
""
],
[
"Wang",
"Wenjia",
""
],
[
"Fan",
"Xiangyu",
""
],
[
"Gao",
"Yang",
""
],
[
"Yu",
"Yifan",
""
],
[
"Pan",
"Liang",
""
],
[
"Hong",
"Fangzhou",
""
],
[
"Zhang",
"Mingyuan",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Yang",
"Lei",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.999466 |
2205.15360
|
Stavros Nousias PhD
|
Nikos D. Fakotakis, Stavros Nousias, Gerasimos Arvanitis, Evangelia I.
Zacharaki, Konstantinos Moustakas
|
AI-enabled Sound Pattern Recognition on Asthma Medication Adherence:
Evaluation with the RDA Benchmark Suite
| null | null |
10.1109/ACCESS.2023.3243547
| null |
cs.SD cs.CV cs.CY cs.GL eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Asthma is a common, usually long-term respiratory disease with a negative
impact on global society and the economy. Treatment involves using medical
devices (inhalers) that distribute medication to the airways, and its
efficiency depends on the precision of the inhalation technique. There is a
clinical need for objective methods to assess the inhalation technique during
clinical consultation. Health monitoring systems equipped with sensors and
embedded with sound signal detection, analysis, and identification enable the
recognition of drug actuation and could provide powerful tools for effective
audio content analysis. This paper revisits sound pattern recognition with machine learning
techniques for asthma medication adherence assessment and presents the
Respiratory and Drug Actuation (RDA) Suite
(https://gitlab.com/vvr/monitoring-medication-adherence/rda-benchmark) for
benchmarking and further research. The RDA Suite includes a set of tools for
audio processing, feature extraction and classification procedures and is
provided along with a dataset, consisting of respiratory and drug actuation
sounds. The classification models in RDA are implemented based on conventional
and advanced machine learning and deep network architectures. This study
provides a comparative evaluation of the implemented approaches, examines
potential improvements, and discusses challenges and future trends.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 18:08:28 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Jun 2022 11:46:47 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Apr 2023 17:32:06 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Fakotakis",
"Nikos D.",
""
],
[
"Nousias",
"Stavros",
""
],
[
"Arvanitis",
"Gerasimos",
""
],
[
"Zacharaki",
"Evangelia I.",
""
],
[
"Moustakas",
"Konstantinos",
""
]
] |
new_dataset
| 0.997869 |
2206.04218
|
Orian Leitersdorf
|
Orian Leitersdorf, Dean Leitersdorf, Jonathan Gal, Mor Dahan, Ronny
Ronen, Shahar Kvatinsky
|
AritPIM: High-Throughput In-Memory Arithmetic
|
Accepted to IEEE Transactions on Emerging Topics in Computing (TETC)
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Digital processing-in-memory (PIM) architectures are rapidly emerging to
overcome the memory-wall bottleneck by integrating logic within memory
elements. Such architectures provide vast computational power within the memory
itself in the form of parallel bitwise logic operations. We develop novel
algorithmic techniques for PIM that, combined with new perspectives on computer
arithmetic, extend this bitwise parallelism to the four fundamental arithmetic
operations (addition, subtraction, multiplication, and division), for both
fixed-point and floating-point numbers, and using both bit-serial and
bit-parallel approaches. We propose a state-of-the-art suite of arithmetic
algorithms, demonstrating the first algorithm in the literature of digital PIM
for a majority of cases - including cases previously considered impossible for
digital PIM, such as floating-point addition. Through a case study on
memristive PIM, we compare the proposed algorithms to an NVIDIA RTX 3070 GPU
and demonstrate significant throughput and energy improvements.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 01:49:52 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Apr 2023 19:52:53 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Leitersdorf",
"Orian",
""
],
[
"Leitersdorf",
"Dean",
""
],
[
"Gal",
"Jonathan",
""
],
[
"Dahan",
"Mor",
""
],
[
"Ronen",
"Ronny",
""
],
[
"Kvatinsky",
"Shahar",
""
]
] |
new_dataset
| 0.99921 |
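Aside: the column-parallel bitwise style that digital PIM exposes can be sketched in ordinary Python, with an int standing in for a bit-vector whose bit r is the value stored in memory row r. This is an illustration of bit-serial ripple-carry addition in that style, not AritPIM's code:

    K, ROWS = 8, 4                      # operand width; rows added in parallel

    def to_columns(values, k):
        # Transpose row values into k column bit-vectors (one int per column).
        cols = []
        for j in range(k):
            col = 0
            for r, v in enumerate(values):
                col |= ((v >> j) & 1) << r
            cols.append(col)
        return cols

    a, b = [3, 100, 255, 42], [5, 27, 1, 200]
    A, B = to_columns(a, K), to_columns(b, K)

    # Bit-serial ripple-carry: O(K) bitwise steps regardless of the row count.
    carry, S = 0, []
    for j in range(K):
        S.append(A[j] ^ B[j] ^ carry)                     # sum bit, all rows
        carry = (A[j] & B[j]) | (carry & (A[j] ^ B[j]))   # carry bit, all rows
    S.append(carry)                                       # final carry column

    sums = [sum(((S[j] >> r) & 1) << j for j in range(K + 1))
            for r in range(ROWS)]
    assert sums == [x + y for x, y in zip(a, b)]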
2206.15331
|
Arghavan Moradi Dakhel
|
Arghavan Moradi Dakhel, Vahid Majdinasab, Amin Nikanjam, Foutse Khomh,
Michel C. Desmarais, Zhen Ming (Jack) Jiang
|
GitHub Copilot AI pair programmer: Asset or Liability?
|
27 pages, 8 figures
| null | null | null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic program synthesis is a long-lasting dream in software engineering.
Recently, a promising Deep Learning (DL) based solution, called Copilot, has
been proposed by OpenAI and Microsoft as an industrial product. Although some
studies evaluate the correctness of Copilot solutions and report its issues,
more empirical evaluations are necessary to understand how developers can
benefit from it effectively. In this paper, we study the capabilities of
Copilot in two different programming tasks: (i) generating (and reproducing)
correct and efficient solutions for fundamental algorithmic problems, and (ii)
comparing Copilot's proposed solutions with those of human programmers on a set
of programming tasks. For the former, we assess the performance and
functionality of Copilot in solving selected fundamental problems in computer
science, like sorting and implementing data structures. In the latter, a
dataset of programming problems with human-provided solutions is used. The
results show that Copilot is capable of providing solutions for almost all
fundamental algorithmic problems; however, some solutions are buggy and
non-reproducible. Moreover, Copilot has some difficulty combining multiple
methods to generate a solution. Comparing Copilot to humans, our results show
that the ratio of correct solutions is greater for humans than for Copilot's
suggestions, while the buggy solutions generated by Copilot require less effort
to repair.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 15:00:03 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 20:52:00 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Dakhel",
"Arghavan Moradi",
""
],
[
"Majdinasab",
"Vahid",
""
],
[
"Nikanjam",
"Amin",
""
],
[
"Khomh",
"Foutse",
""
],
[
"Desmarais",
"Michel C.",
""
],
[
"Jiang",
"Zhen Ming (Jack)",
""
]
] |
new_dataset
| 0.991632 |
2210.03112
|
Aishwarya Kamath
|
Aishwarya Kamath, Peter Anderson, Su Wang, Jing Yu Koh, Alexander Ku,
Austin Waters, Yinfei Yang, Jason Baldridge and Zarana Parekh
|
A New Path: Scaling Vision-and-Language Navigation with Synthetic
Instructions and Imitation Learning
|
CVPR 2023
| null | null | null |
cs.LG cs.CL cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Recent studies in Vision-and-Language Navigation (VLN) train RL agents to
execute natural-language navigation instructions in photorealistic
environments, as a step towards robots that can follow human instructions.
However, given the scarcity of human instruction data and limited diversity in
the training environments, these agents still struggle with complex language
grounding and spatial language understanding. Pretraining on large text and
image-text datasets from the web has been extensively explored but the
improvements are limited. We investigate large-scale augmentation with
synthetic instructions. We take 500+ indoor environments captured in
densely-sampled 360 degree panoramas, construct navigation trajectories through
these panoramas, and generate a visually-grounded instruction for each
trajectory using Marky, a high-quality multilingual navigation instruction
generator. We also synthesize image observations from novel viewpoints using an
image-to-image GAN. The resulting dataset of 4.2M instruction-trajectory pairs
is two orders of magnitude larger than existing human-annotated datasets, and
contains a wider variety of environments and viewpoints. To efficiently
leverage data at this scale, we train a simple transformer agent with imitation
learning. On the challenging RxR dataset, our approach outperforms all existing
RL agents, improving the state-of-the-art NDTW from 71.1 to 79.1 in seen
environments, and from 64.6 to 66.8 in unseen test environments. Our work
points to a new path to improving instruction-following agents, emphasizing
large-scale imitation learning and the development of synthetic instruction
generation capabilities.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 17:59:08 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2022 00:57:11 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Apr 2023 11:17:35 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Kamath",
"Aishwarya",
""
],
[
"Anderson",
"Peter",
""
],
[
"Wang",
"Su",
""
],
[
"Koh",
"Jing Yu",
""
],
[
"Ku",
"Alexander",
""
],
[
"Waters",
"Austin",
""
],
[
"Yang",
"Yinfei",
""
],
[
"Baldridge",
"Jason",
""
],
[
"Parekh",
"Zarana",
""
]
] |
new_dataset
| 0.973385 |
2210.12922
|
Wenhui Chen
|
Wenhui Chen and Zhijiang Zhang and Liang Yu and Yichun Tai
|
BARS: A Benchmark for Airport Runway Segmentation
|
Applied Intelligence 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Airport runway segmentation can effectively reduce the accident rate during
the landing phase, which has the largest risk of flight accidents. With the
rapid development of deep learning (DL), related methods achieve good
performance on segmentation tasks and can be well adapted to complex scenes.
However, the lack of large-scale, publicly available datasets in this field
makes the development of methods based on DL difficult. Therefore, we propose a
benchmark for airport runway segmentation, named BARS. Additionally, a
semiautomatic annotation pipeline is designed to reduce the annotation
workload. BARS has the largest dataset with the richest categories and the only
instance annotation in the field. The dataset, which was collected using the
X-Plane simulation platform, contains 10,256 images and 30,201 instances with
three categories. We evaluate eleven representative instance segmentation
methods on BARS and analyze their performance. Based on the characteristic of
an airport runway with a regular shape, we propose a plug-and-play smoothing
postprocessing module (SPM) and a contour point constraint loss (CPCL) function
to smooth segmentation results for mask-based and contour-based methods,
respectively. Furthermore, a novel evaluation metric named average smoothness
(AS) is developed to measure smoothness. The experiments show that existing
instance segmentation methods can achieve prediction results with good
performance on BARS. SPM and CPCL can effectively enhance the AS metric while
modestly improving accuracy. Our work will be available at
https://github.com/c-wenhui/BARS.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 02:26:05 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 02:19:44 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Apr 2023 16:00:19 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Chen",
"Wenhui",
""
],
[
"Zhang",
"Zhijiang",
""
],
[
"Yu",
"Liang",
""
],
[
"Tai",
"Yichun",
""
]
] |
new_dataset
| 0.999812 |
2211.06818
|
Meghana Sistla
|
Meghana Sistla, Swarat Chaudhuri, Thomas Reps
|
CFLOBDDs: Context-Free-Language Ordered Binary Decision Diagrams
|
130 pages
| null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new compressed representation of Boolean functions,
called CFLOBDDs (for Context-Free-Language Ordered Binary Decision Diagrams).
They are essentially a plug-compatible alternative to BDDs (Binary Decision
Diagrams), and hence useful for representing certain classes of functions,
matrices, graphs, relations, etc. in a highly compressed fashion. CFLOBDDs
share many of the good properties of BDDs, but--in the best case--the CFLOBDD
for a Boolean function can be exponentially smaller than any BDD for that
function. Compared with the size of the decision tree for a function, a
CFLOBDD--again, in the best case--can give a double-exponential reduction in
size. They have the potential to permit applications to (i) execute much
faster, and (ii) handle much larger problem instances than has been possible
heretofore.
CFLOBDDs are a new kind of decision diagram that go beyond BDDs (and their
many relatives). The key insight is a new way to reuse sub-decision-diagrams:
components of CFLOBDDs are structured hierarchically, so that
sub-decision-diagrams can be treated as standalone ''procedures'' and reused.
We applied CFLOBDDs to the problem of simulating quantum circuits, and found
that for several standard problems the improvement in scalability--compared to
simulation using BDDs--is quite dramatic. In particular, the number of qubits
that could be handled using CFLOBDDs was larger, compared to BDDs, by a factor
of 128x for GHZ; 1,024x for BV; 8,192x for DJ; and 128x for Grover's algorithm.
(With a 15-minute timeout, the number of qubits that CFLOBDDs can handle are
65,536 for GHZ, 524,288 for BV; 4,194,304 for DJ; and 4,096 for Grover's
Algorithm.)
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 04:57:29 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 16:26:32 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Apr 2023 00:31:05 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Sistla",
"Meghana",
""
],
[
"Chaudhuri",
"Swarat",
""
],
[
"Reps",
"Thomas",
""
]
] |
new_dataset
| 0.999307 |
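Aside: the node-reuse idea behind (CFL)OBDDs can be shown with a toy unique table that hash-conses decision-diagram nodes so identical sub-diagrams are stored once; CFLOBDDs go further and reuse whole hierarchical "procedures". A small sketch, not the paper's data structure:

    unique = {}  # (var, low_id, high_id) -> node id; terminals are 0 and 1

    def mk(var, low, high):
        if low == high:           # reduction: skip a redundant test
            return low
        key = (var, low, high)
        if key not in unique:     # sharing: one copy per distinct node
            unique[key] = len(unique) + 2   # ids 0/1 reserved for terminals
        return unique[key]

    def build(f, n, assignment=()):
        if len(assignment) == n:
            return int(f(assignment))
        v = len(assignment)
        return mk(v, build(f, n, assignment + (0,)),
                     build(f, n, assignment + (1,)))

    # Even parity of 8 variables: the full decision tree has 255 internal
    # nodes, but sharing collapses it to at most 2 per level.
    parity = lambda bits: sum(bits) % 2 == 0
    root = build(parity, 8)
    print(len(unique))  # 15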
2211.16076
|
Jan Hubi\v{c}ka
|
Geoffrey Barker, Jan Hubi\v{c}ka, Mark Jacobs, Linda Kimrov\'a, Kendra
Meyer, Doug Peterson
|
Finlay, Thames, Dufay, and Paget color screen process collections: Using
digital registration of viewing screens to reveal original color
|
8 figures, 9 pages; submitted to the proceedings of Colour
Photography and Film: sharing knowledge of analysis, preservation,
conservation, migration of analogue and digital materials
|
In: 2nd Edition of the Conference "Colour Photography and Film:
Sharing knowledge of analysis, preservation, and conservation of analogue and
digital materials", 2022, 15--23
|
10.23738/RCASB.008
| null |
cs.GR cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We discuss digitization, subsequent digital analysis and processing of
negatives (and diapositives) made by Finlay, Thames, Dufay, Paget, and similar
additive color screen processes. These early color processes (introduced in the
1890s and popular until the 1950s) used a special color screen filter and a
monochromatic negative. Due to poor stability of dyes used to produce color
screens many of the photographs appear faded; others exist only in the form of
(monochromatic) negatives. We discuss the possibility of digitally
reconstructing the original color from scans of original negatives or by virtue
of infrared imaging of original transparencies (which eliminates the physically
coupled color filters) and digitally recreating the original color filter
pattern using a new open-source software tool. Photographs taken using additive
color screen processes are some of the very earliest color images of our shared
cultural heritage. They depict people, places, and events for which there are
no other surviving color images. We hope that our new software tool can bring
these images back to life.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 10:39:28 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Barker",
"Geoffrey",
""
],
[
"Hubička",
"Jan",
""
],
[
"Jacobs",
"Mark",
""
],
[
"Kimrová",
"Linda",
""
],
[
"Meyer",
"Kendra",
""
],
[
"Peterson",
"Doug",
""
]
] |
new_dataset
| 0.995827 |
2212.13326
|
Andrew Melnik
|
Federico Malato, Florian Leopold, Amogh Raut, Ville Hautam\"aki,
Andrew Melnik
|
Behavioral Cloning via Search in Video PreTraining Latent Space
| null | null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Our aim is to build autonomous agents that can solve tasks in environments
like Minecraft. To do so, we used an imitation learning-based approach. We
formulate our control problem as a search problem over a dataset of experts'
demonstrations, where the agent copies actions from a similar demonstration
trajectory of image-action pairs. We perform a proximity search over the BASALT
MineRL-dataset in the latent representation of a Video PreTraining model. The
agent copies the actions from the expert trajectory as long as the distance
between the state representations of the agent and the selected expert
trajectory from the dataset does not diverge. Then the proximity search is
repeated. Our approach can effectively recover meaningful demonstration
trajectories and show human-like behavior of an agent in the Minecraft
environment.
|
[
{
"version": "v1",
"created": "Tue, 27 Dec 2022 00:20:37 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 05:38:15 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Malato",
"Federico",
""
],
[
"Leopold",
"Florian",
""
],
[
"Raut",
"Amogh",
""
],
[
"Hautamäki",
"Ville",
""
],
[
"Melnik",
"Andrew",
""
]
] |
new_dataset
| 0.992093 |
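Aside: the control-as-search loop the abstract describes reduces to nearest-neighbor lookup in a latent space. A simplified sketch (the real system uses Video PreTraining latents over MineRL demonstrations; names and the scalar threshold are assumptions):

    import numpy as np

    def proximity_search_policy(embed, dataset, divergence=1.0):
        # dataset: list of (latent_vector, action) pairs from expert episodes
        latents = np.stack([z for z, _ in dataset])
        idx = None

        def act(observation):
            nonlocal idx
            z = embed(observation)
            if (idx is None or idx >= len(dataset)
                    or np.linalg.norm(latents[idx] - z) > divergence):
                # Re-search: jump to the most similar expert state.
                idx = int(np.argmin(np.linalg.norm(latents - z, axis=1)))
            action = dataset[idx][1]  # copy the expert's action
            idx += 1                  # then follow the demonstration forward
            return action

        return act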
2301.06660
|
Yida Mu
|
Yida Mu, Mali Jin, Charlie Grimshaw, Carolina Scarton, Kalina
Bontcheva, Xingyi Song
|
VaxxHesitancy: A Dataset for Studying Hesitancy towards COVID-19
Vaccination on Twitter
|
Accepted at ICWSM 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Vaccine hesitancy has been a common concern, probably since vaccines were
created; with the popularisation of social media, people started to express
their concerns about vaccines online alongside those posting pro- and
anti-vaccine content.
vaccine, social media users posted about their fears and concerns or about
their support and belief into the effectiveness of these rapidly developing
vaccines. Identifying and understanding the reasons behind public hesitancy
towards COVID-19 vaccines is important for policy makers who need to develop
actions to better inform the population with the aim of increasing vaccine
take-up. In the case of COVID-19, where the fast development of the vaccines
was mirrored closely by growth in anti-vaxx disinformation, automatic means of
detecting citizen attitudes towards vaccination became necessary. This is an
important computational social sciences task that requires data analysis in
order to gain in-depth understanding of the phenomena at hand. Annotated data
is also necessary for training data-driven models for more nuanced analysis of
attitudes towards vaccination. To this end, we created a new collection of over
3,101 tweets annotated with users' attitudes towards COVID-19 vaccination
(stance). In addition, we develop a domain-specific language model (VaxxBERT)
that achieves the best predictive performance (73.0 accuracy and 69.3 F1-score)
as compared to a robust set of baselines. To the best of our knowledge, these
are the first dataset and model that model vaccine hesitancy as a category
distinct from pro- and anti-vaccine stance.
|
[
{
"version": "v1",
"created": "Tue, 17 Jan 2023 02:00:31 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 03:28:03 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Apr 2023 18:58:33 GMT"
},
{
"version": "v4",
"created": "Sat, 15 Apr 2023 15:32:43 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Mu",
"Yida",
""
],
[
"Jin",
"Mali",
""
],
[
"Grimshaw",
"Charlie",
""
],
[
"Scarton",
"Carolina",
""
],
[
"Bontcheva",
"Kalina",
""
],
[
"Song",
"Xingyi",
""
]
] |
new_dataset
| 0.999805 |
2301.10872
|
Abu Reyan Ahmed
|
Reyan Ahmed, Patrizio Angelini, Michael A. Bekos, Giuseppe Di
Battista, Michael Kaufmann, Philipp Kindermann, Stephen Kobourov, Martin
N\"ollenburg, Antonios Symvonis, Ana\"is Villedieu, Markus Wallinger
|
Splitting Vertices in 2-Layer Graph Drawings
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Bipartite graphs model the relationships between two disjoint sets of
entities in several applications and are naturally drawn as 2-layer graph
drawings. In such drawings, the two sets of entities (vertices) are placed on
two parallel lines (layers), and their relationships (edges) are represented by
segments connecting vertices. Methods for constructing 2-layer drawings often
try to minimize the number of edge crossings. We use vertex splitting to reduce
the number of crossings, by replacing selected vertices on one layer by two (or
more) copies and suitably distributing their incident edges among these copies.
We study several optimization problems related to vertex splitting, either
minimizing the number of crossings or removing all crossings with fewest
splits. While we prove that some variants are NP-complete, we obtain
polynomial-time algorithms for others. We run our algorithms on a benchmark set
of bipartite graphs representing the relationships between human anatomical
structures and cell types.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 23:36:28 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Apr 2023 14:26:19 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Ahmed",
"Reyan",
""
],
[
"Angelini",
"Patrizio",
""
],
[
"Bekos",
"Michael A.",
""
],
[
"Di Battista",
"Giuseppe",
""
],
[
"Kaufmann",
"Michael",
""
],
[
"Kindermann",
"Philipp",
""
],
[
"Kobourov",
"Stephen",
""
],
[
"Nöllenburg",
"Martin",
""
],
[
"Symvonis",
"Antonios",
""
],
[
"Villedieu",
"Anaïs",
""
],
[
"Wallinger",
"Markus",
""
]
] |
new_dataset
| 0.999383 |
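Aside: in a 2-layer drawing with fixed vertex orders, two edges cross exactly when their endpoint positions are inverted, so crossings can be counted directly and the effect of a vertex split measured. A small illustrative sketch, not code from the paper:

    from itertools import combinations

    def crossings(edges):
        # edges: (top_position, bottom_position) pairs; orders are fixed
        return sum((a - c) * (b - d) < 0
                   for (a, b), (c, d) in combinations(edges, 2))

    print(crossings([(0, 1), (1, 0), (2, 1)]))   # 1

    # Split the bottom vertex at position 1 into copies at -1 and 1 and
    # reassign the edge from top vertex 0 to the new copy: planar again.
    print(crossings([(0, -1), (1, 0), (2, 1)]))  # 0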
2302.01857
|
Nan Jiang
|
Nan Jiang, Thibaud Lutellier, Yiling Lou, Lin Tan, Dan Goldwasser, and
Xiangyu Zhang
|
KNOD: Domain Knowledge Distilled Tree Decoder for Automated Program
Repair
|
This paper is accepted by 2023 IEEE/ACM 45th International Conference
on Software Engineering (ICSE)
| null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Automated Program Repair (APR) improves software reliability by generating
patches for a buggy program automatically. Recent APR techniques leverage deep
learning (DL) to build models to learn to generate patches from existing
patches and code corpora. While promising, DL-based APR techniques suffer from
the abundant syntactically or semantically incorrect patches in the patch
space. These patches often disobey the syntactic and semantic domain knowledge
of source code and thus cannot be the correct patches to fix a bug.
We propose a DL-based APR approach KNOD, which incorporates domain knowledge
to guide patch generation in a direct and comprehensive way. KNOD has two major
novelties, including (1) a novel three-stage tree decoder, which directly
generates Abstract Syntax Trees of patched code according to the inherent tree
structure, and (2) a novel domain-rule distillation, which leverages syntactic
and semantic rules and teacher-student distributions to explicitly inject the
domain knowledge into the decoding procedure during both the training and
inference phases.
We evaluate KNOD on three widely-used benchmarks. KNOD fixes 72 bugs on the
Defects4J v1.2, 25 bugs on the QuixBugs, and 50 bugs on the additional
Defects4J v2.0 benchmarks, outperforming all existing APR tools.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 17:02:56 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2023 07:43:51 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Apr 2023 20:29:38 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Jiang",
"Nan",
""
],
[
"Lutellier",
"Thibaud",
""
],
[
"Lou",
"Yiling",
""
],
[
"Tan",
"Lin",
""
],
[
"Goldwasser",
"Dan",
""
],
[
"Zhang",
"Xiangyu",
""
]
] |
new_dataset
| 0.998975 |
2302.11494
|
J\'er\'emy Anger
|
Ngoc Long Nguyen, J\'er\'emy Anger, Lara Raad, Bruno Galerne, Gabriele
Facciolo
|
On The Role of Alias and Band-Shift for Sentinel-2 Super-Resolution
|
4 pages, 3 figures
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we study the problem of single-image super-resolution (SISR) of
Sentinel-2 imagery. We show that, thanks to its unique sensor specification,
namely the inter-band shift and alias, deep-learning methods are able to
recover fine details. By training a model using a simple $L_1$ loss, results
are free of hallucinated details. For this study, we build a dataset of pairs
of images Sentinel-2/PlanetScope to train and evaluate our super-resolution
(SR) model.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 17:08:45 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 16:24:05 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Nguyen",
"Ngoc Long",
""
],
[
"Anger",
"Jérémy",
""
],
[
"Raad",
"Lara",
""
],
[
"Galerne",
"Bruno",
""
],
[
"Facciolo",
"Gabriele",
""
]
] |
new_dataset
| 0.999421 |
2303.00085
|
Shrey Pareek
|
Shrey Pareek, Harris Nisar and Thenkurussi Kesavadas
|
AR3n: A Reinforcement Learning-based Assist-As-Needed Controller for
Robotic Rehabilitation
|
8 pages, 9 figures, IEEE RA-M
|
IEEE Robotics and Automation Magazine, 2023
| null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present AR3n (pronounced as Aaron), an assist-as-needed
(AAN) controller that utilizes reinforcement learning to supply adaptive
assistance during a robot assisted handwriting rehabilitation task. Unlike
previous AAN controllers, our method does not rely on patient specific
controller parameters or physical models. We propose the use of a virtual
patient model to generalize AR3n across multiple subjects. The system modulates
robotic assistance in real time based on a subject's tracking error, while
minimizing the amount of robotic assistance. The controller is experimentally
validated through a set of simulations and human subject experiments. Finally,
a comparative study with a traditional rule-based controller is conducted to
analyze differences in assistance mechanisms of the two controllers.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 21:04:05 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 16:12:15 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2023 01:07:13 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Apr 2023 03:11:45 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Pareek",
"Shrey",
""
],
[
"Nisar",
"Harris",
""
],
[
"Kesavadas",
"Thenkurussi",
""
]
] |
new_dataset
| 0.998872 |
2303.03599
|
Yili Jin
|
Kaiyuan Hu, Yili Jin, Haowen Yang, Junhua Liu, Fangxin Wang
|
FSVVD: A Dataset of Full Scene Volumetric Video
|
Accepted by MMSys'23 Open Dataset and Software Track. The dataset and
additional tools can be accessed via
https://cuhksz-inml.github.io/full_scene_volumetric_video_dataset/
| null | null | null |
cs.MM cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent years have witnessed a rapid development of immersive multimedia which
bridges the gap between the real world and virtual space. Volumetric videos, as
an emerging representative 3D video paradigm that empowers extended reality,
stand out to provide unprecedented immersive and interactive video watching
experience. Despite the tremendous potential, the research towards 3D
volumetric video is still in its infancy, relying on sufficient and complete
datasets for further exploration. However, existing related volumetric video
datasets mostly include only a single object, lacking details about the scene
and the interaction between the two. In this paper, we focus on the currently
most widely used data format, the point cloud, and for the first time release a
full-scene volumetric video dataset that includes multiple people and their
daily activities interacting with the external environments. Comprehensive
dataset description and analysis are conducted, with potential usage of this
dataset. The dataset and additional tools can be accessed via the following
website: https://cuhksz-inml.github.io/full_scene_volumetric_video_dataset/.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 02:31:08 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 08:50:55 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Hu",
"Kaiyuan",
""
],
[
"Jin",
"Yili",
""
],
[
"Yang",
"Haowen",
""
],
[
"Liu",
"Junhua",
""
],
[
"Wang",
"Fangxin",
""
]
] |
new_dataset
| 0.999676 |
2303.10335
|
Su Zhang
|
Su Zhang, Ziyuan Zhao, Cuntai Guan
|
Multimodal Continuous Emotion Recognition: A Technical Report for ABAW5
|
6 pages. 1 figure. arXiv admin note: substantial text overlap with
arXiv:2203.13031
| null | null | null |
cs.MM cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We used two multimodal models for continuous valence-arousal recognition
using visual, audio, and linguistic information. The first model is the same
as the one we used in ABAW2 and ABAW3, which employs leader-follower attention.
The second model has the same architecture for spatial and temporal encoding.
As for the fusion block, it employs a compact and straightforward channel
attention, borrowed from the End2You toolkit. Unlike our previous attempts,
which used the VGGish feature directly as the audio feature, this time we feed
a pre-trained VGG model with log-mel spectrograms and fine-tune it during
training. To make full use of the data and alleviate over-fitting,
cross-validation is carried out. The code is available at
https://github.com/sucv/ABAW3.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 04:50:07 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 02:18:29 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Zhang",
"Su",
""
],
[
"Zhao",
"Ziyuan",
""
],
[
"Guan",
"Cuntai",
""
]
] |
new_dataset
| 0.950255 |
2303.14114
|
Shay Snyder
|
Shay Snyder (1), Hunter Thompson (2), Md Abdullah-Al Kaiser (3),
Gregory Schwartz (4), Akhilesh Jaiswal (3), and Maryam Parsa (1) ((1) George
Mason University, (2) Georgia Institute of Technology, (3) University of
Southern California, (4) Northwestern University)
|
Object Motion Sensitivity: A Bio-inspired Solution to the Ego-motion
Problem for Event-based Cameras
|
This document is 9 pages and has 6 figures, tables, and algorithms
| null | null | null |
cs.CV cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Neuromorphic (event-based) image sensors draw inspiration from the human
retina to create an electronic device that can process visual stimuli in a way
that closely resembles its biological counterpart. These sensors process
information significantly differently than traditional RGB sensors.
Specifically, the sensory information generated by event-based image sensors
is orders of magnitude sparser compared to that of RGB sensors. The first
generation of neuromorphic image sensors, the Dynamic Vision Sensor (DVS), is
inspired by the computations confined to the photoreceptors and the first
retinal synapse. In this work, we highlight the capability of the second
generation of neuromorphic image sensors, Integrated Retinal Functionality in
CMOS Image Sensors (IRIS), which aims to mimic full retinal computations from
photoreceptors to output of the retina (retinal ganglion cells) for targeted
feature-extraction. The feature of choice in this work is Object Motion
Sensitivity (OMS) that is processed locally in the IRIS sensor. Our results
show that OMS can accomplish standard computer vision tasks with similar
efficiency to conventional RGB and DVS solutions but offers drastic bandwidth
reduction. This cuts the wireless and computing power budgets and opens up vast
opportunities in high-speed, robust, energy-efficient, and low-bandwidth
real-time decision making.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 16:22:06 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 01:55:42 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Apr 2023 21:43:46 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Snyder",
"Shay",
""
],
[
"Thompson",
"Hunter",
""
],
[
"Kaiser",
"Md Abdullah-Al",
""
],
[
"Schwartz",
"Gregory",
""
],
[
"Jaiswal",
"Akhilesh",
""
],
[
"Parsa",
"Maryam",
""
]
] |
new_dataset
| 0.998145 |
2303.16818
|
Haimei Zhao
|
Haimei Zhao, Qiming Zhang, Shanshan Zhao, Jing Zhang, Dacheng Tao
|
BEVSimDet: Simulated Multi-modal Distillation in Bird's-Eye View for
Multi-view 3D Object Detection
|
15 pages; add link
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-view camera-based 3D object detection has gained popularity due to its
low cost. But accurately inferring 3D geometry solely from camera data remains
challenging, which impacts model performance. One promising approach to address
this issue is to distill precise 3D geometry knowledge from LiDAR data.
However, transferring knowledge between different sensor modalities is hindered
by the significant modality gap. In this paper, we approach this challenge from
the perspective of both architecture design and knowledge distillation and
present a new simulated multi-modal 3D object detection method named BEVSimDet.
We first introduce a novel framework that includes a LiDAR and camera
fusion-based teacher and a simulated multi-modal student, where the student
simulates multi-modal features with image-only input. To facilitate effective
distillation, we propose a simulated multi-modal distillation scheme that
supports intra-modal, cross-modal, and multi-modal distillation simultaneously,
in Bird's-eye-view (BEV) space. By combining them together, BEVSimDet can learn
better feature representations for 3D object detection while enjoying
cost-effective camera-only deployment. Experimental results on the challenging
nuScenes benchmark demonstrate the effectiveness and superiority of BEVSimDet
over recent representative methods. The source code will be released at
\href{https://github.com/ViTAE-Transformer/BEVSimDet}{BEVSimDet}.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 16:08:59 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 14:53:14 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Apr 2023 02:31:44 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Zhao",
"Haimei",
""
],
[
"Zhang",
"Qiming",
""
],
[
"Zhao",
"Shanshan",
""
],
[
"Zhang",
"Jing",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.995565 |
2304.00276
|
Bingxi Liu
|
Bingxi Liu, Yujie Fu, Feng Lu, Jinqiang Cui, Yihong Wu, Hong Zhang
|
NPR: Nocturnal Place Recognition in Streets
|
10 pages, 6 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Visual Place Recognition (VPR) is the task of retrieving database images
similar to a query photo by comparing it to a large database of known images.
In real-world applications, extreme illumination changes caused by query images
taken at night pose a significant obstacle that VPR needs to overcome. However,
a training set with day-night correspondence for city-scale, street-level VPR
does not exist. To address this challenge, we propose a novel pipeline that
divides VPR and conquers Nocturnal Place Recognition (NPR). Specifically, we
first established a street-level day-night dataset, NightStreet, and used it to
train an unpaired image-to-image translation model. Then we used this model to
process existing large-scale VPR datasets to generate the VPR-Night datasets
and demonstrated how to combine them with two popular VPR pipelines. Finally,
we proposed a divide-and-conquer VPR framework and provided explanations at the
theoretical, experimental, and application levels. Under our framework,
previous methods can significantly improve performance on two public datasets,
including the top-ranked method.
|
[
{
"version": "v1",
"created": "Sat, 1 Apr 2023 09:43:58 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 16:28:47 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Liu",
"Bingxi",
""
],
[
"Fu",
"Yujie",
""
],
[
"Lu",
"Feng",
""
],
[
"Cui",
"Jinqiang",
""
],
[
"Wu",
"Yihong",
""
],
[
"Zhang",
"Hong",
""
]
] |
new_dataset
| 0.999083 |
2304.04301
|
Yasemin Ozkan Aydin
|
Sean Even, Yasemin Ozkan-Aydin
|
Locomotion and Obstacle Avoidance of a Worm-like Soft Robot
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a soft earthworm robot that is capable of both efficient
locomotion and obstacle avoidance. The robot is designed to replicate the
unique locomotion mechanisms of earthworms, which enable them to move through
narrow and complex environments with ease. The robot consists of multiple
segments, each with its own set of actuators, that are connected through rigid
plastic joints, allowing for increased adaptability and flexibility in
navigating different environments. The robot utilizes proprioceptive sensing
and control algorithms to detect and avoid obstacles in real-time while
maintaining efficient locomotion. The robot uses a pneumatic actuation system
to mimic the circumnutation behavior exhibited by plant roots in order to
navigate through complex environments. The results demonstrate the capabilities
of the robot for navigating through cluttered environments, making this
development significant for various fields of robotics, including search and
rescue, environmental monitoring, and medical procedures.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 19:30:49 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 01:33:54 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Even",
"Sean",
""
],
[
"Ozkan-Aydin",
"Yasemin",
""
]
] |
new_dataset
| 0.992913 |
2304.05224
|
Rafiah Patel
|
Rafiah Patel
|
A user co-designed digital INtervention for Child LangUage DisordEr: The
INCLUDE Project Protocol
|
9 pages, 1 figure, 1 table. Paper has been selected following peer
review for presenting at the "CHI 2023 Workshop on Child-centred AI Design:
Definition, Operation and Considerations, April 23, 2023, Hamburg, Germany"
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Around ten percent of children may present with a disorder where language
does not develop as expected. This often affects vocabulary skills, i.e.,
finding the words to express wants, needs and ideas, which can influence
behaviours linked to wellbeing and daily functioning, such as concentration,
independence, social interactions and managing emotions. Without specialist
support, needs can increase in severity and continue to adulthood.
The types of support, known as interventions, showing the strongest evidence
for improving vocabulary, with some signs of improved behaviour and wellbeing,
are ones that use word webs.
sound and meaning information about a word to strengthen the child's word
knowledge and use. The diagrams resemble what is commonly known as mind-maps
and are widely used by Speech and Language Therapists in partnership with
school educators to help children with language difficulties. In addition,
interventions delivered through mobile devices have led in some cases to
increased vocabulary gains with a positive influence on wellbeing and academic
attainment.
With advances in technology and availability of user-friendly mobile devices
to capture, combine and replay multimedia, new opportunities for designing
bespoke vocabulary instruction have emerged that are without timing and
location constraints. This brings the potential to engage and motivate users
and harbour independence through functional strategies that support each
child's unique language needs. To achieve this, children with language
disorder, their parents/carers, support professionals and software development
team members must work jointly to create an intervention that is fit for
purpose. This is the first research planned to explore the collaborative
development and acceptability of a digitally enhanced vocabulary intervention
for child language disorder.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 13:51:45 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Apr 2023 11:11:27 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Apr 2023 15:16:41 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Patel",
"Rafiah",
""
]
] |
new_dataset
| 0.983319 |
2304.06002
|
Hao Xu
|
Bo Li, YiHua Chen, Hao Xu and Fei Zhong
|
Fast vehicle detection algorithm based on lightweight YOLO7-tiny
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The swift and precise detection of vehicles plays a significant role in
intelligent transportation systems. Current vehicle detection algorithms
encounter challenges of high computational complexity, low detection rate, and
limited feasibility on mobile devices. To address these issues, this paper
proposes a lightweight vehicle detection algorithm based on YOLOv7-tiny (You
Only Look Once version seven), called Ghost-YOLOv7. The width of the model is
scaled to 0.5 and the standard convolution of the backbone network is replaced
with Ghost convolution to achieve a lighter network and improve the detection
speed; then a self-designed Ghost bi-directional feature pyramid network
(Ghost-BiFPN) is embedded into the neck network to enhance the feature
extraction capability of the algorithm and enrich semantic information; a
Ghost Decoupled Head (GDH) is employed for accurate prediction of vehicle
location and type; finally, a coordinate attention mechanism is introduced
into the output layer to suppress environmental interference. The WIoU loss
function is employed to further enhance the detection accuracy. Ablation
experiment results on the PASCAL VOC dataset demonstrate that Ghost-YOLOv7
outperforms the original YOLOv7-tiny model, achieving a 29.8% reduction in
computation, a 37.3% reduction in the number of parameters, a 35.1% reduction
in model weight, a 1.1% higher mean average precision (mAP), and a detection
speed 27 FPS higher than that of the original algorithm. Ghost-YOLOv7 was also
evaluated on the KITTI and BIT-Vehicle datasets, and the results show that
this algorithm has the best overall performance.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 17:28:30 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 03:38:22 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Apr 2023 06:47:01 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Li",
"Bo",
""
],
[
"Chen",
"YiHua",
""
],
[
"Xu",
"Hao",
""
],
[
"Zhong",
"Fei",
""
]
] |
new_dataset
| 0.993734 |
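Aside: the Ghost convolution the abstract swaps in for standard convolution comes from GhostNet; a hedged PyTorch sketch of the generic module follows (a generic configuration, not necessarily the paper's): a thin primary convolution is topped up with cheap depthwise "ghost" features, roughly halving FLOPs at ratio 2.

    import torch
    import torch.nn as nn

    class GhostConv(nn.Module):
        def __init__(self, c_in, c_out, k=1, s=1, ratio=2):
            super().__init__()
            c_prim = c_out // ratio
            self.primary = nn.Sequential(   # ordinary, but thin, convolution
                nn.Conv2d(c_in, c_prim, k, s, k // 2, bias=False),
                nn.BatchNorm2d(c_prim), nn.SiLU())
            self.cheap = nn.Sequential(     # depthwise "ghost" generator
                nn.Conv2d(c_prim, c_out - c_prim, 5, 1, 2,
                          groups=c_prim, bias=False),
                nn.BatchNorm2d(c_out - c_prim), nn.SiLU())

        def forward(self, x):
            y = self.primary(x)
            return torch.cat([y, self.cheap(y)], dim=1)

    x = torch.randn(1, 32, 80, 80)
    print(GhostConv(32, 64, k=3)(x).shape)  # torch.Size([1, 64, 80, 80])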
2304.06943
|
Qingsen Yan
|
Qingsen Yan, Weiye Chen, Song Zhang, Yu Zhu, Jinqiu Sun, Yanning Zhang
|
A Unified HDR Imaging Method with Pixel and Patch Level
|
accepted by CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mapping Low Dynamic Range (LDR) images with different exposures to High
Dynamic Range (HDR) remains nontrivial and challenging on dynamic scenes due to
ghosting caused by object motion or camera jitter. With the success of Deep
Neural Networks (DNNs), several DNN-based methods have been proposed to
alleviate ghosting, but they cannot generate satisfactory results when motion and
saturation occur. To generate visually pleasing HDR images in various cases, we
propose a hybrid HDR deghosting network, called HyHDRNet, to learn the
complicated relationship between reference and non-reference images. The
proposed HyHDRNet consists of a content alignment subnetwork and a
Transformer-based fusion subnetwork. Specifically, to effectively avoid
ghosting from the source, the content alignment subnetwork uses patch
aggregation and ghost attention to integrate similar content from
non-reference images at the patch level and suppress undesired components at the
pixel level. To achieve mutual guidance between the patch and pixel levels, we
leverage a gating module to sufficiently swap useful information both in
ghosted and saturated regions. Furthermore, to obtain a high-quality HDR image,
the Transformer-based fusion subnetwork uses a Residual Deformable Transformer
Block (RDTB) to adaptively merge information for different exposed regions. We
examined the proposed method on four widely used public HDR image deghosting
datasets. Experiments demonstrate that HyHDRNet outperforms state-of-the-art
methods both quantitatively and qualitatively, achieving appealing HDR
visualization with unified textures and colors.
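A hedged sketch of the gating idea described above (the paper's exact module
is not specified here; channel sizes are assumptions): a learned sigmoid gate
mixes patch-level and pixel-level feature maps so useful information can flow
both ways.

    import torch
    import torch.nn as nn

    class GatingFusion(nn.Module):
        def __init__(self, channels=64):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
                nn.Sigmoid())

        def forward(self, f_patch, f_pixel):  # both (batch, C, H, W)
            # g close to 1 keeps patch-level content, close to 0 pixel-level
            g = self.gate(torch.cat([f_patch, f_pixel], dim=1))
            return g * f_patch + (1 - g) * f_pixel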
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 06:21:57 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 01:38:17 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Yan",
"Qingsen",
""
],
[
"Chen",
"Weiye",
""
],
[
"Zhang",
"Song",
""
],
[
"Zhu",
"Yu",
""
],
[
"Sun",
"Jinqiu",
""
],
[
"Zhang",
"Yanning",
""
]
] |
new_dataset
| 0.964106 |
2304.07274
|
Simon van Wageningen
|
Simon van Wageningen, Tamara Mchedlidze, Alexandru Telea
|
Identifying Cluttering Edges in Near-Planar Graphs
|
Short paper for proceedings of EuroVis 2023 conference
| null | null | null |
cs.CG cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Planar drawings of graphs tend to be favored over non-planar drawings.
Testing planarity and creating a planar layout of a planar graph can be done in
linear time. However, creating readable drawings of nearly planar graphs
remains a challenge. We therefore seek to answer which edges of nearly planar
graphs create clutter in their drawings generated by mainstream graph drawing
algorithms. We present a heuristic to identify problematic edges in nearly
planar graphs and adjust their weights in order to produce higher quality
layouts with spring-based drawing algorithms. Our experiments show that our
heuristic produces significantly higher quality drawings for augmented grid
graphs, augmented triangulations, and deep triangulations.
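One plausible reading of the heuristic, sketched with networkx (an
illustration, not the authors' code): count the crossings each edge produces
in a spring layout and down-weight the worst offenders before re-running the
layout.

    import itertools
    import networkx as nx

    def _crosses(p1, p2, p3, p4):
        ccw = lambda a, b, c: (c[1]-a[1])*(b[0]-a[0]) > (b[1]-a[1])*(c[0]-a[0])
        return ccw(p1, p3, p4) != ccw(p2, p3, p4) and ccw(p1, p2, p3) != ccw(p1, p2, p4)

    def relayout_downweighting(G, factor=0.2, top_k=5):
        pos = nx.spring_layout(G, seed=0)
        crossings = {e: 0 for e in G.edges}
        for e1, e2 in itertools.combinations(G.edges, 2):
            if len(set(e1) | set(e2)) == 4 and _crosses(
                    pos[e1[0]], pos[e1[1]], pos[e2[0]], pos[e2[1]]):
                crossings[e1] += 1
                crossings[e2] += 1
        for e, _ in sorted(crossings.items(), key=lambda kv: -kv[1])[:top_k]:
            G.edges[e]["weight"] = factor  # spring_layout honours edge weights
        return nx.spring_layout(G, weight="weight", seed=0)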
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 17:36:41 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 14:03:43 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"van Wageningen",
"Simon",
""
],
[
"Mchedlidze",
"Tamara",
""
],
[
"Telea",
"Alexandru",
""
]
] |
new_dataset
| 0.999156 |
2304.07291
|
Emilio Mart\'inez-Pa\~neda
|
K. Au-Yeung, A. Quintanas-Corominas, E. Mart\'inez-Pa\~neda, W. Tan
|
Hygroscopic phase field fracture modelling of composite materials
| null | null | null | null |
cs.CE cond-mat.mtrl-sci physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the effect of moisture content upon the degradation
behaviour of composite materials. A coupled phase field framework considering
moisture diffusion, hygroscopic expansion, and fracture behaviour is developed.
This multi-physics framework is used to explore the damage evolution of
composite materials, spanning the micro-, meso- and macro-scales. The
micro-scale unit-cell model shows how the mismatch between the hygroscopic
expansion of fibre and matrix leads to interface debonding. From the meso-scale
ply-level model, we learn that the distribution of fibres has a minor influence
on the material properties, while increasing moisture content facilitates
interface debonding. The macro-scale laminate-level model shows that moisture
induces a higher degree of damage on the longitudinal ply relative to the
transverse ply. This work opens a new avenue to understand and predict
environmentally-assisted degradation in composite materials.
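For orientation, a common phase-field fracture functional of the AT2 type,
extended with a hygroscopic eigenstrain, reads as follows (the paper's exact
formulation may differ):

    $\Psi = \int_\Omega \left[ (1-d)^2\, \psi_e(\varepsilon - \beta\, \Delta c) + \frac{G_c}{2}\left( \frac{d^2}{\ell} + \ell\, |\nabla d|^2 \right) \right] \mathrm{d}V$

where $d$ is the phase-field damage variable, $\beta$ the hygroscopic
expansion coefficient, $\Delta c$ the change in moisture concentration, $G_c$
the critical energy release rate, and $\ell$ the regularisation length scale.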
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 16:22:15 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Au-Yeung",
"K.",
""
],
[
"Quintanas-Corominas",
"A.",
""
],
[
"Martínez-Pañeda",
"E.",
""
],
[
"Tan",
"W.",
""
]
] |
new_dataset
| 0.986432 |
2304.07303
|
Gabriel Avelino Sampedro
|
Jayrald Empino, Jean Allyson Junsay, Mary Grace Verzon, Mideth
Abisado, Shekinah Lor Huyo-a, Gabriel Avelino Sampedro
|
Smart Metro: Deep Learning Approaches to Forecasting the MRT Line 3
Ridership
| null |
International Journal of Computing Sciences Research (ISSN print:
2546-0552; ISSN online: 2546-115X), Vol. 7, pp. 1923-1936
|
10.25147/ijcsr.2017.001.1.137
| null |
cs.LG cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Since its establishment in 1999, the Metro Rail Transit Line 3 (MRT3) has
served as a transportation option for numerous passengers in Metro Manila,
Philippines. The Philippine government's transportation department records more
than a thousand people using the MRT3 daily, and forecasting the daily passenger
count can be rather challenging. The MRT3's daily ridership fluctuates owing to
variables such as holidays, working days, and other unexpected issues.
Commuters do not know how many other commuters are on their route on a given
day, which may hinder their ability to plan an efficient itinerary. Currently,
the DOTr depends on spreadsheets containing historical data, which might be
challenging to examine. This study presents a time series prediction of daily
traffic to anticipate future attendance at a particular station on specific
days.
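A minimal, hypothetical sketch of the kind of model such a study typically
trains (window size, layer sizes, and data handling are illustrative
assumptions, not details taken from the study):

    import torch
    import torch.nn as nn

    class RidershipLSTM(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                # x: (batch, window, 1) scaled daily counts
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])  # predicted next-day ridership (scaled)

    model = RidershipLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # train on sliding windows of the historical series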
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 07:39:10 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Empino",
"Jayrald",
""
],
[
"Junsay",
"Jean Allyson",
""
],
[
"Verzon",
"Mary Grace",
""
],
[
"Abisado",
"Mideth",
""
],
[
"Huyo-a",
"Shekinah Lor",
""
],
[
"Sampedro",
"Gabriel Avelino",
""
]
] |
new_dataset
| 0.951236 |
2304.07328
|
Henrik Ejersbo
|
Henrik Ejersbo, Kenneth Lausdahl, Mirgita Frasheri, Lukas Esterle
|
fmiSwap: Run-time Swapping of Models for Co-simulation and Digital Twins
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital Twins represent a new and disruptive technology, where digital
replicas of (cyber)-physical systems operate for long periods of time alongside
their (cyber)-physical counterparts, with enabled bi-directional communication
between them. However promising, the development of digital twins is a
non-trivial problem, since initially adequate models may become
obsolete over time due to wear and tear of the physical components, accumulated
errors, or the evolving interaction with the environment. As such, there is a
clear need for mechanisms that support swapping in new models, as well as changing
model structures as a whole when necessary. To address this challenge, we
propose in this paper a novel artefact, fmiSwap, that is FMI compliant and
allows for run-time swapping in standalone co-simulations, where different
strategies can be tested easily, as well as in fully deployed DT settings with
hardware in the loop. We adopt a water-tank case-study consisting of a tank and
its controller to demonstrate how fmiSwap works and how it can support swaps in
a safe manner.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 12:12:19 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Ejersbo",
"Henrik",
""
],
[
"Lausdahl",
"Kenneth",
""
],
[
"Frasheri",
"Mirgita",
""
],
[
"Esterle",
"Lukas",
""
]
] |
new_dataset
| 0.999116 |
2304.07349
|
Jingrong Chen
|
Jingrong Chen and Yongji Wu and Shihan Lin and Yechen Xu and Xinhao
Kong and Thomas Anderson and Matthew Lentz and Xiaowei Yang and Danyang Zhuo
|
Remote Procedure Call as a Managed System Service
|
NSDI 2023
| null | null | null |
cs.NI cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remote Procedure Call (RPC) is a widely used abstraction for cloud computing.
The programmer specifies type information for each remote procedure, and a
compiler generates stub code linked into each application to marshal and
unmarshal arguments into message buffers. Increasingly, however, application
and service operations teams need a high degree of visibility and control over
the flow of RPCs between services, leading many installations to use sidecars
or service mesh proxies for manageability and policy flexibility. These
sidecars typically involve inspection and modification of RPC data that the
stub compiler had just carefully assembled, adding needless overhead. Further,
upgrading diverse application RPC stubs to use advanced hardware capabilities
such as RDMA or DPDK is a long and involved process, and often incompatible
with sidecar policy control.
In this paper, we propose, implement, and evaluate a novel approach, where
RPC marshalling and policy enforcement are done as a system service rather than
as a library linked into each application. Applications specify type
information to the RPC system as before, while the RPC service executes policy
engines and arbitrates resource use, and then marshals data customized to the
underlying network hardware capabilities. Our system, mRPC, also supports live
upgrades so that both policy and marshalling code can be updated transparently
to application code. Compared with using a sidecar, mRPC speeds up a standard
microservice benchmark, DeathStarBench, by up to 2.5$\times$ while having a
higher level of policy flexibility and availability.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 18:47:55 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Chen",
"Jingrong",
""
],
[
"Wu",
"Yongji",
""
],
[
"Lin",
"Shihan",
""
],
[
"Xu",
"Yechen",
""
],
[
"Kong",
"Xinhao",
""
],
[
"Anderson",
"Thomas",
""
],
[
"Lentz",
"Matthew",
""
],
[
"Yang",
"Xiaowei",
""
],
[
"Zhuo",
"Danyang",
""
]
] |
new_dataset
| 0.994775 |
2304.07411
|
Emmanouil Panaousis Prof.
|
Shanto Roy, Emmanouil Panaousis, Cameron Noakes, Aron Laszka, Sakshyam
Panda, George Loukas
|
SoK: The MITRE ATT&CK Framework in Research and Practice
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The MITRE ATT&CK framework, a comprehensive knowledge base of adversary
tactics and techniques, has been widely adopted by the cybersecurity industry
as well as by academic researchers. Its broad range of industry applications
includes threat intelligence, threat detection, and incident response, some of
which go beyond what it was originally designed for. Despite its popularity,
there is a lack of a systematic review of the applications and the research on
ATT&CK. This systematization of work aims to fill this gap. To this end, it
introduces the first taxonomic systematization of the research literature on
ATT&CK, studies its degree of usefulness in different applications, and
identifies important gaps and discrepancies in the literature, pointing to key
directions for future work. The results provide valuable insights
for academics and practitioners alike, highlighting the need for more research
on the practical implementation and evaluation of ATT&CK.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 22:10:38 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Roy",
"Shanto",
""
],
[
"Panaousis",
"Emmanouil",
""
],
[
"Noakes",
"Cameron",
""
],
[
"Laszka",
"Aron",
""
],
[
"Panda",
"Sakshyam",
""
],
[
"Loukas",
"George",
""
]
] |
new_dataset
| 0.99616 |
2304.07444
|
Thanh-Danh Nguyen
|
Thanh-Danh Nguyen, Anh-Khoa Nguyen Vu, Nhat-Duy Nguyen, Vinh-Tiep
Nguyen, Thanh Duc Ngo, Thanh-Toan Do, Minh-Triet Tran, and Tam V. Nguyen
|
Few-shot Camouflaged Animal Detection and Segmentation
|
Under-review Journal
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Camouflaged object detection and segmentation is a new and challenging
research topic in computer vision. A serious issue is the lack of data on
camouflaged objects, such as camouflaged animals in natural scenes. In this
paper, we address the problem of few-shot learning for camouflaged object
detection and segmentation. To this end, we first collect a new dataset,
CAMO-FS, for the benchmark. We then propose a novel method to efficiently
detect and segment the camouflaged objects in the images. In particular, we
introduce the instance triplet loss and the instance memory storage. Extensive
experiments demonstrate that our proposed method achieves
state-of-the-art performance on the newly collected dataset.
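A hedged sketch of an instance triplet loss of the kind named above (the
margin and distance choice are assumptions, since the exact formulation is not
given here):

    import torch
    import torch.nn.functional as F

    def instance_triplet_loss(anchor, positive, negative, margin=0.3):
        # anchor/positive: embeddings of the same camouflaged instance,
        # negative: embedding of a different instance; shapes (batch, dim)
        d_ap = 1.0 - F.cosine_similarity(anchor, positive)
        d_an = 1.0 - F.cosine_similarity(anchor, negative)
        return F.relu(d_ap - d_an + margin).mean()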
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 01:33:14 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Nguyen",
"Thanh-Danh",
""
],
[
"Vu",
"Anh-Khoa Nguyen",
""
],
[
"Nguyen",
"Nhat-Duy",
""
],
[
"Nguyen",
"Vinh-Tiep",
""
],
[
"Ngo",
"Thanh Duc",
""
],
[
"Do",
"Thanh-Toan",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Nguyen",
"Tam V.",
""
]
] |
new_dataset
| 0.999656 |
2304.07491
|
Akira Terui
|
Ayane Ito, Takefumi Kasai, Akira Terui
|
Computer-assisted proofs of "Kariya's theorem" with computer algebra
| null | null | null | null |
cs.SC math.AC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate computer-assisted proofs of "Kariya's theorem," a theorem in
elementary geometry, with computer algebra. In the proof of geometry theorem
with computer algebra, vertices of geometric figures that are subjects for the
proof are expressed as variables. The variables are classified into two
classes: arbitrarily given points and the points defined from the former points
by constraints. We show proofs of Kariya's theorem with two formulations,
according to the two ways of giving the arbitrary points: one called the "vertex
formulation" and the other called the "incenter formulation," using two methods:
Gr\"obner basis computation and Wu's method. Furthermore,
we show computer-assisted proofs of the property that the so-called
"Kariya point" is located on the so-called "Feuerbach hyperbola,"
with both formulations and both methods.
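The proof pattern can be illustrated on a toy identity with sympy (this
mirrors the method, not Kariya's theorem itself): encode the hypotheses as
polynomials, compute a Gr\"obner basis of the ideal they generate, and check
that the conclusion polynomial lies in the ideal.

    from sympy import symbols, groebner

    x, y = symbols("x y")
    hypotheses = [x**2 + y**2 - 1]               # (x, y) lies on the unit circle
    conclusion = x**4 + 2*x**2*y**2 + y**4 - 1   # then (x^2 + y^2)^2 = 1

    gb = groebner(hypotheses, x, y, order="lex")
    print(gb.contains(conclusion))               # True: the conclusion follows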
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 06:56:55 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Ito",
"Ayane",
""
],
[
"Kasai",
"Takefumi",
""
],
[
"Terui",
"Akira",
""
]
] |
new_dataset
| 0.991107 |
2304.07500
|
Zheng Tang
|
Milind Naphade, Shuo Wang, David C. Anastasiu, Zheng Tang, Ming-Ching
Chang, Yue Yao, Liang Zheng, Mohammed Shaiqur Rahman, Meenakshi S. Arya, Anuj
Sharma, Qi Feng, Vitaly Ablavsky, Stan Sclaroff, Pranamesh Chakraborty,
Sanjita Prajapati, Alice Li, Shangru Li, Krishna Kunadharaju, Shenxin Jiang
and Rama Chellappa
|
The 7th AI City Challenge
|
Summary of the 7th AI City Challenge Workshop in conjunction with
CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The AI City Challenge's seventh edition emphasizes two domains at the
intersection of computer vision and artificial intelligence - retail business
and Intelligent Traffic Systems (ITS) - that have considerable untapped
potential. The 2023 challenge had five tracks, which drew a record-breaking
number of participation requests from 508 teams across 46 countries. Track 1
was a brand new track that focused on multi-target multi-camera (MTMC) people
tracking, where teams trained and evaluated using both real and highly
realistic synthetic data. Track 2 centered around natural-language-based
vehicle track retrieval. Track 3 required teams to classify driver actions in
naturalistic driving analysis. Track 4 aimed to develop an automated checkout
system for retail stores using a single view camera. Track 5, another new
addition, tasked teams with detecting violations of the helmet rule for
motorcyclists. Two leader boards were released for submissions based on
different methods: a public leader board for the contest where external private
data wasn't allowed and a general leader board for all results submitted. The
participating teams' top performances established strong baselines and even
outperformed the state-of-the-art in the proposed challenge tracks.
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 08:02:16 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Naphade",
"Milind",
""
],
[
"Wang",
"Shuo",
""
],
[
"Anastasiu",
"David C.",
""
],
[
"Tang",
"Zheng",
""
],
[
"Chang",
"Ming-Ching",
""
],
[
"Yao",
"Yue",
""
],
[
"Zheng",
"Liang",
""
],
[
"Rahman",
"Mohammed Shaiqur",
""
],
[
"Arya",
"Meenakshi S.",
""
],
[
"Sharma",
"Anuj",
""
],
[
"Feng",
"Qi",
""
],
[
"Ablavsky",
"Vitaly",
""
],
[
"Sclaroff",
"Stan",
""
],
[
"Chakraborty",
"Pranamesh",
""
],
[
"Prajapati",
"Sanjita",
""
],
[
"Li",
"Alice",
""
],
[
"Li",
"Shangru",
""
],
[
"Kunadharaju",
"Krishna",
""
],
[
"Jiang",
"Shenxin",
""
],
[
"Chellappa",
"Rama",
""
]
] |
new_dataset
| 0.974158 |
2304.07511
|
Rongxuan Mu
|
Rongxuan Mu, Yuhe Nie, Kent Cao, Ruoxin You, Yinzong Wei, Xin Tong
|
Pilgrimage to Pureland: Art, Perception and the Wutai Mural VR
Reconstruction
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Virtual reality (VR) supports audiences to engage with cultural heritage
proactively. We designed an easy-to-access and guided Pilgrimage To Pureland VR
reconstruction of Dunhuang Mogao Grottoes to offer the general public an
accessible and engaging way to explore the Dunhuang murals. We put forward an
immersive VR reconstruction paradigm that can efficiently convert complex 2D
artwork into a VR environment. We reconstructed the Mt. Wutai pilgrimage mural
in Cave 61, Mogao Grottoes, Dunhuang, into an immersive VR environment and
created a plot-based and interactive experience that offers users a more
accessible solution to visit, understand and appreciate the complex religious,
historical, and artistic value of Dunhuang murals. Our system
remarkably smooths users' approach to these elusive cultural heritages.
Appropriate adaptation of plots and 3D VR transfer consistent with the original
art style could enhance the accessibility of cultural heritage.
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 08:42:51 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Mu",
"Rongxuan",
""
],
[
"Nie",
"Yuhe",
""
],
[
"Cao",
"Kent",
""
],
[
"You",
"Ruoxin",
""
],
[
"Wei",
"Yinzong",
""
],
[
"Tong",
"Xin",
""
]
] |
new_dataset
| 0.997768 |
2304.07529
|
Astha Agrawal
|
Astha Agrawal and R. K. Sharma
|
ACD codes over non-symmetric dualities
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The applications of additive codes mainly lie in quantum error correction and
quantum computing. Due to their applications in quantum codes, additive codes
have grown in importance. In addition to this, additive codes allow the
implementation of a variety of dualities. The article begins by developing the
properties of Additive Complementary Dual (ACD) codes with respect to arbitrary
dualities over finite abelian groups. Further, we calculate precisely the total
number of dualities over finite fields and introduce a new class of
non-symmetric dualities, denoted as class A. Two conditions are obtained:
one is a necessary and sufficient condition, and the other is a necessary condition.
The necessary and sufficient condition is for an additive code to be an ACD
code over arbitrary dualities, and it comes with an algorithm for determining whether
an additive code is an ACD code or not. The necessary condition is on the
generator matrix of an ACD code for any duality belonging to class A. We
provide bounds for the highest possible distance of ACD codes over finite
fields. Finally, we examine non-symmetric dualities over $\mathbb{F}_4$.
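For the classical symmetric special case (Euclidean duality), complementary
duality of a linear code can be checked with Massey's criterion: a code with
generator matrix G is complementary dual iff G G^T is invertible. A small
sketch over GF(2) follows; the non-symmetric dualities studied above need more
machinery.

    import numpy as np

    def is_lcd_binary(G):
        M = (G @ G.T) % 2
        n = M.shape[0]
        M = M.copy()
        for col in range(n):                      # Gaussian elimination over GF(2)
            pivot = next((r for r in range(col, n) if M[r, col]), None)
            if pivot is None:
                return False                      # G G^T is singular
            M[[col, pivot]] = M[[pivot, col]]
            for r in range(n):
                if r != col and M[r, col]:
                    M[r] = (M[r] + M[col]) % 2
        return True

    print(is_lcd_binary(np.array([[1, 0, 1, 1], [0, 1, 1, 0]])))  # True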
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 10:36:30 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Agrawal",
"Astha",
""
],
[
"Sharma",
"R. K.",
""
]
] |
new_dataset
| 0.999138 |
2304.07547
|
Jingyao Li
|
Jingyao Li, Pengguang Chen, Shengju Qian, Jiaya Jia
|
TagCLIP: Improving Discrimination Ability of Open-Vocabulary Semantic
Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent success of Contrastive Language-Image Pre-training~(CLIP) has shown
great promise in pixel-level open-vocabulary learning tasks. A general paradigm
utilizes CLIP's text and patch embeddings to generate semantic masks. However,
existing models easily misidentify input pixels from unseen classes, thus
confusing novel classes with semantically-similar ones. In our work, we
disentangle the ill-posed optimization problem into two parallel processes: one
performs semantic matching individually, and the other judges reliability for
improving discrimination ability. Motivated by special tokens in language
modeling that represent sentence-level embeddings, we design a trusty token
that decouples the known and novel category prediction tendency. With almost no
extra overhead, we upgrade the pixel-level generalization capacity of existing
models effectively. Our TagCLIP (CLIP adapting with Trusty-guidance) boosts the
IoU of unseen classes by 7.4% and 1.7% on PASCAL VOC 2012 and COCO-Stuff 164K.
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 12:52:23 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Li",
"Jingyao",
""
],
[
"Chen",
"Pengguang",
""
],
[
"Qian",
"Shengju",
""
],
[
"Jia",
"Jiaya",
""
]
] |
new_dataset
| 0.987091 |
2304.07549
|
Ajian Liu
|
Ajian Liu and Yanyan Liang
|
MA-ViT: Modality-Agnostic Vision Transformers for Face Anti-Spoofing
|
7 pages, 4 figures, conference
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The existing multi-modal face anti-spoofing (FAS) frameworks are designed
based on two strategies: halfway and late fusion. However, the former requires
test modalities consistent with the training input, which seriously limits its
deployment scenarios. And the latter is built on multiple branches to process
different modalities independently, which limits their use in applications with
low memory or fast execution requirements. In this work, we present a single
branch based Transformer framework, namely Modality-Agnostic Vision Transformer
(MA-ViT), which aims to improve the performance of arbitrary modal attacks with
the help of multi-modal data. Specifically, MA-ViT adopts the early fusion to
aggregate all the available training modalities data and enables flexible
testing of any given modal samples. Further, we develop the Modality-Agnostic
Transformer Block (MATB) in MA-ViT, which consists of two stacked attentions
named Modal-Disentangle Attention (MDA) and Cross-Modal Attention (CMA), to
eliminate modality-related information from each modal sequence and supplement
modality-agnostic liveness features from the other modal sequence, respectively.
Experiments demonstrate that a single model trained with MA-ViT can not
only flexibly evaluate different modal samples, but also outperform existing
single-modal frameworks by a large margin and approach the multi-modal
frameworks with smaller FLOPs and fewer model parameters.
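A hedged sketch of the cross-modal attention described above (dimensions and
normalisation are assumptions): queries from one modal sequence attend to
keys and values from the other.

    import torch.nn as nn

    class CrossModalAttention(nn.Module):
        def __init__(self, dim=256, heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, x_a, x_b):  # token sequences of two modalities
            # x_a queries the other modality for complementary liveness cues
            out, _ = self.attn(query=x_a, key=x_b, value=x_b)
            return self.norm(x_a + out)  # residual fusion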
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 13:03:44 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Liu",
"Ajian",
""
],
[
"Liang",
"Yanyan",
""
]
] |
new_dataset
| 0.985004 |
2304.07554
|
Ella Gale
|
Ella Gale
|
Shape is (almost) all!: Persistent homology features (PHFs) are an
information rich input for efficient molecular machine learning
|
18 pages, 15 figures
| null | null | null |
cs.LG cond-mat.dis-nn math.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3-D shape is important to chemistry, but how important? Machine learning
works best when the inputs are simple and match the problem well. Chemistry
datasets tend to be very small compared to those generally used in machine
learning so we need to get the most from each datapoint. Persistent homology
measures the topological shape properties of point clouds at different scales
and is used in topological data analysis. Here we investigate what persistent
homology captures about molecular structure and create persistent homology
features (PHFs) that encode a molecule's shape whilst losing most of the
symbolic detail like atom labels, valence, charge, bonds etc. We demonstrate
the usefulness of PHFs on a series of chemical datasets: QM7, lipophilicity,
Delaney and Tox21. PHFs work as well as the best benchmarks. PHFs are very
information dense and much smaller than other encoding methods found so far,
meaning ML algorithms are much more energy efficient. PHFs' success despite
losing a large amount of chemical detail highlights how much of chemistry can
be simplified to topological shape.
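A hedged sketch of one way to build PHFs from a 3-D conformer (the paper's
exact vectorisation is not specified here, so this is only one plausible
choice): compute persistence diagrams from the atom point cloud with ripser
and summarise each diagram with simple statistics.

    import numpy as np
    from ripser import ripser

    def phf_vector(coords):                       # coords: (n_atoms, 3)
        dgms = ripser(coords, maxdim=1)["dgms"]   # H0 and H1 diagrams
        feats = []
        for dgm in dgms:
            finite = dgm[np.isfinite(dgm[:, 1])]
            pers = finite[:, 1] - finite[:, 0]    # lifetimes of features
            feats += [pers.sum(), pers.max(initial=0.0), float(len(pers))]
        return np.array(feats)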
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 13:24:35 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Gale",
"Ella",
""
]
] |
new_dataset
| 0.95284 |
2304.07555
|
Anuran Roy
|
Anuran Roy, Sridhar Raj S
|
SerPyTor: A distributed context-aware computational graph execution
framework for durable execution
|
5 pages, 2 figures
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed computation is always a tricky topic to deal with, especially in
the context of varying requirements across scenarios. A popular solution is to
use Apache Spark with a setup of multiple systems forming a cluster. However,
the prerequisite setup for a Spark cluster often induces an additional
overhead, often limiting usage in constrained scenarios, especially in
scenarios requiring context propagation. In this paper, we explore a relatively
lightweight computational graph execution framework requiring little setup while
offering fast execution, coupled with context awareness.
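A toy sketch of what context-aware computational graph execution can look like
(SerPyTor's real API is not shown in this abstract, so these names are
invented for illustration): nodes run in dependency order while a context
dict propagates through the graph.

    from graphlib import TopologicalSorter

    def run_graph(nodes, deps, context):
        # nodes: name -> fn(context, *inputs); deps: name -> list of inputs
        results = {}
        for name in TopologicalSorter(deps).static_order():
            results[name] = nodes[name](context, *[results[d] for d in deps[name]])
        return results

    ctx = {"trace_id": "job-42"}
    nodes = {"load": lambda c: [1, 2, 3],
             "sum": lambda c, xs: sum(xs),
             "log": lambda c, s: print(c["trace_id"], s)}
    run_graph(nodes, {"load": [], "sum": ["load"], "log": ["sum"]}, ctx)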
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 13:25:42 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Roy",
"Anuran",
""
],
[
"S",
"Sridhar Raj",
""
]
] |
new_dataset
| 0.991987 |
2304.07572
|
Huixin Dong
|
Huixin Dong, Yirong Xie, Xianan Zhang, Wei Wang, Xinyu Zhang, Jianhua
He
|
GPSMirror: Expanding Accurate GPS Positioning to Shadowed and Indoor
Regions with Backscatter
|
13 pages, 26 figures, to appear in MobiCom 2023
| null |
10.1145/3570361.3592511
| null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the prevalence of GPS services, they still suffer from intermittent
positioning with poor accuracy in partially shadowed regions like urban
canyons, flyover shadows, and factories' indoor areas. Existing wisdom relies
on hardware modifications of GPS receivers or power-hungry infrastructures
requiring a continuous plug-in power supply, which is hard to provide in outdoor
regions and some factories. This paper fills the gap with GPSMirror, the first
GPS-strengthening system that works for unmodified smartphones with the
assistance of newly-designed GPS backscatter tags. The key enabling techniques
in GPSMirror include: (i) a careful hardware design with microwatt-level power
consumption that pushes the limit of backscatter sensitivity to re-radiate
extremely weak GPS signals with enough coverage approaching the regulation
limit; and (ii) a novel GPS positioning algorithm achieving meter-level
accuracy in shadowed regions as well as expanding locatable regions under
inadequate satellites where conventional algorithms fail. We build a prototype
of the GPSMirror tags and conduct comprehensive experiments to evaluate them.
Our results show that a GPSMirror tag can provide coverage up to 27.7 m.
GPSMirror achieves a median positioning accuracy of 3.7 m indoors and 4.6 m in
urban canyon environments.
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 14:54:17 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Dong",
"Huixin",
""
],
[
"Xie",
"Yirong",
""
],
[
"Zhang",
"Xianan",
""
],
[
"Wang",
"Wei",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"He",
"Jianhua",
""
]
] |
new_dataset
| 0.998305 |
2304.07583
|
Tao Zhou
|
Tao Zhou, Yizhe Zhang, Yi Zhou, Ye Wu, Chen Gong
|
Can SAM Segment Polyps?
|
Technical Report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Meta AI Research released a general Segment Anything Model (SAM),
which has demonstrated promising performance in several segmentation tasks. As
we know, polyp segmentation is a fundamental task in the medical imaging field,
which plays a critical role in the diagnosis and cure of colorectal cancer. In
particular, applying SAM to the polyp segmentation task is interesting. In this
report, we evaluate the performance of SAM in segmenting polyps under
unprompted settings. We hope this report will provide insights to
advance the polyp segmentation field and promote more interesting work in the
future. This project is publicly available at https://github.com/taozh2017/SAMPolyp.
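A sketch of the unprompted protocol described above, using the public
segment-anything package (the checkpoint path and the best-mask selection
rule are assumptions):

    import numpy as np
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
    generator = SamAutomaticMaskGenerator(sam)

    def dice(pred, gt):
        inter = np.logical_and(pred, gt).sum()
        return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

    def best_mask_dice(image, gt_mask):    # image: HxWx3 uint8, gt_mask: HxW bool
        masks = generator.generate(image)  # unprompted mask proposals
        return max((dice(m["segmentation"], gt_mask) for m in masks), default=0.0)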
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 15:41:10 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Zhou",
"Tao",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Zhou",
"Yi",
""
],
[
"Wu",
"Ye",
""
],
[
"Gong",
"Chen",
""
]
] |
new_dataset
| 0.99044 |
2304.07584
|
Wenxian Wu Ncu
|
Li Zhu, Jiahui Xiong, Wenxian Wu, Hongyu Yu
|
FSDNet-An efficient fire detection network for complex scenarios based
on YOLOv3 and DenseNet
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Fire is one of the common disasters in daily life. To achieve fast and
accurate detection of fires, this paper proposes a detection network called
FSDNet (Fire Smoke Detection Network), which consists of a feature extraction
module, a fire classification module, and a fire detection module. Firstly, a
dense connection structure is introduced in the basic feature extraction module
to enhance the feature extraction ability of the backbone network and alleviate
the gradient disappearance problem. Secondly, a spatial pyramid pooling
structure is introduced in the fire detection module, and the Mosaic data
augmentation method and CIoU loss function are used in the training process to
comprehensively improve the flame feature extraction ability. Finally, in view
of the shortcomings of public fire datasets, a fire dataset called MS-FS
(Multi-scene Fire And Smoke) containing 11,938 fire images was created through
data collection, screening, and object annotation. To prove the effectiveness
of the proposed method, the accuracy of the method was evaluated on two
benchmark fire datasets and MS-FS. The experimental results show that the
accuracy of FSDNet on the two benchmark datasets is 99.82% and 91.15%,
respectively, and the average precision on MS-FS is 86.80%, which is better
than the mainstream fire detection methods.
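A sketch of the spatial pyramid pooling block mentioned above (the kernel
sizes follow the common YOLO-style SPP and are an assumption here):

    import torch
    import torch.nn as nn

    class SPP(nn.Module):
        def __init__(self, kernels=(5, 9, 13)):
            super().__init__()
            self.pools = nn.ModuleList(
                nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernels)

        def forward(self, x):
            # concatenate the input with pooled views at several receptive fields
            return torch.cat([x] + [p(x) for p in self.pools], dim=1)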
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 15:46:08 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Zhu",
"Li",
""
],
[
"Xiong",
"Jiahui",
""
],
[
"Wu",
"Wenxian",
""
],
[
"Yu",
"Hongyu",
""
]
] |
new_dataset
| 0.991748 |
2304.07596
|
Alisha Sharma
|
Alisha Sharma, Jason Geder, Joseph Lingevitch, Theodore Martin, Daniel
Lofaro, Donald Sofge
|
Acoustic Beamforming for Object-relative Distance Estimation and Control
in Unmanned Air Vehicles using Propulsion System Noise
|
7 pages, 12 figures
| null | null | null |
cs.RO cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Unmanned air vehicles often produce significant noise from their propulsion
systems. Using this broadband signal as "acoustic illumination" for an
auxiliary sensing system could make vehicles more robust at a minimal cost. We
present an acoustic beamforming-based algorithm that estimates object-relative
distance with a small two-microphone array using the generated propulsion
system noise of a vehicle. We demonstrate this approach in several closed-loop
distance feedback control tests with a mounted quad-rotor vehicle in a noisy
environment and show accurate object-relative distance estimates more than 2x
further than the baseline channel-based approach. We conclude that this
approach is robust to several practical vehicle and noise situations and shows
promise for use in more complex operating environments.
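One standard building block for this kind of two-microphone system is
GCC-PHAT time-delay estimation; a minimal sketch follows (the paper's full
beamforming and distance-estimation pipeline is more involved than this one
step):

    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        n = sig.size + ref.size
        SIG, REF = np.fft.rfft(sig, n=n), np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)  # PHAT weighting
        max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs  # delay in seconds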
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 17:03:21 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Sharma",
"Alisha",
""
],
[
"Geder",
"Jason",
""
],
[
"Lingevitch",
"Joseph",
""
],
[
"Martin",
"Theodore",
""
],
[
"Lofaro",
"Daniel",
""
],
[
"Sofge",
"Donald",
""
]
] |
new_dataset
| 0.983809 |
2304.07609
|
Chul Gwon
|
Chul Gwon and Steven C. Howell
|
ODSmoothGrad: Generating Saliency Maps for Object Detectors
|
To be published in XAI4CV Workshop Proceedings at CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Techniques for generating saliency maps continue to be used for
explainability of deep learning models, with efforts primarily applied to the
image classification task. Such techniques, however, can also be applied to
object detectors, not only for the classification scores but also for the
bounding box parameters, regressed values for which the relevant
contributing pixels can be identified. In this paper, we
present ODSmoothGrad, a tool for generating saliency maps for the
classification and the bounding box parameters in object detectors. Given the
noisiness of saliency maps, we also apply the SmoothGrad algorithm to visually
enhance the pixels of interest. We demonstrate these capabilities on one-stage
and two-stage object detectors, with comparisons using classifier-based
techniques.
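A hedged sketch of the core idea (detector internals are hidden behind a
callable, which is an assumption about the interface): SmoothGrad averages
input gradients over noisy copies of the image, here taken with respect to
one regressed bounding-box coordinate instead of a class score.

    import torch

    def od_smoothgrad(scalar_fn, image, noise_std=0.1, n_samples=25):
        # scalar_fn: maps an image tensor to one detector output,
        # e.g. a predicted box's x-coordinate or a class logit
        grads = torch.zeros_like(image)
        for _ in range(n_samples):
            noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
            scalar_fn(noisy).backward()
            grads += noisy.grad
        return grads / n_samples  # saliency map for the chosen output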
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 18:21:56 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Gwon",
"Chul",
""
],
[
"Howell",
"Steven C.",
""
]
] |
new_dataset
| 0.997448 |
2304.07637
|
Abhishek Bamotra
|
Abhishek Bamotra, Phani Krishna Uppala
|
TransDocs: Optical Character Recognition with word to word translation
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
While OCR has been used in various applications, its output is not always
accurate, leading to misfit words. This research work focuses on improving
optical character recognition (OCR) with ML techniques by integrating OCR
with long short-term memory (LSTM) based sequence-to-sequence deep learning
models to perform document translation. This work is based on the ANKI dataset for
English-to-Spanish translation. I present a comparative study
of pre-trained OCR combined with a deep learning model using an LSTM-based seq2seq
architecture with attention for machine translation. The end-to-end performance of
the model is expressed as a BLEU-4 score. This research paper is aimed at
researchers and practitioners interested in OCR and its applications in
document translation.
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 21:40:14 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Bamotra",
"Abhishek",
""
],
[
"Uppala",
"Phani Krishna",
""
]
] |
new_dataset
| 0.999277 |
2304.07646
|
Jonas Skackauskas Mr
|
Jonas Skackauskas, Tatiana Kalganova
|
Herder Ants: Ant Colony Optimization with Aphids for Discrete
Event-Triggered Dynamic Optimization Problems
| null | null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Currently available dynamic optimization strategies for the Ant Colony
Optimization (ACO) algorithm offer a trade-off between slower algorithm convergence
and a significant penalty to solution quality after each dynamic change occurs.
This paper proposes a discrete dynamic optimization strategy called Ant Colony
Optimization (ACO) with Aphids, modelled after a real-world symbiotic
relationship between ants and aphids. ACO with Aphids strategy is designed to
improve solution quality of discrete domain Dynamic Optimization Problems
(DOPs) with event-triggered discrete dynamism. The proposed strategy aims to
improve the inter-state convergence rate throughout the entire dynamic
optimization. It does so by minimizing the fitness penalty and maximizing the
convergence speed that occurs after the dynamic change. This strategy is tested
against Full-Restart and Pheromone-Sharing strategies implemented on the same
ACO core algorithm solving Dynamic Multidimensional Knapsack Problem (DMKP)
benchmarks. ACO with Aphids has demonstrated superior performance over the
Pheromone-Sharing strategy in every test, with the average gap reduced by 29.2%. Also,
ACO with Aphids has outperformed the Full-Restart strategy for the large dataset
groups, with the overall average gap reduced by 52.5%.
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 22:21:41 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Skackauskas",
"Jonas",
""
],
[
"Kalganova",
"Tatiana",
""
]
] |
new_dataset
| 0.997982 |
2304.07651
|
Anthony Goeckner
|
Eugene M. Taranta II, Adam Seiwert, Anthony Goeckner, Khiem Nguyen,
Erin Cherry
|
From Warfighting Needs to Robot Actuation: A Complete Rapid Integration
Swarming Solution
|
58 pages, 29 figures. Published in Field Robotics
|
Field Robotics, 3, 460-515 (2023)
|
10.55417/fr.2023015
| null |
cs.RO cs.HC cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Swarm robotics systems have the potential to transform warfighting in urban
environments, but until now have not seen large-scale field testing. We present
the Rapid Integration Swarming Ecosystem (RISE), a platform for future
multi-agent research and deployment. RISE enables rapid integration of
third-party swarm tactics and behaviors, which was demonstrated using both
physical and simulated swarms. Our physical testbed is composed of more than
250 networked heterogeneous agents and has been extensively tested in mock
warfare scenarios at five urban combat training ranges. RISE implements live,
virtual, constructive simulation capabilities to allow the use of both virtual
and physical agents simultaneously, while our "fluid fidelity" simulation
enables adaptive scaling between low and high fidelity simulation levels based
on dynamic runtime requirements. Both virtual and physical agents are
controlled with a unified gesture-based interface that enables a greater than
150:1 agent-to-operator ratio. Through this interface, we enable efficient
swarm-based mission execution. RISE translates mission needs to robot actuation
with rapid tactic integration, a reliable testbed, and efficient operation.
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 22:40:00 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Taranta",
"Eugene M.",
"II"
],
[
"Seiwert",
"Adam",
""
],
[
"Goeckner",
"Anthony",
""
],
[
"Nguyen",
"Khiem",
""
],
[
"Cherry",
"Erin",
""
]
] |
new_dataset
| 0.957947 |
2304.07687
|
Sam Van Der Poel
|
Sam van der Poel, Dakotah Lambert, Kalina Kostyszyn, Tiantian Gao,
Rahul Verma, Derek Andersen, Joanne Chau, Emily Peterson, Cody St. Clair,
Paul Fodor, Chihiro Shibata, Jeffrey Heinz
|
MLRegTest: A Benchmark for the Machine Learning of Regular Languages
|
38 pages, MLRegTest benchmark available at the OSF at
https://osf.io/ksdnm , associated code at
https://github.com/heinz-jeffrey/subregular-learning
| null | null | null |
cs.LG cs.CL cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Evaluating machine learning (ML) systems on their ability to learn known
classifiers allows fine-grained examination of the patterns they can learn,
which builds confidence when they are applied to the learning of unknown
classifiers. This article presents a new benchmark for ML systems on sequence
classification called MLRegTest, which contains training, development, and test
sets from 1,800 regular languages.
Different kinds of formal languages represent different kinds of
long-distance dependencies, and correctly identifying long-distance
dependencies in sequences is a known challenge for ML systems to generalize
successfully. MLRegTest organizes its languages according to their logical
complexity (monadic second order, first order, propositional, or monomial
expressions) and the kind of logical literals (string, tier-string,
subsequence, or combinations thereof). The logical complexity and choice of
literal provides a systematic way to understand different kinds of
long-distance dependencies in regular languages, and therefore to understand
the capacities of different ML systems to learn such long-distance
dependencies.
Finally, the performance of different neural networks (simple RNN, LSTM, GRU,
transformer) on MLRegTest is examined. The main conclusion is that their
performance depends significantly on the kind of test set, the class of
language, and the neural network architecture.
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 03:49:50 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"van der Poel",
"Sam",
""
],
[
"Lambert",
"Dakotah",
""
],
[
"Kostyszyn",
"Kalina",
""
],
[
"Gao",
"Tiantian",
""
],
[
"Verma",
"Rahul",
""
],
[
"Andersen",
"Derek",
""
],
[
"Chau",
"Joanne",
""
],
[
"Peterson",
"Emily",
""
],
[
"Clair",
"Cody St.",
""
],
[
"Fodor",
"Paul",
""
],
[
"Shibata",
"Chihiro",
""
],
[
"Heinz",
"Jeffrey",
""
]
] |
new_dataset
| 0.999708 |
2304.07743
|
Simon Korman
|
Deborah Levy, Amit Peleg, Naama Pearl, Dan Rosenbaum, Derya Akkaynak,
Simon Korman, Tali Treibitz
|
SeaThru-NeRF: Neural Radiance Fields in Scattering Media
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Research on neural radiance fields (NeRFs) for novel view generation is
exploding with new models and extensions. However, a question that remains
unanswered is what happens in underwater or foggy scenes where the medium
strongly influences the appearance of objects. Thus far, NeRF and its variants
have ignored these cases. However, since the NeRF framework is based on
volumetric rendering, it has inherent capability to account for the medium's
effects, once modeled appropriately. We develop a new rendering model for NeRFs
in scattering media, which is based on the SeaThru image formation model, and
suggest a suitable architecture for learning both scene information and medium
parameters. We demonstrate the strength of our method using simulated and
real-world scenes, correctly rendering novel photorealistic views underwater.
Even more excitingly, we can render clear views of these scenes, removing the
medium between the camera and the scene and reconstructing the appearance and
depth of far objects, which are severely occluded by the medium. Our code and
unique datasets are available on the project's website.
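For reference, the SeaThru image formation model that the rendering model
builds on expresses the observed intensity in each color channel $c$ at range
$z$ as

    $I_c = J_c \, e^{-\beta_c^D z} + B_c^\infty \left( 1 - e^{-\beta_c^B z} \right)$

where $J_c$ is the unattenuated scene radiance, $B_c^\infty$ the backscatter
at infinity, and $\beta_c^D$, $\beta_c^B$ the direct-attenuation and
backscatter coefficients.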
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 10:17:26 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Levy",
"Deborah",
""
],
[
"Peleg",
"Amit",
""
],
[
"Pearl",
"Naama",
""
],
[
"Rosenbaum",
"Dan",
""
],
[
"Akkaynak",
"Derya",
""
],
[
"Korman",
"Simon",
""
],
[
"Treibitz",
"Tali",
""
]
] |
new_dataset
| 0.995223 |
2304.07750
|
Valerio Marsocci
|
Valerio Marsocci, Nicolas Gonthier, Anatol Garioud, Simone Scardapane,
Cl\'ement Mallet
|
GeoMultiTaskNet: remote sensing unsupervised domain adaptation using
geographical coordinates
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Land cover maps are a pivotal element in a wide range of Earth Observation
(EO) applications. However, annotating large datasets to develop supervised
systems for remote sensing (RS) semantic segmentation is costly and
time-consuming. Unsupervised Domain Adaption (UDA) could tackle these issues by
adapting a model trained on a source domain, where labels are available, to a
target domain, without annotations. UDA, while gaining importance in computer
vision, is still under-investigated in RS. Thus, we propose a new lightweight
model, GeoMultiTaskNet, based on two contributions: a GeoMultiTask module
(GeoMT), which utilizes geographical coordinates to align the source and target
domains, and a Dynamic Class Sampling (DCS) strategy, to adapt the semantic
segmentation loss to the frequency of classes. This approach is the first to
use geographical metadata for UDA in semantic segmentation. It reaches
state-of-the-art performance (47.22% mIoU), while reducing the
number of parameters (33M), on a subset of the FLAIR dataset, a recently
proposed dataset properly shaped for RS UDA and used here for research
purposes for the first time.
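A speculative sketch of how geographical coordinates can be injected as an
auxiliary signal (the GeoMT module's actual design may differ): a sinusoidal
encoding of (lat, lon) that an auxiliary head can consume.

    import torch

    def geo_encoding(latlon, num_freqs=8):        # latlon: (batch, 2) in degrees
        x = torch.deg2rad(latlon).unsqueeze(-1)   # (batch, 2, 1)
        freqs = 2.0 ** torch.arange(num_freqs)    # geometric frequency ladder
        enc = torch.cat([torch.sin(x * freqs), torch.cos(x * freqs)], dim=-1)
        return enc.flatten(1)                     # (batch, 4 * num_freqs)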
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 11:00:43 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Marsocci",
"Valerio",
""
],
[
"Gonthier",
"Nicolas",
""
],
[
"Garioud",
"Anatol",
""
],
[
"Scardapane",
"Simone",
""
],
[
"Mallet",
"Clément",
""
]
] |
new_dataset
| 0.999303 |
2304.07822
|
JiaHao Xie
|
JiaHao Xie, Ye Luo, Jianwei Lu
|
A Random-patch based Defense Strategy Against Physical Attacks for Face
Recognition Systems
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The physical attack has been regarded as a kind of threat against real-world
computer vision systems. Still, many existing defense methods are only useful
for small-perturbation attacks and cannot detect physical attacks effectively.
In this paper, we propose a random-patch based defense strategy to robustly
detect physical attacks for Face Recognition System (FRS). Different from
mainstream defense methods which focus on building complex deep neural networks
(DNN) to achieve high recognition rate on attacks, we introduce a patch based
defense strategy to a standard DNN aiming to obtain robust detection models.
Extensive experimental results on the employed datasets show the superiority of
the proposed defense method on detecting white-box attacks and adaptive attacks
which attack both the FRS and the defense method. Additionally, due to the
simplicity and robustness of our method, it can be easily applied to real-world
face recognition systems and extended to other defense methods to boost
the detection performance.
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 16:11:56 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Xie",
"JiaHao",
""
],
[
"Luo",
"Ye",
""
],
[
"Lu",
"Jianwei",
""
]
] |
new_dataset
| 0.997806 |
2304.07862
|
Xinyi Li
|
Xinyi Li, Yongfeng Zhang, Edward C. Malthouse
|
PBNR: Prompt-based News Recommender System
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online news platforms often use personalized news recommendation methods to
help users discover articles that align with their interests. These methods
typically predict a matching score between a user and a candidate article to
reflect the user's preference for the article. Some previous works have used
language model techniques, such as the attention mechanism, to capture users'
interests based on their past behaviors, and to understand the content of
articles. However, these existing model architectures require adjustments if
additional information is taken into account. Pre-trained large language
models, which can better capture word relationships and comprehend contexts,
have seen a significant development in recent years, and these pre-trained
models have the advantages of transfer learning and reducing the training time
for downstream tasks. Meanwhile, prompt learning is a newly developed technique
that leverages pre-trained language models by building task-specific guidance
for output generations. To leverage textual information in news articles, this
paper introduces the pre-trained large language model and prompt-learning to
the community of news recommendation. The proposed model "prompt-based news
recommendation" (PBNR) treats the personalized news recommendation as a
text-to-text language task and designs personalized prompts to adapt to the
pre-trained language model -- text-to-text transfer transformer (T5).
Experimental studies using the Microsoft News dataset show that PBNR is capable
of making accurate recommendations by taking into account various lengths of
past behaviors of different users. PBNR can also easily adapt to new
information without changing the model architecture and the training objective.
Additionally, PBNR can make recommendations based on users' specific
requirements, allowing human-computer interaction in the news recommendation
field.
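A hypothetical sketch of the prompt-based formulation (the exact template used
by PBNR is an assumption here): user history and a candidate headline are
verbalised into a prompt, and T5 scores a yes/no continuation.

    import torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tok = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    prompt = ("User read: 'Stocks rally on Fed pause'; 'Tech layoffs slow'. "
              "Would the user click: 'Markets brace for jobs report'? yes or no:")
    inputs = tok(prompt, return_tensors="pt")
    labels = tok("yes", return_tensors="pt").input_ids
    with torch.no_grad():
        score = -model(**inputs, labels=labels).loss  # higher = more likely click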
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 19:03:01 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Li",
"Xinyi",
""
],
[
"Zhang",
"Yongfeng",
""
],
[
"Malthouse",
"Edward C.",
""
]
] |
new_dataset
| 0.984458 |
2304.07883
|
Lia Morra
|
Luca Piano, Filippo Gabriele Prattic\`o, Alessandro Sebastian Russo,
Lorenzo Lanari, Lia Morra, Fabrizio Lamberti
|
Bent & Broken Bicycles: Leveraging synthetic data for damaged object
re-identification
| null |
Proceedings of the IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV) 2023, pp. 4881-4891
|
10.1109/WACV56688.2023.00486
| null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instance-level object re-identification is a fundamental computer vision
task, with applications from image retrieval to intelligent monitoring and
fraud detection. In this work, we propose the novel task of damaged object
re-identification, which aims at distinguishing changes in visual appearance
due to deformations or missing parts from subtle intra-class variations. To
explore this task, we leverage the power of computer-generated imagery to
create, in a semi-automatic fashion, high-quality synthetic images of the same
bike before and after a damage occurs. The resulting dataset, Bent & Broken
Bicycles (BBBicycles), contains 39,200 images and 2,800 unique bike instances
spanning 20 different bike models. As a baseline for this task, we propose
TransReI3D, a multi-task, transformer-based deep network unifying damage
detection (framed as a multi-label classification task) with object
re-identification. The BBBicycles dataset is available at
https://huggingface.co/datasets/GrainsPolito/BBBicycles
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 20:23:58 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Piano",
"Luca",
""
],
[
"Pratticò",
"Filippo Gabriele",
""
],
[
"Russo",
"Alessandro Sebastian",
""
],
[
"Lanari",
"Lorenzo",
""
],
[
"Morra",
"Lia",
""
],
[
"Lamberti",
"Fabrizio",
""
]
] |
new_dataset
| 0.97741 |
2304.07909
|
Muriel Franco Dr.
|
Muriel Figueredo Franco, Christian Omlin, Oliver Kamer, Eder John
Scheid, Burkhard Stiller
|
SECAdvisor: a Tool for Cybersecurity Planning using Economic Models
|
12 pages, 7 figures, 2 tables, 9 equations
| null | null | null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Cybersecurity planning is challenging for digitized companies that want
adequate protection without overspending money. Currently, the lack of
investments and perverse economic incentives are the root cause of
cyberattacks, which results in several economic impacts on companies worldwide.
Therefore, cybersecurity planning has to consider technical and economic
dimensions to help companies achieve a better cybersecurity strategy. This
article introduces SECAdvisor, a tool to support cybersecurity planning using
economic models. SECAdvisor allows users to (a) understand the risks and valuation of
different businesses' information, (b) calculate the optimal investment in
cybersecurity for a company, (c) receive a recommendation of protections based
on the budget available and demands, and (d) compare protection solutions in
terms of cost-efficiency. Furthermore, evaluations on usability and real-world
training activities performed using SECAdvisor are discussed.
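The optimal-investment computation plausibly follows the Gordon-Loeb model; a
sketch of that model (not SECAdvisor's code) with breach probability
s(z, v) = v / (a z + 1)^b for investment z and vulnerability v:

    import numpy as np

    def optimal_investment(L, v, a=1.0, b=1.0):
        z = np.linspace(0.0, L, 100_000)            # candidate investments
        enbis = (v - v / (a * z + 1) ** b) * L - z  # expected net benefit
        return z[np.argmax(enbis)]

    print(optimal_investment(L=1_000_000, v=0.6))   # ~774 in this toy setting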
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 22:31:50 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Franco",
"Muriel Figueredo",
""
],
[
"Omlin",
"Christian",
""
],
[
"Kamer",
"Oliver",
""
],
[
"Scheid",
"Eder John",
""
],
[
"Stiller",
"Burkhard",
""
]
] |
new_dataset
| 0.992492 |
2304.07911
|
Zepeng Huai
|
Zepeng Huai and Yuji Yang and Mengdi Zhang and Zhongyi Zhang and
Yichun Li and Wei Wu
|
M2GNN: Metapath and Multi-interest Aggregated Graph Neural Network for
Tag-based Cross-domain Recommendation
| null |
Proceedings of the 46th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR 2023)
|
10.1145/3539618.3591720
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Cross-domain recommendation (CDR) is an effective way to alleviate the data
sparsity problem. Content-based CDR is one of the most promising branches since
most kinds of products can be described by a piece of text, especially when
cold-start users or items have few interactions. However, two vital issues are
still under-explored: (1) From the content modeling perspective, sufficient
long-text descriptions are usually scarce in a real recommender system; more
often, light-weight textual features such as a few keywords or tags are
more accessible, and these are improperly modeled by existing methods. (2) From the
CDR perspective, not all inter-domain interests are helpful for inferring
intra-domain interests. Owing to domain-specific features, some
signals benefit recommendation in the source domain but are harmful
in the target domain. Therefore, how to distill useful interests is crucial. To
tackle the above two problems, we propose a metapath and multi-interest
aggregated graph neural network (M2GNN). Specifically, to model the tag-based
contents, we construct a heterogeneous information network to hold the semantic
relatedness between users, items, and tags in all domains. The metapath schema
is predefined according to domain-specific knowledge, with one metapath for one
domain. User representations are learned by GNN with a hierarchical aggregation
framework, where the intra-metapath aggregation firstly filters out trivial
tags and the inter-metapath aggregation further filters out useless interests.
Offline experiments and online A/B tests demonstrate that M2GNN achieves
significant improvements over the state-of-the-art methods and current
industrial recommender system in Dianping, respectively. Further analysis shows
that M2GNN offers an interpretable recommendation.
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 22:47:53 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Huai",
"Zepeng",
""
],
[
"Yang",
"Yuji",
""
],
[
"Zhang",
"Mengdi",
""
],
[
"Zhang",
"Zhongyi",
""
],
[
"Li",
"Yichun",
""
],
[
"Wu",
"Wei",
""
]
] |
new_dataset
| 0.999241 |
2304.07940
|
Hyunwoo Choi
|
Hyunwoo Choi, Suryeon Kim, Seungwon Shin
|
AVX Timing Side-Channel Attacks against Address Space Layout
Randomization
|
Accepted to Design Automation Conference (DAC) 2023
|
The 60th Annual Design Automation Conference (DAC), 2023
| null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern x86 processors support an AVX instruction set to boost performance.
However, this extension may cause security issues. We discovered
vulnerable properties in the implementation of masked load/store instructions. Based on
this, we present a novel AVX timing side-channel attack that can defeat address
space layout randomization. We demonstrate the significance of our attack by
showing User and Kernel ASLR breaks on the recent Intel and AMD processors in
various environments, including cloud computing systems, an SGX enclave (a
fine-grained ASLR break), and major operating systems. We further demonstrate
that our attack can be used to infer user behavior, such as Bluetooth events
and mouse movements. We highlight that stronger isolation or more fine-grained
randomization should be adopted to successfully mitigate our presented attacks.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 01:38:18 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Choi",
"Hyunwoo",
""
],
[
"Kim",
"Suryeon",
""
],
[
"Shin",
"Seungwon",
""
]
] |
new_dataset
| 0.998573 |
2304.07983
|
Sofiane Tanji
|
Sofiane Tanji and Andrea Della Vecchia and Fran\c{c}ois Glineur and
Silvia Villa
|
Snacks: a fast large-scale kernel SVM solver
|
6 pages
| null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Kernel methods provide a powerful framework for nonparametric learning. They
are based on kernel functions and allow learning in a rich functional space
while applying linear statistical learning tools, such as Ridge Regression or
Support Vector Machines. However, standard kernel methods suffer from a
quadratic time and memory complexity in the number of data points and thus have
limited applications in large-scale learning. In this paper, we propose Snacks,
a new large-scale solver for Kernel Support Vector Machines. Specifically,
Snacks relies on a Nystr\"om approximation of the kernel matrix and an
accelerated variant of the stochastic subgradient method. We demonstrate,
through a detailed empirical evaluation, that it competes with other
SVM solvers on a variety of benchmark datasets.
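A simplified sketch of the recipe described above with scikit-learn (the
accelerated subgradient variant and its tuning are not reproduced; this uses
a plain SGD hinge-loss solver after Nystr\"om feature approximation):

    from sklearn.kernel_approximation import Nystroem
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline

    model = make_pipeline(
        Nystroem(kernel="rbf", gamma=0.2, n_components=500, random_state=0),
        SGDClassifier(loss="hinge", alpha=1e-5, max_iter=20, random_state=0),
    )
    # model.fit(X_train, y_train); model.score(X_test, y_test)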
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 04:19:20 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Tanji",
"Sofiane",
""
],
[
"Della Vecchia",
"Andrea",
""
],
[
"Glineur",
"François",
""
],
[
"Villa",
"Silvia",
""
]
] |
new_dataset
| 0.998539 |
2304.07984
|
Nan Li
|
Nan Li, Yutong Li, Ilya Kolmanovsky
|
A Unified Safety Protection and Extension Governor
|
8 pages, 4 figures
| null | null | null |
cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a supervisory control scheme that unifies the
abilities of safety protection and safety extension. It produces a control that
keeps the system safe indefinitely whenever such a control exists. When no
such control exists due to abnormal system states, it optimizes the
control to maximize the time before any safety violation, which translates into
more time to seek recovery and/or mitigate any harm. We describe the scheme and
develop an approach that integrates the two capabilities into a single
constrained optimization problem with only continuous variables. For linear
systems with convex constraints, the problem reduces to a convex quadratic
program and is easy to solve. We illustrate the proposed safety supervisor with
an automotive example.
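As a toy illustration of the convex-program reduction, the sketch below projects a desired control onto a one-step safe set for assumed double-integrator-like dynamics; the dynamics, limits, and one-step horizon are illustrative assumptions, and the paper's scheme additionally maximizes time-to-violation when no safe control exists:

```python
# Projecting a nominal control onto a safety constraint as a small convex
# program (solved here with SLSQP for simplicity).
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # toy discrete-time double integrator
B = np.array([[0.0], [0.1]])
x = np.array([0.2, 0.5])           # current state
u_des = np.array([6.0])            # nominal (unsafe) control request
v_max = 1.0                        # |velocity| must stay below v_max next step

def cost(u):
    # Stay as close as possible to the desired control.
    return float((u - u_des) @ (u - u_des))

def next_velocity(u):
    return (A @ x + B @ u)[1]

cons = [{"type": "ineq", "fun": lambda u: v_max - next_velocity(u)},
        {"type": "ineq", "fun": lambda u: v_max + next_velocity(u)}]

res = minimize(cost, u_des, constraints=cons, method="SLSQP")
print("safe control:", res.x)      # ~[5.0]: largest u with 0.5 + 0.1*u <= 1
```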
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 04:20:04 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Li",
"Nan",
""
],
[
"Li",
"Yutong",
""
],
[
"Kolmanovsky",
"Ilya",
""
]
] |
new_dataset
| 0.993686 |
2304.08077
|
Doratossadat Dastgheib
|
Doratossadat Dastgheib, Hadi Farahani
|
Doxastic Lukasiewicz Logic with Public Announcement
| null | null | null | null |
cs.LO math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a doxastic extension $BL^+$ of Lukasiewicz logic
that is sound and complete with respect to the corresponding semantics we introduce.
Also, we equip our doxastic Lukasiewicz logic $BL^+$ with public announcement
and propose the logic $DL$. As an application, we model a fuzzy version of
muddy children puzzle with public announcement using $DL$. Finally, we define a
translation between $DL$ and $BL^+$, and prove the soundness and completeness
theorems for $DL$.
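For readers unfamiliar with the base logic, here is a small worked example (not from the paper) of the standard [0,1]-valued Lukasiewicz connectives that the doxastic extension builds on:

```python
# Standard Lukasiewicz connectives on truth values in [0, 1].
def l_neg(a):       # negation: 1 - a
    return 1.0 - a

def l_impl(a, b):   # implication: min(1, 1 - a + b)
    return min(1.0, 1.0 - a + b)

def l_and(a, b):    # strong conjunction (t-norm): max(0, a + b - 1)
    return max(0.0, a + b - 1.0)

# A graded proposition, in the spirit of the fuzzy muddy children puzzle:
muddy = 0.8
print(l_neg(muddy))               # 0.2
print(l_impl(muddy, l_neg(muddy)))  # 0.4
print(l_and(muddy, muddy))        # 0.6
```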
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 08:41:48 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Dastgheib",
"Doratossadat",
""
],
[
"Farahani",
"Hadi",
""
]
] |
new_dataset
| 0.999443 |
2304.08085
|
Xiao Wang
|
Xiao Wang, Weikang Zhou, Can Zu, Han Xia, Tianze Chen, Yuansen Zhang,
Rui Zheng, Junjie Ye, Qi Zhang, Tao Gui, Jihua Kang, Jingsheng Yang, Siyuan
Li, Chunsai Du
|
InstructUIE: Multi-task Instruction Tuning for Unified Information
Extraction
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models have unlocked strong multi-task capabilities from
reading instructive prompts. However, recent studies have shown that existing
large models still have difficulty with information extraction tasks. For
example, gpt-3.5-turbo achieved an F1 score of 18.22 on the Ontonotes dataset,
which is significantly lower than the state-of-the-art performance. In this
paper, we propose InstructUIE, a unified information extraction framework based
on instruction tuning, which can uniformly model various information extraction
tasks and capture the inter-task dependency. To validate the proposed method,
we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction
datasets in a unified text-to-text format with expert-written instructions.
Experimental results demonstrate that our method achieves performance
comparable to BERT in supervised settings and significantly outperforms the
state of the art and GPT-3.5 in zero-shot settings.
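A hedged sketch of casting NER into an instruction-style text-to-text format; the exact templates and output schema used in IE INSTRUCTIONS may differ:

```python
# Format one NER instance as an instruction-tuning prompt.
def format_ner_example(sentence, entity_types):
    instruction = (
        "Please list all entity words in the text that fit the categories. "
        f"Options: {', '.join(entity_types)}"
    )
    return f"{instruction}\nText: {sentence}\nAnswer:"

prompt = format_ner_example(
    "Barack Obama visited Paris in 2016.",
    ["person", "location", "date"],
)
print(prompt)
# A plausible target the model is tuned to emit (schema assumed):
# "person: Barack Obama; location: Paris; date: 2016"
```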
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 09:00:50 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Wang",
"Xiao",
""
],
[
"Zhou",
"Weikang",
""
],
[
"Zu",
"Can",
""
],
[
"Xia",
"Han",
""
],
[
"Chen",
"Tianze",
""
],
[
"Zhang",
"Yuansen",
""
],
[
"Zheng",
"Rui",
""
],
[
"Ye",
"Junjie",
""
],
[
"Zhang",
"Qi",
""
],
[
"Gui",
"Tao",
""
],
[
"Kang",
"Jihua",
""
],
[
"Yang",
"Jingsheng",
""
],
[
"Li",
"Siyuan",
""
],
[
"Du",
"Chunsai",
""
]
] |
new_dataset
| 0.988988 |
2304.08095
|
Maxime Guillaud
|
Paul Ferrand, Maxime Guillaud, Christoph Studer, Olav Tirkkonen
|
Wireless Channel Charting: Theory, Practice, and Applications
|
Accepted for publication in the IEEE Communication Magazine
| null | null | null |
cs.IT cs.AI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Channel charting is a recently proposed framework that applies dimensionality
reduction to channel state information (CSI) in wireless systems with the goal
of associating a pseudo-position to each mobile user in a low-dimensional
space: the channel chart. Channel charting summarizes the entire CSI dataset in
a self-supervised manner, which opens up a range of applications that are tied
to user location. In this article, we introduce the theoretical underpinnings
of channel charting and present an overview of recent algorithmic developments
and experimental results obtained in the field. We furthermore discuss concrete
application examples of channel charting to network- and user-related
applications, and we provide a perspective on future developments and
challenges as well as the role of channel charting in next-generation wireless
networks.
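A minimal sketch of the pipeline using off-the-shelf manifold learning on synthetic CSI features; deployed systems use channel-specific dissimilarities and learned (e.g., Siamese) embeddings rather than plain Isomap, and the feature model below is an assumption:

```python
# Channel charting in miniature: reduce high-dimensional CSI features to a
# 2-D chart whose geometry reflects relative user positions.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(300, 2))   # true positions (unknown to the method)

# Synthetic CSI features: smooth, noisy functions of position stand in for
# real covariance-based features extracted from uplink channel estimates.
freqs = rng.normal(size=(2, 32))
csi_features = np.cos(positions @ freqs / 20.0) + 0.05 * rng.normal(size=(300, 32))

chart = Isomap(n_components=2, n_neighbors=10).fit_transform(csi_features)
print(chart.shape)  # (300, 2): pseudo-positions preserving local geometry
```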
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 09:10:46 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Ferrand",
"Paul",
""
],
[
"Guillaud",
"Maxime",
""
],
[
"Studer",
"Christoph",
""
],
[
"Tirkkonen",
"Olav",
""
]
] |
new_dataset
| 0.999369 |