Dataset schema (⌀ marks columns that may be null):

| column | type | notes |
|---|---|---|
| id | string | length 9 to 10 |
| submitter | string | length 2 to 52, ⌀ |
| authors | string | length 4 to 6.51k |
| title | string | length 4 to 246 |
| comments | string | length 1 to 523, ⌀ |
| journal-ref | string | length 4 to 345, ⌀ |
| doi | string | length 11 to 120, ⌀ |
| report-no | string | length 2 to 243, ⌀ |
| categories | string | length 5 to 98 |
| license | string | 9 distinct values |
| abstract | string | length 33 to 3.33k |
| versions | list | arXiv version history |
| update_date | timestamp[s] | |
| authors_parsed | list | parsed author name parts |
| prediction | string | 1 distinct value (new_dataset) |
| probability | float64 | range 0.95 to 1 |

Each record below gives these fields in order, separated by `|`; empty fields appear as `null`.
2304.08107
|
Hao Tian
|
Hao Tian, Yu Cao, P. Y. Mok
|
DETR-based Layered Clothing Segmentation and Fine-Grained Attribute
Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clothing segmentation and fine-grained attribute recognition are challenging
tasks at the intersection of computer vision and fashion, which segment entire
ensembles of clothing instances and recognize detailed attributes of the
clothing products from any input human image. Many new models have been
developed for these tasks in recent years; nevertheless, segmentation accuracy
remains less than satisfactory in the case of layered clothing or fashion
products at different scales. In this paper, a new DEtection TRansformer (DETR)
based method is proposed to segment and recognize fine-grained attributes of
ensemble clothing instances with high accuracy. In this model, we propose a
\textbf{multi-layered attention module} that aggregates features of different
scales, determines the various scale components of a single instance, and
merges them together. We train our model on the Fashionpedia dataset and
demonstrate that our method surpasses SOTA models in the tasks of layered
clothing segmentation and fine-grained attribute recognition.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 09:34:48 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Tian",
"Hao",
""
],
[
"Cao",
"Yu",
""
],
[
"Mok",
"P. Y.",
""
]
] |
new_dataset
| 0.977495 |
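The multi-layered attention module in the abstract above is described only at a high level. As a minimal sketch of the general idea, attending over feature maps at several scales and merging them, consider the following PyTorch snippet; the module and parameter names are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFeatureAggregator(nn.Module):
    """Hypothetical sketch: attend over features from several scales, then merge.

    This is NOT the paper's module; it only illustrates aggregating
    multi-scale features with attention weights before instance merging.
    """

    def __init__(self, dim: int, num_scales: int):
        super().__init__()
        # One 1x1 conv per scale produces an attention logit map.
        self.scale_logits = nn.ModuleList(
            [nn.Conv2d(dim, 1, kernel_size=1) for _ in range(num_scales)]
        )
        self.merge = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, feats):  # feats: list of (B, C, H_i, W_i), coarse to fine
        target = feats[-1].shape[-2:]  # resample everything to the finest grid
        resized = [F.interpolate(f, size=target, mode="bilinear",
                                 align_corners=False) for f in feats]
        # Stack per-scale logits and softmax-normalize across the scale axis.
        logits = torch.stack(
            [head(f) for head, f in zip(self.scale_logits, resized)], dim=0)
        weights = logits.softmax(dim=0)  # (S, B, 1, H, W)
        fused = sum(w * f for w, f in zip(weights, resized))
        return self.merge(fused)
```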
2304.08154
|
Henrik Bj{\o}rn Axelsen
|
Henrik Axelsen, Ulrik Rasmussen, Johannes Rude Jensen, Omri Ross,
Fritz Henglein
|
Trading green bonds using distributed ledger technology
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
The promising markets for voluntary carbon credits face crippling challenges
in the certification of carbon sequestration and lack scalable market
infrastructure in which companies and institutions can invest in carbon
offsetting. This amounts to a funding problem for green transition projects,
such as those in the agricultural sector, since farmers need access to the
liquidity required to fund the transition to sustainable practices. We explore
the feasibility of mitigating infrastructural challenges based on a DLT Trading
and Settlement System for green bonds. The artefact employs a multi-sharded
architecture in which the nodes retain carefully orchestrated responsibilities
in the functioning of the network. We evaluate the artefact in a supranational
context with an EU-based regulator as part of a regulatory sandbox program
targeting the new EU DLT Pilot regime. By conducting design-driven research
with stakeholders from industrial and governmental bodies, we contribute to the
IS literature on the practical implications of DLT.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 11:05:59 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Axelsen",
"Henrik",
""
],
[
"Rasmussen",
"Ulrik",
""
],
[
"Jensen",
"Johannes Rude",
""
],
[
"Ross",
"Omri",
""
],
[
"Henglein",
"Fritz",
""
]
] |
new_dataset
| 0.967445 |
2304.08162
|
Kishore Anand K
|
Prof Sangeetha R G, Kishore Anand K, Sreevatsan B and Vishal Kumar A
|
Cardiac Arrhythmia Detection using Artificial Neural Network
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The prime purpose of this project is to develop a portable cardiac
abnormality monitoring device which can drastically improve the quality of
the monitoring and the overall safety of the device. While a generic, low-cost,
wearable, battery-powered device for such applications may not yield sufficient
performance, such devices combined with the capabilities of Artificial Neural
Network algorithms can, however, prove to be as competent as high-end flexible
and wearable monitoring devices fabricated using advanced manufacturing
technologies. This paper evaluates the feasibility of the Levenberg-Marquardt
ANN algorithm for use in any generic low power wearable devices implemented
either as a pure real-time embedded system or as an IoT device capable of
uploading the monitored readings to the cloud.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 11:20:11 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"G",
"Prof Sangeetha R",
""
],
[
"K",
"Kishore Anand",
""
],
[
"B",
"Sreevatsan",
""
],
[
"A",
"Vishal Kumar",
""
]
] |
new_dataset
| 0.982859 |
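The Levenberg-Marquardt algorithm evaluated above is a damped Gauss-Newton method for least-squares training. A minimal sketch of one LM update on a generic residual (not the authors' code) is:

```python
import numpy as np

def levenberg_marquardt_step(params, residual_fn, jacobian_fn, damping):
    """One Levenberg-Marquardt update for least-squares fitting.

    params      -- current parameter vector, shape (p,)
    residual_fn -- maps params to residuals r(params), shape (m,)
    jacobian_fn -- maps params to the Jacobian dr/dparams, shape (m, p)
    damping     -- LM damping factor lambda (larger -> closer to gradient descent)
    """
    r = residual_fn(params)
    J = jacobian_fn(params)
    # Damped normal equations: (J^T J + lambda * I) delta = -J^T r
    A = J.T @ J + damping * np.eye(params.size)
    delta = np.linalg.solve(A, -J.T @ r)
    return params + delta
```

In practice the damping factor is adapted between steps: increased when an update raises the loss, decreased when it lowers it, which is what lets LM interpolate between gradient descent and Gauss-Newton.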
2304.08205
|
Chuanqi Tan
|
Zhen-Ru Zhang, Chuanqi Tan, Songfang Huang, Fei Huang
|
VECO 2.0: Cross-lingual Language Model Pre-training with
Multi-granularity Contrastive Learning
|
Technical Report for AliceMind's VECO 2.0 (ranked 1st on the XTREME
leaderboard on March 17, 2023)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent studies have demonstrated the potential of cross-lingual
transferability by training a unified Transformer encoder for multiple
languages. In addition to the masked language model objective, existing
cross-lingual pre-training works leverage sentence-level contrastive learning
or plug in an extra cross-attention module to compensate for insufficient
cross-lingual alignment. Nonetheless, synonym pairs residing in bilingual
corpora are not exploited and aligned, which is more crucial than establishing
sentence interdependence for token-level tasks. In
this work, we propose a cross-lingual pre-trained model VECO~2.0 based on
contrastive learning with multi-granularity alignments. Specifically, the
sequence-to-sequence alignment is induced to maximize the similarity of the
parallel pairs and minimize the non-parallel pairs. Then, token-to-token
alignment is integrated to bridge the gap between synonymous tokens excavated
via the thesaurus dictionary from the other unpaired tokens in a bilingual
instance. Experiments show the effectiveness of the proposed strategy for
cross-lingual model pre-training on the XTREME benchmark.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 12:23:41 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Zhang",
"Zhen-Ru",
""
],
[
"Tan",
"Chuanqi",
""
],
[
"Huang",
"Songfang",
""
],
[
"Huang",
"Fei",
""
]
] |
new_dataset
| 0.982464 |
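The sequence-to-sequence alignment described above, maximizing similarity of parallel pairs while minimizing non-parallel ones, is a standard bidirectional contrastive (InfoNCE-style) objective. A generic sketch of such a loss (not the authors' code) is:

```python
import torch
import torch.nn.functional as F

def seq2seq_alignment_loss(src_emb, tgt_emb, temperature=0.05):
    """Generic bidirectional contrastive loss over parallel sentence pairs.

    src_emb, tgt_emb -- (B, D) sentence embeddings; row i of each tensor is
    a translation pair. Other in-batch rows act as non-parallel negatives.
    """
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    sim = src @ tgt.t() / temperature        # (B, B) similarity matrix
    labels = torch.arange(sim.size(0), device=sim.device)
    # Maximize similarity of parallel pairs (diagonal), minimize the rest,
    # in both source-to-target and target-to-source directions.
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))
```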
2304.08210
|
Dustin Aganian
|
Dustin Aganian, Benedict Stephan, Markus Eisenbach, Corinna Stretz,
and Horst-Michael Gross
|
ATTACH Dataset: Annotated Two-Handed Assembly Actions for Human Action
Understanding
|
IEEE International Conference on Robotics and Automation (ICRA) 2023
| null | null | null |
cs.RO cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the emergence of collaborative robots (cobots), human-robot
collaboration in industrial manufacturing is coming into focus. For a cobot to
act autonomously and as an assistant, it must understand human actions during
assembly. To effectively train models for this task, a dataset containing
suitable assembly actions in a realistic setting is crucial. For this purpose,
we present the ATTACH dataset, which contains 51.6 hours of assembly with 95.2k
annotated fine-grained actions monitored by three cameras, which represent
potential viewpoints of a cobot. Since in an assembly context workers tend to
perform different actions simultaneously with their two hands, we annotated the
performed actions for each hand separately. Therefore, in the ATTACH dataset,
more than 68% of annotations overlap with other annotations, which is many
times more than in related datasets, typically featuring more simplistic
assembly tasks. For better generalization with respect to the background of the
working area, we not only recorded color and depth images, but also used the
Azure Kinect body tracking SDK to estimate 3D skeletons of the worker. To
create a first baseline, we report the performance of state-of-the-art methods
for action recognition as well as action detection on video and
skeleton-sequence inputs. The dataset is available at
https://www.tu-ilmenau.de/neurob/data-sets-code/attach-dataset .
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 12:31:24 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Aganian",
"Dustin",
""
],
[
"Stephan",
"Benedict",
""
],
[
"Eisenbach",
"Markus",
""
],
[
"Stretz",
"Corinna",
""
],
[
"Gross",
"Horst-Michael",
""
]
] |
new_dataset
| 0.999843 |
2304.08244
|
Minghao Li
|
Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang,
Yongbin Li
|
API-Bank: A Benchmark for Tool-Augmented LLMs
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research has shown that Large Language Models (LLMs) can utilize
external tools to improve their contextual processing abilities, moving away
from the pure language modeling paradigm and paving the way for Artificial
General Intelligence. Despite this, there has been a lack of systematic
evaluation to demonstrate the efficacy of LLMs using tools to respond to human
instructions. This paper presents API-Bank, the first benchmark tailored for
Tool-Augmented LLMs. API-Bank includes 53 commonly used API tools, a complete
Tool-Augmented LLM workflow, and 264 annotated dialogues that encompass a total
of 568 API calls. These resources have been designed to thoroughly evaluate
LLMs' ability to plan step-by-step API calls, retrieve relevant APIs, and
correctly execute API calls to meet human needs. The experimental results show
that GPT-3.5 exhibits an emergent ability to use tools compared to GPT-3, while
GPT-4 has stronger planning performance. Nevertheless, there remains considerable
scope for further improvement when compared to human performance. Additionally,
detailed error analysis and case studies demonstrate the feasibility of
Tool-Augmented LLMs for daily use, as well as the primary challenges that
future research needs to address.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 14:05:32 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Li",
"Minghao",
""
],
[
"Song",
"Feifan",
""
],
[
"Yu",
"Bowen",
""
],
[
"Yu",
"Haiyang",
""
],
[
"Li",
"Zhoujun",
""
],
[
"Huang",
"Fei",
""
],
[
"Li",
"Yongbin",
""
]
] |
new_dataset
| 0.961044 |
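Evaluating whether a model "correctly executes API calls" typically reduces to comparing each predicted call (API name plus arguments) against an annotated reference. A hypothetical scoring loop in that spirit is sketched below; the field names are illustrative, not API-Bank's actual schema.

```python
def api_call_accuracy(predictions, references):
    """Fraction of examples whose predicted API call matches the annotation.

    Each item is assumed to be a dict like
    {"api": "ToolName", "arguments": {"param": "value"}} -- an illustrative
    schema, not necessarily API-Bank's actual format.
    """
    correct = 0
    for pred, ref in zip(predictions, references):
        same_api = pred.get("api") == ref.get("api")
        same_args = pred.get("arguments") == ref.get("arguments")
        correct += int(same_api and same_args)
    return correct / max(len(references), 1)
```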
2304.08293
|
Florence Smith Nicholls
|
Florence Smith Nicholls and Michael Cook
|
'That Darned Sandstorm': A Study of Procedural Generation through
Archaeological Storytelling
|
Published at the PCG Workshop at FDG 2023
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Procedural content generation has been applied to many domains, especially
level design, but the narrative affordances of generated game environments are
comparatively understudied. In this paper we present our first attempt to study
these effects through the lens of what we call a generative archaeology game
that prompts the player to archaeologically interpret the generated content of
the game world. We report on a survey that gathered qualitative and
quantitative data on the experiences of 187 participants playing the game
Nothing Beside Remains. We provide some preliminary analysis of our intentional
attempt to prompt player interpretation, and the unintentional effects of a
glitch on the player experience of the game.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 14:08:05 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Nicholls",
"Florence Smith",
""
],
[
"Cook",
"Michael",
""
]
] |
new_dataset
| 0.979237 |
2304.08327
|
Yi-Pei Chen
|
Yi-Pei Chen, An-Zi Yen, Hen-Hsen Huang, Hideki Nakayama, Hsin-Hsi Chen
|
LED: A Dataset for Life Event Extraction from Dialogs
|
Accepted to EACL 2023 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Lifelogging has gained more attention due to its wide applications, such as
personalized recommendations or memory assistance. The issues of collecting and
extracting personal life events have emerged. People often share their life
experiences with others through conversations. However, extracting life events
from conversations is rarely explored. In this paper, we present Life Event
Dialog, a dataset containing fine-grained life event annotations on
conversational data. In addition, we initiate a novel conversational life event
extraction task and differentiate it from public event extraction and from life
event extraction from other sources such as microblogs. We explore three
information extraction (IE) frameworks to address the conversational life event
extraction task: OpenIE, relation extraction, and event extraction. A
comprehensive empirical analysis of the three baselines is established. The
results suggest that the current event extraction model still struggles with
extracting life events from human daily conversations. Our proposed life event
dialog dataset and in-depth analysis of IE frameworks will facilitate future
research on life event extraction from conversations.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 14:46:59 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Chen",
"Yi-Pei",
""
],
[
"Yen",
"An-Zi",
""
],
[
"Huang",
"Hen-Hsen",
""
],
[
"Nakayama",
"Hideki",
""
],
[
"Chen",
"Hsin-Hsi",
""
]
] |
new_dataset
| 0.999723 |
2304.08345
|
Sihan Chen
|
Sihan Chen, Xingjian He, Longteng Guo, Xinxin Zhu, Weining Wang,
Jinhui Tang, Jing Liu
|
VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and
Dataset
|
Preprint version without audio files embedded in the PDF. A version with
embedded audio can be found on the project page or GitHub
| null | null | null |
cs.LG cs.CL cs.CV cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a Vision-Audio-Language Omni-peRception pretraining
model (VALOR) for multi-modal understanding and generation. Different from
widely-studied vision-language pretraining models, VALOR jointly models
relationships of vision, audio and language in an end-to-end manner. It
contains three separate encoders for single modality representations, and a
decoder for multimodal conditional text generation. We design two pretext tasks
to pretrain VALOR model, including Multimodal Grouping Alignment (MGA) and
Multimodal Grouping Captioning (MGC). MGA projects vision, language and audio
to the same common space, building vision-language, audio-language and
audiovisual-language alignment simultaneously. MGC learns how to generate text
tokens conditioned on vision, audio, or both. To promote
vision-audio-language pretraining research, we construct a large-scale
high-quality tri-modality dataset named VALOR-1M, which contains 1M audible
videos with human-annotated audiovisual captions. Extensive experiments show
that VALOR can learn strong multimodal correlations and generalize to
various downstream tasks (e.g., retrieval, captioning and question answering)
with different input modalities (e.g., vision-language, audio-language and
audiovisual-language). VALOR achieves new state-of-the-art performance on a
series of public cross-modality benchmarks. Code and data are available at
project page https://casia-iva-group.github.io/projects/VALOR.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 15:08:15 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Chen",
"Sihan",
""
],
[
"He",
"Xingjian",
""
],
[
"Guo",
"Longteng",
""
],
[
"Zhu",
"Xinxin",
""
],
[
"Wang",
"Weining",
""
],
[
"Tang",
"Jinhui",
""
],
[
"Liu",
"Jing",
""
]
] |
new_dataset
| 0.99952 |
2304.08352
|
Karol Lynch
|
Karol Lynch and Joern Ploennigs and Bradley Eck
|
What Makes a Good Dataset for Symbol Description Reading?
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The usage of mathematical formulas as concise representations of a document's
key ideas is common practice. Correctly interpreting these formulas, by
identifying mathematical symbols and extracting their descriptions, is an
important task in document understanding. This paper makes the following
contributions to the mathematical identifier description reading (MIDR) task:
(i) introduces the Math Formula Question Answering Dataset (MFQuAD) with
$7508$ annotated identifier occurrences;
(ii) describes novel variations of the noun phrase ranking approach for the
MIDR task;
(iii) reports experimental results for the SOTA noun phrase ranking approach
and our novel variations of the approach, providing problem insights and a
performance baseline;
(iv) provides a position on the features that make an effective dataset for
the MIDR task.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 15:14:27 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Lynch",
"Karol",
""
],
[
"Ploennigs",
"Joern",
""
],
[
"Eck",
"Bradley",
""
]
] |
new_dataset
| 0.993706 |
2304.08408
|
Tobias Fischer
|
Siyuan Li, Tobias Fischer, Lei Ke, Henghui Ding, Martin Danelljan,
Fisher Yu
|
OVTrack: Open-Vocabulary Multiple Object Tracking
|
CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to recognize, localize and track dynamic objects in a scene is
fundamental to many real-world applications, such as self-driving and robotic
systems. Yet, traditional multiple object tracking (MOT) benchmarks rely only
on a few object categories that hardly represent the multitude of possible
objects that are encountered in the real world. This leaves contemporary MOT
methods limited to a small set of pre-defined object categories. In this paper,
we address this limitation by tackling a novel task, open-vocabulary MOT, that
aims to evaluate tracking beyond pre-defined training categories. We further
develop OVTrack, an open-vocabulary tracker that is capable of tracking
arbitrary object classes. Its design is based on two key ingredients: First,
leveraging vision-language models for both classification and association via
knowledge distillation; second, a data hallucination strategy for robust
appearance feature learning from denoising diffusion probabilistic models. The
result is an extremely data-efficient open-vocabulary tracker that sets a new
state-of-the-art on the large-scale, large-vocabulary TAO benchmark, while
being trained solely on static images. Project page:
https://www.vis.xyz/pub/ovtrack/
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 16:20:05 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Li",
"Siyuan",
""
],
[
"Fischer",
"Tobias",
""
],
[
"Ke",
"Lei",
""
],
[
"Ding",
"Henghui",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Yu",
"Fisher",
""
]
] |
new_dataset
| 0.999578 |
2304.08431
|
Vaclav Hanzl
|
V\'aclav Han\v{z}l, Adl\'eta Han\v{z}lov\'a
|
Prak: An automatic phonetic alignment tool for Czech
|
Submitted for ICPhS 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Labeling speech down to the identity and time boundaries of phones is a
labor-intensive part of phonetic research. To simplify this work, we created a
free open-source tool generating phone sequences from Czech text and
time-aligning them with audio.
Low architecture complexity makes the design approachable for students of
phonetics. The acoustic model, a ReLU NN with 56k weights, was trained using
PyTorch on a small CommonVoice dataset. The alignment and variant-selection
decoder is implemented in Python with a matrix library.
A Czech pronunciation generator is composed of simple rule-based blocks
capturing the logic of the language where possible, allowing modification of
transcription approach details.
Compared to tools used until now, data preparation efficiency is improved. The
tool is usable on Mac, Linux and Windows in the Praat GUI or on the command
line; it achieves mostly correct pronunciation variant choices, including
glottal stop detection, algorithmically captures most of the Czech assimilation
logic, and is both didactic and practical.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 16:51:24 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Hanžl",
"Václav",
""
],
[
"Hanžlová",
"Adléta",
""
]
] |
new_dataset
| 0.993946 |
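The time-alignment step described above is classic forced alignment: given framewise phone log-probabilities from the acoustic model and the expected phone sequence, a dynamic-programming pass finds the best monotone assignment of frames to phones. A minimal sketch of that idea (not Prak's actual decoder) follows; it assumes at least as many frames as phones.

```python
import numpy as np

def forced_align(log_probs, phone_ids):
    """Align frames to a known phone sequence by dynamic programming.

    log_probs -- (T, P) framewise log-probabilities over P phone classes
    phone_ids -- length-S list of expected phone indices (requires T >= S)
    Returns a length-T array mapping each frame to a position in phone_ids.
    """
    T, S = log_probs.shape[0], len(phone_ids)
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0, 0] = log_probs[0, phone_ids[0]]
    for t in range(1, T):
        for s in range(min(t + 1, S)):  # cannot reach phone s before frame s
            stay = score[t - 1, s]
            move = score[t - 1, s - 1] if s > 0 else -np.inf
            back[t, s] = 0 if stay >= move else 1
            score[t, s] = max(stay, move) + log_probs[t, phone_ids[s]]
    # Backtrace from the final phone at the final frame.
    path, s = [], S - 1
    for t in range(T - 1, -1, -1):
        path.append(s)
        s -= back[t, s]
    return np.array(path[::-1])
```

Phone boundaries are then read off wherever the returned index changes from one frame to the next.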
2304.08435
|
Khushhall Chandra Mahajan
|
Khushhall Chandra Mahajan, Aditya Palnitkar, Ameya Raul, Brad
Schumitsch
|
CAViaR: Context Aware Video Recommendations
|
Accepted by WWW'2023
| null |
10.1145/3543873.3584658
| null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many recommendation systems rely on point-wise models, which score items
individually. However, point-wise models generating scores for a video are
unable to account for other videos being recommended in a query. Due to this,
diversity has to be introduced through the application of heuristic-based
rules, which are not able to capture user preferences, or make balanced
trade-offs in terms of diversity and item relevance. In this paper, we propose
a novel method which introduces diversity by modeling the impact of low
diversity on users' engagement with individual items, thus being able to account
for both diversity and relevance when adjusting item scores. The proposed method is
designed to be easily pluggable into existing large-scale recommender systems,
while introducing minimal changes in the recommendations stack. Our models show
significant improvements in offline metrics based on the normalized cross
entropy loss compared to production point-wise models. Our approach also shows
a substantial increase of 1.7% in topline engagements coupled with a 1.5%
increase in daily active users in an A/B test with live traffic on Facebook
Watch, which translates into an increase of millions in the number of daily
active users for the product.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 16:56:23 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Mahajan",
"Khushhall Chandra",
""
],
[
"Palnitkar",
"Aditya",
""
],
[
"Raul",
"Ameya",
""
],
[
"Schumitsch",
"Brad",
""
]
] |
new_dataset
| 0.99525 |
2304.08447
|
Yahia Dalbah
|
Yahia Dalbah, Jean Lahoud, Hisham Cholakkal
|
RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection
Model
|
18 pages (with reference), 8 figures, submitted and accepted to
SCIA2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The performance of perception systems developed for autonomous driving
vehicles has seen significant improvements over the last few years. This
improvement was associated with the increasing use of LiDAR sensors and point
cloud data to facilitate the task of object detection and recognition in
autonomous driving. However, LiDAR and camera systems show deteriorating
performances when used in unfavorable conditions like dusty and rainy weather.
Radars, on the other hand, operate at relatively longer wavelengths, which
allows for much more robust measurements in these conditions. Despite that,
radar-centric datasets have received little attention in the development of
deep learning techniques for radar perception. In this work, we consider the
radar object detection problem, in which the radar frequency data is the only
input into the detection framework. We further investigate the challenges of
using radar-only data in deep learning models. We propose a transformer-based
model, named RadarFormer, that utilizes state-of-the-art developments in vision
deep learning. Our model also introduces a channel-chirp-time merging module
that reduces the size and complexity of our models by more than 10 times
without compromising accuracy. Comprehensive experiments on the CRUW radar
dataset demonstrate the advantages of the proposed method. Our RadarFormer
performs favorably against the state-of-the-art methods while being 2x faster
during inference and requiring only one-tenth of their model parameters. The
code associated with this paper is available at
https://github.com/YahiDar/RadarFormer.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 17:07:35 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Dalbah",
"Yahia",
""
],
[
"Lahoud",
"Jean",
""
],
[
"Cholakkal",
"Hisham",
""
]
] |
new_dataset
| 0.998425 |
2304.08483
|
Yuming Jiang
|
Yuming Jiang, Shuai Yang, Tong Liang Koh, Wayne Wu, Chen Change Loy,
Ziwei Liu
|
Text2Performer: Text-Driven Human Video Generation
|
Project Page: https://yumingj.github.io/projects/Text2Performer.html,
Github: https://github.com/yumingj/Text2Performer
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-driven content creation has evolved to be a transformative technique
that revolutionizes creativity. Here we study the task of text-driven human
video generation, where a video sequence is synthesized from texts describing
the appearance and motions of a target performer. Compared to general
text-driven video generation, human-centric video generation requires
maintaining the appearance of the synthesized human while performing complex
motions. In this work, we present Text2Performer to generate vivid human videos
with articulated motions from texts. Text2Performer has two novel designs: 1)
decomposed human representation and 2) diffusion-based motion sampler. First,
we decompose the VQVAE latent space into human appearance and pose
representation in an unsupervised manner by utilizing the nature of human
videos. In this way, the appearance is well maintained across the generated
frames. Then, we propose a continuous VQ-diffuser to sample a sequence of pose
embeddings. Unlike existing VQ-based methods that operate in the discrete
space, the continuous VQ-diffuser directly outputs continuous pose embeddings
for better motion modeling. Finally, a motion-aware masking strategy is designed
to mask the pose embeddings spatio-temporally to enhance temporal
coherence. Moreover, to facilitate the task of text-driven human video
generation, we contribute a Fashion-Text2Video dataset with manually annotated
action labels and text descriptions. Extensive experiments demonstrate that
Text2Performer generates high-quality human videos (up to 512x256 resolution)
with diverse appearances and flexible motions.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 17:59:02 GMT"
}
] | 2023-04-18T00:00:00 |
[
[
"Jiang",
"Yuming",
""
],
[
"Yang",
"Shuai",
""
],
[
"Koh",
"Tong Liang",
""
],
[
"Wu",
"Wayne",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.997499 |
2004.05702
|
Henry Phalen
|
Henry Phalen, Prasad Vagdargi, Mariah L. Schrum, Sumana Chakravarty,
Amanda Canezin, Michael Pozin, Suat Coemert, Iulian Iordachita, Stephen L.
Hoffman, Gregory S. Chirikjian, Russell H. Taylor
|
A Mosquito Pick-and-Place System for PfSPZ-based Malaria Vaccine
Production
|
12 pages, 11 figures, Manuscript submitted for Special Issue of IEEE
CASE 2019 for IEEE T-ASE
| null |
10.1109/tase.2020.2992131
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The treatment of malaria is a global health challenge that stands to benefit
from the widespread introduction of a vaccine for the disease. A method has
been developed to create a live organism vaccine using the sporozoites (SPZ) of
the parasite Plasmodium falciparum (Pf), which are concentrated in the salivary
glands of infected mosquitoes. Current manual dissection methods to obtain
these PfSPZ are not optimally efficient for large-scale vaccine production. We
propose an improved dissection procedure and a mechanical fixture that
increases the rate of mosquito dissection and helps to deskill this stage of
the production process. We further demonstrate the automation of a key step in
this production process, the picking and placing of mosquitoes from a staging
apparatus into a dissection assembly. This unit test of a robotic mosquito
pick-and-place system is performed using a custom-designed micro-gripper
attached to a four degree of freedom (4-DOF) robot under the guidance of a
computer vision system. Mosquitoes are autonomously grasped and pulled to a
pair of notched dissection blades to remove the head of the mosquito, allowing
access to the salivary glands. Placement into these blades is adapted based on
output from computer vision to accommodate for the unique anatomy and
orientation of each grasped mosquito. In this pilot test of the system on 50
mosquitoes, we demonstrate a 100% grasping accuracy and a 90% accuracy in
placing the mosquito with its neck within the blade notches such that the head
can be removed. This is a promising result for this difficult and non-standard
pick-and-place task.
|
[
{
"version": "v1",
"created": "Sun, 12 Apr 2020 21:39:56 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Phalen",
"Henry",
""
],
[
"Vagdargi",
"Prasad",
""
],
[
"Schrum",
"Mariah L.",
""
],
[
"Chakravarty",
"Sumana",
""
],
[
"Canezin",
"Amanda",
""
],
[
"Pozin",
"Michael",
""
],
[
"Coemert",
"Suat",
""
],
[
"Iordachita",
"Iulian",
""
],
[
"Hoffman",
"Stephen L.",
""
],
[
"Chirikjian",
"Gregory S.",
""
],
[
"Taylor",
"Russell H.",
""
]
] |
new_dataset
| 0.980212 |
2110.01580
|
Djoko Suprijanto -
|
Djoko Suprijanto and Hopein Christofen Tang
|
Skew cyclic codes over $\mathbb{Z}_4+v\mathbb{Z}_4$ with derivation:
structural properties and computational results
|
25 pages
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we study a class of skew cyclic codes over the ring
$R:=\mathbb{Z}_4+v\mathbb{Z}_4,$ where $v^2=v,$ with an automorphism $\theta$
and a derivation $\Delta_\theta,$ namely codes as modules over a skew
polynomial ring $R[x;\theta,\Delta_{\theta}],$ whose multiplication is defined
using an automorphism $\theta$ and a derivation $\Delta_{\theta}.$ We
investigate the structures of a skew polynomial ring
$R[x;\theta,\Delta_{\theta}].$ We define $\Delta_{\theta}$-cyclic codes as a
generalization of the notion of cyclic codes. The properties of
$\Delta_{\theta}$-cyclic codes as well as dual $\Delta_{\theta}$-cyclic codes
are derived. As an application, some new linear codes over $\mathbb{Z}_4$ with
good parameters are obtained by the Plotkin sum construction, as well as via a
Gray map and the residue and torsion codes of these codes.
|
[
{
"version": "v1",
"created": "Mon, 4 Oct 2021 17:23:49 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Oct 2021 11:51:50 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Feb 2022 15:36:03 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Apr 2023 10:54:00 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Suprijanto",
"Djoko",
""
],
[
"Tang",
"Hopein Christofen",
""
]
] |
new_dataset
| 0.996243 |
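For orientation on the construction in the abstract above: multiplication in a skew polynomial ring $R[x;\theta,\Delta_\theta]$ is determined by the standard Ore commutation rule (this is the textbook definition, stated here for readers unfamiliar with it):

$$ x \cdot r = \theta(r)\,x + \Delta_\theta(r) \quad \text{for all } r \in R, $$

where $\Delta_\theta$ is a $\theta$-derivation, i.e., it is additive and satisfies $\Delta_\theta(rs) = \Delta_\theta(r)\,s + \theta(r)\,\Delta_\theta(s)$.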
2111.08172
|
Eric Graves
|
Eric Graves, Ehsan Imani, Raksha Kumaraswamy, Martha White
|
Off-Policy Actor-Critic with Emphatic Weightings
|
63 pages
|
Journal of Machine Learning Research 24 (2023) 1-63
| null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A variety of theoretically-sound policy gradient algorithms exist for the
on-policy setting due to the policy gradient theorem, which provides a
simplified form for the gradient. The off-policy setting, however, has been
less clear due to the existence of multiple objectives and the lack of an
explicit off-policy policy gradient theorem. In this work, we unify these
objectives into one off-policy objective, and provide a policy gradient theorem
for this unified objective. The derivation involves emphatic weightings and
interest functions. We show multiple strategies to approximate the gradients,
in an algorithm called Actor Critic with Emphatic weightings (ACE). We prove in
a counterexample that previous (semi-gradient) off-policy actor-critic
methods--particularly Off-Policy Actor-Critic (OffPAC) and Deterministic Policy
Gradient (DPG)--converge to the wrong solution whereas ACE finds the optimal
solution. We also highlight why these semi-gradient approaches can still
perform well in practice, suggesting strategies for variance reduction in ACE.
We empirically study several variants of ACE on two classic control
environments and an image-based environment designed to illustrate the
tradeoffs made by each gradient approximation. We find that by approximating
the emphatic weightings directly, ACE performs as well as or better than OffPAC
in all settings tested.
|
[
{
"version": "v1",
"created": "Tue, 16 Nov 2021 01:18:16 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Aug 2022 17:33:58 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2023 20:18:25 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Graves",
"Eric",
""
],
[
"Imani",
"Ehsan",
""
],
[
"Kumaraswamy",
"Raksha",
""
],
[
"White",
"Martha",
""
]
] |
new_dataset
| 0.980115 |
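For orientation, the policy gradient theorem with emphatic weightings referenced in the abstract above takes, in one common statement for the excursion objective with interest function $i$, the form

$$ \nabla_\theta J(\theta) = \sum_{s} m(s) \sum_{a} \nabla_\theta \pi(a \mid s; \theta)\, q_\pi(s,a), $$

where $m(s)$ is the emphatic weighting induced by the behavior state distribution $d_\mu$ and the interest $i(s)$. Semi-gradient methods such as OffPAC instead weight states by $d_\mu(s)\,i(s)$, which is what the counterexample exploits.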
2206.05256
|
Joshua Brakensiek
|
Joshua Brakensiek, Sivakanth Gopi, Visu Makam
|
Generic Reed-Solomon codes achieve list-decoding capacity
|
37 pages
| null | null | null |
cs.IT cs.CC math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a recent paper, Brakensiek, Gopi and Makam introduced higher order MDS
codes as a generalization of MDS codes. An order-$\ell$ MDS code, denoted by
$\operatorname{MDS}(\ell)$, has the property that any $\ell$ subspaces formed
from columns of its generator matrix intersect as minimally as possible. An
independent work by Roth defined a different notion of higher order MDS codes
as those achieving a generalized singleton bound for list-decoding. In this
work, we show that these two notions of higher order MDS codes are (nearly)
equivalent.
We also show that generic Reed-Solomon codes are $\operatorname{MDS}(\ell)$
for all $\ell$, relying crucially on the GM-MDS theorem which shows that
generator matrices of generic Reed-Solomon codes achieve any possible zero
pattern. As a corollary, this implies that generic Reed-Solomon codes achieve
list decoding capacity. More concretely, we show that, with high probability, a
random Reed-Solomon code of rate $R$ over an exponentially large field is list
decodable from radius $1-R-\epsilon$ with list size at most
$\frac{1-R-\epsilon}{\epsilon}$, resolving a conjecture of Shangguan and Tamo.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 17:54:02 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 20:30:24 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Brakensiek",
"Joshua",
""
],
[
"Gopi",
"Sivakanth",
""
],
[
"Makam",
"Visu",
""
]
] |
new_dataset
| 0.990115 |
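The list size quoted in the abstract above matches the generalized Singleton bound of Shangguan and Tamo. In its rate-normalized form, a code of rate $R$ that is list-decodable to radius $\rho$ with list size $L$ must satisfy $\rho \le \frac{L}{L+1}(1-R)$; setting $\rho = 1-R-\epsilon$ gives

$$ 1-R-\epsilon \le \frac{L}{L+1}(1-R) \;\Longleftrightarrow\; \frac{1}{L+1} \le \frac{\epsilon}{1-R} \;\Longleftrightarrow\; L \ge \frac{1-R-\epsilon}{\epsilon}, $$

so the list size $\frac{1-R-\epsilon}{\epsilon}$ achieved by random Reed-Solomon codes is the smallest the bound permits.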
2206.15088
|
Ma\"el Dumas
|
Ma\"el Dumas, Florent Foucaud, Anthony Perez, Ioan Todinca
|
On graphs coverable by k shortest paths
| null | null | null | null |
cs.DM cs.CC cs.DS math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
We show that if the edges or vertices of an undirected graph $G$ can be
covered by $k$ shortest paths, then the pathwidth of $G$ is upper-bounded by a
single-exponential function of $k$. As a corollary, we prove that the problem
Isometric Path Cover with Terminals (which, given a graph $G$ and a set of $k$
pairs of vertices called terminals, asks whether $G$ can be covered by $k$
shortest paths, each joining a pair of terminals) is FPT with respect to the
number of terminals. The same holds for the similar problem Strong Geodetic Set
with Terminals (which, given a graph $G$ and a set of $k$ terminals, asks
whether there exist $\binom{k}{2}$ shortest paths covering $G$, each joining a
distinct pair of terminals). Moreover, this implies that the related problems
Isometric Path Cover and Strong Geodetic Set (defined similarly but where the
set of terminals is not part of the input) are in XP with respect to parameter
$k$.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 07:46:47 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 10:30:59 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Dumas",
"Maël",
""
],
[
"Foucaud",
"Florent",
""
],
[
"Perez",
"Anthony",
""
],
[
"Todinca",
"Ioan",
""
]
] |
new_dataset
| 0.993476 |
2207.00856
|
Alejandro Lancho
|
Alejandro Lancho, Giuseppe Durisi and Luca Sanguinetti
|
Cell-Free Massive MIMO for URLLC: A Finite-Blocklength Analysis
|
13 pages, 8 figures, 1 table, accepted version at IEEE Transactions
on Wireless Communications
| null |
10.1109/TWC.2023.3265303
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a general framework for the characterization of the packet error
probability achievable in cell-free Massive multiple-input multiple-output
(MIMO) architectures deployed to support ultra-reliable low-latency
communication (URLLC) traffic. The framework is general and encompasses both centralized and
distributed cell-free architectures, arbitrary fading channels and channel
estimation algorithms at both network and user-equipment (UE) sides, as well as
arbitrary combining and precoding schemes. The framework is used to perform
numerical experiments on specific scenarios, which illustrate the superiority
of cell-free architectures compared to cellular architectures in supporting
URLLC traffic in uplink and downlink. Also, these numerical experiments provide
the following insights into the design of cell-free architectures for URLLC: i)
minimum mean square error (MMSE) spatial processing must be used to achieve the
URLLC targets; ii) for a given total number of antennas per coverage area,
centralized cell-free solutions involving single-antenna access points (APs)
offer the best performance in the uplink, thereby highlighting the importance
of reducing the average distance between APs and UEs in the URLLC regime; iii)
this observation applies also to the downlink, provided that the APs transmit
precoded pilots to allow the UEs to estimate accurately the precoded channel.
|
[
{
"version": "v1",
"created": "Sat, 2 Jul 2022 15:08:40 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Dec 2022 16:11:11 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Apr 2023 12:59:57 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Lancho",
"Alejandro",
""
],
[
"Durisi",
"Giuseppe",
""
],
[
"Sanguinetti",
"Luca",
""
]
] |
new_dataset
| 0.995137 |
2207.12496
|
Bandhav Veluri
|
Bandhav Veluri, Collin Pernu, Ali Saffari, Joshua Smith, Michael
Taylor, Shyamnath Gollakota
|
NeuriCam: Key-Frame Video Super-Resolution and Colorization for IoT
Cameras
|
MobiCom 2023 camera-ready
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present NeuriCam, a novel deep learning-based system to achieve video
capture from low-power dual-mode IoT camera systems. Our idea is to design a
dual-mode camera system where the first mode is low-power (1.1 mW) but only
outputs grey-scale, low resolution, and noisy video and the second mode
consumes much higher power (100 mW) but outputs color and higher resolution
images. To reduce total energy consumption, we heavily duty cycle the high
power mode to output an image only once every second. The data for this camera
system is then wirelessly sent to a nearby plugged-in gateway, where we run our
real-time neural network decoder to reconstruct a higher-resolution color
video. To achieve this, we introduce an attention feature filter mechanism that
assigns different weights to different features, based on the correlation
between the feature map and the contents of the input frame at each spatial
location. We design a wireless hardware prototype using off-the-shelf cameras
and address practical issues including packet loss and perspective mismatch.
Our evaluations show that our dual-camera approach reduces energy consumption
by 7x compared to existing systems. Further, our model achieves an average
greyscale PSNR gain of 3.7 dB over prior single and dual-camera video
super-resolution methods and 5.6 dB RGB gain over prior color propagation
methods. Open-source code: https://github.com/vb000/NeuriCam.
|
[
{
"version": "v1",
"created": "Mon, 25 Jul 2022 19:54:57 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 23:35:16 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Veluri",
"Bandhav",
""
],
[
"Pernu",
"Collin",
""
],
[
"Saffari",
"Ali",
""
],
[
"Smith",
"Joshua",
""
],
[
"Taylor",
"Michael",
""
],
[
"Gollakota",
"Shyamnath",
""
]
] |
new_dataset
| 0.999379 |
2208.09163
|
Fiona Anting Tan Ms
|
Fiona Anting Tan, Xinyu Zuo and See-Kiong Ng
|
UniCausal: Unified Benchmark and Repository for Causal Text Mining
|
15 pages include References
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Current causal text mining datasets vary in objectives, data coverage, and
annotation schemes. These inconsistent efforts hamper modeling capabilities and
prevent fair comparisons of model performance. Furthermore, few datasets include
cause-effect span annotations, which are needed for end-to-end causal relation
extraction. To address these issues, we propose UniCausal, a unified benchmark
for causal text mining across three tasks: (I) Causal Sequence Classification,
(II) Cause-Effect Span Detection and (III) Causal Pair Classification. We
consolidated and aligned annotations of six high quality, mainly
human-annotated, corpora, resulting in a total of 58,720, 12,144 and 69,165
examples for each task respectively. Since the definition of causality can be
subjective, our framework was designed to allow researchers to work on some or
all datasets and tasks. To create an initial benchmark, we fine-tuned BERT
pre-trained language models to each task, achieving 70.10% Binary F1, 52.42%
Macro F1, and 84.68% Binary F1 scores respectively.
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 06:14:05 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 09:02:50 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Tan",
"Fiona Anting",
""
],
[
"Zuo",
"Xinyu",
""
],
[
"Ng",
"See-Kiong",
""
]
] |
new_dataset
| 0.998601 |
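The baseline described above is standard sequence-classification fine-tuning. A generic sketch with the Hugging Face transformers API is shown below; the toy corpus, checkpoint choice, and hyperparameters are illustrative stand-ins, not the UniCausal training setup.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy stand-in corpus; in practice this would be one of the six consolidated
# corpora. Checkpoint and hyperparameters are illustrative only.
data = Dataset.from_dict({
    "text": ["The storm caused the outage.", "The sky is blue."],
    "label": [1, 0],  # 1 = causal, 0 = not causal
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2)

def encode(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

tokenized = data.map(encode, batched=True)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
)
trainer.train()
```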
2209.09693
|
Michele Focchi
|
Michele Focchi, Mohamed Bensaadallah, Marco Frego, Angelika Peer,
Daniele Fontanelli, Andrea Del Prete, Luigi Palopoli
|
CLIO: a Novel Robotic Solution for Exploration and Rescue Missions in
Hostile Mountain Environments
|
7 pages
|
ICRA 2023
| null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Rescue missions in mountain environments are hardly achievable by standard
legged robots (because of the high slopes) or by flying robots (because of
limited payload capacity). We present a concept for a rope-aided climbing robot
which can negotiate up-to-vertical slopes and carry heavy payloads. The robot is
attached to the mountain through a rope, and it is equipped with a leg to push
against the mountain and initiate jumping maneuvers. Between jumps, a hoist is
used to wind/unwind the rope to move vertically and affect the lateral motion.
This simple (yet effective) two-fold actuation allows the system to achieve
high safety and energy efficiency. Indeed, the rope prevents the robot from
falling while compensating for most of its weight, drastically reducing the
effort required by the leg actuator. We also present an optimal control
strategy to generate point-to-point trajectories overcoming an obstacle. We
achieve fast computation time (<1 s) thanks to the use of a custom simplified
robot model. We validated the generated optimal movements in Gazebo simulations
with a complete robot model with a < 5% error on a 16 m long jump, showing the
effectiveness of the proposed approach, and confirming the interest of our
concept. Finally, we performed a reachability analysis showing that the region
of achievable targets is strongly affected by the friction properties of the
foot-wall contact.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 12:58:04 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 09:50:40 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Focchi",
"Michele",
""
],
[
"Bensaadallah",
"Mohamed",
""
],
[
"Frego",
"Marco",
""
],
[
"Peer",
"Angelika",
""
],
[
"Fontanelli",
"Daniele",
""
],
[
"Del Prete",
"Andrea",
""
],
[
"Palopoli",
"Luigi",
""
]
] |
new_dataset
| 0.999781 |
2210.01171
|
Zehong Wang
|
Zehong Wang, Qi Li, Donghua Yu
|
TPGNN: Learning High-order Information in Dynamic Graphs via Temporal
Propagation
| null | null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal graph is an abstraction for modeling dynamic systems that consist of
evolving interaction elements. In this paper, we aim to solve an important yet
neglected problem, namely how to learn information from high-order neighbors in
temporal graphs, to enhance the informativeness and discriminativeness of the
learned node representations. We argue that when learning high-order
information from temporal graphs, we encounter two challenges, i.e.,
computational inefficiency and over-smoothing, that cannot be solved by
conventional techniques applied to static graphs. To remedy these deficiencies,
we propose a temporal propagation-based graph neural network, namely TPGNN. To
be specific, the model consists of two distinct components, i.e., propagator
and node-wise encoder. The propagator is leveraged to propagate messages from
the anchor node to its temporal neighbors within $k$-hop, and then
simultaneously update the state of neighborhoods, which enables efficient
computation, especially for a deep model. In addition, to prevent
over-smoothing, the model compels the messages from $n$-hop neighbors to update
the $n$-hop memory vector preserved on the anchor. The node-wise encoder adopts
transformer architecture to learn node representations by explicitly learning
the importance of memory vectors preserved on the node itself, that is,
implicitly modeling the importance of messages from neighbors at different
layers, thus mitigating the over-smoothing. Since the encoding process will not
query temporal neighbors, we can dramatically save time consumption in
inference. Extensive experiments on temporal link prediction and node
classification demonstrate the superiority of TPGNN over state-of-the-art
baselines in efficiency and robustness.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 18:39:07 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 23:41:39 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Wang",
"Zehong",
""
],
[
"Li",
"Qi",
""
],
[
"Yu",
"Donghua",
""
]
] |
new_dataset
| 0.977537 |
2212.10343
|
Rodrigo Hernang\'omez
|
Rodrigo Hernang\'omez, Philipp Geuer, Alexandros Palaios, Daniel
Sch\"aufele, Cara Watermann, Khawla Taleb-Bouhemadi, Mohammad Parvini, Anton
Krause, Sanket Partani, Christian Vielhaus, Martin Kasparick, Daniel F.
K\"ulzer, Friedrich Burmeister, Frank H. P. Fitzek, Hans D. Schotten, Gerhard
Fettweis, S{\l}awomir Sta\'nczak
|
Berlin V2X: A Machine Learning Dataset from Multiple Vehicles and Radio
Access Technologies
|
5 pages, 6 figures. Accepted for presentation at IEEE conference
VTC2023-Spring. Available dataset at
https://ieee-dataport.org/open-access/berlin-v2x
| null | null | null |
cs.LG cs.AI cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
The evolution of wireless communications into 6G and beyond is expected to
rely on new machine learning (ML)-based capabilities. These can enable
proactive decisions and actions from wireless-network components to sustain
quality-of-service (QoS) and user experience. Moreover, new use cases in the
area of vehicular and industrial communications will emerge. Specifically in
the area of vehicle communication, vehicle-to-everything (V2X) schemes will
benefit strongly from such advances. With this in mind, we have conducted a
detailed measurement campaign that paves the way to a plethora of diverse
ML-based studies. The resulting datasets offer GPS-located wireless
measurements across diverse urban environments for both cellular (with two
different operators) and sidelink radio access technologies, thus enabling a
variety of different studies towards V2X. The datasets are labeled and sampled
with a high time resolution. Furthermore, we make the data publicly available
with all the necessary information to support the onboarding of new
researchers. We provide an initial analysis of the data showing some of the
challenges that ML needs to overcome and the features that ML can leverage, as
well as some hints at potential research studies.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 15:26:39 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 17:03:30 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Apr 2023 16:15:30 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Hernangómez",
"Rodrigo",
""
],
[
"Geuer",
"Philipp",
""
],
[
"Palaios",
"Alexandros",
""
],
[
"Schäufele",
"Daniel",
""
],
[
"Watermann",
"Cara",
""
],
[
"Taleb-Bouhemadi",
"Khawla",
""
],
[
"Parvini",
"Mohammad",
""
],
[
"Krause",
"Anton",
""
],
[
"Partani",
"Sanket",
""
],
[
"Vielhaus",
"Christian",
""
],
[
"Kasparick",
"Martin",
""
],
[
"Külzer",
"Daniel F.",
""
],
[
"Burmeister",
"Friedrich",
""
],
[
"Fitzek",
"Frank H. P.",
""
],
[
"Schotten",
"Hans D.",
""
],
[
"Fettweis",
"Gerhard",
""
],
[
"Stańczak",
"Sławomir",
""
]
] |
new_dataset
| 0.999844 |
2301.04397
|
Daniel Adolfsson
|
Daniel Adolfsson, Mattias Karlsson, Vladim\'ir Kubelka, Martin
Magnusson, Henrik Andreasson
|
TBV Radar SLAM -- trust but verify loop candidates
|
Accepted for RAL, to be presented at IROS 2023, Detroit. Code:
https://github.com/dan11003/tbv_slam_public Submission video:
https://youtu.be/t8HQtHAUHHc
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Robust SLAM in large-scale environments requires fault resilience and
awareness at multiple stages, from sensing and odometry estimation to loop
closure. In this work, we present TBV (Trust But Verify) Radar SLAM, a method
for radar SLAM that introspectively verifies loop closure candidates. TBV Radar
SLAM achieves a high correct-loop-retrieval rate by combining multiple
place-recognition techniques: tightly coupled place similarity and odometry
uncertainty search, creating loop descriptors from origin-shifted scans, and
delaying loop selection until after verification. Robustness to false
constraints is achieved by carefully verifying and selecting the most likely
ones from multiple loop constraints. Importantly, the verification and
selection are carried out after registration when additional sources of loop
evidence can easily be computed. We integrate our loop retrieval and
verification method with a fault-resilient odometry pipeline within a pose
graph framework. By evaluating on public benchmarks, we found that TBV Radar
SLAM achieves 65% lower error than the previous state of the art. We also show
that it generalizes across environments without needing to change any
parameters.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 10:50:24 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2023 07:25:23 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Apr 2023 08:49:51 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Adolfsson",
"Daniel",
""
],
[
"Karlsson",
"Mattias",
""
],
[
"Kubelka",
"Vladimír",
""
],
[
"Magnusson",
"Martin",
""
],
[
"Andreasson",
"Henrik",
""
]
] |
new_dataset
| 0.996018 |
2302.07738
|
Antoine Lefebvre-Brossard
|
Antoine Lefebvre-Brossard, Stephane Gazaille, Michel C. Desmarais
|
Alloprof: a new French question-answer education dataset and its use in
an information retrieval case study
| null | null | null | null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Teachers and students are increasingly relying on online learning resources
to supplement the ones provided in school. This increase in the breadth and
depth of available resources is a great thing for students, but only provided
they are able to find answers to their queries. Question-answering and
information retrieval systems have benefited from public datasets to train and
evaluate their algorithms, but most of these datasets have been in English text
written by and for adults. We introduce a new public French question-answering
dataset collected from Alloprof, a Quebec-based primary and high-school help
website, containing 29 349 questions and their explanations in a variety of
school subjects from 10 368 students, with more than half of the explanations
containing links to other questions or some of the 2 596 reference pages on the
website. We also present a case study of this dataset in an information
retrieval task. This dataset was collected on the Alloprof public forum, with
all questions verified for their appropriateness and the explanations verified
both for their appropriateness and their relevance to the question. To predict
relevant documents, architectures using pre-trained BERT models were fine-tuned
and evaluated. This dataset will allow researchers to develop
question-answering, information retrieval and other algorithms specifically for
the French-speaking education context. Furthermore, the range of language
proficiency, images, mathematical symbols and spelling mistakes will
necessitate algorithms based on multimodal comprehension. The case study we
present as a baseline shows that an approach relying on recent techniques
provides an acceptable level of performance, but more work is necessary before
it can reliably be used and trusted in a production setting.
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 20:23:27 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 13:20:07 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Lefebvre-Brossard",
"Antoine",
""
],
[
"Gazaille",
"Stephane",
""
],
[
"Desmarais",
"Michel C.",
""
]
] |
new_dataset
| 0.999866 |
2303.08687
|
Jade Nardi
|
Sabira El Khalfaoui, Mathieu Lhotel, Jade Nardi
|
Goppa-like AG codes from $C_{a,b}$ curves and their behaviour under
squaring their dual
|
Minor changes: authors reordered alphabetically and missing
parentheses added in Corollary 1.8
| null | null | null |
cs.IT math.AG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a family of codes that can be used in a McEliece
cryptosystem, called Goppa--like AG codes. These codes generalize classical
Goppa codes and can be constructed from any curve of genus $\mathfrak{g} \geq
0$. Focusing on codes from $C_{a,b}$ curves, we study the behaviour of the
dimension of the square of their dual to determine their resistance to
distinguisher attacks similar to the one for alternant and Goppa codes
developed by Mora and Tillich. We also propose numerical experiments to measure
how sharp our bound is.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 15:17:12 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 09:21:16 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Khalfaoui",
"Sabira El",
""
],
[
"Lhotel",
"Mathieu",
""
],
[
"Nardi",
"Jade",
""
]
] |
new_dataset
| 0.999755 |
2303.17876
|
Nora Hollenstein
|
Tiago Ribeiro, Stephanie Brandl, Anders S{\o}gaard, Nora Hollenstein
|
WebQAmGaze: A Multilingual Webcam Eye-Tracking-While-Reading Dataset
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We create WebQAmGaze, a multilingual low-cost eye-tracking-while-reading
dataset, designed to support the development of fair and transparent NLP
models. WebQAmGaze includes webcam eye-tracking data from 332 participants
naturally reading English, Spanish, and German texts. Each participant performs
two reading tasks composed of five texts, a normal reading and an
information-seeking task. After preprocessing the data, we find that fixations
on relevant spans seem to indicate correctness when answering the comprehension
questions. Additionally, we compare the collected data to high-quality
eye-tracking data. The results show a moderate correlation between the features
obtained with the webcam ET and those of a commercial ET device. We believe
this data can advance webcam-based
reading studies and open a way to cheaper and more accessible data collection.
WebQAmGaze is useful to learn about the cognitive processes behind question
answering (QA) and to apply these insights to computational models of language
understanding.
|
[
{
"version": "v1",
"created": "Fri, 31 Mar 2023 08:18:30 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 06:22:47 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Ribeiro",
"Tiago",
""
],
[
"Brandl",
"Stephanie",
""
],
[
"Søgaard",
"Anders",
""
],
[
"Hollenstein",
"Nora",
""
]
] |
new_dataset
| 0.99978 |
2304.06255
|
Siqi Chen
|
Siqi Chen, Xueming Li, Xianlin Zhang, Mingdao Wang, Yu Zhang, Yue
Zhang
|
SPColor: Semantic Prior Guided Exemplar-based Image Colorization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exemplar-based image colorization aims to colorize a target grayscale image
based on a color reference image, and the key is to establish accurate
pixel-level semantic correspondence between these two images. Previous methods
search for correspondence across the entire reference image, and this type of
global matching is easy to get mismatch. We summarize the difficulties in two
aspects: (1) When the reference image only contains a part of objects related
to target image, improper correspondence will be established in unrelated
regions. (2) It is prone to get mismatch in regions where the shape or texture
of the object is easily confused. To overcome these issues, we propose SPColor,
a semantic prior guided exemplar-based image colorization framework. Different
from previous methods, SPColor first coarsely classifies pixels of the
reference and target images to several pseudo-classes under the guidance of
semantic prior, then the correspondences are only established locally between
the pixels in the same class via the newly designed semantic prior guided
correspondence network. In this way, improper correspondence between different
semantic classes is explicitly excluded, and the mismatch is obviously
alleviated. Besides, to better preserve the color of the reference, a similarity
masked perceptual loss is designed. Notably, SPColor utilizes the semantic prior
provided by an unsupervised segmentation model, so no additional manual semantic
annotations are required. Experiments demonstrate that our model outperforms
recent state-of-the-art methods both quantitatively and qualitatively on public
datasets.
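
A minimal PyTorch sketch of the core idea — restricting correspondence to pixels of the same pseudo-class — where the feature shapes, temperature, and use of cosine similarity are assumptions rather than the paper's exact design:

    import torch
    import torch.nn.functional as F

    def class_masked_correspondence(feat_t, feat_r, cls_t, cls_r, color_r, tau=0.01):
        # feat_t: (N_t, C) target features; feat_r: (N_r, C) reference features
        # cls_t, cls_r: per-pixel pseudo-class ids; color_r: (N_r, 2) reference ab colors
        ft = F.normalize(feat_t, dim=1)
        fr = F.normalize(feat_r, dim=1)
        sim = ft @ fr.t() / tau                        # (N_t, N_r) cosine similarities
        same = cls_t[:, None] == cls_r[None, :]        # allow matches within a pseudo-class only
        sim = sim.masked_fill(~same, float("-inf"))    # exclude cross-class correspondence
        attn = torch.nan_to_num(sim.softmax(dim=1))    # rows with no same-class pixel get zeros
        return attn @ color_r                          # warped ab colors for each target pixel

    feat_t, feat_r = torch.randn(6, 16), torch.randn(8, 16)
    cls_t, cls_r = torch.randint(0, 3, (6,)), torch.randint(0, 3, (8,))
    print(class_masked_correspondence(feat_t, feat_r, cls_t, cls_r, torch.rand(8, 2)).shape)
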
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 04:21:45 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 07:06:22 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Chen",
"Siqi",
""
],
[
"Li",
"Xueming",
""
],
[
"Zhang",
"Xianlin",
""
],
[
"Wang",
"Mingdao",
""
],
[
"Zhang",
"Yu",
""
],
[
"Zhang",
"Yue",
""
]
] |
new_dataset
| 0.955212 |
2304.06258
|
Yuanyuan Wei
|
Yuanyuan Wei, Roger Tam, Xiaoying Tang
|
MProtoNet: A Case-Based Interpretable Model for Brain Tumor
Classification with 3D Multi-parametric Magnetic Resonance Imaging
|
15 pages, 5 figures, 1 table; accepted for oral presentation at MIDL
2023 (https://openreview.net/forum?id=6Wbj3QCo4U4 ); camera-ready version
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent applications of deep convolutional neural networks in medical imaging
raise concerns about their interpretability. While most explainable deep
learning applications use post hoc methods (such as GradCAM) to generate
feature attribution maps, there is a new type of case-based reasoning model,
namely ProtoPNet and its variants, which identify prototypes during training
and compare input image patches with those prototypes. We propose the first
medical prototype network (MProtoNet) to extend ProtoPNet to brain tumor
classification with 3D multi-parametric magnetic resonance imaging (mpMRI)
data. To address the differing requirements of 2D natural images and 3D mpMRIs,
especially in terms of localizing attention regions, a new attention module
with soft masking and online-CAM loss is introduced. Soft masking helps sharpen
attention maps, while online-CAM loss directly utilizes image-level labels when
training the attention module. MProtoNet achieves statistically significant
improvements in interpretability metrics of both correctness and localization
coherence (with a best activation precision of $0.713\pm0.058$) without
human-annotated labels during training, when compared with GradCAM and several
ProtoPNet variants. The source code is available at
https://github.com/aywi/mprotonet.
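
A toy 2D PyTorch sketch of the two named ingredients — soft masking to sharpen attention maps, and an online-CAM loss that supervises the attention module with image-level labels. The sharpening function and shapes are assumptions; the actual model operates on 3D mpMRI volumes.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SoftMaskedAttention(nn.Module):
        def __init__(self, channels, num_classes):
            super().__init__()
            self.att = nn.Conv2d(channels, 1, kernel_size=1)   # attention logits
            self.cls = nn.Linear(channels, num_classes)        # online-CAM classifier

        def forward(self, feats):                              # feats: (B, C, H, W)
            m = torch.sigmoid(self.att(feats))
            m = m ** 2 / (m ** 2 + (1 - m) ** 2)               # soft masking: push toward 0/1
            pooled = (feats * m).flatten(2).mean(-1)           # attention-weighted pooling, (B, C)
            return m, self.cls(pooled)                         # CAM-style image-level prediction

    def online_cam_loss(image_logits, labels):
        # image-level labels directly train the attention module (no pixel annotations)
        return F.cross_entropy(image_logits, labels)

    mod = SoftMaskedAttention(channels=32, num_classes=2)
    mask, logits = mod(torch.randn(4, 32, 24, 24))
    loss = online_cam_loss(logits, torch.randint(0, 2, (4,)))
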
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 04:39:21 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 15:51:54 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Wei",
"Yuanyuan",
""
],
[
"Tam",
"Roger",
""
],
[
"Tang",
"Xiaoying",
""
]
] |
new_dataset
| 0.9939 |
2304.06671
|
Jaemin Cho
|
Jaemin Cho, Linjie Li, Zhengyuan Yang, Zhe Gan, Lijuan Wang, Mohit
Bansal
|
Diagnostic Benchmark and Iterative Inpainting for Layout-Guided Image
Generation
|
22 pages; Project website: https://layoutbench.github.io
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatial control is a core capability in controllable image generation.
Advancements in layout-guided image generation have shown promising results on
in-distribution (ID) datasets with similar spatial configurations. However, it
is unclear how these models perform when facing out-of-distribution (OOD)
samples with arbitrary, unseen layouts. In this paper, we propose LayoutBench,
a diagnostic benchmark for layout-guided image generation that examines four
categories of spatial control skills: number, position, size, and shape. We
benchmark two recent representative layout-guided image generation methods and
observe that the good ID layout control may not generalize well to arbitrary
layouts in the wild (e.g., objects at the boundary). Next, we propose
IterInpaint, a new baseline that generates foreground and background regions in
a step-by-step manner via inpainting, demonstrating stronger generalizability
than existing models on OOD layouts in LayoutBench. We perform quantitative and
qualitative evaluation and fine-grained analysis on the four LayoutBench skills
to pinpoint the weaknesses of existing models. Lastly, we show comprehensive
ablation studies on IterInpaint, including training task ratio, crop&paste vs.
repaint, and generation order. Project website: https://layoutbench.github.io
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 16:58:33 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 15:37:40 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Cho",
"Jaemin",
""
],
[
"Li",
"Linjie",
""
],
[
"Yang",
"Zhengyuan",
""
],
[
"Gan",
"Zhe",
""
],
[
"Wang",
"Lijuan",
""
],
[
"Bansal",
"Mohit",
""
]
] |
new_dataset
| 0.997899 |
2304.06724
|
Lin Geng Foo
|
Jianhong Pan, Lin Geng Foo, Qichen Zheng, Zhipeng Fan, Hossein
Rahmani, Qiuhong Ke, Jun Liu
|
GradMDM: Adversarial Attack on Dynamic Networks
|
Accepted to IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI)
| null | null | null |
cs.CR cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic neural networks can greatly reduce computation redundancy without
compromising accuracy by adapting their structures based on the input. In this
paper, we explore the robustness of dynamic neural networks against
energy-oriented attacks targeted at reducing their efficiency. Specifically, we
attack dynamic models with our novel algorithm GradMDM. GradMDM is a technique
that adjusts the direction and the magnitude of the gradients to effectively
find a small perturbation for each input, that will activate more computational
units of dynamic models during inference. We evaluate GradMDM on multiple
datasets and dynamic models, where it outperforms previous energy-oriented
attack techniques, significantly increasing computation complexity while
reducing the perceptibility of the perturbations.
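
The specific gradient direction and magnitude adjustments are the paper's contribution; the following PyTorch sketch only illustrates the generic energy-oriented attack pattern on a toy gated network (the model interface and hyperparameters are assumptions):

    import torch
    import torch.nn as nn

    class ToyDynamicNet(nn.Module):
        # toy stand-in: returns logits and per-unit gate activations in [0, 1]
        def __init__(self):
            super().__init__()
            self.fc1, self.gate, self.fc2 = nn.Linear(16, 32), nn.Linear(16, 32), nn.Linear(32, 4)

        def forward(self, x):
            g = torch.sigmoid(self.gate(x))          # gating: which units execute
            h = torch.relu(self.fc1(x)) * g
            return self.fc2(h), [g]

    def energy_attack(model, x, eps=0.05, alpha=0.01, steps=10):
        # PGD-style loop: find a small perturbation that activates more units
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            _, gates = model(x + delta)
            energy = torch.cat([g.flatten() for g in gates]).sum()   # surrogate compute cost
            energy.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()                   # ascend on compute cost
                delta.clamp_(-eps, eps)                              # keep perturbation small
            delta.grad.zero_()
        return (x + delta).detach()

    net, x = ToyDynamicNet(), torch.randn(8, 16)
    x_adv = energy_attack(net, x)
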
|
[
{
"version": "v1",
"created": "Sat, 1 Apr 2023 09:07:12 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Pan",
"Jianhong",
""
],
[
"Foo",
"Lin Geng",
""
],
[
"Zheng",
"Qichen",
""
],
[
"Fan",
"Zhipeng",
""
],
[
"Rahmani",
"Hossein",
""
],
[
"Ke",
"Qiuhong",
""
],
[
"Liu",
"Jun",
""
]
] |
new_dataset
| 0.992203 |
2304.06775
|
Tejas Anvekar
|
Shivanand Kundargi, Tejas Anvekar, Ramesh Ashok Tabib, Uma Mudenagudi
|
PointCLIMB: An Exemplar-Free Point Cloud Class Incremental Benchmark
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Point clouds offer comprehensive and precise data regarding the contour and
configuration of objects. Employing such geometric and topological 3D
information of objects in class incremental learning can aid countless
applications in 3D computer vision. Well-known 3D point cloud class incremental
learning methods for addressing catastrophic forgetting generally entail the
usage of previously encountered data, which can present difficulties in
situations where there are restrictions on memory or when there are concerns
about the legality of the data. Towards this, we pioneer exemplar-free class
incremental learning on point clouds. In this paper we propose PointCLIMB: an
Exemplar-Free Class Incremental Learning Benchmark. We adopt a pragmatic
perspective to consider novel classes for class incremental learning on 3D
point clouds. We set up a benchmark for 3D exemplar-free class incremental
learning. We investigate the performance of various backbones on the
3D exemplar-free class incremental learning framework. We demonstrate our
results on the ModelNet40 dataset.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 18:47:29 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Kundargi",
"Shivanand",
""
],
[
"Anvekar",
"Tejas",
""
],
[
"Tabib",
"Ramesh Ashok",
""
],
[
"Mudenagudi",
"Uma",
""
]
] |
new_dataset
| 0.987795 |
2304.06790
|
Tao Yu
|
Tao Yu, Runseng Feng, Ruoyu Feng, Jinming Liu, Xin Jin, Wenjun Zeng,
Zhibo Chen
|
Inpaint Anything: Segment Anything Meets Image Inpainting
|
Technical report. Code URL:
https://github.com/geekyutao/Inpaint-Anything
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern image inpainting systems, despite the significant progress, often
struggle with mask selection and holes filling. Based on Segment-Anything Model
(SAM), we make the first attempt at mask-free image inpainting and propose
a new paradigm of ``clicking and filling'', named Inpaint Anything
(IA). The core idea behind IA is to combine the strengths of different models
in order to build a very powerful and user-friendly pipeline for solving
inpainting-related problems. IA supports three main features: (i) Remove
Anything: users could click on an object and IA will remove it and smooth the
``hole'' with the context; (ii) Fill Anything: after removing certain objects,
users could provide text-based prompts to IA, and then it will fill the hole
with the corresponding generative content via driving AIGC models like Stable
Diffusion; (iii) Replace Anything: with IA, users have another option to retain
the click-selected object and replace the remaining background with the newly
generated scenes. We are also very willing to help everyone share and promote
new projects based on our Inpaint Anything (IA). Our codes are available at
https://github.com/geekyutao/Inpaint-Anything.
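
A toy orchestration sketch of the ``clicking and filling'' paradigm. The two helper functions below are hypothetical stand-ins for SAM and the inpainting/AIGC models; the real pipeline is in the linked repository.

    import numpy as np

    def segment_at_click(image, xy):                       # stand-in for SAM: click -> mask
        mask = np.zeros(image.shape[:2], dtype=bool)
        mask[xy[1] - 5 : xy[1] + 5, xy[0] - 5 : xy[0] + 5] = True
        return mask

    def fill_region(image, mask, prompt=None):             # stand-in for the inpainting model
        out = image.copy()
        out[mask] = image[~mask].mean(axis=0)              # toy fill with the mean outside color
        return out

    def inpaint_anything(image, click_xy, mode, prompt=None):
        mask = segment_at_click(image, click_xy)
        if mode == "remove":                               # Remove Anything
            return fill_region(image, mask)
        if mode == "fill":                                 # Fill Anything (text-guided)
            return fill_region(image, mask, prompt)
        if mode == "replace":                              # Replace Anything: keep the object
            return fill_region(image, ~mask, prompt)
        raise ValueError(mode)

    result = inpaint_anything(np.random.rand(64, 64, 3), (32, 32), "remove")
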
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 19:23:52 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Yu",
"Tao",
""
],
[
"Feng",
"Runseng",
""
],
[
"Feng",
"Ruoyu",
""
],
[
"Liu",
"Jinming",
""
],
[
"Jin",
"Xin",
""
],
[
"Zeng",
"Wenjun",
""
],
[
"Chen",
"Zhibo",
""
]
] |
new_dataset
| 0.996773 |
2304.06831
|
Hanqiu Chen
|
Hanqiu Chen and Cong Hao
|
DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph
Neural Network Inference
|
This paper is accepted by FCCM 2023
| null | null | null |
cs.AR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic Graph Neural Networks (DGNNs) are becoming increasingly popular due
to their effectiveness in analyzing and predicting the evolution of complex
interconnected graph-based systems. However, hardware deployment of DGNNs still
remains a challenge. First, DGNNs do not fully utilize hardware resources
because temporal data dependencies cause low hardware parallelism.
Additionally, there is currently a lack of generic DGNN hardware accelerator
frameworks, and existing GNN accelerator frameworks have limited ability to
handle dynamic graphs with changing topologies and node features. To address
the aforementioned challenges, in this paper, we propose DGNN-Booster, which is
a novel Field-Programmable Gate Array (FPGA) accelerator framework for
real-time DGNN inference using High-Level Synthesis (HLS). It includes two
different FPGA accelerator designs with different dataflows that can support
the most widely used DGNNs. We showcase the effectiveness of our designs by
implementing and evaluating two representative DGNN models on ZCU102 board and
measuring the end-to-end performance. The experiment results demonstrate that
DGNN-Booster can achieve a speedup of up to 5.6x compared to the CPU baseline
(6226R), 8.4x compared to the GPU baseline (A6000) and 2.1x compared to the
FPGA baseline without applying optimizations proposed in this paper. Moreover,
DGNN-Booster can achieve over 100x and over 1000x higher runtime energy
efficiency than the CPU and GPU baselines, respectively. Our implementation code and
on-board measurements are publicly available at
https://github.com/sharc-lab/DGNN-Booster.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 21:50:23 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Chen",
"Hanqiu",
""
],
[
"Hao",
"Cong",
""
]
] |
new_dataset
| 0.999297 |
2304.06870
|
Shan Jia
|
Shan Jia, Mingzhen Huang, Zhou Zhou, Yan Ju, Jialing Cai, Siwei Lyu
|
AutoSplice: A Text-prompt Manipulated Image Dataset for Media Forensics
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in language-image models have led to the development of
highly realistic images that can be generated from textual descriptions.
However, the increased visual quality of these generated images poses a
potential threat to the field of media forensics. This paper aims to
investigate the level of challenge that language-image generation models pose
to media forensics. To achieve this, we propose a new approach that leverages
the DALL-E2 language-image model to automatically generate and splice masked
regions guided by a text prompt. To ensure the creation of realistic
manipulations, we have designed an annotation platform with human checking to
verify reasonable text prompts. This approach has resulted in the creation of a
new image dataset called AutoSplice, containing 5,894 manipulated and authentic
images. Specifically, we have generated a total of 3,621 images by locally or
globally manipulating real-world image-caption pairs, which we believe will
provide a valuable resource for developing generalized detection methods in
this area. The dataset is evaluated under two media forensic tasks: forgery
detection and localization. Our extensive experiments show that most media
forensic models struggle to detect the AutoSplice dataset as an unseen
manipulation. However, when fine-tuned models are used, they exhibit improved
performance in both tasks.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 00:14:08 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Jia",
"Shan",
""
],
[
"Huang",
"Mingzhen",
""
],
[
"Zhou",
"Zhou",
""
],
[
"Ju",
"Yan",
""
],
[
"Cai",
"Jialing",
""
],
[
"Lyu",
"Siwei",
""
]
] |
new_dataset
| 0.999739 |
2304.06925
|
Feng Xiong
|
Li Zhu, Jiahui Xiong, Feng Xiong, Hanzheng Hu, Zhengnan Jiang
|
YOLO-Drone:Airborne real-time detection of dense small objects from
high-altitude perspective
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Unmanned Aerial Vehicles (UAVs), specifically drones equipped with remote
sensing object detection technology, have rapidly gained a broad spectrum of
applications and emerged as one of the primary research focuses in the field of
computer vision. Although UAV remote sensing systems have the ability to detect
various objects, small-scale objects can be challenging to detect reliably due
to factors such as object size, image degradation, and real-time limitations.
To tackle these issues, a real-time object detection algorithm (YOLO-Drone) is
proposed and applied to two new UAV platforms as well as a specific light
source (silicon-based golden LED). YOLO-Drone presents several novelties: 1) a
new backbone, Darknet59; 2) a new complex feature aggregation module, MSPP-FPN,
that incorporates one spatial pyramid pooling and three atrous spatial
pyramid pooling modules; and 3) the use of Generalized Intersection over Union
(GIoU) as the loss function. To evaluate performance, two benchmark datasets,
UAVDT and VisDrone, along with one homemade dataset acquired at night under
silicon-based golden LEDs, are utilized. The experimental results show that, in
both UAVDT and VisDrone, the proposed YOLO-Drone outperforms state-of-the-art
(SOTA) object detection methods, improving the mAP by 10.13% and 8.59%,
respectively. With regard to UAVDT, YOLO-Drone exhibits both a high
real-time inference speed of 53 FPS and a maximum mAP of 34.04%. Notably,
YOLO-Drone achieves high performance under the silicon-based golden LEDs, with
a mAP of up to 87.71%, surpassing the performance of YOLO series under ordinary
light sources. To conclude, the proposed YOLO-Drone is a highly effective
solution for object detection in UAV applications, particularly for night
detection tasks where silicon-based golden light LED technology exhibits
significant superiority.
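
For reference, the GIoU loss (Rezatofighi et al.) used here augments IoU with a penalty based on the smallest enclosing box, so even non-overlapping boxes receive a useful gradient signal. A minimal sketch for axis-aligned (x1, y1, x2, y2) boxes:

    def giou_loss(box_a, box_b):
        # Generalized IoU loss for axis-aligned boxes given as (x1, y1, x2, y2)
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = inter_w * inter_h
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        iou = inter / union
        # smallest enclosing box C
        c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
        giou = iou - (c_area - union) / c_area
        return 1.0 - giou                       # loss in [0, 2]

    print(giou_loss((0, 0, 2, 2), (1, 1, 3, 3)))  # overlapping boxes
    print(giou_loss((0, 0, 1, 1), (3, 3, 4, 4)))  # disjoint boxes still carry a gradient
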
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 05:21:47 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Zhu",
"Li",
""
],
[
"Xiong",
"Jiahui",
""
],
[
"Xiong",
"Feng",
""
],
[
"Hu",
"Hanzheng",
""
],
[
"Jiang",
"Zhengnan",
""
]
] |
new_dataset
| 0.999829 |
2304.07007
|
David Schlangen
|
David Schlangen
|
Dialogue Games for Benchmarking Language Understanding: Motivation,
Taxonomy, Strategy
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How does one measure "ability to understand language"? If it is a person's
ability that is being measured, this is a question that almost never poses
itself in an unqualified manner: Whatever formal test is applied, it takes
place on the background of the person's language use in daily social practice,
and what is measured is a specialised variety of language understanding (e.g.,
of a second language; or of written, technical language). Computer programs do
not have this background. What does that mean for the applicability of formal
tests of language understanding? I argue that such tests need to be
complemented with tests of language use embedded in a practice, to arrive at a
more comprehensive evaluation of "artificial language understanding". To do
such tests systematically, I propose to use "Dialogue Games" -- constructed
activities that provide a situational embedding for language use. I describe a
taxonomy of Dialogue Game types, linked to a model of the underlying
capabilities that are tested, thereby giving an argument for the
\emph{construct validity} of the test. I close by showing how the internal
structure of the
taxonomy suggests an ordering from more specialised to more general situational
language understanding, which potentially can provide some strategic guidance
for development in this field.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 09:11:36 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Schlangen",
"David",
""
]
] |
new_dataset
| 0.999078 |
2304.07061
|
Hao Wen
|
Hao Wen, Hongming Wang, Jiaxuan Liu, Yuanchun Li
|
DroidBot-GPT: GPT-powered UI Automation for Android
|
8 pages, 5 figures
| null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces DroidBot-GPT, a tool that utilizes GPT-like large
language models (LLMs) to automate the interactions with Android mobile
applications. Given a natural language description of a desired task,
DroidBot-GPT can automatically generate and execute actions that navigate the
app to complete the task. It works by translating the app GUI state information
and the available actions on the smartphone screen to natural language prompts
and asking the LLM to make a choice of actions. Since the LLM is typically
trained on a large amount of data including the how-to manuals of diverse
software applications, it has the ability to make reasonable choices of actions
based on the provided information. We evaluate DroidBot-GPT with a self-created
dataset that contains 33 tasks collected from 17 Android applications spanning
10 categories. It can successfully complete 39.39% of the tasks, and the
average partial completion progress is about 66.76%. Given the fact that our
method is fully unsupervised (no modification required from both the app and
the LLM), we believe there is great potential to enhance automation performance
with better app development paradigms and/or custom model training.
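
An illustrative sketch of the prompt-construction step described above; the state/action schema and the query_llm call are assumptions for exposition, not the tool's actual interface.

    def build_prompt(task, ui_elements, history):
        # translate GUI state and available actions into a natural-language prompt
        actions = [f"{i}. {e['action']} on '{e['text']}'" for i, e in enumerate(ui_elements)]
        return (
            f"Task: {task}\n"
            f"Actions taken so far: {'; '.join(history) or 'none'}\n"
            "Available actions on the current screen:\n" + "\n".join(actions) +
            "\nWhich action index should be executed next? Answer with a number."
        )

    prompt = build_prompt(
        task="Turn on airplane mode",
        ui_elements=[{"action": "click", "text": "Network & internet"},
                     {"action": "scroll", "text": "settings list"}],
        history=[],
    )
    # choice = query_llm(prompt)  # hypothetical LLM call; parse the index and execute it
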
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 11:31:56 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Wen",
"Hao",
""
],
[
"Wang",
"Hongming",
""
],
[
"Liu",
"Jiaxuan",
""
],
[
"Li",
"Yuanchun",
""
]
] |
new_dataset
| 0.999484 |
2304.07062
|
Takashi Yamakawa
|
Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
|
Publicly Verifiable Deletion from Minimal Assumptions
|
15 pages
| null | null | null |
cs.CR quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We present a general compiler to add the publicly verifiable deletion
property for various cryptographic primitives including public key encryption,
attribute-based encryption, and quantum fully homomorphic encryption. Our
compiler only uses one-way functions, or more generally hard quantum planted
problems for NP, which are implied by one-way functions. It relies on minimal
assumptions and enables us to add the publicly verifiable deletion property
with no additional assumptions for the above primitives. Previously, such a
compiler needed additional assumptions such as injective trapdoor one-way
functions or pseudorandom group actions [Bartusek-Khurana-Poremba,
ePrint:2023/370]. Technically, we upgrade an existing compiler for privately
verifiable deletion [Bartusek-Khurana, ePrint:2022/1178] to achieve publicly
verifiable deletion by using digital signatures.
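
The ingredient the compiler adds is a digital signature, which exists assuming only one-way functions. A minimal, self-contained illustration of that building block (not of the deletion compiler itself) is the hash-based Lamport one-time signature:

    import hashlib
    import secrets

    H = lambda b: hashlib.sha256(b).digest()

    def keygen(bits=256):
        # secret key: a pair of random preimages per message bit; public key: their hashes
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
        pk = [(H(s0), H(s1)) for s0, s1 in sk]
        return sk, pk

    def msg_bits(msg, bits=256):
        digest = int.from_bytes(H(msg), "big")
        return [(digest >> i) & 1 for i in range(bits)]

    def sign(sk, msg):
        return [sk[i][b] for i, b in enumerate(msg_bits(msg))]   # reveal one preimage per bit

    def verify(pk, msg, sig):
        return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))

    sk, pk = keygen()
    cert = b"deletion-certificate-for-ciphertext-42"
    sig = sign(sk, cert)
    assert verify(pk, cert, sig)   # anyone holding pk can check -> public verifiability
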
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 11:34:43 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Kitagawa",
"Fuyuki",
""
],
[
"Nishimaki",
"Ryo",
""
],
[
"Yamakawa",
"Takashi",
""
]
] |
new_dataset
| 0.992572 |
2304.07081
|
Matteo Monti
|
Martina Camaioni and Rachid Guerraoui and Matteo Monti and
Pierre-Louis Roman and Manuel Vidigueira and Gauthier Voron
|
Chop Chop: Byzantine Atomic Broadcast to the Network Limit
| null | null | null | null |
cs.DC cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
At the heart of state machine replication, the celebrated technique enabling
decentralized and secure universal computation, lies Atomic Broadcast, a
fundamental communication primitive that orders, authenticates, and
deduplicates messages. This paper presents Chop Chop, a Byzantine Atomic
Broadcast system that amortizes the cost of ordering, authenticating and
deduplicating messages, achieving "line rate" (i.e., closely matching the
complexity of a protocol that does not ensure any ordering, authentication or
Byzantine resilience) even when processing messages as small as 8 bytes. Chop
Chop attains this performance by means of a new form of batching we call
distillation. A distilled batch is a set of messages that are fast to
authenticate and deduplicate, as well as order. Batches are distilled using a
novel interactive mechanism involving brokers, an untrusted layer of
facilitating processes between clients and servers. In a geo-distributed
deployment of 64 medium-sized servers, with clients situated cross-cloud, Chop
Chop processes 43,600,000 messages per second with an average latency of 3.6
seconds. Under the same conditions, state-of-the-art alternatives offer two
orders of magnitude less throughput for the same latency. We showcase three
simple Chop Chop applications: a Payment system, an Auction house and a "Pixel
war" game, respectively achieving 32, 2.3 and 35 million operations per second.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 12:09:06 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Camaioni",
"Martina",
""
],
[
"Guerraoui",
"Rachid",
""
],
[
"Monti",
"Matteo",
""
],
[
"Roman",
"Pierre-Louis",
""
],
[
"Vidigueira",
"Manuel",
""
],
[
"Voron",
"Gauthier",
""
]
] |
new_dataset
| 0.97977 |
2304.07140
|
Olaf Wysocki
|
Olaf Wysocki, Ludwig Hoegner, Uwe Stilla
|
TUM-FA\c{C}ADE: Reviewing and enriching point cloud benchmarks for
fa\c{c}ade segmentation
|
3D-ARCH 2022, Mantova, Italy, 2022, ISPRS conference
|
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci.,
XLVI-2/W1-2022
|
10.5194/isprs-archives-XLVI-2-W1-2022-529-2022
| null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Point clouds are widely regarded as one of the best dataset types for urban
mapping purposes. Hence, point cloud datasets are commonly investigated as
benchmark types for various urban interpretation methods. Yet, few researchers
have addressed the use of point cloud benchmarks for fa\c{c}ade segmentation.
Robust fa\c{c}ade segmentation is becoming a key factor in various applications
ranging from simulating autonomous driving functions to preserving cultural
heritage. In this work, we present a method of enriching existing point cloud
datasets with fa\c{c}ade-related classes that have been designed to facilitate
fa\c{c}ade segmentation testing. We propose how to efficiently extend existing
datasets and comprehensively assess their potential for fa\c{c}ade
segmentation. We use the method to create the TUM-FA\c{C}ADE dataset, which
extends the capabilities of TUM-MLS-2016. Not only can TUM-FA\c{C}ADE
facilitate the development of point-cloud-based fa\c{c}ade segmentation tasks,
but our procedure can also be applied to enrich further datasets.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 14:04:00 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Wysocki",
"Olaf",
""
],
[
"Hoegner",
"Ludwig",
""
],
[
"Stilla",
"Uwe",
""
]
] |
new_dataset
| 0.998779 |
2304.07165
|
Claudio Felicioli
|
Andrea Canciani, Claudio Felicioli, Andrea Lisi, Fabio Severino
|
Hybrid DLT as a data layer for real-time, data-intensive applications
| null | null | null | null |
cs.CR cs.CY cs.DC cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a new approach, termed Hybrid DLT, to address a broad range of
industrial use cases where certain properties of both private and public DLTs
are valuable, while other properties may be unnecessary or detrimental. The
Hybrid DLT approach involves a system where private ledgers, with limited data
block dissemination, are collaboratively created by nodes within a private
network. The Notary, a publicly auditable authoritative component, maintains a
single, official, coherent history for each private ledger without requiring
access to data blocks. This is achieved by leveraging a public DLT solution to
render the ledger histories tamper-proof, consequently providing
tamper-evidence for ledger data disclosed to external actors. We present Traent
Hybrid Blockchain, a commercial implementation of the Hybrid DLT approach: a
real-time, data-intensive collaboration system for organizations seeking
immutable data while also needing to comply with the European General Data
Protection Regulation (GDPR).
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 14:39:52 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Canciani",
"Andrea",
""
],
[
"Felicioli",
"Claudio",
""
],
[
"Lisi",
"Andrea",
""
],
[
"Severino",
"Fabio",
""
]
] |
new_dataset
| 0.997939 |
2304.07166
|
Ningyu He
|
Edward Lo, Ningyu He, Yuejie Shi, Jiajia Xu, Chiachih Wu, Ding Li, Yao
Guo
|
Fuzzing the Latest NTFS in Linux with Papora: An Empirical Study
|
Accepted by 17th IEEE Workshop on Offensive Technologies
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the first feature-rich NTFS implementation, NTFS3, has been
upstreamed to Linux. Although ensuring the security of NTFS3 is essential for
the future of Linux, it remains unclear, however, whether the most recent
version of NTFS for Linux contains 0-day vulnerabilities. To this end, we
implemented Papora, the first effective fuzzer for NTFS3. We have identified
and reported 3 CVE-assigned 0-day vulnerabilities and 9 severe bugs in NTFS3.
Furthermore, we have investigated the underlying causes as well as types of
these vulnerabilities and bugs. We have conducted an empirical study of the
identified bugs, and its results offer practical insights regarding the
security of NTFS3 in Linux.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 14:39:59 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Lo",
"Edward",
""
],
[
"He",
"Ningyu",
""
],
[
"Shi",
"Yuejie",
""
],
[
"Xu",
"Jiajia",
""
],
[
"Wu",
"Chiachih",
""
],
[
"Li",
"Ding",
""
],
[
"Guo",
"Yao",
""
]
] |
new_dataset
| 0.997373 |
2304.07199
|
Thanh-Dat Truong
|
Thanh-Dat Truong, Chi Nhan Duong, Ashley Dowling, Son Lam Phung,
Jackson Cothren, Khoa Luu
|
CROVIA: Seeing Drone Scenes from Car Perspective via Cross-View
Adaptation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding semantic scene segmentation of urban scenes captured from the
Unmanned Aerial Vehicles (UAV) perspective plays a vital role in building a
perception model for UAV. With the limitations of large-scale densely labeled
data, semantic scene segmentation for UAV views requires a broad understanding
of an object from both its top and side views. Adapting from well-annotated
autonomous driving data to unlabeled UAV data is challenging due to the
cross-view differences between the two data types. Our work proposes a novel
Cross-View Adaptation (CROVIA) approach to effectively adapt the knowledge
learned from on-road vehicle views to UAV views. First, a novel geometry-based
constraint to cross-view adaptation is introduced based on the geometry
correlation between views. Second, cross-view correlations from image space are
effectively transferred to segmentation space without any requirement of paired
on-road and UAV view data via a new Geometry-Constraint Cross-View (GeiCo)
loss. Third, the multi-modal bijective networks are introduced to enforce the
global structural modeling across views. Experimental results on new cross-view
adaptation benchmarks introduced in this work, i.e., SYNTHIA to UAVID and GTA5
to UAVID, show the State-of-the-Art (SOTA) performance of our approach over
prior adaptation methods.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 15:20:40 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Truong",
"Thanh-Dat",
""
],
[
"Duong",
"Chi Nhan",
""
],
[
"Dowling",
"Ashley",
""
],
[
"Phung",
"Son Lam",
""
],
[
"Cothren",
"Jackson",
""
],
[
"Luu",
"Khoa",
""
]
] |
new_dataset
| 0.996705 |
2304.07200
|
Ziyun Wang
|
Ziyun Wang, Fernando Cladera Ojeda, Anthony Bisulco, Daewon Lee,
Camillo J. Taylor, Kostas Daniilidis, M. Ani Hsieh, Daniel D. Lee, and Volkan
Isler
|
EV-Catcher: High-Speed Object Catching Using Low-latency Event-based
Neural Networks
|
8 pages, 6 figures, IEEE Robotics and Automation Letters ( Volume: 7,
Issue: 4, October 2022)
| null |
10.1109/LRA.2022.3188400
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event-based sensors have recently drawn increasing interest in robotic
perception due to their lower latency, higher dynamic range, and lower
bandwidth requirements compared to standard CMOS-based imagers. These
properties make them ideal tools for real-time perception tasks in highly
dynamic environments. In this work, we demonstrate an application where event
cameras excel: accurately estimating the impact location of fast-moving
objects. We introduce a lightweight event representation called Binary Event
History Image (BEHI) to encode event data at low latency, as well as a
learning-based approach that allows real-time inference of a confidence-enabled
control signal to the robot. To validate our approach, we present an
experimental catching system in which we catch fast-flying ping-pong balls. We
show that the system is capable of achieving a success rate of 81% in catching
balls targeted at different locations, with a velocity of up to 13 m/s even on
compute-constrained embedded platforms such as the Nvidia Jetson NX.
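
A minimal sketch of the BEHI representation as described — a binary OR of event occurrences over a time window — assuming events arrive as (x, y, t, polarity) rows:

    import numpy as np

    def behi(events, height, width, t_start, t_end):
        # a pixel is 1 if any event fired there inside the window [t_start, t_end)
        img = np.zeros((height, width), dtype=np.uint8)
        x, y, t = events[:, 0].astype(int), events[:, 1].astype(int), events[:, 2]
        in_window = (t >= t_start) & (t < t_end)
        img[y[in_window], x[in_window]] = 1      # binary OR over the event history
        return img

    events = np.array([[10, 5, 0.001, 1], [11, 5, 0.004, -1], [40, 30, 0.020, 1]])
    print(behi(events, 48, 64, 0.0, 0.01).sum())   # -> 2 pixels set
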
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 15:23:28 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"Wang",
"Ziyun",
""
],
[
"Ojeda",
"Fernando Cladera",
""
],
[
"Bisulco",
"Anthony",
""
],
[
"Lee",
"Daewon",
""
],
[
"Taylor",
"Camillo J.",
""
],
[
"Daniilidis",
"Kostas",
""
],
[
"Hsieh",
"M. Ani",
""
],
[
"Lee",
"Daniel D.",
""
],
[
"Isler",
"Volkan",
""
]
] |
new_dataset
| 0.993594 |
2304.07236
|
Bart Van Marum
|
Bart van Marum, Matthia Sabatelli, Hamidreza Kasaei
|
Learning Perceptive Bipedal Locomotion over Irregular Terrain
|
8 pages, 10 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we propose a novel bipedal locomotion controller that uses
noisy exteroception to traverse a wide variety of terrains. Building on the
cutting-edge advancements in attention based belief encoding for quadrupedal
locomotion, our work extends these methods to the bipedal domain, resulting in
a robust and reliable internal belief of the terrain ahead despite noisy sensor
inputs. Additionally, we present a reward function that allows the controller
to successfully traverse irregular terrain. We compare our method with a
proprioceptive baseline and show that our method is able to traverse a wide
variety of terrains and greatly outperforms the state-of-the-art in terms of
robustness, speed and efficiency.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 16:33:42 GMT"
}
] | 2023-04-17T00:00:00 |
[
[
"van Marum",
"Bart",
""
],
[
"Sabatelli",
"Matthia",
""
],
[
"Kasaei",
"Hamidreza",
""
]
] |
new_dataset
| 0.959433 |
2112.14602
|
Dianzhao Li
|
Dianzhao Li and Ostap Okhrin
|
Modified DDPG car-following model with a real-world human driving
experience with CARLA simulator
| null | null |
10.1016/j.trc.2022.103987
| null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In the autonomous driving field, fusion of human knowledge into Deep
Reinforcement Learning (DRL) is often based on the human demonstration recorded
in a simulated environment. This limits the generalization and the feasibility
of application in real-world traffic. We propose a two-stage DRL method to
train a car-following agent, that modifies the policy by leveraging the
real-world human driving experience and achieves performance superior to the
pure DRL agent. Training a DRL agent is done within CARLA framework with Robot
Operating System (ROS). For evaluation, we designed different driving scenarios
to compare the proposed two-stage DRL car-following agent with other agents.
After extracting the "good" behavior from the human driver, the agent becomes
more efficient and reasonable, which makes this autonomous agent more suitable
for Human-Robot Interaction (HRI) traffic.
|
[
{
"version": "v1",
"created": "Wed, 29 Dec 2021 15:22:31 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 09:09:46 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Sep 2022 12:29:53 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Sep 2022 14:31:46 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Li",
"Dianzhao",
""
],
[
"Okhrin",
"Ostap",
""
]
] |
new_dataset
| 0.993706 |
2205.13803
|
Xiaojian Ma
|
Huaizu Jiang, Xiaojian Ma, Weili Nie, Zhiding Yu, Yuke Zhu, Song-Chun
Zhu, Anima Anandkumar
|
Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object
Interactions
|
CVPR 2022 (oral); First two authors contributed equally; Code:
https://github.com/NVlabs/Bongard-HOI
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A significant gap remains between today's visual pattern recognition models
and human-level visual cognition especially when it comes to few-shot learning
and compositional reasoning of novel concepts. We introduce Bongard-HOI, a new
visual reasoning benchmark that focuses on compositional learning of
human-object interactions (HOIs) from natural images. It is inspired by two
desirable characteristics from the classical Bongard problems (BPs): 1)
few-shot concept learning, and 2) context-dependent reasoning. We carefully
curate the few-shot instances with hard negatives, where positive and negative
images only disagree on action labels, making mere recognition of object
categories insufficient to complete our benchmarks. We also design multiple
test sets to systematically study the generalization of visual learning models,
where we vary the overlap of the HOI concepts between the training and test
sets of few-shot instances, from partial to no overlaps. Bongard-HOI presents a
substantial challenge to today's visual recognition models. The
state-of-the-art HOI detection model achieves only 62% accuracy on few-shot
binary prediction while even amateur human testers on MTurk have 91% accuracy.
With the Bongard-HOI benchmark, we hope to further advance research efforts in
visual reasoning, especially in holistic perception-reasoning systems and
better representation learning.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 07:36:29 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 07:29:12 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Jiang",
"Huaizu",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Nie",
"Weili",
""
],
[
"Yu",
"Zhiding",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Anandkumar",
"Anima",
""
]
] |
new_dataset
| 0.999574 |
2208.09985
|
Jo\"el Lindegger
|
Jo\"el Lindegger, Damla Senol Cali, Mohammed Alser, Juan G\'omez-Luna,
Nika Mansouri Ghiasi, Onur Mutlu
|
Scrooge: A Fast and Memory-Frugal Genomic Sequence Aligner for CPUs,
GPUs, and ASICs
| null | null |
10.1093/bioinformatics/btad151
| null |
cs.AR q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pairwise sequence alignment is a very time-consuming step in common
bioinformatics pipelines. Speeding up this step requires heuristics, efficient
implementations, and/or hardware acceleration. A promising candidate for all of
the above is the recently proposed GenASM algorithm. We identify and address
three inefficiencies in the GenASM algorithm: it has a high amount of data
movement, a large memory footprint, and does some unnecessary work. We propose
Scrooge, a fast and memory-frugal genomic sequence aligner. Scrooge includes
three novel algorithmic improvements which reduce the data movement, memory
footprint, and the number of operations in the GenASM algorithm. We provide
efficient open-source implementations of the Scrooge algorithm for CPUs and
GPUs, which demonstrate the significant benefits of our algorithmic
improvements. For long reads, the CPU version of Scrooge achieves a 20.1x,
1.7x, and 2.1x speedup over KSW2, Edlib, and a CPU implementation of GenASM,
respectively. The GPU version of Scrooge achieves a 4.0x, 80.4x, 6.8x, 12.6x, and
5.9x speedup over the CPU version of Scrooge, KSW2, Edlib, Darwin-GPU, and a
GPU implementation of GenASM, respectively. We estimate an ASIC implementation
of Scrooge to use 3.6x less chip area and 2.1x less power than a GenASM ASIC
while maintaining the same throughput. Further, we systematically analyze the
throughput and accuracy behavior of GenASM and Scrooge under various
configurations. As the best configuration of Scrooge depends on the computing
platform, we make several observations that can help guide future
implementations of Scrooge. Availability: https://github.com/CMU-SAFARI/Scrooge
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 23:36:01 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 18:05:54 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Apr 2023 21:50:45 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Lindegger",
"Joël",
""
],
[
"Cali",
"Damla Senol",
""
],
[
"Alser",
"Mohammed",
""
],
[
"Gómez-Luna",
"Juan",
""
],
[
"Ghiasi",
"Nika Mansouri",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.999493 |
2210.07474
|
Xiaojian Ma
|
Xiaojian Ma, Silong Yong, Zilong Zheng, Qing Li, Yitao Liang,
Song-Chun Zhu, Siyuan Huang
|
SQA3D: Situated Question Answering in 3D Scenes
|
ICLR 2023. First two authors contributed equally. Project website:
https://sqa3d.github.io
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a new task to benchmark scene understanding of embodied agents:
Situated Question Answering in 3D Scenes (SQA3D). Given a scene context (e.g.,
3D scan), SQA3D requires the tested agent to first understand its situation
(position, orientation, etc.) in the 3D scene as described by text, then reason
about its surrounding environment and answer a question under that situation.
Based upon 650 scenes from ScanNet, we provide a dataset centered around 6.8k
unique situations, along with 20.4k descriptions and 33.4k diverse reasoning
questions for these situations. These questions examine a wide spectrum of
reasoning capabilities for an intelligent agent, ranging from spatial relation
comprehension to commonsense understanding, navigation, and multi-hop
reasoning. SQA3D imposes a significant challenge to current multi-modal
especially 3D reasoning models. We evaluate various state-of-the-art approaches
and find that the best one only achieves an overall score of 47.20%, while
amateur human participants can reach 90.06%. We believe SQA3D could facilitate
future embodied AI research with stronger situation understanding and reasoning
capability.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 02:52:26 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 15:25:26 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Feb 2023 01:57:41 GMT"
},
{
"version": "v4",
"created": "Wed, 22 Feb 2023 08:25:24 GMT"
},
{
"version": "v5",
"created": "Wed, 12 Apr 2023 20:05:41 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Ma",
"Xiaojian",
""
],
[
"Yong",
"Silong",
""
],
[
"Zheng",
"Zilong",
""
],
[
"Li",
"Qing",
""
],
[
"Liang",
"Yitao",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Huang",
"Siyuan",
""
]
] |
new_dataset
| 0.99993 |
2210.11978
|
Shipeng Zhong
|
Shipeng Zhong, Yuhua Qi, Zhiqiang Chen, Jin Wu, Hongbo Chen, Ming Liu
|
DCL-SLAM: A Distributed Collaborative LiDAR SLAM Framework for a Robotic
Swarm
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To execute collaborative tasks in unknown environments, a robotic swarm needs
to establish a global reference frame and locate itself in a shared
understanding of the environment. However, it faces many challenges in
real-world scenarios, such as the prior information about the environment being
absent and poor communication among the team members. This work presents
DCL-SLAM, a fully distributed collaborative LiDAR SLAM framework intended for
the robotic swarm to simultaneously co-localize in an unknown environment with
minimal information exchange. Based on ad-hoc wireless peer-to-peer
communication (limited bandwidth and communication range), DCL-SLAM adopts the
lightweight LiDAR-Iris descriptor for place recognition and does not require
full connectivity among teams. DCL-SLAM includes three main parts: a
replaceable single-robot front-end that produces LiDAR odometry results; a
distributed loop closure module that detects inter-robot loop closures with
keyframes; and a distributed back-end module that adopts a distributed pose graph
optimizer combined with a pairwise consistent measurement set maximization
algorithm to reject spurious inter-robot loop closures. We integrate our
proposed framework with diverse open-source LiDAR odometry methods to show its
versatility. The proposed system is extensively evaluated on benchmarking
datasets and field experiments over various scales and environments.
Experimental results show that DCL-SLAM achieves higher accuracy and lower
communication bandwidth than other state-of-the-art multi-robot SLAM systems. The
full source code is available at https://github.com/zhongshp/DCL-SLAM.git.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 14:09:15 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 02:10:35 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Zhong",
"Shipeng",
""
],
[
"Qi",
"Yuhua",
""
],
[
"Chen",
"Zhiqiang",
""
],
[
"Wu",
"Jin",
""
],
[
"Chen",
"Hongbo",
""
],
[
"Liu",
"Ming",
""
]
] |
new_dataset
| 0.998259 |
2211.09119
|
Michael S. Ryoo
|
Michael S. Ryoo, Keerthana Gopalakrishnan, Kumara Kahatapitiya, Ted
Xiao, Kanishka Rao, Austin Stone, Yao Lu, Julian Ibarz, Anurag Arnab
|
Token Turing Machines
|
CVPR 2023 camera-ready copy
|
CVPR 2023
| null | null |
cs.LG cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Token Turing Machines (TTM), a sequential, autoregressive
Transformer model with memory for real-world sequential visual understanding.
Our model is inspired by the seminal Neural Turing Machine, and has an external
memory consisting of a set of tokens which summarise the previous history
(i.e., frames). This memory is efficiently addressed, read and written using a
Transformer as the processing unit/controller at each step. The model's memory
module ensures that a new observation will only be processed with the contents
of the memory (and not the entire history), meaning that it can efficiently
process long sequences with a bounded computational cost at each step. We show
that TTM outperforms other alternatives, such as other Transformer models
designed for long sequences and recurrent neural networks, on two real-world
sequential visual understanding tasks: online temporal activity detection from
videos and vision-based robot action policy learning.
Code is publicly available at:
https://github.com/google-research/scenic/tree/main/scenic/projects/token_turing
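
A simplified PyTorch sketch of the read-process-write pattern: memory and input tokens are jointly summarized into a small token set, processed by a Transformer layer at bounded cost, and compressed back into memory. The importance-weighted pooling and dimensions are simplifying assumptions; the exact architecture is in the official code.

    import torch
    import torch.nn as nn

    class TokenSummarizer(nn.Module):
        # summarize N tokens into k tokens via learned importance weights
        def __init__(self, dim, k):
            super().__init__()
            self.score = nn.Linear(dim, k)

        def forward(self, tokens):                       # (B, N, D) -> (B, k, D)
            w = self.score(tokens).softmax(dim=1)        # (B, N, k) weights over tokens
            return torch.einsum("bnk,bnd->bkd", w, tokens)

    class TTMStep(nn.Module):
        def __init__(self, dim=64, mem_tokens=96, out_tokens=16):
            super().__init__()
            self.read = TokenSummarizer(dim, out_tokens)   # read: memory + input -> few tokens
            self.process = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            self.write = TokenSummarizer(dim, mem_tokens)  # write: compress back into memory

        def forward(self, memory, inputs):                 # (B, M, D), (B, N, D)
            read = self.read(torch.cat([memory, inputs], dim=1))
            out = self.process(read)                       # bounded cost per step
            new_memory = self.write(torch.cat([memory, inputs, out], dim=1))
            return new_memory, out

    step = TTMStep()
    mem = torch.zeros(2, 96, 64)
    for frame_tokens in [torch.randn(2, 32, 64) for _ in range(5)]:
        mem, out = step(mem, frame_tokens)                 # memory carries history across frames
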
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 18:59:18 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 15:23:10 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Ryoo",
"Michael S.",
""
],
[
"Gopalakrishnan",
"Keerthana",
""
],
[
"Kahatapitiya",
"Kumara",
""
],
[
"Xiao",
"Ted",
""
],
[
"Rao",
"Kanishka",
""
],
[
"Stone",
"Austin",
""
],
[
"Lu",
"Yao",
""
],
[
"Ibarz",
"Julian",
""
],
[
"Arnab",
"Anurag",
""
]
] |
new_dataset
| 0.967787 |
2212.03793
|
Yashovardhan Sharma
|
Yashovardhan Sharma, Simon Birnbach, Ivan Martinovic
|
RADAR: A TTP-based Extensible, Explainable, and Effective System for
Network Traffic Analysis and Malware Detection
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network analysis and machine learning techniques have been widely applied for
building malware detection systems. Though these systems attain impressive
results, they often are $(i)$ not extensible, being monolithic, well tuned for
the specific task they have been designed for but very difficult to adapt
and/or extend to other settings, and $(ii)$ not interpretable, being black
boxes whose inner complexity makes it impossible to link the result of
detection with its root cause, making further analysis of threats a challenge.
In this paper we present RADAR, an extensible and explainable system that
exploits the popular TTP (Tactics, Techniques, and Procedures) ontology of
adversary behaviour described in the industry-standard MITRE ATT\&CK framework
in order to unequivocally identify and classify malicious behaviour using
network traffic. We evaluate RADAR on a very large dataset comprising of
2,286,907 malicious and benign samples, representing a total of 84,792,452
network flows. The experimental analysis confirms that the proposed methodology
can be effectively exploited: RADAR's ability to detect malware is comparable
to other state-of-the-art non-interpretable systems' capabilities. To the best
of our knowledge, RADAR is the first TTP-based system for malware detection
that uses machine learning while being extensible and explainable.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 17:19:43 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 15:28:13 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Sharma",
"Yashovardhan",
""
],
[
"Birnbach",
"Simon",
""
],
[
"Martinovic",
"Ivan",
""
]
] |
new_dataset
| 0.993814 |
2212.04362
|
Jiezhang Cao
|
Jiezhang Cao, Qin Wang, Yongqin Xian, Yawei Li, Bingbing Ni, Zhiming
Pi, Kai Zhang, Yulun Zhang, Radu Timofte, Luc Van Gool
|
CiaoSR: Continuous Implicit Attention-in-Attention Network for
Arbitrary-Scale Image Super-Resolution
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning continuous image representations is recently gaining popularity for
image super-resolution (SR) because of its ability to reconstruct
high-resolution images with arbitrary scales from low-resolution inputs.
Existing methods mostly ensemble nearby features to predict the new pixel at
any queried coordinate in the SR image. Such a local ensemble suffers from some
limitations: i) it has no learnable parameters and it neglects the similarity
of the visual features; ii) it has a limited receptive field and cannot
ensemble relevant features in a large field which are important in an image. To
address these issues, this paper proposes a continuous implicit
attention-in-attention network, called CiaoSR. We explicitly design an implicit
attention network to learn the ensemble weights for the nearby local features.
Furthermore, we embed a scale-aware attention in this implicit attention
network to exploit additional non-local information. Extensive experiments on
benchmark datasets demonstrate CiaoSR significantly outperforms the existing
single image SR methods with the same backbone. In addition, CiaoSR also
achieves the state-of-the-art performance on the arbitrary-scale SR task. The
effectiveness of the method is also demonstrated on the real-world SR setting.
More importantly, CiaoSR can be flexibly integrated into any backbone to
improve the SR performance.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2022 15:57:46 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2023 11:23:41 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2023 07:50:41 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Cao",
"Jiezhang",
""
],
[
"Wang",
"Qin",
""
],
[
"Xian",
"Yongqin",
""
],
[
"Li",
"Yawei",
""
],
[
"Ni",
"Bingbing",
""
],
[
"Pi",
"Zhiming",
""
],
[
"Zhang",
"Kai",
""
],
[
"Zhang",
"Yulun",
""
],
[
"Timofte",
"Radu",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.967525 |
2212.11370
|
Baibhab Chatterjee
|
Baibhab Chatterjee, Pedram Mohseni and Shreyas Sen
|
Bioelectronic Sensor Nodes for Internet of Bodies
|
30 pages, 5 Figures. This is a pre-print version of the article which
has been accepted for Publication in Volume 25 of the Annual Review of
Biomedical Engineering (2023). Only Personal Use is Permitted
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Energy-efficient sensing with physically secure communication for bio-sensors
on, around, and within the human body is a major area of research today for the
development of low-cost healthcare, enabling continuous monitoring and/or
secure, perpetual operation. These devices, when used as a network of nodes,
form the Internet of Bodies (IoB), which poses certain challenges including
stringent resource constraints (power/area/computation/memory), simultaneous
sensing and communication, and security vulnerabilities as evidenced by the DHS
and FDA advisories. One other major challenge is to find an efficient on-body
energy harvesting method to support the sensing, communication, and security
sub-modules. Due to the limitations in the harvested amount of energy, we
require reduction of energy consumed per unit information, making the use of
in-sensor analytics/processing imperative. In this paper, we review the
challenges and opportunities in low-power sensing, processing and
communication, with possible powering modalities for future bio-sensor nodes.
Specifically, we analyze, compare and contrast (a) different sensing mechanisms
such as voltage/current domain vs time-domain, (b) low-power, secure
communication modalities including wireless techniques and human-body
communication, and (c) different powering techniques for both wearable devices
and implants.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 21:18:39 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 14:18:47 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Chatterjee",
"Baibhab",
""
],
[
"Mohseni",
"Pedram",
""
],
[
"Sen",
"Shreyas",
""
]
] |
new_dataset
| 0.99842 |
2303.10118
|
Susana Hahn Martin Lunas
|
Susana Hahn, Orkunt Sabuncu, Torsten Schaub, Tobias Stolzmann
|
Clingraph: A System for ASP-based Visualization
|
Short version presented at the International Conference on Logic
Programming and Non-monotonic Reasoning (LPNMR'22). Extended version under
consideration in Theory and Practice of Logic Programming (TPLP'22), 24
pages, 10 figures
| null | null | null |
cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We present the ASP-based visualization tool, clingraph, which aims at
visualizing various concepts of ASP by means of ASP itself. This idea traces
back to the aspviz tool and clingraph redevelops and extends it in the context
of modern ASP systems. More precisely, clingraph takes graph specifications in
terms of ASP facts and hands them over to the graph visualization system
graphviz. The use of ASP provides a great interface between logic programs
and/or answer sets and their visualization. Also, clingraph offers a Python API
that extends this ease of interfacing to clingo's API, which in turn makes it
possible to connect to and monitor various aspects of the solving process.
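
The essence of the interface — graphs specified as facts, rendered via graphviz — can be sketched in a few lines of Python; the node/1, edge/2 schema here is illustrative, as clingraph defines a richer fact format:

    def facts_to_dot(facts):
        # render node(X) / edge(X, Y) facts as a graphviz DOT string
        lines = ["digraph g {"]
        for name, args in facts:
            if name == "node":
                lines.append(f'  "{args[0]}";')
            elif name == "edge":
                lines.append(f'  "{args[0]}" -> "{args[1]}";')
        lines.append("}")
        return "\n".join(lines)

    answer_set = [("node", ("a",)), ("node", ("b",)), ("edge", ("a", "b"))]
    print(facts_to_dot(answer_set))   # paste the output into graphviz to render
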
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 16:59:14 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Hahn",
"Susana",
""
],
[
"Sabuncu",
"Orkunt",
""
],
[
"Schaub",
"Torsten",
""
],
[
"Stolzmann",
"Tobias",
""
]
] |
new_dataset
| 0.977407 |
2303.14690
|
Prabhat Kumar
|
Prabhat Kumar
|
TOPress: a MATLAB implementation for topology optimization of structures
subjected to design-dependent pressure loads
|
19 Figures, MATLAB codes
|
Structural and Multidisciplinary Optimization, 2023
|
10.1007/s00158-023-03533-9
| null |
cs.MS cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a topology optimization setting, design-dependent fluidic pressure loads
pose several challenges as their direction, magnitude, and location alter with
topology evolution. This paper offers a compact 100-line MATLAB code, TOPress,
for topology optimization of structures subjected to fluidic pressure loads
using the method of moving asymptotes. The code is intended for pedagogical
purposes and aims to ease the beginners' and students' learning toward topology
optimization with design-dependent fluidic pressure loads. TOPress is developed
per the approach first reported in Kumar et al. (Struct Multidisc Optim
61(4):1637-1655, 2020). The Darcy law, in conjunction with the drainage term,
is used to model the applied pressure load. The consistent nodal loads are
determined from the obtained pressure field. The employed approach facilitates
inexpensive computation of the load sensitivities using the adjoint-variable
method. Compliance minimization subject to volume constraint optimization
problems are solved. The success and efficacy of the code are demonstrated by
solving benchmark numerical examples involving pressure loads, wherein the
importance of load sensitivities is also demonstrated. TOPress contains six
main parts, is described in detail, and is extended to solve different
problems. Steps to include a projection filter are provided to achieve
load-bearing designs close to~0-1. The code is provided in Appendix~B and can
also be downloaded along with its extensions from
\url{https://github.com/PrabhatIn/TOPress}.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 11:31:22 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Apr 2023 07:22:28 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2023 07:13:54 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Kumar",
"Prabhat",
""
]
] |
new_dataset
| 0.998458 |
2303.17118
|
Negar Neda
|
Deepraj Soni, Negar Neda, Naifeng Zhang, Benedict Reynwar, Homer
Gamil, Benjamin Heyman, Mohammed Nabeel, Ahmad Al Badawi, Yuriy Polyakov,
Kellie Canida, Massoud Pedram, Michail Maniatakos, David Bruce Cousins, Franz
Franchetti, Matthew French, Andrew Schmidt, and Brandon Reagen
|
RPU: The Ring Processing Unit
| null | null | null | null |
cs.AR cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Ring-Learning-with-Errors (RLWE) has emerged as the foundation of many
important techniques for improving security and privacy, including homomorphic
encryption and post-quantum cryptography. While promising, these techniques
have received limited use due to their extreme overheads of running on
general-purpose machines. In this paper, we present a novel vector Instruction
Set Architecture (ISA) and microarchitecture for accelerating the ring-based
computations of RLWE. The ISA, named B512, is developed to meet the needs of
ring processing workloads while balancing high-performance and general-purpose
programming support. Having an ISA rather than fixed hardware facilitates
continued software improvement post-fabrication and the ability to support the
evolving workloads. We then propose the ring processing unit (RPU), a
high-performance, modular implementation of B512. The RPU has native large word
modular arithmetic support, capabilities for very wide parallel processing, and
a large capacity high-bandwidth scratchpad to meet the needs of ring
processing. We address the challenges of programming the RPU using a newly
developed SPIRAL backend. A configurable simulator is built to characterize
design tradeoffs and quantify performance. The best performing design was
implemented in RTL and used to validate simulator performance. In addition to
our characterization, we show that an RPU using 20.5mm2 of GF 12nm can provide a
speedup of 1485x over a CPU running a 64k, 128-bit NTT, a core RLWE workload.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 03:10:03 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2023 18:00:40 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2023 13:47:01 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Soni",
"Deepraj",
""
],
[
"Neda",
"Negar",
""
],
[
"Zhang",
"Naifeng",
""
],
[
"Reynwar",
"Benedict",
""
],
[
"Gamil",
"Homer",
""
],
[
"Heyman",
"Benjamin",
""
],
[
"Nabeel",
"Mohammed",
""
],
[
"Badawi",
"Ahmad Al",
""
],
[
"Polyakov",
"Yuriy",
""
],
[
"Canida",
"Kellie",
""
],
[
"Pedram",
"Massoud",
""
],
[
"Maniatakos",
"Michail",
""
],
[
"Cousins",
"David Bruce",
""
],
[
"Franchetti",
"Franz",
""
],
[
"French",
"Matthew",
""
],
[
"Schmidt",
"Andrew",
""
],
[
"Reagen",
"Brandon",
""
]
] |
new_dataset
| 0.998716 |
2303.18194
|
Edgar Martinez-Moro
|
Sanjit Bhowmick, Javier de la Cruz, Edgar Mart\'inez-Moro, Anuradha
Sharma
|
On LCP and checkable group codes over finite non-commutative Frobenius
rings
| null | null | null | null |
cs.IT math.IT math.RA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We provide a simple proof for a complementary pair of group codes over a
finite non-commutative Frobenius ring of the fact that one of them is
equivalent to the other one. We also explore this fact for checkeable codes
over the same type of alphabet.
|
[
{
"version": "v1",
"created": "Fri, 31 Mar 2023 16:51:06 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 10:55:21 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Bhowmick",
"Sanjit",
""
],
[
"de la Cruz",
"Javier",
""
],
[
"Martínez-Moro",
"Edgar",
""
],
[
"Sharma",
"Anuradha",
""
]
] |
new_dataset
| 0.999765 |
2304.03868
|
Liu Liu
|
Liu Liu, Shubham Kumar, Simon Thomann, Yogesh Singh Chauhan, Hussam
Amrouch and Xiaobo Sharon Hu
|
Compact and High-Performance TCAM Based on Scaled Double-Gate FeFETs
|
Accepted by Design Automation Conference (DAC) 2023
| null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Ternary content addressable memory (TCAM), widely used in network routers and
high-associativity caches, is gaining popularity in machine learning and
data-analytic applications. Ferroelectric FETs (FeFETs) are a promising
candidate for implementing TCAM owing to their high ON/OFF ratio,
non-volatility, and CMOS compatibility. However, conventional single-gate
FeFETs (SG-FeFETs) suffer from relatively high write voltage, low endurance,
potential read disturbance, and face scaling challenges. Recently, a
double-gate FeFET (DG-FeFET) has been proposed and outperforms SG-FeFETs in
many aspects. This paper investigates TCAM design challenges specific to
DG-FeFETs and introduces a novel 1.5T1Fe TCAM design based on DG-FeFETs. A
2-step search with early termination is employed to reduce the cell area and
improve energy efficiency. A shared driver design is proposed to reduce the
peripherals area. Detailed analysis and SPICE simulation show that the 1.5T1Fe
DG-TCAM leads to superior search speed and energy efficiency. The 1.5T1Fe TCAM
design can also be built with SG-FeFETs, which achieve search latency and
energy improvement compared with 2FeFET TCAM.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 23:47:57 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 13:51:38 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Liu",
"Liu",
""
],
[
"Kumar",
"Shubham",
""
],
[
"Thomann",
"Simon",
""
],
[
"Chauhan",
"Yogesh Singh",
""
],
[
"Amrouch",
"Hussam",
""
],
[
"Hu",
"Xiaobo Sharon",
""
]
] |
new_dataset
| 0.999699 |
2304.05119
|
Zhaorui Wang
|
Zhaorui Wang, Ya-Feng Liu, Ziyue Wang, Liang Liu, Haoyuan Pan, and
Shuguang Cui
|
Device Activity Detection in mMTC with Low-Resolution ADC: A New
Protocol
|
Submitted to IEEE for possible publication
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper investigates the effect of low-resolution analog-to-digital
converters (ADCs) on device activity detection in massive machine-type
communications (mMTC). The low-resolution ADCs induce two challenges on the
device activity detection compared with the traditional setup with the
assumption of infinite ADC resolution. First, the codebook design for signal
quantization by the low-resolution ADC is particularly important since a good
design of the codebook can lead to small quantization error on the received
signal, which in turn has significant influence on the activity detector
performance. To this end, prior information about the received signal power is
needed, which depends on the number of active devices $K$. This is sharply
different from the activity detection problem in traditional setups, in which
the knowledge of $K$ is not required by the BS as a prerequisite. Second, the
covariance-based approach achieves good activity detection performance in
traditional setups while it is not clear if it can still achieve good
performance in this paper. To solve the above challenges, we propose a
communication protocol that consists of an estimator for $K$ and a detector for
active device identities: 1) For the estimator, the technical difficulty is
that the design of the ADC quantizer and the estimation of $K$ are closely
intertwined and doing one needs the information/execution from the other. We
propose a progressive estimator which iteratively performs the estimation of
$K$ and the design of the ADC quantizer; 2) For the activity detector, we
propose a custom-designed stochastic gradient descent algorithm to estimate the
active device identities. Numerical results demonstrate the effectiveness of
the communication protocol.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 10:21:09 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 08:50:15 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Wang",
"Zhaorui",
""
],
[
"Liu",
"Ya-Feng",
""
],
[
"Wang",
"Ziyue",
""
],
[
"Liu",
"Liang",
""
],
[
"Pan",
"Haoyuan",
""
],
[
"Cui",
"Shuguang",
""
]
] |
new_dataset
| 0.99054 |
2304.05170
|
Yutao Cui
|
Yutao Cui, Chenkai Zeng, Xiaoyu Zhao, Yichun Yang, Gangshan Wu and
Limin Wang
|
SportsMOT: A Large Multi-Object Tracking Dataset in Multiple Sports
Scenes
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-object tracking in sports scenes plays a critical role in gathering
players statistics, supporting further analysis, such as automatic tactical
analysis. Yet existing MOT benchmarks cast little attention on the domain,
limiting its development. In this work, we present a new large-scale
multi-object tracking dataset in diverse sports scenes, coined as
\emph{SportsMOT}, where all players on the court are supposed to be tracked. It
consists of 240 video sequences, over 150K frames (almost 15\times MOT17) and
over 1.6M bounding boxes (3\times MOT17) collected from 3 sports categories,
including basketball, volleyball and football. Our dataset is characterized
with two key properties: 1) fast and variable-speed motion and 2) similar yet
distinguishable appearance. We expect SportsMOT to encourage the MOT trackers
to promote in both motion-based association and appearance-based association.
We benchmark several state-of-the-art trackers and reveal the key challenge of
SportsMOT lies in object association. To alleviate the issue, we further
propose a new multi-object tracking framework, termed as \emph{MixSort},
introducing a MixFormer-like structure as an auxiliary association model to
prevailing tracking-by-detection trackers. By integrating the customized
appearance-based association with the original motion-based association,
MixSort achieves state-of-the-art performance on SportsMOT and MOT17. Based on
MixSort, we give an in-depth analysis and provide some profound insights into
SportsMOT. The dataset and code will be available at
https://deeperaction.github.io/datasets/sportsmot.html.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 12:07:31 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 12:23:36 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Cui",
"Yutao",
""
],
[
"Zeng",
"Chenkai",
""
],
[
"Zhao",
"Xiaoyu",
""
],
[
"Yang",
"Yichun",
""
],
[
"Wu",
"Gangshan",
""
],
[
"Wang",
"Limin",
""
]
] |
new_dataset
| 0.999805 |
2304.05869
|
Julian Schmidt
|
Julian Schmidt, Thomas Monninger, Julian Jordan, Klaus Dietmayer
|
LMR: Lane Distance-Based Metric for Trajectory Prediction
|
Accepted to the 2023 IEEE Intelligent Vehicles Symposium (IV 2023)
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of approaches for trajectory prediction requires metrics to
validate and compare their performance. Currently established metrics are based
on Euclidean distance, which means that errors are weighted equally in all
directions. Euclidean metrics are insufficient for structured environments like
roads, since they do not properly capture the agent's intent relative to the
underlying lane. In order to provide a reasonable assessment of trajectory
prediction approaches with regard to the downstream planning task, we propose a
new metric that is lane distance-based: Lane Miss Rate (LMR). For the
calculation of LMR, the ground-truth and predicted endpoints are assigned to
lane segments, more precisely their centerlines. Measured by the distance along
the lane segments, predictions that are within a certain threshold distance to
the ground-truth count as hits, otherwise they count as misses. LMR is then
defined as the ratio of sequences that yield a miss. Our results on three
state-of-the-art trajectory prediction models show that LMR preserves the order
of Euclidean distance-based metrics. In contrast to the Euclidean Miss Rate,
qualitative results show that LMR yields misses for sequences where predictions
are located on wrong lanes. Hits on the other hand result for sequences where
predictions are located on the correct lane. This means that LMR implicitly
weights Euclidean error relative to the lane and goes into the direction of
capturing intents of traffic agents. The source code of LMR for Argoverse 2 is
publicly available.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 13:59:04 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 07:22:48 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Schmidt",
"Julian",
""
],
[
"Monninger",
"Thomas",
""
],
[
"Jordan",
"Julian",
""
],
[
"Dietmayer",
"Klaus",
""
]
] |
new_dataset
| 0.972609 |
2304.06111
|
Wensheng Gan
|
Shicheng Wan, Hong Lin, Wensheng Gan, Jiahui Chen, Philip S. Yu
|
Web3: The Next Internet Revolution
|
Preprint. 5 figures, 2 tables
| null | null | null |
cs.CY cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since the first appearance of the World Wide Web, people more rely on the Web
for their cyber social activities. The second phase of World Wide Web, named
Web 2.0, has been extensively attracting worldwide people that participate in
building and enjoying the virtual world. Nowadays, the next internet
revolution: Web3 is going to open new opportunities for traditional social
models. The decentralization property of Web3 is capable of breaking the
monopoly of the internet companies. Moreover, Web3 will lead a paradigm shift
from the Web as a publishing medium to a medium of interaction and
participation. This change will deeply transform the relations among users and
platforms, forces and relations of production, and the global economy.
Therefore, it is necessary that we technically, practically, and more broadly
take an overview of Web3. In this paper, we present a comprehensive survey of
Web3, with a focus on current technologies, challenges, opportunities, and
outlook. This article first introduces several major technologies of Web3.
Then, we illustrate the type of Web3 applications in detail. Blockchain and
smart contracts ensure that decentralized organizations will be less trusted
and more truthful than that centralized organizations. Decentralized finance
will be global, and open with financial inclusiveness for unbanked people. This
paper also discusses the relationship between the Metaverse and Web3, as well
as the differences and similarities between Web 3.0 and Web3. Inspired by the
Maslow's hierarchy of needs theory, we further conduct a novel hierarchy of
needs theory within Web3. Finally, several worthwhile future research
directions of Web3 are discussed.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 23:37:43 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Wan",
"Shicheng",
""
],
[
"Lin",
"Hong",
""
],
[
"Gan",
"Wensheng",
""
],
[
"Chen",
"Jiahui",
""
],
[
"Yu",
"Philip S.",
""
]
] |
new_dataset
| 0.997577 |
2304.06116
|
Wentao Zhu
|
Wentao Zhu, Yufang Huang, Xiufeng Xie, Wenxian Liu, Jincan Deng,
Debing Zhang, Zhangyang Wang, Ji Liu
|
AutoShot: A Short Video Dataset and State-of-the-Art Shot Boundary
Detection
|
10 pages, 5 figures, 3 tables, in CVPR 2023; Top-1 solution for scene
/ shot boundary detection
https://paperswithcode.com/paper/autoshot-a-short-video-dataset-and-state-of
| null | null | null |
cs.CV cs.AI cs.LG cs.MM cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The short-form videos have explosive popularity and have dominated the new
social media trends. Prevailing short-video platforms,~\textit{e.g.}, Kuaishou
(Kwai), TikTok, Instagram Reels, and YouTube Shorts, have changed the way we
consume and create content. For video content creation and understanding, the
shot boundary detection (SBD) is one of the most essential components in
various scenarios. In this work, we release a new public Short video sHot
bOundary deTection dataset, named SHOT, consisting of 853 complete short videos
and 11,606 shot annotations, with 2,716 high quality shot boundary annotations
in 200 test videos. Leveraging this new data wealth, we propose to optimize the
model design for video SBD, by conducting neural architecture search in a
search space encapsulating various advanced 3D ConvNets and Transformers. Our
proposed approach, named AutoShot, achieves higher F1 scores than previous
state-of-the-art approaches, e.g., outperforming TransNetV2 by 4.2%, when being
derived and evaluated on our newly constructed SHOT dataset. Moreover, to
validate the generalizability of the AutoShot architecture, we directly
evaluate it on another three public datasets: ClipShots, BBC and RAI, and the
F1 scores of AutoShot outperform previous state-of-the-art approaches by 1.1%,
0.9% and 1.2%, respectively. The SHOT dataset and code can be found in
https://github.com/wentaozhu/AutoShot.git .
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 19:01:21 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Zhu",
"Wentao",
""
],
[
"Huang",
"Yufang",
""
],
[
"Xie",
"Xiufeng",
""
],
[
"Liu",
"Wenxian",
""
],
[
"Deng",
"Jincan",
""
],
[
"Zhang",
"Debing",
""
],
[
"Wang",
"Zhangyang",
""
],
[
"Liu",
"Ji",
""
]
] |
new_dataset
| 0.999697 |
2304.06121
|
Abduallah Mohamed
|
Abduallah Mohamed, Jundi Liu, Linda Ng Boyle, Christian Claudel
|
FollowMe: Vehicle Behaviour Prediction in Autonomous Vehicle Settings
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
An ego vehicle following a virtual lead vehicle planned route is an essential
component when autonomous and non-autonomous vehicles interact. Yet, there is a
question about the driver's ability to follow the planned lead vehicle route.
Thus, predicting the trajectory of the ego vehicle route given a lead vehicle
route is of interest. We introduce a new dataset, the FollowMe dataset, which
offers a motion and behavior prediction problem by answering the latter
question of the driver's ability to follow a lead vehicle. We also introduce a
deep spatio-temporal graph model FollowMe-STGCNN as a baseline for the dataset.
In our experiments and analysis, we show the design benefits of FollowMe-STGCNN
in capturing the interactions that lie within the dataset. We contrast the
performance of FollowMe-STGCNN with prior motion prediction models showing the
need to have a different design mechanism to address the lead vehicle following
settings.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 19:05:56 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Mohamed",
"Abduallah",
""
],
[
"Liu",
"Jundi",
""
],
[
"Boyle",
"Linda Ng",
""
],
[
"Claudel",
"Christian",
""
]
] |
new_dataset
| 0.999413 |
2304.06145
|
Randall Powers
|
Randall Powers, Wendy Martinez, and Terrance Savitsky
|
The growclusters Package for R
|
10 pages, 6 figures, paper presented at 2022 Joint Statistical
Meetings
| null | null | null |
cs.MS cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The growclusters package for R implements an enhanced version of k-means
clustering that allows discovery of local clusterings or partitions for a
collection of data sets that each draw their cluster means from a single,
global partition. The package contains functions to estimate a partition
structure for multivariate data. Estimation is performed under a penalized
optimization derived from Bayesian non-parametric formulations. This paper
describes some of the functions and capabilities of the growclusters package,
including the creation of R Shiny applications designed to visually illustrate
the operation and functionality of the growclusters package.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 20:03:44 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Powers",
"Randall",
""
],
[
"Martinez",
"Wendy",
""
],
[
"Savitsky",
"Terrance",
""
]
] |
new_dataset
| 0.979903 |
2304.06155
|
Antoine Amarilli
|
Antoine Amarilli and Benny Kimelfeld and S\'ebastien Labb\'e and
Stefan Mengel
|
Skyline Operators for Document Spanners
|
42 pages. Submitted
| null | null | null |
cs.DB cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
When extracting a relation of spans (intervals) from a text document, a
common practice is to filter out tuples of the relation that are deemed
dominated by others. The domination rule is defined as a partial order that
varies along different systems and tasks. For example, we may state that a
tuple is dominated by tuples which extend it by assigning additional
attributes, or assigning larger intervals. The result of filtering the relation
would then be the skyline according to this partial order. As this filtering
may remove most of the extracted tuples, we study whether we can improve the
performance of the extraction by compiling the domination rule into the
extractor.
To this aim, we introduce the skyline operator for declarative information
extraction tasks expressed as document spanners. We show that this operator can
be expressed via regular operations when the domination partial order can
itself be expressed as a regular spanner, which covers several natural
domination rules. Yet, we show that the skyline operator incurs a computational
cost (under combined complexity). First, there are cases where the operator
requires an exponential blowup on the number of states needed to represent the
spanner as a sequential variable-set automaton. Second, the evaluation may
become computationally hard. Our analysis more precisely identifies classes of
domination rules for which the combined complexity is tractable or intractable.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 20:38:32 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Amarilli",
"Antoine",
""
],
[
"Kimelfeld",
"Benny",
""
],
[
"Labbé",
"Sébastien",
""
],
[
"Mengel",
"Stefan",
""
]
] |
new_dataset
| 0.958866 |
2304.06167
|
Ravi Sahita
|
Ravi Sahita, Atish Patra, Vedvyas Shanbhogue, Samuel Ortiz, Andrew
Bresticker, Dylan Reid, Atul Khare, Rajnesh Kanwal
|
CoVE: Towards Confidential Computing on RISC-V Platforms
| null | null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-tenant computing platforms are typically comprised of several software
and hardware components including platform firmware, host operating system
kernel, virtualization monitor, and the actual tenant payloads that run on them
(typically in a virtual machine, container, or application). This model is well
established in large scale commercial deployment, but the downside is that all
platform components and operators are in the Trusted Computing Base (TCB) of
the tenant. This aspect is ill-suited for privacy-oriented workloads that aim
to minimize the TCB footprint. Confidential computing presents a good
stepping-stone towards providing a quantifiable TCB for computing. Confidential
computing [1] requires the use of a HW-attested Trusted Execution Environments
for data-in-use protection. The RISC-V architecture presents a strong
foundation for meeting the requirements for Confidential Computing and other
security paradigms in a clean slate manner. This paper describes a reference
architecture and discusses ISA, non-ISA and system-on-chip (SoC) requirements
for confidential computing on RISC-V Platforms. It discusses proposed ISA and
non-ISA Extension for Confidential Virtual Machine for RISC-V platforms,
referred to as CoVE.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 21:35:44 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Sahita",
"Ravi",
""
],
[
"Patra",
"Atish",
""
],
[
"Shanbhogue",
"Vedvyas",
""
],
[
"Ortiz",
"Samuel",
""
],
[
"Bresticker",
"Andrew",
""
],
[
"Reid",
"Dylan",
""
],
[
"Khare",
"Atul",
""
],
[
"Kanwal",
"Rajnesh",
""
]
] |
new_dataset
| 0.982924 |
2304.06168
|
Ming-Chang Lee
|
Ming-Chang Lee, Jia-Chun Lin, and Volker Stolz
|
NP-Free: A Real-Time Normalization-free and Parameter-tuning-free
Representation Approach for Open-ended Time Series
|
9 pages, 12 figures, 9 tables, and this paper was accepted by 2023
IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC
2023)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As more connected devices are implemented in a cyber-physical world and data
is expected to be collected and processed in real time, the ability to handle
time series data has become increasingly significant. To help analyze time
series in data mining applications, many time series representation approaches
have been proposed to convert a raw time series into another series for
representing the original time series. However, existing approaches are not
designed for open-ended time series (which is a sequence of data points being
continuously collected at a fixed interval without any length limit) because
these approaches need to know the total length of the target time series in
advance and pre-process the entire time series using normalization methods.
Furthermore, many representation approaches require users to configure and tune
some parameters beforehand in order to achieve satisfactory representation
results. In this paper, we propose NP-Free, a real-time Normalization-free and
Parameter-tuning-free representation approach for open-ended time series.
Without needing to use any normalization method or tune any parameter, NP-Free
can generate a representation for a raw time series on the fly by converting
each data point of the time series into a root-mean-square error (RMSE) value
based on Long Short-Term Memory (LSTM) and a Look-Back and Predict-Forward
strategy. To demonstrate the capability of NP-Free in representing time series,
we conducted several experiments based on real-world open-source time series
datasets. We also evaluated the time consumption of NP-Free in generating
representations.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 21:48:53 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Lee",
"Ming-Chang",
""
],
[
"Lin",
"Jia-Chun",
""
],
[
"Stolz",
"Volker",
""
]
] |
new_dataset
| 0.966625 |
2304.06177
|
Mahla Nejati
|
Andy Kweon, Vishnu Hu, Jong Yoon Lim, Trevor Gee, Edmond Liu, Henry
Williams, Bruce A. MacDonald, Mahla Nejati, Inkyu Sa, and Ho Seok Ahn
|
Visual based Tomato Size Measurement System for an Indoor Farming
Environment
|
10 Pages, 12 Figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As technology progresses, smart automated systems will serve an increasingly
important role in the agricultural industry. Current existing vision systems
for yield estimation face difficulties in occlusion and scalability as they
utilize a camera system that is large and expensive, which are unsuitable for
orchard environments. To overcome these problems, this paper presents a size
measurement method combining a machine learning model and depth images captured
from three low cost RGBD cameras to detect and measure the height and width of
tomatoes. The performance of the presented system is evaluated on a lab
environment with real tomato fruits and fake leaves to simulate occlusion in
the real farm environment. To improve accuracy by addressing fruit occlusion,
our three-camera system was able to achieve a height measurement accuracy of
0.9114 and a width accuracy of 0.9443.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 22:27:05 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Kweon",
"Andy",
""
],
[
"Hu",
"Vishnu",
""
],
[
"Lim",
"Jong Yoon",
""
],
[
"Gee",
"Trevor",
""
],
[
"Liu",
"Edmond",
""
],
[
"Williams",
"Henry",
""
],
[
"MacDonald",
"Bruce A.",
""
],
[
"Nejati",
"Mahla",
""
],
[
"Sa",
"Inkyu",
""
],
[
"Ahn",
"Ho Seok",
""
]
] |
new_dataset
| 0.988002 |
2304.06184
|
Anjana Arunkumar
|
Anjana Arunkumar, Shubham Sharma, Rakhi Agrawal, Sriram
Chandrasekaran, Chris Bryan
|
LINGO : Visually Debiasing Natural Language Instructions to Support Task
Diversity
|
13 pages, 6 figures, Eurovis 2023
| null | null | null |
cs.HC cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-task generalization is a significant outcome that defines mastery in
natural language understanding. Humans show a remarkable aptitude for this, and
can solve many different types of tasks, given definitions in the form of
textual instructions and a small set of examples. Recent work with pre-trained
language models mimics this learning style: users can define and exemplify a
task for the model to attempt as a series of natural language prompts or
instructions. While prompting approaches have led to higher cross-task
generalization compared to traditional supervised learning, analyzing 'bias' in
the task instructions given to the model is a difficult problem, and has thus
been relatively unexplored. For instance, are we truly modeling a task, or are
we modeling a user's instructions? To help investigate this, we develop LINGO,
a novel visual analytics interface that supports an effective, task-driven
workflow to (1) help identify bias in natural language task instructions, (2)
alter (or create) task instructions to reduce bias, and (3) evaluate
pre-trained model performance on debiased task instructions. To robustly
evaluate LINGO, we conduct a user study with both novice and expert instruction
creators, over a dataset of 1,616 linguistic tasks and their natural language
instructions, spanning 55 different languages. For both user groups, LINGO
promotes the creation of more difficult tasks for pre-trained models, that
contain higher linguistic diversity and lower instruction bias. We additionally
discuss how the insights learned in developing and evaluating LINGO can aid in
the design of future dashboards that aim to minimize the effort involved in
prompt creation across multiple domains.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 22:55:52 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Arunkumar",
"Anjana",
""
],
[
"Sharma",
"Shubham",
""
],
[
"Agrawal",
"Rakhi",
""
],
[
"Chandrasekaran",
"Sriram",
""
],
[
"Bryan",
"Chris",
""
]
] |
new_dataset
| 0.951675 |
2304.06204
|
Pedro Neto
|
Diogo Fonseca, Mohammad Safeea, Pedro Neto
|
A Flexible Piezoresistive/Self-Capacitive Hybrid Force and Proximity
Sensor to Interface Collaborative Robots
| null |
IEEE Transactions on Industrial Informatics (Volume: 19, Issue: 3,
March 2023)
|
10.1109/TII.2022.3174708
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Force and proximity sensors are key in robotics, especially when applied in
collaborative robots that interact physically or cognitively with humans in
real unstructured environments. However, most existing sensors for use in
robotics are limited by: 1) their scope, measuring single parameters/events and
often requiring multiple types of sensors, 2) being expensive to manufacture,
limiting their use to where they are strictly necessary and often compromising
redundancy, and 3) have null or reduced physical flexibility, requiring further
costs with adaptation to a variety of robot structures. This paper presents a
novel mechanically flexible force and proximity hybrid sensor based on
piezoresistive and self-capacitive phenomena. The sensor is inexpensive and
easy to apply even on complex-shaped robot structures. The manufacturing
process is described, including controlling circuits, mechanical design, and
data acquisition. Experimental trials featuring the characterisation of the
sensor were conducted, focusing on both force-electrical resistance and
self-capacitive proximity response. The sensor's versatility, flexibility,
thinness (1 mm thickness), accuracy (reduced drift) and repeatability
demonstrated its applicability in several domains. Finally, the sensor was
successfully applied in two distinct situations: hand guiding a robot (by touch
commands), and human-robot collision avoidance (by proximity detection).
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 00:45:29 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Fonseca",
"Diogo",
""
],
[
"Safeea",
"Mohammad",
""
],
[
"Neto",
"Pedro",
""
]
] |
new_dataset
| 0.992479 |
2304.06247
|
Zixuan Huang
|
Zixuan Huang, Varun Jampani, Anh Thai, Yuanzhen Li, Stefan Stojanov,
James M. Rehg
|
ShapeClipper: Scalable 3D Shape Learning from Single-View Images via
Geometric and CLIP-based Consistency
|
Accepted to CVPR 2023, project website at
https://zixuanh.com/projects/shapeclipper.html
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present ShapeClipper, a novel method that reconstructs 3D object shapes
from real-world single-view RGB images. Instead of relying on laborious 3D,
multi-view or camera pose annotation, ShapeClipper learns shape reconstruction
from a set of single-view segmented images. The key idea is to facilitate shape
learning via CLIP-based shape consistency, where we encourage objects with
similar CLIP encodings to share similar shapes. We also leverage off-the-shelf
normals as an additional geometric constraint so the model can learn better
bottom-up reasoning of detailed surface geometry. These two novel consistency
constraints, when used to regularize our model, improve its ability to learn
both global shape structure and local geometric details. We evaluate our method
over three challenging real-world datasets, Pix3D, Pascal3D+, and OpenImages,
where we achieve superior performance over state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 03:53:12 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Huang",
"Zixuan",
""
],
[
"Jampani",
"Varun",
""
],
[
"Thai",
"Anh",
""
],
[
"Li",
"Yuanzhen",
""
],
[
"Stojanov",
"Stefan",
""
],
[
"Rehg",
"James M.",
""
]
] |
new_dataset
| 0.999703 |
2304.06300
|
Hongguang Sun
|
Hongguang Sun, Linyi Zhang, Tony Q. S. Quek, Xijun Wang, and Yan Zhang
|
CoMP Transmission in Downlink NOMA-Based Cellular-Connected UAV Networks
|
29 pages,10 figures, submitted to a transaction journal
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the integration between the coordinated multipoint
(CoMP) transmission and the non-orthogonal multiple access (NOMA) in the
downlink cellular-connected UAV networks with the coexistence of aerial users
(AUs) and terrestrial users (TUs). Based on the comparison of the desired
signal strength to the dominant interference strength, the AUs are classified
into CoMP-AUs and Non-CoMP AUs, where the former receives transmissions from
two cooperative BSs, and constructs two exclusive NOMA clusters with two TUs,
respectively. A Non-CoMP AU constructs a NOMA cluster with a TU served by the
same BS. By leveraging the tools from stochastic geometry, we propose a novel
analytical framework to evaluate the performance of the CoMP-NOMA based
cellular-connected UAV network in terms of coverage probability, and average
ergodic rate. We reveal the superiority of the proposed CoMP-NOMA scheme by
comparing with three benchmark schemes, and further quantify the impacts of key
system parameters on the network performance. By harvesting the benefits of
both CoMP and NOMA, we prove that the proposed framework can provide reliable
connection for AUs by using CoMP and enhance the average ergodic rate through
NOMA technique as well.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 07:13:32 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Sun",
"Hongguang",
""
],
[
"Zhang",
"Linyi",
""
],
[
"Quek",
"Tony Q. S.",
""
],
[
"Wang",
"Xijun",
""
],
[
"Zhang",
"Yan",
""
]
] |
new_dataset
| 0.989871 |
2304.06342
|
Yiming Qian
|
Akshay Gadi Patil, Yiming Qian, Shan Yang, Brian Jackson, Eric
Bennett, Hao Zhang
|
RoSI: Recovering 3D Shape Interiors from Few Articulation Images
| null | null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The dominant majority of 3D models that appear in gaming, VR/AR, and those we
use to train geometric deep learning algorithms are incomplete, since they are
modeled as surface meshes and missing their interior structures. We present a
learning framework to recover the shape interiors (RoSI) of existing 3D models
with only their exteriors from multi-view and multi-articulation images. Given
a set of RGB images that capture a target 3D object in different articulated
poses, possibly from only few views, our method infers the interior planes that
are observable in the input images. Our neural architecture is trained in a
category-agnostic manner and it consists of a motion-aware multi-view analysis
phase including pose, depth, and motion estimations, followed by interior plane
detection in images and 3D space, and finally multi-view plane fusion. In
addition, our method also predicts part articulations and is able to realize
and even extrapolate the captured motions on the target 3D object. We evaluate
our method by quantitative and qualitative comparisons to baselines and
alternative solutions, as well as testing on untrained object categories and
real image inputs to assess its generalization capabilities.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 08:45:26 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Patil",
"Akshay Gadi",
""
],
[
"Qian",
"Yiming",
""
],
[
"Yang",
"Shan",
""
],
[
"Jackson",
"Brian",
""
],
[
"Bennett",
"Eric",
""
],
[
"Zhang",
"Hao",
""
]
] |
new_dataset
| 0.972519 |
2304.06351
|
Lorenzo Berlincioni
|
Lorenzo Berlincioni, Luca Cultrera, Chiara Albisani, Lisa Cresti,
Andrea Leonardo, Sara Picchioni, Federico Becattini, Alberto Del Bimbo
|
Neuromorphic Event-based Facial Expression Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently, event cameras have shown large applicability in several computer
vision fields especially concerning tasks that require high temporal
resolution. In this work, we investigate the usage of such kind of data for
emotion recognition by presenting NEFER, a dataset for Neuromorphic Event-based
Facial Expression Recognition. NEFER is composed of paired RGB and event videos
representing human faces labeled with the respective emotions and also
annotated with face bounding boxes and facial landmarks. We detail the data
acquisition process as well as providing a baseline method for RGB and event
data. The collected data captures subtle micro-expressions, which are hard to
spot with RGB data, yet emerge in the event domain. We report a double
recognition accuracy for the event-based approach, proving the effectiveness of
a neuromorphic approach for analyzing fast and hardly detectable expressions
and the emotions they conceal.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 09:02:10 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Berlincioni",
"Lorenzo",
""
],
[
"Cultrera",
"Luca",
""
],
[
"Albisani",
"Chiara",
""
],
[
"Cresti",
"Lisa",
""
],
[
"Leonardo",
"Andrea",
""
],
[
"Picchioni",
"Sara",
""
],
[
"Becattini",
"Federico",
""
],
[
"Del Bimbo",
"Alberto",
""
]
] |
new_dataset
| 0.999379 |
2304.06395
|
EPTCS
|
Dominic Orchard (University of Kent, UK), Mihail Munteanu (Masabi
Ltd.), Paulo Torrens (University of Kent, UK)
|
Communicating Actor Automata -- Modelling Erlang Processes as
Communicating Machines
|
In Proceedings PLACES 2023, arXiv:2304.05439
|
EPTCS 378, 2023, pp. 38-48
|
10.4204/EPTCS.378.4
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Brand and Zafiropulo's notion of Communicating Finite-State Machines (CFSMs)
provides a succinct and powerful model of message-passing concurrency, based
around channels. However, a major variant of message-passing concurrency is not
readily captured by CFSMs: the actor model. In this work, we define a variant
of CFSMs, called Communicating Actor Automata, to capture the actor model of
concurrency as provided by Erlang: with mailboxes, from which messages are
received according to repeated application of pattern matching. Furthermore,
this variant of CFSMs supports dynamic process topologies, capturing common
programming idioms in the context of actor-based message-passing concurrency.
This gives a new basis for modelling, specifying, and verifying Erlang
programs. We also consider a class of CAAs that give rise to freedom from race
conditions.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 11:01:39 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Orchard",
"Dominic",
"",
"University of Kent, UK"
],
[
"Munteanu",
"Mihail",
"",
"Masabi\n Ltd."
],
[
"Torrens",
"Paulo",
"",
"University of Kent, UK"
]
] |
new_dataset
| 0.999009 |
2304.06440
|
Kai Zhao
|
Kai Zhao, Kun Yuan, Ming Sun and Xing Wen
|
Zoom-VQA: Patches, Frames and Clips Integration for Video Quality
Assessment
|
Accepted by CVPR 2023 Workshop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Video quality assessment (VQA) aims to simulate the human perception of video
quality, which is influenced by factors ranging from low-level color and
texture details to high-level semantic content. To effectively model these
complicated quality-related factors, in this paper, we decompose video into
three levels (\ie, patch level, frame level, and clip level), and propose a
novel Zoom-VQA architecture to perceive spatio-temporal features at different
levels. It integrates three components: patch attention module, frame pyramid
alignment, and clip ensemble strategy, respectively for capturing
region-of-interest in the spatial dimension, multi-level information at
different feature levels, and distortions distributed over the temporal
dimension. Owing to the comprehensive design, Zoom-VQA obtains state-of-the-art
results on four VQA benchmarks and achieves 2nd place in the NTIRE 2023 VQA
challenge. Notably, Zoom-VQA has outperformed the previous best results on two
subsets of LSVQ, achieving 0.8860 (+1.0%) and 0.7985 (+1.9%) of SRCC on the
respective subsets. Adequate ablation studies further verify the effectiveness
of each component. Codes and models are released in
https://github.com/k-zha14/Zoom-VQA.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 12:18:15 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Zhao",
"Kai",
""
],
[
"Yuan",
"Kun",
""
],
[
"Sun",
"Ming",
""
],
[
"Wen",
"Xing",
""
]
] |
new_dataset
| 0.999235 |
2304.06454
|
Senmao Tian
|
Senmao Tian, Ming Lu, Jiaming Liu, Yandong Guo, Yurong Chen, Shunli
Zhang
|
CABM: Content-Aware Bit Mapping for Single Image Super-Resolution
Network with Large Input
|
Accepted to CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of high-definition display devices, the practical
scenario of Super-Resolution (SR) usually needs to super-resolve large input
like 2K to higher resolution (4K/8K). To reduce the computational and memory
cost, current methods first split the large input into local patches and then
merge the SR patches into the output. These methods adaptively allocate a
subnet for each patch. Quantization is a very important technique for network
acceleration and has been used to design the subnets. Current methods train an
MLP bit selector to determine the propoer bit for each layer. However, they
uniformly sample subnets for training, making simple subnets overfitted and
complicated subnets underfitted. Therefore, the trained bit selector fails to
determine the optimal bit. Apart from this, the introduced bit selector brings
additional cost to each layer of the SR network. In this paper, we propose a
novel method named Content-Aware Bit Mapping (CABM), which can remove the bit
selector without any performance loss. CABM also learns a bit selector for each
layer during training. After training, we analyze the relation between the edge
information of an input patch and the bit of each layer. We observe that the
edge information can be an effective metric for the selected bit. Therefore, we
design a strategy to build an Edge-to-Bit lookup table that maps the edge score
of a patch to the bit of each layer during inference. The bit configuration of
SR network can be determined by the lookup tables of all layers. Our strategy
can find better bit configuration, resulting in more efficient mixed precision
networks. We conduct detailed experiments to demonstrate the generalization
ability of our method. The code will be released.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 12:48:30 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Tian",
"Senmao",
""
],
[
"Lu",
"Ming",
""
],
[
"Liu",
"Jiaming",
""
],
[
"Guo",
"Yandong",
""
],
[
"Chen",
"Yurong",
""
],
[
"Zhang",
"Shunli",
""
]
] |
new_dataset
| 0.995238 |
2304.06480
|
Amir Hossein Zolfaghari
|
Kalpdrum Passi, Shervin Assari, Amir Hossein Zolfaghari
|
#BlackLivesMatter and Racism in Life Expectancy, Poverty, Educational
Attainment, and Race Compositions: State Analysis of 2020 Tweets in the USA
| null | null | null | null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The year 2020 was a challenging year known mainly as the pandemic year.
However, the notable event of George Floyd's killing broke many humans' hearts
and made them protest on social media and the streets as well. In this
research, we studied the hashtag "BlackLivesMatter," and some of its adversary
contentions regarding George Floyd's demise in 2020 on Twitter. Based on the
extensive aftermath of protests in the United States, we considered an area
analysis to compare tweet rates in different groups to some previously studied
statistics. The purpose is to investigate how racism content is correlated with
life expectancy, poverty, and education. Findings revealed a significant
relationship between online color-based contents and some physical world
indicators.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 17:57:16 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Passi",
"Kalpdrum",
""
],
[
"Assari",
"Shervin",
""
],
[
"Zolfaghari",
"Amir Hossein",
""
]
] |
new_dataset
| 0.990213 |
2304.06491
|
Abdur Rab Dhruba
|
Abdur Rab Dhruba, Kazi Nabiul Alam, Md. Shakib Khan, Sananda Saha,
Mohammad Monirujjaman Khan, Mohammed Baz, Mehedi Masud, and Mohammed A.
AlZain
|
IoT-Based Water Quality Assessment System for Industrial Waste
WaterHealthcare Perspective
| null | null |
10.1155/2022/3769965
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The environment, especially water, gets polluted due to industrialization and
urbanization. Pollution due to industrialization and urbanization has harmful
effects on both the environment and the lives on Earth. This polluted water can
cause food poisoning, diarrhea, short-term gastrointestinal problems,
respiratory diseases, skin problems, and other serious health complications. In
a developing country like Bangladesh, where ready-made garments sector is one
of the major sources of the total Gross Domestic Product (GDP), most of the
wastes released from the garment factories are dumped into the nearest rivers
or canals. Hence, the quality of the water of these bodies become very
incompatible for the living beings, and so, it has become one of the major
threats to the environment and human health. In addition, the amount of fish in
the rivers and canals in Bangladesh is decreasing day by day as a result of
water pollution. Therefore, to save fish and other water animals and the
environment, we need to monitor the quality of the water and find out the
reasons for the pollution. Real-time monitoring of the quality of water is
vital for controlling water pollution. Most of the approaches for controlling
water pollution are mainly biological and lab-based, which takes a lot of time
and resources. To address this issue, we developed an Internet of Things
(IoT)-based real-time water quality monitoring system, integrated with a mobile
application. The proposed system in this research measures some of the most
important indexes of water, including the potential of hydrogen (pH), total
dissolved solids (TDS), and turbidity, and temperature of water. The proposed
system results will be very helpful in saving the environment, and thus,
improving the health of living creatures on Earth.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 07:17:18 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Dhruba",
"Abdur Rab",
""
],
[
"Alam",
"Kazi Nabiul",
""
],
[
"Khan",
"Md. Shakib",
""
],
[
"Saha",
"Sananda",
""
],
[
"Khan",
"Mohammad Monirujjaman",
""
],
[
"Baz",
"Mohammed",
""
],
[
"Masud",
"Mehedi",
""
],
[
"AlZain",
"Mohammed A.",
""
]
] |
new_dataset
| 0.995579 |
2304.06517
|
Pedro Neto
|
Mahmoud Tavakoli, Andriy Sayuk, Jo\~ao Louren\c{c}o, Pedro Neto
|
Anthropomorphic finger for grasping applications: 3D printed
endoskeleton in a soft skin
| null |
Int J Adv Manuf Technol 91, 2607-2620 (2017)
|
10.1007/s00170-016-9971-8
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Application of soft and compliant joints in grasping mechanisms received an
increasing attention during recent years. This article suggests the design and
development of a novel bio-inspired compliant finger which is composed of a 3D
printed rigid endoskeleton covered by a soft matter. The overall integrated
system resembles a biological structure in which a finger presents an
anthropomorphic look. The mechanical properties of such structure are enhanced
through optimization of the repetitive geometrical structures that constructs a
flexure bearing as a joint for the fingers. The endoskeleton is formed by
additive manufacturing of such geometries with rigid materials. The geometry of
the endoskeleton was studied by finite element analysis (FEA) to obtain the
desired properties: high stiffness against lateral deflection and twisting, and
low stiffness in the desired bending axis of the fingers. Results are validated
by experimental analysis.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 13:17:45 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Tavakoli",
"Mahmoud",
""
],
[
"Sayuk",
"Andriy",
""
],
[
"Lourenço",
"João",
""
],
[
"Neto",
"Pedro",
""
]
] |
new_dataset
| 0.999744 |
2304.06523
|
Philip Whittington
|
Janosch Fuchs, Philip Whittington
|
The 2-Attractor Problem is NP-Complete
| null | null | null | null |
cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
A $k$-attractor is a combinatorial object unifying dictionary-based
compression. It allows to compare the repetitiveness measures of different
dictionary compressors such as Lempel-Ziv 77, the Burrows-Wheeler transform,
straight line programs and macro schemes. For a string $ T \in \Sigma^n$, the
$k$-attractor is defined as a set of positions $\Gamma \subseteq [1,n]$, such
that every distinct substring of length at most $k$ is covered by at least one
of the selected positions. Thus, if a substring occurs multiple times in $T$,
one position suffices to cover it. A 1-attractor is easily computed in linear
time, while Kempa and Prezza [STOC 2018] have shown that for $k \geq 3$, it is
NP-complete to compute the smallest $k$-attractor by a reduction from $k$-set
cover.
The main result of this paper answers the open question for the complexity of
the 2-attractor problem, showing that the problem remains NP-complete. Kempa
and Prezza's proof for $k \geq 3$ also reduces the 2-attractor problem to the
2-set cover problem, which is equivalent to edge cover, but that does not fully
capture the complexity of the 2-attractor problem. For this reason, we extend
edge cover by a color function on the edges, yielding the colorful edge cover
problem. Any edge cover must then satisfy the additional constraint that each
color is represented. This extension raises the complexity such that colorful
edge cover becomes NP-complete while also more precisely modeling the
2-attractor problem. We obtain a reduction showing $k$-attractor to be
NP-complete for any $k \geq 2$.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 13:19:37 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Fuchs",
"Janosch",
""
],
[
"Whittington",
"Philip",
""
]
] |
new_dataset
| 0.965223 |
2304.06543
|
Venkata M V Gunturi
|
Sarnath Ramnath, Venkata M. V. Gunturi, Subi Dangol, Abhishek Mishra,
Pradeep Kumar
|
Load Balanced Demand Distribution under Overload Penalties
|
arXiv admin note: text overlap with arXiv:2009.01765
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Input to the Load Balanced Demand Distribution (LBDD) consists of the
following: (a) a set of public service centers (e.g., schools); (b) a set of
demand (people) units and; (c) a cost matrix containing the cost of assignment
for all demand unit-service center pairs. In addition, each service center is
also associated with a notion of capacity and a penalty which is incurred if it
gets overloaded. Given the input, the LBDD problem determines a mapping from
the set of demand units to the set of service centers. The objective is to
determine a mapping that minimizes the sum of the following two terms: (i) the
total assignment cost between demand units and their allotted service centers
and, (ii) total of penalties incurred. The problem of LBDD finds its
application in the domain of urban planning. An instance of the LBDD problem
can be reduced to an instance of the min-cost bi-partite matching problem.
However, this approach cannot scale up to the real world large problem
instances. The current state of the art related to LBDD makes simplifying
assumptions such as infinite capacity or total capacity being equal to the
total demand. This paper proposes a novel allotment subspace re-adjustment
based approach (ASRAL) for the LBDD problem. We analyze ASRAL theoretically and
present its asymptotic time complexity. We also evaluate ASRAL experimentally
on large problem instances and compare with alternative approaches. Our results
indicate that ASRAL is able to scale-up while maintaining significantly better
solution quality over the alternative approaches. In addition, we also extend
ASRAL to para-ASRAL which uses the GPU and CPU cores to speed-up the execution
while maintaining the same solution quality as ASRAL.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 13:53:37 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Ramnath",
"Sarnath",
""
],
[
"Gunturi",
"Venkata M. V.",
""
],
[
"Dangol",
"Subi",
""
],
[
"Mishra",
"Abhishek",
""
],
[
"Kumar",
"Pradeep",
""
]
] |
new_dataset
| 0.998358 |
2304.06560
|
Filip Sroubek
|
Roman Stanek, Tomas Kerepecky, Adam Novozamsky, Filip Sroubek, Barbara
Zitova, Jan Flusser
|
Real-Time Wheel Detection and Rim Classification in Automotive
Production
|
5 pages, 7 figures, 3 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a novel approach to real-time automatic rim detection,
classification, and inspection by combining traditional computer vision and
deep learning techniques. At the end of every automotive assembly line, a
quality control process is carried out to identify any potential defects in the
produced cars. Common yet hazardous defects are related, for example, to
incorrectly mounted rims. Routine inspections are mostly conducted by human
workers that are negatively affected by factors such as fatigue or distraction.
We have designed a new prototype to validate whether all four wheels on a
single car match in size and type. Additionally, we present three comprehensive
open-source databases, CWD1500, WHEEL22, and RB600, for wheel, rim, and bolt
detection, as well as rim classification, which are free-to-use for scientific
purposes.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 14:12:57 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Stanek",
"Roman",
""
],
[
"Kerepecky",
"Tomas",
""
],
[
"Novozamsky",
"Adam",
""
],
[
"Sroubek",
"Filip",
""
],
[
"Zitova",
"Barbara",
""
],
[
"Flusser",
"Jan",
""
]
] |
new_dataset
| 0.999274 |
2304.06575
|
Benjamin Badger
|
Benjamin L. Badger
|
Adversarial Examples from Dimensional Invariance
|
6 pages
| null | null | null |
cs.LG cs.CV cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
Adversarial examples have been found for various deep as well as shallow
learning models, and have at various times been suggested to be either fixable
model-specific bugs, or else inherent dataset feature, or both. We present
theoretical and empirical results to show that adversarial examples are
approximate discontinuities resulting from models that specify approximately
bijective maps $f: \Bbb R^n \to \Bbb R^m; n \neq m$ over their inputs, and this
discontinuity follows from the topological invariance of dimension.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 14:37:45 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Badger",
"Benjamin L.",
""
]
] |
new_dataset
| 0.962824 |
2304.06602
|
MinhDuc Vo
|
Duc Minh Vo, Quoc-An Luong, Akihiro Sugimoto, Hideki Nakayama
|
A-CAP: Anticipation Captioning with Commonsense Knowledge
|
Accepted to CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans possess the capacity to reason about the future based on a sparse
collection of visual cues acquired over time. In order to emulate this ability,
we introduce a novel task called Anticipation Captioning, which generates a
caption for an unseen oracle image using a sparsely temporally-ordered set of
images. To tackle this new task, we propose a model called A-CAP, which
incorporates commonsense knowledge into a pre-trained vision-language model,
allowing it to anticipate the caption. Through both qualitative and
quantitative evaluations on a customized visual storytelling dataset, A-CAP
outperforms other image captioning methods and establishes a strong baseline
for anticipation captioning. We also address the challenges inherent in this
task.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 15:10:47 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Vo",
"Duc Minh",
""
],
[
"Luong",
"Quoc-An",
""
],
[
"Sugimoto",
"Akihiro",
""
],
[
"Nakayama",
"Hideki",
""
]
] |
new_dataset
| 0.954697 |
2304.06627
|
Haozhe Feng
|
Haozhe Feng, Zhaorui Yang, Hesun Chen, Tianyu Pang, Chao Du, Minfeng
Zhu, Wei Chen, Shuicheng Yan
|
CoSDA: Continual Source-Free Domain Adaptation
|
15 pages, 6 figures
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Without access to the source data, source-free domain adaptation (SFDA)
transfers knowledge from a source-domain trained model to target domains.
Recently, SFDA has gained popularity due to the need to protect the data
privacy of the source domain, but it suffers from catastrophic forgetting on
the source domain due to the lack of data. To systematically investigate the
mechanism of catastrophic forgetting, we first reimplement previous SFDA
approaches within a unified framework and evaluate them on four benchmarks. We
observe that there is a trade-off between adaptation gain and forgetting loss,
which motivates us to design a consistency regularization to mitigate
forgetting. In particular, we propose a continual source-free domain adaptation
approach named CoSDA, which employs a dual-speed optimized teacher-student
model pair and is equipped with consistency learning capability. Our
experiments demonstrate that CoSDA outperforms state-of-the-art approaches in
continuous adaptation. Notably, our CoSDA can also be integrated with other
SFDA methods to alleviate forgetting.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 15:53:23 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Feng",
"Haozhe",
""
],
[
"Yang",
"Zhaorui",
""
],
[
"Chen",
"Hesun",
""
],
[
"Pang",
"Tianyu",
""
],
[
"Du",
"Chao",
""
],
[
"Zhu",
"Minfeng",
""
],
[
"Chen",
"Wei",
""
],
[
"Yan",
"Shuicheng",
""
]
] |
new_dataset
| 0.997442 |
2304.06630
|
Ziwei Gao
|
Ziwei Gao
|
Time-Based Addiction
|
Accepted at the CHI-23 1st Workshop on Behavioural Design in Video
Games: Ethical, Legal, and Health Impact on Players held at the CHI
Conference on Human Factors in Computing Systems (CHI-23), 8 pages
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces time-based addiction, which refers to excessive
engagement in an activity that results in negative outcomes due to the
misallocation of time. This type of addiction is often seen in media-related
activities such as video games, social media, and television watching.
Behavioural design in video games plays a significant role in enabling
time-based addiction. Games are designed to be engaging and enjoyable, with
features such as rewards, leveling up, and social competition, all of which are
intended to keep players coming back for more. This article reviews the
behavioural design used in video games, and media more broadly, to increase the
addictive nature of these experiences. By doing so, the article aims to
recognise time-based addiction as a problem that in large part stems from
irresponsible design practices.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 15:56:37 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Gao",
"Ziwei",
""
]
] |
new_dataset
| 0.978372 |
2304.06710
|
Mustansar Fiaz
|
Mubashir Noman, Mustansar Fiaz, Hisham Cholakkal, Sanath Narayan, Rao
Muhammad Anwer, Salman Khan, Fahad Shahbaz Khan
|
Remote Sensing Change Detection With Transformers Trained from Scratch
|
5 figures and 4 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current transformer-based change detection (CD) approaches either employ a
pre-trained model trained on large-scale image classification ImageNet dataset
or rely on first pre-training on another CD dataset and then fine-tuning on the
target benchmark. This strategy is driven by the fact that transformers
typically require a large amount of training data to learn inductive biases,
an amount that standard CD datasets cannot provide due to their small size. We
develop an end-to-end CD approach with transformers that is trained from
scratch and yet achieves state-of-the-art performance on four public
benchmarks. Instead of using conventional self-attention that struggles to
capture inductive biases when trained from scratch, our architecture utilizes a
shuffled sparse-attention operation that focuses on selected sparse informative
regions to capture the inherent characteristics of the CD data. Moreover, we
introduce a change-enhanced feature fusion (CEFF) module to fuse the features
from input image pairs by performing a per-channel re-weighting. Our CEFF
module aids in enhancing the relevant semantic changes while suppressing the
noisy ones. Extensive experiments on four CD datasets reveal the merits of the
proposed contributions, achieving gains as high as 14.27\% in
intersection-over-union (IoU) score, compared to the best published results in
the literature. Code is available at
\url{https://github.com/mustansarfiaz/ScratchFormer}.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 17:57:54 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Noman",
"Mubashir",
""
],
[
"Fiaz",
"Mustansar",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Narayan",
"Sanath",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Khan",
"Salman",
""
],
[
"Khan",
"Fahad Shahbaz",
""
]
] |
new_dataset
| 0.999216 |
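The per-channel re-weighting fusion that the abstract above attributes to the CEFF module can be illustrated with a short sketch; the squeeze-style gate, layer sizes, and names below are assumptions, not the published module:

```python
# Illustrative sketch of change-enhanced feature fusion as the abstract
# describes it: features from the two input images are fused by per-channel
# re-weighting. The gating design and sizes are assumptions, not the
# published CEFF module.
import torch
import torch.nn as nn

class PerChannelFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Squeeze-style gate: pooled concatenated features -> per-channel weights.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (B, C, H, W) features of the pre/post-change images.
        both = torch.cat([feat_a, feat_b], dim=1)
        w = self.gate(both)                     # (B, C, 1, 1) channel weights
        return w * feat_a + (1.0 - w) * feat_b  # re-weighted fusion

fusion = PerChannelFusion(channels=64)
out = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```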
2304.06717
|
Sida Peng
|
Sida Peng, Yunzhi Yan, Qing Shuai, Hujun Bao, Xiaowei Zhou
|
Representing Volumetric Videos as Dynamic MLP Maps
|
Accepted to CVPR 2023. The first two authors contributed equally to
this paper. Project page: https://zju3dv.github.io/mlp_maps/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel representation of volumetric videos for
real-time view synthesis of dynamic scenes. Recent advances in neural scene
representations demonstrate their remarkable capability to model and render
complex static scenes, but extending them to represent dynamic scenes is not
straightforward due to their slow rendering speed or high storage cost. To
solve this problem, our key idea is to represent the radiance field of each
frame as a set of shallow MLP networks whose parameters are stored in 2D grids,
called MLP maps, and dynamically predicted by a 2D CNN decoder shared by all
frames. Representing 3D scenes with shallow MLPs significantly improves the
rendering speed, while dynamically predicting MLP parameters with a shared 2D
CNN instead of explicitly storing them leads to low storage cost. Experiments
show that the proposed approach achieves state-of-the-art rendering quality on
the NHR and ZJU-MoCap datasets, while being efficient for real-time rendering
with a speed of 41.7 fps for $512 \times 512$ images on an RTX 3090 GPU. The
code is available at https://zju3dv.github.io/mlp_maps/.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 17:59:33 GMT"
}
] | 2023-04-14T00:00:00 |
[
[
"Peng",
"Sida",
""
],
[
"Yan",
"Yunzhi",
""
],
[
"Shuai",
"Qing",
""
],
[
"Bao",
"Hujun",
""
],
[
"Zhou",
"Xiaowei",
""
]
] |
new_dataset
| 0.996209 |
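The "MLP map" idea from the abstract above, a 2D grid whose cells store the parameters of tiny MLPs, can be sketched as follows; grid size, MLP width, and the projection of query points onto the grid are placeholder assumptions, and the grid itself would be predicted per frame by the shared 2D CNN:

```python
# Back-of-the-envelope sketch of an MLP map: a 2D grid stores parameters of
# tiny MLPs, and each 3D query point is routed to the MLP whose grid cell it
# projects into. Grid size, MLP width, and the projection are assumptions.
import torch

GRID, IN_DIM, HID, OUT_DIM = 16, 3, 8, 4   # 16x16 map of 3->8->4 MLPs (RGB + density)
n_params = IN_DIM * HID + HID + HID * OUT_DIM + OUT_DIM
mlp_map = torch.randn(GRID, GRID, n_params)  # would come from a 2D CNN decoder per frame

def query(points):
    # points: (N, 3) in [0, 1)^3; use (x, y) to pick a grid cell.
    ij = (points[:, :2] * GRID).long().clamp(0, GRID - 1)
    theta = mlp_map[ij[:, 0], ij[:, 1]]      # (N, n_params) per-point MLP weights
    w1 = theta[:, : IN_DIM * HID].view(-1, IN_DIM, HID)
    b1 = theta[:, IN_DIM * HID : IN_DIM * HID + HID]
    rest = theta[:, IN_DIM * HID + HID :]
    w2 = rest[:, : HID * OUT_DIM].view(-1, HID, OUT_DIM)
    b2 = rest[:, HID * OUT_DIM :]
    h = torch.relu(torch.einsum("ni,nih->nh", points, w1) + b1)
    return torch.einsum("nh,nho->no", h, w2) + b2   # (N, 4) radiance + density

out = query(torch.rand(1024, 3))
```

Evaluating a shallow per-cell MLP is far cheaper than a single deep network, which is the source of the rendering speedup the abstract reports.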
2106.07258
|
Madelon Hulsebos
|
Madelon Hulsebos, \c{C}a\u{g}atay Demiralp, Paul Groth
|
GitTables: A Large-Scale Corpus of Relational Tables
| null | null |
10.1145/3588710
| null |
cs.DB cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The success of deep learning has sparked interest in improving relational
table tasks, like data preparation and search, with table representation models
trained on large table corpora. Existing table corpora primarily contain tables
extracted from HTML pages, limiting the capability to represent offline
database tables. To train and evaluate high-capacity models for applications
beyond the Web, we need resources with tables that resemble relational database
tables. Here we introduce GitTables, a corpus of 1M relational tables extracted
from GitHub. Our continuing curation aims at growing the corpus to at least 10M
tables. Analyses of GitTables show that its structure, content, and topical
coverage differ significantly from existing table corpora. We annotate table
columns in GitTables with semantic types, hierarchical relations and
descriptions from Schema.org and DBpedia. The evaluation of our annotation
pipeline on the T2Dv2 benchmark illustrates that our approach provides results
on par with human annotations. We present three applications of GitTables,
demonstrating its value for learned semantic type detection models, schema
completion methods, and benchmarks for table-to-KG matching, data search, and
preparation. We make the corpus and code available at
https://gittables.github.io.
|
[
{
"version": "v1",
"created": "Mon, 14 Jun 2021 09:22:09 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Sep 2021 11:52:20 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Sep 2021 09:59:29 GMT"
},
{
"version": "v4",
"created": "Fri, 15 Apr 2022 14:45:47 GMT"
},
{
"version": "v5",
"created": "Wed, 12 Apr 2023 13:24:55 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Hulsebos",
"Madelon",
""
],
[
"Demiralp",
"Çağatay",
""
],
[
"Groth",
"Paul",
""
]
] |
new_dataset
| 0.999442 |
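For readers who want to work with GitTables, a hedged sketch of inspecting one table locally follows; the path is a placeholder for one's own download from https://gittables.github.io, and the parquet format is an assumption about the distribution rather than a documented guarantee:

```python
# Hedged sketch of inspecting one GitTables table locally. The path below is
# a placeholder for one's own download, and parquet is an assumption about
# the file format, not a documented guarantee.
import pandas as pd

df = pd.read_parquet("gittables/some_topic/table_0001.parquet")  # placeholder path
print(df.shape)          # rows x columns of one relational table
print(df.dtypes.head())  # column types, raw material for semantic type detection
```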
2112.02807
|
Qi Pang
|
Qi Pang, Yuanyuan Yuan, Shuai Wang
|
MDPFuzz: Testing Models Solving Markov Decision Processes
| null | null | null | null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The Markov decision process (MDP) provides a mathematical framework for
modeling sequential decision-making problems, many of which are crucial to
security and safety, such as autonomous driving and robot control. The rapid
development of artificial intelligence research has created efficient methods
for solving MDPs, such as deep neural networks (DNNs), reinforcement learning
(RL), and imitation learning (IL). However, these popular models solving MDPs
are neither thoroughly tested nor rigorously reliable.
We present MDPFuzz, the first blackbox fuzz testing framework for models
solving MDPs. MDPFuzz forms testing oracles by checking whether the target
model enters abnormal and dangerous states. During fuzzing, MDPFuzz decides
which mutated state to retain by checking whether it reduces cumulative rewards
or forms a new state sequence. We design efficient techniques to quantify the
"freshness" of a state sequence using Gaussian mixture models (GMMs) and
dynamic expectation-maximization (DynEM). We also prioritize states with high
potential of revealing crashes by estimating the local sensitivity of target
models over states.
MDPFuzz is evaluated on five state-of-the-art models for solving MDPs,
including supervised DNN, RL, IL, and multi-agent RL. Our evaluation includes
scenarios of autonomous driving, aircraft collision avoidance, and two games
that are often used to benchmark RL. During a 12-hour run, we find over 80
crash-triggering state sequences on each model. We report the intriguing finding that
crash-triggering states, though they look normal, induce distinct neuron
activation patterns compared with normal states. We further develop an abnormal
behavior detector to harden all the evaluated models and repair them with the
findings of MDPFuzz to significantly enhance their robustness without
sacrificing accuracy.
|
[
{
"version": "v1",
"created": "Mon, 6 Dec 2021 06:35:55 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Dec 2021 03:47:30 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Apr 2022 11:54:47 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Apr 2023 22:19:33 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Pang",
"Qi",
""
],
[
"Yuan",
"Yuanyuan",
""
],
[
"Wang",
"Shuai",
""
]
] |
new_dataset
| 0.977332 |
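The fuzzing loop outlined in the MDPFuzz abstract above can be sketched schematically; the GaussianMixture below stands in for the paper's GMM-plus-dynamic-EM machinery, and env_rollout, the threshold, and the mutation scheme are placeholder assumptions:

```python
# Schematic of the fuzzing loop the abstract outlines: retain a mutated seed
# if it lowers cumulative reward or produces a low-density ("fresh") state
# sequence. env_rollout, the threshold, and the mutation are assumptions;
# states are assumed to be 1-D numpy arrays.
import numpy as np
from sklearn.mixture import GaussianMixture

def fuzz(env_rollout, seeds, n_iters=1000, fresh_thresh=-50.0):
    # env_rollout(state) -> (cumulative_reward, crashed, visited_states)
    corpus = list(seeds)
    rewards = [env_rollout(s)[0] for s in corpus]
    gmm = GaussianMixture(n_components=8).fit(np.stack(corpus))
    crashes = []
    for _ in range(n_iters):
        i = np.random.randint(len(corpus))
        mutant = corpus[i] + np.random.normal(scale=0.01, size=corpus[i].shape)
        reward, crashed, visited = env_rollout(mutant)
        if crashed:
            crashes.append(mutant)             # testing oracle: abnormal state reached
        elif reward < rewards[i] or gmm.score(np.stack(visited)) < fresh_thresh:
            corpus.append(mutant)              # keep mutants that reduce reward
            rewards.append(reward)             # or look novel under the density model
    return crashes
```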
2204.14211
|
Joel Jang
|
Joel Jang, Seonghyeon Ye, Changho Lee, Sohee Yang, Joongbo Shin,
Janghoon Han, Gyeonghun Kim, Minjoon Seo
|
TemporalWiki: A Lifelong Benchmark for Training and Evaluating
Ever-Evolving Language Models
|
published at EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Language Models (LMs) become outdated as the world changes; they often fail
to perform tasks requiring recent factual information which was absent or
different during training, a phenomenon called temporal misalignment. This is
an especially challenging problem because the research community still lacks a
coherent dataset for assessing the adaptability of LMs to frequently updated
knowledge corpora such as Wikipedia. To this end, we introduce TemporalWiki, a
lifelong benchmark for ever-evolving LMs that utilizes the difference between
consecutive snapshots of English Wikipedia and English Wikidata for training
and evaluation, respectively. The benchmark hence allows researchers to
periodically track an LM's ability to retain previous knowledge and acquire
updated/new knowledge at each point in time. We also find that training an LM
on the diff data through continual learning methods achieves perplexity similar
to or better than training on the entire snapshot, at 12 times less
computational cost, which verifies that factual knowledge in LMs can be safely
updated with minimal training data via continual learning. The dataset and the
code are available at https://github.com/joeljang/temporalwiki.
|
[
{
"version": "v1",
"created": "Fri, 29 Apr 2022 16:40:07 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 05:15:18 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Apr 2023 12:16:59 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Jang",
"Joel",
""
],
[
"Ye",
"Seonghyeon",
""
],
[
"Lee",
"Changho",
""
],
[
"Yang",
"Sohee",
""
],
[
"Shin",
"Joongbo",
""
],
[
"Han",
"Janghoon",
""
],
[
"Kim",
"Gyeonghun",
""
],
[
"Seo",
"Minjoon",
""
]
] |
new_dataset
| 0.980881 |
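The diff construction at the heart of the TemporalWiki abstract above can be illustrated with a toy sketch; the real benchmark diffs full Wikipedia/Wikidata snapshots, whereas this is a sentence-level simplification with invented inputs:

```python
# Toy illustration of building "diff data" from two consecutive snapshots:
# keep only content that is new or changed in the later snapshot. Inputs
# here are invented; the actual pipeline operates on full dump files.
def snapshot_diff(old_snapshot, new_snapshot):
    """Return sentences present in the new snapshot but not in the old one."""
    old = set(old_snapshot)
    return [s for s in new_snapshot if s not in old]

august = ["The tower is 324 m tall.", "The team has 11 players."]
september = ["The tower is 324 m tall.", "The team has 12 players."]
print(snapshot_diff(august, september))  # -> ['The team has 12 players.']
```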
2205.15960
|
Genta Indra Winata
|
Genta Indra Winata, Alham Fikri Aji, Samuel Cahyawijaya, Rahmad
Mahendra, Fajri Koto, Ade Romadhony, Kemal Kurniawan, David Moeljadi, Radityo
Eko Prasojo, Pascale Fung, Timothy Baldwin, Jey Han Lau, Rico Sennrich,
Sebastian Ruder
|
NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local
Languages
|
EACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Natural language processing (NLP) has a significant impact on society via
technologies such as machine translation and search engines. Despite its
success, NLP technology is only widely available for high-resource languages
such as English and Chinese, while it remains inaccessible to many languages
due to the unavailability of data resources and benchmarks. In this work, we
focus on developing resources for languages in Indonesia. Although Indonesia is
the second most linguistically diverse country, most of its languages are
categorized as endangered and some are even extinct. We develop the first-ever
parallel resource for 10 low-resource languages in Indonesia. Our resource
includes datasets, a multi-task benchmark, and lexicons, as well as a parallel
Indonesian-English dataset. We provide extensive analyses and describe the
challenges when creating such resources. We hope that our work can spark NLP
research on Indonesian and other underrepresented languages.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2022 17:03:50 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Apr 2023 16:42:53 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Winata",
"Genta Indra",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"Mahendra",
"Rahmad",
""
],
[
"Koto",
"Fajri",
""
],
[
"Romadhony",
"Ade",
""
],
[
"Kurniawan",
"Kemal",
""
],
[
"Moeljadi",
"David",
""
],
[
"Prasojo",
"Radityo Eko",
""
],
[
"Fung",
"Pascale",
""
],
[
"Baldwin",
"Timothy",
""
],
[
"Lau",
"Jey Han",
""
],
[
"Sennrich",
"Rico",
""
],
[
"Ruder",
"Sebastian",
""
]
] |
new_dataset
| 0.999861 |
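For readers who want to try NusaX, a hypothetical loading sketch follows; the Hub identifier and the "ace" (Acehnese) config name are assumptions about the release, not confirmed by the abstract above, so check the resource's official distribution before relying on them:

```python
# Hypothetical loading sketch; the dataset ID and config name are
# assumptions, not confirmed by the abstract above.
from datasets import load_dataset

ds = load_dataset("indonlp/NusaX-senti", "ace")  # assumed Hub ID and language config
print(ds["train"][0])                            # one labeled sentiment example
```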
2301.08863
|
Animesh Yadav
|
Omid Abbasi, Animesh Yadav, Halim Yanikomeroglu, Ngoc Dung Dao, Gamini
Senarath, Peiying Zhu
|
HAPS for 6G Networks: Potential Use Cases, Open Challenges, and Possible
Solutions
| null | null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High altitude platform station (HAPS), which is deployed in the stratosphere
at an altitude of 20-50 kilometres, has attracted much attention in recent
years due to its large footprint, line-of-sight links, and fixed position
relative to the Earth. Compared with existing network infrastructure, HAPS has
a much larger coverage area than terrestrial base stations and is much closer
than satellites to ground users. Besides small cells and macro cells, a
HAPS can offer one mega-cell, which can complement legacy networks in 6G and
beyond wireless systems. This paper explores potential use cases and discusses
relevant open challenges of integrating HAPS into legacy networks, while also
suggesting some solutions to these challenges. The cumulative density functions
of spectral efficiency of the integrated network and cell-edge users are
studied and compared with a terrestrial network. The results show that the capacity
gains achieved by the integrated network are beneficial to cell-edge users.
Furthermore, the advantages of a HAPS for backhauling aerial base stations are
demonstrated by the simulation results.
|
[
{
"version": "v1",
"created": "Sat, 21 Jan 2023 02:37:22 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 21:56:28 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Abbasi",
"Omid",
""
],
[
"Yadav",
"Animesh",
""
],
[
"Yanikomeroglu",
"Halim",
""
],
[
"Dao",
"Ngoc Dung",
""
],
[
"Senarath",
"Gamini",
""
],
[
"Zhu",
"Peiying",
""
]
] |
new_dataset
| 0.999616 |