| column | type | length / values |
|---|---|---|
| id | string | lengths 9-10 |
| submitter | string (nullable) | lengths 2-52 |
| authors | string | lengths 4-6.51k |
| title | string | lengths 4-246 |
| comments | string (nullable) | lengths 1-523 |
| journal-ref | string (nullable) | lengths 4-345 |
| doi | string (nullable) | lengths 11-120 |
| report-no | string (nullable) | lengths 2-243 |
| categories | string | lengths 5-98 |
| license | string | 9 classes |
| abstract | string | lengths 33-3.33k |
| versions | list | |
| update_date | timestamp[s] | |
| authors_parsed | list | |
| prediction | string | 1 class |
| probability | float64 | 0.95-1 |
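The records below follow this column order. As a minimal, hedged sketch of working with the table (the file name `arxiv_metadata.jsonl`, the JSON Lines layout, and the use of pandas are assumptions, not something documented here), the rows could be loaded and filtered on the `prediction` and `probability` columns like this:

```python
import pandas as pd

# Assumption: each record is stored as one JSON object per line (JSON Lines),
# using the column names listed in the schema above.
df = pd.read_json("arxiv_metadata.jsonl", lines=True)

# Keep only rows labeled as new datasets with high classifier confidence;
# the probability column in this preview spans roughly 0.95 to 1.0.
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]

print(confident[["id", "title", "update_date"]].head())
```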
2302.05991
|
Zhihao Zhao
|
Weiyu Feng, Seth Z. Zhao, Chuanyu Pan, Adam Chang, Yichen Chen, Zekun
Wang, Allen Y. Yang
|
Digital Twin Tracking Dataset (DTTD): A New RGB+Depth 3D Dataset for
Longer-Range Object Tracking Applications
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Digital twinning is the problem of augmenting real objects with their digital
counterparts. It can underpin a wide range of applications in augmented reality
(AR), autonomy, and UI/UX. A critical component in a good digital-twin system
is real-time, accurate 3D object tracking. Most existing works solve 3D object
tracking through the lens of robotic grasping, employ older generations of
depth sensors, and measure performance metrics that may not apply to other
digital-twin applications such as in AR. In this work, we create a novel RGB-D
dataset, called Digital Twin Tracking Dataset (DTTD), to enable further
research of the problem and extend potential solutions towards longer ranges
and mm localization accuracy. To reduce point cloud noise from the input
source, we select the latest Microsoft Azure Kinect as the state-of-the-art
time-of-flight (ToF) camera. In total, 103 scenes of 10 common off-the-shelf
objects with rich textures are recorded, with each frame annotated with a
per-pixel semantic segmentation and ground-truth object poses provided by a
commercial motion capturing system. Through extensive experiments with
model-level and dataset-level analysis, we demonstrate that DTTD can help
researchers develop future object tracking methods and analyze new challenges.
The dataset, data generation, annotation, and model evaluation pipeline are
made publicly available as open source code at:
https://github.com/augcog/DTTDv1.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 20:06:07 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 20:31:38 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Feng",
"Weiyu",
""
],
[
"Zhao",
"Seth Z.",
""
],
[
"Pan",
"Chuanyu",
""
],
[
"Chang",
"Adam",
""
],
[
"Chen",
"Yichen",
""
],
[
"Wang",
"Zekun",
""
],
[
"Yang",
"Allen Y.",
""
]
] |
new_dataset
| 0.999755 |
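The `versions` and `authors_parsed` fields are nested lists, as in the record above. Below is a minimal sketch, assuming each record has already been parsed into a Python dict with those keys (the helper name `flatten_record` is illustrative, not part of the dataset):

```python
from typing import Dict

def flatten_record(record: Dict) -> Dict:
    """Extract the latest version date and a flat author list from one record.

    Assumes `versions` is a list of {"version", "created"} dicts and
    `authors_parsed` is a list of [last, first, suffix] triples, as shown
    in the records of this preview.
    """
    latest = record["versions"][-1]["created"] if record["versions"] else None
    authors = [
        " ".join(part for part in (first, last) if part)
        for last, first, *_ in record["authors_parsed"]
    ]
    return {"id": record["id"], "latest_version": latest, "authors": authors}

# Example using (abbreviated) fields from the record above.
example = {
    "id": "2302.05991",
    "versions": [
        {"version": "v1", "created": "Sun, 12 Feb 2023 20:06:07 GMT"},
        {"version": "v2", "created": "Tue, 11 Apr 2023 20:31:38 GMT"},
    ],
    "authors_parsed": [["Feng", "Weiyu", ""], ["Zhao", "Seth Z.", ""]],
}
print(flatten_record(example))
# {'id': '2302.05991', 'latest_version': 'Tue, 11 Apr 2023 20:31:38 GMT',
#  'authors': ['Weiyu Feng', 'Seth Z. Zhao']}
```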
2303.11825
|
Matthew Brehmer
|
Matthew Brehmer, Maxime Cordeil, Christophe Hurter, Takayuki Itoh
|
The MERCADO Workshop at IEEE VIS 2023: Multimodal Experiences for Remote
Communication Around Data Online
|
Workshop accepted for IEEE VIS 2023
(https://ieeevis.org/year/2023/info/workshops): October 22 - 27 in Melbourne,
Australia. Website: https://sites.google.com/view/mercadoworkshop
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a half-day workshop at IEEE VIS 2023 on the topic of communication
and collaboration around data. Specifically, we aim to gather researchers
interested in multimodal, synchronous, and remote or hybrid forms of
communication and collaboration within organizational and educational settings.
This topic lies at the intersection of data visualization, human-computer
interaction, and computer-supported collaborative work, and overlaps
thematically with several prior seminars and workshops. Our intended outcomes
for the workshop include assembling a corpus of inspiring examples and a design
space, ideally consolidated into a survey paper, as well as the establishment
of new collaborations and a shared research agenda. We anticipate a format
comprising short presentations and demos, an invited keynote or fireside
chat, and a breakout group session organized around specific application
domains. Website: https://sites.google.com/view/mercadoworkshop.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 13:08:57 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Apr 2023 16:30:42 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Brehmer",
"Matthew",
""
],
[
"Cordeil",
"Maxime",
""
],
[
"Hurter",
"Christophe",
""
],
[
"Itoh",
"Takayuki",
""
]
] |
new_dataset
| 0.997905 |
2303.14897
|
Xianfan Gu
|
Xianfan Gu, Chuan Wen, Jiaming Song, Yang Gao
|
Seer: Language Instructed Video Prediction with Latent Diffusion Models
|
17 pages, 15 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Imagining the future trajectory is key for robots to make sound plans
and successfully reach their goals. Therefore, text-conditioned video
prediction (TVP) is an essential task to facilitate general robot policy
learning, i.e., predicting future video frames with a given language
instruction and reference frames. It is a highly challenging task to ground
task-level goals specified by instructions and high-fidelity frames together,
requiring large-scale data and computation. To tackle this task and empower
robots with the ability to foresee the future, we propose a sample and
computation-efficient model, named \textbf{Seer}, by inflating the pretrained
text-to-image (T2I) stable diffusion models along the temporal axis. We inflate
the denoising U-Net and language conditioning model with two novel techniques,
Autoregressive Spatial-Temporal Attention and Frame Sequential Text Decomposer,
to propagate the rich prior knowledge in the pretrained T2I models across the
frames. With the well-designed architecture, Seer makes it possible to generate
high-fidelity, coherent, and instruction-aligned video frames by fine-tuning a
few layers on a small amount of data. The experimental results on Something
Something V2 (SSv2) and Bridgedata datasets demonstrate our superior video
prediction performance with around 210-hour training on 4 RTX 3090 GPUs:
decreasing the FVD of the current SOTA model from 290 to 200 on SSv2 and
achieving at least 70\% preference in the human evaluation.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 03:12:24 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Apr 2023 03:10:37 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Gu",
"Xianfan",
""
],
[
"Wen",
"Chuan",
""
],
[
"Song",
"Jiaming",
""
],
[
"Gao",
"Yang",
""
]
] |
new_dataset
| 0.981274 |
2304.02437
|
Francesco Gonnella
|
Nicolo Valdi Biesuz, Rimsky Caballero, Davide Cieri, Nico Giangiacomi,
Francesco Gonnella, Guillermo Loustau De Linares, Andrew Peck
|
Hog 2023.1: a collaborative management tool to handle Git-based HDL
repositories
|
Presented at the 3rd Workshop on Open-Source Design Automation
(OSDA), 2023 (arXiv:2303.18024)
| null | null |
OSDA/2023/01
|
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Hog (HDL on Git) is an open-source tool designed to manage Git-based HDL
repositories. It aims to simplify HDL project development, maintenance, and
versioning by using Git to guarantee synthesis and implementation
reproducibility and binary file traceability. This is ensured by linking each
produced binary file to a specific Git commit, embedding the Git commit hash
(SHA) into the binary file via HDL generics stored in firmware registers. Hog
is released twice a year, in January and in June. We present here the latest
stable version 2023.1, which introduces major novel features, such as the
support for Microchip Libero IDE, and the capability to run the Hog Continuous
Integration (Hog-CI) workflow with GitHub Actions. A plan to integrate Hog with
the OpenCores repository is also described, which is expected to be completed
for Hog release 2023.2.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 13:47:27 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Apr 2023 12:12:51 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Biesuz",
"Nicolo Valdi",
""
],
[
"Caballero",
"Rimsky",
""
],
[
"Cieri",
"Davide",
""
],
[
"Giangiacomi",
"Nico",
""
],
[
"Gonnella",
"Francesco",
""
],
[
"De Linares",
"Guillermo Loustau",
""
],
[
"Peck",
"Andrew",
""
]
] |
new_dataset
| 0.984277 |
2304.05470
|
Faraz Zaidi
|
Mohammed Adil Saleem and Faraz Zaidi and Celine Rozenblat
|
World City Networks and Multinational Firms: An Analysis of Economic
Ties Over a Decade
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
One perspective to view the economic development of cities is through the
presence of multinational firms; how subsidiaries of various organizations are
set up throughout the globe and how cities are connected to each other through
these networks of multinational firms. Analysis of these networks can reveal
interesting economic and spatial trends, as well as help us understand the
importance of cities in national and regional economic development. This paper
aims to study networks of cities formed due to the linkages of multinational
firms over a decade (from 2010 to 2019). More specifically we are interested in
analyzing the growth and stability of various cities in terms of the
connections they form with other cities over time. Our results can be
summarized into two key findings: First, we ascertain the central position of
several cities due to their economically stable connections; Second, we
successfully identify cities that have evolved over the past decade as the
presence of multinational firms has increased in these cities.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 19:38:06 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Saleem",
"Mohammed Adil",
""
],
[
"Zaidi",
"Faraz",
""
],
[
"Rozenblat",
"Celine",
""
]
] |
new_dataset
| 0.972948 |
2304.05512
|
Taner Arsan
|
Taner Arsan, Sehnaz Sismanoglu Simsek, Onder Pekcan
|
Mathematical and Linguistic Characterization of Orhan Pamuk's Nobel
Works
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this study, Nobel Laureate Orhan Pamuk's works are chosen as examples of
Turkish literature. By counting the number of letters and words in his texts,
we find it possible to study his works statistically. It has been known that
there is a geometrical order in text structures. Here the method based on the
basic assumption of fractal geometry is introduced for calculating the fractal
dimensions of Pamuk's texts. The results are compared with the applications of
Zipf's law, which is successfully applied for letters and words, where two
concepts, namely Zipf's dimension and Zipf's order, are introduced. The Zipf
dimension of the novel My Name is Red is found to be markedly different from
that of his other novels. However, it is linguistically observed that there is no
fundamental difference between his corpora. The results are interpreted in
terms of fractal dimensions and the Turkish language.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 21:37:50 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Arsan",
"Taner",
""
],
[
"Simsek",
"Sehnaz Sismanoglu",
""
],
[
"Pekcan",
"Onder",
""
]
] |
new_dataset
| 0.994041 |
2304.05523
|
Rakesh Chada
|
Rakesh Chada, Zhaoheng Zheng, Pradeep Natarajan
|
MoMo: A shared encoder Model for text, image and multi-Modal
representations
| null | null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a self-supervised shared encoder model that achieves strong
results on several visual, language and multimodal benchmarks while being data,
memory and run-time efficient. We make three key contributions. First, in
contrast to most existing works, we use a single transformer with all the
encoder layers processing both the text and the image modalities. Second, we
propose a stage-wise training strategy where the model is first trained on
images, then jointly with unimodal text and image datasets and finally jointly
with text and text-image datasets. Third, to preserve information across both
the modalities, we propose a training pipeline that learns simultaneously from
gradient updates of different modalities at each training update step. The
results on downstream text-only, image-only and multimodal tasks show that our
model is competitive with several strong models while using fewer parameters
and less pre-training data. For example, MoMo performs competitively with
FLAVA on multimodal (+3.1), image-only (+1.1) and text-only (-0.1) tasks
despite having 2/5th the number of parameters and using 1/3rd the image-text
training pairs. Finally, we ablate various design choices and further show that
increasing model size produces significant performance gains indicating
potential for substantial improvements with larger models using our approach.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 22:26:10 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Chada",
"Rakesh",
""
],
[
"Zheng",
"Zhaoheng",
""
],
[
"Natarajan",
"Pradeep",
""
]
] |
new_dataset
| 0.970085 |
2304.05552
|
Zhihao Lin
|
Zhihao Lin, Yongtao Wang, Jinhe Zhang, Xiaojie Chu
|
DynamicDet: A Unified Dynamic Architecture for Object Detection
|
Accepted by CVPR 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic neural network is an emerging research topic in deep learning. With
adaptive inference, dynamic models can achieve remarkable accuracy and
computational efficiency. However, it is challenging to design a powerful
dynamic detector, because no suitable dynamic architecture or exiting
criterion exists for object detection. To tackle these difficulties, we propose a
dynamic framework for object detection, named DynamicDet. Firstly, we carefully
design a dynamic architecture based on the nature of the object detection task.
Then, we propose an adaptive router to analyze the multi-scale information and
to decide the inference route automatically. We also present a novel
optimization strategy with an exiting criterion based on the detection losses
for our dynamic detectors. Last, we present a variable-speed inference
strategy, which helps to realize a wide range of accuracy-speed trade-offs with
only one dynamic detector. Extensive experiments conducted on the COCO
benchmark demonstrate that the proposed DynamicDet achieves new
state-of-the-art accuracy-speed trade-offs. For instance, with comparable
accuracy, the inference speed of our dynamic detector Dy-YOLOv7-W6 surpasses
YOLOv7-E6 by 12%, YOLOv7-D6 by 17%, and YOLOv7-E6E by 39%. The code is
available at https://github.com/VDIGPKU/DynamicDet.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 01:16:53 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Lin",
"Zhihao",
""
],
[
"Wang",
"Yongtao",
""
],
[
"Zhang",
"Jinhe",
""
],
[
"Chu",
"Xiaojie",
""
]
] |
new_dataset
| 0.996584 |
2304.05611
|
Behrooz Mansouri
|
Behrooz Mansouri, Ricardo Campos
|
FALQU: Finding Answers to Legal Questions
|
4 pages, 1 figure, 2 tables
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new test collection for Legal IR, FALQU: Finding
Answers to Legal Questions, where questions and answers were obtained from Law
Stack Exchange (LawSE), a Q&A website for legal professionals, and others with
experience in law. Much in line with Stack Overflow, Law Stack Exchange has a
variety of questions on different topics such as copyright, intellectual
property, and criminal law, making it an interesting source for dataset
construction. Questions are also not limited to one country: users of
different nationalities often ask questions about laws in different countries and
areas of expertise. Therefore, questions in FALQU represent real-world users'
information needs, thus helping to avoid lab-generated questions. Answers, on the
other hand, are given by experts in the field. FALQU is the first test
collection, to the best of our knowledge, to use LawSE, considering more
diverse questions than the questions from the standard legal bar and judicial
exams. It contains 9880 questions and 34,145 answers to legal questions.
Alongside our new test collection, we provide different baseline systems that
include traditional information retrieval models such as TF-IDF and BM25, and
deep neural network search models. Among these baselines, the BM25 model
achieved the highest effectiveness.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 05:03:59 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Mansouri",
"Behrooz",
""
],
[
"Campos",
"Ricardo",
""
]
] |
new_dataset
| 0.999783 |
2304.05617
|
Deyun Lyu
|
Deyun Lyu, Jiayang Song, Zhenya Zhang, Zhijie Wang, Tianyi Zhang, Lei
Ma, Jianjun Zhao
|
AutoRepair: Automated Repair for AI-Enabled Cyber-Physical Systems under
Safety-Critical Conditions
| null | null | null | null |
cs.SE cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cyber-Physical Systems (CPS) have been widely deployed in safety-critical
domains such as transportation, power, and energy. Recently, there has been an
increasing demand for employing deep neural networks (DNNs) in CPS for more
intelligent control and decision making in sophisticated industrial
safety-critical conditions, giving birth to the class of DNN controllers.
However, due to the inherent uncertainty and opaqueness of DNNs, concerns about
the safety of DNN-enabled CPS are also surging. In this work, we propose an
automated framework named AutoRepair that, given a safety requirement,
identifies unsafe control behaviors in a DNN controller and repairs them through
an optimization-based method. Given an unsafe signal of system execution,
AutoRepair iteratively explores the control decision space and searches for the
optimal corrections for the DNN controller in order to satisfy the safety
requirements. We conduct a comprehensive evaluation of AutoRepair on 6
instances of industry-level DNN-enabled CPS from different safety-critical
domains. Evaluation results show that AutoRepair successfully repairs critical
safety issues in the DNN controllers, and significantly improves the
reliability of CPS.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 05:25:45 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Lyu",
"Deyun",
""
],
[
"Song",
"Jiayang",
""
],
[
"Zhang",
"Zhenya",
""
],
[
"Wang",
"Zhijie",
""
],
[
"Zhang",
"Tianyi",
""
],
[
"Ma",
"Lei",
""
],
[
"Zhao",
"Jianjun",
""
]
] |
new_dataset
| 0.974411 |
2304.05619
|
Chi-En Tai
|
Chi-en Amy Tai, Matthew Keller, Mattie Kerrigan, Yuhao Chen, Saeejith
Nair, Pengcheng Xi, Alexander Wong
|
NutritionVerse-3D: A 3D Food Model Dataset for Nutritional Intake
Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
77% of adults over 50 want to age in place today, presenting a major
challenge to ensuring adequate nutritional intake. It has been reported that
one in four adults aged 65 years or older is malnourished, and given
the direct link between malnutrition and decreased quality of life, numerous
studies have been conducted on how to efficiently track nutritional intake
of food. Recent advancements in machine learning and computer vision show
promise for automated nutrition tracking of food, but such methods require a large
high-quality dataset in order to accurately identify the nutrients from the
food on the plate. Unlike existing datasets, a collection of 3D models with
nutritional information allows for view synthesis to create an infinite number
of 2D images for any given viewpoint/camera angle along with the associated
nutritional information. In this paper, we develop a methodology for collecting
high-quality 3D models for food items with a particular focus on speed and
consistency, and introduce NutritionVerse-3D, a large-scale high-quality
high-resolution dataset of 105 3D food models, in conjunction with their
associated weight, food name, and nutritional value. These models allow for
large quantity food intake scenes, diverse and customizable scene layout, and
an infinite number of camera settings and lighting conditions.
NutritionVerse-3D is publicly available as a part of an open initiative to
accelerate machine learning for nutrition sensing.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 05:27:30 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Tai",
"Chi-en Amy",
""
],
[
"Keller",
"Matthew",
""
],
[
"Kerrigan",
"Mattie",
""
],
[
"Chen",
"Yuhao",
""
],
[
"Nair",
"Saeejith",
""
],
[
"Xi",
"Pengcheng",
""
],
[
"Wong",
"Alexander",
""
]
] |
new_dataset
| 0.99983 |
2304.05634
|
Dhruv Srivastava
|
Dhruv Srivastava and Aditya Kumar Singh and Makarand Tapaswi
|
How you feelin'? Learning Emotions and Mental States in Movie Scenes
|
CVPR 2023. Project Page: https://katha-ai.github.io/projects/emotx/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Movie story analysis requires understanding characters' emotions and mental
states. Towards this goal, we formulate emotion understanding as predicting a
diverse and multi-label set of emotions at the level of a movie scene and for
each character. We propose EmoTx, a multimodal Transformer-based architecture
that ingests videos, multiple characters, and dialog utterances to make joint
predictions. By leveraging annotations from the MovieGraphs dataset, we aim to
predict classic emotions (e.g. happy, angry) and other mental states (e.g.
honest, helpful). We conduct experiments on the most frequently occurring 10
and 25 labels, and a mapping that clusters 181 labels to 26. Ablation studies
and comparison against adapted state-of-the-art emotion recognition approaches
shows the effectiveness of EmoTx. Analyzing EmoTx's self-attention scores
reveals that expressive emotions often look at character tokens while other
mental states rely on video and dialog cues.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 06:31:14 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Srivastava",
"Dhruv",
""
],
[
"Singh",
"Aditya Kumar",
""
],
[
"Tapaswi",
"Makarand",
""
]
] |
new_dataset
| 0.99927 |
2304.05645
|
Zhenxiang Lin
|
Zhenxiang Lin, Xidong Peng, Peishan Cong, Yuenan Hou, Xinge Zhu, Sibei
Yang, Yuexin Ma
|
WildRefer: 3D Object Localization in Large-scale Dynamic Scenes with
Multi-modal Visual Data and Natural Language
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the task of 3D visual grounding in large-scale dynamic scenes
based on natural linguistic descriptions and online captured multi-modal visual
data, including 2D images and 3D LiDAR point clouds. We present a novel method,
WildRefer, for this task by fully utilizing the appearance features in images,
the location and geometry features in point clouds, and the dynamic features in
consecutive input frames to match the semantic features in language. In
particular, we propose two novel datasets, STRefer and LifeRefer, which focus
on large-scale human-centric daily-life scenarios with abundant 3D object and
natural language annotations. Our datasets are significant for research on
3D visual grounding in the wild and have huge potential to boost the development
of autonomous driving and service robots. Extensive comparisons and ablation
studies illustrate that our method achieves state-of-the-art performance on two
proposed datasets. Code and dataset will be released when the paper is
published.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 06:48:26 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Lin",
"Zhenxiang",
""
],
[
"Peng",
"Xidong",
""
],
[
"Cong",
"Peishan",
""
],
[
"Hou",
"Yuenan",
""
],
[
"Zhu",
"Xinge",
""
],
[
"Yang",
"Sibei",
""
],
[
"Ma",
"Yuexin",
""
]
] |
new_dataset
| 0.999237 |
2304.05646
|
Risheng Liu
|
Zhiying Jiang, Zengxi Zhang, Jinyuan Liu, Xin Fan, Risheng Liu
|
Modality-Invariant Representation for Infrared and Visible Image
Registration
|
10 pages, 11 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Owing to differences in viewing range, resolution, and relative position, the
multi-modality sensing module composed of infrared and visible cameras needs to
be registered so as to have more accurate scene perception. In practice, manual
calibration-based registration is the most widely used process, and it is
regularly calibrated to maintain accuracy, which is time-consuming and
labor-intensive. To cope with these problems, we propose a scene-adaptive
infrared and visible image registration method. Specifically, to address the
discrepancy between multi-modality images, an invertible translation process is
developed to establish a modality-invariant domain, which comprehensively
embraces the feature intensity and distribution of both infrared and visible
modalities. We employ homography to simulate the deformation between different
planes and develop a hierarchical framework to rectify the deformation inferred
from the proposed latent representation in a coarse-to-fine manner. In this
framework, the advanced perception ability, coupled with residual estimation,
is conducive to the regression of sparse offsets, and the alternate correlation
search facilitates more accurate correspondence matching. Moreover, we propose the
first misaligned infrared and visible image dataset with available ground truth,
involving three synthetic sets and one real-world set. Extensive experiments
validate the effectiveness of the proposed method against the
state of the art, advancing subsequent applications.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 06:49:56 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Jiang",
"Zhiying",
""
],
[
"Zhang",
"Zengxi",
""
],
[
"Liu",
"Jinyuan",
""
],
[
"Fan",
"Xin",
""
],
[
"Liu",
"Risheng",
""
]
] |
new_dataset
| 0.989695 |
2304.05667
|
Xinpeng Li
|
Xinpeng Li and Xiaojiang Peng
|
Rail Detection: An Efficient Row-based Network and A New Benchmark
|
Accepted by ACMMM 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Rail detection, essential for railroad anomaly detection, aims to identify
the railroad region in video frames. Although various studies on rail detection
exist, neither an open benchmark nor a high-speed network is available in the
community, making algorithm comparison and development difficult. Inspired by
the growth of lane detection, we propose a rail database and a row-based rail
detection method. In detail, we make several contributions: (i) We present a
real-world railway dataset, Rail-DB, with 7432 pairs of images and annotations.
The images are collected from different situations in lighting, road
structures, and views. The rails are labeled with polylines, and the images are
categorized into nine scenes. The Rail-DB is expected to facilitate the
improvement of rail detection algorithms. (ii) We present an efficient
row-based rail detection method, Rail-Net, containing a lightweight
convolutional backbone and an anchor classifier. Specifically, we formulate the
process of rail detection as a row-based selecting problem. This strategy
reduces the computational cost compared to alternative segmentation methods.
(iii) We evaluate the Rail-Net on Rail-DB with extensive experiments, including
cross-scene settings and network backbones ranging from ResNet to Vision
Transformers. Our method achieves promising performance in terms of both speed
and accuracy. Notably, a lightweight version could achieve 92.77% accuracy and
312 frames per second. The Rail-Net outperforms the traditional method by
50.65% and the segmentation one by 5.86%. The database and code are available
at: https://github.com/Sampson-Lee/Rail-Detection.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 07:44:50 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Li",
"Xinpeng",
""
],
[
"Peng",
"Xiaojiang",
""
]
] |
new_dataset
| 0.999004 |
2304.05719
|
Huu Nghia Nguyen
|
Zujany Salazar, Huu Nghia Nguyen, Wissam Mallouli, Ana R Cavalli,
Edgardo Montes de Oca
|
5Greplay: a 5G Network Traffic Fuzzer -- Application to Attack Injection
| null |
ARES 2021: The 16th International Conference on Availability,
Reliability and Security, Aug 2021, Vienna, Austria. pp.1-8
|
10.1145/3465481.3470079
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The fifth generation of mobile broadband is more than just an evolution to
provide more mobile bandwidth, massive machine-type communications, and
ultra-reliable and low-latency communications. It relies on a complex, dynamic
and heterogeneous environment that implies addressing numerous testing and
security challenges. In this paper, we present 5Greplay, an open-source 5G
network traffic fuzzer that enables the evaluation of 5G components by
replaying and modifying 5G network traffic, creating and injecting network
scenarios into a target that can be a 5G core service (e.g., AMF, SMF) or a RAN
network (e.g., gNodeB). The tool provides the ability to alter network packets
online or offline in both control and data planes in a very flexible manner.
The experimental evaluation conducted against open-source 5G platforms
showed that the target services accept traffic altered by the tool, and
that it can reach up to 9.56 Gbps using only 1 processor core to replay 5G
traffic.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 09:20:56 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Salazar",
"Zujany",
""
],
[
"Nguyen",
"Huu Nghia",
""
],
[
"Mallouli",
"Wissam",
""
],
[
"Cavalli",
"Ana R",
""
],
[
"de Oca",
"Edgardo Montes",
""
]
] |
new_dataset
| 0.996628 |
2304.05772
|
Nicolas Chahine
|
Nicolas Chahine, Ana-Stefania Calarasanu, Davide Garcia-Civiero, Theo
Cayla, Sira Ferradans, Jean Ponce (NYU)
|
An Image Quality Assessment Dataset for Portraits
|
Conference on Computer Vision and Pattern Recognition 2023, IEEE/CVF,
Jun 2023, Vancouver, Canada
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Year after year, the demand for ever-better smartphone photos continues to
grow, in particular in the domain of portrait photography. Manufacturers thus
use perceptual quality criteria throughout the development of smartphone
cameras. This costly procedure can be partially replaced by automated
learning-based methods for image quality assessment (IQA). Due to its
subjective nature, it is necessary to estimate and guarantee the consistency of
the IQA process, a characteristic lacking in the mean opinion scores (MOS)
widely used for crowdsourcing IQA. In addition, existing blind IQA (BIQA)
datasets pay little attention to the difficulty of cross-content assessment,
which may degrade the quality of annotations. This paper introduces PIQ23, a
portrait-specific IQA dataset of 5116 images of 50 predefined scenarios
acquired by 100 smartphones, covering a high variety of brands, models, and use
cases. The dataset includes individuals of various genders and ethnicities who
have given explicit and informed consent for their photographs to be used in
public research. It is annotated by pairwise comparisons (PWC) collected from
over 30 image quality experts for three image attributes: face detail
preservation, face target exposure, and overall image quality. An in-depth
statistical analysis of these annotations allows us to evaluate their
consistency over PIQ23. Finally, we show through an extensive comparison with
existing baselines that semantic information (image context) can be used to
improve IQA predictions. The dataset, along with the proposed statistical
analysis and BIQA algorithms, is available at:
https://github.com/DXOMARK-Research/PIQ2023
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 11:30:06 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Chahine",
"Nicolas",
"",
"NYU"
],
[
"Calarasanu",
"Ana-Stefania",
"",
"NYU"
],
[
"Garcia-Civiero",
"Davide",
"",
"NYU"
],
[
"Cayla",
"Theo",
"",
"NYU"
],
[
"Ferradans",
"Sira",
"",
"NYU"
],
[
"Ponce",
"Jean",
"",
"NYU"
]
] |
new_dataset
| 0.993855 |
2304.05804
|
Yuchen Zhao
|
Yuchen Zhao and Yifan Wang
|
A Palm-Shape Variable-Stiffness Gripper based on 3D-Printed Fabric
Jamming
|
8 pages, 7 figures
| null |
10.1109/LRA.2023.3266667
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Soft grippers have excellent adaptability for a variety of objects and tasks.
Jamming-based variable stiffness materials can further increase soft grippers'
gripping force and capacity. Previous universal grippers enabled by granular
jamming have shown great capability of handling objects with various shapes and
weights. However, they require a large pushing force on the object during
gripping, which is not suitable for very soft or free-hanging objects. In this
paper, we create a novel palm-shape anthropomorphic variable-stiffness gripper
enabled by jamming of 3D printed fabrics. This gripper is conformable and
gentle to objects with different shapes, requires little pushing force, and
increases gripping strength only when necessary. We present the design,
fabrication, and performance of this gripper and test its conformability and
gripping capacity. Our design utilizes soft pneumatic actuators to drive two
wide palms to enclose objects, thanks to the excellent conformability of the
structured fabrics. While the pinch force is low, the palm can significantly
increase stiffness to lift heavy objects with a maximum gripping force of
$17\,$N and grip-to-pinch force ratio of $42$. We also explore different
variable-stiffness materials in the gripper, including sheets for layer
jamming, to compare their performances. We conduct gripping tests on standard
objects and daily items to show the great capacity of our gripper design.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 12:29:41 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Zhao",
"Yuchen",
""
],
[
"Wang",
"Yifan",
""
]
] |
new_dataset
| 0.998196 |
2304.05868
|
Alexey Bokhovkin
|
Alexey Bokhovkin, Shubham Tulsiani, Angela Dai
|
Mesh2Tex: Generating Mesh Textures from Image Queries
|
https://alexeybokhovkin.github.io/mesh2tex/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Remarkable advances have been achieved recently in learning neural
representations that characterize object geometry, while generating textured
objects suitable for downstream applications and 3D rendering remains at an
early stage. In particular, reconstructing textured geometry from images of
real objects is a significant challenge -- reconstructed geometry is often
inexact, making realistic texturing a significant challenge. We present
Mesh2Tex, which learns a realistic object texture manifold from uncorrelated
collections of 3D object geometry and photorealistic RGB images, by leveraging
a hybrid mesh-neural-field texture representation. Our texture representation
enables compact encoding of high-resolution textures as a neural field in the
barycentric coordinate system of the mesh faces. The learned texture manifold
enables effective navigation to generate an object texture for a given 3D
object geometry that matches an input RGB image, and it maintains robustness
even under challenging real-world scenarios where the mesh geometry
is only an inexact approximation of the underlying geometry in the RGB image.
Mesh2Tex can effectively generate realistic object textures for an object mesh
to match real image observations, towards digitization of real environments,
significantly improving over previous state of the art.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 13:58:25 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Bokhovkin",
"Alexey",
""
],
[
"Tulsiani",
"Shubham",
""
],
[
"Dai",
"Angela",
""
]
] |
new_dataset
| 0.998802 |
2304.05930
|
Rezaul Karim
|
Rezaul Karim, He Zhao, Richard P. Wildes, Mennatullah Siam
|
MED-VT: Multiscale Encoder-Decoder Video Transformer with Application to
Object Segmentation
|
Accepted in CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiscale video transformers have been explored in a wide variety of vision
tasks. To date, however, the multiscale processing has been confined to the
encoder or decoder alone. We present a unified multiscale encoder-decoder
transformer that is focused on dense prediction tasks in videos. Multiscale
representation at both encoder and decoder yields key benefits of implicit
extraction of spatiotemporal features (i.e. without reliance on input optical
flow) as well as temporal consistency at encoding and coarse-to-fine detection
for high-level (e.g. object) semantics to guide precise localization at
decoding. Moreover, we propose a transductive learning scheme through
many-to-many label propagation to provide temporally consistent predictions. We
showcase our Multiscale Encoder-Decoder Video Transformer (MED-VT) on Automatic
Video Object Segmentation (AVOS) and actor/action segmentation, where we
outperform state-of-the-art approaches on multiple benchmarks using only raw
images, without using optical flow.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 15:50:19 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Karim",
"Rezaul",
""
],
[
"Zhao",
"He",
""
],
[
"Wildes",
"Richard P.",
""
],
[
"Siam",
"Mennatullah",
""
]
] |
new_dataset
| 0.989639 |
2304.05956
|
Federico Cunico
|
Federico Cunico, Federico Girella, Andrea Avogaro, Marco Emporio,
Andrea Giachetti and Marco Cristani
|
OO-dMVMT: A Deep Multi-view Multi-task Classification Framework for
Real-time 3D Hand Gesture Classification and Segmentation
|
Accepted to the Computer Vision for Mixed Reality workshop at CVPR
2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Continuous mid-air hand gesture recognition based on captured hand pose
streams is fundamental for human-computer interaction, particularly in AR / VR.
However, many of the methods proposed to recognize heterogeneous hand gestures
are tested only on the classification task, and the real-time low-latency
gesture segmentation in a continuous stream is not well addressed in the
literature. For this task, we propose the On-Off deep Multi-View Multi-Task
paradigm (OO-dMVMT). The idea is to exploit multiple time-local views related
to hand pose and movement to generate rich gesture descriptions, along with
using heterogeneous tasks to achieve high accuracy. OO-dMVMT extends the
classical MVMT paradigm, in which all of the multiple tasks have to be active at
all times, by allowing specific tasks to switch on/off depending on whether
they can apply to the input. We show that OO-dMVMT defines the new SotA on
continuous/online 3D skeleton-based gesture recognition in terms of gesture
classification accuracy, segmentation accuracy, false positives, and decision
latency while maintaining real-time operation.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 16:28:29 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Cunico",
"Federico",
""
],
[
"Girella",
"Federico",
""
],
[
"Avogaro",
"Andrea",
""
],
[
"Emporio",
"Marco",
""
],
[
"Giachetti",
"Andrea",
""
],
[
"Cristani",
"Marco",
""
]
] |
new_dataset
| 0.988975 |
2304.06013
|
Ertugrul Basar
|
Ertugrul Basar
|
Reconfigurable Intelligent Surface-Empowered MIMO Systems
|
4 pages, to appear in National Science Review
| null |
10.1093/nsr/nwad096
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Reconfigurable intelligent surface (RIS)-empowered communication stands out
as a solid candidate for future wireless networks due to its flexibility, ease
of deployment, and attractive advantages to control the wireless propagation
environment. In this perspective article, a brief overview is presented
considering the application of reconfigurable intelligent surfaces for future
multiple-input multiple-output (MIMO) systems. Potential future research
directions are also highlighted.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 17:50:43 GMT"
}
] | 2023-04-13T00:00:00 |
[
[
"Basar",
"Ertugrul",
""
]
] |
new_dataset
| 0.996115 |
2111.09450
|
Darren Tsai
|
Darren Tsai and Julie Stephany Berrio and Mao Shan and Stewart Worrall
and Eduardo Nebot
|
See Eye to Eye: A Lidar-Agnostic 3D Detection Framework for Unsupervised
Multi-Target Domain Adaptation
|
Published in RAL and presented in IROS 2022. Code is available at
https://github.com/darrenjkt/SEE-MTDA
|
IEEE Robotics and Automation Letters (2022)
|
10.1109/LRA.2022.3185783
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sampling discrepancies between different manufacturers and models of lidar
sensors result in inconsistent representations of objects. This leads to
performance degradation when 3D detectors trained for one lidar are tested on
other types of lidars. Remarkable progress in lidar manufacturing has brought
about advances in mechanical, solid-state, and recently, adjustable scan
pattern lidars. For the latter, existing works often require fine-tuning the
model each time scan patterns are adjusted, which is infeasible. We explicitly
deal with the sampling discrepancy by proposing a novel unsupervised
multi-target domain adaptation framework, SEE, for transferring the performance
of state-of-the-art 3D detectors across both fixed and flexible scan pattern
lidars without requiring fine-tuning of models by end-users. Our approach
interpolates the underlying geometry and normalizes the scan pattern of objects
from different lidars before passing them to the detection network. We
demonstrate the effectiveness of SEE on public datasets, achieving
state-of-the-art results, and additionally provide quantitative results on a
novel high-resolution lidar to prove the industry applications of our
framework.
|
[
{
"version": "v1",
"created": "Wed, 17 Nov 2021 23:46:47 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 21:32:35 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Tsai",
"Darren",
""
],
[
"Berrio",
"Julie Stephany",
""
],
[
"Shan",
"Mao",
""
],
[
"Worrall",
"Stewart",
""
],
[
"Nebot",
"Eduardo",
""
]
] |
new_dataset
| 0.998405 |
2202.09799
|
Masayuki Tezuka
|
Masayuki Tezuka, Keisuke Tanaka
|
Redactable Signature with Compactness from Set-Commitment
| null |
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications
and Computer Sciences Vol.E104-A No.9 September 2021
|
10.1587/transfun.2020DMP0013
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Redactable signature allows anyone to remove parts of a signed message
without invalidating the signature. The need to prove the validity of digital
documents issued by governments is increasing. When governments disclose
documents, they must remove private information concerning individuals.
Redactable signature is useful for such a situation. However, in most
redactable signature schemes, to remove parts of the signed message, we need
pieces of information for each part we want to remove. If a signed message
consists of l elements, the number of elements in an original signature is at
least linear in l. As far as we know, in some redactable signature schemes, the
number of elements in an original signature is constant, regardless of the
number of elements in a message to be signed. However, these constructions have
drawbacks in that they rely on the random oracle model or the generic group model. In
this paper, we construct an efficient redactable signature to overcome these
drawbacks. Our redactable signature is obtained by combining set-commitment
proposed in the recent work by Fuchsbauer et al. (JoC 2019) and digital
signatures.
|
[
{
"version": "v1",
"created": "Sun, 20 Feb 2022 11:49:37 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Tezuka",
"Masayuki",
""
],
[
"Tanaka",
"Keisuke",
""
]
] |
new_dataset
| 0.999602 |
2203.07488
|
Emilio Ferrara
|
Emily Chen, Emilio Ferrara
|
Tweets in Time of Conflict: A Public Dataset Tracking the Twitter
Discourse on the War Between Ukraine and Russia
|
Dataset at https://github.com/echen102/ukraine-russia
| null | null | null |
cs.SI cs.CY cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
On February 24, 2022, Russia invaded Ukraine. In the days that followed,
reports kept flooding in from laypeople to news anchors of a conflict quickly
escalating into war. Russia faced immediate backlash and condemnation from the
world at large. While the war continues to contribute to an ongoing
humanitarian and refugee crisis in Ukraine, a second battlefield has emerged in
the online space, both in the use of social media to garner support for both
sides of the conflict and also in the context of information warfare. In this
paper, we present a collection of over 63 million tweets, from February 22,
2022 through March 8, 2022 that we are publishing for the wider research
community to use. This dataset can be found at
https://github.com/echen102/ukraine-russia and will be maintained and regularly
updated as the war continues to unfold. Our preliminary analysis already shows
evidence of public engagement with Russian state sponsored media and other
domains that are known to push unreliable information; the former saw a spike
in activity on the day of the Russian invasion. Our hope is that this public
dataset can help the research community to further understand the ever evolving
role that social media plays in information dissemination, influence campaigns,
grassroots mobilization, and much more, during a time of conflict.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 20:52:02 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 19:11:55 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Chen",
"Emily",
""
],
[
"Ferrara",
"Emilio",
""
]
] |
new_dataset
| 0.999923 |
2204.03883
|
Yuda Song
|
Yuda Song, Zhuqing He, Hui Qian, Xin Du
|
Vision Transformers for Single Image Dehazing
| null | null |
10.1109/TIP.2023.3256763
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image dehazing is a representative low-level vision task that estimates
latent haze-free images from hazy images. In recent years, convolutional neural
network-based methods have dominated image dehazing. However, vision
Transformers, which have recently made breakthroughs in high-level vision
tasks, have not brought new dimensions to image dehazing. We start with the
popular Swin Transformer and find that several of its key designs are
unsuitable for image dehazing. To this end, we propose DehazeFormer, which
consists of various improvements, such as the modified normalization layer,
activation function, and spatial information aggregation scheme. We train
multiple variants of DehazeFormer on various datasets to demonstrate its
effectiveness. Specifically, on the most frequently used SOTS indoor set, our
small model outperforms FFA-Net with only 25% #Param and 5% computational cost.
To the best of our knowledge, our large model is the first method with the PSNR
over 40 dB on the SOTS indoor set, dramatically outperforming the previous
state-of-the-art methods. We also collect a large-scale realistic remote
sensing dehazing dataset for evaluating the method's capability to remove
highly non-homogeneous haze.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 07:17:20 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Song",
"Yuda",
""
],
[
"He",
"Zhuqing",
""
],
[
"Qian",
"Hui",
""
],
[
"Du",
"Xin",
""
]
] |
new_dataset
| 0.99376 |
2205.02717
|
Min Yang
|
Min Yang, Guo Chen, Yin-Dong Zheng, Tong Lu, Limin Wang
|
BasicTAD: an Astounding RGB-Only Baseline for Temporal Action Detection
|
Accepted by CVIU
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Temporal action detection (TAD) is extensively studied in the video
understanding community by generally following the object detection pipeline in
images. However, complex designs are not uncommon in TAD, such as two-stream
feature extraction, multi-stage training, complex temporal modeling, and global
context fusion. In this paper, we do not aim to introduce any novel technique
for TAD. Instead, we study a simple, straightforward, yet must-know baseline
given the current status of complex design and low detection efficiency in TAD.
In our simple baseline (termed BasicTAD), we decompose the TAD pipeline into
several essential components: data sampling, backbone design, neck
construction, and detection head. We extensively investigate the existing
techniques in each component for this baseline, and more importantly, perform
end-to-end training over the entire pipeline thanks to the simplicity of
design. As a result, this simple BasicTAD yields an astounding and real-time
RGB-Only baseline very close to the state-of-the-art methods with two-stream
inputs. In addition, we further improve the BasicTAD by preserving more
temporal and spatial information in network representation (termed as PlusTAD).
Empirical results demonstrate that our PlusTAD is very efficient and
significantly outperforms the previous methods on the datasets of THUMOS14 and
FineAction. Meanwhile, we also perform in-depth visualization and error
analysis on our proposed method and try to provide more insights on the TAD
problem. Our approach can serve as a strong baseline for future TAD research.
The code and model will be released at https://github.com/MCG-NJU/BasicTAD.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 15:42:56 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Nov 2022 06:38:26 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Apr 2023 14:57:34 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Yang",
"Min",
""
],
[
"Chen",
"Guo",
""
],
[
"Zheng",
"Yin-Dong",
""
],
[
"Lu",
"Tong",
""
],
[
"Wang",
"Limin",
""
]
] |
new_dataset
| 0.99678 |
2205.14430
|
Liang Zhou
|
Kaiyi Zhang, Liang Zhou, Lu Chen, Shitong He, Daniel Weiskopf, Yunhai
Wang
|
Angle-Uniform Parallel Coordinates
|
Computational Visual Media, 2023
| null |
10.1007/s41095-022-0291-7
| null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present angle-uniform parallel coordinates, a data-independent technique
that deforms the image plane of parallel coordinates so that the angles of
linear relationships between two variables are linearly mapped along the
horizontal axis of the parallel coordinates plot. Despite being a common method
for visualizing multidimensional data, parallel coordinates are ineffective for
revealing positive correlations since the associated parallel coordinates
points of such structures may be located at infinity in the image plane and the
asymmetric encoding of negative and positive correlations may lead to
unreliable estimations. To address this issue, we introduce a transformation
that bounds all points horizontally using an angle-uniform mapping and shrinks
them vertically in a structure-preserving fashion; polygonal lines become
smooth curves and a symmetric representation of data correlations is achieved.
We further propose a combined subsampling and density visualization approach to
reduce visual clutter caused by overdrawing. Our method enables accurate visual
pattern interpretation of data correlations, and its data-independent nature
makes it applicable to all multidimensional datasets. The usefulness of our
method is demonstrated using examples of synthetic and real-world datasets.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 13:24:37 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 07:02:06 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Zhang",
"Kaiyi",
""
],
[
"Zhou",
"Liang",
""
],
[
"Chen",
"Lu",
""
],
[
"He",
"Shitong",
""
],
[
"Weiskopf",
"Daniel",
""
],
[
"Wang",
"Yunhai",
""
]
] |
new_dataset
| 0.996041 |
2208.10267
|
Murat Altunbulak
|
Murat Altunbulak, Fatma Altunbulak Aksu
|
On the binary linear constant weight codes and their automorphism
groups
|
12 pages
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give a characterization for the binary linear constant weight codes by
using the symmetric difference of the supports of the codewords. This
characterization gives a correspondence between the set of binary linear
constant weight codes and the set of partitions for the union of supports of
the codewords. By using this correspondence, we present a formula for the order
of the automorphism group of a binary linear constant weight code in terms of
its parameters. This formula is a key step to determine more algebraic
structures on constant weight codes with given parameters. Bonisoli [Bonisoli,
A.: Every equidistant linear code is a sequence of dual Hamming codes. Ars
Combinatoria 18, 181--186 (1984)] proves that the $q$-ary linear constant
weight codes with the same parameters are equivalent (for the binary case
permutation equivalent). We also give an alternative proof for Bonisoli's
theorem by presenting an explicit permutation on symmetric difference of the
supports of the codewords which gives the permutation equivalence between the
binary linear constant weight codes.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 12:43:14 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 18:13:01 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Altunbulak",
"Murat",
""
],
[
"Aksu",
"Fatma Altunbulak",
""
]
] |
new_dataset
| 0.979265 |
2209.15182
|
Ruiqi Wang
|
Ruiqi Wang, Wonse Jo, Dezhong Zhao, Weizheng Wang, Baijian Yang,
Guohua Chen and Byung-Cheol Min
|
Husformer: A Multi-Modal Transformer for Multi-Modal Human State
Recognition
| null | null | null | null |
cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human state recognition is a critical topic with pervasive and important
applications in human-machine systems. Multi-modal fusion, the combination of
metrics from multiple data sources, has been shown as a sound method for
improving the recognition performance. However, while promising results have
been reported by recent multi-modal-based models, they generally fail to
leverage the sophisticated fusion strategies that would model sufficient
cross-modal interactions when producing the fusion representation; instead,
current methods rely on lengthy and inconsistent data preprocessing and feature
crafting. To address this limitation, we propose an end-to-end multi-modal
transformer framework for multi-modal human state recognition called Husformer.
Specifically, we propose to use cross-modal transformers, which inspire one
modality to reinforce itself through directly attending to latent relevance
revealed in other modalities, to fuse different modalities while ensuring
sufficient awareness of the cross-modal interactions introduced. Subsequently,
we utilize a self-attention transformer to further prioritize contextual
information in the fusion representation. Using two such attention mechanisms
enables effective and adaptive adjustments to noise and interruptions in
multi-modal signals during the fusion process and in relation to high-level
features. Extensive experiments on two human emotion corpora (DEAP and WESAD)
and two cognitive workload datasets (MOCAS and CogLoad) demonstrate that in the
recognition of human state, our Husformer outperforms both state-of-the-art
multi-modal baselines and the use of a single modality by a large margin,
especially when dealing with raw multi-modal signals. We also conducted an
ablation study to show the benefits of each component in Husformer.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 02:11:27 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 03:48:45 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Wang",
"Ruiqi",
""
],
[
"Jo",
"Wonse",
""
],
[
"Zhao",
"Dezhong",
""
],
[
"Wang",
"Weizheng",
""
],
[
"Yang",
"Baijian",
""
],
[
"Chen",
"Guohua",
""
],
[
"Min",
"Byung-Cheol",
""
]
] |
new_dataset
| 0.996127 |
2210.11928
|
Qin Wang
|
Shange Fu, Qin Wang, Jiangshan Yu, Shiping Chen
|
Rational Ponzi Games in Algorithmic Stablecoin
|
Accepted by CryptoEx@ICBC 2023
| null | null | null |
cs.GT cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Algorithmic stablecoins (AS) are a special type of stablecoin that is not
backed by any asset (equiv. without collateral). They stand to revolutionize
the way a sovereign fiat operates. As implemented, these coins are poorly
stabilized in most cases, easily deviating from the price target or even
falling into a catastrophic collapse (a.k.a. Death spiral), and are as a result
dismissed as a Ponzi scheme. However, is this the whole picture? In this paper,
we try to reveal the truth and clarify such a deceptive concept. We find that
Ponzi is basically a financial protocol that pays existing investors with funds
collected from new ones. Running a Ponzi, however, does not necessarily imply
that any participant is in any sense losing out, as long as the game can be
perpetually rolled over. Economists call such realization as a \textit{rational
Ponzi game}. We thereby propose a rational model in the context of AS and draw
its holding conditions. We apply the model to examine: \textit{whether or not
the algorithmic stablecoin is a rational Ponzi game.} Accordingly, we discuss
two types of algorithmic stablecoins (\text{Rebase} \& \text{Seigniorage
shares}) and dig into the historical market performance of two impactful
projects (\text{Ampleforth} \& \text{TerraUSD}, respectively) to demonstrate
the effectiveness of our model.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 13:00:46 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 16:15:26 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Fu",
"Shange",
""
],
[
"Wang",
"Qin",
""
],
[
"Yu",
"Jiangshan",
""
],
[
"Chen",
"Shiping",
""
]
] |
new_dataset
| 0.998631 |
2210.13634
|
Alberto Tono
|
Alberto Tono and Heyaojing Huang and Ashwin Agrawal and Martin Fischer
|
Vitruvio: 3D Building Meshes via Single Perspective Sketches
| null | null | null | null |
cs.CV cs.AI cs.GR cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today's architectural engineering and construction (AEC) software requires a
learning curve to generate a three-dimensional building representation. This
limits the ability to quickly validate the volumetric implications of an
initial design idea communicated via a single sketch. Allowing designers to
translate a single sketch to a 3D building would enable owners to instantly
visualize 3D project information without the cognitive load otherwise required.
While previous state-of-the-art (SOTA) data-driven methods for single-view
reconstruction (SVR) showed outstanding results in reconstructing from a single
image or sketch, they lacked specific applications, analysis, and experiments
in the AEC domain. Therefore, this research addresses this gap by introducing
the first deep learning method focused only on buildings that aims to convert a
single sketch to a 3D building mesh: Vitruvio. Vitruvio adapts
Occupancy Network for SVR tasks on a specific building dataset (Manhattan 1K).
This adaptation brings two main improvements. First, it accelerates the
inference process by more than 26% (from 0.5s to 0.37s). Second, it increases
the reconstruction accuracy (measured by the Chamfer Distance) by 18%. During
this adaptation in the AEC domain, we evaluate the effect of the building
orientation in the learning procedure since it constitutes an important design
factor. While aligning all the buildings to a canonical pose improved the
overall quantitative metrics, it did not capture fine-grain details in more
complex building shapes (as shown in our qualitative analysis). Finally,
Vitruvio outputs a 3D-printable building mesh with arbitrary topology and genus
from a single perspective sketch, providing a step forward to allow owners and
designers to communicate 3D information via a 2D, effective, intuitive, and
universal communication medium: the sketch.
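For readers unfamiliar with occupancy-based SVR, the sketch below shows the core interface such a model exposes: a decoder maps a 3D query point plus a latent code (here assumed to come from a sketch encoder) to an occupancy probability, and a mesh is then extracted from the resulting field. This is a generic sketch, not Vitruvio's actual architecture; layer sizes and names are assumptions.

```python
# Generic occupancy-decoder sketch (illustrative only; not Vitruvio's code).
import torch
import torch.nn as nn

class OccupancyDecoder(nn.Module):
    def __init__(self, latent_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        """points: (N, 3) query points; z: (latent_dim,) code from the sketch encoder."""
        z_rep = z.unsqueeze(0).expand(points.shape[0], -1)
        return torch.sigmoid(self.net(torch.cat([points, z_rep], dim=-1))).squeeze(-1)

decoder = OccupancyDecoder()
z = torch.randn(256)                     # latent code (would be produced from the 2D sketch)
grid = torch.rand(32 ** 3, 3) * 2 - 1    # query points in a normalized cube
occ = decoder(grid, z)                   # occupancy probabilities in [0, 1]
# A watertight mesh is then typically extracted from occ.reshape(32, 32, 32), e.g. with
# marching cubes, which is what makes the output 3D-printable with arbitrary topology.
```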
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 22:24:58 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 16:52:01 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Tono",
"Alberto",
""
],
[
"Huang",
"Heyaojing",
""
],
[
"Agrawal",
"Ashwin",
""
],
[
"Fischer",
"Martin",
""
]
] |
new_dataset
| 0.993581 |
2211.08459
|
Marco Eilers
|
Marco Eilers and Thibault Dardinier and Peter M\"uller
|
CommCSL: Proving Information Flow Security for Concurrent Programs using
Abstract Commutativity
| null | null | null | null |
cs.CR cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information flow security ensures that the secret data manipulated by a
program does not influence its observable output. Proving information flow
security is especially challenging for concurrent programs, where operations on
secret data may influence the execution time of a thread and, thereby, the
interleaving between different threads. Such internal timing channels may
affect the observable outcome of a program even if an attacker does not observe
execution times. Existing verification techniques for information flow security
in concurrent programs attempt to prove that secret data does not influence the
relative timing of threads. However, these techniques are often restrictive
(for instance because they disallow branching on secret data) and make strong
assumptions about the execution platform (ignoring caching, processor
instructions with data-dependent runtime, and other common features that affect
execution time). In this paper, we present a novel verification technique for
secure information flow in concurrent programs that lifts these restrictions
and does not make any assumptions about timing behavior. The key idea is to
prove that all mutating operations performed on shared data commute, such that
different thread interleavings do not influence its final value. Crucially,
commutativity is required only for an abstraction of the shared data that
contains the information that will be leaked to a public output. Abstract
commutativity is satisfied by many more operations than standard commutativity,
which makes our technique widely applicable. We formalize our technique in
CommCSL, a relational concurrent separation logic with support for
commutativity-based reasoning, and prove its soundness in Isabelle/HOL. We
implemented CommCSL in HyperViper, an automated verifier based on the Viper
verification infrastructure, and demonstrate its ability to verify challenging
examples.
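To make the notion of abstract commutativity concrete, here is a small illustrative example in Python (the paper itself works with concurrent separation logic, Isabelle/HOL, and Viper, not Python): two concrete append operations on a shared log do not commute, but they do commute under an abstraction that only keeps what is assumed to reach the public output, here the number of entries.

```python
# Toy illustration of abstract vs. concrete commutativity; names are hypothetical.

def append_a(log):            # mutation performed by thread 1
    return log + ["a"]

def append_b(log):            # mutation performed by thread 2
    return log + ["b"]

def alpha(log):               # abstraction: only the length is leaked to the public output
    return len(log)

order1 = append_b(append_a([]))   # schedule: thread 1 first -> ["a", "b"]
order2 = append_a(append_b([]))   # schedule: thread 2 first -> ["b", "a"]

assert order1 != order2                     # concrete commutativity fails
assert alpha(order1) == alpha(order2) == 2  # abstract commutativity holds, so the public
                                            # output does not depend on the interleaving
```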
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 19:24:31 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 17:57:04 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Eilers",
"Marco",
""
],
[
"Dardinier",
"Thibault",
""
],
[
"Müller",
"Peter",
""
]
] |
new_dataset
| 0.983443 |
2211.08703
|
Yongjie Chen
|
Yongjie Chen, Tieru Wu
|
SATVSR: Scenario Adaptive Transformer for Cross Scenarios Video
Super-Resolution
| null | null |
10.1088/1742-6596/2456/1/012028
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video Super-Resolution (VSR) aims to recover sequences of high-resolution
(HR) frames from low-resolution (LR) frames. Previous methods mainly utilize
temporally adjacent frames to assist the reconstruction of target frames.
However, in real-world videos with fast scene switching, adjacent frames
contain a lot of irrelevant information, and these VSR methods cannot
adaptively distinguish and select the useful information. In contrast, building
on a transformer structure suitable for temporal tasks, we devise a novel
scenario-adaptive video super-resolution method. Specifically, we use optical
flow to label the patches in each video frame and only calculate attention
among patches with the same label. We then select the most relevant label among
them to supplement the spatial-temporal information of the target frame. This design
can directly make the supplementary information come from the same scene as
much as possible. We further propose a cross-scale feature aggregation module
to better handle the scale variation problem. Compared with other video
super-resolution methods, our method not only achieves significant performance
gains on single-scene videos but also has better robustness on cross-scene
datasets.
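A hedged sketch of the label-restricted attention idea (not the authors' implementation; the optical-flow labelling step is omitted and patch labels are assumed to be given): attention scores between patches carrying different labels are masked out before the softmax, so each patch only aggregates information from its own scene.

```python
# Illustrative only: attention restricted to patches sharing the same flow-derived label.
import torch
import torch.nn.functional as F

def label_restricted_attention(q, k, v, labels):
    """q, k, v: (num_patches, dim); labels: (num_patches,) integer scene labels."""
    scores = q @ k.t() / q.shape[-1] ** 0.5            # (P, P) scaled similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # True where the labels match
    scores = scores.masked_fill(~same, float("-inf"))  # block cross-scene patches
    return F.softmax(scores, dim=-1) @ v               # aggregate within-scene only

patches = torch.randn(6, 32)
labels = torch.tensor([0, 0, 1, 1, 1, 0])              # hypothetical patch labels
print(label_restricted_attention(patches, patches, patches, labels).shape)  # (6, 32)
```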
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 06:30:13 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Chen",
"Yongjie",
""
],
[
"Wu",
"Tieru",
""
]
] |
new_dataset
| 0.972313 |
2211.11316
|
Wei Chen
|
Wei Chen, Yansheng Li, Bo Dang, Yongjun Zhang
|
EHSNet: End-to-End Holistic Learning Network for Large-Size Remote
Sensing Image Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents EHSNet, a new end-to-end segmentation network designed
for the holistic learning of large-size remote sensing image semantic
segmentation (LRISS). Large-size remote sensing images (LRIs) can lead to GPU
memory exhaustion due to their extremely large size, which has been handled in
previous works through either global-local fusion or multi-stage refinement,
both of which are limited in their ability to fully exploit the abundant
information available in LRIs. Unlike them, EHSNet features three
memory-friendly modules to utilize the characteristics of LRIs: a long-range
dependency module to develop long-range spatial context, an efficient
cross-correlation module to build holistic contextual relationships, and a
boundary-aware enhancement module to preserve complete object boundaries.
Moreover, EHSNet manages to process holistic LRISS with the aid of memory
offloading. To the best of our knowledge, EHSNet is the first method capable of
performing holistic LRISS. Better still, EHSNet outperforms previous
state-of-the-art competitors by a significant margin of +5.65 mIoU on FBP and
+4.28 mIoU on Inria Aerial, demonstrating its effectiveness. We hope that
EHSNet will provide a new perspective for LRISS. The code and models will be
made publicly available.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 10:00:59 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 03:48:40 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Chen",
"Wei",
""
],
[
"Li",
"Yansheng",
""
],
[
"Dang",
"Bo",
""
],
[
"Zhang",
"Yongjun",
""
]
] |
new_dataset
| 0.993457 |
2301.06855
|
Manasi Muglikar Ms.
|
Manasi Muglikar, Leonard Bauersfeld, Diederik Paul Moeys, Davide
Scaramuzza
|
Event-based Shape from Polarization
|
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
Vancouver, 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
State-of-the-art solutions for Shape-from-Polarization (SfP) suffer from a
speed-resolution tradeoff: they either sacrifice the number of polarization
angles measured or necessitate lengthy acquisition times due to framerate
constraints, thus compromising either accuracy or latency. We tackle this
tradeoff using event cameras. Event cameras operate at microseconds resolution
with negligible motion blur, and output a continuous stream of events that
precisely measures how light changes over time asynchronously. We propose a
setup that consists of a linear polarizer rotating at high-speeds in front of
an event camera. Our method uses the continuous event stream caused by the
rotation to reconstruct relative intensities at multiple polarizer angles.
Experiments demonstrate that our method outperforms physics-based baselines
using frames, reducing the MAE by 25% on synthetic and real-world datasets. In
the real world, however, we observe that challenging conditions (i.e., when few
events are generated) harm the performance of physics-based solutions. To
overcome this, we propose a learning-based approach that learns to estimate
surface normals even at low event rates, improving on the physics-based
approach by 52% on the real-world dataset. The proposed system achieves an
acquisition speed equivalent to 50 fps (more than twice the framerate of the
commercial polarization sensor) while retaining a spatial resolution of 1 MP.
Our evaluation is based on the first large-scale dataset for event-based SfP.
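As background for the rotating-polarizer setup, the standard shape-from-polarization intensity model is sketched below in generic notation (symbols are illustrative, not necessarily the paper's): the measured intensity is a sinusoid in the polarizer angle whose phase and relative amplitude encode the angle and degree of linear polarization, and the event camera observes the resulting brightness changes asynchronously.

```latex
% Generic SfP intensity model under a rotating linear polarizer (illustrative notation).
% \bar{I}: average intensity; \rho: degree of linear polarization (DoLP);
% \phi: angle of linear polarization (AoLP); \phi_{pol}(t): polarizer angle at time t.
\[
  I\bigl(\phi_{pol}(t)\bigr) = \bar{I}\,\Bigl(1 + \rho \cos\bigl(2\phi_{pol}(t) - 2\phi\bigr)\Bigr)
\]
```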
|
[
{
"version": "v1",
"created": "Tue, 17 Jan 2023 12:59:58 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 14:50:04 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Muglikar",
"Manasi",
""
],
[
"Bauersfeld",
"Leonard",
""
],
[
"Moeys",
"Diederik Paul",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.964612 |
2301.07525
|
Tong Wu
|
Tong Wu, Jiarui Zhang, Xiao Fu, Yuxin Wang, Jiawei Ren, Liang Pan,
Wayne Wu, Lei Yang, Jiaqi Wang, Chen Qian, Dahua Lin, Ziwei Liu
|
OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic
Perception, Reconstruction and Generation
|
Project page: https://omniobject3d.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in modeling 3D objects mostly rely on synthetic datasets due
to the lack of large-scale real-scanned 3D databases. To facilitate the
development of 3D perception, reconstruction, and generation in the real world,
we propose OmniObject3D, a large-vocabulary 3D object dataset with massive
high-quality real-scanned 3D objects. OmniObject3D has several appealing
properties: 1) Large Vocabulary: It comprises 6,000 scanned objects in 190
daily categories, sharing common classes with popular 2D datasets (e.g.,
ImageNet and LVIS), benefiting the pursuit of generalizable 3D representations.
2) Rich Annotations: Each 3D object is captured with both 2D and 3D sensors,
providing textured meshes, point clouds, multiview rendered images, and
multiple real-captured videos. 3) Realistic Scans: The professional scanners
support high-quality object scans with precise shapes and realistic appearances.
With the vast exploration space offered by OmniObject3D, we carefully set up
four evaluation tracks: a) robust 3D perception, b) novel-view synthesis, c)
neural surface reconstruction, and d) 3D object generation. Extensive studies
are performed on these four benchmarks, revealing new observations, challenges,
and opportunities for future research in realistic 3D vision.
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 18:14:18 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 17:41:17 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Wu",
"Tong",
""
],
[
"Zhang",
"Jiarui",
""
],
[
"Fu",
"Xiao",
""
],
[
"Wang",
"Yuxin",
""
],
[
"Ren",
"Jiawei",
""
],
[
"Pan",
"Liang",
""
],
[
"Wu",
"Wayne",
""
],
[
"Yang",
"Lei",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Qian",
"Chen",
""
],
[
"Lin",
"Dahua",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.999884 |
2302.09654
|
J\"urgen Pfeffer
|
Juergen Pfeffer, Daniel Matter, Anahit Sargsyan
|
The Half-Life of a Tweet
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Twitter has started to share an impression_count variable as part of the
available public metrics for every Tweet collected with Twitter's APIs. With
the information about how often a particular Tweet has been shown to Twitter
users at the time of data collection, we can learn important insights about the
dissemination process of a Tweet by measuring its impression count repeatedly
over time. Our preliminary analysis shows that, on average, the peak of
impressions per second occurs 72 seconds after a Tweet is sent and that, after
24 hours, no relevant number of impressions can be observed for ~95% of all
Tweets. Finally, we estimate that the median half-life of a Tweet, i.e. the
time it takes before half of all impressions are created, is about 80 minutes.
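A small illustrative computation of the half-life statistic described above (not the authors' code; the sample values are made up): given repeated impression_count measurements for a Tweet, its half-life is the interpolated time at which the cumulative count reaches 50% of the final value, and the reported figure is the median over Tweets.

```python
# Illustrative half-life computation from repeated impression_count samples.
import numpy as np

def half_life_minutes(times_min, impressions):
    """times_min: minutes since posting; impressions: cumulative impression counts."""
    target = 0.5 * impressions[-1]
    return float(np.interp(target, impressions, times_min))  # linear interpolation

tweets = [  # hypothetical (time, cumulative impressions) series for two Tweets
    ([0, 1, 5, 30, 120, 1440], [0, 40, 150, 300, 380, 400]),
    ([0, 1, 10, 60, 240, 1440], [0, 10, 90, 400, 700, 720]),
]
half_lives = [half_life_minutes(np.array(t), np.array(c)) for t, c in tweets]
print(np.median(half_lives))  # median half-life in minutes across the sample
```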
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 18:48:15 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 08:10:08 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Pfeffer",
"Juergen",
""
],
[
"Matter",
"Daniel",
""
],
[
"Sargsyan",
"Anahit",
""
]
] |
new_dataset
| 0.982954 |
2302.11428
|
Zhaoyuan Ma
|
Zhaoyuan Ma and Jing Xiao
|
Robotic Perception-motion Synergy for Novel Rope Wrapping Tasks
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel and general method to address the problem of
using a general-purpose robot manipulator with a parallel gripper to wrap a
deformable linear object (DLO), called a rope, around a rigid object, called a
rod, autonomously. Such a robotic wrapping task has broad potential
applications in the automotive and electromechanical industries, construction,
manufacturing, etc., but has hardly been studied. Our method does not require
prior knowledge of the physical and geometrical properties of the objects but
enables the robot to use real-time RGB-D perception to determine the wrapping
state and feedback control to achieve high-quality results. As such, it
provides the robot manipulator with the general capabilities to handle wrapping
tasks of different rods or ropes. We tested our method on 6 combinations of 3
different ropes and 2 rods. The results show that the wrapping quality improved
and converged within 5 wraps for all test cases.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 15:08:23 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 03:49:22 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Ma",
"Zhaoyuan",
""
],
[
"Xiao",
"Jing",
""
]
] |
new_dataset
| 0.996004 |
2303.02660
|
Meiling Fang
|
Meiling Fang and Marco Huber and Naser Damer
|
SynthASpoof: Developing Face Presentation Attack Detection Based on
Privacy-friendly Synthetic Data
|
Accepted at CVPR workshop 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recently, significant progress has been made in face presentation attack
detection (PAD), which aims to secure face recognition systems against
presentation attacks, owing to the availability of several face PAD datasets.
However, all available datasets are based on privacy and legally-sensitive
authentic biometric data with a limited number of subjects. To target these
legal and technical challenges, this work presents the first synthetic-based
face PAD dataset, named SynthASpoof, as a large-scale PAD development dataset.
The bona fide samples in SynthASpoof are synthetically generated and the attack
samples are collected by presenting such synthetic data to capture systems in a
real attack scenario. The experimental results demonstrate the feasibility of
using SynthASpoof for the development of face PAD. Moreover, we boost the
performance of such a solution by incorporating the domain generalization tool
MixStyle into the PAD solutions. Additionally, we show the viability of using
synthetic data as a supplement to enrich the diversity of limited authentic
training data and to consistently enhance PAD performance. The SynthASpoof
dataset, containing 25,000 bona fide and 78,800 attack samples, the
implementation, and the pre-trained weights are made publicly available.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 12:35:58 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 09:38:16 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Fang",
"Meiling",
""
],
[
"Huber",
"Marco",
""
],
[
"Damer",
"Naser",
""
]
] |
new_dataset
| 0.999824 |
2304.04709
|
Lv Tang
|
Lv Tang, Haoke Xiao, Bo Li
|
Can SAM Segment Anything? When SAM Meets Camouflaged Object Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
SAM is a segmentation model recently released by Meta AI Research and has
been gaining attention quickly due to its impressive performance in generic
object segmentation. However, its ability to generalize to specific scenes such
as camouflaged scenes is still unknown. Camouflaged object detection (COD)
involves identifying objects that are seamlessly integrated into their
surroundings and has numerous practical applications in fields such as
medicine, art, and agriculture. In this study, we ask whether SAM can address
the COD task and evaluate the performance of SAM on the COD benchmark by
employing maximum segmentation evaluation and camouflage location evaluation.
We also compare SAM's performance with 22 state-of-the-art COD methods. Our
results indicate that while SAM shows promise in generic object segmentation,
its performance on the COD task is limited. This presents an opportunity for
further research to explore how to build a stronger SAM that may address the
COD task. The results of this paper are provided in
\url{https://github.com/luckybird1994/SAMCOD}.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 17:05:58 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Apr 2023 03:53:13 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Tang",
"Lv",
""
],
[
"Xiao",
"Haoke",
""
],
[
"Li",
"Bo",
""
]
] |
new_dataset
| 0.999294 |
2304.04812
|
Ziyang Li
|
Ziyang Li, Jiani Huang, Mayur Naik
|
Scallop: A Language for Neurosymbolic Programming
| null | null | null | null |
cs.PL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present Scallop, a language which combines the benefits of deep learning
and logical reasoning. Scallop enables users to write a wide range of
neurosymbolic applications and train them in a data- and compute-efficient
manner. It achieves these goals through three key features: 1) a flexible
symbolic representation that is based on the relational data model; 2) a
declarative logic programming language that is based on Datalog and supports
recursion, aggregation, and negation; and 3) a framework for automatic and
efficient differentiable reasoning that is based on the theory of provenance
semirings. We evaluate Scallop on a suite of eight neurosymbolic applications
from the literature. Our evaluation demonstrates that Scallop is capable of
expressing algorithmic reasoning in diverse and challenging AI tasks, provides
a succinct interface for machine learning programmers to integrate logical
domain knowledge, and yields solutions that are comparable or superior to
state-of-the-art models in terms of accuracy. Furthermore, Scallop's solutions
outperform these models in aspects such as runtime and data efficiency,
interpretability, and generalizability.
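To illustrate the flavor of provenance-semiring reasoning (this is a generic toy, not Scallop's actual syntax, semirings, or differentiable engine), the sketch below evaluates a recursive reachability rule over probabilistic edge facts using a max-times semiring: conjunction multiplies tags and disjunction keeps the maximum, so each derived fact carries the confidence of its best derivation.

```python
# Toy Datalog-style fixed point under a max-times provenance semiring (illustrative only).
edges = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.3}  # probabilistic facts

# Rules: path(x, y) :- edge(x, y).
#        path(x, y) :- path(x, z), edge(z, y).
path = dict(edges)
changed = True
while changed:                                   # naive fixed-point iteration
    changed = False
    for (x, z), p1 in list(path.items()):
        for (z2, y), p2 in edges.items():
            if z == z2:
                tag = p1 * p2                    # conjunction: multiply tags
                if tag > path.get((x, y), 0.0):  # disjunction: keep the maximum
                    path[(x, y)] = tag
                    changed = True

print(path[("a", "c")])  # 0.72: the derivation a->b->c beats the direct 0.3 edge
```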
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 18:46:53 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Li",
"Ziyang",
""
],
[
"Huang",
"Jiani",
""
],
[
"Naik",
"Mayur",
""
]
] |
new_dataset
| 0.999742 |
2304.04817
|
Thomas H\"utter
|
Konstantin Emil Thiel and Daniel Kocher and Nikolaus Augsten and
Thomas H\"utter and Willi Mann and Daniel Ulrich Schmitt
|
FINEX: A Fast Index for Exact & Flexible Density-Based Clustering
(Extended Version with Proofs)*
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Density-based clustering aims to find groups of similar objects (i.e.,
clusters) in a given dataset. Applications include, e.g., process mining and
anomaly detection. It comes with two user parameters ({\epsilon}, MinPts) that
determine the clustering result, but are typically unknown in advance. Thus,
users need to interactively test various settings until satisfying clusterings
are found. However, existing solutions suffer from the following limitations:
(a) Ineffective pruning of expensive neighborhood computations. (b) Approximate
clustering, where objects are falsely labeled noise. (c) Restricted parameter
tuning that is limited to {\epsilon} whereas MinPts is constant, which reduces
the explorable clusterings. (d) Inflexibility in terms of applicable data types
and distance functions. We propose FINEX, a linear-space index that overcomes
these limitations. Our index provides exact clusterings and can be queried with
either of the two parameters. FINEX avoids neighborhood computations where
possible and reduces the complexities of the remaining computations by
leveraging fundamental properties of density-based clusters. Hence, our
solution is efficient and flexible regarding data types and distance functions.
Moreover, FINEX respects the original and straightforward notion of
density-based clustering. In our experiments on 12 large real-world datasets
from various domains, FINEX frequently outperforms state-of-the-art techniques
for exact clustering by orders of magnitude.
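For context, the interactive workflow that such an index is designed to accelerate looks roughly like the loop below, shown here with scikit-learn's plain DBSCAN for illustration only; FINEX itself answers these ({\epsilon}, MinPts) settings exactly from its index instead of re-running the full clustering each time.

```python
# The (epsilon, MinPts) exploration loop that a clustering index aims to speed up.
# Illustrative only; this re-clusters from scratch with scikit-learn's DBSCAN.
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=500, noise=0.08, random_state=0)

for eps in (0.05, 0.1, 0.2):
    for min_pts in (3, 5, 10):
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
        n_noise = int((labels == -1).sum())
        print(f"eps={eps}, MinPts={min_pts}: {n_clusters} clusters, {n_noise} noise points")
```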
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 18:57:45 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Thiel",
"Konstantin Emil",
""
],
[
"Kocher",
"Daniel",
""
],
[
"Augsten",
"Nikolaus",
""
],
[
"Hütter",
"Thomas",
""
],
[
"Mann",
"Willi",
""
],
[
"Schmitt",
"Daniel Ulrich",
""
]
] |
new_dataset
| 0.999529 |
2304.04833
|
Marcio Guilherme Bronzato De Avellar
|
Marcio G B de Avellar, Alexandre A S Junior, Andr\'e H G Lopes,
Andr\'e L S Carneiro, Jo\~ao A Pereira, Davi C B D da Cunha
|
A vis\~ao da BBChain sobre o contexto tecnol\'ogico subjacente \`a
ado\c{c}\~ao do Real Digital
|
11 pages, 8 figures, in (Brazilian) Portuguese
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore confidential computing in the context of CBDCs using Microsoft's
CCF framework as an example. By developing an experiment and comparing
different approaches and performance and security metrics, we seek to evaluate
the effectiveness of confidential computing to improve the privacy, security,
and performance of CBDCs. Preliminary results suggest that confidential
computing could be a promising solution to the technological challenges faced
by CBDCs. Furthermore, by implementing confidential computing in DLTs such as
Hyperledger Besu and utilizing frameworks such as CCF, we increase transaction
confidentiality and privacy while maintaining the scalability and
interoperability required for a global digital financial system. In conclusion,
confidential computing can significantly bolster CBDC development, fostering a
secure, private, and efficient financial future.
--
Exploramos o uso da computa\c{c}\~ao confidencial no contexto das CBDCs
utilizando o framework CCF da Microsoft como exemplo. Via desenvolvimento de
experimentos e compara\c{c}\~ao de diferentes abordagens e m\'etricas de
desempenho e seguran\c{c}a, buscamos avaliar a efic\'acia da computa\c{c}\~ao
confidencial para melhorar a privacidade, seguran\c{c}a e desempenho das CBDCs.
Resultados preliminares sugerem que a computa\c{c}\~ao confidencial pode ser
uma solu\c{c}\~ao promissora para os desafios tecnol\'ogicos enfrentados pelas
CBDCs. Ao implementar a computa\c{c}\~ao confidencial em DLTs, como o
Hyperledger Besu, e utilizar frameworks como o CCF, aumentamos a
confidencialidade e a privacidade das transa\c{c}\~oes, mantendo a
escalabilidade e a interoperabilidade necess\'arias para um sistema financeiro
global e digital. Em conclus\~ao, a computa\c{c}\~ao confidencial pode
refor\c{c}ar significativamente o desenvolvimento do CBDC, promovendo um futuro
financeiro seguro, privado e eficiente.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 19:42:27 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"de Avellar",
"Marcio G B",
""
],
[
"Junior",
"Alexandre A S",
""
],
[
"Lopes",
"André H G",
""
],
[
"Carneiro",
"André L S",
""
],
[
"Pereira",
"João A",
""
],
[
"da Cunha",
"Davi C B D",
""
]
] |
new_dataset
| 0.951798 |
2304.04861
|
E Zhixuan Zeng
|
E. Zhixuan Zeng, Yuhao Chen, Alexander Wong
|
ShapeShift: Superquadric-based Object Pose Estimation for Robotic
Grasping
| null | null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Object pose estimation is a critical task in robotics for precise object
manipulation. However, current techniques heavily rely on a reference 3D
object, limiting their generalizability and making it expensive to expand to
new object categories. Direct pose predictions also provide limited information
for robotic grasping without referencing the 3D model. Keypoint-based methods
offer intrinsic descriptiveness without relying on an exact 3D model, but they
may lack consistency and accuracy. To address these challenges, this paper
proposes ShapeShift, a superquadric-based framework for object pose estimation
that predicts the object's pose relative to a primitive shape which is fitted
to the object. The proposed framework offers intrinsic descriptiveness and the
ability to generalize to arbitrary geometric shapes beyond the training set.
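For reference, the standard superquadric inside-outside function used when fitting such primitives is given below in generic notation (the paper's exact parameterization may differ): a point lies on the surface when F = 1, inside when F < 1, and outside when F > 1.

```latex
% Generic superquadric inside-outside function (illustrative notation).
% a_x, a_y, a_z: scale parameters; \epsilon_1, \epsilon_2: shape exponents.
\[
  F(x, y, z) =
  \left( \left(\tfrac{x}{a_x}\right)^{2/\epsilon_2} + \left(\tfrac{y}{a_y}\right)^{2/\epsilon_2} \right)^{\epsilon_2/\epsilon_1}
  + \left(\tfrac{z}{a_z}\right)^{2/\epsilon_1}
\]
```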
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 20:55:41 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Zeng",
"E. Zhixuan",
""
],
[
"Chen",
"Yuhao",
""
],
[
"Wong",
"Alexander",
""
]
] |
new_dataset
| 0.999388 |
2304.04893
|
Yanlin Qi
|
Yanlin Qi, Gengchen Mai, Rui Zhu, and Michael Zhang
|
EVKG: An Interlinked and Interoperable Electric Vehicle Knowledge Graph
for Smart Transportation System
| null | null | null | null |
cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Over the past decade, the electric vehicle industry has experienced
unprecedented growth and diversification, resulting in a complex ecosystem. To
effectively manage this multifaceted field, we present an EV-centric knowledge
graph (EVKG) as a comprehensive, cross-domain, extensible, and open geospatial
knowledge management system. The EVKG encapsulates essential EV-related
knowledge, including EV adoption, electric vehicle supply equipment, and
electricity transmission network, to support decision-making related to EV
technology development, infrastructure planning, and policy-making by providing
timely and accurate information and analysis. To enrich and contextualize the
EVKG, we integrate the developed EV-relevant ontology modules from existing
well-known knowledge graphs and ontologies. This integration enables
interoperability with other knowledge graphs in the Linked Open Data Cloud,
enhancing the EVKG's value as a knowledge hub for EV decision-making. Using six
competency questions, we demonstrate how the EVKG can be used to answer various
types of EV-related questions, providing critical insights into the EV
ecosystem. Our EVKG provides an efficient and effective approach for managing
the complex and diverse EV industry. By consolidating critical EV-related
knowledge into a single, easily accessible resource, the EVKG supports
decision-makers in making informed choices about EV technology development,
infrastructure planning, and policy-making. As a flexible and extensible
platform, the EVKG is capable of accommodating a wide range of data sources,
enabling it to evolve alongside the rapidly changing EV landscape.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 23:01:02 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Qi",
"Yanlin",
""
],
[
"Mai",
"Gengchen",
""
],
[
"Zhu",
"Rui",
""
],
[
"Zhang",
"Michael",
""
]
] |
new_dataset
| 0.99744 |
2304.04915
|
Kat Agres
|
Kat R. Agres, Adyasha Dash, Phoebe Chua
|
AffectMachine-Classical: A novel system for generating affective
classical music
|
K. Agres and A. Dash share first authorship
| null | null | null |
cs.SD cs.AI cs.HC cs.MM eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This work introduces a new music generation system, called
AffectMachine-Classical, that is capable of generating affective Classical music
in real-time. AffectMachine was designed to be incorporated into biofeedback
systems (such as brain-computer-interfaces) to help users become aware of, and
ultimately mediate, their own dynamic affective states. That is, this system
was developed for music-based MedTech to support real-time emotion
self-regulation in users. We provide an overview of the rule-based,
probabilistic system architecture, describing the main aspects of the system
and how they are novel. We then present the results of a listener study that
was conducted to validate the ability of the system to reliably convey target
emotions to listeners. The findings indicate that AffectMachine-Classical is
very effective in communicating various levels of Arousal ($R^2 = .96$) to
listeners, and is also quite convincing in terms of Valence ($R^2 = .90$). Future
work will embed AffectMachine-Classical into biofeedback systems, to leverage
the efficacy of the affective music for emotional well-being in listeners.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 01:06:26 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Agres",
"Kat R.",
""
],
[
"Dash",
"Adyasha",
""
],
[
"Chua",
"Phoebe",
""
]
] |
new_dataset
| 0.990368 |
2304.04917
|
Xianrui Luo
|
Xianrui Luo, Juewen Peng, Weiyue Zhao, Ke Xian, Hao Lu, and Zhiguo Cao
|
Point-and-Shoot All-in-Focus Photo Synthesis from Smartphone Camera Pair
|
Early Access by IEEE Transactions on Circuits and Systems for Video
Technology 2022
| null |
10.1109/TCSVT.2022.3222609
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
All-in-Focus (AIF) photography is expected to be a commercial selling point
for modern smartphones. Standard AIF synthesis requires manual, time-consuming
operations such as focal stack compositing, which is unfriendly to ordinary
people. To achieve point-and-shoot AIF photography with a smartphone, we expect
that an AIF photo can be generated from one shot of the scene, instead of from
multiple photos captured by the same camera. Benefiting from the multi-camera
module in modern smartphones, we introduce a new task of AIF synthesis from
main (wide) and ultra-wide cameras. The goal is to recover sharp details from
defocused regions in the main-camera photo with the help of the
ultra-wide-camera one. The camera setting poses new challenges such as
parallax-induced occlusions and inconsistent color between cameras. To overcome
the challenges, we introduce a predict-and-refine network to mitigate
occlusions and propose dynamic frequency-domain alignment for color correction.
To enable effective training and evaluation, we also build an AIF dataset with
2686 unique scenes. Each scene includes two photos captured by the main camera,
one photo captured by the ultra-wide camera, and a synthesized AIF photo.
Results show that our solution, termed EasyAIF, can produce high-quality AIF
photos and outperforms strong baselines quantitatively and qualitatively. For
the first time, we demonstrate point-and-shoot AIF photo synthesis successfully
from main and ultra-wide cameras.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 01:09:54 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Luo",
"Xianrui",
""
],
[
"Peng",
"Juewen",
""
],
[
"Zhao",
"Weiyue",
""
],
[
"Xian",
"Ke",
""
],
[
"Lu",
"Hao",
""
],
[
"Cao",
"Zhiguo",
""
]
] |
new_dataset
| 0.966711 |
2304.04958
|
Ghayoor Shah
|
Ghayoor Shah, Yaser P. Fallah, Danyang Tian, Ehsan Moradi-Pari
|
AROW: A V2X-based Automated Right-of-Way Algorithm for Distributed
Cooperative Intersection Management
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Safe and efficient intersection management is critical for an improved
driving experience. As per several studies, an increasing number of crashes and
fatalities occur every year at intersections. Most crashes are a consequence of
a lack of situational awareness and ambiguity over intersection crossing
priority. In this regard, research in Cooperative Intersection Management (CIM)
is considered highly significant since it can utilize Vehicle-to-Everything
(V2X) communication among Connected and Autonomous Vehicles (CAVs). CAVs can
transceive basic and/or advanced safety information, thereby improving
situational awareness at intersections. Although numerous studies have been
performed on CIM, most of them are reliant on the presence of a Road-Side Unit
(RSU) that can act as a centralized intersection manager and assign
intersection crossing priorities. In the absence of RSU, there are some
distributed CIM methods that only rely on communication among CAVs for
situational awareness; however, none of them are specifically focused on
Stop-Controlled Intersections (SCI) with the aim of mitigating ambiguity among
CAVs. Thus, we propose an Automated Right-of-Way (AROW) algorithm based on
distributed CIM that is capable of reducing ambiguity and handling any level of
noncompliance by CAVs. The algorithm is validated with extensive experiments
for its functionality and robustness, and it outperforms the current solutions.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 04:04:39 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Shah",
"Ghayoor",
""
],
[
"Fallah",
"Yaser P.",
""
],
[
"Tian",
"Danyang",
""
],
[
"Moradi-Pari",
"Ehsan",
""
]
] |
new_dataset
| 0.99978 |
2304.04960
|
Soohyun Kim
|
Soohyun Kim, Junho Kim, Taekyung Kim, Hwan Heo, Seungryong Kim,
Jiyoung Lee, Jin-Hwa Kim
|
Panoramic Image-to-Image Translation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we tackle the challenging task of Panoramic Image-to-Image
translation (Pano-I2I) for the first time. This task is difficult due to the
geometric distortion of panoramic images and the lack of a panoramic image
dataset with diverse conditions, like weather or time. To address these
challenges, we propose a panoramic distortion-aware I2I model that preserves
the structure of the panoramic images while consistently translating their
global style referenced from a pinhole image. To mitigate the distortion issue
in naive 360 panorama translation, we adopt spherical positional embedding to
our transformer encoders, introduce a distortion-free discriminator, and apply
sphere-based rotation for augmentation and its ensemble. We also design a
content encoder and a style encoder to be deformation-aware to deal with a
large domain gap between panoramas and pinhole images, enabling us to work on
diverse conditions of pinhole images. In addition, considering the large
discrepancy between panoramas and pinhole images, our framework decouples the
learning procedure of the panoramic reconstruction stage from the translation
stage. We show distinct improvements over existing I2I models in translating
the StreetLearn dataset in the daytime into diverse conditions. The code will
be publicly available online for our community.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 04:08:58 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Kim",
"Soohyun",
""
],
[
"Kim",
"Junho",
""
],
[
"Kim",
"Taekyung",
""
],
[
"Heo",
"Hwan",
""
],
[
"Kim",
"Seungryong",
""
],
[
"Lee",
"Jiyoung",
""
],
[
"Kim",
"Jin-Hwa",
""
]
] |
new_dataset
| 0.96022 |
2304.04978
|
Yao Teng
|
Yao Teng, Haisong Liu, Sheng Guo, Limin Wang
|
StageInteractor: Query-based Object Detector with Cross-stage
Interaction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Previous object detectors make predictions based on dense grid points or
numerous preset anchors. Most of these detectors are trained with one-to-many
label assignment strategies. On the contrary, recent query-based object
detectors depend on a sparse set of learnable queries and a series of decoder
layers. The one-to-one label assignment is independently applied on each layer
for the deep supervision during training. Despite the great success of
query-based object detection, however, this one-to-one label assignment
strategy requires the detectors to have strong fine-grained discrimination and
modeling capacity. To solve the above problems, in this paper, we propose a new
query-based object detector with cross-stage interaction, coined as
StageInteractor. During the forward propagation, we come up with an efficient
way to improve this modeling ability by reusing dynamic operators with
lightweight adapters. As for the label assignment, a cross-stage label assigner
is applied subsequent to the one-to-one label assignment. With this assigner,
the training target class labels are gathered across stages and then
reallocated to proper predictions at each decoder layer. On MS COCO benchmark,
our model improves the baseline by 2.2 AP, and achieves 44.8 AP with ResNet-50
as backbone, 100 queries and 12 training epochs. With longer training time and
300 queries, StageInteractor achieves 51.1 AP and 52.2 AP with ResNeXt-101-DCN
and Swin-S, respectively.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 04:50:13 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Teng",
"Yao",
""
],
[
"Liu",
"Haisong",
""
],
[
"Guo",
"Sheng",
""
],
[
"Wang",
"Limin",
""
]
] |
new_dataset
| 0.996658 |
2304.05041
|
Matiss Rikters
|
Maija K\=ale and Mat\=iss Rikters
|
What Food Do We Tweet about on a Rainy Day?
| null |
Published in the proceedings of The 29th Annual Conference of the
Association for Natural Language Processing (NLP2023)
| null | null |
cs.SI cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Food choice is a complex phenomenon shaped by factors such as taste,
ambience, culture or weather. In this paper, we explore food-related tweeting
in different weather conditions. We inspect a Latvian food tweet dataset
spanning the past decade in conjunction with a weather observation dataset
consisting of average temperature, precipitation, and other phenomena. We find
which weather conditions lead to specific food information sharing;
automatically classify tweet sentiment and discuss how it changes depending on
the weather. This research contributes to the growing area of using large-scale
social network data to understand food consumers' choices and perceptions.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 07:57:10 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Kāle",
"Maija",
""
],
[
"Rikters",
"Matīss",
""
]
] |
new_dataset
| 0.998886 |
2304.05049
|
Xia Shangzhou
|
Shangzhou Xia, Jianjun Zhao
|
Static Entanglement Analysis of Quantum Programs
| null | null | null | null |
cs.SE quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Quantum entanglement plays a crucial role in quantum computing. Entanglement
information has important implications for understanding the behavior of
quantum programs and avoiding entanglement-induced errors. Entanglement
analysis is a static code analysis technique that determines which qubit may
entangle with another qubit and establishes an entanglement graph to represent
the whole picture of interactions between entangled qubits. This paper presents
the first static entanglement analysis method for quantum programs developed in
the practical quantum programming language Q\#. Our method first constructs an
interprocedural control flow graph (ICFG) for a Q\# program and then calculates
the entanglement information not only within each module but also between
modules of the program. The analysis results can help improve the reliability
and security of quantum programs.
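A greatly simplified sketch of the kind of over-approximate entanglement graph such an analysis computes (plain Python over a flat, hypothetical gate list; the paper's analysis works on Q# programs via an interprocedural control flow graph): every two-qubit interaction adds an edge, and a may-entangle query becomes reachability in that graph.

```python
# Toy entanglement-graph construction: each two-qubit gate may entangle its operands.
from collections import defaultdict, deque

gates = [("CNOT", "q0", "q1"), ("CZ", "q1", "q2"), ("CNOT", "q3", "q4")]  # hypothetical

graph = defaultdict(set)
for _, a, b in gates:
    graph[a].add(b)
    graph[b].add(a)

def may_entangle(u, v):
    """Over-approximation: u and v may be entangled if they are connected in the graph."""
    seen, queue = {u}, deque([u])
    while queue:
        cur = queue.popleft()
        if cur == v:
            return True
        for nxt in graph[cur] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

print(may_entangle("q0", "q2"))  # True  (transitively, via q1)
print(may_entangle("q0", "q3"))  # False (separate components)
```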
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 08:18:39 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Xia",
"Shangzhou",
""
],
[
"Zhao",
"Jianjun",
""
]
] |
new_dataset
| 0.997339 |
2304.05051
|
Yunpeng Han
|
Yunpeng Han, Lisai Zhang, Qingcai Chen, Zhijian Chen, Zhonghua Li,
Jianxin Yang, Zhao Cao
|
FashionSAP: Symbols and Attributes Prompt for Fine-grained Fashion
Vision-Language Pre-training
| null | null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Fashion vision-language pre-training models have shown efficacy for a wide
range of downstream tasks. However, general vision-language pre-training models
pay less attention to fine-grained domain features, even though these features
are important in distinguishing specific domain tasks from general tasks. We
propose a method for fine-grained fashion vision-language pre-training based on
fashion Symbols and Attributes Prompt (FashionSAP) to model fine-grained
multi-modalities fashion attributes and characteristics. Firstly, we propose
the fashion symbols, a novel abstract fashion concept layer, to represent
different fashion items and to generalize various kinds of fine-grained fashion
features, making modelling fine-grained attributes more effective. Secondly,
the attributes prompt method is proposed to make the model learn specific
attributes of fashion items explicitly. We design proper prompt templates
according to the format of fashion data. Comprehensive experiments are
conducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and
FashionSAP achieves SOTA performance on four popular fashion tasks. The ablation
study also shows that the proposed abstract fashion symbols and the attribute
prompt method enable the model to acquire fine-grained semantics in the
fashion domain effectively. The obvious performance gains from FashionSAP
provide a new baseline for future fashion task research.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 08:20:17 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Han",
"Yunpeng",
""
],
[
"Zhang",
"Lisai",
""
],
[
"Chen",
"Qingcai",
""
],
[
"Chen",
"Zhijian",
""
],
[
"Li",
"Zhonghua",
""
],
[
"Yang",
"Jianxin",
""
],
[
"Cao",
"Zhao",
""
]
] |
new_dataset
| 0.998799 |
2304.05056
|
Benjamin Kenwright
|
Ben Kenwright
|
Real-Time Character Rise Motions
| null | null | null | null |
cs.RO cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an uncomplicated dynamic controller for generating
physically-plausible three-dimensional full-body biped character rise motions
on-the-fly at run-time. Our low-dimensional controller uses fundamental
reference information (e.g., center-of-mass, hands, and feet locations) to
produce balanced biped get-up poses by means of a real-time physically-based
simulation. The key idea is to use a simple approximate model (i.e., similar to
the inverted-pendulum stepping model) to create continuous reference
trajectories that can be seamlessly tracked by an articulated biped character
to create balanced rise-motions. Our approach does not use any key-framed data
or any computationally expensive processing (e.g., offline-optimization or
search algorithms). We demonstrate the effectiveness and ease of our technique
through example (i.e., a biped character picking itself up from different
laying positions).
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 08:26:11 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Kenwright",
"Ben",
""
]
] |
new_dataset
| 0.987413 |
2304.05090
|
Luca Ciampi
|
Pawe{\l} Foszner, Agnieszka Szcz\k{e}sna, Luca Ciampi, Nicola Messina,
Adam Cygan, Bartosz Bizo\'n, Micha{\l} Cogiel, Dominik Golba, El\.zbieta
Macioszek, Micha{\l} Staniszewski
|
CrowdSim2: an Open Synthetic Benchmark for Object Detectors
|
Proceedings of the 18th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications, 2023
| null |
10.5220/0011692500003417
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Data scarcity has become one of the main obstacles to developing supervised
models based on Artificial Intelligence in Computer Vision. Indeed, Deep
Learning-based models systematically struggle when applied in new scenarios
never seen during training and may not be adequately tested in non-ordinary yet
crucial real-world situations. This paper presents and publicly releases
CrowdSim2, a new synthetic collection of images suitable for people and vehicle
detection gathered from a simulator based on the Unity graphical engine. It
consists of thousands of images gathered from various synthetic scenarios
resembling the real world, where we varied some factors of interest, such as
the weather conditions and the number of objects in the scenes. The labels are
automatically collected and consist of bounding boxes that precisely localize
objects belonging to the two object classes, leaving out humans from the
annotation pipeline. We exploited this new benchmark as a testing ground for
some state-of-the-art detectors, showing that our simulated scenarios can be a
valuable tool for measuring their performances in a controlled environment.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 09:35:57 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Foszner",
"Paweł",
""
],
[
"Szczęsna",
"Agnieszka",
""
],
[
"Ciampi",
"Luca",
""
],
[
"Messina",
"Nicola",
""
],
[
"Cygan",
"Adam",
""
],
[
"Bizoń",
"Bartosz",
""
],
[
"Cogiel",
"Michał",
""
],
[
"Golba",
"Dominik",
""
],
[
"Macioszek",
"Elżbieta",
""
],
[
"Staniszewski",
"Michał",
""
]
] |
new_dataset
| 0.99973 |
2304.05097
|
Weichuang Li
|
Weichuang Li, Longhao Zhang, Dong Wang, Bin Zhao, Zhigang Wang, Mulin
Chen, Bang Zhang, Zhongjian Wang, Liefeng Bo, Xuelong Li
|
One-Shot High-Fidelity Talking-Head Synthesis with Deformable Neural
Radiance Field
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Talking head generation aims to generate faces that maintain the identity
information of the source image and imitate the motion of the driving image.
Most pioneering methods rely primarily on 2D representations and thus will
inevitably suffer from face distortion when large head rotations are
encountered. Recent works instead employ explicit 3D structural representations
or implicit neural rendering to improve performance under large pose changes.
Nevertheless, the fidelity of identity and expression is not so desirable,
especially for novel-view synthesis. In this paper, we propose HiDe-NeRF, which
achieves high-fidelity and free-view talking-head synthesis. Drawing on the
recently proposed Deformable Neural Radiance Fields, HiDe-NeRF represents the
3D dynamic scene into a canonical appearance field and an implicit deformation
field, where the former comprises the canonical source face and the latter
models the driving pose and expression. In particular, we improve fidelity from
two aspects: (i) to enhance identity expressiveness, we design a generalized
appearance module that leverages multi-scale volume features to preserve face
shape and details; (ii) to improve expression preciseness, we propose a
lightweight deformation module that explicitly decouples the pose and
expression to enable precise expression modeling. Extensive experiments
demonstrate that our proposed approach can generate better results than
previous works. Project page: https://www.waytron.net/hidenerf/
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 09:47:35 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Li",
"Weichuang",
""
],
[
"Zhang",
"Longhao",
""
],
[
"Wang",
"Dong",
""
],
[
"Zhao",
"Bin",
""
],
[
"Wang",
"Zhigang",
""
],
[
"Chen",
"Mulin",
""
],
[
"Zhang",
"Bang",
""
],
[
"Wang",
"Zhongjian",
""
],
[
"Bo",
"Liefeng",
""
],
[
"Li",
"Xuelong",
""
]
] |
new_dataset
| 0.984294 |
2304.05098
|
Tianyuan Zhang
|
Tianyuan Zhang, Yisong Xiao, Xiaoya Zhang, Hao Li, Lu Wang
|
Benchmarking the Physical-world Adversarial Robustness of Vehicle
Detection
|
CVPR 2023 workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial attacks in the physical world can harm the robustness of
detection models. Evaluating the robustness of detection models in the physical
world can be challenging due to the time-consuming and labor-intensive nature
of many experiments. Thus, virtual simulation experiments can provide a
solution to this challenge. However, there is no unified detection benchmark
based on virtual simulation environment. To address this challenge, we proposed
an instant-level data generation pipeline based on the CARLA simulator. Using
this pipeline, we generated the DCI dataset and conducted extensive experiments
on three detection models and three physical adversarial attacks. The dataset
covers 7 continuous and 1 discrete scenes, with over 40 angles, 20 distances,
and 20,000 positions. The results indicate that Yolo v6 had the strongest
resistance, with only a 6.59% average AP drop, and ASA was the most effective
attack algorithm with a 14.51% average AP reduction, twice that of other
algorithms. Static scenes had higher recognition AP, and results under
different weather conditions were similar. Adversarial attack algorithm
improvement may be approaching its 'limitation'.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 09:48:25 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Zhang",
"Tianyuan",
""
],
[
"Xiao",
"Yisong",
""
],
[
"Zhang",
"Xiaoya",
""
],
[
"Li",
"Hao",
""
],
[
"Wang",
"Lu",
""
]
] |
new_dataset
| 0.999053 |
2304.05141
|
Wenbin Hu
|
Wenbin Hu, Bidan Huang, Wang Wei Lee, Sicheng Yang, Yu Zheng, Zhibin
Li
|
Dexterous In-Hand Manipulation of Slender Cylindrical Objects through
Deep Reinforcement Learning with Tactile Sensing
|
10 pages, 12 figures, submitted to Transaction on Mechatronics
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Continuous in-hand manipulation is an important physical interaction skill,
where tactile sensing provides indispensable contact information to enable
dexterous manipulation of small objects. This work proposed a framework for
end-to-end policy learning with tactile feedback and sim-to-real transfer,
which achieved fine in-hand manipulation that controls the pose of a thin
cylindrical object, such as a long stick, to track various continuous
trajectories through multiple contacts of three fingertips of a dexterous robot
hand with tactile sensor arrays. We estimated the central contact position
between the stick and each fingertip from the high-dimensional tactile
information and showed that the learned policies achieved effective
manipulation performance with the processed tactile feedback. The policies were
trained with deep reinforcement learning in simulation and successfully
transferred to real-world experiments, using coordinated model calibration and
domain randomization. We evaluated the effectiveness of tactile information via
comparative studies and validated the sim-to-real performance through
real-world experiments.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 11:13:48 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Hu",
"Wenbin",
""
],
[
"Huang",
"Bidan",
""
],
[
"Lee",
"Wang Wei",
""
],
[
"Yang",
"Sicheng",
""
],
[
"Zheng",
"Yu",
""
],
[
"Li",
"Zhibin",
""
]
] |
new_dataset
| 0.997372 |
2304.05152
|
Shiyu Tang
|
Shiyu Tang, Ting Sun, Juncai Peng, Guowei Chen, Yuying Hao, Manhui
Lin, Zhihong Xiao, Jiangbin You, Yi Liu
|
PP-MobileSeg: Explore the Fast and Accurate Semantic Segmentation Model
on Mobile Devices
|
8 pages, 3 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The success of transformers in computer vision has led to several attempts to
adapt them for mobile devices, but their performance remains unsatisfactory in
some real-world applications. To address this issue, we propose PP-MobileSeg, a
semantic segmentation model that achieves state-of-the-art performance on
mobile devices. PP-MobileSeg comprises three novel parts: the StrideFormer
backbone, the Aggregated Attention Module (AAM), and the Valid Interpolate
Module (VIM). The four-stage StrideFormer backbone is built with MV3 blocks and
strided SEA attention, and it is able to extract rich semantic and detailed
features with minimal parameter overhead. The AAM first filters the detailed
features through semantic feature ensemble voting and then combines them with
semantic features to enhance the semantic information. Furthermore, we proposed
VIM to upsample the downsampled feature to the resolution of the input image.
It significantly reduces model latency by interpolating only the classes present
in the final prediction, as this upsampling step is the most significant
contributor to overall model latency. Extensive experiments show that PP-MobileSeg achieves a superior
tradeoff between accuracy, model size, and latency compared to other methods.
On the ADE20K dataset, PP-MobileSeg achieves 1.57% higher accuracy in mIoU than
SeaFormer-Base with 32.9% fewer parameters and 42.3% faster acceleration on
Qualcomm Snapdragon 855. Source codes are available at
https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.8.
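A hedged reading of the Valid Interpolate idea in PyTorch-style code (not the official PaddlePaddle implementation; tensor names and shapes are assumptions): classes absent from the low-resolution argmax are dropped before upsampling, so only channels that can appear in the final prediction are interpolated to full resolution.

```python
# Sketch of "interpolate only the classes present in the prediction" (illustrative only).
import torch
import torch.nn.functional as F

def valid_interpolate(logits, out_size):
    """logits: (N, C, h, w) low-resolution class scores; out_size: (H, W) of the input."""
    present = torch.unique(logits.argmax(dim=1))   # class ids that actually appear
    selected = logits[:, present]                  # keep only those channels
    upsampled = F.interpolate(selected, size=out_size,
                              mode="bilinear", align_corners=False)
    return present[upsampled.argmax(dim=1)]        # map back to the original class ids

logits = torch.randn(1, 150, 64, 64)               # e.g., ADE20K has 150 classes
print(valid_interpolate(logits, (512, 512)).shape) # torch.Size([1, 512, 512])
```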
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 11:43:10 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Tang",
"Shiyu",
""
],
[
"Sun",
"Ting",
""
],
[
"Peng",
"Juncai",
""
],
[
"Chen",
"Guowei",
""
],
[
"Hao",
"Yuying",
""
],
[
"Lin",
"Manhui",
""
],
[
"Xiao",
"Zhihong",
""
],
[
"You",
"Jiangbin",
""
],
[
"Liu",
"Yi",
""
]
] |
new_dataset
| 0.997366 |
2304.05193
|
Jian Wang Jornbowrl
|
Jian Wang, Shangqing Liu, Xiaofei Xie, Yi Li
|
Evaluating AIGC Detectors on Code Content
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Artificial Intelligence Generated Content (AIGC) has garnered considerable
attention for its impressive performance, with ChatGPT emerging as a leading
AIGC model that produces high-quality responses across various applications,
including software development and maintenance. Despite its potential, the
misuse of ChatGPT poses significant concerns, especially in education and
safetycritical domains. Numerous AIGC detectors have been developed and
evaluated on natural language data. However, their performance on code-related
content generated by ChatGPT remains unexplored. To fill this gap, in this
paper, we present the first empirical study on evaluating existing AIGC
detectors in the software domain. We created a comprehensive dataset including
492.5K samples comprising code-related content produced by ChatGPT,
encompassing popular software activities like Q&A (115K), code summarization
(126K), and code generation (226.5K). We evaluated six AIGC detectors,
including three commercial and three open-source solutions, assessing their
performance on this dataset. Additionally, we conducted a human study to
understand human detection capabilities and compare them with the existing AIGC
detectors. Our results indicate that AIGC detectors demonstrate lower
performance on code-related data compared to natural language data. Fine-tuning
can enhance detector performance, especially for content within the same
domain, but generalization remains a challenge. The human evaluation reveals
that detection by humans is quite challenging.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 12:54:42 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Wang",
"Jian",
""
],
[
"Liu",
"Shangqing",
""
],
[
"Xie",
"Xiaofei",
""
],
[
"Li",
"Yi",
""
]
] |
new_dataset
| 0.987111 |
2304.05274
|
Shao Yi Liaw
|
Shaoyi Liaw, Fan Huang, Fabricio Benevenuto, Haewoon Kwak, Jisun An
|
YouNICon: YouTube's CommuNIty of Conspiracy Videos
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conspiracy theories are widely propagated on social media. Among various
social media services, YouTube is one of the most influential sources of news
and entertainment. This paper seeks to develop a dataset, YOUNICON, to enable
researchers to perform conspiracy theory detection as well as classification of
videos with conspiracy theories into different topics. YOUNICON is a dataset
with a large collection of videos from suspicious channels that were identified
to contain conspiracy theories in a previous study (Ledwich and Zaitsev 2020).
Overall, YOUNICON will enable researchers to study trends in conspiracy
theories and understand how individuals can interact with the
conspiracy-theory-producing community or channel. Our data is available at:
https://doi.org/10.5281/zenodo.7466262.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 15:20:51 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Liaw",
"Shaoyi",
""
],
[
"Huang",
"Fan",
""
],
[
"Benevenuto",
"Fabricio",
""
],
[
"Kwak",
"Haewoon",
""
],
[
"An",
"Jisun",
""
]
] |
new_dataset
| 0.999868 |
2304.05312
|
Ashok Patel
|
Riley Kiefer, Jacob Stevens, and Ashok Patel
|
Fingerprint Liveness Detection using Minutiae-Independent Dense Sampling
of Local Patches
|
Submitted, peer-reviewed, accepted, and under publication with
Springer Nature
| null | null | null |
cs.CY
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Fingerprint recognition and matching is a common form of user authentication.
While a fingerprint is unique to each individual, authentication is vulnerable
when an attacker can forge a copy of the fingerprint (spoof). To combat these
spoofed fingerprints, spoof detection and liveness detection algorithms are
currently being researched as countermeasures to this security vulnerability.
This paper introduces a fingerprint anti-spoofing mechanism using machine
learning.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 16:11:44 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Kiefer",
"Riley",
""
],
[
"Stevens",
"Jacob",
""
],
[
"Patel",
"Ashok",
""
]
] |
new_dataset
| 0.986384 |
2304.05335
|
Ameet Deshpande
|
Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan,
Karthik Narasimhan
|
Toxicity in ChatGPT: Analyzing Persona-assigned Language Models
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have shown incredible capabilities and
transcended the natural language processing (NLP) community, with adoption
throughout many services like healthcare, therapy, education, and customer
service. Since users include people with critical information needs like
students or patients engaging with chatbots, the safety of these systems is of
prime importance. Therefore, a clear understanding of the capabilities and
limitations of LLMs is necessary. To this end, we systematically evaluate
toxicity in over half a million generations of ChatGPT, a popular
dialogue-based LLM. We find that setting the system parameter of ChatGPT by
assigning it a persona, say that of the boxer Muhammad Ali, significantly
increases the toxicity of generations. Depending on the persona assigned to
ChatGPT, its toxicity can increase up to 6x, with outputs engaging in incorrect
stereotypes, harmful dialogue, and hurtful opinions. This may be potentially
defamatory to the persona and harmful to an unsuspecting user. Furthermore, we
find concerning patterns where specific entities (e.g., certain races) are
targeted more than others (3x more) irrespective of the assigned persona,
reflecting inherent discriminatory biases in the model. We hope that our findings
inspire the broader AI community to rethink the efficacy of current safety
guardrails and develop better techniques that lead to robust, safe, and
trustworthy AI systems.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 16:53:54 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Deshpande",
"Ameet",
""
],
[
"Murahari",
"Vishvak",
""
],
[
"Rajpurohit",
"Tanmay",
""
],
[
"Kalyan",
"Ashwin",
""
],
[
"Narasimhan",
"Karthik",
""
]
] |
new_dataset
| 0.987615 |
2304.05340
|
Yue Zhang
|
Yue Zhang, Chengtao Peng, Qiuli Wang, Dan Song, Kaiyan Li, S. Kevin
Zhou
|
Unified Multi-Modal Image Synthesis for Missing Modality Imputation
|
10 pages, 9 figures
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal medical images provide complementary soft-tissue characteristics
that aid in the screening and diagnosis of diseases. However, limited scanning
time, image corruption and various imaging protocols often result in incomplete
multi-modal images, thus limiting the usage of multi-modal data for clinical
purposes. To address this issue, in this paper, we propose a novel unified
multi-modal image synthesis method for missing modality imputation. Our method
overall adopts a generative adversarial architecture, which aims to synthesize
missing modalities from any combination of available ones with a single model.
To this end, we specifically design a Commonality- and Discrepancy-Sensitive
Encoder for the generator to exploit both modality-invariant and specific
information contained in input modalities. The incorporation of both types of
information facilitates the generation of images with consistent anatomy and
realistic details of the desired distribution. Besides, we propose a Dynamic
Feature Unification Module to integrate information from a varying number of
available modalities, which enables the network to be robust to random missing
modalities. The module performs both hard integration and soft integration,
ensuring the effectiveness of feature combination while avoiding information
loss. Verified on two public multi-modal magnetic resonance datasets, the
proposed method is effective in handling various synthesis tasks and shows
superior performance compared to previous methods.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 16:59:15 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Zhang",
"Yue",
""
],
[
"Peng",
"Chengtao",
""
],
[
"Wang",
"Qiuli",
""
],
[
"Song",
"Dan",
""
],
[
"Li",
"Kaiyan",
""
],
[
"Zhou",
"S. Kevin",
""
]
] |
new_dataset
| 0.977132 |
2304.05342
|
Gonzalo Ferrer
|
Alexey I. Boyko, Anastasiia Kornilova, Rahim Tariverdizadeh, Mirfarid
Musavian, Larisa Markeeva, Ivan Oseledets and Gonzalo Ferrer
|
TT-SDF2PC: Registration of Point Cloud and Compressed SDF Directly in
the Memory-Efficient Tensor Train Domain
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper addresses the following research question: ``can one compress a
detailed 3D representation and use it directly for point cloud registration?''.
Map compression of the scene can be achieved by the tensor train (TT)
decomposition of the signed distance function (SDF) representation. It
regulates the amount of data reduced by the so-called TT-ranks.
Using this representation we have proposed an algorithm, the TT-SDF2PC, that
is capable of directly registering a PC to the compressed SDF by making use of
efficient calculations of its derivatives in the TT domain, saving computations
and memory. We compare TT-SDF2PC with SOTA local and global registration
methods on a synthetic dataset and a real dataset and show on-par performance
while requiring significantly fewer resources.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 17:01:56 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Boyko",
"Alexey I.",
""
],
[
"Kornilova",
"Anastasiia",
""
],
[
"Tariverdizadeh",
"Rahim",
""
],
[
"Musavian",
"Mirfarid",
""
],
[
"Markeeva",
"Larisa",
""
],
[
"Oseledets",
"Ivan",
""
],
[
"Ferrer",
"Gonzalo",
""
]
] |
new_dataset
| 0.99215 |
2304.05390
|
Eslam Bakr
|
Eslam Mohamed Bakr, Pengzhan Sun, Xiaoqian Shen, Faizan Farooq Khan,
Li Erran Li, Mohamed Elhoseiny
|
HRS-Bench: Holistic, Reliable and Scalable Benchmark for Text-to-Image
Models
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In recent years, Text-to-Image (T2I) models have been extensively studied,
especially with the emergence of diffusion models that achieve state-of-the-art
results on T2I synthesis tasks. However, existing benchmarks heavily rely on
subjective human evaluation, limiting their ability to holistically assess the
model's capabilities. Furthermore, there is a significant gap between efforts
in developing new T2I architectures and those in evaluation. To address this,
we introduce HRS-Bench, a concrete evaluation benchmark for T2I models that is
Holistic, Reliable, and Scalable. Unlike existing benchmarks that focus on
limited aspects, HRS-Bench measures 13 skills that can be categorized into five
major categories: accuracy, robustness, generalization, fairness, and bias. In
addition, HRS-Bench covers 50 scenarios, including fashion, animals,
transportation, food, and clothes. We evaluate nine recent large-scale T2I
models using metrics that cover a wide range of skills. A human evaluation
aligned with 95% of our evaluations on average was conducted to probe the
effectiveness of HRS-Bench. Our experiments demonstrate that existing models
often struggle to generate images with the desired count of objects, visual
text, or grounded emotions. We hope that our benchmark helps ease future
text-to-image generation research. The code and data are available at
https://eslambakr.github.io/hrsbench.github.io
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 17:59:13 GMT"
}
] | 2023-04-12T00:00:00 |
[
[
"Bakr",
"Eslam Mohamed",
""
],
[
"Sun",
"Pengzhan",
""
],
[
"Shen",
"Xiaoqian",
""
],
[
"Khan",
"Faizan Farooq",
""
],
[
"Li",
"Li Erran",
""
],
[
"Elhoseiny",
"Mohamed",
""
]
] |
new_dataset
| 0.964811 |
1809.07870
|
Brenner Rego
|
Brenner S. Rego, Guilherme V. Raffo
|
Suspended Load Path Tracking Control Using a Tilt-rotor UAV Based on
Zonotopic State Estimation
| null | null |
10.1016/j.jfranklin.2018.08.028
| null |
cs.SY cs.RO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work addresses the problem of path tracking control of a suspended load
using a tilt-rotor UAV. The main challenge in controlling this kind of system
arises from the dynamic behavior imposed by the load, which is usually coupled
to the UAV by means of a rope, adding unactuated degrees of freedom to the
whole system. Furthermore, to perform the load transportation, knowledge of the
load position is often needed to accomplish the task. Since
available sensors are commonly embedded in the mobile platform, information on
the load position may not be directly available. To solve this problem in this
work, initially, the kinematics of the multi-body mechanical system are
formulated from the load's perspective, from which a detailed dynamic model is
derived using the Euler-Lagrange approach, yielding a highly coupled, nonlinear
state-space representation of the system, affine in the inputs, with the load's
position and orientation directly represented by state variables. A zonotopic
state estimator is proposed to solve the problem of estimating the load
position and orientation, which is formulated based on sensors located at the
aircraft, with different sampling times, and unknown-but-bounded measurement
noise. To solve the path tracking problem, a discrete-time mixed
$\mathcal{H}_2/\mathcal{H}_\infty$ controller with pole-placement constraints
is designed with guaranteed time-response properties and robust to unmodeled
dynamics, parametric uncertainties, and external disturbances. Results from
numerical experiments, performed in a platform based on the Gazebo simulator
and on a Computer Aided Design (CAD) model of the system, are presented to
corroborate the performance of the zonotopic state estimator along with the
designed controller.
|
[
{
"version": "v1",
"created": "Thu, 20 Sep 2018 21:23:00 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Rego",
"Brenner S.",
""
],
[
"Raffo",
"Guilherme V.",
""
]
] |
new_dataset
| 0.997824 |
1905.06686
|
Om Prakash
|
Habibul Islam and Om Prakash
|
On ZpZp[u, v]-additive cyclic and constacyclic codes
|
It is submitted to the journal
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Let $\mathbb{Z}_{p}$ be the ring of residue classes modulo a prime $p$. The
$\mathbb{Z}_{p}\mathbb{Z}_{p}[u,v]$-additive cyclic codes of length
$(\alpha,\beta)$ are identified as $\mathbb{Z}_{p}[u,v][x]$-submodules of
$\mathbb{Z}_{p}[x]/\langle x^{\alpha}-1\rangle \times
\mathbb{Z}_{p}[u,v][x]/\langle x^{\beta}-1\rangle$ where
$\mathbb{Z}_{p}[u,v]=\mathbb{Z}_{p}+u\mathbb{Z}_{p}+v\mathbb{Z}_{p}$ with
$u^{2}=v^{2}=uv=vu=0$. In this article, we obtain the complete sets of
generator polynomials and minimal generating sets for cyclic codes with length
$\beta$ over $\mathbb{Z}_{p}[u,v]$ and
$\mathbb{Z}_{p}\mathbb{Z}_{p}[u,v]$-additive cyclic codes with length
$(\alpha,\beta)$ respectively. We show that the Gray image of
$\mathbb{Z}_{p}\mathbb{Z}_{p}[u,v]$-additive cyclic code with length
$(\alpha,\beta)$ is either a QC code of length $4\alpha$ with index $4$ or a
generalized QC code of length $(\alpha,3\beta)$ over $\mathbb{Z}_{p}$.
Moreover, some structural properties like generating polynomials, minimal
generating sets of $\mathbb{Z}_{p}\mathbb{Z}_{p}[u,v]$-additive constacyclic
code with length $(\alpha,p-1)$ are determined.
|
[
{
"version": "v1",
"created": "Thu, 16 May 2019 12:25:42 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Apr 2023 15:21:37 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Islam",
"Habibul",
""
],
[
"Prakash",
"Om",
""
]
] |
new_dataset
| 0.999671 |
1908.00140
|
Max Reuter
|
Max Reuter, Gheorghe-Teodor Bercea, Liana Fong
|
"Sliced" Subwindow Search: a Sublinear-complexity Solution to the
Maximum Rectangle Problem
|
8 pages, 7 figures
| null | null | null |
cs.DS cs.CC cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Considering a 2D matrix of positive and negative numbers, how might one draw
a rectangle within it whose contents sum higher than all other rectangles'?
This fundamental problem, commonly known as the maximum rectangle problem or
subwindow search, spans many computational domains. Yet, the problem has not
been solved without demanding computational resources at least linearly
proportional to the size of the matrix. In this work, we present a new approach
to the problem which achieves sublinear time and memory complexities by
interpolating between a small number of equidistant sections of the matrix.
Applied to natural images, our solution outperforms the state-of-the-art by
achieving an 11x increase in speed and memory efficiency at 99% comparative
accuracy. In general, our solution outperforms existing solutions when matrices
are sufficiently large and a marginal decrease in accuracy is acceptable, such
as in many problems involving natural images. As such, it is well-suited for
real-time application and in a variety of computationally hard instances of the
maximum rectangle problem.
|
[
{
"version": "v1",
"created": "Wed, 31 Jul 2019 23:21:52 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Apr 2023 21:48:47 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Reuter",
"Max",
""
],
[
"Bercea",
"Gheorghe-Teodor",
""
],
[
"Fong",
"Liana",
""
]
] |
new_dataset
| 0.986058 |
2105.07132
|
Keisuke Okumura
|
Keisuke Okumura, Fran\c{c}ois Bonnet, Yasumasa Tamura, Xavier D\'efago
|
Offline Time-Independent Multi-Agent Path Planning
|
This is the IJCAI-22 version. The journal version is available in
IEEE Transactions on Robotics (T-RO; 2023; open access)
| null |
10.24963/ijcai.2022/645
| null |
cs.MA cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies a novel planning problem for multiple agents that cannot
share holding resources, named OTIMAPP (Offline Time-Independent Multi-Agent
Path Planning). Given a graph and a set of start-goal pairs, the problem
consists in assigning a path to each agent such that every agent eventually
reaches their goal without blocking each other, regardless of how the agents
are being scheduled at runtime. The motivation stems from the nature of
distributed environments, in which agents take actions fully asynchronously and
have no knowledge of the exact timings of other actors. We present solution
conditions, computational complexity, solvers, and robotic applications.
|
[
{
"version": "v1",
"created": "Sat, 15 May 2021 04:05:01 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2022 12:51:39 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Apr 2023 08:00:00 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Okumura",
"Keisuke",
""
],
[
"Bonnet",
"François",
""
],
[
"Tamura",
"Yasumasa",
""
],
[
"Défago",
"Xavier",
""
]
] |
new_dataset
| 0.984827 |
2203.03610
|
Menelaos Kanakis
|
Menelaos Kanakis, Simon Maurer, Matteo Spallanzani, Ajad Chhatkuli,
Luc Van Gool
|
ZippyPoint: Fast Interest Point Detection, Description, and Matching
through Mixed Precision Discretization
|
Computer Vision and Pattern Recognition Workshop (CVPRW), 2023
| null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Efficient detection and description of geometric regions in images is a
prerequisite in visual systems for localization and mapping. Such systems still
rely on traditional hand-crafted methods for efficient generation of
lightweight descriptors, a common limitation of the more powerful neural
network models that come with high compute and specific hardware requirements.
In this paper, we focus on the adaptations required by detection and
description neural networks to enable their use in computationally limited
platforms such as robots, mobile, and augmented reality devices. To that end,
we investigate and adapt network quantization techniques to accelerate
inference and enable its use on compute limited platforms. In addition, we
revisit common practices in descriptor quantization and propose the use of a
binary descriptor normalization layer, enabling the generation of distinctive
binary descriptors with a constant number of ones. ZippyPoint, our efficient
quantized network with binary descriptors, improves the network runtime speed,
the descriptor matching speed, and the 3D model size, by at least an order of
magnitude when compared to full-precision counterparts. These improvements come
at a minor performance degradation as evaluated on the tasks of homography
estimation, visual localization, and map-free visual relocalization. Code and
models are available at https://github.com/menelaoskanakis/ZippyPoint.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 18:59:03 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Dec 2022 12:34:44 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Apr 2023 18:58:44 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Kanakis",
"Menelaos",
""
],
[
"Maurer",
"Simon",
""
],
[
"Spallanzani",
"Matteo",
""
],
[
"Chhatkuli",
"Ajad",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.999273 |
2204.04746
|
Chinedu Nwoye
|
Chinedu Innocent Nwoye, Deepak Alapatt, Tong Yu, Armine Vardazaryan,
Fangfang Xia, Zixuan Zhao, Tong Xia, Fucang Jia, Yuxuan Yang, Hao Wang,
Derong Yu, Guoyan Zheng, Xiaotian Duan, Neil Getty, Ricardo Sanchez-Matilla,
Maria Robu, Li Zhang, Huabin Chen, Jiacheng Wang, Liansheng Wang, Bokai
Zhang, Beerend Gerats, Sista Raviteja, Rachana Sathish, Rong Tao, Satoshi
Kondo, Winnie Pang, Hongliang Ren, Julian Ronald Abbing, Mohammad Hasan
Sarhan, Sebastian Bodenstedt, Nithya Bhasker, Bruno Oliveira, Helena R.
Torres, Li Ling, Finn Gaida, Tobias Czempiel, Jo\~ao L. Vila\c{c}a, Pedro
Morais, Jaime Fonseca, Ruby Mae Egging, Inge Nicole Wijma, Chen Qian, Guibin
Bian, Zhen Li, Velmurugan Balasubramanian, Debdoot Sheet, Imanol Luengo,
Yuanbo Zhu, Shuai Ding, Jakob-Anton Aschenbrenner, Nicolas Elini van der Kar,
Mengya Xu, Mobarakol Islam, Lalithkumar Seenivasan, Alexander Jenke, Danail
Stoyanov, Didier Mutter, Pietro Mascagni, Barbara Seeliger, Cristians
Gonzalez, Nicolas Padoy
|
CholecTriplet2021: A benchmark challenge for surgical action triplet
recognition
|
CholecTriplet2021 challenge report. Paper accepted at Elsevier
journal of Medical Image Analysis. 22 pages, 8 figures, 11 tables. Challenge
website: https://cholectriplet2021.grand-challenge.org
|
Medical Image Analysis 86 (2023) 102803
|
10.1016/j.media.2023.102803
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Context-aware decision support in the operating room can foster surgical
safety and efficiency by leveraging real-time feedback from surgical workflow
analysis. Most existing works recognize surgical activities at a coarse-grained
level, such as phases, steps or events, leaving out fine-grained interaction
details about the surgical activity; yet those are needed for more helpful AI
assistance in the operating room. Recognizing surgical actions as triplets of
<instrument, verb, target> combination delivers comprehensive details about the
activities taking place in surgical videos. This paper presents
CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for
the recognition of surgical action triplets in laparoscopic videos. The
challenge granted private access to the large-scale CholecT50 dataset, which is
annotated with action triplet information. In this paper, we present the
challenge setup and assessment of the state-of-the-art deep learning methods
proposed by the participants during the challenge. A total of 4 baseline
methods from the challenge organizers and 19 new deep learning algorithms by
competing teams are presented to recognize surgical action triplets directly
from surgical videos, achieving mean average precision (mAP) ranging from 4.2%
to 38.1%. This study also analyzes the significance of the results obtained by
the presented approaches, performs a thorough methodological comparison between
them and an in-depth result analysis, and proposes a novel ensemble method for
enhanced recognition. Our analysis shows that surgical workflow analysis is not
yet solved, and also highlights interesting directions for future research on
fine-grained surgical activity recognition which is of utmost importance for
the development of AI in surgery.
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 18:51:55 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Dec 2022 20:11:19 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Nwoye",
"Chinedu Innocent",
""
],
[
"Alapatt",
"Deepak",
""
],
[
"Yu",
"Tong",
""
],
[
"Vardazaryan",
"Armine",
""
],
[
"Xia",
"Fangfang",
""
],
[
"Zhao",
"Zixuan",
""
],
[
"Xia",
"Tong",
""
],
[
"Jia",
"Fucang",
""
],
[
"Yang",
"Yuxuan",
""
],
[
"Wang",
"Hao",
""
],
[
"Yu",
"Derong",
""
],
[
"Zheng",
"Guoyan",
""
],
[
"Duan",
"Xiaotian",
""
],
[
"Getty",
"Neil",
""
],
[
"Sanchez-Matilla",
"Ricardo",
""
],
[
"Robu",
"Maria",
""
],
[
"Zhang",
"Li",
""
],
[
"Chen",
"Huabin",
""
],
[
"Wang",
"Jiacheng",
""
],
[
"Wang",
"Liansheng",
""
],
[
"Zhang",
"Bokai",
""
],
[
"Gerats",
"Beerend",
""
],
[
"Raviteja",
"Sista",
""
],
[
"Sathish",
"Rachana",
""
],
[
"Tao",
"Rong",
""
],
[
"Kondo",
"Satoshi",
""
],
[
"Pang",
"Winnie",
""
],
[
"Ren",
"Hongliang",
""
],
[
"Abbing",
"Julian Ronald",
""
],
[
"Sarhan",
"Mohammad Hasan",
""
],
[
"Bodenstedt",
"Sebastian",
""
],
[
"Bhasker",
"Nithya",
""
],
[
"Oliveira",
"Bruno",
""
],
[
"Torres",
"Helena R.",
""
],
[
"Ling",
"Li",
""
],
[
"Gaida",
"Finn",
""
],
[
"Czempiel",
"Tobias",
""
],
[
"Vilaça",
"João L.",
""
],
[
"Morais",
"Pedro",
""
],
[
"Fonseca",
"Jaime",
""
],
[
"Egging",
"Ruby Mae",
""
],
[
"Wijma",
"Inge Nicole",
""
],
[
"Qian",
"Chen",
""
],
[
"Bian",
"Guibin",
""
],
[
"Li",
"Zhen",
""
],
[
"Balasubramanian",
"Velmurugan",
""
],
[
"Sheet",
"Debdoot",
""
],
[
"Luengo",
"Imanol",
""
],
[
"Zhu",
"Yuanbo",
""
],
[
"Ding",
"Shuai",
""
],
[
"Aschenbrenner",
"Jakob-Anton",
""
],
[
"van der Kar",
"Nicolas Elini",
""
],
[
"Xu",
"Mengya",
""
],
[
"Islam",
"Mobarakol",
""
],
[
"Seenivasan",
"Lalithkumar",
""
],
[
"Jenke",
"Alexander",
""
],
[
"Stoyanov",
"Danail",
""
],
[
"Mutter",
"Didier",
""
],
[
"Mascagni",
"Pietro",
""
],
[
"Seeliger",
"Barbara",
""
],
[
"Gonzalez",
"Cristians",
""
],
[
"Padoy",
"Nicolas",
""
]
] |
new_dataset
| 0.999608 |
2204.05999
|
Daniel Fried
|
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace,
Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis
|
InCoder: A Generative Model for Code Infilling and Synthesis
|
ICLR 2023. v3: camera-ready that includes PLBART and OpenAI baselines
| null | null | null |
cs.SE cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Code is seldom written in a single left-to-right pass and is instead
repeatedly edited and refined. We introduce InCoder, a unified generative model
that can perform program synthesis (via left-to-right generation) as well as
editing (via infilling). InCoder is trained to generate code files from a large
corpus of permissively licensed code, where regions of code have been randomly
masked and moved to the end of each file, allowing code infilling with
bidirectional context. Our model is the first generative model that is able to
directly perform zero-shot code infilling, which we evaluate on challenging
tasks such as type inference, comment generation, and variable re-naming. We
find that the ability to condition on bidirectional context substantially
improves performance on these tasks, while still performing comparably on
standard program synthesis benchmarks in comparison to left-to-right only
models pretrained at similar scale. The InCoder models and code are publicly
released. https://sites.google.com/view/incoder-code-models
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 16:25:26 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Apr 2022 17:30:27 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Apr 2023 14:31:40 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Fried",
"Daniel",
""
],
[
"Aghajanyan",
"Armen",
""
],
[
"Lin",
"Jessy",
""
],
[
"Wang",
"Sida",
""
],
[
"Wallace",
"Eric",
""
],
[
"Shi",
"Freda",
""
],
[
"Zhong",
"Ruiqi",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Lewis",
"Mike",
""
]
] |
new_dataset
| 0.997964 |
2204.08096
|
Mohammed Shaiqur Rahman
|
Mohammed Shaiqur Rahman, Jiyang Wang, Senem Velipasalar Gursoy, David
Anastasiu, Shuo Wang, Anuj Sharma
|
Synthetic Distracted Driving (SynDD2) dataset for analyzing distracted
behaviors and various gaze zones of a driver
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents a synthetic distracted driving (SynDD2 - a continuum of
SynDD1) dataset for machine learning models to detect and analyze drivers'
various distracted behavior and different gaze zones. We collected the data in
a stationary vehicle using three in-vehicle cameras positioned at locations: on
the dashboard, near the rearview mirror, and on the top right-side window
corner. The dataset contains two activity types: distracted activities and gaze
zones for each participant, and each activity type has two sets: without
appearance blocks and with appearance blocks such as wearing a hat or
sunglasses. The order and duration of each activity for each participant are
random. In addition, the dataset contains manual annotations for each activity,
having its start and end time annotated. Researchers could use this dataset to
evaluate the performance of machine learning algorithms to classify various
distracting activities and gaze zones of drivers.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2022 22:31:41 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Apr 2022 19:16:59 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Apr 2023 07:11:01 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Rahman",
"Mohammed Shaiqur",
""
],
[
"Wang",
"Jiyang",
""
],
[
"Gursoy",
"Senem Velipasalar",
""
],
[
"Anastasiu",
"David",
""
],
[
"Wang",
"Shuo",
""
],
[
"Sharma",
"Anuj",
""
]
] |
new_dataset
| 0.999769 |
2204.10581
|
Dinh Tuan Truong
|
Tuan Truong, Matthias Lenga, Antoine Serrurier, Sadegh Mohammadi
|
FAIR4Cov: Fused Audio Instance and Representation for COVID-19 Detection
| null | null | null | null |
cs.SD cs.AI cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio-based classification techniques on body sounds have long been studied
to support diagnostic decisions, particularly in pulmonary diseases. In
response to the urgency of the COVID-19 pandemic, a growing number of models
are developed to identify COVID-19 patients based on acoustic input. Most
models focus on cough because the dry cough is the best-known symptom of
COVID-19. However, other body sounds, such as breath and speech, have also been
revealed to correlate with COVID-19. In this work, rather than relying
on a specific body sound, we propose Fused Audio Instance and Representation
for COVID-19 Detection (FAIR4Cov). It relies on constructing a joint feature
vector obtained from a plurality of body sounds in waveform and spectrogram
representation. The core component of FAIR4Cov is a self-attention fusion unit
that is trained to establish the relation of multiple body sounds and audio
representations and integrate it into a compact feature vector. We set up our
experiments on different combinations of body sounds using only waveform,
spectrogram, and a joint representation of waveform and spectrogram. Our
findings show that the use of self-attention to combine extracted features from
cough, breath, and speech sounds leads to the best performance with an Area
Under the Receiver Operating Characteristic Curve (AUC) score of 0.8658, a
sensitivity of 0.8057, and a specificity of 0.7958. This AUC is 0.0227 higher
than the one of the models trained on spectrograms only and 0.0847 higher than
the one of the models trained on waveforms only. The results demonstrate that
the combination of spectrogram with waveform representation helps to enrich the
extracted features and outperforms the models with single representation.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 09:01:29 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 08:36:17 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Truong",
"Tuan",
""
],
[
"Lenga",
"Matthias",
""
],
[
"Serrurier",
"Antoine",
""
],
[
"Mohammadi",
"Sadegh",
""
]
] |
new_dataset
| 0.998331 |
2205.15410
|
Yiming Ren
|
Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi
Yu, Lan Xu, Yuexin Ma
|
LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse
Inertial and LiDAR Sensors
| null |
IEEE Transactions on Visualization and Computer Graphics ( Volume:
29, Issue: 5, May 2023)
|
10.1109/TVCG.2023.3247088
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a multi-sensor fusion method for capturing challenging 3D human
motions with accurate consecutive local poses and global trajectories in
large-scale scenarios, only using single LiDAR and 4 IMUs, which are set up
conveniently and worn lightly. Specifically, to fully utilize the global
geometry information captured by LiDAR and local dynamic motions captured by
IMUs, we design a two-stage pose estimator in a coarse-to-fine manner, where
point clouds provide the coarse body shape and IMU measurements optimize the
local actions. Furthermore, considering the translation deviation caused by the
view-dependent partial point cloud, we propose a pose-guided translation
corrector. It predicts the offset between captured points and the real root
locations, which makes the consecutive movements and trajectories more precise
and natural. Moreover, we collect a LiDAR-IMU multi-modal mocap dataset, LIPD,
with diverse human actions in long-range scenarios. Extensive quantitative and
qualitative experiments on LIPD and other open datasets all demonstrate the
capability of our approach for compelling motion capture in large-scale
scenarios, which outperforms other methods by an obvious margin. We will
release our code and captured dataset to stimulate future research.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 20:15:11 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 18:04:50 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Ren",
"Yiming",
""
],
[
"Zhao",
"Chengfeng",
""
],
[
"He",
"Yannan",
""
],
[
"Cong",
"Peishan",
""
],
[
"Liang",
"Han",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Xu",
"Lan",
""
],
[
"Ma",
"Yuexin",
""
]
] |
new_dataset
| 0.998919 |
2207.09446
|
Rao Fu
|
Rao Fu, Xiao Zhan, Yiwen Chen, Daniel Ritchie, Srinath Sridhar
|
ShapeCrafter: A Recursive Text-Conditioned 3D Shape Generation Model
|
Presented at the Advances in Neural Information Processing Systems
(NeurIPS) 2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ShapeCrafter, a neural network for recursive text-conditioned 3D
shape generation. Existing methods to generate text-conditioned 3D shapes
consume an entire text prompt to generate a 3D shape in a single step. However,
humans tend to describe shapes recursively: we may start with an initial
description and progressively add details based on intermediate results. To
capture this recursive process, we introduce a method to generate a 3D shape
distribution, conditioned on an initial phrase, that gradually evolves as more
phrases are added. Since existing datasets are insufficient for training this
approach, we present Text2Shape++, a large dataset of 369K shape-text pairs
that supports recursive shape generation. To capture local details that are
often used to refine shape descriptions, we build on top of vector-quantized
deep implicit functions that generate a distribution of high-quality shapes.
Results show that our method can generate shapes consistent with text
descriptions, and shapes evolve gradually as more phrases are added. Our method
supports shape editing, extrapolation, and can enable new applications in
human-machine collaboration for creative design.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 17:59:01 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Oct 2022 17:59:03 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Nov 2022 22:47:22 GMT"
},
{
"version": "v4",
"created": "Sat, 8 Apr 2023 17:08:55 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Fu",
"Rao",
""
],
[
"Zhan",
"Xiao",
""
],
[
"Chen",
"Yiwen",
""
],
[
"Ritchie",
"Daniel",
""
],
[
"Sridhar",
"Srinath",
""
]
] |
new_dataset
| 0.99974 |
2209.13351
|
Jiaqing Zhang
|
Jiaqing Zhang, Jie Lei, Weiying Xie, Zhenman Fang, Yunsong Li, Qian Du
|
SuperYOLO: Super Resolution Assisted Object Detection in Multimodal
Remote Sensing Imagery
|
The article is accepted by IEEE Transactions on Geoscience and Remote
Sensing
| null |
10.1109/TGRS.2023.3258666
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately and timely detecting multiscale small objects that contain tens of
pixels from remote sensing images (RSI) remains challenging. Most of the
existing solutions primarily design complex deep neural networks to learn
strong feature representations for objects separated from the background, which
often results in a heavy computation burden. In this article, we propose an
accurate yet fast object detection method for RSI, named SuperYOLO, which fuses
multimodal data and performs high-resolution (HR) object detection on
multiscale objects by utilizing the assisted super resolution (SR) learning and
considering both the detection accuracy and computation cost. First, we utilize
a symmetric compact multimodal fusion (MF) to extract supplementary information
from various data for improving small object detection in RSI. Furthermore, we
design a simple and flexible SR branch to learn HR feature representations that
can discriminate small objects from vast backgrounds with low-resolution (LR)
input, thus further improving the detection accuracy. Moreover, to avoid
introducing additional computation, the SR branch is discarded in the inference
stage, and the computation of the network model is reduced due to the LR input.
Experimental results show that, on the widely used VEDAI RS dataset, SuperYOLO
achieves an accuracy of 75.09% (in terms of mAP50), which is more than 10%
higher than the SOTA large models, such as YOLOv5l, YOLOv5x, and RS designed
YOLOrs. Meanwhile, the parameter size and GFLOPs of SuperYOLO are about 18
times and 3.8 times less than YOLOv5x. Our proposed model shows a favorable
accuracy and speed tradeoff compared to the state-of-the-art models. The code
will be open-sourced at https://github.com/icey-zhang/SuperYOLO.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 12:58:58 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Apr 2023 09:50:26 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Zhang",
"Jiaqing",
""
],
[
"Lei",
"Jie",
""
],
[
"Xie",
"Weiying",
""
],
[
"Fang",
"Zhenman",
""
],
[
"Li",
"Yunsong",
""
],
[
"Du",
"Qian",
""
]
] |
new_dataset
| 0.999834 |
2210.11634
|
Xiaoya Li
|
Jinchuan Cui, Xiaoya Li
|
A polynomial-time algorithm to solve the large scale of airplane
refueling problem
|
18 pages, 2 figures
| null | null | null |
cs.DS math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The airplane refueling problem is a nonlinear combinatorial optimization problem
with $n!$ feasible solutions. Given a fleet of $n$ airplanes with a mid-air
refueling technique, each airplane has a specific fuel capacity and fuel
consumption rate. The fleet starts to fly together to the same target, and
during the trip each airplane can instantaneously refuel other airplanes and
then be dropped out. The question is how to find the best refueling policy that
makes the last remaining airplane travel the farthest. To solve the large-scale
airplane refueling problem in polynomial time, we propose the
definition of the sequential feasible solution by employing the data structural
properties of the airplane refueling problem. We prove that if an airplane
refueling problem has feasible solutions, it must have sequential feasible
solutions, and its optimal feasible solution must be the optimal sequential
feasible solution. Then we present the sequential search algorithm, whose
computational complexity depends on the number of sequential feasible
solutions, referred to as $Q_n$, which is proved to be upper bounded by
$2^{n-2}$; this exponential bound lacks applicability to larger inputs in the
worst case. Therefore, we investigate the complexity behavior of the sequential
search algorithm from a dynamic perspective and find that $Q_n$ is bounded by
$\frac{m^2}{n}C_n^m$ when the input $n$ is greater than $2m$. Here $m$ is a
constant and $2m$ is regarded as the "inflection point" of the complexity of
the sequential search algorithm from exponential-time to polynomial-time.
Moreover, we build an efficient computability scheme according to which the
specific complexity of the sequential search algorithm can be predicted, so
that decision makers or users can choose a proper algorithm given the available
running time.
|
[
{
"version": "v1",
"created": "Tue, 18 Oct 2022 16:41:04 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Apr 2023 15:07:57 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Cui",
"Jinchuan",
""
],
[
"Li",
"Xiaoya",
""
]
] |
new_dataset
| 0.984591 |
2211.04658
|
Gilberto Ochoa-Ruiz
|
Rafael Martinez-Garcia-Pe\~na, Mansoor Ali Teevno, Gilberto
Ochoa-Ruiz, Sharib Ali
|
SUPRA: Superpixel Guided Loss for Improved Multi-modal Segmentation in
Endoscopy
|
This work has been accepted at the LatinX in Computer Vision Research
Workshop at CVPR 2023
| null | null | null |
cs.CV cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Domain shift is a well-known problem in the medical imaging community. In
particular, for endoscopic image analysis where the data can have different
modalities the performance of deep learning (DL) methods gets adversely
affected. In other words, methods developed on one modality cannot be used for
a different modality. However, in real clinical settings, endoscopists switch
between modalities for better mucosal visualisation. In this paper, we explore
the domain generalisation technique to enable DL methods to be used in such
scenarios. To this end, we propose to use superpixels generated with Simple
Linear Iterative Clustering (SLIC), which we refer to as "SUPRA" for SUPeRpixel
Augmented method. SUPRA first generates a preliminary segmentation mask making
use of our new loss "SLICLoss" that encourages both an accurate and
color-consistent segmentation. We demonstrate that SLICLoss, when combined with
Binary Cross Entropy (BCE) loss, can improve the model's generalisability with
data that presents significant domain shift. We validate this novel compound
loss on a vanilla U-Net using the EndoUDA dataset, which contains images for
Barrett's Esophagus and polyps from two modalities. We show that our method
yields an improvement of nearly 20% in the target domain set compared to the
baseline.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 03:13:59 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Nov 2022 01:41:35 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Apr 2023 18:30:47 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Martinez-Garcia-Peña",
"Rafael",
""
],
[
"Teevno",
"Mansoor Ali",
""
],
[
"Ochoa-Ruiz",
"Gilberto",
""
],
[
"Ali",
"Sharib",
""
]
] |
new_dataset
| 0.999322 |
2211.07945
|
Joohwan Seo
|
Joohwan Seo, Nikhil Potu Surya Prakash, Alexander Rose and Roberto
Horowitz
|
Geometric Impedance Control on SE(3) for Robotic Manipulators
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
After its introduction, impedance control has been utilized as a primary
control scheme for robotic manipulation tasks that involve interaction with
unknown environments. While impedance control has been extensively studied, the
geometric structure of SE(3) for the robotic manipulator itself and its use in
formulating a robotic task has not been adequately addressed. In this paper, we
propose a differential geometric approach to impedance control. Given a
left-invariant error metric in SE(3), the corresponding error vectors in
position and velocity are first derived. We then propose the impedance control
schemes that adequately account for the geometric structure of the manipulator
in SE(3) based on a left-invariant potential function. The closed-loop
stabilities for the proposed control schemes are verified using Lyapunov
function-based analysis. The proposed control design clearly outperformed a
conventional impedance control approach when tracking challenging trajectory
profiles.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 07:07:38 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Apr 2023 04:19:41 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Seo",
"Joohwan",
""
],
[
"Prakash",
"Nikhil Potu Surya",
""
],
[
"Rose",
"Alexander",
""
],
[
"Horowitz",
"Roberto",
""
]
] |
new_dataset
| 0.993065 |
2211.14425
|
Han Gao
|
Han Gao, Xu Han, Jiaoyang Huang, Jian-Xun Wang, Li-Ping Liu
|
PatchGT: Transformer over Non-trainable Clusters for Learning Graph
Representations
|
25 pages, 10 figures
| null | null | null |
cs.LG cs.AI math.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Recently the Transformer structure has shown good performances in graph
learning tasks. However, these Transformer models directly work on graph nodes
and may have difficulties learning high-level information. Inspired by the
vision transformer, which applies to image patches, we propose a new
Transformer-based graph neural network: Patch Graph Transformer (PatchGT).
Unlike previous transformer-based models for learning graph representations,
PatchGT learns from non-trainable graph patches, not from nodes directly. It
can help save computation and improve the model performance. The key idea is to
segment a graph into patches based on spectral clustering without any trainable
parameters, with which the model can first use GNN layers to learn patch-level
representations and then use Transformer to obtain graph-level representations.
The architecture leverages the spectral information of graphs and combines the
strengths of GNNs and Transformers. Further, we show the limitations of
previous hierarchical trainable clusters theoretically and empirically. We also
prove the proposed non-trainable spectral clustering method is permutation
invariant and can help address the information bottlenecks in the graph.
PatchGT achieves higher expressiveness than 1-WL-type GNNs, and the empirical
study shows that PatchGT achieves competitive performances on benchmark
datasets and provides interpretability to its predictions. The implementation
of our algorithm is released at our Github repo:
https://github.com/tufts-ml/PatchGT.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 01:17:23 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 19:39:46 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Gao",
"Han",
""
],
[
"Han",
"Xu",
""
],
[
"Huang",
"Jiaoyang",
""
],
[
"Wang",
"Jian-Xun",
""
],
[
"Liu",
"Li-Ping",
""
]
] |
new_dataset
| 0.999413 |
2211.14461
|
Zixiang Zhao
|
Zixiang Zhao, Haowen Bai, Jiangshe Zhang, Yulun Zhang, Shuang Xu, Zudi
Lin, Radu Timofte, Luc Van Gool
|
CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for
Multi-Modality Image Fusion
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modality (MM) image fusion aims to render fused images that maintain
the merits of different modalities, e.g., functional highlight and detailed
textures. To tackle the challenge in modeling cross-modality features and
decomposing desirable modality-specific and modality-shared features, we
propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse)
network. Firstly, CDDFuse uses Restormer blocks to extract cross-modality
shallow features. We then introduce a dual-branch Transformer-CNN feature
extractor with Lite Transformer (LT) blocks leveraging long-range attention to
handle low-frequency global features and Invertible Neural Networks (INN)
blocks focusing on extracting high-frequency local information. A
correlation-driven loss is further proposed to make the low-frequency features
correlated while the high-frequency features uncorrelated based on the embedded
information. Then, the LT-based global fusion and INN-based local fusion layers
output the fused image. Extensive experiments demonstrate that our CDDFuse
achieves promising results in multiple fusion tasks, including infrared-visible
image fusion and medical image fusion. We also show that CDDFuse can boost the
performance in downstream infrared-visible semantic segmentation and object
detection in a unified benchmark. The code is available at
https://github.com/Zhaozixiang1228/MMIF-CDDFuse.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 02:40:28 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 10:46:30 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Zhao",
"Zixiang",
""
],
[
"Bai",
"Haowen",
""
],
[
"Zhang",
"Jiangshe",
""
],
[
"Zhang",
"Yulun",
""
],
[
"Xu",
"Shuang",
""
],
[
"Lin",
"Zudi",
""
],
[
"Timofte",
"Radu",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.9894 |
2212.01615
|
Omar Costa Hamido
|
Omar Costa Hamido and Paulo Vitor Itabora\'i
|
OSC-Qasm: Interfacing Music Software with Quantum Computing
| null | null |
10.1007/978-3-031-29956-8_24
| null |
cs.ET cs.HC cs.SE quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
OSC-Qasm is a cross-platform, Python-based, OSC interface for executing Qasm
code. It serves as a simple way to connect creative programming environments
like Max (with The QAC Toolkit) and Pure Data with real quantum hardware, using
the Open Sound Control protocol. In this paper, the authors introduce the
context and meaning of developing a tool like this, and what it can offer to
creative artists.
|
[
{
"version": "v1",
"created": "Sat, 3 Dec 2022 13:24:16 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2022 08:55:46 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Hamido",
"Omar Costa",
""
],
[
"Itaboraí",
"Paulo Vitor",
""
]
] |
new_dataset
| 0.998954 |
2212.01779
|
Yuan Sun
|
Junjie Deng, Hanru Shi, Xinhe Yu, Wugedele Bao, Yuan Sun, Xiaobing
Zhao
|
MiLMo:Minority Multilingual Pre-trained Language Model
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained language models are trained on large-scale unsupervised data and
can be fine-tuned on only small-scale labeled datasets while still achieving
good results. Multilingual pre-trained language models can be trained on
multiple languages, so a single model can understand multiple languages at the
same time. At present, research on pre-trained models mainly focuses on
rich-resource languages, while there is relatively little research on
low-resource languages such as minority languages, and public multilingual
pre-trained language models cannot work well for minority languages. Therefore, this paper
constructs a multilingual pre-trained model named MiLMo that performs better on
minority language tasks, including Mongolian, Tibetan, Uyghur, Kazakh and
Korean. To solve the problem of scarcity of datasets on minority languages and
verify the effectiveness of the MiLMo model, this paper constructs a minority
multilingual text classification dataset named MiTC, and trains a word2vec
model for each language. By comparing the word2vec model and the pre-trained
model in the text classification task, this paper provides an optimal scheme
for the downstream task research of minority languages. The final experimental
results show that the performance of the pre-trained model is better than that
of the word2vec model, and it has achieved the best results in minority
multilingual text classification. The multilingual pre-trained model MiLMo,
multilingual word2vec model and multilingual text classification dataset MiTC
are published on http://milmo.cmli-nlp.com/.
|
[
{
"version": "v1",
"created": "Sun, 4 Dec 2022 09:28:17 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 08:54:47 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Deng",
"Junjie",
""
],
[
"Shi",
"Hanru",
""
],
[
"Yu",
"Xinhe",
""
],
[
"Bao",
"Wugedele",
""
],
[
"Sun",
"Yuan",
""
],
[
"Zhao",
"Xiaobing",
""
]
] |
new_dataset
| 0.995957 |
2301.02560
|
Vikram V. Ramaswamy
|
Vikram V. Ramaswamy, Sing Yu Lin, Dora Zhao, Aaron B. Adcock, Laurens
van der Maaten, Deepti Ghadiyaram, Olga Russakovsky
|
GeoDE: a Geographically Diverse Evaluation Dataset for Object
Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current dataset collection methods typically scrape large amounts of data
from the web. While this technique is extremely scalable, data collected in
this way tends to reinforce stereotypical biases, can contain personally
identifiable information, and typically originates from Europe and North
America. In this work, we rethink the dataset collection paradigm and introduce
GeoDE, a geographically diverse dataset with 61,940 images from 40 classes and
6 world regions, and no personally identifiable information, collected through
crowd-sourcing. We analyse GeoDE to understand differences in images collected
in this manner compared to web-scraping. Despite the smaller size of this
dataset, we demonstrate its use as both an evaluation and training dataset,
highlight shortcomings in current models, as well as show improved performances
when even small amounts of GeoDE (1000 - 2000 images per region) are added to a
training dataset. We release the full dataset and code at
https://geodiverse-data-collection.cs.princeton.edu/
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 18:21:50 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 21:54:59 GMT"
},
{
"version": "v3",
"created": "Sat, 8 Apr 2023 00:10:46 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Ramaswamy",
"Vikram V.",
""
],
[
"Lin",
"Sing Yu",
""
],
[
"Zhao",
"Dora",
""
],
[
"Adcock",
"Aaron B.",
""
],
[
"van der Maaten",
"Laurens",
""
],
[
"Ghadiyaram",
"Deepti",
""
],
[
"Russakovsky",
"Olga",
""
]
] |
new_dataset
| 0.999181 |
2301.04224
|
Xindi Wu
|
Xindi Wu, KwunFung Lau, Francesco Ferroni, Aljo\v{s}a O\v{s}ep, Deva
Ramanan
|
Pix2Map: Cross-modal Retrieval for Inferring Street Maps from Images
|
12 pages, 8 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-driving vehicles rely on urban street maps for autonomous navigation. In
this paper, we introduce Pix2Map, a method for inferring urban street map
topology directly from ego-view images, as needed to continually update and
expand existing maps. This is a challenging task, as we need to infer a complex
urban road topology directly from raw image data. The main insight of this
paper is that this problem can be posed as cross-modal retrieval by learning a
joint, cross-modal embedding space for images and existing maps, represented as
discrete graphs that encode the topological layout of the visual surroundings.
We conduct our experimental evaluation using the Argoverse dataset and show
that it is indeed possible to accurately retrieve street maps corresponding to
both seen and unseen roads solely from image data. Moreover, we show that our
retrieved maps can be used to update or expand existing maps and even show
proof-of-concept results for visual localization and image retrieval from
spatial graphs.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 22:05:35 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Apr 2023 21:30:05 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Wu",
"Xindi",
""
],
[
"Lau",
"KwunFung",
""
],
[
"Ferroni",
"Francesco",
""
],
[
"Ošep",
"Aljoša",
""
],
[
"Ramanan",
"Deva",
""
]
] |
new_dataset
| 0.998172 |
2301.06083
|
Yuntian Chen
|
Qian Li, Yuxiao Hu, Ye Liu, Dongxiao Zhang, Xin Jin, Yuntian Chen
|
Discrete Point-wise Attack Is Not Enough: Generalized Manifold
Adversarial Attack for Face Recognition
|
Accepted by CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classical adversarial attacks for Face Recognition (FR) models typically
generate discrete examples for target identity with a single state image.
However, such paradigm of point-wise attack exhibits poor generalization
against numerous unknown states of identity and can be easily defended. In this
paper, by rethinking the inherent relationship between the face of target
identity and its variants, we introduce a new pipeline of Generalized Manifold
Adversarial Attack (GMAA) to achieve a better attack performance by expanding
the attack range. Specifically, this expansion lies in two aspects: GMAA not
only expands the target to be attacked from one to many to encourage a good
generalization ability for the generated adversarial examples, but it also
expands the latter from discrete points to a manifold by leveraging the domain
knowledge that facial expression change can be continuous, which enhances the
attack effect as a data augmentation mechanism does. Moreover, we further design
a dual supervision with local and global constraints as a minor contribution to
improve the visual quality of the generated adversarial examples. We
demonstrate the effectiveness of our method based on extensive experiments, and
reveal that GMAA promises a semantically continuous adversarial space with
higher generalization ability and visual quality.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 02:57:55 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Apr 2023 02:47:42 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Li",
"Qian",
""
],
[
"Hu",
"Yuxiao",
""
],
[
"Liu",
"Ye",
""
],
[
"Zhang",
"Dongxiao",
""
],
[
"Jin",
"Xin",
""
],
[
"Chen",
"Yuntian",
""
]
] |
new_dataset
| 0.958352 |
2302.10126
|
Radu Tudor Ionescu
|
Eduard Poesina, Radu Tudor Ionescu, Josiane Mothe
|
iQPP: A Benchmark for Image Query Performance Prediction
|
Accepted at SIGIR 2023
| null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To date, query performance prediction (QPP) in the context of content-based
image retrieval remains a largely unexplored task, especially in the
query-by-example scenario, where the query is an image. To boost the
exploration of the QPP task in image retrieval, we propose the first benchmark
for image query performance prediction (iQPP). First, we establish a set of
four data sets (PASCAL VOC 2012, Caltech-101, ROxford5k and RParis6k) and
estimate the ground-truth difficulty of each query as the average precision or
the precision@k, using two state-of-the-art image retrieval models. Next, we
propose and evaluate novel pre-retrieval and post-retrieval query performance
predictors, comparing them with existing or adapted (from text to image)
predictors. The empirical results show that most predictors do not generalize
across evaluation scenarios. Our comprehensive experiments indicate that iQPP
is a challenging benchmark, revealing an important research gap that needs to
be addressed in future work. We release our code and data as open source at
https://github.com/Eduard6421/iQPP, to foster future research.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 17:56:57 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2023 09:13:06 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Apr 2023 06:41:46 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Poesina",
"Eduard",
""
],
[
"Ionescu",
"Radu Tudor",
""
],
[
"Mothe",
"Josiane",
""
]
] |
new_dataset
| 0.999492 |
2302.11559
|
MD Shamimul Islam
|
Md Shamimul Islam, A.J.M. Akhtarujjaman Joha, Md Nur Hossain, Sohaib
Abdullah, Ibrahim Elwarfalli, Md Mahedi Hasan
|
Word level Bangla Sign Language Dataset for Continuous BSL Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A robust sign language recognition system can greatly alleviate
communication barriers, particularly for people who struggle with verbal
communication. This is crucial for human growth and progress as it enables the
expression of thoughts, feelings, and ideas. However, sign recognition is a
complex task that faces numerous challenges, such as identical gesture patterns for
multiple signs, lighting, clothing, carrying conditions, and the presence of
large poses, as well as illumination discrepancies across different views.
Additionally, the absence of an extensive Bangla sign language video dataset
makes it even more challenging to operate recognition systems, particularly
when utilizing deep learning techniques. In order to address this issue,
firstly, we created a large-scale dataset called the MVBSL-W50, which comprises
50 isolated words across 13 categories. Secondly, we developed an
attention-based Bi-GRU model that captures the temporal dynamics of pose
information for individuals communicating through sign language. The proposed
model utilizes human pose information, which has been shown to be successful in
analyzing sign language patterns. By focusing solely on movement information
and disregarding body appearance and environmental factors, the model is
simplified and achieves faster performance. The accuracy of the model is
reported to be 85.64%.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 18:55:54 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Apr 2023 18:48:21 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Islam",
"Md Shamimul",
""
],
[
"Joha",
"A. J. M. Akhtarujjaman",
""
],
[
"Hossain",
"Md Nur",
""
],
[
"Abdullah",
"Sohaib",
""
],
[
"Elwarfalli",
"Ibrahim",
""
],
[
"Hasan",
"Md Mahedi",
""
]
] |
new_dataset
| 0.999864 |
2304.00947
|
Mehdi S. M. Sajjadi
|
Aleksandr Safin, Daniel Duckworth, Mehdi S. M. Sajjadi
|
RePAST: Relative Pose Attention Scene Representation Transformer
| null | null | null | null |
cs.CV cs.AI cs.GR cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Scene Representation Transformer (SRT) is a recent method to render novel
views at interactive rates. Since SRT uses camera poses with respect to an
arbitrarily chosen reference camera, it is not invariant to the order of the
input views. As a result, SRT is not directly applicable to large-scale scenes
where the reference frame would need to be changed regularly. In this work, we
propose Relative Pose Attention SRT (RePAST): Instead of fixing a reference
frame at the input, we inject pairwise relative camera pose information
directly into the attention mechanism of the Transformers. This leads to a
model that is by definition invariant to the choice of any global reference
frame, while still retaining the full capabilities of the original method.
Empirical results show that adding this invariance to the model does not lead
to a loss in quality. We believe that this is a step towards applying fully
latent transformer-based rendering methods to large-scale scenes.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 13:13:12 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 13:11:13 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Safin",
"Aleksandr",
""
],
[
"Duckworth",
"Daniel",
""
],
[
"Sajjadi",
"Mehdi S. M.",
""
]
] |
new_dataset
| 0.997878 |
2304.01108
|
Jordan Suchow
|
Jordan W. Suchow and Necdet G\"urkan
|
Coincidental Generation
| null | null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative A.I. models have emerged as versatile tools across diverse
industries, with applications in privacy-preserving data sharing, computational
art, personalization of products and services, and immersive entertainment.
Here, we introduce a new privacy concern in the adoption and use of generative
A.I. models: that of coincidental generation, where a generative model's output
is similar enough to an existing entity, beyond those represented in the
dataset used to train the model, to be mistaken for it. Consider, for example,
synthetic portrait generators, which are today deployed in commercial
applications such as virtual modeling agencies and synthetic stock photography.
Due to the low intrinsic dimensionality of human face perception, every
synthetically generated face will coincidentally resemble an actual person.
Such examples of coincidental generation all but guarantee the misappropriation
of likeness and expose organizations that use generative A.I. to legal and
regulatory risk.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 16:08:22 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 15:16:04 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Suchow",
"Jordan W.",
""
],
[
"Gürkan",
"Necdet",
""
]
] |
new_dataset
| 0.967749 |
2304.01964
|
Aditi Mishra
|
Aditi Mishra, Utkarsh Soni, Anjana Arunkumar, Jinbin Huang, Bum Chul
Kwon, Chris Bryan
|
PromptAid: Prompt Exploration, Perturbation, Testing and Iteration using
Visual Analytics for Large Language Models
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) have gained widespread popularity due to their
ability to perform ad-hoc Natural Language Processing (NLP) tasks with a simple
natural language prompt. Part of the appeal for LLMs is their approachability
to the general public, including individuals with no prior technical experience
in NLP techniques. However, natural language prompts can vary significantly in
terms of their linguistic structure, context, and other semantics. Modifying
one or more of these aspects can result in significant differences in task
performance. Non-expert users may find it challenging to identify the changes
needed to improve a prompt, especially when they lack domain-specific knowledge
and appropriate feedback. To address this challenge, we present PromptAid,
a visual analytics system designed to interactively create, refine, and test
prompts through exploration, perturbation, testing, and iteration. PromptAid
uses multiple coordinated visualizations that allow users to improve prompts
through three strategies: keyword perturbations, paraphrasing perturbations,
and obtaining the best set of in-context few-shot examples.
PromptAid was designed through an iterative prototyping process involving NLP
experts and was evaluated through quantitative and qualitative assessments for
LLMs. Our findings indicate that PromptAid helps users iterate over prompt
template alterations with less cognitive overhead, generate diverse prompts
with the help of recommendations, and analyze the performance of the generated
prompts while surpassing existing state-of-the-art prompting interfaces in
performance.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 17:14:54 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Apr 2023 16:25:10 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Mishra",
"Aditi",
""
],
[
"Soni",
"Utkarsh",
""
],
[
"Arunkumar",
"Anjana",
""
],
[
"Huang",
"Jinbin",
""
],
[
"Kwon",
"Bum Chul",
""
],
[
"Bryan",
"Chris",
""
]
] |
new_dataset
| 0.99523 |
2304.02084
|
Stephen Parsons
|
Stephen Parsons, C. Seth Parker, Christy Chapman, Mami Hayashida, W.
Brent Seales
|
EduceLab-Scrolls: Verifiable Recovery of Text from Herculaneum Papyri
using X-ray CT
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a complete software pipeline for revealing the hidden texts of the
Herculaneum papyri using X-ray CT images. This enhanced virtual unwrapping
pipeline combines machine learning with a novel geometric framework linking 3D
and 2D images. We also present EduceLab-Scrolls, a comprehensive open dataset
representing two decades of research effort on this problem. EduceLab-Scrolls
contains a set of volumetric X-ray CT images of both small fragments and
intact, rolled scrolls. The dataset also contains 2D image labels that are used
in the supervised training of an ink detection model. Labeling is enabled by
aligning spectral photography of scroll fragments with X-ray CT images of the
same fragments, thus creating a machine-learnable mapping between image spaces
and modalities. This alignment permits supervised learning for the detection of
"invisible" carbon ink in X-ray CT, a task that is "impossible" even for human
expert labelers. To our knowledge, this is the first aligned dataset of its
kind and is the largest dataset ever released in the heritage domain. Our
method is capable of revealing accurate lines of text on scroll fragments with
known ground truth. Revealed text is verified using visual confirmation,
quantitative image metrics, and scholarly review. EduceLab-Scrolls has also
enabled the discovery, for the first time, of hidden texts from the Herculaneum
papyri, which we present here. We anticipate that the EduceLab-Scrolls dataset
will generate more textual discovery as research continues.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 19:28:51 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Apr 2023 16:14:46 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Parsons",
"Stephen",
""
],
[
"Parker",
"C. Seth",
""
],
[
"Chapman",
"Christy",
""
],
[
"Hayashida",
"Mami",
""
],
[
"Seales",
"W. Brent",
""
]
] |
new_dataset
| 0.999195 |
2304.03824
|
Murat Kuscu Dr
|
Meltem Civas, Murat Kuscu, Oktay Cetinkaya, Beyza E. Ortlek, Ozgur B.
Akan
|
Graphene and Related Materials for the Internet of Bio-Nano Things
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet of Bio-Nano Things (IoBNT) is a transformative communication
framework, characterized by heterogeneous networks comprising both biological
entities and artificial micro/nano-scale devices, so-called Bio-Nano Things
(BNTs), interfaced with conventional communication networks for enabling
innovative biomedical and environmental applications. Realizing the potential
of IoBNT requires the development of new and unconventional communication
technologies, such as molecular communications, as well as the corresponding
transceivers, bio-cyber interfacing technologies connecting the biochemical
domain of IoBNT to the electromagnetic domain of conventional networks, and
miniaturized energy harvesting and storage components for the continuous power
supply to BNTs. Graphene and related materials (GRMs) exhibit exceptional
electrical, optical, biochemical, and mechanical properties, rendering them
ideal candidates for addressing the challenges posed by IoBNT. This perspective
article highlights recent advancements in GRM-based device technologies that
are promising for implementing the core components of IoBNT. By identifying the
unique opportunities afforded by GRMs and aligning them with the practical
challenges associated with IoBNT, particularly in the materials domain, our aim
is to accelerate the transition of envisaged IoBNT applications from
theoretical concepts to practical implementations, while also uncovering new
application areas for GRMs.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 19:36:17 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Civas",
"Meltem",
""
],
[
"Kuscu",
"Murat",
""
],
[
"Cetinkaya",
"Oktay",
""
],
[
"Ortlek",
"Beyza E.",
""
],
[
"Akan",
"Ozgur B.",
""
]
] |
new_dataset
| 0.999103 |
2304.03834
|
Kan Chen
|
Kan Chen, Runzhou Ge, Hang Qiu, Rami Al-Rfou, Charles R. Qi, Xuanyu
Zhou, Zoey Yang, Scott Ettinger, Pei Sun, Zhaoqi Leng, Mustafa Mustafa, Ivan
Bogun, Weiyue Wang, Mingxing Tan, Dragomir Anguelov
|
WOMD-LiDAR: Raw Sensor Dataset Benchmark for Motion Forecasting
|
Dataset website: https://waymo.com/open/data/motion/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Widely adopted motion forecasting datasets substitute the observed sensory
inputs with higher-level abstractions such as 3D boxes and polylines. These
sparse shapes are inferred through annotating the original scenes with
perception systems' predictions. Such intermediate representations tie the
quality of the motion forecasting models to the performance of computer vision
models. Moreover, the human-designed explicit interfaces between perception and
motion forecasting typically pass only a subset of the semantic information
present in the original sensory input. To study the effect of these modular
approaches, design new paradigms that mitigate these limitations, and
accelerate the development of end-to-end motion forecasting models, we augment
the Waymo Open Motion Dataset (WOMD) with large-scale, high-quality, diverse
LiDAR data for the motion forecasting task.
The new augmented dataset, WOMD-LiDAR, consists of over 100,000 scenes, each
spanning 20 seconds and containing well-synchronized, calibrated, high-quality
LiDAR point clouds captured across a range of urban and suburban
geographies (https://waymo.com/open/data/motion/). Compared to Waymo Open
Dataset (WOD), the WOMD-LiDAR dataset contains 100x more scenes. Furthermore, we
integrate the LiDAR data into the motion forecasting model training and provide
a strong baseline. Experiments show that the LiDAR data improves performance on
the motion forecasting task. We hope that WOMD-LiDAR will provide new
opportunities for boosting end-to-end motion forecasting models.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 20:23:15 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Chen",
"Kan",
""
],
[
"Ge",
"Runzhou",
""
],
[
"Qiu",
"Hang",
""
],
[
"Ai-Rfou",
"Rami",
""
],
[
"Qi",
"Charles R.",
""
],
[
"Zhou",
"Xuanyu",
""
],
[
"Yang",
"Zoey",
""
],
[
"Ettinger",
"Scott",
""
],
[
"Sun",
"Pei",
""
],
[
"Leng",
"Zhaoqi",
""
],
[
"Mustafa",
"Mustafa",
""
],
[
"Bogun",
"Ivan",
""
],
[
"Wang",
"Weiyue",
""
],
[
"Tan",
"Mingxing",
""
],
[
"Anguelov",
"Dragomir",
""
]
] |
new_dataset
| 0.999398 |
2304.03848
|
Kyounggon Kim Dr.
|
Yu-Min Jeon, Won-Mu Heo, Jong-Min Kim, Kyounggon Kim
|
Multimedia Distribution Process Tracking for Android and iOS
|
10 pages
| null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The crime of illegally filming and distributing images or videos worldwide is
increasing day by day. With the increasing penetration rate of smartphones,
there has been a rise in crimes involving secretly taking pictures of people's
bodies and distributing them through messengers. However, little research has
been done on these related issues. The crime of distributing media using the
world's most popular messengers, WhatsApp and Telegram, is continuously increasing.
It is also common to see criminals distributing illegal footage through various
messengers to avoid being caught in the investigation network. As these crimes
increase, there will continue to be a need for professional investigative
personnel, and the time required for criminal investigations will continue to
increase. In this paper, we propose a multimedia forensic method for tracking
footprints by checking the media information that changes when images and
videos shot with a smartphone are transmitted through instant messengers. We
have selected 11 of the world's most popular instant messengers and two secure
messengers. In addition, we selected the most widely used Android and iOS
operating systems for smartphones. Through this study, we were able to confirm
that it is possible to trace footprints related to distribution through instant
messengers by analyzing the transmitted images and videos. Thus, it was possible to
determine which messengers were used to distribute the video when it was
transmitted through multiple messengers.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 21:57:13 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Jeon",
"Yu-Min",
""
],
[
"Heo",
"Won-Mu",
""
],
[
"Kim",
"Jong-Min",
""
],
[
"Kim",
"Kyounggon",
""
]
] |
new_dataset
| 0.998205 |
2304.03867
|
Sridhar Sola Mr.
|
Sridhar Sola and Darshan Gera
|
Masked Student Dataset of Expressions
|
Thirteenth Indian Conference on Computer Vision, Graphics and Image
Processing, ACM, 2022, Gandhinagar, India
| null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Facial expression recognition (FER) algorithms work well in constrained
environments with little or no occlusion of the face. However, real-world face
occlusion is prevalent, most notably with the need to use a face mask in the
current Covid-19 scenario. While there are works on the problem of occlusion in
FER, little has been done before on the particular face mask scenario.
Moreover, the few works in this area largely use synthetically created masked
FER datasets. Motivated by these challenges posed by the pandemic to FER, we
present a novel dataset, the Masked Student Dataset of Expressions or MSD-E,
consisting of 1,960 real-world non-masked and masked facial expression images
collected from 142 individuals. Along with the issue of obfuscated facial
features, we illustrate how other subtler issues in masked FER are represented
in our dataset. We then provide baseline results using ResNet-18, finding that
its performance dips in the non-masked case when trained for FER in the
presence of masks. To tackle this, we test two training paradigms: contrastive
learning and knowledge distillation, and find that they increase the model's
performance in the masked scenario while maintaining its non-masked
performance. We further visualise our results using t-SNE plots and Grad-CAM,
demonstrating that these paradigms capitalise on the limited features available
in the masked scenario. Finally, we benchmark SOTA methods on MSD-E.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 23:43:21 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Sola",
"Sridhar",
""
],
[
"Gera",
"Darshan",
""
]
] |
new_dataset
| 0.957914 |
2304.03917
|
Zhu Zhimin
|
Zhimin Zhu, Jianguo Zhao, Tong Mu, Yuliang Yang, Mengyu Zhu
|
MC-MLP:Multiple Coordinate Frames in all-MLP Architecture for Vision
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In deep learning, Multi-Layer Perceptrons (MLPs) have once again garnered
attention from researchers. This paper introduces MC-MLP, a general MLP-like
backbone for computer vision that is composed of a series of fully-connected
(FC) layers. In MC-MLP, we propose that the same semantic information has
varying levels of difficulty in learning, depending on the coordinate frame of
features. To address this, we perform an orthogonal transform on the feature
information, equivalent to changing the coordinate frame of features. Through
this design, MC-MLP is equipped with multi-coordinate frame receptive fields
and the ability to learn information across different coordinate frames.
Experiments demonstrate that MC-MLP outperforms most MLPs in image
classification tasks, achieving better performance at the same parameter level.
The code will be available at: https://github.com/ZZM11/MC-MLP.
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 05:23:25 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Zhu",
"Zhimin",
""
],
[
"Zhao",
"Jianguo",
""
],
[
"Mu",
"Tong",
""
],
[
"Yang",
"Yuliang",
""
],
[
"Zhu",
"Mengyu",
""
]
] |
new_dataset
| 0.983842 |