id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
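The records below follow the schema above. As a minimal sketch of how such rows could be consumed, assuming they have been exported to a JSON Lines file (the filename here is hypothetical, one JSON object per record with the fields listed above), the data can be loaded and filtered with pandas:

```python
import pandas as pd

# Hypothetical export path; assumes one JSON object per line with the
# fields from the schema above (id, title, abstract, prediction, ...).
df = pd.read_json("arxiv_new_dataset_predictions.jsonl", lines=True)

# Keep only rows whose classifier output is "new_dataset" with high
# probability (the probability column ranges from 0.95 to 1).
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)].copy()

# authors_parsed is a list of [last name, first name, suffix] triples;
# recover a printable first-author name from the first triple.
confident["first_author"] = confident["authors_parsed"].apply(
    lambda people: " ".join(part for part in reversed(people[0]) if part)
)

print(confident[["id", "title", "first_author", "probability"]].head())
```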
2303.14848
|
Zuhaib Akhtar
|
Zuhaib Akhtar
|
From Blockchain to Hashgraph: Distributed Ledger Technologies in the
Wild
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Since the term blockchain was introduced in 2008, interest in it has been growing in the community. The reason for this interest is that it provides anonymity, security and integrity without any central third-party organisation in control of data and transactions. It has attracted huge research interest due to advances in various platforms and their limitations and challenges. There are various Distributed Ledger Technologies that demonstrate special features which overcome the limitations of other platforms. However, implementations of the various distributed ledger technologies differ substantially in their data structures, consensus protocols and fault tolerance, among other aspects. Due to these variations, they have quite different cost, performance, latency and security characteristics. In this paper, the workings of the major distributed ledger technologies and an in-depth comparison of their special features, strengths and weaknesses are presented and discussed by identifying various criteria.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 23:26:46 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Akhtar",
"Zuhaib",
""
]
] |
new_dataset
| 0.992487 |
2303.14883
|
Sandra Liu
|
Sandra Q. Liu, Yuxiang Ma, Edward H. Adelson
|
GelSight Baby Fin Ray: A Compact, Compliant, Flexible Finger with
High-Resolution Tactile Sensing
|
Accepted to IEEE Conference of Soft Robotics (RoboSoft) 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The synthesis of tactile sensing with compliance is essential to many fields,
from agricultural usages like fruit picking, to sustainability practices such
as sorting recycling, to the creation of safe home-care robots for the elderly
to age with dignity. From tactile sensing, we can discern material properties,
recognize textures, and determine softness, while with compliance, we are able
to securely and safely interact with the objects and the environment around us.
These two abilities can culminate into a useful soft robotic gripper, such as
the original GelSight Fin Ray, which is able to grasp a large variety of
different objects and also perform a simple household manipulation task: wine
glass reorientation. Although the original GelSight Fin Ray solves the problem
of interfacing a generally rigid, high-resolution sensor with a soft, compliant
structure, we can improve the robustness of the sensor and implement techniques
that make such camera-based tactile sensors applicable to a wider variety of
soft robot designs. We first integrate flexible mirrors and incorporate the
rigid electronic components into the base of the gripper, which greatly
improves the compliance of the Fin Ray structure. Then, we synthesize a
flexible and high-elongation silicone adhesive-based fluorescent paint, which
can provide good quality 2D tactile localization results for our sensor.
Finally, we incorporate all of these techniques into a new design: the Baby Fin
Ray, which we use to dig through clutter, and perform successful classification
of nuts in their shells. The supplementary video can be found here:
https://youtu.be/_oD_QFtYTPM
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 02:47:19 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Liu",
"Sandra Q.",
""
],
[
"Ma",
"Yuxiang",
""
],
[
"Adelson",
"Edward H.",
""
]
] |
new_dataset
| 0.998849 |
2303.14884
|
Shuangping Huang
|
Fan Yang, Lei Hu, Xinwu Liu, Shuangping Huang, Zhenghui Gu
|
A large-scale dataset for end-to-end table recognition in the wild
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Table recognition (TR) is one of the research hotspots in pattern
recognition, which aims to extract information from tables in an image. Common
table recognition tasks include table detection (TD), table structure
recognition (TSR) and table content recognition (TCR). TD locates tables in the image, TCR recognizes text content, and TSR recognizes the spatial logical structure. Currently, end-to-end TR in real scenarios, accomplishing the
three sub-tasks simultaneously, is yet an unexplored research area. One major
factor that inhibits researchers is the lack of a benchmark dataset. To this
end, we propose a new large-scale dataset named Table Recognition Set
(TabRecSet) with diverse table forms sourcing from multiple scenarios in the
wild, providing complete annotation dedicated to end-to-end TR research. It is
the largest and first bi-lingual dataset for end-to-end TR, with 38.1K tables, of which 20.4K are in English and 17.7K are in Chinese. The samples have
diverse forms, such as the border-complete and -incomplete table, regular and
irregular table (rotated, distorted, etc.). The scenarios are multiple in the
wild, varying from scanned to camera-taken images, documents to Excel tables,
educational test papers to financial invoices. The annotations are complete,
consisting of the table body spatial annotation, cell spatial logical
annotation and text content for TD, TSR and TCR, respectively. The spatial
annotation utilizes the polygon instead of the bounding box or quadrilateral
adopted by most datasets. The polygon spatial annotation is more suitable for
irregular tables that are common in wild scenarios. Additionally, we propose a
visualized and interactive annotation tool named TableMe to improve the
efficiency and quality of table annotation.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 02:48:51 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Yang",
"Fan",
""
],
[
"Hu",
"Lei",
""
],
[
"Liu",
"Xinwu",
""
],
[
"Huang",
"Shuangping",
""
],
[
"Gu",
"Zhenghui",
""
]
] |
new_dataset
| 0.999899 |
2303.14935
|
Nam Ly Tuan
|
Phuc Nguyen, Nam Tuan Ly, Hideaki Takeda, and Atsuhiro Takasu
|
TabIQA: Table Questions Answering on Business Document Images
|
First two authors contributed equally
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Answering questions over tables in business documents poses many challenges that require understanding tabular structures, cross-document referencing, and additional numeric computations beyond simple search queries. This paper
introduces a novel pipeline, named TabIQA, to answer questions about business
document images. TabIQA combines state-of-the-art deep learning techniques 1)
to extract table content and structural information from images and 2) to
answer various questions related to numerical data, text-based information, and
complex queries from structured tables. The evaluation results on VQAonBD 2023
dataset demonstrate the effectiveness of TabIQA in achieving promising
performance in answering table-related questions. The TabIQA repository is
available at https://github.com/phucty/itabqa.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 06:31:21 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Nguyen",
"Phuc",
""
],
[
"Ly",
"Nam Tuan",
""
],
[
"Takeda",
"Hideaki",
""
],
[
"Takasu",
"Atsuhiro",
""
]
] |
new_dataset
| 0.990408 |
2303.15060
|
Dongki Jung
|
Jaehoon Choi, Dongki Jung, Taejae Lee, Sangwook Kim, Youngdong Jung,
Dinesh Manocha, Donghwan Lee
|
TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using
Differentiable Rendering
|
Accepted to CVPR23. Project Page: https://jh-choi.github.io/TMO/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new pipeline for acquiring a textured mesh in the wild with a
single smartphone which offers access to images, depth maps, and valid poses.
Our method first introduces RGBD-aided structure from motion, which can yield filtered depth maps and refine camera poses guided by the corresponding depth. Then, we adopt a neural implicit surface reconstruction method, which allows for high-quality meshes, and develop a new training process that applies regularization provided by classical multi-view stereo methods. Moreover, we apply differentiable rendering to fine-tune incomplete texture maps and generate textures which are perceptually closer to the original scene. Our
pipeline can be applied to any common objects in the real world without the
need for either in-the-lab environments or accurate mask images. We demonstrate
results of captured objects with complex shapes and validate our method
numerically against existing 3D reconstruction and texture mapping methods.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 10:07:52 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Choi",
"Jaehoon",
""
],
[
"Jung",
"Dongki",
""
],
[
"Lee",
"Taejae",
""
],
[
"Kim",
"Sangwook",
""
],
[
"Jung",
"Youngdong",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"Lee",
"Donghwan",
""
]
] |
new_dataset
| 0.998596 |
2303.15083
|
Shengchao Zhou
|
Shengchao Zhou, Weizhou Liu, Chen Hu, Shuchang Zhou, and Chao Ma
|
UniDistill: A Universal Cross-Modality Knowledge Distillation Framework
for 3D Object Detection in Bird's-Eye View
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the field of 3D object detection for autonomous driving, the sensor
portfolio including multi-modality and single-modality is diverse and complex.
Since multi-modal methods incur system complexity while the accuracy of single-modal ones is relatively low, it is difficult to make a tradeoff between them. In this work, we propose a universal cross-modality knowledge
distillation framework (UniDistill) to improve the performance of
single-modality detectors. Specifically, during training, UniDistill projects
the features of both the teacher and the student detector into Bird's-Eye-View
(BEV), which is a friendly representation for different modalities. Then, three
distillation losses are calculated to sparsely align the foreground features,
helping the student learn from the teacher without introducing additional cost
during inference. Taking advantage of the similar detection paradigm of
different detectors in BEV, UniDistill easily supports LiDAR-to-camera,
camera-to-LiDAR, fusion-to-LiDAR and fusion-to-camera distillation paths.
Furthermore, the three distillation losses can filter the effect of misaligned
background information and balance between objects of different sizes,
improving the distillation effectiveness. Extensive experiments on nuScenes
demonstrate that UniDistill effectively improves the mAP and NDS of student
detectors by 2.0%~3.2%.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 10:50:58 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zhou",
"Shengchao",
""
],
[
"Liu",
"Weizhou",
""
],
[
"Hu",
"Chen",
""
],
[
"Zhou",
"Shuchang",
""
],
[
"Ma",
"Chao",
""
]
] |
new_dataset
| 0.972095 |
2303.15110
|
Isaac Chung
|
Elizaveta Korotkova, Isaac Kwan Yin Chung
|
Beyond Toxic: Toxicity Detection Datasets are Not Enough for Brand
Safety
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The rapid growth in user generated content on social media has resulted in a
significant rise in demand for automated content moderation. Various methods
and frameworks have been proposed for the tasks of hate speech detection and
toxic comment classification. In this work, we combine common datasets to
extend these tasks to brand safety. Brand safety aims to protect commercial
branding by identifying contexts where advertisements should not appear and
covers not only toxicity, but also other potentially harmful content. As these
datasets contain different label sets, we approach the overall problem as a
binary classification task. We demonstrate the need for building brand safety
specific datasets via the application of common toxicity detection datasets to
a subset of brand safety and empirically analyze the effects of weighted
sampling strategies in text classification.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 11:29:09 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Korotkova",
"Elizaveta",
""
],
[
"Chung",
"Isaac Kwan Yin",
""
]
] |
new_dataset
| 0.997912 |
2303.15128
|
Timo Häckel
|
Mehmet Mueller, Timo Häckel, Philipp Meyer, Franz Korf, Thomas C. Schmidt
|
Authenticated and Secure Automotive Service Discovery with DNSSEC and
DANE
| null | null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automotive softwarization is progressing and future cars are expected to
operate a Service-Oriented Architecture on multipurpose compute units, which
are interconnected via a high-speed Ethernet backbone. The AUTOSAR architecture
foresees a universal middleware called SOME/IP that provides the service
primitives, interfaces, and application protocols on top of Ethernet and IP.
SOME/IP lacks a robust security architecture, even though security is essential in future Internet-connected vehicles. In this paper, we augment the
SOME/IP service discovery with an authentication and certificate management
scheme based on DNSSEC and DANE. We argue that the deployment of well-proven,
widely tested standard protocols should serve as an appropriate basis for a
robust and reliable security infrastructure in cars. Our solution enables
on-demand service authentication in offline scenarios, easy online updates, and
remains free of attestation collisions. We evaluate our extension of the common
vsomeip stack and find performance values that fully comply with car
operations.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 12:01:19 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Mueller",
"Mehmet",
""
],
[
"Häckel",
"Timo",
""
],
[
"Meyer",
"Philipp",
""
],
[
"Korf",
"Franz",
""
],
[
"Schmidt",
"Thomas C.",
""
]
] |
new_dataset
| 0.973641 |
2303.15166
|
Haoyuan Tian
|
Ran Yi, Haoyuan Tian, Zhihao Gu, Yu-Kun Lai and Paul L. Rosin
|
Towards Artistic Image Aesthetics Assessment: a Large-scale Dataset and
a New Method
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image aesthetics assessment (IAA) is a challenging task due to its highly
subjective nature. Most of the current studies rely on large-scale datasets
(e.g., AVA and AADB) to learn a general model for all kinds of photography
images. However, little light has been shed on measuring the aesthetic quality
of artistic images, and the existing datasets only contain relatively few
artworks. Such a defect is a great obstacle to the aesthetic assessment of
artistic images. To fill the gap in the field of artistic image aesthetics
assessment (AIAA), we first introduce a large-scale AIAA dataset: Boldbrush
Artistic Image Dataset (BAID), which consists of 60,337 artistic images
covering various art forms, with more than 360,000 votes from online users. We
then propose a new method, SAAN (Style-specific Art Assessment Network), which
can effectively extract and utilize style-specific and generic aesthetic
information to evaluate artistic images. Experiments demonstrate that our
proposed approach outperforms existing IAA methods on the proposed BAID dataset
according to quantitative comparisons. We believe the proposed dataset and
method can serve as a foundation for future AIAA works and inspire more
research in this field. Dataset and code are available at:
https://github.com/Dreemurr-T/BAID.git
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 12:59:15 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Yi",
"Ran",
""
],
[
"Tian",
"Haoyuan",
""
],
[
"Gu",
"Zhihao",
""
],
[
"Lai",
"Yu-Kun",
""
],
[
"Rosin",
"Paul L.",
""
]
] |
new_dataset
| 0.992654 |
2303.15187
|
Tommaso Lisini Baldi Dr.
|
Alberto Villani, Giovanni Cortigiani, Bernardo Brogi, Nicole
D'Aurizio, Tommaso Lisini Baldi, and Domenico Prattichizzo
|
Avatarm: an Avatar With Manipulation Capabilities for the Physical
Metaverse
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Metaverse is an immersive shared space that remote users can access through
virtual and augmented reality interfaces, enabling their avatars to interact
with each other and the surrounding. Although digital objects can be
manipulated, physical objects cannot be touched, grasped, or moved within the
metaverse due to the lack of a suitable interface. This work proposes a
solution to overcome this limitation by introducing the concept of a Physical
Metaverse enabled by a new interface named "Avatarm". The Avatarm consists of an avatar enhanced with a robotic arm that performs physical manipulation tasks while remaining entirely hidden in the metaverse. Users have the illusion that the avatar is directly manipulating objects without mediation by a robot. The Avatarm is the first step towards a new metaverse, the "Physical Metaverse", where users can physically interact with each other and with the environment.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 13:23:11 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Villani",
"Alberto",
""
],
[
"Cortigiani",
"Giovanni",
""
],
[
"Brogi",
"Bernardo",
""
],
[
"D'Aurizio",
"Nicole",
""
],
[
"Baldi",
"Tommaso Lisini",
""
],
[
"Prattichizzo",
"Domenico",
""
]
] |
new_dataset
| 0.996025 |
2303.15193
|
Tarek Saier
|
Tarek Saier and Youxiang Dong and Michael Färber
|
CoCon: A Data Set on Combined Contextualized Research Artifact Use
|
submitted to JCDL2023
| null | null | null |
cs.DL cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In the wake of information overload in academia, methodologies and systems
for search, recommendation, and prediction to aid researchers in identifying
relevant research are actively studied and developed. Existing work, however,
is limited in terms of granularity, focusing only on the level of papers or a
single type of artifact, such as data sets. To enable more holistic analyses
and systems dealing with academic publications and their content, we propose
CoCon, a large scholarly data set reflecting the combined use of research
artifacts, contextualized in academic publications' full-text. Our data set
comprises 35k artifacts (data sets, methods, models, and tasks) and 340k publications. We additionally formalize a link prediction task for "combined research artifact use prediction" and provide code to support analyses of, and the development of ML applications on, our data. All data and code are publicly
available at https://github.com/IllDepence/contextgraph.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 13:29:09 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Saier",
"Tarek",
""
],
[
"Dong",
"Youxiang",
""
],
[
"Färber",
"Michael",
""
]
] |
new_dataset
| 0.989756 |
2303.15306
|
Nicolas Lazzari
|
Nicolas Lazzari, Andrea Poltronieri, Valentina Presutti
|
Pitchclass2vec: Symbolic Music Structure Segmentation with Chord
Embeddings
| null |
Proceedings of the 1st Workshop on Artificial Intelligence and
Creativity co-located with 21th International Conference of the Italian
Association for Artificial Intelligence(AIxIA 2022), Udine, Italy, November
28 - December 3, 2022
| null | null |
cs.SD cs.AI cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Structure perception is a fundamental aspect of music cognition in humans.
Historically, the hierarchical organization of music into structures served as
a narrative device for conveying meaning, creating expectancy, and evoking
emotions in the listener. Thereby, musical structures play an essential role in
music composition, as they shape the musical discourse through which the
composer organises his ideas. In this paper, we present a novel music
segmentation method, pitchclass2vec, based on symbolic chord annotations, which
are embedded into continuous vector representations using both natural language
processing techniques and custom-made encodings. Our algorithm is based on a long short-term memory (LSTM) neural network and outperforms the
state-of-the-art techniques based on symbolic chord annotations in the field.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 10:23:15 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Lazzari",
"Nicolas",
""
],
[
"Poltronieri",
"Andrea",
""
],
[
"Presutti",
"Valentina",
""
]
] |
new_dataset
| 0.967363 |
2303.15334
|
Yifu Zhang
|
Yifu Zhang, Xinggang Wang, Xiaoqing Ye, Wei Zhang, Jincheng Lu, Xiao
Tan, Errui Ding, Peize Sun, Jingdong Wang
|
ByteTrackV2: 2D and 3D Multi-Object Tracking by Associating Every
Detection Box
|
Code is available at https://github.com/ifzhang/ByteTrack-V2. arXiv
admin note: text overlap with arXiv:2110.06864; substantial text overlap with
arXiv:2203.06424 by other authors
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-object tracking (MOT) aims at estimating bounding boxes and identities
of objects across video frames. Detection boxes serve as the basis of both 2D
and 3D MOT. Inevitable changes in detection scores lead to objects being missed after tracking. We propose a hierarchical data association strategy to mine the true objects in low-score detection boxes, which alleviates the problems of missed objects and fragmented trajectories. The simple and generic data association strategy shows effectiveness under both 2D and 3D settings. In 3D scenarios, it is much easier for the tracker to predict object velocities in world coordinates. We propose a complementary motion prediction strategy
that incorporates the detected velocities with a Kalman filter to address the
problem of abrupt motion and short-term disappearing. ByteTrackV2 leads the
nuScenes 3D MOT leaderboard in both camera (56.4% AMOTA) and LiDAR (70.1%
AMOTA) modalities. Furthermore, it is nonparametric and can be integrated with
various detectors, making it appealing in real applications. The source code is
released at https://github.com/ifzhang/ByteTrack-V2.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 15:35:21 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zhang",
"Yifu",
""
],
[
"Wang",
"Xinggang",
""
],
[
"Ye",
"Xiaoqing",
""
],
[
"Zhang",
"Wei",
""
],
[
"Lu",
"Jincheng",
""
],
[
"Tan",
"Xiao",
""
],
[
"Ding",
"Errui",
""
],
[
"Sun",
"Peize",
""
],
[
"Wang",
"Jingdong",
""
]
] |
new_dataset
| 0.999326 |
2303.15352
|
Weimin Jin
|
Fengjiao Zou, Jennifer Ogle, Weimin Jin, Patrick Gerard, Daniel Petty,
and Andrew Robb
|
Pedestrian Behavior Interacting with Autonomous Vehicles: Role of AV
Operation and Signal Indication and Roadway Infrastructure
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interacting with pedestrians is challenging for autonomous vehicles (AVs). This study evaluates how AV operations/associated signaling and roadway infrastructure affect pedestrian behavior in virtual reality. AVs were designed with different operations and signal indications, including negotiating with no signal, negotiating with a yellow signal, and yellow/blue negotiating/no-yield indications. Results show that the AV signal significantly impacts pedestrians' accepted gap, walking time, and waiting time. Pedestrians chose the largest open gap between cars when the AV showed no signal, and had the slowest crossing speed when the AV showed a yellow signal indication. Roadway infrastructure affects pedestrian walking time and waiting time.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 16:09:38 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zou",
"Fengjiao",
""
],
[
"Ogle",
"Jennifer",
""
],
[
"Jin",
"Weimin",
""
],
[
"Gerard",
"Patrick",
""
],
[
"Petty",
"Daniel",
""
],
[
"Robb",
"Andrew",
""
]
] |
new_dataset
| 0.9927 |
2303.15380
|
Chen Guo
|
Yifei Yin, Chen Guo, Manuel Kaufmann, Juan Jose Zarate, Jie Song,
Otmar Hilliges
|
Hi4D: 4D Instance Segmentation of Close Human Interaction
|
Project page: https://yifeiyin04.github.io/Hi4D/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Hi4D, a method and dataset for the automatic analysis of
physically close human-human interaction under prolonged contact. Robustly
disentangling several in-contact subjects is a challenging task due to
occlusions and complex shapes. Hence, existing multi-view systems typically
fuse 3D surfaces of close subjects into a single, connected mesh. To address
this issue, we leverage i) individually fitted neural implicit avatars; ii) an alternating optimization scheme that refines pose and surface through periods of close proximity; and iii) the resulting segmentation of the fused raw scans into individual instances. From these instances we compile the Hi4D dataset of 4D textured scans of
20 subject pairs, 100 sequences, and a total of more than 11K frames. Hi4D
contains rich interaction-centric annotations in 2D and 3D alongside accurately
registered parametric body models. We define varied human pose and shape
estimation tasks on this dataset and provide results from state-of-the-art
methods on these benchmarks.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 16:53:09 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Yin",
"Yifei",
""
],
[
"Guo",
"Chen",
""
],
[
"Kaufmann",
"Manuel",
""
],
[
"Zarate",
"Juan Jose",
""
],
[
"Song",
"Jie",
""
],
[
"Hilliges",
"Otmar",
""
]
] |
new_dataset
| 0.996085 |
2303.15417
|
Jaeha Kim
|
Yeonguk Oh, JoonKyu Park, Jaeha Kim, Gyeongsik Moon, Kyoung Mu Lee
|
Recovering 3D Hand Mesh Sequence from a Single Blurry Image: A New
Dataset and Temporal Unfolding
|
Accepted at CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Hands, one of the most dynamic parts of our body, suffer from blur due to
their active movements. However, previous 3D hand mesh recovery methods have
mainly focused on sharp hand images rather than considering blur due to the
absence of datasets providing blurry hand images. We first present a novel
dataset BlurHand, which contains blurry hand images with 3D groundtruths. The
BlurHand is constructed by synthesizing motion blur from sequential sharp hand
images, imitating realistic and natural motion blurs. In addition to the new
dataset, we propose BlurHandNet, a baseline network for accurate 3D hand mesh
recovery from a blurry hand image. Our BlurHandNet unfolds a blurry input image
to a 3D hand mesh sequence to utilize temporal information in the blurry input
image, while previous works output a static single hand mesh. We demonstrate
the usefulness of BlurHand for the 3D hand mesh recovery from blurry images in
our experiments. The proposed BlurHandNet produces much more robust results on
blurry images while generalizing well to in-the-wild images. The training codes
and BlurHand dataset are available at
https://github.com/JaehaKim97/BlurHand_RELEASE.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 17:40:29 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Oh",
"Yeonguk",
""
],
[
"Park",
"JoonKyu",
""
],
[
"Kim",
"Jaeha",
""
],
[
"Moon",
"Gyeongsik",
""
],
[
"Lee",
"Kyoung Mu",
""
]
] |
new_dataset
| 0.998778 |
2303.15433
|
Hao Phung
|
Thanh Van Le, Hao Phung, Thuan Hoang Nguyen, Quan Dao, Ngoc Tran, Anh
Tran
|
Anti-DreamBooth: Protecting users from personalized text-to-image
synthesis
|
Project page: https://anti-dreambooth.github.io/
| null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-to-image diffusion models are nothing but a revolution, allowing anyone,
even without design skills, to create realistic images from simple text inputs.
With powerful personalization tools like DreamBooth, they can generate images
of a specific person just by learning from his/her few reference images.
However, when misused, such a powerful and convenient tool can produce fake
news or disturbing content targeting any individual victim, posing a severe
negative social impact. In this paper, we explore a defense system called
Anti-DreamBooth against such malicious use of DreamBooth. The system aims to
add subtle noise perturbation to each user's image before publishing in order
to disrupt the generation quality of any DreamBooth model trained on these
perturbed images. We investigate a wide range of algorithms for perturbation
optimization and extensively evaluate them on two facial datasets over various
text-to-image model versions. Despite the complicated formulation of DreamBooth
and Diffusion-based text-to-image models, our methods effectively defend users
from the malicious use of those models. Their effectiveness withstands even
adverse conditions, such as model or prompt/term mismatching between training
and testing. Our code will be available at
https://github.com/VinAIResearch/Anti-DreamBooth.git.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 17:55:44 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Van Le",
"Thanh",
""
],
[
"Phung",
"Hao",
""
],
[
"Nguyen",
"Thuan Hoang",
""
],
[
"Dao",
"Quan",
""
],
[
"Tran",
"Ngoc",
""
],
[
"Tran",
"Anh",
""
]
] |
new_dataset
| 0.995825 |
2303.15437
|
Anurag Ranjan
|
Anurag Ranjan, Kwang Moo Yi, Jen-Hao Rick Chang, Oncel Tuzel
|
FaceLit: Neural 3D Relightable Faces
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a generative framework, FaceLit, capable of generating a 3D face
that can be rendered at various user-defined lighting conditions and views,
learned purely from 2D images in-the-wild without any manual annotation. Unlike
existing works that require careful capture setup or human labor, we rely on
off-the-shelf pose and illumination estimators. With these estimates, we
incorporate the Phong reflectance model in the neural volume rendering
framework. Our model learns to generate shape and material properties of a face
such that, when rendered according to the natural statistics of pose and
illumination, produces photorealistic face images with multiview 3D and
illumination consistency. Our method enables photorealistic generation of faces
with explicit illumination and view controls on multiple datasets - FFHQ,
MetFaces and CelebA-HQ. We show state-of-the-art photorealism among 3D aware
GANs on FFHQ dataset achieving an FID score of 3.5.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 17:59:10 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Ranjan",
"Anurag",
""
],
[
"Yi",
"Kwang Moo",
""
],
[
"Chang",
"Jen-Hao Rick",
""
],
[
"Tuzel",
"Oncel",
""
]
] |
new_dataset
| 0.997258 |
2303.15443
|
Tarun Kalluri
|
Tarun Kalluri, Wangdong Xu, Manmohan Chandraker
|
GeoNet: Benchmarking Unsupervised Adaptation across Geographies
|
CVPR 2023 Camera Ready. Project Page:
https://tarun005.github.io/GeoNet
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, several efforts have been aimed at improving the robustness
of vision models to domains and environments unseen during training. An
important practical problem pertains to models deployed in a new geography that
is under-represented in the training dataset, posing a direct challenge to fair
and inclusive computer vision. In this paper, we study the problem of
geographic robustness and make three main contributions. First, we introduce a
large-scale dataset GeoNet for geographic adaptation containing benchmarks
across diverse tasks like scene recognition (GeoPlaces), image classification
(GeoImNet) and universal adaptation (GeoUniDA). Second, we investigate the
nature of distribution shifts typical to the problem of geographic adaptation
and hypothesize that the major source of domain shifts arise from significant
variations in scene context (context shift), object design (design shift) and
label distribution (prior shift) across geographies. Third, we conduct an
extensive evaluation of several state-of-the-art unsupervised domain adaptation
algorithms and architectures on GeoNet, showing that they do not suffice for
geographical adaptation, and that large-scale pre-training using large vision
models also does not lead to geographic robustness. Our dataset is publicly
available at https://tarun005.github.io/GeoNet.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 17:59:34 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Kalluri",
"Tarun",
""
],
[
"Xu",
"Wangdong",
""
],
[
"Chandraker",
"Manmohan",
""
]
] |
new_dataset
| 0.999579 |
2303.15445
|
Ron Yosef
|
Ron Yosef, Yonatan Bitton, Dafna Shahaf
|
IRFL: Image Recognition of Figurative Language
| null | null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Figures of speech such as metaphors, similes, and idioms allow language to be
expressive, invoke emotion, and communicate abstract ideas that might otherwise
be difficult to visualize. These figurative forms are often conveyed through
multiple modes, such as text and images, and frequently appear in advertising,
news, social media, etc. Understanding multimodal figurative language is an
essential component of human communication, and it plays a significant role in
our daily interactions. While humans can intuitively understand multimodal
figurative language, this poses a challenging task for machines that requires
the cognitive ability to map between domains, abstraction, commonsense, and
profound language and cultural knowledge. In this work, we propose the Image
Recognition of Figurative Language dataset to examine vision and language
models' understanding of figurative language. We leverage human annotation and
an automatic pipeline we created to generate a multimodal dataset and introduce
two novel tasks as a benchmark for multimodal figurative understanding. We
experiment with several baseline models and find that all perform substantially
worse than humans. We hope our dataset and benchmark will drive the development
of models that will better understand figurative language.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 17:59:55 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Yosef",
"Ron",
""
],
[
"Bitton",
"Yonatan",
""
],
[
"Shahaf",
"Dafna",
""
]
] |
new_dataset
| 0.999842 |
2202.04278
|
David Naumann
|
Timos Antonopoulos, Eric Koskinen, Ton Chanh Le, Ramana Nagasamudram,
David A. Naumann, Minh Ngo
|
An algebra of alignment for relational verification
|
v2 adds examples and an undecidability result; v3 has expository
improvements (POPL version + appendix); v4 fixes the proof of Thm 4.3
| null |
10.1145/3571213
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Relational verification encompasses information flow security, regression
verification, translation validation for compilers, and more. Effective
alignment of the programs and computations to be related facilitates use of
simpler relational invariants and relational procedure specs, which in turn
enables automation and modular reasoning. Alignment has been explored in terms
of trace pairs, deductive rules of relational Hoare logics (RHL), and several
forms of product automata. This article shows how a simple extension of Kleene
Algebra with Tests (KAT), called BiKAT, subsumes prior formulations, including
alignment witnesses for forall-exists properties, which brings to light new
RHL-style rules for such properties. Alignments can be discovered
algorithmically or devised manually but, in either case, their adequacy with
respect to the original programs must be proved; an explicit algebra enables
constructive proof by equational reasoning. Furthermore our approach inherits
algorithmic benefits from existing KAT-based techniques and tools, which are
applicable to a range of semantic models.
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 04:53:04 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Jul 2022 21:14:46 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Dec 2022 02:34:16 GMT"
},
{
"version": "v4",
"created": "Thu, 23 Mar 2023 18:05:57 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Antonopoulos",
"Timos",
""
],
[
"Koskinen",
"Eric",
""
],
[
"Le",
"Ton Chanh",
""
],
[
"Nagasamudram",
"Ramana",
""
],
[
"Naumann",
"David A.",
""
],
[
"Ngo",
"Minh",
""
]
] |
new_dataset
| 0.957154 |
2207.10660
|
Garrick Brazil
|
Garrick Brazil, Abhinav Kumar, Julian Straub, Nikhila Ravi, Justin
Johnson, Georgia Gkioxari
|
Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild
|
CVPR 2023, Project website: https://omni3d.garrickbrazil.com/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recognizing scenes and objects in 3D from a single image is a longstanding
goal of computer vision with applications in robotics and AR/VR. For 2D
recognition, large datasets and scalable solutions have led to unprecedented
advances. In 3D, existing benchmarks are small in size and approaches
specialize in few object categories and specific domains, e.g. urban driving
scenes. Motivated by the success of 2D recognition, we revisit the task of 3D
object detection by introducing a large benchmark, called Omni3D. Omni3D
re-purposes and combines existing datasets resulting in 234k images annotated
with more than 3 million instances and 98 categories. 3D detection at such
scale is challenging due to variations in camera intrinsics and the rich
diversity of scene and object types. We propose a model, called Cube R-CNN,
designed to generalize across camera and scene types with a unified approach.
We show that Cube R-CNN outperforms prior works on the larger Omni3D and
existing benchmarks. Finally, we prove that Omni3D is a powerful dataset for 3D
object recognition and show that it improves single-dataset performance and can
accelerate learning on new smaller datasets via pre-training.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 17:56:22 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 00:42:18 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Brazil",
"Garrick",
""
],
[
"Kumar",
"Abhinav",
""
],
[
"Straub",
"Julian",
""
],
[
"Ravi",
"Nikhila",
""
],
[
"Johnson",
"Justin",
""
],
[
"Gkioxari",
"Georgia",
""
]
] |
new_dataset
| 0.999868 |
2211.13874
|
Haoran Bai
|
Haoran Bai, Di Kang, Haoxian Zhang, Jinshan Pan, Linchao Bao
|
FFHQ-UV: Normalized Facial UV-Texture Dataset for 3D Face Reconstruction
|
The dataset, code, and pre-trained texture decoder are publicly
available at https://github.com/csbhr/FFHQ-UV
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a large-scale facial UV-texture dataset that contains over 50,000
high-quality texture UV-maps with even illuminations, neutral expressions, and
cleaned facial regions, which are desired characteristics for rendering
realistic 3D face models under different lighting conditions. The dataset is
derived from a large-scale face image dataset namely FFHQ, with the help of our
fully automatic and robust UV-texture production pipeline. Our pipeline
utilizes the recent advances in StyleGAN-based facial image editing approaches
to generate multi-view normalized face images from single-image inputs. An
elaborated UV-texture extraction, correction, and completion procedure is then
applied to produce high-quality UV-maps from the normalized face images.
Compared with existing UV-texture datasets, our dataset has more diverse and
higher-quality texture maps. We further train a GAN-based texture decoder as
the nonlinear texture basis for parametric fitting based 3D face
reconstruction. Experiments show that our method improves the reconstruction
accuracy over state-of-the-art approaches, and more importantly, produces
high-quality texture maps that are ready for realistic renderings. The dataset,
code, and pre-trained texture decoder are publicly available at
https://github.com/csbhr/FFHQ-UV.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 03:21:05 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 14:44:50 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Bai",
"Haoran",
""
],
[
"Kang",
"Di",
""
],
[
"Zhang",
"Haoxian",
""
],
[
"Pan",
"Jinshan",
""
],
[
"Bao",
"Linchao",
""
]
] |
new_dataset
| 0.999798 |
2211.14306
|
Mehdi S. M. Sajjadi
|
Mehdi S. M. Sajjadi, Aravindh Mahendran, Thomas Kipf, Etienne Pot,
Daniel Duckworth, Mario Lucic, Klaus Greff
|
RUST: Latent Neural Scene Representations from Unposed Imagery
|
CVPR 2023 Highlight. Project website: https://rust-paper.github.io/
| null | null | null |
cs.CV cs.GR cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inferring the structure of 3D scenes from 2D observations is a fundamental
challenge in computer vision. Recently popularized approaches based on neural
scene representations have achieved tremendous impact and have been applied
across a variety of applications. One of the major remaining challenges in this
space is training a single model which can provide latent representations which
effectively generalize beyond a single scene. Scene Representation Transformer
(SRT) has shown promise in this direction, but scaling it to a larger set of
diverse scenes is challenging and necessitates accurately posed ground truth
data. To address this problem, we propose RUST (Really Unposed Scene
representation Transformer), a pose-free approach to novel view synthesis
trained on RGB images alone. Our main insight is that one can train a Pose
Encoder that peeks at the target image and learns a latent pose embedding which
is used by the decoder for view synthesis. We perform an empirical
investigation into the learned latent pose structure and show that it allows
meaningful test-time camera transformations and accurate explicit pose
readouts. Perhaps surprisingly, RUST achieves similar quality as methods which
have access to perfect camera pose, thereby unlocking the potential for
large-scale training of amortized neural scene representations.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 18:59:10 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 16:56:25 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Sajjadi",
"Mehdi S. M.",
""
],
[
"Mahendran",
"Aravindh",
""
],
[
"Kipf",
"Thomas",
""
],
[
"Pot",
"Etienne",
""
],
[
"Duckworth",
"Daniel",
""
],
[
"Lucic",
"Mario",
""
],
[
"Greff",
"Klaus",
""
]
] |
new_dataset
| 0.987096 |
2212.08013
|
Lucas Beyer
|
Lucas Beyer, Pavel Izmailov, Alexander Kolesnikov, Mathilde Caron,
Simon Kornblith, Xiaohua Zhai, Matthias Minderer, Michael Tschannen, Ibrahim
Alabdulmohsin, Filip Pavetic
|
FlexiViT: One Model for All Patch Sizes
|
Code and pre-trained models available at
https://github.com/google-research/big_vision. All authors made significant
technical contributions. CVPR 2023
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Vision Transformers convert images to sequences by slicing them into patches.
The size of these patches controls a speed/accuracy tradeoff, with smaller
patches leading to higher accuracy at greater computational cost, but changing
the patch size typically requires retraining the model. In this paper, we
demonstrate that simply randomizing the patch size at training time leads to a
single set of weights that performs well across a wide range of patch sizes,
making it possible to tailor the model to different compute budgets at
deployment time. We extensively evaluate the resulting model, which we call
FlexiViT, on a wide range of tasks, including classification, image-text
retrieval, open-world detection, panoptic segmentation, and semantic
segmentation, concluding that it usually matches, and sometimes outperforms,
standard ViT models trained at a single patch size in an otherwise identical
setup. Hence, FlexiViT training is a simple drop-in improvement for ViT that
makes it easy to add compute-adaptive capabilities to most models relying on a
ViT backbone architecture. Code and pre-trained models are available at
https://github.com/google-research/big_vision
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 18:18:38 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 21:38:16 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Beyer",
"Lucas",
""
],
[
"Izmailov",
"Pavel",
""
],
[
"Kolesnikov",
"Alexander",
""
],
[
"Caron",
"Mathilde",
""
],
[
"Kornblith",
"Simon",
""
],
[
"Zhai",
"Xiaohua",
""
],
[
"Minderer",
"Matthias",
""
],
[
"Tschannen",
"Michael",
""
],
[
"Alabdulmohsin",
"Ibrahim",
""
],
[
"Pavetic",
"Filip",
""
]
] |
new_dataset
| 0.998035 |
2212.09877
|
Ning Yu
|
Ning Yu, Chia-Chih Chen, Zeyuan Chen, Rui Meng, Gang Wu, Paul Josel,
Juan Carlos Niebles, Caiming Xiong, Ran Xu
|
LayoutDETR: Detection Transformer Is a Good Multimodal Layout Designer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graphic layout designs play an essential role in visual communication. Yet
handcrafting layout designs is skill-demanding, time-consuming, and
non-scalable to batch production. Generative models emerge to make design
automation scalable but it remains non-trivial to produce designs that comply
with designers' multimodal desires, i.e., constrained by background images and
driven by foreground content. We propose LayoutDETR that inherits the high
quality and realism from generative modeling, while reformulating content-aware
requirements as a detection problem: we learn to detect in a background image
the reasonable locations, scales, and spatial relations for multimodal
foreground elements in a layout. Our solution sets a new state-of-the-art
performance for layout generation on public benchmarks and on our newly-curated
ad banner dataset. We integrate our solution into a graphical system that
facilitates user studies, and show that users prefer our designs over baselines
by significant margins. Our code, models, dataset, graphical system, and demos
are available at https://github.com/salesforce/LayoutDETR.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 21:57:35 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jan 2023 07:57:53 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Mar 2023 08:56:44 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Yu",
"Ning",
""
],
[
"Chen",
"Chia-Chih",
""
],
[
"Chen",
"Zeyuan",
""
],
[
"Meng",
"Rui",
""
],
[
"Wu",
"Gang",
""
],
[
"Josel",
"Paul",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Xu",
"Ran",
""
]
] |
new_dataset
| 0.993971 |
2301.13760
|
Pascal Nasahl
|
Pascal Nasahl, Salmin Sultana, Hans Liljestrand, Karanvir Grewal,
Michael LeMay, David M. Durham, David Schrammel, Stefan Mangard
|
EC-CFI: Control-Flow Integrity via Code Encryption Counteracting Fault
Attacks
|
Accepted at HOST'23
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Fault attacks enable adversaries to manipulate the control-flow of
security-critical applications. By inducing targeted faults into the CPU, the
software's call graph can be escaped and the control-flow can be redirected to
arbitrary functions inside the program. To protect the control-flow from these
attacks, dedicated fault control-flow integrity (CFI) countermeasures are
commonly deployed. However, these schemes either have high detection latencies
or require intrusive hardware changes. In this paper, we present EC-CFI, a
software-based cryptographically enforced CFI scheme with no detection latency
utilizing hardware features of recent Intel platforms. Our EC-CFI prototype is
designed to prevent an adversary from escaping the program's call graph using
faults by encrypting each function with a different key before execution. At
runtime, the instrumented program dynamically derives the decryption key,
ensuring that the code can only be successfully decrypted when the program
follows the intended call graph. To enable this level of protection on Intel
commodity systems, we introduce extended page table (EPT) aliasing allowing us
to achieve function-granular encryption by combining Intel's TME-MK and
virtualization technology. We open-source our custom LLVM-based toolchain
automatically protecting arbitrary programs with EC-CFI. Furthermore, we
evaluate our EPT aliasing approach with the SPEC CPU2017 and Embench-IoT
benchmarks and discuss and evaluate potential TME-MK hardware changes
minimizing runtime overheads.
|
[
{
"version": "v1",
"created": "Tue, 31 Jan 2023 16:51:33 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 10:41:21 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Nasahl",
"Pascal",
""
],
[
"Sultana",
"Salmin",
""
],
[
"Liljestrand",
"Hans",
""
],
[
"Grewal",
"Karanvir",
""
],
[
"LeMay",
"Michael",
""
],
[
"Durham",
"David M.",
""
],
[
"Schrammel",
"David",
""
],
[
"Mangard",
"Stefan",
""
]
] |
new_dataset
| 0.998917 |
2303.02416
|
Yuan Liu
|
Yuan Liu, Songyang Zhang, Jiacheng Chen, Kai Chen, Dahua Lin
|
PixMIM: Rethinking Pixel Reconstruction in Masked Image Modeling
|
Update code link and add additional results
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Masked Image Modeling (MIM) has achieved promising progress with the advent
of Masked Autoencoders (MAE) and BEiT. However, subsequent works have
complicated the framework with new auxiliary tasks or extra pre-trained models,
inevitably increasing computational overhead. This paper undertakes a
fundamental analysis of MIM from the perspective of pixel reconstruction, which
examines the input image patches and reconstruction target, and highlights two
critical but previously overlooked bottlenecks. Based on this analysis, we
propose a remarkably simple and effective method, PixMIM, that entails
two strategies: 1) filtering the high-frequency components from the
reconstruction target to de-emphasize the network's focus on texture-rich
details and 2) adopting a conservative data transform strategy to alleviate the
problem of missing foreground in MIM training. PixMIM can be easily integrated into most existing pixel-based MIM approaches (i.e., using raw images
as reconstruction target) with negligible additional computation. Without bells
and whistles, our method consistently improves three MIM approaches, MAE,
ConvMAE, and LSMAE, across various downstream tasks. We believe this effective
plug-and-play method will serve as a strong baseline for self-supervised
learning and provide insights for future improvements of the MIM framework.
Code and models are available at
https://github.com/open-mmlab/mmselfsup/tree/dev-1.x/configs/selfsup/pixmim.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 13:38:51 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 05:37:41 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Liu",
"Yuan",
""
],
[
"Zhang",
"Songyang",
""
],
[
"Chen",
"Jiacheng",
""
],
[
"Chen",
"Kai",
""
],
[
"Lin",
"Dahua",
""
]
] |
new_dataset
| 0.953138 |
2303.03711
|
Pascal Nasahl
|
Pascal Nasahl and Stefan Mangard
|
SCRAMBLE-CFI: Mitigating Fault-Induced Control-Flow Attacks on OpenTitan
|
Accepted at GLSVLSI'23
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Secure elements physically exposed to adversaries are frequently targeted by
fault attacks. These attacks can be utilized to hijack the control-flow of
software allowing the attacker to bypass security measures, extract sensitive
data, or gain full code execution. In this paper, we systematically analyze the
threat vector of fault-induced control-flow manipulations on the open-source
OpenTitan secure element. Our thorough analysis reveals that current
countermeasures of this chip either induce large area overheads or still cannot
prevent the attacker from exploiting the identified threats. In this context,
we introduce SCRAMBLE-CFI, an encryption-based control-flow integrity scheme
utilizing existing hardware features of OpenTitan. SCRAMBLE-CFI confines, with
minimal hardware overhead, the impact of fault-induced control-flow attacks by
encrypting each function with a different encryption tweak at load-time. At
runtime, code can only be successfully decrypted when the correct decryption
tweak is active. We open-source our hardware changes and release our LLVM
toolchain automatically protecting programs. Our analysis shows that
SCRAMBLE-CFI complementarily enhances security guarantees of OpenTitan with a
negligible hardware overhead of less than 3.97 % and a runtime overhead of 7.02
% for the Embench-IoT benchmarks.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 07:53:02 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 12:05:50 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Mar 2023 10:30:09 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Nasahl",
"Pascal",
""
],
[
"Mangard",
"Stefan",
""
]
] |
new_dataset
| 0.99598 |
2303.07541
|
Mina Rezaei
|
Mina Rezaei and Patsy Eubanks Owens
|
Young Humans Make Change, Young Users Click: Creating Youth-Centered
Networked Social Movements
| null |
CHI 2023 Workshop titled "Supporting Social Movements Through HCI
and Design"
| null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
From the urbanists' perspective, the everyday experience of young people, as
an underrepresented group in the design of public spaces, includes tactics they
use to challenge the strategies which rule over urban spaces. In this regard,
youth-led social movements are a set of collective tactics which groups of young people use to resist power structures. Social informational streams have revolutionized the way youth organize and mobilize for social movements throughout the world, especially in urban areas. However, just like public spaces, these algorithm-based platforms have been developed with a great power imbalance between developers and users, which results in the creation of non-inclusive social informational streams for young activists. Social activism grows agency and confidence in youth, which is critical to their development. This paper employs a youth-centric lens, which is used in designing public spaces, for designing algorithmic spaces that can improve bottom-up youth-led
movements. By reviewing the structure of these spaces and how young people
interact with these structures in the different cultural contexts of Iran and
the US, we propose a humanistic approach to designing social informational
streams which can enhance youth activism.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 00:07:43 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 21:16:54 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Rezaei",
"Mina",
""
],
[
"Owens",
"Patsy Eubanks",
""
]
] |
new_dataset
| 0.958899 |
2303.10795
|
Vaibhav Garg
|
Vaibhav Garg, Hui Guo, Nirav Ajmeri, Saikath Bhattacharya, and
Munindar P. Singh
|
iRogue: Identifying Rogue Behavior from App Reviews
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
An app user can access information of other users or third parties. We define
rogue mobile apps as those that enable a user (abuser) to access information of
another user or third party (victim), in a way that violates the victim's
privacy expectations. Such apps are dual-use and their identification is
nontrivial. We propose iRogue, an approach for identifying rogue apps based on
their reviews, posted by victims, abusers, and others. iRogue involves training
on deep learning features extracted from their 1,884 manually labeled reviews.
iRogue first identifies how alarming a review is with respect to rogue behavior
and, second, generates a rogue score for an app. iRogue predicts 100 rogue apps
from a seed dataset curated following a previous study. Also, iRogue examines
apps in other datasets of scraped reviews, and predicts an additional 139 rogue
apps. On labeled ground truth, iRogue achieves the highest recall, and
outperforms baseline approaches that leverage app descriptions and reviews. A
qualitative analysis of alarming reviews reveals rogue functionalities. App
users, platforms, and developers should be aware of such apps and their
functionalities and take measures to curb privacy risk.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 23:43:36 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Garg",
"Vaibhav",
""
],
[
"Guo",
"Hui",
""
],
[
"Ajmeri",
"Nirav",
""
],
[
"Bhattacharya",
"Saikath",
""
],
[
"Singh",
"Munindar P.",
""
]
] |
new_dataset
| 0.988122 |
2303.11972
|
Pedro Hecht
|
Juan Pedro Hecht, Hugo Daniel Scolnik
|
A Post Quantum Key Agreement Protocol Based on a Modified Matrix Power
Function over a Rectangular Matrices Semiring
|
6 pages, 20 references
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present an improved post-quantum version of the Sakalauskas matrix power function key agreement protocol, using rectangular matrices instead of the original square ones. The Sakalauskas matrix power function is an efficient and secure way to generate a shared secret key, and using rectangular matrices provides additional flexibility and security. This method reduces the computational burden by allowing smaller random integer matrices while maintaining equal security. A further advantage of using rank-deficient rectangular matrices in key agreement protocols is that they block linearization attacks.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 16:07:17 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 15:45:25 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2023 17:27:27 GMT"
},
{
"version": "v4",
"created": "Fri, 24 Mar 2023 17:45:59 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Hecht",
"Juan Pedro",
""
],
[
"Scolnik",
"Hugo Daniel",
""
]
] |
new_dataset
| 0.995827 |
2303.12564
|
Zhongjin Luo
|
Zhongjin Luo, Shengcai Cai, Jinguo Dong, Ruibo Ming, Liangdong Qiu,
Xiaohang Zhan, Xiaoguang Han
|
RaBit: Parametric Modeling of 3D Biped Cartoon Characters with a
Topological-consistent Dataset
|
CVPR 2023, Project page: https://gaplab.cuhk.edu.cn/projects/RaBit/
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Assisting people in efficiently producing visually plausible 3D characters
has always been a fundamental research topic in computer vision and computer
graphics. Recent learning-based approaches have achieved unprecedented accuracy
and efficiency in the area of 3D real human digitization. However, none of the
prior works focus on modeling 3D biped cartoon characters, which are also in
great demand in gaming and filming. In this paper, we introduce 3DBiCar, the
first large-scale dataset of 3D biped cartoon characters, and RaBit, the
corresponding parametric model. Our dataset contains 1,500 topologically
consistent high-quality 3D textured models which are manually crafted by
professional artists. Built upon the data, RaBit is thus designed with a
SMPL-like linear blend shape model and a StyleGAN-based neural UV-texture
generator, simultaneously expressing the shape, pose, and texture. To
demonstrate the practicality of 3DBiCar and RaBit, various applications are
conducted, including single-view reconstruction, sketch-based modeling, and 3D
cartoon animation. For the single-view reconstruction setting, we find a
straightforward global mapping from input images to the output UV-based texture
maps tends to lose detailed appearances of some local parts (e.g., nose, ears).
Thus, a part-sensitive texture reasoner is adopted to make all important local
areas perceived. Experiments further demonstrate the effectiveness of our
method both qualitatively and quantitatively. 3DBiCar and RaBit are available
at gaplab.cuhk.edu.cn/projects/RaBit.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 13:46:15 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 07:49:32 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Luo",
"Zhongjin",
""
],
[
"Cai",
"Shengcai",
""
],
[
"Dong",
"Jinguo",
""
],
[
"Ming",
"Ruibo",
""
],
[
"Qiu",
"Liangdong",
""
],
[
"Zhan",
"Xiaohang",
""
],
[
"Han",
"Xiaoguang",
""
]
] |
new_dataset
| 0.999718 |
2303.12968
|
Tim Scargill
|
Tim Scargill, Sangjun Eom, Ying Chen, Maria Gorlatova
|
Ambient Intelligence for Next-Generation AR
|
This is a preprint of a book chapter which will appear in the
Springer Handbook of the Metaverse
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Next-generation augmented reality (AR) promises a high degree of
context-awareness - a detailed knowledge of the environmental, user, social and
system conditions in which an AR experience takes place. This will facilitate
both the closer integration of the real and virtual worlds, and the provision
of context-specific content or adaptations. However, environmental awareness in
particular is challenging to achieve using AR devices alone; not only are these
mobile devices' view of an environment spatially and temporally limited, but
the data obtained by onboard sensors is frequently inaccurate and incomplete.
This, combined with the fact that many aspects of core AR functionality and
user experiences are impacted by properties of the real environment, motivates
the use of ambient IoT devices, wireless sensors and actuators placed in the
surrounding environment, for the measurement and optimization of environment
properties. In this book chapter we categorize and examine the wide variety of
ways in which these IoT sensors and actuators can support or enhance AR
experiences, including quantitative insights and proof-of-concept systems that
will inform the development of future solutions. We outline the challenges and
opportunities associated with several important research directions which must
be addressed to realize the full potential of next-generation AR.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 00:25:08 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 14:09:40 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Scargill",
"Tim",
""
],
[
"Eom",
"Sangjun",
""
],
[
"Chen",
"Ying",
""
],
[
"Gorlatova",
"Maria",
""
]
] |
new_dataset
| 0.998943 |
2303.13522
|
Tallulah Frappier
|
Tallulah Frappier (I3, CESSP)
|
Online Assemblies: Civic Technologies Reshaping the Public Space
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speaking or writing of political assemblies tends to evoke the action of
people gathering to deliberate, or the spaces in which this deliberation might
take place. One thing that is often overlooked, however, is the fact that these
spaces can be digital. Online assemblies have become more widespread in recent
years; from the first Web forums to civic technologies specifically designed to
host collective political debates. As digital services affect our possibilities
for political mobilization and participation, I will here attempt to define the
qualities specific to online assemblies, and to identify several patterns and
continuities in the design features of civic technologies offering online
spaces for debate.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 15:02:43 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Frappier",
"Tallulah",
"",
"I3, CESSP"
]
] |
new_dataset
| 0.975496 |
2303.13524
|
Filipo Sharevski
|
Filipo Sharevski and Jennifer Vander Loop and Peter Jachim and Amy
Devine and Emma Pieroni
|
Talking Abortion (Mis)information with ChatGPT on TikTok
| null | null | null | null |
cs.HC cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
In this study, we tested users' perception of accuracy and engagement with
TikTok videos in which ChatGPT responded to prompts about "at-home" abortion
remedies. The chatbot's responses, though somewhat vague and confusing,
nonetheless recommended consulting with health professionals before attempting
an "at-home" abortion. We used ChatGPT to create two TikTok video variants -
one where users can see ChatGPT explicitly typing back a response, and one
where the text response is presented without any reference to the chatbot. We
randomly exposed 100 participants to each variant and found that the group of
participants unaware of ChatGPT's text synthetization was more inclined to
believe the responses were misinformation. Under the same impression, TikTok
itself attached misinformation warning labels ("Get the facts about abortion")
to all videos after we collected our initial results. We then decided to test
the videos again with another set of 50 participants and found that the labels
did not affect the perceptions of abortion misinformation except in the case
where ChatGPT explicitly responded to a prompt for a lyrical output. We also
found that more than 60% of the participants expressed negative or hesitant
opinions about chatbots as sources of credible health information.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 17:35:27 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Sharevski",
"Filipo",
""
],
[
"Loop",
"Jennifer Vander",
""
],
[
"Jachim",
"Peter",
""
],
[
"Devine",
"Amy",
""
],
[
"Pieroni",
"Emma",
""
]
] |
new_dataset
| 0.976715 |
2303.13527
|
Yuyang Wang
|
Yuyang Wang, Ruichen Li, Jean-R\'emy Chardonnet, Pan Hui
|
Dataset for predicting cybersickness from a virtual navigation task
| null | null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a dataset collected to predict cybersickness in virtual
reality environments. The data was collected from navigation tasks in a virtual
environment designed to induce cybersickness. The dataset consists of many data
points collected from diverse participants, including physiological responses
(EDA and Heart Rate) and self-reported cybersickness symptoms. The paper will
provide a detailed description of the dataset, including the arranged
navigation task, the data collection procedures, and the data format. The
dataset will serve as a valuable resource for researchers to develop and
evaluate predictive models for cybersickness and will facilitate more research
in cybersickness mitigation.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 03:57:56 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Wang",
"Yuyang",
""
],
[
"Li",
"Ruichen",
""
],
[
"Chardonnet",
"Jean-Rémy",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.999713 |
2303.13545
|
Manas Mehta
|
Manas Mehta, Nugzar Chkhaidze, and Yizhen Wang
|
Developing IncidentUI -- A Ride Comfort and Disengagement Evaluation
Application for Autonomous Vehicles
|
Previously embargoed by Nvidia. Nvidia owns the rights
| null | null | null |
cs.HC cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This report details the design, development, and implementation of
IncidentUI, an Android tablet application designed to measure user-experienced
ride comfort and record disengagement data for autonomous vehicles (AV) during
test drives. The goal of our project was to develop an Android application to
run on a peripheral tablet and communicate with the Drive Pegasus AGX, the AI
Computing Platform for Nvidia's AV Level 2 Autonomy Solution Architecture [1],
to detect AV disengagements and report ride comfort. We designed and developed
an Android XML-based intuitive user interface for IncidentUI. The development
of IncidentUI required a redesign of the system architecture by redeveloping
the system communications protocol in Java and implementing the Protocol
Buffers (Protobufs) in Java using the existing system Protobuf definitions. The
final iteration of IncidentUI yielded the desired functionality while testing
on an AV test drive. We also received positive feedback from Nvidia's AV
Platform Team during our final IncidentUI demonstration.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 21:30:58 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Mehta",
"Manas",
""
],
[
"Chkhaidze",
"Nugzar",
""
],
[
"Wang",
"Yizhen",
""
]
] |
new_dataset
| 0.999612 |
2303.13548
|
Aparna Varde
|
Vishesh Kalvakurthi, Aparna S. Varde, John Jenq
|
Hey Dona! Can you help me with student course registration?
| null |
AAAI 2023 the 37th AAAI Conference on Artificial Intelligence
(AI4EDU workshop)
| null | null |
cs.HC cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a demo of an intelligent personal agent called Hey
Dona (or just Dona) with virtual voice assistance in student course
registration. It is a deployed project in the theme of AI for education. In
this digital age with a myriad of smart devices, users often delegate tasks to
agents. While pointing and clicking supersedes the erstwhile command-typing,
modern devices allow users to speak commands for agents to execute tasks,
enhancing speed and convenience. In line with this progress, Dona is an
intelligent agent catering to student needs by automated, voice-operated course
registration, spanning a multitude of accents, entailing task planning
optimization, with some language translation as needed. Dona accepts voice
input by microphone (Bluetooth, wired microphone), converts human voice to
computer understandable language, performs query processing as per user
commands, connects with the Web to search for answers, models task
dependencies, incorporates quality control, and conveys output by speaking to users
as well as displaying text, thus enabling human-AI interaction by speech cum
text. It is meant to work seamlessly on desktops, smartphones etc. and in
indoor as well as outdoor settings. To the best of our knowledge, Dona is among
the first of its kind as an intelligent personal agent for voice assistance in
student course registration. Due to its ubiquitous access for educational
needs, Dona directly impacts AI for education. It makes a broader impact on
smart city characteristics of smart living and smart people due to its
contributions to providing benefits for new ways of living and assisting 21st
century education, respectively.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 21:37:19 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Kalvakurthi",
"Vishesh",
""
],
[
"Varde",
"Aparna S.",
""
],
[
"Jenq",
"John",
""
]
] |
new_dataset
| 0.982052 |
2303.13549
|
Aparna Varde
|
Levi Corallo and Aparna S. Varde
|
Optical Character Recognition and Transcription of Berber Signs from
Images in a Low-Resource Language Amazigh
| null |
AAAI-2023 the 37th AAAI Conference on Artificial Intelligence
(AI4EDU workshop)
| null | null |
cs.CV cs.AI cs.CL cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Berber, or Amazigh, is a low-resource North African vernacular language family
spoken by the indigenous Berber ethnic group. It has its own unique alphabet,
called Tifinagh, used across Berber communities in Morocco,
Algeria, and others. The Afroasiatic language Berber is spoken by 14 million
people, yet lacks adequate representation in education, research, web
applications etc. For instance, there is no option of translation to or from
Amazigh / Berber on Google Translate, which hosts over 100 languages today.
Consequently, we do not find specialized educational apps, L2 (2nd language
learner) acquisition, automated language translation, or remote-access
facilities enabled in Berber. Motivated by this background, we propose a
supervised approach called DaToBS for Detection and Transcription of Berber
Signs. The DaToBS approach entails the automatic recognition and transcription
of Tifinagh characters from signs in photographs of natural environments. This
is achieved by self-creating a corpus of 1862 pre-processed character images;
curating the corpus with human-guided annotation; and feeding it into an OCR
model via the deployment of CNN for deep learning based on computer vision
models. We deploy computer vision modeling (rather than language models)
because there are pictorial symbols in this alphabet, this deployment being a
novel aspect of our work. The DaToBS experimentation and analyses yield over 92
percent accuracy in our research. To the best of our knowledge, ours is among
the first few works in the automated transcription of Berber signs from
roadside images with deep learning, yielding high accuracy. This can pave the
way for developing pedagogical applications in the Berber language, thereby
addressing an important goal of outreach to underrepresented communities via AI
in education.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 21:38:44 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Corallo",
"Levi",
""
],
[
"Varde",
"Aparna S.",
""
]
] |
new_dataset
| 0.99949 |
2303.13550
|
Rob Eagle
|
Rob Eagle
|
Augmented reality as a Thirdspace: Simultaneous experience of the
physical and virtual
|
Preprint of chapter published in Proceedings of the 3rd International
and Interdisciplinary Conference on Images and Imagination, edited by D.
Villa and F. Zuccoli, 2023, Springer Nature, reproduced with permission of
Springer Nature
| null |
10.1007/978-3-031-25906-7_39
| null |
cs.HC cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
With the proliferation of devices that display augmented reality (AR), now is
the time for scholars and practitioners to evaluate and engage critically with
emerging applications of the medium. AR mediates the way users see their
bodies, hear their environment and engage with places. Applied in various
forms, including social media, e-commerce, gaming, enterprise and art, the
medium facilitates a hybrid experience of physical and digital spaces. This
article employs a model of real-and-imagined space from geographer Edward Soja
to examine how the user of an AR app navigates the two intertwined spaces of
physical and digital, experiencing what Soja calls a 'Thirdspace'. The article
illustrates the potential for headset-based AR to engender such a Thirdspace
through the author's practice-led research project, the installation Through
the Wardrobe. This installation demonstrates how AR has the potential to shift
the way that users view and interact with their world with artistic
applications providing an opportunity to question assumptions of social norms,
identity and uses of physical space.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 22:46:22 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Eagle",
"Rob",
""
]
] |
new_dataset
| 0.995864 |
2303.13604
|
Mohammad Pedramfar
|
Mohammad Pedramfar, Vaneet Aggarwal
|
Stochastic Submodular Bandits with Delayed Composite Anonymous Bandit
Feedback
| null | null | null | null |
cs.LG cs.AI cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the problem of combinatorial multiarmed bandits with
stochastic submodular (in expectation) rewards and full-bandit delayed
feedback, where the delayed feedback is assumed to be composite and anonymous.
In other words, the delayed feedback is composed of components of rewards from
past actions, with unknown division among the sub-components. Three models of
delayed feedback (bounded adversarial, stochastic independent, and stochastic
conditionally independent) are studied, and regret bounds are derived for each
of the delay models. Ignoring the problem-dependent parameters, we show that
the regret bound for all the delay models is $\tilde{O}(T^{2/3} + T^{1/3} \nu)$ for
time horizon $T$, where $\nu$ is a delay parameter defined differently in the
three cases, thus demonstrating an additive term in regret with delay in all
the three delay models. The considered algorithm is demonstrated to outperform
other full-bandit approaches with delayed composite anonymous feedback.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 18:38:33 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Pedramfar",
"Mohammad",
""
],
[
"Aggarwal",
"Vaneet",
""
]
] |
new_dataset
| 0.99759 |
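A note on the composite anonymous feedback model described in the abstract above: the sketch below is an illustrative toy simulation, not code from the paper. The function name, the maximum delay, and the uniform random split of each reward into delayed sub-components are assumptions made purely for illustration; it only shows how a learner ends up observing per-round sums of unidentifiable reward fragments.

```python
import random

def simulate_composite_anonymous_feedback(rewards, max_delay, rng=random.Random(0)):
    """Toy illustration (not the paper's algorithm): each round's reward is split
    into random sub-components that arrive over the next `max_delay` rounds, and
    the learner only observes the per-round sum of whatever arrives."""
    T = len(rewards)
    observed = [0.0] * (T + max_delay)
    for t, r in enumerate(rewards):
        # Split reward r into `max_delay` non-negative pieces (unknown to the learner).
        cuts = sorted(rng.random() for _ in range(max_delay - 1))
        pieces = [a - b for a, b in zip(cuts + [1.0], [0.0] + cuts)]
        for d, frac in enumerate(pieces):
            observed[t + d] += r * frac  # this fragment arrives d rounds later, anonymously
    return observed[:T]

if __name__ == "__main__":
    rewards = [1.0, 0.5, 0.8, 0.2, 0.9]
    print(simulate_composite_anonymous_feedback(rewards, max_delay=3))
```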
2303.13675
|
Andrew Halterman
|
Andrew Halterman
|
Mordecai 3: A Neural Geoparser and Event Geocoder
|
6 pages, 1 figure, 4 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mordecai3 is a new end-to-end text geoparser and event geolocation system.
The system performs toponym resolution using a new neural ranking model to
resolve a place name extracted from a document to its entry in the Geonames
gazetteer. It also performs event geocoding, the process of linking events
reported in text with the place names where they are reported to occur, using
an off-the-shelf question-answering model. The toponym resolution model is
trained on a diverse set of existing training data, along with several thousand
newly annotated examples. The paper describes the model, its training process,
and performance comparisons with existing geoparsers. The system is available
as an open source Python library, Mordecai 3, and replaces an earlier
geoparser, Mordecai v2, one of the most widely used text geoparsers (Halterman
2017).
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 21:10:04 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Halterman",
"Andrew",
""
]
] |
new_dataset
| 0.994548 |
2303.13731
|
Yiran Li
|
Yiran Li, Junpeng Wang, Xin Dai, Liang Wang, Chin-Chia Michael Yeh,
Yan Zheng, Wei Zhang, Kwan-Liu Ma
|
How Does Attention Work in Vision Transformers? A Visual Analytics
Attempt
|
Accepted by PacificVis 2023 and selected to be published in TVCG
| null | null | null |
cs.LG cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision transformer (ViT) expands the success of transformer models from
sequential data to images. The model decomposes an image into many smaller
patches and arranges them into a sequence. Multi-head self-attentions are then
applied to the sequence to learn the attention between patches. Despite many
successful interpretations of transformers on sequential data, little effort
has been devoted to the interpretation of ViTs, and many questions remain
unanswered. For example, among the numerous attention heads, which one is more
important? How strongly do individual patches attend to their spatial
neighbors in different heads? What attention patterns have individual heads
learned? In this work, we answer these questions through a visual analytics
approach. Specifically, we first identify what heads are more important in ViTs
by introducing multiple pruning-based metrics. Then, we profile the spatial
distribution of attention strengths between patches inside individual heads, as
well as the trend of attention strengths across attention layers. Third, using
an autoencoder-based learning solution, we summarize all possible attention
patterns that individual heads could learn. Examining the attention strengths
and patterns of the important heads, we answer why they are important. Through
concrete case studies with experienced deep learning experts on multiple ViTs,
we validate the effectiveness of our solution that deepens the understanding of
ViTs from head importance, head attention strength, and head attention pattern.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 01:02:59 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Li",
"Yiran",
""
],
[
"Wang",
"Junpeng",
""
],
[
"Dai",
"Xin",
""
],
[
"Wang",
"Liang",
""
],
[
"Yeh",
"Chin-Chia Michael",
""
],
[
"Zheng",
"Yan",
""
],
[
"Zhang",
"Wei",
""
],
[
"Ma",
"Kwan-Liu",
""
]
] |
new_dataset
| 0.959314 |
2303.13733
|
Taeyoung Kim
|
Taeyoung Kim, Yunhee Jang, Chanjong Lee, Hyungjoon Koo, Hyoungshick
Kim
|
SmartMark: Software Watermarking Scheme for Smart Contracts
|
This paper is accepted for publication in ICSE 2023
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart contracts are self-executing programs on a blockchain to ensure
immutable and transparent agreements without the involvement of intermediaries.
Despite the growing popularity of smart contracts for many blockchain platforms
like Ethereum, smart contract developers cannot prevent competitors from
copying their smart contracts due to the absence of available technical means.
However, applying existing software watermarking techniques is challenging
because of the unique properties of smart contracts, such as a code size
constraint, non-free execution cost, and no support for dynamic allocation
under a virtual machine environment. This paper introduces a novel software
watermarking scheme, dubbed SmartMark, aiming to protect smart contracts from
piracy. SmartMark builds the control flow graph of a target contract's runtime
bytecode and locates a series of bytes randomly selected from a collection of
opcodes to represent a watermark. We implement a full-fledged prototype for
Ethereum, applying SmartMark to 27,824 unique smart contract bytecodes. Our
empirical results demonstrate that SmartMark can effectively embed a watermark
into smart contracts and verify its presence, meeting the requirements of
credibility and imperceptibility while incurring a slight performance
degradation. Furthermore, our security analysis shows that SmartMark is
resilient against foreseeable watermarking corruption attacks; e.g., a large
number of dummy opcodes are needed to disable a watermark effectively,
resulting in illegitimate smart contract clones that are not
economical.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 01:12:19 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Kim",
"Taeyoung",
""
],
[
"Jang",
"Yunhee",
""
],
[
"Lee",
"Chanjong",
""
],
[
"Koo",
"Hyungjoon",
""
],
[
"Kim",
"Hyoungshick",
""
]
] |
new_dataset
| 0.996965 |
2303.13739
|
Rui Zhao
|
Yulin Luo, Rui Zhao, Xiaobao Wei, Jinwei Chen, Yijie Lu, Shenghao Xie,
Tianyu Wang, Ruiqin Xiong, Ming Lu, Shanghang Zhang
|
MoWE: Mixture of Weather Experts for Multiple Adverse Weather Removal
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, most adverse weather removal tasks are handled independently, such
as deraining, desnowing, and dehazing. However, in autonomous driving
scenarios, the type, intensity, and mixing degree of the weather are unknown,
so the separated task setting cannot deal with these complex conditions well.
Besides, the vision applications in autonomous driving often aim at high-level
tasks, but existing weather removal methods neglect the connection between
performance on perceptual tasks and signal fidelity. To this end, in upstream
task, we propose a novel \textbf{Mixture of Weather Experts (MoWE)} Transformer
framework to handle complex weather removal in a perception-aware fashion. We
design a \textbf{Weather-aware Router} to make the experts more targeted to
specific weather types, without the need for weather type labels during
inference. To handle diverse weather conditions, we propose \textbf{Multi-scale
Experts} to fuse information among neighbor tokens. In downstream task, we
propose a \textbf{Label-free Perception-aware Metric} to measure whether the
outputs of image processing models are suitable for high level perception tasks
without the demand for semantic labels. We collect a synthetic dataset,
\textbf{MAW-Sim}, for autonomous driving scenarios to benchmark the multiple
weather removal performance of existing methods. Our MoWE achieves SOTA
performance in the upstream task on the proposed dataset and two public
datasets, i.e., All-Weather and Rain/Fog-Cityscapes, and also achieves better
perceptual results in the downstream segmentation task compared to other
methods. Our codes
and datasets will be released after acceptance.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 01:46:25 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Luo",
"Yulin",
""
],
[
"Zhao",
"Rui",
""
],
[
"Wei",
"Xiaobao",
""
],
[
"Chen",
"Jinwei",
""
],
[
"Lu",
"Yijie",
""
],
[
"Xie",
"Shenghao",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Xiong",
"Ruiqin",
""
],
[
"Lu",
"Ming",
""
],
[
"Zhang",
"Shanghang",
""
]
] |
new_dataset
| 0.99887 |
2303.13740
|
Raul Rojas Prof.
|
Ra\'ul Rojas
|
The First Computer Program
|
8 pages, 4 tables
| null | null | null |
cs.GL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 1837, the first computer program in history was sketched by the renowned
mathematician and inventor Charles Babbage. It was a program for the Analytical
Engine. The program consists of a sequence of arithmetical operations and the
necessary variable addresses (memory locations) of the arguments and the
result, displayed in tabular fashion, like a program trace. The program
computes the solutions for a system of two linear equations in two unknowns.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 01:46:27 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Rojas",
"Raúl",
""
]
] |
new_dataset
| 0.995769 |
2303.13743
|
Vishal Vinod
|
Vishal Vinod, Tanmay Shah, Dmitry Lagun
|
TEGLO: High Fidelity Canonical Texture Mapping from Single-View Images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent work in Neural Fields (NFs) learns 3D representations from
class-specific single-view image collections. However, they are unable to
reconstruct the input data while preserving high-frequency details. Further, these
methods do not disentangle appearance from geometry and hence are not suitable
for tasks such as texture transfer and editing. In this work, we propose TEGLO
(Textured EG3D-GLO) for learning 3D representations from single view
in-the-wild image collections for a given class of objects. We accomplish this
by training a conditional Neural Radiance Field (NeRF) without any explicit 3D
supervision. We equip our method with editing capabilities by creating a dense
correspondence mapping to a 2D canonical space. We demonstrate that such
mapping enables texture transfer and texture editing without requiring meshes
with shared topology. Our key insight is that by mapping the input image pixels
onto the texture space we can achieve near perfect reconstruction (>= 74 dB
PSNR at 1024^2 resolution). Our formulation allows for high quality 3D
consistent novel view synthesis with high-frequency details at megapixel image
resolution.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 01:52:03 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Vinod",
"Vishal",
""
],
[
"Shah",
"Tanmay",
""
],
[
"Lagun",
"Dmitry",
""
]
] |
new_dataset
| 0.991997 |
2303.13806
|
Xusheng Zhu
|
Xusheng Zhu, Wen Chen, Zhendong Li, Qingqing Wu, and Jun Li
|
Quadrature Spatial Scattering Modulation for mmWave Transmission
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, we investigate a novel quadrature spatial scattering
modulation (QSSM) transmission technique based on millimeter wave (mmWave)
systems, in which the transmitter generates two orthogonal beams targeting
candidate scatterers in the channel to carry the real and imaginary parts of
the conventional signal, respectively. Meanwhile, the maximum likelihood (ML)
detector is adopted at the receiver to recover the received beams and signals.
Based on the ML detector, we derive the closed-form average bit error
probability (ABEP) expression of the QSSM scheme. Furthermore, we evaluate the
asymptotic ABEP expression of the proposed scheme. Monte Carlo simulations
verify the exactness and tightness of the derivation results. It is shown that
the ABEP performance of QSSM is better than that of traditional spatial
scattering modulation.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 05:00:38 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Zhu",
"Xusheng",
""
],
[
"Chen",
"Wen",
""
],
[
"Li",
"Zhendong",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Li",
"Jun",
""
]
] |
new_dataset
| 0.9969 |
2303.13807
|
Juncheng Li
|
Hansheng Guo, Juncheng Li, Guangwei Gao, Zhi Li, Tieyong Zeng
|
PFT-SSR: Parallax Fusion Transformer for Stereo Image Super-Resolution
|
5 pages, 3 figures
|
ICASSP 2023
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Stereo image super-resolution aims to boost the performance of image
super-resolution by exploiting the supplementary information provided by
binocular systems. Although previous methods have achieved promising results,
they did not fully utilize the cross-view and intra-view information. To
further unleash the potential of binocular images, in this letter, we propose a
novel Transformer-based parallax fusion module called Parallax Fusion
Transformer (PFT). PFT employs a Cross-view Fusion Transformer (CVFT) to
utilize cross-view information and an Intra-view Refinement Transformer (IVRT)
for intra-view feature refinement. Meanwhile, we adopted the Swin Transformer
as the backbone for feature extraction and SR reconstruction to form a pure
Transformer architecture called PFT-SSR. Extensive experiments and ablation
studies show that PFT-SSR achieves competitive results and outperforms most
SOTA methods. Source code is available at https://github.com/MIVRC/PFT-PyTorch.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 05:04:52 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Guo",
"Hansheng",
""
],
[
"Li",
"Juncheng",
""
],
[
"Gao",
"Guangwei",
""
],
[
"Li",
"Zhi",
""
],
[
"Zeng",
"Tieyong",
""
]
] |
new_dataset
| 0.994086 |
2303.13825
|
Zhiyang Guo
|
Zhiyang Guo, Wengang Zhou, Min Wang, Li Li, Houqiang Li
|
HandNeRF: Neural Radiance Fields for Animatable Interacting Hands
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel framework to reconstruct accurate appearance and geometry
with neural radiance fields (NeRF) for interacting hands, enabling the
rendering of photo-realistic images and videos for gesture animation from
arbitrary views. Given multi-view images of a single hand or interacting hands,
an off-the-shelf skeleton estimator is first employed to parameterize the hand
poses. Then we design a pose-driven deformation field to establish
correspondence from those different poses to a shared canonical space, where a
pose-disentangled NeRF for one hand is optimized. Such unified modeling
efficiently complements the geometry and texture cues in rarely-observed areas
for both hands. Meanwhile, we further leverage the pose priors to generate
pseudo depth maps as guidance for occlusion-aware density learning. Moreover, a
neural feature distillation method is proposed to achieve cross-domain
alignment for color optimization. We conduct extensive experiments to verify
the merits of our proposed HandNeRF and report a series of state-of-the-art
results both qualitatively and quantitatively on the large-scale InterHand2.6M
dataset.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 06:19:19 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Guo",
"Zhiyang",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Wang",
"Min",
""
],
[
"Li",
"Li",
""
],
[
"Li",
"Houqiang",
""
]
] |
new_dataset
| 0.980831 |
2303.13839
|
Jiefeng Ma
|
Jiefeng Ma, Jun Du, Pengfei Hu, Zhenrong Zhang, Jianshu Zhang, Huihui
Zhu, Cong Liu
|
HRDoc: Dataset and Baseline Method Toward Hierarchical Reconstruction of
Document Structures
|
8 pages, 6 figures. Accepted by AAAI-2023
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The problem of document structure reconstruction refers to converting digital
or scanned documents into corresponding semantic structures. Most existing
works mainly focus on splitting the boundary of each element in a single
document page, neglecting the reconstruction of semantic structure in
multi-page documents. This paper introduces hierarchical reconstruction of
document structures as a novel task suitable for NLP and CV fields. To better
evaluate the system performance on the new task, we built a large-scale dataset
named HRDoc, which consists of 2,500 multi-page documents with nearly 2 million
semantic units. Every document in HRDoc has line-level annotations including
categories and relations obtained from rule-based extractors and human
annotators. Moreover, we proposed an encoder-decoder-based hierarchical
document structure parsing system (DSPS) to tackle this problem. By adopting a
multi-modal bidirectional encoder and a structure-aware GRU decoder with
soft-mask operation, the DSPS model surpasses the baseline method by a large
margin. All scripts and datasets will be made publicly available at
https://github.com/jfma-USTC/HRDoc.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 07:23:56 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Ma",
"Jiefeng",
""
],
[
"Du",
"Jun",
""
],
[
"Hu",
"Pengfei",
""
],
[
"Zhang",
"Zhenrong",
""
],
[
"Zhang",
"Jianshu",
""
],
[
"Zhu",
"Huihui",
""
],
[
"Liu",
"Cong",
""
]
] |
new_dataset
| 0.99972 |
2303.13859
|
Chunyi Li
|
Xinhui Huang, Chunyi Li, Abdelhak Bentaleb, Roger Zimmermann, Guangtao
Zhai
|
XGC-VQA: A unified video quality assessment model for User,
Professionally, and Occupationally-Generated Content
|
6 pages, 4 figures
| null | null | null |
cs.MM eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
With the rapid growth of Internet video data amounts and types, a unified
Video Quality Assessment (VQA) is needed to inspire video communication with
perceptual quality. To meet the real-time and universal requirements in
providing such inspiration, this study proposes a VQA model from a
classification of User Generated Content (UGC), Professionally Generated
Content (PGC), and Occupationally Generated Content (OGC). In the time domain,
this study utilizes non-uniform sampling, as each content type has varying
temporal importance based on its perceptual quality. In the spatial domain,
centralized downsampling is performed before the VQA process by utilizing a
patch splicing/sampling mechanism to lower complexity for real-time assessment.
The experimental results demonstrate that the proposed method achieves a median
correlation of $0.7$ while limiting the computation time below 5s for three
content types, which ensures that the communication experience of UGC, PGC, and
OGC can be optimized altogether.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 08:47:02 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Huang",
"Xinhui",
""
],
[
"Li",
"Chunyi",
""
],
[
"Bentaleb",
"Abdelhak",
""
],
[
"Zimmermann",
"Roger",
""
],
[
"Zhai",
"Guangtao",
""
]
] |
new_dataset
| 0.978468 |
2303.13868
|
Xingxing Wei
|
Wei Xingxing and Yu Jie and Huang Yao
|
Physically Adversarial Infrared Patches with Learnable Shapes and
Locations
|
accepted by CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Owing to the extensive application of infrared object detectors in the
safety-critical tasks, it is necessary to evaluate their robustness against
adversarial examples in the real world. However, the few existing physical infrared
attacks are complicated to implement in practical applications because of their
complex transformation from the digital world to the physical world. To address this
issue, in this paper, we propose a physically feasible infrared attack method
called "adversarial infrared patches". Considering the imaging mechanism of
infrared cameras by capturing objects' thermal radiation, adversarial infrared
patches conduct attacks by attaching a patch of thermal insulation materials on
the target object to manipulate its thermal distribution. To enhance
adversarial attacks, we present a novel aggregation regularization to guide the
simultaneous learning of the patch's shape and location on the target object.
Thus, a simple gradient-based optimization can be adapted to solve for them. We
verify adversarial infrared patches in different object detection tasks with
various object detectors. Experimental results show that our method achieves
more than 90\% Attack Success Rate (ASR) versus the pedestrian detector and
vehicle detector in the physical environment, where the objects are captured in
different angles, distances, postures, and scenes. More importantly,
the adversarial infrared patch is easy to implement, and it needs only 0.5 hours to
be constructed in the physical world, which verifies its effectiveness and
efficiency.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 09:11:36 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Xingxing",
"Wei",
""
],
[
"Jie",
"Yu",
""
],
[
"Yao",
"Huang",
""
]
] |
new_dataset
| 0.998528 |
2303.13885
|
Junsong Chen
|
Haojie Zhao and Junsong Chen and Lijun Wang and Huchuan Lu
|
ARKitTrack: A New Diverse Dataset for Tracking Using Mobile RGB-D Data
|
Accepted by CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Compared with traditional RGB-only visual tracking, few datasets have been
constructed for RGB-D tracking. In this paper, we propose ARKitTrack, a new
RGB-D tracking dataset for both static and dynamic scenes captured by
consumer-grade LiDAR scanners equipped on Apple's iPhone and iPad. ARKitTrack
contains 300 RGB-D sequences, 455 targets, and 229.7K video frames in total.
Along with the bounding box annotations and frame-level attributes, we also
annotate this dataset with 123.9K pixel-level target masks. Besides, the camera
intrinsic and camera pose of each frame are provided for future developments.
To demonstrate the potential usefulness of this dataset, we further present a
unified baseline for both box-level and pixel-level tracking, which integrates
RGB features with bird's-eye-view representations to better explore
cross-modality 3D geometry. In-depth empirical analysis has verified that the
ARKitTrack dataset can significantly facilitate RGB-D tracking and that the
proposed baseline method compares favorably against the state of the art. The
code and dataset are available at https://arkittrack.github.io.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 09:51:13 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Zhao",
"Haojie",
""
],
[
"Chen",
"Junsong",
""
],
[
"Wang",
"Lijun",
""
],
[
"Lu",
"Huchuan",
""
]
] |
new_dataset
| 0.999842 |
2303.13903
|
Timo H\"ackel
|
Timo H\"ackel, Philipp Meyer, Mehmet Mueller, Jan Schmitt-Solbrig,
Franz Korf, Thomas C. Schmidt
|
Dynamic Service-Orientation for Software-Defined In-Vehicle Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern In-Vehicle Networks (IVNs) are composed of a large number of devices
and services linked via an Ethernet-based time-sensitive network. Communication
in future IVNs will become more dynamic as services can be updated, added, or
removed during runtime. This requires a flexible and adaptable IVN, for which
Software-Defined Networking (SDN) is a promising candidate. In this paper, we
show how SDN can be used to support a dynamic, service-oriented network
architecture. We demonstrate our concept using the SOME/IP protocol, which is
the most widely deployed implementation of automotive service-oriented
architectures. In a simulation study, we evaluate the performance of
SOME/IP-adaptive SDN control compared to standard Ethernet switching and
non-optimized SDN. Our results show an expected overhead introduced by the
central SDN controller, which is, however, reduced by up to 50% compared to
SOME/IP-unaware SDN. For a large number of services, the setup time is in the
order of milliseconds, which matches standard Ethernet switching. A
SOME/IP-aware SDN controller can optimize the service discovery to improve
adaptability, robustness, security, and Quality-of-Service of the IVN while
remaining transparent to existing SOME/IP implementations.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 10:32:10 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Häckel",
"Timo",
""
],
[
"Meyer",
"Philipp",
""
],
[
"Mueller",
"Mehmet",
""
],
[
"Schmitt-Solbrig",
"Jan",
""
],
[
"Korf",
"Franz",
""
],
[
"Schmidt",
"Thomas C.",
""
]
] |
new_dataset
| 0.988367 |
2303.13913
|
Han Xue
|
Han Xue, Wenqiang Xu, Jieyi Zhang, Tutian Tang, Yutong Li, Wenxin Du,
Ruolin Ye, Cewu Lu
|
GarmentTracking: Category-Level Garment Pose Tracking
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Garments are important to humans. A visual system that can estimate and track
the complete garment pose can be useful for many downstream tasks and
real-world applications. In this work, we present a complete package to address
the category-level garment pose tracking task: (1) A recording system
VR-Garment, with which users can manipulate virtual garment models in
simulation through a VR interface. (2) A large-scale dataset VR-Folding, with
complex garment pose configurations in manipulation like flattening and
folding. (3) An end-to-end online tracking framework GarmentTracking, which
predicts complete garment pose both in canonical space and task space given a
point cloud sequence. Extensive experiments demonstrate that the proposed
GarmentTracking achieves great performance even when the garment has large
non-rigid deformation. It outperforms the baseline approach on both speed and
accuracy. We hope our proposed solution can serve as a platform for future
research. Codes and datasets are available in
https://garment-tracking.robotflow.ai.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 10:59:17 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Xue",
"Han",
""
],
[
"Xu",
"Wenqiang",
""
],
[
"Zhang",
"Jieyi",
""
],
[
"Tang",
"Tutian",
""
],
[
"Li",
"Yutong",
""
],
[
"Du",
"Wenxin",
""
],
[
"Ye",
"Ruolin",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.999702 |
2303.13931
|
Yousri Kessentini
|
Marwa Dhiaf, Ahmed Cheikh Rouhou, Yousri Kessentini, Sinda Ben Salem
|
MSdocTr-Lite: A Lite Transformer for Full Page Multi-script Handwriting
Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The Transformer has quickly become the dominant architecture for various
pattern recognition tasks due to its capacity for long-range representation.
However, transformers are data-hungry models and need large datasets for
training. In Handwritten Text Recognition (HTR), collecting a massive amount of
labeled data is a complicated and expensive task. In this paper, we propose a
lite transformer architecture for full-page multi-script handwriting
recognition. The proposed model comes with three advantages: First, to solve
the common problem of data scarcity, we propose a lite transformer model that
can be trained on a reasonable amount of data, which is the case for most
public HTR datasets, without the need for external data. Second, it can learn the
reading order at page-level thanks to a curriculum learning strategy, allowing
it to avoid line segmentation errors, exploit a larger context and reduce the
need for costly segmentation annotations. Third, it can be easily adapted to
other scripts by applying a simple transfer-learning process using only
page-level labeled images. Extensive experiments on different datasets with
different scripts (French, English, Spanish, and Arabic) show the effectiveness
of the proposed model.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 11:40:50 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Dhiaf",
"Marwa",
""
],
[
"Rouhou",
"Ahmed Cheikh",
""
],
[
"Kessentini",
"Yousri",
""
],
[
"Salem",
"Sinda Ben",
""
]
] |
new_dataset
| 0.997729 |
2303.13965
|
Rashmi Kushwaha
|
Rashmi Kushwaha, Shreyas Kulkarni, Yatindra Nath Singh
|
Generalized Distance Metric for Different DHT Routing Algorithms in
Peer-to-Peer Networks
|
7 pages, 4 figures, 13 Tables
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a generalized distance metric that can be used to identify routing
table entries and implement routing strategies to reach the root node for a
given key, in DHT (Distributed Hash Table) networks such as Chord, Kademlia,
Tapestry, and Pastry. The generalization shows that all four DHT algorithms
are, in fact, the same algorithm but with different parameters in the distance
representation. This paper also proposes that nodes can have routing tables of
varying sizes based on their memory capabilities. However, each node must have at
least two entries, one for the node closest to it and the other for the node
for which it is the closest, regardless of memory capacity. With this condition,
messages will still reach the correct root nodes. We also further observe that
in any network, if the distance metric of the DHT is same at all the nodes,
then the root node for a key will also be the same, irrespective of the size of
the routing table at different nodes.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 12:38:00 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Kushwaha",
"Rashmi",
""
],
[
"Kulkarni",
"Shreyas",
""
],
[
"Singh",
"Yatindra Nath",
""
]
] |
new_dataset
| 0.984793 |
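To make concrete the claim above that different DHTs mainly differ in the distance metric they plug into the same greedy routing rule, the sketch below contrasts two well-known metrics: Chord's clockwise modular distance and Kademlia's XOR distance. This is a generic illustration, not the paper's generalized metric; the 8-bit identifier space, node IDs, and function names are assumptions for illustration.

```python
ID_BITS = 8  # toy identifier space of 2**8 = 256 IDs

def chord_distance(a: int, b: int) -> int:
    """Clockwise distance on the Chord ring from a to b."""
    return (b - a) % (1 << ID_BITS)

def kademlia_distance(a: int, b: int) -> int:
    """Kademlia's XOR metric."""
    return a ^ b

def closest_known_node(key: int, known_nodes, distance) -> int:
    """Greedy next-hop choice: forward to the known node closest to the key
    under the given metric."""
    return min(known_nodes, key=lambda n: distance(n, key))

if __name__ == "__main__":
    nodes = [3, 45, 90, 200, 251]
    key = 97
    print("Chord next hop:   ", closest_known_node(key, nodes, chord_distance))
    print("Kademlia next hop:", closest_known_node(key, nodes, kademlia_distance))
```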
2303.14087
|
Xiaohao Sun
|
Xiaohao Sun, Hanxiao Jiang, Manolis Savva, Angel Xuan Chang
|
OPDMulti: Openable Part Detection for Multiple Objects
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Openable part detection is the task of detecting the openable parts of an
object in a single-view image, and predicting corresponding motion parameters.
Prior work investigated the unrealistic setting where all input images only
contain a single openable object. We generalize this task to scenes with
multiple objects each potentially possessing openable parts, and create a
corresponding dataset based on real-world scenes. We then address this more
challenging scenario with OPDFormer: a part-aware transformer architecture. Our
experiments show that the OPDFormer architecture significantly outperforms
prior work. The more realistic multiple-object scenarios we investigated remain
challenging for all methods, indicating opportunities for future work.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 15:52:20 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Sun",
"Xiaohao",
""
],
[
"Jiang",
"Hanxiao",
""
],
[
"Savva",
"Manolis",
""
],
[
"Chang",
"Angel Xuan",
""
]
] |
new_dataset
| 0.990028 |
2303.14126
|
Jordan J. Bird
|
Jordan J. Bird, Ahmad Lotfi
|
CIFAKE: Image Classification and Explainable Identification of
AI-Generated Synthetic Images
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recent technological advances in synthetic data have enabled the generation
of images with such high quality that human beings cannot tell the difference
between real-life photographs and Artificial Intelligence (AI) generated
images. Given the critical necessity of data reliability and authentication,
this article proposes to enhance our ability to recognise AI-generated images
through computer vision. Initially, a synthetic dataset is generated that
mirrors the ten classes of the already available CIFAR-10 dataset with latent
diffusion which provides a contrasting set of images for comparison to real
photographs. The model is capable of generating complex visual attributes, such
as photorealistic reflections in water. The two sets of data present as a
binary classification problem with regard to whether the photograph is real or
generated by AI. This study then proposes the use of a Convolutional Neural
Network (CNN) to classify the images into two categories: Real or Fake.
Following hyperparameter tuning and the training of 36 individual network
topologies, the optimal approach could correctly classify the images with
92.98% accuracy. Finally, this study implements explainable AI via Gradient
Class Activation Mapping to explore which features within the images are useful
for classification. Interpretation reveals interesting concepts within the
image, in particular, noting that the actual entity itself does not hold useful
information for classification; instead, the model focuses on small visual
imperfections in the background of the images. The complete dataset engineered
for this study, referred to as the CIFAKE dataset, is made publicly available
to the research community for future work.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 16:33:06 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Bird",
"Jordan J.",
""
],
[
"Lotfi",
"Ahmad",
""
]
] |
new_dataset
| 0.999624 |
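As a rough illustration of the real-vs-fake binary CNN classification described above, the sketch below builds a minimal Keras CNN with a sigmoid output and binary cross-entropy loss. It is not the architecture selected by the study's hyperparameter search over 36 topologies; the layer sizes, input shape, and label convention are assumptions.

```python
import tensorflow as tf

def build_binary_cnn(input_shape=(32, 32, 3)) -> tf.keras.Model:
    """Minimal CNN for real-vs-fake image classification (illustrative only)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # assumed: 1 = real, 0 = fake
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_binary_cnn().summary()
```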
2303.14139
|
Huiguang He
|
Yizhuo Lu, Changde Du, Dianpeng Wang and Huiguang He
|
MindDiffuser: Controlled Image Reconstruction from Human Brain Activity
with Semantic and Structural Diffusion
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconstructing visual stimuli from measured functional magnetic resonance
imaging (fMRI) has been a meaningful and challenging task. Previous studies
have successfully achieved reconstructions with structures similar to the
original images, such as the outlines and size of some natural images. However,
these reconstructions lack explicit semantic information and are difficult to
discern. In recent years, many studies have utilized multi-modal pre-trained
models with stronger generative capabilities to reconstruct images that are
semantically similar to the original ones. However, these images have
uncontrollable structural information such as position and orientation. To
address both of the aforementioned issues simultaneously, we propose a
two-stage image reconstruction model called MindDiffuser, utilizing Stable
Diffusion. In Stage 1, the VQ-VAE latent representations and the CLIP text
embeddings decoded from fMRI are put into the image-to-image process of Stable
Diffusion, which yields a preliminary image that contains semantic and
structural information. In Stage 2, we utilize the low-level CLIP visual
features decoded from fMRI as supervisory information, and continually adjust
the two features in Stage 1 through backpropagation to align the structural
information. The results of both qualitative and quantitative analyses
demonstrate that our proposed model has surpassed the current state-of-the-art
models in terms of reconstruction results on Natural Scenes Dataset (NSD).
Furthermore, the results of ablation experiments indicate that each component
of our model is effective for image reconstruction.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 16:41:42 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Lu",
"Yizhuo",
""
],
[
"Du",
"Changde",
""
],
[
"Wang",
"Dianpeng",
""
],
[
"He",
"Huiguang",
""
]
] |
new_dataset
| 0.975397 |
2303.14143
|
Evan King
|
Evan King, Haoxiang Yu, Sangsu Lee, and Christine Julien
|
"Get ready for a party": Exploring smarter smart spaces with help from
large language models
|
7 pages, 4 figures
| null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The right response to someone who says "get ready for a party" is deeply
influenced by meaning and context. For a smart home assistant (e.g., Google
Home), the ideal response might be to survey the available devices in the home
and change their state to create a festive atmosphere. Current practical
systems cannot service such requests since they require the ability to (1)
infer meaning behind an abstract statement and (2) map that inference to a
concrete course of action appropriate for the context (e.g., changing the
settings of specific devices). In this paper, we leverage the observation that
recent task-agnostic large language models (LLMs) like GPT-3 embody a vast
amount of cross-domain, sometimes unpredictable contextual knowledge that
existing rule-based home assistant systems lack, which can make them powerful
tools for inferring user intent and generating appropriate context-dependent
responses during smart home interactions. We first explore the feasibility of a
system that places an LLM at the center of command inference and action
planning, showing that LLMs have the capacity to infer intent behind vague,
context-dependent commands like "get ready for a party" and respond with
concrete, machine-parseable instructions that can be used to control smart
devices. We furthermore demonstrate a proof-of-concept implementation that puts
an LLM in control of real devices, showing its ability to infer intent and
change device state appropriately with no fine-tuning or task-specific
training. Our work hints at the promise of LLM-driven systems for
context-awareness in smart environments, motivating future research in this
area.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 16:51:08 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"King",
"Evan",
""
],
[
"Yu",
"Haoxiang",
""
],
[
"Lee",
"Sangsu",
""
],
[
"Julien",
"Christine",
""
]
] |
new_dataset
| 0.970895 |
2303.14174
|
Camilo Sanchez
|
Camilo Sanchez and Felix A. Epp
|
Experiential Futures In-the-wild to Inform Policy Design
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As technological innovation continues to shape our world at an accelerating
pace, policy makers struggle to keep up with the unintended consequences of
these new technologies. To address this policy-novelty gap, Responsible
Research Innovation (RRI) has been proposed as a way to drive science and
technology innovation towards socially desirable goals. This work suggests a
more active position for HCI in the materialisation of pluralistic future visions
and emphasizes the engagement between policy design and HCI for more agile and
responsive evaluation environments. It calls for both fields to engage in
questioning which futures are constructed and how, whom they benefit, and
how the findings of these interventions are interpreted towards other futures.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 17:37:25 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Sanchez",
"Camilo",
""
],
[
"Epp",
"Felix A.",
""
]
] |
new_dataset
| 0.981201 |
2303.14190
|
Junxuan Li
|
Ziang Cheng, Junxuan Li, Hongdong Li
|
WildLight: In-the-wild Inverse Rendering with a Flashlight
|
Accepted to CVPR23. Website:
https://junxuan-li.github.io/wildlight-website/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a practical photometric solution for the challenging
problem of in-the-wild inverse rendering under unknown ambient lighting. Our
system recovers scene geometry and reflectance using only multi-view images
captured by a smartphone. The key idea is to exploit smartphone's built-in
flashlight as a minimally controlled light source, and decompose image
intensities into two photometric components -- a static appearance corresponding
to ambient flux, plus a dynamic reflection induced by the moving flashlight.
Our method does not require flash/non-flash images to be captured in pairs.
Building on the success of neural light fields, we use an off-the-shelf method
to capture the ambient reflections, while the flashlight component enables
physically accurate photometric constraints to decouple reflectance and
illumination. Compared to existing inverse rendering methods, our setup is
applicable to non-darkroom environments yet sidesteps the inherent difficulties
of explicitly solving ambient reflections. We demonstrate by extensive
experiments that our method is easy to implement, casual to set up, and
consistently outperforms existing in-the-wild inverse rendering techniques.
Finally, our neural reconstruction can be easily exported to PBR textured
triangle mesh ready for industrial renderers.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 17:59:56 GMT"
}
] | 2023-03-27T00:00:00 |
[
[
"Cheng",
"Ziang",
""
],
[
"Li",
"Junxuan",
""
],
[
"Li",
"Hongdong",
""
]
] |
new_dataset
| 0.996081 |
1908.04531
|
Leon Derczynski
|
Gudbjartur Ingi Sigurbergsson, Leon Derczynski
|
Offensive Language and Hate Speech Detection for Danish
|
Proceedings of the Twelfth Language Resources and Evaluation
Conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The presence of offensive language on social media platforms and the
implications this poses is becoming a major concern in modern society. Given
the enormous amount of content created every day, automatic methods are
required to detect and deal with this type of content. Until now, most of the
research has focused on solving the problem for the English language, while the
problem is multilingual.
We construct a Danish dataset containing user-generated comments from
\textit{Reddit} and \textit{Facebook}, and to our knowledge, it is the first of
its kind. Our dataset is annotated to capture various types and targets of
offensive language. We develop four automatic classification systems, each designed to
work for both the English and the Danish language. In the detection of
offensive language in English, the best performing system achieves a macro
averaged F1-score of $0.74$, and the best performing system for Danish achieves
a macro averaged F1-score of $0.70$. In the detection of whether or not an
offensive post is targeted, the best performing system for English achieves a
macro averaged F1-score of $0.62$, while the best performing system for Danish
achieves a macro averaged F1-score of $0.73$. Finally, in the detection of the
target type in a targeted offensive post, the best performing system for
English achieves a macro averaged F1-score of $0.56$, and the best performing
system for Danish achieves a macro averaged F1-score of $0.63$.
Our work for both the English and the Danish language captures the types and
targets of offensive language, and presents automatic methods for detecting
different kinds of offensive language such as hate speech and cyberbullying.
|
[
{
"version": "v1",
"created": "Tue, 13 Aug 2019 08:29:48 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 04:24:09 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Sigurbergsson",
"Gudbjartur Ingi",
""
],
[
"Derczynski",
"Leon",
""
]
] |
new_dataset
| 0.99986 |
2108.09184
|
Adam Michael Roberts
|
Joe Gildea, Adrian Korban, Adam Michael Roberts, Alexander Tylyshchak
|
New binary self-dual codes of lengths 56, 62, 78, 92 and 94 from a
bordered construction
|
corrected typos; other minor corrections. arXiv admin note:
substantial text overlap with arXiv:2102.10354, arXiv:2106.12355,
arXiv:2102.12326
| null |
10.1016/j.disc.2023.113425
| null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a new bordered construction for self-dual codes
which employs $\lambda$-circulant matrices. We give the necessary conditions
for our construction to produce self-dual codes over a finite commutative
Frobenius ring of characteristic 2. Moreover, using our bordered construction
together with the well-known building-up and neighbour methods, we construct
many binary self-dual codes of lengths 56, 62, 78, 92 and 94 with parameters in
their weight enumerators that were not known in the literature before.
|
[
{
"version": "v1",
"created": "Fri, 20 Aug 2021 14:00:58 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 15:33:46 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Gildea",
"Joe",
""
],
[
"Korban",
"Adrian",
""
],
[
"Roberts",
"Adam Michael",
""
],
[
"Tylyshchak",
"Alexander",
""
]
] |
new_dataset
| 0.999358 |
2204.08696
|
Guangwei Gao
|
Guangwei Gao, Zixiang Xu, Juncheng Li, Jian Yang, Tieyong Zeng and
Guo-Jun Qi
|
CTCNet: A CNN-Transformer Cooperation Network for Face Image
Super-Resolution
|
IEEE Transactions on Image Processing, 12 figures, 9 tables
| null |
10.1109/TIP.2023.3261747
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, face super-resolution methods steered by deep convolutional neural
networks (CNNs) have achieved great progress in restoring degraded facial
details by jointly training with facial priors. However, these methods have
some obvious limitations. On the one hand, multi-task joint learning requires
additional annotation of the dataset, and the introduced prior network will
significantly increase the computational cost of the model. On the other hand,
the limited receptive field of CNNs will reduce the fidelity and naturalness of
the reconstructed facial images, resulting in suboptimal
reconstructed images. In this work, we propose an efficient CNN-Transformer
Cooperation Network (CTCNet) for face super-resolution tasks, which uses the
multi-scale connected encoder-decoder architecture as the backbone.
Specifically, we first devise a novel Local-Global Feature Cooperation Module
(LGCM), which is composed of a Facial Structure Attention Unit (FSAU) and a
Transformer block, to promote the consistency of local facial detail and global
facial structure restoration simultaneously. Then, we design an efficient
Feature Refinement Module (FRM) to enhance the encoded features. Finally, to
further improve the restoration of fine facial details, we present a
Multi-scale Feature Fusion Unit (MFFU) to adaptively fuse the features from
different stages in the encoder procedure. Extensive evaluations on various
datasets demonstrate that the proposed CTCNet significantly outperforms other
state-of-the-art methods. Source code will be available at
https://github.com/IVIPLab/CTCNet.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 06:38:29 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jan 2023 01:51:04 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2023 09:44:22 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Gao",
"Guangwei",
""
],
[
"Xu",
"Zixiang",
""
],
[
"Li",
"Juncheng",
""
],
[
"Yang",
"Jian",
""
],
[
"Zeng",
"Tieyong",
""
],
[
"Qi",
"Guo-Jun",
""
]
] |
new_dataset
| 0.990776 |
2208.10295
|
Wouter Jansen
|
Wouter Jansen, Nico Huebel, Jan Steckel
|
Physical LiDAR Simulation in Real-Time Engine
|
IEEE Sensors 2022 Conference, Dallas, TX, USA
| null |
10.1109/SENSORS52175.2022.9967197
| null |
cs.RO eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing and validating sensor applications and algorithms in simulation is
an important step in the modern development process. Furthermore, modern
open-source multi-sensor simulation frameworks are moving towards the usage of
video-game engines such as the Unreal Engine. Simulation of a sensor such as a
LiDAR can prove to be difficult in such real-time software. In this paper we
present a GPU-accelerated simulation of LiDAR based on its physical properties
and interaction with the environment. We provide a generation of the depth and
intensity data based on the properties of the sensor as well as the surface
material and incidence angle at which the light beams hit the surface. It is
validated against a real LiDAR sensor and shown to be accurate and precise
although highly dependent on the spectral data used for the material properties.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 13:23:17 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 08:43:01 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Dec 2022 09:30:18 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Jansen",
"Wouter",
""
],
[
"Huebel",
"Nico",
""
],
[
"Steckel",
"Jan",
""
]
] |
new_dataset
| 0.99792 |
2211.03456
|
Xin Jin
|
Xin Jin, Longhai Wu, Jie Chen, Youxin Chen, Jayoon Koo, Cheul-hee Hahm
|
A Unified Pyramid Recurrent Network for Video Frame Interpolation
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Flow-guided synthesis provides a common framework for frame interpolation,
where optical flow is estimated to guide the synthesis of intermediate frames
between consecutive inputs. In this paper, we present UPR-Net, a novel Unified
Pyramid Recurrent Network for frame interpolation. Cast in a flexible pyramid
framework, UPR-Net exploits lightweight recurrent modules for both
bi-directional flow estimation and intermediate frame synthesis. At each
pyramid level, it leverages estimated bi-directional flow to generate
forward-warped representations for frame synthesis; across pyramid levels, it
enables iterative refinement for both optical flow and intermediate frame. In
particular, we show that our iterative synthesis strategy can significantly
improve the robustness of frame interpolation on large motion cases. Despite
being extremely lightweight (1.7M parameters), our base version of UPR-Net
achieves excellent performance on a large range of benchmarks. Code and trained
models of our UPR-Net series are available at:
https://github.com/srcn-ivl/UPR-Net.
|
[
{
"version": "v1",
"created": "Mon, 7 Nov 2022 11:12:31 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 04:14:45 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Jin",
"Xin",
""
],
[
"Wu",
"Longhai",
""
],
[
"Chen",
"Jie",
""
],
[
"Chen",
"Youxin",
""
],
[
"Koo",
"Jayoon",
""
],
[
"Hahm",
"Cheul-hee",
""
]
] |
new_dataset
| 0.985499 |
2211.03704
|
Patrice Ossona de Mendez
|
J. Nesetril and P. Ossona de Mendez and S. Siebertz
|
Modulo-Counting First-Order Logic on Bounded Expansion Classes
|
submitted to CSGT2022 special issue
| null | null | null |
cs.LO math.CO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We prove that, on bounded expansion classes, every first-order formula with
modulo counting is equivalent, in a linear-time computable monadic expansion,
to an existential first-order formula. As a consequence, we derive, on bounded
expansion classes, that first-order transductions with modulo counting have the
same encoding power as existential first-order transductions. Also,
modulo-counting first-order model checking and computation of the size of sets
definable in modulo-counting first-order logic can be achieved in linear time
on bounded expansion classes. As an application, we prove that a class has
structurally bounded expansion if and only if it is a class of bounded depth
vertex-minors of graphs in a bounded expansion class. We also show how our
results can be used to implement fast matrix calculus on bounded expansion
matrices over a finite field.
|
[
{
"version": "v1",
"created": "Mon, 7 Nov 2022 17:20:37 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 09:31:58 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Nesetril",
"J.",
""
],
[
"de Mendez",
"P. Ossona",
""
],
[
"Siebertz",
"S.",
""
]
] |
new_dataset
| 0.998469 |
2211.11270
|
Cheng Guo
|
Cheng Guo and Xiuhua Jiang
|
LHDR: HDR Reconstruction for Legacy Content using a Lightweight DNN
|
Accepted in ACCV2022
| null |
10.1007/978-3-031-26313-2_19
| null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
High dynamic range (HDR) images are widely used in graphics and photography due
to the rich information they contain. Recently the community has started using
deep neural networks (DNNs) to reconstruct standard dynamic range (SDR) images
into HDR. Despite the superiority of current DNN-based methods, their
application scenario is still limited: (1) heavy models impede real-time
processing, and (2) they are inapplicable to legacy SDR content with more
degradation types. Therefore, we propose a lightweight DNN-based method trained
to tackle legacy SDR. For better design, we reform the problem modeling and
emphasize the degradation model. Experiments show that our method reaches
appealing performance with minimal computational cost compared with others.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 09:05:20 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Guo",
"Cheng",
""
],
[
"Jiang",
"Xiuhua",
""
]
] |
new_dataset
| 0.984192 |
2211.13081
|
Robert Alexander Marsden
|
Mario D\"obler, Robert A. Marsden, Bin Yang
|
Robust Mean Teacher for Continual and Gradual Test-Time Adaptation
|
Accepted at CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since experiencing domain shifts during test-time is inevitable in practice,
test-time adaptation (TTA) continues to adapt the model after deployment.
Recently, the area of continual and gradual test-time adaptation (TTA) emerged.
In contrast to standard TTA, continual TTA considers not only a single domain
shift, but a sequence of shifts. Gradual TTA further exploits the property that
some shifts evolve gradually over time. Since in both settings long test
sequences are present, error accumulation needs to be addressed for methods
relying on self-training. In this work, we propose and show that in the setting
of TTA, the symmetric cross-entropy is better suited as a consistency loss for
mean teachers compared to the commonly used cross-entropy. This is justified by
our analysis with respect to the (symmetric) cross-entropy's gradient
properties. To pull the test feature space closer to the source domain, where
the pre-trained model is well posed, contrastive learning is leveraged. Since
applications differ in their requirements, we address several settings,
including having source data available and the more challenging source-free
setting. We demonstrate the effectiveness of our proposed method 'robust mean
teacher' (RMT) on the continual and gradual corruption benchmarks CIFAR10C,
CIFAR100C, and Imagenet-C. We further consider ImageNet-R and propose a new
continual DomainNet-126 benchmark. State-of-the-art results are achieved on all
benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 16:14:45 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 18:44:42 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Döbler",
"Mario",
""
],
[
"Marsden",
"Robert A.",
""
],
[
"Yang",
"Bin",
""
]
] |
new_dataset
| 0.967295 |
2211.14068
|
Zhian Liu
|
Zhian Liu, Maomao Li, Yong Zhang, Cairong Wang, Qi Zhang, Jue Wang,
Yongwei Nie
|
Fine-Grained Face Swapping via Regional GAN Inversion
|
Accepted to CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel paradigm for high-fidelity face swapping that faithfully
preserves the desired subtle geometry and texture details. We rethink face
swapping from the perspective of fine-grained face editing, \textit{i.e.,
``editing for swapping'' (E4S)}, and propose a framework that is based on the
explicit disentanglement of the shape and texture of facial components.
Following the E4S principle, our framework enables both global and local
swapping of facial features, as well as controlling the amount of partial
swapping specified by the user. Furthermore, the E4S paradigm is inherently
capable of handling facial occlusions by means of facial masks. At the core of
our system lies a novel Regional GAN Inversion (RGI) method, which allows the
explicit disentanglement of shape and texture. It also allows face swapping to
be performed in the latent space of StyleGAN. Specifically, we design a
multi-scale mask-guided encoder to project the texture of each facial component
into regional style codes. We also design a mask-guided injection module to
manipulate the feature maps with the style codes. Based on the disentanglement,
face swapping is reformulated as a simplified problem of style and mask
swapping. Extensive experiments and comparisons with current state-of-the-art
methods demonstrate the superiority of our approach in preserving texture and
shape details, as well as working with high resolution images. The project page
is http://e4s2022.github.io
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 12:40:45 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 08:05:52 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Liu",
"Zhian",
""
],
[
"Li",
"Maomao",
""
],
[
"Zhang",
"Yong",
""
],
[
"Wang",
"Cairong",
""
],
[
"Zhang",
"Qi",
""
],
[
"Wang",
"Jue",
""
],
[
"Nie",
"Yongwei",
""
]
] |
new_dataset
| 0.970301 |
2211.14086
|
Jingwang Ling
|
Jingwang Ling, Zhibo Wang, Feng Xu
|
ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision
|
CVPR 2023. Project page: https://gerwang.github.io/shadowneus/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
By supervising camera rays between a scene and multi-view image planes, NeRF
reconstructs a neural scene representation for the task of novel view
synthesis. On the other hand, shadow rays between the light source and the
scene have yet to be considered. Therefore, we propose a novel shadow ray
supervision scheme that optimizes both the samples along the ray and the ray
location. By supervising shadow rays, we successfully reconstruct a neural SDF
of the scene from single-view images under multiple lighting conditions. Given
single-view binary shadows, we train a neural network to reconstruct a complete
scene not limited by the camera's line of sight. By further modeling the
correlation between the image colors and the shadow rays, our technique can
also be effectively extended to RGB inputs. We compare our method with previous
works on challenging tasks of shape reconstruction from single-view binary
shadow or RGB images and observe significant improvements. The code and data
are available at https://github.com/gerwang/ShadowNeuS.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 13:14:56 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 14:21:24 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Ling",
"Jingwang",
""
],
[
"Wang",
"Zhibo",
""
],
[
"Xu",
"Feng",
""
]
] |
new_dataset
| 0.997946 |
2212.07597
|
Emery Berger
|
Emery D. Berger and Sam Stern and Juan Altmayer Pizzorno
|
Triangulating Python Performance Issues with Scalene
| null | null | null |
Accepted, to appear at OSDI 2023
|
cs.PL cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes Scalene, a profiler specialized for Python. Scalene
combines a suite of innovations to precisely and simultaneously profile CPU,
memory, and GPU usage, all with low overhead. Scalene's CPU and memory
profilers help Python programmers direct their optimization efforts by
distinguishing between inefficient Python and efficient native execution time
and memory usage. Scalene's memory profiler employs a novel sampling algorithm
that lets it operate with low overhead yet high precision. It also incorporates
a novel algorithm that automatically pinpoints memory leaks, whether within
Python or across the Python-native boundary. Scalene tracks a new metric called
copy volume, which highlights costly copying operations that can occur when
Python silently converts between C and Python data representations, or between
CPU and GPU. Since its introduction, Scalene has been widely adopted, with over
500,000 downloads to date. We present experience reports from developers who
used Scalene to achieve significant performance improvements and memory
savings.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 02:56:25 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Berger",
"Emery D.",
""
],
[
"Stern",
"Sam",
""
],
[
"Pizzorno",
"Juan Altmayer",
""
]
] |
new_dataset
| 0.976437 |
2302.01392
|
Yiming Sun
|
Yiming Sun, Bing Cao, Pengfei Zhu, Qinghua Hu
|
Multi-modal Gated Mixture of Local-to-Global Experts for Dynamic Image
Fusion
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Infrared and visible image fusion aims to integrate comprehensive information
from multiple sources to achieve superior performance on various practical
tasks, such as detection, compared with a single modality. However, most
existing methods directly combine the texture details and object contrast of
different modalities, ignoring the dynamic changes in reality, which diminishes
the visible texture in good lighting conditions and the infrared contrast in
low lighting conditions. To fill this gap, we propose a dynamic image fusion
framework with a multi-modal gated mixture of local-to-global experts, termed
MoE-Fusion, to dynamically extract effective and comprehensive information from
the respective modalities. Our model consists of a Mixture of Local Experts
(MoLE) and a Mixture of Global Experts (MoGE) guided by a multi-modal gate. The
MoLE performs specialized learning of multi-modal local features, prompting the
fused images to retain the local information in a sample-adaptive manner, while
the MoGE focuses on the global information that complements the fused image
with overall texture detail and contrast. Extensive experiments show that our
MoE-Fusion outperforms state-of-the-art methods in preserving multi-modal image
texture and contrast through the local-to-global dynamic learning paradigm, and
also achieves superior performance on detection tasks. Our code will be
available: https://github.com/SunYM2020/MoE-Fusion.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 20:06:58 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 07:15:53 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Sun",
"Yiming",
""
],
[
"Cao",
"Bing",
""
],
[
"Zhu",
"Pengfei",
""
],
[
"Hu",
"Qinghua",
""
]
] |
new_dataset
| 0.983856 |
2302.01703
|
Tony Han
|
Fuzhang Han, Han Zheng, Wenjun Huang, Rong Xiong, Yue Wang, Yanmei
Jiao
|
DAMS-LIO: A Degeneration-Aware and Modular Sensor-Fusion LiDAR-inertial
Odometry
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With robots being deployed in increasingly complex environments like
underground mines and planetary surfaces, multi-sensor fusion methods have
gained increasing attention as a promising solution to state estimation in such
scenes. The fusion scheme is a central component of these
methods. In this paper, a light-weight iEKF-based LiDAR-inertial odometry
system is presented, which utilizes a degeneration-aware and modular
sensor-fusion pipeline that takes both LiDAR points and relative pose from
another odometry as the measurement in the update process only when
degeneration is detected. Both the Cramer-Rao Lower Bound (CRLB) theory and
simulation test are used to demonstrate the higher accuracy of our method
compared to methods using a single observation. Furthermore, the proposed
system is evaluated in perceptually challenging datasets against various
state-of-the-art sensor-fusion methods. The results show that the proposed
system achieves real-time performance and high estimation accuracy despite the
challenging environment and poor observations.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 13:01:55 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 14:09:06 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2023 06:02:59 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Han",
"Fuzhang",
""
],
[
"Zheng",
"Han",
""
],
[
"Huang",
"Wenjun",
""
],
[
"Xiong",
"Rong",
""
],
[
"Wang",
"Yue",
""
],
[
"Jiao",
"Yanmei",
""
]
] |
new_dataset
| 0.973073 |
2303.04238
|
Raz Lapid
|
Raz Lapid and Moshe Sipper
|
Patch of Invisibility: Naturalistic Black-Box Adversarial Attacks on
Object Detectors
| null | null | null | null |
cs.CV cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial attacks on deep-learning models have been receiving increased
attention in recent years. Work in this area has mostly focused on
gradient-based techniques, so-called white-box attacks, wherein the attacker
has access to the targeted model's internal parameters; such an assumption is
usually unrealistic in the real world. Some attacks additionally use the entire
pixel space to fool a given model, which is neither practical nor physical
(i.e., real-world). On the contrary, we propose herein a gradient-free method
that uses the learned image manifold of a pretrained generative adversarial
network (GAN) to generate naturalistic physical adversarial patches for object
detectors. We show that our proposed method works both digitally and
physically.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 21:03:48 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 11:14:06 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2023 08:49:30 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Lapid",
"Raz",
""
],
[
"Sipper",
"Moshe",
""
]
] |
new_dataset
| 0.990132 |
2303.12692
|
Benjamin Kenwright
|
Benjamin Kenwright
|
Dual-Quaternions: Theory and Applications in Sound
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Sound is a fundamental and rich source of information; playing a key role in
many areas from humanities and social sciences through to engineering and
mathematics. Sound is more than just data 'signals'. It encapsulates physical,
sensorial and emotional, as well as social, cultural and environmental factors.
Sound contributes to the transformation of our experiences, environments and
beliefs. Sound is all around us and everywhere. Hence, it should come as no
surprise that sound is a complex multicomponent entity with a vast assortment
of characteristics and applications. Of course, an important question is, what
is the best way to store and represent sound digitally to capture these
characteristics? What model or method is best for manipulating, extracting and
filtering sounds? There are a large number of representations and models,
however, one approach that has yet to be used with sound is dual-quaternions.
While dual-quaternions have established themselves in many fields of science
and computing as an efficient mathematical model that provides an unambiguous,
uncumbersome, and computationally effective means of representing
multi-component data, sound is one area that has yet to explore and reap their
benefits (using sound and audio-related dual-quaternion models). This
article aims to explore the exciting potential and possibilities
dual-quaternions offer when applied and combined with sound-based models
(including but not limited to the applications, tools, machine-learning,
statistical and computational sound-related algorithms).
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 16:40:24 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 17:09:59 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Kenwright",
"Benjamin",
""
]
] |
new_dataset
| 0.999566 |
2303.12798
|
Yimin Dai
|
Yimin Dai and Xian Shuai and Rui Tan and Guoliang Xing
|
Interpersonal Distance Tracking with mmWave Radar and IMUs
| null | null | null | null |
cs.NI cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Tracking interpersonal distances is essential for real-time social distancing
management and {\em ex-post} contact tracing to prevent spreads of contagious
diseases. Bluetooth neighbor discovery has been employed for such purposes in
combating COVID-19, but does not provide satisfactory spatiotemporal
resolutions. This paper presents ImmTrack, a system that uses a millimeter wave
radar and exploits the inertial measurement data from user-carried smartphones
or wearables to track interpersonal distances. By matching the movement traces
reconstructed from the radar and inertial data, the pseudo identities of the
inertial data can be transferred to the radar sensing results in the global
coordinate system. The re-identified, radar-sensed movement trajectories are
then used to track interpersonal distances. In a broader sense, ImmTrack is the
first system that fuses data from millimeter wave radar and inertial
measurement units for simultaneous user tracking and re-identification.
Evaluation with up to 27 people in various indoor/outdoor environments shows
ImmTrack's decimeters-seconds spatiotemporal accuracy in contact tracing, which
is similar to that of the privacy-intrusive camera surveillance and
significantly outperforms the Bluetooth neighbor discovery approach.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 15:44:17 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Dai",
"Yimin",
""
],
[
"Shuai",
"Xian",
""
],
[
"Tan",
"Rui",
""
],
[
"Xing",
"Guoliang",
""
]
] |
new_dataset
| 0.959031 |
2303.12808
|
Vaibhav Garg
|
Vaibhav Garg, Ganning Xu, and Munindar P. Singh
|
PACO: Provocation Involving Action, Culture, and Oppression
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In India, people identify with a particular group based on certain attributes
such as religion. The same religious groups are often provoked against each
other. Previous studies show the role of provocation in increasing tensions
between India's two prominent religious groups: Hindus and Muslims. With the
advent of the Internet, such provocation also surfaced on social media
platforms such as WhatsApp.
By leveraging an existing dataset of Indian WhatsApp posts, we identified
three categories of provoking sentences against Indian Muslims. Further, we
labeled 7,000 sentences for three provocation categories and called this
dataset PACO. We leveraged PACO to train a model that can identify provoking
sentences from a WhatsApp post. Our best model is fine-tuned RoBERTa and
achieved a 0.851 average AUC score over five-fold cross-validation.
Automatically identifying provoking sentences could stop provoking text from
reaching the masses, and can prevent possible discrimination or violence
against the target religious group.
Further, we studied the provocative speech through a pragmatic lens, by
identifying the dialog acts and impoliteness super-strategies used against the
religious group.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 04:39:36 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Garg",
"Vaibhav",
""
],
[
"Xu",
"Ganning",
""
],
[
"Singh",
"Munindar P.",
""
]
] |
new_dataset
| 0.999844 |
2303.12811
|
Amani Alshawabka
|
Amani Al-shawabka, Philip Pietraski, Sudhir B Pattar, Pedram Johari,
Tommaso Melodia
|
SignCRF: Scalable Channel-agnostic Data-driven Radio Authentication
System
|
11 pages, 13 figures, 3 tables
| null | null | null |
cs.CR cs.AI cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Radio Frequency Fingerprinting through Deep Learning (RFFDL) is a data-driven
IoT authentication technique that leverages the unique hardware-level
manufacturing imperfections associated with a particular device to recognize
(fingerprint) the device based on variations introduced in the transmitted
waveform. The proposed SignCRF is a scalable, channel-agnostic, data-driven
radio authentication platform with unmatched precision in fingerprinting
wireless devices based on their unique manufacturing impairments and
independent of the dynamic channel irregularities caused by mobility. SignCRF
consists of (i) a baseline classifier finely trained to authenticate devices
with high accuracy and at scale; (ii) an environment translator carefully
designed and trained to remove the dynamic channel impact from RF signals while
maintaining the radio's specific signature; (iii) a Max-Rule module that
selects the highest precision authentication technique between the baseline
classifier and the environment translator per radio. We design, train, and
validate the performance of SignCRF for multiple technologies in dynamic
environments and at scale (100 LoRa and 20 WiFi devices). We demonstrate that
SignCRF significantly improves the RFFDL performance by achieving as high as 5x
and 8x improvement in correct authentication of WiFi and LoRa devices when
compared to the state-of-the-art, respectively.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 21:11:02 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Al-shawabka",
"Amani",
""
],
[
"Pietraski",
"Philip",
""
],
[
"Pattar",
"Sudhir B",
""
],
[
"Johari",
"Pedram",
""
],
[
"Melodia",
"Tommaso",
""
]
] |
new_dataset
| 0.988544 |
2303.12869
|
Jessica Nayeli Lopez Espejel
|
Jessica L\'opez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid
Dahhane, El Hassane Ettifouri
|
JaCoText: A Pretrained Model for Java Code-Text Generation
|
International Conference on Code Generation and Implementation
Volume: 17
|
Espejel, J. L., Alassan, M. S. Y., Dahhane, W., & Ettifouri, E. H.
(2023). JaCoText: A Pretrained Model for Java Code-Text Generation.
International Journal of Computer and Systems Engineering, 17(2), 100-105
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Pretrained transformer-based models have shown high performance in natural
language generation tasks. However, a new wave of interest has surged: automatic
programming language generation. This task consists of translating natural
language instructions to a programming code. Despite the fact that well-known
pretrained models on language generation have achieved good performance in
learning programming languages, effort is still needed in automatic code
generation. In this paper, we introduce JaCoText, a model based on the
Transformer neural network. It aims to generate Java source code from natural language
text. JaCoText leverages advantages of both natural language and code
generation models. More specifically, we study some findings from the state of
the art and use them to (1) initialize our model from powerful pretrained
models, (2) explore additional pretraining on our Java dataset, (3) carry out
experiments combining the unimodal and bimodal data in the training, and (4)
scale the input and output length during the fine-tuning of the model.
Conducted experiments on CONCODE dataset show that JaCoText achieves new
state-of-the-art results.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 19:01:25 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Espejel",
"Jessica López",
""
],
[
"Alassan",
"Mahaman Sanoussi Yahaya",
""
],
[
"Dahhane",
"Walid",
""
],
[
"Ettifouri",
"El Hassane",
""
]
] |
new_dataset
| 0.996622 |
2303.12889
|
Zijin Wang
|
Ou Zheng, Mohamed Abdel-Aty, Zijin Wang, Shengxuan Ding, Dongdong
Wang, Yuxuan Huang
|
AVOID: Autonomous Vehicle Operation Incident Dataset Across the Globe
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crash data of autonomous vehicles (AV) or vehicles equipped with advanced
driver assistance systems (ADAS) provide key information for understanding the
nature of crashes and for enhancing automation systems. However, most of the
existing crash data sources are either limited by the sample size or suffer
from missing or unverified data. To contribute to the AV safety research
community, we introduce AVOID: an open AV crash dataset. Three types of
vehicles are considered: Advanced Driving System (ADS) vehicles, Advanced
Driver Assistance Systems (ADAS) vehicles, and low-speed autonomous shuttles.
The crash data are collected from the National Highway Traffic Safety
Administration (NHTSA), California Department of Motor Vehicles (CA DMV) and
incident news worldwide, and the data are manually verified and summarized in
ready-to-use format. In addition, land use, weather, and geometry information
are also provided. The dataset is expected to accelerate the research on AV
crash analysis and potential risk identification by providing the research
community with data of rich samples, diverse data sources, clear data
structure, and high data quality.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 20:05:23 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Zheng",
"Ou",
""
],
[
"Abdel-Aty",
"Mohamed",
""
],
[
"Wang",
"Zijin",
""
],
[
"Ding",
"Shengxuan",
""
],
[
"Wang",
"Dongdong",
""
],
[
"Huang",
"Yuxuan",
""
]
] |
new_dataset
| 0.999295 |
2303.12890
|
Djemel Ziou
|
Aicha Baya Goumeidane, Djemel Ziou, and Nafaa Nacereddine
|
Scale space radon transform-based inertia axis and object central
symmetry estimation
|
This work has not been published
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Inertia axes are involved in many techniques for image content measurement that
rely on information obtained from lines, angles, centroids, etc. Here, we
investigate the estimation of the main axis of inertia of an object in the
image. We identify the conditions under which the maximum of the Scale Space
Radon Transform (SSRT) coincides with the main axis of inertia. We show that,
by choosing the appropriate scale parameter, it is possible to match the SSRT
maximum with the location and orientation of the main axis of inertia of the
object embedded in the image. Furthermore, an example use case is presented in
which the central symmetry of binary objects is computed by means of SSRT
projections and the orientation of the axis of inertia. To this end, some SSRT
characteristics have been highlighted and exploited. The experiments show the
effectiveness of the SSRT-based computation of the main axis of inertia.
Concerning central symmetry, the results are very satisfying, as experiments
carried out on a randomly created image dataset and on existing datasets
successfully divided these image bases into centrally symmetric and
non-centrally symmetric objects.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 20:07:27 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Goumeidane",
"Aicha Baya",
""
],
[
"Ziou",
"Djemel",
""
],
[
"Nacereddine",
"Nafaa",
""
]
] |
new_dataset
| 0.999729 |
2303.12892
|
Thanh Dung Le
|
Thanh-Dung Le, Philippe Jouvet, Rita Noumeir
|
A Small-Scale Switch Transformer and NLP-based Model for Clinical
Narratives Classification
|
Submitted to IEEE Journal of Biomedical and Health Informatics
| null | null | null |
cs.CL eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In recent years, Transformer-based models such as the Switch Transformer have
achieved remarkable results in natural language processing tasks. However,
these models are often too complex and require extensive pre-training, which
limits their effectiveness for small clinical text classification tasks with
limited data. In this study, we propose a simplified Switch Transformer
framework and train it from scratch on a small French clinical text
classification dataset at CHU Sainte-Justine hospital. Our results demonstrate
that the simplified small-scale Transformer models outperform pre-trained
BERT-based models, including DistillBERT, CamemBERT, FlauBERT, and FrALBERT.
Additionally, using a mixture of expert mechanisms from the Switch Transformer
helps capture diverse patterns; hence, the proposed approach achieves better
results than a conventional Transformer with the self-attention mechanism.
Finally, our proposed framework achieves an accuracy of 87\%, precision at
87\%, and recall at 85\%, compared to the third-best pre-trained BERT-based
model, FlauBERT, which achieved an accuracy of 84\%, precision at 84\%, and
recall at 84\%. However, Switch Transformers have limitations, including a
generalization gap and sharp minima. We compare it with a multi-layer
perceptron neural network for small French clinical narratives classification
and show that the latter outperforms all other models.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 20:10:29 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Le",
"Thanh-Dung",
""
],
[
"Jouvet",
"Philippe",
""
],
[
"Noumeir",
"Rita",
""
]
] |
new_dataset
| 0.982193 |
2303.12940
|
Ehsan Nowroozi
|
Ehsan Nowroozi, Seyedsadra Seyedshoari, Yassine Mekdad, Erkay Savas,
Mauro Conti
|
Cryptocurrency wallets: assessment and security
| null | null | null | null |
cs.CR cs.CV cs.DC cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
A digital wallet, as a software program or a digital device, allows users to
conduct various transactions. Hot and cold digital wallets are considered the
two types of such wallets: digital wallets that need an online connection fall
into the first group, whereas digital wallets that can operate without an
internet connection belong to the second group. Prior to buying a digital
wallet, it is important to define for what purpose it will be utilized. The
ease with which a mobile phone transaction may be completed in a couple of
seconds and the speed with which transactions are executed are a reflection of
efficiency. One of the most important elements of digital wallets is data
organization. Digital wallets are significantly less expensive than classic
methods of transaction, which entail various charges and fees. Demand for their
usage is constantly growing due to speed, security, and the ability to conduct
transactions between two users without the need for a third party. As the
popularity of digital currency wallets grows, the number of security concerns
impacting them increases significantly. This chapter reviews the current status
of digital wallets on the market, as well as the options for an efficient
solution for obtaining and utilizing digital wallets. Finally, the digital
wallets' security and future improvement prospects are discussed.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 08:52:01 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Nowroozi",
"Ehsan",
""
],
[
"Seyedshoari",
"Seyedsadra",
""
],
[
"Mekdad",
"Yassine",
""
],
[
"Savas",
"Erkay",
""
],
[
"Conti",
"Mauro",
""
]
] |
new_dataset
| 0.999806 |
2303.12946
|
Jinchao Zhu
|
Feng Dong, Jinchao Zhu
|
Underwater Camouflage Object Detection Dataset
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have made a dataset for camouflage object detection, mainly for complex
seabed scenes, and named it UnderWater RGB&Sonar, or UW-RS for short. The UW-RS
dataset contains a total of 1972 images. The dataset mainly consists of two
parts, namely underwater optical data part (UW-R dataset) and underwater sonar
data part (UW-S dataset).
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 22:36:54 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Dong",
"Feng",
""
],
[
"Zhu",
"Jinchao",
""
]
] |
new_dataset
| 0.999825 |
2303.12984
|
Teerapat Jenrungrot
|
Teerapat Jenrungrot and Michael Chinen and W. Bastiaan Kleijn and Jan
Skoglund and Zal\'an Borsos and Neil Zeghidour and Marco Tagliasacchi
|
LMCodec: A Low Bitrate Speech Codec With Causal Transformer Models
|
5 pages, accepted to ICASSP 2023, project page:
https://mjenrungrot.github.io/chrome-media-audio-papers/publications/lmcodec
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce LMCodec, a causal neural speech codec that provides high quality
audio at very low bitrates. The backbone of the system is a causal
convolutional codec that encodes audio into a hierarchy of coarse-to-fine
tokens using residual vector quantization. LMCodec trains a Transformer
language model to predict the fine tokens from the coarse ones in a generative
fashion, allowing for the transmission of fewer codes. A second Transformer
predicts the uncertainty of the next codes given the past transmitted codes,
and is used to perform conditional entropy coding. A MUSHRA subjective test was
conducted and shows that the quality is comparable to reference codecs at
higher bitrates. Example audio is available at
https://mjenrungrot.github.io/chrome-media-audio-papers/publications/lmcodec.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 01:27:38 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Jenrungrot",
"Teerapat",
""
],
[
"Chinen",
"Michael",
""
],
[
"Kleijn",
"W. Bastiaan",
""
],
[
"Skoglund",
"Jan",
""
],
[
"Borsos",
"Zalán",
""
],
[
"Zeghidour",
"Neil",
""
],
[
"Tagliasacchi",
"Marco",
""
]
] |
new_dataset
| 0.991648 |
2303.13013
|
Nan Gao
|
Nan Gao, Zeyu Zhao, Zhi Zeng, Shuwu Zhang, Dongdong Weng
|
GesGPT: Speech Gesture Synthesis With Text Parsing from GPT
| null | null | null | null |
cs.CL cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gesture synthesis has gained significant attention as a critical research
area, focusing on producing contextually appropriate and natural gestures
corresponding to speech or textual input. Although deep learning-based
approaches have achieved remarkable progress, they often overlook the rich
semantic information present in the text, leading to less expressive and
meaningful gestures. We propose GesGPT, a novel approach to gesture generation
that leverages the semantic analysis capabilities of Large Language Models
(LLMs), such as GPT. By capitalizing on the strengths of LLMs for text
analysis, we design prompts to extract gesture-related information from textual
input. Our method entails developing prompt principles that transform gesture
generation into an intention classification problem based on GPT, and utilizing
a curated gesture library and integration module to produce semantically rich
co-speech gestures. Experimental results demonstrate that GesGPT effectively
generates contextually appropriate and expressive gestures, offering a new
perspective on semantic co-speech gesture generation.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 03:30:30 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Gao",
"Nan",
""
],
[
"Zhao",
"Zeyu",
""
],
[
"Zeng",
"Zhi",
""
],
[
"Zhang",
"Shuwu",
""
],
[
"Weng",
"Dongdong",
""
]
] |
new_dataset
| 0.958232 |
2303.13018
|
Yunsong Zhou
|
Yunsong Zhou, Hongzi Zhu, Quan Liu, Shan Chang, and Minyi Guo
|
MonoATT: Online Monocular 3D Object Detection with Adaptive Token
Transformer
|
in the Proceedings of IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile monocular 3D object detection (Mono3D) (e.g., on a vehicle, a drone,
or a robot) is an important yet challenging task. Existing transformer-based
offline Mono3D models adopt grid-based vision tokens, which is suboptimal when
using coarse tokens due to the limited available computational power. In this
paper, we propose an online Mono3D framework, called MonoATT, which leverages a
novel vision transformer with heterogeneous tokens of varying shapes and sizes
to facilitate mobile Mono3D. The core idea of MonoATT is to adaptively assign
finer tokens to areas of more significance before utilizing a transformer to
enhance Mono3D. To this end, we first use prior knowledge to design a scoring
network for selecting the most important areas of the image, and then propose a
token clustering and merging network with an attention mechanism to gradually
merge tokens around the selected areas in multiple stages. Finally, a
pixel-level feature map is reconstructed from heterogeneous tokens before
employing a SOTA Mono3D detector as the underlying detection core. Experiment
results on the real-world KITTI dataset demonstrate that MonoATT can
effectively improve the Mono3D accuracy for both near and far objects and
guarantee low latency. MonoATT yields the best performance compared with the
state-of-the-art methods by a large margin and is ranked number one on the
KITTI 3D benchmark.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 03:45:03 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Zhou",
"Yunsong",
""
],
[
"Zhu",
"Hongzi",
""
],
[
"Liu",
"Quan",
""
],
[
"Chang",
"Shan",
""
],
[
"Guo",
"Minyi",
""
]
] |
new_dataset
| 0.991729 |
2303.13019
|
Jinnan Piao
|
Jinnan Piao, Dong Li, Jindi Liu, Xueting Yu, Zhibo Li, Ming Yang, and
Peng Zeng
|
Construction Methods Based Minimum Weight Distribution for Polar Codes
with Successive Cancellation List Decoding
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we focus on construction methods based on the minimum weight distribution (MWD) for polar codes
to improve the performance with successive cancellation list (SCL) decoding. We
first propose an ordered and nested reliability sequence, namely MWD sequence,
to improve the ML performance of polar codes and apply fast construction
without the original channel information. In the MWD sequence, the synthetic
channels are sorted by the partial MWD, which is used to evaluate the influence
of an information bit on the MWD, and we prove that the MWD sequence is the
optimal sequence under ML decoding. Then, since the list size of SCL decoding is limited, we
introduce an entropy constraint to establish a relationship between the list
size and the ML performance and propose a heuristic and greedy construction
method named the bit grouping reorder based MWD (BGR-MWD) algorithm. In the
algorithm, we divide the synthetic channels into groups by the partial MWD and
greedily reorder the synthetic channels in some groups until the entropy
constraint is satisfied. The simulation results show the MWD sequence is
suitable for constructing polar codes with short code length. Meanwhile, the
BGR-MWD algorithm has superior performance over the traditional construction
methods for long code length.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 03:48:58 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Piao",
"Jinnan",
""
],
[
"Li",
"Dong",
""
],
[
"Liu",
"Jindi",
""
],
[
"Yu",
"Xueting",
""
],
[
"Li",
"Zhibo",
""
],
[
"Yang",
"Ming",
""
],
[
"Zeng",
"Peng",
""
]
] |
new_dataset
| 0.987831 |
2303.13026
|
Maryam Babaie
|
Maryam Babaie, Ayaz Akram, Jason Lowe-Power
|
A Cycle-level Unified DRAM Cache Controller Model for 3DXPoint Memory
Systems in gem5
| null | null | null | null |
cs.AR cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
To accommodate the growing memory footprints of today's applications, CPU
vendors have employed large DRAM caches, backed by large non-volatile memories
like Intel Optane (e.g., Intel's Cascade Lake). The existing computer
architecture simulators do not provide support to model and evaluate systems
which use DRAM devices as a cache to the non-volatile main memory. In this
work, we present a cycle-level DRAM cache model which is integrated with gem5.
This model leverages the flexibility of gem5's memory devices models and full
system support to enable exploration of many different DRAM cache designs. We
demonstrate the usefulness of this new tool by exploring the design space of a
DRAM cache controller through several case studies including the impact of
scheduling policies, required buffering, combining different memory
technologies (e.g., HBM, DDR3/4/5, 3DXPoint, High latency) as the cache and
main memory, and the effect of wear-leveling when DRAM cache is backed by NVM
main memory. We also perform experiments with real workloads in full-system
simulations to validate the proposed model and show the sensitivity of these
workloads to the DRAM cache sizes.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 04:24:30 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Babaie",
"Maryam",
""
],
[
"Akram",
"Ayaz",
""
],
[
"Lowe-Power",
"Jason",
""
]
] |
new_dataset
| 0.999375 |
2303.13071
|
Sizhe An
|
Sizhe An, Hongyi Xu, Yichun Shi, Guoxian Song, Umit Ogras, Linjie Luo
|
PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360$^{\circ}$
|
CVPR 2023. Project Page:https://sizhean.github.io/panohead
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Synthesis and reconstruction of the 3D human head have gained increasing
interest in computer vision and computer graphics recently. Existing state-of-the-art 3D
generative adversarial networks (GANs) for 3D human head synthesis are either
limited to near-frontal views or hard to preserve 3D consistency in large view
angles. We propose PanoHead, the first 3D-aware generative model that enables
high-quality view-consistent image synthesis of full heads in $360^\circ$ with
diverse appearance and detailed geometry using only in-the-wild unstructured
images for training. At its core, we lift up the representation power of recent
3D GANs and bridge the data alignment gap when training from in-the-wild images
with widely distributed views. Specifically, we propose a novel two-stage
self-adaptive image alignment for robust 3D GAN training. We further introduce
a tri-grid neural volume representation that effectively addresses front-face
and back-head feature entanglement rooted in the widely-adopted tri-plane
formulation. Our method instills prior knowledge of 2D image segmentation in
adversarial learning of 3D neural scene structures, enabling compositable head
synthesis in diverse backgrounds. Benefiting from these designs, our method
significantly outperforms previous 3D GANs, generating high-quality 3D heads
with accurate geometry and diverse appearances, even with long wavy and afro
hairstyles, renderable from arbitrary poses. Furthermore, we show that our
system can reconstruct full 3D heads from single input images for personalized
realistic 3D avatars.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 06:54:34 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"An",
"Sizhe",
""
],
[
"Xu",
"Hongyi",
""
],
[
"Shi",
"Yichun",
""
],
[
"Song",
"Guoxian",
""
],
[
"Ogras",
"Umit",
""
],
[
"Luo",
"Linjie",
""
]
] |
new_dataset
| 0.967623 |
2303.13076
|
Xiaoshi Wu
|
Xiaoshi Wu, Feng Zhu, Rui Zhao, Hongsheng Li
|
CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting
and Anchor Pre-Matching
|
11 pages, 4 figures. Accepted by CVPR 2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Open-vocabulary detection (OVD) is an object detection task aiming at
detecting objects from novel categories beyond the base categories on which the
detector is trained. Recent OVD methods rely on large-scale visual-language
pre-trained models, such as CLIP, for recognizing novel objects. We identify
the two core obstacles that need to be tackled when incorporating these models
into detector training: (1) the distribution mismatch that happens when
applying a VL-model trained on whole images to region recognition tasks; (2)
the difficulty of localizing objects of unseen classes. To overcome these
obstacles, we propose CORA, a DETR-style framework that adapts CLIP for
Open-vocabulary detection by Region prompting and Anchor pre-matching. Region
prompting mitigates the whole-to-region distribution gap by prompting the
region features of the CLIP-based region classifier. Anchor pre-matching helps
learning generalizable object localization by a class-aware matching mechanism.
We evaluate CORA on the COCO OVD benchmark, where we achieve 41.7 AP50 on novel
classes, which outperforms the previous SOTA by 2.4 AP50 even without resorting
to extra training data. When extra training data is available, we train
CORA$^+$ on both ground-truth base-category annotations and additional pseudo
bounding box labels computed by CORA. CORA$^+$ achieves 43.1 AP50 on the COCO
OVD benchmark and 28.1 box APr on the LVIS OVD benchmark.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 07:13:57 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Wu",
"Xiaoshi",
""
],
[
"Zhu",
"Feng",
""
],
[
"Zhao",
"Rui",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.99934 |
2303.13100
|
Yun Liu
|
Yun Liu, Xuefeng Yan, Zhilei Chen, Zhiqi Li, Zeyong Wei, and Mingqiang
Wei
|
PointGame: Geometrically and Adaptively Masked Auto-Encoder on Point
Clouds
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning is attracting large attention in point cloud
understanding. However, exploring discriminative and transferable features
still remains challenging due to their nature of irregularity and sparsity. We
propose a geometrically and adaptively masked auto-encoder for self-supervised
learning on point clouds, termed \textit{PointGame}. PointGame contains two
core components: GATE and EAT. GATE stands for the geometrical and adaptive
token embedding module; it not only absorbs the conventional wisdom of
geometric descriptors that captures the surface shape effectively, but also
exploits adaptive saliency to focus on the salient part of a point cloud. EAT
stands for the external attention-based Transformer encoder with linear
computational complexity, which increases the efficiency of the whole pipeline.
Unlike cutting-edge unsupervised learning models, PointGame leverages geometric
descriptors to perceive surface shapes and adaptively mines discriminative
features from training data. PointGame showcases clear advantages over its
competitors on various downstream tasks under both global and local fine-tuning
strategies. The code and pre-trained models will be publicly available.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 08:32:10 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Liu",
"Yun",
""
],
[
"Yan",
"Xuefeng",
""
],
[
"Chen",
"Zhilei",
""
],
[
"Li",
"Zhiqi",
""
],
[
"Wei",
"Zeyong",
""
],
[
"Wei",
"Mingqiang",
""
]
] |
new_dataset
| 0.998852 |
2303.13182
|
Zhengping Che
|
Mingze Wei, Yaomin Huang, Zhiyuan Xu, Ning Liu, Zhengping Che, Xinyu
Zhang, Chaomin Shen, Feifei Feng, Chun Shan, Jian Tang
|
CMG-Net: An End-to-End Contact-Based Multi-Finger Dexterous Grasping
Network
|
The first two authors are with equal contributions. Paper accepted by
ICRA 2023
| null | null | null |
cs.RO cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel representation for grasping using contacts
between multi-finger robotic hands and objects to be manipulated. This
representation significantly reduces the prediction dimensions and accelerates
the learning process. We present an effective end-to-end network, CMG-Net, for
grasping unknown objects in a cluttered environment by efficiently predicting
multi-finger grasp poses and hand configurations from a single-shot point
cloud. Moreover, we create a synthetic grasp dataset that consists of five
thousand cluttered scenes, 80 object categories, and 20 million annotations. We
perform a comprehensive empirical study and demonstrate the effectiveness of
our grasping representation and CMG-Net. Our work significantly outperforms the
state-of-the-art for three-finger robotic hands. We also demonstrate that the
model trained using synthetic data performs very well for real robots.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 11:29:31 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Wei",
"Mingze",
""
],
[
"Huang",
"Yaomin",
""
],
[
"Xu",
"Zhiyuan",
""
],
[
"Liu",
"Ning",
""
],
[
"Che",
"Zhengping",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Shen",
"Chaomin",
""
],
[
"Feng",
"Feifei",
""
],
[
"Shan",
"Chun",
""
],
[
"Tang",
"Jian",
""
]
] |
new_dataset
| 0.999805 |
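To see why a contact-based grasp representation is low-dimensional, here is a toy encoding (assumed and simplified; the indices, approach vector, and three-finger layout are illustrative only, not CMG-Net's actual output format) of a grasp as per-finger contact points plus an approach direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy object point cloud and a three-finger grasp described purely by
# contact points (indices into the cloud) plus an approach direction.
object_pts = rng.normal(size=(1024, 3))
contact_idx = np.array([10, 200, 700])        # one contact per finger
approach = np.array([0.0, 0.0, 1.0])          # assumed approach direction

contacts = object_pts[contact_idx]            # (3, 3) fingertip target positions

# A compact grasp vector: 3 contacts * 3 coords + 3 approach = 12 numbers,
# far fewer than a full multi-finger joint configuration.
grasp_vec = np.concatenate([contacts.reshape(-1), approach])
print(grasp_vec.shape)                        # (12,)

# Grasp centre and a width-like spread can be recovered from contacts alone.
centre = contacts.mean(axis=0)
spread = np.linalg.norm(contacts - centre, axis=1).max()
print(centre, spread)
```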
2303.13194
|
Yunkang Cao
|
Yunkang Cao, Xiaohao Xu, Weiming Shen
|
Complementary Pseudo Multimodal Feature for Point Cloud Anomaly
Detection
|
Submitted to Pattern Recognition. Code is available on
https://github.com/caoyunkang/CPMF
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud (PCD) anomaly detection steadily emerges as a promising research
area. This study aims to improve PCD anomaly detection performance by combining
handcrafted PCD descriptions with powerful pre-trained 2D neural networks. To
this end, this study proposes Complementary Pseudo Multimodal Feature (CPMF)
that incorporates local geometrical information in 3D modality using
handcrafted PCD descriptors and global semantic information in the generated
pseudo 2D modality using pre-trained 2D neural networks. For global semantics
extraction, CPMF projects the original PCD into a pseudo 2D modality containing
multi-view images. These images are delivered to pre-trained 2D neural networks
for informative 2D modality feature extraction. The 3D and 2D modality features
are aggregated to obtain the CPMF for PCD anomaly detection. Extensive
experiments demonstrate the complementary capacity between 2D and 3D modality
features and the effectiveness of CPMF, with 95.15% image-level AU-ROC and
92.93% pixel-level PRO on the MVTec3D benchmark. Code is available on
https://github.com/caoyunkang/CPMF.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 11:52:17 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Cao",
"Yunkang",
""
],
[
"Xu",
"Xiaohao",
""
],
[
"Shen",
"Weiming",
""
]
] |
new_dataset
| 0.963744 |
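A minimal sketch of the pseudo-2D projection step (an assumption-laden toy: orthographic depth maps rendered at a handful of azimuth angles, not the paper's rendering pipeline). The resulting multi-view images are what would be fed to a pre-trained 2D network for global semantic features:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(2048, 3))      # toy point cloud

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def depth_image(points, res=64):
    """Orthographic depth map: rasterise x-y and keep the nearest z per pixel."""
    img = np.full((res, res), np.inf)
    uv = ((points[:, :2] + 1) / 2 * (res - 1)).astype(int)
    uv = np.clip(uv, 0, res - 1)
    for (u, v), z in zip(uv, points[:, 2]):
        img[v, u] = min(img[v, u], z)
    img[np.isinf(img)] = 0.0                  # empty pixels get a background value
    return img

# Pseudo 2D modality: render the same cloud from several azimuth angles.
angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
views = [depth_image(pts @ rot_y(a).T) for a in angles]
print(len(views), views[0].shape)             # 6 (64, 64)
```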
2303.13254
|
EPTCS
|
Ana Cruz (University of Aveiro), Alexandre Madeira (University of
Aveiro), Luís Soares Barbosa (University of Minho)
|
Paraconsistent Transition Systems
|
In Proceedings LSFA 2022, arXiv:2303.12680
|
EPTCS 376, 2023, pp. 3-15
|
10.4204/EPTCS.376.3
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Often in Software Engineering, a modeling formalism has to support scenarios
of inconsistency in which several requirements either reinforce or contradict
each other. Paraconsistent transition systems are proposed in this paper as one
such formalism: states evolve through two accessibility relations capturing
weighted evidence of a transition or its absence, respectively. Their weights
come from a specific residuated lattice. A category of these systems, and the
corresponding algebra, is defined as providing a formal setting to model
different application scenarios. One of them, dealing with the effect of
quantum decoherence in quantum programs, is used for illustration purposes.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 13:37:49 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Cruz",
"Ana",
"",
"University of Aveiro"
],
[
"Madeira",
"Alexandre",
"",
"University of\n Aveiro"
],
[
"Barbosa",
"LuÂ-Ã-s Soares",
"",
"University of Minho"
]
] |
new_dataset
| 0.999561 |
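As a toy illustration of the paired-evidence idea (using the [0, 1] Gödel lattice with min as meet; this is an assumed encoding for intuition only, not the categorical construction defined in the paper), each transition carries one weight for its presence and one against it, so the two can be simultaneously high (inconsistency) or low (vagueness):

```python
# A toy paraconsistent transition system over the [0, 1] lattice.
# Every edge carries a pair (evidence that the transition exists,
# evidence that it does not).
transitions = {
    ("s0", "a", "s1"): (0.9, 0.1),   # strong evidence for, weak against
    ("s1", "a", "s2"): (0.7, 0.6),   # conflicting evidence: inconsistent
    ("s1", "b", "s0"): (0.2, 0.1),   # little evidence either way: vague
}

def path_weight(path):
    """Accumulate evidence along a path: min for joint possibility,
    max for the strongest evidence that some step is blocked."""
    pos, neg = 1.0, 0.0
    for step in path:
        p, n = transitions[step]
        pos = min(pos, p)
        neg = max(neg, n)
    return pos, neg

print(path_weight([("s0", "a", "s1"), ("s1", "a", "s2")]))   # (0.7, 0.6)
```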