id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
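The sketch below shows one way to consume rows with this schema. It is a minimal example, assuming the records are exported as JSON Lines with the column names above as keys; the file name `arxiv_predictions.jsonl` and the export format are assumptions for illustration, not part of the dataset.

```python
import json

def load_records(path):
    """Yield one dict per dataset row (assumed JSON Lines export)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

if __name__ == "__main__":
    for rec in load_records("arxiv_predictions.jsonl"):  # hypothetical file
        # Nullable columns (comments, journal-ref, doi, report-no) may be None.
        if rec["prediction"] == "new_dataset" and rec["probability"] >= 0.99:
            # authors_parsed holds [last, first, suffix] triples.
            last, first = rec["authors_parsed"][0][:2]
            print(rec["id"], f"{first} {last}".strip(), "-", rec["title"])
```

Filtering on `prediction` and `probability` mirrors the rows below: every record in this preview carries the single `new_dataset` class with probability between 0.95 and 1.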
2303.15671
|
Suncheng Xiang
|
Qingzhong Chen, Shilun Cai, Crystal Cai, Zefang Yu, Dahong Qian,
Suncheng Xiang
|
Colo-SCRL: Self-Supervised Contrastive Representation Learning for
Colonoscopic Video Retrieval
|
Accepted by ICME 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Colonoscopic video retrieval, which is a critical part of polyp treatment,
has great clinical significance for the prevention and treatment of colorectal
cancer. However, retrieval models trained on action recognition datasets
usually produce unsatisfactory retrieval results on colonoscopic datasets due
to the large domain gap between them. To seek a solution to this problem, we
construct a large-scale colonoscopic dataset named Colo-Pair for medical
practice. Based on this dataset, a simple yet effective training method called
Colo-SCRL is proposed for more robust representation learning. It aims to
refine general knowledge from colonoscopies through masked autoencoder-based
reconstruction and momentum contrast to improve retrieval performance. To the
best of our knowledge, this is the first attempt to employ the contrastive
learning paradigm for medical video retrieval. Empirical results show that our
method significantly outperforms current state-of-the-art methods in the
colonoscopic video retrieval task.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 01:27:23 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Chen",
"Qingzhong",
""
],
[
"Cai",
"Shilun",
""
],
[
"Cai",
"Crystal",
""
],
[
"Yu",
"Zefang",
""
],
[
"Qian",
"Dahong",
""
],
[
"Xiang",
"Suncheng",
""
]
] |
new_dataset
| 0.997412 |
2303.15732
|
Farhan Rozaidi
|
Farhan Rozaidi, Emma Waters, Olivia Dawes, Jennifer Yang, Joseph R.
Davidson, Ross L. Hatton
|
HISSbot: Sidewinding with a Soft Snake Robot
|
7 pages, 9 figures, to be published in RoboSoft 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Snake robots are characterized by their ability to navigate through small
spaces and loose terrain by utilizing efficient cyclic forms of locomotion.
Soft snake robots are a subset of these robots which utilize soft, compliant
actuators to produce movement. Prior work on soft snake robots has primarily
focused on planar gaits, such as undulation. More efficient spatial gaits, such
as sidewinding, remain unexplored for soft snake robots. We propose a novel
means of constructing a soft snake robot capable of sidewinding, and introduce
the Helical Inflating Soft Snake Robot (HISSbot). We validate this actuation
through the physical HISSbot, and demonstrate its ability to sidewind across
various surfaces. Our tests show robustness in locomotion through low-friction
and granular media.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 04:57:30 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Rozaidi",
"Farhan",
""
],
[
"Waters",
"Emma",
""
],
[
"Dawes",
"Olivia",
""
],
[
"Yang",
"Jennifer",
""
],
[
"Davidson",
"Joseph R.",
""
],
[
"Hatton",
"Ross L.",
""
]
] |
new_dataset
| 0.999207 |
2303.15762
|
Shlomi Steinberg
|
Shlomi Steinberg, Ravi Ramamoorthi, Benedikt Bitterli, Eugene d'Eon,
Ling-Qi Yan, Matt Pharr
|
A Generalized Ray Formulation For Wave-Optics Rendering
|
For additional information, see
https://ssteinberg.xyz/2023/03/27/rtplt/
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Under ray-optical light transport, the classical ray serves as a local and
linear "point query" of light's behaviour. Such point queries are useful, and
sophisticated path tracing and sampling techniques enable efficiently computing
solutions to light transport problems in complex, real-world settings and
environments. However, such formulations are firmly confined to the realm of
ray optics, while many applications of interest, in computer graphics and
computational optics, demand a more precise understanding of light. We
rigorously formulate the generalized ray, which enables local and linear point
queries of the wave-optical phase space. Furthermore, we present sample-solve:
a simple method that serves as a novel link between path tracing and
computational optics. We will show that this link enables the application of
modern path tracing techniques for wave-optical rendering, improving upon the
state-of-the-art in terms of the generality and accuracy of the formalism, ease
of application, as well as performance. Sampling using generalized rays enables
interactive rendering under rigorous wave optics, with orders-of-magnitude
faster performance compared to existing techniques.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 06:42:52 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Steinberg",
"Shlomi",
""
],
[
"Ramamoorthi",
"Ravi",
""
],
[
"Bitterli",
"Benedikt",
""
],
[
"d'Eon",
"Eugene",
""
],
[
"Yan",
"Ling-Qi",
""
],
[
"Pharr",
"Matt",
""
]
] |
new_dataset
| 0.991365 |
2303.15780
|
Hiromichi Kamata
|
Hiromichi Kamata, Yuiko Sakuma, Akio Hayakawa, Masato Ishii, Takuya
Narihira
|
Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion
|
Project page: https://sony.github.io/Instruct3Dto3D-doc/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a high-quality 3D-to-3D conversion method, Instruct 3D-to-3D. Our
method is designed for a novel task, which is to convert a given 3D scene to
another scene according to text instructions. Instruct 3D-to-3D applies
pretrained Image-to-Image diffusion models for 3D-to-3D conversion. This
enables the likelihood maximization of each viewpoint image and high-quality 3D
generation. In addition, our proposed method explicitly inputs the source 3D
scene as a condition, which enhances 3D consistency and controllability of how
much of the source 3D scene structure is reflected. We also propose dynamic
scaling, which allows the intensity of the geometry transformation to be
adjusted. We performed quantitative and qualitative evaluations and showed that
our proposed method achieves higher quality 3D-to-3D conversions than baseline
methods.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 07:50:45 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Kamata",
"Hiromichi",
""
],
[
"Sakuma",
"Yuiko",
""
],
[
"Hayakawa",
"Akio",
""
],
[
"Ishii",
"Masato",
""
],
[
"Narihira",
"Takuya",
""
]
] |
new_dataset
| 0.998665 |
2303.15782
|
Nick Heppert
|
Nick Heppert, Muhammad Zubair Irshad, Sergey Zakharov, Katherine Liu,
Rares Andrei Ambrus, Jeannette Bohg, Abhinav Valada, Thomas Kollar
|
CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects
|
20 pages, 11 figures, accepted at CVPR 2023
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present CARTO, a novel approach for reconstructing multiple articulated
objects from a single stereo RGB observation. We use implicit object-centric
representations and learn a single geometry and articulation decoder for
multiple object categories. Despite training on multiple categories, our
decoder achieves a comparable reconstruction accuracy to methods that train
bespoke decoders separately for each category. Combined with our stereo image
encoder we infer the 3D shape, 6D pose, size, joint type, and the joint state
of multiple unknown objects in a single forward pass. Our method achieves a
20.4% absolute improvement in mAP 3D IOU50 for novel instances when compared to
a two-stage pipeline. Inference is fast and runs on an NVIDIA TITAN XP
GPU at 1 Hz when eight or fewer objects are present. While only trained on simulated
data, CARTO transfers to real-world object instances. Code and evaluation data
are available at: http://carto.cs.uni-freiburg.de
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 07:52:15 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Heppert",
"Nick",
""
],
[
"Irshad",
"Muhammad Zubair",
""
],
[
"Zakharov",
"Sergey",
""
],
[
"Liu",
"Katherine",
""
],
[
"Ambrus",
"Rares Andrei",
""
],
[
"Bohg",
"Jeannette",
""
],
[
"Valada",
"Abhinav",
""
],
[
"Kollar",
"Thomas",
""
]
] |
new_dataset
| 0.991689 |
2303.15784
|
EPTCS
|
Stephen Mell, Osbert Bastani, Steve Zdancewic
|
Ideograph: A Language for Expressing and Manipulating Structured Data
|
In Proceedings TERMGRAPH 2022, arXiv:2303.14219
|
EPTCS 377, 2023, pp. 65-84
|
10.4204/EPTCS.377.4
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Ideograph, a language for expressing and manipulating structured
data. Its types describe kinds of structures, such as natural numbers, lists,
multisets, binary trees, syntax trees with variable binding, directed
multigraphs, and relational databases. Fully normalized terms of a type
correspond exactly to members of the structure, analogous to a Church-encoding.
Moreover, definable operations over these structures are guaranteed to respect
the structures' equivalences. In this paper, we give the syntax and semantics
of the non-polymorphic subset of Ideograph, and we demonstrate how it can
represent and manipulate several interesting structures.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 07:52:50 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Mell",
"Stephen",
""
],
[
"Bastani",
"Osbert",
""
],
[
"Zdancewic",
"Steve",
""
]
] |
new_dataset
| 0.990679 |
2303.15819
|
Monika Dalal
|
Monika Dalal, Sucheta Dutt and Ranjeet Sehmi
|
MDS and MHDR cyclic codes over finite chain rings
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this work, a unique set of generators for a cyclic code over a finite
chain ring has been established. The minimal spanning set and rank of the code
have also been determined. Further, sufficient as well as necessary conditions
for a cyclic code to be an MDS code and for a cyclic code to be an MHDR code
have been obtained. Some examples of optimal cyclic codes have also been
presented.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 08:45:14 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Dalal",
"Monika",
""
],
[
"Dutt",
"Sucheta",
""
],
[
"Sehmi",
"Ranjeet",
""
]
] |
new_dataset
| 0.999283 |
2303.15822
|
Deze Wang
|
Deze Wang, Boxing Chen, Shanshan Li, Wei Luo, Shaoliang Peng, Wei
Dong, Xiangke Liao
|
One Adapter for All Programming Languages? Adapter Tuning for Code
Search and Summarization
|
Accepted to the 45th International Conference on Software Engineering
(ICSE 2023)
| null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As pre-trained models automate many code intelligence tasks, a widely used
paradigm is to fine-tune a model on the task dataset for each programming
language. A recent study reported that multilingual fine-tuning benefits a
range of tasks and models. However, we find that multilingual fine-tuning leads
to performance degradation on recent models UniXcoder and CodeT5.
To alleviate the potentially catastrophic forgetting issue in multilingual
models, we fix all pre-trained model parameters, insert a parameter-efficient
adapter structure, and fine-tune it. Updating only 0.6\% of the overall
parameters compared to full-model fine-tuning for each programming language,
adapter tuning yields consistent improvements on code search and summarization
tasks, achieving state-of-the-art results. In addition, we experimentally show
its effectiveness in cross-lingual and low-resource scenarios. Multilingual
fine-tuning with 200 samples per programming language approaches the results
fine-tuned with the entire dataset on code summarization. Our experiments on
three probing tasks show that adapter tuning significantly outperforms
full-model fine-tuning and effectively overcomes catastrophic forgetting.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 08:49:54 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Wang",
"Deze",
""
],
[
"Chen",
"Boxing",
""
],
[
"Li",
"Shanshan",
""
],
[
"Luo",
"Wei",
""
],
[
"Peng",
"Shaoliang",
""
],
[
"Dong",
"Wei",
""
],
[
"Liao",
"Xiangke",
""
]
] |
new_dataset
| 0.989683 |
2303.15848
|
Zhuoran Zheng
|
Zhuoran Zheng and Xiuyi Jia
|
4K-HAZE: A Dehazing Benchmark with 4K Resolution Hazy and Haze-Free
Images
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Currently, mobile and IoT devices are in dire need of a series of methods to
enhance 4K images with limited resource expenditure. The absence of large-scale
4K benchmark datasets hampers progress in this area, especially for dehazing.
The challenges in building ultra-high-definition (UHD) dehazing datasets are
the absence of estimation methods for UHD depth maps, high-quality 4K depth
estimation datasets, and migration strategies for UHD haze images from
synthetic to real domains. To address these problems, we develop a novel
synthetic method to simulate 4K hazy images (including nighttime and daytime
scenes) from clear images, which first estimates the scene depth, simulates the
light rays and object reflectance, then migrates the synthetic images to real
domains by using a GAN, and finally yields the hazy effects on 4K resolution
images. We wrap these synthesized images into a benchmark called the 4K-HAZE
dataset. Specifically, we design the CS-Mixer (an MLP-based model that
integrates \textbf{C}hannel domain and \textbf{S}patial domain) to estimate the
depth map of 4K clear images, and the GU-Net to migrate a 4K synthetic image to the
real hazy domain. The most appealing aspect of our approach (depth estimation
and domain migration) is the capability to process a 4K image on a single GPU with
24 GB of RAM in real time (33 fps). Additionally, this work presents an objective
assessment of several state-of-the-art single-image dehazing methods that are
evaluated using the 4K-HAZE dataset. At the end of the paper, we discuss the
limitations of the 4K-HAZE dataset and its social implications.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 09:39:29 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Zheng",
"Zhuoran",
""
],
[
"Jia",
"Xiuyi",
""
]
] |
new_dataset
| 0.999452 |
2303.15879
|
Tao Wu
|
Tao Wu and Mengqi Cao and Ziteng Gao and Gangshan Wu and Limin Wang
|
STMixer: A One-Stage Sparse Action Detector
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Traditional video action detectors typically adopt the two-stage pipeline,
where a person detector is first employed to generate actor boxes and then 3D
RoIAlign is used to extract actor-specific features for classification. This
detection paradigm requires multi-stage training and inference, and cannot
capture context information outside the bounding box. Recently, a few
query-based action detectors have been proposed to predict action instances in an
end-to-end manner. However, they still lack adaptability in feature sampling
and decoding, thus suffering from the issues of inferior performance or slower
convergence. In this paper, we propose a new one-stage sparse action detector,
termed STMixer. STMixer is based on two core designs. First, we present a
query-based adaptive feature sampling module, which endows our STMixer with the
flexibility of mining a set of discriminative features from the entire
spatiotemporal domain. Second, we devise a dual-branch feature mixing module,
which allows our STMixer to dynamically attend to and mix video features along
the spatial and the temporal dimension respectively for better feature
decoding. Coupling these two designs with a video backbone yields an efficient
end-to-end action detector. Without bells and whistles, our STMixer obtains the
state-of-the-art results on the datasets of AVA, UCF101-24, and JHMDB.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 10:47:06 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Wu",
"Tao",
""
],
[
"Cao",
"Mengqi",
""
],
[
"Gao",
"Ziteng",
""
],
[
"Wu",
"Gangshan",
""
],
[
"Wang",
"Limin",
""
]
] |
new_dataset
| 0.998757 |
2303.15892
|
Yuhao Cheng
|
Yuhao Cheng and Yichao Yan and Wenhan Zhu and Ye Pan and Bowen Pan and
Xiaokang Yang
|
Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Head generation with diverse identities is an important task in computer
vision and computer graphics, widely used in multimedia applications. However,
current full head generation methods require a large number of 3D scans or
multi-view images to train the model, resulting in expensive data acquisition
cost. To address this issue, we propose Head3D, a method to generate full 3D
heads with limited multi-view images. Specifically, our approach first extracts
facial priors represented by tri-planes learned in EG3D, a 3D-aware generative
model, and then proposes feature distillation to deliver the 3D frontal faces
into complete heads without compromising head integrity. To mitigate the domain
gap between the face and head models, we present dual-discriminators to guide
the frontal and back head generation, respectively. Our model achieves
cost-efficient and diverse complete head generation with photo-realistic
renderings and high-quality geometry representations. Extensive experiments
demonstrate the effectiveness of our proposed Head3D, both qualitatively and
quantitatively.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 11:12:26 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Cheng",
"Yuhao",
""
],
[
"Yan",
"Yichao",
""
],
[
"Zhu",
"Wenhan",
""
],
[
"Pan",
"Ye",
""
],
[
"Pan",
"Bowen",
""
],
[
"Yang",
"Xiaokang",
""
]
] |
new_dataset
| 0.962042 |
2303.15900
|
Jeonghwan Kim
|
Jeonghwan Kim, Tianyu Li, Sehoon Ha
|
ARMP: Autoregressive Motion Planning for Quadruped Locomotion and
Navigation in Complex Indoor Environments
|
Submitted to IROS
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Generating natural and physically feasible motions for legged robots has been
a challenging problem due to its complex dynamics. In this work, we introduce a
novel learning-based framework of autoregressive motion planner (ARMP) for
quadruped locomotion and navigation. Our method can generate motion plans of
arbitrary length in an autoregressive fashion, unlike most offline
trajectory optimization algorithms, which assume a fixed trajectory length. To this end,
we first construct the motion library by solving a dense set of trajectory
optimization problems for diverse scenarios and parameter settings. Then we
learn the motion manifold from the dataset in a supervised learning fashion. We
show that the proposed ARMP can generate physically plausible motions for
various tasks and situations. We also showcase that our method can be
successfully integrated with the recent robot navigation frameworks as a
low-level controller and unleash the full capability of legged robots for
complex indoor navigation.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 11:26:13 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Kim",
"Jeonghwan",
""
],
[
"Li",
"Tianyu",
""
],
[
"Ha",
"Sehoon",
""
]
] |
new_dataset
| 0.99737 |
2303.15931
|
Luis Paulo Reis Prof.
|
Nuno Lau, Luis Paulo Reis, David Simoes, Mohammadreza Kasaei, Miguel
Abreu, Tiago Silva, Francisco Resende
|
FC Portugal 3D Simulation Team: Team Description Paper 2020
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The FC Portugal 3D team is built upon the structure of our previous
Simulation league 2D/3D teams and our standard platform league team. Our
research concerning the robot low-level skills is focused on developing
behaviors that may be applied on real robots with minimal adaptation using
model-based approaches. Our research on high-level soccer coordination
methodologies and team playing is mainly focused on the adaptation of
previously developed methodologies from our 2D soccer teams to the 3D humanoid
environment and on creating new coordination methodologies based on the
previously developed ones. The research-oriented development of our team has
been pushing it to be one of the most competitive over the years (World
champion in 2000 and Coach Champion in 2002, European champion in 2000 and
2001, Coach 2nd place in 2003 and 2004, European champion in Rescue Simulation
and Simulation 3D in 2006, World Champion in Simulation 3D in Bremen 2006 and
European champion in 2007, 2012, 2013, 2014 and 2015). This paper describes
some of the main innovations of our 3D simulation league team during the last
years. A new generic framework for reinforcement learning tasks has also been
developed. The current research is focused on improving the above-mentioned
framework by developing new learning algorithms to optimize low-level skills,
such as running and sprinting. We are also trying to increase student contact
by providing reinforcement learning assignments to be completed using our new
framework, which exposes a simple interface without sharing low-level
implementation details.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 12:41:25 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Lau",
"Nuno",
""
],
[
"Reis",
"Luis Paulo",
""
],
[
"Simoes",
"David",
""
],
[
"Kasaei",
"Mohammadreza",
""
],
[
"Abreu",
"Miguel",
""
],
[
"Silva",
"Tiago",
""
],
[
"Resende",
"Francisco",
""
]
] |
new_dataset
| 0.998521 |
2303.15935
|
Xiang Li
|
Lin Zhao, Lu Zhang, Zihao Wu, Yuzhong Chen, Haixing Dai, Xiaowei Yu,
Zhengliang Liu, Tuo Zhang, Xintao Hu, Xi Jiang, Xiang Li, Dajiang Zhu,
Dinggang Shen, Tianming Liu
|
When Brain-inspired AI Meets AGI
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Artificial General Intelligence (AGI) has been a long-standing goal of
humanity, with the aim of creating machines capable of performing any
intellectual task that humans can do. To achieve this, AGI researchers draw
inspiration from the human brain and seek to replicate its principles in
intelligent machines. Brain-inspired artificial intelligence is a field that
has emerged from this endeavor, combining insights from neuroscience,
psychology, and computer science to develop more efficient and powerful AI
systems. In this article, we provide a comprehensive overview of brain-inspired
AI from the perspective of AGI. We begin with the current progress in
brain-inspired AI and its extensive connection with AGI. We then cover the
important characteristics for both human intelligence and AGI (e.g., scaling,
multimodality, and reasoning). We discuss important technologies toward
achieving AGI in current AI systems, such as in-context learning and prompt
tuning. We also investigate the evolution of AGI systems from both algorithmic
and infrastructural perspectives. Finally, we explore the limitations and
future of AGI.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 12:46:38 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Zhao",
"Lin",
""
],
[
"Zhang",
"Lu",
""
],
[
"Wu",
"Zihao",
""
],
[
"Chen",
"Yuzhong",
""
],
[
"Dai",
"Haixing",
""
],
[
"Yu",
"Xiaowei",
""
],
[
"Liu",
"Zhengliang",
""
],
[
"Zhang",
"Tuo",
""
],
[
"Hu",
"Xintao",
""
],
[
"Jiang",
"Xi",
""
],
[
"Li",
"Xiang",
""
],
[
"Zhu",
"Dajiang",
""
],
[
"Shen",
"Dinggang",
""
],
[
"Liu",
"Tianming",
""
]
] |
new_dataset
| 0.974155 |
2303.15937
|
HsiaoYuan Hsu
|
HsiaoYuan Hsu, Xiangteng He, Yuxin Peng, Hao Kong and Qing Zhang
|
PosterLayout: A New Benchmark and Approach for Content-aware
Visual-Textual Presentation Layout
|
Accepted to CVPR 2023. Dataset and code are available at
https://github.com/PKU-ICST-MIPL/PosterLayout-CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Content-aware visual-textual presentation layout aims at arranging spatial
space on the given canvas for pre-defined elements, including text, logo, and
underlay, which is a key to automatic template-free creative graphic design. In
practical applications, e.g., poster designs, the canvas is originally
non-empty, and both inter-element relationships as well as inter-layer
relationships should be concerned when generating a proper layout. A few recent
works deal with them simultaneously, but they still suffer from poor graphic
performance, such as a lack of layout variety or spatial non-alignment. Since
content-aware visual-textual presentation layout is a novel task, we first
construct a new dataset named PosterLayout, which consists of 9,974
poster-layout pairs and 905 images, i.e., non-empty canvases. It is more
challenging and useful for greater layout variety, domain diversity, and
content diversity. Then, we propose design sequence formation (DSF) that
reorganizes elements in layouts to imitate the design processes of human
designers, and a novel CNN-LSTM-based conditional generative adversarial
network (GAN) is presented to generate proper layouts. Specifically, the
discriminator is design-sequence-aware and will supervise the "design" process
of the generator. Experimental results verify the usefulness of the new
benchmark and the effectiveness of the proposed approach, which achieves the
best performance by generating suitable layouts for diverse canvases.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 12:48:36 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Hsu",
"HsiaoYuan",
""
],
[
"He",
"Xiangteng",
""
],
[
"Peng",
"Yuxin",
""
],
[
"Kong",
"Hao",
""
],
[
"Zhang",
"Qing",
""
]
] |
new_dataset
| 0.996635 |
2303.16055
|
Frank Regal
|
Frank Regal, Young Soo Park, Jerry Nolan, Mitch Pryor
|
Augmented Reality Remote Operation of Dual Arm Manipulators in Hot Boxes
|
Abstract Submitted to the First International Workshop "Horizons of
an Extended Robotics Reality" at the 2022 IEEE/RSJ International Conference
on Intelligent Robots and Systems (IROS).
https://sites.google.com/view/xr-robotics-iros2022/home?authuser=0
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In nuclear isotope and chemistry laboratories, hot cells and gloveboxes
provide scientists with a controlled and safe environment to perform
experiments. Working on experiments in these isolated containment cells
requires scientists to be physically present. For hot cell work today,
scientists manipulate equipment and radioactive material inside through a
bilateral mechanical control mechanism. Motions produced outside the cell with
the master control levers are mechanically transferred to the internal grippers
inside the shielded containment cell. There is a growing need to have the
capability to conduct experiments within these cells remotely. A simple method
to enable remote manipulation within hot cells and gloveboxes is to mount
two robotic arms inside a box to mimic the motions of human hands. An AR
application was built in this work to allow a user wearing a Microsoft HoloLens
2 headset to teleoperate dual arm manipulators by grasping robotic end-effector
digital replicas in AR from a remote location. In addition to the real-time
replica of the physical robotic arms in AR, the application enables users to
view a live video stream attached to the robotic arms and parse a 3D point
cloud of 3D objects in their remote AR environment for better situational
awareness. This work also provides users with virtual fixtures to assist in
manipulation and other teleoperation tasks.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 15:36:06 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Regal",
"Frank",
""
],
[
"Park",
"Young Soo",
""
],
[
"Nolan",
"Jerry",
""
],
[
"Pryor",
"Mitch",
""
]
] |
new_dataset
| 0.997642 |
2303.16094
|
Tao Lu
|
Tao Lu, Xiang Ding, Haisong Liu, Gangshan Wu, Limin Wang
|
LinK: Linear Kernel for LiDAR-based 3D Perception
|
Accepted to CVPR2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Extending the success of 2D Large Kernel to 3D perception is challenging due
to: 1. the cubically-increasing overhead in processing 3D data; 2. the
optimization difficulties from data scarcity and sparsity. Previous work has
taken the first step to scale up the kernel size from 3x3x3 to 7x7x7 by
introducing block-shared weights. However, to reduce the feature variations
within a block, it only employs modest block size and fails to achieve larger
kernels like the 21x21x21. To address this issue, we propose a new method,
called LinK, to achieve a wider-range perception receptive field in a
convolution-like manner with two core designs. The first is to replace the
static kernel matrix with a linear kernel generator, which adaptively provides
weights only for non-empty voxels. The second is to reuse the pre-computed
aggregation results in the overlapped blocks to reduce computation complexity.
The proposed method successfully enables each voxel to perceive context within
a range of 21x21x21. Extensive experiments on two basic perception tasks, 3D
object detection and 3D semantic segmentation, demonstrate the effectiveness of
our method. Notably, we rank 1st on the public leaderboard of the 3D detection
benchmark of nuScenes (LiDAR track), by simply incorporating a LinK-based
backbone into the basic detector, CenterPoint. We also boost the strong
segmentation baseline's mIoU by 2.7% on the SemanticKITTI test set. Code is
available at https://github.com/MCG-NJU/LinK.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 16:02:30 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Lu",
"Tao",
""
],
[
"Ding",
"Xiang",
""
],
[
"Liu",
"Haisong",
""
],
[
"Wu",
"Gangshan",
""
],
[
"Wang",
"Limin",
""
]
] |
new_dataset
| 0.987567 |
2303.16098
|
Marcelo Finger
|
Maria Clara Ramos Morales Crespo, Maria Lina de Souza Jeannine Rocha,
Mariana Louren\c{c}o Sturzeneker, Felipe Ribas Serras, Guilherme Lamartine de
Mello, Aline Silva Costa, Mayara Feliciano Palma, Renata Morais Mesquita,
Raquel de Paula Guets, Mariana Marques da Silva, Marcelo Finger, Maria Clara
Paix\~ao de Sousa, Cristiane Namiuti, Vanessa Martins do Monte
|
Carolina: a General Corpus of Contemporary Brazilian Portuguese with
Provenance, Typology and Versioning Information
|
14 pages, 3 figures, 1 appendix
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the first publicly available version of the Carolina
Corpus and discusses its future directions. Carolina is a large open corpus of
Brazilian Portuguese texts under construction using web-as-corpus methodology
enhanced with provenance, typology, versioning, and text integrality. The
corpus aims at being used both as a reliable source for research in Linguistics
and as an important resource for Computer Science research on language models,
contributing towards removing Portuguese from the set of low-resource
languages. Here we present the construction of the corpus methodology,
comparing it with other existing methodologies, as well as the corpus current
state: Carolina's first public version has $653,322,577$ tokens, distributed
over $7$ broad types. Each text is annotated with several different metadata
categories in its header, which we developed using TEI annotation standards. We
also present ongoing derivative works and invite NLP researchers to contribute
with their own.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 16:09:40 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Crespo",
"Maria Clara Ramos Morales",
""
],
[
"Rocha",
"Maria Lina de Souza Jeannine",
""
],
[
"Sturzeneker",
"Mariana Lourenço",
""
],
[
"Serras",
"Felipe Ribas",
""
],
[
"de Mello",
"Guilherme Lamartine",
""
],
[
"Costa",
"Aline Silva",
""
],
[
"Palma",
"Mayara Feliciano",
""
],
[
"Mesquita",
"Renata Morais",
""
],
[
"Guets",
"Raquel de Paula",
""
],
[
"da Silva",
"Mariana Marques",
""
],
[
"Finger",
"Marcelo",
""
],
[
"de Sousa",
"Maria Clara Paixão",
""
],
[
"Namiuti",
"Cristiane",
""
],
[
"Monte",
"Vanessa Martins do",
""
]
] |
new_dataset
| 0.987186 |
2303.16118
|
Lei Chen
|
Lei Chen, Zhan Tong, Yibing Song, Gangshan Wu, Limin Wang
|
CycleACR: Cycle Modeling of Actor-Context Relations for Video Action
Detection
|
technical report
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Relation modeling between actors and scene context advances video action
detection, where the correlation among multiple actors makes their action
recognition challenging. Existing studies model each actor and scene relation
to improve action recognition. However, the scene variations and background
interference limit the effectiveness of this relation modeling. In this paper,
we propose to select actor-related scene context, rather than directly leverage
raw video scenario, to improve relation modeling. We develop a Cycle
Actor-Context Relation network (CycleACR) where there is a symmetric graph that
models the actor and context relations in a bidirectional form. Our CycleACR
consists of the Actor-to-Context Reorganization (A2C-R) that collects actor
features for context feature reorganizations, and the Context-to-Actor
Enhancement (C2A-E) that dynamically utilizes reorganized context features for
actor feature enhancement. Compared to existing designs that focus on C2A-E,
our CycleACR introduces A2C-R for a more effective relation modeling. This
modeling advances our CycleACR to achieve state-of-the-art performance on two
popular action detection datasets (i.e., AVA and UCF101-24). We also provide
ablation studies and visualizations as well to show how our cycle actor-context
relation modeling improves video action detection. Code is available at
https://github.com/MCG-NJU/CycleACR.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 16:40:47 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Chen",
"Lei",
""
],
[
"Tong",
"Zhan",
""
],
[
"Song",
"Yibing",
""
],
[
"Wu",
"Gangshan",
""
],
[
"Wang",
"Limin",
""
]
] |
new_dataset
| 0.998601 |
2303.16138
|
Isabella Huang
|
Isabella Huang, Yashraj Narang, Ruzena Bajcsy, Fabio Ramos, Tucker
Hermans, Dieter Fox
|
DefGraspNets: Grasp Planning on 3D Fields with Graph Neural Nets
|
To be published in the IEEE Conference on Robotics and Automation
(ICRA), 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic grasping of 3D deformable objects is critical for real-world
applications such as food handling and robotic surgery. Unlike rigid and
articulated objects, 3D deformable objects have infinite degrees of freedom.
Fully defining their state requires 3D deformation and stress fields, which are
exceptionally difficult to analytically compute or experimentally measure.
Thus, evaluating grasp candidates for grasp planning typically requires
accurate, but slow 3D finite element method (FEM) simulation. Sampling-based
grasp planning is often impractical, as it requires evaluation of a large
number of grasp candidates. Gradient-based grasp planning can be more
efficient, but requires a differentiable model to synthesize optimal grasps
from initial candidates. Differentiable FEM simulators may fill this role, but
are typically no faster than standard FEM. In this work, we propose learning a
predictive graph neural network (GNN), DefGraspNets, to act as our
differentiable model. We train DefGraspNets to predict 3D stress and
deformation fields based on FEM-based grasp simulations. DefGraspNets not only
runs up to 1500 times faster than the FEM simulator, but also enables fast
gradient-based grasp optimization over 3D stress and deformation metrics. We
design DefGraspNets to align with real-world grasp planning practices and
demonstrate generalization across multiple test sets, including real-world
experiments.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 17:00:45 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Huang",
"Isabella",
""
],
[
"Narang",
"Yashraj",
""
],
[
"Bajcsy",
"Ruzena",
""
],
[
"Ramos",
"Fabio",
""
],
[
"Hermans",
"Tucker",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.994788 |
2303.16150
|
Mario Mart\'inez-Zarzuela
|
Mario Mart\'inez-Zarzuela, Javier Gonz\'alez-Alonso, M\'iriam
Ant\'on-Rodr\'iguez, Francisco J. D\'iaz-Pernas, Henning M\"uller, Cristina
Sim\'on-Mart\'inez
|
VIDIMU. Multimodal video and IMU kinematic dataset on daily life
activities using affordable devices
|
Submitted to journal Scientific Data
| null | null | null |
cs.CV cs.AI cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Human activity recognition and clinical biomechanics are challenging problems
in physical telerehabilitation medicine. However, most publicly available
datasets on human body movements cannot be used to study both problems in an
out-of-the-lab movement acquisition setting. The objective of the VIDIMU
dataset is to pave the way towards affordable patient tracking solutions for
remote daily life activities recognition and kinematic analysis. The dataset
includes 13 activities registered using a commodity camera and five inertial
sensors. The video recordings were acquired from 54 subjects, of whom 16 also
had simultaneous recordings of inertial sensors. The novelty of VIDIMU lies in:
i) the clinical relevance of the chosen movements, ii) the combined utilization
of affordable video and custom sensors, and iii) the implementation of
state-of-the-art tools for multimodal data processing of 3D body pose tracking
and motion reconstruction in a musculoskeletal model from inertial data. The
validation confirms that a minimally disturbing acquisition protocol, performed
according to real-life conditions, can provide a comprehensive picture of human
joint angles during daily life activities.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 14:05:49 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Martínez-Zarzuela",
"Mario",
""
],
[
"González-Alonso",
"Javier",
""
],
[
"Antón-Rodríguez",
"Míriam",
""
],
[
"Díaz-Pernas",
"Francisco J.",
""
],
[
"Müller",
"Henning",
""
],
[
"Simón-Martínez",
"Cristina",
""
]
] |
new_dataset
| 0.999783 |
2303.16156
|
Mao Shi PhD
|
Mao Shi
|
On the derivatives of rational B\'{e}zier curves
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We first point out the defects of the existing derivative formula for the
rational B\'{e}zier curve, then propose a new recursive derivative formula, and
discuss the expression of the derivative formula at the endpoints.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 04:51:30 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Shi",
"Mao",
""
]
] |
new_dataset
| 0.997105 |
2303.16177
|
Vedant Mundheda
|
Vedant Mundheda, Damodar Datta K, Harikumar Kandath
|
Control Barrier Function-based Predictive Control for Close Proximity
operation of UAVs inside a Tunnel
|
Conference on Automation Science and Engineering (CASE) 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a method for effectively controlling the movement of an
Unmanned Aerial Vehicle (UAV) within a tunnel. The primary challenge of this
problem lies in the UAV's exposure to nonlinear distance-dependent torques and
forces generated by the tunnel walls, along with the need to operate safely
within a defined region while in close proximity to these walls. To address
this problem, the paper proposes the implementation of a Model Predictive
Control (MPC) framework with constraints based on Control Barrier Function
(CBF). The paper approaches the issue in two distinct ways; first, by
maintaining a safe distance from the tunnel walls to avoid the effects of both
the walls and ceiling, and second, by minimizing the distance from the walls to
effectively manage the nonlinear forces associated with close proximity tasks.
Finally, the paper demonstrates the effectiveness of its approach through
simulation tests on various close-proximity trajectories, using a realistic
model of the aerodynamic disturbances due to the proximity of the ceiling
and boundary walls.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 17:43:32 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Mundheda",
"Vedant",
""
],
[
"K",
"Damodar Datta",
""
],
[
"Kandath",
"Harikumar",
""
]
] |
new_dataset
| 0.957277 |
2303.16202
|
Vladislav Golyanik
|
Harshil Bhatia and Edith Tretschk and Zorah L\"ahner and Marcel
Seelbach Benkner and Michael Moeller and Christian Theobalt and Vladislav
Golyanik
|
CCuantuMM: Cycle-Consistent Quantum-Hybrid Matching of Multiple Shapes
|
Computer Vision and Pattern Recognition (CVPR) 2023; 22 pages, 24
figures and 5 tables; Project page: https://4dqv.mpi-inf.mpg.de/CCuantuMM/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Jointly matching multiple, non-rigidly deformed 3D shapes is a challenging,
$\mathcal{NP}$-hard problem. A perfect matching is necessarily
cycle-consistent: Following the pairwise point correspondences along several
shapes must end up at the starting vertex of the original shape. Unfortunately,
existing quantum shape-matching methods do not support multiple shapes, let
alone cycle consistency. This paper addresses the open challenges and introduces
the first quantum-hybrid approach for 3D shape multi-matching; in addition, it
is also cycle-consistent. Its iterative formulation is admissible to modern
adiabatic quantum hardware and scales linearly with the total number of input
shapes. Both these characteristics are achieved by reducing the $N$-shape case
to a sequence of three-shape matchings, the derivation of which is our main
technical contribution. Thanks to quantum annealing, high-quality solutions
with low energy are retrieved for the intermediate $\mathcal{NP}$-hard
objectives. On benchmark datasets, the proposed approach significantly
outperforms extensions to multi-shape matching of a previous quantum-hybrid
two-shape matching method and is on-par with classical multi-matching methods.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 17:59:55 GMT"
}
] | 2023-03-29T00:00:00 |
[
[
"Bhatia",
"Harshil",
""
],
[
"Tretschk",
"Edith",
""
],
[
"Lähner",
"Zorah",
""
],
[
"Benkner",
"Marcel Seelbach",
""
],
[
"Moeller",
"Michael",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Golyanik",
"Vladislav",
""
]
] |
new_dataset
| 0.991157 |
2112.05301
|
Qing Li
|
Qing Li, Xiaojiang Peng, Chuan Yan, Pan Gao, Qi Hao
|
Self-Ensemling for 3D Point Cloud Domain Adaption
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently 3D point cloud learning has been a hot topic in computer vision and
autonomous driving. Due to the fact that it is difficult to manually annotate a
qualitative large-scale 3D point cloud dataset, unsupervised domain adaptation
(UDA) is popular in 3D point cloud learning which aims to transfer the learned
knowledge from the labeled source domain to the unlabeled target domain.
However, the generalization and reconstruction errors caused by domain shift
with simply-learned model are inevitable which substantially hinder the model's
capability from learning good representations. To address these issues, we
propose an end-to-end self-ensembling network (SEN) for 3D point cloud domain
adaptation tasks. Generally, our SEN draws on the advantages of Mean Teacher
and semi-supervised learning, and introduces a soft classification loss and a
consistency loss, aiming to achieve consistent generalization and accurate
reconstruction. In SEN, a student network is kept in a collaborative manner
with supervised learning and self-supervised learning, and a teacher network
conducts temporal consistency to learn useful representations and ensure the
quality of point clouds reconstruction. Extensive experiments on several 3D
point cloud UDA benchmarks show that our SEN outperforms the state-of-the-art
methods on both classification and segmentation tasks. Moreover, further
analysis demonstrates that our SEN also achieves better reconstruction results.
|
[
{
"version": "v1",
"created": "Fri, 10 Dec 2021 02:18:09 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 03:19:53 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Li",
"Qing",
""
],
[
"Peng",
"Xiaojiang",
""
],
[
"Yan",
"Chuan",
""
],
[
"Gao",
"Pan",
""
],
[
"Hao",
"Qi",
""
]
] |
new_dataset
| 0.998315 |
2201.01439
|
Masaaki Harada
|
Masaaki Harada
|
Construction of extremal Type II $\mathbb{Z}_{2k}$-codes
|
25 pages
| null |
10.1016/j.ffa.2022.102154
| null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We give methods for constructing many self-dual $\mathbb{Z}_m$-codes and Type
II $\mathbb{Z}_{2k}$-codes of length $2n$ starting from a given self-dual
$\mathbb{Z}_m$-code and Type II $\mathbb{Z}_{2k}$-code of length $2n$,
respectively. As an application, we construct extremal Type II
$\mathbb{Z}_{2k}$-codes of length $24$ for $k=4,5,\ldots,20$ and extremal Type
II $\mathbb{Z}_{2k}$-codes of length $32$ for $k=4,5,\ldots,10$. We also
construct new extremal Type II $\mathbb{Z}_4$-codes of lengths $56$ and $64$.
|
[
{
"version": "v1",
"created": "Wed, 5 Jan 2022 03:53:08 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Jun 2022 06:31:40 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Harada",
"Masaaki",
""
]
] |
new_dataset
| 0.99916 |
2202.04110
|
Guangyao Zhou
|
Guangyao Zhou, Antoine Dedieu, Nishanth Kumar, Wolfgang Lehrach,
Miguel L\'azaro-Gredilla, Shrinu Kushagra, Dileep George
|
PGMax: Factor Graphs for Discrete Probabilistic Graphical Models and
Loopy Belief Propagation in JAX
|
Update authors list
| null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
PGMax is an open-source Python package for (a) easily specifying discrete
Probabilistic Graphical Models (PGMs) as factor graphs; and (b) automatically
running efficient and scalable loopy belief propagation (LBP) in JAX. PGMax
supports general factor graphs with tractable factors, and leverages modern
accelerators like GPUs for inference. Compared with existing alternatives,
PGMax obtains higher-quality inference results with up to three
orders-of-magnitude inference time speedups. PGMax additionally interacts
seamlessly with the rapidly growing JAX ecosystem, opening up new research
possibilities. Our source code, examples and documentation are available at
https://github.com/deepmind/PGMax.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 19:27:48 GMT"
},
{
"version": "v2",
"created": "Fri, 6 May 2022 19:15:22 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Mar 2023 17:20:47 GMT"
},
{
"version": "v4",
"created": "Fri, 24 Mar 2023 23:34:02 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zhou",
"Guangyao",
""
],
[
"Dedieu",
"Antoine",
""
],
[
"Kumar",
"Nishanth",
""
],
[
"Lehrach",
"Wolfgang",
""
],
[
"Lázaro-Gredilla",
"Miguel",
""
],
[
"Kushagra",
"Shrinu",
""
],
[
"George",
"Dileep",
""
]
] |
new_dataset
| 0.990811 |
2203.00456
|
Esther Rituerto-Gonz\'alez
|
Jose A. Miranda, Esther Rituerto-Gonz\'alez, Laura
Guti\'errez-Mart\'in, Clara Luis-Mingueza, Manuel F. Canabal, Alberto
Ram\'irez B\'arcenas, Jose M. Lanza-Guti\'errez, Carmen Pel\'aez-Moreno,
Celia L\'opez-Ongil
|
WEMAC: Women and Emotion Multi-modal Affective Computing dataset
| null | null | null | null |
cs.HC eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Among the seventeen Sustainable Development Goals (SDGs) proposed within the
2030 Agenda and adopted by all the United Nations member states, the Fifth SDG
is a call for action to turn Gender Equality into a fundamental human right and
an essential foundation for a better world. It includes the eradication of all
types of violence against women. Within this context, the UC3M4Safety research
team aims to develop Bindi. This is a cyber-physical system which includes
embedded Artificial Intelligence algorithms, for user real-time monitoring
towards the detection of affective states, with the ultimate goal of achieving
the early detection of risk situations for women. On this basis, we make use of
wearable affective computing including smart sensors, data encryption for
secure and accurate collection of presumed crime evidence, as well as the
remote connection to protecting agents. Towards the development of such a system,
the recordings of different laboratory and into-the-wild datasets are in
progress. These are contained within the UC3M4Safety Database. Thus, this paper
presents and details the first release of WEMAC, a novel multi-modal dataset,
which comprises a laboratory-based experiment with 47 women volunteers who were
exposed to validated audio-visual stimuli to induce real emotions by using a
virtual reality headset while physiological, speech signals and self-reports
were acquired and collected. We believe this dataset will serve and assist
research on multi-modal affective computing using physiological and speech
information.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 13:39:54 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 14:31:54 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Mar 2023 10:19:49 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Miranda",
"Jose A.",
""
],
[
"Rituerto-González",
"Esther",
""
],
[
"Gutiérrez-Martín",
"Laura",
""
],
[
"Luis-Mingueza",
"Clara",
""
],
[
"Canabal",
"Manuel F.",
""
],
[
"Bárcenas",
"Alberto Ramírez",
""
],
[
"Lanza-Gutiérrez",
"Jose M.",
""
],
[
"Peláez-Moreno",
"Carmen",
""
],
[
"López-Ongil",
"Celia",
""
]
] |
new_dataset
| 0.999765 |
2204.03648
|
Yifan Wang
|
Yifan Wang, Aleksander Holynski, Xiuming Zhang and Xuaner Zhang
|
SunStage: Portrait Reconstruction and Relighting using the Sun as a
Light Stage
|
CVPR 2023. Project page: https://sunstage.cs.washington.edu/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A light stage uses a series of calibrated cameras and lights to capture a
subject's facial appearance under varying illumination and viewpoint. This
captured information is crucial for facial reconstruction and relighting.
Unfortunately, light stages are often inaccessible: they are expensive and
require significant technical expertise for construction and operation. In this
paper, we present SunStage: a lightweight alternative to a light stage that
captures comparable data using only a smartphone camera and the sun. Our method
only requires the user to capture a selfie video outdoors, rotating in place,
and uses the varying angles between the sun and the face as guidance in joint
reconstruction of facial geometry, reflectance, camera pose, and lighting
parameters. Despite the in-the-wild, uncalibrated setting, our approach is able
to reconstruct detailed facial appearance and geometry, enabling compelling
effects such as relighting, novel view synthesis, and reflectance editing.
Results and interactive demos are available at
https://sunstage.cs.washington.edu/.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 17:59:51 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 22:54:08 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Wang",
"Yifan",
""
],
[
"Holynski",
"Aleksander",
""
],
[
"Zhang",
"Xiuming",
""
],
[
"Zhang",
"Xuaner",
""
]
] |
new_dataset
| 0.997737 |
2204.08179
|
Haoying Li
|
Haoying Li, Ziran Zhang, Tingting Jiang, Peng Luo, Huajun Feng, Zhihai
Xu
|
Real-World Deep Local Motion Deblurring
|
Accept by AAAI-2023 (Oral)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing deblurring methods focus on removing global blur caused by
camera shake, while they cannot well handle local blur caused by object
movements. To fill this gap for local deblurring in real scenes, we establish
the first real local motion blur dataset (ReLoBlur), which is captured by a
synchronized beam-splitting photographing system and corrected by a
post-processing pipeline. Based on ReLoBlur, we propose a Local Blur-Aware
Gated network (LBAG) and several local blur-aware techniques to bridge the gap
between global and local deblurring: 1) a blur detection approach based on
background subtraction to localize blurred regions; 2) a gate mechanism to
guide our network to focus on blurred regions; and 3) a blur-aware patch
cropping strategy to address the data imbalance problem. Extensive experiments
prove the reliability of ReLoBlur dataset, and demonstrate that LBAG achieves
better performance than state-of-the-art global deblurring methods without our
proposed local blur-aware techniques.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2022 06:24:02 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2023 16:33:55 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Li",
"Haoying",
""
],
[
"Zhang",
"Ziran",
""
],
[
"Jiang",
"Tingting",
""
],
[
"Luo",
"Peng",
""
],
[
"Feng",
"Huajun",
""
],
[
"Xu",
"Zhihai",
""
]
] |
new_dataset
| 0.998826 |
2204.11964
|
Pinaki Nath Chowdhury
|
Pinaki Nath Chowdhury and Ayan Kumar Bhunia and Aneeshan Sain and
Subhadeep Koley and Tao Xiang and Yi-Zhe Song
|
SceneTrilogy: On Human Scene-Sketch and its Complementarity with Photo
and Text
|
Accepted in Computer Vision and Pattern Recognition (CVPR), 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we extend scene understanding to include that of human sketch.
The result is a complete trilogy of scene representation from three diverse and
complementary modalities -- sketch, photo, and text. Instead of learning a
rigid three-way embedding and be done with it, we focus on learning a flexible
joint embedding that fully supports the ``optionality" that this
complementarity brings. Our embedding supports optionality on two axes: (i)
optionality across modalities -- use any combination of modalities as query for
downstream tasks like retrieval, (ii) optionality across tasks --
simultaneously utilising the embedding for either discriminative (e.g.,
retrieval) or generative tasks (e.g., captioning). This provides flexibility to
end-users by exploiting the best of each modality, therefore serving the very
purpose behind our proposal of a trilogy in the first place. First, a
combination of information-bottleneck and conditional invertible neural
networks disentangle the modality-specific component from modality-agnostic in
sketch, photo, and text. Second, the modality-agnostic instances from sketch,
photo, and text are synergised using a modified cross-attention. Once learned,
we show our embedding can accommodate a multi-facet of scene-related tasks,
including those enabled for the first time by the inclusion of sketch, all
without any task-specific modifications. Project Page:
\url{http://www.pinakinathc.me/scenetrilogy}
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 20:58:17 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2022 01:36:47 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Mar 2023 13:01:15 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Chowdhury",
"Pinaki Nath",
""
],
[
"Bhunia",
"Ayan Kumar",
""
],
[
"Sain",
"Aneeshan",
""
],
[
"Koley",
"Subhadeep",
""
],
[
"Xiang",
"Tao",
""
],
[
"Song",
"Yi-Zhe",
""
]
] |
new_dataset
| 0.979683 |
2205.09336
|
Albin Dahlin
|
Albin Dahlin and Yiannis Karayiannidis
|
Creating Star Worlds: Reshaping the Robot Workspace for Online Motion
Planning
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Motion planning methods like navigation functions and harmonic potential
fields provide (almost) global convergence and are suitable for obstacle
avoidance in dynamically changing environments due to their reactive nature. A
common assumption in the control design is that the robot operates in a
disjoint star world, i.e. all obstacles are strictly starshaped and mutually
disjoint. However, in real-life scenarios obstacles may intersect due to
expanded obstacle regions corresponding to robot radius or safety margins. To
broaden the applicability of aforementioned reactive motion planning methods,
we propose a method to reshape a workspace of intersecting obstacles into a
disjoint star world. The algorithm is based on two novel concepts presented
here, namely admissible kernel and starshaped hull with specified kernel, which
are closely related to the notion of starshaped hull. The utilization of the
proposed method is illustrated with examples of a robot operating in a 2D
workspace using a harmonic potential field approach in combination with the
developed algorithm.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 06:26:47 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 09:38:21 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Dahlin",
"Albin",
""
],
[
"Karayiannidis",
"Yiannis",
""
]
] |
new_dataset
| 0.999396 |
2206.03285
|
Jinkun Geng
|
Jinkun Geng and Anirudh Sivaraman and Balaji Prabhakar and Mendel
Rosenblum
|
Nezha: Deployable and High-Performance Consensus Using Synchronized
Clocks
|
Accepted by 49th International Conference on Very Large Data Bases
(VLDB 2023)
| null | null | null |
cs.DC cs.DB cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents a high-performance consensus protocol, Nezha, which can
be deployed by cloud tenants without any support from their cloud provider.
Nezha bridges the gap between protocols such as Multi-Paxos and Raft, which can
be readily deployed, and protocols such as NOPaxos and Speculative Paxos,
which provide better performance but require access to technologies such as
programmable switches and in-network prioritization, which cloud tenants do not
have. Nezha uses a new multicast primitive called deadline-ordered multicast
(DOM). DOM uses high-accuracy software clock synchronization to synchronize
sender and receiver clocks. Senders tag messages with deadlines in synchronized
time; receivers process messages in deadline order, on or after their deadline.
We compare Nezha with Multi-Paxos, Fast Paxos, Raft, a NOPaxos version we
optimized for the cloud, and two recent protocols, Domino and TOQ-EPaxos, that
use synchronized clocks. In throughput, Nezha outperforms all baselines by a
median of 5.4x (range: 1.9-20.9x). In latency, Nezha outperforms five baselines
by a median of 2.3x (range: 1.3-4.0x), with one exception: it sacrifices 33%
latency compared with our optimized NOPaxos in one test. We also prototype two
applications, a key-value store and a fair-access stock exchange, on top of
Nezha to show that Nezha only modestly reduces their performance relative to an
unreplicated system. Nezha is available at https://github.com/Steamgjk/Nezha.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 21:30:33 GMT"
},
{
"version": "v10",
"created": "Tue, 27 Dec 2022 21:08:54 GMT"
},
{
"version": "v11",
"created": "Fri, 24 Mar 2023 17:20:38 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 00:38:11 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Jul 2022 23:23:03 GMT"
},
{
"version": "v4",
"created": "Tue, 2 Aug 2022 23:50:40 GMT"
},
{
"version": "v5",
"created": "Thu, 4 Aug 2022 23:15:05 GMT"
},
{
"version": "v6",
"created": "Thu, 25 Aug 2022 06:21:08 GMT"
},
{
"version": "v7",
"created": "Sat, 8 Oct 2022 07:00:43 GMT"
},
{
"version": "v8",
"created": "Sat, 15 Oct 2022 06:58:05 GMT"
},
{
"version": "v9",
"created": "Mon, 19 Dec 2022 00:56:50 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Geng",
"Jinkun",
""
],
[
"Sivaraman",
"Anirudh",
""
],
[
"Prabhakar",
"Balaji",
""
],
[
"Rosenblum",
"Mendel",
""
]
] |
new_dataset
| 0.999587 |
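The deadline-ordered multicast (DOM) primitive described in the Nezha
abstract above is concrete enough to sketch. The Python toy below is a hedged
illustration only: the clock-error and delay constants are invented
placeholders rather than Nezha's parameters, and real DOM runs across a
network rather than in-process.

```python
# Toy deadline-ordered multicast: senders stamp messages with deadlines in
# synchronized time; a receiver buffers them in a min-heap and delivers them
# in deadline order, on or after each deadline. Constants are illustrative.
import heapq
import itertools
import time

CLOCK_ERROR_BOUND = 0.002   # assumed clock-sync accuracy, seconds
ONE_WAY_DELAY_EST = 0.010   # assumed one-way latency estimate, seconds
_seq = itertools.count()    # tie-breaker for equal deadlines

def stamp(msg):
    """Sender side: attach a deadline expressed in synchronized time."""
    deadline = time.time() + ONE_WAY_DELAY_EST + CLOCK_ERROR_BOUND
    return (deadline, next(_seq), msg)

class DomReceiver:
    """Receiver side: buffer, then release strictly in deadline order."""
    def __init__(self):
        self.buf = []

    def on_receive(self, stamped):
        heapq.heappush(self.buf, stamped)

    def release_ready(self):
        now = time.time()
        out = []
        while self.buf and self.buf[0][0] <= now:
            out.append(heapq.heappop(self.buf)[2])
        return out
```

The property the sketch preserves is the one the abstract relies on: delivery
order depends only on synchronized-time deadlines, never on arrival order.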
2206.07817
|
Jennifer Volk
|
Jennifer Volk, George Tzimpragos, Alex Wynn, Evan Golden and Timothy
Sherwood
|
Low-Cost Superconducting Fan-Out with Cell I$_\text{C}$ Ranking
|
12 pages, 20 figures, accepted at IEEE TAS
| null |
10.1109/TASC.2023.3256797
| null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
Superconductor electronics (SCE) promise computer systems with orders of
magnitude higher speeds and lower energy consumption than their complementary
metal-oxide semiconductor (CMOS) counterparts. At the same time, the
scalability and resource utilization of superconducting systems are major
concerns. Some of these concerns come from device-level challenges and the gap
between SCE and CMOS technology nodes, and others come from the way Josephson
Junctions (JJs) are used. Towards this end, we notice that a considerable
fraction of hardware resources are not involved in logic operations, but rather
are used for fan-out and buffering purposes. In this paper, we ask if there is
a way to reduce these overheads, propose the use of JJs at the cell boundaries
to increase the number of outputs that a single stage can drive, and establish
a set of rules to discretize critical currents in a way that is conducive to
this assignment. Finally, we explore the design trade-offs that the presented
approach opens up and demonstrate its promise through detailed analog
simulations and modeling analyses. Our experiments indicate that the introduced
method leads to a 48% savings in the JJ count for a tree with a fan-out of
1024, as well as average JJ-count savings of 43% for signal splitting and 32%
for clock splitting in ISCAS'85 benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 21:08:16 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Oct 2022 19:47:27 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Oct 2022 07:29:24 GMT"
},
{
"version": "v4",
"created": "Mon, 27 Mar 2023 16:26:47 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Volk",
"Jennifer",
""
],
[
"Tzimpragos",
"George",
""
],
[
"Wynn",
"Alex",
""
],
[
"Golden",
"Evan",
""
],
[
"Sherwood",
"Timothy",
""
]
] |
new_dataset
| 0.982079 |
2207.01610
|
Weicai Ye
|
Weicai Ye, Xinyue Lan, Shuo Chen, Yuhang Ming, Xingyuan Yu, Hujun Bao,
Zhaopeng Cui, Guofeng Zhang
|
PVO: Panoptic Visual Odometry
|
CVPR2023 Project page: https://zju3dv.github.io/pvo/ code:
https://github.com/zju3dv/PVO
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present PVO, a novel panoptic visual odometry framework to achieve more
comprehensive modeling of the scene motion, geometry, and panoptic segmentation
information. Our PVO models visual odometry (VO) and video panoptic
segmentation (VPS) in a unified view, which makes the two tasks mutually
beneficial. Specifically, we introduce a panoptic update module into the VO
Module with the guidance of image panoptic segmentation. This Panoptic-Enhanced
VO Module can alleviate the impact of dynamic objects in the camera pose
estimation with a panoptic-aware dynamic mask. On the other hand, the
VO-Enhanced VPS Module also improves the segmentation accuracy by fusing the
panoptic segmentation result of the current frame on the fly into the adjacent
frames, using geometric information such as camera pose, depth, and optical
flow obtained from the VO Module. These two modules contribute to each other
through recurrent iterative optimization. Extensive experiments demonstrate
that PVO outperforms state-of-the-art methods in both visual odometry and video
panoptic segmentation tasks.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 17:51:39 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2023 17:47:21 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Ye",
"Weicai",
""
],
[
"Lan",
"Xinyue",
""
],
[
"Chen",
"Shuo",
""
],
[
"Ming",
"Yuhang",
""
],
[
"Yu",
"Xingyuan",
""
],
[
"Bao",
"Hujun",
""
],
[
"Cui",
"Zhaopeng",
""
],
[
"Zhang",
"Guofeng",
""
]
] |
new_dataset
| 0.996213 |
2208.00339
|
Jiang Li
|
Jiang Li, Xiaoping Wang, Guoqing Lv, Zhigang Zeng
|
GraphMFT: A Graph Network based Multimodal Fusion Technique for Emotion
Recognition in Conversation
|
12 pages
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal machine learning is an emerging area of research, which has
received a great deal of scholarly attention in recent years. Up to now, there
are few studies on multimodal conversational emotion recognition. Since Graph
Neural Networks (GNNs) possess the powerful capacity of relational modeling,
they have an inherent advantage in the field of multimodal learning. GNNs
leverage the graph constructed from multimodal data to perform intra- and
inter-modal information interaction, which effectively facilitates the
integration and complementation of multimodal data. In this work, we propose a
novel Graph network based Multimodal Fusion Technique (GraphMFT) for emotion
recognition in conversation. Multimodal data can be modeled as a graph, where
each data object is regarded as a node, and both intra- and inter-modal
dependencies existing between data objects can be regarded as edges. GraphMFT
utilizes multiple improved graph attention networks to capture intra-modal
contextual information and inter-modal complementary information. In addition,
the proposed GraphMFT attempts to address the challenges of existing
graph-based multimodal Emotion Recognition in Conversation (ERC) models such as
MMGCN. Empirical results on two public multimodal datasets reveal that our
model outperforms the State-Of-The-Art (SOTA) approaches with accuracies of
67.90% and 61.30%.
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 02:23:24 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Aug 2022 16:38:30 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Mar 2023 03:32:28 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Li",
"Jiang",
""
],
[
"Wang",
"Xiaoping",
""
],
[
"Lv",
"Guoqing",
""
],
[
"Zeng",
"Zhigang",
""
]
] |
new_dataset
| 0.96602 |
2210.00445
|
Hao Wang
|
Hao Wang, Guosheng Lin, Ana Garc\'ia del Molino, Anran Wang, Jiashi
Feng, Zhiqi Shen
|
ManiCLIP: Multi-Attribute Face Manipulation from Text
|
Code link: https://github.com/hwang1996/ManiCLIP
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a novel multi-attribute face manipulation method
based on textual descriptions. Previous text-based image editing methods either
require test-time optimization for each individual image or are restricted to
single attribute editing. Extending these methods to multi-attribute face image
editing scenarios will introduce undesired excessive attribute change, e.g.,
text-relevant attributes are overly manipulated and text-irrelevant attributes
are also changed. In order to address these challenges and achieve natural
editing over multiple face attributes, we propose a new decoupling training
scheme where we use group sampling to get text segments from the same attribute
categories, instead of whole complex sentences. Further, to preserve other
existing face attributes, we encourage the model to edit the latent code of
each attribute separately via an entropy constraint. During the inference
phase, our model is able to edit new face images without any test-time
optimization, even from complex textual prompts. We show extensive experiments
and analysis to demonstrate the efficacy of our method, which generates natural
manipulated faces with minimal text-irrelevant attribute editing. Code and
pre-trained model are available at https://github.com/hwang1996/ManiCLIP.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 07:22:55 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 01:29:52 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Mar 2023 01:52:42 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Wang",
"Hao",
""
],
[
"Lin",
"Guosheng",
""
],
[
"del Molino",
"Ana García",
""
],
[
"Wang",
"Anran",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Shen",
"Zhiqi",
""
]
] |
new_dataset
| 0.995612 |
2210.01781
|
Boxiao Pan
|
Boxiao Pan, Bokui Shen, Davis Rempe, Despoina Paschalidou, Kaichun Mo,
Yanchao Yang, Leonidas J. Guibas
|
COPILOT: Human-Environment Collision Prediction and Localization from
Egocentric Videos
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to forecast human-environment collisions from egocentric
observations is vital to enable collision avoidance in applications such as VR,
AR, and wearable assistive robotics. In this work, we introduce the challenging
problem of predicting collisions in diverse environments from multi-view
egocentric videos captured from body-mounted cameras. Solving this problem
requires a generalizable perception system that can classify which human body
joints will collide and estimate a collision region heatmap to localize
collisions in the environment. To achieve this, we propose a transformer-based
model called COPILOT to perform collision prediction and localization
simultaneously, which accumulates information across multi-view inputs through
a novel 4D space-time-viewpoint attention mechanism. To train our model and
enable future research on this task, we develop a synthetic data generation
framework that produces egocentric videos of virtual humans moving and
colliding within diverse 3D environments. This framework is then used to
establish a large-scale dataset consisting of 8.6M egocentric RGBD frames.
Extensive experiments show that COPILOT generalizes to unseen synthetic as well
as real-world scenes. We further demonstrate that COPILOT outputs are useful for
downstream collision avoidance through simple closed-loop control. Please visit
our project webpage at https://sites.google.com/stanford.edu/copilot.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 17:49:23 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2023 05:27:31 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Pan",
"Boxiao",
""
],
[
"Shen",
"Bokui",
""
],
[
"Rempe",
"Davis",
""
],
[
"Paschalidou",
"Despoina",
""
],
[
"Mo",
"Kaichun",
""
],
[
"Yang",
"Yanchao",
""
],
[
"Guibas",
"Leonidas J.",
""
]
] |
new_dataset
| 0.97526 |
2211.05975
|
Qiao Xiang
|
Qiang Li, Qiao Xiang, Derui Liu, Yuxin Wang, Haonan Qiu, Xiaoliang
Wang, Jie Zhang, Ridi Wen, Haohao Song, Gexiao Tian, Chenyang Huang, Lulu
Chen, Shaozong Liu, Yaohui Wu, Zhiwu Wu, Zicheng Luo, Yuchao Shao, Chao Han,
Zhongjie Wu, Jianbo Dong, Zheng Cao, Jinbo Wu, Jiwu Shu, Jiesheng Wu
|
From RDMA to RDCA: Toward High-Speed Last Mile of Data Center Networks
Using Remote Direct Cache Access
| null | null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we conduct systematic measurement studies to show that the
high memory bandwidth consumption of modern distributed applications can lead
to a significant drop of network throughput and a large increase of tail
latency in high-speed RDMA networks. We identify its root cause as the high
contention of memory bandwidth between application processes and network
processes. This contention leads to frequent packet drops at the NIC of
receiving hosts, which triggers the congestion control mechanism of the network
and eventually results in network performance degradation.
To tackle this problem, we make a key observation that given the distributed
storage service, the vast majority of data it receives from the network will be
eventually written to high-speed storage media (e.g., SSD) by CPU. As such, we
propose to bypass host memory when processing received data to completely
circumvent this performance bottleneck. In particular, we design Lamda, a novel
receiver cache processing system that consumes a small amount of CPU cache to
process received data from the network at line rate. We implement a prototype
of Lamda and evaluate its performance extensively in a Clos-based testbed.
Results show that for distributed storage applications, Lamda improves network
throughput by 4.7% with zero memory bandwidth consumption on storage nodes, and
improves network throughput by up to 17% and 45% for large and small block
sizes under memory bandwidth pressure, respectively. Lamda can also be
applied to latency-sensitive HPC applications, which reduces their
communication latency by 35.1%.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 02:43:50 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 05:52:24 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Li",
"Qiang",
""
],
[
"Xiang",
"Qiao",
""
],
[
"Liu",
"Derui",
""
],
[
"Wang",
"Yuxin",
""
],
[
"Qiu",
"Haonan",
""
],
[
"Wang",
"Xiaoliang",
""
],
[
"Zhang",
"Jie",
""
],
[
"Wen",
"Ridi",
""
],
[
"Song",
"Haohao",
""
],
[
"Tian",
"Gexiao",
""
],
[
"Huang",
"Chenyang",
""
],
[
"Chen",
"Lulu",
""
],
[
"Liu",
"Shaozong",
""
],
[
"Wu",
"Yaohui",
""
],
[
"Wu",
"Zhiwu",
""
],
[
"Luo",
"Zicheng",
""
],
[
"Shao",
"Yuchao",
""
],
[
"Han",
"Chao",
""
],
[
"Wu",
"Zhongjie",
""
],
[
"Dong",
"Jianbo",
""
],
[
"Cao",
"Zheng",
""
],
[
"Wu",
"Jinbo",
""
],
[
"Shu",
"Jiwu",
""
],
[
"Wu",
"Jiesheng",
""
]
] |
new_dataset
| 0.997449 |
2211.11177
|
Shitao Tang
|
Shitao Tang, Sicong Tang, Andrea Tagliasacchi, Ping Tan and Yasutaka
Furukawa
|
NeuMap: Neural Coordinate Mapping by Auto-Transdecoder for Camera
Localization
|
CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an end-to-end neural mapping method for camera
localization, dubbed NeuMap, encoding a whole scene into a grid of latent
codes, with which a Transformer-based auto-decoder regresses 3D coordinates of
query pixels. State-of-the-art feature matching methods require each scene to
be stored as a 3D point cloud with per-point features, consuming several
gigabytes of storage per scene. While compression is possible, performance
drops significantly at high compression rates. Conversely, coordinate
regression methods achieve high compression by storing scene information in a
neural network but suffer from reduced robustness. NeuMap combines the
advantages of both approaches by utilizing 1) learnable latent codes for
efficient scene representation and 2) a scene-agnostic Transformer-based
auto-decoder to infer coordinates for query pixels. This scene-agnostic network
design learns robust matching priors from large-scale data and enables rapid
optimization of codes for new scenes while keeping the network weights fixed.
Extensive evaluations on five benchmarks show that NeuMap significantly
outperforms other coordinate regression methods and achieves comparable
performance to feature matching methods while requiring a much smaller scene
representation size. For example, NeuMap achieves 39.1% accuracy in the Aachen
night benchmark with only 6MB of data, whereas alternative methods require
100MB or several gigabytes and fail completely under high compression settings.
The codes are available at https://github.com/Tangshitao/NeuMap
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 04:46:22 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2023 06:22:15 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Tang",
"Shitao",
""
],
[
"Tang",
"Sicong",
""
],
[
"Tagliasacchi",
"Andrea",
""
],
[
"Tan",
"Ping",
""
],
[
"Furukawa",
"Yasutaka",
""
]
] |
new_dataset
| 0.987654 |
2211.11646
|
Junkai Huang
|
Benran Hu, Junkai Huang, Yichen Liu, Yu-Wing Tai, Chi-Keung Tang
|
NeRF-RPN: A general framework for object detection in NeRFs
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the first significant object detection framework,
NeRF-RPN, which directly operates on NeRF. Given a pre-trained NeRF model,
NeRF-RPN aims to detect all bounding boxes of objects in a scene. By exploiting
a novel voxel representation that incorporates multi-scale 3D neural volumetric
features, we demonstrate it is possible to regress the 3D bounding boxes of
objects in NeRF directly without rendering the NeRF at any viewpoint. NeRF-RPN
is a general framework and can be applied to detect objects without class
labels. We experimented with NeRF-RPN using various backbone architectures, RPN head
designs and loss functions. All of them can be trained in an end-to-end manner
to estimate high quality 3D bounding boxes. To facilitate future research in
object detection for NeRF, we built a new benchmark dataset that consists of
both synthetic and real-world data with careful labeling and cleanup. Code and
dataset are available at https://github.com/lyclyc52/NeRF_RPN.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 17:02:01 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2022 01:55:00 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Mar 2023 16:40:30 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Hu",
"Benran",
""
],
[
"Huang",
"Junkai",
""
],
[
"Liu",
"Yichen",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Tang",
"Chi-Keung",
""
]
] |
new_dataset
| 0.988258 |
2211.12054
|
Yuan Yao
|
Yuan Yao, Tianyu Yu, Ao Zhang, Mengdi Li, Ruobing Xie, Cornelius
Weber, Zhiyuan Liu, Hai-Tao Zheng, Stefan Wermter, Tat-Seng Chua, Maosong Sun
|
Visually Grounded Commonsense Knowledge Acquisition
|
Accepted by AAAI 2023
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale commonsense knowledge bases empower a broad range of AI
applications, where the automatic extraction of commonsense knowledge (CKE) is
a fundamental and challenging problem. CKE from text is known for suffering
from the inherent sparsity and reporting bias of commonsense in text. Visual
perception, on the other hand, contains rich commonsense knowledge about
real-world entities, e.g., (person, can_hold, bottle), which can serve as
promising sources for acquiring grounded commonsense knowledge. In this work,
we present CLEVER, which formulates CKE as a distantly supervised
multi-instance learning problem, where models learn to summarize commonsense
relations from a bag of images about an entity pair without any human
annotation on image instances. To address the problem, CLEVER leverages
vision-language pre-training models for deep understanding of each image in the
bag, and selects informative instances from the bag to summarize commonsense
entity relations via a novel contrastive attention mechanism. Comprehensive
experimental results in held-out and human evaluation show that CLEVER can
extract commonsense knowledge of promising quality, outperforming pre-trained
language model-based methods by 3.9 AUC and 6.4 mAUC points. The predicted
commonsense scores show strong correlation with human judgment with a 0.78
Spearman coefficient. Moreover, the extracted commonsense can also be grounded
into images with reasonable interpretability. The data and codes can be
obtained at https://github.com/thunlp/CLEVER.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 07:00:16 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 07:16:48 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Yao",
"Yuan",
""
],
[
"Yu",
"Tianyu",
""
],
[
"Zhang",
"Ao",
""
],
[
"Li",
"Mengdi",
""
],
[
"Xie",
"Ruobing",
""
],
[
"Weber",
"Cornelius",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Zheng",
"Hai-Tao",
""
],
[
"Wermter",
"Stefan",
""
],
[
"Chua",
"Tat-Seng",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.966313 |
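The bag-level summarization step in the CLEVER abstract above can be
approximated with plain attention pooling. This is a simplification of the
paper's contrastive attention mechanism, and every dimension and name below
is made up for illustration.

```python
# Score a relation for an entity pair from a bag of image features:
# attention up-weights informative instances, then the pooled bag
# representation is scored against a relation classifier vector.
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def bag_score(bag_feats, rel_query, rel_classifier):
    att = softmax(bag_feats @ rel_query)     # instance attention weights
    pooled = att @ bag_feats                 # weighted bag representation
    return pooled @ rel_classifier           # relation logit

rng = np.random.default_rng(0)
bag = rng.standard_normal((10, 32))          # 10 images of one entity pair
query = rng.standard_normal(32)
classifier = rng.standard_normal(32)
print(float(bag_score(bag, query, classifier)))
```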
2211.17104
|
Arash Vaezi
|
Arash Vaezi
|
Agent-Cells with DNA Programming: A Dynamic Decentralized System
| null | null | null | null |
cs.MA cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a new concept. We intend to give life to a software
agent. A software agent is a computer program that acts on a user's behalf. We
put a DNA inside the agent. This DNA is a simple text: a detailed roadmap of a
network of agents or a system, a Dynamic Numerical Abstract of a multiagent
system. It also serves as a reproductive part for an \emph{agent}, enabling
the agent to take actions, decide independently, and reproduce coworkers.
By defining different DNA structures, one can establish new agents and
different nets for different usages. We call this approach \emph{DNA
programming}, which leads to a new field of programming. This type of
programming can help us manage large systems with various elements with an
incredibly organized customizable structure. An agent can reproduce another
agent. We put one or a few agents around a given network, and the agents will
reproduce themselves till they can reach others and pervade the whole network.
An agent's position or other environmental or geographical characteristics make
it possible for an agent to know its active set of \emph{genes} on its DNA. The
active set of genes specifies its duties. There is a database that includes a
list of functions such that each one implements what a \emph{gene}
represents. To utilize a decentralized database, we may use a blockchain-based
structure.
This design can adapt to a system that manages many static and dynamic
networks. This network could be a distributed system, a decentralized system, a
telecommunication network such as a 5G monitoring system, an IoT management
system, or even an energy management system. The final system is the
combination of all the agents and the overlay net that connects the agents. We
denote the final net as the \emph{body} of the system.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 16:53:49 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2022 06:10:44 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Mar 2023 12:57:49 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Vaezi",
"Arash",
""
]
] |
new_dataset
| 0.997795 |
2212.00137
|
Abrar Alali Mrs.
|
Abrar Alali, Stephan Olariu, Shubham Jain
|
ADOPT: A system for Alerting Drivers to Occluded Pedestrian Traffic
| null | null |
10.1016/j.vehcom.2023.100601
| null |
cs.MA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent statistics reveal an alarming increase in accidents involving
pedestrians (especially children) crossing the street. A common philosophy of
existing pedestrian detection approaches is that this task should be undertaken
by the moving cars themselves. In sharp departure from this philosophy, we
propose to enlist the help of cars parked along the sidewalk to detect and
protect crossing pedestrians. In support of this goal, we propose ADOPT: a
system for Alerting Drivers to Occluded Pedestrian Traffic. ADOPT lays the
theoretical foundations of a system that uses parked cars to: (1) detect the
presence of a group of crossing pedestrians - a crossing cohort; (2) predict
the time the last member of the cohort takes to clear the street; (3) send
alert messages to those approaching cars that may reach the crossing area while
pedestrians are still in the street; and, (4) show how approaching cars can
adjust their speed, given several simultaneous crossing locations. Importantly,
in ADOPT all communications occur over very short distances and at very low
power. Our extensive simulations using SUMO-generated pedestrian and car
traffic have shown the effectiveness of ADOPT in detecting and protecting
crossing pedestrians.
|
[
{
"version": "v1",
"created": "Thu, 20 Oct 2022 13:34:43 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Alali",
"Abrar",
""
],
[
"Olariu",
"Stephan",
""
],
[
"Jain",
"Shubham",
""
]
] |
new_dataset
| 0.972698 |
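The timing logic in steps (2)-(4) of the ADOPT abstract above reduces to
simple kinematics. The snippet below is a hypothetical reading of that logic
with invented parameter names and a made-up safety margin; it is not the
paper's model.

```python
# Decide whether an approaching car needs an alert, and the speed that lets
# it arrive only after the crossing cohort has cleared the street.
def needs_alert(dist_to_crossing_m, speed_mps, clear_time_s):
    time_to_crossing = dist_to_crossing_m / max(speed_mps, 1e-6)
    return time_to_crossing < clear_time_s

def safe_speed(dist_to_crossing_m, clear_time_s, margin_s=1.0):
    # slow enough to reach the crossing after clearance plus a margin
    return dist_to_crossing_m / (clear_time_s + margin_s)

# A car 40 m away at 12 m/s, with 5 s of crossing time left:
assert needs_alert(40, 12, 5)           # 3.3 s < 5 s, so alert
print(round(safe_speed(40, 5), 2))      # ~6.67 m/s
```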
2212.00452
|
Marc Alexa
|
Marc Alexa
|
Tutte Embeddings of Tetrahedral Meshes
| null |
Discrete Comput Geom (2023)
|
10.1007/s00454-023-00494-0
| null |
cs.CG cs.GR math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tutte's embedding theorem states that every 3-connected graph without a $K_5$
or $K_{3,3}$ minor (i.e. a planar graph) is embedded in the plane if the outer
face is in convex position and the interior vertices are convex combinations of
their neighbors. We show that this result extends to simply connected
tetrahedral meshes in a natural way: for the tetrahedral mesh to be embedded
when the outer polyhedron is in convex position and the interior vertices are
convex combinations of their neighbors, it is sufficient (but not necessary)
that the graph of the tetrahedral mesh contains no $K_6$ and no $K_{3,3,1}$,
and that all triangles incident on three boundary vertices are boundary
triangles.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2022 12:06:23 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 15:16:24 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Alexa",
"Marc",
""
]
] |
new_dataset
| 0.993228 |
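For readers unfamiliar with the classical 2D setting that the abstract above
generalizes, the sketch below computes a Tutte embedding with uniform
weights: boundary vertices are pinned to a convex polygon and each interior
vertex is solved to be the average of its neighbors. The 3D tetrahedral case
replaces the polygon by a convex outer polyhedron; this sketch only
illustrates the linear system.

```python
import numpy as np

def tutte_embedding(n, edges, boundary):
    pos = np.zeros((n, 2))
    k = len(boundary)
    for i, v in enumerate(boundary):             # pin boundary on a circle
        ang = 2 * np.pi * i / k
        pos[v] = (np.cos(ang), np.sin(ang))
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    interior = [v for v in range(n) if v not in set(boundary)]
    idx = {v: i for i, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for v in interior:                           # deg(v) p_v - sum p_w = rhs
        A[idx[v], idx[v]] = len(nbrs[v])
        for w in nbrs[v]:
            if w in idx:
                A[idx[v], idx[w]] -= 1.0
            else:
                b[idx[v]] += pos[w]              # fixed boundary neighbor
    if interior:
        pos[np.array(interior)] = np.linalg.solve(A, b)
    return pos

# tiny example: a square boundary (0..3) around one interior vertex (4)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
print(tutte_embedding(5, edges, boundary=[0, 1, 2, 3])[4])  # ~ (0, 0)
```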
2212.01206
|
Norman M\"uller
|
Norman M\"uller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bul\`o,
Peter Kontschieder, Matthias Nie{\ss}ner
|
DiffRF: Rendering-Guided 3D Radiance Field Diffusion
|
Project page: https://sirwyver.github.io/DiffRF/ Video:
https://youtu.be/qETBcLu8SUk - CVPR 2023 Highlight - updated evaluations
after fixing initial data mapping error on all methods
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce DiffRF, a novel approach for 3D radiance field synthesis based
on denoising diffusion probabilistic models. While existing diffusion-based
methods operate on images, latent codes, or point cloud data, we are the first
to directly generate volumetric radiance fields. To this end, we propose a 3D
denoising model which directly operates on an explicit voxel grid
representation. However, as radiance fields generated from a set of posed
images can be ambiguous and contain artifacts, obtaining ground truth radiance
field samples is non-trivial. We address this challenge by pairing the
denoising formulation with a rendering loss, enabling our model to learn a
deviated prior that favours good image quality instead of trying to replicate
fitting errors like floating artifacts. In contrast to 2D-diffusion models, our
model learns multi-view consistent priors, enabling free-view synthesis and
accurate shape generation. Compared to 3D GANs, our diffusion-based approach
naturally enables conditional generation such as masked completion or
single-view 3D synthesis at inference time.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2022 14:37:20 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 14:51:07 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Müller",
"Norman",
""
],
[
"Siddiqui",
"Yawar",
""
],
[
"Porzi",
"Lorenzo",
""
],
[
"Bulò",
"Samuel Rota",
""
],
[
"Kontschieder",
"Peter",
""
],
[
"Nießner",
"Matthias",
""
]
] |
new_dataset
| 0.997724 |
2301.00269
|
Ali Abedi Abedi
|
Ali Abedi, Haofan Lu, Alex Chen, Charlie Liu, Omid Abari
|
WiFi Physical Layer Stays Awake and Responds When it Should Not
|
12 pages
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
WiFi communication should be possible only between devices inside the same
network. However, we find that all existing WiFi devices send back
acknowledgments (ACK) to even fake packets received from unauthorized WiFi
devices outside of their network. Moreover, we find that an unauthorized device
can manipulate the power-saving mechanism of WiFi radios and keep them
continuously awake by sending specific fake beacon frames to them. Our
evaluation of over 5,000 devices from 186 vendors confirms that these are
widespread issues. We believe these loopholes cannot be prevented, and hence
they create privacy and security concerns. Finally, to show the importance of
these issues and their consequences, we implement and demonstrate two attacks
where an adversary performs battery drain and WiFi sensing attacks using just
a tiny WiFi module that costs less than ten dollars.
|
[
{
"version": "v1",
"created": "Sat, 31 Dec 2022 19:07:14 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 01:11:55 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Abedi",
"Ali",
""
],
[
"Lu",
"Haofan",
""
],
[
"Chen",
"Alex",
""
],
[
"Liu",
"Charlie",
""
],
[
"Abari",
"Omid",
""
]
] |
new_dataset
| 0.980522 |
2301.06287
|
Kaicheng Yang
|
Rachith Aiyappa, Matthew R. DeVerna, Manita Pote, Bao Tran Truong,
Wanying Zhao, David Axelrod, Aria Pessianzadeh, Zoher Kachwala, Munjung Kim,
Ozgur Can Seckin, Minsuk Kim, Sunny Gandhi, Amrutha Manikonda, Francesco
Pierri, Filippo Menczer, Kai-Cheng Yang
|
A Multi-Platform Collection of Social Media Posts about the 2022 U.S.
Midterm Elections
|
8 pages, 3 figures, forthcoming in ICWSM23
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Social media are utilized by millions of citizens to discuss important
political issues. Politicians use these platforms to connect with the public
and broadcast policy positions. Therefore, data from social media has enabled
many studies of political discussion. While most analyses are limited to data
from individual platforms, people are embedded in a larger information
ecosystem spanning multiple social networks. Here we describe and provide
access to the Indiana University 2022 U.S. Midterms Multi-Platform Social Media
Dataset (MEIU22), a collection of social media posts from Twitter, Facebook,
Instagram, Reddit, and 4chan. MEIU22 links to posts about the midterm elections
based on a comprehensive list of keywords and tracks the social media accounts
of 1,011 candidates from October 1 to December 25, 2022. We also publish the
source code of our pipeline to enable similar multi-platform research projects.
|
[
{
"version": "v1",
"created": "Mon, 16 Jan 2023 07:12:43 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 00:44:27 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Aiyappa",
"Rachith",
""
],
[
"DeVerna",
"Matthew R.",
""
],
[
"Pote",
"Manita",
""
],
[
"Truong",
"Bao Tran",
""
],
[
"Zhao",
"Wanying",
""
],
[
"Axelrod",
"David",
""
],
[
"Pessianzadeh",
"Aria",
""
],
[
"Kachwala",
"Zoher",
""
],
[
"Kim",
"Munjung",
""
],
[
"Seckin",
"Ozgur Can",
""
],
[
"Kim",
"Minsuk",
""
],
[
"Gandhi",
"Sunny",
""
],
[
"Manikonda",
"Amrutha",
""
],
[
"Pierri",
"Francesco",
""
],
[
"Menczer",
"Filippo",
""
],
[
"Yang",
"Kai-Cheng",
""
]
] |
new_dataset
| 0.991257 |
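The keyword-based collection step described for MEIU22 can be mimicked in a
few lines. The keywords and post schema below are placeholders, not the
released keyword list or pipeline.

```python
# Keep posts from any platform whose text matches a tracked keyword.
import re

KEYWORDS = ["midterm", "senate race", "ballot"]     # illustrative only
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def collect(posts):
    """posts: iterable of dicts like {'platform': ..., 'text': ...}."""
    return [p for p in posts if PATTERN.search(p["text"])]

sample = [
    {"platform": "twitter", "text": "Who is winning the Senate race?"},
    {"platform": "reddit",  "text": "Unrelated cooking thread"},
]
print(collect(sample))                               # keeps the first post
```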
2301.07302
|
Ram Ramrakhya
|
Ram Ramrakhya, Dhruv Batra, Erik Wijmans, Abhishek Das
|
PIRLNav: Pretraining with Imitation and RL Finetuning for ObjectNav
|
8 pages + supplement
| null | null | null |
cs.LG cs.AI cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We study ObjectGoal Navigation -- where a virtual robot situated in a new
environment is asked to navigate to an object. Prior work has shown that
imitation learning (IL) using behavior cloning (BC) on a dataset of human
demonstrations achieves promising results. However, this has limitations -- 1)
BC policies generalize poorly to new states, since the training mimics actions,
not their consequences, and 2) collecting demonstrations is expensive. On the
other hand, reinforcement learning (RL) is trivially scalable, but requires
careful reward engineering to achieve desirable behavior. We present PIRLNav, a
two-stage learning scheme for BC pretraining on human demonstrations followed
by RL-finetuning. This leads to a policy that achieves a success rate of
$65.0\%$ on ObjectNav ($+5.0\%$ absolute over previous state-of-the-art). Using
this BC$\rightarrow$RL training recipe, we present a rigorous empirical
analysis of design choices. First, we investigate whether human demonstrations
can be replaced with `free' (automatically generated) sources of
demonstrations, e.g. shortest paths (SP) or task-agnostic frontier exploration
(FE) trajectories. We find that BC$\rightarrow$RL on human demonstrations
outperforms BC$\rightarrow$RL on SP and FE trajectories, even when controlled
for the same BC-pretraining success on train, and even on a subset of val episodes
where BC-pretraining success favors the SP or FE policies. Next, we study how
RL-finetuning performance scales with the size of the BC pretraining dataset.
We find that as we increase the size of BC-pretraining dataset and get to high
BC accuracies, improvements from RL-finetuning are smaller, and that $90\%$ of
the performance of our best BC$\rightarrow$RL policy can be achieved with less
than half the number of BC demonstrations. Finally, we analyze failure modes of
our ObjectNav policies, and present guidelines for further improving them.
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 04:40:50 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2023 14:49:25 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Ramrakhya",
"Ram",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Wijmans",
"Erik",
""
],
[
"Das",
"Abhishek",
""
]
] |
new_dataset
| 0.999735 |
2301.09632
|
Ang Cao
|
Ang Cao, Justin Johnson
|
HexPlane: A Fast Representation for Dynamic Scenes
|
CVPR 2023, Camera Ready Project page:
https://caoang327.github.io/HexPlane
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Modeling and re-rendering dynamic 3D scenes is a challenging task in 3D
vision. Prior approaches build on NeRF and rely on implicit representations.
This is slow since it requires many MLP evaluations, constraining real-world
applications. We show that dynamic 3D scenes can be explicitly represented by
six planes of learned features, leading to an elegant solution we call
HexPlane. A HexPlane computes features for points in spacetime by fusing
vectors extracted from each plane, which is highly efficient. Pairing a
HexPlane with a tiny MLP to regress output colors and training via volume
rendering gives impressive results for novel view synthesis on dynamic scenes,
matching the image quality of prior work but reducing training time by more
than $100\times$. Extensive ablations confirm our HexPlane design and show that
it is robust to different feature fusion mechanisms, coordinate systems, and
decoding mechanisms. HexPlane is a simple and effective solution for
representing 4D volumes, and we hope it can broadly contribute to modeling
spacetime for dynamic 3D scenes.
|
[
{
"version": "v1",
"created": "Mon, 23 Jan 2023 18:59:25 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 16:39:58 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Cao",
"Ang",
""
],
[
"Johnson",
"Justin",
""
]
] |
new_dataset
| 0.999404 |
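The six-plane lookup in the HexPlane abstract above can be sketched directly.
The toy below assumes nearest-neighbor sampling and elementwise-product
fusion purely for brevity (practical implementations interpolate bilinearly,
and the paper ablates several fusion mechanisms); the resolution and feature
width are invented.

```python
import numpy as np

R, F = 64, 16                                    # plane resolution, feature dim
planes = {p: np.random.randn(R, R, F)
          for p in ["xy", "xz", "yz", "xt", "yt", "zt"]}

def query(x, y, z, t):
    """Coordinates normalized to [0, 1); returns a fused feature vector."""
    cell = {k: int(v * R) for k, v in dict(x=x, y=y, z=z, t=t).items()}
    feat = np.ones(F)
    for name, grid in planes.items():            # fuse all six planes
        a, b = name[0], name[1]
        feat = feat * grid[cell[a], cell[b]]
    return feat                                   # would feed a tiny MLP

print(query(0.2, 0.5, 0.7, 0.1).shape)            # (16,)
```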
2301.10241
|
Giacomo Meanti
|
Sara Fridovich-Keil, Giacomo Meanti, Frederik Warburg, Benjamin Recht,
Angjoo Kanazawa
|
K-Planes: Explicit Radiance Fields in Space, Time, and Appearance
|
Project page https://sarafridov.github.io/K-Planes/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce k-planes, a white-box model for radiance fields in arbitrary
dimensions. Our model uses d choose 2 planes to represent a d-dimensional
scene, providing a seamless way to go from static (d=3) to dynamic (d=4)
scenes. This planar factorization makes adding dimension-specific priors easy,
e.g. temporal smoothness and multi-resolution spatial structure, and induces a
natural decomposition of static and dynamic components of a scene. We use a
linear feature decoder with a learned color basis that yields similar
performance as a nonlinear black-box MLP decoder. Across a range of synthetic
and real, static and dynamic, fixed and varying appearance scenes, k-planes
yields competitive and often state-of-the-art reconstruction fidelity with low
memory usage, achieving 1000x compression over a full 4D grid, and fast
optimization with a pure PyTorch implementation. For video results and code,
please see https://sarafridov.github.io/K-Planes.
|
[
{
"version": "v1",
"created": "Tue, 24 Jan 2023 18:59:08 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2023 21:32:50 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Fridovich-Keil",
"Sara",
""
],
[
"Meanti",
"Giacomo",
""
],
[
"Warburg",
"Frederik",
""
],
[
"Recht",
"Benjamin",
""
],
[
"Kanazawa",
"Angjoo",
""
]
] |
new_dataset
| 0.997231 |
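The d-choose-2 factorization in the k-planes abstract above generalizes the
same idea to any dimension. Below, plane pairs are enumerated with
itertools.combinations and fused by elementwise (Hadamard) product; the grid
size and feature width are arbitrary choices for the sketch.

```python
import itertools
import numpy as np

def make_kplanes(d, res=32, feat=8, seed=0):
    rng = np.random.default_rng(seed)
    return {pair: rng.standard_normal((res, res, feat))
            for pair in itertools.combinations(range(d), 2)}

def lookup(planes, point, res=32):
    """point: coordinates normalized to [0, 1); one grid per axis pair."""
    f = 1.0
    for (i, j), grid in planes.items():
        f = f * grid[int(point[i] * res), int(point[j] * res)]
    return f

planes4d = make_kplanes(d=4)                      # d=3 static, d=4 dynamic
print(lookup(planes4d, [0.1, 0.4, 0.9, 0.5]).shape)   # (8,): 6 planes fused
```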
2302.13056
|
Huasong Zhou
|
Huasong Zhou, Xiaowei Xu, Xiaodong Wang, and Leon Bevan Bullock
|
SATBA: An Invisible Backdoor Attack Based On Spatial Attention
|
15 pages, 6 figures
| null | null | null |
cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Backdoor attacks pose a new and emerging threat to AI security, where Deep
Neural Networks (DNNs) are trained on datasets with hidden trigger patterns
added. Although the poisoned model behaves normally on benign samples, it
produces anomalous results on samples containing the trigger pattern.
However, most existing backdoor attacks face two significant drawbacks:
their trigger patterns are visible and easy to detect by human inspection, and
their injection process leads to the loss of natural sample features and
trigger patterns, thereby reducing the attack success rate and the model
accuracy. In this paper, we propose a novel backdoor attack named SATBA that
overcomes these limitations by using spatial attention mechanism and U-type
model. Our attack leverages spatial attention mechanism to extract data
features and generate invisible trigger patterns that are correlated with clean
data. Then it uses U-type model to plant these trigger patterns into the
original data without causing noticeable feature loss. We evaluate our attack
on three prominent image classification DNNs across three standard datasets and
demonstrate that it achieves high attack success rate and robustness against
backdoor defenses. Additionally, we also conduct extensive experiments on image
similarity to highlight the stealthiness of our attack.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 10:57:41 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2023 14:23:10 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zhou",
"Huasong",
""
],
[
"Xu",
"Xiaowei",
""
],
[
"Wang",
"Xiaodong",
""
],
[
"Bullock",
"Leon Bevan",
""
]
] |
new_dataset
| 0.977253 |
2302.13543
|
Sameera Ramasinghe Mr.
|
Sameera Ramasinghe, Violetta Shevchenko, Gil Avraham, Anton Van Den
Hengel
|
BLiRF: Bandlimited Radiance Fields for Dynamic Scene Modeling
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reasoning about the 3D structure of a non-rigid dynamic scene from a single moving
camera is an under-constrained problem. Inspired by the remarkable progress of
neural radiance fields (NeRFs) in photo-realistic novel view synthesis of
static scenes, extensions have been proposed for dynamic settings. These
methods heavily rely on neural priors in order to regularize the problem. In
this work, we take a step back and reinvestigate how current implementations
may entail deleterious effects, including limited expressiveness, entanglement
of light and density fields, and sub-optimal motion localization. As a remedy,
we advocate for a bridge between classic non-rigid-structure-from-motion
(NRSfM) and NeRF, enabling the well-studied priors of the former to constrain
the latter. To this end, we propose a framework that factorizes time and space
by formulating a scene as a composition of bandlimited, high-dimensional
signals. We demonstrate compelling results across complex dynamic scenes that
involve changes in lighting, texture and long-range dynamics.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 06:40:32 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Mar 2023 10:10:40 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Mar 2023 02:18:02 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Ramasinghe",
"Sameera",
""
],
[
"Shevchenko",
"Violetta",
""
],
[
"Avraham",
"Gil",
""
],
[
"Hengel",
"Anton Van Den",
""
]
] |
new_dataset
| 0.969086 |
2303.00938
|
Jialiang Zhang
|
Yinzhen Xu, Weikang Wan, Jialiang Zhang, Haoran Liu, Zikang Shan, Hao
Shen, Ruicheng Wang, Haoran Geng, Yijia Weng, Jiayi Chen, Tengyu Liu, Li Yi,
He Wang
|
UniDexGrasp: Universal Robotic Dexterous Grasping via Learning Diverse
Proposal Generation and Goal-Conditioned Policy
|
Accepted to CVPR 2023
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we tackle the problem of learning universal robotic dexterous
grasping from a point cloud observation under a table-top setting. The goal is
to grasp and lift up objects in high-quality and diverse ways and generalize
across hundreds of categories and even the unseen. Inspired by successful
pipelines used in parallel gripper grasping, we split the task into two stages:
1) grasp proposal (pose) generation and 2) goal-conditioned grasp execution.
For the first stage, we propose a novel probabilistic model of grasp pose
conditioned on the point cloud observation that factorizes rotation from
translation and articulation. Trained on our synthesized large-scale dexterous
grasp dataset, this model enables us to sample diverse and high-quality
dexterous grasp poses for the object point cloud. For the second stage, we
propose to replace the motion planning used in parallel gripper grasping with a
goal-conditioned grasp policy, due to the complexity involved in dexterous
grasping execution. Note that it is very challenging to learn this highly
generalizable grasp policy that only takes realistic inputs without oracle
states. We thus propose several important innovations, including state
canonicalization, object curriculum, and teacher-student distillation.
Integrating the two stages, our final pipeline becomes the first to achieve
universal generalization for dexterous grasping, demonstrating an average
success rate of more than 60\% on thousands of object instances, which
significantly outperforms all baselines, meanwhile showing only a minimal
generalization gap.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 03:23:18 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 07:35:32 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Xu",
"Yinzhen",
""
],
[
"Wan",
"Weikang",
""
],
[
"Zhang",
"Jialiang",
""
],
[
"Liu",
"Haoran",
""
],
[
"Shan",
"Zikang",
""
],
[
"Shen",
"Hao",
""
],
[
"Wang",
"Ruicheng",
""
],
[
"Geng",
"Haoran",
""
],
[
"Weng",
"Yijia",
""
],
[
"Chen",
"Jiayi",
""
],
[
"Liu",
"Tengyu",
""
],
[
"Yi",
"Li",
""
],
[
"Wang",
"He",
""
]
] |
new_dataset
| 0.99726 |
2303.06464
|
Gemma Canet Tarr\'es
|
Gemma Canet Tarr\'es, Dan Ruta, Tu Bui, John Collomosse
|
PARASOL: Parametric Style Control for Diffusion Image Synthesis
|
Added Appendix
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose PARASOL, a multi-modal synthesis model that enables disentangled,
parametric control of the visual style of the image by jointly conditioning
synthesis on both content and a fine-grained visual style embedding. We train a
latent diffusion model (LDM) using specific losses for each modality and adapt
the classifier-free guidance for encouraging disentangled control over
independent content and style modalities at inference time. We leverage
auxiliary semantic and style-based search to create training triplets for
supervision of the LDM, ensuring complementarity of content and style cues.
PARASOL shows promise for enabling nuanced control over visual style in
diffusion models for image creation and stylization, as well as generative
search where text-based search results may be adapted to more closely match
user intent by interpolating both content and style descriptors.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 17:30:36 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 17:39:05 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Tarrés",
"Gemma Canet",
""
],
[
"Ruta",
"Dan",
""
],
[
"Bui",
"Tu",
""
],
[
"Collomosse",
"John",
""
]
] |
new_dataset
| 0.998433 |
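The disentangled content/style control that the PARASOL abstract describes is
commonly realized as a multi-condition variant of classifier-free guidance.
The sketch below shows only that generic pattern: the denoiser interface,
guidance scales, and toy model are invented and are not PARASOL's API.

```python
import numpy as np

def guided_eps(denoiser, x_t, t, content, style, s_content=3.0, s_style=2.0):
    """Combine unconditional and per-modality noise predictions."""
    e_un = denoiser(x_t, t, content=None, style=None)
    e_c = denoiser(x_t, t, content=content, style=None)
    e_s = denoiser(x_t, t, content=None, style=style)
    return e_un + s_content * (e_c - e_un) + s_style * (e_s - e_un)

def toy_denoiser(x_t, t, content=None, style=None):
    # stand-in: bias the prediction toward whichever condition is present
    out = 0.1 * x_t
    if content is not None:
        out = out + 0.01 * content
    if style is not None:
        out = out + 0.01 * style
    return out

x = np.zeros(4)
print(guided_eps(toy_denoiser, x, t=10, content=np.ones(4), style=-np.ones(4)))
```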
2303.07522
|
Chenguang Huang
|
Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard
|
Audio Visual Language Maps for Robot Navigation
|
Project page: https://avlmaps.github.io/
| null | null | null |
cs.RO cs.AI cs.CL cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While interacting in the world is a multi-sensory experience, many robots
continue to predominantly rely on visual perception to map and navigate in
their environments. In this work, we propose Audio-Visual-Language Maps
(AVLMaps), a unified 3D spatial map representation for storing cross-modal
information from audio, visual, and language cues. AVLMaps integrate the
open-vocabulary capabilities of multimodal foundation models pre-trained on
Internet-scale data by fusing their features into a centralized 3D voxel grid.
In the context of navigation, we show that AVLMaps enable robot systems to
index goals in the map based on multimodal queries, e.g., textual descriptions,
images, or audio snippets of landmarks. In particular, the addition of audio
information enables robots to more reliably disambiguate goal locations.
Extensive experiments in simulation show that AVLMaps enable zero-shot
multimodal goal navigation from multimodal prompts and provide 50% better
recall in ambiguous scenarios. These capabilities extend to mobile robots in
the real world - navigating to landmarks referring to visual, audio, and
spatial concepts. Videos and code are available at: https://avlmaps.github.io.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 23:17:51 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 15:10:51 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Huang",
"Chenguang",
""
],
[
"Mees",
"Oier",
""
],
[
"Zeng",
"Andy",
""
],
[
"Burgard",
"Wolfram",
""
]
] |
new_dataset
| 0.996055 |
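The multimodal goal indexing in the AVLMaps abstract above amounts to
nearest-neighbor search over per-voxel features. The toy map below is a
hedged sketch: the embeddings are assumed to come from pretrained foundation
models and are not computed here.

```python
import numpy as np

class VoxelMap:
    """Voxels store unit feature vectors; queries retrieve the best match."""
    def __init__(self):
        self.feats = {}                        # (i, j, k) -> unit feature

    def add(self, voxel, feature):
        f = np.asarray(feature, dtype=float)
        self.feats[voxel] = f / np.linalg.norm(f)

    def query(self, embedding):
        q = np.asarray(embedding, dtype=float)
        q = q / np.linalg.norm(q)
        return max(self.feats, key=lambda v: self.feats[v] @ q)

m = VoxelMap()
m.add((0, 0, 0), [1.0, 0.0])                   # e.g. a visual "door" feature
m.add((5, 2, 0), [0.0, 1.0])                   # e.g. a "glass breaking" sound
print(m.query([0.1, 0.9]))                     # -> (5, 2, 0)
```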
2303.10204
|
Michael Howard P.Eng
|
Michael Howard, R. Bruce Irvin
|
ESP32: QEMU Emulation within a Docker Container
|
7 pages and 9 figures
| null | null | null |
cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
The ESP32 is a popular microcontroller from Espressif that can be used in
many embedded applications. Robotic joints, smart car chargers, beer vat
agitators and automated bread mixers are a few examples where this
system-on-a-chip excels. It is cheap to buy and has a number of vendors
providing low-cost development board kits that come with the microcontroller
and many external connection points with peripherals. There is a large software
ecosystem for the ESP32. Espressif maintains an SDK containing many C-language
sample projects providing a starting point for a huge variety of software
services and I/O needs. Third party projects provide additional sample code as
well as support for other programming languages. For example, MicroPython is a
mature project with sample code and officially supported by Espressif. The SDK
provides tools to not just build an application but also merge a flash image,
flash to the microcontroller and monitor the output. Is it possible to build
the ESP32 load and emulate it on another host OS? This paper explores the QEMU
emulator and its ability to emulate the Ethernet interface for the guest OS.
Additionally, we look into the concept of containerizing the entire emulator
and ESP32 load package such that a microcontroller flash image can successfully
run with a one-step deployment of a Docker container.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 18:48:50 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 22:22:19 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Howard",
"Michael",
""
],
[
"Irvin",
"R. Bruce",
""
]
] |
new_dataset
| 0.998614 |
2303.10826
|
Jiawen Zhu
|
Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, Huchuan Lu
|
Visual Prompt Multi-Modal Tracking
|
Accepted by CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visible-modal object tracking gives rise to a series of downstream
multi-modal tracking tributaries. To inherit the powerful representations of
the foundation model, a natural modus operandi for multi-modal tracking is full
fine-tuning on the RGB-based parameters. Albeit effective, this manner is not
optimal due to the scarcity of downstream data and poor transferability, etc.
In this paper, inspired by the recent success of the prompt learning in
language models, we develop Visual Prompt multi-modal Tracking (ViPT), which
learns the modal-relevant prompts to adapt the frozen pre-trained foundation
model to various downstream multimodal tracking tasks. ViPT finds a better way
to stimulate the knowledge of the RGB-based model that is pre-trained at scale,
while only introducing a few trainable parameters (less than 1% of model
parameters). ViPT outperforms the full fine-tuning paradigm on multiple
downstream tracking tasks including RGB+Depth, RGB+Thermal, and RGB+Event
tracking. Extensive experiments show the potential of visual prompt learning
for multi-modal tracking, and ViPT can achieve state-of-the-art performance
while satisfying parameter efficiency. Code and models are available at
https://github.com/jiawen-zhu/ViPT.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 01:51:07 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 02:29:48 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zhu",
"Jiawen",
""
],
[
"Lai",
"Simiao",
""
],
[
"Chen",
"Xin",
""
],
[
"Wang",
"Dong",
""
],
[
"Lu",
"Huchuan",
""
]
] |
new_dataset
| 0.998209 |
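The parameter-efficient recipe in the ViPT abstract above (freeze the
pretrained RGB tracker, train only small modal prompts) follows the generic
prompt-tuning pattern sketched here. The module shapes and names are
placeholders; ViPT's actual architecture differs.

```python
import torch
import torch.nn as nn

foundation = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2)
for p in foundation.parameters():
    p.requires_grad = False                    # pretrained RGB model, frozen

prompt_proj = nn.Linear(64, 64)                # tiny trainable prompt module

def forward(rgb_tokens, aux_tokens):
    prompts = prompt_proj(aux_tokens)          # modal-relevant prompts
    return foundation(torch.cat([prompts, rgb_tokens], dim=1))

rgb = torch.randn(2, 16, 64)                   # RGB patch tokens
aux = torch.randn(2, 4, 64)                    # depth/thermal/event tokens
print(forward(rgb, aux).shape)                 # torch.Size([2, 20, 64])

opt = torch.optim.AdamW(prompt_proj.parameters(), lr=1e-4)  # prompts only
```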
2303.11694
|
Shi Linfeng
|
Linfeng Shi, Yan Li, Xi Zhu
|
Anchor Free remote sensing detector based on solving discrete polar
coordinate equation
|
20 pages,15 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of deep learning, object detection in aerial
remote sensing images has become increasingly popular in recent years. Most
current anchor-free detectors based on keypoint detection directly sample
regression and classification features, with object loss functions designed
for horizontal bounding boxes. This is more challenging for complex and
diverse aerial remote sensing objects. In this paper, we propose an
anchor-free aerial remote sensing object detector (BWP-Det) to detect rotated
and multi-scale objects. Specifically, we design an interactive double-branch
(IDB) up-sampling network, in which one branch performs gradual up-sampling
for heatmap prediction while the other regresses the bounding box parameters.
We improve a weighted multi-scale convolution (WmConv) to highlight the
difference between foreground and background. We extract pixel-level
attention features from the middle layer to guide the two branches toward
effective object information during sampling. Finally, following the
calculation idea of horizontal IoU, we design a rotating IoU on the split
polar coordinate plane, namely JIoU, expressed as the intersection ratio
after discretizing the inner ellipse of the rotated bounding box, to resolve
the coupling between angle and side length when regressing the rotated
bounding box. Our experiments on the DOTA, UCAS-AOD and NWPU VHR-10 datasets
show that BWP-Det achieves advanced performance with simpler models and fewer
regression parameters.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 09:28:47 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 06:43:43 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Shi",
"Linfeng",
""
],
[
"Li",
"Yan",
""
],
[
"Zhu",
"Xi",
""
]
] |
new_dataset
| 0.994783 |
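The JIoU idea in the abstract above, an intersection ratio computed after
discretizing the inner ellipse of each rotated box, can be approximated
numerically. The grid-rasterization stand-in below is an assumption-heavy
sketch, not the paper's formulation; the grid size is an accuracy knob.

```python
import numpy as np

def ellipse_mask(cx, cy, w, h, theta, xs, ys):
    """Points inside the ellipse inscribed in a rotated w-by-h box."""
    dx, dy = xs - cx, ys - cy
    u = dx * np.cos(theta) + dy * np.sin(theta)    # rotate into box frame
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    return (u / (w / 2)) ** 2 + (v / (h / 2)) ** 2 <= 1.0

def jiou(box1, box2, grid=256, extent=10.0):
    xs, ys = np.meshgrid(np.linspace(-extent, extent, grid),
                         np.linspace(-extent, extent, grid))
    m1 = ellipse_mask(*box1, xs, ys)
    m2 = ellipse_mask(*box2, xs, ys)
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return inter / max(union, 1)

# identical boxes give 1.0; rotating one by 90 degrees reduces the overlap
print(jiou((0, 0, 6, 2, 0.0), (0, 0, 6, 2, 0.0)))
print(jiou((0, 0, 6, 2, 0.0), (0, 0, 6, 2, np.pi / 2)))
```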
2303.11921
|
Dingkang Yang
|
Dingkang Yang, Zhaoyu Chen, Yuzheng Wang, Shunli Wang, Mingcheng Li,
Siao Liu, Xiao Zhao, Shuai Huang, Zhiyan Dong, Peng Zhai, Lihua Zhang
|
Context De-confounded Emotion Recognition
|
Accepted by CVPR 2023. CCIM is available at
https://github.com/ydk122024/CCIM
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Context-Aware Emotion Recognition (CAER) is a crucial and challenging task
that aims to perceive the emotional states of the target person with contextual
information. Recent approaches invariably focus on designing sophisticated
architectures or mechanisms to extract seemingly meaningful representations
from subjects and contexts. However, a long-overlooked issue is that a context
bias in existing datasets leads to a significantly unbalanced distribution of
emotional states among different context scenarios. Concretely, the harmful
bias is a confounder that misleads existing models to learn spurious
correlations based on conventional likelihood estimation, significantly
limiting the models' performance. To tackle the issue, this paper provides a
causality-based perspective to disentangle the models from the impact of such
bias, and formulate the causalities among variables in the CAER task via a
tailored causal graph. Then, we propose a Contextual Causal Intervention Module
(CCIM) based on the backdoor adjustment to de-confound the confounder and
exploit the true causal effect for model training. CCIM is a plug-in,
model-agnostic module that improves diverse state-of-the-art approaches by
considerable margins. Extensive experiments on three benchmark datasets
demonstrate the effectiveness of our CCIM and the significance of causal
insight.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 15:12:20 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2023 07:18:05 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Yang",
"Dingkang",
""
],
[
"Chen",
"Zhaoyu",
""
],
[
"Wang",
"Yuzheng",
""
],
[
"Wang",
"Shunli",
""
],
[
"Li",
"Mingcheng",
""
],
[
"Liu",
"Siao",
""
],
[
"Zhao",
"Xiao",
""
],
[
"Huang",
"Shuai",
""
],
[
"Dong",
"Zhiyan",
""
],
[
"Zhai",
"Peng",
""
],
[
"Zhang",
"Lihua",
""
]
] |
new_dataset
| 0.950923 |
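The backdoor adjustment CCIM applies has a compact generic form,
P(Y|do(X)) = sum_z P(Y|X,z) P(z). The miniature below averages a
context-conditioned predictor over assumed confounder prototypes; the
prototypes, priors, and toy predictor are all illustrative.

```python
import numpy as np

def do_predict(predict, x, z_protos, z_priors):
    probs = np.stack([predict(x, z) for z in z_protos])   # P(Y|X, z_i)
    return (z_priors[:, None] * probs).sum(axis=0)        # sum_i P(z_i) P(Y|X,z_i)

def predict(x, z):
    # toy predictor over 3 emotion classes, biased by context prototype z
    logits = x + z
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.array([0.2, 0.1, 0.0])
z_protos = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
z_priors = np.array([0.5, 0.5])
print(do_predict(predict, x, z_protos, z_priors))
```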
2303.12337
|
Tuong Do Khanh Long
|
Nhat Le, Thang Pham, Tuong Do, Erman Tjiputra, Quang D. Tran, Anh
Nguyen
|
Music-Driven Group Choreography
|
accepted in CVPR 2023
| null | null | null |
cs.MM cs.CV cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Music-driven choreography is a challenging problem with a wide variety of
industrial applications. Recently, many methods have been proposed to
synthesize dance motions from music for a single dancer. However, generating
dance motion for a group remains an open problem. In this paper, we present
$\rm AIOZ-GDANCE$, a new large-scale dataset for music-driven group dance
generation. Unlike existing datasets that only support solo dance, our new
dataset contains group dance videos, hence supporting the study of group
choreography. We propose a semi-autonomous labeling method with humans in the
loop to obtain the 3D ground truth for our dataset. The proposed dataset
consists of 16.7 hours of paired music and 3D motion from in-the-wild videos,
covering 7 dance styles and 16 music genres. We show that naively applying a
single-dance generation technique to group dance motion may lead to
unsatisfactory results, such as inconsistent movements and collisions between
dancers. Based on our new dataset, we propose a new method that takes an input
music sequence and a set of 3D positions of dancers to efficiently produce
multiple group-coherent choreographies. We propose new evaluation metrics for
measuring group dance quality and perform intensive experiments to demonstrate
the effectiveness of our method. Our project facilitates future research on
group dance generation and is available at:
https://aioz-ai.github.io/AIOZ-GDANCE/
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 06:26:56 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 01:59:41 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Le",
"Nhat",
""
],
[
"Pham",
"Thang",
""
],
[
"Do",
"Tuong",
""
],
[
"Tjiputra",
"Erman",
""
],
[
"Tran",
"Quang D.",
""
],
[
"Nguyen",
"Anh",
""
]
] |
new_dataset
| 0.999842 |
2303.12368
|
JunYong Choi
|
JunYong Choi and SeokYeong Lee and Haesol Park and Seung-Won Jung and
Ig-Jae Kim and Junghyun Cho
|
MAIR: Multi-view Attention Inverse Rendering with 3D Spatially-Varying
Lighting Estimation
|
Accepted by CVPR 2023; Project Page is
https://bring728.github.io/mair.project/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a scene-level inverse rendering framework that uses multi-view
images to decompose the scene into geometry, an SVBRDF, and 3D spatially-varying
lighting. Because multi-view images provide a variety of information about the
scene, they have long been taken for granted in object-level inverse rendering.
However, owing to the absence of a multi-view HDR synthetic dataset,
scene-level inverse rendering has mainly been studied using single-view images.
We successfully perform scene-level inverse rendering from multi-view images by
expanding the OpenRooms dataset, designing efficient pipelines to handle
multi-view images, and splitting spatially-varying lighting. Our experiments
show that the proposed method not only achieves better performance than
single-view-based methods, but also attains robust performance on unseen
real-world scenes. Also, our sophisticated 3D
spatially-varying lighting volume allows for photorealistic object insertion in
any 3D location.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 08:07:28 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 04:32:11 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Choi",
"JunYong",
""
],
[
"Lee",
"SeokYeong",
""
],
[
"Park",
"Haesol",
""
],
[
"Jung",
"Seung-Won",
""
],
[
"Kim",
"Ig-Jae",
""
],
[
"Cho",
"Junghyun",
""
]
] |
new_dataset
| 0.999455 |
2303.13277
|
Chong Bao
|
Chong Bao, Yinda Zhang, Bangbang Yang, Tianxing Fan, Zesong Yang,
Hujun Bao, Guofeng Zhang and Zhaopeng Cui
|
SINE: Semantic-driven Image-based NeRF Editing with Prior-guided Editing
Field
|
Accepted to CVPR 2023. Project Page: https://zju3dv.github.io/sine/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the great success in 2D editing using user-friendly tools, such as
Photoshop, semantic strokes, or even text prompts, similar capabilities in 3D
areas are still limited, either relying on 3D modeling skills or allowing
editing within only a few categories. In this paper, we present a novel
semantic-driven NeRF editing approach, which enables users to edit a neural
radiance field with a single image, and faithfully delivers edited novel views
with high fidelity and multi-view consistency. To achieve this goal, we propose
a prior-guided editing field to encode fine-grained geometric and texture
editing in 3D space, and develop a series of techniques to aid the editing
process, including cyclic constraints with a proxy mesh to facilitate geometric
supervision, a color compositing mechanism to stabilize semantic-driven texture
editing, and a feature-cluster-based regularization to preserve the irrelevant
content unchanged. Extensive experiments and editing examples on both
real-world and synthetic data demonstrate that our method achieves
photo-realistic 3D editing using only a single edited image, pushing the
boundary of semantic-driven editing in 3D real-world scenes. Our project webpage:
https://zju3dv.github.io/sine/.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 13:58:11 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 14:58:22 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Bao",
"Chong",
""
],
[
"Zhang",
"Yinda",
""
],
[
"Yang",
"Bangbang",
""
],
[
"Fan",
"Tianxing",
""
],
[
"Yang",
"Zesong",
""
],
[
"Bao",
"Hujun",
""
],
[
"Zhang",
"Guofeng",
""
],
[
"Cui",
"Zhaopeng",
""
]
] |
new_dataset
| 0.992933 |
2303.13992
|
Wenqing Li
|
Wenqing Li, Yue Wang, Muhammad Shafique, Saif Eddin Jabari
|
Physical Backdoor Trigger Activation of Autonomous Vehicle using
Reachability Analysis
| null | null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent studies reveal that Autonomous Vehicles (AVs) can be manipulated by
hidden backdoors, causing them to perform harmful actions when activated by
physical triggers. However, it is still unclear how these triggers can be
activated while adhering to traffic principles. Understanding this
vulnerability in a dynamic traffic environment is crucial. This work addresses
this gap by presenting physical trigger activation as a reachability problem of
a controlled dynamic system. Our technique identifies security-critical areas in
traffic systems where trigger conditions for accidents can be reached, and
provides intended trajectories for how those conditions can be reached. Testing
on typical traffic scenarios showed that the system can be successfully driven
to trigger conditions with a near-100% activation rate. Our method helps
identify AV vulnerabilities and enables effective safety strategies.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 13:35:55 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 04:05:48 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Li",
"Wenqing",
""
],
[
"Wang",
"Yue",
""
],
[
"Shafique",
"Muhammad",
""
],
[
"Jabari",
"Saif Eddin",
""
]
] |
new_dataset
| 0.961033 |
2303.14092
|
Haiyu Zhang
|
Mingwu Zheng, Haiyu Zhang, Hongyu Yang, Di Huang
|
NeuFace: Realistic 3D Neural Face Rendering from Multi-view Images
|
Accepted to CVPR 2023, code is released at
https://github.com/aejion/NeuFace
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Realistic face rendering from multi-view images is beneficial to various
computer vision and graphics applications. Due to the complex spatially-varying
reflectance properties and geometry characteristics of faces, however, it
remains challenging to recover 3D facial representations both faithfully and
efficiently in the current studies. This paper presents a novel 3D face
rendering model, namely NeuFace, to learn accurate and physically-meaningful
underlying 3D representations by neural rendering techniques. It naturally
incorporates the neural BRDFs into physically based rendering, capturing
sophisticated facial geometry and appearance clues in a collaborative manner.
Specifically, we introduce an approximated BRDF integration and a simple yet
new low-rank prior, which effectively lower the ambiguities and boost the
performance of the facial BRDFs. Extensive experiments demonstrate the
superiority of NeuFace in human face rendering, along with a decent
generalization ability to common objects.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 15:57:39 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 05:17:02 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zheng",
"Mingwu",
""
],
[
"Zhang",
"Haiyu",
""
],
[
"Yang",
"Hongyu",
""
],
[
"Huang",
"Di",
""
]
] |
new_dataset
| 0.999007 |
2303.14207
|
Jiapeng Tang
|
Jiapeng Tang, Yinyu Nie, Lev Markhasin, Angela Dai, Justus Thies,
Matthias Nie{\ss}ner
|
DiffuScene: Scene Graph Denoising Diffusion Probabilistic Model for
Generative Indoor Scene Synthesis
|
13 figures, 5 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present DiffuScene for indoor 3D scene synthesis based on a novel scene
graph denoising diffusion probabilistic model, which generates 3D instance
properties stored in a fully-connected scene graph and then retrieves the most
similar object geometry for each graph node, i.e., each object instance, which is
characterized as a concatenation of different attributes, including location,
size, orientation, semantics, and geometry features. Based on this scene graph,
we design a diffusion model to determine the placements and types of 3D
instances. Our method can facilitate many downstream applications, including
scene completion, scene arrangement, and text-conditioned scene synthesis.
Experiments on the 3D-FRONT dataset show that our method can synthesize more
physically plausible and diverse indoor scenes than state-of-the-art methods.
Extensive ablation studies verify the effectiveness of our design choice in
scene diffusion models.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 18:00:15 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Tang",
"Jiapeng",
""
],
[
"Nie",
"Yinyu",
""
],
[
"Markhasin",
"Lev",
""
],
[
"Dai",
"Angela",
""
],
[
"Thies",
"Justus",
""
],
[
"Nießner",
"Matthias",
""
]
] |
new_dataset
| 0.990592 |
2303.14301
|
Michael Zellinger
|
Michael J. Zellinger and Peter B\"uhlmann
|
repliclust: Synthetic Data for Cluster Analysis
|
21 pages, 11 figures
| null | null | null |
cs.LG stat.CO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present repliclust (from repli-cate and clust-er), a Python package for
generating synthetic data sets with clusters. Our approach is based on data set
archetypes, high-level geometric descriptions from which the user can create
many different data sets, each possessing the desired geometric
characteristics. The architecture of our software is modular and
object-oriented, decomposing data generation into algorithms for placing
cluster centers, sampling cluster shapes, selecting the number of data points
for each cluster, and assigning probability distributions to clusters. The
project webpage, repliclust.org, provides a concise user guide and thorough
documentation.
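The archetype idea can be sketched without relying on repliclust's actual API (the function and parameter names below are invented for illustration): an archetype fixes high-level geometry such as cluster count, eccentricity, scale, and separation, and each call draws a fresh concrete dataset from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_archetype(n_clusters=4, dim=2, aspect=3.0, radius=1.0, sep=4.0):
    """Sketch of archetype-driven generation (not the repliclust API):
    an 'archetype' here is just (cluster count, dimensionality, eccentricity,
    scale, separation), from which many concrete datasets can be drawn."""
    X, y = [], []
    centers = rng.normal(scale=sep, size=(n_clusters, dim))  # place centers
    for c, mu in enumerate(centers):
        axes = radius * rng.uniform(1.0, aspect, size=dim)   # sample cluster shape
        n = rng.integers(50, 150)                            # points per cluster
        X.append(mu + rng.normal(size=(n, dim)) * axes)      # Gaussian cluster
        y.append(np.full(n, c))
    return np.vstack(X), np.concatenate(y)

X, y = sample_archetype()
print(X.shape, np.bincount(y))
```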
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 23:45:27 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zellinger",
"Michael J.",
""
],
[
"Bühlmann",
"Peter",
""
]
] |
new_dataset
| 0.969476 |
2303.14310
|
Zhen Wang
|
Ana Jojic, Zhen Wang, Nebojsa Jojic
|
GPT is becoming a Turing machine: Here are some ways to program it
|
25 pages, 1 figure
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate that, through appropriate prompting, the GPT-3 family of models
can be triggered to perform iterative behaviours necessary to execute (rather
than just write or recall) programs that involve loops, including several
popular algorithms found in computer science curricula or software developer
interviews. We trigger execution and description of Iterations by Regimenting
Self-Attention (IRSA) in one (or a combination) of three ways: 1) Using strong
repetitive structure in an example of an execution path of a target program for
one particular input, 2) Prompting with fragments of execution paths, and 3)
Explicitly forbidding (skipping) self-attention to parts of the generated text.
On a dynamic program execution, IRSA leads to larger accuracy gains than
replacing the model with the much more powerful GPT-4. IRSA has promising
applications in education, as the prompts and responses resemble student
assignments in data structures and algorithms classes. Our findings hold
implications for evaluating LLMs, which typically targets in-context
learning: we show that prompts that may not even cover one full task example
can trigger algorithmic behaviour, making it possible to solve problems
previously thought hard for LLMs, such as logical puzzles. Consequently,
prompt design plays
an even more critical role in LLM performance than previously recognized.
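To make the prompting style concrete, here is a hedged sketch (the wording is ours, not a prompt from the paper) of how one strongly regimented execution trace can trigger step-by-step execution on a new input:

```python
# A minimal illustration of the IRSA prompt style (hypothetical wording):
# the rigid, repetitive trace regiments the model's self-attention so it
# keeps *executing* the loop on a new input instead of merely describing it.
irsa_prompt = """Problem: sort the list [3, 1, 2] with Bubble Sort.
State: a=[3,1,2] i=0 j=0
Compare a[0]=3 and a[1]=1. 3>1, swap. State: a=[1,3,2] i=0 j=1
Compare a[1]=3 and a[2]=2. 3>2, swap. State: a=[1,2,3] i=0 j=2
End of pass 1, swapped=True, continue.
Compare a[0]=1 and a[1]=2. 1<=2, no swap. State: a=[1,2,3] i=1 j=1
Compare a[1]=2 and a[2]=3. 2<=3, no swap. State: a=[1,2,3] i=1 j=2
End of pass 2, swapped=False, stop.
Final answer: [1, 2, 3]

Problem: sort the list [5, 4, 6, 2] with Bubble Sort.
State:"""
# The prompt would be sent to a completion endpoint; the model is expected
# to continue the trace step by step for the new input.
print(irsa_prompt)
```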
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 00:43:41 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Jojic",
"Ana",
""
],
[
"Wang",
"Zhen",
""
],
[
"Jojic",
"Nebojsa",
""
]
] |
new_dataset
| 0.994226 |
2303.14377
|
Chenchen Xu
|
Chenchen Xu and Min Zhou and Tiezheng Ge and Yuning Jiang and Weiwei
Xu
|
Unsupervised Domain Adaption with Pixel-level Discriminator for
Image-aware Layout Generation
|
8 pages, 4 figures, 7 tables, accepted by CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Layout is essential for graphic design and poster generation. Recently,
applying deep learning models to generate layouts has attracted increasing
attention. This paper focuses on using a GAN-based model conditioned on image
contents to generate advertising poster graphic layouts, which requires an
advertising poster layout dataset with paired product images and graphic
layouts. However, the paired images and layouts in the existing dataset are
collected by inpainting and annotating posters, respectively. There exists a
domain gap between inpainted posters (source domain data) and clean product
images (target domain data). Therefore, this paper combines unsupervised domain
adaption techniques to design a GAN with a novel pixel-level discriminator
(PD), called PDA-GAN, to generate graphic layouts according to image contents.
The PD is connected to the shallow level feature map and computes the GAN loss
for each input-image pixel. Both quantitative and qualitative evaluations
demonstrate that PDA-GAN can achieve state-of-the-art performance and generate
high-quality image-aware graphic layouts for advertising posters.
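A minimal sketch of what a pixel-level discriminator could look like (layer sizes and loss wiring are assumptions, not the paper's exact architecture): a small convolutional head on a shallow feature map emits one domain logit per spatial position, and the adversarial loss is averaged over all per-pixel logits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelLevelDiscriminator(nn.Module):
    """Sketch of a pixel-level discriminator (PD): a small conv head on a
    shallow feature map outputs one real/fake logit per spatial location,
    so the GAN loss is computed per input-image pixel region."""
    def __init__(self, in_ch: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 1),  # 1 logit per spatial position
        )

    def forward(self, shallow_feat: torch.Tensor) -> torch.Tensor:
        return self.head(shallow_feat)  # (B, 1, H, W) logit map

def pd_loss(logits: torch.Tensor, is_source: bool) -> torch.Tensor:
    # Domain-adversarial objective: label every pixel as source (inpainted
    # poster) or target (clean product image) and average the per-pixel loss.
    target = torch.full_like(logits, 1.0 if is_source else 0.0)
    return F.binary_cross_entropy_with_logits(logits, target)

pd = PixelLevelDiscriminator()
feat = torch.randn(2, 64, 32, 32)        # shallow-level feature map
print(pd_loss(pd(feat), is_source=True))
```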
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 06:50:22 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Xu",
"Chenchen",
""
],
[
"Zhou",
"Min",
""
],
[
"Ge",
"Tiezheng",
""
],
[
"Jiang",
"Yuning",
""
],
[
"Xu",
"Weiwei",
""
]
] |
new_dataset
| 0.999177 |
2303.14386
|
Hwanjun Song
|
Hwanjun Song, Jihwan Bang
|
Prompt-Guided Transformers for End-to-End Open-Vocabulary Object
Detection
|
version 1
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prompt-OVD is an efficient and effective framework for open-vocabulary object
detection that utilizes class embeddings from CLIP as prompts, guiding the
Transformer decoder to detect objects in both base and novel classes.
Additionally, our novel RoI-based masked attention and RoI pruning techniques
help leverage the zero-shot classification ability of the Vision
Transformer-based CLIP, resulting in improved detection performance at minimal
computational cost. Our experiments on the OV-COCO and OV-LVIS datasets
demonstrate that Prompt-OVD achieves an impressive 21.2 times faster inference
speed than the first end-to-end open-vocabulary detection method (OV-DETR),
while also achieving higher APs than four two-stage-based methods operating
within similar inference time ranges. Code will be made available soon.
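The RoI-based masked attention can be pictured with a small sketch (the grid and RoI conventions below are ours, not the paper's exact formulation): an additive mask blocks ViT patch tokens outside a proposal, so CLIP's zero-shot classifier attends only to that region.

```python
import torch

def roi_attention_mask(h: int, w: int, roi: tuple) -> torch.Tensor:
    """Additive attention mask over an h x w patch grid: 0 inside the RoI,
    -inf outside, so softmax attention ignores out-of-region tokens."""
    x0, y0, x1, y1 = roi              # RoI in patch-grid coordinates
    mask = torch.full((h, w), float("-inf"))
    mask[y0:y1, x0:x1] = 0.0
    return mask.flatten()             # added to attention logits of patch tokens

mask = roi_attention_mask(14, 14, (3, 3, 9, 9))  # e.g. a 14x14 ViT patch grid
print(mask.shape)  # torch.Size([196])
```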
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 07:31:08 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Song",
"Hwanjun",
""
],
[
"Bang",
"Jihwan",
""
]
] |
new_dataset
| 0.980996 |
2303.14407
|
Yiqian Wu
|
Yiqian Wu, Jing Zhang, Hongbo Fu, Xiaogang Jin
|
LPFF: A Portrait Dataset for Face Generators Across Large Poses
|
9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The creation of 2D realistic facial images and 3D face shapes using
generative networks has been a hot topic in recent years. Existing face
generators exhibit exceptional performance on faces in small to medium poses
(with respect to frontal faces) but struggle to produce realistic results for
large poses. The distorted rendering results on large poses in 3D-aware
generators further show that the generated 3D face shapes are far from the
distribution of 3D faces in reality. We find that the above issues are caused
by the training dataset's pose imbalance.
In this paper, we present LPFF, a large-pose Flickr face dataset comprised of
19,590 high-quality real large-pose portrait images. We utilize our dataset to
train a 2D face generator that can process large-pose face images, as well as a
3D-aware generator that can generate realistic human face geometry. To better
validate our pose-conditional 3D-aware generators, we develop a new FID measure
to evaluate the 3D-level performance. Through this novel FID measure and other
experiments, we show that LPFF can help 2D face generators extend their latent
space and better manipulate the large-pose data, and help 3D-aware face
generators achieve better view consistency and more realistic 3D reconstruction
results.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 09:07:36 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Wu",
"Yiqian",
""
],
[
"Zhang",
"Jing",
""
],
[
"Fu",
"Hongbo",
""
],
[
"Jin",
"Xiaogang",
""
]
] |
new_dataset
| 0.999847 |
2303.14408
|
Ziqin Wang
|
Ziqin Wang, Bowen Cheng, Lichen Zhao, Dong Xu, Yang Tang, Lu Sheng
|
VL-SAT: Visual-Linguistic Semantics Assisted Training for 3D Semantic
Scene Graph Prediction in Point Cloud
|
CVPR2023 Highlight
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of 3D semantic scene graph (3DSSG) prediction in the point cloud is
challenging since (1) the 3D point cloud only captures geometric structures
with limited semantics compared to 2D images, and (2) the long-tailed relation
distribution inherently hinders the learning of unbiased prediction. Since 2D
images provide rich semantics and scene graphs are naturally coupled with
language, in this study we propose a Visual-Linguistic Semantics Assisted
Training (VL-SAT) scheme that can significantly empower 3DSSG prediction models
with discrimination about long-tailed and ambiguous semantic relations. The key
idea is to train a powerful multi-modal oracle model to assist the 3D model.
This oracle learns reliable structural representations based on semantics from
vision, language, and 3D geometry, and its benefits can be heterogeneously
passed to the 3D model during the training stage. By effectively utilizing
visual-linguistic semantics in training, our VL-SAT can significantly boost
common 3DSSG prediction models, such as SGFN and SGGpoint, only with 3D inputs
in the inference stage, especially when dealing with tail relation triplets.
Comprehensive evaluations and ablation studies on the 3DSSG dataset have
validated the effectiveness of the proposed scheme. Code is available at
https://github.com/wz7in/CVPR2023-VLSAT.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 09:14:18 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Wang",
"Ziqin",
""
],
[
"Cheng",
"Bowen",
""
],
[
"Zhao",
"Lichen",
""
],
[
"Xu",
"Dong",
""
],
[
"Tang",
"Yang",
""
],
[
"Sheng",
"Lu",
""
]
] |
new_dataset
| 0.994565 |
2303.14425
|
Zhouhong Gu
|
Zhouhong Gu, Sihang Jiang, Wenhao Huang, Jiaqing Liang, Hongwei Feng,
Yanghua Xiao
|
Sem4SAP: Synonymous Expression Mining From Open Knowledge Graph For
Language Model Synonym-Aware Pretraining
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A model's ability to understand synonymous expressions is crucial in many
kinds of downstream tasks. It makes the model better understand contextual
similarity and more robust to synonym substitution attacks. However, many
Pretrained Language Models (PLMs) lack synonym knowledge due to the
limitations of small-scale synsets and PLMs' pretraining objectives. In this
paper, we propose a framework called Sem4SAP to mine synsets from an Open
Knowledge Graph (Open-KG) and use the mined synsets for synonym-aware
pretraining of language models. We propose to coarsely filter the content in
the Open-KG and use frequency information to better help the clustering process
under low-resource, unsupervised conditions. We expand the mined synsets by
migrating core semantics between synonymous expressions. We also propose two
novel and effective synonym-aware pre-training methods for injecting synonym
knowledge into PLMs. Extensive experiments demonstrate that Sem4SAP can
dramatically outperform the original PLMs and other baselines on ten different
tasks.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 10:19:14 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Gu",
"Zhouhong",
""
],
[
"Jiang",
"Sihang",
""
],
[
"Huang",
"Wenhao",
""
],
[
"Liang",
"Jiaqing",
""
],
[
"Feng",
"Hongwei",
""
],
[
"Xiao",
"Yanghua",
""
]
] |
new_dataset
| 0.973874 |
2303.14436
|
Topside Mathonsi
|
Beauty L. Komane and Topside E. Mathonsi
|
Design of a Smart Waste Management System for the City of Johannesburg
| null | null | null | null |
cs.GT cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Every human being in this world produces waste. South Africa is a developing
country with many townships that have limited waste-management resources.
Ever-increasing population growth overwhelms the capacity of most municipal
authorities to provide even the most essential services. Waste in townships
arises from littering, overflowing bins, tree cutting, and dumping near
rivers. Waste spreads disease, causes air and environmental pollution, and
increases emissions of greenhouse gases. Uncollected waste is dumped widely in
the streets and drains, contributing to flooding, the breeding of insects and
rodent vectors, and the spread of diseases. The aim of this paper is therefore
to design a smart waste management system for the city of Johannesburg. The
city employs municipal waste workers and has provided some areas with
resources such as waste bins and trucks for collecting waste. However, these
resources alone are not enough to solve the city's waste problem. The waste
municipality uses traditional ways of collecting waste, such as going to each
street and picking up waste bins. This traditional approach has worked for
years, but as the population increases, more waste is produced, which causes
various problems for the waste municipalities and the public at large. The
proposed system consists of sensors, user applications, and a real-time
monitoring system. This paper adopts an experimental methodology.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 11:14:58 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Komane",
"Beauty L.",
""
],
[
"Mathonsi",
"Topside E.",
""
]
] |
new_dataset
| 0.951442 |
2303.14438
|
Javier Ron
|
Javier Ron, C\'esar Soto-Valero, Long Zhang, Benoit Baudry, Martin
Monperrus
|
Highly Available Blockchain Nodes With N-Version Design
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Like all software, blockchain nodes are exposed to faults in their underlying
execution stack. Unstable execution environments can disrupt the availability
of blockchain node interfaces, resulting in downtime for users. This paper
introduces the concept of N-version Blockchain nodes. This new type of node
relies on simultaneous execution of different implementations of the same
blockchain protocol, in line with Avizienis' N-version programming vision. We
design and implement an N-version blockchain node prototype in the context of
Ethereum, called N-ETH. We show that N-ETH is able to mitigate the effects of
unstable execution environments and significantly enhance availability under
environment faults. To simulate unstable execution environments, we perform
fault injection at the system-call level. Our results show that existing
Ethereum node implementations behave asymmetrically under identical instability
scenarios. N-ETH leverages this asymmetric behavior available in the diverse
implementations of Ethereum nodes to provide increased availability, even under
our most aggressive fault-injection strategies. We are the first to validate
the relevance of N-version design in the domain of blockchain infrastructure.
From an industrial perspective, our results are of utmost importance for
businesses operating blockchain nodes, including Google, ConsenSys, and many
other major blockchain companies.
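The core availability mechanism can be sketched as a fan-out proxy (the endpoints and payload below are placeholders, and error handling is elided): the same JSON-RPC request goes to several diverse Ethereum clients, and the first successful answer is served, so a fault confined to one implementation does not take the interface down.

```python
import asyncio
import aiohttp

NODES = ["http://geth:8545", "http://nethermind:8545", "http://besu:8545"]
PAYLOAD = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}

async def query(session, url):
    async with session.post(url, json=PAYLOAD,
                            timeout=aiohttp.ClientTimeout(total=2)) as r:
        return (await r.json())["result"]

async def n_version_call():
    """Fan the request out to N diverse client implementations and return
    the first completed answer (a sketch; a production proxy would also
    handle per-node failures and possibly vote across responses)."""
    async with aiohttp.ClientSession() as s:
        tasks = [asyncio.ensure_future(query(s, u)) for u in NODES]
        done, pending = await asyncio.wait(tasks,
                                           return_when=asyncio.FIRST_COMPLETED)
        for t in pending:
            t.cancel()
        return next(iter(done)).result()

# print(asyncio.run(n_version_call()))  # requires reachable node endpoints
```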
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 11:16:17 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Ron",
"Javier",
""
],
[
"Soto-Valero",
"César",
""
],
[
"Zhang",
"Long",
""
],
[
"Baudry",
"Benoit",
""
],
[
"Monperrus",
"Martin",
""
]
] |
new_dataset
| 0.999188 |
2303.14441
|
Topside Mathonsi
|
Nombulelo Zulu, Deon P. Du Plessis, Topside E. Mathonsi and
Tshimangadzo M. Tshilongamulenzhe
|
A User-Based Authentication and DoS Mitigation Scheme for Wearable
Wireless Body Sensor Networks
| null | null | null | null |
cs.CR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Wireless Body Sensor Networks (WBSNs) are one of the fastest-growing
technologies for sensing and performing various tasks. The information
transmitted in WBSNs is vulnerable to cyber-attacks, so security is very
important. Denial of Service (DoS) attacks are considered one of the major
threats to WBSN security. In DoS attacks, an adversary aims to degrade and
shut down the efficient use of the network and disrupt its services, rendering
them inaccessible to their intended users. If sensitive patient information in
WBSNs, such as medical history, is accessed by unauthorized users, the patient
may suffer much more than from the disease itself; it may even result in loss
of life. This paper proposes a user-based authentication scheme to mitigate
DoS attacks in WBSNs. A five-phase user-based authentication and DoS
mitigation scheme for WBSNs is designed by integrating Elliptic Curve
Cryptography (ECC) with Rivest Cipher 4 (RC4) to ensure a strong
authentication process that only allows authorized users to access nodes on
WBSNs.
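One plausible way to pair the two primitives (a sketch of the general idea only; the paper's five-phase protocol is not reproduced here) is to derive an RC4 session key from an ECDH exchange. RC4 is shown purely for illustration, as it has known weaknesses.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 stream cipher (illustrative only)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# ECC half of the pairing: both parties derive a shared RC4 session key
# via ECDH (a plausible combination, assumed for this sketch).
user_priv = ec.generate_private_key(ec.SECP256R1())
node_priv = ec.generate_private_key(ec.SECP256R1())
shared = user_priv.exchange(ec.ECDH(), node_priv.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                   info=b"wbsn-session").derive(shared)
ciphertext = rc4(session_key, b"patient vitals packet")
assert rc4(session_key, ciphertext) == b"patient vitals packet"
```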
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 11:32:16 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zulu",
"Nombulelo",
""
],
[
"Plessis",
"Deon P. Du",
""
],
[
"Mathonsi",
"Topside E.",
""
],
[
"Tshilongamulenzhe",
"Tshimangadzo M.",
""
]
] |
new_dataset
| 0.999058 |
2303.14474
|
Lei Wang
|
Lei Wang and Piotr Koniusz
|
3Mformer: Multi-order Multi-mode Transformer for Skeletal Action
Recognition
|
This paper is accepted by CVPR 2023
|
CVPR 2023
| null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many skeletal action recognition models use GCNs to represent the human body
as a graph of 3D body joints connected by body parts. GCNs aggregate one- or
few-hop graph neighbourhoods, and ignore the dependency between body joints
that are not linked. We propose to form a hypergraph to model hyper-edges
between graph nodes (e.g., third- and fourth-order hyper-edges capture three
and four nodes), which helps capture higher-order motion patterns of groups of
body joints. We split action sequences into temporal blocks, and a
Higher-order Transformer (HoT) produces
embeddings of each temporal block based on (i) the body joints, (ii) pairwise
links of body joints and (iii) higher-order hyper-edges of skeleton body
joints. We combine such HoT embeddings of hyper-edges of orders 1, ..., r by a
novel Multi-order Multi-mode Transformer (3Mformer) with two modules whose
order can be exchanged to achieve coupled-mode attention on coupled-mode tokens
based on 'channel-temporal block', 'order-channel-body joint',
'channel-hyper-edge (any order)' and 'channel-only' pairs. The first module,
called Multi-order Pooling (MP), additionally learns weighted aggregation along
the hyper-edge mode, whereas the second module, Temporal block Pooling (TP),
aggregates along the temporal block mode. Our end-to-end trainable network
yields state-of-the-art results compared to GCN-, transformer- and
hypergraph-based counterparts.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 14:06:31 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Wang",
"Lei",
""
],
[
"Koniusz",
"Piotr",
""
]
] |
new_dataset
| 0.97882 |
2303.14482
|
Alborz Aghamaleki Sarvestani
|
Alborz Aghamaleki Sarvestani, Felix Ruppert, and Alexander
Badri-Spr\"owitz
|
An Open-Source Modular Treadmill for Dynamic Force Measurement with Load
Dependant Range Adjustment
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Ground reaction force sensing is one of the key components of gait analysis
in legged locomotion research. To measure continuous force data during
locomotion, we present a novel compound instrumented treadmill design. The
treadmill is 1.7m long, with a natural frequency of 170Hz and an adjustable
range that can be used for humans and small robots alike. Here, we present the
treadmill's design methodology and characterize it in its natural frequency,
noise behavior and real-life performance. Additionally, we apply an ISO
376-conformant calibration procedure for all spatial force directions and center
of pressure position. We achieve a force accuracy of $\leq$5.6N for the ground
reaction forces and $\leq$13mm in center of pressure position.
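As a hedged illustration of the calibration step (synthetic numbers only; this is not the ISO 376 procedure itself), a linear sensor-to-force map can be fit by least squares from known reference loads and then applied to raw channel readings.

```python
import numpy as np

# Sketch of a force-plate style calibration: apply known reference loads
# F_ref (N x 3) while recording raw sensor channels V (N x C), fit a linear
# calibration matrix by least squares, then convert future readings to forces.
rng = np.random.default_rng(1)
C_true = rng.normal(size=(6, 3))                 # unknown sensor-to-force map
V = rng.normal(size=(200, 6))                    # raw channel readings
F_ref = V @ C_true + rng.normal(scale=1e-3, size=(200, 3))

C_fit, *_ = np.linalg.lstsq(V, F_ref, rcond=None)
F_est = V @ C_fit                                # calibrated reaction forces
print(np.abs(F_est - F_ref).max())               # residual force error
```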
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 14:26:45 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Sarvestani",
"Alborz Aghamaleki",
""
],
[
"Ruppert",
"Felix",
""
],
[
"Badri-Spröwitz",
"Alexander",
""
]
] |
new_dataset
| 0.998039 |
2303.14498
|
Wenqiang Xu
|
Wenqiang Xu, Zhenjun Yu, Han Xue, Ruolin Ye, Siqiong Yao, Cewu Lu
|
Visual-Tactile Sensing for In-Hand Object Reconstruction
|
Accepted in CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Tactile sensing is one of the modalities humans rely on heavily to perceive
the world. Working with vision, this modality refines local geometry structure,
measures deformation at the contact area, and indicates the hand-object contact
state.
With the availability of open-source tactile sensors such as DIGIT, research
on visual-tactile learning is becoming more accessible and reproducible.
Leveraging this tactile sensor, we propose a novel visual-tactile in-hand
object reconstruction framework \textbf{VTacO}, and extend it to
\textbf{VTacOH} for hand-object reconstruction. Since our method can support
both rigid and deformable object reconstruction, no existing benchmarks are
suited to this goal. We propose a simulation environment, VT-Sim, which
supports generating hand-object interaction for both rigid and deformable
objects. With VT-Sim, we generate a large-scale training dataset and evaluate
our method on it. Extensive experiments demonstrate that our proposed method
can outperform the previous baseline methods qualitatively and quantitatively.
Finally, we directly apply our model trained in simulation to various
real-world test cases, which display qualitative results.
Codes, models, simulation environment, and datasets are available at
\url{https://sites.google.com/view/vtaco/}.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 15:16:31 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Xu",
"Wenqiang",
""
],
[
"Yu",
"Zhenjun",
""
],
[
"Xue",
"Han",
""
],
[
"Ye",
"Ruolin",
""
],
[
"Yao",
"Siqiong",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.996729 |
2303.14517
|
Surya Mahadi
|
Made Raharja Surya Mahadi and Nugraha Priya Utama
|
Indonesian Text-to-Image Synthesis with Sentence-BERT and FastGAN
|
11 pages, 3 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Currently, text-to-image synthesis uses a text encoder and image generator
architecture. Research on this topic is challenging because of the domain gap
between natural language and vision. Most research on this topic focuses only
on producing photo-realistic images, while the other domain, the language, is
less studied. Most current research uses English as the input text, yet there
are many languages around the world. Bahasa Indonesia, the official language
of Indonesia, is quite widespread and has been taught in the Philippines,
Australia, and Japan. Translating or recreating a dataset in another language
with good quality is costly. Research on this domain is necessary because we
need to examine how image generators perform in other languages besides
generating photo-realistic images. To achieve this, we translate the CUB
dataset into Bahasa Indonesia using Google Translate and manual human
translation. We use Sentence-BERT as the text encoder and FastGAN as the image
generator. FastGAN uses many skip-excitation modules and an auto-encoder to
generate images at a resolution of 512x512x3, twice as large as the current
state-of-the-art model (Zhang, Xu, Li, Zhang, Wang, Huang and Metaxas, 2019).
We obtain an Inception Score of 4.76 +- 0.43 and a Fr\'echet inception
distance of 46.401, comparable with current English text-to-image generation
models. The mean opinion score is 3.22 out of 5, which means the generated
images are acceptable to humans. Link to source code:
https://github.com/share424/Indonesian-Text-to-Image-synthesis-with-Sentence-BERT-and-FastGAN
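The text-encoding half can be sketched as follows (the checkpoint name is one plausible multilingual Sentence-BERT model, not necessarily the one used in the paper): an Indonesian caption is mapped to a fixed-size embedding that conditions the image generator.

```python
from sentence_transformers import SentenceTransformer

# Encode an Indonesian caption into a sentence embedding for conditioning.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
caption = "seekor burung kecil berwarna kuning dengan sayap abu-abu"
text_embedding = encoder.encode(caption)   # e.g. a 384-d vector
print(text_embedding.shape)
```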
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 16:54:22 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Mahadi",
"Made Raharja Surya",
""
],
[
"Utama",
"Nugraha Priya",
""
]
] |
new_dataset
| 0.996046 |
2303.14521
|
M\'at\'e Cser\'ep
|
D\'avid Magyar, M\'at\'e Cser\'ep, Zolt\'an Vincell\'er, Attila D.
Moln\'ar
|
Waste Detection and Change Analysis based on Multispectral Satellite
Imagery
|
18 pages, 10 figures
|
In Proceedings of K\'EPAF 2023, Gyula, Hungary
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
One of the biggest environmental problems of our time is the increase in
illegal landfills in forests, rivers, on river banks and other secluded places.
In addition, waste in rivers causes damage not only locally, but also
downstream, both in the water and washed ashore. Large islands of waste can
also form at hydroelectric power stations and dams, and if they continue to
flow, they can cause further damage to the natural environment along the river.
Recent studies have also shown that rivers are the main source of plastic
pollution in marine environments. Monitoring potential sources of danger is
therefore highly important for effective waste collection for related
organizations. In our research we analyze two possible forms of waste
detection: identification of hot-spots (i.e. illegal waste dumps) and
identification of water-surface river blockages. We used medium to
high-resolution multispectral satellite imagery as our data source, especially
focusing on the Tisza river as our study area. We found that satellite
imagery and machine learning are viable for locating waste and monitoring
changes in previously detected waste.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 17:12:22 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Magyar",
"Dávid",
""
],
[
"Cserép",
"Máté",
""
],
[
"Vincellér",
"Zoltán",
""
],
[
"Molnár",
"Attila D.",
""
]
] |
new_dataset
| 0.998873 |
2303.14541
|
David Rozenberszki
|
David Rozenberszki, Or Litany, Angela Dai
|
UnScene3D: Unsupervised 3D Instance Segmentation for Indoor Scenes
|
Project page: https://rozdavid.github.io/unscene3d
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D instance segmentation is fundamental to geometric understanding of the
world around us. Existing methods for instance segmentation of 3D scenes rely
on supervision from expensive, manual 3D annotations. We propose UnScene3D, the
first fully unsupervised 3D learning approach for class-agnostic 3D instance
segmentation of indoor scans. UnScene3D first generates pseudo masks by
leveraging self-supervised color and geometry features to find potential object
regions. We operate on a basis of geometric oversegmentation, enabling
efficient representation and learning on high-resolution 3D data. The coarse
proposals are then refined through self-training our model on its predictions.
Our approach improves over state-of-the-art unsupervised 3D instance
segmentation methods by more than 300% in Average Precision score, demonstrating
effective instance segmentation even in challenging, cluttered 3D scenes.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 19:15:16 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Rozenberszki",
"David",
""
],
[
"Litany",
"Or",
""
],
[
"Dai",
"Angela",
""
]
] |
new_dataset
| 0.9987 |
2303.14557
|
Zhuoyue Lyu
|
Zhuoyue Lyu
|
Clo(o)k: A Clock That Looks
|
CHI '23 Human Computer Interaction Across Borders (HCIxB) Workshop
Papers
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
What if a clock could do more than just tell time - what if it could actually
see? This paper delves into the conceptualization, design, and construction of
a timepiece with visual perception capabilities, featuring three applications
that expand the possibilities of human-time interaction. Insights from an Open
House showcase are also shared, highlighting the unique user experiences of
this device.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 20:49:40 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Lyu",
"Zhuoyue",
""
]
] |
new_dataset
| 0.999275 |
2303.14587
|
Shuhong Chen
|
Shuhong Chen, Kevin Zhang, Yichun Shi, Heng Wang, Yiheng Zhu, Guoxian
Song, Sizhe An, Janus Kristjansson, Xiao Yang, Matthias Zwicker
|
PAniC-3D: Stylized Single-view 3D Reconstruction from Portraits of Anime
Characters
|
CVPR 2023, code release:
https://github.com/ShuhongChen/panic3d-anime-reconstruction
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose PAniC-3D, a system to reconstruct stylized 3D character heads
directly from illustrated (p)ortraits of (ani)me (c)haracters. Our anime-style
domain poses unique challenges to single-view reconstruction; compared to
natural images of human heads, character portrait illustrations have hair and
accessories with more complex and diverse geometry, and are shaded with
non-photorealistic contour lines. In addition, there is a lack of both 3D model
and portrait illustration data suitable to train and evaluate this ambiguous
stylized reconstruction task. Facing these challenges, our proposed PAniC-3D
architecture crosses the illustration-to-3D domain gap with a line-filling
model, and represents sophisticated geometries with a volumetric radiance
field. We train our system with two large new datasets (11.2k Vroid 3D models,
1k Vtuber portrait illustrations), and evaluate on a novel AnimeRecon benchmark
of illustration-to-3D pairs. PAniC-3D significantly outperforms baseline
methods, and provides data to establish the task of stylized reconstruction
from portrait illustrations.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 23:36:17 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Chen",
"Shuhong",
""
],
[
"Zhang",
"Kevin",
""
],
[
"Shi",
"Yichun",
""
],
[
"Wang",
"Heng",
""
],
[
"Zhu",
"Yiheng",
""
],
[
"Song",
"Guoxian",
""
],
[
"An",
"Sizhe",
""
],
[
"Kristjansson",
"Janus",
""
],
[
"Yang",
"Xiao",
""
],
[
"Zwicker",
"Matthias",
""
]
] |
new_dataset
| 0.997763 |
2303.14588
|
Bashar Al-Rfooh
|
Bashar Al-Rfooh, Gheith Abandah, Rami Al-Rfou
|
Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text
Diacritization
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Most previous work on learning diacritization of the Arabic language has
relied on training models from scratch. In this paper, we investigate how to
leverage pre-trained language models to learn diacritization. We finetune
token-free pre-trained multilingual models (ByT5) to learn to predict and
insert missing diacritics in Arabic text, a complex task that requires
understanding the sentence semantics and the morphological structure of the
tokens. We show that we can achieve state-of-the-art on the diacritization task
with minimal amount of training and no feature engineering, reducing WER by
40%. We release our finetuned models for the greater benefit of the researchers
in the community.
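A minimal fine-tuning sketch (the base checkpoint and example sentence are illustrative; the authors' released models may differ) shows the token-free setup: undiacritized Arabic goes in as raw bytes, and the diacritized string is the target.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# ByT5 operates directly on bytes, so no Arabic-specific tokenizer or
# feature engineering is needed for the diacritization task.
tok = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

batch = tok(["ذهب الولد الى المدرسة"], return_tensors="pt")
labels = tok(["ذَهَبَ الوَلَدُ إِلَى المَدْرَسَةِ"], return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss   # fine-tuning objective
loss.backward()                             # one training step (sketch)
```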
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 23:41:33 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Al-Rfooh",
"Bashar",
""
],
[
"Abandah",
"Gheith",
""
],
[
"Al-Rfou",
"Rami",
""
]
] |
new_dataset
| 0.976141 |
2303.14597
|
Sangchul Park
|
Sangchul Park
|
Smart Cities: Striking a Balance Between Urban Resilience and Civil
Liberties
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Cities are becoming smarter and more resilient by integrating urban
infrastructure with information technology. However, concerns grow that smart
cities might reverse progress on civil liberties when sensing, profiling, and
predicting citizen activities; undermining citizen autonomy in connectivity,
mobility, and energy consumption; and deprivatizing digital infrastructure. In
response, cities need to deploy technical breakthroughs, such as
privacy-enhancing technologies, cohort modelling, and fair and explainable
machine learning. However, as throwing technologies at cities cannot always
address civil liberty concerns, cities must ensure transparency and foster
citizen participation to win public trust about the way resilience and
liberties are balanced.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 01:09:11 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Park",
"Sangchul",
""
]
] |
new_dataset
| 0.994076 |
2303.14614
|
Kai Niu
|
Kai Niu, Ping Zhang, Jincheng Dai, Zhongwei Si, Chao Dong
|
A Golden Decade of Polar Codes: From Basic Principle to 5G Applications
|
29 pages, 21 figures, Published in China Communications
|
China Communications, vol.20, no. 2, pp. 94-121, 2023
|
10.23919/JCC.2023.02.015
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
After a seventy-year pursuit, the invention of polar codes indicates
that we have found the first capacity-achieving codes with low-complexity
construction and decoding, a great breakthrough in coding theory over
the past two decades. In this survey, we review the history of polar
codes and summarize the advancements of the past ten years. First, the primary
principle of channel polarization is investigated such that the basic
construction, coding method, and classic successive cancellation (SC) decoding
are reviewed. Second, in order to improve performance at finite code
lengths, we introduce the guiding principles and distill five design criteria
for the construction, design, and implementation of the polar code in the
practical communication system based on the exemplar schemes in the literature.
Especially, we explain the design principle behind the concatenated coding and
rate matching of polar codes in a 5G wireless system. Furthermore, the improved
SC decoding algorithms, such as SC list (SCL) decoding and SC stack (SCS)
decoding, etc., are investigated and compared. Finally, the research prospects
of polar codes for the future 6G communication system are explored, including
the optimization of short polar codes, coding construction in fading channels,
polar coded modulation and HARQ, and the polar coded transmission, namely polar
processing. Predictably, as a new coding methodology, polar codes will shine a
light on communication theory and unveil a revolution in transmission
technology.
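To make the basic construction concrete, here is a small sketch of the polar transform (the frozen-bit pattern below is a toy choice, not a reliability-ordered design): encoding computes x = u F^(⊗n) over GF(2), with F = [[1,0],[1,1]], via the standard butterfly recursion.

```python
import numpy as np

def polar_encode(u: np.ndarray) -> np.ndarray:
    """Polar encoding x = u * F^{(n)} over GF(2), where F = [[1,0],[1,1]]
    and n = log2(N); implemented with the standard butterfly recursion."""
    N = len(u)
    x = u.copy()
    step = 1
    while step < N:
        for i in range(0, N, 2 * step):
            # upper branch: u1 XOR u2; lower branch passes u2 through
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

# Toy N=8 example: 'frozen' positions (a hypothetical choice here) are
# pinned to 0 and the K=4 information bits occupy the remaining channels.
u = np.zeros(8, dtype=np.uint8)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]
print(polar_encode(u))
```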
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 03:42:59 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Niu",
"Kai",
""
],
[
"Zhang",
"Ping",
""
],
[
"Dai",
"Jincheng",
""
],
[
"Si",
"Zhongwei",
""
],
[
"Dong",
"Chao",
""
]
] |
new_dataset
| 0.981943 |
2303.14626
|
Yukang Zhang
|
Yukang Zhang, Yan Yan, Jie Li, Hanzi Wang
|
MRCN: A Novel Modality Restitution and Compensation Network for
Visible-Infrared Person Re-identification
|
Accepted by AAAI-2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visible-infrared person re-identification (VI-ReID), which aims to search
identities across different spectra, is a challenging task due to large
cross-modality discrepancy between visible and infrared images. The key to
reducing the discrepancy is to filter out identity-irrelevant interference and
effectively learn modality-invariant person representations. In this paper, we
propose a novel Modality Restitution and Compensation Network (MRCN) to narrow
the gap between the two modalities. Specifically, we first reduce the modality
discrepancy by using two Instance Normalization (IN) layers. Next, to reduce
the influence of IN layers on removing discriminative information and to reduce
modality differences, we propose a Modality Restitution Module (MRM) and a
Modality Compensation Module (MCM) to respectively distill modality-irrelevant
and modality-relevant features from the removed information. Then, the
modality-irrelevant features are used to restitute the normalized visible
and infrared features, while the modality-relevant features are used to
compensate for the features of the other modality. Furthermore, to better
disentangle the modality-relevant features and the modality-irrelevant
features, we propose a novel Center-Quadruplet Causal (CQC) loss to encourage
the network to effectively learn the modality-relevant features and the
modality-irrelevant features. Extensive experiments are conducted to validate
the superiority of our method on the challenging SYSU-MM01 and RegDB datasets.
More remarkably, our method achieves 95.1% in terms of Rank-1 and 89.2% in
terms of mAP on the RegDB dataset.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 05:03:18 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zhang",
"Yukang",
""
],
[
"Yan",
"Yan",
""
],
[
"Li",
"Jie",
""
],
[
"Wang",
"Hanzi",
""
]
] |
new_dataset
| 0.998608 |
2303.14647
|
Behrouz Minaei-Bidgoli
|
Najmeh Torabian, Behrouz Minaei-Bidgoli and Mohsen Jahanshahi
|
Farspredict: A benchmark dataset for link prediction
|
13 pages, 3 figures, 1 algorithm and 5 tables
| null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Link prediction with knowledge graph embedding (KGE) is a popular method for
knowledge graph completion. Furthermore, training KGEs on non-English knowledge
graphs promotes knowledge extraction and knowledge graph reasoning in the
context of these languages. However, non-English KGEs face many challenges in
learning a low-dimensional representation of a knowledge graph's entities and
relations. This paper proposes "Farspredict", a Persian knowledge graph based
on Farsbase (the most comprehensive knowledge graph in Persian). It also
explains how the knowledge graph structure affects link prediction accuracy in
KGE. To evaluate Farspredict, we implemented popular KGE models on it and
compared the results with Freebase. Based on the analysis results, some
optimizations of the knowledge graph were carried out to improve its
suitability for KGE. As a result, a new Persian knowledge graph is obtained.
The KGE models trained on Farspredict outperform those trained on Freebase in
many cases. Finally, we discuss which improvements could further enhance the
quality of Farspredict and by how much.
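As a sketch of the kind of KGE model typically evaluated on such benchmarks (entity/relation counts are placeholders; this is generic TransE, not a Farspredict-specific model), link prediction scores a triple (h, r, t) by -||e_h + e_r - e_t|| and ranks candidate tails.

```python
import torch

n_ent, n_rel, dim = 10_000, 50, 100
E = torch.nn.Embedding(n_ent, dim)   # entity embeddings
R = torch.nn.Embedding(n_rel, dim)   # relation embeddings

def score(h, r, t):
    # TransE: a plausible tail satisfies e_h + e_r ~ e_t
    return -(E(h) + R(r) - E(t)).norm(p=1, dim=-1)

h = torch.tensor([42]); r = torch.tensor([7])
candidates = torch.arange(n_ent)
ranks = score(h, r, candidates).argsort(descending=True)
print(ranks[:10])   # top-10 predicted tail entities
```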
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 07:41:26 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Torabian",
"Najmeh",
""
],
[
"Minaei-Bidgoli",
"Behrouz",
""
],
[
"Jahanshahi",
"Mohsen",
""
]
] |
new_dataset
| 0.999647 |
2303.14653
|
Min Wang
|
Yingda Guan, Zhengyang Feng, Huiying Chang, Kuo Du, Tingting Li, Min
Wang
|
SDTracker: Synthetic Data Based Multi-Object Tracking
|
cvpr2022 workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SDTracker, a method that harnesses the potential of synthetic data
for multi-object tracking of real-world scenes in a domain generalization and
semi-supervised fashion. First, we use the ImageNet dataset as an auxiliary to
randomize the style of synthetic data. With out-of-domain data, we further
enforce pyramid consistency loss across different "stylized" images from the
same sample to learn domain invariant features. Second, we adopt the
pseudo-labeling method to effectively utilize the unlabeled MOT17 training
data. To obtain high-quality pseudo-labels, we apply proximal policy
optimization (PPO2) algorithm to search confidence thresholds for each
sequence. When using the unlabeled MOT17 training set, combined with the
pure-motion tracking strategy upgraded via developed post-processing, we
finally reach 61.4 HOTA.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 08:21:22 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Guan",
"Yingda",
""
],
[
"Feng",
"Zhengyang",
""
],
[
"Chang",
"Huiying",
""
],
[
"Du",
"Kuo",
""
],
[
"Li",
"Tingting",
""
],
[
"Wang",
"Min",
""
]
] |
new_dataset
| 0.992065 |
2303.14706
|
Qian Wang
|
Qian Wang, Yiqun Wang, Michael Birsak, Peter Wonka
|
BlobGAN-3D: A Spatially-Disentangled 3D-Aware Generative Model for
Indoor Scenes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D-aware image synthesis has attracted increasing interest as it models the
3D nature of our real world. However, performing realistic object-level editing
of the generated images in the multi-object scenario still remains a challenge.
Recently, a 2D GAN termed BlobGAN has demonstrated great multi-object editing
capabilities on real-world indoor scene datasets. In this work, we propose
BlobGAN-3D, which is a 3D-aware improvement of the original 2D BlobGAN. We
enable explicit camera pose control while maintaining the disentanglement for
individual objects in the scene by extending the 2D blobs into 3D blobs. We
keep the object-level editing capabilities of BlobGAN and in addition allow
flexible control over the 3D location of the objects in the scene. We test our
method on real-world indoor datasets and show that it achieves
image quality comparable to the 2D BlobGAN and other 3D-aware GAN
baselines while being able to enable camera pose control and object-level
editing in the challenging multi-object real-world scenarios.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 12:23:11 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Wang",
"Qian",
""
],
[
"Wang",
"Yiqun",
""
],
[
"Birsak",
"Michael",
""
],
[
"Wonka",
"Peter",
""
]
] |
new_dataset
| 0.999802 |
2303.14707
|
Xinhang Liu
|
Xinhang Liu, Yu-Wing Tai, Chi-Keung Tang
|
Clean-NeRF: Reformulating NeRF to account for View-Dependent
Observations
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While Neural Radiance Fields (NeRFs) have achieved unprecedented novel view
synthesis results, they have been struggling in dealing with large-scale
cluttered scenes with sparse input views and highly view-dependent appearances.
Specifically, existing NeRF-based models tend to produce blurry rendering with
the volumetric reconstruction often inaccurate, where a lot of reconstruction
errors are observed in the form of foggy "floaters" hovering within the entire
volume of an opaque 3D scene. Such inaccuracies impede NeRF's potential for
accurate 3D NeRF registration, object detection, segmentation, etc., which
may explain why only limited research effort has so far directly addressed
these important fundamental 3D computer vision problems. This paper analyzes
NeRF's struggles in such settings and proposes
Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex
scenes. Our key insights consist of enforcing effective appearance and geometry
constraints, which are absent in the conventional NeRF reconstruction, by 1)
automatically detecting and modeling view-dependent appearances in the training
views to prevent them from interfering with density estimation, which is
complete with 2) a geometric correction procedure performed on each traced ray
during inference. Clean-NeRF can be implemented as a plug-in that can
immediately benefit existing NeRF-based methods without additional input. Codes
will be released.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 12:24:31 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Liu",
"Xinhang",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Tang",
"Chi-Keung",
""
]
] |
new_dataset
| 0.983765 |
2303.14717
|
Jianhui Yu
|
Jianhui Yu, Hao Zhu, Liming Jiang, Chen Change Loy, Weidong Cai, Wayne
Wu
|
CelebV-Text: A Large-Scale Facial Text-Video Dataset
|
Accepted by CVPR2023. Project Page: https://celebv-text.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-driven generation models are flourishing in video generation and
editing. However, face-centric text-to-video generation remains a challenge due
to the lack of a suitable dataset containing high-quality videos and highly
relevant texts. This paper presents CelebV-Text, a large-scale, diverse, and
high-quality dataset of facial text-video pairs, to facilitate research on
facial text-to-video generation tasks. CelebV-Text comprises 70,000 in-the-wild
face video clips with diverse visual content, each paired with 20 texts
generated using the proposed semi-automatic text generation strategy. The
provided texts are of high quality, describing both static and dynamic
attributes precisely. The superiority of CelebV-Text over other datasets is
demonstrated via comprehensive statistical analysis of the videos, texts, and
text-video relevance. The effectiveness and potential of CelebV-Text are
further shown through extensive self-evaluation. A benchmark is constructed
with representative methods to standardize the evaluation of the facial
text-to-video generation task. All data and models are publicly available.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 13:06:35 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Yu",
"Jianhui",
""
],
[
"Zhu",
"Hao",
""
],
[
"Jiang",
"Liming",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Cai",
"Weidong",
""
],
[
"Wu",
"Wayne",
""
]
] |
new_dataset
| 0.999909 |
2303.14718
|
Kazuki Sugihara
|
Kazuki Sugihara, Moju Zhao, Takuzumi Nishio, Tasuku Makabe, Kei Okada,
Masayuki Inaba
|
Design and Control of a Humanoid Equipped with Flight Unit and Wheels
for Multimodal Locomotion
|
8 pages 17 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humanoids are versatile robotic platforms because of their limbs with
multiple degrees of freedom. Although humanoids can walk like humans, the speed
is relatively slow, and they cannot run over large barriers. To address these
problems, we aim to achieve rapid terrestrial locomotion ability and
simultaneously expand the domain of locomotion to the air by utilizing thrust
for propulsion. In this paper, we first describe an optimized construction
method of a humanoid robot equipped with wheels and a flight unit to achieve
these abilities. Then, we describe the integrated control framework of the
proposed flying humanoid for each mode of locomotion: aerial locomotion, leg
locomotion, and wheel locomotion. Finally, we achieved multimodal locomotion
and aerial manipulation experiments using the robot platform proposed in this
work. To the best of our knowledge, this is the first time a single humanoid
has achieved three different types of locomotion, including flight.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 13:09:14 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Sugihara",
"Kazuki",
""
],
[
"Zhao",
"Moju",
""
],
[
"Nishio",
"Takuzumi",
""
],
[
"Makabe",
"Tasuku",
""
],
[
"Okada",
"Kei",
""
],
[
"Inaba",
"Masayuki",
""
]
] |
new_dataset
| 0.990742 |
2303.14758
|
Asma Jodeiri Akbarfam
|
Asma Jodeiri Akbarfam, Sina Barazandeh, Hoda Maleki, Deepti Gupta
|
DLACB: Deep Learning Based Access Control Using Blockchain
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep learning models are widely used to make informed decisions. Such models
are mainly deployed on centralized servers, which face several issues,
including transparency, traceability, reliability, security, and privacy. In
this research, we identify a research gap: a distributed access control
mechanism that can solve those issues. Blockchain, an innovative technology,
could fill this gap and provide a robust solution. Blockchain's immutable and
distributed nature makes it a useful framework in various domains such as
medicine, finance, and government, and it can also provide access control, as
opposed to centralized methods that rely on trusted third parties to access
resources. Existing frameworks implement a traditional access control approach
on top of blockchain, which depends on predefined policies and permissions
that are not reliable. In this research, we propose DLACB: Deep Learning Based
Access Control Using Blockchain, which utilizes a deep learning access control
mechanism to determine a user's permissions on a given resource. The proposed
framework authenticates users and logs access requests on the blockchain to
recognize malicious users. The results show that the proposed framework
operates correctly for all possible scenarios.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 15:25:09 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Akbarfam",
"Asma Jodeiri",
""
],
[
"Barazandeh",
"Sina",
""
],
[
"Maleki",
"Hoda",
""
],
[
"Gupta",
"Deepti",
""
]
] |
new_dataset
| 0.998522 |
2303.14792
|
Mehdi Delrobaei
|
Fateme Zare, Paniz Sedighi, Mehdi Delrobaei
|
A Wearable RFID-Based Navigation System for the Visually Impaired
|
6 pages, 6 figures, 3 tables
| null |
10.1109/ICRoM57054.2022.10025351
| null |
cs.HC cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent studies have focused on developing advanced assistive devices to help
blind or visually impaired people. Navigation is challenging for this
community; however, developing a simple yet reliable navigation system is still
an unmet need. This study targets the navigation problem and proposes a
wearable assistive system. We developed a smart glove and shoe set based on
radio-frequency identification technology to assist visually impaired people
with navigation and orientation in indoor environments. The system enables the
user to find the directions through audio feedback. To evaluate the device's
performance, we designed a simple experimental setup. The proposed system has a
simple structure and can be personalized according to the user's requirements.
The results indicate that the platform is reliable, power-efficient, and
accurate enough for indoor navigation.
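  A minimal sketch of the tag-to-direction lookup with audio feedback is shown
below; the tag IDs, spoken cues, and speak stub are hypothetical placeholders
for the actual reader hardware and text-to-speech path.
```python
# Hypothetical tag map: tag IDs embedded in the environment mapped to cues.
TAG_DIRECTIONS = {
    "E200001A": "Turn left toward the elevator.",
    "E200002B": "Continue straight for five meters.",
    "E200003C": "Destination reached: Room 101.",
}

def speak(message):
    # Stand-in for the audio-feedback module (assumption); a real device
    # would route this to a text-to-speech engine or recorded prompts.
    print(f"[AUDIO] {message}")

def on_tag_read(tag_id):
    """Called whenever the glove/shoe reader detects an RFID tag."""
    cue = TAG_DIRECTIONS.get(tag_id)
    speak(cue if cue else "Unknown tag; please retrace your last step.")

on_tag_read("E200001A")
on_tag_read("DEADBEEF")  # unrecognized tag falls back to a safe prompt
```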
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 18:30:57 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Zare",
"Fateme",
""
],
[
"Sedighi",
"Paniz",
""
],
[
"Delrobaei",
"Mehdi",
""
]
] |
new_dataset
| 0.996849 |
2303.14796
|
Hadar Frenkel
|
Bernd Finkbeiner, Hadar Frenkel, Jana Hofmann, and Janine Lohse
|
Automata-Based Software Model Checking of Hyperproperties
| null | null | null | null |
cs.LO cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We develop model checking algorithms for Temporal Stream Logic (TSL) and
Hyper Temporal Stream Logic (HyperTSL) modulo theories. TSL extends Linear
Temporal Logic (LTL) with memory cells, functions and predicates, making it a
convenient and expressive logic to reason over software and other systems with
infinite data domains. HyperTSL further extends TSL to the specification of
hyperproperties - properties that relate multiple system executions. As such,
HyperTSL can express information flow policies like noninterference in software
systems. We augment HyperTSL with theories, resulting in HyperTSL(T), and build
on methods from LTL software verification to obtain model checking algorithms
for TSL and HyperTSL(T). This results in a sound but necessarily incomplete
algorithm for specifications contained in the forall*exists* fragment of
HyperTSL(T). Our approach constitutes the first software model checking
algorithm for temporal hyperproperties with quantifier alternations that does
not rely on a finite-state abstraction.
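  As an illustration of the forall*exists* fragment, the sketch below checks a
noninterference-style hyperproperty over a finite set of traces: for every
trace there must exist another with different high inputs but identical
low-observable behaviour. The trace encoding and property are simplifying
assumptions and omit TSL's memory cells and theories entirely.
```python
# Each trace is a list of observations: (high_input, low_input, low_output).
traces = [
    [(0, "a", "x"), (0, "b", "y")],
    [(1, "a", "x"), (1, "b", "y")],  # same low behaviour, different high input
]

def low_equivalent(t1, t2):
    # Equal low inputs and low outputs at every step (assumption).
    return len(t1) == len(t2) and all(a[1:] == b[1:] for a, b in zip(t1, t2))

def forall_exists_noninterference(trace_set):
    """For every trace t1 there exists a t2 with different high inputs but
    identical low-observable behaviour -- a forall-exists hyperproperty."""
    return all(
        any(
            low_equivalent(t1, t2)
            and any(a[0] != b[0] for a, b in zip(t1, t2))
            for t2 in trace_set
        )
        for t1 in trace_set
    )

print(forall_exists_noninterference(traces))  # True for the set above
```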
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 19:01:10 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Finkbeiner",
"Bernd",
""
],
[
"Frenkel",
"Hadar",
""
],
[
"Hofmann",
"Jana",
""
],
[
"Lohse",
"Janine",
""
]
] |
new_dataset
| 0.971936 |
2303.14814
|
Yang Zou
|
Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash
Ravichandran, Onkar Dabeer
|
WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation
|
Accepted to Conference on Computer Vision and Pattern Recognition
(CVPR) 2023
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual anomaly classification and segmentation are vital for automating
industrial quality inspection. The focus of prior research in the field has
been on training custom models for each quality inspection task, which requires
task-specific images and annotations. In this paper, we move away from this
regime, addressing zero-shot and few-normal-shot anomaly classification and
segmentation. Recently, CLIP, a vision-language model, has shown revolutionary
generality with competitive zero-/few-shot performance compared to full
supervision. However, CLIP falls short on anomaly classification and
segmentation tasks. Hence, we propose window-based CLIP (WinCLIP) with (1) a
compositional ensemble on state words and prompt templates and (2) efficient
extraction and aggregation of window/patch/image-level features aligned with
text. We also propose its few-normal-shot extension WinCLIP+, which uses
complementary information from normal images. In MVTec-AD (and VisA), without
further tuning, WinCLIP achieves 91.8%/85.1% (78.1%/79.6%) AUROC in zero-shot
anomaly classification and segmentation, while WinCLIP+ achieves 93.1%/95.2%
(83.8%/96.4%) in the 1-normal-shot setting, surpassing the state of the art by
large margins.
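  A rough sketch of the compositional prompt ensemble for zero-shot anomaly
scoring follows. The embed_text/embed_image functions are hypothetical
stand-ins for a CLIP encoder (here, deterministic pseudo-embeddings), and the
state words and templates are abbreviated illustrations, not the paper's full
lists.
```python
import numpy as np

rng = np.random.default_rng(0)

def embed_text(prompt):
    # Hypothetical stand-in for a CLIP text encoder: a unit-norm
    # pseudo-embedding derived from the prompt string (assumption).
    seed = abs(hash(prompt)) % (2**32)
    v = np.random.default_rng(seed).normal(size=512)
    return v / np.linalg.norm(v)

def embed_image(image):
    # Hypothetical stand-in for a CLIP image encoder (assumption).
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

STATES = {
    "normal": ["flawless", "perfect"],
    "anomalous": ["damaged", "with a defect"],
}
TEMPLATES = ["a photo of a {} {}", "a cropped photo of a {} {}"]

def class_embedding(label, obj="transistor"):
    # Compositional ensemble: average over state words x prompt templates.
    vecs = [embed_text(t.format(s, obj)) for s in STATES[label] for t in TEMPLATES]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def anomaly_score(image, temperature=0.07):
    img = embed_image(image)
    sims = np.array([img @ class_embedding("normal"),
                     img @ class_embedding("anomalous")])
    probs = np.exp(sims / temperature) / np.exp(sims / temperature).sum()
    return probs[1]  # probability mass on the anomalous prompts

print(f"anomaly score: {anomaly_score(None):.3f}")
```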
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 20:41:21 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Jeong",
"Jongheon",
""
],
[
"Zou",
"Yang",
""
],
[
"Kim",
"Taewan",
""
],
[
"Zhang",
"Dongqing",
""
],
[
"Ravichandran",
"Avinash",
""
],
[
"Dabeer",
"Onkar",
""
]
] |
new_dataset
| 0.994434 |
2303.14816
|
Tian-Zhu Xiang
|
Zhou Huang, Hang Dai, Tian-Zhu Xiang, Shuo Wang, Huai-Xin Chen, Jie
Qin, Huan Xiong
|
Feature Shrinkage Pyramid for Camouflaged Object Detection with
Transformers
|
CVPR 2023. Project webpage at:
https://tzxiang.github.io/project/COD-FSPNet/index.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision transformers have recently shown strong global context modeling
capabilities in camouflaged object detection. However, they suffer from two
major limitations: less effective locality modeling and insufficient feature
aggregation in decoders, which are not conducive to camouflaged object
detection that explores subtle cues from indistinguishable backgrounds. To
address these issues, in this paper, we propose a novel transformer-based
Feature Shrinkage Pyramid Network (FSPNet), which aims to hierarchically decode
locality-enhanced neighboring transformer features through progressive
shrinking for camouflaged object detection. Specifically, we propose a nonlocal
token enhancement module (NL-TEM) that employs the non-local mechanism to
enable interaction among neighboring tokens and explore graph-based high-order
relations within tokens to enhance the local representations of transformers.
Moreover, we design a feature shrinkage decoder (FSD) with adjacent interaction
modules (AIM), which progressively aggregates adjacent transformer features
through a layer-by-layer
shrinkage pyramid to accumulate imperceptible but effective cues as much as
possible for object information decoding. Extensive quantitative and
qualitative experiments demonstrate that the proposed model significantly
outperforms 24 existing competitors on three challenging COD benchmark
datasets under six widely-used evaluation metrics. Our code is publicly
available at https://github.com/ZhouHuang23/FSPNet.
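  A minimal sketch of the progressive shrinkage idea appears below: adjacent
features are merged pairwise, layer by layer, until a single decoded
representation remains. The elementwise-mean merge is a deliberate
simplification of the learned adjacent interaction modules, not the paper's
actual AIM.
```python
import numpy as np

def merge_adjacent(f1, f2):
    # Simplified adjacent interaction: elementwise fusion (assumption);
    # the actual AIM uses learned interactions between neighboring features.
    return 0.5 * (f1 + f2)

def shrinkage_pyramid(features):
    """Progressively aggregate a list of same-shaped transformer features by
    merging adjacent pairs until one decoded feature remains."""
    while len(features) > 1:
        features = [merge_adjacent(features[i], features[i + 1])
                    for i in range(len(features) - 1)]
    return features[0]

# Four equally shaped feature maps (tokens x channels), as a toy example.
feats = [np.full((196, 384), float(i)) for i in range(4)]
out = shrinkage_pyramid(feats)
print(out.shape, out[0, 0])  # (196, 384) 1.5
```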
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 20:50:58 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Huang",
"Zhou",
""
],
[
"Dai",
"Hang",
""
],
[
"Xiang",
"Tian-Zhu",
""
],
[
"Wang",
"Shuo",
""
],
[
"Chen",
"Huai-Xin",
""
],
[
"Qin",
"Jie",
""
],
[
"Xiong",
"Huan",
""
]
] |
new_dataset
| 0.982287 |
2303.14827
|
Benjamin Kenwright
|
Ben Kenwright
|
Dual-Quaternion Julia Fractals
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Fractals offer the ability to generate fascinating geometric shapes with all
sorts of unique characteristics (for instance, fractal geometry provides a
basis for modelling infinite detail found in nature). While fractals are
non-Euclidean mathematical objects which possess an assortment of properties
(e.g., attractivity and symmetry); they can also be scaled down, rotated,
skewed, and replicated in embedded contexts. Hence, many different types of
fractals have come into the limelight since their original discovery. One
particularly popular method for generating fractal geometry is using Julia
sets. Julia sets provide a straightforward and innovative method for generating
fractal geometry using an iterative computational modelling algorithm. In this
paper, we present a method that combines Julia sets with dual-quaternion
algebra. Dual-quaternions are an alluring concept with a whole range of
interesting mathematical possibilities. Extending fractal Julia sets to
encompass dual-quaternion algebra provides us with a novel visualization
solution. We explain the method of generating fractals using dual-quaternions
in combination with Julia sets. Our prototype implementation demonstrates an
efficient method for rendering fractal geometry using dual-quaternion Julia
sets based upon an uncomplicated ray tracing algorithm. We show a number of
different experimental
isosurface examples to demonstrate the viability of our approach.
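  A minimal sketch of the core iteration z <- z^2 + c under dual-quaternion
algebra follows, assuming the standard dual-number product rule (epsilon^2 = 0)
and an escape test on the non-dual part; the constant c, the seed point, and
the threshold are illustrative choices, and a full renderer would ray-trace the
resulting isosurface.
```python
def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

def dq_mul(p, q):
    # Dual-quaternion product: (r1 + e*d1)(r2 + e*d2) = r1*r2 + e*(r1*d2 + d1*r2),
    # using e^2 = 0 for the dual unit e.
    r1, d1 = p
    r2, d2 = q
    return (qmul(r1, r2), qadd(qmul(r1, d2), qmul(d1, r2)))

def dq_add(p, q):
    return (qadd(p[0], q[0]), qadd(p[1], q[1]))

def in_julia_set(z, c, max_iter=64, escape=4.0):
    """Iterate z <- z^2 + c over dual-quaternions; membership is decided by
    whether the real (non-dual) part stays bounded (assumption)."""
    for _ in range(max_iter):
        z = dq_add(dq_mul(z, z), c)
        if sum(v * v for v in z[0]) > escape * escape:
            return False
    return True

# Illustrative constant c; a renderer would sweep z0 over a 3D slice of points
# and extract the isosurface separating bounded from escaping seeds.
c = ((-0.2, 0.8, 0.0, 0.0), (0.0, 0.1, 0.0, 0.0))
z0 = ((0.1, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 0.0))
print(in_julia_set(z0, c))
```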
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 21:20:31 GMT"
}
] | 2023-03-28T00:00:00 |
[
[
"Kenwright",
"Ben",
""
]
] |
new_dataset
| 0.998828 |