id (string, 9–10 chars) | submitter (string, 2–52, ⌀) | authors (string, 4–6.51k) | title (string, 4–246) | comments (string, 1–523, ⌀) | journal-ref (string, 4–345, ⌀) | doi (string, 11–120, ⌀) | report-no (string, 2–243, ⌀) | categories (string, 5–98) | license (9 classes) | abstract (string, 33–3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (1 class) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.06323
|
Minh Tran Quang
|
Minh Tran, Khoa Vo, Kashu Yamazaki, Arthur Fernandes, Michael Kidd,
and Ngan Le
|
AISFormer: Amodal Instance Segmentation with Transformer
|
Accepted to BMVC2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Amodal Instance Segmentation (AIS) aims to segment the regions of both the visible
and the possibly occluded parts of an object instance. While Mask R-CNN-based AIS
approaches have shown promising results, they are unable to model high-level
feature coherence due to their limited receptive field. Recent
transformer-based models show impressive performance on vision tasks, even
surpassing Convolutional Neural Networks (CNNs). In this work, we present
AISFormer, an AIS framework with a Transformer-based mask head. AISFormer
explicitly models the complex coherence between occluder, visible, amodal, and
invisible masks within an object's regions of interest by treating them as
learnable queries. Specifically, AISFormer contains four modules: (i) feature
encoding: extract ROI features and learn both short-range and long-range visual
features; (ii) mask transformer decoding: generate the occluder, visible, and
amodal mask query embeddings with a transformer decoder; (iii) invisible mask
embedding: model the coherence between the amodal and visible masks; and (iv)
mask predicting: estimate the output masks, including occluder, visible, amodal, and
invisible. We conduct extensive experiments and ablation studies on three
challenging benchmarks, i.e., KINS, D2SA, and COCOA-cls, to evaluate the
effectiveness of AISFormer. The code is available at:
https://github.com/UARK-AICV/AISFormer
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 15:42:40 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Oct 2022 19:14:37 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Mar 2023 05:00:50 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Tran",
"Minh",
""
],
[
"Vo",
"Khoa",
""
],
[
"Yamazaki",
"Kashu",
""
],
[
"Fernandes",
"Arthur",
""
],
[
"Kidd",
"Michael",
""
],
[
"Le",
"Ngan",
""
]
] |
new_dataset
| 0.978125 |
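The AISFormer record above describes treating the occluder, visible, and amodal masks as learnable queries decoded against ROI features. The following is a minimal, hypothetical PyTorch sketch of that pattern; the module name, layer sizes, and query count are illustrative assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of a transformer-based mask head that treats the
# occluder, visible, and amodal masks as learnable queries (illustrative only;
# layer sizes and names are assumptions, not the AISFormer reference code).
import torch
import torch.nn as nn

class QueryMaskHead(nn.Module):
    def __init__(self, dim=256, num_queries=3, num_layers=2):
        super().__init__()
        # One learnable query per mask type: occluder, visible, amodal.
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.pixel_proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, roi_feats):                        # roi_feats: (B, C, H, W)
        b, c, h, w = roi_feats.shape
        memory = roi_feats.flatten(2).transpose(1, 2)    # (B, H*W, C)
        q = self.queries.weight.unsqueeze(0).expand(b, -1, -1)
        mask_embed = self.decoder(q, memory)             # (B, 3, C)
        pixel_embed = self.pixel_proj(roi_feats)         # (B, C, H, W)
        # Dot product between mask embeddings and per-pixel embeddings -> mask logits.
        return torch.einsum("bqc,bchw->bqhw", mask_embed, pixel_embed)

logits = QueryMaskHead()(torch.randn(2, 256, 14, 14))    # (2, 3, 14, 14)
```

The paper additionally derives an invisible mask from the amodal and visible embeddings, which this sketch omits.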
2211.06726
|
Yuda Bi
|
Yuda Bi, Anees Abrol, Zening Fu, Vince Calhoun
|
MultiCrossViT: Multimodal Vision Transformer for Schizophrenia
Prediction using Structural MRI and Functional Network Connectivity Data
|
I submitted the wrong paper
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Vision Transformer (ViT) is a pioneering deep learning framework that can
address real-world computer vision issues, such as image classification and
object recognition. Importantly, ViTs have been shown to outperform traditional deep
learning models such as convolutional neural networks (CNNs). Relatively
recently, a number of ViT variants have been adapted to the field of
medical imaging, resolving a variety of critical classification and
segmentation challenges, especially for brain imaging data. In this
work, we provide a novel multimodal deep learning pipeline, MultiCrossViT,
which is capable of analyzing both structural MRI (sMRI) and static functional
network connectivity (sFNC) data for the prediction of schizophrenia.
On a dataset with a limited number of training subjects, our model can achieve an AUC
of 0.832. Finally, we visualize multiple brain regions and covariance patterns
most relevant to schizophrenia based on the resulting ViT attention maps by
extracting features from transformer encoders.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 19:07:25 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Mar 2023 23:21:41 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Bi",
"Yuda",
""
],
[
"Abrol",
"Anees",
""
],
[
"Fu",
"Zening",
""
],
[
"Calhoun",
"Vince",
""
]
] |
new_dataset
| 0.992789 |
2212.03951
|
Alexander K\"ubler
|
Alexander M. K\"ubler, Sebasti\'an Urdaneta Rivera, Frances B.
Raphael, Julian F\"orster, Roland Siegwart, and Allison M. Okamura
|
A Multi-Segment, Soft Growing Robot with Selective Steering
|
Accepted for presentation at the 6th IEEE-RAS International
Conference on Soft Robotics (RoboSoft 2023). 7 pages, 12 figures. For
associated video, see ancillary files
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Everting, soft growing vine robots benefit from reduced friction with their
environment, which allows them to navigate challenging terrain. Vine robots can
use air pouches attached to their sides for lateral steering. However, when all
pouches are serially connected, the whole robot can only assume a single constant
curvature in free space. It must contact the environment to navigate through
obstacles along paths with multiple turns. This work presents a multi-segment
vine robot that can navigate complex paths without interacting with its
environment. This is achieved by a new steering method that selectively
actuates each single pouch at the tip, providing high degrees of freedom with
few control inputs. A small magnetic valve connects each pouch to a pressure
supply line. A motorized tip mount uses an interlocking mechanism and motorized
rollers on the outer material of the vine robot. As each valve passes through
the tip mount, a permanent magnet inside the tip mount opens the valve so the
corresponding pouch is connected to the pressure supply line at the same
moment. Novel cylindrical pneumatic artificial muscles (cPAMs) are integrated
into the vine robot and inflate to a cylindrical shape for improved bending
characteristics compared to other state-of-the-art vine robots. The motorized
tip mount controls a continuous eversion speed and enables controlled
retraction. A final prototype was able to repeatably grow into different shapes
and hold these shapes. We predict the path using a model that assumes a
piecewise constant curvature along the outside of the multi-segment vine robot.
The proposed multi-segment steering method can be extended to other soft
continuum robot designs.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 20:50:38 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Mar 2023 15:14:41 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Kübler",
"Alexander M.",
""
],
[
"Rivera",
"Sebastián Urdaneta",
""
],
[
"Raphael",
"Frances B.",
""
],
[
"Förster",
"Julian",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Okamura",
"Allison M.",
""
]
] |
new_dataset
| 0.986065 |
2302.02744
|
Fei Wu
|
Fei Wu, Nora Gourmelon, Thorsten Seehaus, Jianlin Zhang, Matthias
Braun, Andreas Maier, and Vincent Christlein
|
AMD-HookNet for Glacier Front Segmentation
| null | null |
10.1109/TGRS.2023.3245419
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge on changes in glacier calving front positions is important for
assessing the status of glaciers. Remote sensing imagery provides the ideal
database for monitoring calving front positions; however, it is not feasible to
perform this task manually for all calving glaciers globally due to
time constraints. Deep learning-based methods have shown great potential for
glacier calving front delineation from optical and radar satellite imagery. The
calving front is represented as a single thin line between the ocean and the
glacier, which makes the task vulnerable to inaccurate predictions. The limited
availability of annotated glacier imagery leads to a lack of data diversity
(not all possible combinations of different weather conditions, terminus
shapes, sensors, etc. are present in the data), which exacerbates the
difficulty of accurate segmentation. In this paper, we propose
Attention-Multi-hooking-Deep-supervision HookNet (AMD-HookNet), a novel glacier
calving front segmentation framework for synthetic aperture radar (SAR) images.
The proposed method aims to enhance the feature representation capability
through multiple information interactions between low-resolution and
high-resolution inputs based on a two-branch U-Net. The attention mechanism,
integrated into the two-branch U-Net, enables interaction between the
corresponding coarse and fine-grained feature maps. This allows the network to
automatically adjust feature relationships, resulting in accurate
pixel-classification predictions. Extensive experiments and comparisons on the
challenging glacier segmentation benchmark dataset CaFFe show that our
AMD-HookNet achieves a mean distance error of 438 m to the ground truth,
outperforming the current state of the art by 42%, which validates its
effectiveness.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 12:39:40 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Wu",
"Fei",
""
],
[
"Gourmelon",
"Nora",
""
],
[
"Seehaus",
"Thorsten",
""
],
[
"Zhang",
"Jianlin",
""
],
[
"Braun",
"Matthias",
""
],
[
"Maier",
"Andreas",
""
],
[
"Christlein",
"Vincent",
""
]
] |
new_dataset
| 0.998192 |
2302.05907
|
Zixiong Su
|
Zixiong Su, Shitao Fang, Jun Rekimoto
|
LipLearner: Customizable Silent Speech Interactions on Mobile Devices
|
Conditionally accepted to the ACM CHI Conference on Human Factors in
Computing Systems 2023 (CHI '23)
| null |
10.1145/3544548.3581465
| null |
cs.HC cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Silent speech interfaces are a promising technology that enables private
communication in natural language. However, previous approaches only support a
small and inflexible vocabulary, which leads to limited expressiveness. We
leverage contrastive learning to learn efficient lipreading representations,
enabling few-shot command customization with minimal user effort. Our model
exhibits high robustness to different lighting, posture, and gesture conditions
on an in-the-wild dataset. For 25-command classification, an F1-score of 0.8947
is achievable using only one shot, and its performance can be further boosted
by adaptively learning from more data. This generalizability allowed us to
develop a mobile silent speech interface empowered with on-device fine-tuning
and visual keyword spotting. A user study demonstrated that with LipLearner,
users could define their own commands with high reliability guaranteed by an
online incremental learning scheme. Subjective feedback indicated that our
system provides essential functionalities for customizable silent speech
interactions with high usability and learnability.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 13:10:57 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2023 07:56:45 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Mar 2023 07:58:44 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Su",
"Zixiong",
""
],
[
"Fang",
"Shitao",
""
],
[
"Rekimoto",
"Jun",
""
]
] |
new_dataset
| 0.999647 |
2302.06961
|
Sifan Song
|
Sifan Song, Jinfeng Wang, Zilong Wang, Shaopeng Wang, Jionglong Su,
Xiaowei Ding, Kang Dang
|
Bilateral-Fuser: A Novel Multi-cue Fusion Architecture with
Anatomical-aware Tokens for Fovea Localization
|
This paper is prepared for IEEE Transactions on Medical Imaging
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate localization of the fovea is a crucial initial step in analyzing
retinal diseases since it helps prevent irreversible vision loss. Although
current deep learning-based methods achieve better performance than traditional
methods, they still face challenges such as inadequate utilization of
anatomical landmarks, sensitivity to diseased retinal images, and various image
conditions. In this paper, we propose a novel transformer-based architecture
(Bilateral-Fuser) for multi-cue fusion. The Bilateral-Fuser explicitly
incorporates long-range connections and global features using retina and vessel
distributions to achieve robust fovea localization. We introduce a spatial
attention mechanism in the dual-stream encoder to extract and fuse self-learned
anatomical information. This design focuses more on features distributed along
blood vessels and significantly reduces computational costs by reducing token
numbers. Our comprehensive experiments demonstrate that the proposed
architecture achieves state-of-the-art performance on two public datasets and
one large-scale private dataset. Moreover, we show that the Bilateral-Fuser is
more robust on both normal and diseased retina images and has better
generalization capacity in cross-dataset experiments.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 10:40:20 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 09:01:36 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Song",
"Sifan",
""
],
[
"Wang",
"Jinfeng",
""
],
[
"Wang",
"Zilong",
""
],
[
"Wang",
"Shaopeng",
""
],
[
"Su",
"Jionglong",
""
],
[
"Ding",
"Xiaowei",
""
],
[
"Dang",
"Kang",
""
]
] |
new_dataset
| 0.990158 |
2302.09908
|
Lingwei Meng
|
Lingwei Meng, Jiawen Kang, Mingyu Cui, Yuejiao Wang, Xixin Wu, Helen
Meng
|
A Sidecar Separator Can Convert a Single-Talker Speech Recognition
System to a Multi-Talker One
|
Accepted by IEEE International Conference on Acoustics, Speech, and
Signal Processing (ICASSP), 2023
| null | null | null |
cs.SD cs.AI cs.CL cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Although automatic speech recognition (ASR) can perform well in common
non-overlapping environments, sustaining performance in multi-talker
overlapping speech recognition remains challenging. Recent research revealed
that an ASR model's encoder captures different levels of information at
different layers -- the lower layers tend to have more acoustic information,
and the upper layers more linguistic. This inspires us to develop a Sidecar
separator to empower a well-trained ASR model for multi-talker scenarios by
separating the mixed speech embedding between two suitable layers. We
experimented with a wav2vec 2.0-based ASR model with a Sidecar mounted. By
freezing the parameters of the original model and training only the Sidecar
(8.7 M, 8.4% of all parameters), the proposed approach outperforms the previous
state-of-the-art by a large margin for the 2-speaker mixed LibriMix dataset,
reaching a word error rate (WER) of 10.36%; and obtains comparable results
(7.56%) on the LibriSpeechMix dataset with limited training data.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 11:09:37 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Mar 2023 23:10:24 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Meng",
"Lingwei",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Cui",
"Mingyu",
""
],
[
"Wang",
"Yuejiao",
""
],
[
"Wu",
"Xixin",
""
],
[
"Meng",
"Helen",
""
]
] |
new_dataset
| 0.995097 |
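The Sidecar abstract above hinges on freezing a pretrained ASR model and training only a small separator inserted between two encoder layers. Below is a hedged PyTorch sketch of that freezing-and-insertion pattern; the toy encoder, layer split, and separator shape are assumptions, not the wav2vec 2.0-based setup from the paper.

```python
# Illustrative PyTorch sketch: freeze a pretrained encoder and train only a small
# "sidecar" separator inserted between two of its layers (shapes are assumptions).
import torch
import torch.nn as nn

encoder_lower = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 256))
encoder_upper = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 29))

# Freeze every pretrained parameter.
for p in list(encoder_lower.parameters()) + list(encoder_upper.parameters()):
    p.requires_grad = False

# Sidecar: a small trainable separator producing one embedding per speaker.
num_speakers = 2
sidecar = nn.Sequential(nn.Linear(256, 256), nn.ReLU(),
                        nn.Linear(256, 256 * num_speakers))

def forward(mixed_features):                      # (B, T, 80) mixed-speech features
    h = encoder_lower(mixed_features)             # (B, T, 256) mixed embedding
    sep = sidecar(h).view(*h.shape[:2], num_speakers, 256)
    # Run the frozen upper layers once per separated stream.
    return [encoder_upper(sep[:, :, s]) for s in range(num_speakers)]

optim = torch.optim.Adam(sidecar.parameters(), lr=1e-4)  # only the sidecar trains
outs = forward(torch.randn(4, 100, 80))
```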
2302.13482
|
Kaustuv Mukherji
|
Dyuman Aditya, Kaustuv Mukherji, Srikar Balasubramanian, Abhiraj
Chaudhary, Paulo Shakarian
|
PyReason: Software for Open World Temporal Logic
|
Equal contributions from first two authors. Accepted at 2023 AAAI
Spring Symposium on Challenges Requiring the Combination of Machine Learning
and Knowledge Engineering (AAAI: MAKE)
| null | null | null |
cs.LO cs.AI cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
The growing popularity of neuro-symbolic reasoning has led to the adoption of
various forms of differentiable (i.e., fuzzy) first order logic. We introduce
PyReason, a software framework based on generalized annotated logic that both
captures the current cohort of differentiable logics and temporal extensions to
support inference over finite periods of time with capabilities for open world
reasoning. Further, PyReason is implemented to directly support reasoning over
graphical structures (e.g., knowledge graphs, social networks, biological
networks, etc.), produces fully explainable traces of inference, and includes
various practical features such as type checking and a memory-efficient
implementation. This paper reviews various extensions of generalized annotated
logic integrated into our implementation, our modern, efficient Python-based
implementation that conducts exact yet scalable deductive inference, and a
suite of experiments. PyReason is available at: github.com/lab-v2/pyreason.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 02:40:05 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 08:15:52 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Mar 2023 20:10:22 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Aditya",
"Dyuman",
""
],
[
"Mukherji",
"Kaustuv",
""
],
[
"Balasubramanian",
"Srikar",
""
],
[
"Chaudhary",
"Abhiraj",
""
],
[
"Shakarian",
"Paulo",
""
]
] |
new_dataset
| 0.999475 |
2302.14576
|
Giorgos Armeniakos
|
Giorgos Armeniakos, Georgios Zervakis, Dimitrios Soudris, Mehdi B.
Tahoori, J\"org Henkel
|
Co-Design of Approximate Multilayer Perceptron for Ultra-Resource
Constrained Printed Circuits
|
Accepted for publication by IEEE Transactions on Computers, February
2023
| null |
10.1109/TC.2023.3251863
| null |
cs.LG cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Printed Electronics (PE) exhibits on-demand, extremely low-cost hardware due
to its additive manufacturing process, enabling machine learning (ML)
applications for domains that feature ultra-low cost, conformity, and
non-toxicity requirements that silicon-based systems cannot deliver.
Nevertheless, large feature sizes in PE prohibit the realization of complex
printed ML circuits. In this work, we present, for the first time, an automated
printed-aware software/hardware co-design framework that exploits approximate
computing principles to enable ultra-resource constrained printed multilayer
perceptrons (MLPs). Our evaluation demonstrates that, compared to the
state-of-the-art baseline, our circuits feature on average 6x (5.7x) lower area
(power) and less than 1% accuracy loss.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 13:55:19 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Armeniakos",
"Giorgos",
""
],
[
"Zervakis",
"Georgios",
""
],
[
"Soudris",
"Dimitrios",
""
],
[
"Tahoori",
"Mehdi B.",
""
],
[
"Henkel",
"Jörg",
""
]
] |
new_dataset
| 0.998287 |
2303.00408
|
Amir Hossein Karimi
|
Masoud Akbari, Amir Hossein Karimi, Tayyebeh Saeedi, Zeinab Saeidi,
Kiana Ghezelbash, Fatemeh Shamsezat, Mohammad Akbari, Ali Mohades
|
A Persian Benchmark for Joint Intent Detection and Slot Filling
|
8 pages, 5 figures
| null | null |
2303.00408
|
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Natural Language Understanding (NLU) is important in today's technology as it
enables machines to comprehend and process human language, leading to improved
human-computer interactions and advancements in fields such as virtual
assistants, chatbots, and language-based AI systems. This paper highlights the
significance of advancing the field of NLU for low-resource languages. With
intent detection and slot filling being crucial tasks in NLU, the widely used
datasets ATIS and SNIPS have been utilized in the past. However, these datasets
only cater to the English language and do not support other languages. In this
work, we aim to address this gap by creating a Persian benchmark for joint
intent detection and slot filling based on the ATIS dataset. To evaluate the
effectiveness of our benchmark, we employ state-of-the-art methods for intent
detection and slot filling.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 10:57:21 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Akbari",
"Masoud",
""
],
[
"Karimi",
"Amir Hossein",
""
],
[
"Saeedi",
"Tayyebeh",
""
],
[
"Saeidi",
"Zeinab",
""
],
[
"Ghezelbash",
"Kiana",
""
],
[
"Shamsezat",
"Fatemeh",
""
],
[
"Akbari",
"Mohammad",
""
],
[
"Mohades",
"Ali",
""
]
] |
new_dataset
| 0.999615 |
2303.01818
|
Shir Iluz
|
Shir Iluz, Yael Vinker, Amir Hertz, Daniel Berio, Daniel Cohen-Or,
Ariel Shamir
|
Word-As-Image for Semantic Typography
| null | null | null | null |
cs.CV cs.AI cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A word-as-image is a semantic typography technique where a word illustration
presents a visualization of the meaning of the word, while also preserving its
readability. We present a method to create word-as-image illustrations
automatically. This task is highly challenging as it requires semantic
understanding of the word and a creative idea of where and how to depict these
semantics in a visually pleasing and legible manner. We rely on the remarkable
ability of recent large pretrained language-vision models to distill textual
concepts visually. We target simple, concise, black-and-white designs that
convey the semantics clearly. We deliberately do not change the color or
texture of the letters and do not use embellishments. Our method optimizes the
outline of each letter to convey the desired concept, guided by a pretrained
Stable Diffusion model. We incorporate additional loss terms to ensure the
legibility of the text and the preservation of the style of the font. We show
high quality and engaging results on numerous examples and compare to
alternative techniques.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 09:59:25 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 16:34:15 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Iluz",
"Shir",
""
],
[
"Vinker",
"Yael",
""
],
[
"Hertz",
"Amir",
""
],
[
"Berio",
"Daniel",
""
],
[
"Cohen-Or",
"Daniel",
""
],
[
"Shamir",
"Ariel",
""
]
] |
new_dataset
| 0.988866 |
2303.02156
|
Jos\'e Antonio Fern\'andez-Fern\'andez
|
Jos\'e Antonio Fern\'andez-Fern\'andez and Fabian L\"oschner and Lukas
Westhofen and Andreas Longva and Jan Bender
|
SymX: Energy-based Simulation from Symbolic Expressions
| null | null | null | null |
cs.CE cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Optimization time integrators have proven to be effective at solving complex
multi-physics problems, such as deformation of solids with non-linear material
models, contact with friction, strain limiting, etc. For challenging problems
with high accuracy requirements, Newton-type optimizers are often used. This
necessitates first- and second-order derivatives of the global non-linear
objective function. Manually differentiating, implementing and optimizing the
resulting code is extremely time-consuming, error-prone, and precludes quick
changes to the model.
We present SymX, a framework based on symbolic expressions that computes the
first and second derivatives by symbolic differentiation, generates efficient
vectorized source code, compiles it on-the-fly, and performs the global
assembly of element contributions in parallel. The user only has to provide the
symbolic expression of an energy function for a single element in the
discretization and our system will determine the assembled derivatives for the
whole model. SymX is designed to be an integral part of a simulation system and
can easily be integrated into existing ones. We demonstrate the versatility of
our framework in various complex simulations showing different non-linear
materials, higher-order finite elements, rigid body systems, adaptive cloth,
frictional contact, and coupling multiple interacting physical systems.
Moreover, we compare our method with alternative approaches and show that SymX
is significantly faster than a current state-of-the-art framework (up to two
orders of magnitude for a higher-order FEM simulation).
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 13:53:34 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Fernández-Fernández",
"José Antonio",
""
],
[
"Löschner",
"Fabian",
""
],
[
"Westhofen",
"Lukas",
""
],
[
"Longva",
"Andreas",
""
],
[
"Bender",
"Jan",
""
]
] |
new_dataset
| 0.951217 |
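The SymX abstract above describes deriving first and second derivatives of a per-element energy symbolically and generating efficient numerical code. A minimal SymPy sketch of that workflow follows; SymPy and the toy spring energy are stand-ins for illustration, since the actual framework emits and compiles its own vectorized source.

```python
# Minimal SymPy sketch of the workflow described above: write a per-element
# energy symbolically, derive its gradient and Hessian, and obtain callable
# code. SymPy stands in for SymX's own expression system and code generator.
import sympy as sp

x0, x1, x2 = xs = sp.symbols("x0 x1 x2")     # element degrees of freedom
k, rest = sp.symbols("k rest")               # stiffness and rest length

# Example per-element energy: a quadratic spring on the element's stretch.
stretch = sp.sqrt((x1 - x0) ** 2 + x2 ** 2) - rest
energy = sp.Rational(1, 2) * k * stretch ** 2

grad = sp.Matrix([energy.diff(v) for v in xs])   # symbolic gradient (3x1)
hess = sp.hessian(energy, xs)                    # symbolic Hessian (3x3)

# Compile to numerical callables; SymX instead generates vectorized source code
# and compiles it on the fly, but lambdify is the lightweight analogue here.
grad_fn = sp.lambdify(xs + (k, rest), grad, "numpy")
hess_fn = sp.lambdify(xs + (k, rest), hess, "numpy")

print(grad_fn(1.0, 2.0, 0.5, 10.0, 1.0))
print(hess_fn(1.0, 2.0, 0.5, 10.0, 1.0))
```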
2303.02182
|
Justin Merrick
|
Justin D. Merrick, Benjamin K. Heiner, Cameron Long, Brian Stieber,
Steve Fierro, Vardaan Gangal, Madison Blake, Joshua Blackburn
|
CoRL: Environment Creation and Management Focused on System Integration
|
for code, see https://github.com/act3-ace/CoRL
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Existing reinforcement learning environment libraries use monolithic
environment classes, provide shallow methods for altering agent observation and
action spaces, and/or are tied to a specific simulation environment. The Core
Reinforcement Learning library (CoRL) is a modular, composable, and
hyper-configurable environment creation tool. It allows minute control over
agent observations, rewards, and done conditions through the use of
easy-to-read configuration files, pydantic validators, and a functor design
pattern. Using integration pathways allows agents to be quickly implemented in
new simulation environments, encourages rapid exploration, and enables
transition of knowledge from low-fidelity to high-fidelity simulations.
Natively multi-agent design and integration with Ray/RLLib (Liang et al., 2018)
at release allow for easy scalability of agent complexity and computing power.
The code is publicly released and available at
https://github.com/act3-ace/CoRL.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 19:01:53 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Merrick",
"Justin D.",
""
],
[
"Heiner",
"Benjamin K.",
""
],
[
"Long",
"Cameron",
""
],
[
"Stieber",
"Brian",
""
],
[
"Fierro",
"Steve",
""
],
[
"Gangal",
"Vardaan",
""
],
[
"Blake",
"Madison",
""
],
[
"Blackburn",
"Joshua",
""
]
] |
new_dataset
| 0.972778 |
2303.02229
|
Omer Ozturkoglu
|
\"Omer \"Ozt\"urko\u{g}lu and G\"okberk \"Ozsakall{\i}
|
A Generic Workforce Scheduling and Routing Problem with the Vehicle
Sharing and Drop-off and Pick-up Policies
| null | null | null | null |
cs.CE math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a new generic problem to the literature of Workforce
Scheduling and Routing Problem. In this problem, multiple workers are assigned
to a shared vehicle based on their qualifications and customer demands, and
then the route is formed so that a traveler may be dropped off and picked up
later to minimize total flow time. We introduced a mixed-integer linear
programming model for the problem. To solve the problem, an Adaptive Large
Neighborhood Search (ALNS) algorithm was developed with problem-specific
heuristics and a decomposition-based constructive upper bound algorithm (UBA).
To analyze the impact of the newly introduced policies, instance characteristics
such as service area, difficulty of service, distribution of care, and number of
demand nodes are considered. The empirical analyses showed that the ALNS
algorithm presents solutions with up to 35% less total flow time than the UBA.
The implementation of the proposed drop-off and pick-up (DP) and vehicle
sharing policies yields up to a 24% decrease in total flow time or provides
savings on the total cost of service, especially when the demand nodes are
located in small areas such as urban areas.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 21:43:32 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Öztürkoğlu",
"Ömer",
""
],
[
"Özsakallı",
"Gökberk",
""
]
] |
new_dataset
| 0.99089 |
2303.02272
|
Lingjie Kong
|
Alex Fu, Lingjie Kong
|
Real-time SLAM Pipeline in Dynamics Environment
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspired by the recent success of dense-data approaches such as ORB-SLAM and
RGB-D SLAM, we propose an improved pipeline for real-time SLAM in dynamic
environments. Unlike previous SLAM systems, which can only handle static
scenes, we present a solution that uses RGB-D SLAM together with YOLO
real-time object detection to segment and remove dynamic objects and then
reconstruct the static scene in 3D. We gathered a dataset which allows us to jointly
consider semantics, geometry, and physics and thus enables us to reconstruct
the static scene while filtering out all dynamic objects.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 00:08:52 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Fu",
"Alex",
""
],
[
"Kong",
"Lingjie",
""
]
] |
new_dataset
| 0.999395 |
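The SLAM pipeline above removes detector-flagged dynamic objects before reconstructing the static scene. A toy NumPy sketch of that masking step is shown below; the box format, class list, and depth values are assumptions for illustration, not the authors' code.

```python
# Toy NumPy sketch of the masking step described above: pixels inside detected
# dynamic-object boxes are removed from the depth image before it is used to
# build the static map. Box format (x1, y1, x2, y2) and classes are assumptions.
import numpy as np

DYNAMIC = {"person", "car", "dog"}

def mask_dynamic(depth, detections):
    """Zero out depth inside boxes whose class is considered dynamic."""
    filtered = depth.copy()
    for label, (x1, y1, x2, y2) in detections:
        if label in DYNAMIC:
            filtered[y1:y2, x1:x2] = 0.0          # zero depth = ignored by mapping
    return filtered

depth = np.full((480, 640), 3.0, dtype=np.float32)
detections = [("person", (100, 120, 220, 460)), ("chair", (400, 300, 520, 470))]
static_depth = mask_dynamic(depth, detections)
print(float(static_depth[200, 150]), float(static_depth[350, 450]))  # 0.0, 3.0
```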
2303.02285
|
Dimuthu Dharshana Kodippili Arachchige
|
Dimuthu D. K. Arachchige, Dulanjana M. Perera, Sanjaya Mallikarachchi,
Iyad Kanj, Yue Chen, and Isuru S. Godage
|
Wheelless Soft Robotic Snake Locomotion: Study on Sidewinding and
Helical Rolling Gaits
|
This paper has been accepted to 2023 IEEE-RAS International
Conference on Soft Robotics (RoboSoft)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Soft robotic snakes (SRSs) have a unique combination of continuous and
compliant properties that allow them to imitate the complex movements of
biological snakes. Despite the previous attempts to develop SRSs, many have
been limited to planar movements or use wheels to achieve locomotion, which
restricts their ability to imitate the full range of biological snake
movements. We propose a new design for the SRSs that is wheelless and powered
by pneumatics, relying solely on spatial bending to achieve its movements. We
derive a kinematic model of the proposed SRS and utilize it to achieve two
snake locomotion trajectories, namely sidewinding and helical rolling. These
movements are experimentally evaluated under different gait parameters on our
SRS prototype. The results demonstrate that the SRS can successfully mimic the
proposed spatial locomotion trajectories. This is a significant improvement
over the previous designs, which were either limited to planar movements or
relied on wheels for locomotion. The ability of the SRS to effectively mimic
the complex movements of biological snakes opens up new possibilities for its
use in various applications.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 01:07:59 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Arachchige",
"Dimuthu D. K.",
""
],
[
"Perera",
"Dulanjana M.",
""
],
[
"Mallikarachchi",
"Sanjaya",
""
],
[
"Kanj",
"Iyad",
""
],
[
"Chen",
"Yue",
""
],
[
"Godage",
"Isuru S.",
""
]
] |
new_dataset
| 0.981095 |
2303.02291
|
Dimuthu Dharshana Kodippili Arachchige
|
Dimuthu D. K. Arachchige, Dulanjana M. Perera, Sanjaya Mallikarachchi,
Iyad Kanj, Yue Chen, Hunter B. Gilbert, and Isuru S. Godage
|
Dynamic Modeling and Validation of Soft Robotic Snake Locomotion
|
This paper has been accepted to 2023 IEEE International Conference on
Control, Automation and Robotics (ICCAR)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Soft robotic snakes made of compliant materials can continuously deform their
bodies and, therefore, mimic the biological snakes' flexible and agile
locomotion gaits better than their rigid-bodied counterparts. Without wheel
support, to date, soft robotic snakes are limited to emulating planar
locomotion gaits, which are derived via kinematic modeling and tested on
robotic prototypes. Given that the snake locomotion results from the reaction
forces due to the distributed contact between their skin and the ground, it is
essential to investigate the locomotion gaits through efficient dynamic models
capable of accommodating distributed contact forces. We present a complete
spatial dynamic model that utilizes a floating-base kinematic model with
distributed contact dynamics for a pneumatically powered soft robotic snake. We
numerically evaluate the feasibility of the planar and spatial rolling gaits
utilizing the proposed model and experimentally validate the corresponding
locomotion gait trajectories on a soft robotic snake prototype. We
qualitatively and quantitatively compare the numerical and experimental results
which confirm the validity of the proposed dynamic model.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 01:56:39 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Arachchige",
"Dimuthu D. K.",
""
],
[
"Perera",
"Dulanjana M.",
""
],
[
"Mallikarachchi",
"Sanjaya",
""
],
[
"Kanj",
"Iyad",
""
],
[
"Chen",
"Yue",
""
],
[
"Gilbert",
"Hunter B.",
""
],
[
"Godage",
"Isuru S.",
""
]
] |
new_dataset
| 0.976536 |
2303.02323
|
Yuxiang Zhang
|
Yuxiang Zhang, Nicholas Bolten, Sachin Mehta, Anat Caspi
|
APE: An Open and Shared Annotated Dataset for Learning Urban Pedestrian
Path Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inferring the full transportation network, including sidewalks and cycleways,
is crucial for many automated systems, including autonomous driving,
multi-modal navigation, trip planning, mobility simulations, and freight
management. Many transportation decisions can be informed based on an accurate
pedestrian network, its interactions, and connectivity with the road networks
of other modes of travel. A connected pedestrian path network is vital to
transportation activities, as sidewalks and crossings connect pedestrians to
other modes of transportation. However, information about these paths' location
and connectivity is often missing or inaccurate in city planning systems and
wayfinding applications, causing severe information gaps and errors for
planners and pedestrians. This work begins to address this problem at scale by
introducing a novel dataset of aerial satellite imagery, street map imagery,
and rasterized annotations of sidewalks, crossings, and corner bulbs in urban
cities. The dataset spans $2,700 km^2$ land area, covering select regions from
$6$ different cities. It can be used for various learning tasks related to
segmenting and understanding pedestrian environments. We also present an
end-to-end process for inferring a connected pedestrian path network map using
street network information and our proposed dataset. The process features the
use of a multi-input segmentation network trained on our dataset to predict
important classes in the pedestrian environment and then generate a connected
pedestrian path network. Our results demonstrate that the dataset is
sufficiently large to train common segmentation models yielding accurate,
robust pedestrian path networks.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 05:08:36 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Zhang",
"Yuxiang",
""
],
[
"Bolten",
"Nicholas",
""
],
[
"Mehta",
"Sachin",
""
],
[
"Caspi",
"Anat",
""
]
] |
new_dataset
| 0.999701 |
2303.02367
|
Petr Svarny
|
Jakub Rozlivek, Petr Svarny, Matej Hoffmann
|
Perirobot space representation for HRI: measuring and designing
collaborative workspace coverage by diverse sensors
|
8 pages, 12 figures, submitted for review
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Two regimes permitting safe physical human-robot interaction, speed and
separation monitoring and safety-rated monitored stop, depend on reliable
perception of the space surrounding the robot. This can be accomplished by
visual sensors (like cameras, RGB-D cameras, LIDARs), proximity sensors, or
dedicated devices used in industrial settings like pads that are activated by
the presence of the operator. The deployment of a particular solution is often
ad hoc and no unified representation of the interaction space or its coverage
by the different sensors exists. In this work, we make first steps in this
direction by defining the spaces to be monitored, representing all sensor data
as information about occupancy and using occupancy-based metrics to calculate
how a particular sensor covers the workspace. We demonstrate our approach in
two (multi-)sensor-placement experiments in three static scenes and one
experiment in a dynamic scene. The occupancy representation allows us to compare
the effectiveness of various sensor setups. Therefore, this approach can serve
as a prototyping tool to establish the sensor setup that provides the most
efficient coverage for the given metrics and sensor representations.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 10:03:17 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Rozlivek",
"Jakub",
""
],
[
"Svarny",
"Petr",
""
],
[
"Hoffmann",
"Matej",
""
]
] |
new_dataset
| 0.951565 |
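The perirobot-space abstract above scores sensor setups by how much of the monitored workspace they cover when all sensor data is expressed as occupancy. A small NumPy sketch of such an occupancy-based coverage metric follows; the grid resolution and box-shaped fields of view are simplifying assumptions.

```python
# Toy NumPy sketch of occupancy-based coverage: the workspace and each sensor's
# field of view are boolean voxel grids, and a setup is scored by the fraction
# of workspace voxels observed. Grid size and sensor shapes are assumptions.
import numpy as np

grid = np.zeros((50, 50, 20), dtype=bool)      # 5 cm voxels over a 2.5 x 2.5 x 1 m space
workspace = np.ones_like(grid)                 # monitor the whole volume here

def sensor_fov(grid_shape, x0, x1, y0, y1, z0, z1):
    """Axis-aligned box as a crude stand-in for a sensor's observed region."""
    fov = np.zeros(grid_shape, dtype=bool)
    fov[x0:x1, y0:y1, z0:z1] = True
    return fov

def coverage(workspace, fovs):
    observed = np.zeros_like(workspace)
    for fov in fovs:
        observed |= fov
    return (observed & workspace).sum() / workspace.sum()

setup_a = [sensor_fov(grid.shape, 0, 50, 0, 25, 0, 20)]
setup_b = setup_a + [sensor_fov(grid.shape, 0, 50, 25, 50, 5, 20)]
print(f"setup A covers {coverage(workspace, setup_a):.0%} of the workspace")
print(f"setup B covers {coverage(workspace, setup_b):.0%} of the workspace")
```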
2303.02405
|
Tian Bian
|
Tian Bian, Yuli Jiang, Jia Li, Tingyang Xu, Yu Rong, Yi Su, Timothy
Kwok, Helen Meng, Hong Cheng
|
Decision Support System for Chronic Diseases Based on Drug-Drug
Interactions
| null |
ICDE2023
| null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Many patients with chronic diseases resort to multiple medications to relieve
various symptoms, which raises concerns about the safety of multiple medication
use, as severe drug-drug antagonism can lead to serious adverse effects or even
death. This paper presents a Decision Support System, called DSSDDI, based on
drug-drug interactions to support doctors' prescribing decisions. DSSDDI
contains three modules, Drug-Drug Interaction (DDI) module, Medical Decision
(MD) module and Medical Support (MS) module. The DDI module learns safer and
more effective drug representations from the drug-drug interactions. To capture
the potential causal relationship between DDI and medication use, the MD module
considers the representations of patients and drugs as context, DDI and
patients' similarity as treatment, and medication use as outcome to construct
counterfactual links for the representation learning. Furthermore, the MS
module provides drug candidates to doctors with explanations. Experiments on
the chronic data collected from the Hong Kong Chronic Disease Study Project and
a public diagnostic dataset, MIMIC-III, demonstrate that DSSDDI can be a reliable
reference for doctors in terms of safety and efficiency of clinical diagnosis,
with significant improvements compared to baseline methods.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 12:44:38 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Bian",
"Tian",
""
],
[
"Jiang",
"Yuli",
""
],
[
"Li",
"Jia",
""
],
[
"Xu",
"Tingyang",
""
],
[
"Rong",
"Yu",
""
],
[
"Su",
"Yi",
""
],
[
"Kwok",
"Timothy",
""
],
[
"Meng",
"Helen",
""
],
[
"Cheng",
"Hong",
""
]
] |
new_dataset
| 0.969676 |
2303.02430
|
Yinchuan Li
|
Yinchuan Li, Shuang Luo, Haozhi Wang and Jianye Hao
|
CFlowNets: Continuous Control with Generative Flow Networks
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative flow networks (GFlowNets), as an emerging technique, can be used
as an alternative to reinforcement learning for exploratory control tasks.
GFlowNet aims to generate a distribution proportional to the rewards over
terminating states, and to sample different candidates in an active learning
fashion. GFlowNets need to form a DAG and compute the flow matching loss by
traversing the inflows and outflows of each node in the trajectory. No
experiments have yet concluded that GFlowNets can be used to handle continuous
tasks. In this paper, we propose generative continuous flow networks
(CFlowNets) that can be applied to continuous control tasks. First, we present
the theoretical formulation of CFlowNets. Then, a training framework for
CFlowNets is proposed, including the action selection process, the flow
approximation algorithm, and the continuous flow matching loss function.
Afterward, we theoretically prove the error bound of the flow approximation.
The error decreases rapidly as the number of flow samples increases. Finally,
experimental results on continuous control tasks demonstrate the performance
advantages of CFlowNets compared to many reinforcement learning methods,
especially regarding exploration ability.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 14:37:47 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Li",
"Yinchuan",
""
],
[
"Luo",
"Shuang",
""
],
[
"Wang",
"Haozhi",
""
],
[
"Hao",
"Jianye",
""
]
] |
new_dataset
| 0.975281 |
2303.02483
|
Xiao Han
|
Xiao Han, Xiatian Zhu, Licheng Yu, Li Zhang, Yi-Zhe Song, Tao Xiang
|
FAME-ViL: Multi-Tasking Vision-Language Model for Heterogeneous Fashion
Tasks
|
CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In the fashion domain, there exists a variety of vision-and-language (V+L)
tasks, including cross-modal retrieval, text-guided image retrieval,
multi-modal classification, and image captioning. They differ drastically in
each individual input/output format and dataset size. It has been common to
design a task-specific model and fine-tune it independently from a pre-trained
V+L model (e.g., CLIP). This results in parameter inefficiency and inability to
exploit inter-task relatedness. To address such issues, we propose a novel
FAshion-focused Multi-task Efficient learning method for Vision-and-Language
tasks (FAME-ViL) in this work. Compared with existing approaches, FAME-ViL
applies a single model for multiple heterogeneous fashion tasks, therefore
being much more parameter-efficient. It is enabled by two novel components: (1)
a task-versatile architecture with cross-attention adapters and task-specific
adapters integrated into a unified V+L model, and (2) a stable and effective
multi-task training strategy that supports learning from heterogeneous data and
prevents negative transfer. Extensive experiments on four fashion tasks show
that our FAME-ViL can save 61.5% of parameters over alternatives, while
significantly outperforming the conventional independently trained single-task
models. Code is available at https://github.com/BrandonHanx/FAME-ViL.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 19:07:48 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Han",
"Xiao",
""
],
[
"Zhu",
"Xiatian",
""
],
[
"Yu",
"Licheng",
""
],
[
"Zhang",
"Li",
""
],
[
"Song",
"Yi-Zhe",
""
],
[
"Xiang",
"Tao",
""
]
] |
new_dataset
| 0.999793 |
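The FAME-ViL abstract above relies on task-specific adapters integrated into a single V+L backbone. The sketch below shows the standard bottleneck-adapter pattern in PyTorch as one plausible reading; widths, initialization, and placement are assumptions rather than the paper's exact design.

```python
# Generic PyTorch sketch of a bottleneck adapter of the kind the abstract above
# inserts into a shared V+L backbone; widths and placement are assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)           # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        # Residual connection keeps the backbone's features intact.
        return x + self.up(self.act(self.down(x)))

# One adapter per task lets a single backbone serve heterogeneous fashion tasks.
tasks = ["retrieval", "captioning", "classification"]
adapters = nn.ModuleDict({t: Adapter() for t in tasks})
hidden = torch.randn(8, 197, 768)                # e.g. transformer features
out = adapters["captioning"](hidden)
```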
2303.02491
|
A. R. Sricharan
|
Gramoz Goranci and Monika Henzinger and Harald R\"acke and Sushant
Sachdeva and A. R. Sricharan
|
Electrical Flows for Polylogarithmic Competitive Oblivious Routing
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Oblivious routing is a well-studied distributed paradigm that uses static
precomputed routing tables for selecting routing paths within a network.
Existing oblivious routing schemes with polylogarithmic competitive ratio for
general networks are tree-based, in the sense that routing is performed
according to a convex combination of trees. However, this restriction to trees
leads to a construction that has time quadratic in the size of the network and
does not parallelize well.
In this paper we study oblivious routing schemes based on electrical routing.
In particular, we show that general networks with $n$ vertices and $m$ edges
admit a routing scheme that has competitive ratio $O(\log^2 n)$ and consists of
a convex combination of only $O(\sqrt{m})$ electrical routings. This
immediately leads to an improved construction algorithm with time
$\tilde{O}(m^{3/2})$ that can also be implemented in parallel with
$\tilde{O}(\sqrt{m})$ depth.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 20:13:07 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Goranci",
"Gramoz",
""
],
[
"Henzinger",
"Monika",
""
],
[
"Räcke",
"Harald",
""
],
[
"Sachdeva",
"Sushant",
""
],
[
"Sricharan",
"A. R.",
""
]
] |
new_dataset
| 0.973676 |
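The oblivious-routing abstract above builds routings from electrical flows. The NumPy sketch below computes one such electrical routing on a toy graph: a unit demand between two vertices is routed as current in a unit-resistance network via the Laplacian pseudoinverse; the example graph is arbitrary.

```python
# NumPy sketch of a single electrical routing: send one unit of flow from s to t
# as electrical current in a unit-resistance network, computed via the graph
# Laplacian pseudoinverse. The small example graph is arbitrary.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]   # undirected, unit resistance
n = 4
B = np.zeros((len(edges), n))                      # signed edge-vertex incidence
for i, (u, v) in enumerate(edges):
    B[i, u], B[i, v] = 1.0, -1.0

L = B.T @ B                                        # graph Laplacian
s, t = 0, 3
demand = np.zeros(n)
demand[s], demand[t] = 1.0, -1.0

potentials = np.linalg.pinv(L) @ demand            # vertex potentials
flow = B @ potentials                              # current on each edge (u -> v)
print(dict(zip(edges, np.round(flow, 3))))
assert np.allclose(B.T @ flow, demand)             # flow conservation at s and t
```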
2303.02503
|
Ali AlQahtani
|
Ali Abdullah S. AlQahtani, Thamraa Alshayeb
|
Zero-Effort Two-Factor Authentication Using Wi-Fi Radio Wave
Transmission and Machine Learning
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The proliferation of sensitive information being stored online highlights the
pressing need for secure and efficient user authentication methods. To address
this issue, this paper presents a novel zero-effort two-factor authentication
(2FA) approach that combines the unique characteristics of a user's environment
and Machine Learning (ML) to confirm their identity. Our proposed approach
utilizes Wi-Fi radio wave transmission and ML algorithms to analyze beacon
frame characteristics and Received Signal Strength Indicator (RSSI) values from
Wi-Fi access points to determine the user's location. The aim is to provide a
secure and efficient method of authentication without the need for additional
hardware or software. A prototype was developed using Raspberry Pi devices and
experiments were conducted to demonstrate the effectiveness and practicality of
the proposed approach. Results showed that the proposed system can
significantly enhance the security of sensitive information in various
industries such as finance, healthcare, and retail. This study sheds light on
the potential of Wi-Fi radio waves and RSSI values as a means of user
authentication and the power of ML to identify patterns in wireless signals for
security purposes. The proposed system holds great promise in revolutionizing
the field of 2FA and user authentication, offering a new era of secure and
seamless access to sensitive information.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 21:04:10 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"AlQahtani",
"Ali Abdullah S.",
""
],
[
"Alshayeb",
"Thamraa",
""
]
] |
new_dataset
| 0.985691 |
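The 2FA abstract above uses ML over beacon-frame features and RSSI values to decide whether a login comes from the enrolled environment. A compact scikit-learn sketch of that classification step follows, on synthetic RSSI data; the feature layout and model choice are assumptions, not the authors' pipeline.

```python
# Sketch of the ML step described above: decide from Wi-Fi RSSI readings whether
# a login attempt comes from the enrolled environment. Synthetic data stands in
# for real beacon-frame captures; the feature layout is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Each sample: RSSI (dBm) from 4 visible access points.
enrolled = rng.normal([-40, -55, -70, -62], 4.0, size=(300, 4))
elsewhere = rng.normal([-75, -48, -58, -80], 6.0, size=(300, 4))
X = np.vstack([enrolled, elsewhere])
y = np.array([1] * 300 + [0] * 300)               # 1 = enrolled location

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# At login, the second factor passes only if the scan resembles the enrolled spot.
fresh_scan = [[-42, -53, -71, -60]]
print("second factor ok:", bool(clf.predict(fresh_scan)[0]))
```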
2303.02561
|
Yongming Liu
|
Nan Xu, Yongming Liu
|
CAMEL: Curvature-Augmented Manifold Embedding and Learning
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A novel method, named Curvature-Augmented Manifold Embedding and Learning
(CAMEL), is proposed for high dimensional data classification, dimension
reduction, and visualization. CAMEL utilizes a topology metric defined on the
Riemannian manifold, and a unique Riemannian metric for both distance and
curvature to enhance its expressibility. The method also employs a smooth
partition of unity operator on the Riemannian manifold to convert localized
orthogonal projection to global embedding, which captures both the overall
topological structure and local similarity simultaneously. The local orthogonal
vectors provide a physical interpretation of the significant characteristics of
clusters. Therefore, CAMEL not only provides a low-dimensional embedding but
also interprets the physics behind this embedding. CAMEL has been evaluated on
various benchmark datasets and has been shown to outperform state-of-the-art
methods, especially for high-dimensional datasets. The method's distinct
benefits are its high expressibility, interpretability, and scalability. The
paper provides a detailed discussion on Riemannian distance and curvature
metrics, physical interpretability, hyperparameter effect, manifold stability,
and computational efficiency for a holistic understanding of CAMEL. Finally,
the paper presents the limitations and future work of CAMEL along with key
conclusions.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 03:12:50 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Xu",
"Nan",
""
],
[
"Liu",
"Yongming",
""
]
] |
new_dataset
| 0.993884 |
2303.02584
|
Min Wei
|
Min Wei, Xuesong Zhang
|
Super-Resolution Neural Operator
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Super-resolution Neural Operator (SRNO), a deep operator learning
framework that can resolve high-resolution (HR) images at arbitrary scales from
the low-resolution (LR) counterparts. Treating the LR-HR image pairs as
continuous functions approximated with different grid sizes, SRNO learns the
mapping between the corresponding function spaces. From the perspective of
approximation theory, SRNO first embeds the LR input into a higher-dimensional
latent representation space, trying to capture sufficient basis functions, and
then iteratively approximates the implicit image function with a kernel
integral mechanism, followed by a final dimensionality reduction step to
generate the RGB representation at the target coordinates. The key
characteristics distinguishing SRNO from prior continuous SR works are: 1) the
kernel integral in each layer is efficiently implemented via the Galerkin-type
attention, which possesses non-local properties in the spatial domain and
therefore benefits the grid-free continuum; and 2) the multilayer attention
architecture allows for the dynamic latent basis update, which is crucial for
SR problems to "hallucinate" high-frequency information from the LR image.
Experiments show that SRNO outperforms existing continuous SR methods in terms
of both accuracy and running time. Our code is at
https://github.com/2y7c3/Super-Resolution-Neural-Operator
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 06:17:43 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Wei",
"Min",
""
],
[
"Zhang",
"Xuesong",
""
]
] |
new_dataset
| 0.990015 |
2303.02635
|
Kang Chen
|
Kang Chen, Xiangqian Wu
|
VTQA: Visual Text Question Answering via Entity Alignment and
Cross-Media Reasoning
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ideal form of Visual Question Answering requires understanding, grounding
and reasoning in the joint space of vision and language and serves as a proxy
for the AI task of scene understanding. However, most existing VQA benchmarks
are limited to just picking the answer from a pre-defined set of options and
lack attention to text. We present a new challenge with a dataset that contains
23,781 questions based on 10,124 image-text pairs. Specifically, the task
requires the model to align multimedia representations of the same entity to
implement multi-hop reasoning between image and text and finally use natural
language to answer the question. The aim of this challenge is to develop and
benchmark models that are capable of multimedia entity alignment, multi-step
reasoning and open-ended answer generation.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 10:32:26 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Chen",
"Kang",
""
],
[
"Wu",
"Xiangqian",
""
]
] |
new_dataset
| 0.999762 |
2303.02640
|
Maryam Abdool
|
Maryam Abdool and Tony Dear
|
Swim: A General-Purpose, High-Performing, and Efficient Activation
Function for Locomotion Control Tasks
| null | null | null | null |
cs.LG cs.NE cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Activation functions play a significant role in the performance of deep
learning algorithms. In particular, the Swish activation function tends to
outperform ReLU on deeper models, including deep reinforcement learning models,
across challenging tasks. Despite this progress, ReLU is the preferred function
partly because it is more efficient than Swish. Furthermore, in contrast to the
fields of computer vision and natural language processing, the deep
reinforcement learning and robotics domains have seen less inclination to adopt
new activation functions, such as Swish, and instead continue to use more
traditional functions, like ReLU. To tackle those issues, we propose Swim, a
general-purpose, efficient, and high-performing alternative to Swish, and then
provide an analysis of its properties as well as an explanation for its
high-performance relative to Swish, in terms of both reward-achievement and
efficiency. We focus on testing Swim on MuJoCo's locomotion continuous control
tasks since they exhibit more complex dynamics and would therefore benefit most
from a high-performing and efficient activation function. We also use the TD3
algorithm in conjunction with Swim and explain this choice in the context of
the robot locomotion domain. We then conclude that Swim is a state-of-the-art
activation function for continuous control locomotion tasks and recommend using
it with TD3 as a working framework.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 11:04:33 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Abdool",
"Maryam",
""
],
[
"Dear",
"Tony",
""
]
] |
new_dataset
| 0.976688 |
2303.02641
|
Varun Gupta
|
Varun Gupta, Anbumani Subramanian, C.V. Jawahar, Rohit Saluja
|
CueCAn: Cue Driven Contextual Attention For Identifying Missing Traffic
Signs on Unconstrained Roads
|
International Conference on Robotics and Automation (ICRA'23)
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Unconstrained Asian roads often involve poor infrastructure, affecting
overall road safety. Missing traffic signs are a regular part of such roads.
Missing or non-existing object detection has been studied for locating missing
curbs and estimating reasonable regions for pedestrians on road scene images.
Such methods involve analyzing task-specific single object cues. In this paper,
we present the first and most challenging video dataset for missing objects,
with multiple types of traffic signs for which the cues are visible without the
signs in the scenes. We refer to it as the Missing Traffic Signs Video Dataset
(MTSVD). MTSVD is challenging compared to previous works in two aspects: i)
The traffic signs are generally not present in the vicinity of their cues, ii)
The traffic signs cues are diverse and unique. Also, MTSVD is the first
publicly available missing object dataset. To train the models for identifying
missing signs, we complement our dataset with 10K traffic sign tracks, with 40
percent of the traffic signs having cues visible in the scenes. For identifying
missing signs, we propose the Cue-driven Contextual Attention units (CueCAn),
which we incorporate in our model encoder. We first train the encoder to
classify the presence of traffic sign cues and then train the entire
segmentation model end-to-end to localize missing traffic signs. Quantitative
and qualitative analysis shows that CueCAn significantly improves the
performance of base models.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 11:06:20 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Gupta",
"Varun",
""
],
[
"Subramanian",
"Anbumani",
""
],
[
"Jawahar",
"C. V.",
""
],
[
"Saluja",
"Rohit",
""
]
] |
new_dataset
| 0.999873 |
2303.02667
|
Philippe Vincent-Lamarre
|
Philippe Vincent-Lamarre and Vincent Larivi\`ere
|
Are self-citations a normal feature of knowledge accumulation?
| null | null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
Science is a cumulative activity, which can manifest itself through the act
of citing. Citations are also central to research evaluation, thus creating
incentives for researchers to cite their own work. Using a dataset containing
more than 63 million articles and 51 million disambiguated authors, this paper
examines the relative importance of self-citations and self-references in the
scholarly communication landscape, their relationship with the age and gender
of authors, as well as their effects on various research evaluation indicators.
Results show that self-citations and self-references evolve in different
directions throughout researchers' careers, and that men and older researchers
are more likely to self-cite. Although self-citations have, on average, a small
to moderate effect on author's citation rates, they highly inflate citations
for a subset of researchers. Comparison of the abstracts of cited and citing
papers to assess the relatedness of different types of citations shows that
self-citations are more similar to each other than other types of citations,
and therefore more relevant. However, researchers that self-reference more tend
to include less relevant citations. The paper concludes with a discussion of
the role of self-citations in scholarly communication.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 13:17:23 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Vincent-Lamarre",
"Philippe",
""
],
[
"Larivière",
"Vincent",
""
]
] |
new_dataset
| 0.997719 |
2303.02684
|
Qingqing Li
|
Li Qingqing, Yu Xianjia, Jorge Pe\~na Queralta, Tomi Westerlund
|
Robust Multi-Modal Multi-LiDAR-Inertial Odometry and Mapping for Indoor
Environments
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Integrating multiple LiDAR sensors can significantly enhance a robot's
perception of the environment, enabling it to capture adequate measurements for
simultaneous localization and mapping (SLAM). Indeed, solid-state LiDARs can
add high resolution at a low cost to traditional spinning LiDARs in
robotic applications. However, their reduced field of view (FoV) limits
performance, particularly indoors. In this paper, we propose a tightly-coupled
multi-modal multi-LiDAR-inertial SLAM system for surveying and mapping tasks.
By taking advantage of both solid-state and spinning LiDARs, and built-in
inertial measurement units (IMU), we achieve both robust and low-drift
ego-estimation as well as high-resolution maps in diverse challenging indoor
environments (e.g., small, featureless rooms). First, we use spatial-temporal
calibration modules to align the timestamp and calibrate extrinsic parameters
between sensors. Then, we extract two groups of feature points including edge
and plane points, from LiDAR data. Next, with pre-integrated IMU data, an
undistortion module is applied to the LiDAR point cloud data. Finally, the
undistorted point clouds are merged into one point cloud and processed with a
sliding window based optimization module. From extensive experiment results,
our method shows competitive performance with state-of-the-art
spinning-LiDAR-only or solid-state-LiDAR-only SLAM systems in diverse
environments. More results, code, and dataset can be found at
\href{https://github.com/TIERS/multi-modal-loam}{https://github.com/TIERS/multi-modal-loam}.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 14:53:06 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Qingqing",
"Li",
""
],
[
"Xianjia",
"Yu",
""
],
[
"Queralta",
"Jorge Peña",
""
],
[
"Westerlund",
"Tomi",
""
]
] |
new_dataset
| 0.998211 |
2303.02708
|
Wen Fan
|
Wen Fan, Max Yang, Yifan Xing, Nathan F. Lepora, Dandan Zhang
|
Tac-VGNN: A Voronoi Graph Neural Network for Pose-Based Tactile Servoing
|
7 pages, 10 figures, accepted by 2023 IEEE International Conference
on Robotics and Automation (ICRA)
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tactile pose estimation and tactile servoing are fundamental capabilities of
robot touch. Reliable and precise pose estimation can be provided by applying
deep learning models to high-resolution optical tactile sensors. Given the
recent successes of Graph Neural Network (GNN) and the effectiveness of Voronoi
features, we developed a Tactile Voronoi Graph Neural Network (Tac-VGNN) to
achieve reliable pose-based tactile servoing relying on a biomimetic optical
tactile sensor (TacTip). The GNN is well suited to modeling the distribution
relationship between shear motions of the tactile markers, while the Voronoi
diagram supplements this with area-based tactile features related to contact
depth. The experimental results showed that the Tac-VGNN model enhances data
interpretability during graph generation and improves model training efficiency
significantly compared with CNN-based methods. It also improved pose estimation accuracy
along vertical depth by 28.57% over a vanilla GNN without Voronoi features and
achieved better performance on real surface-following tasks with smoother
robot control trajectories. For more project details, please view our website:
https://sites.google.com/view/tac-vgnn/home
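As an aside, the graph construction described above (Voronoi cell areas as node features, marker adjacency as edges) can be illustrated with a small, self-contained sketch. This is not the authors' Tac-VGNN code; the marker layout, noise level, and feature choice below are hypothetical stand-ins.

```python
# Illustrative sketch only (not the authors' Tac-VGNN code): build a graph from
# 2-D tactile marker positions, using Delaunay edges for connectivity and the
# area of each (finite) Voronoi cell as an extra per-node feature. Marker
# positions here are a hypothetical regular grid with noise.

import numpy as np
from scipy.spatial import Delaunay, Voronoi, ConvexHull

rng = np.random.default_rng(0)
gx, gy = np.meshgrid(np.linspace(0, 1, 7), np.linspace(0, 1, 7))
markers = np.c_[gx.ravel(), gy.ravel()] + rng.normal(0, 0.01, (49, 2))

# Graph edges from the Delaunay triangulation of the markers
tri = Delaunay(markers)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

# Per-node Voronoi cell area (infinite boundary cells are left as NaN)
vor = Voronoi(markers)
areas = np.full(len(markers), np.nan)
for point_idx, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if region and -1 not in region:
        areas[point_idx] = ConvexHull(vor.vertices[region]).volume  # 2-D "volume" = area

print("nodes:", len(markers), "edges:", len(edges))
print("mean interior cell area:", np.nanmean(areas))
```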
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 16:18:00 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Fan",
"Wen",
""
],
[
"Yang",
"Max",
""
],
[
"Xing",
"Yifan",
""
],
[
"Lepora",
"Nathan F.",
""
],
[
"Zhang",
"Dandan",
""
]
] |
new_dataset
| 0.988483 |
2303.02758
|
Manan Suri
|
Manan Suri, Aaryak Garg, Divya Chaudhary, Ian Gorton, Bijendra Kumar
|
WADER at SemEval-2023 Task 9: A Weak-labelling framework for Data
augmentation in tExt Regression Tasks
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Intimacy is an essential element of human relationships and language is a
crucial means of conveying it. Textual intimacy analysis can reveal social
norms in different contexts and serve as a benchmark for testing computational
models' ability to understand social information. In this paper, we propose a
novel weak-labeling strategy for data augmentation in text regression tasks
called WADER. WADER uses data augmentation to address the problems of data
imbalance and data scarcity and provides a method for data augmentation in
cross-lingual, zero-shot tasks. We benchmark the performance of
State-of-the-Art pre-trained multilingual language models using WADER and
analyze the use of sampling techniques to mitigate bias in data and optimally
select augmentation candidates. Our results show that WADER outperforms the
baseline model and provides a direction for mitigating data imbalance and
scarcity in text regression tasks.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 19:45:42 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Suri",
"Manan",
""
],
[
"Garg",
"Aaryak",
""
],
[
"Chaudhary",
"Divya",
""
],
[
"Gorton",
"Ian",
""
],
[
"Kumar",
"Bijendra",
""
]
] |
new_dataset
| 0.988161 |
2303.02835
|
Peng-Tao Jiang
|
Peng-Tao Jiang, Yuqi Yang, Yang Cao, Qibin Hou, Ming-Ming Cheng,
Chunhua Shen
|
Traffic Scene Parsing through the TSP6K Dataset
|
11 pages, 7 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Traffic scene parsing is one of the most important tasks to achieve
intelligent cities. So far, little effort has been spent on constructing
datasets specifically for the task of traffic scene parsing. To fill this gap,
here we introduce the TSP6K dataset, containing 6,000 urban traffic images and
spanning hundreds of street scenes under various weather conditions. In
contrast to most previous traffic scene datasets collected from a driving
platform, the images in our dataset are captured from shooting platforms hanging
high above the street. Such traffic images can capture more crowded street scenes with
several times more traffic participants than the driving scenes. Each image in
the TSP6K dataset is provided with high-quality pixel-level and instance-level
annotations. We perform a detailed analysis for the dataset and comprehensively
evaluate the state-of-the-art scene parsing methods. Considering the vast
difference in instance sizes, we propose a detail refining decoder, which
recovers the details of different semantic regions in traffic scenes.
Experiments have shown its effectiveness in parsing high-hanging traffic
scenes. Code and dataset will be made publicly available.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 02:05:14 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Jiang",
"Peng-Tao",
""
],
[
"Yang",
"Yuqi",
""
],
[
"Cao",
"Yang",
""
],
[
"Hou",
"Qibin",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Shen",
"Chunhua",
""
]
] |
new_dataset
| 0.999848 |
2303.02858
|
Zilin Si
|
Zilin Si, Tianhong Catherine Yu, Katrene Morozov, James McCann and
Wenzhen Yuan
|
RobotSweater: Scalable, Generalizable, and Customizable Machine-Knitted
Tactile Skins for Robots
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Tactile sensing is essential for robots to perceive and react to the
environment. However, it remains a challenge to make large-scale and flexible
tactile skins on robots. Industrial machine knitting provides solutions to
manufacture customizable fabrics. Along with functional yarns, it can produce
highly customizable circuits that can be made into tactile skins for robots. In
this work, we present RobotSweater, a machine-knitted pressure-sensitive
tactile skin that can be easily applied on robots. We design and fabricate a
parameterized multi-layer tactile skin using off-the-shelf yarns, and
characterize our sensor on both a flat testbed and a curved surface to show its
robust contact detection, multi-contact localization, and pressure sensing
capabilities. The sensor is fabricated using a well-established textile
manufacturing process with a programmable industrial knitting machine, which
makes it highly customizable and low-cost. The textile nature of the sensor
also makes it easily fit curved surfaces of different robots and have a
friendly appearance. Using our tactile skins, we conduct closed-loop control
with tactile feedback for two applications: (1) human lead-through control of a
robot arm, and (2) human-robot interaction with a mobile robot.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 03:22:34 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Si",
"Zilin",
""
],
[
"Yu",
"Tianhong Catherine",
""
],
[
"Morozov",
"Katrene",
""
],
[
"McCann",
"James",
""
],
[
"Yuan",
"Wenzhen",
""
]
] |
new_dataset
| 0.998813 |
2303.02913
|
Zhenyu Wu
|
Zhenyu Wu, YaoXiang Wang, Jiacheng Ye, Jiangtao Feng, Jingjing Xu, Yu
Qiao, Zhiyong Wu
|
OpenICL: An Open-Source Framework for In-context Learning
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, In-context Learning (ICL) has gained increasing attention
and emerged as the new paradigm for large language model (LLM) evaluation.
Unlike traditional fine-tuning methods, ICL instead adapts the pre-trained
models to unseen tasks without any parameter updates. However, the
implementation of ICL is sophisticated due to the diverse retrieval and
inference methods involved, as well as the varying pre-processing requirements
for different models, datasets, and tasks. A unified and flexible framework for
ICL is urgently needed to ease the implementation of the aforementioned
components. To facilitate ICL research, we introduce OpenICL, an open-source
toolkit for ICL and LLM evaluation. OpenICL is research-friendly, with a highly
flexible architecture in which users can easily combine different components to
suit their needs. It also provides various state-of-the-art retrieval and
inference methods to streamline the process of adapting ICL to cutting-edge
research. The effectiveness of OpenICL has been validated on a wide range of
NLP tasks, including classification, QA, machine translation, and semantic
parsing. As a side-product, we found OpenICL to be an efficient yet robust tool
for LLM evaluation. OpenICL is released at
https://github.com/Shark-NLP/OpenICL
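For readers unfamiliar with the retrieval-plus-inference loop that such toolkits wrap, the following sketch shows the generic pattern in plain Python. It deliberately does not use the OpenICL API; every name here (similarity, retrieve_demonstrations, build_prompt, fake_lm) is a hypothetical placeholder.

```python
# Illustrative sketch only: this is NOT the OpenICL API. It shows the generic
# retrieve-then-infer loop that in-context learning (ICL) toolkits implement.

from collections import Counter

def similarity(a: str, b: str) -> float:
    """Crude token-overlap similarity, standing in for a real retriever
    (e.g. BM25 or dense embeddings)."""
    ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((ta & tb).values()) / max(1, sum((ta | tb).values()))

def retrieve_demonstrations(query: str, train_set, k: int = 3):
    """Pick the k training examples most similar to the query."""
    return sorted(train_set, key=lambda ex: similarity(query, ex["text"]), reverse=True)[:k]

def build_prompt(query: str, demos) -> str:
    """Concatenate retrieved demonstrations followed by the test input."""
    parts = [f"Input: {d['text']}\nLabel: {d['label']}" for d in demos]
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

def fake_lm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a constant label."""
    return "positive"

train_set = [
    {"text": "great movie, loved it", "label": "positive"},
    {"text": "terrible plot and acting", "label": "negative"},
    {"text": "an absolute delight", "label": "positive"},
]

query = "I loved every minute of it"
prompt = build_prompt(query, retrieve_demonstrations(query, train_set, k=2))
print(prompt)
print("prediction:", fake_lm(prompt))
```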
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 06:20:25 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Wu",
"Zhenyu",
""
],
[
"Wang",
"YaoXiang",
""
],
[
"Ye",
"Jiacheng",
""
],
[
"Feng",
"Jiangtao",
""
],
[
"Xu",
"Jingjing",
""
],
[
"Qiao",
"Yu",
""
],
[
"Wu",
"Zhiyong",
""
]
] |
new_dataset
| 0.998609 |
2303.02972
|
Pavel Petracek
|
Pavel Petracek, Vit Kratky, Matej Petrlik, Tomas Baca, Radim
Kratochvil, Martin Saska
|
Large-Scale Exploration of Cave Environments by Unmanned Aerial Vehicles
| null |
IEEE Robotics and Automation Letters, vol. 6, no. 4, pp.
7596-7603, 2021
|
10.1109/LRA.2021.3098304
| null |
cs.RO cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a self-contained system for the robust utilization of
aerial robots in the autonomous exploration of cave environments to help human
explorers, first responders, and speleologists. The proposed system is
generally applicable to an arbitrary exploration task within an unknown and
unstructured subterranean environment and interconnects crucial robotic
subsystems to provide full autonomy of the robots. Such subsystems primarily
include mapping, path and trajectory planning, localization, control, and
decision making. Due to the diversity, complexity, and structural uncertainty
of natural cave environments, the proposed system allows for the possible use
of any arbitrary exploration strategy for a single robot, as well as for a
cooperating team. A multi-robot cooperation strategy that maximizes the limited
flight time of each aerial robot is proposed for exploration and search &
rescue scenarios where the homing of all deployed robots back to an initial
location is not required. The entire system is validated in a comprehensive
experimental analysis comprising hours of flight time in a real-world cave
environment, as well as by hundreds of hours within a state-of-the-art virtual
testbed that was developed for the DARPA Subterranean Challenge robotic
competition. Among others, experimental results include multiple real-world
exploration flights traveling over 470 meters on a single battery in a
demanding unknown cave environment.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 09:00:24 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Petracek",
"Pavel",
""
],
[
"Kratky",
"Vit",
""
],
[
"Petrlik",
"Matej",
""
],
[
"Baca",
"Tomas",
""
],
[
"Kratochvil",
"Radim",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.979518 |
2303.02976
|
Pavel Petracek
|
Pavel Petracek, Vit Kratky, Martin Saska
|
Dronument: System for Reliable Deployment of Micro Aerial Vehicles in
Dark Areas of Large Historical Monuments
| null |
IEEE Robotics and Automation Letters, vol. 5, no. 2, pp.
2078-2085, 2020
|
10.1109/LRA.2020.2969935
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter presents a self-contained system for robust deployment of
autonomous aerial vehicles in environments without access to global navigation
systems and with limited lighting conditions. The proposed system,
application-tailored for documentation in dark areas of large historical
monuments, uses a unique and reliable aerial platform with a multi-modal
lightweight sensory setup to acquire data in human-restricted areas with
adverse lighting conditions, especially in areas that are high above the
ground. The introduced localization method relies on an easy-to-obtain 3-D
point cloud of a historical building, while it copes with a lack of visible
light by fusing active laser-based sensors. The approach does not rely on any
external localization, or on a preset motion-capture system. This enables fast
deployment in the interiors of investigated structures while being
computationally undemanding enough to process data online, onboard an MAV
equipped with ordinary processing resources.
The reliability of the system is analyzed and quantitatively evaluated on a
set of aerial trajectories performed inside a real-world church, and the system is
deployed onto the aerial platform in the position control feedback loop to
demonstrate its reliability in the safety-critical application of
documenting historical monuments.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 09:06:53 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Petracek",
"Pavel",
""
],
[
"Kratky",
"Vit",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.999411 |
2303.02982
|
Xiang Wang
|
Xiang Wang, Shiwei Zhang, Jun Cen, Changxin Gao, Yingya Zhang, Deli
Zhao, Nong Sang
|
CLIP-guided Prototype Modulating for Few-shot Action Recognition
|
This work has been submitted to the Springer for possible
publication. Copyright may be transferred without notice, after which this
version may no longer be accessible
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning from large-scale contrastive language-image pre-training like CLIP
has shown remarkable success in a wide range of downstream tasks recently, but
it is still under-explored on the challenging few-shot action recognition
(FSAR) task. In this work, we aim to transfer the powerful multimodal knowledge
of CLIP to alleviate the inaccurate prototype estimation issue due to data
scarcity, which is a critical problem in low-shot regimes. To this end, we
present a CLIP-guided prototype modulating framework called CLIP-FSAR, which
consists of two key components: a video-text contrastive objective and a
prototype modulation. Specifically, the former bridges the task discrepancy
between CLIP and the few-shot video task by contrasting videos and
corresponding class text descriptions. The latter leverages the transferable
textual concepts from CLIP to adaptively refine visual prototypes with a
temporal Transformer. By this means, CLIP-FSAR can take full advantage of the
rich semantic priors in CLIP to obtain reliable prototypes and achieve accurate
few-shot classification. Extensive experiments on five commonly used benchmarks
demonstrate the effectiveness of our proposed method, and CLIP-FSAR
significantly outperforms existing state-of-the-art methods under various
settings. The source code and models will be publicly available at
https://github.com/alibaba-mmai-research/CLIP-FSAR.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 09:17:47 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Wang",
"Xiang",
""
],
[
"Zhang",
"Shiwei",
""
],
[
"Cen",
"Jun",
""
],
[
"Gao",
"Changxin",
""
],
[
"Zhang",
"Yingya",
""
],
[
"Zhao",
"Deli",
""
],
[
"Sang",
"Nong",
""
]
] |
new_dataset
| 0.979549 |
2303.02995
|
Shijie Geng
|
Shijie Geng, Jianbo Yuan, Yu Tian, Yuxiao Chen, Yongfeng Zhang
|
HiCLIP: Contrastive Language-Image Pretraining with Hierarchy-aware
Attention
|
Accepted at ICLR 2023
| null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The success of large-scale contrastive vision-language pretraining (CLIP) has
benefited both visual recognition and multimodal content understanding. The
concise design brings CLIP the advantage in inference efficiency against other
vision-language models with heavier cross-attention fusion layers, making it a
popular choice for a wide spectrum of downstream tasks. However, CLIP does not
explicitly capture the hierarchical nature of high-level and fine-grained
semantics conveyed in images and texts, which is arguably critical to
vision-language understanding and reasoning. To this end, we equip both the
visual and language branches in CLIP with hierarchy-aware attentions, namely
Hierarchy-aware CLIP (HiCLIP), to progressively discover semantic hierarchies
layer-by-layer from both images and texts in an unsupervised manner. As a
result, such hierarchical aggregation significantly improves the cross-modal
alignment. To demonstrate the advantages of HiCLIP, we conduct qualitative
analysis on its unsupervised hierarchy induction during inference, as well as
extensive quantitative experiments on both visual recognition and
vision-language downstream tasks.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 09:44:01 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Geng",
"Shijie",
""
],
[
"Yuan",
"Jianbo",
""
],
[
"Tian",
"Yu",
""
],
[
"Chen",
"Yuxiao",
""
],
[
"Zhang",
"Yongfeng",
""
]
] |
new_dataset
| 0.999416 |
2303.03101
|
Yujing Lou
|
Yujing Lou, Zelin Ye, Yang You, Nianjuan Jiang, Jiangbo Lu, Weiming
Wang, Lizhuang Ma, Cewu Lu
|
CRIN: Rotation-Invariant Point Cloud Analysis and Rotation Estimation
via Centrifugal Reference Frame
|
AAAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Various recent methods attempt to implement rotation-invariant 3D deep
learning by replacing the input coordinates of points with relative distances
and angles. Due to the incompleteness of these low-level features, they have to
undertake the expense of losing global information. In this paper, we propose
the CRIN, namely Centrifugal Rotation-Invariant Network. CRIN directly takes
the coordinates of points as input and transforms local points into
rotation-invariant representations via centrifugal reference frames. Aided by
centrifugal reference frames, each point corresponds to a discrete rotation so
that the information of rotations can be implicitly stored in point features.
Unfortunately, discrete points are far from describing the whole rotation
space. We further introduce a continuous distribution for 3D rotations based on
points. Furthermore, we propose an attention-based down-sampling strategy to
sample points invariant to rotations. Finally, a relation module is adopted to
reinforce the long-range dependencies between sampled points and predict the
anchor point for unsupervised rotation estimation. Extensive experiments show
that our method achieves rotation invariance, accurately estimates the object
rotation, and obtains state-of-the-art results on rotation-augmented
classification and part segmentation. Ablation studies validate the
effectiveness of the network design.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 13:14:10 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Lou",
"Yujing",
""
],
[
"Ye",
"Zelin",
""
],
[
"You",
"Yang",
""
],
[
"Jiang",
"Nianjuan",
""
],
[
"Lu",
"Jiangbo",
""
],
[
"Wang",
"Weiming",
""
],
[
"Ma",
"Lizhuang",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.994917 |
2303.03181
|
S Chandra Mouli
|
S Chandra Mouli, Muhammad Ashraful Alam, Bruno Ribeiro
|
MetaPhysiCa: OOD Robustness in Physics-informed Machine Learning
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A fundamental challenge in physics-informed machine learning (PIML) is the
design of robust PIML methods for out-of-distribution (OOD) forecasting tasks.
These OOD tasks require learning-to-learn from observations of the same (ODE)
dynamical system with different unknown ODE parameters, and demand accurate
forecasts even under out-of-support initial conditions and out-of-support ODE
parameters. In this work we propose a solution for such tasks, which we define
as a meta-learning procedure for causal structure discovery (including
invariant risk minimization). Using three different OOD tasks, we empirically
observe that the proposed approach significantly outperforms existing
state-of-the-art PIML and deep learning methods.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 14:48:30 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Mouli",
"S Chandra",
""
],
[
"Alam",
"Muhammad Ashraful",
""
],
[
"Ribeiro",
"Bruno",
""
]
] |
new_dataset
| 0.996464 |
2303.03221
|
Jiannan Li
|
Jiannan Li, Mauricio Sousa, Karthik Mahadevan, Bryan Wang, Paula Akemi
Aoyaui, Nicole Yu, Angela Yang, Ravin Balakrishnan, Anthony Tang, Tovi
Grossman
|
Stargazer: An Interactive Camera Robot for Capturing How-To Videos Based
on Subtle Instructor Cues
|
To appear in Proceedings of the 2023 CHI Conference on Human Factors
in Computing Systems (CHI '23), April 23--28, 2023, Hamburg, Germany
| null |
10.1145/3544548.3580896
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Live and pre-recorded video tutorials are an effective means for teaching
physical skills such as cooking or prototyping electronics. A dedicated
cameraperson following an instructor's activities can improve production
quality. However, instructors who do not have access to a cameraperson's help
often have to work within the constraints of static cameras. We present
Stargazer, a novel approach for assisting with tutorial content creation with a
camera robot that autonomously tracks regions of interest based on instructor
actions to capture dynamic shots. Instructors can adjust the camera behaviors
of Stargazer with subtle cues, including gestures and speech, allowing them to
fluidly integrate camera control commands into instructional activities. Our
user study with six instructors, each teaching a distinct skill, showed that
participants could create dynamic tutorial videos with a diverse range of
subjects, camera framing, and camera angle combinations using Stargazer.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 15:31:16 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Li",
"Jiannan",
""
],
[
"Sousa",
"Mauricio",
""
],
[
"Mahadevan",
"Karthik",
""
],
[
"Wang",
"Bryan",
""
],
[
"Aoyaui",
"Paula Akemi",
""
],
[
"Yu",
"Nicole",
""
],
[
"Yang",
"Angela",
""
],
[
"Balakrishnan",
"Ravin",
""
],
[
"Tang",
"Anthony",
""
],
[
"Grossman",
"Tovi",
""
]
] |
new_dataset
| 0.996822 |
2303.03290
|
Tilahun Abedissa Taffa
|
Tilahun Abedissa, Ricardo Usbeck, Yaregal Assabie
|
AmQA: Amharic Question Answering Dataset
| null | null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Question Answering (QA) returns concise answers or answer lists from natural
language text given a context document. Many resources go into curating QA
datasets to advance the development of robust models. There is a surge of QA datasets
for languages like English; however, this is not true for Amharic. Amharic, the
official language of Ethiopia, is the second most spoken Semitic language in
the world. There is no published or publicly available Amharic QA dataset.
Hence, to foster the research in Amharic QA, we present the first Amharic QA
(AmQA) dataset. We crowdsourced 2628 question-answer pairs over 378 Wikipedia
articles. Additionally, we run an XLMR Large-based baseline model to spark
open-domain QA research interest. The best-performing baseline achieves an
F-score of 69.58 and 71.74 in reader-retriever QA and reading comprehension
settings respectively.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 17:06:50 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Abedissa",
"Tilahun",
""
],
[
"Usbeck",
"Ricardo",
""
],
[
"Assabie",
"Yaregal",
""
]
] |
new_dataset
| 0.999797 |
2303.03300
|
Zhimeng Jiang
|
Zhimeng Jiang, Xiaotian Han, Hongye Jin, Guanchu Wang, Na Zou, Xia Hu
|
Weight Perturbation Can Help Fairness under Distribution Shift
| null | null | null | null |
cs.LG cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fairness in machine learning has attracted increasing attention in recent
years. Fairness methods that improve algorithmic fairness for in-distribution
data may not perform well under distribution shift. In this paper, we first
theoretically demonstrate the inherent connection between distribution shift,
data perturbation, and weight perturbation. Subsequently, we analyze the
sufficient conditions to guarantee fairness (i.e., low demographic parity) for
the target dataset, including fairness for the source dataset, and low
prediction difference between the source and target dataset for each sensitive
attribute group. Motivated by these sufficient conditions, we propose robust
fairness regularization (RFR) by considering the worst case within the weight
perturbation ball for each sensitive attribute group. In this way, the
maximization problem can be simplified as two forward and two backward
propagations for each update of model parameters. We evaluate the effectiveness
of our proposed RFR algorithm on synthetic and real distribution shifts across
various datasets. Experimental results demonstrate that RFR achieves better
fairness-accuracy trade-off performance compared with several baselines.
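A rough sense of the "two forward and two backward propagations per update" described above can be conveyed with a SAM-style weight-perturbation sketch. This is illustrative only, not the authors' RFR implementation; the toy data, model, perturbation radius, and the simple demographic-parity proxy are all assumptions.

```python
# Minimal, illustrative sketch (not the authors' code): a SAM-style worst-case
# weight perturbation applied to a fairness-style penalty, requiring two
# forward and two backward passes per parameter update.

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(64, 5)
y = (X[:, 0] > 0).float().unsqueeze(1)
group = (torch.rand(64) > 0.5)            # hypothetical sensitive attribute

model = nn.Sequential(nn.Linear(5, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
rho, lam = 0.05, 1.0                      # perturbation radius, penalty weight

def fairness_penalty(logits):
    """Crude demographic-parity proxy: gap between group-wise mean predictions."""
    p = torch.sigmoid(logits).squeeze(1)
    return (p[group].mean() - p[~group].mean()).abs()

for step in range(100):
    # 1st forward/backward: gradient of the penalty at the current weights
    opt.zero_grad()
    fairness_penalty(model(X)).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # ascend to the (approximate) worst case inside the weight-perturbation ball
    with torch.no_grad():
        eps = [rho * g / norm for g in grads]
        for p, e in zip(model.parameters(), eps):
            p.add_(e)

    # 2nd forward/backward: task loss + penalty evaluated at perturbed weights
    opt.zero_grad()
    logits = model(X)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y) \
           + lam * fairness_penalty(logits)
    loss.backward()

    # undo the perturbation, then descend using the robust gradient
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    opt.step()
```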
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 17:19:23 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Jiang",
"Zhimeng",
""
],
[
"Han",
"Xiaotian",
""
],
[
"Jin",
"Hongye",
""
],
[
"Wang",
"Guanchu",
""
],
[
"Zou",
"Na",
""
],
[
"Hu",
"Xia",
""
]
] |
new_dataset
| 0.998773 |
2303.03341
|
M. G. Sarwar Murshed
|
M.G. Sarwar Murshed, Keivan Bahmani, Stephanie Schuckers, Faraz
Hussain
|
Deep Age-Invariant Fingerprint Segmentation System
|
20 Pages, 14 figures, Journal
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fingerprint-based identification systems achieve higher accuracy when a slap
containing multiple fingerprints of a subject is used instead of a single
fingerprint. However, segmenting or auto-localizing all fingerprints in a slap
image is a challenging task due to the different orientations of fingerprints,
noisy backgrounds, and the smaller size of fingertip components. The presence
of slap images in a real-world dataset where one or more fingerprints are
rotated makes it challenging for a biometric recognition system to localize and
label the fingerprints automatically. Improper fingerprint localization and
finger labeling errors lead to poor matching performance. In this paper, we
introduce a method to generate arbitrary angled bounding boxes using a deep
learning-based algorithm that precisely localizes and labels fingerprints from
both axis-aligned and over-rotated slap images. We built a fingerprint
segmentation model named CRFSEG (Clarkson Rotated Fingerprint segmentation
Model) by updating the previously proposed CFSEG model which was based on
traditional Faster R-CNN architecture [21]. CRFSEG improves upon the Faster
R-CNN algorithm with arbitrarily angled bounding boxes that allow CRFSEG to
perform better in challenging slap images. After training the CRFSEG algorithm
on a new dataset containing slap images collected from both adult and children
subjects, our results suggest that the CRFSEG model was invariant across
different age groups and can handle over-rotated slap images successfully. In
the Combined dataset containing both normal and rotated images of adult and
children subjects, we achieved a matching accuracy of 97.17%, which
outperformed state-of-the-art VeriFinger (94.25%) and NFSEG segmentation
systems (80.58%).
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 18:21:16 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Murshed",
"M. G. Sarwar",
""
],
[
"Bahmani",
"Keivan",
""
],
[
"Schuckers",
"Stephanie",
""
],
[
"Hussain",
"Faraz",
""
]
] |
new_dataset
| 0.999665 |
2303.03378
|
Danny Driess
|
Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha
Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe
Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey
Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy
Zeng, Igor Mordatch, Pete Florence
|
PaLM-E: An Embodied Multimodal Language Model
| null | null | null | null |
cs.LG cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models excel at a wide range of complex tasks. However,
enabling general inference in the real world, e.g., for robotics problems,
raises the challenge of grounding. We propose embodied language models to
directly incorporate real-world continuous sensor modalities into language
models and thereby establish the link between words and percepts. Input to our
embodied language model are multi-modal sentences that interleave visual,
continuous state estimation, and textual input encodings. We train these
encodings end-to-end, in conjunction with a pre-trained large language model,
for multiple embodied tasks including sequential robotic manipulation planning,
visual question answering, and captioning. Our evaluations show that PaLM-E, a
single large embodied multimodal model, can address a variety of embodied
reasoning tasks, from a variety of observation modalities, on multiple
embodiments, and further, exhibits positive transfer: the model benefits from
diverse joint training across internet-scale language, vision, and
visual-language domains. Our largest model, PaLM-E-562B with 562B parameters,
in addition to being trained on robotics tasks, is a visual-language generalist
with state-of-the-art performance on OK-VQA, and retains generalist language
capabilities with increasing scale.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 18:58:06 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Driess",
"Danny",
""
],
[
"Xia",
"Fei",
""
],
[
"Sajjadi",
"Mehdi S. M.",
""
],
[
"Lynch",
"Corey",
""
],
[
"Chowdhery",
"Aakanksha",
""
],
[
"Ichter",
"Brian",
""
],
[
"Wahid",
"Ayzaan",
""
],
[
"Tompson",
"Jonathan",
""
],
[
"Vuong",
"Quan",
""
],
[
"Yu",
"Tianhe",
""
],
[
"Huang",
"Wenlong",
""
],
[
"Chebotar",
"Yevgen",
""
],
[
"Sermanet",
"Pierre",
""
],
[
"Duckworth",
"Daniel",
""
],
[
"Levine",
"Sergey",
""
],
[
"Vanhoucke",
"Vincent",
""
],
[
"Hausman",
"Karol",
""
],
[
"Toussaint",
"Marc",
""
],
[
"Greff",
"Klaus",
""
],
[
"Zeng",
"Andy",
""
],
[
"Mordatch",
"Igor",
""
],
[
"Florence",
"Pete",
""
]
] |
new_dataset
| 0.995168 |
2108.10271
|
Rachmad Vidya Wicaksana Putra
|
Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad
Shafique
|
ReSpawn: Energy-Efficient Fault-Tolerance for Spiking Neural Networks
considering Unreliable Memories
|
To appear at the 40th IEEE/ACM International Conference on
Computer-Aided Design (ICCAD), November 2021, Virtual Event
| null |
10.1109/ICCAD51958.2021.9643524
| null |
cs.AR cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spiking neural networks (SNNs) have shown a potential for having low energy
with unsupervised learning capabilities due to their biologically-inspired
computation. However, they may suffer from accuracy degradation if their
processing is performed under the presence of hardware-induced faults in
memories, which can come from manufacturing defects or voltage-induced
approximation errors. Since recent works still focus on the fault-modeling and
random fault injection in SNNs, the impact of memory faults in SNN hardware
architectures on accuracy and the respective fault-mitigation techniques are
not thoroughly explored. Toward this, we propose ReSpawn, a novel framework for
mitigating the negative impacts of faults in both the off-chip and on-chip
memories for resilient and energy-efficient SNNs. The key mechanisms of ReSpawn
are: (1) analyzing the fault tolerance of SNNs; and (2) improving the SNN fault
tolerance through (a) fault-aware mapping (FAM) in memories, and (b)
fault-aware training-and-mapping (FATM). If the training dataset is not fully
available, FAM is employed through efficient bit-shuffling techniques that
place the significant bits on the non-faulty memory cells and the insignificant
bits on the faulty ones, while minimizing the memory access energy. Meanwhile,
if the training dataset is fully available, FATM is employed by considering the
faulty memory cells in the data mapping and training processes. The
experimental results show that, compared to the baseline SNN without
fault-mitigation techniques, ReSpawn with a fault-aware mapping scheme improves
the accuracy by up to 70% for a network with 900 neurons without retraining.
|
[
{
"version": "v1",
"created": "Mon, 23 Aug 2021 16:17:33 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Putra",
"Rachmad Vidya Wicaksana",
""
],
[
"Hanif",
"Muhammad Abdullah",
""
],
[
"Shafique",
"Muhammad",
""
]
] |
new_dataset
| 0.997191 |
2111.02719
|
Geoffrey Ramseyer
|
Geoffrey Ramseyer, Ashish Goel, David Mazi\`eres
|
SPEEDEX: A Scalable, Parallelizable, and Economically Efficient
Decentralized EXchange
|
27 pages, 10 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
SPEEDEX is a decentralized exchange (DEX) that lets participants securely
trade assets without giving any single party undue control over the market.
SPEEDEX offers several advantages over prior DEXes. It achieves high throughput
-- over 200,000 transactions per second on 48-core servers, even with tens of
millions of open offers. SPEEDEX runs entirely within a Layer-1 blockchain, and
thus achieves its scalability without fragmenting market liquidity between
multiple blockchains or rollups. It eliminates internal arbitrage
opportunities, so that a direct trade from asset $\mathcal{A}$ to asset
$\mathcal{B}$ always receives as good a price as trading through some third
asset such as USD. Finally, it prevents certain front-running attacks that
would otherwise increase the effective bid-ask spread for small traders.
SPEEDEX's key design insight is its use of an Arrow-Debreu exchange market
structure that fixes the valuation of assets for all trades in a given block of
transactions. We construct an algorithm, which is both asymptotically efficient
and empirically practical, that computes these valuations while exactly
preserving a DEX's financial correctness constraints. Not only does this market
structure provide fairness across trades, but it also makes trade operations
commutative and hence efficiently parallelizable. SPEEDEX is prototyped but not
yet merged within the Stellar blockchain, one of the largest Layer-1
blockchains.
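The core idea of fixing one Arrow-Debreu valuation per block can be illustrated with a tatonnement sketch on a toy Cobb-Douglas exchange economy. SPEEDEX's actual solver is exact and asymptotically efficient; the iteration below is only a didactic stand-in, and the economy (agents, endowments, preferences) is hypothetical.

```python
# Toy illustration (not SPEEDEX's solver): tatonnement price updates that find
# market-clearing Arrow-Debreu valuations for a small Cobb-Douglas exchange
# economy, conveying the "one price vector per block" idea.

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_assets = 5, 3
endow = rng.uniform(1.0, 2.0, size=(n_agents, n_assets))   # hypothetical holdings
alpha = rng.dirichlet(np.ones(n_assets), size=n_agents)    # Cobb-Douglas weights

def excess_demand(prices):
    wealth = endow @ prices                                  # budget of each agent
    demand = (alpha * wealth[:, None]) / prices[None, :]     # Cobb-Douglas demand
    return demand.sum(axis=0) - endow.sum(axis=0)

prices = np.ones(n_assets)
for _ in range(5000):
    z = excess_demand(prices)
    prices *= 1.0 + 0.01 * z / endow.sum(axis=0)             # raise price where over-demanded
    prices /= prices.sum()                                   # normalize (prices are relative)

print("clearing prices:", np.round(prices, 4))
print("residual excess demand:", np.round(excess_demand(prices), 6))
```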
|
[
{
"version": "v1",
"created": "Thu, 4 Nov 2021 10:09:09 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 00:33:23 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Oct 2022 23:25:06 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Nov 2022 00:53:56 GMT"
},
{
"version": "v5",
"created": "Fri, 2 Dec 2022 20:01:17 GMT"
},
{
"version": "v6",
"created": "Thu, 2 Mar 2023 22:11:29 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Ramseyer",
"Geoffrey",
""
],
[
"Goel",
"Ashish",
""
],
[
"Mazières",
"David",
""
]
] |
new_dataset
| 0.95556 |
2204.01868
|
Leandro Marcomini
|
Leandro Arab Marcomini, Andr\'e Luiz Cunha
|
Truck Axle Detection with Convolutional Neural Networks
|
Code and dataset available for donwload, links provided
| null | null | null |
cs.CV cs.NE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Axle count in trucks is important to the classification of vehicles and to
the operation of road systems. It is used in the determination of service fees
and in the impact on the pavement. Although axle count can be achieved with
traditional methods, such as manual labor, it is increasingly possible to count
axles using deep learning and computer vision methods. This paper aims to
compare three deep-learning object detection algorithms, YOLO, Faster R-CNN,
and SSD, for the detection of truck axles. A dataset was built to provide
training and testing examples for the neural networks. The training was done on
different base models, to increase training time efficiency and to compare
results. We evaluated results based on five metrics: precision, recall, mAP,
F1-score, and FPS count. Results indicate that YOLO and SSD have similar
accuracy and performance, with more than 96\% mAP for both models. Datasets and
codes are publicly available for download.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 22:11:49 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2023 12:41:10 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Marcomini",
"Leandro Arab",
""
],
[
"Cunha",
"André Luiz",
""
]
] |
new_dataset
| 0.999592 |
2210.07590
|
Daeun Song
|
Ivaylo Ilinkin, Daeun Song, Young J. Kim
|
Stroke-based Rendering and Planning for Robotic Performance of Artistic
Drawing
|
Submitted to IEEE IROS 2023
| null | null | null |
cs.RO cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new robotic drawing system based on stroke-based rendering
(SBR). Our motivation is the artistic quality of the whole performance. Not
only should the generated strokes in the final drawing resemble the input
image, but the stroke sequence should also exhibit a human artist's planning
process. Thus, when a robot executes the drawing task, both the drawing results
and the way the robot executes would look artistic. Our SBR system is based on
image segmentation and depth estimation. It generates the drawing strokes in an
order that allows for the intended shape to be perceived quickly and for its
detailed features to be filled in and emerge gradually when observed by the
human. This ordering represents a stroke plan that the drawing robot should
follow to create an artistic rendering of images. We experimentally demonstrate
that our SBR-based drawing makes visually pleasing artistic images, and our
robotic system can replicate the result with proper sequences of stroke
drawing.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 07:42:57 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2023 08:31:04 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Ilinkin",
"Ivaylo",
""
],
[
"Song",
"Daeun",
""
],
[
"Kim",
"Young J.",
""
]
] |
new_dataset
| 0.978857 |
2301.04559
|
Juan M. Tizón
|
Juan M. Tiz\'on
|
Burnback Analysis of Solid Propellant Rocket Motors
|
42 pages, 20 figures, 2 tables
| null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Burnback analysis is a geometric exercise, whose correct solution leads to
obtaining the thrust curve of solid propellant rockets. Traditionally, Piobert's
statement, which introduces a certain amount of intuition, is used as an
argument to construct analytical and numerical algorithms, although it is also
common to use numerical integration of differential equations, whose solution
is free of ambiguities. This paper presents a detailed study of the process
experienced by the combustion surface that allows enunciating the properties of
the kinematics of the surface without the need to appeal to heuristic
considerations. Next, the methods used throughout the technological development
of solid propellant rockets are reviewed, from their beginnings to modern
methods, which obtain solutions to complex problems, based on the numerical
solution of PDE. Other methods are also reviewed, which are developed around
some of the properties presented by the solution, that is, methods of heuristic
or phenomenological foundation. As a result of the review, it becomes clear
that the solution of the Eikonal equation for burnback analysis is undertaken
in the early 2000s, clarifying the problem. Finally, several examples of the
capabilities of the most relevant methods are provided, from the point of view
of both efficiency and precision, presenting results in situations of interest,
in the field of propulsion by solid-propellant rockets.
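For the special case of a spatially uniform burn rate, the Eikonal solution reduces to a distance function from the initial burning surface, which suggests a minimal illustration. The sketch below is not a method from the reviewed literature; the grid, star-shaped port geometry, burn rate, and the uniform-rate assumption are hypothetical simplifications used only to show how the burned fraction over time can be read off a distance field.

```python
# Illustrative sketch: with a spatially uniform burn rate r, the Eikonal
# solution |grad t| = 1/r reduces to t(x) = dist(x, initial surface) / r, so
# the burning surface at time t is a level set of the distance field.

import numpy as np
from scipy.ndimage import distance_transform_edt

n = 400
h = 1.0 / n                                    # grid spacing
y, x = np.mgrid[0:n, 0:n] * h - 0.5            # 2-D cross-section in [-0.5, 0.5]^2
theta, rad = np.arctan2(y, x), np.hypot(x, y)

port = rad < 0.12 + 0.05 * np.cos(5 * theta)   # star-shaped inner port (initial surface)
case = rad < 0.45                               # propellant lies between port and case
propellant = case & ~port

burn_rate = 0.01                                # m/s, uniform (key simplifying assumption)
dist = distance_transform_edt(~port) * h        # distance from the initial surface
burn_time = np.where(propellant, dist / burn_rate, 0.0)

print("total burn time [s]:", burn_time.max())
for t in np.linspace(0.0, burn_time.max(), 6):
    burned = propellant & (dist <= burn_rate * t)
    frac = burned.sum() / propellant.sum()      # burned fraction drives the thrust curve
    print(f"t = {t:6.2f} s   burned fraction = {frac:.3f}")
```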
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 16:36:21 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2023 13:44:40 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Tizón",
"Juan M.",
""
]
] |
new_dataset
| 0.996981 |
2301.10602
|
I Made Aswin Nahrendra
|
I Made Aswin Nahrendra, Byeongho Yu, Hyun Myung
|
DreamWaQ: Learning Robust Quadrupedal Locomotion With Implicit Terrain
Imagination via Deep Reinforcement Learning
|
Accepted for ICRA 2023
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Quadrupedal robots resemble the physical ability of legged animals to walk
through unstructured terrains. However, designing a controller for quadrupedal
robots poses a significant challenge due to their functional complexity and
requires adaptation to various terrains. Recently, deep reinforcement learning,
inspired by how legged animals learn to walk from their experiences, has been
utilized to synthesize natural quadrupedal locomotion. However,
state-of-the-art methods strongly depend on a complex and reliable sensing
framework. Furthermore, prior works that rely only on proprioception have shown
a limited demonstration for overcoming challenging terrains, especially for a
long distance. This work proposes a novel quadrupedal locomotion learning
framework that allows quadrupedal robots to walk through challenging terrains,
even with limited sensing modalities. The proposed framework was validated in
real-world outdoor environments with varying conditions within a single run for
a long distance.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 14:23:14 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2023 01:13:40 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Nahrendra",
"I Made Aswin",
""
],
[
"Yu",
"Byeongho",
""
],
[
"Myung",
"Hyun",
""
]
] |
new_dataset
| 0.999608 |
2302.09646
|
Philip Cohen
|
Philip R. Cohen and Lucian Galescu
|
A Planning-Based Explainable Collaborative Dialogue System
|
61 pages, 8 figures, 3 appendices; V2 fixes two erroneous
cross-references
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Eva is a multimodal conversational system that helps users to accomplish
their domain goals through collaborative dialogue. The system does this by
inferring users' intentions and plans to achieve those goals, detects whether
obstacles are present, finds plans to overcome them or to achieve higher-level
goals, and plans its actions, including speech acts, to help users accomplish
those goals. In doing so, the system maintains and reasons with its own
beliefs, goals and intentions, and explicitly reasons about those of its user.
Belief reasoning is accomplished with a modal Horn-clause meta-interpreter. The
planning and reasoning subsystems obey the principles of persistent goals and
intentions, including the formation and decomposition of intentions to perform
complex actions, as well as the conditions under which they can be given up. In
virtue of its planning process, the system treats its speech acts just like its
other actions -- physical acts affect physical states, digital acts affect
digital states, and speech acts affect mental and social states. This general
approach enables Eva to plan a variety of speech acts including requests,
informs, questions, confirmations, recommendations, offers, acceptances,
greetings, and emotive expressions. Each of these has a formally specified
semantics which is used during the planning and reasoning processes. Because it
can keep track of different users' mental states, it can engage in multi-party
dialogues. Importantly, Eva can explain its utterances because it has created a
plan standing behind each of them. Finally, Eva employs multimodal input and
output, driving an avatar that can perceive and employ facial and head
movements along with emotive speech acts.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 18:29:54 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 20:04:13 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Cohen",
"Philip R.",
""
],
[
"Galescu",
"Lucian",
""
]
] |
new_dataset
| 0.993751 |
2302.14472
|
Donghuo Zeng
|
Donghuo Zeng, Jianming Wu, Gen Hattori, Yasuhiro Takishima
|
TV-watching partner robot: Analysis of User's Experience
|
15 pages, 3 figures, 11 tables
| null | null | null |
cs.MM cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Watching TV not only provides news and information but also gives different
generations an opportunity to communicate. With the proliferation of
smartphones, PCs, and the Internet, however, the opportunities for communication
in front of the television are also likely to diminish. This reduction in
face-to-face interaction has led to problems such as a lack of self-control and
insufficient development of communication skills. This paper proposes a
TV-watching companion robot with open-domain chat ability. The robot contains
two modes: TV-watching mode and conversation mode. In TV-watching mode, the
robot first extracts keywords from the TV program and then generates the
disclosure utterances based on the extracted keywords as if enjoying the TV
program. In the conversation mode, the robot generates question utterances with
keywords in the same way and then employs a topics-based dialog management
method consisting of multiple dialog engines for rich conversations related to
the TV program. We conducted initial experiments, and the results show that
all participants from the three groups enjoyed talking with the robot, and the
question about their interest in the robot was rated 6.5 on a 7-level scale. This
indicates that the proposed conversational features of the TV-watching companion
robot have the potential to make our daily lives more enjoyable. Building on the
analysis of the initial experiments, we carried out further experiments with more
participants by dividing them into two groups: a control group without a robot
and an intervention group with a robot. The results show that people prefer
talking with the robot because it makes watching TV more enjoyable, relaxed, and
interesting.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 10:30:16 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Mar 2023 03:38:11 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Mar 2023 02:15:32 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Zeng",
"Donghuo",
""
],
[
"Wu",
"Jianming",
""
],
[
"Hattori",
"Gen",
""
],
[
"Takishima",
"Yasuhiro",
""
]
] |
new_dataset
| 0.994416 |
2303.01542
|
Paria Mehrani
|
Paria Mehrani and John K. Tsotsos
|
Self-attention in Vision Transformers Performs Perceptual Grouping, Not
Attention
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently, a considerable number of studies in computer vision involve deep
neural architectures called vision transformers. Visual processing in these
models incorporates computational models that are claimed to implement
attention mechanisms. Despite an increasing body of work that attempts to
understand the role of attention mechanisms in vision transformers, their
effect is largely unknown. Here, we asked if the attention mechanisms in vision
transformers exhibit similar effects as those known in human visual attention.
To answer this question, we revisited the attention formulation in these models
and found that despite the name, computationally, these models perform a
special class of relaxation labeling with similarity grouping effects.
Additionally, whereas modern experimental findings reveal that human visual
attention involves both feed-forward and feedback mechanisms, the purely
feed-forward architecture of vision transformers suggests that attention in
these models will not have the same effects as those known in humans. To
quantify these observations, we evaluated grouping performance in a family of
vision transformers. Our results suggest that self-attention modules group
figures in the stimuli based on similarity in visual features such as color.
Also, in a singleton detection experiment as an instance of saliency detection,
we studied if these models exhibit similar effects as those of feed-forward
visual salience mechanisms utilized in human visual attention. We found that
generally, the transformer-based attention modules assign more salience either
to distractors or the ground. Together, our study suggests that the attention
mechanisms in vision transformers perform similarity grouping and not
attention.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 19:18:11 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Mehrani",
"Paria",
""
],
[
"Tsotsos",
"John K.",
""
]
] |
new_dataset
| 0.967604 |
2303.01557
|
Foivos Tsimpourlas
|
Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim
Hazelwood, Ajitha Rajan, Hugh Leather
|
BenchDirect: A Directed Language Model for Compiler Benchmarks
|
arXiv admin note: substantial text overlap with arXiv:2208.06555
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The exponential increase of hardware-software complexity has made it
impossible for compiler engineers to find the right optimization heuristics
manually. Predictive models have been shown to find near optimal heuristics
with little human effort but they are limited by a severe lack of diverse
benchmarks to train on. Generative AI has been used by researchers to
synthesize benchmarks into existing datasets. However, the synthetic programs
are short, exceedingly simple and lacking diversity in their features.
We develop BenchPress, the first ML compiler benchmark generator that can be
directed within source code feature representations. BenchPress synthesizes
executable functions by infilling code that conditions on the program's left
and right context. BenchPress uses active learning to introduce new benchmarks
with unseen features into the dataset of Grewe's et al. CPU vs GPU heuristic,
improving its acquired performance by 50%. BenchPress targets features that have
been impossible for other synthesizers to reach. In 3 feature spaces, we
outperform human-written code from GitHub, CLgen, CLSmith and the SRCIROR
mutator in targeting the features of Rodinia benchmarks.
BenchPress steers generation with beam search over a feature-agnostic
language model. We improve this with BenchDirect which utilizes a directed LM
that infills programs by jointly observing source code context and the compiler
features that are targeted. BenchDirect achieves up to 36% better accuracy in
targeting the features of Rodinia benchmarks, it is 1.8x more likely to give an
exact match and it speeds up execution time by up to 72% compared to
BenchPress. Both our models produce code that is difficult to distinguish from
human-written code. We conduct a Turing test which shows our models' synthetic
benchmarks are labelled as 'human-written' as often as human-written code from
GitHub.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 20:17:24 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Tsimpourlas",
"Foivos",
""
],
[
"Petoumenos",
"Pavlos",
""
],
[
"Xu",
"Min",
""
],
[
"Cummins",
"Chris",
""
],
[
"Hazelwood",
"Kim",
""
],
[
"Rajan",
"Ajitha",
""
],
[
"Leather",
"Hugh",
""
]
] |
new_dataset
| 0.999612 |
2303.01606
|
Artur Podobas PhD
|
Artur Podobas
|
Q2Logic: A Coarse-Grained Architecture targeting Schr\"odinger Quantum
Circuit Simulations
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum computing is emerging as an important (but radical) technology that
might take us beyond Moore's law for certain applications. Today, in parallel
with improving quantum computers, computer scientists are relying heavily on
quantum circuit simulators to develop algorithms. Most existing quantum circuit
simulators run on general-purpose CPUs or GPUs. However, at the same time,
quantum circuits themselves offer multiple opportunities for parallelization,
some of which could map better to other architecture -- architectures such as
reconfigurable systems. In this early work, we created a quantum circuit
simulator system called Q2Logic. Q2Logic is a coarse-grained reconfigurable
architecture (CGRA) implemented as an overlay on Field-Programmable Gate Arrays
(FPGAs), but specialized towards quantum simulations. We describe how Q2Logic
was created and reveal implementation details, limitations, and
opportunities. We end the study by empirically comparing the performance of
Q2Logic (running on an Intel Agilex FPGA) against the state-of-the-art framework
SVSim (running on a modern processor), showing improvements in three large
circuits (#qbit=27), where Q2Logic can be up to ~7x faster.
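For context, the Schrödinger-style computation that such simulators (and Q2Logic in hardware) perform is the repeated application of small unitaries to a 2^n amplitude vector. The sketch below is a generic statevector simulator in Python, unrelated to Q2Logic's CGRA implementation or to SVSim.

```python
# Minimal Schrödinger-style statevector simulator (illustration only).

import numpy as np

def apply_1q(state, gate, target, n):
    """Apply a 2x2 unitary to qubit `target` of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, target).reshape(-1)

def apply_cnot(state, control, target, n):
    """Apply CNOT by flipping the target amplitudes where control = 1."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, (control, target), (0, 1)).copy()
    psi[1] = psi[1, ::-1].copy()               # swap target amplitudes when control is 1
    return np.moveaxis(psi, (0, 1), (control, target)).reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                 # |000>

# GHZ circuit: H on qubit 0, then CNOTs 0->1 and 0->2
state = apply_1q(state, H, 0, n)
state = apply_cnot(state, 0, 1, n)
state = apply_cnot(state, 0, 2, n)

print(np.round(state, 3))                      # amplitude 1/sqrt(2) on |000> and |111>
```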
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 22:06:23 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Podobas",
"Artur",
""
]
] |
new_dataset
| 0.999621 |
2303.01634
|
Brendan Tidd
|
Brendan Tidd
|
Learning Visuo-Motor Behaviours for Robot Locomotion Over Difficult
Terrain
|
PhD thesis
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As mobile robots become useful performing everyday tasks in complex
real-world environments, they must be able to traverse a range of difficult
terrain types such as stairs, stepping stones, gaps, jumps and narrow passages.
This work investigated traversing these types of environments with a bipedal
robot (simulation experiments), and a tracked robot (real world). Developing a
traditional monolithic controller for traversing all terrain types is
challenging, and for large physical robots realistic test facilities are
required and safety must be ensured. An alternative is a suite of simple
behaviour controllers that can be composed to achieve complex tasks. This work
efficiently trained complex behaviours to enable mobile robots to traverse
difficult terrain. By minimising retraining as new behaviours became available,
robots were able to traverse increasingly complex terrain sets, leading toward
the development of scalable behaviour libraries.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 23:58:55 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Tidd",
"Brendan",
""
]
] |
new_dataset
| 0.952495 |
2303.01639
|
Jun Rekimoto
|
Jun Rekimoto
|
WESPER: Zero-shot and Realtime Whisper to Normal Voice Conversion for
Whisper-based Speech Interactions
|
ACM CHI 2023 paper
|
Proceedings of the 2023 CHI Conference on Human Factors in
Computing Systems (CHI '23), April 23--28, 2023
|
10.1145/3544548.3580706
| null |
cs.SD cs.HC eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing whispered speech and converting it to normal speech creates many
possibilities for speech interaction. Because the sound pressure of whispered
speech is significantly lower than that of normal speech, it can be used as a
semi-silent speech interaction in public places without being audible to
others. Converting whispers to normal speech also improves the speech quality
for people with speech or hearing impairments. However, conventional speech
conversion techniques do not provide sufficient conversion quality or require
speaker-dependent datasets consisting of pairs of whispered and normal speech
utterances. To address these problems, we propose WESPER, a zero-shot,
real-time whisper-to-normal speech conversion mechanism based on
self-supervised learning. WESPER consists of a speech-to-unit (STU) encoder,
which generates hidden speech units common to both whispered and normal speech,
and a unit-to-speech (UTS) decoder, which reconstructs speech from the encoded
speech units. Unlike the existing methods, this conversion is user-independent
and does not require a paired dataset for whispered and normal speech. The UTS
decoder can reconstruct speech in any target speaker's voice from speech units,
and it requires only an unlabeled target speaker's speech data. We confirmed
that the quality of the speech converted from a whisper was improved while
preserving its natural prosody. Additionally, we confirmed the effectiveness of
the proposed approach to perform speech reconstruction for people with speech
or hearing disabilities. (project page: http://lab.rekimoto.org/projects/wesper
)
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 00:10:25 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Rekimoto",
"Jun",
""
]
] |
new_dataset
| 0.998202 |
2303.01645
|
Ramin Shahbazi
|
Ramin Shahbazi, Fatemeh Fard
|
APIContext2Com: Code Comment Generation by Incorporating Pre-Defined API
Documentation
| null | null | null | null |
cs.SE cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code comments are significantly helpful in comprehending software programs
and also aid developers to save a great deal of time in software maintenance.
Code comment generation aims to automatically predict comments in natural
language given a code snippet. Several works investigate the effect of
integrating external knowledge on the quality of generated comments. In this
study, we propose a solution, namely APIContext2Com, to improve the
effectiveness of generated comments by incorporating the pre-defined
Application Programming Interface (API) context. The API context includes the
definition and description of the pre-defined APIs that are used within the
code snippets. As the detailed API information expresses the functionality of a
code snippet, it can be helpful in better generating the code summary. We
introduce a seq-2-seq encoder-decoder neural network model with different sets
of multiple encoders to effectively transform distinct inputs into target
comments. A ranking mechanism is also developed to exclude non-informative
APIs, so that we can filter out unrelated APIs. We evaluate our approach using
the Java dataset from CodeSearchNet. The findings reveal that the proposed
model improves the best baseline by 1.88 (8.24 %), 2.16 (17.58 %), 1.38 (18.3
%), 0.73 (14.17 %), 1.58 (14.98 %) and 1.9 (6.92 %) for BLEU1, BLEU2, BLEU3,
BLEU4, METEOR, ROUGE-L respectively. Human evaluation and ablation studies
confirm the quality of the generated comments and the effect of architecture
and ranking APIs.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 00:38:01 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Shahbazi",
"Ramin",
""
],
[
"Fard",
"Fatemeh",
""
]
] |
new_dataset
| 0.995715 |
2303.01648
|
Jinkun Zhang
|
Jinkun Zhang and Edmund Yeh
|
Congestion-aware routing and content placement in elastic cache networks
|
A complete version of paper "Congestion-aware routing and content
placement in elastic cache networks" submitted to MobiHoc 2023
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Caching can be leveraged to significantly improve network performance and
mitigate congestion. However, characterizing the optimal tradeoff between
routing cost and cache deployment cost remains an open problem. In this paper,
for a network with arbitrary topology and congestion-dependent nonlinear cost
functions, we aim to jointly determine the cache deployment, content placement,
and hop-by-hop routing strategies, so that the sum of routing cost and cache
deployment cost is minimized. We tackle this NP-hard problem starting with a
fixed-routing setting and then moving to a general dynamic-routing setting. For the
fixed-routing setting, a Gradient-combining Frank-Wolfe algorithm with
$(\frac{1}{2},1)$-approximation is presented. For the general dynamic-routing
setting, we obtain a set of KKT necessary optimal conditions, and devise a
distributed and adaptive online algorithm based on the conditions. We
demonstrate via extensive simulation that our algorithms significantly
outperform a number of baseline techniques.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 00:47:28 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Zhang",
"Jinkun",
""
],
[
"Yeh",
"Edmund",
""
]
] |
new_dataset
| 0.987646 |
2303.01665
|
Sara Adkins
|
Sara Adkins, Pedro Sarmento, Mathieu Barthet
|
LooperGP: A Loopable Sequence Model for Live Coding Performance using
GuitarPro Tablature
|
The Version of Record of this contribution is published in
Proceedings of EvoMUSART: International Conference on Computational
Intelligence in Music, Sound, Art and Design (Part of EvoStar) 2023
|
EvoMUSART: International Conference on Computational Intelligence
in Music, Sound, Art and Design (Part of EvoStar) 2023
| null | null |
cs.SD cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Despite their impressive offline results, deep learning models for symbolic
music generation are not widely used in live performances due to a deficit of
musically meaningful control parameters and a lack of structured musical form
in their outputs. To address these issues we introduce LooperGP, a method for
steering a Transformer-XL model towards generating loopable musical phrases of
a specified number of bars and time signature, enabling a tool for live coding
performances. We show that by training LooperGP on a dataset of 93,681 musical
loops extracted from the DadaGP dataset, we are able to steer its generative
output towards generating 3x as many loopable phrases as our baseline. In a
subjective listening test conducted by 31 participants, LooperGP loops achieved
positive median ratings in originality, musical coherence and loop smoothness,
demonstrating its potential as a performance tool.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 02:00:49 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Adkins",
"Sara",
""
],
[
"Sarmento",
"Pedro",
""
],
[
"Barthet",
"Mathieu",
""
]
] |
new_dataset
| 0.997636 |
2303.01676
|
Hsin Cheng
|
Hsin Cheng, Zhiwu Zheng, Prakhar Kumar, Wali Afridi, Ben Kim, Sigurd
Wagner, Naveen Verma, James C. Sturm and Minjie Chen
|
eViper: A Scalable Platform for Untethered Modular Soft Robots
|
8 pages, 21 figures, submitted to IROS 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soft robots present unique capabilities, but have been limited by the lack of
scalable technologies for construction and the complexity of algorithms for
efficient control and motion, which depend on soft-body dynamics,
high-dimensional actuation patterns, and external/on-board forces. This paper
presents scalable methods and platforms to study the impact of weight
distribution and actuation patterns on fully untethered modular soft robots. An
extendable Vibrating Intelligent Piezo-Electric Robot (eViper), together with
an open-source Simulation Framework for Electroactive Robotic Sheet (SFERS)
implemented in PyBullet, was developed as a platform to study the sophisticated
weight-locomotion interaction. By integrating the power electronics, sensors,
actuators, and batteries on-board, the eViper platform enables rapid design
iteration and evaluation of different weight distribution and control
strategies for the actuator arrays, supporting both physics-based modeling and
data-driven modeling via on-board automatic data-acquisition capabilities. We
show that SFERS can provide useful guidelines for optimizing the weight
distribution and actuation patterns of the eViper to achieve the maximum speed
or minimum cost-of-transportation (COT).
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 02:31:00 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Cheng",
"Hsin",
""
],
[
"Zheng",
"Zhiwu",
""
],
[
"Kumar",
"Prakhar",
""
],
[
"Afridi",
"Wali",
""
],
[
"Kim",
"Ben",
""
],
[
"Wagner",
"Sigurd",
""
],
[
"Verma",
"Naveen",
""
],
[
"Sturm",
"James C.",
""
],
[
"Chen",
"Minjie",
""
]
] |
new_dataset
| 0.977011 |
2303.01716
|
Wen Ma
|
W. Ma, J. Luo
|
MacWilliams Type Identities for Linear Block Codes on Certain Pomsets
|
18 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pomset block metric is a generalization of pomset metric. In this paper, we
define weight enumerator of linear block codes in pomset metric over
$\mathbb{Z}_m$ and establish MacWilliams type identities for linear block codes
with respect to certain pomsets. The relation between weight enumerators of two
linear pomset block codes and their direct sum is also investigated.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 05:36:54 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Ma",
"W.",
""
],
[
"Luo",
"J.",
""
]
] |
new_dataset
| 0.965392 |
2303.01721
|
Wen Ma
|
W. Ma, J. Luo
|
Block Codes in Pomset Metric over $\mathbb{Z}_m$
|
26 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce codes equipped with pomset block metric. A
Singleton type bound for pomset block codes is obtained. Code achieving the
Singleton bound, called a maximum distance separable code (for short, MDS
($\mathbb{P},\pi$)-code) is also investigated. We extend the concept of
$I$-perfect codes and $r$-perfect codes to pomset block metric. The relation
between $I$-perfect codes and MDS $(\mathbb{P},\pi)$-codes is also considered.
When all blocks have the same dimension, we prove the duality theorem for codes
and study the weight distribution of MDS pomset block codes when the pomset is
a chain.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 05:52:10 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Ma",
"W.",
""
],
[
"Luo",
"J.",
""
]
] |
new_dataset
| 0.998761 |
2303.01734
|
Amira Guesmi
|
Amira Guesmi, Ioan Marius Bilasco, Muhammad Shafique, and Ihsen
Alouani
|
AdvART: Adversarial Art for Camouflaged Object Detection Attacks
| null | null | null | null |
cs.CV cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A majority of existing physical attacks in the real world result in
conspicuous and eye-catching patterns for generated patches, which made them
identifiable/detectable by humans. To overcome this limitation, recent work has
proposed several approaches that aim at generating naturalistic patches using
generative adversarial networks (GANs), which may not catch human's attention.
However, these approaches are computationally intensive and do not always
converge to natural looking patterns. In this paper, we propose a novel
lightweight framework that systematically generates naturalistic adversarial
patches without using GANs. To illustrate the proposed approach, we generate
adversarial art (AdvART), which consists of patches generated to look like artistic
paintings while maintaining high attack efficiency. In fact, we redefine the
optimization problem by introducing a new similarity objective. Specifically,
we leverage similarity metrics to construct a similarity loss that is added to
the optimized objective function. This component guides the patch to follow a
predefined artistic pattern while maximizing the victim model's loss function.
Our patch achieves high success rates with $12.53\%$ mean average precision
(mAP) on YOLOv4tiny for INRIA dataset.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 06:28:05 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Guesmi",
"Amira",
""
],
[
"Bilasco",
"Ioan Marius",
""
],
[
"Shafique",
"Muhammad",
""
],
[
"Alouani",
"Ihsen",
""
]
] |
new_dataset
| 0.99513 |
2303.01758
|
Jun Rekimoto
|
Naoki Kimura, Michinari Kono, and Jun Rekimoto
|
SottoVoce: An Ultrasound Imaging-Based Silent Speech Interaction Using
Deep Neural Networks
|
ACM CHI 2019 paper
|
CHI Conference on Human Factors in Computing Systems Proceedings
(CHI 2019)
|
10.1145/3290605.3300376
| null |
cs.HC cs.LG cs.SD eess.AS eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The availability of digital devices operated by voice is expanding rapidly.
However, the applications of voice interfaces are still restricted. For
example, speaking in public places becomes an annoyance to the surrounding
people, and secret information should not be uttered. Environmental noise may
reduce the accuracy of speech recognition. To address these limitations, a
system to detect a user's unvoiced utterance is proposed. From internal
information observed by an ultrasonic imaging sensor attached to the underside
of the jaw, our proposed system recognizes the utterance contents without the
user having to vocalize them. Our proposed deep neural network model is used to obtain
acoustic features from a sequence of ultrasound images. We confirmed that audio
signals generated by our system can control the existing smart speakers. We
also observed that a user can adjust their oral movement to learn and improve
the accuracy of their voice recognition.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 07:46:35 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Kimura",
"Naoki",
""
],
[
"Kono",
"Michinari",
""
],
[
"Rekimoto",
"Jun",
""
]
] |
new_dataset
| 0.997804 |
2303.01788
|
Xiwen Liang
|
Xiwen Liang, Minzhe Niu, Jianhua Han, Hang Xu, Chunjing Xu, Xiaodan
Liang
|
Visual Exemplar Driven Task-Prompting for Unified Perception in
Autonomous Driving
|
Accepted at CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-task learning has emerged as a powerful paradigm to solve a range of
tasks simultaneously with good efficiency in both computation resources and
inference time. However, these algorithms are designed for different tasks
mostly not within the scope of autonomous driving, thus making it hard to
compare multi-task methods in autonomous driving. Aiming to enable the
comprehensive evaluation of present multi-task learning methods in autonomous
driving, we extensively investigate the performance of popular multi-task
methods on the large-scale driving dataset, which covers four common perception
tasks, i.e., object detection, semantic segmentation, drivable area
segmentation, and lane detection. We provide an in-depth analysis of current
multi-task learning methods under different common settings and find out that
the existing methods make progress but there is still a large performance gap
compared with single-task baselines. To alleviate this dilemma in autonomous
driving, we present an effective multi-task framework, VE-Prompt, which
introduces visual exemplars via task-specific prompting to guide the model
toward learning high-quality task-specific representations. Specifically, we
generate visual exemplars based on bounding boxes and color-based markers,
which provide accurate visual appearances of target categories and further
mitigate the performance gap. Furthermore, we bridge transformer-based encoders
and convolutional layers for efficient and accurate unified perception in
autonomous driving. Comprehensive experimental results on the diverse
self-driving dataset BDD100K show that the VE-Prompt improves the multi-task
baseline and further surpasses single-task models.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 08:54:06 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Liang",
"Xiwen",
""
],
[
"Niu",
"Minzhe",
""
],
[
"Han",
"Jianhua",
""
],
[
"Xu",
"Hang",
""
],
[
"Xu",
"Chunjing",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
new_dataset
| 0.999634 |
2303.01807
|
Foisal Ahmed
|
Tanvir Ahmad Tarique, Foisal Ahmed, Maksim Jenihhin, Liakot Ali
|
Unsupervised Recycled FPGA Detection Using Symmetry Analysis
| null | null | null | null |
cs.AR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, recycled field-programmable gate arrays (FPGAs) pose a significant
hardware security problem due to the proliferation of the semiconductor supply
chain. Ring oscillator (RO) based frequency analyzing technique is one of the
popular methods, where most studies used the known fresh FPGAs (KFFs) in
machine learning-based detection, which is not a realistic approach. In this
paper, we present a novel recycled FPGA detection method by examining the
symmetry information of the RO frequency using unsupervised anomaly detection
method. Due to the symmetrical array structure of the FPGA, some adjacent logic
blocks on an FPGA have comparable RO frequencies, hence our method simply
analyzes the RO frequencies of those blocks to determine how similar they are.
The proposed approach efficiently categorizes recycled FPGAs by utilizing
direct density ratio estimation through outliers detection. Experiments using
Xilinx Artix-7 FPGAs demonstrate that the proposed method accurately classifies
recycled FPGAs from 10 fresh FPGAs by x fewer computations compared with the
conventional method.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 09:27:34 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Tarique",
"Tanvir Ahmad",
""
],
[
"Ahmed",
"Foisal",
""
],
[
"Jenihhin",
"Maksim",
""
],
[
"Ali",
"Liakot",
""
]
] |
new_dataset
| 0.962331 |
2303.01813
|
Andriy Sarabakha
|
Andriy Sarabakha
|
anafi_ros: from Off-the-Shelf Drones to Research Platforms
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The off-the-shelf drones are simple to operate and easy to maintain aerial
systems. However, due to proprietary flight software, these drones usually do
not provide any open-source interface which can enable them for autonomous
flight in research or teaching. This work introduces a package for ROS1 and
ROS2 for straightforward interfacing with off-the-shelf drones from the Parrot
ANAFI family. The developed ROS package is hardware agnostic, allowing
connecting seamlessly to all four supported drone models. This framework can
connect with the same ease to a single drone or a team of drones from the same
ground station. The developed package was intensively tested at the limits of
the drones' capabilities and thoughtfully documented to facilitate its use by
other research groups worldwide.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 09:40:02 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Sarabakha",
"Andriy",
""
]
] |
new_dataset
| 0.988998 |
2303.01816
|
Foisal Ahmed
|
Foisal Ahmed, Maksim Jenihhin
|
Holistic IJTAG-based External and Internal Fault Monitoring in UAVs
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cyber-Physical Systems (CPSs), such as Unmanned Aerial Vehicles (UAVs), use
System-on-Chip (SoC) based computing platforms to perform multiple complex
tasks in safety-critical applications that require a highly dependable
operation. Due to continuous technological manufacturing miniaturization SoCs
face a wide spectrum of chip-level reliability issues such as aging, soft and
hard errors during the operational lifetime of a UAV. In addition, external
(off-chip) faults in the sensors, actuators, and motors are another cause of
UAV failures. While existing works examine either on-chip faults (internal) or
sensors/actuators faults (external) separately, this research proposes a UAV
health monitoring infrastructure considering both external and internal faults
holistically. The proposed method relies on the IEEE 1687 standard (IJTAG) and
employs on-chip embedded instruments as health monitors to instantly access
external and internal sensor data. Experimental results for functional
simulation of a real-life case-study design demonstrate both types of fault
detection by serving only three clock cycles and the localization process using
16 and 30 clock cycles for the case of single and double faults, respectively.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 09:53:36 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Ahmed",
"Foisal",
""
],
[
"Jenihhin",
"Maksim",
""
]
] |
new_dataset
| 0.971096 |
2303.01847
|
Eric Kafe
|
Eric Kafe
|
Mapping Wordnets on the Fly with Permanent Sense Keys
|
Presented at GWC 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Most of the major databases on the semantic web have links to Princeton
WordNet (PWN) synonym set (synset) identifiers, which differ for each PWN
release, and are thus incompatible between versions. On the other hand, both
PWN and the more recent Open English Wordnet (OEWN) provide permanent word
sense identifiers (the sense keys), which can solve this interoperability
problem.
We present an algorithm that runs in linear time to automatically derive a
synset mapping between any pair of Wordnet versions that use PWN sense keys.
This makes it possible to update old WordNet links and to seamlessly interoperate with newer
English Wordnet versions for which no prior mapping exists.
By applying the proposed algorithm on the fly, at load time, we combine the
Open Multilingual Wordnet (OMW 1.4, which uses old PWN 3.0 identifiers) with
OEWN Edition 2021, and obtain almost perfect precision and recall. We compare
the results of our approach using respectively synset offsets, versus the
Collaborative InterLingual Index (CILI version 1.0) as synset identifiers, and
find that the synset offsets perform better than CILI 1.0 in all cases, except
a few ties.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 11:01:10 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Kafe",
"Eric",
""
]
] |
new_dataset
| 0.995031 |
2303.01850
|
Malihe Alavi
|
Malihe Alavi, Farnoush Manavi, Amirhossein Ansari, Ali Hamzeh
|
LBCIM: Loyalty Based Competitive Influence Maximization with
epsilon-greedy MCTS strategy
|
13 pages, 10 figures, 2 pseudocodes
| null | null | null |
cs.SI cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Competitive influence maximization has been studied for several years, and
various frameworks have been proposed to model different aspects of information
diffusion under the competitive environment. This work presents a new gameboard
for two competing parties with some new features representing loyalty in social
networks and reflecting the attitude of not completely being loyal to a party
when the opponent offers better suggestions. This behavior can be observed in
most political occasions where each party tries to attract people by making
better suggestions than the opponent and even seeks to impress the fans of the
opposition party to change their minds. In order to identify the best move in
each step of the game framework, an improved Monte Carlo tree search is
developed, which uses some predefined heuristics to apply them on the
simulation step of the algorithm and takes advantage of them to search among
child nodes of the current state and pick the best one using an epsilon-greedy
way instead of choosing them at random. Experimental results on synthetic and
real datasets indicate that the proposed strategy outperforms several
well-known benchmark strategies such as general MCTS, the minimax algorithm with
alpha-beta pruning, random nodes, nodes with maximum threshold and nodes with
minimum threshold.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 11:11:53 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Alavi",
"Malihe",
""
],
[
"Manavi",
"Farnoush",
""
],
[
"Ansari",
"Amirhossein",
""
],
[
"Hamzeh",
"Ali",
""
]
] |
new_dataset
| 0.994216 |
2303.01884
|
Sen Pei
|
Sen Pei, Jingya Yu, Qi Chen, Wozhou He
|
AutoMatch: A Large-scale Audio Beat Matching Benchmark for Boosting Deep
Learning Assistant Video Editing
|
11 pages, 5 figures
| null | null | null |
cs.SD cs.CV cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The explosion of short videos has dramatically reshaped the way people
socialize, yielding a new trend for daily sharing and access to the latest
information. These rich video resources, on the one hand, benefited from the
popularization of portable devices with cameras, but on the other, they cannot
be independent of the valuable editing work contributed by numerous video
creators. In this paper, we investigate a novel and practical problem, namely
audio beat matching (ABM), which aims to recommend the proper transition time
stamps based on the background music. This technique helps to ease the
labor-intensive work during video editing, saving energy for creators so that
they can focus more on the creativity of video content. We formally define the
ABM problem and its evaluation protocol. Meanwhile, a large-scale audio
dataset, i.e., the AutoMatch with over 87k finely annotated background music,
is presented to facilitate this newly opened research direction. To further lay
solid foundations for the following study, we also propose a novel model termed
BeatX to tackle this challenging task. Alongside, we creatively present the
concept of label scope, which eliminates the data imbalance issues and assigns
adaptive weights for the ground truth during the training procedure in one
stop. Though plentiful short video platforms have flourished for a long time,
the relevant research concerning this scenario is not sufficient, and to the
best of our knowledge, AutoMatch is the first large-scale dataset to tackle the
audio beat matching problem. We hope the released dataset and our competitive
baseline can encourage more attention to this line of research. The dataset and
codes will be made publicly available.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 12:30:09 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Pei",
"Sen",
""
],
[
"Yu",
"Jingya",
""
],
[
"Chen",
"Qi",
""
],
[
"He",
"Wozhou",
""
]
] |
new_dataset
| 0.987833 |
2303.01943
|
Lukas Mehl
|
Lukas Mehl, Jenny Schmalfuss, Azin Jahedi, Yaroslava Nalivayko,
Andr\'es Bruhn
|
Spring: A High-Resolution High-Detail Dataset and Benchmark for Scene
Flow, Optical Flow and Stereo
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While recent methods for motion and stereo estimation recover an
unprecedented amount of details, such highly detailed structures are neither
adequately reflected in the data of existing benchmarks nor their evaluation
methodology. Hence, we introduce Spring $-$ a large, high-resolution,
high-detail, computer-generated benchmark for scene flow, optical flow, and
stereo. Based on rendered scenes from the open-source Blender movie "Spring",
it provides photo-realistic HD datasets with state-of-the-art visual effects
and ground truth training data. Furthermore, we provide a website to upload,
analyze and compare results. Using a novel evaluation methodology based on a
super-resolved UHD ground truth, our Spring benchmark can assess the quality of
fine structures and provides further detailed performance statistics on
different image regions. Regarding the number of ground truth frames, Spring is
60$\times$ larger than the only scene flow benchmark, KITTI 2015, and
15$\times$ larger than the well-established MPI Sintel optical flow benchmark.
Initial results for recent methods on our benchmark show that estimating fine
details is indeed challenging, as their accuracy leaves significant room for
improvement. The Spring benchmark and the corresponding datasets are available
at http://spring-benchmark.org.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 14:15:48 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Mehl",
"Lukas",
""
],
[
"Schmalfuss",
"Jenny",
""
],
[
"Jahedi",
"Azin",
""
],
[
"Nalivayko",
"Yaroslava",
""
],
[
"Bruhn",
"Andrés",
""
]
] |
new_dataset
| 0.999844 |
2303.01999
|
Xianghao Xu
|
Xianghao Xu, Paul Guerrero, Matthew Fisher, Siddhartha Chaudhuri and
Daniel Ritchie
|
Unsupervised 3D Shape Reconstruction by Part Retrieval and Assembly
|
CVPR 2023
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Representing a 3D shape with a set of primitives can aid perception of
structure, improve robotic object manipulation, and enable editing,
stylization, and compression of 3D shapes. Existing methods either use simple
parametric primitives or learn a generative shape space of parts. Both have
limitations: parametric primitives lead to coarse approximations, while learned
parts offer too little control over the decomposition. We instead propose to
decompose shapes using a library of 3D parts provided by the user, giving full
control over the choice of parts. The library can contain parts with
high-quality geometry that are suitable for a given category, resulting in
meaningful decompositions with clean geometry. The type of decomposition can
also be controlled through the choice of parts in the library. Our method works
via a self-supervised approach that iteratively retrieves parts from the
library and refines their placements. We show that this approach gives higher
reconstruction accuracy and more desirable decompositions than existing
approaches. Additionally, we show how the decomposition can be controlled
through the part library by using different part libraries to reconstruct the
same shapes.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 15:11:36 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Xu",
"Xianghao",
""
],
[
"Guerrero",
"Paul",
""
],
[
"Fisher",
"Matthew",
""
],
[
"Chaudhuri",
"Siddhartha",
""
],
[
"Ritchie",
"Daniel",
""
]
] |
new_dataset
| 0.998439 |
2303.02000
|
You Shen
|
You Shen, Yunzhou Zhang, Yanmin Wu, Zhenyu Wang, Linghao Yang, Sonya
Coleman, Dermot Kerr
|
BSH-Det3D: Improving 3D Object Detection with BEV Shape Heatmap
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The progress of LiDAR-based 3D object detection has significantly enhanced
developments in autonomous driving and robotics. However, due to the
limitations of LiDAR sensors, object shapes suffer from deterioration in
occluded and distant areas, which creates a fundamental challenge to 3D
perception. Existing methods estimate specific 3D shapes and achieve remarkable
performance. However, these methods rely on extensive computation and memory,
causing imbalances between accuracy and real-time performance. To tackle this
challenge, we propose a novel LiDAR-based 3D object detection model named
BSH-Det3D, which applies an effective way to enhance spatial features by
estimating complete shapes from a bird's eye view (BEV). Specifically, we
design the Pillar-based Shape Completion (PSC) module to predict the
probability of occupancy whether a pillar contains object shapes. The PSC
module generates a BEV shape heatmap for each scene. After integrating with
heatmaps, BSH-Det3D can provide additional information in shape deterioration
areas and generate high-quality 3D proposals. We also design an attention-based
densification fusion module (ADF) to adaptively associate the sparse features
with heatmaps and raw points. The ADF module integrates the advantages of
points and shapes knowledge with negligible overheads. Extensive experiments on
the KITTI benchmark achieve state-of-the-art (SOTA) performance in terms of
accuracy and speed, demonstrating the efficiency and flexibility of BSH-Det3D.
The source code is available on https://github.com/mystorm16/BSH-Det3D.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 15:13:11 GMT"
}
] | 2023-03-06T00:00:00 |
[
[
"Shen",
"You",
""
],
[
"Zhang",
"Yunzhou",
""
],
[
"Wu",
"Yanmin",
""
],
[
"Wang",
"Zhenyu",
""
],
[
"Yang",
"Linghao",
""
],
[
"Coleman",
"Sonya",
""
],
[
"Kerr",
"Dermot",
""
]
] |
new_dataset
| 0.997409 |
2108.11250
|
Dong Wu
|
Dong Wu, Manwen Liao, Weitian Zhang, Xinggang Wang, Xiang Bai, Wenqing
Cheng, Wenyu Liu
|
YOLOP: You Only Look Once for Panoptic Driving Perception
| null |
[J]. Machine Intelligence Research, 2022: 1-13
|
10.1007/s11633-022-1339-y
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A panoptic driving perception system is an essential part of autonomous
driving. A high-precision and real-time perception system can assist the
vehicle in making the reasonable decision while driving. We present a panoptic
driving perception network (YOLOP) to perform traffic object detection,
drivable area segmentation and lane detection simultaneously. It is composed of
one encoder for feature extraction and three decoders to handle the specific
tasks. Our model performs extremely well on the challenging BDD100K dataset,
achieving state-of-the-art on all three tasks in terms of accuracy and speed.
Besides, we verify the effectiveness of our multi-task learning model for joint
training via ablative studies. To our best knowledge, this is the first work
that can process these three visual perception tasks simultaneously in
real-time on an embedded device, Jetson TX2 (23 FPS), and maintain excellent
accuracy. To facilitate further research, the source codes and pre-trained
models are released at https://github.com/hustvl/YOLOP.
|
[
{
"version": "v1",
"created": "Wed, 25 Aug 2021 14:19:42 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Aug 2021 05:59:59 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Aug 2021 06:31:48 GMT"
},
{
"version": "v4",
"created": "Mon, 30 Aug 2021 08:26:32 GMT"
},
{
"version": "v5",
"created": "Tue, 31 Aug 2021 08:38:29 GMT"
},
{
"version": "v6",
"created": "Fri, 11 Feb 2022 16:11:44 GMT"
},
{
"version": "v7",
"created": "Sat, 26 Mar 2022 15:39:42 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Wu",
"Dong",
""
],
[
"Liao",
"Manwen",
""
],
[
"Zhang",
"Weitian",
""
],
[
"Wang",
"Xinggang",
""
],
[
"Bai",
"Xiang",
""
],
[
"Cheng",
"Wenqing",
""
],
[
"Liu",
"Wenyu",
""
]
] |
new_dataset
| 0.999828 |
2110.07145
|
Beibei Wang
|
Beibei Wang and Wenhua Jin and Milo\v{s} Ha\v{s}an and Ling-Qi Yan
|
SpongeCake: A Layered Microflake Surface Appearance Model
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose SpongeCake: a layered BSDF model where each layer
is a volumetric scattering medium, defined using microflake or other phase
functions. We omit any reflecting and refracting interfaces between the layers.
The first advantage of this formulation is that an exact and analytic solution
for single scattering, regardless of the number of volumetric layers, can be
derived. We propose to approximate multiple scattering by an additional
single-scattering lobe with modified parameters and a Lambertian lobe. We use a
parameter mapping neural network to find the parameters of the newly added
lobes to closely approximate the multiple scattering effect. Despite the
absence of layer interfaces, we demonstrate that many common material effects
can be achieved with layers of SGGX microflake and other volumes with
appropriate parameters. A normal mapping effect can also be achieved through
mapping of microflake orientations, which avoids artifacts common in standard
normal maps. Thanks to the analytical formulation, our model is very fast to
evaluate and sample. Through various parameter settings, our model is able to
handle many types of materials, like plastics, wood, cloth, etc., opening a
number of practical applications.
|
[
{
"version": "v1",
"created": "Thu, 14 Oct 2021 04:21:43 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 01:16:45 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Wang",
"Beibei",
""
],
[
"Jin",
"Wenhua",
""
],
[
"Hašan",
"Miloš",
""
],
[
"Yan",
"Ling-Qi",
""
]
] |
new_dataset
| 0.999575 |
2111.02005
|
Sid Chi-Kin Chau
|
Nan Wang, Sid Chi-Kin Chau and Yue Zhou
|
Privacy-Preserving Energy Storage Sharing with Blockchain and Secure
Multi-Party Computation
|
This is an updated and extended version of the conference paper
"Privacy-Preserving Energy Storage Sharing with Blockchain" in ACM e-Energy
21'
|
ACM SIGEnergy Energy Informatics Review, Volume 1, Issue 1, pp
32-50, November 2021
|
10.1145/3508467.3508471
| null |
cs.CR math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Energy storage provides an effective way of shifting temporal energy demands
and supplies, which enables significant cost reduction under time-of-use energy
pricing plans. Despite its promising benefits, the cost of present energy
storage remains high, presenting a major obstacle to practical deployment.
A more viable solution to improve the cost-effectiveness is by sharing energy
storage, such as community sharing, cloud energy storage and peer-to-peer
sharing. However, revealing private energy demand data to an external energy
storage operator may compromise user privacy, and is susceptible to data
misuses and breaches. In this paper, we explore a novel approach to support
energy storage sharing with privacy protection, based on privacy-preserving
blockchain and secure multi-party computation. We present an integrated
solution to enable privacy-preserving energy storage sharing, such that energy
storage service scheduling and cost-sharing can be attained without the
knowledge of individual users' demands. It also supports auditing and
verification by the grid operator via blockchain. Furthermore, our
privacy-preserving solution can safeguard against a dishonest majority of
users, who may collude in cheating, without requiring a trusted third-party. We
implemented our solution as a smart contract on real-world Ethereum blockchain
platform, and provide empirical evaluation in this paper.
|
[
{
"version": "v1",
"created": "Wed, 3 Nov 2021 03:45:34 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Wang",
"Nan",
""
],
[
"Chau",
"Sid Chi-Kin",
""
],
[
"Zhou",
"Yue",
""
]
] |
new_dataset
| 0.986027 |
2203.10258
|
Peng Wu
|
Haoxuan Li, Yan Lyu, Chunyuan Zheng, and Peng Wu
|
TDR-CL: Targeted Doubly Robust Collaborative Learning for Debiased
Recommendations
| null | null | null | null |
cs.IR cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bias is a common problem inherent in recommender systems, which is entangled
with users' preferences and poses a great challenge to unbiased learning. For
debiasing tasks, the doubly robust (DR) method and its variants show superior
performance due to the double robustness property, that is, DR is unbiased when
either imputed errors or learned propensities are accurate. However, our
theoretical analysis reveals that DR usually has a large variance. Meanwhile,
DR would suffer from unexpectedly large bias and poor generalization caused by
inaccurate imputed errors and learned propensities, which usually occur in
practice. In this paper, we propose a principled approach that can effectively
reduce bias and variance simultaneously for existing DR approaches when the
error imputation model is misspecified. In addition, we further propose a novel
semi-parametric collaborative learning approach that decomposes imputed errors
into parametric and nonparametric parts and updates them collaboratively,
resulting in more accurate predictions. Both theoretical analysis and
experiments demonstrate the superiority of the proposed methods compared with
existing debiasing methods.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 06:48:50 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 08:07:13 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Mar 2023 12:50:53 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Li",
"Haoxuan",
""
],
[
"Lyu",
"Yan",
""
],
[
"Zheng",
"Chunyuan",
""
],
[
"Wu",
"Peng",
""
]
] |
new_dataset
| 0.99191 |
2205.12523
|
Rongjie Huang
|
Rongjie Huang, Jinglin Liu, Huadai Liu, Yi Ren, Lichao Zhang, Jinzheng
He, Zhou Zhao
|
TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation
|
Accepted to ICLR 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Direct speech-to-speech translation (S2ST) with discrete units leverages
recent progress in speech representation learning. Specifically, a sequence of
discrete representations derived in a self-supervised manner are predicted from
the model and passed to a vocoder for speech reconstruction, while still facing
the following challenges: 1) Acoustic multimodality: the discrete units derived
from speech with same content could be indeterministic due to the acoustic
property (e.g., rhythm, pitch, and energy), which causes deterioration of
translation accuracy; 2) high latency: current S2ST systems utilize
autoregressive models which predict each unit conditioned on the sequence
previously generated, failing to take full advantage of parallelism. In this
work, we propose TranSpeech, a speech-to-speech translation model with
bilateral perturbation. To alleviate the acoustic multimodal problem, we
propose bilateral perturbation (BiP), which consists of the style normalization
and information enhancement stages, to learn only the linguistic information
from speech samples and generate more deterministic representations. With
reduced multimodality, we step forward and become the first to establish a
non-autoregressive S2ST technique, which repeatedly masks and predicts unit
choices and produces high-accuracy results in just a few cycles. Experimental
results on three language pairs demonstrate that BiP yields an improvement of
2.9 BLEU on average compared with a baseline textless S2ST model. Moreover, our
parallel decoding shows a significant reduction of inference latency, enabling
a speedup of up to 21.4x over the autoregressive technique. Audio samples are available
at \url{https://TranSpeech.github.io/}
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 06:34:14 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 09:17:01 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Huang",
"Rongjie",
""
],
[
"Liu",
"Jinglin",
""
],
[
"Liu",
"Huadai",
""
],
[
"Ren",
"Yi",
""
],
[
"Zhang",
"Lichao",
""
],
[
"He",
"Jinzheng",
""
],
[
"Zhao",
"Zhou",
""
]
] |
new_dataset
| 0.999436 |
2208.01174
|
Peter Jansen
|
Peter A. Jansen, Marc-Alexandre C\^ot\'e
|
TextWorldExpress: Simulating Text Games at One Million Steps Per Second
|
Accepted to EACL 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Text-based games offer a challenging test bed to evaluate virtual agents at
language understanding, multi-step problem-solving, and common-sense reasoning.
However, speed is a major limitation of current text-based games, capping at
300 steps per second, mainly due to the use of legacy tooling. In this work we
present TextWorldExpress, a high-performance simulator that includes
implementations of three common text game benchmarks that increases simulation
throughput by approximately three orders of magnitude, reaching over one
million steps per second on common desktop hardware. This significantly reduces
experiment runtime, enabling billion-step-scale experiments in about one day.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 23:43:48 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 06:11:49 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Jansen",
"Peter A.",
""
],
[
"Côté",
"Marc-Alexandre",
""
]
] |
new_dataset
| 0.998636 |
2209.07725
|
Tianrui Guan
|
Tianrui Guan, Ruitao Song, Zhixian Ye, Liangjun Zhang
|
VINet: Visual and Inertial-based Terrain Classification and Adaptive
Navigation over Unknown Terrain
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a visual and inertial-based terrain classification network (VINet)
for robotic navigation over different traversable surfaces. We use a novel
navigation-based labeling scheme for terrain classification and generalization
on unknown surfaces. Our proposed perception method and adaptive scheduling
control framework can make predictions according to terrain navigation
properties and lead to better performance on both terrain classification and
navigation control on known and unknown surfaces. Our VINet can achieve 98.37%
in terms of accuracy under a supervised setting on known terrains and improve the
accuracy by 8.51% on unknown terrains compared to previous methods. We deploy
VINet on a mobile tracked robot for trajectory following and navigation on
different terrains, and we demonstrate an improvement of 10.3% compared to a
baseline controller in terms of RMSE.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 05:14:08 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Sep 2022 05:14:20 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Mar 2023 23:49:35 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Guan",
"Tianrui",
""
],
[
"Song",
"Ruitao",
""
],
[
"Ye",
"Zhixian",
""
],
[
"Zhang",
"Liangjun",
""
]
] |
new_dataset
| 0.999422 |
2209.11294
|
Benjamin Stoler
|
Benjamin Stoler, Meghdeep Jana, Soonmin Hwang, Jean Oh
|
T2FPV: Dataset and Method for Correcting First-Person View Errors in
Pedestrian Trajectory Prediction
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Predicting pedestrian motion is essential for developing socially-aware
robots that interact in a crowded environment. While the natural visual
perspective for a social interaction setting is an egocentric view, the
majority of existing work in trajectory prediction therein has been
investigated purely in the top-down trajectory space. To support first-person
view trajectory prediction research, we present T2FPV, a method for
constructing high-fidelity first-person view (FPV) datasets given a real-world,
top-down trajectory dataset; we showcase our approach on the ETH/UCY pedestrian
dataset to generate the egocentric visual data of all interacting pedestrians,
creating the T2FPV-ETH dataset. In this setting, FPV-specific errors arise due
to imperfect detection and tracking, occlusions, and field-of-view (FOV)
limitations of the camera. To address these errors, we propose CoFE, a module
that further refines the imputation of missing data in an end-to-end manner
with trajectory forecasting algorithms. Our method reduces the impact of such
FPV errors on downstream prediction performance, decreasing displacement error
by more than 10% on average. To facilitate research engagement, we release our
T2FPV-ETH dataset and software tools.
|
[
{
"version": "v1",
"created": "Thu, 22 Sep 2022 20:14:43 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 07:51:07 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Stoler",
"Benjamin",
""
],
[
"Jana",
"Meghdeep",
""
],
[
"Hwang",
"Soonmin",
""
],
[
"Oh",
"Jean",
""
]
] |
new_dataset
| 0.999242 |
2211.12732
|
Joshua Knights Mr
|
Joshua Knights, Kavisha Vidanapathirana, Milad Ramezani, Sridha
Sridharan, Clinton Fookes, Peyman Moghadam
|
Wild-Places: A Large-Scale Dataset for Lidar Place Recognition in
Unstructured Natural Environments
|
Equal contribution from the first two authors. Accepted to ICRA 2023.
Website link: https://csiro-robotics.github.io/Wild-Places/
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many existing datasets for lidar place recognition are solely representative
of structured urban environments, and have recently been saturated in
performance by deep learning based approaches. Natural and unstructured
environments present many additional challenges for the task of long-term
localisation, but these environments are not represented in currently available
datasets. To address this we introduce Wild-Places, a challenging large-scale
dataset for lidar place recognition in unstructured, natural environments.
Wild-Places contains eight lidar sequences collected with a handheld sensor
payload over the course of fourteen months, containing a total of 63K
undistorted lidar submaps along with accurate 6DoF ground truth. Our dataset
contains multiple revisits both within and between sequences, allowing for both
intra-sequence (i.e. loop closure detection) and inter-sequence (i.e.
re-localisation) place recognition. We also benchmark several state-of-the-art
approaches to demonstrate the challenges that this dataset introduces,
particularly the case of long-term place recognition due to natural
environments changing over time. Our dataset and code will be available at
https://csiro-robotics.github.io/Wild-Places.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 06:50:31 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2022 07:17:10 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Mar 2023 06:45:21 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Knights",
"Joshua",
""
],
[
"Vidanapathirana",
"Kavisha",
""
],
[
"Ramezani",
"Milad",
""
],
[
"Sridharan",
"Sridha",
""
],
[
"Fookes",
"Clinton",
""
],
[
"Moghadam",
"Peyman",
""
]
] |
new_dataset
| 0.999882 |
2301.06281
|
Youxin Pang
|
Youxin Pang, Yong Zhang, Weize Quan, Yanbo Fan, Xiaodong Cun, Ying
Shan, Dong-ming Yan
|
DPE: Disentanglement of Pose and Expression for General Video Portrait
Editing
|
https://carlyx.github.io/DPE/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One-shot video-driven talking face generation aims at producing a synthetic
talking video by transferring the facial motion from a video to an arbitrary
portrait image. Head pose and facial expression are always entangled in facial
motion and transferred simultaneously. However, the entanglement sets up a
barrier for these methods to be used in video portrait editing directly, where
it may require to modify the expression only while maintaining the pose
unchanged. One challenge of decoupling pose and expression is the lack of
paired data, such as the same pose but different expressions. Only a few
methods attempt to tackle this challenge with the feat of 3D Morphable Models
(3DMMs) for explicit disentanglement. But 3DMMs are not accurate enough to
capture facial details due to the limited number of blendshapes, which has side
effects on motion transfer. In this paper, we introduce a novel self-supervised
disentanglement framework to decouple pose and expression without 3DMMs and
paired data, which consists of a motion editing module, a pose generator, and
an expression generator. The editing module projects faces into a latent space
where pose motion and expression motion can be disentangled, and the pose or
expression transfer can be performed in the latent space conveniently via
addition. The two generators render the modified latent codes to images,
respectively. Moreover, to guarantee the disentanglement, we propose a
bidirectional cyclic training strategy with well-designed constraints.
Evaluations demonstrate our method can control pose or expression independently
and be used for general video editing.
|
[
{
"version": "v1",
"created": "Mon, 16 Jan 2023 06:39:51 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Mar 2023 08:21:23 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Pang",
"Youxin",
""
],
[
"Zhang",
"Yong",
""
],
[
"Quan",
"Weize",
""
],
[
"Fan",
"Yanbo",
""
],
[
"Cun",
"Xiaodong",
""
],
[
"Shan",
"Ying",
""
],
[
"Yan",
"Dong-ming",
""
]
] |
new_dataset
| 0.999275 |
2301.10031
|
Paloma Thome De Lima
|
Hans L. Bodlaender, \'Edouard Bonnet, Lars Jaffke, Du\v{s}an Knop,
Paloma T. Lima, Martin Milani\v{c}, Sebastian Ordyniak, Sukanya Pandey and
Ond\v{r}ej Such\'y
|
Treewidth is NP-Complete on Cubic Graphs (and related results)
| null | null | null | null |
cs.CC cs.DS math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we give a very simple proof that Treewidth is NP-complete;
this proof also shows NP-completeness on the class of co-bipartite graphs. We
then improve the result by Bodlaender and Thilikos from 1997 that Treewidth is
NP-complete on graphs with maximum degree at most 9, by showing that Treewidth
is NP-complete on cubic graphs.
|
[
{
"version": "v1",
"created": "Tue, 24 Jan 2023 14:17:58 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 09:49:25 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Bodlaender",
"Hans L.",
""
],
[
"Bonnet",
"Édouard",
""
],
[
"Jaffke",
"Lars",
""
],
[
"Knop",
"Dušan",
""
],
[
"Lima",
"Paloma T.",
""
],
[
"Milanič",
"Martin",
""
],
[
"Ordyniak",
"Sebastian",
""
],
[
"Pandey",
"Sukanya",
""
],
[
"Suchý",
"Ondřej",
""
]
] |
new_dataset
| 0.999777 |
2302.04500
|
Hexiang Pan
|
Hexiang Pan, Quang-Trung Ta, Meihui Zhang, Yeow Meng Chee, Gang Chen,
Beng Chin Ooi
|
FLAC: A Robust Failure-Aware Atomic Commit Protocol for Distributed
Transactions
| null | null | null | null |
cs.DC cs.AI cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In distributed transaction processing, atomic commit protocol (ACP) is used
to ensure database consistency. With the use of commodity compute nodes and
networks, failures such as system crashes and network partitioning are common.
It is therefore important for ACP to dynamically adapt to the operating
condition for efficiency while ensuring the consistency of the database.
Existing ACPs often assume stable operating conditions, hence, they are either
non-generalizable to different environments or slow in practice.
In this paper, we propose a novel and practical ACP, called Failure-Aware
Atomic Commit (FLAC). In essence, FLAC includes three protocols, which are
specifically designed for three different environments: (i) no failure occurs,
(ii) participant nodes might crash but there is no delayed connection, or (iii)
both crashed nodes and delayed connection can occur. It models these
environments as the failure-free, crash-failure, and network-failure robustness
levels. During its operation, FLAC can monitor if any failure occurs and
dynamically switch to operate the most suitable protocol, using a robustness
level state machine, whose parameters are fine-tuned by reinforcement learning.
Consequently, it improves both the response time and throughput, and
effectively handles nodes distributed across the Internet where crash and
network failures might occur. We implement FLAC in a distributed transactional
key-value storage system based on Google Percolator and evaluate its
performance with both a micro benchmark and a macro benchmark of real workload.
The results show that FLAC achieves up to 2.22x throughput improvement and
2.82x latency speedup, compared to existing ACPs for high-contention workloads.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 08:52:11 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 13:15:16 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Mar 2023 06:44:41 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Pan",
"Hexiang",
""
],
[
"Ta",
"Quang-Trung",
""
],
[
"Zhang",
"Meihui",
""
],
[
"Chee",
"Yeow Meng",
""
],
[
"Chen",
"Gang",
""
],
[
"Ooi",
"Beng Chin",
""
]
] |
new_dataset
| 0.997754 |
2302.10518
|
Yuxuan Xiong
|
Yue Shi, Yuxuan Xiong, Jingyi Chai, Bingbing Ni, Wenjun Zhang
|
USR: Unsupervised Separated 3D Garment and Human Reconstruction via
Geometry and Semantic Consistency
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dressed people reconstruction from images is a popular task with promising
applications in the creative media and game industry. However, most existing
methods reconstruct the human body and garments as a whole with the supervision
of 3D models, which hinders the downstream interaction tasks and requires
hard-to-obtain data. To address these issues, we propose an unsupervised
separated 3D garments and human reconstruction model (USR), which reconstructs
the human body and authentic textured clothes in layers without 3D models. More
specifically, our method proposes a generalized surface-aware neural radiance
field to learn the mapping between sparse multi-view images and geometries of
the dressed people. Based on the full geometry, we introduce a Semantic and
Confidence Guided Separation strategy (SCGS) to detect, segment, and
reconstruct the clothes layer, leveraging the consistency between 2D semantics
and 3D geometry. Moreover, we propose a Geometry Fine-tune Module to smooth
edges. Extensive experiments on our dataset show that comparing with
state-of-the-art methods, USR achieves improvements on both geometry and
appearance reconstruction while supporting generalizing to unseen people in
real time. Besides, we also introduce the SMPL-D model to show the benefit of the
separated modeling of clothes and the human body, which allows swapping clothes
and virtual try-on.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 08:48:27 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 07:07:16 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Mar 2023 14:06:01 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Shi",
"Yue",
""
],
[
"Xiong",
"Yuxuan",
""
],
[
"Chai",
"Jingyi",
""
],
[
"Ni",
"Bingbing",
""
],
[
"Zhang",
"Wenjun",
""
]
] |
new_dataset
| 0.998533 |
2302.13585
|
Konrad Kollnig
|
Konrad Kollnig and Lu Zhang and Jun Zhao and Nigel Shadbolt
|
Before and after China's new Data Laws: Privacy in Apps
|
Accepted for publication by the 7th Workshop on Technology and
Consumer Protection (ConPro '23)
| null | null | null |
cs.CY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Privacy in apps is a topic of widespread interest because many apps collect
and share large amounts of highly sensitive information. In response, China
introduced a range of new data protection laws over recent years, notably the
Personal Information Protection Law (PIPL) in 2021. So far, there exists
limited research on the impacts of these new laws on apps' privacy practices.
To address this gap, this paper analyses data collection in pairs of 634
Chinese iOS apps, one version from early 2020 and one from late 2021. Our work
finds that many more apps now implement consent. Yet, those end-users that
decline consent will often be forced to exit the app. Fewer apps now collect
data without consent but many still integrate tracking libraries. We see our
findings as characteristic of a first iteration at Chinese data regulation with
room for improvement.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 08:43:14 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Feb 2023 09:00:47 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Mar 2023 10:04:14 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Kollnig",
"Konrad",
""
],
[
"Zhang",
"Lu",
""
],
[
"Zhao",
"Jun",
""
],
[
"Shadbolt",
"Nigel",
""
]
] |
new_dataset
| 0.994293 |
2302.13997
|
\v{S}imon Schierreich
|
Du\v{s}an Knop and \v{S}imon Schierreich
|
Host Community Respecting Refugee Housing
|
Proceedings of the 22nd International Conference on Autonomous Agents
and Multiagent Systems, AAMAS '23
| null | null | null |
cs.GT cs.DS econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel model for refugee housing respecting the preferences of
accepting community and refugees themselves. In particular, we are given a
topology representing the local community, a set of inhabitants occupying some
vertices of the topology, and a set of refugees that should be housed on the
empty vertices of the graph. Both the inhabitants and the refugees have preferences
over the structure of their neighbourhood.
We are specifically interested in the problem of finding housings such that
the preferences of every individual are met; using game-theoretical words, we
are looking for housings that are stable with respect to some well-defined
notion of stability. We investigate conditions under which the existence of
equilibria is guaranteed and study the computational complexity of finding such
a stable outcome. As the problem is NP-hard even in very simple settings, we
employ the parameterised complexity framework to give a finer-grained view on
the problem's complexity with respect to natural parameters and structural
restrictions of the given topology.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 17:42:03 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 13:33:41 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Knop",
"Dušan",
""
],
[
"Schierreich",
"Šimon",
""
]
] |
new_dataset
| 0.990833 |
2303.00180
|
Dimitrios Kollias
|
Dimitrios Kollias, Andreas Psaroudakis, Anastasios Arsenos, Paraskeui
Theofilou
|
FaceRNET: a Facial Expression Intensity Estimation Network
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents our approach for Facial Expression Intensity Estimation
from videos. It includes two components: i) a representation extractor network
that extracts various emotion descriptors (valence-arousal, action units and
basic expressions) from each video frame; ii) an RNN that captures temporal
information in the data, followed by a mask layer which enables handling
varying input video lengths through dynamic routing. This approach has been
tested on the Hume-Reaction dataset yielding excellent results.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 02:14:20 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 01:32:53 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Kollias",
"Dimitrios",
""
],
[
"Psaroudakis",
"Andreas",
""
],
[
"Arsenos",
"Anastasios",
""
],
[
"Theofilou",
"Paraskeui",
""
]
] |
new_dataset
| 0.99976 |
2303.00199
|
Kun Yang
|
Kun Yang, Jun Lu
|
DMSA: Dynamic Multi-scale Unsupervised Semantic Segmentation Based on
Adaptive Affinity
|
5 pages,4 figures
|
ICASSP 2023
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes DMSA, an end-to-end unsupervised semantic segmentation
architecture based on four loss functions. The framework uses an Atrous Spatial
Pyramid Pooling (ASPP) module to enhance feature
extraction. At the same time, a dynamic dilation strategy is designed to better
capture multi-scale context information. Secondly, a Pixel-Adaptive Refinement
(PAR) module is introduced, which can adaptively refine the initial pseudo
labels after feature fusion to obtain high quality pseudo labels. Experiments
show that the proposed DMSA framework is superior to the existing methods on
the saliency dataset. On the COCO 80 dataset, the MIoU is improved by 2.0, and
the accuracy is improved by 5.39. On the Pascal VOC 2012 Augmented dataset, the
MIoU is improved by 4.9, and the accuracy is improved by 3.4. In addition, the
convergence speed of the model is also greatly improved after the introduction
of the PAR module.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 03:08:30 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Yang",
"Kun",
""
],
[
"Lu",
"Jun",
""
]
] |
new_dataset
| 0.989657 |
2303.00202
|
Xin Zhou
|
Xin Zhou, Bowen Xu, Kisub Kim, DongGyun Han, Thanh Le-Cong, Junda He,
Bach Le, David Lo
|
PatchZero: Zero-Shot Automatic Patch Correctness Assessment
|
12 pages
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated Program Repair (APR) techniques have shown more and more promising
results in fixing real-world bugs. Despite the effectiveness, APR techniques
still face an overfitting problem: a generated patch can be incorrect although
it passes all tests. It is time-consuming to manually evaluate the correctness
of generated patches that can pass all tests. To address this problem, many
approaches have been proposed to automatically assess the correctness of
patches generated by APR techniques. However, existing approaches require a
large set of manually labeled patches as the training data. To mitigate the
issue, in this study, we propose PatchZero, a patch correctness assessment
approach that adopts large pre-trained models. Specifically, for patches generated by a new
or unseen APR tool, PatchZero does not need labeled patches of this new or
unseen APR tool for training (i.e., zero-shot) but directly queries the large
pre-trained model to get predictions on the correctness labels without
training. In this way, PatchZero can reduce the manual labeling effort when
building a model to automatically assess the correctness of generated patches
of new APR tools. To provide knowledge regarding the automatic patch
correctness assessment (APCA) task to the large pre-trained models, we also
design an instance-wise demonstration formation strategy by using contrastive
learning. Specifically, PatchZero selects semantically similar patches to help
the large pre-trained model to give more accurate predictions on the unlabeled
patches. Our experimental results showed that PatchZero can achieve an accuracy
of 82.7% and an F1-score of 86.0% on average although no labeled patch of the
new or unseen APR tool is available. In addition, our proposed technique
outperformed the prior state-of-the-art by a large margin.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 03:12:11 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 08:00:09 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Zhou",
"Xin",
""
],
[
"Xu",
"Bowen",
""
],
[
"Kim",
"Kisub",
""
],
[
"Han",
"DongGyun",
""
],
[
"Le-Cong",
"Thanh",
""
],
[
"He",
"Junda",
""
],
[
"Le",
"Bach",
""
],
[
"Lo",
"David",
""
]
] |
new_dataset
| 0.999314 |
2303.00409
|
Ming-Chang Lee
|
Ming-Chang Lee and Jia-Chun Lin
|
RePAD2: Real-Time, Lightweight, and Adaptive Anomaly Detection for
Open-Ended Time Series
|
10 pages, 11 figures, and 10 tables, the paper is accepted by 8th
International Conference on Internet of Things, Big Data and Security (IoTBDS
2023)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An open-ended time series refers to a series of data points indexed in time
order without an end. Such a time series can be found everywhere due to the
prevalence of Internet of Things. Providing lightweight and real-time anomaly
detection for open-ended time series is highly desirable to industry and
organizations since it allows immediate response and avoids potential financial
loss. In the last few years, several real-time time series anomaly detection
approaches have been introduced. However, they might exhaust system resources
when they are applied to open-ended time series for a long time. To address
this issue, in this paper we propose RePAD2, a lightweight real-time anomaly
detection approach for open-ended time series by improving its predecessor
RePAD, which is one of the state-of-the-art anomaly detection approaches. We
conducted a series of experiments to compare RePAD2 with RePAD and another
similar detection approach based on real-world time series datasets, and
demonstrated that RePAD2 can address the mentioned resource exhaustion issue
while offering comparable detection accuracy and slightly less time
consumption.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 11:00:20 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 08:04:03 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Lee",
"Ming-Chang",
""
],
[
"Lin",
"Jia-Chun",
""
]
] |
new_dataset
| 0.999355 |
2303.00910
|
Tomoya Kamimura
|
Yusuke Sakurai, Tomoya Kamimura, Yuki Sakamoto, Shohei Nishii, Kodai
Sato, Yuta Fujiwara, and Akihito Sano
|
Bipedal Robot Running: Human-like Actuation Timing Using Fast and Slow
Adaptations
|
7 pages, 12 figures, submitted to the 2023 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS 2023)
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
We have been developing human-sized biped robots based on passive dynamic
mechanisms. In human locomotion, the muscles activate at the same rate relative
to the gait cycle during running. To achieve adaptive running for robots, such
characteristics should be reproduced to yield the desired effect. In this
study, we designed a central pattern generator (CPG) involving fast and slow
adaptation to achieve human-like running using a simple spring-mass model and
our developed bipedal robot, which is equipped with actuators that imitate the
human musculoskeletal system. Our results demonstrate that fast and slow
adaptations can reproduce human-like running with a constant rate of muscle
firing relative to the gait cycle. Furthermore, the results suggest that the
CPG contributes to the adjustment of the muscle activation timing in human
running.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 02:12:21 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Sakurai",
"Yusuke",
""
],
[
"Kamimura",
"Tomoya",
""
],
[
"Sakamoto",
"Yuki",
""
],
[
"Nishii",
"Shohei",
""
],
[
"Sato",
"Kodai",
""
],
[
"Fujiwara",
"Yuta",
""
],
[
"Sano",
"Akihito",
""
]
] |
new_dataset
| 0.998042 |
2303.00947
|
Akshay Sarvesh
|
Sarvesh Mayilvahanan and Akshay Sarvesh and Swaminathan Gopalswamy
|
Reshaping Viscoelastic-String Path-Planner (RVP)
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Reshaping Viscoelastic-String Path-Planner (RVP), a path planner
that reshapes a desired Global Plan for a Robotic Vehicle based on sensor
observations of the Environment. We model the path to be a viscoelastic string
with shape preserving tendencies, approximated by a connected series of
Springs, Masses, and Dampers. The resultant path is then reshaped according to
the forces emanating from the obstacles until an equilibrium is reached. The
reshaped path remains close in shape to the original path because of Anchor
Points that connect to the discrete masses through springs. The final path is
the resultant equilibrium configuration of the Spring-Mass-Damper network. Two
key concepts enable RVP: (i) Virtual Obstacle Forces that push the
Spring-Mass-Damper system away from the original path, and (ii) Anchor Points that,
in conjunction with the Spring-Mass-Damper network, attempt to retain the
path shape. We demonstrate the results in simulation and compare its
performance with an existing Reshaping Local Planner that also takes a Global
Plan and reshapes it according to sensor based observations of the environment.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 03:53:48 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Mayilvahanan",
"Sarvesh",
""
],
[
"Sarvesh",
"Akshay",
""
],
[
"Gopalswamy",
"Swaminathan",
""
]
] |
new_dataset
| 0.997381 |
2303.01000
|
Yuval Kirstain
|
Yuval Kirstain, Omer Levy, Adam Polyak
|
X&Fuse: Fusing Visual Information in Text-to-Image Generation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce X&Fuse, a general approach for conditioning on visual
information when generating images from text. We demonstrate the potential of
X&Fuse in three different text-to-image generation scenarios. (i) When a bank
of images is available, we retrieve and condition on a related image
(Retrieve&Fuse), resulting in significant improvements on the MS-COCO
benchmark, gaining a state-of-the-art FID score of 6.65 in zero-shot settings.
(ii) When cropped-object images are at hand, we utilize them and perform
subject-driven generation (Crop&Fuse), outperforming the textual inversion
method while being more than x100 faster. (iii) Having oracle access to the
image scene (Scene&Fuse), allows us to achieve an FID score of 5.03 on MS-COCO
in zero-shot settings. Our experiments indicate that X&Fuse is an effective,
easy-to-adapt, simple, and general approach for scenarios in which the model
may benefit from additional visual information.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 06:33:33 GMT"
}
] | 2023-03-03T00:00:00 |
[
[
"Kirstain",
"Yuval",
""
],
[
"Levy",
"Omer",
""
],
[
"Polyak",
"Adam",
""
]
] |
new_dataset
| 0.992512 |