id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable ⌀) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable ⌀) | journal-ref (string, 4-345 chars, nullable ⌀) | doi (string, 11-120 chars, nullable ⌀) | report-no (string, 2-243 chars, nullable ⌀) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
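Each row that follows is one arXiv record with the columns above, ending in a model prediction and its probability. As a minimal, illustrative sketch only (the helper name, thresholds, and abridged example dict are assumptions, not part of the dataset), records keyed by these column names could be filtered on the probability field like this:

```python
# Illustrative sketch only: each record is an arXiv metadata row plus a
# classifier output ("prediction") and its "probability". The helper below
# filters a list of such records by the probability column; the function name,
# thresholds, and the abridged example record are hypothetical.
from typing import Any, Dict, List


def high_confidence(records: List[Dict[str, Any]], threshold: float = 0.99) -> List[Dict[str, Any]]:
    """Keep records whose prediction probability meets the threshold."""
    return [r for r in records if float(r["probability"]) >= threshold]


# Example record, abridged from the first row below.
example = {
    "id": "2303.06330",
    "title": "PRSNet: A Masked Self-Supervised Learning Pedestrian Re-Identification Method",
    "categories": "cs.CV",
    "prediction": "new_dataset",
    "probability": 0.971352,
}

for r in high_confidence([example], threshold=0.95):
    print(r["id"], r["prediction"], r["probability"])
```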
2303.06330
|
Hao Xiang
|
Zhijie Xiao, Zhicheng Dong, Hao Xiang
|
PRSNet: A Masked Self-Supervised Learning Pedestrian Re-Identification
Method
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, self-supervised learning has attracted widespread academic
debate and addressed many of the key issues of computer vision. The present
research focus is on how to construct a good agent task that allows for
improved network learning of advanced semantic information on images so that
model reasoning is accelerated during pre-training of the current task. In
order to solve the problem that existing feature extraction networks are
pre-trained on the ImageNet dataset and cannot extract the fine-grained
information in pedestrian images well, and the existing pre-task of contrast
self-supervised learning may destroy the original properties of pedestrian
images, this paper designs a pre-task of mask reconstruction to obtain a
pre-training model with strong robustness and uses it for the pedestrian
re-identification task. The training optimization of the network is performed
by improving the triplet loss based on the centroid, and the mask image is
added as an additional sample to the loss calculation, so that the network can
better cope with the pedestrian matching in practical applications after the
training is completed. This method achieves about 5% higher mAP on Market1501
and CUHK03 data than existing self-supervised learning pedestrian
re-identification methods, and about 1% higher for Rank1, and ablation
experiments are conducted to demonstrate the feasibility of this method. Our
model code is located at https://github.com/ZJieX/prsnet.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 07:20:32 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Xiao",
"Zhijie",
""
],
[
"Dong",
"Zhicheng",
""
],
[
"Xiang",
"Hao",
""
]
] |
new_dataset
| 0.971352 |
2303.06379
|
Weiming Xu
|
Weiming Xu, Zhihao Guo
|
TaylorAECNet: A Taylor Style Neural Network for Full-Band Echo
Cancellation
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes aecX team's entry to the ICASSP 2023 acoustic echo
cancellation (AEC) challenge. Our system consists of an adaptive filter and a
proposed full-band Taylor-style acoustic echo cancellation neural network
(TaylorAECNet) as a post-filter. Specifically, we leverage the recent advances
in Taylor expansion based decoupling-style interpretable speech enhancement and
explore its feasibility in the AEC task. Our TaylorAECNet based approach
achieves an overall mean opinion score (MOS) of 4.241, a word accuracy (WAcc)
ratio of 0.767, and ranks 5th in the non-personalized track (track 1).
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 11:12:49 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Xu",
"Weiming",
""
],
[
"Guo",
"Zhihao",
""
]
] |
new_dataset
| 0.967911 |
2303.06458
|
Fenglin Liu
|
Bang Yang, Fenglin Liu, Yuexian Zou, Xian Wu, Yaowei Wang, and David
A. Clifton
|
ZeroNLG: Aligning and Autoencoding Domains for Zero-Shot Multimodal and
Multilingual Natural Language Generation
|
We will release the codes and models at
https://github.com/yangbang18/ZeroNLG soon. Without any labeled downstream
pairs for training, the ZeroNLG can deal with multiple NLG tasks, including
image-to-text, video-to-text, and text-to-text, across English, Chinese,
German, and French within a unified framework
| null | null | null |
cs.CL cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Natural Language Generation (NLG) accepts input data in the form of images,
videos, or text and generates corresponding natural language text as output.
Existing NLG methods mainly adopt a supervised approach and rely heavily on
coupled data-to-text pairs. However, for many targeted scenarios and for
non-English languages, sufficient quantities of labeled data are often not
available. To relax the dependency on labeled data of downstream tasks, we
propose an intuitive and effective zero-shot learning framework, ZeroNLG, which
can deal with multiple NLG tasks, including image-to-text (image captioning),
video-to-text (video captioning), and text-to-text (neural machine
translation), across English, Chinese, German, and French within a unified
framework. ZeroNLG does not require any labeled downstream pairs for training.
During training, ZeroNLG (i) projects different domains (across modalities and
languages) to corresponding coordinates in a shared common latent space; (ii)
bridges different domains by aligning their corresponding coordinates in this
space; and (iii) builds an unsupervised multilingual auto-encoder to learn to
generate text by reconstructing the input text given its coordinate in shared
latent space. Consequently, during inference, based on the data-to-text
pipeline, ZeroNLG can generate target sentences across different languages
given the coordinate of input data in the common space. Within this unified
framework, given visual (imaging or video) data as input, ZeroNLG can perform
zero-shot visual captioning; given textual sentences as input, ZeroNLG can
perform zero-shot machine translation. We present the results of extensive
experiments on twelve NLG tasks, showing that, without using any labeled
downstream pairs for training, ZeroNLG generates high-quality and believable
outputs and significantly outperforms existing zero-shot methods.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 17:14:33 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Yang",
"Bang",
""
],
[
"Liu",
"Fenglin",
""
],
[
"Zou",
"Yuexian",
""
],
[
"Wu",
"Xian",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Clifton",
"David A.",
""
]
] |
new_dataset
| 0.974089 |
2303.06513
|
Huthaifa I. Ashqar
|
Ahmad Hamarshe, Huthaifa I. Ashqar, and Mohammad Hamarsheh
|
Detection of DDoS Attacks in Software Defined Networking Using Machine
Learning Models
| null | null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The concept of Software Defined Networking (SDN) represents a modern approach
to networking that separates the control plane from the data plane through
network abstraction, resulting in a flexible, programmable and dynamic
architecture compared to traditional networks. The separation of control and
data planes has led to a high degree of network resilience, but has also given
rise to new security risks, including the threat of distributed
denial-of-service (DDoS) attacks, which pose a new challenge in the SDN
environment. In this paper, the effectiveness of using machine learning
algorithms to detect distributed denial-of-service (DDoS) attacks in
software-defined networking (SDN) environments is investigated. Four
algorithms, including Random Forest, Decision Tree, Support Vector Machine, and
XGBoost, were tested on the CICDDoS2019 dataset, with the timestamp feature
dropped among others. Performance was assessed by measures of accuracy, recall,
precision, and F1 score, with the Random Forest algorithm having the highest
accuracy, at 68.9%. The results indicate that ML-based detection is a more
accurate and effective method for identifying DDoS attacks in SDN, despite the
computational requirements of non-parametric algorithms.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 22:56:36 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Hamarshe",
"Ahmad",
""
],
[
"Ashqar",
"Huthaifa I.",
""
],
[
"Hamarsheh",
"Mohammad",
""
]
] |
new_dataset
| 0.993347 |
2303.06537
|
Sungbok Shin
|
Sungbok Shin, Sanghyun Hong, Niklas Elmqvist
|
Perceptual Pat: A Virtual Human System for Iterative Visualization
Design
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Designing a visualization is often a process of iterative refinement where
the designer improves a chart over time by adding features, improving
encodings, and fixing mistakes. However, effective design requires external
critique and evaluation. Unfortunately, such critique is not always available
on short notice and evaluation can be costly. To address this need, we present
Perceptual Pat, an extensible suite of AI and computer vision techniques that
forms a virtual human visual system for supporting iterative visualization
design. The system analyzes snapshots of a visualization using an extensible
set of filters - including gaze maps, text recognition, color analysis, etc -
and generates a report summarizing the findings. The web-based Pat Design Lab
provides a version tracking system that enables the designer to track
improvements over time. We validate Perceptual Pat using a longitudinal
qualitative study involving 4 professional visualization designers that used
the tool over a few days to design a new visualization.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 01:54:01 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Shin",
"Sungbok",
""
],
[
"Hong",
"Sanghyun",
""
],
[
"Elmqvist",
"Niklas",
""
]
] |
new_dataset
| 0.981032 |
2303.06542
|
Jean-Philippe Roberge
|
Etienne Roberge, Guillaume Fornes, Jean-Philippe Roberge
|
StereoTac: a Novel Visuotactile Sensor that Combines Tactile Sensing
with 3D Vision
|
8 pages, 11 figures, submitted to IEEE Robotics and Automation
Letters (RA-L) on March 11 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Combining 3D vision with tactile sensing could unlock a greater level of
dexterity for robots and improve several manipulation tasks. However, obtaining
a close-up 3D view of the location where manipulation contacts occur can be
challenging, particularly in confined spaces, cluttered environments, or
without installing more sensors on the end effector. In this context, this
paper presents StereoTac, a novel vision-based sensor that combines tactile
sensing with 3D vision. The proposed sensor relies on stereoscopic vision to
capture a 3D representation of the environment before contact and uses
photometric stereo to reconstruct the tactile imprint generated by an object
during contact. To this end, two cameras were integrated in a single sensor,
whose interface is made of a transparent elastomer coated with a thin layer of
paint with a level of transparency that can be adjusted by varying the sensor's
internal lighting conditions. We describe the sensor's fabrication and evaluate
its performance for both tactile perception and 3D vision. Our results show
that the proposed sensor can reconstruct a 3D view of a scene just before
grasping and perceive the tactile imprint after grasping, allowing for
monitoring of the contact during manipulation.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 02:25:53 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Roberge",
"Etienne",
""
],
[
"Fornes",
"Guillaume",
""
],
[
"Roberge",
"Jean-Philippe",
""
]
] |
new_dataset
| 0.999456 |
2303.06588
|
A.B. Siddique
|
M.H. Maqbool, Umar Farooq, Adib Mosharrof, A.B. Siddique, Hassan
Foroosh
|
MobileRec: A Large-Scale Dataset for Mobile Apps Recommendation
|
10 pages, 4 tables, 4 figures, Under submission at SIGIR'23
| null | null | null |
cs.IR cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Recommender systems have become ubiquitous in our digital lives, from
recommending products on e-commerce websites to suggesting movies and music on
streaming platforms. Existing recommendation datasets, such as Amazon Product
Reviews and MovieLens, greatly facilitated the research and development of
recommender systems in their respective domains. While the number of mobile
users and applications (aka apps) has increased exponentially over the past
decade, research in mobile app recommender systems has been significantly
constrained, primarily due to the lack of high-quality benchmark datasets, as
opposed to recommendations for products, movies, and news. To facilitate
research for app recommendation systems, we introduce a large-scale dataset,
called MobileRec. We constructed MobileRec from users' activity on the Google
play store. MobileRec contains 19.3 million user interactions (i.e., user
reviews on apps) with over 10K unique apps across 48 categories. MobileRec
records the sequential activity of a total of 0.7 million distinct users. Each
of these users has interacted with no fewer than five distinct apps, which
stands in contrast to previous datasets on mobile apps that recorded only a
single interaction per user. Furthermore, MobileRec presents users' ratings as
well as sentiments on installed apps, and each app contains rich metadata such
as app name, category, description, and overall rating, among others. We
demonstrate that MobileRec can serve as an excellent testbed for app
recommendation through a comparative study of several state-of-the-art
recommendation approaches. The quantitative results can act as a baseline for
other researchers to compare their results against. The MobileRec dataset is
available at https://huggingface.co/datasets/recmeapp/mobilerec.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 06:39:40 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Maqbool",
"M. H.",
""
],
[
"Farooq",
"Umar",
""
],
[
"Mosharrof",
"Adib",
""
],
[
"Siddique",
"A. B.",
""
],
[
"Foroosh",
"Hassan",
""
]
] |
new_dataset
| 0.999842 |
2303.06596
|
Jiayang Ao
|
Jiayang Ao, Qiuhong Ke, Krista A. Ehinger
|
Amodal Intra-class Instance Segmentation: New Dataset and Benchmark
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Images of realistic scenes often contain intra-class objects that are heavily
occluded from each other, making the amodal perception task that requires
parsing the occluded parts of the objects challenging. Although important for
downstream tasks such as robotic grasping systems, the lack of large-scale
amodal datasets with detailed annotations makes it difficult to model
intra-class occlusions explicitly. This paper introduces a new amodal dataset
for image amodal completion tasks, which contains over 255K images of
intra-class occlusion scenarios, annotated with multiple masks, amodal bounding
boxes, dual order relations and full appearance for instances and background.
We also present a point-supervised scheme with layer priors for amodal instance
segmentation specifically designed for intra-class occlusion scenarios.
Experiments show that our weakly supervised approach outperforms the SOTA fully
supervised methods, while our layer priors design exhibits remarkable
performance improvements in the case of intra-class occlusion in both synthetic
and real images.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 07:28:36 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Ao",
"Jiayang",
""
],
[
"Ke",
"Qiuhong",
""
],
[
"Ehinger",
"Krista A.",
""
]
] |
new_dataset
| 0.998597 |
2303.06623
|
Joshua Tanner
|
Joshua Tanner and Jacob Hoffman
|
MWE as WSD: Solving Multiword Expression Identification with Word Sense
Disambiguation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent work in word sense disambiguation (WSD) utilizes encodings of the
sense gloss (definition text), in addition to the input words and context, to
improve performance. In this work we demonstrate that this approach can be
adapted for use in multiword expression (MWE) identification by training a
Bi-encoder model which uses gloss and context information to filter MWE
candidates produced from a simple rule-based extraction pipeline. We achieve
state-of-the-art results in MWE identification on the DiMSUM dataset, and
competitive results on the PARSEME 1.1 English dataset using this method. Our
model also retains most of its ability to perform WSD, demonstrating that a
single model can successfully be applied to both of these tasks. Additionally,
we experiment with applying Poly-encoder models to MWE identification and WSD,
introducing a modified Poly-encoder architecture which outperforms the standard
Poly-encoder on these tasks.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 09:35:42 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Tanner",
"Joshua",
""
],
[
"Hoffman",
"Jacob",
""
]
] |
new_dataset
| 0.996477 |
2303.06642
|
Wim Vanderbauwhede
|
Wim Vanderbauwhede
|
Frugal Computing -- On the need for low-carbon and sustainable computing
and the path towards zero-carbon computing
|
IAB workshop on Environmental Impact of Internet Applications and
Systems, 2022
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The current emissions from computing are almost 4% of the world total. This
is already more than the emissions from the airline industry, and they are
projected to rise steeply over the next two decades. By 2040 emissions from computing alone
will account for more than half of the emissions budget to keep global warming
below 1.5$^\circ$C. Consequently, this growth in computing emissions is
unsustainable. The emissions from production of computing devices exceed the
emissions from operating them, so even if devices are more energy efficient
producing more of them will make the emissions problem worse. Therefore we must
extend the useful life of our computing devices. As a society we need to start
treating computational resources as finite and precious, to be utilised only
when necessary, and as effectively as possible. We need frugal computing:
achieving our aims with less energy and material.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 12:02:21 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Vanderbauwhede",
"Wim",
""
]
] |
new_dataset
| 0.997373 |
2303.06669
|
Evanthia Papadopoulou
|
Evanthia Papadopoulou
|
Abstract Voronoi-like Graphs: Extending Delaunay's Theorem and
Applications
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Any system of bisectors (in the sense of abstract Voronoi diagrams) defines
an arrangement of simple curves in the plane. We define Voronoi-like graphs on
such an arrangement, which are graphs whose vertices are locally Voronoi. A
vertex $v$ is called locally Voronoi, if $v$ and its incident edges appear in
the Voronoi diagram of three sites. In a so-called admissible bisector system,
where Voronoi regions are connected and cover the plane, we prove that any
Voronoi-like graph is indeed an abstract Voronoi diagram. The result can be
seen as an abstract dual version of Delaunay's theorem on (locally) empty
circles.
Further, we define Voronoi-like cycles in an admissible bisector system, and
show that the Voronoi-like graph induced by such a cycle $C$ is a unique tree
(or a forest, if $C$ is unbounded). In the special case where $C$ is the
boundary of an abstract Voronoi region, the induced Voronoi-like graph can be
computed in expected linear time following the technique of [Junginger and
Papadopoulou SOCG'18]. Otherwise, within the same time, the algorithm
constructs the Voronoi-like graph of a cycle $C'$ on the same set (or subset)
of sites, which may equal $C$ or be enclosed by $C$. Overall, the technique
computes abstract Voronoi (or Voronoi-like) trees and forests in linear
expected time, given the order of their leaves along a Voronoi-like cycle. We
show a direct application in updating a constrained Delaunay triangulation in
linear expected time, after the insertion of a new segment constraint,
simplifying upon the result of [Shewchuk and Brown CGTA 2015].
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 14:22:41 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Papadopoulou",
"Evanthia",
""
]
] |
new_dataset
| 0.999559 |
2303.06670
|
Xinye Wanyan
|
Xinye Wanyan, Sachith Seneviratne, Shuchang Shen, Michael Kirley
|
DINO-MC: Self-supervised Contrastive Learning for Remote Sensing Imagery
with Multi-sized Local Crops
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the costly nature of remote sensing image labeling and the large
volume of available unlabeled imagery, self-supervised methods that can learn
feature representations without manual annotation have received great
attention. While prior works have explored self-supervised learning in remote
sensing tasks, pretext tasks based on local-global view alignment remain
underexplored. Inspired by DINO, which employs an effective representation
learning structure with knowledge distillation based on global-local view
alignment, we formulate two pretext tasks for use in self-supervised learning
on remote sensing imagery (SSLRS). Using these tasks, we explore the
effectiveness of positive temporal contrast as well as multi-sized views on
SSLRS. Moreover, we extend DINO and propose DINO-MC which uses local views of
various sized crops instead of a single fixed size. Our experiments demonstrate
that even when pre-trained on only 10% of the dataset, DINO-MC performs on par
or better than existing state of the art SSLRS methods on multiple remote
sensing tasks, while using less computational resources. All codes, models and
results are available at https://github.com/WennyXY/DINO-MC.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 14:24:10 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Wanyan",
"Xinye",
""
],
[
"Seneviratne",
"Sachith",
""
],
[
"Shen",
"Shuchang",
""
],
[
"Kirley",
"Michael",
""
]
] |
new_dataset
| 0.989778 |
2303.06673
|
Haonan Han
|
Haonan Han, Rui Yang, Shuyan Li, Runze Hu and Xiu Li
|
SSGD: A smartphone screen glass dataset for defect detection
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive devices with touch screen have become commonly used in various
aspects of daily life, which raises the demand for high production quality of
touch screen glass. While it is desirable to develop effective defect detection
technologies to optimize the automatic touch screen production lines, the
development of these technologies suffers from the lack of publicly available
datasets. To address this issue, we in this paper propose a dedicated touch
screen glass defect dataset which includes seven types of defects and consists
of 2504 images captured in various scenarios. All data are captured with
professional acquisition equipment on the fixed workstation. Additionally, we
benchmark the CNN- and Transformer-based object detection frameworks on the
proposed dataset to demonstrate the challenges of defect detection on
high-resolution images. Dataset and related code will be available at
https://github.com/Yangr116/SSGDataset.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 14:26:56 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Han",
"Haonan",
""
],
[
"Yang",
"Rui",
""
],
[
"Li",
"Shuyan",
""
],
[
"Hu",
"Runze",
""
],
[
"Li",
"Xiu",
""
]
] |
new_dataset
| 0.99979 |
2303.06678
|
Jiaze Wang
|
Yi Wang, Jiaze Wang, Jinpeng Li, Zixu Zhao, Guangyong Chen, Anfeng Liu
and Pheng-Ann Heng
|
PointPatchMix: Point Cloud Mixing with Patch Scoring
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data augmentation is an effective regularization strategy for mitigating
overfitting in deep neural networks, and it plays a crucial role in 3D vision
tasks, where the point cloud data is relatively limited. While mixing-based
augmentation has shown promise for point clouds, previous methods mix point
clouds either on block level or point level, which has constrained their
ability to strike a balance between generating diverse training samples and
preserving the local characteristics of point clouds. Additionally, the varying
importance of each part of the point clouds has not been fully considered,
since not all parts contribute equally to the classification task, and some
parts may contain unimportant or redundant information. To overcome these
challenges, we propose PointPatchMix, a novel approach that mixes point clouds
at the patch level and integrates a patch scoring module to generate
content-based targets for mixed point clouds. Our approach preserves local
features at the patch level, while the patch scoring module assigns targets
based on the content-based significance score from a pre-trained teacher model.
We evaluate PointPatchMix on two benchmark datasets, ModelNet40 and
ScanObjectNN, and demonstrate significant improvements over various baselines
in both synthetic and real-world datasets, as well as few-shot settings. With
Point-MAE as our baseline, our model surpasses previous methods by a
significant margin, achieving 86.3% accuracy on ScanObjectNN and 94.1% accuracy
on ModelNet40. Furthermore, our approach shows strong generalization across
multiple architectures and enhances the robustness of the baseline model.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 14:49:42 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Wang",
"Yi",
""
],
[
"Wang",
"Jiaze",
""
],
[
"Li",
"Jinpeng",
""
],
[
"Zhao",
"Zixu",
""
],
[
"Chen",
"Guangyong",
""
],
[
"Liu",
"Anfeng",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
new_dataset
| 0.996341 |
2303.06691
|
Kwabena Nuamah
|
Kwabena Nuamah and Alan Bundy
|
ALIST: Associative Logic for Inference, Storage and Transfer. A Lingua
Franca for Inference on the Web
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent developments in support for constructing knowledge graphs have led to
a rapid rise in their creation both on the Web and within organisations. Added
to existing sources of data, including relational databases, APIs, etc., there
is a strong demand for techniques to query these diverse sources of knowledge.
While formal query languages, such as SPARQL, exist for querying some knowledge
graphs, users are required to know which knowledge graphs they need to query
and the unique resource identifiers of the resources they need. Although
alternative techniques in neural information retrieval embed the content of
knowledge graphs in vector spaces, they fail to provide the representation and
query expressivity needed (e.g. inability to handle non-trivial aggregation
functions such as regression). We believe that a lingua franca, i.e. a
formalism, that enables such representational flexibility will increase the
ability of intelligent automated agents to combine diverse data sources by
inference.
Our work proposes a flexible representation (alists) to support intelligent
federated querying of diverse knowledge sources. Our contribution includes (1)
a formalism that abstracts the representation of queries from the specific
query language of a knowledge graph; (2) a representation to dynamically curate
data and functions (operations) to perform non-trivial inference over diverse
knowledge sources; (3) a demonstration of the expressiveness of alists to
represent the diversity of representational formalisms, including SPARQL
queries, and more generally first-order logic expressions.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 15:55:56 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Nuamah",
"Kwabena",
""
],
[
"Bundy",
"Alan",
""
]
] |
new_dataset
| 0.990317 |
2303.06714
|
Liguo Zhou
|
Haichuan Li, Liguo Zhou, Alois Knoll
|
BCSSN: Bi-direction Compact Spatial Separable Network for Collision
Avoidance in Autonomous Driving
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Autonomous driving has been an active area of research and development, with
various strategies being explored for decision-making in autonomous vehicles.
Rule-based systems, decision trees, Markov decision processes, and Bayesian
networks have been some of the popular methods used to tackle the complexities
of traffic conditions and avoid collisions. However, with the emergence of deep
learning, many researchers have turned towards CNN-based methods to improve the
performance of collision avoidance. Despite the promising results achieved by
some CNN-based methods, the failure to establish correlations between
sequential images often leads to more collisions. In this paper, we propose a
CNN-based method that overcomes the limitation by establishing feature
correlations between regions in sequential images using variants of attention.
Our method combines the advantages of CNN in capturing regional features with a
bi-directional LSTM to enhance the relationship between different local areas.
Additionally, we use an encoder to improve computational efficiency. Our method
takes "Bird's Eye View" graphs generated from camera and LiDAR sensors as
input, simulates the position (x, y) and head offset angle (Yaw) to generate
future trajectories. Experiment results demonstrate that our proposed method
outperforms existing vision-based strategies, achieving an average of only 3.7
collisions per 1000 miles of driving distance on the L5kit test set. This
significantly improves the success rate of collision avoidance and provides a
promising solution for autonomous driving.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 17:35:57 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Li",
"Haichuan",
""
],
[
"Zhou",
"Liguo",
""
],
[
"Knoll",
"Alois",
""
]
] |
new_dataset
| 0.990636 |
2303.06729
|
Setu Kumar Basak
|
Setu Kumar Basak, Lorenzo Neil, Bradley Reaves, Laurie Williams
|
SecretBench: A Dataset of Software Secrets
|
Accepted at the Data and Tool Showcase Track of the 20th
International Conference on Mining Software Repositories (MSR 2023)
| null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
According to GitGuardian's monitoring of public GitHub repositories, the
exposure of secrets (API keys and other credentials) increased two-fold in 2021
compared to 2020, totaling more than six million secrets. However, no benchmark
dataset is publicly available for researchers and tool developers to evaluate
secret detection tools that produce many false positive warnings. The goal of
our paper is to aid researchers and tool developers in evaluating and improving
secret detection tools by curating a benchmark dataset of secrets through a
systematic collection of secrets from open-source repositories. We present a
labeled dataset of source codes containing 97,479 secrets (of which 15,084 are
true secrets) of various secret types extracted from 818 public GitHub
repositories. The dataset covers 49 programming languages and 311 file types.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 19:16:43 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Basak",
"Setu Kumar",
""
],
[
"Neil",
"Lorenzo",
""
],
[
"Reaves",
"Bradley",
""
],
[
"Williams",
"Laurie",
""
]
] |
new_dataset
| 0.999826 |
2303.06800
|
Hyeongseok Son
|
Hyeongseok Son, Sangil Jung, Solae Lee, Seongeun Kim, Seung-In Park,
ByungIn Yoo
|
Object-Centric Multi-Task Learning for Human Instances
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human is one of the most essential classes in visual recognition tasks such
as detection, segmentation, and pose estimation. Although much effort has been
put into individual tasks, multi-task learning for these three tasks has been
rarely studied. In this paper, we explore a compact multi-task network
architecture that maximally shares the parameters of the multiple tasks via
object-centric learning. To this end, we propose a novel query design to encode
the human instance information effectively, called human-centric query (HCQ).
HCQ enables the query to learn explicit and structural information of the human
instance, such as keypoints. Besides, we utilize HCQ in prediction heads of the
target tasks directly and also interweave HCQ with the deformable attention in
Transformer decoders to exploit a well-learned object-centric representation.
Experimental results show that the proposed multi-task network achieves
comparable accuracy to state-of-the-art task-specific models in human
detection, segmentation, and pose estimation tasks, while consuming lower
computational cost.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 01:10:50 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Son",
"Hyeongseok",
""
],
[
"Jung",
"Sangil",
""
],
[
"Lee",
"Solae",
""
],
[
"Kim",
"Seongeun",
""
],
[
"Park",
"Seung-In",
""
],
[
"Yoo",
"ByungIn",
""
]
] |
new_dataset
| 0.952879 |
2303.06821
|
Lutao Jiang
|
Lutao Jiang, Ruyi Ji, Libo Zhang
|
SDF-3DGAN: A 3D Object Generative Method Based on Implicit Signed
Distance Function
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we develop a new method, termed SDF-3DGAN, for 3D object
generation and 3D-Aware image synthesis tasks, which introduce implicit Signed
Distance Function (SDF) as the 3D object representation method in the
generative field. We apply SDF for higher quality representation of 3D object
in space and design a new SDF neural renderer, which has higher efficiency and
higher accuracy. To train only on 2D images, we first generate the objects,
which are represented by SDF, from Gaussian distribution. Then we render them
to 2D images and use them to apply GAN training method together with 2D images
in the dataset. In the new rendering method, we exploit the mathematical
properties of the SDF to alleviate the computation pressure of the previous SDF
neural renderer. Specifically, our new SDF neural renderer can solve the problem
of sampling ambiguity when the number of sampling points is not enough, i.e., it
uses fewer points to finish a higher quality sampling task in the rendering
pipeline. And in this rendering pipeline, we can locate the surface easily.
Therefore, we apply normal loss on it to control the smoothness of generated
object surface, which can make our method enjoy the much higher generation
quality. Quantitative and qualitative experiments conducted on public
benchmarks demonstrate favorable performance against the state-of-the-art
methods in 3D object generation task and 3D-Aware image synthesis task. Our
codes will be released at https://github.com/lutao2021/SDF-3DGAN.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 02:48:54 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Jiang",
"Lutao",
""
],
[
"Ji",
"Ruyi",
""
],
[
"Zhang",
"Libo",
""
]
] |
new_dataset
| 0.999591 |
2303.06881
|
Chencan Fu
|
Chencan Fu, Lin Li, Linpeng Peng, Yukai Ma, Xiangrui Zhao, and Yong
Liu
|
OverlapNetVLAD: A Coarse-to-Fine Framework for LiDAR-based Place
Recognition
|
Submitted to IROS2023
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Place recognition is a challenging yet crucial task in robotics. Existing 3D
LiDAR place recognition methods suffer from limited feature representation
capability and long search times. To address these challenges, we propose a
novel coarse-to-fine framework for 3D LiDAR place recognition that combines
Birds' Eye View (BEV) feature extraction, coarse-grained matching, and
fine-grained verification. In the coarse stage, our framework leverages the
rich contextual information contained in BEV features to produce global
descriptors. Then the top-K most similar candidates are identified via
descriptor matching, which is fast but coarse-grained. In the fine stage, our
overlap estimation network reuses the corresponding BEV features to predict the
overlap region, enabling meticulous and precise matching. Experimental results
on the KITTI odometry benchmark demonstrate that our framework achieves leading
performance compared to state-of-the-art methods. Our code is available at:
https://github.com/fcchit/OverlapNetVLAD.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 05:56:36 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Fu",
"Chencan",
""
],
[
"Li",
"Lin",
""
],
[
"Peng",
"Linpeng",
""
],
[
"Ma",
"Yukai",
""
],
[
"Zhao",
"Xiangrui",
""
],
[
"Liu",
"Yong",
""
]
] |
new_dataset
| 0.995542 |
2303.06904
|
Digbalay Bose
|
Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Shrikanth Narayanan
|
Contextually-rich human affect perception using multimodal scene
information
|
Accepted to IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP), 2023
| null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The process of human affect understanding involves the ability to infer
person specific emotional states from various sources including images, speech,
and language. Affect perception from images has predominantly focused on
expressions extracted from salient face crops. However, emotions perceived by
humans rely on multiple contextual cues including social settings, foreground
interactions, and ambient visual scenes. In this work, we leverage pretrained
vision-language (VLN) models to extract descriptions of foreground context from
images. Further, we propose a multimodal context fusion (MCF) module to combine
foreground cues with the visual scene and person-based contextual information
for emotion prediction. We show the effectiveness of our proposed modular
design on two datasets associated with natural scenes and TV shows.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 07:46:41 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Bose",
"Digbalay",
""
],
[
"Hebbar",
"Rajat",
""
],
[
"Somandepalli",
"Krishna",
""
],
[
"Narayanan",
"Shrikanth",
""
]
] |
new_dataset
| 0.996838 |
2303.06905
|
Sixiang Chen
|
Sixiang Chen, Tian Ye, Jun Shi, Yun Liu, JingXia Jiang, Erkang Chen,
Peng Chen
|
DEHRFormer: Real-time Transformer for Depth Estimation and Haze Removal
from Varicolored Haze Scenes
|
Accepted to ICASSP'2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Varicolored haze caused by chromatic casts poses haze removal and depth
estimation challenges. Recent learning-based depth estimation methods are
mainly targeted at dehazing first and estimating depth subsequently from
haze-free scenes. This way, the inner connections between colored haze and
scene depth are lost. In this paper, we propose a real-time transformer for
simultaneous single image Depth Estimation and Haze Removal (DEHRFormer).
DEHRFormer consists of a single encoder and two task-specific decoders. The
transformer decoders with learnable queries are designed to decode coupling
features from the task-agnostic encoder and project them into clean image and
depth map, respectively. In addition, we introduce a novel learning paradigm
that utilizes contrastive learning and domain consistency learning to tackle
weak-generalization problem for real-world dehazing, while predicting the same
depth map from the same scene with varicolored haze. Experiments demonstrate
that DEHRFormer achieves significant performance improvement across diverse
varicolored haze scenes over previous depth estimation networks and dehazing
approaches.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 07:47:18 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Chen",
"Sixiang",
""
],
[
"Ye",
"Tian",
""
],
[
"Shi",
"Jun",
""
],
[
"Liu",
"Yun",
""
],
[
"Jiang",
"JingXia",
""
],
[
"Chen",
"Erkang",
""
],
[
"Chen",
"Peng",
""
]
] |
new_dataset
| 0.987434 |
2303.06906
|
Alexander Ivanov
|
Alexander Ivanov
|
A collection of memos dedicated to exact base-21 (EBTO) and quasi
base-21 (QBTO) codes
|
Bundle of 5 memos total, 20 pages total, 51 tables total
| null | null |
Synching Ethernet (SingE), Research Group 3, Topics 06, 12, 19, 20,
25
|
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This collection bundles the following memos dedicated to the so called exact
base-21 (EBTO) and quasi base-21 (QBTO) serial transport codes: [1] "Base-21
Scrambling" (discusses about EBTO codes, present at pp. 1-4 in the bundle); [2]
"Base-21 Word Alignment and Boundary Detection" (EBTO, pp. 5-8); [3] "Quasi
Base-21 Words" (QBTO, pp. 9-12); [4] "Quasi Base-21 Words Generated Compactly"
(QBTO, pp. 13-16); and [5] "Quasi Base-21 Words Balanced on the Framework"
(QBTO, pp. 17-20).
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 07:47:39 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Ivanov",
"Alexander",
""
]
] |
new_dataset
| 0.998658 |
2303.06911
|
Yutong Feng
|
Yutong Feng, Biao Gong, Jianwen Jiang, Yiliang Lv, Yujun Shen, Deli
Zhao, Jingren Zhou
|
ViM: Vision Middleware for Unified Downstream Transferring
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Foundation models are pre-trained on massive data and transferred to
downstream tasks via fine-tuning. This work presents Vision Middleware (ViM), a
new learning paradigm that targets unified transferring from a single
foundation model to a variety of downstream tasks. ViM consists of a zoo of
lightweight plug-in modules, each of which is independently learned on a
midstream dataset with a shared frozen backbone. Downstream tasks can then
benefit from an adequate aggregation of the module zoo thanks to the rich
knowledge inherited from midstream tasks. There are three major advantages of
such a design. From the efficiency aspect, the upstream backbone can be trained
only once and reused for all downstream tasks without tuning. From the
scalability aspect, we can easily append additional modules to ViM with no
influence on existing modules. From the performance aspect, ViM can include as
many midstream tasks as possible, narrowing the task gap between upstream and
downstream. Considering these benefits, we believe that ViM, which the
community could maintain and develop together, would serve as a powerful tool
to assist foundation models.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 08:02:12 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Feng",
"Yutong",
""
],
[
"Gong",
"Biao",
""
],
[
"Jiang",
"Jianwen",
""
],
[
"Lv",
"Yiliang",
""
],
[
"Shen",
"Yujun",
""
],
[
"Zhao",
"Deli",
""
],
[
"Zhou",
"Jingren",
""
]
] |
new_dataset
| 0.999635 |
2303.06981
|
Georges Gagnere
|
Georges Gagneré (INREV, UP8, UPL), Anastasiia Ternova (INREV, AIAC,
UP8, UPL)
|
CAstelet in Virtual reality for shadOw AVatars (CAVOAV)
|
22nd ConVRgence Virtual Reality International Conference (VRIC),
Simon Richir, Apr 2020, Laval, France
| null |
10.20870/IJVR.2020..3316
| null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
After an overview of the use of digital shadows in computing science research
projects with cultural and social impacts and a focus on recent research and
insights on virtual theaters, this paper introduces research mixing the
manipulation of shadow avatars and the building of a virtual theater setup
inspired by traditional shadow theater (or "castelet" in French) in a mixed
reality environment. It describes the virtual 3D setup, the nature of the
shadow avatars and the issues of directing believable interactions between
virtual avatars and physical performers on stage. Two modalities of shadow
avatars direction are exposed. Some results of the research are illustrated in
two use cases: the development of theatrical creativity in mixed reality
through pedagogical workshops; and an artistic achievement in "The Shadow"
performance, after H. C. Andersen.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 10:31:09 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Gagneré",
"Georges",
"",
"INREV, UP8, UPL"
],
[
"Ternova",
"Anastasiia",
"",
"INREV, AIAC,\n UP8, UPL"
]
] |
new_dataset
| 0.991408 |
2303.06999
|
Marius Schubert
|
Marius Schubert, Tobias Riedlinger, Karsten Kahl, Daniel Kröll,
Sebastian Schoenen, Siniša Šegvić, Matthias Rottmann
|
Identifying Label Errors in Object Detection Datasets by Loss Inspection
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Labeling datasets for supervised object detection is a dull and
time-consuming task. Errors can be easily introduced during annotation and
overlooked during review, yielding inaccurate benchmarks and performance
degradation of deep neural networks trained on noisy labels. In this work, we
for the first time introduce a benchmark for label error detection methods on
object detection datasets as well as a label error detection method and a
number of baselines. We simulate four different types of randomly introduced
label errors on train and test sets of well-labeled object detection datasets.
For our label error detection method we assume a two-stage object detector to
be given and consider the sum of both stages' classification and regression
losses. The losses are computed with respect to the predictions and the noisy
labels including simulated label errors, aiming at detecting the latter. We
compare our method to three baselines: a naive one without deep learning, the
object detector's score and the entropy of the classification softmax
distribution. We outperform all baselines and demonstrate that among the
considered methods, ours is the only one that detects label errors of all four
types efficiently. Furthermore, we detect real label errors a) on commonly used
test datasets in object detection and b) on a proprietary dataset. In both
cases we achieve low false positive rates, i.e., when considering 200
proposals from our method, we detect label errors with a precision of up to
71.5% for a) and of 97% for b).
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 10:54:52 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Schubert",
"Marius",
""
],
[
"Riedlinger",
"Tobias",
""
],
[
"Kahl",
"Karsten",
""
],
[
"Kröll",
"Daniel",
""
],
[
"Schoenen",
"Sebastian",
""
],
[
"Šegvić",
"Siniša",
""
],
[
"Rottmann",
"Matthias",
""
]
] |
new_dataset
| 0.995287 |
2303.07007
|
Sandor P. Fekete
|
Sándor P. Fekete and Phillip Keldenich and Dominik Krupke and Stefan
Schirra
|
Minimum Coverage by Convex Polygons: The CG:SHOP Challenge 2023
|
12 pages, 6 figures, 1 table. arXiv admin note: text overlap with
arXiv:2203.07444
| null | null | null |
cs.CG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give an overview of the 2023 Computational Geometry Challenge targeting
the problem Minimum Coverage by Convex Polygons, which consists of covering a
given polygonal region (possibly with holes) by a minimum number of convex
subsets, a problem with a long-standing tradition in Computational Geometry.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 11:05:06 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Fekete",
"Sándor P.",
""
],
[
"Keldenich",
"Phillip",
""
],
[
"Krupke",
"Dominik",
""
],
[
"Schirra",
"Stefan",
""
]
] |
new_dataset
| 0.982439 |
2303.07014
|
Wuyang Luo
|
Wuyang Luo, Su Yang, Weishan Zhang
|
Reference-Guided Large-Scale Face Inpainting with Identity and Texture
Control
|
accepted by IEEE Transactions on Circuits and Systems for Video
Technology
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face inpainting aims at plausibly predicting missing pixels of face images
within a corrupted region. Most existing methods rely on generative models
learning a face image distribution from a big dataset, which produces
uncontrollable results, especially with large-scale missing regions. To
introduce strong control for face inpainting, we propose a novel
reference-guided face inpainting method that fills the large-scale missing
region with identity and texture control guided by a reference face image.
However, generating high-quality results under imposing two control signals is
challenging. To tackle such difficulty, we propose a dual control one-stage
framework that decouples the reference image into two levels for flexible
control: High-level identity information and low-level texture information,
where the identity information figures out the shape of the face and the
texture information depicts the component-aware texture. To synthesize
high-quality results, we design two novel modules referred to as Half-AdaIN and
Component-Wise Style Injector (CWSI) to inject the two kinds of control
information into the inpainting processing. Our method produces realistic
results with identity and texture control faithful to reference images. To the
best of our knowledge, it is the first work to concurrently apply identity and
component-level controls in face inpainting to promise more precise and
controllable results. Code is available at
https://github.com/WuyangLuo/RefFaceInpainting
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 11:22:37 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Luo",
"Wuyang",
""
],
[
"Yang",
"Su",
""
],
[
"Zhang",
"Weishan",
""
]
] |
new_dataset
| 0.996922 |
2303.07146
|
Nikolaos Papoulias
|
Nick Papoulias
|
NeuroQL: A Neuro-Symbolic Language and Dataset for Inter-Subjective
Reasoning
|
18 pages, 6 figures
| null | null | null |
cs.PL cs.AI cs.CL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new AI task and baseline solution for Inter-Subjective
Reasoning. We define inter-subjective information, to be a mixture of objective
and subjective information possibly shared by different parties. Examples may
include commodities and their objective properties as reported by IR
(Information Retrieval) systems, that need to be cross-referenced with
subjective user reviews from an online forum. For an AI system to successfully
reason about both, it needs to be able to combine symbolic reasoning of
objective facts with the shared consensus found on subjective user reviews. To
this end we introduce the NeuroQL dataset and DSL (Domain-specific Language) as
a baseline solution for this problem. NeuroQL is a neuro-symbolic language that
extends logical unification with neural primitives for extraction and
retrieval. It can function as a target for automatic translation of
inter-subjective questions (posed in natural language) into the neuro-symbolic
code that can answer them.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 14:16:59 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Papoulias",
"Nick",
""
]
] |
new_dataset
| 0.999836 |
2303.07182
|
Teng Wu
|
Teng Wu, Bruno Vallet, Cédric Demonceaux
|
Mobile Mapping Mesh Change Detection and Update
|
6 pages without reference
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile mapping, in particular, Mobile Lidar Scanning (MLS) is increasingly
widespread to monitor and map urban scenes at city scale with unprecedented
resolution and accuracy. The resulting point cloud sampling of the scene
geometry can be meshed in order to create a continuous representation for
different applications: visualization, simulation, navigation, etc. Because of
the highly dynamic nature of these urban scenes, long term mapping should rely
on frequent map updates. A trivial solution is to simply replace old data with
newer data each time a new acquisition is made. However it has two drawbacks:
1) the old data may be of higher quality (resolution, precision) than the new
and 2) the coverage of the scene might be different in various acquisitions,
including varying occlusions. In this paper, we propose a fully automatic
pipeline to address these two issues by formulating the problem of merging
meshes with different quality, coverage and acquisition time. Our method is
based on a combined distance and visibility based change detection, a time
series analysis to assess the sustainability of changes, a mesh mosaicking
based on a global boolean optimization and finally a stitching of the resulting
mesh pieces boundaries with triangle strips. Finally, our method is
demonstrated on Robotcar and Stereopolis datasets.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 15:24:06 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Wu",
"Teng",
""
],
[
"Vallet",
"Bruno",
""
],
[
"Demonceaux",
"Cédric",
""
]
] |
new_dataset
| 0.959058 |
2303.07211
|
Stefano Della Fiore
|
Marco Dalai, Stefano Della Fiore, Adele A. Rescigno and Ugo Vaccaro
|
Bounds and Algorithms for Frameproof Codes and Related Combinatorial
Structures
|
5 pages plus extra one reference page, accepted to the IEEE
Information Theory Workshop (ITW 2023)
| null | null | null |
cs.IT cs.DS math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we study upper bounds on the minimum length of frameproof
codes introduced by Boneh and Shaw to protect copyrighted materials. A $q$-ary
$(k,n)$-frameproof code of length $t$ is a $t \times n$ matrix having entries
in $\{0,1,\ldots, q-1\}$ and with the property that for any column $\mathbf{c}$
and any other $k$ columns, there exists a row where the symbols of the $k$
columns are all different from the corresponding symbol (in the same row) of
the column $\mathbf{c}$. In this paper, we show the existence of $q$-ary
$(k,n)$-frameproof codes of length $t = O(\frac{k^2}{q} \log n)$ for $q \leq
k$, using the Lovász Local Lemma, and of length $t =
O(\frac{k}{\log(q/k)}\log(n/k))$ for $q > k$ using the expurgation method.
Remarkably, for the practical case of $q \leq k$ our findings give codes whose
length almost matches the lower bound $\Omega(\frac{k^2}{q\log k} \log n)$ on
the length of any $q$-ary $(k,n)$-frameproof code and, more importantly, allow
us to derive an algorithm of complexity $O(t n^2)$ for the construction of such
codes.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 15:43:00 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Dalai",
"Marco",
""
],
[
"Della Fiore",
"Stefano",
""
],
[
"Rescigno",
"Adele A.",
""
],
[
"Vaccaro",
"Ugo",
""
]
] |
new_dataset
| 0.965317 |
2303.07240
|
Weixiong Lin
|
Weixiong Lin, Ziheng Zhao, Xiaoman Zhang, Chaoyi Wu, Ya Zhang, Yanfeng
Wang, Weidi Xie
|
PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical
Documents
|
10 pages, 3 figures
| null | null | null |
cs.CV cs.CL cs.LG cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Foundation models trained on large-scale datasets have seen a recent surge in CV
and NLP. In contrast, development in the biomedical domain lags far behind due to
data scarcity. To address this issue, we build and release PMC-OA, a biomedical
dataset with 1.6M image-caption pairs collected from PubMedCentral's OpenAccess
subset, which is 8 times larger than before. PMC-OA covers diverse modalities
or diseases, with majority of the image-caption samples aligned at
finer-grained level, i.e., subfigure and subcaption. While pretraining a
CLIP-style model on PMC-OA, our model named PMC-CLIP achieves state-of-the-art
results on various downstream tasks, including image-text retrieval on ROCO,
MedMNIST image classification, Medical VQA, i.e. +8.1% R@10 on image-text
retrieval, +3.9% accuracy on image classification.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 16:13:16 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Lin",
"Weixiong",
""
],
[
"Zhao",
"Ziheng",
""
],
[
"Zhang",
"Xiaoman",
""
],
[
"Wu",
"Chaoyi",
""
],
[
"Zhang",
"Ya",
""
],
[
"Wang",
"Yanfeng",
""
],
[
"Xie",
"Weidi",
""
]
] |
new_dataset
| 0.999345 |
2303.07263
|
Alexey Svyatkovskiy
|
Matthew Jin, Syed Shahriar, Michele Tufano, Xin Shi, Shuai Lu, Neel
Sundaresan, Alexey Svyatkovskiy
|
InferFix: End-to-End Program Repair with LLMs
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software development life cycle is profoundly influenced by bugs: their
introduction, identification, and eventual resolution account for a significant
portion of software cost. This has motivated software engineering researchers
and practitioners to propose different approaches for automating the
identification and repair of software defects. Large language models have been
adapted to the program repair task through few-shot demonstration learning and
instruction prompting, treating this as an infilling task. However, these
models have only focused on learning general bug-fixing patterns for
uncategorized bugs mined from public repositories. In this paper, we propose
InferFix: a transformer-based program repair framework paired with a
state-of-the-art static analyzer to fix critical security and performance bugs.
InferFix combines a Retriever -- transformer encoder model pretrained via
contrastive learning objective, which aims at searching for semantically
equivalent bugs and corresponding fixes; and a Generator -- a large language
model (Codex Cushman) finetuned on supervised bug-fix data with prompts
augmented via bug type annotations and semantically similar fixes retrieved
from an external non-parametric memory. To train and evaluate our approach, we
curated InferredBugs, a novel, metadata-rich dataset of bugs extracted by
executing the Infer static analyzer on the change histories of thousands of
Java and C# repositories. Our evaluation demonstrates that InferFix outperforms
strong LLM baselines, with a top-1 accuracy of 65.6% for generating fixes in C#
and 76.8% in Java. We discuss the deployment of InferFix alongside Infer at
Microsoft which offers an end-to-end solution for detection, classification,
and localization of bugs, as well as fixing and validation of candidate
patches, integrated in the continuous integration pipeline to automate the
software development workflow.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 16:42:47 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Jin",
"Matthew",
""
],
[
"Shahriar",
"Syed",
""
],
[
"Tufano",
"Michele",
""
],
[
"Shi",
"Xin",
""
],
[
"Lu",
"Shuai",
""
],
[
"Sundaresan",
"Neel",
""
],
[
"Svyatkovskiy",
"Alexey",
""
]
] |
new_dataset
| 0.994394 |
2303.07311
|
Abhishek Gupta
|
Vikrant Malik, Gourab Ghatak, Abhishek K. Gupta, Sanket S. Kalamkar
|
On the Deployment of Reconfigurable Intelligent Surfaces in the Presence
of Blockages
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless communications aided by reconfigurable intelligent surfaces (RISs)
is a promising way to improve the coverage for cellular users. The controlled
reflection of signals from RISs is especially useful in mm-wave/THz networks
when the direct link between a cellular user and its serving base station (BS)
is weak or unavailable due to blockages. However, the joint blockage of the
user-RIS and the user-BS links may significantly degrade the performance of
RIS-aided transmissions. This paper aims to study the impact of joint blockages
on downlink performance. When RIS locations are coupled with BS locations,
using tools from stochastic geometry, we obtain an optimal placement of RISs to
either minimize the joint blockage probability of the user-RIS and the user-BS
links or maximize the downlink coverage probability. The results show that
installing RISs near the cell edge of BSs usually provides optimal coverage.
Moreover, deploying RISs on street intersections improves the coverage
probability. For users associated with BSs that are deployed sufficiently close
to intersections, the intersection-mounted RISs offer a better coverage
performance as compared to BS-coupled RISs.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 17:33:13 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Malik",
"Vikrant",
""
],
[
"Ghatak",
"Gourab",
""
],
[
"Gupta",
"Abhishek K.",
""
],
[
"Kalamkar",
"Sanket S.",
""
]
] |
new_dataset
| 0.997125 |
2303.07316
|
Deema Alnuhait
|
Deema Alnuhait, Qingyang Wu, Zhou Yu
|
FaceChat: An Emotion-Aware Face-to-face Dialogue Framework
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While current dialogue systems like ChatGPT have made significant
advancements in text-based interactions, they often overlook the potential of
other modalities in enhancing the overall user experience. We present FaceChat,
a web-based dialogue framework that enables emotionally-sensitive and
face-to-face conversations. By seamlessly integrating cutting-edge technologies
in natural language processing, computer vision, and speech processing,
FaceChat delivers a highly immersive and engaging user experience. The FaceChat
framework has a wide range of potential applications, including counseling,
emotional support, and personalized customer service. The system is designed to
be simple and flexible as a platform for future researchers to advance the
field of multimodal dialogue systems. The code is publicly available at
https://github.com/qywu/FaceChat.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 20:45:37 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Alnuhait",
"Deema",
""
],
[
"Wu",
"Qingyang",
""
],
[
"Yu",
"Zhou",
""
]
] |
new_dataset
| 0.979469 |
1604.00772
|
Nikolaus Hansen
|
Nikolaus Hansen (TAO)
|
The CMA Evolution Strategy: A Tutorial
|
ArXiv e-prints, arXiv:1604.00772, 2016, pp.1-39
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This tutorial introduces the CMA Evolution Strategy (ES), where CMA stands
for Covariance Matrix Adaptation. The CMA-ES is a stochastic, or randomized,
method for real-parameter (continuous domain) optimization of non-linear,
non-convex functions. We try to motivate and derive the algorithm from
intuitive concepts and from requirements of non-linear, non-convex search in
continuous domain.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2016 08:16:12 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 09:45:23 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Hansen",
"Nikolaus",
"",
"TAO"
]
] |
new_dataset
| 0.977398 |
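The tutorial record above covers the CMA-ES sampling, selection, and adaptation loop. The sketch below is a heavily simplified variant, assuming a fixed step size and a rank-mu covariance update only (the full CMA-ES additionally adapts the step size and maintains evolution paths); the objective and hyperparameters are illustrative.

```python
import numpy as np

def simplified_cma_es(f, x0, sigma=0.5, lam=16, iters=200, seed=0):
    """Minimize f with a simplified CMA-ES-style loop (rank-mu update only)."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    mean, C = np.array(x0, dtype=float), np.eye(n)
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                        # positive recombination weights
    for _ in range(iters):
        A = np.linalg.cholesky(C)
        steps = rng.standard_normal((lam, n)) @ A.T     # offspring steps ~ N(0, C)
        cand = mean + sigma * steps
        best = np.argsort([f(x) for x in cand])[:mu]    # keep the mu best offspring
        sel = steps[best]
        mean = mean + sigma * (w @ sel)                 # weighted recombination
        rank_mu = sum(wi * np.outer(s, s) for wi, s in zip(w, sel))
        C = 0.8 * C + 0.2 * rank_mu                     # smoothed covariance update
    return mean

sphere = lambda x: float(np.sum(x ** 2))
print(simplified_cma_es(sphere, x0=[3.0, -2.0, 1.0]))   # approaches the origin
```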
2112.13910
|
Mesut Erhan Unal
|
Mesut Erhan Unal, Adriana Kovashka, Wen-Ting Chung, Yu-Ru Lin
|
Visual Persuasion in COVID-19 Social Media Content: A Multi-Modal
Characterization
|
10 pages
| null |
10.1145/3487553.3524647
| null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media content routinely incorporates multi-modal design to convey
information, shape meanings, and sway interpretations toward desirable
implications, but the choices and outcomes of using both texts and visual
images have not been sufficiently studied. This work proposes a computational
approach to analyze the outcome of persuasive information in multi-modal
content, focusing on two aspects, popularity and reliability, in
COVID-19-related news articles shared on Twitter. The two aspects are
intertwined in the spread of misinformation: for example, an unreliable article
that aims to misinform has to attain some popularity. This work has several
contributions. First, we propose a multi-modal (image and text) approach to
effectively identify popularity and reliability of information sources
simultaneously. Second, we identify textual and visual elements that are
predictive to information popularity and reliability. Third, by modeling
cross-modal relations and similarity, we are able to uncover how unreliable
articles construct multi-modal meaning in a distorted, biased fashion. Our work
demonstrates how to use multi-modal analysis for understanding influential
content and has implications to social media literacy and engagement.
|
[
{
"version": "v1",
"created": "Sun, 5 Dec 2021 02:15:01 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Unal",
"Mesut Erhan",
""
],
[
"Kovashka",
"Adriana",
""
],
[
"Chung",
"Wen-Ting",
""
],
[
"Lin",
"Yu-Ru",
""
]
] |
new_dataset
| 0.983418 |
2205.11782
|
Xiaoguang Li
|
Xiaoguang Li, Ninghui Li, Wenhai Sun, Neil Zhenqiang Gong, Hui Li
|
Fine-grained Poisoning Attack to Local Differential Privacy Protocols
for Mean and Variance Estimation
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although local differential privacy (LDP) protects individual users' data
from inference by an untrusted data curator, recent studies show that an
attacker can launch a data poisoning attack from the user side to inject
carefully-crafted bogus data into the LDP protocols in order to maximally skew
the final estimate by the data curator.
In this work, we further advance this knowledge by proposing a new
fine-grained attack, which allows the attacker to fine-tune and simultaneously
manipulate mean and variance estimations that are popular analytical tasks for
many real-world applications. To accomplish this goal, the attack leverages the
characteristics of LDP to inject fake data into the output domain of the local
LDP instance. We call our attack the output poisoning attack (OPA). We observe
a security-privacy consistency where a small privacy loss enhances the security
of LDP, which contradicts the known security-privacy trade-off from prior work.
We further study the consistency and reveal a more holistic view of the threat
landscape of data poisoning attacks on LDP. We comprehensively evaluate our
attack against a baseline attack that intuitively provides false input to LDP.
The experimental results show that OPA outperforms the baseline on three
real-world datasets. We also propose a novel defense method that can recover
the result accuracy from polluted data collection and offer insight into the
secure LDP design.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 04:43:43 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 11:20:15 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Feb 2023 16:19:30 GMT"
},
{
"version": "v4",
"created": "Fri, 10 Mar 2023 14:37:29 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Li",
"Xiaoguang",
""
],
[
"Li",
"Ninghui",
""
],
[
"Sun",
"Wenhai",
""
],
[
"Gong",
"Neil Zhenqiang",
""
],
[
"Li",
"Hui",
""
]
] |
new_dataset
| 0.997185 |
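As a back-of-the-envelope illustration of the kind of skew the attack in the record above aims to maximize, the snippet below shows how injecting extreme fake reports biases a plain mean estimate. This is generic arithmetic, not the paper's output poisoning attack or an LDP protocol; the population size, attacker fraction, and value range are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
n, frac_fake = 10_000, 0.05                 # assumed honest users and attacker share
genuine = rng.uniform(0.2, 0.4, size=n)     # honest values, normalized to [0, 1]
fake = np.ones(int(frac_fake * n))          # attacker always reports the maximum

true_mean = genuine.mean()
poisoned_mean = np.concatenate([genuine, fake]).mean()
print(f"true mean ~ {true_mean:.3f}, poisoned mean ~ {poisoned_mean:.3f}, "
      f"shift ~ {poisoned_mean - true_mean:+.3f}")
```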
2207.13358
|
Hasan Hassan
|
Hasan Hassan, Ataberk Olgun, A. Giray Yaglikci, Haocong Luo, Onur
Mutlu
|
A Case for Self-Managing DRAM Chips: Improving Performance, Efficiency,
Reliability, and Security via Autonomous in-DRAM Maintenance Operations
| null | null | null | null |
cs.AR cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The memory controller is in charge of managing DRAM maintenance operations
(e.g., refresh, RowHammer protection, memory scrubbing) in current DRAM chips.
Implementing new maintenance operations often necessitates modifications in the
DRAM interface, memory controller, and potentially other system components.
Such modifications are only possible with a new DRAM standard, which takes a
long time to develop, leading to slow progress in DRAM systems.
In this paper, our goal is to 1) ease, and thus accelerate, the process of
enabling new DRAM maintenance operations and 2) enable more efficient in-DRAM
maintenance operations. Our idea is to set the memory controller free from
managing DRAM maintenance. To this end, we propose Self-Managing DRAM (SMD), a
new low-cost DRAM architecture that enables implementing new in-DRAM
maintenance mechanisms (or modifying old ones) with no further changes in the
DRAM interface, memory controller, or other system components. We use SMD to
implement new in-DRAM maintenance mechanisms for three use cases: 1) periodic
refresh, 2) RowHammer protection, and 3) memory scrubbing. We show that SMD
enables easy adoption of efficient maintenance mechanisms that significantly
improve the system performance and energy efficiency while providing higher
reliability compared to conventional DDR4 DRAM. A combination of SMD-based
maintenance mechanisms that perform refresh, RowHammer protection, and memory
scrubbing achieve 7.6% speedup and consume 5.2% less DRAM energy on average
across 20 memory-intensive four-core workloads. We make SMD source code openly
and freely available at [128].
|
[
{
"version": "v1",
"created": "Wed, 27 Jul 2022 08:27:10 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 15:07:49 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Mar 2023 07:40:47 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Hassan",
"Hasan",
""
],
[
"Olgun",
"Ataberk",
""
],
[
"Yaglikci",
"A. Giray",
""
],
[
"Luo",
"Haocong",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.951974 |
2208.07227
|
Bing Wang
|
Bing Wang, Lu Chen, Bo Yang
|
DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images
|
ICLR 2023. Our data and code are available at:
https://github.com/vLAR-group/DM-NeRF
| null | null | null |
cs.CV cs.AI cs.GR cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the problem of 3D scene geometry decomposition and
manipulation from 2D views. By leveraging the recent implicit neural
representation techniques, particularly the appealing neural radiance fields,
we introduce an object field component to learn unique codes for all individual
objects in 3D space only from 2D supervision. The key to this component is a
series of carefully designed loss functions to enable every 3D point,
especially in non-occupied space, to be effectively optimized even without 3D
labels. In addition, we introduce an inverse query algorithm to freely
manipulate any specified 3D object shape in the learned scene representation.
Notably, our manipulation algorithm can explicitly tackle key issues such as
object collisions and visual occlusions. Our method, called DM-NeRF, is among
the first to simultaneously reconstruct, decompose, manipulate and render
complex 3D scenes in a single pipeline. Extensive experiments on three datasets
clearly show that our method can accurately decompose all 3D objects from 2D
views, allowing any interested object to be freely manipulated in 3D space such
as translation, rotation, size adjustment, and deformation.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 14:32:10 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 07:12:32 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Wang",
"Bing",
""
],
[
"Chen",
"Lu",
""
],
[
"Yang",
"Bo",
""
]
] |
new_dataset
| 0.999478 |
2210.08057
|
Armin Danesh Pazho
|
Ghazal Alinezhad Noghre, Vinit Katariya, Armin Danesh Pazho,
Christopher Neff, Hamed Tabkhi
|
Pishgu: Universal Path Prediction Network Architecture for Real-time
Cyber-physical Edge Systems
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Path prediction is an essential task for many real-world Cyber-Physical
Systems (CPS) applications, from autonomous driving and traffic
monitoring/management to pedestrian/worker safety. These real-world CPS
applications need a robust, lightweight path prediction that can provide a
universal network architecture for multiple subjects (e.g., pedestrians and
vehicles) from different perspectives. However, most existing algorithms are
tailor-made for a unique subject with a specific camera perspective and
scenario. This article presents Pishgu, a universal lightweight network
architecture, as a robust and holistic solution for path prediction. Pishgu's
architecture can adapt to multiple path prediction domains with different
subjects (vehicles, pedestrians), perspectives (bird's-eye, high-angle), and
scenes (sidewalk, highway). Our proposed architecture captures the
inter-dependencies within the subjects in each frame by taking advantage of
Graph Isomorphism Networks and the attention module. We separately train and
evaluate the efficacy of our architecture on three different CPS domains across
multiple perspectives (vehicle bird's-eye view, pedestrian bird's-eye view, and
human high-angle view). Pishgu outperforms state-of-the-art solutions in the
vehicle bird's-eye view domain by 42% and 61% and pedestrian high-angle view
domain by 23% and 22% in terms of ADE and FDE, respectively. Additionally, we
analyze the domain-specific details for various datasets to understand their
effect on path prediction and model interpretation. Finally, we report the
latency and throughput for all three domains on multiple embedded platforms
showcasing the robustness and adaptability of Pishgu for real-world integration
into CPS applications.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 18:48:48 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2022 19:02:06 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Mar 2023 23:35:08 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Noghre",
"Ghazal Alinezhad",
""
],
[
"Katariya",
"Vinit",
""
],
[
"Pazho",
"Armin Danesh",
""
],
[
"Neff",
"Christopher",
""
],
[
"Tabkhi",
"Hamed",
""
]
] |
new_dataset
| 0.997912 |
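The record above captures inter-dependencies between subjects with Graph Isomorphism Networks. The snippet below is a generic single GIN layer (the standard sum-aggregation update, not Pishgu's actual architecture); the adjacency matrix, feature sizes, and two-layer MLP are illustrative assumptions.

```python
import numpy as np

def gin_layer(H, A, W1, b1, W2, b2, eps=0.0):
    """One Graph Isomorphism Network layer with sum aggregation.

    H: (num_nodes, d_in) node features; A: (num_nodes, num_nodes) 0/1 adjacency.
    Update rule: h_v <- MLP((1 + eps) * h_v + sum over neighbors of h_u).
    """
    agg = (1.0 + eps) * H + A @ H                # self term plus neighbor sum
    hidden = np.maximum(agg @ W1 + b1, 0.0)      # first MLP layer (ReLU)
    return np.maximum(hidden @ W2 + b2, 0.0)     # second MLP layer (ReLU)

# Toy 4-node path graph with random weights (illustrative only).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
print(gin_layer(H, A, W1, b1, W2, b2).shape)     # -> (4, 8)
```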
2210.10676
|
Oliver Gasser
|
Lars Prehn, Pawel Foremski, Oliver Gasser
|
Kirin: Hitting the Internet with Millions of Distributed IPv6
Announcements
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet is a critical resource in the day-to-day life of billions of
users. To support the growing number of users and their increasing demands,
operators have to continuously scale their network footprint -- e.g., by
joining Internet Exchange Points -- and adopt relevant technologies -- such as
IPv6. IPv6, however, has a vastly larger address space compared to its
predecessor, which allows for new kinds of attacks on the Internet routing
infrastructure. In this paper, we revisit prefix de-aggregation attacks in the
light of these two changes and introduce Kirin -- an advanced BGP prefix
de-aggregation attack that sources millions of IPv6 routes and distributes them
via thousands of sessions across various IXPs to overflow the memory of border
routers within thousands of remote ASes. Kirin's highly distributed nature
allows it to bypass traditional route-flooding defense mechanisms, such as
per-session prefix limits or route flap damping. We analyze the theoretical
feasibility of the attack by formulating it as an Integer Linear Programming
problem, test for practical hurdles by deploying the infrastructure required to
perform a small-scale Kirin attack using 4 IXPs, and validate our assumptions
via BGP data analysis, real-world measurements, and router testbed experiments.
Despite its low deployment cost, we find Kirin capable of injecting lethal
amounts of IPv6 routes in the routers of thousands of ASes.
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 15:44:56 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 14:24:22 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Prehn",
"Lars",
""
],
[
"Foremski",
"Pawel",
""
],
[
"Gasser",
"Oliver",
""
]
] |
new_dataset
| 0.999668 |
2210.11668
|
Stan Birchfield
|
Zhenggang Tang, Balakumar Sundaralingam, Jonathan Tremblay, Bowen Wen,
Ye Yuan, Stephen Tyree, Charles Loop, Alexander Schwing, Stan Birchfield
|
RGB-Only Reconstruction of Tabletop Scenes for Collision-Free
Manipulator Control
|
ICRA 2023. Project page at https://ngp-mpc.github.io/
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a system for collision-free control of a robot manipulator that
uses only RGB views of the world. Perceptual input of a tabletop scene is
provided by multiple images of an RGB camera (without depth) that is either
handheld or mounted on the robot end effector. A NeRF-like process is used to
reconstruct the 3D geometry of the scene, from which the Euclidean full signed
distance function (ESDF) is computed. A model predictive control algorithm is
then used to control the manipulator to reach a desired pose while avoiding
obstacles in the ESDF. We show results on a real dataset collected and
annotated in our lab.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 01:45:08 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 06:13:13 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Tang",
"Zhenggang",
""
],
[
"Sundaralingam",
"Balakumar",
""
],
[
"Tremblay",
"Jonathan",
""
],
[
"Wen",
"Bowen",
""
],
[
"Yuan",
"Ye",
""
],
[
"Tyree",
"Stephen",
""
],
[
"Loop",
"Charles",
""
],
[
"Schwing",
"Alexander",
""
],
[
"Birchfield",
"Stan",
""
]
] |
new_dataset
| 0.999409 |
2210.13066
|
Siwei Chen
|
Siwei Chen, Yiqing Xu, Cunjun Yu, Linfeng Li, Xiao Ma, Zhongwen Xu,
David Hsu
|
DaXBench: Benchmarking Deformable Object Manipulation with
Differentiable Physics
|
ICLR 2023 (Oral)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Deformable Object Manipulation (DOM) is of significant importance to both
daily and industrial applications. Recent successes in differentiable physics
simulators allow learning algorithms to train a policy with analytic gradients
through environment dynamics, which significantly facilitates the development
of DOM algorithms. However, existing DOM benchmarks are either
single-object-based or non-differentiable. This leaves the questions of 1) how
a task-specific algorithm performs on other tasks and 2) how a
differentiable-physics-based algorithm compares with the non-differentiable
ones in general. In this work, we present DaXBench, a differentiable DOM
benchmark with a wide object and task coverage. DaXBench includes 9 challenging
high-fidelity simulated tasks, covering rope, cloth, and liquid manipulation
with various difficulty levels. To better understand the performance of general
algorithms on different DOM tasks, we conduct comprehensive experiments over
representative DOM methods, ranging from planning to imitation learning and
reinforcement learning. In addition, we provide careful empirical studies of
existing decision-making algorithms based on differentiable physics, and
discuss their limitations, as well as potential future directions.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 09:33:59 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 07:27:19 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Chen",
"Siwei",
""
],
[
"Xu",
"Yiqing",
""
],
[
"Yu",
"Cunjun",
""
],
[
"Li",
"Linfeng",
""
],
[
"Ma",
"Xiao",
""
],
[
"Xu",
"Zhongwen",
""
],
[
"Hsu",
"David",
""
]
] |
new_dataset
| 0.997603 |
2211.14175
|
Adnan Ferdous Ashrafi
|
Md. Rayhan Ahmed, Adnan Ferdous Ashrafi, Raihan Uddin Ahmed, Tanveer
Ahmed
|
MCFFA-Net: Multi-Contextual Feature Fusion and Attention Guided Network
for Apple Foliar Disease Classification
|
7 pages, 6 figures, ICCIT 2022 submission, Conference
|
2022 25th International Conference on Computer and Information
Technology (ICCIT), 2022, pp. 757-762
|
10.1109/ICCIT57492.2022.10055790
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Numerous diseases cause severe economic loss in the apple production-based
industry. Early disease identification in apple leaves can help to stop the
spread of infections and provide better productivity. Therefore, it is crucial
to study the identification and classification of different apple foliar
diseases. Various traditional machine learning and deep learning methods have
addressed and investigated this issue. However, it is still challenging to
classify these diseases because of their complex background, variation in the
diseased spot in the images, and the presence of several symptoms of multiple
diseases on the same leaf. This paper proposes a novel transfer learning-based
stacked ensemble architecture named MCFFA-Net, which is composed of three
pre-trained architectures named MobileNetV2, DenseNet201, and InceptionResNetV2
as backbone networks. We also propose a novel multi-scale dilated residual
convolution module to capture multi-scale contextual information with several
dilated receptive fields from the extracted features. A channel-based attention
mechanism is provided through squeeze-and-excitation networks to make the
MCFFA-Net focused on the relevant information in the multi-receptive fields.
The proposed MCFFA-Net achieves a classification accuracy of 90.86%.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 15:25:36 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Ahmed",
"Md. Rayhan",
""
],
[
"Ashrafi",
"Adnan Ferdous",
""
],
[
"Ahmed",
"Raihan Uddin",
""
],
[
"Ahmed",
"Tanveer",
""
]
] |
new_dataset
| 0.998221 |
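The record above relies on squeeze-and-excitation networks for channel attention. Below is a minimal generic SE block (the standard squeeze/excite formulation, not the MCFFA-Net code); the channel count, reduction ratio, and random weights are assumptions for illustration.

```python
import numpy as np

def squeeze_excite(feat, W1, W2):
    """Channel-wise squeeze-and-excitation on a (C, H, W) feature map.

    Squeeze: global average pooling per channel. Excite: two fully connected
    layers (ReLU, then sigmoid) produce per-channel scales in (0, 1).
    """
    c = feat.shape[0]
    squeeze = feat.mean(axis=(1, 2))                 # (C,) channel descriptor
    hidden = np.maximum(squeeze @ W1, 0.0)           # (C // r,) bottleneck
    scale = 1.0 / (1.0 + np.exp(-(hidden @ W2)))     # (C,) sigmoid gates
    return feat * scale.reshape(c, 1, 1)             # recalibrated channels

rng = np.random.default_rng(0)
C, r = 16, 4                                         # assumed channels / reduction
x = rng.normal(size=(C, 8, 8))
W1, W2 = rng.normal(size=(C, C // r)), rng.normal(size=(C // r, C))
print(squeeze_excite(x, W1, W2).shape)               # -> (16, 8, 8)
```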
2211.16290
|
Chen Zhao
|
Chen Zhao, Yinlin Hu, Mathieu Salzmann
|
LocPoseNet: Robust Location Prior for Unseen Object Pose Estimation
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Object location priors have been shown to be critical for the standard 6D
object pose estimation setting, where the training and testing objects are the
same. Specifically, they can be used to initialize the 3D object translation
and facilitate 3D object rotation estimation. Unfortunately, the object
detectors that are used for this purpose do not generalize to unseen objects,
i.e., objects from new categories at test time. Therefore, existing 6D pose
estimation methods for previously-unseen objects either assume the ground-truth
object location to be known, or yield inaccurate results when it is
unavailable. In this paper, we address this problem by developing a method,
LocPoseNet, able to robustly learn location prior for unseen objects. Our
method builds upon a template matching strategy, where we propose to distribute
the reference kernels and convolve them with a query to efficiently compute
multi-scale correlations. We then introduce a novel translation estimator,
which decouples scale-aware and scale-robust features to predict different
object location parameters. Our method outperforms existing works by a large
margin on LINEMOD and GenMOP. We further construct a challenging synthetic
dataset, which allows us to highlight the better robustness of our method to
various noise sources.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 15:21:34 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 10:22:29 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Zhao",
"Chen",
""
],
[
"Hu",
"Yinlin",
""
],
[
"Salzmann",
"Mathieu",
""
]
] |
new_dataset
| 0.999657 |
2303.03361
|
Xiaoshuai Zhang
|
Xiaoshuai Zhang, Abhijit Kundu, Thomas Funkhouser, Leonidas Guibas,
Hao Su, Kyle Genova
|
Nerflets: Local Radiance Fields for Efficient Structure-Aware 3D Scene
Representation from 2D Supervision
|
accepted by CVPR 2023
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address efficient and structure-aware 3D scene representation from images.
Nerflets are our key contribution -- a set of local neural radiance fields that
together represent a scene. Each nerflet maintains its own spatial position,
orientation, and extent, within which it contributes to panoptic, density, and
radiance reconstructions. By leveraging only photometric and inferred panoptic
image supervision, we can directly and jointly optimize the parameters of a set
of nerflets so as to form a decomposed representation of the scene, where each
object instance is represented by a group of nerflets. During experiments with
indoor and outdoor environments, we find that nerflets: (1) fit and approximate
the scene more efficiently than traditional global NeRFs, (2) allow the
extraction of panoptic and photometric renderings from arbitrary views, and (3)
enable tasks rare for NeRFs, such as 3D panoptic segmentation and interactive
editing.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 18:48:18 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 17:47:57 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Zhang",
"Xiaoshuai",
""
],
[
"Kundu",
"Abhijit",
""
],
[
"Funkhouser",
"Thomas",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Su",
"Hao",
""
],
[
"Genova",
"Kyle",
""
]
] |
new_dataset
| 0.953479 |
2303.03387
|
Sreyan Ghosh
|
Sreyan Ghosh and Manan Suri and Purva Chiniya and Utkarsh Tyagi and
Sonal Kumar and Dinesh Manocha
|
CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a
Context Synergized Hyperbolic Network
|
Under review at IJCAI 2023
| null | null | null |
cs.LG cs.AI cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The tremendous growth of social media users interacting in online
conversations has also led to significant growth in hate speech. Most of the
prior works focus on detecting explicit hate speech, which is overt and
leverages hateful phrases, with very little work focusing on detecting hate
speech that is implicit or denotes hatred through indirect or coded language.
In this paper, we present CoSyn, a user- and conversational-context synergized
network for detecting implicit hate speech in online conversation trees. CoSyn
first models the user's personal historical and social context using a novel
hyperbolic Fourier attention mechanism and hyperbolic graph convolution
network. Next, we jointly model the user's personal context and the
conversational context using a novel context interaction mechanism in the
hyperbolic space that clearly captures the interplay between the two and makes
independent assessments on the amounts of information to be retrieved from both
contexts. CoSyn performs all operations in the hyperbolic space to account for
the scale-free dynamics of social media. We demonstrate the effectiveness of
CoSyn both qualitatively and quantitatively on an open-source hate speech
dataset with Twitter conversations and show that CoSyn outperforms all our
baselines in detecting implicit hate speech with absolute improvements in the
range of 8.15% - 19.50%.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 17:30:43 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 02:09:29 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Ghosh",
"Sreyan",
""
],
[
"Suri",
"Manan",
""
],
[
"Chiniya",
"Purva",
""
],
[
"Tyagi",
"Utkarsh",
""
],
[
"Kumar",
"Sonal",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.995933 |
2303.05546
|
Mesut Erhan Unal
|
Mesut Erhan Unal and Adriana Kovashka
|
Weakly-Supervised HOI Detection from Interaction Labels Only and
Language/Vision-Language Priors
|
8 pages, 3 figures and 5 tables
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Human-object interaction (HOI) detection aims to extract interacting
human-object pairs and their interaction categories from a given natural image.
Even though the labeling effort required for building HOI detection datasets is
inherently more extensive than for many other computer vision tasks,
weakly-supervised directions in this area have not been sufficiently explored
due to the difficulty of learning human-object interactions with weak
supervision, rooted in the combinatorial nature of interactions over the object
and predicate space. In this paper, we tackle HOI detection with the weakest
supervision setting in the literature, using only image-level interaction
labels, with the help of a pretrained vision-language model (VLM) and a large
language model (LLM). We first propose an approach to prune non-interacting
human and object proposals to increase the quality of positive pairs within the
bag, exploiting the grounding capability of the vision-language model. Second,
we use a large language model to query which interactions are possible between
a human and a given object category, in order to force the model not to put
emphasis on unlikely interactions. Lastly, we use an auxiliary
weakly-supervised preposition prediction task to make our model explicitly
reason about space. Extensive experiments and ablations show that all of our
contributions increase HOI detection performance.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 19:08:02 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Unal",
"Mesut Erhan",
""
],
[
"Kovashka",
"Adriana",
""
]
] |
new_dataset
| 0.994564 |
2303.05552
|
Bekir Z Demiray
|
Bekir Z Demiray, Muhammed Sit and Ibrahim Demir
|
EfficientTempNet: Temporal Super-Resolution of Radar Rainfall
|
Published as a workshop paper at Tackling Climate Change with Machine
Learning, ICLR 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Rainfall data collected by various remote sensing instruments such as radars
or satellites has different space-time resolutions. This study aims to improve
the temporal resolution of radar rainfall products to help with more accurate
climate change modeling and studies. In this direction, we introduce a solution
based on EfficientNetV2, namely EfficientTempNet, to increase the temporal
resolution of radar-based rainfall products from 10 minutes to 5 minutes. We
tested EfficientTempNet on a dataset for the state of Iowa, US, and compared
its performance to three different baselines to show that EfficientTempNet
presents a viable option for better climate change monitoring.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 19:19:56 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Demiray",
"Bekir Z",
""
],
[
"Sit",
"Muhammed",
""
],
[
"Demir",
"Ibrahim",
""
]
] |
new_dataset
| 0.998717 |
2303.05634
|
Oumaima Hamila
|
Oumaima Hamila, Christopher J. Henry, Oscar I. Molina, Christopher P.
Bidinosti, and Maria Antonia Henriquez
|
Fusarium head blight detection, spikelet estimation, and severity
assessment in wheat using 3D convolutional neural networks
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Fusarium head blight (FHB) is one of the most significant diseases affecting
wheat and other small grain cereals worldwide. The development of resistant
varieties requires the laborious task of field and greenhouse phenotyping. The
applications considered in this work are the automated detection of FHB disease
symptoms expressed on a wheat plant, the automated estimation of the total
number of spikelets and the total number of infected spikelets on a wheat head,
and the automated assessment of the FHB severity in infected wheat. The data
used to generate the results are 3-dimensional (3D) multispectral point clouds
(PC), which are 3D collections of points - each associated with a red, green,
blue (RGB), and near-infrared (NIR) measurement. Over 300 wheat plant images
were collected using a multispectral 3D scanner, and the labelled UW-MRDC 3D
wheat dataset was created. The data was used to develop novel and efficient 3D
convolutional neural network (CNN) models for FHB detection, which achieved
100% accuracy. The influence of the multispectral information on performance
was evaluated, and our results showed the dominance of the RGB channels over
both the NIR and the NIR plus RGB channels combined. Furthermore, novel and
efficient 3D CNNs were created to estimate the total number of spikelets and
the total number of infected spikelets on a wheat head, and our best models
achieved mean absolute errors (MAE) of 1.13 and 1.56, respectively. Moreover,
3D CNN models for FHB severity estimation were created, and our best model
achieved 8.6 MAE. A linear regression analysis between the visual FHB severity
assessment and the FHB severity predicted by our 3D CNN was performed, and the
results showed a significant correlation between the two variables with a
0.0001 P-value and 0.94 R-squared.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 00:46:32 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Hamila",
"Oumaima",
""
],
[
"Henry",
"Christopher J.",
""
],
[
"Molina",
"Oscar I.",
""
],
[
"Bidinosti",
"Christopher P.",
""
],
[
"Henriquez",
"Maria Antonia",
""
]
] |
new_dataset
| 0.9993 |
2303.05676
|
Ziyuan Jiao
|
Weiqi Wang, Zihang Zhao, Ziyuan Jiao, Yixin Zhu, Song-Chun Zhu,
Hangxin Liu
|
Rearrange Indoor Scenes for Human-Robot Co-Activity
|
7 pages, 7 figures; Accepted by ICRA 2023
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an optimization-based framework for rearranging indoor furniture
to accommodate human-robot co-activities better. The rearrangement aims to
afford sufficient accessible space for robot activities without compromising
everyday human activities. To retain human activities, our algorithm preserves
the functional relations among furniture by integrating spatial and semantic
co-occurrence extracted from SUNCG and ConceptNet, respectively. By defining
the robot's accessible space by the amount of open space it can traverse and
the number of objects it can reach, we formulate the rearrangement for
human-robot co-activity as an optimization problem, solved by adaptive
simulated annealing (ASA) and covariance matrix adaptation evolution strategy
(CMA-ES). Our experiments on the SUNCG dataset quantitatively show that
rearranged scenes provide an average of 14% more accessible space and 30% more
objects to interact with. The quality of the rearranged scenes is qualitatively
validated by a human study, indicating the efficacy of the proposed strategy.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 03:03:32 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Wang",
"Weiqi",
""
],
[
"Zhao",
"Zihang",
""
],
[
"Jiao",
"Ziyuan",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Liu",
"Hangxin",
""
]
] |
new_dataset
| 0.998924 |
2303.05724
|
Xingyi Li
|
Xingyi Li, Zhiguo Cao, Huiqiang Sun, Jianming Zhang, Ke Xian, Guosheng
Lin
|
3D Cinemagraphy from a Single Image
|
Accepted by CVPR 2023. Project page:
https://xingyi-li.github.io/3d-cinemagraphy/
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present 3D Cinemagraphy, a new technique that marries 2D image animation
with 3D photography. Given a single still image as input, our goal is to
generate a video that contains both visual content animation and camera motion.
We empirically find that naively combining existing 2D image animation and 3D
photography methods leads to obvious artifacts or inconsistent animation. Our
key insight is that representing and animating the scene in 3D space offers a
natural solution to this task. To this end, we first convert the input image
into feature-based layered depth images using predicted depth values, followed
by unprojecting them to a feature point cloud. To animate the scene, we perform
motion estimation and lift the 2D motion into the 3D scene flow. Finally, to
resolve the problem of hole emergence as points move forward, we propose to
bidirectionally displace the point cloud as per the scene flow and synthesize
novel views by separately projecting them into target image planes and blending
the results. Extensive experiments demonstrate the effectiveness of our method.
A user study is also conducted to validate the compelling rendering results of
our method.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 06:08:23 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Li",
"Xingyi",
""
],
[
"Cao",
"Zhiguo",
""
],
[
"Sun",
"Huiqiang",
""
],
[
"Zhang",
"Jianming",
""
],
[
"Xian",
"Ke",
""
],
[
"Lin",
"Guosheng",
""
]
] |
new_dataset
| 0.996146 |
2303.05736
|
Huizhi Wang
|
Huizhi Wang, Zhiqiang Xiao, and Yong Zeng
|
Cramér-Rao Bounds for Near-Field Sensing with Extremely Large-Scale
MIMO
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile communication networks were designed to mainly support ubiquitous
wireless communications, yet they are also expected to achieve radio sensing
capabilities in the near future. However, most prior studies on radio sensing
usually rely on far-field assumption with uniform plane wave (UPW) models. With
the ever-increasing antenna size, together with the growing demands to sense
nearby targets, the conventional far-field UPW assumption may become invalid.
Therefore, this paper studies near-field radio sensing with extremely
large-scale (XL) antenna arrays, where the more general uniform spheric wave
(USW) sensing model is considered. Closed-form expressions of the Cramér-Rao
Bounds (CRBs) for both angle and range estimations are derived for near-field
XL-MIMO radar mode and XL-phased array radar mode, respectively. Our results
reveal that different from the conventional UPW model where the CRB for angle
decreases unboundedly as the number of antennas increases, for XL-MIMO
radar-based near-field sensing, the CRB decreases with diminishing return and
approaches to a certain limit as the number of antennas increases. Besides,
different from the far-field model where the CRB for range is infinity since it
has no range estimation capability, that for the near-field case is finite.
Furthermore, it is revealed that the commonly used spherical wave model based
on second-order Taylor approximation is insufficient for near-field CRB
analysis. Extensive simulation results are provided to validate our derived
CRBs.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 06:45:19 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Wang",
"Huizhi",
""
],
[
"Xiao",
"Zhiqiang",
""
],
[
"Zeng",
"Yong",
""
]
] |
new_dataset
| 0.982656 |
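For context on the quantity that the record above derives in closed form for near-field arrays, the generic Cramér-Rao bound follows from the Fisher information matrix. The definition below is the standard one and does not reproduce the paper's near-field expressions.

```latex
\[
\bigl[\mathbf{F}(\boldsymbol{\theta})\bigr]_{ij}
  = \mathbb{E}\!\left[
      \frac{\partial \ln p(\mathbf{y};\boldsymbol{\theta})}{\partial \theta_i}\,
      \frac{\partial \ln p(\mathbf{y};\boldsymbol{\theta})}{\partial \theta_j}
    \right],
\qquad
\operatorname{CRB}(\theta_i) = \bigl[\mathbf{F}^{-1}(\boldsymbol{\theta})\bigr]_{ii},
\]
so any unbiased estimator of $\theta_i$ satisfies
$\operatorname{var}(\hat{\theta}_i) \ge \operatorname{CRB}(\theta_i)$.
```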
2303.05762
|
Weixin Chen
|
Weixin Chen, Dawn Song, Bo Li
|
TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets
|
CVPR2023
| null | null | null |
cs.LG cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion models have achieved great success in a range of tasks, such as
image synthesis and molecule design. As such successes hinge on large-scale
training data collected from diverse sources, the trustworthiness of these
collected data is hard to control or audit. In this work, we aim to explore the
vulnerabilities of diffusion models under potential training data manipulations
and try to answer: How hard is it to perform Trojan attacks on well-trained
diffusion models? What are the adversarial targets that such Trojan attacks can
achieve? To answer these questions, we propose an effective Trojan attack
against diffusion models, TrojDiff, which optimizes the Trojan diffusion and
generative processes during training. In particular, we design novel
transitions during the Trojan diffusion process to diffuse adversarial targets
into a biased Gaussian distribution and propose a new parameterization of the
Trojan generative process that leads to an effective training objective for the
attack. In addition, we consider three types of adversarial targets: the
Trojaned diffusion models will always output instances belonging to a certain
class from the in-domain distribution (In-D2D attack), out-of-domain
distribution (Out-D2D-attack), and one specific instance (D2I attack). We
evaluate TrojDiff on CIFAR-10 and CelebA datasets against both DDPM and DDIM
diffusion models. We show that TrojDiff always achieves high attack performance
under different adversarial targets using different types of triggers, while
the performance in benign environments is preserved. The code is available at
https://github.com/chenweixin107/TrojDiff.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 08:01:23 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Chen",
"Weixin",
""
],
[
"Song",
"Dawn",
""
],
[
"Li",
"Bo",
""
]
] |
new_dataset
| 0.989768 |
2303.05764
|
Taeyeong Choi
|
Taeyeong Choi, Dario Guevara, Grisha Bandodkar, Zifei Cheng, Chonghan
Wang, Brian N. Bailey, Mason Earles, Xin Liu
|
DAVIS-Ag: A Synthetic Plant Dataset for Developing Domain-Inspired
Active Vision in Agricultural Robots
|
8 pages, 5 figures, 4 tables
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In agricultural environments, viewpoint planning can be a critical
functionality for a robot with visual sensors to obtain informative
observations of objects of interest (e.g., fruits) from complex plant
structures with random occlusions. Although recent studies on active vision have
shown some potential for agricultural tasks, each model has been designed and
validated on a unique environment that would not easily be replicated for
benchmarking novel methods being developed later. In this paper, hence, we
introduce a dataset for more extensive research on Domain-inspired Active
VISion in Agriculture (DAVIS-Ag). To be specific, we utilized our open-source
"AgML" framework and the 3D plant simulator of "Helios" to produce 502K RGB
images from 30K dense spatial locations in 632 realistically synthesized
orchards of strawberries, tomatoes, and grapes. In addition, useful labels are
provided for each image, including (1) bounding boxes and (2) pixel-wise
instance segmentations for all identifiable fruits, and also (3) pointers to
other images that are reachable by the execution of an action so as to simulate the
active selection of viewpoint at each time step. Using DAVIS-Ag, we show the
motivating examples in which performance of fruit detection for the same plant
can significantly vary depending on the position and orientation of camera view
primarily due to occlusions by other components such as leaves. Furthermore, we
develop several baseline models to showcase the "usage" of data with one of
agricultural active vision tasks--fruit search optimization--providing
evaluation results against which future studies could benchmark their
methodologies. For encouraging relevant research, our dataset is released
online to be freely available at: https://github.com/ctyeong/DAVIS-Ag
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 08:04:38 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Choi",
"Taeyeong",
""
],
[
"Guevara",
"Dario",
""
],
[
"Bandodkar",
"Grisha",
""
],
[
"Cheng",
"Zifei",
""
],
[
"Wang",
"Chonghan",
""
],
[
"Bailey",
"Brian N.",
""
],
[
"Earles",
"Mason",
""
],
[
"Liu",
"Xin",
""
]
] |
new_dataset
| 0.999752 |
2303.05807
|
Ziteng Cui
|
Ziteng Cui, Lin Gu, Xiao Sun, Yu Qiao, Tatsuya Harada
|
Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields
|
website page: https://cuiziteng.github.io/Aleth_NeRF_web/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Low-light scenes captured in common conditions are challenging for most computer
vision techniques, including Neural Radiance Fields (NeRF). Vanilla NeRF is
viewer-centred: it simplifies the rendering process to light emission from 3D
locations in the viewing direction only, and thus fails to model
low-illumination-induced darkness. Inspired by the emission theory of ancient
Greece, which holds that visual perception is accomplished by rays cast from the
eyes, we make slight modifications to vanilla NeRF so that, trained on multiple
views of a low-light scene, it can render the well-lit scene in an unsupervised
manner. We introduce a surrogate concept, Concealing Fields, which reduces the
transport of light during the volume rendering stage. Specifically, our proposed method,
Aleth-NeRF, directly learns from the dark image to understand volumetric object
representation and concealing field under priors. By simply eliminating
Concealing Fields, we can render a single or multi-view well-lit image(s) and
gain superior performance over other 2D low light enhancement methods.
Additionally, we collect the first paired LOw-light and normal-light Multi-view
(LOM) datasets for future research.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 09:28:09 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Cui",
"Ziteng",
""
],
[
"Gu",
"Lin",
""
],
[
"Sun",
"Xiao",
""
],
[
"Qiao",
"Yu",
""
],
[
"Harada",
"Tatsuya",
""
]
] |
new_dataset
| 0.999367 |
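The record above works by reducing how much light is transported during NeRF volume rendering. For reference, the standard discrete volume-rendering equation used by vanilla NeRF (generic formula, not the Aleth-NeRF modification) is:

```latex
\[
\hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i\,\bigl(1 - e^{-\sigma_i \delta_i}\bigr)\,\mathbf{c}_i,
\qquad
T_i \;=\; \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr),
\]
where $\sigma_i$ and $\mathbf{c}_i$ are the density and color of sample $i$ along
ray $\mathbf{r}$, and $\delta_i$ is the distance between adjacent samples.
```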
2303.05830
|
Xilong Wang
|
Xilong Wang, Yaofei Wang, Kejiang Chen, Jinyang Ding, Weiming Zhang,
and Nenghai Yu
|
ICStega: Image Captioning-based Semantically Controllable Linguistic
Steganography
|
5 pages, 5 tables, 3 figures. Accepted by ICASSP 2023
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, social media has become the preferred communication platform for
web users, but it has also brought security threats. Linguistic steganography hides secret
data into text and sends it to the intended recipient to realize covert
communication. Compared to edit-based linguistic steganography,
generation-based approaches largely improve the payload capacity. However,
existing methods can only generate stego text alone. Another common behavior in
social media is sending semantically related image-text pairs. In this paper,
we put forward a novel image captioning-based stegosystem, where the secret
messages are embedded into the generated captions. Thus, the semantics of the
stego text can be controlled and the secret data can be transmitted by sending
semantically related image-text pairs. To balance the conflict between payload
capacity and semantic preservation, we propose a new sampling method called
Two-Parameter Semantic Control Sampling to cut off low-probability words.
Experimental results have shown that our method can control diversity, payload
capacity, security, and semantic accuracy at the same time.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 10:10:28 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Wang",
"Xilong",
""
],
[
"Wang",
"Yaofei",
""
],
[
"Chen",
"Kejiang",
""
],
[
"Ding",
"Jinyang",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Yu",
"Nenghai",
""
]
] |
new_dataset
| 0.97347 |
2303.05864
|
EPTCS
|
Davi Romero Vasconcelos (Federal University of Ceará)
|
ANITA: Analytic Tableau Proof Assistant
|
In Proceedings ThEdu'22, arXiv:2303.05360
|
EPTCS 375, 2023, pp. 38-53
|
10.4204/EPTCS.375.4
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents the system ANITA (Analytic Tableau Proof Assistant)
developed for teaching analytic tableaux to computer science students. The tool
is written in Python and can be used as a desktop application, or in a web
platform. This paper describes the logical system of the tool, explains how the
tool is used and compares it to several similar tools. ANITA has already been
used in logic courses and an evaluation of the tool is presented.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 11:36:29 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Vasconcelos",
"Davi Romero",
"",
"Federal University of Ceará"
]
] |
new_dataset
| 0.964542 |
2303.05904
|
Billy Joe Franks
|
Fabian Hartung, Billy Joe Franks, Tobias Michels, Dennis Wagner,
Philipp Liznerski, Steffen Reithermann, Sophie Fellenz, Fabian Jirasek, Maja
Rudolph, Daniel Neider, Heike Leitte, Chen Song, Benjamin Kloepper, Stephan
Mandt, Michael Bortz, Jakob Burger, Hans Hasse, Marius Kloft
|
Deep Anomaly Detection on Tennessee Eastman Process Data
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper provides the first comprehensive evaluation and analysis of modern
(deep-learning) unsupervised anomaly detection methods for chemical process
data. We focus on the Tennessee Eastman process dataset, which has been a
standard litmus test to benchmark anomaly detection methods for nearly three
decades. Our extensive study will facilitate choosing appropriate anomaly
detection methods in industrial applications.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 13:20:52 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Hartung",
"Fabian",
""
],
[
"Franks",
"Billy Joe",
""
],
[
"Michels",
"Tobias",
""
],
[
"Wagner",
"Dennis",
""
],
[
"Liznerski",
"Philipp",
""
],
[
"Reithermann",
"Steffen",
""
],
[
"Fellenz",
"Sophie",
""
],
[
"Jirasek",
"Fabian",
""
],
[
"Rudolph",
"Maja",
""
],
[
"Neider",
"Daniel",
""
],
[
"Leitte",
"Heike",
""
],
[
"Song",
"Chen",
""
],
[
"Kloepper",
"Benjamin",
""
],
[
"Mandt",
"Stephan",
""
],
[
"Bortz",
"Michael",
""
],
[
"Burger",
"Jakob",
""
],
[
"Hasse",
"Hans",
""
],
[
"Kloft",
"Marius",
""
]
] |
new_dataset
| 0.960963 |
2303.05937
|
Mingfang Zhang
|
Mingfang Zhang, Jinglu Wang, Xiao Li, Yifei Huang, Yoichi Sato, Yan Lu
|
Structural Multiplane Image: Bridging Neural View Synthesis and 3D
Reconstruction
|
Accepted to CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The Multiplane Image (MPI), containing a set of fronto-parallel RGBA layers,
is an effective and efficient representation for view synthesis from sparse
inputs. Yet, its fixed structure limits the performance, especially for
surfaces imaged at oblique angles. We introduce the Structural MPI (S-MPI),
where the plane structure approximates 3D scenes concisely. Conveying RGBA
contexts with geometrically-faithful structures, the S-MPI directly bridges
view synthesis and 3D reconstruction. It can not only overcome the critical
limitations of MPI, i.e., discretization artifacts from sloped surfaces and
abuse of redundant layers, and can also acquire planar 3D reconstruction.
Despite the intuition and demand of applying S-MPI, great challenges are
introduced, e.g., high-fidelity approximation for both RGBA layers and plane
poses, multi-view consistency, non-planar regions modeling, and efficient
rendering with intersected planes. Accordingly, we propose a transformer-based
network based on a segmentation model. It predicts compact and expressive S-MPI
layers with their corresponding masks, poses, and RGBA contexts. Non-planar
regions are inclusively handled as a special case in our unified framework.
Multi-view consistency is ensured by sharing global proxy embeddings, which
encode plane-level features covering the complete 3D scenes with aligned
coordinates. Intensive experiments show that our method outperforms both
previous state-of-the-art MPI-based view synthesis methods and planar
reconstruction methods.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 14:18:40 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Zhang",
"Mingfang",
""
],
[
"Wang",
"Jinglu",
""
],
[
"Li",
"Xiao",
""
],
[
"Huang",
"Yifei",
""
],
[
"Sato",
"Yoichi",
""
],
[
"Lu",
"Yan",
""
]
] |
new_dataset
| 0.99899 |
2303.05953
|
Ronnie de Souza Santos Dr
|
Ronnie de Souza Santos, Brody Stuart-Verner, Cleyton de Magalhaes
|
LGBTQIA+ (In)Visibility in Computer Science and Software Engineering
Education
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Modern society is diverse, multicultural, and multifaceted. Because of these
characteristics, we are currently observing an increase in the debates about
equity, diversity, and inclusion in different areas, especially because several
groups of individuals are underrepresented in many environments. In computer
science and software engineering, it seems counter-intuitive that these areas,
which are responsible for creating technological solutions and systems for
billions of users around the world, do not reflect the diversity of the society
that they serve. In trying to solve this diversity crisis in the software
industry, researchers started to investigate strategies that can be applied to
increase diversity and improve inclusion in academia and the software industry.
However, the lack of diversity in computer science and related courses,
including software engineering, is still a problem, in particular when some
specific groups are considered. LGBTQIA+ students, for instance, face several
challenges to fit into technology courses, even though most students in
universities right now belong to Generation Z, which is described as
open-minded to aspects of gender and sexuality. In this study, we aimed to
discuss the state-of-art of publications about the inclusion of LGBTQIA+
students in computer science education. Using a mapping study, we identified
eight studies published in the past six years that focused on this group. We
present strategies developed to adapt curricula and lectures to be more
inclusive to LGBTQIA+ students and discuss challenges and opportunities for
future research.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 14:39:05 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Santos",
"Ronnie de Souza",
""
],
[
"Stuart-Verner",
"Brody",
""
],
[
"de Magalhaes",
"Cleyton",
""
]
] |
new_dataset
| 0.999466 |
2303.05996
|
Pablo Picazo
|
Pablo Picazo-Martínez, Carlos Barroso-Fernández, Jorge
Martín-Pérez, Milan Groshev, Antonio de la Oliva
|
IEEE 802.11az Indoor Positioning with mmWave
|
8 pages, 6 figures, magazine submission
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In recent years we have witnessed the rise of location-based applications,
which depend on a device's ability to accurately obtain its position. IEEE
802.11, foretelling the need for such applications, started the IEEE 802.11az
work on Next Generation Positioning. Although this standard provides
positioning enhancements for sub-6GHz and mmWave bands, high accuracy in the
order of centimeters can only be obtained in the latter band, thanks to the
beamforming information available at mmWave operation. This work presents a
detailed analysis on the new techniques provided by IEEE 802.11az for enhanced
secured positioning in the mmWave band, assessing them through experimentation.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 15:58:14 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Picazo-Martínez",
"Pablo",
""
],
[
"Barroso-Fernández",
"Carlos",
""
],
[
"Martín-Pérez",
"Jorge",
""
],
[
"Groshev",
"Milan",
""
],
[
"de la Oliva",
"Antonio",
""
]
] |
new_dataset
| 0.999272 |
2303.06042
|
Mutian Xu
|
Xianggang Yu, Mutian Xu, Yidan Zhang, Haolin Liu, Chongjie Ye,
Yushuang Wu, Zizheng Yan, Chenming Zhu, Zhangyang Xiong, Tianyou Liang,
Guanying Chen, Shuguang Cui, Xiaoguang Han
|
MVImgNet: A Large-scale Dataset of Multi-view Images
|
To be appear in CVPR2023. Project page:
https://gaplab.cuhk.edu.cn/projects/MVImgNet/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Being data-driven is one of the most iconic properties of deep learning
algorithms. The birth of ImageNet drives a remarkable trend of "learning from
large-scale data" in computer vision. Pretraining on ImageNet to obtain rich
universal representations has been manifested to benefit various 2D visual
tasks, and becomes a standard in 2D vision. However, due to the laborious
collection of real-world 3D data, there is yet no generic dataset serving as a
counterpart of ImageNet in 3D vision, thus how such a dataset can impact the 3D
community remains unrevealed. To remedy this defect, we introduce MVImgNet, a
large-scale dataset of multi-view images, which is highly convenient to gain by
shooting videos of real-world objects in human daily life. It contains 6.5
million frames from 219,188 videos crossing objects from 238 classes, with rich
annotations of object masks, camera parameters, and point clouds. The
multi-view attribute endows our dataset with 3D-aware signals, making it a soft
bridge between 2D and 3D vision.
We conduct pilot studies for probing the potential of MVImgNet on a variety
of 3D and 2D visual tasks, including radiance field reconstruction, multi-view
stereo, and view-consistent image understanding, where MVImgNet demonstrates
promising performance, leaving many possibilities open for future exploration.
Besides, via dense reconstruction on MVImgNet, a 3D object point cloud
dataset is derived, called MVPNet, covering 87,200 samples from 150 categories,
with the class label on each point cloud. Experiments show that MVPNet can
benefit the real-world 3D object classification while posing new challenges to
point cloud understanding.
MVImgNet and MVPNet will be publicly available, hoping to inspire the broader
vision community.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 16:31:31 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Yu",
"Xianggang",
""
],
[
"Xu",
"Mutian",
""
],
[
"Zhang",
"Yidan",
""
],
[
"Liu",
"Haolin",
""
],
[
"Ye",
"Chongjie",
""
],
[
"Wu",
"Yushuang",
""
],
[
"Yan",
"Zizheng",
""
],
[
"Zhu",
"Chenming",
""
],
[
"Xiong",
"Zhangyang",
""
],
[
"Liang",
"Tianyou",
""
],
[
"Chen",
"Guanying",
""
],
[
"Cui",
"Shuguang",
""
],
[
"Han",
"Xiaoguang",
""
]
] |
new_dataset
| 0.999876 |
2303.06073
|
Yasir Zaki
|
Hazem Ibrahim, Rohail Asim, Matteo Varvello, Yasir Zaki
|
I Tag, You Tag, Everybody Tags!
|
8 pages, 9 figures
| null | null | null |
cs.PF cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Location tags enable tracking of personal belongings. This is achieved
locally, e.g., via Bluetooth with a paired phone, and remotely, by piggybacking
on the location reported by location-reporting devices which come into
proximity of a tag. There has been anecdotal evidence that location tags are
also misused to stalk people. This paper studies the performance of the two
most popular location tags (Apple's AirTag and Samsung's SmartTag) through
controlled experiments -- with a known large distribution of location-reporting
devices -- as well as in-the-wild experiments -- with no control on the number
and kind of reporting devices encountered, thus emulating real-life use-cases.
We find that both tags achieve similar performance, e.g., they are located 60%
of the time in about 10 minutes within a 100-meter radius. It follows that
real-time stalking via location tags is impractical, even when both tags are
concurrently deployed, which achieves comparable accuracy in half the time.
Nevertheless, half of a victim's movements can be backtracked accurately (10
meter error) with just a one-hour delay.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 17:19:19 GMT"
}
] | 2023-03-13T00:00:00 |
[
[
"Ibrahim",
"Hazem",
""
],
[
"Asim",
"Rohail",
""
],
[
"Varvello",
"Matteo",
""
],
[
"Zaki",
"Yasir",
""
]
] |
new_dataset
| 0.999695 |
2108.05681
|
Hyowoon Seo
|
Hyowoon Seo, Jihong Park, Mehdi Bennis, M\'erouane Debbah
|
Semantics-Native Communication with Contextual Reasoning
|
18 pages, 16 figures, in IEEE Transactions on Cognitive
Communications and Networking
| null | null | null |
cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spurred by a huge interest in the post-Shannon communication, it has recently
been shown that leveraging semantics can significantly improve the
communication effectiveness across many tasks. In this article, inspired by
human communication, we propose a novel stochastic model of System 1
semantics-native communication (SNC) for generic tasks, where a speaker has an
intention of referring to an entity, extracts the semantics, and communicates
its symbolic representation to a target listener. To further reach its full
potential, we additionally infuse contextual reasoning into SNC such that the
speaker locally and iteratively self-communicates with a virtual agent built on
the physical listener's unique way of coding its semantics, i.e., communication
context. The resultant System 2 SNC allows the speaker to extract the most
effective semantics for its listener. Leveraging the proposed stochastic model,
we show that the reliability of System 2 SNC increases with the number of
meaningful concepts, and derive the expected semantic representation (SR) bit
length which quantifies the extracted effective semantics. It is also shown
that System 2 SNC significantly reduces the SR length without compromising
communication reliability.
|
[
{
"version": "v1",
"created": "Thu, 12 Aug 2021 12:04:27 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 05:49:56 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Seo",
"Hyowoon",
""
],
[
"Park",
"Jihong",
""
],
[
"Bennis",
"Mehdi",
""
],
[
"Debbah",
"Mérouane",
""
]
] |
new_dataset
| 0.978888 |
2201.10276
|
Jin Huang
|
Jin Huang, Jantien Stoter, Ravi Peters, Liangliang Nan
|
City3D: Large-Scale Building Reconstruction from Airborne LiDAR Point
Clouds
| null | null |
10.3390/rs14092254
| null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a fully automatic approach for reconstructing compact 3D building
models from large-scale airborne point clouds. A major challenge of urban
reconstruction from airborne LiDAR point clouds lies in that the vertical walls
are typically missing. Based on the observation that urban buildings typically
consist of planar roofs connected with vertical walls to the ground, we propose
an approach to infer the vertical walls directly from the data. With the planar
segments of both roofs and walls, we hypothesize the faces of the building
surface, and the final model is obtained by using an extended
hypothesis-and-selection-based polygonal surface reconstruction framework.
Specifically, we introduce a new energy term to encourage roof preferences and
two additional hard constraints into the optimization step to ensure correct
topology and enhance detail recovery. Experiments on various large-scale
airborne LiDAR point clouds have demonstrated that the method is superior to
the state-of-the-art methods in terms of reconstruction accuracy and
robustness. In addition, we have generated a new dataset with our method
consisting of the point clouds and 3D models of 20k real-world buildings. We
believe this dataset can stimulate research in urban reconstruction from
airborne LiDAR point clouds and the use of 3D city models in urban
applications.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 12:41:11 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 17:41:34 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Huang",
"Jin",
""
],
[
"Stoter",
"Jantien",
""
],
[
"Peters",
"Ravi",
""
],
[
"Nan",
"Liangliang",
""
]
] |
new_dataset
| 0.999062 |
2207.00531
|
Georg Hess
|
Georg Hess, Johan Jaxing, Elias Svensson, David Hagerman, Christoffer
Petersson, Lennart Svensson
|
Masked Autoencoder for Self-Supervised Pre-training on Lidar Point
Clouds
| null | null |
10.1109/WACVW58289.2023.00039
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Masked autoencoding has become a successful pretraining paradigm for
Transformer models for text, images, and, recently, point clouds. Raw
automotive datasets are suitable candidates for self-supervised pre-training as
they generally are cheap to collect compared to annotations for tasks like 3D
object detection (OD). However, the development of masked autoencoders for
point clouds has focused solely on synthetic and indoor data. Consequently,
existing methods have tailored their representations and models toward small
and dense point clouds with homogeneous point densities. In this work, we study
masked autoencoding for point clouds in an automotive setting, which are sparse
and for which the point density can vary drastically among objects in the same
scene. To this end, we propose Voxel-MAE, a simple masked autoencoding
pre-training scheme designed for voxel representations. We pre-train the
backbone of a Transformer-based 3D object detector to reconstruct masked voxels
and to distinguish between empty and non-empty voxels. Our method improves the
3D OD performance by 1.75 mAP points and 1.05 NDS on the challenging nuScenes
dataset. Further, we show that by pre-training with Voxel-MAE, we require only
40% of the annotated data to outperform a randomly initialized equivalent. Code
available at https://github.com/georghess/voxel-mae
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2022 16:31:45 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2022 12:31:40 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Mar 2023 15:16:24 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Hess",
"Georg",
""
],
[
"Jaxing",
"Johan",
""
],
[
"Svensson",
"Elias",
""
],
[
"Hagerman",
"David",
""
],
[
"Petersson",
"Christoffer",
""
],
[
"Svensson",
"Lennart",
""
]
] |
new_dataset
| 0.997482 |
2207.07609
|
Ruiqing Mao
|
Ruiqing Mao, Jingyu Guo, Yukuan Jia, Yuxuan Sun, Sheng Zhou, Zhisheng
Niu
|
DOLPHINS: Dataset for Collaborative Perception enabled Harmonious and
Interconnected Self-driving
| null | null |
10.1007/978-3-031-26348-4_29
| null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicle-to-Everything (V2X) network has enabled collaborative perception in
autonomous driving, which is a promising solution to the fundamental defect of
stand-alone intelligence including blind zones and long-range perception.
However, the lack of datasets has severely blocked the development of
collaborative perception algorithms. In this work, we release DOLPHINS: Dataset
for cOllaborative Perception enabled Harmonious and INterconnected
Self-driving, as a new simulated large-scale various-scenario multi-view
multi-modality autonomous driving dataset, which provides a ground-breaking
benchmark platform for interconnected autonomous driving. DOLPHINS outperforms
current datasets in six dimensions: temporally-aligned images and point clouds
from both vehicles and Road Side Units (RSUs) enabling both Vehicle-to-Vehicle
(V2V) and Vehicle-to-Infrastructure (V2I) based collaborative perception; 6
typical scenarios with dynamic weather conditions make it the most diverse
interconnected autonomous driving dataset; meticulously selected viewpoints
providing full coverage of the key areas and every object; 42376 frames and
292549 objects, as well as the corresponding 3D annotations, geo-positions, and
calibrations, compose the largest dataset for collaborative perception; Full-HD
images and 64-line LiDARs construct high-resolution data with sufficient
details; well-organized APIs and open-source codes ensure the extensibility of
DOLPHINS. We also construct a benchmark of 2D detection, 3D detection, and
multi-view collaborative perception tasks on DOLPHINS. The experiment results
show that the raw-level fusion scheme through V2X communication can help to
improve the precision as well as to reduce the necessity of expensive LiDAR
equipment on vehicles when RSUs exist, which may accelerate the popularity of
interconnected self-driving vehicles. DOLPHINS is now available on
https://dolphins-dataset.net/.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2022 17:07:07 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Mao",
"Ruiqing",
""
],
[
"Guo",
"Jingyu",
""
],
[
"Jia",
"Yukuan",
""
],
[
"Sun",
"Yuxuan",
""
],
[
"Zhou",
"Sheng",
""
],
[
"Niu",
"Zhisheng",
""
]
] |
new_dataset
| 0.999783 |
2208.14288
|
Daniele Bernardini
|
Hongpeng Cao, Lukas Dirnberger, Daniele Bernardini, Cristina Piazza,
Marco Caccamo
|
6IMPOSE: Bridging the Reality Gap in 6D Pose Estimation for Robotic
Grasping
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
6D pose recognition has been a crucial factor in the success of robotic
grasping, and recent deep learning based approaches have achieved remarkable
results on benchmarks. However, their generalization capabilities in real-world
applications remain unclear. To overcome this gap, we introduce 6IMPOSE, a
novel framework for sim-to-real data generation and 6D pose estimation. 6IMPOSE
consists of four modules: First, a data generation pipeline that employs the 3D
software suite Blender to create synthetic RGBD image datasets with 6D pose
annotations. Second, an annotated RGBD dataset of five household objects
generated using the proposed pipeline. Third, a real-time two-stage 6D pose
estimation approach that integrates the object detector YOLO-V4 and a
streamlined, real-time version of the 6D pose estimation algorithm PVN3D
optimized for time-sensitive robotics applications. Fourth, a codebase designed
to facilitate the integration of the vision system into a robotic grasping
experiment. Our approach demonstrates the efficient generation of large amounts
of photo-realistic RGBD images and the successful transfer of the trained
inference model to robotic grasping experiments, achieving an overall success
rate of 87% in grasping five different household objects from cluttered
backgrounds under varying lighting conditions. This is made possible by the
fine-tuning of data generation and domain randomization techniques, and the
optimization of the inference pipeline, overcoming the generalization and
performance shortcomings of the original PVN3D algorithm. Finally, we make the
code, synthetic dataset, and all the pretrained models available on Github.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 14:17:15 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 09:51:27 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Cao",
"Hongpeng",
""
],
[
"Dirnberger",
"Lukas",
""
],
[
"Bernardini",
"Daniele",
""
],
[
"Piazza",
"Cristina",
""
],
[
"Caccamo",
"Marco",
""
]
] |
new_dataset
| 0.992156 |
2210.05556
|
Terry Yue Zhuo
|
Terry Yue Zhuo and Yaqing Liao and Yuecheng Lei and Lizhen Qu and
Gerard de Melo and Xiaojun Chang and Yazhou Ren and Zenglin Xu
|
ViLPAct: A Benchmark for Compositional Generalization on Multimodal
Human Activities
|
Accepted at EACL2023 (Findings)
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce ViLPAct, a novel vision-language benchmark for human activity
planning. It is designed for a task where embodied AI agents can reason and
forecast future actions of humans based on video clips about their initial
activities and intents in text. The dataset consists of 2.9k videos from
Charades extended with intents via crowdsourcing, a multi-choice question test
set, and four strong baselines. One of the baselines implements a neurosymbolic
approach based on a multi-modal knowledge base (MKB), while the other ones are
deep generative models adapted from recent state-of-the-art (SOTA) methods.
According to our extensive experiments, the key challenges are compositional
generalization and effective use of information from both modalities.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 15:50:51 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 06:00:57 GMT"
},
{
"version": "v3",
"created": "Sun, 19 Feb 2023 09:28:10 GMT"
},
{
"version": "v4",
"created": "Thu, 9 Mar 2023 11:04:07 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Zhuo",
"Terry Yue",
""
],
[
"Liao",
"Yaqing",
""
],
[
"Lei",
"Yuecheng",
""
],
[
"Qu",
"Lizhen",
""
],
[
"de Melo",
"Gerard",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Ren",
"Yazhou",
""
],
[
"Xu",
"Zenglin",
""
]
] |
new_dataset
| 0.999624 |
2210.16984
|
Gwendal Le Vaillant
|
Gwendal Le Vaillant, Thierry Dutoit
|
Synthesizer Preset Interpolation using Transformer Auto-Encoders
|
Accepted to IEEE ICASSP 2023
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Sound synthesizers are widespread in modern music production but they
increasingly require expert skills to be mastered. This work focuses on
interpolation between presets, i.e., sets of values of all sound synthesis
parameters, to enable the intuitive creation of new sounds from existing ones.
We introduce a bimodal auto-encoder neural network, which simultaneously
processes presets using multi-head attention blocks, and audio using
convolutions. This model has been tested on a popular frequency modulation
synthesizer with more than one hundred parameters. Experiments have compared
the model to related architectures and methods, and have demonstrated that it
performs smoother interpolations. After training, the proposed model can be
integrated into commercial synthesizers for live interpolation or sound design
tasks.
|
[
{
"version": "v1",
"created": "Thu, 27 Oct 2022 15:20:18 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 16:12:17 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Vaillant",
"Gwendal Le",
""
],
[
"Dutoit",
"Thierry",
""
]
] |
new_dataset
| 0.983662 |
2211.08451
|
Mete Ismayilzada
|
Mete Ismayilzada, Antoine Bosselut
|
kogito: A Commonsense Knowledge Inference Toolkit
|
EACL 2023 Camera ready, 9 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present kogito, an open-source tool for generating
commonsense inferences about situations described in text. kogito provides an
intuitive and extensible interface to interact with natural language generation
models that can be used for hypothesizing commonsense knowledge inference from
a textual input. In particular, kogito offers several features for targeted,
multi-granularity knowledge generation. These include a standardized API for
training and evaluating knowledge models, and generating and filtering
inferences from them. We also include helper functions for converting natural
language texts into a format ingestible by knowledge models - intermediate
pipeline stages such as knowledge head extraction from text, heuristic and
model-based knowledge head-relation matching, and an ability to define and use
custom knowledge relations. We make the code for kogito available at
https://github.com/epfl-nlp/kogito along with thorough documentation at
https://kogito.readthedocs.io.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 19:04:13 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2022 10:44:53 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Mar 2023 20:50:27 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Ismayilzada",
"Mete",
""
],
[
"Bosselut",
"Antoine",
""
]
] |
new_dataset
| 0.989377 |
2211.12046
|
Dogyoon Lee
|
Dogyoon Lee, Minhyeok Lee, Chajin Shin, Sangyoun Lee
|
DP-NeRF: Deblurred Neural Radiance Field with Physical Scene Priors
|
Accepted at CVPR 2023, Code: https://github.com/dogyoonlee/DP-NeRF,
Project page: https://dogyoonlee.github.io/dpnerf/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Radiance Field (NeRF) has exhibited outstanding three-dimensional (3D)
reconstruction quality via the novel view synthesis from multi-view images and
paired calibrated camera parameters. However, previous NeRF-based systems have
been demonstrated under strictly controlled settings, with little attention
paid to less ideal scenarios, including with the presence of noise such as
exposure, illumination changes, and blur. In particular, though blur frequently
occurs in real situations, NeRF that can handle blurred images has received
little attention. The few studies that have investigated NeRF for blurred
images have not considered geometric and appearance consistency in 3D space,
which is one of the most important factors in 3D reconstruction. This leads to
inconsistency and the degradation of the perceptual quality of the constructed
scene. Hence, this paper proposes DP-NeRF, a novel clean NeRF framework for
blurred images, which is constrained with two physical priors. These priors are
derived from the actual blurring process during image acquisition by the
camera. DP-NeRF proposes a rigid blurring kernel to impose 3D consistency
utilizing the physical priors and an adaptive weight proposal to refine the color
composition error in consideration of the relationship between depth and blur.
We present extensive experimental results for synthetic and real scenes with
two types of blur: camera motion blur and defocus blur. The results demonstrate
that DP-NeRF successfully improves the perceptual quality of the constructed
NeRF ensuring 3D geometric and appearance consistency. We further demonstrate
the effectiveness of our model with comprehensive ablation analysis.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 06:40:53 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Dec 2022 04:45:28 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Mar 2023 05:13:18 GMT"
},
{
"version": "v4",
"created": "Thu, 9 Mar 2023 04:46:50 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Lee",
"Dogyoon",
""
],
[
"Lee",
"Minhyeok",
""
],
[
"Shin",
"Chajin",
""
],
[
"Lee",
"Sangyoun",
""
]
] |
new_dataset
| 0.997631 |
2211.12914
|
Maria A. Bravo
|
Mar\'ia A. Bravo, Sudhanshu Mittal, Simon Ging, Thomas Brox
|
Open-vocabulary Attribute Detection
|
Accepted at CVPR 2023. https://ovad-benchmark.github.io
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Vision-language modeling has enabled open-vocabulary tasks where predictions
can be queried using any text prompt in a zero-shot manner. Existing
open-vocabulary tasks focus on object classes, whereas research on object
attributes is limited due to the lack of a reliable attribute-focused
evaluation benchmark. This paper introduces the Open-Vocabulary Attribute
Detection (OVAD) task and the corresponding OVAD benchmark. The objective of
the novel task and benchmark is to probe object-level attribute information
learned by vision-language models. To this end, we created a clean and densely
annotated test set covering 117 attribute classes on the 80 object classes of
MS COCO. It includes positive and negative annotations, which enables
open-vocabulary evaluation. Overall, the benchmark consists of 1.4 million
annotations. For reference, we provide a first baseline method for
open-vocabulary attribute detection. Moreover, we demonstrate the benchmark's
value by studying the attribute detection performance of several foundation
models. Project page https://ovad-benchmark.github.io
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 12:34:43 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 19:29:46 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Bravo",
"María A.",
""
],
[
"Mittal",
"Sudhanshu",
""
],
[
"Ging",
"Simon",
""
],
[
"Brox",
"Thomas",
""
]
] |
new_dataset
| 0.99624 |
2301.03561
|
Armin Danesh Pazho
|
Armin Danesh Pazho, Christopher Neff, Ghazal Alinezhad Noghre, Babak
Rahimi Ardabili, Shanle Yao, Mohammadreza Baharani, Hamed Tabkhi
|
Ancilia: Scalable Intelligent Video Surveillance for the Artificial
Intelligence of Things
| null | null | null | null |
cs.CV cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the advancement of vision-based artificial intelligence, the
proliferation of the Internet of Things connected cameras, and the increasing
societal need for rapid and equitable security, the demand for accurate
real-time intelligent surveillance has never been higher. This article presents
Ancilia, an end-to-end scalable, intelligent video surveillance system for the
Artificial Intelligence of Things. Ancilia brings state-of-the-art artificial
intelligence to real-world surveillance applications while respecting ethical
concerns and performing high-level cognitive tasks in real-time. Ancilia aims
to revolutionize the surveillance landscape, to bring more effective,
intelligent, and equitable security to the field, resulting in safer and more
secure communities without requiring people to compromise their right to
privacy.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 18:21:22 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 18:55:02 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Pazho",
"Armin Danesh",
""
],
[
"Neff",
"Christopher",
""
],
[
"Noghre",
"Ghazal Alinezhad",
""
],
[
"Ardabili",
"Babak Rahimi",
""
],
[
"Yao",
"Shanle",
""
],
[
"Baharani",
"Mohammadreza",
""
],
[
"Tabkhi",
"Hamed",
""
]
] |
new_dataset
| 0.997733 |
2302.11606
|
Pranathi Rayavaram
|
Nathan Percival, Pranathi Rayavaram, Sashank Narain, Claire Seungeun
Lee
|
CryptoScratch: Developing and evaluating a block-based programming tool
for teaching K-12 cryptography education using Scratch
| null |
2022 IEEE Global Engineering Education Conference (EDUCON)
|
10.1109/EDUCON52537.2022.9766637
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the design, implementation, and evaluation of a new
framework called CryptoScratch, which extends the Scratch programming
environment with modern cryptographic algorithms (e.g., AES, RSA, SHA-256)
implemented as visual blocks. Using the simple interface of CryptoScratch, K-12
students can study how to use cryptographic algorithms for services like
confidentiality, authentication, and integrity protection; and then use these
blocks to build complex modern cryptographic schemes (e.g., Pretty Good
Privacy, Digital Signatures). In addition, we present the design and
implementation of a Task Block that provides students instruction on various
cryptography problems and verifies that they have successfully completed the
problem. The task block also generates feedback, nudging learners to implement
more secure solutions for cryptographic problems. An initial usability study
was performed with 16 middle-school students where students were taught basic
cryptographic concepts and then asked to complete tasks using those concepts.
Once students had knowledge of a variety of basic cryptographic algorithms,
they were asked to use those algorithms to implement complex cryptographic
schemes such as Pretty Good Privacy and Digital Signatures. Using the
successful implementation of the cryptographic and task blocks in Scratch, the
initial testing indicated that $\approx 60\%$ of the students could quickly
grasp and implement complex cryptography concepts using CryptoScratch, while
$\approx 90\%$ showed comfort with cryptography concepts and use-cases. Based
on the positive results from the initial testing, a larger study of students is
being developed to investigate the effectiveness across the socioeconomic
spectrum.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 19:04:12 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Percival",
"Nathan",
""
],
[
"Rayavaram",
"Pranathi",
""
],
[
"Narain",
"Sashank",
""
],
[
"Lee",
"Claire Seungeun",
""
]
] |
new_dataset
| 0.976971 |
2302.13570
|
Fabian Woitschek
|
Fabian Woitschek, Georg Schneider
|
Physical Adversarial Attacks on Deep Neural Networks for Traffic Sign
Recognition: A Feasibility Study
| null |
2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan,
2021, pp. 481-487
|
10.1109/IV48863.2021.9575935
| null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep Neural Networks (DNNs) are increasingly applied in the real world in
safety critical applications like advanced driver assistance systems. An
example for such use case is represented by traffic sign recognition systems.
At the same time, it is known that current DNNs can be fooled by adversarial
attacks, which raises safety concerns if those attacks can be applied under
realistic conditions. In this work we apply different black-box attack methods
to generate perturbations that are applied in the physical environment and can
be used to fool systems under different environmental conditions. To the best
of our knowledge we are the first to combine a general framework for physical
attacks with different black-box attack methods and study the impact of the
different methods on the success rate of the attack under the same setting. We
show that reliable physical adversarial attacks can be performed with different
methods and that it is also possible to reduce the perceptibility of the
resulting perturbations. The findings highlight the need for viable defenses of
a DNN even in the black-box case, but at the same time form the basis for
securing a DNN with methods like adversarial training which utilizes
adversarial attacks to augment the original training data.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 08:10:58 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Woitschek",
"Fabian",
""
],
[
"Schneider",
"Georg",
""
]
] |
new_dataset
| 0.970718 |
2302.13863
|
Zhijingcheng Yu
|
Jason Zhijingcheng Yu, Conrad Watt, Aditya Badole, Trevor E. Carlson,
Prateek Saxena
|
Capstone: A Capability-based Foundation for Trustless Secure Memory
Access (Extended Version)
|
31 pages, 10 figures. This is an extended version of a paper to
appear at 32nd USENIX Security Symposium, August 2023; acknowledgments
updated
| null | null | null |
cs.CR cs.AR cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
Capability-based memory isolation is a promising new architectural primitive.
Software can access low-level memory only via capability handles rather than
raw pointers, which provides a natural interface to enforce security
restrictions. Existing architectural capability designs such as CHERI provide
spatial safety, but fail to extend to other memory models that
security-sensitive software designs may desire. In this paper, we propose
Capstone, a more expressive architectural capability design that supports
multiple existing memory isolation models in a trustless setup, i.e., without
relying on trusted software components. We show how Capstone is well-suited for
environments where privilege boundaries are fluid (dynamically extensible),
memory sharing/delegation are desired both temporally and spatially, and where
such needs are to be balanced with availability concerns. Capstone can also be
implemented efficiently. We present an implementation sketch and through
evaluation show that its overhead is below 50% in common use cases. We also
prototype a functional emulator for Capstone and use it to demonstrate the
runnable implementations of six real-world memory models without trusted
software components: three types of enclave-based TEEs, a thread scheduler, a
memory allocator, and Rust-style memory safety -- all within the interface of
Capstone.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 15:03:15 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 07:23:25 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Yu",
"Jason Zhijingcheng",
""
],
[
"Watt",
"Conrad",
""
],
[
"Badole",
"Aditya",
""
],
[
"Carlson",
"Trevor E.",
""
],
[
"Saxena",
"Prateek",
""
]
] |
new_dataset
| 0.997753 |
2303.01932
|
Kejie Li
|
Kejie Li, Jia-Wang Bian, Robert Castle, Philip H.S. Torr, Victor
Adrian Prisacariu
|
MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices
|
To appear at CVPR 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
High-quality 3D ground-truth shapes are critical for 3D object reconstruction
evaluation. However, it is difficult to create a replica of an object in
reality, and even 3D reconstructions generated by 3D scanners have artefacts
that cause biases in evaluation. To address this issue, we introduce a novel
multi-view RGBD dataset captured using a mobile device, which includes highly
precise 3D ground-truth annotations for 153 object models featuring a diverse
set of 3D structures. We obtain precise 3D ground-truth shape without relying
on high-end 3D scanners by utilising LEGO models with known geometry as the 3D
structures for image capture. The distinct data modality offered by
high-resolution RGB images and low-resolution depth maps captured on a mobile
device, when combined with precise 3D geometry annotations, presents a unique
opportunity for future research on high-fidelity 3D reconstruction.
Furthermore, we evaluate a range of 3D reconstruction algorithms on the
proposed dataset. Project page: http://code.active.vision/MobileBrick/
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 14:02:50 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 13:08:22 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Li",
"Kejie",
""
],
[
"Bian",
"Jia-Wang",
""
],
[
"Castle",
"Robert",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Prisacariu",
"Victor Adrian",
""
]
] |
new_dataset
| 0.999859 |
2303.03398
|
Ronald Caplan
|
Ronald M. Caplan, Miko M. Stulajter, Jon A. Linker
|
Acceleration of a production Solar MHD code with Fortran standard
parallelism: From OpenACC to `do concurrent'
|
10 pages, 2 tables, 4 figures, accepted to the AsHES workshop at
IPDPS 2023
| null | null | null |
cs.MS astro-ph.IM cs.DC cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is growing interest in using standard language constructs for
accelerated computing, avoiding the need for (often vendor-specific) external
APIs. These constructs hold the potential to be more portable and much more
`future-proof'. For Fortran codes, the current focus is on the {\tt do
concurrent} (DC) loop. While there have been some successful examples of
GPU-acceleration using DC for benchmark and/or small codes, its widespread
adoption will require demonstrations of its use in full-size applications.
Here, we look at the current capabilities and performance of using DC in a
production application called Magnetohydrodynamic Algorithm outside a Sphere
(MAS). MAS is a state-of-the-art model for studying coronal and heliospheric
dynamics, is over 70,000 lines long, and has previously been ported to GPUs
using MPI+OpenACC. We attempt to eliminate as many of its OpenACC directives as
possible in favor of DC. We show that using the NVIDIA {\tt nvfortran}
compiler's Fortran 202X preview implementation, unified managed memory, and
modified MPI launch methods, we can achieve GPU acceleration across multiple
GPUs without using a single OpenACC directive. However, doing so results in a
slowdown between 1.25x and 3x. We discuss what future improvements are needed
to avoid this loss, and show how we can still retain close
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 21:37:34 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 20:18:20 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Caplan",
"Ronald M.",
""
],
[
"Stulajter",
"Miko M.",
""
],
[
"Linker",
"Jon A.",
""
]
] |
new_dataset
| 0.999669 |
2303.04001
|
Rodrigo Mello
|
Rodrigo Mello, Filipe Calegario, Geber Ramalho
|
ELODIN: Naming Concepts in Embedding Spaces
|
Added quantitative data, fixed formatting issues
| null | null | null |
cs.CV cs.CL cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Despite recent advancements, the field of text-to-image synthesis still
suffers from lack of fine-grained control. Using only text, it remains
challenging to deal with issues such as concept coherence and concept
contamination. We propose a method to enhance control by generating specific
concepts that can be reused throughout multiple images, effectively expanding
natural language with new words that can be combined much like a painter's
palette. Unlike previous contributions, our method does not copy visuals from
input data and can generate concepts through text alone. We perform a set of
comparisons that finds our method to be a significant improvement over
text-only prompts.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 16:00:26 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 17:10:27 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Mello",
"Rodrigo",
""
],
[
"Calegario",
"Filipe",
""
],
[
"Ramalho",
"Geber",
""
]
] |
new_dataset
| 0.995791 |
2303.04027
|
Chia Sheng Liu
|
Chia-Sheng Liu, Jia-Fong Yeh, Hao Hsu, Hung-Ting Su, Ming-Sui Lee,
Winston H. Hsu
|
BIRD-PCC: Bi-directional Range Image-based Deep LiDAR Point Cloud
Compression
|
Accepted to ICASSP 2023
| null | null | null |
cs.MM cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The large amount of data collected by LiDAR sensors brings the issue of LiDAR
point cloud compression (PCC). Previous works on LiDAR PCC have used range
image representations and followed the predictive coding paradigm to create a
basic prototype of a coding framework. However, their prediction methods give
an inaccurate result due to the negligence of invalid pixels in range images
and the omission of future frames in the time step. Moreover, their handcrafted
design of residual coding methods could not fully exploit spatial redundancy.
To remedy this, we propose a coding framework BIRD-PCC. Our prediction module
is aware of the coordinates of invalid pixels in range images and takes a
bidirectional scheme. Also, we introduce a deep-learned residual coding module
that can further exploit spatial redundancy within a residual frame.
Experiments conducted on SemanticKITTI and KITTI-360 datasets show that
BIRD-PCC outperforms other methods in most bitrate conditions and generalizes
well to unseen environments.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 16:39:09 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 02:58:24 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Liu",
"Chia-Sheng",
""
],
[
"Yeh",
"Jia-Fong",
""
],
[
"Hsu",
"Hao",
""
],
[
"Su",
"Hung-Ting",
""
],
[
"Lee",
"Ming-Sui",
""
],
[
"Hsu",
"Winston H.",
""
]
] |
new_dataset
| 0.995671 |
2303.04835
|
Alexandra Bremers
|
Alexandra Bremers, Maria Teresa Parreira, Xuanyu Fang, Natalie
Friedman, Adolfo Ramirez-Aristizabal, Alexandria Pabst, Mirjana Spasojevic,
Michael Kuniavsky, Wendy Ju
|
The Bystander Affect Detection (BAD) Dataset for Failure Detection in
HRI
|
12 pages
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
For a robot to repair its own error, it must first know it has made a
mistake. One way that people detect errors is from the implicit reactions from
bystanders -- their confusion, smirks, or giggles clue us in that something
unexpected occurred. To enable robots to detect and act on bystander responses
to task failures, we developed a novel method to elicit bystander responses to
human and robot errors. Using 46 different stimulus videos featuring a variety
of human and machine task failures, we collected a total of 2452 webcam videos
of human reactions from 54 participants. To test the viability of the collected
data, we used the bystander reaction dataset as input to a deep-learning model,
BADNet, to predict failure occurrence. We tested different data labeling
methods and learned how they affect model performance, achieving precisions
above 90%. We discuss strategies to model bystander reactions and predict
failure and how this approach can be used in real-world robotic deployments to
detect errors and improve robot performance. As part of this work, we also
contribute the "Bystander Affect Detection" (BAD) dataset of bystander
reactions, supporting the development of better prediction models.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 19:13:18 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Bremers",
"Alexandra",
""
],
[
"Parreira",
"Maria Teresa",
""
],
[
"Fang",
"Xuanyu",
""
],
[
"Friedman",
"Natalie",
""
],
[
"Ramirez-Aristizabal",
"Adolfo",
""
],
[
"Pabst",
"Alexandria",
""
],
[
"Spasojevic",
"Mirjana",
""
],
[
"Kuniavsky",
"Michael",
""
],
[
"Ju",
"Wendy",
""
]
] |
new_dataset
| 0.999851 |
2303.04838
|
Bilal Porgali
|
Bilal Porgali, V\'itor Albiero, Jordan Ryda, Cristian Canton Ferrer,
Caner Hazirbas
|
The Casual Conversations v2 Dataset
| null | null | null | null |
cs.CV cs.AI cs.CL cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces a new large consent-driven dataset aimed at assisting
in the evaluation of algorithmic bias and robustness of computer vision and
audio speech models in regards to 11 attributes that are self-provided or
labeled by trained annotators. The dataset includes 26,467 videos of 5,567
unique paid participants, with an average of almost 5 videos per person,
recorded in Brazil, India, Indonesia, Mexico, Vietnam, Philippines, and the
USA, representing diverse demographic characteristics. The participants agreed
for their data to be used in assessing fairness of AI models and provided
self-reported age, gender, language/dialect, disability status, physical
adornments, physical attributes and geo-location information, while trained
annotators labeled apparent skin tone using the Fitzpatrick Skin Type and Monk
Skin Tone scales, and voice timbre. Annotators also labeled for different
recording setups and per-second activity annotations.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 19:17:05 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Porgali",
"Bilal",
""
],
[
"Albiero",
"Vítor",
""
],
[
"Ryda",
"Jordan",
""
],
[
"Ferrer",
"Cristian Canton",
""
],
[
"Hazirbas",
"Caner",
""
]
] |
new_dataset
| 0.999725 |
2303.04864
|
Christopher Hahn
|
Matthias Cosler, Christopher Hahn, Daniel Mendoza, Frederik Schmitt,
Caroline Trippel
|
nl2spec: Interactively Translating Unstructured Natural Language to
Temporal Logics with Large Language Models
| null | null | null | null |
cs.LO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A rigorous formalization of desired system requirements is indispensable when
performing any verification task. This often limits the application of
verification techniques, as writing formal specifications is an error-prone and
time-consuming manual task. To facilitate this, we present nl2spec, a framework
for applying Large Language Models (LLMs) to derive formal specifications (in
temporal logics) from unstructured natural language. In particular, we
introduce a new methodology to detect and resolve the inherent ambiguity of
system requirements in natural language: we utilize LLMs to map subformulas of
the formalization back to the corresponding natural language fragments of the
input. Users iteratively add, delete, and edit these sub-translations to amend
erroneous formalizations, which is easier than manually redrafting the entire
formalization. The framework is agnostic to specific application domains and
can be extended to similar specification languages and new neural models. We
perform a user study to obtain a challenging dataset, which we use to run
experiments on the quality of translations. We provide an open-source
implementation, including a web-based frontend.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 20:08:53 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Cosler",
"Matthias",
""
],
[
"Hahn",
"Christopher",
""
],
[
"Mendoza",
"Daniel",
""
],
[
"Schmitt",
"Frederik",
""
],
[
"Trippel",
"Caroline",
""
]
] |
new_dataset
| 0.99067 |
2303.04884
|
Pengyu Chu
|
Pengyu Chu, Zhaojian Li, Kaixiang Zhang, Dong Chen, Kyle Lammers and
Renfu Lu
|
O2RNet: Occluder-Occludee Relational Network for Robust Apple Detection
in Clustered Orchard Environments
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated apple harvesting has attracted significant research interest in
recent years due to its potential to revolutionize the apple industry,
addressing the issues of shortage and high costs in labor. One key technology
to fully enable efficient automated harvesting is accurate and robust apple
detection, which is challenging due to complex orchard environments that
involve varying lighting conditions and foliage/branch occlusions. Furthermore,
clustered apples are common in the orchard, which brings additional challenges
as the clustered apples may be identified as one apple. This will cause issues
in localization for subsequent robotic operations. In this paper, we present
the development of a novel deep learning-based apple detection framework,
Occluder-Occludee Relational Network (O2RNet), for robust detection of apples
in such clustered environments. This network exploits the occluder-occludee
relationship modeling head by introducing a feature expansion structure to
enable the combination of layered traditional detectors to split clustered
apples and foliage occlusions. More specifically, we collect a comprehensive
apple orchard image dataset under different lighting conditions (overcast,
front lighting, and back lighting) with frequent apple occlusions. We then
develop a novel occlusion-aware network for apple detection, in which a feature
expansion structure is incorporated into the convolutional neural networks to
extract additional features generated by the original network for occluded
apples. Comprehensive evaluations are performed, which show that the developed
O2RNet outperforms state-of-the-art models with a higher accuracy of 94\% and a
higher F1-score of 0.88 on apple detection.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 20:46:05 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Chu",
"Pengyu",
""
],
[
"Li",
"Zhaojian",
""
],
[
"Zhang",
"Kaixiang",
""
],
[
"Chen",
"Dong",
""
],
[
"Lammers",
"Kyle",
""
],
[
"Lu",
"Renfu",
""
]
] |
new_dataset
| 0.999486 |
2303.04891
|
Timothy Chase Jr
|
Timothy Chase Jr, Chris Gnam, John Crassidis, Karthik Dantu
|
You Only Crash Once: Improved Object Detection for Real-Time,
Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous
Planetary Landings
|
To be published in proceedings of AAS/AIAA Astrodynamics Specialist
Conference 2022
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The detection of hazardous terrain during the planetary landing of spacecraft
plays a critical role in assuring vehicle safety and mission success. A cheap
and effective way of detecting hazardous terrain is through the use of visual
cameras, which ensure operational ability from atmospheric entry through
touchdown. Plagued by resource constraints and limited computational power,
traditional techniques for visual hazardous terrain detection focus on template
matching and registration to pre-built hazard maps. Although successful on
previous missions, this approach is restricted to the specificity of the
templates and limited by the fidelity of the underlying hazard map, which both
require extensive pre-flight cost and effort to obtain and develop. Terrestrial
systems that perform a similar task in applications such as autonomous driving
utilize state-of-the-art deep learning techniques to successfully localize and
classify navigation hazards. Advancements in spacecraft co-processors aimed at
accelerating deep learning inference enable the application of these methods in
space for the first time. In this work, we introduce You Only Crash Once
(YOCO), a deep learning-based visual hazardous terrain detection and
classification technique for autonomous spacecraft planetary landings. Through
the use of unsupervised domain adaptation we tailor YOCO for training by
simulation, removing the need for real-world annotated data and expensive
mission surveying phases. We further improve the transfer of representative
terrain knowledge between simulation and the real world through visual
similarity clustering. We demonstrate the utility of YOCO through a series of
terrestrial and extraterrestrial simulation-to-real experiments and show
substantial improvements toward the ability to both detect and accurately
classify instances of planetary terrain.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 21:11:51 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Chase",
"Timothy",
"Jr"
],
[
"Gnam",
"Chris",
""
],
[
"Crassidis",
"John",
""
],
[
"Dantu",
"Karthik",
""
]
] |
new_dataset
| 0.998591 |
2303.04895
|
Isabelle Bloch
|
Marc Aiguier, Isabelle Bloch, Salim Nibouche and Ramon Pino Perez
|
Morpho-logic from a Topos Perspective: Application to symbolic AI
| null | null | null | null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Modal logics have proved useful for many reasoning tasks in symbolic
artificial intelligence (AI), such as belief revision, spatial reasoning, among
others. On the other hand, mathematical morphology (MM) is a theory for
non-linear analysis of structures, that was widely developed and applied in
image analysis. Its mathematical bases rely on algebra, complete lattices,
topology. Strong links have been established between MM and mathematical
logics, mostly modal logics. In this paper, we propose to further develop and
generalize this link between mathematical morphology and modal logic from a
topos perspective, i.e. categorial structures generalizing space, and
connecting logics, sets and topology. Furthermore, we rely on the internal
language and logic of topos. We define structuring elements, dilations and
erosions as morphisms. Then we introduce the notion of structuring
neighborhoods, and show that the dilations and erosions based on them lead to a
constructive modal logic, for which a sound and complete proof system is
proposed. We then show that the modal logic thus defined (called morpho-logic
here), is well adapted to define concrete and efficient operators for revision,
merging, and abduction of new knowledge, or even spatial reasoning.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 21:24:25 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Aiguier",
"Marc",
""
],
[
"Bloch",
"Isabelle",
""
],
[
"Nibouche",
"Salim",
""
],
[
"Perez",
"Ramon Pino",
""
]
] |
new_dataset
| 0.99882 |
2303.04923
|
Karthik Shetty
|
Karthik Shetty, Annette Birkhold, Srikrishna Jaganathan, Norbert
Strobel, Bernhard Egger, Markus Kowarschik, Andreas Maier
|
BOSS: Bones, Organs and Skin Shape Model
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objective: A digital twin of a patient can be a valuable tool for enhancing
clinical tasks such as workflow automation, patient-specific X-ray dose
optimization, markerless tracking, positioning, and navigation assistance in
image-guided interventions. However, it is crucial that the patient's surface
and internal organs are of high quality for any pose and shape estimates. At
present, the majority of statistical shape models (SSMs) are restricted to a
small number of organs or bones or do not adequately represent the general
population. Method: To address this, we propose a deformable human shape and
pose model that combines skin, internal organs, and bones, learned from CT
images. By modeling the statistical variations in a pose-normalized space using
probabilistic PCA while also preserving joint kinematics, our approach offers a
holistic representation of the body that can benefit various medical
applications. Results: We assessed our model's performance on a registered
dataset, utilizing the unified shape space, and noted an average error of 3.6
mm for bones and 8.8 mm for organs. To further verify our findings, we
conducted additional tests on publicly available datasets with multi-part
segmentations, which confirmed the effectiveness of our model. Conclusion: This
work shows that anatomically parameterized statistical shape models can be
created accurately and in a computationally efficient manner. Significance: The
proposed approach enables the construction of shape models that can be directly
applied to various medical applications, including biomechanics and
reconstruction.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 22:31:24 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Shetty",
"Karthik",
""
],
[
"Birkhold",
"Annette",
""
],
[
"Jaganathan",
"Srikrishna",
""
],
[
"Strobel",
"Norbert",
""
],
[
"Egger",
"Bernhard",
""
],
[
"Kowarschik",
"Markus",
""
],
[
"Maier",
"Andreas",
""
]
] |
new_dataset
| 0.97844 |
2303.04946
|
Ravi Vadlamani
|
Yelleti Vivek, Vadlamani Ravi, Abhay Anand Mane, Laveti Ramesh Naidu
|
ATM Fraud Detection using Streaming Data Analytics
|
25 pages, 15 figures, 10 tables. arXiv admin note: text overlap with
arXiv:2211.10595
| null | null | null |
cs.LG cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Gaining the trust and confidence of customers is the essence of the growth
and success of financial institutions and organizations. Of late, the financial
industry is significantly impacted by numerous instances of fraudulent
activities. Further, owing to the generation of large, voluminous datasets, it
is essential that the underlying framework is scalable and meets real-time
needs. To address this issue, in this study, we propose ATM fraud detection in
static and streaming contexts respectively. In the static context, we
investigated parallel and scalable machine learning algorithms for ATM fraud
detection that is built on Spark and trained with a variety of machine learning
(ML) models including Naive Bayes (NB), Logistic Regression (LR), Support
Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Gradient Boosting
Tree (GBT), and Multi-layer perceptron (MLP). We also employed several
balancing techniques like Synthetic Minority Oversampling Technique (SMOTE) and
its variants, Generative Adversarial Networks (GAN), to address the rarity in
the dataset. In addition, we proposed a streaming-based ATM fraud detection
approach in the streaming context. Our sliding-window-based method collects ATM
transactions that are performed within a specified time interval and then
utilizes them to train several ML models, including NB, RF, DT, and K-Nearest
Neighbour (KNN). We selected these models based on their lower model complexity
and quicker response time. In both contexts, RF turned out to be the best
model. RF obtained the best mean AUC of 0.975 in the static context and mean
AUC of 0.910 in the streaming context. RF is also empirically shown to be
statistically significantly better than the next-best performing models.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 23:40:18 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Vivek",
"Yelleti",
""
],
[
"Ravi",
"Vadlamani",
""
],
[
"Mane",
"Abhay Anand",
""
],
[
"Naidu",
"Laveti Ramesh",
""
]
] |
new_dataset
| 0.968524 |
2303.04962
|
Rie Kamikubo
|
Rie Kamikubo, Kyungjun Lee, Hernisa Kacorri
|
Contributing to Accessibility Datasets: Reflections on Sharing Study
Data by Blind People
|
Preprint, ACM CHI Conference on Human Factors in Computing Systems
(CHI 2023)
| null |
10.1145/3544548.3581337
| null |
cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To ensure that AI-infused systems work for disabled people, we need to bring
accessibility datasets sourced from this community in the development
lifecycle. However, there are many ethical and privacy concerns limiting
greater data inclusion, making such datasets not readily available. We present
a pair of studies where 13 blind participants engage in data capturing
activities and reflect with and without probing on various factors that
influence their decision to share their data via an AI dataset. We see how
different factors influence blind participants' willingness to share study data
as they assess risk-benefit tradeoffs. The majority support sharing of their
data to improve technology but also express concerns over commercial use,
associated metadata, and the lack of transparency about the impact of their
data. These insights have implications for the development of responsible
practices for stewarding accessibility datasets, and can contribute to broader
discussions in this area.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 00:42:18 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Kamikubo",
"Rie",
""
],
[
"Lee",
"Kyungjun",
""
],
[
"Kacorri",
"Hernisa",
""
]
] |
new_dataset
| 0.987674 |
2303.04970
|
Lin Zhang
|
Lin Zhang, Xin Li, Dongliang He, Errui Ding, Zhaoxiang Zhang
|
LMR: A Large-Scale Multi-Reference Dataset for Reference-based
Super-Resolution
|
6 figures, 10 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
It is widely agreed that reference-based super-resolution (RefSR) achieves
superior results by referring to similar high quality images, compared to
single image super-resolution (SISR). Intuitively, the more references, the
better performance. However, previous RefSR methods have all focused on
single-reference image training, while multiple reference images are often
available in testing or practical applications. The root cause of such
training-testing mismatch is the absence of publicly available multi-reference
SR training datasets, which greatly hinders research efforts on multi-reference
super-resolution. To this end, we construct a large-scale, multi-reference
super-resolution dataset, named LMR. It contains 112,142 groups of 300x300
training images, which is 10x the size of the largest existing RefSR dataset. The image
size is also much larger. More importantly, each group is equipped with 5
reference images with different similarity levels. Furthermore, we propose a
new baseline method for multi-reference super-resolution: MRefSR, including a
Multi-Reference Attention Module (MAM) for feature fusion of an arbitrary
number of reference images, and a Spatial Aware Filtering Module (SAFM) for the
fused feature selection. The proposed MRefSR achieves significant improvements
over state-of-the-art approaches on both quantitative and qualitative
evaluations. Our code and data will be made available soon.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 01:07:06 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Zhang",
"Lin",
""
],
[
"Li",
"Xin",
""
],
[
"He",
"Dongliang",
""
],
[
"Ding",
"Errui",
""
],
[
"Zhang",
"Zhaoxiang",
""
]
] |
new_dataset
| 0.99966 |
2303.04986
|
Jiangtao Gong
|
Mengdi Chu, Keyu Zong, Xin Shu, Jiangtao Gong, Zicong Lu, Kaimin Guo,
Xinyi Dai, Guyue Zhou
|
Work with AI and Work for AI: Autonomous Vehicle Safety Drivers' Lived
Experiences
|
17 pages, 2 figures
|
CHI 2023
|
10.1145/3544548.3581564
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The development of Autonomous Vehicle (AV) has created a novel job, the
safety driver, recruited from experienced drivers to supervise and operate AV
in numerous driving missions. Safety drivers usually work with non-perfect AV
in high-risk real-world traffic environments for road testing tasks. However,
this group of workers is under-explored in the HCI community. To fill this gap,
we conducted semi-structured interviews with 26 safety drivers. Our results
present how safety drivers cope with defective algorithms and shape and
calibrate their perceptions while working with AV. We found that, as front-line
workers, safety drivers are forced to take risks accumulated from the AV
industry upstream and are also confronting restricted self-development in
working for AV development. We contribute the first empirical evidence of the
lived experience of safety drivers, the first passengers in the development of
AV, and also the grassroots workers for AV, which can shed light on future
human-AI interaction research.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 02:07:28 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Chu",
"Mengdi",
""
],
[
"Zong",
"Keyu",
""
],
[
"Shu",
"Xin",
""
],
[
"Gong",
"Jiangtao",
""
],
[
"Lu",
"Zicong",
""
],
[
"Guo",
"Kaimin",
""
],
[
"Dai",
"Xinyi",
""
],
[
"Zhou",
"Guyue",
""
]
] |
new_dataset
| 0.998339 |
2303.05026
|
Jiacheng Wang
|
Jiacheng Wang, Hao Li, Han Liu, Dewei Hu, Daiwei Lu, Keejin Yoon,
Kelsey Barter, Francesca Bagnato, and Ipek Oguz
|
SSL^2: Self-Supervised Learning meets Semi-Supervised Learning: Multiple
Sclerosis Segmentation in 7T-MRI from large-scale 3T-MRI
|
Accepted at the International Society for Optics and Photonics -
Medical Imaging (SPIE-MI) 2023
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated segmentation of multiple sclerosis (MS) lesions from MRI scans is
important to quantify disease progression. In recent years, convolutional
neural networks (CNNs) have shown top performance for this task when a large
amount of labeled data is available. However, the accuracy of CNNs suffers when
dealing with few and/or sparsely labeled datasets. A potential solution is to
leverage the information available in large public datasets in conjunction with
a target dataset which only has limited labeled data. In this paper, we propose
a training framework, SSL2 (self-supervised-semi-supervised), for
multi-modality MS lesion segmentation with limited supervision. We adopt
self-supervised learning to leverage the knowledge from large public 3T
datasets to tackle the limitations of a small 7T target dataset. To leverage
the information from unlabeled 7T data, we also evaluate state-of-the-art
semi-supervised methods for other limited annotation settings, such as small
labeled training size and sparse annotations. We use the shifted-window (Swin)
transformer as our backbone network. The effectiveness of self-supervised and
semi-supervised training strategies is evaluated in our in-house 7T MRI
dataset. The results indicate that each strategy improves lesion segmentation
for both limited training data size and for sparse labeling scenarios. The
combined overall framework further improves the performance substantially
compared to either of its components alone. Our proposed framework thus
provides a promising solution for future data/label-hungry 7T MS studies.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 04:20:16 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Wang",
"Jiacheng",
""
],
[
"Li",
"Hao",
""
],
[
"Liu",
"Han",
""
],
[
"Hu",
"Dewei",
""
],
[
"Lu",
"Daiwei",
""
],
[
"Yoon",
"Keejin",
""
],
[
"Barter",
"Kelsey",
""
],
[
"Bagnato",
"Francesca",
""
],
[
"Oguz",
"Ipek",
""
]
] |
new_dataset
| 0.984701 |
2303.05046
|
Satarupa Guha
|
Satarupa Guha, Rahul Ambavat, Ankur Gupta, Manish Gupta, Rupeshkumar
Mehta
|
Unsupervised Language agnostic WER Standardization
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Word error rate (WER) is a standard metric for the evaluation of Automated
Speech Recognition (ASR) systems. However, WER fails to provide a fair
evaluation of human-perceived quality in the presence of spelling variations,
abbreviations, or compound words arising out of agglutination. Multiple
spelling variations might be acceptable based on locale/geography, alternative
abbreviations, borrowed words, and transliteration of code-mixed words from a
foreign language to the target language script. Similarly, in the case of
agglutination, both the agglutinated and the split forms are often acceptable.
Previous work handled this problem by using manually identified
normalization pairs and applying them to both the transcription and the
hypothesis before computing WER. In this paper, we propose an automatic WER
normalization system consisting of two modules: spelling normalization and
segmentation normalization. The proposed system is unsupervised and language
agnostic, and therefore scalable. Experiments with ASR on 35K utterances across
four languages yielded an average WER reduction of 13.28%. Human judgements of
these automatically identified normalization pairs show that our WER-normalized
evaluation is highly consistent with the perceived quality of ASR output.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 05:50:54 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Guha",
"Satarupa",
""
],
[
"Ambavat",
"Rahul",
""
],
[
"Gupta",
"Ankur",
""
],
[
"Gupta",
"Manish",
""
],
[
"Mehta",
"Rupeshkumar",
""
]
] |
new_dataset
| 0.963637 |
2303.05071
|
Tianxing Xu
|
Tian-Xing Xu, Yuan-Chen Guo, Yu-Kun Lai, Song-Hai Zhang
|
MBPTrack: Improving 3D Point Cloud Tracking with Memory Networks and Box
Priors
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D single object tracking has been a crucial problem for decades with
numerous applications such as autonomous driving. Despite its wide-ranging use,
this task remains challenging due to the significant appearance variation
caused by occlusion and size differences among tracked targets. To address
these issues, we present MBPTrack, which adopts a Memory mechanism to utilize
past information and formulates localization in a coarse-to-fine scheme using
Box Priors given in the first frame. Specifically, past frames with targetness
masks serve as an external memory, and a transformer-based module propagates
tracked target cues from the memory to the current frame. To precisely localize
objects of all sizes, MBPTrack first predicts the target center via Hough
voting. By leveraging box priors given in the first frame, we adaptively sample
reference points around the target center that roughly cover the target of
different sizes. Then, we obtain dense feature maps by aggregating point
features into the reference points, where localization can be performed more
effectively. Extensive experiments demonstrate that MBPTrack achieves
state-of-the-art performance on KITTI, nuScenes and Waymo Open Dataset, while
running at 50 FPS on a single RTX3090 GPU.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 07:07:39 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Xu",
"Tian-Xing",
""
],
[
"Guo",
"Yuan-Chen",
""
],
[
"Lai",
"Yu-Kun",
""
],
[
"Zhang",
"Song-Hai",
""
]
] |
new_dataset
| 0.999718 |
2303.05208
|
Loe Feijs
|
Loe Feijs
|
Geometry of Language
|
17 pages, 24 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, we present a fresh perspective on language, combining ideas
from various sources into a new synthesis. As in the minimalist
program, the question is whether we can formulate an elegant formalism, a
universal grammar or a mechanism which explains significant aspects of the
human faculty of language, which in turn can be considered a natural
disposition for the evolution and deployment of the diverse human languages. We
describe such a mechanism, which differs from existing logical and grammatical
approaches by its geometric nature. Our main contribution is to explore the
assumption that sentence recognition takes place by forming chains of tokens
representing words, followed by matching these chains with pre-existing chains
representing grammatical word orders. The aligned chains of tokens give rise to
two- and three-dimensional complexes. The resulting model gives an alternative
presentation for subtle rules, traditionally formalized using categorial
grammar.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 12:22:28 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Feijs",
"Loe",
""
]
] |
new_dataset
| 0.998004 |
2303.05252
|
Jianyuan Ruan
|
Jianyuan Ruan, Bo Li, Yibo Wang, Yuxiang Sun
|
SLAMesh: Real-time LiDAR Simultaneous Localization and Meshing
|
Accepted by ICRA 2023. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Most current LiDAR simultaneous localization and mapping (SLAM) systems build
maps in point clouds, which are sparse when zoomed in, even though they seem
dense to human eyes. Dense maps are essential for robotic applications, such as
map-based navigation. Due to the low memory cost, mesh has become an attractive
dense model for mapping in recent years. However, existing methods usually
produce mesh maps in an offline post-processing step. This two-step pipeline
does not allow these methods to use the built mesh
maps online and to enable localization and meshing to benefit each other. To
solve this problem, we propose the first CPU-only real-time LiDAR SLAM system
that can simultaneously build a mesh map and perform localization against the
mesh map. A novel and direct meshing strategy with Gaussian process
reconstruction realizes the fast building, registration, and updating of mesh
maps. We perform experiments on several public datasets. The results show that
our SLAM system can run at around $40$Hz. The localization and meshing accuracy
also outperforms the state-of-the-art methods, including the TSDF map and
Poisson reconstruction. Our code and video demos are available at:
https://github.com/lab-sun/SLAMesh.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 13:42:34 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Ruan",
"Jianyuan",
""
],
[
"Li",
"Bo",
""
],
[
"Wang",
"Yibo",
""
],
[
"Sun",
"Yuxiang",
""
]
] |
new_dataset
| 0.979335 |
2303.05305
|
Zhuohong Li
|
Zhuohong Li, Wei He, Hongyan Zhang
|
National-scale 1-m resolution land-cover mapping for the entire China
based on a low-cost solution and open-access data
|
4 pages, 3 figures, conference paper
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, many large-scale land-cover (LC) products have been released;
however, current LC products for China lack either a fine resolution or
nationwide coverage. With the rapid urbanization of China, there is an urgent
need for creating a very-high-resolution (VHR) national-scale LC map for China.
In this study, a novel 1-m resolution LC map of China covering $9,600,000
km^2$, called SinoLC-1, was produced by using a deep learning framework and
multi-source open-access data. To efficiently generate the VHR national-scale
LC map, firstly, the reliable LC labels were collected from three 10-m LC
products and Open Street Map data. Secondly, the collected 10-m labels and 1-m
Google Earth imagery were utilized in the proposed low-to-high (L2H) framework
for training. With weak and self-supervised strategies, the L2H framework
resolves the label noise brought by the mismatched resolution between training
pairs and produces VHR results. Lastly, we compare the SinoLC-1 with five
widely used products and validate it with a sample set including 106,852 points
and a statistical report collected from the government. The results show that the
SinoLC-1 achieved an OA of 74% and a Kappa of 0.65. Moreover, as the first 1-m
national-scale LC map for China, the SinoLC-1 shows overall acceptable results
with the finest landscape details.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 14:55:53 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Li",
"Zhuohong",
""
],
[
"He",
"Wei",
""
],
[
"Zhang",
"Hongyan",
""
]
] |
new_dataset
| 0.999584 |