id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2303.13255
|
EPTCS
|
Erick Grilo (Universidade Federal Fluminense), Bruno Lopes
(Universidade Federal Fluminense)
|
ReLo: a Dynamic Logic to Reason About Reo Circuits
|
In Proceedings LSFA 2022, arXiv:2303.12680
|
EPTCS 376, 2023, pp. 16-33
|
10.4204/EPTCS.376.4
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Critical systems require high reliability and are present in many domains.
They are systems in which failure may result in financial damage or even loss
of lives. Standard techniques of software engineering are not enough to ensure
the absence of unacceptable failures and/or that critical requirements are
fulfilled. Reo is a component-based modelling language, used in a wide variety
of domains, that aims to provide a framework for building software from
existing software components. Its formal semantics provides grounds
to certify that systems based on Reo models satisfy specific requirements
(e.g., absence of deadlocks). Current logical approaches for reasoning over Reo
require the conversion of formal semantics into a logical framework. ReLo is a
dynamic logic that naturally subsumes Reo's semantics. It provides a means to
reason over Reo circuits. This work extends ReLo by introducing the iteration
operator and providing soundness and completeness proofs for its axiomatization.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 13:38:07 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Grilo",
"Erick",
"",
"Universidade Federal Fluminense"
],
[
"Lopes",
"Bruno",
"",
"Universidade Federal Fluminense"
]
] |
new_dataset
| 0.999818 |
2303.13272
|
Dichucheng Li
|
Dichucheng Li, Mingjin Che, Wenwu Meng, Yulun Wu, Yi Yu, Fan Xia, Wei
Li
|
Frame-Level Multi-Label Playing Technique Detection Using Multi-Scale
Network and Self-Attention Mechanism
|
Accepted to ICASSP 2023
| null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instrument playing technique (IPT) is a key element of musical presentation.
However, most existing work on IPT detection concerns only monophonic music
signals, and little has been done to detect IPTs in polyphonic instrumental
solo pieces with overlapping or mixed IPTs. In this paper, we formulate IPT
detection as a frame-level multi-label classification problem and apply it to
the Guzheng, a Chinese plucked string instrument. We create a new dataset,
Guzheng_Tech99, containing Guzheng recordings with onset, offset, pitch, and
IPT annotations for each note. Because different IPTs vary widely in length,
we propose a new method to solve this problem using a multi-scale network and
self-attention. The multi-scale network extracts features from different
scales, and the self-attention mechanism applied to the feature maps at the
coarsest scale further enhances the long-range feature extraction. Our approach
outperforms existing works by a large margin, indicating its effectiveness in
IPT detection.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 13:52:42 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Li",
"Dichucheng",
""
],
[
"Che",
"Mingjin",
""
],
[
"Meng",
"Wenwu",
""
],
[
"Wu",
"Yulun",
""
],
[
"Yu",
"Yi",
""
],
[
"Xia",
"Fan",
""
],
[
"Li",
"Wei",
""
]
] |
new_dataset
| 0.996369 |
2303.13293
|
Ege \"Ozsoy
|
Ege \"Ozsoy, Tobias Czempiel, Felix Holm, Chantal Pellegrini, Nassir
Navab
|
LABRAD-OR: Lightweight Memory Scene Graphs for Accurate Bimodal
Reasoning in Dynamic Operating Rooms
|
11 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern surgeries are performed in complex and dynamic settings, including
ever-changing interactions between medical staff, patients, and equipment. The
holistic modeling of the operating room (OR) is, therefore, a challenging but
essential task, with the potential to optimize the performance of surgical
teams and aid in developing new surgical technologies to improve patient
outcomes. The holistic representation of surgical scenes as semantic scene
graphs (SGG), where entities are represented as nodes and relations between
them as edges, is a promising direction for fine-grained semantic OR
understanding. We propose, for the first time, the use of temporal information
for more accurate and consistent holistic OR modeling. Specifically, we
introduce memory scene graphs, where the scene graphs of previous time steps
act as the temporal representation guiding the current prediction. We design an
end-to-end architecture that intelligently fuses the temporal information of
our lightweight memory scene graphs with the visual information from point
clouds and images. We evaluate our method on the 4D-OR dataset and demonstrate
that integrating temporality leads to more accurate and consistent results,
achieving a +5% increase and a new SOTA of 0.88 in macro F1. This work opens
the path for representing the entire surgery history with memory scene graphs
and improves the holistic understanding in the OR. Introducing scene graphs as
memory representations can offer a valuable tool for many temporal
understanding tasks.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 14:26:16 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Özsoy",
"Ege",
""
],
[
"Czempiel",
"Tobias",
""
],
[
"Holm",
"Felix",
""
],
[
"Pellegrini",
"Chantal",
""
],
[
"Navab",
"Nassir",
""
]
] |
new_dataset
| 0.994488 |
2303.13378
|
Thomas Lidbetter Dr
|
Steve Alpern and Thomas Lidbetter
|
Searching a Tree with Signals: Routing Mobile Sensors for Targets
Emitting Radiation, Chemicals or Scents
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial search of a network for an immobile Hider (or target) was
introduced and solved for rooted trees by Gal (1979). In this zero-sum game, a
Hider picks a point to hide on the tree and a Searcher picks a unit speed
trajectory starting at the root. The payoff (to the Hider) is the search time.
In Gal's model (and many subsequent investigations), the Searcher receives no
additional information after the Hider chooses his location. In reality, the
Searcher will often receive such locational information. For homeland security,
mobile sensors on vehicles have been used to locate radioactive material
stashed in an urban environment. In a military setting, mobile sensors can
detect chemical signatures from land mines. In predator-prey search, the
predator often has specially attuned senses (hearing for wolves, vision for
eagles, smell for dogs, sonar for bats, pressure sensors for sharks) that may
help it locate the prey. How can such noisy locational information be used by
the Searcher to modify her route? We model such information as signals which
indicate which of two branches of a binary tree should be searched first, where
the signal has a known accuracy p<1. Our solution calculates which branch (at
every branch node) is favored, meaning it should always be searched first when
the signal is in that direction. When the signal is in the other direction, we
calculate the probability the signal should be followed. Compared to the
optimal Hider strategy in the classic search game of Gal, the Hider's optimal
distribution for this model is more skewed towards leaf nodes that are further
from the root.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 15:47:20 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Alpern",
"Steve",
""
],
[
"Lidbetter",
"Thomas",
""
]
] |
new_dataset
| 0.975199 |
2303.13455
|
Haoxuan You
|
Haoxuan You, Mandy Guo, Zhecan Wang, Kai-Wei Chang, Jason Baldridge,
Jiahui Yu
|
CoBIT: A Contrastive Bi-directional Image-Text Generation Model
|
14 pages, 5 figures
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of vision and language has witnessed a proliferation of pre-trained
foundation models. Most existing methods are independently pre-trained with a
contrastive objective like CLIP, an image-to-text generative objective like
PaLI, or a text-to-image generative objective like Parti. However, the three objectives
can be pre-trained on the same data, image-text pairs, and intuitively they
complement each other as contrasting provides global alignment capacity and
generation grants fine-grained understanding. In this work, we present a
Contrastive Bi-directional Image-Text generation model (CoBIT), which attempts
to unify the three pre-training objectives in one framework. Specifically,
CoBIT employs a novel unicoder-decoder structure, consisting of an image
unicoder, a text unicoder and a cross-modal decoder. The image/text unicoders
can switch between encoding and decoding in different tasks, enabling
flexibility and shared knowledge that benefits both image-to-text and
text-to-image generations. CoBIT achieves superior performance in image
understanding, image-text understanding (Retrieval, Captioning, VQA, SNLI-VE)
and text-based content creation, particularly in zero-shot scenarios. For
instance, it achieves 82.7% accuracy in zero-shot ImageNet classification, a
9.37 FID score in zero-shot text-to-image generation, and 44.8 CIDEr in
zero-shot captioning.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 17:24:31 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"You",
"Haoxuan",
""
],
[
"Guo",
"Mandy",
""
],
[
"Wang",
"Zhecan",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Baldridge",
"Jason",
""
],
[
"Yu",
"Jiahui",
""
]
] |
new_dataset
| 0.997934 |
2303.13463
|
Wen Cheng
|
Wen Cheng, Shichen Dong, Wei Wang
|
W2KPE: Keyphrase Extraction with Word-Word Relation
| null | null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes our submission to ICASSP 2023 MUG Challenge Track 4,
Keyphrase Extraction, which aims to extract keyphrases most relevant to the
conference theme from conference materials. We model the challenge as a
single-class Named Entity Recognition task and develop techniques for better
performance on the challenge: for data preprocessing, we encode the split
keyphrases after word segmentation. In addition, we increase the amount of
input information that the model can accept at one time by fusing multiple
preprocessed sentences into one segment. We replace the loss function with a
multi-class focal loss to address the sparseness of keyphrases. Furthermore,
we score each occurrence of a keyphrase and add an extra output layer that
fits this score to rank keyphrases. Exhaustive evaluations are performed to find the best
combination of the word segmentation tool, the pre-trained embedding model, and
the corresponding hyperparameters. With these proposals, we scored 45.04 on the
final test set.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 15:32:40 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Cheng",
"Wen",
""
],
[
"Dong",
"Shichen",
""
],
[
"Wang",
"Wei",
""
]
] |
new_dataset
| 0.995145 |
2303.13477
|
Mai Nishimura
|
Yuta Yoshitake, Mai Nishimura, Shohei Nobuhara, Ko Nishino
|
TransPoser: Transformer as an Optimizer for Joint Object Shape and Pose
Estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel method for joint estimation of shape and pose of rigid
objects from their sequentially observed RGB-D images. In sharp contrast to
past approaches that rely on complex non-linear optimization, we propose to
formulate it as a neural optimization that learns to efficiently estimate the
shape and pose. We introduce Deep Directional Distance Function (DeepDDF), a
neural network that directly outputs the depth image of an object given the
camera viewpoint and viewing direction, for efficient error computation in 2D
image space. We formulate the joint estimation itself as a Transformer which we
refer to as TransPoser. We fully leverage the tokenization and multi-head
attention to sequentially process the growing set of observations and to
efficiently update the shape and pose with a learned momentum, respectively.
Experimental results on synthetic and real data show that DeepDDF achieves high
accuracy as a category-level object shape representation and TransPoser
achieves state-of-the-art accuracy efficiently for joint shape and pose
estimation.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 17:46:54 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Yoshitake",
"Yuta",
""
],
[
"Nishimura",
"Mai",
""
],
[
"Nobuhara",
"Shohei",
""
],
[
"Nishino",
"Ko",
""
]
] |
new_dataset
| 0.956235 |
2303.13482
|
Tao Chen
|
Sameer Pai, Tao Chen, Megha Tippur, Edward Adelson, Abhishek Gupta,
Pulkit Agrawal
|
TactoFind: A Tactile Only System for Object Retrieval
|
Accepted in ICRA 2023
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of object retrieval in scenarios where visual sensing is
absent, object shapes are unknown beforehand and objects can move freely, like
grabbing objects out of a drawer. Successful solutions require localizing free
objects, identifying specific object instances, and then grasping the
identified objects, only using touch feedback. Unlike vision, where cameras can
observe the entire scene, touch sensors are local and only observe parts of the
scene that are in contact with the manipulator. Moreover, information gathering
via touch sensors necessitates applying forces on the touched surface which may
disturb the scene itself. Reasoning with touch, therefore, requires careful
exploration and integration of information over time -- a challenge we tackle.
We present a system capable of using sparse tactile feedback from fingertip
touch sensors on a dexterous hand to localize, identify and grasp novel objects
without any visual feedback. Videos are available at
https://taochenshh.github.io/projects/tactofind.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 17:50:09 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Pai",
"Sameer",
""
],
[
"Chen",
"Tao",
""
],
[
"Tippur",
"Megha",
""
],
[
"Adelson",
"Edward",
""
],
[
"Gupta",
"Abhishek",
""
],
[
"Agrawal",
"Pulkit",
""
]
] |
new_dataset
| 0.997937 |
2303.13483
|
Joy Hsu
|
Joy Hsu, Jiayuan Mao, Jiajun Wu
|
NS3D: Neuro-Symbolic Grounding of 3D Objects and Relations
|
In CVPR 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grounding object properties and relations in 3D scenes is a prerequisite for
a wide range of artificial intelligence tasks, such as visually grounded
dialogues and embodied manipulation. However, the variability of the 3D domain
induces two fundamental challenges: 1) the expense of labeling and 2) the
complexity of 3D grounded language. Hence, essential desiderata for models are
to be data-efficient, generalize to different data distributions and tasks with
unseen semantic forms, as well as ground complex language semantics (e.g.,
view-point anchoring and multi-object reference). To address these challenges,
we propose NS3D, a neuro-symbolic framework for 3D grounding. NS3D translates
language into programs with hierarchical structures by leveraging large
language-to-code models. Different functional modules in the programs are
implemented as neural networks. Notably, NS3D extends prior neuro-symbolic
visual reasoning methods by introducing functional modules that effectively
reason about high-arity relations (i.e., relations among more than two
objects), which is key to disambiguating objects in complex 3D scenes. This
modular and compositional architecture enables NS3D to achieve
state-of-the-art results on the ReferIt3D view-dependence task, a 3D referring
expression comprehension benchmark. Importantly, NS3D shows significantly
improved performance in data-efficiency and generalization settings, and
demonstrates zero-shot transfer to an unseen 3D question-answering task.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 17:50:40 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Hsu",
"Joy",
""
],
[
"Mao",
"Jiayuan",
""
],
[
"Wu",
"Jiajun",
""
]
] |
new_dataset
| 0.984738 |
2303.13497
|
Artem Sevastopolsky
|
Ananta R. Bhattarai, Matthias Nießner, Artem Sevastopolsky
|
TriPlaneNet: An Encoder for EG3D Inversion
|
Video: https://youtu.be/GpmSswHMeWU Project page:
https://anantarb.github.io/triplanenet
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent progress in NeRF-based GANs has introduced a number of approaches for
high-resolution and high-fidelity generative modeling of human heads with a
possibility for novel view rendering. At the same time, one must solve an
inverse problem to be able to re-render or modify an existing image or video.
Despite the success of universal optimization-based methods for 2D GAN
inversion, such methods, when applied to 3D GANs, may fail to produce 3D-consistent
renderings. Fast encoder-based techniques, such as those developed for
StyleGAN, may also be less appealing due to the lack of identity preservation.
In our work, we introduce a real-time method that bridges the gap between the
two approaches by directly utilizing the tri-plane representation introduced
for the EG3D generative model. In particular, we build upon a feed-forward
convolutional encoder for the latent code and extend it with a
fully-convolutional predictor of tri-plane numerical offsets. As shown in our
work, the renderings are similar in quality to optimization-based techniques
and significantly outperform the baselines for novel view. As we empirically
prove, this is a consequence of directly operating in the tri-plane space, not
in the GAN parameter space, while making use of an encoder-based trainable
approach.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 17:56:20 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Bhattarai",
"Ananta R.",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Sevastopolsky",
"Artem",
""
]
] |
new_dataset
| 0.954134 |
2303.13504
|
Jeya Maria Jose Valanarasu
|
Jeya Maria Jose Valanarasu, Rahul Garg, Andeep Toor, Xin Tong, Weijuan
Xi, Andreas Lugmayr, Vishal M. Patel, Anne Menini
|
ReBotNet: Fast Real-time Video Enhancement
|
Project Website: https://jeya-maria-jose.github.io/rebotnet-web/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most video restoration networks are slow, have high computational load, and
can't be used for real-time video enhancement. In this work, we design an
efficient and fast framework to perform real-time video enhancement for
practical use-cases like live video calls and video streams. Our proposed
method, called Recurrent Bottleneck Mixer Network (ReBotNet), employs a
dual-branch framework. The first branch learns spatio-temporal features by
tokenizing the input frames along the spatial and temporal dimensions using a
ConvNext-based encoder and processing these abstract tokens using a bottleneck
mixer. To further improve temporal consistency, the second branch employs a
mixer directly on tokens extracted from individual frames. A common decoder
then merges the features from the two branches to predict the enhanced frame.
In addition, we propose a recurrent training approach where the last frame's
prediction is leveraged to efficiently enhance the current frame while
improving temporal consistency. To evaluate our method, we curate two new
datasets that emulate real-world video call and streaming scenarios, and show
extensive results on multiple datasets where ReBotNet outperforms existing
approaches with lower computations, reduced memory requirements, and faster
inference time.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 17:58:05 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Valanarasu",
"Jeya Maria Jose",
""
],
[
"Garg",
"Rahul",
""
],
[
"Toor",
"Andeep",
""
],
[
"Tong",
"Xin",
""
],
[
"Xi",
"Weijuan",
""
],
[
"Lugmayr",
"Andreas",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Menini",
"Anne",
""
]
] |
new_dataset
| 0.953945 |
2303.13510
|
Runsen Xu
|
Runsen Xu, Tai Wang, Wenwei Zhang, Runjian Chen, Jinkun Cao, Jiangmiao
Pang, Dahua Lin
|
MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based
Self-Supervised Pre-Training
|
Accepted by CVPR 2023 with a carefully designed benchmark on Waymo.
Codes and the benchmark will be available at
https://github.com/SmartBot-PJLab/MV-JAR
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces the Masked Voxel Jigsaw and Reconstruction (MV-JAR)
method for LiDAR-based self-supervised pre-training and a carefully designed
data-efficient 3D object detection benchmark on the Waymo dataset. Inspired by
the scene-voxel-point hierarchy in downstream 3D object detectors, we design
masking and reconstruction strategies accounting for voxel distributions in the
scene and local point distributions within the voxel. We employ a
Reversed-Furthest-Voxel-Sampling strategy to address the uneven distribution of
LiDAR points and propose MV-JAR, which combines two techniques for modeling the
aforementioned distributions, resulting in superior performance. Our
experiments reveal limitations in previous data-efficient experiments, which
uniformly sample fine-tuning splits with varying data proportions from each
LiDAR sequence, leading to similar data diversity across splits. To address
this, we propose a new benchmark that samples scene sequences for diverse
fine-tuning splits, ensuring adequate model convergence and providing a more
accurate evaluation of pre-training methods. Experiments on our Waymo benchmark
and the KITTI dataset demonstrate that MV-JAR consistently and significantly
improves 3D detection performance across various data scales, achieving up to a
6.3% increase in mAPH compared to training from scratch. Codes and the
benchmark will be available at https://github.com/SmartBot-PJLab/MV-JAR .
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 17:59:02 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Xu",
"Runsen",
""
],
[
"Wang",
"Tai",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Chen",
"Runjian",
""
],
[
"Cao",
"Jinkun",
""
],
[
"Pang",
"Jiangmiao",
""
],
[
"Lin",
"Dahua",
""
]
] |
new_dataset
| 0.997591 |
2303.13514
|
Mehmet Ayg\"un
|
Mehmet Ayg\"un and Oisin Mac Aodha
|
SAOR: Single-View Articulated Object Reconstruction
|
https://mehmetaygun.github.io/saor
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce SAOR, a novel approach for estimating the 3D shape, texture, and
viewpoint of an articulated object from a single image captured in the wild.
Unlike prior approaches that rely on pre-defined category-specific 3D templates
or tailored 3D skeletons, SAOR learns to articulate shapes from single-view
image collections with a skeleton-free part-based model without requiring any
3D object shape priors. To prevent ill-posed solutions, we propose a
cross-instance consistency loss that exploits disentangled object shape
deformation and articulation. This is helped by a new silhouette-based sampling
mechanism to enhance viewpoint diversity during training. Our method only
requires estimated object silhouettes and relative depth maps from
off-the-shelf pre-trained networks during training. At inference time, given a
single-view image, it efficiently outputs an explicit mesh representation. We
obtain improved qualitative and quantitative results on challenging quadruped
animals compared to relevant existing work.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 17:59:35 GMT"
}
] | 2023-03-24T00:00:00 |
[
[
"Aygün",
"Mehmet",
""
],
[
"Mac Aodha",
"Oisin",
""
]
] |
new_dataset
| 0.991103 |
1803.10664
|
Alexander Kott
|
Alexander Kott, Paul Théron, Martin Drašar, Edlira Dushku,
Benoît LeBlanc, Paul Losiewicz, Alessandro Guarino, Luigi Mancini, Agostino
Panico, Mauno Pihelgas, Krzysztof Rzadca, Fabio De Gaspari
|
Autonomous Intelligent Cyber-defense Agent (AICA) Reference
Architecture. Release 2.0
|
This is a major revision and extension of the earlier release of AICA
Reference Architecture
| null | null |
ARL-SR-0421
|
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report - a major revision of its previous release - describes a
reference architecture for intelligent software agents performing active,
largely autonomous cyber-defense actions on military networks of computing and
communicating devices. The report is produced by the North Atlantic Treaty
Organization (NATO) Research Task Group (RTG) IST-152 "Intelligent Autonomous
Agents for Cyber Defense and Resilience". In a conflict with a technically
sophisticated adversary, NATO military tactical networks will operate in a
heavily contested battlefield. Enemy software cyber agents - malware - will
infiltrate friendly networks and attack friendly command, control,
communications, computers, intelligence, surveillance, and reconnaissance and
computerized weapon systems. To fight them, NATO needs artificial cyber hunters
- intelligent, autonomous, mobile agents specialized in active cyber defense.
With this in mind, in 2016, NATO initiated RTG IST-152. Its objective has been
to help accelerate the development and transition to practice of such software
agents by producing a reference architecture and technical roadmap. This report
presents the concept and architecture of an Autonomous Intelligent
Cyber-defense Agent (AICA). We describe the rationale of the AICA concept,
explain the methodology and purpose that drive the definition of the AICA
Reference Architecture, and review some of the main features and challenges of
AICAs.
|
[
{
"version": "v1",
"created": "Wed, 28 Mar 2018 14:55:53 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Sep 2019 16:17:44 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Mar 2023 14:01:19 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Kott",
"Alexander",
""
],
[
"Théron",
"Paul",
""
],
[
"Drašar",
"Martin",
""
],
[
"Dushku",
"Edlira",
""
],
[
"LeBlanc",
"Benoît",
""
],
[
"Losiewicz",
"Paul",
""
],
[
"Guarino",
"Alessandro",
""
],
[
"Mancini",
"Luigi",
""
],
[
"Panico",
"Agostino",
""
],
[
"Pihelgas",
"Mauno",
""
],
[
"Rzadca",
"Krzysztof",
""
],
[
"De Gaspari",
"Fabio",
""
]
] |
new_dataset
| 0.998058 |
2009.12369
|
Tzvika Geft
|
Pankaj K. Agarwal, Boris Aronov, Tzvika Geft, Dan Halperin
|
On Two-Handed Planar Assembly Partitioning with Connectivity Constraints
|
This version generalizes our algorithm from the SODA '21 version for
unit-grid squares to polygonal assemblies and improves presentation
| null | null | null |
cs.CG cs.CC cs.DS cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assembly planning is a fundamental problem in robotics and automation, which
involves designing a sequence of motions to bring the separate constituent
parts of a product into their final placement in the product. Assembly planning
is naturally cast as a disassembly problem, giving rise to the assembly
partitioning problem: Given a set $A$ of parts, find a subset $S\subset A$,
referred to as a subassembly, such that $S$ can be rigidly translated to
infinity along a prescribed direction without colliding with $A\setminus S$.
While assembly partitioning is efficiently solvable, it is further desirable
for the parts of a subassembly to be easily held together. This motivates the
problem that we study, called connected-assembly-partitioning, which
additionally requires each of the two subassemblies, $S$ and $A\setminus S$, to
be connected. We show that this problem is NP-complete, settling an open
question posed by Wilson et al. (1995) a quarter of a century ago, even when
$A$ consists of unit-grid squares (i.e., $A$ is polyomino-shaped). Towards this
result, we prove the NP-hardness of a new Planar 3-SAT variant having an
adjacency requirement for variables appearing in the same clause, which may be
of independent interest. On the positive side, we give an $O(2^k n^2)$-time
fixed-parameter tractable algorithm (requiring low degree polynomial-time
pre-processing) for an assembly $A$ consisting of polygons in the plane, where
$n=|A|$ and $k=|S|$. We also describe a special case of unit-grid square
assemblies, where a connected partition can always be found in $O(n)$-time.
|
[
{
"version": "v1",
"created": "Fri, 25 Sep 2020 17:59:33 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 17:14:49 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Agarwal",
"Pankaj K.",
""
],
[
"Aronov",
"Boris",
""
],
[
"Geft",
"Tzvika",
""
],
[
"Halperin",
"Dan",
""
]
] |
new_dataset
| 0.998498 |
2203.16244
|
Wei Lin
|
Wei Lin, Anna Kukleva, Kunyang Sun, Horst Possegger, Hilde Kuehne,
Horst Bischof
|
CycDA: Unsupervised Cycle Domain Adaptation from Image to Video
|
Accepted at ECCV2022. Supplementary included
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although action recognition has achieved impressive results over recent
years, both the collection and annotation of video training data remain
time-consuming and cost-intensive. Therefore, image-to-video adaptation has
been proposed to exploit label-free web image sources for adapting to
unlabeled target videos. This poses two major challenges: (1) spatial domain
shift between web images and video frames; (2) modality gap between image and
video data. To address these challenges, we propose Cycle Domain Adaptation
(CycDA), a cycle-based approach for unsupervised image-to-video domain
adaptation by leveraging the joint spatial information in images and videos on
the one hand and, on the other hand, training an independent spatio-temporal
model to bridge the modality gap. We alternate between the spatial and
spatio-temporal learning with knowledge transfer between the two in each cycle.
We evaluate our approach on benchmark datasets for image-to-video as well as
for mixed-source domain adaptation achieving state-of-the-art results and
demonstrating the benefits of our cyclic adaptation. Code is available at
\url{https://github.com/wlin-at/CycDA}.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 12:22:26 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 12:05:53 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Mar 2023 11:19:19 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Lin",
"Wei",
""
],
[
"Kukleva",
"Anna",
""
],
[
"Sun",
"Kunyang",
""
],
[
"Possegger",
"Horst",
""
],
[
"Kuehne",
"Hilde",
""
],
[
"Bischof",
"Horst",
""
]
] |
new_dataset
| 0.999521 |
2205.00363
|
Fangyu Liu
|
Fangyu Liu, Guy Emerson, Nigel Collier
|
Visual Spatial Reasoning
|
TACL camera-ready version; code and data available at
https://github.com/cambridgeltl/visual-spatial-reasoning
| null | null | null |
cs.CL cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Spatial relations are a basic part of human cognition. However, they are
expressed in natural language in a variety of ways, and previous work has
suggested that current vision-and-language models (VLMs) struggle to capture
relational information. In this paper, we present Visual Spatial Reasoning
(VSR), a dataset containing more than 10k natural text-image pairs with 66
types of spatial relations in English (such as: under, in front of, and
facing). While using a seemingly simple annotation format, we show how the
dataset includes challenging linguistic phenomena, such as varying reference
frames. We demonstrate a large gap between human and model performance: the
human ceiling is above 95%, while state-of-the-art models only achieve around
70%. We observe that VLMs' by-relation performances have little correlation
with the number of training examples and the tested models are in general
incapable of recognising relations concerning the orientations of objects.
|
[
{
"version": "v1",
"created": "Sat, 30 Apr 2022 23:03:49 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2023 18:42:00 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Mar 2023 15:42:50 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Liu",
"Fangyu",
""
],
[
"Emerson",
"Guy",
""
],
[
"Collier",
"Nigel",
""
]
] |
new_dataset
| 0.998994 |
2206.05149
|
Jizhizi Li
|
Jizhizi Li, Jing Zhang, Dacheng Tao
|
Referring Image Matting
|
Accepted to CVPR2023. The dataset, code and models are available at
https://github.com/JizhiziLi/RIM
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Different from conventional image matting, which either requires user-defined
scribbles/trimap to extract a specific foreground object or directly extracts
all the foreground objects in the image indiscriminately, we introduce a new
task named Referring Image Matting (RIM) in this paper, which aims to extract
the meticulous alpha matte of the specific object that best matches the given
natural language description, thus enabling a more natural and simpler
instruction for image matting. First, we establish a large-scale challenging
dataset RefMatte by designing a comprehensive image composition and expression
generation engine to automatically produce high-quality images along with
diverse text attributes based on public datasets. RefMatte consists of 230
object categories, 47,500 images, 118,749 expression-region entities, and
474,996 expressions. Additionally, we construct a real-world test set with 100
high-resolution natural images and manually annotate complex phrases to
evaluate the out-of-domain generalization abilities of RIM methods.
Furthermore, we present a novel baseline method CLIPMat for RIM, including a
context-embedded prompt, a text-driven semantic pop-up, and a multi-level
details extractor. Extensive experiments on RefMatte in both keyword and
expression settings validate the superiority of CLIPMat over representative
methods. We hope this work could provide novel insights into image matting and
encourage more follow-up studies. The dataset, code and models are available at
https://github.com/JizhiziLi/RIM.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 14:44:43 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2022 13:48:06 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Mar 2023 03:47:41 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Li",
"Jizhizi",
""
],
[
"Zhang",
"Jing",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.997811 |
2211.06627
|
Zhixi Cai
|
Zhixi Cai, Shreya Ghosh, Kalin Stefanov, Abhinav Dhall, Jianfei Cai,
Hamid Rezatofighi, Reza Haffari, Munawar Hayat
|
MARLIN: Masked Autoencoder for facial video Representation LearnINg
|
CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a self-supervised approach to learn universal facial
representations from videos, that can transfer across a variety of facial
analysis tasks such as Facial Attribute Recognition (FAR), Facial Expression
Recognition (FER), DeepFake Detection (DFD), and Lip Synchronization (LS). Our
proposed framework, named MARLIN, is a facial video masked autoencoder, that
learns highly robust and generic facial embeddings from abundantly available
non-annotated web crawled facial videos. As a challenging auxiliary task,
MARLIN reconstructs the spatio-temporal details of the face from the densely
masked facial regions which mainly include eyes, nose, mouth, lips, and skin to
capture local and global aspects that in turn help in encoding generic and
transferable features. Through a variety of experiments on diverse downstream
tasks, we demonstrate MARLIN to be an excellent facial video encoder as well as
feature extractor, that performs consistently well across a variety of
downstream tasks including FAR (1.13% gain over supervised benchmark), FER
(2.64% gain over unsupervised benchmark), DFD (1.86% gain over unsupervised
benchmark), LS (29.36% gain for Frechet Inception Distance), and even in low
data regime. Our code and models are available at
https://github.com/ControlNet/MARLIN .
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 10:29:05 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2022 03:47:40 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Mar 2023 09:32:26 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Cai",
"Zhixi",
""
],
[
"Ghosh",
"Shreya",
""
],
[
"Stefanov",
"Kalin",
""
],
[
"Dhall",
"Abhinav",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Rezatofighi",
"Hamid",
""
],
[
"Haffari",
"Reza",
""
],
[
"Hayat",
"Munawar",
""
]
] |
new_dataset
| 0.996688 |
2211.12764
|
Siteng Huang
|
Siteng Huang, Biao Gong, Yulin Pan, Jianwen Jiang, Yiliang Lv, Yuyuan
Li, Donglin Wang
|
VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval
|
Accepted by CVPR 2023
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many recent studies leverage pre-trained CLIP for text-video cross-modal
retrieval by tuning the backbone with additional heavy modules, which not only
brings a huge computational burden and many more parameters, but also leads to
forgetting knowledge from the upstream models. In this work, we propose
VoP: Text-Video Co-operative Prompt Tuning for efficient tuning on the
text-video retrieval task. The proposed VoP is an end-to-end framework that
introduces both video and text prompts, and it can be regarded as a powerful
baseline with only 0.1% trainable parameters. Further, based on the
spatio-temporal characteristics of videos, we develop three novel video prompt
mechanisms to improve the performance with different scales of trainable
parameters. The basic idea of the VoP enhancement is to model the frame
position, frame context, and layer function with specific trainable prompts,
respectively. Extensive experiments show that compared to full fine-tuning, the
enhanced VoP achieves a 1.4% average R@1 gain across five text-video retrieval
benchmarks with 6x less parameter overhead. The code will be available at
https://github.com/bighuang624/VoP.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 08:20:29 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 06:31:05 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Mar 2023 02:36:52 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Huang",
"Siteng",
""
],
[
"Gong",
"Biao",
""
],
[
"Pan",
"Yulin",
""
],
[
"Jiang",
"Jianwen",
""
],
[
"Lv",
"Yiliang",
""
],
[
"Li",
"Yuyuan",
""
],
[
"Wang",
"Donglin",
""
]
] |
new_dataset
| 0.996439 |
2211.12782
|
Xingyu Chen
|
Xingyu Chen, Baoyuan Wang, Heung-Yeung Shum
|
Hand Avatar: Free-Pose Hand Animation and Rendering from Monocular Video
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present HandAvatar, a novel representation for hand animation and
rendering, which can generate smoothly compositional geometry and
self-occlusion-aware texture. Specifically, we first develop a MANO-HD model as
a high-resolution mesh topology to fit personalized hand shapes. Subsequently,
we decompose hand geometry into per-bone rigid parts, and then re-compose
paired geometry encodings to derive an across-part consistent occupancy field.
As for texture modeling, we propose a self-occlusion-aware shading field
(SelF). In SelF, drivable anchors are paved on the MANO-HD surface to record
albedo information under a wide variety of hand poses. Moreover, directed soft
occupancy is designed to describe the ray-to-surface relation, which is
leveraged to generate an illumination field for the disentanglement of
pose-independent albedo and pose-dependent illumination. Trained from monocular
video data, our HandAvatar can perform free-pose hand animation and rendering
while at the same time achieving superior appearance fidelity. We also
demonstrate that HandAvatar provides a route for hand appearance editing.
Project website: https://seanchenxy.github.io/HandAvatarWeb.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 08:50:03 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 09:08:09 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Chen",
"Xingyu",
""
],
[
"Wang",
"Baoyuan",
""
],
[
"Shum",
"Heung-Yeung",
""
]
] |
new_dataset
| 0.999526 |
2211.16312
|
Runyu Ding
|
Runyu Ding, Jihan Yang, Chuhui Xue, Wenqing Zhang, Song Bai, Xiaojuan
Qi
|
PLA: Language-Driven Open-Vocabulary 3D Scene Understanding
|
CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-vocabulary scene understanding aims to localize and recognize unseen
categories beyond the annotated label space. The recent breakthrough of 2D
open-vocabulary perception is largely driven by Internet-scale paired
image-text data with rich vocabulary concepts. However, this success cannot be
directly transferred to 3D scenarios due to the inaccessibility of large-scale
3D-text pairs. To this end, we propose to distill knowledge encoded in
pre-trained vision-language (VL) foundation models through captioning
multi-view images from 3D, which allows explicitly associating 3D and
semantic-rich captions. Further, to foster coarse-to-fine visual-semantic
representation learning from captions, we design hierarchical 3D-caption pairs,
leveraging geometric constraints between 3D scenes and multi-view images.
Finally, by employing contrastive learning, the model learns language-aware
embeddings that connect 3D and text for open-vocabulary tasks. Our method not
only remarkably outperforms baseline methods by 25.8% $\sim$ 44.7% hIoU and
14.5% $\sim$ 50.4% hAP$_{50}$ in open-vocabulary semantic and instance
segmentation, but also shows robust transferability on challenging zero-shot
domain transfer tasks. See the project website at
https://dingry.github.io/projects/PLA.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 15:52:22 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 05:17:01 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Ding",
"Runyu",
""
],
[
"Yang",
"Jihan",
""
],
[
"Xue",
"Chuhui",
""
],
[
"Zhang",
"Wenqing",
""
],
[
"Bai",
"Song",
""
],
[
"Qi",
"Xiaojuan",
""
]
] |
new_dataset
| 0.9998 |
2302.01990
|
Arani Roy
|
Arani Roy and Kaushik Roy
|
HADES: Hardware/Algorithm Co-design in DNN accelerators using
Energy-efficient Approximate Alphabet Set Multipliers
|
6 pages, 3 figures, 6 tables
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Edge computing must be capable of executing computationally intensive
algorithms, such as Deep Neural Networks (DNNs) while operating within a
constrained computational resource budget. Such computations involve Matrix
Vector Multiplications (MVMs) which are the dominant contributor to the memory
and energy budget of DNNs. To alleviate the computational intensity and storage
demand of MVMs, we propose circuit-algorithm co-design techniques with
low-complexity approximate Multiply-Accumulate (MAC) units derived from the
principles of Alphabet Set Multipliers (ASMs). Selecting a few proper
alphabets from ASMs leads to a multiplier-less DNN implementation and enables
encoding of low-precision weights and input activations into fewer bits. To
maintain accuracy under alphabet set approximations, we developed a novel
ASM-alphabet aware training. The proposed low-complexity multiplication-aware
algorithm was implemented In-Memory and Near-Memory with efficient shift
operations to further improve the data-movement cost between memory and
processing unit. We benchmark our design on CIFAR10 and ImageNet datasets for
ResNet and MobileNet models and attain <1-2% accuracy degradation against full
precision with energy benefits of >50% compared to the standard von Neumann
counterpart.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 20:21:33 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 20:02:16 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Roy",
"Arani",
""
],
[
"Roy",
"Kaushik",
""
]
] |
new_dataset
| 0.997122 |
2302.12781
|
Mohammad Ali Sayed
|
Khaled Sarieddine, Mohammad Ali Sayed, Danial Jafarigiv, Ribal
Atallah, Mourad Debbabi, and Chadi Assi
|
A Real-Time Cosimulation Testbed for Electric Vehicle Charging and Smart
Grid Security
|
"\c{opyright} 2022 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works."
|
IEEE Security & Privacy, 2023
|
10.1109/MSEC.2023.3247374
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Faced with the threat of climate change, the world is rapidly adopting
Electric Vehicles (EVs). The EV ecosystem, however, is vulnerable to
cyber-attacks putting it and the power grid at risk. In this article, we
present a security-oriented real-time Co-simulation Testbed for the EV
ecosystem and the power grid.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 17:52:39 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Sarieddine",
"Khaled",
""
],
[
"Sayed",
"Mohammad Ali",
""
],
[
"Jafarigiv",
"Danial",
""
],
[
"Atallah",
"Ribal",
""
],
[
"Debbabi",
"Mourad",
""
],
[
"Assi",
"Chadi",
""
]
] |
new_dataset
| 0.995859 |
2303.06919
|
Kun Zhou
|
Kun Zhou, Wenbo Li, Yi Wang, Tao Hu, Nianjuan Jiang, Xiaoguang Han,
Jiangbo Lu
|
NeRFLiX: High-Quality Neural View Synthesis by Learning a
Degradation-Driven Inter-viewpoint MiXer
|
Accepted to CVPR 2023; Project Page: see
https://redrock303.github.io/nerflix/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Neural radiance fields (NeRF) show great success in novel view synthesis.
However, in real-world scenes, recovering high-quality details from the source
images is still challenging for existing NeRF-based approaches, due to
potentially imperfect calibration information and inaccurate scene
representations. Even with high-quality training frames, the synthetic novel views
produced by NeRF models still suffer from notable rendering artifacts, such as
noise, blur, etc. To improve the synthesis quality of NeRF-based
approaches, we propose NeRFLiX, a general NeRF-agnostic restorer paradigm that
learns a degradation-driven inter-viewpoint mixer. Specifically, we design a
NeRF-style degradation modeling approach and construct large-scale training
data, enabling the possibility of effectively removing NeRF-native rendering
artifacts for existing deep neural networks. Moreover, beyond the degradation
removal, we propose an inter-viewpoint aggregation framework that is able to
fuse highly related high-quality training images, pushing the performance of
cutting-edge NeRF models to entirely new levels and producing highly
photo-realistic synthetic views.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 08:36:30 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 09:45:51 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Zhou",
"Kun",
""
],
[
"Li",
"Wenbo",
""
],
[
"Wang",
"Yi",
""
],
[
"Hu",
"Tao",
""
],
[
"Jiang",
"Nianjuan",
""
],
[
"Han",
"Xiaoguang",
""
],
[
"Lu",
"Jiangbo",
""
]
] |
new_dataset
| 0.999388 |
2303.09461
|
Sebastian Bordt
|
Sebastian Bordt, Ulrike von Luxburg
|
ChatGPT Participates in a Computer Science Exam
| null | null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We asked ChatGPT to participate in an undergraduate computer science exam on
''Algorithms and Data Structures''. The program was evaluated on the entire
exam as posed to the students. We hand-copied its answers onto an exam sheet,
which was subsequently graded in a blind setup alongside those of 200
participating students. We find that ChatGPT narrowly passed the exam,
obtaining 20.5 out of 40 points. This impressive performance indicates that
ChatGPT can indeed succeed in challenging tasks like university exams. At the
same time, the questions in our exam are structurally similar to those of other
exams, solved homework problems, and teaching materials that can be found
online and might have been part of ChatGPT's training data. Therefore, it would
be inadequate to conclude from this experiment that ChatGPT has any
understanding of computer science. We also assess the improvements brought by
GPT-4. We find that GPT-4 would have obtained about 17\% more exam points than
GPT-3.5, reaching the performance of the average student. The transcripts of
our conversations with ChatGPT are available at
\url{https://github.com/tml-tuebingen/chatgpt-algorithm-exam}, and the entire
graded exam is in the appendix of this paper.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 15:46:14 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 11:30:41 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Bordt",
"Sebastian",
""
],
[
"von Luxburg",
"Ulrike",
""
]
] |
new_dataset
| 0.990602 |
2303.09638
|
Lu Niu
|
Lu Niu, Jeremy Speth, Nathan Vance, Benjamin Sporrer, Adam Czajka,
Patrick Flynn
|
Full-Body Cardiovascular Sensing with Remote Photoplethysmography
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Remote photoplethysmography (rPPG) allows for noncontact monitoring of blood
volume changes from a camera by detecting minor fluctuations in reflected
light. Prior applications of rPPG focused on face videos. In this paper we
explored the feasibility of rPPG from non-face body regions such as the arms,
legs, and hands. We collected a new dataset titled Multi-Site Physiological
Monitoring (MSPM), which will be released with this paper. The dataset consists
of 90 frames per second video of exposed arms, legs, and face, along with 10
synchronized PPG recordings. We performed baseline heart rate estimation
experiments from non-face regions with several state-of-the-art rPPG
approaches, including chrominance-based (CHROM), plane-orthogonal-to-skin (POS)
and RemotePulseNet (RPNet). To our knowledge, this is the first evaluation of
the fidelity of rPPG signals simultaneously obtained from multiple regions of a
human body. Our experiments showed that skin pixels from arms, legs, and hands
are all potential sources of the blood volume pulse. The best-performing
approach, POS, achieved a mean absolute error peaking at 7.11 beats per minute
from non-facial body parts compared to 1.38 beats per minute from the face.
Additionally, we performed experiments on pulse transit time (PTT) from both
the contact PPG and rPPG signals. We found that remote PTT is possible with
moderately high frame rate video when distal locations on the body are visible.
These findings and the supporting dataset should facilitate new research on
non-face rPPG and monitoring blood flow dynamics over the whole body with a
camera.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 20:37:07 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Niu",
"Lu",
""
],
[
"Speth",
"Jeremy",
""
],
[
"Vance",
"Nathan",
""
],
[
"Sporrer",
"Benjamin",
""
],
[
"Czajka",
"Adam",
""
],
[
"Flynn",
"Patrick",
""
]
] |
new_dataset
| 0.999615 |
2303.10659
|
Ramakanth Kavuluru
|
Yuhang Jiang and Ramakanth Kavuluru
|
COVID-19 event extraction from Twitter via extractive question answering
with continuous prompts
|
Accepted to appear in MEDINFO 2023. Code:
https://github.com/bionlproc/twitter-covid-QA-extraction
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
As COVID-19 ravages the world, social media analytics could augment
traditional surveys in assessing how the pandemic evolves and capturing
consumer chatter that could help healthcare agencies in addressing it. This
typically involves mining disclosure events that mention testing positive for
the disease or discussions surrounding perceptions and beliefs in preventative
or treatment options. The 2020 shared task on COVID-19 event extraction
(conducted as part of the W-NUT workshop during the EMNLP conference)
introduced a new Twitter dataset for benchmarking event extraction from
COVID-19 tweets. In this paper, we cast the problem of event extraction as
extractive question answering using recent advances in continuous prompting in
language models. On the shared task test dataset, our approach leads to over 5%
absolute micro-averaged F1-score improvement over prior best results, across
all COVID-19 event slots. Our ablation study shows that continuous prompts have
a major impact on the eventual performance.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 13:47:56 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 15:27:08 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Jiang",
"Yuhang",
""
],
[
"Kavuluru",
"Ramakanth",
""
]
] |
new_dataset
| 0.987792 |
2303.11330
|
Deepak Pathak
|
Xuxin Cheng, Ashish Kumar, Deepak Pathak
|
Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion
|
Accepted at ICRA 2023. Videos at https://robot-skills.github.io
| null | null | null |
cs.RO cs.AI cs.CV cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Locomotion has seen dramatic progress for walking or running across
challenging terrains. However, robotic quadrupeds are still far behind their
biological counterparts, such as dogs, which display a variety of agile skills
and can use the legs beyond locomotion to perform several basic manipulation
tasks like interacting with objects and climbing. In this paper, we take a step
towards bridging this gap by training quadruped robots not only to walk but
also to use the front legs to climb walls, press buttons, and perform object
interaction in the real world. To handle this challenging optimization, we
decouple the skill learning broadly into locomotion, which covers anything
involving movement, whether walking or climbing a wall, and
manipulation, which involves using one leg to interact while balancing on the
other three legs. These skills are trained in simulation using curriculum and
transferred to the real world using our proposed sim2real variant that builds
upon recent locomotion success. Finally, we combine these skills into a robust
long-term plan by learning a behavior tree that encodes a high-level task
hierarchy from one clean expert demonstration. We evaluate our method in both
simulation and real-world showing successful executions of both short as well
as long-range tasks and how robustness helps confront external perturbations.
Videos at https://robot-skills.github.io
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 17:59:58 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 08:48:15 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Cheng",
"Xuxin",
""
],
[
"Kumar",
"Ashish",
""
],
[
"Pathak",
"Deepak",
""
]
] |
new_dataset
| 0.99792 |
2303.11553
|
Daniel Gonzalez Cedre
|
Daniel Gonzalez Cedre, Justus Isaiah Hibshman, Timothy La Fond, Grant
Boquet, Tim Weninger
|
Dynamic Vertex Replacement Grammars
| null | null | null | null |
cs.LG cs.FL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Context-free graph grammars have shown a remarkable ability to model
structures in real-world relational data. However, graph grammars lack the
ability to capture time-changing phenomena since the left-to-right transitions
of a production rule do not represent temporal change. In the present work, we
describe dynamic vertex-replacement grammars (DyVeRG), which generalize vertex
replacement grammars in the time domain by providing a formal framework for
updating a learned graph grammar in accordance with modifications to its
underlying data. We show that DyVeRG grammars can be learned from, and used to
generate, real-world dynamic graphs faithfully while remaining
human-interpretable. We also demonstrate their ability to forecast by computing
dyvergence scores, a novel graph similarity measurement exposed by this
framework.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 02:44:15 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 01:13:23 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Cedre",
"Daniel Gonzalez",
""
],
[
"Hibshman",
"Justus Isaiah",
""
],
[
"La Fond",
"Timothy",
""
],
[
"Boquet",
"Grant",
""
],
[
"Weninger",
"Tim",
""
]
] |
new_dataset
| 0.992509 |
2303.11997
|
Saizhe Ding
|
Saizhe Ding, Jinze Chen, Yang Wang, Yu Kang, Weiguo Song, Jie Cheng,
Yang Cao
|
E-MLB: Multilevel Benchmark for Event-Based Camera Denoising
| null |
IEEE Transactions on Multimedia, 2023
|
10.1109/TMM.2023.3260638
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event cameras, such as dynamic vision sensors (DVS), are biologically
inspired vision sensors that have advanced over conventional cameras in high
dynamic range, low latency and low power consumption, showing great application
potential in many fields. Event cameras are more sensitive to junction leakage
current and photocurrent as they output differential signals, losing the
smoothing function of the integral imaging process in the RGB camera. The
logarithmic conversion further amplifies noise, especially in low-contrast
conditions. Recently, researchers proposed a series of datasets and evaluation
metrics but limitations remain: 1) the existing datasets are small in scale and
insufficient in noise diversity, which cannot reflect the authentic working
environments of event cameras; and 2) the existing denoising evaluation metrics
are mostly referenced evaluation metrics, relying on APS information or manual
annotation. To address the above issues, we construct a large-scale event
denoising dataset (multilevel benchmark for event denoising, E-MLB) for the
first time, which consists of 100 scenes, each with four noise levels, and is
12 times larger than the largest existing denoising dataset. We also propose
the first nonreference event denoising metric, the event structural ratio
(ESR), which measures the structural intensity of given events. ESR is inspired
by the contrast metric, but is independent of the number of events and
projection direction. Based on the proposed benchmark and ESR, we evaluate the
most representative denoising algorithms, including classic and SOTA, and
provide denoising baselines under various scenes and noise levels. The
corresponding results and codes are available at
https://github.com/KugaMaxx/cuke-emlb.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 16:31:53 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Ding",
"Saizhe",
""
],
[
"Chen",
"Jinze",
""
],
[
"Wang",
"Yang",
""
],
[
"Kang",
"Yu",
""
],
[
"Song",
"Weiguo",
""
],
[
"Cheng",
"Jie",
""
],
[
"Cao",
"Yang",
""
]
] |
new_dataset
| 0.999179 |
2303.12171
|
Timo Asikainen
|
Timo Asikainen and Tomi M\"annist\"o and Eetu Huovila
|
nivel2: A web-based multi-level modelling environment built on a
relational database
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the nivel2 software for multi-level modelling. Multi-level
modelling is a modelling paradigm where a model element may be simultaneously a
type for and an instance of other elements under some constraints. This
contrasts with traditional modelling methods, such as the UML, where an element
may not be a class and an object simultaneously. Unlike previous
approaches to multi-level modelling, the nivel2 software utilises an
industrial-scale relational database for data storage and reasoning. Further, a web-based
user interface is provided for viewing and editing models. The architecture
enables multiple users in different roles to work on the same models at various
levels of abstraction at the same time.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 20:07:03 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Asikainen",
"Timo",
""
],
[
"Männistö",
"Tomi",
""
],
[
"Huovila",
"Eetu",
""
]
] |
new_dataset
| 0.999715 |
2303.12194
|
Zixiang Zhou
|
Zixiang Zhou, Dongqiangzi Ye, Weijia Chen, Yufei Xie, Yu Wang, Panqu
Wang, Hassan Foroosh
|
LiDARFormer: A Unified Transformer-based Multi-task Network for LiDAR
Perception
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
There is a recent trend in the LiDAR perception field towards unifying
multiple tasks in a single strong network with improved performance, as opposed
to using separate networks for each task. In this paper, we introduce a new
LiDAR multi-task learning paradigm based on the transformer. The proposed
LiDARFormer utilizes cross-space global contextual feature information and
exploits cross-task synergy to boost the performance of LiDAR perception tasks
across multiple large-scale datasets and benchmarks. Our novel
transformer-based framework includes a cross-space transformer module that
learns attentive features between the 2D dense Bird's Eye View (BEV) and 3D
sparse voxel feature maps. Additionally, we propose a transformer decoder for
the segmentation task to dynamically adjust the learned features by leveraging
the categorical feature representations. Furthermore, we combine the
segmentation and detection features in a shared transformer decoder with
cross-task attention layers to enhance and integrate the object-level and
class-level features. LiDARFormer is evaluated on the large-scale nuScenes and
the Waymo Open datasets for both 3D detection and semantic segmentation tasks,
and it outperforms all previously published methods on both tasks. Notably,
LiDARFormer achieves the state-of-the-art performance of 76.4% L2 mAPH and
74.3% NDS on the challenging Waymo and nuScenes detection benchmarks for a
single-model LiDAR-only method.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 20:52:02 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Zhou",
"Zixiang",
""
],
[
"Ye",
"Dongqiangzi",
""
],
[
"Chen",
"Weijia",
""
],
[
"Xie",
"Yufei",
""
],
[
"Wang",
"Yu",
""
],
[
"Wang",
"Panqu",
""
],
[
"Foroosh",
"Hassan",
""
]
] |
new_dataset
| 0.997971 |
2303.12208
|
Sungwoong Kim
|
Sungwoong Kim, Daejin Jo, Donghoon Lee, Jongmin Kim
|
MAGVLT: Masked Generative Vision-and-Language Transformer
|
CVPR 2023
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While generative modeling on multimodal image-text data has been actively
developed with large-scale paired datasets, there have been limited attempts to
generate both image and text data by a single model rather than a generation of
one fixed modality conditioned on the other modality. In this paper, we explore
a unified generative vision-and-language (VL) model that can produce both
images and text sequences. Especially, we propose a generative VL transformer
based on the non-autoregressive mask prediction, named MAGVLT, and compare it
with an autoregressive generative VL transformer (ARGVLT). In comparison to
ARGVLT, the proposed MAGVLT enables bidirectional context encoding, fast
decoding by parallel token predictions in an iterative refinement, and extended
editing capabilities such as image and text infilling. For rigorous training of
our MAGVLT with image-text pairs from scratch, we combine the image-to-text,
text-to-image, and joint image-and-text mask prediction tasks. Moreover, we
devise two additional tasks based on the step-unrolled mask prediction and the
selective prediction on the mixture of two image-text pairs. Experimental
results on various downstream generation tasks of VL benchmarks show that our
MAGVLT outperforms ARGVLT by a large margin even with significant inference
speedup. Particularly, MAGVLT achieves competitive results on both zero-shot
image-to-text and text-to-image generation tasks from MS-COCO by one
moderate-sized model (fewer than 500M parameters) even without the use of
monomodal data and networks.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 21:49:39 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Kim",
"Sungwoong",
""
],
[
"Jo",
"Daejin",
""
],
[
"Lee",
"Donghoon",
""
],
[
"Kim",
"Jongmin",
""
]
] |
new_dataset
| 0.99805 |
2303.12269
|
Behnam Ghavami
|
Eduardo Rhod, Behnam Ghavami, Zhenman Fang, Lesley Shannon
|
A Cycle-Accurate Soft Error Vulnerability Analysis Framework for
FPGA-based Designs
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Many aerospace and automotive applications use FPGAs in their designs due to
their low power consumption and reconfigurability. Meanwhile, such
applications also pose a high standard on system reliability, which makes the
early-stage reliability analysis for FPGA-based designs very critical.
In this paper, we present a framework that enables fast and accurate
early-stage analysis of soft error vulnerability for small FPGA-based designs.
Our framework first extracts the post-synthesis netlist from an FPGA design.
Then it inserts the bit-flip configuration faults into the design netlist using
our proposed interface software. After that, it seamlessly feeds the golden
copy and fault copies of the netlist into the open source simulator Verilator
for cycle-accurate simulation. Finally, it generates a histogram of
vulnerability scores of the original design to guide the reliability analysis.
Experimental results show that our framework runs up to 53x faster than the
Xilinx Vivado fault simulation with cycle-level accuracy, when analyzing the
injected bit-flip faults on the ITC'99 benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 02:35:07 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Rhod",
"Eduardo",
""
],
[
"Ghavami",
"Behnam",
""
],
[
"Fang",
"Zhenman",
""
],
[
"Shannon",
"Lesley",
""
]
] |
new_dataset
| 0.996441 |
2303.12319
|
Guangzheng Hu
|
Guangzheng Hu, Haoran Li, Shasha Liu, Mingjun Ma, Yuanheng Zhu, and
Dongbin Zhao
|
NeuronsMAE: A Novel Multi-Agent Reinforcement Learning Environment for
Cooperative and Competitive Multi-Robot Tasks
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-agent reinforcement learning (MARL) has achieved remarkable success in
various challenging problems. Meanwhile, more and more benchmarks have emerged
and provided some standards to evaluate the algorithms in different fields. On
the one hand, the virtual MARL environments lack knowledge of real-world tasks
and actuator abilities; on the other hand, current task-specific
multi-robot platforms poorly support general multi-agent
reinforcement learning algorithms and offer little support for transferring from
simulation to the real environment. Bridging the gap between the virtual MARL
environments and the real multi-robot platform becomes the key to promoting the
practicability of MARL algorithms. This paper proposes a novel MARL environment
for real multi-robot tasks named NeuronsMAE (Neurons Multi-Agent Environment).
This environment supports cooperative and competitive multi-robot tasks and is
configured with rich parameter interfaces to study the multi-agent policy
transfer from simulation to reality. With this platform, we evaluate various
popular MARL algorithms and build a new MARL benchmark for multi-robot tasks.
We hope that this platform will facilitate the research and application of MARL
algorithms for real robot tasks. Information about the benchmark and the
open-source code will be released.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 05:30:39 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Hu",
"Guangzheng",
""
],
[
"Li",
"Haoran",
""
],
[
"Liu",
"Shasha",
""
],
[
"Ma",
"Mingjun",
""
],
[
"Zhu",
"Yuanheng",
""
],
[
"Zhao",
"Dongbin",
""
]
] |
new_dataset
| 0.996871 |
2303.12341
|
Chao Chen
|
Chao Chen, Haoyu Geng, Nianzu Yang, Xiaokang Yang and Junchi Yan
|
EasyDGL: Encode, Train and Interpret for Continuous-time Dynamic Graph
Learning
|
9 figures, 7 tables
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic graphs arise in various real-world applications, and it is often
desirable to model the dynamics directly in the continuous-time domain for
flexibility. This paper aims to design an easy-to-use pipeline (termed
EasyDGL, a name that also reflects its implementation with the DGL toolkit) composed of
three key modules with both strong fitting ability and interpretability.
Specifically, the proposed pipeline involves encoding, training and
interpreting: i) a temporal point process (TPP) modulated attention
architecture to endow the continuous-time resolution with the coupled
spatiotemporal dynamics of the observed graph with edge-addition events; ii) a
principled loss composed of task-agnostic TPP posterior maximization based on
observed events on the graph, and a task-aware loss with a masking strategy
over dynamic graph, where the covered tasks include dynamic link prediction,
dynamic node classification and node traffic forecasting; iii) interpretation
of the model outputs (e.g., representations and predictions) with scalable
perturbation-based quantitative analysis in the graph Fourier domain, which
could more comprehensively reflect the behavior of the learned model. Extensive
experimental results on public benchmarks show the superior performance of our
EasyDGL for time-conditioned predictive tasks, and in particular demonstrate
that EasyDGL can effectively quantify the predictive power of the frequency content
that a model learns from the evolving graph data.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 06:35:08 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Chen",
"Chao",
""
],
[
"Geng",
"Haoyu",
""
],
[
"Yang",
"Nianzu",
""
],
[
"Yang",
"Xiaokang",
""
],
[
"Yan",
"Junchi",
""
]
] |
new_dataset
| 0.999617 |
2303.12374
|
Stijn Heldens
|
Stijn Heldens, Ben van Werkhoven
|
Kernel Launcher: C++ Library for Optimal-Performance Portable CUDA
Applications
| null |
International Workshop on Automatic Performance Tuning (iWAPT) at
IEEE International Parallel and Distributed Processing Symposium Workshops
(IPDPSW), 2023
| null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Graphics Processing Units (GPUs) have become ubiquitous in scientific
computing. However, writing efficient GPU kernels can be challenging due to the
need for careful code tuning. To automatically explore the kernel optimization
space, several auto-tuning tools - like Kernel Tuner - have been proposed.
Unfortunately, these existing auto-tuning tools often do not concern themselves
with integration of tuning results back into applications, which puts a
significant implementation and maintenance burden on application developers. In
this work, we present Kernel Launcher: an easy-to-use C++ library that
simplifies the creation of highly-tuned CUDA applications. With Kernel
Launcher, programmers can capture kernel launches, tune the captured kernels
for different setups, and integrate the tuning results back into applications
using runtime compilation. To showcase the applicability of Kernel Launcher, we
consider a real-world computational fluid dynamics code and tune its kernels
for different GPUs, input domains, and precisions.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 08:21:42 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Heldens",
"Stijn",
""
],
[
"van Werkhoven",
"Ben",
""
]
] |
new_dataset
| 0.99484 |
2303.12379
|
Yi-Shan Lee
|
Yi-Shan Lee, Wei-Cheng Tseng, Fu-En Wang, Min Sun
|
VMCML: Video and Music Matching via Cross-Modality Lifting
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a content-based system for matching video and background music.
The system aims to address the challenges in music recommendation for new users
or new music given short-form videos. To this end, we propose a cross-modal
framework VMCML that finds a shared embedding space between video and music
representations. To ensure the embedding space can be effectively shared by
both representations, we leverage the CosFace loss, a margin-based cosine
similarity loss. Furthermore, we establish a large-scale dataset called MSVD,
in which we provide 390 individual music tracks and the corresponding 150,000 matched
videos. We conduct extensive experiments on Youtube-8M and our MSVD datasets.
Our quantitative and qualitative results demonstrate the effectiveness of our
proposed framework and achieve state-of-the-art video and music matching
performance.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 08:28:23 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Lee",
"Yi-Shan",
""
],
[
"Tseng",
"Wei-Cheng",
""
],
[
"Wang",
"Fu-En",
""
],
[
"Sun",
"Min",
""
]
] |
new_dataset
| 0.999712 |
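To make the margin-based cosine similarity (CosFace) loss mentioned in the VMCML abstract above concrete, here is a minimal PyTorch sketch. The scale s, margin m, embedding size, and the use of one prototype per music track are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def cosface_loss(embeddings, weights, labels, s=30.0, m=0.35):
    """Large-margin cosine (CosFace-style) loss.

    embeddings: (B, D) sample features; weights: (C, D) class prototypes
    (here, for example, one prototype per music track); s = scale, m = cosine margin.
    """
    cos = F.normalize(embeddings) @ F.normalize(weights).t()   # (B, C) cosine similarities
    one_hot = F.one_hot(labels, num_classes=weights.size(0)).float()
    logits = s * (cos - m * one_hot)        # subtract the margin only at the target class
    return F.cross_entropy(logits, labels)

# Toy usage: 8 video embeddings scored against 390 music prototypes.
emb = torch.randn(8, 256)
proto = torch.randn(390, 256)
labels = torch.randint(0, 390, (8,))
print(cosface_loss(emb, proto, labels))
```

In practice the prototypes would be a learnable parameter trained jointly with the video and music encoders.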
2303.12438
|
Oliver Lang
|
Oliver Lang, Christian Hofbauer, Reinhard Feger, Mario Huemer
|
Doppler-Division Multiplexing for MIMO OFDM Joint Sensing and
Communications
|
13 pages, 11 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A promising waveform candidate for future joint sensing and communication
systems is orthogonal frequency-division multiplexing (OFDM). For such systems,
supporting multiple transmit antennas requires multiplexing methods for the
generation of orthogonal transmit signals, where equidistant subcarrier
interleaving (ESI) is the most popular multiplexing method. In this work, we
analyze a multiplexing method called Doppler-division multiplexing (DDM). This
method applies a phase shift from OFDM symbol to OFDM symbol to separate
signals transmitted by different Tx antennas along the velocity axis of the
range-Doppler map. While general properties of DDM for the task of radar
sensing are analyzed in this work, the main focus lies on the implications of
DDM on the communication task. It will be shown that for DDM, the channels
observed in the communication receiver are heavily time-varying, preventing any
meaningful transmission of data when not taken into account. In this work, a
communication system designed to combat these time-varying channels is
proposed, which includes methods for data estimation, synchronization, and
channel estimation. Bit error ratio (BER) simulations demonstrate the
superiority of this communications system compared to a system utilizing ESI.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 10:18:58 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Lang",
"Oliver",
""
],
[
"Hofbauer",
"Christian",
""
],
[
"Feger",
"Reinhard",
""
],
[
"Huemer",
"Mario",
""
]
] |
new_dataset
| 0.991286 |
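As a rough illustration of the Doppler-division multiplexing idea described above, the NumPy sketch below applies a per-OFDM-symbol phase ramp to each Tx antenna so that, after an FFT across symbols, the antennas land in distinct Doppler bins. The antenna count, symbol count, and phase-ramp spacing are illustrative assumptions; the paper's actual waveform design and receiver processing are more involved.

```python
import numpy as np

# Illustrative parameters (not from the paper): 4 Tx antennas, 256 OFDM symbols.
num_tx, num_sym = 4, 256
rng = np.random.default_rng(0)

# Same QPSK payload on every antenna, single subcarrier, for simplicity.
payload = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), size=num_sym)

# DDM: antenna k multiplies OFDM symbol m by exp(j*2*pi*k*m/num_tx),
# shifting its energy to a distinct position along the Doppler axis.
m = np.arange(num_sym)
tx = np.stack([payload * np.exp(1j * 2 * np.pi * k * m / num_tx) for k in range(num_tx)])

# Noiseless, unit-channel receive signal: superposition of all antennas.
rx = tx.sum(axis=0)

# Doppler processing: remove the payload, then FFT across OFDM symbols.
doppler = np.fft.fft(rx * payload.conj())
peaks = np.sort(np.argsort(np.abs(doppler))[-num_tx:])
print(peaks)  # [  0  64 128 192]: one Doppler bin per antenna, spaced num_sym/num_tx apart
```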
2303.12455
|
Lei Hu
|
Lei Hu, Chen Sun, Guyue Li, Aiqun Hu, Derrick Wing Kwan Ng
|
Reconfigurable Intelligent Surface-aided Secret Key Generation in
Multi-Cell Systems
|
30 pages, 12 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical-layer key generation (PKG) exploits the reciprocity and randomness
of wireless channels to generate a symmetric key between two legitimate
communication ends. However, in multi-cell systems, PKG suffers from severe
pilot contamination due to the reuse of pilots in different cells. In this
paper, we invoke multiple reconfigurable intelligent surfaces (RISs) for
adaptively shaping the environment and enhancing the PKG performance. To this
end, we formulate an optimization problem to maximize the weighted sum key rate
(WSKR) by jointly optimizing the precoding matrices at the base stations (BSs)
and the phase shifts at the RISs. For addressing the non-convexity of the
problem, we derive an upper bound of the WSKR and prove its tightness. To
tackle the upper bound maximization problem, we apply an alternating
optimization (AO)-based algorithm to divide the joint optimization into two
sub-problems. We apply the Lagrangian dual approach based on the
Karush-Kuhn-Tucker (KKT) conditions for the sub-problem of precoding matrices
and adopt a projected gradient ascent (PGA) algorithm for the sub-problem of
phase shifts. Simulation results confirm the near-optimal performance of the
proposed algorithm and the effectiveness of RISs for improving the WSKR via
mitigating pilot contamination.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 10:56:43 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Hu",
"Lei",
""
],
[
"Sun",
"Chen",
""
],
[
"Li",
"Guyue",
""
],
[
"Hu",
"Aiqun",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
]
] |
new_dataset
| 0.996013 |
2303.12492
|
Vincenzo Suriani
|
Vincenzo Suriani, Daniele Nardi
|
Preserving HRI Capabilities: Physical, Remote and Simulated Modalities
in the SciRoc 2021 Competition
|
HRI 2023 Workshop on Advancing HRI Researchand Benchmarking Through
Open-Source Ecosystems, March 13, 2023, Stockholm, Sweden
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, robots have been moving out of research laboratories into
everyday life. Competitions aiming at benchmarking the capabilities of a robot
in everyday scenarios are useful for taking a step forward along this path. In fact,
they foster the development of robust architectures capable of solving issues
that might occur during human-robot coexistence in human-shaped scenarios. One
of those competitions is SciRoc that, in its second edition, proposed new
benchmarking environments. In particular, Episode 1 of SciRoc 2 proposed three
different modalities of participation while preserving the Human-Robot
Interaction (HRI), being a fundamental benchmarking functionality. The Coffee
Shop environment, used to challenge the participating teams, represented an
excellent testbed enabling the benchmarking of different robotics
functionalities, but also an exceptional opportunity for proposing novel
solutions to guarantee real human-robot interaction procedures despite the
Covid-19 pandemic restrictions. The developed software is publicly released.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 12:05:25 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Suriani",
"Vincenzo",
""
],
[
"Nardi",
"Daniele",
""
]
] |
new_dataset
| 0.972051 |
2303.12536
|
Aakash Garg
|
Aakash Garg, Ankit Tyagi, Anant Patel, Divyansh Raj
|
BlockChain and Decentralized Apps
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Blockchain, the backbone of Bitcoin, has recently gained a lot of attention.
Blockchain functions as an immutable record that enables decentralized
transactions. Blockchain-based applications are sprouting up in a variety of
industries, including financial services, reputation systems, and the Internet
of Things (IoT), among others. However, many hurdles of blockchain technology,
including scalability and security issues, have to be overcome. Many
industries, including finance, medicine, manufacturing, and education, use
blockchain applications to capitalize on this technology's unique set of
properties. Blockchain technology (BT) has the potential to improve
trustworthiness, collaboration, organization, identity, credibility, and
transparency. We provide an overview of blockchain architecture and various
kinds of blockchains, as well as information about decentralized
apps, also known as Dapps. This paper provides an in-depth look at
blockchain technology.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 13:10:29 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Garg",
"Aakash",
""
],
[
"Tyagi",
"Ankit",
""
],
[
"Patel",
"Anant",
""
],
[
"Raj",
"Divyansh",
""
]
] |
new_dataset
| 0.962957 |
2303.12621
|
Yanan Zhang
|
Chao Zhou, Yanan Zhang, Jiaxin Chen, Di Huang
|
OcTr: Octree-based Transformer for 3D Object Detection
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key challenge for LiDAR-based 3D object detection is to capture sufficient
features from large-scale 3D scenes, especially for distant and/or occluded
objects. Despite recent efforts by Transformers with long-sequence
modeling capability, they fail to properly balance the accuracy and efficiency,
suffering from inadequate receptive fields or coarse-grained holistic
correlations. In this paper, we propose an Octree-based Transformer, named
OcTr, to address this issue. It first constructs a dynamic octree on the
hierarchical feature pyramid through conducting self-attention on the top level
and then recursively propagates to the level below restricted by the octants,
which captures rich global context in a coarse-to-fine manner while maintaining
the computational complexity under control. Furthermore, for enhanced
foreground perception, we propose a hybrid positional embedding, composed of
the semantic-aware positional embedding and attention mask, to fully exploit
semantic and geometry clues. Extensive experiments are conducted on the Waymo
Open Dataset and KITTI Dataset, and OcTr reaches new state-of-the-art
results.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 15:01:20 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Zhou",
"Chao",
""
],
[
"Zhang",
"Yanan",
""
],
[
"Chen",
"Jiaxin",
""
],
[
"Huang",
"Di",
""
]
] |
new_dataset
| 0.998916 |
2303.12725
|
Zhipeng Chang
|
Zhipeng Chang, Ruiling Ma, Wenliang Jia
|
Pedestrian detection for low-light vision proposal
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The demand for pedestrian detection has created a challenging problem for
various visual tasks such as image fusion. As infrared images can capture
thermal radiation information, image fusion between infrared and visible images
could significantly improve target detection under environmental limitations.
In our project, we would approach by preprocessing our dataset with image
fusion technique, then using Vision Transformer model to detect pedestrians
from the fused images. During the evaluation procedure, a comparison would be
made between the performance of YOLOv5 and the revised ViT model on our fused images.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 04:13:58 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Chang",
"Zhipeng",
""
],
[
"Ma",
"Ruiling",
""
],
[
"Jia",
"Wenliang",
""
]
] |
new_dataset
| 0.99756 |
2303.12727
|
Bingquan Zhang
|
Xinrui Chen, Bingquan Zhang
|
A XGBoost Algorithm-based Fatigue Recognition Model Using Face Detection
|
6 pages; 2 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As fatigue is normally revealed in the eyes and mouth of a person's face,
this paper constructs an XGBoost algorithm-based fatigue recognition
model using two indicators, EAR (Eye Aspect Ratio) and MAR (Mouth Aspect
Ratio). With an accuracy rate of 87.37% and sensitivity rate of 89.14%, the
model was proved to be efficient and valid for further applications.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 11:31:35 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Chen",
"Xinrui",
""
],
[
"Zhang",
"Bingquan",
""
]
] |
new_dataset
| 0.982428 |
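As a sketch of how the two indicators named in the abstract above could feed an XGBoost classifier, the snippet below computes EAR and MAR from facial landmarks and fits a binary fatigue classifier on placeholder features. The landmark ordering follows one common 68-point convention and the training data are synthetic; the paper's exact feature extraction and windowing are not specified here.

```python
import numpy as np
from xgboost import XGBClassifier

def eye_aspect_ratio(eye):
    # eye: 6 (x, y) landmarks in the common 68-point ordering (assumption).
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

def mouth_aspect_ratio(mouth):
    # mouth: 8 inner-lip (x, y) landmarks; vertical openings over the width (assumption).
    a = np.linalg.norm(mouth[1] - mouth[7])
    b = np.linalg.norm(mouth[2] - mouth[6])
    c = np.linalg.norm(mouth[3] - mouth[5])
    d = np.linalg.norm(mouth[0] - mouth[4])
    return (a + b + c) / (2.0 * d)

# Placeholder training data: one (EAR, MAR) pair per frame window.
rng = np.random.default_rng(0)
X = rng.random((500, 2))        # replace with real EAR/MAR measurements
y = rng.integers(0, 2, 500)     # 1 = fatigued, 0 = alert (synthetic labels)

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
clf.fit(X, y)
print(clf.predict(X[:5]))
```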
2303.12772
|
Tasnim Sakib Apon
|
Ramisa Anan, Tasnim Sakib Apon, Zeba Tahsin Hossain, Elizabeth Antora
Modhu, Sudipta Mondal, MD. Golam Rabiul Alam
|
Interpretable Bangla Sarcasm Detection using BERT and Explainable AI
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A positive phrase or a sentence with an underlying negative motive is usually
defined as sarcasm that is widely used in today's social media platforms such
as Facebook, Twitter, Reddit, etc. In recent times active users in social media
platforms are increasing dramatically which raises the need for an automated
NLP-based system that can be utilized in various tasks such as determining
market demand, sentiment analysis, threat detection, etc. However, since
sarcasm usually implies the opposite meaning and its detection is frequently a
challenging issue, data meaning extraction through an NLP-based model becomes
more complicated. As a result, there has been a lot of study on sarcasm
detection in English over the past several years with noticeable
improvement, yet the state of sarcasm detection in the Bangla language remains
unchanged. In this article, we present a BERT-based system that achieves
99.60\% accuracy, while the traditional machine learning algorithms we evaluated
achieve only 89.93\%. Additionally, we have employed Local
Interpretable Model-Agnostic Explanations that introduce explainability to our
system. Moreover, we have utilized a newly collected bangla sarcasm dataset,
BanglaSarc that was constructed specifically for the evaluation of this study.
This dataset consists of fresh records of sarcastic and non-sarcastic comments,
the majority of which are acquired from Facebook and YouTube comment sections.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 17:35:35 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Anan",
"Ramisa",
""
],
[
"Apon",
"Tasnim Sakib",
""
],
[
"Hossain",
"Zeba Tahsin",
""
],
[
"Modhu",
"Elizabeth Antora",
""
],
[
"Mondal",
"Sudipta",
""
],
[
"Alam",
"MD. Golam Rabiul",
""
]
] |
new_dataset
| 0.999308 |
2303.12793
|
Fangyun Wei
|
Yiting Cheng, Fangyun Wei, Jianmin Bao, Dong Chen, Wenqiang Zhang
|
CiCo: Domain-Aware Sign Language Retrieval via Cross-Lingual Contrastive
Learning
|
Accepted by CVPR 2023. Code and models are available at:
https://github.com/FangyunWei/SLRT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work focuses on sign language retrieval, a recently proposed task for
sign language understanding. Sign language retrieval consists of two sub-tasks:
text-to-sign-video (T2V) retrieval and sign-video-to-text (V2T) retrieval.
Different from traditional video-text retrieval, sign language videos not only
contain visual signals but also carry abundant semantic meanings by themselves,
since sign languages are also natural languages. Considering
this characteristic, we formulate sign language retrieval as a cross-lingual
retrieval problem as well as a video-text retrieval task. Concretely, we take
into account the linguistic properties of both sign languages and natural
languages, and simultaneously identify the fine-grained cross-lingual (i.e.,
sign-to-word) mappings while contrasting the texts and the sign videos in a
joint embedding space. This process is termed cross-lingual contrastive
learning. Another challenge is raised by data scarcity: sign language
datasets are orders of magnitude smaller in scale than those for speech
recognition. We alleviate this issue by adopting a domain-agnostic sign encoder
pre-trained on large-scale sign videos into the target domain via
pseudo-labeling. Our framework, termed as domain-aware sign language retrieval
via Cross-lingual Contrastive learning or CiCo for short, outperforms the
pioneering method by large margins on various datasets, e.g., +22.4 T2V and
+28.0 V2T R@1 improvements on How2Sign dataset, and +13.7 T2V and +17.1 V2T R@1
improvements on PHOENIX-2014T dataset. Code and models are available at:
https://github.com/FangyunWei/SLRT.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 17:59:59 GMT"
}
] | 2023-03-23T00:00:00 |
[
[
"Cheng",
"Yiting",
""
],
[
"Wei",
"Fangyun",
""
],
[
"Bao",
"Jianmin",
""
],
[
"Chen",
"Dong",
""
],
[
"Zhang",
"Wenqiang",
""
]
] |
new_dataset
| 0.999425 |
1711.08136
|
Tse-Tin Chan
|
Tse-Tin Chan and Tat-Ming Lok
|
Signal-Aligned Network Coding in K-User MIMO Interference Channels with
Limited Receiver Cooperation
|
12 pages, 4 figures, submitted to the IEEE Transactions on Vehicular
Technology
|
IEEE Transactions on Communications, vol. 68, no. 8, pp.
4832-4843, Aug. 2020
|
10.1109/TCOMM.2020.2992719
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a signal-aligned network coding (SNC) scheme for
K-user time-varying multiple-input multiple-output (MIMO) interference channels
with limited receiver cooperation. We assume that the receivers are connected
to a central processor via wired cooperation links with individual limited
capacities. Our SNC scheme determines the precoding matrices of the
transmitters so that the transmitted signals are aligned at each receiver. The
aligned signals are then decoded into noiseless integer combinations of
messages, also known as network-coded messages, by physical-layer network
coding. The key idea of our scheme is to ensure that independent integer
combinations of messages can be decoded at the receivers. Hence the central
processor can recover the original messages of the transmitters by solving the
linearly independent equations. We prove that our SNC scheme achieves full
degrees of freedom (DoF) by utilizing signal alignment and physical-layer
network coding. Simulation results show that our SNC scheme outperforms the
compute-and-forward scheme in the finite SNR regime of the two-user and the
three-user cases. The performance improvement of our SNC scheme mainly comes
from efficient utilization of the signal subspaces for conveying independent
linear equations of messages to the central processor.
|
[
{
"version": "v1",
"created": "Wed, 22 Nov 2017 05:36:14 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Oct 2018 09:09:51 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Chan",
"Tse-Tin",
""
],
[
"Lok",
"Tat-Ming",
""
]
] |
new_dataset
| 0.96772 |
2011.13203
|
Malvin Gattinger
|
Hans van Ditmarsch, Malvin Gattinger, Rahim Ramezanian
|
Everyone Knows that Everyone Knows: Gossip Protocols for Super Experts
| null |
Studia Logica 2023
|
10.1007/s11225-022-10032-3
| null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A gossip protocol is a procedure for sharing secrets in a network. The basic
action in a gossip protocol is a pairwise message exchange (telephone call)
wherein the calling agents exchange all the secrets they know. An agent who
knows all secrets is an expert. The usual termination condition is that all
agents are experts. Instead, we explore protocols wherein the termination
condition is that all agents know that all agents are experts. We call such
agents super experts. We also investigate gossip protocols that are common
knowledge among the agents. Additionally, we model that agents who are super
experts do not make and do not answer calls, and that this is common knowledge.
We investigate conditions under which protocols terminate, both in the
synchronous case, where there is a global clock, and in the asynchronous case,
where there is not. We show that a commonly known protocol with engaged agents
may terminate faster than the same commonly known protocol without engaged
agents.
|
[
{
"version": "v1",
"created": "Thu, 26 Nov 2020 09:57:04 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 09:48:33 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Dec 2022 21:03:23 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"van Ditmarsch",
"Hans",
""
],
[
"Gattinger",
"Malvin",
""
],
[
"Ramezanian",
"Rahim",
""
]
] |
new_dataset
| 0.996279 |
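The following toy simulation illustrates the termination condition discussed in the abstract above: random pairwise calls continue until every agent knows that all agents are experts, i.e., all agents are super experts. It simplifies the paper's epistemic model by assuming that callers also share which agents they have already observed to be experts, and it uses uniformly random calls rather than a commonly known protocol.

```python
import random

def gossip_until_super_experts(n, seed=0):
    """Random pairwise calls until every agent knows that every agent is an expert."""
    random.seed(seed)
    secrets = [{i} for i in range(n)]          # secrets each agent knows
    seen_experts = [set() for _ in range(n)]   # agents each agent has seen to be experts
    calls = 0
    while not all(len(seen_experts[i]) == n for i in range(n)):
        a, b = random.sample(range(n), 2)      # a telephone call between two agents
        merged = secrets[a] | secrets[b]
        secrets[a], secrets[b] = set(merged), set(merged)
        witnessed = seen_experts[a] | seen_experts[b]
        witnessed |= {x for x in (a, b) if len(secrets[x]) == n}
        seen_experts[a], seen_experts[b] = set(witnessed), set(witnessed)
        calls += 1
    return calls

print(gossip_until_super_experts(6))   # number of random calls until all are super experts
```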
2102.02244
|
Cornelia Ott
|
Cornelia Ott, Sven Puchinger, Martin Bossert
|
Bounds and Genericity of Sum-Rank-Metric Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We derive simplified sphere-packing and Gilbert--Varshamov bounds for codes
in the sum-rank metric, which can be computed more efficiently than previous
ones. They give rise to asymptotic bounds that cover the asymptotic setting
that has not yet been considered in the literature: families of sum-rank-metric
codes whose block size grows in the code length. We also provide two genericity
results: we show that random linear codes achieve almost the sum-rank-metric
Gilbert--Varshamov bound with high probability. Furthermore, we derive bounds
on the probability that a random linear code attains the sum-rank-metric
Singleton bound, showing that for large enough extension fields, almost all
linear codes achieve it.
|
[
{
"version": "v1",
"created": "Wed, 3 Feb 2021 19:25:54 GMT"
},
{
"version": "v2",
"created": "Sat, 31 Jul 2021 16:08:48 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Mar 2023 11:00:22 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Ott",
"Cornelia",
""
],
[
"Puchinger",
"Sven",
""
],
[
"Bossert",
"Martin",
""
]
] |
new_dataset
| 0.971271 |
2111.03265
|
Shivam Gupta Mr
|
Shivam Gupta, Virender Ranga, Priyansh Agrawal
|
EpilNet: A Novel Approach to IoT based Epileptic Seizure Prediction and
Diagnosis System using Artificial Intelligence
|
12 Pages, 12 Figures, 2 Tables
|
ADCAIJ: Advances in Distributed Computing and Artificial
Intelligence Journal, Issue, Vol. 10 N. 4 (2021), 429-446
|
10.14201/ADCAIJ2021104429446
| null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Epilepsy is one of the most occurring neurological diseases. The main
characteristic of this disease is a frequent seizure, which is an electrical
imbalance in the brain. It is generally accompanied by shaking of body parts
and can even lead to fainting. In the past few years, many treatments have come up.
These mainly involve the use of anti-seizure drugs for controlling seizures.
But in 70% of cases, these drugs are not effective, and surgery is the only
solution when the condition worsens. Patients therefore need to keep
themselves safe while having a seizure. Wearable electroencephalogram
(EEG) devices have emerged with advances in medical science and
technology. These devices help in the analysis of brain electrical activities.
EEG helps in locating the affected cortical region. Most importantly, it can
predict a seizure in advance on-site. This has resulted in a sudden
increase in demand for effective and efficient seizure prediction and diagnosis
systems. A novel approach to epileptic seizure prediction and diagnosis system
EpilNet is proposed in the present paper. It is a one-dimensional (1D)
convolution neural network. EpilNet gives the testing accuracy of 79.13% for
five classes, leading to a significant increase of about 6-7% compared to
related works. The developed Web API helps in bringing EpilNet into practical
use. Thus, it is an integrated system for both patients and doctors. The system
will help patients prevent injury or accidents and increase the efficiency of
the treatment process by doctors in the hospitals.
|
[
{
"version": "v1",
"created": "Fri, 5 Nov 2021 05:19:46 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Gupta",
"Shivam",
""
],
[
"Ranga",
"Virender",
""
],
[
"Agrawal",
"Priyansh",
""
]
] |
new_dataset
| 0.967717 |
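Since the abstract above describes EpilNet only as a one-dimensional CNN for five-class seizure prediction, the PyTorch sketch below shows a generic 1D CNN of that kind. The layer sizes and the assumed 178-sample single-channel EEG window are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyEpilepsyNet(nn.Module):
    """Minimal 1D-CNN classifier for fixed-length EEG windows (illustrative only)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):              # x: (batch, 1, time)
        return self.classifier(self.features(x))

model = TinyEpilepsyNet()
logits = model(torch.randn(8, 1, 178))  # 178-sample windows are an assumption
print(logits.shape)                     # torch.Size([8, 5])
```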
2201.09302
|
Yajing Zheng
|
Tiejun Huang, Yajing Zheng, Zhaofei Yu, Rui Chen, Yuan Li, Ruiqin
Xiong, Lei Ma, Junwei Zhao, Siwei Dong, Lin Zhu, Jianing Li, Shanshan Jia,
Yihua Fu, Boxin Shi, Si Wu and Yonghong Tian
|
1000x Faster Camera and Machine Vision with Ordinary Devices
| null | null |
10.1016/j.eng.2022.01.012
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital cameras have a major limitation: the image and video form
inherited from film cameras prevents them from capturing the rapidly changing
photonic world. Here, we present vidar, a bit sequence array where each bit
represents whether the accumulation of photons has reached a threshold, to
record and reconstruct the scene radiance at any moment. By employing only
consumer-level CMOS sensors and integrated circuits, we have developed a vidar
camera that is 1,000x faster than conventional cameras. By treating vidar as
spike trains in biological vision, we have further developed a spiking neural
network-based machine vision system that combines the speed of the machine and
the mechanism of biological vision, achieving high-speed object detection and
tracking 1,000x faster than human vision. We demonstrate the utility of the
vidar camera and the super vision system in an assistant referee and target
pointing system. Our study is expected to fundamentally revolutionize the image
and video concepts and related industries, including photography, movies, and
visual media, and to unseal a new spiking neural network-enabled speed-free
machine vision era.
|
[
{
"version": "v1",
"created": "Sun, 23 Jan 2022 16:10:11 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Huang",
"Tiejun",
""
],
[
"Zheng",
"Yajing",
""
],
[
"Yu",
"Zhaofei",
""
],
[
"Chen",
"Rui",
""
],
[
"Li",
"Yuan",
""
],
[
"Xiong",
"Ruiqin",
""
],
[
"Ma",
"Lei",
""
],
[
"Zhao",
"Junwei",
""
],
[
"Dong",
"Siwei",
""
],
[
"Zhu",
"Lin",
""
],
[
"Li",
"Jianing",
""
],
[
"Jia",
"Shanshan",
""
],
[
"Fu",
"Yihua",
""
],
[
"Shi",
"Boxin",
""
],
[
"Wu",
"Si",
""
],
[
"Tian",
"Yonghong",
""
]
] |
new_dataset
| 0.998405 |
2202.08414
|
Nathan Jessurun
|
Nathan Jessurun, Olivia P. Dizon-Paradis, Jacob Harrison, Shajib
Ghosh, Mark M. Tehranipoor, Damon L. Woodard, Navid Asadizanjani
|
FPIC: A Novel Semantic Dataset for Optical PCB Assurance
|
Dataset is available at https://www.trust-hub.org/#/data/pcb-images ;
Submitted to ACM JETC in Feb 2022; Accepted February 2023
| null |
10.1145/3588032
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Outsourced printed circuit board (PCB) fabrication necessitates increased
hardware assurance capabilities. Several assurance techniques based on
automated optical inspection (AOI) have been proposed that leverage PCB images
acquired using digital cameras. We review state-of-the-art AOI techniques and
observe a strong, rapid trend toward machine learning (ML) solutions. These
require significant amounts of labeled ground truth data, which is lacking in
the publicly available PCB data space. We contribute the FICS PCB Image
Collection (FPIC) dataset to address this need. Additionally, we outline new
hardware security methodologies enabled by our data set.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 02:29:58 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 18:30:49 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Jessurun",
"Nathan",
""
],
[
"Dizon-Paradis",
"Olivia P.",
""
],
[
"Harrison",
"Jacob",
""
],
[
"Ghosh",
"Shajib",
""
],
[
"Tehranipoor",
"Mark M.",
""
],
[
"Woodard",
"Damon L.",
""
],
[
"Asadizanjani",
"Navid",
""
]
] |
new_dataset
| 0.99981 |
2203.03186
|
Timoth\'ee Mathieu
|
Debabrota Basu, Odalric-Ambrym Maillard, Timoth\'ee Mathieu
|
Bandits Corrupted by Nature: Lower Bounds on Regret and Robust
Optimistic Algorithm
| null | null | null | null |
cs.LG math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the corrupted bandit problem, i.e. a stochastic multi-armed bandit
problem with $k$ unknown reward distributions, which are heavy-tailed and
corrupted by a history-independent adversary or Nature. To be specific, the
reward obtained by playing an arm comes from the corresponding heavy-tailed reward
distribution with probability $1-\varepsilon \in (0.5,1]$ and an arbitrary
corruption distribution of unbounded support with probability $\varepsilon \in
[0,0.5)$.
First, we provide $\textit{a problem-dependent lower bound on the regret}$ of
any corrupted bandit algorithm. The lower bounds indicate that the corrupted
bandit problem is harder than the classical stochastic bandit problem with
sub-Gaussian or heavy-tail rewards.
Following that, we propose a novel UCB-type algorithm for corrupted bandits,
namely HubUCB, that builds on Huber's estimator for robust mean estimation.
Leveraging a novel concentration inequality of Huber's estimator, we prove that
HubUCB achieves a near-optimal regret upper bound.
Since computing Huber's estimator has quadratic complexity, we further
introduce a sequential version of Huber's estimator that exhibits linear
complexity. We leverage this sequential estimator to design SeqHubUCB that
enjoys similar regret guarantees while reducing the computational burden.
Finally, we experimentally illustrate the efficiency of HubUCB and SeqHubUCB
in solving corrupted bandits for different reward distributions and different
levels of corruptions.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 07:44:05 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 10:04:25 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Basu",
"Debabrota",
""
],
[
"Maillard",
"Odalric-Ambrym",
""
],
[
"Mathieu",
"Timothée",
""
]
] |
new_dataset
| 0.959553 |
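To illustrate the corrupted reward model and why a Huber-type robust mean (the building block of HubUCB) helps, the sketch below draws heavy-tailed rewards corrupted with probability eps and compares the empirical mean with an iteratively reweighted Huber location estimate. The corruption distribution, eps, and the Huber threshold delta are illustrative assumptions; the paper's estimator and its concentration analysis are not reproduced here.

```python
import numpy as np

def huber_mean(x, delta=1.0, iters=50):
    """Iteratively reweighted Huber location estimate (a standard robust mean)."""
    mu = np.median(x)
    for _ in range(iters):
        r = np.abs(x - mu)
        w = np.minimum(1.0, delta / np.maximum(r, 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(0)
eps, n = 0.2, 2000                           # corruption probability in [0, 0.5)
clean = rng.standard_t(df=3, size=n) + 1.0   # heavy-tailed rewards with true mean 1.0
corrupt = rng.normal(50.0, 5.0, size=n)      # arbitrary, unbounded corruption distribution
rewards = np.where(rng.random(n) < eps, corrupt, clean)

print("empirical mean:", rewards.mean())       # dragged far from 1.0 by the corruption
print("Huber estimate:", huber_mean(rewards))  # stays much closer to 1.0
```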
2205.14311
|
Yujie Qian
|
Yujie Qian, Jiang Guo, Zhengkai Tu, Zhening Li, Connor W. Coley,
Regina Barzilay
|
MolScribe: Robust Molecular Structure Recognition with Image-To-Graph
Generation
|
To be published in the Journal of Chemical Information and Modeling
| null |
10.1021/acs.jcim.2c01480
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Molecular structure recognition is the task of translating a molecular image
into its graph structure. The wide variation in drawing styles and
conventions exhibited in the chemical literature poses a significant challenge for
automating this task. In this paper, we propose MolScribe, a novel
image-to-graph generation model that explicitly predicts atoms and bonds, along
with their geometric layouts, to construct the molecular structure. Our model
flexibly incorporates symbolic chemistry constraints to recognize chirality and
expand abbreviated structures. We further develop data augmentation strategies
to enhance the model robustness against domain shifts. In experiments on both
synthetic and realistic molecular images, MolScribe significantly outperforms
previous models, achieving 76-93% accuracy on public benchmarks. Chemists can
also easily verify MolScribe's prediction, informed by its confidence
estimation and atom-level alignment with the input image. MolScribe is publicly
available through Python and web interfaces:
https://github.com/thomas0809/MolScribe.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 03:03:45 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 23:04:53 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Qian",
"Yujie",
""
],
[
"Guo",
"Jiang",
""
],
[
"Tu",
"Zhengkai",
""
],
[
"Li",
"Zhening",
""
],
[
"Coley",
"Connor W.",
""
],
[
"Barzilay",
"Regina",
""
]
] |
new_dataset
| 0.998377 |
2206.04928
|
Mohit Vaishnav
|
Mohit Vaishnav, Thomas Serre
|
GAMR: A Guided Attention Model for (visual) Reasoning
| null |
Eleventh International Conference on Learning Representations
(ICLR) 2023
| null | null |
cs.AI cs.LG cs.NE cs.SC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Humans continue to outperform modern AI systems in their ability to flexibly
parse and understand complex visual scenes. Here, we present a novel module for
visual reasoning, the Guided Attention Model for (visual) Reasoning (GAMR),
which instantiates an active vision theory -- positing that the brain solves
complex visual reasoning problems dynamically -- via sequences of attention
shifts to select and route task-relevant visual information into memory.
Experiments on an array of visual reasoning tasks and datasets demonstrate
GAMR's ability to learn visual routines in a robust and sample-efficient
manner. In addition, GAMR is shown to be capable of zero-shot generalization on
completely novel reasoning tasks. Overall, our work provides computational
support for cognitive theories that postulate the need for a critical interplay
between attention and memory to dynamically maintain and manipulate
task-relevant visual information to solve complex visual reasoning tasks.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 07:52:06 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2022 17:52:57 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Sep 2022 10:17:20 GMT"
},
{
"version": "v4",
"created": "Thu, 22 Sep 2022 11:57:12 GMT"
},
{
"version": "v5",
"created": "Tue, 21 Mar 2023 15:35:50 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Vaishnav",
"Mohit",
""
],
[
"Serre",
"Thomas",
""
]
] |
new_dataset
| 0.980485 |
2207.06726
|
Martin Knoche
|
Martin Knoche, Mohamed Elkadeem, Stefan H\"ormann, Gerhard Rigoll
|
Octuplet Loss: Make Face Recognition Robust to Image Resolution
| null | null |
10.1109/FG57933.2023.10042669
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image resolution, or in general, image quality, plays an essential role in
the performance of today's face recognition systems. To address this problem,
we propose a novel combination of the popular triplet loss to improve
robustness against image resolution via fine-tuning of existing face
recognition models. With octuplet loss, we leverage the relationship between
high-resolution images and their synthetically down-sampled variants jointly
with their identity labels. Fine-tuning several state-of-the-art approaches
with our method proves that we can significantly boost performance for
cross-resolution (high-to-low resolution) face verification on various datasets
without meaningfully exacerbating the performance on high-to-high resolution
images. Our method applied on the FaceTransformer network achieves 95.12% face
verification accuracy on the challenging XQLFW dataset while reaching 99.73% on
the LFW database. Moreover, the low-to-low face verification accuracy benefits
from our method. We release our code to allow seamless integration of the
octuplet loss into existing frameworks.
|
[
{
"version": "v1",
"created": "Thu, 14 Jul 2022 08:22:58 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 07:23:13 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Knoche",
"Martin",
""
],
[
"Elkadeem",
"Mohamed",
""
],
[
"Hörmann",
"Stefan",
""
],
[
"Rigoll",
"Gerhard",
""
]
] |
new_dataset
| 0.97983 |
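The abstract above does not spell out the eight terms of the octuplet loss, so the sketch below only illustrates the core ingredient it builds on: a triplet-style margin term between a high-resolution anchor and low-resolution embeddings of the same and of different identities. The margin value, the roll-by-one negative mining, and the toy embeddings are assumptions for illustration, not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def cross_resolution_triplet(emb_hr, emb_lr, labels, margin=0.5):
    """One illustrative cross-resolution triplet term.

    Anchor: high-resolution embedding; positive: the low-resolution embedding of
    the same identity; negative: a low-resolution embedding of another identity
    (here simply the batch rolled by one, assuming shuffled batches).
    """
    pos = F.pairwise_distance(emb_hr, emb_lr)
    neg = F.pairwise_distance(emb_hr, emb_lr.roll(1, dims=0))
    valid = (labels != labels.roll(1, dims=0)).float()   # drop accidental same-ID negatives
    return (valid * F.relu(pos - neg + margin)).mean()

# Toy usage with random "embeddings" standing in for a face recognition model.
emb_hr = F.normalize(torch.randn(32, 128), dim=1)
emb_lr = F.normalize(emb_hr + 0.1 * torch.randn(32, 128), dim=1)  # down-sampled twins
labels = torch.randint(0, 16, (32,))
print(cross_resolution_triplet(emb_hr, emb_lr, labels))
```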
2207.08892
|
Xuan Wang
|
Xuan Wang, Yizhi Zhou, Wanxin Jin
|
D3G: Learning Multi-robot Coordination from Demonstrations
| null | null | null | null |
cs.RO cs.LG cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper develops a Distributed Differentiable Dynamic Game (D3G)
framework, which enables learning multi-robot coordination from demonstrations.
We represent multi-robot coordination as a dynamic game, where the behavior of
a robot is dictated by its own dynamics and objective that also depends on
others' behavior. The coordination thus can be adapted by tuning the objective
and dynamics of each robot. The proposed D3G enables each robot to
automatically tune its individual dynamics and objectives in a distributed
manner by minimizing the mismatch between its trajectory and demonstrations.
This learning framework features a new design, including a forward-pass, where
all robots collaboratively seek Nash equilibrium of a game, and a
backward-pass, where gradients are propagated via the communication graph. We
test the D3G in simulation with two types of robots given different task
configurations. The results validate the capability of D3G for learning
multi-robot coordination from demonstrations.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 19:06:18 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 22:04:06 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Wang",
"Xuan",
""
],
[
"Zhou",
"Yizhi",
""
],
[
"Jin",
"Wanxin",
""
]
] |
new_dataset
| 0.999602 |
2207.11187
|
Leon Feng
|
Leon Feng, Jnana Senapati, Bill Liu
|
TaDaa: real time Ticket Assignment Deep learning Auto Advisor for
customer support, help desk, and issue ticketing systems
| null | null | null | null |
cs.IR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes TaDaa: Ticket Assignment Deep learning Auto Advisor,
which leverages the latest Transformer models and machine learning techniques
to quickly assign issues within an organization, such as customer support, help desk,
and similar issue ticketing systems. The project provides functionality to 1)
assign an issue to the correct group, 2) assign an issue to the best resolver,
and 3) provide the most relevant previously solved tickets to resolvers. We
leverage one ticketing system sample dataset, with over 3k+ groups and over
10k+ resolvers to obtain a 95.2% top 3 accuracy on group suggestions and a
79.0% top 5 accuracy on resolver suggestions. We hope this research will
greatly improve average issue resolution time on customer support, help desk,
and issue ticketing systems.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 18:08:34 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 19:15:04 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Feng",
"Leon",
""
],
[
"Senapati",
"Jnana",
""
],
[
"Liu",
"Bill",
""
]
] |
new_dataset
| 0.985652 |
2207.13306
|
Toru Tamaki
|
Tomoya Nitta, Tsubasa Hirakawa, Hironobu Fujiyoshi, Toru Tamaki
|
Object-ABN: Learning to Generate Sharp Attention Maps for Action
Recognition
|
9 pages
| null |
10.1587/transinf.2022EDP7138
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose an extension of the Attention Branch Network (ABN)
by using instance segmentation for generating sharper attention maps for action
recognition. Methods for visual explanation such as Grad-CAM usually generate
blurry maps which are not intuitive for humans to understand, particularly in
recognizing actions of people in videos. Our proposed method, Object-ABN,
tackles this issue by introducing a new mask loss that makes the generated
attention maps close to the instance segmentation result. Further the PC loss
and multiple attention maps are introduced to enhance the sharpness of the maps
and improve the performance of classification. Experimental results with UCF101
and SSv2 show that the maps generated by the proposed method are much clearer
qualitatively and quantitatively than those of the original ABN.
|
[
{
"version": "v1",
"created": "Wed, 27 Jul 2022 05:30:58 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Nitta",
"Tomoya",
""
],
[
"Hirakawa",
"Tsubasa",
""
],
[
"Fujiyoshi",
"Hironobu",
""
],
[
"Tamaki",
"Toru",
""
]
] |
new_dataset
| 0.997338 |
2208.04319
|
Longxiang Jiang
|
Longxiang Jiang, Liyuan Wang, Xinkun Chu, Yonghao Xiao and Hao Zhang
|
PhyGNNet: Solving spatiotemporal PDEs with Physics-informed Graph Neural
Network
|
There are some errors in the method description
| null | null | null |
cs.NE cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Solving partial differential equations (PDEs) is an important research tool
in the fields of physics, biology, and chemistry. As an approximate alternative
to numerical methods, PINN has received extensive attention and played an
important role in many fields. However, PINN uses a fully connected network as
its model, which has limited fitting ability and limited extrapolation ability
in both time and space. In this paper, we propose PhyGNNet for solving partial
differential equations on the basis of a graph neural network which consists
of encoder, processor, and decoder blocks. In particular, we divide the
computing area into regular grids, define partial differential operators on the
grids, and then construct a PDE loss for the network to optimize, building the PhyGNNet
model. Moreover, we conduct comparative experiments on the Burgers equation and the
heat equation to validate our approach; the results show that our method has
better fitting and extrapolation ability in both time and space
compared with PINN.
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 13:33:34 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 05:28:26 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Jiang",
"Longxiang",
""
],
[
"Wang",
"Liyuan",
""
],
[
"Chu",
"Xinkun",
""
],
[
"Xiao",
"Yonghao",
""
],
[
"Zhang",
"Hao",
""
]
] |
new_dataset
| 0.999566 |
2208.14160
|
Anyi Huang
|
Anyi Huang, Qian Xie, Zhoutao Wang, Dening Lu, Mingqiang Wei, Jun Wang
|
MODNet: Multi-offset Point Cloud Denoising Network Customized for
Multi-scale Patches
| null |
Computer Graphics Forum, Volume 41 (2022), Number 7
|
10.1111/cgf.14661
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The intricacy of 3D surfaces often causes cutting-edge point cloud denoising
(PCD) models to suffer surface degradation, including remnant noise and wrongly removed
geometric details. Although using multi-scale patches to encode the geometry of
a point has become the common wisdom in PCD, we find that simple aggregation of
extracted multi-scale features can not adaptively utilize the appropriate scale
information according to the geometric information around noisy points. It
leads to surface degradation, especially for points close to edges and points
on complex curved surfaces. We raise an intriguing question -- if employing
multi-scale geometric perception information to guide the network to utilize
multi-scale information, can eliminate the severe surface degradation problem?
To answer it, we propose a Multi-offset Denoising Network (MODNet) customized
for multi-scale patches. First, we extract the low-level feature of three
scales patches by patch feature encoders. Second, a multi-scale perception
module is designed to embed multi-scale geometric information for each scale
feature and regress multi-scale weights to guide a multi-offset denoising
displacement. Third, a multi-offset decoder regresses three scale offsets,
which are guided by the multi-scale weights to predict the final displacement
by weighting them adaptively. Experiments demonstrate that our method achieves
new state-of-the-art performance on both synthetic and real-scanned datasets.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 11:21:39 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 07:31:19 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Huang",
"Anyi",
""
],
[
"Xie",
"Qian",
""
],
[
"Wang",
"Zhoutao",
""
],
[
"Lu",
"Dening",
""
],
[
"Wei",
"Mingqiang",
""
],
[
"Wang",
"Jun",
""
]
] |
new_dataset
| 0.99155 |
2209.08662
|
Junheng Li
|
Junheng Li and Quan Nguyen
|
Multi-contact MPC for Dynamic Loco-manipulation on Humanoid Robots
|
6 pages, 7 figures, ACC 2023
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel method to control humanoid robot dynamic
loco-manipulation with multiple contact modes via multi-contact Model
Predictive Control (MPC) framework. The proposed framework includes a
multi-contact dynamics model capable of capturing various contact modes in
loco-manipulation, such as hand-object contact and foot-ground contacts. Our
proposed dynamics model represents the object dynamics as an external force
acting on the system, which simplifies the model and makes it feasible for
solving the MPC problem. In numerical validations, our multi-contact MPC
framework only needs contact timings of each task and desired states to give
MPC the knowledge of changes in contact modes in the prediction horizons in
loco-manipulation. The proposed framework can control the humanoid robot to
complete multi-task dynamic loco-manipulation applications such as efficiently
picking up and dropping off objects while turning and walking.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 21:47:59 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 05:13:35 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Li",
"Junheng",
""
],
[
"Nguyen",
"Quan",
""
]
] |
new_dataset
| 0.992995 |
2210.01988
|
Ziyu Wang
|
Lijing Zhou and Ziyu Wang and Hongrui Cui and Qingrui Song and Yu Yu
|
Bicoptor: Two-round Secure Three-party Non-linear Computation without
Preprocessing for Privacy-preserving Machine Learning
|
Accepted at 44th IEEE Symposium on Security and Privacy (S&P 2023)
| null |
10.1109/SP46215.2023.00074
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The overhead of non-linear functions dominates the performance of the secure
multiparty computation (MPC) based privacy-preserving machine learning (PPML).
This work introduces a family of novel secure three-party computation (3PC)
protocols, Bicoptor, which improve the efficiency of evaluating non-linear
functions. The basis of Bicoptor is a new sign determination protocol, which
relies on a clever use of the truncation protocol proposed in SecureML (S\&P
2017). Our 3PC sign determination protocol only requires two communication
rounds and does not involve any preprocessing. Such a sign determination
protocol is well-suited for computing non-linear functions in PPML, e.g., the
activation function ReLU, Maxpool, and their variants. We develop suitable
protocols for these non-linear functions, which form a family of GPU-friendly
protocols, Bicoptor. All Bicoptor protocols only require two communication
rounds without preprocessing. We evaluate Bicoptor under a 3-party LAN network
over a public cloud, and achieve more than 370,000 DReLU/ReLU or 41,000 Maxpool
(find the maximum value of nine inputs) operations per second. Under the same
settings and environment, our ReLU protocol achieves a one or even two orders
of magnitude improvement over the state-of-the-art works Falcon (PETS 2021) and
Edabits (CRYPTO 2020), respectively, without batch processing.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 02:33:53 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 14:07:45 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Zhou",
"Lijing",
""
],
[
"Wang",
"Ziyu",
""
],
[
"Cui",
"Hongrui",
""
],
[
"Song",
"Qingrui",
""
],
[
"Yu",
"Yu",
""
]
] |
new_dataset
| 0.961649 |
2210.11035
|
Jing Tan
|
Jing Tan, Xiaotong Zhao, Xintian Shi, Bin Kang, Limin Wang
|
PointTAD: Multi-Label Temporal Action Detection with Learnable Query
Points
|
NeurIPS 2022 camera ready version
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional temporal action detection (TAD) usually handles untrimmed videos
with a small number of action instances from a single label (e.g., ActivityNet,
THUMOS). However, this setting might be unrealistic as different classes of
actions often co-occur in practice. In this paper, we focus on the task of
multi-label temporal action detection that aims to localize all action
instances from a multi-label untrimmed video. Multi-label TAD is more
challenging as it requires for fine-grained class discrimination within a
single video and precise localization of the co-occurring instances. To
mitigate this issue, we extend the sparse query-based detection paradigm from
the traditional TAD and propose the multi-label TAD framework of PointTAD.
Specifically, our PointTAD introduces a small set of learnable query points to
represent the important frames of each action instance. This point-based
representation provides a flexible mechanism to localize the discriminative
frames at boundaries as well as the important frames inside the action.
Moreover, we perform the action decoding process with the Multi-level
Interactive Module to capture both point-level and instance-level action
semantics. Finally, our PointTAD employs an end-to-end trainable framework
simply based on RGB input for easy deployment. We evaluate our proposed method
on two popular benchmarks and introduce the new metric of detection-mAP for
multi-label TAD. Our model outperforms all previous methods by a large margin
under the detection-mAP metric, and also achieves promising results under the
segmentation-mAP metric. Code is available at
https://github.com/MCG-NJU/PointTAD.
|
[
{
"version": "v1",
"created": "Thu, 20 Oct 2022 06:08:03 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 04:38:38 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Mar 2023 16:03:50 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Tan",
"Jing",
""
],
[
"Zhao",
"Xiaotong",
""
],
[
"Shi",
"Xintian",
""
],
[
"Kang",
"Bin",
""
],
[
"Wang",
"Limin",
""
]
] |
new_dataset
| 0.985016 |
2210.14771
|
Charlie Budd
|
Charlie Budd, Luis C. Garcia-Peraza-Herrera, Martin Huber, Sebastien
Ourselin, Tom Vercauteren
|
Rapid and robust endoscopic content area estimation: A lean GPU-based
pipeline and curated benchmark dataset
|
Presented at AE-CAI MICCAI workshop
| null |
10.1080/21681163.2022.2156393
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Endoscopic content area refers to the informative area enclosed by the dark,
non-informative, border regions present in most endoscopic footage. The
estimation of the content area is a common task in endoscopic image processing
and computer vision pipelines. Despite the apparent simplicity of the problem,
several factors make reliable real-time estimation surprisingly challenging.
The lack of rigorous investigation into the topic combined with the lack of a
common benchmark dataset for this task has been a long-lasting issue in the
field. In this paper, we propose two variants of a lean GPU-based computational
pipeline combining edge detection and circle fitting. The two variants differ
in relying on handcrafted features and learned features, respectively, to
extract content area edge point candidates. We also present a first-of-its-kind
dataset of manually annotated and pseudo-labelled content areas across a range
of surgical indications. To encourage further developments, the curated
dataset, and an implementation of both algorithms, have been made public
(https://doi.org/10.7303/syn32148000,
https://github.com/charliebudd/torch-content-area). We compare our proposed
algorithm with a state-of-the-art U-Net-based approach and demonstrate
significant improvement in terms of both accuracy (Hausdorff distance: 6.3 px
versus 118.1 px) and computational time (Average runtime per frame: 0.13 ms
versus 11.2 ms).
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 15:10:44 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Budd",
"Charlie",
""
],
[
"Garcia-Peraza-Herrera",
"Luis C.",
""
],
[
"Huber",
"Martin",
""
],
[
"Ourselin",
"Sebastien",
""
],
[
"Vercauteren",
"Tom",
""
]
] |
new_dataset
| 0.998977 |
2211.02807
|
Chenlei Lv
|
Chenlei Lv, Weisi Lin, and Baoquan Zhao
|
KSS-ICP: Point Cloud Registration based on Kendall Shape Space
|
13 pages, 20 figures
| null |
10.1109/TIP.2023.3251021
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud registration is a popular topic which has been widely used in 3D
model reconstruction, location, and retrieval. In this paper, we propose a new
registration method, KSS-ICP, to address the rigid registration task in Kendall
shape space (KSS) with Iterative Closest Point (ICP). The KSS is a quotient
space that removes influences of translations, scales, and rotations for shape
feature-based analysis. Such influences can be regarded as the similarity
transformations that do not change the shape feature. The point cloud
representation in KSS is invariant to similarity transformations. We utilize
this property to design KSS-ICP for point cloud registration. To tackle the
difficulty of achieving the KSS representation in general, the proposed KSS-ICP
formulates a practical solution that does not require complex feature analysis,
data training, or optimization. With a simple implementation, KSS-ICP achieves
more accurate registration from point clouds. It is robust to similarity
transformation, non-uniform density, noise, and defective parts. Experiments
show that KSS-ICP has better performance than the state of the art.
|
[
{
"version": "v1",
"created": "Sat, 5 Nov 2022 04:00:53 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Lv",
"Chenlei",
""
],
[
"Lin",
"Weisi",
""
],
[
"Zhao",
"Baoquan",
""
]
] |
new_dataset
| 0.99357 |
2212.02978
|
Mia Chiquier
|
Mia Chiquier, Carl Vondrick
|
Muscles in Action
| null | null | null | null |
cs.CV q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human motion is created by, and constrained by, our muscles. We take a first
step at building computer vision methods that represent the internal muscle
activity that causes motion. We present a new dataset, Muscles in Action (MIA),
to learn to incorporate muscle activity into human motion representations. The
dataset consists of 12.5 hours of synchronized video and surface
electromyography (sEMG) data of 10 subjects performing various exercises. Using
this dataset, we learn a bidirectional representation that predicts muscle
activation from video, and conversely, reconstructs motion from muscle
activation. We evaluate our model on in-distribution subjects and exercises, as
well as on out-of-distribution subjects and exercises. We demonstrate how
advances in modeling both modalities jointly can serve as conditioning for
muscularly consistent motion generation. Putting muscles into computer vision
systems will enable richer models of virtual humans, with applications in
sports, fitness, and AR/VR.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2022 16:47:09 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 16:28:08 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Mar 2023 19:10:22 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Chiquier",
"Mia",
""
],
[
"Vondrick",
"Carl",
""
]
] |
new_dataset
| 0.999726 |
2302.06891
|
Biao Gong
|
Biao Gong, Xiaoying Xie, Yutong Feng, Yiliang Lv, Yujun Shen, Deli
Zhao
|
UKnow: A Unified Knowledge Protocol for Common-Sense Reasoning and
Vision-Language Pre-training
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a unified knowledge protocol, called UKnow, which
facilitates knowledge-based studies from the perspective of data. Particularly
focusing on visual and linguistic modalities, we categorize data knowledge into
five unit types, namely, in-image, in-text, cross-image, cross-text, and
image-text, and set up an efficient pipeline to help construct the multimodal
knowledge graph from any data collection. Thanks to the logical information
naturally contained in a knowledge graph, organizing datasets in the UKnow
format opens up more possibilities of data usage compared to the commonly used
image-text pairs. Following the UKnow protocol, we collect, from public
international news, a large-scale multimodal knowledge graph dataset that
consists of 1,388,568 nodes (with 571,791 vision-related ones) and 3,673,817
triplets. The dataset is also annotated with rich event tags, including 11
coarse labels and 9,185 fine labels. Experiments on four benchmarks demonstrate
the potential of UKnow in supporting common-sense reasoning and boosting
vision-language pre-training with a single dataset, benefiting from its unified
form of knowledge organization. Code, dataset, and models will be made publicly
available.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 08:27:42 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 06:20:10 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Mar 2023 16:33:56 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Gong",
"Biao",
""
],
[
"Xie",
"Xiaoying",
""
],
[
"Feng",
"Yutong",
""
],
[
"Lv",
"Yiliang",
""
],
[
"Shen",
"Yujun",
""
],
[
"Zhao",
"Deli",
""
]
] |
new_dataset
| 0.995671 |
2302.14115
|
Antoine Yang
|
Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi
Pont-Tuset, Ivan Laptev, Josef Sivic and Cordelia Schmid
|
Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense
Video Captioning
|
CVPR 2023 Camera-Ready; Project Webpage:
https://antoyang.github.io/vid2seq.html ; 18 pages; 6 figures
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we introduce Vid2Seq, a multi-modal single-stage dense event
captioning model pretrained on narrated videos which are readily-available at
scale. The Vid2Seq architecture augments a language model with special time
tokens, allowing it to seamlessly predict event boundaries and textual
descriptions in the same output sequence. Such a unified model requires
large-scale training data, which is not available in current annotated
datasets. We show that it is possible to leverage unlabeled narrated videos for
dense video captioning, by reformulating sentence boundaries of transcribed
speech as pseudo event boundaries, and using the transcribed speech sentences
as pseudo event captions. The resulting Vid2Seq model pretrained on the
YT-Temporal-1B dataset improves the state of the art on a variety of dense
video captioning benchmarks including YouCook2, ViTT and ActivityNet Captions.
Vid2Seq also generalizes well to the tasks of video paragraph captioning and
video clip captioning, and to few-shot settings. Our code is publicly available
at https://antoyang.github.io/vid2seq.html.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 19:53:49 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 11:01:09 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Yang",
"Antoine",
""
],
[
"Nagrani",
"Arsha",
""
],
[
"Seo",
"Paul Hongsuck",
""
],
[
"Miech",
"Antoine",
""
],
[
"Pont-Tuset",
"Jordi",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Sivic",
"Josef",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
new_dataset
| 0.999394 |
2303.01593
|
Asaf Yehudai
|
Asaf Yehudai, Matan Vetzler, Yosi Mass, Koren Lazar, Doron Cohen, Boaz
Carmeli
|
QAID: Question Answering Inspired Few-shot Intent Detection
|
ICLR paper
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intent detection with semantically similar fine-grained intents is a
challenging task. To address it, we reformulate intent detection as a
question-answering retrieval task by treating utterances and intent names as
questions and answers. To that end, we utilize a question-answering retrieval
architecture and adopt a two-stage training schema with batch contrastive
loss. In the pre-training stage, we improve query representations through
self-supervised training. Then, in the fine-tuning stage, we increase
contextualized token-level similarity scores between queries and answers from
the same intent. Our results on three few-shot intent detection benchmarks
achieve state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 21:35:15 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 14:22:00 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Yehudai",
"Asaf",
""
],
[
"Vetzler",
"Matan",
""
],
[
"Mass",
"Yosi",
""
],
[
"Lazar",
"Koren",
""
],
[
"Cohen",
"Doron",
""
],
[
"Carmeli",
"Boaz",
""
]
] |
new_dataset
| 0.984359 |
2303.03015
|
Peter Mosses
|
Peter D. Mosses
|
Using Spoofax to Support Online Code Navigation
|
Accepted for publication in Proc. Eelco Visser Commemorative
Symposium (EVCS 2023)
| null |
10.4230/OASIcs.EVCS.2023.21
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Spoofax is a language workbench. A Spoofax language specification generally
includes name resolution: the analysis of bindings between definitions and
references. When browsing code in the specified language using Spoofax, the
bindings appear as hyperlinks, supporting precise name-based code navigation.
However, Spoofax cannot be used for browsing code in online repositories.
This paper is about a toolchain that uses Spoofax to generate hyperlinked
twins of code repositories. These generated artefacts support the same precise
code navigation as Spoofax, and can be browsed online. The technique has been
prototyped on the CBS (Component-Based Semantics) specification language
developed by the PLanCompS project, but could be used on any language after
specifying its name resolution in Spoofax.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 10:37:41 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Mosses",
"Peter D.",
""
]
] |
new_dataset
| 0.998494 |
2303.09093
|
Sha Li
|
Qiusi Zhan, Sha Li, Kathryn Conger, Martha Palmer, Heng Ji, Jiawei Han
|
GLEN: General-Purpose Event Detection for Thousands of Types
|
The first two authors contributed equally. (15 pages, 11 figures)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The development of event extraction systems has been hindered by the absence
of wide-coverage, large-scale datasets. To make event extraction systems more
accessible, we build a general-purpose event detection dataset GLEN, which
covers 3,465 different event types, making it over 20x larger in ontology than
any current dataset. GLEN is created by utilizing the DWD Overlay, which
provides a mapping between Wikidata Qnodes and PropBank rolesets. This enables
us to use the abundant existing annotation for PropBank as distant supervision.
In addition, we also propose a new multi-stage event detection model
specifically designed to handle the large ontology size and partial labels in
GLEN. We show that our model exhibits superior performance (~10% F1 gain)
compared to both conventional classification baselines and newer
definition-based models. Finally, we perform error analysis and show that label
noise is still the largest challenge for improving performance.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 05:36:38 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 20:40:15 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Zhan",
"Qiusi",
""
],
[
"Li",
"Sha",
""
],
[
"Conger",
"Kathryn",
""
],
[
"Palmer",
"Martha",
""
],
[
"Ji",
"Heng",
""
],
[
"Han",
"Jiawei",
""
]
] |
new_dataset
| 0.999571 |
2303.09702
|
Anurag Murty Naredla
|
Anna Lubiw and Anurag Murty Naredla
|
The geodesic edge center of a simple polygon
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The geodesic edge center of a polygon is a point c inside the polygon that
minimizes the maximum geodesic distance from c to any edge of the polygon,
where geodesic distance is the shortest path distance inside the polygon. We
give a linear-time algorithm to find a geodesic edge center of a simple
polygon. This improves on the previous O(n log n) time algorithm by Lubiw and
Naredla [European Symposium on Algorithms, 2021]. The algorithm builds on an
algorithm to find the geodesic vertex center of a simple polygon due to
Pollack, Sharir, and Rote [Discrete & Computational Geometry, 1989] and an
improvement to linear time by Ahn, Barba, Bose, De Carufel, Korman, and Oh
[Discrete & Computational Geometry, 2016]. The geodesic edge center can easily
be found from the geodesic farthest-edge Voronoi diagram of the polygon.
Finding that Voronoi diagram in linear time is an open question, although the
geodesic nearest edge Voronoi diagram (the medial axis) can be found in linear
time. As a first step of our geodesic edge center algorithm, we give a
linear-time algorithm to find the geodesic farthest-edge Voronoi diagram
restricted to the polygon boundary.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 00:17:53 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 11:50:40 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Lubiw",
"Anna",
""
],
[
"Naredla",
"Anurag Murty",
""
]
] |
new_dataset
| 0.999004 |
2303.09730
|
Li Lyna Zhang
|
Chen Tang, Li Lyna Zhang, Huiqiang Jiang, Jiahang Xu, Ting Cao, Quanlu
Zhang, Yuqing Yang, Zhi Wang, Mao Yang
|
ElasticViT: Conflict-aware Supernet Training for Deploying Fast Vision
Transformer on Diverse Mobile Devices
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Neural Architecture Search (NAS) has shown promising performance in the
automatic design of vision transformers (ViT) exceeding 1G FLOPs. However,
designing lightweight and low-latency ViT models for diverse mobile devices
remains a big challenge. In this work, we propose ElasticViT, a two-stage NAS
approach that trains a high-quality ViT supernet over a very large search space
that supports a wide range of mobile devices, and then searches an optimal
sub-network (subnet) for direct deployment. However, prior supernet training
methods that rely on uniform sampling suffer from the gradient conflict issue:
the sampled subnets can have vastly different model sizes (e.g., 50M vs. 2G
FLOPs), leading to different optimization directions and inferior performance.
To address this challenge, we propose two novel sampling techniques:
complexity-aware sampling and performance-aware sampling. Complexity-aware
sampling limits the FLOPs difference among the subnets sampled across adjacent
training steps, while covering different-sized subnets in the search space.
Performance-aware sampling further selects subnets that have good accuracy,
which can reduce gradient conflicts and improve supernet quality. Our
discovered models, ElasticViT models, achieve top-1 accuracy from 67.2% to
80.0% on ImageNet with 60M to 800M FLOPs without extra retraining,
outperforming all prior CNNs and ViTs in terms of accuracy and latency. Our
tiny and small models are also the first ViT models that surpass
state-of-the-art CNNs with significantly lower latency on mobile devices. For
instance, ElasticViT-S1 runs 2.62x faster than EfficientNet-B0 with 0.1% higher
accuracy.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 02:19:28 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 10:11:01 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Tang",
"Chen",
""
],
[
"Zhang",
"Li Lyna",
""
],
[
"Jiang",
"Huiqiang",
""
],
[
"Xu",
"Jiahang",
""
],
[
"Cao",
"Ting",
""
],
[
"Zhang",
"Quanlu",
""
],
[
"Yang",
"Yuqing",
""
],
[
"Wang",
"Zhi",
""
],
[
"Yang",
"Mao",
""
]
] |
new_dataset
| 0.986221 |
2303.10444
|
Youshan Zhang
|
Youshan Zhang
|
Stall Number Detection of Cow Teats Key Frames
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this paper, we present a small cow stall number dataset named
CowStallNumbers, which is extracted from cow teat videos with the goal of
advancing cow stall number detection. This dataset contains 1042 training
images and 261 test images with the stall number ranging from 0 to 60. In
addition, we fine-tuned a ResNet34 model and augmented the dataset with random
crop, center crop, and random rotation. The experimental result achieves
a 92% accuracy in stall number recognition and a 40.1% IoU score in stall
number position prediction.
|
[
{
"version": "v1",
"created": "Sat, 18 Mar 2023 15:56:29 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 00:54:10 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Zhang",
"Youshan",
""
]
] |
new_dataset
| 0.999828 |
2303.10865
|
Tianyuan Liu
|
Shiyu Xu, Tianyuan Liu, Michael Wong, Dana Kuli\'c, Akansel Cosgun
|
Rotating Objects via In-Hand Pivoting using Vision, Force and Touch
|
8 pages, 7 figures, 4 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a robotic manipulation system that can pivot objects on a surface
using vision, wrist force and tactile sensing. We aim to control the rotation
of an object around the grip point of a parallel gripper by allowing rotational
slip, while maintaining a desired wrist force profile. Our approach runs an
end-effector position controller and a gripper width controller concurrently in
a closed loop. The position controller maintains a desired force using vision
and wrist force. The gripper controller uses tactile sensing to keep the grip
firm enough to prevent translational slip, but loose enough to induce
rotational slip. Our sensor-based control approach relies on matching a desired
force profile derived from object dimensions and weight, and on vision-based
monitoring of the object pose. The gripper controller uses tactile sensors to
detect and prevent translational slip by tightening the grip when needed.
Experimental results where the robot was tasked with rotating cuboid objects 90
degrees show that the multi-modal pivoting approach was able to rotate the
objects without causing lift or slip, and was more energy-efficient compared to
using a single sensor modality and to pick-and-place. While our work
demonstrated the benefit of multi-modal sensing for the pivoting task, further
work is needed to generalize our approach to any given object.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 04:55:56 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 00:43:57 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Xu",
"Shiyu",
""
],
[
"Liu",
"Tianyuan",
""
],
[
"Wong",
"Michael",
""
],
[
"Kulić",
"Dana",
""
],
[
"Cosgun",
"Akansel",
""
]
] |
new_dataset
| 0.992127 |
2303.11141
|
Hongbo Wang
|
Hongbo Wang, Weimin Xiong, Yifan Song, Dawei Zhu, Yu Xia and Sujian Li
|
DocRED-FE: A Document-Level Fine-Grained Entity And Relation Extraction
Dataset
|
Accepted by IEEE ICASSP 2023. The first two authors contribute
equally
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Joint entity and relation extraction (JERE) is one of the most important
tasks in information extraction. However, most existing works focus on
sentence-level coarse-grained JERE, which have limitations in real-world
scenarios. In this paper, we construct a large-scale document-level
fine-grained JERE dataset DocRED-FE, which improves DocRED with Fine-Grained
Entity Type. Specifically, we redesign a hierarchical entity type schema
including 11 coarse-grained types and 119 fine-grained types, and then
re-annotate DocRED manually according to this schema. Through comprehensive
experiments we find that: (1) DocRED-FE is challenging for existing JERE
models; (2) our fine-grained entity types promote relation classification. We
make DocRED-FE, along with instructions and the code for our baselines,
publicly available at
https://github.com/PKU-TANGENT/DOCRED-FE.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 14:19:58 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 09:03:14 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Wang",
"Hongbo",
""
],
[
"Xiong",
"Weimin",
""
],
[
"Song",
"Yifan",
""
],
[
"Zhu",
"Dawei",
""
],
[
"Xia",
"Yu",
""
],
[
"Li",
"Sujian",
""
]
] |
new_dataset
| 0.999676 |
2303.11364
|
Wang Yifan
|
Wei-Ting Chen, Wang Yifan, Sy-Yen Kuo, Gordon Wetzstein
|
DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction
using Neural Radiance Fields
|
including supplemental material; project page:
https://www.computationalimaging.org/publications/dehazenerf
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Neural radiance fields (NeRFs) have demonstrated state-of-the-art performance
for 3D computer vision tasks, including novel view synthesis and 3D shape
reconstruction. However, these methods fail in adverse weather conditions. To
address this challenge, we introduce DehazeNeRF as a framework that robustly
operates in hazy conditions. DehazeNeRF extends the volume rendering equation
by adding physically realistic terms that model atmospheric scattering. By
parameterizing these terms using suitable networks that match the physical
properties, we introduce effective inductive biases, which, together with the
proposed regularizations, allow DehazeNeRF to demonstrate successful multi-view
haze removal, novel view synthesis, and 3D shape reconstruction where existing
approaches fail.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 18:03:32 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Chen",
"Wei-Ting",
""
],
[
"Yifan",
"Wang",
""
],
[
"Kuo",
"Sy-Yen",
""
],
[
"Wetzstein",
"Gordon",
""
]
] |
new_dataset
| 0.958938 |
2303.11381
|
Zhengyuan Yang
|
Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab,
Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang
|
MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose MM-REACT, a system paradigm that integrates ChatGPT with a pool of
vision experts to achieve multimodal reasoning and action. In this paper, we
define and explore a comprehensive list of advanced vision tasks that are
intriguing to solve, but may exceed the capabilities of existing vision and
vision-language models. To achieve such advanced visual intelligence, MM-REACT
introduces a textual prompt design that can represent text descriptions,
textualized spatial coordinates, and aligned file names for dense visual
signals such as images and videos. MM-REACT's prompt design allows language
models to accept, associate, and process multimodal information, thereby
facilitating the synergetic combination of ChatGPT and various vision experts.
Zero-shot experiments demonstrate MM-REACT's effectiveness in addressing the
specified capabilities of interests and its wide application in different
scenarios that require advanced visual understanding. Furthermore, we discuss
and compare MM-REACT's system paradigm with an alternative approach that
extends language models for multimodal scenarios through joint finetuning.
Code, demo, video, and visualization are available at
https://multimodal-react.github.io/
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 18:31:47 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Yang",
"Zhengyuan",
""
],
[
"Li",
"Linjie",
""
],
[
"Wang",
"Jianfeng",
""
],
[
"Lin",
"Kevin",
""
],
[
"Azarnasab",
"Ehsan",
""
],
[
"Ahmed",
"Faisal",
""
],
[
"Liu",
"Zicheng",
""
],
[
"Liu",
"Ce",
""
],
[
"Zeng",
"Michael",
""
],
[
"Wang",
"Lijuan",
""
]
] |
new_dataset
| 0.998795 |
2303.11396
|
Dave Zhenyu Chen
|
Dave Zhenyu Chen, Yawar Siddiqui, Hsin-Ying Lee, Sergey Tulyakov,
Matthias Nie{\ss}ner
|
Text2Tex: Text-driven Texture Synthesis via Diffusion Models
|
Project page: https://daveredrum.github.io/Text2Tex/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Text2Tex, a novel method for generating high-quality textures for
3D meshes from the given text prompts. Our method incorporates inpainting into
a pre-trained depth-aware image diffusion model to progressively synthesize
high resolution partial textures from multiple viewpoints. To avoid
accumulating inconsistent and stretched artifacts across views, we dynamically
segment the rendered view into a generation mask, which represents the
generation status of each visible texel. This partitioned view representation
guides the depth-aware inpainting model to generate and update partial textures
for the corresponding regions. Furthermore, we propose an automatic view
sequence generation scheme to determine the next best view for updating the
partial texture. Extensive experiments demonstrate that our method
significantly outperforms the existing text-driven approaches and GAN-based
methods.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 19:02:13 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Chen",
"Dave Zhenyu",
""
],
[
"Siddiqui",
"Yawar",
""
],
[
"Lee",
"Hsin-Ying",
""
],
[
"Tulyakov",
"Sergey",
""
],
[
"Nießner",
"Matthias",
""
]
] |
new_dataset
| 0.990602 |
2303.11455
|
Toufique Ahmed Mr.
|
Kevin Jesse, Toufique Ahmed, Premkumar T. Devanbu, Emily Morgan
|
Large Language Models and Simple, Stupid Bugs
|
Accepted at International Conference on Mining Software Repositories
(MSR-2023)
| null | null | null |
cs.SE cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the advent of powerful neural language models, AI-based systems to
assist developers in coding tasks are becoming widely available; Copilot is one
such system. Copilot uses Codex, a large language model (LLM), to complete code
conditioned on a preceding "prompt". Codex, however, is trained on public
GitHub repositories, viz., on code that may include bugs and vulnerabilities.
Previous studies [1], [2] show Codex reproduces vulnerabilities seen in
training. In this study, we examine how prone Codex is to generate an
interesting bug category, single statement bugs, commonly referred to as
simple, stupid bugs or SStuBs in the MSR community. We find that Codex and
similar LLMs do help avoid some SStuBs, but produce known, verbatim SStuBs up
to 2x as often as known, verbatim correct code. We explore the consequences of
the Codex-generated SStuBs and propose avoidance strategies that suggest the
possibility of reducing the production of known, verbatim SStuBs, and
increasing the possibility of producing known, verbatim fixes.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 21:14:06 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Jesse",
"Kevin",
""
],
[
"Ahmed",
"Toufique",
""
],
[
"Devanbu",
"Premkumar T.",
""
],
[
"Morgan",
"Emily",
""
]
] |
new_dataset
| 0.979144 |
2303.11466
|
Levon Muradyan
|
Arsen Hambardzumyan and Levon Muradyan
|
On interval edge-colorings of planar graphs
| null | null | null | null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
An edge-coloring of a graph $G$ with colors $1,\ldots,t$ is called an
\emph{interval $t$-coloring} if all colors are used and the colors of edges
incident to each vertex of $G$ are distinct and form an interval of integers.
In 1990, Kamalian proved that if a graph $G$ with at least one edge has an
interval $t$-coloring, then $t\leq 2|V(G)|-3$. In 2002, Axenovich improved this
upper bound for planar graphs: if a planar graph $G$ admits an interval
$t$-coloring, then $t\leq \frac{11}{6}|V(G)|$. In the same paper Axenovich
suggested a conjecture that if a planar graph $G$ has an interval $t$-coloring,
then $t\leq \frac{3}{2}|V(G)|$. In this paper we confirm the conjecture by
showing that if a planar graph $G$ admits an interval $t$-coloring, then $t\leq
\frac{3|V(G)|-4}{2}$. We also prove that if an outerplanar graph $G$ has an
interval $t$-coloring, then $t\leq |V(G)|-1$. Moreover, all these upper bounds
are sharp.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 21:47:08 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Hambardzumyan",
"Arsen",
""
],
[
"Muradyan",
"Levon",
""
]
] |
new_dataset
| 0.98222 |
2303.11492
|
Do\u{g}analp Ergen\c{c}
|
Do\u{g}analp Ergen\c{c} and Robin Schenderlein and Mathias Fischer
|
TSNZeek: An Open-source Intrusion Detection System for IEEE 802.1
Time-sensitive Networking
| null | null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
IEEE 802.1 Time-sensitive Networking (TSN) standards are envisioned to
replace legacy network protocols in critical domains to ensure reliable and
deterministic communication over off-the-shelf Ethernet equipment. However,
they lack security countermeasures and can even impose new attack vectors that
may lead to hazardous consequences. This paper presents the first open-source
security monitoring and intrusion detection mechanism, TSNZeek, for IEEE 802.1
TSN protocols. We extend an existing monitoring tool, Zeek, with a new packet
parsing grammar to process TSN data traffic and a rule-based attack detection
engine for TSN-specific threats. We also discuss various security-related
configuration and design aspects for IEEE 802.1 TSN monitoring. Our experiments
show that TSNZeek causes only ~5% CPU overhead on top of Zeek and successfully
detects various threats in a real TSN testbed.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 23:18:08 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Ergenç",
"Doğanalp",
""
],
[
"Schenderlein",
"Robin",
""
],
[
"Fischer",
"Mathias",
""
]
] |
new_dataset
| 0.999725 |
2303.11514
|
Babar Shahzaad
|
Sarah Bradley, Albertus Alvin Janitra, Babar Shahzaad, Balsam Alkouz,
Athman Bouguettaya, and Abdallah Lakhdari
|
Service-based Trajectory Planning in Multi-Drone Skyway Networks
|
3 pages, 5 figures. This is an accepted demo paper and it is going to
appear in the Proceedings of The 21st International Conference on Pervasive
Computing and Communications (PerCom 2023)
| null | null | null |
cs.RO cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a demonstration of service-based trajectory planning for a drone
delivery system in a multi-drone skyway network. We conduct several experiments
using Crazyflie drones to collect the drone's position data, wind speed and
direction, and wind effects on voltage consumption rates. The experiments are
run for varying numbers of recharging stations, wind speeds, and wind directions
in a multi-drone skyway network. Demo: https://youtu.be/zEwqdtEmmiw
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 00:27:27 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Bradley",
"Sarah",
""
],
[
"Janitra",
"Albertus Alvin",
""
],
[
"Shahzaad",
"Babar",
""
],
[
"Alkouz",
"Balsam",
""
],
[
"Bouguettaya",
"Athman",
""
],
[
"Lakhdari",
"Abdallah",
""
]
] |
new_dataset
| 0.996506 |
2303.11551
|
Akash Gupta
|
Akash Gupta, Rohun Tripathi, Wondong Jang
|
ModEFormer: Modality-Preserving Embedding for Audio-Video
Synchronization using Transformers
|
Paper accepted at ICASSP 2023
| null | null | null |
cs.CV cs.LG cs.SD eess.AS eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lack of audio-video synchronization is a common problem during television
broadcasts and video conferencing, leading to an unsatisfactory viewing
experience. A widely accepted paradigm is to create an error detection
mechanism that identifies the cases when audio is leading or lagging. We
propose ModEFormer, which independently extracts audio and video embeddings
using modality-specific transformers. Different from the other
transformer-based approaches, ModEFormer preserves the modality of the input
streams which allows us to use a larger batch size with more negative audio
samples for contrastive learning. Further, we propose a trade-off between the
number of negative samples and number of unique samples in a batch to
significantly exceed the performance of previous methods. Experimental results
show that ModEFormer achieves state-of-the-art performance, 94.5% for LRS2 and
90.9% for LRS3. Finally, we demonstrate how ModEFormer can be used for offset
detection for test clips.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 02:37:46 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Gupta",
"Akash",
""
],
[
"Tripathi",
"Rohun",
""
],
[
"Jang",
"Wondong",
""
]
] |
new_dataset
| 0.997026 |
2303.11597
|
Alex Stivala
|
Alex Stivala
|
Geodesic cycle length distributions in fictional character networks
|
36 pages, 27 figures
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A geodesic cycle in a graph is a cycle with no shortcuts, so that the
shortest path between any two nodes in the cycle is the path along the cycle
itself. A recently published paper used random graph models to investigate the
geodesic cycle length distributions of a unique set of delusional social
networks, first examined in an earlier work, as well as some other publicly
available social networks. Here I test the hypothesis, suggested in the former
work, that fictional character networks, and in particular those from works by
a single author, might have geodesic cycle length distributions which are
extremely unlikely under random graph models, as the delusional social networks
do. The results do not show any support for this hypothesis. In addition, the
recently published work is reproduced using a method for counting geodesic
cycles exactly, rather than the approximate method used originally. The
substantive conclusions of that work are unchanged, but some differences in the
results for particular networks are described.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 05:19:29 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Stivala",
"Alex",
""
]
] |
new_dataset
| 0.994104 |
2303.11625
|
Yao Zhu
|
Yao Zhu, Yuefeng Chen, Xiaodan Li, Rong Zhang, Xiang Tian, Bolun
Zheng, Yaowu Chen
|
Information-containing Adversarial Perturbation for Combating Facial
Manipulation Systems
|
\copyright 20XX IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of deep learning technology, the facial manipulation
system has become powerful and easy to use. Such systems can modify the
attributes of the given facial images, such as hair color, gender, and age.
Malicious applications of such systems pose a serious threat to individuals'
privacy and reputation. Existing studies have proposed various approaches to
protect images against facial manipulations. Passive defense methods aim to
detect whether the face is real or fake, which works for posterior forensics
but can not prevent malicious manipulation. Initiative defense methods protect
images upfront by injecting adversarial perturbations into images to disrupt
facial manipulation systems but can not identify whether the image is fake. To
address the limitation of existing methods, we propose a novel two-tier
protection method named Information-containing Adversarial Perturbation (IAP),
which provides more comprehensive protection for facial images. We use an
encoder to map a facial image and its identity message to a cross-model
adversarial example which can disrupt multiple facial manipulation systems to
achieve initiative protection. Recovering the message in adversarial examples
with a decoder serves as passive protection, contributing to provenance tracking
and fake image detection. We introduce a feature-level correlation measurement
that is more suitable to measure the difference between the facial images than
the commonly used mean squared error. Moreover, we propose a spectral diffusion
method to spread messages to different frequency channels, thereby improving
the robustness of the message against facial manipulation. Extensive
experimental results demonstrate that our proposed IAP can recover the messages
from the adversarial examples with high average accuracy and effectively
disrupt the facial manipulation systems.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 06:48:14 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Zhu",
"Yao",
""
],
[
"Chen",
"Yuefeng",
""
],
[
"Li",
"Xiaodan",
""
],
[
"Zhang",
"Rong",
""
],
[
"Tian",
"Xiang",
""
],
[
"Zheng",
"Bolun",
""
],
[
"Chen",
"Yaowu",
""
]
] |
new_dataset
| 0.9542 |
2303.11647
|
Harsh Shrivastava
|
Shima Imani, Harsh Shrivastava
|
Are uGLAD? Time will tell!
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We frequently encounter multiple series that are temporally correlated in our
surroundings, such as EEG data to examine alterations in brain activity or
sensors to monitor body movements. Segmentation of multivariate time series
data is a technique for identifying meaningful patterns or changes in the time
series that can signal a shift in the system's behavior. However, most
segmentation algorithms have been designed primarily for univariate time
series, and their performance on multivariate data remains largely
unsatisfactory, making this a challenging problem. In this work, we introduce a
novel approach for multivariate time series segmentation using conditional
independence (CI) graphs. CI graphs are probabilistic graphical models that
represent the partial correlations between the nodes. We propose a domain
agnostic multivariate segmentation framework `$\texttt{tGLAD}$' which draws a
parallel between the CI graph nodes and the variables of the time series.
Applying a graph recovery model $\texttt{uGLAD}$ to a short interval of the
time series results in a CI graph that shows partial correlations among the
variables. We extend this idea to the entire time series
by utilizing a sliding window to create a batch of time intervals and then run
a single $\texttt{uGLAD}$ model in multitask learning mode to recover all the
CI graphs simultaneously. As a result, we obtain a corresponding temporal CI
graph representation. We then design first-order and second-order trajectory
tracking algorithms to study the evolution of these graphs across
distinct intervals. Finally, an `Allocation' algorithm is used to determine a
suitable segmentation of the temporal graph sequence. $\texttt{tGLAD}$ provides
a competitive time complexity of $O(N)$ for settings where number of variables
$D<<N$. We demonstrate successful empirical results on a Physical Activity
Monitoring data.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 07:46:28 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Imani",
"Shima",
""
],
[
"Shrivastava",
"Harsh",
""
]
] |
new_dataset
| 0.996477 |
2303.11715
|
Shaohan Huang
|
Shaohan Huang, Yi Liu, Carol Fung, Jiaxing Qi, Hailong Yang, Zhongzhi
Luan
|
LogQA: Question Answering in Unstructured Logs
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Modern systems produce a large volume of logs to record run-time status and
events. System operators use these raw logs to track a system in order to
obtain some useful information to diagnose system anomalies. One of the most
important problems in this area is to help operators find the answers to
log-based questions efficiently and user-friendly. In this work, we propose
LogQA, which aims at answering log-based questions in the form of natural
language based on large-scale unstructured log corpora. Our system presents the
answer to a question directly instead of returning a list of relevant snippets,
thus offering better user-friendliness and efficiency. LogQA represents the
first approach to solving question answering in the log domain. LogQA has two
key components: Log Retriever and Log Reader. Log Retriever aims at retrieving
relevant logs w.r.t. a given question, while Log Reader is responsible for
inferring the final answer. Given the lack of a public dataset for log question
answering, we manually labelled a QA dataset for each of three open-source log
corpora and will make them publicly available. We evaluated our proposed model
on these datasets by comparing its performance with 6 other baseline methods.
Our experimental results demonstrate that LogQA outperforms the other baseline
methods.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 10:07:17 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Huang",
"Shaohan",
""
],
[
"Liu",
"Yi",
""
],
[
"Fung",
"Carol",
""
],
[
"Qi",
"Jiaxing",
""
],
[
"Yang",
"Hailong",
""
],
[
"Luan",
"Zhongzhi",
""
]
] |
new_dataset
| 0.999253 |
2303.11801
|
Minahil Raza
|
Khaled Nakhleh, Minahil Raza, Mack Tang, Matthew Andrews, Rinu Boney,
Ilija Hadzic, Jeongran Lee, Atefeh Mohajeri, Karina Palyutina
|
SACPlanner: Real-World Collision Avoidance with a Soft Actor Critic
Local Planner and Polar State Representations
|
Accepted at 2023 IEEE International Conference on Robotics and
Automation (ICRA)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We study the training performance of ROS local planners based on
Reinforcement Learning (RL), and the trajectories they produce on real-world
robots. We show that recent enhancements to the Soft Actor Critic (SAC)
algorithm such as RAD and DrQ achieve almost perfect training after only 10000
episodes. We also observe that on real-world robots the resulting SACPlanner is
more reactive to obstacles than traditional ROS local planners such as DWA.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 12:35:12 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Nakhleh",
"Khaled",
""
],
[
"Raza",
"Minahil",
""
],
[
"Tang",
"Mack",
""
],
[
"Andrews",
"Matthew",
""
],
[
"Boney",
"Rinu",
""
],
[
"Hadzic",
"Ilija",
""
],
[
"Lee",
"Jeongran",
""
],
[
"Mohajeri",
"Atefeh",
""
],
[
"Palyutina",
"Karina",
""
]
] |
new_dataset
| 0.959731 |
2303.11846
|
Hongbin Fang
|
Qinyan Zhou, Hongbin Fang, Zhihai Bi, Jian Xu
|
Dynamic models for Planar Peristaltic Locomotion of a Metameric
Earthworm-like Robot
|
12 pages, 4 figures
| null | null | null |
cs.RO physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of versatile robots capable of traversing challenging and
irregular environments is of increasing interest in the field of robotics, and
metameric robots have been identified as a promising solution due to their
slender, deformable bodies. Inspired by the effective locomotion of earthworms,
earthworm-like robots capable of both rectilinear and planar locomotion have
been designed and prototyped. While much research has focused on developing
kinematic models to describe the planar locomotion of earthworm-like robots,
the authors argue that the development of dynamic models is critical to
improving the accuracy and efficiency of these robots. A comprehensive analysis
of the dynamics of a metameric earthworm-like robot capable of planar motion is
presented in this work. The model takes into account the complex interactions
between the robot's deformable body and the forces acting on it and draws on
the methods previously used to develop mathematical models of snake-like
robots. The proposed model represents a significant advancement in the field of
metameric robotics and has the potential to enhance the performance of
earthworm-like robots in a variety of challenging environments, such as
underground pipes and tunnels, and serves as a foundation for future research
into the dynamics of soft-bodied robots.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 13:43:37 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Zhou",
"Qinyan",
""
],
[
"Fang",
"Hongbin",
""
],
[
"Bi",
"Zhihai",
""
],
[
"Xu",
"Jian",
""
]
] |
new_dataset
| 0.989851 |
2303.11860
|
Nathan Leroux
|
Nathan Leroux, Jan Finkbeiner, Emre Neftci
|
Online Transformers with Spiking Neurons for Fast Prosthetic Hand
Control
|
Preprint of 9 pages, 4 figures
| null | null | null |
cs.NE cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers are state-of-the-art networks for most sequence processing
tasks. However, the self-attention mechanism often used in Transformers
requires large time windows for each computation step and thus makes them less
suitable for online signal processing compared to Recurrent Neural Networks
(RNNs). In this paper, instead of the self-attention mechanism, we use a
sliding window attention mechanism. We show that this mechanism is more
efficient for continuous signals with finite-range dependencies between input
and target, and that we can use it to process sequences element-by-element,
thus making it compatible with online processing. We test our model on a finger
position regression dataset (NinaproDB8) with Surface Electromyographic (sEMG)
signals measured on the forearm skin to estimate muscle activities. Our
approach sets the new state-of-the-art in terms of accuracy on this dataset
while requiring only very short time windows of 3.5 ms at each inference step.
Moreover, we increase the sparsity of the network using Leaky-Integrate and
Fire (LIF) units, a bio-inspired neuron model that activates sparsely in time
solely when crossing a threshold. We thus reduce the number of synaptic
operations up to a factor of $\times5.3$ without loss of accuracy. Our results
hold great promises for accurate and fast online processing of sEMG signals for
smooth prosthetic hand control and is a step towards Transformers and Spiking
Neural Networks (SNNs) co-integration for energy efficient temporal signal
processing.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 13:59:35 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Leroux",
"Nathan",
""
],
[
"Finkbeiner",
"Jan",
""
],
[
"Neftci",
"Emre",
""
]
] |
new_dataset
| 0.999045 |
2303.11938
|
Yu-Jhe Li
|
Yu-Jhe Li, Kris Kitani
|
3D-CLFusion: Fast Text-to-3D Rendering with Contrastive Latent Diffusion
|
15 pages. Non-CMU authors are currently hidden due to an internal
legal review in progress of their company
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We tackle the task of text-to-3D creation with pre-trained latent-based NeRFs
(NeRFs that generate 3D objects given input latent code). Recent works such as
DreamFusion and Magic3D have shown great success in generating 3D content using
NeRFs and text prompts, but the current approach of optimizing a NeRF for every
text prompt is 1) extremely time-consuming and 2) often leads to low-resolution
outputs. To address these challenges, we propose a novel method named
3D-CLFusion which leverages the pre-trained latent-based NeRFs and performs
fast 3D content creation in less than a minute. In particular, we introduce a
latent diffusion prior network for learning the w latent from the input CLIP
text/image embeddings. This pipeline allows us to produce the w latent without
further optimization during inference and the pre-trained NeRF is able to
perform multi-view high-resolution 3D synthesis based on the latent. We note
that the novelty of our model lies in introducing contrastive learning while
training the diffusion prior, which enables the generation of a valid
view-invariant latent code. We demonstrate through experiments the
effectiveness of our proposed view-invariant diffusion process for fast
text-to-3D creation, e.g., 100 times faster than DreamFusion. We note that our
model is able to serve as the role of a plug-and-play tool for text-to-3D with
pre-trained NeRFs.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 15:38:26 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Li",
"Yu-Jhe",
""
],
[
"Kitani",
"Kris",
""
]
] |
new_dataset
| 0.98162 |
2303.12044
|
Abdullatif Baba
|
Abdullatif Baba
|
Flying robots for a smarter life
| null | null |
10.2139/ssrn.4349634
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Innovative ideas are continuously emerging to produce better living conditions
in which essential human needs are fulfilled under ideal scenarios, leading us
to propose modern strategies that shape the future of smart cities. In
this context, flying robots are increasingly exploited in many fields to
improve the quality of our life. This paper illustrates new designs of flying
robots that could be used to perform a variety of advanced missions like
investigating the state of high-power lines and manipulating cabling
maintenance procedures when failures are detected, evaluating the state of the
outer edge of sidewalks to color their partially or wholly erased parts, and
spraying pesticides to trees or crops that are affected by different diseases.
Creating such smart devices demands developing many other partial designs
relying on AI-based algorithms, computer vision techniques, and embedded
systems. A variety of techniques that we have recently developed in this field
are presented here.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 18:51:02 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Baba",
"Abdullatif",
""
]
] |
new_dataset
| 0.999493 |
2303.12050
|
Colton Stearns
|
Colton Stearns and Jiateng Liu and Davis Rempe and Despoina
Paschalidou and Jeong Joon Park and Sebastien Mascha and Leonidas J. Guibas
|
CurveCloudNet: Processing Point Clouds with 1D Structure
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Modern depth sensors such as LiDAR operate by sweeping laser-beams across the
scene, resulting in a point cloud with notable 1D curve-like structures. In
this work, we introduce a new point cloud processing scheme and backbone,
called CurveCloudNet, which takes advantage of the curve-like structure
inherent to these sensors. While existing backbones discard the rich 1D
traversal patterns and rely on Euclidean operations, CurveCloudNet
parameterizes the point cloud as a collection of polylines (dubbed a "curve
cloud"), establishing a local surface-aware ordering on the points. Our method
applies curve-specific operations to process the curve cloud, including a
symmetric 1D convolution, a ball grouping for merging points along curves, and
an efficient 1D farthest point sampling algorithm on curves. By combining these
curve operations with existing point-based operations, CurveCloudNet is an
efficient, scalable, and accurate backbone with low GPU memory requirements.
Evaluations on the ShapeNet, Kortx, Audi Driving, and nuScenes datasets
demonstrate that CurveCloudNet outperforms both point-based and sparse-voxel
backbones in various segmentation settings, notably scaling better to large
scenes than point-based alternatives while exhibiting better single object
performance than sparse-voxel alternatives.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 17:41:36 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Stearns",
"Colton",
""
],
[
"Liu",
"Jiateng",
""
],
[
"Rempe",
"Davis",
""
],
[
"Paschalidou",
"Despoina",
""
],
[
"Park",
"Jeong Joon",
""
],
[
"Mascha",
"Sebastien",
""
],
[
"Guibas",
"Leonidas J.",
""
]
] |
new_dataset
| 0.999716 |
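
The CurveCloudNet record above mentions an efficient 1D farthest point sampling
algorithm on curves. The snippet below is a minimal sketch of that idea only:
it assumes a single polyline whose points are already ordered along the curve,
uses cumulative arc length as the 1D geodesic distance, and is not the authors'
implementation (their code and exact operations are not given in this record).

```python
import numpy as np

def curve_fps(points, n_samples):
    """Greedy farthest point sampling along one polyline (illustrative sketch).

    `points` is an (N, 3) array whose rows are already ordered along the curve,
    so the geodesic distance between two points reduces to the difference of
    their cumulative arc lengths -- which is what makes the 1D variant cheap.
    """
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    selected = [0]                                      # start from the first point
    min_dist = np.abs(arclen - arclen[0])               # arc-length distance to the selected set
    for _ in range(n_samples - 1):
        nxt = int(np.argmax(min_dist))                  # farthest remaining point
        selected.append(nxt)
        min_dist = np.minimum(min_dist, np.abs(arclen - arclen[nxt]))
    return points[np.array(selected)]

# Example: subsample a helix-like scan line of 200 points down to 8
t = np.linspace(0, 4 * np.pi, 200)
curve = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
print(curve_fps(curve, 8).shape)   # (8, 3)
```

Because the distance lives on the 1D arc-length axis, each update is a simple
absolute difference rather than a pairwise Euclidean search, which is the
scalability benefit the abstract alludes to.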
2303.12076
|
Lerrel Pinto
|
Irmak Guzey, Ben Evans, Soumith Chintala, Lerrel Pinto
|
Dexterity from Touch: Self-Supervised Pre-Training of Tactile
Representations with Robotic Play
|
Video and code can be accessed here:
https://tactile-dexterity.github.io/
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Teaching dexterity to multi-fingered robots has been a longstanding challenge
in robotics. Most prominent work in this area focuses on learning controllers
or policies that either operate on visual observations or state estimates
derived from vision. However, such methods perform poorly on fine-grained
manipulation tasks that require reasoning about contact forces or about objects
occluded by the hand itself. In this work, we present T-Dex, a new approach for
tactile-based dexterity that operates in two phases. In the first phase, we
collect 2.5 hours of play data, which is used to train self-supervised tactile
encoders. This is necessary to bring high-dimensional tactile readings to a
lower-dimensional embedding. In the second phase, given a handful of
demonstrations for a dexterous task, we learn non-parametric policies that
combine the tactile observations with visual ones. Across five challenging
dexterous tasks, we show that our tactile-based dexterity models outperform
purely vision and torque-based models by an average of 1.7X. Finally, we
provide a detailed analysis on factors critical to T-Dex including the
importance of play data, architectures, and representation learning.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 17:59:20 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Guzey",
"Irmak",
""
],
[
"Evans",
"Ben",
""
],
[
"Chintala",
"Soumith",
""
],
[
"Pinto",
"Lerrel",
""
]
] |
new_dataset
| 0.99549 |
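
The T-Dex record above describes non-parametric policies that combine tactile
and visual observations with a handful of demonstrations. The sketch below
shows one plausible nearest-neighbor reading of that idea; the embedding
dimensions, the weighting factor `w`, and the function names are illustrative
assumptions, not details taken from the paper.

```python
import numpy as np

def nearest_neighbor_action(obs_tactile, obs_visual,
                            demo_tactile, demo_visual, demo_actions, w=0.5):
    """Return the action of the closest demonstration frame (illustrative sketch).

    Tactile and visual features are assumed to be pre-computed embeddings
    (e.g., from a self-supervised tactile encoder and a visual encoder);
    `w` balances the two distances and is an assumed hyper-parameter.
    """
    d_tac = np.linalg.norm(demo_tactile - obs_tactile, axis=1)
    d_vis = np.linalg.norm(demo_visual - obs_visual, axis=1)
    idx = int(np.argmin(w * d_tac + (1.0 - w) * d_vis))
    return demo_actions[idx]

# Example with random embeddings: 100 demo frames, 32-d tactile, 64-d visual, 7-DoF actions
rng = np.random.default_rng(0)
demo_t, demo_v = rng.normal(size=(100, 32)), rng.normal(size=(100, 64))
actions = rng.normal(size=(100, 7))
a = nearest_neighbor_action(rng.normal(size=32), rng.normal(size=64), demo_t, demo_v, actions)
print(a.shape)   # (7,)
```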
2303.12080
|
Fangyun Wei
|
Ronglai Zuo, Fangyun Wei, Brian Mak
|
Natural Language-Assisted Sign Language Recognition
|
Accepted by CVPR 2023. Codes are available at
https://github.com/FangyunWei/SLRT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sign languages are visual languages which convey information by signers'
handshape, facial expression, body movement, and so forth. Due to the inherent
restriction of combinations of these visual ingredients, there exist a
significant number of visually indistinguishable signs (VISigns) in sign
languages, which limits the recognition capacity of vision neural networks. To
mitigate the problem, we propose the Natural Language-Assisted Sign Language
Recognition (NLA-SLR) framework, which exploits semantic information contained
in glosses (sign labels). First, for VISigns with similar semantic meanings, we
propose language-aware label smoothing by generating soft labels for each
training sign whose smoothing weights are computed from the normalized semantic
similarities among the glosses to ease training. Second, for VISigns with
distinct semantic meanings, we present an inter-modality mixup technique which
blends vision and gloss features to further maximize the separability of
different signs under the supervision of blended labels. Besides, we also
introduce a novel backbone, video-keypoint network, which not only models both
RGB videos and human body keypoints but also derives knowledge from sign videos
of different temporal receptive fields. Empirically, our method achieves
state-of-the-art performance on three widely-adopted benchmarks: MSASL, WLASL,
and NMFs-CSL. Codes are available at https://github.com/FangyunWei/SLRT.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 17:59:57 GMT"
}
] | 2023-03-22T00:00:00 |
[
[
"Zuo",
"Ronglai",
""
],
[
"Wei",
"Fangyun",
""
],
[
"Mak",
"Brian",
""
]
] |
new_dataset
| 0.995025 |
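
The NLA-SLR record above describes language-aware label smoothing, where soft
labels are built from normalized semantic similarities among glosses. The
snippet below is a minimal sketch of that recipe under stated assumptions: the
gloss embeddings are taken as given, cosine similarity with a softmax is used
for normalization, and `alpha`/`temperature` are illustrative values rather
than the paper's settings.

```python
import numpy as np

def language_aware_soft_labels(gloss_embeddings, target_idx, alpha=0.2, temperature=0.1):
    """Build a soft label vector for one training sign (illustrative sketch).

    The ground-truth class keeps most of the probability mass; the remaining
    `alpha` is spread over the other classes in proportion to the normalized
    semantic similarity between their gloss embeddings and the target gloss.
    """
    emb = gloss_embeddings / np.linalg.norm(gloss_embeddings, axis=1, keepdims=True)
    sims = emb @ emb[target_idx]              # cosine similarity to the target gloss
    sims[target_idx] = -np.inf                # exclude the target from the smoothing weights
    weights = np.exp(sims / temperature)
    weights /= weights.sum()                  # normalized semantic similarities
    soft = alpha * weights
    soft[target_idx] = 1.0 - alpha            # most mass stays on the true label
    return soft

# Example: 4 classes with random gloss embeddings; the result sums to 1.0
rng = np.random.default_rng(0)
labels = language_aware_soft_labels(rng.normal(size=(4, 16)), target_idx=2)
print(labels, labels.sum())
```

Glosses with similar meanings receive larger smoothing weights, which is how
the scheme eases training on visually indistinguishable signs.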
2202.07437
|
Yaping Zhao
|
Yaping Zhao
|
Mathematical Cookbook for Snapshot Compressive Imaging
|
15 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The author intends to provide you with a beautiful, elegant, user-friendly
cookbook for mathematics in Snapshot Compressive Imaging (SCI). Currently, the
cookbook is composed of introduction, conventional optimization, and deep
equilibrium models. The latest releases are strongly recommended! For any other
questions, suggestions, or comments, feel free to email the author.
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 01:24:36 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Apr 2022 10:25:27 GMT"
},
{
"version": "v3",
"created": "Sun, 19 Mar 2023 13:11:59 GMT"
}
] | 2023-03-21T00:00:00 |
[
[
"Zhao",
"Yaping",
""
]
] |
new_dataset
| 0.998691 |