id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2303.09841
|
Andreas Lohrer
|
Andreas Lohrer, Darpan Malik and Peer Kröger
|
GADFormer: An Attention-based Model for Group Anomaly Detection on
Trajectories
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Group Anomaly Detection (GAD) reveals anomalous behavior among groups
consisting of multiple member instances which, considered individually, are
not necessarily anomalous. This task is of major importance across multiple
disciplines, in which sequences such as trajectories can also be considered a
group. However, as the number and heterogeneity of group members increase,
actual abnormal groups become harder to detect, especially in unsupervised or
semi-supervised settings. Recurrent Neural Networks are well-established deep
sequence models, but recent works have shown that their performance can
degrade with increasing sequence length. Hence, in this paper we introduce
GADFormer, a GAD-specific BERT architecture capable of performing attention-based
Group Anomaly Detection on trajectories in unsupervised and semi-supervised
settings. We show formally and experimentally how trajectory outlier detection
can be realized as an attention-based Group Anomaly Detection problem.
Furthermore, we introduce a Block Attention-anomaly Score (BAS) to improve the
interpretability of transformer encoder blocks for GAD. In addition,
synthetic trajectory generation allows us to optimize the training for
domain-specific GAD. In extensive experiments we investigate the robustness of
our approach to trajectory noise and novelties, compared against GRU, on
synthetic and real-world datasets.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 08:49:09 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Lohrer",
"Andreas",
""
],
[
"Malik",
"Darpan",
""
],
[
"Kröger",
"Peer",
""
]
] |
new_dataset
| 0.971262 |
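A minimal PyTorch sketch of the general idea in the record above: scoring a whole trajectory (a group of points) with a transformer encoder. The layer sizes, mean pooling, and sigmoid scoring head are illustrative assumptions, not the GADFormer architecture or its BAS score.

```python
# Sketch: scoring trajectories with a transformer encoder.
# Hyperparameters and the scoring head are illustrative assumptions.
import torch
import torch.nn as nn

class TrajectoryAnomalyScorer(nn.Module):
    def __init__(self, point_dim=2, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(point_dim, d_model)          # per-point embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)                   # group-level score

    def forward(self, traj):                                # traj: (B, T, point_dim)
        h = self.encoder(self.embed(traj))                  # (B, T, d_model)
        return torch.sigmoid(self.head(h.mean(dim=1)))      # (B, 1) anomaly score

scorer = TrajectoryAnomalyScorer()
scores = scorer(torch.randn(8, 50, 2))   # 8 trajectories of 50 2-D points
print(scores.shape)                      # torch.Size([8, 1])
```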
2303.09861
|
Zhanchi Wang
|
Zhanchi Wang and Nikolaos M. Freris
|
Bioinspired Soft Spiral Robots for Versatile Grasping and Manipulation
|
14 pages, 8 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Across various species and different scales, certain organisms use their
appendages to grasp objects not through clamping but through wrapping. This
pattern of movement is found in octopus tentacles, elephant trunks, and
chameleon prehensile tails, demonstrating great versatility in grasping a wide
range of objects of various sizes and weights, as well as dynamically manipulating
them in 3D space. We observed that the structures of these appendages
follow a common pattern - a logarithmic spiral - which is especially
challenging for existing robot designs to reproduce. This paper reports the
design, fabrication, and operation of a class of cable-driven soft robots that
morphologically replicate spiral-shaped wrapping. This amounts to substantially
curling in length while actively controlling the curling direction as enabled
by two principles: a) the parametric design based on the logarithmic spiral
makes it possible to tightly pack to grasp objects that vary in size by more
than two orders of magnitude and up to 260 times self-weight and b) asymmetric
cable forces allow the swift control of the curling direction for conducting
object manipulation. We demonstrate the ability to dynamically operate objects
at a sub-second level by exploiting passive compliance. We believe that our
study constitutes a step towards engineered systems that wrap to grasp and
manipulate, and further sheds some insights into understanding the efficacy of
biological spiral-shaped appendages.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 09:59:30 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Wang",
"Zhanchi",
""
],
[
"Freris",
"Nikolaos M.",
""
]
] |
new_dataset
| 0.976254 |
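The abstract above builds on the logarithmic spiral r = a * exp(b * theta). Below is a short NumPy sketch of that curve and its defining self-similarity; the constants a and b are arbitrary illustrative choices.

```python
# Sketch: sampling a planar logarithmic spiral r = a * exp(b * theta),
# the curve family the paper identifies in wrapping appendages.
import numpy as np

a, b = 1.0, 0.2                        # scale and growth rate (illustrative)
theta = np.linspace(0, 6 * np.pi, 500)
r = a * np.exp(b * theta)
x, y = r * np.cos(theta), r * np.sin(theta)

# Self-similarity: rotating by phi rescales the spiral by exp(b * phi),
# which is why such a shape can wrap objects across very different sizes.
phi = np.pi / 2
assert np.allclose(a * np.exp(b * (theta + phi)), r * np.exp(b * phi))
```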
2303.09956
|
Yangfan Zhou
|
Yang-Fan Zhou, Kai-Lang Yao, Wu-Jun Li
|
GNNFormer: A Graph-based Framework for Cytopathology Report Generation
|
12 pages, 6 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cytopathology report generation is a necessary step for the standardized
examination of pathology images. However, manually writing detailed reports
imposes a heavy workload on pathologists. To improve efficiency, some existing
works have studied automatic generation of cytopathology reports, mainly by
applying image caption generation frameworks with visual encoders originally
proposed for natural images. A common weakness of these works is that they do
not explicitly model the structural information among cells, which is a key
feature of pathology images and provides significant information for making
diagnoses. In this paper, we propose a novel graph-based framework called
GNNFormer, which seamlessly integrates graph neural network (GNN) and
Transformer into the same framework, for cytopathology report generation. To
the best of our knowledge, GNNFormer is the first report generation method that
explicitly models the structural information among cells in pathology images.
It also effectively fuses structural information among cells, fine-grained
morphology features of cells and background features to generate high-quality
reports. Experimental results on the NMI-WSI dataset show that GNNFormer can
outperform other state-of-the-art baselines.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 13:25:29 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Zhou",
"Yang-Fan",
""
],
[
"Yao",
"Kai-Lang",
""
],
[
"Li",
"Wu-Jun",
""
]
] |
new_dataset
| 0.99167 |
2303.09957
|
Norman Meuschke
|
Norman Meuschke, Apurva Jagdale, Timo Spinde, Jelena Mitrović, Bela
Gipp
|
A Benchmark of PDF Information Extraction Tools using a Multi-Task and
Multi-Domain Evaluation Framework for Academic Documents
|
iConference 2023
| null |
10.1007/978-3-031-28032-0_31
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Extracting information from academic PDF documents is crucial for numerous
indexing, retrieval, and analysis use cases. Choosing the best tool to extract
specific content elements is difficult because many, technically diverse tools
are available, but recent performance benchmarks are rare. Moreover, such
benchmarks typically cover only a few content elements like header metadata or
bibliographic references and use smaller datasets from specific academic
disciplines. We provide a large and diverse evaluation framework that supports
more extraction tasks than most related datasets. Our framework builds upon
DocBank, a multi-domain dataset of 1.5M annotated content elements extracted
from 500K pages of research papers on arXiv. Using the new framework, we
benchmark ten freely available tools in extracting document metadata,
bibliographic references, tables, and other content elements from academic PDF
documents. GROBID achieves the best metadata and reference extraction results,
followed by CERMINE and Science Parse. For table extraction, Adobe Extract
outperforms other tools, even though the performance is much lower than for
other content elements. All tools struggle to extract lists, footers, and
equations. We conclude that more research on improving and combining tools is
necessary to achieve satisfactory extraction quality for most content elements.
Evaluation datasets and frameworks like the one we present support this line of
research. We make our data and code publicly available to contribute toward
this goal.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 13:26:33 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Meuschke",
"Norman",
""
],
[
"Jagdale",
"Apurva",
""
],
[
"Spinde",
"Timo",
""
],
[
"Mitrović",
"Jelena",
""
],
[
"Gipp",
"Bela",
""
]
] |
new_dataset
| 0.998826 |
2303.10007
|
Diab Abueidda
|
Asha Viswanath, Diab W Abueidda, Mohamad Modrek, Kamran A Khan, Seid
Koric, Rashid K. Abu Al-Rub
|
Gyroid-like metamaterials: Topology optimization and Deep Learning
| null | null | null | null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
Triply periodic minimal surface (TPMS) metamaterials characterized by
mathematically-controlled topologies exhibit better mechanical properties
compared to uniform structures. The unit cell topology of such metamaterials
can be further optimized to improve a desired mechanical property for a
specific application. However, such inverse design involves multiple costly 3D
finite element analyses in topology optimization and hence has not been
attempted. Data-driven models have recently gained popularity as surrogate
models in the geometrical design of metamaterials. Gyroid-like unit cells are
designed using a novel voxel algorithm, a homogenization-based topology
optimization, and a Heaviside filter to attain optimized densities of 0-1
configuration. A small set of optimization runs is used as input-output pairs
for supervised learning of the topology optimization process by a 3D CNN model. These models
could then be used to instantaneously predict the optimized unit cell geometry
for any topology parameters, thus alleviating the need to run any topology
optimization for future design. The high accuracy of the model was demonstrated
by a low mean squared error and a high Dice coefficient. This
accelerated design of 3D metamaterials opens up the possibility of tackling
computationally costly design problems involving complex metamaterial geometries,
multi-objective properties, or multi-scale applications.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 14:30:26 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Viswanath",
"Asha",
""
],
[
"Abueidda",
"Diab W",
""
],
[
"Modrek",
"Mohamad",
""
],
[
"Khan",
"Kamran A",
""
],
[
"Koric",
"Seid",
""
],
[
"Al-Rub",
"Rashid K. Abu",
""
]
] |
new_dataset
| 0.999122 |
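The gyroid-like unit cells above are commonly approximated by the implicit level set sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = t. Here is a NumPy sketch of voxelizing one unit cell into a 0-1 density field; the resolution and threshold are illustrative, and this is the textbook approximation rather than the paper's own voxel algorithm.

```python
# Sketch: voxelizing a gyroid-like TPMS unit cell from its standard
# implicit approximation.
import numpy as np

n = 64                                           # voxels per edge of the unit cell
g = np.linspace(0, 2 * np.pi, n)
x, y, z = np.meshgrid(g, g, g, indexing="ij")

# Level set: sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = t
t = 0.0
field = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)
solid = field > t                                # 0-1 voxel density configuration

print(solid.mean())                              # solid volume fraction (~0.5 at t=0)
```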
2303.10056
|
Can Qin
|
Can Qin, Ning Yu, Chen Xing, Shu Zhang, Zeyuan Chen, Stefano Ermon,
Yun Fu, Caiming Xiong, Ran Xu
|
GlueGen: Plug and Play Multi-modal Encoders for X-to-image Generation
|
26 pages, 23 figures
| null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-image (T2I) models based on diffusion processes have achieved
remarkable success in controllable image generation using user-provided
captions. However, the tight coupling between the text encoder and the image
decoder in current T2I models makes either component challenging to replace or
upgrade. Such changes often require massive fine-tuning or even training from
scratch, at prohibitive expense. To address this problem, we propose GlueGen, which
applies a newly proposed GlueNet model to align features from single-modal or
multi-modal encoders with the latent space of an existing T2I model. The
approach introduces a new training objective that leverages parallel corpora to
align the representation spaces of different encoders. Empirical results show
that GlueNet can be trained efficiently and enables various capabilities beyond
previous state-of-the-art models: 1) multilingual language models such as
XLM-Roberta can be aligned with existing T2I models, allowing for the
generation of high-quality images from captions beyond English; 2) GlueNet can
align multi-modal encoders such as AudioCLIP with the Stable Diffusion model,
enabling sound-to-image generation; 3) it can also upgrade the current text
encoder of the latent diffusion model for challenging case generation. By
aligning various feature representations, GlueNet allows for flexible
and efficient integration of new functionality into existing T2I models and
sheds light on X-to-image (X2I) generation.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 15:37:07 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Qin",
"Can",
""
],
[
"Yu",
"Ning",
""
],
[
"Xing",
"Chen",
""
],
[
"Zhang",
"Shu",
""
],
[
"Chen",
"Zeyuan",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Fu",
"Yun",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Xu",
"Ran",
""
]
] |
new_dataset
| 0.950749 |
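A toy sketch of the alignment idea described above: train a small "glue" network so that features from a new encoder match a frozen T2I text encoder's space on parallel data. The random tensors stand in for real encoder outputs, and the plain MSE objective is an assumption, not the actual GlueNet training loss.

```python
# Sketch: aligning a new encoder's feature space to a frozen target space.
import torch
import torch.nn as nn

d_src, d_tgt = 512, 768                     # illustrative feature dimensions
glue = nn.Sequential(nn.Linear(d_src, 1024), nn.GELU(), nn.Linear(1024, d_tgt))
opt = torch.optim.Adam(glue.parameters(), lr=1e-4)

for _ in range(100):                        # one step per parallel batch
    src_feat = torch.randn(32, d_src)       # new encoder output (placeholder)
    tgt_feat = torch.randn(32, d_tgt)       # frozen T2I encoder output (placeholder)
    loss = nn.functional.mse_loss(glue(src_feat), tgt_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()
```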
2303.10133
|
Senthil Hariharan Arul
|
Senthil Hariharan Arul, Jong Jin Park, Dinesh Manocha
|
DS-MPEPC: Safe and Deadlock-Avoiding Robot Navigation in Cluttered
Dynamic Scenes
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an algorithm for safe robot navigation in complex dynamic
environments using a variant of model predictive equilibrium point control. We
use an optimization formulation to navigate robots gracefully in dynamic
environments by optimizing over a trajectory cost function at each timestep. We
present a novel trajectory cost formulation that significantly reduces
conservative and deadlock-prone behaviors and generates smooth trajectories. In
particular, we propose a new collision probability function that effectively
captures the risk associated with a given configuration and the time to avoid
collisions based on the velocity direction. Moreover, we propose a terminal
state cost based on the expected time-to-goal and time-to-collision values that
helps in avoiding trajectories that could result in deadlock. We evaluate our
cost formulation in multiple simulated and real-world scenarios, including
narrow corridors with dynamic obstacles, and observe significantly improved
navigation behavior and reduced deadlocks as compared to prior methods.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 17:22:06 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Arul",
"Senthil Hariharan",
""
],
[
"Park",
"Jong Jin",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.99485 |
2303.10145
|
Sudarshan Ambasamudram Rajagopalan
|
Kitty Varghese, Sudarshan Rajagopalan, Mohit Lamba, Kaushik Mitra
|
Spectrum-inspired Low-light Image Translation for Saliency Detection
|
Presented at The Indian Conference on Computer Vision, Graphics and
Image Processing (ICVGIP) 2022
| null |
10.1145/3571600.3571634
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Saliency detection methods are central to several real-world applications
such as robot navigation and satellite imagery. However, the performance of
existing methods deteriorates under low-light conditions because training
datasets mostly comprise well-lit images. One possible solution is to
collect a new dataset for low-light conditions. This involves pixel-level
annotation, which is not only tedious and time-consuming but also infeasible
if a huge training corpus is required. We propose a technique that performs
classical band-pass filtering in the Fourier space to transform well-lit images
to low-light images and use them as a proxy for real low-light images. Unlike
popular deep learning approaches which require learning thousands of parameters
and enormous amounts of training data, the proposed transformation is fast,
simple, and easy to extend to other tasks such as low-light depth estimation.
Our experiments show that the state-of-the-art saliency detection and depth
estimation networks trained on our proxy low-light images perform significantly
better on real low-light images than networks trained using existing
strategies.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 17:30:42 GMT"
}
] | 2023-03-20T00:00:00 |
[
[
"Varghese",
"Kitty",
""
],
[
"Rajagopalan",
"Sudarshan",
""
],
[
"Lamba",
"Mohit",
""
],
[
"Mitra",
"Kaushik",
""
]
] |
new_dataset
| 0.998694 |
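A sketch of the kind of Fourier-space filtering the abstract above describes, assuming a simple low-frequency attenuation to darken a well-lit image into a low-light proxy; the actual filter design is the paper's contribution and will differ.

```python
# Sketch: a Fourier-space filter that darkens a well-lit image into a
# low-light proxy. The attenuation profile is an illustrative assumption.
import numpy as np

def lowlight_proxy(img, gain=0.2, cutoff=0.05):
    """img: 2-D grayscale array in [0, 1]; returns a darkened proxy image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    dist = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)   # normalized frequency radius
    mask = np.where(dist < cutoff, gain, 1.0)       # attenuate low-frequency bands
    out = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    return np.clip(out, 0.0, 1.0)

dark = lowlight_proxy(np.random.rand(128, 128))     # darker mean, detail preserved
```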
1812.04408
|
Johan F. Hoorn
|
Johan F. Hoorn
|
Theory of Robot Communication: I. The Medium is the Communication
Partner
|
Published as Hoorn, J. F. (2020a). Theory of robot communication: I.
The medium is the communication partner. International Journal of Humanoid
Robotics, 17(6), 2050026. doi: 10.1142/S0219843620500267
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
When people use electronic media for their communication, Computer-Mediated
Communication (CMC) theories describe the social and communicative aspects of
people's interpersonal transactions. When people interact via a
remote-controlled robot, many of the CMC theses hold. Yet, what if people
communicate with a conversation robot that is (partly) autonomous? Do the same
theories apply? This paper discusses CMC theories in confrontation with
observations and research data gained from human-robot communication. As a
result, I argue for an addition to CMC theorizing when the robot as a medium
itself becomes the communication partner. In view of the rise of social robots
in coming years, I define the theoretical precepts of a possible next step in
CMC, which I elaborate in a second paper.
|
[
{
"version": "v1",
"created": "Mon, 10 Dec 2018 05:18:14 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 12:40:28 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Hoorn",
"Johan F.",
""
]
] |
new_dataset
| 0.993671 |
2005.03564
|
Sujit Gujar Dr
|
Shoeb Siddiqui, Varul Srivastava, Raj Maheshwari, Sujit Gujar
|
QuickSync: A Quickly Synchronizing PoS-Based Blockchain Protocol
| null | null | null | null |
cs.CR cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To implement a blockchain, we need a blockchain protocol for all the nodes to
follow. To design a blockchain protocol, we need a block publisher selection
mechanism and a chain selection rule. In Proof-of-Stake (PoS) based blockchain
protocols, block publisher selection mechanism selects the node to publish the
next block based on the relative stake held by the node. However, PoS
protocols, such as Ouroboros v1, may face vulnerability to fully adaptive
corruptions.
In this paper, we propose a novel PoS-based blockchain protocol, QuickSync,
to achieve security against fully adaptive corruptions while improving on
performance. We propose a metric called block power, a value defined for each
block, derived from the output of the verifiable random function based on the
digital signature of the block publisher. With this metric, we compute chain
power, the sum of block powers of all the blocks comprising the chain, for all
the valid chains. These metrics are a function of the block publisher's stake
to enable the PoS aspect of the protocol. The chain selection rule selects the
chain with the highest chain power as the one to extend. This chain selection
rule hence determines the selected block publisher of the previous block. When
we use metrics to define the chain selection rule, it may lead to
vulnerabilities against Sybil attacks. QuickSync uses a Sybil attack resistant
function implemented using histogram matching. We prove that QuickSync
satisfies common prefix, chain growth, and chain quality properties and hence
it is secure. We also show that it is resilient to different types of
adversarial attack strategies. Our analysis demonstrates that QuickSync
performs better than Bitcoin by an order of magnitude on both transactions per
second and time to finality, and better than Ouroboros v1 by a factor of three
on time to finality.
|
[
{
"version": "v1",
"created": "Thu, 7 May 2020 15:53:00 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Jun 2020 13:19:46 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Jan 2021 04:47:03 GMT"
},
{
"version": "v4",
"created": "Thu, 16 Mar 2023 07:28:49 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Siddiqui",
"Shoeb",
""
],
[
"Srivastava",
"Varul",
""
],
[
"Maheshwari",
"Raj",
""
],
[
"Gujar",
"Sujit",
""
]
] |
new_dataset
| 0.999112 |
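A toy sketch of the chain selection rule described above: each block carries a power derived from a VRF output, chain power is the sum of block powers, and the highest-power chain is extended. The direct stake scaling below is a stand-in; QuickSync derives block power via a Sybil-attack-resistant histogram-matching construction.

```python
# Sketch of chain selection by maximum chain power.
import hashlib

def block_power(vrf_output: bytes, stake: float) -> float:
    # Toy mapping of a VRF output to [0, 1), scaled by stake; QuickSync uses
    # histogram matching instead of this direct scaling.
    h = int.from_bytes(hashlib.sha256(vrf_output).digest()[:8], "big")
    return stake * (h / 2**64)

def chain_power(chain):                        # chain: list of (vrf_output, stake)
    return sum(block_power(v, s) for v, s in chain)

def select_chain(chains):
    return max(chains, key=chain_power)        # extend the highest-power chain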
2101.02390
|
Junjie Huang
|
Junjie Huang, Huawei Shen, Liang Hou, Xueqi Cheng
|
SDGNN: Learning Node Representation for Signed Directed Networks
|
Accepted and to appear at AAAI2021
| null | null | null |
cs.SI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network embedding is aimed at mapping nodes in a network into low-dimensional
vector representations. Graph Neural Networks (GNNs) have received widespread
attention and led to state-of-the-art performance in learning node
representations. However, most GNNs only work in unsigned networks, where only
positive links exist. It is not trivial to transfer these models to signed
directed networks, which are widely observed in the real world yet less
studied. In this paper, we first review two fundamental sociological theories
(i.e., status theory and balance theory) and conduct empirical studies on
real-world datasets to analyze the social mechanism in signed directed
networks. Guided by related sociological theories, we propose a novel Signed
Directed Graph Neural Networks model named SDGNN to learn node embeddings for
signed directed networks. The proposed model simultaneously reconstructs link
signs, link directions, and signed directed triangles. We validate our model's
effectiveness on five real-world datasets, which are commonly used as the
benchmark for signed network embedding. Experiments demonstrate that the proposed
model outperforms existing models, including feature-based methods, network
embedding methods, and several GNN methods.
|
[
{
"version": "v1",
"created": "Thu, 7 Jan 2021 06:15:07 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Mar 2021 02:23:07 GMT"
},
{
"version": "v3",
"created": "Sat, 27 Mar 2021 11:45:02 GMT"
},
{
"version": "v4",
"created": "Thu, 16 Mar 2023 09:01:55 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Huang",
"Junjie",
""
],
[
"Shen",
"Huawei",
""
],
[
"Hou",
"Liang",
""
],
[
"Cheng",
"Xueqi",
""
]
] |
new_dataset
| 0.998882 |
2102.06867
|
Changxing Ding
|
Shengcong Chen, Changxing Ding, Minfeng Liu, Jun Cheng, and Dacheng
Tao
|
CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation
|
Accepted Version to IEEE Transactions on Image Processing
| null |
10.1109/TIP.2023.3237013
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nucleus segmentation is a challenging task due to the crowded distribution
and blurry boundaries of nuclei. Recent approaches represent nuclei by means of
polygons to differentiate between touching and overlapping nuclei and have
accordingly achieved promising performance. Each polygon is represented by a
set of centroid-to-boundary distances, which are in turn predicted by features
of the centroid pixel for a single nucleus. However, using the centroid pixel
alone does not provide sufficient contextual information for robust prediction
and thus degrades the segmentation accuracy. To handle this problem, we propose
a Context-aware Polygon Proposal Network (CPP-Net) for nucleus segmentation.
First, we sample a point set rather than one single pixel within each cell for
distance prediction. This strategy substantially enhances contextual
information and thereby improves the robustness of the prediction. Second, we
propose a Confidence-based Weighting Module, which adaptively fuses the
predictions from the sampled point set. Third, we introduce a novel Shape-Aware
Perceptual (SAP) loss that constrains the shape of the predicted polygons.
Here, the SAP loss is based on an additional network that is pre-trained by
means of mapping the centroid probability map and the pixel-to-boundary
distance maps to a different nucleus representation. Extensive experiments
justify the effectiveness of each component in the proposed CPP-Net. Finally,
CPP-Net is found to achieve state-of-the-art performance on three publicly
available databases, namely DSB2018, BBBC06, and PanNuke. Code of this paper is
available at https://github.com/csccsccsccsc/cpp-net.
|
[
{
"version": "v1",
"created": "Sat, 13 Feb 2021 05:59:52 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 14:25:47 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Chen",
"Shengcong",
""
],
[
"Ding",
"Changxing",
""
],
[
"Liu",
"Minfeng",
""
],
[
"Cheng",
"Jun",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.999273 |
2109.09223
|
Eric Cañas
|
Eric Canas, Alba M. G. Garcia, Anais Garrell and Cecilio Angulo
|
Initial Test of "BabyRobot" Behaviour on a Teleoperated Toy
Substitution: Improving the Motor Skills of Toddlers
| null |
Proceedings of the 2022 ACM/IEEE International Conference on
Human-Robot Interaction (HRI'22). IEEE Press, 708-712
|
10.5555/3523760.3523860
| null |
cs.RO cs.CV cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This article introduces "Baby Robot", a robot aiming to improve the motor skills
of babies and toddlers. The authors developed a car-like toy that moves
autonomously using reinforcement learning and computer vision techniques. The
robot's behaviour is to escape from a target baby that has been previously
recognized, or at least detected, while avoiding obstacles, so that the
safety of the baby is not compromised. A myriad of commercial toys with a
similar mobility-improvement purpose are on the market; however, none of them
opts for intelligent autonomous movement, as they perform simple, repetitive
trajectories at best. Two crawling toys -- one representing "Baby Robot" --
were tested in a real environment against regular toys in order to check how
they improved the toddlers' mobility. These real-life experiments were conducted
with our proposed robot in a kindergarten, where a group of children interacted
with the toys. Significant improvements in the motion skills of participants
were detected.
|
[
{
"version": "v1",
"created": "Sun, 19 Sep 2021 21:00:44 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 12:35:46 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Mar 2023 08:28:00 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Canas",
"Eric",
""
],
[
"Garcia",
"Alba M. G.",
""
],
[
"Garrell",
"Anais",
""
],
[
"Angulo",
"Cecilio",
""
]
] |
new_dataset
| 0.998489 |
2111.12513
|
André Silva
|
André Silva, Matias Martinez, Benjamin Danglot, Davide Ginelli,
Martin Monperrus
|
FLACOCO: Fault Localization for Java based on Industry-grade Coverage
|
11 pages, 4 figures, code available
https://github.com/SpoonLabs/flacoco
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Fault localization is an essential step in the debugging process.
Spectrum-Based Fault Localization (SBFL) is a popular family of fault
localization techniques that utilizes code coverage to predict suspicious lines of code. In
this paper, we present FLACOCO, a new fault localization tool for Java. The key
novelty of FLACOCO is that it is built on top of one of the most used and most
reliable coverage libraries for Java, JaCoCo. FLACOCO is made available through
a well-designed command-line interface and Java API and supports all Java
versions. We validate FLACOCO on two use-cases from the automatic program
repair domain by reproducing previous scientific experiments. We find it is
capable of effectively replacing the state-of-the-art FL library. Overall, we
hope that FLACOCO will help research in fault localization as well as industry
adoption thanks to being founded on industry-grade code coverage. An
introductory video is available at https://youtu.be/RFRyvQuwRYA
|
[
{
"version": "v1",
"created": "Wed, 24 Nov 2021 14:19:09 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 15:52:03 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Silva",
"André",
""
],
[
"Martinez",
"Matias",
""
],
[
"Danglot",
"Benjamin",
""
],
[
"Ginelli",
"Davide",
""
],
[
"Monperrus",
"Martin",
""
]
] |
new_dataset
| 0.99495 |
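FLACOCO computes suspiciousness from coverage spectra. As a concrete instance of what an SBFL formula does with such spectra, here is the classic Ochiai score in Python; this is the textbook formula, not FLACOCO's API.

```python
# Sketch: Ochiai suspiciousness of a code line from coverage counts.
import math

def ochiai(ef, ep, nf):
    """ef/ep: failing/passing tests covering the line; nf: failing tests not covering it."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# A line covered by 3 of 4 failing tests and 1 passing test:
print(round(ochiai(ef=3, ep=1, nf=1), 3))   # 0.75
```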
2201.08656
|
Gianmarco Ottavi
|
Gianmarco Ottavi, Angelo Garofalo, Giuseppe Tagliavini, Francesco
Conti, Alfio Di Mauro, Luca Benini, Davide Rossi
|
Dustin: A 16-Cores Parallel Ultra-Low-Power Cluster with 2b-to-32b Fully
Flexible Bit-Precision and Vector Lockstep Execution Mode
|
13 pages, 17 figures, 2 tables, Journal
| null |
10.1109/TCSI.2023.3254810
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computationally intensive algorithms such as Deep Neural Networks (DNNs) are
becoming killer applications for edge devices. Porting heavily data-parallel
algorithms on resource-constrained and battery-powered devices poses several
challenges related to memory footprint, computational throughput, and energy
efficiency. Low-bitwidth and mixed-precision arithmetic have been proven to be
valid strategies for tackling these problems. We present Dustin, a fully
programmable compute cluster integrating 16 RISC-V cores capable of 2- to
32-bit arithmetic and all possible mixed-precision permutations. In addition to
a conventional Multiple-Instruction Multiple-Data (MIMD) processing paradigm,
Dustin introduces a Vector Lockstep Execution Mode (VLEM) to minimize power
consumption in highly data-parallel kernels. In VLEM, a single leader core
fetches instructions and broadcasts them to the 15 follower cores. Clock gating
Instruction Fetch (IF) stages and private caches of the follower cores leads to
38% power reduction with minimal performance overhead (<3%). The cluster,
implemented in 65 nm CMOS technology, achieves a peak performance of 58 GOPS
and a peak efficiency of 1.15 TOPS/W.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 11:59:37 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 09:24:28 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Ottavi",
"Gianmarco",
""
],
[
"Garofalo",
"Angelo",
""
],
[
"Tagliavini",
"Giuseppe",
""
],
[
"Conti",
"Francesco",
""
],
[
"Di Mauro",
"Alfio",
""
],
[
"Benini",
"Luca",
""
],
[
"Rossi",
"Davide",
""
]
] |
new_dataset
| 0.994086 |
2202.00307
|
Qiujie Dong
|
Qiujie Dong, Zixiong Wang, Manyi Li, Junjie Gao, Shuangmin Chen,
Zhenyu Shu, Shiqing Xin, Changhe Tu, Wenping Wang
|
Laplacian2Mesh: Laplacian-Based Mesh Understanding
|
Accepted by IEEE Transactions on Visualization and Computer Graphics
(TVCG)
| null |
10.1109/TVCG.2023.3259044
| null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Geometric deep learning has sparked a rising interest in computer graphics to
perform shape understanding tasks, such as shape classification and semantic
segmentation. When the input is a polygonal surface, one has to cope with the
irregular mesh structure. Motivated by geometric spectral theory, we
introduce Laplacian2Mesh, a novel and flexible convolutional neural network
(CNN) framework for coping with irregular triangle meshes (vertices may have
any valence). By mapping the input mesh surface to the multi-dimensional
Laplace-Beltrami space, Laplacian2Mesh enables one to perform shape analysis
tasks directly using the mature CNNs, without the need to deal with the
irregular connectivity of the mesh structure. We further define a mesh pooling
operation such that the receptive field of the network can be expanded while
retaining the original vertex set as well as the connections between them.
Besides, we introduce a channel-wise self-attention block to learn the
individual importance of feature ingredients. Laplacian2Mesh not only decouples
the geometry from the irregular connectivity of the mesh structure but also
better captures the global features that are central to shape classification
and segmentation. Extensive tests on various datasets demonstrate the
effectiveness and efficiency of Laplacian2Mesh, particularly in terms of its
robustness to noise across various learning tasks.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 10:10:13 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 10:57:44 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Dong",
"Qiujie",
""
],
[
"Wang",
"Zixiong",
""
],
[
"Li",
"Manyi",
""
],
[
"Gao",
"Junjie",
""
],
[
"Chen",
"Shuangmin",
""
],
[
"Shu",
"Zhenyu",
""
],
[
"Xin",
"Shiqing",
""
],
[
"Tu",
"Changhe",
""
],
[
"Wang",
"Wenping",
""
]
] |
new_dataset
| 0.999477 |
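A NumPy sketch of the core mapping above: projecting a per-vertex signal onto the low-frequency eigenvectors of a mesh Laplacian yields a regular spectral representation that a standard CNN can consume. A uniform graph Laplacian on a toy mesh stands in for the cotangent Laplace-Beltrami operator used on real triangle meshes.

```python
# Sketch: spectral projection of per-vertex features via a graph Laplacian.
import numpy as np

def spectral_coords(adj, signal, k=16):
    """adj: (V, V) 0/1 adjacency; signal: (V, d) per-vertex features."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                          # combinatorial graph Laplacian
    _, vecs = np.linalg.eigh(lap)            # eigenvectors, ascending eigenvalue
    basis = vecs[:, :k]                      # low-frequency spectral basis
    return basis.T @ signal                  # (k, d) spectral coefficients

adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # toy triangle mesh
coeffs = spectral_coords(adj, np.eye(3), k=2)
```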
2202.08192
|
Zitong Yu
|
Zitong Yu, Ajian Liu, Chenxu Zhao, Kevin H. M. Cheng, Xu Cheng,
Guoying Zhao
|
Flexible-Modal Face Anti-Spoofing: A Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face anti-spoofing (FAS) plays a vital role in securing face recognition
systems from presentation attacks. Benefiting from maturing camera sensors,
single-modal (RGB) and multi-modal (e.g., RGB+Depth) FAS has been applied in
various scenarios with different configurations of sensors/modalities. Existing
single- and multi-modal FAS methods usually separately train and deploy models
for each possible modality scenario, which might be redundant and inefficient.
Can we train a unified model, and flexibly deploy it under various modality
scenarios? In this paper, we establish the first flexible-modal FAS benchmark
with the principle `train one for all'. To be specific, with trained
multi-modal (RGB+Depth+IR) FAS models, both intra- and cross-dataset tests
are conducted on four flexible-modal sub-protocols (RGB, RGB+Depth, RGB+IR, and
RGB+Depth+IR). We also investigate prevalent deep models and feature fusion
strategies for flexible-modal FAS. We hope this new benchmark will facilitate
future research on multi-modal FAS. The protocols and codes are
available at https://github.com/ZitongYu/Flex-Modal-FAS.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 16:55:39 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 16:04:15 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Mar 2023 00:52:59 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Yu",
"Zitong",
""
],
[
"Liu",
"Ajian",
""
],
[
"Zhao",
"Chenxu",
""
],
[
"Cheng",
"Kevin H. M.",
""
],
[
"Cheng",
"Xu",
""
],
[
"Zhao",
"Guoying",
""
]
] |
new_dataset
| 0.991184 |
2203.02606
|
Lucrezia Grassi
|
Lucrezia Grassi, Carmine Tommaso Recchiuto, Antonio Sgorbissa
|
Sustainable Cloud Services for Verbal Interaction with Embodied Agents
|
24 pages, 11 figures, associated video on YouTube:
https://youtu.be/hgsFGDvvIww
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents the design and the implementation of a cloud system for
knowledge-based autonomous interaction devised for Social Robots and other
conversational agents. The system is particularly convenient for low-cost
robots and devices: it can be used as a stand-alone dialogue system or as an
integration to provide "background" dialogue capabilities to any preexisting
Natural Language Processing ability that the robot may already have as part of
its basic skills. By connecting to the cloud, developers are provided with a
sustainable solution to manage verbal interaction through a network connection,
with about 3,000 topics of conversation ready for "chit-chatting" and a library
of pre-cooked plans that only needs to be grounded into the robot's physical
capabilities. The system is structured as a set of REST API endpoints so that
it can be easily expanded by adding new APIs to improve the capabilities of the
clients connected to the cloud. Another key feature of the system is that it
has been designed to make the development of its clients straightforward: in
this way, multiple robots and devices can be easily endowed with the capability
of autonomously interacting with the user, understanding when to perform
specific actions, and exploiting all the information provided by cloud
services. The article outlines and discusses the results of the experiments
performed to assess the system's performance in terms of response time, paving
the way for its use both for research and market solutions. Links to
repositories with clients for ROS and popular robots such as Pepper and NAO are
available on request.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 23:18:46 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 06:49:25 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Nov 2022 16:47:53 GMT"
},
{
"version": "v4",
"created": "Wed, 15 Mar 2023 23:38:40 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Grassi",
"Lucrezia",
""
],
[
"Recchiuto",
"Carmine Tommaso",
""
],
[
"Sgorbissa",
"Antonio",
""
]
] |
new_dataset
| 0.960629 |
2205.14375
|
Pranav Jeevan P
|
Pranav Jeevan, Kavitha Viswanathan, Anandu A S, Amit Sethi
|
WaveMix: A Resource-efficient Neural Network for Image Analysis
|
20 pages, 5 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose WaveMix -- a novel neural architecture for computer vision that is
resource-efficient yet generalizable and scalable. WaveMix networks achieve
comparable or better accuracy than the state-of-the-art convolutional neural
networks, vision transformers, and token mixers for several tasks, establishing
new benchmarks for segmentation on Cityscapes; and for classification on
Places-365, five EMNIST datasets, and iNAT-mini. Remarkably, WaveMix
architectures require fewer parameters to achieve these benchmarks compared to
the previous state-of-the-art. Moreover, when controlled for the number of
parameters, WaveMix requires less GPU RAM, which translates to savings in
time, cost, and energy. To achieve these gains we used multi-level
two-dimensional discrete wavelet transform (2D-DWT) in WaveMix blocks, which
has the following advantages: (1) It reorganizes spatial information based on
three strong image priors -- scale-invariance, shift-invariance, and sparseness
of edges, (2) in a lossless manner without adding parameters, (3) while also
reducing the spatial sizes of feature maps, which reduces the memory and time
required for forward and backward passes, and (4) expanding the receptive field
faster than convolutions do. The whole architecture is a stack of self-similar
and resolution-preserving WaveMix blocks, which allows architectural
flexibility for various tasks and levels of resource availability. Our code and
trained models are publicly available.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 09:08:50 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Jun 2022 17:09:58 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Jan 2023 00:05:12 GMT"
},
{
"version": "v4",
"created": "Wed, 15 Mar 2023 22:37:45 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Jeevan",
"Pranav",
""
],
[
"Viswanathan",
"Kavitha",
""
],
[
"S",
"Anandu A",
""
],
[
"Sethi",
"Amit",
""
]
] |
new_dataset
| 0.981448 |
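The WaveMix building block above rests on the 2-D discrete wavelet transform. Below is a PyWavelets sketch showing the properties the abstract claims: one DWT level halves each spatial dimension, adds no parameters, and is lossless (invertible).

```python
# Sketch: one level of 2-D DWT on a single-channel feature map.
import numpy as np
import pywt

fmap = np.random.rand(64, 64)                     # one feature-map channel
cA, (cH, cV, cD) = pywt.dwt2(fmap, "haar")        # approximation + 3 detail bands
print(cA.shape)                                   # (32, 32): spatial size halved

# Invertibility means no information is discarded by the transform itself:
rec = pywt.idwt2((cA, (cH, cV, cD)), "haar")
assert np.allclose(rec, fmap)
```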
2210.17174
|
Athanasios Xygkis
|
Marcos K. Aguilera, Naama Ben-David, Rachid Guerraoui, Antoine Murat,
Athanasios Xygkis and Igor Zablotchi
|
uBFT: Microsecond-scale BFT using Disaggregated Memory [Extended
Version]
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We propose uBFT, the first State Machine Replication (SMR) system to achieve
microsecond-scale latency in data centers, while using only $2f{+}1$ replicas
to tolerate $f$ Byzantine failures. The Byzantine Fault Tolerance (BFT)
provided by uBFT is essential as pure crashes appear to be a mere illusion with
real-life systems reportedly failing in many unexpected ways. uBFT relies on a
small non-tailored trusted computing base -- disaggregated memory -- and
consumes a practically bounded amount of memory. uBFT is based on a novel
abstraction called Consistent Tail Broadcast, which we use to prevent
equivocation while bounding memory. We implement uBFT using RDMA-based
disaggregated memory and obtain an end-to-end latency of as little as 10us.
This is at least 50$\times$ faster than MinBFT, a state-of-the-art $2f{+}1$
BFT SMR based on Intel's SGX. We use uBFT to replicate two KV-stores (Memcached
and Redis), as well as a financial order matching engine (Liquibook). These
applications have low latency (up to 20us) and become Byzantine tolerant with
as little as 10us more. The price for uBFT is a small amount of reliable
disaggregated memory (less than 1 MiB), which in our prototype consists of a
small number of memory servers connected through RDMA and replicated for fault
tolerance.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 09:38:04 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Nov 2022 11:29:19 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Dec 2022 10:38:32 GMT"
},
{
"version": "v4",
"created": "Thu, 16 Mar 2023 13:30:51 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Aguilera",
"Marcos K.",
""
],
[
"Ben-David",
"Naama",
""
],
[
"Guerraoui",
"Rachid",
""
],
[
"Murat",
"Antoine",
""
],
[
"Xygkis",
"Athanasios",
""
],
[
"Zablotchi",
"Igor",
""
]
] |
new_dataset
| 0.979456 |
2211.07157
|
Ruihan Xu
|
Ruihan Xu, Haokui Zhang, Wenze Hu, Shiliang Zhang, Xiaoyu Wang
|
ParCNetV2: Oversized Kernel with Enhanced Attention
|
16 pages, 10 figures. Source code is available at
https://github.com/XuRuihan/ParCNetV2
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Transformers have shown great potential in various computer vision tasks. By
borrowing design concepts from transformers, many studies revolutionized CNNs
and showed remarkable results. This paper falls in this line of studies.
Specifically, we propose a new convolutional neural network, ParCNetV2, that
extends position-aware circular convolution (ParCNet) with oversized
convolutions and bifurcate gate units to enhance attention. The oversized
convolution employs a kernel with twice the input size to model long-range
dependencies through a global receptive field. Simultaneously, it achieves
implicit positional encoding by removing the shift-invariant property from
convolution kernels, i.e., the effective kernels at different spatial locations
are different when the kernel size is twice as large as the input size. The
bifurcate gate unit implements an attention mechanism similar to self-attention
in transformers. It is applied through element-wise multiplication of the two
branches, one serves as feature transformation while the other serves as
attention weights. Additionally, we introduce a uniform local-global
convolution block to unify the design of the early and late stage convolution
blocks. Extensive experiments demonstrate the superiority of our method over
other convolutional neural networks and hybrid models that combine CNNs and
transformers. Code will be released.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 07:22:55 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 08:57:52 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Mar 2023 02:38:06 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Xu",
"Ruihan",
""
],
[
"Zhang",
"Haokui",
""
],
[
"Hu",
"Wenze",
""
],
[
"Zhang",
"Shiliang",
""
],
[
"Wang",
"Xiaoyu",
""
]
] |
new_dataset
| 0.968952 |
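A 1-D toy sketch of the oversized-kernel idea above, assuming circular padding and a kernel as long as the input: every output position then sees the entire sequence, giving a global receptive field. The 2-D, twice-input-size kernels in ParCNetV2 follow the same principle.

```python
# Sketch: circular convolution with a kernel spanning the whole input.
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 16)                       # (batch, channels, length)
w = torch.randn(1, 1, 16)                       # kernel as long as the input

x_pad = F.pad(x, (15, 0), mode="circular")      # circular padding, length 31
y = F.conv1d(x_pad, w)                          # (1, 1, 16): global receptive field
print(y.shape)
```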
2211.12254
|
Ashkan Mirzaei
|
Ashkan Mirzaei, Tristan Aumentado-Armstrong, Konstantinos G. Derpanis,
Jonathan Kelly, Marcus A. Brubaker, Igor Gilitschenski, Alex Levinshtein
|
SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural
Radiance Fields
|
Project Page: https://spinnerf3d.github.io
|
The IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR) 2023
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Radiance Fields (NeRFs) have emerged as a popular approach for novel
view synthesis. While NeRFs are quickly being adapted for a wider set of
applications, intuitively editing NeRF scenes is still an open challenge. One
important editing task is the removal of unwanted objects from a 3D scene, such
that the replaced region is visually plausible and consistent with its context.
We refer to this task as 3D inpainting. In 3D, solutions must be both
consistent across multiple views and geometrically valid. In this paper, we
propose a novel 3D inpainting method that addresses these challenges. Given a
small set of posed images and sparse annotations in a single input image, our
framework first rapidly obtains a 3D segmentation mask for a target object.
Using the mask, a perceptual optimization-based approach is then introduced that
leverages learned 2D image inpainters, distilling their information into 3D
space, while ensuring view consistency. We also address the lack of a diverse
benchmark for evaluating 3D scene inpainting methods by introducing a dataset
comprised of challenging real-world scenes. In particular, our dataset contains
views of the same scene with and without a target object, enabling more
principled benchmarking of the 3D inpainting task. We first demonstrate the
superiority of our approach on multiview segmentation, compared to NeRF-based
methods and 2D segmentation approaches. We then evaluate on the task of 3D
inpainting, establishing state-of-the-art performance against other NeRF
manipulation algorithms, as well as a strong 2D image inpainter baseline.
Project Page: https://spinnerf3d.github.io
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 13:14:50 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 21:11:11 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Mirzaei",
"Ashkan",
""
],
[
"Aumentado-Armstrong",
"Tristan",
""
],
[
"Derpanis",
"Konstantinos G.",
""
],
[
"Kelly",
"Jonathan",
""
],
[
"Brubaker",
"Marcus A.",
""
],
[
"Gilitschenski",
"Igor",
""
],
[
"Levinshtein",
"Alex",
""
]
] |
new_dataset
| 0.995555 |
2212.04821
|
Roei Herzig
|
Roei Herzig, Ofir Abramovich, Elad Ben-Avraham, Assaf Arbelle, Leonid
Karlinsky, Ariel Shamir, Trevor Darrell, Amir Globerson
|
PromptonomyViT: Multi-Task Prompt Learning Improves Video Transformers
using Synthetic Scene Data
|
Tech report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Action recognition models have achieved impressive results by incorporating
scene-level annotations, such as objects, their relations, 3D structure, and
more. However, obtaining annotations of scene structure for videos requires a
significant amount of effort to gather and annotate, making these methods
expensive to train. In contrast, synthetic datasets generated by graphics
engines provide powerful alternatives for generating scene-level annotations
across multiple tasks. In this work, we propose an approach to leverage
synthetic scene data for improving video understanding. We present a multi-task
prompt learning approach for video transformers, where a shared video
transformer backbone is enhanced by a small set of specialized parameters for
each task. Specifically, we add a set of ``task prompts'', each corresponding
to a different task, and let each prompt predict task-related annotations. This
design allows the model to capture information shared among synthetic scene
tasks as well as information shared between synthetic scene tasks and a real
video downstream task throughout the entire network. We refer to this approach
as ``Promptonomy'', since the prompts model task-related structure. We propose
the PromptonomyViT model (PViT), a video transformer that incorporates various
types of scene-level information from synthetic data using the ``Promptonomy''
approach. PViT shows strong performance improvements on multiple video
understanding tasks and datasets.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2022 18:55:31 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 08:07:49 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Herzig",
"Roei",
""
],
[
"Abramovich",
"Ofir",
""
],
[
"Ben-Avraham",
"Elad",
""
],
[
"Arbelle",
"Assaf",
""
],
[
"Karlinsky",
"Leonid",
""
],
[
"Shamir",
"Ariel",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Globerson",
"Amir",
""
]
] |
new_dataset
| 0.994754 |
2212.04968
|
Mohammed Brahimi
|
Mohammed Brahimi, Bjoern Haefner, Tarun Yenamandra, Bastian Goldluecke
and Daniel Cremers
|
SupeRVol: Super-Resolution Shape and Reflectance Estimation in Inverse
Volume Rendering
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an end-to-end inverse rendering pipeline called SupeRVol that
allows us to recover 3D shape and material parameters from a set of color
images in a super-resolution manner. To this end, we represent both the
bidirectional reflectance distribution function (BRDF) and the signed distance
function (SDF) by multi-layer perceptrons. In order to obtain both the surface
shape and its reflectance properties, we resort to a differentiable volume
renderer with a physically based illumination model that allows us to decouple
reflectance and lighting. This physical model takes into account the effect of
the camera's point spread function thereby enabling a reconstruction of shape
and material in a super-resolution quality. Experimental validation confirms
that SupeRVol achieves state-of-the-art performance in terms of inverse
rendering quality. It generates reconstructions that are sharper than the
individual input images, making this method ideally suited for 3D modeling from
low-resolution imagery.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 16:30:17 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 17:35:55 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Brahimi",
"Mohammed",
""
],
[
"Haefner",
"Bjoern",
""
],
[
"Yenamandra",
"Tarun",
""
],
[
"Goldluecke",
"Bastian",
""
],
[
"Cremers",
"Daniel",
""
]
] |
new_dataset
| 0.999836 |
2212.07201
|
Luis Scoccola
|
Luis Scoccola, Hitesh Gakhar, Johnathan Bush, Nikolas Schonsheck,
Tatum Rask, Ling Zhou, Jose A. Perea
|
Toroidal Coordinates: Decorrelating Circular Coordinates With Lattice
Reduction
|
24 pages, 12 figures. To appear in proceedings of 39th International
Symposium on Computational Geometry
| null | null | null |
cs.CG cs.LG math.AT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The circular coordinates algorithm of de Silva, Morozov, and
Vejdemo-Johansson takes as input a dataset together with a cohomology class
representing a $1$-dimensional hole in the data; the output is a map from the
data into the circle that captures this hole, and that is of minimum energy in
a suitable sense. However, when applied to several cohomology classes, the
output circle-valued maps can be "geometrically correlated" even if the chosen
cohomology classes are linearly independent. It is shown in the original work
that less correlated maps can be obtained with suitable integer linear
combinations of the cohomology classes, with the linear combinations being
chosen by inspection. In this paper, we identify a formal notion of geometric
correlation between circle-valued maps which, in the Riemannian manifold case,
corresponds to the Dirichlet form, a bilinear form derived from the Dirichlet
energy. We describe a systematic procedure for constructing low energy
torus-valued maps on data, starting from a set of linearly independent
cohomology classes. We showcase our procedure with computational examples. Our
main algorithm is based on the Lenstra--Lenstra--Lovász algorithm from
computational number theory.
|
[
{
"version": "v1",
"created": "Wed, 14 Dec 2022 12:59:25 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 20:32:23 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Scoccola",
"Luis",
""
],
[
"Gakhar",
"Hitesh",
""
],
[
"Bush",
"Johnathan",
""
],
[
"Schonsheck",
"Nikolas",
""
],
[
"Rask",
"Tatum",
""
],
[
"Zhou",
"Ling",
""
],
[
"Perea",
"Jose A.",
""
]
] |
new_dataset
| 0.984851 |
2303.02563
|
Keane Wei Yang Ong
|
Keane Ong, Wihan van der Heever, Ranjan Satapathy, Gianmarco Mengaldo
and Erik Cambria
|
FinXABSA: Explainable Finance through Aspect-Based Sentiment Analysis
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel approach for explainability in financial analysis
by utilizing the Pearson correlation coefficient to establish a relationship
between aspect-based sentiment analysis and stock prices. The proposed
methodology involves constructing an aspect list from financial news articles
and analyzing sentiment intensity scores for each aspect. These scores are then
compared to the stock prices for the relevant companies using the Pearson
coefficient to determine any significant correlations. The results indicate
that the proposed approach provides a more detailed and accurate understanding
of the relationship between sentiment analysis and stock prices, which can be
useful for investors and financial analysts in making informed decisions.
Additionally, this methodology offers a transparent and interpretable way to
explain the sentiment analysis results and their impact on stock prices.
Overall, the findings of this paper demonstrate the importance of
explainability in financial analysis and highlight the potential benefits of
utilizing the Pearson coefficient for analyzing aspect-based sentiment analysis
and stock prices. The proposed approach offers a valuable tool for
understanding the complex relationships between financial news sentiment and
stock prices, providing a new perspective on the financial market and aiding in
making informed investment decisions.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 03:18:56 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Mar 2023 13:51:21 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Mar 2023 02:35:42 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Ong",
"Keane",
""
],
[
"van der Heever",
"Wihan",
""
],
[
"Satapathy",
"Ranjan",
""
],
[
"Mengaldo",
"Gianmarco",
""
],
[
"Cambria",
"Erik",
""
]
] |
new_dataset
| 0.989847 |
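The core statistic in the abstract above is a Pearson correlation between an aspect-sentiment series and a stock-price series. A minimal SciPy sketch with made-up numbers:

```python
# Sketch: Pearson correlation between daily aspect sentiment and prices.
import numpy as np
from scipy.stats import pearsonr

sentiment = np.array([0.1, 0.4, 0.3, 0.7, 0.6, 0.9])         # sentiment per day
price = np.array([100.0, 101.5, 101.0, 103.2, 102.8, 104.1])  # closing prices

r, p_value = pearsonr(sentiment, price)
print(f"r={r:.3f}, p={p_value:.3f}")   # strength/significance of the link
```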
2303.03684
|
Mingzhen Sun
|
Mingzhen Sun, Weining Wang, Xinxin Zhu and Jing Liu
|
MOSO: Decomposing MOtion, Scene and Object for Video Prediction
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Motion, scene and object are three primary visual components of a video. In
particular, objects represent the foreground, scenes represent the background,
and motion traces their dynamics. Based on this insight, we propose a two-stage
MOtion, Scene and Object decomposition framework (MOSO) for video prediction,
consisting of MOSO-VQVAE and MOSO-Transformer. In the first stage, MOSO-VQVAE
decomposes a previous video clip into the motion, scene and object components,
and represents them as distinct groups of discrete tokens. Then, in the second
stage, MOSO-Transformer predicts the object and scene tokens of the subsequent
video clip based on the previous tokens and adds dynamic motion at the token
level to the generated object and scene tokens. Our framework can be easily
extended to unconditional video generation and video frame interpolation tasks.
Experimental results demonstrate that our method achieves new state-of-the-art
performance on five challenging benchmarks for video prediction and
unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101. In
addition, MOSO can produce realistic videos by combining objects and scenes
from different videos.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 06:54:48 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 08:41:44 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Sun",
"Mingzhen",
""
],
[
"Wang",
"Weining",
""
],
[
"Zhu",
"Xinxin",
""
],
[
"Liu",
"Jing",
""
]
] |
new_dataset
| 0.999761 |
2303.04748
|
Junbo Zhang
|
Junbo Zhang, Runpei Dong, Kaisheng Ma
|
CLIP-FO3D: Learning Free Open-world 3D Scene Representations from 2D
Dense CLIP
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training a 3D scene understanding model requires complicated human
annotations, which are laborious to collect and result in a model that only
encodes closed-set object semantics. In contrast, vision-language pre-training models
(e.g., CLIP) have shown remarkable open-world reasoning properties. To this
end, we propose directly transferring CLIP's feature space to a 3D scene
understanding model without any form of supervision. We first modify CLIP's
input and forwarding process so that it can be adapted to extract dense pixel
features for 3D scene contents. We then project multi-view image features to
the point cloud and train a 3D scene understanding model with feature
distillation. Without any annotations or additional training, our model
achieves promising annotation-free semantic segmentation results on
open-vocabulary semantics and long-tailed concepts. Besides, serving as a
cross-modal pre-training framework, our method can be used to improve data
efficiency during fine-tuning. Our model outperforms previous SOTA methods in
various zero-shot and data-efficient learning benchmarks. Most importantly, our
model successfully inherits CLIP's rich-structured knowledge, allowing 3D scene
understanding models to recognize not only object concepts but also open-world
semantics.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 17:30:58 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 04:52:06 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Zhang",
"Junbo",
""
],
[
"Dong",
"Runpei",
""
],
[
"Ma",
"Kaisheng",
""
]
] |
new_dataset
| 0.959191 |
2303.08450
|
Ziyu Yao
|
Ziyu Yao, Xuxin Cheng, Yuexian Zou
|
PoseRAC: Pose Saliency Transformer for Repetitive Action Counting
|
10 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a significant contribution to the field of repetitive
action counting through the introduction of a new approach called Pose Saliency
Representation. The proposed method efficiently represents each action using
only two salient poses instead of redundant frames, which significantly reduces
the computational cost while improving the performance. Moreover, we introduce
a pose-level method, PoseRAC, which is based on this representation and
achieves state-of-the-art performance on two new version datasets by using Pose
Saliency Annotation to annotate salient poses for training. Our lightweight
model is highly efficient, requiring only 20 minutes for training on a GPU, and
infers nearly 10x faster compared to previous methods. In addition, our
approach achieves a substantial improvement over the previous state-of-the-art
TransRAC, achieving an OBO metric of 0.56 compared to TransRAC's 0.29. The
code and new dataset are available at https://github.com/MiracleDance/PoseRAC
for further research and experimentation, making our proposed approach highly
accessible to the research community.
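To make the two-salient-pose idea concrete, here is a minimal sketch (not the
authors' released code) of counting repetitions from per-frame salient-pose
probabilities with a simple state machine; the threshold and the input format
are illustrative assumptions.
```python
# Count repetitions from an (N, 2) array of softmax scores for the two
# salient poses of one action class (a hypothetical input layout).
import numpy as np

def count_repetitions(probs: np.ndarray, enter_thr: float = 0.8) -> int:
    count, state = 0, 0  # state 0: waiting for pose 1; state 1: waiting for pose 2
    for p1, p2 in probs:
        if state == 0 and p1 >= enter_thr:
            state = 1
        elif state == 1 and p2 >= enter_thr:
            count += 1  # one full pose-1 -> pose-2 cycle completed
            state = 0
    return count

probs = np.array([[0.9, 0.1], [0.2, 0.9], [0.9, 0.1], [0.1, 0.95]])
print(count_repetitions(probs))  # -> 2
```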
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 08:51:17 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 01:33:08 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Yao",
"Ziyu",
""
],
[
"Cheng",
"Xuxin",
""
],
[
"Zou",
"Yuexian",
""
]
] |
new_dataset
| 0.99909 |
2303.08824
|
Wei Jiang
|
Wei Jiang and Hans D. Schotten
|
Intelligent Reflecting Vehicle Surface: A Novel IRS Paradigm for Moving
Vehicular Networks
|
MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM).
arXiv admin note: text overlap with arXiv:2303.08659
| null |
10.1109/MILCOM55135.2022.10017691
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Intelligent reflecting surface (IRS) has recently received much attention
from the research community due to its potential to achieve high spectral and
power efficiency cost-effectively. In addition to traditional cellular
networks, the use of IRS in vehicular networks is also considered. Prior works
on IRS-aided vehicle-to-everything communications focus on deploying reflection
surfaces on the facades of buildings along the road for sidelink performance
enhancement. This paper goes beyond the state of the art by presenting a novel
paradigm coined Intelligent Reflecting Vehicle Surface (IRVS). It embeds a
massive number of reflection elements on vehicles' surfaces to aid moving
vehicular networks in military and emergency communications. We propose an
alternating optimization method to jointly optimize active beamforming at the
base station and passive reflection across multiple randomly-distributed
vehicle surfaces. Performance evaluation in terms of sum spectral efficiency
under continuous, discrete, and random phase shifts is conducted. Numerical
results reveal that IRVS can substantially improve the capacity of a moving
vehicular network.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 15:57:04 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Jiang",
"Wei",
""
],
[
"Schotten",
"Hans D.",
""
]
] |
new_dataset
| 0.987881 |
2303.08877
|
Elliot Murphy
|
Elliot Murphy
|
ROSE: A Neurocomputational Architecture for Syntax
| null | null | null | null |
cs.CL q-bio.NC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A comprehensive model of natural language processing in the brain must
accommodate four components: representations, operations, structures and
encoding. It further requires a principled account of how these components
mechanistically, and causally, relate to one another. While previous models
have isolated regions of interest for structure-building and lexical access,
many gaps remain with respect to bridging distinct scales of neural complexity.
By expanding existing accounts of how neural oscillations can index various
linguistic processes, this article proposes a neurocomputational architecture
for syntax, termed the ROSE model (Representation, Operation, Structure,
Encoding). Under ROSE, the basic data structures of syntax are atomic features,
types of mental representations (R), and are coded at the single-unit and
ensemble level. Elementary computations (O) that transform these units into
manipulable objects accessible to subsequent structure-building levels are
coded via high frequency gamma activity. Low frequency synchronization and
cross-frequency coupling code for recursive categorial inferences (S). Distinct
forms of low frequency coupling and phase-amplitude coupling (delta-theta
coupling via pSTS-IFG; theta-gamma coupling via IFG to conceptual hubs) then
encode these structures onto distinct workspaces (E). Causally connecting R to
O is spike-phase/LFP coupling; connecting O to S is phase-amplitude coupling;
connecting S to E is a system of frontotemporal traveling oscillations;
connecting E to lower levels is low-frequency phase resetting of spike-LFP
coupling. ROSE is reliant on neurophysiologically plausible mechanisms, is
supported at all four levels by a range of recent empirical research, and
provides an anatomically precise and falsifiable grounding for the basic
property of natural language syntax: hierarchical, recursive
structure-building.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 18:44:37 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Murphy",
"Elliot",
""
]
] |
new_dataset
| 0.988702 |
2303.08883
|
Riccardo Albertoni
|
Riccardo Albertoni, David Browning, Simon Cox, Alejandra N.
Gonzalez-Beltran, Andrea Perego, Peter Winstanley
|
The W3C Data Catalog Vocabulary, Version 2: Rationale, Design
Principles, and Uptake
| null | null | null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
DCAT is an RDF vocabulary designed to facilitate interoperability between
data catalogs published on the Web. Since its first release in 2014 as a W3C
Recommendation, DCAT has seen a wide adoption across communities and domains,
particularly in conjunction with implementing the FAIR data principles (for
findable, accessible, interoperable and reusable data). These implementation
experiences, besides demonstrating the fitness of DCAT to meet its intended
purpose, helped identify existing issues and gaps. Moreover, over the last few
years, additional requirements emerged in data catalogs, given the increasing
practice of documenting not only datasets but also data services and APIs. This
paper illustrates the new version of DCAT, explaining the rationale behind its
main revisions and extensions, based on the collected use cases and
requirements, and outlines the issues yet to be addressed in future versions of
DCAT.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 18:59:53 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Albertoni",
"Riccardo",
""
],
[
"Browning",
"David",
""
],
[
"Cox",
"Simon",
""
],
[
"Gonzalez-Beltran",
"Alejandra N.",
""
],
[
"Perego",
"Andrea",
""
],
[
"Winstanley",
"Peter",
""
]
] |
new_dataset
| 0.999394 |
2303.08886
|
Jiaqi Xue
|
Qian Lou, Muhammad Santriaji, Ardhi Wiratama Baskara Yudha, Jiaqi Xue,
Yan Solihin
|
vFHE: Verifiable Fully Homomorphic Encryption with Blind Hash
|
8 pages, 5 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Fully homomorphic encryption (FHE) is a powerful encryption technique that
allows for computation to be performed on ciphertext without the need for
decryption. FHE will thus enable privacy-preserving computation and a wide
range of applications, such as secure cloud computing on sensitive medical and
financial data, secure machine learning, etc. Prior research in FHE has largely
concentrated on improving its speed, and great strides have been made. However,
there has been a scarcity of research on addressing a major challenge of FHE
computation: client-side data owners cannot verify the integrity of the
calculations performed by the service and computation providers, hence cannot
be assured of the correctness of computation results. This is particularly
concerning when the service or computation provider may act in an
untrustworthy, unreliable, or malicious manner and tampers with the
computational results. Prior work on ensuring FHE computational integrity has
been non-universal or has incurred too much overhead. We propose vFHE to add
computational integrity to FHE without losing universality and without
incurring high performance overheads.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 19:12:53 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Lou",
"Qian",
""
],
[
"Santriaji",
"Muhammad",
""
],
[
"Yudha",
"Ardhi Wiratama Baskara",
""
],
[
"Xue",
"Jiaqi",
""
],
[
"Solihin",
"Yan",
""
]
] |
new_dataset
| 0.972699 |
2303.08891
|
Adar Kahana
|
Oded Ovadia, Adar Kahana, Panos Stinis, Eli Turkel, George Em
Karniadakis
|
ViTO: Vision Transformer-Operator
| null | null | null |
PNNL-SA-182861
|
cs.CV cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
We combine vision transformers with operator learning to solve diverse
inverse problems described by partial differential equations (PDEs). Our
approach, named ViTO, combines a U-Net based architecture with a vision
transformer. We apply ViTO to solve inverse PDE problems of increasing
complexity, namely for the wave equation, the Navier-Stokes equations and the
Darcy equation. We focus on the more challenging case of super-resolution,
where the input dataset for the inverse problem is at a significantly coarser
resolution than the output. The results we obtain are comparable to or exceed
the leading operator network benchmarks in terms of accuracy. Furthermore, ViTO's
architecture has a small number of trainable parameters (less than 10% of the
leading competitor), resulting in a performance speed-up of over 5x when
averaged over the various test cases.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 19:24:14 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Ovadia",
"Oded",
""
],
[
"Kahana",
"Adar",
""
],
[
"Stinis",
"Panos",
""
],
[
"Turkel",
"Eli",
""
],
[
"Karniadakis",
"George Em",
""
]
] |
new_dataset
| 0.999553 |
2303.08920
|
Chenbin Pan
|
Chenbin Pan, Zhiqi Zhang, Senem Velipasalar, Yi Xu
|
EgoViT: Pyramid Video Transformer for Egocentric Action Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Capturing interaction of hands with objects is important to autonomously
detect human actions from egocentric videos. In this work, we present a pyramid
video transformer with a dynamic class token generator for egocentric action
recognition. Different from previous video transformers, which use the same
static embedding as the class token for diverse inputs, we propose a dynamic
class token generator that produces a class token for each input video by
analyzing the hand-object interaction and the related motion information. The
dynamic class token can diffuse such information to the entire model by
communicating with other informative tokens in the subsequent transformer
layers. With the dynamic class token, dissimilarity between videos can be more
prominent, which helps the model distinguish various inputs. In addition,
traditional video transformers explore temporal features globally, which
requires large amounts of computation. However, egocentric videos often have a
large number of background scene transitions, which cause discontinuities
across distant frames. In this case, blindly reducing the temporal sampling
rate will risk losing crucial information. Hence, we also propose a pyramid
architecture to hierarchically process the video from short-term high rate to
long-term low rate. With the proposed architecture, we significantly reduce the
computational cost as well as the memory requirement without sacrificing
model performance. We perform comparisons with different baseline video
transformers on the EPIC-KITCHENS-100 and EGTEA Gaze+ datasets. Both
quantitative and qualitative results show that the proposed model can
efficiently improve the performance for egocentric action recognition.
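As a rough illustration of the dynamic class token described above, the
following PyTorch sketch attention-pools hand-object interaction features into
a per-video token; the module name, head count, and tensor shapes are
assumptions for illustration, not the paper's implementation.
```python
# Dynamic class token: generated per input by attention-pooling
# hand-object interaction features, instead of one shared learned embedding.
import torch
import torch.nn as nn

class DynamicClassToken(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, hand_obj_feats: torch.Tensor) -> torch.Tensor:
        # hand_obj_feats: (B, T, dim) interaction/motion features
        q = self.query.expand(hand_obj_feats.size(0), -1, -1)
        cls_token, _ = self.attn(q, hand_obj_feats, hand_obj_feats)
        return cls_token  # (B, 1, dim), prepended to patch tokens downstream

tokens = DynamicClassToken(dim=256)(torch.randn(2, 16, 256))
print(tokens.shape)  # torch.Size([2, 1, 256])
```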
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 20:33:50 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Pan",
"Chenbin",
""
],
[
"Zhang",
"Zhiqi",
""
],
[
"Velipasalar",
"Senem",
""
],
[
"Xu",
"Yi",
""
]
] |
new_dataset
| 0.95026 |
2303.08934
|
Wenxin Jiang
|
Wenxin Jiang, Nicholas Synovic, Purvish Jajal, Taylor R. Schorlemmer,
Arav Tewari, Bhavesh Pareek, George K. Thiruvathukal, James C. Davis
|
PTMTorrent: A Dataset for Mining Open-source Pre-trained Model Packages
|
5 pages, 2 figures, Accepted to MSR'23
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the cost of developing and training deep learning models from scratch,
machine learning engineers have begun to reuse pre-trained models (PTMs) and
fine-tune them for downstream tasks. PTM registries known as "model hubs"
support engineers in distributing and reusing deep learning models. PTM
packages include pre-trained weights, documentation, model architectures,
datasets, and metadata. Mining the information in PTM packages will enable the
discovery of engineering phenomena and tools to support software engineers.
However, accessing this information is difficult - there are many PTM
registries, and both the registries and the individual packages may have rate
limiting for accessing the data. We present an open-source dataset, PTMTorrent,
to facilitate the evaluation and understanding of PTM packages. This paper
describes the creation, structure, usage, and limitations of the dataset. The
dataset includes a snapshot of 5 model hubs and a total of 15,913 PTM packages.
These packages are represented in a uniform data schema for cross-hub mining.
We describe prior uses of this data and suggest research opportunities for
mining using our dataset. The PTMTorrent dataset (v1) is available at:
https://app.globus.org/file-manager?origin_id=55e17a6e-9d8f-11ed-a2a2-8383522b48d9&origin_path=%2F~%2F.
Our dataset generation tools are available on GitHub:
https://doi.org/10.5281/zenodo.7570357.
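For orientation, a hypothetical mining sketch over the uniform schema might
look as follows; the file name and the field names ("model_hub") are
assumptions for illustration only -- consult the dataset's documentation for
the real schema.
```python
# Cross-hub aggregation over a JSON-lines snapshot of PTM package records.
import json
from collections import Counter

per_hub = Counter()
with open("ptm_torrent_snapshot.jsonl") as f:  # hypothetical file name
    for line in f:
        pkg = json.loads(line)
        per_hub[pkg["model_hub"]] += 1  # hypothetical field name

for hub, n_packages in per_hub.most_common():
    print(f"{hub}: {n_packages} PTM packages")
```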
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 21:01:31 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Jiang",
"Wenxin",
""
],
[
"Synovic",
"Nicholas",
""
],
[
"Jajal",
"Purvish",
""
],
[
"Schorlemmer",
"Taylor R.",
""
],
[
"Tewari",
"Arav",
""
],
[
"Pareek",
"Bhavesh",
""
],
[
"Thiruvathukal",
"George K.",
""
],
[
"Davis",
"James C.",
""
]
] |
new_dataset
| 0.999827 |
2303.08937
|
Rodrigo Silveira
|
Maarten L\"offler, Tim Ophelders, Frank Staals, Rodrigo I. Silveira
|
Shortest Paths in Portalgons
|
34 pages. Full version of a paper to appear in a shorter form in the
39th International Symposium on Computational Geometry (SoCG 2023)
| null | null | null |
cs.CG cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Any surface that is intrinsically polyhedral can be represented by a
collection of simple polygons (fragments), glued along pairs of equally long
oriented edges, where each fragment is endowed with the geodesic metric arising
from its Euclidean metric. We refer to such a representation as a portalgon,
and we call two portalgons equivalent if the surfaces they represent are
isometric. We analyze the complexity of shortest paths in portalgons. We call a
fragment happy if any shortest path on the portalgon visits it at most a
constant number of times. A portalgon is happy if all of its fragments are
happy. We present an efficient algorithm to compute shortest paths on happy
portalgons. The number of times that a shortest path visits a fragment is
unbounded in general. We contrast this by showing that the intrinsic Delaunay
triangulation of any polyhedral surface corresponds to a happy portalgon. Since
computing the intrinsic Delaunay triangulation may be inefficient, we provide
an efficient algorithm to compute happy portalgons for a restricted class of
portalgons.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 21:06:45 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Löffler",
"Maarten",
""
],
[
"Ophelders",
"Tim",
""
],
[
"Staals",
"Frank",
""
],
[
"Silveira",
"Rodrigo I.",
""
]
] |
new_dataset
| 0.998938 |
2303.08959
|
Pedro Enrique Iturria Rivera Mr.
|
Pedro Enrique Iturria Rivera, Marcel Chenier, Bernard Herscovici,
Burak Kantarci and Melike Erol-Kantarci
|
RL meets Multi-Link Operation in IEEE 802.11be: Multi-Headed Recurrent
Soft-Actor Critic-based Traffic Allocation
|
Accepted in ICC'23
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
IEEE 802.11be (Extremely High Throughput), commercially known as
Wireless-Fidelity (Wi-Fi) 7, is the newest IEEE 802.11 amendment, which
addresses increasingly throughput-hungry services such as Ultra High
Definition (4K/8K) video and Virtual/Augmented Reality (VR/AR). To do so, IEEE
802.11be presents a set of novel features that will push Wi-Fi technology to
its limits. Among them, Multi-Link Operation (MLO) devices are anticipated to
become a reality, leaving Single-Link Operation (SLO) Wi-Fi in the past.
achieve superior throughput and very low latency, a careful design approach
must be taken on how the incoming traffic is distributed in MLO-capable
devices. In this paper, we present a Reinforcement Learning (RL) algorithm
named Multi-Headed Recurrent Soft-Actor Critic (MH-RSAC) to distribute incoming
traffic in 802.11be MLO capable networks. Moreover, we compare our results with
two non-RL baselines previously proposed in the literature named: Single Link
Less Congested Interface (SLCI) and Multi-Link Congestion-aware Load balancing
at flow arrivals (MCAA). Simulation results reveal that the MH-RSAC algorithm
is able to obtain gains in terms of Throughput Drop Ratio (TDR) up to 35.2% and
6% when compared with the SLCI and MCAA algorithms, respectively. Finally, we
observed that our scheme is able to respond more efficiently to high throughput
and dynamic traffic such as VR and Web Browsing (WB) when compared with the
baselines. Results showed an improvement of the MH-RSAC scheme in terms of Flow
Satisfaction (FS) of up to 25.6% and 6% over the SLCI and MCAA algorithms.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 22:14:28 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Rivera",
"Pedro Enrique Iturria",
""
],
[
"Chenier",
"Marcel",
""
],
[
"Herscovici",
"Bernard",
""
],
[
"Kantarci",
"Burak",
""
],
[
"Erol-Kantarci",
"Melike",
""
]
] |
new_dataset
| 0.999787 |
2303.08964
|
Ali Behrouz
|
Farnoosh Hashemi and Ali Behrouz and Milad Rezaei Hajidehi
|
CS-TGN: Community Search via Temporal Graph Neural Networks
|
This is the author's version of the paper. Published in companion
proceedings of the ACM Web Conference 2023 (WWW '23 Companion)
| null |
10.1145/3543873.3587654
| null |
cs.SI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Searching for local communities is an important research challenge that
allows for personalized community discovery and supports advanced data analysis
in various complex networks, such as the World Wide Web, social networks, and
brain networks. The evolution of these networks over time has motivated several
recent studies to identify local communities in temporal networks. Given any
query nodes, Community Search aims to find a densely connected subgraph
containing query nodes. However, existing community search approaches in
temporal networks have two main limitations: (1) they adopt pre-defined
subgraph patterns to model communities, which cannot find communities that do
not conform to these patterns in real-world networks, and (2) they only use the
aggregation of disjoint structural information to measure quality, missing the
dynamics of connections and temporal properties. In this paper, we propose a
query-driven Temporal Graph Convolutional Network (CS-TGN) that can capture
flexible community structures by learning from the ground-truth communities in
a data-driven manner. CS-TGN first combines the local query-dependent structure
and the global graph embedding in each snapshot of the network and then uses a
GRU cell with contextual attention to learn the dynamics of interactions and
update node embeddings over time. We demonstrate how this model can be used for
interactive community search in an online setting, allowing users to evaluate
the found communities and provide feedback. Experiments on real-world temporal
graphs with ground-truth communities validate the superior quality of the
solutions obtained and the efficiency of our model in both temporal and
interactive static settings.
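A minimal sketch of the temporal update described above (the GNN encoder and
contextual attention are elided; names and shapes are illustrative): each
snapshot's node embeddings are folded through a GRU cell so that node states
carry the dynamics of interactions over time.
```python
# Fold per-snapshot node embeddings through a GRU cell.
import torch
import torch.nn as nn

num_nodes, dim, num_snapshots = 100, 64, 5
gru = nn.GRUCell(input_size=dim, hidden_size=dim)
h = torch.zeros(num_nodes, dim)  # initial node states

for t in range(num_snapshots):
    snapshot_emb = torch.randn(num_nodes, dim)  # stand-in for GNN output at time t
    h = gru(snapshot_emb, h)  # update node embeddings over time

print(h.shape)  # torch.Size([100, 64])
```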
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 22:23:32 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Hashemi",
"Farnoosh",
""
],
[
"Behrouz",
"Ali",
""
],
[
"Hajidehi",
"Milad Rezaei",
""
]
] |
new_dataset
| 0.999103 |
2303.08973
|
Kostantinos Draziotis
|
George S. Rizos and Konstantinos A. Draziotis
|
Cryptographic Primitives based on Compact Knapsack Problem
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In the present paper, we extend previous results on an identification (ID)
scheme based on the compact knapsack problem defined by one equation. We
present a sound three-move ID scheme based on the compact knapsack problem
defined by an integer matrix, and we study this problem by providing
lattice-based attacks. Furthermore, we provide the corresponding digital
signature obtained via the Fiat-Shamir transform and prove that it is secure
in the random oracle model (ROM). These primitives are post-quantum resistant.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 22:53:37 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Rizos",
"George S.",
""
],
[
"Draziotis",
"Konstantinos A.",
""
]
] |
new_dataset
| 0.979047 |
2303.09024
|
Arnab Bhattacharjee Mr
|
Arnab Bhattacharjee, Tapan K. Saha, Ashu Verma, Sukumar Mishra
|
DeeBBAA: A benchmark Deep Black Box Adversarial Attack against
Cyber-Physical Power Systems
| null | null | null | null |
cs.CR cs.SY eess.SY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Increased energy demand and environmental pressure to accommodate higher
levels of renewable energy and flexible loads like electric vehicles have led
to numerous smart transformations in the modern power systems. These
transformations make the cyber-physical power system highly susceptible to
cyber-adversaries targeting its numerous operations. In this work, a novel
black box adversarial attack strategy is proposed targeting the AC state
estimation operation of an unknown power system using historical data.
Specifically, false data is injected into the measurements obtained from a
small subset of the power system components which leads to significant
deviations in the state estimates. Experiments carried out on the IEEE 39 bus
and 118 bus test systems make it evident that the proposed strategy, called
DeeBBAA, can evade numerous conventional and state-of-the-art attack detection
mechanisms with very high probability.
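For background only (this is the classical white-box stealth condition for
false data injection against linear state estimation, not the black-box attack
proposed here, which assumes no knowledge of the measurement matrix $H$):
```latex
\[
  z_a = z + a, \qquad a = Hc \;\Longrightarrow\;
  r_a = z_a - H\hat{x}_a = z - H\hat{x} = r ,
\]
% i.e., the bad-data-detection residual r is unchanged while the state
% estimate shifts from \hat{x} to \hat{x} + c.
```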
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 01:36:18 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Bhattacharjee",
"Arnab",
""
],
[
"Saha",
"Tapan K.",
""
],
[
"Verma",
"Ashu",
""
],
[
"Mishra",
"Sukumar",
""
]
] |
new_dataset
| 0.999704 |
2303.09054
|
Haruya Ishikawa
|
Haruya Ishikawa, Yoshimitsu Aoki
|
FindView: Precise Target View Localization Task for Look Around Agents
|
19 pages, 7 figures, preprint, code available in
https://github.com/haruishi43/look_around
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
With the increase in demands for service robots and automated inspection,
agents need to localize in their surrounding environments to achieve more
natural communication with humans through shared contexts. In this work, we
propose a novel
but straightforward task of precise target view localization for look around
agents called the FindView task. This task imitates the movements of PTZ
cameras or user interfaces for 360 degree media, where the observer must
"look around" to find a view that exactly matches the target. To solve this
task, we introduce a rule-based agent that heuristically finds the optimal view
and a policy learning agent that employs reinforcement learning to learn by
interacting with the 360 degree scene. Through extensive evaluations and
benchmarks, we conclude that learned methods have many advantages, in
particular precise localization that is robust to corruption and can be easily
deployed in novel scenes.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 03:00:20 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Ishikawa",
"Haruya",
""
],
[
"Aoki",
"Yoshimitsu",
""
]
] |
new_dataset
| 0.977067 |
2303.09079
|
Jiaqi Xue
|
Mengxin Zheng, Jiaqi Xue, Xun Chen, Lei Jiang, Qian Lou
|
SSL-Cleanse: Trojan Detection and Mitigation in Self-Supervised Learning
|
10 pages, 6 figures
| null | null | null |
cs.CR cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Self-supervised learning (SSL) is a commonly used approach to learning and
encoding data representations. By using a pre-trained SSL image encoder and
training a downstream classifier on top of it, impressive performance can be
achieved on various tasks with very little labeled data. The increasing usage
of SSL has led to an uptick in security research related to SSL encoders and
the development of various Trojan attacks. The danger posed by Trojan attacks
inserted in SSL encoders lies in their ability to operate covertly and spread
widely among various users and devices. The presence of backdoor behavior in
Trojaned encoders can inadvertently be inherited by downstream classifiers,
making it even more difficult to detect and mitigate the threat. Although
current Trojan detection methods in supervised learning can potentially
safeguard SSL downstream classifiers, identifying and addressing triggers in
the SSL encoder before its widespread dissemination is a challenging task. This
is because downstream tasks are not always known, dataset labels are not
available, and even the original training dataset is not accessible during the
SSL encoder Trojan detection. This paper presents an innovative technique
called SSL-Cleanse that is designed to detect and mitigate backdoor attacks in
SSL encoders. We evaluated SSL-Cleanse on various datasets using 300 models,
achieving an average detection success rate of 83.7% on ImageNet-100. After
mitigating backdoors, on average, backdoored encoders achieve 0.24% attack
success rate without significant accuracy loss, proving the effectiveness of
SSL-Cleanse.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 04:45:06 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Zheng",
"Mengxin",
""
],
[
"Xue",
"Jiaqi",
""
],
[
"Chen",
"Xun",
""
],
[
"Jiang",
"Lei",
""
],
[
"Lou",
"Qian",
""
]
] |
new_dataset
| 0.987367 |
2303.09085
|
Lichin Chen
|
Li-Chin Chen, Jung-Nien Lai, Hung-En Lin, Hsien-Te Chen, Kuo-Hsuan
Hung, Yu Tsao
|
Preoperative Prognosis Assessment of Lumbar Spinal Surgery for Low Back
Pain and Sciatica Patients based on Multimodalities and Multimodal Learning
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Low back pain (LBP) and sciatica may require surgical therapy when they are
symptomatic of severe pain. However, there are no effective measures to evaluate
the surgical outcomes in advance. This work combined elements of Eastern
medicine and machine learning, and developed a preoperative assessment tool to
predict the prognosis of lumbar spinal surgery in LBP and sciatica patients.
Standard operative assessments, traditional Chinese medicine body constitution
assessments, planned surgical approach, and vowel pronunciation recordings were
collected and stored in different modalities. Our work provides insights into
leveraging modality combinations, multimodals, and fusion strategies. The
interpretability of models and correlations between modalities were also
inspected. Based on the recruited 105 patients, we found that combining
standard operative assessments, body constitution assessments, and planned
surgical approach achieved the best performance, with an accuracy of 0.81. Our
approach is effective and can be widely applied in general practice due to its
simplicity and effectiveness.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 05:06:06 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Chen",
"Li-Chin",
""
],
[
"Lai",
"Jung-Nien",
""
],
[
"Lin",
"Hung-En",
""
],
[
"Chen",
"Hsien-Te",
""
],
[
"Hung",
"Kuo-Hsuan",
""
],
[
"Tsao",
"Yu",
""
]
] |
new_dataset
| 0.996522 |
2303.09100
|
Xinyang Liu
|
Xinyang Liu, Dongsheng Wang, Miaoge Li, Zhibin Duan, Yishi Xu, Bo
Chen, Mingyuan Zhou
|
Patch-Token Aligned Bayesian Prompt Learning for Vision-Language Models
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
For downstream applications of vision-language pre-trained models, there has
been significant interest in constructing effective prompts. Existing works on
prompt engineering, which either require laborious manual designs or optimize
the prompt tuning as a point estimation problem, may fail to describe diverse
characteristics of categories and limit their applications. We introduce a
Bayesian probabilistic resolution to prompt learning, where the label-specific
stochastic prompts are generated hierarchically by first sampling a latent
vector from an underlying distribution and then employing a lightweight
generative model. Importantly, we semantically regularize prompt learning with
the visual knowledge and view images and the corresponding prompts as patch and
token sets under optimal transport, which pushes the prompt tokens to
faithfully capture the label-specific visual concepts, instead of overfitting
the training categories. Moreover, the proposed model can also be
straightforwardly extended to the conditional case where the
instance-conditional prompts are generated to improve the generalizability.
Extensive experiments on 15 datasets show promising transferability and
generalization performance of our proposed model.
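A schematic PyTorch sketch of the hierarchical stochastic prompt generation
described above, assuming a Gaussian over the latent vector and an
illustrative two-layer generator (all dimensions are assumptions):
```python
# Sample a latent vector per class, then map it to prompt tokens.
import torch
import torch.nn as nn

n_classes, latent_dim, n_tokens, token_dim = 10, 32, 4, 512
mu = nn.Parameter(torch.zeros(n_classes, latent_dim))
log_var = nn.Parameter(torch.zeros(n_classes, latent_dim))
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.GELU(),
                          nn.Linear(256, n_tokens * token_dim))

z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterized sample
prompts = generator(z).view(n_classes, n_tokens, token_dim)
print(prompts.shape)  # torch.Size([10, 4, 512])
```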
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 06:09:15 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Liu",
"Xinyang",
""
],
[
"Wang",
"Dongsheng",
""
],
[
"Li",
"Miaoge",
""
],
[
"Duan",
"Zhibin",
""
],
[
"Xu",
"Yishi",
""
],
[
"Chen",
"Bo",
""
],
[
"Zhou",
"Mingyuan",
""
]
] |
new_dataset
| 0.997073 |
2303.09187
|
Zhongwei Qiu
|
Zhongwei Qiu, Yang Qiansheng, Jian Wang, Haocheng Feng, Junyu Han,
Errui Ding, Chang Xu, Dongmei Fu, Jingdong Wang
|
PSVT: End-to-End Multi-person 3D Pose and Shape Estimation with
Progressive Video Transformers
|
CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing methods of multi-person video 3D human Pose and Shape Estimation
(PSE) typically adopt a two-stage strategy, which first detects human instances
in each frame and then performs single-person PSE with a temporal model.
However, the global spatio-temporal context among spatial instances cannot be
captured.
In this paper, we propose a new end-to-end multi-person 3D Pose and Shape
estimation framework with progressive Video Transformer, termed PSVT. In PSVT,
a spatio-temporal encoder (STE) captures the global feature dependencies among
spatial objects. Then, spatio-temporal pose decoder (STPD) and shape decoder
(STSD) capture the global dependencies between pose queries and feature tokens,
shape queries and feature tokens, respectively. To handle the variances of
objects as time proceeds, a novel scheme of progressive decoding is used to
update pose and shape queries at each frame. Besides, we propose a novel
pose-guided attention (PGA) for shape decoder to better predict shape
parameters. The two components strengthen the decoder of PSVT to improve
performance. Extensive experiments on four datasets show that PSVT achieves
state-of-the-art results.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 09:55:43 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Qiu",
"Zhongwei",
""
],
[
"Qiansheng",
"Yang",
""
],
[
"Wang",
"Jian",
""
],
[
"Feng",
"Haocheng",
""
],
[
"Han",
"Junyu",
""
],
[
"Ding",
"Errui",
""
],
[
"Xu",
"Chang",
""
],
[
"Fu",
"Dongmei",
""
],
[
"Wang",
"Jingdong",
""
]
] |
new_dataset
| 0.996763 |
2303.09210
|
Jichao Zhu
|
Jun Yu, Jichao Zhu, Wangyuan Zhu, Zhongpeng Cai, Guochen Xie, Renda
Li, Gongpeng Zhao
|
A Dual Branch Network for Emotional Reaction Intensity Estimation
| null | null | null | null |
cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emotional Reaction Intensity (ERI) estimation is an important task in
multimodal scenarios, and has fundamental applications in medicine, safe
driving and other fields. In this paper, we propose a solution to the ERI
challenge of the fifth Affective Behavior Analysis in-the-wild (ABAW)
competition: a dual-branch multi-output regression model. Spatial attention is
used to better extract visual features, Mel-Frequency Cepstral Coefficients
are used to extract acoustic features, and a method named modality dropout is
added to fuse multimodal features. Our method achieves excellent results on
the official validation set.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 10:31:40 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Yu",
"Jun",
""
],
[
"Zhu",
"Jichao",
""
],
[
"Zhu",
"Wangyuan",
""
],
[
"Cai",
"Zhongpeng",
""
],
[
"Xie",
"Guochen",
""
],
[
"Li",
"Renda",
""
],
[
"Zhao",
"Gongpeng",
""
]
] |
new_dataset
| 0.957043 |
2303.09220
|
Gustavo Rezende Silva
|
Gustavo Rezende Silva, Juliane P\"a{\ss}ler, Jeroen Zwanepol, Elvin
Alberts, S. Lizeth Tapia Tarifa, Ilias Gerostathopoulos, Einar Broch Johnsen,
Carlos Hern\'andez Corbato
|
SUAVE: An Exemplar for Self-Adaptive Underwater Vehicles
|
7 pages, 3 figures, accepted at SEAMS artifact track
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Once deployed in the real world, autonomous underwater vehicles (AUVs) are
out of reach for human supervision yet need to take decisions to adapt to
unstable and unpredictable environments. To facilitate research on
self-adaptive AUVs, this paper presents SUAVE, an exemplar for two-layered
system-level adaptation of AUVs, which clearly separates the application and
self-adaptation concerns. The exemplar focuses on a mission for underwater
pipeline inspection by a single AUV, implemented as a ROS2-based system. This
mission must be completed while simultaneously accounting for uncertainties
such as thruster failures and unfavorable environmental conditions. The paper
discusses how SUAVE can be used with different self-adaptation frameworks,
illustrated by an experiment using the Metacontrol framework to compare AUV
behavior with and without self-adaptation. The experiment shows that the use of
Metacontrol to adapt the AUV during its mission improves its performance when
measured by the overall time taken to complete the mission or the length of the
inspected pipeline.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 10:49:44 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Silva",
"Gustavo Rezende",
""
],
[
"Päßler",
"Juliane",
""
],
[
"Zwanepol",
"Jeroen",
""
],
[
"Alberts",
"Elvin",
""
],
[
"Tarifa",
"S. Lizeth Tapia",
""
],
[
"Gerostathopoulos",
"Ilias",
""
],
[
"Johnsen",
"Einar Broch",
""
],
[
"Corbato",
"Carlos Hernández",
""
]
] |
new_dataset
| 0.975882 |
2303.09252
|
Jiayi Lin
|
Jiayi Lin, Shaogang Gong
|
GridCLIP: One-Stage Object Detection by Grid-Level CLIP Representation
Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A vision-language foundation model pretrained on very large-scale image-text
paired data has the potential to provide generalizable knowledge representation
for downstream visual recognition and detection tasks, especially on
supplementing the undersampled categories in downstream model training. Recent
studies utilizing CLIP for object detection have shown that a two-stage
detector design typically outperforms a one-stage detector, while requiring
more expensive training resources and longer inference time. In this work, we
propose a one-stage detector GridCLIP that narrows its performance gap to those
of two-stage detectors, while being approximately 43 and 5 times faster than
its two-stage counterpart (ViLD) in the training and test process,
respectively.
GridCLIP learns grid-level representations to adapt to the intrinsic principle
of one-stage detection learning by expanding the conventional CLIP image-text
holistic mapping to a more fine-grained, grid-text alignment. This differs from
the region-text mapping in two-stage detectors that apply CLIP directly by
treating regions as images. Specifically, GridCLIP performs Grid-level
Alignment to adapt the CLIP image-level representations to grid-level
representations by aligning to CLIP category representations to learn the
annotated (especially frequent) categories. To learn generalizable visual
representations of broader categories, especially undersampled ones, we perform
Image-level Alignment during training to propagate broad pre-learned categories
in the CLIP image encoder from the image-level to the grid-level
representations. Experiments show that the learned CLIP-based grid-level
representations boost the performance of undersampled (infrequent and novel)
categories, reaching comparable detection performance on the LVIS benchmark.
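As a rough illustration of grid-text alignment, the following sketch scores
every grid-cell feature against CLIP category text embeddings by cosine
similarity; the shapes and names are assumptions, not the paper's exact
configuration.
```python
# Per-cell classification logits from grid features and text embeddings.
import torch
import torch.nn.functional as F

grid_feats = torch.randn(2, 512, 32, 32)  # (B, C, H, W) one-stage feature map
text_embeds = torch.randn(80, 512)        # CLIP text embeddings, 80 categories

g = F.normalize(grid_feats.flatten(2).transpose(1, 2), dim=-1)  # (B, H*W, C)
t = F.normalize(text_embeds, dim=-1)                            # (K, C)
logits = g @ t.T                                                # (B, H*W, K)
print(logits.shape)  # torch.Size([2, 1024, 80])
```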
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 12:06:02 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Lin",
"Jiayi",
""
],
[
"Gong",
"Shaogang",
""
]
] |
new_dataset
| 0.999174 |
2303.09292
|
Zihao Yu
|
Hongwei Liu and Zihao Yu
|
Linear Codes from Simplicial Complexes over $\mathbb{F}_{2^n}$
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article we mainly study linear codes over $\mathbb{F}_{2^n}$ and
their binary subfield codes. We construct linear codes over $\mathbb{F}_{2^n}$
whose defining sets are certain subsets of $\mathbb{F}_{2^n}^m$ obtained
from mathematical objects called simplicial complexes. We use a result on LFSR
sequences to illustrate the relation between the weights of codewords in two
special codes obtained from simplicial complexes, and then determine the
parameters of these codes. We construct five infinite families of
distance-optimal codes and give sufficient conditions for these codes to be
minimal.
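For context, the generic defining-set construction referenced above can be
written as follows (the notation here is an assumption for illustration, not
quoted from the paper):
```latex
% Given D = \{d_1,\dots,d_k\} \subseteq \mathbb{F}_{2^n}^m built from a
% simplicial complex, the associated code of length k over \mathbb{F}_{2^n} is
\[
  C_D = \bigl\{ \, c_x = ( x \cdot d_1, \; x \cdot d_2, \; \dots, \; x \cdot d_k )
  \;:\; x \in \mathbb{F}_{2^n}^m \, \bigr\},
\]
% whose weight distribution is governed by the combinatorial structure of D.
```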
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 13:12:28 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Liu",
"Hongwei",
""
],
[
"Yu",
"Zihao",
""
]
] |
new_dataset
| 0.998664 |
2303.09310
|
Bo Dang
|
Yansheng Li, Bo Dang, Wanchun Li, Yongjun Zhang
|
GLH-Water: A Large-Scale Dataset for Global Surface Water Detection in
Large-Size Very-High-Resolution Satellite Imagery
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Global surface water detection in very-high-resolution (VHR) satellite
imagery can directly serve major applications such as refined flood mapping and
water resource assessment. Although achievements have been made in detecting
surface water in small-size satellite images corresponding to local geographic
scales, datasets and methods suitable for mapping and analyzing global surface
water have yet to be explored. To encourage the development of this task and
facilitate the implementation of relevant applications, we propose the
GLH-water dataset that consists of 250 satellite images and manually labeled
surface water annotations that are distributed globally and contain water
bodies exhibiting a wide variety of types (e.g., rivers, lakes, and ponds in
forests, irrigated fields, bare areas, and urban areas). Each image is of the
size 12,800 $\times$ 12,800 pixels at 0.3 meter spatial resolution. To build a
benchmark for GLH-water, we perform extensive experiments employing
representative surface water detection models, popular semantic segmentation
models, and ultra-high resolution segmentation models. Furthermore, we also
design a strong baseline with the novel pyramid consistency loss (PCL) to
initially explore this challenge. Finally, we implement the cross-dataset and
pilot area generalization experiments, and the superior performance illustrates
the strong generalization and practical application of GLH-water. The dataset
is available at https://jack-bo1220.github.io/project/GLH-water.html.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 13:35:56 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Li",
"Yansheng",
""
],
[
"Dang",
"Bo",
""
],
[
"Li",
"Wanchun",
""
],
[
"Zhang",
"Yongjun",
""
]
] |
new_dataset
| 0.999873 |
2303.09346
|
Christopher Ford
|
Christopher J. Ford, Haoran Li, John Lloyd, Manuel G. Catalano, Matteo
Bianchi, Efi Psomopoulou, Nathan F. Lepora
|
Tactile-Driven Gentle Grasping for Human-Robot Collaborative Tasks
|
Manuscript accepted to ICRA 2023. 6+n pages, 7 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a control scheme for force sensitive, gentle grasping
with a Pisa/IIT anthropomorphic SoftHand equipped with a miniaturised version
of the TacTip optical tactile sensor on all five fingertips. The tactile
sensors provide high-resolution information about a grasp and how the fingers
interact with held objects. We first describe a series of hardware developments
for performing asynchronous sensor data acquisition and processing, resulting
in a fast control loop sufficient for real-time grasp control. We then develop
a novel grasp controller that uses tactile feedback from all five fingertip
sensors simultaneously to gently and stably grasp 43 objects of varying
geometry and stiffness, which is then applied to a human-to-robot handover
task. These developments open the door to more advanced manipulation with
underactuated hands via fast reflexive control using high-resolution tactile
sensing.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 14:26:48 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Ford",
"Christopher J.",
""
],
[
"Li",
"Haoran",
""
],
[
"Lloyd",
"John",
""
],
[
"Catalano",
"Manuel G.",
""
],
[
"Bianchi",
"Matteo",
""
],
[
"Psomopoulou",
"Efi",
""
],
[
"Lepora",
"Nathan F.",
""
]
] |
new_dataset
| 0.999313 |
2303.09364
|
Guru Ravi Shanker Ramaguru
|
R Guru Ravi Shanker, B Manikanta Gupta, BV Koushik, Vinoo Alluri
|
Tollywood Emotions: Annotation of Valence-Arousal in Telugu Song Lyrics
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Emotion recognition from a given music track has heavily relied on acoustic
features, social tags, and metadata but has seldom focused on lyrics. There are
no datasets of Indian language songs that contain both valence and arousal
manual ratings of lyrics. We present a new manually annotated dataset of Telugu
songs' lyrics collected from Spotify with valence and arousal annotated on a
discrete scale. A fairly high inter-annotator agreement was observed for both
valence and arousal. Subsequently, we create two music emotion recognition
models by using two classification techniques to identify valence, arousal and
respective emotion quadrant from lyrics. Support vector machine (SVM) with term
frequency-inverse document frequency (TF-IDF) features and fine-tuning the
pre-trained XLMRoBERTa (XLM-R) model were used for valence, arousal and
quadrant classification tasks. Fine-tuned XLMRoBERTa performs better than the
SVM by improving macro-averaged F1-scores of 54.69%, 67.61%, 34.13% to 77.90%,
80.71% and 58.33% for valence, arousal and quadrant classifications,
respectively, on 10-fold cross-validation. In addition, we compare our lyrics
annotations with Spotify's annotations of valence and energy (same as arousal),
which are based on entire music tracks. The implications of our findings are
discussed. Finally, we make the dataset publicly available with lyrics,
annotations and Spotify IDs.
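A minimal baseline in the spirit of the SVM + TF-IDF model above (a sketch
with placeholder toy lyrics and labels, not the authors' pipeline):
```python
# TF-IDF features + linear SVM for discrete valence classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

lyrics = ["toy lyric one", "toy lyric two", "toy lyric three", "toy lyric four"]
valence = ["positive", "negative", "positive", "negative"]  # discrete labels

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(lyrics, valence)
print(model.predict(["another toy lyric"]))
```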
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 14:47:52 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Shanker",
"R Guru Ravi",
""
],
[
"Gupta",
"B Manikanta",
""
],
[
"Koushik",
"BV",
""
],
[
"Alluri",
"Vinoo",
""
]
] |
new_dataset
| 0.999726 |
2303.09384
|
Nicolas E. Diaz Ferreyra PhD
|
Catherine Tony, Markus Mutas, Nicol\'as E. D\'iaz Ferreyra and
Riccardo Scandariato
|
LLMSecEval: A Dataset of Natural Language Prompts for Security
Evaluations
|
Accepted at MSR '23 Data and Tool Showcase Track
| null | null | null |
cs.SE cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) like Codex are powerful tools for performing
code completion and code generation tasks as they are trained on billions of
lines of code from publicly available sources. Moreover, these models are
capable of generating code snippets from Natural Language (NL) descriptions by
learning languages and programming practices from public GitHub repositories.
Although LLMs promise an effortless NL-driven deployment of software
applications, the security of the code they generate has been neither
extensively investigated nor documented. In this work, we present LLMSecEval, a dataset
containing 150 NL prompts that can be leveraged for assessing the security
performance of such models. Such prompts are NL descriptions of code snippets
prone to various security vulnerabilities listed in MITRE's Top 25 Common
Weakness Enumeration (CWE) ranking. Each prompt in our dataset comes with a
secure implementation example to facilitate comparative evaluations against
code produced by LLMs. As a practical application, we show how LLMSecEval can
be used for evaluating the security of snippets automatically generated from NL
descriptions.
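A hypothetical usage sketch follows; the file name and the field names
("prompt", "cwe", "secure_example") are assumptions for illustration, and the
generation function is a stub for whatever LLM is under evaluation -- check
the released dataset for the real schema.
```python
# Generate code from each NL prompt and compare against the secure example.
import json

def generate_code(nl_prompt: str) -> str:
    return "// TODO: replace with the output of the LLM under evaluation"

with open("llmseceval.json") as f:  # hypothetical file name
    prompts = json.load(f)

for item in prompts:
    candidate = generate_code(item["prompt"])
    # compare `candidate` with item["secure_example"] or run a CWE checker here
    print(item["cwe"], len(candidate))
```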
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 15:13:58 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Tony",
"Catherine",
""
],
[
"Mutas",
"Markus",
""
],
[
"Ferreyra",
"Nicolás E. Díaz",
""
],
[
"Scandariato",
"Riccardo",
""
]
] |
new_dataset
| 0.999725 |
2303.09438
|
Shahab Jalalvand
|
Evandro Gouv\^ea, Ali Dadgar, Shahab Jalalvand, Rathi Chengalvarayan,
Badrinath Jayakumar, Ryan Price, Nicholas Ruiz, Jennifer McGovern, Srinivas
Bangalore, Ben Stern
|
Trustera: A Live Conversation Redaction System
|
5
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present Trustera, the first functional system that redacts personally
identifiable information (PII) in real-time spoken conversations, removing
agents' need to
hear sensitive information while preserving the naturalness of live
customer-agent conversations. As opposed to post-call redaction, audio masking
starts as soon as the customer begins speaking to a PII entity. This
significantly reduces the risk of PII being intercepted or stored in insecure
data storage. Trustera's architecture consists of a pipeline of automatic
speech recognition, natural language understanding, and a live audio redactor
module. The system's goal is three-fold: redact entities that are PII, mask the
audio that goes to the agent, and at the same time capture the entity, so that
the captured PII can be used for a payment transaction or caller
identification. Trustera is currently being used by thousands of agents to
secure customers' sensitive information.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 16:13:36 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Gouvêa",
"Evandro",
""
],
[
"Dadgar",
"Ali",
""
],
[
"Jalalvand",
"Shahab",
""
],
[
"Chengalvarayan",
"Rathi",
""
],
[
"Jayakumar",
"Badrinath",
""
],
[
"Price",
"Ryan",
""
],
[
"Ruiz",
"Nicholas",
""
],
[
"McGovern",
"Jennifer",
""
],
[
"Bangalore",
"Srinivas",
""
],
[
"Stern",
"Ben",
""
]
] |
new_dataset
| 0.997383 |
2303.09447
|
Zhuowei Li
|
Zhuowei Li, Long Zhao, Zizhao Zhang, Han Zhang, Di Liu, Ting Liu,
Dimitris N. Metaxas
|
Steering Prototype with Prompt-tuning for Rehearsal-free Continual
Learning
| null | null | null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prototype, as a representation of class embeddings, has been explored to
reduce memory footprint or mitigate forgetting for continual learning
scenarios. However, prototype-based methods still suffer from abrupt
performance deterioration due to semantic drift and prototype interference. In
this study, we propose Contrastive Prototypical Prompt (CPP) and show that
task-specific prompt-tuning, when optimized over a contrastive learning
objective, can effectively address both obstacles and significantly improve the
potency of prototypes. Our experiments demonstrate that CPP excels in four
challenging class-incremental learning benchmarks, resulting in 4% to 6%
absolute improvements over state-of-the-art methods. Moreover, CPP does not
require a rehearsal buffer and it largely bridges the performance gap between
continual learning and offline joint-learning, showcasing a promising design
scheme for continual learning systems under a Transformer architecture.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 16:23:13 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Li",
"Zhuowei",
""
],
[
"Zhao",
"Long",
""
],
[
"Zhang",
"Zizhao",
""
],
[
"Zhang",
"Han",
""
],
[
"Liu",
"Di",
""
],
[
"Liu",
"Ting",
""
],
[
"Metaxas",
"Dimitris N.",
""
]
] |
new_dataset
| 0.986272 |
2303.09463
|
Hyunki Seong
|
Chanyoung Jung, Andrea Finazzi, Hyunki Seong, Daegyu Lee, Seungwook
Lee, Bosung Kim, Gyuri Gang, Seungil Han, David Hyunchul Shim
|
An Autonomous System for Head-to-Head Race: Design, Implementation and
Analysis; Team KAIST at the Indy Autonomous Challenge
|
35 pages, 31 figures, 5 tables, Field Robotics (accepted)
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
While the majority of autonomous driving research has concentrated on
everyday driving scenarios, further safety and performance improvements of
autonomous vehicles require a focus on extreme driving conditions. In this
context, autonomous racing is a new area of research that has been attracting
considerable interest recently. Because a vehicle is driven at its perception,
planning, and control limits during racing, numerous research and
development issues arise. This paper provides a comprehensive overview of the
autonomous racing system built by team KAIST for the Indy Autonomous Challenge
(IAC). Our autonomy stack consists primarily of a multi-modal perception
module, a high-speed overtaking planner, a resilient control stack, and a
system status manager. We present the details of all components of our autonomy
solution, including algorithms, implementation, and unit test results. In
addition, this paper outlines the design principles and the results of a
systematical analysis. Even though our design principles are derived from the
unique application domain of autonomous racing, they can also be applied to a
variety of safety-critical, high-cost-of-failure robotics applications. The
proposed system was integrated into a full-scale autonomous race car (Dallara
AV-21) and field-tested extensively. As a result, team KAIST was one of three
teams who qualified and participated in the official IAC race events without
any accidents. Our proposed autonomous system successfully completed all
missions, including overtaking at speeds of around $220 km/h$ in the
IAC@CES2022, the world's first autonomous 1:1 head-to-head race.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 16:35:40 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Jung",
"Chanyoung",
""
],
[
"Finazzi",
"Andrea",
""
],
[
"Seong",
"Hyunki",
""
],
[
"Lee",
"Daegyu",
""
],
[
"Lee",
"Seungwook",
""
],
[
"Kim",
"Bosung",
""
],
[
"Gang",
"Gyuri",
""
],
[
"Han",
"Seungil",
""
],
[
"Shim",
"David Hyunchul",
""
]
] |
new_dataset
| 0.995277 |
2303.09484
|
Nima Hatami
|
Nima Hatami and Laura Mechtouff and David Rousseau and Tae-Hee Cho and
Omer Eker and Yves Berthezene and Carole Frindel
|
A Novel Autoencoders-LSTM Model for Stroke Outcome Prediction using
Multimodal MRI Data
|
The IEEE International Symposium on Biomedical Imaging (ISBI). arXiv
admin note: text overlap with arXiv:2205.05545
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Patient outcome prediction is critical in management of ischemic stroke. In
this paper, a novel machine learning model is proposed for stroke outcome
prediction using multimodal Magnetic Resonance Imaging (MRI). The proposed
model consists of two serial levels of Autoencoders (AEs), where different AEs
at level 1 are used for learning unimodal features from different MRI
modalities and an AE at level 2 is used to combine the unimodal features into
compressed multimodal features. The sequences of multimodal features of a given
patient are then used by an LSTM network for predicting the outcome score. The
proposed AE2-LSTM model proves to be an effective approach for better
addressing the multimodality and volumetric nature of MRI data. Experimental
results show that the proposed AE2-LSTM outperforms the existing
state-of-the-art models by achieving the highest AUC=0.71 and lowest MAE=0.34.
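A schematic PyTorch sketch of the AE2-LSTM idea (encoders only, with
illustrative layer sizes and hypothetical modality names; the decoders and
reconstruction losses of the AEs are omitted):
```python
# Level-1 AEs give unimodal codes, a level-2 AE fuses them, and an LSTM
# maps the sequence of fused codes to an outcome score.
import torch
import torch.nn as nn

enc1_dwi = nn.Linear(1024, 128)    # level-1 encoder, DWI modality (example)
enc1_flair = nn.Linear(1024, 128)  # level-1 encoder, FLAIR modality (example)
enc2 = nn.Linear(256, 64)          # level-2 multimodal encoder
lstm = nn.LSTM(input_size=64, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)            # outcome score

dwi, flair = torch.randn(2, 10, 1024), torch.randn(2, 10, 1024)  # (B, T, feat)
fused = enc2(torch.cat([enc1_dwi(dwi), enc1_flair(flair)], dim=-1))
_, (h, _) = lstm(fused)
print(head(h[-1]).shape)  # torch.Size([2, 1])
```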
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 17:00:45 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Hatami",
"Nima",
""
],
[
"Mechtouff",
"Laura",
""
],
[
"Rousseau",
"David",
""
],
[
"Cho",
"Tae-Hee",
""
],
[
"Eker",
"Omer",
""
],
[
"Berthezene",
"Yves",
""
],
[
"Frindel",
"Carole",
""
]
] |
new_dataset
| 0.998624 |
2303.09511
|
James Chin-Jen Pang
|
James Chin-Jen Pang, Hessam Mahdavifar, and S. Sandeep Pradhan
|
Capacity-achieving Polar-based Codes with Sparsity Constraints on the
Generator Matrices
|
31 pages, single column. arXiv admin note: substantial text overlap
with arXiv:2012.13977
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we leverage polar codes and the well-established channel
polarization to design capacity-achieving codes with a certain constraint on
the weights of all the columns in the generator matrix (GM) while having a
low-complexity decoding algorithm. We first show that given a binary-input
memoryless symmetric (BMS) channel $W$ and a constant $s \in (0, 1]$, there
exists a polarization kernel such that the corresponding polar code is
capacity-achieving with the \textit{rate of polarization} $s/2$, and the GM
column weights being bounded from above by $N^s$. To improve the sparsity
versus error rate trade-off, we devise a column-splitting algorithm and two
coding schemes for BEC and then for general BMS channels. The
\textit{polar-based} codes generated by the two schemes inherit several
fundamental properties of polar codes with the original $2 \times 2$ kernel
including the decay in error probability, decoding complexity, and the
capacity-achieving property. Furthermore, they demonstrate the additional
property that their GM column weights are bounded from above sublinearly in
$N$, while the original polar codes have some column weights that are linear in
$N$. In particular, for any BEC and $\beta <0.5$, the existence of a sequence
of capacity-achieving polar-based codes where all the GM column weights are
bounded from above by $N^\lambda$ with $\lambda \approx 0.585$, and with the
error probability bounded by $O(2^{-N^{\beta}} )$ under a decoder with
complexity $O(N\log N)$, is shown. The existence of similar capacity-achieving
polar-based codes with the same decoding complexity is shown for any BMS
channel and $\beta <0.5$ with $\lambda \approx 0.631$.
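For contrast with the sublinear column weights above, recall the standard
polar code generator matrix (well-known facts, not part of this paper's new
construction):
```latex
\[
  F = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \qquad
  G_N = F^{\otimes n}, \qquad N = 2^n .
\]
% The first column of G_N is the all-ones vector, so the original polar GM
% has a column of weight N (linear in N), whereas the schemes above keep all
% column weights below N^{\lambda} with \lambda < 1.
```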
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 17:29:05 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Pang",
"James Chin-Jen",
""
],
[
"Mahdavifar",
"Hessam",
""
],
[
"Pradhan",
"S. Sandeep",
""
]
] |
new_dataset
| 0.958109 |
2303.09534
|
Mai Nishimura
|
Mai Nishimura, Shohei Nobuhara, Ko Nishino
|
InCrowdFormer: On-Ground Pedestrian World Model From Egocentric Views
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce an on-ground Pedestrian World Model, a computational model that
can predict how pedestrians move around an observer in the crowd on the ground
plane, but from just the egocentric views of the observer. Our model,
InCrowdFormer, fully leverages the Transformer architecture by modeling
pedestrian interaction and egocentric to top-down view transformation with
attention, and autoregressively predicts on-ground positions of a variable
number of people with an encoder-decoder architecture. We encode the
uncertainties arising from unknown pedestrian heights with latent codes to
predict the posterior distributions of pedestrian positions. We validate the
effectiveness of InCrowdFormer on a novel prediction benchmark of real
movements. The results show that InCrowdFormer accurately predicts the future
coordinates of pedestrians. To the best of our knowledge, InCrowdFormer is the
first-of-its-kind pedestrian world model which we believe will benefit a wide
range of egocentric-view applications including crowd navigation, tracking, and
synthesis.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 17:51:02 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Nishimura",
"Mai",
""
],
[
"Nobuhara",
"Shohei",
""
],
[
"Nishino",
"Ko",
""
]
] |
new_dataset
| 0.98532 |
2303.09553
|
Justin Kerr
|
Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, Matthew
Tancik
|
LERF: Language Embedded Radiance Fields
|
Project website can be found at https://lerf.io
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans describe the physical world using natural language to refer to
specific 3D locations based on a vast range of properties: visual appearance,
semantics, abstract associations, or actionable affordances. In this work we
propose Language Embedded Radiance Fields (LERFs), a method for grounding
language embeddings from off-the-shelf models like CLIP into NeRF, which enable
these types of open-ended language queries in 3D. LERF learns a dense,
multi-scale language field inside NeRF by volume rendering CLIP embeddings
along training rays, supervising these embeddings across training views to
provide multi-view consistency and smooth the underlying language field. After
optimization, LERF can extract 3D relevancy maps for a broad range of language
prompts interactively in real-time, which has potential use cases in robotics,
understanding vision-language models, and interacting with 3D scenes. LERF
enables pixel-aligned, zero-shot queries on the distilled 3D CLIP embeddings
without relying on region proposals or masks, supporting long-tail
open-vocabulary queries hierarchically across the volume. The project website
can be found at https://lerf.io .
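
As a rough sketch of the rendering step described above -- not the authors' implementation, and omitting LERF's multi-scale field and negative-phrase normalization -- per-sample CLIP embeddings can be alpha-composited with standard volume-rendering weights and scored against a text embedding:

```python
import numpy as np

def render_language_embedding(sigmas, deltas, point_embeds):
    """Alpha-composite per-sample CLIP embeddings along a ray using standard
    volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i))."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    embed = (trans * alphas) @ point_embeds          # (D,)
    return embed / (np.linalg.norm(embed) + 1e-8)

def relevancy(ray_embed, text_embed):
    # Cosine similarity as a crude stand-in for LERF's relevancy score.
    return float(ray_embed @ text_embed) / (np.linalg.norm(text_embed) + 1e-8)

S, D = 64, 512                                       # samples per ray, CLIP dim
rng = np.random.default_rng(0)
e = render_language_embedding(rng.uniform(0, 2, S), np.full(S, 0.05),
                              rng.normal(size=(S, D)))
print(relevancy(e, rng.normal(size=D)))
```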
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 17:59:20 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Kerr",
"Justin",
""
],
[
"Kim",
"Chung Min",
""
],
[
"Goldberg",
"Ken",
""
],
[
"Kanazawa",
"Angjoo",
""
],
[
"Tancik",
"Matthew",
""
]
] |
new_dataset
| 0.999328 |
2303.09555
|
Chuang Gan
|
Tsun-Hsuan Wang, Pingchuan Ma, Andrew Everett Spielberg, Zhou Xian,
Hao Zhang, Joshua B. Tenenbaum, Daniela Rus, Chuang Gan
|
SoftZoo: A Soft Robot Co-design Benchmark For Locomotion In Diverse
Environments
|
ICLR 2023. Project page:
https://sites.google.com/view/softzoo-iclr-2023
| null | null | null |
cs.RO cs.AI cs.CV cs.GR cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
While significant research progress has been made in robot learning for
control, unique challenges arise when simultaneously co-optimizing morphology.
Existing work has typically been tailored for particular environments or
representations. In order to more fully understand inherent design and
performance tradeoffs and accelerate the development of new breeds of soft
robots, a comprehensive virtual platform with well-established tasks,
environments, and evaluation metrics is needed. In this work, we introduce
SoftZoo, a soft robot co-design platform for locomotion in diverse
environments. SoftZoo supports an extensive, naturally-inspired material set,
including the ability to simulate environments such as flat ground, desert,
wetland, clay, ice, snow, shallow water, and ocean. Further, it provides a
variety of tasks relevant for soft robotics, including fast locomotion, agile
turning, and path following, as well as differentiable design representations
for morphology and control. Combined, these elements form a feature-rich
platform for analysis and development of soft robot co-design algorithms. We
benchmark prevalent representations and co-design algorithms, and shed light on
1) the interplay between environment, morphology, and behavior; 2) the
importance of design space representations; 3) the ambiguity in muscle
formation and controller synthesis; and 4) the value of differentiable physics.
We envision that SoftZoo will serve as a standard platform and template an
approach toward the development of novel representations and algorithms for
co-designing soft robots' behavioral and morphological intelligence.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 17:59:50 GMT"
}
] | 2023-03-17T00:00:00 |
[
[
"Wang",
"Tsun-Hsuan",
""
],
[
"Ma",
"Pingchuan",
""
],
[
"Spielberg",
"Andrew Everett",
""
],
[
"Xian",
"Zhou",
""
],
[
"Zhang",
"Hao",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Rus",
"Daniela",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.99956 |
2107.03092
|
Yasuaki Kobayashi
|
Takehiro Ito, Yuni Iwamasa, Yasuaki Kobayashi, Yu Nakahata, Yota
Otachi, Kunihiro Wasa
|
Reconfiguring (non-spanning) arborescences
|
14 pages. This is a post-peer-review, pre-copyedit version of an
article published in Theoretical Computer Science. The final authenticated
version is available online at https://doi.org/10.1016/j.tcs.2022.12.007
| null |
10.1016/j.tcs.2022.12.007
| null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the computational complexity of subgraph
reconfiguration problems in directed graphs. More specifically, we focus on the
problem of reconfiguring arborescences in a digraph, where an arborescence is a
directed graph such that its underlying undirected graph forms a tree and all
vertices have in-degree at most 1. Given two arborescences in a digraph, the
goal of the problem is to determine whether there is a (reconfiguration)
sequence of arborescences between the given arborescences such that each
arborescence in the sequence can be obtained from the previous one by removing
an arc and then adding another arc. We show that this problem can be solved in
polynomial time, whereas the problem is PSPACE-complete when we restrict
arborescences in a reconfiguration sequence to directed paths or relax to
directed acyclic graphs. We also show that there is a polynomial-time algorithm
for finding a shortest reconfiguration sequence between two spanning
arborescences.
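
As a hedged illustration of the object being reconfigured, the following Python sketch checks the paper's (possibly non-spanning) arborescence condition on a hypothetical arc set; the function name and examples are ours, not the authors'.

```python
import networkx as nx

def is_arborescence(arcs):
    """Arborescence in the paper's sense: the underlying undirected graph is
    a tree and every vertex has in-degree at most 1 (spanning not required)."""
    D = nx.DiGraph(arcs)
    return nx.is_tree(D.to_undirected()) and all(d <= 1 for _, d in D.in_degree())

# One reconfiguration step removes an arc and adds another, staying within
# the set of arborescences of the same digraph.
A1 = [(0, 1), (1, 2)]   # directed path rooted at 0
A2 = [(0, 1), (0, 2)]   # out-star rooted at 0
assert is_arborescence(A1) and is_arborescence(A2)
```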
|
[
{
"version": "v1",
"created": "Wed, 7 Jul 2021 09:18:00 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 23:52:14 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Ito",
"Takehiro",
""
],
[
"Iwamasa",
"Yuni",
""
],
[
"Kobayashi",
"Yasuaki",
""
],
[
"Nakahata",
"Yu",
""
],
[
"Otachi",
"Yota",
""
],
[
"Wasa",
"Kunihiro",
""
]
] |
new_dataset
| 0.99729 |
2107.12003
|
Seyun Um
|
Se-Yun Um, Jihyun Kim, Jihyun Lee, and Hong-Goo Kang
|
Facetron: A Multi-speaker Face-to-Speech Model based on Cross-modal
Latent Representations
|
5 pages (including references), 1 figure
| null | null | null |
cs.CV cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a multi-speaker face-to-speech waveform generation
model that also works for unseen speaker conditions. Using a generative
adversarial network (GAN) with linguistic and speaker characteristic features
as auxiliary conditions, our method directly converts face images into speech
waveforms under an end-to-end training framework. The linguistic features are
extracted from lip movements using a lip-reading model, and the speaker
characteristic features are predicted from face images using cross-modal
learning with a pre-trained acoustic model. Since these two features are
uncorrelated and controlled independently, we can flexibly synthesize speech
waveforms whose speaker characteristics vary depending on the input face
images. We show the superiority of our proposed model over conventional methods
in terms of objective and subjective evaluation results. Specifically, we
evaluate the performances of linguistic features by measuring their accuracy on
an automatic speech recognition task. In addition, we estimate speaker and
gender similarity for multi-speaker and unseen conditions, respectively. We
also evaluate the naturalness of the synthesized speech waveforms using a mean
opinion score (MOS) test and non-intrusive objective speech quality assessment
(NISQA). The demo samples of the proposed and other models are available at
https://sam-0927.github.io/
|
[
{
"version": "v1",
"created": "Mon, 26 Jul 2021 07:36:02 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Oct 2022 00:55:49 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Mar 2023 12:28:22 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Um",
"Se-Yun",
""
],
[
"Kim",
"Jihyun",
""
],
[
"Lee",
"Jihyun",
""
],
[
"Kang",
"Hong-Goo",
""
]
] |
new_dataset
| 0.999601 |
2203.11544
|
Nicol\'as Navarro-Guerrero
|
Nicol\'as Navarro-Guerrero, Sibel Toprak, Josip Josifovski, Lorenzo
Jamone
|
Visuo-Haptic Object Perception for Robots: An Overview
|
published in Autonomous Robots
|
Autonomous Robots, 27 (2023)
https://link.springer.com/article/10.1007/s10514-023-10091-y
|
10.1007/s10514-023-10091-y
| null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The object perception capabilities of humans are impressive, and this becomes
even more evident when trying to develop solutions with a similar proficiency
in autonomous robots. While there have been notable advancements in the
technologies for artificial vision and touch, the effective integration of
these two sensory modalities in robotic applications still needs to be
improved, and several open challenges exist. Taking inspiration from how humans
combine visual and haptic perception to perceive object properties and drive
the execution of manual tasks, this article summarises the current state of the
art of visuo-haptic object perception in robots. Firstly, the biological basis
of human multimodal object perception is outlined. Then, the latest advances in
sensing technologies and data collection strategies for robots are discussed.
Next, an overview of the main computational techniques is presented,
highlighting the main challenges of multimodal machine learning and presenting
a few representative articles in the areas of robotic object recognition,
peripersonal space representation and manipulation. Finally, informed by the
latest advancements and open challenges, this article outlines promising new
research directions.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 08:55:36 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 13:30:32 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Mar 2023 15:41:27 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Navarro-Guerrero",
"Nicolás",
""
],
[
"Toprak",
"Sibel",
""
],
[
"Josifovski",
"Josip",
""
],
[
"Jamone",
"Lorenzo",
""
]
] |
new_dataset
| 0.998804 |
2203.14122
|
Davide Basile
|
Davide Basile and Maurice H. ter Beek
|
A Runtime Environment for Contract Automata
| null | null |
10.1007/978-3-031-27481-7_31
| null |
cs.SE cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Contract automata have been introduced for specifying applications through
behavioural contracts and for synthesising their orchestrations as finite state
automata. This paper addresses the realisation of applications from contract
automata specifications. We present CARE, a new runtime environment to
coordinate services implementing contracts that guarantees the adherence of the
implementation to its contract. We discuss how CARE can be adopted to realise
contract-based applications, its formal guarantees, and we identify the
responsibilities of the involved business actors. Experiments show the benefits
of adopting CARE with respect to manual implementations.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2022 17:48:23 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 16:02:10 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Basile",
"Davide",
""
],
[
"ter Beek",
"Maurice H.",
""
]
] |
new_dataset
| 0.999777 |
2206.14767
|
Lindsey Kuper
|
Patrick Redmond, Gan Shen, Niki Vazou, Lindsey Kuper
|
Verified Causal Broadcast with Liquid Haskell
|
Appeared at IFL 2022
| null |
10.1145/3587216.3587222
| null |
cs.PL cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Protocols to ensure that messages are delivered in causal order are a
ubiquitous building block of distributed systems. For instance, distributed
data storage systems can use causally ordered message delivery to ensure causal
consistency, and CRDTs can rely on the existence of an underlying
causally-ordered messaging layer to simplify their implementation. A causal
delivery protocol ensures that when a message is delivered to a process, any
causally preceding messages sent to the same process have already been
delivered to it. While causal delivery protocols are widely used, verification
of their correctness is less common, much less machine-checked proofs about
executable implementations.
We implemented a standard causal broadcast protocol in Haskell and used the
Liquid Haskell solver-aided verification system to express and mechanically
prove that messages will never be delivered to a process in an order that
violates causality. We express this property using refinement types and prove
that it holds of our implementation, taking advantage of Liquid Haskell's
underlying SMT solver to automate parts of the proof and using its manual
theorem-proving features for the rest. We then put our verified causal
broadcast implementation to work as the foundation of a distributed key-value
store.
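
For intuition, here is a minimal Python sketch of the vector-clock deliverability condition at the heart of such protocols; this is our own illustrative code, not the verified Liquid Haskell implementation.

```python
from typing import Callable, List

def deliverable(vc: List[int], local: List[int], sender: int) -> bool:
    """Message (vc, sender) is deliverable iff it is the next one from the
    sender and every causal dependency has already been delivered locally."""
    return vc[sender] == local[sender] + 1 and all(
        vc[k] <= local[k] for k in range(len(vc)) if k != sender)

class Process:
    def __init__(self, pid: int, n: int):
        self.pid, self.clock, self.buffer = pid, [0] * n, []

    def receive(self, vc: List[int], sender: int, deliver: Callable) -> None:
        self.buffer.append((vc, sender))
        progress = True
        while progress:                    # drain newly deliverable messages
            progress = False
            for msg in list(self.buffer):
                if deliverable(msg[0], self.clock, msg[1]):
                    self.buffer.remove(msg)
                    self.clock[msg[1]] += 1
                    deliver(msg)
                    progress = True

p = Process(0, n=3)
p.receive([0, 1, 1], sender=2, deliver=print)   # buffered: needs msg from 1
p.receive([0, 1, 0], sender=1, deliver=print)   # both delivered, causal order
```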
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 16:58:21 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Aug 2022 20:53:50 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Mar 2023 14:09:21 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Redmond",
"Patrick",
""
],
[
"Shen",
"Gan",
""
],
[
"Vazou",
"Niki",
""
],
[
"Kuper",
"Lindsey",
""
]
] |
new_dataset
| 0.965617 |
2209.10150
|
Zhenhua Xu
|
Zhenhua Xu, Yuxuan Liu, Yuxiang Sun, Ming Liu, Lujia Wang
|
RNGDet++: Road Network Graph Detection by Transformer with Instance
Segmentation and Multi-scale Features Enhancement
|
Accepted by IEEE Robotics and Automation Letters (RA-L)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The road network graph is a critical component for downstream tasks in
autonomous driving, such as global route planning and navigation. In the past
years, road network graphs have usually been annotated by human experts
manually, which is time-consuming and labor-intensive. To annotate road network
graphs effectively and efficiently, automatic algorithms for road network graph
detection are needed. Most existing methods either adopt a post-processing
step on semantic segmentation maps to produce road network graphs, or propose
graph-based algorithms to directly predict the graphs. However, these works
suffer from hard-coded algorithms and inferior performance. To enhance the
previous state-of-the-art (SOTA) method RNGDet, we add an instance segmentation
head to better supervise the training, and enable the network to leverage
multi-scale features of the backbone. Since the new proposed approach is
improved from RNGDet, we name it RNGDet++. Experimental results show that our
RNGDet++ outperforms baseline methods in terms of almost all evaluation metrics
on two large-scale public datasets. Our code and supplementary materials are
available at \url{https://tonyxuqaq.github.io/projects/RNGDetPlusPlus/}.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 07:06:46 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 03:05:12 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Xu",
"Zhenhua",
""
],
[
"Liu",
"Yuxuan",
""
],
[
"Sun",
"Yuxiang",
""
],
[
"Liu",
"Ming",
""
],
[
"Wang",
"Lujia",
""
]
] |
new_dataset
| 0.981721 |
2210.06575
|
Qiyu Dai
|
Qiyu Dai, Yan Zhu, Yiran Geng, Ciyu Ruan, Jiazhao Zhang, He Wang
|
GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and
Specular Objects Using Generalizable NeRF
|
IEEE International Conference on Robotics and Automation (ICRA), 2023
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we tackle 6-DoF grasp detection for transparent and specular
objects, which is an important yet challenging problem in vision-based robotic
systems, due to the failure of depth cameras in sensing their geometry. We, for
the first time, propose a multiview RGB-based 6-DoF grasp detection network,
GraspNeRF, that leverages the generalizable neural radiance field (NeRF) to
achieve material-agnostic object grasping in clutter. Compared to the existing
NeRF-based 3-DoF grasp detection methods that rely on densely captured input
images and time-consuming per-scene optimization, our system can perform
zero-shot NeRF construction with sparse RGB inputs and reliably detect 6-DoF
grasps, both in real-time. The proposed framework jointly learns generalizable
NeRF and grasp detection in an end-to-end manner, optimizing the scene
representation construction for the grasping. For training data, we generate a
large-scale photorealistic domain-randomized synthetic dataset of grasping in
cluttered tabletop scenes that enables direct transfer to the real world. Our
extensive experiments in synthetic and real-world environments demonstrate that
our method significantly outperforms all the baselines in all the experiments
while remaining in real-time. Project page can be found at
https://pku-epic.github.io/GraspNeRF
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 20:31:23 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 07:26:40 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Mar 2023 17:35:57 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Dai",
"Qiyu",
""
],
[
"Zhu",
"Yan",
""
],
[
"Geng",
"Yiran",
""
],
[
"Ruan",
"Ciyu",
""
],
[
"Zhang",
"Jiazhao",
""
],
[
"Wang",
"He",
""
]
] |
new_dataset
| 0.987069 |
2210.10910
|
Qin Wang
|
Qin Wang, Guangsheng Yu, Shange Fu, Shiping Chen, Jiangshan Yu, Sherry
Xu
|
A Referable NFT Scheme
|
Accepted by CryptoEx@ICBC 2023; Align with EIP-5521
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Existing NFTs face the restrictions of \textit{one-time incentives} and
\textit{product isolation}: creators cannot obtain further benefits once their
NFT products are sold, due to the lack of relationships across different NFTs,
which makes profit sharing contentious. This work proposes a
referable NFT scheme to extend the incentive sustainability of NFTs. We
construct the referable NFT (rNFT) network to increase exposure and enhance the
referring relationship of inclusive items. We introduce the DAG topology to
generate directed edges between each pair of NFTs with corresponding weights
and labels for advanced usage. We accordingly implement and propose the scheme
under Ethereum Improvement Proposal (EIP) standards, indexed in EIP-1155.
Further, we provide the mathematical formation to analyze the utility for each
rNFT participant. The discussion gives general guidance among multi-dimensional
parameters. To our knowledge, this is the first study to build the referable
NFT network, explicitly showing the virtual connections among NFTs.
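
A toy Python sketch of the DAG topology described above follows; the class and method names are hypothetical, and the actual scheme is specified as an EIP rather than in Python.

```python
class RNFTRegistry:
    """Toy registry of weighted, labeled 'refers-to' edges between token ids,
    kept acyclic as in the paper's DAG topology."""
    def __init__(self):
        self.refs = {}                       # token -> {referred: (weight, label)}

    def _reaches(self, src, dst):
        stack, seen = [src], set()
        while stack:
            cur = stack.pop()
            if cur == dst:
                return True
            if cur not in seen:
                seen.add(cur)
                stack.extend(self.refs.get(cur, {}))
        return False

    def add_reference(self, token, referred, weight=1.0, label=""):
        if self._reaches(referred, token):
            raise ValueError("reference would create a cycle")
        self.refs.setdefault(token, {})[referred] = (weight, label)

reg = RNFTRegistry()
reg.add_reference(2, 1, weight=0.7, label="derivative-of")
reg.add_reference(3, 2, weight=0.3, label="remix-of")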
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 22:19:41 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 02:00:43 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Wang",
"Qin",
""
],
[
"Yu",
"Guangsheng",
""
],
[
"Fu",
"Shange",
""
],
[
"Chen",
"Shiping",
""
],
[
"Yu",
"Jiangshan",
""
],
[
"Xu",
"Sherry",
""
]
] |
new_dataset
| 0.987551 |
2210.15363
|
Atul Shriwastva
|
Atul Kumar Shriwastva, R. S. Selvaraj
|
Block Codes on Pomset Metric
|
15 Pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a regular multiset $M$ on $[n]=\{1,2,\ldots,n\}$, a partial order $R$
on $M$, and a label map $\pi : [n] \rightarrow \mathbb{N}$ defined by $\pi(i) =
k_i$ with $\sum_{i=1}^{n}\pi (i) = N$, we define a pomset block metric
$d_{(Pm,\pi)}$ on the direct sum $ \mathbb{Z}_{m}^{k_1} \oplus
\mathbb{Z}_{m}^{k_2} \oplus \ldots \oplus \mathbb{Z}_{m}^{k_n}$ of
$\mathbb{Z}_{m}^{N}$ based on the pomset $\mathbb{P}=(M,R)$. The pomset block
metric extends the classical pomset metric introduced by I. G. Sudha and R. S.
Selvaraj and generalizes the poset block metric introduced by M. M. S. Alves et
al. over $\mathbb{Z}_m$. The space $ (\mathbb{Z}_{m}^N,~d_{(Pm,\pi)} ) $ is
called the pomset block space and we determine the complete weight distribution
of it. Further, $I$-perfect pomset block codes for ideals with partial and full
counts are described. Then, for block codes with chain pomset, the packing
radius and Singleton bound are established. The relation between MDS codes and
$I$-perfect codes for any ideal $I$ is investigated. Moreover, the duality
theorem for an MDS pomset block code is established when all the blocks have
the same size.
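
As a simplified illustration, the following Python sketch computes the poset block weight that the pomset block metric generalizes, deliberately ignoring the multiset counts of the pomset setting; the chain poset and block sizes are a hypothetical example.

```python
def block_weight(x, blocks, leq):
    """blocks: {label: (start, end)} slicing x; leq: set of (a, b) pairs, a < b.
    Weight = size of the order ideal generated by the nonzero blocks."""
    support = {i for i, (s, e) in blocks.items() if any(x[s:e])}
    return len({j for j in blocks
                if j in support or any((j, i) in leq for i in support)})

def block_distance(x, y, blocks, leq, m):
    return block_weight([(a - b) % m for a, b in zip(x, y)], blocks, leq)

# Chain poset 1 < 2 < 3 with block sizes k = (2, 1, 2) over Z_4:
blocks = {1: (0, 2), 2: (2, 3), 3: (3, 5)}
leq = {(1, 2), (1, 3), (2, 3)}
print(block_distance([0, 0, 0, 1, 0], [0] * 5, blocks, leq, m=4))  # ideal {1,2,3} -> 3
```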
|
[
{
"version": "v1",
"created": "Thu, 27 Oct 2022 12:23:02 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 06:14:59 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Shriwastva",
"Atul Kumar",
""
],
[
"Selvaraj",
"R. S.",
""
]
] |
new_dataset
| 0.999727 |
2210.15447
|
Takaaki Saeki
|
Takaaki Saeki, Heiga Zen, Zhehuai Chen, Nobuyuki Morioka, Gary Wang,
Yu Zhang, Ankur Bapna, Andrew Rosenberg, Bhuvana Ramabhadran
|
Virtuoso: Massive Multilingual Speech-Text Joint Semi-Supervised
Learning for Text-To-Speech
|
To appear in ICASSP 2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes Virtuoso, a massively multilingual speech-text joint
semi-supervised learning framework for text-to-speech synthesis (TTS) models.
Existing multilingual TTS typically supports tens of languages, which are a
small fraction of the thousands of languages in the world. One difficulty to
scale multilingual TTS to hundreds of languages is collecting high-quality
speech-text paired data in low-resource languages. This study extends Maestro,
a speech-text joint pretraining framework for automatic speech recognition
(ASR), to speech generation tasks. To train a TTS model from various types of
speech and text data, different training schemes are designed to handle
supervised (paired TTS and ASR data) and unsupervised (untranscribed speech and
unspoken text) datasets. Experimental evaluation shows that 1) multilingual TTS
models trained on Virtuoso can achieve significantly better naturalness and
intelligibility than baseline ones in seen languages, and 2) they can
synthesize reasonably intelligible and naturally sounding speech for unseen
languages where no high-quality paired TTS data is available.
|
[
{
"version": "v1",
"created": "Thu, 27 Oct 2022 14:09:48 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 10:52:03 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Saeki",
"Takaaki",
""
],
[
"Zen",
"Heiga",
""
],
[
"Chen",
"Zhehuai",
""
],
[
"Morioka",
"Nobuyuki",
""
],
[
"Wang",
"Gary",
""
],
[
"Zhang",
"Yu",
""
],
[
"Bapna",
"Ankur",
""
],
[
"Rosenberg",
"Andrew",
""
],
[
"Ramabhadran",
"Bhuvana",
""
]
] |
new_dataset
| 0.981952 |
2211.12036
|
Suhwan Cho
|
Suhwan Cho, Minhyeok Lee, Seunghoon Lee, Dogyoon Lee, Sangyoun Lee
|
Dual Prototype Attention for Unsupervised Video Object Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised video object segmentation (VOS) aims to detect and segment the
most salient object in videos. The primary techniques used in unsupervised VOS
are 1) the collaboration of appearance and motion information and 2) temporal
fusion between different frames. This paper proposes two novel prototype-based
attention mechanisms, inter-modality attention (IMA) and inter-frame attention
(IFA), to incorporate these techniques via dense propagation across different
modalities and frames. IMA densely integrates context information from
different modalities based on a mutual refinement. IFA injects global context
of a video to the query frame, enabling a full utilization of useful properties
from multiple frames. Experimental results on public benchmark datasets
demonstrate that our proposed approach outperforms all existing methods by a
substantial margin. The two proposed components are also thoroughly validated
via an ablation study.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 06:19:17 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 07:11:13 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Cho",
"Suhwan",
""
],
[
"Lee",
"Minhyeok",
""
],
[
"Lee",
"Seunghoon",
""
],
[
"Lee",
"Dogyoon",
""
],
[
"Lee",
"Sangyoun",
""
]
] |
new_dataset
| 0.998209 |
2211.13843
|
Joshua Pinskier
|
Josh Pinskier, Prabhat Kumar, Matthijs Langelaar, and David Howard
|
Automated design of pneumatic soft grippers through design-dependent
multi-material topology optimization
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Soft robotic grasping has rapidly spread through the academic robotics
community in recent years and pushed into industrial applications. At the same
time, multimaterial 3D printing has become widely available, enabling the
monolithic manufacture of devices containing rigid and elastic sections. We
propose a novel design technique that leverages both technologies and can
automatically design bespoke soft robotic grippers for fruit-picking and
similar applications. We demonstrate a novel topology optimisation formulation
that generates multi-material soft grippers and can solve internal and external
pressure boundaries, and we investigate methods to produce air-tight designs.
Compared to existing methods, it vastly expands the searchable design
space while increasing simulation accuracy.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 00:42:04 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 02:30:57 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Pinskier",
"Josh",
""
],
[
"Kumar",
"Prabhat",
""
],
[
"Langelaar",
"Matthijs",
""
],
[
"Howard",
"David",
""
]
] |
new_dataset
| 0.979164 |
2212.09501
|
Stylianos Venieris
|
Stylianos I. Venieris and Mario Almeida and Royson Lee and Nicholas D.
Lane
|
NAWQ-SR: A Hybrid-Precision NPU Engine for Efficient On-Device
Super-Resolution
|
Accepted for publication at the IEEE Transactions on Mobile Computing
(TMC), 2023
| null |
10.1109/TMC.2023.3255822
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, image and video delivery systems have begun integrating deep
learning super-resolution (SR) approaches, leveraging their unprecedented
visual enhancement capabilities while reducing reliance on networking
conditions. Nevertheless, deploying these solutions on mobile devices still
remains an active challenge as SR models are excessively demanding with respect
to workload and memory footprint. Despite recent progress on on-device SR
frameworks, existing systems either penalize visual quality, lead to excessive
energy consumption or make inefficient use of the available resources. This
work presents NAWQ-SR, a novel framework for the efficient on-device execution
of SR models. Through a novel hybrid-precision quantization technique and a
runtime neural image codec, NAWQ-SR exploits the multi-precision capabilities
of modern mobile NPUs in order to minimize latency, while meeting
user-specified quality constraints. Moreover, NAWQ-SR selectively adapts the
arithmetic precision at run time to equip the SR DNN's layers with wider
representational power, improving visual quality beyond what was previously
possible on NPUs. Altogether, NAWQ-SR achieves an average speedup of 7.9x, 3x
and 1.91x over the state-of-the-art on-device SR systems that use heterogeneous
processors (MobiSR), CPU (SplitSR) and NPU (XLSR), respectively. Furthermore,
NAWQ-SR delivers an average of 3.2x speedup and 0.39 dB higher PSNR over
status-quo INT8 NPU designs, but most importantly mitigates the negative
effects of quantization on visual quality, setting a new state-of-the-art in
the attainable quality of NPU-based SR.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 23:51:18 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 23:25:05 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Mar 2023 11:48:37 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Venieris",
"Stylianos I.",
""
],
[
"Almeida",
"Mario",
""
],
[
"Lee",
"Royson",
""
],
[
"Lane",
"Nicholas D.",
""
]
] |
new_dataset
| 0.998447 |
2301.06625
|
Huayu Li
|
Ping Chang, Huayu Li, Stuart F. Quan, Shuyang Lu, Shu-Fen Wung, Janet
Roveda and Ao Li
|
TDSTF: Transformer-based Diffusion probabilistic model for Sparse Time
series Forecasting
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background and objective: In the intensive care unit (ICU), vital sign
monitoring is critical, and an accurate predictive system is required. This
study will create a novel model to forecast Heart Rate (HR), Systolic Blood
Pressure (SBP), and Diastolic Blood Pressure (DBP) in ICU. These vital signs
are crucial for prompt interventions for patients. We extracted $24,886$ ICU
stays from the MIMIC-III database, which contains data from over $46$ thousand
patients, to train and test the model. Methods: The model proposed in this
study, the Transformer-based Diffusion probabilistic model for Sparse Time
series Forecasting (TDSTF), uses a deep learning technique called the
Transformer. The TDSTF model showed state-of-the-art performance in predicting
vital signs in the ICU, outperforming other models in predicting distributions
of vital signs and being more computationally efficient. The code is available at
https://github.com/PingChang818/TDSTF. Results: The results of the study showed
that TDSTF achieved a Normalized Average Continuous Ranked Probability Score
(NACRPS) of $0.4438$ and a Mean Squared Error (MSE) of $0.4168$, an improvement
of $18.9\%$ and $34.3\%$ over the best baseline model, respectively.
Conclusion: In conclusion, TDSTF is an effective and efficient solution for
forecasting vital signs in the ICU, and it shows a significant improvement
compared to other models in the field.
|
[
{
"version": "v1",
"created": "Mon, 16 Jan 2023 22:22:04 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 01:35:55 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Mar 2023 04:13:03 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Chang",
"Ping",
""
],
[
"Li",
"Huayu",
""
],
[
"Quan",
"Stuart F.",
""
],
[
"Lu",
"Shuyang",
""
],
[
"Wung",
"Shu-Fen",
""
],
[
"Roveda",
"Janet",
""
],
[
"Li",
"Ao",
""
]
] |
new_dataset
| 0.994046 |
2302.10390
|
Ke Yu
|
Ke Yu, Li Sun, Junxiang Chen, Max Reynolds, Tigmanshu Chaudhary,
Kayhan Batmanghelich
|
DrasCLR: A Self-supervised Framework of Learning Disease-related and
Anatomy-specific Representation for 3D Medical Images
|
Added some recent references
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale volumetric medical images with annotation are rare, costly, and
time prohibitive to acquire. Self-supervised learning (SSL) offers a promising
pre-training and feature extraction solution for many downstream tasks, as it
only uses unlabeled data. Recently, SSL methods based on instance
discrimination have gained popularity in the medical imaging domain. However,
SSL pre-trained encoders may rely on many clues in the image that are not
necessarily disease-related to discriminate an instance. Moreover, pathological
patterns are often subtle and heterogeneous, requiring the ability of the
desired method to represent anatomy-specific features that are sensitive to
abnormal changes in different body parts. In this work, we present a novel SSL
framework, named DrasCLR, for 3D medical imaging to overcome these challenges.
We propose two domain-specific contrastive learning strategies: one aims to
capture subtle disease patterns inside a local anatomical region, and the other
aims to represent severe disease patterns that span larger regions. We
formulate the encoder using a conditional hyper-parameterized network, in which
the parameters are dependent on the anatomical location, to extract
anatomically sensitive features. Extensive experiments on large-scale computer
tomography (CT) datasets of lung images show that our method improves the
performance of many downstream prediction and segmentation tasks. The
patient-level representation improves the performance of the patient survival
prediction task. We show how our method can detect emphysema subtypes via dense
prediction. We demonstrate that fine-tuning the pre-trained model can
significantly reduce annotation efforts without sacrificing emphysema detection
accuracy. Our ablation study highlights the importance of incorporating
anatomical context into the SSL framework.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 01:32:27 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 15:44:37 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Yu",
"Ke",
""
],
[
"Sun",
"Li",
""
],
[
"Chen",
"Junxiang",
""
],
[
"Reynolds",
"Max",
""
],
[
"Chaudhary",
"Tigmanshu",
""
],
[
"Batmanghelich",
"Kayhan",
""
]
] |
new_dataset
| 0.994723 |
2302.11217
|
Paul Voigtlaender
|
Paul Voigtlaender and Soravit Changpinyo and Jordi Pont-Tuset and Radu
Soricut and Vittorio Ferrari
|
Connecting Vision and Language with Video Localized Narratives
|
Accepted at CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Video Localized Narratives, a new form of multimodal video
annotations connecting vision and language. In the original Localized
Narratives, annotators speak and move their mouse simultaneously on an image,
thus grounding each word with a mouse trace segment. However, this is
challenging on a video. Our new protocol empowers annotators to tell the story
of a video with Localized Narratives, capturing even complex events involving
multiple actors interacting with each other and with several passive objects.
We annotated 20k videos of the OVIS, UVO, and Oops datasets, totalling 1.7M
words. Based on this data, we also construct new benchmarks for the video
narrative grounding and video question answering tasks, and provide reference
results from strong baseline models. Our annotations are available at
https://google.github.io/video-localized-narratives/.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 09:04:00 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 10:30:18 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Voigtlaender",
"Paul",
""
],
[
"Changpinyo",
"Soravit",
""
],
[
"Pont-Tuset",
"Jordi",
""
],
[
"Soricut",
"Radu",
""
],
[
"Ferrari",
"Vittorio",
""
]
] |
new_dataset
| 0.953105 |
2302.11799
|
Qichen Ye
|
Qichen Ye, Bowen Cao, Nuo Chen, Weiyuan Xu, Yuexian Zou
|
FiTs: Fine-grained Two-stage Training for Knowledge-aware Question
Answering
|
Accepted in AAAI 2023, oral
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge-aware question answering (KAQA) requires the model to answer
questions over a knowledge base, which is essential for both open-domain QA and
domain-specific QA, especially when language models alone cannot provide all
the knowledge needed. Despite the promising result of recent KAQA systems which
tend to integrate linguistic knowledge from pre-trained language models (PLM)
and factual knowledge from knowledge graphs (KG) to answer complex questions, a
bottleneck exists in effectively fusing the representations from PLMs and KGs
because of (i) the semantic and distributional gaps between them, and (ii) the
difficulties in joint reasoning over the provided knowledge from both
modalities. To address the above two problems, we propose a Fine-grained
Two-stage training framework (FiTs) to boost the KAQA system performance: The
first stage aims at aligning representations from the PLM and the KG, thus
bridging the modality gaps between them, named knowledge adaptive
post-training. The second stage, called knowledge-aware fine-tuning, aims to
improve the model's joint reasoning ability based on the aligned
representations. In detail, we fine-tune the post-trained model via two
auxiliary self-supervised tasks in addition to the QA supervision. Extensive
experiments demonstrate that our approach achieves state-of-the-art performance
on three benchmarks in the commonsense reasoning (i.e., CommonsenseQA,
OpenbookQA) and medical question answering (i.e., MedQA-USMLE) domains.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 06:25:51 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 14:31:56 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Ye",
"Qichen",
""
],
[
"Cao",
"Bowen",
""
],
[
"Chen",
"Nuo",
""
],
[
"Xu",
"Weiyuan",
""
],
[
"Zou",
"Yuexian",
""
]
] |
new_dataset
| 0.950006 |
2303.08204
|
Miguel \'A. Gonz\'alez-Santamarta
|
Miguel \'A. Gonz\'alez-Santamarta, Francisco J. Rodr\'iguez-Lera,
Vicente Matell\'an Olivera
|
SAILOR: Perceptual Anchoring For Robotic Cognitive Architectures
|
10 pages, 5 figures, 3 tables, 7 algorithms
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Symbolic anchoring is a crucial problem in the field of robotics, as it
enables robots to obtain symbolic knowledge from the perceptual information
acquired through their sensors. In cognitive robots, this conversion of
sub-symbolic data from real-world sensors into symbolic knowledge is still an
open problem. To address this issue, this paper presents SAILOR, a framework
for providing symbolic anchoring in the ROS 2 ecosystem. SAILOR
aims to maintain the link between symbolic data and perceptual data in real
robots over time. It provides a semantic world modeling approach using two deep
learning-based sub-symbolic robotic skills: object recognition and matching
function. The object recognition skill allows the robot to recognize and
identify objects in its environment, while the matching function enables the
robot to decide if new perceptual data corresponds to existing symbolic data.
This paper provides a description of the framework, the pipeline and
development as well as its integration in MERLIN2, a hybrid cognitive
architecture fully functional in robots running ROS 2.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 19:44:23 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"González-Santamarta",
"Miguel Á.",
""
],
[
"Rodríguez-Lera",
"Francisco J.",
""
],
[
"Olivera",
"Vicente Matellán",
""
]
] |
new_dataset
| 0.993076 |
2303.08221
|
Ania Piotrowska
|
Alfredo Rial and Ania M. Piotrowska
|
Compact and Divisible E-Cash with Threshold Issuance
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Decentralized, offline, and privacy-preserving e-cash could fulfil the need
for both scalable and byzantine fault-resistant payment systems. Existing
offline anonymous e-cash schemes are unsuitable for distributed environments
due to a central bank. We construct a distributed offline anonymous e-cash
scheme, in which the role of the bank is performed by a quorum of authorities,
and present its two instantiations. Our first scheme is compact, i.e. the cost
of the issuance protocol and the size of a wallet are independent of the number
of coins issued, but the cost of payment grows linearly with the number of
coins spent. Our second scheme is divisible and thus the cost of payments is
also independent of the number of coins spent, but the verification of deposits
is more costly. We provide formal security proof of both schemes and compare
the efficiency of their implementations.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 20:27:21 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Rial",
"Alfredo",
""
],
[
"Piotrowska",
"Ania M.",
""
]
] |
new_dataset
| 0.996033 |
2303.08264
|
David Chanin
|
David Chanin, Anthony Hunter
|
Neuro-symbolic Commonsense Social Reasoning
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Social norms underlie all human social interactions, yet formalizing and
reasoning with them remains a major challenge for AI systems. We present a
novel system for taking social rules of thumb (ROTs) in natural language from
the Social Chemistry 101 dataset and converting them to first-order logic where
reasoning is performed using a neuro-symbolic theorem prover. We accomplish
this in several steps. First, ROTs are converted into Abstract Meaning
Representation (AMR), which is a graphical representation of the concepts in a
sentence, and align the AMR with RoBERTa embeddings. We then generate alternate
simplified versions of the AMR via a novel algorithm, recombining and merging
embeddings for added robustness against different wordings of text, and
incorrect AMR parses. The AMR is then converted into first-order logic, and is
queried with a neuro-symbolic theorem prover. The goal of this paper is to
develop and evaluate a neuro-symbolic method which performs explicit reasoning
about social situations in a logical form.
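
A toy illustration of the AMR-to-logic step follows, with hand-written triples standing in for a parsed ROT; the real system aligns AMR with RoBERTa embeddings and queries a neuro-symbolic prover.

```python
def amr_to_fol(triples):
    """Instances become unary predicates; other roles become binary ones."""
    clauses = []
    for src, role, tgt in triples:
        if role == ":instance":
            clauses.append(f"{tgt}({src})")
        else:
            clauses.append(f"{role[1:]}({src}, {tgt})")
    return " & ".join(clauses)

rot = [("h", ":instance", "help-01"),   # hypothetical ROT: "you should help friends"
       ("y", ":instance", "you"),
       ("f", ":instance", "friend"),
       ("h", ":ARG0", "y"),
       ("h", ":ARG1", "f")]
print(amr_to_fol(rot))
# -> help-01(h) & you(y) & friend(f) & ARG0(h, y) & ARG1(h, f)
```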
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 22:37:33 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Chanin",
"David",
""
],
[
"Hunter",
"Anthony",
""
]
] |
new_dataset
| 0.99957 |
2303.08303
|
Wei Zhu
|
Wei Zhu, Runtao Zhou, Yao Yuan, Campbell Timothy, Rajat Jain, Jiebo
Luo
|
SegPrompt: Using Segmentation Map as a Better Prompt to Finetune Deep
Models for Kidney Stone Classification
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, deep learning has produced encouraging results for kidney stone
classification using endoscope images. However, the shortage of annotated
training data poses a severe problem in improving the performance and
generalization ability of the trained model. It is thus crucial to fully
exploit the limited data at hand. In this paper, we propose SegPrompt to
alleviate the data shortage problems by exploiting segmentation maps from two
aspects. First, SegPrompt integrates segmentation maps to facilitate
classification training so that the classification model is aware of the
regions of interest. The proposed method allows the image and segmentation
tokens to interact with each other to fully utilize the segmentation map
information. Second, we use the segmentation maps as prompts to tune the
pretrained deep model, resulting in far fewer trainable parameters than
vanilla finetuning. We perform extensive experiments on the collected kidney
stone dataset. The results show that SegPrompt can achieve an advantageous
balance between the model fitting ability and the generalization ability,
eventually leading to an effective model with limited training data.
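
A minimal sketch of the token-interaction idea, assuming patch tokens have already been extracted from the image and the segmentation map; the shapes and modules are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

D, P = 64, 16                                  # embed dim, tokens per modality
img_tokens = torch.randn(1, P, D)              # from the endoscope image
seg_tokens = torch.randn(1, P, D)              # from its segmentation map
cls = torch.zeros(1, 1, D)                     # classification token

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True),
    num_layers=2)
tokens = torch.cat([cls, img_tokens, seg_tokens], dim=1)   # (1, 1 + 2P, D)
features = encoder(tokens)                     # image/segmentation tokens interact
logits = nn.Linear(D, 2)(features[:, 0])       # classify from the CLS token
```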
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 01:30:48 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Zhu",
"Wei",
""
],
[
"Zhou",
"Runtao",
""
],
[
"Yuan",
"Yao",
""
],
[
"Timothy",
"Campbell",
""
],
[
"Jain",
"Rajat",
""
],
[
"Luo",
"Jiebo",
""
]
] |
new_dataset
| 0.964841 |
2303.08314
|
Minhyeok Lee
|
Minhyeok Lee, Suhwan Cho, Dogyoon Lee, Chaewon Park, Jungho Lee,
Sangyoun Lee
|
Guided Slot Attention for Unsupervised Video Object Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised video object segmentation aims to segment the most prominent
object in a video sequence. However, the existence of complex backgrounds and
multiple foreground objects makes this task challenging. To address this issue,
we propose a guided slot attention network to reinforce spatial structural
information and obtain better foreground--background separation. The foreground
and background slots, which are initialized with query guidance, are
iteratively refined based on interactions with template information.
Furthermore, to improve slot--template interaction and effectively fuse global
and local features in the target and reference frames, K-nearest neighbors
filtering and a feature aggregation transformer are introduced. The proposed
model achieves state-of-the-art performance on two popular datasets.
Additionally, we demonstrate the robustness of the proposed model in
challenging scenes through various comparative experiments.
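
For context, here is a minimal PyTorch sketch of the underlying slot-attention update (Locatello et al.) on which such models build; the query guidance and template interaction of this paper are not reproduced.

```python
import torch

def slot_attention_step(slots, inputs, q, k, v, gru):
    """slots: (B, S, D); inputs: (B, N, D). Softmax over slots makes the
    (here: foreground/background) slots compete for input features."""
    logits = torch.einsum('bsd,bnd->bsn', q(slots), k(inputs)) / slots.shape[-1] ** 0.5
    attn = logits.softmax(dim=1)                             # normalize over slots
    attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)    # weighted mean over inputs
    updates = torch.einsum('bsn,bnd->bsd', attn, v(inputs))
    B, S, D = slots.shape
    return gru(updates.reshape(-1, D), slots.reshape(-1, D)).reshape(B, S, D)

B, S, N, D = 2, 2, 196, 64                     # two slots: foreground, background
q, k, v = (torch.nn.Linear(D, D) for _ in range(3))
gru = torch.nn.GRUCell(D, D)
slots, feats = torch.randn(B, S, D), torch.randn(B, N, D)
for _ in range(3):                             # iterative refinement
    slots = slot_attention_step(slots, feats, q, k, v, gru)
```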
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 02:08:20 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Lee",
"Minhyeok",
""
],
[
"Cho",
"Suhwan",
""
],
[
"Lee",
"Dogyoon",
""
],
[
"Park",
"Chaewon",
""
],
[
"Lee",
"Jungho",
""
],
[
"Lee",
"Sangyoun",
""
]
] |
new_dataset
| 0.982734 |
2303.08316
|
Chenhang He
|
Chenhang He, Ruihuang Li, Yabin Zhang, Shuai Li, Lei Zhang
|
MSF: Motion-guided Sequential Fusion for Efficient 3D Object Detection
from Point Cloud Sequences
|
Accepted by CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Point cloud sequences are commonly used to accurately detect 3D objects in
applications such as autonomous driving. Current top-performing multi-frame
detectors mostly follow a Detect-and-Fuse framework, which extracts features
from each frame of the sequence and fuses them to detect the objects in the
current frame. However, this inevitably leads to redundant computation since
adjacent frames are highly correlated. In this paper, we propose an efficient
Motion-guided Sequential Fusion (MSF) method, which exploits the continuity of
object motion to mine useful sequential contexts for object detection in the
current frame. We first generate 3D proposals on the current frame and
propagate them to preceding frames based on the estimated velocities. The
points-of-interest are then pooled from the sequence and encoded as proposal
features. A novel Bidirectional Feature Aggregation (BiFA) module is further
proposed to facilitate the interactions of proposal features across frames.
Besides, we optimize the point cloud pooling by a voxel-based sampling
technique so that millions of points can be processed in several milliseconds.
The proposed MSF method achieves not only better efficiency than other
multi-frame detectors but also leading accuracy, with 83.12% and 78.30% mAP on
the LEVEL1 and LEVEL2 test sets of Waymo Open Dataset, respectively. Codes can
be found at \url{https://github.com/skyhehe123/MSF}.
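
A minimal sketch of the velocity-based proposal propagation described above; the box layout and time step are assumptions.

```python
import numpy as np

def propagate_proposals(boxes, velocities, frame_dt, num_past_frames):
    """boxes: (P, 7) as (x, y, z, dx, dy, dz, heading); velocities: (P, 2)
    ground-plane velocities in m/s. Centers are shifted back in time so
    points of interest can be pooled from every preceding frame."""
    propagated = []
    for t in range(1, num_past_frames + 1):
        shifted = boxes.copy()
        shifted[:, :2] -= velocities * frame_dt * t
        propagated.append(shifted)
    return propagated

boxes = np.array([[10.0, 2.0, 0.5, 4.5, 1.9, 1.6, 0.0]])
vel = np.array([[5.0, 0.0]])                   # hypothetical 5 m/s along x
past = propagate_proposals(boxes, vel, frame_dt=0.1, num_past_frames=3)
```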
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 02:10:27 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"He",
"Chenhang",
""
],
[
"Li",
"Ruihuang",
""
],
[
"Zhang",
"Yabin",
""
],
[
"Li",
"Shuai",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.951121 |
2303.08336
|
Tongyu Zong
|
Tongyu Zong, Yixiang Mao, Chen Li, Yong Liu, Yao Wang
|
Progressive Frame Patching for FoV-based Point Cloud Video Streaming
| null | null | null | null |
cs.MM eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Immersive multimedia applications, such as Virtual, Augmented and Mixed
Reality, have become more practical with advances in hardware and software for
acquiring and rendering 3D media as well as 5G/6G wireless networks. Such
applications require the delivery of volumetric video to users with six degrees
of freedom (6-DoF) movements. Point Cloud has become a popular volumetric video
format due to its flexibility and simplicity. A dense point cloud consumes much
higher bandwidth than a 2D/360 degree video frame. User Field of View (FoV) is
more dynamic with 6-DoF movement than 3-DoF movement. A user's view quality of
a 3D object is affected by points occlusion and distance, which are constantly
changing with user and object movements. To save bandwidth, FoV-adaptive
streaming predicts user FoV and only downloads the data falling in the
predicted FoV, but it is vulnerable to FoV prediction errors, which is
significant when a long buffer is used for smoothed streaming. In this work, we
propose a multi-round progressive refinement framework for point cloud-based
volumetric video streaming. Instead of sequentially downloading frames, we
simultaneously download and patch multiple frames falling into a sliding
time-window, leveraging the scalability of point-cloud coding. The rate
allocation among all tiles of the active frames is solved analytically using
the heterogeneous tile utility functions calibrated by the predicted user FoV.
Multi-frame patching takes advantage of the streaming smoothness resulting from
a long buffer and of the FoV prediction accuracy at short buffer lengths. We evaluate
our solution using simulations driven by real point cloud videos, bandwidth
traces and 6-DoF FoV traces of real users. The experiments show that our
solution is robust against bandwidth/FoV prediction errors, and can deliver
high and smooth quality in the face of bandwidth variations and dynamic user
movements.
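
As a hedged illustration of rate allocation over FoV-weighted tiles, the greedy marginal-utility sketch below spends a bandwidth budget on level upgrades; it is a stand-in for the paper's analytical solution, and the log utilities are hypothetical.

```python
import heapq, math

def allocate(budget, tiles):
    """tiles: {tile_id: (fov_weight, cumulative_level_costs)} -> chosen levels.
    Greedy on marginal utility per bit; w * log1p(level) is a stand-in utility."""
    levels = {t: 0 for t in tiles}
    def gain(t):
        w, costs = tiles[t]
        l = levels[t]
        if l + 1 >= len(costs):
            return None
        du = w * (math.log1p(l + 1) - math.log1p(l))
        return (-du / (costs[l + 1] - costs[l]), t)        # max-heap via negation
    heap = [g for t in tiles if (g := gain(t)) is not None]
    heapq.heapify(heap)
    spent = 0.0
    while heap:
        _, t = heapq.heappop(heap)
        cost = tiles[t][1][levels[t] + 1] - tiles[t][1][levels[t]]
        if spent + cost > budget:
            continue                                       # skip unaffordable upgrade
        spent += cost
        levels[t] += 1
        if (g := gain(t)) is not None:
            heapq.heappush(heap, g)
    return levels

print(allocate(10.0, {"near": (0.9, [0, 2, 5, 9]), "far": (0.2, [0, 1, 3, 6])}))
```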
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 02:54:27 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Zong",
"Tongyu",
""
],
[
"Mao",
"Yixiang",
""
],
[
"Li",
"Chen",
""
],
[
"Liu",
"Yong",
""
],
[
"Wang",
"Yao",
""
]
] |
new_dataset
| 0.98267 |
2303.08364
|
Junbong Jang
|
Junbong Jang, Kwonmoo Lee and Tae-Kyun Kim
|
Unsupervised Contour Tracking of Live Cells by Mechanical and Cycle
Consistency Losses
|
12 pages, 9 figures, Accepted to CVPR 2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Analyzing the dynamic changes of cellular morphology is important for
understanding the various functions and characteristics of live cells,
including stem cells and metastatic cancer cells. To this end, we need to track
all points on the highly deformable cellular contour in every frame of live
cell video. Local shapes and textures on the contour are not evident, and their
motions are complex, often with expansion and contraction of local contour
features. The prior arts for optical flow or deep point set tracking are
unsuited due to the fluidity of cells, and previous deep contour tracking does
not consider point correspondence. We propose the first deep learning-based
tracking of cellular (or more generally viscoelastic materials) contours with
point correspondence by fusing dense representation between two contours with
cross attention. Since it is impractical to manually label dense tracking
points on the contour, unsupervised learning comprised of the mechanical and
cyclical consistency losses is proposed to train our contour tracker. The
mechanical loss, which forces the points to move perpendicular to the contour,
proves especially effective. For quantitative evaluation, we labeled sparse tracking
points along the contour of live cells from two live cell datasets taken with
phase contrast and confocal fluorescence microscopes. Our contour tracker
quantitatively outperforms compared methods and produces qualitatively more
favorable results. Our code and data are publicly available at
https://github.com/JunbongJang/contour-tracking/
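
One plausible reading of the mechanical loss, sketched in PyTorch; this is our assumption-laden reconstruction, not the authors' exact formulation.

```python
import torch

def mechanical_loss(contour, offsets):
    """contour, offsets: (N, 2) closed-contour points and predicted
    displacements. Penalizes the displacement component along the local
    tangent, so points are pushed to move perpendicular to the contour."""
    tangent = torch.roll(contour, -1, dims=0) - torch.roll(contour, 1, dims=0)
    tangent = tangent / (tangent.norm(dim=1, keepdim=True) + 1e-8)
    along = (offsets * tangent).sum(dim=1)     # tangential (sliding) component
    return (along ** 2).mean()

contour = torch.tensor([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
offsets = torch.randn(4, 2) * 0.1
print(mechanical_loss(contour, offsets))       # sliding along the contour costs
```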
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 04:48:19 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Jang",
"Junbong",
""
],
[
"Lee",
"Kwonmoo",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] |
new_dataset
| 0.991951 |
2303.08409
|
Xiaohan Wang
|
Xiaohan Wang, Wenguan Wang, Jiayi Shao, Yi Yang
|
Lana: A Language-Capable Navigator for Instruction Following and
Generation
|
Accepted to CVPR 2023
| null | null | null |
cs.CV cs.MM cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, visual-language navigation (VLN) -- requiring robot agents to
follow navigation instructions -- has seen great advances. However, existing
literature puts most emphasis on interpreting instructions into actions, only
delivering "dumb" wayfinding agents. In this article, we devise LANA, a
language-capable navigation agent which is able to not only execute
human-written navigation commands, but also provide route descriptions to
humans. This is achieved by simultaneously learning instruction following and
generation with only one single model. More specifically, two encoders,
respectively for route and language encoding, are built and shared by two
decoders, respectively, for action prediction and instruction generation, so as
to exploit cross-task knowledge and capture task-specific characteristics.
Throughout pretraining and fine-tuning, both instruction following and
generation are set as optimization objectives. We empirically verify that,
compared with recent advanced task-specific solutions, LANA attains better
performance on both instruction following and route description, with nearly
half the complexity. In addition, endowed with language generation capability, LANA
can explain to humans its behaviors and assist human's wayfinding. This work is
expected to foster future efforts towards building more trustworthy and
socially-intelligent navigation robots.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 07:21:28 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Wang",
"Xiaohan",
""
],
[
"Wang",
"Wenguan",
""
],
[
"Shao",
"Jiayi",
""
],
[
"Yang",
"Yi",
""
]
] |
new_dataset
| 0.998096 |
2303.08454
|
Zhe Jin
|
Zhe Jin, Chaoyang Jiang
|
Range-Aided LiDAR-Inertial Multi-Vehicle Mapping in Degenerate
Environment
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a range-aided LiDAR-inertial multi-vehicle mapping system
(RaLI-Multi). Firstly, we design a multi-metric weights LiDAR-inertial odometry
by fusing observations from an inertial measurement unit (IMU) and a light
detection and ranging sensor (LiDAR). The degeneracy level and direction are
evaluated by analyzing the distribution of normal vectors of feature point
clouds, and are used to activate the degeneration correction module, in which
range measurements correct the pose estimation along the degenerate direction.
We then design a multi-vehicle mapping system in which a centralized vehicle
receives local maps of each vehicle and range measurements between vehicles to
optimize a global pose graph. The global map is broadcast to other vehicles for
localization and mapping updates, and the centralized vehicle is dynamically
fungible. Finally, we provide three experiments to verify the effectiveness of
the proposed RaLI-Multi. The results show its superiority in degenerate
environments.
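
For intuition, the degeneracy analysis can be sketched as an eigen-decomposition of the covariance of feature-point normals, with small eigenvalues marking poorly constrained directions; the threshold and the corridor example below are hypothetical.

```python
import numpy as np

def degenerate_directions(normals, ratio=0.05):
    """normals: (N, 3) unit normals of planar features. Small eigenvalues of
    the normals' covariance mark poorly constrained directions."""
    cov = normals.T @ normals / len(normals)
    evals, evecs = np.linalg.eigh(cov)         # ascending eigenvalues
    return evecs[:, evals < ratio * evals[-1]]

# Corridor-like scene: normals only along x and z, so y is unconstrained.
n = np.concatenate([np.tile([1.0, 0.0, 0.0], (100, 1)),
                    np.tile([0.0, 0.0, 1.0], (100, 1))])
print(degenerate_directions(n).T)              # ~ [0, 1, 0] (up to sign)
```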
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 08:58:23 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Jin",
"Zhe",
""
],
[
"Jiang",
"Chaoyang",
""
]
] |
new_dataset
| 0.994325 |
2303.08505
|
George Alexandropoulos
|
George C. Alexandropoulos, Dinh-Thuy Phan-Huy, Kostantinos D.
Katsanos, Maurizio Crozzoli, Henk Wymeersch, Petar Popovski, Philippe
Ratajczak, Yohann B\'en\'edic, Marie-Helene Hamon, Sebastien Herraiz
Gonzalez, Placido Mursia, Marco Rossanese, Vincenzo Sciancalepore,
Jean-Baptiste Gros, Sergio Terranova, Gabriele Gradoni, Paolo Di Lorenzo,
Moustafa Rahal, Benoit Denis, Raffaele D'Errico, Antonio Clemente, and Emilio
Calvanese Strinati
|
RIS-Enabled Smart Wireless Environments: Deployment Scenarios, Network
Architecture, Bandwidth and Area of Influence
|
43 pages, 21 figures, sumbitted for a journal publication. arXiv
admin note: text overlap with arXiv:2203.13478
| null | null | null |
cs.IT cs.ET math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Reconfigurable Intelligent Surfaces (RISs) constitute the key enabler for
programmable electromagnetic propagation environments, and are lately being
considered as a candidate physical-layer technology for the demanding
connectivity, reliability, localization, and sustainability requirements of
next generation wireless networks. In this paper, we first present the
deployment scenarios for RIS-enabled smart wireless environments that have been
recently designed within the ongoing European Union Horizon 2020 RISE-6G
project, as well as a network architecture integrating RISs with existing
standardized interfaces. We identify various RIS deployment strategies and
sketch the core architectural requirements in terms of RIS control and
signaling, depending on the RIS hardware architectures and respective
capabilities. Furthermore, we introduce and discuss, with the aid of
simulations and reflectarray measurements, two novel metrics that emerge in the
context of RIS-empowered wireless systems: the RIS bandwidth and area of
influence. Their extensive investigation corroborates the need for careful
deployment and planning of the RIS technology in future networks.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 10:29:33 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Alexandropoulos",
"George C.",
""
],
[
"Phan-Huy",
"Dinh-Thuy",
""
],
[
"Katsanos",
"Kostantinos D.",
""
],
[
"Crozzoli",
"Maurizio",
""
],
[
"Wymeersch",
"Henk",
""
],
[
"Popovski",
"Petar",
""
],
[
"Ratajczak",
"Philippe",
""
],
[
"Bénédic",
"Yohann",
""
],
[
"Hamon",
"Marie-Helene",
""
],
[
"Gonzalez",
"Sebastien Herraiz",
""
],
[
"Mursia",
"Placido",
""
],
[
"Rossanese",
"Marco",
""
],
[
"Sciancalepore",
"Vincenzo",
""
],
[
"Gros",
"Jean-Baptiste",
""
],
[
"Terranova",
"Sergio",
""
],
[
"Gradoni",
"Gabriele",
""
],
[
"Di Lorenzo",
"Paolo",
""
],
[
"Rahal",
"Moustafa",
""
],
[
"Denis",
"Benoit",
""
],
[
"D'Errico",
"Raffaele",
""
],
[
"Clemente",
"Antonio",
""
],
[
"Strinati",
"Emilio Calvanese",
""
]
] |
new_dataset
| 0.996302 |
2303.08525
|
Pan Gao Prof.
|
Pan Gao, Xinlang Chen, Rong Quan, Wei Xiang
|
MRGAN360: Multi-stage Recurrent Generative Adversarial Network for 360
Degree Image Saliency Prediction
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Thanks to the immersive and interactive experience it provides, 360 degree
image content has seen rapidly growing uptake in consumer and
industrial applications. Compared to planar 2D images, saliency prediction for
360 degree images is more challenging due to their high resolutions and
spherical viewing ranges. Currently, most high-performance saliency prediction
models for omnidirectional images (ODIs) rely on deeper or broader
convolutional neural networks (CNNs), which benefit from CNNs' superior feature
representation capabilities while suffering from their high computational
costs. In this paper, inspired by the human visual cognitive process, i.e.,
human being's perception of a visual scene is always accomplished by multiple
stages of analysis, we propose a novel multi-stage recurrent generative
adversarial networks for ODIs dubbed MRGAN360, to predict the saliency maps
stage by stage. At each stage, the prediction model takes as input the original
image and the output of the previous stage and outputs a more accurate saliency
map. We employ a recurrent neural network among adjacent prediction stages to
model their correlations, and exploit a discriminator at the end of each stage
to supervise the output saliency map. In addition, we share the weights among
all the stages to obtain a lightweight architecture that is computationally
cheap. Extensive experiments are conducted to demonstrate that our proposed
model outperforms the state-of-the-art model in terms of both prediction
accuracy and model size.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 11:15:03 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Gao",
"Pan",
""
],
[
"Chen",
"Xinlang",
""
],
[
"Quan",
"Rong",
""
],
[
"Xiang",
"Wei",
""
]
] |
new_dataset
| 0.996438 |
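To make the multi-stage idea above concrete, here is a minimal PyTorch sketch of stage-by-stage saliency refinement with shared weights. The module layout, channel counts, and function names are assumptions for illustration only, not the authors' architecture.

```python
import torch
import torch.nn as nn

class StagePredictor(nn.Module):
    """One shared-weight refinement stage: takes the image plus the previous
    stage's saliency map and emits a refined map. A minimal sketch of the
    multi-stage idea in the abstract, not the paper's exact design."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, prev_saliency):
        # Condition each stage on the original image and the previous output.
        return self.net(torch.cat([image, prev_saliency], dim=1))

def multi_stage_predict(stage, image, num_stages=3):
    # Start from an empty saliency map and refine it stage by stage.
    saliency = torch.zeros(image.size(0), 1, *image.shape[2:])
    for _ in range(num_stages):   # same module reused -> shared weights
        saliency = stage(image, saliency)
    return saliency
```

Reusing a single module across stages is what keeps the architecture lightweight, as the abstract notes.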
2303.08562
|
Weijian Huang
|
Weijian Huang, Hao Yang, Cheng Li, Mingtong Dai, Rui Yang, Shanshan
Wang
|
MGA: Medical generalist agent through text-guided knowledge
transformation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal representation methods have achieved advanced performance in
medical applications by extracting more robust features from multi-domain data.
However, existing methods usually need to train additional branches for
downstream tasks, which may increase the model complexities in clinical
applications as well as introduce additional human inductive bias. Besides,
very few studies exploit the rich clinical knowledge embedded in clinical daily
reports. To this end, we propose a novel medical generalist agent, MGA, that
can address three kinds of common clinical tasks via clinical reports knowledge
transformation. Unlike the existing methods, MGA can easily adapt to different
tasks without specific downstream branches when their corresponding annotations
are missing. More importantly, ours is the first attempt to use medical
professional language guidance as a transmission medium to guide the agent's
behavior. The proposed method is implemented on four well-known X-ray
open-source datasets, MIMIC-CXR, CheXpert, MIMIC-CXR-JPG, and MIMIC-CXR-MS.
Promising results are obtained, which validate the effectiveness of our
proposed MGA. Code is available at: https://github.com/SZUHvern/MGA
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 12:28:31 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Huang",
"Weijian",
""
],
[
"Yang",
"Hao",
""
],
[
"Li",
"Cheng",
""
],
[
"Dai",
"Mingtong",
""
],
[
"Yang",
"Rui",
""
],
[
"Wang",
"Shanshan",
""
]
] |
new_dataset
| 0.99683 |
2303.08574
|
Théo Matricon
 |
Théo Matricon, Nathanaël Fijalkow, Gaëtan Margueritte
|
WikiCoder: Learning to Write Knowledge-Powered Code
|
Published in the proceedings of SPIN 2023
| null | null | null |
cs.LG cs.PL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
We tackle the problem of automatic generation of computer programs from a few
pairs of input-output examples. The starting point of this work is the
observation that in many applications a solution program must use external
knowledge not present in the examples: we call such programs knowledge-powered
since they can refer to information collected from a knowledge graph such as
Wikipedia. This paper makes a first step towards knowledge-powered program
synthesis. We present WikiCoder, a system building upon state of the art
machine-learned program synthesizers and integrating knowledge graphs. We
evaluate it to show its wide applicability over different domains and discuss
its limitations. Thanks to its use of knowledge graphs, WikiCoder solves tasks
that no program synthesizer could solve before, while integrating with
recent developments in the field to operate at scale.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 12:50:54 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Matricon",
"Théo",
""
],
[
"Fijalkow",
"Nathanaël",
""
],
[
"Margueritte",
"Gaëtan",
""
]
] |
new_dataset
| 0.959507 |
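As a toy illustration of the knowledge-powered synthesis described in the WikiCoder abstract, the sketch below searches for a knowledge-graph relation consistent with the input-output examples. The dictionary graph and the example task are hypothetical stand-ins for a Wikipedia-scale graph, not the system's actual interface.

```python
# A toy knowledge graph: (entity, relation) -> value. Illustrative only.
KNOWLEDGE = {("France", "capital"): "Paris", ("Japan", "capital"): "Tokyo"}

def synthesize(examples):
    """Find a relation r such that KG[x, r] == y for every example (x, y)."""
    relations = {r for (_, r) in KNOWLEDGE}
    for r in relations:
        if all(KNOWLEDGE.get((x, r)) == y for x, y in examples):
            # The synthesized program is knowledge-powered: it answers by
            # querying the graph, not by memorizing the examples.
            return lambda x, r=r: KNOWLEDGE[(x, r)]
    return None  # no consistent relation found

prog = synthesize([("France", "Paris")])
assert prog("Japan") == "Tokyo"  # generalizes beyond the given example
```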
2303.08600
|
Jiale Li
|
Jiale Li, Hang Dai, Hao Han, Yong Ding
|
MSeg3D: Multi-modal 3D Semantic Segmentation for Autonomous Driving
|
Accepted to CVPR 2023 (preprint)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR and camera are two modalities available for 3D semantic segmentation in
autonomous driving. The popular LiDAR-only methods severely suffer from
inferior segmentation on small and distant objects due to insufficient laser
points, while robust multi-modal solutions remain under-explored. We
investigate three crucial inherent difficulties: modality heterogeneity,
limited sensor field of view intersection, and multi-modal data augmentation.
We propose a multi-modal 3D semantic segmentation model (MSeg3D) with joint
intra-modal feature extraction and inter-modal feature fusion to mitigate the
modality heterogeneity. The multi-modal fusion in MSeg3D consists of
geometry-based feature fusion GF-Phase, cross-modal feature completion, and
semantic-based feature fusion SF-Phase on all visible points. The multi-modal
data augmentation is reinvigorated by applying asymmetric transformations on
LiDAR point cloud and multi-camera images individually, which benefits the
model training with diversified augmentation transformations. MSeg3D achieves
state-of-the-art results on nuScenes, Waymo, and SemanticKITTI datasets. Under
the malfunctioning multi-camera input and the multi-frame point clouds input,
MSeg3D still shows robustness and improves the LiDAR-only baseline. Our code is
publicly available at \url{https://github.com/jialeli1/lidarseg3d}.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 13:13:03 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Li",
"Jiale",
""
],
[
"Dai",
"Hang",
""
],
[
"Han",
"Hao",
""
],
[
"Ding",
"Yong",
""
]
] |
new_dataset
| 0.999363 |
2303.08639
|
Hugo Bertiche
|
Hugo Bertiche, Niloy J. Mitra, Kuldeep Kulkarni, Chun-Hao Paul Huang,
Tuanfeng Y. Wang, Meysam Madadi, Sergio Escalera and Duygu Ceylan
|
Blowing in the Wind: CycleNet for Human Cinemagraphs from Still Images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Cinemagraphs are short looping videos created by adding subtle motions to a
static image. This kind of media is popular and engaging. However, automatic
generation of cinemagraphs is an underexplored area and current solutions
require tedious low-level manual authoring by artists. In this paper, we
present an automatic method that allows generating human cinemagraphs from
single RGB images. We investigate the problem in the context of dressed humans
under the wind. At the core of our method is a novel cyclic neural network that
produces looping cinemagraphs for the target loop duration. To circumvent the
problem of collecting real data, we demonstrate that it is possible, by working
in the image normal space, to learn garment motion dynamics on synthetic data
and generalize to real data. We evaluate our method on both synthetic and real
data and demonstrate that it is possible to create compelling and plausible
cinemagraphs from single RGB images.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 14:09:35 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Bertiche",
"Hugo",
""
],
[
"Mitra",
"Niloy J.",
""
],
[
"Kulkarni",
"Kuldeep",
""
],
[
"Huang",
"Chun-Hao Paul",
""
],
[
"Wang",
"Tuanfeng Y.",
""
],
[
"Madadi",
"Meysam",
""
],
[
"Escalera",
"Sergio",
""
],
[
"Ceylan",
"Duygu",
""
]
] |
new_dataset
| 0.998006 |
2303.08672
|
Markus Nemitz
|
Kalina Bonofiglio, Lauryn Whiteside, Maya Angeles, Matthew Haahr,
Brandon Simpson, Josh Palmer, Yijia Wu, and Markus P. Nemitz
|
Soft Fluidic Closed-Loop Controller for Untethered Underwater Gliders
|
6 pages, 5 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Soft underwater robots typically explore bioinspired designs at the expense
of power efficiency when compared to traditional underwater robots, which
limits their practical use in real-world applications. We leverage a fluidic
closed-loop controller to actuate a passive underwater glider. A soft
hydrostatic pressure sensor is configured as a bang-bang controller actuating a
swim bladder made from silicone balloons. Our underwater glider oscillates
between the water surface and 4 m depth while traveling a translational distance of 15 m. The
fluidic underwater glider demonstrates a power efficiency of 28 mW/m. This work
demonstrates a low-cost and power-efficient underwater glider and
non-electronic controller. Due to its simple design, low cost, and ease of
fabrication using FDM printing and soft lithography, it serves as a starting
point for the exploration of non-electronic underwater soft robots.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 14:56:27 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Bonofiglio",
"Kalina",
""
],
[
"Whiteside",
"Lauryn",
""
],
[
"Angeles",
"Maya",
""
],
[
"Haahr",
"Matthew",
""
],
[
"Simpson",
"Brandon",
""
],
[
"Palmer",
"Josh",
""
],
[
"Wu",
"Yijia",
""
],
[
"Nemitz",
"Markus P.",
""
]
] |
new_dataset
| 0.998353 |
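The glider's control law reduces to bang-bang switching between two depth thresholds. Below is a software analogue of that logic, a minimal sketch only: the paper's controller is fluidic and non-electronic, and the threshold values here are assumed for illustration.

```python
# Hypothetical depth set-points; the real controller toggles near 4 m
# depth and the surface, per the abstract.
DIVE_DEPTH_M = 4.0      # inflate threshold: start ascending below this
SURFACE_DEPTH_M = 0.2   # deflate threshold: start descending near surface

def bang_bang_step(depth_m, bladder_inflated):
    """One control step: toggle the swim bladder only at the two thresholds.

    Returns the new bladder state (True = inflated = buoyant = ascend).
    """
    if depth_m >= DIVE_DEPTH_M and not bladder_inflated:
        return True        # reached max depth: inflate, glide upward
    if depth_m <= SURFACE_DEPTH_M and bladder_inflated:
        return False       # reached surface: deflate, glide downward
    return bladder_inflated  # between thresholds: keep current state
```

The hysteresis between the two thresholds is what produces the vertical oscillation that the glider converts into forward travel.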
2303.08689
|
Patrick Zimmer
|
Patrick Zimmer, Michael Halstead, Chris McCool
|
Panoptic One-Click Segmentation: Applied to Agricultural Data
|
in IEEE Robotics and Automation Letters (2023)
| null |
10.1109/LRA.2023.3254451
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In weed control, precision agriculture can help to greatly reduce the use of
herbicides, resulting in both economical and ecological benefits. A key element
is the ability to locate and segment all the plants from image data. Modern
instance segmentation techniques can achieve this, however, training such
systems requires large amounts of hand-labelled data which is expensive and
laborious to obtain. Weakly supervised training can help to greatly reduce
labelling efforts and costs. We propose panoptic one-click segmentation, an
efficient and accurate offline tool to produce pseudo-labels from click inputs
which reduces labelling effort. Our approach jointly estimates the pixel-wise
location of all N objects in the scene, compared to traditional approaches
which iterate independently through all N objects; this greatly reduces
training time. Using just 10% of the data to train our panoptic one-click
segmentation approach yields 68.1% and 68.8% mean object intersection over
union (IoU) on challenging sugar beet and corn image data respectively,
providing comparable performance to traditional one-click approaches while
being approximately 12 times faster to train. We demonstrate the applicability
of our system by generating pseudo-labels from clicks on the remaining 90% of
the data. These pseudo-labels are then used to train Mask R-CNN, in a
semi-supervised manner, improving the absolute performance (of mean foreground
IoU) by 9.4 and 7.9 points for sugar beet and corn data respectively. Finally,
we show that our technique can recover missed clicks during annotation
outlining a further benefit over traditional approaches.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 15:20:36 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Zimmer",
"Patrick",
""
],
[
"Halstead",
"Michael",
""
],
[
"McCool",
"Chris",
""
]
] |
new_dataset
| 0.972165 |
2303.08704
|
Rui Zhou
|
Rui Zhou, Yan Niu
|
Multi-Exposure HDR Composition by Gated Swin Transformer
|
7 pages, 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fusing a sequence of perfectly aligned images captured at various exposures
has shown great potential to approach High Dynamic Range (HDR) imaging with
sensors of limited dynamic range. However, in the presence of large motion of
scene objects or the camera, misalignment is almost inevitable and leads to
the notorious "ghost" artifacts. Besides, factors such as noise in the
dark region or color saturation in the over-bright region may also prevent
local image details from being filled into the HDR image. This paper presents a novel
multi-exposure fusion model based on Swin Transformer. Particularly, we design
feature selection gates, which are integrated with the feature extraction
layers to detect outliers and block them from HDR image synthesis. To
reconstruct the missing local details by well-aligned and properly-exposed
regions, we exploit the long distance contextual dependency in the
exposure-space pyramid by the self-attention mechanism. Extensive numerical and
visual evaluation has been conducted on a variety of benchmark datasets. The
experiments show that our model achieves accuracy on par with current
top-performing multi-exposure HDR imaging models, while offering higher efficiency.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 15:38:43 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Zhou",
"Rui",
""
],
[
"Niu",
"Yan",
""
]
] |
new_dataset
| 0.997458 |
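The feature selection gates described in the abstract can be pictured as a learned soft mask over feature maps. The PyTorch sketch below shows one generic variant; the layer choices and names are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FeatureSelectionGate(nn.Module):
    """A learned gate that suppresses outlier regions before fusion.

    Minimal sketch of the gating idea: a small conv head predicts a
    [0, 1] mask per spatial location, and gated features are mask * features.
    """
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),          # soft mask in [0, 1]
        )

    def forward(self, feats):      # feats: (B, C, H, W)
        mask = self.gate(feats)    # (B, 1, H, W); values near 0 block a region
        return feats * mask        # misaligned/saturated areas are attenuated
```

Down-weighting a region rather than hard-dropping it lets well-exposed, well-aligned exposures dominate the fused HDR output.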
2303.08729
|
Mootez Saad
|
Himesh Nandani, Mootez Saad, Tushar Sharma
|
DACOS-A Manually Annotated Dataset of Code Smells
|
4 pages
| null | null | null |
cs.SE cs.AI cs.LG cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Researchers apply machine-learning techniques for code smell detection to
counter the subjectivity of many code smells. Such approaches need a large,
manually annotated dataset for training and benchmarking. Existing literature
offers a few datasets; however, they are small in size and, more importantly,
do not focus on subjective code snippets. In this paper, we present DACOS,
a manually annotated dataset containing 10,267 annotations for 5,192 code
snippets. The dataset targets three kinds of code smells at different
granularity: multifaceted abstraction, complex method, and long parameter list.
The dataset is created in two phases. The first phase helps us identify the
code snippets that are potentially subjective by determining the thresholds of
metrics used to detect a smell. The second phase collects annotations for
potentially subjective snippets. We also offer an extended dataset DACOSX that
includes definitely benign and definitely smelly snippets by using the
thresholds identified in the first phase. We have developed TagMan, a web
application to help annotators view and mark the snippets one-by-one and record
the provided annotations. We make the datasets and the web application
accessible publicly. This dataset will help researchers working on smell
detection techniques to build relevant and context-aware machine-learning
models.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 16:13:40 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Nandani",
"Himesh",
""
],
[
"Saad",
"Mootez",
""
],
[
"Sharma",
"Tushar",
""
]
] |
new_dataset
| 0.999766 |
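The two-phase construction hinges on metric thresholds that separate clearly benign, clearly smelly, and subjective snippets. A minimal sketch of such a routing rule follows; the metric names and threshold values are hypothetical, since the paper determines its thresholds empirically.

```python
# Illustrative thresholds only; DACOS derives its own in phase one.
LONG_PARAM_LIST_MIN = 5   # parameter count at/above which a method is suspect
COMPLEX_METHOD_CC = 10    # cyclomatic complexity boundary
MARGIN_PARAMS = 2         # band around the threshold treated as "subjective"
MARGIN_CC = 4

def classify_method(num_params, cyclomatic_complexity):
    """Route a snippet: definitely benign, definitely smelly, or subjective."""
    if (num_params < LONG_PARAM_LIST_MIN - MARGIN_PARAMS
            and cyclomatic_complexity < COMPLEX_METHOD_CC - MARGIN_CC):
        return "benign"       # clearly below both thresholds (DACOSX benign)
    if (num_params >= LONG_PARAM_LIST_MIN + MARGIN_PARAMS
            or cyclomatic_complexity >= COMPLEX_METHOD_CC + MARGIN_CC):
        return "smelly"       # clearly above a threshold (DACOSX smelly)
    return "subjective"       # near a threshold: send to human annotators
```

Only the "subjective" band needs costly manual annotation, which is what makes the two-phase design economical.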
2303.08737
|
Taras Kucherenko
|
Taras Kucherenko, Pieter Wolfert, Youngwoo Yoon, Carla Viegas, Teodor
Nikolov, Mihail Tsakov, Gustav Eje Henter
|
Evaluating gesture-generation in a large-scale open challenge: The GENEA
Challenge 2022
|
The first three authors made equal contributions and share joint
first authorship. arXiv admin note: substantial text overlap with
arXiv:2208.10441
| null | null | null |
cs.HC cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reports on the second GENEA Challenge to benchmark data-driven
automatic co-speech gesture generation. Participating teams used the same
speech and motion dataset to build gesture-generation systems. Motion generated
by all these systems was rendered to video using a standardised visualisation
pipeline and evaluated in several large, crowdsourced user studies. Unlike when
comparing different research papers, differences in results are here only due
to differences between methods, enabling direct comparison between systems. The
dataset was based on 18 hours of full-body motion capture, including fingers,
of different persons engaging in a dyadic conversation. Ten teams participated
in the challenge across two tiers: full-body and upper-body gesticulation. For
each tier, we evaluated both the human-likeness of the gesture motion and its
appropriateness for the specific speech signal. Our evaluations decouple
human-likeness from gesture appropriateness, which has been a difficult problem
in the field.
The evaluation results are a revolution, and a revelation. Some synthetic
conditions are rated as significantly more human-like than human motion
capture. To the best of our knowledge, this has never been shown before on a
high-fidelity avatar. On the other hand, all synthetic motion is found to be
vastly less appropriate for the speech than the original motion-capture
recordings. We also find that conventional objective metrics do not correlate
well with subjective human-likeness ratings in this large evaluation. The one
exception is the Fréchet gesture distance (FGD), which achieves a Kendall's
tau rank correlation of around -0.5. Based on the challenge results we
formulate numerous recommendations for system building and evaluation.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 16:21:50 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Kucherenko",
"Taras",
""
],
[
"Wolfert",
"Pieter",
""
],
[
"Yoon",
"Youngwoo",
""
],
[
"Viegas",
"Carla",
""
],
[
"Nikolov",
"Teodor",
""
],
[
"Tsakov",
"Mihail",
""
],
[
"Henter",
"Gustav Eje",
""
]
] |
new_dataset
| 0.959786 |
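The Fréchet gesture distance mentioned above is the Fréchet distance between Gaussians fitted to feature distributions of real and generated motion. A minimal NumPy/SciPy sketch follows; the choice of feature extractor is an assumption and is left outside the function.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_gesture_distance(feats_real, feats_gen):
    """Frechet distance between Gaussians fitted to two feature sets.

    feats_real, feats_gen: (N, D) arrays of gesture features, e.g. from a
    pretrained motion autoencoder (the extractor is an assumption here;
    the challenge paper defines its own).
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny
        covmean = covmean.real     # imaginary parts; drop them
    diff = mu1 - mu2
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

Lower values mean the generated feature distribution is closer to the real one, which is why FGD can rank systems even when other objective metrics fail to track human ratings.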