id (string, 9–10 chars) | submitter (string, 2–52 chars, nullable) | authors (string, 4–6.51k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, nullable) | journal-ref (string, 4–345 chars, nullable) | doi (string, 11–120 chars, nullable) | report-no (string, 2–243 chars, nullable) | categories (string, 5–98 chars) | license (string, 9 classes) | abstract (string, 33–3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2304.03098
|
Matteo Muffo
|
Matteo Muffo, Roberto Tedesco, Licia Sbattella and Vincenzo Scotti
|
Static Fuzzy Bag-of-Words: a lightweight sentence embedding algorithm
|
9 pages, 2 figures
|
Proceedings of the 4th International Conference on Natural
Language and Speech Processing (ICNLSP 2021)
| null | null |
cs.CL stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
The introduction of embedding techniques has significantly pushed forward the
field of Natural Language Processing. Many of the proposed solutions have been
presented for word-level encoding; in recent years, however, new mechanisms for
treating information at higher levels of aggregation, such as the sentence and
document level, have emerged. In this work we specifically address the sentence
embedding problem, presenting the Static Fuzzy Bag-of-Words (SFBoW) model. Our
model is a refinement of the Fuzzy Bag-of-Words approach and provides sentence
embeddings with a predefined dimension. SFBoW offers competitive performance on
Semantic Textual Similarity benchmarks while requiring low computational
resources.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 14:25:46 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Muffo",
"Matteo",
""
],
[
"Tedesco",
"Roberto",
""
],
[
"Sbattella",
"Licia",
""
],
[
"Scotti",
"Vincenzo",
""
]
] |
new_dataset
| 0.992978 |
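
As a rough illustration of the fuzzy bag-of-words idea summarized in the SFBoW abstract above — a sentence is embedded by summing its words' similarity ("membership") scores against a fixed basis of reference words, which yields a predefined output dimension — here is a minimal Python sketch. The function name, the non-negativity clipping, and the toy data are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a fuzzy bag-of-words style sentence embedding (illustration
# only; the actual SFBoW formulation may differ). All names and toy vectors below
# are hypothetical.
import numpy as np

def fuzzy_bow_embedding(word_vectors, basis_vectors):
    """Embed a sentence as the sum of its words' fuzzy memberships to a fixed basis.

    word_vectors:  (n_words, d) array of embeddings for the sentence's words.
    basis_vectors: (k, d) array of reference word embeddings; k fixes the
                   output dimension regardless of sentence length.
    """
    def normalize(m):
        return m / np.linalg.norm(m, axis=1, keepdims=True)

    sims = normalize(word_vectors) @ normalize(basis_vectors).T  # (n_words, k) cosine sims
    memberships = np.clip(sims, 0.0, None)                       # non-negative "membership degrees"
    return memberships.sum(axis=0)                               # (k,) fixed-size sentence embedding

# Toy usage with random vectors standing in for pretrained word embeddings.
rng = np.random.default_rng(0)
sentence = rng.normal(size=(5, 300))   # 5 words, 300-dim vectors
basis = rng.normal(size=(128, 300))    # 128 basis words -> 128-dim sentence embedding
print(fuzzy_bow_embedding(sentence, basis).shape)  # (128,)
```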
2304.03117
|
Longwen Zhang
|
Longwen Zhang, Qiwei Qiu, Hongyang Lin, Qixuan Zhang, Cheng Shi, Wei
Yang, Ye Shi, Sibei Yang, Lan Xu, Jingyi Yu
|
DreamFace: Progressive Generation of Animatable 3D Faces under Text
Guidance
|
Go to DreamFace project page https://sites.google.com/view/dreamface
watch our video at https://youtu.be/yCuvzgGMvPM and experience DreamFace
online at https://hyperhuman.top
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Emerging Metaverse applications demand accessible, accurate, and easy-to-use
tools for 3D digital human creation in order to depict different cultures and
societies as if in the physical world. Recent large-scale vision-language
advances pave the way for novices to conveniently customize 3D content.
However, the generated CG-friendly assets still cannot represent the desired
facial traits of human characters. In this paper, we present DreamFace, a
progressive scheme to generate personalized 3D faces under text guidance. It
enables layman users to naturally customize 3D facial assets that are
compatible with CG pipelines, with desired shapes, textures, and fine-grained
animation capabilities. From a text input describing the facial traits, we
first introduce a coarse-to-fine scheme to generate the neutral facial geometry
with a unified topology. We employ a selection strategy in the CLIP embedding
space, and subsequently optimize both the detailed displacements and normals
using Score Distillation Sampling from a generic Latent Diffusion Model (LDM).
Then, for neutral appearance generation, we introduce a dual-path mechanism,
which combines the generic LDM with a novel texture LDM to ensure both diversity
and textural specification in the UV space. We also employ a two-stage
optimization to perform SDS in both the latent and image spaces, which provides
compact priors for fine-grained synthesis. Our generated neutral assets
naturally support blendshape-based facial animations. We further improve the
animation ability with personalized deformation characteristics by learning a
universal expression prior using a cross-identity hypernetwork. Notably,
DreamFace can generate realistic 3D facial assets with physically-based
rendering quality and rich animation ability from video footage, even for
fashion icons or exotic characters in cartoons and fiction movies.
|
[
{
"version": "v1",
"created": "Sat, 1 Apr 2023 07:22:55 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Zhang",
"Longwen",
""
],
[
"Qiu",
"Qiwei",
""
],
[
"Lin",
"Hongyang",
""
],
[
"Zhang",
"Qixuan",
""
],
[
"Shi",
"Cheng",
""
],
[
"Yang",
"Wei",
""
],
[
"Shi",
"Ye",
""
],
[
"Yang",
"Sibei",
""
],
[
"Xu",
"Lan",
""
],
[
"Yu",
"Jingyi",
""
]
] |
new_dataset
| 0.961049 |
2304.03135
|
Mengyin Liu
|
Mengyin Liu, Jie Jiang, Chao Zhu, Xu-Cheng Yin
|
VLPD: Context-Aware Pedestrian Detection via Vision-Language Semantic
Self-Supervision
|
Accepted by CVPR 2023
| null | null | null |
cs.CV cs.AI cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately detecting pedestrians in urban scenes is important for realistic
applications like autonomous driving and video surveillance. However, confusing
human-like objects often lead to false detections, and small-scale or heavily
occluded pedestrians are easily missed due to their unusual appearances. To
address these challenges, object regions alone are inadequate, so fully
utilizing more explicit and semantic contexts becomes a key problem. Meanwhile,
previous context-aware pedestrian detectors either learn only latent contexts
from visual clues, or need laborious annotations to obtain explicit and
semantic contexts. Therefore, in this paper we propose a novel approach via
Vision-Language semantic self-supervision for context-aware Pedestrian
Detection (VLPD) that models explicit semantic contexts without any extra
annotations. First, we propose a self-supervised Vision-Language Semantic
(VLS) segmentation method, which learns both fully-supervised pedestrian
detection and contextual segmentation via explicit labels of semantic classes
self-generated by vision-language models. Furthermore, a self-supervised
Prototypical Semantic Contrastive (PSC) learning method is proposed to better
discriminate pedestrians from other classes, based on the more explicit and
semantic contexts obtained from VLS. Extensive experiments on popular
benchmarks show that our proposed VLPD achieves superior performance over
previous state-of-the-art methods, particularly under challenging circumstances
such as small scale and heavy occlusion. Code is available at
https://github.com/lmy98129/VLPD.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 15:16:29 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Liu",
"Mengyin",
""
],
[
"Jiang",
"Jie",
""
],
[
"Zhu",
"Chao",
""
],
[
"Yin",
"Xu-Cheng",
""
]
] |
new_dataset
| 0.996883 |
2304.03140
|
Changsheng Lu
|
Changsheng Lu, Hao Zhu, Piotr Koniusz
|
From Saliency to DINO: Saliency-guided Vision Transformer for Few-shot
Keypoint Detection
|
15 pages, 10 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Unlike current deep keypoint detectors that are trained to recognize a limited
number of body parts, few-shot keypoint detection (FSKD) attempts to localize
any keypoints, including novel or base keypoints, depending on the reference
samples. FSKD requires semantically meaningful relations for keypoint
similarity learning to overcome the ubiquitous noise and ambiguous local
patterns. One remedy comes with the vision transformer (ViT), as it captures
long-range relations well. However, ViT may model irrelevant features outside
the region of interest due to the global attention matrix, thus degrading
similarity learning between support and query features. In this paper, we
present a novel saliency-guided vision transformer, dubbed SalViT, for few-shot
keypoint detection. Our SalViT enjoys a uniquely designed masked self-attention
and a morphology learner, where the former introduces the saliency map as a soft
mask to constrain the self-attention to foregrounds, while the latter leverages
the so-called power normalization to adjust the morphology of the saliency map,
realizing a ``dynamically changing receptive field''. Moreover, as saliency
detectors add computations, we show that the attentive masks of the DINO
transformer can replace saliency. On top of SalViT, we also investigate i)
transductive FSKD that enhances keypoint representations with unlabelled data
and ii) FSKD under occlusions. We show that our model performs well on five
public datasets and achieves ~10% higher PCK than the normally trained model
under severe occlusions.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 15:22:34 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Lu",
"Changsheng",
""
],
[
"Zhu",
"Hao",
""
],
[
"Koniusz",
"Piotr",
""
]
] |
new_dataset
| 0.997821 |
2304.03141
|
Matthew Weidner
|
Matthew Weidner, Ria Pradeep, Benito Geordie, Heather Miller
|
For-Each Operations in Collaborative Apps
|
7 pages, 4 figures, to appear at PaPoC 2023
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Conflict-free Replicated Data Types (CRDTs) allow collaborative access to an
app's data. We describe a novel CRDT operation, for-each on a list of CRDTs,
and demonstrate its use in collaborative apps. Our for-each operation applies a
given mutation to each element of a list, including elements inserted
concurrently. This often preserves user intention in a way that would otherwise
require custom CRDT algorithms. We give example applications of our for-each
operation to collaborative rich-text, recipe, and slideshow editors.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 15:23:50 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Weidner",
"Matthew",
""
],
[
"Pradeep",
"Ria",
""
],
[
"Geordie",
"Benito",
""
],
[
"Miller",
"Heather",
""
]
] |
new_dataset
| 0.983055 |
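
To make the for-each semantics described in the abstract above concrete, the toy Python sketch below simulates a list where one replica issues a for-each mutation while another replica concurrently inserts an element; on merge, the mutation is also applied to the concurrently inserted element. This only illustrates the intended behavior; the names and the merge logic are hypothetical and not the paper's CRDT algorithm.

```python
# Toy illustration of a for-each operation that also affects concurrently
# inserted elements (not the paper's actual CRDT algorithm).

def apply_for_each(elements, mutation, pending_log):
    """Apply `mutation` to every current element and remember it for merges."""
    pending_log.append(mutation)
    return [mutation(e) for e in elements]

def merge_concurrent_insert(elements, new_element, pending_log):
    """Insert an element that was added concurrently with earlier for-each ops."""
    for mutation in pending_log:          # replay for-each ops on the new element
        new_element = mutation(new_element)
    return elements + [new_element]

bold = lambda text: f"**{text}**"

replica = ["alpha", "beta"]
log = []
replica = apply_for_each(replica, bold, log)               # replica A bolds every item
replica = merge_concurrent_insert(replica, "gamma", log)   # replica B inserted "gamma" concurrently
print(replica)  # ['**alpha**', '**beta**', '**gamma**'] -- the concurrent insert is bolded too
```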
2304.03223
|
Chuang Gan
|
Sizhe Li, Zhiao Huang, Tao Chen, Tao Du, Hao Su, Joshua B. Tenenbaum,
Chuang Gan
|
DexDeform: Dexterous Deformable Object Manipulation with Human
Demonstrations and Differentiable Physics
|
ICLR 2023. Project page: https://sites.google.com/view/dexdeform
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this work, we aim to learn dexterous manipulation of deformable objects
using multi-fingered hands. Reinforcement learning approaches for dexterous
rigid object manipulation would struggle in this setting due to the complexity
of physics interaction with deformable objects. At the same time, previous
trajectory optimization approaches with differentiable physics for deformable
manipulation would suffer from local optima caused by the explosion of contact
modes from hand-object interactions. To address these challenges, we propose
DexDeform, a principled framework that abstracts dexterous manipulation skills
from human demonstration and refines the learned skills with differentiable
physics. Concretely, we first collect a small set of human demonstrations using
teleoperation. And we then train a skill model using demonstrations for
planning over action abstractions in imagination. To explore the goal space, we
further apply augmentations to the existing deformable shapes in demonstrations
and use a gradient optimizer to refine the actions planned by the skill model.
Finally, we adopt the refined trajectories as new demonstrations for finetuning
the skill model. To evaluate the effectiveness of our approach, we introduce a
suite of six challenging dexterous deformable object manipulation tasks.
Compared with baselines, DexDeform is able to better explore and generalize
across novel goals unseen in the initial human demonstrations.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 17:59:49 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Li",
"Sizhe",
""
],
[
"Huang",
"Zhiao",
""
],
[
"Chen",
"Tao",
""
],
[
"Du",
"Tao",
""
],
[
"Su",
"Hao",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.992698 |
2304.03282
|
Mingyu Ding
|
Mingyu Ding, Yikang Shen, Lijie Fan, Zhenfang Chen, Zitian Chen, Ping
Luo, Joshua B. Tenenbaum, Chuang Gan
|
Visual Dependency Transformers: Dependency Tree Emerges from Reversed
Attention
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans possess a versatile mechanism for extracting structured
representations of our visual world. When looking at an image, we can decompose
the scene into entities and their parts as well as obtain the dependencies
between them. To mimic such capability, we propose Visual Dependency
Transformers (DependencyViT) that can induce visual dependencies without any
labels. We achieve that with a novel neural operator called \emph{reversed
attention} that can naturally capture long-range visual dependencies between
image patches. Specifically, we formulate it as a dependency graph where a
child token in reversed attention is trained to attend to its parent tokens and
send information following a normalized probability distribution rather than
gathering information in conventional self-attention. With such a design,
hierarchies naturally emerge from reversed attention layers, and a dependency
tree is progressively induced from the leaf nodes to the root node in an
unsupervised manner.
DependencyViT offers several appealing benefits. (i) Entities and their parts
in an image are represented by different subtrees, enabling part partitioning
from dependencies; (ii) Dynamic visual pooling is made possible. The leaf nodes
which rarely send messages can be pruned without hindering the model
performance, based on which we propose the lightweight DependencyViT-Lite to
reduce the computational and memory footprints; (iii) DependencyViT works well
on both self- and weakly-supervised pretraining paradigms on ImageNet, and
demonstrates its effectiveness on 8 datasets and 5 tasks, such as unsupervised
part and saliency segmentation, recognition, and detection.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 17:59:26 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Ding",
"Mingyu",
""
],
[
"Shen",
"Yikang",
""
],
[
"Fan",
"Lijie",
""
],
[
"Chen",
"Zhenfang",
""
],
[
"Chen",
"Zitian",
""
],
[
"Luo",
"Ping",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.998576 |
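
As a rough sketch of the reversed-attention idea described in the abstract above — each child token holds a normalized distribution over candidate parents and sends (rather than gathers) information along it — the snippet below shows one simplified single-head layer in numpy. The routing `P.T @ V` and all names are assumptions for illustration; the actual DependencyViT operator may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def reversed_attention(Q, K, V):
    """Simplified single-head reversed attention.

    P[i, j] is child token i's probability of choosing token j as its parent;
    each child then *sends* its value along P, so parents aggregate messages
    from their children instead of gathering from all tokens.
    """
    d = Q.shape[-1]
    P = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (n, n): each row is a child's parent distribution
    return P.T @ V                              # token j receives sum_i P[i, j] * V[i]

rng = np.random.default_rng(0)
n, d = 6, 16
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(reversed_attention(Q, K, V).shape)  # (6, 16)
```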
2304.03284
|
Xinlong Wang
|
Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, Tiejun
Huang
|
SegGPT: Segmenting Everything In Context
|
Code and Demo: https://github.com/baaivision/Painter
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present SegGPT, a generalist model for segmenting everything in context.
We unify various segmentation tasks into a generalist in-context learning
framework that accommodates different kinds of segmentation data by
transforming them into the same format of images. The training of SegGPT is
formulated as an in-context coloring problem with random color mapping for each
data sample. The objective is to accomplish diverse tasks according to the
context, rather than relying on specific colors. After training, SegGPT can
perform arbitrary segmentation tasks in images or videos via in-context
inference, such as object instance, stuff, part, contour, and text. SegGPT is
evaluated on a broad range of tasks, including few-shot semantic segmentation,
video object segmentation, semantic segmentation, and panoptic segmentation.
Our results show strong capabilities in segmenting in-domain and out-of-domain
targets, either qualitatively or quantitatively.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 17:59:57 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Wang",
"Xinlong",
""
],
[
"Zhang",
"Xiaosong",
""
],
[
"Cao",
"Yue",
""
],
[
"Wang",
"Wen",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Huang",
"Tiejun",
""
]
] |
new_dataset
| 0.999591 |
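
The in-context coloring formulation in the SegGPT abstract above can be pictured with a small sketch: each training sample maps its segmentation label ids to a freshly randomized color palette, so the model must infer the task from the context rather than memorize fixed colors. The snippet below is an assumed illustration of such a random color mapping, not SegGPT's actual data pipeline.

```python
import numpy as np

def random_color_mapping(label_map, rng):
    """Recolor a segmentation label map with a per-sample random palette."""
    ids = np.unique(label_map)
    palette = {i: rng.integers(0, 256, size=3) for i in ids}   # random RGB per label id
    colored = np.zeros((*label_map.shape, 3), dtype=np.uint8)
    for i in ids:
        colored[label_map == i] = palette[i]
    return colored

rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=(8, 8))        # toy 8x8 mask with 4 classes
print(random_color_mapping(labels, rng).shape)  # (8, 8, 3)
```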
1612.00276
|
Theo van Uem
|
Theo van Uem
|
Ebert's asymmetric three person three color Hat Game
|
24 pages
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We generalize Ebert's Hat Problem to three persons and three colors. All
players simultaneously guess the color of their own hat, observing only the hat
colors of the other players. Each player is also allowed to pass: no color is
guessed. The team wins if at least one player guesses his or her hat color
correctly and none of the players makes an incorrect guess. This paper
studies Ebert's hat problem, where the probabilities of the colors may be
different (asymmetric case). Our goal is to maximize the probability of winning
the game and to describe winning strategies. In this paper we use the notion of
an adequate set. The construction of adequate sets is independent of the
underlying probabilities, and we use this fact in the analysis of the asymmetric
case. Another point of interest is that the computational complexity of the
adequate-set approach is much lower than that of standard methods.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2016 14:42:11 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Nov 2021 07:58:35 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Dec 2021 10:38:01 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Sep 2022 13:24:28 GMT"
},
{
"version": "v5",
"created": "Wed, 2 Nov 2022 15:51:01 GMT"
},
{
"version": "v6",
"created": "Fri, 6 Jan 2023 11:02:40 GMT"
},
{
"version": "v7",
"created": "Wed, 5 Apr 2023 13:58:11 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"van Uem",
"Theo",
""
]
] |
new_dataset
| 0.991692 |
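
The winning condition described in the abstract above (at least one correct guess and no incorrect guess) is easy to evaluate by brute force for a fixed strategy, which makes the asymmetric setting concrete: each player maps the two observed hat colors to a guess or a pass, and the team's winning probability is a sum over all 27 hat assignments weighted by the (possibly unequal) color probabilities. The sketch below only illustrates this evaluation; it does not construct the paper's adequate sets or optimal strategies, and the toy strategy is hypothetical.

```python
from itertools import product

def winning_probability(strategies, color_probs):
    """Expected winning probability of a 3-player, 3-color hat strategy.

    strategies[i] maps the tuple of the other two players' colors to a guess
    in {0, 1, 2} or None (pass). color_probs[c] is the probability of color c.
    """
    total = 0.0
    for hats in product(range(3), repeat=3):          # all 27 assignments
        p = color_probs[hats[0]] * color_probs[hats[1]] * color_probs[hats[2]]
        guesses = [strategies[i](tuple(h for j, h in enumerate(hats) if j != i))
                   for i in range(3)]
        correct = any(g == hats[i] for i, g in enumerate(guesses) if g is not None)
        wrong = any(g != hats[i] for i, g in enumerate(guesses) if g is not None)
        if correct and not wrong:
            total += p
    return total

# Toy strategy: guess a color not seen when the two observed hats match, else pass.
def toy_strategy(observed):
    a, b = observed
    return next(c for c in range(3) if c not in observed) if a == b else None

print(winning_probability([toy_strategy] * 3, [0.5, 0.3, 0.2]))
```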
1902.03966
|
Hang Hu
|
Junlin Wang, Xiao Li, Tzu-Hao Huang, Shuangyue Yu, Yanjun Li, Tianyao
Chen, Alessandra Carriero, Mooyeon Oh-Park, and Hao Su
|
Comfort-Centered Design of a Lightweight and Backdrivable Knee
Exoskeleton
|
8 pages, 16 figures, Journal
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents design principles for comfort-centered wearable robots
and their application in a lightweight and backdrivable knee exoskeleton. The
mitigation of discomfort is treated as a mechanical design and control issue,
and three solutions are proposed in this paper: 1) a new wearable structure
optimizes the strap attachment configuration and suit layout to ameliorate the
excessive shear forces of conventional wearable structure designs; 2) rolling
knee joint and double-hinge mechanisms reduce the misalignment in the sagittal
and frontal planes, without increasing the mechanical complexity and inertia,
respectively; 3) a low-impedance mechanical transmission reduces the reflected
inertia and damping of the actuator as felt by the human, making the exoskeleton
highly backdrivable. Kinematic simulations demonstrate that the misalignment
between the robot joint and knee joint can be reduced by 74% at maximum knee
flexion. In experiments, the exoskeleton in unpowered mode exhibits a low
resistive torque of 1.03 Nm root mean square (RMS). The torque control
experiments demonstrate a 0.31 Nm RMS torque tracking error across three human
subjects.
|
[
{
"version": "v1",
"created": "Mon, 11 Feb 2019 16:20:52 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Wang",
"Junlin",
""
],
[
"Li",
"Xiao",
""
],
[
"Huang",
"Tzu-Hao",
""
],
[
"Yu",
"Shuangyue",
""
],
[
"Li",
"Yanjun",
""
],
[
"Chen",
"Tianyao",
""
],
[
"Carriero",
"Alessandra",
""
],
[
"Oh-Park",
"Mooyeon",
""
],
[
"Su",
"Hao",
""
]
] |
new_dataset
| 0.986822 |
1903.00473
|
Liqun Lin
|
Liqun Lin, Shiqi Yu, Tiesong Zhao, and Zhou Wang
|
PEA265: Perceptual Assessment of Video Compression Artifacts
|
10 pages, 15 figures, 4 tables
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The most widely used video encoders share a common hybrid coding framework
that includes block-based motion estimation/compensation and block-based
transform coding. Despite their high coding efficiency, the encoded videos
often exhibit visually annoying artifacts, denoted as Perceivable Encoding
Artifacts (PEAs), which significantly degrade the visual Quality-of-Experience
(QoE) of end users. To monitor and improve visual QoE, it is crucial to develop
subjective and objective measures that can identify and quantify various types
of PEAs. In this work, we make the first attempt to build a large-scale
subject-labelled database composed of H.265/HEVC compressed videos containing
various PEAs. The database, namely the PEA265 database, includes 4 types of
spatial PEAs (i.e. blurring, blocking, ringing and color bleeding) and 2 types
of temporal PEAs (i.e. flickering and floating), each containing at least
60,000 image or video patches with positive and negative labels. To objectively
identify these PEAs, we train Convolutional Neural Networks (CNNs) using the
PEA265 database. It appears that the state-of-the-art ResNeXt is capable of
identifying each type of PEA with high accuracy. Furthermore, we define PEA
pattern and PEA intensity measures to quantify the PEA levels of a compressed video
sequence. We believe that the PEA265 database and our findings will benefit the
future development of video quality assessment methods and perceptually
motivated video encoders.
|
[
{
"version": "v1",
"created": "Fri, 1 Mar 2019 15:25:35 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Lin",
"Liqun",
""
],
[
"Yu",
"Shiqi",
""
],
[
"Zhao",
"Tiesong",
""
],
[
"Wang",
"Zhou",
""
]
] |
new_dataset
| 0.992764 |
1905.01583
|
Qi Wang
|
Yuan Yuan, Zhitong Xiong, and Qi Wang
|
VSSA-NET: Vertical Spatial Sequence Attention Network for Traffic Sign
Detection
| null | null |
10.1109/TIP.2019.2896952
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although traffic sign detection has been studied for years and great progress
has been made with the rise of deep learning techniques, there are still many
problems remaining to be addressed. For complicated real-world traffic scenes,
there are two main challenges. Firstly, traffic signs are usually small objects,
which makes them more difficult to detect than large ones; secondly, it is hard
to distinguish false targets which resemble real traffic signs in complex street
scenes without context information. To handle these problems, we propose a novel
end-to-end deep learning method for traffic sign detection in complex
environments. Our contributions are as follows: 1) We propose a
multi-resolution feature fusion network architecture which exploits densely
connected deconvolution layers with skip connections, and can learn more
effective features for small objects; 2) We frame traffic sign detection as a
spatial sequence classification and regression task, and propose a vertical
spatial sequence attention (VSSA) module to gain more context information for
better detection performance. To comprehensively evaluate the proposed method,
we conduct experiments on several traffic sign datasets as well as a general
object detection dataset, and the results show the effectiveness of our
proposed method.
|
[
{
"version": "v1",
"created": "Sun, 5 May 2019 02:16:43 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Yuan",
"Yuan",
""
],
[
"Xiong",
"Zhitong",
""
],
[
"Wang",
"Qi",
""
]
] |
new_dataset
| 0.99768 |
1909.00155
|
Lei He
|
Shengwen Liang, Ying Wang, Cheng Liu, Lei He, Huawei Li, and Xiaowei
Li
|
EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph
Neural Networks
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph neural networks (GNNs) have emerged as a powerful approach to processing
non-Euclidean data structures and have proven effective in various application
domains such as social networks and e-commerce. The graph data maintained in
real-world systems can be extremely large and sparse, so employing GNNs to deal
with them requires substantial computational and memory overhead, which induces
considerable energy and resource costs on CPUs and GPUs.
In this work, we present a specialized accelerator architecture, EnGN, to
enable high-throughput and energy-efficient processing of large-scale GNNs. The
proposed EnGN is designed to accelerate the three key stages of GNN
propagation, which are abstracted as common computing patterns shared by typical
GNNs. To support the key stages simultaneously, we propose the
ring-edge-reduce (RER) dataflow, which tames the poor locality of
sparsely-and-randomly connected vertices, and the RER PE-array to implement the
RER dataflow. In addition, we utilize a graph tiling strategy to fit large graphs
into EnGN and make good use of the hierarchical on-chip buffers through
adaptive computation reordering and tile scheduling. Overall, EnGN achieves
performance speedups of 1802.9X, 19.75X, and 2.97X and energy efficiency gains
of 1326.35X, 304.43X, and 6.2X on average compared to a CPU, a GPU, and the
state-of-the-art GCN accelerator HyGCN, respectively.
|
[
{
"version": "v1",
"created": "Sat, 31 Aug 2019 07:12:59 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Nov 2019 02:08:40 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Apr 2020 11:34:10 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Liang",
"Shengwen",
""
],
[
"Wang",
"Ying",
""
],
[
"Liu",
"Cheng",
""
],
[
"He",
"Lei",
""
],
[
"Li",
"Huawei",
""
],
[
"Li",
"Xiaowei",
""
]
] |
new_dataset
| 0.99406 |
2203.05711
|
Yidan Sun Miss
|
Yidan Sun, Qin Chao, Yangfeng Ji and Boyang Li
|
Synopses of Movie Narratives: a Video-Language Dataset for Story
Understanding
|
25 pages, 17 figures
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Despite recent advances of AI, story understanding remains an open and
under-investigated problem. We collect, preprocess, and publicly release a
video-language story dataset, Synopses of Movie Narratives (SyMoN), containing
5,193 video summaries of popular movies and TV series with a total length of
869 hours. SyMoN captures naturalistic storytelling videos made by human
creators and intended for a human audience. As a prototypical and naturalistic
story dataset, SyMoN features high coverage of multimodal story events and
abundant mental-state descriptions. Its use of storytelling techniques causes
cross-domain semantic gaps that provide appropriate challenges to existing
models. We establish benchmarks on video-text retrieval and zero-shot alignment
on movie summary videos, which showcase the importance of in-domain data and
long-term memory in story understanding. With SyMoN, we hope to lay the
groundwork for progress in multimodal story understanding.
|
[
{
"version": "v1",
"created": "Fri, 11 Mar 2022 01:45:33 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2023 03:52:14 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Apr 2023 16:27:38 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Apr 2023 02:09:02 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Sun",
"Yidan",
""
],
[
"Chao",
"Qin",
""
],
[
"Ji",
"Yangfeng",
""
],
[
"Li",
"Boyang",
""
]
] |
new_dataset
| 0.999849 |
2206.15298
|
Johannes Pankert
|
Johannes Pankert, Giorgio Valsecchi, Davide Baret, Jon Zehnder, Lukasz
L. Pietrasik, Marko Bjelonic, Marco Hutter
|
Design and Motion Planning for a Reconfigurable Robotic Base
|
8 pages, accepted for RA-L and IROS 2022
|
IEEE Robotics and Automation Letters, vol. 7, no. 4, pp.
9012-9019, Oct. 2022
|
10.1109/LRA.2022.3189166
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A robotic platform for mobile manipulation needs to satisfy two contradicting
requirements for many real-world applications: A compact base is required to
navigate through cluttered indoor environments, while the support needs to be
large enough to prevent tumbling or tipping over, especially during fast
manipulation operations with heavy payloads or forceful interaction with the
environment. This paper proposes a novel robot design that fulfills both
requirements through a versatile footprint. It can reconfigure its footprint to
a narrow configuration when navigating through tight spaces and to a wide
stance when manipulating heavy objects. Furthermore, its triangular
configuration allows for high-precision tasks on uneven ground by preventing
support switches. A model predictive control strategy is presented that unifies
planning and control for simultaneous navigation, reconfiguration, and
manipulation. It converts task-space goals into whole-body motion plans for the
new robot. The proposed design has been tested extensively with a hardware
prototype. The footprint reconfiguration allows manipulation-induced vibrations
to be almost completely removed. The control strategy proves effective both in
lab experiments and during a real-world construction task.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 14:00:47 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Jul 2022 11:54:28 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Apr 2023 12:47:15 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Pankert",
"Johannes",
""
],
[
"Valsecchi",
"Giorgio",
""
],
[
"Baret",
"Davide",
""
],
[
"Zehnder",
"Jon",
""
],
[
"Pietrasik",
"Lukasz L.",
""
],
[
"Bjelonic",
"Marko",
""
],
[
"Hutter",
"Marco",
""
]
] |
new_dataset
| 0.976019 |
2209.07976
|
Jiri Sedlar
|
Jiri Sedlar, Karla Stepanova, Radoslav Skoviera, Jan K. Behrens, Matus
Tuna, Gabriela Sejnova, Josef Sivic, Robert Babuska
|
Imitrob: Imitation Learning Dataset for Training and Evaluating 6D
Object Pose Estimators
|
The dataset and code are publicly available at
http://imitrob.ciirc.cvut.cz/imitrobdataset.php
|
IEEE Robotics and Automation Letters, vol. 8, no. 5, pp.
2788-2795, 2023
|
10.1109/LRA.2023.3259735
| null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces a dataset for training and evaluating methods for 6D
pose estimation of hand-held tools in task demonstrations captured by a
standard RGB camera. Despite the significant progress of 6D pose estimation
methods, their performance is usually limited for heavily occluded objects,
which is a common case in imitation learning, where the object is typically
partially occluded by the manipulating hand. Currently, there is a lack of
datasets that would enable the development of robust 6D pose estimation methods
for these conditions. To overcome this problem, we collect a new dataset
(Imitrob) aimed at 6D pose estimation in imitation learning and other
applications where a human holds a tool and performs a task. The dataset
contains image sequences of nine different tools and twelve manipulation tasks
with two camera viewpoints, four human subjects, and left/right hand. Each
image is accompanied by an accurate ground truth measurement of the 6D object
pose obtained by the HTC Vive motion tracking device. The use of the dataset is
demonstrated by training and evaluating a recent 6D object pose estimation
method (DOPE) in various setups.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 14:43:46 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 15:34:06 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Apr 2023 17:30:35 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Sedlar",
"Jiri",
""
],
[
"Stepanova",
"Karla",
""
],
[
"Skoviera",
"Radoslav",
""
],
[
"Behrens",
"Jan K.",
""
],
[
"Tuna",
"Matus",
""
],
[
"Sejnova",
"Gabriela",
""
],
[
"Sivic",
"Josef",
""
],
[
"Babuska",
"Robert",
""
]
] |
new_dataset
| 0.999506 |
2209.13136
|
Pranav Shetty
|
Pranav Shetty, Arunkumar Chitteth Rajan, Christopher Kuenneth,
Sonkakshi Gupta, Lakshmi Prerana Panchumarti, Lauren Holm, Chao Zhang, and
Rampi Ramprasad
|
A general-purpose material property data extraction pipeline from large
polymer corpora using Natural Language Processing
| null | null |
10.1038/s41524-023-01003-w
| null |
cs.CL cond-mat.mtrl-sci cond-mat.soft
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ever-increasing number of materials science articles makes it hard to
infer chemistry-structure-property relations from published literature. We used
natural language processing (NLP) methods to automatically extract material
property data from the abstracts of polymer literature. As a component of our
pipeline, we trained MaterialsBERT, a language model, using 2.4 million
materials science abstracts, which outperforms other baseline models in three
out of five named entity recognition datasets when used as the encoder for
text. Using this pipeline, we obtained ~300,000 material property records from
~130,000 abstracts in 60 hours. The extracted data was analyzed for a diverse
range of applications such as fuel cells, supercapacitors, and polymer solar
cells to recover non-trivial insights. The data extracted through our pipeline
is made available through a web platform at https://polymerscholar.org which
can be used to locate material property data recorded in abstracts
conveniently. This work demonstrates the feasibility of an automatic pipeline
that starts from published literature and ends with a complete set of extracted
material property information.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 03:47:03 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Shetty",
"Pranav",
""
],
[
"Rajan",
"Arunkumar Chitteth",
""
],
[
"Kuenneth",
"Christopher",
""
],
[
"Gupta",
"Sonkakshi",
""
],
[
"Panchumarti",
"Lakshmi Prerana",
""
],
[
"Holm",
"Lauren",
""
],
[
"Zhang",
"Chao",
""
],
[
"Ramprasad",
"Rampi",
""
]
] |
new_dataset
| 0.968783 |
2210.00192
|
Han Ruihua
|
Ruihua Han, Shuai Wang, Shuaijun Wang, Zeqing Zhang, Qianru Zhang,
Yonina C. Eldar, Qi Hao, Jia Pan
|
RDA: An Accelerated Collision Free Motion Planner for Autonomous
Navigation in Cluttered Environments
|
Published in: IEEE Robotics and Automation Letters ( Volume: 8,
Issue: 3, March 2023) (https://ieeexplore.ieee.org/document/10036019)
| null |
10.1109/LRA.2023.3242138
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous motion planning is challenging in multi-obstacle environments due
to nonconvex collision avoidance constraints. Directly applying numerical
solvers to these nonconvex formulations fails to exploit the constraint
structures, resulting in excessive computation time. In this paper, we present
an accelerated collision-free motion planner, namely regularized dual
alternating direction method of multipliers (RDADMM or RDA for short), for the
model predictive control (MPC) based motion planning problem. The proposed RDA
addresses nonconvex motion planning by solving a smooth biconvex reformulation
via duality and allows the collision avoidance constraints to be computed in
parallel for each obstacle, significantly reducing computation time. We
validate the performance of the RDA planner through path-tracking experiments
with car-like robots in both simulation and real-world settings. Experimental
results show that the proposed method generates smooth collision-free
trajectories with less computation time compared with other benchmarks and
performs robustly in cluttered environments. The source code is available at
https://github.com/hanruihua/RDA_planner.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 04:47:02 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Dec 2022 07:53:19 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Feb 2023 09:49:42 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Apr 2023 02:25:53 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Han",
"Ruihua",
""
],
[
"Wang",
"Shuai",
""
],
[
"Wang",
"Shuaijun",
""
],
[
"Zhang",
"Zeqing",
""
],
[
"Zhang",
"Qianru",
""
],
[
"Eldar",
"Yonina C.",
""
],
[
"Hao",
"Qi",
""
],
[
"Pan",
"Jia",
""
]
] |
new_dataset
| 0.997434 |
2210.08118
|
Pei Lv
|
Pei Lv, Xinming Pei, Xinyu Ren, Yuzhen Zhang, Chaochao Li, and
Mingliang Xu
|
TraInterSim: Adaptive and Planning-Aware Hybrid-Driven Traffic
Intersection Simulation
|
13 pages, 12 figures
| null | null | null |
cs.RO cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Traffic intersections are important scenes that can be seen almost everywhere
in the traffic system. Currently, most simulation methods perform well on
highways and urban traffic networks. In intersection scenarios, the challenge
lies in the lack of clearly defined lanes, where agents with various motion
plans converge in the central area from different directions. Traditional
model-based methods struggle to drive agents to move realistically at
intersections without enough predefined lanes, while data-driven methods often
require a large amount of high-quality input data. Simultaneously, tedious
parameter tuning is inevitably involved to obtain the desired simulation
results. In this paper, we present a novel adaptive and planning-aware
hybrid-driven method (TraInterSim) to simulate traffic intersection scenarios.
Our hybrid-driven method combines an optimization-based data-driven scheme with
a velocity continuity model. It guides the agent's movements using real-world
data and can generate those behaviors not present in the input data. Our
optimization method fully considers velocity continuity, desired speed,
direction guidance, and planning-aware collision avoidance. Agents can perceive
others' motion planning and relative distance to avoid possible collisions. To
preserve the individual flexibility of different agents, the parameters in our
method are automatically adjusted during the simulation. TraInterSim can
generate realistic behaviors of heterogeneous agents in different traffic
intersection scenarios at interactive rates. Through extensive experiments as
well as user studies, we validate the effectiveness and rationality of the
proposed simulation method.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 07:57:53 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 08:35:21 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Lv",
"Pei",
""
],
[
"Pei",
"Xinming",
""
],
[
"Ren",
"Xinyu",
""
],
[
"Zhang",
"Yuzhen",
""
],
[
"Li",
"Chaochao",
""
],
[
"Xu",
"Mingliang",
""
]
] |
new_dataset
| 0.982076 |
2210.16386
|
Qinyi Chen
|
Qinyi Chen, Negin Golrezaei, Djallel Bouneffouf
|
Dynamic Bandits with an Auto-Regressive Temporal Structure
|
41 pages, 4 figures
| null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-armed bandit (MAB) problems are mainly studied under two extreme
settings known as stochastic and adversarial. These two settings, however, do
not capture realistic environments such as search engines and marketing and
advertising, in which rewards change stochastically over time. Motivated by this,
we introduce and study a dynamic MAB problem with stochastic temporal
structure, where the expected reward of each arm is governed by an
auto-regressive (AR) model. Due to the dynamic nature of the rewards, simple
"explore and commit" policies fail, as all arms have to be explored
continuously over time. We formalize this by characterizing a per-round regret
lower bound, where the regret is measured against a strong (dynamic) benchmark.
We then present an algorithm whose per-round regret almost matches our regret
lower bound. Our algorithm relies on two mechanisms: (i) alternating between
recently pulled arms and unpulled arms with potential, and (ii) restarting.
These mechanisms enable the algorithm to dynamically adapt to changes and
discard irrelevant past information at a suitable rate. In numerical studies,
we further demonstrate the strength of our algorithm under non-stationary
settings.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 20:02:21 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 01:16:08 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Chen",
"Qinyi",
""
],
[
"Golrezaei",
"Negin",
""
],
[
"Bouneffouf",
"Djallel",
""
]
] |
new_dataset
| 0.996671 |
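
To make the auto-regressive reward structure in the abstract above concrete, the snippet below simulates several arms whose expected rewards follow a simple AR(1) process; because the identity of the best arm drifts over time, a policy that stops exploring (explore-then-commit) would fall behind. The AR(1) parameterization and all constants are illustrative assumptions, not the paper's exact model or algorithm.

```python
import numpy as np

def simulate_ar1_bandit(n_arms=3, horizon=200, rho=0.9, sigma=0.3, seed=0):
    """Simulate arm means following an AR(1) process: mu_t = c + rho * mu_{t-1} + noise."""
    rng = np.random.default_rng(seed)
    c = rng.uniform(0.0, 1.0, size=n_arms)       # per-arm intercepts
    mu = c / (1.0 - rho)                         # start at the stationary mean
    means = np.empty((horizon, n_arms))
    for t in range(horizon):
        mu = c + rho * mu + sigma * rng.normal(size=n_arms)
        means[t] = mu
    return means

means = simulate_ar1_bandit()
# The identity of the best arm changes over time, so all arms must keep being explored.
print(np.unique(means.argmax(axis=1)))
```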
2211.13929
|
Pritam Sarkar
|
Pritam Sarkar and Ali Etemad
|
XKD: Cross-modal Knowledge Distillation with Domain Alignment for Video
Representation Learning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present XKD, a novel self-supervised framework to learn meaningful
representations from unlabelled video clips. XKD is trained with two pseudo
tasks. First, masked data reconstruction is performed to learn individual
representations from audio and visual streams. Next, self-supervised
cross-modal knowledge distillation is performed between the two modalities
through teacher-student setups to learn complementary information. To identify
the most effective information to transfer and also to tackle the domain gap
between audio and visual modalities which could hinder knowledge transfer, we
introduce a domain alignment and feature refinement strategy for effective
cross-modal knowledge distillation. Lastly, to develop a general-purpose
network capable of handling both audio and visual streams, modality-agnostic
variants of our proposed framework are introduced, which use the same backbone
for both audio and visual modalities. Our proposed cross-modal knowledge
distillation improves linear evaluation top-1 accuracy of video action
classification by 8.6% on UCF101, 8.2% on HMDB51, 13.9% on Kinetics-Sound, and
15.7% on Kinetics400. Additionally, our modality-agnostic variant shows
promising results in developing a general-purpose network capable of learning
both data streams for solving different downstream tasks.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 06:51:35 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2022 03:59:11 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Dec 2022 21:36:35 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Apr 2023 06:20:28 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Sarkar",
"Pritam",
""
],
[
"Etemad",
"Ali",
""
]
] |
new_dataset
| 0.988543 |
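
As a minimal sketch of the cross-modal knowledge distillation step summarized in the abstract above — a student in one modality is trained to match a teacher's representation from the other modality after both are projected into a shared space — the snippet below computes a simple cosine-based distillation loss. The projection matrices, the loss choice, and the omission of the paper's domain-alignment and feature-refinement steps are all simplifying assumptions.

```python
import numpy as np

def cosine_distillation_loss(student_feats, teacher_feats, W_s, W_t):
    """Mean (1 - cosine similarity) between projected student and teacher features."""
    s = student_feats @ W_s                       # project (e.g.) video-student features
    t = teacher_feats @ W_t                       # project (e.g.) audio-teacher features
    s = s / np.linalg.norm(s, axis=1, keepdims=True)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s * t, axis=1)))

rng = np.random.default_rng(0)
batch, d_v, d_a, d_shared = 8, 512, 256, 128
loss = cosine_distillation_loss(
    rng.normal(size=(batch, d_v)), rng.normal(size=(batch, d_a)),
    rng.normal(size=(d_v, d_shared)), rng.normal(size=(d_a, d_shared)),
)
print(loss)
```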
2212.05199
|
Lijun Yu
|
Lijun Yu, Yong Cheng, Kihyuk Sohn, Jos\'e Lezama, Han Zhang, Huiwen
Chang, Alexander G. Hauptmann, Ming-Hsuan Yang, Yuan Hao, Irfan Essa, Lu
Jiang
|
MAGVIT: Masked Generative Video Transformer
|
CVPR 2023 highlight
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the MAsked Generative VIdeo Transformer, MAGVIT, to tackle
various video synthesis tasks with a single model. We introduce a 3D tokenizer
to quantize a video into spatial-temporal visual tokens and propose an
embedding method for masked video token modeling to facilitate multi-task
learning. We conduct extensive experiments to demonstrate the quality,
efficiency, and flexibility of MAGVIT. Our experiments show that (i) MAGVIT
performs favorably against state-of-the-art approaches and establishes the
best-published FVD on three video generation benchmarks, including the
challenging Kinetics-600. (ii) MAGVIT outperforms existing methods in inference
time by two orders of magnitude against diffusion models and by 60x against
autoregressive models. (iii) A single MAGVIT model supports ten diverse
generation tasks and generalizes across videos from different visual domains.
The source code and trained models will be released to the public at
https://magvit.cs.cmu.edu.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 04:26:32 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 02:32:59 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Yu",
"Lijun",
""
],
[
"Cheng",
"Yong",
""
],
[
"Sohn",
"Kihyuk",
""
],
[
"Lezama",
"José",
""
],
[
"Zhang",
"Han",
""
],
[
"Chang",
"Huiwen",
""
],
[
"Hauptmann",
"Alexander G.",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Hao",
"Yuan",
""
],
[
"Essa",
"Irfan",
""
],
[
"Jiang",
"Lu",
""
]
] |
new_dataset
| 0.997643 |
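
The masked video token modeling mentioned in the abstract above can be illustrated with a small sketch: a 3D tokenizer (not shown) quantizes a clip into a grid of discrete token ids, a random subset of which is replaced with a special MASK id for the model to predict. The masking routine below is a generic illustration under assumed names, not MAGVIT's actual embedding method.

```python
import numpy as np

MASK_ID = -1  # assumed sentinel id for masked positions

def mask_video_tokens(token_ids, mask_ratio, rng):
    """Randomly replace a fraction of discrete video tokens with MASK_ID."""
    tokens = token_ids.copy()
    n = tokens.size
    n_mask = int(round(mask_ratio * n))
    idx = rng.choice(n, size=n_mask, replace=False)
    flat = tokens.reshape(-1)
    targets = flat[idx].copy()          # what the model should reconstruct
    flat[idx] = MASK_ID
    return tokens, idx, targets

rng = np.random.default_rng(0)
tokens = rng.integers(0, 1024, size=(4, 16, 16))     # (frames, height, width) token grid
masked, idx, targets = mask_video_tokens(tokens, 0.5, rng)
print((masked == MASK_ID).mean())  # ~0.5
```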
2301.02938
|
Leimin Tian
|
Leimin Tian, Kerry He, Shiyu Xu, Akansel Cosgun, Dana Kuli\'c
|
Crafting with a Robot Assistant: Use Social Cues to Inform Adaptive
Handovers in Human-Robot Collaboration
|
accepted at HRI 2023
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We study human-robot handovers in a naturalistic collaboration scenario,
where a mobile manipulator robot assists a person during a crafting session by
providing and retrieving objects used for wooden piece assembly (functional
activities) and painting (creative activities). We collect quantitative and
qualitative data from 20 participants in a Wizard-of-Oz study, generating the
Functional And Creative Tasks Human-Robot Collaboration dataset (the FACT HRC
dataset), available to the research community. This work illustrates how social
cues and task context inform the temporal-spatial coordination in human-robot
handovers, and how human-robot collaboration is shaped by and in turn
influences people's functional and creative activities.
|
[
{
"version": "v1",
"created": "Sat, 7 Jan 2023 21:19:31 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 02:56:46 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Tian",
"Leimin",
""
],
[
"He",
"Kerry",
""
],
[
"Xu",
"Shiyu",
""
],
[
"Cosgun",
"Akansel",
""
],
[
"Kulić",
"Dana",
""
]
] |
new_dataset
| 0.993809 |
2302.09864
|
Luke Haliburton
|
Luke Haliburton and Natalia Bart{\l}omiejczyk and Pawe{\l} W.
Wo\'zniak and Albrecht Schmidt and Jasmin Niess
|
The Walking Talking Stick: Understanding Automated Note-Taking in
Walking Meetings
|
In CHI 2023
|
Proceedings of the 2023 CHI Conference on Human Factors in
Computing Systems
|
10.1145/3544548.3580986
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While walking meetings offer a healthy alternative to sit-down meetings, they
also pose practical challenges. Taking notes is difficult while walking, which
limits the potential of walking meetings. To address this, we designed the
Walking Talking Stick -- a tangible device with integrated voice recording,
transcription, and a physical highlighting button to facilitate note-taking
during walking meetings. We investigated our system in a three-condition
between-subjects user study with thirty pairs of participants ($N$=60) who
conducted 15-minute outdoor walking meetings. Participants either used clip-on
microphones, the prototype without the button, or the prototype with the
highlighting button. We found that the tangible device increased task focus,
and the physical highlighting button facilitated turn-taking and resulted in
more useful notes. Our work demonstrates how interactive artifacts can
incentivize users to hold meetings in motion and enhance conversation dynamics.
We contribute insights for future systems which support conducting work tasks
in mobile environments.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 09:54:30 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Haliburton",
"Luke",
""
],
[
"Bartłomiejczyk",
"Natalia",
""
],
[
"Woźniak",
"Paweł W.",
""
],
[
"Schmidt",
"Albrecht",
""
],
[
"Niess",
"Jasmin",
""
]
] |
new_dataset
| 0.957912 |
2303.02760
|
Xuan Ju
|
Xuan Ju, Ailing Zeng, Jianan Wang, Qiang Xu, Lei Zhang
|
Human-Art: A Versatile Human-Centric Dataset Bridging Natural and
Artificial Scenes
|
CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Humans have long been recorded in a variety of forms since antiquity. For
example, sculptures and paintings were the primary media for depicting human
beings before the invention of cameras. However, most current human-centric
computer vision tasks like human pose estimation and human image generation
focus exclusively on natural images in the real world. Artificial humans, such
as those in sculptures, paintings, and cartoons, are commonly neglected, making
existing models fail in these scenarios. As an abstraction of life, art
incorporates humans in both natural and artificial scenes. We take advantage of
it and introduce the Human-Art dataset to bridge related tasks in natural and
artificial scenarios. Specifically, Human-Art contains 50k high-quality images
with over 123k person instances from 5 natural and 15 artificial scenarios,
which are annotated with bounding boxes, keypoints, self-contact points, and
text information for humans represented in both 2D and 3D. It is, therefore,
comprehensive and versatile for various downstream tasks. We also provide a
rich set of baseline results and detailed analyses for related tasks, including
human detection, 2D and 3D human pose estimation, image generation, and motion
transfer. As a challenging dataset, we hope Human-Art can provide insights for
relevant research and open up new research questions.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 20:05:21 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 07:36:48 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Ju",
"Xuan",
""
],
[
"Zeng",
"Ailing",
""
],
[
"Wang",
"Jianan",
""
],
[
"Xu",
"Qiang",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.999618 |
2303.03750
|
Vukosi Marivate
|
Richard Lastrucci, Isheanesu Dzingirai, Jenalea Rajab, Andani
Madodonga, Matimba Shingange, Daniel Njini, Vukosi Marivate
|
Preparing the Vuk'uzenzele and ZA-gov-multilingual South African
multilingual corpora
|
Accepted and to appear at Fourth workshop on Resources for African
Indigenous Languages (RAIL) at EACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces two multilingual government themed corpora in various
South African languages. The corpora were collected by gathering the South
African Government newspaper (Vuk'uzenzele), as well as South African
government speeches (ZA-gov-multilingual), that are translated into all 11
South African official languages. The corpora can be used for a myriad of
downstream NLP tasks. The corpora were created to allow researchers to study
the language used in South African government publications, with a focus on
understanding how South African government officials communicate with their
constituents. In this paper we highlight the process of gathering, cleaning and
making available the corpora. We create parallel sentence corpora for Neural
Machine Translation (NMT) tasks using Language-Agnostic Sentence
Representations (LASER) embeddings. With these aligned sentences we then
provide NMT benchmarks for 9 indigenous languages by fine-tuning a massively
multilingual pre-trained language model.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 09:20:09 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 09:39:32 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Lastrucci",
"Richard",
""
],
[
"Dzingirai",
"Isheanesu",
""
],
[
"Rajab",
"Jenalea",
""
],
[
"Madodonga",
"Andani",
""
],
[
"Shingange",
"Matimba",
""
],
[
"Njini",
"Daniel",
""
],
[
"Marivate",
"Vukosi",
""
]
] |
new_dataset
| 0.997498 |
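
The parallel-sentence mining step described in the abstract above — embedding sentences from two languages with a language-agnostic encoder and pairing the most similar ones — can be sketched as follows. The greedy cosine-similarity matching and the threshold are illustrative assumptions; the authors use LASER embeddings and their own alignment procedure.

```python
import numpy as np

def align_sentences(src_embeddings, tgt_embeddings, threshold=0.8):
    """Greedily pair each source sentence with its most similar target sentence."""
    def normalize(m):
        return m / np.linalg.norm(m, axis=1, keepdims=True)

    sims = normalize(src_embeddings) @ normalize(tgt_embeddings).T
    pairs = []
    for i, row in enumerate(sims):
        j = int(row.argmax())
        if row[j] >= threshold:          # keep only confident matches
            pairs.append((i, j, float(row[j])))
    return pairs

# Toy usage: random vectors stand in for LASER sentence embeddings.
rng = np.random.default_rng(0)
src = rng.normal(size=(5, 1024))
tgt = np.vstack([src[[3, 0, 4]], rng.normal(size=(2, 1024))])  # 3 true translations + 2 unrelated
print(align_sentences(src, tgt, threshold=0.9))
```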
2304.02015
|
Zheng Yuan
|
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang
|
How well do Large Language Models perform in Arithmetic tasks?
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models have shown emergent abilities, including chain-of-thought
reasoning, to answer math word problems step by step. Solving math word problems
not only requires the ability to disassemble problems via chain-of-thought but
also requires calculating arithmetic expressions correctly at each step. To the
best of our knowledge, no prior work focuses on evaluating the arithmetic
ability of large language models. In this work, we propose an arithmetic
dataset, MATH 401, to test the latest large language models, including GPT-4,
ChatGPT, InstructGPT, Galactica, and LLaMA, with various arithmetic expressions,
and we provide a detailed analysis of the ability of large language models.
MATH 401 and the evaluation code are released at
\url{https://github.com/GanjinZero/math401-llm}.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 09:28:15 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Yuan",
"Zheng",
""
],
[
"Yuan",
"Hongyi",
""
],
[
"Tan",
"Chuanqi",
""
],
[
"Wang",
"Wei",
""
],
[
"Huang",
"Songfang",
""
]
] |
new_dataset
| 0.972895 |
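
A simple way to picture the arithmetic evaluation described in the abstract above is to extract the final number from a model's answer and compare it to the ground-truth value of the expression within a tolerance. The snippet below is an assumed illustration; the released evaluation code at the linked repository defines the actual protocol.

```python
import re

def extract_last_number(text):
    """Return the last numeric literal in a model's answer, or None."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(matches[-1]) if matches else None

def is_correct(model_answer, ground_truth, rel_tol=1e-3):
    """Check a predicted value against the ground truth with a relative tolerance."""
    pred = extract_last_number(model_answer)
    if pred is None:
        return False
    return abs(pred - ground_truth) <= rel_tol * max(1.0, abs(ground_truth))

print(is_correct("23 * 17 = 391, so the answer is 391.", 23 * 17))   # True
print(is_correct("The result is roughly 390.", 23 * 17))             # False
```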
2304.02126
|
Mohamed Behery
|
Mohamed Behery and Gerhard Lakemeyer
|
Digital Shadows of Safety for Human Robot Collaboration in the
World-Wide Lab
| null | null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The World Wide Lab (WWL) connects the Digital Shadows (DSs) of processes,
products, companies, and other entities allowing the exchange of information
across company boundaries. Since DSs are context- and purpose-specific
representations of a process, as opposed to Digital Twins (DTs) which offer a
full simulation, the integration of a process into the WWL requires the
creation of DSs representing different aspects of the process. Human-Robot
Collaboration (HRC) for assembly processes was recently studied in the context
of the WWL where Behaviour Trees (BTs) were proposed as a standard task-level
representation of these processes. We extend previous work by proposing to
standardise safety functions that can be directly integrated into these BTs.
This addition uses the WWL as a communication and information exchange platform
allowing industrial and academic practitioners to exchange, reuse, and
experiment with different safety requirements and solutions in the WWL.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 21:17:49 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Behery",
"Mohamed",
""
],
[
"Lakemeyer",
"Gerhard",
""
]
] |
new_dataset
| 0.953735 |
2304.02205
|
Jifan Yu
|
Jifan Yu, Mengying Lu, Qingyang Zhong, Zijun Yao, Shangqing Tu,
Zhengshan Liao, Xiaoya Li, Manli Li, Lei Hou, Hai-Tao Zheng, Juanzi Li, Jie
Tang
|
MoocRadar: A Fine-grained and Multi-aspect Knowledge Repository for
Improving Cognitive Student Modeling in MOOCs
|
Accepted by SIGIR 2023
| null | null | null |
cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Student modeling, the task of inferring a student's learning characteristics
through their interactions with coursework, is a fundamental issue in
intelligent education. Although the recent attempts from knowledge tracing and
cognitive diagnosis propose several promising directions for improving the
usability and effectiveness of current models, the existing public datasets are
still insufficient to meet the needs of these potential solutions because they
lack complete exercising contexts, fine-grained concepts, and cognitive
labels. In this paper, we present MoocRadar, a fine-grained, multi-aspect
knowledge repository consisting of 2,513 exercise questions, 5,600 knowledge
concepts, and over 12 million behavioral records. Specifically, we propose a
framework to guarantee a high-quality and comprehensive annotation of
fine-grained concepts and cognitive labels. The statistical and experimental
results indicate that our dataset provides the basis for the future
improvements of existing methods. Moreover, to support convenient usage by
researchers, we release a set of tools for data querying, model adaptation, and
even the extension of our repository, which are now available at
https://github.com/THU-KEG/MOOC-Radar.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 03:36:40 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Yu",
"Jifan",
""
],
[
"Lu",
"Mengying",
""
],
[
"Zhong",
"Qingyang",
""
],
[
"Yao",
"Zijun",
""
],
[
"Tu",
"Shangqing",
""
],
[
"Liao",
"Zhengshan",
""
],
[
"Li",
"Xiaoya",
""
],
[
"Li",
"Manli",
""
],
[
"Hou",
"Lei",
""
],
[
"Zheng",
"Hai-Tao",
""
],
[
"Li",
"Juanzi",
""
],
[
"Tang",
"Jie",
""
]
] |
new_dataset
| 0.977304 |
2304.02214
|
Binbin Feng
|
Binbin Feng, Jun Li, Jianhua Xu
|
LogoNet: a fine-grained network for instance-level logo sketch retrieval
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sketch-based image retrieval, which aims to use sketches as queries to
retrieve images containing the same query instance, has received increasing
attention in recent years. Although dramatic progress has been made in sketch
retrieval, few efforts are devoted to logo sketch retrieval which is still
hindered by the following challenges: Firstly, logo sketch retrieval is more
difficult than typical sketch retrieval problem, since a logo sketch usually
contains much less visual content, with only irregular strokes and lines.
Secondly, instance-specific sketches demonstrate dramatic appearance variances,
making them less identifiable when querying the same logo instance. Thirdly,
there exist several sketch retrieval benchmarking datasets nowadays, whereas an
instance-level logo sketch dataset is still publicly unavailable. To address
the above-mentioned limitations, we make twofold contributions in this study
for instance-level logo sketch retrieval. To begin with, we construct an
instance-level logo sketch dataset containing 2k logo instances and more than
9k sketches. To our knowledge, this is the first publicly available
instance-level logo sketch dataset. Next, we develop a fine-grained
triple-branch CNN architecture based on a hybrid attention mechanism, termed
LogoNet, for accurate logo sketch retrieval. More specifically, we embed the
hybrid attention mechanism into the triple-branch architecture for capturing
the key query-specific information from the limited visual cues in the logo
sketches. Experimental evaluations both on our assembled dataset and public
benchmark datasets demonstrate the effectiveness of our proposed network.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 04:03:02 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Feng",
"Binbin",
""
],
[
"Li",
"Jun",
""
],
[
"Xu",
"Jianhua",
""
]
] |
new_dataset
| 0.999723 |
2304.02233
|
Zihao Wang
|
Zihao Wang, Ali Ahmadvand, Jason Choi, Payam Karisani, Eugene
Agichtein
|
Ericson: An Interactive Open-Domain Conversational Search Agent
|
pre-print
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Open-domain conversational search (ODCS) aims to provide valuable, up-to-date
information, while maintaining natural conversations to help users refine and
ultimately answer information needs. However, creating an effective and robust
ODCS agent is challenging. In this paper, we present a fully functional ODCS
system, Ericson, which includes state-of-the-art question answering and
information retrieval components, as well as intent inference and dialogue
management models for proactive question refinement and recommendations. Our
system was stress-tested in the Amazon Alexa Prize, by engaging in live
conversations with thousands of Alexa users, thus providing an empirical basis for
the analysis of the ODCS system in real settings. Our interaction data analysis
revealed that accurate intent classification, encouraging user engagement, and
careful proactive recommendations contribute most to users' satisfaction.
Our study further identifies limitations of the existing search techniques, and
can serve as a building block for the next generation of ODCS agents.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 05:28:31 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Wang",
"Zihao",
""
],
[
"Ahmadvand",
"Ali",
""
],
[
"Choi",
"Jason",
""
],
[
"Karisani",
"Payam",
""
],
[
"Agichtein",
"Eugene",
""
]
] |
new_dataset
| 0.998006 |
2304.02246
|
Philipp Straubinger
|
Philipp Straubinger, Laura Caspari, Gordon Fraser
|
Code Critters: A Block-Based Testing Game
| null | null | null | null |
cs.SE cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Learning to program has become common in schools, higher education and
individual learning. Although testing is an important aspect of programming, it
is often neglected in education due to a perceived lack of time and knowledge,
or simply because testing is considered less important or fun. To make testing
more engaging, we therefore introduce Code Critters, a Tower Defense game based
on testing concepts: The aim of the game is to place magic mines along the
route taken by small "critters" from their home to a tower, such that the mines
distinguish critters executing correct code from those executing buggy
code. Code is shown and edited using a block-based language to make the game
accessible for younger learners. The mines encode test inputs as well as test
oracles, thus making testing an integral and fun component of the game.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 06:34:42 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Straubinger",
"Philipp",
""
],
[
"Caspari",
"Laura",
""
],
[
"Fraser",
"Gordon",
""
]
] |
new_dataset
| 0.999608 |
2304.02250
|
Yang Zheng
|
Yang Zheng, Oles Andrienko, Yonglei Zhao, Minwoo Park, Trung Pham
|
DPPD: Deformable Polar Polygon Object Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Regular object detection methods output rectangle bounding boxes, which are
unable to accurately describe the actual object shapes. Instance segmentation
methods output pixel-level labels, which are computationally expensive for
real-time applications. Therefore, a polygon representation is needed to
achieve precise shape alignment, while retaining low computation cost. We
develop a novel Deformable Polar Polygon Object Detection method (DPPD) to
detect objects in polygon shapes. In particular, our network predicts, for each
object, a sparse set of flexible vertices to construct the polygon, where each
vertex is represented by a pair of angle and distance in the Polar coordinate
system. To enable training, both ground truth and predicted polygons are
densely resampled to have the same number of vertices with equal-spaced
raypoints. The resampling operation is fully differentiable, allowing gradient
back-propagation. Sparse polygon prediction ensures high-speed runtime inference
while dense resampling allows the network to learn object shapes with high
precision. The polygon detection head is established on top of an anchor-free
and NMS-free network architecture. DPPD has been demonstrated successfully in
various object detection tasks for autonomous driving such as traffic-sign,
crosswalk, vehicle and pedestrian objects.
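A minimal NumPy sketch of the polar vertex representation and dense resampling described in this abstract; the helper names, the star-shaped-polygon assumption, and the fixed ray count are illustrative choices, not the DPPD implementation.

```python
import numpy as np

def resample_polar(angles, dists, num_rays=64):
    """Resample a star-shaped polygon given as sparse (angle, distance) vertex
    pairs onto num_rays equally spaced rays via periodic linear interpolation,
    so predicted and ground-truth polygons share the same vertex count."""
    ray_angles = np.linspace(0.0, 2.0 * np.pi, num_rays, endpoint=False)
    ray_dists = np.interp(ray_angles, angles, dists, period=2.0 * np.pi)
    return ray_angles, ray_dists

def polar_to_xy(center, angles, dists):
    """Convert polar vertices around an object center to Cartesian points."""
    return np.stack([center[0] + dists * np.cos(angles),
                     center[1] + dists * np.sin(angles)], axis=-1)

# Toy usage: a sparse 6-vertex prediction densified to 64 equally spaced rays.
center = np.array([50.0, 50.0])
angles = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
dists = np.array([10.0, 12.0, 9.0, 11.0, 13.0, 10.0])
dense_angles, dense_dists = resample_polar(angles, dists)
polygon_xy = polar_to_xy(center, dense_angles, dense_dists)   # (64, 2)
```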
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 06:43:41 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Zheng",
"Yang",
""
],
[
"Andrienko",
"Oles",
""
],
[
"Zhao",
"Yonglei",
""
],
[
"Park",
"Minwoo",
""
],
[
"Pham",
"Trung",
""
]
] |
new_dataset
| 0.99975 |
2304.02251
|
Chao Zhao
|
Chao Zhao, Shuai Yuan, Chunli Jiang, Junhao Cai, Hongyu Yu, Michael Yu
Wang, and Qifeng Chen
|
ERRA: An Embodied Representation and Reasoning Architecture for
Long-horizon Language-conditioned Manipulation Tasks
|
Accepted to IEEE Robotics and Automation Letters (RA-L)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter introduces ERRA, an embodied learning architecture that enables
robots to jointly obtain three fundamental capabilities (reasoning, planning,
and interaction) for solving long-horizon language-conditioned manipulation
tasks. ERRA is based on tightly-coupled probabilistic inferences at two
granularity levels. Coarse-resolution inference is formulated as sequence
generation through a large language model, which infers action language from
natural language instruction and environment state. The robot then zooms to the
fine-resolution inference part to perform the concrete action corresponding to
the action language. Fine-resolution inference is constructed as a Markov
decision process, which takes action language and environmental sensing as
observations and outputs the action. The results of action execution in
environments provide feedback for subsequent coarse-resolution reasoning. Such
coarse-to-fine inference allows the robot to decompose and achieve long-horizon
tasks interactively. In extensive experiments, we show that ERRA can complete
various long-horizon manipulation tasks specified by abstract language
instructions. We also demonstrate successful generalization to novel but
similar natural language instructions.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 06:50:22 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Zhao",
"Chao",
""
],
[
"Yuan",
"Shuai",
""
],
[
"Jiang",
"Chunli",
""
],
[
"Cai",
"Junhao",
""
],
[
"Yu",
"Hongyu",
""
],
[
"Wang",
"Michael Yu",
""
],
[
"Chen",
"Qifeng",
""
]
] |
new_dataset
| 0.999543 |
2304.02274
|
Ze Gao Mr
|
Simin Yang, Ze Gao, Reza Hadi Mogavi, Pan Hui, Tristan Braud
|
Tangible Web: An Interactive Immersion Virtual Reality Creativity System
that Travels Across Reality
|
Accepted In Proceedings of the ACM Web Conference 2023, April 30-May
4, 2023, Austin, TX, USA. ACM, New York, NY, USA
| null |
10.1145/3543507.3587432
| null |
cs.HC cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the advancement of virtual reality (VR) technology, virtual displays
have become integral to how museums, galleries, and other tourist destinations
present their collections to the public. However, the current lack of immersion
in virtual reality displays limits the user's ability to experience and
appreciate its aesthetics. This paper presents a case study of a creative
approach taken by a tourist attraction venue in developing a physical network
system that allows visitors to enhance VR's aesthetic aspects based on
environmental parameters gathered by external sensors. Our system was
collaboratively developed through interviews and sessions with twelve
stakeholder groups interested in art and exhibitions. This paper demonstrates
how our technological advancements in interaction, immersion, and visual
attractiveness surpass those of earlier virtual display generations. Through
multimodal interaction, we aim to encourage innovation on the Web and create
more visually appealing and engaging virtual displays. It is hoped that the
greater online art community will gain fresh insight into how people interact
with virtual worlds as a result of this work.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 07:37:55 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Yang",
"Simin",
""
],
[
"Gao",
"Ze",
""
],
[
"Mogavi",
"Reza Hadi",
""
],
[
"Hui",
"Pan",
""
],
[
"Braud",
"Tristan",
""
]
] |
new_dataset
| 0.998988 |
2304.02291
|
Chang-Hwan Son
|
Jae-Hyeon Lee, Chang-Hwan Son
|
Trap-Based Pest Counting: Multiscale and Deformable Attention CenterNet
Integrating Internal LR and HR Joint Feature Learning
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Pest counting, which predicts the number of pests in the early stage, is very
important because it enables rapid pest control, reduces damage to crops, and
improves productivity. In recent years, light traps have been increasingly used
to lure and photograph pests for pest counting. However, pest images have a
wide range of variability in pest appearance owing to severe occlusion, wide
pose variation, and even scale variation. This makes pest counting more
challenging. To address these issues, this study proposes a new pest counting
model referred to as multiscale and deformable attention CenterNet
(Mada-CenterNet) for internal low-resolution (LR) and high-resolution (HR)
joint feature learning. Compared with the conventional CenterNet, the proposed
Mada-CenterNet adopts a multiscale heatmap generation approach in a two-step
fashion to predict LR and HR heatmaps adaptively learned to scale variations,
that is, changes in the number of pests. In addition, to overcome the pose and
occlusion problems, a new between-hourglass skip connection based on deformable
and multiscale attention is designed to ensure internal LR and HR joint feature
learning and incorporate geometric deformation, thereby resulting in an
improved pest counting accuracy. Through experiments, the proposed
Mada-CenterNet is verified to generate the HR heatmap more accurately and
improve pest counting accuracy owing to multiscale heatmap generation, joint
internal feature learning, and deformable and multiscale attention. In
addition, the proposed model is confirmed to be effective in overcoming severe
occlusions and variations in pose and scale. The experimental results show that
the proposed model outperforms state-of-the-art crowd counting and object
detection models.
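A minimal NumPy sketch of the kind of multiscale center-heatmap target a CenterNet-style counter regresses, as described in this abstract; the Gaussian width and strides are illustrative assumptions rather than the Mada-CenterNet settings.

```python
import numpy as np

def gaussian_heatmap(centers, out_h, out_w, stride, sigma=2.0):
    """Render a center heatmap at resolution (out_h, out_w): each annotated
    pest center contributes a 2D Gaussian peak (CenterNet-style target)."""
    heat = np.zeros((out_h, out_w), dtype=np.float32)
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    for cx, cy in centers:                     # centers in input-image pixels
        gx, gy = cx / stride, cy / stride      # map to heatmap grid
        g = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, g)             # keep the max over objects
    return heat

# Toy usage: a 512x512 trap image with three pest centers, rendered as a
# low-resolution (stride 8) and a high-resolution (stride 4) heatmap.
centers = [(100, 220), (260, 300), (410, 90)]
lr_heat = gaussian_heatmap(centers, 64, 64, stride=8)
hr_heat = gaussian_heatmap(centers, 128, 128, stride=4)
```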
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 08:23:17 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Lee",
"Jae-Hyeon",
""
],
[
"Son",
"Chang-Hwan",
""
]
] |
new_dataset
| 0.996853 |
2304.02313
|
Yaochen Zhu
|
Yaochen Zhu, Xiangqing Shen, Rui Xia
|
Personality-aware Human-centric Multimodal Reasoning: A New Task
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal reasoning, an area of artificial intelligence that aims to make
inferences from multimodal signals such as vision, language and speech, has
drawn more and more attention in recent years. People with different
personalities may respond differently to the same situation. However, such
individual personalities were ignored in the previous studies. In this work, we
introduce a new Personality-aware Human-centric Multimodal Reasoning
(Personality-aware HMR) task, and accordingly construct a new dataset based on
The Big Bang Theory television shows, to predict the behavior of a specific
person at a specific moment, given the multimodal information of its past and
future moments. The Myers-Briggs Type Indicator (MBTI) was annotated and
utilized in the task to represent individuals' personalities. We benchmark the
task by proposing three baseline methods: two adapted from related tasks and
one newly proposed for our task. The experimental results
demonstrate that personality can effectively improve the performance of
human-centric multimodal reasoning. To further solve the lack of personality
annotation in real-life scenes, we introduce an extended task called
Personality-predicted HMR, and propose the corresponding methods, to predict
the MBTI personality at first, and then use the predicted personality to help
multimodal reasoning. The experimental results show that our method can
accurately predict personality and achieves satisfactory multimodal reasoning
performance without relying on personality annotations.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 09:09:10 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Zhu",
"Yaochen",
""
],
[
"Shen",
"Xiangqing",
""
],
[
"Xia",
"Rui",
""
]
] |
new_dataset
| 0.991232 |
2304.02371
|
Bokeon Kwak
|
Shuhang Zhang, Bokeon Kwak, Dario Floreano
|
Design and manufacture of edible microfluidic logic gates
|
7 pages, 6 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Edible robotics is an emerging research field with potential use in
environmental, food, and medical scenarios. In this context, the design of
edible control circuits could increase the behavioral complexity of edible
robots and reduce their dependence on inedible components. Here we describe a
method to design and manufacture edible control circuits based on microfluidic
logic gates. We focus on the choice of materials and fabrication procedure to
produce edible logic gates based on recently available soft microfluidic logic.
We validate the proposed design with the production of a functional NOT gate
and suggest further research avenues for scaling up the method to more complex
circuits.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 11:22:04 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Zhang",
"Shuhang",
""
],
[
"Kwak",
"Bokeon",
""
],
[
"Floreano",
"Dario",
""
]
] |
new_dataset
| 0.999331 |
2304.02444
|
P\'eter Antal
|
P\'eter Antal, Tam\'as P\'eni, and Roland T\'oth
|
Payload Grasping and Transportation by a Quadrotor with a Hook-Based
Manipulator
|
Submitted to IEEE Robotics and Automation Letters (2023)
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The paper proposes an efficient trajectory planning and control approach for
payload grasping and transportation using an aerial manipulator. The proposed
manipulator structure consists of a hook attached to a quadrotor using a 1 DoF
revolute joint. To perform payload grasping, transportation, and release,
first, time-optimal reference trajectories are designed through specific
waypoints to ensure the fast and reliable execution of the tasks. Then, a
two-stage motion control approach is developed based on a robust geometric
controller for precise and reliable reference tracking and a linear--quadratic
payload regulator for rapid setpoint stabilization of the payload swing. The
proposed control architecture and design are evaluated in a high-fidelity
physical simulator with external disturbances and also in real flight
experiments.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 14:02:53 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Antal",
"Péter",
""
],
[
"Péni",
"Tamás",
""
],
[
"Tóth",
"Roland",
""
]
] |
new_dataset
| 0.999239 |
2304.02509
|
Emmanuel Abbe
|
Emmanuel Abbe and Colin Sandon
|
A proof that Reed-Muller codes achieve Shannon capacity on symmetric
channels
| null | null | null | null |
cs.IT cs.DM math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reed-Muller codes were introduced in 1954, with a simple explicit
construction based on polynomial evaluations, and have long been conjectured to
achieve Shannon capacity on symmetric channels. Major progress was made towards
a proof over the last decades: using combinatorial weight enumerator bounds, a
breakthrough on the erasure channel from sharp thresholds, hypercontractivity
arguments, and polarization theory. Another major progress recently established
that the bit error probability vanishes slowly below capacity. However, when
channels allow for errors, the results of Bourgain-Kalai do not apply for
converting a vanishing bit to a vanishing block error probability, nor do
the known weight enumerator bounds. The conjecture that RM codes achieve
Shannon capacity on symmetric channels, with high probability of recovering the
codewords, has thus remained open.
This paper closes the conjecture's proof. It uses a new recursive boosting
framework, which aggregates the decoding of codeword restrictions on
`subspace-sunflowers', handling their dependencies via an $L_p$ Boolean Fourier
analysis, and using a list-decoding argument with a weight enumerator bound
from Sberlo-Shpilka. The proof does not require a vanishing bit error
probability for the base case, but only a non-trivial probability, obtained
here for general symmetric codes. This gives in particular a shortened and
tightened argument for the vanishing bit error probability result of
Reeves-Pfister, and with prior works, it implies the strong wire-tap secrecy of
RM codes on pure-state classical-quantum channels.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 15:31:55 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Abbe",
"Emmanuel",
""
],
[
"Sandon",
"Colin",
""
]
] |
new_dataset
| 0.993004 |
2304.02510
|
Mahya Morid Ahmadi
|
Mahya Morid Ahmadi, Lilas Alrahis, Ozgur Sinanoglu and Muhammad
Shafique
|
FPGA-Patch: Mitigating Remote Side-Channel Attacks on FPGAs using
Dynamic Patch Generation
|
6 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose FPGA-Patch, the first-of-its-kind defense that leverages automated
program repair concepts to thwart power side-channel attacks on cloud FPGAs.
FPGA-Patch generates isofunctional variants of the target hardware by injecting
faults and finding transformations that eliminate failure. The obtained
variants display different hardware characteristics, ensuring a maximal
diversity in power traces once dynamically swapped at run-time. Yet, FPGA-Patch
forces the variants to have enough similarity, enabling bitstream compression
and minimizing dynamic exchange costs. Considering AES running on AMD/Xilinx
FPGA, FPGA-Patch increases the attacker's effort by three orders of magnitude,
while preserving the performance of AES with a minimal area overhead of 14.2%.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 15:35:35 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Ahmadi",
"Mahya Morid",
""
],
[
"Alrahis",
"Lilas",
""
],
[
"Sinanoglu",
"Ozgur",
""
],
[
"Shafique",
"Muhammad",
""
]
] |
new_dataset
| 0.999718 |
2304.02541
|
Vil\'em Zouhar
|
Vil\'em Zouhar, Kalvin Chang, Chenxuan Cui, Nathaniel Carlson,
Nathaniel Robinson, Mrinmaya Sachan, David Mortensen
|
PWESuite: Phonetic Word Embeddings and Tasks They Facilitate
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Word embeddings that map words into a fixed-dimensional vector space are the
backbone of modern NLP. Most word embedding methods encode semantic
information. However, phonetic information, which is important for some tasks,
is often overlooked. In this work, we develop several novel methods which
leverage articulatory features to build phonetically informed word embeddings,
and present a set of phonetic word embeddings to encourage their community
development, evaluation and use. While several methods for learning phonetic
word embeddings already exist, there is a lack of consistency in evaluating
their effectiveness. Thus, we also propose several ways to evaluate both
intrinsic aspects of phonetic word embeddings, such as word retrieval and
correlation with sound similarity, and extrinsic performance, such as rhyme
and cognate detection and sound analogies. We hope that our suite of tasks will
promote reproducibility and provide direction for future research on phonetic
word embeddings.
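A hedged sketch of one intrinsic evaluation of the kind mentioned in this abstract (correlating embedding distance with sound similarity); the cosine-distance metric, the toy vectors, and the gold scores are placeholders, not PWESuite's actual protocol.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def evaluate_sound_similarity(embeddings, pairs, gold_similarity):
    """Correlate embedding distances with gold sound-similarity scores for a
    list of word pairs (a strong negative rho means the embeddings track
    phonetic similarity)."""
    pred_dist = [cosine_distance(embeddings[w1], embeddings[w2]) for w1, w2 in pairs]
    rho, pval = spearmanr(pred_dist, gold_similarity)
    return rho, pval

# Toy usage with random vectors standing in for real phonetic embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=32) for w in ["cat", "bat", "dog", "fog"]}
pairs = [("cat", "bat"), ("dog", "fog"), ("cat", "dog")]
gold = [0.9, 0.85, 0.2]   # hypothetical sound-similarity judgements
print(evaluate_sound_similarity(emb, pairs, gold))
```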
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 16:03:42 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Zouhar",
"Vilém",
""
],
[
"Chang",
"Kalvin",
""
],
[
"Cui",
"Chenxuan",
""
],
[
"Carlson",
"Nathaniel",
""
],
[
"Robinson",
"Nathaniel",
""
],
[
"Sachan",
"Mrinmaya",
""
],
[
"Mortensen",
"David",
""
]
] |
new_dataset
| 0.986379 |
2304.02560
|
Kumara Kahatapitiya
|
Kumara Kahatapitiya, Anurag Arnab, Arsha Nagrani and Michael S. Ryoo
|
VicTR: Video-conditioned Text Representations for Activity Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-Language models have shown strong performance in the image-domain --
even in zero-shot settings, thanks to the availability of a large amount of
pretraining data (i.e., paired image-text examples). However for videos, such
paired data is not as abundant. Thus, video-text models are usually designed by
adapting pretrained image-text models to video-domain, instead of training from
scratch. All such recipes rely on augmenting visual embeddings with temporal
information (i.e., image -> video), often keeping text embeddings unchanged or
even discarding them. In this paper, we argue that such adapted video-text
models can benefit more by augmenting text rather than visual information. We
propose VicTR, which jointly optimizes text and video tokens, generating
'Video-conditioned Text' embeddings. Our method can further make use of
freely-available semantic information, in the form of visually-grounded
auxiliary text (e.g., object or scene information). We conduct experiments on
multiple benchmarks including supervised (Kinetics-400, Charades), zero-shot
and few-shot (HMDB-51, UCF-101) settings, showing competitive performance on
activity recognition based on video-text models.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 16:30:36 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Kahatapitiya",
"Kumara",
""
],
[
"Arnab",
"Anurag",
""
],
[
"Nagrani",
"Arsha",
""
],
[
"Ryoo",
"Michael S.",
""
]
] |
new_dataset
| 0.994315 |
2304.02569
|
Shengyu Huang
|
Liyuan Zhu, Yuru Jia, Shengyu Huang, Nicholas Meyer, Andreas Wieser,
Konrad Schindler, Jordan Aaron
|
DEFLOW: Self-supervised 3D Motion Estimation of Debris Flow
|
Photogrammetric Computer Vision Workshop, CVPRW 2023, camera ready
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing work on scene flow estimation focuses on autonomous driving and
mobile robotics, while automated solutions are lacking for motion in nature,
such as that exhibited by debris flows. We propose DEFLOW, a model for 3D
motion estimation of debris flows, together with a newly captured dataset. We
adopt a novel multi-level sensor fusion architecture and self-supervision to
incorporate the inductive biases of the scene. We further adopt a multi-frame
temporal processing module to enable flow speed estimation over time. Our model
achieves state-of-the-art optical flow and depth estimation on our dataset, and
fully automates the motion estimation for debris flows. The source code and
dataset are available at the project page.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 16:40:14 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Zhu",
"Liyuan",
""
],
[
"Jia",
"Yuru",
""
],
[
"Huang",
"Shengyu",
""
],
[
"Meyer",
"Nicholas",
""
],
[
"Wieser",
"Andreas",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Aaron",
"Jordan",
""
]
] |
new_dataset
| 0.998901 |
2304.02643
|
Alexander Kirillov
|
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe
Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg,
Wan-Yen Lo, Piotr Doll\'ar, Ross Girshick
|
Segment Anything
|
Project web-page: https://segment-anything.com
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the Segment Anything (SA) project: a new task, model, and
dataset for image segmentation. Using our efficient model in a data collection
loop, we built the largest segmentation dataset to date (by far), with over 1
billion masks on 11M licensed and privacy-respecting images. The model is
designed and trained to be promptable, so it can transfer zero-shot to new
image distributions and tasks. We evaluate its capabilities on numerous tasks
and find that its zero-shot performance is impressive -- often competitive with
or even superior to prior fully supervised results. We are releasing the
Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and
11M images at https://segment-anything.com to foster research into foundation
models for computer vision.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 17:59:46 GMT"
}
] | 2023-04-06T00:00:00 |
[
[
"Kirillov",
"Alexander",
""
],
[
"Mintun",
"Eric",
""
],
[
"Ravi",
"Nikhila",
""
],
[
"Mao",
"Hanzi",
""
],
[
"Rolland",
"Chloe",
""
],
[
"Gustafson",
"Laura",
""
],
[
"Xiao",
"Tete",
""
],
[
"Whitehead",
"Spencer",
""
],
[
"Berg",
"Alexander C.",
""
],
[
"Lo",
"Wan-Yen",
""
],
[
"Dollár",
"Piotr",
""
],
[
"Girshick",
"Ross",
""
]
] |
new_dataset
| 0.999485 |
2008.10326
|
Nikolaj Ignatieff Schwartzbach
|
Nikolaj Ignatieff Schwartzbach (Department of Computer Science, Aarhus
University)
|
An Incentive-Compatible Smart Contract for Decentralized Commerce
|
14 pages, 3 figures
|
ICBC 2021: 3rd IEEE International Conference on Blockchain and
Cryptocurrency
|
10.1109/ICBC51069.2021.9461077
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a smart contract that allows two mutually distrusting parties to
transact any non-digital good or service by deploying a smart contract on a
blockchain to act as escrow. The contract settles disputes by letting parties
wager that they can convince an arbiter that they were the honest party. We
analyse the contract as an extensive-form game and prove that the honest
strategy is secure in a strong game-theoretic sense if and only if the arbiter
is biased in favor of honest parties. By relaxing the security notion, we can
replace the arbiter by a random coin toss. Finally, we show how to generalize
the contract to multiparty transactions in a way that amortizes the transaction
fees.
|
[
{
"version": "v1",
"created": "Mon, 24 Aug 2020 11:21:35 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Schwartzbach",
"Nikolaj Ignatieff",
"",
"Department of Computer Science, Aarhus\n University"
]
] |
new_dataset
| 0.995334 |
2107.04393
|
Nikolaj Ignatieff Schwartzbach
|
Mathias Hall-Andersen and Nikolaj I. Schwartzbach
|
Game theory on the blockchain: a model for games with smart contracts
| null |
SAGT 2021: 14th International Symposium on Algorithmic Game Theory
|
10.1007/978-3-030-85947-3_11
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a model for games in which the players have shared access to a
blockchain that allows them to deploy smart contracts to act on their behalf.
This changes fundamental game-theoretic assumptions about rationality since a
contract can commit a player to act irrationally in specific subgames, making
credible otherwise non-credible threats. This is further complicated by
considering the interaction between multiple contracts which can reason about
each other. This changes the nature of the game in a nontrivial way as choosing
which contract to play can itself be considered a move in the game. Our model
generalizes known notions of equilibria, with a single contract being
equivalent to a Stackelberg equilibrium, and two contracts being equivalent to
a reverse Stackelberg equilibrium. We prove a number of bounds on the
complexity of computing SPE in such games with smart contracts. We show that
computing an SPE is $\textsf{PSPACE}$-hard in the general case. Specifically,
in games with $k$ contracts, we show that computing an SPE is
$\Sigma_k^\textsf{P}$-hard for games of imperfect information. We show that
computing an SPE remains $\textsf{PSPACE}$-hard in games of perfect information
if we allow for an unbounded number of contracts. We give an algorithm for
computing an SPE in two-contract games of perfect information that runs in time
$O(m\ell)$ where $m$ is the size of the game tree and $\ell$ is the number of
terminal nodes. Finally, we conjecture the problem to be $\textsf{NP}$-complete
for three contracts.
|
[
{
"version": "v1",
"created": "Fri, 9 Jul 2021 12:43:04 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Hall-Andersen",
"Mathias",
""
],
[
"Schwartzbach",
"Nikolaj I.",
""
]
] |
new_dataset
| 0.963469 |
2206.01121
|
Arash Vaezi
|
Arash Vaezi
|
The Loop of the Rings: A Fully Decentralized Cooperative System (The
Concept)
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a highly secure decentralized and distributed collaborative
environment denoted by $LoR$, which stands for "The Loop of the Rings". The
$LoR$ system provides a secure, user-friendly cooperative environment for an
arbitrarily large number of users who can offer particular services to each
other. The type of services determines the use of the $LoR$ environment. For
example, we might need to create a freelancer; by setting the related services,
we can come up with an instance of $LoR$, which provides those services as a
freelancer. The unique structure of the $LoR$ system makes it a secure and
reliable environment that stands on a (distributed) database as a cooperative
workplace or a service provider system. Such a service provider could be a
freelancer or an IoT management system. The 5G-related services can be
organized to be managed by the $LoR$ system, too.
The $LoR$ system deals with cooperation rather than transactions. The system
provides reliability and security for the collaborators in each group of
coworkers from the start to the end of a collaboration.
The benefit of the system comes from randomized techniques and its
well-structured design and policies. These techniques together maintain
consensus and trust for the groups of collaborator parties. The interesting
point regarding the $LoR$ system is that the greater the number of users
becomes, the more secure the system gets. Surprisingly, this will never affect
the performance of the system.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 16:01:00 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 17:35:48 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Jun 2022 10:01:41 GMT"
},
{
"version": "v4",
"created": "Tue, 9 Aug 2022 18:49:06 GMT"
},
{
"version": "v5",
"created": "Wed, 29 Mar 2023 14:46:14 GMT"
},
{
"version": "v6",
"created": "Thu, 30 Mar 2023 12:10:14 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Vaezi",
"Arash",
""
]
] |
new_dataset
| 0.968027 |
2209.02021
|
Daniel Bonilla Licea
|
Daniel Bonilla Licea, Mounir Ghogho, Martin Saska
|
When Robotics Meets Wireless Communications: An Introductory Tutorial
|
39 pages, 192 references
| null | null | null |
cs.RO cs.SY eess.SP eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The importance of ground Mobile Robots (MRs) and Unmanned Aerial Vehicles
(UAVs) within the research community, industry, and society is growing fast.
Many of these agents are nowadays equipped with communication systems that are,
in some cases, essential to successfully achieve certain tasks. In this
context, we have begun to witness the development of a new interdisciplinary
research field at the intersection of robotics and communications. This
research field has been boosted by the intention of integrating UAVs within the
5G and 6G communication networks. This research will undoubtedly lead to many
important applications in the near future. Nevertheless, one of the main
obstacles to the development of this research area is that most researchers
address these problems by oversimplifying either the robotics or the
communications aspect. This impedes reaching the full potential
of this new interdisciplinary research area. In this tutorial, we present some
of the modelling tools necessary to address problems involving both robotics
and communication from an interdisciplinary perspective. As an illustrative
example of such problems, we focus in this tutorial on the issue of
communication-aware trajectory planning.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 15:41:13 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 11:32:28 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Licea",
"Daniel Bonilla",
""
],
[
"Ghogho",
"Mounir",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.991314 |
2209.13953
|
Khloud Al Jallad
|
Khloud Al Jallad, Nada Ghneim
|
ArNLI: Arabic Natural Language Inference for Entailment and
Contradiction Detection
| null |
(2023) Computer Science, 24(2)
|
10.7494/csci.2023.24.2.4378
| null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Natural Language Inference (NLI) is a hot research topic in natural language
processing; contradiction detection between sentences is a special case of NLI.
This is considered a difficult NLP task which has a big influence when added as
a component in many NLP applications, such as Question Answering Systems, text
Summarization. Arabic is one of the most challenging low-resource languages for
detecting contradictions due to its rich lexical and semantic ambiguity. We have
created a dataset of more than 12k sentences, named ArNLI, which will be
publicly available. Moreover, we have applied a new model inspired by the
Stanford contradiction detection solutions proposed for English. We propose an
approach to detect contradictions between pairs of sentences in Arabic using a
contradiction vector combined with a language model vector as input to a
machine learning model. We analyzed the results of different traditional
machine learning classifiers and compared them on our created dataset (ArNLI)
and on automatic translations of both the PHEME and SICK English datasets. The
best results were achieved using a Random Forest classifier, with accuracies of
99%, 60%, and 75% on PHEME, SICK, and ArNLI, respectively.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2022 09:37:16 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Jallad",
"Khloud Al",
""
],
[
"Ghneim",
"Nada",
""
]
] |
new_dataset
| 0.996473 |
2211.12501
|
Chengjian Feng
|
Chengjian Feng, Zequn Jie, Yujie Zhong, Xiangxiang Chu and Lin Ma
|
AeDet: Azimuth-invariant Multi-view 3D Object Detection
|
CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent LSS-based multi-view 3D object detection has made tremendous progress
by processing the features in Bird-Eye-View (BEV) via a convolutional
detector. However, the typical convolution ignores the radial symmetry of the
BEV features and increases the difficulty of the detector optimization. To
preserve the inherent property of the BEV features and ease the optimization,
we propose an azimuth-equivariant convolution (AeConv) and an
azimuth-equivariant anchor. The sampling grid of AeConv is always in the radial
direction, thus it can learn azimuth-invariant BEV features. The proposed
anchor enables the detection head to learn predicting azimuth-irrelevant
targets. In addition, we introduce a camera-decoupled virtual depth to unify
the depth prediction for the images with different camera intrinsic parameters.
The resultant detector is dubbed Azimuth-equivariant Detector (AeDet).
Extensive experiments are conducted on nuScenes, and AeDet achieves a 62.0%
NDS, surpassing the recent multi-view 3D object detectors such as PETRv2 and
BEVDepth by a large margin. Project page: https://fcjian.github.io/aedet.
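A rough NumPy sketch of the core idea of an azimuth-aligned sampling grid as described in this abstract: rotating a canonical 3x3 convolution grid at each BEV location so the grid follows the radial direction from the ego position. The function name, the 3x3 grid, and the rotation convention are assumptions for illustration, not the AeDet code.

```python
import numpy as np

def radial_sampling_offsets(bev_h, bev_w):
    """For every BEV cell, rotate a canonical 3x3 sampling grid so that it is
    aligned with the radial direction from the BEV center (the ego position).
    Returns per-cell (dx, dy) offsets of shape (bev_h, bev_w, 9, 2)."""
    base = np.array([[dx, dy] for dy in (-1, 0, 1) for dx in (-1, 0, 1)],
                    dtype=np.float32)                       # canonical offsets
    cy, cx = (bev_h - 1) / 2.0, (bev_w - 1) / 2.0
    ys, xs = np.mgrid[0:bev_h, 0:bev_w].astype(np.float32)
    azimuth = np.arctan2(ys - cy, xs - cx)                  # per-cell azimuth
    cos_a, sin_a = np.cos(azimuth), np.sin(azimuth)
    rot = np.stack([np.stack([cos_a, -sin_a], axis=-1),
                    np.stack([sin_a,  cos_a], axis=-1)], axis=-2)  # (H, W, 2, 2)
    return np.einsum("hwij,kj->hwki", rot, base)            # (H, W, 9, 2)

offsets = radial_sampling_offsets(128, 128)  # rotated grids for a 128x128 BEV
```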
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 18:59:52 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 13:03:02 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Apr 2023 09:34:04 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Feng",
"Chengjian",
""
],
[
"Jie",
"Zequn",
""
],
[
"Zhong",
"Yujie",
""
],
[
"Chu",
"Xiangxiang",
""
],
[
"Ma",
"Lin",
""
]
] |
new_dataset
| 0.96146 |
2212.05935
|
Rub\`en P\'erez Tito
|
Rub\`en Tito, Dimosthenis Karatzas and Ernest Valveny
|
Hierarchical multimodal transformers for Multi-Page DocVQA
| null | null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Document Visual Question Answering (DocVQA) refers to the task of answering
questions from document images. Existing work on DocVQA only considers
single-page documents. However, in real scenarios documents are mostly composed
of multiple pages that should be processed altogether. In this work we extend
DocVQA to the multi-page scenario. For that, we first create a new dataset,
MP-DocVQA, where questions are posed over multi-page documents instead of
single pages. Second, we propose a new hierarchical method, Hi-VT5, based on
the T5 architecture, that overcomes the limitations of current methods to
process long multi-page documents. The proposed method is based on a
hierarchical transformer architecture where the encoder summarizes the most
relevant information of every page and then, the decoder takes this summarized
information to generate the final answer. Through extensive experimentation, we
demonstrate that our method is able, in a single stage, to answer the questions
and provide the page that contains the relevant information to find the answer,
which can be used as a kind of explainability measure.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 10:09:49 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Apr 2023 10:00:35 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Tito",
"Rubèn",
""
],
[
"Karatzas",
"Dimosthenis",
""
],
[
"Valveny",
"Ernest",
""
]
] |
new_dataset
| 0.966757 |
2302.12656
|
Fatemeh Mohammadi Amin
|
Charith Munasinghe, Fatemeh Mohammadi Amin, Davide Scaramuzza, Hans
Wernher van de Venn
|
COVERED, CollabOratiVE Robot Environment Dataset for 3D Semantic
segmentation
| null |
IEEE Conference on Emerging Technologies and Factory Automation
(ETFA 2022)
|
10.1109/ETFA52439.2022.9921525
| null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Safe human-robot collaboration (HRC) has recently gained a lot of interest
with the emerging Industry 5.0 paradigm. Conventional robots are being replaced
with more intelligent and flexible collaborative robots (cobots). Safe and
efficient collaboration between cobots and humans largely relies on the cobot's
comprehensive semantic understanding of the dynamic surrounding of industrial
environments. Despite the importance of semantic understanding for such
applications, 3D semantic segmentation of collaborative robot workspaces lacks
sufficient research and dedicated datasets. The performance limitation caused
by insufficient datasets is called the 'data hunger' problem. To overcome this
current limitation, this work develops a new dataset specifically designed for
this use case, named "COVERED", which includes point-wise annotated point
clouds of a robotic cell. Lastly, we also provide a benchmark of current
state-of-the-art (SOTA) algorithm performance on the dataset and demonstrate a
real-time semantic segmentation of a collaborative robot workspace using a
multi-LiDAR system. The promising results from using the trained Deep Networks
on a real-time, dynamically changing situation show that we are on the right
track. Our perception pipeline achieves 20Hz throughput with a prediction point
accuracy of $>$96\% and $>$92\% mean intersection over union (mIOU) while
maintaining an 8Hz throughput.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 14:24:58 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 09:06:52 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Munasinghe",
"Charith",
""
],
[
"Amin",
"Fatemeh Mohammadi",
""
],
[
"Scaramuzza",
"Davide",
""
],
[
"van de Venn",
"Hans Wernher",
""
]
] |
new_dataset
| 0.995703 |
2303.03373
|
Yixin Chen
|
Yixin Chen, Sai Kumar Dwivedi, Michael J. Black, Dimitrios Tzionas
|
Detecting Human-Object Contact in Images
|
Accepted at CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans constantly contact objects to move and perform tasks. Thus, detecting
human-object contact is important for building human-centered artificial
intelligence. However, there exists no robust method to detect contact between
the body and the scene from an image, and there exists no dataset to learn such
a detector. We fill this gap with HOT ("Human-Object conTact"), a new dataset
of human-object contacts for images. To build HOT, we use two data sources: (1)
We use the PROX dataset of 3D human meshes moving in 3D scenes, and
automatically annotate 2D image areas for contact via 3D mesh proximity and
projection. (2) We use the V-COCO, HAKE and Watch-n-Patch datasets, and ask
trained annotators to draw polygons for the 2D image areas where contact takes
place. We also annotate the involved body part of the human body. We use our
HOT dataset to train a new contact detector, which takes a single color image
as input, and outputs 2D contact heatmaps as well as the body-part labels that
are in contact. This is a new and challenging task that extends current
foot-ground or hand-object contact detectors to the full generality of the
whole body. The detector uses a part-attention branch to guide contact
estimation through the context of the surrounding body parts and scene. We
evaluate our detector extensively, and quantitative results show that our model
outperforms baselines, and that all components contribute to better
performance. Results on images from an online repository show reasonable
detections and generalizability.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 18:56:26 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 13:48:30 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Chen",
"Yixin",
""
],
[
"Dwivedi",
"Sai Kumar",
""
],
[
"Black",
"Michael J.",
""
],
[
"Tzionas",
"Dimitrios",
""
]
] |
new_dataset
| 0.999675 |
2303.07700
|
Yijin Li
|
Junjie Ni, Yijin Li, Zhaoyang Huang, Hongsheng Li, Hujun Bao, Zhaopeng
Cui, Guofeng Zhang
|
PATS: Patch Area Transportation with Subdivision for Local Feature
Matching
|
Accepted to CVPR 2023. Project page: https://zju3dv.github.io/pats
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Local feature matching aims at establishing sparse correspondences between a
pair of images. Recently, detector-free methods present generally better
performance but are not satisfactory in image pairs with large scale
differences. In this paper, we propose Patch Area Transportation with
Subdivision (PATS) to tackle this issue. Instead of building an expensive image
pyramid, we start by splitting the original image pair into equal-sized patches
and gradually resizing and subdividing them into smaller patches with the same
scale. However, estimating scale differences between these patches is
non-trivial since the scale differences are determined by both relative camera
poses and scene structures, and thus spatially varying over image pairs.
Moreover, it is hard to obtain the ground truth for real scenes. To this end,
we propose patch area transportation, which enables learning scale differences
in a self-supervised manner. In contrast to bipartite graph matching, which
only handles one-to-one matching, our patch area transportation can deal with
many-to-many relationships. PATS improves both matching accuracy and coverage,
and shows superior performance in downstream tasks, such as relative pose
estimation, visual localization, and optical flow estimation. The source code
is available at \url{https://zju3dv.github.io/pats/}.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 08:28:36 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 06:31:14 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Ni",
"Junjie",
""
],
[
"Li",
"Yijin",
""
],
[
"Huang",
"Zhaoyang",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Bao",
"Hujun",
""
],
[
"Cui",
"Zhaopeng",
""
],
[
"Zhang",
"Guofeng",
""
]
] |
new_dataset
| 0.998434 |
2303.12582
|
Chris C. Emezue
|
Chris Chinenye Emezue, Sanchit Gandhi, Lewis Tunstall, Abubakar Abid,
Josh Meyer, Quentin Lhoest, Pete Allen, Patrick Von Platen, Douwe Kiela,
Yacine Jernite, Julien Chaumond, Merve Noyan, Omar Sanseviero
|
AfroDigits: A Community-Driven Spoken Digit Dataset for African
Languages
|
Accepted to the AfricaNLP Workshop at ICLR 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The advancement of speech technologies has been remarkable, yet its
integration with African languages remains limited due to the scarcity of
African speech corpora. To address this issue, we present AfroDigits, a
minimalist, community-driven dataset of spoken digits for African languages,
currently covering 38 African languages. As a demonstration of the practical
applications of AfroDigits, we conduct audio digit classification experiments
on six African languages [Igbo (ibo), Yoruba (yor), Rundi (run), Oshiwambo
(kua), Shona (sna), and Oromo (gax)] using the Wav2Vec2.0-Large and XLS-R
models. Our experiments reveal a useful insight into the effect of mixing African
speech corpora during finetuning. AfroDigits is the first published audio digit
dataset for African languages and we believe it will, among other things, pave
the way for Afro-centric speech applications such as the recognition of
telephone numbers, and street numbers. We release the dataset and platform
publicly at https://huggingface.co/datasets/chrisjay/crowd-speech-africa and
https://huggingface.co/spaces/chrisjay/afro-speech respectively.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 14:09:20 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 03:32:24 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Emezue",
"Chris Chinenye",
""
],
[
"Gandhi",
"Sanchit",
""
],
[
"Tunstall",
"Lewis",
""
],
[
"Abid",
"Abubakar",
""
],
[
"Meyer",
"Josh",
""
],
[
"Lhoest",
"Quentin",
""
],
[
"Allen",
"Pete",
""
],
[
"Von Platen",
"Patrick",
""
],
[
"Kiela",
"Douwe",
""
],
[
"Jernite",
"Yacine",
""
],
[
"Chaumond",
"Julien",
""
],
[
"Noyan",
"Merve",
""
],
[
"Sanseviero",
"Omar",
""
]
] |
new_dataset
| 0.999602 |
2304.00464
|
Haoran Geng
|
Weikang Wan, Haoran Geng, Yun Liu, Zikang Shan, Yaodong Yang, Li Yi,
He Wang
|
UniDexGrasp++: Improving Dexterous Grasping Policy Learning via
Geometry-aware Curriculum and Iterative Generalist-Specialist Learning
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel, object-agnostic method for learning a universal policy
for dexterous object grasping from realistic point cloud observations and
proprioceptive information under a table-top setting, namely UniDexGrasp++. To
address the challenge of learning the vision-based policy across thousands of
object instances, we propose Geometry-aware Curriculum Learning (GeoCurriculum)
and Geometry-aware iterative Generalist-Specialist Learning (GiGSL) which
leverage the geometry feature of the task and significantly improve the
generalizability. With our proposed techniques, our final policy shows
universal dexterous grasping on thousands of object instances with 85.4% and
78.2% success rate on the train set and test set, outperforming the
state-of-the-art baseline UniDexGrasp by 11.7% and 11.3%, respectively.
|
[
{
"version": "v1",
"created": "Sun, 2 Apr 2023 06:32:19 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 03:05:50 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Wan",
"Weikang",
""
],
[
"Geng",
"Haoran",
""
],
[
"Liu",
"Yun",
""
],
[
"Shan",
"Zikang",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Yi",
"Li",
""
],
[
"Wang",
"He",
""
]
] |
new_dataset
| 0.964907 |
2304.00553
|
Yong-Lu Li
|
Yong-Lu Li, Xiaoqian Wu, Xinpeng Liu, Yiming Dou, Yikun Ji, Junyi
Zhang, Yixing Li, Jingru Tan, Xudong Lu, Cewu Lu
|
From Isolated Islands to Pangea: Unifying Semantic Space for Human
Action Understanding
|
Project Webpage: https://mvig-rhos.com/pangea
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Action understanding matters and attracts attention. It can be formed as the
mapping from the action physical space to the semantic space. Typically,
researchers built action datasets according to idiosyncratic choices to define
classes and push the envelope of benchmarks respectively. Thus, datasets are
incompatible with each other like "Isolated Islands" due to semantic gaps and
various class granularities, e.g., do housework in dataset A and wash plate in
dataset B. We argue that a more principled semantic space is an urgent need to
concentrate the community efforts and enable us to use all datasets together to
pursue generalizable action learning. To this end, we design a Poincare action
semantic space given a verb taxonomy hierarchy and covering massive actions. By
aligning the classes of previous datasets to our semantic space, we gather
(image/video/skeleton/MoCap) datasets into a unified database in a unified
label system, i.e., bridging "isolated islands" into a "Pangea". Accordingly,
we propose a bidirectional mapping model between physical and semantic space to
fully use Pangea. In extensive experiments, our system shows significant
superiority, especially in transfer learning. Code and data will be made
publicly available.
|
[
{
"version": "v1",
"created": "Sun, 2 Apr 2023 15:04:43 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 09:04:27 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Li",
"Yong-Lu",
""
],
[
"Wu",
"Xiaoqian",
""
],
[
"Liu",
"Xinpeng",
""
],
[
"Dou",
"Yiming",
""
],
[
"Ji",
"Yikun",
""
],
[
"Zhang",
"Junyi",
""
],
[
"Li",
"Yixing",
""
],
[
"Tan",
"Jingru",
""
],
[
"Lu",
"Xudong",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.999235 |
2304.00782
|
Youjia Zhang
|
Youjia Zhang, Teng Xu, Junqing Yu, Yuteng Ye, Junle Wang, Yanqing
Jing, Jingyi Yu, Wei Yang
|
NeMF: Inverse Volume Rendering with Neural Microflake Field
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recovering the physical attributes of an object's appearance from its images
captured under an unknown illumination is challenging yet essential for
photo-realistic rendering. Recent approaches adopt the emerging implicit scene
representations and have shown impressive results. However, they unanimously
adopt a surface-based representation, and hence cannot handle scenes with
very complex geometry, translucent objects, etc. In this paper, we propose to
conduct inverse volume rendering, in contrast to surface-based, by representing
a scene using microflake volume, which assumes the space is filled with
infinite small flakes and light reflects or scatters at each spatial location
according to microflake distributions. We further adopt the coordinate networks
to implicitly encode the microflake volume, and develop a differentiable
microflake volume renderer to train the network in an end-to-end way in
principle. Our NeMF enables effective recovery of appearance attributes for
highly complex geometry and scattering objects, enables high-quality relighting
and material editing, and especially simulates volume rendering effects, such as
scattering, which are infeasible for surface-based approaches.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 08:12:18 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 01:13:03 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Zhang",
"Youjia",
""
],
[
"Xu",
"Teng",
""
],
[
"Yu",
"Junqing",
""
],
[
"Ye",
"Yuteng",
""
],
[
"Wang",
"Junle",
""
],
[
"Jing",
"Yanqing",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Yang",
"Wei",
""
]
] |
new_dataset
| 0.990022 |
2304.01214
|
Krish Desai Mr
|
Krish Desai
|
Parkinsons Disease Detection via Resting-State Electroencephalography
Using Signal Processing and Machine Learning Techniques
|
9 pages, 8 figures
| null | null | null |
cs.CV q-bio.NC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Parkinsons Disease (PD) is a neurodegenerative disorder resulting in motor
deficits due to advancing degeneration of dopaminergic neurons. PD patients
report experiencing tremor, rigidity, visual impairment, bradykinesia, and
several cognitive deficits. Although Electroencephalography (EEG) indicates
abnormalities in PD patients, one major challenge is the lack of a consistent,
accurate, and systemic biomarker for PD in order to closely monitor the disease
with therapeutic treatments and medication. In this study, we collected
Electroencephalographic data from 15 PD patients and 16 Healthy Controls (HC).
We first preprocessed every EEG signal using several techniques and extracted
relevant features using many feature extraction algorithms. Afterwards, we
applied several machine learning algorithms to classify PD versus HC. We found
the most significant metrics to be achieved by the Random Forest ensemble
learning approach, with an accuracy, precision, recall, F1 score, and AUC of
97.5%, 100%, 95%, 0.967, and 0.975, respectively. The results of this study
show promise for exposing PD abnormalities using EEG during clinical diagnosis,
and automating this process using signal processing techniques and ML
algorithms to evaluate the difference between healthy individuals and PD
patients.
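A minimal scikit-learn sketch of the classification stage this abstract describes (extracted features fed to a Random Forest, evaluated with accuracy, precision, recall, F1, and AUC); the feature matrix, split, and hyperparameters are placeholders, not the study's data or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Placeholder feature matrix: one row per subject, columns = EEG features
# (e.g., band powers, spectral entropy). Labels: 1 = PD, 0 = healthy control.
rng = np.random.default_rng(42)
X = rng.normal(size=(31, 40))          # 15 PD + 16 HC subjects, as above
y = np.array([1] * 15 + [0] * 16)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]
print("accuracy ", accuracy_score(y_te, pred))
print("precision", precision_score(y_te, pred))
print("recall   ", recall_score(y_te, pred))
print("F1       ", f1_score(y_te, pred))
print("AUC      ", roc_auc_score(y_te, prob))
```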
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 06:03:05 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Desai",
"Krish",
""
]
] |
new_dataset
| 0.99148 |
2304.01244
|
Thomas Kunz
|
Li Li, Jean-Pierre S. El Rami, Adrian Taylor, James Hailing Rao, and
Thomas Kunz
|
Unified Emulation-Simulation Training Environment for Autonomous Cyber
Agents
|
To be published in the Proceedings of the 5th International
Conference on Machine Learning for Networking (MLN'2022)
| null | null | null |
cs.LG cs.AI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous cyber agents may be developed by applying reinforcement and deep
reinforcement learning (RL/DRL), where agents are trained in a representative
environment. The training environment must simulate with high-fidelity the
network Cyber Operations (CyOp) that the agent aims to explore. Given the
complexity of net-work CyOps, a good simulator is difficult to achieve. This
work presents a systematic solution to automatically generate a high-fidelity
simulator in the Cyber Gym for Intelligent Learning (CyGIL). Through
representation learning and continuous learning, CyGIL provides a unified CyOp
training environment where an emulated CyGIL-E automatically generates a
simulated CyGIL-S. The simulator generation is integrated with the agent
training process to further reduce the required agent training time. The agent
trained in CyGIL-S is transferable directly to CyGIL-E, showing full
transferability to the emulated "real" network. Experimental results are
presented to demonstrate the CyGIL training performance. Enabling offline RL,
the CyGIL solution presents a promising direction towards sim-to-real for
leveraging RL agents in real-world cyber networks.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 15:00:32 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Li",
"Li",
""
],
[
"Rami",
"Jean-Pierre S. El",
""
],
[
"Taylor",
"Adrian",
""
],
[
"Rao",
"James Hailing",
""
],
[
"Kunz",
"Thomas",
""
]
] |
new_dataset
| 0.983136 |
2304.01278
|
Shaull Almagor
|
Shaull Almagor and Omer Yizhaq
|
Jumping Automata over Infinite Words
| null | null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Jumping automata are finite automata that read their input in a
non-consecutive manner, disregarding the order of the letters in the word. We
introduce and study jumping automata over infinite words. Unlike the setting of
finite words, which has been well studied, for infinite words it is not clear
how words can be reordered. To this end, we consider three semantics: automata
that read the infinite word in some order so that no letter is overlooked,
automata that can permute the word in windows of a given size k, and automata
that can permute the word in windows of an existentially-quantified bound. We
study expressiveness, closure properties and algorithmic properties of these
models.
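A hedged Python sketch of the finite-word baseline that the infinite-word semantics in this abstract generalize: a jumping automaton accepts a finite word iff some reordering of its letters is accepted by the underlying NFA, which can be checked by searching over (state, remaining-letter multiset) pairs. The infinite-word and window semantics themselves are not implemented here.

```python
from collections import Counter

def jumping_accepts(delta, start, finals, word):
    """Check whether a jumping automaton accepts a *finite* word, i.e. whether
    some permutation of `word` is accepted by the underlying NFA.
    delta: dict mapping (state, letter) -> set of successor states."""
    total = Counter(word)
    letters = sorted(total)

    def search(state, remaining, seen):
        if sum(remaining.values()) == 0:
            return state in finals
        key = (state, tuple(remaining[l] for l in letters))
        if key in seen:                 # configuration already explored
            return False
        seen.add(key)
        for letter in letters:
            if remaining[letter] == 0:
                continue
            remaining[letter] -= 1      # consume one occurrence of `letter`
            for nxt in delta.get((state, letter), ()):
                if search(nxt, remaining, seen):
                    remaining[letter] += 1
                    return True
            remaining[letter] += 1      # backtrack
        return False

    return search(start, Counter(total), set())

# Toy NFA accepting a*b; the jumping semantics accept any word with exactly
# one 'b' that can be moved to the end.
delta = {("q0", "a"): {"q0"}, ("q0", "b"): {"q1"}}
print(jumping_accepts(delta, "q0", {"q1"}, "aba"))  # True: reorder to "aab"
print(jumping_accepts(delta, "q0", {"q1"}, "bb"))   # False
```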
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 18:12:57 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Almagor",
"Shaull",
""
],
[
"Yizhaq",
"Omer",
""
]
] |
new_dataset
| 0.987244 |
2304.01289
|
Xianpeng Liu
|
Xianpeng Liu, Ce Zheng, Kelvin Cheng, Nan Xue, Guo-Jun Qi, Tianfu Wu
|
Monocular 3D Object Detection with Bounding Box Denoising in 3D by
Perceiver
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The main challenge of monocular 3D object detection is the accurate
localization of the 3D center. Motivated by a new and strong observation that this
challenge can be remedied by a 3D-space local-grid search scheme in an ideal
case, we propose a stage-wise approach, which combines the information flow
from 2D-to-3D (3D bounding box proposal generation with a single 2D image) and
3D-to-2D (proposal verification by denoising with 3D-to-2D contexts) in a
top-down manner. Specifically, we first obtain initial proposals from
off-the-shelf backbone monocular 3D detectors. Then, we generate a 3D anchor
space by local-grid sampling from the initial proposals. Finally, we perform 3D
bounding box denoising at the 3D-to-2D proposal verification stage. To
effectively learn discriminative features for denoising highly overlapped
proposals, this paper presents a method of using the Perceiver I/O model to
fuse the 3D-to-2D geometric information and the 2D appearance information. With
the encoded latent representation of a proposal, the verification head is
implemented with a self-attention module. Our method, named MonoXiver, is
generic and can be easily adapted to any backbone monocular 3D detectors.
Experimental results on the well-established KITTI dataset and the challenging
large-scale Waymo dataset show that MonoXiver consistently achieves improvement
with limited computation overhead.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 18:24:46 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Liu",
"Xianpeng",
""
],
[
"Zheng",
"Ce",
""
],
[
"Cheng",
"Kelvin",
""
],
[
"Xue",
"Nan",
""
],
[
"Qi",
"Guo-Jun",
""
],
[
"Wu",
"Tianfu",
""
]
] |
new_dataset
| 0.992387 |
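The MonoXiver entry above builds a 3D anchor space by local-grid sampling around each initial proposal before 3D-to-2D verification. The snippet below is a hedged sketch of that sampling step only; the grid offsets and the ground-plane assumption are illustrative choices, not the paper's exact values.

```python
import numpy as np

def local_grid_anchors(center_xyz, deltas=(-0.8, -0.4, 0.0, 0.4, 0.8)):
    """Perturb an initial 3D box centre on a local grid in the ground plane (x, z).

    Returns an (len(deltas)**2, 3) array of candidate centres; the vertical
    coordinate y is kept fixed, mimicking a ground-plane search.
    """
    cx, cy, cz = center_xyz
    dx, dz = np.meshgrid(deltas, deltas)                 # grid offsets in metres
    return np.stack([cx + dx.ravel(),
                     np.full(dx.size, cy),
                     cz + dz.ravel()], axis=1)

# One initial proposal centre from an off-the-shelf monocular detector (made up):
proposal_center = np.array([2.1, 1.5, 14.3])
candidates = local_grid_anchors(proposal_center)
print(candidates.shape)   # (25, 3) candidate centres for the 3D-to-2D verification stage
```

Each candidate would then be scored by the Perceiver-based verification head described in the abstract.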
2304.01293
|
Mark Rucker
|
Emma R. Toner, Mark Rucker, Zhiyuan Wang, Maria A. Larrazabal, Lihua
Cai, Debajyoti Datta, Elizabeth Thompson, Haroon Lone, Mehdi Boukhechba,
Bethany A. Teachman, and Laura E. Barnes
|
Wearable Sensor-based Multimodal Physiological Responses of Socially
Anxious Individuals across Social Contexts
| null | null | null | null |
cs.CY eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Correctly identifying an individual's social context from passively worn
sensors holds promise for delivering just-in-time adaptive interventions
(JITAIs) to treat social anxiety disorder. In this study, we present results
using passively collected data from a within-subject experiment that assessed
physiological response across different social contexts (i.e., alone vs. with
others), social phases (i.e., pre- and post-interaction vs. during an
interaction), social interaction sizes (i.e., dyadic vs. group interactions),
and levels of social threat (i.e., implicit vs. explicit social evaluation).
Participants in the study ($N=46$) reported moderate to severe social anxiety
symptoms as assessed by the Social Interaction Anxiety Scale ($\geq$34 out of
80). Univariate paired difference tests, multivariate random forest models, and
follow-up cluster analyses were used to explore physiological response patterns
across different social and non-social contexts. Our results suggest that
social context is more reliably distinguishable than social phase, group size,
or level of social threat, but that there is considerable variability in
physiological response patterns even among these distinguishable contexts.
Implications for real-world context detection and deployment of JITAIs are
discussed.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 18:34:54 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Toner",
"Emma R.",
""
],
[
"Rucker",
"Mark",
""
],
[
"Wang",
"Zhiyuan",
""
],
[
"Larrazabal",
"Maria A.",
""
],
[
"Cai",
"Lihua",
""
],
[
"Datta",
"Debajyoti",
""
],
[
"Thompson",
"Elizabeth",
""
],
[
"Lone",
"Haroon",
""
],
[
"Boukhechba",
"Mehdi",
""
],
[
"Teachman",
"Bethany A.",
""
],
[
"Barnes",
"Laura E.",
""
]
] |
new_dataset
| 0.971903 |
2304.01305
|
Jun Jet Tai Jet
|
Jun Jet Tai, Jim Wong, Mauro Innocente, Nadjim Horri, James Brusey,
Swee King Phang
|
PyFlyt -- UAV Simulation Environments for Reinforcement Learning
Research
|
Under Review for Transactions on Robotics
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Unmanned aerial vehicles (UAVs) have numerous applications, but their
efficient and optimal flight can be a challenge. Reinforcement Learning (RL)
has emerged as a promising approach to address this challenge, yet there is no
standardised library for testing and benchmarking RL algorithms on UAVs. In
this paper, we introduce PyFlyt, a platform built on the Bullet physics engine
with native Gymnasium API support. PyFlyt provides modular implementations of
simple components, such as motors and lifting surfaces, allowing for the
implementation of UAVs of arbitrary configurations. Additionally, PyFlyt
includes various task definitions and multiple reward function settings for
each vehicle type. We demonstrate the effectiveness of PyFlyt by training
various RL agents for two UAV models: quadrotor and fixed-wing. Our findings
highlight the effectiveness of RL in UAV control and planning, and further show
that it is possible to train agents in sparse reward settings for UAVs. PyFlyt
fills a gap in existing literature by providing a flexible and standardised
platform for testing RL algorithms on UAVs. We believe that this will inspire
more standardised research in this direction.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 19:12:20 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Tai",
"Jun Jet",
""
],
[
"Wong",
"Jim",
""
],
[
"Innocente",
"Mauro",
""
],
[
"Horri",
"Nadjim",
""
],
[
"Brusey",
"James",
""
],
[
"Phang",
"Swee King",
""
]
] |
new_dataset
| 0.991051 |
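The PyFlyt entry above advertises native Gymnasium API support. The sketch below shows the usual way such an environment is driven; the registration import and the task ID "PyFlyt/QuadX-Hover-v0" are assumptions inferred from the abstract, and the released package may name them differently.

```python
import gymnasium as gym
import PyFlyt.gym_envs  # noqa: F401  -- assumed to register the PyFlyt task IDs

env = gym.make("PyFlyt/QuadX-Hover-v0")   # hypothetical quadrotor hover task
obs, info = env.reset(seed=0)

for _ in range(500):
    action = env.action_space.sample()    # stand-in for a trained RL policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```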
2304.01322
|
Sina Ahmadi
|
Sina Ahmadi and Milind Agarwal and Antonios Anastasopoulos
|
PALI: A Language Identification Benchmark for Perso-Arabic Scripts
|
13 pages - accepted at VarDial at EACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The Perso-Arabic scripts are a family of scripts that are widely adopted and
used by various linguistic communities around the globe. Identifying various
languages using such scripts is crucial to language technologies and
challenging in low-resource setups. As such, this paper sheds light on the
challenges of detecting languages using Perso-Arabic scripts, especially in
bilingual communities where ``unconventional'' writing is practiced. To address
this, we use a set of supervised techniques to classify sentences into their
languages. Building on these, we also propose a hierarchical model that targets
clusters of languages that are more often confused by the classifiers. Our
experiment results indicate the effectiveness of our solutions.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 19:40:14 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Ahmadi",
"Sina",
""
],
[
"Agarwal",
"Milind",
""
],
[
"Anastasopoulos",
"Antonios",
""
]
] |
new_dataset
| 0.999822 |
2304.01396
|
Shubham Suresh Patil
|
Patil Shubham Suresh, Gautham Narayan Narasimhan
|
Lidar based 3D Tracking and State Estimation of Dynamic Objects
|
6 pages, 12 figures, Carnegie Mellon University work
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State estimation of oncoming vehicles: earlier research has been based on
determining states such as position, velocity, orientation, and angular velocity
of the ego-vehicle. Our approach focuses on estimating the states of non-ego
vehicles, which is crucial for motion planning and decision-making. Dynamic
Scene Based Localization: our project works on dynamic scenes with moving
ego (self) and non-ego vehicles, whereas previous methods focused on static
environments.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 22:13:58 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Suresh",
"Patil Shubham",
""
],
[
"Narasimhan",
"Gautham Narayan",
""
]
] |
new_dataset
| 0.994958 |
2304.01424
|
Swapnil Mane
|
Swapnil Mane and Vaibhav Khatavkar
|
Polarity based Sarcasm Detection using Semigraph
|
11 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sarcasm is an advanced linguistic expression often found on various online
platforms. Sarcasm detection is challenging in natural language processing
tasks that affect sentiment analysis. This article presents an inventive
semigraph-based method, covering the semigraph construction and sarcasm detection
processes. A variation of the semigraph is suggested based on the pattern-relatedness
of the text document. The proposed method obtains the sarcastic and
non-sarcastic polarity scores of a document using a semigraph. The sarcastic
polarity score represents the possibility that a document will become
sarcastic. Sarcasm is detected based on the polarity scoring model. The
performance of the proposed model enhances the existing prior art approach to
sarcasm detection. In the Amazon product review, the model achieved the
accuracy, recall, and f-measure of 0.87, 0.79, and 0.83, respectively.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 00:13:55 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Mane",
"Swapnil",
""
],
[
"Khatavkar",
"Vaibhav",
""
]
] |
new_dataset
| 0.99564 |
2304.01503
|
Jonathan Freedman
|
Jonathan D. Freedman and Ian A. Nappier
|
GPT-4 to GPT-3.5: 'Hold My Scalpel' -- A Look at the Competency of
OpenAI's GPT on the Plastic Surgery In-Service Training Exam
|
30 pages, 1 table, 8 figures, Appendix
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Plastic Surgery In-Service Training Exam (PSITE) is an important
indicator of resident proficiency and serves as a useful benchmark for
evaluating OpenAI's GPT. Unlike many of the simulated tests or practice
questions shown in the GPT-4 Technical Paper, the multiple-choice questions
evaluated here are authentic PSITE questions. These questions offer realistic
clinical vignettes that a plastic surgeon commonly encounters in practice and
scores highly correlate with passing the written boards required to become a
Board Certified Plastic Surgeon. Our evaluation shows dramatic improvement of
GPT-4 (without vision) over GPT-3.5 with both the 2022 and 2021 exams
respectively increasing the score from 8th to 88th percentile and 3rd to 99th
percentile. The final results of the 2023 PSITE are set to be released on April
11, 2023, and this is an exciting moment to continue our research with a fresh
exam. Our evaluation pipeline is ready for the moment that the exam is released
so long as we have access via OpenAI to the GPT-4 API. With multimodal input,
we may achieve superhuman performance on the 2023 exam.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 03:30:12 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Freedman",
"Jonathan D.",
""
],
[
"Nappier",
"Ian A.",
""
]
] |
new_dataset
| 0.987568 |
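The entry above evaluates GPT-3.5 and GPT-4 on authentic multiple-choice PSITE items. Below is a hedged sketch of such an evaluation loop using the OpenAI Python client's chat-completions interface as it existed in early 2023; the prompt format and the placeholder question are invented for illustration and are not the authors' pipeline or real exam content.

```python
import openai

openai.api_key = "sk-..."   # assumed to be supplied by the user

def answer_mcq(stem, options, model="gpt-4"):
    """Ask the model for a single-letter answer to a multiple-choice vignette."""
    letters = "ABCDE"[: len(options)]
    prompt = (stem + "\n"
              + "\n".join(f"{l}. {o}" for l, o in zip(letters, options))
              + "\nRespond with a single letter.")
    resp = openai.ChatCompletion.create(        # legacy pre-1.0 client interface
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp["choices"][0]["message"]["content"].strip()[:1].upper()

# Placeholder item, not a PSITE question.
items = [{"stem": "Placeholder clinical vignette?",
          "options": ["first option", "second option", "third option", "fourth option"],
          "key": "A"}]
correct = sum(answer_mcq(it["stem"], it["options"]) == it["key"] for it in items)
print(f"{correct}/{len(items)} correct")
```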
2304.01517
|
Xu Chen
|
Xu Chen, Zhiyong Feng, Zhiqing Wei, Ping Zhang, and Xin Yuan
|
Code-Division OFDM Joint Communication and Sensing System for 6G
Machine-type Communication
|
13 pages,16 figures
|
IEEE Internet of Things Journal, vol. 8, no. 15, pp. 12 093-12
105, Feb. 2021
| null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The joint communication and sensing (JCS) system can provide higher spectrum
efficiency and load-saving for 6G machine-type communication (MTC) applications
by merging necessary communication and sensing abilities with unified spectrum
and transceivers. In order to suppress the mutual interference between the
communication and radar sensing signals to improve the communication
reliability and radar sensing accuracy, we propose a novel code-division
orthogonal frequency division multiplex (CD-OFDM) JCS MTC system, where MTC
users can simultaneously and continuously conduct communication and sensing
with each other. {\color{black} We propose a novel CD-OFDM JCS signal and
corresponding successive-interference-cancellation (SIC) based signal
processing technique that obtains code-division multiplex (CDM) gain, which is
compatible with the prevalent orthogonal frequency division multiplex (OFDM)
communication system.} To model the unified JCS signal transmission and
reception process, we propose a novel unified JCS channel model. Finally, the
simulation and numerical results are shown to verify the feasibility of the
CD-OFDM JCS MTC system {\color{black} and the error propagation performance}.
We show that the CD-OFDM JCS MTC system can achieve not only more reliable
communication but also comparably robust radar sensing compared with the
precedent OFDM JCS system, especially in low signal-to-interference-and-noise
ratio (SINR) regime.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 04:00:37 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Chen",
"Xu",
""
],
[
"Feng",
"Zhiyong",
""
],
[
"Wei",
"Zhiqing",
""
],
[
"Zhang",
"Ping",
""
],
[
"Yuan",
"Xin",
""
]
] |
new_dataset
| 0.995318 |
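The CD-OFDM entry above combines code-division spreading with OFDM so that superposed signals can be separated at the receiver. The numpy sketch below illustrates that principle generically (Walsh-Hadamard spreading across subcarriers, noiseless channel); it is not the paper's signal design or its SIC receiver.

```python
import numpy as np
from scipy.linalg import hadamard

n_sub, spread_len = 64, 8
codes = hadamard(spread_len)                      # orthogonal spreading codes

def cd_ofdm_symbol(symbols, code):
    """Spread BPSK symbols with one code across subcarriers, then OFDM-modulate."""
    chips = np.kron(symbols, code)                # spread_len chips per data symbol
    return np.fft.ifft(chips, n=n_sub)            # time-domain OFDM symbol

rng = np.random.default_rng(0)
data = 1 - 2 * rng.integers(0, 2, size=(2, n_sub // spread_len))   # two users, BPSK
tx = sum(cd_ofdm_symbol(data[u], codes[u]) for u in range(2))      # superposition

# Receiver: back to the subcarrier domain, then despread user 0 with its own code.
rx = np.fft.fft(tx, n=n_sub).reshape(-1, spread_len)
user0 = (rx @ codes[0]).real / spread_len
print(np.allclose(np.round(user0), data[0]))       # True in this noiseless sketch
```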
2304.01519
|
Haitao Yang
|
Haitao Yang, Zaiwei Zhang, Xiangru Huang, Min Bai, Chen Song, Bo Sun,
Li Erran Li, Qixing Huang
|
LiDAR-Based 3D Object Detection via Hybrid 2D Semantic Scene Generation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bird's-Eye View (BEV) features are popular intermediate scene representations
shared by the 3D backbone and the detector head in LiDAR-based object
detectors. However, little research has been done to investigate how to
incorporate additional supervision on the BEV features to improve proposal
generation in the detector head, while still balancing the number of powerful
3D layers and efficient 2D network operations. This paper proposes a novel
scene representation that encodes both the semantics and geometry of the 3D
environment in 2D, which serves as a dense supervision signal for better BEV
feature learning. The key idea is to use auxiliary networks to predict a
combination of explicit and implicit semantic probabilities by exploiting their
complementary properties. Extensive experiments show that our simple yet
effective design can be easily integrated into most state-of-the-art 3D object
detectors and consistently improves upon baseline models.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 04:05:56 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Yang",
"Haitao",
""
],
[
"Zhang",
"Zaiwei",
""
],
[
"Huang",
"Xiangru",
""
],
[
"Bai",
"Min",
""
],
[
"Song",
"Chen",
""
],
[
"Sun",
"Bo",
""
],
[
"Li",
"Li Erran",
""
],
[
"Huang",
"Qixing",
""
]
] |
new_dataset
| 0.995978 |
2304.01567
|
Hannes Fassold
|
Hannes Fassold, Karlheinz Gutjahr, Anna Weber, Roland Perko
|
A real-time algorithm for human action recognition in RGB and thermal
video
|
Accepted for SPIE Real-Time Image Processing and Deep Learning
Conference 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Monitoring the movement and actions of humans in video in real-time is an
important task. We present a deep learning based algorithm for human action
recognition for both RGB and thermal cameras. It is able to detect and track
humans and recognize four basic actions (standing, walking, running, lying) in
real-time on a notebook with an NVIDIA GPU. For this, it combines
state-of-the-art components for object detection (Scaled YoloV4), optical flow (RAFT) and
pose estimation (EvoSkeleton). Qualitative experiments on a set of tunnel
videos show that the proposed algorithm works robustly for both RGB and thermal
video.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 06:44:13 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Fassold",
"Hannes",
""
],
[
"Gutjahr",
"Karlheinz",
""
],
[
"Weber",
"Anna",
""
],
[
"Perko",
"Roland",
""
]
] |
new_dataset
| 0.977269 |
2304.01585
|
Nilah Ravi Nair
|
Nilah Ravi Nair, Fernando Moya Rueda, Christopher Reining and Gernot
A. Fink
|
Multi-Channel Time-Series Person and Soft-Biometric Identification
|
Accepted at the ICPR 2022 workshop: 12th International Workshop on
Human Behavior Understanding
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Multi-channel time-series datasets are popular in the context of human
activity recognition (HAR). On-body device (OBD) recordings of human movements
are often preferred for HAR applications not only for their reliability but as
an approach for identity protection, e.g., in industrial settings.
By contrast, the gait activity is a biometric, as the cyclic movement is
distinctive and collectable. In addition, the gait cycle has proven to contain
soft-biometric information of human groups, such as age and height. Though
general human movements have not been considered a biometric, they might
contain identity information. This work investigates person and soft-biometrics
identification from OBD recordings of humans performing different activities
using deep architectures. Furthermore, we propose the use of attribute
representation for soft-biometric identification. We evaluate the method on
four datasets of multi-channel time-series HAR, measuring the performance of
person and soft-biometrics identification and its relation to the performed
activities. We find that person identification is not limited to gait activity.
The impact of activities on the identification performance was found to be
training and dataset specific. Soft-biometric based attribute representation
shows promising results and emphasises the necessity of larger datasets.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 07:24:51 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Nair",
"Nilah Ravi",
""
],
[
"Rueda",
"Fernando Moya",
""
],
[
"Reining",
"Christopher",
""
],
[
"Fink",
"Gernot A.",
""
]
] |
new_dataset
| 0.999446 |
2304.01612
|
Ruiqi Li
|
Ruiqi Li, Patrik Haslum, Leyang Cui
|
EDeR: A Dataset for Exploring Dependency Relations Between Events
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Relation extraction is a central task in natural language processing (NLP)
and information retrieval (IR) research. We argue that an important type of
relation not explored in NLP or IR research to date is that of an event being
an argument - required or optional - of another event. We introduce the
human-annotated Event Dependency Relation dataset (EDeR) which provides this
dependency relation. The annotation is done on a sample of documents from the
OntoNotes dataset, which has the added benefit that it integrates with
existing, orthogonal, annotations of this dataset. We investigate baseline
approaches for predicting the event dependency relation, the best of which
achieves an accuracy of 82.61 for binary argument/non-argument classification.
We show that recognizing this relation leads to more accurate event extraction
(semantic role labelling) and can improve downstream tasks that depend on this,
such as co-reference resolution. Furthermore, we demonstrate that predicting
the three-way classification into the required argument, optional argument or
non-argument is a more challenging task.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 08:07:07 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Li",
"Ruiqi",
""
],
[
"Haslum",
"Patrik",
""
],
[
"Cui",
"Leyang",
""
]
] |
new_dataset
| 0.999194 |
2304.01617
|
Ryan Shah
|
Theodoros Georgiou, Lynne Baillie, Ryan Shah
|
Investigating Concerns of Security and Privacy Among Rohingya Refugees
in Malaysia
|
5 pages, 3 figures, CHI'23 Workshop on Migration, Security and
Privacy (see https://migrationsecurityprivacy.uk)
| null | null | null |
cs.CY cs.CR cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The security and privacy of refugee communities have emerged as pressing
concerns in the context of increasing global migration. The Rohingya refugees
are a stateless Muslim minority group in Myanmar who were forced to flee their
homes after conflict broke out, with many fleeing to neighbouring countries and
ending up in refugee camps, such as in Bangladesh. Others migrated to
Malaysia, where they live within the community as urban
refugees. However, the Rohingya in Malaysia are not legally recognized and have
limited and restricted access to public resources such as healthcare and
education. This means they face security and privacy challenges, different from
those of other refugee groups, which are often compounded by this lack of recognition,
social isolation and lack of access to vital resources. This paper discusses
the implications of security and privacy of the Rohingya refugees, focusing on
available and accessible technological assistance, uncovering the heightened
need for a human-centered approach to design and implementation of solutions
that factor in these requirements. Overall, the discussions and findings
presented in this paper on the security and privacy of the Rohingya provide a
valuable resource for researchers, practitioners and policymakers in the wider
HCI community.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 08:14:41 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Georgiou",
"Theodoros",
""
],
[
"Baillie",
"Lynne",
""
],
[
"Shah",
"Ryan",
""
]
] |
new_dataset
| 0.9989 |
2304.01693
|
Molham Alsakat
|
Molham Alsakati (KTH Royal Institute of Technology, Sweden), Charlie
Pettersson (Ericsson Research, Sweden), Sebastian Max (Ericsson Research,
Germany), Vishnu Narayanan Moothedath and James Gross (KTH Royal Institute of
Technology, Sweden)
|
Performance of 802.11be Wi-Fi 7 with Multi-Link Operation on AR
Applications
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Since its first release in the late 1990s, Wi-Fi has been updated to keep up
with evolving user needs. Recently, Wi-Fi and other radio access technologies
have been pushed to their edge when serving Augmented Reality (AR)
applications. AR applications require high throughput, low latency, and high
reliability to ensure a high-quality user experience. The 802.11be amendment,
which will be marketed as Wi-Fi 7, introduces several features that aim to
enhance its capabilities to support challenging applications like AR. One of
the main features introduced in this amendment is Multi-Link Operation (MLO)
which allows nodes to transmit and receive over multiple links concurrently.
When using MLO, traffic is distributed among links using an
implementation-specific traffic-to-link allocation policy. This paper aims to
evaluate the performance of MLO, using different policies, in serving AR
applications compared to Single-Link (SL). Experimental simulations using an
event-based Wi-Fi simulator have been conducted. Our results show the general
superiority of MLO when serving AR applications. MLO achieves lower latency and
serves a higher number of AR users compared to SL with the same frequency
resources. In addition, increasing the number of links can improve the
performance of MLO. Regarding traffic-to-link allocation policies, we found
that some policies are more susceptible to channel blocking than others, resulting in
possible performance degradation.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 10:44:58 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Alsakati",
"Molham",
"",
"KTH Royal Institute of Technology, Sweden"
],
[
"Pettersson",
"Charlie",
"",
"Ericsson Research, Sweden"
],
[
"Max",
"Sebastian",
"",
"Ericsson Research,\n Germany"
],
[
"Moothedath",
"Vishnu Narayanan",
"",
"KTH Royal Institute of\n Technology, Sweden"
],
[
"Gross",
"James",
"",
"KTH Royal Institute of\n Technology, Sweden"
]
] |
new_dataset
| 0.994909 |
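The Wi-Fi 7 entry above stresses that MLO results hinge on the implementation-specific traffic-to-link allocation policy. The toy sketch below contrasts two such policies; link names, packet sizes, and the simplified load model are invented for illustration and are unrelated to the event-based simulator used in the study.

```python
from collections import defaultdict
from itertools import cycle

links = ["2.4GHz", "5GHz", "6GHz"]

def round_robin_policy(packets):
    """Assign packets to links in a fixed cyclic order, ignoring load."""
    rr = cycle(links)
    return [(pkt, next(rr)) for pkt in packets]

def least_loaded_policy(packets):
    """Assign each packet to the link with the fewest queued bytes so far."""
    load = defaultdict(int)
    assignment = []
    for pkt_id, size in packets:
        link = min(links, key=lambda lk: load[lk])
        load[link] += size
        assignment.append(((pkt_id, size), link))
    return assignment

# AR-like bursty traffic: (packet id, size in bytes), with a large burst every 4th packet.
traffic = [(i, 1500 if i % 4 else 12000) for i in range(8)]
print(round_robin_policy(traffic))
print(least_loaded_policy(traffic))
```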
2304.01721
|
Michele Paolino
|
Anna Panagopoulou, Michele Paolino, Daniel Raho
|
Virtio-FPGA: a virtualization solution for SoC-attached FPGAs
| null | null | null | null |
cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, FPGA accelerators have risen in popularity as they present a
suitable way of satisfying the high-computation and low-power demands of real
time applications. The modern electric transportation systems (such as
aircraft, road vehicles) can greatly profit from embedded FPGAs, which
incorporate both high-performance and flexibility features into a single SoC.
At the same time, the virtualization of FPGA resources aims to reinforce these
systems with strong isolation, consolidation and security. In this paper, we
present a novel virtualization framework aimed at SoC-attached FPGA devices,
in a Linux and QEMU/KVM setup. We use Virtio as a means to enable the
configuration of FPGA resources from guest systems in an efficient way. Also,
we employ the Linux VFIO and Device Tree Overlays technologies in order to
render the FPGA resources dynamically accessible to guest systems. The ability
to dynamically configure and utilize the FPGA resources from a virtualization
environment is described in detail. The evaluation procedure of the solution
is presented and the virtualization overhead is benchmarked as minimal (around
10%) when accessing the FPGA devices from guest systems.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 11:30:24 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Panagopoulou",
"Anna",
""
],
[
"Paolino",
"Michele",
""
],
[
"Raho",
"Daniel",
""
]
] |
new_dataset
| 0.986886 |
2304.01744
|
Tuukka Korhonen
|
Tuukka Korhonen, Konrad Majewski, Wojciech Nadara, Micha{\l}
Pilipczuk, Marek Soko{\l}owski
|
Dynamic treewidth
|
80 pages, 2 figures
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a data structure that for a dynamic graph $G$ that is updated by
edge insertions and deletions, maintains a tree decomposition of $G$ of width
at most $6k+5$ under the promise that the treewidth of $G$ never grows above
$k$. The amortized update time is ${\cal O}_k(2^{\sqrt{\log n}\log\log n})$,
where $n$ is the vertex count of $G$ and the ${\cal O}_k(\cdot)$ notation hides
factors depending on $k$. In addition, we also obtain the dynamic variant of
Courcelle's Theorem: for any fixed property $\varphi$ expressible in the
$\mathsf{CMSO}_2$ logic, the data structure can maintain whether $G$ satisfies
$\varphi$ within the same time complexity bounds. To a large extent, this
answers a question posed by Bodlaender [WG 1993].
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 12:30:51 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Korhonen",
"Tuukka",
""
],
[
"Majewski",
"Konrad",
""
],
[
"Nadara",
"Wojciech",
""
],
[
"Pilipczuk",
"Michał",
""
],
[
"Sokołowski",
"Marek",
""
]
] |
new_dataset
| 0.996063 |
2304.01790
|
Hung Le
|
Hung Le and Christian Wulff-Nilsen
|
VC Set Systems in Minor-free (Di)Graphs and Applications
|
40 pages, 7 figures, abstract shorten due to Arxiv limits
| null | null | null |
cs.DS math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
A recent line of work on VC set systems in minor-free (undirected) graphs,
starting from Li and Parter, who constructed a new VC set system for planar
graphs, has given surprising algorithmic results. In this work, we initiate a
more systematic study of VC set systems for minor-free graphs and their
applications in both undirected graphs and directed graphs (a.k.a. digraphs).
More precisely:
- We propose a new variant of Li-Parter set system for undirected graphs.
- We extend our set system to $K_h$-minor-free digraphs and show that its VC
dimension is $O(h^2)$.
- We show that the system of directed balls in minor-free digraphs has VC
dimension at most $h-1$.
- On the negative side, we show that VC set system constructed from shortest
path trees of planar digraphs does not have a bounded VC dimension.
The highlight of our work is the results for digraphs, as we are not aware of
known algorithmic work on constructing and exploiting VC set systems for
digraphs.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 13:34:13 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Le",
"Hung",
""
],
[
"Wulff-Nilsen",
"Christian",
""
]
] |
new_dataset
| 0.989037 |
2304.01838
|
Anders Dahl
|
Anders Bjorholm Dahl, Patrick M{\o}ller Jensen, Carsten Gundlach,
Rebecca Engberg, Hans Martin Kjer, Vedrana Andersen Dahl
|
BugNIST -- A New Large Scale Volumetric 3D Image Dataset for
Classification and Detection
|
11 pages, 5 figures, 2 tables
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Progress in 3D volumetric image analysis research is limited by the lack of
datasets and most advances in analysis methods for volumetric images are based
on medical data. However, medical data do not necessarily resemble the
characteristics of other volumetric images such as micro-CT. To promote
research in 3D volumetric image analysis beyond medical data, we have created
the BugNIST dataset and made it freely available. BugNIST is an extensive
dataset of micro-CT scans of 12 types of bugs, such as insects and larvae.
BugNIST contains 9437 volumes where 9087 are of individual bugs and 350 are
mixtures of bugs and other material. The goal of BugNIST is to benchmark
classification and detection methods, and we have designed the detection
challenge such that detection models are trained on scans of individual bugs
and tested on bug mixtures. Models capable of solving this task will be
independent of the context, i.e., the surrounding material. This is a great
advantage if the context is unknown or changing, as is often the case in
micro-CT. Our initial baseline analysis shows that current state-of-the-art
deep learning methods classify individual bugs very well, but have great
difficulty with the detection challenge. Thereby, BugNIST enables research in
image analysis areas that until now have missed relevant data - both
classification, detection, and hopefully more.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 14:44:06 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Dahl",
"Anders Bjorholm",
""
],
[
"Jensen",
"Patrick Møller",
""
],
[
"Gundlach",
"Carsten",
""
],
[
"Engberg",
"Rebecca",
""
],
[
"Kjer",
"Hans Martin",
""
],
[
"Dahl",
"Vedrana Andersen",
""
]
] |
new_dataset
| 0.999868 |
2304.01843
|
Naveed Ul Hassan
|
Ammar Rafique, Naveed Ul Hassan, Muhammad Zubair, Ijaz Haider Naqvi,
Muhammad Qasim Mehmood, Chau Yuen, Marco Di Renzo, and Merouane Debbah
|
Reconfigurable Intelligent Surfaces: Interplay of Unit-Cell- and
Surface-Level Design and Performance under Quantifiable Benchmarks
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The ability of reconfigurable intelligent surfaces (RIS) to produce complex
radiation patterns in the far-field is determined by various factors, such as
the unit-cell's size, shape, spatial arrangement, tuning mechanism, the
communication and control circuitry's complexity, and the illuminating source's
type (point/planewave). Research on RIS has been mainly focused on two areas:
first, the optimization and design of unit-cells to achieve desired
electromagnetic responses within a specific frequency band; and second,
exploring the applications of RIS in various settings, including system-level
performance analysis. The former does not assume any specific radiation pattern
on the surface level, while the latter does not consider any particular
unit-cell design. Both approaches largely ignore the complexity and power
requirements of the RIS control circuitry. As we progress towards the
fabrication and use of RIS in real-world settings, it is becoming increasingly
necessary to consider the interplay between the unit-cell design, the required
surface-level radiation patterns, the control circuit's complexity, and the
power requirements concurrently. In this paper, a benchmarking framework for
RIS is employed to compare performance and analyze tradeoffs between the
unit-cell's specified radiation patterns and the control circuit's complexity
for far-field beamforming, considering different diode-based unit-cell designs
for a given surface size. This work lays the foundation for optimizing the
design of the unit-cells and surface-level radiation patterns, facilitating the
optimization of RIS-assisted wireless communication systems.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 14:53:58 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Rafique",
"Ammar",
""
],
[
"Hassan",
"Naveed Ul",
""
],
[
"Zubair",
"Muhammad",
""
],
[
"Naqvi",
"Ijaz Haider",
""
],
[
"Mehmood",
"Muhammad Qasim",
""
],
[
"Yuen",
"Chau",
""
],
[
"Di Renzo",
"Marco",
""
],
[
"Debbah",
"Merouane",
""
]
] |
new_dataset
| 0.999479 |
2304.01860
|
Jose Maria Santiago III
|
Jose Ma. Santiago III, Richard Lance Parayno, Jordan Aiko Deja, Briane
Paul V. Samson
|
Rolling the Dice: Imagining Generative AI as a Dungeons & Dragons
Storytelling Companion
|
5 pages, 2 figures
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
AI advancements have augmented casual writing and story generation, but their
usage poses challenges in collaborative storytelling. In role-playing games
such as Dungeons & Dragons (D&D), composing prompts using generative AI
requires a technical understanding to generate ideal results, which is
difficult for novices. Thus, emergent narratives organically developed based on
player actions and decisions have yet to be fully utilized. This paper
envisions the use of generative AI in transforming storytelling into an
interactive drama using dynamic and immersive narratives. First, we describe
scenarios where narratives are created and character conversations are designed
within an overarching fantasy disposition. Then, we recommend design guidelines
to help create tools using generative AI in interactive storytelling. Lastly,
we raise questions on its potential impact on player immersion and cognitive
load. Our contributions may be expanded within the broader interactive
storytelling domain, such as speech-conversational AI and persona-driven
chatbots.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 15:09:00 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Santiago",
"Jose Ma.",
"III"
],
[
"Parayno",
"Richard Lance",
""
],
[
"Deja",
"Jordan Aiko",
""
],
[
"Samson",
"Briane Paul V.",
""
]
] |
new_dataset
| 0.99396 |
2304.01865
|
Christian Keilstrup Ingwersen
|
Christian Keilstrup Ingwersen and Christian Mikkelstrup and Janus
N{\o}rtoft Jensen and Morten Rieger Hannemose and Anders Bjorholm Dahl
|
SportsPose -- A Dynamic 3D sports pose dataset
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate 3D human pose estimation is essential for sports analytics,
coaching, and injury prevention. However, existing datasets for monocular pose
estimation do not adequately capture the challenging and dynamic nature of
sports movements. In response, we introduce SportsPose, a large-scale 3D human
pose dataset consisting of highly dynamic sports movements. With more than
176,000 3D poses from 24 different subjects performing 5 different sports
activities, SportsPose provides a diverse and comprehensive set of 3D poses
that reflect the complex and dynamic nature of sports movements. Contrary to
other markerless datasets, we have quantitatively evaluated the precision of
SportsPose by comparing our poses with a commercial marker-based system and
achieve a mean error of 34.5 mm across all evaluation sequences. This is
comparable to the error reported on the commonly used 3DPW dataset. We further
introduce a new metric, local movement, which describes the movement of the
wrist and ankle joints in relation to the body. With this, we show that
SportsPose contains more movement than the Human3.6M and 3DPW datasets in these
extremum joints, indicating that our movements are more dynamic. The dataset
with accompanying code can be downloaded from our website. We hope that
SportsPose will allow researchers and practitioners to develop and evaluate
more effective models for the analysis of sports performance and injury
prevention. With its realistic and diverse dataset, SportsPose provides a
valuable resource for advancing the state-of-the-art in pose estimation in
sports.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 15:15:25 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Ingwersen",
"Christian Keilstrup",
""
],
[
"Mikkelstrup",
"Christian",
""
],
[
"Jensen",
"Janus Nørtoft",
""
],
[
"Hannemose",
"Morten Rieger",
""
],
[
"Dahl",
"Anders Bjorholm",
""
]
] |
new_dataset
| 0.999858 |
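The SportsPose entry above introduces a "local movement" metric for the wrist and ankle joints in relation to the body. The sketch below is one plausible, hedged interpretation (per-frame displacement of an extremity joint expressed in a pelvis-centred frame); the dataset's exact definition may differ.

```python
import numpy as np

def local_movement(joint_xyz, pelvis_xyz):
    """Mean per-frame displacement of a joint after removing whole-body translation.

    joint_xyz, pelvis_xyz: (T, 3) arrays of 3D positions over T frames.
    """
    local = joint_xyz - pelvis_xyz            # express the joint in a body-centred frame
    step = np.linalg.norm(np.diff(local, axis=0), axis=1)
    return step.mean()

# Synthetic example: 100 frames of a wrist oscillating around a moving pelvis.
t = np.linspace(0, 2 * np.pi, 100)
pelvis = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)   # body walks along x
wrist = pelvis + np.stack([np.zeros_like(t), np.sin(3 * t), np.cos(3 * t)], axis=1)
print(local_movement(wrist, pelvis))   # captures arm swing, not the walking translation
```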
2304.01895
|
Manuel Mu\~noz S\'anchez
|
Manuel Mu\~noz S\'anchez, Emilia Silvas, Jos Elfring, Ren\'e van de
Molengraft
|
Robustness Benchmark of Road User Trajectory Prediction Models for
Automated Driving
| null | null | null | null |
cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate and robust trajectory predictions of road users are needed to enable
safe automated driving. To do this, machine learning models are often used,
which can show erratic behavior when presented with previously unseen inputs.
In this work, two environment-aware models (MotionCNN and MultiPath++) and two
common baselines (Constant Velocity and an LSTM) are benchmarked for robustness
against various perturbations that simulate functional insufficiencies observed
during model deployment in a vehicle: unavailability of road information, late
detections, and noise. Results show significant performance degradation under
the presence of these perturbations, with errors increasing up to +1444.8\% in
commonly used trajectory prediction evaluation metrics. Training the models
with similar perturbations effectively reduces performance degradation, with
error increases of up to +87.5\%. We argue that despite being an effective
mitigation strategy, data augmentation through perturbations during training
does not guarantee robustness towards unforeseen perturbations, since
identification of all possible on-road complications is unfeasible.
Furthermore, degrading the inputs sometimes leads to more accurate predictions,
suggesting that the models are unable to learn the true relationships between
the different elements in the data.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 15:47:42 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Sánchez",
"Manuel Muñoz",
""
],
[
"Silvas",
"Emilia",
""
],
[
"Elfring",
"Jos",
""
],
[
"van de Molengraft",
"René",
""
]
] |
new_dataset
| 0.997739 |
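The benchmark entry above perturbs model inputs to simulate unavailable road information, late detections, and noise. The functions below are a hedged toy sketch of such perturbations applied to an observed trajectory; magnitudes and field names are illustrative, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(track, sigma=0.2):
    """Gaussian position noise, e.g. from imperfect detection/tracking."""
    return track + rng.normal(0.0, sigma, size=track.shape)

def late_detection(track, missing=3):
    """Drop the oldest observations, as if the object was detected late."""
    return track[missing:]

def drop_map(scene):
    """Simulate unavailable road information by blanking the (hypothetical) map channel."""
    scene = dict(scene)
    scene["road_polylines"] = None
    return scene

history = np.cumsum(rng.normal(size=(10, 2)), axis=0)   # toy 10-step (x, y) history
print(add_noise(history).shape, late_detection(history).shape)
```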
2304.01922
|
Michal \v{S}tef\'anik
|
Michal \v{S}tef\'anik and Marek Kadl\v{c}\'ik and Piotr Gramacki and
Petr Sojka
|
Resources and Few-shot Learners for In-context Learning in Slavic
Languages
|
EACL 2023 SlavicNLP Long Paper. New instructional templates and
models are available on
https://github.com/fewshot-goes-multilingual/slavic-incontext-learning
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the rapid recent progress in creating accurate and compact in-context
learners, most recent work focuses on in-context learning (ICL) for tasks in
English. However, the ability to interact with users of languages outside
English presents a great potential for broadening the applicability of language
technologies to non-English speakers.
In this work, we collect the infrastructure necessary for training and
evaluation of ICL in a selection of Slavic languages: Czech, Polish, and
Russian. We link a diverse set of datasets and cast these into a unified
instructional format through a set of transformations and newly-crafted
templates written purely in target languages. Using the newly-curated dataset,
we evaluate a set of the most recent in-context learners and compare their
results to the supervised baselines. Finally, we train, evaluate and publish a
set of in-context learning models that we train on the collected resources and
compare their performance to previous work.
We find that ICL models tuned in English are also able to learn some tasks
from non-English contexts, but multilingual instruction fine-tuning
consistently improves the ICL ability. We also find that the massive multitask
training can be outperformed by single-task training in the target language,
uncovering the potential for specializing in-context learners to the
language(s) of their application.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 16:16:25 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Štefánik",
"Michal",
""
],
[
"Kadlčík",
"Marek",
""
],
[
"Gramacki",
"Piotr",
""
],
[
"Sojka",
"Petr",
""
]
] |
new_dataset
| 0.996108 |
2304.01961
|
Jheng-Hong Yang
|
Jheng-Hong Yang, Carlos Lassance, Rafael Sampaio de Rezende, Krishna
Srinivasan, Miriam Redi, St\'ephane Clinchant, Jimmy Lin
|
AToMiC: An Image/Text Retrieval Test Collection to Support Multimedia
Content Creation
| null | null | null | null |
cs.IR cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the AToMiC (Authoring Tools for Multimedia Content)
dataset, designed to advance research in image/text cross-modal retrieval.
While vision-language pretrained transformers have led to significant
improvements in retrieval effectiveness, existing research has relied on
image-caption datasets that feature only simplistic image-text relationships
and underspecified user models of retrieval tasks. To address the gap between
these oversimplified settings and real-world applications for multimedia
content creation, we introduce a new approach for building retrieval test
collections. We leverage hierarchical structures and diverse domains of texts,
styles, and types of images, as well as large-scale image-document associations
embedded in Wikipedia. We formulate two tasks based on a realistic user model
and validate our dataset through retrieval experiments using baseline models.
AToMiC offers a testbed for scalable, diverse, and reproducible multimedia
retrieval research. Finally, the dataset provides the basis for a dedicated
track at the 2023 Text Retrieval Conference (TREC), and is publicly available
at https://github.com/TREC-AToMiC/AToMiC.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 17:11:34 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Yang",
"Jheng-Hong",
""
],
[
"Lassance",
"Carlos",
""
],
[
"de Rezende",
"Rafael Sampaio",
""
],
[
"Srinivasan",
"Krishna",
""
],
[
"Redi",
"Miriam",
""
],
[
"Clinchant",
"Stéphane",
""
],
[
"Lin",
"Jimmy",
""
]
] |
new_dataset
| 0.998886 |
2304.01962
|
Xuanchao Ma
|
Xuanchao Ma and Yuchen Liu
|
Ethylene Leak Detection Based on Infrared Imaging: A Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ethylene leakage detection has become one of the most important research
directions in the field of target detection due to the fact that ethylene
leakage in the petrochemical industry is closely related to production safety
and environmental pollution. Under infrared conditions, there are many factors
that affect the texture characteristics of ethylene, such as ethylene
concentration, background, and so on. We find that the detection criteria used
in infrared imaging ethylene leakage detection research cannot fully reflect
real-world production conditions, which is not conducive to evaluating the
performance of current image-based target detection methods. Therefore, we
create a new infrared image dataset of ethylene leakage with different
concentrations and backgrounds, including 54275 images. We use the proposed
dataset benchmark to evaluate seven advanced image-based target detection
algorithms. Experimental results demonstrate the performance and limitations of
existing algorithms, and the dataset benchmark has good versatility and
effectiveness.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 17:13:06 GMT"
}
] | 2023-04-05T00:00:00 |
[
[
"Ma",
"Xuanchao",
""
],
[
"Liu",
"Yuchen",
""
]
] |
new_dataset
| 0.999342 |
1310.8313
|
Maya Stein
|
Flavia Bonomo, Oliver Schaudt, Maya Stein, Mario Valencia-Pabon
|
b-coloring is NP-hard on co-bipartite graphs and polytime solvable on
tree-cographs
| null |
Algorithmica 73(2), 2015, 59-69
|
10.1007/s00453-014-9921-5
| null |
cs.CC cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A b-coloring of a graph is a proper coloring such that every color class
contains a vertex that is adjacent to all other color classes. The b-chromatic
number of a graph G, denoted by \chi_b(G), is the maximum number t such that G
admits a b-coloring with t colors. A graph G is called b-continuous if it
admits a b-coloring with t colors, for every t = \chi(G),\ldots,\chi_b(G), and
b-monotonic if \chi_b(H_1) \geq \chi_b(H_2) for every induced subgraph H_1 of
G, and every induced subgraph H_2 of H_1.
We investigate the b-chromatic number of graphs with stability number two.
These are exactly the complements of triangle-free graphs, thus including all
complements of bipartite graphs. The main results of this work are the
following:
- We characterize the b-colorings of a graph with stability number two in
terms of matchings with no augmenting paths of length one or three. We derive
that graphs with stability number two are b-continuous and b-monotonic.
- We prove that it is NP-complete to decide whether the b-chromatic number of
a co-bipartite graph is at most a given threshold.
- We describe a polynomial time dynamic programming algorithm to compute the
b-chromatic number of co-trees.
- Extending several previous results, we show that there is a polynomial time
dynamic programming algorithm for computing the b-chromatic number of
tree-cographs. Moreover, we show that tree-cographs are b-continuous and
b-monotonic.
|
[
{
"version": "v1",
"created": "Wed, 30 Oct 2013 20:23:02 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jan 2014 15:38:45 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Bonomo",
"Flavia",
""
],
[
"Schaudt",
"Oliver",
""
],
[
"Stein",
"Maya",
""
],
[
"Valencia-Pabon",
"Mario",
""
]
] |
new_dataset
| 0.987551 |
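The entry above rests on the definition of a b-coloring: a proper coloring in which every color class contains a vertex adjacent to all other color classes. The sketch below simply verifies that definition on a small example; it is illustrative only and unrelated to the NP-hardness proof or the dynamic programming algorithms of the paper.

```python
def is_b_coloring(adj, color):
    """Check the b-coloring property for a graph given as an adjacency dict."""
    # 1. The coloring must be proper.
    if any(color[u] == color[v] for u in adj for v in adj[u]):
        return False
    classes = set(color.values())
    # 2. Every class needs a "b-vertex" adjacent to all other classes.
    for c in classes:
        has_b_vertex = any(
            {color[w] for w in adj[v]} >= classes - {c}
            for v in adj if color[v] == c
        )
        if not has_b_vertex:
            return False
    return True

adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}   # the 5-cycle C5
color = {0: 1, 1: 2, 2: 1, 3: 2, 4: 3}
print(is_b_coloring(adj, color))   # True: each of the 3 classes has a b-vertex
```

The example shows that the 5-cycle admits a b-coloring with three colors.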
1509.02876
|
Harish Karunakaran
|
Harish Karunakaran, Varadhan R, Anurag R M, Harmanpreet S
|
Low Cost Swarm Based Diligent Cargo Transit System
|
6 pages, 9 figures, 1 block diagram
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of this paper is to present the design and development of a low cost
cargo transit system which can be adapted in developing countries like India
where there is abundant and cheap human labour which makes the process of
automation in any industry a challenge to innovators. The need of the hour is
an automation system that can diligently transfer cargo from one place to
another and minimize human intervention in the cargo transit industry.
Therefore, a solution is being proposed which could effectively bring down
human labour and the resources needed to implement them. The reduction in human
labour and resources is achieved by the use of low cost components and very
limited modification of the surroundings and the existing vehicles themselves.
The operation of the cargo transit system has been verified and the relevant
results are presented. An economical and robust cargo transit system is
designed and implemented.
|
[
{
"version": "v1",
"created": "Wed, 9 Sep 2015 18:06:36 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2023 15:38:08 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Karunakaran",
"Harish",
""
],
[
"R",
"Varadhan",
""
],
[
"M",
"Anurag R",
""
],
[
"S",
"Harmanpreet",
""
]
] |
new_dataset
| 0.988298 |
1710.03131
|
Huikai Wu
|
Huikai Wu, Yanqi Zong, Junge Zhang, Kaiqi Huang
|
MSC: A Dataset for Macro-Management in StarCraft II
|
Homepage: https://github.com/wuhuikai/MSC
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Macro-management is an important problem in StarCraft, which has been studied
for a long time. Various datasets together with assorted methods have been
proposed in the last few years. But these datasets have some defects for
boosting the academic and industrial research: 1) There are neither standard
preprocessing, parsing and feature extraction procedures nor predefined
training, validation and test sets in some datasets. 2) Some datasets are only
specified for certain tasks in macro-management. 3) Some datasets are either
too small or don't have enough labeled data for modern machine learning
algorithms such as deep neural networks. So most previous methods are trained
with various features, evaluated on different test sets from the same or
different datasets, making it difficult to compare them directly. To boost the
research of macro-management in StarCraft, we release a new dataset MSC based
on the platform SC2LE. MSC consists of well-designed feature vectors,
pre-defined high-level actions and final result of each match. We also split
MSC into training, validation and test set for the convenience of evaluation
and comparison. Besides the dataset, we propose a baseline model and present
initial baseline results for global state evaluation and build order
prediction, which are two of the key tasks in macro-management. Various
downstream tasks and analyses of the dataset are also described for the sake of
research on macro-management in StarCraft II. Homepage:
https://github.com/wuhuikai/MSC.
|
[
{
"version": "v1",
"created": "Mon, 9 Oct 2017 14:59:11 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Feb 2019 12:06:34 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Apr 2023 11:56:53 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Wu",
"Huikai",
""
],
[
"Zong",
"Yanqi",
""
],
[
"Zhang",
"Junge",
""
],
[
"Huang",
"Kaiqi",
""
]
] |
new_dataset
| 0.999196 |
1909.06231
|
Carolina Luc\'ia Gonzalez
|
Flavia Bonomo-Braberman and Esther Galby and Carolina Luc\'ia Gonzalez
|
Characterising circular-arc contact $B_0$-VPG graphs
| null |
Discrete Applied Mathematics 283 (2020), 435-443
|
10.1016/j.dam.2020.01.027
| null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A contact $B_0$-VPG graph is a graph for which there exists a collection of
nontrivial pairwise interiorly disjoint horizontal and vertical segments in
one-to-one correspondence with its vertex set such that two vertices are
adjacent if and only if the corresponding segments touch. It was shown by Deniz
et al. that Recognition is $\mathsf{NP}$-complete for contact $B_0$-VPG graphs.
In this paper we present a minimal forbidden induced subgraph characterisation
of contact $B_0$-VPG graphs within the class of circular-arc graphs and provide
a polynomial-time algorithm for recognising these graphs.
|
[
{
"version": "v1",
"created": "Fri, 13 Sep 2019 13:50:52 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Bonomo-Braberman",
"Flavia",
""
],
[
"Galby",
"Esther",
""
],
[
"Gonzalez",
"Carolina Lucía",
""
]
] |
new_dataset
| 0.973533 |
1909.11966
|
Weilin Huang
|
Miao Kang and Xiaojun Hu and Weilin Huang and Matthew R. Scott and
Mauricio Reyes
|
Dual-Stream Pyramid Registration Network
|
Published in Medical Image Analysis, 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a Dual-Stream Pyramid Registration Network (referred as
Dual-PRNet) for unsupervised 3D medical image registration. Unlike recent
CNN-based registration approaches, such as VoxelMorph, which explores a
single-stream encoder-decoder network to compute a registration field from a
pair of 3D volumes, we design a two-stream architecture able to compute
multi-scale registration fields from convolutional feature pyramids. Our
contributions are two-fold: (i) we design a two-stream 3D encoder-decoder
network which computes two convolutional feature pyramids separately for a pair
of input volumes, resulting in strong deep representations that are meaningful
for deformation estimation; (ii) we propose a pyramid registration module able
to predict multi-scale registration fields directly from the decoding feature
pyramids. This allows it to refine the registration fields gradually in a
coarse-to-fine manner via sequential warping, and equips the model with the
capability to handle significant deformations between two volumes, such as
large displacements in spatial domain or slice space. The proposed Dual-PRNet
is evaluated on two standard benchmarks for brain MRI registration, where it
outperforms the state-of-the-art approaches by a large margin, e.g., having
improvements over recent VoxelMorph [2] with 0.683->0.778 on the LPBA40, and
0.511->0.631 on the Mindboggle101, in terms of average Dice score. Code is
available at: https://github.com/kangmiao15/Dual-Stream-PRNet-Plus.
|
[
{
"version": "v1",
"created": "Thu, 26 Sep 2019 08:17:01 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Apr 2023 11:28:05 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Kang",
"Miao",
""
],
[
"Hu",
"Xiaojun",
""
],
[
"Huang",
"Weilin",
""
],
[
"Scott",
"Matthew R.",
""
],
[
"Reyes",
"Mauricio",
""
]
] |
new_dataset
| 0.982837 |
2006.16887
|
Flavia Bonomo
|
Flavia Bonomo-Braberman, Carolina L. Gonzalez, Fabiano S. Oliveira,
Moys\'es S. Sampaio Jr., Jayme L. Szwarcfiter
|
Thinness of product graphs
|
45 pages. arXiv admin note: text overlap with arXiv:1704.00379
|
Discrete Applied Mathematics 312 (2022), 52-71
|
10.1016/j.dam.2021.04.003
| null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The thinness of a graph is a width parameter that generalizes some properties
of interval graphs, which are exactly the graphs of thinness one. Many
NP-complete problems can be solved in polynomial time for graphs with bounded
thinness, given a suitable representation of the graph. In this paper we study
the thinness and its variations of graph products. We show that the thinness
behaves "well" in general for products, in the sense that for most of the graph
products defined in the literature, the thinness of the product of two graphs
is bounded by a function (typically product or sum) of their thinness, or of
the thinness of one of them and the size of the other. We also show for some
cases the non-existence of such a function.
|
[
{
"version": "v1",
"created": "Tue, 30 Jun 2020 15:18:58 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Apr 2021 17:59:24 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Apr 2021 13:02:15 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Bonomo-Braberman",
"Flavia",
""
],
[
"Gonzalez",
"Carolina L.",
""
],
[
"Oliveira",
"Fabiano S.",
""
],
[
"Sampaio",
"Moysés S.",
"Jr."
],
[
"Szwarcfiter",
"Jayme L.",
""
]
] |
new_dataset
| 0.952218 |
2007.00570
|
Nina Pardal
|
Flavia Bonomo-Braberman, Guillermo A. Dur\'an, Nina Pardal, Mart\'in
D. Safe
|
Forbidden induced subgraph characterization of circle graphs within
split graphs
|
59 pages, 15 figures
|
Discrete Applied Mathematics 323 (2022), 43-75
|
10.1016/j.dam.2020.12.021
| null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph is a circle graph if its vertices are in correspondence with a family of
chords in a circle in such a way that every two distinct vertices are adjacent
if and only if the corresponding chords have nonempty intersection. Even though
there are diverse characterizations of circle graphs, a structural
characterization by minimal forbidden induced subgraphs for the entire class of
circle graphs is not known, not even restricted to split graphs (which are the
graphs whose vertex set can be partitioned into a clique and a stable set). In
this work, we give a characterization by minimal forbidden induced subgraphs of
circle graphs, restricted to split graphs.
|
[
{
"version": "v1",
"created": "Wed, 1 Jul 2020 15:56:34 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Bonomo-Braberman",
"Flavia",
""
],
[
"Durán",
"Guillermo A.",
""
],
[
"Pardal",
"Nina",
""
],
[
"Safe",
"Martín D.",
""
]
] |
new_dataset
| 0.998995 |
2101.01867
|
Neha R. Gupta
|
Neha R. Gupta (1), Vittorio Orlandi (1), Chia-Rui Chang (2), Tianyu
Wang (3), Marco Morucci (4), Pritam Dey (1), Thomas J. Howell (1), Xian Sun
(1), Angikar Ghosal (1), Sudeepa Roy (1), Cynthia Rudin (1), Alexander
Volfovsky (1) ((1) Duke University, (2) Harvard University, (3) Fudan
University, (4) New York University)
|
dame-flame: A Python Library Providing Fast Interpretable Matching for
Causal Inference
|
26 pages, 2 figures
| null | null | null |
cs.LG cs.MS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
dame-flame is a Python package for performing matching for observational
causal inference on datasets containing discrete covariates. This package
implements the Dynamic Almost Matching Exactly (DAME) and Fast Large-Scale
Almost Matching Exactly (FLAME) algorithms, which match treatment and control
units on subsets of the covariates. The resulting matched groups are
interpretable, because the matches are made on covariates, and high-quality,
because machine learning is used to determine which covariates are important to
match on. DAME solves an optimization problem that matches units on as many
covariates as possible, prioritizing matches on important covariates. FLAME
approximates the solution found by DAME via a much faster backward feature
selection procedure. The package provides several adjustable parameters to
adapt the algorithms to specific applications, and can calculate treatment
effect estimates after matching. Descriptions of these parameters, details on
estimating treatment effects, and further examples, can be found in the
documentation at
https://almost-matching-exactly.github.io/DAME-FLAME-Python-Package/
|
[
{
"version": "v1",
"created": "Wed, 6 Jan 2021 04:38:57 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Jan 2021 18:21:44 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Apr 2023 18:16:37 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Gupta",
"Neha R.",
""
],
[
"Orlandi",
"Vittorio",
""
],
[
"Chang",
"Chia-Rui",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Morucci",
"Marco",
""
],
[
"Dey",
"Pritam",
""
],
[
"Howell",
"Thomas J.",
""
],
[
"Sun",
"Xian",
""
],
[
"Ghosal",
"Angikar",
""
],
[
"Roy",
"Sudeepa",
""
],
[
"Rudin",
"Cynthia",
""
],
[
"Volfovsky",
"Alexander",
""
]
] |
new_dataset
| 0.991717 |
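The dame-flame entry above matches treated and control units exactly on subsets of discrete covariates. Rather than reproduce the package's API (whose exact class and argument names are not restated in the abstract), the pandas sketch below illustrates the underlying idea: matched groups are sets of units that agree on a chosen covariate subset and contain both treatment arms. Column names and data are invented.

```python
import pandas as pd

def exact_match_groups(df, covariates, treatment_col="treated"):
    """Keep only units that match exactly on `covariates` with both treated
    and control members present, and label each matched group."""
    matched = df.groupby(covariates).filter(
        lambda g: g[treatment_col].nunique() == 2)
    return matched.assign(group_id=matched.groupby(covariates).ngroup())

# Toy data with two discrete covariates.
df = pd.DataFrame({
    "age_bin": [1, 1, 2, 2, 3, 3],
    "smoker":  [0, 0, 1, 1, 0, 1],
    "treated": [1, 0, 1, 0, 1, 1],
    "outcome": [5.0, 3.5, 7.0, 6.0, 4.0, 4.5],
})
matched = exact_match_groups(df, ["age_bin", "smoker"])
# Per-group treated-minus-control outcome difference (a crude effect estimate).
print(matched.groupby("group_id").apply(
    lambda g: g.loc[g.treated == 1, "outcome"].mean()
            - g.loc[g.treated == 0, "outcome"].mean()))
```

DAME and FLAME go further by choosing which covariate subsets to match on, using learned covariate importance, rather than a single fixed subset as in this sketch.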
2104.14336
|
Rub\`en P\'erez Tito
|
Rub\`en Tito, Dimosthenis Karatzas, Ernest Valveny
|
Document Collection Visual Question Answering
| null | null |
10.1007/978-3-030-86331-9_50
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Current tasks and methods in Document Understanding aim to process documents
as single elements. However, documents are usually organized in collections
(historical records, purchase invoices) that provide context useful for their
interpretation. To address this problem, we introduce Document Collection
Visual Question Answering (DocCVQA) a new dataset and related task, where
questions are posed over a whole collection of document images and the goal is
not only to provide the answer to the given question, but also to retrieve the
set of documents that contain the information needed to infer the answer. Along
with the dataset we propose a new evaluation metric and baselines which provide
further insights to the new dataset and task.
|
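For intuition only, here is a toy evaluation loop for a DocCVQA-style prediction that pairs an answer string with a set of retrieved document ids. The scoring used here (exact-match answers plus set-based retrieval precision and recall) is an assumption made for illustration; the paper defines its own evaluation metric.

```python
def evaluate(predictions, references):
    """predictions / references: lists of dicts with 'answer' (str) and 'docs' (set)."""
    n = len(references)
    answer_acc = sum(p["answer"].strip().lower() == r["answer"].strip().lower()
                     for p, r in zip(predictions, references)) / n
    precisions, recalls = [], []
    for p, r in zip(predictions, references):
        hit = len(p["docs"] & r["docs"])   # retrieved documents that are relevant
        precisions.append(hit / len(p["docs"]) if p["docs"] else 0.0)
        recalls.append(hit / len(r["docs"]) if r["docs"] else 0.0)
    return {"answer_accuracy": answer_acc,
            "retrieval_precision": sum(precisions) / n,
            "retrieval_recall": sum(recalls) / n}

preds = [{"answer": "1,200", "docs": {"doc_03", "doc_17"}}]
refs  = [{"answer": "1,200", "docs": {"doc_03"}}]
print(evaluate(preds, refs))  # accuracy 1.0, precision 0.5, recall 1.0
```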
[
{
"version": "v1",
"created": "Tue, 27 Apr 2021 18:05:48 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Jun 2021 15:07:09 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Tito",
"Rubèn",
""
],
[
"Karatzas",
"Dimosthenis",
""
],
[
"Valveny",
"Ernest",
""
]
] |
new_dataset
| 0.998536 |
2107.01872
|
Chuan Tang
|
Chuan Tang, Xi Yang, Bojian Wu, Zhizhong Han, Yi Chang
|
Parts2Words: Learning Joint Embedding of Point Clouds and Texts by
Bidirectional Matching between Parts and Words
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shape-Text matching is an important task in high-level shape understanding.
Current methods mainly represent a 3D shape as multiple 2D rendered views,
which cannot be understood well due to the structural ambiguity caused by
self-occlusion in the limited number of views. To resolve this issue, we
directly represent 3D shapes as point clouds, and propose to learn a joint
embedding of point clouds and texts by bidirectional matching between parts
from shapes and words from texts. Specifically, we first segment the point
clouds into parts, and then leverage an optimal transport method to match parts
and words in an optimized feature space, where each part is represented by
aggregating the features of all points within it and each word is abstracted by
its contextual information. We optimize the feature space so as to enlarge the
similarities between paired training samples, while simultaneously maximizing
the margin between unpaired ones. Experiments demonstrate that our method
achieves a significant improvement in accuracy over state-of-the-art methods on
multi-modal retrieval tasks on the Text2Shape dataset. Code is available at
https://github.com/JLUtangchuan/Parts2Words.
|
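An illustrative sketch (not the authors' released code) of the part-word matching idea: cosine similarities between part and word embeddings are turned into a transport plan with Sinkhorn iterations, and the transport-weighted similarity can feed a margin-based retrieval loss. Array shapes and hyperparameters are assumptions.

```python
import numpy as np

def sinkhorn(cost, n_iters=50, eps=0.1):
    """Entropic optimal transport with uniform marginals; returns a transport plan."""
    K = np.exp(-cost / eps)                        # Gibbs kernel
    a = np.full(cost.shape[0], 1.0 / cost.shape[0])
    b = np.full(cost.shape[1], 1.0 / cost.shape[1])
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return np.diag(u) @ K @ np.diag(v)

def part_word_similarity(parts, words):
    """parts: (P, d) part embeddings; words: (W, d) word embeddings."""
    parts = parts / np.linalg.norm(parts, axis=1, keepdims=True)
    words = words / np.linalg.norm(words, axis=1, keepdims=True)
    sim = parts @ words.T                          # cosine similarity matrix
    plan = sinkhorn(1.0 - sim)                     # cost = 1 - similarity
    return float((plan * sim).sum())               # transport-weighted similarity

rng = np.random.default_rng(0)
shape_parts = rng.normal(size=(4, 16))             # e.g. 4 segmented parts
text_words = rng.normal(size=(6, 16))              # e.g. 6 word embeddings
print(part_word_similarity(shape_parts, text_words))
```

In a full training loop, this scalar similarity would be computed for paired and unpaired shape-text samples and plugged into a hinge-style margin loss, matching the objective described in the abstract.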
[
{
"version": "v1",
"created": "Mon, 5 Jul 2021 08:55:34 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Apr 2023 02:12:38 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Tang",
"Chuan",
""
],
[
"Yang",
"Xi",
""
],
[
"Wu",
"Bojian",
""
],
[
"Han",
"Zhizhong",
""
],
[
"Chang",
"Yi",
""
]
] |
new_dataset
| 0.967191 |
2107.05411
|
Masayuki Tezuka
|
Masayuki Tezuka, Yusuke Yoshida, Keisuke Tanaka
|
Weakened Random Oracle Models with Target Prefix
| null |
SecITC 2018
|
10.1007/978-3-030-12942-2_26
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weakened random oracle models (WROMs) are variants of the random oracle model
(ROM). A WROM has the random oracle and an additional oracle that breaks
some property of a hash function. By analyzing the security of cryptographic
schemes in WROMs, we can specify the property of a hash function on which the
security of cryptographic schemes depends. Liskov (SAC 2006) proposed WROMs, and
later Numayama et al. (PKC 2008) formalized them as CT-ROM, SPT-ROM, and
FPT-ROM. In each model, there is an additional oracle that breaks collision
resistance, second preimage resistance, or preimage resistance, respectively. Tan
and Wong (ACISP 2012) proposed the generalized FPT-ROM (GFPT-ROM), which was
intended to capture the chosen-prefix collision attack suggested by Stevens et
al. (EUROCRYPT 2007). In this paper, in order to analyze the security of
cryptographic schemes more precisely, we formalize GFPT-ROM and propose three
additional WROMs which capture the chosen-prefix collision attack and its
variants. In particular, we focus on signature schemes such as RSA-FDH, its
variants, and DSA, in order to understand essential roles of WROMs in their
security proofs.
|
[
{
"version": "v1",
"created": "Mon, 12 Jul 2021 13:28:25 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Tezuka",
"Masayuki",
""
],
[
"Yoshida",
"Yusuke",
""
],
[
"Tanaka",
"Keisuke",
""
]
] |
new_dataset
| 0.987516 |
2112.12761
|
Gengshan Yang
|
Gengshan Yang, Minh Vo, Natalia Neverova, Deva Ramanan, Andrea
Vedaldi, Hanbyul Joo
|
BANMo: Building Animatable 3D Neural Models from Many Casual Videos
|
CVPR 2022 camera-ready version (last update: May 2022)
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior work for articulated 3D shape reconstruction often relies on
specialized sensors (e.g., synchronized multi-camera systems), or pre-built 3D
deformable models (e.g., SMAL or SMPL). Such methods are not able to scale to
diverse sets of objects in the wild. We present BANMo, a method that requires
neither a specialized sensor nor a pre-defined template shape. BANMo builds
high-fidelity, articulated 3D models (including shape and animatable skinning
weights) from many monocular casual videos in a differentiable rendering
framework. While the use of many videos provides more coverage of camera views
and object articulations, it introduces significant challenges in establishing
correspondences across scenes with different backgrounds, illumination
conditions, etc. Our key insight is to merge three schools of thought: (1)
classic deformable shape models that make use of articulated bones and blend
skinning, (2) volumetric neural radiance fields (NeRFs) that are amenable to
gradient-based optimization, and (3) canonical embeddings that generate
correspondences between pixels and an articulated model. We introduce neural
blend skinning models that allow for differentiable and invertible articulated
deformations. When combined with canonical embeddings, such models allow us to
establish dense correspondences across videos that can be self-supervised with
cycle consistency. On real and synthetic datasets, BANMo shows higher-fidelity
3D reconstructions than prior works for humans and animals, with the ability to
render realistic images from novel viewpoints and poses. Project webpage:
banmo-www.github.io .
|
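A minimal sketch of the blend-skinning building block referenced above: each 3D point is deformed by a skinning-weighted combination of per-bone rigid transforms. This is plain linear blend skinning for illustration; BANMo's neural, invertible formulation is more involved.

```python
import numpy as np

def blend_skinning(points, weights, rotations, translations):
    """points: (N, 3); weights: (N, B) rows summing to 1;
    rotations: (B, 3, 3); translations: (B, 3). Returns deformed points (N, 3)."""
    # Apply every bone's rigid transform to every point: shape (B, N, 3).
    per_bone = np.einsum("bij,nj->bni", rotations, points) + translations[:, None, :]
    # Skinning-weighted average over bones: shape (N, 3).
    return np.einsum("nb,bni->ni", weights, per_bone)

# Two bones: the identity and a 90-degree rotation about the z-axis.
R = np.stack([np.eye(3),
              np.array([[0.0, -1.0, 0.0],
                        [1.0,  0.0, 0.0],
                        [0.0,  0.0, 1.0]])])
t = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
pts = np.array([[1.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])          # the point is influenced equally by both bones
print(blend_skinning(pts, w, R, t))  # halfway between the two rigid motions
```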
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 18:30:31 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Dec 2021 06:09:12 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Apr 2023 13:57:31 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Yang",
"Gengshan",
""
],
[
"Vo",
"Minh",
""
],
[
"Neverova",
"Natalia",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Vedaldi",
"Andrea",
""
],
[
"Joo",
"Hanbyul",
""
]
] |
new_dataset
| 0.99556 |
2202.09807
|
Masayuki Tezuka
|
Masayuki Tezuka, Xiangyu Su, Keisuke Tanaka
|
A t-out-of-n Redactable Signature Scheme
| null |
CANS 2019
|
10.1007/978-3-030-31578-8_26
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A redactable signature scheme allows removing parts of a signed message
without invalidating the signature. Currently, the need to prove the validity
of digital documents issued by governments and enterprises is increasing.
However, when disclosing documents, governments and enterprises must remove
privacy information concerning individuals. A redactable signature scheme is
useful for such a situation. In this paper, we introduce the new notion of the
t-out-of-n redactable signature scheme. This scheme has a signer, n redactors,
a combiner, and a verifier. The signer designates n redactors and a combiner in
advance and generates a signature of a message M. Each redactor decides which
parts he or she wants to remove from the message and generates a piece of
redaction information. The combiner collects the pieces of redaction information
from all redactors, extracts the parts of the message that more than t redactors
want to remove, and generates a redacted message. We consider the one-time
redaction model, which allows a signature generated by the signer to be redacted
only once. We formalize the one-time redaction t-out-of-n redactable signature
scheme, define its security, and give a construction using a pairing-based
aggregate signature scheme in the random oracle model.
|
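A toy sketch of the combiner's threshold step, not the cryptographic construction itself: each redactor submits the indices of the message parts it wants removed, and the combiner redacts exactly the parts requested by more than t redactors.

```python
from collections import Counter

def combine_redactions(message_parts, redaction_requests, t):
    """message_parts: list of parts; redaction_requests: one set of part indices
    per redactor; a part is removed iff strictly more than t redactors request it."""
    votes = Counter(i for request in redaction_requests for i in request)
    return [part for i, part in enumerate(message_parts) if votes[i] <= t]

parts = ["name: Alice", "dob: 1990-01-01", "salary: 50k", "dept: R&D"]
requests = [{1, 2}, {2}, {2, 3}]     # three redactors (n = 3)
print(combine_redactions(parts, requests, t=1))
# Part 2 ("salary") receives 3 > 1 votes and is redacted; parts 1 and 3 get only 1 vote each.
```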
[
{
"version": "v1",
"created": "Sun, 20 Feb 2022 12:36:06 GMT"
}
] | 2023-04-04T00:00:00 |
[
[
"Tezuka",
"Masayuki",
""
],
[
"Su",
"Xiangyu",
""
],
[
"Tanaka",
"Keisuke",
""
]
] |
new_dataset
| 0.996268 |