id (stringlengths 9-10) | submitter (stringlengths 2-52, ⌀) | authors (stringlengths 4-6.51k) | title (stringlengths 4-246) | comments (stringlengths 1-523, ⌀) | journal-ref (stringlengths 4-345, ⌀) | doi (stringlengths 11-120, ⌀) | report-no (stringlengths 2-243, ⌀) | categories (stringlengths 5-98) | license (stringclasses, 9 values) | abstract (stringlengths 33-3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses, 1 value) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2305.02517
|
Jun-Yu Ma
|
Jun-Yu Ma, Jia-Chen Gu, Jiajun Qi, Zhen-Hua Ling, Quan Liu, Xiaoyi
Zhao
|
USTC-NELSLIP at SemEval-2023 Task 2: Statistical Construction and Dual
Adaptation of Gazetteer for Multilingual Complex NER
|
Winner system (USTC-NELSLIP) of SemEval 2023 MultiCoNER II shared
task on Hindi track. arXiv admin note: substantial text overlap with
arXiv:2203.03216
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the system developed by the USTC-NELSLIP team for
SemEval-2023 Task 2 Multilingual Complex Named Entity Recognition (MultiCoNER
II). A method named Statistical Construction and Dual Adaptation of Gazetteer
(SCDAG) is proposed for Multilingual Complex NER. The method first utilizes a
statistics-based approach to construct a gazetteer. Second, the
representations of the gazetteer network and the language model are adapted by
minimizing the KL divergence between them at both the sentence level and the
entity level. Finally, the two networks are integrated for supervised
named entity recognition (NER) training. The proposed method is applied to
XLM-R with a gazetteer built from Wikidata, and shows great generalization
ability across different tracks. Experimental results and detailed analysis
verify the effectiveness of the proposed method. The official results show that
our system ranked 1st on one track (Hindi) in this task.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 03:00:46 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Ma",
"Jun-Yu",
""
],
[
"Gu",
"Jia-Chen",
""
],
[
"Qi",
"Jiajun",
""
],
[
"Ling",
"Zhen-Hua",
""
],
[
"Liu",
"Quan",
""
],
[
"Zhao",
"Xiaoyi",
""
]
] |
new_dataset
| 0.994377 |
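The SCDAG entry above describes adapting gazetteer-network and language-model representations by minimizing a KL divergence at the sentence and entity levels. Below is a minimal PyTorch sketch of that adaptation step, assuming pooled sentence vectors and treating the LM states as a fixed target; all names and shapes are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def kl_adaptation_loss(lm_states, gaz_states):
    """Pull gazetteer-network representations toward the language model's
    by minimizing KL divergence (sentence level: inputs are pooled vectors)."""
    log_p = F.log_softmax(gaz_states, dim=-1)   # adapted network (student)
    q = F.softmax(lm_states.detach(), dim=-1)   # frozen LM states as target
    return F.kl_div(log_p, q, reduction="batchmean")

# Illustrative usage with pooled sentence vectors from both encoders.
lm_sent = torch.randn(8, 768)                       # e.g. XLM-R sentence states
gaz_sent = torch.randn(8, 768, requires_grad=True)  # gazetteer-network states
loss = kl_adaptation_loss(lm_sent, gaz_sent)
loss.backward()
print(float(loss))
```

An entity-level variant would apply the same loss to span-pooled representations rather than sentence-pooled ones.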
2305.02519
|
Zhou Yu
|
Zhou Yu, Lixiang Zheng, Zhou Zhao, Fei Wu, Jianping Fan, Kui Ren, Jun
Yu
|
ANetQA: A Large-scale Benchmark for Fine-grained Compositional Reasoning
over Untrimmed Videos
|
Accepted at CVPR 2023, Project homepage at:
https://milvlg.github.io/anetqa/
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Building benchmarks to systematically analyze different capabilities of video
question answering (VideoQA) models is challenging yet crucial. Existing
benchmarks often use non-compositional simple questions and suffer from
language biases, making it difficult to diagnose model weaknesses incisively. A
recent benchmark, AGQA, poses a promising paradigm to generate QA pairs
automatically from pre-annotated scene graphs, enabling it to measure diverse
reasoning abilities with granular control. However, its questions have
limitations in reasoning about the fine-grained semantics in videos as such
information is absent in its scene graphs. To this end, we present ANetQA, a
large-scale benchmark that supports fine-grained compositional reasoning over
the challenging untrimmed videos from ActivityNet. Similar to AGQA, the QA
pairs in ANetQA are automatically generated from annotated video scene graphs.
The fine-grained properties of ANetQA are reflected in the following: (i)
untrimmed videos with fine-grained semantics; (ii) spatio-temporal scene graphs
with fine-grained taxonomies; and (iii) diverse questions generated from
fine-grained templates. ANetQA attains 1.4 billion unbalanced and 13.4 million
balanced QA pairs, which is an order of magnitude larger than AGQA with a
similar number of videos. Comprehensive experiments are performed for
state-of-the-art methods. The best model achieves 44.5% accuracy while human
performance tops out at 84.5%, leaving sufficient room for improvement.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 03:04:59 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Yu",
"Zhou",
""
],
[
"Zheng",
"Lixiang",
""
],
[
"Zhao",
"Zhou",
""
],
[
"Wu",
"Fei",
""
],
[
"Fan",
"Jianping",
""
],
[
"Ren",
"Kui",
""
],
[
"Yu",
"Jun",
""
]
] |
new_dataset
| 0.999553 |
2305.02525
|
Hui-Ru Ho
|
Hui-Ru Ho, Nathan White, Edward Hubbard, Bilge Mutlu
|
Designing Parent-child-robot Interactions to Facilitate In-Home Parental
Math Talk with Young Children
|
15 pages, Accepted to IDC'23
| null | null | null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parent-child interaction is critical for child development, yet parents may
need guidance in some aspects of their engagement with their children. Current
research on educational math robots focuses on child-robot interactions but
falls short of including the parents and integrating the critical role they
play in children's learning. We explore how educational robots can be designed
to facilitate parent-child conversations, focusing on math talk, a predictor of
later math ability in children. We prototyped capabilities for a social robot
to support math talk via reading and play activities and conducted an
exploratory Wizard-of-Oz in-home study for parent-child interactions
facilitated by a robot. Our findings yield insights into how parents were
inspired by the robot's prompts, their desired interaction styles and methods
for the robot, and how they wanted to include the robot in the activities,
leading to guidelines for the design of parent-child-robot interaction in
educational contexts.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 03:25:18 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Ho",
"Hui-Ru",
""
],
[
"White",
"Nathan",
""
],
[
"Hubbard",
"Edward",
""
],
[
"Mutlu",
"Bilge",
""
]
] |
new_dataset
| 0.988602 |
2305.02560
|
Zhewen Yang
|
Zhewen Yang, Changrong Wu, Chen Tian, Zhaochen Zhang
|
ProNet: Network-level Bandwidth Sharing among Tenants in Cloud
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In today's private clouds, datacenter resources are shared by multiple
tenants. Unlike storage and computing resources, bandwidth is challenging to
allocate among tenants in private datacenter networks.
State-of-the-art approaches are not effective or practical enough to meet
tenants' bandwidth requirements. In this paper, we propose ProNet, a practical
end-host-based solution for bandwidth sharing among tenants to meet their
various demands. The key idea of ProNet is byte-counter, a mechanism to collect
the bandwidth usage of tenants on end-hosts to guide the adjustment of the
whole network allocation, without putting much pressure on switches. We
evaluate ProNet both in our testbed and large-scale simulations. Results show
that ProNet can support multiple allocation policies such as network
proportionality and minimum bandwidth guarantee. Accordingly, the
application-level performance is improved.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 05:26:11 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Yang",
"Zhewen",
""
],
[
"Wu",
"Changrong",
""
],
[
"Tian",
"Chen",
""
],
[
"Zhang",
"Zhaochen",
""
]
] |
new_dataset
| 0.971499 |
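The byte-counter idea in the ProNet entry above can be sketched as a per-tenant counter on each end host feeding a periodic allocation step. The following Python sketch assumes epoch-based reports and a weighted water-filling rule for the "network proportionality" policy; it is an illustration of the idea, not ProNet's actual implementation.

```python
from collections import defaultdict

class ByteCounter:
    """Per-end-host byte counter: tallies tenant traffic each epoch."""
    def __init__(self):
        self.usage = defaultdict(int)

    def on_packet(self, tenant, nbytes):
        self.usage[tenant] += nbytes

    def epoch_report(self):
        report, self.usage = dict(self.usage), defaultdict(int)
        return report

def allocate(demands, capacity, weights):
    """Weighted proportional allocation capped by measured demand
    (water-filling): capacity unused by light tenants is redistributed."""
    alloc, remaining, active = {}, capacity, dict(demands)
    while active:
        total_w = sum(weights[t] for t in active)
        share = {t: remaining * weights[t] / total_w for t in active}
        capped = {t: d for t, d in active.items() if d <= share[t]}
        if not capped:                  # every remaining tenant is saturated
            alloc.update(share)
            return alloc
        for t, d in capped.items():     # satisfy light tenants fully
            alloc[t] = d
            remaining -= d
            del active[t]
    return alloc

counter = ByteCounter()
counter.on_packet("tenant_a", 2_000_000)
counter.on_packet("tenant_b", 90_000_000)
demands = counter.epoch_report()
print(allocate(demands, capacity=50_000_000,
               weights={"tenant_a": 1, "tenant_b": 1}))
# {'tenant_a': 2000000, 'tenant_b': 48000000.0}
```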
2305.02605
|
Xiang Zheng
|
Xiang Zheng, Xingjun Ma, Shengjie Wang, Xinyu Wang, Chao Shen, Cong
Wang
|
IMAP: Intrinsically Motivated Adversarial Policy
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL) agents are known to be vulnerable to evasion
attacks during deployment. In single-agent environments, attackers can inject
imperceptible perturbations on the policy or value network's inputs or outputs;
in multi-agent environments, attackers can control an adversarial opponent to
indirectly influence the victim's observation. Adversarial policies offer a
promising solution to craft such attacks. Still, current approaches either
require perfect or partial knowledge of the victim policy or suffer from sample
inefficiency due to the sparsity of task-related rewards. To overcome these
limitations, we propose the Intrinsically Motivated Adversarial Policy (IMAP)
for efficient black-box evasion attacks in single- and multi-agent environments
without any knowledge of the victim policy. IMAP uses four intrinsic objectives
based on state coverage, policy coverage, risk, and policy divergence to
encourage exploration and discover stronger attacking skills. We also design a
novel Bias-Reduction (BR) method to boost IMAP further. Our experiments
demonstrate the effectiveness of these intrinsic objectives and BR in improving
adversarial policy learning in the black-box setting against multiple types of
victim agents in various single- and multi-agent MuJoCo environments. Notably,
our IMAP reduces the performance of the state-of-the-art robust WocaR-PPO
agents by 34\%-54\% and achieves a SOTA attacking success rate of 83.91\% in
the two-player zero-sum game YouShallNotPass.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 07:24:12 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Zheng",
"Xiang",
""
],
[
"Ma",
"Xingjun",
""
],
[
"Wang",
"Shengjie",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Shen",
"Chao",
""
],
[
"Wang",
"Cong",
""
]
] |
new_dataset
| 0.996612 |
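IMAP's four intrinsic objectives are only named, not specified, in the abstract above. As a generic illustration of how an intrinsic state-coverage bonus augments a sparse task reward, here is a count-based sketch; the paper's actual objectives and formulation may differ substantially.

```python
import numpy as np
from collections import defaultdict

class StateCoverageBonus:
    """Count-based novelty bonus over discretized states: an illustrative
    stand-in for a state-coverage objective, not IMAP's exact formulation."""
    def __init__(self, bin_width=0.25, scale=0.1):
        self.counts = defaultdict(int)
        self.bin_width = bin_width
        self.scale = scale

    def __call__(self, state):
        key = tuple(np.floor(np.asarray(state) / self.bin_width).astype(int))
        self.counts[key] += 1
        return self.scale / np.sqrt(self.counts[key])

bonus = StateCoverageBonus()
task_reward = 0.0                       # sparse task-related reward
state = np.array([0.1, -0.3, 0.7])
shaped = task_reward + bonus(state)     # encourages visiting novel states
print(shaped)
```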
2305.02627
|
Guoqing Yang
|
Guoqing Yang, Fuyou Xue, Qi Zhang, Ke Xie, Chi-Wing Fu, Hui Huang
|
UrbanBIS: a Large-scale Benchmark for Fine-grained Urban Building
Instance Segmentation
|
11 pages, 6 figures. Accepted by SIGGRAPH 2023
| null |
10.1145/3588432.3591508
| null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the UrbanBIS benchmark for large-scale 3D urban understanding,
supporting practical urban-level semantic and building-level instance
segmentation. UrbanBIS comprises six real urban scenes, with 2.5 billion
points, covering a vast area of 10.78 square kilometers and 3,370 buildings,
captured by 113,346 views of aerial photogrammetry. Particularly, UrbanBIS
provides not only semantic-level annotations on a rich set of urban objects,
including buildings, vehicles, vegetation, roads, and bridges, but also
instance-level annotations on the buildings. Further, UrbanBIS is the first 3D
dataset that introduces fine-grained building sub-categories, considering a
wide variety of shapes for different building types. Besides, we propose B-Seg,
a building instance segmentation method to establish UrbanBIS. B-Seg adopts an
end-to-end framework with a simple yet effective strategy for handling
large-scale point clouds. Compared with mainstream methods, B-Seg achieves
better accuracy with faster inference speed on UrbanBIS. In addition to the
carefully-annotated point clouds, UrbanBIS provides high-resolution
aerial-acquisition photos and high-quality large-scale 3D reconstruction
models, which shall facilitate a wide range of studies such as multi-view
stereo, urban LOD generation, aerial path planning, autonomous navigation, road
network extraction, and so on, thus serving as an important platform for many
intelligent city applications.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 08:01:38 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Yang",
"Guoqing",
""
],
[
"Xue",
"Fuyou",
""
],
[
"Zhang",
"Qi",
""
],
[
"Xie",
"Ke",
""
],
[
"Fu",
"Chi-Wing",
""
],
[
"Huang",
"Hui",
""
]
] |
new_dataset
| 0.999834 |
2305.02651
|
Maciej Wielgosz
|
Maciej Wielgosz and Stefano Puliti and Phil Wilkes and Rasmus Astrup
|
Point2Tree(P2T) -- framework for parameter tuning of semantic and
instance segmentation used with mobile laser scanning data in coniferous
forest
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article introduces Point2Tree, a novel framework that incorporates a
three-stage process involving semantic segmentation, instance segmentation,
and an analysis of hyperparameter importance. It introduces a comprehensive
and modular approach to processing laser point clouds in forestry. We tested
it on two independent datasets. The first area was located in an actively
managed, boreal, coniferous-dominated forest in V{\aa}ler, Norway; 16 circular
plots of 400 square meters were selected to cover a range of forest conditions
in terms of species composition and stand density. We trained a model based on
the Pointnet++ architecture, which achieves a 0.92 F1-score in semantic
segmentation. As a second step in our pipeline, we used a graph-based approach
for instance segmentation, which reached an F1-score of approximately 0.6. The
optimization allowed us to further boost the performance of the pipeline by
approximately 4 percentage points.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 08:45:17 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Wielgosz",
"Maciej",
""
],
[
"Puliti",
"Stefano",
""
],
[
"Wilkes",
"Phil",
""
],
[
"Astrup",
"Rasmus",
""
]
] |
new_dataset
| 0.999459 |
2305.02697
|
Sabri Pllana
|
Julian Kunkel, Christian Boehme, Jonathan Decker, Fabrizio Magugliani,
Dirk Pleiter, Bastian Koller, Karthee Sivalingam, Sabri Pllana, Alexander
Nikolov, Mujdat Soyturk, Christian Racca, Andrea Bartolini, Adrian Tate,
Berkay Yaman
|
DECICE: Device-Edge-Cloud Intelligent Collaboration Framework
| null | null | null | null |
cs.DC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
DECICE is a Horizon Europe project that is developing an AI-enabled open and
portable management framework for automatic and adaptive optimization and
deployment of applications across the computing continuum, spanning from IoT
sensors on the Edge to large-scale Cloud/HPC computing infrastructures. In this
paper, we describe the DECICE framework and architecture. Furthermore, we
highlight use-cases for framework evaluation: intelligent traffic intersection,
magnetic resonance imaging, and emergency response.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 10:11:14 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Kunkel",
"Julian",
""
],
[
"Boehme",
"Christian",
""
],
[
"Decker",
"Jonathan",
""
],
[
"Magugliani",
"Fabrizio",
""
],
[
"Pleiter",
"Dirk",
""
],
[
"Koller",
"Bastian",
""
],
[
"Sivalingam",
"Karthee",
""
],
[
"Pllana",
"Sabri",
""
],
[
"Nikolov",
"Alexander",
""
],
[
"Soyturk",
"Mujdat",
""
],
[
"Racca",
"Christian",
""
],
[
"Bartolini",
"Andrea",
""
],
[
"Tate",
"Adrian",
""
],
[
"Yaman",
"Berkay",
""
]
] |
new_dataset
| 0.999343 |
2305.02723
|
Bengisu Cagiltay
|
Bengisu Cagiltay, Bilge Mutlu, Margaret Kerr
|
Family Theories in Child-Robot Interactions: Understanding Families as a
Whole for Child-Robot Interaction Design
| null | null |
10.1145/3585088.3589386
| null |
cs.HC cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this work, we discuss a theoretically motivated family-centered design
approach for child-robot interactions, adapted by Family Systems Theory (FST)
and Family Ecological Model (FEM). Long-term engagement and acceptance of
robots in the home is influenced by factors that surround the child and the
family, such as child-sibling-parent relationships and family routines,
rituals, and values. A family-centered approach to interaction design is
essential when developing in-home technology for children, especially for
social agents like robots with which they can form connections and
relationships. We review related literature in family theories and connect it
with child-robot interaction and child-computer interaction research. We
present two case studies that exemplify how family theories, FST and FEM, can
inform the integration of robots into homes, particularly research into
child-robot and family-robot interaction. Finally, we pose five overarching
recommendations for a family-centered design approach in child-robot
interactions.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 10:43:19 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Cagiltay",
"Bengisu",
""
],
[
"Mutlu",
"Bilge",
""
],
[
"Kerr",
"Margaret",
""
]
] |
new_dataset
| 0.950899 |
2305.02793
|
Daniel Hausmann
|
Daniel Hausmann, Mathieu Lehaut, Nir Pitermann
|
Symbolic Reactive Synthesis for the Safety and EL-fragment of LTL
| null | null | null | null |
cs.FL cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
We suggest an expressive fragment of LTL for which reactive synthesis can be
performed by symbolically analyzing games. For general LTL, this kind of
analysis is impossible due to the complexity of determinization. Bypasses are
either by enumerative handling of determinization or by restricting attention
to fragments of the language. Here, we take the second approach and suggest a
fragment combining a safety specification and a liveness part. The safety part
is unrestricted but allows symbolic treatment due to the simplicity of
determinization in the case of safety languages. The liveness part is very
general, allowing one to define Emerson-Lei conditions on occurrences of letters.
We elaborate the construction of an Emerson-Lei game that captures the
synthesis problem. We also show how Emerson-Lei games can be analyzed
symbolically by providing a fixpoint-based characterization of the winning
region, which is obtained from an analysis of the Zielonka tree of the winning
condition. Our algorithm generalizes the solutions of games with known winning
conditions such as B\"uchi, GR[1], parity, Streett, Rabin and Muller
objectives, and in the case of these conditions reproduces previously known
algorithms and complexity results; the algorithm solves unrestricted
Emerson-Lei games with $n$ nodes, $m$ edges and $k$ colors in time
$\mathcal{O}(k!\cdot m\cdot n^k)$ and yields winning strategies with memory
$\mathcal{O}(k!)$. The runtime of the resulting overall synthesis algorithm is
single-exponential in the size of the liveness part and doubly-exponential in
the size of the safety part, as it is for (safety) LTL. However, the trade-off
between enumerative and symbolic aspects is maximized by enumeratively
analyzing the liveness condition and generating from it a symbolic game
analysis algorithm.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 12:48:31 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Hausmann",
"Daniel",
""
],
[
"Lehaut",
"Mathieu",
""
],
[
"Pitermann",
"Nir",
""
]
] |
new_dataset
| 0.991786 |
2305.02814
|
Haoyu Zhang
|
Yuanyuan Liu, Haoyu Zhang, Yibing Zhan, Zijing Chen, Guanghao Yin, Lin
Wei and Zhe Chen
|
Noise-Resistant Multimodal Transformer for Emotion Recognition
| null | null | null | null |
cs.MM cs.AI cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal emotion recognition identifies human emotions from various data
modalities like video, text, and audio. However, we found that this task can be
easily affected by noisy information that does not contain useful semantics. To
this end, we present a novel paradigm that attempts to extract noise-resistant
features in its pipeline and introduces a noise-aware learning scheme to
effectively improve the robustness of multimodal emotion understanding. Our new
pipeline, namely Noise-Resistant Multimodal Transformer (NORM-TR), mainly
introduces a Noise-Resistant Generic Feature (NRGF) extractor and a Transformer
for the multimodal emotion recognition task. In particular, we make the NRGF
extractor learn a generic and disturbance-insensitive representation so that
consistent and meaningful semantics can be obtained. Furthermore, we apply a
Transformer to incorporate Multimodal Features (MFs) of multimodal inputs based
on their relations to the NRGF. Therefore, useful information to which the NRGF
may be insensitive can be complemented by the MFs, which contain more details. To
train the NORM-TR properly, our proposed noise-aware learning scheme
complements normal emotion recognition losses by enhancing the learning against
noises. Our learning scheme explicitly adds noises to either all the modalities
or a specific modality at random locations of a multimodal input sequence. We
correspondingly introduce two adversarial losses to encourage the NRGF
extractor to learn to extract the NRGFs invariant to the added noises, thus
facilitating the NORM-TR to achieve more favorable multimodal emotion
recognition performance. In practice, on several popular multimodal datasets,
our NORM-TR achieves state-of-the-art performance and outperforms existing
methods by a large margin, which demonstrates that the ability to resist noisy
information is important for effective emotion recognition.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 13:22:21 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Liu",
"Yuanyuan",
""
],
[
"Zhang",
"Haoyu",
""
],
[
"Zhan",
"Yibing",
""
],
[
"Chen",
"Zijing",
""
],
[
"Yin",
"Guanghao",
""
],
[
"Wei",
"Lin",
""
],
[
"Chen",
"Zhe",
""
]
] |
new_dataset
| 0.988026 |
2305.02836
|
Ana Oliveira da Costa
|
Ezio Bartocci, Thomas A. Henzinger, Dejan Nickovic, Ana Oliveira da
Costa
|
Hypernode Automata
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce hypernode automata as a new specification formalism for
hyperproperties of concurrent systems. They are finite automata with nodes
labeled with hypernode logic formulas and transitions labeled with actions. A
hypernode logic formula specifies relations between sequences of variable
values in different system executions. Unlike HyperLTL, hypernode logic takes
an asynchronous view on execution traces by constraining the values and the
order of value changes of each variable without correlating the timing of the
changes. Different execution traces are synchronized solely through the
transitions of hypernode automata. Hypernode automata naturally combine
asynchronicity at the node level with synchronicity at the transition level. We
show that the model-checking problem for hypernode automata is decidable over
action-labeled Kripke structures, whose actions induce transitions of the
specification automaton. For this reason, hypernode automata are a suitable
formalism for specifying and verifying asynchronous hyperproperties, such as
declassifying observational determinism in multi-threaded programs.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 13:52:13 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Bartocci",
"Ezio",
""
],
[
"Henzinger",
"Thomas A.",
""
],
[
"Nickovic",
"Dejan",
""
],
[
"da Costa",
"Ana Oliveira",
""
]
] |
new_dataset
| 0.996855 |
2305.02842
|
Zhou'an Zhu
|
Zhou'an_Zhu, Xin Li, Jicai Pan, Yufei Xiao, Yanan Chang, Feiyi Zheng,
Shangfei Wang
|
MEDIC: A Multimodal Empathy Dataset in Counseling
| null | null | null | null |
cs.CV cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Although empathic interaction between counselor and client is fundamental to
success in the psychotherapeutic process, there are currently few datasets to
aid a computational approach to empathy understanding. In this paper, we
construct a multimodal empathy dataset collected from face-to-face
psychological counseling sessions. The dataset consists of 771 video clips. We
also propose three labels (i.e., expression of experience, emotional reaction,
and cognitive reaction) to describe the degree of empathy between counselors
and their clients. Expression of experience describes whether the client has
expressed experiences that can trigger empathy, and emotional and cognitive
reactions indicate the counselor's empathic reactions. As an elementary
assessment of the usability of the constructed multimodal empathy dataset, an
interrater reliability analysis of annotators' subjective evaluations for video
clips is conducted using the intraclass correlation coefficient and Fleiss'
Kappa. Results prove that our data annotation is reliable. Furthermore, we
conduct empathy prediction using three typical methods, including the tensor
fusion network, the sentimental words aware fusion network, and a simple
concatenation model. The experimental results show that empathy can be well
predicted on our dataset. Our dataset is available for research purposes.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 14:02:02 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Zhou'an_Zhu",
"",
""
],
[
"Li",
"Xin",
""
],
[
"Pan",
"Jicai",
""
],
[
"Xiao",
"Yufei",
""
],
[
"Chang",
"Yanan",
""
],
[
"Zheng",
"Feiyi",
""
],
[
"Wang",
"Shangfei",
""
]
] |
new_dataset
| 0.999543 |
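The interrater reliability analysis mentioned in the MEDIC entry above (Fleiss' Kappa over annotators' subjective ratings) can be reproduced in outline with statsmodels; the ratings below are fabricated placeholders, not MEDIC data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical annotations: 6 video clips x 3 annotators,
# each label an ordinal empathy rating in {0, 1, 2}.
ratings = np.array([
    [2, 2, 2],
    [1, 1, 2],
    [0, 0, 0],
    [2, 1, 2],
    [0, 1, 0],
    [1, 1, 1],
])

table, _ = aggregate_raters(ratings)   # subjects x categories count table
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))
```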
2305.02888
|
Mehdi Sefidgar Dilmaghani Mr
|
Paul Kielty, Mehdi Sefidgar Dilmaghani, Cian Ryan, Joe Lemley, Peter
Corcoran
|
Neuromorphic Sensing for Yawn Detection in Driver Drowsiness
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Driver monitoring systems (DMS) are a key component of vehicular safety and
essential for the transition from semiautonomous to fully autonomous driving. A
key task for DMS is to ascertain the cognitive state of a driver and to
determine their level of tiredness. Neuromorphic vision systems, based on event
camera technology, provide advanced sensing of facial characteristics, in
particular the behavior of a driver's eyes. This research explores the
potential to extend neuromorphic sensing techniques to analyze the entire
facial region, detecting yawning behaviors that give a complementary indicator
of tiredness. A neuromorphic dataset is constructed from 952 video clips (481
yawns, 471 not-yawns) captured with an RGB color camera, with 37 subjects. A
total of 95200 neuromorphic image frames are generated from this video data
using a video-to-event converter. From these data 21 subjects were selected to
provide a training dataset, 8 subjects were used for validation data, and the
remaining 8 subjects were reserved for an "unseen" test dataset. An additional
12300 frames were generated from event simulations of a public dataset to test
against other methods. A CNN with self-attention and a recurrent head was
designed, trained, and tested with these data. Respective precision and recall
scores of 95.9 percent and 94.7 percent were achieved on our test set, and 89.9
percent and 91 percent on the simulated public test set, demonstrating the
feasibility to add yawn detection as a sensing component of a neuromorphic DMS.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 14:50:38 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Kielty",
"Paul",
""
],
[
"Dilmaghani",
"Mehdi Sefidgar",
""
],
[
"Ryan",
"Cian",
""
],
[
"Lemley",
"Joe",
""
],
[
"Corcoran",
"Peter",
""
]
] |
new_dataset
| 0.998936 |
2305.02911
|
Shan Jia
|
Chuanbo Hu, Shan Jia, Fan Zhang, Changjiang Xiao, Mindi Ruan, Jacob
Thrasher, Xin Li
|
UPDExplainer: an Interpretable Transformer-based Framework for Urban
Physical Disorder Detection Using Street View Imagery
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Urban Physical Disorder (UPD), such as old or abandoned buildings, broken
sidewalks, litter, and graffiti, has a negative impact on residents' quality of
life. Such disorders can also increase crime rates, cause social disorder, and pose a
public health risk. Currently, there is a lack of efficient and reliable
methods for detecting and understanding UPD. To bridge this gap, we propose
UPDExplainer, an interpretable transformer-based framework for UPD detection.
We first develop a UPD detection model based on the Swin Transformer
architecture, which leverages readily accessible street view images to learn
discriminative representations. In order to provide clear and comprehensible
evidence and analysis, we subsequently introduce a UPD factor identification
and ranking module that combines visual explanation maps with semantic
segmentation maps. This novel integrated approach enables us to identify the
exact objects within street view images that are responsible for physical
disorders and gain insights into the underlying causes. Experimental results on
the re-annotated Place Pulse 2.0 dataset demonstrate promising detection
performance of the proposed method, with an accuracy of 79.9%. For a
comprehensive evaluation of the method's ranking performance, we report the
mean Average Precision (mAP), R-Precision (RPrec), and Normalized Discounted
Cumulative Gain (NDCG), with success rates of 75.51%, 80.61%, and 82.58%,
respectively. We also present a case study of detecting and ranking physical
disorders in the southern region of downtown Los Angeles, California, to
demonstrate the practicality and effectiveness of our framework.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 15:18:28 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Hu",
"Chuanbo",
""
],
[
"Jia",
"Shan",
""
],
[
"Zhang",
"Fan",
""
],
[
"Xiao",
"Changjiang",
""
],
[
"Ruan",
"Mindi",
""
],
[
"Thrasher",
"Jacob",
""
],
[
"Li",
"Xin",
""
]
] |
new_dataset
| 0.997822 |
2305.02918
|
Alex Sprintson
|
Luke McHale, Paul V Gratz, and Alex Sprintson
|
Flow Correlator: A Flow Table Cache Management Strategy
|
26 pages, 22 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Switching, routing, and security functions are the backbone of packet
processing networks. Fast and efficient processing of packets requires
maintaining the state of a large number of transient network connections. In
particular, modern stateful firewalls, security monitoring devices, and
software-defined networking (SDN) programmable dataplanes require maintaining
stateful flow tables. These flow tables often grow much larger than can be
expected to fit within on-chip memory, requiring a managed caching layer to
maintain performance. This paper focuses on improving the efficiency of
caching, an important architectural component of the packet processing data
planes. We present a novel predictive approach to network flow table cache
management. Our approach leverages a Hashed Perceptron binary classifier as
well as an iterative approach to feature selection and ranking to improve the
reliability and performance of the data plane caches. We validate the
efficiency of the proposed techniques through extensive experimentation using
real-world data sets. Our numerical results demonstrate that our techniques
improve the reliability and performance of flow-centric packet processing
architectures.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 15:21:12 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"McHale",
"Luke",
""
],
[
"Gratz",
"Paul V",
""
],
[
"Sprintson",
"Alex",
""
]
] |
new_dataset
| 0.993705 |
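The Hashed Perceptron binary classifier named in the Flow Correlator entry above follows a well-known pattern from branch and cache prediction: hash each feature into its own weight table, sum the selected weights, and train only on mispredictions or low-confidence sums. Here is a compact sketch; the feature choices, table sizes, and thresholds are assumptions, not the paper's configuration.

```python
class HashedPerceptron:
    """Hashed Perceptron binary classifier (one weight table per feature)."""
    def __init__(self, n_tables=4, table_size=1024, threshold=12, wmax=31):
        self.tables = [[0] * table_size for _ in range(n_tables)]
        self.table_size = table_size
        self.threshold = threshold
        self.wmax = wmax

    def _indices(self, features):
        # One feature (e.g. hashed 5-tuple, packet-size bucket,
        # inter-arrival bucket, TTL) indexes each table.
        return [hash((i, f)) % self.table_size for i, f in enumerate(features)]

    def predict(self, features):
        s = sum(self.tables[i][j]
                for i, j in enumerate(self._indices(features)))
        return s >= 0, s

    def train(self, features, keep_in_cache):
        pred, s = self.predict(features)
        if pred != keep_in_cache or abs(s) <= self.threshold:
            delta = 1 if keep_in_cache else -1
            for i, j in enumerate(self._indices(features)):
                w = self.tables[i][j] + delta
                self.tables[i][j] = max(-self.wmax - 1, min(self.wmax, w))

p = HashedPerceptron()
flow_features = ("5tuple_hash_42", "size_bucket_3", "iat_bucket_1", "ttl_64")
p.train(flow_features, keep_in_cache=True)
print(p.predict(flow_features))
```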
2305.02921
|
Mohammad Rowshan
|
Mohammad Rowshan, Vlad-Florin Dr\u{a}goi, and Jinhong Yuan
|
On the Closed-form Weight Enumeration of Polar Codes: 1.5$d$-weight
Codewords
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The weight distribution of error correction codes is a critical determinant
of their error-correcting performance, making enumeration of utmost importance.
In the case of polar codes, the minimum weight $w_{\min}$ (which is equal to the
minimum distance $d$) is the only weight for which an explicit enumerator formula is
currently available. Having closed-form weight enumerators for polar codewords
with weights greater than the minimum weight not only simplifies the
enumeration process but also provides valuable insights towards constructing
better polar-like codes. In this paper, we contribute towards understanding the
algebraic structure underlying higher weights by analyzing Minkowski sums of
orbits. Our approach builds upon the lower triangular affine (LTA) group of
decreasing monomial codes. Specifically, we propose a closed-form expression
for the enumeration of codewords with weight $1.5\,w_{\min}$. Our simulations
demonstrate the potential for extending this method to higher weights.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 15:24:07 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Rowshan",
"Mohammad",
""
],
[
"Drăgoi",
"Vlad-Florin",
""
],
[
"Yuan",
"Jinhong",
""
]
] |
new_dataset
| 0.999748 |
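For intuition on why closed-form weight enumerators matter, note that exhaustive enumeration is only feasible for toy polar codes. The sketch below brute-forces the weight distribution of a code spanned by chosen rows of $G = F^{\otimes n}$ (the standard polar construction); the row choice is illustrative.

```python
import numpy as np
from collections import Counter
from itertools import product

def polar_generator(n):
    """G = F^{\otimes n} with the Arikan kernel F = [[1,0],[1,1]] (mod 2)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return G % 2

def weight_distribution(n, info_rows):
    """Exhaustive weight enumerator of the code spanned by the chosen rows.
    Cost grows as 2^k, which is why closed-form enumerators for weights
    such as 1.5 * d are valuable."""
    G = polar_generator(n)[list(info_rows)]
    dist = Counter()
    for msg in product([0, 1], repeat=len(info_rows)):
        cw = np.mod(np.array(msg, dtype=np.uint8) @ G, 2)
        dist[int(cw.sum())] += 1
    return dict(sorted(dist.items()))

# (8,4) polar-like code from the most reliable rows of F^{\otimes 3}.
print(weight_distribution(3, info_rows=[3, 5, 6, 7]))
# {0: 1, 4: 14, 8: 1}
```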
2305.02957
|
Barbara K\"onig
|
Paolo Baldan and Richard Eggert and Barbara K\"onig and Timo Matt and
Tommaso Padoan
|
A Monoidal View on Fixpoint Checks
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Fixpoints are ubiquitous in computer science as they play a central role in
providing a meaning to recursive and cyclic definitions. Bisimilarity,
behavioural metrics, termination probabilities for Markov chains and stochastic
games are defined in terms of least or greatest fixpoints. Here we show that
our recent work, which proposes a technique for checking whether the fixpoint of
a function is the least (or the greatest), admits a natural categorical
interpretation in terms of gs-monoidal categories.
The technique is based on a construction that maps a function to a suitable
approximation and the compositionality properties of this mapping are naturally
interpreted as a gs-monoidal functor. This guides the realisation of a tool,
called UDEfix, that allows one to build functions (and their approximations) like a
circuit out of basic building blocks and subsequently perform the fixpoint
checks.
We also show that a slight generalisation of the theory allows one to treat a
new relevant case study: coalgebraic behavioural metrics based on Wasserstein
liftings.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 16:04:34 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Baldan",
"Paolo",
""
],
[
"Eggert",
"Richard",
""
],
[
"König",
"Barbara",
""
],
[
"Matt",
"Timo",
""
],
[
"Padoan",
"Tommaso",
""
]
] |
new_dataset
| 0.992304 |
2305.02961
|
Mrinal Kanti Dhar
|
Mrinal Kanti Dhar, Taiyu Zhang, Yash Patel, and Zeyun Yu
|
FUSegNet: A Deep Convolutional Neural Network for Foot Ulcer
Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents FUSegNet, a new model for foot ulcer segmentation in
diabetes patients, which uses the pre-trained EfficientNet-b7 as a backbone to
address the issue of limited training samples. A modified spatial and channel
squeeze-and-excitation (scSE) module called parallel scSE or P-scSE is proposed
that combines additive and max-out scSE. A new arrangement is introduced for
the module by fusing it in the middle of each decoder stage. As the top decoder
stage carries a limited number of feature maps, max-out scSE is bypassed there
to form a shorted P-scSE. A set of augmentations, comprising geometric,
morphological, and intensity-based augmentations, is applied before feeding the
data into the network. The proposed model is first evaluated on a publicly
available chronic wound dataset where it achieves a data-based dice score of
92.70%, which is the highest score among the reported approaches. The model
outperforms other scSE-based UNet models in terms of Pratt's figure of merit
(PFOM) score, which evaluates the accuracy of edge localization, in most
categories. The model is then tested in the MICCAI 2021 FUSeg challenge,
where a variation of FUSegNet called x-FUSegNet is submitted. The x-FUSegNet
model, which takes the average of outputs obtained by FUSegNet using 5-fold
cross-validation, achieves a dice score of 89.23%, placing it at the top of the
FUSeg Challenge leaderboard. The source code for the model is available on
https://github.com/mrinal054/FUSegNet.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 16:07:22 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Dhar",
"Mrinal Kanti",
""
],
[
"Zhang",
"Taiyu",
""
],
[
"Patel",
"Yash",
""
],
[
"Yu",
"Zeyun",
""
]
] |
new_dataset
| 0.996118 |
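The P-scSE module described in the FUSegNet entry above combines additive and max-out fusions of the channel (cSE) and spatial (sSE) squeeze-and-excitation branches. Below is a PyTorch sketch of one plausible reading; the final fusion by summation is an assumption, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class PscSE(nn.Module):
    """Parallel scSE sketch: additive and max-out combinations of the
    channel (cSE) and spatial (sSE) squeeze-and-excitation branches."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        c = x * self.cse(x)                # channel-recalibrated features
        s = x * self.sse(x)                # spatially-recalibrated features
        return (c + s) + torch.max(c, s)   # additive + max-out in parallel

x = torch.randn(2, 64, 32, 32)
print(PscSE(64)(x).shape)                  # torch.Size([2, 64, 32, 32])
```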
2305.02966
|
Antonis Klironomos
|
Antonis Klironomos, Baifan Zhou, Zhipeng Tan, Zhuoxun Zheng, Gad-Elrab
Mohamed, Heiko Paulheim, Evgeny Kharlamov
|
ExeKGLib: Knowledge Graphs-Empowered Machine Learning Analytics
|
This paper has been accepted as a Demo paper at ESWC 2023
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many machine learning (ML) libraries are accessible online for ML
practitioners. Typical ML pipelines are complex and consist of a series of
steps, each of them invoking several ML libraries. In this demo paper, we
present ExeKGLib, a Python library that allows users with coding skills and
minimal ML knowledge to build ML pipelines. ExeKGLib relies on knowledge graphs
to improve the transparency and reusability of the built ML workflows, and to
ensure that they are executable. We demonstrate the usage of ExeKGLib and
compare it with conventional ML code to show its benefits.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 16:10:22 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Klironomos",
"Antonis",
""
],
[
"Zhou",
"Baifan",
""
],
[
"Tan",
"Zhipeng",
""
],
[
"Zheng",
"Zhuoxun",
""
],
[
"Mohamed",
"Gad-Elrab",
""
],
[
"Paulheim",
"Heiko",
""
],
[
"Kharlamov",
"Evgeny",
""
]
] |
new_dataset
| 0.991666 |
2305.03001
|
Rustam Tagiew
|
Rustam Tagiew, Martin K\"oppel, Karsten Schwalbe, Patrick Denzler,
Philipp Neumaier, Tobias Klockau, Martin Boekhoff, Pavel Klasek, Roman Tilly
|
OSDaR23: Open Sensor Data for Rail 2023
|
6 pages, 11 images, 3 tables
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For driverless train operation on mainline railways, several tasks need to be
implemented by technical systems. One of the most challenging tasks is to
monitor the train's driveway and its surroundings for potential obstacles due
to long braking distances. Machine learning algorithms can be used to analyze
data from vision sensors such as infrared (IR) and visual (RGB) cameras,
lidars, and radars to detect objects. Such algorithms require large amounts of
annotated data from objects in the rail environment that may pose potential
obstacles, as well as rail-specific objects such as tracks or catenary poles,
as training data. However, only very few datasets are publicly available and
these available datasets typically involve only a limited number of sensors.
Datasets and trained models from other domains, such as automotive, are useful
but insufficient for object detection in the railway context. Therefore, this
publication presents OSDaR23, a multi-sensor dataset of 21 sequences captured
in Hamburg, Germany, in September 2021. The sensor setup consisted of multiple
calibrated and synchronized IR/RGB cameras, lidars, a radar, and position and
acceleration sensors front-mounted on a railway vehicle. In addition to raw
data, the dataset contains 204091 polyline, polygonal, rectangle and cuboid
annotations for 20 different object classes. This dataset can also be used for
tasks going beyond collision prediction, which are listed in this paper.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 17:19:47 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Tagiew",
"Rustam",
""
],
[
"Köppel",
"Martin",
""
],
[
"Schwalbe",
"Karsten",
""
],
[
"Denzler",
"Patrick",
""
],
[
"Neumaier",
"Philipp",
""
],
[
"Klockau",
"Tobias",
""
],
[
"Boekhoff",
"Martin",
""
],
[
"Klasek",
"Pavel",
""
],
[
"Tilly",
"Roman",
""
]
] |
new_dataset
| 0.999817 |
2305.03007
|
James Gung
|
James Gung, Emily Moeng, Wesley Rose, Arshit Gupta, Yi Zhang, Saab
Mansour
|
NatCS: Eliciting Natural Customer Support Dialogues
|
Accepted to Findings of ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite growing interest in applications based on natural customer support
conversations, there exist remarkably few publicly available datasets that
reflect the expected characteristics of conversations in these settings.
Existing task-oriented dialogue datasets, which were collected to benchmark
dialogue systems mainly in written human-to-bot settings, are not
representative of real customer support conversations and do not provide
realistic benchmarks for systems that are applied to natural data. To address
this gap, we introduce NatCS, a multi-domain collection of spoken customer
service conversations. We describe our process for collecting synthetic
conversations between customers and agents based on natural language phenomena
observed in real conversations. Compared to previous dialogue datasets, the
conversations collected with our approach are more representative of real
human-to-human conversations along multiple metrics. Finally, we demonstrate
potential uses of NatCS, including dialogue act classification and intent
induction from conversations as potential applications, showing that dialogue
act annotations in NatCS provide more effective training data for modeling real
conversations compared to existing synthetic written datasets. We publicly
release NatCS to facilitate research in natural dialog systems.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 17:25:24 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Gung",
"James",
""
],
[
"Moeng",
"Emily",
""
],
[
"Rose",
"Wesley",
""
],
[
"Gupta",
"Arshit",
""
],
[
"Zhang",
"Yi",
""
],
[
"Mansour",
"Saab",
""
]
] |
new_dataset
| 0.999618 |
2305.03052
|
Basile Van Hoorick
|
Basile Van Hoorick, Pavel Tokmakov, Simon Stent, Jie Li, Carl Vondrick
|
Tracking through Containers and Occluders in the Wild
|
Accepted at CVPR 2023. Project webpage is available at:
https://tcow.cs.columbia.edu/
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tracking objects with persistence in cluttered and dynamic environments
remains a difficult challenge for computer vision systems. In this paper, we
introduce $\textbf{TCOW}$, a new benchmark and model for visual tracking
through heavy occlusion and containment. We set up a task where the goal is to,
given a video sequence, segment both the projected extent of the target object
and the surrounding container or occluder whenever one exists.
this task, we create a mixture of synthetic and annotated real datasets to
support both supervised learning and structured evaluation of model performance
under various forms of task variation, such as moving or nested containment. We
evaluate two recent transformer-based video models and find that while they can
be surprisingly capable of tracking targets under certain settings of task
variation, there remains a considerable performance gap before we can claim a
tracking model to have acquired a true notion of object permanence.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 17:59:58 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Van Hoorick",
"Basile",
""
],
[
"Tokmakov",
"Pavel",
""
],
[
"Stent",
"Simon",
""
],
[
"Li",
"Jie",
""
],
[
"Vondrick",
"Carl",
""
]
] |
new_dataset
| 0.995364 |
2208.01695
|
Kartik Lakhotia
|
Kartik Lakhotia, Maciej Besta, Laura Monroe, Kelly Isham, Patrick Iff,
Torsten Hoefler, Fabrizio Petrini
|
PolarFly: A Cost-Effective and Flexible Low-Diameter Topology
|
In Proceedings of International Conference for High Performance
Computing, Networking, Storage, and Analysis (SC) 2022
| null |
10.1109/SC41404.2022.00017
| null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present PolarFly, a diameter-2 network topology based on the
Erdos-Renyi family of polarity graphs from finite geometry. This is a highly
scalable low-diameter topology that asymptotically reaches the Moore bound on
the number of nodes for a given network degree and diameter.
PolarFly achieves high Moore bound efficiency even for the moderate radixes
commonly seen in current and near-future routers, reaching more than 96% of the
theoretical peak. It also offers more feasible router degrees than the
state-of-the-art solutions, greatly adding to the selection of scalable
diameter-2 networks. PolarFly enjoys many other topological properties highly
relevant in practice, such as a modular design and expandability that allow
incremental growth in network size without rewiring the whole network. Our
evaluation shows that PolarFly outperforms competitive networks in terms of
scalability, cost and performance for various traffic patterns.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 18:55:37 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Sep 2022 05:49:34 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Oct 2022 03:34:42 GMT"
},
{
"version": "v4",
"created": "Tue, 2 May 2023 19:48:14 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Lakhotia",
"Kartik",
""
],
[
"Besta",
"Maciej",
""
],
[
"Monroe",
"Laura",
""
],
[
"Isham",
"Kelly",
""
],
[
"Iff",
"Patrick",
""
],
[
"Hoefler",
"Torsten",
""
],
[
"Petrini",
"Fabrizio",
""
]
] |
new_dataset
| 0.999442 |
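PolarFly's topology above is built on Erdos-Renyi polarity graphs, which for a prime field size can be constructed directly from the points of the projective plane PG(2, q): two vertices are adjacent when their dot product vanishes mod q. An illustrative sketch (prime q only; practical router radixes correspond to larger q):

```python
import numpy as np
from itertools import product

def er_polarity_graph(q):
    """Erdos-Renyi polarity graph ER_q for prime q: q^2 + q + 1 vertices,
    degree q or q + 1, diameter 2 -- the basis of PolarFly."""
    pts = []
    for v in product(range(q), repeat=3):
        v = np.array(v)
        if not v.any():
            continue
        # Normalize so the first nonzero coordinate is 1 (one point per line).
        first = v[np.nonzero(v)[0][0]]
        v = (v * pow(int(first), q - 2, q)) % q   # Fermat inverse, q prime
        pts.append(tuple(v))
    pts = sorted(set(pts))
    adj = {p: [] for p in pts}
    for i, x in enumerate(pts):
        for y in pts[i + 1:]:                     # no self-loops
            if sum(a * b for a, b in zip(x, y)) % q == 0:
                adj[x].append(y)
                adj[y].append(x)
    return adj

g = er_polarity_graph(3)    # q=3: 13 routers, radix at most 4
print(len(g), max(len(n) for n in g.values()))   # 13 4
```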
2208.04931
|
Nicola Cotumaccio
|
Nicola Cotumaccio, Giovanna D'Agostino, Alberto Policriti, Nicola
Prezza
|
Co-lexicographically Ordering Automata and Regular Languages -- Part I
|
arXiv admin note: text overlap with arXiv:2106.02309
| null | null | null |
cs.FL cs.DS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the present work, we lay out a new theory showing that all automata can
always be co-lexicographically partially ordered, and an intrinsic measure of
their complexity can be defined and effectively determined, namely, the minimum
width $p$ of one of their admissible co-lex partial orders - dubbed here the
automaton's co-lex width. We first show that this new measure captures at once
the complexity of several seemingly-unrelated hard problems on automata. Any
NFA of co-lex width $p$: (i) has an equivalent powerset DFA whose size is
exponential in $p$ rather than (as a classic analysis shows) in the NFA's size;
(ii) can be encoded using just $\Theta(\log p)$ bits per transition; (iii)
admits a linear-space data structure solving regular expression matching
queries in time proportional to $p^2$ per matched character. Some consequences
of this new parametrization of automata are that PSPACE-hard problems such as
NFA equivalence are FPT in $p$, and quadratic lower bounds for the regular
expression matching problem do not hold for sufficiently small $p$. We prove
that a canonical minimum-width DFA accepting a language $\mathcal L$ - dubbed
the Hasse automaton $\mathcal H$ of $\mathcal L$ - can be exhibited. Finally,
we explore the relationship between two conflicting objectives: minimizing the
width and minimizing the number of states of a DFA. In this context, we provide
an analogue of the Myhill-Nerode Theorem for co-lexicographically ordered
regular languages.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 17:51:20 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Nov 2022 23:21:02 GMT"
},
{
"version": "v3",
"created": "Wed, 3 May 2023 06:11:13 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Cotumaccio",
"Nicola",
""
],
[
"D'Agostino",
"Giovanna",
""
],
[
"Policriti",
"Alberto",
""
],
[
"Prezza",
"Nicola",
""
]
] |
new_dataset
| 0.994063 |
2209.03878
|
Joshua Peeples
|
Joshua Peeples, Alina Zare, Jeffrey Dale, James Keller
|
Histogram Layers for Synthetic Aperture Sonar Imagery
|
7 pages, 9 Figures, Accepted to IEEE International Conference on
Machine Learning and Applications (ICMLA) 2022
| null |
10.1109/ICMLA55696.2022.00032
| null |
cs.CV cs.AI cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Synthetic aperture sonar (SAS) imagery is crucial for several applications,
including target recognition and environmental segmentation. Deep learning
models have led to much success in SAS analysis; however, the features
extracted by these approaches may not be suitable for capturing certain
textural information. To address this problem, we present a novel application
of histogram layers on SAS imagery. The addition of histogram layer(s) within
the deep learning models improved performance by incorporating statistical
texture information on both synthetic and real-world datasets.
|
[
{
"version": "v1",
"created": "Thu, 8 Sep 2022 15:33:35 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Peeples",
"Joshua",
""
],
[
"Zare",
"Alina",
""
],
[
"Dale",
"Jeffrey",
""
],
[
"Keller",
"James",
""
]
] |
new_dataset
| 0.993253 |
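A histogram layer, as used in the entry above, soft-bins feature values with learnable centers and widths so that texture statistics remain differentiable end to end. A minimal PyTorch sketch follows; the binning granularity and initialization are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class HistogramLayer(nn.Module):
    """Differentiable histogram pooling: soft-assigns each feature value to
    learnable bins via RBF membership, then averages over space."""
    def __init__(self, channels, n_bins=4):
        super().__init__()
        self.centers = nn.Parameter(
            torch.linspace(-1, 1, n_bins).repeat(channels, 1))
        self.widths = nn.Parameter(torch.ones(channels, n_bins))

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        v = x.view(B, C, 1, H * W)             # broadcast against bins
        c = self.centers.view(1, C, -1, 1)
        w = self.widths.view(1, C, -1, 1)
        membership = torch.exp(-(w * (v - c)) ** 2)
        return membership.mean(dim=-1)         # (B, C, n_bins) soft counts

feats = torch.randn(2, 8, 16, 16)
print(HistogramLayer(8)(feats).shape)          # torch.Size([2, 8, 4])
```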
2210.01185
|
Dejiao Zhang
|
Nihal Jain, Dejiao Zhang, Wasi Uddin Ahmad, Zijian Wang, Feng Nan,
Xiaopeng Li, Ming Tan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia,
Xiaofei Ma, Bing Xiang
|
ContraCLM: Contrastive Learning For Causal Language Model
|
10 pages
|
ACL 2023
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite exciting progress in causal language models, the expressiveness of
the representations is largely limited due to poor discrimination ability. To
remedy this issue, we present ContraCLM, a novel contrastive learning framework
at both token-level and sequence-level. We assess ContraCLM on a variety of
downstream tasks. We show that ContraCLM enhances discrimination of the
representations and bridges the gap with the encoder-only models, which makes
causal language models better suited for tasks beyond language generation.
Specifically, we attain $44\%$ relative improvement on the Semantic Textual
Similarity tasks and $34\%$ on Code-to-Code Search tasks. Furthermore, by
improving the expressiveness of the representations, ContraCLM also boosts the
source code generation capability with $9\%$ relative improvement on execution
accuracy on the HumanEval benchmark.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 18:56:35 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 22:46:46 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Jain",
"Nihal",
""
],
[
"Zhang",
"Dejiao",
""
],
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Wang",
"Zijian",
""
],
[
"Nan",
"Feng",
""
],
[
"Li",
"Xiaopeng",
""
],
[
"Tan",
"Ming",
""
],
[
"Nallapati",
"Ramesh",
""
],
[
"Ray",
"Baishakhi",
""
],
[
"Bhatia",
"Parminder",
""
],
[
"Ma",
"Xiaofei",
""
],
[
"Xiang",
"Bing",
""
]
] |
new_dataset
| 0.999414 |
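Sequence-level contrastive learning of the kind ContraCLM describes is typically an in-batch InfoNCE objective over paired views of the same sequence (e.g. two dropout passes). A generic sketch follows; the paper's exact token- and sequence-level losses may differ in detail.

```python
import torch
import torch.nn.functional as F

def seq_contrastive_loss(z1, z2, temperature=0.05):
    """In-batch InfoNCE: each sequence's second view is its positive,
    all other sequences in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature     # (B, B) cosine similarities
    labels = torch.arange(z1.size(0))      # positives on the diagonal
    return F.cross_entropy(logits, labels)

B, d = 16, 768
hidden1, hidden2 = torch.randn(B, d), torch.randn(B, d)  # two forward passes
print(seq_contrastive_loss(hidden1, hidden2))
```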
2210.05311
|
Wuti Xiong
|
Wuti Xiong
|
CD-FSOD: A Benchmark for Cross-domain Few-shot Object Detection
|
Accepted by ICASSP 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a study of the cross-domain few-shot object
detection (CD-FSOD) benchmark, consisting of image data from a diverse data
domain. On the proposed benchmark, we evaluate state-of-the-art FSOD approaches,
including meta-learning FSOD approaches and fine-tuning FSOD approaches. The
results show that these methods tend to fail and even underperform the naive
fine-tuning model. We analyze the reasons for their failure and introduce a
strong baseline that uses a mutually-beneficial manner to alleviate the
overfitting problem. Our approach is remarkably superior to existing approaches
by significant margins (2.0\% on average) on the proposed benchmark. Our code
is available at \url{https://github.com/FSOD/CD-FSOD}.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 10:10:07 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Oct 2022 10:52:27 GMT"
},
{
"version": "v3",
"created": "Wed, 3 May 2023 09:19:05 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Xiong",
"Wuti",
""
]
] |
new_dataset
| 0.994475 |
2210.08682
|
Tingyuan Liang
|
Tingyuan Liang, Gengjie Chen, Jieru Zhao, Sharad Sinha, Wei Zhang
|
AMF-Placer 2.0: Open Source Timing-driven Analytical Mixed-size Placer
for Large-scale Heterogeneous FPGA
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
On modern field-programmable gate arrays (FPGAs), certain critical path
portions of the designs might be prearranged into many multi-cell macros during
synthesis. These movable macros with constraints of shape and resources lead to
challenging mixed-size placement for FPGA designs which cannot be addressed by
previous analytical placers. Moreover, general timing-driven placement
algorithms are facing challenges when handling real-world application design
and ultrascale FPGA architectures. In this work, we propose AMF-Placer 2.0, an
open-source comprehensive timing-driven analytical mixed-size FPGA placer. It
supports mixed-size placement of heterogeneous resources (e.g.,
LUT/FF/LUTRAM/MUX/CARRY/DSP/BRAM) on FPGA, with an interface to Xilinx Vivado.
Standing upon the shoulders of AMF-Placer 1.0, AMF-Placer 2.0 is equipped with a
series of new techniques for timing optimization, including a simple but
effective timing model, placement-blockage-aware anchor insertion, WNS-aware
timing-driven quadratic placement, and sector-guided detailed placement. Based
on a set of the latest large open-source benchmarks from various domains for
Xilinx Ultrascale FPGAs, experimental results indicate that critical path
delays realized by AMF-Placer 2.0 are on average 2.2% and 0.59% higher than
those achieved by the commercial tool Xilinx Vivado 2020.2 and 2021.2,
respectively. Meanwhile, the average runtime of the placement procedure of
AMF-Placer 2.0 is 14% and 8.5% higher than that of Xilinx Vivado 2020.2 and
2021.2, respectively. Although limited by the absence of an exact timing model
of the device, design hierarchy information, and accurate routing feedback,
AMF-Placer 2.0 is the first open-source FPGA placer that can handle the
timing-driven mixed-size placement of practical complex designs with various
FPGA resources and achieves quality comparable to the latest commercial tools.
|
[
{
"version": "v1",
"created": "Mon, 17 Oct 2022 01:04:21 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 04:57:04 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Liang",
"Tingyuan",
""
],
[
"Chen",
"Gengjie",
""
],
[
"Zhao",
"Jieru",
""
],
[
"Sinha",
"Sharad",
""
],
[
"Zhang",
"Wei",
""
]
] |
new_dataset
| 0.972436 |
2212.00873
|
Manil Dev Gomony Dr.
|
M. Gomony, F. Putter, A. Gebregiorgis, G. Paulin, L. Mei, V. Jain, S.
Hamdioui, V. Sanchez, T. Grosser, M. Geilen, M. Verhelst, F. Zenke, F.
Gurkaynak, B. Bruin, S. Stuijk, S. Davidson, S. De, M. Ghogho, A. Jimborean,
S. Eissa, L. Benini, D. Soudris, R. Bishnoi, S. Ainsworth, F. Corradi, O.
Karrakchou, T. G\"uneysu and H. Corporaal
|
CONVOLVE: Smart and seamless design of smart edge processors
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rise of Deep Learning (DL), our world braces for AI in every edge
device, creating an urgent need for edge-AI SoCs. This SoC hardware needs to
support high throughput, reliable and secure AI processing at Ultra Low Power
(ULP), with a very short time to market. With its strong legacy in edge
solutions and open processing platforms, the EU is well-positioned to become a
leader in this SoC market. However, this requires AI edge processing to become
at least 100 times more energy-efficient, while offering sufficient flexibility
and scalability to deal with AI as a fast-moving target. Since the design space
of these complex SoCs is huge, advanced tooling is needed to make their design
tractable. The CONVOLVE project (currently in its initial stage) addresses these
roadblocks. It takes a holistic approach with innovations at all levels of the
design hierarchy. Starting with an overview of SOTA DL processing support and
our project methodology, this paper presents 8 important design choices largely
impacting the energy efficiency and flexibility of DL hardware. Finding good
solutions is key to making smart-edge computing a reality.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2022 21:24:28 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jan 2023 12:56:31 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Feb 2023 22:13:00 GMT"
},
{
"version": "v4",
"created": "Tue, 2 May 2023 20:00:55 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Gomony",
"M.",
""
],
[
"Putter",
"F.",
""
],
[
"Gebregiorgis",
"A.",
""
],
[
"Paulin",
"G.",
""
],
[
"Mei",
"L.",
""
],
[
"Jain",
"V.",
""
],
[
"Hamdioui",
"S.",
""
],
[
"Sanchez",
"V.",
""
],
[
"Grosser",
"T.",
""
],
[
"Geilen",
"M.",
""
],
[
"Verhelst",
"M.",
""
],
[
"Zenke",
"F.",
""
],
[
"Gurkaynak",
"F.",
""
],
[
"Bruin",
"B.",
""
],
[
"Stuijk",
"S.",
""
],
[
"Davidson",
"S.",
""
],
[
"De",
"S.",
""
],
[
"Ghogho",
"M.",
""
],
[
"Jimborean",
"A.",
""
],
[
"Eissa",
"S.",
""
],
[
"Benini",
"L.",
""
],
[
"Soudris",
"D.",
""
],
[
"Bishnoi",
"R.",
""
],
[
"Ainsworth",
"S.",
""
],
[
"Corradi",
"F.",
""
],
[
"Karrakchou",
"O.",
""
],
[
"Güneysu",
"T.",
""
],
[
"Corporaal",
"H.",
""
]
] |
new_dataset
| 0.992228 |
2302.05658
|
Milan \v{S}ulc
|
\v{S}t\v{e}p\'an \v{S}imsa and Milan \v{S}ulc and Michal
U\v{r}i\v{c}\'a\v{r} and Yash Patel and Ahmed Hamdi and Mat\v{e}j Koci\'an
and Maty\'a\v{s} Skalick\'y and Ji\v{r}\'i Matas and Antoine Doucet and
Micka\"el Coustaty and Dimosthenis Karatzas
|
DocILE Benchmark for Document Information Localization and Extraction
|
Accepted to ICDAR 2023
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the DocILE benchmark with the largest dataset of
business documents for the tasks of Key Information Localization and Extraction
and Line Item Recognition. It contains 6.7k annotated business documents, 100k
synthetically generated documents, and nearly 1M unlabeled documents for
unsupervised pre-training. The dataset has been built with knowledge of domain-
and task-specific aspects, resulting in the following key features: (i)
annotations in 55 classes, which surpasses the granularity of previously
published key information extraction datasets by a large margin; (ii) Line Item
Recognition represents a highly practical information extraction task, where
key information has to be assigned to items in a table; (iii) documents come
from numerous layouts and the test set includes zero- and few-shot cases as
well as layouts commonly seen in the training set. The benchmark comes with
several baselines, including RoBERTa, LayoutLMv3 and DETR-based Table
Transformer; applied to both tasks of the DocILE benchmark, with results shared
in this paper, offering a quick starting point for future work. The dataset,
baselines and supplementary material are available at
https://github.com/rossumai/docile.
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 11:32:10 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 16:24:58 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Šimsa",
"Štěpán",
""
],
[
"Šulc",
"Milan",
""
],
[
"Uřičář",
"Michal",
""
],
[
"Patel",
"Yash",
""
],
[
"Hamdi",
"Ahmed",
""
],
[
"Kocián",
"Matěj",
""
],
[
"Skalický",
"Matyáš",
""
],
[
"Matas",
"Jiří",
""
],
[
"Doucet",
"Antoine",
""
],
[
"Coustaty",
"Mickaël",
""
],
[
"Karatzas",
"Dimosthenis",
""
]
] |
new_dataset
| 0.999819 |
2303.07519
|
Antonios Liapis
|
Theodoros Galanos, Antonios Liapis and Georgios N. Yannakakis
|
Architext: Language-Driven Generative Architecture Design
|
21 pages
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Architectural design is a highly complex practice that involves a wide
diversity of disciplines, technologies, proprietary design software, expertise,
and an almost infinite number of constraints, across a vast array of design
tasks. Enabling intuitive, accessible, and scalable design processes is an
important step towards performance-driven and sustainable design for all. To
that end, we introduce Architext, a novel semantic generation assistive tool.
Architext enables design generation with only natural language prompts, given
to large-scale Language Models, as input. We conduct a thorough quantitative
evaluation of Architext's downstream task performance, focusing on semantic
accuracy and diversity for a number of pre-trained language models ranging from
120 million to 6 billion parameters. Architext models are able to learn the
specific design task, generating valid residential layouts at a near 100% rate.
Accuracy shows great improvement when scaling the models, with the largest
model (GPT-J) yielding impressive accuracy ranging from 25% to over 80% for
different prompt categories. We open source the finetuned Architext models and
our synthetic dataset, hoping to inspire experimentation in this exciting area
of design research.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 23:11:05 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 16:07:05 GMT"
},
{
"version": "v3",
"created": "Wed, 3 May 2023 09:29:05 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Galanos",
"Theodoros",
""
],
[
"Liapis",
"Antonios",
""
],
[
"Yannakakis",
"Georgios N.",
""
]
] |
new_dataset
| 0.99926 |
2304.07302
|
Qijie Bai
|
Qijie Bai, Changli Nie, Haiwei Zhang, Dongming Zhao, Xiaojie Yuan
|
HGWaveNet: A Hyperbolic Graph Neural Network for Temporal Link
Prediction
|
Accepted by Web Conference (WWW) 2023
|
WWW '23: Proceedings of the ACM Web Conference 2023 (523-532)
|
10.1145/3543507.3583455
| null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal link prediction, aiming to predict future edges between paired nodes
in a dynamic graph, is of vital importance in diverse applications. However,
existing methods are mainly built upon uniform Euclidean space, which has been
found to conflict with the power-law distributions of real-world graphs and to
be unable to represent the hierarchical connections between nodes effectively.
Given this characteristic of the data, hyperbolic geometry offers an ideal
alternative due to its exponential expansion property. In this paper, we
propose HGWaveNet, a novel hyperbolic graph neural network that fully exploits
the fitness between hyperbolic spaces and data distributions for temporal link
prediction. Specifically, we design two key modules to learn the spatial
topological structures and temporal evolutionary information separately. On the
one hand, a hyperbolic diffusion graph convolution (HDGC) module effectively
aggregates information from a wider range of neighbors. On the other hand, the
internal order of causal correlation between historical states is captured by
hyperbolic dilated causal convolution (HDCC) modules. The whole model is built
upon the hyperbolic spaces to preserve the hierarchical structural information
in the entire data flow. To prove the superiority of HGWaveNet, extensive
experiments are conducted on six real-world graph datasets and the results show
a relative improvement by up to 6.67% on AUC for temporal link prediction over
SOTA methods.
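For readers unfamiliar with hyperbolic geometry, a standard reference point
(our addition for orientation; the abstract does not fix a particular model) is
the Poincaré ball, where the distance between points $u$ and $v$ is
$$ d(u, v) = \operatorname{arcosh}\left( 1 + \frac{2\|u - v\|^2}{(1 - \|u\|^2)(1 - \|v\|^2)} \right), $$
under which the volume of a ball grows exponentially with its radius. This is
the exponential expansion property referred to above, and it is what lets
hyperbolic embeddings accommodate tree-like, power-law-distributed graphs with
low distortion.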
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 07:07:00 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 04:54:52 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Bai",
"Qijie",
""
],
[
"Nie",
"Changli",
""
],
[
"Zhang",
"Haiwei",
""
],
[
"Zhao",
"Dongming",
""
],
[
"Yuan",
"Xiaojie",
""
]
] |
new_dataset
| 0.998894 |
2305.01190
|
Yuelang Xu
|
Yuelang Xu, Hongwen Zhang, Lizhen Wang, Xiaochen Zhao, Han Huang,
Guojun Qi, Yebin Liu
|
LatentAvatar: Learning Latent Expression Code for Expressive Neural Head
Avatar
|
Accepted by SIGGRAPH 2023
| null |
10.1145/3588432.3591545
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Existing approaches to animatable NeRF-based head avatars are either built
upon face templates or use the expression coefficients of templates as the
driving signal. Despite the promising progress, their performances are heavily
bound by the expression power and the tracking accuracy of the templates. In
this work, we present LatentAvatar, an expressive neural head avatar driven by
latent expression codes. Such latent expression codes are learned in an
end-to-end and self-supervised manner without templates, enabling our method to
get rid of expression and tracking issues. To achieve this, we leverage a
latent head NeRF to learn the person-specific latent expression codes from a
monocular portrait video, and further design a Y-shaped network to learn the
shared latent expression codes of different subjects for cross-identity
reenactment. By optimizing the photometric reconstruction objectives in NeRF,
the latent expression codes are learned to be 3D-aware while faithfully
capturing the high-frequency detailed expressions. Moreover, by learning a
mapping between the latent expression code learned in shared and
person-specific settings, LatentAvatar is able to perform expressive
reenactment between different subjects. Experimental results show that our
LatentAvatar is able to capture challenging expressions and the subtle movement
of teeth and even eyeballs, which outperforms previous state-of-the-art
solutions in both quantitative and qualitative comparisons. Project page:
https://www.liuyebin.com/latentavatar.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 03:49:12 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 06:41:43 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Xu",
"Yuelang",
""
],
[
"Zhang",
"Hongwen",
""
],
[
"Wang",
"Lizhen",
""
],
[
"Zhao",
"Xiaochen",
""
],
[
"Huang",
"Han",
""
],
[
"Qi",
"Guojun",
""
],
[
"Liu",
"Yebin",
""
]
] |
new_dataset
| 0.996969 |
2305.01598
|
Anirudh Khatry
|
Anirudh Khatry, Joyce Cahoon, Jordan Henkel, Shaleen Deep, Venkatesh
Emani, Avrilia Floratou, Sumit Gulwani, Vu Le, Mohammad Raza, Sherry Shi,
Mukul Singh, Ashish Tiwari
|
From Words to Code: Harnessing Data for Program Synthesis from Natural
Language
|
14 pages
| null | null | null |
cs.DB cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Creating programs to correctly manipulate data is a difficult task, as the
underlying programming languages and APIs can be challenging to learn for many
users who are not skilled programmers. Large language models (LLMs) demonstrate
remarkable potential for generating code from natural language, but in the data
manipulation domain, apart from the natural language (NL) description of the
intended task, we also have the dataset on which the task is to be performed,
or the "data context". Existing approaches have utilized data context in a
limited way by simply adding relevant information from the input data into the
prompts sent to the LLM.
In this work, we utilize the available input data to execute the candidate
programs generated by the LLMs and gather their outputs. We introduce semantic
reranking, a technique to rerank the programs generated by LLMs based on three
signals coming from the program outputs: (a) semantic filtering and well-formedness
based score tuning: do programs even generate well-formed outputs, (b) semantic
interleaving: how do the outputs from different candidates compare to each
other, and (c) output-based score tuning: how do the outputs compare to outputs
predicted for the same task. We provide theoretical justification for semantic
interleaving. We also introduce temperature mixing, where we combine samples
generated by LLMs using both high and low temperatures. We extensively evaluate
our approach in three domains, namely databases (SQL), data science (Pandas)
and business intelligence (Excel's Power Query M) on a variety of new and
existing benchmarks. We observe substantial gains across domains, with
improvements of up to 45% in top-1 accuracy and 34% in top-3 accuracy.
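To make the reranking idea concrete, here is a minimal Python sketch of
execution-based semantic reranking (our illustration, not the authors'
implementation; the callables `execute` and `is_well_formed` are hypothetical
placeholders):

```python
from collections import Counter

def semantic_rerank(candidates, data, execute, is_well_formed):
    """Rerank LLM-sampled programs by executing them on the input data.

    candidates: list of program strings sampled from the LLM
    execute(prog, data): runs a program, returning its output or None on error
    is_well_formed(out): basic output validity check (semantic filtering)
    """
    outputs = {}
    for prog in candidates:
        out = execute(prog, data)
        if out is not None and is_well_formed(out):
            outputs[prog] = out  # keep only programs with well-formed outputs

    # Semantic interleaving: a candidate whose output agrees with many other
    # candidates is more likely correct, so score by output frequency.
    freq = Counter(repr(out) for out in outputs.values())
    scored = sorted(((freq[repr(out)], prog) for prog, out in outputs.items()),
                    reverse=True)
    return [prog for _, prog in scored]
```

The actual method additionally tunes scores with output-based signals and mixes
samples drawn at high and low temperatures; the sketch only shows the
execute-filter-agree skeleton.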
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 16:56:32 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 07:02:57 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Khatry",
"Anirudh",
""
],
[
"Cahoon",
"Joyce",
""
],
[
"Henkel",
"Jordan",
""
],
[
"Deep",
"Shaleen",
""
],
[
"Emani",
"Venkatesh",
""
],
[
"Floratou",
"Avrilia",
""
],
[
"Gulwani",
"Sumit",
""
],
[
"Le",
"Vu",
""
],
[
"Raza",
"Mohammad",
""
],
[
"Shi",
"Sherry",
""
],
[
"Singh",
"Mukul",
""
],
[
"Tiwari",
"Ashish",
""
]
] |
new_dataset
| 0.996905 |
2305.01658
|
Dongyue Guo
|
Dongyue Guo, Zheng Zhang, Jianwei Zhang, and Yi Lin
|
FlightBERT++: A Non-autoregressive Multi-Horizon Flight Trajectory
Prediction Framework
|
8 pages, 3 figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Flight Trajectory Prediction (FTP) is an essential task in Air Traffic
Control (ATC), which can assist air traffic controllers to manage airspace more
safely and efficiently. Existing approaches generally perform multi-horizon FTP
tasks in an autoregressive manner, which is prone to suffer from error
accumulation and low-efficiency problems. In this paper, a novel framework,
called FlightBERT++, is proposed to i) forecast multi-horizon flight
trajectories directly in a non-autoregressive way, and ii) improve upon the
limitation of the binary encoding (BE) representation in the FlightBERT
framework. Specifically, the proposed framework is implemented by a generalized
Encoder-Decoder architecture, in which the encoder learns the temporal-spatial
patterns from historical observations and the decoder predicts the flight
status for the future time steps. In contrast to the conventional architecture,
an extra horizon-aware context generator (HACG) is specifically designed to
incorporate prior horizon information, which enables multi-horizon
non-autoregressive prediction. Additionally, a differential prediction strategy
is designed that carefully accounts for both the stationarity of the
differential sequence and the high-bit errors of the BE representation. Moreover, the
Bit-wise Weighted Binary Cross Entropy loss function is proposed to optimize
the proposed framework, which can further constrain the high-bit errors of the
predictions. Finally, the proposed framework is validated on a real-world
flight trajectory dataset. The experimental results show that the proposed
framework outperformed the competitive baselines.
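As a hedged illustration of the differential strategy (our own notation, not
necessarily the paper's exact formulation): rather than regressing absolute
flight states $s_{t+h}$, the model predicts differentials $\hat{\Delta}_{t+i}$
and reconstructs
$$ \hat{s}_{t+h} = s_t + \sum_{i=1}^{h} \hat{\Delta}_{t+i}, $$
so the regressed quantities stay small and close to stationary; in a binary
encoding this keeps the high-order bits largely inactive, which is consistent
with the stated goal of constraining high-bit errors.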
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 04:11:23 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Guo",
"Dongyue",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Zhang",
"Jianwei",
""
],
[
"Lin",
"Yi",
""
]
] |
new_dataset
| 0.998866 |
2305.01661
|
Dongyue Guo
|
Dongyue Guo, Jianwei Zhang, Yi Lin
|
SIA-FTP: A Spoken Instruction Aware Flight Trajectory Prediction
Framework
| null | null | null | null |
cs.SD cs.AI cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Ground-air negotiation via speech communication is a vital prerequisite for
ensuring safety and efficiency in air traffic control (ATC) operations.
However, with the increase in traffic flow, incorrect instructions caused by
human factors bring a great threat to ATC safety. Existing flight trajectory
prediction (FTP) approaches primarily rely on the flight status of historical
trajectory, leading to significant delays in the prediction of real-time
maneuvering instruction, which is not conducive to conflict detection. A major
reason is that spoken instructions and flight trajectories are presented in
different modalities in the current ATC system, which makes it challenging to
incorporate maneuvering instructions into FTP tasks.
In this paper, a spoken instruction-aware FTP framework, called SIA-FTP, is
innovatively proposed to support high-maneuvering FTP tasks by incorporating
instant spoken instruction. To address the modality gap and minimize the data
requirements, a 3-stage learning paradigm is proposed to implement the SIA-FTP
framework in a progressive manner, including trajectory-based FTP pretraining,
intent-oriented instruction embedding learning, and multi-modal finetuning.
Specifically, the FTP model and the instruction embedding with maneuvering
semantics are pre-trained using volumes of well-resourced trajectory and text
data in the 1st and 2nd stages. In succession, a multi-modal fusion strategy is
proposed to incorporate the pre-trained instruction embedding into the FTP
model and integrate the two pre-trained networks into a joint model. Finally,
the joint model is finetuned using the limited trajectory-instruction data to
enhance the FTP performance within maneuvering instruction scenarios. The
experimental results demonstrated that the proposed framework presents an
impressive performance improvement in high-maneuvering scenarios.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 08:28:55 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Guo",
"Dongyue",
""
],
[
"Zhang",
"Jianwei",
""
],
[
"Lin",
"Yi",
""
]
] |
new_dataset
| 0.993784 |
2305.01763
|
Ahmet-Serdar Karakaya
|
Ahmet-Serdar Karakaya, Ioan-Alexandru Stef, Konstantin K\"ohler,
Julian Heinovski, Falko Dressler
|
Achieving Realistic Cyclist Behavior in SUMO using the SimRa Dataset
|
arXiv admin note: substantial text overlap with arXiv:2205.04538
| null | null | null |
cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Increasing the modal share of bicycle traffic to reduce carbon emissions and
urban car traffic, and to improve the health of citizens, requires a shift away
from car-centric city planning. For this, traffic planners often
rely on simulation tools such as SUMO which allow them to study the effects of
construction changes before implementing them. Similarly, studies of vulnerable
road users, here cyclists, also use such models to assess the performance of
communication-based road traffic safety systems. The cyclist model in SUMO,
however, is very imprecise as SUMO cyclists behave either like slow cars or
fast pedestrians, thus casting doubt on simulation results for bicycle
traffic. In this paper, we analyze the acceleration, deceleration, velocity,
and intersection left-turn behavior of cyclists in a large dataset of
real-world cycle tracks. We use the results to improve the existing cyclist
model in SUMO and to implement three additional, more detailed cyclist models
in SUMO.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 20:03:52 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Karakaya",
"Ahmet-Serdar",
""
],
[
"Stef",
"Ioan-Alexandru",
""
],
[
"Köhler",
"Konstantin",
""
],
[
"Heinovski",
"Julian",
""
],
[
"Dressler",
"Falko",
""
]
] |
new_dataset
| 0.999702 |
2305.01778
|
Biao Zhang
|
Biao Zhang, Mathias M\"uller, Rico Sennrich
|
SLTUNET: A Simple Unified Model for Sign Language Translation
|
ICLR 2023
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite recent successes with neural models for sign language translation
(SLT), translation quality still lags behind spoken languages because of the
data scarcity and modality gap between sign video and text. To address both
problems, we investigate strategies for cross-modality representation sharing
for SLT. We propose SLTUNET, a simple unified neural model designed to support
multiple SLT-related tasks jointly, such as sign-to-gloss, gloss-to-text and
sign-to-text translation. Jointly modeling different tasks endows SLTUNET with
the capability to explore the cross-task relatedness that could help narrow the
modality gap. In addition, this allows us to leverage the knowledge from
external resources, such as abundant parallel data used for spoken-language
machine translation (MT). We show in experiments that SLTUNET achieves
competitive and even state-of-the-art performance on PHOENIX-2014T and
CSL-Daily when augmented with MT data and equipped with a set of optimization
techniques. We further use the DGS Corpus for end-to-end SLT for the first
time. It covers broader domains with a significantly larger vocabulary, which
is more challenging and which we consider to allow for a more realistic
assessment of the current state of SLT than the former two. Still, SLTUNET
obtains improved results on the DGS Corpus. Code is available at
https://github.com/bzhangGo/sltunet.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 20:41:59 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Zhang",
"Biao",
""
],
[
"Müller",
"Mathias",
""
],
[
"Sennrich",
"Rico",
""
]
] |
new_dataset
| 0.953927 |
2305.01795
|
Yujie Lu
|
Yujie Lu, Pan Lu, Zhiyu Chen, Wanrong Zhu, Xin Eric Wang, William Yang
Wang
|
Multimodal Procedural Planning via Dual Text-Image Prompting
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Embodied agents have achieved prominent performance in following human
instructions to complete tasks. However, the potential of providing
instructions informed by texts and images to assist humans in completing tasks
remains underexplored. To uncover this capability, we present the multimodal
procedural planning (MPP) task, in which models are given a high-level goal and
generate plans of paired text-image steps, providing more complementary and
informative guidance than unimodal plans. The key challenges of MPP are to
ensure the informativeness, temporal coherence, and accuracy of plans across
modalities. To tackle this, we propose Text-Image Prompting (TIP), a
dual-modality prompting method that jointly leverages zero-shot reasoning
ability in large language models (LLMs) and compelling text-to-image generation
ability from diffusion-based models. TIP improves the interaction in the dual
modalities using Text-to-Image Bridge and Image-to-Text Bridge, allowing LLMs
to guide textually grounded image plan generation and, in reverse, leveraging
the descriptions of image plans to ground the textual plan. To address
the lack of relevant datasets, we collect WIKIPLAN and RECIPEPLAN as a testbed
for MPP. Our results show compelling human preferences and automatic scores
against unimodal and multimodal baselines on WIKIPLAN and RECIPEPLAN in terms
of informativeness, temporal coherence, and plan accuracy. Our code and data:
https://github.com/YujieLu10/MPP.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 21:46:44 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Lu",
"Yujie",
""
],
[
"Lu",
"Pan",
""
],
[
"Chen",
"Zhiyu",
""
],
[
"Zhu",
"Wanrong",
""
],
[
"Wang",
"Xin Eric",
""
],
[
"Wang",
"William Yang",
""
]
] |
new_dataset
| 0.992192 |
2305.01836
|
Shentong Mo
|
Shentong Mo, Yapeng Tian
|
AV-SAM: Segment Anything Model Meets Audio-Visual Localization and
Segmentation
| null | null | null | null |
cs.CV cs.LG cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segment Anything Model (SAM) has recently shown its powerful effectiveness in
visual segmentation tasks. However, there is less exploration concerning how
SAM works on audio-visual tasks, such as visual sound localization and
segmentation. In this work, we propose a simple yet effective audio-visual
localization and segmentation framework based on the Segment Anything Model,
namely AV-SAM, that can generate sounding object masks corresponding to the
audio. Specifically, our AV-SAM simply leverages pixel-wise audio-visual fusion
across audio features and visual features from the pre-trained image encoder in
SAM to aggregate cross-modal representations. Then, the aggregated cross-modal
features are fed into the prompt encoder and mask decoder to generate the final
audio-visual segmentation masks. We conduct extensive experiments on
Flickr-SoundNet and AVSBench datasets. The results demonstrate that the
proposed AV-SAM can achieve competitive performance on sounding object
localization and segmentation.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 00:33:52 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Mo",
"Shentong",
""
],
[
"Tian",
"Yapeng",
""
]
] |
new_dataset
| 0.998093 |
2305.01843
|
Kenny Chen
|
Kenny Chen, Ryan Nemiroff, Brett T. Lopez
|
Direct LiDAR-Inertial Odometry and Mapping: Perceptive and Connective
SLAM
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents Direct LiDAR-Inertial Odometry and Mapping (DLIOM), a
robust SLAM algorithm with an explicit focus on computational efficiency,
operational reliability, and real-world efficacy. DLIOM contains several key
algorithmic innovations in both the front-end and back-end subsystems to design
a resilient LiDAR-inertial architecture that is perceptive to the environment
and produces accurate localization and high-fidelity 3D mapping for autonomous
robotic platforms. Our ideas were spawned by a deep investigation into modern
LiDAR SLAM systems and their inability to generalize across different
operating environments, in which we address several common algorithmic failure
points by means of proactive safeguards to provide long-term operational
reliability in the unstructured real world. We detail several important
innovations to localization accuracy and mapping resiliency distributed
throughout a typical LiDAR SLAM pipeline to comprehensively increase
algorithmic speed, accuracy, and robustness. In addition, we discuss insights
gained from our ground-up approach while implementing such a complex system for
real-time state estimation on resource-constrained systems, and we
experimentally show the increased performance of our method as compared to the
current state-of-the-art on both public benchmark and self-collected datasets.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 01:06:25 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Chen",
"Kenny",
""
],
[
"Nemiroff",
"Ryan",
""
],
[
"Lopez",
"Brett T.",
""
]
] |
new_dataset
| 0.999746 |
2305.01911
|
Lin Jiang
|
Lin Jiang, Anthony Dowling, Ming-C. Cheng, Yu Liu
|
PODTherm-GP: A Physics-based Data-Driven Approach for Effective
Architecture-Level Thermal Simulation of Multi-Core CPUs
| null | null | null | null |
cs.CE physics.comp-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A thermal simulation methodology derived from the proper orthogonal
decomposition (POD) and the Galerkin projection (GP), hereafter referred to as
PODTherm-GP, is evaluated in terms of its efficiency and accuracy in a
multi-core CPU. The GP projects the heat transfer equation onto a mathematical
space whose basis functions are generated from thermal data enabled by the POD
learning algorithm. The thermal solution data are collected from FEniCS using
the finite element method (FEM) accounting for appropriate parametric
variations. The GP incorporates physical principles of heat transfer in the
methodology to reach high accuracy and efficiency. The dynamic power map for
the CPU in FEM thermal simulation is generated from gem5 and McPAT, together
with the SPLASH-2 benchmarks as the simulation workload. It is shown that
PODTherm-GP offers an accurate thermal prediction of the CPU with a resolution
as fine as the FEM. It is also demonstrated that PODTherm-GP is capable of
predicting the dynamic thermal profile of the chip with a good accuracy beyond
the training conditions. Additionally, the approach offers a reduction in
degrees of freedom by more than 5 orders of magnitude and a speedup of 4
orders, compared to the FEM.
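For readers unfamiliar with POD, the sketch below shows how a POD basis is
typically extracted from simulation snapshots (our NumPy illustration, assuming
snapshots are stored column-wise; it is not the authors' code):

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, num_modes: int) -> np.ndarray:
    """Extract a POD basis from a snapshot matrix.

    snapshots: (n_dof, n_snapshots) matrix, one thermal solution per column
    num_modes: number of retained modes (the reduced degrees of freedom)
    Returns an orthonormal (n_dof, num_modes) basis.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    # Singular vectors of the centered snapshots are the POD modes,
    # ordered by the energy (singular value) each one captures.
    u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    captured = np.cumsum(s**2) / np.sum(s**2)
    print(f"{num_modes} modes capture {captured[num_modes - 1]:.4%} of energy")
    return u[:, :num_modes]
```

A Galerkin projection of the heat transfer equation onto such a basis then
yields a small ODE system in the mode coefficients, which is where the reported
multi-order reduction in degrees of freedom comes from.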
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 05:59:23 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Jiang",
"Lin",
""
],
[
"Dowling",
"Anthony",
""
],
[
"Cheng",
"Ming-C.",
""
],
[
"Liu",
"Yu",
""
]
] |
new_dataset
| 0.987597 |
2305.01912
|
Liang Zeng
|
Liang Zeng, Lanqing Li, Jian Li
|
MolKD: Distilling Cross-Modal Knowledge in Chemical Reactions for
Molecular Property Prediction
| null | null | null | null |
cs.LG cs.AI physics.chem-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How to effectively represent molecules is a long-standing challenge for
molecular property prediction and drug discovery. This paper studies this
problem and proposes to incorporate chemical domain knowledge, specifically
related to chemical reactions, for learning effective molecular
representations. However, the inherent cross-modality property between chemical
reactions and molecules presents a significant challenge to address. To this
end, we introduce a novel method, namely MolKD, which Distills cross-modal
Knowledge in chemical reactions to assist Molecular property prediction.
Specifically, the reaction-to-molecule distillation model within MolKD
transfers cross-modal knowledge from a pre-trained teacher network learning
with one modality (i.e., reactions) into a student network learning with
another modality (i.e., molecules). Moreover, MolKD learns effective molecular
representations by incorporating reaction yields to measure transformation
efficiency of the reactant-product pair when pre-training on reactions.
Extensive experiments demonstrate that MolKD significantly outperforms various
competitive baseline models, e.g., 2.1% absolute AUC-ROC gain on Tox21. Further
investigations demonstrate that pre-trained molecular representations in MolKD
can distinguish chemically reasonable molecular similarities, which enables
molecular property prediction with high robustness and interpretability.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 06:01:03 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Zeng",
"Liang",
""
],
[
"Li",
"Lanqing",
""
],
[
"Li",
"Jian",
""
]
] |
new_dataset
| 0.998611 |
2305.01936
|
Georgios Batsis
|
Georgios Batsis, Ioannis Mademlis, Georgios Th. Papadopoulos
|
Illicit item detection in X-ray images for security applications
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Automated detection of contraband items in X-ray images can significantly
increase public safety, by enhancing the productivity and alleviating the
mental load of security officers in airports, subways, customs/post offices,
etc. The large volume and high throughput of passengers, mailed parcels, etc.,
during rush hours make it a Big Data analysis task. Modern computer vision
algorithms relying on Deep Neural Networks (DNNs) have proven capable of
undertaking this task even under resource-constrained and embedded execution
scenarios, e.g., as is the case with fast, single-stage, anchor-based object
detectors. This paper proposes a two-fold improvement of such algorithms for
the X-ray analysis domain, introducing two complementary novelties. Firstly,
more efficient anchors are obtained by hierarchically clustering the sizes of the
ground-truth training-set bounding boxes; thus, the resulting anchors follow a
natural hierarchy aligned with the semantic structure of the data. Secondly,
the default Non-Maximum Suppression (NMS) algorithm at the end of the object
detection pipeline is modified to better handle occluded object detection and
to reduce the number of false predictions, by inserting the Efficient
Intersection over Union (E-IoU) metric into the Weighted Cluster NMS method.
E-IoU provides more discriminative geometrical correlations between the
candidate bounding boxes/Regions-of-Interest (RoIs). The proposed method is
implemented on a common single-stage object detector (YOLOv5) and its
experimental evaluation on a relevant public dataset indicates significant
accuracy gains over both the baseline and competing approaches. This highlights
the potential of Big Data analysis in enhancing public safety.
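As an illustration of the first idea, anchor sizes can be derived by
hierarchically clustering ground-truth box dimensions, e.g. with SciPy (a
minimal sketch under our own assumptions, not the paper's exact procedure or
parameters):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def hierarchical_anchors(wh: np.ndarray, num_anchors: int = 9) -> np.ndarray:
    """Cluster ground-truth (width, height) pairs into anchor sizes.

    wh: (n_boxes, 2) array of bounding-box widths and heights
    Returns (num_anchors, 2) anchor sizes, one per cluster, sorted by area.
    """
    tree = linkage(wh, method="ward")  # agglomerative (hierarchical) clustering
    labels = fcluster(tree, t=num_anchors, criterion="maxclust")
    # Each anchor is the mean box size of its cluster.
    anchors = np.array([wh[labels == k].mean(axis=0)
                        for k in range(1, num_anchors + 1)])
    return anchors[np.argsort(anchors.prod(axis=1))]
```

Because the cluster tree is built bottom-up, the resulting anchors inherit the
natural size hierarchy of the annotated objects, which is the alignment with
the data's semantic structure that the abstract refers to.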
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 07:28:05 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Batsis",
"Georgios",
""
],
[
"Mademlis",
"Ioannis",
""
],
[
"Papadopoulos",
"Georgios Th.",
""
]
] |
new_dataset
| 0.99692 |
2305.01957
|
Sardana Ivanova
|
Sardana Ivanova, Fredrik Aas Andreassen, Matias Jentoft, Sondre Wold,
Lilja {\O}vrelid
|
NorQuAD: Norwegian Question Answering Dataset
|
Accepted to NoDaLiDa 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper we present NorQuAD: the first Norwegian question answering
dataset for machine reading comprehension. The dataset consists of 4,752
manually created question-answer pairs. We here detail the data collection
procedure and present statistics of the dataset. We also benchmark several
multilingual and Norwegian monolingual language models on the dataset and
compare them against human performance. The dataset will be made freely
available.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 08:17:07 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Ivanova",
"Sardana",
""
],
[
"Andreassen",
"Fredrik Aas",
""
],
[
"Jentoft",
"Matias",
""
],
[
"Wold",
"Sondre",
""
],
[
"Øvrelid",
"Lilja",
""
]
] |
new_dataset
| 0.999777 |
2305.01971
|
Vasantha Ramani
|
Subin Lin, Vasantha Ramani, Miguel Martin, Pandarasamy Arjunan, Adrian
Chong, Filip Biljecki, Marcel Ignatius, Kameshwar Poolla, Clayton Miller
|
District-scale surface temperatures generated from high-resolution
longitudinal thermal infrared images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The paper describes a dataset that was collected by infrared thermography,
which is a non-contact, non-intrusive technique for collecting data and
analyzing the built environment in various respects. While most studies focus
on the city and
building scales, the rooftop observatory provides high temporal and spatial
resolution observations with dynamic interactions on the district scale. The
rooftop infrared thermography observatory with a multi-modal platform that is
capable of assessing a wide range of dynamic processes in urban systems was
deployed in Singapore. It was placed on the top of two buildings that overlook
the outdoor context of the campus of the National University of Singapore. The
platform collects remote sensing data from tropical areas on a temporal scale,
allowing users to determine the temperature trend of individual features such
as buildings, roads, and vegetation. The dataset includes 1,365,921 thermal
images collected at approximately 10-second intervals, on average, from two
locations over ten months.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 08:36:06 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Lin",
"Subin",
""
],
[
"Ramani",
"Vasantha",
""
],
[
"Martin",
"Miguel",
""
],
[
"Arjunan",
"Pandarasamy",
""
],
[
"Chong",
"Adrian",
""
],
[
"Biljecki",
"Filip",
""
],
[
"Ignatius",
"Marcel",
""
],
[
"Poolla",
"Kameshwar",
""
],
[
"Miller",
"Clayton",
""
]
] |
new_dataset
| 0.999862 |
2305.01972
|
Marvin Geiselhart
|
Marvin Geiselhart, Marc Gauger, Felix Krieg, Jannis Clausius and
Stephan ten Brink
|
Phase-Equivariant Polar Coded Modulation
|
5 pages, 6 figures, submitted to IEEE for possible publication
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
For short-packet, low-latency communications over random access channels,
pilot overhead significantly reduces spectral efficiency. Therefore,
pilotless systems recently gained attraction. While blind phase estimation
algorithms such as Viterbi-Viterbi Phase Estimation (VVPE) can correct a phase
offset using only payload symbols, a phase ambiguity remains. We first show
that the remaining phase rotations in a polar coded quadrature amplitude
modulation (QAM) transmission with gray labeling are combinations of bit-flips
and automorphisms. Therefore, the decoder is equivariant to such phase
rotations and, by smartly selecting the frozen bits, one can jointly decode and
resolve the phase ambiguity, without the need for pilot symbols or an outer
code. Our proposed system outperforms pilot-assisted transmissions by up to 0.8
dB and 2 dB for quaternary phase shift keying (QPSK) and 16-QAM, respectively.
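For context, Viterbi-Viterbi phase estimation on M-PSK removes the modulation
by raising each received symbol to the M-th power; a minimal NumPy sketch (our
illustration of the classic estimator, not the paper's QAM-specific handling)
is:

```python
import numpy as np

def vv_phase_estimate(rx: np.ndarray, m: int = 4) -> float:
    """Blind Viterbi-Viterbi phase estimate for M-PSK symbols.

    Raising symbols to the m-th power strips the data phase (a multiple of
    2*pi/m), so the averaged angle divided by m estimates the common offset,
    but only up to a residual m-fold ambiguity -- the ambiguity that the
    decoder-side equivariance described above is used to resolve.
    """
    return float(np.angle(np.sum(rx ** m)) / m)

# Example: QPSK symbols on the {1, j, -1, -j} grid, rotated by 0.2 rad.
rng = np.random.default_rng(0)
tx = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=1000))
noise = 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
rx = tx * np.exp(1j * 0.2) + noise
print(vv_phase_estimate(rx))  # close to 0.2 (mod pi/2)
```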
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 08:38:10 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Geiselhart",
"Marvin",
""
],
[
"Gauger",
"Marc",
""
],
[
"Krieg",
"Felix",
""
],
[
"Clausius",
"Jannis",
""
],
[
"Brink",
"Stephan ten",
""
]
] |
new_dataset
| 0.995165 |
2305.02008
|
William Ljungbergh
|
Mina Alibeigi, William Ljungbergh, Adam Tonderski, Georg Hess, Adam
Lilja, Carl Lindstrom, Daria Motorniuk, Junsheng Fu, Jenny Widahl, and
Christoffer Petersson
|
Zenseact Open Dataset: A large-scale and diverse multimodal dataset for
autonomous driving
| null | null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Existing datasets for autonomous driving (AD) often lack diversity and
long-range capabilities, focusing instead on 360{\deg} perception and temporal
reasoning. To address this gap, we introduce Zenseact Open Dataset (ZOD), a
large-scale and diverse multimodal dataset collected over two years in various
European countries, covering an area 9x that of existing datasets. ZOD boasts
the highest range and resolution sensors among comparable datasets, coupled
with detailed keyframe annotations for 2D and 3D objects (up to 245m), road
instance/semantic segmentation, traffic sign recognition, and road
classification. We believe that this unique combination will facilitate
breakthroughs in long-range perception and multi-task learning. The dataset is
composed of Frames, Sequences, and Drives, designed to encompass both data
diversity and support for spatio-temporal learning, sensor fusion,
localization, and mapping. Frames consist of 100k curated camera images with
two seconds of other supporting sensor data, while the 1473 Sequences and 29
Drives include the entire sensor suite for 20 seconds and a few minutes,
respectively. ZOD is the only large-scale AD dataset released under a
permissive license, allowing for both research and commercial use. The dataset
is accompanied by an extensive development kit. Data and more information are
available online (https://zod.zenseact.com).
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 09:59:18 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Alibeigi",
"Mina",
""
],
[
"Ljungbergh",
"William",
""
],
[
"Tonderski",
"Adam",
""
],
[
"Hess",
"Georg",
""
],
[
"Lilja",
"Adam",
""
],
[
"Lindstrom",
"Carl",
""
],
[
"Motorniuk",
"Daria",
""
],
[
"Fu",
"Junsheng",
""
],
[
"Widahl",
"Jenny",
""
],
[
"Petersson",
"Christoffer",
""
]
] |
new_dataset
| 0.999859 |
2305.02016
|
Eduardo Gallo
|
Eduardo Gallo
|
Stochastic High Fidelity Autonomous Fixed Wing Aircraft Flight Simulator
|
135 pages, 49 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This document describes the architecture and algorithms of a high fidelity
fixed wing flight simulator intended to test and validate novel guidance,
navigation, and control (GNC) algorithms for autonomous aircraft. It aims to
replicate the influence of as many factors as possible on the aircraft
performances, the Earth model, the physics of flight and the associated
equations of motion, and in particular the behavior of the onboard sensors,
limiting the assumptions to the bare minimum, and including multiple relatively
minor effects not usually considered in simulation that may play a role in the
GNC algorithms not performing as intended. The author releases the flight
simulator C++ implementation as open-source software. The simulator's modular
design enables the replacement of the standard GNC algorithms with the
objective of evaluating their performances when subject to specific missions
and meteorological conditions (atmospheric properties, wind field, air
turbulence). The testing and evaluation is performed by means of Monte Carlo
simulations, as most simulation modules (such as the aircraft mission, the
meteorological conditions, the errors introduced by the sensors, and the
initial conditions) are defined stochastically and hence vary in a
pseudo-random way from one execution to the next according to certain
user-defined input parameters, ensuring that the results are valid for a wide
range of conditions. In addition to modeling the outputs of all sensors usually
present onboard a fixed wing platform, such as accelerometers, gyroscopes,
magnetometers, Pitot tube, air vanes, and a Global Navigation Satellite System
(GNSS) receiver, the simulator is also capable of generating realistic images
of the Earth surface that resemble what an onboard camera would record if
following the resulting trajectory, enabling the use and evaluation of visual
and visual inertial navigation systems.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 10:11:43 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Gallo",
"Eduardo",
""
]
] |
new_dataset
| 0.987524 |
2305.02033
|
Mosayeb Shams
|
Mosayeb Shams, Ahmed H. Elsheikh
|
Gym-preCICE: Reinforcement Learning Environments for Active Flow Control
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Active flow control (AFC) involves manipulating fluid flow over time to
achieve a desired performance or efficiency. AFC, as a sequential optimisation
task, can benefit from utilising Reinforcement Learning (RL) for dynamic
optimisation. In this work, we introduce Gym-preCICE, a Python adapter fully
compliant with Gymnasium (formerly known as OpenAI Gym) API to facilitate
designing and developing RL environments for single- and multi-physics AFC
applications. In an actor-environment setting, Gym-preCICE takes advantage of
preCICE, an open-source coupling library for partitioned multi-physics
simulations, to handle information exchange between a controller (actor) and an
AFC simulation environment. The developed framework results in a seamless
non-invasive integration of realistic physics-based simulation toolboxes with
RL algorithms. Gym-preCICE provides a framework for designing RL environments
to model AFC tasks, as well as a playground for applying RL algorithms in
various AFC-related engineering applications.
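To illustrate the Gymnasium API that such an adapter complies with, here is a
minimal environment skeleton (our own sketch of the generic interface; it does
not show preCICE coupling or Gym-preCICE's actual classes, and the toy dynamics
are invented for demonstration):

```python
import gymnasium as gym
import numpy as np

class ToyAFCEnv(gym.Env):
    """Toy stand-in for an AFC environment: a scalar 'flow' state drifts,
    and the controller is rewarded for keeping it near zero."""

    def __init__(self):
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(1,))
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,))
        self._state = np.zeros(1, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random per the Gymnasium API
        self._state = self.np_random.normal(size=1).astype(np.float32)
        return self._state.copy(), {}

    def step(self, action):
        # A real adapter would exchange the action and observation with the
        # coupled physics solver here, instead of this toy update.
        self._state = 0.9 * self._state + 0.1 * action.astype(np.float32)
        reward = -float(np.abs(self._state).sum())
        return self._state.copy(), reward, False, False, {}
```

Any RL library that speaks the Gymnasium `reset`/`step` protocol can then drive
the environment, which is exactly the interoperability the adapter targets.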
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 10:54:56 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Shams",
"Mosayeb",
""
],
[
"Elsheikh",
"Ahmed H.",
""
]
] |
new_dataset
| 0.999454 |
2305.02053
|
Ermes Franch
|
Ermes Franch, Philippe Gaborit, Chunlei Li
|
Generalized LRPC codes
|
A shorter version of this paper was presented in ITW 2023
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we generalize the notion of low-rank parity check (LRPC) codes
by introducing a bilinear product over $\mathbb{F}_q^m$ based on a generic
3-tensor in $\mathbb{F}_q^{m \times m \times m}$, where $\mathbb{F}_q$ is the
finite field with $q$ elements. The generalized LRPC codes are
$\mathbb{F}_q$-linear codes in general, and a particular choice of the 3-tensor
corresponds to the original $\mathbb{F}_{q^m}$-linear LRPC codes. For the
generalized LRPC codes, we propose two probabilistic polynomial-time decoding
algorithms by adapting the decoding method for LRPC codes, and we also show
that the proposed algorithms have a decoding failure rate similar to that of
decoding LRPC codes.
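Concretely, one natural reading of such a tensor-based bilinear product (our
notation, stated here only for orientation) is, for $x, y \in \mathbb{F}_q^m$
and a 3-tensor $T = (t_{ijk})$,
$$ (x \star y)_k = \sum_{i=1}^{m} \sum_{j=1}^{m} t_{ijk}\, x_i y_j, \qquad k = 1, \dots, m, $$
so choosing $T$ as the multiplication tensor of the field extension
$\mathbb{F}_{q^m}$ over $\mathbb{F}_q$ recovers ordinary
$\mathbb{F}_{q^m}$-multiplication, matching the remark that a particular choice
of the 3-tensor corresponds to the original LRPC codes.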
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 11:39:28 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Franch",
"Ermes",
""
],
[
"Gaborit",
"Philippe",
""
],
[
"Li",
"Chunlei",
""
]
] |
new_dataset
| 0.994292 |
2305.02195
|
Chen Tessler
|
Chen Tessler, Yoni Kasten, Yunrong Guo, Shie Mannor, Gal Chechik, Xue
Bin Peng
|
CALM: Conditional Adversarial Latent Models for Directable Virtual
Characters
|
Accepted to SIGGRAPH 2023
| null |
10.1145/3588432.3591541
| null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present Conditional Adversarial Latent Models (CALM), an
approach for generating diverse and directable behaviors for user-controlled
interactive virtual characters. Using imitation learning, CALM learns a
representation of movement that captures the complexity and diversity of human
motion, and enables direct control over character movements. The approach
jointly learns a control policy and a motion encoder that reconstructs key
characteristics of a given motion without merely replicating it. The results
show that CALM learns a semantic motion representation, enabling control over
the generated motions and style-conditioning for higher-level task training.
Once trained, the character can be controlled using intuitive interfaces, akin
to those found in video games.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 09:01:44 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Tessler",
"Chen",
""
],
[
"Kasten",
"Yoni",
""
],
[
"Guo",
"Yunrong",
""
],
[
"Mannor",
"Shie",
""
],
[
"Chechik",
"Gal",
""
],
[
"Peng",
"Xue Bin",
""
]
] |
new_dataset
| 0.993392 |
2305.02235
|
Yuxiang Nie
|
Yuxiang Nie, Heyan Huang, Wei Wei, Xian-Ling Mao
|
AttenWalker: Unsupervised Long-Document Question Answering via
Attention-based Graph Walking
|
Accepted to the Findings of ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Annotating long-document question answering (long-document QA) pairs is
time-consuming and expensive. To alleviate the problem, it might be possible to
generate long-document QA pairs via unsupervised question answering (UQA)
methods. However, existing UQA tasks are based on short documents, and can
hardly incorporate long-range information. To tackle the problem, we propose a
new task, named unsupervised long-document question answering (ULQA), aiming to
generate high-quality long-document QA instances in an unsupervised manner.
Besides, we propose AttenWalker, a novel unsupervised method to aggregate and
generate answers with long-range dependency so as to construct long-document QA
pairs. Specifically, AttenWalker is composed of three modules, i.e., span
collector, span linker and answer aggregator. Firstly, the span collector takes
advantage of constituent parsing and reconstruction loss to select informative
candidate spans for constructing answers. Secondly, by going through the
attention graph of a pre-trained long-document model, potentially interrelated
text spans (that might be far apart) could be linked together via an
attention-walking algorithm. Thirdly, in the answer aggregator, linked spans
are aggregated into the final answer via the mask-filling ability of a
pre-trained model. Extensive experiments show that AttenWalker outperforms
previous methods on Qasper and NarrativeQA. In addition, AttenWalker also shows
strong performance in the few-shot learning setting.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 16:16:14 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Nie",
"Yuxiang",
""
],
[
"Huang",
"Heyan",
""
],
[
"Wei",
"Wei",
""
],
[
"Mao",
"Xian-Ling",
""
]
] |
new_dataset
| 0.978618 |
2305.02269
|
Jinlong Xue
|
Jinlong Xue, Yayue Deng, Fengping Wang, Ya Li, Yingming Gao, Jianhua
Tao, Jianqing Sun, Jiaen Liang
|
M2-CTTS: End-to-End Multi-scale Multi-modal Conversational
Text-to-Speech Synthesis
|
5 pages, 1 figures, 2 tables. Accepted by ICASSP 2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conversational text-to-speech (TTS) aims to synthesize speech with prosody
appropriate for a reply, based on the historical conversation. However, it is still a
challenge to comprehensively model the conversation, and a majority of
conversational TTS systems only focus on extracting global information and omit
local prosody features, which contain important fine-grained information like
keywords and emphasis. Moreover, it is insufficient to only consider the
textual features, and acoustic features also contain various prosody
information. Hence, we propose M2-CTTS, an end-to-end multi-scale multi-modal
conversational text-to-speech system, aiming to comprehensively utilize
historical conversation and enhance prosodic expression. More specifically, we
design a textual context module and an acoustic context module with both
coarse-grained and fine-grained modeling. Experimental results demonstrate that
our model, which mixes in fine-grained context information and additionally
considers acoustic features, achieves better prosody performance and
naturalness in CMOS tests.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 16:59:38 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Xue",
"Jinlong",
""
],
[
"Deng",
"Yayue",
""
],
[
"Wang",
"Fengping",
""
],
[
"Li",
"Ya",
""
],
[
"Gao",
"Yingming",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Sun",
"Jianqing",
""
],
[
"Liang",
"Jiaen",
""
]
] |
new_dataset
| 0.996182 |
2305.02307
|
Mengyun Shi
|
Mengyun Shi, Serge Belongie, Claire Cardie
|
Fashionpedia-Taste: A Dataset towards Explaining Human Fashion Taste
| null | null | null | null |
cs.CV cs.AI cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Existing fashion datasets do not consider the multiple factors that cause a
consumer to like or dislike a fashion image. Even when two consumers like the
same fashion image, they may like it for totally different reasons. In this
paper, we study why a consumer likes a certain fashion image. Towards this
goal, we introduce an interpretability dataset, Fashionpedia-taste, consisting
of rich annotations that explain why a subject likes or dislikes a fashion
image from the following 3 perspectives: 1) localized attributes; 2) human
attention; 3) caption. Furthermore, subjects are asked to provide their
personal attributes and preference on fashion, such as personality and
preferred fashion brands. Our dataset makes it possible for researchers to
build computational models to fully understand and interpret human fashion
taste from different humanistic perspectives and modalities.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 17:54:50 GMT"
}
] | 2023-05-04T00:00:00 |
[
[
"Shi",
"Mengyun",
""
],
[
"Belongie",
"Serge",
""
],
[
"Cardie",
"Claire",
""
]
] |
new_dataset
| 0.999863 |
2206.02342
|
San Jiang
|
Shenhong Li, Sheng He, San Jiang, Wanshou Jiang, Lin Zhang
|
WHU-Stereo: A Challenging Benchmark for Stereo Matching of
High-Resolution Satellite Images
| null | null |
10.1109/TGRS.2023.3245205
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Stereo matching of high-resolution satellite images (HRSI) is still a
fundamental but challenging task in the field of photogrammetry and remote
sensing. Recently, deep learning (DL) methods, especially convolutional neural
networks (CNNs), have demonstrated tremendous potential for stereo matching on
public benchmark datasets. However, datasets for stereo matching of satellite
images are scarce. To facilitate further research, this paper creates and
publishes a challenging dataset, termed WHU-Stereo, for stereo matching DL
network training and testing. This dataset is created by using airborne LiDAR
point clouds and high-resolution stereo imageries taken from the Chinese
GaoFen-7 satellite (GF-7). The WHU-Stereo dataset contains more than 1700
epipolar rectified image pairs, which cover six areas in China and includes
various kinds of landscapes. We have assessed the accuracy of ground-truth
disparity maps, and it is shown that our dataset achieves precision comparable
to existing state-of-the-art stereo matching datasets. To verify its
feasibility, in experiments, the hand-crafted SGM stereo matching algorithm and
recent deep learning networks have been tested on the WHU-Stereo dataset.
Experimental results show that deep learning networks can be well trained and
achieve higher performance than the hand-crafted SGM algorithm, and the dataset
has great potential in remote sensing applications. The WHU-Stereo dataset can
serve as a challenging benchmark for stereo matching of high-resolution
satellite images, and performance evaluation of deep learning models. Our
dataset is available at https://github.com/Sheng029/WHU-Stereo
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 04:01:46 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Li",
"Shenhong",
""
],
[
"He",
"Sheng",
""
],
[
"Jiang",
"San",
""
],
[
"Jiang",
"Wanshou",
""
],
[
"Zhang",
"Lin",
""
]
] |
new_dataset
| 0.999244 |
2210.16153
|
Izzy Friedlander
|
Izzy Friedlander, Thanasis Bouganis, Maximilien Gadouleau
|
The MacWilliams Identity for the Skew Rank Metric
|
39 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The weight distribution of an error correcting code is a crucial statistic in
determining its performance. One key tool for relating the weight of a code to
that of its dual is the MacWilliams Identity, first developed for the Hamming
metric. This identity has two forms: one is a functional transformation of the
weight enumerators, while the other is a direct relation of the weight
distributions via (generalised) Krawtchouk polynomials. The functional
transformation form can in particular be used to derive important moment
identities for the weight distribution of codes. In this paper, we focus on
codes in the skew rank metric. In these codes, the codewords are skew-symmetric
matrices, and the distance between two matrices is the skew rank metric, which
is half the rank of their difference. This paper develops a $q$-analog
MacWilliams Identity in the form of a functional transformation for codes based
on skew-symmetric matrices under their associated skew rank metric. The method
introduces a skew-$q$ algebra and uses generalised Krawtchouk polynomials.
Based on this new MacWilliams Identity, we then derive several moments of the
skew rank distribution for these codes.
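For orientation, the functional-transformation form in the original
Hamming-metric setting (a standard fact, quoted here for context rather than
taken from this paper) reads
$$ W_{C^{\perp}}(X, Y) = \frac{1}{|C|}\, W_C\big(X + (q-1)Y,\; X - Y\big), $$
where $W_C(X, Y) = \sum_{c \in C} X^{\,n - \mathrm{wt}(c)} Y^{\,\mathrm{wt}(c)}$
is the weight enumerator of a length-$n$ code $C$ over $\mathbb{F}_q$; the
paper develops a $q$-analog of this transformation for the skew rank metric.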
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 14:31:12 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 12:22:20 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Friedlander",
"Izzy",
""
],
[
"Bouganis",
"Thanasis",
""
],
[
"Gadouleau",
"Maximilien",
""
]
] |
new_dataset
| 0.987442 |
2211.00982
|
Aaron Lopez-Garcia
|
Aar\'on L\'opez-Garc\'ia
|
SpectroMap: Peak detection algorithm for audio fingerprinting
|
12 pages, 5 figures
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Audio fingerprinting is a technique used to identify and match audio
recordings based on their unique characteristics. It involves creating a
condensed representation of an audio signal that can be used to quickly compare
and match against other audio recordings. The fingerprinting process involves
analyzing the audio signal to extract certain features, such as spectral
content, tempo, and rhythm, among other things. In this paper, we present
SpectroMap, an open-source GitHub repository for audio fingerprinting written
in Python programming language. It is composed of a peak search algorithm that
extracts topological prominences from a spectrogram via time-frequency bands.
In this paper, we introduce the algorithm and demonstrate it in two
experimental applications, on a high-quality urban sound dataset and on
environmental audio recordings, describing how it works and how effectively it
handles the input data. Finally, we provide two Python scripts that reproduce
the proposed case studies in order to ease the reproducibility of our audio
fingerprinting system.
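To illustrate the general idea, the following is a minimal sketch of band-wise
spectrogram peak picking (our simplification, not the repository's exact
prominence computation; the band size and noise floor are assumed parameters):

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import spectrogram

def peak_fingerprint(audio, fs, band=(25, 25), floor_db=-60.0):
    """Pick prominent local maxima of a spectrogram as fingerprint points.

    Returns (time, frequency, level_dB) triples for every time-frequency
    bin that is the maximum of its local band and above the noise floor.
    """
    freqs, times, sxx = spectrogram(audio, fs=fs)
    sxx_db = 10.0 * np.log10(sxx + 1e-12)
    is_peak = (maximum_filter(sxx_db, size=band) == sxx_db) & (sxx_db > floor_db)
    fi, ti = np.nonzero(is_peak)
    return list(zip(times[ti], freqs[fi], sxx_db[fi, ti]))
```

Matching two recordings then reduces to comparing their sets of
time-frequency peaks, which is why the representation is compact and fast to
search.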
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 09:40:22 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 14:21:09 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"López-García",
"Aarón",
""
]
] |
new_dataset
| 0.999598 |
2212.06858
|
Adam Tonderski
|
Georg Hess, Adam Tonderski, Christoffer Petersson, Kalle {\AA}str\"om,
Lennart Svensson
|
LidarCLIP or: How I Learned to Talk to Point Clouds
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Research connecting text and images has recently seen several breakthroughs,
with models like CLIP, DALL-E 2, and Stable Diffusion. However, the connection
between text and other visual modalities, such as lidar data, has received less
attention, hindered by the lack of text-lidar datasets. In this work, we
propose LidarCLIP, a mapping from automotive point clouds to a pre-existing
CLIP embedding space. Using image-lidar pairs, we supervise a point cloud
encoder with the image CLIP embeddings, effectively relating text and lidar
data with the image domain as an intermediary. We show the effectiveness of
LidarCLIP by demonstrating that lidar-based retrieval is generally on par with
image-based retrieval, but with complementary strengths and weaknesses. By
combining image and lidar features, we improve upon both single-modality
methods and enable a targeted search for challenging detection scenarios under
adverse sensor conditions. We also explore zero-shot classification and show
that LidarCLIP outperforms existing attempts to use CLIP for point clouds by a
large margin. Finally, we leverage our compatibility with CLIP to explore a
range of applications, such as point cloud captioning and lidar-to-image
generation, without any additional training. Code and pre-trained models are
available at https://github.com/atonderski/lidarclip.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 19:02:35 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 16:00:00 GMT"
},
{
"version": "v3",
"created": "Tue, 2 May 2023 13:53:40 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Hess",
"Georg",
""
],
[
"Tonderski",
"Adam",
""
],
[
"Petersson",
"Christoffer",
""
],
[
"Åström",
"Kalle",
""
],
[
"Svensson",
"Lennart",
""
]
] |
new_dataset
| 0.999632 |
2301.11445
|
Biao Zhang
|
Biao Zhang, Jiapeng Tang, Matthias Niessner, Peter Wonka
|
3DShape2VecSet: A 3D Shape Representation for Neural Fields and
Generative Diffusion Models
|
Accepted by SIGGRAPH 2023 (Journal Track), Project website:
https://1zb.github.io/3DShape2VecSet/, Project demo:
https://youtu.be/KKQsQccpBFk
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce 3DShape2VecSet, a novel shape representation for neural fields
designed for generative diffusion models. Our shape representation can encode
3D shapes given as surface models or point clouds, and represents them as
neural fields. The concept of neural fields has previously been combined with a
global latent vector, a regular grid of latent vectors, or an irregular grid of
latent vectors. Our new representation encodes neural fields on top of a set of
vectors. We draw from multiple concepts, such as the radial basis function
representation and the cross attention and self-attention function, to design a
learnable representation that is especially suitable for processing with
transformers. Our results show improved performance in 3D shape encoding and 3D
shape generative modeling tasks. We demonstrate a wide variety of generative
applications: unconditioned generation, category-conditioned generation,
text-conditioned generation, point-cloud completion, and image-conditioned
generation.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 22:23:03 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2023 17:37:49 GMT"
},
{
"version": "v3",
"created": "Mon, 1 May 2023 22:19:24 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Zhang",
"Biao",
""
],
[
"Tang",
"Jiapeng",
""
],
[
"Niessner",
"Matthias",
""
],
[
"Wonka",
"Peter",
""
]
] |
new_dataset
| 0.997939 |
2302.13519
|
Jiawei Lian
|
Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, Shaohui Mei
|
CBA: Contextual Background Attack against Optical Aerial Detection in
the Physical World
| null | null |
10.1109/TGRS.2023.3264839
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Patch-based physical attacks have raised increasing concerns. However, most
existing methods focus on obscuring targets captured on the ground, and some of
these methods are simply extended to deceive aerial detectors. They smear the
targeted objects in the physical world with elaborated adversarial patches,
which can only slightly sway the aerial detectors' predictions and exhibit weak
attack transferability. To address the above issues, we propose Contextual
Background Attack (CBA), a novel physical attack framework against aerial
detection, which can achieve strong attack efficacy and transferability in the
physical world even without smudging the objects of interest at all.
Specifically, the targets of interest, i.e. the aircraft in aerial images, are
adopted to mask adversarial patches. The pixels outside the mask area are
optimized to make the generated adversarial patches closely cover the critical
contextual background area for detection, which contributes to gifting
adversarial patches with more robust and transferable attack potency in the
real world. To further strengthen the attack performance, the adversarial
patches are forced to lie outside the targets during training, whereby the
detected objects of interest, both on and outside the patches, contribute to
the accumulation of attack efficacy. Consequently, the sophisticatedly designed
patches achieve solid fooling efficacy against objects both on and outside the
adversarial patches simultaneously. Extensive proportionally scaled experiments
are performed in physical scenarios, demonstrating the superiority and
potential of the proposed framework for physical attacks. We expect that the
proposed physical attack method will serve as a benchmark for assessing the
adversarial robustness of diverse aerial detectors and defense methods.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 05:10:27 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 01:37:57 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Mar 2023 01:09:41 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Lian",
"Jiawei",
""
],
[
"Wang",
"Xiaofei",
""
],
[
"Su",
"Yuru",
""
],
[
"Ma",
"Mingyang",
""
],
[
"Mei",
"Shaohui",
""
]
] |
new_dataset
| 0.999722 |
2303.06849
|
Zhonghua Sun
|
Tingfang Chen and Cunsheng Ding and Chengju Li and Zhonghua Sun
|
Four infinite families of ternary cyclic codes with a square-root-like
lower bound
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cyclic codes are an interesting type of linear codes and have wide
applications in communication and storage systems due to their efficient
encoding and decoding algorithms. Inspired by the recent work on binary cyclic
codes published in IEEE Trans. Inf. Theory, vol. 68, no. 12, pp. 7842-7849,
2022, and the arXiv paper arXiv:2301.06446, the objectives of this paper are
the construction and analyses of four infinite families of ternary cyclic codes
with length $n=3^m-1$ for odd $m$ and dimension $k \in \{n/2, (n + 2)/2\}$
whose minimum distances have a square-root-like lower bound. Their duals have
parameters $[n, k^\perp, d^\perp]$, where $k^\perp \in \{n/2, (n- 2)/2\}$ and
$d^\perp$ also has a square-root-like lower bound. These families of codes and
their duals contain distance-optimal cyclic codes.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 04:36:55 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 13:20:08 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Chen",
"Tingfang",
""
],
[
"Ding",
"Cunsheng",
""
],
[
"Li",
"Chengju",
""
],
[
"Sun",
"Zhonghua",
""
]
] |
new_dataset
| 0.99681 |
2303.12394
|
Qianxiong Xu
|
Qianxiong Xu, Cheng Long, Liang Yu, Chen Zhang
|
Road Extraction with Satellite Images and Partial Road Maps
|
This paper has been accepted by IEEE Transactions on Geoscience and
Remote Sensing
| null |
10.1109/TGRS.2023.3261332
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Road extraction is a process of automatically generating road maps mainly
from satellite images. Existing models all aim to generate roads from
scratch, even though a large quantity of road maps, though incomplete, are
publicly available (e.g. those from OpenStreetMap) and can help with road
extraction. In this paper, we propose to conduct road extraction based on
satellite images and partial road maps, which is new. We then propose a
two-branch Partial to Complete Network (P2CNet) for the task, which has two
prominent components: Gated Self-Attention Module (GSAM) and Missing Part (MP)
loss. GSAM leverages a channel-wise self-attention module and a gate module to
capture long-range semantics, filter out useless information, and better fuse
the features from two branches. MP loss is derived from the partial road maps,
trying to give more attention to the road pixels that do not exist in partial
road maps. Extensive experiments are conducted to demonstrate the effectiveness
of our model, e.g. P2CNet achieves state-of-the-art performance with the IoU
scores of 70.71% and 75.52%, respectively, on the SpaceNet and OSM datasets.
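  To make the MP loss idea concrete, here is a minimal NumPy sketch of a weighted
binary cross-entropy that emphasizes road pixels absent from the partial map; the
specific weighting scheme and the helper name mp_loss are illustrative assumptions,
not the paper's exact formulation:

    import numpy as np

    def mp_loss(pred, gt_road, partial_road, w_missing=2.0, eps=1e-7):
        """Binary cross-entropy that up-weights road pixels missing
        from the partial map (gt_road == 1 but partial_road == 0)."""
        pred = np.clip(pred, eps, 1 - eps)
        bce = -(gt_road * np.log(pred) + (1 - gt_road) * np.log(1 - pred))
        weights = 1.0 + (w_missing - 1.0) * gt_road * (1 - partial_road)
        return float((weights * bce).mean())

    # Toy 2x2 example: one road pixel is absent from the partial map.
    pred = np.array([[0.9, 0.2], [0.4, 0.1]])
    gt = np.array([[1.0, 0.0], [1.0, 0.0]])
    partial = np.array([[1.0, 0.0], [0.0, 0.0]])   # misses pixel (1, 0)
    print(mp_loss(pred, gt, partial))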
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 08:59:42 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Xu",
"Qianxiong",
""
],
[
"Long",
"Cheng",
""
],
[
"Yu",
"Liang",
""
],
[
"Zhang",
"Chen",
""
]
] |
new_dataset
| 0.990851 |
2304.09148
|
Lanyun Zhu
|
Tianrun Chen, Lanyun Zhu, Chaotao Ding, Runlong Cao, Yan Wang, Zejian
Li, Lingyun Sun, Papa Mao, Ying Zang
|
SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in
Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and
More
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of large models, also known as foundation models, has brought
significant advancements to AI research. One such model is Segment Anything
(SAM), which is designed for image segmentation tasks. However, as with other
foundation models, our experimental findings suggest that SAM may fail or
perform poorly in certain segmentation tasks, such as shadow detection and
camouflaged object detection (concealed object detection). This study first
paves the way for applying the large pre-trained image segmentation model SAM
to these downstream tasks, even in situations where SAM performs poorly. Rather
than fine-tuning the SAM network, we propose \textbf{SAM-Adapter}, which
incorporates domain-specific information or visual prompts into the
segmentation network by using simple yet effective adapters. By integrating
task-specific knowledge with general knowledge learnt by the large model,
SAM-Adapter can significantly elevate the performance of SAM in challenging
tasks, as shown in extensive experiments. We can even outperform task-specific
network models and achieve state-of-the-art performance in the tasks we tested:
camouflaged object detection and shadow detection. We also tested polyp
segmentation (medical image segmentation) and achieved better results. We
believe our work opens up opportunities for utilizing SAM in downstream tasks,
with potential applications in various fields, including medical image
processing, agriculture, remote sensing, and more.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 17:38:54 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 17:03:58 GMT"
},
{
"version": "v3",
"created": "Tue, 2 May 2023 17:06:51 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Chen",
"Tianrun",
""
],
[
"Zhu",
"Lanyun",
""
],
[
"Ding",
"Chaotao",
""
],
[
"Cao",
"Runlong",
""
],
[
"Wang",
"Yan",
""
],
[
"Li",
"Zejian",
""
],
[
"Sun",
"Lingyun",
""
],
[
"Mao",
"Papa",
""
],
[
"Zang",
"Ying",
""
]
] |
new_dataset
| 0.988951 |
2304.13180
|
Juraj Vladika
|
Juraj Vladika, Florian Matthes
|
Sebis at SemEval-2023 Task 7: A Joint System for Natural Language
Inference and Evidence Retrieval from Clinical Trial Reports
|
6 pages, SemEval 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
With the increasing number of clinical trial reports generated every day, it
is becoming hard to keep up with novel discoveries that inform evidence-based
healthcare recommendations. To help automate this process and assist medical
experts, NLP solutions are being developed. This motivated the SemEval-2023
Task 7, where the goal was to develop an NLP system for two tasks: evidence
retrieval and natural language inference from clinical trial data. In this
paper, we describe our two developed systems. The first one is a pipeline
system that models the two tasks separately, while the second one is a joint
system that learns the two tasks simultaneously with a shared representation
and a multi-task learning approach. The final system combines their outputs in
an ensemble system. We formalize the models, present their characteristics and
challenges, and provide an analysis of the achieved results. Our system ranked
3rd out of 40 participants with our final submission.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 22:22:42 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 16:46:33 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Vladika",
"Juraj",
""
],
[
"Matthes",
"Florian",
""
]
] |
new_dataset
| 0.998989 |
2304.13509
|
Alexander Kapitanov
|
Alexander Kapitanov, Karina Kvanchiani, Sofia Kirillova
|
EasyPortrait -- Face Parsing and Portrait Segmentation Dataset
|
portrait segmentation, face parsing, image segmentation dataset
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recently, due to COVID-19 and the growing demand for remote work, video
conferencing apps have become especially widespread. The most valuable features
of video chats are real-time background removal and face beautification. While
solving these tasks, computer vision researchers face the problem of having
relevant data for the training stage. There is no large dataset of
high-quality, diverse labeled images of people in front of a laptop or
smartphone camera with which to train a lightweight model without additional approaches.
To boost the progress in this area, we provide a new image dataset,
EasyPortrait, for portrait segmentation and face parsing tasks. It contains
20,000 primarily indoor photos of 8,377 unique users, and fine-grained
segmentation masks separated into 9 classes. Images are collected and labeled
from crowdsourcing platforms. Unlike most face parsing datasets, in
EasyPortrait, the beard is not considered part of the skin mask, and the inside
area of the mouth is separated from the teeth. These features make
EasyPortrait suitable for skin enhancement and teeth whitening tasks. This paper
describes the pipeline for creating a large-scale and clean image segmentation
dataset using crowdsourcing platforms without additional synthetic data.
Moreover, we trained several models on EasyPortrait and showed experimental
results. The proposed dataset and trained models are publicly available.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 12:51:34 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 05:32:50 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Kapitanov",
"Alexander",
""
],
[
"Kvanchiani",
"Karina",
""
],
[
"Kirillova",
"Sofia",
""
]
] |
new_dataset
| 0.999858 |
2304.14082
|
Utku Evci
|
Joo Hyung Lee, Wonpyo Park, Nicole Mitchell, Jonathan Pilault, Johan
Obando-Ceron, Han-Byul Kim, Namhoon Lee, Elias Frantar, Yun Long, Amir
Yazdanbakhsh, Shivani Agrawal, Suvinay Subramanian, Xin Wang, Sheng-Chun Kao,
Xingyao Zhang, Trevor Gale, Aart Bik, Woohyun Han, Milen Ferev, Zhonglin Han,
Hong-Seok Kim, Yann Dauphin, Gintare Karolina Dziugaite, Pablo Samuel Castro,
Utku Evci
|
JaxPruner: A concise library for sparsity research
|
Jaxpruner is hosted at http://github.com/google-research/jaxpruner
| null | null | null |
cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces JaxPruner, an open-source JAX-based pruning and sparse
training library for machine learning research. JaxPruner aims to accelerate
research on sparse neural networks by providing concise implementations of
popular pruning and sparse training algorithms with minimal memory and latency
overhead. Algorithms implemented in JaxPruner use a common API and work
seamlessly with the popular optimization library Optax, which, in turn, enables
easy integration with existing JAX based libraries. We demonstrate this ease of
integration by providing examples in four different codebases: Scenic, t5x,
Dopamine, and FedJAX, and provide baseline experiments on popular benchmarks.
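  For background, the simplest member of the algorithm families such libraries
implement is magnitude pruning. A generic NumPy sketch follows; it deliberately
avoids mimicking JaxPruner's actual API, and the function name and threshold rule
are illustrative:

    import numpy as np

    def magnitude_prune(params, sparsity=0.8):
        """Zero out the smallest-magnitude weights across all arrays."""
        flat = np.concatenate([np.abs(p).ravel() for p in params.values()])
        threshold = np.quantile(flat, sparsity)    # keep the top (1 - sparsity)
        return {name: p * (np.abs(p) >= threshold) for name, p in params.items()}

    params = {"w1": np.random.randn(4, 4), "w2": np.random.randn(4, 2)}
    pruned = magnitude_prune(params, sparsity=0.75)
    kept = sum(int((p != 0).sum()) for p in pruned.values())
    print(f"nonzero weights kept: {kept} / {4*4 + 4*2}")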
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 10:45:30 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 08:43:29 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Lee",
"Joo Hyung",
""
],
[
"Park",
"Wonpyo",
""
],
[
"Mitchell",
"Nicole",
""
],
[
"Pilault",
"Jonathan",
""
],
[
"Obando-Ceron",
"Johan",
""
],
[
"Kim",
"Han-Byul",
""
],
[
"Lee",
"Namhoon",
""
],
[
"Frantar",
"Elias",
""
],
[
"Long",
"Yun",
""
],
[
"Yazdanbakhsh",
"Amir",
""
],
[
"Agrawal",
"Shivani",
""
],
[
"Subramanian",
"Suvinay",
""
],
[
"Wang",
"Xin",
""
],
[
"Kao",
"Sheng-Chun",
""
],
[
"Zhang",
"Xingyao",
""
],
[
"Gale",
"Trevor",
""
],
[
"Bik",
"Aart",
""
],
[
"Han",
"Woohyun",
""
],
[
"Ferev",
"Milen",
""
],
[
"Han",
"Zhonglin",
""
],
[
"Kim",
"Hong-Seok",
""
],
[
"Dauphin",
"Yann",
""
],
[
"Dziugaite",
"Gintare Karolina",
""
],
[
"Castro",
"Pablo Samuel",
""
],
[
"Evci",
"Utku",
""
]
] |
new_dataset
| 0.972199 |
2304.14643
|
Haoqiang Huang
|
Siu-Wing Cheng, Haoqiang Huang
|
Approximate Nearest Neighbor for Polygonal Curves under Fr\'echet
Distance
|
To appear at ICALP 2023
| null | null | null |
cs.CG cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We propose $\kappa$-approximate nearest neighbor (ANN) data structures for
$n$ polygonal curves under the Fr\'{e}chet distance in $\mathbb{R}^d$, where
$\kappa \in \{1+\varepsilon,3+\varepsilon\}$ and $d \geq 2$. We assume that
every input curve has at most $m$ vertices, every query curve has at most $k$
vertices, $k \ll m$, and $k$ is given for preprocessing. The query times are
$\tilde{O}(k(mn)^{0.5+\varepsilon}/\varepsilon^d+ k(d/\varepsilon)^{O(dk)})$
for $(1+\varepsilon)$-ANN and
$\tilde{O}(k(mn)^{0.5+\varepsilon}/\varepsilon^d)$ for $(3+\varepsilon)$-ANN.
The space and expected preprocessing time are
$\tilde{O}(k(mnd^d/\varepsilon^d)^{O(k+1/\varepsilon^2)})$ in both cases. In
two and three dimensions, we improve the query times to
$O(1/\varepsilon)^{O(k)} \cdot \tilde{O}(k)$ for $(1+\varepsilon)$-ANN and
$\tilde{O}(k)$ for $(3+\varepsilon)$-ANN. The space and expected preprocessing
time improve to $O(mn/\varepsilon)^{O(k)} \cdot \tilde{O}(k)$ in both cases.
For ease of presentation, we treat factors in our bounds that depend purely on
$d$ as~$O(1)$. The hidden polylog factors in the big-$\tilde{O}$ notation have
powers dependent on $d$.
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 06:15:13 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 05:30:33 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Cheng",
"Siu-Wing",
""
],
[
"Huang",
"Haoqiang",
""
]
] |
new_dataset
| 0.994759 |
2305.00436
|
Ngoc-Thanh Nguyen
|
Rogardt Heldal, Ngoc-Thanh Nguyen, Ana Moreira, Patricia Lago, Leticia
Duboc, Stefanie Betz, Vlad C. Coroama, Birgit Penzenstadler, Jari Porras,
Rafael Capilla, Ian Brooks, Shola Oyedeji, Colin C. Venters
|
Sustainability Competencies and Skills in Software Engineering: An
Industry Perspective
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Achieving the UN Sustainable Development Goals (SDGs) demands adequate levels
of awareness and actions to address sustainability challenges. Software systems
will play an important role in moving towards these targets. Sustainability
skills are necessary to support the development of software systems and to
provide sustainable IT-supported services for citizens. While a growing number
of academic bodies include sustainability education in their engineering and
computer science curricula, there is not yet comprehensive
research on the competencies and skills required by IT professionals to develop
such systems. This study aims to identify the industrial sustainability needs
for education and training from software engineers' perspective. We conducted
interviews and focus groups with experts from twenty-eight organisations with
an IT division from nine countries to understand their interests, goals and
achievements related to sustainability, and the skills and competencies needed
to achieve their goals. Our findings show that organisations are interested in
sustainability, both idealistically and increasingly for core business reasons.
They seek to improve the sustainability of processes and products but encounter
difficulties, like the trade-off between short-term financial profitability and
long-term sustainability goals. To fill the gaps, they have promoted in-house
training courses, collaborated with universities, and sent employees to
external training. The acquired competencies make sustainability an integral
part of software development. We conclude that educational programs should
include knowledge and skills on core sustainability concepts, system thinking,
soft skills, technical sustainability, sustainability impact and measurements,
values and ethics, standards and legal aspects, and advocacy and lobbying.
|
[
{
"version": "v1",
"created": "Sun, 30 Apr 2023 09:34:07 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 07:27:32 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Heldal",
"Rogardt",
""
],
[
"Nguyen",
"Ngoc-Thanh",
""
],
[
"Moreira",
"Ana",
""
],
[
"Lago",
"Patricia",
""
],
[
"Duboc",
"Leticia",
""
],
[
"Betz",
"Stefanie",
""
],
[
"Coroama",
"Vlad C.",
""
],
[
"Penzenstadler",
"Birgit",
""
],
[
"Porras",
"Jari",
""
],
[
"Capilla",
"Rafael",
""
],
[
"Brooks",
"Ian",
""
],
[
"Oyedeji",
"Shola",
""
],
[
"Venters",
"Colin C.",
""
]
] |
new_dataset
| 0.966741 |
2305.01024
|
Shixun Wu
|
Shixun Wu, Yujia Zhai, Jinyang Liu, Jiajun Huang, Zizhe Jian, Bryan M.
Wong, and Zizhong Chen
|
Anatomy of High-Performance GEMM with Online Fault Tolerance on GPUs
|
11 pages, 2023 International Conference on Supercomputing
| null |
10.1145/3577193.3593715
| null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by-sa/4.0/
|
General Matrix Multiplication (GEMM) is a crucial algorithm for various
applications such as machine learning and scientific computing, and an
efficient GEMM implementation is essential for the performance of these
systems. While researchers often strive for faster performance by using large
compute platforms, the increased scale of these systems can raise concerns
about hardware and software reliability. In this paper, we present a design for
a high-performance GEMM with algorithm-based fault tolerance for use on GPUs.
We describe fault-tolerant designs for GEMM at the thread, warp, and
threadblock levels, and also provide a baseline GEMM implementation that is
competitive with or faster than the state-of-the-art, proprietary cuBLAS GEMM.
We present a kernel fusion strategy to overlap and mitigate the memory latency
due to fault tolerance with the original GEMM computation. To support a wide
range of input matrix shapes and reduce development costs, we present a
template-based approach for automatic code generation for both fault-tolerant
and non-fault-tolerant GEMM implementations. We evaluate our work on NVIDIA
Tesla T4 and A100 server GPUs. Experimental results demonstrate that our
baseline GEMM presents comparable or superior performance compared to the
closed-source cuBLAS. The fault-tolerant GEMM incurs only a minimal overhead
(8.89\% on average) compared to cuBLAS even with hundreds of errors injected
per minute. For irregularly shaped inputs, the code generator-generated kernels
show remarkable speedups of $160\% \sim 183.5\%$ and $148.55\% \sim 165.12\%$
for fault-tolerant and non-fault-tolerant GEMMs, outperforming cuBLAS by up to
$41.40\%$.
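  For readers new to algorithm-based fault tolerance, the classic checksum scheme
underlying such designs can be sketched in NumPy; this is a simplified host-side
illustration for detecting a single corrupted element, not the paper's fused GPU
kernels:

    import numpy as np

    def abft_gemm(A, B):
        """GEMM with checksums appended: C_full[:-1, :-1] is A @ B, and the
        extra row/column lets us verify the result after the multiply."""
        Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum row
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum column
        return Ac @ Br

    def verify(C_full, tol=1e-8):
        C = C_full[:-1, :-1]
        ok_cols = np.allclose(C.sum(axis=0), C_full[-1, :-1], atol=tol)
        ok_rows = np.allclose(C.sum(axis=1), C_full[:-1, -1], atol=tol)
        return ok_cols and ok_rows

    A, B = np.random.randn(8, 8), np.random.randn(8, 8)
    C_full = abft_gemm(A, B)
    print(verify(C_full))          # True: checksums match
    C_full[2, 3] += 1.0            # inject a fault
    print(verify(C_full))          # False: the checksums expose the error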
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 18:30:22 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Wu",
"Shixun",
""
],
[
"Zhai",
"Yujia",
""
],
[
"Liu",
"Jinyang",
""
],
[
"Huang",
"Jiajun",
""
],
[
"Jian",
"Zizhe",
""
],
[
"Wong",
"Bryan M.",
""
],
[
"Chen",
"Zizhong",
""
]
] |
new_dataset
| 0.989663 |
2305.01056
|
Madeline Endres
|
Kaia Newman, Madeline Endres, Brittany Johnson, Westley Weimer
|
From Organizations to Individuals: Psychoactive Substance Use By
Professional Programmers
|
11 pages + 2 for citations, 4 Tables. Preprint of a paper that will
be published in the International Conference of Software Engineering (ICSE,
2023)
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Psychoactive substances, which influence the brain to alter perceptions and
moods, can have both positive and negative effects on critical
software engineering tasks. They are widely used in the software industry, but that use is
not well understood. We present the results of the first qualitative
investigation of the experiences of, and challenges faced by, psychoactive
substance users in professional software communities. We conduct a thematic
analysis of hour-long interviews with 26 professional programmers who use
psychoactive substances at work. Our results provide insight into individual
motivations and impacts, including mental health and the relationships between
various substances and productivity. Our findings elaborate on socialization
effects, including soft skills, stigma, and remote work. The analysis also
highlights implications for organizational policy, including positive and
negative impacts on recruitment and retention. By exploring individual usage
motivations, social and cultural ramifications, and organizational policy, we
demonstrate how substance use can permeate all levels of software development.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 19:44:00 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Newman",
"Kaia",
""
],
[
"Endres",
"Madeline",
""
],
[
"Johnson",
"Brittany",
""
],
[
"Weimer",
"Westley",
""
]
] |
new_dataset
| 0.982757 |
2305.01090
|
Michael D. Graham
|
Kevin Zeng, Michael D. Graham
|
Autoencoders for discovering manifold dimension and coordinates in data
from complex dynamical systems
| null | null | null | null |
cs.LG nlin.CD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While many phenomena in physics and engineering are formally
high-dimensional, their long-time dynamics often live on a lower-dimensional
manifold. The present work introduces an autoencoder framework that combines
implicit regularization with internal linear layers and $L_2$ regularization
(weight decay) to automatically estimate the underlying dimensionality of a
data set, produce an orthogonal manifold coordinate system, and provide the
mapping functions between the ambient space and manifold space, allowing for
out-of-sample projections. We validate our framework's ability to estimate the
manifold dimension for a series of datasets from dynamical systems of varying
complexities and compare to other state-of-the-art estimators. We analyze the
training dynamics of the network to glean insight into the mechanism of
low-rank learning and find that collectively each of the implicit regularizing
layers compound the low-rank representation and even self-correct during
training. Analysis of gradient descent dynamics for this architecture in the
linear case reveals the role of the internal linear layers in leading to faster
decay of a "collective weight variable" incorporating all layers, and the role
of weight decay in breaking degeneracies and thus driving convergence along
directions in which no decay would occur in its absence. We show that this
framework can be naturally extended for applications of state-space modeling
and forecasting by generating a data-driven dynamic model of a spatiotemporally
chaotic partial differential equation using only the manifold coordinates.
Finally, we demonstrate that our framework is robust to hyperparameter choices.
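  A minimal PyTorch sketch of the core idea follows: an autoencoder whose
bottleneck feeds several bias-free linear layers trained with weight decay, so that
superfluous latent directions shrink and the manifold dimension can be read off the
latent singular values. The layer sizes, training budget, and rank-reading threshold
here are illustrative assumptions, not the paper's settings:

    import torch
    import torch.nn as nn

    class IRMAE(nn.Module):
        """Autoencoder with internal linear layers for implicit rank regularization."""
        def __init__(self, dim_in=20, latent=10, n_linear=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(),
                                         nn.Linear(64, latent))
            # Bias-free linear maps: combined with weight decay, they drive the
            # effective rank of the latent code down toward the data's dimension.
            self.linears = nn.Sequential(*[nn.Linear(latent, latent, bias=False)
                                           for _ in range(n_linear)])
            self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                         nn.Linear(64, dim_in))

        def forward(self, x):
            z = self.linears(self.encoder(x))
            return self.decoder(z), z

    # Toy data: a 3-dimensional linear manifold embedded in 20 dimensions.
    x = torch.randn(2048, 3) @ torch.randn(3, 20)
    model = IRMAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    for _ in range(2000):
        opt.zero_grad()
        recon, z = model(x)
        loss = ((recon - x) ** 2).mean()
        loss.backward()
        opt.step()
    # Estimate the dimension from the latent singular value spectrum.
    s = torch.linalg.svdvals(z.detach() - z.detach().mean(0))
    print((s / s.max() > 1e-2).sum().item())   # roughly 3 for this toy manifold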
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 21:14:47 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Zeng",
"Kevin",
""
],
[
"Graham",
"Michael D.",
""
]
] |
new_dataset
| 0.982677 |
2305.01099
|
Charlie Cowen-Breen
|
Charlie Cowen-Breen (1), Creston Brooks (2), Johannes Haubold (2),
Barbara Graziosi (2) ((1) University of Cambridge, (2) Princeton University)
|
Logion: Machine Learning for Greek Philology
|
14 pages, 4 figures
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents machine-learning methods to address various problems in
Greek philology. After training a BERT model on the largest premodern Greek
dataset used for this purpose to date, we identify and correct previously
undetected errors made by scribes in the process of textual transmission, in
what is, to our knowledge, the first successful identification of such errors
via machine learning. Additionally, we demonstrate the model's capacity to fill
gaps caused by material deterioration of premodern manuscripts and compare the
model's performance to that of a domain expert. We find that best performance
is achieved when the domain expert is provided with model suggestions for
inspiration. With such human-computer collaborations in mind, we explore the
model's interpretability and find that certain attention heads appear to encode
select grammatical features of premodern Greek.
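  To illustrate the gap-filling setup, the following sketch uses the Hugging Face
fill-mask pipeline; the checkpoint named below is a generic placeholder, since the
authors' premodern-Greek BERT is a separate model:

    from transformers import pipeline

    # Placeholder checkpoint: substitute a BERT model trained on premodern Greek.
    fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

    # A lacuna in a manuscript is modeled as a masked token; the model
    # proposes candidate restorations ranked by probability.
    text = f"In the beginning was the {fill.tokenizer.mask_token}."
    for cand in fill(text, top_k=5):
        print(f"{cand['token_str']:>12s}  p={cand['score']:.3f}")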
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 21:56:25 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Cowen-Breen",
"Charlie",
"",
"University of Cambridge"
],
[
"Brooks",
"Creston",
"",
"Princeton University"
],
[
"Haubold",
"Johannes",
"",
"Princeton University"
],
[
"Graziosi",
"Barbara",
"",
"Princeton University"
]
] |
new_dataset
| 0.994838 |
2305.01120
|
Jes\'us Camacho-Rodr\'iguez
|
Jes\'us Camacho-Rodr\'iguez, Ashvin Agrawal, Anja Gruenheid, Ashit
Gosalia, Cristian Petculescu, Josep Aguilar-Saborit, Avrilia Floratou, Carlo
Curino, Raghu Ramakrishnan
|
LST-Bench: Benchmarking Log-Structured Tables in the Cloud
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Log-Structured Tables (LSTs), also commonly referred to as table formats,
have recently emerged to bring consistency and isolation to object stores. With
the separation of compute and storage, object stores have become the go-to for
highly scalable and durable storage. However, this comes with its own set of
challenges, such as the lack of recovery and concurrency management that
traditional database management systems provide. This is where LSTs such as
Delta Lake, Apache Iceberg, and Apache Hudi come into play, providing an
automatic metadata layer that manages tables defined over object stores,
effectively addressing these challenges. A paradigm shift in the design of
these systems necessitates the updating of evaluation methodologies. In this
paper, we examine the characteristics of LSTs and propose extensions to
existing benchmarks, including workload patterns and metrics, to accurately
capture their performance. We introduce our framework, LST-Bench, which enables
users to execute benchmarks tailored for the evaluation of LSTs. Our evaluation
demonstrates how these benchmarks can be utilized to evaluate the performance,
efficiency, and stability of LSTs. The code for LST-Bench is open-sourced and
is available at https://github.com/microsoft/lst-bench/ .
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 23:15:17 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Camacho-Rodríguez",
"Jesús",
""
],
[
"Agrawal",
"Ashvin",
""
],
[
"Gruenheid",
"Anja",
""
],
[
"Gosalia",
"Ashit",
""
],
[
"Petculescu",
"Cristian",
""
],
[
"Aguilar-Saborit",
"Josep",
""
],
[
"Floratou",
"Avrilia",
""
],
[
"Curino",
"Carlo",
""
],
[
"Ramakrishnan",
"Raghu",
""
]
] |
new_dataset
| 0.997914 |
2305.01183
|
Yang Zhang
|
Yang Zhang, Le Cheng, Yuting Peng, Chengming Xu, Yanwei Fu, Bo Wu,
Guodong Sun
|
Faster OreFSDet : A Lightweight and Effective Few-shot Object Detector
for Ore Images
|
18 pages, 11 figures
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
For ore particle size detection, obtaining a sizable amount of
high-quality labeled ore data is time-consuming and expensive. General object
detection methods often suffer from severe over-fitting with scarce labeled
data. Despite their ability to eliminate over-fitting, existing few-shot object
detectors encounter drawbacks such as slow detection speed and high memory
requirements, making them difficult to implement in a real-world deployment
scenario. To this end, we propose a lightweight and effective few-shot detector
to achieve performance competitive with general object detection using only a
few samples of ore images. First, the proposed support feature mining block
characterizes the importance of location information in support features. Next,
the relationship guidance block makes full use of support features to guide the
generation of accurate candidate proposals. Finally, the dual-scale semantic
aggregation module retrieves detailed features at different resolutions to
contribute to the prediction process. Experimental results show that our
method consistently outperforms existing few-shot detectors by a clear
margin on all metrics. Moreover, our method achieves the smallest
model size of 19 MB while remaining competitive at a 50 FPS detection speed
compared with general object detectors. The source code is available at
https://github.com/MVME-HBUT/Faster-OreFSDet.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 03:30:03 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Zhang",
"Yang",
""
],
[
"Cheng",
"Le",
""
],
[
"Peng",
"Yuting",
""
],
[
"Xu",
"Chengming",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Wu",
"Bo",
""
],
[
"Sun",
"Guodong",
""
]
] |
new_dataset
| 0.999163 |
2305.01191
|
Linghao Chen
|
Linghao Chen, Yuzhe Qin, Xiaowei Zhou, Hao Su
|
EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable
Rendering and Space Exploration
|
Project page: https://ootts.github.io/easyhec
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hand-eye calibration is a critical task in robotics, as it directly affects
the efficacy of critical operations such as manipulation and grasping.
Traditional methods for achieving this objective necessitate the careful design
of joint poses and the use of specialized calibration markers, while most
recent learning-based approaches that rely solely on pose regression are limited in
their abilities to diagnose inaccuracies. In this work, we introduce a new
approach to hand-eye calibration called EasyHeC, which is markerless,
white-box, and offers comprehensive coverage of positioning accuracy across the
entire robot configuration space. We introduce two key technologies:
differentiable rendering-based camera pose optimization and consistency-based
joint space exploration, which enables accurate end-to-end optimization of the
calibration process and eliminates the need for the laborious manual design of
robot joint poses. Our evaluation demonstrates superior performance in
synthetic and real-world datasets, enhancing downstream manipulation tasks by
providing precise camera poses for locating and interacting with objects. The
code is available at the project page: https://ootts.github.io/easyhec.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 03:49:54 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Chen",
"Linghao",
""
],
[
"Qin",
"Yuzhe",
""
],
[
"Zhou",
"Xiaowei",
""
],
[
"Su",
"Hao",
""
]
] |
new_dataset
| 0.971306 |
2305.01195
|
Jiangyi Lin
|
Jiangyi Lin, Yaxin Fan, Feng Jiang, Xiaomin Chu, and Peifeng Li
|
Topic Shift Detection in Chinese Dialogues: Corpus and Benchmark
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialogue topic shift detection is to detect whether an ongoing topic has
shifted or should shift in a dialogue, which can be divided into two
categories, i.e., response-known task and response-unknown task. Currently,
only a few studies have investigated the latter, because it is still a challenge to predict
the topic shift without the response information. In this paper, we first
annotate a Chinese Natural Topic Dialogue (CNTD) corpus consisting of 1308
dialogues to fill the gap in Chinese natural-conversation topic corpora. We
then focus on the response-unknown task and propose a teacher-student
framework based on hierarchical contrastive learning to predict the topic shift
without the response. Specifically, the response is introduced at the high
level of the teacher-student framework to build contrastive learning between the
response and the context, while label contrastive learning is constructed at the
low-level student. The experimental results on our Chinese CNTD and the English TIAGE show
the effectiveness of our proposed model.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 04:03:50 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Lin",
"Jiangyi",
""
],
[
"Fan",
"Yaxin",
""
],
[
"Jiang",
"Feng",
""
],
[
"Chu",
"Xiaomin",
""
],
[
"Li",
"Peifeng",
""
]
] |
new_dataset
| 0.999311 |
2305.01211
|
Joel Niklaus
|
Tobias Brugger, Matthias St\"urmer, Joel Niklaus
|
MultiLegalSBD: A Multilingual Legal Sentence Boundary Detection Dataset
|
Accepted at ICAIL 2023
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Sentence Boundary Detection (SBD) is one of the foundational building blocks
of Natural Language Processing (NLP), with incorrectly split sentences heavily
influencing the output quality of downstream tasks. It is a challenging task
for algorithms, especially in the legal domain, considering the complex and
varied sentence structures used. In this work, we curated a diverse
multilingual legal dataset consisting of over 130'000 annotated sentences in 6
languages. Our experimental results indicate that the performance of existing
SBD models is subpar on multilingual legal data. We trained and tested
monolingual and multilingual models based on CRF, BiLSTM-CRF, and transformers,
demonstrating state-of-the-art performance. We also show that our multilingual
models outperform all baselines in the zero-shot setting on a Portuguese test
set. To encourage further research and development by the community, we have
made our dataset, models, and code publicly available.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 05:52:03 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Brugger",
"Tobias",
""
],
[
"Stürmer",
"Matthias",
""
],
[
"Niklaus",
"Joel",
""
]
] |
new_dataset
| 0.999594 |
2305.01239
|
Xiaocheng Lu
|
Xiaocheng Lu, Ziming Liu, Song Guo, Jingcai Guo, Fushuo Huo, Sikai Bai
and Tao Han
|
DRPT: Disentangled and Recurrent Prompt Tuning for Compositional
Zero-Shot Learning
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compositional Zero-shot Learning (CZSL) aims to recognize novel concepts
composed of known knowledge without training samples. Standard CZSL either
identifies visual primitives or enhances unseen composed entities, and as a
result, entanglement between state and object primitives cannot be fully
utilized. Admittedly, vision-language models (VLMs) could naturally cope with
CZSL through prompt tuning, but uneven entanglement leads the prompts to be
dragged into local optima. In this paper, we take a further step and introduce
a novel Disentangled and Recurrent Prompt Tuning framework termed DRPT to
better tap the potential of VLMs in CZSL. Specifically, the state and object
primitives are deemed as learnable tokens of vocabulary embedded in prompts and
tuned on seen compositions. Instead of jointly tuning state and object, we
devise a disentangled and recurrent tuning strategy to suppress the traction
force caused by entanglement and gradually optimize the token parameters,
leading to a better prompt space. Notably, we develop a progressive fine-tuning
procedure that allows for incremental updates to the prompts, optimizing the
object first, then the state, and vice versa. Meanwhile, the optimization of
state and object is independent, thus clearer features can be learned to
further alleviate the issue of entangling misleading optimization. Moreover, we
quantify and analyze the entanglement in CZSL and supplement entanglement
rebalancing optimization schemes. DRPT surpasses representative
state-of-the-art methods on extensive benchmark datasets, demonstrating
superiority in both accuracy and efficiency.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 07:42:47 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Lu",
"Xiaocheng",
""
],
[
"Liu",
"Ziming",
""
],
[
"Guo",
"Song",
""
],
[
"Guo",
"Jingcai",
""
],
[
"Huo",
"Fushuo",
""
],
[
"Bai",
"Sikai",
""
],
[
"Han",
"Tao",
""
]
] |
new_dataset
| 0.985569 |
2305.01245
|
Jingcai Guo
|
Jingcai Guo, Yuanyuan Xu, Wenchao Xu, Yufeng Zhan, Yuxia Sun, Song Guo
|
MDENet: Multi-modal Dual-embedding Networks for Malware Open-set
Recognition
|
14 pages, 7 figures
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Malware open-set recognition (MOSR) aims at jointly classifying malware
samples from known families and detecting the ones from novel unknown families,
respectively. Existing works mostly rely on a well-trained classifier
considering the predicted probabilities of each known family with a
threshold-based detection to achieve the MOSR. However, our observation reveals
that the feature distributions of malware samples are extremely similar to each
other even between known and unknown families. Thus the obtained classifier may
produce overly high probabilities of testing unknown samples toward known
families and degrade the model performance. In this paper, we propose the
Multi-modal Dual-Embedding Networks, dubbed MDENet, to take advantage of
comprehensive malware features (i.e., malware images and malware sentences)
from different modalities to enhance the diversity of malware feature space,
which is more representative and discriminative for downstream recognition.
Finally, to further guarantee open-set recognition, we dually embed the fused
multi-modal representation into one primary space and an associated sub-space,
i.e., discriminative and exclusive spaces, with contrastive sampling and
rho-bounded enclosing sphere regularizations, which resort to classification
and detection, respectively. Moreover, we also enrich our previously proposed
large-scale malware dataset MAL-100 with multi-modal characteristics and
contribute an improved version dubbed MAL-100+. Experimental results on the
widely used malware dataset Malimg and the proposed MAL-100+ demonstrate the
effectiveness of our method.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 08:09:51 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Guo",
"Jingcai",
""
],
[
"Xu",
"Yuanyuan",
""
],
[
"Xu",
"Wenchao",
""
],
[
"Zhan",
"Yufeng",
""
],
[
"Sun",
"Yuxia",
""
],
[
"Guo",
"Song",
""
]
] |
new_dataset
| 0.9918 |
2305.01257
|
Mehmet Saygin Seyfioglu
|
Mehmet Saygin Seyfioglu, Karim Bouyarmane, Suren Kumar, Amir Tavanaei,
Ismail B. Tutar
|
DreamPaint: Few-Shot Inpainting of E-Commerce Items for Virtual Try-On
without 3D Modeling
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce DreamPaint, a framework to intelligently inpaint any e-commerce
product on any user-provided context image. The context image can be, for
example, the user's own image for virtual try-on of clothes from the e-commerce
catalog on themselves, the user's room image for virtual try-on of a piece of
furniture from the e-commerce catalog in their room, etc. As opposed to
previous augmented-reality (AR)-based virtual try-on methods, DreamPaint does
not use, nor does it require, 3D modeling of either the e-commerce product or
the user context. Instead, it directly uses 2D images of the product as
available in product catalog database, and a 2D picture of the context, for
example taken from the user's phone camera. The method relies on few-shot
fine-tuning of a pre-trained diffusion model with the masked latents (e.g., Masked
DreamBooth) of the catalog images per item, whose weights are then loaded on a
pre-trained inpainting module that is capable of preserving the characteristics
of the context image. DreamPaint can preserve both the product image and
the context (environment/user) image without requiring text guidance to
describe the missing part (product/context). DreamPaint can also
intelligently infer the best 3D angle of the product to place at the desired
location on the user context, even if that angle was previously unseen in the
product's reference 2D images. We compare our results against both text-guided
and image-guided inpainting modules and show that DreamPaint yields superior
performance in both subjective human study and quantitative metrics.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 08:41:21 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Seyfioglu",
"Mehmet Saygin",
""
],
[
"Bouyarmane",
"Karim",
""
],
[
"Kumar",
"Suren",
""
],
[
"Tavanaei",
"Amir",
""
],
[
"Tutar",
"Ismail B.",
""
]
] |
new_dataset
| 0.999527 |
2305.01336
|
Fatih Sezgin
|
Fatih Sezgin, Daniel Vriesman, Dagmar Steinhauser, Robert Lugner and
Thomas Brandmeier
|
Safe Autonomous Driving in Adverse Weather: Sensor Evaluation and
Performance Monitoring
|
Accepted for the 35th IEEE Intelligent Vehicles Symposium (IV 2023),
6 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The vehicle's perception sensors (radar, lidar, and camera), which must work
continuously and without restriction, especially with regard to
automated/autonomous driving, can lose performance due to unfavourable weather
conditions. This paper analyzes the sensor signals of these three sensor
technologies under rain and fog as well as day and night. A data set of a
driving test vehicle as an object target under different weather conditions was
recorded in a controlled environment with adjustable, defined, and reproducible
weather conditions. Based on the sensor performance evaluation, a method has
been developed to detect sensor degradation, including determining the affected
data areas and estimating how severe they are. Through this sensor monitoring,
measures can be taken in subsequent algorithms to reduce the influences or to
take them into account in safety and assistance systems to avoid malfunctions.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 11:30:29 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Sezgin",
"Fatih",
""
],
[
"Vriesman",
"Daniel",
""
],
[
"Steinhauser",
"Dagmar",
""
],
[
"Lugner",
"Robert",
""
],
[
"Brandmeier",
"Thomas",
""
]
] |
new_dataset
| 0.995647 |
2305.01356
|
Geert Van Wordragen
|
S\'andor Kisfaludi-Bak, Geert van Wordragen
|
A Quadtree for Hyperbolic Space
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a data structure in d-dimensional hyperbolic space that can be
considered a natural counterpart to quadtrees in Euclidean spaces. Based on
this data structure we propose a so-called L-order for hyperbolic point sets,
which is an extension of the Z-order defined in Euclidean spaces. We
demonstrate the usefulness of our hyperbolic quadtree data structure by giving
an algorithm for constant-approximate closest pair and dynamic
constant-approximate nearest neighbours in hyperbolic space of constant
dimension d.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 12:23:41 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Kisfaludi-Bak",
"Sándor",
""
],
[
"van Wordragen",
"Geert",
""
]
] |
new_dataset
| 0.985922 |
2305.01373
|
Juho Veps\"al\"ainen
|
Juho Veps\"al\"ainen
|
ECMAScript -- The journey of a programming language from an idea to a
standard
|
20 pages, 2 figures, 2 tables, EURAS 2023, preprint of an accepted
full paper
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A significant portion of the web is powered by ECMAScript. As a web
technology, it is ubiquitous and available on most platforms natively or
through a web browser. ECMAScript is the dominant language of the web, but at
the same time, it was not designed as such. The story of ECMAScript is a story
of the impact of standardization on the popularity of technology.
Simultaneously, the story shows how external pressures can shape a programming
language and how politics can mar the evolution of a standard. In this article,
we will go through the movements that led to the dominant position of
ECMAScript, evaluate the factors leading to it, and consider its evolution
using the Futures Triangle framework and the theory of standards wars.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 12:48:25 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Vepsäläinen",
"Juho",
""
]
] |
new_dataset
| 0.976477 |
2305.01375
|
Ilkka T\"orm\"a
|
Ville Salo, Ilkka T\"orm\"a
|
Diddy: a Python toolbox for infinite discrete dynamical systems
|
12 pages
| null | null | null |
cs.MS cs.DM math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Diddy, a collection of Python scripts for analyzing infinite
discrete dynamical systems. The main focus is on generalized multidimensional
shifts of finite type (SFTs). We show how Diddy can be used to easily define
SFTs and cellular automata, and analyze their basic properties. We also
showcase how to verify or rediscover some results from coding theory and
cellular automata theory.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 12:51:25 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Salo",
"Ville",
""
],
[
"Törmä",
"Ilkka",
""
]
] |
new_dataset
| 0.999693 |
2305.01470
|
Jittat Fakcharoenphol
|
Jittat Fakcharoenphol and Chayutpong Prompak
|
Stochastic Contextual Bandits with Graph-based Contexts
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We naturally generalize the online graph prediction problem to a version of
stochastic contextual bandit problems where contexts are vertices in a graph
and the structure of the graph provides information on the similarity of
contexts. More specifically, we are given a graph $G=(V,E)$, whose vertex set
$V$ represents contexts with {\em unknown} vertex label $y$. In our stochastic
contextual bandit setting, vertices with the same label share the same reward
distribution. The standard notion of instance difficulty in graph label
prediction is the cutsize $f$, defined as the number of edges whose
endpoints have different labels. For line graphs and trees we present an
algorithm with regret bound of $\tilde{O}(T^{2/3}K^{1/3}f^{1/3})$ where $K$ is
the number of arms. Our algorithm relies on the optimal stochastic bandit
algorithm by Zimmert and Seldin~[AISTAT'19, JMLR'21]. When the best arm
outperforms the other arms, the regret improves to $\tilde{O}(\sqrt{KT\cdot
f})$. The regret bound in the latter case is comparable to other optimal
contextual bandit results in more general cases, but our algorithm is easy to
analyze, runs very efficiently, and does not require an i.i.d. assumption on
the input context sequence. The algorithm also works with general graphs using
a standard random spanning tree reduction.
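  The cutsize $f$ that parameterizes these regret bounds is straightforward to
compute; a small Python sketch for a labeled graph, with a made-up line-graph
example:

    def cutsize(edges, labels):
        """Number of edges whose endpoints carry different labels."""
        return sum(1 for u, v in edges if labels[u] != labels[v])

    # Line graph on 6 vertices with labels AAABBB: a single label change.
    line_edges = [(i, i + 1) for i in range(5)]
    labels = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
    print(cutsize(line_edges, labels))   # 1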
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 14:51:35 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Fakcharoenphol",
"Jittat",
""
],
[
"Prompak",
"Chayutpong",
""
]
] |
new_dataset
| 0.999546 |
2305.01484
|
Yixin Xu
|
Zijian Zhao, Shan Deng, Swetaki Chatterjee, Zhouhang Jiang, Muhammad
Shaffatul Islam, Yi Xiao, Yixin Xu, Scott Meninger, Mohamed Mohamed, Rajiv
Joshi, Yogesh Singh Chauhan, Halid Mulaosmanovic, Stefan Duenkel, Dominik
Kleimaier, Sven Beyer, Hussam Amrouch, Vijaykrishnan Narayanan, Kai Ni
|
Powering Disturb-Free Reconfigurable Computing and Tunable Analog
Electronics with Dual-Port Ferroelectric FET
|
32 pages
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A single-port ferroelectric FET (FeFET) performs write and read operations
on the same electrical gate, which prevents its wide application in tunable analog
electronics and makes it suffer from read disturb, especially to the high-threshold
voltage (VTH) state as the retention energy barrier is reduced by the applied
read bias. To address both issues, we propose to adopt a read disturb-free
dual-port FeFET where write is performed on the gate featuring a ferroelectric
layer and the read is done on a separate gate featuring a non-ferroelectric
dielectric. Combining the unique structure and the separate read gate, read
disturb is eliminated as the applied field is aligned with polarization in the
high-VTH state and thus improving its stability, while it is screened by the
channel inversion charge and exerts no negative impact on the low-VTH state
stability. Comprehensive theoretical and experimental validation have been
performed on fully-depleted silicon-on-insulator (FDSOI) FeFETs integrated on
22 nm platform, which intrinsically has dual ports with its buried oxide layer
acting as the non-ferroelectric dielectric. Novel applications that can exploit
the proposed dual-port FeFET are proposed and experimentally demonstrated for
the first time, including FPGA that harnesses its read disturb-free feature and
tunable analog electronics (e.g., frequency tunable ring oscillator in this
work) leveraging the separated write and read paths.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 15:07:08 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Zhao",
"Zijian",
""
],
[
"Deng",
"Shan",
""
],
[
"Chatterjee",
"Swetaki",
""
],
[
"Jiang",
"Zhouhang",
""
],
[
"Islam",
"Muhammad Shaffatul",
""
],
[
"Xiao",
"Yi",
""
],
[
"Xu",
"Yixin",
""
],
[
"Meninger",
"Scott",
""
],
[
"Mohamed",
"Mohamed",
""
],
[
"Joshi",
"Rajiv",
""
],
[
"Chauhan",
"Yogesh Singh",
""
],
[
"Mulaosmanovic",
"Halid",
""
],
[
"Duenkel",
"Stefan",
""
],
[
"Kleimaier",
"Dominik",
""
],
[
"Beyer",
"Sven",
""
],
[
"Amrouch",
"Hussam",
""
],
[
"Narayanan",
"Vijaykrishnan",
""
],
[
"Ni",
"Kai",
""
]
] |
new_dataset
| 0.998336 |
2305.01488
|
Wei-Chang Yeh
|
Wei-Chang Yeh
|
Building Reliable Budget-Based Binary-State Networks
| null | null | null | null |
cs.NI math.PR physics.soc-ph
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Everyday life is driven by various networks, such as supply chains for
distributing raw materials, semi-finished goods, and final products;
Internet of Things (IoT) for connecting and exchanging data; utility networks
for transmitting fuel, power, water, electricity, and 4G/5G; and social
networks for sharing information and connections. The binary-state network is a
basic network, where the state of each component is either success or failure,
i.e., the binary-state. Network reliability plays an important role in
evaluating the performance of network planning, design, and management. As
more networks are being set up in the real world, ensuring their reliability
becomes increasingly important. It is necessary to build a reliable network within a limited
budget. However, existing studies are focused on the budget limit for each
minimal path (MP) in networks without considering the total budget of the
entire network. We propose a novel concept to consider how to build a more
reliable binary-state network under the budget limit. In addition, we propose
an algorithm based on the binary-addition-tree algorithm (BAT) and stepwise
vectors to solve the problem efficiently.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 09:28:49 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Yeh",
"Wei-Chang",
""
]
] |
new_dataset
| 0.997692 |
2305.01526
|
Benyou Wang
|
Jianquan Li, Xidong Wang, Xiangbo Wu, Zhiyi Zhang, Xiaolong Xu, Jie
Fu, Prayag Tiwari, Xiang Wan, Benyou Wang
|
Huatuo-26M, a Large-scale Chinese Medical QA Dataset
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we release the largest-ever medical Question Answering (QA)
dataset, with 26 million QA pairs. We benchmark many existing approaches on our
dataset in terms of both retrieval and generation. Experimental results show
that the existing models perform far below expectations and that the released
dataset is still challenging in the pre-trained language model era. Moreover,
we also experimentally show the benefit of the proposed dataset in many
aspects: (i) training models for other QA datasets in a zero-shot fashion;
(ii) serving as external knowledge for retrieval-augmented generation (RAG); and (iii)
improving existing pre-trained language models by using the QA pairs as a
pre-training corpus in a continued-training manner. We believe that this dataset
will not only contribute to medical research but also benefit both
patients and clinical doctors. See
\url{https://github.com/FreedomIntelligence/Huatuo-26M}.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 15:33:01 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Li",
"Jianquan",
""
],
[
"Wang",
"Xidong",
""
],
[
"Wu",
"Xiangbo",
""
],
[
"Zhang",
"Zhiyi",
""
],
[
"Xu",
"Xiaolong",
""
],
[
"Fu",
"Jie",
""
],
[
"Tiwari",
"Prayag",
""
],
[
"Wan",
"Xiang",
""
],
[
"Wang",
"Benyou",
""
]
] |
new_dataset
| 0.999771 |
2305.01545
|
Delin Hu
|
Delin Hu, Zhou Chen, Paul Baisamy, Zhe Liu, Francesco Giorgio-Serchi
and Yunjie Yang
|
Touch and deformation perception of soft manipulators with capacitive
e-skins and deep learning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tactile sensing in soft robots remains particularly challenging because of
the coupling between contact and deformation information which the sensor is
subject to during actuation and interaction with the environment. This often
results in severe interference and makes disentangling tactile sensing and
geometric deformation difficult. To address this problem, this paper proposes a
soft capacitive e-skin with a sparse electrode distribution and deep learning
for information decoupling. Our approach successfully separates tactile sensing
from geometric deformation, enabling touch recognition on a soft pneumatic
actuator subject to both internal (actuation) and external (manual handling)
forces. Using a multi-layer perceptron, the proposed e-skin achieves 99.88\%
accuracy in touch recognition across a range of deformations. When complemented
with prior knowledge, a transformer-based architecture effectively tracks the
deformation of the soft actuator. The average distance error in positional
reconstruction of the manipulator is as low as 2.905$\pm$2.207 mm, even under
operative conditions with different inflation states and physical contacts
which lead to additional signal variations and consequently interfere with
deformation tracking. These findings represent a tangible way forward in the
development of e-skins that can endow soft robots with proprioception and
exteroception.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 15:51:04 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Hu",
"Delin",
""
],
[
"Chen",
"Zhou",
""
],
[
"Baisamy",
"Paul",
""
],
[
"Liu",
"Zhe",
""
],
[
"Giorgio-Serchi",
"Francesco",
""
],
[
"Yang",
"Yunjie",
""
]
] |
new_dataset
| 0.992466 |
2305.01569
|
Yuval Kirstain
|
Yuval Kirstain and Adam Polyak and Uriel Singer and Shahbuland Matiana
and Joe Penna and Omer Levy
|
Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image
Generation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The ability to collect a large dataset of human preferences from
text-to-image users is usually limited to companies, making such datasets
inaccessible to the public. To address this issue, we create a web app that
enables text-to-image users to generate images and specify their preferences.
Using this web app we build Pick-a-Pic, a large, open dataset of text-to-image
prompts and real users' preferences over generated images. We leverage this
dataset to train a CLIP-based scoring function, PickScore, which exhibits
superhuman performance on the task of predicting human preferences. Then, we
test PickScore's ability to perform model evaluation and observe that it
correlates better with human rankings than other automatic evaluation metrics.
Therefore, we recommend using PickScore for evaluating future text-to-image
generation models, and using Pick-a-Pic prompts as a more relevant dataset than
MS-COCO. Finally, we demonstrate how PickScore can enhance existing
text-to-image models via ranking.
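  Conceptually, a CLIP-based scoring function ranks candidate images for a prompt
by their scaled prompt-image similarities. The NumPy sketch below assumes
pre-computed embeddings; the temperature value, shapes, and random stand-ins are
illustrative, not PickScore's trained parameters:

    import numpy as np

    def preference_probs(prompt_emb, image_embs, temperature=100.0):
        """Softmax over scaled cosine similarities: the probability that
        each candidate image is the one the user would prefer."""
        p = prompt_emb / np.linalg.norm(prompt_emb)
        imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
        scores = temperature * (imgs @ p)
        e = np.exp(scores - scores.max())
        return e / e.sum()

    rng = np.random.default_rng(1)
    prompt = rng.normal(size=768)
    candidates = rng.normal(size=(2, 768))   # two generations for one prompt
    print(preference_probs(prompt, candidates))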
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 16:18:11 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Kirstain",
"Yuval",
""
],
[
"Polyak",
"Adam",
""
],
[
"Singer",
"Uriel",
""
],
[
"Matiana",
"Shahbuland",
""
],
[
"Penna",
"Joe",
""
],
[
"Levy",
"Omer",
""
]
] |
new_dataset
| 0.999874 |
2305.01573
|
Jialuo Du
|
Jialuo Du, Yidong Ren, Mi Zhang, Yunhao Liu, Zhichao Cao
|
NELoRa-Bench: A Benchmark for Neural-enhanced LoRa Demodulation
|
Accepted by International Conference on Learning Representations
(ICLR'23) Workshop on Machine Learning for IoT
| null | null | null |
cs.NI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low-Power Wide-Area Networks (LPWANs) are an emerging Internet-of-Things
(IoT) paradigm marked by low-power and long-distance communication. Among them,
LoRa is widely deployed for its unique characteristics and open-source
technology. By adopting the Chirp Spread Spectrum (CSS) modulation, LoRa
enables low signal-to-noise ratio (SNR) communication. The standard LoRa
demodulation method accumulates the chirp power of the whole chirp into an
energy peak in the frequency domain. In this way, it can support communication
even when SNR is lower than -15 dB. Beyond that, we previously proposed NELoRa,
neural-enhanced decoder that exploits multi-dimensional information to achieve
significant SNR gain. This paper presents the dataset used to train/test
NELoRa, which includes 27,329 LoRa symbols with spreading factors from 7 to 10,
for further improvement of neural-enhanced LoRa demodulation. The dataset shows
that NELoRa can achieve 1.84-2.35 dB SNR gain over the standard LoRa decoder.
The dataset and codes can be found at
https://github.com/daibiaoxuwu/NeLoRa_Dataset.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 14:09:18 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Du",
"Jialuo",
""
],
[
"Ren",
"Yidong",
""
],
[
"Zhang",
"Mi",
""
],
[
"Liu",
"Yunhao",
""
],
[
"Cao",
"Zhichao",
""
]
] |
new_dataset
| 0.999353 |
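To make the dechirp step in the abstract above concrete, here is a minimal sketch of standard (non-neural) LoRa demodulation: multiplying a received symbol by the conjugate base chirp collapses its energy into a single FFT bin whose index encodes the symbol. The spreading factor, noise level, and sampling model are illustrative assumptions, not parameters from the NELoRa dataset.

```python
# Sketch of standard LoRa dechirp demodulation (assumed parameters, 1 sample/chip).
import numpy as np

SF = 8                # spreading factor (LoRa uses 7-12; the dataset covers 7-10)
N = 2 ** SF           # samples per symbol

def upchirp(sym, n):
    """Baseband LoRa upchirp whose starting frequency encodes `sym`."""
    k = np.arange(n)
    return np.exp(1j * 2 * np.pi * (k * k / (2 * n) + sym * k / n))

def demod(rx, n):
    """Dechirp with the conjugate base chirp, FFT, take the peak bin."""
    dechirped = rx * np.conj(upchirp(0, n))
    return int(np.argmax(np.abs(np.fft.fft(dechirped))))

rng = np.random.default_rng(0)
tx_sym = 42
noise = 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print("decoded symbol:", demod(upchirp(tx_sym, N) + noise, N))  # expect 42
```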
2305.01618
|
Zehao Zhu
|
Zehao Zhu, Jiashun Wang, Yuzhe Qin, Deqing Sun, Varun Jampani,
Xiaolong Wang
|
ContactArt: Learning 3D Interaction Priors for Category-level
Articulated Object and Hand Poses Estimation
|
Project: https://zehaozhu.github.io/ContactArt/ ; Dataset Explorer:
https://zehaozhu.github.io/ContactArt/explorer/
| null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a new dataset and a novel approach to learning hand-object
interaction priors for hand and articulated object pose estimation. We first
collect a dataset using visual teleoperation, where the human operator can
directly play within a physical simulator to manipulate the articulated
objects. We record the data and obtain free and accurate annotations on object
poses and contact information from the simulator. Our system requires only an
iPhone to record human hand motion, making it easy to scale up and greatly
lowering the costs of data and annotation collection. With this data, we learn 3D
interaction priors including a discriminator (in a GAN) capturing the
distribution of how object parts are arranged, and a diffusion model which
generates the contact regions on articulated objects, guiding the hand pose
estimation. Such structural and contact priors can easily transfer to
real-world data with barely any domain gap. By using our data and learned
priors, our method significantly improves the performance on joint hand and
articulated object pose estimation over existing state-of-the-art methods.
The project is available at https://zehaozhu.github.io/ContactArt/ .
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 17:24:08 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Zhu",
"Zehao",
""
],
[
"Wang",
"Jiashun",
""
],
[
"Qin",
"Yuzhe",
""
],
[
"Sun",
"Deqing",
""
],
[
"Jampani",
"Varun",
""
],
[
"Wang",
"Xiaolong",
""
]
] |
new_dataset
| 0.999258 |
2305.01626
|
Gasper Begus
|
Ga\v{s}per Begu\v{s} and Thomas Lu and Zili Wang
|
Basic syntax from speech: Spontaneous concatenation in unsupervised deep
neural networks
| null | null | null | null |
cs.CL cs.AI cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computational models of syntax are predominantly text-based. Here we propose
that basic syntax can be modeled directly from raw speech in a fully
unsupervised way. We focus on one of the most ubiquitous and basic properties
of syntax -- concatenation. We introduce spontaneous concatenation: a
phenomenon where convolutional neural networks (CNNs) trained on acoustic
recordings of individual words start generating outputs with two or even three
words concatenated without ever accessing data with multiple words in the
input. Additionally, networks trained on two words learn to embed words into
novel unobserved word combinations. To our knowledge, this is a previously
unreported property of CNNs trained on raw speech in the Generative Adversarial
Network setting, and it has implications both for our understanding of how
these architectures learn and for modeling syntax and its evolution from raw
acoustic inputs.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 17:38:21 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Beguš",
"Gašper",
""
],
[
"Lu",
"Thomas",
""
],
[
"Wang",
"Zili",
""
]
] |
new_dataset
| 0.958615 |
2305.01652
|
Ruoshi Liu
|
Ruoshi Liu, Carl Vondrick
|
Humans as Light Bulbs: 3D Human Reconstruction from Thermal Reflection
|
Website: https://thermal.cs.columbia.edu/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The relatively hot temperature of the human body causes people to turn into
long-wave infrared light sources. Since this emitted light has a longer
wavelength than visible light, many surfaces in typical scenes act as infrared
mirrors with strong specular reflections. We exploit the thermal reflections of
a person onto objects in order to locate their position and reconstruct their
pose, even if they are not visible to a normal camera. We propose an
analysis-by-synthesis framework that jointly models the objects, people, and
their thermal reflections, which allows us to combine generative models with
differentiable rendering of reflections. Quantitative and qualitative
experiments show our approach works in highly challenging cases, such as with
curved mirrors or when the person is completely unseen by a normal camera.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 17:59:55 GMT"
}
] | 2023-05-03T00:00:00 |
[
[
"Liu",
"Ruoshi",
""
],
[
"Vondrick",
"Carl",
""
]
] |
new_dataset
| 0.9994 |
2104.10480
|
Ye Wang
|
Chenhang Zhou and Yu Chen and Roger Wattenhofer and Ye Wang
|
Print Your Money: Cash-Like Experiences with Digital Money
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The use of digital money has become increasingly popular, but it comes with
certain drawbacks. For instance, it can be challenging to make payments during
power outages or internet failures. Additionally, some groups may find it
difficult to use digital money. To address these concerns, we propose a design
for a central bank digital currency (CBDC) that resembles physical cash but
also integrates with digital payment systems. This would enable users to access
digital money without needing a third party. Our design also addresses
technical and security concerns by implementing a trust-level model and
ensuring that the system meets users' security needs. Ultimately, our design
has the potential to replace physical banknotes and coins.
|
[
{
"version": "v1",
"created": "Wed, 21 Apr 2021 11:59:05 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jan 2022 11:24:56 GMT"
},
{
"version": "v3",
"created": "Mon, 1 May 2023 07:43:48 GMT"
}
] | 2023-05-02T00:00:00 |
[
[
"Zhou",
"Chenhang",
""
],
[
"Chen",
"Yu",
""
],
[
"Wattenhofer",
"Roger",
""
],
[
"Wang",
"Ye",
""
]
] |
new_dataset
| 0.986494 |
2201.12452
|
Kritkorn Karntikoon
|
Erik D. Demaine, Kritkorn Karntikoon
|
Unfolding Orthotubes with a Dual Hamiltonian Path
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An orthotube consists of orthogonal boxes (e.g., unit cubes) glued
face-to-face to form a path. In 1998, Biedl et al. showed that every orthotube
has a grid unfolding: a cutting along edges of the boxes so that the surface
unfolds into a connected planar shape without overlap. We give a new
algorithmic grid unfolding of orthotubes with the additional property that the
rectangular faces are attached in a single path -- a Hamiltonian path on the
rectangular faces of the orthotube surface.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 23:05:39 GMT"
},
{
"version": "v2",
"created": "Mon, 1 May 2023 01:29:28 GMT"
}
] | 2023-05-02T00:00:00 |
[
[
"Demaine",
"Erik D.",
""
],
[
"Karntikoon",
"Kritkorn",
""
]
] |
new_dataset
| 0.997738 |
2206.09900
|
Chen Min
|
Chen Min and Xinli Xu and Dawei Zhao and Liang Xiao and Yiming Nie and
Bin Dai
|
Occupancy-MAE: Self-supervised Pre-training Large-scale LiDAR Point
Clouds with Masked Occupancy Autoencoders
|
10 pages, 4 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Current perception models in autonomous driving rely heavily on large-scale
labeled LiDAR data, which is costly and time-consuming to annotate. In this
work, we aim to facilitate research on self-supervised masked learning using
the vast amount of unlabeled LiDAR data available in autonomous driving.
However, existing masked point autoencoding methods only focus on small-scale
indoor point clouds and struggle to adapt to outdoor scenes, which usually
contain a large number of unevenly distributed LiDAR points. To address these
challenges, we propose a new self-supervised masked learning method named
Occupancy-MAE, specifically designed for large-scale outdoor LiDAR points. We
leverage the occupancy structure of large-scale outdoor LiDAR point clouds,
which grows sparser with range, and introduce a range-aware random masking
strategy and a pretext
task of occupancy prediction. Occupancy-MAE randomly masks voxels of LiDAR
point clouds based on their distance to LiDAR and predicts the masked occupancy
structure of the whole 3D scene. This simple occupancy prediction objective
encourages Occupancy-MAE to extract high-level semantic information to recover
the masked voxels from only a small number of visible voxels. Extensive
experiments demonstrate the effectiveness of Occupancy-MAE across several
downstream tasks. For the 3D object detection task, Occupancy-MAE reduces the
labeled data required for car detection on KITTI by half and boosts small
object detection by around 2% mAP on Waymo. For the 3D semantic segmentation
task, Occupancy-MAE outperforms training from scratch by around 2% mIOU on
nuScenes. For the unsupervised domain adaptation task, Occupancy-MAE improves
the performance by about 0.5%-1% mAP. Our results show that it is feasible
to pre-train unlabeled large-scale LiDAR point clouds with masked autoencoding
to enhance the 3D perception ability of autonomous driving.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 17:15:50 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Jun 2022 06:46:02 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Jun 2022 09:01:51 GMT"
},
{
"version": "v4",
"created": "Tue, 16 Aug 2022 14:16:21 GMT"
},
{
"version": "v5",
"created": "Wed, 23 Nov 2022 06:15:30 GMT"
},
{
"version": "v6",
"created": "Sat, 29 Apr 2023 00:54:33 GMT"
}
] | 2023-05-02T00:00:00 |
[
[
"Min",
"Chen",
""
],
[
"Xu",
"Xinli",
""
],
[
"Zhao",
"Dawei",
""
],
[
"Xiao",
"Liang",
""
],
[
"Nie",
"Yiming",
""
],
[
"Dai",
"Bin",
""
]
] |
new_dataset
| 0.955341 |
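As an illustration of the range-aware random masking described in the Occupancy-MAE abstract above, the sketch below masks occupied voxels with a probability that depends on their distance to the sensor. The masking direction (nearby voxels masked more aggressively, since distant regions are already sparse), the ratios, and the scene extent are assumptions for illustration, not the paper's configuration.

```python
# Sketch of range-aware random voxel masking; ratios and extents are assumptions.
import torch

def range_aware_mask(voxel_xyz, near_ratio=0.9, far_ratio=0.5, max_range=75.0):
    """voxel_xyz: (V, 3) centers of occupied voxels in meters.
    Returns a boolean mask; True = voxel hidden from the MAE encoder."""
    dist = voxel_xyz[:, :2].norm(dim=1)                    # horizontal range to the LiDAR
    t = (dist / max_range).clamp(0.0, 1.0)
    p_mask = near_ratio + (far_ratio - near_ratio) * t     # interpolate near -> far ratio
    return torch.rand(voxel_xyz.size(0)) < p_mask

coords = torch.rand(10000, 3) * torch.tensor([150.0, 150.0, 6.0]) - torch.tensor([75.0, 75.0, 3.0])
mask = range_aware_mask(coords)
visible = coords[~mask]   # only visible voxels feed the encoder; occupancy of all is predicted
print(f"masked {mask.float().mean().item():.2%} of voxels, {visible.size(0)} visible")
```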
2208.06144
|
Linhao Luo
|
Linhao Luo, Yixiang Fang, Moli Lu, Xin Cao, Xiaofeng Zhang, Wenjie
Zhang
|
GSim: A Graph Neural Network based Relevance Measure for Heterogeneous
Graphs
|
Accepted by TKDE
| null |
10.1109/TKDE.2023.3271425
| null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Heterogeneous graphs, which contain nodes and edges of multiple types, are
prevalent in various domains, including bibliographic networks, social media,
and knowledge graphs. As a fundamental task in analyzing heterogeneous graphs,
relevance measure aims to calculate the relevance between two objects of
different types, which has been used in many applications such as web search,
recommendation, and community detection. Most existing relevance measures
focus on homogeneous networks where objects are of the same type, and only a
few measures have been developed for heterogeneous graphs, but they often
require pre-defined meta-paths. Defining meaningful meta-paths requires much domain
knowledge, which largely limits their applications, especially on schema-rich
heterogeneous graphs like knowledge graphs. Recently, the Graph Neural Network
(GNN) has been widely applied in many graph mining tasks, but it has not yet
been applied to relevance measurement. To address the aforementioned problems, we
propose a novel GNN-based relevance measure, namely GSim. Specifically, we
first theoretically analyze and show that GNN is effective for measuring the
relevance of nodes in the graph. We then propose a context path-based graph
neural network (CP-GNN) to automatically leverage the semantics in
heterogeneous graphs. Moreover, we exploit CP-GNN to support relevance measures
between two objects of any type. Extensive experiments demonstrate that GSim
outperforms existing measures.
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 07:26:05 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Apr 2023 11:29:34 GMT"
}
] | 2023-05-02T00:00:00 |
[
[
"Luo",
"Linhao",
""
],
[
"Fang",
"Yixiang",
""
],
[
"Lu",
"Moli",
""
],
[
"Cao",
"Xin",
""
],
[
"Zhang",
"Xiaofeng",
""
],
[
"Zhang",
"Wenjie",
""
]
] |
new_dataset
| 0.970682 |
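As a generic illustration of the idea in the GSim abstract above, that GNN embeddings can serve as a relevance measure, the sketch below runs a few rounds of mean-aggregation message passing and scores node pairs by cosine similarity. CP-GNN's context-path handling of heterogeneous semantics is not modeled here; the graph, feature size, and layer count are placeholder assumptions.

```python
# Sketch of GNN-embedding relevance; graph and hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def propagate(x, edge_index, layers=2):
    """x: (N, d) node features; edge_index: (2, E) directed edges (src -> dst)."""
    src, dst = edge_index
    for _ in range(layers):
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])           # sum neighbor features
        deg = torch.zeros(x.size(0)).index_add_(0, dst, torch.ones(src.size(0)))
        x = 0.5 * x + 0.5 * agg / deg.clamp(min=1.0).unsqueeze(1)      # mix self and mean neighbor
    return x

def relevance(h, u, v):
    """Relevance of nodes u and v as cosine similarity of their embeddings."""
    return F.cosine_similarity(h[u:u + 1], h[v:v + 1]).item()

x = torch.randn(6, 16)                                     # toy node features
edges = torch.tensor([[0, 1, 1, 2, 3, 4], [1, 0, 2, 1, 4, 3]])
h = propagate(x, edges)
print("relevance(0, 2) =", round(relevance(h, 0, 2), 3))
```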