| id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2201.04275
|
Jordan Meadows
|
Jordan Meadows, Zili Zhou, Andre Freitas
|
PhysNLU: A Language Resource for Evaluating Natural Language
Understanding and Explanation Coherence in Physics
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In order for language models to aid physics research, they must first encode
representations of mathematical and natural language discourse which lead to
coherent explanations, with correct ordering and relevance of statements. We
present a collection of datasets developed to evaluate the performance of
language models in this regard, which measure capabilities with respect to
sentence ordering, position, section prediction, and discourse coherence.
Analysis of the data reveals equations and sub-disciplines which are most
common in physics discourse, as well as the sentence-level frequency of
equations and expressions. We present baselines that demonstrate how
contemporary language models are challenged by coherence related tasks in
physics, even when trained on mathematical natural language objectives.
|
[
{
"version": "v1",
"created": "Wed, 12 Jan 2022 02:32:40 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2022 00:08:14 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Jun 2023 15:06:25 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Meadows",
"Jordan",
""
],
[
"Zhou",
"Zili",
""
],
[
"Freitas",
"Andre",
""
]
] |
new_dataset
| 0.997154 |
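The PhysNLU record above covers sentence-ordering and coherence tasks. As a hedged illustration only (not code from the PhysNLU release), the sketch below shows one standard way a predicted sentence ordering can be scored against the gold order, via a pairwise Kendall-tau statistic; the function name and input convention are assumptions.

```python
from itertools import combinations

def kendall_tau(predicted_positions):
    """predicted_positions[i] = predicted rank of the sentence whose gold rank is i."""
    n = len(predicted_positions)
    # Count sentence pairs whose relative order matches the gold order.
    concordant = sum(
        1 for i, j in combinations(range(n), 2)
        if predicted_positions[i] < predicted_positions[j]
    )
    total = n * (n - 1) / 2
    return 2 * concordant / total - 1  # +1.0 perfect order, -1.0 fully reversed

print(kendall_tau([0, 1, 2, 3]))  # 1.0
print(kendall_tau([3, 2, 1, 0]))  # -1.0
```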
2207.05800
|
David Paulius
|
David Paulius, Alejandro Agostini and Dongheui Lee
|
Long-Horizon Planning and Execution with Functional Object-Oriented
Networks
|
To be published in RA-L, 8 pages, Joint First Authors (Alejandro and
David). For project website, see https://davidpaulius.github.io/foon-lhpe
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following work on joint object-action representations, functional
object-oriented networks (FOON) were introduced as a knowledge graph
representation for robots. A FOON contains symbolic concepts useful to a
robot's understanding of tasks and its environment for object-level planning.
Prior to this work, little has been done to show how plans acquired from FOON
can be executed by a robot, as the concepts in a FOON are too abstract for
execution. We thereby introduce the idea of exploiting object-level knowledge
as a FOON for task planning and execution. Our approach automatically
transforms FOON into PDDL and leverages off-the-shelf planners, action
contexts, and robot skills in a hierarchical planning pipeline to generate
executable task plans. We demonstrate our entire approach on long-horizon tasks
in CoppeliaSim and show how learned action contexts can be extended to
never-before-seen scenarios.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 19:29:35 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Nov 2022 16:27:32 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Nov 2022 18:36:58 GMT"
},
{
"version": "v4",
"created": "Thu, 26 Jan 2023 02:33:05 GMT"
},
{
"version": "v5",
"created": "Sat, 1 Apr 2023 19:06:43 GMT"
},
{
"version": "v6",
"created": "Fri, 2 Jun 2023 17:12:02 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Paulius",
"David",
""
],
[
"Agostini",
"Alejandro",
""
],
[
"Lee",
"Dongheui",
""
]
] |
new_dataset
| 0.994372 |
2209.13300
|
Conghe Wang
|
Conghe Wang (1), Yutong He (2), Xia Wang (1), Honghao Huang (2),
Changda Yan (1), Xin Zhang (1) and Hongwei Chen (2)((1) Key Laboratory of
Photoelectronic Imaging Technology and System of Ministry of Education of
China, School of Optics and Photonics, Beijing Institute of Technology (2)
Beijing National Research Center for Information Science and Technology
(BNRist), Department of Electronic Engineering, Tsinghua University)
|
Passive Non-line-of-sight Imaging for Moving Targets with an Event
Camera
| null |
[J]. Chinese Optics Letters, 2023, 21(6): 061103
|
10.3788/COL202321.061103
| null |
cs.CV eess.IV physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-line-of-sight (NLOS) imaging is an emerging technique for detecting
objects behind obstacles or around corners. Recent studies on passive NLOS
mainly focus on steady-state measurement and reconstruction methods, which show
limitations in recognition of moving targets. To the best of our knowledge, we
propose a novel event-based passive NLOS imaging method. We acquire
asynchronous event-based data which contains detailed dynamic information of
the NLOS target, and efficiently ease the degradation of speckle caused by
movement. Besides, we create the first event-based NLOS imaging dataset,
NLOS-ES, and the event-based feature is extracted by time-surface
representation. We compare the reconstructions through event-based data with
frame-based data. The event-based method performs well on PSNR and LPIPS, which
is 20% and 10% better than frame-based method, while the data volume takes only
2% of traditional method.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 10:56:14 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Wang",
"Conghe",
""
],
[
"He",
"Yutong",
""
],
[
"Wang",
"Xia",
""
],
[
"Huang",
"Honghao",
""
],
[
"Yan",
"Changda",
""
],
[
"Zhang",
"Xin",
""
],
[
"Chen",
"Hongwei",
""
]
] |
new_dataset
| 0.951508 |
2210.01293
|
Nikolay Malkin
|
Batu Ozturkler, Nikolay Malkin, Zhen Wang, Nebojsa Jojic
|
ThinkSum: Probabilistic reasoning over sets using large language models
|
ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have a substantial capacity for high-level
analogical reasoning: reproducing patterns in linear text that occur in their
training data (zero-shot evaluation) or in the provided context (few-shot
in-context learning). However, recent studies show that even the more advanced
LLMs fail in scenarios that require reasoning over multiple objects or facts
and making sequences of logical deductions. We propose a two-stage
probabilistic inference paradigm, ThinkSum, which reasons over sets of objects
or facts in a structured manner. In the first stage (Think - retrieval of
associations), a LLM is queried in parallel over a set of phrases extracted
from the prompt or an auxiliary model call. In the second stage (Sum -
probabilistic inference or reasoning), the results of these queries are
aggregated to make the final prediction. We demonstrate the possibilities and
advantages of ThinkSum on the BIG-bench suite of LLM evaluation tasks,
achieving improvements over the state of the art using GPT-family models on
thirteen difficult tasks, often with far smaller model variants. We also
compare and contrast ThinkSum with other proposed modifications to direct
prompting of LLMs, such as variants of chain-of-thought prompting. Our results
suggest that because the probabilistic inference in ThinkSum is performed
outside of calls to the LLM, ThinkSum is less sensitive to prompt design,
yields more interpretable predictions, and can be flexibly combined with latent
variable models to extract structured knowledge from LLMs. Overall, our
proposed paradigm represents a promising approach for enhancing the reasoning
capabilities of LLMs.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 00:34:01 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 17:25:19 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Ozturkler",
"Batu",
""
],
[
"Malkin",
"Nikolay",
""
],
[
"Wang",
"Zhen",
""
],
[
"Jojic",
"Nebojsa",
""
]
] |
new_dataset
| 0.997037 |
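To make the two-stage paradigm in the ThinkSum record above concrete, here is a minimal sketch under stated assumptions: `score_continuation` is a stand-in for an LLM likelihood query, and the averaging aggregator is only one possible Sum step; none of this is the authors' released code.

```python
from typing import Callable, Dict, List

def think_sum(phrases: List[str], candidates: List[str],
              score_continuation: Callable[[str, str], float]) -> Dict[str, float]:
    # Stage 1 (Think): query the scorer over the set of extracted phrases.
    per_phrase = {c: [score_continuation(p, c) for p in phrases] for c in candidates}
    # Stage 2 (Sum): aggregate the per-phrase scores and normalise into a distribution.
    totals = {c: sum(s) / len(s) for c, s in per_phrase.items()}
    z = sum(totals.values()) or 1.0
    return {c: v / z for c, v in totals.items()}

# Toy scorer favouring word overlap; a real system would call an LLM here.
toy = lambda phrase, cand: float(len(set(phrase.split()) & set(cand.split())))
print(think_sum(["a penguin is a bird", "a penguin cannot fly"],
                ["penguin", "sparrow"], toy))
```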
2212.02341
|
Ivan Zelinka
|
Ivan Zelinka, Miloslav Szczypka, Jan Plucar, Nikolay Kuznetsov
|
From Malware Samples to Fractal Images: A New Paradigm for
Classification. (Version 2.0, Previous version paper name: Have you ever seen
malware?)
|
This paper is under review; the section describing conversion from
malware structure to fractal figure is temporarily erased here to protect our
idea. It will be replaced by a full version when accepted
| null | null | null |
cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To date, a large number of research papers have been written on the
classification of malware, its identification, classification into different
families and the distinction between malware and goodware. These works have
been based on captured malware samples and have attempted to analyse malware
and goodware using various techniques, including techniques from the field of
artificial intelligence. For example, neural networks have played a significant
role in these classification methods. Some of this work also deals with
analysing malware using its visualisation. These works usually convert malware
samples capturing the structure of malware into image structures, which are
then the object of image processing. In this paper, we propose a very
unconventional and novel approach to malware visualisation based on dynamic
behaviour analysis, with the idea that the images, which are visually very
interesting, are then used to classify malware concerning goodware. Our
approach opens an extensive topic for future discussion and provides many new
directions for research in malware analysis and classification, as discussed in
conclusion. The results of the presented experiments are based on a database of
6 589 997 goodware, 827 853 potentially unwanted applications and 4 174 203
malware samples provided by ESET and selected experimental data (images,
generating polynomial formulas and software generating images) are available on
GitHub for interested readers. Thus, this paper is not a comprehensive compact
study that reports the results obtained from comparative experiments but rather
attempts to show a new direction in the field of visualisation with possible
applications in malware analysis.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2022 15:15:54 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 19:36:38 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Zelinka",
"Ivan",
""
],
[
"Szczypka",
"Miloslav",
""
],
[
"Plucar",
"Jan",
""
],
[
"Kuznetsov",
"Nikolay",
""
]
] |
new_dataset
| 0.995861 |
2212.07401
|
Jennifer J. Sun
|
Jennifer J. Sun, Lili Karashchuk, Amil Dravid, Serim Ryou, Sonia
Fereidooni, John Tuthill, Aggelos Katsaggelos, Bingni W. Brunton, Georgia
Gkioxari, Ann Kennedy, Yisong Yue, Pietro Perona
|
BKinD-3D: Self-Supervised 3D Keypoint Discovery from Multi-View Videos
|
CVPR 2023. Project page: https://sites.google.com/view/b-kind/3d
Code: https://github.com/neuroethology/BKinD-3D
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantifying motion in 3D is important for studying the behavior of humans and
other animals, but manual pose annotations are expensive and time-consuming to
obtain. Self-supervised keypoint discovery is a promising strategy for
estimating 3D poses without annotations. However, current keypoint discovery
approaches commonly process single 2D views and do not operate in the 3D space.
We propose a new method to perform self-supervised keypoint discovery in 3D
from multi-view videos of behaving agents, without any keypoint or bounding box
supervision in 2D or 3D. Our method, BKinD-3D, uses an encoder-decoder
architecture with a 3D volumetric heatmap, trained to reconstruct
spatiotemporal differences across multiple views, in addition to joint length
constraints on a learned 3D skeleton of the subject. In this way, we discover
keypoints without requiring manual supervision in videos of humans and rats,
demonstrating the potential of 3D keypoint discovery for studying behavior.
|
[
{
"version": "v1",
"created": "Wed, 14 Dec 2022 18:34:29 GMT"
},
{
"version": "v2",
"created": "Sat, 6 May 2023 23:11:39 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Jun 2023 05:03:24 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Sun",
"Jennifer J.",
""
],
[
"Karashchuk",
"Lili",
""
],
[
"Dravid",
"Amil",
""
],
[
"Ryou",
"Serim",
""
],
[
"Fereidooni",
"Sonia",
""
],
[
"Tuthill",
"John",
""
],
[
"Katsaggelos",
"Aggelos",
""
],
[
"Brunton",
"Bingni W.",
""
],
[
"Gkioxari",
"Georgia",
""
],
[
"Kennedy",
"Ann",
""
],
[
"Yue",
"Yisong",
""
],
[
"Perona",
"Pietro",
""
]
] |
new_dataset
| 0.964452 |
2212.09258
|
Armin Danesh Pazho
|
Armin Danesh Pazho, Ghazal Alinezhad Noghre, Babak Rahimi Ardabili,
Christopher Neff, Hamed Tabkhi
|
CHAD: Charlotte Anomaly Dataset
| null |
Image Analysis: 23rd Scandinavian Conference, SCIA 2023, Sirkka,
Finland, April 18-21, 2023, Proceedings, Part I, pp. 50-66. Cham: Springer
Nature Switzerland, 2023
|
10.1007/978-3-031-31435-3_4
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, we have seen a significant interest in data-driven deep
learning approaches for video anomaly detection, where an algorithm must
determine if specific frames of a video contain abnormal behaviors. However,
video anomaly detection is particularly context-specific, and the availability
of representative datasets heavily limits real-world accuracy. Additionally,
the metrics currently reported by most state-of-the-art methods often do not
reflect how well the model will perform in real-world scenarios. In this
article, we present the Charlotte Anomaly Dataset (CHAD). CHAD is a
high-resolution, multi-camera anomaly dataset in a commercial parking lot
setting. In addition to frame-level anomaly labels, CHAD is the first anomaly
dataset to include bounding box, identity, and pose annotations for each actor.
This is especially beneficial for skeleton-based anomaly detection, which is
useful for its lower computational demand in real-world settings. CHAD is also
the first anomaly dataset to contain multiple views of the same scene. With
four camera views and over 1.15 million frames, CHAD is the largest fully
annotated anomaly detection dataset including person annotations, collected
from continuous video streams from stationary cameras for smart video
surveillance applications. To demonstrate the efficacy of CHAD for training and
evaluation, we benchmark two state-of-the-art skeleton-based anomaly detection
algorithms on CHAD and provide comprehensive analysis, including both
quantitative results and qualitative examination. The dataset is available at
https://github.com/TeCSAR-UNCC/CHAD.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 06:05:34 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 18:29:47 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 19:21:20 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Pazho",
"Armin Danesh",
""
],
[
"Noghre",
"Ghazal Alinezhad",
""
],
[
"Ardabili",
"Babak Rahimi",
""
],
[
"Neff",
"Christopher",
""
],
[
"Tabkhi",
"Hamed",
""
]
] |
new_dataset
| 0.99974 |
2212.10114
|
Maksym Del
|
Maksym Del and Mark Fishel
|
True Detective: A Deep Abductive Reasoning Benchmark Undoable for GPT-3
and Challenging for GPT-4
|
5 pages, to appear at *SEM
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have demonstrated solid zero-shot reasoning
capabilities, which is reflected in their performance on the current test
tasks. This calls for a more challenging benchmark requiring highly advanced
reasoning ability to be solved. In this paper, we introduce such a benchmark,
consisting of 191 long-form (1200 words on average) mystery narratives
constructed as detective puzzles. Puzzles are sourced from the "5 Minute
Mystery" platform and include a multiple-choice question for evaluation. Only
47% of humans solve a puzzle successfully on average, while the best human
solvers achieve over 80% success rate. We show that GPT-3 models barely
outperform random on this benchmark (with 28% accuracy) while state-of-the-art
GPT-4 solves only 38% of puzzles. This indicates that there is still a
significant gap in the deep reasoning abilities of LLMs and humans and
highlights the need for further research in this area. Our work introduces a
challenging benchmark for future studies on reasoning in language models and
contributes to a better understanding of the limits of LLMs' abilities.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 09:34:43 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 18:50:21 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Del",
"Maksym",
""
],
[
"Fishel",
"Mark",
""
]
] |
new_dataset
| 0.989384 |
2303.02504
|
Ayoub Foussoul
|
Ayoub Foussoul, Vineet Goyal, Varun Gupta
|
MNL-Bandit in non-stationary environments
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the MNL-Bandit problem in a non-stationary
environment and present an algorithm with a worst-case expected regret of
$\tilde{O}\left( \min \left\{ \sqrt{NTL}\;,\;
N^{\frac{1}{3}}(\Delta_{\infty}^{K})^{\frac{1}{3}} T^{\frac{2}{3}} +
\sqrt{NT}\right\}\right)$. Here $N$ is the number of arms, $L$ is the number of
changes and $\Delta_{\infty}^{K}$ is a variation measure of the unknown
parameters. Furthermore, we show matching lower bounds on the expected regret
(up to logarithmic factors), implying that our algorithm is optimal. Our
approach builds upon the epoch-based algorithm for stationary MNL-Bandit in
Agrawal et al. 2016. However, non-stationarity poses several challenges and we
introduce new techniques and ideas to address these. In particular, we give a
tight characterization for the bias introduced in the estimators due to non
stationarity and derive new concentration bounds.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 21:10:42 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 01:29:18 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Foussoul",
"Ayoub",
""
],
[
"Goyal",
"Vineet",
""
],
[
"Gupta",
"Varun",
""
]
] |
new_dataset
| 0.958871 |
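For the regret bound quoted in the MNL-Bandit abstract above, the sketch below simply evaluates the two terms inside the minimum for example parameter values; logarithmic factors and constants hidden by the $\tilde{O}$ notation are ignored, and the parameter values are arbitrary.

```python
import math

def regret_bound(N, T, L, delta_inf_K):
    """min{ sqrt(N*T*L), N^(1/3) * (Delta_inf^K)^(1/3) * T^(2/3) + sqrt(N*T) }."""
    switching_term = math.sqrt(N * T * L)
    variation_term = N ** (1 / 3) * delta_inf_K ** (1 / 3) * T ** (2 / 3) + math.sqrt(N * T)
    return min(switching_term, variation_term)

print(regret_bound(N=50, T=10_000, L=20, delta_inf_K=5.0))
```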
2303.03565
|
Jingyu Liu
|
Jingyu Liu, Wenhan Xiong, Ian Jones, Yixin Nie, Anchit Gupta, Barlas Oğuz
|
CLIP-Layout: Style-Consistent Indoor Scene Synthesis with Semantic
Furniture Embedding
|
Changed paper template and cleaned up tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Indoor scene synthesis involves automatically picking and placing furniture
appropriately on a floor plan, so that the scene looks realistic and is
functionally plausible. Such scenes can serve as homes for immersive 3D
experiences, or be used to train embodied agents. Existing methods for this
task rely on labeled categories of furniture, e.g. bed, chair or table, to
generate contextually relevant combinations of furniture. Whether heuristic or
learned, these methods ignore instance-level visual attributes of objects, and
as a result may produce visually less coherent scenes. In this paper, we
introduce an auto-regressive scene model which can output instance-level
predictions, using general purpose image embedding based on CLIP. This allows
us to learn visual correspondences such as matching color and style, and
produce more functionally plausible and aesthetically pleasing scenes.
Evaluated on the 3D-FRONT dataset, our model achieves SOTA results in scene
synthesis and improves auto-completion metrics by over 50%. Moreover, our
embedding-based approach enables zero-shot text-guided scene synthesis and
editing, which easily generalizes to furniture not seen during training.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 00:26:02 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 04:48:55 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Liu",
"Jingyu",
""
],
[
"Xiong",
"Wenhan",
""
],
[
"Jones",
"Ian",
""
],
[
"Nie",
"Yixin",
""
],
[
"Gupta",
"Anchit",
""
],
[
"Oğuz",
"Barlas",
""
]
] |
new_dataset
| 0.997719 |
2303.15266
|
Rixin Zhou
|
Rixin Zhou, Jiafu Wei, Qian Zhang, Ruihua Qi, Xi Yang, Chuntao Li
|
Multi-Granularity Archaeological Dating of Chinese Bronze Dings Based on
a Knowledge-Guided Relation Graph
|
CVPR2023 accepted
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The archaeological dating of bronze dings has played a critical role in the
study of ancient Chinese history. Current archaeology depends on trained
experts to carry out bronze dating, which is time-consuming and
labor-intensive. For such dating, in this study, we propose a learning-based
approach to integrate advanced deep learning techniques and archaeological
knowledge. To achieve this, we first collect a large-scale image dataset of
bronze dings, which contains richer attribute information than other existing
fine-grained datasets. Second, we introduce a multihead classifier and a
knowledge-guided relation graph to mine the relationship between attributes and
the ding era. Third, we conduct comparison experiments with various existing
methods, the results of which show that our dating method achieves a
state-of-the-art performance. We hope that our data and applied networks will
enrich fine-grained classification research relevant to other interdisciplinary
areas of expertise. The dataset and source code used are included in our
supplementary materials, and will be open after submission owing to the
anonymity policy. Source codes and data are available at:
https://github.com/zhourixin/bronze-Ding.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 14:54:50 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 12:59:19 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Jun 2023 05:51:39 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Zhou",
"Rixin",
""
],
[
"Wei",
"Jiafu",
""
],
[
"Zhang",
"Qian",
""
],
[
"Qi",
"Ruihua",
""
],
[
"Yang",
"Xi",
""
],
[
"Li",
"Chuntao",
""
]
] |
new_dataset
| 0.999467 |
2304.14446
|
Jenny Xu
|
Jenny Xu and Steven L. Waslander
|
HyperMODEST: Self-Supervised 3D Object Detection with Confidence Score
Filtering
|
Accepted in CRV (Conference on Robots and Vision) 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Current LiDAR-based 3D object detectors for autonomous driving are almost
entirely trained on human-annotated data collected in specific geographical
domains with specific sensor setups, making it difficult to adapt to a
different domain. MODEST is the first work to train 3D object detectors without
any labels. Our work, HyperMODEST, proposes a universal method implemented on
top of MODEST that can largely accelerate the self-training process and does
not require tuning on a specific dataset. We filter intermediate pseudo-labels
used for data augmentation with low confidence scores. On the nuScenes dataset,
we observe a significant improvement of 1.6% in AP BEV in 0-80m range at
IoU=0.25 and an improvement of 1.7% in AP BEV in 0-80m range at IoU=0.5 while
only using one-fifth of the training time in the original approach by MODEST.
On the Lyft dataset, we also observe an improvement over the baseline during
the first round of iterative self-training. We explore the trade-off between
high precision and high recall in the early stage of the self-training process
by comparing our proposed method with two other score filtering methods:
confidence score filtering for pseudo-labels with and without static label
retention. The code and models of this work are available at
https://github.com/TRAILab/HyperMODEST
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 18:12:11 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 20:18:56 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Xu",
"Jenny",
""
],
[
"Waslander",
"Steven L.",
""
]
] |
new_dataset
| 0.999406 |
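The HyperMODEST record above centres on filtering intermediate pseudo-labels by confidence score before data augmentation. The snippet below is a minimal sketch of that filtering step; the dictionary fields and the 0.7 threshold are placeholders, not values from the paper.

```python
def filter_pseudo_labels(pseudo_labels, score_threshold=0.7):
    """Keep only pseudo-label boxes whose detector confidence meets the threshold."""
    return [box for box in pseudo_labels if box["score"] >= score_threshold]

boxes = [
    {"center_xyz": (1.0, 2.0, 0.5), "score": 0.91},
    {"center_xyz": (4.2, 0.3, 0.4), "score": 0.35},  # dropped before augmentation
]
print(filter_pseudo_labels(boxes))
```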
2305.05662
|
Wenhai Wang
|
Zhaoyang Liu, Yinan He, Wenhai Wang, Weiyun Wang, Yi Wang, Shoufa
Chen, Qinglong Zhang, Zeqiang Lai, Yang Yang, Qingyun Li, Jiashuo Yu,
Kunchang Li, Zhe Chen, Xue Yang, Xizhou Zhu, Yali Wang, Limin Wang, Ping Luo,
Jifeng Dai, Yu Qiao
|
InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT
Beyond Language
|
Technical Report
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present an interactive visual framework named InternGPT, or iGPT for
short. The framework integrates chatbots that have planning and reasoning
capabilities, such as ChatGPT, with non-verbal instructions like pointing
movements that enable users to directly manipulate images or videos on the
screen. Pointing (including gestures, cursors, etc.) movements can provide more
flexibility and precision in performing vision-centric tasks that require
fine-grained control, editing, and generation of visual content. The name
InternGPT stands for \textbf{inter}action, \textbf{n}onverbal, and
\textbf{chat}bots. Different from existing interactive systems that rely on
pure language, by incorporating pointing instructions, the proposed iGPT
significantly improves the efficiency of communication between users and
chatbots, as well as the accuracy of chatbots in vision-centric tasks,
especially in complicated visual scenarios where the number of objects is
greater than 2. Additionally, in iGPT, an auxiliary control mechanism is used
to improve the control capability of LLM, and a large vision-language model
termed Husky is fine-tuned for high-quality multi-modal dialogue (impressing
ChatGPT-3.5-turbo with 93.89\% GPT-4 Quality). We hope this work can spark new
ideas and directions for future interactive visual systems. Welcome to watch
the code at https://github.com/OpenGVLab/InternGPT.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 17:58:34 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2023 17:45:08 GMT"
},
{
"version": "v3",
"created": "Thu, 11 May 2023 14:48:24 GMT"
},
{
"version": "v4",
"created": "Fri, 2 Jun 2023 16:19:48 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Liu",
"Zhaoyang",
""
],
[
"He",
"Yinan",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Wang",
"Weiyun",
""
],
[
"Wang",
"Yi",
""
],
[
"Chen",
"Shoufa",
""
],
[
"Zhang",
"Qinglong",
""
],
[
"Lai",
"Zeqiang",
""
],
[
"Yang",
"Yang",
""
],
[
"Li",
"Qingyun",
""
],
[
"Yu",
"Jiashuo",
""
],
[
"Li",
"Kunchang",
""
],
[
"Chen",
"Zhe",
""
],
[
"Yang",
"Xue",
""
],
[
"Zhu",
"Xizhou",
""
],
[
"Wang",
"Yali",
""
],
[
"Wang",
"Limin",
""
],
[
"Luo",
"Ping",
""
],
[
"Dai",
"Jifeng",
""
],
[
"Qiao",
"Yu",
""
]
] |
new_dataset
| 0.998757 |
2305.08010
|
Kaushik Roy
|
Kaushik Roy, Manas Gaur, Misagh Soltani, Vipula Rawte, Ashwin Kalyan,
Amit Sheth
|
ProKnow: Process Knowledge for Safety Constrained and Explainable
Question Generation for Mental Health Diagnostic Assistance
| null |
Front. Big Data, 09 January 2023, Sec. Data Science, Volume 5 -
2022
|
10.3389/fdata.2022.1056728
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Current Virtual Mental Health Assistants (VMHAs) provide counseling and
suggestive care. They refrain from patient diagnostic assistance because they
lack training in safety-constrained and specialized clinical process knowledge.
In this work, we define Proknow as an ordered set of information that maps to
evidence-based guidelines or categories of conceptual understanding to experts
in a domain. We also introduce a new dataset of diagnostic conversations guided
by safety constraints and Proknow that healthcare professionals use. We develop
a method for natural language question generation (NLG) that collects
diagnostic information from the patient interactively. We demonstrate the
limitations of using state-of-the-art large-scale language models (LMs) on this
dataset. Our algorithm models the process knowledge through explicitly modeling
safety, knowledge capture, and explainability. LMs augmented with ProKnow
guided method generated 89% safer questions in the depression and anxiety
domain. The Explainability of the generated question is assessed by computing
similarity with concepts in depression and anxiety knowledge bases. Overall,
irrespective of the type of LMs augmented with our ProKnow, we achieved an
average 82% improvement over simple pre-trained LMs on safety, explainability,
and process-guided question generation. We qualitatively and quantitatively
evaluate the efficacy of the proposed ProKnow-guided methods by introducing
three new evaluation metrics for safety, explainability, and process knowledge
adherence.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 21:31:02 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 18:33:33 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Roy",
"Kaushik",
""
],
[
"Gaur",
"Manas",
""
],
[
"Soltani",
"Misagh",
""
],
[
"Rawte",
"Vipula",
""
],
[
"Kalyan",
"Ashwin",
""
],
[
"Sheth",
"Amit",
""
]
] |
new_dataset
| 0.999288 |
2305.17415
|
Zhibin Lan
|
Zhibin Lan, Jiawei Yu, Xiang Li, Wen Zhang, Jian Luan, Bin Wang, Degen
Huang, Jinsong Su
|
Exploring Better Text Image Translation with Multimodal Codebook
|
Accepted by ACL 2023 Main Conference
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text image translation (TIT) aims to translate the source texts embedded in
the image to target translations, which has a wide range of applications and
thus has important research value. However, current studies on TIT are
confronted with two main bottlenecks: 1) this task lacks a publicly available
TIT dataset, 2) dominant models are constructed in a cascaded manner, which
tends to suffer from the error propagation of optical character recognition
(OCR). In this work, we first annotate a Chinese-English TIT dataset named
OCRMT30K, providing convenience for subsequent studies. Then, we propose a TIT
model with a multimodal codebook, which is able to associate the image with
relevant texts, providing useful supplementary information for translation.
Moreover, we present a multi-stage training framework involving text machine
translation, image-text alignment, and TIT tasks, which fully exploits
additional bilingual texts, OCR dataset and our OCRMT30K dataset to train our
model. Extensive experiments and in-depth analyses strongly demonstrate the
effectiveness of our proposed model and training framework.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 08:41:18 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 12:38:37 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Lan",
"Zhibin",
""
],
[
"Yu",
"Jiawei",
""
],
[
"Li",
"Xiang",
""
],
[
"Zhang",
"Wen",
""
],
[
"Luan",
"Jian",
""
],
[
"Wang",
"Bin",
""
],
[
"Huang",
"Degen",
""
],
[
"Su",
"Jinsong",
""
]
] |
new_dataset
| 0.991623 |
2305.17813
|
Kevin Jude Concessao
|
Kevin Jude Concessao, Unnikrishnan Cheramangalath, MJ Ricky Dev,
Rupesh Nasre
|
Meerkat: A framework for Dynamic Graph Algorithms on GPUs
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Graph algorithms are challenging to implement due to their varying topology
and irregular access patterns. Real-world graphs are dynamic in nature and
routinely undergo edge and vertex additions, as well as, deletions. Typical
examples of dynamic graphs are social networks, collaboration networks, and
road networks. Applying static algorithms repeatedly on dynamic graphs is
inefficient. Unfortunately, we know little about how to efficiently process
dynamic graphs on massively parallel architectures such as GPUs. Existing
approaches to represent and process dynamic graphs are either not general or
inefficient. In this work, we propose a library-based framework for dynamic
graph algorithms that proposes a GPU-tailored graph representation and exploits
the warp-cooperative execution model. The library, named Meerkat, builds upon a
recently proposed dynamic graph representation on GPUs. This representation
exploits a hashtable-based mechanism to store a vertex's neighborhood. Meerkat
also enables fast iteration through a group of vertices, such as the whole set
of vertices or the neighbors of a vertex. Based on the efficient iterative
patterns encoded in Meerkat, we implement dynamic versions of the popular graph
algorithms such as breadth-first search, single-source shortest paths, triangle
counting, weakly connected components, and PageRank. Compared to the
state-of-the-art dynamic graph analytics framework Hornet, Meerkat is
$12.6\times$, $12.94\times$, and $6.1\times$ faster, for query, insert, and
delete operations, respectively. Using a variety of real-world graphs, we
observe that Meerkat significantly improves the efficiency of the underlying
dynamic graph algorithm. Meerkat performs $1.17\times$ for BFS, $1.32\times$
for SSSP, $1.74\times$ for PageRank, and $6.08\times$ for WCC, better than
Hornet on average.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 21:10:31 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 15:22:20 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Concessao",
"Kevin Jude",
""
],
[
"Cheramangalath",
"Unnikrishnan",
""
],
[
"Dev",
"MJ Ricky",
""
],
[
"Nasre",
"Rupesh",
""
]
] |
new_dataset
| 0.985502 |
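The Meerkat record above describes a hashtable-per-vertex representation with fast neighbourhood iteration. As a hedged, CPU-only sketch of that idea (Meerkat itself is a GPU library whose actual layout is not reproduced here), a dictionary-of-dictionaries gives the same insert/delete/iterate interface:

```python
from collections import defaultdict

class DynamicGraph:
    """Toy dynamic graph: one hash table of neighbours per vertex."""
    def __init__(self):
        self.adj = defaultdict(dict)          # vertex -> {neighbour: weight}

    def insert_edge(self, u, v, w=1.0):
        self.adj[u][v] = w

    def delete_edge(self, u, v):
        self.adj[u].pop(v, None)

    def neighbors(self, u):
        return self.adj[u].items()            # fast iteration over a neighbourhood

g = DynamicGraph()
g.insert_edge(0, 1); g.insert_edge(0, 2); g.delete_edge(0, 1)
print(list(g.neighbors(0)))                   # [(2, 1.0)]
```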
2306.00037
|
Despoina Antonakaki
|
Alexander Shevtsov, Despoina Antonakaki, Ioannis Lamprou, Polyvios
Pratikakis, Sotiris Ioannidis
|
BotArtist: Twitter bot detection Machine Learning model based on Twitter
suspension
| null | null | null | null |
cs.SI cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Twitter as one of the most popular social networks, offers a means for
communication and online discourse, which unfortunately has been the target of
bots and fake accounts, leading to the manipulation and spreading of false
information. Towards this end, we gather a challenging, multilingual dataset of
social discourse on Twitter, originating from 9M users regarding the recent
Russo-Ukrainian war, in order to detect the bot accounts and the conversation
involving them. We collect the ground truth for our dataset through the Twitter
API suspended accounts collection, containing approximately 343K of bot
accounts and 8M of normal users. Additionally, we use a dataset provided by
Botometer-V3 with 1,777 Varol, 483 German accounts, and 1,321 US accounts.
Besides the publicly available datasets, we also manage to collect 2
independent datasets around popular discussion topics of the 2022 energy crisis
and the 2022 conspiracy discussions. Both of the datasets were labeled
according to the Twitter suspension mechanism. We build a novel ML model for
bot detection using the state-of-the-art XGBoost model. We combine the model
with a high volume of labeled tweets according to the Twitter suspension
mechanism ground truth. This requires a limited set of profile features
allowing labeling of the dataset in different time periods from the collection,
as it is independent of the Twitter API. In comparison with Botometer our
methodology achieves an average 11% higher ROC-AUC score over two real-case
scenario datasets.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 09:12:35 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 11:15:02 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Shevtsov",
"Alexander",
""
],
[
"Antonakaki",
"Despoina",
""
],
[
"Lamprou",
"Ioannis",
""
],
[
"Pratikakis",
"Polyvios",
""
],
[
"Ioannidis",
"Sotiris",
""
]
] |
new_dataset
| 0.999762 |
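The BotArtist record above builds an XGBoost classifier on a limited set of profile features and reports ROC-AUC. The sketch below shows that general recipe on synthetic data; the feature count, hyperparameters, and labels are placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))        # stand-ins for profile features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```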
2306.00253
|
Bonaventure F. P. Dossou
|
Tobi Olatunji, Tejumade Afonja, Bonaventure F. P. Dossou, Atnafu
Lambebo Tonja, Chris Chinenye Emezue, Amina Mardiyyah Rufai, Sahib Singh
|
AfriNames: Most ASR models "butcher" African Names
|
Accepted at Interspeech 2023 (Main Conference)
| null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Useful conversational agents must accurately capture named entities to
minimize error for downstream tasks, for example, asking a voice assistant to
play a track from a certain artist, initiating navigation to a specific
location, or documenting a laboratory result for a patient. However, where
named entities such as ``Ukachukwu`` (Igbo), ``Lakicia`` (Swahili), or
``Ingabire`` (Rwandan) are spoken, automatic speech recognition (ASR) models'
performance degrades significantly, propagating errors to downstream systems.
We model this problem as a distribution shift and demonstrate that such model
bias can be mitigated through multilingual pre-training, intelligent data
augmentation strategies to increase the representation of African-named
entities, and fine-tuning multilingual ASR models on multiple African accents.
The resulting fine-tuned models show an 81.5\% relative WER improvement
compared with the baseline on samples with African-named entities.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 00:17:52 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 15:35:42 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Olatunji",
"Tobi",
""
],
[
"Afonja",
"Tejumade",
""
],
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"Tonja",
"Atnafu Lambebo",
""
],
[
"Emezue",
"Chris Chinenye",
""
],
[
"Rufai",
"Amina Mardiyyah",
""
],
[
"Singh",
"Sahib",
""
]
] |
new_dataset
| 0.992134 |
2306.00385
|
Martin Hermann Paul Fuchs
|
Martin Hermann Paul Fuchs, Begüm Demir
|
HySpecNet-11k: A Large-Scale Hyperspectral Dataset for Benchmarking
Learning-Based Hyperspectral Image Compression Methods
|
Accepted at IEEE International Geoscience and Remote Sensing
Symposium (IGARSS) 2023. The dataset, our code and the pre-trained weights
are publicly available at https://hyspecnet.rsim.berlin
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
The development of learning-based hyperspectral image compression methods has
recently attracted great attention in remote sensing. Such methods require a
high number of hyperspectral images to be used during training to optimize all
parameters and reach a high compression performance. However, existing
hyperspectral datasets are not sufficient to train and evaluate learning-based
compression methods, which hinders the research in this field. To address this
problem, in this paper we present HySpecNet-11k that is a large-scale
hyperspectral benchmark dataset made up of 11,483 nonoverlapping image patches.
Each patch is a portion of 128 $\times$ 128 pixels with 224 spectral bands and
a ground sample distance of 30 m. We exploit HySpecNet-11k to benchmark the
current state of the art in learning-based hyperspectral image compression by
focussing our attention on various 1D, 2D and 3D convolutional autoencoder
architectures. Nevertheless, HySpecNet-11k can be used for any unsupervised
learning task in the framework of hyperspectral image analysis. The dataset,
our code and the pre-trained weights are publicly available at
https://hyspecnet.rsim.berlin
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 06:34:14 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 10:01:48 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Fuchs",
"Martin Hermann Paul",
""
],
[
"Demir",
"Begüm",
""
]
] |
new_dataset
| 0.99973 |
2306.00547
|
Mohit Mendiratta
|
Mohit Mendiratta, Xingang Pan, Mohamed Elgharib, Kartik Teotia,
Mallikarjun B R, Ayush Tewari, Vladislav Golyanik, Adam Kortylewski,
Christian Theobalt
|
AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars
|
17 pages, 17 figures. Project page:
https://vcai.mpi-inf.mpg.de/projects/AvatarStudio/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Capturing and editing full head performances enables the creation of virtual
characters with various applications such as extended reality and media
production. The past few years witnessed a steep rise in the photorealism of
human head avatars. Such avatars can be controlled through different input data
modalities, including RGB, audio, depth, IMUs and others. While these data
modalities provide effective means of control, they mostly focus on editing the
head movements such as the facial expressions, head pose and/or camera
viewpoint. In this paper, we propose AvatarStudio, a text-based method for
editing the appearance of a dynamic full head avatar. Our approach builds on
existing work to capture dynamic performances of human heads using neural
radiance field (NeRF) and edits this representation with a text-to-image
diffusion model. Specifically, we introduce an optimization strategy for
incorporating multiple keyframes representing different camera viewpoints and
time stamps of a video performance into a single diffusion model. Using this
personalized diffusion model, we edit the dynamic NeRF by introducing
view-and-time-aware Score Distillation Sampling (VT-SDS) following a
model-based guidance approach. Our method edits the full head in a canonical
space, and then propagates these edits to remaining time steps via a pretrained
deformation network. We evaluate our method visually and numerically via a user
study, and results show that our method outperforms existing approaches. Our
experiments validate the design choices of our method and highlight that our
edits are genuine, personalized, as well as 3D- and time-consistent.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 11:06:01 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 08:45:09 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Mendiratta",
"Mohit",
""
],
[
"Pan",
"Xingang",
""
],
[
"Elgharib",
"Mohamed",
""
],
[
"Teotia",
"Kartik",
""
],
[
"R",
"Mallikarjun B",
""
],
[
"Tewari",
"Ayush",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Kortylewski",
"Adam",
""
],
[
"Theobalt",
"Christian",
""
]
] |
new_dataset
| 0.96614 |
2306.00758
|
Leonard Hackel
|
Leonard Hackel (1,3), Kai Norman Clasen (1), Mahdyar Ravanbakhsh (2),
Begüm Demir (1,3) ((1) Technische Universität Berlin, (2) Zalando SE
Berlin, (3) Berlin Institute for the Foundations of Learning and Data)
|
LiT-4-RSVQA: Lightweight Transformer-based Visual Question Answering in
Remote Sensing
|
Accepted at IEEE International Geoscience and Remote Sensing
Symposium 2023
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual question answering (VQA) methods in remote sensing (RS) aim to answer
natural language questions with respect to an RS image. Most of the existing
methods require a large amount of computational resources, which limits their
application in operational scenarios in RS. To address this issue, in this
paper we present an effective lightweight transformer-based VQA in RS
(LiT-4-RSVQA) architecture for efficient and accurate VQA in RS. Our
architecture consists of: i) a lightweight text encoder module; ii) a
lightweight image encoder module; iii) a fusion module; and iv) a
classification module. The experimental results obtained on a VQA benchmark
dataset demonstrate that our proposed LiT-4-RSVQA architecture provides
accurate VQA results while significantly reducing the computational
requirements on the executing hardware. Our code is publicly available at
https://git.tu-berlin.de/rsim/lit4rsvqa.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 14:53:07 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 08:58:08 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Hackel",
"Leonard",
""
],
[
"Clasen",
"Kai Norman",
""
],
[
"Ravanbakhsh",
"Mahdyar",
""
],
[
"Demir",
"Begüm",
""
]
] |
new_dataset
| 0.997344 |
2306.01016
|
Hejie Cui
|
Hejie Cui, Rongmei Lin, Nasser Zalmout, Chenwei Zhang, Jingbo Shang,
Carl Yang, Xian Li
|
PV2TEA: Patching Visual Modality to Textual-Established Information
Extraction
|
ACL 2023 Findings
| null | null | null |
cs.CL cs.AI cs.CV cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information extraction, e.g., attribute value extraction, has been
extensively studied and formulated based only on text. However, many attributes
can benefit from image-based extraction, like color, shape, pattern, among
others. The visual modality has long been underutilized, mainly due to
multimodal annotation difficulty. In this paper, we aim to patch the visual
modality to the textual-established attribute information extractor. The
cross-modality integration faces several unique challenges: (C1) images and
textual descriptions are loosely paired intra-sample and inter-samples; (C2)
images usually contain rich backgrounds that can mislead the prediction; (C3)
weakly supervised labels from textual-established extractors are biased for
multimodal training. We present PV2TEA, an encoder-decoder architecture
equipped with three bias reduction schemes: (S1) Augmented label-smoothed
contrast to improve the cross-modality alignment for loosely-paired image and
text; (S2) Attention-pruning that adaptively distinguishes the visual
foreground; (S3) Two-level neighborhood regularization that mitigates the label
textual bias via reliability estimation. Empirical results on real-world
e-Commerce datasets demonstrate up to 11.74% absolute (20.97% relatively) F1
increase over unimodal baselines.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 05:39:45 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Cui",
"Hejie",
""
],
[
"Lin",
"Rongmei",
""
],
[
"Zalmout",
"Nasser",
""
],
[
"Zhang",
"Chenwei",
""
],
[
"Shang",
"Jingbo",
""
],
[
"Yang",
"Carl",
""
],
[
"Li",
"Xian",
""
]
] |
new_dataset
| 0.997573 |
2306.01027
|
Rishad Shafik
|
Samuel Prescott and Adrian Wheeldon and Rishad Shafik and Tousif
Rahman and Alex Yakovlev and Ole-Christoffer Granmo
|
An FPGA Architecture for Online Learning using the Tsetlin Machine
| null | null | null | null |
cs.LG cs.AI cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
There is a need for machine learning models to evolve in unsupervised
circumstances. New classifications may be introduced, unexpected faults may
occur, or the initial dataset may be small compared to the data-points
presented to the system during normal operation. Implementing such a system
using neural networks involves significant mathematical complexity, which is a
major issue in power-critical edge applications.
This paper proposes a novel field-programmable gate-array infrastructure for
online learning, implementing a low-complexity machine learning algorithm
called the Tsetlin Machine. This infrastructure features a custom-designed
architecture for run-time learning management, providing on-chip offline and
online learning. Using this architecture, training can be carried out on-demand
on the \ac{FPGA} with pre-classified data before inference takes place.
Additionally, our architecture provisions online learning, where training can
be interleaved with inference during operation. Tsetlin Machine (TM) training
naturally descends to an optimum, with training also linked to a threshold
hyper-parameter which is used to reduce the probability of issuing feedback as
the TM becomes trained further. The proposed architecture is modular, allowing
the data input source to be easily changed, whilst inbuilt cross-validation
infrastructure allows for reliable and representative results during system
testing. We present use cases for online learning using the proposed
infrastructure and demonstrate the energy/performance/accuracy trade-offs.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 13:33:26 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Prescott",
"Samuel",
""
],
[
"Wheeldon",
"Adrian",
""
],
[
"Shafik",
"Rishad",
""
],
[
"Rahman",
"Tousif",
""
],
[
"Yakovlev",
"Alex",
""
],
[
"Granmo",
"Ole-Christoffer",
""
]
] |
new_dataset
| 0.999728 |
2306.01028
|
Enno Adler
|
Enno Adler, Stefan Böttcher, Rita Hartel
|
ITR: A grammar-based graph compressor supporting fast neighborhood
queries
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Neighborhood queries are the most common queries on graphs; thus, it is
desirable to answer them efficiently on compressed data structures. We present
a compression scheme called Incidence-Type-RePair (ITR) for graphs with labeled
nodes and labeled edges based on RePair and apply the scheme to RDF graphs. We
show that ITR speeds up neighborhood queries to only a few milliseconds and
thereby outperforms existing solutions while providing a compression size
comparable to existing RDF graph compressors.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 13:49:18 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Adler",
"Enno",
""
],
[
"Böttcher",
"Stefan",
""
],
[
"Hartel",
"Rita",
""
]
] |
new_dataset
| 0.994125 |
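ITR, in the record above, is grammar-based and builds on RePair. As a toy illustration of the underlying RePair idea only (repeatedly replacing the most frequent adjacent pair with a fresh nonterminal), the sketch below compresses a plain integer sequence; it does not implement ITR's labeled-graph encoding or its neighbourhood queries.

```python
from collections import Counter

def repair(seq):
    """Return (compressed sequence, grammar rules) for a list of integer symbols."""
    rules, next_sym = {}, max(seq) + 1
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break
        rules[next_sym] = (a, b)              # new nonterminal expands to the pair
        out, i = [], 0
        while i < len(seq):                   # left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(next_sym); i += 2
            else:
                out.append(seq[i]); i += 1
        seq, next_sym = out, next_sym + 1
    return seq, rules

print(repair([1, 2, 1, 2, 3, 1, 2]))          # ([4, 4, 3, 4], {4: (1, 2)})
```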
2306.01069
|
Wang-Chiew Tan
|
Wang-Chiew Tan, Jane Dwivedi-Yu, Yuliang Li, Lambert Mathias, Marzieh
Saeidi, Jing Nathan Yan, Alon Y. Halevy
|
TimelineQA: A Benchmark for Question Answering over Timelines
| null | null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Lifelogs are descriptions of experiences that a person had during their life.
Lifelogs are created by fusing data from the multitude of digital services,
such as online photos, maps, shopping and content streaming services. Question
answering over lifelogs can offer personal assistants a critical resource when
they try to provide advice in context. However, obtaining answers to questions
over lifelogs is beyond the current state of the art of question answering
techniques for a variety of reasons, the most pronounced of which is that
lifelogs combine free text with some degree of structure such as temporal and
geographical information.
We create and publicly release TimelineQA1, a benchmark for accelerating
progress on querying lifelogs. TimelineQA generates lifelogs of imaginary
people. The episodes in the lifelog range from major life episodes such as high
school graduation to those that occur on a daily basis such as going for a run.
We describe a set of experiments on TimelineQA with several state-of-the-art QA
models. Our experiments reveal that for atomic queries, an extractive QA system
significantly out-performs a state-of-the-art retrieval-augmented QA system.
For multi-hop queries involving aggregates, we show that the best result is
obtained with a state-of-the-art table QA technique, assuming the ground truth
set of episodes for deriving the answer is available.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 18:17:13 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Tan",
"Wang-Chiew",
""
],
[
"Dwivedi-Yu",
"Jane",
""
],
[
"Li",
"Yuliang",
""
],
[
"Mathias",
"Lambert",
""
],
[
"Saeidi",
"Marzieh",
""
],
[
"Yan",
"Jing Nathan",
""
],
[
"Halevy",
"Alon Y.",
""
]
] |
new_dataset
| 0.99883 |
2306.01116
|
Julien Launay
|
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru,
Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei,
Julien Launay
|
The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora
with Web Data, and Web Data Only
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models are commonly trained on a mixture of filtered web data
and curated high-quality corpora, such as social media conversations, books, or
technical papers. This curation process is believed to be necessary to produce
performant models with broad zero-shot generalization abilities. However, as
larger models requiring pretraining on trillions of tokens are considered, it
is unclear how scalable is curation and whether we will run out of unique
high-quality data soon. At variance with previous beliefs, we show that
properly filtered and deduplicated web data alone can lead to powerful models;
even significantly outperforming models from the state-of-the-art trained on
The Pile. Despite extensive filtering, the high-quality data we extract from
the web is still plentiful, and we are able to obtain five trillion tokens from
CommonCrawl. We publicly release an extract of 600 billion tokens from our
RefinedWeb dataset, and 1.3/7.5B parameters language models trained on it.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 20:03:56 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Penedo",
"Guilherme",
""
],
[
"Malartic",
"Quentin",
""
],
[
"Hesslow",
"Daniel",
""
],
[
"Cojocaru",
"Ruxandra",
""
],
[
"Cappelli",
"Alessandro",
""
],
[
"Alobeidli",
"Hamza",
""
],
[
"Pannier",
"Baptiste",
""
],
[
"Almazrouei",
"Ebtesam",
""
],
[
"Launay",
"Julien",
""
]
] |
new_dataset
| 0.999414 |
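The RefinedWeb record above attributes model quality to careful filtering and deduplication of web data. The sketch below shows only the general shape of such a step (a crude length filter plus exact-hash deduplication); the real pipeline uses far more elaborate quality heuristics and fuzzy deduplication, and none of these thresholds come from the paper.

```python
import hashlib

def clean_corpus(documents, min_words=20):
    """Drop very short documents and exact duplicates (by SHA-1 of normalised text)."""
    seen, kept = set(), []
    for doc in documents:
        text = " ".join(doc.split())              # normalise whitespace
        if len(text.split()) < min_words:         # crude quality filter
            continue
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if digest in seen:                        # exact duplicate
            continue
        seen.add(digest)
        kept.append(text)
    return kept

docs = ["short page", "a genuinely long web page " * 10, "a genuinely long web page " * 10]
print(len(clean_corpus(docs)))   # 1
```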
2306.01163
|
Sahraoui Dhelim Dr
|
Amar Khelloufi, Huansheng Ning, Abdenacer Naouri, Abdelkarim Ben Sada,
Attia Qammar, Abdelkader Khalil, Sahraoui Dhelim and Lingfeng Mao
|
A Multi-Modal Latent-Features based Service Recommendation System for
the Social Internet of Things
| null | null | null | null |
cs.SI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Social Internet of Things (SIoT), is revolutionizing how we interact with
our everyday lives. By adding the social dimension to connecting devices, the
SIoT has the potential to drastically change the way we interact with smart
devices. This connected infrastructure allows for unprecedented levels of
convenience, automation, and access to information, allowing us to do more with
less effort. However, this revolutionary new technology also brings an eager
need for service recommendation systems. As the SIoT grows in scope and
complexity, it becomes increasingly important for businesses and individuals,
and SIoT objects alike to have reliable sources for products, services, and
information that are tailored to their specific needs. Few works have been
proposed to provide service recommendations for SIoT environments. However,
these efforts have been confined to only focusing on modeling user-item
interactions using contextual information, devices' SIoT relationships, and
correlation social groups but these schemes do not account for latent semantic
item-item structures underlying the sparse multi-modal contents in SIoT
environment. In this paper, we propose a latent-based SIoT recommendation
system that learns item-item structures and aggregates multiple modalities to
obtain latent item graphs which are then used in graph convolutions to inject
high-order affinities into item representations. Experiments showed that the
proposed recommendation system outperformed state-of-the-art SIoT
recommendation methods and validated its efficacy at mining latent
relationships from multi-modal features.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 21:38:50 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Khelloufi",
"Amar",
""
],
[
"Ning",
"Huansheng",
""
],
[
"Naouri",
"Abdenacer",
""
],
[
"Sada",
"Abdelkarim Ben",
""
],
[
"Qammar",
"Attia",
""
],
[
"Khalil",
"Abdelkader",
""
],
[
"Dhelim",
"Sahraoui",
""
],
[
"Mao",
"Lingfeng",
""
]
] |
new_dataset
| 0.98844 |
2306.01197
|
Marcelo Mendoza Mr.
|
Naim Bro and Marcelo Mendoza
|
Surname affinity in Santiago, Chile: A network-based approach that
uncovers urban segregation
| null |
PLoS ONE 16(1): e0244372 (2021)
|
10.1371/journal.pone.0244372
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Based on a geocoded registry of more than four million residents of Santiago,
Chile, we build two surname-based networks that reveal the city's population
structure. The first network is formed from paternal and maternal surname
pairs. The second network is formed from the isonymic distances between the
city's neighborhoods. These networks uncover the city's main ethnic groups and
their spatial distribution. We match the networks to a socioeconomic index, and
find that surnames of high socioeconomic status tend to cluster, be more
diverse, and occupy a well-defined quarter of the city. The results are
suggestive of a high degree of urban segregation in Santiago.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 23:22:48 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Bro",
"Naim",
""
],
[
"Mendoza",
"Marcelo",
""
]
] |
new_dataset
| 0.998108 |
2306.01268
|
Edward Williams
|
Edward C. Williams, Grace Su, Sandra R. Schloen, Miller C. Prosser,
Susanne Paulus, Sanjay Krishnan
|
DeepScribe: Localization and Classification of Elamite Cuneiform Signs
Via Deep Learning
|
Currently under review in the ACM JOCCH
| null | null | null |
cs.CV cs.DL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Twenty-five hundred years ago, the paperwork of the Achaemenid Empire was
recorded on clay tablets. In 1933, archaeologists from the University of
Chicago's Oriental Institute (OI) found tens of thousands of these tablets and
fragments during the excavation of Persepolis. Many of these tablets have been
painstakingly photographed and annotated by expert cuneiformists, and now
provide a rich dataset consisting of over 5,000 annotated tablet images and
100,000 cuneiform sign bounding boxes. We leverage this dataset to develop
DeepScribe, a modular computer vision pipeline capable of localizing cuneiform
signs and providing suggestions for the identity of each sign. We investigate
the difficulty of learning subtasks relevant to cuneiform tablet transcription
on ground-truth data, finding that a RetinaNet object detector can achieve a
localization mAP of 0.78 and a ResNet classifier can achieve a top-5 sign
classification accuracy of 0.89. The end-to-end pipeline achieves a top-5
classification accuracy of 0.80. As part of the classification module,
DeepScribe groups cuneiform signs into morphological clusters. We consider how
this automatic clustering approach differs from the organization of standard,
printed sign lists and what we may learn from it. These components, trained
individually, are sufficient to produce a system that can analyze photos of
cuneiform tablets from the Achaemenid period and provide useful transliteration
suggestions to researchers. We evaluate the model's end-to-end performance on
locating and classifying signs, providing a roadmap to a linguistically-aware
transliteration system, then consider the model's potential utility when
applied to other periods of cuneiform writing.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 05:04:27 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Williams",
"Edward C.",
""
],
[
"Su",
"Grace",
""
],
[
"Schloen",
"Sandra R.",
""
],
[
"Prosser",
"Miller C.",
""
],
[
"Paulus",
"Susanne",
""
],
[
"Krishnan",
"Sanjay",
""
]
] |
new_dataset
| 0.999357 |
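The two-stage localize-then-classify pipeline described above can be sketched with off-the-shelf torchvision backbones. The models below are generic pretrained networks, not the DeepScribe weights, and the thresholds are arbitrary; the sketch only shows how detector boxes feed a crop classifier.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Generic pretrained backbones standing in for the paper's trained models.
detector = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")
classifier = torchvision.models.resnet50(weights="DEFAULT")
detector.eval()
classifier.eval()

def transcribe(image, score_thresh=0.5, topk=5):
    """Detect sign-like boxes, then return top-k class indices per crop."""
    x = to_tensor(image)
    results = []
    with torch.no_grad():
        det = detector([x])[0]  # dict with "boxes", "scores", "labels"
        for box, score in zip(det["boxes"], det["scores"]):
            if score < score_thresh:
                continue
            x0, y0, x1, y1 = box.int().tolist()
            crop = x[:, y0:y1, x0:x1].unsqueeze(0)
            crop = torch.nn.functional.interpolate(crop, size=(224, 224))
            logits = classifier(crop)
            results.append((box.tolist(), logits.topk(topk).indices[0].tolist()))
    return results
```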
2306.01325
|
Alejandro Benito-Santos
|
Alejandro Benito-Santos, Adri\'an Ghajari, Pedro Hern\'andez, V\'ictor
Fresno, Salvador Ros, Elena Gonz\'alez-Blanco
|
LyricSIM: A novel Dataset and Benchmark for Similarity Detection in
Spanish Song LyricS
|
Accepted to Congreso Internacional de la Sociedad Espa\~nola para el
Procesamiento del Lenguaje Natural 2023 (SEPLN2023)
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper, we present a new dataset and benchmark tailored to the task of
semantic similarity in song lyrics. Our dataset, originally consisting of 2775
pairs of Spanish songs, was annotated in a collective annotation experiment by
63 native annotators. After collecting and refining the data to ensure a high
degree of consensus and data integrity, we obtained 676 high-quality annotated
pairs that were used to evaluate the performance of various state-of-the-art
monolingual and multilingual language models. Consequently, we established
baseline results that we hope will be useful to the community in all future
academic and industrial applications conducted in this context.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 07:48:20 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Benito-Santos",
"Alejandro",
""
],
[
"Ghajari",
"Adrián",
""
],
[
"Hernández",
"Pedro",
""
],
[
"Fresno",
"Víctor",
""
],
[
"Ros",
"Salvador",
""
],
[
"González-Blanco",
"Elena",
""
]
] |
new_dataset
| 0.999883 |
2306.01369
|
David Millard
|
David Millard, Daniel Pastor, Joseph Bowkett, Paul Backes, Gaurav S.
Sukhatme
|
Granular Gym: High Performance Simulation for Robotic Tasks with
Granular Materials
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Granular materials are of critical interest to many robotic tasks in
planetary science, construction, and manufacturing. However, the dynamics of
granular materials are complex and often computationally very expensive to
simulate. We propose a set of methodologies and a system for the fast
simulation of granular materials on Graphics Processing Units (GPUs), and show
that this simulation is fast enough for basic training with Reinforcement
Learning algorithms, which currently require many dynamics samples to achieve
acceptable performance. Our method models granular material dynamics using
implicit timestepping methods for multibody rigid contacts, as well as
algorithmic techniques for efficient parallel collision detection between pairs
of particles and between particle and arbitrarily shaped rigid bodies, and
programming techniques for minimizing warp divergence on Single-Instruction,
Multiple-Thread (SIMT) chip architectures. We showcase our simulation system on
several environments targeted toward robotic tasks, and release our simulator
as an open-source tool.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 08:49:50 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Millard",
"David",
""
],
[
"Pastor",
"Daniel",
""
],
[
"Bowkett",
"Joseph",
""
],
[
"Backes",
"Paul",
""
],
[
"Sukhatme",
"Gaurav S.",
""
]
] |
new_dataset
| 0.997847 |
2306.01395
|
Minho Shim
|
Minho Shim, Taeoh Kim, Jinhyung Kim, Dongyoon Wee
|
Masked Autoencoder for Unsupervised Video Summarization
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Summarizing a video requires a diverse understanding of the video, ranging
from recognizing scenes to evaluating how much each frame is essential enough
to be selected as a summary. Self-supervised learning (SSL) is acknowledged for
its robustness and flexibility to multiple downstream tasks, but video SSL
has not shown its value for dense understanding tasks like video summarization.
We claim that an unsupervised autoencoder with sufficient self-supervised learning
does not need any extra downstream architecture design or fine-tuning weights
to be utilized as a video summarization model. The proposed method to evaluate
the importance score of each frame takes advantage of the reconstruction score
of the autoencoder's decoder. We evaluate the method in major unsupervised
video summarization benchmarks to show its effectiveness under various
experimental settings.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 09:44:45 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Shim",
"Minho",
""
],
[
"Kim",
"Taeoh",
""
],
[
"Kim",
"Jinhyung",
""
],
[
"Wee",
"Dongyoon",
""
]
] |
new_dataset
| 0.984706 |
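A minimal reading of the reconstruction-score idea in the abstract above: score each frame by its reconstruction error under an autoencoder and keep the top-scoring frames. The autoencoder is a placeholder and the top-k selection is an assumption; the paper's exact scoring and selection rule may differ.

```python
import torch

def frame_scores(frames, autoencoder):
    """Per-frame reconstruction error (MSE) for frames of shape (T, C, H, W)."""
    with torch.no_grad():
        recon = autoencoder(frames)
        return ((recon - frames) ** 2).mean(dim=(1, 2, 3))

def summarize(frames, autoencoder, ratio=0.15):
    """Keep the fraction `ratio` of highest-scoring frames, in temporal order."""
    scores = frame_scores(frames, autoencoder)
    k = max(1, int(ratio * frames.shape[0]))
    keep = torch.topk(scores, k).indices.sort().values
    return frames[keep], keep
```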
2306.01438
|
Yingjie Wang
|
Yingjie Wang, Jiajun Deng, Yao Li, Jinshui Hu, Cong Liu, Yu Zhang,
Jianmin Ji, Wanli Ouyang, Yanyong Zhang
|
Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object
Detection
|
accepted by CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR and Radar are two complementary sensing approaches in that LiDAR
specializes in capturing an object's 3D shape while Radar provides longer
detection ranges as well as velocity hints. Though seemingly natural, how to
efficiently combine them for improved feature representation is still unclear.
The main challenge arises from the fact that Radar data are extremely sparse and lack
height information. Therefore, directly integrating Radar features into
LiDAR-centric detection networks is not optimal. In this work, we introduce a
bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the
challenges and improve 3D detection for dynamic objects. Technically,
Bi-LRFusion involves two steps: first, it enriches Radar's local features by
learning important details from the LiDAR branch to alleviate the problems
caused by the absence of height information and extreme sparsity; second, it
combines LiDAR features with the enhanced Radar features in a unified
bird's-eye-view representation. We conduct extensive experiments on nuScenes
and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art
performance for detecting dynamic objects. Notably, Radar data in these two
datasets have different formats, which demonstrates the generalizability of our
method. Codes are available at https://github.com/JessieW0806/BiLRFusion.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 10:57:41 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Wang",
"Yingjie",
""
],
[
"Deng",
"Jiajun",
""
],
[
"Li",
"Yao",
""
],
[
"Hu",
"Jinshui",
""
],
[
"Liu",
"Cong",
""
],
[
"Zhang",
"Yu",
""
],
[
"Ji",
"Jianmin",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Zhang",
"Yanyong",
""
]
] |
new_dataset
| 0.999009 |
2306.01455
|
Thomas Studer
|
Thomas Studer
|
The logic of temporal domination
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this short note, we are concerned with the fairness condition "A and B
hold almost equally often", which is important for specifying and verifying the
correctness of non-terminating processes and protocols. We introduce the logic
of temporal domination, in which the above condition can be expressed. We
present syntax and semantics of our logic and show that it is a proper
extension of linear time temporal logic. In order to obtain this result, we
rely on the corresponding result for k-counting automata.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 11:26:56 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Studer",
"Thomas",
""
]
] |
new_dataset
| 0.989217 |
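To make the fairness condition above concrete, the sketch below checks a finite trace prefix for one natural reading of "A and B hold almost equally often": the running counts never drift apart by more than a bound. The paper's semantics is defined over infinite runs via k-counting automata; this is only the finite-prefix intuition.

```python
def almost_equally_often(trace, a="A", b="B", bound=1):
    """True if, along the prefix, the counts of states satisfying `a` and `b`
    never differ by more than `bound` (each state is a set of propositions)."""
    diff = 0
    for state in trace:
        diff += (a in state) - (b in state)
        if abs(diff) > bound:
            return False
    return True

print(almost_equally_often([{"A"}, {"B"}, {"A"}, {"B"}]))  # True
print(almost_equally_often([{"A"}, {"A"}, {"A"}, {"B"}]))  # False
```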
2306.01461
|
Jiacheng Chen
|
Jiacheng Chen, Ruizhi Deng, Yasutaka Furukawa
|
PolyDiffuse: Polygonal Shape Reconstruction via Guided Set Diffusion
Models
|
Project page: https://poly-diffuse.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents PolyDiffuse, a novel structured reconstruction algorithm
that transforms visual sensor data into polygonal shapes with Diffusion Models
(DM), an emerging machinery amid exploding generative AI, while formulating
reconstruction as a generation process conditioned on sensor data. The task of
structured reconstruction poses two fundamental challenges to DM: 1) A
structured geometry is a ``set'' (e.g., a set of polygons for a floorplan
geometry), where a sample of $N$ elements has $N!$ different but equivalent
representations, making the denoising highly ambiguous; and 2) A
``reconstruction'' task has a single solution, where an initial noise needs to
be chosen carefully, while any initial noise works for a generation task. Our
technical contribution is the introduction of a Guided Set Diffusion Model
where 1) the forward diffusion process learns guidance networks to control
noise injection so that one representation of a sample remains distinct from
its other permutation variants, thus resolving denoising ambiguity; and 2) the
reverse denoising process reconstructs polygonal shapes, initialized and
directed by the guidance networks, as a conditional generation process subject
to the sensor data. We have evaluated our approach for reconstructing two types
of polygonal shapes: floorplan as a set of polygons and HD map for autonomous
cars as a set of polylines. Through extensive experiments on standard
benchmarks, we demonstrate that PolyDiffuse significantly advances the current
state of the art and enables broader practical applications.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 11:38:04 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Chen",
"Jiacheng",
""
],
[
"Deng",
"Ruizhi",
""
],
[
"Furukawa",
"Yasutaka",
""
]
] |
new_dataset
| 0.995731 |
2306.01465
|
Elena Chistova
|
Elena Chistova and Ivan Smirnov
|
Light Coreference Resolution for Russian with Hierarchical Discourse
Features
|
Accepted at Dialogue-2023 conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coreference resolution is the task of identifying and grouping mentions
referring to the same real-world entity. Previous neural models have mainly
focused on learning span representations and pairwise scores for coreference
decisions. However, current methods do not explicitly capture the referential
choice in the hierarchical discourse, an important factor in coreference
resolution. In this study, we propose a new approach that incorporates
rhetorical information into neural coreference resolution models. We collect
rhetorical features from automated discourse parses and examine their impact.
As a base model, we implement an end-to-end span-based coreference resolver
using a partially fine-tuned multilingual entity-aware language model LUKE. We
evaluate our method on the RuCoCo-23 Shared Task for coreference resolution in
Russian. Our best model employing rhetorical distance between mentions has
ranked 1st on the development set (74.6% F1) and 2nd on the test set (73.3% F1)
of the Shared Task. We hope that our work will inspire further research on
incorporating discourse information in neural coreference resolution models.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 11:41:24 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Chistova",
"Elena",
""
],
[
"Smirnov",
"Ivan",
""
]
] |
new_dataset
| 0.997047 |
2306.01504
|
Ngoc Luyen Le
|
Ngoc Luyen Le and Jinfeng Zhong and Elsa Negre and Marie-H\'el\`ene
Abel
|
Syst\`eme de recommandations bas\'e sur les contraintes pour les
simulations de gestion de crise
|
in French language
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In the context of the evacuation of populations, some citizens/volunteers may
want and be able to participate in the evacuation of populations in difficulty
by coming to lend a hand to emergency/evacuation vehicles with their own
vehicles. One way of framing these impulses of solidarity would be to be able
to list, in real time, the available citizens/volunteers with their vehicles
(land, sea, air, etc.), to geolocate them according to the risk areas to be
evacuated, and to add them to the evacuation/rescue vehicles.
Because it is difficult to propose an effective real-time operational system on
the field in a real crisis situation, in this work, we propose to add a module
for recommending driver/vehicle pairs (with their specificities) to a system of
crisis management simulation. To do that, we chose to model and develop an
ontology-supported constraint-based recommender system for crisis management
simulations.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 12:51:48 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Le",
"Ngoc Luyen",
""
],
[
"Zhong",
"Jinfeng",
""
],
[
"Negre",
"Elsa",
""
],
[
"Abel",
"Marie-Hélène",
""
]
] |
new_dataset
| 0.999312 |
2306.01529
|
Helge Spieker
|
Arnaud Gotlieb, Morten Mossige, Helge Spieker
|
Constraint-Guided Test Execution Scheduling: An Experience Report at ABB
Robotics
|
SafeComp 2023
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated test execution scheduling is crucial in modern software development
environments, where components are frequently updated with changes that impact
their integration with hardware systems. Building test schedules that focus
on the right tests and make optimal use of the available resources, both time
and hardware, while satisfying extensive requirements on the selection of test
cases and their assignment to particular test execution machines, is a complex
optimization task. Manual solutions are time-consuming and often error-prone.
Furthermore, when software and hardware components and test scripts are
frequently added, removed or updated, static test execution scheduling is no
longer feasible and the motivation for automation taking care of dynamic
changes grows. Since 2012, our work has focused on transferring technology
based on constraint programming for automating the testing of industrial
robotic systems at ABB Robotics. After having successfully transferred
constraint satisfaction models dedicated to test case generation, we present
the results of a project called DynTest whose goal is to automate the
scheduling of test execution from a large test repository, on distinct
industrial robots. This paper reports on our experience and lessons learned for
successfully transferring constraint-based optimization models for test
execution scheduling at ABB Robotics. Our experience underlines the benefits of
a close collaboration between industry and academia for both parties.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 13:29:32 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Gotlieb",
"Arnaud",
""
],
[
"Mossige",
"Morten",
""
],
[
"Spieker",
"Helge",
""
]
] |
new_dataset
| 0.995271 |
2306.01540
|
Ayush Agrawal
|
Ayush Agrawal, Raghav Arora, Ahana Datta, Snehasis Banerjee,
Brojeshwar Bhowmick, Krishna Murthy Jatavallabhula, Mohan Sridharan, Madhava
Krishna
|
CLIPGraphs: Multimodal Graph Networks to Infer Object-Room Affinities
| null |
RO-MAN 2023 Conference
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel method for determining the best room to place
an object in, for embodied scene rearrangement. While state-of-the-art
approaches rely on large language models (LLMs) or reinforcement learned (RL)
policies for this task, our approach, CLIPGraphs, efficiently combines
commonsense domain knowledge, data-driven methods, and recent advances in
multimodal learning. Specifically, it (a) encodes a knowledge graph of prior
human preferences about the room location of different objects in home
environments, (b) incorporates vision-language features to support multimodal
queries based on images or text, and (c) uses a graph network to learn
object-room affinities based on embeddings of the prior knowledge and the
vision-language features. We demonstrate that our approach provides better
estimates of the most appropriate location of objects from a benchmark set of
object categories in comparison with state-of-the-art baselines.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 13:44:01 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Agrawal",
"Ayush",
""
],
[
"Arora",
"Raghav",
""
],
[
"Datta",
"Ahana",
""
],
[
"Banerjee",
"Snehasis",
""
],
[
"Bhowmick",
"Brojeshwar",
""
],
[
"Jatavallabhula",
"Krishna Murthy",
""
],
[
"Sridharan",
"Mohan",
""
],
[
"Krishna",
"Madhava",
""
]
] |
new_dataset
| 0.997999 |
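One piece of the pipeline above, object-room affinity from vision-language embeddings, can be sketched as cosine similarity between prompt embeddings. The `embed` function is a placeholder for any text encoder (e.g., a CLIP text tower) and the prompts are invented; the full CLIPGraphs model additionally trains a graph network over a knowledge graph of human preferences, which this sketch omits.

```python
import numpy as np

def affinity_matrix(object_names, room_names, embed):
    """Cosine affinity between objects and rooms in a shared text-embedding space.

    `embed` maps a list of strings to an (n, d) array and is assumed, not given."""
    obj = embed([f"a photo of a {o}" for o in object_names])
    room = embed([f"a photo of a {r}" for r in room_names])
    obj = obj / np.linalg.norm(obj, axis=1, keepdims=True)
    room = room / np.linalg.norm(room, axis=1, keepdims=True)
    return obj @ room.T  # shape: (num_objects, num_rooms)

def best_room(object_names, room_names, embed):
    aff = affinity_matrix(object_names, room_names, embed)
    return {o: room_names[i] for o, i in zip(object_names, aff.argmax(axis=1))}
```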
2306.01579
|
Hsien-Chin Lin
|
Hsien-Chin Lin, Shutong Feng, Christian Geishauser, Nurul Lubis, Carel
van Niekerk, Michael Heck, Benjamin Ruppik, Renato Vukovic, Milica
Ga\v{s}i\'c
|
EmoUS: Simulating User Emotions in Task-Oriented Dialogues
|
accepted by SIGIR2023
| null |
10.1145/3539618.3592092
| null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Existing user simulators (USs) for task-oriented dialogue systems only model
user behaviour on semantic and natural language levels without considering the
user persona and emotions. Optimising dialogue systems with generic user
policies, which cannot model diverse user behaviour driven by different
emotional states, may result in a high drop-off rate when deployed in the real
world. Thus, we present EmoUS, a user simulator that learns to simulate user
emotions alongside user behaviour. EmoUS generates user emotions, semantic
actions, and natural language responses based on the user goal, the dialogue
history, and the user persona. By analysing what kind of system behaviour
elicits what kind of user emotions, we show that EmoUS can be used as a probe
to evaluate a variety of dialogue systems and in particular their effect on the
user's emotional state. Developing such methods is important in the age of
large language model chat-bots and rising ethical concerns.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 14:48:19 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Lin",
"Hsien-Chin",
""
],
[
"Feng",
"Shutong",
""
],
[
"Geishauser",
"Christian",
""
],
[
"Lubis",
"Nurul",
""
],
[
"van Niekerk",
"Carel",
""
],
[
"Heck",
"Michael",
""
],
[
"Ruppik",
"Benjamin",
""
],
[
"Vukovic",
"Renato",
""
],
[
"Gašić",
"Milica",
""
]
] |
new_dataset
| 0.963254 |
2306.01650
|
Diego Saez-Trumper
|
Mykola Trokhymovych, Muniza Aslam, Ai-Jou Chou, Ricardo Baeza-Yates,
and Diego Saez-Trumper
|
Fair multilingual vandalism detection system for Wikipedia
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel design of a system aimed at supporting the
Wikipedia community in addressing vandalism on the platform. To achieve this,
we collected a massive dataset of 47 languages, and applied advanced filtering
and feature engineering techniques, including multilingual masked language
modeling to build the training dataset from human-generated data. The
performance of the system was evaluated through comparison with the one used in
production in Wikipedia, known as ORES. Our research results in a significant
increase in the number of languages covered, making Wikipedia patrolling more
efficient for a wider range of communities. Furthermore, our model outperforms
ORES, ensuring that the results provided are not only more accurate but also
less biased against certain groups of contributors.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 16:19:16 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Trokhymovych",
"Mykola",
""
],
[
"Aslam",
"Muniza",
""
],
[
"Chou",
"Ai-Jou",
""
],
[
"Baeza-Yates",
"Ricardo",
""
],
[
"Saez-Trumper",
"Diego",
""
]
] |
new_dataset
| 0.993667 |
2306.01738
|
Zhangyang Qi
|
Zhangyang Qi, Jiaqi Wang, Xiaoyang Wu, Hengshuang Zhao
|
OCBEV: Object-Centric BEV Transformer for Multi-View 3D Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-view 3D object detection is becoming popular in autonomous driving due
to its high effectiveness and low cost. Most of the current state-of-the-art
detectors follow the query-based bird's-eye-view (BEV) paradigm, which benefits
from both BEV's strong perception power and end-to-end pipeline. Despite
achieving substantial progress, existing works model objects via globally
leveraging temporal and spatial information of BEV features, resulting in
problems when handling challenging, complex, and dynamic autonomous driving
scenarios. In this paper, we propose an Object-Centric query-BEV detector
OCBEV, which can carve the temporal and spatial cues of moving targets more
effectively. OCBEV comprises three designs: Object Aligned Temporal Fusion
aligns the BEV feature based on ego-motion and estimated current locations of
moving objects, leading to a precise instance-level feature fusion. Object
Focused Multi-View Sampling samples more 3D features from adaptive local
height ranges of objects for each scene to enrich foreground information.
Object Informed Query Enhancement replaces part of pre-defined decoder queries
in common DETR-style decoders with positional features of objects on
high-confidence locations, introducing more direct object positional priors.
Extensive experimental evaluations are conducted on the challenging nuScenes
dataset. Our approach achieves a state-of-the-art result, surpassing the
traditional BEVFormer by 1.5 NDS points. Moreover, we have a faster convergence
speed and only need half of the training iterations to get comparable
performance, which further demonstrates its effectiveness.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 17:59:48 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Qi",
"Zhangyang",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Wu",
"Xiaoyang",
""
],
[
"Zhao",
"Hengshuang",
""
]
] |
new_dataset
| 0.998748 |
1910.14031
|
Harnaik Dhami
|
Harnaik Dhami, Kevin Yu, Tianshu Xu, Qian Zhu, Kshitiz Dhakal, James
Friel, Song Li, and Pratap Tokekar
|
Crop Height and Plot Estimation for Phenotyping from Unmanned Aerial
Vehicles using 3D LiDAR
|
8 pages, 10 figures, 1 table, Accepted to IROS 2020
| null |
10.1109/IROS45743.2020.9341343
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present techniques to measure crop heights using a 3D Light Detection and
Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV). Knowing the
height of plants is crucial to monitor their overall health and growth cycles,
especially for high-throughput plant phenotyping. We present a methodology for
extracting plant heights from 3D LiDAR point clouds, specifically focusing on
plot-based phenotyping environments. We also present a toolchain that can be
used to create phenotyping farms for use in Gazebo simulations. The tool
creates a randomized farm with realistic 3D plant and terrain models. We
conducted a series of simulations and hardware experiments in controlled and
natural settings. Our algorithm was able to estimate the plant heights in a
field with 112 plots with a root mean square error (RMSE) of 6.1 cm. This is
the first such dataset for 3D LiDAR from an airborne robot over a wheat field.
The developed simulation toolchain, algorithmic implementation, and datasets
can be found on the GitHub repository located at
https://github.com/hsd1121/PointCloudProcessing.
|
[
{
"version": "v1",
"created": "Wed, 30 Oct 2019 15:03:21 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Mar 2020 15:42:05 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Nov 2020 01:23:36 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Dhami",
"Harnaik",
""
],
[
"Yu",
"Kevin",
""
],
[
"Xu",
"Tianshu",
""
],
[
"Zhu",
"Qian",
""
],
[
"Dhakal",
"Kshitiz",
""
],
[
"Friel",
"James",
""
],
[
"Li",
"Song",
""
],
[
"Tokekar",
"Pratap",
""
]
] |
new_dataset
| 0.998581 |
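The plot-height estimate described above can be illustrated with a simple percentile rule: ground level from a low z-percentile, canopy from a high one. This mirrors common practice for plot-based LiDAR phenotyping but is an assumption, not the paper's exact algorithm; the toy point cloud is synthetic.

```python
import numpy as np

def plot_height(points, ground_pct=2, canopy_pct=98):
    """Estimate plant height for one plot from (N, 3) LiDAR returns (x, y, z)."""
    z = points[:, 2]
    return np.percentile(z, canopy_pct) - np.percentile(z, ground_pct)

# Synthetic plot: flat ground near z = 0, plants reaching about 0.9 m.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 1, (500, 2)), rng.normal(0.0, 0.02, 500)])
plants = np.column_stack([rng.uniform(0, 1, (200, 2)), rng.uniform(0.2, 0.9, 200)])
print(plot_height(np.vstack([ground, plants])))  # roughly 0.9 m
```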
2007.07573
|
Giovanni Casini
|
Giovanni Casini, Umberto Straccia
|
Defeasible RDFS via Rational Closure
|
47 pages. Preprint version
|
Information Sciences, Volume 643, 2023, 118409, Elsevier
|
10.1016/j.ins.2022.11.165
| null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the field of non-monotonic logics, the notion of Rational Closure (RC) is
acknowledged as a prominent approach. In recent years, RC has gained even more
popularity in the context of Description Logics (DLs), the logic underpinning
the semantic web standard ontology language OWL 2, whose main ingredients are
classes and roles. In this work, we show how to integrate RC within the triple
language RDFS, which together with OWL2 are the two major standard semantic web
ontology languages. To do so, we start from $\rho df$, which is the logic
behind RDFS, and then extend it to $\rho df_\bot$, allowing to state that two
entities are incompatible. Eventually, we propose defeasible $\rho df_\bot$ via
a typical RC construction. The main features of our approach are: (i) unlike
most other approaches that add an extra non-monotone rule layer on top of
monotone RDFS, defeasible $\rho df_\bot$ remains syntactically a triple
language and is a simple extension of $\rho df_\bot$ by introducing some new
predicate symbols with specific semantics. In particular, any RDFS
reasoner/store may handle them as ordinary terms if it does not want to take
into account the extra semantics of the new predicate symbols; (ii) the
defeasible $\rho df_\bot$ entailment decision procedure is built on top of the
$\rho df_\bot$ entailment decision procedure, which in turn is an extension of
the one for $\rho df$ via some additional inference rules favouring a
potential implementation; and (iii) defeasible $\rho df_\bot$ entailment can be
decided in polynomial time.
|
[
{
"version": "v1",
"created": "Wed, 15 Jul 2020 09:45:50 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 14:21:27 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Casini",
"Giovanni",
""
],
[
"Straccia",
"Umberto",
""
]
] |
new_dataset
| 0.99809 |
2010.01436
|
Jianxiong Guo
|
Jianxiong Guo, Xingjian Ding, Weili Wu, Ding-Zhu Du
|
A Double Auction for Charging Scheduling among Vehicles Using
DAG-Blockchains
| null | null | null | null |
cs.NI cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electric Vehicles (EVs) are becoming more and more popular in our daily life,
which replaces traditional fuel vehicles to reduce carbon emissions and protect
the environment. EVs need to be charged, but the number of charging piles in a
Charging Station (CS) is limited and charging is usually more time-consuming
than fueling. According to this scenario, we propose a secure and efficient
charging scheduling system based on a Directed Acyclic Graph (DAG)-blockchain
and double auction mechanism. In a smart area, it attempts to assign EVs to the
available CSs in the light of their submitted charging requests and status
information. First, we design a lightweight charging scheduling framework that
integrates DAG-blockchain and modern cryptography technology to ensure security
and scalability while performing scheduling and completing trades. In this
process, a constrained multi-item double auction problem is formulated because
of the limited charging resources in a CS, which motivates EVs and CSs in this
area to participate in the market based on their preferences and statuses. Due
to this constraint, our problem is more complicated, and it is harder to achieve
truthfulness as well as system efficiency than with the existing double
auction model. To adapt to it, we propose two algorithms, namely Truthful
Mechanism for Charging (TMC) and Efficient Mechanism for Charging (EMC), to
determine an assignment between EVs and CSs and pricing strategies. Then, both
theoretical analysis and numerical simulations show the correctness and
effectiveness of our proposed algorithms.
|
[
{
"version": "v1",
"created": "Sat, 3 Oct 2020 22:34:40 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 17:03:59 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Guo",
"Jianxiong",
""
],
[
"Ding",
"Xingjian",
""
],
[
"Wu",
"Weili",
""
],
[
"Du",
"Ding-Zhu",
""
]
] |
new_dataset
| 0.995631 |
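For intuition on the auction component above, here is a textbook single-unit double-auction clearing rule: match the highest bids with the lowest asks while trade remains profitable and price at the midpoint of the marginal pair. This baseline is far simpler than the constrained multi-item TMC/EMC mechanisms the paper proposes and makes no truthfulness guarantee.

```python
def clear_double_auction(bids, asks):
    """bids: EV valuations for one charging slot; asks: CS reserve prices.
    Returns (number of trades, clearing price)."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k == 0:
        return 0, None
    price = (bids[k - 1] + asks[k - 1]) / 2  # midpoint of the marginal pair
    return k, price

print(clear_double_auction([9, 7, 5, 3], [4, 6, 8]))  # -> (2, 6.5)
```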
2104.00893
|
Duo Lu
|
Duo Lu, Varun C Jammula, Steven Como, Jeffrey Wishart, Yan Chen,
Yezhou Yang
|
CAROM -- Vehicle Localization and Traffic Scene Reconstruction from
Monocular Cameras on Road Infrastructures
|
Accepted to IEEE ICRA 2021
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Traffic monitoring cameras are powerful tools for traffic management and
essential components of intelligent road infrastructure systems. In this paper,
we present a vehicle localization and traffic scene reconstruction framework
using these cameras, dubbed as CAROM, i.e., "CARs On the Map". CAROM processes
traffic monitoring videos and converts them to anonymous data structures of
vehicle type, 3D shape, position, and velocity for traffic scene reconstruction
and replay. Through collaborating with a local department of transportation in
the United States, we constructed a benchmarking dataset containing GPS data,
roadside camera videos, and drone videos to validate the vehicle tracking
results. On average, the localization error is approximately 0.8 m and 1.7 m
within the range of 50 m and 120 m from the cameras, respectively.
|
[
{
"version": "v1",
"created": "Fri, 2 Apr 2021 05:49:01 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 18:39:13 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Lu",
"Duo",
""
],
[
"Jammula",
"Varun C",
""
],
[
"Como",
"Steven",
""
],
[
"Wishart",
"Jeffrey",
""
],
[
"Chen",
"Yan",
""
],
[
"Yang",
"Yezhou",
""
]
] |
new_dataset
| 0.999415 |
2112.10028
|
S M Farabi Mahmud
|
Farabi Mahmud, Sungkeun Kim, Harpreet Singh Chawla, Chia-Che Tsai, Eun
Jung Kim, Abdullah Muzahid
|
Attack of the Knights: A Non Uniform Cache Side-Channel Attack
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
For a distributed last-level cache (LLC) in a large multicore chip, the
access time to one LLC bank can significantly differ from that to another due
to the difference in physical distance. In this paper, we successfully
demonstrated a new distance-based side-channel attack by timing the AES
decryption operation and extracting part of an AES secret key on an Intel
Knights Landing CPU. We introduce several techniques to overcome the challenges
of the attack, including the use of multiple attack threads to ensure LLC hits,
to detect vulnerable memory locations, and to obtain fine-grained timing of the
victim operations. While operating as a covert channel, this attack can reach a
bandwidth of 205 kbps with an error rate of only 0.02%. We also observed that
the side-channel attack can extract 4 bytes of an AES key with 100% accuracy
with only 4000 trial rounds of encryption.
|
[
{
"version": "v1",
"created": "Sun, 19 Dec 2021 00:01:36 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Nov 2022 03:15:49 GMT"
},
{
"version": "v3",
"created": "Tue, 2 May 2023 21:49:42 GMT"
},
{
"version": "v4",
"created": "Wed, 31 May 2023 18:25:48 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Mahmud",
"Farabi",
""
],
[
"Kim",
"Sungkeun",
""
],
[
"Chawla",
"Harpreet Singh",
""
],
[
"Tsai",
"Chia-Che",
""
],
[
"Kim",
"Eun Jung",
""
],
[
"Muzahid",
"Abdullah",
""
]
] |
new_dataset
| 0.990417 |
2205.12219
|
Yue Fan
|
Yue Fan, Winson Chen, Tongzhou Jiang, Chun Zhou, Yi Zhang, Xin Eric
Wang
|
Aerial Vision-and-Dialog Navigation
|
Accepted by ACL 2023 Findings
| null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to converse with humans and follow natural language commands is
crucial for intelligent unmanned aerial vehicles (a.k.a. drones). It can
relieve people's burden of holding a controller all the time, allow
multitasking, and make drone control more accessible for people with
disabilities or with their hands occupied. To this end, we introduce Aerial
Vision-and-Dialog Navigation (AVDN), to navigate a drone via natural language
conversation. We build a drone simulator with a continuous photorealistic
environment and collect a new AVDN dataset of over 3k recorded navigation
trajectories with asynchronous human-human dialogs between commanders and
followers. The commander provides an initial navigation instruction and further
guidance upon request, while the follower navigates the drone in the simulator
and asks questions when needed. During data collection, followers' attention on
the drone's visual observation is also recorded. Based on the AVDN dataset, we
study the tasks of aerial navigation from (full) dialog history and propose an
effective Human Attention Aided Transformer model (HAA-Transformer), which
learns to predict both navigation waypoints and human attention.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 17:28:14 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 12:33:32 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 06:39:11 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Fan",
"Yue",
""
],
[
"Chen",
"Winson",
""
],
[
"Jiang",
"Tongzhou",
""
],
[
"Zhou",
"Chun",
""
],
[
"Zhang",
"Yi",
""
],
[
"Wang",
"Xin Eric",
""
]
] |
new_dataset
| 0.999521 |
2206.05239
|
Sindhu Tipirneni
|
Sindhu Tipirneni, Ming Zhu, Chandan K. Reddy
|
StructCoder: Structure-Aware Transformer for Code Generation
|
Revised and added new experiments, edited writing
| null | null | null |
cs.LG cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
There has been a recent surge of interest in automating software engineering
tasks using deep learning. This paper addresses the problem of code generation
where the goal is to generate target code given source code in a different
language or a natural language description. Most of the state-of-the-art deep
learning models for code generation use training strategies primarily designed
for natural language. However, understanding and generating code requires a
more rigorous comprehension of the code syntax and semantics. With this
motivation, we develop an encoder-decoder Transformer model where both the
encoder and decoder are explicitly trained to recognize the syntax and data
flow in the source and target codes, respectively. We not only make the encoder
structure-aware by leveraging the source code's syntax tree and data flow
graph, but we also support the decoder in preserving the syntax and data flow
of the target code by introducing two novel auxiliary tasks: AST (Abstract
Syntax Tree) paths prediction and data flow prediction. To the best of our
knowledge, this is the first work to introduce a structure-aware Transformer
decoder that models both syntax and data flow to enhance the quality of
generated code. The proposed StructCoder model achieves state-of-the-art
performance on code translation and text-to-code generation tasks in the
CodeXGLUE benchmark, and improves over baselines of similar size on the APPS
code generation benchmark. Our code is publicly available at
https://github.com/reddy-lab-code-research/StructCoder/.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 17:26:31 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 23:25:43 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Tipirneni",
"Sindhu",
""
],
[
"Zhu",
"Ming",
""
],
[
"Reddy",
"Chandan K.",
""
]
] |
new_dataset
| 0.959453 |
2210.02396
|
Wilson Yan
|
Wilson Yan, Danijar Hafner, Stephen James, Pieter Abbeel
|
Temporally Consistent Transformers for Video Generation
|
Project website: https://wilson1yan.github.io/teco
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
To generate accurate videos, algorithms have to understand the spatial and
temporal dependencies in the world. Current algorithms enable accurate
predictions over short horizons but tend to suffer from temporal
inconsistencies. When generated content goes out of view and is later
revisited, the model invents different content instead. Despite this severe
limitation, no established benchmarks on complex data exist for rigorously
evaluating video generation with long temporal dependencies. In this paper, we
curate 3 challenging video datasets with long-range dependencies by rendering
walks through 3D scenes of procedural mazes, Minecraft worlds, and indoor
scans. We perform a comprehensive evaluation of current models and observe
their limitations in temporal consistency. Moreover, we introduce the
Temporally Consistent Transformer (TECO), a generative model that substantially
improves long-term consistency while also reducing sampling time. By
compressing its input sequence into fewer embeddings, applying a temporal
transformer, and expanding back using a spatial MaskGit, TECO outperforms
existing models across many metrics. Videos are available on the website:
https://wilson1yan.github.io/teco
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 17:15:10 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 20:19:01 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Yan",
"Wilson",
""
],
[
"Hafner",
"Danijar",
""
],
[
"James",
"Stephen",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
new_dataset
| 0.993332 |
2210.16478
|
Ziyu Shan
|
Ziyu Shan, Qi Yang, Rui Ye, Yujie Zhang, Yiling Xu, Xiaozhong Xu and
Shan Liu
|
GPA-Net:No-Reference Point Cloud Quality Assessment with Multi-task
Graph Convolutional Network
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of 3D vision, point cloud has become an
increasingly popular 3D visual media content. Due to the irregular structure,
point cloud has posed novel challenges to the related research, such as
compression, transmission, rendering and quality assessment. In these latest
researches, point cloud quality assessment (PCQA) has attracted wide attention
due to its significant role in guiding practical applications, especially in
many cases where the reference point cloud is unavailable. However, current
no-reference metrics, which are based on prevalent deep neural networks, have apparent
disadvantages. For example, to adapt to the irregular structure of point cloud,
they require preprocessing such as voxelization and projection that introduce
extra distortions, and the applied grid-kernel networks, such as Convolutional
Neural Networks, fail to extract effective distortion-related features.
Besides, they rarely consider the various distortion patterns and the
philosophy that PCQA should exhibit shifting, scaling, and rotational
invariance. In this paper, we propose a novel no-reference PCQA metric named
the Graph convolutional PCQA network (GPA-Net). To extract effective features
for PCQA, we propose a new graph convolution kernel, i.e., GPAConv, which
attentively captures the perturbation of structure and texture. Then, we
propose a multi-task framework consisting of one main task (quality
regression) and two auxiliary tasks (distortion type and degree predictions).
Finally, we propose a coordinate normalization module to stabilize the results
of GPAConv under shift, scale and rotation transformations. Experimental
results on two independent databases show that GPA-Net achieves the best
performance compared to the state-of-the-art no-reference PCQA metrics, even
better than some full-reference metrics in some cases.
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 03:06:55 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Nov 2022 01:42:37 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 14:42:23 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Shan",
"Ziyu",
""
],
[
"Yang",
"Qi",
""
],
[
"Ye",
"Rui",
""
],
[
"Zhang",
"Yujie",
""
],
[
"Xu",
"Yiling",
""
],
[
"Xu",
"Xiaozhong",
""
],
[
"Liu",
"Shan",
""
]
] |
new_dataset
| 0.963263 |
2211.00815
|
Zhengyang Chen
|
Zhengyang Chen, Bing Han, Xu Xiang, Houjun Huang, Bei Liu, Yanmin Qian
|
Build a SRE Challenge System: Lessons from VoxSRC 2022 and CNSRC 2022
|
Accepted by InterSpeech 2023
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many speaker recognition challenges have been held to assess speaker
verification systems in the wild and probe their performance limits. The VoxCeleb
Speaker Recognition Challenge (VoxSRC), based on VoxCeleb, is the most popular.
Besides, another challenge, the CN-Celeb Speaker Recognition Challenge (CNSRC),
based on the Chinese celebrity multi-genre dataset CN-Celeb, was also held this
year. Our team participated in the speaker verification closed tracks of both
CNSRC 2022 and VoxSRC 2022, and achieved 1st place and 3rd place, respectively.
In most system reports, the
authors usually only provide a description of their systems but lack an
effective analysis of their methods. In this paper, we will outline how to
build a strong speaker verification challenge system and give a detailed
analysis of each method compared with some other popular technical means.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 01:33:23 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 05:39:10 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Chen",
"Zhengyang",
""
],
[
"Han",
"Bing",
""
],
[
"Xiang",
"Xu",
""
],
[
"Huang",
"Houjun",
""
],
[
"Liu",
"Bei",
""
],
[
"Qian",
"Yanmin",
""
]
] |
new_dataset
| 0.999823 |
2212.00259
|
Zhuowan Li
|
Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei
Ma, Benjamin Van Durme, Alan Yuille
|
Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual
Reasoning
|
Published in CVPR 2023 as Highlight. Data and code are released at
https://github.com/Lizw14/Super-CLEVR
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Visual Question Answering (VQA) models often perform poorly on
out-of-distribution data and struggle with domain generalization. Due to the
multi-modal nature of this task, multiple factors of variation are intertwined,
making generalization difficult to analyze. This motivates us to introduce a
virtual benchmark, Super-CLEVR, where different factors in VQA domain shifts
can be isolated in order that their effects can be studied independently. Four
factors are considered: visual complexity, question redundancy, concept
distribution and concept compositionality. With controllably generated data,
Super-CLEVR enables us to test VQA methods in situations where the test data
differs from the training data along each of these axes. We study four existing
methods, including two neural symbolic methods NSCL and NSVQA, and two
non-symbolic methods FiLM and mDETR; and our proposed method, probabilistic
NSVQA (P-NSVQA), which extends NSVQA with uncertainty reasoning. P-NSVQA
outperforms other methods on three of the four domain shift factors. Our
results suggest that disentangling reasoning and perception, combined with
probabilistic uncertainty, form a strong VQA model that is more robust to
domain shifts. The dataset and code are released at
https://github.com/Lizw14/Super-CLEVR.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2022 03:53:24 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 03:57:12 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Li",
"Zhuowan",
""
],
[
"Wang",
"Xingrui",
""
],
[
"Stengel-Eskin",
"Elias",
""
],
[
"Kortylewski",
"Adam",
""
],
[
"Ma",
"Wufei",
""
],
[
"Van Durme",
"Benjamin",
""
],
[
"Yuille",
"Alan",
""
]
] |
new_dataset
| 0.998292 |
2212.07564
|
Florent Bonnet
|
Florent Bonnet, Ahmed Jocelyn Mazari, Paola Cinnella, Patrick
Gallinari
|
AirfRANS: High Fidelity Computational Fluid Dynamics Dataset for
Approximating Reynolds-Averaged Navier-Stokes Solutions
| null |
36th Conference on Neural Information Processing Systems (NeurIPS
2022) Track on Datasets and Benchmarks
| null | null |
cs.LG cs.CV physics.comp-ph physics.flu-dyn
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Surrogate models are necessary to optimize meaningful quantities in physical
dynamics as their recursive numerical resolutions are often prohibitively
expensive. It is mainly the case for fluid dynamics and the resolution of
Navier-Stokes equations. However, despite the fast-growing field of data-driven
models for physical systems, reference datasets representing real-world
phenomena are lacking. In this work, we develop AirfRANS, a dataset for
studying the two-dimensional incompressible steady-state Reynolds-Averaged
Navier-Stokes equations over airfoils at a subsonic regime and for different
angles of attacks. We also introduce metrics on the stress forces at the
surface of geometries and visualization of boundary layers to assess the
capabilities of models to accurately predict the meaningful information of the
problem. Finally, we propose deep learning baselines on four machine learning
tasks to study AirfRANS under different constraints for generalization
considerations: big and scarce data regime, Reynolds number, and angle of
attack extrapolation.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 00:41:09 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2023 20:01:25 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 14:52:42 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Bonnet",
"Florent",
""
],
[
"Mazari",
"Ahmed Jocelyn",
""
],
[
"Cinnella",
"Paola",
""
],
[
"Gallinari",
"Patrick",
""
]
] |
new_dataset
| 0.999817 |
2301.07773
|
Vincent Kurtz
|
Vince Kurtz and Hai Lin
|
Temporal Logic Motion Planning with Convex Optimization via Graphs of
Convex Sets
| null | null | null | null |
cs.RO cs.FL cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal logic is a concise way of specifying complex tasks. But motion
planning to achieve temporal logic specifications is difficult, and existing
methods struggle to scale to complex specifications and high-dimensional system
dynamics. In this paper, we cast Linear Temporal Logic (LTL) motion planning as
a shortest path problem in a Graph of Convex Sets (GCS) and solve it with
convex optimization. This approach brings together the best of modern
optimization-based temporal logic planners and older automata-theoretic
methods, addressing the limitations of each: we avoid clipping and passthrough
by representing paths with continuous Bezier curves; computational complexity
is polynomial (not exponential) in the number of sample points; global
optimality can be certified (though it is not guaranteed); soundness and
probabilistic completeness are guaranteed under mild assumptions; and most
importantly, the method scales to complex specifications and high-dimensional
systems, including a 30-DoF humanoid. Open-source code is available at
https://github.com/vincekurtz/ltl_gcs.
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 20:28:28 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 14:42:20 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Kurtz",
"Vince",
""
],
[
"Lin",
"Hai",
""
]
] |
new_dataset
| 0.989498 |
2302.09450
|
Zhongyu Li
|
Zhongyu Li, Xue Bin Peng, Pieter Abbeel, Sergey Levine, Glen Berseth,
Koushil Sreenath
|
Robust and Versatile Bipedal Jumping Control through Reinforcement
Learning
|
Accepted in Robotics: Science and Systems 2023 (RSS 2023). The
accompanying video is at https://youtu.be/aAPSZ2QFB-E
| null | null | null |
cs.RO cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work aims to push the limits of agility for bipedal robots by enabling a
torque-controlled bipedal robot to perform robust and versatile dynamic jumps
in the real world. We present a reinforcement learning framework for training a
robot to accomplish a large variety of jumping tasks, such as jumping to
different locations and directions. To improve performance on these challenging
tasks, we develop a new policy structure that encodes the robot's long-term
input/output (I/O) history while also providing direct access to a short-term
I/O history. In order to train a versatile jumping policy, we utilize a
multi-stage training scheme that includes different training stages for
different objectives. After multi-stage training, the policy can be directly
transferred to a real bipedal Cassie robot. Training on different tasks and
exploring more diverse scenarios lead to highly robust policies that can
exploit the diverse set of learned maneuvers to recover from perturbations or
poor landings during real-world deployment. Such robustness in the proposed
policy enables Cassie to succeed in completing a variety of challenging jump
tasks in the real world, such as standing long jumps, jumping onto elevated
platforms, and multi-axes jumps.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 01:06:09 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 03:03:22 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Li",
"Zhongyu",
""
],
[
"Peng",
"Xue Bin",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Levine",
"Sergey",
""
],
[
"Berseth",
"Glen",
""
],
[
"Sreenath",
"Koushil",
""
]
] |
new_dataset
| 0.97543 |
2302.12057
|
Maureen de Seyssel
|
Maureen de Seyssel, Marvin Lavechin, Hadrien Titeux, Arthur Thomas,
Gwendal Virlet, Andrea Santos Revilla, Guillaume Wisniewski, Bogdan Ludusan,
Emmanuel Dupoux
|
ProsAudit, a prosodic benchmark for self-supervised speech models
|
Accepted at Interspeech 2023. 4 pages + references, 1 figure
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present ProsAudit, a benchmark in English to assess structural prosodic
knowledge in self-supervised learning (SSL) speech models. It consists of two
subtasks, their corresponding metrics, and an evaluation dataset. In the
protosyntax task, the model must correctly identify strong versus weak prosodic
boundaries. In the lexical task, the model needs to correctly distinguish
between pauses inserted between words and within words. We also provide human
evaluation scores on this benchmark. We evaluated a series of SSL models and
found that they were all able to perform above chance on both tasks, even when
evaluated on an unseen language. However, non-native models performed
significantly worse than native ones on the lexical task, highlighting the
importance of lexical knowledge in this task. We also found a clear effect of
size, with models trained on more data performing better in the two subtasks.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 14:30:23 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 13:16:31 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 08:11:15 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"de Seyssel",
"Maureen",
""
],
[
"Lavechin",
"Marvin",
""
],
[
"Titeux",
"Hadrien",
""
],
[
"Thomas",
"Arthur",
""
],
[
"Virlet",
"Gwendal",
""
],
[
"Revilla",
"Andrea Santos",
""
],
[
"Wisniewski",
"Guillaume",
""
],
[
"Ludusan",
"Bogdan",
""
],
[
"Dupoux",
"Emmanuel",
""
]
] |
new_dataset
| 0.999776 |
2302.14030
|
Allen Chang
|
Allen Chang, Xiaoyuan Zhu, Aarav Monga, Seoho Ahn, Tejas Srinivasan,
Jesse Thomason
|
Multimodal Speech Recognition for Language-Guided Embodied Agents
|
5 pages, 5 figures, 24th ISCA Interspeech Conference (INTERSPEECH
2023)
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Benchmarks for language-guided embodied agents typically assume text-based
instructions, but deployed agents will encounter spoken instructions. While
Automatic Speech Recognition (ASR) models can bridge the input gap, erroneous
ASR transcripts can hurt the agents' ability to complete tasks. In this work,
we propose training a multimodal ASR model to reduce errors in transcribing
spoken instructions by considering the accompanying visual context. We train
our model on a dataset of spoken instructions, synthesized from the ALFRED task
completion dataset, where we simulate acoustic noise by systematically masking
spoken words. We find that utilizing visual observations facilitates masked
word recovery, with multimodal ASR models recovering up to 30% more masked
words than unimodal baselines. We also find that a text-trained embodied agent
successfully completes tasks more often by following transcribed instructions
from multimodal ASR models. github.com/Cylumn/embodied-multimodal-asr
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 18:41:48 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 21:02:09 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Chang",
"Allen",
""
],
[
"Zhu",
"Xiaoyuan",
""
],
[
"Monga",
"Aarav",
""
],
[
"Ahn",
"Seoho",
""
],
[
"Srinivasan",
"Tejas",
""
],
[
"Thomason",
"Jesse",
""
]
] |
new_dataset
| 0.999718 |
2303.01229
|
Cyril Zakka
|
Cyril Zakka, Akash Chaurasia, Rohan Shad, Alex R. Dalal, Jennifer L.
Kim, Michael Moor, Kevin Alexander, Euan Ashley, Jack Boyd, Kathleen Boyd,
Karen Hirsch, Curt Langlotz, Joanna Nelson, and William Hiesinger
|
Almanac: Retrieval-Augmented Language Models for Clinical Medicine
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Large-language models have recently demonstrated impressive zero-shot
capabilities in a variety of natural language tasks such as summarization,
dialogue generation, and question-answering. Despite many promising
applications in clinical medicine, adoption of these models in real-world
settings has been largely limited by their tendency to generate incorrect and
sometimes even toxic statements. In this study, we develop Almanac, a large
language model framework augmented with retrieval capabilities for medical
guideline and treatment recommendations. Performance on a novel dataset of
clinical scenarios (n = 130) evaluated by a panel of 5 board-certified and
resident physicians demonstrates significant increases in factuality (mean of
18% at p-value < 0.05) across all specialties, with improvements in
completeness and safety. Our results demonstrate the potential for large
language models to be effective tools in the clinical decision-making process,
while also emphasizing the importance of careful testing and deployment to
mitigate their shortcomings.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 02:30:11 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 21:17:13 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Zakka",
"Cyril",
""
],
[
"Chaurasia",
"Akash",
""
],
[
"Shad",
"Rohan",
""
],
[
"Dalal",
"Alex R.",
""
],
[
"Kim",
"Jennifer L.",
""
],
[
"Moor",
"Michael",
""
],
[
"Alexander",
"Kevin",
""
],
[
"Ashley",
"Euan",
""
],
[
"Boyd",
"Jack",
""
],
[
"Boyd",
"Kathleen",
""
],
[
"Hirsch",
"Karen",
""
],
[
"Langlotz",
"Curt",
""
],
[
"Nelson",
"Joanna",
""
],
[
"Hiesinger",
"William",
""
]
] |
new_dataset
| 0.999666 |
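The retrieval-augmentation loop described above follows a generic pattern: embed guideline passages, retrieve the ones most similar to a query, and condition the language model on them. Both `embed` and `llm` below are placeholders, and the prompt wording is invented; the concrete models and retrieval stack used by Almanac are not reproduced here.

```python
import numpy as np

def build_index(passages, embed):
    """Embed guideline passages once; `embed` maps list[str] -> (n, d) array."""
    vecs = embed(passages)
    return passages, vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def retrieve(query, index, embed, k=3):
    passages, vecs = index
    q = embed([query])[0]
    q = q / np.linalg.norm(q)
    top = np.argsort(vecs @ q)[::-1][:k]
    return [passages[i] for i in top]

def answer(query, index, embed, llm):
    """Condition a placeholder LLM on the retrieved guideline excerpts."""
    context = "\n\n".join(retrieve(query, index, embed))
    prompt = ("Answer the clinical question using only the excerpts below.\n\n"
              f"Excerpts:\n{context}\n\nQuestion: {query}\nAnswer:")
    return llm(prompt)
```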
2303.12789
|
Ayaan Haque
|
Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski,
Angjoo Kanazawa
|
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
|
Project website: https://instruct-nerf2nerf.github.io; v1. Revisions
to related work and discussion
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for editing NeRF scenes with text-instructions. Given a
NeRF of a scene and the collection of images used to reconstruct it, our method
uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit
the input images while optimizing the underlying scene, resulting in an
optimized 3D scene that respects the edit instruction. We demonstrate that our
proposed method is able to edit large-scale, real-world scenes, and is able to
accomplish more realistic, targeted edits than prior work.
|
[
{
"version": "v1",
"created": "Wed, 22 Mar 2023 17:57:57 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 17:17:38 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Haque",
"Ayaan",
""
],
[
"Tancik",
"Matthew",
""
],
[
"Efros",
"Alexei A.",
""
],
[
"Holynski",
"Aleksander",
""
],
[
"Kanazawa",
"Angjoo",
""
]
] |
new_dataset
| 0.996131 |
2304.12308
|
Jiazhong Cen
|
Jiazhong Cen, Zanwei Zhou, Jiemin Fang, Chen Yang, Wei Shen, Lingxi
Xie, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian
|
Segment Anything in 3D with NeRFs
|
Work in progress. Project page: https://jumpat.github.io/SA3D/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the Segment Anything Model (SAM) emerged as a powerful vision
foundation model that is capable of segmenting anything in 2D images. This paper
aims to generalize SAM to segment 3D objects. Rather than replicating the data
acquisition and annotation procedure which is costly in 3D, we design an
efficient solution, leveraging the Neural Radiance Field (NeRF) as a cheap and
off-the-shelf prior that connects multi-view 2D images to the 3D space. We
refer to the proposed solution as SA3D, for Segment Anything in 3D. It is only
required to provide a manual segmentation prompt (e.g., rough points) for the
target object in a single view, which is used to generate its 2D mask in this
view with SAM. Next, SA3D alternately performs mask inverse rendering and
cross-view self-prompting across various views to iteratively complete the 3D
mask of the target object constructed with voxel grids. The former projects the
2D mask obtained by SAM in the current view onto a 3D mask with guidance from the
density distribution learned by the NeRF; the latter extracts reliable prompts
automatically as input to SAM from the NeRF-rendered 2D mask in another
view. We show in experiments that SA3D adapts to various scenes and achieves 3D
segmentation within minutes. Our research offers a generic and efficient
methodology to lift a 2D vision foundation model to 3D, as long as the 2D model
can steadily address promptable segmentation across multiple views. The project
page is at https://jumpat.github.io/SA3D/.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 17:57:15 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Apr 2023 05:47:32 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 13:58:46 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Cen",
"Jiazhong",
""
],
[
"Zhou",
"Zanwei",
""
],
[
"Fang",
"Jiemin",
""
],
[
"Yang",
"Chen",
""
],
[
"Shen",
"Wei",
""
],
[
"Xie",
"Lingxi",
""
],
[
"Jiang",
"Dongsheng",
""
],
[
"Zhang",
"Xiaopeng",
""
],
[
"Tian",
"Qi",
""
]
] |
new_dataset
| 0.99164 |
2305.15878
|
Bruce W. Lee
|
Bruce W. Lee, Jason Hyung-Jong Lee
|
LFTK: Handcrafted Features in Computational Linguistics
|
BEA @ ACL 2023
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Past research has identified a rich set of handcrafted linguistic features
that can potentially assist various tasks. However, their extensive number
makes it difficult to effectively select and utilize existing handcrafted
features. Coupled with the problem of inconsistent implementations across
research works, there is neither a categorization scheme nor a set of generally
accepted feature names. This creates unwanted confusion. Also, most existing handcrafted
feature extraction libraries are not open-source or not actively maintained. As
a result, a researcher often has to build such an extraction system from the
ground up.
We collect and categorize more than 220 popular handcrafted features grounded
on past literature. Then, we conduct a correlation analysis study on several
task-specific datasets and report the potential use cases of each feature.
Lastly, we devise a multilingual handcrafted linguistic feature extraction
system in a systematically expandable manner. We open-source our system for
public access to a rich set of pre-implemented handcrafted features. Our system
is coined LFTK and is the largest of its kind. Find it at
github.com/brucewlee/lftk.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 09:20:27 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 17:42:21 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Lee",
"Bruce W.",
""
],
[
"Lee",
"Jason Hyung-Jong",
""
]
] |
new_dataset
| 0.993679 |
2305.17497
|
Zhuang Li
|
Zhuang Li, Yuyang Chai, Terry Yue Zhuo, Lizhen Qu, Gholamreza Haffari,
Fei Li, Donghong Ji, Quan Hung Tran
|
FACTUAL: A Benchmark for Faithful and Consistent Textual Scene Graph
Parsing
|
9 pages, ACL 2023 (findings)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Textual scene graph parsing has become increasingly important in various
vision-language applications, including image caption evaluation and image
retrieval. However, existing scene graph parsers that convert image captions
into scene graphs often suffer from two types of errors. First, the generated
scene graphs fail to capture the true semantics of the captions or the
corresponding images, resulting in a lack of faithfulness. Second, the
generated scene graphs have high inconsistency, with the same semantics
represented by different annotations.
To address these challenges, we propose a novel dataset, which involves
re-annotating the captions in Visual Genome (VG) using a new intermediate
representation called FACTUAL-MR. FACTUAL-MR can be directly converted into
faithful and consistent scene graph annotations. Our experimental results
clearly demonstrate that the parser trained on our dataset outperforms existing
approaches in terms of faithfulness and consistency. This improvement leads to
a significant performance boost in both image caption evaluation and zero-shot
image retrieval tasks. Furthermore, we introduce a novel metric for measuring
scene graph similarity, which, when combined with the improved scene graph
parser, achieves state-of-the-art (SOTA) results on multiple benchmark datasets
for the aforementioned tasks. The code and dataset are available at
https://github.com/zhuang-li/FACTUAL.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 15:38:31 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 04:56:26 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Li",
"Zhuang",
""
],
[
"Chai",
"Yuyang",
""
],
[
"Zhuo",
"Terry Yue",
""
],
[
"Qu",
"Lizhen",
""
],
[
"Haffari",
"Gholamreza",
""
],
[
"Li",
"Fei",
""
],
[
"Ji",
"Donghong",
""
],
[
"Tran",
"Quan Hung",
""
]
] |
new_dataset
| 0.988851 |
2305.17547
|
Eliya Nachmani
|
Eliya Nachmani, Alon Levkovitch, Yifan Ding, Chulayuth Asawaroengchai,
Heiga Zen, Michelle Tadmor Ramanovich
|
Translatotron 3: Speech to Speech Translation with Monolingual Data
| null | null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents Translatotron 3, a novel approach to train a direct
speech-to-speech translation model from monolingual speech-text datasets only
in a fully unsupervised manner. Translatotron 3 combines masked autoencoder,
unsupervised embedding mapping, and back-translation to achieve this goal.
Experimental results in speech-to-speech translation tasks between Spanish and
English show that Translatotron 3 outperforms a baseline cascade system,
reporting 18.14 BLEU points improvement on the synthesized
Unpaired-Conversational dataset. In contrast to supervised approaches that
necessitate real paired data, which is unavailable, or specialized modeling to
replicate para-/non-linguistic information, Translatotron 3 showcases its
capability to retain para-/non-linguistic information such as pauses, speaking
rates, and speaker identity. Audio samples can be found on our website at
http://google-research.github.io/lingvo-lab/translatotron3
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 18:30:54 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 08:01:16 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Nachmani",
"Eliya",
""
],
[
"Levkovitch",
"Alon",
""
],
[
"Ding",
"Yifan",
""
],
[
"Asawaroengchai",
"Chulayuth",
""
],
[
"Zen",
"Heiga",
""
],
[
"Ramanovich",
"Michelle Tadmor",
""
]
] |
new_dataset
| 0.99945 |
2305.19683
|
Manuel De Stefano
|
Manuel De Stefano, Fabiano Pecorelli, Dario Di Nucci, Fabio Palomba,
Andrea De Lucia
|
The Quantum Frontier of Software Engineering: A Systematic Mapping Study
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Context. Quantum computing is becoming a reality, and quantum software
engineering (QSE) is emerging as a new discipline to enable developers to
design and develop quantum programs.
Objective. This paper presents a systematic mapping study of the current
state of QSE research, aiming to identify the most investigated topics, the
types and number of studies, the main reported results, and the most studied
quantum computing tools/frameworks. Additionally, the study aims to explore the
research community's interest in QSE, how it has evolved, and any prior
contributions to the discipline before its formal introduction through the
Talavera Manifesto.
Method. We searched for relevant articles in several databases and applied
inclusion and exclusion criteria to select the most relevant studies. After
evaluating the quality of the selected resources, we extracted relevant data
from the primary studies and analyzed them.
Results. We found that QSE research has primarily focused on software
testing, with little attention given to other topics, such as software
engineering management. The most commonly studied technology for techniques and
tools is Qiskit, although, in most studies, either multiple or no specific
technologies were employed. The researchers most interested in QSE are
interconnected through direct collaborations, and several strong collaboration
clusters have been identified. Most articles in QSE have been published in
non-thematic venues, with a preference for conferences.
Conclusions. The study's implications include providing a centralized source of
information for researchers and practitioners in the field, facilitating
knowledge transfer, and contributing to the advancement and growth of QSE.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 09:26:10 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 07:28:59 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"De Stefano",
"Manuel",
""
],
[
"Pecorelli",
"Fabiano",
""
],
[
"Di Nucci",
"Dario",
""
],
[
"Palomba",
"Fabio",
""
],
[
"De Lucia",
"Andrea",
""
]
] |
new_dataset
| 0.998964 |
2306.00020
|
Jonathan Roberts
|
Jonathan Roberts, Timo L\"uddecke, Sowmen Das, Kai Han, Samuel Albanie
|
GPT4GEO: How a Language Model Sees the World's Geography
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have shown remarkable capabilities across a
broad range of tasks involving question answering and the generation of
coherent text and code. Comprehensively understanding the strengths and
weaknesses of LLMs is beneficial for safety, downstream applications and
improving performance. In this work, we investigate the degree to which GPT-4
has acquired factual geographic knowledge and is capable of using this
knowledge for interpretative reasoning, which is especially important for
applications that involve geographic data, such as geospatial analysis, supply
chain management, and disaster response. To this end, we design and conduct a
series of diverse experiments, starting from factual tasks such as location,
distance and elevation estimation to more complex questions such as generating
country outlines and travel networks, route finding under constraints and
supply chain analysis. We provide a broad characterisation of what GPT-4
(without plugins or Internet access) knows about the world, highlighting both
potentially surprising capabilities and limitations.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 18:28:04 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Roberts",
"Jonathan",
""
],
[
"Lüddecke",
"Timo",
""
],
[
"Das",
"Sowmen",
""
],
[
"Han",
"Kai",
""
],
[
"Albanie",
"Samuel",
""
]
] |
new_dataset
| 0.998699 |
2306.00029
|
Nghi D. Q. Bui
|
Nghi D. Q. Bui, Hung Le, Yue Wang, Junnan Li, Akhilesh Deepak Gotmare,
Steven C. H. Hoi
|
CodeTF: One-stop Transformer Library for State-of-the-art Code LLM
|
Ongoing work - Draft Preview
| null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code intelligence plays a key role in transforming modern software
engineering. Recently, deep learning-based models, especially Transformer-based
large language models (LLMs), have demonstrated remarkable potential in
tackling these tasks by leveraging massive open-source code data and
programming language features. However, the development and deployment of such
models often require expertise in both machine learning and software
engineering, creating a barrier to model adoption. In this paper, we
present CodeTF, an open-source Transformer-based library for state-of-the-art
Code LLMs and code intelligence. Following the principles of modular design and
extensible framework, we design CodeTF with a unified interface to enable rapid
access and development across different types of models, datasets and tasks.
Our library supports a collection of pretrained Code LLM models and popular
code benchmarks, including a standardized interface to train and serve code
LLMs efficiently, and data features such as language-specific parsers and
utility functions for extracting code attributes. In this paper, we describe
the design principles, the architecture, key modules and components, and
compare with other related library tools. Finally, we hope CodeTF is able to
bridge the gap between machine learning/generative AI and software engineering,
providing a comprehensive open-source solution for developers, researchers, and
practitioners.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 05:24:48 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Bui",
"Nghi D. Q.",
""
],
[
"Le",
"Hung",
""
],
[
"Wang",
"Yue",
""
],
[
"Li",
"Junnan",
""
],
[
"Gotmare",
"Akhilesh Deepak",
""
],
[
"Hoi",
"Steven C. H.",
""
]
] |
new_dataset
| 0.998532 |
2306.00075
|
Duo Lu
|
Duo Lu, Eric Eaton, Matt Weg, Wei Wang, Steven Como, Jeffrey Wishart,
Hongbin Yu, Yezhou Yang
|
CAROM Air -- Vehicle Localization and Traffic Scene Reconstruction from
Aerial Videos
|
Accepted to IEEE ICRA 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Road traffic scene reconstruction from videos has long been desired by road
safety regulators, city planners, researchers, and autonomous driving
technology developers. However, it is expensive and unnecessary to cover every
mile of the road with cameras mounted on the road infrastructure. This paper
presents a method that can process aerial videos into vehicle trajectory data so
that a traffic scene can be automatically reconstructed and accurately
re-simulated using computers. On average, the vehicle localization error is
about 0.1 m to 0.3 m using a consumer-grade drone flying at 120 meters. This
project also compiles a dataset of 50 reconstructed road traffic scenes from
about 100 hours of aerial videos to enable various downstream traffic analysis
applications and facilitate further road traffic related research. The dataset
is available at https://github.com/duolu/CAROM.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 18:00:17 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Lu",
"Duo",
""
],
[
"Eaton",
"Eric",
""
],
[
"Weg",
"Matt",
""
],
[
"Wang",
"Wei",
""
],
[
"Como",
"Steven",
""
],
[
"Wishart",
"Jeffrey",
""
],
[
"Yu",
"Hongbin",
""
],
[
"Yang",
"Yezhou",
""
]
] |
new_dataset
| 0.972449 |
2306.00095
|
Cliff Zou
|
Roy Laurens, Edo Christianto, Bruce Caulkins, Cliff C. Zou
|
Side-Channel VoIP Profiling Attack against Customer Service Automated
Phone System
|
6 pages, 12 figures. Published in IEEE Global Communications
Conference (GLOBECOM), 2022
|
2022 IEEE Global Communications Conference, Rio de Janeiro,
Brazil, 2022, pp. 6091-6096
|
10.1109/GLOBECOM48099.2022.10001537
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In many VoIP systems, Voice Activity Detection (VAD) is often used on VoIP
traffic to suppress packets of silence in order to reduce the bandwidth
consumption of phone calls. Unfortunately, although VoIP traffic is fully
encrypted and secured, traffic analysis of this suppression can reveal
identifying information about calls made to customer service automated phone
systems. Because different customer service phone systems have distinct, but
fixed (pre-recorded) automated voice messages sent to customers, VAD silence
suppression used in VoIP will enable an eavesdropper to profile and identify
these automated voice messages. In this paper, we will use a popular enterprise
VoIP system (Cisco CallManager), running the default Session Initiation
Protocol (SIP) protocol, to demonstrate that an attacker can reliably use the
silence suppression to profile calls to such VoIP systems. Our real-world
experiments demonstrate that this side-channel profiling attack can be used to
accurately identify not only what customer service phone number a customer
calls, but also what following options are subsequently chosen by the caller in
the phone conversation.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 18:14:38 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Laurens",
"Roy",
""
],
[
"Christianto",
"Edo",
""
],
[
"Caulkins",
"Bruce",
""
],
[
"Zou",
"Cliff C.",
""
]
] |
new_dataset
| 0.973004 |
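
The record above (2306.00095) profiles encrypted VoIP calls purely from the
packet-timing side channel created by VAD silence suppression. As a rough,
generic illustration of that idea, and not the authors' implementation, the
Python sketch below turns packet arrival times into talk-burst durations and
matches them against reference prompt profiles; the timestamps, the 0.3 s gap
threshold, and the profiles are invented for demonstration.

# Generic illustration of timing-based IVR profiling, not the paper's implementation.
# Packet timestamps, the 0.3 s gap threshold, and the reference profiles are hypothetical.
from typing import Dict, List

def burst_profile(packet_times: List[float], gap_threshold: float = 0.3) -> List[float]:
    # Convert RTP packet arrival times into talk-burst durations; a gap longer
    # than gap_threshold seconds is treated as VAD-suppressed silence.
    bursts, start = [], packet_times[0]
    for prev, cur in zip(packet_times, packet_times[1:]):
        if cur - prev > gap_threshold:
            bursts.append(prev - start)   # close the current talk burst
            start = cur                   # the next burst begins here
    bursts.append(packet_times[-1] - start)
    return bursts

def match(observed: List[float], references: Dict[str, List[float]]) -> str:
    # Return the reference prompt whose burst-duration profile is closest (L1 distance).
    def dist(a: List[float], b: List[float]) -> float:
        n = min(len(a), len(b))
        return sum(abs(x - y) for x, y in zip(a[:n], b[:n])) + abs(len(a) - len(b))
    return min(references, key=lambda name: dist(observed, references[name]))

# Hypothetical burst-duration profiles (seconds) of two automated phone menus.
refs = {"bank_hotline": [4.1, 2.0, 6.3], "airline_hotline": [2.2, 5.0, 3.1]}
print(match([4.0, 2.1, 6.2], refs))  # -> bank_hotline
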
2306.00110
|
Peiling Lu
|
Peiling Lu, Xin Xu, Chenfei Kang, Botao Yu, Chengyi Xing, Xu Tan,
Jiang Bian
|
MuseCoco: Generating Symbolic Music from Text
| null | null | null | null |
cs.SD cs.AI cs.CL cs.LG cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating music from text descriptions is a user-friendly mode since the
text is a relatively easy interface for user engagement. While some approaches
utilize texts to control music audio generation, editing musical elements in
generated audio is challenging for users. In contrast, symbolic music offers
ease of editing, making it more accessible for users to manipulate specific
musical elements. In this paper, we propose MuseCoco, which generates symbolic
music from text descriptions with musical attributes as the bridge to break
down the task into text-to-attribute understanding and attribute-to-music
generation stages. MuseCoco stands for Music Composition Copilot, which empowers
musicians to generate music directly from given text descriptions, offering a
significant improvement in efficiency compared to creating music entirely from
scratch. The system has two main advantages: Firstly, it is data efficient. In
the attribute-to-music generation stage, the attributes can be directly
extracted from music sequences, making the model training self-supervised. In
the text-to-attribute understanding stage, the text is synthesized and refined
by ChatGPT based on the defined attribute templates. Secondly, the system can
achieve precise control with specific attributes in text descriptions and
offers multiple control options through attribute-conditioned or
text-conditioned approaches. MuseCoco outperforms baseline systems in terms of
musicality, controllability, and overall score by at least 1.27, 1.08, and 1.32
respectively. Besides, there is a notable enhancement of about 20% in objective
control accuracy. In addition, we have developed a robust large-scale model
with 1.2 billion parameters, showcasing exceptional controllability and
musicality.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 18:34:16 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Lu",
"Peiling",
""
],
[
"Xu",
"Xin",
""
],
[
"Kang",
"Chenfei",
""
],
[
"Yu",
"Botao",
""
],
[
"Xing",
"Chengyi",
""
],
[
"Tan",
"Xu",
""
],
[
"Bian",
"Jiang",
""
]
] |
new_dataset
| 0.999846 |
2306.00121
|
Huiyuan Lai
|
Huiyuan Lai, Antonio Toral, Malvina Nissim
|
Multilingual Multi-Figurative Language Detection
|
Accepted to ACL 2023 (Findings)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Figures of speech help people express abstract concepts and evoke stronger
emotions than literal expressions, thereby making texts more creative and
engaging. Due to its pervasive and fundamental character, figurative language
understanding has been addressed in Natural Language Processing, but it is
highly understudied in a multilingual setting and when considering more than
one figure of speech at the same time. To bridge this gap, we introduce
multilingual multi-figurative language modelling, and provide a benchmark for
sentence-level figurative language detection, covering three common figures of
speech and seven languages. Specifically, we develop a framework for figurative
language detection based on template-based prompt learning. In so doing, we
unify multiple detection tasks that are interrelated across multiple figures of
speech and languages, without requiring task- or language-specific modules.
Experimental results show that our framework outperforms several strong
baselines and may serve as a blueprint for the joint modelling of other
interrelated tasks.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 18:52:41 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Lai",
"Huiyuan",
""
],
[
"Toral",
"Antonio",
""
],
[
"Nissim",
"Malvina",
""
]
] |
new_dataset
| 0.999596 |
2306.00179
|
Chenghao Wang
|
Chenghao Wang
|
LeggedWalking on Inclined Surfaces
|
Masters thesis
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The main contribution of this MS Thesis is centered around taking steps
towards successful multi-modal demonstrations using Northeastern's
legged-aerial robot, Husky Carbon. This work discusses the challenges involved
in achieving multi-modal locomotion such as trotting-hovering and
thruster-assisted incline walking and reports progress made towards overcoming
these challenges. Animals like birds use a combination of legged and aerial
mobility, as seen in Chukars' wing-assisted incline running (WAIR), to achieve
multi-modal locomotion. Chukars use forces generated by their flapping wings to
manipulate ground contact forces and traverse steep slopes and overhangs.
Husky's design takes inspiration from birds such as Chukars. This MS thesis
presentation outlines the mechanical and electrical details of Husky's legged
and aerial units. The thesis presents simulated incline walking using a
high-fidelity model of the Husky Carbon over steep slopes of up to 45 degrees.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 20:58:23 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Wang",
"Chenghao",
""
]
] |
new_dataset
| 0.95406 |
2306.00223
|
Levent Guvenc
|
Mustafa Ridvan Cantas, Levent Guvenc
|
Customized Co-Simulation Environment for Autonomous Driving Algorithm
Development and Evaluation
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Increasing the implemented SAE level of autonomy in road vehicles requires
extensive simulations and verifications in a realistic simulation environment
before proving ground and public road testing. The level of detail in the
simulation environment helps ensure the safety of a real-world implementation
and reduces algorithm development cost by allowing developers to complete most
of the validation in the simulation environment. Considering sensors like
camera, LIDAR, radar, and V2X used in autonomous vehicles, it is essential to
create a simulation environment that can provide these sensor simulations as
realistically as possible. While sensor simulations are of crucial importance
for perception algorithm development, the simulation environment will be
incomplete for the simulation of holistic AV operation without being
complemented by a realistic vehicle dynamics model and traffic co-simulation.
Therefore, this paper investigates existing simulation environments, identifies
use case scenarios, and creates a cosimulation environment to satisfy the
simulation requirements for autonomous driving function development using the
Carla simulator based on the Unreal game engine for the environment, Sumo or
Vissim for traffic co-simulation, Carsim or Matlab/Simulink for vehicle
dynamics co-simulation, and Autoware or author/user routines for
autonomous driving algorithm co-simulation. As a result of this work, a
model-based vehicle dynamics simulation with realistic sensor simulation and
traffic simulation is presented. A sensor fusion methodology is implemented in
the created simulation environment as a use case scenario. The results of this
work will be a valuable resource for researchers who need a comprehensive
co-simulation environment to develop connected and autonomous driving
algorithms.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 22:38:00 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Cantas",
"Mustafa Ridvan",
""
],
[
"Guvenc",
"Levent",
""
]
] |
new_dataset
| 0.982824 |
2306.00226
|
Kelly Blincoe
|
Kelly Blincoe, Markus Luczak-Roesch, Tim Miller, Matthias Galster
|
Human-centric Literature on Trust for SfTI Veracity Spearhead
| null | null | null | null |
cs.CY cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This article summarizes the literature on trust of digital technologies from
a human-centric perspective. We summarize literature on trust in face-to-face
interactions from other fields, followed by a discussion of organizational
trust, technology-mediated trust, trust of software products, trust of AI, and
blockchain. This report was created for the Science for Technological
Innovation Veracity Spearhead supported by New Zealand's National Science
Challenges.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 22:46:44 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Blincoe",
"Kelly",
""
],
[
"Luczak-Roesch",
"Markus",
""
],
[
"Miller",
"Tim",
""
],
[
"Galster",
"Matthias",
""
]
] |
new_dataset
| 0.958496 |
2306.00231
|
Andre Wyzykowski
|
Andre Brasil Vieira Wyzykowski, Anil K. Jain
|
A Universal Latent Fingerprint Enhancer Using Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Forensic science heavily relies on analyzing latent fingerprints, which are
crucial for criminal investigations. However, various challenges, such as
background noise, overlapping prints, and contamination, make the
identification process difficult. Moreover, limited access to real crime scene
and laboratory-generated databases hinders the development of efficient
recognition algorithms. This study aims to develop a fast method, which we call
ULPrint, to enhance various latent fingerprint types, including those obtained
from real crime scenes and laboratory-created samples, to boost fingerprint
recognition system performance. In closed-set identification accuracy
experiments, the enhanced image was able to improve the performance of the
MSU-AFIS from 61.56\% to 75.19\% in the NIST SD27 database, from 67.63\% to
77.02\% in the MSP Latent database, and from 46.90\% to 52.12\% in the NIST
SD302 database. Our contributions include (1) the development of a two-step
latent fingerprint enhancement method that combines Ridge Segmentation with
UNet and Mix Visual Transformer (MiT) SegFormer-B5 encoder architecture, (2)
the implementation of multiple dilated convolutions in the UNet architecture to
capture intricate, non-local patterns better and enhance ridge segmentation,
and (3) the guided blending of the predicted ridge mask with the latent
fingerprint. This novel approach, ULPrint, streamlines the enhancement process,
addressing challenges across diverse latent fingerprint types to improve
forensic investigations and criminal justice outcomes.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 23:01:11 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Wyzykowski",
"Andre Brasil Vieira",
""
],
[
"Jain",
"Anil K.",
""
]
] |
new_dataset
| 0.965418 |
2306.00246
|
Cohen Archbold
|
Cohen Archbold, Benjamin Brodie, Aram Ansary Ogholbake, Nathan Jacobs
|
Fine-Grained Property Value Assessment using Probabilistic
Disaggregation
|
4 pages, 1 figure, Accepted to IGARSS 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The monetary value of a given piece of real estate, a parcel, is often
readily available from a geographic information system. However, for many
applications, such as insurance and urban planning, it is useful to have
estimates of property value at much higher spatial resolutions. We propose a
method to estimate the distribution over property value at the pixel level from
remote sensing imagery. We evaluate on a real-world dataset of a major urban
area. Our results show that the proposed approaches are capable of generating
fine-level estimates of property values, significantly improving upon a diverse
collection of baseline approaches.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 23:40:47 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Archbold",
"Cohen",
""
],
[
"Brodie",
"Benjamin",
""
],
[
"Ogholbake",
"Aram Ansary",
""
],
[
"Jacobs",
"Nathan",
""
]
] |
new_dataset
| 0.999439 |
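
The record above (2306.00246) estimates pixel-level property values from
parcel-level totals. As a generic sketch of what probabilistic disaggregation
means in this setting, and not the paper's actual model, the snippet below
spreads a known parcel value over its pixels according to predicted per-pixel
scores; the parcel value and the random scores are stand-ins for real model
outputs.

# Generic sketch of probabilistic disaggregation, not the paper's model:
# a known parcel-level value is distributed over pixels via a softmax of
# per-pixel scores. The parcel value and scores below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
parcel_value = 750_000.0                 # known value of the whole parcel (hypothetical)
pixel_scores = rng.normal(size=(8, 8))   # scores a remote-sensing model might predict

weights = np.exp(pixel_scores - pixel_scores.max())
weights /= weights.sum()                 # softmax over all pixels of the parcel

pixel_values = parcel_value * weights    # fine-grained estimates summing to the parcel total
assert np.isclose(pixel_values.sum(), parcel_value)
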
2306.00285
|
Youcef Maouche
|
Maouche Youcef
|
Linear codes with arbitrary dimensional hull and pure LCD code
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a general construction of linear codes with a
small-dimension hull from any non-LCD code. Furthermore, we show that for any
linear code $\mathcal{C}$ over $\mathbb{F}_q$ ($q > 3$) with
$\dim(\mathrm{Hull}(\mathcal{C}))=h$, there exists an equivalent code
$\mathcal{C}_j$ with $\dim(\mathrm{Hull}(\mathcal{C}_j))=j$ for any integer
$0\leq j \leq h$. We also introduce the notion of a pure LCD code, i.e., an LCD
code all of whose equivalent codes are also LCD, and construct an infinite
family of pure LCD codes. In addition, we introduce a general construction of
linear codes with a one-dimensional hull.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 02:04:55 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Youcef",
"Maouche",
""
]
] |
new_dataset
| 0.999305 |
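
The abstract in the record above (2306.00285) relies on two standard notions,
the hull of a linear code and the LCD property; their textbook definitions are
restated below in LaTeX as background, not as results of the paper.

% Background definitions for the abstract above; standard facts, not the paper's results.
% Let $\mathcal{C} \subseteq \mathbb{F}_q^n$ be a linear code with dual code
% $\mathcal{C}^{\perp} = \{\, x \in \mathbb{F}_q^n : \langle x, c \rangle = 0 \ \text{for all } c \in \mathcal{C} \,\}$.
\[
  \operatorname{Hull}(\mathcal{C}) \;=\; \mathcal{C} \cap \mathcal{C}^{\perp},
\]
% and $\mathcal{C}$ is a linear complementary dual (LCD) code exactly when its hull is trivial:
\[
  \mathcal{C} \ \text{is LCD} \iff \operatorname{Hull}(\mathcal{C}) = \{0\}.
\]
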
2306.00379
|
Happy Mittal
|
Anant Khandelwal, Happy Mittal, Shreyas Sunil Kulkarni, Deepak Gupta
|
Large Scale Generative Multimodal Attribute Extraction for E-commerce
Attributes
|
ACL 2023 Industry Track, 8 Pages
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
E-commerce websites (e.g. Amazon) have a plethora of structured and
unstructured information (text and images) present on the product pages.
Sellers often either don't label or mislabel values of the attributes (e.g.
color, size etc.) for their products. Automatically identifying these attribute
values from an eCommerce product page that contains both text and images is a
challenging task, especially when the attribute value is not explicitly
mentioned in the catalog. In this paper, we present a scalable solution for
this problem where we pose attribute extraction problem as a question-answering
task, which we solve using \textbf{MXT}, consisting of three key components:
(i) \textbf{M}AG (Multimodal Adaptation Gate), (ii) \textbf{X}ception network,
and (iii) \textbf{T}5 encoder-decoder. Our system consists of a generative
model that \emph{generates} attribute-values for a given product by using both
textual and visual characteristics (e.g. images) of the product. We show that
our system is capable of handling zero-shot attribute prediction (when
attribute value is not seen in training data) and value-absent prediction (when
attribute value is not mentioned in the text) which are missing in traditional
classification-based and NER-based models respectively. We have trained our
models using distant supervision, removing dependency on human labeling, thus
making them practical for real-world applications. With this framework, we are
able to train a single model for 1000s of (product-type, attribute) pairs, thus
reducing the overhead of training and maintaining separate models. Extensive
experiments on two real world datasets show that our framework improves the
absolute recall@90P by 10.16\% and 6.9\% from the existing state of the art
models. In a popular e-commerce store, we have deployed our models for 1000s of
(product-type, attribute) pairs.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 06:21:45 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Khandelwal",
"Anant",
""
],
[
"Mittal",
"Happy",
""
],
[
"Kulkarni",
"Shreyas Sunil",
""
],
[
"Gupta",
"Deepak",
""
]
] |
new_dataset
| 0.965121 |
2306.00381
|
Jinman Zhao
|
Hengzhi Pei, Jinman Zhao, Leonard Lausen, Sheng Zha, George Karypis
|
Better Context Makes Better Code Language Models: A Case Study on
Function Call Argument Completion
|
12 pages. Accepted to AAAI 2023
| null | null | null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pretrained code language models have enabled great progress towards program
synthesis. However, common approaches only consider in-file local context and
thus miss information and constraints imposed by other parts of the codebase
and its external dependencies. Existing code completion benchmarks also lack
such context. To resolve these restrictions, we curate a new dataset of
permissively licensed Python packages that includes full projects and their
dependencies and provide tools to extract non-local information with the help
of program analyzers. We then focus on the task of function call argument
completion which requires predicting the arguments to function calls. We show
that existing code completion models do not yield good results on our
completion task. To better solve this task, we query a program analyzer for
information relevant to a given function call, and consider ways to provide the
analyzer results to different code completion models during inference and
training. Our experiments show that providing access to the function
implementation and function usages greatly improves the argument completion
performance. Our ablation study provides further insights on how different
types of information available from the program analyzer and different ways of
incorporating the information affect the model performance.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 06:25:58 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Pei",
"Hengzhi",
""
],
[
"Zhao",
"Jinman",
""
],
[
"Lausen",
"Leonard",
""
],
[
"Zha",
"Sheng",
""
],
[
"Karypis",
"George",
""
]
] |
new_dataset
| 0.99921 |
2306.00395
|
Muhammad Shoaib Farooq
|
Muhammad Shoaib Farooq, Sawera Kanwal
|
Traffic Road Congestion System using by the internet of vehicles (IoV)
|
pages 16, figures 9
| null | null | null |
cs.NI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Traffic problems have increased in modern life due to the huge number of
vehicles, the growth of big cities, and the disregard of traffic rules. The
vehicular ad hoc network (VANET) has improved the traffic system in recent years
and plays a vital role in traffic control in big cities, but due to some
limitations it is not enough to handle certain problems under specific
conditions. Nowadays, newly invented Internet of Things (IoT) technologies are
used to perform tasks collaboratively and efficiently. This technology has also
been introduced into the transportation system, turning it into an intelligent
transportation system (ITS); this is called the Internet of Vehicles (IoV). We
elaborate on the traffic problems of the traditional system and on the benefits,
enhancements, and reasons that make IoV the better option, by means of a
Systematic Literature Review (SLR). The review targets the relevant papers
through many search phrases and covers 121 articles published between 2014 and
2023. IoV technologies and tools are required to create the IoV and to address
traffic-rule issues through SUMO (Simulation of Urban MObility), which is used
for the design and simulation of road traffic. We have tried to contribute to
the best model of a traffic control system. This paper analyses two vehicular
congestion control models in order to select the optimized and efficient model,
and elaborates on the reasons for its efficiency by answering the SLR-based
questions. Due to some efficient features, we suggest the IoV model based on
vehicular clouds. These efficient features make this model more effective than
the traditional model, which is a strong reason to enhance the network system.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 06:55:40 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Farooq",
"Muhammad Shoaib",
""
],
[
"Kanwal",
"Sawera",
""
]
] |
new_dataset
| 0.999284 |
2306.00400
|
Jitao Xu
|
Josep Crego, Jitao Xu, Fran\c{c}ois Yvon
|
BiSync: A Bilingual Editor for Synchronized Monolingual Texts
|
ACL 2023 System Demo
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In our globalized world, a growing number of situations arise where people
are required to communicate in one or several foreign languages. In the case of
written communication, users with a good command of a foreign language may find
assistance from computer-aided translation (CAT) technologies. These
technologies often allow users to access external resources, such as
dictionaries, terminologies or bilingual concordancers, thereby interrupting
and considerably hindering the writing process. In addition, CAT systems assume
that the source sentence is fixed and also restrict the possible changes on the
target side. In order to make the writing process smoother, we present BiSync,
a bilingual writing assistant that allows users to freely compose text in two
languages, while maintaining the two monolingual texts synchronized. We also
include additional functionalities, such as the display of alternative prefix
translations and paraphrases, which are intended to facilitate the authoring of
texts. We detail the model architecture used for synchronization and evaluate
the resulting tool, showing that high accuracy can be attained with limited
computational resources. The interface and models are publicly available at
https://github.com/jmcrego/BiSync and a demonstration video can be watched on
YouTube at https://youtu.be/_l-ugDHfNgU.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 07:03:47 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Crego",
"Josep",
""
],
[
"Xu",
"Jitao",
""
],
[
"Yvon",
"François",
""
]
] |
new_dataset
| 0.999473 |
2306.00424
|
Tejas Gokhale
|
Man Luo, Zhiyuan Fang, Tejas Gokhale, Yezhou Yang, Chitta Baral
|
End-to-end Knowledge Retrieval with Multi-modal Queries
|
ACL 2023
| null | null | null |
cs.CL cs.CV cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We investigate knowledge retrieval with multi-modal queries, i.e. queries
containing information split across image and text inputs, a challenging task
that differs from previous work on cross-modal retrieval. We curate a new
dataset called ReMuQ for benchmarking progress on this task. ReMuQ requires a
system to retrieve knowledge from a large corpus by integrating contents from
both text and image queries. We introduce a retriever model ``ReViz'' that can
directly process input text and images to retrieve relevant knowledge in an
end-to-end fashion without being dependent on intermediate modules such as
object detectors or caption generators. We introduce a new pretraining task
that is effective for learning knowledge retrieval with multimodal queries and
also improves performance on downstream tasks. We demonstrate superior
performance in retrieval on two datasets (ReMuQ and OK-VQA) under zero-shot
settings as well as further improvements when finetuned on these datasets.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 08:04:12 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Luo",
"Man",
""
],
[
"Fang",
"Zhiyuan",
""
],
[
"Gokhale",
"Tejas",
""
],
[
"Yang",
"Yezhou",
""
],
[
"Baral",
"Chitta",
""
]
] |
new_dataset
| 0.997617 |
2306.00455
|
David Vivancos
|
David Vivancos
|
MindBigData 2023 MNIST-8B The 8 billion datapoints Multimodal Dataset of
Brain Signals
|
9 pages, 10 figures
| null | null | null |
cs.LG cs.CV q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MindBigData 2023 MNIST-8B is, to date (June 1st, 2023), the largest open
brain-signals dataset created for Machine Learning. It is based on EEG signals
from a single subject captured using a custom 128-channel device, replicating
the full 70,000 digits from Yann LeCun et al.'s MNIST dataset. The brain signals
were captured while the subject was watching the pixels of the original digits
one by one on a screen and listening at the same time to the spoken numbers 0 to
9 from the real label. The data, collection procedures, hardware, and software
created are described in detail; additional background information and other
related datasets can be found in our previous paper, MindBigData 2022: A Large
Dataset of Brain Signals.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 08:58:35 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Vivancos",
"David",
""
]
] |
new_dataset
| 0.99979 |
2306.00489
|
Juan F. Montesinos
|
Juan F. Montesinos and Daniel Michelsanti and Gloria Haro and
Zheng-Hua Tan and Jesper Jensen
|
Speech inpainting: Context-based speech synthesis guided by video
|
Accepted in Interspeech23
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Audio and visual modalities are inherently connected in speech signals: lip
movements and facial expressions are correlated with speech sounds. This
motivates studies that incorporate the visual modality to enhance an acoustic
speech signal or even restore missing audio information. Specifically, this
paper focuses on the problem of audio-visual speech inpainting, which is the
task of synthesizing the speech in a corrupted audio segment in a way that it
is consistent with the corresponding visual content and the uncorrupted audio
context. We present an audio-visual transformer-based deep learning model that
leverages visual cues that provide information about the content of the
corrupted audio. It outperforms the previous state-of-the-art audio-visual
model and audio-only baselines. We also show how visual features extracted with
AV-HuBERT, a large audio-visual transformer for speech recognition, are
suitable for synthesizing speech.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 09:40:47 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Montesinos",
"Juan F.",
""
],
[
"Michelsanti",
"Daniel",
""
],
[
"Haro",
"Gloria",
""
],
[
"Tan",
"Zheng-Hua",
""
],
[
"Jensen",
"Jesper",
""
]
] |
new_dataset
| 0.998317 |
2306.00503
|
Guangyuan Jiang
|
Guangyuan Jiang, Manjie Xu, Shiji Xin, Wei Liang, Yujia Peng, Chi
Zhang, Yixin Zhu
|
MEWL: Few-shot multimodal word learning with referential uncertainty
|
Accepted at ICML 2023
| null | null | null |
cs.CL cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Without explicit feedback, humans can rapidly learn the meaning of words.
Children can acquire a new word after just a few passive exposures, a process
known as fast mapping. This word learning capability is believed to be the most
fundamental building block of multimodal understanding and reasoning. Despite
recent advancements in multimodal learning, a systematic and rigorous
evaluation is still missing for human-like word learning in machines. To fill
in this gap, we introduce the MachinE Word Learning (MEWL) benchmark to assess
how machines learn word meaning in grounded visual scenes. MEWL covers humans'
core cognitive toolkits in word learning: cross-situational reasoning,
bootstrapping, and pragmatic learning. Specifically, MEWL is a few-shot
benchmark suite consisting of nine tasks for probing various word learning
capabilities. These tasks are carefully designed to be aligned with the
children's core abilities in word learning and echo the theories in the
developmental literature. By evaluating multimodal and unimodal agents'
performance with a comparative analysis of human performance, we notice a sharp
divergence in human and machine word learning. We further discuss these
differences between humans and machines and call for human-like few-shot word
learning in machines.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 09:54:31 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Jiang",
"Guangyuan",
""
],
[
"Xu",
"Manjie",
""
],
[
"Xin",
"Shiji",
""
],
[
"Liang",
"Wei",
""
],
[
"Peng",
"Yujia",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zhu",
"Yixin",
""
]
] |
new_dataset
| 0.986377 |
2306.00553
|
Ke Li
|
Yihan Liu, Ke Li, Zihao Huang, Bowen Li, Guiyan Wang, Wei Cai
|
EduChain: A Blockchain-based Education Data Management System
| null |
CBCC 2020. Communications in Computer and Information Science, vol
1305. Springer, Singapore
|
10.1007/978-981-33-6478-3_5
| null |
cs.CR cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The predominant centralized paradigm in educational data management currently
suffers from several critical issues such as vulnerability to malicious
tampering, a high prevalence of diploma counterfeiting, and the onerous cost of
certificate authentication. Decentralized blockchain technology, with its
cutting-edge capabilities, presents a viable solution to these pervasive
problems. In this paper, we illuminate the inherent limitations of existing
centralized systems and introduce EduChain, a novel heterogeneous
blockchain-based system for managing educational data. EduChain uniquely
harnesses the strengths of both private and consortium blockchains, offering an
unprecedented level of security and efficiency. In addition, we propose a
robust mechanism for performing database consistency checks and error tracing.
This is achieved through the implementation of a secondary consensus, employing
the pt-table-checksum tool. This approach effectively addresses the prevalent
issue of database mismatches. Our system demonstrates superior performance in
key areas such as information verification, error traceback, and data security,
thereby significantly improving the integrity and trustworthiness of
educational data management. Through EduChain, we offer a powerful solution for
future advancements in secure and efficient educational data management.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 11:16:31 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Liu",
"Yihan",
""
],
[
"Li",
"Ke",
""
],
[
"Huang",
"Zihao",
""
],
[
"Li",
"Bowen",
""
],
[
"Wang",
"Guiyan",
""
],
[
"Cai",
"Wei",
""
]
] |
new_dataset
| 0.976599 |
2306.00576
|
Jun Chen
|
Jun Chen, Ming Hu, Darren J. Coker, Michael L. Berumen, Blair
Costelloe, Sara Beery, Anna Rohrbach, Mohamed Elhoseiny
|
MammalNet: A Large-scale Video Benchmark for Mammal Recognition and
Behavior Understanding
|
CVPR 2023 proceeding
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Monitoring animal behavior can facilitate conservation efforts by providing
key insights into wildlife health, population status, and ecosystem function.
Automatic recognition of animals and their behaviors is critical for
capitalizing on the large unlabeled datasets generated by modern video devices
and for accelerating monitoring efforts at scale. However, the development of
automated recognition systems is currently hindered by a lack of appropriately
labeled datasets. Existing video datasets 1) do not classify animals according
to established biological taxonomies; 2) are too small to facilitate
large-scale behavioral studies and are often limited to a single species; and
3) do not feature temporally localized annotations and therefore do not
facilitate localization of targeted behaviors within longer video sequences.
Thus, we propose MammalNet, a new large-scale animal behavior dataset with
taxonomy-guided annotations of mammals and their common behaviors. MammalNet
contains over 18K videos totaling 539 hours, which is ~10 times larger than the
largest existing animal behavior dataset. It covers 17 orders, 69 families, and
173 mammal categories for animal categorization and captures 12 high-level
animal behaviors that received focus in previous animal behavior studies. We
establish three benchmarks on MammalNet: standard animal and behavior
recognition, compositional low-shot animal and behavior recognition, and
behavior detection. Our dataset and code have been made available at:
https://mammal-net.github.io.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 11:45:33 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Chen",
"Jun",
""
],
[
"Hu",
"Ming",
""
],
[
"Coker",
"Darren J.",
""
],
[
"Berumen",
"Michael L.",
""
],
[
"Costelloe",
"Blair",
""
],
[
"Beery",
"Sara",
""
],
[
"Rohrbach",
"Anna",
""
],
[
"Elhoseiny",
"Mohamed",
""
]
] |
new_dataset
| 0.999654 |
2306.00577
|
Vincent Moens
|
Albert Bou, Matteo Bettini, Sebastian Dittert, Vikash Kumar, Shagun
Sodhani, Xiaomeng Yang, Gianni De Fabritiis, Vincent Moens
|
TorchRL: A data-driven decision-making library for PyTorch
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Striking a balance between integration and modularity is crucial for a
machine learning library to be versatile and user-friendly, especially in
handling decision and control tasks that involve large development teams and
complex, real-world data, and environments. To address this issue, we propose
TorchRL, a generalistic control library for PyTorch that provides
well-integrated, yet standalone components. With a versatile and robust
primitive design, TorchRL facilitates streamlined algorithm development across
the many branches of Reinforcement Learning (RL) and control. We introduce a
new PyTorch primitive, TensorDict, as a flexible data carrier that empowers the
integration of the library's components while preserving their modularity.
Hence replay buffers, datasets, distributed data collectors, environments,
transforms and objectives can be effortlessly used in isolation or combined. We
provide a detailed description of the building blocks, supporting code examples
and an extensive overview of the library across domains and tasks. Finally, we
show comparative benchmarks to demonstrate its computational efficiency.
TorchRL fosters long-term support and is publicly available on GitHub for
greater reproducibility and collaboration within the research community. The
code is open-sourced at https://github.com/pytorch/rl.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 11:45:45 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Bou",
"Albert",
""
],
[
"Bettini",
"Matteo",
""
],
[
"Dittert",
"Sebastian",
""
],
[
"Kumar",
"Vikash",
""
],
[
"Sodhani",
"Shagun",
""
],
[
"Yang",
"Xiaomeng",
""
],
[
"De Fabritiis",
"Gianni",
""
],
[
"Moens",
"Vincent",
""
]
] |
new_dataset
| 0.997864 |
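
The record above (2306.00577) centres on TensorDict as the primitive that lets
TorchRL components exchange batched data while staying modular. The snippet
below is a minimal sketch of that usage pattern; it assumes the tensordict
package is installed, the field names are arbitrary examples, and exact method
names may vary between releases.

# Minimal sketch of TensorDict as a batched data carrier (field names are arbitrary
# examples; exact APIs may differ between tensordict/TorchRL releases).
import torch
from tensordict import TensorDict

batch = TensorDict(
    {
        "observation": torch.randn(4, 3),  # 4 transitions with 3-dim observations
        "action": torch.randn(4, 2),       # 4 transitions with 2-dim actions
        "reward": torch.zeros(4, 1),
    },
    batch_size=[4],                        # leading batch dimension shared by every entry
)

single = batch[0]                                      # index like a tensor
batch["done"] = torch.zeros(4, 1, dtype=torch.bool)    # add a field in place
flat = batch.reshape(-1)                               # shape ops apply to all entries at once

print(single["observation"].shape, flat.batch_size)
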
2306.00680
|
Jee-Weon Jung
|
Jee-weon Jung, Soonshin Seo, Hee-Soo Heo, Geonmin Kim, You Jin Kim,
Young-ki Kwon, Minjae Lee, Bong-Jin Lee
|
Encoder-decoder multimodal speaker change detection
|
5 pages, accepted for presentation at INTERSPEECH 2023
| null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of speaker change detection (SCD), which detects points where
speakers change in an input, is essential for several applications. Several
studies solved the SCD task using audio inputs only and have shown limited
performance. Recently, multimodal SCD (MMSCD) models, which utilise text
modality in addition to audio, have shown improved performance. In this study,
the proposed model is built upon two main proposals: a novel mechanism for
modality fusion and the adoption of an encoder-decoder architecture. Different
from previous MMSCD works that extract speaker embeddings from extremely short
audio segments aligned to a single word, we use a speaker embedding extracted
from 1.5 s of audio. A transformer decoder layer further improves the performance of an
encoder-only MMSCD model. The proposed model achieves state-of-the-art results
among studies that report SCD performance and is also on par with recent work
that combines SCD with automatic speech recognition via human transcription.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 13:55:23 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Jung",
"Jee-weon",
""
],
[
"Seo",
"Soonshin",
""
],
[
"Heo",
"Hee-Soo",
""
],
[
"Kim",
"Geonmin",
""
],
[
"Kim",
"You Jin",
""
],
[
"Kwon",
"Young-ki",
""
],
[
"Lee",
"Minjae",
""
],
[
"Lee",
"Bong-Jin",
""
]
] |
new_dataset
| 0.975641 |
2306.00681
|
Nils Aschenbruck
|
Daniel Otten, Alexander Brundiers, Timmy Sch\"uller, Nils Aschenbruck
|
Green Segment Routing for Improved Sustainability of Backbone Networks
|
This work has been submitted to IEEE for possible publication.
Copyright may be transferred without notice
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Improving the energy efficiency of Internet Service Provider (ISP) backbone
networks is an important objective for ISP operators. In these networks, the
overall traffic load throughout the day can vary drastically, resulting in many
backbone networks being highly overprovisioned during periods of lower traffic
volume. In this paper, we propose a new Segment Routing (SR)-based optimization
algorithm that aims at reducing the energy consumption of networks during such
low-traffic periods. It uses the traffic steering capabilities of SR to remove
traffic from as many links as possible to allow the respective hardware
components to be switched off. Furthermore, it simultaneously ensures that
solutions comply with additional operator requirements regarding the overall
Maximum Link Utilization in the network. Based on data from a Tier-1 ISP and a
publicly available dataset, we show that our approach allows up to 70% of
the overall linecards to be switched off, corresponding to a reduction of around
56% in the overall energy consumption of the network in times of low
traffic demands.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 13:55:41 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Otten",
"Daniel",
""
],
[
"Brundiers",
"Alexander",
""
],
[
"Schüller",
"Timmy",
""
],
[
"Aschenbruck",
"Nils",
""
]
] |
new_dataset
| 0.990868 |
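
To make the optimization goal in the record above (2306.00681) concrete, a
generic energy-aware traffic-engineering problem can be written as below; this
is an illustrative formulation, not the paper's exact SR-based model. Here
$x_e$ marks links that stay active, $c_e$ is the capacity of link $e$,
$\alpha$ is the Maximum Link Utilization bound, and $f_e^d$ is the flow of
demand $d$ on link $e$.

% Illustrative energy-aware traffic-engineering formulation (not the paper's exact model).
\[
\begin{aligned}
  \min_{x,\,f} \quad & \sum_{e \in E} x_e
    && \text{minimise the number of active links} \\
  \text{s.t.} \quad & \sum_{d \in D} f_e^{d} \;\le\; \alpha\, c_e\, x_e
    && \forall e \in E \quad \text{(MLU bound; no flow on switched-off links)} \\
  & f \ \text{satisfies flow conservation for every demand } d \in D, \\
  & x_e \in \{0,1\}, \quad f_e^{d} \ge 0 .
\end{aligned}
\]
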
2306.00689
|
Shakeel Ahmad Sheikh
|
Shakeel A. Sheikh, Md Sahidullah, Fabrice Hirsch, Slim Ouni
|
Stuttering Detection Using Speaker Representations and Self-supervised
Contextual Embeddings
|
Accepted in International Journal of Speech Technology, Springer 2023
substantial overlap with arXiv:2204.01564
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The adoption of advanced deep learning architectures in stuttering detection
(SD) tasks is challenging due to the limited size of the available datasets. To
this end, this work introduces the application of speech embeddings extracted
from pre-trained deep learning models trained on large audio datasets for
different tasks. In particular, we explore audio representations obtained using
emphasized channel attention, propagation, and aggregation time delay neural
network (ECAPA-TDNN) and Wav2Vec2.0 models trained on VoxCeleb and LibriSpeech
datasets respectively. After extracting the embeddings, we benchmark with
several traditional classifiers, such as K-nearest neighbour (KNN),
Gaussian naive Bayes, and a neural network, for the SD tasks. In comparison to
the standard SD systems trained only on the limited SEP-28k dataset, we obtain
a relative improvement of 12.08%, 28.71%, 37.9% in terms of unweighted average
recall (UAR) over the baselines. Finally, we have shown that combining two
embeddings and concatenating multiple layers of Wav2Vec2.0 can further improve
the UAR by up to 2.60% and 6.32% respectively.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 14:00:47 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Sheikh",
"Shakeel A.",
""
],
[
"Sahidullah",
"Md",
""
],
[
"Hirsch",
"Fabrice",
""
],
[
"Ouni",
"Slim",
""
]
] |
new_dataset
| 0.999197 |
2306.00794
|
Mirazul Haque
|
Mirazul Haque, Rutvij Shah, Simin Chen, Berrak \c{S}i\c{s}man, Cong
Liu, Wei Yang
|
SlothSpeech: Denial-of-service Attack Against Speech Recognition Models
| null | null | null | null |
cs.SD cs.CR cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Deep Learning (DL) models are nowadays widely used to execute different
speech-related tasks, including automatic speech recognition (ASR). As ASR is
being used in different real-time scenarios, it is important that the ASR model
remains efficient against minor perturbations to the input. Hence, evaluating
efficiency robustness of the ASR model is the need of the hour. We show that
popular ASR models like Speech2Text model and Whisper model have dynamic
computation based on different inputs, causing dynamic efficiency. In this
work, we propose SlothSpeech, a denial-of-service attack against ASR models,
which exploits the dynamic behaviour of the model. SlothSpeech uses the
probability distribution of the output text tokens to generate perturbations to
the audio such that efficiency of the ASR model is decreased. We find that
SlothSpeech-generated inputs can increase the latency up to 40X compared to the
latency induced by benign input.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 15:25:14 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Haque",
"Mirazul",
""
],
[
"Shah",
"Rutvij",
""
],
[
"Chen",
"Simin",
""
],
[
"Şişman",
"Berrak",
""
],
[
"Liu",
"Cong",
""
],
[
"Yang",
"Wei",
""
]
] |
new_dataset
| 0.996242 |
2306.00844
|
Mahdi Taheri
|
Mahdi Taheri, Saeideh Sheikhpour, Ali Mahani, and Maksim Jenihhin
|
A Novel Fault-Tolerant Logic Style with Self-Checking Capability
|
6 pages, 3 tables, 5 figures
| null |
10.1109/IOLTS56730.2022.9897818
| null |
cs.AR cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce a novel logic style with self-checking capability to enhance
hardware reliability at logic level. The proposed logic cells have two-rail
inputs/outputs, and the functionality for each rail of outputs enables the
construction of fault-tolerant configurable circuits. The AND and OR gates
consist of 8 transistors based on CNFET technology, while the proposed XOR gate
benefits from both CNFET and low-power MGDI technologies in its transistor
arrangement. To demonstrate the feasibility of our new logic gates, we used an
AES S-box implementation as the use case. The extensive simulation results
using HSPICE indicate that the case-study circuit using the proposed gates has
superior speed and power consumption compared to other implementations with
error-detection capability.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 12:21:53 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Taheri",
"Mahdi",
""
],
[
"Sheikhpour",
"Saeideh",
""
],
[
"Mahani",
"Ali",
""
],
[
"Jenihhin",
"Maksim",
""
]
] |
new_dataset
| 0.999609 |
2306.00867
|
Rohan Chitnis
|
Rohan Chitnis, Yingchen Xu, Bobak Hashemi, Lucas Lehnert, Urun Dogan,
Zheqing Zhu, Olivier Delalleau
|
IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive
Control
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Model-based reinforcement learning (RL) has shown great promise due to its
sample efficiency, but still struggles with long-horizon sparse-reward tasks,
especially in offline settings where the agent learns from a fixed dataset. We
hypothesize that model-based RL agents struggle in these environments due to a
lack of long-term planning capabilities, and that planning in a temporally
abstract model of the environment can alleviate this issue. In this paper, we
make two key contributions: 1) we introduce an offline model-based RL
algorithm, IQL-TD-MPC, that extends the state-of-the-art Temporal Difference
Learning for Model Predictive Control (TD-MPC) with Implicit Q-Learning (IQL);
2) we propose to use IQL-TD-MPC as a Manager in a hierarchical setting with any
off-the-shelf offline RL algorithm as a Worker. More specifically, we pre-train
a temporally abstract IQL-TD-MPC Manager to predict "intent embeddings", which
roughly correspond to subgoals, via planning. We empirically show that
augmenting state representations with intent embeddings generated by an
IQL-TD-MPC manager significantly improves off-the-shelf offline RL agents'
performance on some of the most challenging D4RL benchmark tasks. For instance,
the offline RL algorithms AWAC, TD3-BC, DT, and CQL all get zero or near-zero
normalized evaluation scores on the medium and large antmaze tasks, while our
modification gives an average score over 40.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 16:24:40 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Chitnis",
"Rohan",
""
],
[
"Xu",
"Yingchen",
""
],
[
"Hashemi",
"Bobak",
""
],
[
"Lehnert",
"Lucas",
""
],
[
"Dogan",
"Urun",
""
],
[
"Zhu",
"Zheqing",
""
],
[
"Delalleau",
"Olivier",
""
]
] |
new_dataset
| 0.965791 |
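The IQL-TD-MPC abstract above describes augmenting the Worker's state with intent embeddings predicted by a frozen, pre-trained Manager. A minimal PyTorch sketch of that augmentation step follows; the module structure and dimensions are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class IntentAugmentedWorker(nn.Module):
    """Worker policy that conditions on a frozen Manager's intent embedding."""

    def __init__(self, manager: nn.Module, state_dim: int, intent_dim: int,
                 action_dim: int, hidden: int = 256):
        super().__init__()
        self.manager = manager.eval()            # pre-trained Manager, kept frozen
        for p in self.manager.parameters():
            p.requires_grad_(False)
        self.policy = nn.Sequential(
            nn.Linear(state_dim + intent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            intent = self.manager(state)         # (batch, intent_dim) subgoal-like embedding
        return self.policy(torch.cat([state, intent], dim=-1))
```

Any off-the-shelf offline RL algorithm can then treat the concatenated vector as its observation, which is the drop-in property the abstract emphasizes.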
2306.00956
|
Ruohan Gao
|
Ruohan Gao, Yiming Dou, Hao Li, Tanmay Agarwal, Jeannette Bohg, Yunzhu
Li, Li Fei-Fei, Jiajun Wu
|
The ObjectFolder Benchmark: Multisensory Learning with Neural and Real
Objects
|
In CVPR 2023. Project page: https://objectfolder.stanford.edu/.
ObjectFolder Real demo: https://www.objectfolder.org/swan_vis/. Gao, Dou, and
Li contributed equally to this work
| null | null | null |
cs.CV cs.AI cs.GR cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the ObjectFolder Benchmark, a benchmark suite of 10 tasks for
multisensory object-centric learning, centered around object recognition,
reconstruction, and manipulation with sight, sound, and touch. We also
introduce the ObjectFolder Real dataset, including the multisensory
measurements for 100 real-world household objects, building upon a newly
designed pipeline for collecting the 3D meshes, videos, impact sounds, and
tactile readings of real-world objects. We conduct systematic benchmarking on
both the 1,000 multisensory neural objects from ObjectFolder, and the real
multisensory data from ObjectFolder Real. Our results demonstrate the
importance of multisensory perception and reveal the respective roles of
vision, audio, and touch for different object-centric learning tasks. By
publicly releasing our dataset and benchmark suite, we hope to catalyze and
enable new research in multisensory object-centric learning in computer vision,
robotics, and beyond. Project page: https://objectfolder.stanford.edu
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:51:22 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Gao",
"Ruohan",
""
],
[
"Dou",
"Yiming",
""
],
[
"Li",
"Hao",
""
],
[
"Agarwal",
"Tanmay",
""
],
[
"Bohg",
"Jeannette",
""
],
[
"Li",
"Yunzhu",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Wu",
"Jiajun",
""
]
] |
new_dataset
| 0.999823 |
2306.00958
|
Yecheng Jason Ma
|
Yecheng Jason Ma, William Liang, Vaidehi Som, Vikash Kumar, Amy Zhang,
Osbert Bastani, Dinesh Jayaraman
|
LIV: Language-Image Representations and Rewards for Robotic Control
|
Extended version of ICML 2023 camera-ready; Project website:
https://penn-pal-lab.github.io/LIV/
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Language-Image Value learning (LIV), a unified objective for
vision-language representation and reward learning from action-free videos with
text annotations. Exploiting a novel connection between dual reinforcement
learning and mutual information contrastive learning, the LIV objective trains
a multi-modal representation that implicitly encodes a universal value function
for tasks specified as language or image goals. We use LIV to pre-train the
first control-centric vision-language representation from large human video
datasets such as EpicKitchen. Given only a language or image goal, the
pre-trained LIV model can assign dense rewards to each frame in videos of
unseen robots or humans attempting that task in unseen environments. Further,
when some target domain-specific data is available, the same objective can be
used to fine-tune and improve LIV and even other pre-trained representations
for robotic control and reward specification in that domain. In our experiments
on several simulated and real-world robot environments, LIV models consistently
outperform the best prior input state representations for imitation learning,
as well as reward specification methods for policy synthesis. Our results
validate the advantages of joint vision-language representation and reward
learning within the unified, compact LIV framework.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:52:23 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Ma",
"Yecheng Jason",
""
],
[
"Liang",
"William",
""
],
[
"Som",
"Vaidehi",
""
],
[
"Kumar",
"Vikash",
""
],
[
"Zhang",
"Amy",
""
],
[
"Bastani",
"Osbert",
""
],
[
"Jayaraman",
"Dinesh",
""
]
] |
new_dataset
| 0.999698 |
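The LIV abstract above builds on a mutual-information contrastive objective between paired image and language embeddings. The sketch below is a generic symmetric InfoNCE loss over such pairs, not the exact LIV objective (which additionally ties the representation to a value function via dual reinforcement learning):

```python
import torch
import torch.nn.functional as F

def symmetric_infonce(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """Contrastive loss over a batch of paired image/text embeddings.

    img_emb, txt_emb: (batch, dim); matching pairs share the same row index.
    """
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature                 # (batch, batch) similarities
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```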
2306.00968
|
Henghui Ding
|
Chang Liu, Henghui Ding, Xudong Jiang
|
GRES: Generalized Referring Expression Segmentation
|
CVPR2023 Highlight, Project Page: https://henghuiding.github.io/GRES/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Referring Expression Segmentation (RES) aims to generate a segmentation mask
for the object described by a given language expression. Existing classic RES
datasets and methods commonly support single-target expressions only, i.e., one
expression refers to one target object. Multi-target and no-target expressions
are not considered. This limits the usage of RES in practice. In this paper, we
introduce a new benchmark called Generalized Referring Expression Segmentation
(GRES), which extends the classic RES to allow expressions to refer to an
arbitrary number of target objects. Towards this, we construct the first
large-scale GRES dataset called gRefCOCO that contains multi-target, no-target,
and single-target expressions. GRES and gRefCOCO are designed to be
well-compatible with RES, facilitating extensive experiments to study the
performance gap of the existing RES methods on the GRES task. In the
experimental study, we find that one of the big challenges of GRES is complex
relationship modeling. Based on this, we propose a region-based GRES baseline
ReLA that adaptively divides the image into regions with sub-instance clues,
and explicitly models the region-region and region-language dependencies. The
proposed approach ReLA achieves new state-of-the-art performance on both the
newly proposed GRES and classic RES tasks. The proposed gRefCOCO dataset and
method are available at https://henghuiding.github.io/GRES.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:57:32 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Liu",
"Chang",
""
],
[
"Ding",
"Henghui",
""
],
[
"Jiang",
"Xudong",
""
]
] |
new_dataset
| 0.98525 |
2306.00971
|
Shaozhe Hao
|
Shaozhe Hao, Kai Han, Shihao Zhao, Kwan-Yee K. Wong
|
ViCo: Detail-Preserving Visual Condition for Personalized Text-to-Image
Generation
|
Under review
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Personalized text-to-image generation using diffusion models has recently
been proposed and attracted lots of attention. Given a handful of images
containing a novel concept (e.g., a unique toy), we aim to tune the generative
model to capture fine visual details of the novel concept and generate
photorealistic images following a text condition. We present a plug-in method,
named ViCo, for fast and lightweight personalized generation. Specifically, we
propose an image attention module to condition the diffusion process on the
patch-wise visual semantics. We introduce an attention-based object mask that
comes almost at no cost from the attention module. In addition, we design a
simple regularization based on the intrinsic properties of text-image attention
maps to alleviate the common overfitting degradation. Unlike many existing
models, our method does not finetune any parameters of the original diffusion
model. This allows more flexible and transferable model deployment. With only
light parameter training (~6% of the diffusion U-Net), our method achieves
comparable or even better performance than all state-of-the-art models both
qualitatively and quantitatively.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:58:44 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Hao",
"Shaozhe",
""
],
[
"Han",
"Kai",
""
],
[
"Zhao",
"Shihao",
""
],
[
"Wong",
"Kwan-Yee K.",
""
]
] |
new_dataset
| 0.993571 |
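The ViCo abstract above conditions the diffusion process on patch-wise visual semantics through an image attention module. The sketch below is a generic cross-attention layer in which denoising features attend to reference-image patch tokens; it illustrates the conditioning pattern only and is not ViCo's actual module:

```python
import torch
import torch.nn as nn

class ReferenceImageAttention(nn.Module):
    """Cross-attention: diffusion features attend to reference-image patch tokens."""

    def __init__(self, feat_dim: int, patch_dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=num_heads,
                                          kdim=patch_dim, vdim=patch_dim,
                                          batch_first=True)

    def forward(self, feats: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
        # feats:   (batch, num_feature_tokens, feat_dim) from the denoising network
        # patches: (batch, num_patches, patch_dim)       from the reference image
        attended, attn_weights = self.attn(query=feats, key=patches, value=patches)
        # attn_weights could double as a rough object mask, in the spirit of the abstract.
        return feats + attended                           # residual conditioning
```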
2306.00989
|
Daniel Bolya
|
Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei, Haoqi Fan,
Po-Yao Huang, Vaibhav Aggarwal, Arkabandhu Chowdhury, Omid Poursaeed, Judy
Hoffman, Jitendra Malik, Yanghao Li, Christoph Feichtenhofer
|
Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles
|
ICML 2023 Oral version. Code+Models:
https://github.com/facebookresearch/hiera
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern hierarchical vision transformers have added several vision-specific
components in the pursuit of supervised classification performance. While these
components lead to effective accuracies and attractive FLOP counts, the added
complexity actually makes these transformers slower than their vanilla ViT
counterparts. In this paper, we argue that this additional bulk is unnecessary.
By pretraining with a strong visual pretext task (MAE), we can strip out all
the bells-and-whistles from a state-of-the-art multi-stage vision transformer
without losing accuracy. In the process, we create Hiera, an extremely simple
hierarchical vision transformer that is more accurate than previous models
while being significantly faster both at inference and during training. We
evaluate Hiera on a variety of tasks for image and video recognition. Our code
and models are available at https://github.com/facebookresearch/hiera.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:59:58 GMT"
}
] | 2023-06-02T00:00:00 |
[
[
"Ryali",
"Chaitanya",
""
],
[
"Hu",
"Yuan-Ting",
""
],
[
"Bolya",
"Daniel",
""
],
[
"Wei",
"Chen",
""
],
[
"Fan",
"Haoqi",
""
],
[
"Huang",
"Po-Yao",
""
],
[
"Aggarwal",
"Vaibhav",
""
],
[
"Chowdhury",
"Arkabandhu",
""
],
[
"Poursaeed",
"Omid",
""
],
[
"Hoffman",
"Judy",
""
],
[
"Malik",
"Jitendra",
""
],
[
"Li",
"Yanghao",
""
],
[
"Feichtenhofer",
"Christoph",
""
]
] |
new_dataset
| 0.963392 |
1203.0781
|
Takayuki Katsuki
|
Takayuki Katsuki, Masato Inoue
|
Posterior Mean Super-Resolution with a Compound Gaussian Markov Random
Field Prior
|
5 pages, 20 figures, 1 tables, accepted to ICASSP2012 (corrected
2012/3/23)
| null |
10.1109/ICASSP.2012.6288015
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This manuscript proposes a posterior mean (PM) super-resolution (SR) method
with a compound Gaussian Markov random field (MRF) prior. SR is a technique to
estimate a spatially high-resolution image from multiple observed
low-resolution images. A compound Gaussian MRF model provides a preferable
prior for natural images that preserves edges. PM is the optimal estimator for
the objective function of peak signal-to-noise ratio (PSNR). This estimator is
numerically determined by using variational Bayes (VB). We then solve the
conjugate prior problem on VB and the exponential-order calculation cost
problem of a compound Gaussian MRF prior with simple Taylor approximations. In
experiments, the proposed method generally outperforms existing methods.
|
[
{
"version": "v1",
"created": "Sun, 4 Mar 2012 22:12:54 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Mar 2012 04:11:08 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Mar 2012 02:52:46 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Katsuki",
"Takayuki",
""
],
[
"Inoue",
"Masato",
""
]
] |
new_dataset
| 0.985545 |
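The record above states that the posterior mean (PM) is the optimal estimator under the PSNR objective. A brief standard justification (a sketch of the usual Bayesian argument, not specific to that paper): PSNR is a monotonically decreasing function of the squared reconstruction error,
$$\mathrm{PSNR}(\hat{x}, x) = 10\log_{10}\frac{\mathrm{MAX}^2}{\tfrac{1}{N}\lVert x - \hat{x}\rVert^2},$$
and the estimator minimizing the posterior expected squared error is exactly the posterior mean,
$$\hat{x}_{\mathrm{PM}} = \operatorname*{arg\,min}_{\hat{x}}\ \mathbb{E}\big[\lVert x - \hat{x}\rVert^2 \,\big|\, y\big] = \mathbb{E}[x \mid y],$$
which is why the PM is the natural estimator when reconstruction quality is judged by squared error and hence by PSNR.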