id stringlengths 9-10 | submitter stringlengths 2-52 ⌀ | authors stringlengths 4-6.51k | title stringlengths 4-246 | comments stringlengths 1-523 ⌀ | journal-ref stringlengths 4-345 ⌀ | doi stringlengths 11-120 ⌀ | report-no stringlengths 2-243 ⌀ | categories stringlengths 5-98 | license stringclasses 9 values | abstract stringlengths 33-3.33k | versions list | update_date timestamp[s] | authors_parsed list | prediction stringclasses 1 value | probability float64 0.95-1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
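The header above follows the dataset-viewer convention: column name, dtype, the observed value-length range (or class count), and ⌀ for nullable columns. As a minimal sketch of how such records might be consumed, the snippet below assumes the rows are exported as a JSON Lines file with exactly the fields listed in the header; the path `arxiv_predictions.jsonl`, the helper name, and the 0.99 probability cutoff are placeholder assumptions, not part of the dataset.

```python
import json

# Minimal sketch: stream records of this arXiv-metadata dump and keep
# high-confidence "new_dataset" predictions. The file name and the 0.99
# threshold are illustrative assumptions; field names follow the schema above.
def high_confidence_records(path="arxiv_predictions.jsonl", threshold=0.99):
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["prediction"] == "new_dataset" and record["probability"] >= threshold:
                yield record

if __name__ == "__main__":
    for rec in high_confidence_records():
        # authors_parsed holds [last, first, suffix, (optional affiliation)] entries
        names = ", ".join(f"{first} {last}".strip() for last, first, *_ in rec["authors_parsed"])
        print(rec["id"], rec["title"].replace("\n", " "), "|", names)
```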
2306.06522
|
Philipp Hallgarten
|
Philipp Hallgarten, David Bethge, Ozan Özdenizci, Tobias
Grosse-Puppendahl, Enkelejda Kasneci
|
TS-MoCo: Time-Series Momentum Contrast for Self-Supervised Physiological
Representation Learning
|
31st European Signal Processing Conference (EUSIPCO)
| null | null | null |
cs.LG cs.HC eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Limited availability of labeled physiological data often prohibits the use of
powerful supervised deep learning models in the biomedical machine intelligence
domain. We approach this problem and propose a novel encoding framework that
relies on self-supervised learning with momentum contrast to learn
representations from multivariate time-series of various physiological domains
without needing labels. Our model uses a transformer architecture that can be
easily adapted to classification problems by optimizing a linear output
classification layer. We experimentally evaluate our framework using two
publicly available physiological datasets from different domains, i.e., human
activity recognition from embedded inertial sensory and emotion recognition
from electroencephalography. We show that our self-supervised learning approach
can indeed learn discriminative features which can be exploited in downstream
classification tasks. Our work enables the development of domain-agnostic
intelligent systems that can effectively analyze multivariate time-series data
from physiological domains.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 21:17:42 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Hallgarten",
"Philipp",
""
],
[
"Bethge",
"David",
""
],
[
"Özdenizci",
"Ozan",
""
],
[
"Grosse-Puppendahl",
"Tobias",
""
],
[
"Kasneci",
"Enkelejda",
""
]
] |
new_dataset
| 0.988949 |
2306.06543
|
Ahmed H. Qureshi
|
Vivek Gupta, Praphpreet Dhir, Jeegn Dani, Ahmed H. Qureshi
|
MANER: Multi-Agent Neural Rearrangement Planning of Objects in Cluttered
Environments
|
The videos and supplementary material are available at
https://sites.google.com/view/maner-supplementary
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object rearrangement is a fundamental problem in robotics with various
practical applications ranging from managing warehouses to cleaning and
organizing home kitchens. While existing research has primarily focused on
single-agent solutions, real-world scenarios often require multiple robots to
work together on rearrangement tasks. This paper proposes a comprehensive
learning-based framework for multi-agent object rearrangement planning,
addressing the challenges of task sequencing and path planning in complex
environments. The proposed method iteratively selects objects, determines their
relocation regions, and pairs them with available robots under kinematic
feasibility and task reachability for execution to achieve the target
arrangement. Our experiments on a diverse range of environments demonstrate the
effectiveness and robustness of the proposed framework. Furthermore, results
indicate improved performance in terms of traversal time and success rate
compared to baseline approaches.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 23:53:28 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Gupta",
"Vivek",
""
],
[
"Dhir",
"Praphpreet",
""
],
[
"Dani",
"Jeegn",
""
],
[
"Qureshi",
"Ahmed H.",
""
]
] |
new_dataset
| 0.996436 |
2306.06573
|
Dasha Pruss
|
Dasha Pruss
|
Ghosting the Machine: Judicial Resistance to a Recidivism Risk
Assessment Instrument
|
Accepted to FAccT '23
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recidivism risk assessment instruments are presented as an 'evidence-based'
strategy for criminal justice reform - a way of increasing consistency in
sentencing, replacing cash bail, and reducing mass incarceration. In practice,
however, AI-centric reforms can simply add another layer to the sluggish,
labyrinthine machinery of bureaucratic systems and are met with internal
resistance. Through a community-informed interview-based study of 23 criminal
judges and other criminal legal bureaucrats in Pennsylvania, I find that judges
overwhelmingly ignore a recently-implemented sentence risk assessment
instrument, which they disparage as "useless," "worthless," "boring," "a waste
of time," "a non-thing," and simply "not helpful." I argue that this algorithm
aversion cannot be accounted for by individuals' distrust of the tools or
automation anxieties, per the explanations given by existing scholarship.
Rather, the instrument's non-use is the result of an interplay between three
organizational factors: county-level norms about pre-sentence investigation
reports; alterations made to the instrument by the Pennsylvania Sentencing
Commission in response to years of public and internal resistance; and problems
with how information is disseminated to judges. These findings shed new light
on the important role of organizational influences on professional resistance
to algorithms, which helps explain why algorithm-centric reforms can fail to
have their desired effect. This study also contributes to an
empirically-informed argument against the use of risk assessment instruments:
they are resource-intensive and have not demonstrated positive on-the-ground
impacts.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 03:43:23 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Pruss",
"Dasha",
""
]
] |
new_dataset
| 0.958955 |
2306.06583
|
Siyang Song
|
Siyang Song, Micol Spitale, Cheng Luo, German Barquero, Cristina
Palmero, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien Ringeval,
Elisabeth Andre and Hatice Gunes
|
REACT2023: the first Multi-modal Multiple Appropriate Facial Reaction
Generation Challenge
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The Multi-modal Multiple Appropriate Facial Reaction Generation Challenge
(REACT2023) is the first competition event focused on evaluating multimedia
processing and machine learning techniques for generating human-appropriate
facial reactions in various dyadic interaction scenarios, with all participants
competing strictly under the same conditions. The goal of the challenge is to
provide the first benchmark test set for multi-modal information processing and
to foster collaboration among the audio, visual, and audio-visual affective
computing communities, to compare the relative merits of the approaches to
automatic appropriate facial reaction generation under different spontaneous
dyadic interaction conditions. This paper presents: (i) novelties,
contributions and guidelines of the REACT2023 challenge; (ii) the dataset
utilized in the challenge; and (iii) the performance of baseline systems on the
two proposed sub-challenges: Offline Multiple Appropriate Facial Reaction
Generation and Online Multiple Appropriate Facial Reaction Generation,
respectively. The challenge baseline code is publicly available at
\url{https://github.com/reactmultimodalchallenge/baseline_react2023}.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 04:15:56 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Song",
"Siyang",
""
],
[
"Spitale",
"Micol",
""
],
[
"Luo",
"Cheng",
""
],
[
"Barquero",
"German",
""
],
[
"Palmero",
"Cristina",
""
],
[
"Escalera",
"Sergio",
""
],
[
"Valstar",
"Michel",
""
],
[
"Baur",
"Tobias",
""
],
[
"Ringeval",
"Fabien",
""
],
[
"Andre",
"Elisabeth",
""
],
[
"Gunes",
"Hatice",
""
]
] |
new_dataset
| 0.99817 |
2306.06598
|
Andrei-Marius Avram
|
Iulian-Marius Tăiatu, Andrei-Marius Avram, Dumitru-Clementin
Cercel and Florin Pop
|
RoBERTweet: A BERT Language Model for Romanian Tweets
|
Accepted at NLDB2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Developing natural language processing (NLP) systems for social media
analysis remains an important topic in artificial intelligence research. This
article introduces RoBERTweet, the first Transformer architecture trained on
Romanian tweets. Our RoBERTweet comes in two versions, following the base and
large architectures of BERT. The corpus used for pre-training the models
represents a novelty for the Romanian NLP community and consists of all tweets
collected from 2008 to 2022. Experiments show that RoBERTweet models outperform
the previous general-domain Romanian and multilingual language models on three
NLP tasks with tweet inputs: emotion detection, sexist language identification,
and named entity recognition. We make our models and the newly created corpus
of Romanian tweets freely available.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 06:11:56 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Tăiatu",
"Iulian-Marius",
""
],
[
"Avram",
"Andrei-Marius",
""
],
[
"Cercel",
"Dumitru-Clementin",
""
],
[
"Pop",
"Florin",
""
]
] |
new_dataset
| 0.956561 |
2306.06656
|
Kailun Yang
|
Xu Zhang, Kailun Yang, Jiacheng Lin, Jin Yuan, Zhiyong Li, Shutao Li
|
VPUFormer: Visual Prompt Unified Transformer for Interactive Image
Segmentation
|
Code will be made publicly available at
https://github.com/XuZhang1211/VPUFormer
| null | null | null |
cs.CV cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The integration of diverse visual prompts like clicks, scribbles, and boxes
in interactive image segmentation could significantly facilitate user
interaction as well as improve interaction efficiency. Most existing studies
focus on a single type of visual prompt by simply concatenating prompts and
images as input for segmentation prediction, which suffers from low-efficiency
prompt representation and weak interaction issues. This paper proposes a simple
yet effective Visual Prompt Unified Transformer (VPUFormer), which introduces a
concise unified prompt representation with deeper interaction to boost the
segmentation performance. Specifically, we design a Prompt-unified Encoder
(PuE) by using Gaussian mapping to generate a unified one-dimensional vector
for click, box, and scribble prompts, which well captures users' intentions as
well as provides a denser representation of user prompts. In addition, we
present a Prompt-to-Pixel Contrastive Loss (P2CL) that leverages user feedback
to gradually refine candidate semantic features, aiming to bring image semantic
features closer to the features that are similar to the user prompt, while
pushing away those image semantic features that are dissimilar to the user
prompt, thereby correcting results that deviate from expectations. On this
basis, our approach injects prompt representations as queries into Dual-cross
Merging Attention (DMA) blocks to perform a deeper interaction between image
and query inputs. A comprehensive variety of experiments on seven challenging
datasets demonstrates that the proposed VPUFormer with PuE, DMA, and P2CL
achieves consistent improvements, yielding state-of-the-art segmentation
performance. Our code will be made publicly available at
https://github.com/XuZhang1211/VPUFormer.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 12:00:33 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Zhang",
"Xu",
""
],
[
"Yang",
"Kailun",
""
],
[
"Lin",
"Jiacheng",
""
],
[
"Yuan",
"Jin",
""
],
[
"Li",
"Zhiyong",
""
],
[
"Li",
"Shutao",
""
]
] |
new_dataset
| 0.997565 |
2306.06683
|
Zainab Zaidi
|
Zainab Zaidi, Mengbin Ye, Shanika Karunasekera, Yoshihisa Kashima
|
To be a pro-vax or not, the COVID-19 vaccine conundrum on Twitter
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The most surprising observation reported by the study in (arXiv:2208.13523),
involving stance detection of COVID-19 vaccine related tweets during the first
year of pandemic, is the presence of a significant number of users (~2 million)
who posted tweets with both anti-vax and pro-vax stances. This is a sizable
cohort even when the stance detection noise is considered. In this paper, we
tried to get a deeper understanding of this 'dual-stance' group. Out of this
group, 60% of users have more pro-vax tweets than anti-vax tweets and 17% have
the same number of tweets in both classes. The rest have more anti-vax tweets,
and they were highly active in expressing concerns about the mandate and safety of
a fast-tracked vaccine, while also tweeting some updates about vaccine
development. The pro-vax-leaning group has the opposite composition: more vaccine
updates and some posts about concerns. It is important to note that vaccine
concerns were not always genuine and had a large dose of misinformation. 43% of
the balanced group have only tweeted one tweet of each type during our study
period and are the less active participants in the vaccine discourse. Our
temporal study also shows that the change-of-stance behaviour became really
significant once the trial results of the COVID-19 vaccine were announced to the
public, and it appears that the change of stance towards pro-vax is a reaction to
people changing their opinion towards anti-vax. Our study finished on Mar 23,
2021, when the conundrum was still going strong. The dilemma might be a
reflection of the uncertain and stressful times, but it also highlights the
importance of building public trust to combat prevalent misinformation.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 13:57:58 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Zaidi",
"Zainab",
""
],
[
"Ye",
"Mengbin",
""
],
[
"Karunasekera",
"Shanika",
""
],
[
"Kashima",
"Yoshihisa",
""
]
] |
new_dataset
| 0.99095 |
2306.06719
|
Andrew Adamatzky
|
Panagiotis Mougkogiannis and Andrew Adamatzky
|
Proteinoid microspheres as proto-neural networks
| null | null | null | null |
cs.ET physics.bio-ph q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proteinoids, also known as thermal proteins, possess a fascinating ability to
generate microspheres that exhibit electrical spikes resembling the action
potentials of neurons. These spiking microspheres, referred to as protoneurons,
hold the potential to assemble into proto-nano-brains. In our study, we
investigate the feasibility of utilizing a promising electrochemical technique
called differential pulse voltammetry (DPV) to interface with proteinoid
nano-brains. We evaluate DPV's suitability by examining critical parameters
such as selectivity, sensitivity, and linearity of the electrochemical
responses. The research systematically explores the influence of various
operational factors, including pulse width, pulse amplitude, scan rate, and
scan time. Encouragingly, our findings indicate that DPV exhibits significant
potential as an efficient electrochemical interface for proteinoid nano-brains.
This technology opens up new avenues for developing artificial neural networks
with broad applications across diverse fields of research.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 16:38:18 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Mougkogiannis",
"Panagiotis",
""
],
[
"Adamatzky",
"Andrew",
""
]
] |
new_dataset
| 0.964096 |
2306.06796
|
Mohsen Heidari
|
Mohsen Heidari, Achilleas Anastasopoulos, S. Sandeep Pradhan
|
On The Reliability Function of Discrete Memoryless Multiple-Access
Channel with Feedback
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The reliability function of a channel is the maximum achievable exponential
rate of decay of the error probability as a function of the transmission rate.
In this work, we derive bounds on the reliability function of discrete
memoryless multiple-access channels (MAC) with noiseless feedback. We show that
our bounds are tight for a variety of MACs, such as $m$-ary additive and two
independent point-to-point channels. The bounds are expressed in terms of a new
information measure called "variable-length directed information". The upper
bound is proved by analyzing stochastic processes defined based on the entropy
of the message, given the past channel's outputs. Our method relies on tools
from the theory of martingales, variable-length information measures, and a new
technique called time pruning. We further propose a variable-length achievable
scheme consisting of three phases: (i) data transmission, (ii) hybrid
data-confirmation, and (iii) full confirmation. We show that two-phase-type
schemes are strictly suboptimal in achieving the MAC's reliability function.
Moreover, we study the shape of the lower-bound and show that it increases
linearly with respect to a specific Euclidean distance measure defined between
the transmission rate pair and the capacity boundary. As side results, we
derive an upper bound on the capacity of MAC with noiseless feedback and study
a new problem involving a hybrid of hypothesis testing and data transmission.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 22:28:26 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Heidari",
"Mohsen",
""
],
[
"Anastasopoulos",
"Achilleas",
""
],
[
"Pradhan",
"S. Sandeep",
""
]
] |
new_dataset
| 0.993612 |
2306.06797
|
Jaskaran Singh
|
Jaskaran Singh
|
VBSF-TLD: Validation-Based Approach for Soft Computing-Inspired Transfer
Learning in Drone Detection
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With the increasing utilization of Internet of Things (IoT) enabled drones in
diverse applications like photography, delivery, and surveillance, concerns
regarding privacy and security have become more prominent. Drones have the
ability to capture sensitive information, compromise privacy, and pose security
risks. As a result, the demand for advanced technology to automate drone
detection has become crucial. This paper presents a project on a transfer-based
drone detection scheme, which forms an integral part of a computer vision-based
module and leverages transfer learning to enhance performance. By harnessing
the knowledge of pre-trained models from a related domain, transfer learning
enables improved results even with limited training data. To evaluate the
scheme's performance, we conducted tests on benchmark datasets, including the
Drone-vs-Bird Dataset and the UAVDT dataset. Notably, the scheme's
effectiveness is highlighted by its IOU-based validation results, demonstrating
the potential of deep learning-based technology in automating drone detection
in critical areas such as airports, military bases, and other high-security
zones.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 22:30:23 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Singh",
"Jaskaran",
""
]
] |
new_dataset
| 0.990131 |
2306.06800
|
Asaad Alghamdi
|
Asaad Alghamdi, Xinyu Duan, Wei Jiang, Zhenhai Wang, Yimeng Wu,
Qingrong Xia, Zhefeng Wang, Yi Zheng, Mehdi Rezagholizadeh, Baoxing Huai,
Peilun Cheng, Abbas Ghaddar
|
AraMUS: Pushing the Limits of Data and Model Scale for Arabic Natural
Language Processing
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing monolingual large Pre-trained Language Models (PLMs) is shown to
be very successful in handling different tasks in Natural Language Processing
(NLP). In this work, we present AraMUS, the largest Arabic PLM with 11B
parameters trained on 529GB of high-quality Arabic textual data. AraMUS
achieves state-of-the-art performances on a diverse set of Arabic
classification and generative tasks. Moreover, AraMUS shows impressive few-shot
learning abilities compared with the best existing Arabic PLMs.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 22:55:18 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Alghamdi",
"Asaad",
""
],
[
"Duan",
"Xinyu",
""
],
[
"Jiang",
"Wei",
""
],
[
"Wang",
"Zhenhai",
""
],
[
"Wu",
"Yimeng",
""
],
[
"Xia",
"Qingrong",
""
],
[
"Wang",
"Zhefeng",
""
],
[
"Zheng",
"Yi",
""
],
[
"Rezagholizadeh",
"Mehdi",
""
],
[
"Huai",
"Baoxing",
""
],
[
"Cheng",
"Peilun",
""
],
[
"Ghaddar",
"Abbas",
""
]
] |
new_dataset
| 0.972285 |
2306.06811
|
Samuel Reinders
|
Samuel Reinders, Swamy Ananthanarayan, Matthew Butler, Kim Marriott
|
Designing Conversational Multimodal 3D Printed Models with People who
are Blind
|
To appear in ACM Designing Interactive Systems Conference (DIS '23),
July 10-14, 2023, Pittsburgh, PA, USA
| null |
10.1145/3563657.3595989
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D printed models have been used to improve access to graphical information
by people who are blind, offering benefits over conventional accessible
graphics. Here we investigate an interactive 3D printed model (I3M) that
combines a conversational interface with haptic vibration and touch to provide
more natural and accessible experiences. Specifically, we co-designed a
multimodal model of the Solar System with nine blind people and evaluated the
prototype with another seven blind participants. We discuss our journey from a
design perspective, focusing on touch, conversational and multimodal
interactions. Based on our experience, we suggest design recommendations that
consider blind users' desire for independence and control, customisation,
comfort and use of prior experience.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 00:44:57 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Reinders",
"Samuel",
""
],
[
"Ananthanarayan",
"Swamy",
""
],
[
"Butler",
"Matthew",
""
],
[
"Marriott",
"Kim",
""
]
] |
new_dataset
| 0.992058 |
2306.06870
|
Sijie Zhao
|
Sijie Zhao, Yixiao Ge, Zhongang Qi, Lin Song, Xiaohan Ding, Zehua Xie,
Ying Shan
|
Sticker820K: Empowering Interactive Retrieval with Stickers
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stickers have become a ubiquitous part of modern-day communication, conveying
complex emotions through visual imagery. To facilitate the development of more
powerful algorithms for analyzing stickers, we propose a large-scale Chinese
sticker dataset, namely Sticker820K, which consists of 820k image-text pairs.
Each sticker has rich and high-quality textual annotations, including
descriptions, optical characters, emotional labels, and style classifications.
Although vision-language tasks in the domain of natural images have been well
studied, directly applying those models, such as CLIP, to sticker data is
not an optimal solution due to the discrepant nature between natural and
emotive image data. Therefore, we propose StickerCLIP as a benchmark model on
the Sticker820K dataset. For the text-to-image retrieval task, our StickerCLIP
demonstrates strong superiority over the CLIP, which achieves an absolute gain
of 66.0\% in mean recall on the Sticker820K test set. Additionally, we endeavor
to extend the recently popularized LLM by means of prompt tuning, integrating
its ability for sticker retrieval and allowing users to retrieve stickers
through instructions. We validate the feasibility of this method, demonstrating
the immense potential of prompt tuning in expanding LLM abilities while not
affecting the quality of upstream tasks.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 05:06:53 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Zhao",
"Sijie",
""
],
[
"Ge",
"Yixiao",
""
],
[
"Qi",
"Zhongang",
""
],
[
"Song",
"Lin",
""
],
[
"Ding",
"Xiaohan",
""
],
[
"Xie",
"Zehua",
""
],
[
"Shan",
"Ying",
""
]
] |
new_dataset
| 0.99972 |
2306.06997
|
Yanbo Wang
|
Yanbo Wang, Letao Liu, Justin Dauwels
|
Slot-VAE: Object-Centric Scene Generation with Slot Attention
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Slot attention has shown remarkable object-centric representation learning
performance in computer vision tasks without requiring any supervision. Despite
its object-centric binding ability brought by compositional modelling, as a
deterministic module, slot attention lacks the ability to generate novel
scenes. In this paper, we propose the Slot-VAE, a generative model that
integrates slot attention with the hierarchical VAE framework for
object-centric structured scene generation. For each image, the model
simultaneously infers a global scene representation to capture high-level scene
structure and object-centric slot representations to embed individual object
components. During generation, slot representations are generated from the
global scene representation to ensure coherent scene structures. Our extensive
evaluation of the scene generation ability indicates that Slot-VAE outperforms
slot representation-based generative baselines in terms of sample quality and
scene structure accuracy.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 09:50:36 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Wang",
"Yanbo",
""
],
[
"Liu",
"Letao",
""
],
[
"Dauwels",
"Justin",
""
]
] |
new_dataset
| 0.978336 |
2306.07054
|
Behrouz Sefid-Dashti
|
Behrouz Sefid-Dashti, Javad Salimi Sartakhti and Hassan Daghigh
|
A UML Profile for Bitcoin Blockchain
|
21 pages, 11 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain has received attention for its potential use in business. Bitcoin
is powered by blockchain, and interest in it has surged in the past few years.
It has many uses that need to be modeled. Modeling is used in many walks of
life to share ideas, reduce complexity, achieve close alignment of one person's
viewpoint with another and provide abstractions of a system at some level of
precision and detail. Software modeling is used in Model Driven Engineering
(MDE), and Domain Specific Languages (DSLs) ease model development and provide
intuitive syntax for domain experts. The present study has designed and
evaluated a meta-model for the bitcoin application domain to facilitate
application development and help in truly understanding bitcoin. The proposed
meta-model, including stereotypes, tagged values, enumerations and a set of
constraints defined by Object Constraint Language (OCL), was defined as a
Unified Modeling Language (UML) profile and was implemented in the Sparx
Enterprise Architect (Sparx EA) modeling tool. A case study developed by our
meta-model is also presented.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 12:02:12 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Sefid-Dashti",
"Behrouz",
""
],
[
"Sartakhti",
"Javad Salimi",
""
],
[
"Daghigh",
"Hassan",
""
]
] |
new_dataset
| 0.998542 |
2306.07087
|
Royden Wagner
|
Royden Wagner, Marvin Klemp, Carlos Fernandez Lopez
|
MaskedFusion360: Reconstruct LiDAR Data by Querying Camera Features
|
Technical report, 6 pages, 4 figures, accepted at ICLR 2023 Tiny
Papers
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In self-driving applications, LiDAR data provides accurate information about
distances in 3D but lacks the semantic richness of camera data. Therefore,
state-of-the-art methods for perception in urban scenes fuse data from both
sensor types. In this work, we introduce a novel self-supervised method to fuse
LiDAR and camera data for self-driving applications. We build upon masked
autoencoders (MAEs) and train deep learning models to reconstruct masked LiDAR
data from fused LiDAR and camera features. In contrast to related methods that
use birds-eye-view representations, we fuse features from dense spherical LiDAR
projections and features from fish-eye camera crops with a similar field of
view. Therefore, we reduce the learned spatial transformations to moderate
perspective transformations and do not require additional modules to generate
dense LiDAR representations. Code is available at:
https://github.com/KIT-MRT/masked-fusion-360
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 13:01:33 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Wagner",
"Royden",
""
],
[
"Klemp",
"Marvin",
""
],
[
"Lopez",
"Carlos Fernandez",
""
]
] |
new_dataset
| 0.990885 |
2306.07117
|
Mourad Heddaya
|
Mourad Heddaya, Solomon Dworkin, Chenhao Tan, Rob Voigt, Alexander
Zentefis
|
Language of Bargaining
|
ACL 2023 Main Conference
| null | null | null |
cs.CL cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Leveraging an established exercise in negotiation education, we build a novel
dataset for studying how the use of language shapes bilateral bargaining. Our
dataset extends existing work in two ways: 1) we recruit participants via
behavioral labs instead of crowdsourcing platforms and allow participants to
negotiate through audio, enabling more naturalistic interactions; 2) we add a
control setting where participants negotiate only through alternating, written
numeric offers. Despite the two contrasting forms of communication, we find that
the average agreed prices of the two treatments are identical. But when
subjects can talk, fewer offers are exchanged, negotiations finish faster, the
likelihood of reaching agreement rises, and the variance of prices at which
subjects agree drops substantially. We further propose a taxonomy of speech
acts in negotiation and enrich the dataset with annotated speech acts. We set
up prediction tasks to predict negotiation success and find that being reactive
to the arguments of the other party is advantageous over driving the
negotiation.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 13:52:01 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Heddaya",
"Mourad",
""
],
[
"Dworkin",
"Solomon",
""
],
[
"Tan",
"Chenhao",
""
],
[
"Voigt",
"Rob",
""
],
[
"Zentefis",
"Alexander",
""
]
] |
new_dataset
| 0.99299 |
2306.07154
|
Jiale Xu
|
Jiale Xu, Xintao Wang, Yan-Pei Cao, Weihao Cheng, Ying Shan, Shenghua
Gao
|
InstructP2P: Learning to Edit 3D Point Clouds with Text Instructions
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Enhancing AI systems to perform tasks following human instructions can
significantly boost productivity. In this paper, we present InstructP2P, an
end-to-end framework for 3D shape editing on point clouds, guided by high-level
textual instructions. InstructP2P extends the capabilities of existing methods
by synergizing the strengths of a text-conditioned point cloud diffusion model,
Point-E, and powerful language models, enabling color and geometry editing
using language instructions. To train InstructP2P, we introduce a new shape
editing dataset, constructed by integrating a shape segmentation dataset,
off-the-shelf shape programs, and diverse edit instructions generated by a
large language model, ChatGPT. Our proposed method allows for editing both
color and geometry of specific regions in a single forward pass, while leaving
other regions unaffected. In our experiments, InstructP2P shows generalization
capabilities, adapting to novel shape categories and instructions, despite
being trained on a limited amount of data.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 14:42:23 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Xu",
"Jiale",
""
],
[
"Wang",
"Xintao",
""
],
[
"Cao",
"Yan-Pei",
""
],
[
"Cheng",
"Weihao",
""
],
[
"Shan",
"Ying",
""
],
[
"Gao",
"Shenghua",
""
]
] |
new_dataset
| 0.998602 |
2306.07183
|
Pier Paolo Tricomi
|
Francesco Luigi De Faveri, Luca Cosuti, Pier Paolo Tricomi, Mauro
Conti
|
Twitter Bots Influence on the Russo-Ukrainian War During the 2022
Italian General Elections
| null | null | null | null |
cs.SI cs.CR cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In February 2022, Russia launched a full-scale invasion of Ukraine. This
event had global repercussions, especially on the political decisions of
European countries. As expected, the role of Italy in the conflict became a
major campaign issue for the Italian General Election held on 25 September
2022. Politicians frequently use Twitter to communicate during political
campaigns, but bots often interfere and attempt to manipulate elections. Hence,
understanding whether bots influenced public opinion regarding the conflict
and, therefore, the elections is essential.
In this work, we investigate how Italian politics responded to the
Russo-Ukrainian conflict on Twitter and whether bots manipulated public opinion
before the 2022 general election. We first analyze 39,611 tweets of six major
political Italian parties to understand how they discussed the war during the
period February-December 2022. Then, we focus on the 360,823 comments under the
last month's posts before the elections, discovering around 12% of the
commenters are bots. By examining their activities, it becomes clear they both
distorted how war topics were treated and influenced real users during the last
month before the elections.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 15:32:25 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"De Faveri",
"Francesco Luigi",
""
],
[
"Cosuti",
"Luca",
""
],
[
"Tricomi",
"Pier Paolo",
""
],
[
"Conti",
"Mauro",
""
]
] |
new_dataset
| 0.959595 |
2306.07186
|
Li Zhang
|
Wenxuan Ge, Xubing Yang, Li Zhang
|
CD-CTFM: A Lightweight CNN-Transformer Network for Remote Sensing Cloud
Detection Fusing Multiscale Features
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clouds in remote sensing images inevitably affect information extraction,
which hinder the following analysis of satellite images. Hence, cloud detection
is a necessary preprocessing procedure. However, the existing methods have
numerous calculations and parameters. In this letter, a lightweight
CNN-Transformer network, CD-CTFM, is proposed to solve the problem. CD-CTFM is
based on encoder-decoder architecture and incorporates the attention mechanism.
In the encoder part, we utilize a lightweight network combining CNN and
Transformer as backbone, which is conducive to extract local and global
features simultaneously. Moreover, a lightweight feature pyramid module is
designed to fuse multiscale features with contextual information. In the
decoder part, we integrate a lightweight channel-spatial attention module into
each skip connection between encoder and decoder, extracting low-level features
while suppressing irrelevant information without introducing many parameters.
Finally, the proposed model is evaluated on two cloud datasets, 38-Cloud and
MODIS. The results demonstrate that CD-CTFM achieves accuracy comparable to the
state-of-the-art methods. At the same time, CD-CTFM outperforms state-of-the-art
methods in terms of efficiency.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 15:37:18 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Ge",
"Wenxuan",
""
],
[
"Yang",
"Xubing",
""
],
[
"Zhang",
"Li",
""
]
] |
new_dataset
| 0.998434 |
2306.07206
|
Shuai Liu
|
Shuai Liu, Hyundong J. Cho, Marjorie Freedman, Xuezhe Ma, Jonathan May
|
RECAP: Retrieval-Enhanced Context-Aware Prefix Encoder for Personalized
Dialogue Response Generation
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Endowing chatbots with a consistent persona is essential to an engaging
conversation, yet it remains an unresolved challenge. In this work, we propose
a new retrieval-enhanced approach for personalized response generation.
Specifically, we design a hierarchical transformer retriever trained on
dialogue domain data to perform personalized retrieval and a context-aware
prefix encoder that fuses the retrieved information to the decoder more
effectively. Extensive experiments on a real-world dataset demonstrate the
effectiveness of our model at generating more fluent and personalized
responses. We quantitatively evaluate our model's performance under a suite of
human and automatic metrics and find it to be superior compared to
state-of-the-art baselines on English Reddit conversations.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 16:10:21 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Liu",
"Shuai",
""
],
[
"Cho",
"Hyundong J.",
""
],
[
"Freedman",
"Marjorie",
""
],
[
"Ma",
"Xuezhe",
""
],
[
"May",
"Jonathan",
""
]
] |
new_dataset
| 0.991719 |
2306.07244
|
Peter Buckel M.Sc.
|
Peter Buckel, Timo Oksanen, Thomas Dietmueller
|
RB-Dust -- A Reference-based Dataset for Vision-based Dust Removal
|
Accepted by CVPR Workshop NTIRE 2023. Errata: Caption Figure 6
changed
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dust in the agricultural landscape is a significant challenge and influences,
for example, the environmental perception of autonomous agricultural machines.
Image enhancement algorithms can be used to reduce dust. However, these require
dusty and dust-free images of the same environment for validation. In fact, to
date, there is no dataset that we are aware of that addresses this issue.
Therefore, we present the agriscapes RB-Dust dataset, which is named after its
purpose of reference-based dust removal. It is not possible to take pictures
from the cabin during tillage, as this would cause shifts in the images.
Because of this, we built a setup from which it is possible to take images from
a stationary position close to the passing tractor. The test setup was based on
a half-sided gate through which the tractor could drive. The field tests were
carried out on a farm in Bavaria, Germany, during tillage. During the field
tests, other parameters such as soil moisture and wind speed were controlled,
as these significantly affect dust development. We validated our dataset with
contrast enhancement and image dehazing algorithms and analyzed the
generalizability from recordings from the moving tractor. Finally, we
demonstrate the application of dust removal based on a high-level vision task,
such as person classification. Our empirical study confirms the validity of
RB-Dust for vision-based dust removal in agriculture.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 17:09:24 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Buckel",
"Peter",
""
],
[
"Oksanen",
"Timo",
""
],
[
"Dietmueller",
"Thomas",
""
]
] |
new_dataset
| 0.999843 |
2109.08079
|
Himanshu Gupta
|
Himanshu Gupta, Shreyas Verma, Santosh Mashetty, Swaroop Mishra
|
Context-NER : Contextual Phrase Generation at Scale
|
29 pages, 5 Figures, 2 Algorithms, 17 Tables. Accepted in NeurIPS
2022 - Efficient Natural Language and Speech Processing (ENLSP) Workshop
| null | null | null |
cs.IR cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Named Entity Recognition (NER) has seen significant progress in recent years,
with numerous state-of-the-art (SOTA) models achieving high performance.
However, very few studies have focused on the generation of entities' context.
In this paper, we introduce CONTEXT-NER, a task that aims to generate the
relevant context for entities in a sentence, where the context is a phrase
describing the entity but not necessarily present in the sentence. To
facilitate research in this task, we also present the EDGAR10-Q dataset, which
consists of annual and quarterly reports from the top 1500 publicly traded
companies. The dataset is the largest of its kind, containing 1M sentences,
2.8M entities, and an average of 35 tokens per sentence, making it a
challenging dataset. We propose a baseline approach that combines a phrase
generation algorithm with inferencing using a 220M language model, achieving a
ROUGE-L score of 27% on the test split. Additionally, we perform a one-shot
inference with ChatGPT, which obtains a 30% ROUGE-L, highlighting the
difficulty of the dataset. We also evaluate models such as T5 and BART, which
achieve a maximum ROUGE-L of 49% after supervised finetuning on EDGAR10-Q. We
also find that T5-large, when pre-finetuned on EDGAR10-Q, achieves SOTA results
on downstream finance tasks such as Headline, FPB, and FiQA SA, outperforming
the vanilla version by 10.81 points. To our surprise, this 66x smaller
pre-finetuned model also surpasses the finance-specific LLM BloombergGPT-50B by
15 points. We hope that our dataset and generated artifacts will encourage
further research in this direction, leading to the development of more
sophisticated language models for financial text analysis.
|
[
{
"version": "v1",
"created": "Thu, 16 Sep 2021 16:10:05 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Oct 2022 05:33:28 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Oct 2022 04:49:28 GMT"
},
{
"version": "v4",
"created": "Thu, 8 Jun 2023 18:33:01 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Gupta",
"Himanshu",
""
],
[
"Verma",
"Shreyas",
""
],
[
"Mashetty",
"Santosh",
""
],
[
"Mishra",
"Swaroop",
""
]
] |
new_dataset
| 0.999408 |
2202.06268
|
Nannan Li
|
Nannan Li, Yaran Chen, Weifan Li, Zixiang Ding, Dongbin Zhao
|
BViT: Broad Attention based Vision Transformer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent works have demonstrated that transformer can achieve promising
performance in computer vision, by exploiting the relationship among image
patches with self-attention. However, they only consider the attention in a single
feature layer and ignore the complementarity of attention at different levels.
In this paper, we propose the broad attention to improve the performance by
incorporating the attention relationship of different layers for vision
transformer, which is called BViT. The broad attention is implemented by broad
connection and parameter-free attention. Broad connection of each transformer
layer promotes the transmission and integration of information for BViT.
Without introducing additional trainable parameters, parameter-free attention
jointly focuses on the already available attention information in different
layers for extracting useful information and building their relationship.
Experiments on image classification tasks demonstrate that BViT delivers
state-of-the-art top-1 accuracy of 74.8\%/81.6\% on ImageNet with
5M/22M parameters. Moreover, we transfer BViT to downstream object recognition
benchmarks to achieve 98.9\% and 89.9\% on CIFAR10 and CIFAR100 respectively
that exceed ViT with fewer parameters. For the generalization test, the broad
attention in Swin Transformer and T2T-ViT also brings an improvement of more
than 1\%. To sum up, broad attention is promising to promote the performance of
attention based models. Code and pre-trained models are available at
https://github.com/DRL-CASIA/Broad_ViT.
|
[
{
"version": "v1",
"created": "Sun, 13 Feb 2022 09:23:29 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 06:08:37 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Li",
"Nannan",
""
],
[
"Chen",
"Yaran",
""
],
[
"Li",
"Weifan",
""
],
[
"Ding",
"Zixiang",
""
],
[
"Zhao",
"Dongbin",
""
]
] |
new_dataset
| 0.981373 |
2207.03579
|
Xuanwen Huang
|
Xuanwen Huang, Yang Yang, Yang Wang, Chunping Wang, Zhisheng Zhang,
Jiarong Xu, Lei Chen, Michalis Vazirgiannis
|
DGraph: A Large-Scale Financial Dataset for Graph Anomaly Detection
|
Accepted to NeurIPS 2022. Dataset Url: https://dgraph.xinye.com/
| null | null | null |
cs.SI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Anomaly Detection (GAD) has recently become a hot research spot due to
its practicability and theoretical value. Since GAD emphasizes the application
and the rarity of anomalous samples, enriching the varieties of its datasets is
fundamental work. Thus, this paper presents DGraph, a real-world dynamic graph
in the finance domain. DGraph overcomes many limitations of current GAD
datasets. It contains about 3M nodes, 4M dynamic edges, and 1M ground-truth
nodes. We provide a comprehensive observation of DGraph, revealing that
anomalous nodes and normal nodes generally have different structures, neighbor
distribution, and temporal dynamics. Moreover, it suggests that unlabeled nodes
are also essential for detecting fraudsters. Furthermore, we conduct extensive
experiments on DGraph. Observation and experiments demonstrate that DGraph is
propulsive to advance GAD research and enable in-depth exploration of anomalous
nodes.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 07:16:03 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 02:53:29 GMT"
},
{
"version": "v3",
"created": "Thu, 25 May 2023 09:00:07 GMT"
},
{
"version": "v4",
"created": "Fri, 9 Jun 2023 11:37:55 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Huang",
"Xuanwen",
""
],
[
"Yang",
"Yang",
""
],
[
"Wang",
"Yang",
""
],
[
"Wang",
"Chunping",
""
],
[
"Zhang",
"Zhisheng",
""
],
[
"Xu",
"Jiarong",
""
],
[
"Chen",
"Lei",
""
],
[
"Vazirgiannis",
"Michalis",
""
]
] |
new_dataset
| 0.999096 |
2211.03064
|
Weiyan Xie
|
Weiyan Xie, Xiao-Hui Li, Caleb Chen Cao, Nevin L. Zhang
|
ViT-CX: Causal Explanation of Vision Transformers
|
IJCAI2023 Camera-ready
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite the popularity of Vision Transformers (ViTs) and eXplainable AI
(XAI), only a few explanation methods have been designed specially for ViTs
thus far. They mostly use attention weights of the [CLS] token on patch
embeddings and often produce unsatisfactory saliency maps. This paper proposes
a novel method for explaining ViTs called ViT-CX. It is based on patch
embeddings, rather than attentions paid to them, and their causal impacts on
the model output. Other characteristics of ViTs such as causal
overdetermination are also considered in the design of ViT-CX. The empirical
results show that ViT-CX produces more meaningful saliency maps and does a
better job revealing all important evidence for the predictions than previous
methods. The explanation generated by ViT-CX also shows significantly better
faithfulness to the model. The codes and appendix are available at
https://github.com/vaynexie/CausalX-ViT.
|
[
{
"version": "v1",
"created": "Sun, 6 Nov 2022 09:06:16 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Feb 2023 03:08:06 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Jun 2023 08:32:06 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Xie",
"Weiyan",
""
],
[
"Li",
"Xiao-Hui",
""
],
[
"Cao",
"Caleb Chen",
""
],
[
"Zhang",
"Nevin L.",
""
]
] |
new_dataset
| 0.957482 |
2303.14822
|
Xinlei He
|
Xinlei He and Xinyue Shen and Zeyuan Chen and Michael Backes and Yang
Zhang
|
MGTBench: Benchmarking Machine-Generated Text Detection
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays large language models (LLMs) have shown revolutionary power in a
variety of natural language processing (NLP) tasks such as text classification,
sentiment analysis, language translation, and question-answering. In this way,
detecting machine-generated texts (MGTs) is becoming increasingly important as
LLMs become more advanced and prevalent. These models can generate human-like
language that can be difficult to distinguish from text written by a human,
which raises concerns about authenticity, accountability, and potential bias.
However, existing detection methods against MGTs are evaluated under different
model architectures, datasets, and experimental settings, resulting in a lack
of a comprehensive evaluation framework across different methodologies.
In this paper, we fill this gap by proposing the first benchmark framework
for MGT detection, named MGTBench. Extensive evaluations on public datasets
with curated answers generated by ChatGPT (the most representative and powerful
LLMs thus far) show that most of the current detection methods perform less
satisfactorily against MGTs. An exceptional case is ChatGPT Detector, which is
trained with ChatGPT-generated texts and shows great performance in detecting
MGTs. Nonetheless, we note that only a small fraction of adversarial-crafted
perturbations on MGTs can evade the ChatGPT Detector, thus highlighting the
need for more robust MGT detection methods. We envision that MGTBench will
serve as a benchmark tool to accelerate future investigations involving the
evaluation of state-of-the-art MGT detection methods on their respective
datasets and the development of more advanced MGT detection methods. Our source
code and datasets are available at https://github.com/xinleihe/MGTBench.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 21:12:36 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 06:50:57 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"He",
"Xinlei",
""
],
[
"Shen",
"Xinyue",
""
],
[
"Chen",
"Zeyuan",
""
],
[
"Backes",
"Michael",
""
],
[
"Zhang",
"Yang",
""
]
] |
new_dataset
| 0.999502 |
2303.15429
|
Okko Makkonen
|
Okko Makkonen, Elif Saçıkara, Camilla Hollanti
|
Algebraic Geometry Codes for Secure Distributed Matrix Multiplication
|
16 pages, 1 figure
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel construction for secure distributed matrix
multiplication (SDMM) based on algebraic geometry (AG) codes, which we call the
PoleGap SDMM scheme. The proposed construction is inspired by the GASP code,
where so-called gaps in a certain polynomial are utilized to achieve higher
communication rates. Our construction considers the gaps in a Weierstrass
semigroup of a rational place in an algebraic function field to achieve a
similar increase in the rate. This construction shows that there is potential
in utilizing AG codes and their subcodes in SDMM since we demonstrate a better
performance compared to state-of-the-art schemes in some parameter regimes.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 17:53:25 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 10:05:44 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Makkonen",
"Okko",
""
],
[
"Saçıkara",
"Elif",
""
],
[
"Hollanti",
"Camilla",
""
]
] |
new_dataset
| 0.999283 |
2303.16282
|
Adam Caulfield
|
Adam Caulfield, Norrathep Rattanavipanon, Ivan De Oliveira Nunes
|
ACFA: Secure Runtime Auditing & Guaranteed Device Healing via Active
Control Flow Attestation
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low-end embedded devices are increasingly used in various smart applications
and spaces. They are implemented under strict cost and energy budgets, using
microcontroller units (MCUs) that lack security features available in
general-purpose processors. In this context, Remote Attestation (RA) was
proposed as an inexpensive security service to enable a verifier (Vrf) to
remotely detect illegal modifications to a software binary installed on a
low-end prover MCU (Prv). Since attacks that hijack the software's control flow
can evade RA, Control Flow Attestation (CFA) augments RA with information about
the exact order in which instructions in the binary are executed, enabling
detection of control flow attacks. We observe that current CFA architectures
can not guarantee that Vrf ever receives control flow reports in case of
attacks. In turn, while they support exploit detection, they provide no means
to pinpoint the exploit origin. Furthermore, existing CFA requires either
binary instrumentation, incurring significant runtime overhead and code size
increase, or relatively expensive hardware support, such as hash engines. In
addition, current techniques are neither continuous (only meant to attest
self-contained operations) nor active (offer no secure means to remotely
remediate detected compromises). To jointly address these challenges, we
propose ACFA: a hybrid (hardware/software) architecture for Active CFA. ACFA
enables continuous monitoring of all control flow transfers in the MCU and does
not require binary instrumentation. It also leverages the recently proposed
concept of Active Roots-of-Trust to enable secure auditing of vulnerability
sources and guaranteed remediation when a compromise is detected. We provide an
open-source reference implementation of ACFA on top of a commodity low-end MCU
(TI MSP430) and evaluate it to demonstrate its security and cost-effectiveness.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 19:51:00 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 15:29:14 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Caulfield",
"Adam",
""
],
[
"Rattanavipanon",
"Norrathep",
""
],
[
"Nunes",
"Ivan De Oliveira",
""
]
] |
new_dataset
| 0.988865 |
2305.05301
|
Clementin Boittiaux
|
Clémentin Boittiaux (IFREMER, COSMER, DYNI), Claire Dune (COSMER),
Maxime Ferrera (IFREMER), Aurélien Arnaubec (IFREMER), Ricard Marxer
(DYNI), Marjolaine Matabos (BEEP), Loïc Van Audenhaege (BEEP), Vincent
Hugel (COSMER)
|
Eiffel Tower: A Deep-Sea Underwater Dataset for Long-Term Visual
Localization
|
The International Journal of Robotics Research, In press
| null |
10.1177/02783649231177322
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual localization plays an important role in the positioning and navigation
of robotics systems within previously visited environments. When visits occur
over long periods of time, changes in the environment related to seasons or
day-night cycles present a major challenge. Under water, the sources of
variability are due to other factors such as water conditions or growth of
marine organisms. Yet it remains a major obstacle and a much less studied one,
partly due to the lack of data. This paper presents a new deep-sea dataset to
benchmark underwater long-term visual localization. The dataset is composed of
images from four visits to the same hydrothermal vent edifice over the course
of five years. Camera poses and a common geometry of the scene were estimated
using navigation data and Structure-from-Motion. This serves as a reference
when evaluating visual localization techniques. An analysis of the data
provides insights about the major changes observed throughout the years.
Furthermore, several well-established visual localization methods are evaluated
on the dataset, showing there is still room for improvement in underwater
long-term visual localization. The data is made publicly available at
https://www.seanoe.org/data/00810/92226/.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 09:43:27 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Boittiaux",
"Clémentin",
"",
"IFREMER, COSMER, DYNI"
],
[
"Dune",
"Claire",
"",
"COSMER"
],
[
"Ferrera",
"Maxime",
"",
"IFREMER"
],
[
"Arnaubec",
"Aurélien",
"",
"IFREMER"
],
[
"Marxer",
"Ricard",
"",
"DYNI"
],
[
"Matabos",
"Marjolaine",
"",
"BEEP"
],
[
"Van Audenhaege",
"Loïc",
"",
"BEEP"
],
[
"Hugel",
"Vincent",
"",
"COSMER"
]
] |
new_dataset
| 0.999743 |
2305.11255
|
Hao Fei
|
Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua
|
Reasoning Implicit Sentiment with Chain-of-Thought Prompting
|
ACL2023 Short Paper
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While sentiment analysis systems try to determine the sentiment polarities of
given targets based on the key opinion expressions in input texts, in implicit
sentiment analysis (ISA) the opinion cues come in an implicit and obscure
manner. Thus detecting implicit sentiment requires the common-sense and
multi-hop reasoning ability to infer the latent intent of opinion. Inspired by
the recent chain-of-thought (CoT) idea, in this work we introduce a Three-hop
Reasoning (THOR) CoT framework to mimic the human-like reasoning process for
ISA. We design a three-step prompting principle for THOR to step-by-step induce
the implicit aspect, opinion, and finally the sentiment polarity. Our
THOR+Flan-T5 (11B) pushes the state-of-the-art (SoTA) by over 6% F1 on
supervised setup. More strikingly, THOR+GPT3 (175B) boosts the SoTA by over 50%
F1 on zero-shot setting. Our code is open at
https://github.com/scofield7419/THOR-ISA.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 18:38:32 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 03:57:57 GMT"
},
{
"version": "v3",
"created": "Sat, 3 Jun 2023 03:56:23 GMT"
},
{
"version": "v4",
"created": "Fri, 9 Jun 2023 01:27:58 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Fei",
"Hao",
""
],
[
"Li",
"Bobo",
""
],
[
"Liu",
"Qian",
""
],
[
"Bing",
"Lidong",
""
],
[
"Li",
"Fei",
""
],
[
"Chua",
"Tat-Seng",
""
]
] |
new_dataset
| 0.967885 |
2305.15213
|
Qian Wang
|
Wei Zhou, Qian Wang, Weiwei Jin, Xinzhe Shi, Ying He
|
GTNet: Graph Transformer Network for 3D Point Cloud Classification and
Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, graph-based and Transformer-based deep learning networks have
demonstrated excellent performances on various point cloud tasks. Most of the
existing graph methods are based on static graphs, which take a fixed input to
establish graph relations. Moreover, many graph methods apply maximization and
averaging to aggregate neighboring features, so that only a single neighboring
point affects the feature of the centroid or different neighboring points have the
same influence on the centroid's feature, ignoring the correlation and
difference between points. Most Transformer-based methods extract point cloud
features based on global attention and lack the feature learning on local
neighbors. To solve the problems of these two types of models, we propose a new
feature extraction block named Graph Transformer and construct a 3D point
cloud learning network called GTNet to learn features of point clouds on local
and global patterns. Graph Transformer integrates the advantages of graph-based
and Transformer-based methods, and consists of Local Transformer and Global
Transformer modules. Local Transformer uses a dynamic graph to calculate all
neighboring point weights by intra-domain cross-attention with dynamically
updated graph relations, so that every neighboring point could affect the
features of the centroid with different weights; Global Transformer enlarges the
receptive field of Local Transformer by a global self-attention. In addition,
to avoid vanishing gradients caused by the increasing depth of the
network, we apply residual connections to centroid features in GTNet; we also
adopt the features of centroid and neighbors to generate the local geometric
descriptors in Local Transformer to strengthen the local information learning
capability of the model. Finally, we use GTNet for shape classification, part
segmentation and semantic segmentation tasks in this paper.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 14:51:18 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 14:23:12 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Zhou",
"Wei",
""
],
[
"Wang",
"Qian",
""
],
[
"Jin",
"Weiwei",
""
],
[
"Shi",
"Xinzhe",
""
],
[
"He",
"Ying",
""
]
] |
new_dataset
| 0.960677 |
2305.16321
|
Dor Verbin
|
Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Todd
Zickler, Pratul P. Srinivasan
|
Eclipse: Disambiguating Illumination and Materials using Unintended
Shadows
|
Project page: https://dorverbin.github.io/eclipse/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decomposing an object's appearance into representations of its materials and
the surrounding illumination is difficult, even when the object's 3D shape is
known beforehand. This problem is ill-conditioned because diffuse materials
severely blur incoming light, and is ill-posed because diffuse materials under
high-frequency lighting can be indistinguishable from shiny materials under
low-frequency lighting. We show that it is possible to recover precise
materials and illumination -- even from diffuse objects -- by exploiting
unintended shadows, like the ones cast onto an object by the photographer who
moves around it. These shadows are a nuisance in most previous inverse
rendering pipelines, but here we exploit them as signals that improve
conditioning and help resolve material-lighting ambiguities. We present a
method based on differentiable Monte Carlo ray tracing that uses images of an
object to jointly recover its spatially-varying materials, the surrounding
illumination environment, and the shapes of the unseen light occluders who
inadvertently cast shadows upon it.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:59:52 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 21:34:12 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Verbin",
"Dor",
""
],
[
"Mildenhall",
"Ben",
""
],
[
"Hedman",
"Peter",
""
],
[
"Barron",
"Jonathan T.",
""
],
[
"Zickler",
"Todd",
""
],
[
"Srinivasan",
"Pratul P.",
""
]
] |
new_dataset
| 0.999636 |
2305.16744
|
Huaxiaoyue Wang
|
Huaxiaoyue Wang, Gonzalo Gonzalez-Pumariega, Yash Sharma, Sanjiban
Choudhury
|
Demo2Code: From Summarizing Demonstrations to Synthesizing Code via
Extended Chain-of-Thought
|
10 pages (not including references and appendix), 14 figures (7 in
main paper, 7 in appendix); (v2) added additional references to section 2 and
9, added acknowledgement section
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Language instructions and demonstrations are two natural ways for users to
teach robots personalized tasks. Recent progress in Large Language Models
(LLMs) has shown impressive performance in translating language instructions
into code for robotic tasks. However, translating demonstrations into task code
continues to be a challenge due to the length and complexity of both
demonstrations and code, making learning a direct mapping intractable. This
paper presents Demo2Code, a novel framework that generates robot task code from
demonstrations via an extended chain-of-thought and defines a common latent
specification to connect the two. Our framework employs a robust two-stage
process: (1) a recursive summarization technique that condenses demonstrations
into concise specifications, and (2) a code synthesis approach that expands
each function recursively from the generated specifications. We conduct
extensive evaluation on various robot task benchmarks, including a novel game
benchmark Robotouille, designed to simulate diverse cooking tasks in a kitchen
environment. The project's website is available at
https://portal-cornell.github.io/demo2code-webpage
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 08:47:42 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 18:39:08 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Wang",
"Huaxiaoyue",
""
],
[
"Gonzalez-Pumariega",
"Gonzalo",
""
],
[
"Sharma",
"Yash",
""
],
[
"Choudhury",
"Sanjiban",
""
]
] |
new_dataset
| 0.995924 |
2306.01741
|
Naoki Wake
|
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu,
Katsushi Ikeuchi
|
GPT Models Meet Robotic Applications: Co-Speech Gesturing Chat System
| null | null | null | null |
cs.RO cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This technical paper introduces a chatting robot system that utilizes recent
advancements in large-scale language models (LLMs) such as GPT-3 and ChatGPT.
The system is integrated with a co-speech gesture generation system, which
selects appropriate gestures based on the conceptual meaning of speech. Our
motivation is to explore ways of utilizing the recent progress in LLMs for
practical robotic applications, which benefits the development of both chatbots
and LLMs. Specifically, it enables the development of highly responsive chatbot
systems by leveraging LLMs and adds visual effects to the user interface of
LLMs as an additional value. The source code for the system is available on
GitHub for our in-house robot
(https://github.com/microsoft/LabanotationSuite/tree/master/MSRAbotChatSimulation)
and GitHub for Toyota HSR
(https://github.com/microsoft/GPT-Enabled-HSR-CoSpeechGestures).
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 10:14:16 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Wake",
"Naoki",
""
],
[
"Kanehira",
"Atsushi",
""
],
[
"Sasabuchi",
"Kazuhiro",
""
],
[
"Takamatsu",
"Jun",
""
],
[
"Ikeuchi",
"Katsushi",
""
]
] |
new_dataset
| 0.977541 |
2306.01985
|
Xuhui Zhou
|
Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D.
Hwang, Swabha Swayamdipta, Maarten Sap
|
COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive
Statements
|
Accepted to Findings of ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Warning: This paper contains content that may be offensive or upsetting.
Understanding the harms and offensiveness of statements requires reasoning
about the social and situational context in which statements are made. For
example, the utterance "your English is very good" may implicitly signal an
insult when uttered by a white man to a non-white colleague, but uttered by an
ESL teacher to their student would be interpreted as a genuine compliment. Such
contextual factors have been largely ignored by previous approaches to toxic
language detection. We introduce COBRA frames, the first context-aware
formalism for explaining the intents, reactions, and harms of offensive or
biased statements grounded in their social and situational context. We create
COBRACORPUS, a dataset of 33k potentially offensive statements paired with
machine-generated contexts and free-text explanations of offensiveness, implied
biases, speaker intents, and listener reactions. To study the contextual
dynamics of offensiveness, we train models to generate COBRA explanations, with
and without access to the context. We find that explanations by
context-agnostic models are significantly worse than by context-aware ones,
especially in situations where the context inverts the statement's
offensiveness (29% accuracy drop). Our work highlights the importance and
feasibility of contextualized NLP by modeling social factors.
|
[
{
"version": "v1",
"created": "Sat, 3 Jun 2023 02:47:24 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 01:49:06 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Zhou",
"Xuhui",
""
],
[
"Zhu",
"Hao",
""
],
[
"Yerukola",
"Akhila",
""
],
[
"Davidson",
"Thomas",
""
],
[
"Hwang",
"Jena D.",
""
],
[
"Swayamdipta",
"Swabha",
""
],
[
"Sap",
"Maarten",
""
]
] |
new_dataset
| 0.993966 |
2306.02140
|
Qingxin Xia
|
Qingxin Xia and Takuya Maekawa and Takahiro Hara
|
Unsupervised Human Activity Recognition through Two-stage Prompting with
ChatGPT
|
4 pages
| null | null | null |
cs.HC cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Wearable sensor devices, which offer the advantage of recording daily objects
used by a person while performing an activity, enable the feasibility of
unsupervised Human Activity Recognition (HAR). Unfortunately, previous
unsupervised approaches using the usage sequence of objects usually require a
proper description of activities manually prepared by humans. Instead, we
leverage the knowledge embedded in ChatGPT, a Large Language Model (LLM).
Because the sequence of objects robustly characterizes the activity identity,
it is possible that ChatGPT already learned the association between activities
and objects from existing contexts. However, previous prompt engineering for
ChatGPT exhibits limited generalization ability when dealing with a list of
words (i.e., sequence of objects) due to the similar weighting assigned to each
word in the list. In this study, we propose a two-stage prompt engineering,
which first guides ChatGPT to generate activity descriptions associated with
objects while emphasizing important objects for distinguishing similar
activities; then outputs activity classes and explanations for enhancing the
contexts that are helpful for HAR. To the best of our knowledge, this is the
first study that utilizes ChatGPT to recognize activities using objects in an
unsupervised manner. We evaluated our approach on three datasets and
demonstrated state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Sat, 3 Jun 2023 15:41:59 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Xia",
"Qingxin",
""
],
[
"Maekawa",
"Takuya",
""
],
[
"Hara",
"Takahiro",
""
]
] |
new_dataset
| 0.989436 |
2306.02245
|
Dingyuan Zhang
|
Dingyuan Zhang, Dingkang Liang, Hongcheng Yang, Zhikang Zou, Xiaoqing
Ye, Zhe Liu, Xiang Bai
|
SAM3D: Zero-Shot 3D Object Detection via Segment Anything Model
|
Technical Report. The code is released at
https://github.com/DYZhang09/SAM3D
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of large language models, many remarkable linguistic
systems like ChatGPT have thrived and achieved astonishing success on many
tasks, showing the incredible power of foundation models. In the spirit of
unleashing the capability of foundation models on vision tasks, the Segment
Anything Model (SAM), a vision foundation model for image segmentation, has
been proposed recently and presents strong zero-shot ability on many downstream
2D tasks. However, whether SAM can be adapted to 3D vision tasks has yet to be
explored, especially 3D object detection. With this inspiration, we explore
adapting the zero-shot ability of SAM to 3D object detection in this paper. We
propose a SAM-powered BEV processing pipeline to detect objects and get
promising results on the large-scale Waymo open dataset. As an early attempt,
our method takes a step toward 3D object detection with vision foundation
models and presents the opportunity to unleash their power on 3D vision tasks.
The code is released at https://github.com/DYZhang09/SAM3D.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 03:09:21 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Zhang",
"Dingyuan",
""
],
[
"Liang",
"Dingkang",
""
],
[
"Yang",
"Hongcheng",
""
],
[
"Zou",
"Zhikang",
""
],
[
"Ye",
"Xiaoqing",
""
],
[
"Liu",
"Zhe",
""
],
[
"Bai",
"Xiang",
""
]
] |
new_dataset
| 0.999599 |
2306.02329
|
Alexandros Delitzas
|
Alexandros Delitzas, Maria Parelli, Nikolas Hars, Georgios Vlassis,
Sotirios Anagnostidis, Gregor Bachmann, Thomas Hofmann
|
Multi-CLIP: Contrastive Vision-Language Pre-training for Question
Answering tasks in 3D Scenes
|
The first two authors contributed equally. arXiv admin note: text
overlap with arXiv:2304.06061
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training models to apply common-sense linguistic knowledge and visual
concepts from 2D images to 3D scene understanding is a promising direction that
researchers have only recently started to explore. However, it still remains
understudied whether 2D distilled knowledge can provide useful representations
for downstream 3D vision-language tasks such as 3D question answering. In this
paper, we propose a novel 3D pre-training Vision-Language method, namely
Multi-CLIP, that enables a model to learn language-grounded and transferable 3D
scene point cloud representations. We leverage the representational power of
the CLIP model by maximizing the agreement between the encoded 3D scene
features and the corresponding 2D multi-view image and text embeddings in the
CLIP space via a contrastive objective. To validate our approach, we consider
the challenging downstream tasks of 3D Visual Question Answering (3D-VQA) and
3D Situated Question Answering (3D-SQA). To this end, we develop novel
multi-modal transformer-based architectures and we demonstrate how our
pre-training method can benefit their performance. Quantitative and qualitative
experimental results show that Multi-CLIP outperforms state-of-the-art works
across the downstream tasks of 3D-VQA and 3D-SQA and leads to a well-structured
3D scene feature space.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 11:08:53 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Delitzas",
"Alexandros",
""
],
[
"Parelli",
"Maria",
""
],
[
"Hars",
"Nikolas",
""
],
[
"Vlassis",
"Georgios",
""
],
[
"Anagnostidis",
"Sotirios",
""
],
[
"Bachmann",
"Gregor",
""
],
[
"Hofmann",
"Thomas",
""
]
] |
new_dataset
| 0.973123 |
2306.03584
|
Zhengping Che
|
Haowen Wang, Zhengping Che, Mingyuan Wang, Zhiyuan Xu, Xiuquan Qiao,
Mengshi Qi, Feifei Feng, Jian Tang
|
RDFC-GAN: RGB-Depth Fusion CycleGAN for Indoor Depth Completion
|
Haowen Wang and Zhengping Che are with equal contributions. Under
review. An earlier version has been accepted by CVPR 2022 (arXiv:2203.10856).
arXiv admin note: substantial text overlap with arXiv:2203.10856
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The raw depth image captured by indoor depth sensors usually has an extensive
range of missing depth values due to inherent limitations such as the inability
to perceive transparent objects and the limited distance range. The incomplete
depth map with missing values burdens many downstream vision tasks, and a
rising number of depth completion methods have been proposed to alleviate this
issue. While most existing methods can generate accurate dense depth maps from
sparse and uniformly sampled depth maps, they are not suitable for
complementing large contiguous regions of missing depth values, which is common
and critical in images captured in indoor environments. To overcome these
challenges, we design a novel two-branch end-to-end fusion network named
RDFC-GAN, which takes a pair of RGB and incomplete depth images as input to
predict a dense and completed depth map. The first branch employs an
encoder-decoder structure, by adhering to the Manhattan world assumption and
utilizing normal maps from RGB-D information as guidance, to regress the local
dense depth values from the raw depth map. In the other branch, we propose an
RGB-depth fusion CycleGAN to transfer the RGB image to the fine-grained
textured depth map. We adopt adaptive fusion modules named W-AdaIN to propagate
the features across the two branches, and we append a confidence fusion head to
fuse the two outputs of the branches for the final depth map. Extensive
experiments on NYU-Depth V2 and SUN RGB-D demonstrate that our proposed method
clearly improves the depth completion performance, especially in a more
realistic setting of indoor environments, with the help of our proposed pseudo
depth maps in training.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 11:03:05 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Wang",
"Haowen",
""
],
[
"Che",
"Zhengping",
""
],
[
"Wang",
"Mingyuan",
""
],
[
"Xu",
"Zhiyuan",
""
],
[
"Qiao",
"Xiuquan",
""
],
[
"Qi",
"Mengshi",
""
],
[
"Feng",
"Feifei",
""
],
[
"Tang",
"Jian",
""
]
] |
new_dataset
| 0.994225 |
2306.05176
|
Leilei Wang
|
Leilei Wang
|
RRWKV: Capturing Long-range Dependencies in RWKV
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Owing to the impressive dot-product attention, the Transformers have been the
dominant architectures in various natural language processing (NLP) tasks.
Recently, the Receptance Weighted Key Value (RWKV) architecture follows a
non-transformer architecture to eliminate the drawbacks of dot-product
attention, where memory and computational complexity exhibit quadratic scaling
with sequence length. Although RWKV has exploited a linear tensor-product
attention mechanism and achieved parallelized computations by deploying the
time-sequential mode, it fails to capture long-range dependencies because of
its limited ability to look back at previous information, compared with the full
information obtained by direct interactions in the standard transformer.
Therefore, the paper devises the Retrospected Receptance Weighted Key Value
(RRWKV) architecture by incorporating the retrospecting ability into the RWKV
to effectively absorb information, which maintains memory and computational
efficiency as well.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 13:17:06 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 02:56:20 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Wang",
"Leilei",
""
]
] |
new_dataset
| 0.958536 |
2306.05431
|
Jieh-Sheng Lee
|
Jieh-Sheng Lee
|
LexGPT 0.1: pre-trained GPT-J models with Pile of Law
|
10 pages and 2 figures. To be published in the Proceedings of the
Seventeenth International Workshop on Juris-informatics (JURISIN 2023),
hosted by JSAI International Symposia on AI 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This research aims to build generative language models specialized for the
legal domain. The manuscript presents the development of LexGPT models based on
GPT-J models and pre-trained with Pile of Law. The foundation model built in
this manuscript is the initial step for the development of future applications
in the legal domain, such as further training with reinforcement learning from
human feedback. Another objective of this manuscript is to assist legal
professionals in utilizing language models through the ``No Code'' approach. By
fine-tuning models with specialized data and without modifying any source code,
legal professionals can create custom language models for downstream tasks with
minimum effort and technical knowledge. The downstream task in this manuscript
is to turn a LexGPT model into a classifier, although the performance is
notably lower than the state-of-the-art result. How to enhance downstream task
performance without modifying the model or its source code is a research topic
for future exploration.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 08:42:59 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Lee",
"Jieh-Sheng",
""
]
] |
new_dataset
| 0.991684 |
2306.05443
|
Qianqian Xie
|
Qianqian Xie, Weiguang Han, Xiao Zhang, Yanzhao Lai, Min Peng,
Alejandro Lopez-Lira, Jimin Huang
|
PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark
for Finance
|
12 pages, 1 figures
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although large language models (LLMs) have shown great performance on natural
language processing (NLP) in the financial domain, there are no publicly
available financially tailored LLMs, instruction tuning datasets, and evaluation
benchmarks, which are critical for continually pushing forward the open-source
development of financial artificial intelligence (AI). This paper introduces
PIXIU, a comprehensive framework including the first financial LLM based on
fine-tuning LLaMA with instruction data, the first instruction data with 136K
data samples to support the fine-tuning, and an evaluation benchmark with 5
tasks and 9 datasets. We first construct the large-scale multi-task instruction
data considering a variety of financial tasks, financial document types, and
financial data modalities. We then propose a financial LLM called FinMA by
fine-tuning LLaMA with the constructed dataset to be able to follow
instructions for various financial tasks. To support the evaluation of
financial LLMs, we propose a standardized benchmark that covers a set of
critical financial tasks, including five financial NLP tasks and one financial
prediction task. With this benchmark, we conduct a detailed analysis of FinMA
and several existing LLMs, uncovering their strengths and weaknesses in
handling critical financial tasks. The model, datasets, benchmark, and
experimental results are open-sourced to facilitate future research in
financial AI.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 14:20:29 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Xie",
"Qianqian",
""
],
[
"Han",
"Weiguang",
""
],
[
"Zhang",
"Xiao",
""
],
[
"Lai",
"Yanzhao",
""
],
[
"Peng",
"Min",
""
],
[
"Lopez-Lira",
"Alejandro",
""
],
[
"Huang",
"Jimin",
""
]
] |
new_dataset
| 0.999765 |
2306.05523
|
Parth Patwa
|
Megha Chakraborty, Khusbu Pahwa, Anku Rani, Adarsh Mahor, Aditya
Pakala, Arghya Sarkar, Harshit Dave, Ishan Paul, Janvita Reddy, Preethi
Gurumurthy, Ritvik G, Samahriti Mukherjee, Shreyas Chatterjee, Kinjal
Sensharma, Dwip Dalal, Suryavardan S, Shreyash Mishra, Parth Patwa, Aman
Chadha, Amit Sheth, Amitava Das
|
FACTIFY3M: A Benchmark for Multimodal Fact Verification with
Explainability through 5W Question-Answering
|
arXiv admin note: text overlap with arXiv:2305.04329
| null | null | null |
cs.CL cs.AI cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Combating disinformation is one of the burning societal crises -- about 67%
of the American population believes that disinformation produces a lot of
uncertainty, and 10% of them knowingly propagate disinformation. Evidence shows
that disinformation can manipulate democratic processes and public opinion,
causing disruption in the share market, panic and anxiety in society, and even
death during crises. Therefore, disinformation should be identified promptly
and, if possible, mitigated. With approximately 3.2 billion images and 720,000
hours of video shared online daily on social media platforms, scalable
detection of multimodal disinformation requires efficient fact verification.
Despite progress in automatic text-based fact verification (e.g., FEVER, LIAR),
the research community lacks substantial effort in multimodal fact
verification. To address this gap, we introduce FACTIFY 3M, a dataset of 3
million samples that pushes the boundaries of the domain of fact verification
via a multimodal fake news dataset, in addition to offering explainability
through the concept of 5W question-answering. Salient features of the dataset
include: (i) textual claims, (ii) ChatGPT-generated paraphrased claims, (iii)
associated images, (iv) stable diffusion-generated additional images (i.e.,
visual paraphrases), (v) pixel-level image heatmap to foster image-text
explainability of the claim, (vi) 5W QA pairs, and (vii) adversarial fake news
stories.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 08:29:47 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Chakraborty",
"Megha",
""
],
[
"Pahwa",
"Khusbu",
""
],
[
"Rani",
"Anku",
""
],
[
"Mahor",
"Adarsh",
""
],
[
"Pakala",
"Aditya",
""
],
[
"Sarkar",
"Arghya",
""
],
[
"Dave",
"Harshit",
""
],
[
"Paul",
"Ishan",
""
],
[
"Reddy",
"Janvita",
""
],
[
"Gurumurthy",
"Preethi",
""
],
[
"G",
"Ritvik",
""
],
[
"Mukherjee",
"Samahriti",
""
],
[
"Chatterjee",
"Shreyas",
""
],
[
"Sensharma",
"Kinjal",
""
],
[
"Dalal",
"Dwip",
""
],
[
"S",
"Suryavardan",
""
],
[
"Mishra",
"Shreyash",
""
],
[
"Patwa",
"Parth",
""
],
[
"Chadha",
"Aman",
""
],
[
"Sheth",
"Amit",
""
],
[
"Das",
"Amitava",
""
]
] |
new_dataset
| 0.961945 |
2306.05534
|
Jens Dietrich
|
Jens Dietrich, Shawn Rasheed, Alexander Jordan
|
On the Security Blind Spots of Software Composition Analysis
|
16 pages, 1 figure
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Modern software heavily relies on the use of components. Those components are
usually published in central repositories, and managed by build systems via
dependencies. Due to issues around vulnerabilities, licenses and the
propagation of bugs, the study of those dependencies is of utmost importance,
and numerous software composition analysis tools have emerged to address those
issues. A particular challenge is hidden dependencies that result from
cloning or shading, where code from a component is "inlined" and, in the case
of shading, moved to different namespaces. We present an approach to detect
cloned and shaded artifacts in the Maven repository. Our approach is
lightweight in that it does not require the creation and maintenance of an
index, and uses a custom AST-based clone detection. Our analysis focuses on the
detection of vulnerabilities in artifacts which use cloning or shading.
Starting with eight vulnerabilities with assigned CVEs (four of those
classified as critical) and proof-of-vulnerability projects demonstrating the
presence of a vulnerability in an artifact, we query the Maven repository and
retrieve over 16k potential clones of the vulnerable artifacts. After running
our analysis on this set, we detect 554 artifacts with the respective
vulnerabilities (49 if versions are ignored). We synthesize a testable
proof-of-vulnerability project for each of those. We demonstrate that existing
SCA tools often miss these exposures.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 20:14:46 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Dietrich",
"Jens",
""
],
[
"Rasheed",
"Shawn",
""
],
[
"Jordan",
"Alexander",
""
]
] |
new_dataset
| 0.971016 |
2306.05552
|
Anaelia Ovalle
|
Anaelia Ovalle, Mehrab Beikzadeh, Parshan Teimouri, Kai-Wei Chang,
Majid Sarrafzadeh
|
ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text
Ambiguation to Expand Mental Health Care Delivery
| null |
EMBC 2023
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models have been useful in expanding mental health care
delivery. ChatGPT, in particular, has gained popularity for its ability to
generate human-like dialogue. However, data-sensitive domains -- including but
not limited to healthcare -- face challenges in using ChatGPT due to privacy
and data-ownership concerns. To enable its utilization, we propose a text
ambiguation framework that preserves user privacy. We ground this in the task
of addressing stress prompted by user-provided texts to demonstrate the
viability and helpfulness of privacy-preserved generations. Our results suggest
that ChatGPT recommendations remain moderately helpful and
relevant, even when the original user text is not provided.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 02:09:52 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Ovalle",
"Anaelia",
""
],
[
"Beikzadeh",
"Mehrab",
""
],
[
"Teimouri",
"Parshan",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Sarrafzadeh",
"Majid",
""
]
] |
new_dataset
| 0.992301 |
2306.05562
|
Adam Cobb
|
Adam D. Cobb, Anirban Roy, Daniel Elenius, F. Michael Heim, Brian
Swenson, Sydney Whittington, James D. Walker, Theodore Bapty, Joseph Hite,
Karthik Ramani, Christopher McComb, Susmit Jha
|
AircraftVerse: A Large-Scale Multimodal Dataset of Aerial Vehicle
Designs
|
The dataset is hosted at https://zenodo.org/record/6525446, baseline
models and code at https://github.com/SRI-CSL/AircraftVerse, and the dataset
description at https://aircraftverse.onrender.com/
| null | null | null |
cs.RO cs.AI cs.CE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present AircraftVerse, a publicly available aerial vehicle design dataset.
Aircraft design encompasses different physics domains and, hence, multiple
modalities of representation. The evaluation of these cyber-physical system
(CPS) designs requires the use of scientific analytical and simulation models
ranging from computer-aided design tools for structural and manufacturing
analysis, computational fluid dynamics tools for drag and lift computation,
battery models for energy estimation, and simulation models for flight control
and dynamics. AircraftVerse contains 27,714 diverse air vehicle designs - the
largest corpus of engineering designs with this level of complexity. Each
design comprises the following artifacts: a symbolic design tree describing
topology, propulsion subsystem, battery subsystem, and other design details; a
STandard for the Exchange of Product (STEP) model data; a 3D CAD design using a
stereolithography (STL) file format; a 3D point cloud for the shape of the
design; and evaluation results from high fidelity state-of-the-art physics
models that characterize performance metrics such as maximum flight distance
and hover-time. We also present baseline surrogate models that use different
modalities of design representation to predict design performance metrics,
which we provide as part of our dataset release. Finally, we discuss the
potential impact of this dataset on the use of learning in aircraft design and,
more generally, in CPS. AircraftVerse is accompanied by a data card, and it is
released under Creative Commons Attribution-ShareAlike (CC BY-SA) license. The
dataset is hosted at https://zenodo.org/record/6525446, baseline models and
code at https://github.com/SRI-CSL/AircraftVerse, and the dataset description
at https://aircraftverse.onrender.com/.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 21:07:15 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Cobb",
"Adam D.",
""
],
[
"Roy",
"Anirban",
""
],
[
"Elenius",
"Daniel",
""
],
[
"Heim",
"F. Michael",
""
],
[
"Swenson",
"Brian",
""
],
[
"Whittington",
"Sydney",
""
],
[
"Walker",
"James D.",
""
],
[
"Bapty",
"Theodore",
""
],
[
"Hite",
"Joseph",
""
],
[
"Ramani",
"Karthik",
""
],
[
"McComb",
"Christopher",
""
],
[
"Jha",
"Susmit",
""
]
] |
new_dataset
| 0.999856 |
2306.05582
|
Denizhan Oak
|
Denizhan Pak, Donsuk Lee, Samantha M. W. Wood, Justin N. Wood
|
A newborn embodied Turing test for view-invariant object recognition
|
7 Pages. 4 figures, 1 table. This paper was accepted to the CogSci
2023 Conference. (https://cognitivesciencesociety.org/)
| null | null | null |
cs.AI q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Recent progress in artificial intelligence has renewed interest in building
machines that learn like animals. Almost all of the work comparing learning
across biological and artificial systems comes from studies where animals and
machines received different training data, obscuring whether differences
between animals and machines emerged from differences in learning mechanisms
versus training data. We present an experimental approach-a "newborn embodied
Turing Test"-that allows newborn animals and machines to be raised in the same
environments and tested with the same tasks, permitting direct comparison of
their learning abilities. To make this platform, we first collected
controlled-rearing data from newborn chicks, then performed "digital twin"
experiments in which machines were raised in virtual environments that mimicked
the rearing conditions of the chicks. We found that (1) machines (deep
reinforcement learning agents with intrinsic motivation) can spontaneously
develop visually guided preference behavior, akin to imprinting in newborn
chicks, and (2) machines are still far from newborn-level performance on object
recognition tasks. Almost all of the chicks developed view-invariant object
recognition, whereas the machines tended to develop view-dependent recognition.
The learning outcomes were also far more constrained in the chicks versus
machines. Ultimately, we anticipate that this approach will help researchers
develop embodied AI systems that learn like newborn animals.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 22:46:31 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Pak",
"Denizhan",
""
],
[
"Lee",
"Donsuk",
""
],
[
"Wood",
"Samantha M. W.",
""
],
[
"Wood",
"Justin N.",
""
]
] |
new_dataset
| 0.972503 |
2306.05587
|
Yanhua Xu
|
Yanhua Xu and Dominik Wojtczak
|
MC-NN: An End-to-End Multi-Channel Neural Network Approach for
Predicting Influenza A Virus Hosts and Antigenic Types
|
Accepted version submitted to the SN Computer Science; Published in
the SN Computer Science 2023
| null |
10.1007/s42979-023-01839-5
| null |
cs.LG q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Influenza poses a significant threat to public health, particularly among the
elderly, young children, and people with underlying diseases. The
manifestation of severe conditions, such as pneumonia, highlights the
importance of preventing the spread of influenza. An accurate and
cost-effective prediction of the host and antigenic subtypes of influenza A
viruses is essential to addressing this issue, particularly in
resource-constrained regions. In this study, we propose a multi-channel neural
network model to predict the host and antigenic subtypes of influenza A viruses
from hemagglutinin and neuraminidase protein sequences. Our model was trained
on a comprehensive data set of complete protein sequences and evaluated on
various test data sets of complete and incomplete sequences. The results
demonstrate the potential and practicality of using multi-channel neural
networks in predicting the host and antigenic subtypes of influenza A viruses
from both full and partial protein sequences.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 23:14:39 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Xu",
"Yanhua",
""
],
[
"Wojtczak",
"Dominik",
""
]
] |
new_dataset
| 0.993889 |
2306.05596
|
Muskan Garg
|
Muskan Garg, Manas Gaur, Raxit Goswami, Sunghwan Sohn
|
LOST: A Mental Health Dataset of Low Self-esteem in Reddit Posts
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low self-esteem and interpersonal needs (i.e., thwarted belongingness (TB)
and perceived burdensomeness (PB)) have a major impact on depression and
suicide attempts. Individuals seek social connectedness on social media to
boost and alleviate their loneliness. Social media platforms allow people to
express their thoughts, experiences, beliefs, and emotions. Prior studies on
mental health from social media have focused on symptoms, causes, and
disorders. In contrast, an initial screening of social media content for
interpersonal risk factors and low self-esteem may raise early alerts and help
assign therapists to users at risk of mental disturbance. Standardized scales
measure self-esteem and interpersonal needs from questions created using
psychological theories. In the current research, we introduce a
psychology-grounded and expertly annotated dataset, LoST: Low Self esTeem, to
study and detect low self-esteem on Reddit. Through an annotation approach
involving checks on coherence, correctness, consistency, and reliability, we
ensure a gold standard for supervised learning. We present results from different
deep language models tested using two data augmentation techniques. Our
findings suggest developing a class of language models that infuse
psychological and clinical knowledge.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 23:52:35 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Garg",
"Muskan",
""
],
[
"Gaur",
"Manas",
""
],
[
"Goswami",
"Raxit",
""
],
[
"Sohn",
"Sunghwan",
""
]
] |
new_dataset
| 0.999817 |
2306.05629
|
Kai Song
|
Kai Song, Biqian Feng, Yongpeng Wu, Zhen Gao and Wenjun Zhang
|
R-PMAC: A Robust Preamble Based MAC Mechanism Applied in Industrial
Internet of Things
|
This paper has been accepted by IEEE Internet of Things Journal
| null | null | null |
cs.IT cs.SY eess.SY math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a novel media access control (MAC) mechanism, called the
robust preamble-based MAC mechanism (R-PMAC), which can be applied to power
line communication (PLC) networks in the context of the Industrial Internet of
Things (IIoT). Compared with other MAC mechanisms such as P-MAC and the MAC
layer of IEEE1901.1, R-PMAC has higher networking speed. Besides, it supports
whitelist authentication and functions properly in the presence of data frame
loss. Firstly, we outline three basic mechanisms of R-PMAC, comprising precise
time difference calculation, preamble generation, and short ID allocation.
Secondly, we elaborate on its networking process for single and multiple
layers. Thirdly, we illustrate its robustness mechanisms, including collision
handling and data retransmission. Moreover, a low-cost hardware platform is
established to measure the time of connecting hundreds of PLC nodes for the
R-PMAC, P-MAC, and IEEE1901.1 mechanisms in a real power line environment. The
experiment results show that R-PMAC outperforms the other mechanisms by
achieving a 50% reduction in networking time. These findings indicate that the
R-PMAC mechanism holds great potential for quickly and effectively building a
PLC network in actual industrial scenarios.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 02:28:55 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Song",
"Kai",
""
],
[
"Feng",
"Biqian",
""
],
[
"Wu",
"Yongpeng",
""
],
[
"Gao",
"Zhen",
""
],
[
"Zhang",
"Wenjun",
""
]
] |
new_dataset
| 0.99831 |
2306.05644
|
Qiyu Wu
|
Qiyu Wu, Masaaki Nagata, Yoshimasa Tsuruoka
|
WSPAlign: Word Alignment Pre-training via Large-Scale Weakly Supervised
Span Prediction
|
To appear at ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing word alignment methods rely on manual alignment datasets or
parallel corpora, which limits their usefulness. Here, to mitigate the
dependence on manual data, we broaden the source of supervision by relaxing the
requirement for correct, fully-aligned, and parallel sentences. Specifically,
we make use of noisy, partially aligned, and non-parallel paragraphs. We then use such
a large-scale weakly-supervised dataset for word alignment pre-training via
span prediction. Extensive experiments with various settings empirically
demonstrate that our approach, which is named WSPAlign, is an effective and
scalable way to pre-train word aligners without manual data. When fine-tuned on
standard benchmarks, WSPAlign has set a new state-of-the-art by improving upon
the best-supervised baseline by 3.3~6.1 points in F1 and 1.5~6.1 points in AER.
Furthermore, WSPAlign also achieves competitive performance compared with the
corresponding baselines in few-shot, zero-shot and cross-lingual tests, which
demonstrates that WSPAlign is potentially more practical for low-resource
languages than existing methods.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 03:11:42 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Wu",
"Qiyu",
""
],
[
"Nagata",
"Masaaki",
""
],
[
"Tsuruoka",
"Yoshimasa",
""
]
] |
new_dataset
| 0.975596 |
2306.05663
|
Eduardo R. Corral-Soto
|
Eduardo R. Corral-Soto, Alaap Grandhi, Yannis Y. He, Mrigank Rochan,
Bingbing Liu
|
Improving LiDAR 3D Object Detection via Range-based Point Cloud Density
Optimization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, much progress has been made in LiDAR-based 3D object
detection mainly due to advances in detector architecture designs and
availability of large-scale LiDAR datasets. Existing 3D object detectors tend
to perform well on the point cloud regions closer to the LiDAR sensor as
opposed to on regions that are farther away. In this paper, we investigate this
problem from the data perspective instead of detector architecture design. We
observe that there is a learning bias in detection models towards the dense
objects near the sensor and show that the detection performance can be improved
by simply manipulating the input point cloud density at different distance
ranges without modifying the detector architecture and without data
augmentation. We propose a model-free point cloud density adjustment
pre-processing mechanism that uses iterative MCMC optimization to estimate
optimal parameters for altering the point density at different distance ranges.
We conduct experiments using four state-of-the-art LiDAR 3D object detectors on
two public LiDAR datasets, namely Waymo and ONCE. Our results demonstrate that
our range-based point cloud density manipulation technique can improve the
performance of the existing detectors, which in turn could potentially inspire
future detector designs.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 04:11:43 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Corral-Soto",
"Eduardo R.",
""
],
[
"Grandhi",
"Alaap",
""
],
[
"He",
"Yannis Y.",
""
],
[
"Rochan",
"Mrigank",
""
],
[
"Liu",
"Bingbing",
""
]
] |
new_dataset
| 0.996448 |
2306.05666
|
Sunmin Lee
|
Sunmin Lee, Sebastian Starke, Yuting Ye, Jungdam Won, and Alexander
Winkler
|
QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse
Sensors
| null |
SIGGRAPH 23 Conference Proceedings, August 6-10, 2023, Los
Angeles, CA, USA
|
10.1145/3588432.3591504
| null |
cs.GR cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Replicating a user's pose from only wearable sensors is important for many
AR/VR applications. Most existing methods for motion tracking avoid environment
interaction apart from foot-floor contact due to their complex dynamics and
hard constraints. However, in daily life people regularly interact with their
environment, e.g. by sitting on a couch or leaning on a desk. Using
Reinforcement Learning, we show that headset and controller pose, if combined
with physics simulation and environment observations can generate realistic
full-body poses even in highly constrained environments. The physics simulation
automatically enforces the various constraints necessary for realistic poses,
instead of manually specifying them as in many kinematic approaches. These hard
constraints allow us to achieve high-quality interaction motions without
typical artifacts such as penetration or contact sliding. We discuss three
features, the environment representation, the contact reward and scene
randomization, crucial to the performance of the method. We demonstrate the
generality of the approach through various examples, such as sitting on chairs,
a couch and boxes, stepping over boxes, rocking a chair and turning an office
chair. We believe these are some of the highest-quality results achieved for
motion tracking from sparse sensors with scene interaction.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 04:40:38 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Lee",
"Sunmin",
""
],
[
"Starke",
"Sebastian",
""
],
[
"Ye",
"Yuting",
""
],
[
"Won",
"Jungdam",
""
],
[
"Winkler",
"Alexander",
""
]
] |
new_dataset
| 0.998816 |
2306.05672
|
Long Xuan Ma
|
Longxuan Ma and Weinan Zhang and Shuhan Zhou and Churui Sun and
Changxin Ke and Ting Liu
|
I run as fast as a rabbit, can you? A Multilingual Simile Dialogue
Dataset
|
13 Pages, 1 Figure, 12 Tables, ACL 2023 findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A simile is a figure of speech that compares two different things (called the
tenor and the vehicle) via shared properties. The tenor and the vehicle are
usually connected with comparator words such as "like" or "as". The simile
phenomena are unique and complex in a real-life dialogue scene where the tenor
and the vehicle can be verbal phrases or sentences, mentioned by different
speakers, exist in different sentences, or occur in reversed order. However,
the current simile research usually focuses on similes in a triplet tuple
(tenor, property, vehicle) or a single sentence where the tenor and vehicle are
usually entities or noun phrases, which cannot reflect complex simile
phenomena in real scenarios. In this paper, we propose a novel and high-quality
multilingual simile dialogue (MSD) dataset to facilitate the study of complex
simile phenomena. The MSD is the largest manually annotated simile data
($\sim$20K) and it contains both English and Chinese data. Meanwhile, the MSD
data can also be used on dialogue tasks to test the ability of dialogue systems
when using similes. We design 3 simile tasks (recognition, interpretation, and
generation) and 2 dialogue tasks (retrieval and generation) with MSD. For each
task, we provide experimental results from strong pre-trained or
state-of-the-art models. The experiments demonstrate the challenge of MSD and
we have released the data/code on GitHub.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 05:04:13 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Ma",
"Longxuan",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Zhou",
"Shuhan",
""
],
[
"Sun",
"Churui",
""
],
[
"Ke",
"Changxin",
""
],
[
"Liu",
"Ting",
""
]
] |
new_dataset
| 0.999692 |
2306.05690
|
Jiangshan Yu Dr
|
Jiangshan Yu
|
Fault Independence in Blockchain
|
Disrupt Track of DSN 2023
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Byzantine Fault-Tolerant (BFT) protocols have been proposed to tolerate
malicious behaviors in state machine replication. With classic BFT protocols,
the total number of replicas is known and fixed a priori. The resilience of BFT
protocols, i.e., the number of tolerated Byzantine replicas (denoted f ), is
derived from the total number of replicas according to the quorum theory.
To guarantee that an attacker cannot control more than f replicas, and thus to
guarantee safety, it is vital to ensure fault independence among all replicas.
In practice, this is achieved by enforcing diverse configurations of replicas,
i.e., each replica has a unique configuration, so that f faults cannot
compromise more than f replicas.
While managing replica diversity in BFT protocols has been studied in
permissioned environments with a small number of replicas, no prior work has
discussed the fault independence in a permissionless environment (such as
public blockchains) where anyone can join and leave the system at any time.
This is particularly challenging due to the following two facts. First, with
a permissionless environment, anyone can join as a replica at any time and no
global coordinator can be relied on to manage replica diversity. Second, while
great progress has been made to scale consensus algorithms to thousands of
replicas, the replica diversity cannot provide fault independence at this
scale, limiting practical and meaningful resilience.
This paper provides the first discussion on the impact of fault independence
on permissionless blockchains, provides discussions on replica configuration
diversity, quantifies replica diversity by using entropy, and defines optimal
fault independence.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 06:04:09 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Yu",
"Jiangshan",
""
]
] |
new_dataset
| 0.999387 |
2306.05695
|
Yinghui Ye
|
Haohang Yang, Yinghui Ye, Kai Liang, Xiaoli Chu
|
Power Beacon Energy Consumption Minimization in Wireless Powered
Backscatter Communication Networks
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Internet-of-Things (IoT) networks are expected to support the wireless
connection of massive numbers of energy-limited IoT nodes. The emerging wireless powered
backscatter communications (WPBC) enable IoT nodes to harvest energy from the
incident radio frequency signals transmitted by a power beacon (PB) to support
their circuit operation, but the energy consumption of the PB (a potentially
high cost borne by the network operator) has not been sufficiently studied for
WPBC. In this paper, we aim to minimize the energy consumption of the PB while
satisfying the throughput requirement per IoT node by jointly optimizing the
time division multiple access (TDMA) time slot duration and backscatter
reflection coefficient of each IoT node and the PB transmit power per time
slot. As the formulated joint optimization problem is non-convex, we transform
it into a convex problem by using auxiliary variables, then employ the Lagrange
dual method to obtain the optimal solutions. To reduce the implementation
complexity required for adjusting the PB's transmit power every time slot, we
keep the PB transmit power constant in each time block and solve the
corresponding PB energy consumption minimization problem by using auxiliary
variables, the block coordinate descent method, and the successive convex
approximation technique. Based on the above solutions, two iterative algorithms
are proposed for the dynamic PB transmit power scheme and the static PB
transmit power scheme. The simulation results show that the dynamic PB transmit
power scheme and the static PB transmit power scheme both achieve a lower PB
energy consumption than the benchmark schemes, and the former achieves the
lowest PB energy consumption.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 06:33:27 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Yang",
"Haohang",
""
],
[
"Ye",
"Yinghui",
""
],
[
"Liang",
"Kai",
""
],
[
"Chu",
"Xiaoli",
""
]
] |
new_dataset
| 0.996105 |
2306.05846
|
Gu\'enol\'e Fiche
|
Gu\'enol\'e Fiche, Simon Leglaive, Xavier Alameda-Pineda, Renaud
S\'eguier
|
Motion-DVAE: Unsupervised learning for fast human motion denoising
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pose and motion priors are crucial for recovering realistic and accurate
human motion from noisy observations. Substantial progress has been made on
pose and shape estimation from images, and recent works showed impressive
results using priors to refine frame-wise predictions. However, a lot of motion
priors only model transitions between consecutive poses and are used in
time-consuming optimization procedures, which is problematic for many
applications requiring real-time motion capture. We introduce Motion-DVAE, a
motion prior that captures the short-term dependencies of human motion. As part of
the dynamical variational autoencoder (DVAE) models family, Motion-DVAE
combines the generative capability of VAE models and the temporal modeling of
recurrent architectures. Together with Motion-DVAE, we introduce an
unsupervised learned denoising method unifying regression- and
optimization-based approaches in a single framework for real-time 3D human pose
estimation. Experiments show that the proposed approach reaches competitive
performance with state-of-the-art methods while being much faster.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 12:18:48 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Fiche",
"Guénolé",
""
],
[
"Leglaive",
"Simon",
""
],
[
"Alameda-Pineda",
"Xavier",
""
],
[
"Séguier",
"Renaud",
""
]
] |
new_dataset
| 0.998495 |
2306.05889
|
Giuseppe Bruni
|
Giuseppe Bruni, Sepehr Maleki, Senthil K. Krishnababu
|
C(NN)FD -- a deep learning framework for turbomachinery CFD analysis
| null | null | null | null |
cs.LG cs.CE physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Learning methods have seen a wide range of successful applications
across different industries. Up until now, applications to physical simulations
such as CFD (Computational Fluid Dynamics), have been limited to simple
test-cases of minor industrial relevance. This paper demonstrates the
development of a novel deep learning framework for real-time predictions of the
impact of manufacturing and build variations on the overall performance of
axial compressors in gas turbines, with a focus on tip clearance variations.
The associated scatter in efficiency can significantly increase the $CO_2$
emissions, thus being of great industrial and environmental relevance. The
proposed \textit{C(NN)FD} architecture achieves, in real time, accuracy
comparable to the CFD benchmark. Predicting the flow field and using it to
calculate the corresponding overall performance renders the methodology
generalisable, while filtering only relevant parts of the CFD solution makes
the methodology scalable to industrial applications.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 13:35:04 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Bruni",
"Giuseppe",
""
],
[
"Maleki",
"Sepehr",
""
],
[
"Krishnababu",
"Senthil K.",
""
]
] |
new_dataset
| 0.973866 |
2306.05895
|
Tomas Cerny Ph.D.
|
Sheldon Smith, Ethan Robinson, Timmy Frederiksen, Trae Stevens, Tomas
Cerny, Miroslav Bures, Davide Taibi
|
Benchmarks for End-to-End Microservices Testing
|
7 pages
|
IEEE SOSE 2023
| null | null |
cs.SE cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Testing microservice systems involves a large amount of planning and
problem-solving. The difficulty of testing microservice systems increases as
the size and structure of such systems become more complex. To help the
microservice community and simplify experiments with testing and traffic
simulation, we created a test benchmark containing full functional testing
coverage for two well-established open-source microservice systems. Through our
benchmark design, we aimed to demonstrate ways to overcome certain challenges
and find effective strategies when testing microservices. In addition, to
demonstrate our benchmark use, we conducted a case study to identify the best
approaches to take to validate a full coverage of tests using
service-dependency graph discovery and business process discovery using
tracing.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 13:42:53 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Smith",
"Sheldon",
""
],
[
"Robinson",
"Ethan",
""
],
[
"Frederiksen",
"Timmy",
""
],
[
"Stevens",
"Trae",
""
],
[
"Cerny",
"Tomas",
""
],
[
"Bures",
"Miroslav",
""
],
[
"Taibi",
"Davide",
""
]
] |
new_dataset
| 0.980962 |
2306.05957
|
Tal Daniel
|
Tal Daniel, Aviv Tamar
|
DDLP: Unsupervised Object-Centric Video Prediction with Deep Dynamic
Latent Particles
|
Project site: https://taldatech.github.io/ddlp-web
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new object-centric video prediction algorithm based on the deep
latent particle (DLP) representation. In comparison to existing slot- or
patch-based representations, DLPs model the scene using a set of keypoints with
learned parameters for properties such as position and size, and are both
efficient and interpretable. Our method, deep dynamic latent particles (DDLP),
yields state-of-the-art object-centric video prediction results on several
challenging datasets. The interpretable nature of DDLP allows us to perform
``what-if'' generation -- predict the consequence of changing properties of
objects in the initial frames, and DLP's compact structure enables efficient
diffusion-based unconditional video generation. Videos, code and pre-trained
models are available: https://taldatech.github.io/ddlp-web
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 15:17:13 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Daniel",
"Tal",
""
],
[
"Tamar",
"Aviv",
""
]
] |
new_dataset
| 0.970352 |
2306.06007
|
Sepand Kashani
|
Sepand Kashani, Joan Ru\'e Queralt, Adrian Jarret, Matthieu Simeoni
|
HVOX: Scalable Interferometric Synthesis and Analysis of Spherical Sky
Maps
| null | null | null | null |
cs.CE astro-ph.IM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Analysis and synthesis are key steps of the radio-interferometric imaging
process, serving as a bridge between visibility and sky domains. They can be
expressed as partial Fourier transforms involving a large number of non-uniform
frequencies and spherically-constrained spatial coordinates. Due to the data
non-uniformity, these partial Fourier transforms are computationally expensive
and represent a serious bottleneck in the image reconstruction process. The
W-gridding algorithm achieves log-linear complexity for both steps by applying
a series of 2D non-uniform FFTs (NUFFT) to the data sliced along the so-called
$w$ frequency coordinate. A major drawback of this method however is its
restriction to direction-cosine meshes, which are fundamentally ill-suited for
large fields of view. This paper introduces the HVOX gridder, a novel algorithm
for analysis/synthesis based on a 3D-NUFFT. Unlike W-gridding, the latter is
compatible with arbitrary spherical meshes such as the popular HEALPix scheme
for spherical data processing. The 3D-NUFFT allows one to optimally select the
size of the inner FFTs, in particular the number of W-planes. This results in a
better performing and auto-tuned algorithm, with controlled accuracy guarantees
backed by strong results from approximation theory. To cope with the
challenging scale of next-generation radio telescopes, we propose moreover a
chunked evaluation strategy: by partitioning the visibility and sky domains,
the 3D-NUFFT is decomposed into sub-problems which execute in parallel, while
simultaneously cutting memory requirements. Our benchmarking results
demonstrate the scalability of HVOX for both SKA and LOFAR, considering
state-of-the-art challenging imaging setups. HVOX is moreover computationally
competitive with W-gridder, despite the absence of domain-specific
optimizations in our implementation.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 16:22:32 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Kashani",
"Sepand",
""
],
[
"Queralt",
"Joan Rué",
""
],
[
"Jarret",
"Adrian",
""
],
[
"Simeoni",
"Matthieu",
""
]
] |
new_dataset
| 0.979547 |
2306.06010
|
Akash Kumar
|
Akash Kumar, Ashlesha Kumar, Vibhav Vineet, Yogesh Singh Rawat
|
Benchmarking self-supervised video representation learning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Self-supervised learning is an effective way for label-free model
pre-training, especially in the video domain where labeling is expensive.
Existing self-supervised works in the video domain use varying experimental
setups to demonstrate their effectiveness and comparison across approaches
becomes challenging with no standard benchmark. In this work, we first provide
a benchmark that enables a comparison of existing approaches on the same
ground. Next, we study five different aspects of self-supervised learning
important for videos: 1) dataset size, 2) complexity, 3) data distribution, 4)
data noise, and 5) feature analysis. To facilitate this study, we focus on
seven different methods along with seven different network architectures and
perform an extensive set of experiments on 5 different datasets with an
evaluation of two different downstream tasks. We present several interesting
insights from this study which span across different properties of pretraining
and target datasets, pretext-tasks, and model architectures among others. We
further put some of these insights to the real test and propose an approach
that requires a limited amount of training data and outperforms existing
state-of-the-art approaches which use 10x pretraining data. We believe this
work will pave the way for researchers to a better understanding of
self-supervised pretext tasks in video representation learning.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 16:27:14 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Kumar",
"Akash",
""
],
[
"Kumar",
"Ashlesha",
""
],
[
"Vineet",
"Vibhav",
""
],
[
"Rawat",
"Yogesh Singh",
""
]
] |
new_dataset
| 0.994373 |
2306.06052
|
Muhammad Ali
|
Muhammad Ali, Angelica Goetzen, Alan Mislove, Elissa M. Redmiles,
Piotr Sapiezynski
|
Problematic Advertising and its Disparate Exposure on Facebook
|
Accepted to USENIX Security 2023
| null | null | null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Targeted advertising remains an important part of the free web browsing
experience, where advertisers' targeting and personalization algorithms
together find the most relevant audience for millions of ads every day.
However, given the wide use of advertising, this also enables using ads as a
vehicle for problematic content, such as scams or clickbait. Recent work that
explores people's sentiments toward online ads, and the impacts of these ads on
people's online experiences, has found evidence that online ads can indeed be
problematic. Further, there is the potential for personalization to aid the
delivery of such ads, even when the advertiser targets with low specificity. In
this paper, we study Facebook -- one of the internet's largest ad platforms --
and investigate key gaps in our understanding of problematic online
advertising: (a) What categories of ads do people find problematic? (b) Are
there disparities in the distribution of problematic ads to viewers? and if so,
(c) Who is responsible -- advertisers or advertising platforms? To answer these
questions, we empirically measure a diverse sample of user experiences with
Facebook ads via a 3-month longitudinal panel. We categorize over 32,000 ads
collected from this panel ($n=132$), and survey participants' sentiments toward
their own ads to identify four categories of problematic ads. Statistically
modeling the distribution of problematic ads across demographics, we find that
older people and minority groups are especially likely to be shown such ads.
Further, given that 22% of problematic ads had no specific targeting from
advertisers, we infer that ad delivery algorithms (advertising platforms
themselves) played a significant role in the biased distribution of these ads.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 17:23:59 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Ali",
"Muhammad",
""
],
[
"Goetzen",
"Angelica",
""
],
[
"Mislove",
"Alan",
""
],
[
"Redmiles",
"Elissa M.",
""
],
[
"Sapiezynski",
"Piotr",
""
]
] |
new_dataset
| 0.989674 |
2306.06068
|
Christian L\"owens
|
Christian L\"owens, Daniela Thyssens, Emma Andersson, Christina
Jenkins, Lars Schmidt-Thieme
|
DeepStay: Stay Region Extraction from Location Trajectories using Weak
Supervision
|
Paper under peer review
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, mobile devices enable constant tracking of the user's position and
location trajectories can be used to infer personal points of interest (POIs)
like homes, workplaces, or stores. A common way to extract POIs is to first
identify spatio-temporal regions where a user spends a significant amount of
time, known as stay regions (SRs).
Common approaches to SR extraction are evaluated either in a purely
unsupervised manner or on small-scale private datasets, as popular public
datasets are unlabeled.
Most of these methods rely on hand-crafted features or thresholds and do not
learn beyond hyperparameter optimization. Therefore, we propose a weakly and
self-supervised transformer-based model called DeepStay, which is trained on
location trajectories to predict stay regions. To the best of our knowledge,
this is the first approach based on deep learning and the first approach that
is evaluated on a public, labeled dataset. Our SR extraction method outperforms
state-of-the-art methods. In addition, we conducted a limited experiment on the
task of transportation mode detection from GPS trajectories using the same
architecture and achieved significantly higher scores than the
state-of-the-art. Our code is available at
https://github.com/christianll9/deepstay.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 11:16:47 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Löwens",
"Christian",
""
],
[
"Thyssens",
"Daniela",
""
],
[
"Andersson",
"Emma",
""
],
[
"Jenkins",
"Christina",
""
],
[
"Schmidt-Thieme",
"Lars",
""
]
] |
new_dataset
| 0.992913 |
2306.06071
|
Sanyam Jain
|
Sanyam Jain
|
Adversarial Attack On Yolov5 For Traffic And Road Sign Detection
| null | null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper implements and investigates popular adversarial attacks on the
YOLOv5 Object Detection algorithm. The paper explores the vulnerability of the
YOLOv5 to adversarial attacks in the context of traffic and road sign
detection. The paper investigates the impact of different types of attacks,
including the Limited memory Broyden Fletcher Goldfarb Shanno (L-BFGS), the
Fast Gradient Sign Method (FGSM) attack, the Carlini and Wagner (C&W) attack,
the Basic Iterative Method (BIM) attack, the Projected Gradient Descent (PGD)
attack, One Pixel Attack, and the Universal Adversarial Perturbations attack on
the accuracy of YOLOv5 in detecting traffic and road signs. The results show
that YOLOv5 is susceptible to these attacks, with misclassification rates
increasing as the magnitude of the perturbations increases. We also explain the
results using saliency maps. The findings of this paper have important
implications for the safety and reliability of object detection algorithms used
in traffic and transportation systems, highlighting the need for more robust
and secure models to ensure their effectiveness in real-world applications.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 12:45:32 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Jain",
"Sanyam",
""
]
] |
new_dataset
| 0.997924 |
2306.06080
|
Shamyla Riaz
|
Muhammad Shoaib Farooq, Tabir Arif, Shamyla Riaz
|
Detection of Late Blight Disease in Tomato Leaf Using Image Processing
Techniques
|
A literature review that contains 17 pages and 8 figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
One of the most frequently farmed crops is the tomato. Late blight is the most
prevalent tomato disease in the world and often causes a significant reduction
in tomato production. The importance of tomatoes as an agricultural product
necessitates early detection of late blight, which is caused by the fungus
Phytophthora. The earliest signs of late blight on tomatoes are unevenly
formed, water-soaked lesions on the younger leaves of the plant canopy. In
humid environments, white cottony growth may become evident on the undersides
of the affected leaves. As the disease proceeds, the lesions enlarge and the
leaves turn brown, shrivel up, and die. In this work, late blight is detected
using image segmentation and the Multi-class SVM technique. Image segmentation
is employed for
separating damaged areas on leaves, and the Multi-class SVM method is used for
reliable disease categorization. 30 reputable studies were chosen from a total
of 2770 recognized papers. The primary goal of this study is to compile
cutting-edge research that identifies current research trends, problems, and
prospects for late blight detection. It also looks at current approaches for
applying image processing to diagnose and detect late blight. A suggested
taxonomy for late blight detection has also been provided. In the same way, a
model for the development of the solutions to problems is also presented.
Finally, the research gaps have been presented in terms of open issues for the
provision of future directions in image processing for the researchers.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 06:16:40 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Farooq",
"Muhammad Shoaib",
""
],
[
"Arif",
"Tabir",
""
],
[
"Riaz",
"Shamyla",
""
]
] |
new_dataset
| 0.999387 |
2306.06088
|
Alexandre Binninger
|
Alexandre Binninger, Amir Hertz, Olga Sorkine-Hornung, Daniel
Cohen-Or, Raja Giryes
|
SENS: Sketch-based Implicit Neural Shape Modeling
|
18 pages, 18 figures
| null | null | null |
cs.GR cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present SENS, a novel method for generating and editing 3D models from
hand-drawn sketches, including those of an abstract nature. Our method allows
users to quickly and easily sketch a shape, and then maps the sketch into the
latent space of a part-aware neural implicit shape architecture. SENS analyzes
the sketch and encodes its parts into ViT patch encoding, then feeds them into
a transformer decoder that converts them to shape embeddings, suitable for
editing 3D neural implicit shapes. SENS not only provides intuitive
sketch-based generation and editing, but also excels in capturing the intent of
the user's sketch to generate a variety of novel and expressive 3D shapes, even
from abstract sketches. We demonstrate the effectiveness of our model compared
to the state-of-the-art using objective metric evaluation criteria and a
decisive user study, both indicating strong performance on sketches with a
medium level of abstraction. Furthermore, we showcase its intuitive
sketch-based shape editing capabilities.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 17:50:53 GMT"
}
] | 2023-06-12T00:00:00 |
[
[
"Binninger",
"Alexandre",
""
],
[
"Hertz",
"Amir",
""
],
[
"Sorkine-Hornung",
"Olga",
""
],
[
"Cohen-Or",
"Daniel",
""
],
[
"Giryes",
"Raja",
""
]
] |
new_dataset
| 0.987843 |
2205.03448
|
Xingzhe He
|
Xingzhe He, Bastian Wandt, Helge Rhodin
|
LatentKeypointGAN: Controlling Images via Latent Keypoints -- Extended
Abstract
|
arXiv admin note: substantial text overlap with arXiv:2103.15812
|
CVPR Workshop 2022
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative adversarial networks (GANs) can now generate photo-realistic
images. However, how to best control the image content remains an open
challenge. We introduce LatentKeypointGAN, a two-stage GAN internally
conditioned on a set of keypoints and associated appearance embeddings
providing control of the position and style of the generated objects and their
respective parts. A major difficulty that we address is disentangling the image
into spatial and appearance factors with little domain knowledge and
supervision signals. We demonstrate in a user study and quantitative
experiments that LatentKeypointGAN provides an interpretable latent space that
can be used to re-arrange the generated images by re-positioning and exchanging
keypoint embeddings, such as generating portraits by combining the eyes, and
mouth from different images. Notably, our method does not require labels as it
is self-supervised and thereby applies to diverse application domains, such as
editing portraits, indoor rooms, and full-body human poses.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 19:00:07 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2022 18:53:20 GMT"
}
] | 2023-06-10T00:00:00 |
[
[
"He",
"Xingzhe",
""
],
[
"Wandt",
"Bastian",
""
],
[
"Rhodin",
"Helge",
""
]
] |
new_dataset
| 0.996586 |
2203.10174
|
Keenan Burnett
|
Keenan Burnett, Yuchen Wu, David J. Yoon, Angela P. Schoellig, Timothy
D. Barfoot
|
Are We Ready for Radar to Replace Lidar in All-Weather Mapping and
Localization?
|
Version 3: Accepted to RA-L, presented at IROS 2022. Localization
results updated due to improved ground truth and calibration. Also switched
Huber Loss for Cauchy Loss for the radar-based approaches
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an extensive comparison between three topometric localization
systems: radar-only, lidar-only, and a cross-modal radar-to-lidar system across
varying seasonal and weather conditions using the Boreas dataset. Contrary to
our expectations, our experiments showed that our lidar-only pipeline achieved
the best localization accuracy even during a snowstorm. Our results seem to
suggest that the sensitivity of lidar localization to moderate precipitation
has been exaggerated in prior works. However, our radar-only pipeline was able
to achieve competitive accuracy with a much smaller map. Furthermore, radar
localization and radar sensors still have room to improve and may yet prove
valuable in extreme weather or as a redundant backup system. Code for this
project can be found at: https://github.com/utiasASRL/vtr3
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 21:58:34 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 20:31:30 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Jun 2023 15:32:50 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Burnett",
"Keenan",
""
],
[
"Wu",
"Yuchen",
""
],
[
"Yoon",
"David J.",
""
],
[
"Schoellig",
"Angela P.",
""
],
[
"Barfoot",
"Timothy D.",
""
]
] |
new_dataset
| 0.998513 |
2208.08984
|
Zheng Ding
|
Zheng Ding, Jieke Wang, Zhuowen Tu
|
Open-Vocabulary Universal Image Segmentation with MaskCLIP
|
ICML 2023 Camera Ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we tackle an emerging computer vision task, open-vocabulary
universal image segmentation, that aims to perform semantic/instance/panoptic
segmentation (background semantic labeling + foreground instance segmentation)
for arbitrary categories of text-based descriptions at inference time. We first
build a baseline method by directly adopting pre-trained CLIP models without
finetuning or distillation. We then develop MaskCLIP, a Transformer-based
approach with a MaskCLIP Visual Encoder, which is an encoder-only module that
seamlessly integrates mask tokens with a pre-trained ViT CLIP model for
semantic/instance segmentation and class prediction. MaskCLIP learns to
efficiently and effectively utilize pre-trained partial/dense CLIP features
within the MaskCLIP Visual Encoder that avoids the time-consuming
student-teacher training process. MaskCLIP outperforms previous methods for
semantic/instance/panoptic segmentation on ADE20K and PASCAL datasets. We show
qualitative illustrations for MaskCLIP with online custom categories. Project
website: https://maskclip.github.io.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 17:55:37 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 06:35:33 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Ding",
"Zheng",
""
],
[
"Wang",
"Jieke",
""
],
[
"Tu",
"Zhuowen",
""
]
] |
new_dataset
| 0.982528 |
2209.07805
|
Junyi Gao
|
Junyi Gao, Yinghao Zhu, Wenqing Wang, Yasha Wang, Wen Tang, Ewen M.
Harrison, Liantao Ma
|
A Comprehensive Benchmark for COVID-19 Predictive Modeling Using
Electronic Health Records in Intensive Care
|
Junyi Gao, Yinghao Zhu and Wenqing Wang contributed equally
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The COVID-19 pandemic has posed a heavy burden to the healthcare system
worldwide and caused huge social disruption and economic loss. Many deep
learning models have been proposed to conduct clinical predictive tasks such as
mortality prediction for COVID-19 patients in intensive care units using
Electronic Health Record (EHR) data. Despite their initial success in certain
clinical applications, there is currently a lack of benchmarking results to
achieve a fair comparison so that we can select the optimal model for clinical
use. Furthermore, there is a discrepancy between the formulation of traditional
prediction tasks and real-world clinical practice in intensive care. To fill
these gaps, we propose two clinical prediction tasks, Outcome-specific
length-of-stay prediction and Early mortality prediction for COVID-19 patients
in intensive care units. The two tasks are adapted from the naive
length-of-stay and mortality prediction tasks to accommodate the clinical
practice for COVID-19 patients. We propose fair, detailed, open-source
data-preprocessing pipelines and evaluate 17 state-of-the-art predictive models
on two tasks, including 5 machine learning models, 6 basic deep learning models
and 6 deep learning predictive models specifically designed for EHR data. We
provide benchmarking results using data from two real-world COVID-19 EHR
datasets. One dataset is publicly available without needing any inquiry and
another dataset can be accessed on request. We provide fair, reproducible
benchmarking results for two tasks. We deploy all experiment results and models
on an online platform. We also allow clinicians and researchers to upload their
data to the platform and get quick prediction results using our trained models.
We hope our efforts can further facilitate deep learning and machine learning
research for COVID-19 predictive modeling.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 09:09:15 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Oct 2022 20:17:59 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 21:03:43 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Gao",
"Junyi",
""
],
[
"Zhu",
"Yinghao",
""
],
[
"Wang",
"Wenqing",
""
],
[
"Wang",
"Yasha",
""
],
[
"Tang",
"Wen",
""
],
[
"Harrison",
"Ewen M.",
""
],
[
"Ma",
"Liantao",
""
]
] |
new_dataset
| 0.979899 |
2210.06379
|
Gregor Geigle
|
Gregor Geigle, Chen Cecilia Liu, Jonas Pfeiffer and Iryna Gurevych
|
One does not fit all! On the Complementarity of Vision Encoders for
Vision and Language Tasks
|
Repl4NLP 2023
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current multimodal models, aimed at solving Vision and Language (V+L) tasks,
predominantly repurpose Vision Encoders (VE) as feature extractors. While many
VEs -- of different architectures, trained on different data and objectives --
are publicly available, they are not designed for the downstream V+L tasks.
Nonetheless, most current work assumes that a \textit{single} pre-trained VE
can serve as a general-purpose encoder. In this work, we focus on analysis and
aim to understand whether the information stored within different VEs is
complementary, i.e. if providing the model with features from multiple VEs can
improve the performance on a target task, and how they are combined. We
exhaustively experiment with three popular VEs on six downstream V+L tasks and
analyze the attention and VE-dropout patterns. Our analyses suggest that
diverse VEs complement each other, resulting in improved downstream V+L task
performance, where the improvements are not due to simple ensemble effects
(i.e. the performance does not always improve when increasing the number of
encoders). We demonstrate that future VEs, which are not \textit{repurposed},
but explicitly \textit{designed} for V+L tasks, have the potential of improving
performance on the target V+L tasks.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 16:31:39 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 15:42:13 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Geigle",
"Gregor",
""
],
[
"Liu",
"Chen Cecilia",
""
],
[
"Pfeiffer",
"Jonas",
""
],
[
"Gurevych",
"Iryna",
""
]
] |
new_dataset
| 0.955324 |
2211.13226
|
Zhi-Hao Lin
|
Yuan Li, Zhi-Hao Lin, David Forsyth, Jia-Bin Huang, Shenlong Wang
|
ClimateNeRF: Extreme Weather Synthesis in Neural Radiance Field
|
project page: https://climatenerf.github.io/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Physical simulations produce excellent predictions of weather effects. Neural
radiance fields produce SOTA scene models. We describe a novel NeRF-editing
procedure that can fuse physical simulations with NeRF models of scenes,
producing realistic movies of physical phenomena in those scenes. Our
application -- Climate NeRF -- allows people to visualize what climate change
outcomes will do to them. ClimateNeRF allows us to render realistic weather
effects, including smog, snow, and flood. Results can be controlled with
physically meaningful variables like water level. Qualitative and quantitative
studies show that our simulated results are significantly more realistic than
those from SOTA 2D image editing and SOTA 3D NeRF stylization.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 18:59:13 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Nov 2022 08:07:42 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Jun 2023 06:14:30 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Li",
"Yuan",
""
],
[
"Lin",
"Zhi-Hao",
""
],
[
"Forsyth",
"David",
""
],
[
"Huang",
"Jia-Bin",
""
],
[
"Wang",
"Shenlong",
""
]
] |
new_dataset
| 0.991211 |
2211.14130
|
Oliver Watts
|
Oliver Watts, Lovisa Wihlborg, Cassia Valentini-Botinhao
|
Puffin: pitch-synchronous neural waveform generation for fullband speech
on modest devices
|
ICASSP 2023
| null |
10.1109/ICASSP49357.2023.10094729
| null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present a neural vocoder designed with low-powered Alternative and
Augmentative Communication devices in mind. By combining elements of successful
modern vocoders with established ideas from an older generation of technology,
our system is able to produce high quality synthetic speech at 48kHz on devices
where neural vocoders are otherwise prohibitively complex. The system is
trained adversarially using differentiable pitch synchronous overlap add, and
reduces complexity by relying on pitch synchronous Inverse Short-Time Fourier
Transform (ISTFT) to generate speech samples. Our system achieves comparable
quality with a strong (HiFi-GAN) baseline while using only a fraction of the
compute. We present results of a perceptual evaluation as well as an analysis
of system complexity.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 14:15:21 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 12:38:34 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Watts",
"Oliver",
""
],
[
"Wihlborg",
"Lovisa",
""
],
[
"Valentini-Botinhao",
"Cassia",
""
]
] |
new_dataset
| 0.997984 |
2212.10029
|
Yuling Gu
|
Yuling Gu, Bhavana Dalvi Mishra, Peter Clark
|
Do language models have coherent mental models of everyday things?
|
ACL 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
When people think of everyday things like an egg, they typically have a
mental image associated with it. This allows them to correctly judge, for
example, that "the yolk surrounds the shell" is a false statement. Do language
models similarly have a coherent picture of such everyday things? To
investigate this, we propose a benchmark dataset consisting of 100 everyday
things, their parts, and the relationships between these parts, expressed as
11,720 "X relation Y?" true/false questions. Using these questions as probes,
we observe that state-of-the-art pre-trained language models (LMs) like GPT-3
and Macaw have fragments of knowledge about these everyday things, but do not
have fully coherent "parts mental models" (54-59% accurate, 19-43% conditional
constraint violation). We propose an extension where we add a constraint
satisfaction layer on top of the LM's raw predictions to apply commonsense
constraints. As well as removing inconsistencies, we find that this also
significantly improves accuracy (by 16-20%), suggesting how the incoherence of
the LM's pictures of everyday things can be significantly reduced.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 06:54:04 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 20:40:18 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Jun 2023 17:27:44 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Gu",
"Yuling",
""
],
[
"Mishra",
"Bhavana Dalvi",
""
],
[
"Clark",
"Peter",
""
]
] |
new_dataset
| 0.997 |
2301.02311
|
Kumar Ashutosh
|
Kumar Ashutosh, Rohit Girdhar, Lorenzo Torresani, Kristen Grauman
|
HierVL: Learning Hierarchical Video-Language Embeddings
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video-language embeddings are a promising avenue for injecting semantics into
visual representations, but existing methods capture only short-term
associations between seconds-long video clips and their accompanying text. We
propose HierVL, a novel hierarchical video-language embedding that
simultaneously accounts for both long-term and short-term associations. As
training data, we take videos accompanied by timestamped text descriptions of
human actions, together with a high-level text summary of the activity
throughout the long video (as are available in Ego4D). We introduce a
hierarchical contrastive training objective that encourages text-visual
alignment at both the clip level and video level. While the clip-level
constraints use the step-by-step descriptions to capture what is happening in
that instant, the video-level constraints use the summary text to capture why
it is happening, i.e., the broader context for the activity and the intent of
the actor. Our hierarchical scheme yields a clip representation that
outperforms its single-level counterpart as well as a long-term video
representation that achieves SotA results on tasks requiring long-term video
modeling. HierVL successfully transfers to multiple challenging downstream
tasks (in EPIC-KITCHENS-100, Charades-Ego, HowTo100M) in both zero-shot and
fine-tuned settings.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 21:53:19 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 14:29:35 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Ashutosh",
"Kumar",
""
],
[
"Girdhar",
"Rohit",
""
],
[
"Torresani",
"Lorenzo",
""
],
[
"Grauman",
"Kristen",
""
]
] |
new_dataset
| 0.997029 |
2301.07788
|
Gonzalo Mart\'inez
|
Gonzalo Mart\'inez, Jos\'e Alberto Hern\'andez, Pedro Reviriego and
Paul Reinheimer
|
Round Trip Time (RTT) Delay in the Internet: Analysis and Trends
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Both capacity and latency are crucial performance metrics for the optimal
operation of most networking services and applications, from online gaming to
futuristic holographic-type communications. Networks worldwide have witnessed
important breakthroughs in terms of capacity, including fibre introduction
everywhere, new radio technologies and faster core networks. However, the
impact of these capacity upgrades on end-to-end delay is not straightforward as
traffic has also grown exponentially. This article overviews the current status
of end-to-end latency on different regions and continents worldwide and how far
these are from the theoretical minimum baseline, given by the speed of light
propagation over an optical fibre. We observe that the trend in the last decade
goes toward latency reduction (in spite of the ever-increasing annual traffic
growth), but still there are important differences between countries.
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 21:07:01 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 15:45:21 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Martínez",
"Gonzalo",
""
],
[
"Hernández",
"José Alberto",
""
],
[
"Reviriego",
"Pedro",
""
],
[
"Reinheimer",
"Paul",
""
]
] |
new_dataset
| 0.95275 |
2302.06848
|
Jian Hua Yang
|
Jianhua Yang and Kun Dai
|
YOWOv2: A Stronger yet Efficient Multi-level Detection Framework for
Real-time Spatio-temporal Action Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing a real-time framework for the spatio-temporal action detection task
is still a challenge. In this paper, we propose a novel real-time action
detection framework, YOWOv2. In this new framework, YOWOv2 takes advantage of
both the 3D backbone and 2D backbone for accurate action detection. A
multi-level detection pipeline is designed to detect action instances of
different scales. To achieve this goal, we carefully build a simple and
efficient 2D backbone with a feature pyramid network to extract different
levels of classification features and regression features. For the 3D backbone,
we adopt the existing efficient 3D CNN to save development time. By combining
3D backbones and 2D backbones of different sizes, we design a YOWOv2 family
including YOWOv2-Tiny, YOWOv2-Medium, and YOWOv2-Large. We also introduce the
popular dynamic label assignment strategy and anchor-free mechanism to make the
YOWOv2 consistent with the advanced model architecture design. With our
improvement, YOWOv2 is significantly superior to YOWO, and can still keep
real-time detection. Without any bells and whistles, YOWOv2 achieves 87.0 %
frame mAP and 52.8 % video mAP with over 20 FPS on the UCF101-24. On the AVA,
YOWOv2 achieves 21.7 % frame mAP with over 20 FPS. Our code is available on
https://github.com/yjh0410/YOWOv2.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 05:52:45 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 01:49:33 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Yang",
"Jianhua",
""
],
[
"Dai",
"Kun",
""
]
] |
new_dataset
| 0.968646 |
2302.08551
|
Mark Rucker
|
Mark Rucker, Yinglun Zhu, Paul Mineiro
|
Infinite Action Contextual Bandits with Reusable Data Exhaust
|
Final version after responding to reviewers
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
For infinite action contextual bandits, smoothed regret and reduction to
regression results in state-of-the-art online performance with computational
cost independent of the action set: unfortunately, the resulting data exhaust
does not have well-defined importance-weights. This frustrates the execution of
downstream data science processes such as offline model selection. In this
paper we describe an online algorithm with an equivalent smoothed regret
guarantee, but which generates well-defined importance weights: in exchange,
the online computational cost increases, but only to order smoothness (i.e.,
still independent of the action set). This removes a key obstacle to adoption
of smoothed regret in production scenarios.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 19:57:41 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 22:55:07 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Rucker",
"Mark",
""
],
[
"Zhu",
"Yinglun",
""
],
[
"Mineiro",
"Paul",
""
]
] |
new_dataset
| 0.982731 |
2302.10912
|
Wenke Xia
|
Wenke Xia, Xu Zhao, Xincheng Pang, Changqing Zhang, Di Hu
|
Balanced Audiovisual Dataset for Imbalance Analysis
|
website:https://gewu-lab.github.io/Balanced-Audiovisual-Dataset/
| null | null | null |
cs.LG cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The imbalance problem is widespread in the field of machine learning, which
also exists in multimodal learning areas caused by the intrinsic discrepancy
between modalities of samples. Recent works have attempted to solve the
modality imbalance problem from algorithm perspective, however, they do not
fully analyze the influence of modality bias in datasets. Concretely, existing
multimodal datasets are usually collected under specific tasks, where one
modality tends to perform better than other ones in most conditions. In this
work, to comprehensively explore the influence of modality bias, we first split
existing datasets into different subsets by estimating sample-wise modality
discrepancy. We surprisingly find that the multimodal models with existing
imbalance algorithms consistently perform worse than the unimodal one on
specific subsets, in accordance with the modality bias. To further explore the
influence of modality bias and analyze the effectiveness of existing imbalance
algorithms, we build a balanced audiovisual dataset, with uniformly distributed
modality discrepancy over the whole dataset. We then conduct extensive
experiments to re-evaluate existing imbalance algorithms and draw some
interesting findings: existing algorithms only provide a compromise between
modalities and suffer from the large modality discrepancy of samples. We hope
that these findings could facilitate future research on the modality imbalance
problem.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 15:35:17 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 06:58:05 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Xia",
"Wenke",
""
],
[
"Zhao",
"Xu",
""
],
[
"Pang",
"Xincheng",
""
],
[
"Zhang",
"Changqing",
""
],
[
"Hu",
"Di",
""
]
] |
new_dataset
| 0.999787 |
2305.06595
|
Mohsinul Kabir
|
Mohsinul Kabir, Obayed Bin Mahfuz, Syed Rifat Raiyan, Hasan Mahmud and
Md Kamrul Hasan
|
BanglaBook: A Large-scale Bangla Dataset for Sentiment Analysis from
Book Reviews
|
Accepted in Findings of the Association for Computational
Linguistics: ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The analysis of consumer sentiment, as expressed through reviews, can provide
a wealth of insight regarding the quality of a product. While the study of
sentiment analysis has been widely explored in many popular languages,
relatively less attention has been given to the Bangla language, mostly due to
a lack of relevant data and cross-domain adaptability. To address this
limitation, we present BanglaBook, a large-scale dataset of Bangla book reviews
consisting of 158,065 samples classified into three broad categories: positive,
negative, and neutral. We provide a detailed statistical analysis of the
dataset and employ a range of machine learning models to establish baselines
including SVM, LSTM, and Bangla-BERT. Our findings demonstrate a substantial
performance advantage of pre-trained models over models that rely on manually
crafted features, emphasizing the necessity for additional training resources
in this domain. Additionally, we conduct an in-depth error analysis by
examining sentiment unigrams, which may provide insight into common
classification errors in under-resourced languages like Bangla. Our codes and
data are publicly available at https://github.com/mohsinulkabir14/BanglaBook.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 06:27:38 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 11:33:44 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Jun 2023 08:57:41 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Kabir",
"Mohsinul",
""
],
[
"Mahfuz",
"Obayed Bin",
""
],
[
"Raiyan",
"Syed Rifat",
""
],
[
"Mahmud",
"Hasan",
""
],
[
"Hasan",
"Md Kamrul",
""
]
] |
new_dataset
| 0.999838 |
2305.09059
|
Geoffrey Goodell
|
Geoffrey Goodell
|
Response to "The digital pound: a new form of money for households and
businesses"
|
30 pages
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This document constitutes a response to a Consultation Paper published by the
Bank of England and HM Treasury, "The digital pound: a new form of money for
households and businesses?", the latest document in a series that includes
"Central Bank Digital Currency: opportunities, challenges and design" in 2020
and "New forms of digital money" in 2021. The Consultation Paper concerns the
adoption of central bank digital currency (CBDC) for retail use in the United
Kingdom by the Bank of England. We shall address the consultation questions
directly in the third section of this document.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 22:59:21 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 09:48:53 GMT"
},
{
"version": "v3",
"created": "Fri, 26 May 2023 12:44:27 GMT"
},
{
"version": "v4",
"created": "Wed, 7 Jun 2023 21:02:11 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Goodell",
"Geoffrey",
""
]
] |
new_dataset
| 0.999772 |
2305.11699
|
Davide Rigoni
|
Davide Rigoni, Nicol\`o Navarin, Alessandro Sperduti
|
RGCVAE: Relational Graph Conditioned Variational Autoencoder for
Molecule Design
| null | null | null | null |
cs.LG cs.AI q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identifying molecules that exhibit some pre-specified properties is a
difficult problem to solve. In the last few years, deep generative models have
been used for molecule generation. Deep Graph Variational Autoencoders are
among the most powerful machine learning tools with which it is possible to
address this problem. However, existing methods struggle in capturing the true
data distribution and tend to be computationally expensive. In this work, we
propose RGCVAE, an efficient and effective Graph Variational Autoencoder based
on: (i) an encoding network exploiting a new powerful Relational Graph
Isomorphism Network; (ii) a novel probabilistic decoding component. Compared to
several state-of-the-art VAE methods on two widely adopted datasets, RGCVAE
shows state-of-the-art molecule generation performance while being
significantly faster to train.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 14:23:48 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 10:42:01 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Rigoni",
"Davide",
""
],
[
"Navarin",
"Nicolò",
""
],
[
"Sperduti",
"Alessandro",
""
]
] |
new_dataset
| 0.972555 |
2305.16283
|
Guangyao Zhai
|
Guangyao Zhai, Evin P{\i}nar \"Ornek, Shun-Cheng Wu, Yan Di, Federico
Tombari, Nassir Navab, Benjamin Busam
|
CommonScenes: Generating Commonsense 3D Indoor Scenes with Scene Graphs
|
25 pages. Video: https://youtu.be/KowMOkI32N4
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Controllable scene synthesis aims to create interactive environments for
various industrial use cases. Scene graphs provide a highly suitable interface
to facilitate these applications by abstracting the scene context in a compact
manner. Existing methods, reliant on retrieval from extensive databases or
pre-trained shape embeddings, often overlook scene-object and object-object
relationships, leading to inconsistent results due to their limited generation
capacity. To address this issue, we present CommonScenes, a fully generative
model that converts scene graphs into corresponding controllable 3D scenes,
which are semantically realistic and conform to commonsense. Our pipeline
consists of two branches, one predicting the overall scene layout via a
variational auto-encoder and the other generating compatible shapes via latent
diffusion, capturing global scene-object and local inter-object relationships
while preserving shape diversity. The generated scenes can be manipulated by
editing the input scene graph and sampling the noise in the diffusion model.
Due to lacking a scene graph dataset offering high-quality object-level meshes
with relations, we also construct SG-FRONT, enriching the off-the-shelf indoor
dataset 3D-FRONT with additional scene graph labels. Extensive experiments are
conducted on SG-FRONT where CommonScenes shows clear advantages over other
methods regarding generation consistency, quality, and diversity. Codes and the
dataset will be released upon acceptance.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:39:13 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 15:46:36 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Jun 2023 10:00:21 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Zhai",
"Guangyao",
""
],
[
"Örnek",
"Evin Pınar",
""
],
[
"Wu",
"Shun-Cheng",
""
],
[
"Di",
"Yan",
""
],
[
"Tombari",
"Federico",
""
],
[
"Navab",
"Nassir",
""
],
[
"Busam",
"Benjamin",
""
]
] |
new_dataset
| 0.999474 |
2306.01506
|
Marvin Lavechin
|
Marvin Lavechin and Yaya Sy and Hadrien Titeux and Mar\'ia Andrea Cruz
Bland\'on and Okko R\"as\"anen and Herv\'e Bredin and Emmanuel Dupoux and
Alejandrina Cristia
|
BabySLM: language-acquisition-friendly benchmark of self-supervised
spoken language models
|
Proceedings of Interspeech 2023
| null | null | null |
cs.CL eess.AS stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Self-supervised techniques for learning speech representations have been
shown to develop linguistic competence from exposure to speech without the need
for human labels. In order to fully realize the potential of these approaches
and further our understanding of how infants learn language, simulations must
closely emulate real-life situations by training on developmentally plausible
corpora and benchmarking against appropriate test sets. To this end, we propose
a language-acquisition-friendly benchmark to probe spoken language models at
the lexical and syntactic levels, both of which are compatible with the
vocabulary typical of children's language experiences. This paper introduces
the benchmark and summarizes a range of experiments showing its usefulness. In
addition, we highlight two exciting challenges that need to be addressed for
further progress: bridging the gap between text and speech and between clean
speech and in-the-wild speech.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 12:54:38 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 12:22:30 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Lavechin",
"Marvin",
""
],
[
"Sy",
"Yaya",
""
],
[
"Titeux",
"Hadrien",
""
],
[
"Blandón",
"María Andrea Cruz",
""
],
[
"Räsänen",
"Okko",
""
],
[
"Bredin",
"Hervé",
""
],
[
"Dupoux",
"Emmanuel",
""
],
[
"Cristia",
"Alejandrina",
""
]
] |
new_dataset
| 0.999442 |
2306.02827
|
Aswathy Velutharambath
|
Aswathy Velutharambath and Roman Klinger
|
UNIDECOR: A Unified Deception Corpus for Cross-Corpus Deception
Detection
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Verbal deception has been studied in psychology, forensics, and computational
linguistics for a variety of reasons, like understanding behaviour patterns,
identifying false testimonies, and detecting deception in online communication.
Varying motivations across research fields lead to differences in the domain
choices to study and in the conceptualization of deception, making it hard to
compare models and build robust deception detection systems for a given
language. With this paper, we improve this situation by surveying available
English deception datasets which include domains like social media reviews,
court testimonials, opinion statements on specific topics, and deceptive
dialogues from online strategy games. We consolidate these datasets into a
single unified corpus. Based on this resource, we conduct a correlation
analysis of linguistic cues of deception across datasets to understand the
differences and perform cross-corpus modeling experiments which show that a
cross-domain generalization is challenging to achieve. The unified deception
corpus (UNIDECOR) can be obtained from
https://www.ims.uni-stuttgart.de/data/unidecor.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 12:23:04 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 23:07:26 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Velutharambath",
"Aswathy",
""
],
[
"Klinger",
"Roman",
""
]
] |
new_dataset
| 0.993543 |
2306.03030
|
Junling Liu
|
Junling Liu, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian,
Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu, Michael Lingzhi Li
|
Benchmarking Large Language Models on CMExam -- A Comprehensive Chinese
Medical Exam Dataset
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in large language models (LLMs) have transformed the
field of question answering (QA). However, evaluating LLMs in the medical field
is challenging due to the lack of standardized and comprehensive datasets. To
address this gap, we introduce CMExam, sourced from the Chinese National
Medical Licensing Examination. CMExam consists of 60K+ multiple-choice
questions for standardized and objective evaluations, as well as solution
explanations for model reasoning evaluation in an open-ended manner. For
in-depth analyses of LLMs, we invited medical professionals to label five
additional question-wise annotations, including disease groups, clinical
departments, medical disciplines, areas of competency, and question difficulty
levels. Alongside the dataset, we further conducted thorough experiments with
representative LLMs and QA algorithms on CMExam. The results show that GPT-4
had the best accuracy of 61.6% and a weighted F1 score of 0.617. These results
highlight a great disparity when compared to human accuracy, which stood at
71.6%. For explanation tasks, while LLMs could generate relevant reasoning and
demonstrate improved performance after finetuning, they fall short of a desired
standard, indicating ample room for improvement. To the best of our knowledge,
CMExam is the first Chinese medical exam dataset to provide comprehensive
medical annotations. The experiments and findings of LLM evaluation also
provide valuable insights into the challenges and potential solutions in
developing Chinese medical QA systems and LLM evaluation pipelines. The dataset
and relevant code are available at https://github.com/williamliujl/CMExam.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 16:48:41 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 06:13:36 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Liu",
"Junling",
""
],
[
"Zhou",
"Peilin",
""
],
[
"Hua",
"Yining",
""
],
[
"Chong",
"Dading",
""
],
[
"Tian",
"Zhongyu",
""
],
[
"Liu",
"Andrew",
""
],
[
"Wang",
"Helin",
""
],
[
"You",
"Chenyu",
""
],
[
"Guo",
"Zhenhua",
""
],
[
"Zhu",
"Lei",
""
],
[
"Li",
"Michael Lingzhi",
""
]
] |
new_dataset
| 0.999797 |
2306.04236
|
Yuekun Dai
|
Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Yihang Luo,
Chen Change Loy
|
Flare7K++: Mixing Synthetic and Real Datasets for Nighttime Flare
Removal and Beyond
|
Extension of arXiv:2210.06570; Project page at
https://ykdai.github.io/projects/Flare7K
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial lights commonly leave strong lens flare artifacts on the images
captured at night, degrading both the visual quality and performance of vision
algorithms. Existing flare removal approaches mainly focus on removing daytime
flares and fail in nighttime cases. Nighttime flare removal is challenging due
to the unique luminance and spectrum of artificial lights, as well as the
diverse patterns and image degradation of the flares. The scarcity of
nighttime flare removal datasets constrains research on this crucial task.
In this paper, we introduce Flare7K++, the first comprehensive nighttime flare
removal dataset, consisting of 962 real-captured flare images (Flare-R) and
7,000 synthetic flares (Flare7K). Compared to Flare7K, Flare7K++ is
particularly effective in eliminating complicated degradation around the light
source, which is intractable when using synthetic flares alone. Moreover, the
previous flare removal pipeline relies on manual threshold and blur kernel
settings to extract light sources, which may fail when the light sources are
tiny or not overexposed. To address this issue, we additionally provide the
annotations of light sources in Flare7K++ and propose a new end-to-end pipeline
to preserve the light source while removing lens flares. Our dataset and
pipeline offer a valuable foundation and benchmark for future investigations
into nighttime flare removal studies. Extensive experiments demonstrate that
Flare7K++ supplements the diversity of existing flare datasets and pushes the
frontier of nighttime flare removal towards real-world scenarios.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 08:27:44 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 02:41:19 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Dai",
"Yuekun",
""
],
[
"Li",
"Chongyi",
""
],
[
"Zhou",
"Shangchen",
""
],
[
"Feng",
"Ruicheng",
""
],
[
"Luo",
"Yihang",
""
],
[
"Loy",
"Chen Change",
""
]
] |
new_dataset
| 0.999585 |
2306.04281
|
Anzhela Sukhanova
|
Anzhela Sukhanova, Valentyn Sobol
|
HornFuzz: Fuzzing CHC solvers
| null | null |
10.1145/3593434.3593455
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Many advanced program analysis and verification methods are based on solving
systems of Constrained Horn Clauses (CHC). Testing CHC solvers is very
important, as correctness of their work determines whether bugs in the analyzed
programs are detected or missed. One of the well-established and efficient
methods of automated software testing is fuzzing: analyzing the reactions of
programs to random input data. Currently, there are no fuzzers for CHC solvers,
and fuzzers for SMT solvers are not efficient in CHC solver testing, since they
do not consider CHC specifics. In this paper, we present HornFuzz, a
mutation-based gray-box fuzzing technique for detecting bugs in CHC solvers
based on the idea of metamorphic testing. We evaluated our fuzzer on one of the
highest performing CHC solvers, Spacer, and found a handful of bugs in Spacer.
In particular, some discovered problems are so serious that they require fixes
with significant changes to the solver.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 09:35:59 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 14:19:55 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Sukhanova",
"Anzhela",
""
],
[
"Sobol",
"Valentyn",
""
]
] |
new_dataset
| 0.954374 |
2306.04387
|
Lei Li
|
Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren,
Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, Qi Liu
|
M$^3$IT: A Large-Scale Dataset towards Multi-Modal Multilingual
Instruction Tuning
|
Fix dataset url: https://huggingface.co/datasets/MMInstruction/M3IT
Project: https://m3-it.github.io/
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Instruction tuning has significantly advanced large language models (LLMs)
such as ChatGPT, enabling them to align with human instructions across diverse
tasks. However, progress in open vision-language models (VLMs) has been limited
due to the scarcity of high-quality instruction datasets. To tackle this
challenge and promote research in the vision-language field, we introduce the
Multi-Modal, Multilingual Instruction Tuning (M$^3$IT) dataset, designed to
optimize VLM alignment with human instructions. Our M$^3$IT dataset comprises
40 carefully curated datasets, including 2.4 million instances and 400 manually
written task instructions, reformatted into a vision-to-text structure. Key
tasks are translated into 80 languages with an advanced translation system,
ensuring broader accessibility. M$^3$IT surpasses previous datasets regarding
task coverage, instruction number and instance scale. Moreover, we develop
Ying-VLM, a VLM model trained on our M$^3$IT dataset, showcasing its potential
to answer complex questions requiring world knowledge, generalize to unseen
video tasks, and comprehend unseen instructions in Chinese. We have
open-sourced the dataset to encourage further research.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 12:35:37 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 13:44:24 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Li",
"Lei",
""
],
[
"Yin",
"Yuwei",
""
],
[
"Li",
"Shicheng",
""
],
[
"Chen",
"Liang",
""
],
[
"Wang",
"Peiyi",
""
],
[
"Ren",
"Shuhuai",
""
],
[
"Li",
"Mukai",
""
],
[
"Yang",
"Yazheng",
""
],
[
"Xu",
"Jingjing",
""
],
[
"Sun",
"Xu",
""
],
[
"Kong",
"Lingpeng",
""
],
[
"Liu",
"Qi",
""
]
] |
new_dataset
| 0.959894 |
2306.04737
|
Sung-Hwan Kim
|
Ruben Becker and Davide Cenzato and Sung-Hwan Kim and Bojana Kodric
and Alberto Policriti and Nicola Prezza
|
Optimal Wheeler Language Recognition
| null | null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
A Wheeler automaton is a finite state automaton whose states admit a total
Wheeler order, reflecting the co-lexicographic order of the strings labeling
source-to-node paths. A Wheeler language is a regular language admitting an
accepting Wheeler automaton. Wheeler languages admit efficient and elegant
solutions to hard problems such as automata compression and regular expression
matching, therefore deciding whether a regular language is Wheeler is relevant
in applications requiring efficient solutions to those problems. In this paper,
we show that it is possible to decide whether a DFA with n states and m
transitions recognizes a Wheeler language in $O(mn)$ time. This is a
significant improvement over the running time $O(n^{13} + m\log n)$ of the
previous polynomial-time algorithm (Alanko et al., Information and Computation
2021). A proof-of-concept implementation of this algorithm is available in a
public repository. We complement this upper bound with a conditional matching
lower bound stating that, unless the strong exponential time hypothesis (SETH)
fails, the problem cannot be solved in strongly subquadratic time. The same
problem is known to be PSPACE-complete when the input is an NFA (D'Agostino et
al., Theoretical Computer Science 2023). Together with that result, our paper
essentially closes the algorithmic problem of Wheeler language recognition.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 19:15:54 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Becker",
"Ruben",
""
],
[
"Cenzato",
"Davide",
""
],
[
"Kim",
"Sung-Hwan",
""
],
[
"Kodric",
"Bojana",
""
],
[
"Policriti",
"Alberto",
""
],
[
"Prezza",
"Nicola",
""
]
] |
new_dataset
| 0.999675 |
2306.04743
|
Yi Zhang
|
Yi Zhang, Jan Deriu, George Katsogiannis-Meimarakis, Catherine Kosten,
Georgia Koutrika, Kurt Stockinger
|
ScienceBenchmark: A Complex Real-World Benchmark for Evaluating Natural
Language to SQL Systems
|
12 pages, 2 figures, 5 tables
| null | null | null |
cs.DB cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Natural Language to SQL systems (NL-to-SQL) have recently shown a significant
increase in accuracy for natural language to SQL query translation. This
improvement is due to the emergence of transformer-based language models, and
the popularity of the Spider benchmark - the de-facto standard for evaluating
NL-to-SQL systems. The top NL-to-SQL systems reach accuracies of up to 85\%.
However, Spider mainly contains simple databases with few tables, columns, and
entries, which does not reflect a realistic setting. Moreover, complex
real-world databases with domain-specific content have little to no training
data available in the form of NL/SQL-pairs leading to poor performance of
existing NL-to-SQL systems.
In this paper, we introduce ScienceBenchmark, a new complex NL-to-SQL
benchmark for three real-world, highly domain-specific databases. For this new
benchmark, SQL experts and domain experts created high-quality NL/SQL-pairs for
each domain. To garner more data, we extended the small amount of
human-generated data with synthetic data generated using GPT-3. We show that
our benchmark is highly challenging, as the top performing systems on Spider
achieve a very low performance on our benchmark. Thus, the challenge is
many-fold: creating NL-to-SQL systems for highly complex domains with a small
amount of hand-made training data augmented with synthetic data. To our
knowledge, ScienceBenchmark is the first NL-to-SQL benchmark designed with
complex real-world scientific databases, containing challenging training and
test data carefully validated by domain experts.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 19:37:55 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Zhang",
"Yi",
""
],
[
"Deriu",
"Jan",
""
],
[
"Katsogiannis-Meimarakis",
"George",
""
],
[
"Kosten",
"Catherine",
""
],
[
"Koutrika",
"Georgia",
""
],
[
"Stockinger",
"Kurt",
""
]
] |
new_dataset
| 0.984149 |
2306.04744
|
Changhoon Kim
|
Changhoon Kim, Kyle Min, Maitreya Patel, Sheng Cheng, Yezhou Yang
|
WOUAF: Weight Modulation for User Attribution and Fingerprinting in
Text-to-Image Diffusion Models
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid advancement of generative models, facilitating the creation of
hyper-realistic images from textual descriptions, has concurrently escalated
critical societal concerns such as misinformation. Traditional fake detection
mechanisms, although providing some mitigation, fall short in attributing
responsibility for the malicious use of synthetic images. This paper introduces
a novel approach to model fingerprinting that assigns responsibility for the
generated images, thereby serving as a potential countermeasure to model
misuse. Our method modifies generative models based on each user's unique
digital fingerprint, imprinting a unique identifier onto the resultant content
that can be traced back to the user. This approach, incorporating fine-tuning
into Text-to-Image (T2I) tasks using the Stable Diffusion Model, demonstrates
near-perfect attribution accuracy with a minimal impact on output quality. We
rigorously scrutinize our method's secrecy under two distinct scenarios: one
where a malicious user attempts to detect the fingerprint, and another where a
user possesses a comprehensive understanding of our method. We also evaluate
the robustness of our approach against various image post-processing
manipulations typically executed by end-users. Through extensive evaluation of
the Stable Diffusion models, our method presents a promising and novel avenue
for accountable model distribution and responsible use.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 19:44:14 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Kim",
"Changhoon",
""
],
[
"Min",
"Kyle",
""
],
[
"Patel",
"Maitreya",
""
],
[
"Cheng",
"Sheng",
""
],
[
"Yang",
"Yezhou",
""
]
] |
new_dataset
| 0.96456 |
2306.04752
|
Philipp Weigell
|
Philipp Weigell
|
Data coverage, richness, and quality of OpenStreetMap for special
interest tags: wayside crosses -- a case study
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Volunteered Geographic Information projects like OpenStreetMap, which allow
accessing and using the raw data, are a treasure trove for investigations
into, e.g., cultural topics, urban planning, or the accessibility of services.
Among the concerns are the reliability and accuracy of the data. While it has
been found that for mainstream topics, such as roads or museums, data
completeness and accuracy are very high, especially in the Western world, this
is not clear for niche topics. Furthermore, many of the analyses are almost a
decade old, a period during which the OpenStreetMap database has grown to over
nine billion elements.
Based on OpenStreetMap data of wayside crosses and other cross-like objects,
regional cultural differences and the prevalence of the different types within
Europe, Germany, and Bavaria are investigated. For Bavaria, the data
completeness, logical consistency, and positional, temporal, and thematic
accuracy are assessed internally and by comparison to an official dataset and
other proxies. Subsequently, the usability of the data for this specific case,
and the extent to which the findings generalize to the use of OpenStreetMap
data for other niche topics, are discussed.
It is estimated that about one sixth to one third of the crosses located
within Bavaria are recorded in the database and positional accuracy is better
than 50 metres in most cases. In addition, linguistic features of the
inscriptions, the usage of building materials, dates of erection and other
details deducible from the dataset are discussed. It is found that data quality
and coverage for niche topics exceed expectations but vary strongly by region
and should not be trusted without a thorough dissection of the dataset.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 20:00:46 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Weigell",
"Philipp",
""
]
] |
new_dataset
| 0.991891 |
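As a companion to the OpenStreetMap case study of 2306.04752 above, the sketch below shows one way such wayside-cross data could be retrieved via the public Overpass API. It is not the paper's extraction pipeline: the endpoint URL, the `historic=wayside_cross` tag, and the ISO3166-2 code used to select the Bavaria boundary are assumptions about common OSM tagging conventions and should be verified before use.

```python
# Hedged sketch: fetch wayside-cross nodes inside Bavaria from the public
# Overpass API. Endpoint, tags, and area selection are assumptions about
# current OpenStreetMap conventions, not taken from the paper.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

QUERY = """
[out:json][timeout:120];
area["ISO3166-2"="DE-BY"][admin_level=4]->.bavaria;
node["historic"="wayside_cross"](area.bavaria);
out body;
"""

def fetch_wayside_crosses():
    """Return (lat, lon, tags) for every matching node."""
    response = requests.post(OVERPASS_URL, data={"data": QUERY}, timeout=180)
    response.raise_for_status()
    elements = response.json().get("elements", [])
    return [(el["lat"], el["lon"], el.get("tags", {})) for el in elements]

if __name__ == "__main__":
    crosses = fetch_wayside_crosses()
    print(f"Retrieved {len(crosses)} wayside-cross nodes")
```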
2306.04774
|
Andre Abrantes
|
Andre Abrantes, Jiang Wang, Peng Chu, Quanzeng You, Zicheng Liu
|
RefineVIS: Video Instance Segmentation with Temporal Attention
Refinement
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a novel framework called RefineVIS for Video Instance
Segmentation (VIS) that achieves good object association between frames and
accurate segmentation masks by iteratively refining the representations using
sequence context. RefineVIS learns two separate representations on top of an
off-the-shelf frame-level image instance segmentation model: an association
representation responsible for associating objects across frames and a
segmentation representation that produces accurate segmentation masks.
Contrastive learning is utilized to learn temporally stable association
representations. A Temporal Attention Refinement (TAR) module learns
discriminative segmentation representations by exploiting temporal
relationships and a novel temporal contrastive denoising technique. Our method
supports both online and offline inference. It achieves state-of-the-art video
instance segmentation accuracy on YouTube-VIS 2019 (64.4 AP), YouTube-VIS 2021
(61.4 AP), and OVIS (46.1 AP) datasets. The visualization shows that the TAR
module can generate more accurate instance segmentation masks, particularly for
challenging cases such as highly occluded objects.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 20:45:15 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Abrantes",
"Andre",
""
],
[
"Wang",
"Jiang",
""
],
[
"Chu",
"Peng",
""
],
[
"You",
"Quanzeng",
""
],
[
"Liu",
"Zicheng",
""
]
] |
new_dataset
| 0.998967 |
2306.04842
|
Hanrong Ye
|
Hanrong Ye and Dan Xu
|
InvPT++: Inverted Pyramid Multi-Task Transformer for Visual Scene
Understanding
|
Journal extension for InvPT
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Multi-task scene understanding aims to design models that can simultaneously
predict several scene understanding tasks with one versatile model. Previous
studies typically process multi-task features in a more local way, and thus
cannot effectively learn spatially global and cross-task interactions, which
hampers the models' ability to fully leverage the consistency of various tasks
in multi-task learning. To tackle this problem, we propose an Inverted Pyramid
multi-task Transformer, capable of modeling cross-task interaction among
spatial features of different tasks in a global context. Specifically, we first
utilize a transformer encoder to capture task-generic features for all tasks.
And then, we design a transformer decoder to establish spatial and cross-task
interaction globally, and a novel UP-Transformer block is devised to increase
the resolutions of multi-task features gradually and establish cross-task
interaction at different scales. Furthermore, two types of Cross-Scale
Self-Attention modules, i.e., Fusion Attention and Selective Attention, are
proposed to efficiently facilitate cross-task interaction across different
feature scales. An Encoder Feature Aggregation strategy is further introduced
to better model multi-scale information in the decoder. Comprehensive
experiments on several 2D/3D multi-task benchmarks clearly demonstrate our
proposal's effectiveness, establishing significant state-of-the-art
performances.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 00:28:22 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Ye",
"Hanrong",
""
],
[
"Xu",
"Dan",
""
]
] |
new_dataset
| 0.9765 |
2306.04850
|
Jarno Alanko
|
Jarno N. Alanko, Elena Biagi and Simon J. Puglisi
|
Longest Common Prefix Arrays for Succinct k-Spectra
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The k-spectrum of a string is the set of all distinct substrings of length k
occurring in the string. K-spectra have many applications in bioinformatics
including pseudoalignment and genome assembly. The Spectral Burrows-Wheeler
Transform (SBWT) has been recently introduced as an algorithmic tool to
efficiently represent and query these objects. The longest common prefix (LCP)
array for a k-spectrum is an array of length n that stores the length of the
longest common prefix of adjacent k-mers as they occur in lexicographical
order. The LCP array has at least two important applications, namely to
accelerate pseudoalignment algorithms using the SBWT and to allow simulation of
variable-order de Bruijn graphs within the SBWT framework. In this paper we
explore algorithms to compute the LCP array efficiently from the SBWT
representation of the k-spectrum. Starting with a straightforward O(nk) time
algorithm, we describe algorithms that are efficient in both theory and
practice. We show that the LCP array can be computed in optimal O(n) time,
where n is the length of the SBWT of the spectrum. In practical genomics
scenarios, we show that this theoretically optimal algorithm is indeed
practical, but is often outperformed on smaller values of k by an
asymptotically suboptimal algorithm that interacts better with the CPU cache.
Our algorithms share some features with both classical Burrows-Wheeler
inversion algorithms and LCP array construction algorithms for suffix arrays.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 00:57:24 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Alanko",
"Jarno N.",
""
],
[
"Biagi",
"Elena",
""
],
[
"Puglisi",
"Simon J.",
""
]
] |
new_dataset
| 0.999061 |
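The LCP-array definition in the abstract of 2306.04850 above is concrete enough for a small illustration: sort the distinct k-mers of a string and record the longest common prefix of each adjacent pair. The Python sketch below implements that straightforward O(nk)-style computation on an arbitrary example string; it is not the SBWT-based linear-time algorithm the paper develops.

```python
# Minimal sketch: LCP array of a k-spectrum by direct character comparison
# over the lexicographically sorted distinct k-mers (the naive O(nk) route,
# not the paper's SBWT-based algorithm).

def k_spectrum(text: str, k: int) -> list[str]:
    """Distinct length-k substrings of `text`, sorted lexicographically."""
    return sorted({text[i:i + k] for i in range(len(text) - k + 1)})

def lcp(a: str, b: str) -> int:
    """Length of the longest common prefix of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def lcp_array(kmers: list[str]) -> list[int]:
    """LCP of each k-mer with its lexicographic predecessor (0 for the first)."""
    return [0] + [lcp(kmers[i - 1], kmers[i]) for i in range(1, len(kmers))]

if __name__ == "__main__":
    kmers = k_spectrum("TAGCTAGGAC", 3)
    print(kmers)             # ['AGC', 'AGG', 'CTA', 'GAC', 'GCT', 'GGA', 'TAG']
    print(lcp_array(kmers))  # [0, 2, 0, 0, 1, 1, 0]
```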
2306.04853
|
Tuan Dang
|
Tuan Dang, Khang Nguyen, Manfred Huber
|
ExtPerFC: An Efficient 2D and 3D Perception Hardware-Software Framework
for Mobile Cobot
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the reliability of the robot's perception correlates with the number of
integrated sensing modalities to tackle uncertainty, a practical solution to
manage these sensors from different computers, operate them simultaneously, and
maintain their real-time performance on the existing robotic system with
minimal effort is needed. In this work, we present an end-to-end
software-hardware framework, namely ExtPerFC, that supports both conventional
hardware and software components and integrates machine learning object
detectors without requiring an additional dedicated graphic processor unit
(GPU). We first design our framework to achieve real-time performance on the
existing robotic system, guarantee configuration optimization, and concentrate
on code reusability. We then mathematically model and utilize our transfer
learning strategies for 2D object detection and fuse them into depth images for
3D depth estimation. Lastly, we systematically test the proposed framework on
the Baxter robot with two 7-DOF arms, a four-wheel mobility base, and an Intel
RealSense D435i RGB-D camera. The results show that the robot achieves
real-time performance while executing other tasks (e.g., map building,
localization, navigation, object detection, arm moving, and grasping)
simultaneously with available hardware like Intel onboard CPUs/GPUs on
distributed computers. Also, to comprehensively control, program, and monitor
the robot system, we design and introduce an end-user application. The source
code is available at https://github.com/tuantdang/perception_framework.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 01:03:07 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Dang",
"Tuan",
""
],
[
"Nguyen",
"Khang",
""
],
[
"Huber",
"Manfred",
""
]
] |
new_dataset
| 0.999721 |
2306.04889
|
Zhiqin Chen
|
Qimin Chen, Zhiqin Chen, Hang Zhou, Hao Zhang
|
ShaDDR: Real-Time Example-Based Geometry and Texture Generation via 3D
Shape Detailization and Differentiable Rendering
| null | null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ShaDDR, an example-based deep generative neural network which
produces a high-resolution textured 3D shape through geometry detailization and
conditional texture generation applied to an input coarse voxel shape. Trained
on a small set of detailed and textured exemplar shapes, our method learns to
detailize the geometry via multi-resolution voxel upsampling and generate
textures on voxel surfaces via differentiable rendering against exemplar
texture images from a few views. The generation is real-time, taking less than
1 second to produce a 3D model with voxel resolutions up to 512^3. The
generated shape preserves the overall structure of the input coarse voxel
model, while the style of the generated geometric details and textures can be
manipulated through learned latent codes. In the experiments, we show that our
method can generate higher-resolution shapes with plausible and improved
geometric details and clean textures compared to prior works. Furthermore, we
showcase the ability of our method to learn geometric details and textures from
shapes reconstructed from real-world photos. In addition, we have developed an
interactive modeling application to demonstrate the generalizability of our
method to various user inputs and the controllability it offers, allowing users
to interactively sculpt a coarse voxel shape to define the overall structure of
the detailized 3D shape.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 02:35:30 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Chen",
"Qimin",
""
],
[
"Chen",
"Zhiqin",
""
],
[
"Zhou",
"Hang",
""
],
[
"Zhang",
"Hao",
""
]
] |
new_dataset
| 0.998379 |