id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.14611
|
Afshin Shoeibi
|
Mahboobeh Jafari, Afshin Shoeibi, Navid Ghassemi, Jonathan Heras,
Abbas Khosravi, Sai Ho Ling, Roohallah Alizadehsani, Amin Beheshti, Yu-Dong
Zhang, Shui-Hua Wang, Juan M. Gorriz, U. Rajendra Acharya, Hamid Alinejad
Rokny
|
Automatic Diagnosis of Myocarditis Disease in Cardiac MRI Modality using
Deep Transformers and Explainable Artificial Intelligence
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Myocarditis is among the most important cardiovascular diseases (CVDs),
endangering the health of many individuals by damaging the myocardium. Microbes
and viruses, such as HIV, play a vital role in myocarditis disease (MCD)
incidence. Lack of MCD diagnosis in the early stages is associated with
irreversible complications. Cardiac magnetic resonance imaging (CMRI) is highly
popular among cardiologists to diagnose CVDs. In this paper, a deep learning
(DL) based computer-aided diagnosis system (CADS) is presented for the
diagnosis of MCD using CMRI images. The proposed CADS includes dataset,
preprocessing, feature extraction, classification, and post-processing steps.
First, the Z-Alizadeh dataset was selected for the experiments. The
preprocessing step included noise removal, image resizing, and data
augmentation (DA). In this step, CutMix and MixUp techniques were used for the
DA. Then, the most recent pre-trained and transformers models were used for
feature extraction and classification using CMRI images. Our results show high
performance for the detection of MCD using transformer models compared with the
pre-trained architectures. Among the DL architectures, the Turbulence Neural
Transformer (TNT) architecture achieved an accuracy of 99.73% with a 10-fold
cross-validation strategy. The explainability-based Grad-CAM method is used to
visualize the suspected MCD areas in CMRI images.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 10:34:20 GMT"
}
] | 2022-10-27T00:00:00 |
[
[
"Jafari",
"Mahboobeh",
""
],
[
"Shoeibi",
"Afshin",
""
],
[
"Ghassemi",
"Navid",
""
],
[
"Heras",
"Jonathan",
""
],
[
"Khosravi",
"Abbas",
""
],
[
"Ling",
"Sai Ho",
""
],
[
"Alizadehsani",
"Roohallah",
""
],
[
"Beheshti",
"Amin",
""
],
[
"Zhang",
"Yu-Dong",
""
],
[
"Wang",
"Shui-Hua",
""
],
[
"Gorriz",
"Juan M.",
""
],
[
"Acharya",
"U. Rajendra",
""
],
[
"Rokny",
"Hamid Alinejad",
""
]
] |
new_dataset
| 0.99741 |
2210.14624
|
Benjamin Bischke
|
Priyash Bhugra, Benjamin Bischke, Christoph Werner, Robert Syrnicki,
Carolin Packbier, Patrick Helber, Caglar Senaras, Akhil Singh Rana, Tim
Davis, Wanda De Keersmaecker, Daniele Zanaga, Annett Wania, Ruben Van De
Kerchove, Giovanni Marchisio
|
RapidAI4EO: Mono- and Multi-temporal Deep Learning models for Updating
the CORINE Land Cover Product
|
Published in IGARSS 2022 - 2022 IEEE International Geoscience and
Remote Sensing Symposium
| null |
10.1109/IGARSS46834.2022.9883198
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the remote sensing community, Land Use Land Cover (LULC) classification
with satellite imagery is a main focus of current research activities. Accurate
and appropriate LULC classification, however, continues to be a challenging
task. In this paper, we evaluate the performance of multi-temporal (monthly
time series) compared to mono-temporal (single time step) satellite images for
multi-label classification using supervised learning on the RapidAI4EO dataset.
As a first step, we trained our CNN model on images at a single time step for
multi-label classification, i.e. mono-temporal. We incorporated time-series
images using an LSTM model to assess whether or not multi-temporal signals from
satellites improve CLC classification. The results demonstrate an improvement
of approximately 0.89% in classifying satellite imagery on 15 classes using a
multi-temporal approach on monthly time series images compared to the
mono-temporal approach. Using features from multi-temporal or mono-temporal
images, this work is a step towards an efficient change detection and land
monitoring approach.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 11:08:13 GMT"
}
] | 2022-10-27T00:00:00 |
[
[
"Bhugra",
"Priyash",
""
],
[
"Bischke",
"Benjamin",
""
],
[
"Werner",
"Christoph",
""
],
[
"Syrnicki",
"Robert",
""
],
[
"Packbier",
"Carolin",
""
],
[
"Helber",
"Patrick",
""
],
[
"Senaras",
"Caglar",
""
],
[
"Rana",
"Akhil Singh",
""
],
[
"Davis",
"Tim",
""
],
[
"De Keersmaecker",
"Wanda",
""
],
[
"Zanaga",
"Daniele",
""
],
[
"Wania",
"Annett",
""
],
[
"Van De Kerchove",
"Ruben",
""
],
[
"Marchisio",
"Giovanni",
""
]
] |
new_dataset
| 0.990455 |
2210.14667
|
Yuchen Eleanor Jiang
|
Yuchen Eleanor Jiang, Tianyu Liu, Shuming Ma, Dongdong Zhang, Mrinmaya
Sachan, Ryan Cotterell
|
A Bilingual Parallel Corpus with Discourse Annotations
|
4 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Machine translation (MT) has almost achieved human parity at sentence-level
translation. In response, the MT community has, in part, shifted its focus to
document-level translation. However, the development of document-level MT
systems is hampered by the lack of parallel document corpora. This paper
describes BWB, a large parallel corpus first introduced in Jiang et al. (2022),
along with an annotated test set. The BWB corpus consists of Chinese novels
translated by experts into English, and the annotated test set is designed to
probe the ability of machine translation systems to model various discourse
phenomena. Our resource is freely available, and we hope it will serve as a
guide and inspiration for more work in document-level machine translation.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 12:33:53 GMT"
}
] | 2022-10-27T00:00:00 |
[
[
"Jiang",
"Yuchen Eleanor",
""
],
[
"Liu",
"Tianyu",
""
],
[
"Ma",
"Shuming",
""
],
[
"Zhang",
"Dongdong",
""
],
[
"Sachan",
"Mrinmaya",
""
],
[
"Cotterell",
"Ryan",
""
]
] |
new_dataset
| 0.974949 |
2210.14703
|
Diego Ulisse Pizzagalli
|
Diego Ulisse Pizzagalli, Ilaria Arini, Mauro Prevostini
|
ClipBot: an educational, physically impaired robot that learns to walk
via genetic algorithm optimization
|
5 pages, 3 figures, brief report
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Educational robots allow experimenting with a variety of principles from
mechanics, electronics, and informatics. Here we propose ClipBot, a low-cost,
do-it-yourself, robot whose skeleton is made of two paper clips. An Arduino
nano microcontroller actuates two servo motors that move the paper clips.
However, such a mechanical configuration confers physical impairments to
movement. This creates the need for, and allows experimenting with, artificial
intelligence methods to overcome hardware limitations. We report our experience
in the usage of this robot during the study week 'fascinating informatics',
organized by the Swiss Foundation Schweizer Jugend Forscht (www.sjf.ch).
Students at the high school level were asked to implement a genetic algorithm
to optimize the movements of the robot until it learned to walk. Such a
methodology allowed the robot to learn the motor actuation scheme yielding
straight movement in the forward direction using less than 20 iterations.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 13:31:43 GMT"
}
] | 2022-10-27T00:00:00 |
[
[
"Pizzagalli",
"Diego Ulisse",
""
],
[
"Arini",
"Ilaria",
""
],
[
"Prevostini",
"Mauro",
""
]
] |
new_dataset
| 0.997956 |
2210.14712
|
Daniel Whitenack
|
Colin Leong, Joshua Nemecek, Jacob Mansdorfer, Anna Filighera, Abraham
Owodunni, and Daniel Whitenack
|
Bloom Library: Multimodal Datasets in 300+ Languages for a Variety of
Downstream Tasks
|
14 pages, 1 figure, 3 tables, accepted to and presented at EMNLP 2022
|
EMNLP 2022
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present Bloom Library, a linguistically diverse set of multimodal and
multilingual datasets for language modeling, image captioning, visual
storytelling, and speech synthesis/recognition. These datasets represent either
the most, or among the most, multilingual datasets for each of the included
downstream tasks. In total, the initial release of the Bloom Library datasets
covers 363 languages across 32 language families. We train downstream task
models for various languages represented in the data, showing the viability of
the data for future work in low-resource, multimodal NLP and establishing the
first known baselines for these downstream tasks in certain languages (e.g.,
Bisu [bzi], with an estimated population of 700 users). Some of these
first-of-their-kind baselines are comparable to state-of-the-art performance
for higher-resourced languages. The Bloom Library datasets are released under
Creative Commons licenses on the Hugging Face datasets hub to catalyze more
linguistically diverse research in the included downstream tasks.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 13:45:14 GMT"
}
] | 2022-10-27T00:00:00 |
[
[
"Leong",
"Colin",
""
],
[
"Nemecek",
"Joshua",
""
],
[
"Mansdorfer",
"Jacob",
""
],
[
"Filighera",
"Anna",
""
],
[
"Owodunni",
"Abraham",
""
],
[
"Whitenack",
"Daniel",
""
]
] |
new_dataset
| 0.99978 |
2210.14716
|
Marcelo Matheus Gauy
|
Marcelo Matheus Gauy and Marcelo Finger
|
Pretrained audio neural networks for Speech emotion recognition in
Portuguese
| null |
First Workshop on Automatic Speech Recognition for Spontaneous and
Prepared Speech Speech emotion recognition in Portuguese (SER 2022)
| null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The goal of speech emotion recognition (SER) is to identify the emotional
aspects of speech. The SER challenge for Brazilian Portuguese speech was
proposed with short snippets of Portuguese which are classified as neutral,
non-neutral female and non-neutral male according to paralinguistic elements
(laughing, crying, etc). This dataset contains about $50$ minutes of Brazilian
Portuguese speech. As the dataset is on the small side, we investigate
whether a combination of transfer learning and data augmentation techniques can
produce positive results. Thus, by combining a data augmentation technique
called SpecAugment, with the use of Pretrained Audio Neural Networks (PANNs)
for transfer learning we are able to obtain interesting results. The PANNs
(CNN6, CNN10 and CNN14) are pretrained on a large dataset called AudioSet
containing more than $5000$ hours of audio. They were finetuned on the SER
dataset and the best performing model (CNN10) on the validation set was
submitted to the challenge, achieving an $F1$ score of $0.73$ up from $0.54$
from the baselines provided by the challenge. Moreover, we also tested the use
of a Transformer neural architecture, pretrained on about $600$ hours of
Brazilian Portuguese audio data. Transformers, as well as more complex models
of PANNs (CNN14), fail to generalize to the test set in the SER dataset and do
not beat the baseline. Considering the limitation of the dataset sizes,
currently the best approach for SER is using PANNs (specifically, CNN6 and
CNN10).
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 13:48:51 GMT"
}
] | 2022-10-27T00:00:00 |
[
[
"Gauy",
"Marcelo Matheus",
""
],
[
"Finger",
"Marcelo",
""
]
] |
new_dataset
| 0.999442 |
2210.14814
|
Mohaddeseh Bastan
|
Mohaddeseh Bastan, Mihai Surdeanu, and Niranjan Balasubramanian
|
BioNLI: Generating a Biomedical NLI Dataset Using Lexico-semantic
Constraints for Adversarial Examples
|
Accepted to Findings of EMNLP 2022, Data and evaluation suite
available at https://stonybrooknlp.github.io/BioNLI/
| null | null | null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Natural language inference (NLI) is critical for complex decision-making in the
biomedical domain. One key question, for example, is whether a given biomedical
mechanism is supported by experimental evidence. This can be seen as an NLI
problem but there are no directly usable datasets to address this. The main
challenge is that manually creating informative negative examples for this task
is difficult and expensive. We introduce a novel semi-supervised procedure that
bootstraps an NLI dataset from an existing biomedical dataset that pairs
mechanisms with experimental evidence in abstracts. We generate a range of
negative examples using nine strategies that manipulate the structure of the
underlying mechanisms both with rules, e.g., flip the roles of the entities in
the interaction, and, more importantly, as perturbations via logical
constraints in a neuro-logical decoding system. We use this procedure to create
a novel dataset for NLI in the biomedical domain, called BioNLI and benchmark
two state-of-the-art biomedical classifiers. The best result we obtain is
around mid 70s in F1, suggesting the difficulty of the task. Critically, the
performance on the different classes of negative examples varies widely, from
97% F1 on the simple role change negative examples, to barely better than
chance on the negative examples generated using neuro-logic decoding.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 16:02:49 GMT"
}
] | 2022-10-27T00:00:00 |
[
[
"Bastan",
"Mohaddeseh",
""
],
[
"Surdeanu",
"Mihai",
""
],
[
"Balasubramanian",
"Niranjan",
""
]
] |
new_dataset
| 0.997489 |
2011.02980
|
Suthee Ruangwises
|
Suthee Ruangwises
|
Using Five Cards to Encode Each Integer in $\mathbb{Z}/6\mathbb{Z}$
|
This paper has appeared at SecITC 2021
| null |
10.1007/978-3-031-17510-7_12
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research in secure multi-party computation using a deck of playing cards,
often called card-based cryptography, dates back to 1989 when Den Boer
introduced the "five-card trick" to compute the logical AND function. Since
then, many protocols to compute different functions have been developed. In
this paper, we propose a new encoding scheme that uses five cards to encode
each integer in $\mathbb{Z}/6\mathbb{Z}$. Using this encoding scheme, we
develop protocols that can copy a commitment with 13 cards, add two integers
with 10 cards, and multiply two integers with 14 cards. All of our protocols
are the currently best known protocols in terms of the required number of
cards. Our encoding scheme can be generalized to encode integers in
$\mathbb{Z}/n\mathbb{Z}$ for other values of $n$ as well.
|
[
{
"version": "v1",
"created": "Thu, 5 Nov 2020 17:12:09 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Dec 2020 20:57:44 GMT"
},
{
"version": "v3",
"created": "Sun, 30 May 2021 15:53:09 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Oct 2022 09:25:33 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Ruangwises",
"Suthee",
""
]
] |
new_dataset
| 0.995187 |
2110.06870
|
Jennifer Switzer
|
Jennifer Switzer, Gabriel Marcano, Ryan Kastner, and Pat Pannuto
|
Junkyard Computing: Repurposing Discarded Smartphones to Minimize Carbon
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
1.5 billion smartphones are sold annually, and most are decommissioned less
than two years later. Most of these unwanted smartphones are neither discarded
nor recycled but languish in junk drawers and storage units. This computational
stockpile represents a substantial wasted potential: modern smartphones have
increasingly high-performance and energy-efficient processors, extensive
networking capabilities, and a reliable built-in power supply. This project
studies the ability to reuse smartphones as "junkyard computers." Junkyard
computers grow global computing capacity by extending device lifetimes, which
supplants the manufacture of new devices. We show that the capabilities of even
decade-old smartphones are within those demanded by modern cloud microservices
and discuss how to combine phones to perform increasingly complex tasks. We
describe how current operation-focused metrics do not capture the actual carbon
costs of compute. We propose Computational Carbon Intensity -- a performance
metric that balances the continued service of older devices with the
superlinear runtime improvements of newer machines. We use this metric to
redefine device service lifetime in terms of carbon efficiency. We develop a
cloudlet of reused Pixel 3A phones. We analyze the carbon benefits of deploying
large, end-to-end microservice-based applications on these smartphones.
Finally, we describe system architectures and associated challenges to scale to
cloudlets with hundreds and thousands of smartphones.
|
[
{
"version": "v1",
"created": "Wed, 13 Oct 2021 17:05:19 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 04:04:06 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Switzer",
"Jennifer",
""
],
[
"Marcano",
"Gabriel",
""
],
[
"Kastner",
"Ryan",
""
],
[
"Pannuto",
"Pat",
""
]
] |
new_dataset
| 0.955525 |
2111.02168
|
Alexandra Hotti
|
Alexandra Hotti, Riccardo Sven Risuleo, Stefan Magureanu, Aref Moradi,
Jens Lagergren
|
Graph Neural Networks for Nomination and Representation Learning of Web
Elements
|
12 pages, 8 figures, 3 tables, under review
| null | null | null |
cs.LG cs.CL cs.CV cs.HC cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper tackles the under-explored problem of DOM element nomination and
representation learning with three important contributions. First, we present a
large-scale and realistic dataset of webpages, far richer and more diverse than
other datasets proposed for element representation learning, classification and
nomination on the web. The dataset contains $51,701$ manually labeled product
pages from $8,175$ real e-commerce websites. Second, we adapt several Graph
Neural Network (GNN) architectures to website DOM trees and benchmark their
performance on a diverse set of element nomination tasks using our proposed
dataset. In element nomination, a single element on a page is selected for a
given class. We show that on our challenging dataset a simple Convolutional GNN
outperforms state-of-the-art methods on web element nomination. Finally, we
propose a new training method that further boosts the element nomination
accuracy. In nomination for the web, classification (assigning a class to a
given element) is usually used as a surrogate objective for nomination during
training. Our novel training methodology steers the classification objective
towards the more complex and useful nomination objective.
|
[
{
"version": "v1",
"created": "Wed, 3 Nov 2021 12:13:52 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Nov 2021 15:17:14 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Oct 2022 14:27:11 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Hotti",
"Alexandra",
""
],
[
"Risuleo",
"Riccardo Sven",
""
],
[
"Magureanu",
"Stefan",
""
],
[
"Moradi",
"Aref",
""
],
[
"Lagergren",
"Jens",
""
]
] |
new_dataset
| 0.99679 |
2112.04246
|
Yong Deng
|
Chenhui Qiang and Yong Deng and Kang Hao Cheong
|
Information fractal dimension of mass function
| null | null |
10.1142/S0218348X22501109
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fractals play an important role in nonlinear science. The most important
parameter for modeling a fractal is its fractal dimension. The existing
information dimension can calculate the dimension of a probability
distribution. However, given a mass function, which is a generalization of a
probability distribution, how to determine its fractal dimension is still an
open problem of immense interest. The main contribution of this work is to
propose an information fractal dimension of a mass function. Numerical examples
are presented to show the effectiveness of the proposed dimension. We discover
an important property: the dimension of the mass function with the maximum Deng
entropy is $\frac{\ln 3}{\ln 2}\approx 1.585$, which is the well-known fractal
dimension of the Sierpiński triangle.
|
[
{
"version": "v1",
"created": "Wed, 8 Dec 2021 11:44:59 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Qiang",
"Chenhui",
""
],
[
"Deng",
"Yong",
""
],
[
"Cheong",
"Kang Hao",
""
]
] |
new_dataset
| 0.994261 |
2202.02973
|
Sungjae Lee
|
Sungjae Lee, Jaeil Hwang and Kyungyong Lee
|
SpotLake: Diverse Spot Instance Dataset Archive Service
|
14 pages, 11 figures. This paper is accepted to IISWC 2022
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Public cloud service vendors provide a surplus of computing resources at a
cheaper price as a spot instance. Despite the cheaper price, the spot instance
can be forced to be shutdown at any moment whenever the surplus resources are
in shortage. To enhance spot instance usage, vendors provide diverse spot
instance datasets. Among them, the spot price information has been most widely
used so far. However, the tendency of spot prices to change only rarely weakens
the applicability of the spot price dataset. Besides the price dataset, the
recently introduced spot instance availability and interruption ratio datasets
can help users better utilize spot instances, but they are rarely used in
reality. With a thorough analysis, we could uncover major hurdles when using
the new datasets concerning the lack of historical information, query
constraints, and limited query interfaces. To overcome them, we develop
SpotLake, a spot instance data archive web service that provides historical
information of various spot instance datasets. Novel heuristics to collect
various datasets and a data serving architecture are presented. Through
real-world spot instance availability experiments, we present the applicability
of the proposed system. SpotLake is publicly available as a web service to
speed up cloud system research to improve spot instance usage and availability
while reducing cost.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 06:46:53 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 15:53:04 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Lee",
"Sungjae",
""
],
[
"Hwang",
"Jaeil",
""
],
[
"Lee",
"Kyungyong",
""
]
] |
new_dataset
| 0.99286 |
2202.09788
|
Suthee Ruangwises
|
Suthee Ruangwises, Toshiya Itoh
|
How to Physically Verify a Rectangle in a Grid: A Physical ZKP for
Shikaku
|
This paper has appeared at FUN 2022
| null |
10.4230/LIPIcs.FUN.2022.24
| null |
cs.CR math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shikaku is a pencil puzzle consisting of a rectangular grid, with some cells
containing a number. The player has to partition the grid into rectangles such
that each rectangle contains exactly one number equal to the area of that
rectangle. In this paper, we propose two physical zero-knowledge proof
protocols for Shikaku using a deck of playing cards, which allow a prover to
physically show that he/she knows a solution of the puzzle without revealing
it. Most importantly, in our second protocol we develop a general technique to
physically verify a rectangle-shaped area with a certain size in a rectangular
grid, which can be used to verify other problems with similar constraints.
|
[
{
"version": "v1",
"created": "Sun, 20 Feb 2022 11:07:26 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 09:29:22 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Ruangwises",
"Suthee",
""
],
[
"Itoh",
"Toshiya",
""
]
] |
new_dataset
| 0.99957 |
2205.07058
|
Stan Birchfield
|
Jonathan Tremblay, Moustafa Meshry, Alex Evans, Jan Kautz, Alexander
Keller, Sameh Khamis, Thomas Müller, Charles Loop, Nathan Morrical, Koki
Nagano, Towaki Takikawa, Stan Birchfield
|
RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis
|
ECCV 2022 Workshop on Learning to Generate 3D Shapes and Scenes.
Project page at http://www.cs.umd.edu/~mmeshry/projects/rtmv
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a large-scale synthetic dataset for novel view synthesis
consisting of ~300k images rendered from nearly 2000 complex scenes using
high-quality ray tracing at high resolution (1600 x 1600 pixels). The dataset
is orders of magnitude larger than existing synthetic datasets for novel view
synthesis, thus providing a large unified benchmark for both training and
evaluation. Using 4 distinct sources of high-quality 3D meshes, the scenes of
our dataset exhibit challenging variations in camera views, lighting, shape,
materials, and textures. Because our dataset is too large for existing methods
to process, we propose Sparse Voxel Light Field (SVLF), an efficient
voxel-based light field approach for novel view synthesis that achieves
comparable performance to NeRF on synthetic data, while being an order of
magnitude faster to train and two orders of magnitude faster to render. SVLF
achieves this speed by relying on a sparse voxel octree, careful voxel sampling
(requiring only a handful of queries per ray), and reduced network structure;
as well as ground truth depth maps at training time. Our dataset is generated
by NViSII, a Python-based ray tracing renderer, which is designed to be simple
for non-experts to use and share, flexible and powerful through its use of
scripting, and able to create high-quality and physically-based rendered
images. Experiments with a subset of our dataset allow us to compare standard
methods like NeRF and mip-NeRF for single-scene modeling, and pixelNeRF for
category-level modeling, pointing toward the need for future improvements in
this area.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 13:15:32 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 01:44:56 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Tremblay",
"Jonathan",
""
],
[
"Meshry",
"Moustafa",
""
],
[
"Evans",
"Alex",
""
],
[
"Kautz",
"Jan",
""
],
[
"Keller",
"Alexander",
""
],
[
"Khamis",
"Sameh",
""
],
[
"Müller",
"Thomas",
""
],
[
"Loop",
"Charles",
""
],
[
"Morrical",
"Nathan",
""
],
[
"Nagano",
"Koki",
""
],
[
"Takikawa",
"Towaki",
""
],
[
"Birchfield",
"Stan",
""
]
] |
new_dataset
| 0.999876 |
2205.14794
|
Aniket Didolkar
|
Aniket Didolkar, Kshitij Gupta, Anirudh Goyal, Nitesh B. Gundavarapu,
Alex Lamb, Nan Rosemary Ke, Yoshua Bengio
|
Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing
Mechanisms in Sequence Learning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recurrent neural networks have a strong inductive bias towards learning
temporally compressed representations, as the entire history of a sequence is
represented by a single vector. By contrast, Transformers have little inductive
bias towards learning temporally compressed representations, as they allow for
attention over all previously computed elements in a sequence. Having a more
compressed representation of a sequence may be beneficial for generalization,
as a high-level representation may be more easily re-used and re-purposed and
will contain fewer irrelevant details. At the same time, excessive compression
of representations comes at the cost of expressiveness. We propose a solution
which divides computation into two streams. A slow stream that is recurrent in
nature aims to learn a specialized and compressed representation, by forcing
chunks of $K$ time steps into a single representation which is divided into
multiple vectors. At the same time, a fast stream is parameterized as a
Transformer to process chunks consisting of $K$ time-steps conditioned on the
information in the slow-stream. In the proposed approach we hope to gain the
expressiveness of the Transformer, while encouraging better compression and
structuring of representations in the slow stream. We show the benefits of the
proposed method in terms of improved sample efficiency and generalization
performance as compared to various competitive baselines for visual perception
and sequential decision making tasks.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 00:12:33 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 06:54:51 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Didolkar",
"Aniket",
""
],
[
"Gupta",
"Kshitij",
""
],
[
"Goyal",
"Anirudh",
""
],
[
"Gundavarapu",
"Nitesh B.",
""
],
[
"Lamb",
"Alex",
""
],
[
"Ke",
"Nan Rosemary",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
new_dataset
| 0.962192 |
2206.03277
|
Kai Li Lim
|
Thara Philip, Kai Li Lim, Jake Whitehead
|
Driving and charging an EV in Australia: A real-world analysis
|
This work has been published in Australasian Transport Research Forum
(ATRF), proceedings (2022)
| null | null | null |
cs.CY stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
As outlined by the Intergovernmental Panel on Climate Change, electric
vehicles offer the greatest decarbonisation potential for land transport, in
addition to other benefits, including reduced fuel and maintenance costs,
improved air quality, reduced noise pollution, and improved national fuel
security. Owing to these benefits, governments worldwide are planning and
rolling out EV-favourable policies, and major car manufacturers are committing
to fully electrifying their offerings over the coming decades. With the number
of EVs on the roads expected to increase, it is imperative to understand the
effect of EVs on transport and energy systems. While unmanaged charging of EVs
could potentially add stress to the electricity grid, managed charging of EVs
could be beneficial to the grid in terms of improved demand-supply management
and improved integration of renewable energy sources into the grid, as well as
offer other ancillary services. To assess the impact of EVs on the electricity
grid and their potential use as batteries-on-wheels through smart charging
capabilities, decision-makers need to understand how current EV owners drive
and charge their vehicles. As such, an emerging area of research focuses on
understanding these behaviours. Some studies have used stated preference
surveys of non-EV owners or data collected from EV trials to estimate EV
driving and charging patterns. Other studies have tried to decipher EV owners'
behaviour based on data collected from national surveys or as reported by EV
owners. This study aims to fill this gap in the literature by collecting data
on real-world driving and charging patterns of 239 EVs across Australia. To
this effect, data collection from current EV owners via an application
programming interface platform began in November 2021 and is currently live.
|
[
{
"version": "v1",
"created": "Fri, 3 Jun 2022 11:01:23 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 07:26:11 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Philip",
"Thara",
""
],
[
"Lim",
"Kai Li",
""
],
[
"Whitehead",
"Jake",
""
]
] |
new_dataset
| 0.998475 |
2206.14774
|
Jose Camacho-Collados
|
Jose Camacho-Collados and Kiamehr Rezaee and Talayeh Riahi and Asahi
Ushio and Daniel Loureiro and Dimosthenis Antypas and Joanne Boisson and Luis
Espinosa-Anke and Fangyu Liu and Eugenio Martínez-Cámara and Gonzalo
Medina and Thomas Buhrmann and Leonardo Neves and Francesco Barbieri
|
TweetNLP: Cutting-Edge Natural Language Processing for Social Media
|
EMNLP 2022 Demo paper. TweetNLP: https://tweetnlp.org/
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present TweetNLP, an integrated platform for Natural
Language Processing (NLP) in social media. TweetNLP supports a diverse set of
NLP tasks, including generic focus areas such as sentiment analysis and named
entity recognition, as well as social media-specific tasks such as emoji
prediction and offensive language identification. Task-specific systems are
powered by reasonably-sized Transformer-based language models specialized on
social media text (in particular, Twitter) which can be run without the need
for dedicated hardware or cloud services. The main contributions of TweetNLP
are: (1) an integrated Python library for a modern toolkit supporting social
media analysis using our various task-specific models adapted to the social
domain; (2) an interactive online demo for codeless experimentation using our
models; and (3) a tutorial covering a wide variety of typical social media
applications.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 17:16:58 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 11:26:25 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Oct 2022 09:34:32 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Camacho-Collados",
"Jose",
""
],
[
"Rezaee",
"Kiamehr",
""
],
[
"Riahi",
"Talayeh",
""
],
[
"Ushio",
"Asahi",
""
],
[
"Loureiro",
"Daniel",
""
],
[
"Antypas",
"Dimosthenis",
""
],
[
"Boisson",
"Joanne",
""
],
[
"Espinosa-Anke",
"Luis",
""
],
[
"Liu",
"Fangyu",
""
],
[
"Martínez-Cámara",
"Eugenio",
""
],
[
"Medina",
"Gonzalo",
""
],
[
"Buhrmann",
"Thomas",
""
],
[
"Neves",
"Leonardo",
""
],
[
"Barbieri",
"Francesco",
""
]
] |
new_dataset
| 0.999413 |
2209.04280
|
Arie Cattan
|
Shon Otmazgin, Arie Cattan, Yoav Goldberg
|
F-coref: Fast, Accurate and Easy to Use Coreference Resolution
|
AACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce fastcoref, a python package for fast, accurate, and easy-to-use
English coreference resolution. The package is pip-installable, and allows two
modes: an accurate mode based on the LingMess architecture, providing
state-of-the-art coreference accuracy, and a substantially faster model,
F-coref, which is the focus of this work. F-coref makes it possible to process 2.8K
OntoNotes documents in 25 seconds on a V100 GPU (compared to 6 minutes for the
LingMess model, and to 12 minutes of the popular AllenNLP coreference model)
with only a modest drop in accuracy. The fast speed is achieved through a
combination of distillation of a compact model from the LingMess model, and an
efficient batching implementation using a technique we call leftover batching.
Our code is available at https://github.com/shon-otmazgin/fastcoref
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 12:52:28 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2022 09:24:22 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Sep 2022 13:40:57 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Oct 2022 10:42:29 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Otmazgin",
"Shon",
""
],
[
"Cattan",
"Arie",
""
],
[
"Goldberg",
"Yoav",
""
]
] |
new_dataset
| 0.99969 |
2210.05857
|
Nathaniel Simon
|
Nathaniel Simon, Allen Z. Ren, Alexander Piqué, David Snyder, Daphne
Barretto, Marcus Hultmark, and Anirudha Majumdar
|
FlowDrone: Wind Estimation and Gust Rejection on UAVs Using
Fast-Response Hot-Wire Flow Sensors
|
Submitted to ICRA 2023. See supplementary video at
https://youtu.be/KWqkH9Z-338
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicles (UAVs) are finding use in applications that place
increasing emphasis on robustness to external disturbances including extreme
wind. However, traditional multirotor UAV platforms do not directly sense wind;
conventional flow sensors are too slow, insensitive, or bulky for widespread
integration on UAVs. Instead, drones typically observe the effects of wind
indirectly through accumulated errors in position or trajectory tracking. In
this work, we integrate a novel flow sensor based on micro-electro-mechanical
systems (MEMS) hot-wire technology developed in our prior work onto a
multirotor UAV for wind estimation. These sensors are omnidirectional,
lightweight, fast, and accurate. In order to achieve superior tracking
performance in windy conditions, we train a `wind-aware' residual-based
controller via reinforcement learning using simulated wind gusts and their
aerodynamic effects on the drone. In extensive hardware experiments, we
demonstrate the wind-aware controller outperforming two strong `wind-unaware'
baseline controllers in challenging windy conditions. See:
https://youtu.be/KWqkH9Z-338.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 01:49:56 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 01:40:21 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Simon",
"Nathaniel",
""
],
[
"Ren",
"Allen Z.",
""
],
[
"Piqué",
"Alexander",
""
],
[
"Snyder",
"David",
""
],
[
"Barretto",
"Daphne",
""
],
[
"Hultmark",
"Marcus",
""
],
[
"Majumdar",
"Anirudha",
""
]
] |
new_dataset
| 0.979895 |
2210.12154
|
Shashank Reddy Vadyala
|
Shashank Reddy Vadyala, and Sai Nethra Betgeri
|
Use of BNNM for interference wave solutions of the gBS-like equation and
comparison with PINNs
|
Mistakes in paper
| null | null | null |
cs.LG cs.NA cs.NE math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, the generalized broken soliton-like (gBS-like) equation is
derived through the generalized bilinear method. The neural network model,
which can fit the explicit solution with zero error, is found. The interference
wave solution of the gBS-like equation is obtained by using the bilinear neural
network method (BNNM) and physical informed neural networks (PINNs).
Interference waves are shown well via three-dimensional plots and density
plots. Compared with PINNs, the bilinear neural network method is not only more
accurate but also faster.
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 17:54:40 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 11:20:04 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Vadyala",
"Shashank Reddy",
""
],
[
"Betgeri",
"Sai Nethra",
""
]
] |
new_dataset
| 0.994644 |
2210.12889
|
Elliot Murphy
|
Evelina Leivada, Elliot Murphy, Gary Marcus
|
DALL-E 2 Fails to Reliably Capture Common Syntactic Processes
| null | null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Machine intelligence is increasingly being linked to claims about sentience,
language processing, and an ability to comprehend and transform natural
language into a range of stimuli. We systematically analyze the ability of
DALL-E 2 to capture 8 grammatical phenomena pertaining to compositionality that
are widely discussed in linguistics and pervasive in human language: binding
principles and coreference, passives, word order, coordination, comparatives,
negation, ellipsis, and structural ambiguity. Whereas young children routinely
master these phenomena, learning systematic mappings between syntax and
semantics, DALL-E 2 is unable to reliably infer meanings that are consistent
with the syntax. These results challenge recent claims concerning the capacity
of such systems to understand human language. We make available the full set
of test materials as a benchmark for future testing.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 23:56:54 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 05:16:50 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Leivada",
"Evelina",
""
],
[
"Murphy",
"Elliot",
""
],
[
"Marcus",
"Gary",
""
]
] |
new_dataset
| 0.976555 |
2210.13520
|
Robert Dougherty-Bliss
|
Robert Dougherty-Bliss
|
Gosper's algorithm and Bell numbers
|
13 pages
| null | null | null |
cs.SC math.CO math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
Computers are good at evaluating finite sums in closed form, but there are
finite sums which do not have closed forms. Summands which do not produce a
closed form can often be "fixed" by multiplying them by a suitable
polynomial. We provide an explicit description of a class of such polynomials
for simple hypergeometric summands in terms of the Bell numbers.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 18:20:07 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Dougherty-Bliss",
"Robert",
""
]
] |
new_dataset
| 0.998526 |
2210.13522
|
Anjali Narayan-Chen
|
Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Shuyang Gao, Tagyoung
Chung, Jing Huang, Yang Liu, Nanyun Peng
|
Context-Situated Pun Generation
|
Accepted to EMNLP 2022 main conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous work on pun generation commonly begins with a given pun word (a pair
of homophones for heterographic pun generation and a polyseme for homographic
pun generation) and seeks to generate an appropriate pun. While this may enable
efficient pun generation, we believe that a pun is most entertaining if it fits
appropriately within a given context, e.g., a given situation or dialogue. In
this work, we propose a new task, context-situated pun generation, where a
specific context represented by a set of keywords is provided, and the task is
to first identify suitable pun words that are appropriate for the context, then
generate puns based on the context keywords and the identified pun words. We
collect CUP (Context-sitUated Pun), containing 4.5k tuples of context words and
pun pairs. Based on the new data and setup, we propose a pipeline system for
context-situated pun generation, including a pun word retrieval module that
identifies suitable pun words for a given context, and a generation module that
generates puns from context keywords and pun words. Human evaluation shows that
69% of our top retrieved pun words can be used to generate context-situated
puns, and our generation module yields successful puns 31% of the time given a
plausible tuple of context words and pun pair, almost tripling the yield of a
state-of-the-art pun generation model. With an end-to-end evaluation, our
pipeline system with the top-1 retrieved pun pair for a given context can
generate successful puns 40% of the time, better than all other modeling
variations but 32% lower than the human success rate. This highlights the
difficulty of the task, and encourages more research in this direction.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 18:24:48 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Sun",
"Jiao",
""
],
[
"Narayan-Chen",
"Anjali",
""
],
[
"Oraby",
"Shereen",
""
],
[
"Gao",
"Shuyang",
""
],
[
"Chung",
"Tagyoung",
""
],
[
"Huang",
"Jing",
""
],
[
"Liu",
"Yang",
""
],
[
"Peng",
"Nanyun",
""
]
] |
new_dataset
| 0.994596 |
2210.13600
|
Abdulaziz Alhamadani
|
Abdulaziz Alhamadani, Xuchao Zhang, Jianfeng He, Chang-Tien Lu
|
LANS: Large-scale Arabic News Summarization Corpus
|
10 pages, 1 figure
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Text summarization has been intensively studied in many languages, and some
languages have reached advanced stages. Yet, Arabic Text Summarization (ATS) is
still in its developing stages. Existing ATS datasets are either small or lack
diversity. We build LANS, a large-scale and diverse dataset for the Arabic Text
Summarization task. LANS offers 8.4 million articles and their summaries
extracted from newspapers websites metadata between 1999 and 2019. The
high-quality and diverse summaries are written by journalists from 22 major
Arab newspapers, and include an eclectic mix of more than 7 topics
from each source. We conduct an intrinsic evaluation on LANS by both automatic
and human evaluations. Human evaluation of 1000 random samples reports 95.4%
accuracy for our collected summaries, and automatic evaluation quantifies the
diversity and abstractness of the summaries. The dataset is publicly available
upon request.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 20:54:01 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Alhamadani",
"Abdulaziz",
""
],
[
"Zhang",
"Xuchao",
""
],
[
"He",
"Jianfeng",
""
],
[
"Lu",
"Chang-Tien",
""
]
] |
new_dataset
| 0.999156 |
2210.13626
|
Aditya Aravind Chinchure
|
Sahithya Ravi, Aditya Chinchure, Leonid Sigal, Renjie Liao, Vered
Shwartz
|
VLC-BERT: Visual Question Answering with Contextualized Commonsense
Knowledge
|
Accepted at WACV 2023. For code and supplementary material, see
https://github.com/aditya10/VLC-BERT
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
There has been a growing interest in solving Visual Question Answering (VQA)
tasks that require the model to reason beyond the content present in the image.
In this work, we focus on questions that require commonsense reasoning. In
contrast to previous methods which inject knowledge from static knowledge
bases, we investigate the incorporation of contextualized knowledge using
Commonsense Transformer (COMET), an existing knowledge model trained on
human-curated knowledge bases. We propose a method to generate, select, and
encode external commonsense knowledge alongside visual and textual cues in a
new pre-trained Vision-Language-Commonsense transformer model, VLC-BERT.
Through our evaluation on the knowledge-intensive OK-VQA and A-OKVQA datasets,
we show that VLC-BERT is capable of outperforming existing models that utilize
static knowledge bases. Furthermore, through a detailed analysis, we explain
which questions benefit, and which don't, from contextualized commonsense
knowledge from COMET.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 22:01:17 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Ravi",
"Sahithya",
""
],
[
"Chinchure",
"Aditya",
""
],
[
"Sigal",
"Leonid",
""
],
[
"Liao",
"Renjie",
""
],
[
"Shwartz",
"Vered",
""
]
] |
new_dataset
| 0.990275 |
2210.13670
|
Shreemoy Mishra
|
Sergio Demian Lerner, Federico Jinich, Diego Masini, Shreemoy Mishra
|
Simplified State Storage Rent for EVM Blockchains
|
5 pages
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Uncontrolled growth of blockchain state can adversely affect client
performance, decentralization and security. Previous attempts to introduce
duration-based state storage pricing or 'storage rent' in Ethereum have
stalled, partly because of complexity. We present a new approach with finer
granularity to "spread" rent payments across peers. Our proposal shifts the
burden of state rent from accounts to transaction senders in a quasi-random
manner. This proposal offers a simple path for initial adoption on Ethereum
Virtual Machine (EVM) compatible chains, and serves as a foundation to address
remaining challenges.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 00:07:21 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Lerner",
"Sergio Demian",
""
],
[
"Jinich",
"Federico",
""
],
[
"Masini",
"Diego",
""
],
[
"Mishra",
"Shreemoy",
""
]
] |
new_dataset
| 0.996057 |
2210.13693
|
Peng Shi
|
Peng Shi, Rui Zhang, He Bai, and Jimmy Lin
|
XRICL: Cross-lingual Retrieval-Augmented In-Context Learning for
Cross-lingual Text-to-SQL Semantic Parsing
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In-context learning using large language models has recently shown surprising
results for semantic parsing tasks such as Text-to-SQL translation. Prompting
GPT-3 or Codex using several examples of question-SQL pairs can produce
excellent results, comparable to state-of-the-art finetuning-based models.
However, existing work primarily focuses on English datasets, and it is unknown
whether large language models can serve as competitive semantic parsers for
other languages. To bridge this gap, our work focuses on cross-lingual
Text-to-SQL semantic parsing for translating non-English utterances into SQL
queries based on an English schema. We consider a zero-shot transfer learning
setting with the assumption that we do not have any labeled examples in the
target language (but have annotated examples in English). This work introduces
the XRICL framework, which learns to retrieve relevant English exemplars for a
given query to construct prompts. We also include global translation exemplars
for a target language to facilitate the translation process for large language
models. To systematically evaluate our model, we construct two new benchmark
datasets, XSpider and XKaggle-dbqa, which include questions in Chinese,
Vietnamese, Farsi, and Hindi. Our experiments show that XRICL effectively
leverages large pre-trained language models to outperform existing baselines.
Data and code are publicly available at https://github.com/Impavidity/XRICL.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 01:33:49 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Shi",
"Peng",
""
],
[
"Zhang",
"Rui",
""
],
[
"Bai",
"He",
""
],
[
"Lin",
"Jimmy",
""
]
] |
new_dataset
| 0.999101 |
2210.13734
|
Tarik A. Rashid
|
Rebin M. Ahmed, Tarik A. Rashid, Polla Fattah, Abeer Alsadoon, Nebojsa
Bacanin, Seyedali Mirjalili, S.Vimal, Amit Chhabra
|
Kurdish Handwritten Character Recognition using Deep Learning Techniques
|
12 pages
|
Gene Expression Patterns, 2022
|
10.1016/j.gep.2022.119278
| null |
cs.CV cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Handwriting recognition is one of the active and challenging areas of
research in the field of image processing and pattern recognition. It has many
applications, including a reading aid for the visually impaired, automated
reading and processing of bank checks, making any handwritten document
searchable, converting documents into structured text form, and more. Moreover, high
accuracy rates have been recorded by handwriting recognition systems for
English, Chinese, Arabic, Persian, and many other languages. Yet there is no
such system available for offline Kurdish handwriting recognition. In this
paper, an attempt is made to design and develop a model that can recognize
handwritten characters for Kurdish alphabets using deep learning techniques.
Kurdish (Sorani) contains 34 characters and mainly employs an Arabic/Persian
based script with modified alphabets. In this work, a Deep Convolutional Neural
Network model is employed that has shown exemplary performance in handwriting
recognition systems. Then, a comprehensive dataset was created for handwritten
Kurdish characters, which contains more than 40 thousand images. The created
dataset has been used for training the Deep Convolutional Neural Network model
for classification and recognition tasks. In the proposed system, the
experimental results show an acceptable recognition level. Testing reported a
96% accuracy rate, and training reported a 97% accuracy rate. From the
experimental results, it is clear that the proposed deep
learning model is performing well and is comparable to the similar model of
other languages' handwriting recognition systems.
|
[
{
"version": "v1",
"created": "Tue, 18 Oct 2022 16:48:28 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Ahmed",
"Rebin M.",
""
],
[
"Rashid",
"Tarik A.",
""
],
[
"Fattah",
"Polla",
""
],
[
"Alsadoon",
"Abeer",
""
],
[
"Bacanin",
"Nebojsa",
""
],
[
"Mirjalili",
"Seyedali",
""
],
[
"Vimal",
"S.",
""
],
[
"Chhabra",
"Amit",
""
]
] |
new_dataset
| 0.996194 |
2210.13778
|
Rifki Afina Putri
|
Rifki Afina Putri and Alice Oh
|
IDK-MRC: Unanswerable Questions for Indonesian Machine Reading
Comprehension
|
EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Machine Reading Comprehension (MRC) has become one of the essential tasks in
Natural Language Understanding (NLU) as it is often included in several NLU
benchmarks (Liang et al., 2020; Wilie et al., 2020). However, most MRC datasets
only have answerable question type, overlooking the importance of unanswerable
questions. MRC models trained only on answerable questions will select the span
that is most likely to be the answer, even when the answer does not actually
exist in the given passage (Rajpurkar et al., 2018). This problem especially
remains in medium- to low-resource languages like Indonesian. Existing
Indonesian MRC datasets (Purwarianti et al., 2007; Clark et al., 2020) are
still inadequate because of the small size and limited question types, i.e.,
they only cover answerable questions. To fill this gap, we build a new
Indonesian MRC dataset called I(n)don'tKnow- MRC (IDK-MRC) by combining the
automatic and manual unanswerable question generation to minimize the cost of
manual dataset construction while maintaining the dataset quality. Combined
with the existing answerable questions, IDK-MRC consists of more than 10K
questions in total. Our analysis shows that our dataset significantly improves
the performance of Indonesian MRC models, showing a large improvement for
unanswerable questions.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 05:46:53 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Putri",
"Rifki Afina",
""
],
[
"Oh",
"Alice",
""
]
] |
new_dataset
| 0.993111 |
2210.13826
|
Lizhao Liu
|
Lizhao Liu, Kunyang Lin, Shangxin Huang, Zhongli Li, Chao Li, Yunbo
Cao, and Qingyu Zhou
|
Instance Segmentation for Chinese Character Stroke Extraction, Datasets
and Benchmarks
|
12 pages, 8 pages for the main paper, 4 pages for the supplementary
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stroke is the basic element of Chinese character and stroke extraction has
been an important and long-standing endeavor. Existing stroke extraction
methods are often handcrafted and highly depend on domain expertise due to the
limited training data. Moreover, there are no standardized benchmarks to
provide a fair comparison between different stroke extraction methods, which,
we believe, is a major impediment to the development of Chinese character
stroke understanding and related tasks. In this work, we present the first
publicly available Chinese Character Stroke Extraction (CCSE) benchmark, with two
new large-scale datasets: Kaiti CCSE (CCSE-Kai) and Handwritten CCSE (CCSE-HW).
With the large-scale datasets, we hope to leverage the representation power of
deep models such as CNNs to solve the stroke extraction task, which, however,
remains an open question. To this end, we turn the stroke extraction problem
into a stroke instance segmentation problem. Using the proposed datasets to
train a stroke instance segmentation model, we surpass previous methods by a
large margin. Moreover, the models trained with the proposed datasets benefit
the downstream font generation and handwritten aesthetic assessment tasks. We
hope these benchmark results can facilitate further research. The source code
and datasets are publicly available at: https://github.com/lizhaoliu-Lec/CCSE.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 08:09:14 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Liu",
"Lizhao",
""
],
[
"Lin",
"Kunyang",
""
],
[
"Huang",
"Shangxin",
""
],
[
"Li",
"Zhongli",
""
],
[
"Li",
"Chao",
""
],
[
"Cao",
"Yunbo",
""
],
[
"Zhou",
"Qingyu",
""
]
] |
new_dataset
| 0.993951 |
2210.13885
|
Fernando Alonso-Fernandez
|
Andreas Ranftl, Fernando Alonso-Fernandez, Stefan Karlsson, Josef
Bigun
|
Real-time AdaBoost cascade face tracker based on likelihood map and
optical flow
|
Published at IET Biometrics Journal
| null |
10.1049/iet-bmt.2016.0202
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The authors present a novel face tracking approach where optical flow
information is incorporated into a modified version of the Viola Jones
detection algorithm. In the original algorithm, detection is static, as
information from previous frames is not considered. In addition, candidate
windows have to pass all stages of the classification cascade, otherwise they
are discarded as containing no face. In contrast, the proposed tracker
preserves information about the number of classification stages passed by each
window. Such information is used to build a likelihood map, which represents
the probability of having a face located at that position. Tracking
capabilities are provided by extrapolating the position of the likelihood map
to the next frame by optical flow computation. The proposed algorithm works in
real time on a standard laptop. The system is verified on the Boston Head
Tracking Database, showing that the proposed algorithm outperforms the standard
Viola Jones detector in terms of detection rate and stability of the output
bounding box, as well as including the capability to deal with occlusions. The
authors also evaluate two recently published face detectors based on
convolutional networks and deformable part models with their algorithm showing
a comparable accuracy at a fraction of the computation time.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 10:15:07 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Ranftl",
"Andreas",
""
],
[
"Alonso-Fernandez",
"Fernando",
""
],
[
"Karlsson",
"Stefan",
""
],
[
"Bigun",
"Josef",
""
]
] |
new_dataset
| 0.99371 |
2210.13992
|
Lukas Bernreiter
|
Lukas Bernreiter, Lionel Ott, Roland Siegwart and Cesar Cadena
|
SphNet: A Spherical Network for Semantic Pointcloud Segmentation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic segmentation for robotic systems can enable a wide range of
applications, from self-driving cars and augmented reality systems to domestic
robots. We argue that a spherical representation is a natural one for
egocentric pointclouds. Thus, in this work, we present a novel framework
exploiting such a representation of LiDAR pointclouds for the task of semantic
segmentation. Our approach is based on a spherical convolutional neural network
that can seamlessly handle observations from various sensor systems (e.g.,
different LiDAR systems) and provides an accurate segmentation of the
environment. We operate in two distinct stages: First, we encode the projected
input pointclouds to spherical features. Second, we decode and back-project the
spherical features to achieve an accurate semantic segmentation of the
pointcloud. We evaluate our method with respect to state-of-the-art
projection-based semantic segmentation approaches using well-known public
datasets. We demonstrate that the spherical representation enables us to
provide more accurate segmentation and to generalize better to sensors with a
different field-of-view and number of beams than those seen during training.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 09:08:19 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Bernreiter",
"Lukas",
""
],
[
"Ott",
"Lionel",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Cadena",
"Cesar",
""
]
] |
new_dataset
| 0.987546 |
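A minimal sketch of the spherical projection step described in the SphNet record above (2210.13992): the snippet below bins a synthetic LiDAR cloud into an equirectangular range image, the kind of input a spherical network would consume. It illustrates only the projection, not the paper's spherical CNN, and the image resolution and vertical field-of-view values are assumptions chosen for the example.

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=15.0, fov_down=-25.0):
    """Project an (N, 3) point cloud onto an equirectangular (elevation x azimuth)
    range image; each cell stores the range of the last point that falls into it."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-9
    azimuth = np.arctan2(y, x)                               # [-pi, pi]
    elevation = np.arcsin(z / r)                             # [-pi/2, pi/2]
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    u = (0.5 * (1.0 - azimuth / np.pi) * w).astype(int)      # column index
    v = ((fov_up - elevation) / (fov_up - fov_down) * h).astype(int)  # row index
    u, v = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)        # clamp out-of-FOV points
    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = r                                          # last point wins per cell
    return image

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(10000, 3)) * np.array([20.0, 20.0, 2.0])
    print("range image shape:", spherical_projection(pts).shape)
```

Points outside the assumed field of view are simply clamped to the border rows to keep the sketch short; a real pipeline would typically drop or flag them.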
2210.14006
|
Wentu Song
|
Wentu Song and Kui Cai
|
Non-binary Two-Deletion Correcting Codes and Burst-Deletion Correcting
Codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we construct systematic $q$-ary two-deletion correcting codes
and burst-deletion correcting codes, where $q\geq 2$ is an even integer. For
two-deletion codes, our construction has redundancy $5\log n+O(\log q\log\log
n)$ and has encoding complexity near-linear in $n$, where $n$ is the length of
the message sequences. For burst-deletion codes, we first present a
construction of binary codes with redundancy $\log n+9\log\log
n+\gamma_t+o(\log\log n)$ bits $(\gamma_t$ is a constant that depends only on
$t)$ and capable of correcting a burst of at most $t$ deletions, which improves
the Lenz-Polyanskii Construction (ISIT 2020). Then we give a construction of
$q$-ary codes with redundancy $\log n+(8\log q+9)\log\log n+o(\log q\log\log
n)+\gamma_t$ bits and capable of correcting a burst of at most $t$ deletions.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 13:21:54 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Song",
"Wentu",
""
],
[
"Cai",
"Kui",
""
]
] |
new_dataset
| 0.9995 |
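The record above (2210.14006) extends Tenengolts-style number-theoretic constructions to two deletions and bursts. As background only, the sketch below implements the classic binary Varshamov-Tenengolts (VT) single-deletion code, not the paper's constructions: membership is a single congruence on a weighted bit sum, and a deletion is corrected by re-inserting candidate bits and keeping the unique candidate that satisfies the congruence.

```python
from itertools import product

def vt_syndrome(x):
    """Weighted bit sum sum_i i*x_i (1-indexed), modulo n+1."""
    n = len(x)
    return sum((i + 1) * b for i, b in enumerate(x)) % (n + 1)

def vt_codewords(n, a=0):
    """The code VT_a(n): all length-n binary words whose syndrome equals a."""
    return [x for x in product((0, 1), repeat=n) if vt_syndrome(x) == a]

def correct_single_deletion(received, n, a=0):
    """Re-insert one bit at every position and keep the unique candidate in VT_a(n)."""
    candidates = set()
    for pos in range(n):
        for bit in (0, 1):
            cand = received[:pos] + (bit,) + received[pos:]
            if vt_syndrome(cand) == a:
                candidates.add(cand)
    assert len(candidates) == 1  # VT codes correct any single deletion uniquely
    return candidates.pop()

if __name__ == "__main__":
    n = 8
    code = vt_codewords(n)
    x = code[5]
    received = x[:3] + x[4:]           # delete the 4th bit
    assert correct_single_deletion(received, n) == x
    print(f"|VT_0({n})| = {len(code)}; recovered {x} from {received}")
```

The brute-force decoder only serves to make the congruence-based idea concrete; practical VT decoders recover the deleted position directly from the syndrome.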
2210.14085
|
Marcelo Matheus Gauy
|
Marcelo Matheus Gauy and Marcelo Finger
|
Audio MFCC-gram Transformers for respiratory insufficiency detection in
COVID-19
| null |
SIMP\'OSIO BRASILEIRO DE TECNOLOGIA DA INFORMA\c{C}\~AO E DA
LINGUAGEM HUMANA (STIL), 13. , 2021, Evento Online. Anais [...]. Porto
Alegre: Sociedade Brasileira de Computa\c{c}\~ao, 2021 . p. 143-152
|
10.5753/stil.2021.17793
| null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This work explores speech as a biomarker and investigates the detection of
respiratory insufficiency (RI) by analyzing speech samples. Previous work
\cite{spira2021} constructed a dataset of respiratory insufficiency COVID-19
patient utterances and analyzed it by means of a convolutional neural network
achieving an accuracy of $87.04\%$, validating the hypothesis that one can
detect RI through speech. Here, we study how Transformer neural network
architectures can improve the performance on RI detection. This approach
enables construction of an acoustic model. By choosing the correct pretraining
technique, we generate a self-supervised acoustic model, leading to improved
performance ($96.53\%$) of Transformers for RI detection.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 15:11:40 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Gauy",
"Marcelo Matheus",
""
],
[
"Finger",
"Marcelo",
""
]
] |
new_dataset
| 0.976115 |
2210.14101
|
Shenjie Huang
|
Shenjie Huang, Cheng Chen, Mohammad Dehghani Soltani, Robert
Henderson, Harald Haas, and Majid Safari
|
SPAD-Based Optical Wireless Communication with ACO-OFDM
|
arXiv admin note: substantial text overlap with arXiv:2206.02062
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The sensitivity of the optical wireless communication (OWC) can be
effectively improved by employing the highly sensitive single-photon avalanche
diode (SPAD) arrays. However, the nonlinear distortion introduced by the dead
time strongly limits the throughput of the SPAD-based OWC systems. Optical
orthogonal frequency division multiplexing (OFDM) can be employed in the
systems with SPAD arrays to improve the spectral efficiency. In this work, a
theoretical performance analysis of SPAD-based OWC system with
asymmetrically-clipped optical OFDM (ACO-OFDM) is presented. The impact of the
SPAD nonlinearity on the system performance is investigated. In addition, the
comparison of the considered scheme with direct-current-biased optical OFDM
(DCO-OFDM) is presented showing the distinct reliable operation regimes of the
two schemes. In the low power regimes, ACO-OFDM outperforms DCO-OFDM; whereas,
the latter is more preferable in the high power regimes.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 15:39:20 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Huang",
"Shenjie",
""
],
[
"Chen",
"Cheng",
""
],
[
"Soltani",
"Mohammad Dehghani",
""
],
[
"Henderson",
"Robert",
""
],
[
"Haas",
"Harald",
""
],
[
"Safari",
"Majid",
""
]
] |
new_dataset
| 0.966859 |
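The record above (2210.14101) analyzes ACO-OFDM for SPAD receivers. A minimal sketch of plain ACO-OFDM modulation and demodulation (ignoring the SPAD dead-time nonlinearity the paper actually models): QPSK symbols are placed on odd subcarriers with Hermitian symmetry, the real time-domain signal is clipped at zero, and because the clipping distortion falls only on even subcarriers, the odd subcarriers are recovered after scaling by two. The FFT size and QPSK mapping are assumptions for the example.

```python
import numpy as np

N = 64                                       # IFFT size (assumed for the example)
ODD = np.arange(1, N // 2, 2)                # odd subcarriers in the first half

def aco_ofdm_modulate(symbols):
    """Place symbols on odd subcarriers, enforce Hermitian symmetry so the IFFT is
    real, then clip negative samples to zero (the asymmetric clipping of ACO-OFDM)."""
    X = np.zeros(N, dtype=complex)
    X[ODD] = symbols
    X[N - ODD] = np.conj(symbols)            # Hermitian symmetry -> real time signal
    x = np.fft.ifft(X).real
    return np.maximum(x, 0.0)

def aco_ofdm_demodulate(x_clipped):
    """Clipping distortion lands only on the even subcarriers, so reading the odd
    subcarriers and scaling by 2 recovers the transmitted symbols."""
    return 2.0 * np.fft.fft(x_clipped)[ODD]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, size=(N // 4, 2))
    qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    received = aco_ofdm_demodulate(aco_ofdm_modulate(qpsk))
    recovered = np.stack([(received.real > 0), (received.imag > 0)], axis=1).astype(int)
    assert np.array_equal(bits, recovered)
    print("noiseless ACO-OFDM round trip OK for", len(qpsk), "QPSK symbols")
```

The round trip is exact in this noiseless setting; the record's contribution concerns how the SPAD receiver's nonlinearity changes the picture, which this sketch does not attempt to reproduce.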
2210.14124
|
Yufan Zhou
|
Yufan Zhou, Chunyuan Li, Changyou Chen, Jianfeng Gao, Jinhui Xu
|
Lafite2: Few-shot Text-to-Image Generation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-image generation models have progressed considerably in recent years
and can now generate impressively realistic images from arbitrary text. Most of
such models are trained on web-scale image-text paired datasets, which may not
be affordable for many researchers. In this paper, we propose a novel method
for pre-training text-to-image generation model on image-only datasets. It
considers a retrieval-then-optimization procedure to synthesize pseudo text
features: for a given image, relevant pseudo text features are first retrieved,
then optimized for better alignment. The low requirement of the proposed method
yields high flexibility and usability: it can be beneficial to a wide range of
settings, including the few-shot, semi-supervised and fully-supervised
learning; it can be applied on different models including generative
adversarial networks (GANs) and diffusion models. Extensive experiments
illustrate the effectiveness of the proposed method. On MS-COCO dataset, our
GAN model obtains Fr\'echet Inception Distance (FID) of 6.78 which is the new
state-of-the-art (SoTA) of GANs under fully-supervised setting. Our diffusion
model obtains FID of 8.42 and 4.28 on zero-shot and supervised setting
respectively, which are competitive to SoTA diffusion models with a much
smaller model size.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 16:22:23 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Zhou",
"Yufan",
""
],
[
"Li",
"Chunyuan",
""
],
[
"Chen",
"Changyou",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Xu",
"Jinhui",
""
]
] |
new_dataset
| 0.999813 |
2210.14128
|
Xiao Liu
|
Chenguang Wang, Xiao Liu, Dawn Song
|
IELM: An Open Information Extraction Benchmark for Pre-Trained Language
Models
|
EMNLP 2022. arXiv admin note: substantial text overlap with
arXiv:2010.11967
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new open information extraction (OIE) benchmark for
pre-trained language models (LM). Recent studies have demonstrated that
pre-trained LMs, such as BERT and GPT, may store linguistic and relational
knowledge. In particular, LMs are able to answer ``fill-in-the-blank''
questions when given a pre-defined relation category. Instead of focusing on
pre-defined relations, we create an OIE benchmark aiming to fully examine the
open relational information present in the pre-trained LMs. We accomplish this
by turning pre-trained LMs into zero-shot OIE systems. Surprisingly,
pre-trained LMs are able to obtain competitive performance on both standard OIE
datasets (CaRB and Re-OIE2016) and two new large-scale factual OIE datasets
(TAC KBP-OIE and Wikidata-OIE) that we establish via distant supervision. For
instance, the zero-shot pre-trained LMs outperform the F1 score of the
state-of-the-art supervised OIE methods on our factual OIE datasets without
needing to use any training sets. Our code and datasets are available at
https://github.com/cgraywang/IELM
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 16:25:00 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Wang",
"Chenguang",
""
],
[
"Liu",
"Xiao",
""
],
[
"Song",
"Dawn",
""
]
] |
new_dataset
| 0.99323 |
2210.14162
|
Tsunehiko Tanaka
|
Tsunehiko Tanaka, Daiki Kimura, Michiaki Tatsubori
|
Commonsense Knowledge from Scene Graphs for Textual Environments
|
AAAI-22 Workshop on Reinforcement Learning in Games
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-based games are becoming commonly used in reinforcement learning as
real-world simulation environments. They are usually imperfect information
games, and their interactions are only in the textual modality. To challenge
these games, it is effective to complement the missing information by providing
knowledge outside the game, such as human common sense. However, such knowledge
has only been available from textual information in previous works. In this
paper, we investigate the advantage of employing commonsense reasoning obtained
from visual datasets such as scene graph datasets. In general, images convey
more comprehensive information than text for humans. This property makes it
possible to extract commonsense relationship knowledge that is more useful for
acting effectively in a game. We compare the statistics of spatial relationships
available in Visual Genome (a scene graph dataset) and ConceptNet (a text-based
knowledge base) to analyze the effectiveness of introducing scene graph datasets.
We also conducted experiments on a text-based game task that requires commonsense
reasoning. Our experimental results demonstrate that our proposed methods achieve
performance that is higher than or competitive with existing state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 03:09:17 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Tanaka",
"Tsunehiko",
""
],
[
"Kimura",
"Daiki",
""
],
[
"Tatsubori",
"Michiaki",
""
]
] |
new_dataset
| 0.999531 |
2210.14165
|
Nicolas Monet
|
Nicolas Monet and Dongyoon Wee
|
MEEV: Body Mesh Estimation On Egocentric Video
|
5 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This technical report introduces our solution, MEEV, proposed to the EgoBody
Challenge at ECCV 2022. Captured from head-mounted devices, the dataset
consists of human body shape and motion of interacting people. The EgoBody
dataset has challenges such as occluded body or blurry image. In order to
overcome the challenges, MEEV is designed to exploit multiscale features for
rich spatial information. Besides, to overcome the limited size of the dataset,
the model is pre-trained on data aggregated from 2D and 3D pose estimation
datasets. Achieving 82.30 for MPJPE and 92.93 for MPVPE, MEEV has won the
EgoBody Challenge at ECCV 2022, which shows the effectiveness of the proposed
method. The code is available at https://github.com/clovaai/meev
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 02:20:50 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Monet",
"Nicolas",
""
],
[
"Wee",
"Dongyoon",
""
]
] |
new_dataset
| 0.998608 |
2210.14189
|
Dimitrios Panteleimon Giakatos
|
Dimitrios Panteleimon Giakatos, Sofia Kostoglou, Pavlos Sermpezis,
Athena Vakali
|
Benchmarking Graph Neural Networks for Internet Routing Data
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Internet is composed of networks, called Autonomous Systems (or, ASes),
interconnected to each other, thus forming a large graph. While the AS-graph is
known and a multitude of data is available for the ASes (i.e., node attributes),
research on applying graph machine learning (ML) methods to Internet data has
not attracted much attention. In this work, we
provide a benchmarking framework aiming to facilitate research on Internet data
using graph-ML and graph neural network (GNN) methods. Specifically, we compile
a dataset with heterogeneous node/AS attributes by collecting data from
multiple online sources, and preprocessing them so that they can be easily used
as input in GNN architectures. Then, we create a framework/pipeline for
applying GNNs on the compiled data. For a set of tasks, we perform a
benchmarking of different GNN models (as well as, non-GNN ML models) to test
their efficiency; our results can serve as a common baseline for future
research and provide initial insights for the application of GNNs on Internet
data.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 17:32:16 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Giakatos",
"Dimitrios Panteleimon",
""
],
[
"Kostoglou",
"Sofia",
""
],
[
"Sermpezis",
"Pavlos",
""
],
[
"Vakali",
"Athena",
""
]
] |
new_dataset
| 0.99665 |
2210.14190
|
Bashar Alhafni
|
Hossein Rajaby Faghihi, Bashar Alhafni, Ke Zhang, Shihao Ran, Joel
Tetreault, Alejandro Jaimes
|
CrisisLTLSum: A Benchmark for Local Crisis Event Timeline Extraction and
Summarization
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Social media has increasingly played a key role in emergency response: first
responders can use public posts to better react to ongoing crisis events and
deploy the necessary resources where they are most needed. Timeline extraction
and abstractive summarization are critical technical tasks to leverage large
numbers of social media posts about events. Unfortunately, there are few
datasets for benchmarking technical approaches for those tasks. This paper
presents CrisisLTLSum, the largest dataset of local crisis event timelines
available to date. CrisisLTLSum contains 1,000 crisis event timelines across
four domains: wildfires, local fires, traffic, and storms. We built
CrisisLTLSum using a semi-automated cluster-then-refine approach to collect
data from the public Twitter stream. Our initial experiments indicate a
significant gap between the performance of strong baselines and human
performance on both tasks. Our dataset, code, and models are publicly
available.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 17:32:40 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Faghihi",
"Hossein Rajaby",
""
],
[
"Alhafni",
"Bashar",
""
],
[
"Zhang",
"Ke",
""
],
[
"Ran",
"Shihao",
""
],
[
"Tetreault",
"Joel",
""
],
[
"Jaimes",
"Alejandro",
""
]
] |
new_dataset
| 0.999849 |
2210.14210
|
Sudharshan Suresh
|
Sudharshan Suresh, Zilin Si, Stuart Anderson, Michael Kaess, Mustafa
Mukadam
|
MidasTouch: Monte-Carlo inference over distributions across sliding
touch
|
Accepted at CoRL 2022 (Oral). Project website:
https://suddhu.github.io/midastouch-tactile/
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present MidasTouch, a tactile perception system for online global
localization of a vision-based touch sensor sliding on an object surface. This
framework takes in posed tactile images over time, and outputs an evolving
distribution of sensor pose on the object's surface, without the need for
visual priors. Our key insight is to estimate local surface geometry with
tactile sensing, learn a compact representation for it, and disambiguate these
signals over a long time horizon. The backbone of MidasTouch is a Monte-Carlo
particle filter, with a measurement model based on a tactile code network
learned from tactile simulation. This network, inspired by LIDAR place
recognition, compactly summarizes local surface geometries. These generated
codes are efficiently compared against a precomputed tactile codebook
per-object, to update the pose distribution. We further release the YCB-Slide
dataset of real-world and simulated forceful sliding interactions between a
vision-based tactile sensor and standard YCB objects. While single-touch
localization can be inherently ambiguous, we can quickly localize our sensor by
traversing salient surface geometries. Project page:
https://suddhu.github.io/midastouch-tactile/
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 17:55:09 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Suresh",
"Sudharshan",
""
],
[
"Si",
"Zilin",
""
],
[
"Anderson",
"Stuart",
""
],
[
"Kaess",
"Michael",
""
],
[
"Mukadam",
"Mustafa",
""
]
] |
new_dataset
| 0.992636 |
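The backbone named in the MidasTouch record above (2210.14210) is a Monte-Carlo particle filter. A minimal sketch of that filtering loop on a toy 1-D range-only localization problem (not the tactile-code measurement model or YCB objects used in the paper): particles are propagated by a motion model, weighted by a measurement likelihood, and resampled, and the initially ambiguous mirrored mode dies out as motion and measurements accumulate, echoing the record's point about disambiguation over a long horizon. All dynamics and noise values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LANDMARK = 5.0                                  # position of the single landmark

def motion_model(particles, control, noise=0.05):
    """Propagate every particle with the commanded motion plus process noise."""
    return particles + control + rng.normal(0.0, noise, size=particles.shape)

def likelihood(particles, measured_range, noise=0.1):
    """Weight particles by how well their predicted range matches the measurement."""
    predicted = np.abs(LANDMARK - particles)
    return np.exp(-0.5 * ((predicted - measured_range) / noise) ** 2)

def resample(particles, weights):
    """Draw a new particle set proportionally to the normalized weights."""
    p = weights / weights.sum()
    return particles[rng.choice(len(particles), size=len(particles), p=p)]

def run_filter(steps=30, n_particles=500):
    particles = rng.uniform(-10.0, 10.0, size=n_particles)   # uninformed prior
    true_pos = 0.0
    for _ in range(steps):
        control = 0.2                                         # commanded motion per step
        true_pos += control
        measured_range = abs(LANDMARK - true_pos) + rng.normal(0.0, 0.1)
        particles = motion_model(particles, control)
        particles = resample(particles, likelihood(particles, measured_range))
    return true_pos, particles.mean()

if __name__ == "__main__":
    true_pos, estimate = run_filter()
    print(f"true position {true_pos:.2f}, particle estimate {estimate:.2f}")
```

Swapping the toy range likelihood for a learned tactile-code similarity is, at a high level, where a system like the one in the record would differ from this sketch.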
2210.14222
|
Kashyap Chitta
|
Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke,
Zeynep Akata, Andreas Geiger
|
PlanT: Explainable Planning Transformers via Object-Level
Representations
|
CoRL 2022. Project Page: https://www.katrinrenz.de/plant/
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Planning an optimal route in a complex environment requires efficient
reasoning about the surrounding scene. While human drivers prioritize important
objects and ignore details not relevant to the decision, learning-based
planners typically extract features from dense, high-dimensional grid
representations containing all vehicle and road context information. In this
paper, we propose PlanT, a novel approach for planning in the context of
self-driving that uses a standard transformer architecture. PlanT is based on
imitation learning with a compact object-level input representation. On the
Longest6 benchmark for CARLA, PlanT outperforms all prior methods (matching the
driving score of the expert) while being 5.3x faster than equivalent
pixel-based planning baselines during inference. Combining PlanT with an
off-the-shelf perception module provides a sensor-based driving system that is
more than 10 points better in terms of driving score than the existing state of
the art. Furthermore, we propose an evaluation protocol to quantify the ability
of planners to identify relevant objects, providing insights regarding their
decision-making. Our results indicate that PlanT can focus on the most relevant
object in the scene, even when this object is geometrically distant.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 17:59:46 GMT"
}
] | 2022-10-26T00:00:00 |
[
[
"Renz",
"Katrin",
""
],
[
"Chitta",
"Kashyap",
""
],
[
"Mercea",
"Otniel-Bogdan",
""
],
[
"Koepke",
"A. Sophia",
""
],
[
"Akata",
"Zeynep",
""
],
[
"Geiger",
"Andreas",
""
]
] |
new_dataset
| 0.999173 |
2001.05787
|
Takayuki Nozaki
|
Takayuki Nozaki
|
Weight Enumerators and Cardinalities for Number-Theoretic Codes
|
9 pages, accepted to IEEE Transactions on Information Theory
|
IEEE Transactions on Information Theory, vol. 68, no. 11, pp.
7165-7173, Nov. 2022
|
10.1109/TIT.2022.3184776
| null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The number-theoretic codes are a class of codes defined by single or multiple
congruences. These codes are mainly used for correcting insertion and deletion
errors, and for correcting asymmetric errors. This paper presents a formula for
a generalization of the complete weight enumerator for the number-theoretic
codes. This formula allows us to derive the weight enumerators and
cardinalities for the number-theoretic codes. As a special case, this paper
provides the Hamming weight enumerators and cardinalities of the non-binary
Tenengolts' codes, correcting single insertion or deletion. Moreover, we show
that the formula deduces the MacWilliams identity for the linear codes over the
ring of integers modulo $r$.
|
[
{
"version": "v1",
"created": "Thu, 16 Jan 2020 13:21:08 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Nov 2021 10:07:33 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Oct 2022 07:15:33 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Nozaki",
"Takayuki",
""
]
] |
new_dataset
| 0.994068 |
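The record above (2001.05787) generalizes complete weight enumerators and recovers the MacWilliams identity for linear codes over the ring of integers modulo r. A minimal sketch of the classical binary case only: enumerate a small [5,3] linear code and its dual, build their Hamming weight enumerators, and check W_dual(x, y) = (1/|C|) W_C(x + y, x - y) numerically at a few points. The generator matrix is an arbitrary example, not one taken from the paper.

```python
import numpy as np
from itertools import product

G = np.array([[1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1]])               # an arbitrary [5,3] binary linear code

def codewords(gen):
    k, _ = gen.shape
    return {tuple(np.array(m) @ gen % 2) for m in product((0, 1), repeat=k)}

def dual_codewords(gen):
    _, n = gen.shape
    return {v for v in product((0, 1), repeat=n)
            if not (gen @ np.array(v) % 2).any()}

def hamming_enumerator(code, n):
    """A_w = number of codewords of Hamming weight w, for w = 0..n."""
    A = [0] * (n + 1)
    for c in code:
        A[int(sum(c))] += 1
    return A

def evaluate(A, x, y):
    """Evaluate W(x, y) = sum_w A_w * x^(n-w) * y^w."""
    n = len(A) - 1
    return sum(a * x ** (n - w) * y ** w for w, a in enumerate(A))

if __name__ == "__main__":
    n = G.shape[1]
    C, C_dual = codewords(G), dual_codewords(G)
    A, B = hamming_enumerator(C, n), hamming_enumerator(C_dual, n)
    for x, y in [(1.0, 0.5), (2.0, 1.0), (3.0, -1.0)]:
        assert abs(evaluate(B, x, y) - evaluate(A, x + y, x - y) / len(C)) < 1e-9
    print("W_C:", A, "| W_dual:", B, "| MacWilliams identity verified")
```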
2002.05905
|
Omar Ibrahim Mr
|
Omar Adel Ibrahim, Savio Sciancalepore, Gabriele Oligeri, Roberto Di
Pietro
|
MAGNETO: Fingerprinting USB Flash Drives via Unintentional Magnetic
Emissions
|
Accepted for publication in ACM Transactions on Embedded Computing
Systems (TECS) in September 2020
| null |
10.1145/3422308
| null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Universal Serial Bus (USB) Flash Drives are nowadays one of the most
convenient and diffused means to transfer files, especially when no Internet
connection is available. However, USB flash drives are also one of the most
common attack vectors used to gain unauthorized access to host devices. For
instance, it is possible to replace a USB drive so that, when the USB key is
connected, it installs password-stealing tools, rootkit software, and other
disruptive malware. In this way, an attacker can steal sensitive
information via the USB-connected devices, as well as inject any kind of
malicious software into the host.
To thwart the above-cited raising threats, we propose MAGNETO, an efficient,
non-interactive, and privacy-preserving framework to verify the authenticity of
a USB flash drive, rooted in the analysis of its unintentional magnetic
emissions. We show that the magnetic emissions radiated during boot operations
on a specific host are unique for each device, and sufficient to uniquely
fingerprint both the brand and the model of the USB flash drive, or the
specific USB device, depending on the equipment used. Our investigation of 59
different USB flash drives, belonging to 17 brands (including the top brands
purchased on Amazon in mid-2019), reveals a minimum classification accuracy
of 98.2% in the identification of both brand and model, accompanied by a
negligible time and computational overhead. MAGNETO can also identify the
specific USB Flash drive, with a minimum classification accuracy of 91.2%.
Overall, MAGNETO proves that unintentional magnetic emissions can be considered
as a viable and reliable means to fingerprint read-only USB flash drives.
Finally, future research directions in this domain are also discussed.
|
[
{
"version": "v1",
"created": "Fri, 14 Feb 2020 08:09:54 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Aug 2020 12:33:20 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Sep 2020 02:34:33 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Ibrahim",
"Omar Adel",
""
],
[
"Sciancalepore",
"Savio",
""
],
[
"Oligeri",
"Gabriele",
""
],
[
"Di Pietro",
"Roberto",
""
]
] |
new_dataset
| 0.999251 |
2012.15375
|
Weiyan Shi
|
Weiyan Shi, Yu Li, Saurav Sahay, Zhou Yu
|
Refine and Imitate: Reducing Repetition and Inconsistency in Persuasion
Dialogues via Reinforcement Learning and Human Demonstration
|
EMNLP 2021 Findings
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Persuasion dialogue systems reflect the machine's ability to make strategic
moves beyond verbal communication, and therefore differentiate themselves from
task-oriented or open-domain dialogue systems and have their own unique values.
However, the repetition and inconsistency problems still persist in dialogue
response generation and could substantially impact user experience and impede
the persuasion outcome. Besides, although reinforcement learning (RL)
approaches have achieved big success in strategic tasks such as games, they
require a sophisticated user simulator to provide real-time feedback to the
dialogue system, which limits the application of RL on persuasion dialogues. To
address these issues towards a better persuasion dialogue system, we apply RL
to refine a language model baseline without user simulators, and distill
sentence-level information about repetition, inconsistency, and task relevance
through rewards. Moreover, to better accomplish the persuasion task, the model
learns from human demonstration to imitate human persuasion behavior and
selects the most persuasive responses. Experiments show that our model
outperforms previous state-of-the-art dialogue models on both automatic metrics
and human evaluation results on a donation persuasion task, and generates more
diverse, consistent and persuasive conversations according to the user
feedback.
|
[
{
"version": "v1",
"created": "Thu, 31 Dec 2020 00:02:51 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 13:24:02 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Shi",
"Weiyan",
""
],
[
"Li",
"Yu",
""
],
[
"Sahay",
"Saurav",
""
],
[
"Yu",
"Zhou",
""
]
] |
new_dataset
| 0.987044 |
2105.06942
|
Yoshimichi Nakatsuka
|
Scott Jordan, Yoshimichi Nakatsuka, Ercan Ozturk, Andrew Paverd, Gene
Tsudik
|
VICEROY: GDPR-/CCPA-compliant Enforcement of Verifiable Accountless
Consumer Requests
| null |
Network and Distributed System Security (NDSS) Symposium 2023
|
10.14722/ndss.2023.23074
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent data protection regulations (such as GDPR and CCPA) grant consumers
various rights, including the right to access, modify or delete any personal
information collected about them (and retained) by a service provider. To
exercise these rights, one must submit a verifiable consumer request proving
that the collected data indeed pertains to them. This action is straightforward
for consumers with active accounts with a service provider at the time of data
collection, since they can use standard (e.g., password-based) means of
authentication to validate their requests. However, a major conundrum arises
from the need to support consumers without accounts to exercise their rights.
To this end, some service providers began requiring such accountless consumers
to reveal and prove their identities (e.g., using government-issued documents,
utility bills, or credit card numbers) as part of issuing a verifiable consumer
request. While understandable as a short-term cure, this approach is cumbersome
and expensive for service providers as well as privacy-invasive for consumers.
Consequently, there is a strong need to provide better means of authenticating
requests from accountless consumers. To achieve this, we propose VICEROY, a
privacy-preserving and scalable framework for producing proofs of data
ownership, which form a basis for verifiable consumer requests. Building upon
existing web techniques and features, VICEROY allows accountless consumers to
interact with service providers, and later prove that they are the same person
in a privacy-preserving manner, while requiring minimal changes for both
parties. We design and implement VICEROY with emphasis on security/privacy,
deployability and usability. We also thoroughly assess its practicality via
extensive experiments.
|
[
{
"version": "v1",
"created": "Fri, 14 May 2021 16:34:32 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 05:07:18 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Oct 2022 18:35:44 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Jordan",
"Scott",
""
],
[
"Nakatsuka",
"Yoshimichi",
""
],
[
"Ozturk",
"Ercan",
""
],
[
"Paverd",
"Andrew",
""
],
[
"Tsudik",
"Gene",
""
]
] |
new_dataset
| 0.991048 |
2109.05569
|
Alejandro Pardo
|
Alejandro Pardo, Fabian Caba Heilbron, Juan Le\'on Alc\'azar, Ali
Thabet, Bernard Ghanem
|
MovieCuts: A New Dataset and Benchmark for Cut Type Recognition
|
Paper's website:
https://www.alejandropardo.net/publication/moviecuts/
|
ECCV 2022
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Understanding movies and their structural patterns is a crucial task in
decoding the craft of video editing. While previous works have developed tools
for general analysis, such as detecting characters or recognizing
cinematography properties at the shot level, less effort has been devoted to
understanding the most basic video edit, the Cut. This paper introduces the Cut
type recognition task, which requires modeling multi-modal information. To
ignite research in this new task, we construct a large-scale dataset called
MovieCuts, which contains 173,967 video clips labeled with ten cut types
defined by professionals in the movie industry. We benchmark a set of
audio-visual approaches, including some dealing with the problem's multi-modal
nature. Our best model achieves 47.7% mAP, which suggests that the task is
challenging and that attaining highly accurate Cut type recognition is an open
research problem. Advances in automatic Cut-type recognition can unleash new
experiences in the video editing industry, such as movie analysis for
education, video re-editing, virtual cinematography, machine-assisted trailer
generation, machine-assisted video editing, among others. Our data and code are
publicly available:
https://github.com/PardoAlejo/MovieCuts.
|
[
{
"version": "v1",
"created": "Sun, 12 Sep 2021 17:36:55 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Sep 2021 09:25:45 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Oct 2022 10:00:07 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Pardo",
"Alejandro",
""
],
[
"Heilbron",
"Fabian Caba",
""
],
[
"Alcázar",
"Juan León",
""
],
[
"Thabet",
"Ali",
""
],
[
"Ghanem",
"Bernard",
""
]
] |
new_dataset
| 0.999824 |
2109.12941
|
Chanjun Park
|
Chanjun Park, Yoonna Jang, Seolhwa Lee, Jaehyung Seo, Kisu Yang,
Heuiseok Lim
|
PicTalky: Augmentative and Alternative Communication Software for
Language Developmental Disabilities
|
Accepted in AACL 2022 Demo Track
| null | null | null |
cs.CL cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Augmentative and alternative communication (AAC) is a practical means of
communication for people with language disabilities. In this study, we propose
PicTalky, which is an AI-based AAC system that helps children with language
developmental disabilities to improve their communication skills and language
comprehension abilities. PicTalky can process both text and pictograms more
accurately by connecting a series of neural-based NLP modules. Moreover, we
perform quantitative and qualitative analyses on the essential features of
PicTalky. It is expected that those suffering from language problems will be
able to express their intentions or desires more easily and improve their
quality of life by using this service. We have made the models freely available
alongside a demonstration of the Web interface. Furthermore, we implemented
robotics AAC for the first time by applying PicTalky to the NAO robot.
|
[
{
"version": "v1",
"created": "Mon, 27 Sep 2021 10:46:14 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Oct 2022 23:08:00 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Park",
"Chanjun",
""
],
[
"Jang",
"Yoonna",
""
],
[
"Lee",
"Seolhwa",
""
],
[
"Seo",
"Jaehyung",
""
],
[
"Yang",
"Kisu",
""
],
[
"Lim",
"Heuiseok",
""
]
] |
new_dataset
| 0.989711 |
2112.11122
|
Shangda Wu
|
Shangda Wu, Yue Yang, Zhaowen Wang, Xiaobing Li, Maosong Sun
|
Generating Chords from Melody with Flexible Harmonic Rhythm and
Controllable Harmonic Density
|
5 pages, 3 figures, 1 table
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Melody harmonization, i.e., generating a chord progression for a user-given
melody, remains a challenging task to this day. A chord progression must not
only be in harmony with the melody, but its harmonic rhythm is also
interdependent on the melodic rhythm. Although previous neural network-based
systems can effectively generate a chord progression for a melody, few studies
have addressed controllable melody harmonization, and there has been a lack of
focus on generating flexible harmonic rhythms. In this paper, we propose
AutoHarmonizer, a harmonic density-controllable melody harmonization system
with flexible harmonic rhythm. This system supports 1,462 chord types and can
generate denser or sparser chord progressions for a given melody. Experimental
results demonstrate the diversity of harmonic rhythms in the
AutoHarmonizer-generated chord progressions and the effectiveness of
controllable harmonic density.
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 11:51:51 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Oct 2022 06:38:42 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Wu",
"Shangda",
""
],
[
"Yang",
"Yue",
""
],
[
"Wang",
"Zhaowen",
""
],
[
"Li",
"Xiaobing",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.996623 |
2201.08081
|
Qi Shi
|
Qi Shi, Qian Liu, Bei Chen, Yu Zhang, Ting Liu, Jian-Guang Lou
|
LEMON: Language-Based Environment Manipulation via Execution-Guided
Pre-training
|
EMNLP 2022 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Language-based environment manipulation requires agents to manipulate the
environment following natural language instructions, which is challenging due
to the huge space of the environments. To address this challenge, various
approaches have been proposed in recent work. Although these approaches work
well for their intended environments, they are difficult to generalize across
environments. In this work, we propose LEMON, a general framework for
language-based environment manipulation tasks. Specifically, we first specify a
task-agnostic approach for language-based environment manipulation tasks, which
can deal with various environments using the same generative language model.
Then we propose an execution-guided pre-training strategy to inject prior
knowledge of environments to the language model with a pure synthetic
pre-training corpus. Experimental results on tasks including Alchemy, Scene,
Tangrams, ProPara and Recipes demonstrate the effectiveness of LEMON: it
achieves new state-of-the-art results on four of the tasks, and the
execution-guided pre-training strategy brings remarkable improvements on all
experimental tasks.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 09:29:34 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 13:28:28 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Oct 2022 04:55:59 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Shi",
"Qi",
""
],
[
"Liu",
"Qian",
""
],
[
"Chen",
"Bei",
""
],
[
"Zhang",
"Yu",
""
],
[
"Liu",
"Ting",
""
],
[
"Lou",
"Jian-Guang",
""
]
] |
new_dataset
| 0.9986 |
2201.11473
|
Qian Liu
|
Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Qiang Fu, Yan
Gao, Jian-Guang Lou, Weizhu Chen
|
Reasoning Like Program Executors
|
To appear in EMNLP 2022 main conference. The first two authors
contributed equally
| null | null | null |
cs.CL cs.AI cs.SC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reasoning over natural language is a long-standing goal for the research
community. However, studies have shown that existing language models are
inadequate in reasoning. To address the issue, we present POET, a novel
reasoning pre-training paradigm. Through pre-training language models with
programs and their execution results, POET empowers language models to harvest
the reasoning knowledge possessed by program executors via a data-driven
approach. POET is conceptually simple and can be instantiated by different
kinds of program executors. In this paper, we showcase two simple instances
POET-Math and POET-Logic, in addition to a complex instance, POET-SQL.
Experimental results on six benchmarks demonstrate that POET can significantly
boost model performance in natural language reasoning, such as numerical
reasoning, logical reasoning, and multi-hop reasoning. POET opens a new gate on
reasoning-enhancement pre-training, and we hope our analysis would shed light
on the future research of reasoning like program executors.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 12:28:24 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 13:46:24 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Pi",
"Xinyu",
""
],
[
"Liu",
"Qian",
""
],
[
"Chen",
"Bei",
""
],
[
"Ziyadi",
"Morteza",
""
],
[
"Lin",
"Zeqi",
""
],
[
"Fu",
"Qiang",
""
],
[
"Gao",
"Yan",
""
],
[
"Lou",
"Jian-Guang",
""
],
[
"Chen",
"Weizhu",
""
]
] |
new_dataset
| 0.978095 |
2202.00185
|
Przemyslaw Musialski
|
Kurt Leimer, Paul Guerrero, Tomer Weiss, Przemyslaw Musialski
|
LayoutEnhancer: Generating Good Indoor Layouts from Imperfect Data
|
preprint of ACM SIGGRAPH Asia 2022 Conference Paper, 14 pages
including appendix and supplementary figures, 16 figures
| null |
10.1145/3550469.3555425
| null |
cs.GR cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of indoor layout synthesis, which is a topic of
continuing research interest in computer graphics. The newest works made
significant progress using data-driven generative methods; however, these
approaches rely on suitable datasets. In practice, desirable layout properties
may not exist in a dataset, for instance, specific expert knowledge can be
missing in the data. We propose a method that combines expert knowledge, for
example, knowledge about ergonomics, with a data-driven generator based on the
popular Transformer architecture. The knowledge is given as differentiable
scalar functions, which can be used both as weights or as additional terms in
the loss function. Using this knowledge, the synthesized layouts can be biased
to exhibit desirable properties, even if these properties are not present in
the dataset. Our approach can also alleviate problems of lack of data and
imperfections in the data. Our work aims to improve generative machine learning
for modeling and provide novel tools for designers and amateurs for the problem
of interior layout creation.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 02:25:04 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 01:04:32 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Leimer",
"Kurt",
""
],
[
"Guerrero",
"Paul",
""
],
[
"Weiss",
"Tomer",
""
],
[
"Musialski",
"Przemyslaw",
""
]
] |
new_dataset
| 0.982883 |
2203.00241
|
Huaicheng Li
|
Huaicheng Li and Daniel S. Berger and Stanko Novakovic and Lisa Hsu
and Dan Ernst and Pantea Zardoshti and Monish Shah and Samir Rajadnya and
Scott Lee and Ishwar Agarwal and Mark D. Hill and Marcus Fontoura and Ricardo
Bianchini
|
Pond: CXL-Based Memory Pooling Systems for Cloud Platforms
|
Update affiliations
| null | null | null |
cs.OS cs.PF
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Public cloud providers seek to meet stringent performance requirements and
low hardware cost. A key driver of performance and cost is main memory. Memory
pooling promises to improve DRAM utilization and thereby reduce costs. However,
pooling is challenging under cloud performance requirements. This paper
proposes Pond, the first memory pooling system that both meets cloud
performance goals and significantly reduces DRAM cost. Pond builds on the
Compute Express Link (CXL) standard for load/store access to pool memory and
two key insights. First, our analysis of cloud production traces shows that
pooling across 8-16 sockets is enough to achieve most of the benefits. This
enables a small-pool design with low access latency. Second, it is possible to
create machine learning models that can accurately predict how much local and
pool memory to allocate to a virtual machine (VM) to resemble same-NUMA-node
memory performance. Our evaluation with 158 workloads shows that Pond reduces
DRAM costs by 7% with performance within 1-5% of same-NUMA-node VM allocations.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 05:32:52 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Mar 2022 20:30:25 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Oct 2022 23:18:39 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Oct 2022 22:02:53 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Li",
"Huaicheng",
""
],
[
"Berger",
"Daniel S.",
""
],
[
"Novakovic",
"Stanko",
""
],
[
"Hsu",
"Lisa",
""
],
[
"Ernst",
"Dan",
""
],
[
"Zardoshti",
"Pantea",
""
],
[
"Shah",
"Monish",
""
],
[
"Rajadnya",
"Samir",
""
],
[
"Lee",
"Scott",
""
],
[
"Agarwal",
"Ishwar",
""
],
[
"Hill",
"Mark D.",
""
],
[
"Fontoura",
"Marcus",
""
],
[
"Bianchini",
"Ricardo",
""
]
] |
new_dataset
| 0.987378 |
2203.11022
|
Nikhil Garg
|
Nikhil Garg, Ismael Balafrej, Terrence C. Stewart, Jean Michel Portal,
Marc Bocquet, Damien Querlioz, Dominique Drouin, Jean Rouat, Yann Beilliard,
Fabien Alibart
|
Voltage-Dependent Synaptic Plasticity (VDSP): Unsupervised probabilistic
Hebbian plasticity rule based on neurons membrane potential
|
Front. Neurosci., 21 October 2022 Sec. Neuromorphic Engineering
|
Front. Neurosci. 16:983950 (2022)
|
10.3389/fnins.2022.983950
| null |
cs.NE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study proposes voltage-dependent-synaptic plasticity (VDSP), a novel
brain-inspired unsupervised local learning rule for the online implementation
of Hebb's plasticity mechanism on neuromorphic hardware. The proposed VDSP
learning rule updates the synaptic conductance on the spike of the postsynaptic
neuron only, which reduces by a factor of two the number of updates with
respect to standard spike-timing-dependent plasticity (STDP). This update is
dependent on the membrane potential of the presynaptic neuron, which is readily
available as part of neuron implementation and hence does not require
additional memory for storage. Moreover, the update is also regularized on
synaptic weight and prevents explosion or vanishing of weights on repeated
stimulation. Rigorous mathematical analysis is performed to draw an equivalence
between VDSP and STDP. To validate the system-level performance of VDSP, we
train a single-layer spiking neural network (SNN) for the recognition of
handwritten digits. We report 85.01 $ \pm $ 0.76% (Mean $ \pm $ S.D.) accuracy
for a network of 100 output neurons on the MNIST dataset. The performance
improves when scaling the network size (89.93 $\pm$ 0.41% for 400 output
neurons, 90.56 $\pm$ 0.27% for 500 neurons), which validates the applicability
of the proposed learning rule for spatial pattern recognition tasks. Future
work will consider more complicated tasks. Interestingly, the learning rule
adapts better than STDP to the frequency of the input signal and does not
require hand-tuning of hyperparameters.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 14:39:02 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 16:01:35 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Apr 2022 11:08:48 GMT"
},
{
"version": "v4",
"created": "Fri, 1 Jul 2022 09:47:31 GMT"
},
{
"version": "v5",
"created": "Mon, 5 Sep 2022 22:10:37 GMT"
},
{
"version": "v6",
"created": "Sat, 22 Oct 2022 08:59:58 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Garg",
"Nikhil",
""
],
[
"Balafrej",
"Ismael",
""
],
[
"Stewart",
"Terrence C.",
""
],
[
"Portal",
"Jean Michel",
""
],
[
"Bocquet",
"Marc",
""
],
[
"Querlioz",
"Damien",
""
],
[
"Drouin",
"Dominique",
""
],
[
"Rouat",
"Jean",
""
],
[
"Beilliard",
"Yann",
""
],
[
"Alibart",
"Fabien",
""
]
] |
new_dataset
| 0.968876 |
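The VDSP record above (2203.11022) describes a rule that updates weights only on postsynaptic spikes, with each update depending on the presynaptic membrane potential and bounded so weights neither explode nor vanish. The snippet below is a schematic toy consistent with that description, not the paper's equations: a leaky integrate-and-fire population drives one output neuron, and on each output spike, synapses from depolarized presynaptic neurons are potentiated toward 1 while the rest are depressed toward 0 (soft bounds). All constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(steps=2000, n_pre=50, tau=20.0, v_th=1.0, lr=0.01):
    """Leaky integrate-and-fire presynaptic population driving one output neuron.
    Weights change only when the output neuron spikes; the sign of each update
    depends on whether that presynaptic neuron is depolarized at that instant,
    and soft bounds keep every weight inside [0, 1]."""
    w = rng.uniform(0.0, 0.2, size=n_pre)          # initial synaptic weights
    v_pre = np.zeros(n_pre)
    v_post = 0.0
    for _ in range(steps):
        drive = (rng.random(n_pre) < 0.05) * 0.6   # Poisson-like presynaptic input
        v_pre += -v_pre / tau + drive
        pre_spikes = v_pre >= v_th
        v_pre[pre_spikes] = 0.0                    # reset spiking presynaptic neurons
        v_post += -v_post / tau + float(w[pre_spikes].sum())
        if v_post >= v_th:                         # postsynaptic spike: local update
            depolarized = v_pre > 0.5 * v_th
            w += lr * np.where(depolarized, 1.0 - w, -w)   # potentiate / depress
            v_post = 0.0
    return w

if __name__ == "__main__":
    w = simulate()
    print(f"final weights: mean={w.mean():.3f}, min={w.min():.3f}, max={w.max():.3f}")
```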
2203.13530
|
Zhenrong Zhang
|
Zhenrong Zhang, Jiefeng Ma, Jun Du, Licheng Wang and Jianshu Zhang
|
Multimodal Pre-training Based on Graph Attention Network for Document
Understanding
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Document intelligence as a relatively new research topic supports many
business applications. Its main task is to automatically read, understand, and
analyze documents. However, due to the diversity of formats (invoices, reports,
forms, etc.) and layouts in documents, it is difficult to make machines
understand documents. In this paper, we present the GraphDoc, a multimodal
graph attention-based model for various document understanding tasks. GraphDoc
is pre-trained in a multimodal framework by utilizing text, layout, and image
information simultaneously. In a document, a text block relies heavily on its
surrounding contexts; accordingly, we inject the graph structure into the
attention mechanism to form a graph attention layer so that each input node can
only attend to its neighborhoods. The input nodes of each graph attention layer
are composed of textual, visual, and positional features from semantically
meaningful regions in a document image. We do the multimodal feature fusion of
each node by the gate fusion layer. The contextualization between each node is
modeled by the graph attention layer. GraphDoc learns a generic representation
from only 320k unlabeled documents via the Masked Sentence Modeling task.
Extensive experimental results on the publicly available datasets show that
GraphDoc achieves state-of-the-art performance, which demonstrates the
effectiveness of our proposed method. The code is available at
https://github.com/ZZR8066/GraphDoc.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 09:27:50 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Oct 2022 16:12:10 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Zhang",
"Zhenrong",
""
],
[
"Ma",
"Jiefeng",
""
],
[
"Du",
"Jun",
""
],
[
"Wang",
"Licheng",
""
],
[
"Zhang",
"Jianshu",
""
]
] |
new_dataset
| 0.970981 |
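The GraphDoc record above (2203.13530) injects the document graph into the attention mechanism so that each text block attends only to its neighborhood. A minimal single-head NumPy sketch of such graph-masked attention (not the pretrained multimodal GraphDoc model): adjacency entries gate the score matrix before the softmax. The chain-graph adjacency and dimensions are assumptions for the example.

```python
import numpy as np

def graph_masked_attention(X, adj, Wq, Wk, Wv):
    """Single-head attention in which the adjacency matrix masks the score matrix,
    so every node attends only to itself and its graph neighbours."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores = np.where(adj > 0, scores, -1e9)        # block non-neighbour positions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_nodes, d = 6, 16                              # e.g. 6 text blocks on a page
    X = rng.normal(size=(n_nodes, d))
    adj = np.eye(n_nodes)                           # self-loops ...
    adj += np.diag(np.ones(n_nodes - 1), 1) + np.diag(np.ones(n_nodes - 1), -1)  # ... plus a chain
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    print("output shape:", graph_masked_attention(X, adj, Wq, Wk, Wv).shape)
```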
2203.15219
|
Morris Gu Mr
|
Morris Gu, Elizabeth Croft, Akansel Cosgun
|
AR Point&Click: An Interface for Setting Robot Navigation Goals
|
Accepted at ICSR 2022 "14th International Conference on Social
Robotics", 6 Pages, 5 Figures, 4 Tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the problem of designating navigation goal locations for
interactive mobile robots. We propose a point-and-click interface, implemented
with an Augmented Reality (AR) headset. The cameras on the AR headset are used
to detect natural pointing gestures performed by the user. The selected goal is
visualized through the AR headset, allowing the users to adjust the goal
location if desired. We conduct a user study in which participants set
consecutive navigation goals for the robot using three different interfaces: AR
Point & Click, Person Following and Tablet (bird's-eye map view). Results show
that the proposed AR Point & Click interface improved perceived accuracy and
efficiency and reduced mental load compared to the baseline tablet interface,
and it performed on par with the Person Following method. These results show
that AR Point & Click is a feasible interaction model for setting navigation
goals.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 03:45:00 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 10:45:42 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Gu",
"Morris",
""
],
[
"Croft",
"Elizabeth",
""
],
[
"Cosgun",
"Akansel",
""
]
] |
new_dataset
| 0.967976 |
2204.03051
|
F\'abio Vital
|
F\'abio Vital, Miguel Vasco, Alberto Sardinha, and Francisco Melo
|
Perceive, Represent, Generate: Translating Multimodal Information to
Robotic Motion Trajectories
|
14 pages, 4 figures, 8 tables, 1 algorithm
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Perceive-Represent-Generate (PRG), a novel three-stage framework
that maps perceptual information of different modalities (e.g., visual or
sound), corresponding to a sequence of instructions, to an adequate sequence of
movements to be executed by a robot. In the first stage, we perceive and
pre-process the given inputs, isolating individual commands from the complete
instruction provided by a human user. In the second stage we encode the
individual commands into a multimodal latent space, employing a deep generative
model. Finally, in the third stage we convert the multimodal latent values into
individual trajectories and combine them into a single dynamic movement
primitive, allowing its execution in a robotic platform. We evaluate our
pipeline in the context of a novel robotic handwriting task, where the robot
receives as input a word through different perceptual modalities (e.g., image,
sound), and generates the corresponding motion trajectory to write it, creating
coherent and readable handwritten words.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 19:31:18 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Oct 2022 04:27:37 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Vital",
"Fábio",
""
],
[
"Vasco",
"Miguel",
""
],
[
"Sardinha",
"Alberto",
""
],
[
"Melo",
"Francisco",
""
]
] |
new_dataset
| 0.967857 |
2204.10757
|
Nouha Dziri
|
Nouha Dziri, Ehsan Kamalloo, Sivan Milton, Osmar Zaiane, Mo Yu,
Edoardo M. Ponti, Siva Reddy
|
FaithDial: A Faithful Benchmark for Information-Seeking Dialogue
|
TACL 2022 (20 pages, 3 figures, 10 tables)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of information-seeking dialogue is to respond to seeker queries with
natural language utterances that are grounded on knowledge sources. However,
dialogue systems often produce unsupported utterances, a phenomenon known as
hallucination. To mitigate this behavior, we adopt a data-centric solution and
create FaithDial, a new benchmark for hallucination-free dialogues, by editing
hallucinated responses in the Wizard of Wikipedia (WoW) benchmark. We observe
that FaithDial is more faithful than WoW while also maintaining engaging
conversations. We show that FaithDial can serve as training signal for: i) a
hallucination critic, which discriminates whether an utterance is faithful or
not, and boosts the performance by 12.8 F1 score on the BEGIN benchmark
compared to existing datasets for dialogue coherence; ii) high-quality dialogue
generation. We benchmark a series of state-of-the-art models and propose an
auxiliary contrastive objective that achieves the highest level of faithfulness
and abstractiveness based on several automated metrics. Further, we find that
the benefits of FaithDial generalize to zero-shot transfer on other datasets,
such as CMU-Dog and TopicalChat. Finally, human evaluation reveals that
responses generated by models trained on FaithDial are perceived as more
interpretable, cooperative, and engaging.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 15:25:12 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 16:47:36 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Oct 2022 19:08:40 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Dziri",
"Nouha",
""
],
[
"Kamalloo",
"Ehsan",
""
],
[
"Milton",
"Sivan",
""
],
[
"Zaiane",
"Osmar",
""
],
[
"Yu",
"Mo",
""
],
[
"Ponti",
"Edoardo M.",
""
],
[
"Reddy",
"Siva",
""
]
] |
new_dataset
| 0.993065 |
2205.11764
|
Binwei Yao
|
Binwei Yao, Chao Shi, Likai Zou, Lingfeng Dai, Mengyue Wu, Lu Chen,
Zhen Wang, Kai Yu
|
D4: a Chinese Dialogue Dataset for Depression-Diagnosis-Oriented Chat
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a depression-diagnosis-directed clinical session, doctors initiate a
conversation with ample emotional support that guides the patients to expose
their symptoms based on clinical diagnosis criteria. Such a dialogue system is
distinguished from existing single-purpose human-machine dialog systems, as it
combines task-oriented and chit-chats with uniqueness in dialogue topics and
procedures. However, due to the social stigma associated with mental illness,
the dialogue data related to depression consultation and diagnosis are rarely
disclosed. Based on clinical depression diagnostic criteria ICD-11 and DSM-5,
we designed a 3-phase procedure to construct D$^4$: a Chinese Dialogue Dataset
for Depression-Diagnosis-Oriented Chat, which simulates the dialogue between
doctors and patients during the diagnosis of depression, including diagnosis
results and symptom summary given by professional psychiatrists for each
conversation. Upon the newly-constructed dataset, four tasks mirroring the
depression diagnosis process are established: response generation, topic
prediction, dialog summary, and severity classification of depressive episode
and suicide risk. Multi-scale evaluation results demonstrate that a more
empathy-driven and diagnostic-accurate consultation dialogue system trained on
our dataset can be achieved compared to rule-based bots.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 03:54:22 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2022 06:18:56 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Yao",
"Binwei",
""
],
[
"Shi",
"Chao",
""
],
[
"Zou",
"Likai",
""
],
[
"Dai",
"Lingfeng",
""
],
[
"Wu",
"Mengyue",
""
],
[
"Chen",
"Lu",
""
],
[
"Wang",
"Zhen",
""
],
[
"Yu",
"Kai",
""
]
] |
new_dataset
| 0.999867 |
2207.11838
|
Ansh Mittal
|
Ansh Mittal, Shuvam Ghosal, Rishibha Bansal
|
SAVCHOI: Detecting Suspicious Activities using Dense Video Captioning
with Human Object Interactions
|
14 pages, 6 figures, 6 tables
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting suspicious activities in surveillance videos is a longstanding
problem in real-time surveillance that leads to difficulties in detecting
crimes. Hence, we propose a novel approach for detecting and summarizing
suspicious activities in surveillance videos. We have also created ground truth
summaries for the UCF-Crime video dataset. We modify a pre-existing approach
for this task by leveraging the Human-Object Interaction (HOI) model for the
Visual features in the Bi-Modal Transformer. Further, we validate our approach
against the existing state-of-the-art algorithms for the Dense Video Captioning
task for the ActivityNet Captions dataset. We observe that this formulation for
Dense Captioning performs significantly better than other discussed BMT-based
approaches for BLEU@1, BLEU@2, BLEU@3, BLEU@4, and METEOR. We further perform a
comparative analysis of the dataset and the model to report the findings based
on different NMS thresholds (searched using Genetic Algorithms). Here, our
formulation outperforms all the models for BLEU@1, BLEU@2, and BLEU@3, and most
models for BLEU@4 and METEOR, falling short only of ADV-INF Global by 25% and
0.5%, respectively.
|
[
{
"version": "v1",
"created": "Sun, 24 Jul 2022 22:53:23 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 20:10:42 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Mittal",
"Ansh",
""
],
[
"Ghosal",
"Shuvam",
""
],
[
"Bansal",
"Rishibha",
""
]
] |
new_dataset
| 0.992358 |
2208.09788
|
Bipasha Sen
|
Aditya Agarwal, Bipasha Sen, Rudrabha Mukhopadhyay, Vinay Namboodiri,
C.V. Jawahar
|
FaceOff: A Video-to-Video Face Swapping System
|
Accepted at WACV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Doubles play an indispensable role in the movie industry. They take the place
of the actors in dangerous stunt scenes or scenes where the same actor plays
multiple characters. The double's face is later replaced with the actor's face
and expressions manually using expensive CGI technology, costing millions of
dollars and taking months to complete. An automated, inexpensive, and fast way
can be to use face-swapping techniques that aim to swap an identity from a
source face video (or an image) to a target face video. However, such methods
cannot preserve the source expressions of the actor important for the scene's
context. To tackle this challenge, we introduce video-to-video (V2V)
face-swapping, a novel task of face-swapping that can preserve (1) the identity
and expressions of the source (actor) face video and (2) the background and
pose of the target (double) video. We propose FaceOff, a V2V face-swapping
system that operates by learning a robust blending operation to merge two face
videos following the constraints above. It reduces the videos to a quantized
latent space and then blends them in the reduced space. FaceOff is trained in a
self-supervised manner and robustly tackles the non-trivial challenges of V2V
face-swapping. As shown in the experimental section, FaceOff significantly
outperforms alternate approaches qualitatively and quantitatively.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 03:18:07 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 00:28:05 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Agarwal",
"Aditya",
""
],
[
"Sen",
"Bipasha",
""
],
[
"Mukhopadhyay",
"Rudrabha",
""
],
[
"Namboodiri",
"Vinay",
""
],
[
"Jawahar",
"C. V.",
""
]
] |
new_dataset
| 0.999285 |
2210.03078
|
Jiacheng Liu
|
Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck,
Hannaneh Hajishirzi, Yejin Choi
|
Rainier: Reinforced Knowledge Introspector for Commonsense Question
Answering
|
EMNLP 2022 main conference
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge underpins reasoning. Recent research demonstrates that when
relevant knowledge is provided as additional context to commonsense question
answering (QA), it can substantially enhance the performance even on top of
state-of-the-art. The fundamental challenge is where and how to find such
knowledge that is high quality and on point with respect to the question;
knowledge retrieved from knowledge bases are incomplete and knowledge generated
from language models are inconsistent. We present Rainier, or Reinforced
Knowledge Introspector, that learns to generate contextually relevant knowledge
in response to given questions. Our approach starts by imitating knowledge
generated by GPT-3, then learns to generate its own knowledge via reinforcement
learning where rewards are shaped based on the increased performance on the
resulting question answering. Rainier demonstrates substantial and consistent
performance gains when tested over 9 different commonsense benchmarks,
including 5 datasets that are seen during model training, as well as 4 datasets
that are kept unseen. Our work is the first to report that knowledge generated
by models that are orders of magnitude smaller than GPT-3, even without direct
supervision on the knowledge itself, can exceed the quality of commonsense
knowledge elicited from GPT-3.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 17:34:06 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 04:45:48 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Liu",
"Jiacheng",
""
],
[
"Hallinan",
"Skyler",
""
],
[
"Lu",
"Ximing",
""
],
[
"He",
"Pengfei",
""
],
[
"Welleck",
"Sean",
""
],
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Choi",
"Yejin",
""
]
] |
new_dataset
| 0.990883 |
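A minimal sketch of the reward idea described in the Rainier record above (arXiv:2210.03078): a generated knowledge statement is rewarded by how much it improves a fixed QA model's score for the gold answer. This is not the authors' code; `toy_qa_score`, the prompt format, and the reward definition are illustrative assumptions.

```python
from typing import Callable, List

def knowledge_reward(
    question: str,
    choices: List[str],
    gold_idx: int,
    knowledge: str,
    qa_score: Callable[[str, List[str]], List[float]],
) -> float:
    """Reward = gain in the gold answer's probability when knowledge is prepended."""
    base = qa_score(question, choices)[gold_idx]
    with_k = qa_score(f"{knowledge} {question}", choices)[gold_idx]
    return with_k - base  # positive if the knowledge helped the QA model

# Toy QA model: favors the choice sharing the most words with the input text.
def toy_qa_score(text: str, choices: List[str]) -> List[float]:
    words = set(text.lower().split())
    overlaps = [len(words & set(c.lower().split())) + 1e-6 for c in choices]
    total = sum(overlaps)
    return [o / total for o in overlaps]

if __name__ == "__main__":
    q = "What do people use an umbrella for?"
    choices = ["staying dry in the rain", "cooking dinner"]
    k = "An umbrella keeps a person dry when it is raining."
    print(knowledge_reward(q, choices, gold_idx=0, knowledge=k, qa_score=toy_qa_score))
```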
2210.07884
|
Martin Ochoa
|
Mart\'in Ochoa, Jorge Toro-Pozo, David Basin
|
SealClub: Computer-aided Paper Document Authentication
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital authentication is a mature field, offering a range of solutions with
rigorous mathematical guarantees. Nevertheless, paper documents, where
cryptographic techniques are not directly applicable, are still widely utilized
due to usability and legal reasons. We propose a novel approach to
authenticating paper documents using smartphones by taking short videos of
them. Our solution combines cryptographic and image comparison techniques to
detect and highlight subtle semantic-changing attacks on rich documents,
containing text and graphics, that could go unnoticed by humans. We rigorously
analyze our approach, proving that it is secure against strong adversaries
capable of compromising different system components. We also measure its
accuracy empirically on a set of 128 videos of paper documents, half containing
subtle forgeries. Our algorithm finds all forgeries accurately (no false
alarms) after analyzing 5.13 frames on average (corresponding to 1.28 seconds
of video). Highlighted regions are large enough to be visible to users, but
small enough to precisely locate forgeries. Thus, our approach provides a
promising way for users to authenticate paper documents using conventional
smartphones under realistic conditions.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 15:07:35 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2022 13:41:52 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Ochoa",
"Martín",
""
],
[
"Toro-Pozo",
"Jorge",
""
],
[
"Basin",
"David",
""
]
] |
new_dataset
| 0.999521 |
2210.09059
|
Kerianne Hobbs
|
Kerianne L. Hobbs, Joseph B. Lyons, Martin S. Feather, Benjamen P
Bycroft, Sean Phillips, Michelle Simon, Mark Harter, Kenneth Costello, Yuri
Gawdiak, Stephen Paine
|
Space Trusted Autonomy Readiness Levels
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Technology Readiness Levels are a mainstay for organizations that fund,
develop, test, acquire, or use technologies. Technology Readiness Levels
provide a standardized assessment of a technology's maturity and enable
consistent comparison among technologies. They inform decisions throughout a
technology's development life cycle, from concept, through development, to use.
A variety of alternative Readiness Levels have been developed, including
Algorithm Readiness Levels, Manufacturing Readiness Levels, Human Readiness
Levels, Commercialization Readiness Levels, Machine Learning Readiness Levels,
and Technology Commitment Levels. However, while Technology Readiness Levels
have been increasingly applied to emerging disciplines, there are unique
challenges to assessing the rapidly developing capabilities of autonomy. This
paper adopts the moniker of Space Trusted Autonomy Readiness Levels to identify
a two-dimensional scale of readiness and trust appropriate for the special
challenges of assessing autonomy technologies that seek space use. It draws
inspiration from other readiness levels' definitions, and from the rich field
of trust and trustworthiness. The Space Trusted Autonomy Readiness Levels were
developed by a collaborative Space Trusted Autonomy subgroup, which was created
from The Space Science and Technology Partnership Forum between the United
States Space Force, the National Aeronautics and Space Administration, and the
National Reconnaissance Office.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 15:16:42 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2022 14:50:00 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Hobbs",
"Kerianne L.",
""
],
[
"Lyons",
"Joseph B.",
""
],
[
"Feather",
"Martin S.",
""
],
[
"Bycroft",
"Benjamen P",
""
],
[
"Phillips",
"Sean",
""
],
[
"Simon",
"Michelle",
""
],
[
"Harter",
"Mark",
""
],
[
"Costello",
"Kenneth",
""
],
[
"Gawdiak",
"Yuri",
""
],
[
"Paine",
"Stephen",
""
]
] |
new_dataset
| 0.999376 |
2210.10033
|
Yu Wang
|
Yu Wang, Haoyao Chen, Yufeng Liu, and Shiwu Zhang
|
Edge-based Monocular Thermal-Inertial Odometry in Visually Degraded
Environments
|
8 pages, 10 figures,
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State estimation in complex illumination environments based on conventional
visual-inertial odometry is a challenging task due to the severe visual
degradation of the visual camera. The thermal infrared camera operates at any
time of day and is less affected by illumination variation. However, most
existing visual data association algorithms are incompatible because the
thermal infrared data contains large noise and low contrast. Motivated by the
phenomenon that thermal radiation varies most significantly at the edges of
objects, this study proposes ETIO, the first edge-based monocular
thermal-inertial odometry for robust localization in visually degraded
environments. Instead of the raw image, we utilize the binarized image from
edge extraction for pose estimation to overcome the poor thermal infrared image
quality. Then, an adaptive feature tracking strategy ADT-KLT is developed for
robust data association based on limited edge information and its distance
distribution. Finally, a pose graph optimization performs real-time estimation
over a sliding window of recent states by combining IMU pre-integration with
reprojection error of all edge feature observations. We evaluated the
performance of the proposed system on public datasets and real-world
experiments and compared it against state-of-the-art methods. The proposed ETIO
was verified to deliver accurate and robust localization at any time of day.
|
[
{
"version": "v1",
"created": "Tue, 18 Oct 2022 17:54:15 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 06:34:03 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Chen",
"Haoyao",
""
],
[
"Liu",
"Yufeng",
""
],
[
"Zhang",
"Shiwu",
""
]
] |
new_dataset
| 0.984443 |
2210.10358
|
Elisa Sanchez-Bayona
|
Elisa Sanchez-Bayona, Rodrigo Agerri
|
Leveraging a New Spanish Corpus for Multilingual and Crosslingual
Metaphor Detection
|
To be published in CoNLL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The lack of wide-coverage datasets annotated with everyday metaphorical
expressions for languages other than English is striking. This means that most
research on supervised metaphor detection has been published only for that
language. In order to address this issue, this work presents the first corpus
annotated with naturally occurring metaphors in Spanish large enough to develop
systems to perform metaphor detection. The presented dataset, CoMeta, includes
texts from various domains, namely, news, political discourse, Wikipedia and
reviews. In order to label CoMeta, we apply the MIPVU method, the guidelines
most commonly used to systematically annotate metaphor on real data. We use our
newly created dataset to provide competitive baselines by fine-tuning several
multilingual and monolingual state-of-the-art large language models.
Furthermore, by leveraging the existing VUAM English data in addition to
CoMeta, we present the, to the best of our knowledge, first cross-lingual
experiments on supervised metaphor detection. Finally, we perform a detailed
error analysis that explores the seemingly high transfer of everyday metaphor
across these two languages and datasets.
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 07:55:36 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2022 10:48:25 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Sanchez-Bayona",
"Elisa",
""
],
[
"Agerri",
"Rodrigo",
""
]
] |
new_dataset
| 0.99923 |
2210.11065
|
Digbalay Bose
|
Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin
Cui, Kree Cole-McLaughlin, Huisheng Wang, Shrikanth Narayanan
|
MovieCLIP: Visual Scene Recognition in Movies
|
Accepted to 2023 IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV 2023). Project website with supplemental material:
https://sail.usc.edu/~mica/MovieCLIP/. Revised version with updated author
affiliations
| null | null | null |
cs.CV cs.CL cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Longform media such as movies have complex narrative structures, with events
spanning a rich variety of ambient visual scenes. Domain specific challenges
associated with visual scenes in movies include transitions, person coverage,
and a wide array of real-life and fictional scenarios. Existing visual scene
datasets in movies have limited taxonomies and don't consider the visual scene
transition within movie clips. In this work, we address the problem of visual
scene recognition in movies by first automatically curating a new and extensive
movie-centric taxonomy of 179 scene labels derived from movie scripts and
auxiliary web-based video datasets. Instead of manual annotations which can be
expensive, we use CLIP to weakly label 1.12 million shots from 32K movie clips
based on our proposed taxonomy. We provide baseline visual models trained on
the weakly labeled dataset called MovieCLIP and evaluate them on an independent
dataset verified by human raters. We show that leveraging features from models
pretrained on MovieCLIP benefits downstream tasks such as multi-label scene and
genre classification of web videos and movie trailers.
|
[
{
"version": "v1",
"created": "Thu, 20 Oct 2022 07:38:56 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Oct 2022 01:25:13 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Bose",
"Digbalay",
""
],
[
"Hebbar",
"Rajat",
""
],
[
"Somandepalli",
"Krishna",
""
],
[
"Zhang",
"Haoyang",
""
],
[
"Cui",
"Yin",
""
],
[
"Cole-McLaughlin",
"Kree",
""
],
[
"Wang",
"Huisheng",
""
],
[
"Narayanan",
"Shrikanth",
""
]
] |
new_dataset
| 0.999868 |
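A hedged sketch of CLIP-based weak scene labeling in the spirit of the MovieCLIP record above (arXiv:2210.11065). This is not the authors' pipeline; the model checkpoint, the tiny example taxonomy, and the prompt template are assumptions made only for demonstration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

SCENE_LABELS = ["kitchen", "office", "beach", "courtroom", "spaceship"]  # toy taxonomy

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def weak_scene_labels(image: Image.Image, top_k: int = 3):
    """Return the top-k scene labels for a single shot keyframe."""
    prompts = [f"a movie scene in a {label}" for label in SCENE_LABELS]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image.squeeze(0)  # (num_labels,)
    probs = logits.softmax(dim=-1)
    top = probs.topk(top_k)
    return [(SCENE_LABELS[int(i)], float(p)) for p, i in zip(top.values, top.indices)]

# Example usage: weak_scene_labels(Image.open("shot_keyframe.jpg"))
```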
2210.12169
|
Abdulrahman Aloraini
|
Abdulrahman Aloraini and Sameer Pradhan and Massimo Poesio
|
Joint Coreference Resolution for Zeros and non-Zeros in Arabic
| null |
Published at The Seventh Arabic Natural Language Processing
Workshop (WANLP 2022)
| null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Most existing proposals about anaphoric zero pronoun (AZP) resolution regard
full mention coreference and AZP resolution as two independent tasks, even
though the two tasks are clearly related. The main issues that need tackling to
develop a joint model for zero and non-zero mentions are the difference between
the two types of arguments (zero pronouns, being null, provide no nominal
information) and the lack of annotated datasets of a suitable size in which
both types of arguments are annotated for languages other than Chinese and
Japanese. In this paper, we introduce two architectures for jointly resolving
AZPs and non-AZPs, and evaluate them on Arabic, a language for which, as far as
we know, there has been no prior work on joint resolution. Doing this also
required creating a new version of the Arabic subset of the standard
coreference resolution dataset used for the CoNLL-2012 shared task (Pradhan et
al.,2012) in which both zeros and non-zeros are included in a single dataset.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 18:01:01 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Aloraini",
"Abdulrahman",
""
],
[
"Pradhan",
"Sameer",
""
],
[
"Poesio",
"Massimo",
""
]
] |
new_dataset
| 0.992667 |
2210.12181
|
Weizi Li
|
Weizi Li
|
Urban Socio-Technical Systems: An Autonomy and Mobility Perspective
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The future of the human race is urban. The world's population is projected to
grow by an additional 2.5 billion by 2050, with nearly all of this growth expected
to occur in urban areas. This will increase the percentage of urban population from 55% today to
70% within three decades and further strengthen the role of cities as the hub
for information, transportation, and overall socio-economic development. Unlike
any other time in human history, the increasing levels of autonomy and machine
intelligence are transforming cities to be no longer just human agglomerations
but a fusion of humans, machines, and algorithms making collective decisions,
and thus complex socio-technical systems. This manuscript summarizes and discusses
my efforts from the urban autonomy and mobility perspective to develop the
urban socio-technical system.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 18:15:41 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Li",
"Weizi",
""
]
] |
new_dataset
| 0.968561 |
2210.12198
|
Jonathan Schneider
|
Hossein Esfandiari, Vahab Mirrokni, Jon Schneider
|
Anonymous Bandits for Multi-User Systems
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present and study a new framework for online learning in
systems with multiple users that provide user anonymity. Specifically, we
extend the notion of bandits to obey the standard $k$-anonymity constraint by
requiring each observation to be an aggregation of rewards for at least $k$
users. This provides a simple yet effective framework where one can learn a
clustering of users in an online fashion without observing any user's
individual decision. We initiate the study of anonymous bandits and provide the
first sublinear regret algorithms and lower bounds for this setting.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 18:55:08 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Esfandiari",
"Hossein",
""
],
[
"Mirrokni",
"Vahab",
""
],
[
"Schneider",
"Jon",
""
]
] |
new_dataset
| 0.96026 |
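A minimal simulation of the k-anonymity constraint studied in the "Anonymous Bandits for Multi-User Systems" record above (arXiv:2210.12198): the learner never sees an individual user's reward, only aggregates over batches of at least k users. The greedy learner below is a placeholder for illustration, not the authors' algorithm.

```python
import random
from collections import defaultdict

K = 5            # anonymity parameter: aggregate over at least K users
N_ARMS = 3
TRUE_MEANS = [0.2, 0.5, 0.8]   # hidden Bernoulli reward means per arm

def pull(arm: int) -> float:
    return 1.0 if random.random() < TRUE_MEANS[arm] else 0.0

buffers = defaultdict(list)          # per-arm buffer of unrevealed rewards
observed_sum = [0.0] * N_ARMS        # aggregated (anonymous) feedback
observed_cnt = [0] * N_ARMS

def choose_arm(t: int) -> int:
    if t < N_ARMS * K:               # explore each arm for at least one batch
        return t % N_ARMS
    means = [observed_sum[a] / max(observed_cnt[a], 1) for a in range(N_ARMS)]
    return max(range(N_ARMS), key=lambda a: means[a])

for t in range(3000):                # each step corresponds to one user
    arm = choose_arm(t)
    buffers[arm].append(pull(arm))
    if len(buffers[arm]) >= K:       # only aggregates of >= K users are revealed
        observed_sum[arm] += sum(buffers[arm])
        observed_cnt[arm] += len(buffers[arm])
        buffers[arm].clear()

print("estimated means:", [round(observed_sum[a] / max(observed_cnt[a], 1), 2)
                           for a in range(N_ARMS)])
```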
2210.12209
|
Adam Fishman
|
Adam Fishman, Adithyavairan Murali, Clemens Eppner, Bryan Peele, Byron
Boots, Dieter Fox
|
Motion Policy Networks
|
To be published in the Conference on Robot Learning (CoRL) 2022. 10
pages with 4 figures. Appendix has 10 pages and 1 figure
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Collision-free motion generation in unknown environments is a core building
block for robot manipulation. Generating such motions is challenging due to
multiple objectives; not only should the solutions be optimal, the motion
generator itself must be fast enough for real-time performance and reliable
enough for practical deployment. A wide variety of methods have been proposed
ranging from local controllers to global planners, often being combined to
offset their shortcomings. We present an end-to-end neural model called Motion
Policy Networks (M$\pi$Nets) to generate collision-free, smooth motion from
just a single depth camera observation. M$\pi$Nets are trained on over 3
million motion planning problems in over 500,000 environments. Our experiments
show that M$\pi$Nets are significantly faster than global planners while
exhibiting the reactivity needed to deal with dynamic scenes. They are 46%
better than prior neural planners and more robust than local control policies.
Despite being only trained in simulation, M$\pi$Nets transfer well to the real
robot with noisy partial point clouds. Code and data are publicly available at
https://mpinets.github.io.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 19:37:09 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Fishman",
"Adam",
""
],
[
"Murali",
"Adithyavairan",
""
],
[
"Eppner",
"Clemens",
""
],
[
"Peele",
"Bryan",
""
],
[
"Boots",
"Byron",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.981602 |
2210.12213
|
Zekun Li
|
Zekun Li, Jina Kim, Yao-Yi Chiang, Muhao Chen
|
SpaBERT: A Pretrained Language Model from Geographic Data for Geo-Entity
Representation
|
Accepted by EMNLP 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Named geographic entities (geo-entities for short) are the building blocks of
many geographic datasets. Characterizing geo-entities is integral to various
application domains, such as geo-intelligence and map comprehension, while a
key challenge is to capture the spatial-varying context of an entity. We
hypothesize that we shall know the characteristics of a geo-entity by its
surrounding entities, similar to knowing word meanings by their linguistic
context. Accordingly, we propose a novel spatial language model, SpaBERT, which
provides a general-purpose geo-entity representation based on neighboring
entities in geospatial data. SpaBERT extends BERT to capture linearized spatial
context, while incorporating a spatial coordinate embedding mechanism to
preserve spatial relations of entities in the 2-dimensional space. SpaBERT is
pretrained with masked language modeling and masked entity prediction tasks to
learn spatial dependencies. We apply SpaBERT to two downstream tasks:
geo-entity typing and geo-entity linking. Compared with the existing language
models that do not use spatial context, SpaBERT shows significant performance
improvement on both tasks. We also analyze the entity representation from
SpaBERT in various settings and the effect of spatial coordinate embedding.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 19:42:32 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Li",
"Zekun",
""
],
[
"Kim",
"Jina",
""
],
[
"Chiang",
"Yao-Yi",
""
],
[
"Chen",
"Muhao",
""
]
] |
new_dataset
| 0.996827 |
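A hedged sketch of a spatial coordinate embedding in the spirit of the SpaBERT record above (arXiv:2210.12213): each neighboring geo-entity gets a continuous embedding of its 2D offset from the pivot entity, added to its word embedding. The sinusoidal formulation and dimensions are illustrative assumptions, not the paper's exact design.

```python
import math
import torch
import torch.nn as nn

class SpatialCoordEmbedding(nn.Module):
    def __init__(self, dim: int = 64, max_scale: float = 10000.0):
        super().__init__()
        assert dim % 4 == 0, "need dim divisible by 4 (sin/cos for x and y)"
        self.dim = dim
        self.max_scale = max_scale

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        """coords: (batch, n_entities, 2) offsets from the pivot -> (batch, n, dim)."""
        quarter = self.dim // 4
        freqs = torch.exp(
            -math.log(self.max_scale) * torch.arange(quarter, device=coords.device) / quarter
        )
        x = coords[..., 0:1] * freqs   # (batch, n, quarter)
        y = coords[..., 1:2] * freqs
        return torch.cat([x.sin(), x.cos(), y.sin(), y.cos()], dim=-1)

# Usage: add to word embeddings of the linearized neighbor sequence.
emb = SpatialCoordEmbedding(dim=64)
coords = torch.tensor([[[0.0, 0.0], [1.2, -0.5], [3.0, 4.0]]])  # pivot + 2 neighbors
word_emb = torch.randn(1, 3, 64)                                 # e.g., from a BERT embedding layer
entity_repr = word_emb + emb(coords)
print(entity_repr.shape)  # torch.Size([1, 3, 64])
```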
2210.12215
|
Akshat Gahoi
|
Akshat Gahoi, Jayant Duneja, Anshul Padhi, Shivam Mangale, Saransh
Rajput, Tanvi Kamble, Dipti Misra Sharma, Vasudeva Varma
|
Gui at MixMT 2022 : English-Hinglish: An MT approach for translation of
code mixed data
| null | null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Code-mixed machine translation has become an important task in multilingual
communities, and extending machine translation to code-mixed data has become a
common goal for these languages. In the shared tasks of WMT 2022,
we try to tackle the same for both English + Hindi to Hinglish and Hinglish to
English. The first task dealt with both Roman and Devanagari script as we had
monolingual data in both English and Hindi whereas the second task only had
data in Roman script. To our knowledge, we achieved one of the top ROUGE-L and
WER scores for the first task of Monolingual to Code-Mixed machine translation.
In this paper, we discuss the use of mBART with some special pre-processing and
post-processing (transliteration from Devanagari to Roman) for the first task
in detail and the experiments that we performed for the second task of
translating code-mixed Hinglish to monolingual English.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 19:48:18 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Gahoi",
"Akshat",
""
],
[
"Duneja",
"Jayant",
""
],
[
"Padhi",
"Anshul",
""
],
[
"Mangale",
"Shivam",
""
],
[
"Rajput",
"Saransh",
""
],
[
"Kamble",
"Tanvi",
""
],
[
"Sharma",
"Dipti Misra",
""
],
[
"Varma",
"Vasudeva",
""
]
] |
new_dataset
| 0.998506 |
2210.12228
|
Bowen Zhao
|
Bowen Zhao, Jiuding Sun, Bin Xu, Xingyu Lu, Yuchen Li, Jifan Yu,
Minghui Liu, Tingjian Zhang, Qiuyang Chen, Hanming Li, Lei Hou, Juanzi Li
|
EDUKG: a Heterogeneous Sustainable K-12 Educational Knowledge Graph
|
17 pages, 8 figures
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Web and artificial intelligence technologies, especially semantic web and
knowledge graph (KG), have recently raised significant attention in educational
scenarios. Nevertheless, subject-specific KGs for K-12 education still lack
sufficiency and sustainability from knowledge and data perspectives. To tackle
these issues, we propose EDUKG, a heterogeneous sustainable K-12 Educational
Knowledge Graph. We first design an interdisciplinary and fine-grained ontology
for uniformly modeling knowledge and resource in K-12 education, where we
define 635 classes, 445 object properties, and 1314 datatype properties in
total. Guided by this ontology, we propose a flexible methodology for
interactively extracting factual knowledge from textbooks. Furthermore, we
establish a general mechanism based on our proposed generalized entity linking
system for EDUKG's sustainable maintenance, which can dynamically index
numerous heterogeneous resources and data with knowledge topics in EDUKG. We
further evaluate EDUKG to illustrate its sufficiency, richness, and
variability. We publish EDUKG with more than 252 million entities and 3.86
billion triplets. Our code and data repository is now available at
https://github.com/THU-KEG/EDUKG.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 20:14:41 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Zhao",
"Bowen",
""
],
[
"Sun",
"Jiuding",
""
],
[
"Xu",
"Bin",
""
],
[
"Lu",
"Xingyu",
""
],
[
"Li",
"Yuchen",
""
],
[
"Yu",
"Jifan",
""
],
[
"Liu",
"Minghui",
""
],
[
"Zhang",
"Tingjian",
""
],
[
"Chen",
"Qiuyang",
""
],
[
"Li",
"Hanming",
""
],
[
"Hou",
"Lei",
""
],
[
"Li",
"Juanzi",
""
]
] |
new_dataset
| 0.953493 |
2210.12233
|
Jonathan Brophy
|
Kalyani Asthana, Zhouhang Xie, Wencong You, Adam Noack, Jonathan
Brophy, Sameer Singh, Daniel Lowd
|
TCAB: A Large-Scale Text Classification Attack Benchmark
|
32 pages, 7 figures, and 14 tables
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce the Text Classification Attack Benchmark (TCAB), a dataset for
analyzing, understanding, detecting, and labeling adversarial attacks against
text classifiers. TCAB includes 1.5 million attack instances, generated by
twelve adversarial attacks targeting three classifiers trained on six source
datasets for sentiment analysis and abuse detection in English. Unlike standard
text classification, text attacks must be understood in the context of the
target classifier that is being attacked, and thus features of the target
classifier are important as well. TCAB includes all attack instances that are
successful in flipping the predicted label; a subset of the attacks are also
labeled by human annotators to determine how frequently the primary semantics
are preserved. The process of generating attacks is automated, so that TCAB can
easily be extended to incorporate new text attacks and better classifiers as
they are developed. In addition to the primary tasks of detecting and labeling
attacks, TCAB can also be used for attack localization, attack target labeling,
and attack characterization. TCAB code and dataset are available at
https://react-nlp.github.io/tcab/.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 20:22:45 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Asthana",
"Kalyani",
""
],
[
"Xie",
"Zhouhang",
""
],
[
"You",
"Wencong",
""
],
[
"Noack",
"Adam",
""
],
[
"Brophy",
"Jonathan",
""
],
[
"Singh",
"Sameer",
""
],
[
"Lowd",
"Daniel",
""
]
] |
new_dataset
| 0.999628 |
2210.12261
|
Yue Yang
|
Yue Yang, Wenlin Yao, Hongming Zhang, Xiaoyang Wang, Dong Yu, Jianshu
Chen
|
Z-LaVI: Zero-Shot Language Solver Fueled by Visual Imagination
|
EMNLP 2022
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale pretrained language models have made significant advances in
solving downstream language understanding tasks. However, they generally suffer
from reporting bias, the phenomenon describing the lack of explicit commonsense
knowledge in written text, e.g., ''an orange is orange''. To overcome this
limitation, we develop a novel approach, Z-LaVI, to endow language models with
visual imagination capabilities. Specifically, we leverage two complementary
types of ''imaginations'': (i) recalling existing images through retrieval and
(ii) synthesizing nonexistent images via text-to-image generation. Jointly
exploiting the language inputs and the imagination, a pretrained
vision-language model (e.g., CLIP) eventually composes a zero-shot solution to
the original language tasks. Notably, fueling language models with imagination
can effectively leverage visual knowledge to solve plain language tasks. In
consequence, Z-LaVI consistently improves the zero-shot performance of existing
language models across a diverse set of language tasks.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 21:33:10 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Yang",
"Yue",
""
],
[
"Yao",
"Wenlin",
""
],
[
"Zhang",
"Hongming",
""
],
[
"Wang",
"Xiaoyang",
""
],
[
"Yu",
"Dong",
""
],
[
"Chen",
"Jianshu",
""
]
] |
new_dataset
| 0.990932 |
2210.12270
|
Chen Chen
|
Chen Chen, Matin Yarmand, Zhuoqun Xu, Varun Singh, Yang Zhang, Nadir
Weibel
|
Investigating Input Modality and Task Geometry on Precision-first 3D
Drawing in Virtual Reality
|
C. Chen, M. Yarmand, Z. Xu and V. Singh, Y. Zhang and N. Weibel,
"Investigating Input Modality and Task Geometry on Precision-first 3D Drawing
in Virtual Reality", 2022 IEEE International Symposium on Mixed and Augmented
Reality (ISMAR), 2022, pp. 1-10, doi: 10.1109/ISMAR55827.2022.00054
| null |
10.1109/ISMAR55827.2022.00054
| null |
cs.HC cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Accurately drawing non-planar 3D curves in immersive Virtual Reality (VR) is
indispensable for many precise 3D tasks. However, due to lack of physical
support, limited depth perception, and the non-planar nature of 3D curves, it
is challenging to adjust mid-air strokes to achieve high precision. Instead of
creating new interaction techniques, we investigated how task geometric shapes
and input modalities affect precision-first drawing performance in a
within-subject study (n = 12) focusing on 3D target tracing in commercially
available VR headsets. We found that compared to using bare hands, VR
controllers and pens yield nearly 30% of precision gain, and that the tasks
with large curvature, forward-backward or left-right orientations perform best.
We finally discuss opportunities for designing novel interaction techniques for
precise 3D drawing. We believe that our work will benefit future research
aiming to create usable toolboxes for precise 3D drawing.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 21:56:43 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Chen",
"Chen",
""
],
[
"Yarmand",
"Matin",
""
],
[
"Xu",
"Zhuoqun",
""
],
[
"Singh",
"Varun",
""
],
[
"Zhang",
"Yang",
""
],
[
"Weibel",
"Nadir",
""
]
] |
new_dataset
| 0.968042 |
2210.12308
|
Niranjan Uma Naresh
|
Niranjan Uma Naresh, Ziyan Jiang, Ankit, Sungjin Lee, Jie Hao, Xing
Fan, Chenlei Guo
|
PENTATRON: PErsonalized coNText-Aware Transformer for Retrieval-based
cOnversational uNderstanding
|
EMNLP 2022
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conversational understanding is an integral part of modern intelligent
devices. In a large fraction of the global traffic from customers using smart
digital assistants, frictions in dialogues may be attributed to incorrect
understanding of the entities in a customer's query due to factors including
ambiguous mentions, mispronunciation, background noise and faulty on-device
signal processing. Such errors are compounded by two common deficiencies from
intelligent devices namely, (1) the device not being tailored to individual
customers, and (2) the device responses being unaware of the context in the
conversation session. Viewing this problem via the lens of retrieval-based
search engines, we build and evaluate a scalable entity correction system,
PENTATRON. The system leverages a parametric transformer-based language model
to learn patterns from in-session customer-device interactions coupled with a
non-parametric personalized entity index to compute the correct query, which
aids downstream components in reasoning about the best response. In addition to
establishing baselines and demonstrating the value of personalized and
context-aware systems, we use multitasking to learn the domain of the correct
entity. We also investigate the utility of language model prompts. Through
extensive experiments, we show a significant upward movement of the key metric
(Exact Match) by up to 500.97% (relative to the baseline).
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 00:14:47 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Naresh",
"Niranjan Uma",
""
],
[
"Jiang",
"Ziyan",
""
],
[
"Ankit",
"",
""
],
[
"Lee",
"Sungjin",
""
],
[
"Hao",
"Jie",
""
],
[
"Fan",
"Xing",
""
],
[
"Guo",
"Chenlei",
""
]
] |
new_dataset
| 0.980403 |
2210.12352
|
Yi-Ling Qiao
|
Yi-Ling Qiao, Alexander Gao, and Ming C. Lin
|
NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos
|
NeurIPS 2022
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method for learning 3D geometry and physics parameters of a
dynamic scene from only a monocular RGB video input. To decouple the learning
of underlying scene geometry from dynamic motion, we represent the scene as a
time-invariant signed distance function (SDF) which serves as a reference
frame, along with a time-conditioned deformation field. We further bridge this
neural geometry representation with a differentiable physics simulator by
designing a two-way conversion between the neural field and its corresponding
hexahedral mesh, enabling us to estimate physics parameters from the source
video by minimizing a cycle consistency loss. Our method also allows a user to
interactively edit 3D objects from the source video by modifying the recovered
hexahedral mesh, and propagating the operation back to the neural field
representation. Experiments show that our method achieves superior mesh and
video reconstruction of dynamic scenes compared to competing Neural Field
approaches, and we provide extensive examples which demonstrate its ability to
extract useful 3D representations from videos captured with consumer-grade
cameras.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 04:57:55 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Qiao",
"Yi-Ling",
""
],
[
"Gao",
"Alexander",
""
],
[
"Lin",
"Ming C.",
""
]
] |
new_dataset
| 0.996881 |
2210.12374
|
Yilun Zhao
|
Yilun Zhao, Linyong Nan, Zhenting Qi, Rui Zhang, Dragomir Radev
|
ReasTAP: Injecting Table Reasoning Skills During Pre-training via
Synthetic Reasoning Examples
|
accepted by EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Reasoning over tabular data requires both table structure understanding and a
broad set of table reasoning skills. Current models with table-specific
architectures and pre-training methods perform well on understanding table
structures, but they still struggle with tasks that require various table
reasoning skills. In this work, we develop ReasTAP to show that high-level
table reasoning skills can be injected into models during pre-training without
a complex table-specific architecture design. We define 7 table reasoning
skills, such as numerical operation, temporal comparison, and conjunction. Each
reasoning skill is associated with one example generator, which synthesizes
questions over semi-structured tables according to the sampled templates. We
model the table pre-training task as a sequence generation task and pre-train
ReasTAP to generate precise answers to the synthetic examples. ReasTAP is
evaluated on four benchmarks covering three downstream tasks including: 1)
WikiSQL and WTQ for Table Question Answering; 2) TabFact for Table Fact
Verification; and 3) LogicNLG for Faithful Table-to-Text Generation.
Experimental results demonstrate that ReasTAP achieves new state-of-the-art
performance on all benchmarks and delivers a significant improvement in
low-resource settings. Our code is publicly available at
https://github.com/Yale-LILY/ReasTAP.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 07:04:02 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Zhao",
"Yilun",
""
],
[
"Nan",
"Linyong",
""
],
[
"Qi",
"Zhenting",
""
],
[
"Zhang",
"Rui",
""
],
[
"Radev",
"Dragomir",
""
]
] |
new_dataset
| 0.994777 |
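A toy sketch of the template-based synthetic-example idea behind the ReasTAP record above (arXiv:2210.12374): each reasoning skill has a generator that samples a question template over a table and computes the gold answer programmatically. The single "numerical operation" template and the table linearization below are illustrative assumptions, not the authors' template set.

```python
import random

TABLE = {
    "header": ["player", "team", "points"],
    "rows": [["Ann", "Red", 31], ["Bo", "Blue", 12], ["Cy", "Red", 25]],
}

def gen_numerical_example(table, rng=random):
    """Sample a sum/max question over a numeric column and compute the gold answer."""
    col = table["header"].index("points")
    op = rng.choice(["sum", "max"])
    values = [row[col] for row in table["rows"]]
    question = f"What is the {op} of the points column?"
    answer = sum(values) if op == "sum" else max(values)
    # Pre-training input: linearized table + question; target: the answer string.
    linearized = " | ".join(table["header"]) + " ; " + " ; ".join(
        " | ".join(str(c) for c in row) for row in table["rows"]
    )
    return {"input": f"{question} context: {linearized}", "target": str(answer)}

print(gen_numerical_example(TABLE))
```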
2210.12384
|
Zhixun Li
|
Zhixun Li, Dingshuo Chen, Qiang Liu, Shu Wu
|
The Devil is in the Conflict: Disentangled Information Graph Neural
Networks for Fraud Detection
|
10 pages, 8 figures, IEEE International Conference on Data Mining
(ICDM)
| null | null | null |
cs.LG cs.AI cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph-based fraud detection has heretofore received considerable attention.
Owing to the great success of Graph Neural Networks (GNNs), many approaches
adopting GNNs for fraud detection have been gaining momentum. However, most
existing methods are based on the strong inductive bias of homophily, which
indicates that context neighbors tend to have the same labels or similar
features. In real scenarios, fraudsters often engage in camouflage behaviors in
order to avoid detection systems. Therefore, the homophilic assumption no longer
holds, which is known as the inconsistency problem. In this paper, we argue
that the performance degradation is mainly attributed to the inconsistency
between topology and attribute. To address this problem, we propose to
disentangle the fraud network into two views, each corresponding to topology
and attribute respectively. Then we propose a simple and effective method that
uses the attention mechanism to adaptively fuse two views which captures
data-specific preference. In addition, we further improve it by introducing
mutual information constraints for topology and attribute. To this end, we
propose a Disentangled Information Graph Neural Network (DIGNN) model, which
utilizes variational bounds to find an approximate solution to our proposed
optimization objective function. Extensive experiments demonstrate that our
model can significantly outperform state-of-the-art baselines on real-world
fraud detection datasets.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 08:21:49 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Li",
"Zhixun",
""
],
[
"Chen",
"Dingshuo",
""
],
[
"Liu",
"Qiang",
""
],
[
"Wu",
"Shu",
""
]
] |
new_dataset
| 0.971443 |
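A minimal PyTorch sketch of the per-node attention fusion of a topology view and an attribute view described in the DIGNN record above (arXiv:2210.12384). The two view encoders are stubbed with random embeddings; the real model uses GNN encoders and adds mutual-information constraints not shown here.

```python
import torch
import torch.nn as nn

class TwoViewAttentionFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # shared scorer over both views

    def forward(self, h_topo: torch.Tensor, h_attr: torch.Tensor) -> torch.Tensor:
        """h_topo, h_attr: (num_nodes, dim) -> fused (num_nodes, dim)."""
        stacked = torch.stack([h_topo, h_attr], dim=1)        # (n, 2, dim)
        weights = torch.softmax(self.score(stacked), dim=1)   # (n, 2, 1), per-node preference
        return (weights * stacked).sum(dim=1)                  # data-specific weighted mix

# Toy usage with stub embeddings for a small fraud graph.
n, dim = 6, 16
h_topology = torch.randn(n, dim)   # e.g., output of a GNN over the graph structure
h_attribute = torch.randn(n, dim)  # e.g., output of an MLP over raw node features
fusion = TwoViewAttentionFusion(dim)
fraud_logits = nn.Linear(dim, 2)(fusion(h_topology, h_attribute))
print(fraud_logits.shape)  # torch.Size([6, 2])
```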
2210.12401
|
Xianjun Yang
|
Xianjun Yang, Ya Zhuo, Julia Zuo, Xinlu Zhang, Stephen Wilson, Linda
Petzold
|
PcMSP: A Dataset for Scientific Action Graphs Extraction from
Polycrystalline Materials Synthesis Procedure Text
|
Findings of EMNLP 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scientific action graphs extraction from materials synthesis procedures is
important for reproducible research, machine automation, and material
prediction. But the lack of annotated data has hindered progress in this field.
We demonstrate an effort to annotate Polycrystalline Materials Synthesis
Procedures (PcMSP) from 305 open access scientific articles for the
construction of synthesis action graphs. This is a new dataset for material
science information extraction that simultaneously contains the synthesis
sentences extracted from the experimental paragraphs, as well as the entity
mentions and intra-sentence relations. A two-step human annotation and
inter-annotator agreement study guarantee the high quality of the PcMSP corpus.
We introduce four natural language processing tasks: sentence classification,
named entity recognition, relation classification, and joint extraction of
entities and relations. Comprehensive experiments validate the effectiveness of
several state-of-the-art models for these challenges while leaving large space
for improvement. We also perform the error analysis and point out some unique
challenges that require further investigation. We will release our annotation
scheme, the corpus, and codes to the research community to alleviate the
scarcity of labeled data in this domain.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 09:43:54 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Yang",
"Xianjun",
""
],
[
"Zhuo",
"Ya",
""
],
[
"Zuo",
"Julia",
""
],
[
"Zhang",
"Xinlu",
""
],
[
"Wilson",
"Stephen",
""
],
[
"Petzold",
"Linda",
""
]
] |
new_dataset
| 0.99977 |
2210.12463
|
Chen Tang
|
Chen Tang, Chenghua Lin, Henglin Huang, Frank Guerin and Zhihao Zhang
|
EtriCA: Event-Triggered Context-Aware Story Generation Augmented by
Cross Attention
|
Accepted by EMNLP 2022 Findings
|
EMNLP 2022 Findings
| null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the key challenges of automatic story generation is how to generate a
long narrative that can maintain fluency, relevance, and coherence. Despite
recent progress, current story generation systems still face the challenge of
how to effectively capture contextual and event features, which has a profound
impact on a model's generation performance. To address these challenges, we
present EtriCA, a novel neural generation model, which improves the relevance
and coherence of the generated stories through residually mapping context
features to event sequences with a cross-attention mechanism. Such a feature
capturing mechanism allows our model to better exploit the logical relatedness
between events when generating stories. Extensive experiments based on both
automatic and human evaluations show that our model significantly outperforms
state-of-the-art baselines, demonstrating the effectiveness of our model in
leveraging context and event features.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 14:51:12 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Tang",
"Chen",
""
],
[
"Lin",
"Chenghua",
""
],
[
"Huang",
"Henglin",
""
],
[
"Guerin",
"Frank",
""
],
[
"Zhang",
"Zhihao",
""
]
] |
new_dataset
| 0.992918 |
2210.12478
|
Prajjwal Bhargava
|
Prajjwal Bhargava, Vincent Ng
|
DiscoSense: Commonsense Reasoning with Discourse Connectives
|
EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present DiscoSense, a benchmark for commonsense reasoning via
understanding a wide variety of discourse connectives. We generate compelling
distractors in DiscoSense using Conditional Adversarial Filtering, an extension
of Adversarial Filtering that employs conditional generation. We show that
state-of-the-art pre-trained language models struggle to perform well on
DiscoSense, which makes this dataset ideal for evaluating next-generation
commonsense reasoning systems.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 15:33:38 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Bhargava",
"Prajjwal",
""
],
[
"Ng",
"Vincent",
""
]
] |
new_dataset
| 0.999842 |
2210.12485
|
Yichi Zhang
|
Yichi Zhang, Jianing Yang, Jiayi Pan, Shane Storks, Nikhil Devraj,
Ziqiao Ma, Keunwoo Peter Yu, Yuwei Bao, Joyce Chai
|
DANLI: Deliberative Agent for Following Natural Language Instructions
|
Accepted in EMNLP 2022
| null | null | null |
cs.AI cs.CL cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have seen an increasing amount of work on embodied AI agents
that can perform tasks by following human language instructions. However, most
of these agents are reactive, meaning that they simply learn and imitate
behaviors encountered in the training data. These reactive agents are
insufficient for long-horizon complex tasks. To address this limitation, we
propose a neuro-symbolic deliberative agent that, while following language
instructions, proactively applies reasoning and planning based on its neural
and symbolic representations acquired from past experience (e.g., natural
language and egocentric vision). We show that our deliberative agent achieves
greater than 70% improvement over reactive baselines on the challenging TEACh
benchmark. Moreover, the underlying reasoning and planning processes, together
with our modular framework, offer impressive transparency and explainability to
the behaviors of the agent. This enables an in-depth understanding of the
agent's capabilities, which shed light on challenges and opportunities for
future embodied agents for instruction following. The code is available at
https://github.com/sled-group/DANLI.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 15:57:01 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Zhang",
"Yichi",
""
],
[
"Yang",
"Jianing",
""
],
[
"Pan",
"Jiayi",
""
],
[
"Storks",
"Shane",
""
],
[
"Devraj",
"Nikhil",
""
],
[
"Ma",
"Ziqiao",
""
],
[
"Yu",
"Keunwoo Peter",
""
],
[
"Bao",
"Yuwei",
""
],
[
"Chai",
"Joyce",
""
]
] |
new_dataset
| 0.99691 |
2210.12487
|
Yinya Huang
|
Yinya Huang, Hongming Zhang, Ruixin Hong, Xiaodan Liang, Changshui
Zhang and Dong Yu
|
MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure
|
To appear at the main conference of EMNLP 2022
|
EMNLP 2022
| null | null |
cs.AI cs.CL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a comprehensive benchmark to investigate models'
logical reasoning capabilities in complex real-life scenarios. Current
explanation datasets often employ synthetic data with simple reasoning
structures. Therefore, they cannot express more complex reasoning processes, such
as the rebuttal to a reasoning step and the degree of certainty of the
evidence. To this end, we propose a comprehensive logical reasoning explanation
form. Based on the multi-hop chain of reasoning, the explanation form includes
three main components: (1) The condition of rebuttal that the reasoning node
can be challenged; (2) Logical formulae that uncover the internal texture of
reasoning nodes; (3) Reasoning strength indicated by degrees of certainty. The
fine-grained structure conforms to real logical reasoning scenarios, better
fitting the human cognitive process while, at the same time, being more challenging
for current models. We evaluate the current best models' performance on
this new explanation form. The experimental results show that generating
reasoning graphs remains a challenging task for current models, even with the
help of giant pre-trained language models.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 16:01:13 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Huang",
"Yinya",
""
],
[
"Zhang",
"Hongming",
""
],
[
"Hong",
"Ruixin",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Zhang",
"Changshui",
""
],
[
"Yu",
"Dong",
""
]
] |
new_dataset
| 0.999113 |
2210.12493
|
Pascal Jansen Jansen
|
Pascal Jansen, Mark Colley, Enrico Rukzio
|
A Design Space for Human Sensor and Actuator Focused In-Vehicle
Interaction Based on a Systematic Literature Review
|
Proceedings of the ACM on Interactive, Mobile, Wearable and
Ubiquitous Technologies
|
6 (2022) 1-51
|
10.1145/3534617
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automotive user interfaces constantly change due to increasing automation,
novel features, additional applications, and user demands. While in-vehicle
interaction can utilize numerous promising modalities, no existing overview
includes an extensive set of human sensors and actuators and interaction
locations throughout the vehicle interior. We conducted a systematic literature
review of 327 publications leading to a design space for in-vehicle interaction
that outlines existing and lack of work regarding input and output modalities,
locations, and multimodal interaction. To investigate user acceptance of
possible modalities and locations inferred from existing work and gaps unveiled
in our design space, we conducted an online study (N=48). The study revealed
users' general acceptance of novel modalities (e.g., brain or thermal activity)
and interaction with locations other than the front (e.g., seat or table). Our
work helps practitioners evaluate key design decisions, exploit trends, and
explore new areas in the domain of in-vehicle interaction.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 16:36:22 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Jansen",
"Pascal",
""
],
[
"Colley",
"Mark",
""
],
[
"Rukzio",
"Enrico",
""
]
] |
new_dataset
| 0.971149 |
2210.12511
|
Ziqiao Ma
|
Ziqiao Ma, Ben VanDerPloeg, Cristian-Paul Bara, Huang Yidong, Eui-In
Kim, Felix Gervits, Matthew Marge, Joyce Chai
|
DOROTHIE: Spoken Dialogue for Handling Unexpected Situations in
Interactive Autonomous Driving Agents
|
Findings of EMNLP, 2022
| null | null | null |
cs.AI cs.CL cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the real world, autonomous driving agents navigate in highly dynamic
environments full of unexpected situations where pre-trained models are
unreliable. In these situations, what is immediately available to vehicles is
often only human operators. Empowering autonomous driving agents with the
ability to navigate in a continuous and dynamic environment and to communicate
with humans through sensorimotor-grounded dialogue becomes critical. To this
end, we introduce Dialogue On the ROad To Handle Irregular Events (DOROTHIE), a
novel interactive simulation platform that enables the creation of unexpected
situations on the fly to support empirical studies on situated communication
with autonomous driving agents. Based on this platform, we created the Situated
Dialogue Navigation (SDN), a navigation benchmark of 183 trials with a total of
8415 utterances, around 18.7 hours of control streams, and 2.9 hours of trimmed
audio. SDN is developed to evaluate the agent's ability to predict dialogue
moves from humans as well as generate its own dialogue moves and physical
navigation actions. We further developed a transformer-based baseline model for
these SDN tasks. Our empirical results indicate that language guided-navigation
in a highly dynamic environment is an extremely difficult task for end-to-end
models. These results will provide insight towards future work on robust
autonomous driving agents. The DOROTHIE platform, SDN benchmark, and code for
the baseline model are available at https://github.com/sled-group/DOROTHIE.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 17:52:46 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Ma",
"Ziqiao",
""
],
[
"VanDerPloeg",
"Ben",
""
],
[
"Bara",
"Cristian-Paul",
""
],
[
"Yidong",
"Huang",
""
],
[
"Kim",
"Eui-In",
""
],
[
"Gervits",
"Felix",
""
],
[
"Marge",
"Matthew",
""
],
[
"Chai",
"Joyce",
""
]
] |
new_dataset
| 0.982011 |
2210.12521
|
Kei Ota
|
Kei Ota, Hsiao-Yu Tung, Kevin A. Smith, Anoop Cherian, Tim K. Marks,
Alan Sullivan, Asako Kanezaki, and Joshua B. Tenenbaum
|
H-SAUR: Hypothesize, Simulate, Act, Update, and Repeat for Understanding
Object Articulations from Interactions
| null | null | null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The world is filled with articulated objects that are difficult to determine
how to use from vision alone, e.g., a door might open inwards or outwards.
Humans handle these objects with strategic trial-and-error: first pushing a
door then pulling if that doesn't work. We enable these capabilities in
autonomous agents by proposing "Hypothesize, Simulate, Act, Update, and Repeat"
(H-SAUR), a probabilistic generative framework that simultaneously generates a
distribution of hypotheses about how objects articulate given input
observations, captures certainty over hypotheses over time, and infers plausible
actions for exploration and goal-conditioned manipulation. We compare our model
with existing work in manipulating objects after a handful of exploration
actions, on the PartNet-Mobility dataset. We further propose a novel
PuzzleBoxes benchmark that contains locked boxes that require multiple steps to
solve. We show that the proposed model significantly outperforms the current
state-of-the-art articulated object manipulation framework, despite using zero
training data. We further improve the test-time efficiency of H-SAUR by
integrating a learned prior from learning-based vision models.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 18:39:33 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Ota",
"Kei",
""
],
[
"Tung",
"Hsiao-Yu",
""
],
[
"Smith",
"Kevin A.",
""
],
[
"Cherian",
"Anoop",
""
],
[
"Marks",
"Tim K.",
""
],
[
"Sullivan",
"Alan",
""
],
[
"Kanezaki",
"Asako",
""
],
[
"Tenenbaum",
"Joshua B.",
""
]
] |
new_dataset
| 0.977079 |
2210.12539
|
Tanya Shreedhar
|
Tanya Shreedhar, Sanjit K. Kaul and Roy D. Yates
|
ACP+: An Age Control Protocol for the Internet
|
Under submission. arXiv admin note: text overlap with
arXiv:2103.07797, arXiv:1811.03353
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ACP+, an age control protocol, which is a transport layer protocol
that regulates the rate at which update packets from a source are sent over the
Internet to a monitor. The source would like to keep the average age of sensed
information at the monitor to a minimum, given the network conditions.
Extensive experimentation helps us shed light on age control over the current
Internet and its implications for sources sending updates over a shared
wireless access to monitors in the cloud. We also show that many congestion
control algorithms proposed over the years for the Transmission Control
Protocol (TCP), including hybrid approaches that achieve higher throughputs at
lower delays than traditional loss-based congestion control, are unsuitable for
age control.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 20:01:22 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Shreedhar",
"Tanya",
""
],
[
"Kaul",
"Sanjit K.",
""
],
[
"Yates",
"Roy D.",
""
]
] |
new_dataset
| 0.999578 |
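A small simulation sketch of the metric that ACP+ (the record above, arXiv:2210.12539) regulates: the age of information at the monitor, i.e., the time since generation of the freshest delivered update. The fixed update rate and network delay below are toy assumptions, not measurements from the paper.

```python
def average_age(send_times, delays, horizon, dt=0.01):
    """Average age at the monitor over [0, horizon] for updates sent at
    `send_times` with one-way network `delays` (same length, delivery assumed)."""
    arrivals = sorted(zip((s + d for s, d in zip(send_times, delays)), send_times))
    age_sum, steps = 0.0, 0
    t = 0.0
    freshest_gen = None   # generation time of the freshest delivered update
    i = 0
    while t <= horizon:
        while i < len(arrivals) and arrivals[i][0] <= t:
            gen = arrivals[i][1]
            freshest_gen = gen if freshest_gen is None else max(freshest_gen, gen)
            i += 1
        if freshest_gen is not None:
            age_sum += (t - freshest_gen)
            steps += 1
        t += dt
    return age_sum / max(steps, 1)

# One update per second, constant 300 ms one-way delay.
sends = [k * 1.0 for k in range(10)]
delays = [0.3] * 10
print(round(average_age(sends, delays, horizon=10.0), 3))
```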
2210.12541
|
Yuanbo Hou
|
Yuanbo Hou, Yun Wang, Wenwu Wang, Dick Botteldooren
|
GCT: Gated Contextual Transformer for Sequential Audio Tagging
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio tagging aims to assign predefined tags to audio clips to indicate the
class information of audio events. Sequential audio tagging (SAT) means
detecting both the class information of audio events, and the order in which
they occur within the audio clip. Most existing methods for SAT are based on
connectionist temporal classification (CTC). However, CTC cannot effectively
capture connections between events due to the conditional independence
assumption between outputs at different times. The contextual Transformer
(cTransformer) addresses this issue by exploiting contextual information in
SAT. Nevertheless, cTransformer is also limited in exploiting contextual
information as it only uses forward information in inference. This paper
proposes a gated contextual Transformer (GCT) with forward-backward inference
(FBI). In addition, a gated contextual multi-layer perceptron (GCMLP) block is
proposed in GCT to improve the performance of cTransformer structurally.
Experiments on two real-life audio datasets show that the proposed GCT with
GCMLP and FBI performs better than the CTC-based methods and cTransformer. To
promote research on SAT, the manually annotated sequential labels for the two
datasets are released.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 20:07:57 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Hou",
"Yuanbo",
""
],
[
"Wang",
"Yun",
""
],
[
"Wang",
"Wenwu",
""
],
[
"Botteldooren",
"Dick",
""
]
] |
new_dataset
| 0.990728 |
2210.12560
|
Zhaoyue Sun
|
Zhaoyue Sun, Jiazheng Li, Gabriele Pergola, Byron C. Wallace, Bino
John, Nigel Greene, Joseph Kim, Yulan He
|
PHEE: A Dataset for Pharmacovigilance Event Extraction from Text
|
17 pages, 3 figures, EMNLP2022 accepted
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The primary goal of drug safety researchers and regulators is to promptly
identify adverse drug reactions. Doing so may in turn prevent or reduce the
harm to patients and ultimately improve public health. Evaluating and
monitoring drug safety (i.e., pharmacovigilance) involves analyzing an ever
growing collection of spontaneous reports from health professionals,
physicians, and pharmacists, and information voluntarily submitted by patients.
In this scenario, facilitating analysis of such reports via automation has the
potential to rapidly identify safety signals. Unfortunately, public resources
for developing natural language models for this task are scant. We present
PHEE, a novel dataset for pharmacovigilance comprising over 5000 annotated
events from medical case reports and biomedical literature, making it the
largest such public dataset to date. We describe the hierarchical event schema
designed to provide coarse and fine-grained information about patients'
demographics, treatments and (side) effects. Along with the discussion of the
dataset, we present a thorough experimental evaluation of current
state-of-the-art approaches for biomedical event extraction, point out their
limitations, and highlight open challenges to foster future research in this
area.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 21:57:42 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Sun",
"Zhaoyue",
""
],
[
"Li",
"Jiazheng",
""
],
[
"Pergola",
"Gabriele",
""
],
[
"Wallace",
"Byron C.",
""
],
[
"John",
"Bino",
""
],
[
"Greene",
"Nigel",
""
],
[
"Kim",
"Joseph",
""
],
[
"He",
"Yulan",
""
]
] |
new_dataset
| 0.999817 |
2210.12564
|
Shih-Po Lee
|
Shih-Po Lee, Niraj Prakash Kini, Wen-Hsiao Peng, Ching-Wen Ma,
Jenq-Neng Hwang
|
HuPR: A Benchmark for Human Pose Estimation Using Millimeter Wave Radar
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces a novel human pose estimation benchmark, Human Pose
with Millimeter Wave Radar (HuPR), that includes synchronized vision and radio
signal components. This dataset is created using cross-calibrated mmWave radar
sensors and a monocular RGB camera for cross-modality training of radar-based
human pose estimation. There are two advantages of using mmWave radar to
perform human pose estimation. First, it is robust to dark and low-light
conditions. Second, it is not visually perceivable by humans and thus, can be
widely applied to applications with privacy concerns, e.g., surveillance
systems in patient rooms. In addition to the benchmark, we propose a
cross-modality training framework that leverages the ground-truth 2D keypoints
representing human body joints for training, which are systematically generated
from the pre-trained 2D pose estimation network based on a monocular camera
input image, avoiding laborious manual label annotation efforts. The framework
consists of a new radar pre-processing method that better extracts the velocity
information from radar data, a Cross- and Self-Attention Module (CSAM) to fuse
multi-scale radar features, and Pose Refinement Graph Convolutional Networks
(PRGCN) to refine the predicted keypoint confidence heatmaps. Our intensive
experiments on the HuPR benchmark show that the proposed scheme achieves better
human pose estimation performance with only radar data, as compared to
traditional pre-processing solutions and previous radio-frequency-based
methods.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 22:28:40 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Lee",
"Shih-Po",
""
],
[
"Kini",
"Niraj Prakash",
""
],
[
"Peng",
"Wen-Hsiao",
""
],
[
"Ma",
"Ching-Wen",
""
],
[
"Hwang",
"Jenq-Neng",
""
]
] |
new_dataset
| 0.999702 |
2210.12593
|
William Beksi
|
Quan H. Nguyen, William J. Beksi
|
Single Image Super-Resolution via a Dual Interactive Implicit Neural
Network
|
To be published in the 2023 IEEE/CVF Winter Conference on
Applications of Computer Vision (WACV)
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a novel implicit neural network for the task of
single image super-resolution at arbitrary scale factors. To do this, we
represent an image as a decoding function that maps locations in the image
along with their associated features to their reciprocal pixel attributes.
Since the pixel locations are continuous in this representation, our method can
refer to any location in an image of varying resolution. To retrieve an image
of a particular resolution, we apply a decoding function to a grid of locations
each of which refers to the center of a pixel in the output image. In contrast
to other techniques, our dual interactive neural network decouples content and
positional features. As a result, we obtain a fully implicit representation of
the image that solves the super-resolution problem at (real-valued) elective
scales using a single model. We demonstrate the efficacy and flexibility of our
approach against the state of the art on publicly available benchmark datasets.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 02:05:19 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Nguyen",
"Quan H.",
""
],
[
"Beksi",
"William J.",
""
]
] |
new_dataset
| 0.997839 |
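A hedged sketch of the core idea in the single image super-resolution record above (arXiv:2210.12593): represent an image as a decoding function that maps continuous pixel locations (plus local features) to RGB values, so any output resolution can be queried from one model. The plain coordinate MLP below omits the paper's dual content/positional branches and uses a stub feature function.

```python
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    def __init__(self, feat_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),          # RGB at the queried location
        )

    def forward(self, coords: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        """coords: (n, 2) in [-1, 1]; feats: (n, feat_dim) local features -> (n, 3)."""
        return self.mlp(torch.cat([coords, feats], dim=-1))

def query_grid(decoder, feats_fn, height: int, width: int) -> torch.Tensor:
    """Render any target resolution by querying the decoder at pixel centers."""
    ys = torch.linspace(-1, 1, height)
    xs = torch.linspace(-1, 1, width)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1).reshape(-1, 2)
    rgb = decoder(grid, feats_fn(grid))
    return rgb.reshape(height, width, 3)

decoder = ImplicitDecoder()
feats_fn = lambda coords: torch.zeros(coords.shape[0], 32)  # stand-in for encoder features
print(query_grid(decoder, feats_fn, 48, 64).shape)  # torch.Size([48, 64, 3])
```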
2210.12605
|
Conor Power
|
Shadaj Laddad, Conor Power, Mae Milano, Alvin Cheung, Natacha Crooks,
Joseph M. Hellerstein
|
Keep CALM and CRDT On
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Despite decades of research and practical experience, developers have few
tools for programming reliable distributed applications without resorting to
expensive coordination techniques. Conflict-free replicated datatypes (CRDTs)
are a promising line of work that enable coordination-free replication and
offer certain eventual consistency guarantees in a relatively simple
object-oriented API. Yet CRDT guarantees extend only to data updates;
observations of CRDT state are unconstrained and unsafe. We propose an agenda
that embraces the simplicity of CRDTs, but provides richer, more uniform
guarantees. We extend CRDTs with a query model that reasons about which queries
are safe without coordination by applying monotonicity results from the CALM
Theorem, and lay out a larger agenda for developing CRDT data stores that let
developers safely and efficiently interact with replicated application state.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 03:12:43 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Laddad",
"Shadaj",
""
],
[
"Power",
"Conor",
""
],
[
"Milano",
"Mae",
""
],
[
"Cheung",
"Alvin",
""
],
[
"Crooks",
"Natacha",
""
],
[
"Hellerstein",
"Joseph M.",
""
]
] |
new_dataset
| 0.997926 |
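To illustrate the kind of distinction a CALM-style query model draws, consider a toy grow-only set replica with one monotone query (safe to answer on any replica without coordination) and one non-monotone query (whose answer can be invalidated by later merges). This is a sketch under the paper's general framing, not an API it defines.

```python
# A toy grow-only set CRDT with one monotone and one non-monotone query.
# Illustrative only; not the interface proposed in the paper.

class GSet:
    """Grow-only set: updates only add elements, so replicas merge by union."""
    def __init__(self):
        self.items = set()

    def add(self, x):              # update: commutative, associative, idempotent
        self.items.add(x)

    def merge(self, other):        # join of two replica states
        self.items |= other.items

    # Monotone query: once true, it stays true as the state grows,
    # so it can be answered safely on a single replica.
    def contains(self, x):
        return x in self.items

    # Non-monotone query: can flip from True to False as more updates arrive,
    # so answering it consistently may require coordination.
    def is_absent(self, x):
        return x not in self.items

r1, r2 = GSet(), GSet()
r1.add("a"); r2.add("b")
r1.merge(r2)
print(r1.contains("a"), r1.is_absent("c"))  # True True (the latter is unsafe to trust)
```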
2210.12647
|
Liming Ma
|
Lingfei Jin, Liming Ma, and Chaoping Xing
|
Binary sequences with a low correlation via cyclotomic function fields
with odd characteristics
|
arXiv admin note: text overlap with arXiv:2107.11766
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Sequences with a low correlation have very important applications in
communications, cryptography, and compressed sensing. In the literature, many
efforts have been made to construct good sequences of various lengths, among
which binary sequences have attracted great attention. As a result, various
constructions of good binary sequences have been proposed. However, most of the
known constructions make use of the multiplicative cyclic group structure of the
finite field $\mathbb{F}_{p^n}$ for a prime $p$ and a positive integer $n$. In fact,
all $p^n+1$ rational places including the place at infinity of the rational
function field over $\mathbb{F}_{p^n}$ form a cyclic structure under an
automorphism of order $p^n+1$. In this paper, we make use of this cyclic
structure to provide an explicit construction of binary sequences with a low
correlation of length $p^n+1$ via cyclotomic function fields over
$\mathbb{F}_{p^n}$ for any odd prime $p$. Each family of binary sequences has
size $p^n-2$ and its correlation is upper bounded by $4+\lfloor 2\cdot
p^{n/2}\rfloor$. To the best of our knowledge, this is the first construction
of binary sequences with a low correlation of length $p^n+1$ for an odd prime $p$.
Moreover, our sequences can be constructed explicitly and have competitive
parameters.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 08:08:01 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Jin",
"Lingfei",
""
],
[
"Ma",
"Liming",
""
],
[
"Xing",
"Chaoping",
""
]
] |
new_dataset
| 0.99925 |
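As a quick numerical sanity check of the parameters stated in the abstract above, the following worked instance uses p = 3 and n = 2 (a choice made here purely for illustration):

```latex
% Illustrative instance of the abstract's formulas for p = 3, n = 2.
\[
  p = 3,\; n = 2:\qquad
  \text{sequence length} = p^{n} + 1 = 10, \qquad
  \text{family size} = p^{n} - 2 = 7,
\]
\[
  \text{correlation} \;\le\; 4 + \big\lfloor 2\, p^{n/2} \big\rfloor
                     \;=\; 4 + \lfloor 6 \rfloor \;=\; 10 .
\]
```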
2210.12654
|
Alon Eirew
|
Alon Eirew, Avi Caciularu, Ido Dagan
|
Cross-document Event Coreference Search: Task, Dataset and Modeling
|
EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The task of Cross-document Coreference Resolution has been traditionally
formulated as requiring the identification of all coreference links across a given set of
documents. We propose an appealing, and often more applicable, complementary
setup for the task - Cross-document Coreference Search, focusing in this paper
on event coreference. Concretely, given a mention in context of an event of
interest, considered as a query, the task is to find all coreferring mentions
for the query event in a large document collection. To support research on this
task, we create a corresponding dataset, which is derived from Wikipedia while
leveraging annotations in the available Wikipedia Event Coreference dataset
(WEC-Eng). Observing that the coreference search setup is largely analogous to
the setting of Open Domain Question Answering, we adapt the prominent Dense
Passage Retrieval (DPR) model to our setting, as an appealing baseline.
Finally, we present a novel model that integrates a powerful coreference
scoring scheme into the DPR architecture, yielding improved performance.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 08:21:25 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Eirew",
"Alon",
""
],
[
"Caciularu",
"Avi",
""
],
[
"Dagan",
"Ido",
""
]
] |
new_dataset
| 0.999812 |
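The adaptation described above follows the dense-retrieval recipe: encode the query mention and the candidate passages into the same vector space and rank passages by inner product. Below is a minimal sketch with random stand-in encoders; the encode function and the example passages are hypothetical, not the trained bi-encoder or the Wikipedia-derived data.

```python
# Minimal dual-encoder retrieval sketch in the spirit of DPR-style search.
# The encoders are random stand-ins; only the scoring-and-ranking flow is shown.
import numpy as np

DIM = 16

def encode(text):
    """Hypothetical encoder mapping text to a dense vector (deterministic per string)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(DIM)

passages = [
    "The 2010 eruption disrupted air travel across Europe.",
    "The summit was held in Reykjavik in 1986.",
    "Flights resumed after the ash cloud dispersed.",
]
index = np.stack([encode(p) for p in passages])     # pre-computed passage vectors

query = "volcanic eruption grounding European flights"
scores = index @ encode(query)                      # dot-product relevance scores
ranking = np.argsort(-scores)
print([passages[i] for i in ranking[:2]])           # top-2 candidate mentions
```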
2210.12658
|
Panzhong Lu
|
Panzhong Lu, Xin Zhang, Meishan Zhang and Min Zhang
|
Extending Phrase Grounding with Pronouns in Visual Dialogues
|
Accepted by EMNLP 2022
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional phrase grounding aims to localize noun phrases mentioned in a
given caption to their corresponding image regions, which has achieved great
success recently. However, grounding noun phrases alone is not enough for
cross-modal visual language understanding. Here we extend the task by
considering pronouns as well. First, we construct a dataset that grounds both
noun phrases and pronouns to image regions. Based on the dataset, we test the
performance of phrase grounding using a state-of-the-art model from this line
of work. Then, we enhance the baseline grounding model with coreference
information, which should potentially help our task, modeling the coreference
structures with graph convolutional networks. Experiments on our dataset,
interestingly, show that pronouns are easier to ground than noun phrases,
possibly because these pronouns are much less ambiguous. Additionally, our
final model with coreference information can
significantly boost the grounding performance of both noun phrases and
pronouns.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 08:32:25 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Lu",
"Panzhong",
""
],
[
"Zhang",
"Xin",
""
],
[
"Zhang",
"Meishan",
""
],
[
"Zhang",
"Min",
""
]
] |
new_dataset
| 0.998051 |
2210.12678
|
Silin Gao
|
Silin Gao, Jena D. Hwang, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji,
Antoine Bosselut
|
ComFact: A Benchmark for Linking Contextual Commonsense Knowledge
|
Findings of EMNLP 2022, long paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding rich narratives, such as dialogues and stories, often requires
natural language processing systems to access relevant knowledge from
commonsense knowledge graphs (KGs). However, these systems typically retrieve facts
from KGs using simple heuristics that disregard the complex challenges of
identifying situationally-relevant commonsense knowledge (e.g.,
contextualization, implicitness, ambiguity).
In this work, we propose the new task of commonsense fact linking, where
models are given contexts and trained to identify situationally-relevant
commonsense knowledge from KGs. Our novel benchmark, ComFact, contains ~293k
in-context relevance annotations for commonsense triplets across four
stylistically diverse dialogue and storytelling datasets. Experimental results
confirm that heuristic fact linking approaches are imprecise knowledge
extractors. Learned fact linking models demonstrate across-the-board
performance improvements (~34.6% F1) over these heuristics. Furthermore,
improved knowledge retrieval yielded average downstream improvements of 9.8%
for a dialogue response generation task. However, fact linking models still
significantly underperform humans, suggesting our benchmark is a promising
testbed for research in commonsense augmentation of NLP systems.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 09:30:39 GMT"
}
] | 2022-10-25T00:00:00 |
[
[
"Gao",
"Silin",
""
],
[
"Hwang",
"Jena D.",
""
],
[
"Kanno",
"Saya",
""
],
[
"Wakaki",
"Hiromi",
""
],
[
"Mitsufuji",
"Yuki",
""
],
[
"Bosselut",
"Antoine",
""
]
] |
new_dataset
| 0.992621 |
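To see why simple heuristics make imprecise knowledge extractors, consider a lexical-overlap fact linker of the kind the abstract alludes to. The example below is an illustration only; it is not the benchmark's actual baseline, and the context, facts, and relation names are made up for this sketch.

```python
# Toy heuristic fact linker: mark a commonsense triplet as relevant if it shares
# content words with the context. Illustrative only, not a ComFact baseline.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "is", "for"}

def content_words(text):
    words = (w.strip(".,!?") for w in text.lower().split())
    return {w for w in words if w and w not in STOPWORDS}

def heuristic_link(context, triplet):
    head, relation, tail = triplet
    overlap = content_words(context) & (content_words(head) | content_words(tail))
    return len(overlap) > 0   # "relevant" iff any lexical overlap

context = "She studied all night for the exam."
facts = [
    ("exam", "xNeed", "to study hard"),
    ("night", "HasProperty", "dark"),
]
print([heuristic_link(context, f) for f in facts])
# [True, True]: both are flagged, yet only the first is situationally relevant.
```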