id (string, 9–10 chars) | submitter (string, 2–52 chars, nullable ⌀) | authors (string, 4–6.51k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, nullable ⌀) | journal-ref (string, 4–345 chars, nullable ⌀) | doi (string, 11–120 chars, nullable ⌀) | report-no (string, 2–243 chars, nullable ⌀) | categories (string, 5–98 chars) | license (string, 9 classes) | abstract (string, 33–3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2205.03114
|
Ali Nassif
|
Ali Bou Nassif, Ashraf Elnagar, Omar Elgendy, Yaman Afadar
|
Arabic Fake News Detection Based on Deep Contextualized Embedding Models
|
Published online in the Neural Computing and Applications journal
| null |
10.1007/s00521-022-07206-4
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media is becoming a source of news for many people due to its ease and
freedom of use. As a result, fake news has been spreading quickly and easily
regardless of its credibility, especially in the last decade. Fake news
publishers take advantage of critical situations such as the Covid-19 pandemic
and the American presidential elections to affect societies negatively. Fake
news can seriously impact society in many fields including politics, finance,
sports, etc. Many studies have been conducted to help detect fake news in
English, but research conducted on fake news detection in the Arabic language
is scarce. Our contribution is twofold: first, we have constructed a large and
diverse Arabic fake news dataset. Second, we have developed and evaluated
transformer-based classifiers to identify fake news while utilizing eight
state-of-the-art Arabic contextualized embedding models. The majority of these
models had not been previously used for Arabic fake news detection. We conduct
a thorough analysis of the state-of-the-art Arabic contextualized embedding
models, as well as a comparison with similar fake news detection systems.
Experimental results confirm that these state-of-the-art models are robust,
with accuracy exceeding 98%.
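The fine-tuning recipe the abstract describes is standard for transformer classifiers. As an editorial illustration only (not the authors' code), a minimal sketch with the Hugging Face `transformers` API might look as follows; the checkpoint name, the toy dataset, and all hyperparameters are assumptions:

```python
# Hypothetical sketch: fine-tuning an Arabic contextualized embedding model
# for binary fake-news classification. Checkpoint and hyperparameters are
# placeholders, not the paper's exact configuration.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "aubmindlab/bert-base-arabertv2"   # illustrative Arabic model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
                                                           num_labels=2)

# Tiny in-memory stand-in for the paper's dataset (label 1 = fake).
train_ds = Dataset.from_dict({"text": ["example credible story",
                                       "example fabricated story"],
                              "label": [0, 1]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="out", num_train_epochs=1),
                  train_dataset=train_ds.map(tokenize, batched=True))
trainer.train()
```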
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 09:54:35 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Nassif",
"Ali Bou",
""
],
[
"Elnagar",
"Ashraf",
""
],
[
"Elgendy",
"Omar",
""
],
[
"Afadar",
"Yaman",
""
]
] |
new_dataset
| 0.984905 |
2205.03120
|
Andreas Schuler
|
Andreas Schuler and Gabriele Kotsis
|
MANAi -- An IntelliJ Plugin for Software Energy Consumption Profiling
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Developing energy-efficient software solutions is a tedious task. We need
both the awareness that energy efficiency plays a key role in modern software
development and the tools and techniques to support stakeholders involved in
the software development lifecycle. We therefore present MANAi, a plugin for
the IntelliJ Integrated Development Environment (IDE) that makes the energy
consumption of unit test methods explicit by providing visual feedback.
Our tool is intended to bring software energy consumption into the limelight as
an important non-functional quality aspect in software development.
Furthermore, with MANAi we provide a tool that eases the process of software
energy experiments for a broad range of users from academia to industry.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 10:12:33 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Schuler",
"Andreas",
""
],
[
"Kotsis",
"Gabriele",
""
]
] |
new_dataset
| 0.998569 |
2205.03224
|
Tianshi Xu
|
Tianshi Xu, Vassilis Kalantzis, Ruipeng Li, Yuanzhe Xi, Geoffrey
Dillon, Yousef Saad
|
parGeMSLR: A Parallel Multilevel Schur Complement Low-Rank
Preconditioning and Solution Package for General Sparse Matrices
|
14 pages, 11 figures
| null | null | null |
cs.MS cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper discusses parGeMSLR, a C++/MPI software library for the solution
of sparse systems of linear algebraic equations via preconditioned Krylov
subspace methods in distributed-memory computing environments. The
preconditioner implemented in parGeMSLR is based on algebraic domain
decomposition and partitions the symmetrized adjacency graph recursively into
several non-overlapping partitions via a p-way vertex separator, where p is an
integer multiple of the total number of MPI processes. From a numerical
perspective, parGeMSLR builds a Schur complement approximate inverse
preconditioner as the sum of the inverse of the interface coupling
matrix and a low-rank correction term. To reduce the cost associated with the
computation of the approximate inverse matrices, parGeMSLR exploits a
multilevel partitioning of the algebraic domain. The parGeMSLR library is
implemented on top of the Message Passing Interface and can solve both real and
complex linear systems. Furthermore, parGeMSLR can take advantage of hybrid
computing environments with in-node access to one or more Graphics Processing
Units. Finally, the parallel efficiency (weak and strong scaling) of parGeMSLR
is demonstrated on a few model problems arising from discretizations of 3D
Partial Differential Equations.
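For orientation, the "sum of the inverse of the interface coupling matrix and a low-rank correction term" refers to a Schur complement identity of roughly the following shape; this is a generic sketch of the multilevel Schur low-rank idea, not a transcription of parGeMSLR's exact formulas:

```latex
% After partitioning, reorder A with interior block B and interface block C:
A = \begin{pmatrix} B & E \\ F & C \end{pmatrix},
\qquad S = C - F B^{-1} E .
% With G = C^{-1} F B^{-1} E, the Schur complement inverse satisfies
S^{-1} = (I - G)^{-1} C^{-1}
       = C^{-1} + \big[ (I - G)^{-1} - I \big] C^{-1},
% and the bracketed term is replaced by a low-rank approximation built from
% the dominant eigenpairs of G.
```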
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 19:39:48 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Xu",
"Tianshi",
""
],
[
"Kalantzis",
"Vassilis",
""
],
[
"Li",
"Ruipeng",
""
],
[
"Xi",
"Yuanzhe",
""
],
[
"Dillon",
"Geoffrey",
""
],
[
"Saad",
"Yousef",
""
]
] |
new_dataset
| 0.983766 |
2205.03262
|
Abhiroop Sarkar
|
Abhiroop Sarkar, Bo Joel Svensson, Mary Sheeran
|
Synchron -- An API and Runtime for Embedded Systems
|
39 pages; published in ECOOP 2022
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Programming embedded systems applications involves writing concurrent,
event-driven and timing-aware programs. Traditionally, such programs are
written in low-level machine-oriented programming languages like C or Assembly.
We present an alternative by introducing Synchron, an API that offers
high-level abstractions to the programmer while supporting the low-level
infrastructure in an associated runtime system and one-time-effort drivers.
Embedded systems applications exhibit the general characteristics of being (i)
concurrent, (ii) I/O-bound and (iii) timing-aware. To address each of these
concerns, the Synchron API consists of three components: (1) a Concurrent ML
(CML) inspired message-passing concurrency model, (2) a message-passing--based
I/O interface that translates between low-level interrupt based and
memory-mapped peripherals, and (3) a timing operator, $syncT$, that marries
CML's $sync$ operator with timing windows inspired by the TinyTimber kernel.
We implement the Synchron API as the bytecode instructions of a virtual machine
called SynchronVM. SynchronVM hosts a Caml-inspired functional language as its
frontend language, and the backend of the VM supports the STM32F4 and NRF52
microcontrollers, with RAM in the order of hundreds of kilobytes. We illustrate
the expressiveness of the Synchron API by showing examples of expressing state
machines commonly found in embedded systems. The timing functionality is
demonstrated through a music programming exercise. Finally, we provide
benchmarks on the response time, jitter rates, memory, and power usage of the
SynchronVM.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 14:33:08 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Sarkar",
"Abhiroop",
""
],
[
"Svensson",
"Bo Joel",
""
],
[
"Sheeran",
"Mary",
""
]
] |
new_dataset
| 0.999501 |
2205.03325
|
Yu-Shun Hsiao
|
Tianyu Jia, En-Yu Yang, Yu-Shun Hsiao, Jonathan Cruz, David Brooks,
Gu-Yeon Wei, Vijay Janapa Reddi
|
OMU: A Probabilistic 3D Occupancy Mapping Accelerator for Real-time
OctoMap at the Edge
|
2022 Design Automation and Test in Europe Conference (DATE), March
14-23, 2022, Virtual
| null | null | null |
cs.AR cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous machines (e.g., vehicles, mobile robots, drones) require
sophisticated 3D mapping to perceive the dynamic environment. However,
maintaining a real-time 3D map is expensive both in terms of compute and memory
requirements, especially for resource-constrained edge machines. Probabilistic
OctoMap is a reliable and memory-efficient 3D dense map model to represent the
full environment, with dynamic voxel node pruning and expansion capacity. This
paper presents the first efficient accelerator solution, i.e., OMU, to enable
real-time probabilistic 3D mapping at the edge. To improve the performance, the
input map voxels are updated via parallel PE units for data parallelism. Within
each PE, the voxels are stored using a specially developed data structure in
parallel memory banks. In addition, a pruning address manager is designed
within each PE unit to reuse the pruned memory addresses. The proposed 3D
mapping accelerator is implemented and evaluated using a commercial 12 nm
technology. Compared to the ARM Cortex-A57 CPU in the Nvidia Jetson TX2
platform, the proposed accelerator achieves up to 62$\times$ performance and
708$\times$ energy efficiency improvement. Furthermore, the accelerator
provides 63 FPS throughput, more than 2$\times$ higher than a real-time
requirement, enabling real-time perception for 3D mapping.
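The per-voxel update in probabilistic OctoMap is a clamped additive step in log-odds space; the sketch below restates that standard rule (from the OctoMap literature, with typical parameter values assumed) to make the pruning behavior the accelerator exploits concrete:

```python
import math

def logodds(p):
    """Convert an occupancy probability to log-odds."""
    return math.log(p / (1.0 - p))

# Typical sensor-model and clamping parameters (assumed values).
L_HIT, L_MISS = logodds(0.7), logodds(0.4)
L_MIN, L_MAX = logodds(0.12), logodds(0.97)

def update_voxel(L_prev, hit):
    """Clamped additive log-odds update, as in probabilistic OctoMap."""
    L = L_prev + (L_HIT if hit else L_MISS)
    return max(L_MIN, min(L_MAX, L))

# Once all children of an octree node saturate at the same clamping bound,
# they carry no extra information and the node can be pruned -- the case the
# accelerator's pruning address manager recycles memory for.
```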
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 16:03:13 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Jia",
"Tianyu",
""
],
[
"Yang",
"En-Yu",
""
],
[
"Hsiao",
"Yu-Shun",
""
],
[
"Cruz",
"Jonathan",
""
],
[
"Brooks",
"David",
""
],
[
"Wei",
"Gu-Yeon",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
] |
new_dataset
| 0.966386 |
2205.03335
|
Omid Esrafilian
|
David Gesbert, Omid Esrafilian, Junting Chen, Rajeev Gangula, Urbashi
Mitra
|
UAV-aided RF Mapping for Sensing and Connectivity in Wireless Networks
|
Accepted for publication in IEEE Wireless Communications Magazine
| null | null | null |
cs.IT cs.LG cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of unmanned aerial vehicles (UAV) as flying radio access network
(RAN) nodes offers a promising complement to traditional fixed terrestrial
deployments. More recently, and still in the context of wireless networks,
drones have also been envisioned for use as radio frequency (RF) sensing and
localization devices. In both cases, the advantage of using UAVs lies in their
ability to navigate themselves freely in 3D and in a timely manner to locations
of space where the obtained network throughput or sensing performance is
optimal. In practice, the selection of a proper location or trajectory for the
UAV very much depends on local terrain features, including the position of
surrounding radio obstacles. Hence, the robot must be able to map the features
of its radio environment as it performs its data communication or sensing
services. The challenges related to this task, referred to here as radio mapping,
are discussed in this paper. Its promises related to efficient trajectory
design for autonomous radio-aware UAVs are highlighted, along with algorithm
solutions. The advantages induced by radio-mapping in terms of connectivity,
sensing, and localization performance are illustrated.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 16:16:08 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Gesbert",
"David",
""
],
[
"Esrafilian",
"Omid",
""
],
[
"Chen",
"Junting",
""
],
[
"Gangula",
"Rajeev",
""
],
[
"Mitra",
"Urbashi",
""
]
] |
new_dataset
| 0.995394 |
2205.03346
|
Ziteng Cui
|
Ziteng Cui, Guo-Jun Qi, Lin Gu, Shaodi You, Zenghui Zhang, Tatsuya
Harada
|
Multitask AET with Orthogonal Tangent Regularity for Dark Object
Detection
|
ICCV 2021. Low-light object detection, code link:
https://github.com/cuiziteng/MAET
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dark environments pose a challenge for computer vision algorithms owing to
insufficient photons and undesirable noise. To enhance object detection in a
dark environment, we propose a novel multitask auto encoding transformation
(MAET) model which is able to explore the intrinsic pattern behind illumination
translation. In a self-supervised manner, the MAET learns the intrinsic visual
structure by encoding and decoding the realistic illumination-degrading
transformation considering the physical noise model and image signal processing
(ISP).
Based on this representation, we achieve the object detection task by
decoding the bounding box coordinates and classes. To avoid the
over-entanglement of two tasks, our MAET disentangles the object and degrading
features by imposing an orthogonal tangent regularity. This forms a parametric
manifold along which multitask predictions can be geometrically formulated by
maximizing the orthogonality between the tangents along the outputs of
respective tasks. Our framework can be implemented based on the mainstream
object detection architecture and directly trained end-to-end using normal
target detection datasets, such as VOC and COCO. We have achieved the
state-of-the-art performance using synthetic and real-world datasets. Code is
available at https://github.com/cuiziteng/MAET.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 16:27:14 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Cui",
"Ziteng",
""
],
[
"Qi",
"Guo-Jun",
""
],
[
"Gu",
"Lin",
""
],
[
"You",
"Shaodi",
""
],
[
"Zhang",
"Zenghui",
""
],
[
"Harada",
"Tatsuya",
""
]
] |
new_dataset
| 0.977073 |
2205.03355
|
Jason Stock
|
Jason Stock and Chuck Anderson
|
Trainable Wavelet Neural Network for Non-Stationary Signals
|
AI for Earth and Space Science Workshop at the International
Conference on Learning Representations (ICLR), April, 2022
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This work introduces a wavelet neural network to learn a filter-bank
specialized to fit non-stationary signals and improve interpretability and
performance for digital signal processing. The network uses a wavelet transform
as the first layer of a neural network where the convolution is a parameterized
function of the complex Morlet wavelet. Experimental results, on both
simplified data and atmospheric gravity waves, show the network is quick to
converge, generalizes well on noisy data, and outperforms standard network
architectures.
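To make the idea concrete, the sketch below parameterizes a bank of complex Morlet kernels by a learnable center frequency and scale per filter and applies them as a 1-D convolution. This is an editorial PyTorch sketch of the concept; all sizes and the magnitude read-out are assumptions, not the authors' architecture:

```python
import torch
import torch.nn.functional as F

class MorletFilterBank(torch.nn.Module):
    """Wavelet layer whose kernels are complex Morlet wavelets with a
    learnable center frequency w0 and scale s per filter (sketch only)."""

    def __init__(self, n_filters=16, kernel_size=129):
        super().__init__()
        self.w0 = torch.nn.Parameter(torch.linspace(1.0, 8.0, n_filters))
        self.log_s = torch.nn.Parameter(torch.zeros(n_filters))
        self.register_buffer("t", torch.linspace(-1.0, 1.0, kernel_size))

    def forward(self, x):                       # x: (batch, 1, time)
        s = self.log_s.exp()[:, None]           # keep scales positive
        t = self.t[None, :] / s                 # (n_filters, kernel_size)
        env = torch.exp(-0.5 * t ** 2)          # Gaussian envelope
        real = env * torch.cos(self.w0[:, None] * t)
        imag = env * torch.sin(self.w0[:, None] * t)
        k = torch.stack([real, imag], 1).reshape(-1, 1, self.t.numel())
        y = F.conv1d(x, k, padding=self.t.numel() // 2)
        re, im = y[:, 0::2], y[:, 1::2]
        return torch.sqrt(re ** 2 + im ** 2)    # scalogram-like magnitude
```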
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 16:41:27 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Stock",
"Jason",
""
],
[
"Anderson",
"Chuck",
""
]
] |
new_dataset
| 0.985584 |
2205.03375
|
Debarun Bhattacharjya
|
Debarun Bhattacharjya, Saurabh Sihag, Oktie Hassanzadeh, Liza Bialik
|
Summary Markov Models for Event Sequences
|
In Proceedings of International Joint Conference on Artificial
Intelligence (IJCAI) 2022
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Datasets involving sequences of different types of events without meaningful
time stamps are prevalent in many applications, for instance when extracted
from textual corpora. We propose a family of models for such event sequences --
summary Markov models -- where the probability of observing an event type
depends only on a summary of historical occurrences of its influencing set of
event types. This Markov model family is motivated by Granger causal models for
time series, with the important distinction that only one event can occur in a
position in an event sequence. We show that a unique minimal influencing set
exists for any set of event types of interest and choice of summary function,
formulate two novel models from the general family that represent specific
sequence dynamics, and propose a greedy search algorithm for learning them from
event sequence data. We conduct an experimental investigation comparing the
proposed models with relevant baselines, and illustrate their knowledge
acquisition and discovery capabilities through case studies involving sequences
from text.
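As a concrete toy version of the abstract's central object: fix a target event type, a candidate influencing set, and a summary function (here, "which influencing types occurred in the last k positions"), and estimate the conditional probabilities by counting. All names and the particular summary function are illustrative assumptions:

```python
from collections import defaultdict

def summary(history, influencers, k):
    """Summary = set of influencing event types seen in the last k positions."""
    return frozenset(e for e in history[-k:] if e in influencers)

def fit(sequences, target, influencers, k=3):
    """Estimate P(next event == target | summary) with Laplace smoothing."""
    counts = defaultdict(lambda: [0, 0])            # summary -> [hits, total]
    for seq in sequences:
        for i in range(1, len(seq)):
            s = summary(seq[:i], influencers, k)
            counts[s][1] += 1
            counts[s][0] += int(seq[i] == target)
    return {s: (hits + 1) / (total + 2) for s, (hits, total) in counts.items()}

probs = fit([["a", "b", "c", "a"], ["b", "a", "a"]],
            target="a", influencers={"b", "c"})
```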
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 17:16:24 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Bhattacharjya",
"Debarun",
""
],
[
"Sihag",
"Saurabh",
""
],
[
"Hassanzadeh",
"Oktie",
""
],
[
"Bialik",
"Liza",
""
]
] |
new_dataset
| 0.986409 |
2205.03391
|
Alexander Kathan
|
Alexander Kathan, Andreas Triantafyllopoulos, Xiangheng He, Manuel
Milling, Tianhao Yan, Srividya Tirunellai Rajamani, Ludwig K\"uster, Mathias
Harrer, Elena Heber, Inga Grossmann, David D. Ebert, Bj\"orn W. Schuller
|
Journaling Data for Daily PHQ-2 Depression Prediction and Forecasting
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital health applications are becoming increasingly important for assessing
and monitoring the wellbeing of people suffering from mental health conditions
like depression. A common target of said applications is to predict the results
of self-assessed Patient-Health-Questionnaires (PHQ), indicating current
symptom severity of depressive individuals. In this work, we explore the
potential of using actively-collected data to predict and forecast daily PHQ-2
scores on a newly-collected longitudinal dataset. We obtain a best MAE of 1.417
for daily prediction of PHQ-2 scores, which in this dataset range from 0 to
12, using leave-one-subject-out cross-validation, as well
as a best MAE of 1.914 for forecasting PHQ-2 scores using data from up to the
last 7 days. This illustrates the additive value that can be obtained by
incorporating actively-collected data in a depression monitoring application.
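For readers unfamiliar with the protocol, leave-one-subject-out evaluation holds out all days of one person at a time. A minimal sketch with scikit-learn follows; the features, the Ridge model, and the synthetic data are placeholders rather than the paper's pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                   # journaling-derived features
y = rng.integers(0, 13, size=300).astype(float)  # PHQ-2 scores in [0, 12]
subjects = rng.integers(0, 30, size=300)         # subject id per sample

maes = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    model = Ridge().fit(X[train_idx], y[train_idx])
    pred = np.clip(model.predict(X[test_idx]), 0, 12)  # respect score range
    maes.append(mean_absolute_error(y[test_idx], pred))
print(f"LOSO MAE: {np.mean(maes):.3f}")
```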
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 17:47:05 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Kathan",
"Alexander",
""
],
[
"Triantafyllopoulos",
"Andreas",
""
],
[
"He",
"Xiangheng",
""
],
[
"Milling",
"Manuel",
""
],
[
"Yan",
"Tianhao",
""
],
[
"Rajamani",
"Srividya Tirunellai",
""
],
[
"Küster",
"Ludwig",
""
],
[
"Harrer",
"Mathias",
""
],
[
"Heber",
"Elena",
""
],
[
"Grossmann",
"Inga",
""
],
[
"Ebert",
"David D.",
""
],
[
"Schuller",
"Björn W.",
""
]
] |
new_dataset
| 0.954022 |
2011.08659
|
Marcel Schreiber
|
Marcel Schreiber, Vasileios Belagiannis, Claudius Gl\"aser and Klaus
Dietmayer
|
Dynamic Occupancy Grid Mapping with Recurrent Neural Networks
| null |
2021 IEEE International Conference on Robotics and Automation
(ICRA), May 30 - June 5, 2021, Xi'an, China, pp. 6717-6724
|
10.1109/ICRA48506.2021.9561375
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling and understanding the environment is an essential task for
autonomous driving. In addition to the detection of objects, in complex traffic
scenarios the motion of other road participants is of special interest.
Therefore, we propose to use a recurrent neural network to predict a dynamic
occupancy grid map, which divides the vehicle surrounding in cells, each
containing the occupancy probability and a velocity estimate. During training,
our network is fed with sequences of measurement grid maps, which encode the
lidar measurements of a single time step. Due to the combination of
convolutional and recurrent layers, our approach is capable of using spatial
and temporal information for the robust detection of the static and dynamic
environment. In order to apply our approach with measurements from a moving
ego-vehicle, we propose a method for ego-motion compensation that is applicable
in neural network architectures with recurrent layers working on different
resolutions. In our evaluations, we compare our approach with a
state-of-the-art particle-based algorithm on a large publicly available dataset
to demonstrate the improved accuracy of velocity estimates and the more robust
separation of the environment into static and dynamic areas. Additionally, we show
that our proposed method for ego-motion compensation leads to comparable
results in scenarios with stationary and with moving ego-vehicle.
|
[
{
"version": "v1",
"created": "Tue, 17 Nov 2020 14:41:48 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Mar 2021 08:22:21 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2022 08:46:41 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Schreiber",
"Marcel",
""
],
[
"Belagiannis",
"Vasileios",
""
],
[
"Gläser",
"Claudius",
""
],
[
"Dietmayer",
"Klaus",
""
]
] |
new_dataset
| 0.996257 |
2101.04269
|
Yan Han
|
Yan Han, Chongyan Chen, Ahmed H Tewfik, Ying Ding, Yifan Peng
|
Pneumonia Detection on Chest X-ray using Radiomic Features and
Contrastive Learning
|
Accepted for ISBI 2021
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Chest X-ray is one of the most common medical diagnostic procedures due to its
noninvasiveness. The number of chest X-ray images has skyrocketed, but reading
chest X-rays is still performed manually by radiologists, which creates huge
burnout and delays. Traditionally, radiomics, as a subfield of radiology
that can extract a large number of quantitative features from medical images,
demonstrates its potential to facilitate medical imaging diagnosis before the
deep learning era. With the rise of deep learning, the explainability of deep
neural networks on chest X-ray diagnosis remains opaque. In this study, we
proposed a novel framework that leverages radiomics features and contrastive
learning to detect pneumonia in chest X-ray. Experiments on the RSNA Pneumonia
Detection Challenge dataset show that our model achieves superior results to
several state-of-the-art models (> 10% in F1-score) and increases the model's
interpretability.
|
[
{
"version": "v1",
"created": "Tue, 12 Jan 2021 02:52:24 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 19:42:06 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Han",
"Yan",
""
],
[
"Chen",
"Chongyan",
""
],
[
"Tewfik",
"Ahmed H",
""
],
[
"Ding",
"Ying",
""
],
[
"Peng",
"Yifan",
""
]
] |
new_dataset
| 0.992368 |
2108.00573
|
Harsh Trivedi
|
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal
|
MuSiQue: Multihop Questions via Single-hop Question Composition
|
Accepted for publication in Transactions of the Association for
Computational Linguistics (TACL), 2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multihop reasoning remains an elusive goal as existing multihop benchmarks
are known to be largely solvable via shortcuts. Can we create a question
answering (QA) dataset that, by construction, \emph{requires} proper multihop
reasoning? To this end, we introduce a bottom-up approach that systematically
selects composable pairs of single-hop questions that are connected, i.e.,
where one reasoning step critically relies on information from another. This
bottom-up methodology lets us explore a vast space of questions and add
stringent filters as well as other mechanisms targeting connected reasoning. It
provides fine-grained control over the construction process and the properties
of the resulting $k$-hop questions. We use this methodology to create
MuSiQue-Ans, a new multihop QA dataset with 25K 2-4 hop questions. Relative to
existing datasets, MuSiQue-Ans is more difficult overall (3x increase in
human-machine gap), and harder to cheat via disconnected reasoning (e.g., a
single-hop model has a 30 point drop in F1). We further add unanswerable
contrast questions to produce a more stringent dataset, MuSiQue-Full. We hope
our datasets will help the NLP community develop models that perform genuine
multihop reasoning.
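The composition criterion can be illustrated with a toy check: two single-hop questions are composable when the first answer is mentioned in the second question, so the second hop critically depends on the first. The data layout and the naive string match below are editorial assumptions, not the paper's exact filters:

```python
def composable(q1, q2):
    """q1's answer must appear in q2's question for a valid 2-hop link."""
    return q1["answer"].lower() in q2["question"].lower()

def compose(q1, q2):
    """Blend q1 into q2 where its answer is mentioned (toy rendering)."""
    blended = q2["question"].replace(q1["answer"], f">> {q1['question']} <<")
    return {"question": blended, "answer": q2["answer"],
            "decomposition": [q1, q2]}

q1 = {"question": "Who founded SpaceX?", "answer": "Elon Musk"}
q2 = {"question": "Where was Elon Musk born?", "answer": "Pretoria"}
if composable(q1, q2):
    two_hop = compose(q1, q2)   # a 2-hop question requiring both hops
```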
|
[
{
"version": "v1",
"created": "Mon, 2 Aug 2021 00:33:27 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Oct 2021 02:48:25 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2022 05:50:50 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Trivedi",
"Harsh",
""
],
[
"Balasubramanian",
"Niranjan",
""
],
[
"Khot",
"Tushar",
""
],
[
"Sabharwal",
"Ashish",
""
]
] |
new_dataset
| 0.998935 |
2110.11867
|
Oshada Jayasinghe
|
Oshada Jayasinghe, Sahan Hemachandra, Damith Anhettigama, Shenali
Kariyawasam, Ranga Rodrigo, Peshala Jayasekara
|
CeyMo: See More on Roads -- A Novel Benchmark Dataset for Road Marking
Detection
|
Accepted to 2022 IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV 2022)
| null |
10.1109/WACV51458.2022.00344
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a novel road marking benchmark dataset for road
marking detection, addressing the limitations in the existing publicly
available datasets such as lack of challenging scenarios, prominence given to
lane markings, unavailability of an evaluation script, lack of annotation
formats and lower resolutions. Our dataset consists of 2887 total images with
4706 road marking instances belonging to 11 classes. The images have a high
resolution of 1920 x 1080 and capture a wide range of traffic, lighting and
weather conditions. We provide road marking annotations in polygons, bounding
boxes and pixel-level segmentation masks to facilitate a diverse range of road
marking detection algorithms. The evaluation metrics and the evaluation script
we provide, will further promote direct comparison of novel approaches for road
marking detection with existing methods. Furthermore, we evaluate the
effectiveness of using both instance segmentation and object detection based
approaches for the road marking detection task. Speed and accuracy scores for
two instance segmentation models and two object detector models are provided as
a performance baseline for our benchmark dataset. The dataset and the
evaluation script are publicly available at https://github.com/oshadajay/CeyMo.
|
[
{
"version": "v1",
"created": "Fri, 22 Oct 2021 15:56:17 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2022 17:12:09 GMT"
},
{
"version": "v3",
"created": "Tue, 3 May 2022 05:27:37 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Jayasinghe",
"Oshada",
""
],
[
"Hemachandra",
"Sahan",
""
],
[
"Anhettigama",
"Damith",
""
],
[
"Kariyawasam",
"Shenali",
""
],
[
"Rodrigo",
"Ranga",
""
],
[
"Jayasekara",
"Peshala",
""
]
] |
new_dataset
| 0.999869 |
2201.03904
|
Conor Heins
|
Conor Heins, Beren Millidge, Daphne Demekas, Brennan Klein, Karl
Friston, Iain Couzin, Alexander Tschantz
|
pymdp: A Python library for active inference in discrete state spaces
| null |
Journal of Open Source Software, 7(73), 4098 (2022)
|
10.21105/joss.04098
| null |
cs.AI cs.MS q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active inference is an account of cognition and behavior in complex systems
which brings together action, perception, and learning under the theoretical
mantle of Bayesian inference. Active inference has seen growing applications in
academic research, especially in fields that seek to model human or animal
behavior. While in recent years some of the code arising from the active
inference literature has been written in open-source languages like Python and
Julia, to date the most popular software for simulating active inference
agents is the DEM toolbox of SPM, a MATLAB library originally developed for the
statistical analysis and modelling of neuroimaging data. Increasing interest in
active inference, manifested both in terms of sheer number as well as
diversifying applications across scientific disciplines, has thus created a
need for generic, widely-available, and user-friendly code for simulating
active inference in open-source scientific computing languages like Python. The
Python package we present here, pymdp (see
https://github.com/infer-actively/pymdp), represents a significant step in this
direction: namely, we provide the first open-source package for simulating
active inference with partially-observable Markov Decision Processes or POMDPs.
We review the package's structure and explain its advantages like modular
design and customizability, while providing in-text code blocks along the way
to demonstrate how it can be used to build and run active inference processes
with ease. We developed pymdp to increase the accessibility and exposure of the
active inference framework to researchers, engineers, and developers with
diverse disciplinary backgrounds. In the spirit of open-source software, we
also hope that it spurs new innovation, development, and collaboration in the
growing active inference community.
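A typical perception-action loop with the package looks roughly as follows. This sketch is assembled from the package's documented interface; the toy model dimensions, the random generative model, and the stubbed environment are assumptions:

```python
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

num_obs, num_states, num_actions = [3], [3], [2]     # toy POMDP dimensions

A = utils.random_A_matrix(num_obs, num_states)       # observation model P(o|s)
B = utils.random_B_matrix(num_states, num_actions)   # transitions P(s'|s,u)
agent = Agent(A=A, B=B)

obs = [0]
for t in range(5):
    qs = agent.infer_states(obs)     # posterior beliefs over hidden states
    agent.infer_policies()           # evaluate expected free energy of policies
    action = agent.sample_action()   # pick a control state
    obs = [np.random.randint(num_obs[0])]   # stubbed environment response
```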
|
[
{
"version": "v1",
"created": "Tue, 11 Jan 2022 12:18:44 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 22:13:22 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Heins",
"Conor",
""
],
[
"Millidge",
"Beren",
""
],
[
"Demekas",
"Daphne",
""
],
[
"Klein",
"Brennan",
""
],
[
"Friston",
"Karl",
""
],
[
"Couzin",
"Iain",
""
],
[
"Tschantz",
"Alexander",
""
]
] |
new_dataset
| 0.99933 |
2201.05848
|
Florian Meier
|
Florian Meier
|
TWikiL -- The Twitter Wikipedia Link Dataset
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent research has shown how strongly Wikipedia and other web services or
platforms are connected. For example, search engines rely heavily on surfacing
Wikipedia links to satisfy their users' information needs and volunteer-created
Wikipedia content frequently gets re-used on other social media platforms like
Reddit. However, publicly accessible datasets that enable researchers to study
the interrelationship between Wikipedia and other platforms are sparse. In
addition to that, most studies only focus on certain points in time and don't
consider the historical perspective. To begin solving these problems, we
developed TWikiL, the Twitter Wikipedia Link Dataset, which contains all
Wikipedia links posted on Twitter in the period 2006 to January 2021. We
extract Wikipedia links from Tweets and enrich the referenced articles with
their respective Wikidata identifiers and Wikipedia topic categories, which
will make this dataset immediately useful for a large range of scholarly use
cases. In this paper, we describe the data collection process, perform an
initial exploratory analysis and present a comprehensive overview of how this
dataset can be useful for the research community.
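Extracting and normalizing the links is conceptually simple; a sketch of the kind of normalization involved (regex match, URL decoding, title recovery) is shown below, with the pattern and edge-case handling as illustrative assumptions rather than the dataset's actual pipeline:

```python
import re
from urllib.parse import unquote, urlparse

WIKI_URL = re.compile(r"https?://([a-z\-]+)\.(?:m\.)?wikipedia\.org/wiki/\S+")

def extract_wikipedia_links(tweet_text):
    """Return (language, article title) pairs for Wikipedia links in a tweet."""
    links = []
    for match in WIKI_URL.finditer(tweet_text):
        lang = match.group(1)                          # e.g. "en", "de"
        path = urlparse(match.group(0)).path           # "/wiki/Article_title"
        title = unquote(path.removeprefix("/wiki/")).replace("_", " ")
        links.append((lang, title))
    return links

extract_wikipedia_links("see https://en.wikipedia.org/wiki/Machine_learning !")
# -> [('en', 'Machine learning')]
```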
|
[
{
"version": "v1",
"created": "Sat, 15 Jan 2022 13:32:05 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2022 14:08:53 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Meier",
"Florian",
""
]
] |
new_dataset
| 0.99896 |
2202.05863
|
Daniel Sobotka
|
Daniel Sobotka, Michael Ebner, Ernst Schwartz, Karl-Heinz Nenning,
Athena Taymourtash, Tom Vercauteren, Sebastien Ourselin, Gregor Kasprian,
Daniela Prayer, Georg Langs, Roxane Licandro
|
Motion Correction and Volumetric Reconstruction for Fetal Functional
Magnetic Resonance Imaging Data
|
Preprint submitted to NeuroImage
| null |
10.1016/j.neuroimage.2022.119213
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion correction is an essential preprocessing step in functional Magnetic
Resonance Imaging (fMRI) of the fetal brain with the aim to remove artifacts
caused by fetal movement and maternal breathing and consequently to suppress
erroneous signal correlations. Current motion correction approaches for fetal
fMRI choose a single 3D volume from a specific acquisition timepoint with least
motion artefacts as reference volume, and perform interpolation for the
reconstruction of the motion-corrected time series. The results can suffer if
no low-motion frame is available and if reconstruction does not exploit any
assumptions about the continuity of the fMRI signal. Here, we propose a novel
framework, which estimates a high-resolution reference volume by using
outlier-robust motion correction, and by utilizing Huber L2 regularization for
intra-stack volumetric reconstruction of the motion-corrected fetal brain fMRI.
We performed an extensive parameter study to investigate the effectiveness of
motion estimation and present in this work benchmark metrics to quantify the
effect of motion correction and regularised volumetric reconstruction
approaches on functional connectivity computations. We demonstrate the proposed
framework's ability to improve functional connectivity estimates,
reproducibility and signal interpretability, which is clinically highly
desirable for the establishment of prognostic noninvasive imaging biomarkers.
The motion correction and volumetric reconstruction framework is made available
as an open-source package of NiftyMIC.
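For context, Huber regularization penalizes image gradients quadratically when they are small and linearly when they are large, preserving edges while smoothing noise. A generic form of such an objective is sketched below; the paper's exact operators and weighting may differ:

```latex
\rho_\gamma(r) =
  \begin{cases}
    \tfrac12 r^2,                   & |r| \le \gamma,\\[2pt]
    \gamma |r| - \tfrac12 \gamma^2, & |r| > \gamma,
  \end{cases}
\qquad
\min_x \ \tfrac12 \sum_k \lVert A_k x - y_k \rVert_2^2
       \; + \; \alpha \sum_i \rho_\gamma\!\big( \lVert (\nabla x)_i \rVert_2 \big)
% with slice-acquisition operators A_k, observed slices y_k, and weight alpha.
```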
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 19:11:16 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Sobotka",
"Daniel",
""
],
[
"Ebner",
"Michael",
""
],
[
"Schwartz",
"Ernst",
""
],
[
"Nenning",
"Karl-Heinz",
""
],
[
"Taymourtash",
"Athena",
""
],
[
"Vercauteren",
"Tom",
""
],
[
"Ourselin",
"Sebastien",
""
],
[
"Kasprian",
"Gregor",
""
],
[
"Prayer",
"Daniela",
""
],
[
"Langs",
"Georg",
""
],
[
"Licandro",
"Roxane",
""
]
] |
new_dataset
| 0.974175 |
2204.01349
|
Xuri Ge
|
Xuri Ge, Joemon M. Jose, Songpei Xu, Xiao Liu, Hu Han
|
MGRR-Net: Multi-level Graph Relational Reasoning Network for Facial
Action Units Detection
|
10 pages, 4 figures, 8 tables; submitted to IEEE TCyb for possible
publication. Copyright may be transferred without notice, after which this
version may no longer be accessible
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Facial Action Coding System (FACS) encodes the action units (AUs) in
facial images, which has attracted extensive research attention due to its wide
use in facial expression analysis. Many methods that perform well on automatic
facial action unit (AU) detection primarily focus on modeling various types of
AU relations between corresponding local muscle areas, or simply mining global
attention-aware facial features; however, they neglect the dynamic interactions
among local-global features. We argue that encoding AU features from only one
perspective may not capture the rich contextual information between regional
and global face features, as well as the detailed variability across AUs,
because of the diversity in expression and individual characteristics. In this
paper, we propose a novel Multi-level Graph Relational Reasoning Network
(termed MGRR-Net) for facial AU detection. Each layer of MGRR-Net performs a
multi-level (i.e., region-level, pixel-wise and channel-wise level) feature
learning. While the region-level feature learning from local face patches
features via graph neural network can encode the correlation across different
AUs, the pixel-wise and channel-wise feature learning via graph attention
network can enhance the discrimination ability of AU features from global face
features. The fused features from the three levels lead to improved AU
discriminative ability. Extensive experiments on DISFA and BP4D AU datasets
show that the proposed approach achieves superior performance than the
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 09:47:22 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Apr 2022 10:14:37 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2022 13:55:06 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Ge",
"Xuri",
""
],
[
"Jose",
"Joemon M.",
""
],
[
"Xu",
"Songpei",
""
],
[
"Liu",
"Xiao",
""
],
[
"Han",
"Hu",
""
]
] |
new_dataset
| 0.993218 |
2204.04862
|
Krishnapriya Vishnubhotla
|
Krishnapriya Vishnubhotla and Saif M. Mohammad
|
Tweet Emotion Dynamics: Emotion Word Usage in Tweets from US and Canada
|
Accepted for publication at LREC 2022 (camera-ready)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the last decade, Twitter has emerged as one of the most influential
forums for social, political, and health discourse. In this paper, we introduce
a massive dataset of more than 45 million geo-located tweets posted between
2015 and 2021 from US and Canada (TUSC), especially curated for natural
language analysis. We also introduce Tweet Emotion Dynamics (TED) -- metrics to
capture patterns of emotions associated with tweets over time. We use TED and
TUSC to explore the use of emotion-associated words across US and Canada;
across 2019 (pre-pandemic), 2020 (the year the pandemic hit), and 2021 (the
second year of the pandemic); and across individual tweeters. We show that
Canadian tweets tend to have higher valence, lower arousal, and higher
dominance than the US tweets. Further, we show that the COVID-19 pandemic had a
marked impact on the emotional signature of tweets posted in 2020, when
compared to the adjoining years. Finally, we determine metrics of TED for
170,000 tweeters to benchmark characteristics of TED metrics at an aggregate
level. TUSC and the metrics for TED will enable a wide variety of research on
studying how we use language to express ourselves, persuade, communicate, and
influence, with particularly promising applications in public health, affective
science, social science, and psychology.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 04:39:39 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 14:06:27 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2022 00:59:04 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Vishnubhotla",
"Krishnapriya",
""
],
[
"Mohammad",
"Saif M.",
""
]
] |
new_dataset
| 0.999695 |
2204.04952
|
Mieradilijiang Maimaiti
|
Jianhai Zhang, Mieradilijiang Maimaiti, Xing Gao, Yuanhang Zheng, and
Ji Zhang
|
MGIMN: Multi-Grained Interactive Matching Network for Few-shot Text
Classification
|
10 pages, 2 figures, 6 tabels
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Text classification struggles to generalize to unseen classes with very few
labeled text instances per class. In such a few-shot learning (FSL) setting,
metric-based meta-learning approaches have shown promising results. Previous
studies mainly aim to derive a prototype representation for each class.
However, they neglect that it is challenging-yet-unnecessary to construct a
compact representation which expresses the entire meaning for each class. They
also ignore the importance of capturing the inter-dependency between the query
and the support set for few-shot text classification. To deal with these issues, we
propose a meta-learning based method MGIMN which performs instance-wise
comparison followed by aggregation to generate class-wise matching vectors
instead of prototype learning. The key of instance-wise comparison is the
interactive matching within the class-specific context and episode-specific
context. Extensive experiments demonstrate that the proposed method
significantly outperforms the existing state-of-the-art approaches, under both
the standard FSL and generalized FSL settings.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 08:58:55 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Apr 2022 06:01:41 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2022 12:16:58 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Zhang",
"Jianhai",
""
],
[
"Maimaiti",
"Mieradilijiang",
""
],
[
"Gao",
"Xing",
""
],
[
"Zheng",
"Yuanhang",
""
],
[
"Zhang",
"Ji",
""
]
] |
new_dataset
| 0.984684 |
2204.05084
|
Wei Liu
|
Wei Liu, Fangyue Liu, Fei Ding, Qian He, Zili Yi
|
XMP-Font: Self-Supervised Cross-Modality Pre-training for Few-Shot Font
Generation
|
Accepted by CVPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating a new font library is a very labor-intensive and time-consuming
job for glyph-rich scripts. Few-shot font generation is thus desirable, as it
needs only a few glyph references and no fine-tuning at test time. Existing
methods follow the style-content disentanglement paradigm and expect novel
fonts to be produced by combining the style codes of the reference glyphs and
the content representations of the source. However, these few-shot font
generation methods either fail to capture content-independent style
representations, or employ localized component-wise style representations,
which is insufficient to model many Chinese font styles that involve
hyper-component features such as inter-component spacing and
"connected-stroke". To resolve these drawbacks and make the style
representations more reliable, we propose a self-supervised cross-modality
pre-training strategy and a cross-modality transformer-based encoder that is
conditioned jointly on the glyph image and the corresponding stroke labels. The
cross-modality encoder is pre-trained in a self-supervised manner to allow
effective capture of cross- and intra-modality correlations, which facilitates
the content-style disentanglement and modeling style representations of all
scales (stroke-level, component-level and character-level). The pre-trained
encoder is then applied to the downstream font generation task without
fine-tuning. Experimental comparisons of our method with state-of-the-art
methods demonstrate our method successfully transfers styles of all scales. In
addition, it only requires one reference glyph and achieves the lowest rate of
bad cases in the few-shot font generation task, 28% lower than the second best.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 13:34:40 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2022 06:53:47 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Liu",
"Wei",
""
],
[
"Liu",
"Fangyue",
""
],
[
"Ding",
"Fei",
""
],
[
"He",
"Qian",
""
],
[
"Yi",
"Zili",
""
]
] |
new_dataset
| 0.97789 |
2204.05746
|
Yuexin Xiang
|
Yuexin Xiang, Yuchen Lei, Ding Bao, Wei Ren, Tiantian Li, Qingqing
Yang, Wenmao Liu, Tianqing Zhu, and Kim-Kwang Raymond Choo
|
BABD: A Bitcoin Address Behavior Dataset for Pattern Analysis
|
14 pages, 4 figures
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cryptocurrencies are no longer just the preferred option for cybercriminal
activities on darknets, due to their increasing adoption in mainstream
applications. This is partly due to the transparency associated with the
underpinning ledgers, where any individual can access a transaction record on
the public ledger. In this paper, we build a dataset
comprising Bitcoin transactions between 12 July 2019 and 26 May 2021. This
dataset (hereafter referred to as BABD-13) contains 13 types of Bitcoin
addresses, 5 categories of indicators with 148 features, and 544,462 labeled
data, which is the largest labeled Bitcoin address behavior dataset publicly
available to our knowledge. We then use our proposed dataset on common machine
learning models, namely: k-nearest neighbors algorithm, decision tree, random
forest, multilayer perceptron, and XGBoost. The results show that the accuracy
rates of these machine learning models for the multi-classification task on our
proposed dataset are between 93.24% and 97.13%. We also analyze the proposed
features and their relationships from the experiments, and propose a k-hop
subgraph generation algorithm to extract a k-hop subgraph from the entire
Bitcoin transaction graph constructed by the directed heterogeneous multigraph
starting from a specific Bitcoin address node (e.g., a known transaction
associated with a criminal investigation). Besides, we initially analyze the
behavior patterns of different types of Bitcoin addresses according to the
extracted features.
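A frontier-expansion version of k-hop subgraph extraction is easy to state; the sketch below treats the directed multigraph's edges as traversable in both directions, which is a simplification of whatever traversal the paper's algorithm actually uses:

```python
import networkx as nx

def k_hop_subgraph(G, seed_address, k):
    """Subgraph induced by nodes within k hops of seed_address (both edge
    directions followed); G is a directed (multi)graph of transactions."""
    nodes, frontier = {seed_address}, {seed_address}
    for _ in range(k):
        nxt = set()
        for u in frontier:
            nxt.update(G.successors(u))
            nxt.update(G.predecessors(u))
        frontier = nxt - nodes
        nodes |= frontier
    return G.subgraph(nodes).copy()

G = nx.MultiDiGraph()
G.add_edges_from([("addr_a", "addr_b"), ("addr_b", "addr_c"),
                  ("addr_c", "addr_d")])
sub = k_hop_subgraph(G, "addr_a", k=2)   # contains addr_a, addr_b, addr_c
```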
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 06:46:51 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 09:09:13 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2022 08:50:52 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Xiang",
"Yuexin",
""
],
[
"Lei",
"Yuchen",
""
],
[
"Bao",
"Ding",
""
],
[
"Ren",
"Wei",
""
],
[
"Li",
"Tiantian",
""
],
[
"Yang",
"Qingqing",
""
],
[
"Liu",
"Wenmao",
""
],
[
"Zhu",
"Tianqing",
""
],
[
"Choo",
"Kim-Kwang Raymond",
""
]
] |
new_dataset
| 0.999822 |
2204.13021
|
Inigo Casanueva
|
I\~nigo Casanueva, Ivan Vuli\'c, Georgios P. Spithourakis, Pawe{\l}
Budzianowski
|
NLU++: A Multi-Label, Slot-Rich, Generalisable Dataset for Natural
Language Understanding in Task-Oriented Dialogue
|
16 pages, 1 figure, 10 tables. Accepted in NAACL 2022 (Findings)
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present NLU++, a novel dataset for natural language understanding (NLU) in
task-oriented dialogue (ToD) systems, with the aim to provide a much more
challenging evaluation environment for dialogue NLU models, up to date with the
current application and industry requirements. NLU++ is divided into two
domains (BANKING and HOTELS) and brings several crucial improvements over
current commonly used NLU datasets. 1) NLU++ provides fine-grained domain
ontologies with a large set of challenging multi-intent sentences, introducing
and validating the idea of intent modules that can be combined into complex
intents that convey complex user goals, combined with finer-grained and thus
more challenging slot sets. 2) The ontology is divided into domain-specific and
generic (i.e., domain-universal) intent modules that overlap across domains,
promoting cross-domain reusability of annotated examples. 3) The dataset design
has been inspired by the problems observed in industrial ToD systems, and 4) it
has been collected, filtered and carefully annotated by dialogue NLU experts,
yielding high-quality annotated data. Finally, we benchmark a series of current
state-of-the-art NLU models on NLU++; the results demonstrate the challenging
nature of the dataset, especially in low-data regimes, the validity of `intent
modularisation', and call for further research on ToD NLU.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 16:00:23 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2022 08:33:13 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2022 13:38:43 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Casanueva",
"Iñigo",
""
],
[
"Vulić",
"Ivan",
""
],
[
"Spithourakis",
"Georgios P.",
""
],
[
"Budzianowski",
"Paweł",
""
]
] |
new_dataset
| 0.999753 |
2205.01818
|
Ziyi Yang
|
Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, Dongdong Chen, Yu
Shi, Yichong Xu, Yao Qian, Mei Gao, Yi-Ling Chen, Liyang Lu, Yujia Xie,
Robert Gmyr, Noel Codella, Naoyuki Kanda, Bin Xiao, Lu Yuan, Takuya Yoshioka,
Michael Zeng, Xuedong Huang
|
i-Code: An Integrative and Composable Multimodal Learning Framework
| null | null | null | null |
cs.LG cs.AI cs.CL cs.CV eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human intelligence is multimodal; we integrate visual, linguistic, and
acoustic signals to maintain a holistic worldview. Most current pretraining
methods, however, are limited to one or two modalities. We present i-Code, a
self-supervised pretraining framework where users may flexibly combine the
modalities of vision, speech, and language into unified and general-purpose
vector representations. In this framework, data from each modality are first
given to pretrained single-modality encoders. The encoder outputs are then
integrated with a multimodal fusion network, which uses novel attention
mechanisms and other architectural innovations to effectively combine
information from the different modalities. The entire system is pretrained
end-to-end with new objectives including masked modality unit modeling and
cross-modality contrastive learning. Unlike previous research using only video
for pretraining, the i-Code framework can dynamically process single, dual, and
triple-modality data during training and inference, flexibly projecting
different combinations of modalities into a single representation space.
Experimental results demonstrate how i-Code can outperform state-of-the-art
techniques on five video understanding tasks and the GLUE NLP benchmark,
improving by as much as 11% and demonstrating the power of integrative
multimodal pretraining.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 23:38:50 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2022 06:35:23 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Yang",
"Ziyi",
""
],
[
"Fang",
"Yuwei",
""
],
[
"Zhu",
"Chenguang",
""
],
[
"Pryzant",
"Reid",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Shi",
"Yu",
""
],
[
"Xu",
"Yichong",
""
],
[
"Qian",
"Yao",
""
],
[
"Gao",
"Mei",
""
],
[
"Chen",
"Yi-Ling",
""
],
[
"Lu",
"Liyang",
""
],
[
"Xie",
"Yujia",
""
],
[
"Gmyr",
"Robert",
""
],
[
"Codella",
"Noel",
""
],
[
"Kanda",
"Naoyuki",
""
],
[
"Xiao",
"Bin",
""
],
[
"Yuan",
"Lu",
""
],
[
"Yoshioka",
"Takuya",
""
],
[
"Zeng",
"Michael",
""
],
[
"Huang",
"Xuedong",
""
]
] |
new_dataset
| 0.996728 |
2205.01906
|
Xue Bin Peng
|
Xue Bin Peng, Yunrong Guo, Lina Halper, Sergey Levine, Sanja Fidler
|
ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically
Simulated Characters
| null | null |
10.1145/3528223.3530110
| null |
cs.GR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The incredible feats of athleticism demonstrated by humans are made possible
in part by a vast repertoire of general-purpose motor skills, acquired through
years of practice and experience. These skills not only enable humans to
perform complex tasks, but also provide powerful priors for guiding their
behaviors when learning new tasks. This is in stark contrast to what is common
practice in physics-based character animation, where control policies are most
typically trained from scratch for each task. In this work, we present a
large-scale data-driven framework for learning versatile and reusable skill
embeddings for physically simulated characters. Our approach combines
techniques from adversarial imitation learning and unsupervised reinforcement
learning to develop skill embeddings that produce life-like behaviors, while
also providing an easy-to-control representation for use on new downstream
tasks. Our models can be trained using large datasets of unstructured motion
clips, without requiring any task-specific annotation or segmentation of the
motion data. By leveraging a massively parallel GPU-based simulator, we are
able to train skill embeddings using over a decade of simulated experiences,
enabling our model to learn a rich and versatile repertoire of skills. We show
that a single pre-trained model can be effectively applied to perform a diverse
set of new tasks. Our system also allows users to specify tasks through simple
reward functions, and the skill embedding then enables the character to
automatically synthesize complex and naturalistic strategies in order to
achieve the task objectives.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 06:13:28 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2022 17:25:14 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Peng",
"Xue Bin",
""
],
[
"Guo",
"Yunrong",
""
],
[
"Halper",
"Lina",
""
],
[
"Levine",
"Sergey",
""
],
[
"Fidler",
"Sanja",
""
]
] |
new_dataset
| 0.999214 |
2205.01959
|
Mingsheng Ying
|
Mingsheng Ying
|
Birkhoff-von Neumann Quantum Logic as an Assertion Language for Quantum
Programs
| null | null | null | null |
cs.LO cs.PL quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A first-order logic with quantum variables is needed as an assertion language
for specifying and reasoning about various properties (e.g. correctness) of
quantum programs. Surprisingly, such a logic is missing in the literature, and
the existing first-order Birkhoff-von Neumann quantum logic deals with only
classical variables and quantifications over them. In this paper, we fill in
this gap by introducing a first-order extension of Birkhoff-von Neumann quantum
logic with universal and existential quantifiers over quantum variables.
Examples are presented to show our logic is particularly suitable for
specifying some important properties studied in quantum computation and quantum
information. We further incorporate this logic into quantum Hoare logic as an
assertion logic so that it can play a role similar to that of first-order logic
for classical Hoare logic and BI-logic for separation logic. In particular, we
show how it can be used to define and derive quantum generalisations of some
adaptation rules that have been applied to significantly simplify verification
of classical programs. It is expected that the assertion logic defined in this
paper - first-order quantum logic with quantum variables - can be combined with
various quantum program logics to serve as a solid logical foundation upon
which verification tools can be built using proof assistants such as Coq and
Isabelle/HOL.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 08:57:44 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Ying",
"Mingsheng",
""
]
] |
new_dataset
| 0.995835 |
2205.02287
|
Charles Yuan
|
Charles Yuan and Christopher McNally and Michael Carbin
|
Twist: Sound Reasoning for Purity and Entanglement in Quantum Programs
|
This version of the paper differs from ACM Proceedings in that it
includes a more refined comparison to prior work, specifically in Sections
3.5 and 9.6
|
Proc. ACM Program. Lang. 6, POPL, Article 30 (January 2022), 32
pages
|
10.1145/3498691
| null |
cs.PL quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Quantum programming languages enable developers to implement algorithms for
quantum computers that promise computational breakthroughs in classically
intractable tasks. Programming quantum computers requires awareness of
entanglement, the phenomenon in which measurement outcomes of qubits are
correlated. Entanglement can determine the correctness of algorithms and
suitability of programming patterns.
In this work, we formalize purity as a central tool for automating reasoning
about entanglement in quantum programs. A pure expression is one whose
evaluation is unaffected by the measurement outcomes of qubits that it does not
own, implying freedom from entanglement with any other expression in the
computation.
We present Twist, the first language that features a type system for sound
reasoning about purity. The type system enables the developer to identify pure
expressions using type annotations. Twist also features purity assertion
operators that state the absence of entanglement in the output of quantum
gates. To soundly check these assertions, Twist uses a combination of static
analysis and runtime verification.
We evaluate Twist's type system and analyses on a benchmark suite of quantum
programs in simulation, demonstrating that Twist can express quantum
algorithms, catch programming errors in them, and support programs that several
languages disallow, while incurring runtime verification overhead of less than
3.5%.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 18:46:08 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Yuan",
"Charles",
""
],
[
"McNally",
"Christopher",
""
],
[
"Carbin",
"Michael",
""
]
] |
new_dataset
| 0.985077 |
2205.02289
|
Vijay Viswanathan
|
Aryeh Tiktinsky, Vijay Viswanathan, Danna Niezni, Dana Meron Azagury,
Yosi Shamay, Hillel Taub-Tabib, Tom Hope, Yoav Goldberg
|
A Dataset for N-ary Relation Extraction of Drug Combinations
|
To appear in NAACL 2022
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Combination therapies have become the standard of care for diseases such as
cancer, tuberculosis, malaria and HIV. However, the combinatorial set of
available multi-drug treatments creates a challenge in identifying effective
combination therapies available in a situation. To assist medical professionals
in identifying beneficial drug-combinations, we construct an expert-annotated
dataset for extracting information about the efficacy of drug combinations from
the scientific literature. Beyond its practical utility, the dataset also
presents a unique NLP challenge, as the first relation extraction dataset
consisting of variable-length relations. Furthermore, the relations in this
dataset predominantly require language understanding beyond the sentence level,
adding to the challenge of this task. We provide a promising baseline model and
identify clear areas for further improvement. We release our dataset, code, and
baseline models publicly to encourage the NLP community to participate in this
task.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 19:01:16 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Tiktinsky",
"Aryeh",
""
],
[
"Viswanathan",
"Vijay",
""
],
[
"Niezni",
"Danna",
""
],
[
"Azagury",
"Dana Meron",
""
],
[
"Shamay",
"Yosi",
""
],
[
"Taub-Tabib",
"Hillel",
""
],
[
"Hope",
"Tom",
""
],
[
"Goldberg",
"Yoav",
""
]
] |
new_dataset
| 0.999258 |
2205.02298
|
Dhruv Nandakumar
|
Christopher Redino, Dhruv Nandakumar, Robert Schiller, Kevin Choi,
Abdul Rahman, Edward Bowen, Matthew Weeks, Aaron Shaha, Joe Nehila
|
Zero Day Threat Detection Using Graph and Flow Based Security Telemetry
|
11 pages, 6 figures, submitting to NeurIPS 2022
| null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Zero Day Threats (ZDT) are novel methods used by malicious actors to attack
and exploit information technology (IT) networks or infrastructure. In the past
few years, the number of these threats has been increasing at an alarming rate
and costing organizations millions of dollars to remediate. The
increasing expansion of network attack surfaces and the exponentially growing
number of assets on these networks necessitate the need for a robust AI-based
Zero Day Threat detection model that can quickly analyze petabyte-scale data
for potentially malicious and novel activity. In this paper, the authors
introduce a deep learning based approach to Zero Day Threat detection that can
generalize, scale, and effectively identify threats in near real-time. The
methodology utilizes network flow telemetry augmented with asset-level graph
features, which are passed through a dual-autoencoder structure for anomaly and
novelty detection respectively. The models have been trained and tested on four
large scale datasets that are representative of real-world organizational
networks and they produce strong results with high precision and recall values.
The models provide a novel methodology to detect complex threats with low
false-positive rates that allow security operators to avoid alert fatigue while
drastically reducing their mean time to response with near-real-time detection.
Furthermore, the authors also provide a novel, labelled, cyber attack dataset
generated from adversarial activity that can be used for validation or training
of other models. With this paper, the authors' overarching goal is to provide a
novel architecture and training methodology for cyber anomaly detectors that
can generalize to multiple IT networks with minimal to no retraining while
still maintaining strong performance.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 19:30:48 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Redino",
"Christopher",
""
],
[
"Nandakumar",
"Dhruv",
""
],
[
"Schiller",
"Robert",
""
],
[
"Choi",
"Kevin",
""
],
[
"Rahman",
"Abdul",
""
],
[
"Bowen",
"Edward",
""
],
[
"Weeks",
"Matthew",
""
],
[
"Shaha",
"Aaron",
""
],
[
"Nehila",
"Joe",
""
]
] |
new_dataset
| 0.982961 |
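For readers who want a concrete picture of the dual-autoencoder structure
described in the Zero Day Threat abstract above, the following is a minimal
sketch, assuming generic flow-plus-graph feature vectors; the layer sizes,
feature dimension, and flagging threshold are all invented for illustration
and are not taken from the paper.

```python
# Minimal sketch of a dual-autoencoder anomaly/novelty detector. All sizes
# and the threshold are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, x):
    # Per-sample mean squared reconstruction error.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# One autoencoder models "normal" flow+graph features (anomaly detection),
# the other models known-attack features (novelty detection); a flow is
# flagged when it reconstructs poorly under both.
feat_dim = 32                       # hypothetical flow + graph feature size
anomaly_ae, novelty_ae = AutoEncoder(feat_dim), AutoEncoder(feat_dim)
flows = torch.randn(8, feat_dim)    # stand-in for real telemetry
scores = torch.maximum(reconstruction_error(anomaly_ae, flows),
                       reconstruction_error(novelty_ae, flows))
flagged = scores > 1.0              # threshold chosen for illustration only
```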
2205.02360
|
Niranjan Hasabnis
|
Niranjan Hasabnis
|
GitRank: A Framework to Rank GitHub Repositories
|
3 pages, 1 figure; to be published in Mining Software Repositories
2022 conference (hackathon)
| null | null | null |
cs.SE cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-source repositories provide a wealth of information and are increasingly
being used to build artificial intelligence (AI) based systems to solve
problems in software engineering. Open-source repositories can be of varying
quality levels, and bad-quality repositories could degrade the performance of
these systems. Evaluating the quality of open-source repositories, which is not
available directly on code hosting sites such as GitHub, is thus important. In this
hackathon, we utilize known code quality measures and GrimoireLab toolkit to
implement a framework, named GitRank, to rank open-source repositories on three
different criteria. We discuss our findings and preliminary evaluation in this
hackathon report.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 23:42:30 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Hasabnis",
"Niranjan",
""
]
] |
new_dataset
| 0.997902 |
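The ranking idea in the GitRank abstract above can be illustrated with a toy
weighted-scoring sketch; the metric names, weights, and values below are
hypothetical, and the actual GrimoireLab-based criteria are not reproduced
here.

```python
# Illustrative sketch of ranking repositories by a weighted combination of
# quality metrics. Metric names, weights, and values are hypothetical.
repos = {
    "org/repo-a": {"code_quality": 0.8, "activity": 0.6, "community": 0.7},
    "org/repo-b": {"code_quality": 0.5, "activity": 0.9, "community": 0.4},
    "org/repo-c": {"code_quality": 0.9, "activity": 0.3, "community": 0.8},
}
weights = {"code_quality": 0.5, "activity": 0.3, "community": 0.2}

def score(metrics: dict) -> float:
    # Weighted sum of normalised metric values in [0, 1].
    return sum(weights[k] * metrics[k] for k in weights)

ranking = sorted(repos, key=lambda r: score(repos[r]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, round(score(repos[name]), 3))
```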
2205.02422
|
Mahdi Chehimi
|
Mahdi Chehimi, Christina Chaccour, Walid Saad
|
Quantum Semantic Communications: An Unexplored Avenue for Contextual
Networking
|
6 pages, 3 figures
| null | null | null |
cs.NI quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Future communication systems (6G and beyond) will witness a paradigm shift
from communication-intensive systems towards intelligent computing-intensive
architectures. A key research area that enables this transition is semantic
communications, whereby the communication process conveys the meaning of
messages instead of being a mere reconstruction process of raw, naive data
bits. In this paper, a novel quantum semantic communications (QSC) framework is
proposed to develop reasoning-based future communication systems with quantum
semantic representations that are characterized with minimalism, efficiency,
and accuracy. In particular, the concepts of quantum embedding and
high-dimensional Hilbert spaces are exploited so as to extract the meaning of
classical data. Moreover, in order to equip our approach with minimalism and
efficiency, an unsupervised quantum machine learning (QML) technique, namely
quantum clustering, is employed. Quantum clustering enables the extraction of
contextual information and distinct characterization of the semantics of the
message to be conveyed. Subsequently, to successfully transmit the constructed
semantic representations, quantum communication links are used to transfer the
quantum states. This new QSC framework exploits unique quantum principles such
as the minimalism of entangled photons, quantum-semantic entropy of noise, and
quantum fidelity. Simulation results show that the proposed framework can save
around 85\% of quantum communication resources, i.e., entangled photons,
compared to semantic-agnostic quantum communication schemes. Results also show
the benefits of increasing the number of dimensions on the expressivity of the
semantic representations.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 03:49:19 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Chehimi",
"Mahdi",
""
],
[
"Chaccour",
"Christina",
""
],
[
"Saad",
"Walid",
""
]
] |
new_dataset
| 0.997773 |
2205.02455
|
Ashutosh Modi
|
Abhinav Joshi and Ashwani Bhat and Ayush Jain and Atin Vikram Singh
and Ashutosh Modi
|
COGMEN: COntextualized GNN based Multimodal Emotion recognitioN
|
17 pages (9 main + 8 appendix). Accepted at NAACL 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Emotions are an inherent part of human interactions, and consequently, it is
imperative to develop AI systems that understand and recognize human emotions.
During a conversation involving several people, a person's emotions are
influenced by the other speakers' utterances and by their own emotional state
across utterances. In this paper, we propose the COntextualized Graph Neural
Network based Multimodal Emotion recognitioN (COGMEN) system, which leverages local
information (i.e., inter/intra dependency between speakers) and global
information (context). The proposed model uses Graph Neural Network (GNN) based
architecture to model the complex dependencies (local and global information)
in a conversation. Our model gives state-of-the-art (SOTA) results on IEMOCAP
and MOSEI datasets, and detailed ablation experiments show the importance of
modeling information at both levels.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 05:54:24 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Joshi",
"Abhinav",
""
],
[
"Bhat",
"Ashwani",
""
],
[
"Jain",
"Ayush",
""
],
[
"Singh",
"Atin Vikram",
""
],
[
"Modi",
"Ashutosh",
""
]
] |
new_dataset
| 0.995675 |
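To make the graph-based context modelling in the COGMEN abstract above more
tangible, here is a greatly simplified sketch, assuming fused multimodal
utterance features and a fully connected context graph; the dimensions and the
mean-aggregation scheme are illustrative assumptions, not the paper's
architecture.

```python
# Minimal sketch: utterance nodes refined by message passing over a
# speaker/context graph, then classified into emotions. All sizes and the
# adjacency structure are invented for illustration.
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # Mean aggregation over neighbours defined by the adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin(adj @ x / deg))

num_utt, dim, num_emotions = 5, 32, 6
x = torch.randn(num_utt, dim)          # fused audio/text/visual features
adj = torch.ones(num_utt, num_utt)     # fully connected context window
layer = SimpleGraphLayer(dim)
classifier = nn.Linear(dim, num_emotions)
logits = classifier(layer(x, adj))     # one emotion prediction per utterance
print(logits.argmax(dim=1))
```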
2205.02524
|
Ning Wang
|
Ning Wang
|
M2R2: Missing-Modality Robust emotion Recognition framework with
iterative data augmentation
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper addresses the problem of utterance-level modalities missing with
uncertain patterns in the emotion recognition in conversation (ERC) task.
Existing models generally predict a speaker's emotions from the current
utterance and its context, and their performance degrades considerably when
modalities are missing. Our work proposes Missing-Modality Robust emotion
Recognition (M2R2), a framework that trains an emotion recognition model with
iterative data augmentation based on a learned common representation. First, a
network called Party Attentive Network (PANet) is designed to classify
emotions, tracking the states and context of all speakers. An attention
mechanism between each speaker, the other participants, and the dialogue topic
is used to spread the dependence across multi-time and multi-party utterances
rather than relying on a single, possibly incomplete one. Moreover, the Common
Representation Learning (CRL) problem is defined for the modality-missing
setting. Data imputation methods improved by an adversarial strategy are used here to
construct extra features to augment data. Extensive experiments and case
studies validate the effectiveness of our methods over baselines for
modality-missing emotion recognition on two different datasets.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 09:16:31 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Wang",
"Ning",
""
]
] |
new_dataset
| 0.966826 |
2205.02533
|
Li You
|
Jie Xu, Li You, George C. Alexandropoulos, Xinping Yi, Wenjin Wang,
Xiqi Gao
|
Near-Field Wideband Extremely Large-scale MIMO Transmission with
Holographic Metasurface Antennas
|
30 pages, 9 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Extremely large-scale multiple-input multiple-output (XL-MIMO) is the
development trend of future wireless communications. However, the extremely
large-scale antenna array brings inevitable near-field and dual-wideband
effects that seriously degrade transmission performance. This paper proposes
an algorithmic framework to design beam combining for the near-field
wideband XL-MIMO uplink transmissions assisted by holographic metasurface
antennas (HMAs). Firstly, we introduce a spherical-wave-based channel model
that simultaneously takes into account both the near-field and dual-wideband
effects. Based on such a model, we then formulate the HMA-based beam combining
problem for the proposed XL-MIMO communications, which is challenging due to
the nonlinear coupling of high dimensional HMA weights and baseband combiners.
We further present a sum-mean-square-error-minimization-based algorithmic
framework. Numerical results showcase that the proposed scheme can effectively
alleviate the sum-rate loss caused by the near-field and dual-wideband effects
in HMA-assisted XL-MIMO systems. Meanwhile, the proposed HMA-based scheme can
achieve a higher sum rate than the conventional phase-shifter-based hybrid
analog/digital one with the same array aperture.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 09:38:53 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Xu",
"Jie",
""
],
[
"You",
"Li",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Yi",
"Xinping",
""
],
[
"Wang",
"Wenjin",
""
],
[
"Gao",
"Xiqi",
""
]
] |
new_dataset
| 0.994388 |
2205.02543
|
Naresh Saini
|
Naresh Saini, Promodh Pinto, Aravinth Bheemaraj, Deepak Kumar, Dhiraj
Daga, Saurabh Yadav and Srihari Nagaraj
|
OCR Synthetic Benchmark Dataset for Indic Languages
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present the largest publicly available synthetic OCR benchmark dataset for
Indic languages. The collection contains a total of 90k images and their ground
truth for 23 Indic languages. OCR model validation in Indic languages requires
a large amount of diverse data in order to create a robust and reliable model.
Collecting such a huge amount of data would otherwise be difficult, but with
synthetic data it becomes far easier. Synthetic data can be of great importance
to fields like computer vision and image processing, where once an initial
synthetic dataset is developed, model creation becomes easier. Generating
synthetic data also offers the flexibility to adjust its nature and environment
as and when required in order to improve model performance. Accurately
labelling real data is often quite expensive, while good accuracy on synthetic
data can be achieved easily.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 10:07:57 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Saini",
"Naresh",
""
],
[
"Pinto",
"Promodh",
""
],
[
"Bheemaraj",
"Aravinth",
""
],
[
"Kumar",
"Deepak",
""
],
[
"Daga",
"Dhiraj",
""
],
[
"Yadav",
"Saurabh",
""
],
[
"Nagaraj",
"Srihari",
""
]
] |
new_dataset
| 0.99983 |
2205.02545
|
Ignatius Ezeani
|
Ignatius Ezeani and Mahmoud El-Haj and Jonathan Morris and Dawn Knight
|
Introducing the Welsh Text Summarisation Dataset and Baseline Systems
|
10 pages, 6 figures
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Welsh is an official language in Wales and is spoken by an estimated 884,300
people (29.2% of the population of Wales). Despite this status and estimated
increase in speaker numbers since the last (2011) census, Welsh remains a
minority language undergoing revitalization and promotion by Welsh Government
and relevant stakeholders. As part of the effort to increase the availability
of Welsh digital technology, this paper introduces the first Welsh
summarisation dataset, which we provide freely for research purposes to help
advance the work on Welsh text summarisation. The dataset was created by Welsh
speakers by manually summarising Welsh Wikipedia articles. In addition, the
paper discusses the implementation and evaluation of different summarisation
systems for Welsh. The summarisation systems and results will serve as
benchmarks for the development of summarisers in other minority language
contexts.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 10:12:45 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Ezeani",
"Ignatius",
""
],
[
"El-Haj",
"Mahmoud",
""
],
[
"Morris",
"Jonathan",
""
],
[
"Knight",
"Dawn",
""
]
] |
new_dataset
| 0.999822 |
2205.02625
|
Peizhuo Li
|
Peizhuo Li, Kfir Aberman, Zihan Zhang, Rana Hanocka, Olga
Sorkine-Hornung
|
GANimator: Neural Motion Synthesis from a Single Sequence
|
SIGGRAPH 2022. Project page: https://peizhuoli.github.io/ganimator/ ,
Video: https://www.youtube.com/watch?v=OV9VoHMEeyI
| null |
10.1145/3528223.3530157
| null |
cs.GR cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present GANimator, a generative model that learns to synthesize novel
motions from a single, short motion sequence. GANimator generates motions that
resemble the core elements of the original motion, while simultaneously
synthesizing novel and diverse movements. Existing data-driven techniques for
motion synthesis require a large motion dataset which contains the desired and
specific skeletal structure. By contrast, GANimator only requires training on a
single motion sequence, enabling novel motion synthesis for a variety of
skeletal structures, e.g., bipeds, quadrupeds, hexapeds, and more. Our framework
contains a series of generative and adversarial neural networks, each
responsible for generating motions in a specific frame rate. The framework
progressively learns to synthesize motion from random noise, enabling
hierarchical control over the generated motion content across varying levels of
detail. We show a number of applications, including crowd simulation, key-frame
editing, style transfer, and interactive control, which all learn from a single
input sequence. Code and data for this paper are at
https://peizhuoli.github.io/ganimator.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 13:04:14 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Li",
"Peizhuo",
""
],
[
"Aberman",
"Kfir",
""
],
[
"Zhang",
"Zihan",
""
],
[
"Hanocka",
"Rana",
""
],
[
"Sorkine-Hornung",
"Olga",
""
]
] |
new_dataset
| 0.999656 |
2205.02627
|
Gabriel Amaral
|
Gabriel Amaral, Odinaldo Rodrigues, Elena Simperl
|
WDV: A Broad Data Verbalisation Dataset Built from Wikidata
| null | null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Data verbalisation is a task of great importance in natural language
processing today, as there is great benefit in transforming our abundant
structured and semi-structured data into human-readable formats.
Verbalising Knowledge Graph (KG) data focuses on converting interconnected
triple-based claims, formed of subject, predicate, and object, into text.
Although KG verbalisation datasets exist for some KGs, there are still gaps in
their fitness for use in many scenarios. This is especially true for Wikidata,
where available datasets either loosely couple claim sets with textual
information or heavily focus on predicates around biographies, cities, and
countries. To address these gaps, we propose WDV, a large KG claim
verbalisation dataset built from Wikidata, with a tight coupling between
triples and text, covering a wide variety of entities and predicates. We also
evaluate the quality of our verbalisations through a reusable workflow for
measuring human-centred fluency and adequacy scores. Our data and code are
openly available in the hopes of furthering research towards KG verbalisation.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 13:10:12 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Amaral",
"Gabriel",
""
],
[
"Rodrigues",
"Odinaldo",
""
],
[
"Simperl",
"Elena",
""
]
] |
new_dataset
| 0.999818 |
2205.02671
|
Jae Hee Lee
|
Jae Hee Lee, Matthias Kerzel, Kyra Ahrens, Cornelius Weber and Stefan
Wermter
|
What is Right for Me is Not Yet Right for You: A Dataset for Grounding
Relative Directions via Multi-Task Learning
|
Accepted to IJCAI 2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding spatial relations is essential for intelligent agents to act
and communicate in the physical world. Relative directions are spatial
relations that describe the relative positions of target objects with regard to
the intrinsic orientation of reference objects. Grounding relative directions
is more difficult than grounding absolute directions because it not only
requires a model to detect objects in the image and to identify spatial
relation based on this information, but it also needs to recognize the
orientation of objects and integrate this information into the reasoning
process. We investigate the challenging problem of grounding relative
directions with end-to-end neural networks. To this end, we provide GRiD-3D, a
novel dataset that features relative directions and complements existing visual
question answering (VQA) datasets, such as CLEVR, that involve only absolute
directions. We also provide baselines for the dataset with two established
end-to-end VQA models. Experimental evaluations show that answering questions
on relative directions is feasible when questions in the dataset simulate the
necessary subtasks for grounding relative directions. We discover that those
subtasks are learned in an order that reflects the steps of an intuitive
pipeline for processing relative directions.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 14:25:46 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Lee",
"Jae Hee",
""
],
[
"Kerzel",
"Matthias",
""
],
[
"Ahrens",
"Kyra",
""
],
[
"Weber",
"Cornelius",
""
],
[
"Wermter",
"Stefan",
""
]
] |
new_dataset
| 0.998968 |
2205.02679
|
Ad\`ele Douin
|
Ad\`ele Douin, J. P. Bruneton, Fr\'ed\'eric Lechenault
|
KnitCity: a machine learning-based, game-theoretical framework for
prediction assessment and seismic risk policy design
| null | null | null | null |
cs.LG cond-mat.dis-nn cond-mat.stat-mech physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Knitted fabric exhibits avalanche-like events when deformed: by analogy with
earthquakes, we are interested in predicting these "knitquakes". However, as in
most analogous seismic models, the peculiar statistics of the corresponding
time-series severely jeopardize this endeavour, due to the time intermittence
and scale-invariance of these events. But more importantly, such predictions
are hard to {\it assess}: depending on the choice of what to predict, the
results can be very different and not easily compared. Furthermore, forecasting
models may be trained with various generic metrics which ignore some important
specificities of the problem at hand, in our case seismic risk. Finally, these
models often do not provide a clear strategy regarding the best way to use
these predictions in practice. Here we introduce a framework that allows one
to design, evaluate and compare not only predictors but also decision-making
policies: a model seismically active {\it city} subjected to the crackling
dynamics observed in the mechanical response of knitted fabric. We then study
the population of KnitCity, introducing a policy through which the mayor of the
town can decide either to keep people in, which in case of large events causes
human loss, or to evacuate the city, which costs a daily fee. The
policy only relies on past seismic observations. We construct efficient
policies using a reinforcement learning environment and various time-series
predictors based on artificial neural networks. By inducing a physically
motivated metric on the predictors, this mechanism allows quantitative
assessment and comparison of their relevance in the decision-making process.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 14:38:03 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Douin",
"Adèle",
""
],
[
"Bruneton",
"J. P.",
""
],
[
"Lechenault",
"Frédéric",
""
]
] |
new_dataset
| 0.971343 |
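The stay-or-evacuate trade-off at the heart of the KnitCity abstract above can
be sketched as a toy cost simulation; the event probability, fee, casualty
cost, threshold policy, and risk signal below are all invented for
illustration.

```python
# Toy sketch of the decision problem: each day the mayor either evacuates
# (fixed daily fee) or keeps people in (casualty cost if a large event hits).
import random

random.seed(0)
DAILY_FEE, EVENT_COST = 1.0, 50.0

def run(policy, days=1000):
    total = 0.0
    for _ in range(days):
        risk = random.random()                 # stand-in for predictor output
        event = random.random() < 0.3 * risk   # events correlate with risk
        if policy(risk):
            total -= DAILY_FEE                 # evacuation cost
        elif event:
            total -= EVENT_COST                # human loss if caught in town
    return total

always_stay = lambda risk: False
threshold = lambda risk: risk > 0.9            # evacuate on high risk only
print("stay:", run(always_stay), "threshold:", run(threshold))
```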
2205.02684
|
Erick Mendez Guzman
|
Erick Mendez Guzman, Viktor Schlegel and Riza Batista-Navarro
|
RaFoLa: A Rationale-Annotated Corpus for Detecting Indicators of Forced
Labour
| null | null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Forced labour is the most common type of modern slavery, and it is
increasingly gaining the attention of the research and social community. Recent
studies suggest that artificial intelligence (AI) holds immense potential for
augmenting anti-slavery action. However, AI tools need to be developed
transparently in cooperation with different stakeholders. Such tools are
contingent on the availability of, and access to, domain-specific data, which are
scarce due to the near-invisible nature of forced labour. To the best of our
knowledge, this paper presents the first openly accessible English corpus
annotated for multi-class and multi-label forced labour detection. The corpus
consists of 989 news articles retrieved from specialised data sources and
annotated according to risk indicators defined by the International Labour
Organization (ILO). Each news article was annotated for two aspects: (1)
indicators of forced labour as classification labels and (2) snippets of the
text that justify labelling decisions. We hope that our data set can help
promote research on explainability for multi-class and multi-label text
classification. In this work, we explain our process for collecting the data
underpinning the proposed corpus, describe our annotation guidelines and
present some statistical analysis of its content. Finally, we summarise the
results of baseline experiments based on different variants of the
Bidirectional Encoder Representation from Transformer (BERT) model.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 14:43:31 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Guzman",
"Erick Mendez",
""
],
[
"Schlegel",
"Viktor",
""
],
[
"Batista-Navarro",
"Riza",
""
]
] |
new_dataset
| 0.998034 |
2205.02692
|
Zheng Zhu
|
Zheng Zhu, Xianda Guo, Tian Yang, Junjie Huang, Jiankang Deng, Guan
Huang, Dalong Du, Jiwen Lu, Jie Zhou
|
Gait Recognition in the Wild: A Benchmark
|
Published in ICCV 2021. Benchmark website is
https://www.grew-benchmark.org/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Gait benchmarks empower the research community to train and evaluate
high-performance gait recognition systems. Even though growing efforts have
been devoted to cross-view recognition, academia is restricted by existing
databases captured in controlled environments. In this paper, we contribute a
new benchmark for Gait REcognition in the Wild (GREW). The GREW dataset is
constructed from natural videos, covering hundreds of cameras and thousands of
hours of streams in open systems. With tremendous manual
annotations, the GREW consists of 26K identities and 128K sequences with rich
attributes for unconstrained gait recognition. Moreover, we add a distractor
set of over 233K sequences, making it more suitable for real-world
applications. Compared with prevailing predefined cross-view datasets, the GREW
has diverse and practical view variations, as well as more natural challenging
factors. To the best of our knowledge, this is the first large-scale dataset
for gait recognition in the wild. Equipped with this benchmark, we dissect the
unconstrained gait recognition problem. Representative appearance-based and
model-based methods are explored, and comprehensive baselines are established.
Experimental results show (1) The proposed GREW benchmark is necessary for
training and evaluating gait recognizers in the wild. (2) For state-of-the-art
gait recognition approaches, there is a lot of room for improvement. (3) The
GREW benchmark can be used as effective pre-training for controlled gait
recognition. Benchmark website is https://www.grew-benchmark.org/.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 14:57:39 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Zhu",
"Zheng",
""
],
[
"Guo",
"Xianda",
""
],
[
"Yang",
"Tian",
""
],
[
"Huang",
"Junjie",
""
],
[
"Deng",
"Jiankang",
""
],
[
"Huang",
"Guan",
""
],
[
"Du",
"Dalong",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zhou",
"Jie",
""
]
] |
new_dataset
| 0.999797 |
2205.02832
|
Yasumasa Onoe
|
Yasumasa Onoe, Michael J.Q. Zhang, Eunsol Choi, Greg Durrett
|
Entity Cloze By Date: What LMs Know About Unseen Entities
|
NAACL 2022 Findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language models (LMs) are typically trained once on a large-scale corpus and
used for years without being updated. However, in a dynamic world, new entities
constantly arise. We propose a framework to analyze what LMs can infer about
new entities that did not exist when the LMs were pretrained. We derive a
dataset of entities indexed by their origination date and paired with their
English Wikipedia articles, from which we can find sentences about each entity.
We evaluate LMs' perplexity on masked spans within these sentences. We show
that models more informed about the entities, such as those with access to a
textual definition of them, achieve lower perplexity on this benchmark. Our
experimental results demonstrate that making inferences about new entities
remains difficult for LMs. Given its wide coverage on entity knowledge and
temporal indexing, our dataset can be used to evaluate LMs and techniques
designed to modify or extend their knowledge. Our automatic data collection
pipeline can be easily used to continually update our benchmark.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 17:59:31 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Onoe",
"Yasumasa",
""
],
[
"Zhang",
"Michael J. Q.",
""
],
[
"Choi",
"Eunsol",
""
],
[
"Durrett",
"Greg",
""
]
] |
new_dataset
| 0.983893 |
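The masked-span scoring mentioned in the Entity Cloze By Date abstract above
follows a standard masked-language-model recipe; a generic sketch is shown
below. This is not the paper's exact protocol, and the example sentence and
candidate filler are made up. It requires the `transformers` and `torch`
packages.

```python
# Generic sketch: negative log-probability of a candidate token at a masked
# position; averaging such scores over spans yields a perplexity-style metric.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The James Webb Space Telescope was launched in [MASK] 2021."
inputs = tok(sentence, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[:, 1]

with torch.no_grad():
    logits = model(**inputs).logits

target_id = tok.convert_tokens_to_ids("december")
log_probs = logits[0, mask_pos].log_softmax(dim=-1)
print(-log_probs[0, target_id].item())   # lower is better for the model
```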
2205.02834
|
Chuang Gan
|
Yining Hong, Kaichun Mo, Li Yi, Leonidas J. Guibas, Antonio Torralba,
Joshua B. Tenenbaum, Chuang Gan
|
Fixing Malfunctional Objects With Learned Physical Simulation and
Functional Prediction
|
CVPR 2022. Project page: http://fixing-malfunctional.csail.mit.edu
| null | null | null |
cs.CV cs.AI cs.GR cs.LG cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This paper studies the problem of fixing malfunctional 3D objects. While
previous works focus on building passive perception models to learn the
functionality from static 3D objects, we argue that functionality is reckoned
with respect to the physical interactions between the object and the user.
Given a malfunctional object, humans can perform mental simulations to reason
about its functionality and figure out how to fix it. Inspired by this, we
propose FixIt, a dataset that contains about 5k poorly-designed 3D physical
objects paired with choices to fix them. To mimic humans' mental simulation
process, we present FixNet, a novel framework that seamlessly incorporates
perception and physical dynamics. Specifically, FixNet consists of a perception
module to extract the structured representation from the 3D point cloud, a
physical dynamics prediction module to simulate the results of interactions on
3D objects, and a functionality prediction module to evaluate the functionality
and choose the correct fix. Experimental results show that our framework
outperforms baseline models by a large margin, and can generalize well to
objects with similar interaction types.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 17:59:36 GMT"
}
] | 2022-05-06T00:00:00 |
[
[
"Hong",
"Yining",
""
],
[
"Mo",
"Kaichun",
""
],
[
"Yi",
"Li",
""
],
[
"Guibas",
"Leonidas J.",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.99569 |
1910.06247
|
Martin Monperrus
|
Martin Monperrus (KTH), Simon Urli (SPIRALS), Thomas Durieux (INESC),
Matias Martinez (LAMIH, UPHF), Benoit Baudry (KTH), Lionel Seinturier
(SPIRALS, CRIStAL)
|
Repairnator patches programs automatically
|
arXiv admin note: substantial text overlap with arXiv:1810.05806
|
Ubiquity, Association for Computing Machinery, July (2), pp.1-12,
2019
|
10.1145/3349589
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Repairnator is a bot. It constantly monitors software bugs discovered during
continuous integration of open-source software and tries to fix them
automatically. If it succeeds in synthesizing a valid patch, Repairnator
proposes the patch to the human developers, disguised under a fake human
identity. To date, Repairnator has been able to produce patches that were
accepted by the human developers and permanently merged into the code base.
This is a milestone for human-competitiveness in software engineering research
on automatic program repair.
|
[
{
"version": "v1",
"created": "Fri, 11 Oct 2019 06:57:24 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 11:54:01 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Monperrus",
"Martin",
"",
"KTH"
],
[
"Urli",
"Simon",
"",
"SPIRALS"
],
[
"Durieux",
"Thomas",
"",
"INESC"
],
[
"Martinez",
"Matias",
"",
"LAMIH, UPHF"
],
[
"Baudry",
"Benoit",
"",
"KTH"
],
[
"Seinturier",
"Lionel",
"",
"SPIRALS, CRIStAL"
]
] |
new_dataset
| 0.999195 |
2101.03529
|
Gullal Singh Cheema
|
Gullal S. Cheema, Sherzod Hakimov, Ralph Ewerth
|
TIB's Visual Analytics Group at MediaEval '20: Detecting Fake News on
Corona Virus and 5G Conspiracy
|
MediaEval 2020 Fake News Task
| null | null | null |
cs.SI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Fake news on social media has become a hot topic of research as it negatively
impacts the discourse of real news in the public. Specifically, the ongoing
COVID-19 pandemic has seen a rise in inaccurate and misleading information due
to the surrounding controversies and unknown details at the beginning of the
pandemic. The FakeNews task at MediaEval 2020 tackles this problem by creating
a challenge to automatically detect tweets containing misinformation based on
text and structure from the Twitter follower network. In this paper, we present a
simple approach that uses BERT embeddings and a shallow neural network for
classifying tweets using only text, and discuss our findings and limitations of
the approach in text-based misinformation detection.
|
[
{
"version": "v1",
"created": "Sun, 10 Jan 2021 11:52:17 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Cheema",
"Gullal S.",
""
],
[
"Hakimov",
"Sherzod",
""
],
[
"Ewerth",
"Ralph",
""
]
] |
new_dataset
| 0.994341 |
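The "BERT embeddings plus shallow neural network" recipe in the abstract above
is easy to sketch; the checkpoint choice, mean pooling, classifier size, and
toy tweets below are assumptions for illustration. It requires the
`transformers`, `torch`, and `scikit-learn` packages.

```python
# Sketch: fixed BERT sentence vectors fed to a shallow MLP classifier.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.neural_network import MLPClassifier

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

def embed(texts):
    # Mean-pooled last-layer token embeddings as fixed sentence vectors.
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch).last_hidden_state
    mask = batch.attention_mask.unsqueeze(-1)
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

tweets = ["5g towers spread the virus", "vaccines are in clinical trials"]
labels = [1, 0]                      # 1 = misinformation (toy labels)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(embed(tweets), labels)
print(clf.predict(embed(["masks cause oxygen deprivation"])))
```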
2105.01765
|
Lin Bai
|
Lin Bai, Yiming Zhao and Xinming Huang
|
Enabling 3D Object Detection with a Low-Resolution LiDAR
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Light Detection And Ranging (LiDAR) has been widely used in autonomous
vehicles for perception and localization. However, the cost of a
high-resolution LiDAR is still prohibitively expensive, while its
low-resolution counterpart is much more affordable. Therefore, using
low-resolution LiDAR for autonomous driving is an economically viable solution,
but the point cloud sparsity makes it extremely challenging. In this paper, we
propose a two-stage neural network framework that enables 3D object detection
using a low-resolution LiDAR. Taking a low-resolution LiDAR point cloud and a
monocular camera image as input, a depth completion network is employed to
produce a dense point cloud that is subsequently processed by a voxel-based
network for 3D object detection. Evaluated with KITTI dataset for 3D object
detection in Bird-Eye View (BEV), the experimental result shows that the
proposed approach performs significantly better than directly applying the
16-line LiDAR point cloud for object detection. For both easy and moderate
cases, our 3D vehicle detection results are close to those using 64-line
high-resolution LiDARs.
|
[
{
"version": "v1",
"created": "Tue, 4 May 2021 21:08:20 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 00:54:39 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Bai",
"Lin",
""
],
[
"Zhao",
"Yiming",
""
],
[
"Huang",
"Xinming",
""
]
] |
new_dataset
| 0.99889 |
2110.04067
|
Keivan Bahmani
|
M. G. Sarwar Murshed, Robert Kline, Keivan Bahmani, Faraz Hussain,
Stephanie Schuckers
|
Deep Slap Fingerprint Segmentation for Juveniles and Adults
| null |
In 2021 IEEE International Conference on Consumer Electronics-Asia
(ICCE-Asia) (pp. 1-4). IEEE
| null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Many fingerprint recognition systems capture four fingerprints in one image.
In such systems, the fingerprint processing pipeline must first segment each
four-fingerprint slap into individual fingerprints. Note that most of the
current fingerprint segmentation algorithms have been designed and evaluated
using only adult fingerprint datasets. In this work, we have developed a
human-annotated in-house dataset of 15790 slaps of which 9084 are adult samples
and 6706 are samples drawn from children from ages 4 to 12. Subsequently, the
dataset is used to evaluate the matching performance of the NFSEG, a slap
fingerprint segmentation system developed by NIST, on slaps from adults and
juvenile subjects. Our results reveal the lower performance of NFSEG on slaps
from juvenile subjects. Finally, we utilized our novel dataset to develop the
Mask-RCNN based Clarkson Fingerprint Segmentation (CFSEG). Our matching results
using the Verifinger fingerprint matcher indicate that CFSEG outperforms NFSEG
for both adults and juvenile slaps. The CFSEG model is publicly available at
\url{https://github.com/keivanB/Clarkson_Finger_Segment}
|
[
{
"version": "v1",
"created": "Wed, 6 Oct 2021 04:48:23 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 21:29:17 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Murshed",
"M. G. Sarwar",
""
],
[
"Kline",
"Robert",
""
],
[
"Bahmani",
"Keivan",
""
],
[
"Hussain",
"Faraz",
""
],
[
"Schuckers",
"Stephanie",
""
]
] |
new_dataset
| 0.999721 |
2111.09453
|
Juan Manuel Perez
|
Juan Manuel P\'erez, Dami\'an A. Furman, Laura Alonso Alemany, Franco
Luque
|
RoBERTuito: a pre-trained language model for social media text in
Spanish
|
LREC 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Since BERT appeared, Transformer language models and transfer learning have
become state-of-the-art for Natural Language Understanding tasks. Recently,
some works have focused on pre-training specially-crafted models for particular
domains, such as scientific papers, medical documents, user-generated texts,
among others. These domain-specific models have been shown to improve
performance significantly in most tasks. However, for languages other than
English such models are not widely available.
In this work, we present RoBERTuito, a pre-trained language model for
user-generated text in Spanish, trained on over 500 million tweets. Experiments
on a benchmark of tasks involving user-generated text showed that RoBERTuito
outperformed other pre-trained language models in Spanish. In addition to this,
our model achieves top results for some English-Spanish tasks of the Linguistic
Code-Switching Evaluation benchmark (LinCE) and has also competitive
performance against monolingual models in English tasks. To facilitate further
research, we make RoBERTuito publicly available at the HuggingFace model hub
together with the dataset used to pre-train it.
|
[
{
"version": "v1",
"created": "Thu, 18 Nov 2021 00:10:25 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jan 2022 23:04:08 GMT"
},
{
"version": "v3",
"created": "Wed, 4 May 2022 10:18:30 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Pérez",
"Juan Manuel",
""
],
[
"Furman",
"Damián A.",
""
],
[
"Alemany",
"Laura Alonso",
""
],
[
"Luque",
"Franco",
""
]
] |
new_dataset
| 0.954536 |
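Since the abstract above notes that RoBERTuito is published on the HuggingFace
model hub, a minimal usage sketch follows; the hub id is assumed from the
pysentimiento organisation and may differ, and tweet preprocessing (user and
URL tokens) is simplified here.

```python
# Minimal usage sketch: load RoBERTuito and encode a tweet.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "pysentimiento/robertuito-base-uncased"  # assumed hub id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

tweet = "que lindo dia para entrenar un modelo jaja"
inputs = tok(tweet, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)   # (1, seq_len, hidden_size) contextual embeddings
```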
2112.08466
|
Derek Pham
|
Xun Yuan, Derek Pham, Sam Davidson, Zhou Yu
|
ErAConD : Error Annotated Conversational Dialog Dataset for Grammatical
Error Correction
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Currently available grammatical error correction (GEC) datasets are compiled
using well-formed written text, limiting the applicability of these datasets to
other domains such as informal writing and dialog. In this paper, we present a
novel parallel GEC dataset drawn from open-domain chatbot conversations; this
dataset is, to our knowledge, the first GEC dataset targeted to a
conversational setting. To demonstrate the utility of the dataset, we use our
annotated data to fine-tune a state-of-the-art GEC model, resulting in a 16
point increase in model precision. This is of particular importance in a GEC
model, as model precision is considered more important than recall in GEC tasks
since false positives could lead to serious confusion in language learners. We
also present a detailed annotation scheme which ranks errors by perceived
impact on comprehensibility, making our dataset both reproducible and
extensible. Experimental results show the effectiveness of our data in
improving GEC model performance in a conversational scenario.
|
[
{
"version": "v1",
"created": "Wed, 15 Dec 2021 20:27:40 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 22:49:14 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Yuan",
"Xun",
""
],
[
"Pham",
"Derek",
""
],
[
"Davidson",
"Sam",
""
],
[
"Yu",
"Zhou",
""
]
] |
new_dataset
| 0.999119 |
2112.10728
|
Revanth Reddy
|
Revanth Gangi Reddy, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen,
Jaemin Cho, Lifu Huang, Mohit Bansal, Avirup Sil, Shih-Fu Chang, Alexander
Schwing, Heng Ji
|
MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media
Knowledge Extraction and Grounding
|
Accepted at AAAI 2022
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, there has been an increasing interest in building question
answering (QA) models that reason across multiple modalities, such as text and
images. However, QA using images is often limited to just picking the answer
from a pre-defined set of options. In addition, images in the real world,
especially in news, have objects that are co-referential to the text, with
complementary information from both modalities. In this paper, we present a new
QA evaluation benchmark with 1,384 questions over news articles that require
cross-media grounding of objects in images onto text. Specifically, the task
involves multi-hop questions that require reasoning over image-caption pairs to
identify the grounded visual object being referred to and then predicting a
span from the news body text to answer the question. In addition, we introduce
a novel multimedia data augmentation framework, based on cross-media knowledge
extraction and synthetic question-answer generation, to automatically augment
data that can provide weak supervision for this task. We evaluate both
pipeline-based and end-to-end pretraining-based multimedia QA models on our
benchmark, and show that they achieve promising performance, while considerably
lagging behind human performance hence leaving large room for future work on
this challenging new task.
|
[
{
"version": "v1",
"created": "Mon, 20 Dec 2021 18:23:30 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 05:45:41 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Reddy",
"Revanth Gangi",
""
],
[
"Rui",
"Xilin",
""
],
[
"Li",
"Manling",
""
],
[
"Lin",
"Xudong",
""
],
[
"Wen",
"Haoyang",
""
],
[
"Cho",
"Jaemin",
""
],
[
"Huang",
"Lifu",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Sil",
"Avirup",
""
],
[
"Chang",
"Shih-Fu",
""
],
[
"Schwing",
"Alexander",
""
],
[
"Ji",
"Heng",
""
]
] |
new_dataset
| 0.999768 |
2202.11087
|
Gilderlan De Ara\'ujo Tavares
|
Gilderlan Tavares de Ara\'ujo, Paulo Ricardo Brboza Gomes, Andr\'e
Lima F\'errer de Almeida, Gabor Fodor, Behrooz Makki
|
Semi-Blind Joint Channel and Symbol Estimation in IRS-Assisted
Multi-User MIMO Networks
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Intelligent reflecting surface (IRS) is a promising technology for
beyond-5th-generation wireless communications. In fully passive IRS-assisted
systems, channel estimation is challenging and should be carried out only at
the base station or at the terminals since the elements of the IRS are
incapable of processing signals. In this letter, we formulate a tensor-based
semi-blind receiver that solves the joint channel and symbol estimation problem
in an IRS-assisted multi-user multiple-input multiple-output system. The
proposed approach relies on a generalized PARATUCK tensor model of the signals
reflected by the IRS, based on a two-stage closed-form semi-blind receiver
using Khatri-Rao and Kronecker factorizations. Simulation results demonstrate
the superior performance of the proposed semi-blind receiver, in terms of the
normalized mean squared error and symbol error rate, as well as a lower
computational complexity, compared to recently proposed parallel factor
analysis-based receivers.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 18:29:11 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 16:56:26 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"de Araújo",
"Gilderlan Tavares",
""
],
[
"Gomes",
"Paulo Ricardo Brboza",
""
],
[
"de Almeida",
"André Lima Férrer",
""
],
[
"Fodor",
"Gabor",
""
],
[
"Makki",
"Behrooz",
""
]
] |
new_dataset
| 0.954312 |
2203.07589
|
Helei Duan
|
Helei Duan, Ashish Malik, Jeremy Dao, Aseem Saxena, Kevin Green, Jonah
Siekmann, Alan Fern, Jonathan Hurst
|
Sim-to-Real Learning of Footstep-Constrained Bipedal Dynamic Walking
|
Accepted at ICRA 2022. Video at
https://www.youtube.com/watch?v=-zim1QQgA2s
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, work on reinforcement learning (RL) for bipedal robots has
successfully learned controllers for a variety of dynamic gaits with robust
sim-to-real demonstrations. In order to maintain balance, the learned
controllers have full freedom of where to place the feet, resulting in highly
robust gaits. In the real world however, the environment will often impose
constraints on the feasible footstep locations, typically identified by
perception systems. Unfortunately, most demonstrated RL controllers on bipedal
robots do not allow for specifying and responding to such constraints. This
missing control interface greatly limits the real-world application of current
RL controllers. In this paper, we aim to maintain the robust and dynamic nature
of learned gaits while also respecting footstep constraints imposed externally.
We develop an RL formulation for training dynamic gait controllers that can
respond to specified touchdown locations. We then successfully demonstrate
simulation and sim-to-real performance on the bipedal robot Cassie. In
addition, we use supervised learning to induce a transition model for
accurately predicting the next touchdown locations that the controller can
achieve given the robot's proprioceptive observations. This model paves the way
for integrating the learned controller into a full-order robot locomotion
planner that robustly satisfies both balance and environmental constraints.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 01:28:14 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 22:39:30 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Duan",
"Helei",
""
],
[
"Malik",
"Ashish",
""
],
[
"Dao",
"Jeremy",
""
],
[
"Saxena",
"Aseem",
""
],
[
"Green",
"Kevin",
""
],
[
"Siekmann",
"Jonah",
""
],
[
"Fern",
"Alan",
""
],
[
"Hurst",
"Jonathan",
""
]
] |
new_dataset
| 0.993768 |
2204.02296
|
Joseph Ortiz
|
Joseph Ortiz, Alexander Clegg, Jing Dong, Edgar Sucar, David Novotny,
Michael Zollhoefer, Mustafa Mukadam
|
iSDF: Real-Time Neural Signed Distance Fields for Robot Perception
|
Published in Robotics: Science and Systems (RSS) 2022. Project page:
https://joeaortiz.github.io/iSDF/
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We present iSDF, a continual learning system for real-time signed distance
field (SDF) reconstruction. Given a stream of posed depth images from a moving
camera, it trains a randomly initialised neural network to map an input 3D
coordinate to an approximate signed distance. The model is self-supervised by
minimising a loss that bounds the predicted signed distance using the distance
to the closest sampled point in a batch of query points that are actively
sampled. In contrast to prior work based on voxel grids, our neural method is
able to provide adaptive levels of detail with plausible filling in of
partially observed regions and denoising of observations, all while having a
more compact representation. In evaluations against alternative methods on real
and synthetic datasets of indoor environments, we find that iSDF produces more
accurate reconstructions, and better approximations of collision costs and
gradients useful for downstream planners in domains from navigation to
manipulation. Code and video results can be found at our project page:
https://joeaortiz.github.io/iSDF/ .
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 15:48:39 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 16:16:28 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Ortiz",
"Joseph",
""
],
[
"Clegg",
"Alexander",
""
],
[
"Dong",
"Jing",
""
],
[
"Sucar",
"Edgar",
""
],
[
"Novotny",
"David",
""
],
[
"Zollhoefer",
"Michael",
""
],
[
"Mukadam",
"Mustafa",
""
]
] |
new_dataset
| 0.994932 |
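The bound-based self-supervision in the iSDF abstract above can be caricatured
with a small loss sketch, assuming the bound is simply the distance to the
nearest sampled surface point in the batch; signs, gradient terms, and the
actual active sampling strategy are omitted, so this is not the paper's loss.

```python
# Toy sketch: predicted distance at a query point is penalised against the
# distance to its nearest sampled surface point in the batch.
import torch

def batch_distance_bound(query_pts, surface_pts):
    # For each query point, distance to the nearest sampled surface point.
    d = torch.cdist(query_pts, surface_pts)       # (Q, S) pairwise distances
    return d.min(dim=1).values

def bounded_sdf_loss(pred_sdf, bound):
    # Penalise predictions exceeding the bound; weakly pull others towards
    # it (a crude stand-in for the actual iSDF objective).
    over = torch.relu(pred_sdf - bound)
    under = torch.relu(bound - pred_sdf)
    return (over + 0.1 * under).mean()

query = torch.randn(128, 3)
surface = torch.randn(512, 3)
pred = torch.randn(128)                            # network output stand-in
print(bounded_sdf_loss(pred, batch_distance_bound(query, surface)))
```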
2204.06885
|
Youngjin Jin
|
Youngjin Jin, Eugene Jang, Yongjae Lee, Seungwon Shin, Jin-Woo Chung
|
Shedding New Light on the Language of the Dark Web
|
To appear at NAACL 2022 (main conference)
| null | null | null |
cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The hidden nature and the limited accessibility of the Dark Web, combined
with the lack of public datasets in this domain, make it difficult to study its
inherent characteristics such as linguistic properties. Previous works on text
classification of Dark Web domain have suggested that the use of deep neural
models may be ineffective, potentially due to the linguistic differences
between the Dark and Surface Webs. However, not much work has been done to
uncover the linguistic characteristics of the Dark Web. This paper introduces
CoDA, a publicly available Dark Web dataset consisting of 10000 web documents
tailored towards text-based Dark Web analysis. By leveraging CoDA, we conduct a
thorough linguistic analysis of the Dark Web and examine the textual
differences between the Dark Web and the Surface Web. We also assess the
performance of various methods of Dark Web page classification. Finally, we
compare CoDA with an existing public Dark Web dataset and evaluate their
suitability for various use cases.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 11:17:22 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 08:47:32 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Jin",
"Youngjin",
""
],
[
"Jang",
"Eugene",
""
],
[
"Lee",
"Yongjae",
""
],
[
"Shin",
"Seungwon",
""
],
[
"Chung",
"Jin-Woo",
""
]
] |
new_dataset
| 0.998631 |
2204.10994
|
Yue Zhang
|
Yue Zhang, Zhenghua Li, Zuyi Bao, Jiacheng Li, Bo Zhang, Chen Li, Fei
Huang, Min Zhang
|
MuCGEC: a Multi-Reference Multi-Source Evaluation Dataset for Chinese
Grammatical Error Correction
|
Accepted by NAACL2022 (main conference)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents MuCGEC, a multi-reference multi-source evaluation dataset
for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences
collected from three Chinese-as-a-Second-Language (CSL) learner sources. Each
sentence is corrected by three annotators, and their corrections are carefully
reviewed by a senior annotator, resulting in 2.3 references per sentence. We
conduct experiments with two mainstream CGEC models, i.e., the
sequence-to-sequence model and the sequence-to-edit model, both enhanced with
large pretrained language models, achieving competitive benchmark performance
on previous and our datasets. We also discuss CGEC evaluation methodologies,
including the effect of multiple references and using a char-based metric. Our
annotation guidelines, data, and code are available at
\url{https://github.com/HillZhang1999/MuCGEC}.
|
[
{
"version": "v1",
"created": "Sat, 23 Apr 2022 05:20:38 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Apr 2022 05:58:14 GMT"
},
{
"version": "v3",
"created": "Wed, 4 May 2022 06:22:18 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Zhang",
"Yue",
""
],
[
"Li",
"Zhenghua",
""
],
[
"Bao",
"Zuyi",
""
],
[
"Li",
"Jiacheng",
""
],
[
"Zhang",
"Bo",
""
],
[
"Li",
"Chen",
""
],
[
"Huang",
"Fei",
""
],
[
"Zhang",
"Min",
""
]
] |
new_dataset
| 0.999782 |
2204.13955
|
Juan M. Gandarias
|
Wansoo Kim, Virginia Ruiz Garate, Juan M. Gandarias, Marta Lorenzini,
Arash Ajoudani
|
A Directional Vibrotactile Feedback Interface for Ergonomic Postural
Adjustment
|
12 pages. 13 figures. Now published in IEEE Transactions on Haptics
DOI: 10.1109/TOH.2021.3112795
|
IEEE Transactions on Haptics ( Volume: 15, Issue: 1, Jan.-March 1
2022)
|
10.1109/TOH.2021.3112795
| null |
cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of this paper is to develop and evaluate a directional
vibrotactile feedback interface as a guidance tool for postural adjustments
during work. In contrast to the existing active and wearable systems such as
exoskeletons, we aim to create a lightweight and intuitive interface, capable
of guiding its wearers towards more ergonomic and healthy working conditions.
To achieve this, a vibrotactile device called ErgoTac is employed to develop
three different feedback modalities that are able to provide a directional
guidance at the body segments towards a desired pose. In addition, an
evaluation is made to find the most suitable, comfortable, and intuitive
feedback modality for the user. Therefore, these modalities are first compared
experimentally on fifteen subjects wearing eight ErgoTac devices to achieve
targeted arm and torso configurations. The most effective directional feedback
modality is then evaluated on five subjects in a set of experiments in which an
ergonomic optimisation module provides the optimised body posture while
performing heavy lifting or forceful exertion tasks. The results yield strong
evidence on the usefulness and the intuitiveness of one of the developed
modalities in providing guidance towards ergonomic working conditions, by
minimising the effect of an external load on body joints. We believe that the
integration of such low-cost devices in workplaces can help address the
well-known and complex problem of work-related musculoskeletal disorders.
|
[
{
"version": "v1",
"created": "Fri, 29 Apr 2022 09:04:05 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 14:12:56 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Kim",
"Wansoo",
""
],
[
"Garate",
"Virginia Ruiz",
""
],
[
"Gandarias",
"Juan M.",
""
],
[
"Lorenzini",
"Marta",
""
],
[
"Ajoudani",
"Arash",
""
]
] |
new_dataset
| 0.956903 |
2205.00429
|
Lorenzo Miretti
|
Lorenzo Miretti, Renato L. G. Cavalcante, Slawomir Stanczak, Martin
Schubert, Ronald Boehnke, Wen Xu
|
Closed-form max-min power control for some cellular and cell-free
massive MIMO networks
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Many common instances of power control problems for cellular and cell-free
massive MIMO networks can be interpreted as max-min utility optimization
problems involving affine interference mappings and polyhedral constraints. We
show that these problems admit a closed-form solution which depends on the
spectral radius of known matrices. In contrast, previous solutions in the
literature have been indirectly obtained using iterative algorithms based on
the bisection method, or on fixed-point iterations. Furthermore, we also show
an asymptotically tight bound for the optimal utility, which in turn provides a
simple rule of thumb for evaluating whether the network is operating in the
noise or interference limited regime. We finally illustrate our results by
focusing on classical max-min fair power control for cell-free massive MIMO
networks.
|
[
{
"version": "v1",
"created": "Sun, 1 May 2022 09:14:04 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 22:17:51 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Miretti",
"Lorenzo",
""
],
[
"Cavalcante",
"Renato L. G.",
""
],
[
"Stanczak",
"Slawomir",
""
],
[
"Schubert",
"Martin",
""
],
[
"Boehnke",
"Ronald",
""
],
[
"Xu",
"Wen",
""
]
] |
new_dataset
| 0.981409 |
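To illustrate the flavour of a spectral-radius closed form for max-min power
control, the sketch below treats the classical special case of an affine
interference mapping I(p) = F p + v under a total power budget 1^T p <= P,
where the max-min utility equals 1/rho(F + (1/P) v 1^T). The matrices and
constraints in the paper above differ, so treat this purely as a numerical
illustration.

```python
# Numerical sketch of a spectral-radius closed form for max-min power
# control (classical total-power-constrained special case).
import numpy as np

rng = np.random.default_rng(0)
K, P = 4, 10.0
F = 0.1 * rng.random((K, K)); np.fill_diagonal(F, 0.0)  # cross interference
v = 0.05 * np.ones(K)                                    # noise terms

B = F + np.outer(v, np.ones(K)) / P
eigvals, eigvecs = np.linalg.eig(B)
i = np.argmax(eigvals.real)                   # Perron eigenvalue of B >= 0
t_star = 1.0 / eigvals.real[i]                # optimal max-min utility level

p = np.abs(eigvecs[:, i].real)
p *= P / p.sum()                              # Perron vector at full power
print("utility:", t_star, "achieved:", (p / (F @ p + v)).min())
```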
2205.01527
|
Daniel S. Katz
|
Kyle Chard, Yadu Babuji, Anna Woodard, Ben Clifford, Zhuozhao Li,
Mihael Hategan, Ian Foster, Mike Wilde, Daniel S. Katz
|
Extended Abstract: Productive Parallel Programming with Parsl
| null |
ACM SIGAda Ada Letters 40 (2), 73-75, 2020
|
10.1145/3463478.3463486
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Parsl is a parallel programming library for Python that aims to make it easy
to specify parallelism in programs and to realize that parallelism on arbitrary
parallel and distributed computing systems. Parsl relies on developers
annotating Python functions (wrapping either Python or external applications)
to indicate that these functions may be executed concurrently. Developers can then
link together functions via the exchange of data. Parsl establishes a dynamic
dependency graph and sends tasks for execution on connected resources when
dependencies are resolved. Parsl's runtime system enables different compute
resources to be used, from laptops to supercomputers, without modification to
the Parsl program.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 14:29:42 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 17:33:19 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Chard",
"Kyle",
""
],
[
"Babuji",
"Yadu",
""
],
[
"Woodard",
"Anna",
""
],
[
"Clifford",
"Ben",
""
],
[
"Li",
"Zhuozhao",
""
],
[
"Hategan",
"Mihael",
""
],
[
"Foster",
"Ian",
""
],
[
"Wilde",
"Mike",
""
],
[
"Katz",
"Daniel S.",
""
]
] |
new_dataset
| 0.997585 |
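The Parsl abstract above describes annotated apps linked by dataflow; a small
runnable example consistent with Parsl's documented API follows (the
local-threads config import path is taken from recent Parsl releases).

```python
# Two annotated Python functions run concurrently; a dependency is expressed
# by passing one app's future into another.
import parsl
from parsl import python_app
from parsl.configs.local_threads import config

parsl.load(config)

@python_app
def square(x):
    return x * x

@python_app
def add(a, b):
    return a + b

# square(2) and square(3) may execute concurrently; add() waits on both.
futures = [square(2), square(3)]
total = add(futures[0], futures[1])
print(total.result())   # 13
```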
2205.01713
|
Maximiliano Cristia
|
Maximiliano Cristi\'a and Gianfranco Rossi
|
A Typechecker for a Set-Based Constraint Logic Programming Language
| null | null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
{log} (read 'setlog') is a Constraint Logic Programming (CLP) language and
satisfiability solver whose constraint domain is the theory of finite sets.
Rooted in CLP and Prolog, {log} essentially provides an untyped language. As
such it can accept formulas that make the solver produce unwanted behaviors.
Besides, {log} users may make mistakes in their programs that would normally be
caught by a typechecker. In particular, {log} has been proposed as a
prototyping language for B and Z specifications, which are typed formalisms.
Then, without a type system for {log} there is a gap that users need to fill
manually. Therefore, in this paper we define a type system and implement a
typechecker for {log}. The type system is proved to be safe (sound) by adapting
the functional programming formulation of type safety to the CLP context. We
also show how types and CLP can be combined to provide stronger assurances on
program correctness. Finally, we apply the type system, the typechecker and
their combination with CLP to a real-world case study from the aeronautic
domain.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 18:23:07 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Cristiá",
"Maximiliano",
""
],
[
"Rossi",
"Gianfranco",
""
]
] |
new_dataset
| 0.997692 |
2205.01724
|
Saeed Ranjbar Alvar
|
Saeed Ranjbar Alvar, Korcan Uyanik, and Ivan V. Baji\'c
|
License Plate Privacy in Collaborative Visual Analysis of Traffic Scenes
|
submitted to IEEE MIPR'22
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic scene analysis is important for emerging technologies such as smart
traffic management and autonomous vehicles. However, such analysis also poses
potential privacy threats. For example, a system that can recognize license
plates may construct patterns of behavior of the corresponding vehicles' owners
and use that for various illegal purposes. In this paper we present a system
that enables traffic scene analysis while at the same time preserving license
plate privacy. The system is based on a multi-task model whose latent space is
selectively compressed depending on the amount of information the specific
features carry about analysis tasks and private information. Effectiveness of
the proposed method is illustrated by experiments on the Cityscapes dataset,
for which we also provide license plate annotations.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 18:47:27 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Alvar",
"Saeed Ranjbar",
""
],
[
"Uyanik",
"Korcan",
""
],
[
"Bajić",
"Ivan V.",
""
]
] |
new_dataset
| 0.981406 |
2205.01791
|
Wenshan Wang
|
Samuel Triest, Matthew Sivaprakasam, Sean J. Wang, Wenshan Wang, Aaron
M. Johnson, Sebastian Scherer
|
TartanDrive: A Large-Scale Dataset for Learning Off-Road Dynamics Models
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present TartanDrive, a large scale dataset for learning dynamics models
for off-road driving. We collected a dataset of roughly 200,000 off-road
driving interactions on a modified Yamaha Viking ATV with seven unique sensing
modalities in diverse terrains. To the authors' knowledge, this is the largest
real-world multi-modal off-road driving dataset, both in terms of number of
interactions and sensing modalities. We also benchmark several state-of-the-art
methods for model-based reinforcement learning from high-dimensional
observations on this dataset. We find that extending these models to
multi-modality leads to significant performance gains on off-road dynamics
prediction, especially in more challenging terrains. We also identify some
shortcomings with current neural network architectures for the off-road driving
task. Our dataset is available at https://github.com/castacks/tartan_drive.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 21:34:14 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Triest",
"Samuel",
""
],
[
"Sivaprakasam",
"Matthew",
""
],
[
"Wang",
"Sean J.",
""
],
[
"Wang",
"Wenshan",
""
],
[
"Johnson",
"Aaron M.",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
new_dataset
| 0.9999 |
2205.01821
|
Yufei Tian
|
Yufei Tian and Nanyun Peng
|
Zero-shot Sonnet Generation with Discourse-level Planning and Aesthetics
Features
|
To appear in NAACL 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Poetry generation, and creative language generation in general, usually
suffers from the lack of large training data. In this paper, we present a novel
framework to generate sonnets that does not require training on poems. We
design a hierarchical framework which plans the poem sketch before decoding.
Specifically, a content planning module is trained on non-poetic texts to
obtain discourse-level coherence; then a rhyme module generates rhyme words and
a polishing module introduces imagery and similes for aesthetic purposes.
Finally, we design a constrained decoding algorithm to impose the
meter-and-rhyme constraint of the generated sonnets. Automatic and human
evaluation show that our multi-stage approach without training on poem corpora
generates more coherent, poetic, and creative sonnets than several strong
baselines.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 23:44:28 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Tian",
"Yufei",
""
],
[
"Peng",
"Nanyun",
""
]
] |
new_dataset
| 0.999004 |
2205.01841
|
Jinhao Jiang
|
Jinhao Jiang, Kun Zhou, Wayne Xin Zhao and Ji-Rong Wen
|
Great Truths are Always Simple: A Rather Simple Knowledge Encoder for
Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models
|
12 pages, NAACL-Findings-2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Commonsense reasoning in natural language is a desired ability of artificial
intelligence systems. For solving complex commonsense reasoning tasks, a typical
solution is to enhance pre-trained language models (PTMs) with a
knowledge-aware graph neural network (GNN) encoder that models a commonsense
knowledge graph (CSKG). Despite their effectiveness, these approaches are built
on heavy architectures and cannot clearly explain how external knowledge
resources improve the reasoning capacity of PTMs. Considering this issue, we
conduct a deep empirical analysis, and find that it is indeed relation features
from CSKGs (but not node features) that mainly contribute to the performance
improvement of PTMs. Based on this finding, we design a simple MLP-based
knowledge encoder that utilizes statistical relation paths as features.
Extensive experiments conducted on five benchmarks demonstrate the
effectiveness of our approach, which also largely reduces the parameters for
encoding CSKGs. Our codes and data are publicly available at
https://github.com/RUCAIBox/SAFE.
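For intuition, a minimal sketch of what an MLP-based knowledge encoder over relation-path counts might look like is given below; the feature dimension, layer sizes, and scoring head are assumptions for illustration, not the authors' exact architecture.

```python
# A minimal sketch (architecture details assumed) of an MLP encoder over
# statistical relation-path features, in the spirit of the approach
# above; the feature dimension and layer sizes are illustrative.
import torch
import torch.nn as nn

class RelationPathEncoder(nn.Module):
    def __init__(self, n_path_features: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_path_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),      # scalar knowledge score
        )

    def forward(self, path_counts):    # (batch, n_path_features)
        return self.mlp(path_counts)

enc = RelationPathEncoder(n_path_features=300)
print(enc(torch.rand(4, 300)).shape)   # torch.Size([4, 1])
```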
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 01:27:36 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Jiang",
"Jinhao",
""
],
[
"Zhou",
"Kun",
""
],
[
"Zhao",
"Wayne Xin",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
new_dataset
| 0.995562 |
2205.01850
|
Chenyu Zhang
|
Chenyu Zhang, Benjamin Van Durme, Zhuowan Li, Elias Stengel-Eskin
|
Visual Commonsense in Pretrained Unimodal and Multimodal Models
|
To appear in NAACL 2022
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Our commonsense knowledge about objects includes their typical visual
attributes; we know that bananas are typically yellow or green, and not purple.
Text and image corpora, being subject to reporting bias, represent this
world-knowledge to varying degrees of faithfulness. In this paper, we
investigate to what degree unimodal (language-only) and multimodal (image and
language) models capture a broad range of visually salient attributes. To that
end, we create the Visual Commonsense Tests (ViComTe) dataset covering 5
property types (color, shape, material, size, and visual co-occurrence) for
over 5000 subjects. We validate this dataset by showing that our grounded color
data correlates much better than ungrounded text-only data with crowdsourced
color judgments provided by Paik et al. (2021). We then use our dataset to
evaluate pretrained unimodal models and multimodal models. Our results indicate
that multimodal models better reconstruct attribute distributions, but are
still subject to reporting bias. Moreover, increasing model size does not
enhance performance, suggesting that the key to visual commonsense lies in the
data.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 02:07:55 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Zhang",
"Chenyu",
""
],
[
"Van Durme",
"Benjamin",
""
],
[
"Li",
"Zhuowan",
""
],
[
"Stengel-Eskin",
"Elias",
""
]
] |
new_dataset
| 0.999631 |
2205.01932
|
Simon Fernandez
|
Simon Fernandez (LIG), Maciej Korczy\'nski (LIG), Andrzej Duda (LIG)
|
Early Detection of Spam Domains with Passive DNS and SPF
| null |
Passive and Active Measurement, 13210, Springer International
Publishing, pp.30-49, 2022, Lecture Notes in Computer Science
|
10.1007/978-3-030-98785-5_2
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spam domains are sources of unsolicited mails and one of the primary vehicles
for fraud and malicious activities such as phishing campaigns or malware
distribution. Spam domain detection is a race: as soon as the spam mails are
sent, taking down the domain or blacklisting it is of limited use, as spammers
have to register a new domain for their next campaign. To prevent malicious
actors from sending mails, we need to detect them as fast as possible and,
ideally, even before the campaign is launched. In this paper, using
near-real-time passive DNS data from Farsight Security, we monitor the DNS
traffic of newly registered domains and the contents of their TXT records, in
particular, the configuration of the Sender Policy Framework, an anti-spoofing
protocol for domain names and the first line of defense against devastating
Business Email Compromise scams. Because spammers and benign domains have
different SPF rules and different traffic profiles, we build a new method to
detect spam domains using features collected from passive DNS traffic. Using
the SPF configuration and the traffic to the TXT records of a domain, we
accurately detect a significant proportion of spam domains with a low
false-positive rate, demonstrating its potential in real-world deployments. Our
classification scheme can detect spam domains before they send any mail, using
only a single DNS query and later on, it can refine its classification by
monitoring more traffic to the domain name.
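As a rough illustration of the single-DNS-query signal mentioned above, the sketch below fetches a domain's TXT records with the third-party dnspython package and extracts an SPF rule; this is illustrative tooling, not the authors' pipeline.

```python
# An illustrative sketch of the single-query signal described above:
# fetch a domain's TXT records and extract its SPF rule using the
# third-party dnspython package (not the authors' pipeline).
import dns.resolver

def get_spf(domain: str):
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode(errors="replace")
        if txt.lower().startswith("v=spf1"):
            return txt
    return None

print(get_spf("example.com"))
```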
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 08:10:11 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Fernandez",
"Simon",
"",
"LIG"
],
[
"Korczyński",
"Maciej",
"",
"LIG"
],
[
"Duda",
"Andrzej",
"",
"LIG"
]
] |
new_dataset
| 0.978659 |
2205.01989
|
Gullal Singh Cheema
|
Gullal S. Cheema, Sherzod Hakimov, Abdul Sittar, Eric M\"uller-Budack,
Christian Otto, Ralph Ewerth
|
MM-Claims: A Dataset for Multimodal Claim Detection in Social Media
|
Accepted to Findings of NAACL 2022
| null | null | null |
cs.CL cs.AI cs.CV cs.MM cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, the problem of misinformation on the web has become
widespread across languages, countries, and various social media platforms.
Although there has been much work on automated fake news detection, the role of
images and their variety are not well explored. In this paper, we investigate
the roles of image and text at an earlier stage of the fake news detection
pipeline, called claim detection. For this purpose, we introduce a novel
dataset, MM-Claims, which consists of tweets and corresponding images over
three topics: COVID-19, Climate Change and broadly Technology. The dataset
contains roughly 86,000 tweets, out of which 3,400 are labeled manually by
multiple annotators for the training and evaluation of multimodal models. We
describe the dataset in detail, evaluate strong unimodal and multimodal
baselines, and analyze the potential and drawbacks of current models.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 10:43:58 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Cheema",
"Gullal S.",
""
],
[
"Hakimov",
"Sherzod",
""
],
[
"Sittar",
"Abdul",
""
],
[
"Müller-Budack",
"Eric",
""
],
[
"Otto",
"Christian",
""
],
[
"Ewerth",
"Ralph",
""
]
] |
new_dataset
| 0.999861 |
2205.02031
|
Ngoc Long Nguyen
|
Ngoc Long Nguyen, J\'er\'emy Anger, Axel Davy, Pablo Arias, and
Gabriele Facciolo
|
Self-Supervised Super-Resolution for Multi-Exposure Push-Frame
Satellites
|
CVPR 2022
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Modern Earth observation satellites capture multi-exposure bursts of
push-frame images that can be super-resolved via computational means. In this
work, we propose a super-resolution method for such multi-exposure sequences, a
problem that has received very little attention in the literature. The proposed
method can handle the signal-dependent noise in the inputs, process sequences
of any length, and be robust to inaccuracies in the exposure times.
Furthermore, it can be trained end-to-end with self-supervision, without
requiring ground truth high resolution frames, which makes it especially suited
to handle real data. Central to our method are three key contributions: i) a
base-detail decomposition for handling errors in the exposure times, ii) a
noise-level-aware feature encoding for improved fusion of frames with varying
signal-to-noise ratio, and iii) a permutation-invariant fusion strategy based on
temporal pooling operators. We evaluate the proposed method on synthetic and
real data and show that it outperforms by a significant margin existing
single-exposure approaches that we adapted to the multi-exposure case.
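Contribution iii) can be illustrated compactly: pooling per-frame features over the temporal axis yields a fused representation that is independent of frame order and sequence length. The sketch below uses tensor shapes assumed for illustration.

```python
# A sketch of permutation-invariant fusion by temporal pooling, as in
# contribution iii) above: per-frame features are pooled over the time
# axis, so the result is independent of frame order and sequence length.
# Tensor shapes are assumptions for illustration.
import torch

frame_feats = torch.randn(7, 64, 32, 32)   # (T, C, H, W), any T
fused = torch.cat([frame_feats.max(dim=0).values,
                   frame_feats.mean(dim=0)], dim=0)  # (2C, H, W)
print(fused.shape)
```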
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 12:42:57 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Nguyen",
"Ngoc Long",
""
],
[
"Anger",
"Jérémy",
""
],
[
"Davy",
"Axel",
""
],
[
"Arias",
"Pablo",
""
],
[
"Facciolo",
"Gabriele",
""
]
] |
new_dataset
| 0.960211 |
2205.02065
|
Julien Posso
|
Julien Posso, Guy Bois, Yvon Savaria
|
Mobile-URSONet: an Embeddable Neural Network for Onboard Spacecraft Pose
Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spacecraft pose estimation is an essential computer vision application that
can improve the autonomy of in-orbit operations. An ESA/Stanford competition
brought out solutions that seem hardly compatible with the constraints imposed
on spacecraft onboard computers. URSONet is among the best in the competition
for its generalization capabilities but at the cost of a tremendous number of
parameters and high computational complexity. In this paper, we propose
Mobile-URSONet: a spacecraft pose estimation convolutional neural network with
178 times fewer parameters while degrading accuracy by no more than four times
compared to URSONet.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 13:54:34 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Posso",
"Julien",
""
],
[
"Bois",
"Guy",
""
],
[
"Savaria",
"Yvon",
""
]
] |
new_dataset
| 0.99947 |
2205.02093
|
Roshanak Ashrafi
|
Roshanak Ashrafi, Mona Azarbayjania, Hamed Tabkhi
|
A Novel Fully Annotated Thermal Infrared Face Dataset: Recorded in
Various Environment Conditions and Distances From The Camera
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Facial thermography is one of the most popular research areas in infrared
thermal imaging, with diverse applications in medical, surveillance, and
environmental monitoring. However, in contrast to facial imagery in the visual
spectrum, the lack of public datasets on facial thermal images is an obstacle
to research improvement in this area. Thermal face imagery is still a
relatively new research area to be evaluated and studied in different
domains. The current thermal face datasets are limited with regard to the
subjects' distance from the camera, the ambient temperature variation, and
facial landmarks' localization. We address these gaps by presenting a new
facial thermography dataset. This article makes two main contributions to the
body of knowledge. First, it presents a comprehensive review and comparison of
current public datasets in facial thermography. Second, it introduces and
studies a novel public dataset on facial thermography, which we call
Charlotte-ThermalFace. Charlotte-ThermalFace contains more than 10,000 infrared
thermal images in varying thermal conditions, several distances from the
camera, and different head positions. The data is fully annotated with the
facial landmarks, ambient temperature, relative humidity, the air speed of the
room, distance to the camera, and subject thermal sensation at the time of
capturing each image. Our dataset is the first publicly available thermal
dataset annotated with the thermal sensation of each subject in different
thermal conditions and one of the few datasets in raw 16-bit format. Finally,
we present a preliminary analysis of the dataset to show the applicability and
importance of the thermal conditions in facial thermography. The full dataset,
including annotations, is freely available for research purposes at
https://github.com/TeCSAR-UNCC/UNCC-ThermalFace
|
[
{
"version": "v1",
"created": "Fri, 29 Apr 2022 17:57:54 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Ashrafi",
"Roshanak",
""
],
[
"Azarbayjania",
"Mona",
""
],
[
"Tabkhi",
"Hamed",
""
]
] |
new_dataset
| 0.999806 |
2205.02106
|
Pedro Fouto
|
Pedro Fouto, Pedro \'Akos Costa, Nuno Pregui\c{c}a and Jo\~ao Leit\~ao
|
Babel: A Framework for Developing Performant and Dependable Distributed
Protocols
|
18 pages, 2 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prototyping and implementing distributed algorithms, particularly those that
address challenges related to fault tolerance and dependability, is a
time-consuming task. This is, in part, due to the need to address low-level
aspects such as the management of communication channels, controlling timeouts
or periodic tasks, and dealing with concurrency issues. This has a significant
impact on researchers who want to build prototypes for conducting
experimental evaluation; on practitioners who want to compare different design
alternatives/solutions; and even on practical teaching activities in
distributed algorithms courses.
In this paper we present Babel, a novel framework to develop, implement, and
execute distributed protocols and systems. Babel promotes an event-driven
programming and execution model that simplifies the task of translating typical
specifications or descriptions of algorithms into performant prototypes, while
allowing the programmer to focus on the relevant challenges of these algorithms
by transparently handling time consuming low level aspects. Furthermore, Babel
provides, and allows the definition of, networking components that can capture
different network capabilities (e.g., P2P, Client/Server, phi-accrual Failure
Detector), making the code mostly independent from the underlying communication
aspects. Babel was built to be generic and can be used to implement a wide
variety of different classes of distributed protocols.
We conduct our experimental work with two relevant case studies, a
Peer-to-Peer application and a State Machine Replication application, that show
the generality and ease of use of Babel and present competitive performance
when compared with significantly more complex implementations.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 15:07:28 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Fouto",
"Pedro",
""
],
[
"Costa",
"Pedro Ákos",
""
],
[
"Preguiça",
"Nuno",
""
],
[
"Leitão",
"João",
""
]
] |
new_dataset
| 0.992526 |
2205.02117
|
Yuanzhi Yao
|
Zhengyu Yue, Yuanzhi Yao, Weihai Li, Nenghai Yu
|
ATDD: Fine-Grained Assured Time-Sensitive Data Deletion Scheme in Cloud
Storage
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of general cloud services, more and more
individuals or collectives use cloud platforms to store data. Assured data
deletion deserves investigation in cloud storage. In time-sensitive data
storage scenarios, it is necessary for cloud platforms to automatically destroy
data after the data owner-specified expiration time. Therefore, assured
time-sensitive data deletion should be sought. In this paper, a fine-grained
assured time-sensitive data deletion (ATDD) scheme in cloud storage is proposed
by embedding the time trapdoor in Ciphertext-Policy Attribute-Based Encryption
(CP-ABE). Time-sensitive data is self-destructed after the data owner-specified
expiration time so that the authorized users cannot get access to the related
data. In addition, a credential is returned to the data owner for data deletion
verification. This proposed scheme provides solutions for fine-grained access
control and verifiable data self-destruction. Detailed security and performance
analysis demonstrate the security and the practicability of the proposed
scheme.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 07:10:05 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Yue",
"Zhengyu",
""
],
[
"Yao",
"Yuanzhi",
""
],
[
"Li",
"Weihai",
""
],
[
"Yu",
"Nenghai",
""
]
] |
new_dataset
| 0.999192 |
2205.02141
|
Jianfa Chen
|
Jianfa Chen, Yue Yin, Yifan Xu
|
RecipeSnap -- a lightweight image-to-recipe model
|
7 pages, 3 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we address the problem of automatically recognizing photographed
cooking dishes and generating the corresponding food recipes. Current
image-to-recipe models are computationally expensive and require powerful GPUs
for model training and implementation. This high computational cost prevents
those existing models from being deployed on portable devices, like
smartphones. To solve this issue we introduce a lightweight image-to-recipe
prediction model, RecipeSnap, that reduces memory cost and computational cost
by more than 90% while still achieving 2.0 MedR, which is in line with the
state-of-the-art model. A pre-trained recipe encoder was used to compute recipe
embeddings. Recipes from the Recipe1M dataset and their corresponding recipe
embeddings are collected as a recipe library, which is used for image encoder
training and image queries later. We use MobileNet-V2 as the image encoder
backbone, which makes our model suitable for portable devices. This model can
be further developed into a smartphone application with little effort. A
comparison of the performance of this lightweight model with other, heavier
models is presented in this paper. Code, data, and models are publicly
accessible on GitHub.
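A hedged sketch of the retrieval step is shown below: a MobileNet-V2 backbone embeds the query image, and recipes are ranked by cosine similarity against a precomputed embedding library. The pooling, dimensions, and placeholder library are assumptions, not the released code.

```python
# A hedged sketch of the retrieval step: a MobileNet-V2 backbone embeds
# the query image and recipes are ranked by cosine similarity against a
# precomputed embedding library. Pooling, dimensions, and the random
# placeholder library are assumptions, not the released code.
import torch
import torchvision.models as models

backbone = models.mobilenet_v2(weights="IMAGENET1K_V1").features
backbone.eval()

def embed_image(batch):                      # (N, 3, 224, 224)
    with torch.no_grad():
        feats = backbone(batch).mean(dim=(2, 3))   # global average pool
    return torch.nn.functional.normalize(feats, dim=1)

recipe_emb = torch.nn.functional.normalize(torch.randn(1000, 1280), dim=1)
query = embed_image(torch.randn(1, 3, 224, 224))
scores = query @ recipe_emb.T                # cosine similarities
print(scores.topk(5).indices)                # top-5 recipe ids
```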
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 15:49:52 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Chen",
"Jianfa",
""
],
[
"Yin",
"Yue",
""
],
[
"Xu",
"Yifan",
""
]
] |
new_dataset
| 0.999105 |
2205.02142
|
Alejandro D\'iaz-Caro
|
Alejandro D\'iaz-Caro and Octavio Malherbe
|
Semimodules and the (syntactically-)linear lambda calculus
| null | null | null | null |
cs.LO math.CT math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a recent paper, the $\mathcal L^{\mathcal S}$-calculus has been defined.
It is a proof-language for a significant fragment of intuitionistic linear
logic. Its main feature is that the linearity properties can be expressed in
its syntax, since it has interstitial logical rules whose proof-terms are a sum
and a multiplication by scalar.
The calculus is parametrized on the structure $\mathcal S$. This structure
was originally identified with the field of complex numbers, since the calculus
is designed as a quantum lambda calculus. However, in this paper we show that a
semiring is enough, and we provide a categorical semantics for this calculus in
the category of cancellative semimodules over the given semiring. We prove the
semantics to be sound and adequate.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 15:50:23 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Díaz-Caro",
"Alejandro",
""
],
[
"Malherbe",
"Octavio",
""
]
] |
new_dataset
| 0.994279 |
2205.02203
|
Malintha Fernando
|
Malintha Fernando, Ransalu Senanayake, Ariful Azad, Martin Swany
|
Graphical Games for UAV Swarm Control Under Time-Varying Communication
Networks
|
Presented in Workshop on Intelligent Aerial Robotics, International
Conference on Robotics and Automation, 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a unified framework for coordinating Unmanned Aerial Vehicle (UAV)
swarms operating under time-varying communication networks. Our framework
builds on the concept of graphical games, which we argue provides a compelling
paradigm to subsume the interaction structures found in networked UAV swarms
thanks to the shared local neighborhood properties. We present a general-sum,
factorizable payoff function for cooperative UAV swarms based on the aggregated
local states and yield a Nash equilibrium for the stage games. Further, we
propose a decomposition-based approach to solve stage-graphical games in a
scalable and decentralized fashion by approximating virtual, mean
neighborhoods. Finally, we discuss extending the proposed framework toward
general-sum stochastic games by leveraging deep Q-learning and model-predictive
control.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 17:30:14 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Fernando",
"Malintha",
""
],
[
"Senanayake",
"Ransalu",
""
],
[
"Azad",
"Ariful",
""
],
[
"Swany",
"Martin",
""
]
] |
new_dataset
| 0.987859 |
2205.02226
|
Vitaliy Kurlin
|
Olga Anosova and Vitaliy Kurlin
|
Density functions of periodic sequences
|
12 pages, 4 figures, the latest version is at
http://kurlin.org/projects/periodic-geometry-topology/densities1D.pdf. arXiv
admin note: substantial text overlap with arXiv:2103.02749
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Periodic point sets model all solid crystalline materials whose structures
are determined in a rigid form and should be studied up to rigid motion or
isometry preserving inter-point distances. In 2021, H. Edelsbrunner et al.
introduced an infinite sequence of density functions that are continuous
isometry invariants of periodic point sets. These density functions turned out
to be highly non-trivial even in dimension 1 for periodic sequences of points
in the line. This paper fully describes the density functions of any periodic
sequence and their symmetry properties. The explicit description theoretically
confirms coincidences of density functions that were previously computed only
through finite samples.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 17:57:47 GMT"
}
] | 2022-05-05T00:00:00 |
[
[
"Anosova",
"Olga",
""
],
[
"Kurlin",
"Vitaliy",
""
]
] |
new_dataset
| 0.998403 |
2011.14619
|
Zhaoqi Su
|
Zhaoqi Su and Tao Yu and Yangang Wang and Yebin Liu
|
DeepCloth: Neural Garment Representation for Shape and Style Editing
| null | null |
10.1109/TPAMI.2022.3168569
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Garment representation, editing and animation are challenging topics in the
area of computer vision and graphics. It remains difficult for existing garment
representations to achieve smooth and plausible transitions between different
shapes and topologies. In this work, we introduce DeepCloth, a unified
framework for garment representation, reconstruction, animation and editing.
Our unified framework contains three components: First, we represent the garment
geometry with a "topology-aware UV-position map", which allows for the unified
description of various garments with different shapes and topologies by
introducing an additional topology-aware UV-mask for the UV-position map.
Second, to further enable garment reconstruction and editing, we contribute a
method to embed the UV-based representations into a continuous feature space,
which enables garment shape reconstruction and editing by optimization and
control in the latent space, respectively. Finally, we propose a garment
animation method by unifying our neural garment representation with body shape
and pose, which achieves plausible garment animation results leveraging the
dynamic information encoded by our shape and style representation, even under
drastic garment editing operations. To conclude, with DeepCloth, we move a step
forward in establishing a more flexible and general 3D garment digitization
framework. Experiments demonstrate that our method can achieve state-of-the-art
garment representation performance compared with previous methods.
|
[
{
"version": "v1",
"created": "Mon, 30 Nov 2020 08:42:38 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 14:13:57 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Su",
"Zhaoqi",
""
],
[
"Yu",
"Tao",
""
],
[
"Wang",
"Yangang",
""
],
[
"Liu",
"Yebin",
""
]
] |
new_dataset
| 0.998334 |
2105.01469
|
Mark Jerrum
|
Heng Guo and Mark Jerrum
|
Counting vertices of integral polytopes defined by facets
|
15 pages. Minor edits, including a small change to the title. This
version is accepted for publication in Discrete and Computational Geometry
| null | null | null |
cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a number of complexity results concerning the problem of counting
vertices of an integral polytope defined by a system of linear inequalities.
The focus is on polytopes with small integer vertices, particularly 0/1
polytopes and half-integral polytopes.
|
[
{
"version": "v1",
"created": "Tue, 4 May 2021 12:51:57 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 14:41:08 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Guo",
"Heng",
""
],
[
"Jerrum",
"Mark",
""
]
] |
new_dataset
| 0.999764 |
2108.07955
|
Jiang Yu
|
Yu Jiang, Lei Hu, Yongmei Zhang, and Xin Yang
|
WRICNet:A Weighted Rich-scale Inception Coder Network for
Multi-Resolution Remote Sensing Image Change Detection
| null | null |
10.1109/TGRS.2022.3145652
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Most models for remote sensing image change detection perform well only on
data sets of a specific resolution. To improve change detection effectiveness
on multi-resolution data sets, a weighted rich-scale inception coder network
(WRICNet) is proposed in this article, which fuses shallow multi-scale
features with deep multi-scale features. The weighted rich-scale inception
module of the proposed network obtains shallow multi-scale features, while the
weighted rich-scale coder module obtains deep multi-scale features. The
weighted scale block assigns appropriate weights to features of different
scales, which can strengthen the expressive ability of the edge of the
changing area. Performance experiments on the multi-resolution data set
demonstrate that, compared to the comparative methods, the proposed network
can further reduce false alarms outside the change area and missed alarms in
the change area; besides, the edge of the change area is more accurate. The
ablation study of the proposed network shows that the training strategy and
the improvements of this article can increase the effectiveness of change
detection.
|
[
{
"version": "v1",
"created": "Wed, 18 Aug 2021 02:56:11 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Jiang",
"Yu",
""
],
[
"Hu",
"Lei",
""
],
[
"Zhang",
"Yongmei",
""
],
[
"Yang",
"Xin",
""
]
] |
new_dataset
| 0.999115 |
2109.07148
|
Petr Plechac
|
Artjoms \v{S}e\c{l}a, Petr Plech\'a\v{c}, Alie Lassche
|
Semantics of European poetry is shaped by conservative forces: The
relationship between poetic meter and meaning in accentual-syllabic verse
| null | null |
10.1371/journal.pone.0266556
| null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recent advances in cultural analytics and large-scale computational studies
of art, literature and film often show that long-term change in the features of
artistic works happens gradually. These findings suggest that conservative
forces that shape creative domains might be underestimated. To this end, we
provide the first large-scale formal evidence of the persistent association
between poetic meter and semantics in 18th-19th century European literatures, using
Czech, German and Russian collections with additional data from English poetry
and early modern Dutch songs. Our study traces this association through a
series of clustering experiments using the abstracted semantic features of
150,000 poems. With the aid of topic modeling we infer semantic features for
individual poems. Texts were also lexically simplified across collections to
increase generalizability and decrease the sparseness of word frequency
distributions. Topics alone enable recognition of the meters in each observed
language, as may be seen from highly robust clustering of same-meter samples
(median Adjusted Rand Index between 0.48 and 1). In addition, this study shows
that the strength of the association between form and meaning tends to decrease
over time. This may reflect a shift in aesthetic conventions between the 18th
and 19th centuries as individual innovation was increasingly favored in
literature. Despite this decline, it remains possible to recognize semantics of
the meters from past or future, which suggests the continuity of semantic
traditions while also revealing the historical variability of conditions across
languages. This paper argues that distinct metrical forms, which are often
copied in a language over centuries, also maintain long-term semantic inertia
in poetry. Our findings, thus, highlight the role of the formal features of
cultural items in influencing the pace and shape of cultural evolution.
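For reference, the Adjusted Rand Index reported above measures agreement between a clustering and ground-truth labels; the toy values below are placeholders.

```python
# The Adjusted Rand Index (ARI) used above, illustrated with
# scikit-learn on toy cluster assignments; the values are placeholders.
from sklearn.metrics import adjusted_rand_score

true_meters = ["iamb", "iamb", "trochee", "trochee"]
clusters = [0, 0, 1, 0]
print(adjusted_rand_score(true_meters, clusters))  # 1.0 = perfect match
```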
|
[
{
"version": "v1",
"created": "Wed, 15 Sep 2021 08:20:01 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Šeļa",
"Artjoms",
""
],
[
"Plecháč",
"Petr",
""
],
[
"Lassche",
"Alie",
""
]
] |
new_dataset
| 0.977097 |
2109.12595
|
Song Feng
|
Song Feng and Siva Sankalp Patel and Hui Wan and Sachindra Joshi
|
MultiDoc2Dial: Modeling Dialogues Grounded in Multiple Documents
| null |
Proceedings of the 2021 Conference on Empirical Methods in Natural
Language Processing
|
10.18653/v1/2021.emnlp-main.498
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We propose MultiDoc2Dial, a new task and dataset on modeling goal-oriented
dialogues grounded in multiple documents. Most previous works treat
document-grounded dialogue modeling as a machine reading comprehension task
based on a single given document or passage. In this work, we aim to address
more realistic scenarios where a goal-oriented information-seeking conversation
involves multiple topics, and hence is grounded on different documents. To
facilitate such a task, we introduce a new dataset that contains dialogues
grounded in multiple documents from four different domains. We also explore
modeling the dialogue-based and document-based context in the dataset. We
present strong baseline approaches and various experimental results, aiming to
support further research efforts on such a task.
|
[
{
"version": "v1",
"created": "Sun, 26 Sep 2021 13:12:05 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Feng",
"Song",
""
],
[
"Patel",
"Siva Sankalp",
""
],
[
"Wan",
"Hui",
""
],
[
"Joshi",
"Sachindra",
""
]
] |
new_dataset
| 0.999562 |
2110.01711
|
Christian Schilling
|
Marcelo Forets and Christian Schilling
|
LazySets.jl: Scalable Symbolic-Numeric Set Computations
|
published in the Proceedings of the JuliaCon Conferences 2021
|
JuliaCon Proceedings (2021)
|
10.21105/jcon.00097
| null |
cs.MS cs.CG cs.NA math.NA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
LazySets.jl is a Julia library that provides ways to symbolically represent
sets of points as geometric shapes, with a special focus on convex sets and
polyhedral approximations. LazySets provides methods to apply common set
operations, convert between different set representations, and efficiently
compute with sets in high dimensions using specialized algorithms based on the
set types. LazySets is the core library of JuliaReach, a cutting-edge software
addressing the fundamental problem of reachability analysis: computing the set
of states that are reachable by a dynamical system from all initial states and
for all admissible inputs and parameters. While the library was originally
designed for reachability and formal verification, its scope goes beyond such
topics. LazySets is an easy-to-use, general-purpose and scalable library for
computations that mix symbolics and numerics. In this article we showcase the
basic functionality, highlighting some of the key design choices.
|
[
{
"version": "v1",
"created": "Mon, 4 Oct 2021 20:50:47 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Dec 2021 17:45:01 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Forets",
"Marcelo",
""
],
[
"Schilling",
"Christian",
""
]
] |
new_dataset
| 0.996606 |
2110.06635
|
Darius R\"uckert
|
Darius R\"uckert, Linus Franke, Marc Stamminger
|
ADOP: Approximate Differentiable One-Pixel Point Rendering
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present ADOP, a novel point-based, differentiable neural
rendering pipeline. Like other neural renderers, our system takes as input
calibrated camera images and a proxy geometry of the scene, in our case a point
cloud. To generate a novel view, the point cloud is rasterized with learned
feature vectors as colors and a deep neural network fills the remaining holes
and shades each output pixel. The rasterizer renders points as one-pixel
splats, which makes it very fast and allows us to compute gradients with
respect to all relevant input parameters efficiently. Furthermore, our pipeline
contains a fully differentiable physically-based photometric camera model,
including exposure, white balance, and a camera response function. Following
the idea of inverse rendering, we use our renderer to refine its input in order
to reduce inconsistencies and optimize the quality of its output. In
particular, we can optimize structural parameters like the camera pose, lens
distortions, point positions and features, and a neural environment map, but
also photometric parameters like camera response function, vignetting, and
per-image exposure and white balance. Because our pipeline includes photometric
parameters, e.g., exposure and camera response function, our system can smoothly
handle input images with varying exposure and white balance, and generates
high-dynamic range output. We show that due to the improved input, we can
achieve high render quality, also for difficult input, e.g. with imperfect
camera calibrations, inaccurate proxy geometry, or varying exposure. As a
result, a simpler and thus faster deep neural network is sufficient for
reconstruction. In combination with the fast point rasterization, ADOP achieves
real-time rendering rates even for models with well over 100M points.
https://github.com/darglein/ADOP
|
[
{
"version": "v1",
"created": "Wed, 13 Oct 2021 10:55:39 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Oct 2021 19:44:23 GMT"
},
{
"version": "v3",
"created": "Tue, 3 May 2022 08:19:39 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Rückert",
"Darius",
""
],
[
"Franke",
"Linus",
""
],
[
"Stamminger",
"Marc",
""
]
] |
new_dataset
| 0.981148 |
2110.14223
|
Runmin Cong
|
Runmin Cong, Yumo Zhang, Leyuan Fang, Jun Li, Yao Zhao, and Sam Kwong
|
RRNet: Relational Reasoning Network with Parallel Multi-scale Attention
for Salient Object Detection in Optical Remote Sensing Images
|
11 pages, 9 figures, Accepted by IEEE Transactions on Geoscience and
Remote Sensing 2021, project: https://rmcong.github.io/proj_RRNet.html
| null |
10.1109/TGRS.2021.3123984
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Salient object detection (SOD) for optical remote sensing images (RSIs) aims
at locating and extracting visually distinctive objects/regions from the
optical RSIs. Although some saliency models have been proposed to solve the
intrinsic problems of optical RSIs (such as complex backgrounds and
scale-variant objects), their accuracy and completeness are still
unsatisfactory. To this end, we propose
a relational reasoning network with parallel multi-scale attention for SOD in
optical RSIs in this paper. The relational reasoning module that integrates the
spatial and the channel dimensions is designed to infer the semantic
relationship by utilizing high-level encoder features, thereby promoting the
generation of more complete detection results. The parallel multi-scale
attention module is proposed to effectively restore the detail information and
address the scale variation of salient objects by using the low-level features
refined by multi-scale attention. Extensive experiments on two datasets
demonstrate that our proposed RRNet outperforms the existing state-of-the-art
SOD competitors both qualitatively and quantitatively.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 07:18:32 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Feb 2022 00:52:00 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Cong",
"Runmin",
""
],
[
"Zhang",
"Yumo",
""
],
[
"Fang",
"Leyuan",
""
],
[
"Li",
"Jun",
""
],
[
"Zhao",
"Yao",
""
],
[
"Kwong",
"Sam",
""
]
] |
new_dataset
| 0.999598 |
2110.15943
|
Sewon Min
|
Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi
|
MetaICL: Learning to Learn In Context
|
19 pages, 2 figures. Published as a conference paper at NAACL 2022
(long). Code available at https://github.com/facebookresearch/MetaICL
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce MetaICL (Meta-training for In-Context Learning), a new
meta-training framework for few-shot learning where a pretrained language model
is tuned to do in-context learning on a large set of training tasks. This
meta-training enables the model to more effectively learn a new task in context
at test time, by simply conditioning on a few training examples with no
parameter updates or task-specific templates. We experiment on a large, diverse
collection of tasks consisting of 142 NLP datasets including classification,
question answering, natural language inference, paraphrase detection and more,
across seven different meta-training/target splits. MetaICL outperforms a range
of baselines including in-context learning without meta-training and multi-task
learning followed by zero-shot transfer. We find that the gains are
particularly significant for target tasks that have domain shifts from the
meta-training tasks, and that using a diverse set of the meta-training tasks is
key to improvements. We also show that MetaICL approaches (and sometimes beats)
the performance of models fully finetuned on the target task, and outperforms
much bigger models with nearly 8x more parameters. Finally, we show that MetaICL is
complementary to human-written instructions, and the best performance can be
achieved by combining both approaches.
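The inference-time conditioning is simple to sketch: k training examples are concatenated with the test input and fed to the model with no parameter updates. The template below is illustrative, not MetaICL's exact formatting.

```python
# A minimal sketch of the inference-time conditioning described above:
# k training examples are concatenated with the test input and fed to
# the model with no parameter updates. The template is illustrative.
def build_icl_prompt(train_examples, test_input):
    parts = [f"{x}\n{y}" for x, y in train_examples]
    parts.append(test_input)
    return "\n\n".join(parts)

demos = [("Review: a joyless two hours", "negative"),
         ("Review: warm and funny throughout", "positive")]
print(build_icl_prompt(demos, "Review: a quiet triumph"))
```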
|
[
{
"version": "v1",
"created": "Fri, 29 Oct 2021 17:42:08 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 10:36:39 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Min",
"Sewon",
""
],
[
"Lewis",
"Mike",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] |
new_dataset
| 0.998942 |
2111.12122
|
Osmar Luiz De Carvalho
|
Osmar Luiz Ferreira de Carvalho, Osmar Ab\'ilio de Carvalho J\'unior,
Anesmar Olino de Albuquerque, Nickolas Castro Santana, Dibio Leandro Borges,
Roberto Arnaldo Trancoso Gomes, Renato Fontes Guimar\~aes
|
Bounding Box-Free Instance Segmentation Using Semi-Supervised Learning
for Generating a City-Scale Vehicle Dataset
|
38 pages, 10 figures, submitted to journal
| null |
10.1109/JSTARS.2022.3169128
| null |
cs.CV cs.AI cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicle classification is a hot computer vision topic, with studies ranging
from ground-view up to top-view imagery. In remote sensing, the usage of
top-view images allows for understanding city patterns, vehicle concentration,
traffic management, and others. However, there are some difficulties when
aiming for pixel-wise classification: (a) most vehicle classification studies
use object detection methods, and most publicly available datasets are designed
for this task, (b) creating instance segmentation datasets is laborious, and
(c) traditional instance segmentation methods underperform on this task since
the objects are small. Thus, the present research objectives are: (1) propose a
novel semi-supervised iterative learning approach using GIS software, (2)
propose a box-free instance segmentation approach, and (3) provide a city-scale
vehicle dataset. The iterative learning procedure considered: (1) label a small
number of vehicles, (2) train on those samples, (3) use the model to classify
the entire image, (4) convert the image prediction into a polygon shapefile,
(5) correct some areas with errors and include them in the training data, and
(6) repeat until results are satisfactory. To separate instances, we considered
vehicle interior and vehicle borders, and the DL model was the U-net with the
Efficient-net-B7 backbone. When removing the borders, the vehicle interior
becomes isolated, allowing for unique object identification. To recover the
deleted 1-pixel borders, we proposed a simple method to expand each prediction.
The results show better pixel-wise metrics when compared to the Mask-RCNN (82%
against 67% in IoU). On per-object analysis, the overall accuracy, precision,
and recall were greater than 90%. This pipeline applies to any remote sensing
target, being very efficient for segmentation and generating datasets.
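The interior/border trick is easy to illustrate: once 1-pixel borders are deleted, touching vehicles become disconnected components that can be labeled individually and then expanded to approximate the removed borders. A toy sketch (not the authors' code) follows.

```python
# A toy sketch (not the authors' code) of the interior/border idea above:
# deleting 1-pixel borders leaves interiors as disconnected components,
# which can be labeled as unique objects and then expanded again.
import numpy as np
from scipy import ndimage

interior = np.array([[0, 1, 1, 0, 0, 0],
                     [0, 1, 1, 0, 1, 1],
                     [0, 0, 0, 0, 1, 1]], dtype=bool)

labels, n_objects = ndimage.label(interior)   # one id per vehicle
print(n_objects)                              # -> 2

# Expand each prediction by one pixel to approximate the deleted border.
expanded = ndimage.grey_dilation(labels, size=(3, 3))
print(expanded)
```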
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 19:42:12 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"de Carvalho",
"Osmar Luiz Ferreira",
""
],
[
"Júnior",
"Osmar Abílio de Carvalho",
""
],
[
"de Albuquerque",
"Anesmar Olino",
""
],
[
"Santana",
"Nickolas Castro",
""
],
[
"Borges",
"Dibio Leandro",
""
],
[
"Gomes",
"Roberto Arnaldo Trancoso",
""
],
[
"Guimarães",
"Renato Fontes",
""
]
] |
new_dataset
| 0.991494 |
2111.12126
|
Osmar Luiz De Carvalho
|
Osmar Luiz Ferreira de Carvalho, Osmar Ab\'ilio de Carvalho J\'unior,
Cristiano Rosa e Silva, Anesmar Olino de Albuquerque, Nickolas Castro
Santana, Dibio Leandro Borges, Roberto Arnaldo Trancoso Gomes, Renato Fontes
Guimar\~aes
|
Panoptic Segmentation Meets Remote Sensing
|
40 pages, 10 figures, submitted to journal
| null |
10.3390/rs14040965
| null |
cs.CV cs.AI cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Panoptic segmentation combines instance and semantic predictions, allowing
the detection of "things" and "stuff" simultaneously. Effectively approaching
panoptic segmentation in remotely sensed data can be auspicious in many
challenging problems since it allows continuous mapping and specific target
counting. Several difficulties have prevented the growth of this task in remote
sensing: (a) most algorithms are designed for traditional images, (b) image
labelling must encompass "things" and "stuff" classes, and (c) the annotation
format is complex. Thus, aiming to solve and increase the operability of
panoptic segmentation in remote sensing, this study has five objectives: (1)
create a novel data preparation pipeline for panoptic segmentation, (2) propose
an annotation conversion software to generate panoptic annotations; (3) propose
a novel dataset on urban areas, (4) modify the Detectron2 for the task, and (5)
evaluate difficulties of this task in the urban setting. We used an aerial
image with a 0.24-meter spatial resolution considering 14 classes. Our pipeline
considers three image inputs, and the proposed software uses point shapefiles
for creating samples in the COCO format. Our study generated 3,400 samples with
512x512 pixel dimensions. We used the Panoptic-FPN with two backbones
(ResNet-50 and ResNet-101), and the model evaluation considered semantic
instance and panoptic metrics. We obtained 93.9, 47.7, and 64.9 for the mean
IoU, box AP, and PQ. Our study presents the first effective pipeline for
panoptic segmentation and an extensive database for other researchers to use
and deal with other data or related problems requiring a thorough scene
understanding.
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 19:48:55 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Nov 2021 12:42:11 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"de Carvalho",
"Osmar Luiz Ferreira",
""
],
[
"Júnior",
"Osmar Abílio de Carvalho",
""
],
[
"Silva",
"Cristiano Rosa e",
""
],
[
"de Albuquerque",
"Anesmar Olino",
""
],
[
"Santana",
"Nickolas Castro",
""
],
[
"Borges",
"Dibio Leandro",
""
],
[
"Gomes",
"Roberto Arnaldo Trancoso",
""
],
[
"Guimarães",
"Renato Fontes",
""
]
] |
new_dataset
| 0.9756 |
2112.05536
|
Rob Scharff
|
Rob B.N. Scharff, Dirk-Jan Boonstra, Laurence Willemet, Xi Lin and
Micha\"el Wiertlewski
|
Rapid manufacturing of color-based hemispherical soft tactile fingertips
| null | null |
10.1109/RoboSoft54090.2022.9762136
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tactile sensing can provide access to information about the contact (i.e.
slippage, surface feature, friction), which is out of reach of vision but
crucial for manipulation. To access this information, a dense measurement of
the deformation of soft fingertips is necessary. Recently, tactile sensors that
rely on a camera looking at a deformable membrane have demonstrated that a
dense measurement of the contact is possible. However, their manufacturing can
be time-consuming and labor-intensive. Here, we show a new design method that
uses multi-color additive manufacturing and silicone casting to efficiently
manufacture soft marker-based tactile sensors that are able to capture the
three-dimensional deformation field at the interface with high resolution. Each
marker is composed of two superimposed color filters. The subtractive color
mixing encodes the normal deformation of the membrane, and the lateral
deformation is found by centroid detection. With this manufacturing method, we
can reach a density of 400 markers on a 21 mm radius hemisphere, allowing for
regular and dense measurement of the deformation. We calibrated and validated
the approach by finding the curvature of objects with a threefold increase in
accuracy as compared to previous implementations. The results demonstrate a
simple yet effective approach to manufacturing artificial fingertips for
capturing a rich image of the tactile interaction at the location of contact.
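The centroid-detection step for lateral deformation can be sketched with standard image-processing tools; the synthetic frame and threshold below are assumptions for illustration.

```python
# A toy sketch of marker centroid detection for lateral deformation, as
# described above; the synthetic frame and threshold are assumptions.
import numpy as np
from scipy import ndimage

frame = np.zeros((64, 64))
frame[20:24, 30:34] = 1.0          # one bright marker blob

labels, n = ndimage.label(frame > 0.5)
centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
print(centroids)                   # sub-pixel marker positions
```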
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 08:16:59 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 10:06:39 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Scharff",
"Rob B. N.",
""
],
[
"Boonstra",
"Dirk-Jan",
""
],
[
"Willemet",
"Laurence",
""
],
[
"Lin",
"Xi",
""
],
[
"Wiertlewski",
"Michaël",
""
]
] |
new_dataset
| 0.985888 |
2112.08594
|
Giscard Biamby
|
Giscard Biamby, Grace Luo, Trevor Darrell, Anna Rohrbach
|
Twitter-COMMs: Detecting Climate, COVID, and Military Multimodal
Misinformation
|
11 pages, 6 figures
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting out-of-context media, such as "mis-captioned" images on Twitter, is
a relevant problem, especially in domains of high public significance. In this
work we aim to develop defenses against such misinformation for the topics of
Climate Change, COVID-19, and Military Vehicles. We first present a large-scale
multimodal dataset with over 884k tweets relevant to these topics. Next, we
propose a detection method, based on the state-of-the-art CLIP model, that
leverages automatically generated hard image-text mismatches. While this
approach works well on our automatically constructed out-of-context tweets, we
aim to validate its usefulness on data representative of the real world. Thus,
we test it on a set of human-generated fakes created by mimicking in-the-wild
misinformation. We achieve an 11% detection improvement in a high precision
regime over a strong baseline. Finally, we share insights about our best model
design and analyze the challenges of this emerging threat.
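A CLIP-based consistency check in the spirit of the proposed detector can be sketched with the Hugging Face transformers API; the blank image and captions below are placeholders, and the actual detector is further tuned on automatically generated hard mismatches.

```python
# A sketch of CLIP-based image-text consistency scoring in the spirit of
# the detector above, via the Hugging Face transformers API; the blank
# image and captions are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))
captions = ["flood damage after the storm", "a military convoy on a road"]

inputs = processor(text=captions, images=image, return_tensors="pt",
                   padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image
print(logits)  # higher logit = caption more consistent with the image
```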
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 03:37:20 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 00:51:02 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Biamby",
"Giscard",
""
],
[
"Luo",
"Grace",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Rohrbach",
"Anna",
""
]
] |
new_dataset
| 0.999606 |
2201.05041
|
Tatiana Passali
|
T. Passali, T. Mavropoulos, G. Tsoumakas, G. Meditskos, S. Vrochidis
|
LARD: Large-scale Artificial Disfluency Generation
|
Accepted at LREC 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Disfluency detection is a critical task in real-time dialogue systems.
However, despite its importance, it remains a relatively unexplored field,
mainly due to the lack of appropriate datasets. At the same time, existing
datasets suffer from various issues, including class imbalance issues, which
can significantly affect the performance of the model on rare classes, as
demonstrated in this paper. To this end, we propose LARD, a method for
generating complex and realistic artificial disfluencies with little effort.
The proposed method can handle three of the most common types of disfluencies:
repetitions, replacements and restarts. In addition, we release a new
large-scale dataset with disfluencies that can be used on four different tasks:
disfluency detection, classification, extraction and correction. Experimental
results on the LARD dataset demonstrate that the data produced by the proposed
method can be effectively used for detecting and removing disfluencies, while
also addressing limitations of existing datasets.
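Of the three disfluency types handled, repetitions are the simplest to sketch; the function below is a hypothetical illustration in the spirit of the method, not the authors' implementation.

```python
# A hypothetical sketch of repetition-type disfluency generation in the
# spirit of the approach above (not the authors' implementation).
import random

def add_repetition(sentence: str, max_span: int = 3) -> str:
    tokens = sentence.split()
    start = random.randrange(len(tokens))
    span = tokens[start:start + random.randint(1, max_span)]
    # Duplicate a short span in place to create a repetition disfluency.
    return " ".join(tokens[:start] + span + tokens[start:])

random.seed(0)
print(add_repetition("i would like to book a flight to boston"))
```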
|
[
{
"version": "v1",
"created": "Thu, 13 Jan 2022 16:02:36 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 14:54:30 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Passali",
"T.",
""
],
[
"Mavropoulos",
"T.",
""
],
[
"Tsoumakas",
"G.",
""
],
[
"Meditskos",
"G.",
""
],
[
"Vrochidis",
"S.",
""
]
] |
new_dataset
| 0.980346 |
2201.08049
|
Gongyang Li
|
Gongyang Li and Zhi Liu and Zhen Bai and Weisi Lin and Haibin Ling
|
Lightweight Salient Object Detection in Optical Remote Sensing Images
via Feature Correlation
|
11 pages, 6 figures, Accepted by IEEE Transactions on Geoscience and
Remote Sensing 2022
| null |
10.1109/TGRS.2022.3145483
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Salient object detection in optical remote sensing images (ORSI-SOD) has been
widely explored for understanding ORSIs. However, previous methods focus mainly
on improving the detection accuracy while neglecting the cost in memory and
computation, which may hinder their real-world applications. In this paper, we
propose a novel lightweight ORSI-SOD solution, named CorrNet, to address these
issues. In CorrNet, we first lighten the backbone (VGG-16) and build a
lightweight subnet for feature extraction. Then, following the coarse-to-fine
strategy, we generate an initial coarse saliency map from high-level semantic
features in a Correlation Module (CorrM). The coarse saliency map serves as the
location guidance for low-level features. In CorrM, we mine the object location
information between high-level semantic features through the cross-layer
correlation operation. Finally, based on low-level detailed features, we refine
the coarse saliency map in the refinement subnet equipped with Dense
Lightweight Refinement Blocks, and produce the final fine saliency map. By
reducing the parameters and computations of each component, CorrNet ends up
having only 4.09M parameters and running with 21.09G FLOPs. Experimental
results on two public datasets demonstrate that our lightweight CorrNet
achieves competitive or even better performance compared with 26
state-of-the-art methods (including 16 large CNN-based methods and 2
lightweight methods), and meanwhile enjoys clear memory and run-time
efficiency. The code and results of our method are available at
https://github.com/MathLee/CorrNet.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 08:28:01 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Li",
"Gongyang",
""
],
[
"Liu",
"Zhi",
""
],
[
"Bai",
"Zhen",
""
],
[
"Lin",
"Weisi",
""
],
[
"Ling",
"and Haibin",
""
]
] |
new_dataset
| 0.998336 |
2201.09310
|
Hojjat Salehinejad
|
Hojjat Salehinejad and Shahrokh Valaee
|
LiteHAR: Lightweight Human Activity Recognition from WiFi Signals with
Random Convolution Kernels
|
Accepted for presentation at IEEE ICASSP 2022. Copyright 2022 IEEE
| null |
10.1109/ICASSP43922.2022.9746803
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anatomical movements of the human body can change the channel state
information (CSI) of wireless signals in an indoor environment. These changes
in the CSI signals can be used for human activity recognition (HAR), a
prominent and unique approach because it preserves privacy and can flexibly
capture motions in non-line-of-sight environments. Existing models for HAR
generally have a high computational complexity, contain a very large number of
trainable parameters, and require extensive computational resources. This issue
is particularly important for implementation of these solutions on devices with
limited resources, such as edge devices. In this paper, we propose a
lightweight human activity recognition (LiteHAR) approach which, unlike the
state-of-the-art deep learning models, does not require extensive training of
a large number of parameters. This approach uses randomly initialized
convolution kernels for feature extraction from CSI signals without training
the kernels. The extracted features are then classified using a Ridge
regression classifier,
which has a linear computational complexity and is very fast. LiteHAR is
evaluated on a public benchmark dataset and the results show its high
classification performance in comparison with the complex deep learning models
with a much lower computational complexity.
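The pipeline is compact enough to sketch end to end: untrained random 1D convolution kernels produce pooled features (here ROCKET-style max and proportion-of-positive-values pooling, an assumption) that feed a fast linear Ridge classifier. Signals and labels below are placeholders.

```python
# An end-to-end sketch of the approach above: untrained random 1D
# convolution kernels extract features from each (placeholder) CSI time
# series, pooled ROCKET-style (max and proportion of positive values,
# an assumption), then a linear Ridge classifier is fit.
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(0)
kernels = [rng.standard_normal(rng.choice([7, 9, 11]))
           for _ in range(100)]                 # never trained

def features(signal):
    feats = []
    for k in kernels:
        resp = np.convolve(signal, k, mode="valid")
        feats += [resp.max(), (resp > 0).mean()]
    return feats

X = np.stack([features(rng.standard_normal(256)) for _ in range(40)])
y = rng.integers(0, 2, size=40)                 # placeholder labels
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(X, y)
print(clf.score(X, y))
```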
|
[
{
"version": "v1",
"created": "Sun, 23 Jan 2022 16:48:12 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Salehinejad",
"Hojjat",
""
],
[
"Valaee",
"Shahrokh",
""
]
] |
new_dataset
| 0.999473 |
2202.03497
|
Wenzhong Yan
|
Wenzhong Yan and Ankur Mehta
|
A crawling robot driven by a folded self-sustained oscillator
|
6 pages, 8 figures, has been accepted by RoboSoft 2022
|
2022 IEEE 5th International Conference on Soft Robotics (RoboSoft)
|
10.1109/RoboSoft54090.2022.9762079
| null |
cs.RO physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Locomotive robots that do not rely on electronics and/or electromagnetic
components will open up new perspectives and applications for robotics.
However, these robots usually involve complicated and tedious fabrication
processes, limiting their applications. Here, we develop an easy-to-fabricate
crawling robot by embedding simple control and actuation into origami-inspired
mechanisms through folding, eliminating the need for discrete electronics and
transducers. Our crawling robot locomotes through directional friction
propelled by an onboard origami self-sustained oscillator, which generates
periodic actuation from a single source of constant power. The crawling robot
is lightweight (~3.8 grams), ultra-low-cost (~US $1), nonmagnetic, and
electronics-free; it may enable practical applications in extreme environments,
e.g., large radiation or magnetic fields. The robot can be fabricated through a
monolithic origami-inspired folding-based method with universal materials,
i.e., sheet materials and conductive threads. This rapid design and fabrication
approach enables the programmable assembly of various mechanisms within this
manufacturing paradigm, laying the foundation for autonomous, untethered robots
without requiring electronics.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 20:22:34 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 01:04:47 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Yan",
"Wenzhong",
""
],
[
"Mehta",
"Ankur",
""
]
] |
new_dataset
| 0.995227 |
2203.15198
|
Zhiwu Zheng
|
Zhiwu Zheng, Prakhar Kumar, Yenan Chen, Hsin Cheng, Sigurd Wagner,
Minjie Chen, Naveen Verma and James C. Sturm
|
Model-Based Control of Planar Piezoelectric Inchworm Soft Robot for
Crawling in Constrained Environments
|
Accepted to the 2022 IEEE 5th International Conference on Soft
Robotics (RoboSoft). Project website: https://piezorobotcontroller.github.io/
Summary video: https://youtu.be/Md-Uo-pUaIs
|
2022 IEEE 5th International Conference on Soft Robotics
(RoboSoft), 693-698
|
10.1109/RoboSoft54090.2022.9762147
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soft robots have drawn significant attention recently for their ability to
achieve rich shapes when interacting with complex environments. However, their
elasticity and flexibility compared to rigid robots also pose significant
challenges for precise and robust shape control in real-time. Motivated by
their potential to operate in highly-constrained environments, as in
search-and-rescue operations, this work addresses these challenges of soft
robots by developing a model-based full-shape controller, validated and
demonstrated by experiments. A five-actuator planar soft robot was constructed
with planar piezoelectric layers bonded to a steel foil substrate, enabling
inchworm-like motion. The controller uses a soft-body continuous model for
shape planning and control, given target shapes and/or environmental
constraints, such as crawling under overhead barriers or "roof" safety lines.
An approach to background model calibration is developed to address deviations
of the actual robot shape due to material parameter variations and drift. Full
experimental shape control and optimal movement under a roof safety line are
demonstrated, where the robot maximizes its speed within the overhead
constraint. The mean-squared error between the measured and target shapes
improves from ~0.05 cm$^{2}$ without calibration to ~0.01 cm$^{2}$ with
calibration. Simulation-based validation is also performed with various roof
shapes.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 02:35:57 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Zheng",
"Zhiwu",
""
],
[
"Kumar",
"Prakhar",
""
],
[
"Chen",
"Yenan",
""
],
[
"Cheng",
"Hsin",
""
],
[
"Wagner",
"Sigurd",
""
],
[
"Chen",
"Minjie",
""
],
[
"Verma",
"Naveen",
""
],
[
"Sturm",
"James C.",
""
]
] |
new_dataset
| 0.999136 |
2204.05991
|
Sanjay Subramanian
|
Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner,
Sameer Singh, Anna Rohrbach
|
ReCLIP: A Strong Zero-Shot Baseline for Referring Expression
Comprehension
|
ACL 2022
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training a referring expression comprehension (ReC) model for a new visual
domain requires collecting referring expressions, and potentially corresponding
bounding boxes, for images in the domain. While large-scale pre-trained models
are useful for image classification across domains, it remains unclear if they
can be applied in a zero-shot manner to more complex tasks like ReC. We present
ReCLIP, a simple but strong zero-shot baseline that repurposes CLIP, a
state-of-the-art large-scale model, for ReC. Motivated by the close connection
between ReC and CLIP's contrastive pre-training objective, the first component
of ReCLIP is a region-scoring method that isolates object proposals via
cropping and blurring, and passes them to CLIP. However, through controlled
experiments on a synthetic dataset, we find that CLIP is largely incapable of
performing spatial reasoning off-the-shelf. Thus, the second component of
ReCLIP is a spatial relation resolver that handles several types of spatial
relations. We reduce the gap between zero-shot baselines from prior work and
supervised models by as much as 29% on RefCOCOg, and on RefGTA (video game
imagery), ReCLIP's relative improvement over supervised ReC models trained on
real images is 8%.
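The region-scoring idea described here (isolate each proposal by cropping and blurring the rest of the image, then score it against the expression with CLIP) lends itself to a short sketch. In the minimal version below, `clip_score` is a hypothetical stand-in for any CLIP image-text similarity call, and the blur radius is an illustrative assumption, not the authors' exact implementation.

```python
# Sketch of ReCLIP-style region scoring via cropping and blurring.
from PIL import Image, ImageFilter

def isolate_proposal(image: Image.Image, box, blur_radius=10):
    """Blur everything outside `box` (x0, y0, x1, y1); keep the region sharp."""
    blurred = image.filter(ImageFilter.GaussianBlur(blur_radius))
    blurred.paste(image.crop(box), box[:2])
    return blurred

def rank_proposals(image, boxes, expression, clip_score):
    """Return boxes sorted by CLIP similarity to the referring expression."""
    scored = [(clip_score(isolate_proposal(image, b), expression), b)
              for b in boxes]
    return [b for _, b in sorted(scored, key=lambda sb: sb[0], reverse=True)]
```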
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 17:55:38 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2022 20:08:17 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Subramanian",
"Sanjay",
""
],
[
"Merrill",
"William",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Gardner",
"Matt",
""
],
[
"Singh",
"Sameer",
""
],
[
"Rohrbach",
"Anna",
""
]
] |
new_dataset
| 0.99327 |
2204.08535
|
Yujie Lu
|
Yujie Lu, Wanrong Zhu, Xin Eric Wang, Miguel Eckstein, William Yang
Wang
|
Imagination-Augmented Natural Language Understanding
|
NAACL 2022 Main Conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Human brains integrate linguistic and perceptual information simultaneously
to understand natural language, and possess the critical ability to form
mental imagery. Such abilities enable us to construct new abstract concepts or
concrete objects, and are essential for bringing practical knowledge to bear on
problems in low-resource scenarios. However, most existing methods for Natural
Language Understanding (NLU) focus mainly on textual signals. They do not
simulate the human visual imagination ability, which hinders models from inferring
and learning efficiently from limited data samples. Therefore, we introduce an
Imagination-Augmented Cross-modal Encoder (iACE) to solve natural language
understanding tasks from a novel learning perspective -- imagination-augmented
cross-modal understanding. iACE enables visual imagination with external
knowledge transferred from the powerful generative and pre-trained
vision-and-language models. Extensive experiments on GLUE and SWAG show that
iACE achieves consistent improvement over visually-supervised pre-trained
models. More importantly, results in extreme and normal few-shot settings
validate the effectiveness of iACE in low-resource natural language
understanding circumstances.
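The fusion described here, pairing a text encoding with the encoding of an image generated ("imagined") from that text, can be illustrated with a toy sketch. Everything below is an illustrative assumption (random stand-in embeddings, arbitrary dimensions), not the authors' actual iACE architecture.

```python
# Toy sketch of imagination-augmented classification: fuse text and
# imagined-image embeddings, then classify the fused representation.
import torch
import torch.nn as nn

class ImaginationAugmentedClassifier(nn.Module):
    def __init__(self, text_dim=768, img_dim=512, n_classes=2):
        super().__init__()
        self.fuse = nn.Linear(text_dim + img_dim, 256)
        self.head = nn.Linear(256, n_classes)

    def forward(self, text_emb, imagined_img_emb):
        # Concatenate the two modalities and classify the fused view.
        z = torch.relu(self.fuse(torch.cat([text_emb, imagined_img_emb], dim=-1)))
        return self.head(z)

# Usage with random stand-in embeddings for a batch of 4 examples.
model = ImaginationAugmentedClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 2])
```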
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2022 19:39:36 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2022 18:15:41 GMT"
},
{
"version": "v3",
"created": "Tue, 3 May 2022 06:21:21 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Lu",
"Yujie",
""
],
[
"Zhu",
"Wanrong",
""
],
[
"Wang",
"Xin Eric",
""
],
[
"Eckstein",
"Miguel",
""
],
[
"Wang",
"William Yang",
""
]
] |
new_dataset
| 0.995162 |
2204.12425
|
Frederic Fol Leymarie
|
Frederic Fol Leymarie, William Latham, Guido Salimbeni, Suhail A.
Islam, Christopher Reynolds, Charlie Cook, Luis Armas Suarez, Richard
Leinfellner and Michael J. E. Sternberg
|
Bioblox 2.5D -- Developing an Educational Game Based on Protein Docking
|
9 pages
| null | null | null |
cs.HC cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present the development process of Bioblox2-5D, an educational biology
game aimed at teenagers. The game content is based on protein docking and aims to
improve learning about molecular shape complexity, the role of charges in
molecular docking, and the scoring function used to calculate binding affinity. We
developed the game as part of a collaboration between the Computing Department
at Goldsmiths, University of London, and the Structural Bioinformatics group at
Imperial College London. The team at Imperial provided the content requirements
and validated the technical solution adopted in the game. The team at
Goldsmiths designed and implemented the content requirements into a fun and
stimulating educational puzzle game that supports teaching and motivates
students to engage with biology. We illustrate the game design choices, the
compromises and solutions that we applied to accomplish the desired learning
outcomes. This paper aims to offer useful insights and inspiration for the
development of educational games for biology students.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 16:36:03 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 13:24:47 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Leymarie",
"Frederic Fol",
""
],
[
"Latham",
"William",
""
],
[
"Salimbeni",
"Guido",
""
],
[
"Islam",
"Suhail A.",
""
],
[
"Reynolds",
"Christopher",
""
],
[
"Cook",
"Charlie",
""
],
[
"Suarez",
"Luis Armas",
""
],
[
"Leinfellner",
"Richard",
""
],
[
"Sternberg",
"Michael J. E.",
""
]
] |
new_dataset
| 0.998947 |
2204.13653
|
Ryan Marten
|
Tanmay Gupta, Ryan Marten, Aniruddha Kembhavi, Derek Hoiem
|
GRIT: General Robust Image Task Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer vision models excel at making predictions when the test distribution
closely resembles the training distribution. Such models have yet to match the
ability of biological vision to learn from multiple sources and generalize to
new data sources and tasks. To facilitate the development and evaluation of
more general vision systems, we introduce the General Robust Image Task (GRIT)
benchmark. GRIT evaluates the performance, robustness, and calibration of a
vision system across a variety of image prediction tasks, concepts, and data
sources. The seven tasks in GRIT are selected to cover a range of visual
skills: object categorization, object localization, referring expression
grounding, visual question answering, segmentation, human keypoint detection,
and surface normal estimation. GRIT is carefully designed to enable the
evaluation of robustness under image perturbations, image source distribution
shift, and concept distribution shift. By providing a unified platform for
thorough assessment of skills and concepts learned by a vision model, we hope
GRIT catalyzes the development of performant and robust general purpose vision
systems.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 17:13:23 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2022 19:26:41 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Gupta",
"Tanmay",
""
],
[
"Marten",
"Ryan",
""
],
[
"Kembhavi",
"Aniruddha",
""
],
[
"Hoiem",
"Derek",
""
]
] |
new_dataset
| 0.994504 |
2205.00451
|
Dennis Soemers
|
Dennis J. N. J. Soemers and Éric Piette and Matthew Stephenson and
Cameron Browne
|
The Ludii Game Description Language is Universal
| null | null | null | null |
cs.AI cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are several different game description languages (GDLs), each intended
to allow a wide range of arbitrary games (i.e., general games) to be described
in a single higher-level language than general-purpose programming languages.
Games described in such formats can subsequently be presented as challenges for
automated general game playing agents, which are expected to be capable of
playing any arbitrary game described in such a language without prior knowledge
about the games to be played. The language used by the Ludii general game
system was previously shown to be capable of representing equivalent games for
any arbitrary, finite, deterministic, fully observable extensive-form game. In
this paper, we prove its universality by extending this to include finite
non-deterministic and imperfect-information games.
|
[
{
"version": "v1",
"created": "Sun, 1 May 2022 11:52:40 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 10:48:54 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Soemers",
"Dennis J. N. J.",
""
],
[
"Piette",
"Éric",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Browne",
"Cameron",
""
]
] |
new_dataset
| 0.999753 |
2205.01091
|
Duc Tran
|
Duc A. Tran and Bhaskar Krishnamachari
|
Blockchain in a nutshell
|
Pre-print. Book chapter (50 pages) in "Handbook on Blockchain". Duc
A. Tran, My T. Thai, and Bhaskar Krishnamachari (eds). Springer Nature
Publisher, 2022
| null | null | null |
cs.CR cs.DC cs.GT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Blockchain enables a digital society where people can contribute,
collaborate, and transact without having to second-guess trust and
transparency. It is the technology behind the success of Bitcoin, Ethereum, and
many disruptive applications and platforms that have a positive impact in
numerous sectors, including finance, education, health care, environment,
transportation, and philanthropy, to name a few. This chapter provides a
friendly description of essential concepts, mathematics, and algorithms that
lay the foundation for blockchain technology.
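As a pointer to the kind of foundational algorithm such a chapter covers, here is a minimal, hedged sketch of hash chaining, the tamper-evidence mechanism at the core of a blockchain. The field names and structure are illustrative assumptions, not taken from the chapter; real systems add consensus, signatures, and Merkle trees on top.

```python
# Minimal hash-chain sketch: each block commits to its predecessor's hash,
# so altering any earlier block breaks every later link.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

chain = []
append_block(chain, "genesis")
append_block(chain, {"from": "alice", "to": "bob", "amount": 5})
assert chain[1]["prev_hash"] == block_hash(chain[0])  # tamper-evidence check
```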
|
[
{
"version": "v1",
"created": "Fri, 29 Apr 2022 19:23:58 GMT"
}
] | 2022-05-04T00:00:00 |
[
[
"Tran",
"Duc A.",
""
],
[
"Krishnamachari",
"Bhaskar",
""
]
] |
new_dataset
| 0.999676 |