id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2205.12662
|
Zhi Chen
|
Zhi Chen, Jijia Bao, Lu Chen, Yuncong Liu, Da Ma, Bei Chen, Mengyue
Wu, Su Zhu, Xin Dong, Fujiang Ge, Qingliang Miao, Jian-Guang Lou and Kai Yu
|
DFM: Dialogue Foundation Model for Universal Large-Scale
Dialogue-Oriented Task Learning
|
Work in Progress
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building a universal conversational agent has been a long-standing goal of
the dialogue research community. Most previous works only focus on a small set
of dialogue tasks. In this work, we aim to build a unified dialogue foundation
model (DFM) which can be used to solve a massive number of diverse dialogue tasks. To
achieve this goal, a large-scale, well-annotated dialogue dataset with rich task
diversity (DialogZoo) is collected. We introduce a framework to unify all
dialogue tasks and propose novel auxiliary self-supervised tasks to achieve
stable training of DFM on the highly diverse, large-scale DialogZoo corpus.
Experiments show that, compared with models of the same size, DFM can achieve
state-of-the-art or competitive performance on very rich cross-domain
downstream dialogue tasks. This demonstrates that DFM largely extends the
ability of unified dialogue pre-trained models.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 11:17:16 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Oct 2022 05:04:31 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Chen",
"Zhi",
""
],
[
"Bao",
"Jijia",
""
],
[
"Chen",
"Lu",
""
],
[
"Liu",
"Yuncong",
""
],
[
"Ma",
"Da",
""
],
[
"Chen",
"Bei",
""
],
[
"Wu",
"Mengyue",
""
],
[
"Zhu",
"Su",
""
],
[
"Dong",
"Xin",
""
],
[
"Ge",
"Fujiang",
""
],
[
"Miao",
"Qingliang",
""
],
[
"Lou",
"Jian-Guang",
""
],
[
"Yu",
"Kai",
""
]
] |
new_dataset
| 0.998623 |
2208.00543
|
Dan Boneh
|
Kaili Wang, Qinchen Wang, Dan Boneh
|
ERC-20R and ERC-721R: Reversible Transactions on Ethereum
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchains are meant to be persistent: posted transactions are immutable and
cannot be changed. When a theft takes place, there are limited options for
reversing the disputed transaction, and this has led to significant losses in
the blockchain ecosystem.
In this paper we propose reversible versions of ERC-20 and ERC-721, the most
widely used token standards. With these new standards, a transaction is
eligible for reversal for a short period of time after it has been posted on
chain. After the dispute period has elapsed, the transaction can no longer be
reversed. Within the short dispute period, a sender can request to reverse a
transaction by convincing a decentralized set of judges to first freeze the
disputed assets, and then later convincing them to reverse the transaction.
Supporting reversibility in the context of ERC-20 and ERC-721 raises many
interesting technical challenges. This paper explores these challenges and
proposes a design for our ERC-20R and ERC-721R standards, the reversible
versions of ERC-20 and ERC-721. We also provide a prototype implementation. Our
goal is to initiate a deeper conversation about reversibility in the hope of
reducing some of the losses in the blockchain ecosystem.
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 23:47:11 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2022 01:39:23 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Oct 2022 03:03:04 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Wang",
"Kaili",
""
],
[
"Wang",
"Qinchen",
""
],
[
"Boneh",
"Dan",
""
]
] |
new_dataset
| 0.989019 |
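The ERC-20R/ERC-721R abstract above (2208.00543) describes a dispute window during which a decentralized set of judges can freeze and then reverse a posted transfer. Below is a minimal Python sketch of that flow, not the proposed Solidity standards; the class, method names, quorum, and window values are illustrative assumptions only.

```python
# Toy model of the dispute-window idea: a transfer stays reversible for a fixed
# window, during which a quorum of judges may freeze and then reverse it.
import time


class ReversibleLedger:
    def __init__(self, judges, dispute_window_s=3600, quorum=2):
        self.balances = {}          # address -> token balance
        self.transfers = {}         # tx_id -> transfer record
        self.judges = set(judges)
        self.window = dispute_window_s
        self.quorum = quorum
        self._next_id = 0

    def transfer(self, sender, recipient, amount):
        assert self.balances.get(sender, 0) >= amount, "insufficient balance"
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        tx_id = self._next_id
        self._next_id += 1
        self.transfers[tx_id] = {
            "from": sender, "to": recipient, "amount": amount,
            "posted_at": time.time(), "frozen": False, "votes": set(),
        }
        return tx_id

    def vote_freeze(self, judge, tx_id):
        tx = self.transfers[tx_id]
        in_window = time.time() - tx["posted_at"] <= self.window
        if judge in self.judges and in_window:
            tx["votes"].add(judge)
            if len(tx["votes"]) >= self.quorum:
                tx["frozen"] = True   # disputed assets locked pending a decision

    def reverse(self, tx_id):
        tx = self.transfers[tx_id]
        if tx["frozen"]:              # only frozen (judge-approved) transfers revert
            self.balances[tx["to"]] -= tx["amount"]
            self.balances[tx["from"]] += tx["amount"]
            tx["frozen"] = False


ledger = ReversibleLedger(judges=["judge_a", "judge_b"])
ledger.balances["alice"] = 100
tx = ledger.transfer("alice", "mallory", 40)
ledger.vote_freeze("judge_a", tx)
ledger.vote_freeze("judge_b", tx)     # quorum reached -> frozen
ledger.reverse(tx)
print(ledger.balances)                # {'alice': 100, 'mallory': 0}
```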
2208.01436
|
Christopher Sun
|
Christopher Sun, Jay Nimbalkar, Ravnoor Bedi
|
Predicting Future Mosquito Larval Habitats Using Time Series Climate
Forecasting and Deep Learning
|
2022 MIT IEEE Undergraduate Research Technology Conference
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Mosquito habitat ranges are projected to expand due to climate change. This
investigation aims to identify future mosquito habitats by analyzing preferred
ecological conditions of mosquito larvae. After assembling a data set with
atmospheric records and larvae observations, a neural network is trained to
predict larvae counts from ecological inputs. Time series forecasting is
conducted on these variables and climate projections are passed into the
initial deep learning model to generate location-specific larvae abundance
predictions. The results support the notion of regional ecosystem-driven
changes in mosquito spread, with high-elevation regions in particular
experiencing an increase in susceptibility to mosquito infestation.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 17:25:09 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Oct 2022 02:41:34 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Sun",
"Christopher",
""
],
[
"Nimbalkar",
"Jay",
""
],
[
"Bedi",
"Ravnoor",
""
]
] |
new_dataset
| 0.997947 |
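The mosquito-habitat abstract above (2208.01436) describes a two-stage pipeline: a model trained to map climate variables to larvae counts, then time-series forecasts of those variables fed back into the fitted model. A hedged sketch of that idea follows, with a ridge regression and a per-variable linear trend standing in for the paper's neural network and forecaster; all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 120, 3                                        # months of history, climate features
climate = rng.normal(size=(T, d)).cumsum(axis=0)     # synthetic temperature/precip/humidity
larvae = climate @ np.array([0.8, 0.3, -0.2]) + rng.normal(scale=0.5, size=T)

# Stage 1: ridge regression from climate variables to observed larvae counts.
lam = 1e-2
X = np.hstack([climate, np.ones((T, 1))])
w = np.linalg.solve(X.T @ X + lam * np.eye(d + 1), X.T @ larvae)

# Stage 2: naive linear-trend forecast of each climate variable, 12 steps ahead,
# then feed the projected climate back into the fitted model.
t = np.arange(T)
horizon = np.arange(T, T + 12)
future_climate = np.stack(
    [np.polyval(np.polyfit(t, climate[:, j], deg=1), horizon) for j in range(d)],
    axis=1,
)
future_X = np.hstack([future_climate, np.ones((12, 1))])
print("projected larvae counts:", future_X @ w)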
2209.02000
|
Alpha Renner
|
Alpha Renner, Lazar Supic, Andreea Danielescu, Giacomo Indiveri, E.
Paxon Frady, Friedrich T. Sommer and Yulia Sandamirskaya
|
Neuromorphic Visual Odometry with Resonator Networks
|
14 pages, 5 figures, minor changes
| null | null | null |
cs.RO cs.AI cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous agents require self-localization to navigate in unknown
environments. They can use Visual Odometry (VO) to estimate self-motion and
localize themselves using visual sensors. This motion-estimation strategy is
not compromised by drift, as inertial sensors are, or by slippage, as wheel encoders are.
However, VO with conventional cameras is computationally demanding, limiting
its application in systems with strict low-latency, -memory, and -energy
requirements. Using event-based cameras and neuromorphic computing hardware
offers a promising low-power solution to the VO problem. However, conventional
algorithms for VO are not readily convertible to neuromorphic hardware. In this
work, we present a VO algorithm built entirely of neuronal building blocks
suitable for neuromorphic implementation. The building blocks are groups of
neurons representing vectors in the computational framework of Vector Symbolic
Architecture (VSA) which was proposed as an abstraction layer to program
neuromorphic hardware. The VO network we propose generates and stores a working
memory of the presented visual environment. It updates this working memory
while at the same time estimating the changing location and orientation of the
camera. We demonstrate how VSA can be leveraged as a computing paradigm for
neuromorphic robotics. Moreover, our results represent an important step
towards using neuromorphic computing hardware for fast and power-efficient VO
and the related task of simultaneous localization and mapping (SLAM). We
validate this approach experimentally in a simple robotic task and with an
event-based dataset, demonstrating state-of-the-art performance in these
settings.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 14:57:03 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2022 16:44:13 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Renner",
"Alpha",
""
],
[
"Supic",
"Lazar",
""
],
[
"Danielescu",
"Andreea",
""
],
[
"Indiveri",
"Giacomo",
""
],
[
"Frady",
"E. Paxon",
""
],
[
"Sommer",
"Friedrich T.",
""
],
[
"Sandamirskaya",
"Yulia",
""
]
] |
new_dataset
| 0.964155 |
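The neuromorphic visual odometry abstract above (2209.02000) builds its network from Vector Symbolic Architecture (VSA) operations. As a generic illustration of VSA binding and unbinding (Holographic Reduced Representations with circular convolution, not the paper's specific resonator network):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4096                                      # hypervector dimensionality

def rand_vec():
    return rng.normal(size=D) / np.sqrt(D)    # roughly unit-norm random hypervector

def bind(a, b):                               # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):                             # circular correlation: approximate inverse
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

x_shift, landmark = rand_vec(), rand_vec()
memory = bind(x_shift, landmark)              # store "landmark seen at this shift"
recovered = unbind(memory, x_shift)           # query the memory with the shift vector

sim = recovered @ landmark / (np.linalg.norm(recovered) * np.linalg.norm(landmark))
print(f"cosine similarity to the stored landmark: {sim:.2f}")  # well above the ~0 of an unrelated vector
```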
2209.10700
|
Jitesh Joshi
|
Jitesh Joshi, Nadia Bianchi-Berthouze, Youngjun Cho
|
Self-adversarial Multi-scale Contrastive Learning for Semantic
Segmentation of Thermal Facial Images
|
Accepted at the British Machine Vision Conference (BMVC), 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segmentation of thermal facial images is a challenging task. This is because
facial features often lack salience due to high-dynamic thermal range scenes
and occlusion issues. Limited availability of datasets from unconstrained
settings further limits the use of the state-of-the-art segmentation networks,
loss functions and learning strategies which have been built and validated for
RGB images. To address the challenge, we propose the Self-Adversarial Multi-scale
Contrastive Learning (SAM-CL) framework as a new training strategy for thermal
image segmentation. The SAM-CL framework consists of a SAM-CL loss function and a
thermal image augmentation (TiAug) module as a domain-specific augmentation
technique. We use the Thermal-Face-Database to demonstrate the effectiveness of our
approach. Experiments conducted on the existing segmentation networks (UNET,
Attention-UNET, DeepLabV3 and HRNetv2) evidence the consistent performance
gains from the SAM-CL framework. Furthermore, we present a qualitative analysis
with UBComfort and DeepBreath datasets to discuss how our proposed methods
perform in handling unconstrained situations.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 22:58:47 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Oct 2022 23:05:24 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Joshi",
"Jitesh",
""
],
[
"Bianchi-Berthouze",
"Nadia",
""
],
[
"Cho",
"Youngjun",
""
]
] |
new_dataset
| 0.980208 |
2209.11887
|
Bo Ai
|
Bo Ai, Yuchen Wang, Yugin Tan, Samson Tan
|
Whodunit? Learning to Contrast for Authorship Attribution
|
camera-ready version, AACL-IJCNLP 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Authorship attribution is the task of identifying the author of a given text.
The key is finding representations that can differentiate between authors.
Existing approaches typically use manually designed features that capture a
dataset's content and style, but these approaches are dataset-dependent and
yield inconsistent performance across corpora. In this work, we propose
\textit{learning} author-specific representations by fine-tuning pre-trained
generic language representations with a contrastive objective (Contra-X). We
show that Contra-X learns representations that form highly separable clusters
for different authors. It advances the state-of-the-art on multiple human and
machine authorship attribution benchmarks, enabling improvements of up to 6.8%
over cross-entropy fine-tuning. However, we find that Contra-X improves overall
accuracy at the cost of sacrificing performance for some authors. Resolving
this tension will be an important direction for future work. To the best of our
knowledge, we are the first to integrate contrastive learning with pre-trained
language model fine-tuning for authorship attribution.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2022 23:45:08 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2022 07:55:15 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Ai",
"Bo",
""
],
[
"Wang",
"Yuchen",
""
],
[
"Tan",
"Yugin",
""
],
[
"Tan",
"Samson",
""
]
] |
new_dataset
| 0.99883 |
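The Contra-X abstract above (2209.11887) fine-tunes a language model with a contrastive objective so that texts by the same author form separable clusters. A minimal sketch of a generic supervised contrastive loss over a batch of embeddings; the exact Contra-X formulation may differ.

```python
import numpy as np

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over one batch (SupCon-style).

    embeddings: (N, d) array (L2-normalised below); labels: (N,) author ids.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / tau                              # pairwise scaled similarities
    N = len(labels)
    mask_self = np.eye(N, dtype=bool)
    sim = np.where(mask_self, -np.inf, sim)          # exclude self-comparisons
    log_denom = np.log(np.exp(sim).sum(axis=1))      # log-sum over all other samples
    losses = []
    for i in range(N):
        positives = (labels == labels[i]) & ~mask_self[i]
        if positives.any():                          # anchors need at least one positive
            losses.append(-(sim[i, positives] - log_denom[i]).mean())
    return float(np.mean(losses))

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
authors = np.array([0, 0, 1, 1, 2, 2, 3, 3])         # two texts per author
print(supcon_loss(emb, authors))
```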
2210.03360
|
Kaspar Müller
|
Kaspar Müller, Franz Zotter
|
The PerspectiveLiberator -- an upmixing 6DoF rendering plugin for
single-perspective Ambisonic room impulse responses
|
4 pages, submitted to conference: DAGA 2021, Vienna, Austria, 2021
|
Fortschritte der Akustik - DAGA 2021, Vienna, Austria, 2021, vol.
47, pp. 306-309
| null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, virtual reality interfaces allow the user to change perspectives in
six degrees of freedom (6DoF) virtually, and consistently with the visual part,
the acoustic perspective needs to be updated interactively. Single-perspective
rendering with dynamic head rotation already works quite reliably with upmixed
first-order Ambisonic room impulse responses (ASDM, SIRR, etc.). This
contribution presents a plugin to free the virtual perspective from the
measured one by real-time perspective extrapolation: The PerspectiveLiberator.
The plugin permits selecting between two different algorithms for directional
resolution enhancement (ASDM, 4DE). For its main task of convolution-based
6DoF rendering, the plugin detects and localizes prominent directional sound
events in the early Ambisonic room impulse response and re-encodes them with
direction, time of arrival, and level adapted to the variable perspective of
the virtual listener. The diffuse residual is enhanced in directional
resolution but remains unaffected by translatory movement to preserve as much
of the original room impression as possible.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 07:11:01 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2022 07:24:20 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Müller",
"Kaspar",
""
],
[
"Zotter",
"Franz",
""
]
] |
new_dataset
| 0.994404 |
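The PerspectiveLiberator abstract above (2210.03360) re-encodes localized sound events with direction, time of arrival, and level adapted to the translated listener. A toy geometric sketch of that re-encoding, assuming a known event position and a simple 1/r distance law; this is not the plugin's actual implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def reencode_event(event_pos, listener_pos, ref_pos=np.zeros(3)):
    """Recompute direction, arrival delay and level of a localized sound event
    for a translated listener, relative to the original measurement position."""
    d_ref = np.linalg.norm(event_pos - ref_pos)
    d_new = np.linalg.norm(event_pos - listener_pos)
    direction = (event_pos - listener_pos) / d_new    # new DOA unit vector
    extra_delay = (d_new - d_ref) / SPEED_OF_SOUND    # change in time of arrival (s)
    gain = d_ref / d_new                              # 1/r level change vs. the reference
    return direction, extra_delay, gain

# Example: a reflection localized 3 m in front of the measurement point,
# listener stepped 1 m to the side of it.
direction, delay, gain = reencode_event(np.array([3.0, 0.0, 0.0]),
                                        np.array([0.0, 1.0, 0.0]))
print(direction, delay, gain)
```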
2210.03768
|
Arif Usta
|
Arif Usta, Akifhan Karakayali and Özgür Ulusoy
|
xDBTagger: Explainable Natural Language Interface to Databases Using
Keyword Mappings and Schema Graph
|
20 pages, 6 figures. This work is the extended version of
arXiv:2101.04226 that appeared in PVLDB'21
| null | null | null |
cs.DB cs.AI cs.CL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Translating natural language queries (NLQ) into structured query language
(SQL) in interfaces to relational databases is a challenging task that has been
widely studied by researchers from both the database and natural language
processing communities. Numerous works have been proposed to attack the natural
language interfaces to databases (NLIDB) problem either as a conventional
pipeline-based or an end-to-end deep-learning-based solution. Nevertheless,
regardless of the approach preferred, such solutions exhibit a black-box nature,
which makes it difficult for potential users targeted by these systems to
comprehend the decisions made to produce the translated SQL. To this end, we
propose xDBTagger, an explainable hybrid translation pipeline that explains the
decisions made along the way to the user both textually and visually. We also
evaluate xDBTagger quantitatively in three real-world relational databases. The
evaluation results indicate that in addition to being fully interpretable,
xDBTagger is effective in terms of accuracy and translates queries up to
10,000 times more efficiently than other state-of-the-art pipeline-based systems.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 18:17:09 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Usta",
"Arif",
""
],
[
"Karakayali",
"Akifhan",
""
],
[
"Ulusoy",
"Özgür",
""
]
] |
new_dataset
| 0.96131 |
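The xDBTagger abstract above (2210.03768) maps query keywords to schema elements and reasons over a schema graph. A minimal sketch of one such step, finding the shortest join path between two matched tables by breadth-first search; the schema and keyword mappings here are invented for illustration.

```python
from collections import deque

# Toy schema graph: nodes are tables, edges are foreign-key joins.
schema_edges = {
    "customer": ["orders"],
    "orders": ["customer", "order_items"],
    "order_items": ["orders", "product"],
    "product": ["order_items"],
}
keyword_to_table = {"customers": "customer", "bought": "order_items", "products": "product"}

def join_path(start, goal):
    """Shortest chain of tables (hence joins) connecting two matched tables."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in schema_edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

tables = [keyword_to_table[k] for k in ("customers", "products")]
print(join_path(*tables))   # ['customer', 'orders', 'order_items', 'product']
```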
2210.03780
|
Satish Kumar
|
Satish Kumar, ASM Iftekhar, Ekta Prashnani, B.S.Manjunath
|
LOCL: Learning Object-Attribute Composition using Localization
|
20 pages, 7 figures, 11 tables, Accepted in British Machine Vision
Conference 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes LOCL (Learning Object Attribute Composition using
Localization) that generalizes composition zero shot learning to objects in
cluttered and more realistic settings. The problem of unseen Object Attribute
(OA) associations has been well studied in the field, however, the performance
of existing methods is limited in challenging scenes. In this context, our key
contribution is a modular approach to localizing objects and attributes of
interest in a weakly supervised context that generalizes robustly to unseen
configurations. Localization coupled with a composition classifier
significantly outperforms state of the art (SOTA) methods, with an improvement
of about 12% on currently available challenging datasets. Further, the
modularity enables the localized feature extractor to be used with
existing OA compositional learning methods to improve their overall
performance.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 18:48:45 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Kumar",
"Satish",
""
],
[
"Iftekhar",
"ASM",
""
],
[
"Prashnani",
"Ekta",
""
],
[
"Manjunath",
"B. S.",
""
]
] |
new_dataset
| 0.957253 |
2210.03787
|
Meera Hahn
|
Meera Hahn, Kevin Carlberg, Ruta Desai, James Hillis
|
Learning a Visually Grounded Memory Assistant
| null | null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a novel interface for large scale collection of human memory and
assistance. Using the 3D Matterport simulator we create realistic indoor
environments in which we have people perform specific embodied memory tasks
that mimic household daily activities. This interface was then deployed on
Amazon Mechanical Turk allowing us to test and record human memory, navigation
and needs for assistance at a large scale that was previously impossible. Using
the interface we collect `The Visually Grounded Memory Assistant Dataset'
which is aimed at developing our understanding of (1) the information people
encode during navigation of 3D environments and (2) conditions under which
people ask for memory assistance. Additionally, we experiment with
predicting when people will ask for assistance using models trained on
hand-selected visual and semantic features. This provides an opportunity to
build stronger ties between the machine-learning and cognitive-science
communities through learned models of human perception, memory, and cognition.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 19:19:01 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Hahn",
"Meera",
""
],
[
"Carlberg",
"Kevin",
""
],
[
"Desai",
"Ruta",
""
],
[
"Hillis",
"James",
""
]
] |
new_dataset
| 0.995755 |
2210.03899
|
Jingjing Wang
|
Jie Liu, Jingjing Wang, Peng Zhang, Chunmao Wang, Di Xie, Shiliang Pu
|
Multi-Scale Wavelet Transformer for Face Forgery Detection
|
The first two authors contributed equally to this work. Accepted to
ACCV 2022 as oral presentation
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, many face forgery detection methods aggregate spatial and
frequency features to enhance the generalization ability and gain promising
performance under the cross-dataset scenario. However, these methods only
leverage one level of frequency information, which limits their expressive ability.
To overcome these limitations, we propose a multi-scale wavelet transformer
framework for face forgery detection. Specifically, to take full advantage of
the multi-scale and multi-frequency wavelet representation, we gradually
aggregate the multi-scale wavelet representation at different stages of the
backbone network. To better fuse the frequency feature with the spatial
features, frequency-based spatial attention is designed to guide the spatial
feature extractor to concentrate more on forgery traces. Meanwhile,
cross-modality attention is proposed to fuse the frequency features with the
spatial features. These two attention modules are calculated through a unified
transformer block for efficiency. A wide variety of experiments demonstrate
that the proposed method is efficient and effective in both within-dataset and
cross-dataset settings.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 03:39:36 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Liu",
"Jie",
""
],
[
"Wang",
"Jingjing",
""
],
[
"Zhang",
"Peng",
""
],
[
"Wang",
"Chunmao",
""
],
[
"Xie",
"Di",
""
],
[
"Pu",
"Shiliang",
""
]
] |
new_dataset
| 0.960226 |
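The face-forgery abstract above (2210.03899) aggregates multi-scale wavelet representations across network stages. As a small, generic illustration of the underlying signal-processing step, a recursive 2-D Haar decomposition that yields approximation and detail bands at several scales (not the paper's transformer architecture):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0          # low-frequency approximation
    lh = (a + b - c - d) / 4.0          # horizontal detail
    hl = (a - b + c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, (lh, hl, hh)

def multiscale_wavelet(img, levels=3):
    """Repeatedly decompose the approximation band, yielding one tuple per scale."""
    pyramid, current = [], img
    for _ in range(levels):
        current, details = haar2d(current)
        pyramid.append((current, details))
    return pyramid

face = np.random.default_rng(0).random((64, 64))
for i, (ll, _) in enumerate(multiscale_wavelet(face), start=1):
    print(f"scale {i}: approximation {ll.shape}")    # 32x32, 16x16, 8x8
```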
2210.03929
|
Baoxiong Jia
|
Baoxiong Jia, Ting Lei, Song-Chun Zhu, Siyuan Huang
|
EgoTaskQA: Understanding Human Tasks in Egocentric Videos
|
Published at NeurIPS Track on Datasets and Benchmarks 2022
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Understanding human tasks through video observations is an essential
capability of intelligent agents. The challenges of such capability lie in the
difficulty of generating a detailed understanding of situated actions, their
effects on object states (i.e., state changes), and their causal dependencies.
These challenges are further aggravated by the natural parallelism from
multi-tasking and partial observations in multi-agent collaboration. Most prior
works leverage action localization or future prediction as an indirect metric
for evaluating such task understanding from videos. To make a direct
evaluation, we introduce the EgoTaskQA benchmark that provides a single home
for the crucial dimensions of task understanding through question-answering on
real-world egocentric videos. We meticulously design questions that target the
understanding of (1) action dependencies and effects, (2) intents and goals,
and (3) agents' beliefs about others. These questions are divided into four
types, including descriptive (what status?), predictive (what will?),
explanatory (what caused?), and counterfactual (what if?) to provide diagnostic
analyses on spatial, temporal, and causal understandings of goal-oriented
tasks. We evaluate state-of-the-art video reasoning models on our benchmark and
show the significant gaps between them and humans in understanding complex
goal-oriented egocentric videos. We hope this effort will drive the vision
community to move onward with goal-oriented video understanding and reasoning.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 05:49:05 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Jia",
"Baoxiong",
""
],
[
"Lei",
"Ting",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Huang",
"Siyuan",
""
]
] |
new_dataset
| 0.996227 |
2210.03951
|
Hamzah Luqman
|
Hamzah Luqman
|
ArabSign: A Multi-modality Dataset and Benchmark for Continuous Arabic
Sign Language Recognition
|
8
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Sign language recognition has attracted the interest of researchers in recent
years. While numerous approaches have been proposed for European and Asian sign
language recognition, very limited attempts have been made to develop similar
systems for the Arabic sign language (ArSL). This can be attributed partly to
the lack of a dataset at the sentence level. In this paper, we aim to make a
significant contribution by proposing ArabSign, a continuous ArSL dataset. The
proposed dataset consists of 9,335 samples performed by 6 signers. The total
time of the recorded sentences is around 10 hours and the average sentence
length is 3.1 signs. The ArabSign dataset was recorded using a Kinect V2 camera
that provides three types of information (color, depth, and skeleton joint
points) recorded simultaneously for each sentence. In addition, we provide the
annotation of the dataset according to ArSL and Arabic language structures that
can help in studying the linguistic characteristics of ArSL. To benchmark this
dataset, we propose an encoder-decoder model for continuous ArSL recognition.
The model has been evaluated on the proposed dataset, and the obtained results
show that the encoder-decoder model outperformed the attention mechanism, with
an average word error rate (WER) of 0.50 compared with 0.62 for the attention
mechanism. The data and code are available at github.com/Hamzah-Luqman/ArabSign
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 07:36:20 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Luqman",
"Hamzah",
""
]
] |
new_dataset
| 0.999908 |
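The ArabSign abstract above (2210.03951) reports recognition quality as word error rate (WER). For reference, a standard edit-distance WER implementation; the example sign sequences are made up.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with standard edit distance over sign/word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / max(len(ref), 1)

print(word_error_rate("I GO SCHOOL TOMORROW", "I GO HOME TOMORROW"))  # 0.25
```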
2210.04002
|
Forough Shahab Samani
|
Forough Shahab Samani, Rolf Stadler
|
Dynamically meeting performance objectives for multiple services on a
service mesh
|
Accepted at the 18th International Conference on Network and Service
Management
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a framework that lets a service provider achieve end-to-end
management objectives under varying load. Dynamic control actions are performed
by a reinforcement learning (RL) agent. Our work includes experimentation and
evaluation on a laboratory testbed where we have implemented basic information
services on a service mesh supported by the Istio and Kubernetes platforms. We
investigate different management objectives that include end-to-end delay
bounds on service requests, throughput objectives, and service differentiation.
These objectives are mapped onto reward functions that an RL agent learns to
optimize, by executing control actions, namely, request routing and request
blocking. We compute the control policies not on the testbed, but in a
simulator, which speeds up the learning process by orders of magnitude. In our
approach, the system model is learned on the testbed; it is then used to
instantiate the simulator, which produces near-optimal control policies for
various management objectives. The learned policies are then evaluated on the
testbed using unseen load patterns.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 11:54:25 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Samani",
"Forough Shahab",
""
],
[
"Stadler",
"Rolf",
""
]
] |
new_dataset
| 0.986838 |
2210.04084
|
Lois Orosa
|
Lois Orosa, Ulrich Rührmair, A. Giray Yaglikci, Haocong Luo, Ataberk
Olgun, Patrick Jattke, Minesh Patel, Jeremie Kim, Kaveh Razavi, Onur Mutlu
|
SpyHammer: Using RowHammer to Remotely Spy on Temperature
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
RowHammer is a DRAM vulnerability that can cause bit errors in a victim DRAM
row by just accessing its neighboring DRAM rows at a high-enough rate. Recent
studies demonstrate that new DRAM devices are becoming increasingly more
vulnerable to RowHammer, and many works demonstrate system-level attacks for
privilege escalation or information leakage. In this work, we leverage two key
observations about RowHammer characteristics to spy on DRAM temperature: 1)
RowHammer-induced bit error rate consistently increases (or decreases) as the
temperature increases, and 2) some DRAM cells that are vulnerable to RowHammer
cause bit errors only at a particular temperature. Based on these observations,
we propose a new RowHammer attack, called SpyHammer, that spies on the
temperature of critical systems such as industrial production lines, vehicles,
and medical systems. SpyHammer is the first practical attack that can spy on
DRAM temperature. SpyHammer can spy on absolute temperature with an error of
less than 2.5 {\deg}C at the 90th percentile of tested temperature points, for
12 real DRAM modules from 4 main manufacturers.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 18:31:58 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Orosa",
"Lois",
""
],
[
"Rührmair",
"Ulrich",
""
],
[
"Yaglikci",
"A. Giray",
""
],
[
"Luo",
"Haocong",
""
],
[
"Olgun",
"Ataberk",
""
],
[
"Jattke",
"Patrick",
""
],
[
"Patel",
"Minesh",
""
],
[
"Kim",
"Jeremie",
""
],
[
"Razavi",
"Kaveh",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.998213 |
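The SpyHammer abstract above (2210.04084) relies on the observation that RowHammer bit-error counts change monotonically with temperature. A toy sketch of inverting a per-module calibration curve to infer temperature; the calibration numbers below are entirely hypothetical.

```python
import numpy as np

# Hypothetical calibration profile for one DRAM module: RowHammer bit-error
# counts measured at known temperatures (monotonically increasing).
calib_temps = np.array([40.0, 50.0, 60.0, 70.0, 80.0])   # deg C
calib_errors = np.array([110, 190, 320, 520, 800])       # bit errors per hammer test

def infer_temperature(observed_errors):
    """Invert the error-vs-temperature curve by linear interpolation."""
    return float(np.interp(observed_errors, calib_errors, calib_temps))

print(infer_temperature(400))   # roughly 64 deg C on this synthetic profile
```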
2210.04085
|
Shi-Jie Li
|
Shijie Li, Ming-Ming Cheng, Juergen Gall
|
Dual Pyramid Generative Adversarial Networks for Semantic Image
Synthesis
|
BMVC2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The goal of semantic image synthesis is to generate photo-realistic images
from semantic label maps. It is highly relevant for tasks like content
generation and image editing. Current state-of-the-art approaches, however,
still struggle to generate realistic objects in images at various scales. In
particular, small objects tend to fade away and large objects are often
generated as collages of patches. In order to address this issue, we propose a
Dual Pyramid Generative Adversarial Network (DP-GAN) that learns the
conditioning of spatially-adaptive normalization blocks at all scales jointly,
such that scale information is bi-directionally used, and it unifies
supervision at different scales. Our qualitative and quantitative results show
that the proposed approach generates images where small and large objects look
more realistic compared to images generated by state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 18:45:44 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Li",
"Shijie",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Gall",
"Juergen",
""
]
] |
new_dataset
| 0.982428 |
2210.04090
|
Deniz Ağaoğlu Çağırıcı Mgr.
|
Deniz Ağaoğlu Çağırıcı, Onur Çağırıcı
|
APUD(1,1) Recognition in Polynomial Time
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A unit disk graph is the intersection graph of a set of disks of unit radius
in the Euclidean plane. In 1998, Breu and Kirkpatrick showed that the
recognition problem for unit disk graphs is NP-hard. Given $k$ horizontal and
$m$ vertical lines, an APUD($k,m$) is a unit disk graph such that each unit
disk is centered either on a given horizontal or vertical line.
Çağırıcı showed in 2020 that APUD($k,m$) recognition is
NP-hard. In this paper, we show that APUD($1,1$) recognition is polynomial-time
solvable.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 19:04:45 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Çağırıcı",
"Deniz Ağaoğlu",
""
],
[
"Çağırıcı",
"Onur",
""
]
] |
new_dataset
| 0.999612 |
2210.04161
|
Yueyue Huang
|
Yueyue Huang, Chu-Ren Huang
|
Cross-strait Variations on Two Near-synonymous Loanwords xie2shang1 and
tan2pan4: A Corpus-based Comparative Study
|
To appear in PACLIC 2022. 10 pages, 5 figures, 5 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This study attempts to investigate cross-strait variations on two typical
synonymous loanwords in Chinese, i.e. xie2shang1 and tan2pan4, drawing on MARVS
theory. Through a comparative analysis, the study found some distributional,
eventual, and contextual similarities and differences across Taiwan and
Mainland Mandarin. Compared with the underused tan2pan4, xie2shang1 is
significantly overused in Taiwan Mandarin and vice versa in Mainland Mandarin.
Additionally, though both words can refer to an inchoative process in Mainland
and Taiwan Mandarin, the starting point for xie2shang1 in Mainland Mandarin is
somewhat blurred compared with the usage in Taiwan Mandarin. Further on, in
Taiwan Mandarin, tan2pan4 can be used in economic and diplomatic contexts,
while xie2shang1 is used almost exclusively in political contexts. In Mainland
Mandarin, however, the two words can be used in a hybrid manner within
political contexts; moreover, tan2pan4 is prominently used in diplomatic
contexts with less reference to economic activities, while xie2shang1 can be
found in both political and legal contexts, emphasizing a role of mediation.
|
[
{
"version": "v1",
"created": "Sun, 9 Oct 2022 04:10:58 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Huang",
"Yueyue",
""
],
[
"Huang",
"Chu-Ren",
""
]
] |
new_dataset
| 0.984829 |
2210.04179
|
Hideyuki Kawashima
|
Jun Nemoto, Takashi Kambayashi, Takashi Hoshino, Hideyuki Kawashima
|
Oze: Decentralized Graph-based Concurrency Control for Real-world Long
Transactions on BoM Benchmark
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we propose Oze, a new concurrency control protocol that
handles heterogeneous workloads which include long-running update transactions.
Oze explores a large scheduling space using a fully precise multi-version
serialization graph to reduce false positives. Oze manages the graph in a
decentralized manner to exploit many cores in modern servers. We also propose a
new OLTP benchmark, BoMB (Bill of Materials Benchmark), based on a use case in
an actual manufacturing company. BoMB consists of one long-running update
transaction and five short transactions that conflict with each other.
Experiments using BoMB show that Oze keeps the abort rate of the long-running
update transaction at zero while reaching up to 1.7 Mtpm for short transactions
with near linear scalability, whereas state-of-the-art protocols cannot commit
the long transaction or experience performance degradation in short transaction
throughput.
|
[
{
"version": "v1",
"created": "Sun, 9 Oct 2022 06:14:43 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Nemoto",
"Jun",
""
],
[
"Kambayashi",
"Takashi",
""
],
[
"Hoshino",
"Takashi",
""
],
[
"Kawashima",
"Hideyuki",
""
]
] |
new_dataset
| 0.999176 |
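The Oze abstract above (2210.04179) tracks serializability with a fully precise serialization graph. As a generic illustration of the core check (not Oze's decentralized multi-version protocol), a DFS cycle test on a conflict graph that decides whether a schedule remains serializable:

```python
def has_cycle(edges):
    """DFS-based cycle check on a directed conflict (serialization) graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in edges}

    def visit(u):
        color[u] = GRAY
        for v in edges.get(u, []):
            if color.get(v, WHITE) == GRAY:
                return True                       # back edge -> cycle
            if color.get(v, WHITE) == WHITE and visit(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(edges))

# Conflict edges t_i -> t_j meaning "t_i must be serialized before t_j".
graph = {"t1": ["t2"], "t2": ["t3"], "t3": []}
print(has_cycle(graph))                 # False: schedule is serializable
graph["t3"].append("t1")                # a long transaction closes the cycle
print(has_cycle(graph))                 # True: one transaction must abort
```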
2210.04252
|
Teerath Kumar
|
Aisha Chandio, Gong Gui, Teerath Kumar, Irfan Ullah, Ramin
Ranjbarzadeh, Arunabha M Roy, Akhtar Hussain, and Yao Shen
|
Precise Single-stage Detector
|
We will submit it soon to an IEEE Transactions journal. Due to a character
limit, we cannot upload the full abstract. Please read the PDF file for
more detail
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are still two problems in SSD causing some inaccurate results: (1) in
the process of feature extraction, with the layer-by-layer acquisition of
semantic information, local information is gradually lost, resulting in less
representative feature maps; (2) during the Non-Maximum Suppression (NMS)
algorithm, due to the inconsistency between the classification and regression tasks, the
classification confidence and predicted detection position cannot accurately
indicate the position of the prediction boxes. Methods: In order to address
these aforementioned issues, we propose a new architecture, a modified version
of Single Shot Multibox Detector (SSD), named Precise Single Stage Detector
(PSSD). Firstly, we improve the features by adding extra layers to SSD.
Secondly, we construct a simple and effective feature enhancement module to
expand the receptive field step by step for each layer and enhance its local
and semantic information. Finally, we design a more efficient loss function to
predict the IOU between the prediction boxes and ground truth boxes, and the
threshold IOU guides classification training and attenuates the scores, which
are used by the NMS algorithm. Main Results: Benefiting from the above
optimization, the proposed model PSSD achieves exciting performance in
real-time. Specifically, with the hardware of Titan Xp and the input size of
320 pix, PSSD achieves 33.8 mAP at 45 FPS speed on MS COCO benchmark and 81.28
mAP at 66 FPS speed on Pascal VOC 2007 outperforming state-of-the-art object
detection models. Besides, the proposed model performs particularly well with
a larger input size. Under 512 pix, PSSD can obtain 37.2 mAP with 27 FPS on MS
COCO and 82.82 mAP with 40 FPS on Pascal VOC 2007. The experimental results show
that the proposed model has a better trade-off between speed and accuracy.
|
[
{
"version": "v1",
"created": "Sun, 9 Oct 2022 12:58:37 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Chandio",
"Aisha",
""
],
[
"Gui",
"Gong",
""
],
[
"Kumar",
"Teerath",
""
],
[
"Ullah",
"Irfan",
""
],
[
"Ranjbarzadeh",
"Ramin",
""
],
[
"Roy",
"Arunabha M",
""
],
[
"Hussain",
"Akhtar",
""
],
[
"Shen",
"Yao",
""
]
] |
new_dataset
| 0.996339 |
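The PSSD abstract above (2210.04252) lets a predicted IoU attenuate classification scores before NMS. A small sketch of that style of IoU-aware NMS; the threshold and the exact attenuation rule are assumptions, not necessarily the paper's final design.

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box against an array of boxes; boxes are (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def iou_aware_nms(boxes, cls_scores, pred_ious, iou_thresh=0.5):
    """NMS ranked by classification confidence attenuated by the predicted IoU."""
    scores = cls_scores * pred_ious          # localization-aware ranking score
    order = np.argsort(-scores)
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
print(iou_aware_nms(boxes, np.array([0.9, 0.85, 0.8]), np.array([0.6, 0.95, 0.9])))
```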
2210.04261
|
Melissa Dell
|
Emily Silcock, Luca D'Amico-Wong, Jinglin Yang, Melissa Dell
|
Noise-Robust De-Duplication at Scale
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Identifying near duplicates within large, noisy text corpora has a myriad of
applications that range from de-duplicating training datasets, reducing privacy
risk, and evaluating test set leakage, to identifying reproduced news articles
and literature within large corpora. Across these diverse applications, the
overwhelming majority of work relies on N-grams. Limited efforts have been made
to evaluate how well N-gram methods perform, in part because it is unclear how
one could create an unbiased evaluation dataset for a massive corpus. This
study uses the unique timeliness of historical news wires to create a 27,210
document dataset, with 122,876 positive duplicate pairs, for studying
noise-robust de-duplication. The time-sensitivity of news makes comprehensive
hand labelling feasible - despite the massive overall size of the corpus - as
duplicates occur within a narrow date range. The study then develops and
evaluates a range of de-duplication methods: hashing and N-gram overlap (which
predominate in the literature), a contrastively trained bi-encoder, and a
re-rank style approach combining a bi- and cross-encoder. The neural approaches
significantly outperform hashing and N-gram overlap. We show that the
bi-encoder scales well, de-duplicating a 10 million article corpus on a single
GPU card in a matter of hours. The public release of our NEWS-COPY
de-duplication dataset will facilitate further research and applications.
|
[
{
"version": "v1",
"created": "Sun, 9 Oct 2022 13:30:42 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Silcock",
"Emily",
""
],
[
"D'Amico-Wong",
"Luca",
""
],
[
"Yang",
"Jinglin",
""
],
[
"Dell",
"Melissa",
""
]
] |
new_dataset
| 0.983511 |
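The de-duplication abstract above (2210.04261) compares neural approaches against the N-gram overlap methods that dominate the literature. For reference, a minimal n-gram Jaccard overlap scorer of the kind used as the classic (non-neural) duplicate signal; the documents are toy examples and the decision threshold is left open.

```python
def ngrams(text, n=3):
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    """N-gram overlap score: size of intersection over size of union."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

doc1 = "the senate passed the bill on tuesday after a long debate"
doc2 = "the senate passed the bill on tuesday following lengthy debate"
doc3 = "local team wins the championship in overtime thriller"

print(jaccard(ngrams(doc1), ngrams(doc2)))   # high relative to the unrelated pair
print(jaccard(ngrams(doc1), ngrams(doc3)))   # near zero: unrelated stories
```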
2210.04341
|
Adriano Fragomeni
|
Adriano Fragomeni, Michael Wray, Dima Damen
|
ConTra: (Con)text (Tra)nsformer for Cross-Modal Video Retrieval
|
Accepted in ACCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we re-examine the task of cross-modal clip-sentence retrieval,
where the clip is part of a longer untrimmed video. When the clip is short or
visually ambiguous, knowledge of its local temporal context (i.e. surrounding
video segments) can be used to improve the retrieval performance. We propose
Context Transformer (ConTra); an encoder architecture that models the
interaction between a video clip and its local temporal context in order to
enhance its embedded representations. Importantly, we supervise the context
transformer using contrastive losses in the cross-modal embedding space. We
explore context transformers for video and text modalities. Results
consistently demonstrate improved performance on three datasets: YouCook2,
EPIC-KITCHENS and a clip-sentence version of ActivityNet Captions. Exhaustive
ablation studies and context analysis show the efficacy of the proposed method.
|
[
{
"version": "v1",
"created": "Sun, 9 Oct 2022 20:11:38 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Fragomeni",
"Adriano",
""
],
[
"Wray",
"Michael",
""
],
[
"Damen",
"Dima",
""
]
] |
new_dataset
| 0.999761 |
2210.04359
|
Steffen Eger
|
Dominik Beese and Ole Pütz and Steffen Eger
|
FairGer: Using NLP to Measure Support for Women and Migrants in 155
Years of German Parliamentary Debates
| null | null | null | null |
cs.CL cs.LG cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
We measure support for women and migrants in German political debates over
the last 155 years. To do so, we (1) provide a gold standard of 1205 text
snippets in context, annotated for support towards our target groups, (2) train a
BERT model on our annotated data, with which (3) we infer large-scale trends.
These show that support for women is stronger than support for migrants, but
both have steadily increased over time. While we hardly find any direct
anti-support towards women, there is more polarization when it comes to migrants.
We also discuss the difficulty of annotation as a result of ambiguity in
political discourse and indirectness, i.e., politicians' tendency to relate
stances attributed to political opponents. Overall, our results indicate that
German society, as measured from its political elite, has become fairer over
time.
|
[
{
"version": "v1",
"created": "Sun, 9 Oct 2022 22:02:58 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Beese",
"Dominik",
""
],
[
"Pütz",
"Ole",
""
],
[
"Eger",
"Steffen",
""
]
] |
new_dataset
| 0.99394 |
2210.04413
|
Junfu Guo
|
Junfu Guo, Changhao Li, Xi Xia, Ruizhen Hu, Ligang Liu
|
Asynchronous Collaborative Autoscanning with Mode Switching for
Multi-Robot Scene Reconstruction
|
13 pages, 12 figures, Conference: SIGGRAPH Asia 2022
|
ACM Trans. Graph., Vol. 41, No. 6, Article 198. Publication date:
December 2022
|
10.1145/3550454.3555483
| null |
cs.RO cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
When conducting autonomous scanning for the online reconstruction of unknown
indoor environments, robots have to be competent at exploring scene structure
and reconstructing objects with high quality. Our key observation is that
different tasks demand specialized scanning properties of robots: rapid moving
speed and far vision for global exploration and slow moving speed and narrow
vision for local object reconstruction, which are referred to as two different
scanning modes: explorer and reconstructor, respectively. When requiring
multiple robots to collaborate for efficient exploration and fine-grained
reconstruction, the questions on when to generate and how to assign those tasks
should be carefully answered. Therefore, we propose a novel asynchronous
collaborative autoscanning method with mode switching, which generates two
kinds of scanning tasks with associated scanning modes, i.e., exploration task
with explorer mode and reconstruction task with reconstructor mode, and assign
them to the robots to execute in an asynchronous collaborative manner to highly
boost the scanning efficiency and reconstruction quality. The task assignment
is optimized by solving a modified Multi-Depot Multiple Traveling Salesman
Problem (MDMTSP). Moreover, to further enhance the collaboration and increase
the efficiency, we propose a task-flow model that activates the task generation
and assignment process immediately when any of the robots finish all its tasks
with no need to wait for all other robots to complete the tasks assigned in the
previous iteration. Extensive experiments have been conducted to show the
importance of each key component of our method and the superiority over
previous methods in scanning efficiency and reconstruction quality.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 03:06:52 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Guo",
"Junfu",
""
],
[
"Li",
"Changhao",
""
],
[
"Xia",
"Xi",
""
],
[
"Hu",
"Ruizhen",
""
],
[
"Liu",
"Ligang",
""
]
] |
new_dataset
| 0.984064 |
2210.04435
|
Zhongyu Li
|
Xiaoyu Huang, Zhongyu Li, Yanzhen Xiang, Yiming Ni, Yufeng Chi, Yunhao
Li, Lizhi Yang, Xue Bin Peng and Koushil Sreenath
|
Creating a Dynamic Quadrupedal Robotic Goalkeeper with Reinforcement
Learning
|
First two authors contributed equally. Accompanying video is at
https://youtu.be/iX6OgG67-ZQ
| null | null | null |
cs.RO cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a reinforcement learning (RL) framework that enables quadrupedal
robots to perform soccer goalkeeping tasks in the real world. Soccer
goalkeeping using quadrupeds is a challenging problem that combines highly
dynamic locomotion with precise and fast non-prehensile object (ball)
manipulation. The robot needs to react to and intercept a potentially flying
ball using dynamic locomotion maneuvers in a very short amount of time, usually
less than one second. In this paper, we propose to address this problem using a
hierarchical model-free RL framework. The first component of the framework
contains multiple control policies for distinct locomotion skills, which can be
used to cover different regions of the goal. Each control policy enables the
robot to track random parametric end-effector trajectories while performing one
specific locomotion skill, such as jump, dive, and sidestep. These skills are
then utilized by the second part of the framework which is a high-level planner
to determine a desired skill and end-effector trajectory in order to intercept
a ball flying to different regions of the goal. We deploy the proposed
framework on a Mini Cheetah quadrupedal robot and demonstrate the effectiveness
of our framework for various agile interceptions of a fast-moving ball in the
real world.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 04:54:55 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Huang",
"Xiaoyu",
""
],
[
"Li",
"Zhongyu",
""
],
[
"Xiang",
"Yanzhen",
""
],
[
"Ni",
"Yiming",
""
],
[
"Chi",
"Yufeng",
""
],
[
"Li",
"Yunhao",
""
],
[
"Yang",
"Lizhi",
""
],
[
"Peng",
"Xue Bin",
""
],
[
"Sreenath",
"Koushil",
""
]
] |
new_dataset
| 0.998036 |
2210.04446
|
Suneesh Jacob Akkarapakam
|
Akkarapakam Suneesh Jacob and Bhaskar Dasgupta
|
Dimensional synthesis of spatial manipulators for velocity and force
transmission for operation around a specified task point
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dimensional synthesis refers to design of the dimensions of manipulators by
optimising different kinds of performance indices. The motivation of this study
is to perform dimensional synthesis for a wide set of spatial manipulators by
optimising the manipulability of each manipulator around a pre-defined task
point in the workspace and to finally give a prescription of manipulators along
with their dimensions optimised for velocity and force transmission. A
systematic method to formulate Jacobian matrix of a manipulator is presented.
Optimisation of manipulability is performed for manipulation of the
end-effector around a chosen task point for 96 1-DOF manipulators, 645 2-DOF
manipulators, 8 3-DOF manipulators and 15 4-DOF manipulators taken from the
enumeration of possible manipulators up to a given number of links carried out
in the companion paper.
Prescriptions for these sets of manipulators are presented along with their
scaled condition numbers and their ordered indices. This gives the designer a
prescription of manipulators with their optimised dimensions that reflects the
performance of the end-effector around the given task point for velocity and
force transmission.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 06:05:04 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Jacob",
"Akkarapakam Suneesh",
""
],
[
"Dasgupta",
"Bhaskar",
""
]
] |
new_dataset
| 0.961196 |
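The dimensional-synthesis abstract above (2210.04446) optimizes manipulability at a task point over manipulator dimensions. As a toy planar stand-in for the paper's spatial manipulators, the sketch below grid-searches the link-length split of a 2R arm to maximize Yoshikawa's manipulability at a given task point; the task point and total length are arbitrary assumptions.

```python
import numpy as np

def jacobian_2r(l1, l2, q1, q2):
    """Velocity Jacobian of a planar 2R arm (end-effector x-y velocity vs joint rates)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(J):
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))    # Yoshikawa's measure

def score_at_task_point(l1, l2, x, y):
    """Inverse kinematics to the task point, then manipulability there (0 if unreachable)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        return 0.0
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return manipulability(jacobian_2r(l1, l2, q1, q2))

# Grid search over the split of a fixed total link length for a task point at (0.8, 0.4).
total = 2.0
best = max(((score_at_task_point(l1, total - l1, 0.8, 0.4), l1)
            for l1 in np.linspace(0.2, 1.8, 81)), key=lambda t: t[0])
print(f"best manipulability {best[0]:.3f} at l1 = {best[1]:.2f}, l2 = {total - best[1]:.2f}")
```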
2210.04449
|
Yu Wei Tan
|
Yu Wei Tan, Nicholas Chua, Clarence Koh, Anand Bhojan
|
RTSDF: Generating Signed Distance Fields in Real Time for Soft Shadow
Rendering
| null |
Pacific Graphics Short Papers, Posters, and Work-in-Progress
Papers (2020)
|
10.2312/pg.20201232
| null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Signed Distance Fields (SDFs) for surface representation are commonly
generated offline and subsequently loaded into interactive applications like
games. Since they are not updated every frame, they only provide a rigid
surface representation. While there are methods to generate them quickly on
GPU, the efficiency of these approaches is limited at high resolutions. This
paper showcases a novel technique that combines jump flooding and ray tracing
to generate approximate SDFs in real-time for soft shadow approximation,
achieving prominent shadow penumbras while maintaining interactive frame rates.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 06:08:24 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Tan",
"Yu Wei",
""
],
[
"Chua",
"Nicholas",
""
],
[
"Koh",
"Clarence",
""
],
[
"Bhojan",
"Anand",
""
]
] |
new_dataset
| 0.999073 |
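The RTSDF abstract above (2210.04449) combines jump flooding with ray tracing to build approximate SDFs for soft shadows. A compact CPU sketch of the jump flooding part alone, on a 2-D grid with the occupied region's boundary as seeds; the real technique runs on the GPU and in 3-D.

```python
import numpy as np

def jump_flood_sdf(occupancy):
    """Signed distance to the boundary of an occupied region on a 2-D grid,
    approximated with the Jump Flooding Algorithm (JFA)."""
    h, w = occupancy.shape
    # Seeds are boundary cells: occupied cells with at least one free 4-neighbour.
    interior = occupancy.copy()
    interior[1:-1, 1:-1] = (occupancy[1:-1, 1:-1] & occupancy[:-2, 1:-1] &
                            occupancy[2:, 1:-1] & occupancy[1:-1, :-2] &
                            occupancy[1:-1, 2:])
    boundary = occupancy & ~interior

    nearest = np.full((h, w, 2), -1, dtype=np.int64)   # closest-seed coordinates
    ys, xs = np.nonzero(boundary)
    nearest[ys, xs] = np.stack([ys, xs], axis=1)

    def dist2(y, x, seed):
        return np.inf if seed[0] < 0 else (y - seed[0]) ** 2 + (x - seed[1]) ** 2

    step = max(h, w) // 2
    while step >= 1:                                   # JFA passes: step halves each time
        updated = nearest.copy()
        for y in range(h):
            for x in range(w):
                best = nearest[y, x]
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            cand = nearest[ny, nx]
                            if dist2(y, x, cand) < dist2(y, x, best):
                                best = cand
                updated[y, x] = best
        nearest = updated
        step //= 2

    grid = np.indices((h, w)).transpose(1, 2, 0)
    d = np.sqrt(((grid - nearest) ** 2).sum(axis=-1))
    return np.where(occupancy, -d, d)                  # negative inside the occluder

occ = np.zeros((16, 16), dtype=bool)
occ[6:10, 6:10] = True                                 # a small box-shaped occluder
sdf = jump_flood_sdf(occ)
print(round(sdf[0, 0], 2), round(sdf[7, 7], 2))        # positive outside, negative inside
```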
2210.04483
|
Mohammad Ridwan Kabir
|
Mohammad Ridwan Kabir (1), Mohammad Ishrak Abedin (1), Rizvi Ahmed
(1), Saad Bin Ashraf (1), Hasan Mahmud (1), Md. Kamrul Hasan (1) (Department
of Computer Science and Engineering (CSE), Islamic University of Technology
(IUT), Board Bazar, Gazipur, Bangladesh-1704)
|
Auxilio: A Sensor-Based Wireless Head-Mounted Mouse for People with
Upper Limb Disability
|
28 pages, 9 figures, 5 tables
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Upper limb disability may be caused either due to accidents, neurological
disorders, or even birth defects, imposing limitations and restrictions on the
interaction with a computer for the concerned individuals using a generic
optical mouse. Our work proposes the design and development of a working
prototype of a sensor-based wireless head-mounted Assistive Mouse Controller
(AMC), Auxilio, facilitating interaction with a computer for people with upper
limb disability. Combining commercially available, low-cost motion and infrared
sensors, Auxilio solely utilizes head and cheek movements for mouse control.
Its performance has been juxtaposed with that of a generic optical mouse in
different pointing tasks as well as in typing tasks, using a virtual keyboard.
Furthermore, our work also analyzes the usability of Auxilio, featuring the
System Usability Scale. The results of different experiments reveal the
practicality and effectiveness of Auxilio as a head-mounted AMC for empowering
the upper limb disabled community.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 08:16:29 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Kabir",
"Mohammad Ridwan",
""
],
[
"Abedin",
"Mohammad Ishrak",
""
],
[
"Ahmed",
"Rizvi",
""
],
[
"Ashraf",
"Saad Bin",
""
],
[
"Mahmud",
"Hasan",
""
],
[
"Hasan",
"Md. Kamrul",
""
]
] |
new_dataset
| 0.999602 |
2210.04487
|
Liangdong Lu
|
Liangdong Lu, Chaofeng Guan, Ruihu Li, Yuezhen Ren
|
Quasi-cyclic Hermitian construction of binary quantum codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a sufficient condition for a family of 2-generator
self-orthogonal quasi-cyclic codes with respect to Hermitian inner product.
Supported by the Hermitian construction, we present algebraic constructions of
good quantum codes. 30 new binary quantum codes with good parameters improving
the best-known lower bounds on minimum distance in Grassl's code tables
\cite{Grassl:codetables} are constructed.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 08:30:14 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Lu",
"Liangdong",
""
],
[
"Guan",
"Chaofeng",
""
],
[
"Li",
"Ruihu",
""
],
[
"Ren",
"Yuezhen",
""
]
] |
new_dataset
| 0.998535 |
2210.04514
|
Luca Schmidtke
|
Luca Schmidtke, Benjamin Hou, Athanasios Vlontzos, Bernhard Kainz
|
Self-Supervised 3D Human Pose Estimation in Static Video Via Neural
Rendering
|
CV4Metaverse Workshop @ ECCV 2022
| null | null | null |
cs.CV cs.AI cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Inferring 3D human pose from 2D images is a challenging and long-standing
problem in the field of computer vision with many applications including motion
capture, virtual reality, surveillance or gait analysis for sports and
medicine. We present preliminary results for a method to estimate 3D pose from
2D video containing a single person and a static background without the need
for any manual landmark annotations. We achieve this by formulating a simple
yet effective self-supervision task: our model is required to reconstruct a
random frame of a video given a frame from another timepoint and a rendered
image of a transformed human shape template. Crucially for optimisation, our
ray casting based rendering pipeline is fully differentiable, enabling end to
end training solely based on the reconstruction task.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 09:24:07 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Schmidtke",
"Luca",
""
],
[
"Hou",
"Benjamin",
""
],
[
"Vlontzos",
"Athanasios",
""
],
[
"Kainz",
"Bernhard",
""
]
] |
new_dataset
| 0.991335 |
2210.04522
|
Kun Yan
|
Kun Yan, Lei Ji, Chenfei Wu, Jian Liang, Ming Zhou, Nan Duan, Shuai Ma
|
HORIZON: A High-Resolution Panorama Synthesis Framework
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Panorama synthesis aims to generate a visual scene with all 360-degree views
and enables an immersive virtual world. If the panorama synthesis process can
be semantically controlled, we can then build an interactive virtual world and
form an unprecedented human-computer interaction experience. Existing panoramic
synthesis methods mainly focus on dealing with the inherent challenges brought
by panoramas' spherical structure such as the projection distortion and the
in-continuity problem when stitching edges, but is hard to effectively control
semantics. The recent success of visual synthesis like DALL.E generates
promising 2D flat images with semantic control, however, it is hard to directly
be applied to panorama synthesis which inevitably generates distorted content.
Besides, both of the above methods can not effectively synthesize
high-resolution panoramas either because of quality or inference speed. In this
work, we propose a new generation framework for high-resolution panorama
images. The contributions include 1) alleviating the spherical distortion and
edge in-continuity problem through spherical modeling, 2) supporting semantic
control through both image and text hints, and 3) effectively generating
high-resolution panoramas through parallel decoding. Our experimental results
on a large-scale high-resolution Street View dataset validated the superiority
of our approach quantitatively and qualitatively.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 09:43:26 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Yan",
"Kun",
""
],
[
"Ji",
"Lei",
""
],
[
"Wu",
"Chenfei",
""
],
[
"Liang",
"Jian",
""
],
[
"Zhou",
"Ming",
""
],
[
"Duan",
"Nan",
""
],
[
"Ma",
"Shuai",
""
]
] |
new_dataset
| 0.996556 |
2210.04530
|
Simon Razniewski
|
Julien Romero and Simon Razniewski
|
Do Children Texts Hold The Key To Commonsense Knowledge?
|
6 pages, 10 tables
|
EMNLP 2022
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Compiling comprehensive repositories of commonsense knowledge is a
long-standing problem in AI. Many concerns revolve around the issue of
reporting bias, i.e., that frequency in text sources is not a good proxy for
relevance or truth. This paper explores whether children's texts hold the key
to commonsense knowledge compilation, based on the hypothesis that such content
makes fewer assumptions on the reader's knowledge, and therefore spells out
commonsense more explicitly. An analysis with several corpora shows that
children's texts indeed contain many more, and more typical, commonsense
assertions. Moreover, experiments show that this advantage can be leveraged in
popular language-model-based commonsense knowledge extraction settings, where
task-unspecific fine-tuning on small amounts of children's texts (childBERT)
already yields significant improvements. This provides a refreshing perspective
different from the common trend of deriving progress from ever larger models
and corpora.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 09:56:08 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Romero",
"Julien",
""
],
[
"Razniewski",
"Simon",
""
]
] |
new_dataset
| 0.970852 |
2210.04553
|
Yitong Xia
|
Yitong Xia, Hao Tang, Radu Timofte, Luc Van Gool
|
SiNeRF: Sinusoidal Neural Radiance Fields for Joint Pose Estimation and
Scene Reconstruction
|
Accepted (not yet published) at BMVC 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
NeRFmm is a Neural Radiance Fields (NeRF) variant that deals with joint optimization
tasks, i.e., reconstructing real-world scenes and registering camera parameters
simultaneously. Despite NeRFmm producing precise scene synthesis and pose
estimations, it still struggles to outperform the full-annotated baseline on
challenging scenes. In this work, we identify that there exists a systematic
sub-optimality in joint optimization and further identify multiple potential
sources for it. To diminish the impacts of potential sources, we propose
Sinusoidal Neural Radiance Fields (SiNeRF) that leverage sinusoidal activations
for radiance mapping and a novel Mixed Region Sampling (MRS) for selecting ray
batch efficiently. Quantitative and qualitative results show that compared to
NeRFmm, SiNeRF achieves comprehensive significant improvements in image
synthesis quality and pose estimation accuracy. Codes are available at
https://github.com/yitongx/sinerf.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 10:47:51 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Xia",
"Yitong",
""
],
[
"Tang",
"Hao",
""
],
[
"Timofte",
"Radu",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.954073 |
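The SiNeRF abstract above (2210.04553) replaces standard activations with sinusoidal ones for radiance mapping. Below is a generic SIREN-style sinusoidal layer with the usual initialization, as a small illustration of the idea rather than SiNeRF's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(fan_in, fan_out, omega=30.0, first=False):
    """One sinusoidal layer: x -> sin(omega * (W x + b)), with SIREN-style init."""
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / omega
    W = rng.uniform(-bound, bound, size=(fan_out, fan_in))
    b = rng.uniform(-bound, bound, size=fan_out)
    return lambda x: np.sin(omega * (x @ W.T + b))

# A tiny coordinate network: 3-D sample position -> feature vector, as one might
# use inside a radiance-field MLP with sinusoidal activations.
layers = [siren_layer(3, 64, first=True), siren_layer(64, 64), siren_layer(64, 64)]
x = rng.uniform(-1, 1, size=(5, 3))            # five query points
h = x
for layer in layers:
    h = layer(h)
print(h.shape)                                  # (5, 64)
```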
2210.04570
|
Luca Bonfiglioli
|
Luca Bonfiglioli, Marco Toschi, Davide Silvestri, Nicola Fioraio,
Daniele De Gregorio
|
The Eyecandies Dataset for Unsupervised Multimodal Anomaly Detection and
Localization
|
14 pages, 6 figures. To be published in ACCV 2022. For the website
and download links see https://eyecan-ai.github.io/eyecandies
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present Eyecandies, a novel synthetic dataset for unsupervised anomaly
detection and localization. Photo-realistic images of procedurally generated
candies are rendered in a controlled environment under multiple lighting
conditions, also providing depth and normal maps in an industrial conveyor
scenario. We make available anomaly-free samples for model training and
validation, while anomalous instances with precise ground-truth annotations are
provided only in the test set. The dataset comprises ten classes of candies,
each showing different challenges, such as complex textures, self-occlusions
and specularities. Furthermore, we achieve large intra-class variation by
randomly drawing key parameters of a procedural rendering pipeline, which
enables the creation of an arbitrary number of instances with photo-realistic
appearance. Likewise, anomalies are injected into the rendering graph and
pixel-wise annotations are automatically generated, overcoming human biases and
possible inconsistencies.
We believe this dataset may encourage the exploration of original approaches
to solve the anomaly detection task, e.g. by combining color, depth and normal
maps, as they are not provided by most of the existing datasets. Indeed, in
order to demonstrate how exploiting additional information may actually lead to
higher detection performance, we show the results obtained by training a deep
convolutional autoencoder to reconstruct different combinations of inputs.
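As a concrete illustration of that baseline, a minimal convolutional autoencoder over concatenated RGB, depth, and normal maps is sketched below; the channel counts, image size, and layer layout are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Toy convolutional autoencoder over concatenated RGB + depth + normals
    (3 + 1 + 3 = 7 channels); channel counts and depth are illustrative."""
    def __init__(self, in_ch=7):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

x = torch.rand(4, 7, 256, 256)        # batch of anomaly-free training samples
recon = ConvAE()(x)
loss = torch.mean((recon - x) ** 2)   # at test time, per-pixel error acts as an anomaly score
```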
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 11:19:58 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Bonfiglioli",
"Luca",
""
],
[
"Toschi",
"Marco",
""
],
[
"Silvestri",
"Davide",
""
],
[
"Fioraio",
"Nicola",
""
],
[
"De Gregorio",
"Daniele",
""
]
] |
new_dataset
| 0.999723 |
2210.04572
|
Anna Sokolova
|
Anna Sokolova, Filipp Nikitin, Anna Vorontsova, Anton Konushin
|
Floorplan-Aware Camera Poses Refinement
|
IROS 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Processing large indoor scenes is a challenging task, as scan registration
and camera trajectory estimation methods accumulate errors across time. As a
result, the quality of reconstructed scans is insufficient for some
applications, such as visual-based localization and navigation, where the
correct position of walls is crucial.
For many indoor scenes, there exists an image of a technical floorplan that
contains information about the geometry and main structural elements of the
scene, such as walls, partitions, and doors. We argue that such a floorplan is
a useful source of spatial information, which can guide a 3D model
optimization.
The standard RGB-D 3D reconstruction pipeline consists of a tracking module
applied to an RGB-D sequence and a bundle adjustment (BA) module that takes the
posed RGB-D sequence and corrects the camera poses to improve consistency. We
propose a novel optimization algorithm expanding conventional BA that leverages
the prior knowledge about the scene structure in the form of a floorplan. Our
experiments on the Redwood dataset and our self-captured data demonstrate that
utilizing a floorplan improves the accuracy of 3D reconstructions.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 11:24:10 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Sokolova",
"Anna",
""
],
[
"Nikitin",
"Filipp",
""
],
[
"Vorontsova",
"Anna",
""
],
[
"Konushin",
"Anton",
""
]
] |
new_dataset
| 0.999127 |
2210.04615
|
Tien-Phat Nguyen
|
Tien-Phat Nguyen, Trong-Thang Pham, Tri Nguyen, Hieu Le, Dung Nguyen,
Hau Lam, Phong Nguyen, Jennifer Fowler, Minh-Triet Tran, Ngan Le
|
EmbryosFormer: Deformable Transformer and Collaborative
Encoding-Decoding for Embryos Stage Development Classification
|
Accepted at WACV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The timing of cell divisions in early embryos during the In-Vitro
Fertilization (IVF) process is a key predictor of embryo viability. However,
observing cell divisions in Time-Lapse Monitoring (TLM) is a time-consuming
process and highly depends on experts. In this paper, we propose EmbryosFormer,
a computational model to automatically detect and classify cell divisions from
original time-lapse images. Our proposed network is designed as an
encoder-decoder deformable transformer with collaborative heads. The
transformer contracting path predicts per-image labels and is optimized by a
classification head. The transformer expanding path models the temporal
coherency between embryo images to enforce a monotonic non-decreasing constraint
and is optimized by a segmentation head. Both contracting and expanding paths
are synergetically learned by a collaboration head. We have benchmarked our
proposed EmbryosFormer on two datasets: a public dataset with mouse embryos
with 8-cell stage and an in-house dataset with human embryos with 4-cell stage.
Source code: https://github.com/UARK-AICV/Embryos.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 02:54:34 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Nguyen",
"Tien-Phat",
""
],
[
"Pham",
"Trong-Thang",
""
],
[
"Nguyen",
"Tri",
""
],
[
"Le",
"Hieu",
""
],
[
"Nguyen",
"Dung",
""
],
[
"Lam",
"Hau",
""
],
[
"Nguyen",
"Phong",
""
],
[
"Fowler",
"Jennifer",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Le",
"Ngan",
""
]
] |
new_dataset
| 0.971729 |
2210.04628
|
Daniel Watson
|
Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho,
Andrea Tagliasacchi, Mohammad Norouzi
|
Novel View Synthesis with Diffusion Models
| null | null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present 3DiM, a diffusion model for 3D novel view synthesis, which is able
to translate a single input view into consistent and sharp completions across
many views. The core component of 3DiM is a pose-conditional image-to-image
diffusion model, which takes a source view and its pose as inputs, and
generates a novel view for a target pose as output. 3DiM can generate multiple
views that are 3D consistent using a novel technique called stochastic
conditioning. The output views are generated autoregressively, and during the
generation of each novel view, one selects a random conditioning view from the
set of available views at each denoising step. We demonstrate that stochastic
conditioning significantly improves the 3D consistency of a naive sampler for
an image-to-image diffusion model, which involves conditioning on a single
fixed view. We compare 3DiM to prior work on the SRN ShapeNet dataset,
demonstrating that 3DiM's generated completions from a single view achieve much
higher fidelity, while being approximately 3D consistent. We also introduce a
new evaluation methodology, 3D consistency scoring, to measure the 3D
consistency of a generated object by training a neural field on the model's
output views. 3DiM is geometry free, does not rely on hyper-networks or
test-time optimization for novel view synthesis, and allows a single model to
easily scale to a large number of scenes.
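The stochastic conditioning idea can be sketched as follows; the denoise_step signature and the dummy usage are placeholders that only show the control flow, not the actual 3DiM implementation.

```python
import random
import torch

def stochastic_conditioning_sample(denoise_step, views, target_pose, num_steps, shape):
    """Sketch of a stochastic-conditioning sampler: at every denoising step a
    random conditioning view is drawn from the set of available views.
    denoise_step(x_t, cond_view, target_pose, t) is a placeholder for one
    reverse step of a pose-conditional image-to-image diffusion model."""
    x_t = torch.randn(shape)                          # start from Gaussian noise
    for t in reversed(range(num_steps)):
        cond = random.choice(views)                   # fresh conditioning view each step
        x_t = denoise_step(x_t, cond, target_pose, t)
    views.append(x_t)                                 # generated view can condition later targets
    return x_t

# Dummy usage showing only the call pattern (the "model" here is a no-op stand-in).
dummy_step = lambda x, cond, pose, t: x - 0.1 * (x - cond)
input_view = torch.zeros(3, 64, 64)
novel_view = stochastic_conditioning_sample(dummy_step, [input_view],
                                            target_pose=None, num_steps=50,
                                            shape=(3, 64, 64))
```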
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 16:59:56 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Watson",
"Daniel",
""
],
[
"Chan",
"William",
""
],
[
"Martin-Brualla",
"Ricardo",
""
],
[
"Ho",
"Jonathan",
""
],
[
"Tagliasacchi",
"Andrea",
""
],
[
"Norouzi",
"Mohammad",
""
]
] |
new_dataset
| 0.998074 |
2210.04683
|
Pablo Andreu
|
Pablo Andreu, Carles Hernandez, Tomas Picornell, Pedro Lopez, Sergi
Alcaide, Francisco Bas, Pedro Benedicte, Guillem Cabo, Feng Chang, Francisco
Fuentes, Jaume Abella
|
End-to-End QoS for the Open Source Safety-Relevant RISC-V SELENE
Platform
|
4 pages, 3 figures; work presented at the FORECAST workshop of HIPEAC
2022
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the end-to-end QoS approach to provide performance
guarantees followed in the SELENE platform, a high-performance RISC-V based
heterogeneous SoC for safety-related real-time systems. Our QoS approach
includes smart interconnect solutions for buses and NoCs, along with multicore
interference-aware statistics units to, cooperatively, achieve end-to-end QoS.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 13:32:23 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Andreu",
"Pablo",
""
],
[
"Hernandez",
"Carles",
""
],
[
"Picornell",
"Tomas",
""
],
[
"Lopez",
"Pedro",
""
],
[
"Alcaide",
"Sergi",
""
],
[
"Bas",
"Francisco",
""
],
[
"Benedicte",
"Pedro",
""
],
[
"Cabo",
"Guillem",
""
],
[
"Chang",
"Feng",
""
],
[
"Fuentes",
"Francisco",
""
],
[
"Abella",
"Jaume",
""
]
] |
new_dataset
| 0.966977 |
2210.04692
|
Qingyi Si
|
Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng Lin, Yuanxin Liu, Peng
Fu, Yanan Cao, Weiping Wang and Jie Zhou
|
Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut
Learning in VQA
|
Findings of EMNLP 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Question Answering (VQA) models are prone to learn the shortcut
solution formed by dataset biases rather than the intended solution. To
evaluate the VQA models' reasoning ability beyond shortcut learning, the VQA-CP
v2 dataset introduces a distribution shift between the training and test set
given a question type. In this way, the model cannot use the training set
shortcut (from question type to answer) to perform well on the test set.
However, VQA-CP v2 only considers one type of shortcut and thus still cannot
guarantee that the model relies on the intended solution rather than a solution
specific to this shortcut. To overcome this limitation, we propose a new
dataset that considers varying types of shortcuts by constructing different
distribution shifts in multiple OOD test sets. In addition, we overcome three
troubling practices in the use of VQA-CP v2, e.g., selecting models using
OOD test sets, and further standardize the OOD evaluation procedure. Our benchmark
provides a more rigorous and comprehensive testbed for shortcut learning in
VQA. We benchmark recent methods and find that methods specifically designed
for particular shortcuts fail to simultaneously generalize to our varying OOD
test sets. We also systematically study the varying shortcuts and provide
several valuable findings, which may promote the exploration of shortcut
learning in VQA.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 13:39:08 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Si",
"Qingyi",
""
],
[
"Meng",
"Fandong",
""
],
[
"Zheng",
"Mingyu",
""
],
[
"Lin",
"Zheng",
""
],
[
"Liu",
"Yuanxin",
""
],
[
"Fu",
"Peng",
""
],
[
"Cao",
"Yanan",
""
],
[
"Wang",
"Weiping",
""
],
[
"Zhou",
"Jie",
""
]
] |
new_dataset
| 0.995115 |
2210.04708
|
Fan Zhang
|
Fan Zhang, Shaodi You, Yu Li, Ying Fu
|
GTAV-NightRain: Photometric Realistic Large-scale Dataset for Night-time
Rain Streak Removal
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rain is transparent: it reflects and refracts light in the scene toward the
camera. In outdoor vision, rain, and especially rain streaks, degrades visibility and
therefore needs to be removed. In existing rain streak removal datasets,
although density, scale, direction and intensity have been considered,
transparency is not fully taken into account. This problem is particularly
serious in night scenes, where the appearance of rain largely depends on the
interaction with scene illuminations and changes drastically across different
positions within the image. This is problematic, because an unrealistic dataset
causes serious domain bias. In this paper, we propose the GTAV-NightRain dataset,
a large-scale synthetic night-time rain streak removal dataset. Unlike
existing datasets, by using a 3D computer graphics platform (namely GTA V), we are
able to infer the three-dimensional interaction between rain and
illumination, which ensures photometric realism. The current release of the
dataset contains 12,860 HD rainy images and 1,286 corresponding HD ground truth
images in diversified night scenes. A systematic benchmark and analysis are
provided along with the dataset to inspire further research.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 14:08:09 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Zhang",
"Fan",
""
],
[
"You",
"Shaodi",
""
],
[
"Li",
"Yu",
""
],
[
"Fu",
"Ying",
""
]
] |
new_dataset
| 0.999804 |
2210.04777
|
David Monschein
|
David Monschein and Oliver P. Waldhorst
|
mPSAuth: Privacy-Preserving and Scalable Authentication for Mobile Web
Applications
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As nowadays most web application requests originate from mobile devices,
authentication of mobile users is essential in terms of security
considerations. To this end, recent approaches rely on machine learning
techniques to analyze various aspects of user behavior as a basis for
authentication decisions. These approaches face two challenges: first,
examining behavioral data raises significant privacy concerns, and second,
approaches must scale to support a large number of users. Existing approaches
do not address these challenges sufficiently. We propose mPSAuth, an approach
for continuously tracking various data sources reflecting user behavior (e.g.,
touchscreen interactions, sensor data) and estimating the likelihood of the
current user being legitimate based on machine learning techniques. With
mPSAuth, both the authentication protocol and the machine learning models
operate on homomorphically encrypted data to ensure the users' privacy.
Furthermore, the number of machine learning models used by mPSAuth is
independent of the number of users, thus providing adequate scalability. In an
extensive evaluation based on real-world data from a mobile application, we
illustrate that mPSAuth can provide high accuracy with low encryption and
communication overhead, while the inference effort increases only to a
tolerable extent.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 12:49:34 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Monschein",
"David",
""
],
[
"Waldhorst",
"Oliver P.",
""
]
] |
new_dataset
| 0.998837 |
2210.04829
|
Pinelopi Papalampidi
|
Pinelopi Papalampidi, Mirella Lapata
|
Hierarchical3D Adapters for Long Video-to-text Summarization
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we focus on video-to-text summarization and investigate how to
best utilize multimodal information for summarizing long inputs (e.g., an
hour-long TV show) into long outputs (e.g., a multi-sentence summary). We
extend SummScreen (Chen et al., 2021), a dialogue summarization dataset
consisting of transcripts of TV episodes with reference summaries, and create a
multimodal variant by collecting corresponding full-length videos. We
incorporate multimodal information into a pre-trained textual summarizer
efficiently using adapter modules augmented with a hierarchical structure while
tuning only 3.8\% of model parameters. Our experiments demonstrate that
multimodal information offers superior performance over more memory-heavy and
fully fine-tuned textual summarization methods.
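For readers unfamiliar with adapters, a minimal bottleneck adapter module is sketched below; the hidden sizes are assumptions, and the hierarchical structure described in the abstract is omitted for brevity.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, d_model=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # Only these small layers would be trained; the host model stays frozen.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

x = torch.rand(2, 128, 768)   # (batch, tokens, hidden) from a frozen summarizer layer
print(Adapter()(x).shape)     # torch.Size([2, 128, 768])
```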
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 16:44:36 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Papalampidi",
"Pinelopi",
""
],
[
"Lapata",
"Mirella",
""
]
] |
new_dataset
| 0.991357 |
2210.04887
|
Haozhi Qi
|
Haozhi Qi, Ashish Kumar, Roberto Calandra, Yi Ma, Jitendra Malik
|
In-Hand Object Rotation via Rapid Motor Adaptation
|
CoRL 2022. Code and Website: https://haozhi.io/hora
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generalized in-hand manipulation has long been an unsolved challenge of
robotics. As a small step towards this grand goal, we demonstrate how to design
and learn a simple adaptive controller to achieve in-hand object rotation using
only fingertips. The controller is trained entirely in simulation on only
cylindrical objects, which then - without any fine-tuning - can be directly
deployed to a real robot hand to rotate dozens of objects with diverse sizes,
shapes, and weights over the z-axis. This is achieved via rapid online
adaptation of the controller to the object properties using only proprioception
history. Furthermore, natural and stable finger gaits automatically emerge from
training the control policy via reinforcement learning. Code and more videos
are available at https://haozhi.io/hora
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 17:58:45 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Qi",
"Haozhi",
""
],
[
"Kumar",
"Ashish",
""
],
[
"Calandra",
"Roberto",
""
],
[
"Ma",
"Yi",
""
],
[
"Malik",
"Jitendra",
""
]
] |
new_dataset
| 0.993585 |
1901.02514
|
Yegna Subramanian Jambunath
|
Stephanie Ger, Yegna Subramanian Jambunath, Diego Klabjan
|
Autoencoders and Generative Adversarial Networks for Imbalanced Sequence
Classification
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative Adversarial Networks (GANs) have been used in many different
applications to generate realistic synthetic data. We introduce a novel GAN
with Autoencoder (GAN-AE) architecture to generate synthetic samples for
variable length, multi-feature sequence datasets. In this model, we develop a
GAN architecture with an additional autoencoder component, where recurrent
neural networks (RNNs) are used for each component of the model in order to
generate synthetic data to improve classification accuracy for a highly
imbalanced medical device dataset. In addition to the medical device dataset,
we also evaluate the GAN-AE performance on two additional datasets and
demonstrate the application of GAN-AE to a sequence-to-sequence task where both
synthetic sequence inputs and sequence outputs must be generated. To evaluate
the quality of the synthetic data, we train encoder-decoder models both with
and without the synthetic data and compare the classification model
performance. We show that a model trained with GAN-AE generated synthetic data
outperforms models trained with synthetic data generated both with standard
oversampling techniques such as SMOTE and autoencoders, as well as with
state-of-the-art GAN-based models.
|
[
{
"version": "v1",
"created": "Tue, 8 Jan 2019 20:52:35 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Jan 2019 04:06:01 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Sep 2019 13:42:14 GMT"
},
{
"version": "v4",
"created": "Mon, 28 Oct 2019 00:28:15 GMT"
},
{
"version": "v5",
"created": "Wed, 19 Aug 2020 18:59:12 GMT"
},
{
"version": "v6",
"created": "Thu, 6 Oct 2022 19:19:29 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Ger",
"Stephanie",
""
],
[
"Jambunath",
"Yegna Subramanian",
""
],
[
"Klabjan",
"Diego",
""
]
] |
new_dataset
| 0.980274 |
2105.13204
|
Constantin Seibold
|
Zdravko Marinov, Stanka Vasileva, Qing Wang, Constantin Seibold,
Jiaming Zhang and Rainer Stiefelhagen
|
Pose2Drone: A Skeleton-Pose-based Framework for Human-Drone Interaction
| null | null |
10.23919/EUSIPCO54536.2021.9616116
| null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Drones have become a common tool, utilized in many tasks such as
aerial photography, surveillance, and delivery. However, operating a drone
requires more and more interaction with the user. A natural and safe method for
Human-Drone Interaction (HDI) is using gestures. In this paper, we introduce an
HDI framework building upon skeleton-based pose estimation. Our framework
provides the functionality to control the movement of the drone with simple arm
gestures and to follow the user while keeping a safe distance. We also propose
a monocular distance estimation method, which is entirely based on image
features and does not require any additional depth sensors. To perform
comprehensive experiments and quantitative analysis, we create a customized
testing dataset. The experiments indicate that our HDI framework can achieve an
average of 93.5\% accuracy in the recognition of 11 common gestures. The code
is available at: https://github.com/Zrrr1997/Pose2Drone
|
[
{
"version": "v1",
"created": "Thu, 27 May 2021 14:50:57 GMT"
},
{
"version": "v2",
"created": "Fri, 28 May 2021 11:15:20 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Marinov",
"Zdravko",
""
],
[
"Vasileva",
"Stanka",
""
],
[
"Wang",
"Qing",
""
],
[
"Seibold",
"Constantin",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] |
new_dataset
| 0.999535 |
2110.11048
|
Donghee Paek
|
Donghee Paek, Seung-Hyun Kong and Kevin Tirta Wijaya
|
K-Lane: Lidar Lane Dataset and Benchmark for Urban Roads and Highways
|
20 pages, 20 figures, 11 tables
|
2022 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR) Workshop on Autonomous Driving (WAD)
|
10.1109/CVPRW56347.2022.00491
| null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lane detection is a critical function for autonomous driving. With the recent
development of deep learning and the publication of camera lane datasets and
benchmarks, camera lane detection networks (CLDNs) have been remarkably
developed. Unfortunately, CLDNs rely on camera images, which are often distorted
near the vanishing line and prone to poor lighting conditions. This is in
contrast with Lidar lane detection networks (LLDNs), which can directly extract
the lane lines on the bird's eye view (BEV) for motion planning and operate
robustly under various lighting conditions. However, LLDNs have not been
actively studied, mostly due to the absence of large public lidar lane
datasets. In this paper, we introduce KAIST-Lane (K-Lane), the world's first
and the largest public urban road and highway lane dataset for Lidar. K-Lane
has more than 15K frames and contains annotations of up to six lanes under
various road and traffic conditions, e.g., occluded roads of multiple occlusion
levels, roads at day and night times, merging (converging and diverging) and
curved lanes. We also provide baseline networks we term Lidar lane detection
networks utilizing global feature correlator (LLDN-GFC). LLDN-GFC exploits the
spatial characteristics of lane lines on the point cloud, which are sparse,
thin, and stretched along the entire ground plane of the point cloud. From
experimental results, LLDN-GFC achieves the state-of-the-art performance with
an F1-score of 82.1% on K-Lane. Moreover, LLDN-GFC shows strong
performance under various lighting conditions, unlike CLDNs, and remains
robust even in the case of severe occlusions, unlike LLDNs using
conventional CNNs. The K-Lane dataset, LLDN-GFC training code, pre-trained models, and
complete development kits including evaluation, visualization and annotation
tools are available at https://github.com/kaist-avelab/k-lane.
|
[
{
"version": "v1",
"created": "Thu, 21 Oct 2021 10:46:50 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2022 11:09:19 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Paek",
"Donghee",
""
],
[
"Kong",
"Seung-Hyun",
""
],
[
"Wijaya",
"Kevin Tirta",
""
]
] |
new_dataset
| 0.999839 |
2112.12579
|
Yancong Lin
|
Yancong Lin, Silvia-Laura Pintea, Jan van Gemert
|
NeRD++: Improved 3D-mirror symmetry learning from a single image
|
BMVC 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Many objects are naturally symmetric, and this symmetry can be exploited to
infer unseen 3D properties from a single 2D image. Recently, NeRD was proposed
for accurate 3D mirror plane estimation from a single image. Despite its
unprecedented accuracy, it relies on large annotated datasets for training and
suffers from slow inference. Here we aim to improve its data and compute
efficiency. We do away with the computationally expensive 4D feature volumes
and instead explicitly compute the feature correlation of the pixel
correspondences across depth, thus creating a compact 3D volume. We also design
multi-stage spherical convolutions to identify the optimal mirror plane on the
hemisphere, whose inductive bias offers gains in data-efficiency. Experiments
on both synthetic and real-world datasets show the benefit of our proposed
changes for improved data efficiency and inference speed.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 14:37:52 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Oct 2022 08:34:42 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Lin",
"Yancong",
""
],
[
"Pintea",
"Silvia-Laura",
""
],
[
"van Gemert",
"Jan",
""
]
] |
new_dataset
| 0.996539 |
2203.01437
|
Xuanlong Yu
|
Gianni Franchi, Xuanlong Yu, Andrei Bursuc, Angel Tena, R\'emi
Kazmierczak, S\'everine Dubuisson, Emanuel Aldea, David Filliat
|
MUAD: Multiple Uncertainties for Autonomous Driving, a benchmark for
multiple uncertainty types and tasks
|
Accepted at BMVC 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Predictive uncertainty estimation is essential for safe deployment of Deep
Neural Networks in real-world autonomous systems. However, disentangling the
different types and sources of uncertainty is non trivial for most datasets,
especially since there is no ground truth for uncertainty. In addition, while
adverse weather conditions of varying intensities can disrupt neural network
predictions, they are usually under-represented in both training and test sets
in public datasets. We attempt to mitigate these setbacks and introduce the MUAD
dataset (Multiple Uncertainties for Autonomous Driving), consisting of 10,413
realistic synthetic images with diverse adverse weather conditions (night, fog,
rain, snow), out-of-distribution objects, and annotations for semantic
segmentation, depth estimation, object, and instance detection. MUAD allows to
better assess the impact of different sources of uncertainty on model
performance. We conduct a thorough experimental study of this impact on several
baseline Deep Neural Networks across multiple tasks, and release our dataset to
allow researchers to benchmark their algorithm methodically in adverse
conditions. More visualizations and the download link for MUAD are available at
https://muad-dataset.github.io/.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 22:14:12 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Oct 2022 16:25:51 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Franchi",
"Gianni",
""
],
[
"Yu",
"Xuanlong",
""
],
[
"Bursuc",
"Andrei",
""
],
[
"Tena",
"Angel",
""
],
[
"Kazmierczak",
"Rémi",
""
],
[
"Dubuisson",
"Séverine",
""
],
[
"Aldea",
"Emanuel",
""
],
[
"Filliat",
"David",
""
]
] |
new_dataset
| 0.974042 |
2208.07400
|
Fan Bai
|
Fan Bai, Alan Ritter, Peter Madrid, Dayne Freitag, John Niekrasz
|
SynKB: Semantic Search for Synthetic Procedures
|
Accepted to EMNLP 2022 Demo track
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present SynKB, an open-source, automatically extracted
knowledge base of chemical synthesis protocols. Similar to proprietary
chemistry databases such as Reaxsys, SynKB allows chemists to retrieve
structured knowledge about synthetic procedures. By taking advantage of recent
advances in natural language processing for procedural texts, SynKB supports
more flexible queries about reaction conditions, and thus has the potential to
help chemists search the literature for conditions used in relevant reactions
as they design new synthetic routes. Using customized Transformer models to
automatically extract information from 6 million synthesis procedures described
in U.S. and EU patents, we show that for many queries, SynKB has higher recall
than Reaxsys, while maintaining high precision. We plan to make SynKB available
as an open-source tool; in contrast, proprietary chemistry databases require
costly subscriptions.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 18:33:16 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 19:51:45 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Bai",
"Fan",
""
],
[
"Ritter",
"Alan",
""
],
[
"Madrid",
"Peter",
""
],
[
"Freitag",
"Dayne",
""
],
[
"Niekrasz",
"John",
""
]
] |
new_dataset
| 0.988389 |
2209.11429
|
Rishabh Misra
|
Rishabh Misra
|
News Category Dataset
|
correction of a missing citation
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
People rely on news to know what is happening around the world and inform
their daily lives. In today's world, where the proliferation of fake news is
rampant, having a large-scale and high-quality source of authentic news
articles with the published category information is valuable to learning
authentic news' Natural Language syntax and semantics. As part of this work, we
present a News Category Dataset that contains around 210k news headlines from
the year 2012 to 2022 obtained from HuffPost, along with useful metadata to
enable various NLP tasks. In this paper, we also produce some novel insights
from the dataset and describe various existing and potential applications of
our dataset.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2022 06:13:16 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2022 21:28:21 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Oct 2022 20:43:53 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Misra",
"Rishabh",
""
]
] |
new_dataset
| 0.999894 |
2209.12511
|
Yi Han
|
Yi Han, He Wang, Xiaogang Jin
|
Spatio-temporal Keyframe Control of Traffic Simulation using
Coarse-to-Fine Optimization
| null | null |
10.1111/cgf.14699
| null |
cs.GR cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel traffic trajectory editing method which uses
spatio-temporal keyframes to control vehicles during the simulation to generate
desired traffic trajectories. By taking self-motivation, path following and
collision avoidance into account, the proposed force-based traffic simulation
framework updates vehicles' motions in both the Frenet and Cartesian
coordinates. With way-points from users, lane-level navigation
can be generated by reference path planning. With a given keyframe, the
coarse-to-fine optimization is proposed to efficiently generate the plausible
trajectory which can satisfy the spatio-temporal constraints. At first, a
directed state-time graph constructed along the reference path is used to
search for a coarse-grained trajectory by mapping the keyframe as the goal.
Then, using the information extracted from the coarse trajectory as
initialization, adjoint-based optimization is applied to generate a finer
trajectory with smooth motions based on our force-based simulation. We validate
our method with extensive experiments.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2022 08:36:06 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Han",
"Yi",
""
],
[
"Wang",
"He",
""
],
[
"Jin",
"Xiaogang",
""
]
] |
new_dataset
| 0.988941 |
2210.02989
|
Ching-Yun Ko
|
Ching-Yun Ko, Pin-Yu Chen, Jeet Mohapatra, Payel Das, Luca Daniel
|
SynBench: Task-Agnostic Benchmarking of Pretrained Representations using
Synthetic Data
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recent success in fine-tuning large models, which are pretrained on broad data
at scale, on downstream tasks has led to a significant paradigm shift in deep
learning, from task-centric model design to task-agnostic representation
learning and task-specific fine-tuning. As the representations of pretrained
models are used as a foundation for different downstream tasks, this paper
proposes a new task-agnostic framework, \textit{SynBench}, to measure the
quality of pretrained representations using synthetic data. We set up a
reference by a theoretically-derived robustness-accuracy tradeoff of the class
conditional Gaussian mixture. Given a pretrained model, the representations of
data synthesized from the Gaussian mixture are used to compare with our
reference to infer the quality. By comparing the ratio of area-under-curve
between the raw data and their representations, SynBench offers a quantifiable
score for robustness-accuracy performance benchmarking. Our framework applies
to a wide range of pretrained models taking continuous data inputs and is
independent of the downstream tasks and datasets. Evaluated with several
pretrained vision transformer models, the experimental results show that our
SynBench score well matches the actual linear probing performance of the
pre-trained model when fine-tuned on downstream tasks. Moreover, our framework
can be used to inform the design of robust linear probing on pretrained
representations to mitigate the robustness-accuracy tradeoff in downstream
tasks.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 15:25:00 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Oct 2022 04:07:50 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Ko",
"Ching-Yun",
""
],
[
"Chen",
"Pin-Yu",
""
],
[
"Mohapatra",
"Jeet",
""
],
[
"Das",
"Payel",
""
],
[
"Daniel",
"Luca",
""
]
] |
new_dataset
| 0.993301 |
2210.03173
|
Abhinav Keshari
|
Abhinav K. Keshari, Hanwen Ren, Ahmed H. Qureshi
|
CoGrasp: 6-DoF Grasp Generation for Human-Robot Collaboration
| null | null | null | null |
cs.RO cs.CV cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robot grasping is an actively studied area in robotics, mainly focusing on
the quality of generated grasps for object manipulation. However, despite
advancements, these methods do not consider the human-robot collaboration
settings where robots and humans will have to grasp the same objects
concurrently. Therefore, generating robot grasps compatible with human
preferences of simultaneously holding an object becomes necessary to ensure a
safe and natural collaboration experience. In this paper, we propose a novel,
deep neural network-based method called CoGrasp that generates human-aware
robot grasps by contextualizing human preference models of object grasping into
the robot grasp selection process. We validate our approach against existing
state-of-the-art robot grasping methods through simulated and real-robot
experiments and user studies. In real robot experiments, our method achieves
about 88\% success rate in producing stable grasps that also allow humans to
interact and grasp objects simultaneously in a socially compliant manner.
Furthermore, our user study with 10 independent participants indicated that our
approach enables a safe, natural, and socially-aware human-robot object
co-grasping experience compared to a standard robot grasping technique.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 19:23:25 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Keshari",
"Abhinav K.",
""
],
[
"Ren",
"Hanwen",
""
],
[
"Qureshi",
"Ahmed H.",
""
]
] |
new_dataset
| 0.975309 |
2210.03230
|
Colin White
|
Arjun Krishnakumar, Colin White, Arber Zela, Renbo Tu, Mahmoud Safari,
Frank Hutter
|
NAS-Bench-Suite-Zero: Accelerating Research on Zero Cost Proxies
|
NeurIPS Datasets and Benchmarks Track 2022
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Zero-cost proxies (ZC proxies) are a recent architecture performance
prediction technique aiming to significantly speed up algorithms for neural
architecture search (NAS). Recent work has shown that these techniques show
great promise, but certain aspects, such as evaluating and exploiting their
complementary strengths, are under-studied. In this work, we create
NAS-Bench-Suite: we evaluate 13 ZC proxies across 28 tasks, creating by far the
largest dataset (and unified codebase) for ZC proxies, enabling
orders-of-magnitude faster experiments on ZC proxies, while avoiding
confounding factors stemming from different implementations. To demonstrate the
usefulness of NAS-Bench-Suite, we run a large-scale analysis of ZC proxies,
including a bias analysis, and the first information-theoretic analysis which
concludes that ZC proxies capture substantial complementary information.
Motivated by these findings, we present a procedure to improve the performance
of ZC proxies by reducing biases such as cell size, and we also show that
incorporating all 13 ZC proxies into the surrogate models used by NAS
algorithms can improve their predictive performance by up to 42%. Our code and
datasets are available at https://github.com/automl/naslib/tree/zerocost.
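For context, zero-cost proxies score an architecture from a single mini-batch. The sketch below computes a simple gradient-norm proxy on a toy network; it is a generic illustration, not one of the benchmark's 13 proxies reproduced verbatim.

```python
import torch
import torch.nn as nn

def grad_norm_proxy(model, batch, targets, loss_fn=nn.CrossEntropyLoss()):
    """Generic gradient-norm zero-cost proxy computed from one mini-batch."""
    model.zero_grad()
    loss = loss_fn(model(batch), targets)
    loss.backward()
    return sum(p.grad.norm().item() for p in model.parameters() if p.grad is not None)

# Toy architecture and random data, just to show the call pattern.
net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10))
score = grad_norm_proxy(net, torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))
print(score)
```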
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 21:56:26 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Krishnakumar",
"Arjun",
""
],
[
"White",
"Colin",
""
],
[
"Zela",
"Arber",
""
],
[
"Tu",
"Renbo",
""
],
[
"Safari",
"Mahmoud",
""
],
[
"Hutter",
"Frank",
""
]
] |
new_dataset
| 0.998629 |
2210.03234
|
Wali Ullah Khan
|
Muhammad Asghar Khan, Neeraj Kumar, Syed Agha Hassnain Mohsan, Wali
Ullah Khan, Moustafa M. Nasralla, Mohammed H. Alsharif, Justyna Żywiołek, Insaf
Ullah
|
Swarm of UAVs for Network Management in 6G: A Technical Review
|
19, 9
| null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fifth-generation (5G) cellular networks have led to the implementation of
beyond 5G (B5G) networks, which are capable of incorporating autonomous
services into swarms of unmanned aerial vehicles (UAVs). They provide capacity
expansion strategies to address massive connectivity issues and guarantee
ultra-high throughput and low latency, especially in extreme or emergency
situations where network density, bandwidth, and traffic patterns fluctuate. On
the one hand, 6G technology integrates AI/ML, IoT, and blockchain to establish
ultra-reliable, intelligent, secure, and ubiquitous UAV networks. 6G networks,
on the other hand, rely on new enabling technologies such as air interface and
transmission technologies, as well as a unique network design, posing new
challenges for the swarm of UAVs. Keeping these challenges in mind, this
article focuses on the security and privacy, intelligence, and
energy-efficiency issues faced by swarms of UAVs operating in 6G mobile
networks. In this state-of-the-art review, we integrated blockchain and AI/ML
with UAV networks utilizing the 6G ecosystem. The key findings are then
presented, and potential research challenges are identified. We conclude the
review by shedding light on future research in this emerging field of research.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 22:00:55 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Khan",
"Muhammad Asghar",
""
],
[
"Kumar",
"Neeraj",
""
],
[
"Mohsan",
"Syed Agha Hassnain",
""
],
[
"Khan",
"Wali Ullah",
""
],
[
"Nasralla",
"Moustafa M.",
""
],
[
"Alsharif",
"Mohammed H.",
""
],
[
"ywioek",
"Justyna",
""
],
[
"Ullah",
"Insaf",
""
]
] |
new_dataset
| 0.979655 |
2210.03254
|
Siamak Layeghy
|
Liam Daly Manocchio, Siamak Layeghy, Marius Portmann
|
Network Intrusion Detection System in a Light Bulb
| null | null | null | null |
cs.CR cs.DC cs.LG cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Internet of Things (IoT) devices are progressively being utilised in a
variety of edge applications to monitor and control home and industry
infrastructure. Due to the limited compute and energy resources, active
security protections are usually minimal in many IoT devices. This has created
a critical security challenge that has attracted researchers' attention in the
field of network security. Despite a large number of proposed Network Intrusion
Detection Systems (NIDSs), there is limited research into practical IoT
implementations, and to the best of our knowledge, no edge-based NIDS has been
demonstrated to operate on common low-power chipsets found in the majority of
IoT devices, such as the ESP8266. This research aims to address this gap by
pushing the boundaries on low-power Machine Learning (ML) based NIDSs. We
propose and develop an efficient and low-power ML-based NIDS, and demonstrate
its applicability for IoT edge applications by running it on a typical smart
light bulb. We also evaluate our system against other proposed edge-based NIDSs
and show that our model has a higher detection performance, and is
significantly faster and smaller, and therefore more applicable to a wider
range of IoT edge devices.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 23:36:04 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Manocchio",
"Liam Daly",
""
],
[
"Layeghy",
"Siamak",
""
],
[
"Portmann",
"Marius",
""
]
] |
new_dataset
| 0.972839 |
2210.03270
|
Pedro F. Proen\c{c}a
|
Pedro F. Proen\c{c}a, Patrick Spieler, Robert A. Hewitt, Jeff Delaune
|
TRADE: Object Tracking with 3D Trajectory and Ground Depth Estimates for
UAVs
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose TRADE for robust tracking and 3D localization of a moving target
in cluttered environments from UAVs equipped with a single camera. Ultimately,
TRADE enables 3D-aware target following.
Tracking-by-detection approaches are vulnerable to target switching,
especially between similar objects. Thus, TRADE predicts and incorporates the
target 3D trajectory to select the right target from the tracker's response
map. Unlike in static environments, depth estimation of a moving target from a
single camera is an ill-posed problem. Therefore, we propose a novel 3D
localization method for ground targets on complex terrain. It reasons about
scene geometry by combining ground plane segmentation, depth-from-motion and
single-image depth estimation. The benefits of using TRADE are demonstrated as
tracking robustness and depth accuracy on several dynamic scenes simulated in
this work. Additionally, we demonstrate autonomous target following using a
thermal camera by running TRADE on a quadcopter's board computer.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 00:52:21 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Proença",
"Pedro F.",
""
],
[
"Spieler",
"Patrick",
""
],
[
"Hewitt",
"Robert A.",
""
],
[
"Delaune",
"Jeff",
""
]
] |
new_dataset
| 0.998957 |
2210.03280
|
Octavian Donca
|
Octavian A. Donca, Chayapol Beokhaimook, Ayonga Hereid
|
Real-Time Navigation for Bipedal Robots in Dynamic Environments
|
Submitted to 2023 IEEE International Conference on Robotics and
Automation (ICRA). For associated experiment recordings see
https://www.youtube.com/watch?v=WzHejHx-Kzs
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The popularity of mobile robots has been steadily growing, with these robots
being increasingly utilized to execute tasks previously completed by human
workers. For bipedal robots to see this same success, robust autonomous
navigation systems need to be developed that can execute in real-time and
respond to dynamic environments. These systems can be divided into three
stages: perception, planning, and control. A holistic navigation framework for
bipedal robots must successfully integrate all three components of the
autonomous navigation problem to enable robust real-world navigation. In this
paper, we present a real-time navigation framework for bipedal robots in
dynamic environments. The proposed system addresses all components of the
navigation problem: We introduce a depth-based perception system for obstacle
detection, mapping, and localization. A two-stage planner is developed to
generate collision-free trajectories robust to unknown and dynamic
environments. Finally, trajectories are executed through the Digit bipedal robot's walking
gait controller. The navigation framework is validated through a series of
simulation and hardware experiments that contain unknown environments and
dynamic obstacles.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 01:51:20 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Donca",
"Octavian A.",
""
],
[
"Beokhaimook",
"Chayapol",
""
],
[
"Hereid",
"Ayonga",
""
]
] |
new_dataset
| 0.996523 |
2210.03293
|
Wenxing Zhu
|
Ximeng Li, Keyu Peng, Fuxing Huang and Wenxing Zhu
|
PeF: Poisson's Equation Based Large-Scale Fixed-Outline Floorplanning
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Floorplanning is the first stage of VLSI physical design. An effective
floorplanning engine definitely has a positive impact on chip design speed,
quality and performance. In this paper, we present a novel mathematical model
to characterize non-overlapping of modules, and propose a flat fixed-outline
floorplanning algorithm based on the VLSI global placement approach using
Poisson's equation. The algorithm consists of global floorplanning and
legalization phases. In global floorplanning, we redefine the potential energy
of each module based on the novel mathematical model for characterizing
non-overlapping of modules and an analytical solution of Poisson's equation. In
this scheme, the widths of soft modules appear as variables in the energy
function and can be optimized. Moreover, we design a fast approximate
computation scheme for partial derivatives of the potential energy. In
legalization, based on the defined horizontal and vertical constraint graphs,
we eliminate overlaps between modules remained after global floorplanning, by
modifying relative positions of modules. Experiments on the MCNC, GSRC, HB+ and
ami49\_x benchmarks show that, our algorithm improves the average wirelength by
at least 2\% and 5\% on small and large scale benchmarks with certain
whitespace, respectively, compared to state-of-the-art floorplanners.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 02:52:08 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Li",
"Ximeng",
""
],
[
"Peng",
"Keyu",
""
],
[
"Huang",
"Fuxing",
""
],
[
"Zhu",
"Wenxing",
""
]
] |
new_dataset
| 0.951182 |
2210.03324
|
Colin White
|
Renbo Tu, Nicholas Roberts, Vishak Prasad, Sibasis Nayak, Paarth Jain,
Frederic Sala, Ganesh Ramakrishnan, Ameet Talwalkar, Willie Neiswanger, Colin
White
|
AutoML for Climate Change: A Call to Action
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The challenge that climate change poses to humanity has spurred a rapidly
developing field of artificial intelligence research focused on climate change
applications. The climate change AI (CCAI) community works on a diverse,
challenging set of problems which often involve physics-constrained ML or
heterogeneous spatiotemporal data. It would be desirable to use automated
machine learning (AutoML) techniques to automatically find high-performing
architectures and hyperparameters for a given dataset. In this work, we
benchmark popular AutoML libraries on three high-leverage CCAI applications:
climate modeling, wind power forecasting, and catalyst discovery. We find that
out-of-the-box AutoML libraries currently fail to meaningfully surpass the
performance of human-designed CCAI models. However, we also identify a few key
weaknesses, which stem from the fact that most AutoML techniques are tailored
to computer vision and NLP applications. For example, while dozens of search
spaces have been designed for image and language data, none have been designed
for spatiotemporal data. Addressing these key weaknesses can lead to the
discovery of novel architectures that yield substantial performance gains
across numerous CCAI applications. Therefore, we present a call to action to
the AutoML community, since there are a number of concrete, promising
directions for future work in the space of AutoML for CCAI. We release our code
and a list of resources at
https://github.com/climate-change-automl/climate-change-automl.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 04:52:26 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Tu",
"Renbo",
""
],
[
"Roberts",
"Nicholas",
""
],
[
"Prasad",
"Vishak",
""
],
[
"Nayak",
"Sibasis",
""
],
[
"Jain",
"Paarth",
""
],
[
"Sala",
"Frederic",
""
],
[
"Ramakrishnan",
"Ganesh",
""
],
[
"Talwalkar",
"Ameet",
""
],
[
"Neiswanger",
"Willie",
""
],
[
"White",
"Colin",
""
]
] |
new_dataset
| 0.998958 |
2210.03332
|
Tasnim Sakib Apon
|
Touhidul Islam Chayan, Anita Islam, Eftykhar Rahman, Md. Tanzim Reza,
Tasnim Sakib Apon, MD. Golam Rabiul Alam
|
Explainable AI based Glaucoma Detection using Transfer Learning and LIME
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Glaucoma is the second leading cause of partial or complete blindness among
all visual deficiencies; it mainly occurs because of excessive pressure
in the eye due to anxiety or depression, which damages the optic nerve and
creates complications in vision. Traditional glaucoma screening is a
time-consuming process that necessitates the medical professionals' constant
attention, and even so, due to time constraints and pressure, they occasionally
misclassify cases, which leads to wrong treatment. Numerous efforts
have been made to automate the entire glaucoma classification procedure;
however, these existing models in general have black-box characteristics that
prevent users from understanding the key reasons behind a prediction and
thus medical practitioners generally cannot rely on such systems. In this
article, after comparing various pre-trained models, we propose a transfer
learning model that is able to classify glaucoma with 94.71\% accuracy. In
addition, we have utilized Local Interpretable Model-Agnostic
Explanations (LIME), which introduces explainability into our system. This
improvement enables medical professionals to obtain important and comprehensive
information that aids them in making judgments. It also lessens the opacity and
fragility of traditional deep learning models.
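To illustrate how LIME is typically applied in such a pipeline, a minimal sketch using the lime package follows; the classifier is a random stand-in and the sample image is a placeholder, since the paper's fine-tuned model and fundus images are not reproduced here.

```python
import numpy as np
from lime import lime_image
from skimage.data import astronaut   # stand-in image; the paper uses fundus scans

def predict_fn(images):
    """Placeholder classifier returning fake [healthy, glaucoma] probabilities.
    In practice this would wrap the fine-tuned transfer-learning model."""
    p = np.random.rand(len(images), 1)
    return np.hstack([p, 1.0 - p])

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    astronaut(),        # HxWx3 uint8 image
    predict_fn,
    top_labels=2,
    hide_color=0,
    num_samples=200,    # perturbed samples used to fit the local surrogate
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
```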
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 05:36:33 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Chayan",
"Touhidul Islam",
""
],
[
"Islam",
"Anita",
""
],
[
"Rahman",
"Eftykhar",
""
],
[
"Reza",
"Md. Tanzim",
""
],
[
"Apon",
"Tasnim Sakib",
""
],
[
"Alam",
"MD. Golam Rabiul",
""
]
] |
new_dataset
| 0.996024 |
2210.03405
|
Jiangtao Feng
|
Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang,
Yanming Liu, Mingxuan Wang, Lei Li, Hao Zhou
|
PARAGEN: A Parallel Generation Toolkit
|
9 pages, 1 figure, 6 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
PARAGEN is a PyTorch-based NLP toolkit for further development on parallel
generation. PARAGEN provides thirteen types of customizable plugins, helping
users to experiment quickly with novel ideas across model architectures,
optimization, and learning strategies. We implement various features, such as
unlimited data loading and automatic model selection, to enhance its industrial
usage. PARAGEN is now deployed to support various research and industry
applications at ByteDance. PARAGEN is available at
https://github.com/bytedance/ParaGen.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 08:55:10 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Feng",
"Jiangtao",
""
],
[
"Zhou",
"Yi",
""
],
[
"Zhang",
"Jun",
""
],
[
"Qian",
"Xian",
""
],
[
"Wu",
"Liwei",
""
],
[
"Zhang",
"Zhexi",
""
],
[
"Liu",
"Yanming",
""
],
[
"Wang",
"Mingxuan",
""
],
[
"Li",
"Lei",
""
],
[
"Zhou",
"Hao",
""
]
] |
new_dataset
| 0.99103 |
2210.03417
|
Ziyi Li
|
Qinye Zhou, Ziyi Li, Weidi Xie, Xiaoyun Zhang, Ya Zhang, Yanfeng Wang
|
A Simple Plugin for Transforming Images to Arbitrary Scales
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing super-resolution models are often specialized for one scale,
fundamentally limiting their use in practical scenarios. In this paper, we aim
to develop a general plugin that can be inserted into existing super-resolution
models, conveniently augmenting their ability towards Arbitrary Resolution
Image Scaling, thus termed ARIS. We make the following contributions: (i) we
propose a transformer-based plugin module, which uses spatial coordinates as
queries, iteratively attends to the low-resolution image features through
cross-attention, and outputs a visual feature for the queried spatial location,
resembling an implicit representation for images; (ii) we introduce a novel
self-supervised training scheme, that exploits consistency constraints to
effectively augment the model's ability for upsampling images towards unseen
scales, i.e. ground-truth high-resolution images are not available; (iii)
without loss of generality, we inject the proposed ARIS plugin module into
several existing models, namely, IPT, SwinIR, and HAT, showing that the
resulting models can not only maintain their original performance on fixed
scale factor but also extrapolate to unseen scales, substantially outperforming
existing any-scale super-resolution models on standard benchmarks, e.g.
Urban100, DIV2K, etc.
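A toy version of the coordinate-as-query cross-attention idea in (i) might look like the following sketch; the dimensions, head count, and flattened feature layout are illustrative assumptions rather than the ARIS module's actual design.

```python
import torch
import torch.nn as nn

class CoordCrossAttention(nn.Module):
    """Toy coordinate-as-query cross-attention: (x, y) queries attend over
    flattened low-resolution features; all dimensions are illustrative."""
    def __init__(self, feat_dim=64, d_model=128, out_dim=3):
        super().__init__()
        self.q = nn.Linear(2, d_model)            # embed query coordinates
        self.kv = nn.Linear(feat_dim, d_model)    # embed LR image features
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.out = nn.Linear(d_model, out_dim)    # RGB at the queried location

    def forward(self, coords, lr_feats):
        # coords: (B, N, 2) in [0, 1]; lr_feats: (B, H*W, feat_dim)
        q, kv = self.q(coords), self.kv(lr_feats)
        h, _ = self.attn(q, kv, kv)
        return self.out(h)

rgb = CoordCrossAttention()(torch.rand(1, 4096, 2), torch.rand(1, 32 * 32, 64))
print(rgb.shape)   # torch.Size([1, 4096, 3]) -> queried pixels at an arbitrary scale
```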
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 09:24:38 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Zhou",
"Qinye",
""
],
[
"Li",
"Ziyi",
""
],
[
"Xie",
"Weidi",
""
],
[
"Zhang",
"Xiaoyun",
""
],
[
"Zhang",
"Ya",
""
],
[
"Wang",
"Yanfeng",
""
]
] |
new_dataset
| 0.951544 |
2210.03432
|
Emmanuel Baccelli
|
Koen Zandberg, Emmanuel Baccelli, Shenghao Yuan, Fr\'ed\'eric Besson,
Jean-Pierre Talpin
|
Femto-Containers: Lightweight Virtualization and Fault Isolation For
Small Software Functions on Low-Power IoT Microcontrollers
|
arXiv admin note: text overlap with arXiv:2106.12553
|
23rd ACM/IFIP International Middleware Conference (MIDDLEWARE
2022)
| null | null |
cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low-power operating system runtimes used on IoT microcontrollers typically
provide rudimentary APIs, basic connectivity and, sometimes, a (secure)
firmware update mechanism. In contrast, on less constrained hardware, networked
software has entered the age of serverless, microservices and agility. With a
view to bridging this gap, in this paper we design Femto-Containers, a new
middleware runtime which can be embedded on heterogeneous low-power IoT
devices. Femto-Containers enable the secure deployment, execution and isolation
of small virtual software functions on low-power IoT devices, over the network.
We implement Femto-Containers, and provide integration in RIOT, a popular open
source IoT operating system. We then evaluate the performance of our
implementation, which was formally verified for fault-isolation, guaranteeing
that RIOT is shielded from logic loaded and executed in a Femto-Container. Our
experiments on various popular microcontroller architectures (Arm Cortex-M,
ESP32 and RISC-V) show that Femto-Containers offer an attractive trade-off in
terms of memory footprint overhead, energy consumption, and security.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 10:03:55 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Zandberg",
"Koen",
""
],
[
"Baccelli",
"Emmanuel",
""
],
[
"Yuan",
"Shenghao",
""
],
[
"Besson",
"Frédéric",
""
],
[
"Talpin",
"Jean-Pierre",
""
]
] |
new_dataset
| 0.986263 |
2210.03436
|
Alan Lukezic
|
Alan Lukezic and Ziga Trojer and Jiri Matas and Matej Kristan
|
Trans2k: Unlocking the Power of Deep Models for Transparent Object
Tracking
|
Accepted to BMVC 2022. Project page:
https://github.com/trojerz/Trans2k
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Visual object tracking has focused predominantly on opaque objects, while
transparent object tracking has received very little attention. Motivated by the
uniqueness of transparent objects in that their appearance is directly affected
by the background, the first dedicated evaluation dataset has emerged recently.
We contribute to this effort by proposing the first transparent object tracking
training dataset Trans2k that consists of over 2k sequences with 104,343 images
overall, annotated by bounding boxes and segmentation masks. Noting that
transparent objects can be realistically rendered by modern renderers, we
quantify domain-specific attributes and render the dataset containing visual
attributes and tracking situations not covered in the existing object training
datasets. We observe a consistent performance boost (up to 16%) across a
diverse set of modern tracking architectures when trained using Trans2k, and
show insights not previously possible due to the lack of appropriate training
sets. The dataset and the rendering engine will be publicly released to unlock
the power of modern learning-based trackers and foster new designs in
transparent object tracking.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 10:08:13 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Lukezic",
"Alan",
""
],
[
"Trojer",
"Ziga",
""
],
[
"Matas",
"Jiri",
""
],
[
"Kristan",
"Matej",
""
]
] |
new_dataset
| 0.999543 |
2210.03437
|
Irvin Haozhe Zhan
|
Irvin Haozhe Zhan, Yiheng Han, Yu-Ping Wang, Long Zeng, Yong-Jin Liu
|
KRF: Keypoint Refinement with Fusion Network for 6D Pose Estimation
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing refinement methods gradually lose their ability to further improve
pose estimation methods' accuracy. In this paper, we propose a new refinement
pipeline, Keypoint Refinement with Fusion Network (KRF), for 6D pose
estimation, especially for objects with serious occlusion. The pipeline
consists of two steps. It first completes the input point clouds via a novel
point completion network. The network uses both local and global features,
considering the pose information during point completion. Then, it registers
the completed object point cloud with the corresponding target point cloud using
Color-supported Iterative KeyPoint (CIKP). The CIKP method introduces color
information into registration and registers point cloud around each keypoint to
increase stability. The KRF pipeline can be integrated with existing popular 6D
pose estimation methods, e.g. the full flow bidirectional fusion network, to
further improve their pose estimation accuracy. Experiments show that our
method improves over the state-of-the-art method from 93.9\% to 94.4\% on the
YCB-Video dataset and from 64.4\% to 66.8\% on the Occlusion LineMOD dataset. Our
source code is available at https://github.com/zhanhz/KRF.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 10:13:30 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Zhan",
"Irvin Haozhe",
""
],
[
"Han",
"Yiheng",
""
],
[
"Wang",
"Yu-Ping",
""
],
[
"Zeng",
"Long",
""
],
[
"Liu",
"Yong-Jin",
""
]
] |
new_dataset
| 0.997789 |
2210.03441
|
Sahar Salimpour
|
Sahar Salimpour, Farhad Keramat, Jorge Pe\~na Queralta, Tomi
Westerlund
|
Decentralized Vision-Based Byzantine Agent Detection in Multi-Robot
Systems with IOTA Smart Contracts
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Multiple opportunities lie at the intersection of multi-robot systems and
distributed ledger technologies (DLTs). In this work, we investigate the
potential of new DLT solutions such as IOTA, for detecting anomalies and
byzantine agents in multi-robot systems in a decentralized manner. Traditional
blockchain approaches are not applicable to real-world networked and
decentralized robotic systems where connectivity conditions are not ideal. To
address this, we leverage recent advances in partition-tolerant and
byzantine-tolerant collaborative decision-making processes with IOTA smart
contracts. We show how our work in vision-based anomaly and change detection
can be applied to detecting byzantine agents within multiple robots operating
in the same environment. We show that IOTA smart contracts add low
computational overhead while allowing trust to be built within the multi-robot
system. The proposed approach effectively enables byzantine robot detection
based on the comparison of images submitted by the different robots and
detection of anomalies and changes between them.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 10:19:12 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Salimpour",
"Sahar",
""
],
[
"Keramat",
"Farhad",
""
],
[
"Queralta",
"Jorge Peña",
""
],
[
"Westerlund",
"Tomi",
""
]
] |
new_dataset
| 0.983124 |
2210.03479
|
Mithun Das
|
Mithun Das, Somnath Banerjee, Punyajoy Saha, Animesh Mukherjee
|
Hate Speech and Offensive Language Detection in Bengali
|
Accepted at AACL-IJCNLP 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media often serves as a breeding ground for various hateful and
offensive content. Identifying such content on social media is crucial because
of its impact on race, gender, and religion in an unprejudiced society.
However, while there is extensive research in hate speech detection in English,
there is a gap in hateful content detection in low-resource languages like
Bengali. Besides, a current trend on social media is the use of Romanized
Bengali for regular interactions. To overcome the existing research's
limitations, in this study, we develop an annotated dataset of 10K Bengali
posts consisting of 5K actual and 5K Romanized Bengali tweets. We implement
several baseline models for the classification of such hateful posts. We
further explore the interlingual transfer mechanism to boost classification
performance. Finally, we perform an in-depth error analysis by looking into the
misclassified posts by the models. While training actual and Romanized datasets
separately, we observe that XLM-Roberta performs the best. Further, we witness
that on joint training and few-shot training, MuRIL outperforms other models by
interpreting the semantic expressions better. We make our code and dataset
public for further research.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 12:06:04 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Das",
"Mithun",
""
],
[
"Banerjee",
"Somnath",
""
],
[
"Saha",
"Punyajoy",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.998681 |
2210.03482
|
Eli Verwimp
|
Eli Verwimp, Kuo Yang, Sarah Parisot, Hong Lanqing, Steven McDonagh,
Eduardo P\'erez-Pellitero, Matthias De Lange and Tinne Tuytelaars
|
CLAD: A realistic Continual Learning benchmark for Autonomous Driving
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper we describe the design and the ideas motivating a new Continual
Learning benchmark for Autonomous Driving (CLAD), which focuses on the problems
of object classification and object detection. The benchmark utilises SODA10M,
a recently released large-scale dataset that concerns autonomous driving
related problems. First, we review and discuss existing continual learning
benchmarks, how they are related, and show that most are extreme cases of
continual learning. To this end, we survey the benchmarks used in continual
learning papers at three highly ranked computer vision conferences. Next, we
introduce CLAD-C, an online classification benchmark realised through a
chronological data stream that poses both class and domain incremental
challenges; and CLAD-D, a domain incremental continual object detection
benchmark. We examine the inherent difficulties and challenges posed by the
benchmark, through a survey of the techniques and methods used by the top-3
participants in a CLAD-challenge workshop at ICCV 2021. We conclude with
possible pathways to improve the current continual learning state of the art,
and which directions we deem promising for future research.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 12:08:25 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Verwimp",
"Eli",
""
],
[
"Yang",
"Kuo",
""
],
[
"Parisot",
"Sarah",
""
],
[
"Lanqing",
"Hong",
""
],
[
"McDonagh",
"Steven",
""
],
[
"Pérez-Pellitero",
"Eduardo",
""
],
[
"De Lange",
"Matthias",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] |
new_dataset
| 0.999078 |
2210.03537
|
Massimo Battaglioni Dr.
|
Massimo Battaglioni and Giovanni Cancellieri
|
Punctured Binary Simplex Codes as LDPC codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital data transfer can be protected by means of suitable error correcting
codes. Among the families of state-of-the-art codes, LDPC (Low Density
Parity-Check) codes have received a great deal of attention recently, because
of their performance and flexibility of operation, in wireless and mobile radio
channels, as well as in cable transmission systems. In this paper, we present a
class of rate-adaptive LDPC codes, obtained as properly punctured simplex
codes. These codes allow for the use of an efficient soft-decision decoding
algorithm, provided that a condition called row-column constraint is satisfied.
This condition is tested on small-length codes, and then extended to
medium-length codes. The puncturing operations we apply do not influence the
satisfaction of the row-column constraint, assuring that a wide range of code
rates can be obtained. We can reach code rates remarkably higher than those
obtainable by the original simplex code, and the price in terms of minimum
distance turns out to be relatively small, leading to interesting trade-offs in
the resulting asymptotic coding gain.
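For intuition, the construction this abstract starts from can be sketched in a few lines: the binary simplex code of dimension k has a generator matrix whose columns are all nonzero binary k-tuples, and puncturing removes coordinates to raise the rate. The punctured positions below are arbitrary and purely illustrative, not the paper's scheme.
```python
# Hedged sketch: binary simplex code generator matrix and a toy puncturing step.
# The punctured positions here are arbitrary; in the paper, puncturing is shown
# not to affect the row-column constraint needed for soft-decision LDPC decoding.
import numpy as np
from itertools import product

def simplex_generator(k: int) -> np.ndarray:
    # Columns are all 2^k - 1 nonzero binary k-tuples (rate k / (2^k - 1)).
    cols = [c for c in product([0, 1], repeat=k) if any(c)]
    return np.array(cols, dtype=np.uint8).T

def puncture(G: np.ndarray, positions) -> np.ndarray:
    # Drop the given coordinates (columns), which increases the code rate.
    keep = [j for j in range(G.shape[1]) if j not in set(positions)]
    return G[:, keep]

k = 4
G = simplex_generator(k)
Gp = puncture(G, positions=range(5))
print(G.shape, round(k / G.shape[1], 3))    # (4, 15), rate ~0.267
print(Gp.shape, round(k / Gp.shape[1], 3))  # (4, 10), rate 0.4
```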
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 13:19:24 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Battaglioni",
"Massimo",
""
],
[
"Cancellieri",
"Giovanni",
""
]
] |
new_dataset
| 0.999526 |
2210.03570
|
Elahe Arani
|
Haris Iqbal, Hemang Chawla, Arnav Varma, Terence Brouns, Ahmed Badar,
Elahe Arani, Bahram Zonooz
|
AI-Driven Road Maintenance Inspection v2: Reducing Data Dependency &
Quantifying Road Damage
|
Accepted at IRF Global R2T Conference & Exhibition 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Road infrastructure maintenance inspection is typically a labor-intensive and
critical task to ensure the safety of all road users. Existing state-of-the-art
techniques in Artificial Intelligence (AI) for object detection and
segmentation help automate a huge chunk of this task given adequate annotated
data. However, annotating videos from scratch is cost-prohibitive. For
instance, it can take an annotator several days to annotate a 5-minute video
recorded at 30 FPS. Hence, we propose an automated labelling pipeline by
leveraging techniques like few-shot learning and out-of-distribution detection
to generate labels for road damage detection. In addition, our pipeline
includes a risk factor assessment for each damage by instance quantification to
prioritize locations for repairs which can lead to optimal deployment of road
maintenance machinery. We show that the AI models trained with these techniques
can not only generalize better to unseen real-world data with reduced
requirement for human annotation but also provide an estimate of maintenance
urgency, thereby leading to safer roads.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 14:11:27 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Iqbal",
"Haris",
""
],
[
"Chawla",
"Hemang",
""
],
[
"Varma",
"Arnav",
""
],
[
"Brouns",
"Terence",
""
],
[
"Badar",
"Ahmed",
""
],
[
"Arani",
"Elahe",
""
],
[
"Zonooz",
"Bahram",
""
]
] |
new_dataset
| 0.963616 |
2210.03590
|
Jelle Piepenbrock
|
Jelle Piepenbrock, Josef Urban, Konstantin Korovin, Miroslav
Ol\v{s}\'ak, Tom Heskes and Mikola\v{s} Janota
|
Machine Learning Meets The Herbrand Universe
|
8 pages, 10 figures
| null | null | null |
cs.LG cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The appearance of strong CDCL-based propositional (SAT) solvers has greatly
advanced several areas of automated reasoning (AR). One of the directions in AR
is thus to apply SAT solvers to expressive formalisms such as first-order
logic, for which large corpora of general mathematical problems exist today.
This is possible due to Herbrand's theorem, which allows reduction of
first-order problems to propositional problems by instantiation. The core
challenge is choosing the right instances from the typically infinite Herbrand
universe. In this work, we develop the first machine learning system targeting
this task, addressing its combinatorial and invariance properties. In
particular, we develop a GNN2RNN architecture based on an invariant graph
neural network (GNN) that learns from problems and their solutions
independently of symbol names (addressing the abundance of skolems), combined
with a recurrent neural network (RNN) that proposes for each clause its
instantiations. The architecture is then trained on a corpus of mathematical
problems and their instantiation-based proofs, and its performance is evaluated
in several ways. We show that the trained system achieves high accuracy in
predicting the right instances, and that it is capable of solving many problems
by educated guessing when combined with a ground solver. To our knowledge, this
is the first convincing use of machine learning in synthesizing relevant
elements from arbitrary Herbrand universes.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 14:46:32 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Piepenbrock",
"Jelle",
""
],
[
"Urban",
"Josef",
""
],
[
"Korovin",
"Konstantin",
""
],
[
"Olšák",
"Miroslav",
""
],
[
"Heskes",
"Tom",
""
],
[
"Janota",
"Mikolaš",
""
]
] |
new_dataset
| 0.994538 |
2210.03628
|
Tomas Van Der Velde MSc
|
Tomas van der Velde, Hamidreza Kasaei
|
GraspCaps: Capsule Networks Are All You Need for Grasping Familiar
Objects
|
Submitted to ICRA 2023, Supplementary video:
https://youtu.be/duuEDnk6HNw
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As robots become more accessible outside of industrial settings, the need for
reliable object grasping and manipulation grows significantly. In such dynamic
environments it is expected that the robot is capable of reliably grasping and
manipulating novel objects in different situations. In this work we present
GraspCaps: a novel architecture based on Capsule Networks for generating
per-point grasp configurations for familiar objects. In our work, the
activation vector of each capsule in the deepest capsule layer corresponds to
one specific class of object. This way, the network is able to extract a rich
feature vector of the objects present in the point cloud input, which is then
used for generating per-point grasp vectors. This approach should allow the
network to learn specific grasping strategies for each of the different object
categories. Along with GraspCaps we present a method for generating a large
object grasping dataset using simulated annealing. The obtained dataset is then
used to train the GraspCaps network. We performed an extensive set of
experiments to assess the performance of the proposed approach regarding
familiar object recognition accuracy and grasp success rate on challenging real
and simulated scenarios.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 15:32:34 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"van der Velde",
"Tomas",
""
],
[
"Kasaei",
"Hamidreza",
""
]
] |
new_dataset
| 0.992924 |
2210.03650
|
Kumar Shridhar
|
Kumar Shridhar, Nicholas Monath, Raghuveer Thirukovalluru, Alessandro
Stolfo, Manzil Zaheer, Andrew McCallum, Mrinmaya Sachan
|
Longtonotes: OntoNotes with Longer Coreference Chains
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Ontonotes has served as the most important benchmark for coreference
resolution. However, for ease of annotation, several long documents in
Ontonotes were split into smaller parts. In this work, we build a corpus of
coreference-annotated documents of significantly longer length than what is
currently available. We do so by providing an accurate, manually-curated,
merging of annotations from documents that were split into multiple parts in
the original Ontonotes annotation process. The resulting corpus, which we call
LongtoNotes, contains documents in multiple genres of the English language with
varying lengths, the longest of which are up to 8x the length of documents in
Ontonotes, and 2x those in Litbank. We evaluate state-of-the-art neural
coreference systems on this new corpus, analyze the relationships between model
architectures/hyperparameters and document length on performance and efficiency
of the models, and demonstrate areas of improvement in long-document
coreference modeling revealed by our new corpus. Our data and code are available
at: https://github.com/kumar-shridhar/LongtoNotes.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 15:58:41 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Shridhar",
"Kumar",
""
],
[
"Monath",
"Nicholas",
""
],
[
"Thirukovalluru",
"Raghuveer",
""
],
[
"Stolfo",
"Alessandro",
""
],
[
"Zaheer",
"Manzil",
""
],
[
"McCallum",
"Andrew",
""
],
[
"Sachan",
"Mrinmaya",
""
]
] |
new_dataset
| 0.999275 |
2210.03676
|
Gwangbin Bae
|
Gwangbin Bae, Ignas Budvytis, Roberto Cipolla
|
IronDepth: Iterative Refinement of Single-View Depth using Surface
Normal and its Uncertainty
|
BMVC 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Single image surface normal estimation and depth estimation are closely
related problems as the former can be calculated from the latter. However, the
surface normals computed from the output of depth estimation methods are
significantly less accurate than the surface normals directly estimated by
networks. To reduce such discrepancy, we introduce a novel framework that uses
surface normal and its uncertainty to recurrently refine the predicted
depth-map. The depth of each pixel can be propagated to a query pixel, using
the predicted surface normal as guidance. We thus formulate depth refinement as
a classification of choosing the neighboring pixel to propagate from. Then, by
propagating to sub-pixel points, we upsample the refined, low-resolution
output. The proposed method shows state-of-the-art performance on NYUv2 and
iBims-1 - both in terms of depth and normal. Our refinement module can also be
attached to the existing depth estimation methods to improve their accuracy. We
also show that our framework, trained only for depth estimation, can be
used for depth completion. The code is available at
https://github.com/baegwangbin/IronDepth.
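The core geometric step described above (propagating a pixel's depth to a neighbour using the predicted surface normal) can be sketched as follows. The camera intrinsics and pixel coordinates are made up for illustration, and the paper's neighbour classification, uncertainty weighting and sub-pixel upsampling are not shown.
```python
# Hedged sketch of normal-guided depth propagation: assuming pixel q lies on the
# local plane through pixel i's 3D point with normal n, its depth follows from
# simple ray-plane geometry.
import numpy as np

def propagate_depth(d_i, n, uv_i, uv_q, K):
    K_inv = np.linalg.inv(K)
    r_i = K_inv @ np.array([uv_i[0], uv_i[1], 1.0])  # back-projected ray of pixel i
    r_q = K_inv @ np.array([uv_q[0], uv_q[1], 1.0])  # back-projected ray of pixel q
    # Plane through P_i = d_i * r_i with normal n:  n . P = n . (d_i * r_i)
    return d_i * (n @ r_i) / (n @ r_q)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([0.0, 0.0, -1.0])  # fronto-parallel surface for this toy example
print(propagate_depth(2.0, n, uv_i=(320, 240), uv_q=(321, 240), K=K))  # ~2.0
```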
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 16:34:20 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Bae",
"Gwangbin",
""
],
[
"Budvytis",
"Ignas",
""
],
[
"Cipolla",
"Roberto",
""
]
] |
new_dataset
| 0.981375 |
2210.03696
|
Simin Chen
|
Simin Chen, Cong Liu, Mirazul Haque, Zihe Song, Wei Yang
|
NMTSloth: Understanding and Testing Efficiency Degradation of Neural
Machine Translation Systems
|
This paper has been accepted to ESEC/FSE 2022
| null | null | null |
cs.CL cs.AI cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Machine Translation (NMT) systems have received much recent attention
due to their human-level accuracy. While existing works mostly focus on either
improving accuracy or testing accuracy robustness, the computation efficiency
of NMT systems, which is of paramount importance due to often vast translation
demands and real-time requirements, has surprisingly received little attention.
In this paper, we make the first attempt to understand and test potential
computation efficiency robustness in state-of-the-art NMT systems. By analyzing
the working mechanism and implementation of 1455 publicly accessible NMT systems,
we observe a fundamental property in NMT systems that could be manipulated in
an adversarial manner to reduce computation efficiency significantly. Our key
motivation is to generate test inputs that could sufficiently delay the
generation of EOS such that NMT systems would have to go through enough
iterations to satisfy the pre-configured threshold. We present NMTSloth, which
develops a gradient-guided technique that searches for a minimal and
unnoticeable perturbation at character-level, token-level, and structure-level,
which sufficiently delays the appearance of EOS and forces these inputs to
reach the naturally-unreachable threshold. To demonstrate the effectiveness of
NMTSloth, we conduct a systematic evaluation on three publicly available NMT
systems: Google T5, AllenAI WMT14, and Helsinki-NLP translators. Experimental
results show that NMTSloth can increase NMT systems' response latency and
energy consumption by 85% to 3153% and 86% to 3052%, respectively, by
perturbing just one character or token in the input sentence. Our case study
shows that inputs generated by NMTSloth significantly affect the battery power
in real-world mobile devices (i.e., draining more than 30 times the battery power
of normal inputs).
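As a rough illustration of the efficiency metric under attack, the sketch below measures how many decoding iterations a public seq2seq model spends on a sentence before and after a single random character insertion. The model name and prompt are just convenient public defaults, and the random edit stands in for the paper's gradient-guided search, which is not reproduced here.
```python
# Hedged sketch: compare decoding-iteration counts of a public NMT model on a
# clean input and a randomly perturbed one. Only the measurement is shown.
import random
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small").eval()

def decode_length(sentence: str) -> int:
    ids = tok("translate English to German: " + sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=200)
    return out.shape[1]  # generated tokens ~ number of decoding iterations

src = "The weather is nice today."
pos = random.randrange(len(src))
perturbed = src[:pos] + random.choice("abcdefghijklmnopqrstuvwxyz") + src[pos:]
print(decode_length(src), decode_length(perturbed))
```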
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 17:01:01 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Chen",
"Simin",
""
],
[
"Liu",
"Cong",
""
],
[
"Haque",
"Mirazul",
""
],
[
"Song",
"Zihe",
""
],
[
"Yang",
"Wei",
""
]
] |
new_dataset
| 0.982598 |
2210.03701
|
Youngsun Wi
|
Youngsun Wi, Andy Zeng, Pete Florence, Nima Fazeli
|
VIRDO++: Real-World, Visuo-tactile Dynamics and Perception of Deformable
Objects
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deformable object manipulation can benefit from representations that
seamlessly integrate vision and touch while handling occlusions. In this work,
we present a novel approach for, and real-world demonstration of, multimodal
visuo-tactile state-estimation and dynamics prediction for deformable objects.
Our approach, VIRDO++, builds on recent progress in multimodal neural implicit
representations for deformable object state-estimation [1] via a new
formulation for deformation dynamics and a complementary state-estimation
algorithm that (i) maintains a belief over deformations, and (ii) enables
practical real-world application by removing the need for privileged contact
information. In the context of two real-world robotic tasks, we show:(i)
high-fidelity cross-modal state-estimation and prediction of deformable objects
from partial visuo-tactile feedback, and (ii) generalization to unseen objects
and contact formations.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 17:09:05 GMT"
}
] | 2022-10-10T00:00:00 |
[
[
"Wi",
"Youngsun",
""
],
[
"Zeng",
"Andy",
""
],
[
"Florence",
"Pete",
""
],
[
"Fazeli",
"Nima",
""
]
] |
new_dataset
| 0.999528 |
2106.08409
|
Aurora Saibene
|
Francesca Gasparini, Giulia Rizzi, Aurora Saibene, Elisabetta Fersini
|
Benchmark dataset of memes with text transcriptions for automatic
detection of multi-modal misogynistic content
| null |
Data in brief 44 (2022): 108526
|
10.1016/j.dib.2022.108526
| null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper we present a benchmark dataset generated as part of a project
for automatic identification of misogyny within online content, which focuses
in particular on memes. The benchmark here described is composed of 800 memes
collected from the most popular social media platforms, such as Facebook,
Twitter, Instagram and Reddit, and consulting websites dedicated to collection
and creation of memes. To gather misogynistic memes, specific keywords that
refer to misogynistic content have been considered as search criterion,
considering different manifestations of hatred against women, such as body
shaming, stereotyping, objectification and violence. In parallel, memes with no
misogynist content have been manually downloaded from the same web sources.
Among all the collected memes, three domain experts have selected a dataset of
800 memes equally balanced between misogynistic and non-misogynistic ones. This
dataset has been validated through a crowdsourcing platform, involving 60
subjects for the labelling process, in order to collect three evaluations for
each instance. Two further binary labels have been collected from both the
experts and the crowdsourcing platform, for memes evaluated as misogynistic,
concerning aggressiveness and irony. Finally for each meme, the text has been
manually transcribed. The dataset provided is thus composed of the 800 memes,
the labels given by the experts and those obtained by the crowdsourcing
validation, and the transcribed texts. This data can be used to approach the
problem of automatic detection of misogynistic content on the Web relying on
both textual and visual cues, facing phenomena that are growing every day,
such as cybersexism and technology-facilitated violence.
|
[
{
"version": "v1",
"created": "Tue, 15 Jun 2021 20:01:28 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Gasparini",
"Francesca",
""
],
[
"Rizzi",
"Giulia",
""
],
[
"Saibene",
"Aurora",
""
],
[
"Fersini",
"Elisabetta",
""
]
] |
new_dataset
| 0.999795 |
2109.06716
|
Katharina Eggensperger
|
Katharina Eggensperger, Philipp M\"uller, Neeratyoy Mallik, Matthias
Feurer, Ren\'e Sass, Aaron Klein, Noor Awad, Marius Lindauer, Frank Hutter
|
HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems
for HPO
|
Published at NeurIPS Datasets and Benchmarks Track 2021. Updated
version
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To achieve peak predictive performance, hyperparameter optimization (HPO) is
a crucial component of machine learning and its applications. Over the last
years, the number of efficient algorithms and tools for HPO grew substantially.
At the same time, the community is still lacking realistic, diverse,
computationally cheap, and standardized benchmarks. This is especially the case
for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which
includes 7 existing and 5 new benchmark families, with a total of more than 100
multi-fidelity benchmark problems. HPOBench allows running this extensible set
of multi-fidelity HPO benchmarks in a reproducible way by isolating and
packaging the individual benchmarks in containers. It also provides surrogate
and tabular benchmarks for computationally affordable yet statistically sound
evaluations. To demonstrate HPOBench's broad compatibility with various
optimization tools, as well as its usefulness, we conduct an exemplary
large-scale study evaluating 13 optimizers from 6 optimization tools. We
provide HPOBench here: https://github.com/automl/HPOBench.
|
[
{
"version": "v1",
"created": "Tue, 14 Sep 2021 14:28:51 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Nov 2021 15:09:52 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Oct 2022 15:12:56 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Eggensperger",
"Katharina",
""
],
[
"Müller",
"Philipp",
""
],
[
"Mallik",
"Neeratyoy",
""
],
[
"Feurer",
"Matthias",
""
],
[
"Sass",
"René",
""
],
[
"Klein",
"Aaron",
""
],
[
"Awad",
"Noor",
""
],
[
"Lindauer",
"Marius",
""
],
[
"Hutter",
"Frank",
""
]
] |
new_dataset
| 0.987306 |
2111.13091
|
Bas van den Heuvel
|
Bas van den Heuvel and Jorge A. P\'erez
|
Asynchronous Session-Based Concurrency: Deadlock-freedom in Cyclic
Process Networks
|
Extended version of arXiv:2110.00146, doi:10.4204/EPTCS.347.3 and
arXiv:2209.06820, doi:10.4204/EPTCS.368.5
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper considers the challenge of establishing the deadlock-freedom
property for message-passing processes that communicate asynchronously in
cyclic process networks. We present Asynchronous Priority-based Classical
Processes (APCP), a typed process framework that supports asynchronous
communication, delegation, and recursion in cyclic process networks. APCP
builds upon the Curry-Howard correspondences between linear logic and session
types; using these foundations, we establish the essential meta-theoretical
results of APCP, in particular deadlock-freedom. To illustrate the
expressiveness of APCP, we formulate and study CGV, a new concurrent
$\lambda$-calculus with asynchronous sessions. We establish the correct
encodability of asynchronous terms in CGV into asynchronous processes in APCP.
|
[
{
"version": "v1",
"created": "Thu, 25 Nov 2021 14:00:40 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Jul 2022 14:46:22 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Oct 2022 14:20:44 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Heuvel",
"Bas van den",
""
],
[
"Pérez",
"Jorge A.",
""
]
] |
new_dataset
| 0.975258 |
2111.14341
|
Bingchen Zhao
|
Bingchen Zhao, Shaozuo Yu, Wufei Ma, Mingxin Yu, Shenxiao Mei, Angtian
Wang, Ju He, Alan Yuille, Adam Kortylewski
|
OOD-CV: A Benchmark for Robustness to Out-of-Distribution Shifts of
Individual Nuisances in Natural Images
|
Project webpage: http://bzhao.me/OOD-CV/, this work is accepted as
Oral at ECCV 2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enhancing the robustness of vision algorithms in real-world scenarios is
challenging. One reason is that existing robustness benchmarks are limited, as
they either rely on synthetic data or ignore the effects of individual nuisance
factors. We introduce OOD-CV, a benchmark dataset that includes
out-of-distribution examples of 10 object categories in terms of pose, shape,
texture, context and weather conditions, and enables benchmarking models
for image classification, object detection, and 3D pose estimation. In addition
to this novel dataset, we contribute extensive experiments using popular
baseline methods, which reveal that: 1. Some nuisance factors have a much
stronger negative effect on the performance compared to others, also depending
on the vision task. 2. Current approaches to enhance robustness have only
marginal effects, and can even reduce robustness. 3. We do not observe
significant differences between convolutional and transformer architectures. We
believe our dataset provides a rich testbed to study robustness and will help
push forward research in this area.
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 06:18:46 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Dec 2021 11:53:03 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Jul 2022 10:21:26 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Oct 2022 08:19:03 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Zhao",
"Bingchen",
""
],
[
"Yu",
"Shaozuo",
""
],
[
"Ma",
"Wufei",
""
],
[
"Yu",
"Mingxin",
""
],
[
"Mei",
"Shenxiao",
""
],
[
"Wang",
"Angtian",
""
],
[
"He",
"Ju",
""
],
[
"Yuille",
"Alan",
""
],
[
"Kortylewski",
"Adam",
""
]
] |
new_dataset
| 0.99985 |
2203.04907
|
Soon Yau Cheong
|
Soon Yau Cheong, Armin Mustafa, Andrew Gilbert
|
KPE: Keypoint Pose Encoding for Transformer-based Image Generation
| null |
British Machine Vision Conference (BMVC) 2022
| null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers have recently been shown to generate high quality images from
text input. However, the existing method of pose conditioning using skeleton
image tokens is computationally inefficient and generates low quality images.
Therefore we propose a new method, Keypoint Pose Encoding (KPE), which is 10
times more memory efficient and over 73% faster at generating high quality
images from text input conditioned on the pose. The pose constraint improves
the image quality and reduces errors on body extremities such as arms and legs.
The additional benefits include invariance to changes in the target image
domain and image resolution, making it easily scalable to higher resolution
images. We demonstrate the versatility of KPE by generating photorealistic
multiperson images derived from the DeepFashion dataset. We also introduce an
evaluation method, People Count Error (PCE), that is effective in detecting errors
in generated human images.
|
[
{
"version": "v1",
"created": "Wed, 9 Mar 2022 17:38:03 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 10:00:48 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Cheong",
"Soon Yau",
""
],
[
"Mustafa",
"Armin",
""
],
[
"Gilbert",
"Andrew",
""
]
] |
new_dataset
| 0.999367 |
2205.01902
|
Zhengzhong Tu
|
Runsheng Xu, Zhengzhong Tu, Yuanqi Du, Xiaoyu Dong, Jinlong Li, Zibo
Meng, Jiaqi Ma, Alan Bovik, Hongkai Yu
|
Pik-Fix: Restoring and Colorizing Old Photos
|
WACV 2022; code: https://github.com/DerrickXuNu/Pik-Fix. arXiv admin
note: text overlap with arXiv:2202.02606
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Restoring and inpainting the visual memories that are present, but often
impaired, in old photos remains an intriguing but unsolved research topic.
Decades-old photos often suffer from severe and commingled degradation such as
cracks, defocus, and color-fading, which are difficult to treat individually
and harder to repair when they interact. Deep learning presents a plausible
avenue, but the lack of large-scale datasets of old photos makes addressing
this restoration task very challenging. Here we present a novel reference-based
end-to-end learning framework that is able to both repair and colorize old,
degraded pictures. Our proposed framework consists of three modules: a
restoration sub-network that conducts restoration from degradations, a
similarity network that performs color histogram matching and color transfer,
and a colorization subnet that learns to predict the chroma elements of images
conditioned on chromatic reference signals. The overall system makes use of
color histogram priors from reference images, which greatly reduces the need
for large-scale training data. We have also created a first-of-a-kind public
dataset of real old photos that are paired with ground truth ''pristine''
photos that have been manually restored by PhotoShop experts. We conducted
extensive experiments on this dataset and synthetic datasets, and found that
our method significantly outperforms previous state-of-the-art models using
both qualitative comparisons and quantitative measurements. The code is
available at https://github.com/DerrickXuNu/Pik-Fix.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 05:46:43 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2022 07:10:00 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Oct 2022 07:00:40 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Xu",
"Runsheng",
""
],
[
"Tu",
"Zhengzhong",
""
],
[
"Du",
"Yuanqi",
""
],
[
"Dong",
"Xiaoyu",
""
],
[
"Li",
"Jinlong",
""
],
[
"Meng",
"Zibo",
""
],
[
"Ma",
"Jiaqi",
""
],
[
"Bovik",
"Alan",
""
],
[
"Yu",
"Hongkai",
""
]
] |
new_dataset
| 0.998781 |
2207.10970
|
Lars Schmarje
|
Lars Schmarje, Stefan Reinhold, Timo Damm, Eric Orwoll, Claus-C.
Gl\"uer, Reinhard Koch
|
Opportunistic hip fracture risk prediction in Men from X-ray: Findings
from the Osteoporosis in Men (MrOS) Study
|
Oral Presentation at MICCAI 2022 Workshop (PRIME), Considered for
best paper award Predictive Intelligence in Medicine. PRIME 2022. Lecture
Notes in Computer Science, vol 13564
| null |
10.1007/978-3-031-16919-9_10
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Osteoporosis is a common disease that increases fracture risk. Hip fractures,
especially in elderly people, lead to increased morbidity, decreased quality of
life and increased mortality. Being a silent disease before fracture,
osteoporosis often remains undiagnosed and untreated. Areal bone mineral
density (aBMD) assessed by dual-energy X-ray absorptiometry (DXA) is the
gold-standard method for osteoporosis diagnosis and hence also for future
fracture prediction (prognostic). However, the required special equipment is
not broadly available everywhere, in particular not to patients in developing
countries. We propose a deep learning classification model (FORM) that can
directly predict hip fracture risk from either plain radiographs (X-ray) or 2D
projection images of computed tomography (CT) data. Our method is fully
automated and therefore well suited for opportunistic screening settings,
identifying high risk patients in a broader population without additional
screening. FORM was trained and evaluated on X-rays and CT projections from the
Osteoporosis in Men (MrOS) study. 3108 X-rays (89 incident hip fractures) or
2150 CTs (80 incident hip fractures) with a 80/20 split were used. We show that
FORM can correctly predict the 10-year hip fracture risk with a validation AUC
of 81.44 +- 3.11% / 81.04 +- 5.54% (mean +- STD) including additional
information like age, BMI, fall history and health background across a 5-fold
cross validation on the X-ray and CT cohort, respectively. Our approach
significantly (p < 0.01) outperforms previous methods like the Cox
Proportional-Hazards Model and FRAX, which achieve 70.19 +- 6.58 and 74.72 +- 7.21,
respectively, on the X-ray cohort. Our model outperforms hip aBMD based
predictions on both cohorts. We are confident that FORM can contribute to
improving osteoporosis diagnosis at an early stage.
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2022 09:35:48 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 08:14:11 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Schmarje",
"Lars",
""
],
[
"Reinhold",
"Stefan",
""
],
[
"Damm",
"Timo",
""
],
[
"Orwoll",
"Eric",
""
],
[
"Glüer",
"Claus-C.",
""
],
[
"Koch",
"Reinhard",
""
]
] |
new_dataset
| 0.988812 |
2208.07644
|
Bas van den Heuvel
|
Bas van den Heuvel and Jorge A. P\'erez
|
Asynchronous Functional Sessions: Cyclic and Concurrent (Extended
Version)
|
Extended version of a paper accepted at EXPRESS'22. arXiv admin note:
substantial text overlap with arXiv:2111.13091
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present Concurrent GV (CGV), a functional calculus with message-passing
concurrency governed by session types. With respect to prior calculi, CGV has
increased support for concurrent evaluation and for cyclic network topologies.
The design of CGV draws on APCP, a session-typed asynchronous pi-calculus
developed in prior work. Technical contributions are (i) the syntax, semantics,
and type system of CGV; (ii) a correct translation of CGV into APCP; (iii) a
technique for establishing deadlock-free CGV programs, by resorting to APCP's
priority-based type system.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 10:07:27 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 17:11:23 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Heuvel",
"Bas van den",
""
],
[
"Pérez",
"Jorge A.",
""
]
] |
new_dataset
| 0.99883 |
2208.14681
|
Jean Neraud
|
Jean N\'eraud (UNIROUEN)
|
When Variable-Length Codes Meet the Field of Error Detection
| null | null | null | null |
cs.IT cs.CL cs.DM math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a finite alphabet $A$ and a binary relation $\tau\subseteq A^*\times
A^*$, a set $X$ is $\tau$-{\it independent} if $ \tau(X)\cap X=\emptyset$.
Given a quasi-metric $d$ over $A^*$ (in the meaning of \cite{W31}) and $k\ge
1$, we associate the relation $\tau_{d,k}$ defined by $(x,y)\in\tau_{d,k}$ if,
and only if, $d(x,y)\le k$ \cite{CP02}.In the spirit of \cite{JK97,N21}, the
error detection-correction capability of variable-length codes can be expressed
in term of conditions over $\tau_{d,k}$. With respect to the prefix metric, the
factor one, and every quasi-metric associated to (anti-)automorphisms of the
free monoid, we examine whether those conditions are decidable for a given
regular code.
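For a finite code, the independence condition above can be checked directly; the sketch below does so for the prefix distance, restricting the relation to distinct words (an assumption about how the reflexive pairs are handled). The paper's contribution concerns decidability for regular, possibly infinite, codes, which this brute-force check does not address.
```python
# Hedged sketch: brute-force tau_{d,k}-independence check for a *finite* code
# under the prefix distance d(x, y) = |x| + |y| - 2*|lcp(x, y)|.
from itertools import takewhile

def lcp_len(x: str, y: str) -> int:
    return sum(1 for _ in takewhile(lambda p: p[0] == p[1], zip(x, y)))

def prefix_dist(x: str, y: str) -> int:
    return len(x) + len(y) - 2 * lcp_len(x, y)

def is_independent(code, k: int) -> bool:
    # Independence over distinct word pairs only (assumption for this sketch).
    return all(prefix_dist(x, y) > k for x in code for y in code if x != y)

print(is_independent({"aa", "ab", "abab"}, k=1))  # True: all pairwise distances >= 2
print(is_independent({"aa", "ab"}, k=2))          # False: d("aa", "ab") = 2 <= 2
```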
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 08:14:28 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 08:18:36 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Néraud",
"Jean",
"",
"UNIROUEN"
]
] |
new_dataset
| 0.999215 |
2209.10930
|
Hang Guo
|
Hang Guo, Zhengxi Hu, Jingtai Liu
|
MGTR: End-to-End Mutual Gaze Detection with Transformer
|
ACCV2022 accepted paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
People looking at each other, or mutual gaze, is ubiquitous in our daily
interactions, and detecting mutual gaze is of great significance for
understanding human social scenes. Current mutual gaze detection methods focus
on two-stage methods, whose inference speed is limited by the two-stage
pipeline and the performance in the second stage is affected by the first one.
In this paper, we propose a novel one-stage mutual gaze detection framework
called Mutual Gaze TRansformer or MGTR to perform mutual gaze detection in an
end-to-end manner. By designing mutual gaze instance triples, MGTR can detect
each human head bounding box and simultaneously infer mutual gaze relationship
based on global image information, which streamlines the whole process with
simplicity. Experimental results on two mutual gaze datasets show that our
method is able to accelerate mutual gaze detection process without losing
performance. Ablation study shows that different components of MGTR can capture
different levels of semantic information in images. Code is available at
https://github.com/Gmbition/MGTR
|
[
{
"version": "v1",
"created": "Thu, 22 Sep 2022 11:26:22 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 03:51:32 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Guo",
"Hang",
""
],
[
"Hu",
"Zhengxi",
""
],
[
"Liu",
"Jingtai",
""
]
] |
new_dataset
| 0.999429 |
2209.15159
|
Shakti Nagnath Wadekar
|
Shakti N. Wadekar and Abhishek Chaurasia
|
MobileViTv3: Mobile-Friendly Vision Transformer with Simple and
Effective Fusion of Local, Global and Input Features
|
20 pages, 7 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
MobileViT (MobileViTv1) combines convolutional neural networks (CNNs) and
vision transformers (ViTs) to create light-weight models for mobile vision
tasks. Though the main MobileViTv1-block helps to achieve competitive
state-of-the-art results, the fusion block inside the MobileViTv1-block creates
scaling challenges and has a complex learning task. We propose changes to the
fusion block that are simple and effective to create the MobileViTv3-block, which
addresses the scaling challenges and simplifies the learning task. Our proposed
MobileViTv3-block used to create MobileViTv3-XXS, XS and S models outperform
MobileViTv1 on ImageNet-1k, ADE20K, COCO and PascalVOC2012 datasets. On
ImageNet-1K, MobileViTv3-XXS and MobileViTv3-XS surpasses MobileViTv1-XXS and
MobileViTv1-XS by 2% and 1.9% respectively. Recently published MobileViTv2
architecture removes fusion block and uses linear complexity transformers to
perform better than MobileViTv1. We add our proposed fusion block to
MobileViTv2 to create MobileViTv3-0.5, 0.75 and 1.0 models. These new models
give better accuracy numbers on ImageNet-1k, ADE20K, COCO and PascalVOC2012
datasets as compared to MobileViTv2. MobileViTv3-0.5 and MobileViTv3-0.75
outperform MobileViTv2-0.5 and MobileViTv2-0.75 by 2.1% and 1.0% respectively
on the ImageNet-1K dataset. For the segmentation task, MobileViTv3-1.0 achieves 2.07%
and 1.1% better mIOU compared to MobileViTv2-1.0 on ADE20K dataset and
PascalVOC2012 dataset respectively. Our code and the trained models are
available at: https://github.com/micronDLA/MobileViTv3
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 01:04:10 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 14:19:13 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Wadekar",
"Shakti N.",
""
],
[
"Chaurasia",
"Abhishek",
""
]
] |
new_dataset
| 0.99917 |
2210.02476
|
Mayug Maniparambil
|
Mayug Maniparambil, Kevin McGuinness, Noel O'Connor
|
BaseTransformers: Attention over base data-points for One Shot Learning
|
Paper accepted at British Machine Vision Conference 2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Few shot classification aims to learn to recognize novel categories using
only limited samples per category. Most current few shot methods use a base
dataset rich in labeled examples to train an encoder that is used for obtaining
representations of support instances for novel classes. Since the test
instances are from a distribution different from the base distribution, their
feature representations are of poor quality, degrading performance. In this
paper we propose to make use of the well-trained feature representations of the
base dataset that are closest to each support instance to improve its
representation during meta-test time. To this end, we propose BaseTransformers,
that attends to the most relevant regions of the base dataset feature space and
improves support instance representations. Experiments on three benchmark data
sets show that our method works well for several backbones and achieves
state-of-the-art results in the inductive one shot setting. Code is available
at github.com/mayug/BaseTransformers
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 18:00:24 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Maniparambil",
"Mayug",
""
],
[
"McGuinness",
"Kevin",
""
],
[
"O'Connor",
"Noel",
""
]
] |
new_dataset
| 0.999062 |
2210.02535
|
Zhengxiang Shi
|
Zhengxiang Shi, Pin Ni, Meihui Wang, To Eun Kim and Aldo Lipani
|
Attention-based Ingredient Phrase Parser
|
ESANN 2022 proceedings, European Symposium on Artificial Neural
Networks, Computational Intelligence and Machine Learning
| null |
10.14428/esann/2022.es2022-10
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As virtual personal assistants have now penetrated the consumer market, with
products such as Siri and Alexa, the research community has produced several
works on task-oriented dialogue tasks such as hotel booking, restaurant
booking, and movie recommendation. Assisting users to cook is one of these
tasks that are expected to be solved by intelligent assistants, where
ingredients and their corresponding attributes, such as name, unit, and
quantity, should be provided to users precisely and promptly. However, existing
ingredient information scraped from cooking websites is in an unstructured
form with huge variation in lexical structure, for example, '1 garlic
clove, crushed', and '1 (8 ounce) package cream cheese, softened', making it
difficult to extract information exactly. To provide an engaged and successful
conversational service to users for cooking tasks, we propose a new ingredient
parsing model that can parse an ingredient phrase of a recipe into a structured
form with its corresponding attributes, with an F1-score of over 0.93. Experimental
results show that our model achieves state-of-the-art performance on AllRecipes
and Food.com datasets.
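To make the target structured form concrete, here is a tiny rule-based baseline over the two example phrases from the abstract. The unit list and heuristics are invented for illustration; the paper's actual parser is an attention-based neural model, not rules.
```python
# Hedged sketch: crude rule-based parsing of an ingredient phrase into
# quantity / unit / name / comment. Only meant to show the output structure.
import re

UNITS = {"clove", "cloves", "ounce", "ounces", "cup", "cups", "package",
         "packages", "tablespoon", "teaspoon"}

def parse_ingredient(phrase: str) -> dict:
    head, _, comment = phrase.partition(",")
    tokens = head.split()
    qty = tokens.pop(0) if tokens and re.fullmatch(r"[\d/().]+", tokens[0]) else ""
    unit = []
    # Consume a leading parenthesised size (e.g. "(8 ounce)") and known unit words.
    while tokens and (tokens[0].strip("()").replace(".", "").isdigit()
                      or tokens[0].strip("()").lower() in UNITS):
        unit.append(tokens.pop(0))
    return {"quantity": qty, "unit": " ".join(unit),
            "name": " ".join(tokens), "comment": comment.strip()}

print(parse_ingredient("1 garlic clove, crushed"))
print(parse_ingredient("1 (8 ounce) package cream cheese, softened"))
```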
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 20:09:35 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Shi",
"Zhengxiang",
""
],
[
"Ni",
"Pin",
""
],
[
"Wang",
"Meihui",
""
],
[
"Kim",
"To Eun",
""
],
[
"Lipani",
"Aldo",
""
]
] |
new_dataset
| 0.996997 |
2210.02545
|
Mayumi Ohta
|
Mayumi Ohta, Julia Kreutzer, Stefan Riezler
|
JoeyS2T: Minimalistic Speech-to-Text Modeling with JoeyNMT
|
EMNLP 2022 demo track
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
JoeyS2T is a JoeyNMT extension for speech-to-text tasks such as automatic
speech recognition and end-to-end speech translation. It inherits the core
philosophy of JoeyNMT, a minimalist NMT toolkit built on PyTorch, seeking
simplicity and accessibility. JoeyS2T's workflow is self-contained, starting
from data pre-processing, over model training and prediction to evaluation, and
is seamlessly integrated into JoeyNMT's compact and simple code base. On top of
JoeyNMT's state-of-the-art Transformer-based encoder-decoder architecture,
JoeyS2T provides speech-oriented components such as convolutional layers,
SpecAugment, CTC-loss, and WER evaluation. Despite its simplicity compared to
prior implementations, JoeyS2T performs competitively on English speech
recognition and English-to-German speech translation benchmarks. The
implementation is accompanied by a walk-through tutorial and available on
https://github.com/may-/joeys2t.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 20:19:58 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Ohta",
"Mayumi",
""
],
[
"Kreutzer",
"Julia",
""
],
[
"Riezler",
"Stefan",
""
]
] |
new_dataset
| 0.984865 |
2210.02576
|
Yongbin Liu
|
Liu Yongbin, Liu Qingjie, Chen Jiaxin, Wang Yunhong
|
Reading Chinese in Natural Scenes with a Bag-of-Radicals Prior
|
Accepted by BMVC 2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene text recognition (STR) on Latin datasets has been extensively studied
in recent years, and state-of-the-art (SOTA) models often reach high accuracy.
However, the performance on non-Latin transcripts, such as Chinese, is not
satisfactory. In this paper, we collect six open-source Chinese STR datasets
and evaluate a series of classic methods performing well on Latin datasets,
finding a significant performance drop. To improve the performance on Chinese
datasets, we propose a novel radical-embedding (RE) representation to utilize
the ideographic descriptions of Chinese characters. The ideographic
descriptions of Chinese characters are firstly converted to bags of radicals
and then fused with learnable character embeddings by a
character-vector-fusion-module (CVFM). In addition, we utilize a bag of
radicals as supervision signals for multi-task training to improve the
ideographic structure perception of our model. Experiments show that the
performance of the model with RE + CVFM + multi-task training is superior to
the baseline on six Chinese STR datasets.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 21:56:09 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Yongbin",
"Liu",
""
],
[
"Qingjie",
"Liu",
""
],
[
"Jiaxin",
"Chen",
""
],
[
"Yunhong",
"Wang",
""
]
] |
new_dataset
| 0.99541 |
2210.02579
|
Gwangbin Bae
|
Gwangbin Bae, Martin de La Gorce, Tadas Baltrusaitis, Charlie Hewitt,
Dong Chen, Julien Valentin, Roberto Cipolla, Jingjing Shen
|
DigiFace-1M: 1 Million Digital Face Images for Face Recognition
|
WACV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art face recognition models show impressive accuracy, achieving
over 99.8% on Labeled Faces in the Wild (LFW) dataset. Such models are trained
on large-scale datasets that contain millions of real human face images
collected from the internet. Web-crawled face images are severely biased (in
terms of race, lighting, make-up, etc) and often contain label noise. More
importantly, the face images are collected without explicit consent, raising
ethical concerns. To avoid such problems, we introduce a large-scale synthetic
dataset for face recognition, obtained by rendering digital faces using a
computer graphics pipeline. We first demonstrate that aggressive data
augmentation can significantly reduce the synthetic-to-real domain gap. Having
full control over the rendering pipeline, we also study how each attribute
(e.g., variation in facial pose, accessories and textures) affects the
accuracy. Compared to SynFace, a recent method trained on GAN-generated
synthetic faces, we reduce the error rate on LFW by 52.5% (accuracy from 91.93%
to 96.17%). By fine-tuning the network on a smaller number of real face images
that could reasonably be obtained with consent, we achieve accuracy that is
comparable to the methods trained on millions of real face images.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 22:02:48 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Bae",
"Gwangbin",
""
],
[
"de La Gorce",
"Martin",
""
],
[
"Baltrusaitis",
"Tadas",
""
],
[
"Hewitt",
"Charlie",
""
],
[
"Chen",
"Dong",
""
],
[
"Valentin",
"Julien",
""
],
[
"Cipolla",
"Roberto",
""
],
[
"Shen",
"Jingjing",
""
]
] |
new_dataset
| 0.99972 |
2210.02582
|
Neeldhara Misra
|
Neeldhara Misra, Manas Mulpuri, Prafullkumar Tale, Gaurav Viramgami
|
Romeo and Juliet Meeting in Forest Like Regions
|
A shorter version of this work has been accepted for presentation at
the 42nd IARCS Annual Conference on Foundations of Software Technology and
Theoretical Computer Science (FSTTCS), 2022
| null | null | null |
cs.DS cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
The game of rendezvous with adversaries is a game on a graph played by two
players: Facilitator and Divider. Facilitator has two agents and Divider has a
team of $k \ge 1$ agents. While the initial positions of Facilitator's agents
are fixed, Divider gets to select the initial positions of his agents. Then,
they take turns to move their agents to adjacent vertices (or stay put) with
Facilitator's goal to bring both her agents to the same vertex and Divider's goal
to prevent it. The computational question of interest is to determine if
Facilitator has a winning strategy against Divider with $k$ agents. Fomin,
Golovach, and Thilikos [WG, 2021] introduced this game and proved that it is
PSPACE-hard and co-W[2]-hard parameterized by the number of agents.
This hardness naturally motivates the structural parameterization of the
problem. The authors proved that it admits an FPT algorithm when parameterized
by the modular width and the number of allowed rounds. However, they left open
the complexity of the problem from the perspective of other structural
parameters. In particular, they explicitly asked whether the problem admits an
FPT or XP-algorithm with respect to the treewidth of the input graph. We answer
this question in the negative and show that Rendezvous is co-NP-hard even for
graphs of constant treewidth. Further, we show that the problem is co-W[1]-hard
when parameterized by the feedback vertex set number and the number of agents,
and is unlikely to admit a polynomial kernel when parameterized by the vertex
cover number and the number of agents. Complementing these hardness results, we
show that Rendezvous is FPT when parameterized by both the vertex cover
number and the solution size. Finally, for graphs of treewidth at most two and
grids, we show that the problem can be solved in polynomial time.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 22:12:59 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Misra",
"Neeldhara",
""
],
[
"Mulpuri",
"Manas",
""
],
[
"Tale",
"Prafullkumar",
""
],
[
"Viramgami",
"Gaurav",
""
]
] |
new_dataset
| 0.999025 |
2210.02589
|
Zhong Wang
|
Ashley Tung, Haiyan Wang, Yue Li, Zhong Wang, and Jingchao Sun
|
Spot-on: A Checkpointing Framework for Fault-Tolerant Long-running
Workloads on Cloud Spot Instances
|
3 pages, 3 figures, accepted to "Third International Symposium on
Checkpointing for Supercomputing (SuperCheck-SC22)"
https://supercheck.lbl.gov/
| null | null | null |
cs.DC q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
Spot instances offer a cost-effective solution for applications running in
the cloud computing environment. However, it is challenging to run long-running
jobs on spot instances because they are subject to unpredictable evictions.
Here, we present Spot-on, a generic software framework that supports
fault-tolerant long-running workloads on spot instances through checkpoint and
restart. Spot-on leverages existing checkpointing packages and is compatible
with the major cloud vendors. Using a genomics application as a test case, we
demonstrated that Spot-on supports both application-specific and transparent
checkpointing methods. Compared to running applications using on-demand
instances, it allows the completion of these workloads for a significant
reduction in computing costs. Compared to running applications using
application-specific checkpoint mechanisms, transparent checkpoint-protected
applications reduce runtime by up to 40%, leading to further cost savings of up
to 86%.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 22:37:39 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Tung",
"Ashley",
""
],
[
"Wang",
"Haiyan",
""
],
[
"Li",
"Yue",
""
],
[
"Wang",
"Zhong",
""
],
[
"Sun",
"Jingchao",
""
]
] |
new_dataset
| 0.963401 |
2210.02630
|
Leyi Wei
|
Yu Wang, Chao Pang, Yuzhe Wang, Yi Jiang, Junru Jin, Sirui Liang, Quan
Zou, and Leyi Wei
|
MechRetro is a chemical-mechanism-driven graph learning framework for
interpretable retrosynthesis prediction and pathway planning
| null | null | null | null |
cs.LG physics.chem-ph q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Leveraging artificial intelligence for automatic retrosynthesis speeds up
organic pathway planning in digital laboratories. However, existing deep
learning approaches are unexplainable, acting like a "black box" with few insights,
notably limiting their applications in real retrosynthesis scenarios. Here, we
propose MechRetro, a chemical-mechanism-driven graph learning framework for
interpretable retrosynthetic prediction and pathway planning, which learns
several retrosynthetic actions to simulate a reverse reaction via elaborate
self-adaptive joint learning. By integrating chemical knowledge as prior
information, we design a novel Graph Transformer architecture to adaptively
learn discriminative and chemically meaningful molecule representations,
highlighting the strong capacity in molecule feature representation learning.
We demonstrate that MechRetro outperforms the state-of-the-art approaches for
retrosynthetic prediction with a large margin on large-scale benchmark
datasets. Extending MechRetro to the multi-step retrosynthesis analysis, we
identify efficient synthetic routes via an interpretable reasoning mechanism,
leading to a better understanding in the realm of knowledgeable synthetic
chemists. We also showcase that MechRetro discovers a novel pathway for
protokylol, along with energy scores for uncertainty assessment, broadening the
applicability for practical scenarios. Overall, we expect MechRetro to provide
meaningful insights for high-throughput automated organic synthesis in drug
discovery.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 01:27:53 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Pang",
"Chao",
""
],
[
"Wang",
"Yuzhe",
""
],
[
"Jiang",
"Yi",
""
],
[
"Jin",
"Junru",
""
],
[
"Liang",
"Sirui",
""
],
[
"Zou",
"Quan",
""
],
[
"Wei",
"Leyi",
""
]
] |
new_dataset
| 0.993705 |
2210.02650
|
Charith Perera
|
Bayan Al Muhander, Omer Rana, Nalin Arachchilage, Charith Perera
|
PrivacyCube: A Tangible Device for Improving Privacy Awareness in IoT
|
In Proceedings of the 2022 IEEE/ACM Seventh International Conference
on Internet-of-Things Design and Implementation (IoTDI) 2022
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consumers increasingly bring IoT devices into their living spaces without
understanding how their data is collected, processed, and used. We present
PrivacyCube, a novel tangible device designed to explore the extent to which
privacy awareness in smart homes can be elevated. PrivacyCube visualises IoT
devices' data consumption displaying privacy-related notices. PrivacyCube aims
at assisting families to (i) understand key privacy aspects better and (ii)
have conversations around data management practices of IoT devices. Thus,
families can learn and make informed privacy decisions collectively.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 02:44:06 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Muhander",
"Bayan Al",
""
],
[
"Rana",
"Omer",
""
],
[
"Arachchilage",
"Nalin",
""
],
[
"Perera",
"Charith",
""
]
] |
new_dataset
| 0.996538 |
2210.02651
|
Junjie Li
|
Junjie Li, Jinqiu Yang
|
Tracking the Evolution of Static Code Warnings: the State-of-the-Art and
a Better Approach
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Static bug detection tools help developers detect problems in the code,
including bad programming practices and potential defects. However, it is known
that static bug detectors remain underutilized for various reasons. Recent
efforts to incorporate static bug detectors into modern software development
workflows, such as code review and continuous integration, have been shown to
better motivate developers to fix the reported warnings on the fly.
Moreover, tracking static code warnings benefits many downstream software
engineering tasks, such as learning fix patterns for automated program repair
and learning which warnings are of more interest, so they can be prioritized
automatically. Hence, precisely tracking the warnings reported by static bug
detectors is critical to further improving their utilization.
  In this paper, we study the effectiveness of the state-of-the-art (SOA)
solution for tracking the warnings reported by static bug detectors and propose
a better solution based on our analysis of the insufficiencies of the SOA
solution. In particular, we examined over 2,000 commits in four large-scale
open-source systems (i.e., JClouds, Kafka, Spring-boot, and Guava) and crafted
a dataset of 3,452 static code warnings reported by two static bug detectors
(i.e., Spotbugs and PMD). We manually uncovered the ground-truth evolution
status of the static warnings: persistent, resolved, or newly introduced.
Moreover, through manual analysis, we identified the main reasons behind the
insufficiencies of the SOA solution. Finally, we propose a better approach to
tracking static warnings over the software development history. Our evaluation
shows that our proposed approach significantly improves the precision of the
tracking, i.e., from 66.9% to 90.0%.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 03:02:32 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Li",
"Junjie",
""
],
[
"Yang",
"Jinqiu",
""
]
] |
new_dataset
| 0.998379 |
2210.02766
|
Jianjun Zhao
|
Kentaro Murakami, Jianjun Zhao
|
AutoQC: Automated Synthesis of Quantum Circuits Using Neural Network
|
9 pages, 15 figures
| null | null | null |
cs.SE cs.LG cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While the ability to build quantum computers is improving dramatically, the
development of quantum algorithms remains limited and relies on human insight
and ingenuity. Although a number of quantum programming languages have been
developed, it is challenging for software developers who are not familiar with
quantum computing to learn and use these languages. It is, therefore, necessary
to develop tools that support developing new quantum algorithms and programs
automatically. This paper proposes AutoQC, an approach to automatically
synthesizing quantum circuits from input-output pairs using a neural network.
We consider a quantum circuit to be a sequence of quantum gates and synthesize
a circuit probabilistically, prioritizing candidate gates with a neural network
at each step. The experimental results highlight the ability of AutoQC to
synthesize some essential quantum circuits at a lower cost.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 09:05:42 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Murakami",
"Kentaro",
""
],
[
"Zhao",
"Jianjun",
""
]
] |
new_dataset
| 0.994379 |
2210.02864
|
Sven Hertling
|
Sven Hertling, Heiko Paulheim
|
DBkWik++ -- Multi Source Matching of Knowledge Graphs
|
Published at KGSWC 2022
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Large knowledge graphs like DBpedia and YAGO are always based on the same
source, i.e., Wikipedia. But many more wikis, for example those hosted on
platforms like Fandom, contain information about long-tail entities. In this
paper, we present the approach and analysis of DBkWik++, a fused knowledge
graph built from thousands of wikis. A modified version of the DBpedia
framework is applied to each wiki, which results in many isolated knowledge
graphs. With an incremental merge-based approach, we reuse one-to-one matching
systems to solve the multi-source KG matching task. Based on this alignment,
we create a consolidated knowledge graph with more than 15 million instances.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 12:31:08 GMT"
}
] | 2022-10-07T00:00:00 |
[
[
"Hertling",
"Sven",
""
],
[
"Paulheim",
"Heiko",
""
]
] |
new_dataset
| 0.998576 |