id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2212.06218
|
Devansh Sharma
|
Devansh Sharma, Tihitina Hade, Qing Tian
|
Comparison Of Deep Object Detectors On A New Vulnerable Pedestrian
Dataset
|
7 pages, 4 Figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Pedestrian safety is a primary concern in autonomous driving. The
under-representation of vulnerable groups in today's pedestrian datasets points
to an urgent need for a dataset of vulnerable road users. In this paper, we
first introduce a new vulnerable pedestrian detection dataset, the BG Vulnerable
Pedestrian (BGVP) dataset, to help train well-rounded models and thus spur
research that increases the efficacy of vulnerable pedestrian detection. The
dataset includes four classes, i.e., Children Without Disability, Elderly
without Disability, With Disability, and Non-Vulnerable. This dataset consists
of images collected from the public domain and manually-annotated bounding
boxes. In addition, on the proposed dataset, we have trained and tested five
state-of-the-art object detection models, i.e., YOLOv4, YOLOv5, YOLOX, Faster
R-CNN, and EfficientDet. Our results indicate that YOLOX and YOLOv4 perform the
best on our dataset, YOLOv4 scoring 0.7999 and YOLOX scoring 0.7779 on the mAP
0.5 metric, while YOLOX outperforms YOLOv4 by 3.8 percent on the mAP 0.5:0.95
metric. Generally speaking, all five detectors do well in predicting the With
Disability class and perform poorly on the Elderly Without Disability class.
YOLOX consistently outperforms all other detectors on the mAP (0.5:0.95) per
class metric, obtaining 0.5644, 0.5242, 0.4781, and 0.6796 for Children Without
Disability, Elderly Without Disability, Non-vulnerable, and With Disability,
respectively. Our dataset and codes are available at
https://github.com/devvansh1997/BGVP.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 19:59:47 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Sharma",
"Devansh",
""
],
[
"Hade",
"Tihitina",
""
],
[
"Tian",
"Qing",
""
]
] |
new_dataset
| 0.999496 |
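The mAP 0.5 metric cited in this record counts a prediction as a true positive when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of that IoU test (the boxes below are hypothetical, not taken from BGVP):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# At mAP 0.5, a prediction counts as correct when IoU >= 0.5
pred, gt = (10, 10, 50, 50), (12, 12, 48, 52)
matched = iou(pred, gt) >= 0.5
```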
2212.06249
|
Giuseppe Cotardo
|
Giuseppe Cotardo
|
Zeta Functions for Tensor Codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this work we introduce a new class of optimal tensor codes related to the
Ravagnani-type anticodes, namely the $j$-tensor maximum rank distance codes. We
show that it extends the family of $j$-maximum rank distance codes and contains
the $j$-tensor binomial moment determined codes (with respect to the
Ravagnani-type anticodes) as a proper subclass. We define and study the
generalized zeta function for tensor codes. We establish connections between
this object and the weight enumerator of a code with respect to the
Ravagnani-type anticodes. We introduce a new refinement of the invariants of
tensor codes exploiting the structure of product lattices of some classes of
anticodes and we derive the corresponding MacWilliams identities. In this
framework, we also define a multivariate version of the tensor weight
enumerator and we establish relations with the corresponding zeta function. As
an application we derive connections on the generalized tensor weights related
to the Delsarte and Ravagnani-type anticodes.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 21:17:05 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Cotardo",
"Giuseppe",
""
]
] |
new_dataset
| 0.999574 |
2212.06251
|
Marco Mussi
|
Francesco Bacchiocchi, Gianmarco Genalti, Davide Maran, Marco Mussi,
Marcello Restelli, Nicola Gatti, Alberto Maria Metelli
|
Autoregressive Bandits
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autoregressive processes naturally arise in a large variety of real-world
scenarios, including, e.g., stock markets, sales forecasting, weather prediction,
advertising, and pricing. When addressing a sequential decision-making problem
in such a context, the temporal dependence between consecutive observations
should be properly accounted for in order to converge to the optimal decision policy. In
this work, we propose a novel online learning setting, named Autoregressive
Bandits (ARBs), in which the observed reward follows an autoregressive process
of order $k$, whose parameters depend on the action the agent chooses, within a
finite set of $n$ actions. Then, we devise an optimistic regret minimization
algorithm AutoRegressive Upper Confidence Bounds (AR-UCB) that suffers regret
of order $\widetilde{\mathcal{O}} \left(
\frac{(k+1)^{3/2}\sqrt{nT}}{(1-\Gamma)^2} \right)$, where $T$ is the optimization
horizon and $\Gamma < 1$ is an index of the stability of the system. Finally, we
present a numerical validation in several synthetic settings and one real-world
setting, comparing against general- and special-purpose bandit baselines and
showing the advantages of the proposed approach.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 21:37:36 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Bacchiocchi",
"Francesco",
""
],
[
"Genalti",
"Gianmarco",
""
],
[
"Maran",
"Davide",
""
],
[
"Mussi",
"Marco",
""
],
[
"Restelli",
"Marcello",
""
],
[
"Gatti",
"Nicola",
""
],
[
"Metelli",
"Alberto Maria",
""
]
] |
new_dataset
| 0.95082 |
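The ARB reward model in the abstract above can be sketched as a simple AR(k) simulation; the coefficients below are illustrative only, not taken from the paper:

```python
import random

random.seed(0)

def simulate_ar_rewards(gammas, horizon, noise_std=0.1):
    """Rewards following an AR(k) process:
    r_t = gamma_0 + sum_{i=1..k} gamma_i * r_{t-i} + noise.
    Stability requires the AR coefficients to sum below 1 (Gamma < 1)."""
    k = len(gammas) - 1
    rewards = [0.0] * k  # initial history
    for _ in range(horizon):
        r = gammas[0] + sum(g * rewards[-i] for i, g in enumerate(gammas[1:], start=1))
        rewards.append(r + random.gauss(0.0, noise_std))
    return rewards[k:]

# Coefficients sum to 0.4 < 1, so rewards hover near 0.5 / (1 - 0.4)
rewards = simulate_ar_rewards([0.5, 0.3, 0.1], horizon=2000)
```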
2212.06259
|
Yongding Tian
|
Yongding Tian, Matthijs A. Reukers, Zaid Al-Ars, Peter Hofstee,
Matthijs Brobbel, Johan Peltenburg, Jeroen van Straten
|
Tydi-lang: A Language for Typed Streaming Hardware
|
8 pages and 1 page of reference, 4 figures, 4 tables
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Transferring composite data structures with variable-length fields often
requires designing non-trivial protocols that are not compatible between
hardware designs. When each project designs its own data format and protocols,
the ability of hardware developers to collaborate is diminished, which is
an issue especially in the open-source community. Because the high-level
meaning of a protocol is often lost in translation to low-level languages when
a custom protocol needs to be designed, extra documentation is required, the
interpretation of which introduces new opportunities for errors.
The Tydi specification (Tydi-spec) was proposed to address the above issues
by codifying the composite and variable-length data structures in a type and
providing a standard protocol to transfer typed data among hardware components.
The Tydi intermediate representation (Tydi-IR) extends the Tydi-spec by
defining typed interfaces, typed components, and connections among typed
components.
In this paper, we propose Tydi-lang, a high-level hardware description
language (HDL) for streaming designs. The language incorporates Tydi-spec to
describe typed streams and provides templates to describe abstract reusable
components. We also implement an open-source compiler from Tydi-lang to
Tydi-IR. We leverage a Tydi-IR to VHDL compiler, and also present a simulator
blueprint to identify streaming bottlenecks. We show several Tydi-lang examples
to translate high-level SQL to VHDL to demonstrate that Tydi-lang can
efficiently raise the level of abstraction and reduce design effort.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 21:55:52 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Tian",
"Yongding",
""
],
[
"Reukers",
"Matthijs A.",
""
],
[
"Al-Ars",
"Zaid",
""
],
[
"Hofstee",
"Peter",
""
],
[
"Brobbel",
"Matthijs",
""
],
[
"Peltenburg",
"Johan",
""
],
[
"van Straten",
"Jeroen",
""
]
] |
new_dataset
| 0.999693 |
2212.06292
|
Nika Mansouri Ghiasi
|
Nika Mansouri Ghiasi, Nandita Vijaykumar, Geraldo F. Oliveira, Lois
Orosa, Ivan Fernandez, Mohammad Sadrosadati, Konstantinos Kanellopoulos,
Nastaran Hajinazar, Juan G\'omez Luna, Onur Mutlu
|
ALP: Alleviating CPU-Memory Data Movement Overheads in Memory-Centric
Systems
|
To appear in IEEE TETC
| null | null | null |
cs.AR cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Partitioning applications between near-data processing (NDP) and host CPU cores
causes inter-segment data movement overhead, i.e., the cost of moving data
generated in one segment (e.g., instructions, functions) and used in consecutive segments. Prior
works take two approaches to this problem. The first class of works maps
segments to NDP or host cores based on the properties of each segment,
neglecting the inter-segment data movement overhead. The second class of works
partitions applications based on the overall memory bandwidth saving of each
segment, and does not offload each segment to the best-fitting core if they
incur high inter-segment data movement. We show that 1) mapping each segment to
its best-fitting core ideally can provide substantial benefits, and 2) the
inter-segment data movement reduces this benefit significantly.
To this end, we introduce ALP, a new programmer-transparent technique to
leverage the performance benefits of NDP by alleviating the inter-segment data
movement overhead between host and memory and enabling efficient partitioning
of applications. ALP alleviates the inter-segment data movement overhead by
proactively and accurately transferring the required data between the segments.
This is based on the key observation that the instructions that generate the
inter-segment data stay the same across different executions of a program on
different inputs. ALP uses a compiler pass to identify these instructions and
uses specialized hardware to transfer data between the host and NDP cores at
runtime. ALP efficiently maps application segments to either host or NDP
considering 1) the properties of each segment, 2) the inter-segment data
movement overhead, and 3) whether this overhead can be alleviated in a timely
manner. We evaluate ALP across a wide range of workloads and show on average
54.3% and 45.4% speedup compared to only-host CPU or only-NDP executions,
respectively.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 00:10:55 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Ghiasi",
"Nika Mansouri",
""
],
[
"Vijaykumar",
"Nandita",
""
],
[
"Oliveira",
"Geraldo F.",
""
],
[
"Orosa",
"Lois",
""
],
[
"Fernandez",
"Ivan",
""
],
[
"Sadrosadati",
"Mohammad",
""
],
[
"Kanellopoulos",
"Konstantinos",
""
],
[
"Hajinazar",
"Nastaran",
""
],
[
"Luna",
"Juan Gómez",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.974598 |
2212.06346
|
Jack FitzGerald
|
Christopher Hench, Charith Peris, Jack FitzGerald, Kay Rottmann
|
The Massively Multilingual Natural Language Understanding 2022
(MMNLU-22) Workshop and Competition
|
5 pages
|
Proceedings of the Massively Multilingual Natural Language
Understanding Workshop (MMNLU-22), pages 83 - 87 December 7, 2022, copyright
2022 Association for Computational Linguistics
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite recent progress in Natural Language Understanding (NLU), the creation
of multilingual NLU systems remains a challenge. It is common to have NLU
systems limited to a subset of languages due to lack of available data. They
also often vary widely in performance. We launch a three-phase approach to
address the limitations in NLU and help propel NLU technology to new heights.
We release a 52-language dataset called the Multilingual Amazon SLU resource
package (SLURP) for Slot-filling, Intent classification, and Virtual assistant
Evaluation, or MASSIVE, in an effort to address parallel data availability for
voice assistants. We organize the Massively Multilingual NLU 2022 Challenge to
provide a competitive environment and push the state of the art in the
transferability of models into other languages. Finally, we host the first
Massively Multilingual NLU workshop which brings these components together. The
MMNLU workshop seeks to advance the science behind multilingual NLU by
providing a platform for the presentation of new research in the field and
connecting teams working on this research direction. This paper summarizes the
dataset, the workshop, the competition, and the findings of each phase.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 03:00:36 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Hench",
"Christopher",
""
],
[
"Peris",
"Charith",
""
],
[
"FitzGerald",
"Jack",
""
],
[
"Rottmann",
"Kay",
""
]
] |
new_dataset
| 0.991443 |
2212.06402
|
Aishwarya Srinivasan
|
Aishwarya Srinivasan
|
Balloon-to-Balloon AdHoc Wireless Network Connectivity: Google Project
Loon
| null | null | null | null |
cs.NI cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Project Loon is a Google initiated research project from the Google X Lab.
The project focuses on providing remote internet access and network
connectivity. The connectivity is established in vertical and horizontal space;
vertical connectivity between Google Access Point (GAP) and the balloons, and
between balloons and antennas installed at land; horizontal connectivity is
between the balloons. This research focuses on the connectivity between the
balloons in a mesh network. The proposal focuses on implementing graphical
methods like convex hull with adhoc communication protocols. The proposed
protocol includes content-based multicasting using angular sector division
rather than grids, along with dynamic core-based mesh protocol defining certain
core active nodes and passive nodes forming the convex hull. Transmission
(multicasting and broadcasting) between the nodes is evaluated using the
link probability, defined as the probability that the link between two nodes
fails. Based on the link probability and node features, the best path between
the transmitting and receiving nodes is evaluated.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 06:56:12 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Srinivasan",
"Aishwarya",
""
]
] |
new_dataset
| 0.99866 |
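The convex-hull construction mentioned in the abstract above can be sketched with Andrew's monotone chain; the balloon coordinates here are hypothetical:

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

balloons = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]  # hypothetical positions
hull = convex_hull(balloons)  # the interior balloon (2, 1) is not a hull vertex
```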
2212.06492
|
Panagiotis Papadopoulos
|
Panagiotis Papadopoulos, Dimitris Spithouris, Evangelos P. Markatos,
Nicolas Kourtellis
|
FNDaaS: Content-agnostic Detection of Fake News sites
| null | null | null | null |
cs.CY cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic fake news detection is a challenging problem in the fight against
misinformation spreading, and it has tremendous real-world political and social impact. Past
studies have proposed machine learning-based methods for detecting such fake
news, focusing on different properties of the published news articles, such as
linguistic characteristics of the actual content, which, however, have
limitations due to language barriers. Departing from such efforts,
we propose FNDaaS, the first automatic, content-agnostic fake news detection
method, that considers new and unstudied features such as network and
structural characteristics per news website. This method can be deployed
as-a-Service, either at the ISP side for easier scalability and maintenance, or
at the user side for better end-user privacy. We demonstrate the efficacy of our
method using data crawled from existing lists of 637 fake and 1183 real news
websites, and by building and testing a proof of concept system that
materializes our proposal. Our analysis of data collected from these websites
shows that the vast majority of fake news domains are very young and appear to
keep an IP associated with their domain for shorter periods than real news
ones. By conducting various experiments with machine learning classifiers, we
demonstrate that FNDaaS can achieve an AUC score of up to 0.967 on past sites,
and up to 77-92% accuracy on newly-flagged ones.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 11:17:32 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Papadopoulos",
"Panagiotis",
""
],
[
"Spithouris",
"Dimitris",
""
],
[
"Markatos",
"Evangelos P.",
""
],
[
"Kourtellis",
"Nicolas",
""
]
] |
new_dataset
| 0.998252 |
2212.06493
|
Zhen-Yu Wu
|
Zhenyu Wu, Lin Wang, Wei Wang, Qing Xia, Chenglizhao Chen, Aimin Hao,
Shuo Li
|
Pixel is All You Need: Adversarial Trajectory-Ensemble Active Learning
for Salient Object Detection
|
9 pages, 8 figures
|
AAAI 2023
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although weakly-supervised techniques can reduce the labeling effort, it is
unclear whether a saliency model trained with weakly-supervised data (e.g.,
point annotation) can achieve the equivalent performance of its
fully-supervised version. This paper attempts to answer this unexplored
question by proving a hypothesis: there exists a point-labeled dataset on which
trained saliency models can achieve performance equivalent to models trained
on the densely annotated dataset. To prove this conjecture, we propose a novel
yet effective adversarial trajectory-ensemble active learning (ATAL) method. Our
contributions are three-fold: 1) Our proposed adversarial attack triggering
uncertainty can conquer the overconfidence of existing active learning methods
and accurately locate these uncertain pixels. 2) Our proposed
trajectory-ensemble uncertainty estimation method maintains the advantages of
the ensemble networks while significantly reducing the computational cost. 3)
Our proposed relationship-aware diversity sampling algorithm can conquer
oversampling while boosting performance. Experimental results show that our
ATAL can find such a point-labeled dataset, where a saliency model trained on
it obtained $97\%$ -- $99\%$ performance of its fully-supervised version with
only ten annotated points per image.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 11:18:08 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Wu",
"Zhenyu",
""
],
[
"Wang",
"Lin",
""
],
[
"Wang",
"Wei",
""
],
[
"Xia",
"Qing",
""
],
[
"Chen",
"Chenglizhao",
""
],
[
"Hao",
"Aimin",
""
],
[
"Li",
"Shuo",
""
]
] |
new_dataset
| 0.988433 |
2212.06511
|
David Howard
|
David Howard, Jack O'Connor, Jordan Letchford, Therese Joseph, Sophia
Lin, Sarah Baldwin and Gary Delaney
|
A Comprehensive Dataset of Grains for Granular Jamming in Soft Robotics:
Grip Strength and Shock Absorption
| null | null | null | null |
cs.RO cond-mat.soft
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We test the grip strength and shock absorption properties of various granular
materials in granular jamming robotic components. The granular material
comprises a range of natural, manufactured, and 3D printed material
encompassing a wide range of shapes, sizes, and Shore hardness. Two main
experiments are considered, both representing compelling use cases for granular
jamming in soft robotics. The first experiment measures grip strength
(retention force measured in Newtons) when we fill a latex balloon with the
chosen grain type and use it as a granular jamming gripper to pick up a range
of test objects. The second experiment measures shock absorption properties
recorded by an Inertial Measurement Unit which is suspended in an envelope of
granular material and dropped from a set height. Our results highlight a range
of shape, size and softness effects, including that grain deformability is a
key determinant of grip strength, and interestingly, that larger grain sizes in
3D printed grains create better shock absorbing materials.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 11:48:46 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Howard",
"David",
""
],
[
"O'Connor",
"Jack",
""
],
[
"Letchford",
"Jordan",
""
],
[
"Joseph",
"Therese",
""
],
[
"Lin",
"Sophia",
""
],
[
"Baldwin",
"Sarah",
""
],
[
"Delaney",
"Gary",
""
]
] |
new_dataset
| 0.999793 |
2212.06570
|
Qibin Hou
|
Bowen Yin and Xuying Zhang and Qibin Hou and Bo-Yuan Sun and Deng-Ping
Fan and Luc Van Gool
|
CamoFormer: Masked Separable Attention for Camouflaged Object Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Identifying and segmenting camouflaged objects from the background is
challenging. Inspired by the multi-head self-attention in Transformers, we
present a simple masked separable attention (MSA) for camouflaged object
detection. We first separate the multi-head self-attention into three parts,
which are responsible for distinguishing the camouflaged objects from the
background using different mask strategies. Furthermore, we propose to capture
high-resolution semantic representations progressively based on a simple
top-down decoder with the proposed MSA to attain precise segmentation results.
These structures plus a backbone encoder form a new model, dubbed CamoFormer.
Extensive experiments show that CamoFormer surpasses all existing
state-of-the-art methods on three widely-used camouflaged object detection
benchmarks. There are on average around 5% relative improvements over previous
methods in terms of S-measure and weighted F-measure.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 10:03:27 GMT"
}
] | 2022-12-14T00:00:00 |
[
[
"Yin",
"Bowen",
""
],
[
"Zhang",
"Xuying",
""
],
[
"Hou",
"Qibin",
""
],
[
"Sun",
"Bo-Yuan",
""
],
[
"Fan",
"Deng-Ping",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.999458 |
1205.3813
|
William Gasarch
|
Daniel Apon, William Gasarch and Kevin Lawler
|
An NP-Complete Problem in Grid Coloring
|
35 pages
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A c-coloring of G(n,m)=n x m is a mapping of G(n,m) into {1,...,c} such that
no four corners forming a rectangle have the same color. In 2009 a challenge
was proposed via the internet to find a 4-coloring of G(17,17). This attracted
considerable attention from the popular mathematics community. A coloring was
produced; however, finding it proved to be difficult. The question arises: is
the problem of grid coloring is difficult in general? We show that the problem
of, given a partial coloring of a grid, can it be extended to a full (proper)
coloring, is NP-complete.
|
[
{
"version": "v1",
"created": "Wed, 16 May 2012 21:23:47 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Nov 2012 17:04:09 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Dec 2022 04:55:26 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Apon",
"Daniel",
""
],
[
"Gasarch",
"William",
""
],
[
"Lawler",
"Kevin",
""
]
] |
new_dataset
| 0.961325 |
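The c-coloring condition defined in the abstract above (no four corners of an axis-aligned rectangle sharing a color) is easy to verify for a given grid, even though extending a partial coloring is NP-complete; a brute-force checker:

```python
from itertools import combinations

def is_rectangle_free(grid):
    """Check that no four corners of an axis-aligned rectangle share a color.
    `grid` is a list of rows; entries are colors in {1, ..., c}."""
    for r1, r2 in combinations(range(len(grid)), 2):
        for c1, c2 in combinations(range(len(grid[0])), 2):
            if grid[r1][c1] == grid[r1][c2] == grid[r2][c1] == grid[r2][c2]:
                return False
    return True

ok = [[1, 2],
      [2, 1]]   # no monochromatic rectangle
bad = [[1, 1],
       [1, 1]]  # all four corners share color 1
```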
1708.08749
|
Cuneyt Gurcan Akcora
|
Cuneyt Gurcan Akcora and Yulia R. Gel and Murat Kantarcioglu
|
Blockchain: A Graph Primer
|
19 pages, 5 figures
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bitcoin and its underlying technology, blockchain, have gained significant
popularity in recent years. Satoshi Nakamoto designed Bitcoin to enable a
secure, distributed platform without the need for central authorities, and
blockchain has been hailed as a paradigm that will be as impactful as Big Data,
Cloud Computing, and Machine Learning.
Blockchain incorporates innovative ideas from various fields, such as
public-key encryption and distributed systems. As a result, readers often
encounter resources that explain Blockchain technology from a single
perspective, leaving them with more questions than answers.
In this primer, we aim to provide a comprehensive view of blockchain. We will
begin with a brief history and introduce the building blocks of the blockchain.
As graph mining is a major area of blockchain analysis, we will delve into the
graph-theoretical aspects of Blockchain technology. We will also discuss the
future of blockchain and explain how extensions such as smart contracts and
decentralized autonomous organizations will function.
Our goal is to provide a concise but complete description of blockchain
technology that is accessible to readers with no prior expertise in the field.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2017 16:45:00 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Dec 2022 21:26:54 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Akcora",
"Cuneyt Gurcan",
""
],
[
"Gel",
"Yulia R.",
""
],
[
"Kantarcioglu",
"Murat",
""
]
] |
new_dataset
| 0.999605 |
2109.12346
|
Mohamed Berrimi Mr
|
Amine Abdaoui, Mohamed Berrimi, Mourad Oussalah, Abdelouahab Moussaoui
|
DziriBERT: a Pre-trained Language Model for the Algerian Dialect
|
4 Pages
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-trained transformers are now the de facto models in Natural Language
Processing given their state-of-the-art results in many tasks and languages.
However, most of the current models have been trained on languages for which
large text resources are already available (such as English, French, Arabic,
etc.). Therefore, there are still a number of low-resource languages that need
more attention from the community. In this paper, we study the Algerian dialect
which has several specificities that make the use of Arabic or multilingual
models inappropriate. To address this issue, we collected more than one million
Algerian tweets, and pre-trained the first Algerian language model: DziriBERT.
When compared with existing models, DziriBERT achieves better results,
especially when dealing with the Roman script. The obtained results show that
pre-training a dedicated model on a small dataset (150 MB) can outperform
existing models that have been trained on much more data (hundreds of GB).
Finally, our model is publicly available to the community.
|
[
{
"version": "v1",
"created": "Sat, 25 Sep 2021 11:51:35 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2022 14:24:51 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Dec 2022 08:55:20 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Abdaoui",
"Amine",
""
],
[
"Berrimi",
"Mohamed",
""
],
[
"Oussalah",
"Mourad",
""
],
[
"Moussaoui",
"Abdelouahab",
""
]
] |
new_dataset
| 0.990092 |
2201.07823
|
Farhad Pakdaman
|
Farhad Pakdaman, Mohammad Ali Adelimanesh, Mahmoud Reza Hashemi
|
BLINC: Lightweight Bimodal Learning for Low-Complexity VVC Intra Coding
| null |
Journal of Real-Time Image Processing (2022)
|
10.1007/s11554-022-01223-1
| null |
cs.MM cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
The latest video coding standard, Versatile Video Coding (VVC), achieves
almost twice the coding efficiency of its predecessor, High Efficiency
Video Coding (HEVC). However, achieving this efficiency (for intra coding)
requires 31x the computational complexity of HEVC, making it challenging
for low-power and real-time applications. This paper proposes a novel machine
learning approach that jointly and separately employs two modalities of
features, to simplify the intra coding decision. First, a set of features is
extracted using the existing DCT core of VVC to assess texture
characteristics, forming the first modality of data. This produces
high-quality features with almost no overhead. The distribution of intra modes at
the neighboring blocks is also used to form the second modality of data, which
provides statistical information about the frame. Second, a two-step feature
reduction method is designed that reduces the size of the feature set, such that a
lightweight model with a limited number of parameters can be used to learn the
intra mode decision task. Third, three separate training strategies are
proposed (1) an offline training strategy using the first (single) modality of
data, (2) an online training strategy that uses the second (single) modality,
and (3) a mixed online-offline strategy that uses bimodal learning. Finally, a
low-complexity encoding algorithm is proposed based on the proposed learning
strategies. Extensive experimental results show that the proposed methods can
reduce encoding time by up to 24%, with a negligible loss of coding efficiency.
Moreover, it is demonstrated how a bimodal learning strategy can boost the
performance of learning. Lastly, the proposed method has a very low
computational overhead (0.2%), and uses existing components of a VVC encoder,
which makes it much more practical compared to competing solutions.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 19:12:41 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Pakdaman",
"Farhad",
""
],
[
"Adelimanesh",
"Mohammad Ali",
""
],
[
"Hashemi",
"Mahmoud Reza",
""
]
] |
new_dataset
| 0.998347 |
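The first feature modality in the abstract above reuses the encoder's DCT core to gauge texture; a naive 1-D DCT-II illustrates the idea (the signals are toy examples, and real VVC transforms are 2-D and scaled differently):

```python
import math

def dct2_1d(x):
    """Naive (unnormalized) DCT-II of a 1-D signal."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n)) for i in range(n))
            for k in range(n)]

# A flat block concentrates all energy in the DC coefficient,
# while an edge spreads energy into the AC coefficients --
# cheap evidence of texture complexity for the mode decision.
flat = dct2_1d([8, 8, 8, 8])
edge = dct2_1d([0, 0, 8, 8])
```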
2202.01736
|
Jack Sturgess
|
Jack Sturgess, Simon Eberz, Ivo Sluganovic, Ivan Martinovic
|
WatchAuth: User Authentication and Intent Recognition in Mobile Payments
using a Smartwatch
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we show that the tap gesture, performed when a user 'taps' a
smartwatch onto an NFC-enabled terminal to make a payment, is a biometric
capable of implicitly authenticating the user and simultaneously recognising
intent-to-pay. The proposed system can be deployed purely in software on the
watch without requiring updates to payment terminals. It is agnostic to
terminal type and position and the intent recognition portion does not require
any training data from the user. To validate the system, we conduct a user
study (n=16) to collect wrist motion data from users as they interact with
payment terminals and to collect long-term data from a subset of them (n=9) as
they perform daily activities. Based on this data, we identify optimum gesture
parameters and develop authentication and intent recognition models, for which
we achieve EERs of 0.08 and 0.04, respectively.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 17:56:06 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Dec 2022 14:43:05 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Sturgess",
"Jack",
""
],
[
"Eberz",
"Simon",
""
],
[
"Sluganovic",
"Ivo",
""
],
[
"Martinovic",
"Ivan",
""
]
] |
new_dataset
| 0.99975 |
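The EERs of 0.08 and 0.04 reported above are equal error rates: the operating point where the false accept rate and false reject rate coincide. A minimal sketch with toy similarity scores (not the paper's data):

```python
def equal_error_rate(genuine, impostor):
    """Find the threshold where false accept rate (FAR) and
    false reject rate (FRR) are closest, and report their midpoint.
    Higher score = more similar to the enrolled user's tap gesture."""
    best_gap, eer = float("inf"), None
    for t in sorted(genuine + impostor):
        frr = sum(s < t for s in genuine) / len(genuine)     # real user rejected
        far = sum(s >= t for s in impostor) / len(impostor)  # impostor accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

genuine = [0.9, 0.8, 0.85, 0.7, 0.6]   # toy scores for the real user
impostor = [0.2, 0.3, 0.4, 0.65, 0.1]  # toy scores for impostors
eer = equal_error_rate(genuine, impostor)  # 0.2 for these scores
```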
2202.13758
|
Zhijing Jin
|
Zhijing Jin, Abhinav Lalwani, Tejas Vaidhya, Xiaoyu Shen, Yiwen Ding,
Zhiheng Lyu, Mrinmaya Sachan, Rada Mihalcea, Bernhard Sch\"olkopf
|
Logical Fallacy Detection
|
EMNLP 2021 Findings
| null | null | null |
cs.CL cs.AI cs.CY cs.LG cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reasoning is central to human intelligence. However, fallacious arguments are
common, and some exacerbate problems such as spreading misinformation about
climate change. In this paper, we propose the task of logical fallacy
detection, and provide a new dataset (Logic) of logical fallacies generally
found in text, together with an additional challenge set for detecting logical
fallacies in climate change claims (LogicClimate). Detecting logical fallacies
is a hard problem as the model must understand the underlying logical structure
of the argument. We find that existing pretrained large language models perform
poorly on this task. In contrast, we show that a simple structure-aware
classifier outperforms the best language model by 5.46% on Logic and 4.51% on
LogicClimate. We encourage future work to explore this task as (a) it can serve
as a new reasoning challenge for language models, and (b) it can have potential
applications in tackling the spread of misinformation. Our dataset and code are
available at https://github.com/causalNLP/logical-fallacy
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 13:18:26 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 10:04:48 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Dec 2022 04:47:49 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Jin",
"Zhijing",
""
],
[
"Lalwani",
"Abhinav",
""
],
[
"Vaidhya",
"Tejas",
""
],
[
"Shen",
"Xiaoyu",
""
],
[
"Ding",
"Yiwen",
""
],
[
"Lyu",
"Zhiheng",
""
],
[
"Sachan",
"Mrinmaya",
""
],
[
"Mihalcea",
"Rada",
""
],
[
"Schölkopf",
"Bernhard",
""
]
] |
new_dataset
| 0.998699 |
2205.01414
|
Daniel Bogdoll
|
Daniel Bogdoll and Enrico Eisen and Maximilian Nitsche and Christin
Scheib and J. Marius Z\"ollner
|
Multimodal Detection of Unknown Objects on Roads for Autonomous Driving
|
Daniel Bogdoll, Enrico Eisen, Maximilian Nitsche, and Christin Scheib
contributed equally. Accepted for publication at SMC 2022
| null |
10.1109/SMC53654.2022.9945211
| null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Tremendous progress in deep learning over the last years has led towards a
future with autonomous vehicles on our roads. Nevertheless, the performance of
their perception systems is strongly dependent on the quality of the utilized
training data. As these usually only cover a fraction of all object classes an
autonomous driving system will face, such systems struggle with handling the
unexpected. In order to safely operate on public roads, the identification of
objects from unknown classes remains a crucial task. In this paper, we propose
a novel pipeline to detect unknown objects. Instead of focusing on a single
sensor modality, we make use of lidar and camera data by combining
state-of-the-art detection models in a sequential manner. We evaluate our
approach on the
Waymo Open Perception Dataset and point out current research gaps in anomaly
detection.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 10:58:41 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Jul 2022 17:02:43 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Jul 2022 10:21:57 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Bogdoll",
"Daniel",
""
],
[
"Eisen",
"Enrico",
""
],
[
"Nitsche",
"Maximilian",
""
],
[
"Scheib",
"Christin",
""
],
[
"Zöllner",
"J. Marius",
""
]
] |
new_dataset
| 0.989442 |
2206.01612
|
Wenzhuo Yang
|
Wenzhuo Yang and Hung Le and Tanmay Laud and Silvio Savarese and
Steven C.H. Hoi
|
OmniXAI: A Library for Explainable AI
|
Github repo: https://github.com/salesforce/OmniXAI
| null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce OmniXAI (short for Omni eXplainable AI), an open-source Python
library of eXplainable AI (XAI), which offers omni-way explainable AI
capabilities and various interpretable machine learning techniques to address
the pain points of understanding and interpreting the decisions made by machine
learning (ML) in practice. OmniXAI aims to be a one-stop comprehensive library
that makes explainable AI easy for data scientists, ML researchers, and
practitioners who need explanations for various types of data, models, and
explanation methods at different stages of the ML process (data exploration,
feature engineering, model development, evaluation, decision-making, etc.).
In particular, our library includes a rich family of explanation methods
integrated in a unified interface, which supports multiple data types (tabular
data, images, texts, time-series), multiple types of ML models (traditional ML
in Scikit-learn and deep learning models in PyTorch/TensorFlow), and a range of
diverse explanation methods including "model-specific" and "model-agnostic"
ones (such as feature-attribution explanation, counterfactual explanation,
gradient-based explanation, etc). For practitioners, the library provides an
easy-to-use unified interface to generate the explanations for their
applications by writing only a few lines of code, and also a GUI dashboard for
visualization of different explanations for more insights about decisions. In
this technical report, we present OmniXAI's design principles, system
architectures, and major functionalities, and also demonstrate several example
use cases across different types of data, tasks, and models.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 11:35:37 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2022 03:15:20 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2022 02:20:40 GMT"
},
{
"version": "v4",
"created": "Tue, 28 Jun 2022 06:48:31 GMT"
},
{
"version": "v5",
"created": "Fri, 22 Jul 2022 05:51:48 GMT"
},
{
"version": "v6",
"created": "Thu, 8 Sep 2022 09:23:40 GMT"
},
{
"version": "v7",
"created": "Thu, 15 Sep 2022 12:29:58 GMT"
},
{
"version": "v8",
"created": "Mon, 12 Dec 2022 09:26:32 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Yang",
"Wenzhuo",
""
],
[
"Le",
"Hung",
""
],
[
"Laud",
"Tanmay",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Hoi",
"Steven C. H.",
""
]
] |
new_dataset
| 0.997011 |
2206.04922
|
Junhui Zhang
|
Junhui Zhang, Wudi Bao, Junjie Pan, Xiang Yin, Zejun Ma
|
A Novel Chinese Dialect TTS Frontend with Non-Autoregressive Neural
Machine Translation
|
4 pages,5 figures
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Chinese dialects are different variations of Chinese and can be considered as
different languages in the same language family with Mandarin. Though they all
use Chinese characters, the pronunciations, grammar and idioms can vary
significantly, and even local speakers may find it hard to input correct
written forms of dialect. Besides, using Mandarin text as text-to-speech inputs
would generate speech with poor naturalness. In this paper, we propose a novel
Chinese dialect TTS frontend with a translation module, which converts Mandarin
text into dialectal expressions to improve the intelligibility and naturalness
of synthesized speech. A non-autoregressive neural machine translation model
with various tricks is proposed for the translation task. To our knowledge,
this is the first work to incorporate translation into a TTS frontend.
Experiments on Cantonese show that the proposed model improves BLEU by 2.56 and
the TTS improves MOS by 0.27 with Mandarin inputs.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 07:46:34 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 13:00:57 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Dec 2022 09:06:43 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Zhang",
"Junhui",
""
],
[
"Bao",
"Wudi",
""
],
[
"Pan",
"Junjie",
""
],
[
"Yin",
"Xiang",
""
],
[
"Ma",
"Zejun",
""
]
] |
new_dataset
| 0.996337 |
2209.08524
|
Jianzhu Yao
|
Jianzhu Yao, Ziqi Liu, Jian Guan, Minlie Huang
|
A Benchmark for Understanding and Generating Dialogue between Characters
in Stories
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Many classical fairy tales, fiction, and screenplays leverage dialogue to
advance story plots and establish characters. We present the first study to
explore whether machines can understand and generate dialogue in stories, which
requires capturing traits of different characters and the relationships between
them. To this end, we propose two new tasks including Masked Dialogue
Generation and Dialogue Speaker Recognition, i.e., generating missing dialogue
turns and predicting speakers for specified dialogue turns, respectively. We
build a new dataset DialStory, which consists of 105k Chinese stories with a
large amount of dialogue weaved into the plots to support the evaluation. We
show the difficulty of the proposed tasks by testing existing models with
automatic and manual evaluation on DialStory. Furthermore, we propose to learn
explicit character representations to improve performance on these tasks.
Extensive experiments and case studies show that our approach can generate more
coherent and informative dialogue, and achieve higher speaker recognition
accuracy than strong baselines.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 10:19:04 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 02:32:09 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Yao",
"Jianzhu",
""
],
[
"Liu",
"Ziqi",
""
],
[
"Guan",
"Jian",
""
],
[
"Huang",
"Minlie",
""
]
] |
new_dataset
| 0.999853 |
2209.12405
|
Diptarama Hendrian
|
Koshiro Kumagai and Diptarama Hendrian and Ryo Yoshinaka and Ayumi
Shinohara
|
Inferring Strings from Position Heaps in Linear Time
|
10 pages, 5 figures
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Position heaps are index structures of text strings used for the string
matching problem. They are rooted trees whose edges and nodes are labeled and
numbered, respectively. This paper is concerned with variants of the inverse
problem of position heap construction and gives linear-time algorithms for
those problems. The basic problem is to restore a text string from a rooted
tree with labeled edges and numbered nodes. In the variant problems, the input
trees may miss edge labels or node numbers which we must restore as well.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2022 04:05:05 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 07:08:46 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Kumagai",
"Koshiro",
""
],
[
"Hendrian",
"Diptarama",
""
],
[
"Yoshinaka",
"Ryo",
""
],
[
"Shinohara",
"Ayumi",
""
]
] |
new_dataset
| 0.984075 |
2210.05633
|
Karmesh Yadav
|
Karmesh Yadav, Ram Ramrakhya, Santhosh Kumar Ramakrishnan, Theo
Gervet, John Turner, Aaron Gokaslan, Noah Maestre, Angel Xuan Chang, Dhruv
Batra, Manolis Savva, Alexander William Clegg, Devendra Singh Chaplot
|
Habitat-Matterport 3D Semantics Dataset
|
14 Pages, 10 Figures, 5 Tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present the Habitat-Matterport 3D Semantics (HM3DSEM) dataset. HM3DSEM is
the largest dataset of 3D real-world spaces with densely annotated semantics
that is currently available to the academic community. It consists of 142,646
object instance annotations across 216 3D spaces and 3,100 rooms within those
spaces. The scale, quality, and diversity of object annotations far exceed
those of prior datasets. A key difference setting apart HM3DSEM from other
datasets is the use of texture information to annotate pixel-accurate object
boundaries. We demonstrate the effectiveness of HM3DSEM dataset for the Object
Goal Navigation task using different methods. Policies trained using HM3DSEM
outperform those trained on prior datasets. The introduction of HM3DSEM in
the Habitat ObjectNav Challenge led to an increase in participation from 400
submissions in 2021 to 1,022 submissions in 2022.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 17:25:51 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 07:14:03 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Yadav",
"Karmesh",
""
],
[
"Ramrakhya",
"Ram",
""
],
[
"Ramakrishnan",
"Santhosh Kumar",
""
],
[
"Gervet",
"Theo",
""
],
[
"Turner",
"John",
""
],
[
"Gokaslan",
"Aaron",
""
],
[
"Maestre",
"Noah",
""
],
[
"Chang",
"Angel Xuan",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Savva",
"Manolis",
""
],
[
"Clegg",
"Alexander William",
""
],
[
"Chaplot",
"Devendra Singh",
""
]
] |
new_dataset
| 0.999789 |
2210.05844
|
Yifan Liu
|
Bowen Zhang and Zhi Tian and Quan Tang and Xiangxiang Chu and Xiaolin
Wei and Chunhua Shen and Yifan Liu
|
SegViT: Semantic Segmentation with Plain Vision Transformers
|
9 Pages, NeurIPS 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We explore the capability of plain Vision Transformers (ViTs) for semantic
segmentation and propose SegViT. Previous ViT-based segmentation networks
usually learn a pixel-level representation from the output of the ViT.
Differently, we make use of the fundamental component -- attention mechanism,
to generate masks for semantic segmentation. Specifically, we propose the
Attention-to-Mask (ATM) module, in which the similarity maps between a set of
learnable class tokens and the spatial feature maps are transferred to the
segmentation masks. Experiments show that our proposed SegViT using the ATM
module outperforms its counterparts using the plain ViT backbone on the ADE20K
dataset and achieves new state-of-the-art performance on COCO-Stuff-10K and
PASCAL-Context datasets. Furthermore, to reduce the computational cost of the
ViT backbone, we propose query-based down-sampling (QD) and query-based
up-sampling (QU) to build a Shrunk structure. With the proposed Shrunk
structure, the model can save up to $40\%$ computations while maintaining
competitive performance.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 00:30:26 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 15:35:01 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Zhang",
"Bowen",
""
],
[
"Tian",
"Zhi",
""
],
[
"Tang",
"Quan",
""
],
[
"Chu",
"Xiangxiang",
""
],
[
"Wei",
"Xiaolin",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Liu",
"Yifan",
""
]
] |
new_dataset
| 0.997841 |
2210.08978
|
Frederic Jumelle
|
Frederic Jumelle, Timothy Pagett and Ryan Lemand
|
Decentralized nation, solving the web identity crisis
|
11 pages, 1 figure
|
https://portal.issn.org/resource/ISSN/1556-5068/2022
|
10.2139/ssrn.4237007
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The web of today, whether you prefer to call it web 2.0, web 3.0, web 5.0, or
even the metaverse, is at a critical stage of evolution and challenge, largely
centered around its crisis of identity. Like teenagers who cannot properly
assess their reason for being and do not seem ready to take responsibility
for their actions, we are constantly blaming the very system we are trying to
get away from. To truly realize the benefits from innovation and technology,
this crisis has to be resolved, not just through tactical solutions but through
developments that enhance the sustainability of the web and its benefits.
Significant strides are being made in the evolution of digital services enabled
by technology, regulation, and the sheer pace of societal change. The journey
to the decentralized web is mirroring the convergence of the physical and
digital worlds across all economies and is increasingly embracing the digital
native world. Technology has provided the foundational platform for individuals
and entities to create and manage wealth, potentially without the need for big
institutions. Ironically, despite all of the advancements, we are still facing
an unprecedented and increasing wealth gap. Clearly, the system is broken, not
just around the edges but at the very core of the democratic underpinning of
our society. In this whitepaper, we propose how artificial intelligence on
blockchain can be used to generate a new class of identity through direct human
computer interaction. We demonstrate how this, combined with new perspectives
for sustaining community and governance embedded within the use of blockchain
technology, will underpin a sustainable solution to protect identity,
authorship, and privacy, while contributing to restoring trust amongst members
of a future decentralized nation, and hence to solving the web's most
significant identity crisis.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 01:02:24 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Jumelle",
"Frederic",
""
],
[
"Pagett",
"Timothy",
""
],
[
"Lemand",
"Ryan",
""
]
] |
new_dataset
| 0.950793 |
2211.02904
|
Lu Bai
|
Lu Bai, Lixin Cui, Yue Wang, Ming Li, Edwin R. Hancock
|
HAQJSK: Hierarchical-Aligned Quantum Jensen-Shannon Kernels for Graph
Classification
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a family of novel quantum kernels, namely the
Hierarchical Aligned Quantum Jensen-Shannon Kernels (HAQJSK), for un-attributed
graphs. Different from most existing classical graph kernels, the proposed
HAQJSK kernels can incorporate hierarchical aligned structure information
between graphs and transform graphs of random sizes into fixed-sized aligned
graph structures, i.e., the Hierarchical Transitive Aligned Adjacency Matrix of
vertices and the Hierarchical Transitive Aligned Density Matrix of the
Continuous-Time Quantum Walk (CTQW). For a pair of graphs to hand, the
resulting HAQJSK kernels are defined by measuring the Quantum Jensen-Shannon
Divergence (QJSD) between their transitive aligned graph structures. We show
that the proposed HAQJSK kernels not only reflect richer intrinsic global graph
characteristics in terms of the CTQW, but also address the drawback of
neglecting structural correspondence information arising in most existing
R-convolution kernels. Furthermore, unlike the previous Quantum Jensen-Shannon
Kernels associated with the QJSD and the CTQW, the proposed HAQJSK kernels can
simultaneously guarantee the properties of permutation invariance and positive
definiteness, explaining the theoretical advantages of the HAQJSK kernels.
Experiments indicate the effectiveness of the proposed kernels.
|
[
{
"version": "v1",
"created": "Sat, 5 Nov 2022 13:35:04 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 04:31:50 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Dec 2022 06:54:46 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Bai",
"Lu",
""
],
[
"Cui",
"Lixin",
""
],
[
"Wang",
"Yue",
""
],
[
"Li",
"Ming",
""
],
[
"Hancock",
"Edwin R.",
""
]
] |
new_dataset
| 0.998922 |
2211.09717
|
Ziyao Wang
|
Ziyao Wang, Thai Le and Dongwon Lee
|
UPTON: Unattributable Authorship Text via Data Poisoning
| null | null | null | null |
cs.CY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In online media such as the opinion columns of Bloomberg, The Guardian, and
Western Journal, aspiring writers post their writings for various reasons,
often proudly under their own names. However, such a writer may later want to
write in other venues anonymously or under a pseudonym (e.g., as an activist or
whistle-blower). If an attacker has already built an accurate authorship
attribution (AA) model based off of the writings from such platforms,
attributing an anonymous writing to the known authorship is possible.
Therefore, in this work, we ask a question: "can one make the writings and
texts, T, in open spaces such as opinion sharing platforms unattributable, so
that AA models trained from T cannot attribute authorship well?" Toward this
question, we present a novel solution, UPTON, that exploits a textual data
poisoning method to disturb the training process of AA models. UPTON uses data
poisoning to destroy the authorship features in training samples by perturbing
them, trying to make the released textual data unlearnable by deep neural
networks. It differs from previous obfuscation works, which use adversarial
attacks to modify test samples and mislead an AA model, and from backdoor
works, which place trigger words in both test and training samples and only
change the model output when the trigger words occur. Using four authorship
datasets (IMDb10, IMDb64, Enron, and WJO), we then present empirical validation
showing that: (1) UPTON is able to downgrade the test accuracy to about 30%
with carefully designed target-selection methods; (2) UPTON poisoning preserves
most of the original semantics, with BERTScore between the clean and
UPTON-poisoned texts higher than 0.95, very close to 1.00, indicating almost no
semantic change; and (3) UPTON is robust against spelling correction systems.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 17:49:57 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Dec 2022 13:19:30 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Wang",
"Ziyao",
""
],
[
"Le",
"Thai",
""
],
[
"Lee",
"Dongwon",
""
]
] |
new_dataset
| 0.978821 |
2211.10578
|
Shancheng Fang
|
Shancheng Fang, Zhendong Mao, Hongtao Xie, Yuxin Wang, Chenggang Yan,
Yongdong Zhang
|
ABINet++: Autonomous, Bidirectional and Iterative Language Modeling for
Scene Text Spotting
|
Accepted by TPAMI. Code is available at
https://github.com/FangShancheng/ABINet-PP. arXiv admin note: substantial
text overlap with arXiv:2103.06495 (conference version)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Scene text spotting is of great importance to the computer vision community
due to its wide variety of applications. Recent methods attempt to introduce
linguistic knowledge for challenging recognition rather than pure visual
classification. However, how to effectively model the linguistic rules in
end-to-end deep networks remains a research challenge. In this paper, we argue
that the limited capacity of language models comes from 1) implicit language
modeling; 2) unidirectional feature representation; and 3) language model with
noise input. Correspondingly, we propose an autonomous, bidirectional and
iterative ABINet++ for scene text spotting. Firstly, the autonomous design
enforces explicit language modeling by decoupling the recognizer into a vision
model and a language model and blocking gradient flow between the two models.
Secondly, a novel bidirectional cloze network (BCN) as the language model is
proposed based on bidirectional feature representation. Thirdly, we propose an
execution manner of iterative correction for the language model which can
effectively alleviate the impact of noise input. Finally, to polish ABINet++ in
long text recognition, we propose to aggregate horizontal features by embedding
Transformer units inside a U-Net, and design a position and content attention
module which integrates character order and content to attend to character
features precisely. ABINet++ achieves state-of-the-art performance on both
scene text recognition and scene text spotting benchmarks, which consistently
demonstrates the superiority of our method in various environments especially
on low-quality images. Besides, extensive experiments in both English and
Chinese also show that a text spotter incorporating our language modeling
method can significantly improve its performance in both accuracy and speed
compared with commonly used attention-based recognizers.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 03:50:33 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 02:16:04 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Fang",
"Shancheng",
""
],
[
"Mao",
"Zhendong",
""
],
[
"Xie",
"Hongtao",
""
],
[
"Wang",
"Yuxin",
""
],
[
"Yan",
"Chenggang",
""
],
[
"Zhang",
"Yongdong",
""
]
] |
new_dataset
| 0.987616 |
2212.00956
|
Alexandre Signorel
|
Alexandre Signorel
|
Exploring The Relationship Between Road Infrastructure and Crimes in
Memphis, Tennessee
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Memphis, Tennessee is one of the cities with the highest crime rates in the United
States. In this work, we explore the relationship between road infrastructure,
especially potholes, and crimes. The pothole and crime data are collected from
Memphis Data Hub between 2020 and 2022. The crime data report various crimes in
the Memphis area, which contain the location, time, and type of the crime. The
pothole data is part of the Open 311 data, which contains information of
different infrastructure projects, including the location of the project, and
the starting and ending dates of the project. We focus on infrastructure
projects regarding pothole repairs.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2022 03:52:35 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Dec 2022 12:37:07 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Signorel",
"Alexandre",
""
]
] |
new_dataset
| 0.991312 |
2212.04531
|
Kushagra Tiwary
|
Kushagra Tiwary, Akshat Dave, Nikhil Behari, Tzofi Klinghoffer, Ashok
Veeraraghavan, Ramesh Raskar
|
ORCa: Glossy Objects as Radiance Field Cameras
|
for more information, see https://ktiwary2.github.io/objectsascam/
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reflections on glossy objects contain valuable and hidden information about
the surrounding environment. By converting these objects into cameras, we can
unlock exciting applications, including imaging beyond the camera's
field-of-view and from seemingly impossible vantage points, e.g. from
reflections on the human eye. However, this task is challenging because
reflections depend jointly on object geometry, material properties, the 3D
environment, and the observer viewing direction. Our approach converts glossy
objects with unknown geometry into radiance-field cameras to image the world
from the object's perspective. Our key insight is to convert the object surface
into a virtual sensor that captures cast reflections as a 2D projection of the
5D environment radiance field visible to the object. We show that recovering
the environment radiance fields enables depth and radiance estimation from the
object to its surroundings in addition to beyond field-of-view novel-view
synthesis, i.e. rendering of novel views that are directly visible only to the
glossy object present in the scene, but not to the observer. Moreover, using the
radiance field we can image around occluders caused by close-by objects in the
scene. Our method is trained end-to-end on multi-view images of the object and
jointly estimates object geometry, diffuse radiance, and the 5D environment
radiance field.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2022 19:32:08 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 14:51:24 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Tiwary",
"Kushagra",
""
],
[
"Dave",
"Akshat",
""
],
[
"Behari",
"Nikhil",
""
],
[
"Klinghoffer",
"Tzofi",
""
],
[
"Veeraraghavan",
"Ashok",
""
],
[
"Raskar",
"Ramesh",
""
]
] |
new_dataset
| 0.996776 |
2212.05005
|
Tianyu He
|
Anni Tang, Tianyu He, Xu Tan, Jun Ling, Runnan Li, Sheng Zhao, Li
Song, Jiang Bian
|
Memories are One-to-Many Mapping Alleviators in Talking Face Generation
|
Project page: see https://memoryface.github.io
| null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Talking face generation aims at generating photo-realistic video portraits of
a target person driven by input audio. Due to its nature of one-to-many mapping
from the input audio to the output video (e.g., one speech content may have
multiple feasible visual appearances), learning a deterministic mapping like
previous works brings ambiguity during training, and thus causes inferior
visual results. Although this one-to-many mapping could be alleviated in part
by a two-stage framework (i.e., an audio-to-expression model followed by a
neural-rendering model), it is still insufficient since the prediction is
produced without enough information (e.g., emotions, wrinkles, etc.). In this
paper, we propose MemFace to complement the missing information with an
implicit memory and an explicit memory that follow the sense of the two stages
respectively. More specifically, the implicit memory is employed in the
audio-to-expression model to capture high-level semantics in the
audio-expression shared space, while the explicit memory is employed in the
neural-rendering model to help synthesize pixel-level details. Our experimental
results show that our proposed MemFace surpasses all the state-of-the-art
results across multiple scenarios consistently and significantly.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 17:45:36 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 07:32:57 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Tang",
"Anni",
""
],
[
"He",
"Tianyu",
""
],
[
"Tan",
"Xu",
""
],
[
"Ling",
"Jun",
""
],
[
"Li",
"Runnan",
""
],
[
"Zhao",
"Sheng",
""
],
[
"Song",
"Li",
""
],
[
"Bian",
"Jiang",
""
]
] |
new_dataset
| 0.989423 |
2212.05063
|
Asma Bensalah
|
Alicia Forn\'es, Asma Bensalah, Cristina Carmona-Duarte, Jialuo Chen,
Miguel A. Ferrer, Andreas Fischer, Josep Llad\'os, Cristina Mart\'in, Eloy
Opisso, R\'ejean Plamondon, Anna Scius-Bertrand, and Josep Maria Tormos
|
The RPM3D project: 3D Kinematics for Remote Patient Monitoring
| null | null |
10.1007/978-3-031-19745-1_16
| null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This project explores the feasibility of remote patient monitoring based on
the analysis of 3D movements captured with smartwatches. We base our analysis
on the Kinematic Theory of Rapid Human Movement. We have validated our research
in a real case scenario for stroke rehabilitation at the Guttmann Institute
(neurorehabilitation hospital), showing promising results. Our work could have
a great impact on remote healthcare applications, improving medical
efficiency and reducing healthcare costs. Future steps include more
clinical validation, developing multi-modal analysis architectures (analysing
data from sensors, images, audio, etc.), and exploring the application of our
technology to monitor other neurodegenerative diseases.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 14:16:32 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Fornés",
"Alicia",
""
],
[
"Bensalah",
"Asma",
""
],
[
"Carmona-Duarte",
"Cristina",
""
],
[
"Chen",
"Jialuo",
""
],
[
"Ferrer",
"Miguel A.",
""
],
[
"Fischer",
"Andreas",
""
],
[
"Lladós",
"Josep",
""
],
[
"Martín",
"Cristina",
""
],
[
"Opisso",
"Eloy",
""
],
[
"Plamondon",
"Réjean",
""
],
[
"Scius-Bertrand",
"Anna",
""
],
[
"Tormos",
"Josep Maria",
""
]
] |
new_dataset
| 0.995985 |
2212.05101
|
Andr\'e Gomes
|
Jacek Kibilda, Nurul H. Mahmood, Andr\'e Gomes, Matti Latva-aho, Luiz
A. DaSilva
|
Reconfigurable Intelligent Surfaces: The New Frontier of Next G Security
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CR cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RIS is one of the significant technological advancements that will mark
next-generation wireless. RIS technology also opens up the possibility of new
security threats, since the reflection of impinging signals can be used for
malicious purposes. This article introduces the basic concept for a
RIS-assisted attack that re-uses the legitimate signal towards a malicious
objective. Specific attacks are identified from this base scenario, and the
RIS-assisted signal cancellation attack is selected for evaluation as an attack
that inherently exploits RIS capabilities. The key takeaway from the evaluation
is that an effective attack requires accurate channel information, a RIS
deployed in a favorable location (from the point of view of the attacker), and
it disproportionately affects legitimate links that already suffer from reduced
path loss. These observations motivate specific security solutions and
recommendations for future work.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 19:58:30 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Kibilda",
"Jacek",
""
],
[
"Mahmood",
"Nurul H.",
""
],
[
"Gomes",
"André",
""
],
[
"Latva-aho",
"Matti",
""
],
[
"DaSilva",
"Luiz A.",
""
]
] |
new_dataset
| 0.9983 |
2212.05108
|
Neha Sunil
|
Neha Sunil, Shaoxiong Wang, Yu She, Edward Adelson, and Alberto
Rodriguez
|
Visuotactile Affordances for Cloth Manipulation with Local Control
|
Accepted at CoRL 2022. Project website:
http://nehasunil.com/visuotactile/visuotactile.html
| null | null | null |
cs.RO cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloth in the real world is often crumpled, self-occluded, or folded in on
itself such that key regions, such as corners, are not directly graspable,
making manipulation difficult. We propose a system that leverages visual and
tactile perception to unfold the cloth via grasping and sliding on edges. By
doing so, the robot is able to grasp two adjacent corners, enabling subsequent
manipulation tasks like folding or hanging. As components of this system, we
develop tactile perception networks that classify whether an edge is grasped
and estimate the pose of the edge. We use the edge classification network to
supervise a visuotactile edge grasp affordance network that can grasp edges
with a 90% success rate. Once an edge is grasped, we demonstrate that the robot
can slide along the cloth to the adjacent corner using tactile pose
estimation/control in real time. See
http://nehasunil.com/visuotactile/visuotactile.html for videos.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 20:18:12 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Sunil",
"Neha",
""
],
[
"Wang",
"Shaoxiong",
""
],
[
"She",
"Yu",
""
],
[
"Adelson",
"Edward",
""
],
[
"Rodriguez",
"Alberto",
""
]
] |
new_dataset
| 0.995393 |
2212.05111
|
Sen Yang
|
Sen Yang, Fan Zhang, Ken Huang, Xi Chen, Youwei Yang, Feng Zhu
|
SoK: MEV Countermeasures: Theory and Practice
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchains offer strong security guarantees, but they cannot protect the
ordering of transactions. Powerful players, such as miners, sequencers, and
sophisticated bots, can reap significant profits by selectively including,
excluding, or re-ordering user transactions. Such profits are called
Miner/Maximal Extractable Value or MEV. MEV bears profound implications for
blockchain security and decentralization. While numerous countermeasures have
been proposed, there is no agreement on the best solution. Moreover, solutions
developed in academic literature differ quite drastically from what is widely
adopted by practitioners. For these reasons, this paper systematizes the
knowledge of the theory and practice of MEV countermeasures. The contribution
is twofold. First, we present a comprehensive taxonomy of 28 proposed MEV
countermeasures, covering four different technical directions. Secondly, we
empirically studied the most popular MEV-auction-based solution with rich
blockchain and mempool data. In addition to gaining insights into MEV auction
platforms' real-world operations, our study sheds light on the prevalent
censorship by MEV auction platforms as a result of the recent OFAC sanction,
and its implication on blockchain properties.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 20:32:23 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Yang",
"Sen",
""
],
[
"Zhang",
"Fan",
""
],
[
"Huang",
"Ken",
""
],
[
"Chen",
"Xi",
""
],
[
"Yang",
"Youwei",
""
],
[
"Zhu",
"Feng",
""
]
] |
new_dataset
| 0.986777 |
2212.05144
|
Christine Herlihy
|
Christine Herlihy, John P. Dickerson
|
Networked Restless Bandits with Positive Externalities
|
Accepted to AAAI 2023
| null | null | null |
cs.LG cs.AI cs.CY cs.SI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Restless multi-armed bandits are often used to model budget-constrained
resource allocation tasks where receipt of the resource is associated with an
increased probability of a favorable state transition. Prior work assumes that
individual arms only benefit if they receive the resource directly. However,
many allocation tasks occur within communities and can be characterized by
positive externalities that allow arms to derive partial benefit when their
neighbor(s) receive the resource. We thus introduce networked restless bandits,
a novel multi-armed bandit setting in which arms are both restless and embedded
within a directed graph. We then present Greta, a graph-aware, Whittle
index-based heuristic algorithm that can be used to efficiently construct a
constrained reward-maximizing action vector at each timestep. Our empirical
results demonstrate that Greta outperforms comparison policies across a range
of hyperparameter values and graph topologies.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 23:37:14 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Herlihy",
"Christine",
""
],
[
"Dickerson",
"John P.",
""
]
] |
new_dataset
| 0.999849 |
2212.05155
|
Yi Ding
|
Yi Ding, Aijia Gao, Thibaud Ryden, Kaushik Mitra, Sukumar Kalmanje,
Yanai Golany, Michael Carbin, Henry Hoffmann
|
Acela: Predictable Datacenter-level Maintenance Job Scheduling
| null | null | null | null |
cs.DC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Datacenter operators ensure fair and regular server maintenance by using
automated processes to schedule maintenance jobs to complete within a strict
time budget. Automating this scheduling problem is challenging because
maintenance job duration varies based on both job type and hardware. While it
is tempting to use prior machine learning techniques for predicting job
duration, we find that the structure of the maintenance job scheduling problem
creates a unique challenge. In particular, we show that prior machine learning
methods that produce the lowest error predictions do not produce the best
scheduling outcomes due to asymmetric costs. Specifically, underpredicting
maintenance job duration results in more servers being taken offline and
longer server downtime than overpredicting maintenance job duration. The system
cost of underprediction is much larger than that of overprediction.
We present Acela, a machine learning system for predicting maintenance job
duration, which uses quantile regression to bias duration predictions toward
overprediction. We integrate Acela into a maintenance job scheduler and
evaluate it on datasets from large-scale, production datacenters. Compared to
machine learning based predictors from prior work, Acela reduces the number of
servers that are taken offline by 1.87-4.28X, and reduces the server offline
time by 1.40-2.80X.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 00:22:49 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Ding",
"Yi",
""
],
[
"Gao",
"Aijia",
""
],
[
"Ryden",
"Thibaud",
""
],
[
"Mitra",
"Kaushik",
""
],
[
"Kalmanje",
"Sukumar",
""
],
[
"Golany",
"Yanai",
""
],
[
"Carbin",
"Michael",
""
],
[
"Hoffmann",
"Henry",
""
]
] |
new_dataset
| 0.998917 |
2212.05211
|
Yizhou Zhao
|
Yizhou Zhao, Qiaozi Gao, Liang Qiu, Govind Thattai, Gaurav S. Sukhatme
|
OpenD: A Benchmark for Language-Driven Door and Drawer Opening
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce OPEND, a benchmark for learning how to use a hand to open
cabinet doors or drawers in a photo-realistic and physics-reliable simulation
environment driven by language instruction. To solve the task, we propose a
multi-step planner composed of a deep neural network and rule-based controllers.
The network is utilized to capture spatial relationships from images and
understand semantic meaning from language instructions. Controllers efficiently
execute the plan based on the spatial and semantic understanding. We evaluate
our system by measuring its zero-shot performance on the test dataset.
Experimental results demonstrate the effectiveness of decision planning by our
multi-step planner for different hands, while suggesting that there is
significant room for developing better models to address the challenge brought
by language understanding, spatial reasoning, and long-term manipulation. We
will release OPEND and host challenges to promote future research in this area.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 05:19:58 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Zhao",
"Yizhou",
""
],
[
"Gao",
"Qiaozi",
""
],
[
"Qiu",
"Liang",
""
],
[
"Thattai",
"Govind",
""
],
[
"Sukhatme",
"Gaurav S.",
""
]
] |
new_dataset
| 0.999719 |
2212.05228
|
Lu Bai
|
Lu Bai, Lixin Cui, Edwin R. Hancock
|
QESK: Quantum-based Entropic Subtree Kernels for Graph Classification
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel graph kernel, namely the Quantum-based
Entropic Subtree Kernel (QESK), for Graph Classification. To this end, we
commence by computing the Average Mixing Matrix (AMM) of the Continuous-time
Quantum Walk (CTQW) evolved on each graph structure. Moreover, we show how this
AMM matrix can be employed to compute a series of entropic subtree
representations associated with the classical Weisfeiler-Lehman (WL) algorithm.
For a pair of graphs, the QESK kernel is defined by computing the
exponentiation of the negative Euclidean distance between their entropic
subtree representations, theoretically resulting in a positive definite graph
kernel. We show that the proposed QESK kernel not only encapsulates complicated
intrinsic quantum-based structural characteristics of graph structures through
the CTQW, but also theoretically addresses the shortcoming of ignoring the
effects of unshared substructures arising in state-of-the-art R-convolution
graph kernels. Moreover, unlike the classical R-convolution kernels, the
proposed QESK can discriminate the distinctions of isomorphic subtrees in terms
of the global graph structures, theoretically explaining the effectiveness.
Experiments indicate that the proposed QESK kernel can significantly outperform
state-of-the-art graph kernels and graph deep learning methods for graph
classification problems.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 07:10:03 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Bai",
"Lu",
""
],
[
"Cui",
"Lixin",
""
],
[
"Hancock",
"Edwin R.",
""
]
] |
new_dataset
| 0.959347 |
2212.05254
|
Qianyu He
|
Qianyu He, Xintao Wang, Jiaqing Liang, Yanghua Xiao
|
MAPS-KB: A Million-scale Probabilistic Simile Knowledge Base
|
Accepted to AAAI 2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to understand and generate similes is an imperative step to
realize human-level AI. However, there is still a considerable gap between
machine intelligence and human cognition in similes, since deep models based on
statistical distribution tend to favour high-frequency similes. Hence, a
large-scale symbolic knowledge base of similes is required, as it contributes
to the modeling of diverse yet unpopular similes while facilitating additional
evaluation and reasoning. To bridge the gap, we propose a novel framework for
large-scale simile knowledge base construction, as well as two probabilistic
metrics which enable an improved understanding of simile phenomena in natural
language. Overall, we construct MAPS-KB, a million-scale probabilistic simile
knowledge base, covering 4.3 million triplets over 0.4 million terms from 70 GB
corpora. We conduct sufficient experiments to justify the effectiveness and
necessity of the methods of our framework. We also apply MAPS-KB on three
downstream tasks to achieve state-of-the-art performance, further demonstrating
the value of MAPS-KB.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 10:06:05 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"He",
"Qianyu",
""
],
[
"Wang",
"Xintao",
""
],
[
"Liang",
"Jiaqing",
""
],
[
"Xiao",
"Yanghua",
""
]
] |
new_dataset
| 0.999124 |
2212.05342
|
Ruohao Wang
|
Ruohao Wang, Xiaohui Liu, Zhilu Zhang, Xiaohe Wu, Chun-Mei Feng, Lei
Zhang, Wangmeng Zuo
|
Benchmark Dataset and Effective Inter-Frame Alignment for Real-World
Video Super-Resolution
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video super-resolution (VSR) aiming to reconstruct a high-resolution (HR)
video from its low-resolution (LR) counterpart has made tremendous progress in
recent years. However, it remains challenging to deploy existing VSR methods to
real-world data with complex degradations. On the one hand, there are few
well-aligned real-world VSR datasets, especially with large super-resolution
scale factors, which limits the development of real-world VSR tasks. On the
other hand, alignment algorithms in existing VSR methods perform poorly for
real-world videos, leading to unsatisfactory results. As an attempt to address
the aforementioned issues, we build a real-world 4$\times$ VSR dataset, namely
MVSR4$\times$, where low- and high-resolution videos are captured with
different focal length lenses of a smartphone, respectively. Moreover, we
propose an effective alignment method for real-world VSR, namely EAVSR. EAVSR
takes the proposed multi-layer adaptive spatial transform network (MultiAdaSTN)
to refine the offsets provided by the pre-trained optical flow estimation
network. Experimental results on RealVSR and MVSR4$\times$ datasets show the
effectiveness and practicality of our method, and we achieve state-of-the-art
performance in real-world VSR task. The dataset and code will be publicly
available.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 17:41:46 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Wang",
"Ruohao",
""
],
[
"Liu",
"Xiaohui",
""
],
[
"Zhang",
"Zhilu",
""
],
[
"Wu",
"Xiaohe",
""
],
[
"Feng",
"Chun-Mei",
""
],
[
"Zhang",
"Lei",
""
],
[
"Zuo",
"Wangmeng",
""
]
] |
new_dataset
| 0.999871 |
2212.05359
|
Alireza Ramezani
|
Eric Sihite, Alireza Ramezani
|
Wake-Based Locomotion Gait Design for Aerobat
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Flying animals, such as bats, fly through their fluidic environment as they
create air jets and form wake structures downstream of their flight path. Bats,
in particular, dynamically morph their highly flexible and dexterous armwing to
manipulate their fluidic environment which is key to their agility and flight
efficiency. This paper presents the theoretical and numerical analysis of the
wake-structure-based gait design inspired by bat flight for flapping robots
using the notion of reduced-order models and an unsteady aerodynamic model
incorporating the Wagner function. The objective of this paper is to introduce the
notion of gait design for flapping robots by systematically searching the
design space in the context of optimization. The solution found using our gait
design framework was used to design and test a flapping robot.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 20:13:51 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Sihite",
"Eric",
""
],
[
"Ramezani",
"Alireza",
""
]
] |
new_dataset
| 0.995153 |
2212.05374
|
Dushyantha Basnayaka
|
Dushyantha A Basnayaka
|
Mediumband Wireless Communication
|
5 pages, 3 figures, Proceedings of IEEE Vehicular Technology
conference (VTC-Fall) 2022, London-Beijing
|
Proceedings of IEEE Vehicular Technology conference (VTC-Fall)
2022, London-Beijing
| null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The fundamental phenomenon widely known as multipath is unavoidable in
wireless communication, and affects almost every element of modern wireless
communication systems. The impact of multipath on the received signal depends
on whether the delay spread (i.e., spread of time delays associated with
different multipath components) is large or small relative to the signalling
period of the wireless communication system. In narrowband systems, the delay
spread is about one tenth (or less) of the signalling period. The delay spread
and the signalling period of broadband systems are of the same order of
magnitude. In between these two extremes, there appears to exist an important,
yet overlooked, class of systems whose delay spread is neither small nor large
enough for them to fall into these two basic classes. In this paper, the effect
of multipath on this class of systems denoted henceforth as mediumband is
studied, and its channel is characterized in compact form in order to enable
future research into this class of wireless communication systems.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 22:40:41 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Basnayaka",
"Dushyantha A",
""
]
] |
new_dataset
| 0.999509 |
2212.05435
|
Justin Chan
|
Justin Chan, Antonio Glenn, Malek Itani, Lisa R. Mancl, Emily
Gallagher, Randall Bly, Shwetak Patel, and Shyamnath Gollakota
|
Wireless earbuds for low-cost hearing screening
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
We present the first wireless earbud hardware that can perform hearing
screening by detecting otoacoustic emissions. The conventional wisdom has been
that detecting otoacoustic emissions, which are the faint sounds generated by
the cochlea, requires sensitive and expensive acoustic hardware. Thus, medical
devices for hearing screening cost thousands of dollars and are inaccessible in
low and middle income countries. We show that by designing wireless earbuds
using low-cost acoustic hardware and combining them with wireless sensing
algorithms, we can reliably identify otoacoustic emissions and perform hearing
screening. Our algorithms combine frequency modulated chirps with wideband
pulses emitted from a low-cost speaker to reliably separate otoacoustic
emissions from in-ear reflections and echoes. We conducted a clinical study
with 50 ears across two healthcare sites. Our study shows that the low-cost
earbuds detect hearing loss with 100% sensitivity and 89.7% specificity, which
is comparable to the performance of an $8000 medical device. By developing
low-cost and open-source wearable technology, our work may help address global
health inequities in hearing screening by democratizing these medical devices.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 07:36:46 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Chan",
"Justin",
""
],
[
"Glenn",
"Antonio",
""
],
[
"Itani",
"Malek",
""
],
[
"Mancl",
"Lisa R.",
""
],
[
"Gallagher",
"Emily",
""
],
[
"Bly",
"Randall",
""
],
[
"Patel",
"Shwetak",
""
],
[
"Gollakota",
"Shyamnath",
""
]
] |
new_dataset
| 0.99755 |
2212.05447
|
Shanaka Anuradha Samarakoon
|
Shanaka Anuradha Samarakoon
|
Bypassing Content-based internet packages with an SSL/TLS Tunnel, SNI
Spoofing, and DNS spoofing
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Internet Service Providers (ISPs) are increasingly offering content-based
packages to their clients. These packages offer access to a range of online
content, such as Facebook, YouTube, Messenger, Zoom, and many other popular
services, for a fixed price. This allows users to access all the content they
want without worrying about data caps or overage charges. These packages are
way cheaper than regular internet packages. Even some ISPs offer unlimited
content-based packages for a low price. When using these packages, network
traffic is continuously filtered by the ISP, and the user will be charged
separately for using other services which are not included in the content-based
package.
Some internet users use HTTP injector software to bypass the ISP's network
traffic filters and access other resources available on the internet using
content-based package data quotas. This research aims to find an alternative
method to bypass the ISP's network traffic filters without using an HTTP injector.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 08:51:27 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Samarakoon",
"Shanaka Anuradha",
""
]
] |
new_dataset
| 0.999159 |
2212.05451
|
Deepika Saxena
|
Deepika Saxena and Ashutosh Kumar Singh
|
OSC-MC: Online Secure Communication Model for Cloud Environment
| null |
IEEE Communications Letters, 2021
|
10.1109/LCOMM.2021.3086986
| null |
cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A malicious cloud user may exploit outsourced data involved in online
communication, co-residency, and hypervisor vulnerabilities to breach and
hamper sensitive information, and inject malicious traffic-based congestion,
rendering services to other benign users. To address this critical and
challenging problem, this letter proposes an Online Secure Communication
Model for Cloud (OSC-MC) by identifying and terminating malicious VMs and
inter-VM links prior to the occurrence of security threats. The anomalous
network traffic, bandwidth usage, and unauthorized inter-VM links are security
breach indicators which guide secure cloud communication and resource
allocation. The simulation and comparison of the proposed model with existing
approaches reveal that it significantly improves authorised inter-communication
links up to 34.5% with a reduction of network hogs, and power consumption by
66.46% and 39.31%, respectively.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 09:09:38 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Saxena",
"Deepika",
""
],
[
"Singh",
"Ashutosh Kumar",
""
]
] |
new_dataset
| 0.978589 |
2212.05479
|
Fethi Bougares
|
Fethi Bougares and Salim Jouili
|
End-to-End Speech Translation of Arabic to English Broadcast News
|
Arabic Natural Language Processing Workshop 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Speech translation (ST) is the task of directly translating acoustic speech
signals in a source language into text in a foreign language. The ST task has
long been addressed using a pipeline approach with two modules: first
an Automatic Speech Recognition (ASR) in the source language followed by a
text-to-text Machine translation (MT). In the past few years, we have seen a
paradigm shift towards the end-to-end approaches using sequence-to-sequence
deep neural network models. This paper presents our efforts towards the
development of the first Broadcast News end-to-end Arabic to English speech
translation system. Starting from independent ASR and MT LDC releases, we were
able to identify about 92 hours of Arabic audio recordings for which the manual
transcription was also translated into English at the segment level. These data
were used to train and compare pipeline and end-to-end speech translation
systems under multiple scenarios including transfer learning and data
augmentation techniques.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 11:35:46 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Bougares",
"Fethi",
""
],
[
"Jouili",
"Salim",
""
]
] |
new_dataset
| 0.969525 |
2212.05489
|
Zhiling Luo
|
Zhiling Luo, Qiankun Shi, Sha Zhao, Wei Zhou, Haiqing Chen, Yuankai Ma
and Haitao Leng
|
AliCHI: A Large-scale Multi-modal Dataset and Automated Evaluation Tool
for Human-like Dialogue Systems
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
A well-designed interactive human-like dialogue system is expected to take
actions (e.g. smiling) and respond in a pattern similar to humans. However, due
to the limitation of single-modality (only speech) or small volume of currently
public datasets, most dialogue systems can only respond in speech and cannot
take human-like actions. In this work, we build a large-scale multi-modal
dataset of human-to-human conversation in a face-to-face fashion, with
fine-grained annotations. The raw data in video format contains 635 dialogue
sessions, being collected from 200 participants on designed topics and lasting
52 hours in total. Moreover, we manually annotated the verbal and non-verbal
behaviors in each dialogue session on their start/end timestamp. Furthermore,
we developed a corresponding evaluation tool for human-like dialogue systems to
automatically evaluate the accuracy of two basic tasks, turn-taking
prediction and backchannel prediction, on both time and content. We have
opened the data; the tools will be released at the conference.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 12:33:53 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Luo",
"Zhiling",
""
],
[
"Shi",
"Qiankun",
""
],
[
"Zhao",
"Sha",
""
],
[
"Zhou",
"Wei",
""
],
[
"Chen",
"Haiqing",
""
],
[
"Ma",
"Yuankai",
""
],
[
"Leng",
"Haitao",
""
]
] |
new_dataset
| 0.999822 |
2212.05630
|
Chih-Hui Ho
|
Chih-Hui Ho, Nuno Vasconcelos
|
DISCO: Adversarial Defense with Local Implicit Functions
|
Accepted to Neurips 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of adversarial defenses for image classification, where the goal
is to robustify a classifier against adversarial examples, is considered.
Inspired by the hypothesis that these examples lie beyond the natural image
manifold, a novel aDversarIal defenSe with local impliCit functiOns (DISCO) is
proposed to remove adversarial perturbations by localized manifold projections.
DISCO consumes an adversarial image and a query pixel location and outputs a
clean RGB value at the location. It is implemented with an encoder and a local
implicit module, where the former produces per-pixel deep features and the
latter uses the features in the neighborhood of query pixel for predicting the
clean RGB value. Extensive experiments demonstrate that both DISCO and its
cascade version outperform prior defenses, regardless of whether the defense is
known to the attacker. DISCO is also shown to be data and parameter efficient
and to mount defenses that transfer across datasets, classifiers and attacks.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 23:54:26 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Ho",
"Chih-Hui",
""
],
[
"Vasconcelos",
"Nuno",
""
]
] |
new_dataset
| 0.998983 |
2212.05705
|
Kangcheng Liu
|
Kangcheng Liu
|
An Integrated LiDAR-SLAM System for Complex Environment with Noisy Point
Clouds
|
IROS 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The current LiDAR SLAM (Simultaneous Localization and Mapping) system suffers
greatly from low accuracy and limited robustness when faced with complicated
circumstances. From our experiments, we find that current LiDAR SLAM systems
have limited performance when the noise level in the obtained point clouds is
large. Therefore, in this work, we propose a general framework to tackle the
problem of denoising and loop closure for LiDAR SLAM in complex environments
with many noises and outliers caused by reflective materials. Current
approaches for point cloud denoising are mainly designed for small-scale point
clouds and cannot be extended to large-scale point cloud scenes. In this
work, we first propose a lightweight network for large-scale point cloud
denoising. Subsequently, we also design an efficient loop closure
network for place recognition in global optimization to improve the
localization accuracy of the whole system. Finally, we have demonstrated by
extensive experiments and benchmark studies that our method can have a
significant boost on the localization accuracy of the LiDAR SLAM system when
faced with noisy point clouds, with a marginal increase in computational cost.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 05:14:59 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Liu",
"Kangcheng",
""
]
] |
new_dataset
| 0.99823 |
2212.05709
|
Hui Wei
|
Hui Wei, Zhixiang Wang, Xuemei Jia, Yinqiang Zheng, Hao Tang,
Shin'ichi Satoh, Zheng Wang
|
HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable
Design
|
Accepted to AAAI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Adversarial attacks on thermal infrared imaging expose the risk of related
applications. Estimating the security of these systems is essential for safely
deploying them in the real world. In many cases, realizing the attacks in the
physical space requires elaborate special perturbations. These solutions are
often \emph{impractical} and \emph{attention-grabbing}. To address the need for
a physically practical and stealthy adversarial attack, we introduce
\textsc{HotCold} Block, a novel physical attack for infrared detectors that
hides persons utilizing the wearable Warming Paste and Cooling Paste. By
attaching these readily available temperature-controlled materials to the body,
\textsc{HotCold} Block evades human eyes efficiently. Moreover, unlike existing
methods that build adversarial patches with complex texture and structure
features, \textsc{HotCold} Block utilizes an SSP-oriented adversarial
optimization algorithm that enables attacks with pure color blocks and explores
the influence of size, shape, and position on attack performance. Extensive
experimental results in both digital and physical environments demonstrate the
performance of our proposed \textsc{HotCold} Block. \emph{Code is available:
\textcolor{magenta}{https://github.com/weihui1308/HOTCOLDBlock}}.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 05:23:11 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Wei",
"Hui",
""
],
[
"Wang",
"Zhixiang",
""
],
[
"Jia",
"Xuemei",
""
],
[
"Zheng",
"Yinqiang",
""
],
[
"Tang",
"Hao",
""
],
[
"Satoh",
"Shin'ichi",
""
],
[
"Wang",
"Zheng",
""
]
] |
new_dataset
| 0.991788 |
2212.05782
|
Ting G
|
Ting Gao, Rodrigo Kappes Marques, Lei Yu
|
GT-CausIn: a novel causal-based insight for traffic prediction
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Traffic forecasting is an important application of spatiotemporal series
prediction. Among different methods, graph neural networks have so far achieved
the most promising results; learning relations between graph nodes thus becomes
a crucial task. However, room for improvement is very limited when these relations
are learned in a node-to-node manner. The challenge stems from (1) obscure
temporal dependencies between different stations, (2) difficulties in defining
variables beyond the node level, and (3) no ready-made method to validate the
learned relations. To confront these challenges, we define legitimate traffic
causal variables to discover the causal relation inside the traffic network,
which is carefully checked with statistical tools and case analysis. We then
present a novel model named Graph Spatial-Temporal Network Based on Causal
Insight (GT-CausIn), where prior learned causal information is integrated with
graph diffusion layers and temporal convolutional network (TCN) layers.
Experiments are carried out on two real-world traffic datasets: PEMS-BAY and
METR-LA, which show that GT-CausIn significantly outperforms the
state-of-the-art models on mid-term and long-term prediction.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 09:09:39 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Gao",
"Ting",
""
],
[
"Marques",
"Rodrigo Kappes",
""
],
[
"Yu",
"Lei",
""
]
] |
new_dataset
| 0.990227 |
2212.05884
|
Hailin Li
|
Raghavendra Ramachandra and Hailin Li
|
Finger-NestNet: Interpretable Fingerphoto Verification on Smartphone
using Deep Nested Residual Network
|
a preprint paper accepted in wacv2023 workshop
| null | null | null |
cs.CV cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Fingerphoto images captured using a smartphone are successfully used to
verify individuals, which has enabled several applications. This work
presents a novel algorithm for fingerphoto verification using a nested residual
block: Finger-NestNet. The proposed Finger-NestNet architecture is designed
with three consecutive convolution blocks followed by a series of nested
residual blocks to achieve reliable fingerphoto verification. This paper also
presents the interpretability of the proposed method using four different
visualization techniques that can shed light on the critical regions in the
fingerphoto biometrics that can contribute to the reliable verification
performance of the proposed method. Extensive experiments are performed on the
fingerphoto dataset comprised of 196 unique fingers collected from 52 unique
data subjects using an iPhone6S. Experimental results indicate the improved
verification performance of the proposed method compared to six different
existing methods, with EER = 1.15%.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 17:15:35 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Ramachandra",
"Raghavendra",
""
],
[
"Li",
"Hailin",
""
]
] |
new_dataset
| 0.97769 |
2212.05893
|
Sterre Lutz
|
Sterre Lutz
|
Deontic Paradoxes in Library Lending Regulations: A Case Study in Flint
|
2 pages. Accepted submission for ProLaLa 2023 Workshop (part of POPL
2023 conference)
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Flint is a frame-based and action-centered language developed by Van Doesburg
et al. to capture and compare different interpretations of sources of norms
(e.g. laws or regulations). The aim of this research is to investigate whether
Flint is susceptible to paradoxes that are known to occur in normative systems.
The example of library lending regulations -- first introduced by Sergot to
argue for including deontic concepts in legal knowledge representation -- is
central to this analysis. The hypothesis is that Flint is capable of expressing
Sergot's library example without the occurrence of deontic paradoxes (most
notably: the Chisholm paradox). This research is a first step towards a formal
analysis of the expressive power of Flint as a language and furthers
understanding of the relation between Flint and existing deontic logics.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 13:50:56 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Lutz",
"Sterre",
""
]
] |
new_dataset
| 0.997788 |
2212.05903
|
Smaran Adarsh
|
Smaran Adarsh, Lukas Burgholzer, Tanmay Manjunath and Robert Wille
|
SyReC Synthesizer: An MQT tool for synthesis of reversible circuits
|
6 pages, 3 figures, Software Impacts Journal
|
Software Impacts, vol. 14, p. 100451, 2022
|
10.1016/j.simpa.2022.100451
| null |
cs.AR cs.ET quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reversible circuits form the backbone for many promising emerging
technologies such as quantum computing, low power/adiabatic design,
encoder/decoder devices, and several other applications. In the recent years,
the scalable synthesis of such circuits has gained significant attention. In
this work, we present the SyReC Synthesizer, a synthesis tool for reversible
circuits based on the hardware description language SyReC. SyReC allows one to
describe reversible functionality at a high level of abstraction. The provided
SyReC Synthesizer then realizes this functionality in a push-button fashion.
Corresponding options allow for a trade-off between the number of needed
circuit signals/lines (relevant, e.g., for quantum computing in which every
circuit line corresponds to a qubit) and the respectively needed gates
(corresponding to the circuit's costs). Furthermore, the tool allows the
resulting circuit to be simulated and its gate costs to be determined.
The SyReC Synthesizer is available as an open-source software package at
https://github.com/cda-tum/syrec as part of the Munich Quantum Toolkit (MQT).
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 14:03:43 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Adarsh",
"Smaran",
""
],
[
"Burgholzer",
"Lukas",
""
],
[
"Manjunath",
"Tanmay",
""
],
[
"Wille",
"Robert",
""
]
] |
new_dataset
| 0.999527 |
2212.05909
|
Tanish Mittal
|
Tanish Mittal, Preyansh Agrawal, Esha Pahwa, Aarya Makwana
|
NFResNet: Multi-scale and U-shaped Networks for Deblurring
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-Scale and U-shaped Networks are widely used in various image
restoration problems, including deblurring. Keeping in mind the wide range of
applications, we present a comparison of these architectures and their effects
on image deblurring. We also introduce a new block called NFResblock. It
consists of a Fast Fourier Transformation layer and a series of modified
Non-Linear Activation Free Blocks. Based on these architectures and additions,
we introduce NFResnet and NFResnet+, which are modified multi-scale and U-Net
architectures, respectively. We also use three different loss functions to
train these architectures: Charbonnier Loss, Edge Loss, and Frequency
Reconstruction Loss. Extensive experiments on the Deep Video Deblurring
dataset, along with ablation studies for each component, have been presented in
this paper. The proposed architectures achieve a considerable increase in Peak
Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 14:19:34 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Mittal",
"Tanish",
""
],
[
"Agrawal",
"Preyansh",
""
],
[
"Pahwa",
"Esha",
""
],
[
"Makwana",
"Aarya",
""
]
] |
new_dataset
| 0.99572 |
2212.06007
|
Tom Davot
|
Tom Davot and Lucas Isenmann and Sanjukta Roy and Jocelyn Thiebaut
|
Degreewidth: a New Parameter for Solving Problems on Tournaments
| null | null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we define a new parameter for tournaments called degreewidth,
which can be seen as a measure of how far the tournament is from being acyclic.
The degreewidth of a tournament $T$ denoted by $\Delta(T)$ is the minimum value
$k$ for which we can find an ordering $\langle v_1, \dots, v_n \rangle$ of the
vertices of $T$ such that every vertex is incident to at most $k$ backward arcs
(\textit{i.e.} an arc $(v_i,v_j)$ such that $j<i$). Thus, a tournament is
acyclic if and only if its degreewidth is zero.
Additionally, the class of sparse tournaments defined by Bessy et al. [ESA
2017] is exactly the class of tournaments with degreewidth one.
We first study computational complexity of finding degreewidth. Namely, we
show it is NP-hard and complement this result with a $3$-approximation
algorithm. We also provide a cubic algorithm to decide if a tournament is
sparse.
Finally, we study classical graph problems \textsc{Dominating Set} and
\textsc{Feedback Vertex Set} parameterized by degreewidth. We show the former
is fixed-parameter tractable whereas the latter is NP-hard on sparse
tournaments. Additionally, we study \textsc{Feedback Arc Set} on sparse
tournaments.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 16:13:20 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Davot",
"Tom",
""
],
[
"Isenmann",
"Lucas",
""
],
[
"Roy",
"Sanjukta",
""
],
[
"Thiebaut",
"Jocelyn",
""
]
] |
new_dataset
| 0.957621 |
2212.06034
|
Rishabh Misra
|
Rishabh Misra
|
IMDB Spoiler Dataset
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
User-generated reviews are often our first point of contact when we consider
watching a movie or a TV show. However, beyond telling us the qualitative
aspects of the media we want to consume, reviews may inevitably contain
undesired revelatory information (i.e. 'spoilers') such as the surprising fate
of a character in a movie, or the identity of a murderer in a crime-suspense
movie, etc. In this paper, we present a high-quality movie-review-based spoiler
dataset to tackle the problem of spoiler detection and describe various
research questions it can answer.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 22:31:06 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Misra",
"Rishabh",
""
]
] |
new_dataset
| 0.99991 |
2212.06035
|
Rishabh Misra
|
Rishabh Misra
|
News Headlines Dataset For Sarcasm Detection
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Past studies in Sarcasm Detection mostly make use of Twitter datasets
collected using hashtag-based supervision but such datasets are noisy in terms
of labels and language. Furthermore, many tweets are replies to other tweets,
and detecting sarcasm in these requires the availability of contextual tweets.
To overcome the limitations related to noise in Twitter datasets, we curate
News Headlines Dataset from two news websites: TheOnion aims at producing
sarcastic versions of current events, whereas HuffPost publishes real news. The
dataset contains about 28K headlines out of which 13K are sarcastic. To make it
more useful, we have included the source links of the news articles so that
more data can be extracted as needed. In this paper, we describe various
details about the dataset and potential use cases apart from Sarcasm Detection.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 22:25:36 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Misra",
"Rishabh",
""
]
] |
new_dataset
| 0.999895 |
2212.06037
|
Siyao Peng
|
Siyao Peng, Yang Janet Liu, Amir Zeldes
|
Chinese Discourse Annotation Reference Manual
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This document provides extensive guidelines and examples for Rhetorical
Structure Theory (RST) annotation in Mandarin Chinese. The guideline is divided
into three sections. We first introduce preprocessing steps to prepare data for
RST annotation. Secondly, we discuss syntactic criteria to segment texts into
Elementary Discourse Units (EDUs). Lastly, we provide examples to define and
distinguish discourse relations in different genres. We hope that this
reference manual can facilitate RST annotations in Chinese and accelerate the
development of the RST framework across languages.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 11:02:42 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Peng",
"Siyao",
""
],
[
"Liu",
"Yang Janet",
""
],
[
"Zeldes",
"Amir",
""
]
] |
new_dataset
| 0.991385 |
2212.06049
|
Deeksha Varshney
|
Deeksha Varshney, Aizan Zafar, Niranshu Kumar Behra and Asif Ekbal
|
CDialog: A Multi-turn Covid-19 Conversation Dataset for Entity-Aware
Dialog Generation
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The development of conversational agents to interact with patients and
deliver clinical advice has attracted the interest of many researchers,
particularly in light of the COVID-19 pandemic. The training of an end-to-end
neural-based dialog system, on the other hand, is hampered by the lack of a
multi-turn medical dialog corpus. We make the very first attempt to release a
high-quality multi-turn Medical Dialog dataset relating to Covid-19 disease
named CDialog, with over 1K conversations collected from online medical
counselling websites. We annotate each utterance of the conversation with seven
different categories of medical entities, including diseases, symptoms, medical
tests, medical history, remedies, medications and other aspects as additional
labels. Finally, we propose a novel neural medical dialog system based on the
CDialog dataset to advance future research on developing automated medical
dialog systems. We use pre-trained language models for dialogue generation,
incorporating annotated medical entities, to generate a virtual doctor's
response that addresses the patient's query. Experimental results show that the
proposed dialog models perform better when supplemented with entity
information, which improves response quality.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 11:07:34 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Varshney",
"Deeksha",
""
],
[
"Zafar",
"Aizan",
""
],
[
"Behra",
"Niranshu Kumar",
""
],
[
"Ekbal",
"Asif",
""
]
] |
new_dataset
| 0.999364 |
2212.06088
|
Yen-Chen Lin
|
Lin Yen-Chen, Pete Florence, Andy Zeng, Jonathan T. Barron, Yilun Du,
Wei-Chiu Ma, Anthony Simeonov, Alberto Rodriguez Garcia, Phillip Isola
|
MIRA: Mental Imagery for Robotic Affordances
|
CoRL 2022, webpage: https://yenchenlin.me/mira
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans form mental images of 3D scenes to support counterfactual imagination,
planning, and motor control. Our abilities to predict the appearance and
affordance of the scene from previously unobserved viewpoints aid us in
performing manipulation tasks (e.g., 6-DoF kitting) with a level of ease that
is currently out of reach for existing robot learning frameworks. In this work,
we aim to build artificial systems that can analogously plan actions on top of
imagined images. To this end, we introduce Mental Imagery for Robotic
Affordances (MIRA), an action reasoning framework that optimizes actions with
novel-view synthesis and affordance prediction in the loop. Given a set of 2D
RGB images, MIRA builds a consistent 3D scene representation, through which we
synthesize novel orthographic views amenable to pixel-wise affordances
prediction for action optimization. We illustrate how this optimization process
enables us to generalize to unseen out-of-plane rotations for 6-DoF robotic
manipulation tasks given a limited number of demonstrations, paving the way
toward machines that autonomously learn to understand the world around them for
planning actions.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 18:02:32 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Yen-Chen",
"Lin",
""
],
[
"Florence",
"Pete",
""
],
[
"Zeng",
"Andy",
""
],
[
"Barron",
"Jonathan T.",
""
],
[
"Du",
"Yilun",
""
],
[
"Ma",
"Wei-Chiu",
""
],
[
"Simeonov",
"Anthony",
""
],
[
"Garcia",
"Alberto Rodriguez",
""
],
[
"Isola",
"Phillip",
""
]
] |
new_dataset
| 0.998345 |
2212.06135
|
Tengfei Wang
|
Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas
Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, Baining Guo
|
Rodin: A Generative Model for Sculpting 3D Digital Avatars Using
Diffusion
|
Project Webpage: https://3d-avatar-diffusion.microsoft.com/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a 3D generative model that uses diffusion models to
automatically generate 3D digital avatars represented as neural radiance
fields. A significant challenge in generating such avatars is that the memory
and processing costs in 3D are prohibitive for producing the rich details
required for high-quality avatars. To tackle this problem we propose the
roll-out diffusion network (Rodin), which represents a neural radiance field as
multiple 2D feature maps and rolls out these maps into a single 2D feature
plane within which we perform 3D-aware diffusion. The Rodin model brings the
much-needed computational efficiency while preserving the integrity of
diffusion in 3D by using 3D-aware convolution that attends to projected
features in the 2D feature plane according to their original relationship in
3D. We also use latent conditioning to orchestrate the feature generation for
global coherence, leading to high-fidelity avatars and enabling their semantic
editing based on text prompts. Finally, we use hierarchical synthesis to
further enhance details. The 3D avatars generated by our model compare
favorably with those produced by existing generative techniques. We can
generate highly detailed avatars with realistic hairstyles and facial hair like
beards. We also demonstrate 3D avatar generation from image or text as well as
text-guided editability.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 18:59:40 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Wang",
"Tengfei",
""
],
[
"Zhang",
"Bo",
""
],
[
"Zhang",
"Ting",
""
],
[
"Gu",
"Shuyang",
""
],
[
"Bao",
"Jianmin",
""
],
[
"Baltrusaitis",
"Tadas",
""
],
[
"Shen",
"Jingjing",
""
],
[
"Chen",
"Dong",
""
],
[
"Wen",
"Fang",
""
],
[
"Chen",
"Qifeng",
""
],
[
"Guo",
"Baining",
""
]
] |
new_dataset
| 0.974706 |
2212.06138
|
Dongdong Chen
|
Xiaoyi Dong and Jianmin Bao and Ting Zhang and Dongdong Chen and
Shuyang Gu and Weiming Zhang and Lu Yuan and Dong Chen and Fang Wen and
Nenghai Yu
|
CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1
Accuracy with ViT-B and ViT-L on ImageNet
|
Technical Report, code will be available at
https://github.com/LightDXY/FT-CLIP
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent studies have shown that CLIP has achieved remarkable success in
performing zero-shot inference while its fine-tuning performance is not
satisfactory. In this paper, we identify that fine-tuning performance is
significantly impacted by hyper-parameter choices. We examine various key
hyper-parameters and empirically evaluate their impact in fine-tuning CLIP for
classification tasks through a comprehensive study. We find that the
fine-tuning performance of CLIP is substantially underestimated. Equipped with
hyper-parameter refinement, we demonstrate CLIP itself is better or at least
competitive in fine-tuning compared with large-scale supervised pre-training
approaches or latest works that use CLIP as prediction targets in Masked Image
Modeling. Specifically, CLIP ViT-Base/16 and CLIP ViT-Large/14 can achieve
85.7% and 88.0% fine-tuning Top-1 accuracy, respectively, on the ImageNet-1K dataset. These
observations challenge the conventional conclusion that CLIP is not suitable
for fine-tuning, and motivate us to rethink recently proposed improvements
based on CLIP. We will release our code publicly at
\url{https://github.com/LightDXY/FT-CLIP}.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 18:59:59 GMT"
}
] | 2022-12-13T00:00:00 |
[
[
"Dong",
"Xiaoyi",
""
],
[
"Bao",
"Jianmin",
""
],
[
"Zhang",
"Ting",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Gu",
"Shuyang",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Yuan",
"Lu",
""
],
[
"Chen",
"Dong",
""
],
[
"Wen",
"Fang",
""
],
[
"Yu",
"Nenghai",
""
]
] |
new_dataset
| 0.988639 |
1904.12060
|
Th\'eo Pierron
|
Marthe Bonamy, Th\'eo Pierron, \'Eric Sopena
|
Every planar graph with $\Delta\geqslant 8$ is totally
$(\Delta+2)$-choosable
|
64 pages, 77 figures
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Total coloring is a variant of edge coloring where both vertices and edges
are to be colored. A graph is totally $k$-choosable if for any list assignment
of $k$ colors to each vertex and each edge, we can extract a proper total
coloring. In this setting, a graph of maximum degree $\Delta$ needs at least
$\Delta+1$ colors. In the planar case, Borodin proved in 1989 that $\Delta+2$
colors suffice when $\Delta$ is at least 9. We show that this bound also holds
when $\Delta$ is $8$.
|
[
{
"version": "v1",
"created": "Fri, 26 Apr 2019 22:06:39 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Dec 2019 14:19:02 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Oct 2021 13:15:14 GMT"
},
{
"version": "v4",
"created": "Fri, 9 Dec 2022 15:11:31 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Bonamy",
"Marthe",
""
],
[
"Pierron",
"Théo",
""
],
[
"Sopena",
"Éric",
""
]
] |
new_dataset
| 0.991483 |
2007.02330
|
Bruno Bauwens
|
Bruno Bauwens and Marius Zimand
|
Universal codes in the shared-randomness model for channels with general
distortion capabilities
|
Removed the mentioning of online matching, which is not used here
| null | null | null |
cs.IT cs.CC math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We put forth new models for universal channel coding. Unlike standard codes
which are designed for a specific type of channel, our most general universal
code makes communication resilient on every channel, provided the noise level
is below the tolerated bound, where the noise level t of a channel is the
logarithm of its ambiguity (the maximum number of strings that can be distorted
into a given one). The other more restricted universal codes still work for
large classes of natural channels. In a universal code, encoding is
channel-independent, but the decoding function knows the type of channel. We
allow the encoding and the decoding functions to share randomness, which is
unavailable to the channel. There are two scenarios for the type of attack that
a channel can perform. In the oblivious scenario, codewords belong to an
additive group and the channel distorts a codeword by adding a vector from a
fixed set. The selection is based on the message and the encoding function, but
not on the codeword. In the Hamming scenario, the channel knows the codeword
and is fully adversarial. For a universal code, there are two parameters of
interest: the rate, which is the ratio between the message length k and the
codeword length n, and the number of shared random bits. We show the existence
in both scenarios of universal codes with rate 1-t/n - o(1), which is optimal
modulo the o(1) term. The number of shared random bits is O(log n) in the
oblivious scenario, and O(n) in the Hamming scenario, which, for typical values
of the noise level, we show to be optimal, modulo the constant hidden in the
O() notation. In both scenarios, the universal encoding is done in time
polynomial in n, but the channel-dependent decoding procedures are in general
not efficient. For some weaker classes of channels we construct universal codes
with polynomial-time encoding and decoding.
|
[
{
"version": "v1",
"created": "Sun, 5 Jul 2020 13:05:14 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Nov 2020 22:28:09 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Dec 2020 20:24:28 GMT"
},
{
"version": "v4",
"created": "Wed, 17 Feb 2021 14:57:43 GMT"
},
{
"version": "v5",
"created": "Thu, 8 Dec 2022 22:06:59 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Bauwens",
"Bruno",
""
],
[
"Zimand",
"Marius",
""
]
] |
new_dataset
| 0.995903 |
2203.07601
|
Naoki Kobayashi
|
Naoki Kobayashi, Kento Tanahashi, Ryosuke Sato, Takeshi Tsukada
|
Automatic HFL(Z) Validity Checking for Program Verification
|
A long version of the paper published in Proceedings of POPL 2023
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an automated method for checking the validity of a formula of
HFL(Z), a higher-order logic with fixpoint operators and integers. Combined
with Kobayashi et al.'s reduction from higher-order program verification to
HFL(Z) validity checking, our method yields a fully automated, uniform
verification method for arbitrary temporal properties of higher-order
functional programs expressible in the modal mu-calculus, including
termination, non-termination, fair termination, fair non-termination, and also
branching-time properties. We have implemented our method and obtained
promising experimental results.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 02:17:49 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2022 04:59:17 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Kobayashi",
"Naoki",
""
],
[
"Tanahashi",
"Kento",
""
],
[
"Sato",
"Ryosuke",
""
],
[
"Tsukada",
"Takeshi",
""
]
] |
new_dataset
| 0.997235 |
2203.15720
|
Yifeng Jiang
|
Yifeng Jiang, Yuting Ye, Deepak Gopinath, Jungdam Won, Alexander W.
Winkler, C. Karen Liu
|
Transformer Inertial Poser: Real-time Human Motion Reconstruction from
Sparse IMUs with Simultaneous Terrain Generation
|
SIGGRAPH Asia 2022. Video: https://youtu.be/rXb6SaXsnc0. Code:
https://github.com/jyf588/transformer-inertial-poser
| null |
10.1145/3550469.3555428
| null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Real-time human motion reconstruction from a sparse set of (e.g. six)
wearable IMUs provides a non-intrusive and economical approach to motion capture.
Without the ability to acquire position information directly from IMUs, recent
works took data-driven approaches that utilize large human motion datasets to
tackle this under-determined problem. Still, challenges remain such as temporal
consistency, drifting of global and joint motions, and diverse coverage of
motion types on various terrains. We propose a novel method to simultaneously
estimate full-body motion and generate plausible visited terrain from only six
IMU sensors in real-time. Our method incorporates 1. a conditional Transformer
decoder model giving consistent predictions by explicitly reasoning prediction
history, 2. a simple yet general learning target named "stationary body points"
(SBPs) which can be stably predicted by the Transformer model and utilized by
analytical routines to correct joint and global drifting, and 3. an algorithm
to generate regularized terrain height maps from noisy SBP predictions which
can in turn correct noisy global motion estimation. We evaluate our framework
extensively on synthesized and real IMU data, and with real-time live demos,
and show superior performance over strong baseline methods.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 16:24:52 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Sep 2022 22:45:58 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Dec 2022 19:29:18 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Jiang",
"Yifeng",
""
],
[
"Ye",
"Yuting",
""
],
[
"Gopinath",
"Deepak",
""
],
[
"Won",
"Jungdam",
""
],
[
"Winkler",
"Alexander W.",
""
],
[
"Liu",
"C. Karen",
""
]
] |
new_dataset
| 0.994684 |
2207.14636
|
Son T. Luu
|
Co Van Dinh, Son T. Luu and Anh Gia-Tuan Nguyen
|
Detecting Spam Reviews on Vietnamese E-commerce Websites
|
Published at The 14th Asian Conference on Intelligent Information and
Database Systems (ACIIDS 2022). The dataset is available at
https://github.com/sonlam1102/vispamdetection
| null |
10.1007/978-3-031-21743-2_48
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The reviews of customers play an essential role in online shopping. People
often refer to reviews or comments of previous customers to decide whether to
buy a new product. Catching up with this behavior, some people create untruthful
and illegitimate reviews to deceive customers about the quality of products.
These are called spam reviews, confusing consumers on online shopping platforms
and negatively affecting online shopping behaviors. We propose the dataset
called ViSpamReviews, which has a strict annotation procedure for detecting
spam reviews on e-commerce platforms. Our dataset consists of two tasks: the
binary classification task for detecting whether a review is spam or not and
the multi-class classification task for identifying the type of spam.
PhoBERT obtained the highest results on both tasks, with macro-averaged F1
scores of 86.89% and 72.17%, respectively.
|
[
{
"version": "v1",
"created": "Wed, 27 Jul 2022 10:37:14 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2022 04:02:18 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Van Dinh",
"Co",
""
],
[
"Luu",
"Son T.",
""
],
[
"Nguyen",
"Anh Gia-Tuan",
""
]
] |
new_dataset
| 0.999794 |
2208.14908
|
Chansup Byun
|
Chansup Byun, William Arcand, David Bestor, Bill Bergeron, Vijay
Gadepally, Michael Houle, Matthew Hubbell, Hayden Jananthan, Michael Jones,
Kurt Keville, Anna Klein, Peter Michaleas, Lauren Milechin, Guillermo
Morales, Julie Mullen, Andrew Prout, Albert Reuther, Antonio Rosa, Siddharth
Samsi, Charles Yee, and Jeremy Kepner
|
pPython for Parallel Python Programming
|
arXiv admin note: substantial text overlap with
arXiv:astro-ph/0606464
| null |
10.1109/HPEC55821.2022.9926365
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
pPython seeks to provide a parallel capability that achieves good speed-up
without sacrificing the ease of programming in Python by implementing
partitioned global array semantics (PGAS) on top of a simple file-based
messaging library (PythonMPI) in pure Python. The core data structure in
pPython is a distributed numerical array whose distribution onto multiple
processors is specified with a map construct. Communication operations between
distributed arrays are abstracted away from the user and pPython transparently
supports redistribution between any block-cyclic-overlapped distributions in up
to four dimensions. pPython follows a SPMD (single program multiple data) model
of computation. pPython runs on any combination of heterogeneous systems that
support Python, including Windows, Linux, and MacOS operating systems. In
addition to running transparently on a single node (e.g., a laptop), pPython
provides a scheduler interface, so that pPython can be executed in a massively
parallel computing environment. The initial implementation uses the Slurm
scheduler. Performance of pPython on the HPC Challenge benchmark suite
demonstrates both ease of programming and scalability.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 15:08:39 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Byun",
"Chansup",
""
],
[
"Arcand",
"William",
""
],
[
"Bestor",
"David",
""
],
[
"Bergeron",
"Bill",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Houle",
"Michael",
""
],
[
"Hubbell",
"Matthew",
""
],
[
"Jananthan",
"Hayden",
""
],
[
"Jones",
"Michael",
""
],
[
"Keville",
"Kurt",
""
],
[
"Klein",
"Anna",
""
],
[
"Michaleas",
"Peter",
""
],
[
"Milechin",
"Lauren",
""
],
[
"Morales",
"Guillermo",
""
],
[
"Mullen",
"Julie",
""
],
[
"Prout",
"Andrew",
""
],
[
"Reuther",
"Albert",
""
],
[
"Rosa",
"Antonio",
""
],
[
"Samsi",
"Siddharth",
""
],
[
"Yee",
"Charles",
""
],
[
"Kepner",
"Jeremy",
""
]
] |
new_dataset
| 0.999594 |
2209.11672
|
Josiah Lutton
|
Adam Platt, E. Josiah Lutton, Edward Offord, Till Bretschneider
|
MiCellAnnGELo: Annotate microscopy time series of complex cell surfaces
with 3D Virtual Reality
|
For associated code and sample data, see
https://github.com/CellDynamics/MiCellAnnGELo.git
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Summary: Advances in 3D live cell microscopy are enabling high-resolution
capture of previously unobserved processes. Unleashing the power of modern
machine learning methods to fully benefit from these technologies is, however,
frustrated by the difficulty of manually annotating 3D training data.
MiCellAnnGELo virtual reality software offers an immersive environment for
viewing and interacting with 4D microscopy data, including efficient tools for
annotation. We present tools for labelling cell surfaces with a wide range of
applications, including cell motility, endocytosis, and transmembrane
signalling. Availability and implementation: MiCellAnnGELo employs the cross
platform (Mac/Unix/Windows) Unity game engine and is available under the MIT
licence at https://github.com/CellDynamics/MiCellAnnGELo.git, together with
sample data and demonstration movies. MiCellAnnGELo can be run in desktop mode
on a 2D screen or in 3D using a standard VR headset with a compatible GPU.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2022 16:02:00 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2022 15:25:45 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Platt",
"Adam",
""
],
[
"Lutton",
"E. Josiah",
""
],
[
"Offord",
"Edward",
""
],
[
"Bretschneider",
"Till",
""
]
] |
new_dataset
| 0.999375 |
2211.05627
|
Alexander K\"uchler
|
Alexander K\"uchler and Christian Banse
|
Representing LLVM-IR in a Code Property Graph
| null |
Information Security (ISC) 2022
|
10.1007/978-3-031-22390-7_21
| null |
cs.SE cs.CR cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the past years, a number of static application security testing tools have
been proposed which make use of so-called code property graphs, a graph model
which keeps rich information about the source code while enabling its user to
write language-agnostic analyses. However, they suffer from several
shortcomings. They work mostly on source code and exclude the analysis of
third-party dependencies if they are only available as compiled binaries.
Furthermore, they are limited in their analysis to whether an individual
programming language is supported or not. While often support for
well-established languages such as C/C++ or Java is included, languages that
are still heavily evolving, such as Rust, are not considered because of the
constant changes in the language design. To overcome these limitations, we
extend an open source implementation of a code property graph to support
LLVM-IR which can be used as output by many compilers and binary lifters. In
this paper, we discuss how we address challenges that arise when mapping
concepts of an intermediate representation to a CPG. At the same time, we
optimize the resulting graph to be minimal and close to the representation of
equivalent source code. Our evaluation indicates that existing analyses can be
reused without modifications and that the performance requirements are
comparable to operating on source code. This makes the approach suitable for an
analysis of large-scale projects.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 09:37:30 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2022 07:00:31 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Küchler",
"Alexander",
""
],
[
"Banse",
"Christian",
""
]
] |
new_dataset
| 0.999226 |
2212.01098
|
Chen Wang
|
Chen Wang, Zhongcai Pei, Shuang Qiu, Zhiyong Tang
|
RGB-D-based Stair Detection using Deep Learning for Autonomous Stair
Climbing
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stairs are common building structures in urban environments, and stair
detection is an important part of environment perception for autonomous mobile
robots. Most existing algorithms have difficulty combining the visual
information from binocular sensors effectively and ensuring reliable detection
at night and in cases of extremely fuzzy visual cues. To solve these
problems, we propose a neural network architecture with RGB and depth map
inputs. Specifically, we design a selective module, which can make the network
learn the complementary relationship between the RGB map and the depth map and
effectively combine the information from the RGB map and the depth map in
different scenes. In addition, we design a line clustering algorithm for the
postprocessing of detection results, which can make full use of the detection
results to obtain the geometric stair parameters. Experiments on our dataset
show that our method achieves better accuracy and recall than existing
state-of-the-art deep learning methods, with improvements of 5.64% and 7.97%,
respectively, and our method also has an extremely fast detection speed. A
lightweight version can achieve 300+ frames per second at the same
resolution, which can meet the needs of most real-time detection scenes.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2022 11:22:52 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2022 12:57:54 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Wang",
"Chen",
""
],
[
"Pei",
"Zhongcai",
""
],
[
"Qiu",
"Shuang",
""
],
[
"Tang",
"Zhiyong",
""
]
] |
new_dataset
| 0.965795 |
2212.04111
|
Tianhao Xu
|
Zizhang Wu, Yuanzhu Gan, Xianzhi Li, Yunzhe Wu, Xiaoquan Wang, Tianhao
Xu, Fan Wang
|
Surround-view Fisheye BEV-Perception for Valet Parking: Dataset,
Baseline and Distortion-insensitive Multi-task Framework
|
12 pages, 11 figures
|
IEEE Transactions on Intelligent Vehicles 2022
|
10.1109/TIV.2022.3218594
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surround-view fisheye perception under valet parking scenes is fundamental
and crucial in autonomous driving. Environmental conditions in parking lots
perform differently from the common public datasets, such as imperfect light
and opacity, which substantially impact perception performance. Most existing
networks trained on public datasets may generalize poorly to these valet
parking scenes, which are further affected by fisheye distortion. In this
article, we introduce a new large-scale fisheye dataset called Fisheye Parking
Dataset(FPD) to promote the research in dealing with diverse real-world
surround-view parking cases. Notably, our compiled FPD exhibits excellent
characteristics for different surround-view perception tasks. In addition, we
also propose our real-time distortion-insensitive multi-task framework Fisheye
Perception Network (FPNet), which improves the surround-view fisheye BEV
perception by enhancing the fisheye distortion operation and multi-task
lightweight designs. Extensive experiments validate the effectiveness of our
approach and the dataset's exceptional generalizability.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2022 07:06:08 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Wu",
"Zizhang",
""
],
[
"Gan",
"Yuanzhu",
""
],
[
"Li",
"Xianzhi",
""
],
[
"Wu",
"Yunzhe",
""
],
[
"Wang",
"Xiaoquan",
""
],
[
"Xu",
"Tianhao",
""
],
[
"Wang",
"Fan",
""
]
] |
new_dataset
| 0.999569 |
2212.04116
|
Tianhao Xu
|
Zizhang Wu, Xinyuan Chen, Jizheng Wang, Xiaoquan Wang, Yuanzhu Gan,
Muqing Fang and Tianhao Xu
|
OCR-RTPS: An OCR-based real-time positioning system for the valet
parking
|
25 pages, 9 figures
|
Applied Intelligence 2023
| null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obtaining the position of ego-vehicle is a crucial prerequisite for automatic
control and path planning in the field of autonomous driving. Most existing
positioning systems rely on GPS, RTK, or wireless signals, which are arduous to
provide effective localization under weak signal conditions. This paper
proposes a real-time positioning system based on the detection of the parking
numbers, as they are unique positioning marks in the parking lot scene. The
system not only helps with positioning in open areas, but also runs
independently in isolated environments. Results on both public datasets and a
self-collected dataset show that the system outperforms others and is
applicable in practice. In addition, the code and dataset will be released
later.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2022 07:16:29 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Wu",
"Zizhang",
""
],
[
"Chen",
"Xinyuan",
""
],
[
"Wang",
"Jizheng",
""
],
[
"Wang",
"Xiaoquan",
""
],
[
"Gan",
"Yuanzhu",
""
],
[
"Fang",
"Muqing",
""
],
[
"Xu",
"Tianhao",
""
]
] |
new_dataset
| 0.995697 |
2212.04537
|
Jiaqi Ma
|
Jiaqi Ma, Xingjian Zhang, Hezheng Fan, Jin Huang, Tianyue Li, Ting Wei
Li, Yiwen Tu, Chenshu Zhu, Qiaozhu Mei
|
Graph Learning Indexer: A Contributor-Friendly and Metadata-Rich
Platform for Graph Learning Benchmarks
|
Oral Presentation at LOG 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Establishing open and general benchmarks has been a critical driving force
behind the success of modern machine learning techniques. As machine learning
is being applied to broader domains and tasks, there is a need to establish
richer and more diverse benchmarks to better reflect the reality of the
application scenarios. Graph learning is an emerging field of machine learning
that urgently needs more and better benchmarks. To accommodate the need, we
introduce Graph Learning Indexer (GLI), a benchmark curation platform for graph
learning. In comparison to existing graph learning benchmark libraries, GLI
highlights two novel design objectives. First, GLI is designed to incentivize
\emph{dataset contributors}. In particular, we incorporate various measures to
minimize the effort of contributing and maintaining a dataset, increase the
usability of the contributed dataset, as well as encourage attributions to
different contributors of the dataset. Second, GLI is designed to curate a
knowledge base, instead of a plain collection, of benchmark datasets. We use
multiple sources of meta information to augment the benchmark datasets with
\emph{rich characteristics}, so that they can be easily selected and used in
downstream research or development. The source code of GLI is available at
\url{https://github.com/Graph-Learning-Benchmarks/gli}.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2022 19:57:01 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Ma",
"Jiaqi",
""
],
[
"Zhang",
"Xingjian",
""
],
[
"Fan",
"Hezheng",
""
],
[
"Huang",
"Jin",
""
],
[
"Li",
"Tianyue",
""
],
[
"Li",
"Ting Wei",
""
],
[
"Tu",
"Yiwen",
""
],
[
"Zhu",
"Chenshu",
""
],
[
"Mei",
"Qiaozhu",
""
]
] |
new_dataset
| 0.994156 |
2212.04609
|
Federico Tartarini
|
Giovanni Betti, Federico Tartarini, Christine Nguyen, Stefano Schiavon
|
CBE Clima Tool: a free and open-source web application for climate
analysis tailored to sustainable building design
|
Submitted to SoftwareX
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Buildings that are designed specifically to respond to the local climate can
be more comfortable, energy-efficient, and with a lower environmental impact.
However, there are many social, cultural, and economic obstacles that might
prevent the wide adoption of designing climate-adapted buildings. One of the
said obstacles can be removed by enabling practitioners to easily access and
analyse local climate data. The CBE Clima Tool (Clima) is a free and
open-source web application that offers easy access to publicly available
weather files (in EPW format) specifically created for building energy
simulation and design. It provides a series of interactive visualization of the
variables therein contained and several derived ones. It is aimed at students,
educators, and practitioners in the architecture and engineering fields. Since
its launch, it has consistently recorded over 3,000 monthly unique users from
over 70 countries worldwide, in both professional and educational settings.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 00:13:20 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Betti",
"Giovanni",
""
],
[
"Tartarini",
"Federico",
""
],
[
"Nguyen",
"Christine",
""
],
[
"Schiavon",
"Stefano",
""
]
] |
new_dataset
| 0.997316 |
2212.04622
|
Yan Qin
|
Yan Qin, Anushiya Arunan, Chau Yuen
|
Digital Twin for Real-time Li-ion Battery State of Health Estimation
with Partially Discharged Cycling Data
|
This paper has been accepted for IEEE Transactions on Industrial
Informatics
| null | null | null |
cs.LG cs.AI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To meet the fairly high safety and reliability requirements in practice, the
state of health (SOH) estimation of Lithium-ion batteries (LIBs), which has a
close relationship with the degradation performance, has been extensively
studied with the widespread applications of various electronics. The
conventional SOH estimation approaches with digital twin are end-of-cycle
estimation that require the completion of a full charge/discharge cycle to
observe the maximum available capacity. However, under dynamic operating
conditions with partially discharged data, it is impossible to obtain accurate
real-time SOH estimates for LIBs. To bridge this research gap, we put forward
a digital twin framework to gain the capability of sensing the battery's SOH on
the fly, updating the physical battery model. The proposed digital twin
solution consists of three core components to enable real-time SOH estimation
without requiring a complete discharge. First, to handle the variable training
cycling data, the energy discrepancy-aware cycling synchronization is proposed
to align cycling data with guaranteeing the same data structure. Second, to
explore the temporal importance of different training sampling times, a
time-attention SOH estimation model is developed with data encoding to capture
the degradation behavior over cycles, excluding adverse influences of
unimportant samples. Finally, for online implementation, a similarity
analysis-based data reconstruction has been put forward to provide real-time
SOH estimation without requiring a full discharge cycle. Through a series of
results conducted on a widely used benchmark, the proposed method yields the
real-time SOH estimation with errors less than 1% for most sampling times in
ongoing cycles.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 01:30:10 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Qin",
"Yan",
""
],
[
"Arunan",
"Anushiya",
""
],
[
"Yuen",
"Chau",
""
]
] |
new_dataset
| 0.998005 |
2212.04625
|
Vedant Mundheda
|
Vedant Mundheda, Karan Mirakhor, Rahul K S, Harikumar Kandath,
Nagamanikandan Govindan
|
Predictive Barrier Lyapunov Function Based Control for Safe Trajectory
Tracking of an Aerial Manipulator
|
European Control Conference '23
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a novel controller framework that provides trajectory
tracking for an Aerial Manipulator (AM) while ensuring the safe operation of
the system under unknown bounded disturbances. The AM considered here is a
2-DOF (degrees-of-freedom) manipulator rigidly attached to a UAV. Our proposed
controller structure follows the conventional inner loop PID control for
attitude dynamics and an outer loop controller for tracking a reference
trajectory. The outer loop control is based on the Model Predictive Control
(MPC) with constraints derived using the Barrier Lyapunov Function (BLF) for
the safe operation of the AM. BLF-based constraints are proposed for two
objectives, viz. 1) To avoid the AM from colliding with static obstacles like a
rectangular wall, and 2) To maintain the end effector of the manipulator within
the desired workspace. The proposed BLF ensures that the above-mentioned
objectives are satisfied even in the presence of unknown bounded disturbances.
The capabilities of the proposed controller are demonstrated through
high-fidelity non-linear simulations with parameters derived from a real
laboratory scale AM. We compare the performance of our controller with other
state-of-the-art MPC controllers for AM.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 01:40:00 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Mundheda",
"Vedant",
""
],
[
"Mirakhor",
"Karan",
""
],
[
"S",
"Rahul K",
""
],
[
"Kandath",
"Harikumar",
""
],
[
"Govindan",
"Nagamanikandan",
""
]
] |
new_dataset
| 0.993283 |
2212.04654
|
Yitong Li
|
Ruqayah Alsayed Ebrahim, Shivanan Singh, Yitong Li, Wenying Ji
|
Discrete Event Simulation for Port Berth Maintenance Planning
| null | null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Industrial and commercial ports, which are among a country's three main
transportation hubs, require 24/7 operations to maintain the flow of exported
and imported goods.
Due to the aging and weather factors, berths require regular maintenance, such
as replacing old piles, timber fenders, marine ladders, rubber fenders, and
deck slabs. For efficient berth maintenance, strategies are highly desired to
minimize or eliminate any delays in operations during the maintenance. This
paper develops a discrete event simulation model using Simphony.NET for berth
maintenance processes in Doha Port, Kuwait. The model derives minimum
maintenance duration under limited resources and associated uncertainties. The
model can be used as a decision support tool to minimize interruption or delays
in the port maintenance operations.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 03:52:56 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Ebrahim",
"Ruqayah Alsayed",
""
],
[
"Singh",
"Shivanan",
""
],
[
"Li",
"Yitong",
""
],
[
"Ji",
"Wenying",
""
]
] |
new_dataset
| 0.991803 |
2212.04700
|
Zhimin Li
|
Jie Jiang, Zhimin Li, Jiangfeng Xiong, Rongwei Quan, Qinglin Lu, Wei
Liu
|
Tencent AVS: A Holistic Ads Video Dataset for Multi-modal Scene
Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Temporal video segmentation and classification have been advanced greatly by
public benchmarks in recent years. However, such research still mainly focuses
on human actions, failing to describe videos in a holistic view. In addition,
previous research tends to pay much attention to visual information yet ignores
the multi-modal nature of videos. To fill this gap, we construct the Tencent
`Ads Video Segmentation'~(TAVS) dataset in the ads domain to escalate
multi-modal video analysis to a new level. TAVS describes videos from three
independent perspectives as `presentation form', `place', and `style', and
contains rich multi-modal information such as video, audio, and text. TAVS is
organized hierarchically in semantic aspects for comprehensive temporal video
segmentation with three levels of categories for multi-label classification,
e.g., `place' - `working place' - `office'. Therefore, TAVS is distinguished
from previous temporal segmentation datasets due to its multi-modal
information, holistic view of categories, and hierarchical granularities. It
includes 12,000 videos, 82 classes, 33,900 segments, 121,100 shots, and 168,500
labels. Accompanied with TAVS, we also present a strong multi-modal video
segmentation baseline coupled with multi-label class prediction. Extensive
experiments are conducted to evaluate our proposed method as well as existing
representative methods to reveal key challenges of our dataset TAVS.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 07:26:20 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Jiang",
"Jie",
""
],
[
"Li",
"Zhimin",
""
],
[
"Xiong",
"Jiangfeng",
""
],
[
"Quan",
"Rongwei",
""
],
[
"Lu",
"Qinglin",
""
],
[
"Liu",
"Wei",
""
]
] |
new_dataset
| 0.999664 |
2212.04706
|
Sergei Nikolaev
|
Fabio Cacciatori and Sergei Nikolaev and Dmitrii Grigorev
|
The Platform for non-metallic pipes defects recognition. Design and
Implementation
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a prototype software and hardware platform to provide
support to field operators during the inspection of surface defects of
non-metallic pipes. Inspection is carried out by video filming defects created
on the same surface in real-time using a "smart" helmet device and other mobile
devices. The work focuses on the detection and recognition of the defects which
appears as colored iridescence of reflected light caused by the diffraction
effect arising from the presence of internal stresses in the inspected
material. The platform allows you to carry out preliminary analysis directly on
the device in offline mode, and, if a connection to the network is established,
the received data is transmitted to the server for post-processing to extract
information about possible defects that were not detected at the previous
stage. The paper presents a description of the stages of design, formal
description, and implementation details of the platform. It also provides
descriptions of the models used to recognize defects and examples of their
results.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 07:34:17 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Cacciatori",
"Fabio",
""
],
[
"Nikolaev",
"Sergei",
""
],
[
"Grigorev",
"Dmitrii",
""
]
] |
new_dataset
| 0.995683 |
2212.04764
|
Mingze Sun
|
Mingze Sun, Haoxiang Wang, Wei Yao, Jiawang Liu
|
AuE-IPA: An AU Engagement Based Infant Pain Assessment Method
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recent studies have found that pain in infancy has a significant impact on
infant development, including psychological problems, possible brain injury,
and pain sensitivity in adulthood. However, due to the lack of specialists and
the fact that infants are unable to express verbally their experience of pain,
it is difficult to assess infant pain. Most existing infant pain assessment
systems directly apply adult methods to infants ignoring the differences
between infant expressions and adult expressions. Meanwhile, as the study of
facial action coding system continues to advance, the use of action units (AUs)
opens up new possibilities for expression recognition and pain assessment. In
this paper, a novel AuE-IPA method is proposed for assessing infant pain by
leveraging different engagement levels of AUs. First, different engagement
levels of AUs in infant pain are revealed, by analyzing the class activation
map of an end-to-end pain assessment model. The intensities of top-engaged AUs
are then used in a regression model for achieving automatic infant pain
assessment. The model proposed is trained and experimented on YouTube
Immunization dataset, YouTube Blood Test dataset, and iCOPEVid dataset. The
experimental results show that our AuE-IPA method is more applicable to infants
and possesses stronger generalization ability than end-to-end assessment model
and the classic PSPI metric.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 10:41:22 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Sun",
"Mingze",
""
],
[
"Wang",
"Haoxiang",
""
],
[
"Yao",
"Wei",
""
],
[
"Liu",
"Jiawang",
""
]
] |
new_dataset
| 0.966938 |
2212.04786
|
Fernando Alonso-Fernandez
|
Otto Zell, Joel P{\aa}lsson, Kevin Hernandez-Diaz, Fernando
Alonso-Fernandez, Felix Nilsson
|
Image-Based Fire Detection in Industrial Environments with YOLOv4
|
Accepted for publication at ICPRAM
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fires have destructive power when they break out and affect their
surroundings on a devastatingly large scale. The best way to minimize their
damage is to detect the fire as quickly as possible before it has a chance to
grow. Accordingly, this work looks into the potential of AI to detect and
recognize fires and reduce detection time using object detection on an image
stream. Object detection has made giant leaps in speed and accuracy over the
last six years, making real-time detection feasible. To this end, we collected
and labeled appropriate data from several public sources, which have been used
to train and evaluate several models based on the popular YOLOv4 object
detector. Our focus, driven by a collaborating industrial partner, is to
implement our system in an industrial warehouse setting, which is characterized
by high ceilings. A drawback of traditional smoke detectors in this setup is
that the smoke has to rise to a sufficient height. The AI models brought
forward in this research managed to outperform these detectors by a significant
amount of time, providing precious anticipation that could help to minimize the
effects of fires further.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 11:32:36 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Zell",
"Otto",
""
],
[
"Pålsson",
"Joel",
""
],
[
"Hernandez-Diaz",
"Kevin",
""
],
[
"Alonso-Fernandez",
"Fernando",
""
],
[
"Nilsson",
"Felix",
""
]
] |
new_dataset
| 0.999506 |
2212.04794
|
Fernando Alonso-Fernandez
|
Jonathan Karlsson, Fredrik Strand, Josef Bigun, Fernando
Alonso-Fernandez, Kevin Hernandez-Diaz, Felix Nilsson
|
Visual Detection of Personal Protective Equipment and Safety Gear on
Industry Workers
|
Accepted for publication at ICPRAM
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Workplace injuries are common in today's society due to a lack of adequately
worn safety equipment. A system that only admits appropriately equipped
personnel can be created to improve working conditions. The goal is thus to
develop a system that will improve workers' safety using a camera that will
detect the usage of Personal Protective Equipment (PPE). To this end, we
collected and labeled appropriate data from several public sources, which have
been used to train and evaluate several models based on the popular YOLOv4
object detector. Our focus, driven by a collaborating industrial partner, is to
implement our system into an entry control point where workers must present
themselves to obtain access to a restricted area. Combined with facial identity
recognition, the system would ensure that only authorized people wearing
appropriate equipment are granted access. A novelty of this work is that we
increase the number of classes to five objects (hardhat, safety vest, safety
gloves, safety glasses, and hearing protection), whereas most existing works
only focus on one or two classes, usually hardhats or vests. The AI model
developed provides good detection accuracy at a distance of 3 and 5 meters in
the collaborative environment where we aim at operating (mAP of 99/89%,
respectively). The small size of some objects or the potential occlusion by
body parts have been identified as potential factors that are detrimental to
accuracy, which we have counteracted via data augmentation and cropping of the
body before applying PPE detection.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 11:50:03 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Karlsson",
"Jonathan",
""
],
[
"Strand",
"Fredrik",
""
],
[
"Bigun",
"Josef",
""
],
[
"Alonso-Fernandez",
"Fernando",
""
],
[
"Hernandez-Diaz",
"Kevin",
""
],
[
"Nilsson",
"Felix",
""
]
] |
new_dataset
| 0.958085 |
2212.04819
|
Kiana Ehsani
|
Matt Deitke, Rose Hendrix, Luca Weihs, Ali Farhadi, Kiana Ehsani,
Aniruddha Kembhavi
|
Phone2Proc: Bringing Robust Robots Into Our Chaotic World
|
https://allenai.org/project/phone2proc
| null | null | null |
cs.RO cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training embodied agents in simulation has become mainstream for the embodied
AI community. However, these agents often struggle when deployed in the
physical world due to their inability to generalize to real-world environments.
In this paper, we present Phone2Proc, a method that uses a 10-minute phone scan
and conditional procedural generation to create a distribution of training
scenes that are semantically similar to the target environment. The generated
scenes are conditioned on the wall layout and arrangement of large objects from
the scan, while also sampling lighting, clutter, surface textures, and
instances of smaller objects with randomized placement and materials.
Leveraging just a simple RGB camera, training with Phone2Proc shows massive
improvements from 34.7% to 70.7% success rate in sim-to-real ObjectNav
performance across a test suite of over 200 trials in diverse real-world
environments, including homes, offices, and RoboTHOR. Furthermore, Phone2Proc's
diverse distribution of generated scenes makes agents remarkably robust to
changes in the real world, such as human movement, object rearrangement,
lighting changes, or clutter.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2022 18:52:27 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Deitke",
"Matt",
""
],
[
"Hendrix",
"Rose",
""
],
[
"Weihs",
"Luca",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Ehsani",
"Kiana",
""
],
[
"Kembhavi",
"Aniruddha",
""
]
] |
new_dataset
| 0.997467 |
2212.04869
|
Kaixuan Lu
|
Kaixuan Lu and Xiao Huang
|
RCDT: Relational Remote Sensing Change Detection with Transformer
|
18 pages, 11 figures,
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning-based change detection methods have received wide attention,
thanks to their strong capability in obtaining rich features from images.
However, existing AI-based CD methods largely rely on three
functionality-enhancing modules, i.e., semantic enhancement, attention
mechanisms, and correspondence enhancement. The stacking of these modules leads
to great model complexity. To unify these three modules into a simple pipeline,
we introduce Relational Change Detection Transformer (RCDT), a novel and simple
framework for remote sensing change detection tasks. The proposed RCDT consists
of three major components, a weight-sharing Siamese Backbone to obtain
bi-temporal features, a Relational Cross Attention Module (RCAM) that
implements offset cross attention to obtain bi-temporal relation-aware
features, and a Features Constrain Module (FCM) to achieve the final refined
predictions with high-resolution constraints. Extensive experiments on four
different publicly available datasets suggest that our proposed RCDT exhibits
superior change detection performance compared with other competing methods.
The theoretical, methodological, and experimental knowledge of this study is
expected to benefit future change detection efforts that involve the cross
attention mechanism.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 14:21:42 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Lu",
"Kaixuan",
""
],
[
"Huang",
"Xiao",
""
]
] |
new_dataset
| 0.999156 |
2212.04873
|
Xinzhe Ni
|
Xinzhe Ni, Hao Wen, Yong Liu, Yatai Ji, Yujiu Yang
|
Multimodal Prototype-Enhanced Network for Few-Shot Action Recognition
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current methods for few-shot action recognition mainly fall into the metric
learning framework following ProtoNet. However, they either ignore the effect
of representative prototypes or fail to enhance the prototypes with multimodal
information adequately. In this work, we propose a novel Multimodal
Prototype-Enhanced Network (MORN) to use the semantic information of label
texts as multimodal information to enhance prototypes, including two modality
flows. A CLIP visual encoder is introduced in the visual flow, and visual
prototypes are computed by the Temporal-Relational CrossTransformer (TRX)
module. A frozen CLIP text encoder is introduced in the text flow, and a
semantic-enhanced module is used to enhance text features. After inflating,
text prototypes are obtained. The final multimodal prototypes are then computed
by a multimodal prototype-enhanced module. In addition, no evaluation metric
exists to assess the quality of prototypes. To the best of our knowledge, we
are the first to propose a prototype evaluation metric called Prototype
Similarity Difference (PRIDE), which is used to evaluate the performance of
prototypes in discriminating different categories. We conduct extensive
experiments on four popular datasets. MORN achieves state-of-the-art results on
HMDB51, UCF101, Kinetics and SSv2. MORN also performs well on PRIDE, and we
explore the correlation between PRIDE and accuracy.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 14:24:39 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Ni",
"Xinzhe",
""
],
[
"Wen",
"Hao",
""
],
[
"Liu",
"Yong",
""
],
[
"Ji",
"Yatai",
""
],
[
"Yang",
"Yujiu",
""
]
] |
new_dataset
| 0.998572 |
2212.04891
|
Xunzhu Tang
|
Shi Wang and Daniel Tang and Luchen Zhang and Huilin Li and Ding Han
|
HieNet: Bidirectional Hierarchy Framework for Automated ICD Coding
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
International Classification of Diseases (ICD) is a set of classification
codes for medical records. Automated ICD coding, which assigns unique
International Classification of Diseases codes with each medical record, is
widely used for its efficiency and its avoidance of error-prone manual coding. However,
there are challenges that remain such as heterogeneity, label unbalance, and
complex relationships between ICD codes. In this work, we propose a novel
Bidirectional Hierarchy Framework (HieNet) to address these challenges.
Specifically, a personalized PageRank routine is developed to capture the
correlations among codes, a bidirectional hierarchy passage encoder captures
the codes' hierarchical representations, and a progressive prediction method
is then proposed to narrow the semantic search space of prediction.
validate our method on two widely used datasets. Experimental results on two
authoritative public datasets demonstrate that our proposed method boosts
state-of-the-art performance by a large margin.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 14:51:12 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Wang",
"Shi",
""
],
[
"Tang",
"Daniel",
""
],
[
"Zhang",
"Luchen",
""
],
[
"Li",
"Huilin",
""
],
[
"Han",
"Ding",
""
]
] |
new_dataset
| 0.981319 |
2212.04972
|
Jialiang Lin
|
Jialiang Lin, Jiaxin Song, Zhangping Zhou, Yidong Chen, Xiaodong Shi
|
MOPRD: A multidisciplinary open peer review dataset
| null | null | null | null |
cs.DL cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open peer review is a growing trend in academic publications. Public access
to peer review data can benefit both the academic and publishing communities.
It also serves as a great support to studies on review comment generation and
further to the realization of automated scholarly paper review. However, most
of the existing peer review datasets do not provide data that cover the whole
peer review process. Apart from this, their data are not diversified enough as
they are mainly collected from the field of computer science. These two
drawbacks of the currently available peer review datasets need to be addressed
to unlock more opportunities for related studies. In response to this problem,
we construct MOPRD, a multidisciplinary open peer review dataset. This dataset
consists of paper metadata, multiple version manuscripts, review comments,
meta-reviews, author's rebuttal letters, and editorial decisions. Moreover, we
design a modular guided review comment generation method based on MOPRD.
Experiments show that our method delivers better performance indicated by both
automatic metrics and human evaluation. We also explore other potential
applications of MOPRD, including meta-review generation, editorial decision
prediction, author rebuttal generation, and scientometric analysis. MOPRD
provides strong support for further peer review-related studies and other
applications.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 16:35:14 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Lin",
"Jialiang",
""
],
[
"Song",
"Jiaxin",
""
],
[
"Zhou",
"Zhangping",
""
],
[
"Chen",
"Yidong",
""
],
[
"Shi",
"Xiaodong",
""
]
] |
new_dataset
| 0.99922 |
2212.04981
|
Nam Anh Dinh
|
Nam Anh Dinh, Haochen Wang, Greg Shakhnarovich, Rana Hanocka
|
LoopDraw: a Loop-Based Autoregressive Model for Shape Synthesis and
Editing
| null | null | null | null |
cs.GR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
There is no settled universal 3D representation for geometry with many
alternatives such as point clouds, meshes, implicit functions, and voxels to
name a few. In this work, we present a new, compelling alternative for
representing shapes using a sequence of cross-sectional closed loops. The loops
across all planes form an organizational hierarchy which we leverage for
autoregressive shape synthesis and editing. Loops are a non-local description
of the underlying shape, as simple loop manipulations (such as shifts) result
in significant structural changes to the geometry. This is in contrast to
manipulating local primitives such as points in a point cloud or a triangle in
a triangle mesh. We further demonstrate that loops are an intuitive and
natural primitive for analyzing and editing shapes, both computationally and
for users.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 16:41:15 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Dinh",
"Nam Anh",
""
],
[
"Wang",
"Haochen",
""
],
[
"Shakhnarovich",
"Greg",
""
],
[
"Hanocka",
"Rana",
""
]
] |
new_dataset
| 0.999672 |
2212.05011
|
Ian Huang
|
Ian Huang, Panos Achlioptas, Tianyi Zhang, Sergey Tulyakov, Minhyuk
Sung, Leonidas Guibas
|
LADIS: Language Disentanglement for 3D Shape Editing
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language interaction is a promising direction for democratizing 3D
shape design. However, existing methods for text-driven 3D shape editing face
challenges in producing decoupled, local edits to 3D shapes. We address this
problem by learning disentangled latent representations that ground language in
3D geometry. To this end, we propose a complementary tool set including a novel
network architecture, a disentanglement loss, and a new editing procedure.
Additionally, to measure edit locality, we define a new metric that we call
part-wise edit precision. We show that our method outperforms existing SOTA
methods by 20% in terms of edit locality, and up to 6.6% in terms of language
reference resolution accuracy. Our work suggests that by solely disentangling
language representations, downstream 3D shape editing can become more local to
relevant parts, even if the model was never given explicit part-based
supervision.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 17:54:28 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Huang",
"Ian",
""
],
[
"Achlioptas",
"Panos",
""
],
[
"Zhang",
"Tianyi",
""
],
[
"Tulyakov",
"Sergey",
""
],
[
"Sung",
"Minhyuk",
""
],
[
"Guibas",
"Leonidas",
""
]
] |
new_dataset
| 0.999591 |
2212.05030
|
Rui Pereira
|
Rui Pereira and Gordana Raki\'c
|
ICT4S2022 -- Demonstrations and Posters Track Proceedings
| null | null | null | null |
cs.CY cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Submissions accepted for The 8th International Conference on ICT for
Sustainability (ICT4S 2022), Demonstrations and Posters Track Proceedings,
Plovdiv, Bulgaria, Mon 13 - Fri 17 June 2022. Most of the submissions are
included in the arXiv proceedings while some demonstrations and posters are out
of arXiv publication scope as the ICT4S scope is broad and multidisciplinary.
Corresponding posters are available on the ICT4S2022 - Demonstrations and
Posters page.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 23:56:40 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Pereira",
"Rui",
""
],
[
"Rakić",
"Gordana",
""
]
] |
new_dataset
| 0.971505 |
2212.05033
|
Michiel Van Beirendonck
|
Lucas Bex, Furkan Turan, Michiel Van Beirendonck, Ingrid Verbauwhede
|
Mining CryptoNight-Haven on the Varium C1100 Blockchain Accelerator Card
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Cryptocurrency mining is an energy-intensive process that presents a prime
candidate for hardware acceleration. This work-in-progress presents the first
coprocessor design for the ASIC-resistant CryptoNight-Haven Proof of Work (PoW)
algorithm. We construct our hardware accelerator as a Xilinx Run Time (XRT) RTL
kernel targeting the Xilinx Varium C1100 Blockchain Accelerator Card. The
design employs deeply pipelined computation and High Bandwidth Memory (HBM) for
the underlying scratchpad data. We aim to compare our accelerator to existing
CPU and GPU miners to show increased throughput and energy efficiency of its
hash computations.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 18:36:05 GMT"
}
] | 2022-12-12T00:00:00 |
[
[
"Bex",
"Lucas",
""
],
[
"Turan",
"Furkan",
""
],
[
"Van Beirendonck",
"Michiel",
""
],
[
"Verbauwhede",
"Ingrid",
""
]
] |
new_dataset
| 0.99297 |
1801.09061
|
Nick Bassiliades
|
Nick Bassiliades
|
SWRL2SPIN: A tool for transforming SWRL rule bases in OWL ontologies to
object-oriented SPIN rules
| null |
International Journal on Semantic Web and Information Systems,
Vol. 16, Iss. 1, Art. 5, 2020
|
10.4018/IJSWIS.2020010105
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic Web Rule Language (SWRL) combines OWL (Web Ontology Language)
ontologies with Horn Logic rules of the Rule Markup Language (RuleML) family.
Being supported by ontology editors, rule engines and ontology reasoners, it
has become a very popular choice for developing rule-based applications on top
of ontologies. However, SWRL is probably not going to become a WWW Consortium
standard, prohibiting industrial acceptance. On the other hand, SPIN (SPARQL
Inferencing Notation) has become a de-facto industry standard to represent
SPARQL rules and constraints on Semantic Web models, building on the widespread
acceptance of SPARQL (SPARQL Protocol and RDF Query Language). In this paper,
we argue that the life of existing SWRL rule-based ontology applications can
be prolonged by converting them to SPIN. To this end, we have developed the
SWRL2SPIN tool in Prolog that transforms SWRL rules into SPIN rules,
considering the object-orientation of SPIN, i.e. linking rules to the
appropriate ontology classes and optimizing them, as derived by analysing the
rule conditions.
|
[
{
"version": "v1",
"created": "Sat, 27 Jan 2018 09:36:22 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Feb 2018 09:33:33 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Dec 2018 07:31:21 GMT"
},
{
"version": "v4",
"created": "Thu, 8 Dec 2022 08:03:53 GMT"
}
] | 2022-12-09T00:00:00 |
[
[
"Bassiliades",
"Nick",
""
]
] |
new_dataset
| 0.995208 |
2001.07626
|
Peter Hirsch
|
Peter Hirsch, Lisa Mais, Dagmar Kainmueller
|
PatchPerPix for Instance Segmentation
|
ECCV2020, code: https://github.com/Kainmueller-Lab/PatchPerPix
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel method for proposal free instance segmentation that can
handle sophisticated object shapes which span large parts of an image and form
dense object clusters with crossovers. Our method is based on predicting dense
local shape descriptors, which we assemble to form instances. All instances are
assembled simultaneously in one go. To our knowledge, our method is the first
non-iterative method that yields instances that are composed of learnt shape
patches. We evaluate our method on a diverse range of data domains, where it
defines the new state of the art on four benchmarks, namely the ISBI 2012 EM
segmentation benchmark, the BBBC010 C. elegans dataset, and 2d as well as 3d
fluorescence microscopy data of cell nuclei. We show furthermore that our
method also applies to 3d light microscopy data of Drosophila neurons, which
exhibit extreme cases of complex shape clusters.
|
[
{
"version": "v1",
"created": "Tue, 21 Jan 2020 16:06:51 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Mar 2020 10:41:28 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Aug 2020 15:45:11 GMT"
},
{
"version": "v4",
"created": "Thu, 8 Dec 2022 17:46:30 GMT"
}
] | 2022-12-09T00:00:00 |
[
[
"Hirsch",
"Peter",
""
],
[
"Mais",
"Lisa",
""
],
[
"Kainmueller",
"Dagmar",
""
]
] |
new_dataset
| 0.99861 |
2202.13335
|
Benjamin Bergougnoux
|
Benjamin Bergougnoux, Jan Dreier, Lars Jaffke
|
A logic-based algorithmic meta-theorem for mim-width
| null | null | null | null |
cs.DS cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a logic called distance neighborhood logic with acyclicity and
connectivity constraints ($\mathsf{A\&C~DN}$ for short) which extends
existential $\mathsf{MSO_1}$ with predicates for querying neighborhoods of
vertex sets and for verifying connectivity and acyclicity of vertex sets in
various powers of a graph. Building upon [Bergougnoux and Kant\'e, ESA 2019;
SIDMA 2021], we show that the model checking problem for every fixed
$\mathsf{A\&C~DN}$ formula is solvable in $n^{O(w)}$ time when the input graph
is given together with a branch decomposition of mim-width $w$. Nearly all
problems that are known to be solvable in polynomial time given a branch
decomposition of constant mim-width can be expressed in this framework. We add
several natural problems to this list, including problems asking for diverse
sets of solutions. Our model checking algorithm is efficient whenever the given
branch decomposition of the input graph has small index in terms of the
$d$-neighborhood equivalence [Bui-Xuan, Telle, and Vatshelle, TCS 2013]. We
therefore unify and extend known algorithms for tree-width, clique-width and
rank-width. Our algorithm has a single-exponential dependence on these three
width measures and asymptotically matches run times of the fastest known
algorithms for several problems. This results in algorithms with tight run
times under the Exponential Time Hypothesis ($\mathsf{ETH}$) for tree-width,
clique-width and rank-width; the above mentioned run time for mim-width is
nearly tight under the $\mathsf{ETH}$ for several problems as well. Our results
are also tight in terms of the expressive power of the logic: we show that
already slight extensions of our logic make the model checking problem
para-$\mathsf{NP}$-hard when parameterized by mim-width plus formula length.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 10:25:59 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2022 10:08:16 GMT"
}
] | 2022-12-09T00:00:00 |
[
[
"Bergougnoux",
"Benjamin",
""
],
[
"Dreier",
"Jan",
""
],
[
"Jaffke",
"Lars",
""
]
] |
new_dataset
| 0.997993 |
2205.01536
|
Darian Toma\v{s}evi\'c
|
Darian Toma\v{s}evi\'c, Peter Peer, Vitomir \v{S}truc
|
BiOcularGAN: Bimodal Synthesis and Annotation of Ocular Images
|
13 pages, 14 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current state-of-the-art segmentation techniques for ocular images are
critically dependent on large-scale annotated datasets, which are
labor-intensive to gather and often raise privacy concerns. In this paper, we
present a novel framework, called BiOcularGAN, capable of generating synthetic
large-scale datasets of photorealistic (visible light and near-infrared) ocular
images, together with corresponding segmentation labels to address these
issues. At its core, the framework relies on a novel Dual-Branch StyleGAN2
(DB-StyleGAN2) model that facilitates bimodal image generation, and a Semantic
Mask Generator (SMG) component that produces semantic annotations by exploiting
latent features of the DB-StyleGAN2 model. We evaluate BiOcularGAN through
extensive experiments across five diverse ocular datasets and analyze the
effects of bimodal data generation on image quality and the produced
annotations. Our experimental results show that BiOcularGAN is able to produce
high-quality matching bimodal images and annotations (with minimal manual
intervention) that can be used to train highly competitive (deep) segmentation
models (in a privacy-aware manner) that perform well across multiple real-world
datasets. The source code for the BiOcularGAN framework is publicly available
at https://github.com/dariant/BiOcularGAN.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 14:43:39 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Oct 2022 15:10:14 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Dec 2022 16:06:13 GMT"
}
] | 2022-12-09T00:00:00 |
[
[
"Tomašević",
"Darian",
""
],
[
"Peer",
"Peter",
""
],
[
"Štruc",
"Vitomir",
""
]
] |
new_dataset
| 0.998635 |
2205.12114
|
Ond\v{r}ej Leng\'al
|
Sab\'ina Gul\v{c}\'ikov\'a and Ond\v{r}ej Leng\'al
|
Register Set Automata (Technical Report)
| null | null | null | null |
cs.LO cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We present register set automata (RsAs), a register automaton model over data
words where registers can contain sets of data values and the following
operations are supported: adding values to registers, clearing registers, and
testing (non-)membership. We show that the emptiness problem for RsAs is
decidable and complete for the $F_\omega$ class. Moreover, we show that a large
class of register automata can be transformed into deterministic RsAs, which
can serve as a basis for (i) fast matching of a family of regular expressions
with back-references and (ii) language inclusion algorithm for a sub-class of
register automata. RsAs are incomparable in expressive power to other popular
automata models over data words, such as alternating register automata and
pebble automata.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 14:45:38 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Aug 2022 12:45:42 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Dec 2022 07:57:07 GMT"
}
] | 2022-12-09T00:00:00 |
[
[
"Gulčíková",
"Sabína",
""
],
[
"Lengál",
"Ondřej",
""
]
] |
new_dataset
| 0.999089 |
2209.09048
|
Franka Bause
|
Franka Bause and Nils M. Kriege
|
Gradual Weisfeiler-Leman: Slow and Steady Wins the Race
|
LoG 2022
| null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The classical Weisfeiler-Leman algorithm aka color refinement is fundamental
for graph learning with kernels and neural networks. Originally developed for
graph isomorphism testing, the algorithm iteratively refines vertex colors. On
many datasets, the stable coloring is reached after a few iterations and the
optimal number of iterations for machine learning tasks is typically even
lower. This suggests that the colors diverge too fast, defining a similarity
that is too coarse. We generalize the concept of color refinement and propose a
framework for gradual neighborhood refinement, which allows a slower
convergence to the stable coloring and thus provides a more fine-grained
refinement hierarchy and vertex similarity. We assign new colors by clustering
vertex neighborhoods, replacing the original injective color assignment
function. Our approach is used to derive new variants of existing graph kernels
and to approximate the graph edit distance via optimal assignments regarding
vertex similarity. We show that in both tasks, our method outperforms the
original color refinement with only a moderate increase in running time
advancing the state of the art.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 14:37:35 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2022 11:13:33 GMT"
}
] | 2022-12-09T00:00:00 |
[
[
"Bause",
"Franka",
""
],
[
"Kriege",
"Nils M.",
""
]
] |
new_dataset
| 0.991831 |