id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2206.15302
|
Florian Euchner
|
Florian Euchner, Marc Gauger, Sebastian D\"orner, Stephan ten Brink
|
A Distributed Massive MIMO Channel Sounder for "Big CSI Data"-driven
Machine Learning
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A distributed massive MIMO channel sounder for acquiring large CSI datasets,
dubbed DICHASUS, is presented. The measured data has potential applications in
the study of various machine learning algorithms for user localization, JCAS,
channel charting, enabling massive MIMO in FDD operation, and many others. The
proposed channel sounder architecture is distinct from similar previous designs
in that each individual single-antenna receiver is completely autonomous,
enabling arbitrary, spatially distributed antenna deployments, and offering
virtually unlimited scalability in the number of antennas. Optionally,
extracted channel coefficient vectors can be tagged with ground truth position
data, obtained either through a GNSS receiver (for outdoor operation) or
through various indoor positioning techniques.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 14:16:32 GMT"
}
] | 2022-07-01T00:00:00 |
[
[
"Euchner",
"Florian",
""
],
[
"Gauger",
"Marc",
""
],
[
"Dörner",
"Sebastian",
""
],
[
"Brink",
"Stephan ten",
""
]
] |
new_dataset
| 0.998824 |
2206.15304
|
Simon X. Yang
|
Zhiwei Yu, Kai Li, Yu Ji, Simon X. Yang
|
Designs, Motion Mechanism, Motion Coordination, and Communication of
Bionic Robot Fishes: A Survey
| null | null |
10.20517/ir.2022.10
| null |
cs.RO cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
In the last few years, there have been many new developments and significant
accomplishments in the research of bionic robot fishes. However, in terms of
swimming performance, existing bionic robot fishes lag far behind fish,
prompting researchers to constantly develop innovative designs of various
bionic robot fishes. In this paper, the latest designs of robot fishes are
presented in detail, distinguished by the propulsion mode. New robot fishes
mainly include soft robot fishes and rigid-soft coupled robot fishes. The
latest progress in the study of the swimming mechanism is analyzed on the basis
of summarizing the main swimming theories of fish. The current state-of-the-art
research in the new field of motion coordination and communication of multiple
robot fishes is summarized. The general research trend in robot fishes is to
utilize more efficient and robust methods to best mimic real fish while
exhibiting superior swimming performance. The current challenges and potential
future research directions are discussed. Various methods are needed to narrow
the gap in swimming performance between robot fishes and fish. This paper is a
first step to bring together roboticists and marine biologists interested in
learning state-of-the-art research on bionic robot fishes.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 14:17:20 GMT"
}
] | 2022-07-01T00:00:00 |
[
[
"Yu",
"Zhiwei",
""
],
[
"Li",
"Kai",
""
],
[
"Ji",
"Yu",
""
],
[
"Yang",
"Simon X.",
""
]
] |
new_dataset
| 0.994629 |
2005.14407
|
Alexander P. Kartun-Giles MSci PhD
|
Alexander P. Kartun-Giles, Konstantinos Koufos, Xiao Lu, and Dusit
Niyato
|
Two-Hop Connectivity to the Roadside in a VANET Under the Random
Connection Model
|
5 pages, 5 figures
| null | null | null |
cs.NI cond-mat.stat-mech cs.IT math.CO math.IT math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we compute the expected number of vehicles with at least one
two-hop path to a fixed roadside unit (RSU) in a multi-hop, one-dimensional
vehicular ad hoc network (VANET) where other cars can act as relays. The
pairwise channels experience Rayleigh fading in the random connection model,
and so exist with a probability given by a function of the mutual distance
between the cars, or between the cars and the RSU. We derive exact expressions
for the expected number of cars with a two-hop connection to the RSU when the
car density $\rho$ tends to zero and infinity, and determine its behaviour
using an infinite oscillating power series in $\rho$, which is accurate for all
regimes of traffic density. We also corroborate those findings with a realistic
scenario, using snapshots of actual traffic data. Finally, a normal
approximation is discussed for the probability mass function of the number of
cars with a two-hop connection to the RSU.
|
[
{
"version": "v1",
"created": "Fri, 29 May 2020 06:14:26 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2022 22:54:55 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Kartun-Giles",
"Alexander P.",
""
],
[
"Koufos",
"Konstantinos",
""
],
[
"Lu",
"Xiao",
""
],
[
"Niyato",
"Dusit",
""
]
] |
new_dataset
| 0.998314 |
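The abstract above derives exact expressions; the quantity itself is easy to estimate by Monte Carlo. A minimal sketch, assuming an illustrative Rayleigh-fading connection function exp(-beta * r^2) (the paper's exact function and parameters are not given here) and a Poisson car process on a segment with the RSU at the origin:

```python
import numpy as np

def count_two_hop(rho=2.0, length=10.0, beta=1.0, seed=0):
    """Count cars with a path of at most two hops to an RSU at the origin,
    under a random connection model where a link over distance r exists
    independently with probability exp(-beta * r**2) (illustrative choice)."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(rho * length)                  # Poisson number of cars
    x = rng.uniform(0.0, length, n)                # car positions on the road
    to_rsu = rng.random(n) < np.exp(-beta * x**2)  # direct links to the RSU
    d = np.abs(x[:, None] - x[None, :])            # pairwise car distances
    link = rng.random((n, n)) < np.exp(-beta * d**2)
    link = np.triu(link, 1)
    link = link | link.T                           # symmetric car-to-car links
    # a car reaches the RSU in two hops if some relay links to both
    two_hop = (link & to_rsu[None, :]).any(axis=1)
    return int((to_rsu | two_hop).sum())
```

Averaging `count_two_hop` over many seeds approximates the expectation studied in the paper for a given density rho.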
2008.01681
|
Shihua Huang
|
Shihua Huang, Cheng He, Ran Cheng
|
SoloGAN: Multi-domain Multimodal Unpaired Image-to-Image Translation via
a Single Generative Adversarial Network
|
14 pages, 15 figures
|
IEEE Transactions on Artificial Intelligence 2022
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite significant advances in image-to-image (I2I) translation with
generative adversarial networks (GANs), it remains challenging to effectively
translate an image to a set of diverse images in multiple target domains using
a single pair of generator and discriminator. Existing I2I translation methods
adopt multiple domain-specific content encoders for different domains, where
each domain-specific content encoder is trained with images from the same
domain only. Nevertheless, we argue that the content (domain-invariant)
features should be learned from images among all of the domains. Consequently,
each domain-specific content encoder of existing schemes fails to extract the
domain-invariant features efficiently. To address this issue, we present a
flexible and general SoloGAN model for efficient multimodal I2I translation
among multiple domains with unpaired data. In contrast to existing methods, the
SoloGAN algorithm uses a single projection discriminator with an additional
auxiliary classifier and shares the encoder and generator for all domains.
Consequently, the SoloGAN can be trained effectively with images from all
domains such that the domain-invariant content representation can be
efficiently extracted. Qualitative and quantitative results over a wide range
of datasets against several counterparts and variants of the SoloGAN
demonstrate the merits of the method, especially for challenging I2I
translation datasets, i.e., datasets involving extreme shape variations or the
need to keep complex backgrounds unchanged after translation. Furthermore, we
demonstrate the contribution of each component in SoloGAN by ablation studies.
|
[
{
"version": "v1",
"created": "Tue, 4 Aug 2020 16:31:15 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 05:58:07 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jun 2022 18:35:53 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Huang",
"Shihua",
""
],
[
"He",
"Cheng",
""
],
[
"Cheng",
"Ran",
""
]
] |
new_dataset
| 0.989407 |
2010.00170
|
Samiul Alam
|
Samiul Alam, Tahsin Reasat, Asif Shahriyar Sushmit, Sadi Mohammad
Siddiquee, Fuad Rahman, Mahady Hasan, Ahmed Imtiaz Humayun
|
A Large Multi-Target Dataset of Common Bengali Handwritten Graphemes
|
15 pages, 12 figures, 6 Tables, Submitted to CVPR-21
| null |
10.1007/978-3-030-86337-1_26
| null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Latin has historically led the state-of-the-art in handwritten optical
character recognition (OCR) research. Adapting existing systems from Latin to
alpha-syllabary languages is particularly challenging due to a sharp contrast
between their orthographies. The segmentation of graphical constituents
corresponding to characters becomes significantly harder due to a cursive writing
system and frequent use of diacritics in the alpha-syllabary family of
languages. We propose a labeling scheme based on graphemes (linguistic segments
of word formation) that makes segmentation inside alpha-syllabary words linear
and present the first dataset of Bengali handwritten graphemes that are
commonly used in an everyday context. The dataset contains 411k curated samples
of 1295 unique commonly used Bengali graphemes. Additionally, the test set
contains 900 uncommon Bengali graphemes for out-of-dictionary performance
evaluation. The dataset is open-sourced as a part of a public Handwritten
Grapheme Classification Challenge on Kaggle to benchmark vision algorithms for
multi-target grapheme classification. The unique graphemes present in this
dataset are selected based on commonality in the Google Bengali ASR corpus.
From competition proceedings, we see that deep-learning methods can generalize
to a large span of out-of-dictionary graphemes that are absent during
training. Dataset and starter codes at www.kaggle.com/c/bengaliai-cv19.
|
[
{
"version": "v1",
"created": "Thu, 1 Oct 2020 01:51:45 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Oct 2020 23:18:35 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Jan 2021 17:19:52 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Alam",
"Samiul",
""
],
[
"Reasat",
"Tahsin",
""
],
[
"Sushmit",
"Asif Shahriyar",
""
],
[
"Siddiquee",
"Sadi Mohammad",
""
],
[
"Rahman",
"Fuad",
""
],
[
"Hasan",
"Mahady",
""
],
[
"Humayun",
"Ahmed Imtiaz",
""
]
] |
new_dataset
| 0.999685 |
2010.14663
|
Daniel Gabric
|
Daniel Gabric
|
Mutual Borders and Overlaps
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A word is said to be \emph{bordered} if it contains a non-empty proper prefix
that is also a suffix. We can naturally extend this definition to pairs of
non-empty words. A pair of words $(u,v)$ is said to be \emph{mutually bordered}
if there exists a word that is a non-empty proper prefix of $u$ and suffix of
$v$, and there exists a word that is a non-empty proper suffix of $u$ and
prefix of $v$. In other words, $(u,v)$ is mutually bordered if $u$ overlaps $v$
and $v$ overlaps $u$. We give a recurrence for the number of mutually bordered
pairs of words. Furthermore, we show that, asymptotically, there are $c\cdot
k^{2n}$ mutually bordered pairs of words of length $n$ over a $k$-letter alphabet, where
$c$ is a constant. Finally, we show that the expected shortest overlap between
pairs of words is bounded above by a constant.
|
[
{
"version": "v1",
"created": "Tue, 27 Oct 2020 22:59:33 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2022 23:32:35 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Gabric",
"Daniel",
""
]
] |
new_dataset
| 0.966548 |
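The definition of a mutually bordered pair in the abstract above translates directly into code. A short illustrative check (not the paper's code; "proper" is taken relative to $u$ here):

```python
def is_mutually_bordered(u: str, v: str) -> bool:
    """(u, v) is mutually bordered iff some non-empty proper prefix of u
    is a suffix of v, AND some non-empty proper suffix of u is a prefix
    of v, i.e. u overlaps v and v overlaps u."""
    prefix_of_u_is_suffix_of_v = any(
        v.endswith(u[:k]) for k in range(1, len(u)))
    suffix_of_u_is_prefix_of_v = any(
        v.startswith(u[-k:]) for k in range(1, len(u)))
    return prefix_of_u_is_suffix_of_v and suffix_of_u_is_prefix_of_v
```

Enumerating all pairs of length-$n$ words over a small alphabet with this predicate is a cheap way to sanity-check the recurrence the paper gives.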
2101.08169
|
Paulo Andr\'e Lima de Castro
|
Paulo Andr\'e Lima de Castro
|
mt5se: An Open Source Framework for Building Autonomous Trading Robots
|
This paper replaces an old version of the framework, called mt5b3,
which is now deprecated
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Autonomous trading robots have been studied in the artificial intelligence field
for quite some time. Many AI techniques have been tested for building
autonomous agents able to trade financial assets. These initiatives include
traditional neural networks, fuzzy logic, and reinforcement learning, but also more
recent approaches like deep neural networks and deep reinforcement learning.
Many developers claim to be successful in creating robots with great
performance when simulating execution with historical price series, so-called
backtesting. However, when these robots are used in real markets, they
frequently present poor performance in terms of risk and return. In this paper, we
propose an open source framework (mt5se) that helps the development,
backtesting, live testing and real operation of autonomous traders. We built
and tested several traders using mt5se. The results indicate that it may help
the development of better traders. Furthermore, we discuss the simple
architecture that is used in many studies and propose an alternative multiagent
architecture. Such architecture separates two main concerns for portfolio
manager (PM): price prediction and capital allocation. More than achieving
high accuracy, a PM should increase profits when it is right and reduce losses
when it is wrong. Furthermore, price prediction is highly dependent on the asset's
nature and history, while capital allocation depends only on the analyst's
prediction performance and assets' correlation. Finally, we discuss some
promising technologies in the area.
|
[
{
"version": "v1",
"created": "Wed, 20 Jan 2021 15:01:02 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Dec 2021 12:19:21 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jun 2022 23:14:56 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"de Castro",
"Paulo André Lima",
""
]
] |
new_dataset
| 0.967881 |
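The separation the abstract argues for, prediction on one side, capital allocation on the other, can be illustrated with a hypothetical allocation step (the rule below, sizing by the analyst's edge 2p - 1, is an assumption for illustration, not mt5se's actual logic):

```python
def allocate(confidences, capital=1.0):
    """Hypothetical capital-allocation step, decoupled from prediction:
    given per-asset up-probabilities, weight each asset by its edge
    (2p - 1), long only, normalised to the available capital."""
    edges = [max(0.0, 2.0 * p - 1.0) for p in confidences]
    total = sum(edges)
    if total == 0.0:
        return [0.0] * len(edges)   # no edge anywhere: stay in cash
    return [capital * e / total for e in edges]
```

A predictor of any kind (neural network, fuzzy rules, RL policy) can feed this step, which is the point of the multiagent architecture.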
2203.15683
|
Takaaki Saeki
|
Takaaki Saeki, Kentaro Tachibana, Ryuichi Yamamoto
|
DRSpeech: Degradation-Robust Text-to-Speech Synthesis with Frame-Level
and Utterance-Level Acoustic Representation Learning
|
Accepted to INTERSPEECH 2022
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most text-to-speech (TTS) methods use high-quality speech corpora recorded in
a well-designed environment, incurring a high cost for data collection. To
solve this problem, existing noise-robust TTS methods are intended to use noisy
speech corpora as training data. However, they only address either
time-invariant or time-variant noises. We propose a degradation-robust TTS
method, which can be trained on speech corpora that contain both additive
noises and environmental distortions. It jointly represents the time-variant
additive noises with a frame-level encoder and the time-invariant environmental
distortions with an utterance-level encoder. We also propose a regularization
method to attain clean environmental embedding that is disentangled from the
utterance-dependent information such as linguistic contents and speaker
characteristics. Evaluation results show that our method achieved significantly
higher-quality synthetic speech than previous methods in the condition
including both additive noise and reverberation.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 15:41:52 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2022 13:38:02 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Saeki",
"Takaaki",
""
],
[
"Tachibana",
"Kentaro",
""
],
[
"Yamamoto",
"Ryuichi",
""
]
] |
new_dataset
| 0.994445 |
2205.15812
|
Iknoor Singh
|
Iknoor Singh, Yue Li, Melissa Thong, Carolina Scarton
|
GateNLP-UShef at SemEval-2022 Task 8: Entity-Enriched Siamese
Transformer for Multilingual News Article Similarity
|
Accepted at SemEval-2022 Task 8: Multilingual News Article Similarity
(co-located with NAACL 2022)
| null | null | null |
cs.CL cs.AI cs.CY cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the second-placed system on the leaderboard of
SemEval-2022 Task 8: Multilingual News Article Similarity. We propose an
entity-enriched Siamese Transformer which computes news article similarity
based on different sub-dimensions, such as the shared narrative, entities,
location and time of the event discussed in the news article. Our system
exploits a Siamese network architecture using a Transformer encoder to learn
document-level representations for the purpose of capturing the narrative
together with the auxiliary entity-based features extracted from the news
articles. The intuition behind using all these features together is to capture
the similarity between news articles at different granularity levels and to
assess the extent to which different news outlets write about "the same
events". Our experimental results and detailed ablation study demonstrate the
effectiveness and the validity of our proposed method.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2022 14:11:45 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2022 14:28:37 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Singh",
"Iknoor",
""
],
[
"Li",
"Yue",
""
],
[
"Thong",
"Melissa",
""
],
[
"Scarton",
"Carolina",
""
]
] |
new_dataset
| 0.977601 |
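The idea of blending a document-level embedding similarity with auxiliary entity features, as the abstract describes, can be sketched in miniature. The blend below (cosine plus entity Jaccard, weight `alpha`) is an illustrative assumption, not the paper's exact model:

```python
import math

def entity_enriched_similarity(vec_a, vec_b, ents_a, ents_b, alpha=0.8):
    """Blend a document-embedding cosine similarity with an
    entity-overlap (Jaccard) score; alpha is a hypothetical weight."""
    dot = sum(x * y for x, y in zip(vec_a, vec_b))
    na = math.sqrt(sum(x * x for x in vec_a))
    nb = math.sqrt(sum(y * y for y in vec_b))
    cosine = dot / (na * nb) if na and nb else 0.0
    union = set(ents_a) | set(ents_b)
    jaccard = len(set(ents_a) & set(ents_b)) / len(union) if union else 0.0
    return alpha * cosine + (1 - alpha) * jaccard
```

In the actual system the document vectors come from a Siamese Transformer encoder; here they are taken as given.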
2206.14053
|
Samiul Alam
|
Samiul Alam, Asif Sushmit, Zaowad Abdullah, Shahrin Nakkhatra, MD.
Nazmuddoha Ansary, Syed Mobassir Hossen, Sazia Morshed Mehnaz, Tahsin Reasat,
Ahmed Imtiaz Humayun
|
Bengali Common Voice Speech Dataset for Automatic Speech Recognition
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Bengali is one of the most spoken languages in the world with over 300
million speakers globally. Despite its popularity, research into the
development of Bengali speech recognition systems is hindered due to the lack
of diverse open-source datasets. As a way forward, we have crowdsourced the
Bengali Common Voice Speech Dataset, which is a sentence-level automatic speech
recognition corpus. Collected on the Mozilla Common Voice platform, the dataset
is part of an ongoing campaign that has led to the collection of over 400 hours
of data in 2 months and is growing rapidly. Our analysis shows that this
dataset has more speaker, phoneme, and environmental diversity compared to the
OpenSLR Bengali ASR dataset, the largest existing open-source Bengali speech
dataset. We present insights obtained from the dataset and discuss key
linguistic challenges that need to be addressed in future versions.
Additionally, we report the current performance of a few Automatic Speech
Recognition (ASR) algorithms and set a benchmark for future research.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 14:52:08 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2022 15:34:23 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Alam",
"Samiul",
""
],
[
"Sushmit",
"Asif",
""
],
[
"Abdullah",
"Zaowad",
""
],
[
"Nakkhatra",
"Shahrin",
""
],
[
"Ansary",
"MD. Nazmuddoha",
""
],
[
"Hossen",
"Syed Mobassir",
""
],
[
"Mehnaz",
"Sazia Morshed",
""
],
[
"Reasat",
"Tahsin",
""
],
[
"Humayun",
"Ahmed Imtiaz",
""
]
] |
new_dataset
| 0.999841 |
2206.14201
|
Minjia Shi
|
Xuan Wang, Minjia Shi
|
$\mathbb{Z}_p\mathbb{Z}_{p^2}$-additive cyclic codes: kernel and rank
|
arXiv admin note: text overlap with arXiv:2206.13810
| null | null | null |
cs.IT cs.CR math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
A code $C = \Phi(\mathcal{C})$ is called $\mathbb{Z}_p
\mathbb{Z}_{p^2}$-linear if it is the Gray image of the $\mathbb{Z}_p
\mathbb{Z}_{p^2}$-additive code $\mathcal{C}$. In this paper, the rank and the
dimension of the kernel of $\mathcal{C}$ are studied. Both of the codes
$\langle \Phi(\mathcal{C}) \rangle$ and $\ker(\Phi(\mathcal{C}))$ are proven to be
$\mathbb{Z}_p \mathbb{Z}_{p^2}$-additive cyclic codes, and their generator
polynomials are determined. Finally, accurate values of rank and the dimension
of the kernel of some classes of $\mathbb{Z}_p \mathbb{Z}_{p^2}$-additive
cyclic codes are considered.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 09:02:56 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Wang",
"Xuan",
""
],
[
"Shi",
"Minjia",
""
]
] |
new_dataset
| 0.996371 |
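For the $p = 2$ case of the setting above, the Gray map is the classical map from $\mathbb{Z}_4$ to $\mathbb{Z}_2^2$. Because the Gray image of a linear $\mathbb{Z}_4$ code need not be linear, the rank and kernel studied in the abstract are the natural measures of how far it is from linearity. A small sketch of the $p = 2$ map only:

```python
def gray_map_z4(codeword):
    """Classical Gray map from Z4 to Z2^2 (the p = 2 instance of the
    Zp/Zp^2 setting): 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10."""
    phi = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
    return tuple(bit for symbol in codeword for bit in phi[symbol % 4])
```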
2206.14263
|
Zanyar Zohourianshahzadi Ph.D. Candidate
|
Zanyar Zohourianshahzadi and Jugal Kalita
|
ZoDIAC: Zoneout Dropout Injection Attention Calculation
|
This work has been submitted to SN-AIRE journal and is currently
under review
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
  Recently, the use of self-attention has yielded state-of-the-art results in
vision-language tasks such as image captioning, in natural language
understanding and generation (NLU and NLG) tasks, and in computer vision tasks
such as image classification. This is because self-attention maps the internal
interactions among the elements of the input source and target sequences. Although
self-attention successfully calculates the attention values and maps the
relationships among the elements of the input source and target sequences, there
is no mechanism to control the intensity of attention. In the real world, when
communicating with each other face to face or vocally, we tend to express
different visual and linguistic context with various amounts of intensity. Some
words might carry (be spoken with) more stress and weight indicating the
importance of that word in the context of the whole sentence. Based on this
intuition, we propose Zoneout Dropout Injection Attention Calculation (ZoDIAC)
in which the intensities of attention values in the elements of the input
sequence are calculated with respect to the context of the elements of input
sequence. The results of our experiments reveal that employing ZoDIAC leads to
better performance in comparison with the self-attention module in the
Transformer model. The ultimate goal is to find out whether we can modify the
self-attention module in the Transformer model with a method that is
potentially extensible to other models that rely on self-attention at their
core. Our findings suggest that this particular goal deserves further attention
and investigation by the research community.
The code for ZoDIAC is available on www.github.com/zanyarz/zodiac .
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 19:36:11 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Zohourianshahzadi",
"Zanyar",
""
],
[
"Kalita",
"Jugal",
""
]
] |
new_dataset
| 0.997071 |
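For context, the baseline the abstract sets out to modify is plain scaled dot-product self-attention. The sketch below shows only that baseline (with identity projections for brevity); ZoDIAC's intensity-gating contribution is not reproduced here:

```python
import numpy as np

def self_attention(x):
    """Plain scaled dot-product self-attention over a sequence x of
    shape (n, d). Returns the attended outputs and the attention
    weights; each row of weights is a softmax distribution."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x, weights
```

ZoDIAC's proposal, as described, is to further modulate the intensity of these weights based on the context of the input elements.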
2206.14368
|
Zhimin Zeng
|
Zhimin Zeng, Xinyu Chen, Laurence T Yang, Jinhua Cui
|
IMRSim: A Disk Simulator for Interlaced Magnetic Recording Technology
|
7 pages, 7 figures
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emerging interlaced magnetic recording (IMR) technology achieves a higher
areal density for hard disk drive (HDD) over the conventional magnetic
recording (CMR) technology. IMR-based HDD interlaces top tracks and bottom
tracks, where each bottom track is overlapped with two neighboring top tracks.
Thus, top tracks can be updated without restraint, whereas bottom tracks can be
updated by the time-consuming read-modify-write (RMW) or other novel update
strategy. Therefore, the layout of the tracks between the IMR-based HDD and the
CMR-based HDD is quite different. Unfortunately, no related disk simulator or
product has been available to the public, which motivates us to develop an
open-source IMR disk simulator to provide a platform for further research. We
implement the first public IMR disk simulator, called IMRSim, as a block device
driver in the Linux kernel, simulating the interlaced tracks and implementing
many state-of-the-art data placement strategies. IMRSim is built on the actual
CMR-based HDD to precisely simulate the I/O performance of IMR drives. While
I/O operations in CMR-based HDD are easy to visualize, update strategy and
multi-stage allocation strategy in IMR are inherently dynamic. Therefore, we
further graphically demonstrate how IMRSim processes I/O requests in the
visualization mode. We release IMRSim as an open-source IMR disk simulation
tool and hope to attract more scholars into related research on IMR technology.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 02:21:41 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Zeng",
"Zhimin",
""
],
[
"Chen",
"Xinyu",
""
],
[
"Yang",
"Laurence T",
""
],
[
"Cui",
"Jinhua",
""
]
] |
new_dataset
| 0.992543 |
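The track layout described in the abstract, bottom tracks each overlapped by two neighbouring top tracks, implies an asymmetric update cost. A toy cost model (track numbering, cost units, and the RMW accounting below are illustrative assumptions, not IMRSim's internals):

```python
def update_cost(track_id, written, reads=1, writes=1):
    """Toy IMR update-cost model: odd ids are top tracks, even ids are
    bottom tracks, and bottom track t overlaps top tracks t-1 and t+1.
    A top-track update is one write; a bottom-track update must
    read-modify-write each already-written overlapping top track."""
    if track_id % 2 == 1:                    # top track: free to update
        return writes
    neighbours = [t for t in (track_id - 1, track_id + 1) if t in written]
    return writes + len(neighbours) * (reads + writes)
```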
2206.14388
|
Yangxi Zhou
|
Yangxi Zhou, Junping Du, Zhe Xue, Ang Li, Zeli Guan
|
Chinese Word Sense Embedding with SememeWSD and Synonym Set
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Word embedding is a fundamental natural language processing task which can
learn features of words. However, most word embedding methods assign only one
vector to a word, even though polysemous words have multiple senses. To address this
limitation, we propose SememeWSD Synonym (SWSDS) model to assign a different
vector to every sense of polysemous words with the help of word sense
disambiguation (WSD) and synonym set in OpenHowNet. We use the SememeWSD model,
an unsupervised word sense disambiguation model based on OpenHowNet, to do word
sense disambiguation and annotate the polysemous word with sense id. Then, we
obtain top 10 synonyms of the word sense from OpenHowNet and calculate the
average vector of the synonyms as the vector of the word sense. In experiments, we
evaluate the SWSDS model on semantic similarity calculation with Gensim's
wmdistance method. It achieves improved accuracy. We also examine the
SememeWSD model on different BERT models to find the more effective model.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 03:42:03 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Zhou",
"Yangxi",
""
],
[
"Du",
"Junping",
""
],
[
"Xue",
"Zhe",
""
],
[
"Li",
"Ang",
""
],
[
"Guan",
"Zeli",
""
]
] |
new_dataset
| 0.997814 |
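The core SWSDS step described in the abstract, averaging the embeddings of up to ten OpenHowNet synonyms to get a sense vector, is simple enough to sketch directly (embeddings are taken as given here):

```python
import numpy as np

def sense_vector(synonym_vectors):
    """Vector for one word sense: the mean of its synonym embeddings
    (at most the top 10, as in the abstract)."""
    return np.mean(np.stack(synonym_vectors[:10], axis=0), axis=0)
```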
2206.14465
|
Shiyuan Sun
|
Shiyuan Sun, Fang Yang, Jian Song and Rui Zhang
|
Intelligent Reflecting Surface for MIMO VLC: Joint Design of Surface
Configuration and Transceiver Signal Processing
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the capability of reconfiguring the wireless electromagnetic
environment, intelligent reflecting surface (IRS) is a new paradigm for
designing future wireless communication systems. In this paper, we consider
optical IRS for improving the performance of visible light communication (VLC)
under a multiple-input and multiple-output (MIMO) setting. Specifically, we
focus on the downlink communication of an indoor MIMO VLC system and aim to
minimize the mean square error (MSE) of demodulated signals at the receiver. To
this end, the MIMO channel gain of the IRS-aided VLC is first derived under the
point source assumption, based on which the MSE minimization problem is then
formulated subject to the emission power constraints. Next, we propose an
alternating optimization algorithm, which decomposes the original problem into
three subproblems, to iteratively optimize the IRS configuration, the precoding
and detection matrices for minimizing the MSE. Moreover, theoretical analysis
on the performance of the proposed algorithm in high and low signal-to-noise
ratio (SNR) regimes is provided, revealing that the joint optimization process
can be simplified in such special cases, and the algorithm's convergence
property and computational complexity are also discussed. Finally, numerical
results show that IRS-aided schemes significantly reduce the MSE as compared to
their counterparts without IRS, and the proposed algorithm outperforms other
baseline schemes.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 08:43:54 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Sun",
"Shiyuan",
""
],
[
"Yang",
"Fang",
""
],
[
"Song",
"Jian",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.966461 |
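One sub-step of the alternating scheme sketched in the abstract admits a closed form: for a fixed effective channel and precoder, the MSE-minimising linear detector is the Wiener filter. A minimal real-valued sketch (as fits intensity-modulated VLC; the IRS configuration step itself is not shown):

```python
import numpy as np

def mmse_detector(H, P, sigma2):
    """MSE-optimal linear detector for received signal y = H P s + n:
    G = (HP)^T (HP (HP)^T + sigma2 I)^{-1}."""
    HP = H @ P
    return HP.T @ np.linalg.inv(HP @ HP.T + sigma2 * np.eye(H.shape[0]))
```

As noise vanishes this approaches the channel inverse, matching the high-SNR simplification the abstract alludes to.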
2206.14538
|
Xiao Liu
|
Xiao Liu, Spyridon Thermos, Pedro Sanchez, Alison Q. O'Neil and
Sotirios A. Tsaftaris
|
vMFNet: Compositionality Meets Domain-generalised Segmentation
|
Accepted by MICCAI 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Training medical image segmentation models usually requires a large amount of
labeled data. By contrast, humans can quickly learn to accurately recognise
anatomy of interest from medical (e.g. MRI and CT) images with some limited
guidance. Such recognition ability can easily generalise to new images from
different clinical centres. This rapid and generalisable learning ability is
mostly due to the compositional structure of image patterns in the human brain,
which is less incorporated in medical image segmentation. In this paper, we
model the compositional components (i.e. patterns) of human anatomy as
learnable von-Mises-Fisher (vMF) kernels, which are robust to images collected
from different domains (e.g. clinical centres). The image features can be
decomposed to (or composed by) the components with the composing operations,
i.e. the vMF likelihoods. The vMF likelihoods tell how likely each anatomical
part is at each position of the image. Hence, the segmentation mask can be
predicted based on the vMF likelihoods. Moreover, with a reconstruction module,
unlabeled data can also be used to learn the vMF kernels and likelihoods by
recombining them to reconstruct the input image. Extensive experiments show
that the proposed vMFNet achieves improved generalisation performance on two
benchmarks, especially when annotations are limited. Code is publicly available
at: https://github.com/vios-s/vMFNet.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 11:31:23 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Liu",
"Xiao",
""
],
[
"Thermos",
"Spyridon",
""
],
[
"Sanchez",
"Pedro",
""
],
[
"O'Neil",
"Alison Q.",
""
],
[
"Tsaftaris",
"Sotirios A.",
""
]
] |
new_dataset
| 0.995217 |
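The vMF likelihoods the abstract describes, how likely each learned anatomical component is at each position, reduce to concentration-scaled cosine similarities between normalised features and kernels. A sketch under that reading (the concentration value and the softmax normalisation across kernels are illustrative assumptions):

```python
import numpy as np

def vmf_likelihoods(z, mu, kappa=20.0):
    """vMF compatibility scores: z are feature vectors (n, d), mu are
    kernels (k, d); each score is proportional to exp(kappa * mu . z)
    after L2 normalisation, normalised here so each position's scores
    sum to 1 across kernels."""
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    mu = mu / np.linalg.norm(mu, axis=-1, keepdims=True)
    s = kappa * (z @ mu.T)
    s = np.exp(s - s.max(axis=-1, keepdims=True))
    return s / s.sum(axis=-1, keepdims=True)
```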
2206.14550
|
Guan Shen
|
Guan Shen, Jieru Zhao, Quan Chen, Jingwen Leng, Chao Li, Minyi Guo
|
SALO: An Efficient Spatial Accelerator Enabling Hybrid Sparse Attention
Mechanisms for Long Sequences
|
Accepted by 59th DAC
| null | null | null |
cs.AR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The attention mechanisms of transformers effectively extract pertinent
information from the input sequence. However, the quadratic complexity of
self-attention w.r.t the sequence length incurs heavy computational and memory
burdens, especially for tasks with long sequences. Existing accelerators face
performance degradation in these tasks. To this end, we propose SALO to enable
hybrid sparse attention mechanisms for long sequences. SALO contains a data
scheduler to map hybrid sparse attention patterns onto hardware and a spatial
accelerator to perform the efficient attention computation. We show that SALO
achieves 17.66x and 89.33x speedup on average compared to GPU and CPU
implementations, respectively, on typical workloads, i.e., Longformer and ViL.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 12:01:19 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Shen",
"Guan",
""
],
[
"Zhao",
"Jieru",
""
],
[
"Chen",
"Quan",
""
],
[
"Leng",
"Jingwen",
""
],
[
"Li",
"Chao",
""
],
[
"Guo",
"Minyi",
""
]
] |
new_dataset
| 0.998057 |
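The hybrid sparse attention patterns SALO maps to hardware (the abstract cites Longformer and ViL workloads) combine local windows with a few global positions. A sketch of such a mask, with the window size and the single global token as illustrative choices:

```python
import numpy as np

def sliding_window_mask(n, w):
    """Boolean attention mask for n tokens: token i may attend to
    tokens within w positions (sliding window), and token 0 is kept
    global, as a simple example of a hybrid sparse pattern."""
    i = np.arange(n)
    mask = np.abs(i[:, None] - i[None, :]) <= w
    mask[:, 0] = True   # everyone attends to the global token
    mask[0, :] = True   # the global token attends to everyone
    return mask
```

Such a mask reduces the dense n x n score computation to O(n * w) useful entries, which is what a spatial accelerator can exploit.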
2206.14568
|
Ramesh Sah
|
Ramesh Kumar Sah, Michael McDonell, Patricia Pendry, Sara Parent,
Hassan Ghasemzadeh, Michael J Cleveland
|
ADARP: A Multi Modal Dataset for Stress and Alcohol Relapse
Quantification in Real Life Setting
| null | null | null | null |
cs.HC eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Stress detection and classification from wearable sensor data is an emerging
area of research with significant implications for individuals' physical and
mental health. In this work, we introduce a new dataset, ADARP, which contains
physiological data and self-report outcomes collected in real-world ambulatory
settings involving individuals diagnosed with alcohol use disorders. We
describe the user study, present details of the dataset, establish the
significant correlation between physiological data and self-reported outcomes,
demonstrate stress classification, and make our dataset public to facilitate
research.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 20:39:02 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Sah",
"Ramesh Kumar",
""
],
[
"McDonell",
"Michael",
""
],
[
"Pendry",
"Patricia",
""
],
[
"Parent",
"Sara",
""
],
[
"Ghasemzadeh",
"Hassan",
""
],
[
"Cleveland",
"Michael J",
""
]
] |
new_dataset
| 0.99977 |
2206.14606
|
Ludovic Court\`es
|
Ludovic Court\`es (Inria, France)
|
Building a Secure Software Supply Chain with GNU Guix
| null |
The Art, Science, and Engineering of Programming, 2023, Vol. 7,
Issue 1, Article 1
|
10.22152/programming-journal.org/2023/7/1
| null |
cs.SE cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The software supply chain is becoming a widespread analogy to designate the
series of steps taken to go from source code published by developers to
executables running on the users' computers. A security vulnerability in any of
these steps puts users at risk, and evidence shows that attacks on the supply
chain are becoming more common. The consequences of an attack on the software
supply chain can be tragic in a society that relies on many interconnected
software systems, and this has led research interest as well as governmental
incentives for supply chain security to rise.
GNU Guix is a software deployment tool and software distribution that
supports provenance tracking, reproducible builds, and reproducible software
environments. Unlike many software distributions, it consists exclusively of
source code: it provides a set of package definitions that describe how to
build code from source. Together, these properties set it apart from many
deployment tools that center on the distribution of binaries.
This paper focuses on one research question: how can Guix and similar systems
allow users to securely update their software? Guix source code is distributed
using the Git version control system; updating Guix-installed software packages
means, first, updating the local copy of the Guix source code. Prior work on
secure software updates focuses on systems very different from Guix -- systems
such as Debian, Fedora, or PyPI where updating consists in fetching metadata
about the latest binary artifacts available -- and is largely inapplicable in
the context of Guix. By contrast, the main threats for Guix are attacks on its
source code repository, which could lead users to run inauthentic code or to
downgrade their system. Deployment tools that more closely resemble Guix, from
Nix to Portage, either lack secure update mechanisms or suffer from
shortcomings.
Our main contribution is a model and tool to authenticate new Git revisions.
We further show how, building on Git semantics, we build protections against
downgrade attacks and related threats. We explain implementation choices. This
work was deployed in production two years ago, giving us insight into its
actual use at scale every day. The Git checkout authentication at its core is
applicable beyond the specific use case of Guix, and we think it could benefit
developer teams that use Git.
As attacks on the software supply chain appear, security research is now
looking at every link of the supply chain. Secure updates are one important
aspect of the supply chain, but this paper also looks at the broader context:
how Guix models and implements the supply chain, from upstream source code to
binaries running on computers. While much recent work focuses on attestation --
certifying each link of the supply chain -- Guix takes a more radical approach:
enabling independent verification of each step, building on reproducible
builds, "bootstrappable" builds, and provenance tracking. The big picture shows
how Guix can be used as the foundation of secure software supply chains.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 08:53:21 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Courtès",
"Ludovic",
"",
"Inria, France"
]
] |
new_dataset
| 0.995069 |
2206.14619
|
Ninghan Chen
|
Ninghan Chen, Xihui Chen, Jun Pang
|
A Multilingual Dataset of COVID-19 Vaccination Attitudes on Twitter
| null | null | null | null |
cs.CL cs.CY cs.LG cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Vaccine hesitancy is considered as one main cause of the stagnant uptake
ratio of COVID-19 vaccines in Europe and the US where vaccines are sufficiently
supplied. Fast and accurate grasp of public attitudes toward vaccination is
critical to address vaccine hesitancy, and social media platforms have proved
to be an effective source of public opinions. In this paper, we describe the
collection and release of a dataset of tweets related to COVID-19 vaccines.
This dataset consists of the IDs of 2,198,090 tweets collected from Western
Europe, 17,934 of which are annotated with the originators' vaccination
stances. Our annotation will facilitate using and developing data-driven models
to extract vaccination attitudes from social media posts and thus further
confirm the power of social media in public health surveillance. To lay the
groundwork for future research, we not only perform statistical analysis and
visualisation of our dataset, but also evaluate and compare the performance of
established text-based benchmarks in vaccination stance extraction. We
demonstrate one potential use of our data in practice in tracking the temporal
changes of public COVID-19 vaccination attitudes.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 13:44:48 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Chen",
"Ninghan",
""
],
[
"Chen",
"Xihui",
""
],
[
"Pang",
"Jun",
""
]
] |
new_dataset
| 0.999714 |
2206.14709
|
Ahmed Mazari
|
Florent Bonnet, Jocelyn Ahmed Mazari, Thibaut Munzer, Pierre Yser,
Patrick Gallinari
|
An extensible Benchmarking Graph-Mesh dataset for studying Steady-State
Incompressible Navier-Stokes Equations
|
ICLR 2022 Workshop on Geometrical and Topological Representation
Learning
|
ICLR 2022 Workshop on Geometrical and Topological Representation
Learning
| null | null |
cs.LG cs.CV cs.NA math.NA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent progress in \emph{Geometric Deep Learning} (GDL) has shown its
potential to provide powerful data-driven models. This gives momentum to
explore new methods for learning physical systems governed by \emph{Partial
Differential Equations} (PDEs) from Graph-Mesh data. However, despite the
efforts and recent achievements, several research directions remain unexplored
and progress is still far from satisfying the physical requirements of
real-world phenomena. One of the major impediments is the absence of
benchmarking datasets and common physics evaluation protocols. In this paper,
we propose a 2-D graph-mesh dataset to study the airflow over airfoils at high
Reynolds regime (from $10^6$ and beyond). We also introduce metrics on the
stress forces over the airfoil in order to evaluate GDL models on important
physical quantities. Moreover, we provide extensive GDL baselines.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 15:18:30 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Bonnet",
"Florent",
""
],
[
"Mazari",
"Jocelyn Ahmed",
""
],
[
"Munzer",
"Thibaut",
""
],
[
"Yser",
"Pierre",
""
],
[
"Gallinari",
"Patrick",
""
]
] |
new_dataset
| 0.963152 |
2206.14723
|
Javier Nistal
|
Javier Nistal, Cyran Aouameur, Ithan Velarde, and Stefan Lattner
|
DrumGAN VST: A Plugin for Drum Sound Analysis/Synthesis With
Autoencoding Generative Adversarial Networks
|
7 pages, 2 figures, 3 tables, ICML2022 Machine Learning for Audio
Synthesis (MLAS) Workshop, for sound examples visit
https://cslmusicteam.sony.fr/drumgan-vst/
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In contemporary popular music production, drum sound design is commonly
performed by cumbersome browsing and processing of pre-recorded samples in
sound libraries. One can also use specialized synthesis hardware, typically
controlled through low-level, musically meaningless parameters. Today, the
field of Deep Learning offers methods to control the synthesis process via
learned high-level features and allows generating a wide variety of sounds. In
this paper, we present DrumGAN VST, a plugin for synthesizing drum sounds using
a Generative Adversarial Network. DrumGAN VST operates on 44.1 kHz sample-rate
audio, offers independent and continuous instrument class controls, and
features an encoding neural network that maps sounds into the GAN's latent
space, enabling resynthesis and manipulation of pre-existing drum sounds. We
provide numerous sound examples and a demo of the proposed VST plugin.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 15:44:19 GMT"
}
] | 2022-06-30T00:00:00 |
[
[
"Nistal",
"Javier",
""
],
[
"Aouameur",
"Cyran",
""
],
[
"Velarde",
"Ithan",
""
],
[
"Lattner",
"Stefan",
""
]
] |
new_dataset
| 0.998663 |
1801.00471
|
Rose Bohrer
|
Rose Bohrer and Karl Crary
|
TWAM: A Certifying Abstract Machine for Logic Programs
|
41 pages, under submission to ACM Transactions on Computational Logic
| null | null | null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Type-preserving (or typed) compilation uses typing derivations to certify
correctness properties of compilation. We have designed and implemented a
type-preserving compiler for a simply-typed dialect of Prolog we call T-Prolog.
The crux of our approach is a new certifying abstract machine which we call the
Typed Warren Abstract Machine (TWAM). The TWAM has a dependent type system
strong enough to specify the semantics of a logic program in the logical
framework LF. We present a soundness metatheorem which constitutes a partial
correctness guarantee: well-typed programs implement the logic program
specified by their type. This metatheorem justifies our design and
implementation of a certifying compiler from T-Prolog to TWAM.
|
[
{
"version": "v1",
"created": "Mon, 1 Jan 2018 16:46:28 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Bohrer",
"Rose",
""
],
[
"Crary",
"Karl",
""
]
] |
new_dataset
| 0.997929 |
2008.06812
|
Yuan Feng
|
Yuan Feng and Mingsheng Ying
|
Quantum Hoare logic with classical variables
|
ACM Transactions on Quantum Computing, to appear
|
ACM Transactions on Quantum Computing 2, 4 (2021),1-43
|
10.1145/3456877
| null |
cs.LO quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hoare logic provides a syntax-oriented method to reason about program
correctness and has been proven effective in the verification of classical and
probabilistic programs. Existing proposals for quantum Hoare logic either lack
completeness or support only quantum variables, thus limiting their capability
in practical use. In this paper, we propose a quantum Hoare logic for a simple
while language which involves both classical and quantum variables. Its
soundness and relative completeness are proven for both partial and total
correctness of quantum programs written in the language. Remarkably, with novel
definitions of classical-quantum states and corresponding assertions, the logic
system is quite simple and similar to the traditional Hoare logic for classical
programs. Furthermore, to simplify reasoning in real applications, auxiliary
proof rules are provided which support standard logical operation in the
classical part of assertions, and of super-operator application in the quantum
part. Finally, a series of practical quantum algorithms, in particular the
whole algorithm of Shor's factorisation, are formally verified to show the
effectiveness of the logic.
|
[
{
"version": "v1",
"created": "Sat, 15 Aug 2020 23:56:18 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Apr 2021 07:15:59 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Feng",
"Yuan",
""
],
[
"Ying",
"Mingsheng",
""
]
] |
new_dataset
| 0.999521 |
2105.05089
|
Lorenzo Natale
|
Lorenzo Natale and Giorgio Cannata
|
Tactile Sensing
| null |
Humanoid Robotics: A Reference, Springer, 2017
|
10.1007/978-94-007-7194-9_110-1
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research on tactile sensing has been progressing at a constant pace. In
robotics, tactile sensing is typically studied in the context of object
grasping and manipulation. In this domain, the development of robust,
multi-modal, tactile sensors for robotic hands has supported the study of novel
algorithms for in-hand object manipulation, material classification and object
perception. In the field of humanoid robotics, research has focused on solving
the challenges that allow developing systems of tactile sensors that can cover
large areas of the robot body, and can integrate different types of transducers
to measure pressure at various frequency bands, acceleration and temperature.
The availability of such systems has extended the application of tactile
sensing to whole-body control, autonomous calibration, self-perception and
human-robot interaction. The goal of this Chapter is to provide an overview of
the technologies for tactile sensing, with particular emphasis on the systems
that have been deployed on humanoid robots. We describe the skills that have
been implemented with the adoption of these technologies and discuss the main
challenges that remain to be addressed.
|
[
{
"version": "v1",
"created": "Fri, 7 May 2021 20:44:09 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Natale",
"Lorenzo",
""
],
[
"Cannata",
"Giorgio",
""
]
] |
new_dataset
| 0.996098 |
2106.15211
|
Michele Colledanchise
|
Michele Colledanchise, Giuseppe Cicala, Daniele E. Domenichelli,
Lorenzo Natale, Armando Tacchella
|
A Toolchain to Design, Execute, and Monitor Robots Behaviors
|
arXiv admin note: text overlap with arXiv:2106.12474
|
Robust and Reliable Autonomy in the Wild (R2AW) IJCAI 2021
Workshop
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a toolchain to design, execute, and verify robot
behaviors. The toolchain follows the guidelines defined by the EU H2020 project
RobMoSys and encodes the robot deliberation as a Behavior Tree (BT), a directed
tree where the internal nodes model behavior composition and leaf nodes model
action or measurement operations. Such leaf nodes take the form of a statechart
(SC), which runs in separate threads, whose states perform basic arithmetic
operations and send commands to the robot. The toolchain provides the ability
to define a runtime monitor for a given system specification that warns the
user whenever a given specification is violated.
We validated the toolchain in a simulated experiment that we made
reproducible in an OS-virtualization environment.
|
[
{
"version": "v1",
"created": "Tue, 29 Jun 2021 09:53:10 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Colledanchise",
"Michele",
""
],
[
"Cicala",
"Giuseppe",
""
],
[
"Domenichelli",
"Daniele E.",
""
],
[
"Natale",
"Lorenzo",
""
],
[
"Tacchella",
"Armando",
""
]
] |
new_dataset
| 0.9868 |
2107.08829
|
Rafael Rafailov
|
Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, Chelsea Finn
|
Visual Adversarial Imitation Learning using Variational Models
| null | null | null | null |
cs.LG cs.AI cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Reward function specification, which requires considerable human effort and
iteration, remains a major impediment for learning behaviors through deep
reinforcement learning. In contrast, providing visual demonstrations of desired
behaviors often presents an easier and more natural way to teach agents. We
consider a setting where an agent is provided a fixed dataset of visual
demonstrations illustrating how to perform a task, and must learn to solve the
task using the provided demonstrations and unsupervised environment
interactions. This setting presents a number of challenges including
representation learning for visual observations, sample complexity due to high
dimensional spaces, and learning instability due to the lack of a fixed reward
or learning signal. Towards addressing these challenges, we develop a
variational model-based adversarial imitation learning (V-MAIL) algorithm. The
model-based approach provides a strong signal for representation learning,
enables sample efficiency, and improves the stability of adversarial training
by enabling on-policy learning. Through experiments involving several
vision-based locomotion and manipulation tasks, we find that V-MAIL learns
successful visuomotor policies in a sample-efficient manner, has better
stability compared to prior work, and also achieves higher asymptotic
performance. We further find that by transferring the learned models, V-MAIL
can learn new tasks from visual demonstrations without any additional
environment interactions. All results including videos can be found online at
\url{https://sites.google.com/view/variational-mail}.
|
[
{
"version": "v1",
"created": "Fri, 16 Jul 2021 00:15:18 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jun 2022 19:35:34 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Rafailov",
"Rafael",
""
],
[
"Yu",
"Tianhe",
""
],
[
"Rajeswaran",
"Aravind",
""
],
[
"Finn",
"Chelsea",
""
]
] |
new_dataset
| 0.99239 |
2108.06096
|
Maxime Jakubowski
|
Bart Bogaerts, Maxime Jakubowski, Jan Van den Bussche
|
SHACL: A Description Logic in Disguise
|
Presented at LPNRM conference 2022
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
SHACL is a W3C-proposed language for expressing structural constraints on RDF
graphs. In recent years, SHACL's popularity has risen quickly. This rise in
popularity comes with questions related to its place in the semantic web,
particularly about its relation to OWL (the de facto standard for expressing
ontological information on the web) and description logics (which form the
formal foundations of OWL). We answer these questions by arguing that SHACL is
in fact a description logic. On the one hand, our answer is surprisingly
simple, some might even say obvious. But, on the other hand, our answer is also
controversial. By resolving this issue once and for all, we establish the field
of description logics as the solid formal foundations of SHACL.
|
[
{
"version": "v1",
"created": "Fri, 13 Aug 2021 07:12:47 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Oct 2021 05:59:29 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jun 2022 07:38:04 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Bogaerts",
"Bart",
""
],
[
"Jakubowski",
"Maxime",
""
],
[
"Bussche",
"Jan Van den",
""
]
] |
new_dataset
| 0.999786 |
2109.12065
|
James Wang
|
Tongan Cai, Haomiao Ni, Mingli Yu, Xiaolei Huang, Kelvin Wong, John
Volpi, James Z. Wang, Stephen T.C. Wong
|
DeepStroke: An Efficient Stroke Screening Framework for Emergency Rooms
with Multimodal Adversarial Deep Learning
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In an emergency room (ER) setting, stroke triage or screening is a common
challenge. A quick CT is usually done instead of MRI due to MRI's slow
throughput and high cost. Clinical tests are commonly referred to during the
process, but the misdiagnosis rate remains high. We propose a novel multimodal
deep learning framework, DeepStroke, to achieve computer-aided stroke presence
assessment by recognizing patterns of minor facial muscles incoordination and
speech inability for patients with suspicion of stroke in an acute setting. Our
proposed DeepStroke takes one-minute facial video data and audio data readily
available during stroke triage for local facial paralysis detection and global
speech disorder analysis. Transfer learning was adopted to reduce
face-attribute biases and improve generalizability. We leverage a multi-modal
lateral fusion to combine the low- and high-level features and provide mutual
regularization for joint training. Novel adversarial training is introduced to
obtain identity-free and stroke-discriminative features. Experiments on our
video-audio dataset with actual ER patients show that DeepStroke outperforms
state-of-the-art models and achieves better performance than both a triage team
and ER doctors, attaining a 10.94% higher sensitivity and maintaining 7.37%
higher accuracy than traditional stroke triage when specificity is aligned.
Meanwhile, each assessment can be completed in less than six minutes,
demonstrating the framework's great potential for clinical translation.
|
[
{
"version": "v1",
"created": "Fri, 24 Sep 2021 16:46:13 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jun 2022 18:02:49 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Cai",
"Tongan",
""
],
[
"Ni",
"Haomiao",
""
],
[
"Yu",
"Mingli",
""
],
[
"Huang",
"Xiaolei",
""
],
[
"Wong",
"Kelvin",
""
],
[
"Volpi",
"John",
""
],
[
"Wang",
"James Z.",
""
],
[
"Wong",
"Stephen T. C.",
""
]
] |
new_dataset
| 0.986842 |
2112.02265
|
Huy Nghiem
|
Huy Nghiem, Fred Morstatter
|
"Stop Asian Hate!" : Refining Detection of Anti-Asian Hate Speech During
the COVID-19 Pandemic
| null | null | null | null |
cs.CL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Content warning: This work displays examples of explicit and/or strongly
offensive language. Fueled by a surge of anti-Asian xenophobia and prejudice
during the COVID-19 pandemic, many have taken to social media to express these
negative sentiments. Identifying these posts is crucial for moderation and
understanding the nature of hate in online spaces. In this paper, we create and
annotate a corpus of tweets to explore anti-Asian hate speech with a finer
level of granularity. Our analysis reveals that this emergent form of hate
speech often eludes established approaches. To address this challenge, we
develop a model and an accompanying efficient training regimen that incorporates
agreement between annotators. Our approach produces up to 8.8% improvement in
macro F1 scores over a strong established baseline, indicating its
effectiveness even in settings where consensus among annotators is low. We
demonstrate that we are able to identify hate speech that is systematically
missed by established hate speech detectors.
|
[
{
"version": "v1",
"created": "Sat, 4 Dec 2021 06:55:19 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2022 06:58:32 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Nghiem",
"Huy",
""
],
[
"Morstatter",
"Fred",
""
]
] |
new_dataset
| 0.997686 |
2201.09415
|
Min Qiu
|
Min Qiu and Jinhong Yuan
|
Sub-Block Rearranged Staircase Codes
|
16 pages, 7 figures, 2 tables, accepted by IEEE Transactions on
Communications
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new family of spatially coupled product codes, called sub-block
rearranged staircase (SR-staircase) codes. Each code block of SR-staircase
codes is obtained by encoding rearranged preceding code blocks and new
information block, where the rearrangement involves sub-blocks decomposition
and transposition. The proposed codes can be constructed to have each code
block size of $1/q$ to that of the conventional staircase codes while having
the same rate and component codes, for any positive integer $q$. In this
regard, we can use strong algebraic component codes to construct SR-staircase
codes with a similar or the same code block size and rate as staircase codes
with weak component codes. We characterize the decoding threshold of the
proposed codes under iterative bounded distance decoding (iBDD) by using
density evolution. We also derive the conditions under which they achieve a
better decoding threshold than that of staircase codes. Further, we investigate
the error floor performance by analyzing the contributing error patterns and
their multiplicities. Both theoretical and simulation results show that the
designed SR-staircase codes outperform staircase codes in terms of waterfall
and error floor while the performance can be further improved by using a large
coupling width.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 01:52:14 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 06:39:02 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jun 2022 04:17:12 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Qiu",
"Min",
""
],
[
"Yuan",
"Jinhong",
""
]
] |
new_dataset
| 0.999308 |
2201.13063
|
Maria Korosteleva
|
Maria Korosteleva, Sung-Hee Lee
|
NeuralTailor: Reconstructing Sewing Pattern Structures from 3D Point
Clouds of Garments
|
Updated to the version accepted to SIGGRAPH 2022 (Journal Track)
| null |
10.1145/3528223.3530179
| null |
cs.CV cs.AI cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
The fields of SocialVR, performance capture, and virtual try-on are often
faced with a need to faithfully reproduce real garments in the virtual world.
One critical task is the disentanglement of the intrinsic garment shape from
deformations due to fabric properties, physical forces, and contact with the
body. We propose to use a garment sewing pattern, a realistic and compact
garment descriptor, to facilitate the intrinsic garment shape estimation.
Another major challenge is a high diversity of shapes and designs in the
domain. The most common approach for Deep Learning on 3D garments is to build
specialized models for individual garments or garment types. We argue that
building a unified model for various garment designs has the benefit of
generalization to novel garment types, hence covering a larger design domain
than individual models would. We introduce NeuralTailor, a novel architecture
based on point-level attention for set regression with variable cardinality,
and apply it to the task of reconstructing 2D garment sewing patterns from the
3D point cloud garment models. Our experiments show that NeuralTailor
successfully reconstructs sewing patterns and generalizes to garment types with
pattern topologies unseen during training.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 08:33:49 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2022 03:15:55 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Korosteleva",
"Maria",
""
],
[
"Lee",
"Sung-Hee",
""
]
] |
new_dataset
| 0.980798 |
2202.04365
|
Theo Ladune
|
Th\'eo Ladune, Pierrick Philippe
|
AIVC: Artificial Intelligence based Video Codec
| null |
ICIP 2022 (IEEE International Conference on Image Processing), Oct
2022, Bordeaux, France
| null | null |
cs.NE eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces AIVC, an end-to-end neural video codec. It is based on
two conditional autoencoders MNet and CNet, for motion compensation and coding.
AIVC learns to compress videos using any coding configuration through a single
end-to-end rate-distortion optimization. Furthermore, it offers performance
competitive with the recent video coder HEVC under several established test
conditions. A comprehensive ablation study is performed to evaluate the
benefits of the different modules composing AIVC. The implementation is made
available at https://orange-opensource.github.io/AIVC/.
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 10:03:12 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Feb 2022 08:28:07 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jun 2022 09:37:26 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Ladune",
"Théo",
""
],
[
"Philippe",
"Pierrick",
""
]
] |
new_dataset
| 0.996647 |
2206.13517
|
Ali Madani
|
Erik Nijkamp, Jeffrey Ruffolo, Eli N. Weinstein, Nikhil Naik, Ali
Madani
|
ProGen2: Exploring the Boundaries of Protein Language Models
| null | null | null | null |
cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Attention-based models trained on protein sequences have demonstrated
incredible success at classification and generation tasks relevant for
artificial intelligence-driven protein design. However, we lack a sufficient
understanding of how very large-scale models and data play a role in effective
protein model development. We introduce a suite of protein language models,
named ProGen2, that are scaled up to 6.4B parameters and trained on different
sequence datasets drawn from over a billion proteins from genomic, metagenomic,
and immune repertoire databases. ProGen2 models show state-of-the-art
performance in capturing the distribution of observed evolutionary sequences,
generating novel viable sequences, and predicting protein fitness without
additional finetuning. As large model sizes and raw numbers of protein
sequences continue to become more widely accessible, our results suggest that a
growing emphasis needs to be placed on the data distribution provided to a
protein sequence model. We release the ProGen2 models and code at
https://github.com/salesforce/progen.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 17:55:02 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Nijkamp",
"Erik",
""
],
[
"Ruffolo",
"Jeffrey",
""
],
[
"Weinstein",
"Eli N.",
""
],
[
"Naik",
"Nikhil",
""
],
[
"Madani",
"Ali",
""
]
] |
new_dataset
| 0.968871 |
2206.13611
|
Vivek Jayaram
|
Ishan Chatterjee, Maruchi Kim, Vivek Jayaram, Shyamnath Gollakota, Ira
Kemelmacher-Shlizerman, Shwetak Patel, Steven M. Seitz
|
ClearBuds: Wireless Binaural Earbuds for Learning-Based Speech
Enhancement
|
12 pages, Published in Mobisys 2022
| null |
10.1145/3498361.3538933
| null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present ClearBuds, the first hardware and software system that utilizes a
neural network to enhance speech streamed from two wireless earbuds. Real-time
speech enhancement for wireless earbuds requires high-quality sound separation
and background cancellation, operating in real-time and on a mobile phone.
ClearBuds bridges state-of-the-art deep learning for blind audio source
separation and in-ear mobile systems by making two key technical contributions:
1) a new wireless earbud design capable of operating as a synchronized,
binaural microphone array, and 2) a lightweight dual-channel speech enhancement
neural network that runs on a mobile device. Our neural network has a novel
cascaded architecture that combines a time-domain convolutional neural network
with a spectrogram-based frequency masking neural network to reduce the
artifacts in the audio output. Results show that our wireless earbuds achieve a
synchronization error less than 64 microseconds and our network has a runtime
of 21.4 milliseconds on an accompanying mobile phone. In-the-wild evaluation
with eight users in previously unseen indoor and outdoor multipath scenarios
demonstrates that our neural network generalizes to learn both spatial and
acoustic cues to perform noise suppression and background speech removal. In a
user-study with 37 participants who spent over 15.4 hours rating 1041 audio
samples collected in-the-wild, our system achieves improved mean opinion score
and background noise suppression.
Project page with demos: https://clearbuds.cs.washington.edu
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 20:09:25 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Chatterjee",
"Ishan",
""
],
[
"Kim",
"Maruchi",
""
],
[
"Jayaram",
"Vivek",
""
],
[
"Gollakota",
"Shyamnath",
""
],
[
"Kemelmacher-Shlizerman",
"Ira",
""
],
[
"Patel",
"Shwetak",
""
],
[
"Seitz",
"Steven M.",
""
]
] |
new_dataset
| 0.999228 |
2206.13676
|
Xiaomin Li
|
Xiaomin Li, Anne Hee Hiong Ngu, Vangelis Metsis
|
TTS-CGAN: A Transformer Time-Series Conditional GAN for Biosignal Data
Augmentation
|
under review
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Signal measurement appearing in the form of time series is one of the most
common types of data used in medical machine learning applications. Such
datasets are often small in size, expensive to collect and annotate, and might
involve privacy issues, which hinders our ability to train large,
state-of-the-art deep learning models for biomedical applications. For
time-series data, the suite of data augmentation strategies we can use to
expand the size of the dataset is limited by the need to maintain the basic
properties of the signal. Generative Adversarial Networks (GANs) can be
utilized as another data augmentation tool. In this paper, we present TTS-CGAN,
a transformer-based conditional GAN model that can be trained on existing
multi-class datasets and generate class-specific synthetic time-series
sequences of arbitrary length. We elaborate on the model architecture and
design strategies. Synthetic sequences generated by our model are
indistinguishable from real ones, and can be used to complement or replace real
signals of the same type, thus achieving the goal of data augmentation. To
evaluate the quality of the generated data, we modify the wavelet coherence
metric to be able to compare the similarity between two sets of signals, and
also conduct a case study where a mix of synthetic and real data are used to
train a deep learning model for sequence classification. Together with other
visualization techniques and qualitative evaluation approaches, we demonstrate
that TTS-CGAN generated synthetic data are similar to real data, and that our
model performs better than the other state-of-the-art GAN models built for
time-series data generation.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 01:01:34 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Li",
"Xiaomin",
""
],
[
"Ngu",
"Anne Hee Hiong",
""
],
[
"Metsis",
"Vangelis",
""
]
] |
new_dataset
| 0.997679 |
2206.13723
|
Xiwei Liu
|
Linlong Xu and Xiwei Liu
|
Prescribed-Time Synchronization of Multiweighted and Directed Complex
Networks
|
18 pages, 3 figures
| null | null | null |
cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this note, we study the prescribed-time (PT) synchronization of
multiweighted and directed complex networks (MWDCNs) via pinning control.
Unlike finite-time and fixed-time synchronization, the time for synchronization
can be preset as needed, which is independent of initial values and parameters
like coupling strength. First and foremost, we reveal the essence of PT
stability by improper integral, L'Hospital rule and Taylor expansion theory.
Many controllers established previously for PT stability can be included in our
new model. Then, we apply this new result to MWDCNs as an application. The
synchronization error at the prescribed time is discussed carefully, so PT
synchronization can be reached. The network topology can be directed and
disconnected, which means that the outer coupling matrices (OCMs) can be
asymmetric and not connected. The relationships between nodes are allowed to be
cooperative or competitive, so elements in OCMs and inner coupling matrices
(ICMs) can be positive or negative. We use a variable-reordering technique to
combine the ICMs and OCMs into sum matrices, which build a bridge between
multiweighted and single-weighted networks. Finally,
simulations are presented to illustrate the effectiveness of our theory.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 03:18:45 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Xu",
"Linlong",
""
],
[
"Liu",
"Xiwei",
""
]
] |
new_dataset
| 0.981723 |
2206.13742
|
Burak \"Ozturan
|
Burak Ozturan
|
The COVID-19 Pandemic on the Turkish Twittersphere
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
With the increase in the time spent at home, social media platforms' role has
become an integral part of the public discussion in the COVID-19 period.
Individuals use social media platforms to express their emotions, interact, and
engage in public debate. Therefore, it is essential to analyze social media
platforms for those wanting to understand public opinion during the pandemic.
This thesis is the first study that examines the Turkish Twitter-sphere to
understand the change in public opinion during the COVID-19 outbreak. For that
purpose, starting from 12 February 2020 (one month before the first announced
coronavirus cases in Turkey), 4.3 million Turkish tweets with a broad range of
keywords are collected until June 2020 to investigate the public opinion change
on different topics and to examine the actors leading to that change. The scope
of the analysis includes not only health-related discussion but also a
broader range of themes such as politics, economy, and disinformation. This
study also collects 4.15 million Turkish tweets with keywords of vaccine
("a\c{s}{\i}" in Turkish) from 4 April 2020 until 17 March 2021 to unpack the
health of the information ecosystem. Preliminary results suggest that (i)
religion is the prominent phenomenon in Turkish people's perception of the
pandemic, (ii) and the Turkish Twitter-sphere is highly vulnerable to
mis/disinformation operations, and (iii) several communities with divergent
interests exist in the vaccine network.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 03:53:43 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Ozturan",
"Burak",
""
]
] |
new_dataset
| 0.997081 |
2206.13747
|
Amifa Raj
|
Amifa Raj and Michael D. Ekstrand
|
Fire Dragon and Unicorn Princess; Gender Stereotypes and Children's
Products in Search Engine Responses
|
SIGIR ecom'22: ACM SIGIR Workshop on eCommerce
| null | null | null |
cs.IR cs.CY cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Search engines in e-commerce settings allow users to search, browse, and
select items from a wide range of products available online including
children's items. Children's products such as toys, books, and learning
materials often have stereotype-based gender associations. Both academic
research and public campaigns are working to promote stereotype-free childhood
development. However, to date, e-commerce search engines have not received as
much attention as physical stores, product design, or marketing as a potential
channel of gender stereotypes. To fill this gap, in this paper, we study the
manifestations of gender stereotypes in e-commerce sites when responding to
queries related to children's products by exploring query suggestions and
search results. We have three primary contributions. First, we provide an
aggregated list of children's products with associated gender stereotypes from
the existing body of research. Second, we provide preliminary methods for
identifying and quantifying gender stereotypes in systems' responses. Third, to
show the importance of attending to this problem, we identify the existence of
gender stereotypes in query suggestions and search results across multiple
e-commerce sites.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 04:08:06 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Raj",
"Amifa",
""
],
[
"Ekstrand",
"Michael D.",
""
]
] |
new_dataset
| 0.999675 |
2206.13752
|
Min Qiu
|
Min Qiu and Jinhong Yuan
|
Sub-Block Rearranged Staircase Codes for Optical Transport Networks
|
6 pages, 3 figures, 1 table, accepted by the 2022 IEEE International
Symposium on Information Theory (ISIT). arXiv admin note: substantial text
overlap with arXiv:2201.09415
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new family of spatially coupled product codes, called sub-block
rearranged staircase (SR-staircase) codes. Each SR-staircase code block is
constructed by encoding rearranged preceding code blocks and new information
blocks, where the rearrangement involves sub-blocks decomposition and
transposition. The proposed codes can be constructed to have each code block
size of $1/q$ to that of the conventional staircase codes while having the same
rate and component codes, for any positive integer $q$. In this regard, we can
use strong algebraic component codes to construct SR-staircase codes with a
similar or the same code block size and rate as staircase codes with weak
component codes. Moreover, both waterfall and error floor performance can be
further improved by using a large coupling width. The superior performance of
the proposed codes is demonstrated through density evolution and error floor
analysis as well as simulation.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 04:38:18 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Qiu",
"Min",
""
],
[
"Yuan",
"Jinhong",
""
]
] |
new_dataset
| 0.999629 |
2206.13765
|
Sebastian Siebertz
|
Jan Dreier, Nikolas M\"ahlmann, Sebastian Siebertz, Szymon Toru\'nczyk
|
Indiscernibles and Wideness in Monadically Stable and Monadically NIP
Classes
| null | null | null | null |
cs.LO cs.DM math.CO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
An indiscernible sequence $(\bar a_i)_{1\leq i\leq n}$ in a structure is an
ordered sequence of tuples of elements which is very homogeneous in the sense
that any two finite subsequences of the same length satisfy the same
first-order formulas. We provide new characterizations of monadically stable
and monadically NIP classes of structures in terms of indiscernible sequences
by showing that they impose a strong structure on their neighborhoods. In
particular, we show that every formula~$\phi(x,\bar y)$, where $x$ is a single
free variable, has alternation rank at most $2$ over every sufficiently long
indiscernible sequence in a monadically NIP class. We provide a second new
characterization of monadically stable classes of graphs in terms of a new
notion called flip-wideness. Flip-wideness generalizes the notion of uniform
quasi-wideness, which characterizes nowhere dense classes and had a key impact
on the combinatorial and algorithmic treatment of nowhere dense classes. All
our proofs are constructive and yield efficient algorithms.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 05:27:52 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Dreier",
"Jan",
""
],
[
"Mählmann",
"Nikolas",
""
],
[
"Siebertz",
"Sebastian",
""
],
[
"Toruńczyk",
"Szymon",
""
]
] |
new_dataset
| 0.989017 |
2206.13772
|
Yuan Feng
|
Yuan Feng and Sanjiang Li
|
Abstract interpretation, Hoare logic, and incorrectness logic for
quantum programs
|
26 pages
| null | null | null |
cs.LO quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Abstract interpretation, Hoare logic, and incorrectness (or reverse Hoare)
logic are powerful techniques for static analysis of computer programs. All of
them have been successfully extended to the quantum setting, but largely
developed in parallel. In this paper, we examine the relationship between these
techniques in the context of verifying quantum while-programs, where the
abstract domain and the set of assertions for quantum states are
well-structured. In particular, we show that any complete quantum abstract
interpretation induces a quantum Hoare logic and a quantum incorrectness logic,
both of which are sound and relatively complete. Unlike the logics proposed in
the literature, the induced logic systems work in a forward manner, making them
more useful in certain applications. Conversely, any sound and relatively
complete quantum Hoare logic or quantum incorrectness logic induces a complete
quantum abstract interpretation. As an application, we are able to show the
non-existence of any sound and relatively complete quantum Hoare logic or
incorrectness logic if tuples of local subspaces are taken as assertions.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 05:49:55 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Feng",
"Yuan",
""
],
[
"Li",
"Sanjiang",
""
]
] |
new_dataset
| 0.996984 |
2206.13861
|
Dharanidhar Dang
|
Dharanidhar Dang, Bill Lin, Debashis Sahoo
|
LiteCON: An All-Photonic Neuromorphic Accelerator for Energy-efficient
Deep Learning (Preprint)
|
24 pages, 17 figures, to appear in ACM Transactions on Architecture &
Code Optimization (TACO). arXiv admin note: substantial text overlap with
arXiv:2102.10140
| null | null | null |
cs.ET cs.AR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning is highly pervasive in today's data-intensive era. In
particular, convolutional neural networks (CNNs) are being widely adopted in a
variety of fields for superior accuracy. However, computing deep CNNs on
traditional CPUs and GPUs brings several performance and energy pitfalls.
Several novel approaches based on ASIC, FPGA, and resistive-memory devices have
been recently demonstrated with promising results. Most of them target only the
inference (testing) phase of deep learning. There have been very limited
attempts to design a full-fledged deep learning accelerator capable of both
training and inference. It is due to the highly compute and memory-intensive
nature of the training phase. In this paper, we propose LiteCON, a novel analog
photonics CNN accelerator. LiteCON uses silicon microdisk-based convolution,
memristor-based memory, and dense-wavelength-division-multiplexing for
energy-efficient and ultrafast deep learning. We evaluate LiteCON using a
commercial CAD framework (IPKISS) on deep learning benchmark models including
LeNet and VGG-Net. Compared to the state-of-the-art, LiteCON improves the CNN
throughput, energy efficiency, and computational efficiency by up to 32x, 37x,
and 5x respectively with trivial accuracy degradation.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 09:56:05 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Dang",
"Dharanidhar",
""
],
[
"Lin",
"Bill",
""
],
[
"Sahoo",
"Debashis",
""
]
] |
new_dataset
| 0.9878 |
2206.13969
|
Hao Yang
|
Hao Yang, Yanyan Zhao, Jianwei Liu, Yang Wu and Bing Qin
|
MACSA: A Multimodal Aspect-Category Sentiment Analysis Dataset with
Multimodal Fine-grained Aligned Annotations
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal fine-grained sentiment analysis has recently attracted increasing
attention due to its broad applications. However, the existing multimodal
fine-grained sentiment datasets mostly focus on annotating the fine-grained
elements in text but ignore those in images, which leads to the fine-grained
elements in visual content not receiving the full attention they deserve. In
this paper, we propose a new dataset, the Multimodal Aspect-Category Sentiment
Analysis (MACSA) dataset, which contains more than 21K text-image pairs. The
dataset provides fine-grained annotations for both textual and visual content
and firstly uses the aspect category as the pivot to align the fine-grained
elements between the two modalities. Based on our dataset, we propose the
Multimodal ACSA task and a multimodal graph-based aligned model (MGAM), which
adopts a fine-grained cross-modal fusion method. Experimental results show that
our method can facilitate the baseline comparison for future research on this
corpus. We will make the dataset and code publicly available.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 12:49:16 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Yang",
"Hao",
""
],
[
"Zhao",
"Yanyan",
""
],
[
"Liu",
"Jianwei",
""
],
[
"Wu",
"Yang",
""
],
[
"Qin",
"Bing",
""
]
] |
new_dataset
| 0.999763 |
2206.13999
|
Hai Lin
|
Hai Lin and Jinhong Yuan
|
Orthogonal Delay-Doppler Division Multiplexing Modulation
|
This paper has been accepted by IEEE Trans. Wireless Commun. arXiv
admin note: text overlap with arXiv:2206.13382
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by the orthogonal time frequency space (OTFS) modulation, in this
paper, we consider designing a multicarrier (MC) modulation on delay-Doppler
(DD) plane, to couple the modulated signal with a doubly-selective channel
having DD resolutions. A key challenge for the design of DD plane MC modulation
is to investigate whether a realizable pulse orthogonal with respect to the DD
plane's fine resolutions exists or not. To this end, we first indicate that a
feasible DD plane MC modulation is essentially a type of staggered multitone
modulation. Then, analogous to orthogonal frequency division multiplexing, we
propose an orthogonal delay-Doppler division multiplexing (ODDM) modulation,
and design the corresponding transmit pulse. Furthermore, we prove that the
proposed transmit pulse is orthogonal with respect to the DD plane's
resolutions and therefore a realizable DD plane orthogonal pulse does exist.
The orthogonality of this particular pulse significantly eases the derivation
of the ODDM's DD domain channel input-output relation, and yields a channel
matrix with an elegant block-circulant-like structure. We demonstrate that the
ODDM outperforms the OTFS in terms of out-of-band emission and bit error rate,
by achieving perfect coupling between the modulated signal and the DD channel.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 13:37:11 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Lin",
"Hai",
""
],
[
"Yuan",
"Jinhong",
""
]
] |
new_dataset
| 0.999051 |
2206.14009
|
Lotfy Abdel Khaliq
|
Christen Millerdurai, Lotfy Abdel Khaliq, and Timon Ulrich
|
Show Me Your Face, And I'll Tell You How You Speak
| null | null | null | null |
cs.CV cs.SD eess.AS eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
When we speak, the prosody and content of the speech can be inferred from the
movement of our lips. In this work, we explore the task of lip to speech
synthesis, i.e., learning to generate speech given only the lip movements of a
speaker where we focus on learning accurate lip to speech mappings for multiple
speakers in unconstrained, large vocabulary settings. We capture the speaker's
voice identity through their facial characteristics, i.e., age, gender,
ethnicity and condition them along with the lip movements to generate speaker
identity aware speech. To this end, we present a novel method "Lip2Speech",
with key design choices to achieve accurate lip to speech synthesis in
unconstrained scenarios. We also perform various experiments and extensive
evaluation using quantitative, qualitative metrics and human evaluation.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 13:52:47 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Millerdurai",
"Christen",
""
],
[
"Khaliq",
"Lotfy Abdel",
""
],
[
"Ulrich",
"Timon",
""
]
] |
new_dataset
| 0.991207 |
2206.14089
|
Sayantan Adak
|
Sayantan Adak, Altaf Ahmad, Aditya Basu, Animesh Mukherjee
|
Placing (Historical) Facts on a Timeline: A Classification cum Coref
Resolution Approach
|
Accepted at the main conference of ECML PKDD 2022 as a long paper.
The camera-ready version
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A timeline provides one of the most effective ways to visualize the important
historical facts that occurred over a period of time, presenting the insights
that may not be so apparent from reading the equivalent information in textual
form. By leveraging generative adversarial learning for important sentence
classification and by assimilating knowledge based tags for improving the
performance of event coreference resolution we introduce a two staged system
for event timeline generation from multiple (historical) text documents. We
demonstrate our results on two manually annotated historical text documents.
Our results can be extremely helpful for historians in advancing research in
history and in understanding the socio-political landscape of a country as
reflected in the writings of famous personas.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 15:36:44 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Adak",
"Sayantan",
""
],
[
"Ahmad",
"Altaf",
""
],
[
"Basu",
"Aditya",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.984748 |
2206.14137
|
Shiyuan Li
|
Shiyuan Li
|
aSTDP: A More Biologically Plausible Learning
|
17 pages, 6 figures. arXiv admin note: text overlap with
arXiv:1912.00009
| null | null | null |
cs.NE cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spike-timing dependent plasticity in biological neural networks has been
proven to be important during biological learning process. On the other hand,
artificial neural networks use a different way to learn, such as
Back-Propagation or Contrastive Hebbian Learning. In this work we introduce
approximate STDP, a new neural network learning framework more similar to the
biological learning process. It uses only STDP rules for supervised and
unsupervised learning; every neuron learns patterns in a distributed manner and
does not need a global loss or other supervised information. We also use a
numerical method to approximate the derivatives of each neuron in order to
better use STDP learning, and use the derivatives to set targets for neurons to
accelerate the training and
testing process. The framework can make predictions or generate patterns in one
model without additional configuration. Finally, we verified our framework on
MNIST dataset for classification and generation tasks.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 08:12:50 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Li",
"Shiyuan",
""
]
] |
new_dataset
| 0.956776 |
2206.14169
|
Sonu Gupta
|
Sonu Gupta, Ellen Poplavska, Nora O'Toole, Siddhant Arora, Thomas
Norton, Norman Sadeh, Shomir Wilson
|
Creation and Analysis of an International Corpus of Privacy Laws
|
14 pages, 7 figures, 7 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The landscape of privacy laws and regulations around the world is complex and
ever-changing. National and super-national laws, agreements, decrees, and other
government-issued rules form a patchwork that companies must follow to operate
internationally. To examine the status and evolution of this patchwork, we
introduce the Government Privacy Instructions Corpus, or GPI Corpus, of 1,043
privacy laws, regulations, and guidelines, covering 182 jurisdictions. This
corpus enables a large-scale quantitative and qualitative examination of legal
foci on privacy. We examine the temporal distribution of when GPIs were created
and illustrate the dramatic increase in privacy legislation over the past 50
years, although a finer-grained examination reveals that the rate of increase
varies depending on the personal data types that GPIs address. Our exploration
also demonstrates that most privacy laws each address relatively few
personal data types, showing that comprehensive privacy legislation remains
rare. Additionally, topic modeling results show the prevalence of common themes
in GPIs, such as finance, healthcare, and telecommunications. Finally, we
release the corpus to the research community to promote further study.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 17:36:12 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Gupta",
"Sonu",
""
],
[
"Poplavska",
"Ellen",
""
],
[
"O'Toole",
"Nora",
""
],
[
"Arora",
"Siddhant",
""
],
[
"Norton",
"Thomas",
""
],
[
"Sadeh",
"Norman",
""
],
[
"Wilson",
"Shomir",
""
]
] |
new_dataset
| 0.984656 |
2206.14176
|
Danijar Hafner
|
Philipp Wu, Alejandro Escontrela, Danijar Hafner, Ken Goldberg, Pieter
Abbeel
|
DayDreamer: World Models for Physical Robot Learning
|
Website: https://danijar.com/daydreamer
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
To solve tasks in complex environments, robots need to learn from experience.
Deep reinforcement learning is a common approach to robot learning but requires
a large amount of trial and error to learn, limiting its deployment in the
physical world. As a consequence, many advances in robot learning rely on
simulators. On the other hand, learning inside of simulators fails to capture
the complexity of the real world, is prone to simulator inaccuracies, and the
resulting behaviors do not adapt to changes in the world. The Dreamer algorithm
has recently shown great promise for learning from small amounts of interaction
by planning within a learned world model, outperforming pure reinforcement
learning in video games. Learning a world model to predict the outcomes of
potential actions enables planning in imagination, reducing the amount of trial
and error needed in the real environment. However, it is unknown whether
Dreamer can facilitate faster learning on physical robots. In this paper, we
apply Dreamer to 4 robots to learn online and directly in the real world,
without simulators. Dreamer trains a quadruped robot to roll off its back,
stand up, and walk from scratch and without resets in only 1 hour. We then push
the robot and find that Dreamer adapts within 10 minutes to withstand
perturbations or quickly roll over and stand back up. On two different robotic
arms, Dreamer learns to pick and place multiple objects directly from camera
images and sparse rewards, approaching human performance. On a wheeled robot,
Dreamer learns to navigate to a goal position purely from camera images,
automatically resolving ambiguity about the robot orientation. Using the same
hyperparameters across all experiments, we find that Dreamer is capable of
online learning in the real world, establishing a strong baseline. We release
our infrastructure for future applications of world models to robot learning.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 17:44:48 GMT"
}
] | 2022-06-29T00:00:00 |
[
[
"Wu",
"Philipp",
""
],
[
"Escontrela",
"Alejandro",
""
],
[
"Hafner",
"Danijar",
""
],
[
"Goldberg",
"Ken",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
new_dataset
| 0.999377 |
0909.5521
|
Prabhu Manyem
|
Prabhu Manyem
|
Clique and Vertex Cover are solvable in polynomial time if the input
structure is ordered and contains a successor predicate
|
The results are incorrect. If phi = phi_1 AND phi_2, and phi is a
Horn formula, it does NOT mean that both phi_1 and phi_2 are Horn formulae.
Furthermore, the cardinality constraint CANNOT be expressed as a universal
Horn sentence in ESO (NOT even when the structure is ordered)
| null | null | null |
cs.CC cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this manuscript, assuming that Graedel's 1991 results are correct (which
implies that bounds on the solution values for optimization problems can be
expressed in existential second order logic where the first order part is
universal Horn), I will show that Clique and Vertex Cover can be solved in
polynomial time if the input structure is ordered and contains a successor
predicate. In the last section, we will argue about the validity of Graedel's
1991 results. Update: Manuscript withdrawn, because results are incorrect. If
phi = phi_1 AND phi_2, and phi is a Horn formula, it does NOT mean that both
phi_1 and phi_2 are Horn formulae. Furthermore, the cardinality constraint
CANNOT be expressed as a universal Horn sentence in ESO (NOT even when the
structure is ordered).
|
[
{
"version": "v1",
"created": "Wed, 30 Sep 2009 06:34:47 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Dec 2009 11:16:32 GMT"
},
{
"version": "v3",
"created": "Sat, 2 Oct 2010 22:33:43 GMT"
},
{
"version": "v4",
"created": "Sat, 25 Jun 2022 23:13:48 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Manyem",
"Prabhu",
""
]
] |
new_dataset
| 0.997516 |
1906.05004
|
Lloyd Allison
|
Lloyd Allison, Arun Konagurthu and Daniel Schmidt
|
On Universal Codes for Integers: Wallace Tree, Elias Omega and
Variations
|
8 pages, 8 figures (3 figure image files)
|
Data Compression Conference (DCC), pp.313-322, March 2021
|
10.1109/DCC50243.2021.00039
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A universal code for the (positive) integers can be used to store or compress
a sequence of integers. Every universal code implies a probability distribution
on integers. This implied distribution may be a reasonable choice when the true
distribution of a source of integers is unknown. Wallace Tree Code (WTC) is a
universal code for integers based on binary trees. We give the encoding and
decoding routines for WTC and analyse the properties of the code in comparison
to two well-known codes, the Fibonacci and Elias omega codes. Some improvements
on the Elias omega code are also described and examined.
|
[
{
"version": "v1",
"created": "Wed, 12 Jun 2019 08:40:35 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Allison",
"Lloyd",
""
],
[
"Konagurthu",
"Arun",
""
],
[
"Schmidt",
"Daniel",
""
]
] |
new_dataset
| 0.999757 |
2007.06954
|
Yinping Yang Dr
|
Raj Kumar Gupta, Ajay Vishwanath, Yinping Yang
|
COVID-19 Twitter Dataset with Latent Topics, Sentiments and Emotions
Attributes
|
The latest dataset version (V12, June 2022) has the following main
updates: a) Full data coverage extended to cover 28 January 2020 - 1 June
2022 (2 years and 4 months), b) Country-specific CSV files download covers 30
representative countries, c) Added new vaccine-related data covering from 3
November 2021 to 1 June 2022 (8 months), d) an updated discussion on the
dataset's usage
| null |
10.3886/E120321V12
| null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a large global dataset on people's discourse and
responses to the COVID-19 pandemic over the Twitter platform. From 28 January
2020 to 1 June 2022, we collected and processed over 252 million Twitter posts
from more than 29 million unique users using four keywords: "corona", "wuhan",
"nCov" and "covid". Leveraging probabilistic topic modelling and pre-trained
machine learning-based emotion recognition algorithms, we labelled each tweet
with seventeen attributes, including a) ten binary attributes indicating the
tweet's relevance (1) or irrelevance (0) to the top ten detected topics, b)
five quantitative emotion attributes indicating the degree of intensity of the
valence or sentiment (from 0: extremely negative to 1: extremely positive) and
the degree of intensity of fear, anger, sadness and happiness emotions (from 0:
not at all to 1: extremely intense), and c) two categorical attributes
indicating the sentiment (very negative, negative, neutral or mixed, positive,
very positive) and the dominant emotion (fear, anger, sadness, happiness, no
specific emotion) the tweet is mainly expressing. We discuss the technical
validity and report the descriptive statistics of these attributes, their
temporal distribution, and geographic representation. The paper concludes with
a discussion of the dataset's usage in communication, psychology, public
health, economics, and epidemiology.
|
[
{
"version": "v1",
"created": "Tue, 14 Jul 2020 10:30:47 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Jul 2020 11:39:23 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Aug 2020 05:49:29 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Aug 2020 10:39:40 GMT"
},
{
"version": "v5",
"created": "Sat, 5 Sep 2020 04:12:15 GMT"
},
{
"version": "v6",
"created": "Tue, 16 Feb 2021 13:31:40 GMT"
},
{
"version": "v7",
"created": "Sun, 26 Sep 2021 09:49:17 GMT"
},
{
"version": "v8",
"created": "Sat, 25 Jun 2022 06:35:40 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Gupta",
"Raj Kumar",
""
],
[
"Vishwanath",
"Ajay",
""
],
[
"Yang",
"Yinping",
""
]
] |
new_dataset
| 0.999797 |
2008.09311
|
Geonho Han
|
Geonho Han, Junil Choi
|
Radar Imaging Based on IEEE 802.11ad Waveform
|
6 pages, 6 figures, and accepted for 2020 IEEE Global Communications
Conference (GLOBECOM)
|
IEEE GLOBECOM 2020 - 2020 IEEE Global Communications Conference,
pp. 1-6, Dec. 2020
|
10.1109/GLOBECOM42002.2020.9322602
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The extension of the communication frequency band to the millimeter-wave
(mmWave) spectrum makes it easy to implement a joint radar and communication
system using a single hardware platform. In this paper, we propose radar imaging
based on the IEEE
802.11ad waveform for a vehicular setting. The necessary parameters to be
estimated for inverse synthetic aperture radar (ISAR) imaging are sampled
version of round-trip delay, Doppler shift, and vehicular velocity. The delay
is estimated using the correlation property of Golay complementary sequences
embedded on the IEEE 802.11ad preamble. The Doppler shift is first obtained
from least square estimation using radar return signals and refined by
correcting the phase uncertainty of Doppler shift by phase rotation. The
vehicular velocity is determined from the estimated Doppler shifts and an
equation of motion. Finally, an ISAR image is formed with the acquired
parameters. Simulation results show that it is possible to obtain recognizable
ISAR image from a point scatterer model of a realistic vehicular setting.
|
[
{
"version": "v1",
"created": "Fri, 21 Aug 2020 05:20:01 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Aug 2020 12:29:31 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Sep 2020 04:00:14 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Han",
"Geonho",
""
],
[
"Choi",
"Junil",
""
]
] |
new_dataset
| 0.982357 |
2011.14582
|
Sung Hyuck Hong
|
Sung Hyuck Hong, Sucheol Kim, Junil Choi, Wan Choi
|
Polar-Cap Codebook Design for MISO Rician Fading Channels with Limited
Feedback
|
5 pages, 4 figures, and published in IEEE Wireless Communications
Letters
|
IEEE Wireless Communications Letters, Volume: 10, Issue: 4, April
2021
|
10.1109/LWC.2020.3041941
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Most of the prior works on designing codebooks for limited feedback systems
have not considered the presence of strong line-of-sight (LOS) channel
component. This paper proposes the design of polar-cap codebook (PCC) for
multipleinput single-output (MISO) limited feedback systems subject to Rician
fading channels. The codewords of the designed PCC are adaptively constructed
according to the instantaneous strength of the LOS channel component.
Simulation results show that the codebook can significantly enhance the
performance of transmit beamforming in terms of received signal-to-noise ratio
(SNR).
|
[
{
"version": "v1",
"created": "Mon, 30 Nov 2020 07:09:18 GMT"
},
{
"version": "v2",
"created": "Fri, 28 May 2021 08:01:16 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Jun 2022 04:05:38 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Hong",
"Sung Hyuck",
""
],
[
"Kim",
"Sucheol",
""
],
[
"Choi",
"Junil",
""
],
[
"Choi",
"Wan",
""
]
] |
new_dataset
| 0.998095 |
2012.13977
|
James Chin-Jen Pang
|
James Chin-Jen Pang, Hessam Mahdavifar, and S. Sandeep Pradhan
|
Capacity-achieving Polar-based LDGM Codes
|
Extended version, now includes moderate-block length comparison with
the RLE. arXiv admin note: text overlap with arXiv:2001.11986
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study codes with sparse generator matrices. More
specifically, low-density generator matrix (LDGM) codes with a certain
constraint on the weight of the columns in the generator matrix are considered.
It is first shown that when a BMS channel $W$ and a constant $s>0$
are given, there exists a polarization kernel such that the corresponding polar
code is capacity-achieving and the column weights of the generator matrix (GM)
are bounded from above by $N^s$. Then, a general construction based on a
concatenation of polar codes and a rate-$1$ code, and a new column-splitting
algorithm that guarantees a much sparser GM, are given. More specifically, for
any BMS channel and any $\epsilon > 2\epsilon^*$, where $\epsilon^* \approx
0.085$, the existence of a sequence of capacity-achieving codes with all the GM
column weights upper bounded by $(\log N)^{1+\epsilon}$ is shown. Furthermore,
two coding schemes for BEC and BMS channels, based on a second column-splitting
algorithm, are devised with low-complexity decoding that uses
successive-cancellation. The second splitting algorithm allows for the use of a
low-complexity decoder by preserving the reliability of the bit-channels
observed by the source bits, and by increasing the code block length. The
concatenation-based construction can also be applied to the random linear code
ensemble to yield capacity-achieving codes with all the GM column weights being
$O(\log N)$ and with (large-degree) polynomial decoding complexity.
|
[
{
"version": "v1",
"created": "Sun, 27 Dec 2020 17:11:04 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jun 2022 16:56:01 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Pang",
"James Chin-Jen",
""
],
[
"Mahdavifar",
"Hessam",
""
],
[
"Pradhan",
"S. Sandeep",
""
]
] |
new_dataset
| 0.989368 |
2102.01909
|
Inbal Yahav
|
Avihay Chriqui, Inbal Yahav
|
HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis
and Emotion Recognition
| null | null |
10.1287/ijds.2022.0016
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces HeBERT and HebEMO. HeBERT is a Transformer-based model
for modern Hebrew text, which relies on a BERT (Bidirectional Encoder
Representations for Transformers) architecture. BERT has been shown to
outperform alternative architectures in sentiment analysis, and is suggested to
be particularly appropriate for morphologically rich languages (MRLs). Analyzing multiple BERT specifications,
we find that while model complexity correlates with high performance on
language tasks that aim to understand terms in a sentence, a more parsimonious
model better captures the sentiment of an entire sentence. Either way, our
BERT-based language model outperforms all existing Hebrew alternatives on all
common language tasks. HebEMO is a tool that uses HeBERT to detect polarity and
extract emotions from Hebrew UGC. HebEMO is trained on a unique
Covid-19-related UGC dataset that we collected and annotated for this study.
Data collection and annotation followed an active learning procedure that aimed
to maximize predictability. We show that HebEMO yields a high F1-score of 0.96
for polarity classification. Emotion detection reaches F1-scores of 0.78-0.97
for various target emotions, with the exception of surprise, which the model
failed to capture (F1 = 0.41). These results are better than the best-reported
performance, even among English-language models of emotion detection.
|
[
{
"version": "v1",
"created": "Wed, 3 Feb 2021 06:59:59 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Feb 2021 07:43:43 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Feb 2021 07:04:34 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Chriqui",
"Avihay",
""
],
[
"Yahav",
"Inbal",
""
]
] |
new_dataset
| 0.997169 |
2103.06450
|
Sumeet Sohan Singh
|
Sumeet S. Singh, Sergey Karayev
|
Full Page Handwriting Recognition via Image to Sequence Extraction
|
Appeared in ICDAR 2021
| null |
10.1007/978-3-030-86334-0_4
| null |
cs.CV cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a Neural Network based Handwritten Text Recognition (HTR) model
architecture that can be trained to recognize full pages of handwritten or
printed text without image segmentation. Being based on Image to Sequence
architecture, it can extract text present in an image and then sequence it
correctly without imposing any constraints regarding orientation, layout and
size of text and non-text. Further, it can also be trained to generate
auxiliary markup related to formatting, layout and content. We use character
level vocabulary, thereby enabling language and terminology of any subject. The
model achieves a new state of the art in paragraph-level recognition on the IAM
dataset. When evaluated on scans of real world handwritten free form test
answers - beset with curved and slanted lines, drawings, tables, math,
chemistry and other symbols - it performs better than all commercially
available HTR cloud APIs. It is deployed in production as part of a commercial
web application.
|
[
{
"version": "v1",
"created": "Thu, 11 Mar 2021 04:37:29 GMT"
},
{
"version": "v2",
"created": "Fri, 21 May 2021 18:52:44 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Jun 2022 21:01:23 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Singh",
"Sumeet S.",
""
],
[
"Karayev",
"Sergey",
""
]
] |
new_dataset
| 0.995344 |
2107.14578
|
Erick Galinkin
|
Erick Galinkin
|
Winning the Ransomware Lottery: A Game-Theoretic Model for Mitigating
Ransomware Attacks
|
To be published in the Proceedings of the Conference on Decision and
Game Theory for Security -- GameSec 2021
| null |
10.1007/978-3-030-90370-1_11
| null |
cs.CR cs.CY cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Ransomware is a growing threat to individuals and enterprises alike,
constituting a major factor in cyber insurance and in the security planning of
every organization. Although the game theoretic lens often frames the game as a
competition between equals -- a profit maximizing attacker and a loss
minimizing defender -- the reality of many situations is that ransomware
organizations are not playing a non-cooperative game, they are playing a
lottery. The wanton behavior of attackers creates a situation where many
victims are hit more than once by ransomware operators, sometimes even by the
same group. If defenders wish to combat malware, they must then seek to remove
its incentives.
In this work, we construct an expected value model based on data from actual
ransomware attacks and identify three variables: the value of payments, the
cost of an attack, and the probability of payment. Using this model, we
consider the potential to manipulate these variables to reduce the profit
motive associated with ransomware attack. Based on the model, we present
mitigations to encourage an environment that is hostile to ransomware
operators. In particular, we find that off-site backups and government
incentives for their adoption are the most fruitful avenue for combating
ransomware.
|
[
{
"version": "v1",
"created": "Fri, 30 Jul 2021 12:29:34 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Sep 2021 17:18:34 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Galinkin",
"Erick",
""
]
] |
new_dataset
| 0.99928 |
2109.09701
|
Dat Quoc Nguyen
|
Nguyen Luong Tran, Duong Minh Le, Dat Quoc Nguyen
|
BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese
|
In Proceedings of INTERSPEECH 2022 (to appear)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present BARTpho with two versions, BARTpho-syllable and BARTpho-word,
which are the first public large-scale monolingual sequence-to-sequence models
pre-trained for Vietnamese. BARTpho uses the "large" architecture and the
pre-training scheme of the sequence-to-sequence denoising autoencoder BART,
thus it is especially suitable for generative NLP tasks. We conduct experiments
to compare our BARTpho with its competitor mBART on a downstream task of
Vietnamese text summarization and show that: in both automatic and human
evaluations, BARTpho outperforms the strong baseline mBART and improves the
state-of-the-art. We further evaluate and compare BARTpho and mBART on the
Vietnamese capitalization and punctuation restoration tasks and also find that
BARTpho is more effective than mBART on these two tasks. We publicly release
BARTpho to facilitate future research and applications of generative Vietnamese
NLP tasks. Our BARTpho models are available at
https://github.com/VinAIResearch/BARTpho
|
[
{
"version": "v1",
"created": "Mon, 20 Sep 2021 17:14:22 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Jan 2022 03:08:20 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Jun 2022 15:45:40 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Tran",
"Nguyen Luong",
""
],
[
"Le",
"Duong Minh",
""
],
[
"Nguyen",
"Dat Quoc",
""
]
] |
new_dataset
| 0.999365 |
2109.10445
|
Mohammad Mahdavian
|
Mohammad Mahdavian, KangKang Yin, Mo Chen
|
Robust Visual Teach and Repeat for UGVs Using 3D Semantic Maps
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a Visual Teach and Repeat (VTR) algorithm using semantic landmarks
extracted from environmental objects for ground robots with fixed mount
monocular cameras. The proposed algorithm is robust to changes in the starting
pose of the camera/robot, where a pose is defined as the planar position plus
the orientation around the vertical axis. VTR consists of a teach phase in
which a robot moves in a prescribed path, and a repeat phase in which the robot
tries to repeat the same path starting from the same or a different pose. Most
available VTR algorithms are pose-dependent and cannot perform well in the
repeat phase when starting from an initial pose far from that of the teach
phase. To achieve more robust pose independence, the key is to generate a 3D
semantic map of the environment containing the camera trajectory and the
positions of surrounding objects during the teach phase. For specific
implementation, we use ORB-SLAM to collect the camera poses and the 3D point
clouds of the environment, and YOLOv3 to detect objects in the environment. We
then combine the two outputs to build the semantic map. In the repeat phase, we
relocalize the robot based on the detected objects and the stored semantic map.
The robot is then able to move toward the teach path, and repeat it in both
forward and backward directions. We have tested the proposed algorithm in
different scenarios and compared it with the two most relevant recent studies.
Also, we compared our algorithm with two image-based relocalization methods.
One is purely based on ORB-SLAM and the other combines Superglue and RANSAC.
The results show that our algorithm is much more robust with respect to pose
variations as well as environmental alterations. Our code and data are
available at the following Github page:
https://github.com/mmahdavian/semantic_visual_teach_repeat.
|
[
{
"version": "v1",
"created": "Tue, 21 Sep 2021 22:16:48 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Feb 2022 22:49:11 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Jun 2022 19:26:11 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Mahdavian",
"Mohammad",
""
],
[
"Yin",
"KangKang",
""
],
[
"Chen",
"Mo",
""
]
] |
new_dataset
| 0.984369 |
2110.05802
|
Zhen Xu
|
Zhen Xu, Sergio Escalera, Isabelle Guyon, Adrien Pav\~ao, Magali
Richard, Wei-Wei Tu, Quanming Yao, Huan Zhao
|
Codabench: Flexible, Easy-to-Use and Reproducible Benchmarking Platform
| null |
Patterns Cell Press 2022
|
10.1016/j.patter.2022.100543
| null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obtaining standardized crowdsourced benchmarks of computational methods is a
major issue in data science communities. Dedicated frameworks enabling fair
benchmarking in a unified environment are yet to be developed. Here we
introduce Codabench, an open-source, community-driven platform for benchmarking
algorithms or software agents versus datasets or tasks. A public instance of
Codabench (https://www.codabench.org/) is open to everyone, free of charge, and
allows benchmark organizers to fairly compare submissions under the same
setting (software, hardware, data, algorithms), with custom protocols and data
formats. Codabench has unique features facilitating the organization of
benchmarks flexibly, easily and reproducibly, such as the possibility of
re-using templates of benchmarks, and supplying compute resources on-demand.
Codabench has been used internally and externally on various applications,
attracting more than 130 users and 2,500 submissions. As illustrative use cases,
we introduce 4 diverse benchmarks covering Graph Machine Learning, Cancer
Heterogeneity, Clinical Diagnosis and Reinforcement Learning.
|
[
{
"version": "v1",
"created": "Tue, 12 Oct 2021 07:54:34 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 08:20:35 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Xu",
"Zhen",
""
],
[
"Escalera",
"Sergio",
""
],
[
"Guyon",
"Isabelle",
""
],
[
"Pavão",
"Adrien",
""
],
[
"Richard",
"Magali",
""
],
[
"Tu",
"Wei-Wei",
""
],
[
"Yao",
"Quanming",
""
],
[
"Zhao",
"Huan",
""
]
] |
new_dataset
| 0.997848 |
2111.04814
|
Huang Huang
|
Vincent Lim, Huang Huang, Lawrence Yunliang Chen, Jonathan Wang,
Jeffrey Ichnowski, Daniel Seita, Michael Laskey, Ken Goldberg
|
Planar Robot Casting with Real2Sim2Real Self-Supervised Learning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the task of {\em Planar Robot Casting (PRC)}, where one
planar motion of a robot arm holding one end of a cable causes the other end to
slide across the plane toward a desired target. PRC allows the cable to reach
points beyond the robot workspace and has applications for cable management in
homes, warehouses, and factories. To efficiently learn a PRC policy for a given
cable, we propose Real2Sim2Real, a self-supervised framework that automatically
collects physical trajectory examples to tune parameters of a dynamics
simulator using Differential Evolution, generates many simulated examples, and
then learns a policy using a weighted combination of simulated and physical
data. We evaluate Real2Sim2Real with three simulators, Isaac Gym-segmented,
Isaac Gym-hybrid, and PyBullet, two function approximators, Gaussian Processes
and Neural Networks (NNs), and three cables with differing stiffness, torsion,
and friction. Results with 240 physical trials suggest that the PRC policies
can attain median error distance (as % of cable length) ranging from 8% to 14%,
outperforming baselines and policies trained on only real or only simulated
examples. Code, data, and videos are available at
https://tinyurl.com/robotcast.
|
[
{
"version": "v1",
"created": "Mon, 8 Nov 2021 20:37:30 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Jun 2022 18:50:54 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Lim",
"Vincent",
""
],
[
"Huang",
"Huang",
""
],
[
"Chen",
"Lawrence Yunliang",
""
],
[
"Wang",
"Jonathan",
""
],
[
"Ichnowski",
"Jeffrey",
""
],
[
"Seita",
"Daniel",
""
],
[
"Laskey",
"Michael",
""
],
[
"Goldberg",
"Ken",
""
]
] |
new_dataset
| 0.999414 |
2111.11397
|
Ali J. Ghandour
|
Hasan Nasrallah, Abed Ellatif Samhat, Yilei Shi, Xiaoxiang Zhu, Ghaleb
Faour and Ali J. Ghandour
|
Lebanon Solar Rooftop Potential Assessment using Buildings Segmentation
from Aerial Images
| null | null |
10.1109/JSTARS.2022.3181446
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Estimating solar rooftop potential at a national level is a fundamental
building block for every country to utilize solar power efficiently. Solar
rooftop potential assessment relies on several features such as building
geometry, location, and surrounding facilities. Hence, national-level
approximations that do not take these factors into deep consideration are often
inaccurate. This paper introduces Lebanon's first comprehensive footprint and
solar rooftop potential maps using deep learning-based instance segmentation to
extract buildings' footprints from satellite images. A photovoltaic panel
placement algorithm that considers the morphology of each roof is proposed. We
show that the average rooftop's solar potential can fulfill the yearly electric
needs of a single-family residence while using only 5% of the roof surface. The
usage of 50% of a residential apartment rooftop area would achieve energy
security for up to 8 households. We also compute the average and total solar
rooftop potential per district to localize regions corresponding to the highest
and lowest solar rooftop potential yield. Factors such as size, ground coverage
ratio and PV_out are carefully investigated for each district. Baalbeck
district yielded the highest total solar rooftop potential despite its low
built-up area. Meanwhile, the capital city Beirut has the highest average solar rooftop
potential due to its extremely populated urban nature. Reported results and
analysis reveal urban patterns in solar rooftop potential and provide
policymakers and key stakeholders with tangible insights. Lebanon's total solar
rooftop potential is about 28.1 TWh/year, two times larger than the national
energy consumption in 2019.
|
[
{
"version": "v1",
"created": "Mon, 22 Nov 2021 18:16:07 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Dec 2021 17:16:42 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jan 2022 08:42:51 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Feb 2022 12:51:53 GMT"
},
{
"version": "v5",
"created": "Sat, 26 Feb 2022 17:13:06 GMT"
},
{
"version": "v6",
"created": "Mon, 9 May 2022 03:00:47 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Nasrallah",
"Hasan",
""
],
[
"Samhat",
"Abed Ellatif",
""
],
[
"Shi",
"Yilei",
""
],
[
"Zhu",
"Xiaoxiang",
""
],
[
"Faour",
"Ghaleb",
""
],
[
"Ghandour",
"Ali J.",
""
]
] |
new_dataset
| 0.996858 |
2112.01122
|
Si Yuan Jin
|
Si Yuan Jin, Yong Xia
|
CEV Framework: A Central Bank Digital Currency Evaluation and
Verification Framework With a Focus on Consensus Algorithms and Operating
Architectures
|
This paper is accepted on June 8, 2022, and published on June 14,
2022 by IEEE Access. Digital Object Identifier 10.1109/ACCESS.2022.3183092
|
IEEE Vol 10, 2022
|
10.1109/ACCESS.2022.3183092
|
63698 - 63714
|
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a Central Bank Digital Currency Evaluation and Verification (CEV)
Framework for recommending and verifying technical solutions in the central
bank digital currency (CBDC) system. We demonstrate two sub-frameworks: an
evaluation sub-framework that provides consensus algorithm and operating
architecture solutions and a verification sub-framework that validates the
proposed solutions. Our framework offers a universal CBDC solution that is
compatible with different national economic and regulatory regimes. The
evaluation sub-framework generates customized solutions by splitting the
consensus algorithms into several components and analyzing their impacts on
CBDC systems. CBDC design involves a trade-off between system features - the
consensus algorithm cannot achieve all system features simultaneously. However,
we also improve the operating architectures to compensate for the weak system
features. The verification sub-framework helps verify our proposed solution
through empirical experiments and formal proof. Our framework offers CBDC
designers the flexibility to iteratively tune the trade-off between CBDC system
features for the desired solution. To the best of our knowledge, we are the
first to propose a framework to recommend and verify CBDC technical solutions.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 10:56:31 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Dec 2021 02:42:18 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Jun 2022 08:20:14 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Jin",
"Si Yuan",
""
],
[
"Xia",
"Yong",
""
]
] |
new_dataset
| 0.986347 |
2201.02053
|
Qiang Li
|
Qiang Li, Miaowen Wen, Ertugrul Basar, George C. Alexandropoulos,
Kyeong Jin Kim, and H. Vincent Poor
|
Channel Estimation and Multipath Diversity Reception for RIS-Empowered
Broadband Wireless Systems Based on Cyclic-Prefixed Single-Carrier
Transmission
|
Submitted to an IEEE Journal
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, a cyclic-prefixed single-carrier (CPSC) transmission scheme
with phase shift keying (PSK) signaling is presented for broadband wireless
communications systems empowered by a reconfigurable intelligent surface (RIS).
In the proposed CPSC-RIS, the RIS is configured according to the transmitted
PSK symbols such that different cyclically delayed versions of the incident
signal are created by the RIS to achieve multipath diversity. A practical and
efficient channel estimator is developed for CPSC-RIS and the mean square error
of the channel estimation is expressed in closed form. We analyze the bit error
rate (BER) performance of CPSC-RIS over frequency-selective Nakagami-$m$ fading
channels. An upper bound on the BER is derived by assuming the
maximum-likelihood detection. Furthermore, by resorting to the concept of index
modulation (IM), we propose an extension of CPSC-RIS, termed CPSC-RIS-IM, which
enhances the spectral efficiency. In addition to conventional constellation
information of PSK symbols, CPSC-RIS-IM uses the full permutations of cyclic
delays caused by the RIS to carry information. A sub-optimal receiver is
designed for CPSC-RIS-IM to aim at low computational complexity. Our simulation
results in terms of BER corroborate the performance analysis and the
superiority of CPSC-RIS(-IM) over the conventional CPSC without an RIS and
orthogonal frequency division multiplexing with an RIS.
|
[
{
"version": "v1",
"created": "Thu, 6 Jan 2022 13:35:56 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jun 2022 12:28:18 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Li",
"Qiang",
""
],
[
"Wen",
"Miaowen",
""
],
[
"Basar",
"Ertugrul",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Kim",
"Kyeong Jin",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.975682 |
2201.03713
|
Ye Jia
|
Ye Jia, Michelle Tadmor Ramanovich, Quan Wang, Heiga Zen
|
CVSS Corpus and Massively Multilingual Speech-to-Speech Translation
|
LREC 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce CVSS, a massively multilingual-to-English speech-to-speech
translation (S2ST) corpus, covering sentence-level parallel S2ST pairs from 21
languages into English. CVSS is derived from the Common Voice speech corpus and
the CoVoST 2 speech-to-text translation (ST) corpus, by synthesizing the
translation text from CoVoST 2 into speech using state-of-the-art TTS systems.
Two versions of translation speeches are provided: 1) CVSS-C: All the
translation speeches are in a single high-quality canonical voice; 2) CVSS-T:
The translation speeches are in voices transferred from the corresponding
source speeches. In addition, CVSS provides normalized translation text which
matches the pronunciation in the translation speech. On each version of CVSS,
we built baseline multilingual direct S2ST models and cascade S2ST models,
verifying the effectiveness of the corpus. To build strong cascade S2ST
baselines, we trained an ST model on CoVoST 2, which outperforms the previous
state-of-the-art trained on the corpus without extra data by 5.8 BLEU.
Nevertheless, the performance of the direct S2ST models approaches the strong
cascade baselines when trained from scratch, and with only 0.1 or 0.7 BLEU
difference on ASR transcribed translation when initialized from matching ST
models.
|
[
{
"version": "v1",
"created": "Tue, 11 Jan 2022 00:27:08 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Jan 2022 05:27:43 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Jun 2022 06:14:05 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Jia",
"Ye",
""
],
[
"Ramanovich",
"Michelle Tadmor",
""
],
[
"Wang",
"Quan",
""
],
[
"Zen",
"Heiga",
""
]
] |
new_dataset
| 0.995054 |
2201.06374
|
Zhouxia Wang
|
Zhouxia Wang, Jiawei Zhang, Runjian Chen, Wenping Wang and Ping Luo
|
RestoreFormer: High-Quality Blind Face Restoration from Undegraded
Key-Value Pairs
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blind face restoration is to recover a high-quality face image from unknown
degradations. As a face image contains abundant contextual information, we
propose a method, RestoreFormer, which explores fully-spatial attentions to
model contextual information and surpasses existing works that use local
operators. RestoreFormer has several benefits compared to prior arts. First,
unlike the conventional multi-head self-attention in previous Vision
Transformers (ViTs), RestoreFormer incorporates a multi-head cross-attention
layer to learn fully-spatial interactions between corrupted queries and
high-quality key-value pairs. Second, the key-value pairs in RestoreFormer are
sampled from a reconstruction-oriented high-quality dictionary, whose elements
are rich in high-quality facial features specifically aimed for face
reconstruction, leading to superior restoration results. Third, RestoreFormer
outperforms advanced state-of-the-art methods on one synthetic dataset and
three real-world datasets, as well as produces images with better visual
quality.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 12:21:55 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 10:08:23 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Jun 2022 07:15:48 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Wang",
"Zhouxia",
""
],
[
"Zhang",
"Jiawei",
""
],
[
"Chen",
"Runjian",
""
],
[
"Wang",
"Wenping",
""
],
[
"Luo",
"Ping",
""
]
] |
new_dataset
| 0.961576 |
2201.07384
|
Zinan Xiong
|
Zinan Xiong, Chenxi Wang, Ying Li, Yan Luo, Yu Cao
|
Swin-Pose: Swin Transformer Based Human Pose Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional neural networks (CNNs) have been widely utilized in many
computer vision tasks. However, CNNs have a fixed receptive field and lack the
ability of long-range perception, which is crucial to human pose estimation.
Due to its capability to capture long-range dependencies between pixels,
the transformer architecture has recently been adopted in computer vision
applications and has proven to be highly effective. We are interested
in exploring its capability in human pose estimation, and thus propose a novel
model based on transformer architecture, enhanced with a feature pyramid fusion
structure. More specifically, we use a pre-trained Swin Transformer as our
backbone to extract features from input images, and we leverage a feature pyramid
structure to extract feature maps from different stages. By fusing the features
together, our model predicts the keypoint heatmap. The experiment results of
our study have demonstrated that the proposed transformer-based model can
achieve better performance compared to the state-of-the-art CNN-based models.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 02:15:26 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Jun 2022 23:08:10 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Xiong",
"Zinan",
""
],
[
"Wang",
"Chenxi",
""
],
[
"Li",
"Ying",
""
],
[
"Luo",
"Yan",
""
],
[
"Cao",
"Yu",
""
]
] |
new_dataset
| 0.999453 |
2203.00545
|
Xinyu Wang
|
Xinyu Wang, Yongliang Shen, Jiong Cai, Tao Wang, Xiaobin Wang, Pengjun
Xie, Fei Huang, Weiming Lu, Yueting Zhuang, Kewei Tu, Wei Lu, Yong Jiang
|
DAMO-NLP at SemEval-2022 Task 11: A Knowledge-based System for
Multilingual Named Entity Recognition
|
Our Knowledge-based NER system wins 10 out of 13 tracks in the
SemEval-2022 MultiCoNER shared task
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The MultiCoNER shared task aims at detecting semantically ambiguous and
complex named entities in short and low-context settings for multiple
languages. The lack of context makes the recognition of ambiguous named
entities challenging. To alleviate this issue, our team DAMO-NLP proposes a
knowledge-based system, where we build a multilingual knowledge base based on
Wikipedia to provide related context information to the named entity
recognition (NER) model. Given an input sentence, our system effectively
retrieves related contexts from the knowledge base. The original input
sentences are then augmented with such context information, allowing
significantly better contextualized token representations to be captured. Our
system wins 10 out of 13 tracks in the MultiCoNER shared task.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 15:29:35 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Apr 2022 03:29:06 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Jun 2022 00:12:21 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Wang",
"Xinyu",
""
],
[
"Shen",
"Yongliang",
""
],
[
"Cai",
"Jiong",
""
],
[
"Wang",
"Tao",
""
],
[
"Wang",
"Xiaobin",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Huang",
"Fei",
""
],
[
"Lu",
"Weiming",
""
],
[
"Zhuang",
"Yueting",
""
],
[
"Tu",
"Kewei",
""
],
[
"Lu",
"Wei",
""
],
[
"Jiang",
"Yong",
""
]
] |
new_dataset
| 0.965342 |
2203.10750
|
Zewang Zhang
|
Zewang Zhang, Yibin Zheng, Xinhui Li, Li Lu
|
WeSinger: Data-augmented Singing Voice Synthesis with Auxiliary Losses
|
accepted at InterSpeech2022
| null | null | null |
cs.SD cs.CL eess.AS stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we develop a new multi-singer Chinese neural singing voice
synthesis (SVS) system named WeSinger. To improve the accuracy and naturalness
of the synthesized singing voice, we design several specialized modules and
techniques: 1) A deep bi-directional LSTM-based duration model with multi-scale
rhythm loss and post-processing step; 2) A Transformer-alike acoustic model
with progressive pitch-weighted decoder loss; 3) A 24 kHz pitch-aware LPCNet
neural vocoder to produce high-quality singing waveforms; 4) A novel data
augmentation method with multi-singer pre-training for stronger robustness and
naturalness. To our knowledge, WeSinger is the first SVS system to adopt 24 kHz
LPCNet and multi-singer pre-training simultaneously. Both quantitative and
qualitative evaluation results demonstrate the effectiveness of WeSinger in
terms of accuracy and naturalness, and WeSinger achieves state-of-the-art
performance on the recent public Chinese singing corpus
Opencpop\footnote{https://wenet.org.cn/opencpop/}. Some synthesized singing
samples are available online\footnote{https://zzw922cn.github.io/wesinger/}.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 06:42:44 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 03:57:17 GMT"
},
{
"version": "v3",
"created": "Sun, 27 Mar 2022 15:54:29 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Apr 2022 12:39:11 GMT"
},
{
"version": "v5",
"created": "Sat, 25 Jun 2022 07:48:46 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Zhang",
"Zewang",
""
],
[
"Zheng",
"Yibin",
""
],
[
"Li",
"Xinhui",
""
],
[
"Lu",
"Li",
""
]
] |
new_dataset
| 0.999431 |
2203.16291
|
Burak Yildiz
|
Burak Yildiz, Seyran Khademi, Ronald Maria Siebes, Jan van Gemert
|
AmsterTime: A Visual Place Recognition Benchmark Dataset for Severe
Domain Shift
|
Accepted to ICPR 2022 (26th International Conference on Pattern
Recognition), Dataset and evaluation code:
https://github.com/seyrankhademi/AmsterTime
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce AmsterTime: a challenging dataset to benchmark visual place
recognition (VPR) in presence of a severe domain shift. AmsterTime offers a
collection of 2,500 well-curated image pairs, each matching a street-view image
of a scene to historical archival image data from the city of Amsterdam. The image
pairs capture the same place with different cameras, viewpoints, and
appearances. Unlike existing benchmark datasets, AmsterTime is directly
crowdsourced in a GIS navigation platform (Mapillary). We evaluate various
baselines, including non-learning, supervised and self-supervised methods,
pre-trained on different relevant datasets, for both verification and retrieval
tasks. Our results credit the best accuracy to the ResNet-101 model pre-trained
on the Landmarks dataset, which scores 84% and 24% on the verification and
retrieval tasks, respectively. Additionally, a subset of Amsterdam landmarks is collected
for feature evaluation in a classification task. Classification labels are
further used to extract the visual explanations using Grad-CAM for inspection
of the learned similar visuals in the deep metric learning models.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 13:33:45 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jun 2022 15:19:24 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Yildiz",
"Burak",
""
],
[
"Khademi",
"Seyran",
""
],
[
"Siebes",
"Ronald Maria",
""
],
[
"van Gemert",
"Jan",
""
]
] |
new_dataset
| 0.999878 |
2205.09299
|
Minh Tran Quang
|
Minh Tran, Viet-Khoa Vo-Ho, Ngan T.H. Le
|
3DConvCaps: 3DUnet with Convolutional Capsule Encoder for Medical Image
Segmentation
|
Accepted to ICPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks (CNNs) have achieved promising results in
medical image segmentation. However, CNNs require lots of training data and are
incapable of handling pose and deformation of objects. Furthermore, their
pooling layers tend to discard important information such as positions, and
CNNs are sensitive to rotation and affine transformations. The capsule network is
a recent new architecture that has achieved better robustness in part-whole
representation learning by replacing pooling layers with dynamic routing and
convolutional strides, which has shown potential results on popular tasks such
as digit classification and object segmentation. In this paper, we propose a 3D
encoder-decoder network with Convolutional Capsule Encoder (called 3DConvCaps)
to learn lower-level features (short-range attention) with convolutional layers
while modeling the higher-level features (long-range dependence) with capsule
layers. Our experiments on multiple datasets including iSeg-2017, Hippocampus,
and Cardiac demonstrate that our 3DConvCaps network considerably outperforms
previous capsule networks and 3D-UNets. We further conduct ablation studies of
network efficiency and segmentation performance under various configurations of
convolution layers and capsule layers at both contracting and expanding paths.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 03:00:04 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jun 2022 00:12:03 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Tran",
"Minh",
""
],
[
"Vo-Ho",
"Viet-Khoa",
""
],
[
"Le",
"Ngan T. H.",
""
]
] |
new_dataset
| 0.99618 |
2206.07117
|
Razieh Rastgoo
|
Mohammad Rezaei, Razieh Rastgoo, and Vassilis Athitsos
|
TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
3D hand pose estimation methods have made significant progress recently.
However, the estimation accuracy is often far from sufficient for specific
real-world applications, and thus there is significant room for improvement.
This paper proposes TriHorn-Net, a novel model that uses specific innovations
to improve hand pose estimation accuracy on depth images. The first innovation
is the decomposition of the 3D hand pose estimation into the estimation of 2D
joint locations in the depth image space (UV), and the estimation of their
corresponding depths aided by two complementary attention maps. This
decomposition prevents depth estimation, which is a more difficult task, from
interfering with the UV estimations at both the prediction and feature levels.
The second innovation is PixDropout, which is, to the best of our knowledge,
the first appearance-based data augmentation method for hand depth images.
Experimental results demonstrate that the proposed model outperforms the
state-of-the-art methods on three public benchmark datasets. Our implementation
is available at https://github.com/mrezaei92/TriHorn-Net.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 19:08:42 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jun 2022 12:18:20 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Rezaei",
"Mohammad",
""
],
[
"Rastgoo",
"Razieh",
""
],
[
"Athitsos",
"Vassilis",
""
]
] |
new_dataset
| 0.99826 |
2206.09907
|
Chen Min
|
Chen Min and Weizhong Jiang and Dawei Zhao and Jiaolong Xu and Liang
Xiao and Yiming Nie and Bin Dai
|
ORFD: A Dataset and Benchmark for Off-Road Freespace Detection
|
Accepted by ICRA2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Freespace detection is an essential component of autonomous driving
technology and plays an important role in trajectory planning. In the last
decade, deep learning-based free space detection methods have proven
feasible. However, these efforts focused on urban road environments, and
few deep learning-based methods were specifically designed for off-road free
space detection due to the lack of off-road benchmarks. In this paper, we
present the ORFD dataset, which, to our knowledge, is the first off-road free
space detection dataset. The dataset was collected in different scenes
(woodland, farmland, grassland, and countryside), different weather conditions
(sunny, rainy, foggy, and snowy), and different light conditions (bright light,
daylight, twilight, darkness), which totally contains 12,198 LiDAR point cloud
and RGB image pairs with the traversable area, non-traversable area and
unreachable area annotated in detail. We propose a novel network named OFF-Net,
which unifies Transformer architecture to aggregate local and global
information, to meet the requirement of large receptive fields for free space
detection tasks. We also propose the cross-attention to dynamically fuse LiDAR
and RGB image information for accurate off-road free space detection. Dataset
and code are publicly available at https://github.com/chaytonmin/OFF-Net.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 17:22:57 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jun 2022 13:28:17 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Min",
"Chen",
""
],
[
"Jiang",
"Weizhong",
""
],
[
"Zhao",
"Dawei",
""
],
[
"Xu",
"Jiaolong",
""
],
[
"Xiao",
"Liang",
""
],
[
"Nie",
"Yiming",
""
],
[
"Dai",
"Bin",
""
]
] |
new_dataset
| 0.999838 |
2206.12410
|
Oriol Colom\'es
|
Oriol Colom\'es, Francesc Verdugo and Ido Akkerman
|
A monolithic Finite Element formulation for the hydroelastic analysis of
Very Large Floating Structures
|
35 pages, 25 figures
| null | null | null |
cs.CE cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
In this work we present a novel monolithic Finite Element Method (FEM) for
the hydroelastic analysis of Very Large Floating Structures (VLFS) with
arbitrary shapes that is stable, energy conserving and overcomes the need of an
iterative algorithm. The new formulation enables a fully monolithic solution of
the linear free-surface flow, described by linear potential flow, coupled with
floating thin structures, described by the Euler-Bernoulli beam or
Poisson-Kirchhoff plate equations. The formulation presented in this work is
general in the sense that solutions can be found in the frequency and time
domains, it overcomes the need for elements with C1 continuity by
employing a continuous/discontinuous Galerkin (C/DG) approach, and it is
suitable for Finite Elements of arbitrary order. We show that the proposed
approach can accurately describe the hydroelastic phenomena of VLFS with a
variety of tests, including structures with elastic joints, variable bathymetry
and arbitrary structural shapes.
|
[
{
"version": "v1",
"created": "Wed, 22 Jun 2022 20:33:40 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Colomés",
"Oriol",
""
],
[
"Verdugo",
"Francesc",
""
],
[
"Akkerman",
"Ido",
""
]
] |
new_dataset
| 0.970265 |
2206.12452
|
Angela Meyer
|
Stefan Jonas, Dimitrios Anagnostos, Bernhard Brodbeck, Angela Meyer
|
Vibration fault detection in wind turbines based on normal behaviour
models without feature engineering
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Most wind turbines are remotely monitored 24/7 to allow for an early
detection of operation problems and developing damage. We present a new fault
detection method for vibration-monitored drivetrains that does not require any
feature engineering. Our method relies on a simple model architecture to enable
a straightforward implementation in practice. We propose to apply convolutional
autoencoders for identifying and extracting the most relevant features from the
half spectrum in an automated manner, saving time and effort. Thereby, a
spectral model of the normal vibration response is learnt for the monitored
component from past measurements. We demonstrate that the model can
successfully distinguish damaged from healthy components and detect a damaged
generator bearing and damaged gearbox parts from their vibration responses.
Using measurements from commercial wind turbines and a test rig, we show that
vibration-based fault detection in wind turbine drivetrains can be performed
without the usual upfront definition of spectral features. Another advantage of
the presented method is that the entire half spectrum is monitored instead of
the usual focus on monitoring individual frequencies and harmonics.
|
[
{
"version": "v1",
"created": "Fri, 24 Jun 2022 18:24:07 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Jonas",
"Stefan",
""
],
[
"Anagnostos",
"Dimitrios",
""
],
[
"Brodbeck",
"Bernhard",
""
],
[
"Meyer",
"Angela",
""
]
] |
new_dataset
| 0.999484 |
2206.12485
|
Neguine Rezaii
|
Neguine Rezaii
|
The syntax-lexicon tradeoff in writing
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As speakers turn their thoughts into sentences, they maintain a balance
between the complexity of words and syntax. However, it is unclear whether this
syntax-lexicon tradeoff is unique to spoken language production, which operates
under the pressure of rapid online processing. Alternatively, it is possible
that the tradeoff is a basic property of language irrespective of the modality
of production. This work evaluates the relationship between the complexity of
words and syntactic rules in the written language of neurotypical individuals
on three different topics. We found that similar to speaking, constructing
sentences in writing involves a tradeoff between the complexity of the lexical
and syntactic items. We also show that the reduced online processing demands
during writing allow for retrieving more complex words at the cost of
incorporating simpler syntax. This work further highlights the role of
accessibility of the elements of a sentence as the driving force in the
emergence of the syntax-lexicon tradeoff.
|
[
{
"version": "v1",
"created": "Fri, 24 Jun 2022 19:57:12 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Rezaii",
"Neguine",
""
]
] |
new_dataset
| 0.999298 |
2206.12495
|
Ted Anderson
|
Shashank Gugnani, Scott Guthridge, Frank Schmuck, Owen Anderson,
Deepavali Bhagwat, Xiaoyi Lu
|
Arcadia: A Fast and Reliable Persistent Memory Replicated Log
|
14 pages, 10 figures
| null | null | null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The performance properties of byte-addressable persistent memory (PMEM) have
the potential to significantly improve system performance over a wide spectrum
of applications. But persistent memory brings considerable new challenges to
the programmer: only 8-byte write atomicity, out of order flush and
availability limited by node failure. It's possible to work with the atomicity
and ordering constraints of PMEM directly by carefully sequencing the order of
store operations and inserting explicit flush and fence operations at each
ordering point. But this is tedious and error-prone: too many flush operations
defeat the performance benefits of PMEM, and even with generous use, it is
difficult to prove that a given program is crash-consistent. Logging is a great
abstraction to deal with these issues but prior work on PMEM logging has not
successfully hidden the idiosyncrasies of PMEM. Moreover, shortcomings in the
log interface and design have prevented attainment of full PMEM performance. We
believe that a log design that hides the idiosyncrasies from programmers while
delivering full performance is key to success. In this paper, we present the
design and implementation of Arcadia, a generic replicated log on PMEM to
address these problems. Arcadia handles atomicity, integrity, and replication
of log records to reduce programmer burden. Our design has several novel
aspects including concurrent log writes with in-order commit, atomicity and
integrity primitives for local and remote PMEM writes, and a frequency-based
log force policy for providing low overhead persistence with guaranteed bounded
loss of uncommitted records. Our evaluation shows that Arcadia outperforms
state-of-the-art PMEM logs, such as PMDK's libpmemlog, FLEX, and Query Fresh by
several times while providing stronger log record durability guarantees. We
expect Arcadia to become the leading off-the-shelf PMEM log design.
|
[
{
"version": "v1",
"created": "Fri, 24 Jun 2022 21:45:38 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Gugnani",
"Shashank",
""
],
[
"Guthridge",
"Scott",
""
],
[
"Schmuck",
"Frank",
""
],
[
"Anderson",
"Owen",
""
],
[
"Bhagwat",
"Deepavali",
""
],
[
"Lu",
"Xiaoyi",
""
]
] |
new_dataset
| 0.998605 |
2206.12523
|
Erico Lopes
|
Erico S. P. Lopes and Lukas T. N. Landau
|
MMSE Symbol Level Precoding Under a Per Antenna Power Constraint for
Multiuser MIMO Systems With PSK Modulation
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study proposes a symbol-level precoding algorithm based on the minimum
mean squared error design objective under a strict per antenna power constraint
for PSK modulation. The proposed design is then formulated in the standard form
of a second-order cone program, allowing for an optimal solution via the
interior point method. Numerical results indicate that the proposed design is
superior to the existing approaches in terms of bit-error-rate for the low and
intermediate SNR regime.
|
[
{
"version": "v1",
"created": "Sat, 25 Jun 2022 01:10:14 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Lopes",
"Erico S. P.",
""
],
[
"Landau",
"Lukas T. N.",
""
]
] |
new_dataset
| 0.994456 |
2206.12590
|
Xiaoliang Liu
|
Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie
|
RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition
using a Mobile and Compact Printer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face recognition has achieved considerable progress in recent years thanks to
the development of deep neural networks, but it has recently been discovered
that deep neural networks are vulnerable to adversarial examples. This means
that face recognition models or systems based on deep neural networks are also
susceptible to adversarial examples. However, the existing methods of attacking
face recognition models or systems with adversarial examples can effectively
complete white-box attacks but not black-box impersonation attacks, physical
attacks, or convenient attacks, particularly on commercial face recognition
systems. In this paper, we propose a new method to attack face recognition
models or systems called RSTAM, which enables an effective black-box
impersonation attack using an adversarial mask printed by a mobile and compact
printer. First, RSTAM enhances the transferability of the adversarial masks
through our proposed random similarity transformation strategy. Furthermore, we
propose a random meta-optimization strategy for ensembling several pre-trained
face models to generate more general adversarial masks. Finally, we conduct
experiments on the CelebA-HQ, LFW, Makeup Transfer (MT), and CASIA-FaceV5
datasets. The performance of the attacks is also evaluated on state-of-the-art
commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and
Microsoft. Extensive experiments show that RSTAM can effectively perform
black-box impersonation attacks on face recognition models or systems.
|
[
{
"version": "v1",
"created": "Sat, 25 Jun 2022 08:16:55 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Liu",
"Xiaoliang",
""
],
[
"Shen",
"Furao",
""
],
[
"Zhao",
"Jian",
""
],
[
"Nie",
"Changhai",
""
]
] |
new_dataset
| 0.993998 |
2206.12614
|
Juewen Peng
|
Juewen Peng, Zhiguo Cao, Xianrui Luo, Hao Lu, Ke Xian, Jianming Zhang
|
BokehMe: When Neural Rendering Meets Classical Rendering
|
Accepted by CVPR 2022 (Oral); Project:
https://juewenpeng.github.io/BokehMe/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose BokehMe, a hybrid bokeh rendering framework that marries a neural
renderer with a classical physically motivated renderer. Given a single image
and a potentially imperfect disparity map, BokehMe generates high-resolution
photo-realistic bokeh effects with adjustable blur size, focal plane, and
aperture shape. To this end, we analyze the errors from the classical
scattering-based method and derive a formulation to calculate an error map.
Based on this formulation, we implement the classical renderer by a
scattering-based method and propose a two-stage neural renderer to fix the
erroneous areas from the classical renderer. The neural renderer employs a
dynamic multi-scale scheme to efficiently handle arbitrary blur sizes, and it
is trained to handle imperfect disparity input. Experiments show that our
method compares favorably against previous methods on both synthetic image data
and real image data with predicted disparity. A user study is further conducted
to validate the advantage of our method.
|
[
{
"version": "v1",
"created": "Sat, 25 Jun 2022 10:00:32 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Peng",
"Juewen",
""
],
[
"Cao",
"Zhiguo",
""
],
[
"Luo",
"Xianrui",
""
],
[
"Lu",
"Hao",
""
],
[
"Xian",
"Ke",
""
],
[
"Zhang",
"Jianming",
""
]
] |
new_dataset
| 0.999487 |
2206.12653
|
Ding Li
|
Hong Zhang, Ding Li
|
Diagnostic Communication and Visual System based on Vehicle UDS Protocol
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unified Diagnostic Services (UDS) is a diagnostic communication protocol used
in electronic control units (ECUs) within automotive electronics, which is
specified in the ISO 14229-1. It is derived from ISO 14230-3 (KWP2000) and the
now obsolete ISO 15765-3 (Diagnostic Communication over Controller Area Network
(DoCAN). 'Unified' in this context means that it is an international and not a
company-specific standard. By now this communication protocol is used in all
new ECUs made by Tier 1 suppliers of Original Equipment Manufacturer (OEM), and
is incorporated into other standards, such as AUTOSAR. The ECUs in modern
vehicles control nearly all functions, including electronic fuel injection
(EFI), engine control, the transmission, the anti-lock braking system, door
locks, window operation, and more.
|
[
{
"version": "v1",
"created": "Sat, 25 Jun 2022 13:47:56 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Zhang",
"Hong",
""
],
[
"Li",
"Ding",
""
]
] |
new_dataset
| 0.983572 |
2206.12655
|
Haoran Li
|
Haoran Li, Christopher J. Ford, Matteo Bianchi, Manuel G. Catalano,
Efi Psomopoulou, Nathan F. Lepora
|
BRL/Pisa/IIT SoftHand: A Low-cost, 3D-Printed, Underactuated,
Tendon-Driven Hand with Soft and Adaptive Synergies
|
7 pages,9 figures,to be published in IEEE Robotics and Automation
Letters
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces the BRL/Pisa/IIT (BPI) SoftHand: a single
actuator-driven, low-cost, 3D-printed, tendon-driven, underactuated robot hand
that can be used to perform a range of grasping tasks. Based on the adaptive
synergies of the Pisa/IIT SoftHand, we design a new joint system and tendon
routing to facilitate the inclusion of both soft and adaptive synergies, which
helps us balance durability, affordability and grasping performance of the
hand. The focus of this work is on the design, simulation, synergies and
grasping tests of this SoftHand. The novel phalanges are designed and printed
based on linkages, gear pairs and geometric restraint mechanisms, and can be
applied to most tendon-driven robotic hands. We show that the robot hand can
successfully grasp and lift various target objects and adapt to hold complex
geometric shapes, reflecting the successful adoption of the soft and adaptive
synergies. We intend to open-source the design of the hand so that it can be
built cheaply on a home 3D-printer. For more detail:
https://sites.google.com/view/bpi-softhandtactile-group-bri/brlpisaiit-softhand-design
|
[
{
"version": "v1",
"created": "Sat, 25 Jun 2022 13:55:54 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Li",
"Haoran",
""
],
[
"Ford",
"Christopher J.",
""
],
[
"Bianchi",
"Matteo",
""
],
[
"Catalano",
"Manuel G.",
""
],
[
"Psomopoulou",
"Efi",
""
],
[
"Lepora",
"Nathan F.",
""
]
] |
new_dataset
| 0.998974 |
2206.12740
|
Stefan Denkovski
|
Stefan Denkovski, Shehroz S. Khan, Brandon Malamis, Sae Young Moon,
Bing Ye, Alex Mihailidis
|
Multi Visual Modality Fall Detection Dataset
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Falls are one of the leading causes of injury-related deaths among the elderly
worldwide. Effective detection of falls can reduce the risk of complications
and injuries. Fall detection can be performed using wearable devices or ambient
sensors; these methods may struggle with user compliance issues or false
alarms. Video cameras provide a passive alternative; however, regular RGB
cameras are impacted by changing lighting conditions and privacy concerns. From
a machine learning perspective, developing an effective fall detection system
is challenging because of the rarity and variability of falls. Many existing
fall detection datasets lack important real-world considerations, such as
varied lighting, continuous activities of daily living (ADLs), and camera
placement. The lack of these considerations makes it difficult to develop
predictive models that can operate effectively in the real world. To address
these limitations, we introduce a novel multi-modality dataset (MUVIM) that
contains four visual modalities: infra-red, depth, RGB and thermal cameras.
These modalities offer benefits such as obfuscated facial features and improved
performance in low-light conditions. We formulated fall detection as an anomaly
detection problem, in which a customized spatio-temporal convolutional
autoencoder was trained only on ADLs so that a fall would increase the
reconstruction error. Our results showed that infra-red cameras provided the
highest level of performance (AUC ROC=0.94), followed by thermal (AUC
ROC=0.87), depth (AUC ROC=0.86) and RGB (AUC ROC=0.83). This research provides
a unique opportunity to analyze the utility of camera modalities in detecting
falls in a home setting while balancing performance, passiveness, and privacy.
|
[
{
"version": "v1",
"created": "Sat, 25 Jun 2022 21:54:26 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Denkovski",
"Stefan",
""
],
[
"Khan",
"Shehroz S.",
""
],
[
"Malamis",
"Brandon",
""
],
[
"Moon",
"Sae Young",
""
],
[
"Ye",
"Bing",
""
],
[
"Mihailidis",
"Alex",
""
]
] |
new_dataset
| 0.999809 |
2206.12751
|
Diomadson Belfort
|
Mariana Villarim, Jo\~ao Marcos Costa and Diomadson Belfort
|
Implementation of SquashFS Support in U-Boot
| null | null | null | null |
cs.OS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
U-Boot is a well-known bootloader and open-source project. The objective of
this work was to add support for the SquashFS filesystem to U-Boot, and the
support developed was submitted as a contribution to the project. The bootloader is
responsible, in this context, for loading the kernel and the device tree blob
into RAM. It needs to be capable of reading a storage device's partition
formatted with a specific filesystem type. Adding this support allows U-Boot to
read from SquashFS partitions. The source code was submitted to U-Boot's
mailing list through a series of patches to be reviewed by one of the project's
maintainers. Once it is merged, the support will be used and modified by
U-Boot's international community.
|
[
{
"version": "v1",
"created": "Sat, 25 Jun 2022 23:56:45 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Villarim",
"Mariana",
""
],
[
"Costa",
"João Marcos",
""
],
[
"Belfort",
"Diomadson",
""
]
] |
new_dataset
| 0.970574 |
2206.12770
|
Md Jobair Hossain Faruk
|
Md Jobair Hossain Faruk, Hossain Shahriar, Maria Valero, Farhat Lamia
Barsha, Shahriar Sobhan, Md Abdullah Khan, Michael Whitman, Alfredo
Cuzzocreak, Dan Lo, Akond Rahman, Fan Wu
|
Malware Detection and Prevention using Artificial Intelligence
Techniques
| null |
2021 IEEE International Conference on Big Data (Big Data)
|
10.1109/BigData52589.2021.9671434
| null |
cs.CR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With the rapid technological advancement, security has become a major issue
due to the increase in malware activity that poses a serious threat to the
security and safety of both computer systems and stakeholders. To maintain
the security of stakeholders, particularly end users, protecting the data from
fraudulent efforts is one of the most pressing concerns. A set of malicious
programming code, scripts, active content, or intrusive software that is
designed to destroy intended computer systems and programs or mobile and web
applications is referred to as malware. According to a study, naive users are
unable to distinguish between malicious and benign applications. Thus, computer
systems and mobile applications should be designed to detect malicious
activities in order to protect stakeholders. A number of algorithms are
available to detect malware activities by utilizing novel concepts including
Artificial Intelligence, Machine Learning, and Deep Learning. In this study, we
emphasize Artificial Intelligence (AI) based techniques for detecting and
preventing malware activity. We present a detailed review of current malware
detection technologies, their shortcomings, and ways to improve efficiency. Our
study shows that adopting futuristic approaches for the development of malware
detection applications will provide significant advantages. This synthesis will
help researchers pursue further research on malware detection and prevention
using AI.
|
[
{
"version": "v1",
"created": "Sun, 26 Jun 2022 02:41:46 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Faruk",
"Md Jobair Hossain",
""
],
[
"Shahriar",
"Hossain",
""
],
[
"Valero",
"Maria",
""
],
[
"Barsha",
"Farhat Lamia",
""
],
[
"Sobhan",
"Shahriar",
""
],
[
"Khan",
"Md Abdullah",
""
],
[
"Whitman",
"Michael",
""
],
[
"Cuzzocreak",
"Alfredo",
""
],
[
"Lo",
"Dan",
""
],
[
"Rahman",
"Akond",
""
],
[
"Wu",
"Fan",
""
]
] |
new_dataset
| 0.988674 |
2206.12852
|
Elaheh Ataeebojd
|
Elaheh Ataeebojd, Mehdi Rasti, Hossein Pedram, and Pedro H. J.
Nardelli
|
Spectrum Sharing Among Multiple-Seller and Multiple-Buyer Operators of A
Mobile Network: A Stochastic Geometry Approach
|
17 pages, 11 figures
|
IEEE Transactions on Cognitive Communications and Networking, 2022
|
10.1109/TCCN.2022.3183898
| null |
cs.CG cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Sharing the spectrum among mobile network operators (MNOs) is a promising
approach to improve the spectrum utilization and to increase the monetary
income of MNOs. In this paper, we model a nonorthogonal spectrum sharing system
for a large-scale cellular network where multiple seller MNOs lease their
licensed sub-bands to several buyer MNOs. We first analyze the per-user
expected rate and the per-MNO expected profit using stochastic geometry. Then,
we formulate the joint problem of power control and licensed sub-band sharing
to maximize the expected profit of all MNOs as a multiobjective optimization
problem (MOOP) under the users' quality of service requirement and the
nonnegative return on investment constraints. To transform the MOOP into a
single objective form, we use a combination of the $\epsilon$-constraint and
weighted sum methods. However, the transformed problem is nonconvex because of
the presence of binary variables and nonconvex rate functions in the objective
function and constraints. We address this problem by using a penalty function
and approximating the nonconvex rate functions by a constrained stochastic
successive convex approximation method. Finally, the numerical results show the
correctness and performance of the proposed algorithm under various conditions.
|
[
{
"version": "v1",
"created": "Sun, 26 Jun 2022 11:44:17 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Ataeebojd",
"Elaheh",
""
],
[
"Rasti",
"Mehdi",
""
],
[
"Pedram",
"Hossein",
""
],
[
"Nardelli",
"Pedro H. J.",
""
]
] |
new_dataset
| 0.953268 |
2206.12926
|
Teddy Lazebnik Dr.
|
Teddy Lazebnik, Hanna Weitman, Yoav Goldberg, Gal A. Kaminka
|
Rivendell: Project-Based Academic Search Engine
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Finding relevant research literature in online databases is a familiar
challenge to all researchers. General search approaches trying to tackle this
challenge fall into two groups: one-time search and life-time search. We
observe that both approaches ignore unique attributes of the research domain
and are affected by concept drift. We posit that in searching for research
papers, a combination of a life-time search engine with an explicitly-provided
context (project) provides a solution to the concept drift problem. We
developed and deployed a project-based meta-search engine for research papers
called Rivendell. Using Rivendell, we conducted experiments with 199 subjects,
comparing project-based search performance to one-time and life-time search
engines, revealing an improvement of up to 12.8 percent in project-based search
compared to life-time search.
|
[
{
"version": "v1",
"created": "Sun, 26 Jun 2022 17:07:15 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Lazebnik",
"Teddy",
""
],
[
"Weitman",
"Hanna",
""
],
[
"Goldberg",
"Yoav",
""
],
[
"Kaminka",
"Gal A.",
""
]
] |
new_dataset
| 0.993986 |
2206.12941
|
Dogukan Aksu
|
A. Huzeyfe Demir, Berke Yavas, Mehmet Yazici, Dogukan Aksu, M. Ali
Aydin
|
Object Detection and Tracking with Autonomous UAV
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a combat Unmanned Air Vehicle (UAV) is modeled in the
simulation environment. The rotary-wing UAV successfully performs various
tasks such as locking onto targets, tracking, and sharing the relevant data
with surrounding vehicles. Different software technologies such as API
communication, ground control station configuration, autonomous movement
algorithms, computer vision, and deep learning are employed.
|
[
{
"version": "v1",
"created": "Sun, 26 Jun 2022 18:48:59 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Demir",
"A. Huzeyfe",
""
],
[
"Yavas",
"Berke",
""
],
[
"Yazici",
"Mehmet",
""
],
[
"Aksu",
"Dogukan",
""
],
[
"Aydin",
"M. Ali",
""
]
] |
new_dataset
| 0.994907 |
2206.12944
|
Anku Adhikari
|
Anku Adhikari, Samuel Guo, Paris Smaragdis, Marianne Winslett
|
Don't Look Up: Ubiquitous Data Exfiltration Pathways in Commercial
Spaces
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that as a side effect of building code requirements, almost all
commercial buildings today are vulnerable to a novel data exfiltration attack,
even if they are air-gapped and secured against traditional attacks. The new
attack uses vibrations from an inconspicuous transmitter to send data across
the building's physical infrastructure to a receiver. Our analysis and
experiments with several large real-world buildings show a single-frequency bit
rate of 300Kbps, which is sufficient to transmit ordinary files, real-time
MP3-quality audio, or periodic high-quality still photos. The attacker can use
multiple channels to transmit, for example, real-time MP4-quality video. We
discuss the difficulty of detecting the attack and the viability of various
potential countermeasures.
|
[
{
"version": "v1",
"created": "Sun, 26 Jun 2022 19:09:23 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Adhikari",
"Anku",
""
],
[
"Guo",
"Samuel",
""
],
[
"Smaragdis",
"Paris",
""
],
[
"Winslett",
"Marianne",
""
]
] |
new_dataset
| 0.961397 |
2206.12958
|
Sahaj Garg
|
Sahaj Garg
|
Szloca: towards a framework for full 3D tracking through a single camera
in context of interactive arts
| null | null | null | null |
cs.CV cs.HC cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
  Realtime virtual data of objects and human presence in a large area holds a
valuable key in enabling many experiences and applications in various
industries. With the exponential rise in the development of artificial
intelligence, computer vision has expanded the possibilities of tracking and
classifying things through video inputs alone, surpassing the limitations of
the most popular and common hardware setups traditionally used to detect human
pose and position, such as low field of view and limited tracking capacity. The
benefits of using computer vision in application development are large, as it
augments traditional input sources (like video streams) and can be integrated
in many environments and platforms. In the context of new media interactive
arts based on physical movements and spanning large areas or galleries, this
research presents a novel way and a framework for obtaining data and virtual
representations of objects/people - such as three-dimensional positions,
skeletons/poses, and masks - from a single RGB camera. Reviewing the state of
the art through recent developments and building on prior research in the field
of computer vision, the paper also proposes an original method to obtain
three-dimensional position data from monocular images; the model does not rely
on complex training of computer vision systems but combines prior computer
vision research and adds the capacity to represent z depth, i.e., to represent
a world position in three axes from a 2D input source.
|
[
{
"version": "v1",
"created": "Sun, 26 Jun 2022 20:09:47 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Garg",
"Sahaj",
""
]
] |
new_dataset
| 0.98549 |
2206.13056
|
Oguzhan Derebasi
|
Oguzhan Derebasi, Murat Isik, Oguzhan Demirag, Dilek Goksel Duru, Anup
Das
|
A Coupled Neural Circuit Design for Guillain-Barre Syndrome
| null | null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
  Guillain-Barre syndrome is a rare neurological condition in which the human
immune system attacks the peripheral nervous system. The peripheral nervous
system appears as a diffusively connected system of mathematical neuron models,
and the system's period becomes shorter than the periods of each neural
circuit. The stimuli in the conduction path that address the myelin sheath that
has lost its function are received by the axons and are conveyed externally to
the target organ, aiming to solve the problem of decreased nerve conduction. In
the NEURON simulation environment, one can create a neuron model and define the
biophysical events that take place within the system for study. In this
environment, signal transmission between cells and dendrites is obtained
graphically. The simulated potassium and sodium conductances are replicated
adequately, and the electronic action potentials are quite comparable to those
measured experimentally. In this work, we propose an analog and digital coupled
neuron model comprising individual excitatory and inhibitory neural circuit
blocks for a low-cost and energy-efficient system. Compared to the digital
design, our analog design operates at a lower frequency with a 32.3\% decrease
in energy efficiency. Thus, the resulting coupled analog hardware neuron model
can serve as a proposed model for the simulation of reduced nerve conduction.
As a result, the analog coupled neuron (even with its greater design
complexity) is a serious contender for the future development of a wearable
sensor device that could help with Guillain-Barre syndrome and other neurologic
diseases.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 05:40:04 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Derebasi",
"Oguzhan",
""
],
[
"Isik",
"Murat",
""
],
[
"Demirag",
"Oguzhan",
""
],
[
"Duru",
"Dilek Goksel",
""
],
[
"Das",
"Anup",
""
]
] |
new_dataset
| 0.999061 |
2206.13117
|
Chao Liu
|
Chao Liu, Jianwei Guo, Dong-Ming Yan, Zhirong Liang, Xiaopeng Zhang,
Zhanglin Cheng
|
SARNet: Semantic Augmented Registration of Large-Scale Urban Point
Clouds
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Registering urban point clouds is quite a challenging task due to the large
scale, noise, and data incompleteness of LiDAR scanning data. In this paper, we
propose SARNet, a novel semantic augmented registration network aimed at
achieving efficient registration of urban point clouds at city scale.
Different from previous methods that construct correspondences only in the
point-level space, our approach fully exploits semantic features as assistance
to improve registration accuracy. Specifically, we extract per-point semantic
labels with advanced semantic segmentation networks and build a prior semantic
part-to-part correspondence. Then we incorporate the semantic information into
a learning-based registration pipeline, consisting of three core modules: a
semantic-based farthest point sampling module to efficiently filter out
outliers and dynamic objects; a semantic-augmented feature extraction module
for learning more discriminative point descriptors; and a semantic-refined
transformation estimation module that utilizes prior semantic matching as a
mask to refine point correspondences by reducing false matching for better
convergence. We evaluate the proposed SARNet extensively by using real-world
data from large regions of urban scenes and comparing it with alternative
methods. The code is available at
https://github.com/WinterCodeForEverything/SARNet.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 08:49:11 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Liu",
"Chao",
""
],
[
"Guo",
"Jianwei",
""
],
[
"Yan",
"Dong-Ming",
""
],
[
"Liang",
"Zhirong",
""
],
[
"Zhang",
"Xiaopeng",
""
],
[
"Cheng",
"Zhanglin",
""
]
] |
new_dataset
| 0.994122 |
2206.13135
|
Shuhao Deng
|
Chengfei Li, Shuhao Deng, Yaoping Wang, Guangjing Wang, Yaguang Gong,
Changbin Chen and Jinfeng Bai
|
TALCS: An Open-Source Mandarin-English Code-Switching Corpus and a
Speech Recognition Baseline
|
accepted by INTERSPEECH 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  This paper introduces a new corpus for Mandarin-English code-switching speech
recognition, the TALCS corpus, suitable for training and evaluating
code-switching speech recognition systems. The TALCS corpus is derived from
real online one-to-one English teaching scenes in TAL education group, and
contains roughly 587 hours of speech sampled at 16 kHz. To the best of our
knowledge, the TALCS corpus is the largest well-labeled Mandarin-English
code-switching open-source automatic speech recognition (ASR) dataset in the
world. In this paper, we introduce the recording procedure in detail, including
audio capturing devices and corpus environments. The TALCS corpus is freely
available for download under a permissive license. Using the TALCS corpus, we
conduct ASR experiments in two popular speech recognition toolkits, ESPnet and
Wenet, to build a baseline system. The Mixture Error Rate (MER) performance of
the two speech recognition toolkits is compared on the TALCS corpus. The
experimental results imply that the quality of the audio recordings and
transcriptions is promising and that the baseline system is workable.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 09:30:25 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Li",
"Chengfei",
""
],
[
"Deng",
"Shuhao",
""
],
[
"Wang",
"Yaoping",
""
],
[
"Wang",
"Guangjing",
""
],
[
"Gong",
"Yaguang",
""
],
[
"Chen",
"Changbin",
""
],
[
"Bai",
"Jinfeng",
""
]
] |
new_dataset
| 0.999766 |
2206.13155
|
Chuwei Luo
|
Chuwei Luo, Guozhi Tang, Qi Zheng, Cong Yao, Lianwen Jin, Chenliang
Li, Yang Xue, Luo Si
|
Bi-VLDoc: Bidirectional Vision-Language Modeling for Visually-Rich
Document Understanding
|
Under review
| null | null | null |
cs.CV cs.CL cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal document pre-trained models have proven to be very effective in a
variety of visually-rich document understanding (VrDU) tasks. Though existing
document pre-trained models have achieved excellent performance on standard
benchmarks for VrDU, the way they model and exploit the interactions between
vision and language on documents has hindered them from better generalization
ability and higher accuracy. In this work, we investigate the problem of
vision-language joint representation learning for VrDU mainly from the
perspective of supervisory signals. Specifically, a pre-training paradigm
called Bi-VLDoc is proposed, in which a bidirectional vision-language
supervision strategy and a vision-language hybrid-attention mechanism are
devised to fully explore and utilize the interactions between these two
modalities, to learn stronger cross-modal document representations with richer
semantics. Benefiting from the learned informative cross-modal document
representations, Bi-VLDoc significantly advances the state-of-the-art
performance on three widely-used document understanding benchmarks, including
Form Understanding (from 85.14% to 93.44%), Receipt Information Extraction
(from 96.01% to 97.84%), and Document Classification (from 96.08% to 97.12%).
On Document Visual QA, Bi-VLDoc achieves the state-of-the-art performance
compared to previous single model methods.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 09:58:34 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Luo",
"Chuwei",
""
],
[
"Tang",
"Guozhi",
""
],
[
"Zheng",
"Qi",
""
],
[
"Yao",
"Cong",
""
],
[
"Jin",
"Lianwen",
""
],
[
"Li",
"Chenliang",
""
],
[
"Xue",
"Yang",
""
],
[
"Si",
"Luo",
""
]
] |
new_dataset
| 0.994632 |
2206.13162
|
Marc Sanchez-Artigas
|
Raul Saiz-Laudo, Marc Sanchez-Artigas
|
EGEON: Software-Defined Data Protection for Object Storage
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
With the growth in popularity of cloud computing, object storage systems
(e.g., Amazon S3, OpenStack Swift, Ceph) have gained momentum for their
relatively low per-GB costs and high availability. However, as increasingly
more sensitive data is being accrued, the need to natively integrate privacy
controls into the storage is growing in relevance. Today, due to the poor
object storage interface, privacy controls are enforced by data curators with
full access to data in the clear. This motivates the need for a new approach to
data privacy that can provide strong assurance and control to data owners. To
fulfill this need, this paper presents EGEON, a novel software-defined data
protection framework for object storage. EGEON enables users to declaratively
set privacy policies on how their data can be shared. In the privacy policies,
the users can build complex data protection services through the composition of
data transformations, which are invoked inline by EGEON upon a read request. As
a result, data owners can trivially display multiple views of the same piece of
data, and modify these views by only updating the policies, all without
restructuring the internals of the underlying object storage system. The EGEON
prototype has been built atop OpenStack Swift. Evaluation results show promise
in developing data protection services with little overhead directly into the
object store. Further, depending on the amount of data filtered out in the
transformed views, end-to-end latency can be low due to the savings in network
communication.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 10:10:41 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Saiz-Laudo",
"Raul",
""
],
[
"Sanchez-Artigas",
"Marc",
""
]
] |
new_dataset
| 0.998671 |
2206.13199
|
Markus Sch\"on
|
Markus Sch\"on, Michael Buchholz, Klaus Dietmayer
|
MGNet: Monocular Geometric Scene Understanding for Autonomous Driving
| null |
2021 IEEE/CVF International Conference on Computer Vision (ICCV),
2021, pp. 15784-15795
|
10.1109/ICCV48922.2021.01551
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce MGNet, a multi-task framework for monocular geometric scene
understanding. We define monocular geometric scene understanding as the
combination of two known tasks: Panoptic segmentation and self-supervised
monocular depth estimation. Panoptic segmentation captures the full scene not
only semantically, but also on an instance basis. Self-supervised monocular
depth estimation uses geometric constraints derived from the camera measurement
model in order to measure depth from monocular video sequences only. To the
best of our knowledge, we are the first to propose the combination of these two
tasks in a single model. Our model is designed with a focus on low latency to
provide fast inference in real time on a single consumer-grade GPU. During
deployment, our model produces dense 3D point clouds with instance-aware
semantic labels from single high-resolution camera images. We evaluate our
model on two popular autonomous driving benchmarks, i.e., Cityscapes and KITTI,
and show competitive performance among other real-time capable methods. Source
code is available at https://github.com/markusschoen/MGNet.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 11:27:55 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Schön",
"Markus",
""
],
[
"Buchholz",
"Michael",
""
],
[
"Dietmayer",
"Klaus",
""
]
] |
new_dataset
| 0.975272 |
2206.13217
|
Daniel Mitropolsky
|
Daniel Mitropolsky, Adiba Ejaz, Mirah Shi, Mihalis Yannakakis,
Christos H. Papadimitriou
|
Center-Embedding and Constituency in the Brain and a New
Characterization of Context-Free Languages
|
NALOMA 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
A computational system implemented exclusively through the spiking of neurons
was recently shown capable of syntax, that is, of carrying out the dependency
parsing of simple English sentences. We address two of the most important
questions left open by that work: constituency (the identification of key parts
of the sentence such as the verb phrase) and the processing of dependent
sentences, especially center-embedded ones. We show that these two aspects of
language can also be implemented by neurons and synapses in a way that is
compatible with what is known, or widely believed, about the structure and
function of the language organ. Surprisingly, the way we implement center
embedding points to a new characterization of context-free languages.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 12:11:03 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Mitropolsky",
"Daniel",
""
],
[
"Ejaz",
"Adiba",
""
],
[
"Shi",
"Mirah",
""
],
[
"Yannakakis",
"Mihalis",
""
],
[
"Papadimitriou",
"Christos H.",
""
]
] |
new_dataset
| 0.99883 |
2206.13325
|
Guang Yang
|
Chi Yu, Guang Yang, Xiang Chen, Ke Liu, Yanlin Zhou
|
BashExplainer: Retrieval-Augmented Bash Code Comment Generation based on
Fine-tuned CodeBERT
|
Accepted in ICSME2022
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Developers use shell commands for many tasks, such as file system management,
network control, and process management. Bash is one of the most commonly used
shells and plays an important role in Linux system development and maintenance.
Due to the language flexibility of Bash code, developers who are not familiar
with Bash often have difficulty understanding the purpose and functionality of
Bash code. In this study, we study the Bash code comment generation problem and
propose an automatic method, BashExplainer, based on a two-stage training
strategy. In the first stage, we train a Bash encoder by fine-tuning CodeBERT
on our constructed Bash code corpus. In the second stage, we first retrieve the
most similar code from the code repository for the target code based on
semantic and lexical similarity. Then we use the trained Bash encoder to
generate two vector representations. Finally, we fuse these two vector
representations via the fusion layer and generate the code comment through the
decoder. To show the competitiveness of our proposed method, we construct a
high-quality corpus by combining the corpus shared in the previous NL2Bash
study and the corpus shared in the NLC2CMD competition. This corpus contains
10,592 Bash codes and corresponding comments. Then we selected ten baselines
from previous studies on automatic code comment generation, which cover
information retrieval methods, deep learning methods, and hybrid methods.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 14:13:37 GMT"
}
] | 2022-06-28T00:00:00 |
[
[
"Yu",
"Chi",
""
],
[
"Yang",
"Guang",
""
],
[
"Chen",
"Xiang",
""
],
[
"Liu",
"Ke",
""
],
[
"Zhou",
"Yanlin",
""
]
] |
new_dataset
| 0.999367 |