| id (string, 9–10 chars) | submitter (string, 2–52 chars, nullable) | authors (string, 4–6.51k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, nullable) | journal-ref (string, 4–345 chars, nullable) | doi (string, 11–120 chars, nullable) | report-no (string, 2–243 chars, nullable) | categories (string, 5–98 chars) | license (string, 9 classes) | abstract (string, 33–3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95–1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2212.13398
|
Vaclav Skala
|
Vaclav Skala
|
Poseidon: Non-server WEB Forms Off-line Processing System
|
Draft of the paper submitted to International Journal of Computers,
ISSN: 2367-8895
| null | null | null |
cs.NI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
  The proposed Poseidon system is based on emailing filled-in forms instead of
WEB server based services. This approach is especially convenient for small
applications or small and medium-sized companies. It is based on PDF forms that
are available on a WEB page. PDF forms can be downloaded, filled in off-line,
printed, and finally sent by email for final processing. The data are stored
in the local outbox until a connection to a mail server is available. This
follows the idea of sending a standard "paper" letter. The filled-in data are
actually sent when the user is on-line, so the user does not need to be
on-line while filling in the forms. When the PDF form is processed on the
recipient side, the answer is sent back via email as well. Typical applications
include conference management systems, systems for submission to journals, etc.
The great advantage of using PDF forms is that they can easily be created or
modified by a non-specialized administrative person.
|
[
{
"version": "v1",
"created": "Tue, 27 Dec 2022 07:57:07 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Skala",
"Vaclav",
""
]
] |
new_dataset
| 0.999492 |
2212.13421
|
Reza Hooshmand
|
Reza Hooshmand, Farhad Naserizadeh, and Jalil Mazloum
|
Hardware Implementation of a Polar Code-based Public Key Cryptosystem
|
19 pages, 15 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
  In recent years, there have been many studies on quantum computing and on the
construction of quantum computers capable of breaking conventional
number theory-based public key cryptosystems. Therefore, in the not-too-distant
future, we will need public key cryptosystems that withstand attacks
executed by quantum computers, so-called post-quantum cryptosystems. A public
key cryptosystem based on polar codes (PKC-PC) has recently been introduced
whose security depends on the difficulty of solving the general decoding
problem of polar codes. In this paper, we first implement the encryption, key
generation and decryption algorithms of PKC-PC on a Raspberry Pi 3. Then, to
evaluate its performance, we measure several related parameters such as
execution time, energy consumption, memory consumption and CPU utilization. All
these metrics are investigated for the encryption/decryption algorithms of
PKC-PC with various polar code parameters. In the next step, the investigated
parameters are compared to those of an implementation of the McEliece public
key cryptosystem. These analyses show that the execution time of
encryption/decryption as well as the energy and memory consumption of PKC-PC
are lower than those of the McEliece cryptosystem.
|
[
{
"version": "v1",
"created": "Tue, 27 Dec 2022 09:29:04 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Hooshmand",
"Reza",
""
],
[
"Naserizadeh",
"Farhad",
""
],
[
"Mazloum",
"Jalil",
""
]
] |
new_dataset
| 0.985347 |
2212.13452
|
Kaixin Lin
|
Jiajing Wu, Kaixin Lin, Dan Lin, Ziye Zheng, Huawei Huang, and Zibin
Zheng
|
Financial Crimes in Web3-empowered Metaverse: Taxonomy, Countermeasures,
and Opportunities
|
  24 pages, 6 figures, 140 references, submitted to the Open Journal of
the Computer Society
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
At present, the concept of metaverse has sparked widespread attention from
the public to major industries. With the rapid development of blockchain and
Web3 technologies, the decentralized metaverse ecology has attracted a large
influx of users and capital.
Due to the lack of industry standards and regulatory rules, the
Web3-empowered metaverse ecosystem has witnessed a variety of financial crimes,
such as scams, code exploits, wash trading, money laundering, and illegal
services and shops. To this end, it is especially urgent and critical to
summarize and classify the financial security threats on the Web3-empowered
metaverse in order to maintain the long-term healthy development of its
ecology.
In this paper, we first outline the background, foundation, and applications
of the Web3 metaverse. Then, we provide a comprehensive overview and taxonomy
of the security risks and financial crimes that have emerged since the
development of the decentralized metaverse. For each financial crime, we focus
on three issues: a) existing definitions, b) relevant cases and analysis, and
c) existing academic research on this type of crime. Next, from the perspective
of academic research and government policy, we summarize the current anti-crime
measures and technologies in the metaverse. Finally, we discuss the
opportunities and challenges in behavioral mining and the potential regulation
of financial activities in the metaverse.
The overview of this paper is expected to help readers better understand the
potential security threats in this emerging ecology, and to provide insights
and references for financial crime fighting.
|
[
{
"version": "v1",
"created": "Tue, 27 Dec 2022 11:27:55 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Wu",
"Jiajing",
""
],
[
"Lin",
"Kaixin",
""
],
[
"Lin",
"Dan",
""
],
[
"Zheng",
"Ziye",
""
],
[
"Huang",
"Huawei",
""
],
[
"Zheng",
"Zibin",
""
]
] |
new_dataset
| 0.998722 |
2212.13492
|
Longxu Dou
|
Longxu Dou, Yan Gao, Mingyang Pan, Dingzirui Wang, Wanxiang Che,
Dechen Zhan, Jian-Guang Lou
|
MultiSpider: Towards Benchmarking Multilingual Text-to-SQL Semantic
Parsing
|
AAAI2023 Main Conference. Code:
https://github.com/microsoft/ContextualSP
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-SQL semantic parsing is an important NLP task, which greatly
facilitates the interaction between users and the database and becomes the key
component in many human-computer interaction systems. Much recent progress in
text-to-SQL has been driven by large-scale datasets, but most of them are
centered on English. In this work, we present MultiSpider, the largest
multilingual text-to-SQL dataset which covers seven languages (English, German,
French, Spanish, Japanese, Chinese, and Vietnamese). Upon MultiSpider, we
further identify the lexical and structural challenges of text-to-SQL (caused
by language-specific properties and dialectal expressions) and their intensity across
different languages. Experimental results under three typical settings
(zero-shot, monolingual and multilingual) reveal a 6.1% absolute drop in
accuracy in non-English languages. Qualitative and quantitative analyses are
conducted to understand the reason for the performance drop of each language.
Besides the dataset, we also propose a simple schema augmentation framework
SAVe (Schema-Augmentation-with-Verification), which significantly boosts the
overall performance by about 1.8% and closes the 29.5% performance gap across
languages.
|
[
{
"version": "v1",
"created": "Tue, 27 Dec 2022 13:58:30 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Dou",
"Longxu",
""
],
[
"Gao",
"Yan",
""
],
[
"Pan",
"Mingyang",
""
],
[
"Wang",
"Dingzirui",
""
],
[
"Che",
"Wanxiang",
""
],
[
"Zhan",
"Dechen",
""
],
[
"Lou",
"Jian-Guang",
""
]
] |
new_dataset
| 0.996761 |
2212.13607
|
Xiaojun Xu
|
Xiaojun Xu, Yue Yu, Hanzhang Wang, Alok Lal, Carl A. Gunter, Bo Li
|
EDoG: Adversarial Edge Detection For Graph Neural Networks
|
Accepted by IEEE Conference on Secure and Trustworthy Machine
Learning 2023
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Neural Networks (GNNs) have been widely applied to different tasks such
as bioinformatics, drug design, and social networks. However, recent studies
have shown that GNNs are vulnerable to adversarial attacks which aim to mislead
the node or subgraph classification prediction by adding subtle perturbations.
Detecting these attacks is challenging due to the small magnitude of
perturbation and the discrete nature of graph data. In this paper, we propose a
general adversarial edge detection pipeline, EDoG, based on graph generation,
which does not require knowledge of the attack strategies. Specifically, we propose a
novel graph generation approach combined with link prediction to detect
suspicious adversarial edges. To effectively train the graph generative model,
we sample several sub-graphs from the given graph data. We show that since the
number of adversarial edges is usually low in practice, with low probability
the sampled sub-graphs will contain adversarial edges based on the union bound.
In addition, considering the strong attacks which perturb a large number of
edges, we propose a set of novel features to perform outlier detection as the
preprocessing for our detection. Extensive experimental results on three
real-world graph datasets including a private transaction rule dataset from a
major company and two types of synthetic graphs with controlled properties show
that EDoG can achieve above 0.8 AUC against four state-of-the-art unseen attack
strategies without requiring any knowledge about the attack type; and around
0.85 with knowledge of the attack type. EDoG significantly outperforms
traditional malicious edge detection baselines. We also show that it is
difficult for an adaptive attack with full knowledge of our detection pipeline
to bypass it.
|
[
{
"version": "v1",
"created": "Tue, 27 Dec 2022 20:42:36 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Xu",
"Xiaojun",
""
],
[
"Yu",
"Yue",
""
],
[
"Wang",
"Hanzhang",
""
],
[
"Lal",
"Alok",
""
],
[
"Gunter",
"Carl A.",
""
],
[
"Li",
"Bo",
""
]
] |
new_dataset
| 0.983512 |
2212.13689
|
Guanqun Song
|
Guanqun Song, Ting Zhu
|
ML-based Secure Low-Power Communication in Adversarial Contexts
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  As wireless network technology becomes increasingly popular, mutual
interference between various signals has become more severe and more common.
As a result, the transmission of a device's own signal is often interfered
with by other transmissions occupying the channel. Especially in an
adversarial environment, jamming causes great harm to the security of
information transmission. We therefore propose ML-based secure ultra-low power
communication, an approach that uses machine learning to predict future
wireless traffic by capturing patterns in past wireless traffic, so as to
ensure ultra-low-power transmission of signals via backscatter. To better suit
the adversarial environment, we use backscatter to achieve ultra-low power
signal transmission, and use frequency-hopping technology to successfully
counter jamming. In the end, we achieved a prediction success rate of 96.19%.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 04:09:25 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Song",
"Guanqun",
""
],
[
"Zhu",
"Ting",
""
]
] |
new_dataset
| 0.990417 |
2212.13695
|
Ye Wang
|
Ye Wang, Rui Ma, Xiaoqing Ma, Honghua Cui, Yubin Xiao, Xuan Wu, You
Zhou
|
Shape-Aware Fine-Grained Classification of Erythroid Cells
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine-grained classification and counting of bone marrow erythroid cells are
vital for evaluating the health status and formulating therapeutic schedules
for leukemia or hematopathy. Due to the subtle visual differences between
different types of erythroid cells, it is challenging to apply existing
image-based deep learning models for fine-grained erythroid cell
classification. Moreover, there is no large open-source datasets on erythroid
cells to support the model training. In this paper, we introduce BMEC (Bone
Morrow Erythroid Cells), the first large fine-grained image dataset of
erythroid cells, to facilitate more deep learning research on erythroid cells.
BMEC contains 5,666 images of individual erythroid cells, each of which is
extracted from the bone marrow erythroid cell smears and professionally
annotated to one of the four types of erythroid cells. To distinguish the
erythroid cells, one key indicator is the cell shape, which is closely related
to cell growth and maturation. Therefore, we design a novel shape-aware
image classification network for fine-grained erythroid cell classification.
The shape feature is extracted from the shape mask image and aggregated to the
raw image feature with a shape attention module. With the shape-attended image
feature, our network achieved superior classification performance (81.12\%
top-1 accuracy) on the BMEC dataset compared to the baseline methods. Ablation
studies also demonstrate the effectiveness of incorporating the shape
information for the fine-grained cell classification. To further verify the
generalizability of our method, we tested our network on two additional public
white blood cells (WBC) datasets and the results show our shape-aware method
can generally outperform recent state-of-the-art works on classifying the WBC.
The code and BMEC dataset can be found on https://github.com/wangye8899/BMEC.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 04:43:25 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Wang",
"Ye",
""
],
[
"Ma",
"Rui",
""
],
[
"Ma",
"Xiaoqing",
""
],
[
"Cui",
"Honghua",
""
],
[
"Xiao",
"Yubin",
""
],
[
"Wu",
"Xuan",
""
],
[
"Zhou",
"You",
""
]
] |
new_dataset
| 0.978467 |
2212.13709
|
Gautam Choudhary
|
Gautam Choudhary, Iftikhar Ahamath Burhanuddin, Eunyee Koh, Fan Du,
and Ryan A. Rossi
|
PersonaSAGE: A Multi-Persona Graph Neural Network
|
10 pages, 6 figures, 7 tables
| null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Neural Networks (GNNs) have become increasingly important in recent
years due to their state-of-the-art performance on many important downstream
applications. Existing GNNs have mostly focused on learning a single node
representation, even though a node often exhibits polysemous behavior in
different contexts. In this work, we develop a persona-based graph neural
network framework called PersonaSAGE that learns multiple persona-based
embeddings for each node in the graph. Such disentangled representations are
more interpretable and useful than a single embedding. Furthermore, PersonaSAGE
learns the appropriate set of persona embeddings for each node in the graph,
and every node can have a different number of assigned persona embeddings. The
framework's flexible, general design makes the learned embeddings widely
applicable across domains. We utilize publicly
available benchmark datasets to evaluate our approach against a variety of
baselines. The experiments demonstrate the effectiveness of PersonaSAGE for a
variety of important tasks including link prediction where we achieve an
average gain of 15% while remaining competitive for node classification.
Finally, we also demonstrate the utility of PersonaSAGE with a case study for
personalized recommendation of different entity types in a data management
platform.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 05:50:38 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Choudhary",
"Gautam",
""
],
[
"Burhanuddin",
"Iftikhar Ahamath",
""
],
[
"Koh",
"Eunyee",
""
],
[
"Du",
"Fan",
""
],
[
"Rossi",
"Ryan A.",
""
]
] |
new_dataset
| 0.996865 |
2212.13733
|
In-Kwon Lee
|
June-Young Hwang, Soon-Uk Kwon, Yong-Hun Cho, Sang-Bin Jeon, In-Kwon
Lee
|
Redirected Walking in Infinite Virtual Indoor Environment Using
Change-blindness
|
https://www.youtube.com/watch?v=s-ZKavhXxdk
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We present a change-blindness based redirected walking algorithm that allows
a user to explore on foot a virtual indoor environment consisting of an
infinite number of rooms while at the same time ensuring collision-free walking
for the user in real space. This method uses change blindness to scale and
translate the room without the user's awareness by moving the wall while the
user is not looking. Consequently, the virtual room containing the current user
always exists in the valid real space. We measured the detection threshold for
whether the user recognizes the movement of the wall outside the field of view.
Then, we used the measured detection threshold to determine how much the
room's dimensions can be changed by moving that wall. We conducted a
live-user experiment to navigate the same virtual environment using the
proposed method and other existing methods. As a result, users reported higher
usability, presence, and immersion when using the proposed method while showing
reduced motion sickness compared to other methods. Hence, our approach can be
used to implement applications to allow users to explore an infinitely large
virtual indoor environment such as virtual museum and virtual model house while
simultaneously walking in a small real space, giving users a more realistic
experience.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 07:44:47 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Hwang",
"June-Young",
""
],
[
"Kwon",
"Soon-Uk",
""
],
[
"Cho",
"Yong-Hun",
""
],
[
"Jeon",
"Sang-Bin",
""
],
[
"Lee",
"In-Kwon",
""
]
] |
new_dataset
| 0.956708 |
2212.13754
|
Niels Mommen
|
Niels Mommen, Bart Jacobs
|
Verification of C++ Programs with VeriFast
|
20 pages
| null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
VeriFast is a prototype tool based on separation logic for modular
verification of C and Java programs. We are in the process of adding support
for C++. In this report, we describe the features of C++ for which we added
support so far, as well as the proof obligations we generate for these
features. At this point, VeriFast has basic support for most object-oriented
programming features of C++: member functions, member function and operator
overloading, implicit and explicit conversions, constructors and initializer
lists, destructors, reference types, allocation and deallocation on the stack
or on the heap (using new and delete), inheritance (including multiple
inheritance but not virtual base classes), and virtual member functions and
overriding. To support specification of inheritance hierarchies, we added
support for instance predicates, which can be introduced in a base class and
overridden in derived classes. The main missing feature at this point is
support for C++ templates, which we plan to work on next.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 09:05:06 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Mommen",
"Niels",
""
],
[
"Jacobs",
"Bart",
""
]
] |
new_dataset
| 0.981238 |
2212.13766
|
Zimian Wei
|
Zimian Wei, Hengyue Pan, Xin Niu, Dongsheng Li
|
OVO: One-shot Vision Transformer Search with Online distillation
|
arXiv admin note: substantial text overlap with arXiv:2107.00651 by
other authors
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pure transformers have shown great potential for vision tasks recently.
However, their accuracy on small or medium datasets is not satisfactory.
Although some existing methods introduce a CNN as a teacher to guide the
training process by distillation, the gap between teacher and student networks
would lead to sub-optimal performance. In this work, we propose a new One-shot
Vision transformer search framework with Online distillation, namely OVO. OVO
samples sub-nets for both teacher and student networks for better distillation
results. Benefiting from the online distillation, thousands of subnets in the
supernet are well-trained without extra finetuning or retraining. In
experiments, OVO-Ti achieves 73.32% top-1 accuracy on ImageNet and 75.2% on
CIFAR-100.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 10:08:55 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Wei",
"Zimian",
""
],
[
"Pan",
"Hengyue",
""
],
[
"Niu",
"Xin",
""
],
[
"Li",
"Dongsheng",
""
]
] |
new_dataset
| 0.995838 |
2212.13768
|
Tiziano De Matteis
|
Johannes de Fine Licht, Tiziano De Matteis, Tal Ben-Nun, Andreas
Kuster, Oliver Rausch, Manuel Burger, Carl-Johannes Johnsen, Torsten Hoefler
|
Python FPGA Programming with Data-Centric Multi-Level Design
| null | null | null | null |
cs.DC cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although high-level synthesis (HLS) tools have significantly improved
programmer productivity over hardware description languages, developing for
FPGAs remains tedious and error prone. Programmers must learn and implement a
large set of vendor-specific syntax, patterns, and tricks to optimize (or even
successfully compile) their applications, while dealing with ever-changing
toolflows from the FPGA vendors. We propose a new way to develop, optimize, and
compile FPGA programs. The Data-Centric parallel programming (DaCe) framework
allows applications to be defined by their dataflow and control flow through
the Stateful DataFlow multiGraph (SDFG) representation, capturing the abstract
program characteristics, and exposing a plethora of optimization opportunities.
In this work, we show how extending SDFGs with multi-level Library Nodes
incorporates both domain-specific and platform-specific optimizations into the
design flow, enabling knowledge transfer across application domains and FPGA
vendors. We present the HLS-based FPGA code generation backend of DaCe, and
show how SDFGs are code generated for either FPGA vendor, emitting efficient
HLS code that is structured and annotated to implement the desired
architecture.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 10:15:51 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Licht",
"Johannes de Fine",
""
],
[
"De Matteis",
"Tiziano",
""
],
[
"Ben-Nun",
"Tal",
""
],
[
"Kuster",
"Andreas",
""
],
[
"Rausch",
"Oliver",
""
],
[
"Burger",
"Manuel",
""
],
[
"Johnsen",
"Carl-Johannes",
""
],
[
"Hoefler",
"Torsten",
""
]
] |
new_dataset
| 0.999282 |
2212.13801
|
Abuzer Yakaryilmaz
|
Abuzer Yakaryılmaz
|
Classical and quantum Merlin-Arthur automata
|
14 pages
| null | null | null |
cs.FL cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  We introduce Merlin-Arthur (MA) automata, in which Merlin provides a single
certificate that is scanned by Arthur before reading the input. We define
Merlin-Arthur deterministic, probabilistic, and quantum finite state automata
(resp., MA-DFAs, MA-PFAs, MA-QFAs) and postselecting MA-PFAs and MA-QFAs
(resp., MA-PostPFA and MA-PostQFA). We obtain several results using different
certificate lengths.
We show that MA-DFAs use constant length certificates, and they are
equivalent to multi-entry DFAs. Thus, they recognize all and only regular
languages but can be exponential and polynomial state efficient over binary and
unary languages, respectively. With sublinear length certificates, MA-PFAs can
recognize several nonstochastic unary languages with cutpoint 1/2. With linear
length certificates, MA-PostPFAs recognize the same nonstochastic unary
languages with bounded error. With arbitrarily long certificates, bounded-error
MA-PostPFAs verify every unary decidable language. With sublinear length
certificates, bounded-error MA-PostQFAs verify several nonstochastic unary
languages. With linear length certificates, they can verify every unary
language and some NP-complete binary languages. With exponential length
certificates, they can verify every binary language.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 12:46:18 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Yakaryılmaz",
"Abuzer",
""
]
] |
new_dataset
| 0.987402 |
2212.13843
|
Haiwei Dong
|
Ying Qiu, Yang Liu, Juan Arteaga-Falconi, Haiwei Dong, and
Abdulmotaleb El Saddik
|
EVM-CNN: Real-Time Contactless Heart Rate Estimation from Facial Video
| null |
IEEE Transactions on Multimedia, vol. 21, no. 7, pp. 1778-1787,
2019
|
10.1109/TMM.2018.2883866
| null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increase in health consciousness, noninvasive body monitoring has
aroused interest among researchers. In recent years, researchers have remotely
estimated the heart rate (HR), one of the most important pieces of
physiological information, from facial videos. Although progress has been made over
the past few years, there are still some limitations, like the processing time
increasing with accuracy and the lack of comprehensive and challenging datasets
for use and comparison. Recently, it was shown that HR information can be
extracted from facial videos by spatial decomposition and temporal filtering.
Inspired by this, a new framework is introduced in this paper to remotely
estimate the HR under realistic conditions by combining spatial and temporal
filtering and a convolutional neural network. Our proposed approach shows
better performance compared with the benchmark on the MMSE-HR dataset in terms
of both the average HR estimation and short-time HR estimation. High
consistency in short-time HR estimation is observed between our method and the
ground truth.
|
[
{
"version": "v1",
"created": "Sun, 25 Dec 2022 15:25:15 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Qiu",
"Ying",
""
],
[
"Liu",
"Yang",
""
],
[
"Arteaga-Falconi",
"Juan",
""
],
[
"Dong",
"Haiwei",
""
],
[
"Saddik",
"Abdulmotaleb El",
""
]
] |
new_dataset
| 0.989554 |
2212.13965
|
Deepank Verma
|
Deepank Verma, Olaf Mumm, Vanessa Miriam Carlow
|
Exploration of latent space of LOD2 GML dataset to identify similar
buildings
|
10 pages, 6 figures
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Explainable numerical representations of otherwise complex datasets are vital
as they extract relevant information, which is more convenient to analyze and
study. These latent representations help identify clusters and outliers and
assess the similarity between data points. The 3-D model of buildings is one
dataset that possesses inherent complexity given the variety in footprint
shape, distinct roof types, walls, height, and volume. Traditionally, comparing
building shapes requires matching their known properties and shape metrics with
each other. However, this requires obtaining a plethora of such properties to
calculate similarity. In contrast, this study utilizes an autoencoder-based
method to compute the shape information in a fixed-size vector form that can be
compared and grouped with the help of distance metrics. This study uses
"FoldingNet," a 3D autoencoder, to generate the latent representation of each
building from the obtained LOD2 GML dataset of German cities and villages. The
Cosine distance is calculated for each latent vector to determine the locations
of similar buildings in the city. Further, a set of geospatial tools is
utilized to iteratively find the geographical clusters of buildings with
similar forms. The state of Brandenburg in Germany is taken as an example to
test the methodology. The study introduces a novel approach to finding similar
buildings and their geographical locations, which can help define a neighborhood's
character, history, and social setting. Further, the process can be scaled to
include multiple settlements, from which more regional insights can be drawn.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 17:16:23 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Verma",
"Deepank",
""
],
[
"Mumm",
"Olaf",
""
],
[
"Carlow",
"Vanessa Miriam",
""
]
] |
new_dataset
| 0.995577 |
2212.13974
|
Hichem Sahbi
|
Hichem Sahbi and Sebastien Deschamps
|
Adversarial Virtual Exemplar Learning for Label-Frugal Satellite Image
Change Detection
|
arXiv admin note: substantial text overlap with arXiv:2203.11559
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Satellite image change detection aims at finding occurrences of targeted
changes in a given scene taken at different instants. This task is highly
challenging due to the acquisition conditions and also to the subjectivity of
changes. In this paper, we investigate satellite image change detection using
active learning. Our method is interactive and relies on a question and answer
model which asks the oracle (user) questions about the most informative display
(dubbed virtual exemplars) and, according to the user's responses, updates
change detections. The main contribution of our method consists in a novel
adversarial model that allows frugally probing the oracle with only the most
representative, diverse and uncertain virtual exemplars. The latter are learned
to maximally challenge the trained change-detection criteria, which ultimately
leads to a better re-estimation of these criteria in the following iterations of
active learning. Conducted experiments show the out-performance of our proposed
adversarial display model against other display strategies as well as the
related work.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 17:46:20 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Sahbi",
"Hichem",
""
],
[
"Deschamps",
"Sebastien",
""
]
] |
new_dataset
| 0.998329 |
2212.13979
|
Renrui Zhang
|
Peixiang Huang, Li Liu, Renrui Zhang, Song Zhang, Xinli Xu, Baichao
Wang, Guoyi Liu
|
TiG-BEV: Multi-view BEV 3D Object Detection via Target Inner-Geometry
Learning
|
Code link: https://github.com/ADLab3Ds/TiG-BEV
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To achieve accurate and low-cost 3D object detection, existing methods
propose to benefit camera-based multi-view detectors with spatial cues provided
by the LiDAR modality, e.g., dense depth supervision and bird's-eye-view (BEV)
feature distillation. However, they directly conduct point-to-point mimicking
from LiDAR to camera, which neglects the inner-geometry of foreground targets
and suffers from the modal gap between 2D-3D features. In this paper, we
propose the learning scheme of Target Inner-Geometry from the LiDAR modality
into camera-based BEV detectors for both dense depth and BEV features, termed
TiG-BEV. First, we introduce an inner-depth supervision module to learn the
low-level relative depth relations between different foreground pixels. This
enables the camera-based detector to better understand the object-wise spatial
structures. Second, we design an inner-feature BEV distillation module to
imitate the high-level semantics of different keypoints within foreground
targets. To further alleviate the BEV feature gap between two modalities, we
adopt both inter-channel and inter-keypoint distillation for feature-similarity
modeling. With our target inner-geometry distillation, TiG-BEV can effectively
boost BEVDepth by +2.3% NDS and +2.4% mAP, along with BEVDet by +9.1% NDS and
+10.3% mAP on nuScenes val set. Code will be available at
https://github.com/ADLab3Ds/TiG-BEV.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 17:53:43 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Huang",
"Peixiang",
""
],
[
"Liu",
"Li",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Zhang",
"Song",
""
],
[
"Xu",
"Xinli",
""
],
[
"Wang",
"Baichao",
""
],
[
"Liu",
"Guoyi",
""
]
] |
new_dataset
| 0.995447 |
2212.13986
|
Heung-No Lee
|
Heung-No Lee, Young-Sik Kim, Dilbag Singh, and Manjit Kaur
|
Green Bitcoin: Global Sound Money
|
16 pages
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
  Modern societies have adopted government-issued fiat currencies, many of which
exist today mainly in the form of digits in credit and bank accounts. Fiat
currencies are controlled by central banks for economic stimulation and
stabilization. Boom-and-bust cycles are created. The volatility of the cycle
has become increasingly extreme. Social inequality due to the concentration of
wealth is prevalent worldwide. As such, restoring sound money, which provides
stored value over time, has become a pressing issue. Currently,
cryptocurrencies such as Bitcoin are in their infancy and may someday qualify
as sound money. Bitcoin today is considered as a digital asset for storing
value. But Bitcoin has problems. The first issue of the current Bitcoin network
is its energy-intensive consensus mechanism. The second is its reliance on
cryptographic primitives that are unsafe against post-quantum (PQ) attacks. We
propose Green Bitcoin, which addresses both issues. To save energy in the
consensus mechanism, we introduce a post-quantum secure (self-election)
verifiable coin-toss function and novel PQ secure proof-of-computation
primitives. These are expected to reduce the energy consumption of the current
Bitcoin network by more than 90 percent. The elliptic curve cryptography will
be replaced with PQ-safe versions. The Green Bitcoin protocol will help Bitcoin
evolve into a post-quantum secure network.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 19:53:22 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Lee",
"Heung-No",
""
],
[
"Kim",
"Young-Sik",
""
],
[
"Singh",
"Dilbag",
""
],
[
"Kaur",
"Manjit",
""
]
] |
new_dataset
| 0.999696 |
2212.13989
|
Hongyan Bao
|
Helene Orsini, Hongyan Bao, Yujun Zhou, Xiangrui Xu, Yufei Han,
Longyang Yi, Wei Wang, Xin Gao, Xiangliang Zhang
|
AdvCat: Domain-Agnostic Robustness Assessment for Cybersecurity-Critical
Applications with Categorical Inputs
|
IEEE BigData 2022
| null | null | null |
cs.CR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine Learning-as-a-Service systems (MLaaS) have been largely developed for
cybersecurity-critical applications, such as detecting network intrusions and
fake news campaigns. Despite effectiveness, their robustness against
adversarial attacks is one of the key trust concerns for MLaaS deployment. We
are thus motivated to assess the adversarial robustness of the Machine Learning
models residing at the core of these security-critical applications with
categorical inputs. Previous research efforts on assessing model robustness
against manipulation of categorical inputs are specific to use cases and
heavily depend on domain knowledge, or require white-box access to the target
ML model. Such limitations prevent the robustness assessment from being offered
as a domain-agnostic service to various real-world applications. We propose
a provably optimal yet computationally highly efficient adversarial robustness
assessment protocol for a wide band of ML-driven cybersecurity-critical
applications. We demonstrate the use of the domain-agnostic robustness
assessment method with substantial experimental study on fake news detection
and intrusion detection problems.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 18:12:02 GMT"
}
] | 2022-12-29T00:00:00 |
[
[
"Orsini",
"Helene",
""
],
[
"Bao",
"Hongyan",
""
],
[
"Zhou",
"Yujun",
""
],
[
"Xu",
"Xiangrui",
""
],
[
"Han",
"Yufei",
""
],
[
"Yi",
"Longyang",
""
],
[
"Wang",
"Wei",
""
],
[
"Gao",
"Xin",
""
],
[
"Zhang",
"Xiangliang",
""
]
] |
new_dataset
| 0.989383 |
2111.08629
|
Zerina Kapetanovic
|
Zerina Kapetanovic, Miguel Morales, Joshua R. Smith
|
Communication by means of Modulated Johnson Noise
| null | null |
10.1073/pnas.2201337119
| null |
cs.NI cs.ET eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the design of a new passive wireless communication system that
does not rely on ambient or generated RF sources. Instead, we exploit the
Johnson (thermal) noise generated by a resistor to transmit information bits
wirelessly. By switching the load connected to an antenna between a resistor
and open circuit, we can achieve data rates of up to 26 bps and distances of up
to 7.3 meters. This communication method is orders of magnitude less power
consuming than conventional communication schemes and presents the opportunity
to enable wireless communication in areas with a complete lack of connectivity.
|
[
{
"version": "v1",
"created": "Tue, 16 Nov 2021 17:17:39 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Dec 2021 05:07:08 GMT"
},
{
"version": "v3",
"created": "Sat, 6 Aug 2022 17:42:44 GMT"
}
] | 2022-12-28T00:00:00 |
[
[
"Kapetanovic",
"Zerina",
""
],
[
"Morales",
"Miguel",
""
],
[
"Smith",
"Joshua R.",
""
]
] |
new_dataset
| 0.997091 |
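The on-off signaling scheme in the abstract above (switching the antenna load between a resistor and an open circuit to modulate thermal noise power) can be illustrated with a toy simulation. All function names and parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def transmit(bits, n_samples=2000, noise_on=1.0, noise_off=0.2):
    """Emulate on-off keying of Johnson noise: bit 1 connects the
    resistor (higher noise power), bit 0 leaves the antenna open."""
    sigma = np.where(np.array(bits) == 1, noise_on, noise_off)
    return [rng.normal(0.0, s, n_samples) for s in sigma]

def receive(symbols, threshold=0.5):
    """Detect each bit by thresholding the measured noise power."""
    powers = [np.mean(s ** 2) for s in symbols]
    return [1 if p > threshold else 0 for p in powers]

bits = [1, 0, 1, 1, 0]
assert receive(transmit(bits)) == bits
```

The actual system distinguishes the two noise-power levels at the receiver in hardware; this sketch only captures the detection principle.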
2012.08865
|
Xiaowei Tang
|
Xiao-Wei Tang, Shuowen Zhang, Changsheng You, Xin-Lin Huang, Rui Zhang
|
UAV-Assisted Image Acquisition: 3D UAV Trajectory Design and Camera
Control
|
This paper has been accepted by IEEE VTC2022-Fall and will appear
soon!
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider a new unmanned aerial vehicle (UAV)-assisted
oblique image acquisition system where a UAV is dispatched to take images of
multiple ground targets (GTs). To study the three-dimensional (3D) UAV
trajectory design for image acquisition, we first propose a novel UAV-assisted
oblique photography model, which characterizes the image resolution with
respect to the UAV's 3D image-taking location. Then, we formulate a 3D UAV
trajectory optimization problem to minimize the UAV's traveling distance
subject to the image resolution constraints. The formulated problem is shown to
be equivalent to a modified 3D traveling salesman problem with neighbourhoods,
which is NP-hard in general. To tackle this difficult problem, we propose an
iterative algorithm to obtain a high-quality suboptimal solution efficiently,
by alternately optimizing the UAV's 3D image-taking waypoints and its visiting
order for the GTs. Numerical results show that the proposed algorithm
significantly reduces the UAV's traveling distance as compared to various
benchmark schemes, while meeting the image resolution requirement.
|
[
{
"version": "v1",
"created": "Wed, 16 Dec 2020 11:08:09 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Dec 2022 03:17:16 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Tang",
"Xiao-Wei",
""
],
[
"Zhang",
"Shuowen",
""
],
[
"You",
"Changsheng",
""
],
[
"Huang",
"Xin-Lin",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.979305 |
2102.02182
|
Prasad Krishnan Dr
|
Prasad Krishnan, Rogers Mathew, Subrahmanyam Kalyanasundaram
|
Pliable Index Coding via Conflict-Free Colorings of Hypergraphs
|
A shorter version has appeared in IEEE International Symposium on
Information Theory, 2021
| null | null | null |
cs.IT cs.DM math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the pliable index coding (PICOD) problem, a server is to serve multiple
clients, each of which possesses a unique subset of the complete message set as
side information and requests a new message which it does not have. The goal of
the server is to do this using as few transmissions as possible. This work
presents a hypergraph coloring approach to the scalar PICOD problem. A
\textit{conflict-free coloring} of a hypergraph is known from literature as an
assignment of colors to its vertices so that each hyperedge of the graph
contains one uniquely colored vertex. For a given PICOD problem represented by
a hypergraph consisting of messages as vertices and request-sets as hyperedges,
we present achievable PICOD schemes using conflict-free colorings of the PICOD
hypergraph. Various graph theoretic parameters arising out of such colorings
(and some new coloring variants) then give a number of upper bounds on the
optimal PICOD length, which we study in this work. Suppose the PICOD hypergraph
has $m$ vertices and $n$ hyperedges, where every hyperedge overlaps with at
most $\Gamma$ other hyperedges. We give easy-to-implement randomized algorithms
for the following: (a) For the single request case, we give a PICOD of length
$O(\log^2\Gamma)$. This result improves over known achievability results for
some parameter ranges, (b) For the $t$-request case, we give an MDS code of
length $\max(O(\log \Gamma \log m), O(t \log m))$. Further if the hyperedges
(request sets) are sufficiently large, we give a PICOD of the same length as
above, which is not based on MDS construction. In general, this gives an
improvement over prior achievability results. Our codes are of near-optimal
length (up to a multiplicative factor of $\log t$).
|
[
{
"version": "v1",
"created": "Wed, 3 Feb 2021 18:18:29 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Apr 2021 13:57:32 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Dec 2022 08:28:59 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Krishnan",
"Prasad",
""
],
[
"Mathew",
"Rogers",
""
],
[
"Kalyanasundaram",
"Subrahmanyam",
""
]
] |
new_dataset
| 0.997702 |
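The conflict-free coloring notion that the PICOD abstract above relies on is easy to state in code. The following checker verifies that every hyperedge contains at least one uniquely colored vertex; the example hyperedges are illustrative, not from the paper:

```python
def is_conflict_free(coloring, hyperedges):
    """A coloring is conflict-free if every hyperedge contains at
    least one vertex whose color appears exactly once in that edge."""
    for edge in hyperedges:
        colors = [coloring[v] for v in edge]
        if not any(colors.count(c) == 1 for c in colors):
            return False
    return True

# Messages 0..3 as vertices; each client's request-set as a hyperedge.
hyperedges = [{0, 1}, {1, 2, 3}, {0, 2}]
assert is_conflict_free({0: 1, 1: 2, 2: 3, 3: 2}, hyperedges)
assert not is_conflict_free({0: 1, 1: 1, 2: 1, 3: 1}, hyperedges)
```

In the PICOD reading, colors correspond to coded transmissions and the uniquely colored vertex in each request-set is the new message that client can decode.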
2112.13630
|
Rahmat Faddli Siregar
|
Rahmat Faddli Siregar, Nandana Rajatheva, and Matti Latva-Aho
|
Permutation Matrix Modulation
|
This article has been accepted for publication in IEEE Transaction on
Wireless Communications
| null |
10.1109/TWC.2022.3231011
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel scheme that allows MIMO system to modulate a set of
permutation matrices to send more information bits, extending our initial work
on the topic. This system is called Permutation Matrix Modulation (PMM). The
basic idea is to employ a permutation matrix as a precoder and treat it as a
modulated symbol. We continue the evolution of index modulation in MIMO by
adopting all-antenna activation and obtaining a set of unique symbols from
altering the positions of the antenna transmit power. We provide the analysis
of the achievable rate of PMM under Gaussian Mixture Model (GMM) distribution
and finite cardinality input (FCI). Numerical results are evaluated by
comparing PMM with the other existing systems. We also present a way to attain
the optimal achievable rate of PMM by solving a maximization problem via
interior-point method. A low complexity detection scheme based on zero-forcing
(ZF) is proposed, and maximum likelihood (ML) detection is discussed. We
demonstrate the trade-off between simulation of the symbol error rate (SER) and
the computational complexity where ZF performs worse in the SER simulation but
requires much less computational complexity than ML.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 12:29:08 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 12:11:23 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Dec 2022 08:41:12 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Siregar",
"Rahmat Faddli",
""
],
[
"Rajatheva",
"Nandana",
""
],
[
"Latva-Aho",
"Matti",
""
]
] |
new_dataset
| 0.990194 |
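The core idea of Permutation Matrix Modulation above, treating a permutation matrix as a modulated symbol carrying roughly log2(n!) bits, can be sketched as follows. The codebook construction here (taking the first 2^k permutations in lexicographic order) is a simple illustrative assumption, not the paper's exact mapping:

```python
from itertools import permutations
from math import factorial, floor, log2

def pmm_codebook(n):
    """Enumerate permutations of size n and keep the first 2**k,
    where k = floor(log2(n!)) is the number of bits per PMM symbol."""
    k = floor(log2(factorial(n)))
    perms = list(permutations(range(n)))[: 2 ** k]
    return k, perms

def modulate(bits, n):
    """Map a k-bit string to an n x n permutation matrix."""
    k, perms = pmm_codebook(n)
    assert len(bits) == k
    perm = perms[int("".join(map(str, bits)), 2)]
    # Build the permutation matrix: row r has a 1 in column perm[r].
    return [[1 if perm[r] == c else 0 for c in range(n)] for r in range(n)]

k, _ = pmm_codebook(4)           # 4! = 24 -> 4 bits per symbol
assert k == 4
P = modulate([0, 0, 0, 1], 4)    # index 1 -> permutation (0, 1, 3, 2)
assert P[2][3] == 1 and all(sum(row) == 1 for row in P)
```

Applying such a matrix as a precoder permutes which antenna carries which transmit power, which is how the extra index bits are conveyed.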
2202.03209
|
Weixiao Gao
|
Weixiao Gao, Liangliang Nan, Bas Boom, Hugo Ledoux
|
PSSNet: Planarity-sensible Semantic Segmentation of Large-scale Urban
Meshes
|
24 pages,11 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce a novel deep learning-based framework to interpret 3D urban
scenes represented as textured meshes. Based on the observation that object
boundaries typically align with the boundaries of planar regions, our framework
achieves semantic segmentation in two steps: planarity-sensible
over-segmentation followed by semantic classification. The over-segmentation
step generates an initial set of mesh segments that capture the planar and
non-planar regions of urban scenes. In the subsequent classification step, we
construct a graph that encodes the geometric and photometric features of the
segments in its nodes and the multi-scale contextual features in its edges. The
final semantic segmentation is obtained by classifying the segments using a
graph convolutional network. Experiments and comparisons on two semantic urban
mesh benchmarks demonstrate that our approach outperforms the state-of-the-art
methods in terms of boundary quality, mean IoU (intersection over union), and
generalization ability. We also introduce several new metrics for evaluating
mesh over-segmentation methods dedicated to semantic segmentation, and our
proposed over-segmentation approach outperforms state-of-the-art methods on all
metrics. Our source code is available at
\url{https://github.com/WeixiaoGao/PSSNet}.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 14:16:10 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Feb 2022 09:22:46 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Apr 2022 20:30:48 GMT"
},
{
"version": "v4",
"created": "Sat, 24 Dec 2022 12:59:37 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Gao",
"Weixiao",
""
],
[
"Nan",
"Liangliang",
""
],
[
"Boom",
"Bas",
""
],
[
"Ledoux",
"Hugo",
""
]
] |
new_dataset
| 0.982459 |
2202.07402
|
Tao Wang
|
Tao Wang, Jun Hao Liew, Yu Li, Yunpeng Chen, Jiashi Feng
|
SODAR: Segmenting Objects by Dynamically Aggregating Neighboring Mask
Representations
|
accepted to IEEE Transactions on Image Processing (TIP), code:
https://github.com/advdfacd/AggMask
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent state-of-the-art one-stage instance segmentation model SOLO divides
the input image into a grid and directly predicts per grid cell object masks
with fully-convolutional networks, yielding performance comparable to the
traditional two-stage Mask R-CNN yet enjoying a much simpler architecture and
higher efficiency. We observe SOLO generates similar masks for an object at
nearby grid cells, and these neighboring predictions can complement each other
as some may better segment certain object part, most of which are however
directly discarded by non-maximum-suppression. Motivated by the observed gap,
we develop a novel learning-based aggregation method that improves upon SOLO by
leveraging the rich neighboring information while maintaining the architectural
efficiency. The resulting model is named SODAR. Unlike the original per grid
cell object masks, SODAR is implicitly supervised to learn mask representations
that encode geometric structure of nearby objects and complement adjacent
representations with context. The aggregation method further includes two novel
designs: 1) a mask interpolation mechanism that enables the model to generate
much fewer mask representations by sharing neighboring representations among
nearby grid cells, and thus saves computation and memory; 2) a deformable
neighbour sampling mechanism that allows the model to adaptively adjust
neighbor sampling locations thus gathering mask representations with more
relevant context and achieving higher performance. SODAR significantly improves
the instance segmentation performance, e.g., it outperforms a SOLO model with
ResNet-101 backbone by 2.2 AP on COCO \texttt{test} set, with only about 3\%
additional computation. We further show consistent performance gain with the
SOLOv2 model.
|
[
{
"version": "v1",
"created": "Tue, 15 Feb 2022 13:53:03 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Dec 2022 13:58:40 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Wang",
"Tao",
""
],
[
"Liew",
"Jun Hao",
""
],
[
"Li",
"Yu",
""
],
[
"Chen",
"Yunpeng",
""
],
[
"Feng",
"Jiashi",
""
]
] |
new_dataset
| 0.997384 |
2204.08997
|
Wei Chen
|
Wei Chen, Zhiwei Li, Hongyi Fang, Qianyuan Yao, Cheng Zhong, Jianye
Hao, Qi Zhang, Xuanjing Huang, Jiajie Peng, Zhongyu Wei
|
A Benchmark for Automatic Medical Consultation System: Frameworks, Tasks
and Datasets
|
8 pages, 3 figures, 9 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, interest has arisen in using machine learning to improve the
efficiency of automatic medical consultation and enhance patient experience. In
this article, we propose two frameworks to support automatic medical
consultation, namely doctor-patient dialogue understanding and task-oriented
interaction. We create a new large medical dialogue dataset with multi-level
fine-grained annotations and establish five independent tasks, including named
entity recognition, dialogue act classification, symptom label inference,
medical report generation and diagnosis-oriented dialogue policy. We report a
set of benchmark results for each task, which shows the usability of the
dataset and sets a baseline for future studies. Both code and data are available
from https://github.com/lemuria-wchen/imcs21.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 16:43:21 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 03:50:57 GMT"
},
{
"version": "v3",
"created": "Sun, 25 Dec 2022 11:03:57 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Chen",
"Wei",
""
],
[
"Li",
"Zhiwei",
""
],
[
"Fang",
"Hongyi",
""
],
[
"Yao",
"Qianyuan",
""
],
[
"Zhong",
"Cheng",
""
],
[
"Hao",
"Jianye",
""
],
[
"Zhang",
"Qi",
""
],
[
"Huang",
"Xuanjing",
""
],
[
"Peng",
"Jiajie",
""
],
[
"Wei",
"Zhongyu",
""
]
] |
new_dataset
| 0.999764 |
2205.14955
|
Yulin Shao
|
Yulin Shao, Emre Ozfatura, Alberto Perotti, Branislav Popovic, Deniz
Gunduz
|
AttentionCode: Ultra-Reliable Feedback Codes for Short-Packet
Communications
|
Ultra-reliable short-packet communications, feedback, deep learning,
the attention mechanism
| null | null | null |
cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ultra-reliable short-packet communication is a major challenge in future
wireless networks with critical applications. To achieve ultra-reliable
communications beyond 99.999%, this paper envisions a new interaction-based
communication paradigm that exploits feedback from the receiver. We present
AttentionCode, a new class of feedback codes leveraging deep learning (DL)
technologies. The underpinnings of AttentionCode are three architectural
innovations: AttentionNet, input restructuring, and adaptation to fading
channels, accompanied by several training methods, including large-batch
training, distributed learning, look-ahead optimizer, training-test
signal-to-noise ratio (SNR) mismatch, and curriculum learning. The training
methods can potentially be generalized to other wireless communication
applications with machine learning. Numerical experiments verify that
AttentionCode establishes a new state of the art among all DL-based feedback
codes in both additive white Gaussian noise (AWGN) channels and fading
channels. In AWGN channels with noiseless feedback, for example, AttentionCode
achieves a block error rate (BLER) of $10^{-7}$ when the forward channel SNR is
0 dB for a block size of 50 bits, demonstrating the potential of AttentionCode
to provide ultra-reliable short-packet communications.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 09:44:20 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Dec 2022 12:18:24 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Shao",
"Yulin",
""
],
[
"Ozfatura",
"Emre",
""
],
[
"Perotti",
"Alberto",
""
],
[
"Popovic",
"Branislav",
""
],
[
"Gunduz",
"Deniz",
""
]
] |
new_dataset
| 0.999475 |
2206.08669
|
Yuwei Cai
|
Yuwei Cai, Huanlin Li, Zhun Fan, Juncao Hong, Peng Xu, Hui Cheng,
Xiaomi Zhu, Bingliang Hu, Zhifeng Hao
|
VG-Swarm: A Vision-based Gene Regulation Network for UAVs Swarm Behavior
Emergence
|
This work has been submitted to the IEEE Robotics and Automation
Letters (RA-L) for possible publication. Copyright may be transferred without
notice, after which this version may no longer be accessible
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned Aerial Vehicles (UAVs) dynamic encirclement is an emerging field
with great potential. Researchers often draw inspiration from biological
systems, either from the macro world, such as fish schools and bird flocks, or
from the micro world, such as gene regulatory networks (GRNs). However, most swarm control
algorithms rely on centralized control, global information acquisition, and
communications among neighboring agents. In this work, we propose a distributed
swarm control method based purely on vision and GRN without any direct
communications, in which swarm agents of e.g. UAVs can generate an entrapping
pattern to encircle an escaping target of UAV based purely on their installed
omnidirectional vision sensors. A finite-state-machine (FSM) describing the
behavioral model of each drone is also designed so that a swarm of drones can
accomplish searching and entrapping of the target collectively in an integrated
way. We verify the effectiveness and efficiency of the proposed method in
various simulation and real-world experiments.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 10:13:56 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Sep 2022 09:37:29 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Dec 2022 09:50:23 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Cai",
"Yuwei",
""
],
[
"Li",
"Huanlin",
""
],
[
"Fan",
"Zhun",
""
],
[
"Hong",
"Juncao",
""
],
[
"Xu",
"Peng",
""
],
[
"Cheng",
"Hui",
""
],
[
"Zhu",
"Xiaomi",
""
],
[
"Hu",
"Bingliang",
""
],
[
"Hao",
"Zhifeng",
""
]
] |
new_dataset
| 0.998884 |
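The VG-Swarm abstract above mentions a finite-state machine (FSM) governing each drone's search-and-entrap behavior. A minimal hypothetical three-state FSM is sketched below; the states, inputs, and transitions are assumptions made for illustration, since the paper's actual FSM is not specified in the abstract:

```python
def step(state, target_visible, in_position):
    """One FSM transition for a swarm agent: search until the target
    is seen, approach it, then hold an entrapping position."""
    if state == "SEARCH":
        return "APPROACH" if target_visible else "SEARCH"
    if state == "APPROACH":
        if not target_visible:
            return "SEARCH"
        return "ENTRAP" if in_position else "APPROACH"
    # state == "ENTRAP": fall back to searching if the target escapes.
    return "ENTRAP" if target_visible else "SEARCH"

s = "SEARCH"
for obs in [(False, False), (True, False), (True, True), (True, True)]:
    s = step(s, *obs)
assert s == "ENTRAP"
```

In the paper's setting the observations would come from each drone's omnidirectional vision sensor rather than boolean flags.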
2209.06345
|
Chenhui Zhao
|
Chenhui Zhao and Xiang Li and Rabih Younes
|
Self-supervised Multi-Modal Video Forgery Attack Detection
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video forgery attack threatens the surveillance system by replacing the video
captures with unrealistic synthesis, which can be powered by the latest augmented
reality and virtual reality technologies. From the machine perception aspect,
visual objects often have RF signatures that are naturally synchronized with
them during recording. In contrast to video captures, the RF signatures are
more difficult to attack given their concealed and ubiquitous nature. In this
work, we investigate multimodal video forgery attack detection methods using
both vision and wireless modalities. Since wireless signal-based human
perception is environmentally sensitive, we propose a self-supervised training
strategy to enable the system to work without external annotation and thus can
adapt to different environments. Our method achieves a perfect human detection
accuracy and a high forgery attack detection accuracy of 94.38%, which is
comparable with supervised methods.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 23:41:26 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Dec 2022 00:28:18 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Zhao",
"Chenhui",
""
],
[
"Li",
"Xiang",
""
],
[
"Younes",
"Rabih",
""
]
] |
new_dataset
| 0.998181 |
2212.12640
|
Yan Gao
|
Yan Gao, Chenggang Bai, Quan Quan
|
Distributed Control within a Trapezoid Virtual Tube Containing Obstacles
for UAV Swarm Subject to Speed Constraints
|
11 pages, 12 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To guide a UAV swarm through narrow openings, a trapezoid
virtual tube was designed in our previous work. In this paper, we generalize its
application range to the condition that there exist obstacles inside the
trapezoid virtual tube and UAVs have strict speed constraints. First, a
distributed vector field controller is proposed for the trapezoid virtual tube
with no obstacle inside. The relationship between the trapezoid virtual tube
and the speed constraints is also presented. Then, a switching logic for the
obstacle avoidance is put forward. The key point is to divide the trapezoid
virtual tube containing obstacles into several sub-trapezoid virtual tubes with
no obstacle inside. Formal analyses and proofs are made to show that all UAVs
are able to pass through the trapezoid virtual tube safely. Besides, the
effectiveness of the proposed method is validated by numerical simulations and
real experiments.
|
[
{
"version": "v1",
"created": "Sat, 24 Dec 2022 03:01:28 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Gao",
"Yan",
""
],
[
"Bai",
"Chenggang",
""
],
[
"Quan",
"Quan",
""
]
] |
new_dataset
| 0.980301 |
2212.12721
|
Jinyu Zhao
|
Jinyu Zhao, Yusuke Monno, Masatoshi Okutomi
|
Polarimetric Multi-View Inverse Rendering
|
Paper accepted in IEEE Transactions on Pattern Analysis and Machine
Intelligence (2022). arXiv admin note: substantial text overlap with
arXiv:2007.08830
| null |
10.1109/TPAMI.2022.3232211
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A polarization camera has great potential for 3D reconstruction since the
angle of polarization (AoP) and the degree of polarization (DoP) of reflected
light are related to an object's surface normal. In this paper, we propose a
novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering
(Polarimetric MVIR) that effectively exploits geometric, photometric, and
polarimetric cues extracted from input multi-view color-polarization images. We
first estimate camera poses and an initial 3D model by geometric reconstruction
with a standard structure-from-motion and multi-view stereo pipeline. We then
refine the initial model by optimizing photometric rendering errors and
polarimetric errors using multi-view RGB, AoP, and DoP images, where we propose
a novel polarimetric cost function that enables an effective constraint on the
estimated surface normal of each vertex, while considering four possible
ambiguous azimuth angles revealed from the AoP measurement. The weight for the
polarimetric cost is effectively determined based on the DoP measurement, which
is regarded as the reliability of polarimetric information. Experimental
results using both synthetic and real data demonstrate that our Polarimetric
MVIR can reconstruct a detailed 3D shape without assuming a specific surface
material and lighting condition.
|
[
{
"version": "v1",
"created": "Sat, 24 Dec 2022 12:12:12 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Zhao",
"Jinyu",
""
],
[
"Monno",
"Yusuke",
""
],
[
"Okutomi",
"Masatoshi",
""
]
] |
new_dataset
| 0.999477 |
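The Polarimetric MVIR abstract above refers to four possible ambiguous azimuth angles revealed by an AoP measurement. These candidates can be enumerated directly; the helper below is an illustrative sketch (names and degree-based convention assumed), not the paper's implementation:

```python
def candidate_azimuths(aop_deg):
    """The AoP fixes the surface-normal azimuth only up to the
    pi-ambiguity of polarization plus a possible 90-degree shift
    (diffuse vs. specular reflection), leaving four candidates
    in degrees, modulo 360."""
    return sorted((aop_deg + k * 90.0) % 360.0 for k in range(4))

assert candidate_azimuths(30.0) == [30.0, 120.0, 210.0, 300.0]
```

The paper's polarimetric cost function evaluates the estimated vertex normal against all four candidates, weighting the term by the DoP as a reliability measure.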
2212.12745
|
Parker Lusk
|
Parker C. Lusk, Devarth Parikh, Jonathan P. How
|
GraffMatch: Global Matching of 3D Lines and Planes for Wide Baseline
LiDAR Registration
|
accepted to RA-L; 8 pages. arXiv admin note: text overlap with
arXiv:2205.08556
|
IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 632-639,
Feb. 2023
|
10.1109/LRA.2022.3229224
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Using geometric landmarks like lines and planes can increase navigation
accuracy and decrease map storage requirements compared to commonly-used LiDAR
point cloud maps. However, landmark-based registration for applications like
loop closure detection is challenging because a reliable initial guess is not
available. Global landmark matching has been investigated in the literature,
but these methods typically use ad hoc representations of 3D line and plane
landmarks that are not invariant to large viewpoint changes, resulting in
incorrect matches and high registration error. To address this issue, we adopt
the affine Grassmannian manifold to represent 3D lines and planes and prove
that the distance between two landmarks is invariant to rotation and
translation if a shift operation is performed before applying the Grassmannian
metric. This invariance property enables the use of our graph-based data
association framework for identifying landmark matches that can subsequently be
used for registration in the least-squares sense. Evaluated on a challenging
landmark matching and registration task using publicly-available LiDAR
datasets, our approach yields a 1.7x and 3.5x improvement in successful
registrations compared to methods that use viewpoint-dependent centroid and
"closest point" representations, respectively.
|
[
{
"version": "v1",
"created": "Sat, 24 Dec 2022 15:02:15 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Lusk",
"Parker C.",
""
],
[
"Parikh",
"Devarth",
""
],
[
"How",
"Jonathan P.",
""
]
] |
new_dataset
| 0.992034 |
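The Grassmannian distance between subspaces, which underlies the viewpoint-invariant landmark representation in the GraffMatch abstract above, can be computed from principal angles. This sketch uses the standard linear Grassmannian metric and omits the paper's affine shift operation, so it is only illustrative:

```python
import numpy as np

def grassmann_dist(A, B):
    """Distance between the subspaces spanned by the columns of A and B,
    computed from principal angles (arccos of the singular values of
    the product of orthonormal bases)."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return float(np.linalg.norm(np.arccos(s)))

line_x = np.array([[1.0], [0.0], [0.0]])   # direction of a 3D line
line_y = np.array([[0.0], [1.0], [0.0]])
assert abs(grassmann_dist(line_x, line_x)) < 1e-9
assert abs(grassmann_dist(line_x, line_y) - np.pi / 2) < 1e-9
```

In the paper, 3D lines and planes are lifted to the affine Grassmannian and a shift is applied before this metric, which is what makes the distance invariant to rotation and translation.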
2212.12785
|
Mina Namazi
|
Mina Namazi, Duncan Ross, Xiaojie Zhu, Erman Ayday
|
zkFaith: Soonami's Zero-Knowledge Identity Protocol
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Individuals are encouraged to prove their eligibility to access specific
services regularly. However, providing various organizations with personal data
spreads sensitive information and endangers people's privacy. Hence,
privacy-preserving identification systems that enable individuals to prove they
are permitted to use specific services are required to fill the gap.
Cryptographic techniques are deployed to construct identity proofs across the
internet; nonetheless, they do not offer complete control over personal data or
prevent users from forging and submitting fake data.
In this paper, we design a privacy-preserving identity protocol called
"zkFaith." A new approach to obtain a verified zero-knowledge identity unique
to each individual. The protocol verifies the integrity of the documents
provided by the individuals and issues a zero-knowledge-based id without
revealing any information to the authenticator or verifier. The zkFaith
leverages an aggregated version of the Camenisch-Lysyanskaya (CL) signature
scheme to sign the user's commitment to the verified personal data. Then the
users with a zero-knowledge proof system can prove that they own the required
attributes of the access criterion of the requested service providers. Vector
commitments and their position-binding property enable us to later update
the commitments when the personal data are modified, and hence to update
the issued zkFaith id without restarting the protocol from
scratch. We show that the design and implementation of the zkFaith with the
generated proofs in real-world scenarios are scalable and comparable with the
state-of-the-art schemes.
|
[
{
"version": "v1",
"created": "Sat, 24 Dec 2022 17:21:41 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Namazi",
"Mina",
""
],
[
"Ross",
"Duncan",
""
],
[
"Zhu",
"Xiaojie",
""
],
[
"Ayday",
"Erman",
""
]
] |
new_dataset
| 0.989034 |
2212.12801
|
Anthony Rios
|
Sonam Singh and Anthony Rios
|
Linguistic Elements of Engaging Customer Service Discourse on Social
Media
|
Accepted to NLP+CSS at EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Customers are rapidly turning to social media for customer support. While
brand agents on these platforms are motivated and well-intentioned to help and
engage with customers, their efforts are often ignored if their initial
response to the customer does not match a specific tone, style, or topic the
customer is aiming to receive. The length of a conversation can reflect the
effort and quality of the initial response made by a brand toward collaborating
and helping consumers, even when the overall sentiment of the conversation
might not be very positive. Thus, through this study, we aim to bridge this
critical gap in the existing literature by analyzing language's content and
stylistic aspects such as expressed empathy, psycho-linguistic features,
dialogue tags, and metrics for quantifying personalization of the utterances
that can influence the engagement of an interaction. This paper demonstrates
that we can predict engagement using initial customer and brand posts.
|
[
{
"version": "v1",
"created": "Sat, 24 Dec 2022 18:49:03 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Singh",
"Sonam",
""
],
[
"Rios",
"Anthony",
""
]
] |
new_dataset
| 0.995825 |
2212.12859
|
Vaclav Skala
|
Vaclav Skala and Michal Smolik and Lukas Karlicek
|
HS-Patch: A New Hermite Smart Bicubic Patch Modification
|
Draft of the paper: NAUN Journal International Journal of Mathematics
and Computers in Simulation, Vol.8, pp.292-299, ISSN: 1998-0159, 2014. arXiv
admin note: substantial text overlap with arXiv:2212.11986, arXiv:2212.11875
|
NAUN Journal International Journal of Mathematics and Computers in
Simulation, Vol.8, pp.292-299, ISSN: 1998-0159, 2014
| null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Bicubic four-sided patches are widely used in computer graphics, CAD/CAM
systems, etc. They are highly flexible and make it possible to compress a surface
description before final rendering. However, computer graphics hardware
supports only triangular meshes. Therefore, four-sided bicubic patches are
approximated by a triangular mesh. The border curves of a bicubic patch are of
degree 3, while diagonal and anti-diagonal curves are of degree 6. Therefore
the resulting shape and texturing depend on the actual mapping, i.e. how the
tessellation of a bicubic patch is made. The proposed new modification of the
Hermite bicubic patch, the HS-patch, results from an additional restriction
placed on the Hermite bicubic patch formulation: the diagonal and
anti-diagonal curves are of degree 3. This requirement leads to a new
Hermite-based bicubic four-sided patch with 12 control points; the remaining 4
control points, i.e. twist vectors, are computed from those 12 control points.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 17:27:28 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Skala",
"Vaclav",
""
],
[
"Smolik",
"Michal",
""
],
[
"Karlicek",
"Lukas",
""
]
] |
new_dataset
| 0.999725 |
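The Hermite bicubic construction behind the HS-patch abstract above builds on the four cubic Hermite basis functions. A minimal pure-Python sketch (illustrative only; the names h00/h10/h01/h11 follow the common convention and this is not the authors' code) evaluates them and a cubic Hermite curve:

```python
# Cubic Hermite basis functions, as used in Hermite bicubic patch
# formulations. Illustrative sketch, not the HS-patch implementation.

def h00(t):  # interpolates the value at t = 0
    return 2 * t**3 - 3 * t**2 + 1

def h10(t):  # interpolates the tangent at t = 0
    return t**3 - 2 * t**2 + t

def h01(t):  # interpolates the value at t = 1
    return -2 * t**3 + 3 * t**2

def h11(t):  # interpolates the tangent at t = 1
    return t**3 - t**2

def hermite_curve(p0, p1, m0, m1, t):
    """Point on a cubic Hermite curve with endpoints p0, p1
    and endpoint tangents m0, m1."""
    return h00(t) * p0 + h10(t) * m0 + h01(t) * p1 + h11(t) * m1
```

A bicubic Hermite patch applies the same basis in two parameter directions; the HS-patch restriction constrains the resulting diagonal curves to stay cubic.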
2212.12937
|
Subba Reddy Oota
|
Lakshmi Sireesha Vakada, Anudeep Ch, Mounika Marreddy, Subba Reddy
Oota, Radhika Mamidi
|
GAE-ISumm: Unsupervised Graph-Based Summarization of Indian Languages
|
9 pages, 7 figures
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Document summarization aims to create a precise and coherent summary of a
text document. Many deep learning summarization models are developed mainly for
English, often requiring a large training corpus and efficient pre-trained
language models and tools. However, summarization models for low-resource
Indian languages are often limited by rich morphological variation and by
syntactic and semantic differences. In this paper, we propose
GAE-ISumm, an unsupervised Indic summarization model that extracts summaries
from text documents. In particular, our proposed model, GAE-ISumm uses Graph
Autoencoder (GAE) to learn text representations and a document summary jointly.
We also provide a manually-annotated Telugu summarization dataset TELSUM, to
experiment with our model GAE-ISumm. Further, we experiment with the most
publicly available Indian language summarization datasets to investigate the
effectiveness of GAE-ISumm on other Indian languages. Our experiments with
GAE-ISumm in seven languages yield the following observations: (i) it is
competitive or better than state-of-the-art results on all datasets, (ii) it
reports benchmark results on TELSUM, and (iii) the inclusion of positional and
cluster information in the proposed model improved the performance of
summaries.
|
[
{
"version": "v1",
"created": "Sun, 25 Dec 2022 17:20:03 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Vakada",
"Lakshmi Sireesha",
""
],
[
"Ch",
"Anudeep",
""
],
[
"Marreddy",
"Mounika",
""
],
[
"Oota",
"Subba Reddy",
""
],
[
"Mamidi",
"Radhika",
""
]
] |
new_dataset
| 0.999007 |
2212.12976
|
Nima Rahimi Foroushaani
|
Nima Rahimi Foroushaani, Bart Jacobs
|
Modular Formal Verification of Rust Programs with Unsafe Blocks
|
22 pages, 13 listings, 3 figures, Technical report, Appendix by Bart
Jacobs
| null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Rust is a modern systems programming language whose type system guarantees
memory safety. For the sake of expressivity and performance it allows
programmers to relax typing rules temporarily, using unsafe code blocks.
However, in unsafe blocks, the burden of making sure that the code does not end
up having undefined behaviour is on the programmer. Even the most expert
programmers make mistakes, and a memory safety bug in an unsafe block renders
all the type system guarantees void. To address this problem, we are trying to
verify the soundness of Rust unsafe code by applying our Modular Symbolic
Execution algorithm. This text outlines our approach and the progress that has been made
so far.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 00:19:19 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Foroushaani",
"Nima Rahimi",
""
],
[
"Jacobs",
"Bart",
""
]
] |
new_dataset
| 0.995236 |
2212.13007
|
Yaonan Zhu
|
Yaonan Zhu, Shukrullo Nazirjonov, Bingheng Jiang, Jacinto Colan,
Tadayoshi Aoyama, Yasuhisa Hasegawa, Boris Belousov, Kay Hansel, and Jan
Peters
|
Visual Tactile Sensor Based Force Estimation for Position-Force
Teleoperation
|
IEEE CBS 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-based tactile sensors have gained extensive attention in the robotics
community. The sensors are highly expected to be capable of extracting contact
information, i.e. haptic information, during in-hand manipulation. This nature of
tactile sensors makes them a perfect match for haptic feedback applications. In
this paper, we propose a contact force estimation method using the vision-based
tactile sensor DIGIT, and apply it to a position-force teleoperation
architecture for force feedback. The force estimation is done by building a
depth map for DIGIT gel surface deformation measurement and applying a
regression algorithm on estimated depth data and ground truth force data to get
the depth-force relationship. The experiment is performed by constructing a
grasping force feedback system with a haptic device as a leader robot and a
parallel robot gripper as a follower robot, where the DIGIT sensor is attached
to the tip of the robot gripper to estimate the contact force. The preliminary
results show the capability of using the low-cost vision-based sensor for force
feedback applications.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 03:59:50 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Zhu",
"Yaonan",
""
],
[
"Nazirjonov",
"Shukrullo",
""
],
[
"Jiang",
"Bingheng",
""
],
[
"Colan",
"Jacinto",
""
],
[
"Aoyama",
"Tadayoshi",
""
],
[
"Hasegawa",
"Yasuhisa",
""
],
[
"Belousov",
"Boris",
""
],
[
"Hansel",
"Kay",
""
],
[
"Peters",
"Jan",
""
]
] |
new_dataset
| 0.972639 |
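The depth-to-force regression step described in the abstract above can be illustrated with a simple least-squares line fit. This is a sketch under the assumption of a linear depth-force relationship; the paper's actual regressor, data, and variable names may differ:

```python
# Least-squares fit of a linear depth-force relationship, a stand-in
# for the regression step mapping estimated gel deformation depth to
# ground-truth contact force. Illustrative only.

def fit_linear(depths, forces):
    """Return (slope, intercept) minimizing squared error of
    force ~ slope * depth + intercept."""
    n = len(depths)
    mean_d = sum(depths) / n
    mean_f = sum(forces) / n
    cov = sum((d - mean_d) * (f - mean_f) for d, f in zip(depths, forces))
    var = sum((d - mean_d) ** 2 for d in depths)
    slope = cov / var
    return slope, mean_f - slope * mean_d

def predict_force(model, depth):
    """Estimate contact force from a measured deformation depth."""
    slope, intercept = model
    return slope * depth + intercept
```

In a teleoperation loop, the fitted model would convert each new depth map reading into a force value fed back to the haptic leader device.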
2212.13015
|
Shangeth Rajaa
|
Shangeth Rajaa, Swaraj Dalmia, Kumarmanas Nethil
|
Skit-S2I: An Indian Accented Speech to Intent dataset
| null | null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional conversation assistants extract text transcripts from the speech
signal using automatic speech recognition (ASR) and then predict intent from
the transcriptions. Using end-to-end spoken language understanding (SLU), the
intents of the speaker are predicted directly from the speech signal without
requiring intermediate text transcripts. As a result, the model can optimize
directly for intent classification and avoid cascading errors from ASR. The
end-to-end SLU system also helps in reducing the latency of the intent
prediction model. Although many datasets are available publicly for
text-to-intent tasks, the availability of labeled speech-to-intent datasets is
limited, and there are no datasets available in the Indian accent. In this
paper, we release the Skit-S2I dataset, the first publicly available
Indian-accented SLU dataset in the banking domain in a conversational tonality.
We experiment with multiple baselines, compare different pretrained speech
encoder's representations, and find that SSL pretrained representations perform
slightly better than ASR pretrained representations lacking prosodic features
for speech-to-intent classification. The dataset and baseline code are
available at \url{https://github.com/skit-ai/speech-to-intent-dataset}
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 05:10:43 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Rajaa",
"Shangeth",
""
],
[
"Dalmia",
"Swaraj",
""
],
[
"Nethil",
"Kumarmanas",
""
]
] |
new_dataset
| 0.999066 |
2212.13138
|
Shekoofeh Azizi
|
Karan Singhal, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei,
Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen
Pfohl, Perry Payne, Martin Seneviratne, Paul Gamble, Chris Kelly, Nathaneal
Scharli, Aakanksha Chowdhery, Philip Mansfield, Blaise Aguera y Arcas, Dale
Webster, Greg S. Corrado, Yossi Matias, Katherine Chou, Juraj Gottweis, Nenad
Tomasev, Yun Liu, Alvin Rajkomar, Joelle Barral, Christopher Semturs, Alan
Karthikesalingam, Vivek Natarajan
|
Large Language Models Encode Clinical Knowledge
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have demonstrated impressive capabilities in
natural language understanding and generation, but the quality bar for medical
and clinical applications is high. Today, attempts to assess models' clinical
knowledge typically rely on automated evaluations on limited benchmarks. There
is no standard to evaluate model predictions and reasoning across a breadth of
tasks. To address this, we present MultiMedQA, a benchmark combining six
existing open question answering datasets spanning professional medical exams,
research, and consumer queries; and HealthSearchQA, a new free-response dataset
of medical questions searched online. We propose a framework for human
evaluation of model answers along multiple axes including factuality,
precision, possible harm, and bias. In addition, we evaluate PaLM (a
540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on
MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves
state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA,
MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US
Medical License Exam questions), surpassing prior state-of-the-art by over 17%.
However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve
this we introduce instruction prompt tuning, a parameter-efficient approach for
aligning LLMs to new domains using a few exemplars. The resulting model,
Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show
that comprehension, recall of knowledge, and medical reasoning improve with
model scale and instruction prompt tuning, suggesting the potential utility of
LLMs in medicine. Our human evaluations reveal important limitations of today's
models, reinforcing the importance of both evaluation frameworks and method
development in creating safe, helpful LLM models for clinical applications.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 14:28:24 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Singhal",
"Karan",
""
],
[
"Azizi",
"Shekoofeh",
""
],
[
"Tu",
"Tao",
""
],
[
"Mahdavi",
"S. Sara",
""
],
[
"Wei",
"Jason",
""
],
[
"Chung",
"Hyung Won",
""
],
[
"Scales",
"Nathan",
""
],
[
"Tanwani",
"Ajay",
""
],
[
"Cole-Lewis",
"Heather",
""
],
[
"Pfohl",
"Stephen",
""
],
[
"Payne",
"Perry",
""
],
[
"Seneviratne",
"Martin",
""
],
[
"Gamble",
"Paul",
""
],
[
"Kelly",
"Chris",
""
],
[
"Scharli",
"Nathaneal",
""
],
[
"Chowdhery",
"Aakanksha",
""
],
[
"Mansfield",
"Philip",
""
],
[
"Arcas",
"Blaise Aguera y",
""
],
[
"Webster",
"Dale",
""
],
[
"Corrado",
"Greg S.",
""
],
[
"Matias",
"Yossi",
""
],
[
"Chou",
"Katherine",
""
],
[
"Gottweis",
"Juraj",
""
],
[
"Tomasev",
"Nenad",
""
],
[
"Liu",
"Yun",
""
],
[
"Rajkomar",
"Alvin",
""
],
[
"Barral",
"Joelle",
""
],
[
"Semturs",
"Christopher",
""
],
[
"Karthikesalingam",
"Alan",
""
],
[
"Natarajan",
"Vivek",
""
]
] |
new_dataset
| 0.998461 |
2212.13169
|
Om Prakash
|
Indibar Debnath, Ashutosh Singh, Om Prakash and Abdollah Alhevaz
|
Quantum Codes from additive constacyclic codes over a mixed alphabet and
the MacWilliams identities
|
22 pages
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Let $\mathbb{Z}_p$ be the ring of integers modulo a prime number $p$ where
$p-1$ is a quadratic residue modulo $p$. This paper presents the study of
constacyclic codes over chain rings $\mathcal{R}=\frac{\mathbb{Z}_p[u]}{\langle
u^2\rangle}$ and $\mathcal{S}=\frac{\mathbb{Z}_p[u]}{\langle u^3\rangle}$. We
also study additive constacyclic codes over $\mathcal{R}\mathcal{S}$ and
$\mathbb{Z}_p\mathcal{R}\mathcal{S}$ using the generator polynomials over the
rings $\mathcal{R}$ and $\mathcal{S},$ respectively. Further, by defining Gray
maps on $\mathcal{R}$, $\mathcal{S}$ and $\mathbb{Z}_p\mathcal{R}\mathcal{S},$
we obtain some results on the Gray images of additive codes. Then we give the
weight enumeration and MacWilliams identities corresponding to the additive
codes over $\mathbb{Z}_p\mathcal{R}\mathcal{S}$. Finally, as an application of
the obtained codes, we give quantum codes using the CSS construction.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 14:10:07 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Debnath",
"Indibar",
""
],
[
"Singh",
"Ashutosh",
""
],
[
"Prakash",
"Om",
""
],
[
"Alhevaz",
"Abdollah",
""
]
] |
new_dataset
| 0.997652 |
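For context on the MacWilliams identities invoked above, the classical identity for a linear code $C$ of length $n$ over $\mathbb{F}_q$ relates its weight enumerator to that of its dual $C^{\perp}$. This is the standard single-alphabet form, not the paper's mixed-alphabet generalization:

```latex
W_{C^{\perp}}(x, y) = \frac{1}{|C|}\, W_{C}\bigl(x + (q-1)y,\; x - y\bigr)
```

Here $W_C(x,y) = \sum_{c \in C} x^{\,n - \mathrm{wt}(c)} y^{\,\mathrm{wt}(c)}$ is the Hamming weight enumerator and $\mathrm{wt}(c)$ the Hamming weight.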
2212.13190
|
Angelot Behajaina
|
Angelot Behajaina, Martino Borello, Javier de la Cruz, Wolfgang
Willems
|
Twisted skew $G$-codes
| null | null | null | null |
cs.IT math.CO math.IT math.RA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we investigate left ideals as codes in twisted skew group
rings. The considered rings, which are often algebras over a finite field,
allow us to detect many of the well-known codes. The presentation given here
unifies the concepts of group codes, twisted group codes and skew group codes.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 15:29:14 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Behajaina",
"Angelot",
""
],
[
"Borello",
"Martino",
""
],
[
"de la Cruz",
"Javier",
""
],
[
"Willems",
"Wolfgang",
""
]
] |
new_dataset
| 0.999731 |
2212.13221
|
Lynnette Hui Xian Ng
|
Lynnette Hui Xian Ng and Kathleen M. Carley
|
A Combined Synchronization Index for Grassroots Activism on Social Media
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Social media has provided a citizen voice, giving rise to grassroots
collective action, where users deploy a concerted effort to disseminate online
narratives and even carry out offline protests. Sometimes these collective
actions are aided by inorganic synchronization, which arises from bot actors. It
is thus important to identify the synchronicity of emerging discourse on social
media and the indications of organic/inorganic activity within the
conversations. This provides a way of profiling an event for possibility of
offline protests and violence. In this study, we build on past definitions of
synchronous activity on social media -- simultaneous user action -- and develop
a Combined Synchronization Index (CSI) which adopts a hierarchical approach in
measuring user synchronicity. We apply this index on six political and social
activism events on Twitter and analyzed three action types: synchronicity by
hashtag, URL, and @mentions. The CSI provides an overall quantification of
synchronization across all action types within an event, which allows ranking
of a spectrum of synchronicity across the six events. Human users have higher
synchronicity scores than bot users in most events, and bot-human pairs exhibit
the most synchronized activities across all events as compared to other pairs
(i.e., bot-bot and human-human). We further rely on the harmony and dissonance
of CSI-Network scores with network centrality metrics to observe the presence
of organic/inorganic synchronization. We hope this work aids in investigating
synchronized action within social media in a collective manner.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 17:03:03 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Ng",
"Lynnette Hui Xian",
""
],
[
"Carley",
"Kathleen M.",
""
]
] |
new_dataset
| 0.995669 |
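The notion of synchronous activity used above (simultaneous user action) can be sketched with a toy pairwise index: count action pairs from two users that fall within a shared time window, then normalize. This simplified index is our own illustration, not the paper's hierarchical CSI:

```python
from itertools import product

def pairwise_sync_count(times_a, times_b, window):
    """Count action pairs from two users occurring within `window`
    seconds of each other -- a crude proxy for simultaneous action."""
    return sum(1 for ta, tb in product(times_a, times_b)
               if abs(ta - tb) <= window)

def sync_index(times_a, times_b, window):
    """Normalize the pair count by the maximum possible number of
    pairs, yielding a score in [0, 1]."""
    denom = len(times_a) * len(times_b)
    if denom == 0:
        return 0.0
    return pairwise_sync_count(times_a, times_b, window) / denom
```

A full event-level index would aggregate such pairwise scores over all users and over each action type (hashtag, URL, @mention), as the abstract describes.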
2212.13256
|
Yaniv Sadeh
|
Yaniv Sadeh, Ori Rottenstreich, Haim Kaplan
|
Codes for Load Balancing in TCAMs: Size Analysis
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic splitting is a required functionality in networks, for example for
load balancing over paths or servers, or by the source's access restrictions.
The capacities of the servers (or the number of users with particular access
restrictions) determine the sizes of the parts into which traffic should be
split. A recent approach implements traffic splitting within the ternary
content addressable memory (TCAM), which is often available in switches. It is
important to reduce the amount of memory allocated for this task since TCAMs
are power consuming and are often also required for other tasks such as
classification and routing. Recent works suggested algorithms to compute a
smallest implementation of a given partition in the longest prefix match (LPM)
model. In this paper we analyze properties of such minimal representations and
prove lower and upper bounds on their size. The upper bounds hold for general
TCAMs, and we also prove an additional lower-bound for general TCAMs. We also
analyze the expected size of a representation, for uniformly random ordered
partitions. We show that the expected representation size of a random partition
is at least half the size for the worst-case partition, and is linear in the
number of parts and in the logarithm of the size of the address space.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 18:54:30 GMT"
}
] | 2022-12-27T00:00:00 |
[
[
"Sadeh",
"Yaniv",
""
],
[
"Rottenstreich",
"Ori",
""
],
[
"Kaplan",
"Haim",
""
]
] |
new_dataset
| 0.998439 |
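The LPM representations analyzed above ultimately cover parts of the address space with prefixes, where each prefix spans a power-of-two block starting at a multiple of its size. A sketch of the standard greedy range-to-prefix decomposition (a building block only, not the paper's general-TCAM construction):

```python
def range_to_blocks(start, end):
    """Cover the integer range [start, end) with the fewest
    prefix-style blocks: each block spans p addresses (p a power of
    two) and begins at a multiple of p. Returns (block_start, size)
    pairs. Greedy sketch of the classic CIDR-style decomposition."""
    blocks = []
    while start < end:
        if start == 0:
            p = 1 << (end - 1).bit_length()
        else:
            p = start & -start  # largest power of two dividing start
        while p > end - start:  # shrink until the block fits
            p //= 2
        blocks.append((start, p))
        start += p
    return blocks
```

For example, a traffic-splitting part covering addresses 0-99 of an 8-bit space needs three prefix blocks (sizes 64, 32, and 4), which is the kind of size accounting the paper's bounds formalize.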
2009.09730
|
Daniel Fern\'andez-Gonz\'alez
|
Daniel Fern\'andez-Gonz\'alez and Carlos G\'omez-Rodr\'iguez
|
Multitask Pointer Network for Multi-Representational Parsing
|
Final peer-reviewed manuscript accepted for publication in
Knowledge-Based Systems
| null |
10.1016/j.knosys.2021.107760
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a transition-based approach that, by training a single model, can
efficiently parse any input sentence with both constituent and dependency
trees, supporting both continuous/projective and discontinuous/non-projective
syntactic structures. To that end, we develop a Pointer Network architecture
with two separate task-specific decoders and a common encoder, and follow a
multitask learning strategy to jointly train them. The resulting quadratic
system not only becomes the first parser that can jointly produce both
unrestricted constituent and dependency trees from a single model, but also
proves that both syntactic formalisms can benefit from each other during
training, achieving state-of-the-art accuracies in several widely-used
benchmarks such as the continuous English and Chinese Penn Treebanks, as well
as the discontinuous German NEGRA and TIGER datasets.
|
[
{
"version": "v1",
"created": "Mon, 21 Sep 2020 10:04:07 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 19:29:04 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Fernández-González",
"Daniel",
""
],
[
"Gómez-Rodríguez",
"Carlos",
""
]
] |
new_dataset
| 0.97296 |
2109.07744
|
Yizhou Shan
|
Yizhou Shan, Will Lin, Ryan Kosta, Arvind Krishnamurthy, Yiying Zhang
|
SuperNIC: A Hardware-Based, Programmable, and Multi-Tenant SmartNIC
|
17 pages
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
With CPU scaling slowing down in today's data centers, more functionalities
are being offloaded from the CPU to auxiliary devices. One such device is the
SmartNIC, which is being increasingly adopted in data centers. In today's cloud
environment, VMs on the same server can each have their own network computation
(or network tasks) or workflows of network tasks to offload to a SmartNIC.
These network tasks can be dynamically added/removed as VMs come and go and can
be shared across VMs. Such dynamism demands that a SmartNIC not only schedules
and processes packets but also manages and executes offloaded network tasks for
different users. Although software solutions like an OS exist for managing
software-based network tasks, such software-based SmartNICs cannot keep up with
the quickly increasing data-center network speed.
This paper proposes a new SmartNIC platform called SuperNIC that allows
multiple tenants to efficiently and safely offload FPGA-based network
computation DAGs. For efficiency and scalability, our core idea is to group
network tasks into chains that are connected and scheduled as one unit. We
further propose techniques to automatically scale network task chains with
different types of parallelism. Moreover, we propose a fair share mechanism
that considers both fair space sharing and fair time sharing of different types
of hardware resources. Our FPGA prototype of SuperNIC achieves high-bandwidth,
low-latency performance while efficiently utilizing and fairly sharing
resources.
|
[
{
"version": "v1",
"created": "Thu, 16 Sep 2021 06:28:37 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 07:20:35 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Aug 2022 14:38:44 GMT"
},
{
"version": "v4",
"created": "Wed, 23 Nov 2022 12:45:16 GMT"
},
{
"version": "v5",
"created": "Fri, 23 Dec 2022 03:48:13 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Shan",
"Yizhou",
""
],
[
"Lin",
"Will",
""
],
[
"Kosta",
"Ryan",
""
],
[
"Krishnamurthy",
"Arvind",
""
],
[
"Zhang",
"Yiying",
""
]
] |
new_dataset
| 0.999234 |
2201.07040
|
Matthias Samwald
|
Kathrin Blagec, Jakob Kraiger, Wolfgang Fr\"uhwirt, Matthias Samwald
|
Benchmark datasets driving artificial intelligence development fail to
capture the needs of medical professionals
|
(this version extends the literature references)
|
Journal of Bioinformatics, January 2023
|
10.1016/j.jbi.2022.104274
| null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Publicly accessible benchmarks that allow for assessing and comparing model
performances are important drivers of progress in artificial intelligence (AI).
While recent advances in AI capabilities hold the potential to transform
medical practice by assisting and augmenting the cognitive processes of
healthcare professionals, the coverage of clinically relevant tasks by AI
benchmarks is largely unclear. Furthermore, there is a lack of systematized
meta-information that allows clinical AI researchers to quickly determine
accessibility, scope, content and other characteristics of datasets and
benchmark datasets relevant to the clinical domain.
To address these issues, we curated and released a comprehensive catalogue of
datasets and benchmarks pertaining to the broad domain of clinical and
biomedical natural language processing (NLP), based on a systematic review of
literature and online resources. A total of 450 NLP datasets were manually
systematized and annotated with rich metadata, such as targeted tasks, clinical
applicability, data types, performance metrics, accessibility and licensing
information, and availability of data splits. We then compared tasks covered by
AI benchmark datasets with relevant tasks that medical practitioners reported
as highly desirable targets for automation in a previous empirical study.
Our analysis indicates that AI benchmarks of direct clinical relevance are
scarce and fail to cover most work activities that clinicians want to see
addressed. In particular, tasks associated with routine documentation and
patient data administration workflows are not represented despite significant
associated workloads. Thus, currently available AI benchmarks are improperly
aligned with desired targets for AI automation in clinical settings, and novel
benchmarks should be created to fill these gaps.
|
[
{
"version": "v1",
"created": "Tue, 18 Jan 2022 15:05:28 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 13:25:37 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Blagec",
"Kathrin",
""
],
[
"Kraiger",
"Jakob",
""
],
[
"Frühwirt",
"Wolfgang",
""
],
[
"Samwald",
"Matthias",
""
]
] |
new_dataset
| 0.997423 |
2209.14345
|
Jonah Anton
|
Jonah Anton, Harry Coppock, Pancham Shukla, Bjorn W.Schuller
|
Audio Barlow Twins: Self-Supervised Audio Representation Learning
|
15 pages (4 main text, rest references + appendices)
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The Barlow Twins self-supervised learning objective requires neither negative
samples nor asymmetric learning updates, achieving results on a par with the
current state-of-the-art within Computer Vision. As such, we present Audio
Barlow Twins, a novel self-supervised audio representation learning approach,
adapting Barlow Twins to the audio domain. We pre-train on the large-scale
audio dataset AudioSet, and evaluate the quality of the learnt representations
on 18 tasks from the HEAR 2021 Challenge, achieving results which outperform,
or otherwise are on a par with, the current state-of-the-art for instance
discrimination self-supervised learning approaches to audio representation
learning. Code at https://github.com/jonahanton/SSL_audio.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2022 18:17:11 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Anton",
"Jonah",
""
],
[
"Coppock",
"Harry",
""
],
[
"Shukla",
"Pancham",
""
],
[
"Schuller",
"Bjorn W.",
""
]
] |
new_dataset
| 0.999279 |
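The Barlow Twins objective adapted above drives the cross-correlation matrix of two batch-normalized embedding views toward the identity. A simplified pure-Python re-implementation of the loss (our own sketch; the paper works with deep networks on AudioSet, not this toy code):

```python
def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins objective: make the cross-correlation matrix of
    two batch-normalized embedding views close to the identity.
    z1, z2: lists of N embedding vectors of dimension D."""
    n, d = len(z1), len(z1[0])

    def normalize(z):
        # Batch-normalize each embedding dimension (zero mean, unit std).
        out = []
        for col in zip(*z):
            mu = sum(col) / n
            sd = (sum((v - mu) ** 2 for v in col) / n) ** 0.5
            out.append([(v - mu) / sd for v in col])
        return out  # d lists of length n

    a, b = normalize(z1), normalize(z2)
    loss = 0.0
    for i in range(d):
        for j in range(d):
            c_ij = sum(a[i][k] * b[j][k] for k in range(n)) / n
            if i == j:
                loss += (1.0 - c_ij) ** 2   # invariance term
            else:
                loss += lam * c_ij ** 2     # redundancy-reduction term
    return loss
```

With two identical, already-decorrelated views the cross-correlation matrix is exactly the identity and the loss vanishes, which is the fixed point the objective optimizes toward.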
2211.13194
|
Siddharth Agrawal Mr.
|
Siddharth Agrawal and Keyur D. Joshi
|
Indian Commercial Truck License Plate Detection and Recognition for
Weighbridge Automation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detection and recognition of a license plate are important when automating
weighbridge services. While many large databases are available for Latin and
Chinese alphanumeric license plates, data for Indian License Plates is
inadequate. In particular, databases of Indian commercial truck license plates
are inadequate, despite the fact that commercial vehicle license plate
recognition plays a profound role in terms of logistics management and
weighbridge automation. Moreover, models to recognise license plates are not
effectively able to generalise to such data due to its challenging nature, and
due to the abundant frequency of handwritten license plates, leading to the
usage of diverse font styles. Thus, a database and effective models to
recognise and detect such license plates are crucial. This paper provides a
database on commercial truck license plates, and using state-of-the-art models
in real-time object detection (You Only Look Once version 7) and scene text
recognition (Permuted Autoregressive Sequence Models), our method outperforms
the other cited references, where the maximum accuracy obtained was less than
90%, while we have achieved 95.82% accuracy in our algorithm implementation on
the presented challenging license plate dataset. Index Terms- Automatic License
Plate Recognition, character recognition, license plate detection, vision
transformer.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 18:28:12 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 20:48:04 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Agrawal",
"Siddharth",
""
],
[
"Joshi",
"Keyur D.",
""
]
] |
new_dataset
| 0.999879 |
2212.00289
|
Joseph Chow
|
Zhexi Fu, Joseph Y. J. Chow
|
Dial-a-ride problem with modular platooning and en-route transfers
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Modular vehicles (MV) possess the ability to physically connect/disconnect
with each other and travel in platoon with less energy consumption. A fleet of
demand-responsive transit vehicles with such technology can serve passengers
door to door or have vehicles deviate to platoon with each other to travel at
lower cost and allow for en-route passenger transfers before splitting. A mixed
integer linear programming (MILP) model is formulated to solve this "modular
dial-a-ride problem" (MDARP). A heuristic algorithm based on
Steiner-tree-inspired large neighborhood search is developed to solve the MDARP
for practical scenarios. A set of small-scale synthetic numerical experiments
are tested to evaluate the optimality gap and computation time between exact
solutions of the MDARP using commercial software and the proposed heuristic.
Large-scale experiments are conducted on the Anaheim network with 378 candidate
join/split nodes to further explore the potentials and identify the ideal
operation scenarios of MVs. The results show that MV technology can save up to
52.0% in vehicle travel cost, 35.6% in passenger service time, and 29.4% in
total cost against existing on-demand mobility services in the scenarios
tested. Results suggest that MVs best benefit from platooning by serving
"enclave pairs" as a hub-and-spoke service.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2022 05:29:46 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Dec 2022 15:16:20 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Fu",
"Zhexi",
""
],
[
"Chow",
"Joseph Y. J.",
""
]
] |
new_dataset
| 0.994406 |
2212.05322
|
Michael Nelson
|
Michael L. Nelson
|
Twitter DM Videos Are Accessible to Unauthenticated Users
|
22 pages, 7 figures, v2 adds "available this way since 2016" and
"http/https" discussion
| null | null | null |
cs.SI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Videos shared in Twitter Direct Messages (DMs) have opaque URLs based on
hashes of their content, but are otherwise available to unauthenticated HTTP
users. These DM video URLs are thus hard to guess, but if they were somehow
discovered, they are available to any user, including users without Twitter
credentials (i.e., twitter.com specific HTTP Cookie or Authorization request
headers). This includes web archives, such as the well-known Internet Archive
Wayback Machine, which can be used to move DM videos to domains outside of
twitter.com. This lack of authentication for DM videos is in contrast to
Twitter's model for images in DMs, which also have opaque URLs but require a
session-specific HTTP cookie shared only between the DM participants. We review
a minimal reproducible example of an image and video shared between two demo
accounts, and show that while the image is protected from unauthenticated
access as well as from an authenticated third party, the video itself is
persistently available for any user who knows the URL.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 15:37:48 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 22:51:19 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Nelson",
"Michael L.",
""
]
] |
new_dataset
| 0.999489 |
2212.11342
|
Olawale Salaudeen
|
Olawale Salaudeen, Oluwasanmi Koyejo
|
Target Conditioned Representation Independence (TCRI); From
Domain-Invariant to Domain-General Representations
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a Target Conditioned Representation Independence (TCRI) objective
for domain generalization. TCRI addresses the limitations of existing domain
generalization methods due to incomplete constraints. Specifically, TCRI
implements regularizers motivated by conditional independence constraints that
are sufficient to strictly learn complete sets of invariant mechanisms, which
we show are necessary and sufficient for domain generalization. Empirically, we
show that TCRI is effective on both synthetic and real-world data. TCRI is
competitive with baselines in average accuracy while outperforming them in
worst-domain accuracy, indicating desired cross-domain stability.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 20:24:45 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Salaudeen",
"Olawale",
""
],
[
"Koyejo",
"Oluwasanmi",
""
]
] |
new_dataset
| 0.952107 |
2212.12141
|
Derek Prijatelj
|
Derek S. Prijatelj (1), Samuel Grieggs (1), Jin Huang (1), Dawei Du
(2), Ameya Shringi (2), Christopher Funk (2), Adam Kaufman (3), Eric
Robertson (3), Walter J. Scheirer (1) ((1) University of Notre Dame, (2)
Kitware, (3) PAR Government)
|
Human Activity Recognition in an Open World
|
39 pages, 16 figures, 3 tables, Pre-print submitted to JAIR
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Managing novelty in perception-based human activity recognition (HAR) is
critical in realistic settings to improve task performance over time and ensure
solution generalization outside of prior seen samples. Novelty manifests in HAR
as unseen samples, activities, objects, environments, and sensor changes, among
other ways. Novelty may be task-relevant, such as a new class or new features,
or task-irrelevant resulting in nuisance novelty, such as never before seen
noise, blur, or distorted video recordings. To perform HAR optimally,
algorithmic solutions must be tolerant to nuisance novelty, and learn over time
in the face of novelty. This paper 1) formalizes the definition of novelty in
HAR building upon the prior definition of novelty in classification tasks, 2)
proposes an incremental open world learning (OWL) protocol and applies it to
the Kinetics datasets to generate a new benchmark KOWL-718, 3) analyzes the
performance of current state-of-the-art HAR models when novelty is introduced
over time, 4) provides a containerized and packaged pipeline for reproducing
the OWL protocol and for modifying for any future updates to Kinetics. The
experimental analysis includes an ablation study of how the different models
perform under various conditions as annotated by Kinetics-AVA. The protocol as
an algorithm for reproducing experiments using the KOWL-718 benchmark will be
publicly released with code and containers at
https://github.com/prijatelj/human-activity-recognition-in-an-open-world. The
code may be used to analyze different annotations and subsets of the Kinetics
datasets in an incremental open world fashion, as well as be extended as
further updates to Kinetics are released.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 04:31:20 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Prijatelj",
"Derek S.",
""
],
[
"Grieggs",
"Samuel",
""
],
[
"Huang",
"Jin",
""
],
[
"Du",
"Dawei",
""
],
[
"Shringi",
"Ameya",
""
],
[
"Funk",
"Christopher",
""
],
[
"Kaufman",
"Adam",
""
],
[
"Robertson",
"Eric",
""
],
[
"Scheirer",
"Walter J.",
""
]
] |
new_dataset
| 0.987418 |
2212.12146
|
Raihan Tanvir
|
Md Tanvir Rouf Shawon, Raihan Tanvir, Md. Golam Rabiul Alam
|
Bengali Handwritten Digit Recognition using CNN with Explainable AI
|
2022 4th International Conference on Sustainable Technologies for
Industry 4.0 (STI), pp. 1-6
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Handwritten character recognition is a hot topic for research nowadays. If we
can convert a handwritten piece of paper into a text-searchable document using
the Optical Character Recognition (OCR) technique, we can easily understand the
content and do not need to read the handwritten document. OCR in the English
language is very common, but in the Bengali language, it is very hard to find a
good quality OCR application. If we can merge machine learning and deep
learning with OCR, it could be a huge contribution to this field. Various
researchers have proposed a number of strategies for recognizing Bengali
handwritten characters. A lot of ML algorithms and deep neural networks were
used in their work, but the explanations of their models are not available. In
our work, we have used various machine learning algorithms and CNN to recognize
handwritten Bengali digits. We have got acceptable accuracy from some ML
models, and CNN has given us great testing accuracy. Grad-CAM was used as an
XAI method on our CNN model, which gave us insights into the model and helped
us detect the region of interest for recognizing a digit from an image.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 04:40:20 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Shawon",
"Md Tanvir Rouf",
""
],
[
"Tanvir",
"Raihan",
""
],
[
"Alam",
"Md. Golam Rabiul",
""
]
] |
new_dataset
| 0.99034 |
2212.12151
|
Ahmed Tanvir Mahdad
|
Ahmed Tanvir Mahdad, Cong Shi, Zhengkun Ye, Tianming Zhao, Yan Wang,
Yingying Chen, Nitesh Saxena
|
EarSpy: Spying Caller Speech and Identity through Tiny Vibrations of
Smartphone Ear Speakers
| null | null | null | null |
cs.SD cs.CR eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Eavesdropping from the user's smartphone is a well-known threat to the user's
safety and privacy. Existing studies show that loudspeaker reverberation can
inject speech into motion sensor readings, leading to speech eavesdropping.
Meanwhile, more devastating attacks on ear speakers, which produce much
smaller-scale vibrations, were believed impossible to mount with
zero-permission motion sensors. In this work, we revisit this important line of
research. We
explore recent trends in smartphone manufacturers that include extra/powerful
speakers in place of small ear speakers, and demonstrate the feasibility of
using motion sensors to capture such tiny speech vibrations. We investigate the
impacts of these new ear speakers on built-in motion sensors and examine the
potential to elicit private speech information from the minute vibrations. Our
designed system EarSpy can successfully detect word regions, time, and
frequency domain features and generate a spectrogram for each word region. We
train and test the extracted data using classical machine learning algorithms
and convolutional neural networks. We found up to 98.66% accuracy in gender
detection, 92.6% accuracy in speaker detection, and 56.42% accuracy in digit
detection (over 5X higher than random selection (10%)). Our
result unveils the potential threat of eavesdropping on phone conversations
from ear speakers using motion sensors.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 05:05:09 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Mahdad",
"Ahmed Tanvir",
""
],
[
"Shi",
"Cong",
""
],
[
"Ye",
"Zhengkun",
""
],
[
"Zhao",
"Tianming",
""
],
[
"Wang",
"Yan",
""
],
[
"Chen",
"Yingying",
""
],
[
"Saxena",
"Nitesh",
""
]
] |
new_dataset
| 0.987086 |
2212.12204
|
Shuo Wang
|
Haoran Wang, Yan Zhu, Wenzheng Qin, Yizhe Zhang, Pinghong Zhou,
Quanlin Li, Shuo Wang and Zhijian Song
|
EndoBoost: a plug-and-play module for false positive suppression during
computer-aided polyp detection in real-world colonoscopy (with dataset)
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The advance of computer-aided detection systems using deep learning opened a
new scope in endoscopic image analysis. However, the learning-based models
developed on closed datasets are susceptible to unknown anomalies in complex
clinical environments. In particular, the high false positive rate of polyp
detection remains a major challenge in clinical practice. In this work, we
release the FPPD-13 dataset, which provides a taxonomy and real-world cases of
typical false positives during computer-aided polyp detection in real-world
colonoscopy. We further propose a post-hoc module EndoBoost, which can be
plugged into generic polyp detection models to filter out false positive
predictions. This is realized by generative learning of the polyp manifold with
normalizing flows and rejecting false positives through density estimation.
Compared to supervised classification, this anomaly detection paradigm achieves
better data efficiency and robustness in open-world settings. Extensive
experiments demonstrate a promising false positive suppression in both
retrospective and prospective validation. In addition, the released dataset can
be used to perform 'stress' tests on established detection systems and
encourages further research toward robust and reliable computer-aided
endoscopic image analysis. The dataset and code will be publicly available at
http://endoboost.miccai.cloud.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 08:34:36 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Wang",
"Haoran",
""
],
[
"Zhu",
"Yan",
""
],
[
"Qin",
"Wenzheng",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Zhou",
"Pinghong",
""
],
[
"Li",
"Quanlin",
""
],
[
"Wang",
"Shuo",
""
],
[
"Song",
"Zhijian",
""
]
] |
new_dataset
| 0.999702 |
2212.12213
|
Priyank Bhandia
|
Ishita Goyal, Priyank Bhandia, Sanjana Dulam
|
Finetuning for Sarcasm Detection with a Pruned Dataset
|
5 pages, 3 tables
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Sarcasm is a form of irony that involves saying or writing something that is
the opposite of what one really means, often in a humorous or mocking way. It
is often used to ridicule someone or something, or simply to be amusing.
Sarcasm is usually conveyed through tone of voice, facial
expressions, or other forms of nonverbal communication, but it can also be
indicated by the use of certain words or phrases that are typically associated
with irony or humor. Sarcasm detection is difficult because it relies on
context and non-verbal cues. It can also be culturally specific, subjective and
ambiguous. In this work, we fine-tune the RoBERTa based sarcasm detection model
presented in Abaskohi et al. [2022] to get to within 0.02 F1 of the
state-of-the-art (Hercog et al. [2022]) on the iSarcasm dataset (Oprea and
Magdy [2019]). This performance is achieved by augmenting iSarcasm with a
pruned version of the Self Annotated Reddit Corpus (SARC) (Khodak et al.
[2017]). Our pruned version is 100 times smaller than the subset of SARC used
to train the state-of-the-art model.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 08:59:30 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Goyal",
"Ishita",
""
],
[
"Bhandia",
"Priyank",
""
],
[
"Dulam",
"Sanjana",
""
]
] |
new_dataset
| 0.999922 |
2212.12360
|
Jaynarayan Tudu PhD
|
Lakshmi Bhanuprakash Reddy Konduru, Vijaya Lakshmi, Jaynarayan T Tudu
|
Approximate Scan Flip-flop to Reduce Functional Path Delay and Power
Consumption
|
6 pages, 7 figures
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The scan-based testing has been widely used as a Design-for-Test (DfT)
mechanism for most recent designs. It has gained importance not only in
manufacturing testing but also in online testing and debugging. However, the
multiplexer-based scan flip-flop, which is the basic building block of scan
chain, is troubled with a set of issues such as mux-induced additional delay
and test power among others. The effect of additional delay due to the
multiplexer on the functional path (D in path) has started influencing the
clock period, particularly at the lower technology nodes for the
high-performance design. In this work, we propose two scan flip-flop designs
using 10nm FinFET technology to address the problem of mux-induced delay and
internal power. The proposed designs have been experimentally validated for
performance gain and power reduction and compared to the existing designs.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 14:29:06 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Konduru",
"Lakshmi Bhanuprakash Reddy",
""
],
[
"Lakshmi",
"Vijaya",
""
],
[
"Tudu",
"Jaynarayan T",
""
]
] |
new_dataset
| 0.973798 |
2212.12411
|
Ha Manh Bui
|
Ha Manh Bui and Iliana Maifeld-Carucci
|
Benchmark for Uncertainty & Robustness in Self-Supervised Learning
|
15 pages, 3 tables, 6 figures, the class project in CSCI 601.771:
Self-supervised Statistical Models - Johns Hopkins University - Fall 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-Supervised Learning (SSL) is crucial for real-world applications,
especially in data-hungry domains such as healthcare and self-driving cars. In
addition to a lack of labeled data, these applications also suffer from
distributional shifts. Therefore, an SSL method should provide robust
generalization and uncertainty estimation in the test dataset to be considered
a reliable model in such high-stakes domains. However, existing approaches
often focus on generalization, without evaluating the model's uncertainty. The
ability to compare SSL techniques for improving these estimates is therefore
critical for research on the reliability of self-supervision models. In this
paper, we explore variants of SSL methods, including Jigsaw Puzzles, Context,
Rotation, Geometric Transformations Prediction for vision, as well as BERT and
GPT for language tasks. We train SSL in auxiliary learning for vision and
pre-training for language model, then evaluate the generalization (in-out
classification accuracy) and uncertainty (expected calibration error) across
different distribution covariate shift datasets, including MNIST-C, CIFAR-10-C,
CIFAR-10.1, and MNLI. Our goal is to create a benchmark with outputs from
experiments, providing a starting point for new SSL methods in Reliable Machine
Learning. All source code to reproduce results is available at
https://github.com/hamanhbui/reliable_ssl_baselines.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 15:46:23 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Bui",
"Ha Manh",
""
],
[
"Maifeld-Carucci",
"Iliana",
""
]
] |
new_dataset
| 0.967187 |
2212.12436
|
Avinash Prabu
|
Avinash Prabu, Nitya Ranjan, Lingxi Li, Renran Tian, Stanley Chien,
Yaobin Chen, Rini Sherony
|
SceNDD: A Scenario-based Naturalistic Driving Dataset
|
Conference: 2022 IEEE 25th International Conference on Intelligent
Transportation Systems (ITSC). Link:
https://ieeexplore.ieee.org/document/9921953
| null |
10.1109/ITSC55140.2022.9921953
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose SceNDD: a scenario-based naturalistic driving
dataset that is built upon data collected from an instrumented vehicle in
downtown Indianapolis. The data collection was completed in 68 driving sessions
with different drivers, where each session lasted about 20--40 minutes. The
main goal of creating this dataset is to provide the research community with
real driving scenarios that have diverse trajectories and driving behaviors.
The dataset contains ego-vehicle's waypoints, velocity, yaw angle, as well as
non-ego actor's waypoints, velocity, yaw angle, entry-time, and exit-time.
Certain flexibility is provided to users so that actors, sensors, lanes, roads,
and obstacles can be added to the existing scenarios. We used a Joint
Probabilistic Data Association (JPDA) tracker to detect non-ego vehicles on the
road. We present some preliminary results of the proposed dataset and a few
applications associated with it. The complete dataset is expected to be
released by early 2023.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 18:26:28 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Prabu",
"Avinash",
""
],
[
"Ranjan",
"Nitya",
""
],
[
"Li",
"Lingxi",
""
],
[
"Tian",
"Renran",
""
],
[
"Chien",
"Stanley",
""
],
[
"Chen",
"Yaobin",
""
],
[
"Sherony",
"Rini",
""
]
] |
new_dataset
| 0.999701 |
2212.12495
|
P. W. H. Pinkse
|
E. Marakis, U. R\"uhrmair, M. Lachner, R. Uppu, B. \v{S}kori\'c,
P.W.H. Pinkse
|
Clones of the Unclonable: Nanoduplicating Optical PUFs and Applications
|
9 pages, 6 figures
| null | null | null |
cs.CR physics.optics
|
http://creativecommons.org/licenses/by/4.0/
|
Physical unclonable functions (PUFs), physical objects that are practically
unclonable because of their random and uncontrollable manufacturing variations,
are becoming increasingly popular as security primitives and unique identifiers
in a fully digitized world. One of the central PUF premises states that both
friends and foes, both legitimate manufacturers and external attackers alike,
cannot clone a PUF, producing two instances that are the same. Using the latest
nanofabrication techniques, we show that this premise is not always met: We
demonstrate the possibility of effective PUF duplication through sophisticated
manufacturers by producing 63 copies of a non-trivial optical scattering
structure which exhibit essentially the same scattering behavior. The remaining
minuscule differences are close to or below noise levels, whence the duplicates
have to be considered fully equivalent from a PUF perspective. The possibility
for manufacturer-based optical PUF duplication has positive and negative
consequences at the same time: While fully breaking the security of certain
schemes, it enables new applications, too. For example, it facilitates
unforgeable labels for valuable items; the first key-free group identification
schemes over digital networks; or new types of encryption/decryption devices
that do not contain secret keys.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 17:37:29 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Marakis",
"E.",
""
],
[
"Rührmair",
"U.",
""
],
[
"Lachner",
"M.",
""
],
[
"Uppu",
"R.",
""
],
[
"Škorić",
"B.",
""
],
[
"Pinkse",
"P. W. H.",
""
]
] |
new_dataset
| 0.994248 |
2212.12502
|
Oleg Kiselyov
|
Oleg Kiselyov, Toshihiro Nakayama (Tohoku University, Japan)
|
Demo: New View on Plasma Fractals -- From the High Point of Array
Languages
|
Peer-reviewed, accepted for presentation and presented at the ACM
SIGPLAN FARM 2022 workshop
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Plasma fractals is a technique to generate random and realistic clouds,
textures and terrains -- traditionally using recursive subdivision. We
demonstrate a new approach, based on iterative expansion. It gives a family of
algorithms that includes the standard square-diamond algorithm and offers
various interesting ways of extending it, and hence generating nicer pictures.
The approach came about from exploring plasma fractals from the point of view
of an array language (which we implemented as an embedded DSL in OCaml) -- that
is, from the perspective of declaring whole image transformations rather than
fiddling with individual pixels.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 13:37:32 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Kiselyov",
"Oleg",
"",
"Tohoku University, Japan"
],
[
"Nakayama",
"Toshihiro",
"",
"Tohoku University, Japan"
]
] |
new_dataset
| 0.997624 |
2212.12523
|
Nabil Simaan
|
Andrew L. Orekhov, Elan Z. Ahronovich, and Nabil Simaan
|
Lie Group Formulation and Sensitivity Analysis for Shape Sensing of
Variable Curvature Continuum Robots with General String Encoder Routing
|
17 pages, 17 figures. Accepted for publication in IEEE Transactions
on Robotics
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers a combination of actuation tendons and measurement
strings to achieve accurate shape sensing and direct kinematics of continuum
robots. Assuming general string routing, a methodical Lie group formulation for
the shape sensing of these robots is presented. The shape kinematics is
expressed using arc-length-dependent curvature distributions parameterized by
modal functions, and the Magnus expansion for Lie group integration is used to
express the shape as a product of exponentials. The tendon and string length
kinematic constraints are solved for the modal coefficients and the
configuration space and body Jacobian are derived. The noise amplification
index for the shape reconstruction problem is defined and used for optimizing
the string/tendon routing paths, and a planar simulation study shows the
minimal number of strings/tendons needed for accurate shape reconstruction. A
torsionally stiff continuum segment is used for experimental evaluation,
demonstrating mean (maximal) end-effector absolute position error of less than
2% (5%) of total length. Finally, a simulation study of a torsionally compliant
segment demonstrates the approach for general deflections and string routings.
We believe that the methods of this paper can benefit the design process,
sensing and control of continuum and soft robots.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 18:32:39 GMT"
}
] | 2022-12-26T00:00:00 |
[
[
"Orekhov",
"Andrew L.",
""
],
[
"Ahronovich",
"Elan Z.",
""
],
[
"Simaan",
"Nabil",
""
]
] |
new_dataset
| 0.971142 |
2003.08567
|
Vivek Sharma
|
Ramesh Raskar, Isabel Schunemann, Rachel Barbar, Kristen Vilcans, Jim
Gray, Praneeth Vepakomma, Suraj Kapa, Andrea Nuzzo, Rajiv Gupta, Alex Berke,
Dazza Greenwood, Christian Keegan, Shriank Kanaparti, Robson Beaudry, David
Stansbury, Beatriz Botero Arcila, Rishank Kanaparti, Vitor Pamplona,
Francesco M Benedetti, Alina Clough, Riddhiman Das, Kaushal Jain, Khahlil
Louisy, Greg Nadeau, Steve Penrod, Yasaman Rajaee, Abhishek Singh, Greg
Storm, John Werner, Ayush Chopra, Gauri Gupta, Vivek Sharma
|
Apps Gone Rogue: Maintaining Personal Privacy in an Epidemic
|
15 pages
| null | null | null |
cs.CR cs.CY cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Containment, the key strategy in quickly halting an epidemic, requires rapid
identification and quarantine of the infected individuals, determination of
whom they have had close contact with in the previous days and weeks, and
decontamination of locations the infected individual has visited. Achieving
containment demands accurate and timely collection of the infected individual's
location and contact history. Traditionally, this process is labor intensive,
susceptible to memory errors, and fraught with privacy concerns. With the
recent almost ubiquitous availability of smart phones, many people carry a tool
which can be utilized to quickly identify an infected individual's contacts
during an epidemic, such as the current 2019 novel Coronavirus crisis.
Unfortunately, the very same first-generation contact tracing tools have been
used to expand mass surveillance, limit individual freedoms and expose the most
private details about individuals. We seek to outline the different
technological approaches to mobile-phone based contact-tracing to date and
elaborate on the opportunities and the risks that these technologies pose to
individuals and societies. We describe advanced security enhancing approaches
that can mitigate these risks and describe trade-offs one must make when
developing and deploying any mass contact-tracing technology. With this paper,
our aim is to continue to grow the conversation regarding contact-tracing for
epidemic and pandemic containment and discuss opportunities to advance this
space. We invite feedback and discussion.
|
[
{
"version": "v1",
"created": "Thu, 19 Mar 2020 04:22:24 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 23:38:06 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Raskar",
"Ramesh",
""
],
[
"Schunemann",
"Isabel",
""
],
[
"Barbar",
"Rachel",
""
],
[
"Vilcans",
"Kristen",
""
],
[
"Gray",
"Jim",
""
],
[
"Vepakomma",
"Praneeth",
""
],
[
"Kapa",
"Suraj",
""
],
[
"Nuzzo",
"Andrea",
""
],
[
"Gupta",
"Rajiv",
""
],
[
"Berke",
"Alex",
""
],
[
"Greenwood",
"Dazza",
""
],
[
"Keegan",
"Christian",
""
],
[
"Kanaparti",
"Shriank",
""
],
[
"Beaudry",
"Robson",
""
],
[
"Stansbury",
"David",
""
],
[
"Arcila",
"Beatriz Botero",
""
],
[
"Kanaparti",
"Rishank",
""
],
[
"Pamplona",
"Vitor",
""
],
[
"Benedetti",
"Francesco M",
""
],
[
"Clough",
"Alina",
""
],
[
"Das",
"Riddhiman",
""
],
[
"Jain",
"Kaushal",
""
],
[
"Louisy",
"Khahlil",
""
],
[
"Nadeau",
"Greg",
""
],
[
"Penrod",
"Steve",
""
],
[
"Rajaee",
"Yasaman",
""
],
[
"Singh",
"Abhishek",
""
],
[
"Storm",
"Greg",
""
],
[
"Werner",
"John",
""
],
[
"Chopra",
"Ayush",
""
],
[
"Gupta",
"Gauri",
""
],
[
"Sharma",
"Vivek",
""
]
] |
new_dataset
| 0.985145 |
2012.02124
|
Senthil Yogamani
|
Hazem Rashed, Eslam Mohamed, Ganesh Sistu, Varun Ravi Kumar, Ciaran
Eising, Ahmad El-Sallab and Senthil Yogamani
|
Generalized Object Detection on Fisheye Cameras for Autonomous Driving:
Dataset, Representations and Baseline
|
Camera ready version. Accepted for presentation at Winter Conference
on Applications of Computer Vision 2021. Dataset is shared at
https://drive.google.com/drive/folders/1bobmY2wlIBozeU5ZgPfYPqVAnpPw4QrM
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection is a comprehensively studied problem in autonomous driving.
However, it has been relatively less explored in the case of fisheye cameras.
The standard bounding box fails in fisheye cameras due to the strong radial
distortion, particularly in the image's periphery. We explore better
representations like oriented bounding box, ellipse, and generic polygon for
object detection in fisheye images in this work. We use the IoU metric to
compare these representations using accurate instance segmentation ground
truth. We design a novel curved bounding box model that has optimal properties
for fisheye distortion models. We also design a curvature adaptive perimeter
sampling method for obtaining polygon vertices, improving relative mAP score by
4.9% compared to uniform sampling. Overall, the proposed polygon model improves
mIoU relative accuracy by 40.3%. It is the first detailed study on object
detection on fisheye cameras for autonomous driving scenarios to the best of
our knowledge. The dataset comprising of 10,000 images along with all the
object representations ground truth will be made public to encourage further
research. We summarize our work in a short video with qualitative results at
https://youtu.be/iLkOzvJpL-A.
|
[
{
"version": "v1",
"created": "Thu, 3 Dec 2020 18:00:16 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 23:10:50 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Rashed",
"Hazem",
""
],
[
"Mohamed",
"Eslam",
""
],
[
"Sistu",
"Ganesh",
""
],
[
"Kumar",
"Varun Ravi",
""
],
[
"Eising",
"Ciaran",
""
],
[
"El-Sallab",
"Ahmad",
""
],
[
"Yogamani",
"Senthil",
""
]
] |
new_dataset
| 0.99855 |
2107.04782
|
Huabin Liu
|
Shuyuan Li, Huabin Liu, Rui Qian, Yuxi Li, John See, Mengjuan Fei,
Xiaoyuan Yu, Weiyao Lin
|
TA2N: Two-Stage Action Alignment Network for Few-shot Action Recognition
|
Published in AAAI 2022
| null |
10.1609/aaai.v36i2.20029
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Few-shot action recognition aims to recognize novel action classes (query)
using just a few samples (support). The majority of current approaches follow
the metric learning paradigm, which learns to compare the similarity between
videos. Recently, it has been observed that directly measuring this similarity
is not ideal since different action instances may show distinctive temporal
distribution, resulting in severe misalignment issues across query and support
videos. In this paper, we address this problem from two distinct aspects --
action duration misalignment and action evolution misalignment. We address them
sequentially through a Two-stage Action Alignment Network (TA2N). The first
stage locates the action by learning a temporal affine transform, which warps
each video feature to its action duration while dismissing the
action-irrelevant features (e.g. background). Next, the second stage
coordinates the query feature to match the spatial-temporal action evolution of
the support by performing temporal rearrangement and spatial offset prediction.
Extensive
experiments on benchmark datasets show the potential of the proposed method in
achieving state-of-the-art performance for few-shot action recognition. The code
of this project can be found at https://github.com/R00Kie-Liu/TA2N
|
[
{
"version": "v1",
"created": "Sat, 10 Jul 2021 07:22:49 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Sep 2021 04:40:53 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jul 2022 10:47:00 GMT"
},
{
"version": "v4",
"created": "Thu, 22 Dec 2022 08:40:02 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Li",
"Shuyuan",
""
],
[
"Liu",
"Huabin",
""
],
[
"Qian",
"Rui",
""
],
[
"Li",
"Yuxi",
""
],
[
"See",
"John",
""
],
[
"Fei",
"Mengjuan",
""
],
[
"Yu",
"Xiaoyuan",
""
],
[
"Lin",
"Weiyao",
""
]
] |
new_dataset
| 0.976199 |
2108.12387
|
Jo\~ao Barreto
|
Paulo Silva, Miguel Matos and Jo\~ao Barreto
|
NimbleChain: Speeding up cryptocurrencies in general-purpose
permissionless blockchains
| null |
ACM Distributed Ledger Technologies, 2022
|
10.1145/3573895
| null |
cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Nakamoto's seminal work gave rise to permissionless blockchains -- as well as
a wide range of proposals to mitigate their performance shortcomings. Despite
substantial throughput and energy efficiency achievements, most proposals only
bring modest (or marginal) gains in transaction commit latency. Consequently,
commit latencies in today's permissionless blockchain landscape remain
prohibitively high. This paper proposes NimbleChain, a novel algorithm that
extends permissionless blockchains based on Nakamoto consensus with a fast path
that delivers causal promises of commitment, or simply promises. Since promises
only partially order transactions, their latency is only a small fraction of
the totally-ordered commitment latency of Nakamoto consensus. Still, the weak
consistency guarantees of promises are strong enough to correctly implement
cryptocurrencies. To the best of our knowledge, NimbleChain is the first system
to bring together fast, partially-ordered transactions with consensus-based,
totally-ordered transactions in a permissionless setting. This hybrid
consistency model is able to speed up cryptocurrency transactions while still
supporting smart contracts, which typically have (strong) sequential
consistency needs. We implement NimbleChain as an extension of Ethereum and
evaluate it in a 500-node geo-distributed deployment. The results show
NimbleChain can promise cryptocurrency transactions up to an order of
magnitude faster than a vanilla Ethereum implementation, with marginal
overheads.
|
[
{
"version": "v1",
"created": "Fri, 27 Aug 2021 16:50:15 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 16:18:25 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Silva",
"Paulo",
""
],
[
"Matos",
"Miguel",
""
],
[
"Barreto",
"João",
""
]
] |
new_dataset
| 0.98409 |
2205.04392
|
Uli Fahrenberg
|
Sven Dziadek, Uli Fahrenberg, Philipp Schlehuber-Caissier
|
Energy B\"uchi Problems
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show how to efficiently solve energy B\"uchi problems in finite weighted
automata and in one-clock weighted timed automata. Solving the former problem
is our main contribution and is handled by a modified version of Bellman-Ford
interleaved with Couvreur's algorithm. The latter problem is handled via a
reduction to the former relying on the corner-point abstraction. All our
algorithms are freely available and implemented in a tool based on the
open-source platforms TChecker and Spot.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 15:52:16 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 14:13:16 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Dziadek",
"Sven",
""
],
[
"Fahrenberg",
"Uli",
""
],
[
"Schlehuber-Caissier",
"Philipp",
""
]
] |
new_dataset
| 0.986925 |
2206.09592
|
Yunhao Ge
|
Yunhao Ge, Jiashu Xu, Brian Nlong Zhao, Neel Joshi, Laurent Itti,
Vibhav Vineet
|
DALL-E for Detection: Language-driven Compositional Image Synthesis for
Object Detection
|
v3(same as v2) version, update structure (add foreground generation,
stable diffusion), add more experiments
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new paradigm to automatically generate training data with
accurate labels at scale using text-to-image synthesis frameworks (e.g.,
DALL-E, Stable Diffusion, etc.). The proposed approach decouples training data
generation into foreground object mask generation and background (context)
image generation. For foreground object mask generation, we use a simple
textual template with object class name as input to DALL-E to generate a
diverse set of foreground images. A foreground-background segmentation
algorithm is then used to generate foreground object masks. Next, in order to
generate context images, first a language description of the context is
generated by applying an image captioning method on a small set of images
representing the context. These language descriptions are then used to generate
diverse sets of context images using the DALL-E framework. These are then
composited with object masks generated in the first step to provide an
augmented training set for a classifier. We demonstrate the advantages of our
approach on four object detection datasets including on Pascal VOC and COCO
object detection tasks. Furthermore, we also highlight the compositional nature
of our data generation approach on out-of-distribution and zero-shot data
generation scenarios.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 06:43:17 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2022 17:31:38 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Dec 2022 00:55:29 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Ge",
"Yunhao",
""
],
[
"Xu",
"Jiashu",
""
],
[
"Zhao",
"Brian Nlong",
""
],
[
"Joshi",
"Neel",
""
],
[
"Itti",
"Laurent",
""
],
[
"Vineet",
"Vibhav",
""
]
] |
new_dataset
| 0.984259 |
2207.11365
|
Tushar Nagarajan
|
Tushar Nagarajan, Santhosh Kumar Ramakrishnan, Ruta Desai, James
Hillis, Kristen Grauman
|
EgoEnv: Human-centric environment representations from egocentric video
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
First-person video highlights a camera-wearer's activities in the context of
their persistent environment. However, current video understanding approaches
reason over visual features from short video clips that are detached from the
underlying physical space and capture only what is immediately visible. We
present an approach that links egocentric video and the environment by learning
representations that are predictive of the camera-wearer's (potentially unseen)
local surroundings to facilitate human-centric environment understanding. We
train such models using videos from agents in simulated 3D environments where
the environment is fully observable, and test them on human-captured real-world
videos from unseen environments. On two human-centric video tasks, we show that
state-of-the-art video models equipped with our environment-aware features
consistently outperform their counterparts with traditional clip features.
Moreover, despite being trained exclusively on simulated videos, our approach
successfully handles real-world videos from HouseTours and Ego4D. Project page:
https://vision.cs.utexas.edu/projects/ego-env/
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2022 22:39:57 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 16:39:40 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Nagarajan",
"Tushar",
""
],
[
"Ramakrishnan",
"Santhosh Kumar",
""
],
[
"Desai",
"Ruta",
""
],
[
"Hillis",
"James",
""
],
[
"Grauman",
"Kristen",
""
]
] |
new_dataset
| 0.997105 |
2207.11406
|
Wenqi Yang
|
Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K.
Wong
|
PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo
|
ECCV 2022, Project page: https://ywq.github.io/psnerf
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional multi-view photometric stereo (MVPS) methods are often composed
of multiple disjoint stages, resulting in noticeable accumulated errors. In
this paper, we present a neural inverse rendering method for MVPS based on
implicit representation. Given multi-view images of a non-Lambertian object
illuminated by multiple unknown directional lights, our method jointly
estimates the geometry, materials, and lights. Our method first employs
multi-light images to estimate per-view surface normal maps, which are used to
regularize the normals derived from the neural radiance field. It then jointly
optimizes the surface normals, spatially-varying BRDFs, and lights based on a
shadow-aware differentiable rendering layer. After optimization, the
reconstructed object can be used for novel-view rendering, relighting, and
material editing. Experiments on both synthetic and real datasets demonstrate
that our method achieves far more accurate shape reconstruction than existing
MVPS and neural rendering methods. Our code and model can be found at
https://ywq.github.io/psnerf.
|
[
{
"version": "v1",
"created": "Sat, 23 Jul 2022 03:55:18 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 06:54:21 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Yang",
"Wenqi",
""
],
[
"Chen",
"Guanying",
""
],
[
"Chen",
"Chaofeng",
""
],
[
"Chen",
"Zhenfang",
""
],
[
"Wong",
"Kwan-Yee K.",
""
]
] |
new_dataset
| 0.986865 |
2209.00299
|
B.Sundar Rajan
|
Elizabath Peter, K. K. Krishnan Namboodiri, and B. Sundar Rajan
|
Coded Caching with Shared Caches and Private Caches
|
15 pages and 4 figures. Added a subsection and few proofs were
improved
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work studies the coded caching problem in a setting where the users are
simultaneously endowed with a private cache and a shared cache. The setting
consists of a server connected to a set of users, assisted by a smaller number
of helper nodes that are equipped with their own storage. In addition to the
helper cache, each user possesses a dedicated cache which is also used to
prefetch file contents. Each helper cache can serve an arbitrary number of
users, but each user gets served by only one helper cache. We consider two
scenarios: (a) the server has no prior information about the user-to-helper
cache association, and (b) the server knows the user-to-helper cache
association at the placement phase itself. We design centralized coded caching
schemes under uncoded placement for the above two settings. For case (b), two
schemes are proposed that are optimal in certain memory regimes. Further, a
cut-set based lower bound is derived and used to show that one of the proposed
schemes for case (b) is optimal in a certain memory regime.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 09:03:37 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 06:54:39 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Peter",
"Elizabath",
""
],
[
"Namboodiri",
"K. K. Krishnan",
""
],
[
"Rajan",
"B. Sundar",
""
]
] |
new_dataset
| 0.995155 |
2209.06434
|
Qiaowei Ma
|
Qiaowei Ma, Jinghui Zhong, Yitao Yang, Weiheng Liu, Ying Gao and Wing
W.Y. Ng
|
ConvNeXt Based Neural Network for Audio Anti-Spoofing
|
6 pages
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of speech conversion and speech synthesis
algorithms, automatic speaker verification (ASV) systems are vulnerable to
spoofing attacks. In recent years, researchers have proposed a number of
anti-spoofing methods based on hand-crafted features. However, using
hand-crafted features rather than the raw waveform loses implicit information
for anti-spoofing. Inspired by the promising performance of ConvNeXt in image
classification tasks, we revise the ConvNeXt network architecture and propose a
lightweight end-to-end anti-spoofing model. By integrating with the channel
attention block and using the focal loss function, the proposed model can focus
on the most informative sub-bands of speech representations and the difficult
samples that are hard to classify. Experiments show that our proposed system
could achieve an equal error rate of 0.64% and min-tDCF of 0.0187 for the
ASVSpoof 2019 LA evaluation dataset, which outperforms the state-of-the-art
systems.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 05:53:37 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 02:24:02 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Nov 2022 15:28:01 GMT"
},
{
"version": "v4",
"created": "Mon, 28 Nov 2022 11:18:18 GMT"
},
{
"version": "v5",
"created": "Thu, 22 Dec 2022 03:11:59 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Ma",
"Qiaowei",
""
],
[
"Zhong",
"Jinghui",
""
],
[
"Yang",
"Yitao",
""
],
[
"Liu",
"Weiheng",
""
],
[
"Gao",
"Ying",
""
],
[
"Ng",
"Wing W. Y.",
""
]
] |
new_dataset
| 0.997912 |
2209.09395
|
Xiaomin Lin
|
Xiaomin Lin, Nitesh Jha, Mayank Joshi, Nare Karapetyan, Yiannis
Aloimonos, and Miao Yu
|
OysterSim: Underwater Simulation for Enhancing Oyster Reef Monitoring
| null |
OCEANS 2022, Hampton Roads, 2022, pp. 1-6
|
10.1109/OCEANS47191.2022.9977233
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Oysters are the living vacuum cleaners of the oceans. There is an exponential
decline in the oyster population due to over-harvesting. With the current
development of automation and AI, robots are becoming an integral part of
the environmental monitoring process and can also be utilized for oyster reef
preservation. Nevertheless, the underwater environment poses many difficulties,
both from the practical perspective - dangerous and time-consuming operations -
and the technical perspective - distorted perception and unreliable navigation. To
this end, we present a simulated environment that can be used to improve oyster
reef monitoring. The simulated environment can be used to create
photo-realistic image datasets with multiple sensor data and ground truth
location of a remotely operated vehicle(ROV). Currently, there are no
photo-realistic image datasets for oyster reef monitoring. Thus, we want to
provide a new benchmark suite to the underwater community.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 00:38:39 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Lin",
"Xiaomin",
""
],
[
"Jha",
"Nitesh",
""
],
[
"Joshi",
"Mayank",
""
],
[
"Karapetyan",
"Nare",
""
],
[
"Aloimonos",
"Yiannis",
""
],
[
"Yu",
"Miao",
""
]
] |
new_dataset
| 0.999433 |
2209.12993
|
Amit Klein
|
Moshe Kol, Amit Klein, Yossi Gilad
|
Device Tracking via Linux's New TCP Source Port Selection Algorithm
(Extended Version)
|
This is an extended version of a paper with the same name that will
be presented in the 32nd Usenix Security Symposium (USENIX 2023). UPDATE
(2022-10-08): We revised some bibliography entries and clarified some aspects
of the mathematical analysis. UPDATE (2022-12-22): Added Usenix 2023 artifact
badges and fixed some typos
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We describe a tracking technique for Linux devices, exploiting a new TCP
source port generation mechanism recently introduced to the Linux kernel. This
mechanism is based on an algorithm, standardized in RFC 6056, for boosting
security by better randomizing port selection. Our technique detects collisions
in a hash function used in the said algorithm, based on sampling TCP source
ports generated in an attacker-prescribed manner. These hash collisions depend
solely on a per-device key, and thus the set of collisions forms a device ID
that allows tracking devices across browsers, browser privacy modes,
containers, and IPv4/IPv6 networks (including some VPNs). It can distinguish
among devices with identical hardware and software, and lasts until the device
restarts.
We implemented this technique and then tested it using tracking servers in
two different locations and with Linux devices on various networks. We also
tested it on an Android device that we patched to introduce the new port
selection algorithm. The tracking technique works in real-life conditions, and
we report detailed findings about it, including its dwell time, scalability,
and success rate in different network types. We worked with the Linux kernel
team to mitigate the exploit, resulting in a security patch introduced in May
2022 to the Linux kernel, and we provide recommendations for better securing
the port selection algorithm in the paper.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2022 20:10:57 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Oct 2022 09:07:15 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Dec 2022 11:44:00 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Kol",
"Moshe",
""
],
[
"Klein",
"Amit",
""
],
[
"Gilad",
"Yossi",
""
]
] |
new_dataset
| 0.984227 |
2209.13353
|
Svetlana Pavlitskaya
|
Svetlana Pavlitskaya, Jonas Hendl, Sebastian Kleim, Leopold M\"uller,
Fabian Wylczoch and J. Marius Z\"ollner
|
Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks
against Object Detection
|
Accepted for publication at ICECCME 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial patch-based attacks aim to fool a neural network with an
intentionally generated noise, which is concentrated in a particular region of
an input image. In this work, we perform an in-depth analysis of different
patch generation parameters, including initialization, patch size, and
especially positioning a patch in an image during training. We focus on the
object vanishing attack and run experiments with YOLOv3 as a model under attack
in a white-box setting and use images from the COCO dataset. Our experiments
have shown that inserting a patch inside a window of increasing size during
training leads to a significant increase in attack strength compared to a fixed
position. The best results were obtained when a patch was positioned randomly
during training, while patch position additionally varied within a batch.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 12:59:19 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 08:53:12 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Pavlitskaya",
"Svetlana",
""
],
[
"Hendl",
"Jonas",
""
],
[
"Kleim",
"Sebastian",
""
],
[
"Müller",
"Leopold",
""
],
[
"Wylczoch",
"Fabian",
""
],
[
"Zöllner",
"J. Marius",
""
]
] |
new_dataset
| 0.999572 |
2212.11325
|
Valentino Smaldore
|
Valentino Smaldore
|
Bent functions and strongly regular graphs
| null | null | null | null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The family of bent functions is a well-known class of Boolean functions, which
is of great importance in cryptography. The Cayley graph defined on
$\mathbb{Z}_{2}^{n}$ by the support of a bent function is a strongly regular
graph $srg(v,k,\lambda,\mu)$, with $\lambda=\mu$. In this note we list the
parameters of such Cayley graphs. Moreover, a condition is given on
$(n,m)$-bent functions $F=(f_1,\ldots,f_m)$, involving the support of their
components $f_i$, and their $n$-ary symmetric differences.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 19:44:01 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Smaldore",
"Valentino",
""
]
] |
new_dataset
| 0.974571 |
2212.11369
|
Dale Chen-Song
|
Dale Chen-Song, Erfan Khalaji, Vaishali Rani
|
MM811 Project Report: Cloud Detection and Removal in Satellite Images
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
For satellite images, the presence of clouds presents a problem as clouds
obscure more than half to two-thirds of the ground information. This problem
causes many issues for reliability in a noise-free environment to communicate
data and other applications that need seamless monitoring. Removing the clouds
from the images while keeping the background pixels intact can help address the
mentioned issues. Recently, deep learning methods have become popular for
researching cloud removal by demonstrating promising results, among which
Generative Adversarial Networks (GAN) have shown considerably better
performance. In this project, we aim to address cloud removal from satellite
images using AttentionGAN and then compare our results by reproducing the
results obtained using traditional GANs and auto-encoders. We use the RICE dataset.
The outcome of this project can be used to develop applications that require
cloud-free satellite images. Moreover, our results could be helpful for making
further research improvements.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 21:14:35 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Chen-Song",
"Dale",
""
],
[
"Khalaji",
"Erfan",
""
],
[
"Rani",
"Vaishali",
""
]
] |
new_dataset
| 0.998269 |
2212.11438
|
Hyeong-Ju Kang
|
Hyeong-Ju Kang
|
AoCStream: All-on-Chip CNN Accelerator With Stream-Based Line-Buffer
Architecture
|
7 pages, 6 figures, poster paper in ACM/SIGDA International Symposium
on Field-Programmable Gate Arrays 2023
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Convolutional neural network (CNN) accelerators are being widely used for
their efficiency, but they require a large amount of memory, leading to the use
of a slow and power consuming external memory. This paper exploits two schemes
to reduce the required memory amount and ultimately to implement a CNN of
reasonable performance only with on-chip memory of a practical device like a
low-end FPGA. To reduce the memory amount of the intermediate data, a
stream-based line-buffer architecture and a dataflow for the architecture are
proposed instead of the conventional frame-based architecture, where the amount
of the intermediate data memory is proportional to the square of the input
image size. The architecture consists of layer-dedicated blocks operating in a
pipelined way with the input and output streams. Each convolutional layer block
has a line buffer storing just a few rows of input data. The sizes of the line
buffers are proportional to the width of the input image, so the architecture
requires less intermediate data storage than the conventional frame-based
architecture, especially in the trend of getting larger input size in modern
object detection CNNs. In addition to the reduced intermediate data storage,
the weight memory is reduced by the accelerator-aware pruning. The experimental
results show that a whole object detection CNN can be implemented even on a
low-end FPGA without an external memory. Compared to previous accelerators with
similar object detection accuracy, the proposed accelerator reaches much higher
throughput even with less FPGA resources of LUTs, registers, and DSPs, showing
much higher efficiency. The trained models and implemented bit files are
available at https://github.com/HyeongjuKang/accelerator-aware-pruning and
https://github.com/HyeongjuKang/aocstream.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 01:09:14 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Kang",
"Hyeong-Ju",
""
]
] |
new_dataset
| 0.992488 |
2212.11538
|
Kaixing Yang
|
Zhaoxin Fan, Kaixing Yang, Min Zhang, Zhenbo Song, Hongyan Liu, and
Jun He
|
SHLE: Devices Tracking and Depth Filtering for Stereo-based Height Limit
Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, over-height vehicle strikes frequently occur, causing great
economic cost and serious safety problems. Hence, an alert system which can
accurately discover any possible height limiting devices in advance is
necessary to be employed in modern large or medium sized cars, such as touring
cars. Detecting and estimating the height limiting devices is the key point
of a successful height limit alert system. Though some works research
height limit estimation, existing methods are either too computationally
expensive or not accurate enough. In this paper, we propose a novel
stereo-based pipeline named SHLE for height limit estimation. Our SHLE pipeline
consists of two stages. In stage 1, a novel device detection and tracking
scheme is introduced, which accurately locates the height limit devices in the
left or right image. Then, in stage 2, the depth is temporally measured,
extracted and filtered to calculate the height limit. To benchmark the
height limit estimation task, we build a large-scale dataset named "Disparity
Height", where stereo images, pre-computed disparities and ground-truth height
limit annotations are provided. We conducted extensive experiments on
"Disparity Height", and the results show that SHLE achieves an average error
below 10cm even when the car is 70m away from the devices. Our method also
outperforms all compared baselines and achieves state-of-the-art performance.
Code is available at https://github.com/Yang-Kaixing/SHLE.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 08:27:21 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Fan",
"Zhaoxin",
""
],
[
"Yang",
"Kaixing",
""
],
[
"Zhang",
"Min",
""
],
[
"Song",
"Zhenbo",
""
],
[
"Liu",
"Hongyan",
""
],
[
"He",
"Jun",
""
]
] |
new_dataset
| 0.968195 |
2212.11591
|
Arkady Zgonnikov
|
Klaas Koerten, David Abbink, Arkady Zgonnikov
|
Haptic Shared Control for Dissipating Phantom Traffic Jams
| null | null | null | null |
cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic jams occurring on highways cause increased travel time as well as
increased fuel consumption and collisions. Traffic jams without a clear cause,
such as an on-ramp or an accident, are called phantom traffic jams and are said
to make up 50% of all traffic jams. They are the result of an unstable traffic
flow caused by human driving behavior. Automating the longitudinal vehicle
motion of only 5% of all cars in the flow can dissipate phantom traffic jams.
However, driving automation introduces safety issues when human drivers need to
take over the control from the automation. We investigated whether phantom
traffic jams can be dissolved using haptic shared control. This keeps humans in
the loop and thus bypasses the problem of humans' limited capacity to take over
control, while benefiting from most advantages of automation. In an experiment
with 24 participants in a driving simulator, we tested the effect of haptic
shared control on the dynamics of traffic flow, and compared it with manual
control and full automation. We also investigated the effect of two control
types on participants' behavior during simulated silent automation failures.
Results show that haptic shared control can help dissipating phantom traffic
jams better than fully manual control but worse than full automation. We also
found that haptic shared control reduces the occurrence of unsafe situations
caused by silent automation failures compared to full automation. Our results
suggest that haptic shared control can dissipate phantom traffic jams while
preventing safety risks associated with full automation.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 10:34:34 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Koerten",
"Klaas",
""
],
[
"Abbink",
"David",
""
],
[
"Zgonnikov",
"Arkady",
""
]
] |
new_dataset
| 0.99764 |
2212.11626
|
EPTCS
|
Jens H. Weber (University of Victoria)
|
A Foundation for Functional Graph Programs: The Graph Transformation
Control Algebra (GTA)
|
In Proceedings GCM 2022, arXiv:2212.10975
|
EPTCS 374, 2022, pp. 45-58
|
10.4204/EPTCS.374.5
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Applications of graph transformation (GT) systems often require control
structures that can be used to direct GT processes. Most existing GT tools
follow a stateful computational model, where a single graph is repeatedly
modified "in-place" when GT rules are applied. The implementation of control
structures in such tools is not trivial. Common challenges include dealing with
the non-determinism inherent to rule application and transactional constraints
when executing compositions of GTs, in particular atomicity and isolation. The
complexity of associated transaction mechanisms and rule application search
algorithms (e.g., backtracking) complicates the definition of a formal
foundation for these control structures. Compared to these stateful approaches,
functional graph rewriting presents a simpler (stateless) computational model,
which simplifies the definition of a formal basis for (functional) GT control
structures. In this paper, we propose the "Graph Transformation control
Algebra" (GTA) as such a foundation. The GTA has been used as the formal basis
for implementing the control structures in the (functional) GT tool
"GrapeVine".
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 11:51:10 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Weber",
"Jens H.",
"",
"University of Victoria"
]
] |
new_dataset
| 0.957515 |
2212.11692
|
Supun Randeni
|
Supun Randeni, Michael Sacarny, Michael Benjamin, Michael
Triantafyllou
|
Morpheus: An A-sized AUV with morphing fins and algorithms for agile
maneuvering
|
20 pages, 18 figures
| null | null | null |
cs.RO cs.SY eess.SY physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We designed and constructed an A-sized base autonomous underwater vehicle
(AUV), augmented with a stack of modular and extendable hardware and software,
including autonomy, navigation, control and high fidelity simulation
capabilities (A-size stands for the standard sonobuoy form factor, with a
maximum diameter of 124 mm). Subsequently, we extended this base vehicle with a
novel tuna-inspired morphing fin payload module (referred to as the Morpheus
AUV), to achieve good directional stability and exceptional maneuverability;
properties that are highly desirable for rigid hull AUVs, but are presently
difficult to achieve because they impose contradictory requirements. The
morphing fin payload allows the base AUV to dynamically change its
stability-maneuverability qualities by using morphing fins, which can be
deployed, deflected and retracted, as needed. The base vehicle and Morpheus AUV
were both extensively field tested in-water in the Charles river,
Massachusetts, USA; by conducting hundreds of hours of operations over a period
of two years. The maneuvering capability of the Morpheus AUV was evaluated with
and without the use of morphing fins to quantify the performance improvement.
The Morpheus AUV was able to showcase an exceptional turning rate of around
25-35 deg/s. A maximum turn rate improvement of around 35% - 50% was gained
through the use of morphing fins.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 13:30:21 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Randeni",
"Supun",
""
],
[
"Sacarny",
"Michael",
""
],
[
"Benjamin",
"Michael",
""
],
[
"Triantafyllou",
"Michael",
""
]
] |
new_dataset
| 0.997409 |
2212.11715
|
Ofek Pearl
|
Ofek Pearl, Itai Lang, Yuhua Hu, Raymond A. Yeh, Rana Hanocka
|
GeoCode: Interpretable Shape Programs
|
project page: https://threedle.github.io/GeoCode/
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Mapping high-fidelity 3D geometry to a representation that allows for
intuitive edits remains an elusive goal in computer vision and graphics. The
key challenge is the need to model both continuous and discrete shape
variations. Current approaches, such as implicit shape representation, lack
straightforward interpretable encoding, while others that employ procedural
methods output coarse geometry. We present GeoCode, a technique for 3D shape
synthesis using an intuitively editable parameter space. We build a novel
program that enforces a complex set of rules and enables users to perform
intuitive and controlled high-level edits that procedurally propagate at a low
level to the entire shape. Our program produces high-quality mesh outputs by
construction. We use a neural network to map a given point cloud or sketch to
our interpretable parameter space. Once produced by our procedural program,
shapes can be easily modified. Empirically, we show that GeoCode can infer and
recover 3D shapes more accurately compared to existing techniques and we
demonstrate its ability to perform controlled local and global shape
manipulations.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 20:38:22 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Pearl",
"Ofek",
""
],
[
"Lang",
"Itai",
""
],
[
"Hu",
"Yuhua",
""
],
[
"Yeh",
"Raymond A.",
""
],
[
"Hanocka",
"Rana",
""
]
] |
new_dataset
| 0.992761 |
2212.11756
|
Yuanbo Li
|
Yuanbo Li, Yiqin Wang, Yi Chen, Ziming Yu, and Chong Han
|
N2-SAGE: Narrow-beam Near-field SAGE Algorithm for Channel Parameter
Estimation in mmWave and THz Direction-scan Measurements
|
13 pages, 8 figures, 2 tables. arXiv admin note: substantial text
overlap with arXiv:2203.16745 by other authors
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To extract channel characteristics and conduct channel modeling in
millimeter-wave (mmWave) and Terahertz (THz) bands, accurate estimations of
multi-path component (MPC) parameters in measured results are fundamental.
However, due to high frequency and narrow antenna beams in mmWave and THz
direction-scan measurements, existing channel parameter estimation algorithms
are no longer effective. In this paper, a novel narrow-beam near-field
space-alternating generalized expectation-maximization (N2-SAGE) algorithm is
proposed, which is derived by carefully considering the features of mmWave and
THz direction-scan measurement campaigns, such as near field propagation,
narrow antenna beams as well as asynchronous measurements in different scanning
directions. The delays of MPCs are calculated using spherical wave front (SWF),
which depends on delay and angles of MPCs, resulting in a high-dimensional
estimation problem. To overcome this, a novel two-phase estimation process is
proposed, including a rough estimation phase and an accurate estimation phase.
Moreover, considering the narrow antenna beams used for mmWave and THz
direction-scan measurements, the usage of partial information alleviates the
influence of background noise. Additionally, the phases of MPCs in different
scanning directions are treated as random variables, which are estimated and
reused during the estimation process, making the algorithm immune to possible
phase errors. Furthermore, performance of the proposed N2-SAGE algorithm is
validated and compared with existing channel parameter estimation algorithms,
based on simulations and measured data. Results show that the proposed N2-SAGE
algorithm greatly outperforms existing channel parameter estimation algorithms
in terms of estimation accuracy. By using the N2-SAGE algorithm, the channel is
characterized more correctly and reasonably.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 16:11:52 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Li",
"Yuanbo",
""
],
[
"Wang",
"Yiqin",
""
],
[
"Chen",
"Yi",
""
],
[
"Yu",
"Ziming",
""
],
[
"Han",
"Chong",
""
]
] |
new_dataset
| 0.998826 |
2212.11758
|
Xiang He
|
Xiang He, Teng Wang, Lei Liu, Jianan Li, Zihang Su, Yingming Guo,
Zhiying Tu, Hanchuan Xu, Zhongjie Wang
|
RescureService: A Benchmark Microservice System for the Research of
Mobile Edge and Cloud Computing
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The dramatic development of cloud and edge computing allows for better
Quality of Service (QoS) in many scenarios by deploying services on cloud and
edge servers. Microservice technology is also adopted in these scenarios to
decompose complex business logic into many small independent services.
Meanwhile, as microservice systems continue to grow, providing stable QoS in
these systems becomes a challenge, and many different approaches have been
proposed for stable QoS. However, the microservice systems used in the
experiments of these work have problems such as the low number of services and
a single type of service. Therefore, we developed the open-source benchmark
microservice system RescureService with 20+ services, including database,
front-end, business logic, data processing, and artificial intelligence
services in the disaster relief scenario. Measuring tools are provided to
measure the service properties to help researchers prepare experimental data,
and the pre-measured service properties are also presented. Meanwhile, the
fulfillment of benchmark requirements is detailed, and the results show that
our RescureService meets the requirements of a benchmark system in research.
Moreover, instructions with examples are given describing how to adopt our
system in service computing.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 01:53:16 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"He",
"Xiang",
""
],
[
"Wang",
"Teng",
""
],
[
"Liu",
"Lei",
""
],
[
"Li",
"Jianan",
""
],
[
"Su",
"Zihang",
""
],
[
"Guo",
"Yingming",
""
],
[
"Tu",
"Zhiying",
""
],
[
"Xu",
"Hanchuan",
""
],
[
"Wang",
"Zhongjie",
""
]
] |
new_dataset
| 0.998361 |
2212.11761
|
Yanyi Chen
|
Shangsheng Wen, Manxi Liu, Yanyi Chen, Yirong Chen, Futong An,
Yingcong Chen, Weipeng Guan
|
Optical Bar Code for Internet Access Application based on Optical camera
communication and Bluetooth Control
|
3 pages, 1 figure
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We demonstrate an internet access application based on optical camera
communication and Bluetooth. The app will access the website while the camera
in the phone receives the optical signal. © 2022 The Author(s)
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 11:06:03 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Wen",
"Shangsheng",
""
],
[
"Liu",
"Manxi",
""
],
[
"Chen",
"Yanyi",
""
],
[
"Chen",
"Yirong",
""
],
[
"An",
"Futong",
""
],
[
"Chen",
"Yingcong",
""
],
[
"Guan",
"Weipeng",
""
]
] |
new_dataset
| 0.998107 |
2212.11778
|
Yalin E. Sagduyu
|
Yalin E. Sagduyu
|
Adversarial Machine Learning and Defense Game for NextG Signal
Classification with Deep Learning
| null | null | null | null |
cs.NI cs.AI cs.CR cs.GT cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a game-theoretic framework to study the interactions of
attack and defense for deep learning-based NextG signal classification. NextG
systems such as the one envisioned for a massive number of IoT devices can
employ deep neural networks (DNNs) for various tasks such as user equipment
identification, physical layer authentication, and detection of incumbent users
(such as in the Citizens Broadband Radio Service (CBRS) band). By training
another DNN as the surrogate model, an adversary can launch an inference
(exploratory) attack to learn the behavior of the victim model, predict
successful operation modes (e.g., channel access), and jam them. A defense
mechanism can increase the adversary's uncertainty by introducing controlled
errors in the victim model's decisions (i.e., poisoning the adversary's
training data). This defense is effective against an attack but reduces the
performance when there is no attack. The interactions between the defender and
the adversary are formulated as a non-cooperative game, where the defender
selects the probability of defending or the defense level itself (i.e., the
ratio of falsified decisions) and the adversary selects the probability of
attacking. The defender's objective is to maximize its reward (e.g., throughput
or transmission success ratio), whereas the adversary's objective is to
minimize this reward and its attack cost. The Nash equilibrium strategies are
determined as operation modes such that no player can unilaterally improve its
utility given the other's strategy is fixed. A fictitious play is formulated
for each player to play the game repeatedly in response to the empirical
frequency of the opponent's actions. The performance in Nash equilibrium is
compared to the fixed attack and defense cases, and the resilience of NextG
signal classification against attacks is quantified.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 15:13:03 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Sagduyu",
"Yalin E.",
""
]
] |
new_dataset
| 0.998111 |
2212.11804
|
Abonia Sojasingarayar
|
Abonia Sojasingarayar, Ashish Patel
|
Monocular 3D Object Detection using Multi-Stage Approaches with
Attention and Slicing aided hyper inference
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
3D object detection is vital as it would enable us to capture objects' sizes,
orientation, and position in the world. As a result, we would be able to use
this 3D detection in real-world applications such as Augmented Reality (AR),
self-driving cars, and robotics which perceive the world the same way we do as
humans. Monocular 3D object detection is the task of drawing a 3D bounding box
around objects in a single 2D RGB image. It is a localization task without any
extra information such as depth, other sensors, or multiple images. Monocular
3D object detection is an important yet challenging task. Beyond the
significant progress in image-based 2D object detection, 3D understanding of
real-world objects is an open challenge that has not been explored extensively
thus far. In addition to the most closely related studies.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 15:36:07 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Sojasingarayar",
"Abonia",
""
],
[
"Patel",
"Ashish",
""
]
] |
new_dataset
| 0.987374 |
2212.11875
|
Vaclav Skala
|
Vaclav Skala and Vit Ondracka
|
S-patch: Modification of the Hermite parametric patch
|
Draft of the paper: S-Patch: Modification of the Hermite Parametric
Patch, ICGG 2010 conference, Kyoto, Japan, 2010
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
A new modification of the Hermite cubic rectangular patch is proposed: the
S-Patch, which is based on the requirement that diagonal curves must be of
degree 3 instead of degree 6, as is the case for the Hermite patch.
Theoretical derivation of conditions is presented and some experimental results
as well. The S-Patch is convenient for applications where a different
tessellation of the u-v domain is needed and boundary and diagonal curves of
different degrees are not acceptable.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 17:08:30 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Skala",
"Vaclav",
""
],
[
"Ondracka",
"Vit",
""
]
] |
new_dataset
| 0.963619 |
2212.11933
|
Soheyla Amirian
|
Soheyla Amirian, Husam Ghazaleh, Mehdi Assefi, Hilal Maradit Kremers,
Hamid R. Arabnia, Johannes F. Plate, and Ahmad P. Tafti
|
Word Embedding Neural Networks to Advance Knee Osteoarthritis Research
|
5 pages, 3 figures, Accepted in Computational Science and
Computational Intelligence; 2022 International Conference on IEEE CPS (IEEE
XPLORE, Scopus)
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Osteoarthritis (OA) is the most prevalent chronic joint disease worldwide,
with knee OA accounting for more than 80% of commonly affected joints. Knee OA
is not yet a curable disease, and it affects large cohorts of patients, making
it costly to patients and healthcare systems. The etiology, diagnosis, and
treatment of knee OA are complicated by variability in its clinical and
physical manifestations.
Although knee OA carries a list of well-known terminology aiming to standardize
the nomenclature of the diagnosis, prognosis, treatment, and clinical outcomes
of the chronic joint disease, in practice there is a wide range of terminology
associated with knee OA across different data sources, including but not
limited to biomedical literature, clinical notes, healthcare literacy, and
health-related social media. Among these data sources, the scientific articles
published in the biomedical literature usually make a principled pipeline to
study the disease. Rapid yet accurate text mining of large-scale scientific
literature may discover novel knowledge and terminology to better understand
knee OA and to improve the quality of knee OA diagnosis, prevention, and
treatment. The present work aims to utilize artificial neural network
strategies to automatically extract vocabularies associated with knee OA.
Our findings indicate the feasibility of developing word embedding
neural networks for autonomous keyword extraction and abstraction of knee OA.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 18:12:27 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Amirian",
"Soheyla",
""
],
[
"Ghazaleh",
"Husam",
""
],
[
"Assefi",
"Mehdi",
""
],
[
"Kremers",
"Hilal Maradit",
""
],
[
"Arabnia",
"Hamid R.",
""
],
[
"Plate",
"Johannes F.",
""
],
[
"Tafti",
"Ahmad P.",
""
]
] |
new_dataset
| 0.968242 |
2212.11984
|
Menglei Chai
|
Yinghao Xu, Menglei Chai, Zifan Shi, Sida Peng, Ivan Skorokhodov,
Aliaksandr Siarohin, Ceyuan Yang, Yujun Shen, Hsin-Ying Lee, Bolei Zhou,
Sergey Tulyakov
|
DisCoScene: Spatially Disentangled Generative Radiance Fields for
Controllable 3D-aware Scene Synthesis
|
Project page: https://snap-research.github.io/discoscene/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing 3D-aware image synthesis approaches mainly focus on generating a
single canonical object and show limited capacity in composing a complex scene
containing a variety of objects. This work presents DisCoScene: a 3D-aware
generative model for high-quality and controllable scene synthesis. The key
ingredient of our method is a very abstract object-level representation (i.e.,
3D bounding boxes without semantic annotation) as the scene layout prior, which
is simple to obtain, general to describe various scene contents, and yet
informative to disentangle objects and background. Moreover, it serves as an
intuitive user control for scene editing. Based on such a prior, the proposed
model spatially disentangles the whole scene into object-centric generative
radiance fields by learning on only 2D images with the global-local
discrimination. Our model obtains the generation fidelity and editing
flexibility of individual objects while being able to efficiently compose
objects and the background into a complete scene. We demonstrate
state-of-the-art performance on many scene datasets, including the challenging
Waymo outdoor dataset. Project page:
https://snap-research.github.io/discoscene/
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 18:59:59 GMT"
}
] | 2022-12-23T00:00:00 |
[
[
"Xu",
"Yinghao",
""
],
[
"Chai",
"Menglei",
""
],
[
"Shi",
"Zifan",
""
],
[
"Peng",
"Sida",
""
],
[
"Skorokhodov",
"Ivan",
""
],
[
"Siarohin",
"Aliaksandr",
""
],
[
"Yang",
"Ceyuan",
""
],
[
"Shen",
"Yujun",
""
],
[
"Lee",
"Hsin-Ying",
""
],
[
"Zhou",
"Bolei",
""
],
[
"Tulyakov",
"Sergey",
""
]
] |
new_dataset
| 0.999205 |
2107.13132
|
Eric Zhan
|
Eric Zhan, Jennifer J. Sun, Ann Kennedy, Yisong Yue, Swarat Chaudhuri
|
Unsupervised Learning of Neurosymbolic Encoders
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a framework for the unsupervised learning of neurosymbolic
encoders, which are encoders obtained by composing neural networks with
symbolic programs from a domain-specific language. Our framework naturally
incorporates symbolic expert knowledge into the learning process, which leads
to more interpretable and factorized latent representations compared to fully
neural encoders. We integrate modern program synthesis techniques with the
variational autoencoding (VAE) framework, in order to learn a neurosymbolic
encoder in conjunction with a standard decoder. The programmatic descriptions
from our encoders can benefit many analysis workflows, such as in behavior
modeling where interpreting agent actions and movements is important. We
evaluate our method on learning latent representations for real-world
trajectory data from animal biology and sports analytics. We show that our
approach offers significantly better separation of meaningful categories than
standard VAEs and leads to practical gains on downstream analysis tasks, such
as for behavior classification.
|
[
{
"version": "v1",
"created": "Wed, 28 Jul 2021 02:16:14 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 03:09:06 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Zhan",
"Eric",
""
],
[
"Sun",
"Jennifer J.",
""
],
[
"Kennedy",
"Ann",
""
],
[
"Yue",
"Yisong",
""
],
[
"Chaudhuri",
"Swarat",
""
]
] |
new_dataset
| 0.996574 |
2203.13352
|
Visar Berisha
|
Leo Hsu and Visar Berisha
|
Does human speech follow Benford's Law?
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Researchers have observed that the frequencies of leading digits in many
man-made and naturally occurring datasets follow a logarithmic curve, with
digits that start with the number 1 accounting for $\sim 30\%$ of all numbers
in the dataset and digits that start with the number 9 accounting for $\sim
5\%$ of all numbers in the dataset. This phenomenon, known as Benford's Law, is
highly repeatable and appears in lists of numbers from electricity bills, stock
prices, tax returns, house prices, death rates, lengths of rivers, and
naturally occurring images. In this paper we demonstrate that human speech
spectra also follow Benford's Law on average. That is, when averaged over many
speakers, the frequencies of leading digits in speech magnitude spectra follow
this distribution, although with some variability at the individual sample
level. We use this observation to motivate a new set of features that can be
efficiently extracted from speech and demonstrate that these features can be
used to classify between human speech and synthetic speech.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 21:54:49 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 16:54:56 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Hsu",
"Leo",
""
],
[
"Berisha",
"Visar",
""
]
] |
new_dataset
| 0.999766 |
2204.10196
|
Md. Rezaul Karim
|
Md. Rezaul Karim and Sumon Kanti Dey and Tanhim Islam and Md. Shajalal
and Bharathi Raja Chakravarthi
|
Multimodal Hate Speech Detection from Bengali Memes and Texts
|
arXiv admin note: text overlap with arXiv:2107.00648 by other authors
|
Pre-print for our paper at International Conference on Speech &
Language Technology for Low-resource Languages (SPELLL'2022)
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Numerous machine learning (ML) and deep learning (DL)-based approaches have
been proposed to utilize textual data from social media for anti-social
behavior analysis like cyberbullying, fake news detection, and identification
of hate speech mainly for highly-resourced languages such as English. However,
despite having a lot of diversity and millions of native speakers, some
languages like Bengali are under-resourced, which is due to a lack of
computational resources for natural language processing (NLP). Similar to other
languages, Bengali social media contents also include images along with texts
(e.g., multimodal memes are posted by embedding short texts into images on
Facebook). Therefore, textual data alone is not enough to judge them, since
images might give extra context needed to make a proper judgement. This paper
is about hate speech detection from multimodal Bengali memes and texts. We
prepared the first multimodal hate speech dataset of its kind for Bengali,
which we use to train state-of-the-art neural architectures (e.g., Bi-LSTM/Conv-LSTM
with word embeddings, ConvNets + pre-trained language models, e.g., monolingual
Bangla BERT, multilingual BERT-cased/uncased, and XLM-RoBERTa) to jointly
analyze textual and visual information for hate speech detection. Conv-LSTM and
XLM-RoBERTa models performed best for texts, yielding F1 scores of 0.78 and
0.82, respectively. As for memes, ResNet-152 and DenseNet-161 models yield F1
scores of 0.78 and 0.79, respectively. As for multimodal fusion, XLM-RoBERTa +
DenseNet-161 performed the best, yielding an F1 score of 0.83. Our study
suggests that text modality is most useful for hate speech detection, while
memes are moderately useful.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 11:15:25 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Sep 2022 15:48:59 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Dec 2022 13:11:52 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Karim",
"Md. Rezaul",
""
],
[
"Dey",
"Sumon Kanti",
""
],
[
"Islam",
"Tanhim",
""
],
[
"Shajalal",
"Md.",
""
],
[
"Chakravarthi",
"Bharathi Raja",
""
]
] |
new_dataset
| 0.999767 |
2206.12558
|
Xiujuan Zheng
|
Jialiang Zhuang, Yuheng Chen, Yun Zhang, Xiujuan Zheng
|
FastBVP-Net: a lightweight pulse extraction network for measuring heart
rhythm via facial videos
|
9 pages, 2 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Remote photoplethysmography (rPPG) is an attractive camera-based health
monitoring method that can measure the heart rhythm from facial videos. Many
well-established deep-learning models have been reported to measure heart rate
(HR) and heart rate variability (HRV). However, most of these models usually
require a 30-second facial video and enormous computational resources to obtain
accurate and robust results, which significantly limits their applications in
real-world scenarios. Hence, we propose a lightweight pulse extraction network,
FastBVP-Net, to quickly measure heart rhythm via facial videos. The proposed
FastBVP-Net uses a multi-frequency mode signal fusion (MMSF) mechanism to
characterize the different modes of the raw signals in a decompose module and
reconstruct the blood volume pulse (BVP) signal under a complex noise
environment in a compose module. Meanwhile, an oversampling training scheme is
used to solve the over-fitting problem caused by the limitations of the
datasets. Then, the HR and HRV can be estimated based on the extracted BVP
signals. Comprehensive experiments are conducted on the benchmark datasets to
validate the proposed FastBVP-Net. For intra-dataset and cross-dataset testing,
the proposed approach achieves better performance for HR and HRV estimation
from 30-second facial videos with fewer computational burdens than the current
well-established methods. Moreover, the proposed approach also achieves
competitive results from 15-second facial videos. Therefore, the proposed
FastBVP-Net has the potential to be applied in many real-world scenarios with
shorter videos.
|
[
{
"version": "v1",
"created": "Sat, 25 Jun 2022 05:24:52 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Oct 2022 07:43:57 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Dec 2022 16:11:22 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Zhuang",
"Jialiang",
""
],
[
"Chen",
"Yuheng",
""
],
[
"Zhang",
"Yun",
""
],
[
"Zheng",
"Xiujuan",
""
]
] |
new_dataset
| 0.993417 |
2207.03422
|
Dhrubajyoti Pathak
|
Dhrubajyoti Pathak, Sukumar Nandi, Priyankoo Sarmah
|
AsNER -- Annotated Dataset and Baseline for Assamese Named Entity
recognition
|
Published at LREC 2022. https://aclanthology.org/2022.lrec-1.706
|
Proceedings of the Language Resources and Evaluation Conference,
June 2022, Marseille, France, European Language Resources Association,
6571-6577
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present the AsNER, a named entity annotation dataset for low resource
Assamese language with a baseline Assamese NER model. The dataset contains
about 99k tokens comprising text from the speech of the Prime Minister of
India and an Assamese play. It also contains person names, location names and
addresses. The proposed NER dataset is likely to be a significant resource for
deep neural based Assamese language processing. We benchmark the dataset by
training NER models and evaluating using state-of-the-art architectures for
supervised named entity recognition (NER) such as Fasttext, BERT, XLM-R, FLAIR,
MuRIL etc. We implement several baseline approaches with state-of-the-art
sequence tagging Bi-LSTM-CRF architecture. The highest F1-score among all
baselines, 80.69%, is achieved when using MuRIL as the word embedding method.
The annotated dataset and the top-performing model are made publicly
available.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 16:45:55 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Pathak",
"Dhrubajyoti",
""
],
[
"Nandi",
"Sukumar",
""
],
[
"Sarmah",
"Priyankoo",
""
]
] |
new_dataset
| 0.999738 |
2208.11821
|
Shufan Li
|
Akash Gokul, Konstantinos Kallidromitis, Shufan Li, Yusuke Kato,
Kazuki Kozuka, Trevor Darrell, and Colorado J Reed
|
Refine and Represent: Region-to-Object Representation Learning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent works in self-supervised learning have demonstrated strong performance
on scene-level dense prediction tasks by pretraining with object-centric or
region-based correspondence objectives. In this paper, we present
Region-to-Object Representation Learning (R2O) which unifies region-based and
object-centric pretraining. R2O operates by training an encoder to dynamically
refine region-based segments into object-centric masks and then jointly learns
representations of the contents within the mask. R2O uses a "region refinement
module" to group small image regions, generated using a region-level prior,
into larger regions which tend to correspond to objects by clustering
region-level features. As pretraining progresses, R2O follows a
region-to-object curriculum which encourages learning region-level features
early on and gradually progresses to train object-centric representations.
Representations learned using R2O lead to state-of-the-art performance in
semantic segmentation for PASCAL VOC (+0.7 mIOU) and Cityscapes (+0.4 mIOU) and
instance segmentation on MS COCO (+0.3 mask AP). Further, after pretraining on
ImageNet, R2O pretrained models are able to surpass existing state-of-the-art
in unsupervised object segmentation on the Caltech-UCSD Birds 200-2011 dataset
(+2.9 mIoU) without any further training. We provide the code/models from this
work at https://github.com/KKallidromitis/r2o.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 01:44:28 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2022 23:36:52 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Gokul",
"Akash",
""
],
[
"Kallidromitis",
"Konstantinos",
""
],
[
"Li",
"Shufan",
""
],
[
"Kato",
"Yusuke",
""
],
[
"Kozuka",
"Kazuki",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Reed",
"Colorado J",
""
]
] |
new_dataset
| 0.965206 |
2210.16269
|
Rongqi Pan
|
Rongqi Pan, Taher A. Ghaleb, and Lionel Briand
|
ATM: Black-box Test Case Minimization based on Test Code Similarity and
Evolutionary Search
|
Accepted at the 45th IEEE/ACM International Conference on Software
Engineering
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Executing large test suites is time and resource consuming, sometimes
impossible, and such test suites typically contain many redundant test cases.
Hence, test case minimization is used to remove redundant test cases that are
unlikely to detect new faults. However, most test case (suite) minimization
techniques rely on code coverage (white-box), model-based features, or
requirements specifications, which are not always accessible by test engineers.
Recently, a set of novel techniques was proposed, called FAST-R, relying solely
on test case code for test case minimization, which appeared to be much more
efficient than white-box techniques. However, it achieved a comparably low
fault detection capability for Java projects, making its application
challenging in practice. This paper proposes ATM (AST-based Test case
Minimizer), a similarity-based, search-based test case minimization technique,
taking a specific budget as input, that also relies exclusively on the source
code of test cases but attempts to achieve higher fault detection through
finer-grained similarity analysis and a dedicated search algorithm. ATM
transforms test case code into Abstract Syntax Trees (AST) and relies on four
tree-based similarity measures to apply evolutionary search, specifically
genetic algorithms, to minimize test cases. We evaluated the effectiveness and
efficiency of ATM on a large dataset of 16 Java projects with 661 faulty
versions using three budgets ranging from 25% to 75% of test suites. ATM
achieved significantly higher fault detection rates (0.82 on average), compared
to FAST-R (0.61 on average) and random minimization (0.52 on average), when
running only 50% of the test cases, within practically acceptable time (1.1-4.3
hours, on average), given that minimization is only occasionally applied when
many new test cases are created (major releases). Results achieved for other
budgets were consistent.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 16:59:13 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 01:42:55 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Pan",
"Rongqi",
""
],
[
"Ghaleb",
"Taher A.",
""
],
[
"Briand",
"Lionel",
""
]
] |
new_dataset
| 0.981135 |
2211.05719
|
Qingfeng Sun
|
Jiazhan Feng, Qingfeng Sun, Can Xu, Pu Zhao, Yaming Yang, Chongyang
Tao, Dongyan Zhao, Qingwei Lin
|
MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal
Open-domain Conversation
| null | null | null | null |
cs.CL cs.AI cs.CV cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Responding with multi-modal content has been recognized as an essential
capability for an intelligent conversational agent. In this paper, we introduce
the MMDialog dataset to better facilitate multi-modal conversation. MMDialog is
composed of a curated set of 1.08 million real-world dialogues with 1.53
million unique images across 4,184 topics. MMDialog has two main and unique
advantages. First, it is the largest multi-modal conversation dataset by
number of dialogues, by a factor of 88. Second, it covers massive topics to
generalize to the open domain. To build an engaging dialogue system with this
dataset, we propose and normalize two response-producing tasks based on
retrieval and generative scenarios. In addition, we build two baselines for
the above tasks with
state-of-the-art techniques and report their experimental performance. We also
propose a novel evaluation metric MM-Relevance to measure the multi-modal
responses. Our dataset and scripts are available in
https://github.com/victorsungo/MMDialog.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 17:37:04 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Nov 2022 08:10:32 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Dec 2022 08:12:46 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Feng",
"Jiazhan",
""
],
[
"Sun",
"Qingfeng",
""
],
[
"Xu",
"Can",
""
],
[
"Zhao",
"Pu",
""
],
[
"Yang",
"Yaming",
""
],
[
"Tao",
"Chongyang",
""
],
[
"Zhao",
"Dongyan",
""
],
[
"Lin",
"Qingwei",
""
]
] |
new_dataset
| 0.999019 |