id (stringlengths 9–10) | submitter (stringlengths 2–52, ⌀) | authors (stringlengths 4–6.51k) | title (stringlengths 4–246) | comments (stringlengths 1–523, ⌀) | journal-ref (stringlengths 4–345, ⌀) | doi (stringlengths 11–120, ⌀) | report-no (stringlengths 2–243, ⌀) | categories (stringlengths 5–98) | license (stringclasses, 9 values) | abstract (stringlengths 33–3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses, 1 value) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2206.02114
|
Xin Lian
|
Xin Lian
|
Speech Detection Task Against Asian Hate: BERT the Central, While
Data-Centric Studies the Crucial
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
With the COVID-19 pandemic continuing, hatred against Asians, and especially
against Chinese people, is intensifying in countries outside Asia. There is an
urgent need to detect and prevent hate speech towards Asians effectively. In
this work, we first create COVID-HATE-2022, an annotated dataset including
2,025 tweets fetched in early February 2022 and labeled based on specific
criteria, and we present a comprehensive collection of the scenarios of
hate and non-hate tweets in the dataset. Second, we fine-tune the BERT model
on the relevant datasets and demonstrate several strategies related to
"cleaning" the tweets. Third, we investigate the performance of advanced
fine-tuning strategies with various model-centric and data-centric approaches,
and we show that both kinds of strategies generally improve performance, with
the data-centric ones outperforming the others, demonstrating the feasibility
and effectiveness of data-centric approaches in the associated tasks.
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 07:41:24 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Aug 2022 15:22:03 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Lian",
"Xin",
""
]
] |
new_dataset
| 0.998791 |
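The COVID-HATE-2022 entry above couples tweet "cleaning" with BERT fine-tuning. A rough illustration of that kind of pipeline follows; the cleaning rules, the `bert-base-uncased` checkpoint, and the toy data are our assumptions, not the paper's exact setup.

```python
import re
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def clean_tweet(text: str) -> str:
    """Assumed cleaning rules (illustrative): drop URLs and @mentions,
    keep hashtag words, collapse whitespace."""
    text = re.sub(r"https?://\S+", "", text)
    text = re.sub(r"@\w+", "", text)
    text = text.replace("#", "")
    return re.sub(r"\s+", " ", text).strip()

tweets = ["Stop the hate! https://t.co/abc @someone #StopAsianHate",
          "Lovely weather in town today"]
labels = torch.tensor([0, 1])  # toy labels: 0 = hate-related, 1 = non-hate

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model.train()

batch = tokenizer([clean_tweet(t) for t in tweets],
                  padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # standard fine-tuning loss
loss.backward()                            # an optimizer step would follow
```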
2206.10351
|
Weibo Ning
|
Weibo Ning, Jiaqi Zhu, Hongjiang Chen, Weijun Zhou, Shuxing He,
Yecheng Tan, Qianrui Xu, Ye Yuan, Jun Hu, Zhun Fan
|
Novel total hip surgery robotic system based on self-localization and
optical measurement
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the development and experimental evaluation of a surgical
robotic system for total hip arthroplasty (THA). Although existing robotic
systems used in joint replacement surgery have made some progress, the
robot arm must be situated accurately at the target position during operation,
which depends significantly on the experience of the surgeon. In addition,
handheld acetabulum reamers typically exhibit uneven strength and grinding
file. Moreover, the lack of techniques to measure femoral neck length in real
time may lead to poor outcomes. To tackle these challenges, we propose a real-time
traceable optical positioning strategy to reduce unnecessary manual adjustments
to the robotic arm during surgery, an end-effector system to stabilise
grinding, and an optical probe to provide real-time measurement of the femoral
neck length and other parameters used to choose the proper prosthesis. The
lengths of the lower limbs are measured as the prosthesis is installed. The
experimental evaluation results show that, based on its accuracy, execution
ability, and robustness, the proposed surgical robotic system is feasible for
THA.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 10:52:44 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 11:07:12 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Ning",
"Weibo",
""
],
[
"Zhu",
"Jiaqi",
""
],
[
"Chen",
"Hongjiang",
""
],
[
"Zhou",
"Weijun",
""
],
[
"He",
"Shuxing",
""
],
[
"Tan",
"Yecheng",
""
],
[
"Xu",
"Qianrui",
""
],
[
"Yuan",
"Ye",
""
],
[
"Hu",
"Jun",
""
],
[
"Fan",
"Zhun",
""
]
] |
new_dataset
| 0.971615 |
2207.00208
|
Wonyoung Shin
|
Wonyoung Shin, Jonghun Park, Taekang Woo, Yongwoo Cho, Kwangjin Oh,
Hwanjun Song
|
e-CLIP: Large-Scale Vision-Language Representation Learning in
E-commerce
|
Accepted to CIKM 2022
| null |
10.1145/3511808.3557067
| null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding vision and language representations of product content is vital
for search and recommendation applications in e-commerce. As a backbone for
online shopping platforms and inspired by the recent success in representation
learning research, we propose a contrastive learning framework that aligns
language and visual models using unlabeled raw product text and images. We
present techniques we used to train large-scale representation learning models
and share solutions that address domain-specific challenges. We study the
performance of our pre-trained model as a backbone for diverse downstream
tasks, including category classification, attribute extraction, product
matching, product clustering, and adult product recognition. Experimental
results show that our proposed method outperforms the baseline in each
downstream task, in both single-modality and multi-modality settings.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2022 05:16:47 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 14:25:14 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Shin",
"Wonyoung",
""
],
[
"Park",
"Jonghun",
""
],
[
"Woo",
"Taekang",
""
],
[
"Cho",
"Yongwoo",
""
],
[
"Oh",
"Kwangjin",
""
],
[
"Song",
"Hwanjun",
""
]
] |
new_dataset
| 0.979387 |
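The e-CLIP entry above describes a CLIP-style contrastive framework aligning product images and text. A minimal sketch of the symmetric image-text contrastive (InfoNCE) objective such frameworks build on; the embedding size and temperature are illustrative, not the paper's values.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of aligned image/text embeddings."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature           # [B, B] similarity matrix
    targets = torch.arange(img.size(0))            # matching pairs on the diagonal
    loss_i = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return 0.5 * (loss_i + loss_t)

# toy usage: 4 product image/text pairs with 64-dim embeddings
loss = clip_contrastive_loss(torch.randn(4, 64), torch.randn(4, 64))
print(loss.item())
```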
2208.06187
|
Helena Mart\'in-Cruz
|
Carlos Galindo, Fernando Hernando, Helena Mart\'in-Cruz, Diego Ruano
|
Stabilizer quantum codes defined by trace-depending polynomials
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum error-correcting codes with good parameters can be constructed by
evaluating polynomials at the roots of the polynomial trace. In this paper, we
propose to evaluate polynomials at the roots of trace-depending polynomials
(given by a constant plus the trace of a polynomial) and show that this
procedure gives rise to stabilizer quantum error-correcting codes with a wider
range of lengths than in other papers involving roots of the trace and with
excellent parameters. Namely, we are able to provide new binary records and
non-binary codes improving the ones available in the literature.
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 09:32:08 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Aug 2022 16:06:28 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Galindo",
"Carlos",
""
],
[
"Hernando",
"Fernando",
""
],
[
"Martín-Cruz",
"Helena",
""
],
[
"Ruano",
"Diego",
""
]
] |
new_dataset
| 0.999788 |
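For reference, the construction sketched in the abstract above can be written in standard notation. This is our reading, using the usual field trace; the paper's exact definitions may differ.

```latex
% Field trace from F_{q^m} down to F_q (standard definition)
\mathrm{Tr}(x) \;=\; x + x^{q} + x^{q^{2}} + \cdots + x^{q^{m-1}},
\qquad \mathrm{Tr}\colon \mathbb{F}_{q^m} \to \mathbb{F}_q .
% A trace-depending polynomial: a constant plus the trace of a polynomial
T_{c,f}(x) \;=\; c + \mathrm{Tr}\bigl(f(x)\bigr),
\qquad c \in \mathbb{F}_q,\; f \in \mathbb{F}_{q^m}[x].
% Codes arise by evaluating polynomials at the roots of T_{c,f}
\mathrm{ev}(g) \;=\; \bigl(g(\alpha_1), \ldots, g(\alpha_n)\bigr),
\qquad \{\alpha_i\} = \{\alpha : T_{c,f}(\alpha) = 0\}.
```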
2208.08768
|
Sk Aziz Ali
|
Ahmet Serdar Karadeniz, Sk Aziz Ali, Anis Kacem, Elona Dupont, Djamila
Aouada
|
TSCom-Net: Coarse-to-Fine 3D Textured Shape Completion Network
|
Accepted in European Conference on Computer Vision Workshop (ECCVW)
2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Reconstructing 3D human body shapes from 3D partial textured scans remains a
fundamental task for many computer vision and graphics applications -- e.g.,
body animation, and virtual dressing. We propose a new neural network
architecture for 3D body shape and high-resolution texture completion --
BCom-Net -- that can reconstruct the full geometry from mid-level to high-level
partial input scans. We decompose the overall reconstruction task into two
stages - first, a joint implicit learning network (SCom-Net and TCom-Net) that
takes a voxelized scan and its occupancy grid as input to reconstruct the full
body shape and predict vertex textures. Second, a high-resolution texture
completion network, that utilizes the predicted coarse vertex textures to
inpaint the missing parts of the partial 'texture atlas'. A thorough
experimental evaluation on the 3DBodyTex.V2 dataset shows that our method
achieves competitive results with respect to the state-of-the-art while
generalizing to different types and levels of partial shapes. The proposed
method has also ranked second in track 1 of the SHApe Recovery from Partial
textured 3D scans (SHARP [38,1]) 2022 challenge.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 11:06:10 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 14:45:12 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Karadeniz",
"Ahmet Serdar",
""
],
[
"Ali",
"Sk Aziz",
""
],
[
"Kacem",
"Anis",
""
],
[
"Dupont",
"Elona",
""
],
[
"Aouada",
"Djamila",
""
]
] |
new_dataset
| 0.999385 |
2208.08807
|
Stefan Thalhammer
|
Stefan Thalhammer, Timothy Patten, Markus Vincze
|
COPE: End-to-end trainable Constant Runtime Object Pose Estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
State-of-the-art object pose estimation handles multiple instances in a test
image by using multi-model formulations: detection as a first stage and then
separately trained networks per object for 2D-3D geometric correspondence
prediction as a second stage. Poses are subsequently estimated using the
Perspective-n-Points algorithm at runtime. Unfortunately, multi-model
formulations are slow and do not scale well with the number of object instances
involved. Recent approaches show that direct 6D object pose estimation is
feasible when derived from the aforementioned geometric correspondences. We
present an approach that learns an intermediate geometric representation of
multiple objects to directly regress 6D poses of all instances in a test image.
The inherent end-to-end trainability overcomes the requirement of separately
processing individual object instances. By calculating the mutual
Intersection-over-Unions, pose hypotheses are clustered into distinct
instances, which achieves negligible runtime overhead with respect to the
number of object instances. Results on multiple challenging standard datasets
show that the pose estimation performance is superior to single-model
state-of-the-art approaches despite being more than ~35 times faster. We
additionally provide an analysis showing real-time applicability (>24 fps) for
images where more than 90 object instances are present. Further results show
the advantage of supervising geometric-correspondence-based object pose
estimation with the 6D pose.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 12:58:53 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 12:06:50 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Thalhammer",
"Stefan",
""
],
[
"Patten",
"Timothy",
""
],
[
"Vincze",
"Markus",
""
]
] |
new_dataset
| 0.98269 |
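The clustering step COPE describes, grouping pose hypotheses by their mutual Intersection-over-Union, can be sketched greedily. The box format and threshold below are assumptions for illustration, not the paper's implementation.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def cluster_hypotheses(boxes, thr=0.5):
    """Greedily group hypotheses whose mutual IoU exceeds `thr`;
    each cluster then corresponds to one object instance."""
    clusters = []
    for box in boxes:
        for cluster in clusters:
            if all(iou(box, other) >= thr for other in cluster):
                cluster.append(box)
                break
        else:
            clusters.append([box])
    return clusters

print(cluster_hypotheses([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]))
# -> [[(0, 0, 10, 10), (1, 1, 10, 10)], [(20, 20, 30, 30)]]
```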
2208.09580
|
Moojan Ghafurian
|
Sami Alperen Akgun, Moojan Ghafurian, Mark Crowley, Kerstin Dautenhahn
|
Using Affect as a Communication Modality to Improve Human-Robot
Communication in Robot-Assisted Search and Rescue Scenarios
| null | null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Emotions can provide a natural communication modality to complement the
existing multi-modal capabilities of social robots, such as text and speech, in
many domains. We conducted three online studies with 112, 223, and 151
participants to investigate the benefits of using emotions as a communication
modality for Search And Rescue (SAR) robots. In the first experiment, we
investigated the feasibility of conveying information related to SAR situations
through robots' emotions, resulting in mappings from SAR situations to
emotions. The second study used Affect Control Theory as an alternative method
for deriving such mappings. This method is more flexible; e.g., it allows such
mappings to be adjusted for different emotion sets and different robots. In the
third experiment, we created affective expressions for an
appearance-constrained outdoor field research robot using LEDs as an expressive
channel. Using these affective expressions in a variety of simulated SAR
situations, we evaluated the effect of these expressions on participants'
(adopting the role of rescue workers) situational awareness. Our results and
proposed methodologies provide (a) insights into how emotions could help
convey messages in the context of SAR, and (b) evidence of the effectiveness
of adding emotions as a communication modality in a (simulated) SAR
communication context.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 02:24:18 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Akgun",
"Sami Alperen",
""
],
[
"Ghafurian",
"Moojan",
""
],
[
"Crowley",
"Mark",
""
],
[
"Dautenhahn",
"Kerstin",
""
]
] |
new_dataset
| 0.988308 |
2208.09610
|
Hongxin Li
|
Hongxin Li, Xu Yang, Yuran Yang, Shuqi Mei, Zhaoxiang Zhang
|
MemoNav: Selecting Informative Memories for Visual Navigation
|
Submitted to ICLR2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Image-goal navigation is a challenging task, as it requires the agent to
navigate to a target indicated by an image in a previously unseen scene.
Current methods introduce diverse memory mechanisms which save navigation
history to solve this task. However, these methods use all observations in the
memory for generating navigation actions without considering which fraction of
this memory is informative. To address this limitation, we present the MemoNav,
a novel memory mechanism for image-goal navigation, which retains the agent's
informative short-term memory and long-term memory to improve the navigation
performance on a multi-goal task. The node features on the agent's topological
map are stored in the short-term memory, as these features are dynamically
updated. To aid the short-term memory, we also generate long-term memory by
continuously aggregating the short-term memory via a graph attention module.
The MemoNav retains the informative fraction of the short-term memory via a
forgetting module based on a Transformer decoder and then incorporates this
retained short-term memory and the long-term memory into working memory.
Lastly, the agent uses the working memory for action generation. We evaluate
our model on a new multi-goal navigation dataset. The experimental results show
that the MemoNav outperforms the SoTA methods by a large margin with a smaller
fraction of navigation history. The results also empirically show that our
model is less likely to be trapped in a deadlock, which further validates that
the MemoNav improves the agent's navigation efficiency by reducing redundant
steps.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 05:57:21 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Li",
"Hongxin",
""
],
[
"Yang",
"Xu",
""
],
[
"Yang",
"Yuran",
""
],
[
"Mei",
"Shuqi",
""
],
[
"Zhang",
"Zhaoxiang",
""
]
] |
new_dataset
| 0.993513 |
2208.09615
|
George Alexandropoulos
|
Aris L. Moustakas and George C. Alexandropoulos and M\'erouane Debbah
|
Reconfigurable Intelligent Surfaces and Capacity Optimization: A Large
System Analysis
|
14 pages, 7 figures, submitted to an IEEE Transactions journal. arXiv
admin note: text overlap with arXiv:2109.07754
| null | null | null |
cs.IT cs.ET math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Reconfigurable Intelligent Surfaces (RISs), comprising large numbers of
low-cost and almost passive metamaterials with tunable reflection properties,
have been recently proposed as an enabling technology for programmable wireless
propagation environments. In this paper, we present asymptotic closed-form
expressions for the mean and variance of the mutual information metric for a
multi-antenna transmitter-receiver pair in the presence of multiple RISs, using
methods from statistical physics. While nominally valid in the large system
limit, we show that the derived Gaussian approximation for the mutual
information can be quite accurate, even for modest-sized antenna arrays and
metasurfaces. The above results are particularly useful when fast-fading
conditions are present, which renders instantaneous channel estimation
extremely challenging. We find that, when the channel close to an RIS is
correlated, for instance due to small angle spread, which is reasonable for
wireless systems with increasing carrier frequencies, the communication link
benefits significantly from statistical RIS phase optimization, resulting in
gains that are surprisingly higher than the nearly uncorrelated case. Using our
novel asymptotic properties of the correlation matrices of the impinging and
outgoing signals at the RISs, we can optimize the metasurfaces without
brute-force numerical optimization. Furthermore, when the desired reflection
from any of the RISs departs significantly from geometrical optics, the
metasurfaces can be optimized to provide robust communication links, without
significant need for their optimal placement.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 06:33:49 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Moustakas",
"Aris L.",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Debbah",
"Mérouane",
""
]
] |
new_dataset
| 0.999041 |
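For context, the mutual information metric analysed in the entry above is, for a standard RIS-aided MIMO model with a single RIS and Gaussian signalling (notation ours, not the paper's):

```latex
% Effective channel: direct path plus the RIS-reflected cascade
\mathbf{H}_{\mathrm{eff}} \;=\; \mathbf{H}_d + \mathbf{H}_2\,\boldsymbol{\Phi}\,\mathbf{H}_1,
\qquad \boldsymbol{\Phi} = \mathrm{diag}\!\bigl(e^{j\phi_1},\ldots,e^{j\phi_N}\bigr),
% Mutual information with SNR \rho and N_t transmit antennas
I \;=\; \log_2 \det\!\Bigl(\mathbf{I} + \tfrac{\rho}{N_t}\,
\mathbf{H}_{\mathrm{eff}} \mathbf{H}_{\mathrm{eff}}^{\mathsf{H}}\Bigr).
```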
2208.09650
|
Ugochukwu Orji
|
Ugochukwu Orji, Chikodili Ugwuishiwu, Mathew Okoronkwo, Caroline
Asogwa, Nnaemeka Ogbene
|
Visual Exploratory Data Analysis of the Covid-19 Vaccination Progress in
Nigeria
| null | null | null | null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The coronavirus outbreak in 2020 devastated the world's economy, including
Nigeria's, and even resulted in a severe recession. Slowly, the country is
building back again, and the vaccines are helping to reduce the spread of
Covid-19. Since the Covid-19 vaccine came to Nigeria, 18,728,188 people have
been fully vaccinated as of May 31st, 2022. This is roughly 10% of the
Nigerian population, estimated at 206.7 million [1]. This paper presents a
visual Exploratory Data Analysis of the Covid-19 vaccination progress in
Nigeria using the R-tidyverse package in the RStudio IDE for data cleaning &
analysis, and Tableau for the visualizations. Our dataset is from the Nigerian
National Primary Health Care Development Agency (NPHCDA), which is in charge
of the vaccines. The data used for this research contain the state-by-state
breakdown of Covid-19 vaccine distribution recorded between March 5th, 2021,
and May 31st, 2022. This paper aims to show how these data analytics tools and
techniques can be useful for finding insights in raw data by presenting the
results of the EDA visually, thus reducing the ambiguity and possible
confusion associated with data in tables. Furthermore, our findings contribute
to the growing literature on Covid-19 research by showcasing the Covid-19
vaccination trend in Nigeria and its state-by-state distribution.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 09:52:18 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Orji",
"Ugochukwu",
""
],
[
"Ugwuishiwu",
"Chikodili",
""
],
[
"Okoronkwo",
"Mathew",
""
],
[
"Asogwa",
"Caroline",
""
],
[
"Ogbene",
"Nnaemeka",
""
]
] |
new_dataset
| 0.987166 |
2208.09660
|
Leonardo Nascimento Ferreira
|
Leonardo N. Ferreira
|
From Time Series to Networks in R with the ts2net Package
| null | null | null | null |
cs.SI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Network science has established itself as a prominent tool for modeling time
series and complex systems. This modeling process consists of transforming a
set of time series, or a single one, into a network. Nodes may represent
complete time series, segments, or single values, while links define
associations or similarities between the represented parts. R is one of the
main programming languages used in data science, statistics, and machine
learning, with many packages available. However, no single package provides
the necessary methods to transform time series into networks. This paper
presents ts2net, an R package for modeling one or multiple time series as
networks. The package provides time series distance functions that can easily
be computed in parallel and on supercomputers to process larger data sets, as
well as methods to transform distance matrices into networks. ts2net also provides methods to
transform a single time series into a network, such as recurrence networks,
visibility graphs, and transition networks. Together with other packages,
ts2net permits using network science and graph mining tools to extract
information from time series.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 11:25:54 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Ferreira",
"Leonardo N.",
""
]
] |
new_dataset
| 0.967245 |
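As a concrete illustration of one transformation the ts2net entry above mentions, here is a generic Python rendering of the natural visibility graph (this shows the underlying idea only; it is not ts2net's R API):

```python
import networkx as nx

def natural_visibility_graph(series):
    """Natural visibility graph: nodes are time points; i and j are linked
    if every sample between them lies strictly below the line from i to j."""
    g = nx.Graph()
    g.add_nodes_from(range(len(series)))
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                g.add_edge(i, j)
    return g

g = natural_visibility_graph([1.0, 3.0, 2.0, 4.0, 1.5])
print(sorted(g.edges()))
```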
2208.09676
|
Hongliang Zhang
|
Hongliang Zhang and Boya Di
|
Intelligent Omni-Surfaces: Simultaneous Refraction and Reflection for
Full-dimensional Wireless Communications
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of metasurfaces has unlocked various use cases in wireless
communication networks to improve performance by manipulating the propagation
environment. Intelligent omni-surface (IOS), an innovative technique in this
category, is proposed for coverage extension. In contrast to the widely studied
reflective metasurfaces, i.e., intelligent reflecting surfaces (IRSs), which
can only serve receivers located on the same side of the transmitter, the IOS
can achieve full-dimensional wireless communications by enabling the
simultaneous reflection and refraction of the surface, and thus users on both
sides can be served. In this paper, we provide a comprehensive overview of the
state-of-the-art in IOS from the perspective of wireless communications, with
the emphasis on their design principles, channel modeling, beamforming design,
experimental implementation and measurements, as well as possible applications
in future cellular networks. We first describe the basic concepts of
metasurfaces, and introduce the corresponding design principles for different
types of metasurfaces. Moreover, we elaborate on the reflective-refractive
model for each IOS element and the channel model for IOS-aided wireless
communication systems. Furthermore, we show how to achieve full-dimensional
wireless communications with the IOS for three different scenarios. In
particular, we present the implementation of an IOS-aided wireless
communication prototype and report its experimental measurement results.
Finally, we outline some potential future directions and challenges in this
area.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 13:06:29 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Zhang",
"Hongliang",
""
],
[
"Di",
"Boya",
""
]
] |
new_dataset
| 0.985467 |
2208.09709
|
Chowdhury Rahman
|
Chowdhury Rafeed Rahman, MD. Hasibur Rahman, Samiha Zakir, Mohammad
Rafsan, Mohammed Eunus Ali
|
BSpell: A CNN-blended BERT Based Bengali Spell Checker
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Bengali typing is mostly performed using an English keyboard and can be highly
erroneous due to the presence of compound and similarly pronounced letters.
Spelling correction of a misspelled word requires an understanding of word
typing patterns as well as the context of the word usage. We propose a
specialized BERT model, BSpell, targeted towards word-for-word correction at
the sentence level. BSpell contains an end-to-end trainable CNN sub-model
named SemanticNet along with a specialized auxiliary loss. This allows BSpell
to specialize in the highly inflected Bengali vocabulary in the presence of
spelling errors. We further propose a hybrid pretraining scheme for BSpell
combining word-level and character-level masking. Utilizing this pretraining
scheme, BSpell achieves 91.5% accuracy on a real-life Bengali spelling
correction validation set. A detailed comparison on two Bengali and one Hindi
spelling correction datasets shows the superiority of the proposed BSpell over
existing spell checkers.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 15:21:35 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Rahman",
"Chowdhury Rafeed",
""
],
[
"Rahman",
"MD. Hasibur",
""
],
[
"Zakir",
"Samiha",
""
],
[
"Rafsan",
"Mohammad",
""
],
[
"Ali",
"Mohammed Eunus",
""
]
] |
new_dataset
| 0.998977 |
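A minimal sketch of what a hybrid word-level plus character-level masking scheme, as named in the BSpell entry above, could look like. The probabilities and mask tokens are assumptions, not BSpell's published settings.

```python
import random

def hybrid_mask(words, p_word=0.15, p_char=0.15, word_mask="[MASK]", char_mask="#"):
    """Mask whole words with probability p_word; otherwise mask individual
    characters inside the word with probability p_char."""
    out = []
    for w in words:
        if random.random() < p_word:
            out.append(word_mask)                       # word-level masking
        else:
            out.append("".join(char_mask if random.random() < p_char else c
                               for c in w))             # character-level masking
    return out

random.seed(0)
print(hybrid_mask("amar sonar bangla ami tomay bhalobashi".split()))
```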
2208.09716
|
Minghui Xu
|
Wenxuan Yu, Minghui Xu, Dongxiao Yu, Xiuzhen Cheng, Qin Hu, Zehui
Xiong
|
zk-PCN: A Privacy-Preserving Payment Channel Network Using zk-SNARKs
|
8 pages, 9 figures
| null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Payment channel network (PCN) is a layer-two scaling solution that enables
fast off-chain transactions but does not involve on-chain transaction
settlement. PCNs raise new privacy issues including balance secrecy,
relationship anonymity and payment privacy. Moreover, protecting privacy causes
low transaction success rates. To address this dilemma, we propose zk-PCN, a
privacy-preserving payment channel network using zk-SNARKs. We avoid
exposing true balances by setting up \textit{public balances} instead. Using
public balances, zk-PCN can guarantee high transaction success rates and
protect PCN privacy with zero-knowledge proofs. Additionally, zk-PCN is
compatible with the existing routing algorithms of PCNs. To support such
compatibility, we propose zk-IPCN to improve zk-PCN with a novel proof
generation (RPG) algorithm. zk-IPCN reduces the overheads of storing channel
information and lowers the frequency of generating zero-knowledge proofs.
Finally, extensive simulations demonstrate the effectiveness and efficiency of
zk-PCN in various settings.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 16:09:51 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Yu",
"Wenxuan",
""
],
[
"Xu",
"Minghui",
""
],
[
"Yu",
"Dongxiao",
""
],
[
"Cheng",
"Xiuzhen",
""
],
[
"Hu",
"Qin",
""
],
[
"Xiong",
"Zehui",
""
]
] |
new_dataset
| 0.992991 |
2208.09764
|
Mordechai Guri
|
Mordechai Guri
|
GAIROSCOPE: Injecting Data from Air-Gapped Computers to Nearby
Gyroscopes
| null |
2021 18th International Conference on Privacy, Security and Trust
(PST)
|
10.1109/PST52912.2021.9647842
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is known that malware can leak data from isolated, air-gapped computers to
nearby smartphones using ultrasonic waves. However, this covert channel
requires access to the smartphone's microphone, which is highly protected in
Android OS and iOS, and might be non-accessible, disabled, or blocked.
In this paper we present `GAIROSCOPE,' an ultrasonic covert channel that
doesn't require a microphone on the receiving side. Our malware generates
ultrasonic tones in the resonance frequencies of the MEMS gyroscope. These
inaudible frequencies produce tiny mechanical oscillations within the
smartphone's gyroscope, which can be demodulated into binary information.
Notably, the gyroscope in smartphones is considered to be a 'safe' sensor that
can be used legitimately from mobile apps and JavaScript. We introduce the
adversarial attack model and present related work. We provide the relevant
technical background and show the design and implementation of GAIROSCOPE. We
present the evaluation results and discuss a set of countermeasures to this
threat. Our experiments show that attackers can exfiltrate sensitive
information from air-gapped computers to smartphones located a few meters away
via Speakers-to-Gyroscope covert channel.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 00:00:38 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Guri",
"Mordechai",
""
]
] |
new_dataset
| 0.999682 |
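The modulation idea in the GAIROSCOPE entry above, inaudible tones near the gyroscope's resonance carrying binary data, can be sketched as binary frequency-shift keying. The frequencies and bit rate below are placeholders, not the paper's measured resonance values.

```python
import numpy as np

FS = 48_000               # audio sample rate (Hz)
F0, F1 = 19_000, 19_400   # placeholder near-ultrasonic tones for bits 0/1
BIT_SEC = 0.1             # seconds per bit

def bfsk_modulate(bits):
    """Concatenate a sine burst per bit: F0 encodes 0, F1 encodes 1."""
    t = np.arange(int(FS * BIT_SEC)) / FS
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t)
                           for b in bits])

signal = bfsk_modulate([1, 0, 1, 1, 0])   # waveform ready to play via speakers
print(signal.shape)
```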
2208.09800
|
Michel Kinsy
|
Alan Ehret, Jacob Abraham, Mihailo Isakov, Michel A. Kinsy
|
Zeno: A Scalable Capability-Based Secure Architecture
| null | null | null |
R-V1-2022
|
cs.AR cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the numerous efforts of security researchers, memory vulnerabilities
remain a top issue for modern computing systems. Capability-based solutions aim
to solve whole classes of memory vulnerabilities at the hardware level by
encoding access permissions with each memory reference. While some capability
systems have seen commercial adoption, little work has been done to apply a
capability model to datacenter-scale systems. Cloud and high-performance
computing often require programs to share memory across many compute nodes.
This presents a challenge for existing capability models, as capabilities must
be enforceable across multiple nodes. Each node must agree on what access
permissions a capability has and overheads of remote memory access must remain
manageable.
To address these challenges, we introduce Zeno, a new capability-based
architecture. Zeno features a Namespace-based capability model that enables
globally shareable capabilities in a large-scale, multi-node system. In this
work, we describe the Zeno architecture, define Zeno's security properties,
evaluate the scalability of Zeno as a large-scale capability architecture, and
measure the hardware overhead with an FPGA implementation.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 04:33:34 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Ehret",
"Alan",
""
],
[
"Abraham",
"Jacob",
""
],
[
"Isakov",
"Mihailo",
""
],
[
"Kinsy",
"Michel A.",
""
]
] |
new_dataset
| 0.993424 |
2208.09838
|
Padraig Lamont
|
Padraig X. Lamont
|
Tyche: A library for probabilistic reasoning and belief modelling in
Python
|
21 pages, submitted to AJCAI2022
| null | null | null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents Tyche, a Python library to facilitate probabilistic
reasoning in uncertain worlds through the construction, querying, and learning
of belief models. Tyche uses aleatoric description logic (ADL), which provides
computational advantages in its evaluation over other description logics. Tyche
belief models can be succinctly created by defining classes of individuals, the
probabilistic beliefs about them (concepts), and the probabilistic
relationships between them (roles). We also introduce a method of observation
propagation to facilitate learning from complex ADL observations. A
demonstration of Tyche to predict the author of anonymised messages, and to
extract author writing tendencies from anonymised messages, is provided. Tyche
has the potential to assist in the development of expert systems, knowledge
extraction systems, and agents to play games with incomplete and probabilistic
information.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 08:17:39 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Lamont",
"Padraig X.",
""
]
] |
new_dataset
| 0.993884 |
2208.09844
|
Qiong Wu
|
Qiong Wu, Jiaer Xia, Pingyang Dai, Yiyi Zhou, Yongjian Wu, Rongrong Ji
|
CycleTrans: Learning Neutral yet Discriminative Features for
Visible-Infrared Person Re-Identification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visible-infrared person re-identification (VI-ReID) is a task of matching the
same individuals across the visible and infrared modalities. Its main challenge
lies in the modality gap caused by cameras operating on different spectra.
Existing VI-ReID methods mainly focus on learning general features across
modalities, often at the expense of feature discriminability. To address this
issue, we present a novel cycle-construction-based network for neutral yet
discriminative feature learning, termed CycleTrans. Specifically, CycleTrans
uses a lightweight Knowledge Capturing Module (KCM) to capture rich semantics
from the modality-relevant feature maps according to pseudo queries.
Afterwards, a Discrepancy Modeling Module (DMM) is deployed to transform these
features into neutral ones according to the modality-irrelevant prototypes. To
ensure feature discriminability, another two KCMs are further deployed for
feature cycle constructions. With cycle construction, our method can learn
effective neutral features for visible and infrared images while preserving
their salient semantics. Extensive experiments on the SYSU-MM01 and RegDB
datasets validate the merits of CycleTrans against a flurry of
state-of-the-art methods, with gains of +4.57% rank-1 on SYSU-MM01 and +2.2%
rank-1 on RegDB.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 08:41:40 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Wu",
"Qiong",
""
],
[
"Xia",
"Jiaer",
""
],
[
"Dai",
"Pingyang",
""
],
[
"Zhou",
"Yiyi",
""
],
[
"Wu",
"Yongjian",
""
],
[
"Ji",
"Rongrong",
""
]
] |
new_dataset
| 0.962317 |
2208.09870
|
Aikaterini Adam
|
Aikaterini Adam, Torsten Sattler, Konstantinos Karantzalos and Tomas
Pajdla
|
Objects Can Move: 3D Change Detection by Geometric Transformation
Consistency
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
AR/VR applications and robots need to know when the scene has changed. An
example is when objects are moved, added, or removed from the scene. We propose
a 3D object discovery method that is based only on scene changes. Our method
does not need to encode any assumptions about what an object is, but rather
discovers objects by exploiting their coherent motion. Changes are initially
detected as differences in the depth maps and segmented as objects if they
undergo rigid motions. A graph cut optimization propagates the changing labels
to geometrically consistent regions. Experiments show that our method achieves
state-of-the-art performance on the 3RScan dataset against competitive
baselines. The source code of our method can be found at
https://github.com/katadam/ObjectsCanMove.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 11:32:47 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Adam",
"Aikaterini",
""
],
[
"Sattler",
"Torsten",
""
],
[
"Karantzalos",
"Konstantinos",
""
],
[
"Pajdla",
"Tomas",
""
]
] |
new_dataset
| 0.959675 |
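The first stage described in the entry above, detecting changes as depth-map differences and grouping them into candidate regions, can be sketched as a threshold plus connected-component labeling. The threshold value is an assumption; the paper's full pipeline additionally verifies rigid motions and runs a graph-cut refinement.

```python
import numpy as np
from scipy import ndimage

def depth_change_regions(depth_before, depth_after, thr=0.05):
    """Threshold the per-pixel depth difference and label connected regions,
    each a candidate moved/added/removed object."""
    changed = np.abs(depth_after - depth_before) > thr   # boolean change mask
    labels, n_regions = ndimage.label(changed)           # connected components
    return labels, n_regions

before = np.zeros((64, 64))
after = before.copy()
after[10:20, 10:20] += 0.5                               # synthetic moved object
labels, n = depth_change_regions(before, after)
print(n)   # -> 1
```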
2208.09878
|
Jingyu Lin
|
Jingyu Lin, Jie Jiang, Yan Yan, Chunchao Guo, Hongfa Wang, Wei Liu,
Hanzi Wang
|
DPTNet: A Dual-Path Transformer Architecture for Scene Text Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The prosperity of deep learning contributes to the rapid progress in scene
text detection. Among all the methods with convolutional networks,
segmentation-based ones have drawn extensive attention due to their superiority
in detecting text instances of arbitrary shapes and extreme aspect ratios.
However, bottom-up methods are limited by the performance of their
segmentation models. In this paper, we propose DPTNet (Dual-Path Transformer
Network), a simple yet effective architecture to model the global and local
information for the scene text detection task. We further propose a parallel
design that integrates the convolutional network with a powerful self-attention
mechanism to provide complementary clues between the attention path and
convolutional path. Moreover, a bi-directional interaction module across the
two paths is developed to provide complementary clues in the channel and
spatial dimensions. We also upgrade the concentration operation by adding an
extra multi-head attention layer to it. Our DPTNet achieves state-of-the-art
results on the MSRA-TD500 dataset, and provides competitive results on other
standard benchmarks in terms of both detection accuracy and speed.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 12:58:45 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Lin",
"Jingyu",
""
],
[
"Jiang",
"Jie",
""
],
[
"Yan",
"Yan",
""
],
[
"Guo",
"Chunchao",
""
],
[
"Wang",
"Hongfa",
""
],
[
"Liu",
"Wei",
""
],
[
"Wang",
"Hanzi",
""
]
] |
new_dataset
| 0.992873 |
2208.09975
|
Mordechai Guri
|
Mordechai Guri
|
ETHERLED: Sending Covert Morse Signals from Air-Gapped Devices via
Network Card (NIC) LEDs
| null | null |
10.1109/CSR54599.2022.9850284
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Highly secure devices are often isolated from the Internet or other public
networks due to the confidential information they process. This level of
isolation is referred to as an 'air-gap.'
In this paper, we present a new technique named ETHERLED, allowing attackers
to leak data from air-gapped networked devices such as PCs, printers, network
cameras, embedded controllers, and servers. Networked devices have an
integrated network interface controller (NIC) that includes status and activity
indicator LEDs. We show that malware installed on the device can control the
status LEDs by blinking and alternating colors, using documented methods or
undocumented firmware commands. Information can be encoded via simple encoding
such as Morse code and modulated over these optical signals. An attacker can
intercept and decode these signals from tens to hundreds of meters away. We
show an evaluation and discuss defensive and preventive countermeasures for
this exfiltration attack.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 22:24:11 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Guri",
"Mordechai",
""
]
] |
new_dataset
| 0.999485 |
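The encoding layer of ETHERLED can be illustrated with plain Morse timing over an on/off LED state. The dot duration and the LED-control hook below are hypothetical; the paper also uses color alternation and firmware-level control.

```python
MORSE = {"A": ".-", "B": "-...", "C": "-.-.", "S": "...", "O": "---"}  # excerpt
DOT = 0.1  # seconds per dot; a dash lasts 3 dots (standard Morse timing)

def to_blinks(text):
    """Turn text into (led_on, seconds) steps using Morse timing rules."""
    steps = []
    for ch in text.upper():
        for i, sym in enumerate(MORSE[ch]):
            if i:
                steps.append((False, DOT))          # gap between symbols
            steps.append((True, DOT if sym == "." else 3 * DOT))
        steps.append((False, 3 * DOT))              # gap between letters
    return steps

for on, sec in to_blinks("SOS"):
    print("ON " if on else "OFF", sec)              # swap in a NIC LED driver call
```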
2208.09999
|
Rabab Abdelfattah
|
Rabab Abdelfattah, Xin Zhang, Zhenyao Wu, Xinyi Wu, Xiaofeng Wang, and
Song Wang
|
PLMCL: Partial-Label Momentum Curriculum Learning for Multi-Label Image
Classification
|
Accepted in ECCVw
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-label image classification aims to predict all possible labels in an
image. It is usually formulated as a partial-label learning problem, given the
fact that it could be expensive in practice to annotate all labels in every
training image. Existing works on partial-label learning focus on the case
where each training image is annotated with only a subset of its labels. A
special case is to annotate only one positive label in each training image. To
further relieve the annotation burden and enhance the performance of the
classifier, this paper proposes a new partial-label setting in which only a
subset of the training images are labeled, each with only one positive label,
while the rest of the training images remain unlabeled. To handle this new
setting, we propose an end-to-end deep network, PLMCL (Partial Label Momentum
Curriculum Learning), that can learn to produce confident pseudo labels for
both partially-labeled and unlabeled training images. The novel momentum-based
law updates the soft pseudo labels of each training image while taking into
account the updating velocity of the pseudo labels, which helps avoid getting
trapped in a low-confidence local minimum, especially at the early stage of
training, when both observed labels and confidence in the pseudo labels are
lacking. In addition, we
present a confidence-aware scheduler to adaptively perform easy-to-hard
learning for different labels. Extensive experiments demonstrate that our
proposed PLMCL outperforms many state-of-the-art multi-label classification
methods under various partial-label settings on three different datasets.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 01:23:08 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Abdelfattah",
"Rabab",
""
],
[
"Zhang",
"Xin",
""
],
[
"Wu",
"Zhenyao",
""
],
[
"Wu",
"Xinyi",
""
],
[
"Wang",
"Xiaofeng",
""
],
[
"Wang",
"Song",
""
]
] |
new_dataset
| 0.965941 |
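One plausible reading of the momentum-based pseudo-label law described in the PLMCL entry above keeps a velocity term so labels drift smoothly toward the model's predictions. This is a sketch under our assumptions; the paper's exact update rule may differ.

```python
import numpy as np

def momentum_pseudo_update(pseudo, velocity, model_prob, beta=0.9, lr=0.5):
    """Momentum-style update of soft pseudo labels: the velocity accumulates
    the drift toward the current model predictions."""
    velocity = beta * velocity + (1 - beta) * (model_prob - pseudo)
    pseudo = np.clip(pseudo + lr * velocity, 0.0, 1.0)
    return pseudo, velocity

pseudo, vel = np.full(4, 0.5), np.zeros(4)   # 4 labels, uninformative start
for _ in range(10):
    pseudo, vel = momentum_pseudo_update(pseudo, vel,
                                         np.array([1.0, 0.0, 1.0, 0.2]))
print(pseudo.round(2))
```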
2208.10100
|
Jonathan Fhima
|
Jonathan Fhima, Jan Van Eijgen, Moti Freiman, Ingeborg Stalmans and
Joachim A. Behar
|
Lirot.ai: A Novel Platform for Crowd-Sourcing Retinal Image
Segmentations
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Introduction: For supervised deep learning (DL) tasks, researchers need a
large annotated dataset. In medical data science, one of the major limitations
to develop DL models is the lack of annotated examples in large quantity. This
is most often due to the time and expertise required to annotate. We introduce
Lirot.ai, a novel platform for facilitating and crowd-sourcing image
segmentations. Methods: Lirot.ai is composed of three components: an iPadOS
client application named Lirot.ai-app, a backend server named Lirot.ai-server,
and a Python API named Lirot.ai-API. Lirot.ai-app was developed in Swift 5.6,
and Lirot.ai-server is a Firebase backend. Lirot.ai-API allows the management
of the database. Lirot.ai-app can be installed on as many iPadOS devices as
needed, so that annotators can perform their segmentations simultaneously and
remotely. We incorporate Apple Pencil compatibility, making the segmentation
faster, more accurate, and more intuitive for the expert than any other
computer-based alternative. Results: We demonstrate the usage of Lirot.ai for
the creation of a retinal fundus dataset with reference vasculature
segmentations. Discussion and future work: We will use active learning
strategies to continue enlarging our retinal fundus dataset by including a more
efficient process to select the images to be annotated and distribute them to
annotators.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 07:19:46 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Fhima",
"Jonathan",
""
],
[
"Van Eijgen",
"Jan",
""
],
[
"Freiman",
"Moti",
""
],
[
"Stalmans",
"Ingeborg",
""
],
[
"Behar",
"Joachim A.",
""
]
] |
new_dataset
| 0.997122 |
2208.10145
|
Zengran Wang
|
Zengran Wang, Chen Min, Zheng Ge, Yinhao Li, Zeming Li, Hongyu Yang,
Di Huang
|
STS: Surround-view Temporal Stereo for Multi-view 3D Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning accurate depth is essential to multi-view 3D object detection.
Recent approaches mainly learn depth from monocular images, which confront
inherent difficulties due to the ill-posed nature of monocular depth learning.
Instead of using a sole monocular depth method, in this work, we propose a
novel Surround-view Temporal Stereo (STS) technique that leverages the geometry
correspondence between frames across time to facilitate accurate depth
learning. Specifically, we regard the fields of view from all cameras around
the ego vehicle as a unified view, namely the surround view, and conduct
temporal stereo matching on it. The resulting geometrical correspondence between
different frames from STS is utilized and combined with the monocular depth to
yield final depth prediction. Comprehensive experiments on nuScenes show that
STS greatly boosts 3D detection ability, notably for medium and long distance
objects. On BEVDepth with a ResNet-50 backbone, STS improves mAP and NDS by
2.6% and 1.4%, respectively. Consistent improvements are observed when using a
larger backbone and a larger image resolution, demonstrating its effectiveness.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 08:46:33 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Wang",
"Zengran",
""
],
[
"Min",
"Chen",
""
],
[
"Ge",
"Zheng",
""
],
[
"Li",
"Yinhao",
""
],
[
"Li",
"Zeming",
""
],
[
"Yang",
"Hongyu",
""
],
[
"Huang",
"Di",
""
]
] |
new_dataset
| 0.98746 |
2208.10168
|
Asaf Petruschka
|
Merav Parter, Asaf Petruschka
|
\~{O}ptimal Dual Vertex Failure Connectivity Labels
|
DISC 2022
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present succinct labeling schemes for supporting
connectivity queries under vertex faults. For a given $n$-vertex graph $G$, an
$f$-VFT (resp., EFT) connectivity labeling scheme is a distributed data
structure that assigns each of the graph edges and vertices a short label, such
that given the labels of a vertex pair $u$ and $v$, and the labels of at most
$f$ failing vertices (resp., edges) $F$, one can determine if $u$ and $v$ are
connected in $G \setminus F$. The primary complexity measure is the length of
the individual labels. Since their introduction by [Courcelle, Twigg, STACS
'07], FT labeling schemes have been devised only for a limited collection of
graph families. A recent work [Dory and Parter, PODC 2021] provided EFT
labeling schemes for general graphs under edge failures, leaving the vertex
failure case fairly open. We provide the first sublinear $f$-VFT labeling
schemes for $f \geq 2$ for any $n$-vertex graph. Our key result is $2$-VFT
connectivity labels with $O(\log^3 n)$ bits. Our constructions are based on
analyzing the structure of dual failure replacement paths on top of the
well-known heavy-light tree decomposition technique of [Sleator and Tarjan,
STOC 1981]. We also provide $f$-VFT labels with sub-linear length (in $|V|$)
for any $f=o(\log\log n)$, that are based on a reduction to the existing EFT
labels.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 09:30:18 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Parter",
"Merav",
""
],
[
"Petruschka",
"Asaf",
""
]
] |
new_dataset
| 0.993856 |
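The heavy-light tree decomposition of [Sleator and Tarjan, STOC 1981] that the labels in the entry above build on can be sketched compactly. This is the textbook construction only, not the paper's labeling scheme.

```python
def heavy_light(n, children, root=0):
    """Return each node's chain head: the heavy child (largest subtree)
    continues its parent's chain; every other child starts a new chain."""
    size = [1] * n
    order = []                        # top-down order: parent before children
    stack = [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    for v in reversed(order):         # bottom-up pass for subtree sizes
        for c in children[v]:
            size[v] += size[c]
    head = [root] * n
    for v in order:                   # parents are assigned before children
        if children[v]:
            heavy = max(children[v], key=lambda c: size[c])
            for c in children[v]:
                head[c] = head[v] if c == heavy else c
    return head

# path 0-1-2 plus a leaf 3 under node 0: node 3 starts its own chain
print(heavy_light(4, {0: [1, 3], 1: [2], 2: [], 3: []}))  # -> [0, 0, 0, 3]
```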
2208.10218
|
Vincent Wall
|
Vincent Wall and Oliver Brock
|
A Virtual 2D Tactile Array for Soft Actuators Using Acoustic Sensing
|
Accepted at 2022 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We create a virtual 2D tactile array for soft pneumatic actuators using
embedded audio components. We detect contact-specific changes in sound
modulation to infer tactile information. We evaluate different sound
representations and learning methods to detect even small contact variations.
We demonstrate the acoustic tactile sensor array by the example of a PneuFlex
actuator and use a Braille display to individually control the contact of 29x4
pins with the actuator's 90x10 mm palmar surface. Evaluating the spatial
resolution, the acoustic sensor localizes edges in x- and y-direction with a
root-mean-square regression error of 1.67 mm and 0.0 mm, respectively. Even
light contacts of a single Braille pin with a lifting force of 0.17 N are
measured with high accuracy. Finally, we demonstrate the sensor's sensitivity
to complex contact shapes by successfully reading the 26 letters of the Braille
alphabet from a single display cell with a classification rate of 88%.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 11:56:56 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Wall",
"Vincent",
""
],
[
"Brock",
"Oliver",
""
]
] |
new_dataset
| 0.999647 |
2208.10233
|
Hugo Daniel Macedo
|
Hugo Daniel Macedo and Ken Pierce
|
Proceedings of the 20th International Overture Workshop
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This volume contains the papers presented at the 20th International Overture
Workshop, which was held in a hybrid format: online and in person at Aarhus,
Denmark, on 5 July 2022. This event was the latest in a series of workshops
around the Vienna Development Method (VDM), the open-source project Overture,
and related tools and formalisms. VDM is one of the longest established formal
methods for systems development. A lively community of researchers and
practitioners has grown up in academia and industry around the modelling
languages (VDM-SL, VDM++, VDM-RT, CML) and tools (VDMTools, Overture, VDM
VSCode extension, Crescendo, Symphony, the INTO-CPS chain, and ViennaTalk).
Together, these provide a platform for work on modelling and analysis
technology that includes static and dynamic analysis, test generation,
execution support, and model checking. This workshop provided updates on the
emerging technology of VDM/Overture, including collaboration infrastructure,
collaborative modelling and co-simulation for Cyber-Physical Systems.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 12:16:09 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Macedo",
"Hugo Daniel",
""
],
[
"Pierce",
"Ken",
""
]
] |
new_dataset
| 0.977955 |
2208.10248
|
Oiwi Parker Jones
|
Oiwi Parker Jones and Brendan Shillingford
|
Composing RNNs and FSTs for Small Data: Recovering Missing Characters in
Old Hawaiian Text
|
This paper originally appeared in a NeurIPS Workshop in 2018: IRASL -
Interpretability and Robustness in Audio, Speech, and Language. It builds on
a shorter paper that appeared in the Proceedings of the 2018 Conference on
Empirical Methods in Natural Language Processing (EMNLP). See
acknowledgements for details
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In contrast to the older writing system of the 19th century, modern Hawaiian
orthography employs characters for long vowels and glottal stops. These extra
characters account for about one-third of the phonemes in Hawaiian, so
including them makes a big difference to reading comprehension and
pronunciation. However, transliterating between older and newer texts is a
laborious task when performed manually. We introduce two related methods to
help solve this transliteration problem automatically, given that there were
not enough data to train an end-to-end deep learning model. One method is
implemented, end-to-end, using finite state transducers (FSTs). The other is a
hybrid deep learning approach which approximately composes an FST with a
recurrent neural network (RNN). We find that the hybrid approach outperforms
the end-to-end FST by partitioning the original problem into one part that can
be modelled by hand, using an FST, and into another part, which is easily
solved by an RNN trained on the available data.
|
[
{
"version": "v1",
"created": "Sun, 24 Jul 2022 00:46:21 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Jones",
"Oiwi Parker",
""
],
[
"Shillingford",
"Brendan",
""
]
] |
new_dataset
| 0.997411 |
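The problem setup above, older Hawaiian text lacking macrons and glottal stops, can be illustrated by enumerating candidate modern spellings and scoring them with a language model. The candidate map and scorer below are toy stand-ins, not the paper's FST or RNN.

```python
from itertools import product

# toy ambiguity map: each old-orthography character may surface as several
# modern ones (e.g. a plain vowel, a long vowel, or a glottal stop + vowel)
CANDS = {"a": ["a", "ā", "ʻa"], "e": ["e", "ē"], "i": ["i", "ī"],
         "o": ["o", "ō"], "u": ["u", "ū"]}

def candidates(old_word):
    """All modern spellings reachable from an old-orthography word."""
    options = [CANDS.get(c, [c]) for c in old_word]
    return ["".join(p) for p in product(*options)]

def score(word):
    """Toy scorer standing in for an RNN language model."""
    return -word.count("ʻ")        # e.g. prefer fewer inserted glottal stops

cands = candidates("pau")
print(max(cands, key=score), f"({len(cands)} candidates)")
```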
2208.10269
|
Steven Yin
|
Ruizhe Jia, Steven Yin
|
To EVM or Not to EVM: Blockchain Compatibility and Network Effects
| null | null | null | null |
cs.GT cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We study the competition between blockchains in a \emph{multi-chain}
environment, where a dominant EVM-compatible blockchain (e.g., Ethereum)
co-exists with an alternative EVM-compatible (e.g., Avalanche) and an
EVM-incompatible (e.g., Algorand) blockchain. While EVM compatibility allows
existing Ethereum users and developers to migrate more easily over to the
alternative layer-1, EVM incompatibility might allow the firms to build more
loyal and ``sticky'' user base, and in turn a more robust ecosystem. As such,
the choice to be EVM-compatible is not merely a technological decision, but
also an important strategic decision. In this paper, we develop a game
theoretic model to study this competitive dynamic, and find that at
equilibrium, new entrants/developers tend to adopt the dominant blockchain. To
avoid adoption failure, the alternative blockchains have to either (1) directly
subsidize the new entrant firms or (2) offer better features, which in practice
can take form in lower transaction costs, faster finality, or larger network
effects. We find that it is easier for EVM-compatible blockchains to attract
users through direct subsidy, while it is more efficient for EVM-incompatible
blockchains to attract users through offering better features/products.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 03:01:20 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Jia",
"Ruizhe",
""
],
[
"Yin",
"Steven",
""
]
] |
new_dataset
| 0.977005 |
2208.10281
|
EPTCS
|
Muhammad Hamza Waseem, Jonathon Liu, Vincent Wang-Ma\'scianica, Bob
Coecke
|
Language-independence of DisCoCirc's Text Circuits: English and Urdu
|
In Proceedings E2ECOMPVEC, arXiv:2208.05313
|
EPTCS 366, 2022, pp. 50-60
|
10.4204/EPTCS.366.7
| null |
cs.CL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
DisCoCirc is a newly proposed framework for representing the grammar and
semantics of texts using compositional, generative circuits. While it
constitutes a development of the Categorical Distributional Compositional
(DisCoCat) framework, it exposes radically new features. In particular, [14]
suggested that DisCoCirc goes some way toward eliminating grammatical
differences between languages. In this paper we provide a sketch that this is
indeed the case for restricted fragments of English and Urdu. We first develop
DisCoCirc for a fragment of Urdu, as it was done for English in [14]. There is
a simple translation from English grammar to Urdu grammar, and vice versa. We
then show that differences in grammatical structure between English and Urdu -
primarily relating to the ordering of words and phrases - vanish when passing
to DisCoCirc circuits.
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 09:32:00 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Waseem",
"Muhammad Hamza",
""
],
[
"Liu",
"Jonathon",
""
],
[
"Wang-Maścianica",
"Vincent",
""
],
[
"Coecke",
"Bob",
""
]
] |
new_dataset
| 0.99978 |
2208.10282
|
Yichen Zhu
|
Shimin Tao, Weibin Meng, Yimeng Chen, Yichen Zhu, Ying Liu, Chunning
Du, Tao Han, Yongpeng Zhao, Xiangguang Wang and Hao Yang
|
LogStamp: Automatic Online Log Parsing Based on Sequence Labelling
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Logs are one of the most critical data sources for service management. They
contain rich runtime information for both services and users. Since logs are
often enormous in size and freely written by hand, a typical log-based
analysis needs to parse logs into a structured format first. However, we
observe that most existing log parsing methods cannot parse logs online, which
is essential for online services. In this paper, we present an automatic
online log parsing method named LogStamp. We extensively evaluate LogStamp on
five public datasets to demonstrate the effectiveness of our proposed method.
The experiments show that our proposed method can achieve high accuracy with
only a small portion of the training set. For example, it can achieve an
average accuracy of 0.956 when using only 10% of the data for training.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 03:39:53 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Tao",
"Shimin",
""
],
[
"Meng",
"Weibin",
""
],
[
"Chen",
"Yimeng",
""
],
[
"Zhu",
"Yichen",
""
],
[
"Liu",
"Ying",
""
],
[
"Du",
"Chunning",
""
],
[
"Han",
"Tao",
""
],
[
"Zhao",
"Yongpeng",
""
],
[
"Wang",
"Xiangguang",
""
],
[
"Yang",
"Hao",
""
]
] |
new_dataset
| 0.964802 |
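LogStamp casts parsing as sequence labelling: each token of a log line is tagged as a template constant or a variable. A crude heuristic baseline conveying the output format (not LogStamp's learned tagger):

```python
import re

def parse_log_line(line):
    """Tag each token as constant or variable; variables become '<*>',
    yielding the log template plus the extracted parameter list."""
    template, params = [], []
    for tok in line.split():
        if re.search(r"\d", tok) or "/" in tok:     # crude variable detector
            template.append("<*>")
            params.append(tok)
        else:
            template.append(tok)
    return " ".join(template), params

tmpl, params = parse_log_line("Received block blk_3587 of size 67108864 from /10.0.0.1")
print(tmpl)    # Received block <*> of size <*> from <*>
print(params)  # ['blk_3587', '67108864', '/10.0.0.1']
```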
2208.10296
|
Shucheng Yang
|
Shucheng Yang, Xiaoping Gao, Jie Ren
|
Sequential Circuits Synthesis for Rapid Single Flux Quantum Logic Based
on Finite State Machine Decomposition
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Rapid Single Flux Quantum (RSFQ) logic is a promising technology to supersede
complementary metal-oxide-semiconductor (CMOS) logic in some specialized areas,
as it provides ultra-fast and energy-efficient circuits. To realize a
large-scale integration design, electronic design automation (EDA) tools
specialized for RSFQ logic are required due to the divergences in logic type,
timing constraints, and circuit structure compared with CMOS logic. Logic
synthesis is crucial in converting behavioral circuit description into circuit
netlist, typically combining combinational and sequential circuit synthesis.
For RSFQ logic, sequential circuit synthesis is challenging, especially
for non-linear sequential blocks with feedback loops. Thus, this paper presents
a sequential circuit synthesis algorithm based on finite state machine (FSM)
decomposition, which ensures design functionality, lowers costs, and improves
the RSFQ circuit performance. Additionally, we present the synthesis processes
of the feedback logic and the 2-bit counter to demonstrate how the proposed
algorithm operates, and ISCAS89 benchmark circuits reveal our method's ability
to synthesize large-scale sequential circuits.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 13:23:31 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Yang",
"Shucheng",
""
],
[
"Gao",
"Xiaoping",
""
],
[
"Ren",
"Jie",
""
]
] |
new_dataset
| 0.999344 |
2208.10299
|
Vincent Wall
|
Vincent Wall, Gabriel Z\"oller, Oliver Brock
|
Passive and Active Acoustic Sensing for Soft Pneumatic Actuators
|
This paper is currently under review in The International Journal of
Robotics Research
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a sensorization method for soft pneumatic actuators that uses an
embedded microphone and speaker to measure different actuator properties. The
physical state of the actuator determines the specific modulation of sound as
it travels through the structure. Using simple machine learning, we create a
computational sensor that infers the corresponding state from sound recordings.
We demonstrate the acoustic sensor on a soft pneumatic continuum actuator and
use it to measure contact locations, contact forces, object materials, actuator
inflation, and actuator temperature. We show that the sensor is reliable
(average classification rate for six contact locations of 93%), precise (mean
spatial accuracy of 3.7 mm), and robust against common disturbances like
background noise. Finally, we compare different sounds and learning methods and
achieve best results with 20 ms of white noise and a support vector classifier
as the sensor model.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 13:25:43 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Wall",
"Vincent",
""
],
[
"Zöller",
"Gabriel",
""
],
[
"Brock",
"Oliver",
""
]
] |
new_dataset
| 0.994767 |
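The sensor model described in the entry above, sound recordings mapped to a learned state, amounts to a feature extraction step plus a support vector classifier. A minimal scikit-learn sketch with synthetic spectra standing in for real recordings:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic stand-in for spectral features of recorded noise responses:
# 6 contact locations, 50 recordings each, 128 frequency bins
X = np.concatenate([rng.normal(loc=c, scale=1.0, size=(50, 128))
                    for c in range(6)])
y = np.repeat(np.arange(6), 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # the "computational sensor"
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```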
2208.10414
|
Jianfei Yang
|
Jianfei Yang, Yunjiao Zhou, He Huang, Han Zou, Lihua Xie
|
MetaFi: Device-Free Pose Estimation via Commodity WiFi for Metaverse
Avatar Simulation
|
6 pages, 3 figures, 3 tables
| null | null | null |
cs.CV cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An avatar is a representation of a physical user in the virtual world that can engage in different activities and interact with other objects in the metaverse. Simulating the avatar requires accurate human pose estimation. Though camera-based solutions yield remarkable performance, they raise privacy concerns and suffer degraded performance under varying illumination, especially in smart homes. In this paper, we propose a WiFi-based IoT-enabled
human pose estimation scheme for metaverse avatar simulation, namely MetaFi.
Specifically, a deep neural network is designed with customized convolutional
layers and residual blocks to map the channel state information to human pose
landmarks. It is enforced to learn the annotations from the accurate computer
vision model, thus achieving cross-modal supervision. WiFi is ubiquitous and
robust to illumination, making it a feasible solution for avatar applications
in smart homes. The experiments are conducted in the real world, and the results show that MetaFi achieves very high performance with a PCK@50 of 95.23%.
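
A hedged sketch of the cross-modal supervision idea in the abstract above: a small convolutional network regresses pose landmarks from CSI and is trained against labels produced by an accurate vision model. Layer sizes, the CSI tensor shape, and the 17-landmark layout are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class CSIPoseNet(nn.Module):
    def __init__(self, n_landmarks=17):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_landmarks * 2)  # (x, y) per landmark

    def forward(self, csi):
        return self.head(self.features(csi).flatten(1))

model = CSIPoseNet()
csi = torch.randn(8, 1, 30, 100)   # batch of CSI "images": subcarriers x time
teacher_pose = torch.randn(8, 34)  # landmark labels from an accurate vision model
loss = nn.functional.mse_loss(model(csi), teacher_pose)
loss.backward()
print("cross-modal regression loss:", float(loss))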
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 15:50:54 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Yang",
"Jianfei",
""
],
[
"Zhou",
"Yunjiao",
""
],
[
"Huang",
"He",
""
],
[
"Zou",
"Han",
""
],
[
"Xie",
"Lihua",
""
]
] |
new_dataset
| 0.998068 |
2208.10415
|
Genoveva Vargas Solar
|
Genoveva Vargas-Solar, Karim Dao, Mirian Halfeld Ferrari Alves
|
NLDS-QL: From natural language data science questions to queries on
graphs: analysing patients conditions & treatments
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces NLDS-QL, a translator of data science questions
expressed in natural language (NL) into data science queries on graph
databases. Our translator is based on a simplified NL described by a grammar
that specifies sentences combining keywords to refer to operations on graphs
with the vocabulary of the graph schema. The demonstration proposed in this
paper shows NLDS-QL in action within a scenario to explore and analyse a graph
based on patient diagnoses generated with the open-source Synthea.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 15:53:39 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Vargas-Solar",
"Genoveva",
""
],
[
"Dao",
"Karim",
""
],
[
"Alves",
"Mirian Halfeld Ferrari",
""
]
] |
new_dataset
| 0.973103 |
2006.03876
|
Yuecong Xu
|
Yuecong Xu, Jianfei Yang, Haozhi Cao, Kezhi Mao, Jianxiong Yin and
Simon See
|
ARID: A New Dataset for Recognizing Action in the Dark
|
6 pages, 7 figures, Data available at
https://xuyu0010.github.io/arid, simplified title, extension of IJCAIW
version published by Springer
(https://link.springer.com/chapter/10.1007/978-981-16-0575-8_6)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The task of action recognition in dark videos is useful in various scenarios,
e.g., night surveillance and self-driving at night. Though progress has been
made in the action recognition task for videos in normal illumination, few have
studied action recognition in the dark. This is partly due to the lack of
sufficient datasets for such a task. In this paper, we explore the task of action recognition in dark videos. We bridge the lack of data for this task by collecting a new dataset: the Action Recognition in the Dark
(ARID) dataset. It consists of over 3,780 video clips with 11 action
categories. To the best of our knowledge, it is the first dataset focused on
human actions in dark videos. To gain a further understanding of our ARID dataset, we analyze it in detail and demonstrate its necessity over synthetic dark videos. Additionally, we benchmark the performance of several current action recognition models on our dataset and explore potential methods for improving their performance. Our results show that current action
recognition models and frame enhancement methods may not be effective solutions
for the task of action recognition in dark videos.
|
[
{
"version": "v1",
"created": "Sat, 6 Jun 2020 14:25:52 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Jun 2020 02:34:52 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jul 2021 15:40:15 GMT"
},
{
"version": "v4",
"created": "Fri, 19 Aug 2022 05:41:15 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Xu",
"Yuecong",
""
],
[
"Yang",
"Jianfei",
""
],
[
"Cao",
"Haozhi",
""
],
[
"Mao",
"Kezhi",
""
],
[
"Yin",
"Jianxiong",
""
],
[
"See",
"Simon",
""
]
] |
new_dataset
| 0.999825 |
2011.07499
|
M. F. Mridha
|
M. F. Mridha, Abu Quwsar Ohi, M. Ameer Ali, Mazedul Islam Emon,
Muhammad Mohsin Kabir
|
BanglaWriting: A multi-purpose offline Bangla handwriting dataset
|
Accepted in journal Data in Brief. The dataset is available on
https://data.mendeley.com/datasets/r43wkvdk4w/
| null |
10.1016/j.dib.2020.106633
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article presents a Bangla handwriting dataset named BanglaWriting that
contains single-page handwriting samples of 260 individuals of different personalities and ages. Each page includes bounding boxes that bound each word, along with the Unicode representation of the writing. This dataset contains 21,234 words
and 32,787 characters in total. Moreover, this dataset includes 5,470 unique
words of Bangla vocabulary. Apart from the usual words, the dataset comprises 261 instances of comprehensible overwriting and 450 handwritten strikes and mistakes. All of the bounding boxes and word labels are manually generated. The dataset can be
used for complex optical character/word recognition, writer identification,
handwritten word segmentation, and word generation. Furthermore, this dataset
is suitable for extracting age-based and gender-based variation of handwriting.
|
[
{
"version": "v1",
"created": "Sun, 15 Nov 2020 11:08:53 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Dec 2020 09:30:02 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Aug 2022 14:06:08 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Mridha",
"M. F.",
""
],
[
"Ohi",
"Abu Quwsar",
""
],
[
"Ali",
"M. Ameer",
""
],
[
"Emon",
"Mazedul Islam",
""
],
[
"Kabir",
"Muhammad Mohsin",
""
]
] |
new_dataset
| 0.999887 |
2110.07588
|
Zhongang Cai
|
Zhongang Cai, Mingyuan Zhang, Jiawei Ren, Chen Wei, Daxuan Ren,
Zhengyu Lin, Haiyu Zhao, Lei Yang, Chen Change Loy, Ziwei Liu
|
Playing for 3D Human Recovery
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Image- and video-based 3D human recovery (i.e., pose and shape estimation)
have achieved substantial progress. However, due to the prohibitive cost of
motion capture, existing datasets are often limited in scale and diversity. In
this work, we obtain massive human sequences by playing the video game with
automatically annotated 3D ground truths. Specifically, we contribute
GTA-Human, a large-scale 3D human dataset generated with the GTA-V game engine,
featuring a highly diverse set of subjects, actions, and scenarios. More
importantly, we study the use of game-playing data and obtain five major
insights. First, game-playing data is surprisingly effective. A simple
frame-based baseline trained on GTA-Human outperforms more sophisticated
methods by a large margin. For video-based methods, GTA-Human is even on par
with the in-domain training set. Second, we discover that synthetic data
provides critical complements to the real data that is typically collected
indoors. Our investigation into the domain gap provides explanations for our data
mixture strategies that are simple yet useful. Third, the scale of the dataset
matters. The performance boost is closely related to the additional data
available. A systematic study reveals the model sensitivity to data density
from multiple key aspects. Fourth, the effectiveness of GTA-Human is also
attributed to the rich collection of strong supervision labels (SMPL
parameters), which are otherwise expensive to acquire in real datasets. Fifth,
the benefits of synthetic data extend to larger models such as deeper
convolutional neural networks (CNNs) and Transformers, for which a significant
impact is also observed. We hope our work could pave the way for scaling up 3D
human recovery to the real world. Homepage:
https://caizhongang.github.io/projects/GTA-Human/
|
[
{
"version": "v1",
"created": "Thu, 14 Oct 2021 17:49:42 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 17:58:02 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Cai",
"Zhongang",
""
],
[
"Zhang",
"Mingyuan",
""
],
[
"Ren",
"Jiawei",
""
],
[
"Wei",
"Chen",
""
],
[
"Ren",
"Daxuan",
""
],
[
"Lin",
"Zhengyu",
""
],
[
"Zhao",
"Haiyu",
""
],
[
"Yang",
"Lei",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.999585 |
2110.07906
|
Francis Lau C.M.
|
Peng W. Zhang, Francis C.M. Lau and Chiu-W. Sham
|
Hardware Architecture of Layered Decoders for PLDPC-Hadamard Codes
|
The paper has been accepted to IEEE Trans. on Circuits on Systems I
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Protograph-based low-density parity-check Hadamard codes (PLDPC-HCs) are a
new type of ultimate-Shannon-limit-approaching codes. In this paper, we propose
a hardware architecture for the PLDPC-HC layered decoders. The decoders consist
mainly of random-access memories, Hadamard sub-decoders, and control logic.
Two types of pipelined structures are presented and the latency and throughput
of these two structures are derived. Implementation of the decoder design on an
FPGA board shows that a throughput of $1.48$ Gbps is achieved with a bit error
rate (BER) of $10^{-5}$ at around $E_b/N_0 = -0.40$ dB. The decoder can also achieve the same BER at $E_b/N_0 = -1.14$ dB with a reduced throughput of
$0.20$ Gbps.
|
[
{
"version": "v1",
"created": "Fri, 15 Oct 2021 07:41:31 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Aug 2022 06:06:01 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Zhang",
"Peng W.",
""
],
[
"Lau",
"Francis C. M.",
""
],
[
"Sham",
"Chiu-W.",
""
]
] |
new_dataset
| 0.990148 |
2111.09314
|
Janamejaya Channegowda
|
Edward Elson Kosasih, Rucha Bhalchandra Joshi, Janamejaya Channegowda
|
GAETS: A Graph Autoencoder Time Series Approach Towards Battery
Parameter Estimation
|
Accepted at CoSubmitting Summer (CSS) Workshop
https://iclr.cc/virtual/2022/workshop/9069
| null | null | null |
cs.LG cs.SY eess.SY
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Lithium-ion batteries are powering the ongoing transportation electrification
revolution. Lithium-ion batteries possess higher energy density and favourable electrochemical properties, which make them a preferable energy source for electric vehicles. Precise estimation of battery parameters (charge capacity, voltage, etc.) is vital for estimating the available range of an electric vehicle.
Graph-based estimation techniques enable us to understand the variable
dependencies underpinning them to improve estimates. In this paper we employ
Graph Neural Networks for battery parameter estimation, we introduce a unique
graph autoencoder time series estimation approach. Variables in battery measurements are known to be correlated with each other and with the variables of interest. We use a graph autoencoder
based on a non-linear version of NOTEARS as this allowed us to perform
gradient-descent in learning the structure (instead of treating it as a
combinatorial optimisation problem). The proposed architecture outperforms the
state-of-the-art Graph Time Series (GTS) architecture for battery parameter
estimation. We call our method GAETS (Graph AutoEncoder Time Series).
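
Since the abstract above builds on a non-linear version of NOTEARS, the key device worth illustrating is its differentiable acyclicity penalty h(W) = tr(exp(W ∘ W)) − d, which is zero exactly when the weighted adjacency matrix W encodes a DAG; this is what allows structure learning by gradient descent rather than combinatorial search. A minimal sketch:

import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    d = W.shape[0]
    # element-wise square, then trace of the matrix exponential
    return np.trace(expm(W * W)) - d

dag    = np.array([[0., 1.], [0., 0.]])   # 0 -> 1, no cycle
cyclic = np.array([[0., 1.], [1., 0.]])   # 0 <-> 1, a cycle
print(acyclicity(dag), acyclicity(cyclic))  # ~0.0 vs > 0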
|
[
{
"version": "v1",
"created": "Wed, 17 Nov 2021 16:04:01 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Aug 2022 10:00:47 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Kosasih",
"Edward Elson",
""
],
[
"Joshi",
"Rucha Bhalchandra",
""
],
[
"Channegowda",
"Janamejaya",
""
]
] |
new_dataset
| 0.990259 |
2112.08281
|
Filippos Gouidis Mr.
|
Filippos Gouidis, Theodore Patkos, Antonis Argyros and Dimitris
Plexousakis
|
Detecting Object States vs Detecting Objects: A New Dataset and a
Quantitative Experimental Study
|
Submitted to the Proceedings of the 17th International Joint
Conference on Computer Vision, Imaging and Computer Graphics Theory and
Applications (VISAPP)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The detection of object states in images (State Detection - SD) is a problem
of both theoretical and practical importance and it is tightly interwoven with
other important computer vision problems, such as action recognition and
affordance detection. It is also highly relevant to any entity that needs to
reason and act in dynamic domains, such as robotic systems and intelligent
agents. Despite its importance, up to now, the research on this problem has
been limited. In this paper, we attempt a systematic study of the SD problem.
First, we introduce the Object State Detection Dataset (OSDD), a new publicly
available dataset consisting of more than 19,000 annotations for 18 object
categories and 9 state classes. Second, using a standard deep learning
framework used for Object Detection (OD), we conduct a number of appropriately
designed experiments, towards an in-depth study of the behavior of the SD
problem. This study enables the setup of a baseline on the performance of SD,
as well as its relative performance in comparison to OD, in a variety of
scenarios. Overall, the experimental outcomes confirm that SD is harder than OD and that tailored SD methods need to be developed to address this significant problem effectively.
|
[
{
"version": "v1",
"created": "Wed, 15 Dec 2021 17:19:14 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 20:43:12 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Gouidis",
"Filippos",
""
],
[
"Patkos",
"Theodore",
""
],
[
"Argyros",
"Antonis",
""
],
[
"Plexousakis",
"Dimitris",
""
]
] |
new_dataset
| 0.96169 |
2201.05297
|
Hanting Li
|
Hanting Li, Mingzhe Sui, Zhaoqing Zhu, Feng Zhao
|
MMNet: Muscle motion-guided network for micro-expression recognition
|
8 pages, 4 figures
|
Proc. 31st Int'l Joint Conf. Artificial Intelligence (IJCAI), 2022
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial micro-expressions (MEs) are involuntary facial motions revealing people's real feelings, and they play an important role in the early intervention of mental illness, national security, and many human-computer interaction
systems. However, existing micro-expression datasets are limited and usually
pose some challenges for training good classifiers. To model the subtle facial
muscle motions, we propose a robust micro-expression recognition (MER)
framework, namely muscle motion-guided network (MMNet). Specifically, a
continuous attention (CA) block is introduced to focus on modeling local subtle
muscle motion patterns with little identity information, which is different
from most previous methods that directly extract features from complete video
frames with much identity information. Besides, we design a position
calibration (PC) module based on the vision transformer. By adding the position embeddings of the face generated by the PC module at the end of the two branches, the PC module helps to add position information to facial muscle motion
pattern features for the MER. Extensive experiments on three public
micro-expression datasets demonstrate that our approach outperforms
state-of-the-art methods by a large margin.
|
[
{
"version": "v1",
"created": "Fri, 14 Jan 2022 04:05:49 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Aug 2022 11:24:19 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Li",
"Hanting",
""
],
[
"Sui",
"Mingzhe",
""
],
[
"Zhu",
"Zhaoqing",
""
],
[
"Zhao",
"Feng",
""
]
] |
new_dataset
| 0.994698 |
2203.02284
|
Uwe Schmidt
|
Martin Weigert and Uwe Schmidt
|
Nuclei instance segmentation and classification in histopathology images
with StarDist
| null | null |
10.1109/ISBIC56247.2022.9854534
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instance segmentation and classification of nuclei is an important task in
computational pathology. We show that StarDist, a deep learning nuclei
segmentation method originally developed for fluorescence microscopy, can be
extended and successfully applied to histopathology images. This is
substantiated by conducting experiments on the Lizard dataset, and through
entering the Colon Nuclei Identification and Counting (CoNIC) challenge 2022,
where our approach achieved the first spot on the leaderboard for the
segmentation and classification task for both the preliminary and final test
phase.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 01:00:26 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2022 13:17:49 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Aug 2022 12:48:56 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Weigert",
"Martin",
""
],
[
"Schmidt",
"Uwe",
""
]
] |
new_dataset
| 0.992184 |
2203.12066
|
Sidney Pontes-Filho
|
Sidney Pontes-Filho, Kathryn Walker, Elias Najarro, Stefano Nichele
and Sebastian Risi
|
A Unified Substrate for Body-Brain Co-evolution
|
13 pages, 7 figures, accepted as a poster paper at The Genetic and
Evolutionary Computation Conference (GECCO 2022), accepted as workshop paper
at Workshop From Cells to Societies: Collective Learning Across Scales at
Tenth International Conference on Learning Representations (ICLR 2022)
| null |
10.1145/3520304.3529002
| null |
cs.RO cs.AI cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
The discovery of complex multicellular organism development took millions of
years of evolution. The genome of such a multicellular organism guides the
development of its body from a single cell, including its control system. Our
goal is to imitate this natural process using a single neural cellular
automaton (NCA) as a genome for modular robotic agents. In the introduced
approach, called Neural Cellular Robot Substrate (NCRS), a single NCA guides
the growth of a robot and the cellular activity which controls the robot during
deployment. We also introduce three benchmark environments, which test the
ability of the approach to grow different robot morphologies. In this paper,
NCRSs are trained with covariance matrix adaptation evolution strategy
(CMA-ES), and covariance matrix adaptation MAP-Elites (CMA-ME) for quality
diversity, which we show leads to more diverse robot morphologies with higher
fitness scores. While the NCRS can solve the easier tasks from our benchmark
environments, the success rate reduces when the difficulty of the task
increases. We discuss directions for future work that may facilitate the use of
the NCRS approach for more complex domains.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 21:57:59 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 15:00:51 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Pontes-Filho",
"Sidney",
""
],
[
"Walker",
"Kathryn",
""
],
[
"Najarro",
"Elias",
""
],
[
"Nichele",
"Stefano",
""
],
[
"Risi",
"Sebastian",
""
]
] |
new_dataset
| 0.998295 |
2208.04756
|
Wen-Yi Hsiao
|
Da-Yi Wu, Wen-Yi Hsiao, Fu-Rong Yang, Oscar Friedman, Warren Jackson,
Scott Bruzenak, Yi-Wen Liu, Yi-Hsuan Yang
|
DDSP-based Singing Vocoders: A New Subtractive-based Synthesizer and A
Comprehensive Evaluation
|
Accepted at ISMIR 2022
|
International Society for Music Information Retrieval (ISMIR) 2022
| null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
A vocoder is a conditional audio generation model that converts acoustic
features such as mel-spectrograms into waveforms. Taking inspiration from
Differentiable Digital Signal Processing (DDSP), we propose a new vocoder named
SawSing for singing voices. SawSing synthesizes the harmonic part of singing
voices by filtering a sawtooth source signal with a linear time-variant finite
impulse response filter whose coefficients are estimated from the input
mel-spectrogram by a neural network. As this approach enforces phase
continuity, SawSing can generate singing voices without the phase-discontinuity
glitch of many existing vocoders. Moreover, the source-filter assumption
provides an inductive bias that allows SawSing to be trained on a small amount
of data. Our experiments show that SawSing converges much faster and
outperforms state-of-the-art generative adversarial network and diffusion-based
vocoders in a resource-limited scenario with only 3 training recordings and a
3-hour training time.
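
A rough, assumption-laden sketch of the source-filter idea in the abstract above: a sawtooth carries the pitch and is shaped frame by frame by a linear time-variant FIR filter. Here the filter taps are random stand-ins; in SawSing they are predicted by a neural network from the input mel-spectrogram.

import numpy as np

np.random.seed(0)
fs = 24_000
f0, dur = 220.0, 0.5
t = np.arange(int(fs * dur)) / fs
phase = (f0 * t) % 1.0
saw = 2.0 * phase - 1.0                       # naive sawtooth source

frame = 240                                    # 10 ms frames (assumed)
out = np.zeros_like(saw)
for i in range(0, len(saw) - frame, frame):
    taps = np.hanning(32) * np.random.randn(32) * 0.1  # stand-in for predicted taps
    # frame-wise convolution approximates a linear time-variant FIR filter
    out[i:i + frame] = np.convolve(saw[i:i + frame], taps, mode="same")
print("synthesized", len(out) / fs, "seconds of audio")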
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 13:06:08 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Aug 2022 03:19:23 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Wu",
"Da-Yi",
""
],
[
"Hsiao",
"Wen-Yi",
""
],
[
"Yang",
"Fu-Rong",
""
],
[
"Friedman",
"Oscar",
""
],
[
"Jackson",
"Warren",
""
],
[
"Bruzenak",
"Scott",
""
],
[
"Liu",
"Yi-Wen",
""
],
[
"Yang",
"Yi-Hsuan",
""
]
] |
new_dataset
| 0.98746 |
2208.08224
|
Muhammad Muzammel
|
Muhammad Muzammel, Mohd Zuki Yusoff, Mohamad Naufal Mohamad Saad,
Faryal Sheikh and Muhammad Ahsan Awais
|
Blind-Spot Collision Detection System for Commercial Vehicles Using
Multi Deep CNN Architecture
| null | null |
10.3390/s22166088
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Buses and heavy vehicles have more blind spots compared to cars and other
road vehicles due to their large sizes. Therefore, accidents caused by these
heavy vehicles are more fatal and result in severe injuries to other road
users. These possible blind-spot collisions can be identified early using
vision-based object detection approaches. Yet, the existing state-of-the-art
vision-based object detection models rely heavily on a single feature
descriptor for making decisions. In this research, the design of two
convolutional neural networks (CNNs) based on high-level feature descriptors
and their integration with faster R-CNN is proposed to detect blind-spot
collisions for heavy vehicles. Moreover, a fusion approach is proposed to
integrate two pre-trained networks (i.e., Resnet 50 and Resnet 101) for
extracting high level features for blind-spot vehicle detection. The fusion of
features significantly improves the performance of faster R-CNN and
outperformed the existing state-of-the-art methods. Both approaches are
validated on a self-recorded blind-spot vehicle detection dataset for buses and
an online LISA dataset for vehicle detection. For the two proposed approaches, false detection rates (FDR) of 3.05% and 3.49% are obtained on the self-recorded dataset, making these approaches suitable for real-time applications.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 11:10:37 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Aug 2022 09:46:30 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Muzammel",
"Muhammad",
""
],
[
"Yusoff",
"Mohd Zuki",
""
],
[
"Saad",
"Mohamad Naufal Mohamad",
""
],
[
"Sheikh",
"Faryal",
""
],
[
"Awais",
"Muhammad Ahsan",
""
]
] |
new_dataset
| 0.992004 |
2208.09070
|
Sudeep Pasricha
|
Sudeep Pasricha, John Jose, Sujay Deb
|
Electronic, Wireless, and Photonic Network-on-Chip Security: Challenges
and Countermeasures
| null | null | null | null |
cs.AR cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Networks-on-chips (NoCs) are an integral part of emerging manycore computing
chips. They play a key role in facilitating communication among processing
cores and between cores and memory. To meet the aggressive performance and
energy-efficiency targets of machine learning and big data applications, NoCs
have been evolving to leverage emerging paradigms such as silicon photonics and
wireless communication. Increasingly, these NoC fabrics are becoming
susceptible to security vulnerabilities, such as from hardware trojans that can
snoop, corrupt, or disrupt information transfers on NoCs. This article surveys
the landscape of security challenges and countermeasures across electronic,
wireless, and photonic NoCs.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 21:14:34 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Pasricha",
"Sudeep",
""
],
[
"Jose",
"John",
""
],
[
"Deb",
"Sujay",
""
]
] |
new_dataset
| 0.995546 |
2208.09126
|
Guanzi Chen
|
Guanzi Chen, Jiying Zhang, Xi Xiao and Yang Li
|
GraphTTA: Test Time Adaptation on Graph Neural Networks
|
ICML 2022 Workshop "Principles of Distribution Shift"
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, test time adaptation (TTA) has attracted increasing attention due
to its power of handling the distribution shift issue in the real world. Unlike
what has been developed for convolutional neural networks (CNNs) for image
data, TTA is less explored for Graph Neural Networks (GNNs). There is still a
lack of efficient algorithms tailored for graphs with irregular structures. In
this paper, we present a novel test time adaptation strategy named Graph
Adversarial Pseudo Group Contrast (GAPGC), for graph neural networks TTA, to
better adapt to the Out Of Distribution (OOD) test data. Specifically, GAPGC
employs a contrastive learning variant as a self-supervised task during TTA,
equipped with Adversarial Learnable Augmenter and Group Pseudo-Positive Samples
to enhance the relevance between the self-supervised task and the main task,
boosting the performance of the main task. Furthermore, we provide theoretical evidence that GAPGC can extract minimal sufficient information for the main task from an information-theoretic perspective. Extensive experiments on a molecular scaffold OOD dataset demonstrate that the proposed approach achieves state-of-the-art performance on GNNs.
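
GAPGC's exact contrastive variant is not spelled out in the abstract above, so the sketch below uses a plain InfoNCE loss as a stand-in: embeddings of two augmented views of the same graph are positives, and other graphs in the batch serve as negatives.

import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.2):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # pairwise similarities
    labels = torch.arange(z1.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

z_view1 = torch.randn(16, 64)                    # graph embeddings, view 1
z_view2 = z_view1 + 0.05 * torch.randn(16, 64)   # embeddings of an augmented view
print("TTA contrastive loss:", float(info_nce(z_view1, z_view2)))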
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 02:24:16 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Chen",
"Guanzi",
""
],
[
"Zhang",
"Jiying",
""
],
[
"Xiao",
"Xi",
""
],
[
"Li",
"Yang",
""
]
] |
new_dataset
| 0.951531 |
2208.09195
|
Husheng Han
|
Husheng Han, Xing Hu, Kaidi Xu, Pucheng Dang, Ying Wang, Yongwei Zhao,
Zidong Du, Qi Guo, Yanzhi Yang, Tianshi Chen
|
Real-Time Robust Video Object Detection System Against Physical-World
Adversarial Attacks
| null | null | null | null |
cs.CV cs.AR cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
DNN-based video object detection (VOD) powers autonomous driving and video
surveillance industries with rising importance and promising opportunities.
However, adversarial patch attack yields huge concern in live vision tasks
because of its practicality, feasibility, and powerful attack effectiveness.
This work proposes Themis, a software/hardware system to defend against
adversarial patches for real-time robust video object detection. We observe
that adversarial patches exhibit extremely localized superficial feature
importance in a small region with non-robust predictions, and thus propose the
adversarial region detection algorithm for adversarial effect elimination.
Themis also proposes a systematic design to efficiently support the algorithm
by eliminating redundant computations and memory traffic. Experimental results
show that the proposed methodology can effectively recover the system from the
adversarial attack with negligible hardware overhead.
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 07:39:31 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Han",
"Husheng",
""
],
[
"Hu",
"Xing",
""
],
[
"Xu",
"Kaidi",
""
],
[
"Dang",
"Pucheng",
""
],
[
"Wang",
"Ying",
""
],
[
"Zhao",
"Yongwei",
""
],
[
"Du",
"Zidong",
""
],
[
"Guo",
"Qi",
""
],
[
"Yang",
"Yanzhi",
""
],
[
"Chen",
"Tianshi",
""
]
] |
new_dataset
| 0.976732 |
2208.09257
|
Yujia Zhou
|
Yujia Zhou, Jing Yao, Zhicheng Dou, Ledell Wu, Peitian Zhang, Ji-Rong
Wen
|
Ultron: An Ultimate Retriever on Corpus with a Model-based Indexer
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Document retrieval has been extensively studied within the index-retrieve
framework for decades, which has withstood the test of time. Unfortunately,
such a pipelined framework limits the optimization of the final retrieval
quality, because indexing and retrieving are separate stages that cannot be
jointly optimized in an end-to-end manner. In order to unify these two stages,
we explore a model-based indexer for document retrieval. Concretely, we propose
Ultron, which encodes the knowledge of all documents into the model and aims to
directly retrieve relevant documents end-to-end. For the model-based indexer,
how to represent docids and how to train the model are two main issues to be
explored. Existing solutions suffer from semantically deficient docids and
limited supervised data. To tackle these two problems, first, we devise two
types of docids that are richer in semantics and easier for model inference. In
addition, we propose a three-stage training workflow to capture more knowledge
contained in the corpus and associations between queries and docids.
Experiments on two public datasets demonstrate the superiority of Ultron over
advanced baselines for document retrieval.
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 10:28:36 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Zhou",
"Yujia",
""
],
[
"Yao",
"Jing",
""
],
[
"Dou",
"Zhicheng",
""
],
[
"Wu",
"Ledell",
""
],
[
"Zhang",
"Peitian",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
new_dataset
| 0.979832 |
2208.09270
|
Ilja Behnke
|
Markus Toll, Ilja Behnke, Odej Kao
|
IoTreeplay: Synchronous Distributed Traffic Replay in IoT Environments
|
2nd International Workshop on Testing Distributed Internet of Things
Systems
| null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Use-cases in the Internet of Things (IoT) typically involve a high number of
interconnected, heterogeneous devices. Due to the criticality of many IoT
scenarios, systems and applications need to be tested thoroughly before
rollout. Existing staging environments and testing frameworks are able to
emulate network properties but fail to deliver actual network-wide traffic
control to test systems application independently. To extend existing
frameworks, we present the distributed traffic replaying tool IoTreeplay.
The tool embeds TCPLivePlay into an environment that allows the synchronous
replaying of network traffic with multiple endpoints and connections. Replaying
takes place in a user-defined network or testbed containing IoT use-cases.
Network traffic can be captured and compared to the original trace to evaluate
accuracy and reliability. The resulting implementation is able to accurately
replay connections within a maximum transmission rate but struggles with
deviations from regular TCP connections, like packet loss or connection resets.
An evaluation has been performed, measuring individual and aggregated delays
between packets, based on the recorded timestamps.
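
A tiny sketch of the delay measurement described above: subtract the original trace timestamps from the replay capture timestamps and aggregate. The timestamp values are illustrative, not from the paper.

original = [0.000, 0.010, 0.025, 0.060]   # seconds, from the source pcap
replayed = [0.001, 0.012, 0.026, 0.064]   # seconds, captured during replay

delays = [r - o for o, r in zip(original, replayed)]
print("per-packet delay (ms):", [round(d * 1e3, 2) for d in delays])
print("mean delay (ms):", round(sum(delays) / len(delays) * 1e3, 2))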
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 11:29:41 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Toll",
"Markus",
""
],
[
"Behnke",
"Ilja",
""
],
[
"Kao",
"Odej",
""
]
] |
new_dataset
| 0.994169 |
2208.09305
|
Eoin Clerkin PhD
|
E. Clerkin, P.-N. Kramp, P.-A. Loizeau, and M. Szuba
|
Real and simulated CBM data interacting with an ESCAPE datalake
|
4 pages, 6 figures
|
CBM Progress Report 2021
|
10.15120/GSI-2022-00599
| null |
cs.DB hep-ex
|
http://creativecommons.org/licenses/by/4.0/
|
We report on the integration of the ESCAPE and CBM software environments. The ESCAPE datalake is utilized by the CBM experiment for the storage, distribution, and retrieval of real SIS18 and simulated SIS100 particle physics data.
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 12:33:03 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Clerkin",
"E.",
""
],
[
"Kramp",
"P. -N.",
""
],
[
"Loizeau",
"P. -A.",
""
],
[
"Szuba",
"M.",
""
]
] |
new_dataset
| 0.98598 |
2208.09374
|
Sunan He
|
Sunan He, Taian Guo, Tao Dai, Ruizhi Qiao, Chen Wu, Xiujun Shu, Bo Ren
|
VLMAE: Vision-Language Masked Autoencoder
|
12 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image and language modeling is of crucial importance for vision-language
pre-training (VLP), which aims to learn multi-modal representations from
large-scale paired image-text data. However, we observe that most existing VLP
methods focus on modeling the interactions between image and text features
while neglecting the information disparity between image and text, thus
suffering from focal bias. To address this problem, we propose a
vision-language masked autoencoder framework (VLMAE). VLMAE employs visual
generative learning, facilitating the model to acquire fine-grained and
unbiased features. Unlike the previous works, VLMAE pays attention to almost
all critical patches in an image, providing more comprehensive understanding.
Extensive experiments demonstrate that VLMAE achieves better performance in
various vision-language downstream tasks, including visual question answering,
image-text retrieval and visual grounding, even with up to 20% pre-training
speedup.
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 14:39:18 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"He",
"Sunan",
""
],
[
"Guo",
"Taian",
""
],
[
"Dai",
"Tao",
""
],
[
"Qiao",
"Ruizhi",
""
],
[
"Wu",
"Chen",
""
],
[
"Shu",
"Xiujun",
""
],
[
"Ren",
"Bo",
""
]
] |
new_dataset
| 0.996609 |
2208.09394
|
Zhou Hongyu
|
Hongyu Zhou, Zheng Ge, Weixin Mao, Zeming Li
|
PersDet: Monocular 3D Detection in Perspective Bird's-Eye-View
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, detectors that operate in Bird's-Eye-View (BEV) are superior to other 3D detectors for autonomous driving and robotics. However, transforming image
features into BEV necessitates special operators to conduct feature sampling.
These operators are not supported on many edge devices, bringing extra
obstacles when deploying detectors. To address this problem, we revisit the
generation of BEV representation and propose detecting objects in perspective
BEV -- a new BEV representation that does not require feature sampling. We
demonstrate that perspective BEV features can likewise enjoy the benefits of
the BEV paradigm. Moreover, the perspective BEV improves detection performance
by addressing issues caused by feature sampling. We propose PersDet for
high-performance object detection in perspective BEV space based on this
discovery. While implementing a simple and memory-efficient structure, PersDet
outperforms existing state-of-the-art monocular methods on the nuScenes
benchmark, reaching 34.6% mAP and 40.8% NDS when using ResNet-50 as the
backbone.
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 15:19:20 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Zhou",
"Hongyu",
""
],
[
"Ge",
"Zheng",
""
],
[
"Mao",
"Weixin",
""
],
[
"Li",
"Zeming",
""
]
] |
new_dataset
| 0.991894 |
2208.09466
|
Vyas Raina
|
Vyas Raina and Mark Gales
|
Gender Bias and Universal Substitution Adversarial Attacks on
Grammatical Error Correction Systems for Automated Assessment
| null |
U.K. Speech 2022
| null | null |
cs.CL cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Grammatical Error Correction (GEC) systems perform a sequence-to-sequence
task, where an input word sequence containing grammatical errors, is corrected
for these errors by the GEC system to output a grammatically correct word
sequence. With the advent of deep learning methods, automated GEC systems have
become increasingly popular. For example, GEC systems are often used on speech
transcriptions of English learners as a form of assessment and feedback - these
powerful GEC systems can be used to automatically measure an aspect of a
candidate's fluency. The count of \textit{edits} from a candidate's input
sentence (or essay) to a GEC system's grammatically corrected output sentence
is indicative of a candidate's language ability, where fewer edits suggest
better fluency. The count of edits can thus be viewed as a \textit{fluency
score} with zero implying perfect fluency. However, although deep learning
based GEC systems are extremely powerful and accurate, they are susceptible to
adversarial attacks: an adversary can introduce a small, specific change at the
input of a system that causes a large, undesired change at the output. When
considering the application of GEC systems to automated language assessment,
the aim of an adversary could be to cheat by making a small change to a
grammatically incorrect input sentence that conceals the errors from a GEC
system, such that no edits are found and the candidate is unjustly awarded a
perfect fluency score. This work examines a simple universal substitution
adversarial attack that non-native speakers of English could realistically
employ to deceive GEC systems used for assessment.
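
A minimal sketch of the edit-count fluency score described above: count the word-level edit operations between a candidate's sentence and the GEC output, with zero edits mapping to a "perfect" score. difflib is a simple stand-in for whatever alignment a production scorer would use.

import difflib

def fluency_edits(source, corrected):
    sm = difflib.SequenceMatcher(None, source.split(), corrected.split())
    return sum(1 for op, *_ in sm.get_opcodes() if op != "equal")

src = "she go to school yesterday"
hyp = "she went to school yesterday"
print("edit count:", fluency_edits(src, hyp))   # 1 edit -> imperfect fluency
print("edit count:", fluency_edits(src, src))   # 0 edits -> 'perfect' score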
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 17:44:13 GMT"
}
] | 2022-08-22T00:00:00 |
[
[
"Raina",
"Vyas",
""
],
[
"Gales",
"Mark",
""
]
] |
new_dataset
| 0.965416 |
2009.05139
|
Ali Beikmohammadi
|
Ali Beikmohammadi, Karim Faez, Ali Motallebi
|
SWP-LeafNET: A novel multistage approach for plant leaf identification
based on deep CNN
| null |
Expert Systems with Applications 202(2022)
|
10.1016/j.eswa.2022.117470
| null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Modern scientific and technological advances allow botanists to use computer
vision-based approaches for plant identification tasks. These approaches have
their own challenges. Leaf classification is a computer-vision task performed
for the automated identification of plant species, a serious challenge due to
variations in leaf morphology, including its size, texture, shape, and
venation. Researchers have recently become more inclined toward deep
learning-based methods rather than conventional feature-based methods due to
the popularity and successful implementation of deep learning methods in image
analysis, object recognition, and speech recognition.
In this paper, to have an interpretable and reliable system, a botanist's
behavior is modeled in leaf identification by proposing a highly-efficient
method of maximum behavioral resemblance developed through three deep
learning-based models. Different layers of the three models are visualized to
ensure that the botanist's behavior is modeled accurately. The first and second
models are designed from scratch. Regarding the third model, the pre-trained
architecture MobileNetV2 is employed along with the transfer-learning
technique. The proposed method is evaluated on two well-known datasets: Flavia
and MalayaKew. According to a comparative analysis, the suggested approach is more accurate than hand-crafted feature extraction methods and other deep learning techniques, achieving 99.67% and 99.81% accuracy on the two datasets, respectively. Unlike conventional
techniques that have their own specific complexities and depend on datasets,
the proposed method requires no hand-crafted feature extraction. Also, it
increases accuracy as compared with other deep learning techniques. Moreover,
SWP-LeafNET is distributable and considerably faster than other methods because
of using shallower models with fewer parameters asynchronously.
|
[
{
"version": "v1",
"created": "Thu, 10 Sep 2020 20:28:57 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2022 20:52:20 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Beikmohammadi",
"Ali",
""
],
[
"Faez",
"Karim",
""
],
[
"Motallebi",
"Ali",
""
]
] |
new_dataset
| 0.998441 |
2104.11469
|
Jan Philipp Thoma
|
Jan Philipp Thoma, Christian Niesler, Dominic Funke, Gregor Leander,
Pierre Mayr, Nils Pohl, Lucas Davi, Tim G\"uneysu
|
ClepsydraCache -- Preventing Cache Attacks with Time-Based Evictions
| null | null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the recent past, we have witnessed the shift towards attacks on the
microarchitectural CPU level. In particular, cache side-channels play a
predominant role as they allow an attacker to exfiltrate secret information by
exploiting the CPU microarchitecture. These subtle attacks exploit the
architectural visibility of conflicting cache addresses. In this paper, we
present ClepsydraCache, which mitigates state-of-the-art cache attacks using a
novel combination of cache decay and index randomization. Each cache entry is
linked with a Time-To-Live (TTL) value. We propose a new dynamic scheduling
mechanism of the TTL which plays a fundamental role in preventing those attacks
while maintaining performance. ClepsydraCache efficiently protects against the
latest cache attacks such as Prime+(Prune+)Probe. We present a full prototype
in gem5 and lay out a proof-of-concept hardware design of the TTL mechanism,
which demonstrates the feasibility of deploying ClepsydraCache in real-world
systems.
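
A toy sketch of time-based eviction as described above: each cache entry carries a Time-To-Live (TTL) and decays when it expires. The fixed TTL used here is a simplifying assumption; ClepsydraCache's contribution includes dynamically scheduling the TTL, which this sketch omits.

import time

class TTLCache:
    def __init__(self, ttl_seconds=0.05):
        self.ttl, self.store = ttl_seconds, {}

    def put(self, addr, value):
        self.store[addr] = (value, time.monotonic() + self.ttl)

    def get(self, addr):
        hit = self.store.get(addr)
        if hit is None:
            return None
        value, deadline = hit
        if time.monotonic() > deadline:   # entry decayed: evict on access
            del self.store[addr]
            return None
        return value

c = TTLCache()
c.put(0x1000, "data")
print(c.get(0x1000))        # hit
time.sleep(0.06)
print(c.get(0x1000))        # None: expired and evicted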
|
[
{
"version": "v1",
"created": "Fri, 23 Apr 2021 08:36:49 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 14:32:30 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Thoma",
"Jan Philipp",
""
],
[
"Niesler",
"Christian",
""
],
[
"Funke",
"Dominic",
""
],
[
"Leander",
"Gregor",
""
],
[
"Mayr",
"Pierre",
""
],
[
"Pohl",
"Nils",
""
],
[
"Davi",
"Lucas",
""
],
[
"Güneysu",
"Tim",
""
]
] |
new_dataset
| 0.99715 |
2202.10565
|
Doksoo Lee
|
Doksoo Lee, Yu-Chin Chan, Wei Wayne Chen, Liwei Wang, Anton van Beek,
Wei Chen
|
t-METASET: Tailoring Property Bias of Large-Scale Metamaterial Datasets
through Active Learning
|
This preprint has been submitted as a manuscript to ASME Journal of
Mechanical Design
| null | null | null |
cs.CE cs.DB cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by the recent achievements of machine learning in diverse domains,
data-driven metamaterials design has emerged as a compelling paradigm that can
unlock the potential of multiscale architectures. The model-centric research
trend, however, lacks principled frameworks dedicated to data acquisition,
whose quality propagates into the downstream tasks. Often built by naive
space-filling design in shape descriptor space, metamaterial datasets suffer
from property distributions that are either highly imbalanced or at odds with
design tasks of interest. To this end, we present t-METASET: an
active-learning-based data acquisition framework aiming to guide both diverse
and task-aware data generation. Distinctly, we seek a solution to a commonplace
yet frequently overlooked scenario at early stages of data-driven design of
metamaterials: when a massive (~O(10^4 )) shape-only library has been prepared
with no properties evaluated. The key idea is to harness a data-driven shape
descriptor learned from generative models, fit a sparse regressor as a start-up
agent, and leverage metrics related to diversity to drive data acquisition to
areas that help designers fulfill design goals. We validate the proposed
framework in three deployment cases, which encompass general use, task-specific
use, and tailorable use. Two large-scale mechanical metamaterial datasets are
used to demonstrate the efficacy. Applicable to general image-based design
representations, t-METASET could boost future advancements in data-driven
design.
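
One ingredient named above, diversity-driven acquisition, can be sketched as greedy max-min (farthest-point) selection in a learned shape-descriptor space. The random descriptors and pure diversity criterion are simplifying assumptions; t-METASET additionally couples this with a sparse property regressor to make selection task-aware.

import numpy as np

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(10_000, 16))   # ~O(10^4) shape-only library

def greedy_diverse(X, k):
    chosen = [0]
    dists = np.linalg.norm(X - X[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dists.argmax())   # farthest point from the current selection
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

print("next designs to evaluate:", greedy_diverse(descriptors, 8))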
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 22:46:49 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 14:54:07 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Lee",
"Doksoo",
""
],
[
"Chan",
"Yu-Chin",
""
],
[
"Chen",
"Wei Wayne",
""
],
[
"Wang",
"Liwei",
""
],
[
"van Beek",
"Anton",
""
],
[
"Chen",
"Wei",
""
]
] |
new_dataset
| 0.999839 |
2202.10842
|
Chongming Gao
|
Chongming Gao, Shijun Li, Wenqiang Lei, Jiawei Chen, Biao Li, Peng
Jiang, Xiangnan He, Jiaxin Mao, Tat-Seng Chua
|
KuaiRec: A Fully-observed Dataset and Insights for Evaluating
Recommender Systems
|
CIKM '22 Full Paper
| null |
10.1145/3511808.3557220
| null |
cs.IR cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The progress of recommender systems is hampered mainly by evaluation as it
requires real-time interactions between humans and systems, which is too
laborious and expensive. This issue is usually approached by utilizing the
interaction history to conduct offline evaluation. However, existing datasets
of user-item interactions are partially observed, leaving it unclear how and to
what extent the missing interactions will influence the evaluation. To answer
this question, we collect a fully-observed dataset from Kuaishou's online
environment, where almost all 1,411 users have been exposed to all 3,327 items.
To the best of our knowledge, this is the first real-world fully-observed data
with millions of user-item interactions.
With this unique dataset, we conduct a preliminary analysis of how the two
factors - data density and exposure bias - affect the evaluation results of
multi-round conversational recommendation. Our main discoveries are that the
performance ranking of different methods varies with the two factors, and this
effect can only be alleviated in certain cases by estimating missing
interactions for user simulation. This demonstrates the necessity of the
fully-observed dataset. We release the dataset and the pipeline implementation
for evaluation at https://kuairec.com
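
A back-of-the-envelope sketch of why full observation matters: interaction-matrix density is observed pairs divided by users × items. The MovieLens-1M figures below are commonly quoted numbers used purely for contrast and are not from this paper.

users, items = 1_411, 3_327
kuairec_pairs = users * items                  # fully observed: every pair rated
kuairec_density = kuairec_pairs / (users * items)
ml1m_density = 1_000_209 / (6_040 * 3_706)     # MovieLens-1M, for contrast
print(f"KuaiRec density: {kuairec_density:.1%}")    # 100.0%
print(f"MovieLens-1M density: {ml1m_density:.1%}")  # ~4.5%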
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 12:08:14 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 06:14:33 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Aug 2022 08:53:37 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Gao",
"Chongming",
""
],
[
"Li",
"Shijun",
""
],
[
"Lei",
"Wenqiang",
""
],
[
"Chen",
"Jiawei",
""
],
[
"Li",
"Biao",
""
],
[
"Jiang",
"Peng",
""
],
[
"He",
"Xiangnan",
""
],
[
"Mao",
"Jiaxin",
""
],
[
"Chua",
"Tat-Seng",
""
]
] |
new_dataset
| 0.983629 |
2202.11932
|
Rui Liu
|
Bowei He, Zhenting Zhao, Wenhao Luo, Rui Liu
|
Collective Conditioned Reflex: A Bio-Inspired Fast Emergency Reaction
Mechanism for Designing Safe Multi-Robot Systems
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A multi-robot system (MRS) is a group of coordinated robots designed to
cooperate with each other and accomplish given tasks. Due to the uncertainties
in operating environments, the system may encounter emergencies, such as
unobserved obstacles, moving vehicles, and extreme weather. Animal groups such
as bee colonies initiate collective emergency reaction behaviors such as
bypassing obstacles and avoiding predators, similar to a muscle-conditioned reflex, which organizes local muscles to avoid hazards as a first response without the delay of passing through the brain. Inspired by this, we develop a
similar collective conditioned reflex mechanism for multi-robot systems to
respond to emergencies. In this study, Collective Conditioned Reflex (CCR), a
bio-inspired emergency reaction mechanism, is developed based on animal
collective behavior analysis and multi-agent reinforcement learning (MARL). The
algorithm uses a physical model to determine if the robots are experiencing an
emergency; then, rewards for robots involved in the emergency are augmented
with corresponding heuristic rewards, which evaluate emergency magnitudes and
consequences and decide local robots' participation. CCR is validated on three
typical emergency scenarios: \textit{turbulence, strong wind, and hidden
obstacle}. Simulation results demonstrate that CCR improves robot teams'
emergency reaction capability with faster reaction speed and safer trajectory
adjustment compared with baseline methods.
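
A minimal sketch of the reward augmentation described above: when the physical model flags an emergency, robots judged to be involved receive a heuristic reward term scaled by the emergency's magnitude. The radius, weight, and functional form are illustrative assumptions, not the paper's exact heuristics.

def augmented_reward(base_reward, hazard_distance, hazard_severity,
                     involve_radius=2.0, weight=0.5):
    # robots outside the involvement radius keep their ordinary MARL reward
    if hazard_distance >= involve_radius:
        return base_reward
    # heuristic term grows with severity and proximity to the hazard
    heuristic = -hazard_severity * (involve_radius - hazard_distance)
    return base_reward + weight * heuristic

print(augmented_reward(1.0, hazard_distance=0.5, hazard_severity=2.0))  # penalized
print(augmented_reward(1.0, hazard_distance=5.0, hazard_severity=2.0))  # unchanged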
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 07:07:20 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 02:14:59 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"He",
"Bowei",
""
],
[
"Zhao",
"Zhenting",
""
],
[
"Luo",
"Wenhao",
""
],
[
"Liu",
"Rui",
""
]
] |
new_dataset
| 0.96468 |
2203.04814
|
Mohamed Ali Souibgui
|
Mohamed Ali Souibgui, Sanket Biswas, Andres Mafla, Ali Furkan Biten,
Alicia Forn\'es, Yousri Kessentini, Josep Llad\'os, Lluis Gomez, Dimosthenis
Karatzas
|
Text-DIAE: A Self-Supervised Degradation Invariant Autoencoders for Text
Recognition and Document Enhancement
|
Preprint
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a Text-Degradation Invariant Auto Encoder
(Text-DIAE), a self-supervised model designed to tackle two tasks, text
recognition (handwritten or scene-text) and document image enhancement. We
start by employing a transformer-based architecture that incorporates three
pretext tasks as learning objectives to be optimized during pre-training
without the usage of labeled data. Each of the pretext objectives is
specifically tailored for the final downstream tasks. We conduct several
ablation experiments that confirm the design choice of the selected pretext
tasks. Importantly, the proposed model does not exhibit limitations of previous
state-of-the-art methods based on contrastive losses, while at the same time
requiring substantially fewer data samples to converge. Finally, we demonstrate
that our method surpasses the state-of-the-art in existing supervised and
self-supervised settings in handwritten and scene text recognition and document
image enhancement. Our code and trained models will be made publicly available
at~\url{ http://Upon_Acceptance}.
|
[
{
"version": "v1",
"created": "Wed, 9 Mar 2022 15:44:36 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Mar 2022 17:39:02 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Mar 2022 15:12:56 GMT"
},
{
"version": "v4",
"created": "Thu, 18 Aug 2022 14:29:56 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Souibgui",
"Mohamed Ali",
""
],
[
"Biswas",
"Sanket",
""
],
[
"Mafla",
"Andres",
""
],
[
"Biten",
"Ali Furkan",
""
],
[
"Fornés",
"Alicia",
""
],
[
"Kessentini",
"Yousri",
""
],
[
"Lladós",
"Josep",
""
],
[
"Gomez",
"Lluis",
""
],
[
"Karatzas",
"Dimosthenis",
""
]
] |
new_dataset
| 0.999223 |
2204.01837
|
Sangho Shim
|
Sunil Chopra, Feng Qiu and Sangho Shim
|
Parallel Power System Restoration
|
30 pages, working paper
| null | null | null |
cs.CE cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Power system restoration is an essential activity for grid resilience, where
grid operators restart generators, re-establish transmission paths, and restore
loads after a blackout event. With a goal of restoring electric service in the
shortest time, the core decisions in restoration planning are to partition the
grid into sub-networks, each of which has an initial power source for
black-start (called sectionalization problem), and then restart all generators
in each network (called generator startup sequencing problem or GSS) as soon as
possible. Due to the complexity of each problem, the sectionalization and GSS
problems are usually solved separately, often resulting in a sub-optimal
solution. Our paper develops models and computational methods to solve the two
problems simultaneously. We first study the computational complexity of the GSS
problem and develop an efficient integer linear programming formulation. We
then integrate the GSS problem with the sectionalization problem and develop an
integer linear programming formulation for the parallel power system
restoration (PPSR) problem to find exact optimal solutions. To solve larger
systems, we then develop bounding approaches that find good upper and lower
bounds efficiently. Finally, to address computational challenges for very large
power grids, we develop a randomized approach to find a high-quality feasible
solution quickly. Our computational experiments demonstrate that the proposed
approaches are able to find good solutions for PPSR in up to 2000-bus systems.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 20:43:11 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 05:59:56 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Chopra",
"Sunil",
""
],
[
"Qiu",
"Feng",
""
],
[
"Shim",
"Sangho",
""
]
] |
new_dataset
| 0.980428 |
2204.03128
|
\c{C}a\u{g}atay Demiralp
|
James Gale and Max Seiden and Deepanshu Utkarsh and Jason Frantz and
Rob Woollen and \c{C}a\u{g}atay Demiralp
|
Sigma Workbook: A Spreadsheet for Cloud Data Warehouses
|
VLDB'22 Demonstrations
| null | null | null |
cs.DB cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud data warehouses (CDWs) bring large-scale data and compute power closer
to users in enterprises. However, existing tools for analyzing data in CDWs are
either limited in ad-hoc transformations or difficult to use for business
users. Here we introduce Sigma Workbook, a new interactive system that enables
business users to easily perform a visual analysis of data in CDWs at scale.
For this, Sigma Workbook provides an accessible spreadsheet-like interface for
analysis through direct manipulation. Sigma Workbook dynamically constructs
matching SQL queries from user interactions, building on the versatility and
expressivity of SQL. Constructed queries are directly executed on CDWs,
leveraging the superior characteristics of the new generation CDWs, including
scalability. We demonstrate Sigma Workbook through 3 real-life use cases --
cohort analysis, sessionization, and data augmentation -- and underline
Workbook's ease of use, scalability, and expressivity.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 23:55:22 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Jul 2022 16:04:27 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Aug 2022 04:21:46 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Gale",
"James",
""
],
[
"Seiden",
"Max",
""
],
[
"Utkarsh",
"Deepanshu",
""
],
[
"Frantz",
"Jason",
""
],
[
"Woollen",
"Rob",
""
],
[
"Demiralp",
"Çağatay",
""
]
] |
new_dataset
| 0.997259 |
2204.11511
|
Adrian Holzbock
|
Adrian Holzbock, Alexander Tsaregorodtsev, Youssef Dawoud, Klaus
Dietmayer, Vasileios Belagiannis
|
A Spatio-Temporal Multilayer Perceptron for Gesture Recognition
|
Accepted for presentation at the 33rd IEEE Intelligent Vehicles
Symposium (IV 2022), June 5 - June 9, 2022, Aachen, Germany
|
2022 IEEE Intelligent Vehicles Symposium (IV), June 5th - 9th,
2022, Aachen, Germany, pp. 1099-1106
|
10.1109/IV51971.2022.9827054
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gesture recognition is essential for the interaction of autonomous vehicles
with humans. While the current approaches focus on combining several modalities
like image features, keypoints, and bone vectors, we present a neural network architecture that delivers state-of-the-art results using only body skeleton
input data. We propose the spatio-temporal multilayer perceptron for gesture
recognition in the context of autonomous vehicles. Given 3D body poses over
time, we define temporal and spatial mixing operations to extract features in
both domains. Additionally, the importance of each time step is re-weighted
with Squeeze-and-Excitation layers. An extensive evaluation of the TCG and
Drive&Act datasets is provided to showcase the promising performance of our
approach. Furthermore, we deploy our model to our autonomous vehicle to show
its real-time capability and stable execution.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 08:42:47 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 11:48:28 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Holzbock",
"Adrian",
""
],
[
"Tsaregorodtsev",
"Alexander",
""
],
[
"Dawoud",
"Youssef",
""
],
[
"Dietmayer",
"Klaus",
""
],
[
"Belagiannis",
"Vasileios",
""
]
] |
new_dataset
| 0.997772 |
2205.10088
|
Hlynur Dav{\i}{\dh} Hlynsson
|
Hlynur D. Hlynsson, Steind\'or Ellertsson, J\'on F. Da{\dh}ason, Emil
L. Sigurdsson, Hrafn Loftsson
|
Semi-self-supervised Automated ICD Coding
|
Re-upload comment: added a baseline comparison as well as an analysis
of the features
| null | null | null |
cs.CL cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Clinical Text Notes (CTNs) contain physicians' reasoning process, written in
an unstructured free text format, as they examine and interview patients. In
recent years, several studies have been published that provide evidence for the
utility of machine learning for predicting doctors' diagnoses from CTNs, a task
known as ICD coding. Data annotation is time-consuming, particularly when a
degree of specialization is needed, as is the case for medical data. This paper
presents a method of augmenting a sparsely annotated dataset of Icelandic CTNs
with a machine-learned imputation in a semi-self-supervised manner. We train a
neural network on a small set of annotated CTNs and use it to extract clinical
features from a set of un-annotated CTNs. These clinical features consist of
answers to about a thousand potential questions that a physician might find the
answers to during a consultation of a patient. The features are then used to
train a classifier for the diagnosis of certain types of diseases. We report
the results of an evaluation of this data augmentation method over three tiers
of data availability to the physician. Our data augmentation method shows a
significant positive effect which is diminished when clinical features from the
examination of the patient and diagnostics are made available. We recommend our
method for augmenting scarce datasets for systems that take decisions based on
clinical features that do not include examinations or tests.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 11:12:54 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 10:34:15 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Hlynsson",
"Hlynur D.",
""
],
[
"Ellertsson",
"Steindór",
""
],
[
"Daðason",
"Jón F.",
""
],
[
"Sigurdsson",
"Emil L.",
""
],
[
"Loftsson",
"Hrafn",
""
]
] |
new_dataset
| 0.990638 |
2206.07923
|
Zhixuan Zhou
|
Zhixuan Zhou, Zixin Wang, Franziska Zimmer
|
Anonymous Expression in an Online Community for Women in China
|
56th Hawaii International Conference on System Sciences (HICSS)
| null | null | null |
cs.SI cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gender issues faced by women can range from workplace harassment to domestic
violence. While publicly disclosing these issues on social media can be hard,
some may be inclined to express themselves anonymously. Using qualitative
content analysis, we approached such an anonymous female community on Chinese
social media where discussion of gender issues takes place. By observing
anonymous experiences contributed by female users and made publicly available
by an influencer, we identified 20 commonly discussed issues, with cheating
partners, controlling parents, and age anxiety taking the lead. The results are placed
into context with Chinese culture and expectations about gender. By describing
the results in context with the social challenges faced by women in China, and
understanding how these issues are anonymously and openly discussed by them, we
aim to motivate more policies and platform designs to accommodate the needs of
the affected population.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 04:56:56 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 12:47:53 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Zhou",
"Zhixuan",
""
],
[
"Wang",
"Zixin",
""
],
[
"Zimmer",
"Franziska",
""
]
] |
new_dataset
| 0.999047 |
2206.14341
|
Jared Mathews
|
Jared Mathews, Prosenjit Chatterjee, Shankar Banik
|
CoAP-DoS: An IoT Network Intrusion Dataset
|
6 pages, 8 figures, Publication Title: 2022 6th International
Conference on Cryptography, Security and Privacy (CSP), eCF Paper Id:
1641864704381, accepted for publishing, not yet published
| null |
10.1109/CSP55486.2022.00025
| null |
cs.CR cs.AI cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The need for secure Internet of Things (IoT) devices is growing as IoT
devices are becoming more integrated into vital networks. Many systems rely on
these devices to remain available and provide reliable service.
Denial-of-service attacks against IoT devices are a real threat because these
low-power devices are highly susceptible to them. Machine
learning enabled network intrusion detection systems are effective at
identifying new threats, but they require a large amount of data to work well.
There are many network traffic data sets but very few that focus on IoT network
traffic. Within the IoT network data sets, there is a lack of CoAP
denial-of-service data. We propose a novel data set to cover this gap. We develop a new
data set by collecting network traffic from real CoAP denial of service attacks
and compare the data on multiple different machine learning classifiers. We
show that the data set is effective on many classifiers.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 00:50:15 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Mathews",
"Jared",
""
],
[
"Chatterjee",
"Prosenjit",
""
],
[
"Banik",
"Shankar",
""
]
] |
new_dataset
| 0.998831 |
2208.05545
|
Jackson Trager
|
Jackson Trager, Alireza S. Ziabari, Aida Mostafazadeh Davani, Preni
Golazizian, Farzan Karimi-Malekabadi, Ali Omrani, Zhihe Li, Brendan Kennedy,
Nils Karl Reimer, Melissa Reyes, Kelsey Cheng, Mellow Wei, Christina
Merrifield, Arta Khosravi, Evans Alvarez, Morteza Dehghani
|
The Moral Foundations Reddit Corpus
| null | null | null | null |
cs.CL cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Moral framing and sentiment can affect a variety of online and offline
behaviors, including donation, pro-environmental action, political engagement,
and even participation in violent protests. Various computational methods in
Natural Language Processing (NLP) have been used to detect moral sentiment from
textual data, but in order to achieve better performances in such subjective
tasks, large sets of hand-annotated training data are needed. Previous corpora
annotated for moral sentiment have proven valuable, and have generated new
insights both within NLP and across the social sciences, but have been limited
to Twitter. To facilitate improving our understanding of the role of moral
rhetoric, we present the Moral Foundations Reddit Corpus, a collection of
16,123 Reddit comments that have been curated from 12 distinct subreddits,
hand-annotated by at least three trained annotators for 8 categories of moral
sentiment (i.e., Care, Proportionality, Equality, Purity, Authority, Loyalty,
Thin Morality, Implicit/Explicit Morality) based on the updated Moral
Foundations Theory (MFT) framework. We use a range of methodologies to provide
baseline moral-sentiment classification results for this new corpus, e.g.,
cross-domain classification and knowledge transfer.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 20:08:10 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Aug 2022 03:21:14 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Trager",
"Jackson",
""
],
[
"Ziabari",
"Alireza S.",
""
],
[
"Davani",
"Aida Mostafazadeh",
""
],
[
"Golazizian",
"Preni",
""
],
[
"Karimi-Malekabadi",
"Farzan",
""
],
[
"Omrani",
"Ali",
""
],
[
"Li",
"Zhihe",
""
],
[
"Kennedy",
"Brendan",
""
],
[
"Reimer",
"Nils Karl",
""
],
[
"Reyes",
"Melissa",
""
],
[
"Cheng",
"Kelsey",
""
],
[
"Wei",
"Mellow",
""
],
[
"Merrifield",
"Christina",
""
],
[
"Khosravi",
"Arta",
""
],
[
"Alvarez",
"Evans",
""
],
[
"Dehghani",
"Morteza",
""
]
] |
new_dataset
| 0.981065 |
2208.08491
|
Huaishu Peng
|
Anup Sathya, Jiasheng Li, Tauhidur Rahman, Ge Gao, Huaishu Peng
|
Calico: Relocatable On-cloth Wearables with Fast, Reliable, and Precise
Locomotion
| null |
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 6, No.
3, Article 136. Publication date: September 2022
|
10.1145/3550323
| null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore Calico, a miniature relocatable wearable system with fast and
precise locomotion for on-body interaction, actuation and sensing. Calico
consists of a two-wheel robot and an on-cloth track mechanism or "railway," on
which the robot travels. The robot is self-contained, small in size, and has
additional sensor expansion options. The track system allows the robot to move
along the user's body and reach any predetermined location. It also includes
rotational switches to enable complex routing options when diverging tracks are
presented. We report the design and implementation of Calico with a series of
technical evaluations for system performance. We then present a few application
scenarios and user studies to understand the potential of Calico as a dance
trainer, and we explore the qualitative perception of our scenarios to inform
future research in this space.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 19:21:00 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Sathya",
"Anup",
""
],
[
"Li",
"Jiasheng",
""
],
[
"Rahman",
"Tauhidur",
""
],
[
"Gao",
"Ge",
""
],
[
"Peng",
"Huaishu",
""
]
] |
new_dataset
| 0.999644 |
2208.08524
|
Yisroel Mirsky Dr.
|
Yisroel Mirsky
|
DF-Captcha: A Deepfake Captcha for Preventing Fake Calls
|
A draft academic paper based on and protected by the provisional
patent submitted January 1st 2022 under provisional Number 63/302,086. arXiv
admin note: text overlap with arXiv:2004.11138
| null | null | null |
cs.CR cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social engineering (SE) is a form of deception that aims to trick people into
giving access to data, information, networks and even money. For decades SE has
been a key method for attackers to gain access to an organization, virtually
skipping all lines of defense. Attackers also regularly use SE to scam innocent
people by making threatening phone calls which impersonate an authority or by
sending infected emails which look like they have been sent from a loved one.
SE attacks will likely remain a top attack vector for criminals because humans
are the weakest link in cyber security.
Unfortunately, the threat will only get worse now that a new technology
called deepfakes has arrived. A deepfake is believable media (e.g., videos)
created by an AI. Although the technology has mostly been used to swap the
faces of celebrities, it can also be used to `puppet' different personas.
Recently, researchers have shown how this technology can be deployed in
real-time to clone someone's voice in a phone call or reenact a face in a video
call. Given that any novice user can download this technology to use it, it is
no surprise that criminals have already begun to monetize it to perpetrate
their SE attacks.
In this paper, we propose a lightweight application which can protect
organizations and individuals from deepfake SE attacks. Through a challenge and
response approach, we leverage the technical and theoretical limitations of
deepfake technologies to expose the attacker. Existing defence solutions are
too heavy as end-point solutions and can be evaded by a dynamic attacker. In
contrast, our approach is lightweight and breaks the reactive arms race,
putting the attacker at a disadvantage.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 20:40:54 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Mirsky",
"Yisroel",
""
]
] |
new_dataset
| 0.998171 |
2208.08570
|
Chun-Hao Liu
|
Chun-Hao Liu and Burhaneddin Yaman
|
Object Detection for Autonomous Dozers
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new type of autonomous vehicle - an autonomous dozer that is
expected to complete construction site tasks in an efficient, robust, and safe
manner. To better handle the path planning for the dozer and ensure
construction site safety, object detection plays one of the most critical
components among perception tasks. In this work, we first collect the
construction site data by driving around our dozers. Then we analyze the data
thoroughly to understand its distribution. Finally, two well-known object
detection models are trained, and their performances are benchmarked with a
wide range of training strategies and hyperparameters.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 23:46:14 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Liu",
"Chun-Hao",
""
],
[
"Yaman",
"Burhaneddin",
""
]
] |
new_dataset
| 0.997222 |
2208.08578
|
Li Xu
|
Li Xu and Cuiling Fan and Dongchun Han
|
Near-MDS Codes from Maximal Arcs in PG$(2,q)$
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Singleton defect of an $[n,k,d]$ linear code ${\cal C}$ is defined as
$s({\cal C})=n-k+1-d$. Codes with $s({\cal C})=0$ are called maximum distance
separable (MDS) codes, and codes with $s({\cal C})=s({\cal C}^{\bot})=1$ are
called near maximum distance separable (NMDS) codes. Both MDS codes and NMDS
codes have good representations in finite projective geometry.
MDS codes over $F_q$ with length $n$ and $n$-arcs in PG$(k-1,q)$ are
equivalent objects. When $k=3$, NMDS codes of length $n$ are equivalent to
$(n,3)$-arcs in PG$(2,q)$. In this paper, we deal with the NMDS codes with
dimension 3. By adding some suitable projective points in maximal arcs of
PG$(2,q)$, we can obtain two classes of $(q+5,3)$-arcs (or equivalently
$[q+5,3,q+2]$ NMDS codes) for any prime power $q$.
We also determine the exact weight distribution
and the locality of such NMDS codes and their duals. It turns out that the
resultant NMDS codes and their duals are both distance-optimal and
dimension-optimal locally recoverable codes.
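For concreteness, the defect conditions above reduce to a one-line check (an
illustrative helper; for a dimension-3 NMDS code the dual defect forces dual
minimum distance 3):

```python
def singleton_defect(n, k, d):
    # s(C) = n - k + 1 - d; zero exactly for MDS codes.
    return n - k + 1 - d

def is_nmds(n, k, d, d_dual):
    # NMDS: the code and its dual both have Singleton defect 1.
    return singleton_defect(n, k, d) == 1 and singleton_defect(n, n - k, d_dual) == 1

q = 4  # any prime power
print(is_nmds(q + 5, 3, q + 2, d_dual=3))  # True for the [q+5, 3, q+2] codes
```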
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 00:47:25 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Xu",
"Li",
""
],
[
"Fan",
"Cuiling",
""
],
[
"Han",
"Dongchun",
""
]
] |
new_dataset
| 0.99727 |
2208.08621
|
Yu-Huan Wu
|
Yu-Huan Wu, Da Zhang, Le Zhang, Xin Zhan, Dengxin Dai, Yun Liu, and
Ming-Ming Cheng
|
Ret3D: Rethinking Object Relations for Efficient 3D Object Detection in
Driving Scenes
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Current efficient LiDAR-based detection frameworks fall short in exploiting
object relations, which are naturally present in both the spatial and temporal domains.
To this end, we introduce a simple, efficient, and effective two-stage
detector, termed as Ret3D. At the core of Ret3D is the utilization of novel
intra-frame and inter-frame relation modules to capture the spatial and
temporal relations accordingly. More specifically, the intra-frame relation
module (IntraRM) encapsulates the intra-frame objects into a sparse graph and
thus allows us to refine the object features through efficient message passing.
On the other hand, the inter-frame relation module (InterRM) densely connects each
object in its corresponding tracked sequences dynamically, and leverages such
temporal information to further enhance its representations efficiently through
a lightweight transformer network. We instantiate our novel designs of IntraRM
and InterRM with general center-based or anchor-based detectors and evaluate
them on Waymo Open Dataset (WOD). With negligible extra overhead, Ret3D
achieves the state-of-the-art performance, being 5.5% and 3.2% higher than the
recent competitor in terms of the LEVEL 1 and LEVEL 2 mAPH metrics on vehicle
detection, respectively.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 03:48:58 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Wu",
"Yu-Huan",
""
],
[
"Zhang",
"Da",
""
],
[
"Zhang",
"Le",
""
],
[
"Zhan",
"Xin",
""
],
[
"Dai",
"Dengxin",
""
],
[
"Liu",
"Yun",
""
],
[
"Cheng",
"Ming-Ming",
""
]
] |
new_dataset
| 0.987038 |
2208.08667
|
Nan Ming
|
Nan Ming, Yi Feng, Rui Fan
|
SDA-SNE: Spatial Discontinuity-Aware Surface Normal Estimation via
Multi-Directional Dynamic Programming
|
3DV 2022 oral paper
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The state-of-the-art (SoTA) surface normal estimators (SNEs) generally
translate depth images into surface normal maps in an end-to-end fashion.
Although such SNEs have greatly minimized the trade-off between efficiency and
accuracy, their performance on spatial discontinuities, e.g., edges and ridges,
is still unsatisfactory. To address this issue, this paper first introduces a
novel multi-directional dynamic programming strategy to adaptively determine
inliers (co-planar 3D points) by minimizing a (path) smoothness energy. The
depth gradients can then be refined iteratively using a novel recursive
polynomial interpolation algorithm, which helps yield more reasonable surface
normals. Our introduced spatial discontinuity-aware (SDA) depth gradient
refinement strategy is compatible with any depth-to-normal SNEs. Our proposed
SDA-SNE achieves much greater performance than all other SoTA approaches,
especially near/on spatial discontinuities. We further evaluate the performance
of SDA-SNE with respect to different iterations, and the results suggest that
it converges fast after only a few iterations. This ensures its high efficiency
in various robotics and computer vision applications requiring real-time
performance. Additional experiments on the datasets with different extents of
random noise further validate our SDA-SNE's robustness and environmental
adaptability. Our source code, demo video, and supplementary material are
publicly available at mias.group/SDA-SNE.
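For context, the plain depth-to-normal translation that such estimators build
on fits in a few lines (an orthographic simplification with finite differences;
real pipelines use camera intrinsics, and SDA-SNE's contribution is refining
the depth gradients near discontinuities before this step):

```python
import numpy as np

def depth_to_normals(depth):
    # depth: (H, W). Treating the surface as z = depth(x, y), an unnormalized
    # normal is (-dz/dx, -dz/dy, 1), normalized per pixel.
    dzdy, dzdx = np.gradient(depth.astype(float))  # axis 0 = y, axis 1 = x
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth, dtype=float)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

plane = np.fromfunction(lambda y, x: 0.1 * x, (4, 4))  # plane tilted along x
print(depth_to_normals(plane)[2, 2])  # identical normal across the plane
```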
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 06:57:54 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Ming",
"Nan",
""
],
[
"Feng",
"Yi",
""
],
[
"Fan",
"Rui",
""
]
] |
new_dataset
| 0.997472 |
2208.08706
|
Marco Pasini
|
Marco Pasini, Jan Schl\"uter
|
Musika! Fast Infinite Waveform Music Generation
|
Accepted at ISMIR 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Fast and user-controllable music generation could enable novel ways of
composing or performing music. However, state-of-the-art music generation
systems require large amounts of data and computational resources for training,
and are slow at inference. This makes them impractical for real-time
interactive use. In this work, we introduce Musika, a music generation system
that can be trained on hundreds of hours of music using a single consumer GPU,
and that allows for much faster than real-time generation of music of arbitrary
length on a consumer CPU. We achieve this by first learning a compact
invertible representation of spectrogram magnitudes and phases with adversarial
autoencoders, then training a Generative Adversarial Network (GAN) on this
representation for a particular music domain. A latent coordinate system
enables generating arbitrarily long sequences of excerpts in parallel, while a
global context vector allows the music to remain stylistically coherent through
time. We perform quantitative evaluations to assess the quality of the
generated samples and showcase options for user control in piano and techno
music generation. We release the source code and pretrained autoencoder weights
at github.com/marcoppasini/musika, such that a GAN can be trained on a new
music domain with a single GPU in a matter of hours.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 08:31:15 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Pasini",
"Marco",
""
],
[
"Schlüter",
"Jan",
""
]
] |
new_dataset
| 0.997017 |
2208.08709
|
Johannes Blum
|
Johannes Blum and Sabine Storandt
|
Customizable Hub Labeling: Properties and Algorithms
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hub Labeling (HL) is one of the state-of-the-art preprocessing-based
techniques for route planning in road networks. It is a special incarnation of
distance labeling, and it is well-studied in both theory and practice. The core
concept of HL is to associate a label with each vertex, which consists of a
subset of all vertices and respective shortest path information, such that the
shortest path distance between any two vertices can be derived from considering
the intersection of their labels. HL provides excellent query times but
requires a time-consuming preprocessing phase. Therefore, in case of edge cost
changes, rerunning the whole preprocessing is not viable. Inspired by the
concept of Customizable Route Planning, we hence propose in this paper a
Customizable Hub Labeling variant for which the edge costs in the network do
not need to be known at construction time. These labels can then be used with
any edge costs after conducting a so-called customization phase. We study the
theoretical properties of Customizable Hub Labelings, provide an
$\mathcal{O}(\log^2 n)$-approximation algorithm for the average label size, and
propose efficient customization algorithms.
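The HL query itself is a label intersection; a minimal sketch with illustrative
data (not the paper's customizable data structures):

```python
def hl_distance(label_u, label_v):
    # Each label maps hub vertex -> shortest-path distance. The cover property
    # guarantees some shortest u-v path passes through a common hub.
    common = label_u.keys() & label_v.keys()
    return min(label_u[h] + label_v[h] for h in common) if common else float("inf")

label_u = {"h1": 2.0, "h2": 5.0}
label_v = {"h2": 1.0, "h3": 4.0}
print(hl_distance(label_u, label_v))  # 6.0, via the shared hub h2
```

In the customizable setting, the hub sets are fixed at construction time and
only the stored distances are (re)computed during customization once the edge
costs are known.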
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 08:49:48 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Blum",
"Johannes",
""
],
[
"Storandt",
"Sabine",
""
]
] |
new_dataset
| 0.989076 |
2208.08745
|
Alsharif Abuadbba Dr
|
Mariya Shmalko, Alsharif Abuadbba, Raj Gaire, Tingmin Wu, Hye-Young
Paik, Surya Nepal
|
Profiler: Profile-Based Model to Detect Phishing Emails
|
12 pages
|
42nd IEEE International Conference on Distributed Computing
Systems 2022
| null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Email phishing has become more prevalent and more sophisticated over
time. To combat this rise, many machine learning (ML) algorithms for detecting
phishing emails have been developed. However, due to the limited email data
sets on which these algorithms train, they are not adept at recognising varied
attacks and, thus, suffer from concept drift; attackers can introduce small
changes in the statistical characteristics of their emails or websites to
successfully bypass detection. Over time, a gap develops between the reported
accuracy from literature and the algorithm's actual effectiveness in the real
world. This manifests itself in frequent false positive and false negative
classifications.
To this end, we propose a multidimensional risk assessment of emails to
reduce the feasibility of an attacker adapting their email and avoiding
detection. This horizontal approach to email phishing detection profiles an
incoming email on its main features. We develop a risk assessment framework
that includes three models which analyse an email's (1) threat level, (2)
cognitive manipulation, and (3) email type, which we combine to return the
final risk assessment score. The Profiler does not require large data sets to
train on to be effective and its analysis of varied email features reduces the
impact of concept drift. Our Profiler can be used in conjunction with ML
approaches, to reduce their misclassifications or as a labeller for large email
data sets in the training stage.
We evaluate the efficacy of the Profiler against a machine learning ensemble
using state-of-the-art ML algorithms on a data set of 9000 legitimate and 900
phishing emails from a large Australian research organisation. Our results
indicate that the Profiler mitigates the impact of concept drift, and
delivers 30% fewer false positive and 25% fewer false negative email
classifications than the ML ensemble's approach.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 10:01:55 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Shmalko",
"Mariya",
""
],
[
"Abuadbba",
"Alsharif",
""
],
[
"Gaire",
"Raj",
""
],
[
"Wu",
"Tingmin",
""
],
[
"Paik",
"Hye-Young",
""
],
[
"Nepal",
"Surya",
""
]
] |
new_dataset
| 0.972309 |
2208.08760
|
Megha Rani R
|
Ms. Megha Rani R, Roshan R Acharya, Ramkishan, Ranjith K, Rakshith Ay
Gowda
|
Blockchain based digital vaccine passport
| null | null | null | null |
cs.CR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Travel has been challenging recently since different nations have implemented
varied immigration and travel policies. For the time being, immigration
officials want proof of each person's immunity to the virus. A vaccine passport
serves as evidence that a person has tested negative for or is immune to a
particular virus. In terms of COVID-19, those who hold a vaccine passport will
be permitted entry into other nations as long as they can provide proof that
they have COVID-19 antibodies from prior infections or from full COVID-19
immunizations. To reduce time and effort spent managing data, the vaccination
passport system has been digitalized. The process of contact tracing may be
facilitated by digitization. Blockchain technology, which is already in use,
has demonstrated its security and privacy in data exchange systems among
Bitcoin users. The Digital Vaccination Passport scheme can use
Blockchain technology. The end result would be a decentralized, traceable,
transparent, reliable, auditable, secure, and trustworthy solution based on the
Ethereum blockchain that would allow tracking of administered vaccines and
disease history.
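As a toy illustration of the tamper evidence a chain of records provides (a
plain Python hash chain, not the Ethereum contract the paper envisions; all
field names are assumptions):

```python
import hashlib, json, time

def add_record(chain, payload):
    # Each record commits to its predecessor's hash, so altering any earlier
    # vaccination entry invalidates every hash that follows it.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev": prev, "ts": time.time()}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

chain = add_record([], {"holder": "id-123", "vaccine": "X", "dose": 1})
chain = add_record(chain, {"holder": "id-123", "vaccine": "X", "dose": 2})
```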
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 10:42:25 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"R",
"Ms. Megha Rani",
""
],
[
"Acharya",
"Roshan R",
""
],
[
"Ramkishan",
"",
""
],
[
"K",
"Ranjith",
""
],
[
"Gowda",
"Rakshith Ay",
""
]
] |
new_dataset
| 0.998333 |
2208.08806
|
Florin Manea
|
Joel D. Day, Adrian Kr\"oger, Mitja Kulczynski, Florin Manea, Dirk
Nowotka and Danny B{\o}gsted Poulsen
|
A Generic Information Extraction System for String Constraints
| null | null | null | null |
cs.LO cs.DB cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
String constraint solving, and the underlying theory of word equations, are
highly interesting research topics both for practitioners and theoreticians
working in the wide area of satisfiability modulo theories. As string
constraint solving algorithms, a.k.a. string solvers, gained a more prominent
role in the formal analysis of string-heavy programs, especially in connection
to symbolic code execution and security protocol verification, we can witness
an ever-growing number of benchmarks collecting string solving instances from
real-world applications as well as an ever-growing need for more efficient and
reliable solvers, especially for the aforementioned real-world instances. Thus,
it seems that the string solving area (and the developers, theoreticians, and
end-users active in it) could greatly benefit from a better understanding and
processing of the existing string solving benchmarks. In this context, we
propose SMTQUERY: an SMT-LIB benchmark analysis tool for string constraints.
SMTQUERY is implemented in Python 3, and offers a collection of analysis and
information extraction tools for a comprehensive database of string benchmarks
(presented in SMT-LIB format), based on an SQL-centred language called QLANG.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 12:56:12 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Day",
"Joel D.",
""
],
[
"Kröger",
"Adrian",
""
],
[
"Kulczynski",
"Mitja",
""
],
[
"Manea",
"Florin",
""
],
[
"Nowotka",
"Dirk",
""
],
[
"Poulsen",
"Danny Bøgsted",
""
]
] |
new_dataset
| 0.961883 |
2208.08836
|
Aline Sindel
|
Aline Sindel, Andreas Maier and Vincent Christlein
|
A Multi-modal Registration and Visualization Software Tool for Artworks
using CraquelureNet
|
14 pages, 9 figures, 1 table, accepted to PatReCH 2022 Workshop at
ICPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For art investigations of paintings, multiple imaging technologies, such as
visual light photography, infrared reflectography, ultraviolet fluorescence
photography, and x-radiography are often used. For a pixel-wise comparison, the
multi-modal images have to be registered. We present a registration and
visualization software tool, that embeds a convolutional neural network to
extract cross-modal features of the crack structures in historical paintings
for automatic registration. The graphical user interface processes the user's
input to configure the registration parameters and to interactively adapt the
image views with the registered pair and image overlays, such as by individual
or synchronized zooming or movement of the views. In the evaluation, we
qualitatively and quantitatively show the effectiveness of our software tool in
terms of registration performance and short inference time on multi-modal
paintings and its transferability by applying our method to historical prints.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 13:57:37 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Sindel",
"Aline",
""
],
[
"Maier",
"Andreas",
""
],
[
"Christlein",
"Vincent",
""
]
] |
new_dataset
| 0.958563 |
2208.08913
|
Ian Pratt-Hartmann
|
Ian Pratt-Hartmann
|
Walking on Words
| null | null | null | null |
cs.DM cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Take any word over some alphabet. If it is non-empty, go to any position and
print out the letter being scanned. Now repeat the following any number of
times (possibly zero): either stay at the current letter, or move one letter
leftwards (if possible) or move one letter rightwards (if possible); then print
out the letter being scanned. In effect, we are going for a walk on the input
word. Let u be the infix of the input word comprising the visited positions,
and w the word printed out (empty if the input word is). Since any unvisited
prefix or suffix of the input word cannot influence w, we may as well discard
them, and say that u generates w. We ask: given a word w, what words u generate
it? The answer is surprising. Call u a primitive generator of w if u generates
w and is not generated by any word shorter than u. We show that, excepting some
degenerate cases, every word has precisely two primitive generators.
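Whether u generates w can be checked by a small dynamic program: since a walk
moves by at most one position per step, the set of visited positions is always
an interval, which keeps the state space tiny (a brute-force sketch, not the
paper's characterization):

```python
def generates(u, w):
    # Does some walk on u that visits all of u print out exactly w?
    if not u or not w:
        return not u and not w  # the empty word generates only the empty word
    # state: (current position, leftmost visited, rightmost visited)
    states = {(i, i, i) for i, c in enumerate(u) if c == w[0]}
    for ch in w[1:]:
        nxt = set()
        for pos, lo, hi in states:
            for j in (pos - 1, pos, pos + 1):  # move left, stay, or move right
                if 0 <= j < len(u) and u[j] == ch:
                    nxt.add((j, min(lo, j), max(hi, j)))
        states = nxt
    return any(lo == 0 and hi == len(u) - 1 for _, lo, hi in states)

# "ab" generates "abba": start on a, step right to b, stay on b, step back to a.
print(generates("ab", "abba"))  # True
print(generates("a", "abba"))   # False
```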
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 15:29:11 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Pratt-Hartmann",
"Ian",
""
]
] |
new_dataset
| 0.984278 |
2208.08952
|
Fangquan Lin
|
Fangquan Lin, Wei Jiang, Hanwei Zhang, Cheng Yang
|
KDD CUP 2022 Wind Power Forecasting Team 88VIP Solution
|
https://aistudio.baidu.com/aistudio/competition/detail/152/0/introduction
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
KDD CUP 2022 proposes a time-series forecasting task on a spatial dynamic wind
power dataset, in which the participants are required to predict the future
generation given the historical context factors. The evaluation metrics are
RMSE and MAE. This paper describes the solution of Team 88VIP, which mainly
comprises two types of models: a gradient boosting decision tree to memorize
the basic data patterns and a recurrent neural network to capture the deep and
latent probabilistic transitions. Ensembling these models helps tackle the
fluctuation of wind power, and training submodels targets the distinct
properties of the heterogeneous forecasting timescales, from minutes to days.
In addition, feature engineering, imputation techniques and
the design of offline evaluation are also described in detail. The proposed
solution achieves an overall online score of -45.213 in Phase 3.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 16:46:50 GMT"
}
] | 2022-08-19T00:00:00 |
[
[
"Lin",
"Fangquan",
""
],
[
"Jiang",
"Wei",
""
],
[
"Zhang",
"Hanwei",
""
],
[
"Yang",
"Cheng",
""
]
] |
new_dataset
| 0.997284 |
2004.03549
|
Shengkai Li
|
Shengkai Li, Yasemin Ozkan Aydin, Charles Xiao, Gabriella Small,
Hussain N. Gynai, Gongjie Li, Jennifer M. Rieser, Pablo Laguna, Daniel I.
Goldman
|
Field-mediated locomotor dynamics on highly deformable surfaces
| null |
Proceedings of the National Academy of Sciences 119 (30),
e2113912119, 2022
|
10.1073/pnas.2113912119
| null |
cs.RO gr-qc
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many systems motion occurs on deformed and deformable surfaces, setting up
the possibility for dynamical interactions solely mediated by the coupling of
the entities with their environment. Here we study the "two-body" dynamics of
robot locomotion on a highly deformable spandex membrane in two scenarios: one
in which a robot orbits a large central depression and the other where the two
robots affect each other's motion solely through mutual environmental
deformations. Inspired by the resemblance of the orbits of the single robot
with those of general relativistic orbits around black holes, we recast the
vehicle plus membrane dynamics in physical space into the geodesic motion of a
"test particle" in a fiducial curved space-time and demonstrate how this
framework facilitates understanding the observed dynamics. The two-robot
problem also exhibits a resemblance with Einstein's general relativistic view
of gravity, which in the words of Wheeler: "spacetime tells matter how to move;
matter tells spacetime how to curve." We generalize the mapping in this case to
include a reciprocal coupling that translates into robotic curvature-based
control schemes which modify interaction (promoting avoidance or aggregation)
without long-range sensing. Our work provides a starting point for developing a
mechanical analog gravity system, as well as a framework that can
provide insights into active matter in deformable environments and robot
exploration in complex landscapes.
|
[
{
"version": "v1",
"created": "Tue, 7 Apr 2020 17:19:00 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Oct 2020 17:57:43 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Aug 2021 17:32:27 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Li",
"Shengkai",
""
],
[
"Aydin",
"Yasemin Ozkan",
""
],
[
"Xiao",
"Charles",
""
],
[
"Small",
"Gabriella",
""
],
[
"Gynai",
"Hussain N.",
""
],
[
"Li",
"Gongjie",
""
],
[
"Rieser",
"Jennifer M.",
""
],
[
"Laguna",
"Pablo",
""
],
[
"Goldman",
"Daniel I.",
""
]
] |
new_dataset
| 0.987767 |
2006.00165
|
Ashraf Tantawy
|
Ashraf Tantawy, Sherif Abdelwahed, and Abdelkarim Erradi
|
Cyber LOPA: An Integrated Approach for the Design of Dependable and
Secure Cyber Physical Systems
|
Preprint version of the published paper
|
IEEE Transactions on Reliability, VOL. 71, NO. 2, JUNE 2022
|
10.1109/TR.2022.3163652
| null |
cs.CR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safety risk assessment is an essential process to ensure a dependable
Cyber-Physical System (CPS) design. Traditional risk assessment considers only
physical failures. For modern CPS, failures caused by cyber attacks are on the
rise. The focus of the latest research efforts is on safety-security lifecycle
integration and the expansion of modeling formalisms for risk assessment to
incorporate security failures. The interaction between safety and security
lifecycles and its impact on the overall system design, as well as the
reliability loss resulting from ignoring security failures are some of the
overlooked research questions. This paper addresses these research questions by
presenting a new safety design method named Cyber Layer Of Protection Analysis
(CLOPA) that extends existing LOPA framework to include failures caused by
cyber attacks. The proposed method provides a rigorous mathematical formulation
that expresses quantitatively the trade-off between designing a highly-reliable
versus a highly-secure CPS. We further propose a co-design lifecycle process
that integrates the safety and security risk assessment processes. We evaluate
the proposed CLOPA approach and the integrated lifecycle on a practical case
study of a process reactor controlled by an industrial control testbed, and
provide a comparison between the proposed CLOPA and current LOPA risk
assessment practice.
|
[
{
"version": "v1",
"created": "Sat, 30 May 2020 03:53:18 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Jun 2020 18:53:26 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Jul 2021 12:32:14 GMT"
},
{
"version": "v4",
"created": "Wed, 17 Aug 2022 16:05:51 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Tantawy",
"Ashraf",
""
],
[
"Abdelwahed",
"Sherif",
""
],
[
"Erradi",
"Abdelkarim",
""
]
] |
new_dataset
| 0.97937 |
2103.07298
|
Francesco Verdoja
|
Krishnananda Prabhu Sivananda, Francesco Verdoja, Ville Kyrki
|
Augmented Environment Representations with Complete Object Models
|
Accepted for publication in the 31st IEEE International Conference on
Robot & Human Interactive Communication (RO-MAN 2022)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While 2D occupancy maps commonly used in mobile robotics enable safe
navigation in indoor environments, representing 3D geometry and semantic
environment information is required for robots to understand and interact with
their environment and its inhabitants. Semantic information is
crucial in effective interpretation of the meanings humans attribute to
different parts of a space, while 3D geometry is important for safety and
high-level understanding. We propose a pipeline that can generate a multi-layer
representation of indoor environments for robotic applications. The proposed
representation includes 3D metric-semantic layers, a 2D occupancy layer, and an
object instance layer where known objects are replaced with an approximate
model obtained through a novel model-matching approach. The metric-semantic
layer and the object instance layer are combined to form an augmented
representation of the environment. Experiments show that the proposed shape
matching method outperforms a state-of-the-art deep learning method when tasked
to complete unseen parts of objects in the scene. The pipeline performance
translates well from simulation to real world as shown by F1-score analysis,
with semantic segmentation accuracy using Mask R-CNN acting as the major
bottleneck. Finally, we also demonstrate on a real robotic platform how the
multi-layer map can be used to improve navigation safety.
|
[
{
"version": "v1",
"created": "Fri, 12 Mar 2021 14:14:45 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2022 11:57:33 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Sivananda",
"Krishnananda Prabhu",
""
],
[
"Verdoja",
"Francesco",
""
],
[
"Kyrki",
"Ville",
""
]
] |
new_dataset
| 0.996278 |
2110.12340
|
Yanan Guo
|
Yanan Guo, Andrew Zigerelli, Youtao Zhang, Jun Yang
|
Adversarial Prefetch: New Cross-Core Cache Side Channel Attacks
|
camera-ready for IEEE S&P 2022
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Modern x86 processors have many prefetch instructions that can be used by
programmers to boost performance. However, these instructions may also cause
security problems. In particular, we found that on Intel processors, there are
two security flaws in the implementation of PREFETCHW, an instruction for
accelerating future writes. First, this instruction can execute on data with
read-only permission. Second, the execution time of this instruction leaks the
current coherence state of the target data.
Based on these two design issues, we build two cross-core private cache
attacks that work with both inclusive and non-inclusive LLCs, named
Prefetch+Reload and Prefetch+Prefetch. We demonstrate the significance of our
attacks in different scenarios. First, in the covert channel case,
Prefetch+Reload and Prefetch+Prefetch achieve 782 KB/s and 822 KB/s channel
capacities, when using only one shared cache line between the sender and
receiver, the largest-to-date single-line capacities for CPU cache covert
channels. Further, in the side channel case, our attacks can monitor the access
pattern of the victim on the same processor, with almost zero error rate. We
show that they can be used to leak private information of real-world
applications such as cryptographic keys. Finally, our attacks can be used in
transient execution attacks in order to leak more secrets within the transient
window than prior work. From the experimental results, our attacks allow
leaking about 2 times as many secret bytes, compared to Flush+Reload, which is
widely used in transient execution attacks.
|
[
{
"version": "v1",
"created": "Sun, 24 Oct 2021 03:11:21 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Dec 2021 04:04:56 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Aug 2022 02:27:12 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Guo",
"Yanan",
""
],
[
"Zigerelli",
"Andrew",
""
],
[
"Zhang",
"Youtao",
""
],
[
"Yang",
"Jun",
""
]
] |
new_dataset
| 0.999712 |
2111.03701
|
Marco Peressotti
|
Lu\'is Cruz-Filipe, Eva Graversen, Lovro Lugovi\'c, Fabrizio Montesi,
Marco Peressotti
|
Functional Choreographic Programming
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Choreographic programming is an emerging programming paradigm for concurrent
and distributed systems, whereby developers write the communications that
should be enacted and then a distributed implementation is automatically
obtained by means of a compiler. Theories of choreographic programming
typically come with strong theoretical guarantees about the compilation
process, most notably: the generated implementations operationally correspond
to their source choreographies and are deadlock-free.
Currently, the most advanced incarnation of the paradigm is Choral, an
object-oriented choreographic programming language that targets Java. Choral
deviated significantly from known theories of choreographies, and introduced
the possibility of expressing higher-order choreographies (choreographies
parameterised over choreographies) that are fully distributed. As a
consequence, it is unclear if the usual guarantees of choreographies can still
hold in the more general setting of higher-order ones.
We introduce Chor{\lambda}, the first functional choreographic programming
language: it offers a new formulation of the standard communication
primitive found in choreographies as a function, and it is based upon the
{\lambda}-calculus. Chor{\lambda} is the first theory that explains the core
ideas of higher-order choreographic programming (as in Choral). Bridging the
gap between practice and theory requires developing a new evaluation strategy
and typing discipline for {\lambda} terms that accounts for the distributed
nature of computation in choreographies. We illustrate the expressivity of
Chor{\lambda} with a series of examples, which include reconstructions of the
key examples from the original presentation of Choral. Our theory supports the
expected properties of choreographic programming and bridges the gap between
the communities of functional and choreographic programming.
|
[
{
"version": "v1",
"created": "Fri, 5 Nov 2021 18:58:53 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Nov 2021 11:14:49 GMT"
},
{
"version": "v3",
"created": "Wed, 4 May 2022 12:10:12 GMT"
},
{
"version": "v4",
"created": "Wed, 17 Aug 2022 06:20:34 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Cruz-Filipe",
"Luís",
""
],
[
"Graversen",
"Eva",
""
],
[
"Lugović",
"Lovro",
""
],
[
"Montesi",
"Fabrizio",
""
],
[
"Peressotti",
"Marco",
""
]
] |
new_dataset
| 0.981914 |
2206.00379
|
Yiming Chen
|
Yiming Chen, Guodong Yin, Zhanhong Tan, Mingyen Lee, Zekun Yang,
Yongpan Liu, Huazhong Yang, Kaisheng Ma, Xueqing Li
|
YOLoC: DeploY Large-Scale Neural Network by ROM-based
Computing-in-Memory using ResiduaL Branch on a Chip
|
6 pages, 14 figures. to be published in DAC 2022
|
Design Automation Conference 2022
|
10.1145/3489517.3530576
| null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Computing-in-memory (CiM) is a promising technique to achieve high energy
efficiency in data-intensive matrix-vector multiplication (MVM) by relieving
the memory bottleneck. Unfortunately, due to the limited SRAM capacity,
existing SRAM-based CiM needs to reload the weights from DRAM in large-scale
networks. This undesired fact weakens the energy efficiency significantly. This
work, for the first time, proposes the concept, design, and optimization of
computing-in-ROM to achieve much higher on-chip memory capacity, and thus less
DRAM access and lower energy consumption. Furthermore, to support different
computing scenarios with varying weights, a weight fine-tune technique, namely
Residual Branch (ReBranch), is also proposed. ReBranch combines ROM-CiM and
assisting SRAM-CiM to achieve high versatility. YOLoC, a ReBranch-assisted
ROM-CiM framework for object detection is presented and evaluated. With the
same area in 28nm CMOS, YOLoC for several datasets has shown significant energy
efficiency improvement by 14.8x for YOLO (Darknet-19) and 4.8x for ResNet-18,
with <8% latency overhead and almost no mean average precision (mAP) loss
(-0.5% ~ +0.2%), compared with the fully SRAM-based CiM.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 10:30:47 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Chen",
"Yiming",
""
],
[
"Yin",
"Guodong",
""
],
[
"Tan",
"Zhanhong",
""
],
[
"Lee",
"Mingyen",
""
],
[
"Yang",
"Zekun",
""
],
[
"Liu",
"Yongpan",
""
],
[
"Yang",
"Huazhong",
""
],
[
"Ma",
"Kaisheng",
""
],
[
"Li",
"Xueqing",
""
]
] |
new_dataset
| 0.999679 |
2206.01062
|
Michele Dolfi
|
Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S Nassar, Peter
W J Staar
|
DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis
|
9 pages, 6 figures, 5 tables. Accepted paper at SIGKDD 2022
conference
| null |
10.1145/3534678.3539043
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate document layout analysis is a key requirement for high-quality PDF
document conversion. With the recent availability of public, large ground-truth
datasets such as PubLayNet and DocBank, deep-learning models have proven to be
very effective at layout detection and segmentation. While these datasets are
of adequate size to train such models, they severely lack in layout variability
since they are sourced from scientific article repositories such as PubMed and
arXiv only. Consequently, the accuracy of the layout segmentation drops
significantly when these models are applied on more challenging and diverse
layouts. In this paper, we present \textit{DocLayNet}, a new, publicly
available, document-layout annotation dataset in COCO format. It contains 80863
manually annotated pages from diverse data sources to represent a wide
variability in layouts. For each PDF page, the layout annotations provide
labelled bounding-boxes with a choice of 11 distinct classes. DocLayNet also
provides a subset of double- and triple-annotated pages to determine the
inter-annotator agreement. In multiple experiments, we provide baseline
accuracy scores (in mAP) for a set of popular object detection models. We also
demonstrate that these models fall approximately 10\% behind the
inter-annotator agreement. Furthermore, we provide evidence that DocLayNet is
of sufficient size. Lastly, we compare models trained on PubLayNet, DocBank and
DocLayNet, showing that layout predictions of the DocLayNet-trained models are
more robust and thus the preferred choice for general-purpose document-layout
analysis.
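Since DocLayNet ships in COCO format, its annotations can be read with the
standard pycocotools API (a sketch; the annotation file path is an assumption
about how a local copy is laid out):

```python
from pycocotools.coco import COCO

coco = COCO("DocLayNet/COCO/train.json")     # hypothetical local path
img_ids = coco.getImgIds()
ann_ids = coco.getAnnIds(imgIds=img_ids[0])
for ann in coco.loadAnns(ann_ids):
    # each annotation is a labelled bounding box from one of the 11 classes
    name = coco.loadCats(ann["category_id"])[0]["name"]
    print(name, ann["bbox"])                 # [x, y, width, height]
```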
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 14:25:12 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Pfitzmann",
"Birgit",
""
],
[
"Auer",
"Christoph",
""
],
[
"Dolfi",
"Michele",
""
],
[
"Nassar",
"Ahmed S",
""
],
[
"Staar",
"Peter W J",
""
]
] |
new_dataset
| 0.974328 |
2206.07468
|
Pengxin Chen
|
Wenzhong Shi, Pengxin Chen, Muyang Wang, Sheng Bao, Haodong Xiang, Yue
Yu, Daping Yang
|
PolyU-BPCoMa: A Dataset and Benchmark Towards Mobile Colorized Mapping
Using a Backpack Multisensorial System
|
11 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Constructing colorized point clouds from mobile laser scanning and images is
a fundamental work in surveying and mapping. It is also an essential
prerequisite for building digital twins for smart cities. However, existing
public datasets are either in relatively small scales or lack accurate
geometrical and color ground truth. This paper documents a multisensorial
dataset named PolyU-BPCoMA which is distinctively positioned towards mobile
colorized mapping. The dataset incorporates resources of 3D LiDAR, spherical
imaging, GNSS and IMU on a backpack platform. Color checker boards are pasted
in each surveyed area as targets and ground truth data are collected by an
advanced terrestrial laser scanner (TLS). 3D geometrical and color information
can be recovered in the colorized point clouds produced by the backpack system
and the TLS, respectively. Accordingly, we provide an opportunity to benchmark
the mapping and colorization accuracy simultaneously for a mobile
multisensorial system. The dataset is approximately 800 GB in size covering
both indoor and outdoor environments. The dataset and development kits are
available at https://github.com/chenpengxin/PolyU-BPCoMa.git.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 12:06:08 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2022 03:03:07 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Shi",
"Wenzhong",
""
],
[
"Chen",
"Pengxin",
""
],
[
"Wang",
"Muyang",
""
],
[
"Bao",
"Sheng",
""
],
[
"Xiang",
"Haodong",
""
],
[
"Yu",
"Yue",
""
],
[
"Yang",
"Daping",
""
]
] |
new_dataset
| 0.998573 |
2206.08522
|
Kaizhi Zheng
|
Kaizhi Zheng, Xiaotong Chen, Odest Chadwicke Jenkins, Xin Eric Wang
|
VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation
| null | null | null | null |
cs.RO cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Benefiting from language flexibility and compositionality, humans naturally
intend to use language to command an embodied agent for complex tasks such as
navigation and object manipulation. In this work, we aim to fill the gap in
the last mile of embodied agents -- object manipulation by following human
guidance, e.g., "move the red mug next to the box while keeping it upright." To
this end, we introduce an Automatic Manipulation Solver (AMSolver) system and
build a Vision-and-Language Manipulation benchmark (VLMbench) based on it,
containing various language instructions on categorized robotic manipulation
tasks. Specifically, modular rule-based task templates are created to
automatically generate robot demonstrations with language instructions,
consisting of diverse object shapes and appearances, action types, and motion
constraints. We also develop a keypoint-based model 6D-CLIPort to deal with
multi-view observations and language input and output a sequence of 6 degrees
of freedom (DoF) actions. We hope the new simulator and benchmark will
facilitate future research on language-guided robotic manipulation.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 03:07:18 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2022 17:18:43 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Zheng",
"Kaizhi",
""
],
[
"Chen",
"Xiaotong",
""
],
[
"Jenkins",
"Odest Chadwicke",
""
],
[
"Wang",
"Xin Eric",
""
]
] |
new_dataset
| 0.999578 |
2208.07904
|
J. Maurice Rojas
|
Philippe P\'ebay, J. Maurice Rojas, David C. Thompson
|
Sturm's Theorem with Endpoints
|
4 pages. A software implementation can be found in algorithm
vtkPolynomialSolversUnivariate, within the VTK (Visualization Toolkit)
software package
| null | null | null |
cs.SC math.AC
|
http://creativecommons.org/licenses/by/4.0/
|
Sturm's Theorem is a fundamental 19th century result relating the number of
real roots of a polynomial $f$ in an interval to the number of sign
alternations in a sequence of polynomial division-like calculations. We provide
a short direct proof of Sturm's Theorem, including the numerically vexing case
(ignored in many published accounts) where an interval endpoint is a root of
$f$.
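A compact sketch of the classical computation (a naive floating-point version
assuming a square-free f; the endpoint case the paper treats needs more care
than simply dropping exact zeros from the sign sequence, as done below):

```python
import numpy as np

def sturm_chain(f):
    # f: polynomial coefficients, highest degree first.
    chain = [np.array(f, float), np.polyder(np.array(f, float))]
    while chain[-1].size > 1:
        _, rem = np.polydiv(chain[-2], chain[-1])
        rem = np.trim_zeros(-rem, "f")       # next term is minus the remainder
        if rem.size == 0:
            break
        chain.append(rem)
    return chain

def sign_changes(chain, x):
    vals = [np.polyval(p, x) for p in chain]
    signs = [v for v in vals if v != 0]      # naive handling of exact zeros
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def roots_in_interval(f, a, b):
    # Classical statement: distinct real roots of f in (a, b], with f(a) != 0.
    chain = sturm_chain(f)
    return sign_changes(chain, a) - sign_changes(chain, b)

print(roots_in_interval([1, 0, -2], -2, 2))  # x^2 - 2: two roots in (-2, 2]
```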
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 18:47:09 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Pébay",
"Philippe",
""
],
[
"Rojas",
"J. Maurice",
""
],
[
"Thompson",
"David C.",
""
]
] |
new_dataset
| 0.999464 |
2208.08042
|
Runyan Yang
|
Gaofeng Cheng, Yifan Chen, Runyan Yang, Qingxuan Li, Zehui Yang,
Lingxuan Ye, Pengyuan Zhang, Qingqing Zhang, Lei Xie, Yanmin Qian, Kong Aik
Lee, Yonghong Yan
|
The Conversational Short-phrase Speaker Diarization (CSSD) Task:
Dataset, Evaluation Metric and Baselines
|
arXiv admin note: text overlap with arXiv:2203.16844
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The conversation scenario is one of the most important and most challenging
scenarios for speech processing technologies because people in conversation
respond to each other in a casual style. Detecting the speech activities of
each person in a conversation is vital to downstream tasks, like natural
language processing, machine translation, etc. People refer to the detection
technology of "who spoke when" as speaker diarization (SD). Traditionally,
diarization error rate (DER) has been used as the standard evaluation metric of
SD systems for a long time. However, DER fails to give enough importance to
short conversational phrases, which are short but important on the semantic
level. Also, a carefully and accurately manually-annotated testing dataset
suitable for evaluating the conversational SD technologies is still unavailable
in the speech community. In this paper, we design and describe the
Conversational Short-phrases Speaker Diarization (CSSD) task, which consists of
training and testing datasets, evaluation metric and baselines. In the dataset
aspect, in addition to the previously open-sourced 180-hour conversational
MagicData-RAMC dataset, we prepare an individual 20-hour conversational speech
test dataset with carefully and manually verified speaker timestamp
annotations for the CSSD task. In the metric aspect, we design the new
conversational DER (CDER) evaluation metric, which calculates the SD accuracy
at the utterance level. In the baseline aspect, we adopt a commonly used
method: Variational Bayes HMM x-vector system, as the baseline of the CSSD
task. Our evaluation metric is publicly available at
https://github.com/SpeechClub/CDER_Metric.
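For intuition about why utterance-level scoring re-weights short phrases, here
is one plausible scorer (an illustration only; the authoritative CDER
definition is in the linked repository, and this sketch assumes hypothesis
speakers were already mapped to reference labels):

```python
def utterance_level_error(ref_utts, hyp_segments):
    # Both inputs: lists of (start, end, speaker). Every reference utterance
    # counts equally, so a missed 0.4 s phrase costs as much as a missed
    # 5 s turn -- unlike time-weighted DER.
    def overlap(a, b):
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    wrong = 0
    for utt in ref_utts:
        best = max(hyp_segments, key=lambda seg: overlap(utt, seg), default=None)
        if best is None or overlap(utt, best) == 0.0 or best[2] != utt[2]:
            wrong += 1
    return wrong / len(ref_utts)

ref = [(0.0, 5.0, "A"), (5.0, 5.4, "B")]   # one long turn, one short phrase
hyp = [(0.0, 5.4, "A")]                    # the short phrase is swallowed
print(utterance_level_error(ref, hyp))     # 0.5, although time-weighted DER is tiny
```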
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 03:26:23 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Cheng",
"Gaofeng",
""
],
[
"Chen",
"Yifan",
""
],
[
"Yang",
"Runyan",
""
],
[
"Li",
"Qingxuan",
""
],
[
"Yang",
"Zehui",
""
],
[
"Ye",
"Lingxuan",
""
],
[
"Zhang",
"Pengyuan",
""
],
[
"Zhang",
"Qingqing",
""
],
[
"Xie",
"Lei",
""
],
[
"Qian",
"Yanmin",
""
],
[
"Lee",
"Kong Aik",
""
],
[
"Yan",
"Yonghong",
""
]
] |
new_dataset
| 0.977231 |
2208.08049
|
Cheng Peng
|
Cheng Peng, Rama Chellappa
|
PDRF: Progressively Deblurring Radiance Field for Fast and Robust Scene
Reconstruction from Blurry Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Progressively Deblurring Radiance Field (PDRF), a novel approach
to efficiently reconstruct high quality radiance fields from blurry images.
While current State-of-The-Art (SoTA) scene reconstruction methods achieve
photo-realistic rendering results from clean source views, their performances
suffer when the source views are affected by blur, which is commonly observed
for images in the wild. Previous deblurring methods either do not account for
3D geometry, or are computationally intense. To address these issues, PDRF, a
progressively deblurring scheme in radiance field modeling, accurately models
blur by incorporating 3D scene context. PDRF further uses an efficient
importance sampling scheme, which results in fast scene optimization.
Specifically, PDRF proposes a Coarse Ray Renderer to quickly estimate voxel
density and feature; a Fine Voxel Renderer is then used to achieve high quality
ray tracing. We perform extensive experiments and show that PDRF is 15X faster
than previous SoTA while achieving better performance on both synthetic and
real scenes.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 03:42:29 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Peng",
"Cheng",
""
],
[
"Chellappa",
"Rama",
""
]
] |
new_dataset
| 0.950646 |
2208.08080
|
Dong Won Lee
|
Dong Won Lee, Chaitanya Ahuja, Paul Pu Liang, Sanika Natu,
Louis-Philippe Morency
|
Multimodal Lecture Presentations Dataset: Understanding Multimodality in
Educational Slides
|
9 pages, 5 figures
| null | null | null |
cs.AI cs.CL cs.CV cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Lecture slide presentations, a sequence of pages that contain text and
figures accompanied by speech, are constructed and presented carefully in order
to optimally transfer knowledge to students. Previous studies in multimedia and
psychology attribute the effectiveness of lecture presentations to their
multimodal nature. As a step toward developing AI to aid in student learning as
intelligent teacher assistants, we introduce the Multimodal Lecture
Presentations dataset as a large-scale benchmark testing the capabilities of
machine learning models in multimodal understanding of educational content. Our
dataset contains aligned slides and spoken language, for 180+ hours of video
and 9000+ slides, with 10 lecturers from various subjects (e.g., computer
science, dentistry, biology). We introduce two research tasks which are
designed as stepping stones towards AI agents that can explain (automatically
captioning a lecture presentation) and illustrate (synthesizing visual figures
to accompany spoken explanations) educational content. We provide manual
annotations to help implement these two research tasks and evaluate
state-of-the-art models on them. Comparing baselines and human student
performances, we find that current models struggle in (1) weak crossmodal
alignment between slides and spoken text, (2) learning novel visual mediums,
(3) technical language, and (4) long-range sequences. Towards addressing this
issue, we also introduce PolyViLT, a multimodal transformer trained with a
multi-instance learning loss that is more effective than current approaches. We
conclude by shedding light on the challenges and opportunities in multimodal
understanding of educational presentations.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 05:30:18 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Lee",
"Dong Won",
""
],
[
"Ahuja",
"Chaitanya",
""
],
[
"Liang",
"Paul Pu",
""
],
[
"Natu",
"Sanika",
""
],
[
"Morency",
"Louis-Philippe",
""
]
] |
new_dataset
| 0.999763 |
2208.08091
|
Heng Yao
|
Heng Yao, Sanaz Motamedi, Wayne C.W. Giang, Alexandra Kondyli, Eakta
Jain
|
In-vehicle alertness monitoring for older adults
|
12 pages, 14 figures, 6 tables
| null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Alertness monitoring in the context of driving improves safety and saves
lives. Computer vision based alertness monitoring is an active area of
research. However, the algorithms and datasets that exist for alertness
monitoring are primarily aimed at younger adults (18-50 years old). We present
a system for in-vehicle alertness monitoring for older adults. Through a design
study, we ascertained the variables and parameters that are suitable for older
adults traveling independently in Level 5 vehicles. We implemented a prototype
traveler monitoring system and evaluated the alertness detection algorithm on
ten older adults (70 years and older). We report on the system design and
implementation at a level of detail that is suitable for the beginning
researcher or practitioner. Our study suggests that dataset development is the
foremost challenge for developing alertness monitoring systems targeted at
older adults. This study is the first of its kind for a hitherto under-studied
population and has implications for future work on algorithm development and
system design through participatory methods.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 06:07:37 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Yao",
"Heng",
""
],
[
"Motamedi",
"Sanaz",
""
],
[
"Giang",
"Wayne C. W.",
""
],
[
"Kondyli",
"Alexandra",
""
],
[
"Jain",
"Eakta",
""
]
] |
new_dataset
| 0.991398 |
2208.08092
|
Jaskirat Singh
|
Jaskirat Singh, Liang Zheng, Cameron Smith, Jose Echevarria
|
Paint2Pix: Interactive Painting based Progressive Image Synthesis and
Editing
|
ECCV 2022
|
ECCV 2022
| null | null |
cs.CV cs.AI cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Controllable image synthesis with user scribbles is a topic of keen interest
in the computer vision community. In this paper, for the first time we study
the problem of photorealistic image synthesis from incomplete and primitive
human paintings. In particular, we propose a novel approach paint2pix, which
learns to predict (and adapt) "what a user wants to draw" from rudimentary
brushstroke inputs, by learning a mapping from the manifold of incomplete human
paintings to their realistic renderings. When used in conjunction with recent
works in autonomous painting agents, we show that paint2pix can be used for
progressive image synthesis from scratch. During this process, paint2pix allows
a novice user to progressively synthesize the desired image output, while
requiring just a few coarse user scribbles to accurately steer the trajectory
of the synthesis process. Furthermore, we find that our approach is also
surprisingly convenient for real image editing, allowing the user to
perform a diverse range of custom fine-grained edits through the addition of
only a few well-placed brushstrokes. Supplemental video and demo are available
at https://1jsingh.github.io/paint2pix
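A hedged sketch of the progressive synthesis loop the abstract describes is given below; the `encode`/`generate` callables are placeholder stand-ins for the learned painting-to-rendering mapping, not paint2pix's actual interface:

```python
import numpy as np

def progressive_synthesis(canvas, encode, generate, get_strokes, rounds=5):
    # Alternate between user scribbles and model completion; `encode` and
    # `generate` stand in for the learned painting-to-rendering mapping.
    for _ in range(rounds):
        canvas = np.clip(canvas + get_strokes(canvas), 0.0, 1.0)
        canvas = generate(encode(canvas))
    return canvas

# Dummy stand-ins so the loop runs end to end; a real system would plug in
# the trained encoder/decoder pair instead.
rng = np.random.default_rng(0)
out = progressive_synthesis(
    canvas=np.zeros((64, 64, 3)),
    encode=lambda img: img.mean(axis=(0, 1)),
    generate=lambda z: np.ones((64, 64, 3)) * z,
    get_strokes=lambda img: rng.uniform(0, 0.1, img.shape),
)
print(out.shape)
```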
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 06:08:11 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Singh",
"Jaskirat",
""
],
[
"Zheng",
"Liang",
""
],
[
"Smith",
"Cameron",
""
],
[
"Echevarria",
"Jose",
""
]
] |
new_dataset
| 0.999231 |
2208.08099
|
Hanqing Zhu
|
Hanqing Zhu, Keren Zhu, Jiaqi Gu, Harrison Jin, Ray Chen, Jean Anne
Incorvia, and David Z. Pan
|
Fuse and Mix: MACAM-Enabled Analog Activation for Energy-Efficient
Neural Acceleration
|
Accepted by ICCAD 2022
| null | null | null |
cs.ET cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analog computing has been recognized as a promising low-power alternative to
digital counterparts for neural network acceleration. However, conventional
analog computing is done mainly in a mixed-signal manner, and tedious
analog/digital (A/D) conversion costs significantly limit the overall system's
energy efficiency. In this work, we devise an efficient analog activation unit with
magnetic tunnel junction (MTJ)-based analog content-addressable memory (MACAM),
simultaneously realizing nonlinear activation and A/D conversion in a fused
fashion. To compensate for the nascent and therefore currently limited
representation capability of MACAM, we propose to mix our analog activation
unit with digital activation dataflow. A fully differentiable framework,
SuperMixer, is developed to search for an optimized activation workload
assignment, adaptive to various activation energy constraints. The
effectiveness of our proposed methods is evaluated on a silicon photonic
accelerator. Compared to a standard activation implementation, our mixed
activation system with the searched assignment achieves competitive accuracy
with $>$60% energy savings on A/D conversion and activation.
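As a rough illustration of the fuse-and-mix idea, the sketch below greedily assigns each layer's activation to the analog (MACAM) or digital path under an energy budget. All per-layer costs and the greedy heuristic are made-up stand-ins; the actual SuperMixer framework performs this search differentiably:

```python
# Greedy analog/digital activation assignment under an energy budget.
# All numbers are illustrative, not measured costs.
layers = [
    # (name, digital_energy, analog_energy, accuracy_drop_if_analog)
    ("conv1", 10.0, 2.0, 0.004),
    ("conv2", 12.0, 2.5, 0.012),
    ("conv3", 9.0,  2.0, 0.001),
    ("fc",    6.0,  1.5, 0.020),
]
budget = 25.0  # total activation-energy budget (arbitrary units)

# Start fully digital, then move the layers that buy the most energy per
# unit of accuracy lost to the analog path until the budget is met.
assignment = {name: "digital" for name, *_ in layers}
energy = sum(d for _, d, _, _ in layers)
ranked = sorted(layers, key=lambda l: (l[1] - l[2]) / (l[3] + 1e-9),
                reverse=True)
for name, dig, ana, drop in ranked:
    if energy <= budget:
        break
    assignment[name] = "analog"
    energy += ana - dig

print(assignment, f"energy={energy:.1f}")
```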
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 06:28:05 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Zhu",
"Hanqing",
""
],
[
"Zhu",
"Keren",
""
],
[
"Gu",
"Jiaqi",
""
],
[
"Jin",
"Harrison",
""
],
[
"Chen",
"Ray",
""
],
[
"Incorvia",
"Jean Anne",
""
],
[
"Pan",
"David Z.",
""
]
] |
new_dataset
| 0.964451 |
2208.08120
|
Weijie Wang
|
Weijie Wang, Song Liu, Qinfeng Shan and Lihao Jia
|
Highly dynamic locomotion control of biped robot enhanced by swing arms
|
7 pages, 12 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
From a biomechanical viewpoint, swing arms play an irreplaceable role in
promoting highly dynamic locomotion on bipedal robots by enlarging the angular
momentum control space. Yet few bipedal robots exploit swing arms and their
redundant degrees of freedom, owing to the lack of locomotion control
strategies that tightly integrate modeling and control. This paper presents a
control strategy that models the bipedal robot as a flywheel-spring loaded
inverted pendulum (F-SLIP) to extract the characteristics of swing arms and
uses a whole-body controller (WBC) to realize these characteristics. It also
proposes an evaluation system for the highly dynamic locomotion of bipedal
robots covering three aspects: agility (as defined by us), stability, and
energy consumption. We design several sets of simulation experiments and
analyze the effects of swing arms according to this evaluation system during
the jumping motion of Purple V1.0 ("Purple energy rises in the east"), a
bipedal robot designed to test highly explosive locomotion. Results show that
after introducing swing arms, Purple's agility increases by more than 10
percent, its stabilization time is halved, and its energy consumption is
reduced by more than 20 percent.
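To make the F-SLIP abstraction concrete, a minimal planar sketch is given below: a spring-loaded inverted pendulum in stance plus a flywheel torque standing in for the angular momentum that swing arms contribute. The parameter values, the simple PD-style torque, and the Euler integration are assumptions, not the paper's model or its WBC:

```python
import numpy as np

# Planar F-SLIP during stance: a point mass on a massless spring leg plus a
# flywheel whose torque stands in for the angular momentum of swing arms.
m, k, l0, I = 40.0, 8000.0, 0.9, 1.2  # mass, leg stiffness, rest length, inertia
g, dt = 9.81, 1e-3

x, z = 0.05, 0.85                     # mass position relative to the foot
vx, vz = 0.8, 0.0                     # horizontal / vertical velocity
omega = 0.0                           # trunk (flywheel) angular rate

for _ in range(300):                  # ~0.3 s of stance
    l = np.hypot(x, z)
    f_spring = k * (l0 - l)           # radial spring force along the leg
    # Simple stabilizing flywheel torque, as swing arms would provide.
    tau = -5.0 * omega - 2.0 * np.arctan2(x, z)
    ax = f_spring * (x / l) / m
    az = f_spring * (z / l) / m - g
    vx, vz = vx + ax * dt, vz + az * dt
    x, z = x + vx * dt, z + vz * dt
    omega += (tau / I) * dt

print(f"end of stance: x={x:.3f} m, z={z:.3f} m, omega={omega:.3f} rad/s")
```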
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 07:29:16 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Wang",
"Weijie",
""
],
[
"Liu",
"Song",
""
],
[
"Shan",
"Qinfeng",
""
],
[
"Jia",
"Lihao",
""
]
] |
new_dataset
| 0.998021 |
2208.08190
|
Gabriel Van Zandycke
|
Gabriel Van Zandycke and Vladimir Somers and Maxime Istasse and Carlo
Del Don and Davide Zambrano
|
DeepSportradar-v1: Computer Vision Dataset for Sports Understanding with
High Quality Annotations
| null | null |
10.1145/3552437.3555699
| null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the recent development of Deep Learning applied to Computer Vision,
sport video understanding has gained a lot of attention, providing much richer
information for both sport consumers and leagues. This paper introduces
DeepSportradar-v1, a suite of computer vision tasks, datasets and benchmarks
for automated sport understanding. The main purpose of this framework is to
close the gap between academic research and real-world settings. To this end,
the datasets provide high-resolution raw images, camera parameters, and
high-quality annotations. DeepSportradar currently supports four challenging tasks
related to basketball: ball 3D localization, camera calibration, player
instance segmentation and player re-identification. For each of the four tasks,
a detailed description of the dataset, objective, performance metrics, and the
proposed baseline method are provided. To encourage further research on
advanced methods for sport understanding, a competition is organized as part of
the MMSports workshop from the ACM Multimedia 2022 conference, where
participants have to develop state-of-the-art methods to solve the above tasks.
The four datasets, development kits and baselines are publicly available.
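As a hedged illustration of how the ball 3D localization task might be scored, the snippet below computes a generic mean Euclidean error between predicted and annotated ball centers; this is not necessarily the benchmark's official protocol or API:

```python
import numpy as np

def mean_localization_error(pred_xyz, gt_xyz):
    """Mean Euclidean error (in the same unit as the annotations, e.g. cm)
    between predicted and ground-truth 3D ball positions."""
    pred_xyz, gt_xyz = np.asarray(pred_xyz), np.asarray(gt_xyz)
    return float(np.linalg.norm(pred_xyz - gt_xyz, axis=-1).mean())

# Toy example with three frames of predictions vs. annotations.
print(mean_localization_error(
    [[0.0, 0.0, 2.0], [1.0, 1.0, 2.5], [2.0, 0.5, 3.0]],
    [[0.1, 0.0, 2.0], [1.0, 1.2, 2.4], [2.1, 0.5, 3.1]],
))
```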
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 09:55:02 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Van Zandycke",
"Gabriel",
""
],
[
"Somers",
"Vladimir",
""
],
[
"Istasse",
"Maxime",
""
],
[
"Del Don",
"Carlo",
""
],
[
"Zambrano",
"Davide",
""
]
] |
new_dataset
| 0.999453 |