id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2209.01789
|
Sadullah Canakci
|
Sadullah Canakci, Chathura Rajapaksha, Anoop Mysore Nataraja, Leila
Delshadtehrani, Michael Taylor, Manuel Egele, Ajay Joshi
|
ProcessorFuzz: Guiding Processor Fuzzing using Control and Status
Registers
| null | null | null | null |
cs.AR cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the complexity of modern processors has increased over the years,
developing effective verification strategies to identify bugs prior to
manufacturing has become critical. Undiscovered micro-architectural bugs in
processors can manifest as severe security vulnerabilities in the form of side
channels, functional bugs, etc. Inspired by software fuzzing, a technique
commonly used for software testing, multiple recent works use hardware fuzzing
for the verification of Register-Transfer Level (RTL) designs. However, these
works suffer from several limitations such as lack of support for widely-used
Hardware Description Languages (HDLs) and misleading coverage-signals that
misidentify "interesting" inputs. Towards overcoming these shortcomings, we
present ProcessorFuzz, a processor fuzzer that guides the fuzzer with a novel
CSR-transition coverage metric. ProcessorFuzz monitors the transitions in
Control and Status Registers (CSRs) as CSRs are in charge of controlling and
holding the state of the processor. Therefore, transitions in CSRs indicate a
new processor state, and guiding the fuzzer based on this feedback enables
ProcessorFuzz to explore new processor states. ProcessorFuzz is agnostic to the
HDL and does not require any instrumentation in the processor design. Thus, it
supports a wide range of RTL designs written in different hardware languages.
We evaluated ProcessorFuzz with three real-world open-source processors --
Rocket, BOOM, and BlackParrot. ProcessorFuzz triggered a set of ground-truth
bugs 1.23$\times$ faster (on average) than DIFUZZRTL. Moreover, our experiments
exposed 8 new bugs across the three RISC-V cores and one new bug in a reference
model. All nine bugs were confirmed by the developers of the corresponding
projects.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 06:57:14 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Canakci",
"Sadullah",
""
],
[
"Rajapaksha",
"Chathura",
""
],
[
"Nataraja",
"Anoop Mysore",
""
],
[
"Delshadtehrani",
"Leila",
""
],
[
"Taylor",
"Michael",
""
],
[
"Egele",
"Manuel",
""
],
[
"Joshi",
"Ajay",
""
]
] |
new_dataset
| 0.977741 |
2209.01927
|
Rika Kobayashi
|
Rika Kobayashi, Sarah Jaffa, Jiachen Dong, Roger D. Amos, Jeremy Cohen
and Emily F. Kerrison
|
Gather -- a better way to codehack online
|
10 pages, 3 figures
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A virtual hands-on computer laboratory has been designed within the Gather
online meeting platform. Gather's features such as spatial audio, private
spaces and interactable objects offer scope for great improvements over
currently used platforms, especially for small-group based teaching. We
describe our experience using this virtual computer laboratory for a recent
'Python for Beginners' workshop held as part of the Software Sustainability
Institute's 2022 Research Software Camp.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 12:12:25 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Kobayashi",
"Rika",
""
],
[
"Jaffa",
"Sarah",
""
],
[
"Dong",
"Jiachen",
""
],
[
"Amos",
"Roger D.",
""
],
[
"Cohen",
"Jeremy",
""
],
[
"Kerrison",
"Emily F.",
""
]
] |
new_dataset
| 0.998648 |
2209.01936
|
Pavel Karpyshev
|
Pavel Karpyshev, Evgeny Kruzhkov, Evgeny Yudin, Alena Savinykh, Andrei
Potapov, Mikhail Kurenkov, Anton Kolomeytsev, Ivan Kalinov, and Dzmitry
Tsetserukou
|
MuCaSLAM: CNN-Based Frame Quality Assessment for Mobile Robot with
Omnidirectional Visual SLAM
|
This paper has been accepted to the 2022 IEEE 18th Conference on
Automation Science and Engineering
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the proposed study, we describe an approach to improving the computational
efficiency and robustness of visual SLAM algorithms on mobile robots with
multiple cameras and limited computational power by implementing an
intermediate layer between the cameras and the SLAM pipeline. In this layer,
the images are classified using a ResNet18-based neural network regarding their
applicability to the robot localization. The network is trained on a six-camera
dataset collected in the campus of the Skolkovo Institute of Science and
Technology (Skoltech). For training, we use the images and ORB features that
were successfully matched with the subsequent frame of the same camera ("good"
keypoints or features). The results have shown that the network is able to
accurately determine the optimal images for ORB-SLAM2, and implementing the
proposed approach in the SLAM pipeline can help significantly increase the
number of images the SLAM algorithm can localize on, and improve the overall
robustness of visual SLAM. The experiments on operation time show that the
proposed approach is at least 6 times faster than using the ORB extractor
and feature matcher when operated on a CPU, and more than 30 times faster when
run on a GPU. The network evaluation has shown at least 90% accuracy in
recognizing images with a large number of "good" ORB keypoints. The use of the
proposed approach allowed us to maintain a high number of features throughout the
dataset by robustly switching away from cameras with feature-poor streams.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 12:29:20 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Karpyshev",
"Pavel",
""
],
[
"Kruzhkov",
"Evgeny",
""
],
[
"Yudin",
"Evgeny",
""
],
[
"Savinykh",
"Alena",
""
],
[
"Potapov",
"Andrei",
""
],
[
"Kurenkov",
"Mikhail",
""
],
[
"Kolomeytsev",
"Anton",
""
],
[
"Kalinov",
"Ivan",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.996292 |
2209.01943
|
Jianhui Ma
|
Jianhui Ma, Qiang Li, Zilong Liu, Linsong Du, Hongyang Chen, and
Nirwan Ansari
|
Jamming Modulation: An Active Anti-Jamming Scheme
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Providing quality communications under adversarial electronic attacks, e.g.,
broadband jamming attacks, is a challenging task. Unlike state-of-the-art
approaches which treat jamming signals as destructive interference, this paper
presents a novel active anti-jamming (AAJ) scheme for a jammed channel to
enhance the communication quality between a transmitter node (TN) and receiver
node (RN), where the TN actively exploits the jamming signal as a carrier to
send messages. Specifically, the TN is equipped with a programmable-gain
amplifier, which is capable of re-modulating the jamming signals for jamming
modulation. Considering four typical jamming types, we derive both the bit
error rates (BER) and the corresponding optimal detection thresholds of the AAJ
scheme. The asymptotic performances of the AAJ scheme are discussed under the
high jamming-to-noise ratio (JNR) and sampling rate cases. Our analysis shows
that there exists a BER floor for sufficiently large JNR. Simulation results
indicate that the proposed AAJ scheme allows the TN to communicate with the RN
reliably even under extremely strong and/or broadband jamming. Additionally, we
investigate the channel capacity of the proposed AAJ scheme and show that the
channel capacity of the AAJ scheme outperforms that of the direct transmission
when the JNR is relatively high.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 12:48:20 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Ma",
"Jianhui",
""
],
[
"Li",
"Qiang",
""
],
[
"Liu",
"Zilong",
""
],
[
"Du",
"Linsong",
""
],
[
"Chen",
"Hongyang",
""
],
[
"Ansari",
"Nirwan",
""
]
] |
new_dataset
| 0.997004 |
2209.01947
|
Sasha Salter
|
Sasha Salter, Markus Wulfmeier, Dhruva Tirumala, Nicolas Heess, Martin
Riedmiller, Raia Hadsell, Dushyant Rao
|
MO2: Model-Based Offline Options
|
Accepted at 1st Conference on Lifelong Learning Agents (CoLLAs)
Conference Track, 2022
| null | null | null |
cs.LG cs.AI cs.RO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to discover useful behaviours from past experience and transfer
them to new tasks is considered a core component of natural embodied
intelligence. Inspired by neuroscience, the discovery of behaviours that switch
at bottleneck states has long been sought after for inducing plans of minimum
description length across tasks. Prior approaches have either only supported
online, on-policy, bottleneck state discovery, limiting sample-efficiency, or
discrete state-action domains, restricting applicability. To address this, we
introduce Model-Based Offline Options (MO2), an offline hindsight framework
supporting sample-efficient bottleneck option discovery over continuous
state-action spaces. Once bottleneck options are learnt offline over source
domains, they are transferred online to improve exploration and value
estimation on the transfer domain. Our experiments show that on complex
long-horizon continuous control tasks with sparse, delayed rewards, MO2's
properties are essential and lead to performance exceeding recent option
learning methods. Additional ablations further demonstrate the impact on option
predictability and credit assignment.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 12:58:50 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Salter",
"Sasha",
""
],
[
"Wulfmeier",
"Markus",
""
],
[
"Tirumala",
"Dhruva",
""
],
[
"Heess",
"Nicolas",
""
],
[
"Riedmiller",
"Martin",
""
],
[
"Hadsell",
"Raia",
""
],
[
"Rao",
"Dushyant",
""
]
] |
new_dataset
| 0.99449 |
2209.01970
|
Ruyue Xin
|
Ruyue Xin, Hongyun Liu, Peng Chen, Paola Grosso, Zhiming Zhao
|
FIRED: a fine-grained robust performance diagnosis framework for cloud
applications
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To run a cloud application with the required service quality, operators have
to continuously monitor the cloud application's run-time status, detect
potential performance anomalies, and diagnose the root causes of anomalies.
However, existing models of performance anomaly detection often suffer from low
re-usability and robustness due to the diversity of system-level metrics being
monitored and the lack of high-quality labeled monitoring data for anomalies.
Moreover, the current coarse-grained analysis models make it difficult to
locate system-level root causes of the application performance anomalies for
effective adaptation decisions. We provide a FIne-grained Robust pErformance
Diagnosis (FIRED) framework to tackle those challenges. The framework offers an
ensemble of several well-selected base models for anomaly detection using a
deep neural network, which adopts weakly-supervised learning considering fewer
labels exist in reality. The framework also employs a real-time fine-grained
analysis model to locate dependent system metrics of the anomaly. Our
experiments show that the framework can achieve the best detection accuracy and
algorithm robustness, and it can predict anomalies in four minutes with an F1
score higher than 0.8. In addition, the framework can accurately localize the
first root cause, with an average accuracy higher than 0.7 for locating the
first four root causes.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 13:49:42 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Xin",
"Ruyue",
""
],
[
"Liu",
"Hongyun",
""
],
[
"Chen",
"Peng",
""
],
[
"Grosso",
"Paola",
""
],
[
"Zhao",
"Zhiming",
""
]
] |
new_dataset
| 0.954001 |
2209.01983
|
Re'em Harel
|
Re'em Harel, Matan Rusanovsky, Ron Wagner, Harel Levin, Gal Oren
|
ScalSALE: Scalable SALE Benchmark Framework for Supercomputers
| null | null | null | null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
Supercomputers worldwide provide the necessary infrastructure for
groundbreaking research. However, most supercomputers are not designed equally,
due to differing desired figures of merit, which are derived from the
computational bounds of the targeted scientific applications' portfolio. In
turn, the design of such computers becomes an optimization process that strives
to achieve the best performance possible in a multi-parameter search space.
Therefore, verifying and evaluating whether a supercomputer can achieve its
desired goal becomes a tedious and complex task. For this purpose, many full,
mini, proxy, and benchmark applications have been introduced in the attempt to
represent scientific applications. Nevertheless, as these benchmarks are hard
to expand, and most importantly, are over-simplified compared to scientific
applications that tend to couple multiple scientific domains, they fail to
represent the true scaling capabilities. We suggest a new physical scalable
benchmark framework, namely ScalSALE, based on the well-known SALE scheme.
ScalSALE's main goal is to provide a simple, flexible, scalable infrastructure
that can be easily expanded to include multi-physical schemes while maintaining
scalable and efficient execution times. By expanding ScalSALE, the gap between
the over-simplified benchmarks and scientific applications can be bridged. To
achieve this goal, ScalSALE is implemented in Modern Fortran with simple OOP
design patterns and supported by transparent MPI-3 blocking and non-blocking
communication that allows such a scalable framework. ScalSALE is compared to
LULESH via simulating the Sedov-Taylor blast wave problem using strong and weak
scaling tests. ScalSALE is executed and evaluated with both rezoning options -
Lagrangian and Eulerian.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 14:31:28 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Harel",
"Re'em",
""
],
[
"Rusanovsky",
"Matan",
""
],
[
"Wagner",
"Ron",
""
],
[
"Levin",
"Harel",
""
],
[
"Oren",
"Gal",
""
]
] |
new_dataset
| 0.955591 |
2209.01988
|
Haozhe Liu
|
Haoqin Ji, Haozhe Liu, Yuexiang Li, Jinheng Xie, Nanjun He, Yawen
Huang, Dong Wei, Xinrong Chen, Linlin Shen, Yefeng Zheng
|
A Benchmark for Weakly Semi-Supervised Abnormality Localization in Chest
X-Rays
|
Accepted by MICCAI-2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate abnormality localization in chest X-rays (CXR) can benefit the
clinical diagnosis of various thoracic diseases. However, the lesion-level
annotation can only be performed by experienced radiologists, and it is tedious
and time-consuming, thus difficult to acquire. Such a situation makes it
difficult to develop a fully-supervised abnormality localization system for
CXR. In this regard, we propose to train the CXR abnormality localization
framework via a weakly semi-supervised strategy, termed Point Beyond Class
(PBC), which utilizes a small number of fully annotated CXRs with lesion-level
bounding boxes and extensive weakly annotated samples by points. Such a point
annotation setting can provide weakly instance-level information for
abnormality localization with a marginal annotation cost. Particularly, the
core idea behind our PBC is to learn a robust and accurate mapping from the
point annotations to the bounding boxes against the variance of annotated
points. To achieve that, a regularization term, namely multi-point consistency,
is proposed, which drives the model to generate the consistent bounding box
from different point annotations inside the same abnormality. Furthermore, a
self-supervision, termed symmetric consistency, is also proposed to deeply
exploit the useful information from the weakly annotated data for abnormality
localization. Experimental results on RSNA and VinDr-CXR datasets justify the
effectiveness of the proposed method. When less than 20% box-level labels are
used for training, an improvement of ~5 in mAP can be achieved by our PBC,
compared to the current state-of-the-art method (i.e., Point DETR). Code is
available at https://github.com/HaozheLiu-ST/Point-Beyond-Class.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 14:36:07 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Ji",
"Haoqin",
""
],
[
"Liu",
"Haozhe",
""
],
[
"Li",
"Yuexiang",
""
],
[
"Xie",
"Jinheng",
""
],
[
"He",
"Nanjun",
""
],
[
"Huang",
"Yawen",
""
],
[
"Wei",
"Dong",
""
],
[
"Chen",
"Xinrong",
""
],
[
"Shen",
"Linlin",
""
],
[
"Zheng",
"Yefeng",
""
]
] |
new_dataset
| 0.95505 |
2209.01996
|
Peining Zhang
|
Peining Zhang, Junliang Guo, Linli Xu, Mu You, Junming Yin
|
Bridging Music and Text with Crowdsourced Music Comments: A
Sequence-to-Sequence Framework for Thematic Music Comments Generation
| null | null | null | null |
cs.SD cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We consider a novel task of automatically generating text descriptions of
music. Compared with other well-established text generation tasks such as image
captioning, the scarcity of well-paired music and text datasets makes it a much
more challenging task. In this paper, we exploit the crowd-sourced music
comments to construct a new dataset and propose a sequence-to-sequence model to
generate text descriptions of music. More concretely, we use the dilated
convolutional layer as the basic component of the encoder and a memory based
recurrent neural network as the decoder. To enhance the authenticity and
thematicity of generated texts, we further propose to fine-tune the model with
a discriminator as well as a novel topic evaluator. To measure the quality of
generated texts, we also propose two new evaluation metrics, which are more
aligned with human evaluation than traditional metrics such as BLEU.
Experimental results verify that our model is capable of generating fluent and
meaningful comments while containing thematic and content information of the
original music.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 14:51:51 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Zhang",
"Peining",
""
],
[
"Guo",
"Junliang",
""
],
[
"Xu",
"Linli",
""
],
[
"You",
"Mu",
""
],
[
"Yin",
"Junming",
""
]
] |
new_dataset
| 0.984998 |
2209.02207
|
Shaoshan Liu
|
Yuhui Hao, Bo Yu, Qiang Liu, Shaoshan Liu, Yuhao Zhu
|
Factor Graph Accelerator for LiDAR-Inertial Odometry
|
ICCAD 2022
| null | null | null |
cs.RO cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Factor graph is a graph representing the factorization of a probability
distribution function, and has been utilized in many autonomous machine
computing tasks, such as localization, tracking, planning and control etc. We
are developing an architecture with the goal of using the factor graph as a common
abstraction for most, if not all, autonomous machine computing tasks. If
successful, the architecture would provide a very simple interface of mapping
autonomous machine functions to the underlying compute hardware. As a first
step of such an attempt, this paper presents our most recent work of developing
a factor graph accelerator for LiDAR-Inertial Odometry (LIO), an essential task
in many autonomous machines, such as autonomous vehicles and mobile robots. By
modeling LIO as a factor graph, the proposed accelerator not only supports
multi-sensor fusion such as LiDAR, inertial measurement unit (IMU), GPS, etc.,
but also solves the global optimization problem of robot navigation in batch or
incremental modes. Our evaluation demonstrates that the proposed design
significantly improves the real-time performance and energy efficiency of
autonomous machine navigation systems. The initial success suggests the
potential of generalizing the factor graph architecture as a common abstraction
for autonomous machine computing, including tracking, planning, and control
etc.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 04:11:57 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Hao",
"Yuhui",
""
],
[
"Yu",
"Bo",
""
],
[
"Liu",
"Qiang",
""
],
[
"Liu",
"Shaoshan",
""
],
[
"Zhu",
"Yuhao",
""
]
] |
new_dataset
| 0.997964 |
2209.02211
|
Michal Yemini
|
Nir Weinberger and Michal Yemini
|
Multi-Armed Bandits with Self-Information Rewards
| null | null | null | null |
cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the informational multi-armed bandit (IMAB) model in
which at each round, a player chooses an arm, observes a symbol, and receives
an unobserved reward in the form of the symbol's self-information. Thus, the
expected reward of an arm is the Shannon entropy of the probability mass
function of the source that generates its symbols. The player aims to maximize
the expected total reward associated with the entropy values of the arms
played. Under the assumption that the alphabet size is known, two UCB-based
algorithms are proposed for the IMAB model which consider the biases of the
plug-in entropy estimator. The first algorithm optimistically corrects the bias
term in the entropy estimation. The second algorithm relies on data-dependent
confidence intervals that adapt to sources with small entropy values.
Performance guarantees are provided by upper bounding the expected regret of
each of the algorithms. Furthermore, in the Bernoulli case, the asymptotic
behavior of these algorithms is compared to the Lai-Robbins lower bound for the
pseudo regret. Additionally, under the assumption that the \textit{exact}
alphabet size is unknown, and instead the player only knows a loose upper bound
on it, a UCB-based algorithm is proposed, in which the player aims to reduce
the regret caused by the unknown alphabet size in a finite time regime.
Numerical results illustrating the expected regret of the algorithms presented
in the paper are provided.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 04:26:21 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Weinberger",
"Nir",
""
],
[
"Yemini",
"Michal",
""
]
] |
new_dataset
| 0.95835 |
2209.02215
|
Abari Bhattacharya
|
Abhinav Kumar, Barbara Di Eugenio, Abari Bhattacharya, Jillian
Aurisano, Andrew Johnson
|
Reference Resolution and Context Change in Multimodal Situated Dialogue
for Exploring Data Visualizations
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reference resolution, which aims to identify entities being referred to by a
speaker, is more complex in real world settings: new referents may be created
by processes the agents engage in and/or be salient only because they belong to
the shared physical setting. Our focus is on resolving references to
visualizations on a large screen display in multimodal dialogue; crucially,
reference resolution is directly involved in the process of creating new
visualizations. We describe our annotations for user references to
visualizations appearing on a large screen via language and hand gesture and
also new entity establishment, which results from executing the user request to
create a new visualization. We also describe our reference resolution pipeline
which relies on an information-state architecture to maintain dialogue context.
We report results on detecting and resolving references, effectiveness of
contextual information on the model, and under-specified requests for creating
visualizations. We also experiment with conventional CRF and deep learning /
transformer models (BiLSTM-CRF and BERT-CRF) for tagging references in user
utterance text. Our results show that transfer learning significantly boosts
the performance of the deep learning methods, although CRF still outperforms them,
suggesting that conventional methods may generalize better for low-resource
data.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 04:43:28 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Kumar",
"Abhinav",
""
],
[
"Di Eugenio",
"Barbara",
""
],
[
"Bhattacharya",
"Abari",
""
],
[
"Aurisano",
"Jillian",
""
],
[
"Johnson",
"Andrew",
""
]
] |
new_dataset
| 0.977608 |
2209.02353
|
EPTCS
|
Silvia Crafa (University of Padova, Italy)
|
From Legal Contracts to Legal Calculi: the code-driven normativity
|
In Proceedings EXPRESS/SOS 2022, arXiv:2208.14777. arXiv admin note:
text overlap with arXiv:2110.11069
|
EPTCS 368, 2022, pp. 23-42
|
10.4204/EPTCS.368.2
| null |
cs.PL cs.CY cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Using dedicated software to represent or enact legislation or regulation has
the advantage of solving the inherent ambiguity of legal texts and enabling the
automation of compliance with legal norms. On the other hand, the so-called
code-driven normativity is less flexible than the legal provisions it claims to
implement, and transforms the nature of legal protection, potentially reducing
the capability of individual human beings to invoke legal remedies.
In this article we focus on software-based legal contracts; we illustrate the
design of a legal calculus whose primitives allow a direct formalisation of
contracts' normative elements (i.e., permissions, prohibitions, obligations,
asset transfer, judicial enforcement and openness to the external context). We
show that interpreting legal contracts as interaction protocols between
(untrusted) parties enables the generalisation of formal methods and tools for
concurrent systems to the legal setting.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 10:38:19 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Crafa",
"Silvia",
"",
"University of Padova, Italy"
]
] |
new_dataset
| 0.999462 |
2209.02368
|
Tu JiaXiang
|
Jian Guo, Jiaxiang Tu, Hengyi Ren, Chong Han, Lijuan Sun
|
Finger Multimodal Feature Fusion and Recognition Based on Channel
Spatial Attention
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the instability and limitations of unimodal biometric systems,
multimodal systems have attracted more and more attention from researchers.
However, how to exploit the independent and complementary information between
different modalities remains a key and challenging problem. In this paper, we
propose a multimodal biometric fusion recognition algorithm based on
fingerprints and finger veins (Fingerprint Finger Veins-Channel Spatial
Attention Fusion Module, FPV-CSAFM). Specifically, for each pair of fingerprint
and finger vein images, we first propose a simple and effective Convolutional
Neural Network (CNN) to extract features. Then, we build a multimodal feature
fusion module (Channel Spatial Attention Fusion Module, CSAFM) to fully fuse
the complementary information between fingerprints and finger veins. Different
from existing fusion strategies, our fusion method can dynamically adjust the
fusion weights according to the importance of different modalities in channel
and spatial dimensions, so as to better combine the information between
different modalities and improve the overall recognition performance. To
evaluate the performance of our method, we conduct a series of experiments on
multiple public datasets. Experimental results show that the proposed FPV-CSAFM
achieves excellent recognition performance on three multimodal datasets based
on fingerprints and finger veins.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 10:48:30 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Guo",
"Jian",
""
],
[
"Tu",
"Jiaxiang",
""
],
[
"Ren",
"Hengyi",
""
],
[
"Han",
"Chong",
""
],
[
"Sun",
"Lijuan",
""
]
] |
new_dataset
| 0.987397 |
2209.02377
|
Md Sawkat Ali
|
Sarder Iftekhar Ahmed, Muhammad Ibrahim, Md. Nadim, Md. Mizanur
Rahman, Maria Mehjabin Shejunti, Taskeed Jabid, Md. Sawkat Ali
|
MangoLeafBD: A Comprehensive Image Dataset to Classify Diseased and
Healthy Mango Leaves
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Agriculture is one of the few remaining sectors that is yet to receive
proper attention from the machine learning community. The importance of
datasets in the machine learning discipline cannot be overemphasized. The lack
of standard and publicly available datasets related to agriculture prevents
practitioners of this discipline from harnessing the full benefit of these powerful
computational predictive tools and techniques. To improve this scenario, we
develop, to the best of our knowledge, the first-ever standard, ready-to-use,
and publicly available dataset of mango leaves. The images are collected from
four mango orchards of Bangladesh, one of the top mango-growing countries of
the world. The dataset contains 4000 images of about 1800 distinct leaves
covering seven diseases. Although the dataset is developed using mango leaves
of Bangladesh only, since we deal with diseases that are common across many
countries, this dataset is likely to be applicable to identify mango diseases
in other countries as well, thereby boosting mango yield. This dataset is
expected to draw wide attention from machine learning researchers and
practitioners in the field of automated agriculture.
|
[
{
"version": "v1",
"created": "Sat, 27 Aug 2022 16:07:16 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Ahmed",
"Sarder Iftekhar",
""
],
[
"Ibrahim",
"Muhammad",
""
],
[
"Nadim",
"Md.",
""
],
[
"Rahman",
"Md. Mizanur",
""
],
[
"Shejunti",
"Maria Mehjabin",
""
],
[
"Jabid",
"Taskeed",
""
],
[
"Ali",
"Md. Sawkat",
""
]
] |
new_dataset
| 0.99978 |
2209.02380
|
Abdul Rahman Shaikh
|
Abdul Rahman Shaikh, Hamed Alhoori, Maoyuan Sun
|
YouTube and Science: Models for Research Impact
|
21 pages, 12 figures, Scientometrics Journal
| null | null | null |
cs.DL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Video communication has been rapidly increasing over the past decade, with
YouTube providing a medium where users can post, discover, share, and react to
videos. There has also been an increase in the number of videos citing research
articles, especially since it has become relatively commonplace for academic
conferences to require video submissions. However, the relationship between
research articles and YouTube videos is not clear, and the purpose of the
present paper is to address this issue. We created new datasets using YouTube
videos and mentions of research articles on various online platforms. We found
that most of the articles cited in the videos are related to medicine and
biochemistry. We analyzed these datasets through statistical techniques and
visualization, and built machine learning models to predict (1) whether a
research article is cited in videos, (2) whether a research article cited in a
video achieves a level of popularity, and (3) whether a video citing a research
article becomes popular. The best models achieved F1 scores between 80% and
94%. According to our results, research articles mentioned in more tweets and
news coverage have a higher chance of receiving video citations. We also found
that video views are important for predicting citations and increasing research
articles' popularity and public engagement with science.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 19:25:38 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Shaikh",
"Abdul Rahman",
""
],
[
"Alhoori",
"Hamed",
""
],
[
"Sun",
"Maoyuan",
""
]
] |
new_dataset
| 0.999245 |
2209.02387
|
Igor Pivovarov
|
Igor Pivovarov and Sergey Shumsky
|
MARTI-4: new model of human brain, considering neocortex and basal
ganglia -- learns to play Atari game by reinforcement learning on a single
CPU
|
Accepted to AGI-2022 conference
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Deep Control, a new ML architecture of cortico-striatal brain
circuits, which uses the whole cortical column as a structural element instead of
a single neuron. Based on this architecture, we present MARTI, a new model of the
human brain considering the neocortex and basal ganglia. This model is designed to
implement expedient behavior and is capable of learning and achieving goals in
unknown environments. We introduce a novel surprise feeling mechanism that
significantly improves the reinforcement learning process through inner rewards. We
use the OpenAI Gym environment to demonstrate MARTI learning on a single CPU in
just several hours.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 20:23:49 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Pivovarov",
"Igor",
""
],
[
"Shumsky",
"Sergey",
""
]
] |
new_dataset
| 0.997098 |
2209.02391
|
Ashok Urlana
|
Chakravarthi Jada, Lokesh Ch.R.S, Ashok Urlana, Shridi Swamy
Yerubandi, Kantha Rao Bora, Gouse Basha Shaik, Pavan Baswani, Balaraju Karri
|
Butterflies: A new source of inspiration for futuristic aerial robotics
|
2 pages, 3 figures, Accepted as Late Breaking Report at ICRA 2017
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
  Nature is home to an enormous number of species. All of these species perform
complex activities for their survival using simple and elegant rules, and the
emergence of collective behavior remarkably supports these activities. One form
of collective behaviour is swarm intelligence, in which all agents possess the
same rules and capabilities. This equality, along with local cooperation among
the agents, leads to achieving global results. Swarm behaviours in nature
include bird formations, fish-school maneuvering, and ant movement. Recently,
one school of research has studied these behaviours and proposed artificial
paradigms such as Particle Swarm Optimization (PSO), Ant Colony Optimization
(ACO), and Glowworm Swarm Optimization (GSO). Another school of research has
used these models to design robotic platforms that detect (locate) multiple
signal sources such as light, fire, plumes, and odours; the Kinbots platform is
one such recent experiment. In the same line of thought, this extended abstract
presents the recently proposed butterfly-inspired metaphor, the corresponding
simulations, and ongoing experiments with their outcomes.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 18:16:49 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Jada",
"Chakravarthi",
""
],
[
"S",
"Lokesh Ch. R.",
""
],
[
"Urlana",
"Ashok",
""
],
[
"Yerubandi",
"Shridi Swamy",
""
],
[
"Bora",
"Kantha Rao",
""
],
[
"Shaik",
"Gouse Basha",
""
],
[
"Baswani",
"Pavan",
""
],
[
"Karri",
"Balaraju",
""
]
] |
new_dataset
| 0.997759 |
2209.02438
|
Umang Goenka
|
Umang Goenka, Aaryan Jagetia, Param Patil, Akshay Singh, Taresh
Sharma, Poonam Saini
|
Threat Detection In Self-Driving Vehicles Using Computer Vision
|
Presented in 3rd International Conference on Machine Learning, Image
Processing, Network Security and Data Sciences MIND-2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  On-road obstacle detection is an important field of research that falls
within the scope of intelligent transportation infrastructure systems. The use
of vision-based approaches results in an accurate and cost-effective solution
to such systems. In this research paper, we propose a threat detection
mechanism for autonomous self-driving cars that uses dashcam videos to detect
the presence of any unwanted obstacle on the road within the vehicle's visual
range. This information can assist the vehicle's program in navigating safely.
There are four major components: YOLO to identify objects, an advanced lane
detection algorithm, a multi-regression model to measure the distance of an
object from the camera, and the two-second rule together with speed limiting
for measuring safety. In addition, we have used the Car Crash Dataset (CCD) to
calculate the accuracy of the model. The YOLO algorithm gives an accuracy of
around 93%. The final accuracy of our proposed Threat Detection Model (TDM) is
82.65%.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 12:01:07 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Goenka",
"Umang",
""
],
[
"Jagetia",
"Aaryan",
""
],
[
"Patil",
"Param",
""
],
[
"Singh",
"Akshay",
""
],
[
"Sharma",
"Taresh",
""
],
[
"Saini",
"Poonam",
""
]
] |
new_dataset
| 0.984881 |
2209.02492
|
Abhishek Sharma
|
Abhishek Sharma, Pranjal Sharma, Darshan Pincha, Prateek Jain
|
Surya Namaskar: real-time advanced yoga pose recognition and correction
for smart healthcare
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
  Nowadays, yoga has gained worldwide attention because of the increasing
levels of stress in the modern way of life, and there are many resources for
learning yoga. The word yoga means a deep connection between the mind and body.
Today there is substantial medical and scientific evidence showing that the
very fundamentals of our brain activity, our chemistry, and even our genetic
content can be changed by practicing different systems of yoga. Surya Namaskar,
also known as the salute to the sun, is a yoga practice that combines eight
different forms and 12 asanas (4 asanas are repeated), devoted to the Hindu sun
god, Surya. Surya Namaskar offers a number of health benefits such as
strengthening muscles and helping to control blood sugar levels. Here, the
MediaPipe library is used to analyze Surya Namaskar poses. The pose is detected
in real time with advanced software as one performs Surya Namaskar in front of
the camera. The classifier identifies the form as one of the following:
Pranamasana, Hasta Padasana, Hasta Uttanasana, Ashwa Sanchalanasana, Ashtanga
Namaskar, Dandasana, or Bhujangasana and Svanasana. Deep learning-based
techniques (CNNs) are used to develop this model, with a model accuracy of
98.68 percent and an accuracy score of 0.75 in detecting correct yoga (Surya
Namaskar) posture. With this method, users can practice the desired pose and
check whether the pose they are performing is correct. It will help in doing
all the different poses of Surya Namaskar correctly and increase the efficiency
of the yoga practitioner. This paper describes the whole framework, which is to
be implemented in the model.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 13:37:25 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Sharma",
"Abhishek",
""
],
[
"Sharma",
"Pranjal",
""
],
[
"Pincha",
"Darshan",
""
],
[
"Jain",
"Prateek",
""
]
] |
new_dataset
| 0.999812 |
2209.02522
|
Mickael Cormier
|
Andreas Specker, Mickael Cormier, J\"urgen Beyerer
|
UPAR: Unified Pedestrian Attribute Recognition and Person Retrieval
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing soft-biometric pedestrian attributes is essential in video
surveillance and fashion retrieval. Recent works show promising results on
single datasets. Nevertheless, the generalization ability of these methods
under different attribute distributions, viewpoints, varying illumination, and
low resolutions remains rarely understood due to strong biases and varying
attributes in current datasets. To close this gap and support a systematic
investigation, we present UPAR, the Unified Person Attribute Recognition
Dataset. It is based on four well-known person attribute recognition datasets:
PA100K, PETA, RAPv2, and Market1501. We unify those datasets by providing 3.3M
additional annotations to harmonize 40 important binary attributes over 12
attribute categories across the datasets. We thus enable research on
generalizable pedestrian attribute recognition as well as attribute-based
person retrieval for the first time. Due to the vast variance of the image
distribution, pedestrian pose, scale, and occlusion, existing approaches are
greatly challenged both in terms of accuracy and efficiency. Furthermore, we
develop a strong baseline for PAR and attribute-based person retrieval based on
a thorough analysis of regularization methods. Our models achieve
state-of-the-art performance in cross-domain and specialization settings on
PA100k, PETA, RAPv2, Market1501-Attributes, and UPAR. We believe UPAR and our
strong baseline will contribute to the artificial intelligence community and
promote research on large-scale, generalizable attribute recognition systems.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 14:20:56 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Specker",
"Andreas",
""
],
[
"Cormier",
"Mickael",
""
],
[
"Beyerer",
"Jürgen",
""
]
] |
new_dataset
| 0.999773 |
2209.02604
|
Ziqi Yuan
|
Yihe Liu, Ziqi Yuan, Huisheng Mao, Zhiyun Liang, Wanqiuyue Yang,
Yuanzhe Qiu, Tie Cheng, Xiaoteng Li, Hua Xu, Kai Gao
|
Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup
Consistent Module
|
16pages, 7 figures, accepted by ICMI 2022
| null | null | null |
cs.MM cs.AI cs.CV cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
  Multimodal sentiment analysis (MSA), which aims to improve text-based
sentiment analysis with associated acoustic and visual modalities, is an
emerging research area due to its potential applications in Human-Computer
Interaction (HCI). However, existing research observes that the acoustic
and visual modalities contribute much less than the textual modality, a
phenomenon termed text-predominance. Under such circumstances, in this work, we
emphasize making non-verbal cues matter for the MSA task. Firstly, from the resource
perspective, we present the CH-SIMS v2.0 dataset, an extension and enhancement
of the CH-SIMS. Compared with the original dataset, the CH-SIMS v2.0 doubles
its size with another 2121 refined video segments with both unimodal and
multimodal annotations and collects 10161 unlabelled raw video segments with
rich acoustic and visual emotion-bearing context to highlight non-verbal cues
for sentiment prediction. Secondly, from the model perspective, benefiting from
the unimodal annotations and the unsupervised data in the CH-SIMS v2.0, the
Acoustic Visual Mixup Consistent (AV-MC) framework is proposed. The designed
modality mixup module can be regarded as an augmentation, which mixes the
acoustic and visual modalities from different videos. Through drawing
unobserved multimodal context along with the text, the model can learn to be
aware of different non-verbal contexts for sentiment prediction. Our
evaluations demonstrate that both CH-SIMS v2.0 and the AV-MC framework enable
further research on discovering emotion-bearing acoustic and visual cues and
pave the way to interpretable end-to-end HCI applications for real-world
scenarios.
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 03:31:33 GMT"
}
] | 2022-09-07T00:00:00 |
[
[
"Liu",
"Yihe",
""
],
[
"Yuan",
"Ziqi",
""
],
[
"Mao",
"Huisheng",
""
],
[
"Liang",
"Zhiyun",
""
],
[
"Yang",
"Wanqiuyue",
""
],
[
"Qiu",
"Yuanzhe",
""
],
[
"Cheng",
"Tie",
""
],
[
"Li",
"Xiaoteng",
""
],
[
"Xu",
"Hua",
""
],
[
"Gao",
"Kai",
""
]
] |
new_dataset
| 0.999665 |
1705.08684
|
Andrea Tassi
|
Ioannis Mavromatis, Andrea Tassi, Robert J. Piechocki, Andrew Nix
|
MmWave System for Future ITS: A MAC-layer Approach for V2X Beam Steering
|
Accepted for publication in IEEE VTC Fall 2017 conference proceedings
| null |
10.1109/VTCFall.2017.8288267
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Millimeter Waves (mmWave) systems have the potential of enabling
multi-gigabit-per-second communications in future Intelligent Transportation
Systems (ITSs). Unfortunately, because of the increased vehicular mobility,
they require frequent antenna beam realignments - thus significantly increasing
the in-band Beamforming (BF) overhead. In this paper, we propose Smart
Motion-prediction Beam Alignment (SAMBA), a MAC-layer algorithm that exploits
the information broadcast via DSRC beacons by all vehicles. Based on this
information, overhead-free BF is achieved by estimating the position of the
vehicle and predicting its motion. Moreover, adapting the beamwidth with
respect to the estimated position can further enhance the performance. Our
investigation shows that SAMBA outperforms the IEEE 802.11ad BF strategy, more
than doubling the data rate for sparse vehicle densities while enhancing the
network throughput proportionally to the number of vehicles. Furthermore, SAMBA
proved to be more efficient than the legacy BF algorithm in highly dynamic
vehicular environments, and is hence a viable solution for future ITS services.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2017 10:13:49 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Mavromatis",
"Ioannis",
""
],
[
"Tassi",
"Andrea",
""
],
[
"Piechocki",
"Robert J.",
""
],
[
"Nix",
"Andrew",
""
]
] |
new_dataset
| 0.99791 |
1806.04951
|
Ioannis Mavromatis
|
Ioannis Mavromatis and Andrea Tassi and Robert J. Piechocki and Andrew
Nix
|
A City-Scale ITS-G5 Network for Next-Generation Intelligent
Transportation Systems: Design Insights and Challenges
|
Accepted for publication to AdHoc-Now 2018
| null |
10.1007/978-3-030-00247-3_5
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As we move towards autonomous vehicles, a reliable Vehicle-to-Everything
(V2X) communication framework becomes of paramount importance. In this paper we
present the development and the performance evaluation of a real-world
vehicular networking testbed. Our testbed, deployed in the heart of the City of
Bristol, UK, is able to exchange sensor data in a V2X manner. We will describe
the testbed architecture and its operational modes. Then, we provide some
insight pertaining to the firmware operating on the network devices. The system
performance has been evaluated in a series of large-scale field trials, which
have shown that our solution represents a low-cost, high-quality framework for
V2X communications. Our system managed to achieve high packet delivery ratios
under different scenarios (urban, rural, highway) and at different locations
around the city. We have also identified the instability of the packet
transmission rate when using single-core devices, and we present some future
directions to address it.
|
[
{
"version": "v1",
"created": "Wed, 13 Jun 2018 11:19:02 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jul 2018 01:11:21 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Mavromatis",
"Ioannis",
""
],
[
"Tassi",
"Andrea",
""
],
[
"Piechocki",
"Robert J.",
""
],
[
"Nix",
"Andrew",
""
]
] |
new_dataset
| 0.999078 |
1903.10289
|
Andrea Tassi
|
Andrea Tassi and Ioannis Mavromatis and Robert J. Piechocki
|
A Dataset of Full-Stack ITS-G5 DSRC Communications over Licensed and
Unlicensed Bands Using a Large-Scale Urban Testbed
|
Manuscript submitted to Elsevier Data in Brief
| null |
10.1016/j.dib.2019.104368
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A dataset of measurements of ETSI ITS-G5 Dedicated Short Range Communications
(DSRC) is presented. Our dataset consists of network interactions happening
between two On-Board Units (OBUs) and four Road Side Units (RSUs). Each OBU was
fitted onto a vehicle driven across the FLOURISH Test Track in Bristol, UK.
Each RSU and OBU was equipped with two transceivers operating at different
frequencies. During our experiments, each transceiver broadcasts Cooperative
Awareness Messages (CAMs) over the licensed DSRC band, and over the unlicensed
Industrial, Scientific, and Medical radio (ISM) bands 2.4GHz-2.5GHz and
5.725GHz-5.875GHz. Each transmitted and received CAM is logged along with its
Received Signal Strength Indicator (RSSI) value and accurate positioning
information. The Media Access Control layer (MAC) layer Packet Delivery Rates
(PDRs) and RSSI values are also empirically calculated across the whole length
of the track for any transceiver. The dataset can be used to derive realistic
approximations of the PDR as a function of RSSI values under urban environments
and for both the DSRC and ISM bands -- thus, the dataset is suitable to
calibrate (simplified) physical layers of full-stack vehicular simulators where
the MAC layer PDR is a direct function of the RSSI. The dataset is not intended
to be used for signal propagation modelling.
The dataset can be found at
https://doi.org/10.5523/bris.eupowp7h3jl525yxhm3521f57 , and it has been
analyzed in the following paper: I. Mavromatis, A. Tassi, and R. J. Piechocki,
"Operating ITS-G5 DSRC over Unlicensed Bands: A City-Scale Performance
Evaluation," IEEE PIMRC 2019. [Online]. Available: arXiv:1904.00464.
|
[
{
"version": "v1",
"created": "Mon, 25 Mar 2019 13:07:52 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Apr 2019 06:33:41 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jun 2019 13:08:42 GMT"
},
{
"version": "v4",
"created": "Wed, 17 Jul 2019 09:47:07 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Tassi",
"Andrea",
""
],
[
"Mavromatis",
"Ioannis",
""
],
[
"Piechocki",
"Robert J.",
""
]
] |
new_dataset
| 0.999803 |
1904.00464
|
Ioannis Mavromatis
|
Ioannis Mavromatis and Andrea Tassi and Robert J. Piechocki
|
Operating ITS-G5 DSRC over Unlicensed Bands: A City-Scale Performance
Evaluation
|
IEEE PIMRC 2019, to appear
| null |
10.1109/PIMRC.2019.8904214
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Future Connected and Autonomous Vehicles (CAVs) will be equipped with a large
set of sensors. The large amount of generated sensor data is expected to be
exchanged with other CAVs and the road-side infrastructure. Both in Europe and
the US, Dedicated Short Range Communications (DSRC) systems, based on the IEEE
802.11p Physical Layer, are a key enabler for communication among vehicles.
Given the expected market penetration of connected vehicles, the licensed band
of 75 MHz, dedicated to DSRC communications, is expected to become increasingly
congested. In this paper, we investigate the performance of a vehicular
communication system, operated over the unlicensed bands 2.4 GHz - 2.5 GHz and
5.725 GHz - 5.875 GHz. Our experimental evaluation was carried out in a testing
track in the centre of Bristol, UK and our system is a full-stack ETSI ITS-G5
implementation. Our performance investigation compares key communication
metrics (e.g., packet delivery rate, received signal strength indicator)
measured by operating our system over the licensed DSRC and the considered
unlicensed bands. In particular, when operated over the 2.4 GHz - 2.5 GHz band,
our system achieves comparable performance to the case when the DSRC band is
used. On the other hand, as soon as the system is operated over the 5.725 GHz
- 5.875 GHz band, the packet delivery rate is 30% smaller compared to the case
when the DSRC band is employed. These findings prove that operating our system
over unlicensed ISM bands is a viable option. During our experimental
evaluation, we recorded all the generated network interactions, and the
complete data set has been made publicly available.
|
[
{
"version": "v1",
"created": "Sun, 31 Mar 2019 19:14:11 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jun 2019 08:51:45 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Jun 2019 12:57:52 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Mavromatis",
"Ioannis",
""
],
[
"Tassi",
"Andrea",
""
],
[
"Piechocki",
"Robert J.",
""
]
] |
new_dataset
| 0.998298 |
2004.07031
|
Guotai Wang
|
Qi Duan, Guotai Wang, Rui Wang, Chao Fu, Xinjun Li, Na Wang, Yechong
Huang, Xiaodi Huang, Tao Song, Liang Zhao, Xinglong Liu, Qing Xia, Zhiqiang
Hu, Yinan Chen and Shaoting Zhang
|
SenseCare: A Research Platform for Medical Image Informatics and
Interactive 3D Visualization
|
15 pages, 16 figures
| null | null | null |
cs.HC eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Clinical research on smart health has an increasing demand for intelligent
and clinic-oriented medical image computing algorithms and platforms that
support various applications. To this end, we have developed the SenseCare research
platform, which is designed to facilitate translational research on intelligent
diagnosis and treatment planning in various clinical scenarios. To enable
clinical research with Artificial Intelligence (AI), SenseCare provides a range
of AI toolkits for different tasks, including image segmentation, registration,
lesion and landmark detection from various image modalities ranging from
radiology to pathology. In addition, SenseCare is clinic-oriented and supports
a wide range of clinical applications such as diagnosis and surgical planning
for lung cancer, pelvic tumor, coronary artery disease, etc. SenseCare provides
several appealing functions and features such as advanced 3D visualization,
concurrent and efficient web-based access, fast data synchronization and high
data security, multi-center deployment, support for collaborative research,
etc. In this report, we present an overview of SenseCare as an efficient
platform providing comprehensive toolkits and high extensibility for
intelligent image analysis and clinical research in different application
scenarios. We also summarize the research outcome through the collaboration
with multiple hospitals.
|
[
{
"version": "v1",
"created": "Fri, 3 Apr 2020 03:17:04 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Sep 2022 13:03:13 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Duan",
"Qi",
""
],
[
"Wang",
"Guotai",
""
],
[
"Wang",
"Rui",
""
],
[
"Fu",
"Chao",
""
],
[
"Li",
"Xinjun",
""
],
[
"Wang",
"Na",
""
],
[
"Huang",
"Yechong",
""
],
[
"Huang",
"Xiaodi",
""
],
[
"Song",
"Tao",
""
],
[
"Zhao",
"Liang",
""
],
[
"Liu",
"Xinglong",
""
],
[
"Xia",
"Qing",
""
],
[
"Hu",
"Zhiqiang",
""
],
[
"Chen",
"Yinan",
""
],
[
"Zhang",
"Shaoting",
""
]
] |
new_dataset
| 0.984824 |
2007.03680
|
Ioannis Mavromatis Dr
|
Ioannis Mavromatis, Robert J. Piechocki, Mahesh Sooriyabandara, Arjun
Parekh
|
DRIVE: A Digital Network Oracle for Cooperative Intelligent
Transportation Systems
|
Accepted for publication at IEEE ISCC 2020
| null |
10.1109/ISCC50000.2020.9219683
| null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a world where Artificial Intelligence revolutionizes inference, prediction
and decision-making tasks, Digital Twins emerge as game-changing tools. A case
in point is the development and optimization of Cooperative Intelligent
Transportation Systems (C-ITSs): a confluence of cyber-physical digital
infrastructure and (semi)automated mobility. Herein we introduce Digital Twin
for self-dRiving Intelligent VEhicles (DRIVE). The developed framework tackles
shortcomings of traditional vehicular and network simulators. It provides a
flexible, modular, and scalable implementation to ensure large-scale, city-wide
experimentation with a moderate computational cost. The defining feature of our
Digital Twin is a unique architecture allowing for submission of sequential
queries, to which the Digital Twin provides instantaneous responses with the
"state of the world", and hence is an Oracle. With such bidirectional
interaction with external intelligent agents and realistic mobility traces,
DRIVE provides the environment for development, training and optimization of
Machine Learning based C-ITS solutions.
|
[
{
"version": "v1",
"created": "Tue, 7 Jul 2020 09:34:09 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Mavromatis",
"Ioannis",
""
],
[
"Piechocki",
"Robert J.",
""
],
[
"Sooriyabandara",
"Mahesh",
""
],
[
"Parekh",
"Arjun",
""
]
] |
new_dataset
| 0.981271 |
2112.15230
|
Yaroslav Golubev
|
Eman Abdullah AlOmar, Anton Ivanov, Zarina Kurbatova, Yaroslav
Golubev, Mohamed Wiem Mkaouer, Ali Ouni, Timofey Bryksin, Le Nguyen, Amit
Kini, Aditya Thakur
|
AntiCopyPaster: Extracting Code Duplicates As Soon As They Are
Introduced in the IDE
|
4 pages, 3 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We developed a plugin for IntelliJ IDEA called AntiCopyPaster, which tracks
the pasting of code fragments inside the IDE and suggests the appropriate
Extract Method refactoring to combat the propagation of duplicates. Unlike the
existing approaches, our tool is integrated into the developer's workflow and
proactively recommends refactorings. Since not all code fragments need to be
extracted, we developed a classification model to make this decision. When a
developer copies and pastes a code fragment, the plugin searches for duplicates
in the currently opened file, waits for a short period of time to allow the
developer to edit the code, and finally infers the refactoring decision
based on a number of features.
Our experimental study on a large dataset of 18,942 code fragments mined from
13 Apache projects shows that AntiCopyPaster correctly recommends Extract
Method refactorings with an F-score of 0.82. Furthermore, our survey of 59
developers reflects their satisfaction with the developed plugin's operation.
The plugin and its source code are publicly available on GitHub at
https://github.com/JetBrains-Research/anti-copy-paster. The demonstration video
can be found on YouTube: https://youtu.be/_wwHg-qFjJY.
|
[
{
"version": "v1",
"created": "Thu, 30 Dec 2021 22:51:04 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Sep 2022 16:19:15 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"AlOmar",
"Eman Abdullah",
""
],
[
"Ivanov",
"Anton",
""
],
[
"Kurbatova",
"Zarina",
""
],
[
"Golubev",
"Yaroslav",
""
],
[
"Mkaouer",
"Mohamed Wiem",
""
],
[
"Ouni",
"Ali",
""
],
[
"Bryksin",
"Timofey",
""
],
[
"Nguyen",
"Le",
""
],
[
"Kini",
"Amit",
""
],
[
"Thakur",
"Aditya",
""
]
] |
new_dataset
| 0.976164 |
2202.13015
|
Oksana Firman
|
Oksana Firman, Philipp Kindermann, Jonathan Klawitter, Boris Klemz,
Felix Klesen, Alexander Wolff
|
Outside-Obstacle Representations with All Vertices on the Outer Face
|
Appears in the Proceedings of the 30th International Symposium on
Graph Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An obstacle representation of a graph $G$ consists of a set of polygonal
obstacles and a drawing of $G$ as a visibility graph with respect to the
obstacles: vertices are mapped to points and edges to straight-line segments
such that each edge avoids all obstacles whereas each non-edge intersects at
least one obstacle. Obstacle representations have been investigated quite
intensely over the last few years. Here we focus on outside-obstacle
representations (OORs) that use only one obstacle in the outer face of the
drawing. It is known that every outerplanar graph admits such a representation
[Alpert, Koch, Laison; DCG 2010].
We strengthen this result by showing that every (partial) 2-tree has an OOR.
We also consider restricted versions of OORs where the vertices of the graph
lie on a convex polygon or a regular polygon. We characterize when the
complement of a tree and when a complete graph minus a simple cycle admits a
convex OOR. We construct regular OORs for all (partial) outerpaths, cactus
graphs, and grids.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 23:23:20 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 10:36:47 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Sep 2022 10:09:06 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Firman",
"Oksana",
""
],
[
"Kindermann",
"Philipp",
""
],
[
"Klawitter",
"Jonathan",
""
],
[
"Klemz",
"Boris",
""
],
[
"Klesen",
"Felix",
""
],
[
"Wolff",
"Alexander",
""
]
] |
new_dataset
| 0.988173 |
2203.04682
|
Ioannis Mavromatis Dr
|
Ioannis Mavromatis, Aleksandar Stanoev, Anthony J. Portelli, Charles
Lockie, Marius Ammann, Yichao Jin, Mahesh Sooriyabandara
|
Reliable IoT Firmware Updates: A Large-scale Mesh Network Performance
Investigation
|
Accepted to IEEE WCNC 2022, Austin, Texas, USA
| null |
10.1109/WCNC51071.2022.9771708
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Internet of Things (IoT) networks require regular firmware updates to ensure
enhanced security and stability. As we move towards methodologies of codifying
security and policy decisions and exchanging them over IoT large-scale
deployments (security-as-a-code), these demands should be considered a routine
operation. However, rolling out firmware updates to large-scale networks
presents a crucial challenge for constrained wireless environments with large
numbers of IoT devices. This paper initially investigates how the current
state-of-the-art protocols operate in such adverse conditions by measuring
various Quality-of-Service (QoS) Key Performance Indicators (KPIs) of the
shared wireless medium. We later discuss how Concurrent Transmissions (CT) can
extend the scalability of IoT protocols and ensure reliable firmware roll-outs
over large geographical areas. Measuring KPIs such as the mesh join time, the
throughput, and the number of nodes forming a network, we provide great insight
into how an IoT environment will behave under a large-scale firmware roll-out.
Finally, we conducted our performance investigation over the UMBRELLA platform,
a real-world IoT testbed deployed in Bristol, UK. This ensures our findings
represent a realistic IoT scenario and meet the strict QoS requirements of
today's IoT applications.
|
[
{
"version": "v1",
"created": "Wed, 9 Mar 2022 12:55:38 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Mavromatis",
"Ioannis",
""
],
[
"Stanoev",
"Aleksandar",
""
],
[
"Portelli",
"Anthony J.",
""
],
[
"Lockie",
"Charles",
""
],
[
"Ammann",
"Marius",
""
],
[
"Jin",
"Yichao",
""
],
[
"Sooriyabandara",
"Mahesh",
""
]
] |
new_dataset
| 0.996569 |
2205.05979
|
Xuesong Chen
|
Xuesong Chen, Shaoshuai Shi, Benjin Zhu, Ka Chun Cheung, Hang Xu and
Hongsheng Li
|
MPPNet: Multi-Frame Feature Intertwining with Proxy Points for 3D
Temporal Object Detection
|
Accepted by ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate and reliable 3D detection is vital for many applications including
autonomous driving vehicles and service robots. In this paper, we present a
flexible and high-performance 3D detection framework, named MPPNet, for 3D
temporal object detection with point cloud sequences. We propose a novel
three-hierarchy framework with proxy points for multi-frame feature encoding
and interactions to achieve better detection. The three hierarchies conduct
per-frame feature encoding, short-clip feature fusion, and whole-sequence
feature aggregation, respectively. To enable processing long-sequence point
clouds with reasonable computational resources, intra-group feature mixing and
inter-group feature attention are proposed to form the second and third feature
encoding hierarchies, which are recurrently applied for aggregating multi-frame
trajectory features. The proxy points not only act as consistent object
representations for each frame, but also serve as the courier to facilitate
feature interaction between frames. Experiments on the large Waymo Open Dataset
show that our approach outperforms state-of-the-art methods with large margins
when applied to both short (e.g., 4-frame) and long (e.g., 16-frame) point
cloud sequences. Code is available at https://github.com/open-mmlab/OpenPCDet.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 09:38:42 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Sep 2022 15:08:08 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Chen",
"Xuesong",
""
],
[
"Shi",
"Shaoshuai",
""
],
[
"Zhu",
"Benjin",
""
],
[
"Cheung",
"Ka Chun",
""
],
[
"Xu",
"Hang",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.990906 |
2205.09669
|
Wentao Chen
|
Zihan Li, Wentao Chen, Zhiqing Wei, Xingqi Luo, Bing Su
|
Semi-WTC: A Practical Semi-supervised Framework for Attack
Categorization through Weight-Task Consistency
|
Tech report
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised learning has been widely used for attack categorization, requiring
high-quality data and labels. However, the data is often imbalanced and it is
difficult to obtain sufficient annotations. Moreover, supervised models are
subject to real-world deployment issues, such as defending against unseen
artificial attacks. To tackle the challenges, we propose a semi-supervised
fine-grained attack categorization framework consisting of an encoder and a
two-branch structure and this framework can be generalized to different
supervised models. The multilayer perceptron with residual connection is used
as the encoder to extract features and reduce the complexity. The Recurrent
Prototype Module (RPM) is proposed to train the encoder effectively in a
semi-supervised manner. To alleviate the data imbalance problem, we introduce
the Weight-Task Consistency (WTC) into the iterative process of RPM by
assigning larger weights to classes with fewer samples in the loss function. In
addition, to cope with new attacks in real-world deployment, we propose an
Active Adaption Resampling (AAR) method, which can better discover the
distribution of unseen sample data and adapt the parameters of the encoder.
Experimental results show that our model outperforms the state-of-the-art
semi-supervised attack detection methods with a 3% improvement in
classification accuracy and a 90% reduction in training time.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 16:30:31 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2022 16:09:38 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Sep 2022 04:48:56 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Li",
"Zihan",
""
],
[
"Chen",
"Wentao",
""
],
[
"Wei",
"Zhiqing",
""
],
[
"Luo",
"Xingqi",
""
],
[
"Su",
"Bing",
""
]
] |
new_dataset
| 0.956373 |
2207.01583
|
Ashkan Mirzaei
|
Ashkan Mirzaei, Yash Kant, Jonathan Kelly, and Igor Gilitschenski
|
LaTeRF: Label and Text Driven Object Radiance Fields
| null |
European Conference on Computer Vision (ECCV) 2022
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obtaining 3D object representations is important for creating photo-realistic
simulations and for collecting AR and VR assets. Neural fields have shown their
effectiveness in learning a continuous volumetric representation of a scene
from 2D images, but acquiring object representations from these models with
weak supervision remains an open challenge. In this paper we introduce LaTeRF,
a method for extracting an object of interest from a scene given 2D images of
the entire scene, known camera poses, a natural language description of the
object, and a set of point-labels of object and non-object points in the input
images. To faithfully extract the object from the scene, LaTeRF extends the
NeRF formulation with an additional `objectness' probability at each 3D point.
Additionally, we leverage the rich latent space of a pre-trained CLIP model
combined with our differentiable object renderer, to inpaint the occluded parts
of the object. We demonstrate high-fidelity object extraction on both synthetic
and real-world datasets and justify our design choices through an extensive
ablation study.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 17:07:57 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2022 14:32:57 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Jul 2022 18:27:31 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Mirzaei",
"Ashkan",
""
],
[
"Kant",
"Yash",
""
],
[
"Kelly",
"Jonathan",
""
],
[
"Gilitschenski",
"Igor",
""
]
] |
new_dataset
| 0.995216 |
2207.05774
|
Rui Dilao
|
Rui Dil\~ao and Nuno Teixeira
|
A solvable walking model for a two-legged robot
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a solvable biped walking model based on an inverted pendulum with
two massless articulated legs capable of walking on uneven floors and inclined
planes. The stride of the two-legged robot results from the pendular motion of
a standing leg and the articulated motion of a trailing leg. Gaiting is
possible due to the alternating role of the legs, the standing and the trailing
leg, and the conservation of energy of the pendular motion. The motion on
uneven surfaces and inclined planes is possible by imposing the same maximal
opening angle between the two legs in the transition between strides and the
adaptability of the time of each stride. This model is solvable in closed form
and is reversible in time, modelling the different types of biped motion.
Several optimisation results for the speed of gaiting as a function of the
robot parameters have been derived.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 18:02:58 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Sep 2022 16:11:05 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Dilão",
"Rui",
""
],
[
"Teixeira",
"Nuno",
""
]
] |
new_dataset
| 0.991188 |
2208.13027
|
Yi-Lin Tsai
|
Yi-Lin Tsai (1), Jeremy Irvin (2), Suhas Chundi (2), Andrew Y. Ng (2),
Christopher B. Field (3, 4, and 5), Peter K. Kitanidis (1, 3, and 6) ((1)
Department of Civil and Environmental Engineering, Stanford University,
Stanford, CA, USA, (2) Department of Computer Science, Stanford University,
Stanford, CA, USA, (3) Woods Institute for the Environment, Stanford
University, Stanford, CA, USA, (4) Interdisciplinary Environmental Studies
Program, Stanford University, Stanford, CA, USA, (5) Department of Earth
System Science, Stanford University, Stanford, CA, USA, (6) Institute for
Computational and Mathematical Engineering, Stanford University, Stanford,
CA, USA)
|
Improving debris flow evacuation alerts in Taiwan using machine learning
|
Supplementary information:
https://drive.google.com/file/d/1Y17YxXo5rhIbUuZzwLo99pmttbh28v9X/view?usp=sharing
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Taiwan has the highest susceptibility to and fatalities from debris flows
worldwide. The existing debris flow warning system in Taiwan, which uses a
time-weighted measure of rainfall, leads to alerts when the measure exceeds a
predefined threshold. However, this system generates many false alarms and
misses a substantial fraction of the actual debris flows. Towards improving
this system, we implemented five machine learning models that input historical
rainfall data and predict whether a debris flow will occur within a selected
time. We found that a random forest model performed the best among the five
models and outperformed the existing system in Taiwan. Furthermore, we
identified the rainfall trajectories strongly related to debris flow
occurrences and explored trade-offs between the risks of missing debris flows
versus frequent false alerts. These results suggest the potential for machine
learning models trained on hourly rainfall data alone to save lives while
reducing false alerts.
|
[
{
"version": "v1",
"created": "Sat, 27 Aug 2022 14:39:58 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Sep 2022 04:39:32 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Tsai",
"Yi-Lin",
"",
"1"
],
[
"Irvin",
"Jeremy",
"",
"2"
],
[
"Chundi",
"Suhas",
"",
"2"
],
[
"Ng",
"Andrew Y.",
"",
"2"
],
[
"Field",
"Christopher B.",
"",
"3, 4, and 5"
],
[
"Kitanidis",
"Peter K.",
"",
"1, 3, and 6"
]
] |
new_dataset
| 0.960443 |
2208.14250
|
Johannes Zink
|
Grzegorz Gutowski and Florian Mittelst\"adt and Ignaz Rutter and
Joachim Spoerhase and Alexander Wolff and Johannes Zink
|
Coloring Mixed and Directional Interval Graphs
|
Appears in the Proceedings of the 30th International Symposium on
Graph Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A mixed graph has a set of vertices, a set of undirected edges, and a set of
directed arcs. A proper coloring of a mixed graph $G$ is a function $c$ that
assigns to each vertex in $G$ a positive integer such that, for each edge $uv$
in $G$, $c(u) \ne c(v)$ and, for each arc $uv$ in $G$, $c(u) < c(v)$. For a
mixed graph $G$, the chromatic number $\chi(G)$ is the smallest number of
colors in any proper coloring of $G$. A directional interval graph is a mixed
graph whose vertices correspond to intervals on the real line. Such a graph has
an edge between every two intervals where one is contained in the other and an
arc between every two overlapping intervals, directed towards the interval that
starts and ends to the right.
Coloring such graphs has applications in routing edges in layered orthogonal
graph drawing according to the Sugiyama framework; the colors correspond to the
tracks for routing the edges. We show how to recognize directional interval
graphs, and how to compute their chromatic number efficiently. On the other
hand, for mixed interval graphs, i.e., graphs where two intersecting intervals
can be connected by an edge or by an arc in either direction arbitrarily, we
prove that computing the chromatic number is NP-hard.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 13:24:28 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Sep 2022 15:16:43 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Gutowski",
"Grzegorz",
""
],
[
"Mittelstädt",
"Florian",
""
],
[
"Rutter",
"Ignaz",
""
],
[
"Spoerhase",
"Joachim",
""
],
[
"Wolff",
"Alexander",
""
],
[
"Zink",
"Johannes",
""
]
] |
new_dataset
| 0.952676 |
2208.14657
|
Qihua Feng
|
Qihua Feng, Peiya Li, Zhixun Lu, Chaozhuo Li, Zefang Wang, Zhiquan
Liu, Chunhui Duan, Feiran Huang
|
EViT: Privacy-Preserving Image Retrieval via Encrypted Vision
Transformer in Cloud Computing
|
29 pages
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image retrieval systems help users to browse and search among extensive
images in real-time. With the rise of cloud computing, retrieval tasks are
usually outsourced to cloud servers. However, the cloud scenario brings a
daunting challenge of privacy protection as cloud servers cannot be fully
trusted. To this end, image-encryption-based privacy-preserving image retrieval
schemes have been developed, which first extract features from cipher-images,
and then build retrieval models based on these features. Yet, most existing
approaches extract shallow features and design trivial retrieval models,
resulting in insufficient expressiveness for the cipher-images. In this paper,
we propose a novel paradigm named Encrypted Vision Transformer (EViT), which
advances the discriminative representation capability of cipher-images. First,
in order to capture comprehensive ruled information, we extract multi-level
local length sequence and global Huffman-code frequency features from the
cipher-images which are encrypted by stream cipher during JPEG compression
process. Second, we design the Vision Transformer-based retrieval model to
couple with the multi-level features, and propose two adaptive data
augmentation methods to improve representation power of the retrieval model.
Our proposal can be easily adapted to unsupervised and supervised settings via
self-supervised contrastive learning manner. Extensive experiments reveal that
EViT achieves both excellent encryption and retrieval performance,
outperforming current schemes in terms of retrieval accuracy by large margins
while protecting image privacy effectively. Code is publicly available at
\url{https://github.com/onlinehuazai/EViT}.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 07:07:21 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Feng",
"Qihua",
""
],
[
"Li",
"Peiya",
""
],
[
"Lu",
"Zhixun",
""
],
[
"Li",
"Chaozhuo",
""
],
[
"Wang",
"Zefang",
""
],
[
"Liu",
"Zhiquan",
""
],
[
"Duan",
"Chunhui",
""
],
[
"Huang",
"Feiran",
""
]
] |
new_dataset
| 0.985825 |
2209.00685
|
Seyed Ali Fakhrzadehgan
|
Ali Fakhrzadehgan, Prakash Ramrakhyani, Moinuddin K. Qureshi, Mattan
Erez
|
SecDDR: Enabling Low-Cost Secure Memories by Protecting the DDR
Interface
| null | null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The security goals of cloud providers and users include memory
confidentiality and integrity, which requires implementing Replay-Attack
protection (RAP). RAP can be achieved using integrity trees or mutually
authenticated channels. Integrity trees incur significant performance overheads
and are impractical for protecting large memories. Mutually authenticated
channels have been proposed only for packetized memory interfaces that address
only a very small niche domain and require fundamental changes to memory system
architecture. We propose SecDDR, a low-cost RAP that targets direct-attached
memories, like DDRx. SecDDR avoids memory-side data authentication, and thus,
only adds a small amount of logic to memory components and does not change the
underlying DDR protocol, making it practical for widespread adoption. In
contrast to prior mutual authentication proposals, which require trusting the
entire memory module, SecDDR targets untrusted modules by placing its limited
security logic on the DRAM die (or package) of the ECC chip. Our evaluation
shows that SecDDR performs within 1% of an encryption-only memory without RAP
and that SecDDR provides 18.8% and 7.8% average performance improvements (up to
190.4% and 24.8%) relative to a 64-ary integrity tree and an authenticated
channel, respectively.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 18:39:39 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Fakhrzadehgan",
"Ali",
""
],
[
"Ramrakhyani",
"Prakash",
""
],
[
"Qureshi",
"Moinuddin K.",
""
],
[
"Erez",
"Mattan",
""
]
] |
new_dataset
| 0.99074 |
2209.00757
|
Elizabeth Coda
|
Elizabeth Coda, Brad Clymer, Chance DeSmet, Yijing Watkins, Michael
Girard
|
Universal Fourier Attack for Time Series
| null | null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A wide variety of adversarial attacks have been proposed and explored using
image and audio data. These attacks are notoriously easy to generate digitally
when the attacker can directly manipulate the input to a model, but are much
more difficult to implement in the real-world. In this paper we present a
universal, time invariant attack for general time series data such that the
attack has a frequency spectrum primarily composed of the frequencies present
in the original data. The universality of the attack makes it fast and easy to
implement as no computation is required to add it to an input, while time
invariance is useful for real-world deployment. Additionally, the frequency
constraint ensures the attack can withstand filtering. We demonstrate the
effectiveness of the attack in two different domains, speech recognition and
unintended radiated emission, and show that the attack is robust against common
transform-and-compare defense pipelines.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 00:02:17 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Coda",
"Elizabeth",
""
],
[
"Clymer",
"Brad",
""
],
[
"DeSmet",
"Chance",
""
],
[
"Watkins",
"Yijing",
""
],
[
"Girard",
"Michael",
""
]
] |
new_dataset
| 0.972042 |
2209.00840
|
Simeng Han
|
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin
Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew
Burtell, David Peng, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor,
Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Shafiq Joty,
Alexander R. Fabbri, Wojciech Kryscinski, Xi Victoria Lin, Caiming Xiong,
Dragomir Radev
|
FOLIO: Natural Language Reasoning with First-Order Logic
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present FOLIO, a human-annotated, open-domain, and logically complex and
diverse dataset for reasoning in natural language (NL), equipped with first
order logic (FOL) annotations. FOLIO consists of 1,435 examples (unique
conclusions), each paired with one of 487 sets of premises which serve as rules
to be used to deductively reason for the validity of each conclusion. The
logical correctness of premises and conclusions is ensured by their parallel
FOL annotations, which are automatically verified by our FOL inference engine.
In addition to the main NL reasoning task, NL-FOL pairs in FOLIO automatically
constitute a new NL-FOL translation dataset using FOL as the logical form. Our
experiments on FOLIO systematically evaluate the FOL reasoning ability of
supervised fine-tuning on medium-sized language models (BERT, RoBERTa) and
few-shot prompting on large language models (GPT-NeoX, OPT, GPT-3, Codex). For
NL-FOL translation, we experiment with GPT-3 and Codex. Our results show that
one of the most capable Large Language Models (LLMs) publicly available, GPT-3
davinci, achieves only slightly better than random results with few-shot
prompting on a subset of FOLIO, and the model is especially bad at predicting
the correct truth values for False and Unknown conclusions. Our dataset and
code are available at https://github.com/Yale-LILY/FOLIO.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 06:50:11 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Han",
"Simeng",
""
],
[
"Schoelkopf",
"Hailey",
""
],
[
"Zhao",
"Yilun",
""
],
[
"Qi",
"Zhenting",
""
],
[
"Riddell",
"Martin",
""
],
[
"Benson",
"Luke",
""
],
[
"Sun",
"Lucy",
""
],
[
"Zubova",
"Ekaterina",
""
],
[
"Qiao",
"Yujie",
""
],
[
"Burtell",
"Matthew",
""
],
[
"Peng",
"David",
""
],
[
"Fan",
"Jonathan",
""
],
[
"Liu",
"Yixin",
""
],
[
"Wong",
"Brian",
""
],
[
"Sailor",
"Malcolm",
""
],
[
"Ni",
"Ansong",
""
],
[
"Nan",
"Linyong",
""
],
[
"Kasai",
"Jungo",
""
],
[
"Yu",
"Tao",
""
],
[
"Zhang",
"Rui",
""
],
[
"Joty",
"Shafiq",
""
],
[
"Fabbri",
"Alexander R.",
""
],
[
"Kryscinski",
"Wojciech",
""
],
[
"Lin",
"Xi Victoria",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Radev",
"Dragomir",
""
]
] |
new_dataset
| 0.99972 |
2209.00860
|
Sifan Zhou
|
Jiayao Shan, Sifan Zhou, Yubo Cui, Zheng Fang
|
Real-time 3D Single Object Tracking with Transformer
|
IEEE Transactions on Multimedia. arXiv admin note: text overlap with
arXiv:2108.06455
| null |
10.1109/TMM.2022.3146714
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
LiDAR-based 3D single object tracking is a challenging issue in robotics and
autonomous driving. Currently, existing approaches usually suffer from the
problem that objects at long distance often have very sparse or
partially-occluded point clouds, which makes the features extracted by the
model ambiguous. Ambiguous features will make it hard to locate the target
object and finally lead to bad tracking results. To solve this problem, we
utilize the powerful Transformer architecture and propose a
Point-Track-Transformer (PTT) module for point cloud-based 3D single object
tracking task. Specifically, PTT module generates fine-tuned attention features
by computing attention weights, guiding the tracker to focus on the important
features of the target and improving the tracking ability in complex
scenarios. To evaluate our PTT module, we embed PTT into the dominant method
and construct a novel 3D SOT tracker named PTT-Net. In PTT-Net, we embed PTT
into the voting stage and proposal generation stage, respectively. PTT module
in the voting stage could model the interactions among point patches, which
learns context-dependent features. Meanwhile, PTT module in the proposal
generation stage could capture the contextual information between object and
background. We evaluate our PTT-Net on KITTI and NuScenes datasets.
Experimental results demonstrate the effectiveness of PTT module and the
superiority of PTT-Net, which surpasses the baseline by a noticeable margin,
~10% in the Car category. Meanwhile, our method also has a significant
performance improvement in sparse scenarios. In general, the combination of
transformer and tracking pipeline enables our PTT-Net to achieve
state-of-the-art performance on both two datasets. Additionally, PTT-Net could
run in real-time at 40FPS on NVIDIA 1080Ti GPU. Our code is open-sourced for
the research community at https://github.com/shanjiayao/PTT.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 07:36:20 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Shan",
"Jiayao",
""
],
[
"Zhou",
"Sifan",
""
],
[
"Cui",
"Yubo",
""
],
[
"Fang",
"Zheng",
""
]
] |
new_dataset
| 0.996817 |
2209.00943
|
Ricardo Morla
|
Gon\c{c}alo Xavier, Carlos Novo, Ricardo Morla
|
Tweaking Metasploit to Evade Encrypted C2 Traffic Detection
| null | null | null | null |
cs.CR cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Command and Control (C2) communication is a key component of any structured
cyber-attack. As such, security operations actively try to detect this type of
communication in their networks. This poses a problem for legitimate pentesters
that try to remain undetected, since commonly used pentesting tools, such as
Metasploit, generate constant traffic patterns that are easily distinguishable
from regular web traffic. In this paper we start with these identifiable
patterns in Metasploit's C2 traffic and show that a machine learning-based
detector is able to detect the presence of such traffic with high accuracy,
even when encrypted. We then outline and implement a set of modifications to
the Metasploit framework in order to decrease the detection rates of such
classifier. To evaluate the performance of these modifications, we use two
threat models with increasing awareness of these modifications. We look at the
detection evasion performance and at the byte count and runtime overhead of the
modifications. Our results show that for the second, increased-awareness threat
model the framework-side traffic modifications yield a better detection
avoidance rate (90%) than payload-side only modifications (50%). We also show
that although the modifications use up to 3 times more TLS payload bytes than
the original, the runtime does not significantly change and the total number of
bytes (including TLS payload) reduces.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 10:56:15 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Xavier",
"Gonçalo",
""
],
[
"Novo",
"Carlos",
""
],
[
"Morla",
"Ricardo",
""
]
] |
new_dataset
| 0.999072 |
2209.01004
|
Giuseppe Liotta
|
William J. Lenhart and Giuseppe Liotta
|
Mutual Witness Gabriel Drawings of Complete Bipartite Graphs
|
Appears in the Proceedings of the 30th International Symposium on
Graph Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Let $\Gamma$ be a straight-line drawing of a graph and let $u$ and $v$ be two
vertices of $\Gamma$. The Gabriel disk of $u,v$ is the disk having $u$ and $v$
as antipodal points. A pair $\langle \Gamma_0,\Gamma_1 \rangle$ of
vertex-disjoint straight-line drawings form a mutual witness Gabriel drawing
when, for $i=0,1$, any two vertices $u$ and $v$ of $\Gamma_i$ are adjacent if
and only if their Gabriel disk does not contain any vertex of $\Gamma_{1-i}$.
We characterize the pairs $\langle G_0,G_1 \rangle $ of complete bipartite
graphs that admit a mutual witness Gabriel drawing. The characterization leads
to a linear time testing algorithm. We also show that when at least one of the
graphs in the pair $\langle G_0, G_1 \rangle $ is complete $k$-partite with
$k>2$ and all partition sets in the two graphs have size greater than one, the
pair does not admit a mutual witness Gabriel drawing.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 12:39:48 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Lenhart",
"William J.",
""
],
[
"Liotta",
"Giuseppe",
""
]
] |
new_dataset
| 0.999751 |
2209.01012
|
Samuele Vinanzi
|
Samuele Vinanzi and Angelo Cangelosi
|
CASPER: Cognitive Architecture for Social Perception and Engagement in
Robots
|
16 pages, 13 figures
| null | null | null |
cs.RO cs.AI cs.SC
|
http://creativecommons.org/licenses/by/4.0/
|
Our world is being increasingly pervaded by intelligent robots with varying
degrees of autonomy. To seamlessly integrate themselves in our society, these
machines should possess the ability to navigate the complexities of our daily
routines even in the absence of a human's direct input. In other words, we want
these robots to understand the intentions of their partners with the purpose of
predicting the best way to help them. In this paper, we present CASPER
(Cognitive Architecture for Social Perception and Engagement in Robots): a
symbolic cognitive architecture that uses qualitative spatial reasoning to
anticipate the pursued goal of another agent and to calculate the best
collaborative behavior. This is performed through an ensemble of parallel
processes that model a low-level action recognition and a high-level goal
understanding, both of which are formally verified. We have tested this
architecture in a simulated kitchen environment and the results we have
collected show that the robot is able to both recognize an ongoing goal and to
properly collaborate towards its achievement. This demonstrates a new use of
Qualitative Spatial Relations applied to the problem of intention reading in
the domain of human-robot interaction.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 10:15:03 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Vinanzi",
"Samuele",
""
],
[
"Cangelosi",
"Angelo",
""
]
] |
new_dataset
| 0.985146 |
2209.01065
|
Alfio Di Mauro
|
Alfio Di Mauro and Moritz Scherer and Davide Rossi and Luca Benini
|
Kraken: A Direct Event/Frame-Based Multi-sensor Fusion SoC for
Ultra-Efficient Visual Processing in Nano-UAVs
| null | null | null | null |
cs.AR eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Small-size unmanned aerial vehicles (UAV) have the potential to dramatically
increase safety and reduce cost in applications like critical infrastructure
maintenance and post-disaster search and rescue. Many scenarios require UAVs to
shrink toward nano and pico-size form factors. The key open challenge to
achieve true autonomy on Nano-UAVs is to run complex visual tasks like object
detection, tracking, navigation and obstacle avoidance fully on board, at high
speed and robustness, under tight payload and power constraints. With the
Kraken SoC, fabricated in 22nm FDX technology, we demonstrate a
multi-visual-sensor capability exploiting both event-based and BW/RGB imagers,
combining their output for multi-functional visual tasks previously impossible
on a single low-power chip for Nano-UAVs. Kraken is an ultra-low-power,
heterogeneous SoC architecture integrating three acceleration engines and a
vast set of peripherals to enable efficient interfacing with standard
frame-based sensors and novel event-based DVS. Kraken enables highly sparse
event-driven sub-uJ/inf SNN inference on a dedicated neuromorphic
energy-proportional accelerator. Moreover, it can perform frame-based inference
by combining a 1.8 TOp/s/W 8-core RISC-V processor cluster with mixed-precision
DNN extensions with a 1036 TOp/s/W TNN accelerator.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 15:36:35 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Di Mauro",
"Alfio",
""
],
[
"Scherer",
"Moritz",
""
],
[
"Rossi",
"Davide",
""
],
[
"Benini",
"Luca",
""
]
] |
new_dataset
| 0.999023 |
2209.01118
|
Khulud Alharthi
|
Khulud Alharthi, Zahraa S Abdallah, Sabine Hauert
|
Understandable Controller Extraction from Video Observations of Swarms
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Swarm behavior emerges from the local interaction of agents and their
environment often encoded as simple rules. Extracting the rules by watching a
video of the overall swarm behavior could help us study and control swarm
behavior in nature, or artificial swarms that have been designed by external
actors. It could also serve as a new source of inspiration for swarm robotics.
Yet extracting such rules is challenging as there is often no visible link
between the emergent properties of the swarm and their local interactions. To
this end, we develop a method to automatically extract understandable swarm
controllers from video demonstrations. The method uses evolutionary algorithms
driven by a fitness function that compares eight high-level swarm metrics. The
method is able to extract many controllers (behavior trees) in a simple
collective movement task. We then provide a qualitative analysis of behaviors
that resulted in different trees, but similar behaviors. This provides the
first steps toward automatic extraction of swarm controllers based on
observations.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 15:28:28 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Alharthi",
"Khulud",
""
],
[
"Abdallah",
"Zahraa S",
""
],
[
"Hauert",
"Sabine",
""
]
] |
new_dataset
| 0.978124 |
2209.01190
|
Irene Parada
|
Oswin Aichholzer, Alfredo Garc\'ia, Irene Parada, Birgit Vogtenhuber,
and Alexandra Weinberger
|
Shooting Stars in Simple Drawings of $K_{m,n}$
|
Appears in the Proceedings of the 30th International Symposium on
Graph Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.CG math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
Simple drawings are drawings of graphs in which two edges have at most one
common point (either a common endpoint, or a proper crossing). It has been an
open question whether every simple drawing of a complete bipartite graph
$K_{m,n}$ contains a plane spanning tree as a subdrawing. We answer this
question to the positive by showing that for every simple drawing of $K_{m,n}$
and for every vertex $v$ in that drawing, the drawing contains a shooting star
rooted at $v$, that is, a plane spanning tree containing all edges incident to
$v$.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 17:39:57 GMT"
}
] | 2022-09-05T00:00:00 |
[
[
"Aichholzer",
"Oswin",
""
],
[
"García",
"Alfredo",
""
],
[
"Parada",
"Irene",
""
],
[
"Vogtenhuber",
"Birgit",
""
],
[
"Weinberger",
"Alexandra",
""
]
] |
new_dataset
| 0.999789 |
2106.09369
|
Moritz Wolter
|
Moritz Wolter and Felix Blanke and Raoul Heese and Jochen Garcke
|
Wavelet-Packets for Deepfake Image Analysis and Detection
|
Source code is available at
https://github.com/gan-police/frequency-forensics and
https://github.com/v0lta/PyTorch-Wavelet-Toolbox
|
Machine Learning, Special Issue of the ECML PKDD 2022 Journal
Track
|
10.1007/s10994-022-06225-5
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As neural networks become able to generate realistic artificial images, they
have the potential to improve movies, music, video games and make the internet
an even more creative and inspiring place. Yet, the latest technology
potentially enables new digital ways to lie. In response, the need for a
diverse and reliable method toolbox arises to identify artificial images and
other content. Previous work primarily relies on pixel-space CNNs or the
Fourier transform. To the best of our knowledge, synthesized fake image
analysis and detection methods based on a multi-scale wavelet representation,
localized in both space and frequency, have been absent thus far. The wavelet
transform conserves spatial information to a degree, which allows us to present
a new analysis. Comparing the wavelet coefficients of real and fake images
allows interpretation. Significant differences are identified. Additionally,
this paper proposes to learn a model for the detection of synthetic images
based on the wavelet-packet representation of natural and GAN-generated images.
Our lightweight forensic classifiers exhibit competitive or improved
performance at comparatively small network sizes, as we demonstrate on the
FFHQ, CelebA and LSUN source identification problems. Furthermore, we study the
binary FaceForensics++ fake-detection problem.
|
[
{
"version": "v1",
"created": "Thu, 17 Jun 2021 10:41:44 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Oct 2021 11:48:41 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Mar 2022 10:11:52 GMT"
},
{
"version": "v4",
"created": "Thu, 1 Sep 2022 10:24:07 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Wolter",
"Moritz",
""
],
[
"Blanke",
"Felix",
""
],
[
"Heese",
"Raoul",
""
],
[
"Garcke",
"Jochen",
""
]
] |
new_dataset
| 0.994797 |
2112.07158
|
Jonathan Conroy
|
Jonathan B. Conroy and Csaba D. T\'oth
|
Hop-Spanners for Geometric Intersection Graphs
|
34 pages, 19 figures, full version of an extended abstract in the
Proceedings of SoCG 2022
| null | null | null |
cs.CG math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
A $t$-spanner of a graph $G=(V,E)$ is a subgraph $H=(V,E')$ that contains a
$uv$-path of length at most $t$ for every $uv\in E$. It is known that every
$n$-vertex graph admits a $(2k-1)$-spanner with $O(n^{1+1/k})$ edges for $k\geq
1$. This bound is the best possible for $1\leq k\leq 9$ and is conjectured to
be optimal due to Erd\H{o}s' girth conjecture.
We study $t$-spanners for $t\in \{2,3\}$ for geometric intersection graphs in
the plane. These spanners are also known as \emph{$t$-hop spanners} to
emphasize the use of graph-theoretic distances (as opposed to Euclidean
distances between the geometric objects or their centers). We obtain the
following results: (1) Every $n$-vertex unit disk graph (UDG) admits a 2-hop
spanner with $O(n)$ edges; improving upon the previous bound of $O(n\log n)$.
(2) The intersection graph of $n$ axis-aligned fat rectangles admits a 2-hop
spanner with $O(n\log n)$ edges, and this bound is tight up to a factor of
$\log \log n$. (3) The intersection graph of $n$ fat convex bodies in the plane
admits a 3-hop spanner with $O(n\log n)$ edges. (4) The intersection graph of
$n$ axis-aligned rectangles admits a 3-hop spanner with $O(n\log^2 n)$ edges.
|
[
{
"version": "v1",
"created": "Tue, 14 Dec 2021 04:41:19 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 04:20:45 GMT"
},
{
"version": "v3",
"created": "Wed, 31 Aug 2022 19:45:05 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Conroy",
"Jonathan B.",
""
],
[
"Tóth",
"Csaba D.",
""
]
] |
new_dataset
| 0.996232 |
2112.10203
|
Tao Hu
|
Tao Hu, Tao Yu, Zerong Zheng, He Zhang, Yebin Liu, Matthias Zwicker
|
HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars
|
Accepted to 3DV 2022. See more results at
https://www.cs.umd.edu/~taohu/hvtr/ Demo:
https://www.youtube.com/watch?v=LE0-YpbLlkY
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel neural rendering pipeline, Hybrid Volumetric-Textural
Rendering (HVTR), which synthesizes virtual human avatars from arbitrary poses
efficiently and at high quality. First, we learn to encode articulated human
motions on a dense UV manifold of the human body surface. To handle complicated
motions (e.g., self-occlusions), we then leverage the encoded information on
the UV manifold to construct a 3D volumetric representation based on a dynamic
pose-conditioned neural radiance field. While this allows us to represent 3D
geometry with changing topology, volumetric rendering is computationally heavy.
Hence we employ only a rough volumetric representation using a pose-conditioned
downsampled neural radiance field (PD-NeRF), which we can render efficiently at
low resolutions. In addition, we learn 2D textural features that are fused with
rendered volumetric features in image space. The key advantage of our approach
is that we can then convert the fused features into a high-resolution,
high-quality avatar by a fast GAN-based textural renderer. We demonstrate that
hybrid rendering enables HVTR to handle complicated motions, render
high-quality avatars under user-controlled poses/shapes and even loose
clothing, and most importantly, be efficient at inference time. Our
experimental results also demonstrate state-of-the-art quantitative results.
|
[
{
"version": "v1",
"created": "Sun, 19 Dec 2021 17:34:15 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 16:05:40 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Hu",
"Tao",
""
],
[
"Yu",
"Tao",
""
],
[
"Zheng",
"Zerong",
""
],
[
"Zhang",
"He",
""
],
[
"Liu",
"Yebin",
""
],
[
"Zwicker",
"Matthias",
""
]
] |
new_dataset
| 0.996713 |
2112.12331
|
Taher A. Ghaleb
|
Sakina Fatima, Taher A. Ghaleb, and Lionel Briand
|
Flakify: A Black-Box, Language Model-based Predictor for Flaky Tests
| null |
IEEE Transactions on Software Engineering (TSE). (2022) 1-17
|
10.1109/TSE.2022.3201209
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software testing assures that code changes do not adversely affect existing
functionality. However, a test case can be flaky, i.e., passing and failing
across executions, even for the same version of the source code. Flaky test
cases introduce overhead to software development as they can lead to
unnecessary attempts to debug production or testing code. The state-of-the-art
ML-based flaky test case predictors rely on pre-defined sets of features that
are either project-specific or require access to production code, which is not
always available to software test engineers. Therefore, in this paper, we
propose Flakify, a black-box, language model-based predictor for flaky test
cases. Flakify relies exclusively on the source code of test cases, thus not
requiring (a) access to production code (black-box), (b) rerunning test cases,
or (c) pre-defining features. To this end, we employed CodeBERT, a pre-trained
language model, and fine-tuned it to predict flaky test cases using the source
code of test cases. We evaluated Flakify on two publicly available datasets
(FlakeFlagger and IDoFT) for flaky test cases and compared our technique with
the FlakeFlagger approach using two different evaluation procedures:
cross-validation and per-project validation. Flakify achieved high F1-scores on
both datasets using cross-validation and per-project validation, and surpassed
FlakeFlagger by 10 and 18 percentage points in terms of precision and recall,
respectively, when evaluated on the FlakeFlagger dataset, thus reducing the
cost bound to be wasted on unnecessarily debugging test cases and production
code by the same percentages. Flakify also achieved significantly higher
prediction results when used to predict test cases on new projects, suggesting
better generalizability over FlakeFlagger. Our results further show that a
black-box version of FlakeFlagger is not a viable option for predicting flaky
test cases.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 02:58:59 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Jun 2022 20:48:49 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Aug 2022 04:17:46 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Fatima",
"Sakina",
""
],
[
"Ghaleb",
"Taher A.",
""
],
[
"Briand",
"Lionel",
""
]
] |
new_dataset
| 0.998071 |
2201.11692
|
Tianshuo Cong
|
Tianshuo Cong and Xinlei He and Yang Zhang
|
SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained
Encoders
|
Accepted by CCS 2022
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Self-supervised learning is an emerging machine learning paradigm. Compared
to supervised learning which leverages high-quality labeled datasets,
self-supervised learning relies on unlabeled datasets to pre-train powerful
encoders which can then be treated as feature extractors for various downstream
tasks. The huge consumption of data and computational resources makes the
encoders themselves valuable intellectual property of the model owner. Recent
research has shown that a machine learning model's copyright is
threatened by model stealing attacks, which aim to train a surrogate model to
mimic the behavior of a given model. We empirically show that pre-trained
encoders are highly vulnerable to model stealing attacks. However, most of the
current efforts of copyright protection algorithms such as watermarking
concentrate on classifiers. Meanwhile, the intrinsic challenges of pre-trained
encoder's copyright protection remain largely unstudied. We fill the gap by
proposing SSLGuard, the first watermarking scheme for pre-trained encoders.
Given a clean pre-trained encoder, SSLGuard injects a watermark into it and
outputs a watermarked version. The shadow training technique is also applied to
preserve the watermark under potential model stealing attacks. Our extensive
evaluation shows that SSLGuard is effective in watermark injection and
verification, and it is robust against model stealing and other watermark
removal attacks such as input noising, output perturbing, overwriting, model
pruning, and fine-tuning.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 17:41:54 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Jul 2022 21:12:46 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jul 2022 13:28:36 GMT"
},
{
"version": "v4",
"created": "Wed, 31 Aug 2022 20:08:15 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Cong",
"Tianshuo",
""
],
[
"He",
"Xinlei",
""
],
[
"Zhang",
"Yang",
""
]
] |
new_dataset
| 0.988035 |
2207.00681
|
Ashis Banerjee
|
Benjamin Wong, Wade Marquette, Nikolay Bykov, Tyler M. Paine, and
Ashis G. Banerjee
|
Human-Assisted Robotic Detection of Foreign Object Debris Inside
Confined Spaces of Marine Vessels Using Probabilistic Mapping
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many complex vehicular systems, such as large marine vessels, contain
confined spaces like water tanks, which are critical for the safe functioning
of the vehicles. It is particularly hazardous for humans to inspect such spaces
due to limited accessibility, poor visibility, and unstructured configuration.
While robots provide a viable alternative, they encounter the same set of
challenges in realizing robust autonomy. In this work, we specifically address
the problem of detecting foreign object debris (FODs) left inside the confined
spaces using a visual mapping-based system that relies on Mahalanobis
distance-driven comparisons between the nominal and online maps for local
outlier identification. Simulation trials show extremely high recall but low
precision for the outlier identification method. The assistance of remote
humans is, therefore, taken to deal with the precision problem by going over
the close-up robot camera images of the outlier regions. An online survey is
conducted to show the usefulness of this assistance process. Physical
experiments are also reported on a GPU-enabled mobile robot platform inside a
scaled-down, prototype tank to demonstrate the feasibility of the FOD detection
system.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2022 23:09:57 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2022 23:24:19 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Wong",
"Benjamin",
""
],
[
"Marquette",
"Wade",
""
],
[
"Bykov",
"Nikolay",
""
],
[
"Paine",
"Tyler M.",
""
],
[
"Banerjee",
"Ashis G.",
""
]
] |
new_dataset
| 0.996837 |
2207.14702
|
Dipak Kumar Bhunia
|
Dipak Kumar Bhunia, Cristina Fern\'andez-C\'ordoba, Merc\`e Villanueva
|
$\mathbb{Z}_p\mathbb{Z}_{p^2}\dots\mathbb{Z}_{p^s}$-Additive Generalized
Hadamard Codes
|
arXiv admin note: text overlap with arXiv:2203.15657,
arXiv:2203.15407
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The $\mathbb{Z}_p\mathbb{Z}_{p^2}\dots\mathbb{Z}_{p^s}$-additive codes are
subgroups of $\mathbb{Z}_p^{\alpha_1} \times \mathbb{Z}_{p^2}^{\alpha_2} \times
\cdots \times \mathbb{Z}_{p^s}^{\alpha_s}$, and can be seen as linear codes
over $\mathbb{Z}_p$ when $\alpha_i=0$ for all $i \in \{2,\dots, s\}$, a
$\mathbb{Z}_{p^s}$-additive code when $\alpha_i=0$ for all $i \in \{1,\dots,
s-1\}$, or a $\mathbb{Z}_p\mathbb{Z}_{p^2}$-additive code when $s=2$, or a
$\mathbb{Z}_2\mathbb{Z}_4$-additive code when $p=2$ and $s=2$. A
$\mathbb{Z}_p\mathbb{Z}_{p^2}\dots\mathbb{Z}_{p^s}$-linear generalized Hadamard
(GH) code is a GH code over $\mathbb{Z}_p$ which is the Gray map image of a
$\mathbb{Z}_p\mathbb{Z}_{p^2}\dots\mathbb{Z}_{p^s}$-additive code. In this
paper, we generalize some known results for
$\mathbb{Z}_p\mathbb{Z}_{p^2}\dots\mathbb{Z}_{p^s}$-linear GH codes with $p$
prime and $s\geq 2$. First, we give a recursive construction of
$\mathbb{Z}_p\mathbb{Z}_{p^2}\dots \mathbb{Z}_{p^s}$-additive GH codes of type
$(\alpha_1,\dots,\alpha_s;t_1,\dots,t_s)$ with $t_1\geq 1,
t_2,\dots,t_{s-1}\geq 0$, and $t_s\geq1$. Then, we show for which types the
corresponding $\mathbb{Z}_p\mathbb{Z}_{p^2}\dots\mathbb{Z}_{p^s}$-linear GH
codes are nonlinear over $\mathbb{Z}_p$. We also compute the kernel and its
dimension whenever they are nonlinear.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 14:19:09 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 13:15:53 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Bhunia",
"Dipak Kumar",
""
],
[
"Fernández-Córdoba",
"Cristina",
""
],
[
"Villanueva",
"Mercè",
""
]
] |
new_dataset
| 0.995814 |
2208.01040
|
Yuxiang Zhao
|
Zhuomin Chai, Yuxiang Zhao, Yibo Lin, Wei Liu, Runsheng Wang, Ru Huang
|
CircuitNet: An Open-Source Dataset for Machine Learning Applications in
Electronic Design Automation (EDA)
| null |
SCIENCE CHINA Information Sciences 2022
|
10.1007/s11432-022-3571-8
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The electronic design automation (EDA) community has been actively exploring
machine learning (ML) for very large-scale integrated computer-aided design
(VLSI CAD). Many studies explored learning-based techniques for cross-stage
prediction tasks in the design flow to achieve faster design convergence.
Although building ML models usually requires a large amount of data, most
studies can only generate small internal datasets for validation because of the
lack of large public datasets. In this essay, we present the first open-source
dataset called CircuitNet for ML tasks in VLSI CAD.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 01:49:28 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 08:15:56 GMT"
},
{
"version": "v3",
"created": "Sat, 27 Aug 2022 14:02:53 GMT"
},
{
"version": "v4",
"created": "Thu, 1 Sep 2022 03:37:05 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Chai",
"Zhuomin",
""
],
[
"Zhao",
"Yuxiang",
""
],
[
"Lin",
"Yibo",
""
],
[
"Liu",
"Wei",
""
],
[
"Wang",
"Runsheng",
""
],
[
"Huang",
"Ru",
""
]
] |
new_dataset
| 0.99981 |
2208.14613
|
Naser Al Madi
|
Naser Al Madi
|
How Readable is Model-generated Code? Examining Readability and Visual
Inspection of GitHub Copilot
| null | null |
10.1145/3551349.3560438
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Recent advancements in large language models have motivated the
practical use of such models in code generation and program synthesis. However,
little is known about the effects of such tools on code readability and visual
attention in practice.
Objective: In this paper, we focus on GitHub Copilot to address the issues of
readability and visual inspection of model generated code. Readability and low
complexity are vital aspects of good source code, and visual inspection of
generated code is important in light of automation bias.
Method: Through a human experiment (n=21) we compare model generated code to
code written completely by human programmers. We use a combination of static
code analysis and human annotators to assess code readability, and we use eye
tracking to assess the visual inspection of code.
Results: Our results suggest that model generated code is comparable in
complexity and readability to code written by human pair programmers. At the
same time, eye tracking data suggests, to a statistically significant level,
that programmers direct less visual attention to model generated code.
Conclusion: Our findings highlight that reading code is more important than
ever, and programmers should beware of complacency and automation bias with
model generated code.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 03:21:31 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 01:50:04 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Madi",
"Naser Al",
""
]
] |
new_dataset
| 0.995711 |
2209.00076
|
Kyle Evans
|
Kyle Evans, Katherine T. Chang
|
Connecticut Redistricting Analysis
|
13 pages, 3 tables
| null | null | null |
cs.CY cs.SI stat.AP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Connecticut passed their new state House of Representatives district plan on
November 18, 2021 and passed their new state Senate district plan on November
23, 2021. Each passed unanimously in their 9-person bipartisan Reapportionment
Commission; however, the process has been criticized for legislators
controlling the process and for the negotiations that serve to protect
incumbents. This paper investigates the extent of incumbent protection in the
new Assembly maps while also providing summary data on the new districts. The
impact of new districts on incumbents is analyzed through the location of
district borders, through an ensemble analysis (using MCMC methods) to
determine if the protection of incumbents constitutes a statistical outlier,
and through changes to competitive districts.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 19:13:47 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Evans",
"Kyle",
""
],
[
"Chang",
"Katherine T.",
""
]
] |
new_dataset
| 0.997188 |
2209.00080
|
Ziqi Xu
|
Connor Dickey, Christopher Smith, Quentin Johnson, Jingcheng Li, Ziqi
Xu, Loukas Lazos, Ming Li
|
Wiggle: Physical Challenge-Response Verification of Vehicle Platooning
|
10 pages, 13 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous vehicle platooning promises many benefits such as fuel efficiency,
road safety, reduced traffic congestion, and passenger comfort. Platooning
vehicles travel in a single file, in close distance, and at the same velocity.
The platoon formation is autonomously maintained by a Cooperative Adaptive
Cruise Control (CACC) system which relies on sensory data and
vehicle-to-vehicle (V2V) communications. In fact, V2V messages play a critical
role in shortening the platooning distance while maintaining safety. Whereas
V2V message integrity and source authentication can be verified via
cryptographic methods, establishing the truthfulness of the message contents is
a much harder task.
This work establishes a physical access control mechanism to restrict V2V
messages to platooning members. Specifically, we aim at tying the digital
identity of a candidate requesting to join a platoon to its physical trajectory
relative to the platoon. We propose the {\em Wiggle} protocol that employs a
physical challenge-response exchange to prove that a candidate requesting to be
admitted into a platoon actually follows it. The protocol name is inspired by
the random longitudinal movements that the candidate is challenged to execute.
{\em Wiggle} prevents any remote adversary from joining the platoon and
injecting fake CACC messages. Compared to prior works, {\em Wiggle} is
resistant to pre-recording attacks and can verify that the candidate is
directly behind the verifier at the same lane.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 19:28:42 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Dickey",
"Connor",
""
],
[
"Smith",
"Christopher",
""
],
[
"Johnson",
"Quentin",
""
],
[
"Li",
"Jingcheng",
""
],
[
"Xu",
"Ziqi",
""
],
[
"Lazos",
"Loukas",
""
],
[
"Li",
"Ming",
""
]
] |
new_dataset
| 0.998824 |
2209.00084
|
Sudeep Pasricha
|
Febin Sunny, Mahdi Nikdast, Sudeep Pasricha
|
RecLight: A Recurrent Neural Network Accelerator with Integrated Silicon
Photonics
| null | null | null | null |
cs.LG cs.AR cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recurrent Neural Networks (RNNs) are used in applications that learn
dependencies in data sequences, such as speech recognition, human activity
recognition, and anomaly detection. In recent years, newer RNN variants, such
as GRUs and LSTMs, have been used for implementing these applications. As many
of these applications are employed in real-time scenarios, accelerating
RNN/LSTM/GRU inference is crucial. In this paper, we propose a novel photonic
hardware accelerator called RecLight for accelerating simple RNNs, GRUs, and
LSTMs. Simulation results indicate that RecLight achieves 37x lower
energy-per-bit and 10% better throughput compared to the state-of-the-art.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 19:36:01 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Sunny",
"Febin",
""
],
[
"Nikdast",
"Mahdi",
""
],
[
"Pasricha",
"Sudeep",
""
]
] |
new_dataset
| 0.995275 |
2209.00086
|
Sanjana Gautam
|
Sanjana Gautam
|
In Alexa, We Trust. Or Do We? : An analysis of People's Perception of
Privacy Policies
| null | null | null | null |
cs.HC cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Smart home devices have found their way through people's homes as well as
hearts. One such smart device is Amazon Alexa. Amazon Alexa is a
voice-controlled application that is rapidly gaining popularity. Alexa was
primarily used for checking weather forecasts, playing music, and controlling
other devices. This paper tries to explore the extent to which people are aware
of the privacy policies pertaining to the Amazon Alexa devices. We have
evaluated changes in users' interactions with the device after they became
aware of the adverse implications. The resulting knowledge will give
researchers new avenues of research and interaction designers new insights into
improving their systems.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 19:44:58 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Gautam",
"Sanjana",
""
]
] |
new_dataset
| 0.950484 |
2209.00185
|
Matthew Guzdial
|
Dagmar Lukka Loftsd\'ottir and Matthew Guzdial
|
SketchBetween: Video-to-Video Synthesis for Sprite Animation via
Sketches
|
7 pages, 6 figures, ACM Conference on the Foundations of Digital
Games
| null |
10.1145/3555858.3555928
| null |
cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
2D animation is a common factor in game development, used for characters,
effects and background art. It involves work that takes both skill and time,
but parts of which are repetitive and tedious. Automated animation approaches
exist, but are designed without animators in mind. The focus is heavily on
real-life video, which follows strict laws of how objects move, and does not
account for the stylistic movement often present in 2D animation. We propose a
problem formulation that more closely adheres to the standard workflow of
animation. We also demonstrate a model, SketchBetween, which learns to map
between keyframes and sketched in-betweens to rendered sprite animations. We
demonstrate that our problem formulation provides the required information for
the task and that our model outperforms an existing method.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 02:43:19 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Loftsdóttir",
"Dagmar Lukka",
""
],
[
"Guzdial",
"Matthew",
""
]
] |
new_dataset
| 0.999842 |
2209.00247
|
Md. Abubakar Siddik
|
Md. Abubakar Siddik, Most. Anju Ara Hasi, Jakia Akter Nitu, Sumonto
Sarker, Nasrin Sultana and Emarn Ali
|
A Modified IEEE 802.15.6 MAC Scheme to Enhance Performance of Wireless
Body Area Networks in E-health Applications
|
23 pages
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
The recently released IEEE 802.15.6 standard specifies several physical (PHY)
layers and medium access control (MAC) layer protocols for a variety of medical
and non-medical applications of Wireless Body Area Networks (WBAN). The medical
applications of WBAN have several obligatory requirements and constraints, viz.,
high reliability, strict delay deadlines, and low power consumption. The
standard IEEE 802.15.6 MAC scheme is not able to fulfil all the requirements of
medical applications of WBAN. To address this issue, we propose an IEEE
802.15.6-based MAC scheme that modifies the superframe structure, user
priorities, and access mechanism of the standard IEEE 802.15.6 MAC scheme. The
proposed superframe has three access phases: a random access phase (RAP), a
managed access phase (MAP), and a contention access phase (CAP). Nodes of the
four proposed user priorities access the channel during the RAP using the
CSMA/CA mechanism with a large contention window. The proposed MAC scheme uses
the RTS/CTS access mechanism instead of the basic access mechanism to mitigate
the effects of the hidden and exposed terminal problems. Moreover, we develop
an analytical model to evaluate the performance of the proposed MAC scheme and
solve it using Maple. The results show that the modified IEEE 802.15.6 MAC
scheme achieves better performance in terms of reliability, throughput, average
access delay, energy consumption, channel utilization, and fairness compared to
the standard IEEE 802.15.6 MAC scheme in E-health applications.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 06:13:49 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Siddik",
"Md. Abubakar",
""
],
[
"Hasi",
"Most. Anju Ara",
""
],
[
"Nitu",
"Jakia Akter",
""
],
[
"Sarker",
"Sumonto",
""
],
[
"Sultana",
"Nasrin",
""
],
[
"Ali",
"Emarn",
""
]
] |
new_dataset
| 0.996869 |
2209.00269
|
Jiangli Shao
|
Jiangli Shao, Yongqing Wang, Boshen Shi, Hao Gao, Huawei Shen, Xueqi
Cheng
|
Adversarial for Social Privacy: A Poisoning Strategy to Degrade User
Identity Linkage
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Privacy issues on social networks have been extensively discussed in recent
years. The user identity linkage (UIL) task, aiming at finding corresponding
users across different social networks, would be a threat to privacy if
unethically applied. The sensitive user information might be detected through
connected identities. A promising and novel solution to this issue is to design
an adversarial strategy to degrade the matching performance of UIL models.
However, most existing adversarial attacks on graphs are designed for models
working in a single network, while UIL is a cross-network learning task.
Meanwhile, privacy protection against UIL works unilaterally in real-world
scenarios, i.e., the service provider can only add perturbations to its own
network to protect its users from being linked. To tackle these challenges,
this paper proposes a novel adversarial attack strategy that poisons one target
network to prevent its nodes from being linked to other networks by UIL
algorithms. Specifically, we reformalize the UIL problem in the perspective of
kernelized topology consistency and convert the attack objective to maximizing
the structural changes within the target network before and after attacks. A
novel graph kernel is then defined with Earth mover's distance (EMD) on the
edge-embedding space. In terms of efficiency, a fast attack strategy is
proposed by greedy searching and replacing EMD with its lower bound. Results on
three real-world datasets indicate that the proposed attacks can best fool a
wide range of UIL models and reach a balance between attack effectiveness and
imperceptibility.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 07:12:20 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Shao",
"Jiangli",
""
],
[
"Wang",
"Yongqing",
""
],
[
"Shi",
"Boshen",
""
],
[
"Gao",
"Hao",
""
],
[
"Shen",
"Huawei",
""
],
[
"Cheng",
"Xueqi",
""
]
] |
new_dataset
| 0.988319 |
2209.00271
|
Gabriele Fici
|
Golnaz Badkobeh, Alessandro De Luca, Gabriele Fici and Simon Puglisi
|
Maximal Closed Substrings
|
accepted in SPIRE '22
| null | null | null |
cs.DS cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
A string is closed if it has length 1 or has a nonempty border without
internal occurrences. In this paper we introduce the definition of a maximal
closed substring (MCS), which is an occurrence of a closed substring that
cannot be extended to the left or to the right into a longer closed substring.
MCSs with exponent at least $2$ are commonly called runs; those with exponent
smaller than $2$, instead, are particular cases of maximal gapped repeats. We
show that a string of length $n$ contains $\mathcal O(n^{1.5})$ MCSs. We also
provide an output-sensitive algorithm that, given a string of length $n$ over a
constant-size alphabet, locates all $m$ MCSs the string contains in $\mathcal
O(n\log n + m)$ time.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 07:18:12 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Badkobeh",
"Golnaz",
""
],
[
"De Luca",
"Alessandro",
""
],
[
"Fici",
"Gabriele",
""
],
[
"Puglisi",
"Simon",
""
]
] |
new_dataset
| 0.99106 |
2209.00274
|
Rohan Pratap Singh
|
Rohan P. Singh, Pierre Gergondet, Fumio Kanehiro
|
mc-mujoco: Simulating Articulated Robots with FSM Controllers in MuJoCo
|
GitHub code: https://github.com/rohanpsingh/mc_mujoco
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
For safe and reliable deployment of any robot controller on the real hardware
platform, it is generally a necessary practice to comprehensively assess the
performance of the controller with the specific robot in a realistic simulation
environment beforehand. While there exist several software solutions that can
provide the core physics engine for this purpose, it is often a cumbersome and
error-prone effort to interface the simulation environment with the robot
controller being evaluated. The controller may have a complex structure
consisting of multiple states and transitions within a finite-state machine
(FSM), and may even require input through a GUI.
In this work, we present mc-mujoco -- an open-source software framework that
forms an interface between the MuJoCo physics simulator and the mc-rtc robot
control framework. We provide implementation details and describe the process
for adding support for essentially any new robot. We also demonstrate and
publish a sample FSM controller for bipedal locomotion and stable grasping of a
rigid object by the HRP-5P humanoid robot in MuJoCo. The code and usage
instructions for mc-mujoco, the developed robot modules, and the FSM controller
are available online.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 07:31:42 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Singh",
"Rohan P.",
""
],
[
"Gergondet",
"Pierre",
""
],
[
"Kanehiro",
"Fumio",
""
]
] |
new_dataset
| 0.999749 |
2209.00280
|
Jinho Choi
|
Jinho Choi and Jihong Park
|
Semantic Communication as a Signaling Game with Correlated Knowledge
Bases
|
5 pages, 4 figures, VTC Fall 2022 (Workshop of Semantic
Communication)
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Semantic communication (SC) goes beyond technical communication in which a
given sequence of bits or symbols, often referred to as information, is
transmitted reliably over a noisy channel, regardless of its meaning. In SC,
conveying the meaning of information becomes important, which requires some
sort of agreement between a sender and a receiver through their knowledge
bases. In this sense, SC is closely related to a signaling game where a sender
takes an action to send a signal that conveys information to a receiver, while
the receiver can interpret the signal and choose a response accordingly. Based
on the signaling game, in this paper we build an SC model and characterize its
performance in terms of mutual information. In addition, we show that the
conditional mutual information between the instances of the knowledge bases of
communicating parties plays a crucial role in improving the performance of SC.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 07:56:46 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Choi",
"Jinho",
""
],
[
"Park",
"Jihong",
""
]
] |
new_dataset
| 0.981313 |
2209.00291
|
Prateek Verma
|
Rishabh Dahale, Vaibhav Talwadker, Preeti Rao, Prateek Verma
|
Generating Coherent Drum Accompaniment With Fills And Improvisations
|
8 pages, 7 figures, 23rd International Society for Music Information
Retrieval Conference (ISMIR 2022), Bengaluru, India
| null | null | null |
cs.SD cs.LG cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Creating a complex work of art like music necessitates profound creativity.
With recent advancements in deep learning and powerful models such as
transformers, there has been huge progress in automatic music generation. In an
accompaniment generation context, creating a coherent drum pattern with
apposite fills and improvisations at proper locations in a song is a
challenging task even for an experienced drummer. Drum beats tend to follow a
repetitive pattern through stanzas with fills or improvisation at section
boundaries. In this work, we tackle the task of drum pattern generation
conditioned on the accompanying music played by four melodic instruments:
Piano, Guitar, Bass, and Strings. We use the transformer sequence to sequence
model to generate a basic drum pattern conditioned on the melodic accompaniment
and find that improvisation is largely absent, possibly owing to its
relatively low representation in the training data. We propose a
novelty function to capture the extent of improvisation in a bar relative to
its neighbors. We train a model to predict improvisation locations from the
melodic accompaniment tracks. Finally, we use a novel BERT-inspired in-filling
architecture, to learn the structure of both the drums and melody to in-fill
elements of improvised music.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 08:31:26 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Dahale",
"Rishabh",
""
],
[
"Talwadker",
"Vaibhav",
""
],
[
"Rao",
"Preeti",
""
],
[
"Verma",
"Prateek",
""
]
] |
new_dataset
| 0.972639 |
2209.00325
|
Aleksandr Petrov
|
Aleksandr Petrov, Ildar Safilo, Daria Tikhonovich, Dmitry Ignatov
|
MTS Kion Implicit Contextualised Sequential Dataset for Movie
Recommendation
|
Accepted at ACM RecSys CARS workshop 2022
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new movie and TV show recommendation dataset collected from the
real users of MTS Kion video-on-demand platform. In contrast to other popular
movie recommendation datasets, such as MovieLens or Netflix, our dataset is
based on the implicit interactions registered at the watching time, rather than
on explicit ratings. We also provide rich contextual and side information
including interactions characteristics (such as temporal information, watch
duration and watch percentage), user demographics and rich movies
meta-information. In addition, we describe the MTS Kion Challenge - an online
recommender systems challenge that was based on this dataset - and provide an
overview of the best performing solutions of the winners. We keep the
competition sandbox open, so the researchers are welcome to try their own
recommendation algorithms and measure the quality on the private part of the
dataset.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 09:53:57 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Petrov",
"Aleksandr",
""
],
[
"Safilo",
"Ildar",
""
],
[
"Tikhonovich",
"Daria",
""
],
[
"Ignatov",
"Dmitry",
""
]
] |
new_dataset
| 0.99984 |
2209.00353
|
Li Yi
|
Li Yi, Haochen Hu, Jingwei Zhao, Gus Xia
|
AccoMontage2: A Complete Harmonization and Accompaniment Arrangement
System
|
Accepted by ISMIR 2022
| null | null | null |
cs.SD cs.IR cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We propose AccoMontage2, a system capable of doing full-length song
harmonization and accompaniment arrangement based on a lead melody. Following
AccoMontage, this study focuses on generating piano arrangements for
popular/folk songs and it carries on the generalized template-based retrieval
method. The novelties of this study are twofold. First, we invent a
harmonization module (which AccoMontage does not have). This module generates
structured and coherent full-length chord progression by optimizing and
balancing three loss terms: a micro-level loss for note-wise dissonance, a
meso-level loss for phrase-template matching, and a macro-level loss for full
piece coherency. Second, we develop a graphical user interface which allows
users to select different styles of chord progression and piano texture.
Currently, chord progression styles include Pop, R&B, and Dark, while piano
texture styles include several levels of voicing density and rhythmic
complexity. Experimental results show that both our harmonization and
arrangement results significantly outperform the baselines. Lastly, we release
AccoMontage2 as an online application as well as the organized chord
progression templates as a public dataset.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 10:42:56 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Yi",
"Li",
""
],
[
"Hu",
"Haochen",
""
],
[
"Zhao",
"Jingwei",
""
],
[
"Xia",
"Gus",
""
]
] |
new_dataset
| 0.996247 |
2209.00355
|
Jinkai Zheng
|
Jinkai Zheng, Xinchen Liu, Xiaoyan Gu, Yaoqi Sun, Chuang Gan, Jiyong
Zhang, Wu Liu, Chenggang Yan
|
Gait Recognition in the Wild with Multi-hop Temporal Switch
|
10 pages, 6 figures; Accepted by ACM MM 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing studies for gait recognition are dominated by in-the-lab scenarios.
Since people live in real-world senses, gait recognition in the wild is a more
practical problem that has recently attracted the attention of the community of
multimedia and computer vision. Current methods that obtain state-of-the-art
performance on in-the-lab benchmarks achieve much worse accuracy on the
recently proposed in-the-wild datasets because these methods can hardly model
the varied temporal dynamics of gait sequences in unconstrained scenes.
Therefore, this paper presents a novel multi-hop temporal switch method to
achieve effective temporal modeling of gait patterns in real-world scenes.
Concretely, we design a novel gait recognition network, named Multi-hop
Temporal Switch Network (MTSGait), to learn spatial features and multi-scale
temporal features simultaneously. Different from existing methods that use 3D
convolutions for temporal modeling, our MTSGait models the temporal dynamics of
gait sequences by 2D convolutions. By this means, it achieves high efficiency
with fewer model parameters and reduces the difficulty in optimization compared
with 3D convolution-based models. Based on the specific design of the 2D
convolution kernels, our method can eliminate the misalignment of features
among adjacent frames. In addition, a new sampling strategy, i.e., non-cyclic
continuous sampling, is proposed to make the model learn more robust temporal
features. Finally, the proposed method achieves superior performance on two
public gait in-the-wild datasets, i.e., GREW and Gait3D, compared with
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 10:46:09 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Zheng",
"Jinkai",
""
],
[
"Liu",
"Xinchen",
""
],
[
"Gu",
"Xiaoyan",
""
],
[
"Sun",
"Yaoqi",
""
],
[
"Gan",
"Chuang",
""
],
[
"Zhang",
"Jiyong",
""
],
[
"Liu",
"Wu",
""
],
[
"Yan",
"Chenggang",
""
]
] |
new_dataset
| 0.950978 |
2209.00381
|
Juan Lagos
|
Juan Pablo Lagos and Esa Rahtu
|
SemSegDepth: A Combined Model for Semantic Segmentation and Depth
Completion
| null |
17th VISIGRAPP 2022 - Volume 5: VISAPP
|
10.5220/0010838500003124
| null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Holistic scene understanding is pivotal for the performance of autonomous
machines. In this paper we propose a new end-to-end model for performing
semantic segmentation and depth completion jointly. The vast majority of recent
approaches have developed semantic segmentation and depth completion as
independent tasks. Our approach relies on RGB and sparse depth as inputs to our
model and produces a dense depth map and the corresponding semantic
segmentation image. It consists of a feature extractor, a depth completion
branch, a semantic segmentation branch and a joint branch which further
processes semantic and depth information altogether. The experiments done on
Virtual KITTI 2 dataset, demonstrate and provide further evidence, that
combining both tasks, semantic segmentation and depth completion, in a
multi-task network can effectively improve the performance of each task. Code
is available at https://github.com/juanb09111/semantic depth.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 11:52:11 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Lagos",
"Juan Pablo",
""
],
[
"Rahtu",
"Esa",
""
]
] |
new_dataset
| 0.988256 |
2209.00407
|
Xiaodong Chen
|
Xiaodong Chen and Wu Liu and Xinchen Liu and Yongdong Zhang and
Jungong Han and Tao Mei
|
MAPLE: Masked Pseudo-Labeling autoEncoder for Semi-supervised Point
Cloud Action Recognition
|
11 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing human actions from point cloud videos has attracted tremendous
attention from both academia and industry due to its wide applications like
automatic driving, robotics, and so on. However, current methods for point
cloud action recognition usually require a huge amount of data with manual
annotations and a complex backbone network with high computation costs, which
makes it impractical for real-world applications. Therefore, this paper
considers the task of semi-supervised point cloud action recognition. We
propose a Masked Pseudo-Labeling autoEncoder (\textbf{MAPLE}) framework to
learn effective representations with much fewer annotations for point cloud
action recognition. In particular, we design a novel and efficient
\textbf{De}coupled \textbf{s}patial-\textbf{t}emporal Trans\textbf{Former}
(\textbf{DestFormer}) as the backbone of MAPLE. In DestFormer, the spatial and
temporal dimensions of the 4D point cloud videos are decoupled to achieve
efficient self-attention for learning both long-term and short-term features.
Moreover, to learn discriminative features from fewer annotations, we design a
masked pseudo-labeling autoencoder structure to guide the DestFormer to
reconstruct features of masked frames from the available frames. More
importantly, for unlabeled data, we exploit the pseudo-labels from the
classification head as the supervision signal for the reconstruction of
features from the masked frames. Finally, comprehensive experiments demonstrate
that MAPLE achieves superior results on three public benchmarks and outperforms
the state-of-the-art method by 8.08\% accuracy on the MSR-Action3D dataset.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 12:32:40 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Chen",
"Xiaodong",
""
],
[
"Liu",
"Wu",
""
],
[
"Liu",
"Xinchen",
""
],
[
"Zhang",
"Yongdong",
""
],
[
"Han",
"Jungong",
""
],
[
"Mei",
"Tao",
""
]
] |
new_dataset
| 0.953469 |
2209.00447
|
Ziyue Zhu Ms
|
Ziyue Zhu
|
Identifying Films with Noir Characteristics Using Audience's Tags on
MovieLens
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the noir classification problem by exploring noir attributes and
what films are likely to be regarded as noirish from the perspective of a wide
Internet audience. We use a dataset consisting of more than 30,000 films with
relevant tags added by users of MovieLens, a web-based recommendation system.
Based on this data, we develop a statistical model to identify films with noir
characteristics using these free-form tags. After retrieving information for
describing films from tags, we implement a one-class nearest neighbors
algorithm to recognize noirish films by learning from IMDb-labeled noirs. Our
analysis evidences film noirs' close relationship with German Expressionism,
French Poetic Realism, British thrillers, and American pre-code crime pictures,
revealing the similarities and differences between neo noirs after 1960 and
noirs in the classic period.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 17:08:54 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Zhu",
"Ziyue",
""
]
] |
new_dataset
| 0.999125 |
2209.00470
|
Bram Van Es
|
Bram van Es, Leon C. Reteig, Sander C. Tan, Marijn Schraagen, Myrthe
M. Hemker, Sebastiaan R.S. Arends, Miguel A.R. Rios, Saskia Haitjema
|
Negation detection in Dutch clinical texts: an evaluation of rule-based
and machine learning methods
|
24, 8, journal
| null | null | null |
cs.CL cs.IR cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
As structured data are often insufficient, labels need to be extracted from
free text in electronic health records when developing models for clinical
information retrieval and decision support systems. One of the most important
contextual properties in clinical text is negation, which indicates the absence
of findings. We aimed to improve large scale extraction of labels by comparing
three methods for negation detection in Dutch clinical notes. We used the
Erasmus Medical Center Dutch Clinical Corpus to compare a rule-based method
based on ContextD, a biLSTM model using MedCAT and (finetuned) RoBERTa-based
models. We found that both the biLSTM and RoBERTa models consistently
outperform the rule-based model in terms of F1 score, precision and recall. In
addition, we systematically categorized the classification errors for each
model, which can be used to further improve model performance in particular
applications. Combining the three models naively was not beneficial in terms of
performance. We conclude that the biLSTM and RoBERTa-based models in particular
are highly accurate accurate in detecting clinical negations, but that
ultimately all three approaches can be viable depending on the use case at
hand.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 14:00:13 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"van Es",
"Bram",
""
],
[
"Reteig",
"Leon C.",
""
],
[
"Tan",
"Sander C.",
""
],
[
"Schraagen",
"Marijn",
""
],
[
"Hemker",
"Myrthe M.",
""
],
[
"Arends",
"Sebastiaan R. S.",
""
],
[
"Rios",
"Miguel A. R.",
""
],
[
"Haitjema",
"Saskia",
""
]
] |
new_dataset
| 0.984874 |
2209.00533
|
Wei Luo
|
Wei Luo, Jingshan Chen, Henrik Ebel, Peter Eberhard
|
Time-Optimal Handover Trajectory Planning for Aerial Manipulators based
on Discrete Mechanics and Complementarity Constraints
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Planning a time-optimal trajectory for aerial robots is critical in many
drone applications, such as rescue missions and package delivery, which have
been widely researched in recent years. However, it still involves several
challenges, particularly when it comes to incorporating special task
requirements into the planning as well as the aerial robot's dynamics. In this
work, we study a case where an aerial manipulator shall hand over a parcel from
a moving mobile robot in a time-optimal manner. Rather than setting up the
approach trajectory manually, which makes it difficult to determine the optimal
total travel time to accomplish the desired task within dynamic limits, we
propose an optimization framework, which combines discrete mechanics and
complementarity constraints (DMCC) together. In the proposed framework, the
system dynamics is constrained with the discrete variational Lagrangian
mechanics that provides reliable estimation results also according to our
experiments. The handover opportunities are automatically determined and
arranged based on the desired complementarity constraints. Finally, the
performance of the proposed framework is verified with numerical simulations
and hardware experiments with our self-designed aerial manipulators.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 15:28:39 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Luo",
"Wei",
""
],
[
"Chen",
"Jingshan",
""
],
[
"Ebel",
"Henrik",
""
],
[
"Eberhard",
"Peter",
""
]
] |
new_dataset
| 0.953533 |
2209.00551
|
Lingyun Gu
|
Gu Lingyun, Eugene Popov, Dong Ge
|
Fast Fourier Convolution Based Remote Sensor Image Object Detection for
Earth Observation
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remote sensor image object detection is an important technology for Earth
observation, and is used in various tasks such as forest fire monitoring and
ocean monitoring. Image object detection technology, despite the significant
developments, is struggling to handle remote sensor images and small-scale
objects, due to the limited pixels of small objects. Numerous existing studies
have demonstrated that an effective way to promote small object detection is to
introduce the spatial context. Meanwhile, recent researches for image
classification have shown that spectral convolution operations can perceive
long-term spatial dependence more efficiently in the frequency domain than
spatial domain. Inspired by this observation, we propose a Frequency-aware
Feature Pyramid Framework (FFPF) for remote sensing object detection, which
consists of a novel Frequency-aware ResNet (F-ResNet) and a Bilateral
Spectral-aware Feature Pyramid Network (BS-FPN). Specifically, the F-ResNet is
proposed to perceive the spectral context information by plugging the frequency
domain convolution into each stage of the backbone, extracting richer features
of small objects. To the best of our knowledge, this is the first work to
introduce frequency-domain convolution into remote sensing object detection
task. In addition, the BSFPN is designed to use a bilateral sampling strategy
and skipping connection to better model the association of object features at
different scales, towards unleashing the potential of the spectral context
information from F-ResNet. Extensive experiments are conducted for object
detection in the optical remote sensing image dataset (DIOR and DOTA). The
experimental results demonstrate the excellent performance of our method. It
achieves an average accuracy (mAP) without any tricks.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 15:50:58 GMT"
}
] | 2022-09-02T00:00:00 |
[
[
"Lingyun",
"Gu",
""
],
[
"Popov",
"Eugene",
""
],
[
"Ge",
"Dong",
""
]
] |
new_dataset
| 0.999423 |
2012.03112
|
Juan G\'omez-Luna
|
Onur Mutlu, Saugata Ghose, Juan G\'omez-Luna, Rachata Ausavarungnirun
|
A Modern Primer on Processing in Memory
|
arXiv admin note: substantial text overlap with arXiv:1903.03988
| null | null | null |
cs.AR cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Modern computing systems are overwhelmingly designed to move data to
computation. This design choice goes directly against at least three key trends
in computing that cause performance, scalability and energy bottlenecks: (1)
data access is a key bottleneck as many important applications are increasingly
data-intensive, and memory bandwidth and energy do not scale well, (2) energy
consumption is a key limiter in almost all computing platforms, especially
server and mobile systems, (3) data movement, especially off-chip to on-chip,
is very expensive in terms of bandwidth, energy and latency, much more so than
computation. These trends are especially severely-felt in the data-intensive
server and energy-constrained mobile systems of today. At the same time,
conventional memory technology is facing many technology scaling challenges in
terms of reliability, energy, and performance. As a result, memory system
architects are open to organizing memory in different ways and making it more
intelligent, at the expense of higher cost. The emergence of 3D-stacked memory
plus logic, the adoption of error correcting codes inside the latest DRAM
chips, proliferation of different main memory standards and chips, specialized
for different purposes (e.g., graphics, low-power, high bandwidth, low
latency), and the necessity of designing new solutions to serious reliability
and security issues, such as the RowHammer phenomenon, are an evidence of this
trend. This chapter discusses recent research that aims to practically enable
computation close to data, an approach we call processing-in-memory (PIM). PIM
places computation mechanisms in or near where the data is stored (i.e., inside
the memory chips, in the logic layer of 3D-stacked memory, or in the memory
controllers), so that data movement between the computation units and memory is
reduced or eliminated.
|
[
{
"version": "v1",
"created": "Sat, 5 Dec 2020 19:59:49 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Jul 2022 20:04:21 GMT"
},
{
"version": "v3",
"created": "Wed, 31 Aug 2022 09:11:16 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Mutlu",
"Onur",
""
],
[
"Ghose",
"Saugata",
""
],
[
"Gómez-Luna",
"Juan",
""
],
[
"Ausavarungnirun",
"Rachata",
""
]
] |
new_dataset
| 0.979147 |
2201.08701
|
Martin Westerkamp
|
Martin Westerkamp and Axel K\"upper
|
SmartSync: Cross-Blockchain Smart Contract Interaction and
Synchronization
|
9 pages, 4 figures
|
2022 IEEE International Conference on Blockchain and
Cryptocurrency (ICBC)
|
10.1109/ICBC54727.2022.9805524
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-Blockchain communication has gained traction due to the increasing
fragmentation of blockchain networks and scalability solutions such as
side-chaining and sharding. With SmartSync, we propose a novel concept for
cross-blockchain smart contract interactions that creates client contracts on
arbitrary blockchain networks supporting the same execution environment. Client
contracts mirror the logic and state of the original instance and enable
seamless on-chain function executions providing recent states. Synchronized
contracts supply instant read-only function calls to other applications hosted
on the target blockchain. Hereby, current limitations in cross-chain
communication are alleviated and new forms of contract interactions are
enabled. State updates are transmitted in a verifiable manner using Merkle
proofs and do not require trusted intermediaries. To permit lightweight
synchronizations, we introduce transition confirmations that facilitate the
application of verifiable state transitions without re-executing transactions
of the source blockchain. We prove the concept's soundness by providing a
prototypical implementation that enables smart contract forks, state
synchronizations, and on-chain validation on EVM-compatible blockchains. Our
evaluation demonstrates SmartSync's applicability for presented use cases
providing access to recent states to third-party contracts on the target
blockchain. Execution costs scale sub-linearly with the number of value updates
and depend on the depth and index of corresponding Merkle proofs.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 13:57:59 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Westerkamp",
"Martin",
""
],
[
"Küpper",
"Axel",
""
]
] |
new_dataset
| 0.999444 |
2203.08138
|
Frederic Poitevin
|
Axel Levy, Fr\'ed\'eric Poitevin, Julien Martel, Youssef Nashed,
Ariana Peck, Nina Miolane, Daniel Ratner, Mike Dunne, Gordon Wetzstein
|
CryoAI: Amortized Inference of Poses for Ab Initio Reconstruction of 3D
Molecular Volumes from Real Cryo-EM Images
|
Project page:
https://www.computationalimaging.org/publications/cryoai/
| null | null | null |
cs.CV cs.LG q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cryo-electron microscopy (cryo-EM) has become a tool of fundamental
importance in structural biology, helping us understand the basic building
blocks of life. The algorithmic challenge of cryo-EM is to jointly estimate the
unknown 3D poses and the 3D electron scattering potential of a biomolecule from
millions of extremely noisy 2D images. Existing reconstruction algorithms,
however, cannot easily keep pace with the rapidly growing size of cryo-EM
datasets due to their high computational and memory cost. We introduce cryoAI,
an ab initio reconstruction algorithm for homogeneous conformations that uses
direct gradient-based optimization of particle poses and the electron
scattering potential from single-particle cryo-EM data. CryoAI combines a
learned encoder that predicts the poses of each particle image with a
physics-based decoder to aggregate each particle image into an implicit
representation of the scattering potential volume. This volume is stored in the
Fourier domain for computational efficiency and leverages a modern coordinate
network architecture for memory efficiency. Combined with a symmetrized loss
function, this framework achieves results of a quality on par with
state-of-the-art cryo-EM solvers for both simulated and experimental data, one
order of magnitude faster for large datasets and with significantly lower
memory requirements than existing methods.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 17:58:03 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 04:25:18 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Jul 2022 06:53:38 GMT"
},
{
"version": "v4",
"created": "Tue, 30 Aug 2022 21:58:28 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Levy",
"Axel",
""
],
[
"Poitevin",
"Frédéric",
""
],
[
"Martel",
"Julien",
""
],
[
"Nashed",
"Youssef",
""
],
[
"Peck",
"Ariana",
""
],
[
"Miolane",
"Nina",
""
],
[
"Ratner",
"Daniel",
""
],
[
"Dunne",
"Mike",
""
],
[
"Wetzstein",
"Gordon",
""
]
] |
new_dataset
| 0.999256 |
2208.02747
|
Zhangzi Zhu
|
Zhangzi Zhu, Yu Hao, Wenqing Zhang, Chuhui Xue, Song Bai
|
Runner-Up Solution to ECCV 2022 Challenge on Out of Vocabulary Scene
Text Understanding: Cropped Word Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report presents our 2nd place solution to ECCV 2022 challenge on
Out-of-Vocabulary Scene Text Understanding (OOV-ST) : Cropped Word Recognition.
This challenge is held in the context of ECCV 2022 workshop on Text in
Everything (TiE), which aims to extract out-of-vocabulary words from natural
scene images. In the competition, we first pre-train SCATTER on the synthetic
datasets, then fine-tune the model on the training set with data augmentations.
Meanwhile, two additional models are trained specifically for long and vertical
texts. Finally, we combine the output from different models with different
layers, different backbones, and different seeds as the final results. Our
solution achieves a word accuracy of 59.45\% when considering out-of-vocabulary
words only.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 16:20:58 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 06:51:25 GMT"
},
{
"version": "v3",
"created": "Wed, 31 Aug 2022 13:00:42 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Zhu",
"Zhangzi",
""
],
[
"Hao",
"Yu",
""
],
[
"Zhang",
"Wenqing",
""
],
[
"Xue",
"Chuhui",
""
],
[
"Bai",
"Song",
""
]
] |
new_dataset
| 0.986361 |
2208.09245
|
Tze-Yang Tung
|
Tze-Yang Tung and Deniz Gunduz
|
Deep Joint Source-Channel and Encryption Coding: Secure Semantic
Communications
| null | null | null | null |
cs.CR eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning driven joint source-channel coding (JSCC) for wireless image or
video transmission, also called DeepJSCC, has been a topic of interest recently
with very promising results. The idea is to map similar source samples to
nearby points in the channel input space such that, despite the noise
introduced by the channel, the input can be recovered with minimal distortion.
In DeepJSCC, this is achieved by an autoencoder architecture with a
non-trainable channel layer between the encoder and decoder. DeepJSCC has many
favorable properties, such as better end-to-end distortion performance than its
separate source and channel coding counterpart as well as graceful degradation
with respect to channel quality. However, due to the inherent correlation
between the source sample and channel input, DeepJSCC is vulnerable to
eavesdropping attacks. In this paper, we propose the first DeepJSCC scheme for
wireless image transmission that is secure against eavesdroppers, called
DeepJSCEC. DeepJSCEC not only preserves the favorable properties of DeepJSCC,
it also provides security against chosen-plaintext attacks from the
eavesdropper, without the need to make assumptions about the eavesdropper's
channel condition, or its intended use of the intercepted signal. Numerical
results show that DeepJSCEC achieves similar or better image quality than
separate source coding using BPG compression, AES encryption, and LDPC codes
for channel coding, while preserving the graceful degradation of image quality
with respect to channel quality. We also show that the proposed encryption
method is problem agnostic, meaning it can be applied to other end-to-end JSCC
problems, such as remote classification, without modification. Given the
importance of security in modern wireless communication systems, we believe
this work brings DeepJSCC schemes much closer to adoption in practice.
|
[
{
"version": "v1",
"created": "Fri, 19 Aug 2022 09:56:06 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2022 16:10:09 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Tung",
"Tze-Yang",
""
],
[
"Gunduz",
"Deniz",
""
]
] |
new_dataset
| 0.966959 |
2208.11774
|
Josef Spjut
|
Josef Spjut, Arjun Madhusudan, Benjamin Watson, Ben Boudaoud, Joohwan
Kim
|
The Esports Frontier: Rendering for Competitive Games
|
3 pages, 1 figure, Abstract of talk presented at SIGGRAPH Frontiers
esports workshop in 2022
| null | null | null |
cs.GR cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time graphics is commonly thought of as anything exceeding about 30 fps,
where the interactivity of the application becomes fluid enough for high rates
of interaction. Inspired by esports and competitive gaming, where players
regularly exceed the threshold for real-time by 10x (esports displays commonly
reach 360 Hz or beyond), this talk begins the exploration of how rendering has
the opportunity to evolve beyond the current state of focus on either image
quality or frame rate. Esports gamers regularly decline nearly all options for
increased image quality in exchange for maximum frame rates. However, there
remains a distinct opportunity to move beyond the focus on video as a sequence
of images and instead rethink the pipeline for more continuous updates.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 21:15:00 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Aug 2022 22:35:45 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Spjut",
"Josef",
""
],
[
"Madhusudan",
"Arjun",
""
],
[
"Watson",
"Benjamin",
""
],
[
"Boudaoud",
"Ben",
""
],
[
"Kim",
"Joohwan",
""
]
] |
new_dataset
| 0.999359 |
2208.12637
|
Christiane Gresse von Wangenheim
|
Fabiano Pereira de Oliveira, Christiane Gresse von Wangenheim, Jean C.
R. Hauck
|
TMIC: App Inventor Extension for the Deployment of Image Classification
Models Exported from Teachable Machine
|
7 pages
| null | null | null |
cs.CY cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
TMIC is an App Inventor extension for the deployment of ML models for image
classification developed with Google Teachable Machine in educational settings.
Google Teachable Machine, is an intuitive visual tool that provides
workflow-oriented support for the development of ML models for image
classification. Aiming at the usage of models developed with Google Teachable
Machine, the extension TMIC enables the deployment of the trained models
exported as TensorFlow.js to Google Cloud as part of App Inventor, one of the
most popular block-based programming environments for teaching computing in
K-12. The extension was created with the App Inventor extension framework based
on the extension PIC and is available under the BSD 3 license. It can be used
for teaching ML in K-12, in introductory courses in higher education or by
anyone interested in creating intelligent apps with image classification. The
extension TMIC is being developed by the initiative Computa\c{c}\~ao na Escola
of the Department of Informatics and Statistics at the Federal University of
Santa Catarina/Brazil as part of a research effort aiming at introducing AI
education in K-12.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 17:34:47 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Aug 2022 22:08:48 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"de Oliveira",
"Fabiano Pereira",
""
],
[
"von Wangenheim",
"Christiane Gresse",
""
],
[
"Hauck",
"Jean C. R.",
""
]
] |
new_dataset
| 0.971803 |
2208.13446
|
Sabine Cornelsen
|
Sabine Cornelsen and Gregor Diatzko
|
Planar Confluent Orthogonal Drawings of 4-Modal Digraphs
|
Appears in the Proceedings of the 30th International Symposium on
Graph Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.CG cs.DS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In a planar confluent orthogonal drawing (PCOD) of a directed graph (digraph)
vertices are drawn as points in the plane and edges as orthogonal polylines
starting with a vertical segment and ending with a horizontal segment. Edges
may overlap in their first or last segment, but must not intersect otherwise.
PCODs can be seen as a directed variant of Kandinsky drawings or as planar
L-drawings of subdivisions of digraphs. The maximum number of subdivision
vertices in an edge is then the split complexity. A PCOD is upward if each edge
is drawn with monotonically increasing y-coordinates and quasi-upward if no
edge starts with decreasing y-coordinates. We study the split complexity of
PCODs and (quasi-)upward PCODs for various classes of graphs.
|
[
{
"version": "v1",
"created": "Mon, 29 Aug 2022 09:28:49 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2022 09:26:55 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Cornelsen",
"Sabine",
""
],
[
"Diatzko",
"Gregor",
""
]
] |
new_dataset
| 0.999587 |
2208.14142
|
Giordano Da Lozzo
|
Carlos Alegria and Giordano Da Lozzo and Giuseppe Di Battista and
Fabrizio Frati and Fabrizio Grosso and Maurizio Patrignani
|
Unit-length Rectangular Drawings of Graphs
|
Appears in the Proceedings of the 30th International Symposium on
Graph Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.CG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A rectangular drawing of a planar graph $G$ is a planar drawing of $G$ in
which vertices are mapped to grid points, edges are mapped to horizontal and
vertical straight-line segments, and faces are drawn as rectangles. Sometimes
this latter constraint is relaxed for the outer face. In this paper, we study
rectangular drawings in which the edges have unit length. We show a complexity
dichotomy for the problem of deciding the existence of a unit-length
rectangular drawing, depending on whether the outer face must also be drawn as
a rectangle or not. Specifically, we prove that the problem is NP-complete for
biconnected graphs when the drawing of the outer face is not required to be a
rectangle, even if the sought drawing must respect a given planar embedding,
whereas it is polynomial-time solvable, both in the fixed and the variable
embedding settings, if the outer face is required to be drawn as a rectangle.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 10:49:23 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2022 07:23:00 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Alegria",
"Carlos",
""
],
[
"Da Lozzo",
"Giordano",
""
],
[
"Di Battista",
"Giuseppe",
""
],
[
"Frati",
"Fabrizio",
""
],
[
"Grosso",
"Fabrizio",
""
],
[
"Patrignani",
"Maurizio",
""
]
] |
new_dataset
| 0.999049 |
2208.14287
|
Anuj Kumar Bhagat
|
Anuj Kumar Bhagat and Ritumoni Sarma
|
On the exponent of cyclic codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We propose an algorithm to find a lower bound for the number of cyclic codes
over any finite field with any given exponent. Besides, we give a formula to
find the exponent of BCH codes.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 14:14:56 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2022 06:03:26 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Bhagat",
"Anuj Kumar",
""
],
[
"Sarma",
"Ritumoni",
""
]
] |
new_dataset
| 0.998864 |
2208.14493
|
Johann Frei
|
Johann Frei and Frank Kramer
|
Annotated Dataset Creation through General Purpose Language Models for
non-English Medical NLP
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Obtaining text datasets with semantic annotations is an effortful process,
yet crucial for supervised training in natural language processing (NLP). In
general, developing and applying new NLP pipelines in domain-specific contexts
often requires custom-designed datasets to address NLP tasks in a
supervised machine learning fashion. When operating in non-English languages
for medical data processing, this exposes several minor and major
interconnected problems, such as a lack of task-matching datasets as well as
task-specific pre-trained models. In our work we suggest leveraging pretrained
language models for training data acquisition in order to retrieve sufficiently
large datasets for training smaller and more efficient models for use-case
specific tasks. To demonstrate the effectiveness of our approach, we create a
custom dataset which we use to train a medical NER model for German texts,
GPTNERMED, yet our method remains language-independent in principle. Our
obtained dataset as well as our pre-trained models are publicly available at:
https://github.com/frankkramer-lab/GPTNERMED
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 18:42:55 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Frei",
"Johann",
""
],
[
"Kramer",
"Frank",
""
]
] |
new_dataset
| 0.994351 |
2208.14536
|
Besnik Fetahu
|
Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, Oleg
Rokhlenko
|
MultiCoNER: A Large-scale Multilingual dataset for Complex Named Entity
Recognition
|
Accepted at COLING 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present MultiCoNER, a large multilingual dataset for Named Entity
Recognition that covers 3 domains (Wiki sentences, questions, and search
queries) across 11 languages, as well as multilingual and code-mixing subsets.
This dataset is designed to represent contemporary challenges in NER, including
low-context scenarios (short and uncased text), syntactically complex entities
like movie titles, and long-tail entity distributions. The 26M token dataset is
compiled from public resources using techniques such as heuristic-based
sentence sampling, template extraction and slotting, and machine translation.
We applied two NER models on our dataset: a baseline XLM-RoBERTa model, and a
state-of-the-art GEMNET model that leverages gazetteers. The baseline achieves
moderate performance (macro-F1=54%), highlighting the difficulty of our data.
GEMNET, which uses gazetteers, improves significantly (an average macro-F1
improvement of +30%). MultiCoNER poses challenges even for large pre-trained
language models, and we believe that it can help further research in building
robust NER systems. MultiCoNER is publicly available at
https://registry.opendata.aws/multiconer/ and we hope that this resource will
help advance research in various aspects of NER.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 20:45:54 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Malmasi",
"Shervin",
""
],
[
"Fang",
"Anjie",
""
],
[
"Fetahu",
"Besnik",
""
],
[
"Kar",
"Sudipta",
""
],
[
"Rokhlenko",
"Oleg",
""
]
] |
new_dataset
| 0.999641 |
2208.14543
|
Peng Yin
|
Peng Yin, Abulikemu Abuduweili, Shiqi Zhao, Changliu Liu and Sebastian
Scherer
|
BioSLAM: A Bio-inspired Lifelong Memory System for General Place
Recognition
|
19 pages, 18 figures, submitted to IEEE T-RO
| null | null | null |
cs.RO cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present BioSLAM, a lifelong SLAM framework for learning various new
appearances incrementally and maintaining accurate place recognition for
previously visited areas. Unlike humans, artificial neural networks suffer from
catastrophic forgetting and may forget the previously visited areas when
trained with new arrivals. For humans, researchers have discovered a memory
replay mechanism in the brain that keeps neurons active for previous events.
Inspired by this discovery, BioSLAM designs a gated generative replay
to control the robot's learning behavior based on the feedback rewards.
Specifically, BioSLAM provides a novel dual-memory mechanism for maintenance:
1) a dynamic memory to efficiently learn new observations and 2) a static
memory to balance new-old knowledge. When combined with a visual-/LiDAR- based
SLAM system, the complete processing pipeline can help the agent incrementally
update the place recognition ability, robust to the increasing complexity of
long-term place recognition. We demonstrate BioSLAM in two incremental SLAM
scenarios. In the first scenario, a LiDAR-based agent continuously travels
through a city-scale environment with a 120km trajectory and encounters
different types of 3D geometries (open streets, residential areas, commercial
buildings). We show that BioSLAM can incrementally update the agent's place
recognition ability and outperform the state-of-the-art incremental approach,
Generative Replay, by 24%. In the second scenario, a LiDAR-vision-based agent
repeatedly travels through a campus-scale area on a 4.5km trajectory. BioSLAM
can guarantee place recognition accuracy that outperforms the state-of-the-art
approaches by 15% under different appearances. To our knowledge,
BioSLAM is the first memory-enhanced lifelong SLAM system to help incremental
place recognition in long-term navigation tasks.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 21:22:04 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Yin",
"Peng",
""
],
[
"Abuduweili",
"Abulikemu",
""
],
[
"Zhao",
"Shiqi",
""
],
[
"Liu",
"Changliu",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
new_dataset
| 0.999513 |
2208.14567
|
Amin Heyrani Nobari
|
Amin Heyrani Nobari, Akash Srivastava, Dan Gutfreund, Faez Ahmed
|
LINKS: A dataset of a hundred million planar linkage mechanisms for
data-driven kinematic design
|
Code & Data: https://github.com/ahnobari/LINKS
| null | null | null |
cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce LINKS, a dataset of 100 million one degree of
freedom planar linkage mechanisms and 1.1 billion coupler curves, which is more
than 1000 times larger than any existing database of planar mechanisms and is
not limited to specific kinds of mechanisms such as four-bars, six-bars, etc.,
which are typically what most databases include. LINKS is made up of various
components including 100 million mechanisms, the simulation data for each
mechanism, normalized paths generated by each mechanism, a curated set of
paths, the code used to generate the data and simulate mechanisms, and a live
web demo for interactive design of linkage mechanisms. The curated paths are
provided as a measure for removing biases in the paths generated by mechanisms
that enable a more even design space representation. In this paper, we discuss
the details of how we can generate such a large dataset and how we can overcome
major issues with such scales. To be able to generate such a large dataset we
introduce a new operator to generate 1-DOF mechanism topologies; furthermore,
we take many steps to speed up slow simulations of mechanisms by vectorizing
our simulations and parallelizing our simulator on a large number of threads,
which leads to a simulation 800 times faster than the simple simulation
algorithm. This is necessary given that, on average, only 1 out of 500
generated candidates is valid (and all must be simulated to determine their validity),
which means billions of simulations must be performed for the generation of
this dataset. Then we demonstrate the depth of our dataset through a
bi-directional chamfer distance-based shape retrieval study where we show how
our dataset can be used directly to find mechanisms that can trace paths very
close to desired target paths.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 23:33:05 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Nobari",
"Amin Heyrani",
""
],
[
"Srivastava",
"Akash",
""
],
[
"Gutfreund",
"Dan",
""
],
[
"Ahmed",
"Faez",
""
]
] |
new_dataset
| 0.999353 |
2208.14569
|
Liming Ma
|
Shu Liu, Liming Ma, Ting-Yi Wu, Chaoping Xing
|
A new construction of nonlinear codes via algebraic function fields
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In coding theory, constructing codes with good parameters is one of the most
important and fundamental problems. Though a great many good codes have been
produced, most of them are defined over alphabets of sizes equal to prime
powers. In this paper, we provide a new explicit construction of $(q+1)$-ary
nonlinear codes via algebraic function fields, where $q$ is a prime power. Our
codes are constructed by evaluations of rational functions at all rational
places of the algebraic function field. Compared with algebraic geometry codes,
the main difference is that we allow rational functions to be evaluated at pole
places. After evaluating rational functions from a union of Riemann-Roch
spaces, we obtain a family of nonlinear codes over the alphabet
$\mathbb{F}_{q}\cup \{\infty\}$. It turns out that our codes have better
parameters than those obtained from MDS codes or good algebraic geometry codes
via code alphabet extension and restriction.
|
[
{
"version": "v1",
"created": "Tue, 30 Aug 2022 23:51:55 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Liu",
"Shu",
""
],
[
"Ma",
"Liming",
""
],
[
"Wu",
"Ting-Yi",
""
],
[
"Xing",
"Chaoping",
""
]
] |
new_dataset
| 0.966739 |
2208.14600
|
Zhuang Jia
|
Tianyu Xu, Zhuang Jia, Yijian Zhang, Long Bao, Heng Sun
|
ELSR: Extreme Low-Power Super Resolution Network For Mobile Devices
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the popularity of mobile devices, e.g., smartphone and wearable devices,
lighter and faster model is crucial for the application of video super
resolution. However, most previous lightweight models tend to concentrate on
reducing the latency of model inference on desktop GPUs, which may not be
energy efficient on current mobile devices. In this paper, we propose the Extreme
Low-Power Super Resolution (ELSR) network, which only consumes a small amount of
energy in mobile devices. Pretraining and finetuning methods are applied to
boost the performance of the extremely tiny model. Extensive experiments show
that our method achieves an excellent balance between restoration quality and
power consumption. Finally, we achieve a competitive score of 90.9 with PSNR
27.34 dB and power 0.09 W/30FPS on the target MediaTek Dimensity 9000
platform, ranking 1st place in the Mobile AI & AIM 2022 Real-Time Video
Super-Resolution Challenge.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 02:32:50 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Xu",
"Tianyu",
""
],
[
"Jia",
"Zhuang",
""
],
[
"Zhang",
"Yijian",
""
],
[
"Bao",
"Long",
""
],
[
"Sun",
"Heng",
""
]
] |
new_dataset
| 0.999248 |
2208.14616
|
Kaiping Cui
|
Xia Feng, Kaiping Cui and Liangmin Wang
|
PBAG: A Privacy-Preserving Blockchain-based Authentication Protocol with
Global-updated Commitment in IoV
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Internet of Vehicles(IoV) is increasingly used as a medium to propagate
critical information via establishing connections between entities such as
vehicles and infrastructures. During message transmission, privacy-preserving
authentication is considered as the first line of defence against attackers and
malicious information. To achieve a more secure and stable communication
environment, ever-increasing numbers of blockchain-based authentication schemes
are proposed. At first glance, existing approaches provide robust architectures
and achieve transparent authentication. However, in these schemes, verifiers
must connect to the blockchain network in advance and accomplish the
authentication with smart contracts, which prolongs the latency. To remedy this
limit, we propose a privacy-preserving blockchain-based authentication
protocol(PBAG), where Root Authority(RA) generates a unique evaluation proof
corresponding to the issued certificate for each authorized vehicle. Meanwhile,
RA broadcasts a public global commitment based on all valid certificates.
Instead of querying certificates stored in the blockchain, the vehicle will be
efficiently proved to be an authorized user by utilizing the global commitment
through bilinear pairing. Moreover, our scheme can prevent vehicles equipped
with invalid certificates from accomplishing the authentication, thus avoiding
the time-consuming check of the Certificate Revocation List (CRL). Finally, our
scheme provides privacy properties such as anonymity and unlinkability. It
allows anonymous authentication based on evaluation proofs and achieves
traceability of identity in the event of a dispute. The simulation demonstrates
that the average time of verification is 0.36ms under the batch-enabled
mechanism, outperforming existing schemes by at least 63.7%.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 03:30:38 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Feng",
"Xia",
""
],
[
"Cui",
"Kaiping",
""
],
[
"Wang",
"Liangmin",
""
]
] |
new_dataset
| 0.999654 |
2208.14678
|
Xinrui Guo
|
Xinrui Guo, Xiaoyang Ma, Franz Muller, Kai Ni, Thomas Kampfe, Yongpan
Liu, Vijaykrishnan Narayanan, Xueqing Li
|
Ferroelectric FET-based strong physical unclonable function: a
low-power, high-reliable and reconfigurable solution for Internet-of-Things
security
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hardware security has been a key concern in modern information technologies.
Especially, as the number of Internet-of-Things (IoT) devices grows rapidly, to
protect the device security with low-cost security primitives becomes
essential, among which Physical Unclonable Function (PUF) is a widely-used
solution. In this paper, we propose the first FeFET-based strong PUF exploiting
the cycle-to-cycle (C2C) variation of FeFETs as the entropy source. Based on
the experimental measurements, the proposed PUF shows satisfying performance
including high uniformity, uniqueness, reconfigurability and reliability. To
resist machine-learning attacks, an XOR structure was introduced, and
simulations show that our proposed PUF has resistance to existing attack
models similar to that of traditional arbiter PUFs. Furthermore, our design is shown to be
power-efficient, and highly robust to write voltage, temperature and device
size, which makes it a competitive security solution for Internet-of-Things
edge devices.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 08:05:41 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Guo",
"Xinrui",
""
],
[
"Ma",
"Xiaoyang",
""
],
[
"Muller",
"Franz",
""
],
[
"Ni",
"Kai",
""
],
[
"Kampfe",
"Thomas",
""
],
[
"Liu",
"Yongpan",
""
],
[
"Narayanan",
"Vijaykrishnan",
""
],
[
"Li",
"Xueqing",
""
]
] |
new_dataset
| 0.999548 |
2208.14685
|
Anke Brock
|
Julie Ducasse (UT3, IRIT-ELIPSE, CNRS), Anke Brock (Potioc, LaBRI),
Christophe Jouffrais (IRIT-ELIPSE, CNRS)
|
Accessible Interactive Maps for Visually Impaired Users
| null |
Mobility in Visually Impaired People - Fundamentals and ICT
Assistive Technologies, Springer, 2018
|
10.1007/978-3-319-54446-5_17
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tactile maps are commonly used to give visually impaired users access to
geographical representations. Although those relief maps are efficient tools
for acquisition of spatial knowledge, they present several limitations and
issues such as the need to read braille. Several research projects have been
led during the past three decades in order to improve access to maps using
interactive technologies. In this chapter, we present an exhaustive review of
interactive map prototypes. We classified existing interactive maps into two
categories: Digital Interactive Maps (DIMs) that are displayed on a flat
surface such as a screen; and Hybrid Interactive Maps (HIMs) that include both
a digital and a physical representation. In each family, we identified several
subcategories depending on the technology being used. We compared the
categories and subcategories according to cost, availability and technological
limitations, but also in terms of content, comprehension and interactivity.
Then we reviewed a number of studies showing that those maps can support
spatial learning for visually impaired users. Finally, we identified new
technologies and methods that could improve the accessibility of graphics for
visually impaired users in the future.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 08:28:54 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Ducasse",
"Julie",
"",
"UT3, IRIT-ELIPSE, CNRS"
],
[
"Brock",
"Anke",
"",
"Potioc, LaBRI"
],
[
"Jouffrais",
"Christophe",
"",
"IRIT-ELIPSE, CNRS"
]
] |
new_dataset
| 0.997701 |
2208.14686
|
Dustin Javier Carrion Ojeda
|
Dustin Carri\'on-Ojeda (LISN, TAU), Hong Chen (CST), Adrian El Baz,
Sergio Escalera (CVC), Chaoyu Guan (CST), Isabelle Guyon (LISN, TAU), Ihsan
Ullah (LISN, TAU), Xin Wang (CST), Wenwu Zhu (CST)
|
NeurIPS'22 Cross-Domain MetaDL competition: Design and baseline results
|
Meta-Knowledge Transfer/Communication in Different Systems, Sep 2022,
Grenoble, France
| null | null | null |
cs.LG cs.AI cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the design and baseline results for a new challenge in the
ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on
"cross-domain" meta-learning. Meta-learning aims to leverage experience gained
from previous tasks to solve new tasks efficiently (i.e., with better
performance, little training data, and/or modest computational resources).
While previous challenges in the series focused on within-domain few-shot
learning problems, with the aim of learning efficiently N-way k-shot tasks
(i.e., N class classification problems with k training examples), this
competition challenges the participants to solve "any-way" and "any-shot"
problems drawn from various domains (healthcare, ecology, biology,
manufacturing, and others), chosen for their humanitarian and societal impact.
To that end, we created Meta-Album, a meta-dataset of 40 image classification
datasets from 10 domains, from which we carve out tasks with any number of
"ways" (within the range 2-20) and any number of "shots" (within the range
1-20). The competition is with code submission, fully blind-tested on the
CodaLab challenge platform. The code of the winners will be open-sourced,
enabling the deployment of automated machine learning solutions for few-shot
image classification across several domains.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 08:31:02 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Carrión-Ojeda",
"Dustin",
"",
"LISN, TAU"
],
[
"Chen",
"Hong",
"",
"CST"
],
[
"Baz",
"Adrian El",
"",
"CVC"
],
[
"Escalera",
"Sergio",
"",
"CVC"
],
[
"Guan",
"Chaoyu",
"",
"CST"
],
[
"Guyon",
"Isabelle",
"",
"LISN, TAU"
],
[
"Ullah",
"Ihsan",
"",
"LISN, TAU"
],
[
"Wang",
"Xin",
"",
"CST"
],
[
"Zhu",
"Wenwu",
"",
"CST"
]
] |
new_dataset
| 0.987898 |
2208.14727
|
EPTCS
|
P\'al D\"om\"osi (Debrecen University & Ny\'iregyh\'aza University),
Adama Diene (United Arab Emirates University)
|
A Finite-Automaton Based Stream Cipher As a Quasigroup Based Cipher
|
In Proceedings NCMA 2022, arXiv:2208.13015
|
EPTCS 367, 2022, pp. 81-87
|
10.4204/EPTCS.367.6
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we show that a recently published finite automaton stream
cipher can be considered as a quasigroup based stream cipher. Some additional
properties of the discussed cipher are also given.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 09:30:07 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Dömösi",
"Pál",
"",
"Debrecen University & Nyíregyháza University"
],
[
"Diene",
"Adama",
"",
"United Arab Emirates University"
]
] |
new_dataset
| 0.971204 |
2208.14729
|
EPTCS
|
Franti\v{s}ek Mr\'az (Charles University), Friedrich Otto
(Universit\"at Kassel)
|
Non-Returning Finite Automata With Translucent Letters
|
In Proceedings NCMA 2022, arXiv:2208.13015
|
EPTCS 367, 2022, pp. 143-159
|
10.4204/EPTCS.367.10
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Here we propose a variant of the nondeterministic finite automaton with
translucent letters (NFAwtl) which, after reading and deleting a letter, does
not return to the left end of its tape, but rather continues from the position
of the letter just deleted. When the end-of-tape marker is reached, our
automaton can decide whether to accept, to reject, or to continue, which means
that it again reads the remaining tape contents from the beginning. This type
of automaton, called a non-returning finite automaton with translucent letters
or an nrNFAwtl, is strictly more expressive than the NFAwtl. We study the
expressive capacity of this type of automaton and that of its deterministic
variant. Also we are interested in closure properties of the resulting classes
of languages and in decision problems.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 09:30:44 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Mráz",
"František",
"",
"Charles University"
],
[
"Otto",
"Friedrich",
"",
"Universität Kassel"
]
] |
new_dataset
| 0.99286 |
2208.14730
|
EPTCS
|
Benedek Nagy (Eastern Mediterranean University)
|
Quasi-deterministic 5' -> 3' Watson-Crick Automata
|
In Proceedings NCMA 2022, arXiv:2208.13015
|
EPTCS 367, 2022, pp. 160-176
|
10.4204/EPTCS.367.11
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Watson-Crick (WK) finite automata are working on a Watson-Crick tape, that
is, on a DNA molecule. A double stranded DNA molecule contains two strands,
each having a 5' and a 3' end, and these two strands together form the molecule
with the following properties. The strands have the same length, their 5' to 3'
directions are opposite, and in each position, the two strands have nucleotides
that are complement of each other (by the Watson-Crick complementary relation).
Consequently, WK automata have two reading heads, one for each strand. In
traditional WK automata both heads read the whole input in the same physical
direction, but in 5'->3' WK automata the heads start from the two extremes and
read the input in opposite direction. In sensing 5'->3' WK automata, the
process on the input is finished when the heads meet, and the model is capable
to accept the class of linear context-free languages. Deterministic variants
are weaker, the class named 2detLIN, a proper subclass of linear languages is
accepted by them. Recently, another specific variant, the state-deterministic
sensing 5'->3' WK automata, was investigated, in which the graph of the automaton
has the special property that for each node of the graph, all out edges (if
any) go to a sole node, i.e., for each state there is (at most) one state that
can be reached by a direct transition. It was shown that this concept is
somewhat orthogonal to the usual concept of determinism in case of sensing
5'->3' WK automata. In this paper a new concept, quasi-determinism, is
investigated: in each configuration of a computation (if it is not
finished yet), the next state is uniquely determined although the next
configuration may not be, in case various transitions are enabled at the same
time. We show that this new concept is a common generalisation of the usual
determinism and the state-determinism, i.e., the class of quasi-deterministic
sensing 5'->3' WK automata is a superclass of both of the mentioned other
classes. There are various usual restrictions on WK automata, e.g., stateless
or 1-limited variants. We also prove some hierarchy results among language
classes accepted by various subclasses of quasi-deterministic sensing 5'->3' WK
automata and also some other already known language classes.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 09:31:04 GMT"
}
] | 2022-09-01T00:00:00 |
[
[
"Nagy",
"Benedek",
"",
"Eastern Mediterranean University"
]
] |
new_dataset
| 0.999604 |