id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2108.03492
|
Yizhou Shan
|
Zhiyuan Guo, Yizhou Shan, Xuhao Luo, Yutong Huang, Yiying Zhang
|
Clio: A Hardware-Software Co-Designed Disaggregated Memory System
|
16 pages, 22 figures. Accepted to ASPLOS'22
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Memory disaggregation has attracted great attention recently because of its
benefits in efficient memory utilization and ease of management. So far, memory
disaggregation research has all taken one of two approaches: building/emulating
memory nodes using regular servers or building them using raw memory devices
with no processing power. The former incurs higher monetary cost and faces tail
latency and scalability limitations, while the latter introduces performance,
security, and management problems.
Server-based memory nodes and memory nodes with no processing power are two
extreme approaches. We seek a sweet spot in the middle by proposing a
hardware-based memory disaggregation solution that has the right amount of
processing power at memory nodes. Furthermore, we take a clean-slate approach
by starting from the requirements of memory disaggregation and designing a
memory-disaggregation-native system.
We built Clio, a disaggregated memory system that virtualizes, protects, and
manages disaggregated memory at hardware-based memory nodes. The Clio hardware
includes a new virtual memory system, a customized network system, and a
framework for computation offloading. In building Clio, we not only co-design
OS functionalities, hardware architecture, and the network system, but also
co-design compute nodes and memory nodes. Our FPGA prototype of Clio
demonstrates that each memory node can achieve 100 Gbps throughput and an
end-to-end latency of 2.5 us at the median and 3.2 us at the 99th percentile. Clio
also scales much better and has orders of magnitude lower tail latency than
RDMA. It achieves 1.1x to 3.4x energy savings compared to CPU-based and
SmartNIC-based disaggregated memory systems and is 2.7x faster than
software-based SmartNIC solutions.
|
[
{
"version": "v1",
"created": "Sat, 7 Aug 2021 17:51:39 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Aug 2021 19:10:14 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Jan 2022 21:49:49 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Guo",
"Zhiyuan",
""
],
[
"Shan",
"Yizhou",
""
],
[
"Luo",
"Xuhao",
""
],
[
"Huang",
"Yutong",
""
],
[
"Zhang",
"Yiying",
""
]
] |
new_dataset
| 0.999484 |
2108.06040
|
Yongqi Zhang
|
Yongqi Zhang and Quanming Yao
|
Knowledge Graph Reasoning with Relational Digraph
| null | null |
10.1145/3485447.3512008
| null |
cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Reasoning on the knowledge graph (KG) aims to infer new facts from existing
ones. Methods based on the relational path have shown strong, interpretable,
and transferable reasoning ability. However, paths are naturally limited in
capturing local evidence in graphs. In this paper, we introduce a novel
relational structure, i.e., relational directed graph (r-digraph), which is
composed of overlapped relational paths, to capture the KG's local evidence.
Since r-digraphs are more complex than paths, efficiently constructing
and effectively learning from them is challenging: directly encoding the
r-digraphs does not scale well, and capturing query-dependent information in
r-digraphs is hard. We propose a variant of graph neural network, i.e., RED-GNN, to
address the above challenges. Specifically, RED-GNN makes use of dynamic
programming to recursively encode multiple r-digraphs with shared edges, and
utilizes a query-dependent attention mechanism to select the strongly
correlated edges. We demonstrate that RED-GNN is not only efficient but also
can achieve significant performance gains in both inductive and transductive
reasoning tasks over existing methods. Besides, the learned attention weights
in RED-GNN can exhibit interpretable evidence for KG reasoning.
|
[
{
"version": "v1",
"created": "Fri, 13 Aug 2021 03:27:01 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 07:31:14 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Zhang",
"Yongqi",
""
],
[
"Yao",
"Quanming",
""
]
] |
new_dataset
| 0.99745 |
2108.08708
|
Jakub Sido
|
Jakub Sido, Michal Sej\'ak, Ond\v{r}ej Pra\v{z}\'ak, Miloslav
Konop\'ik, V\'aclav Moravec
|
Czech News Dataset for Semantic Textual Similarity
| null | null | null | null |
cs.CL cs.AI cs.LG cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper describes a novel dataset consisting of sentences with semantic
similarity annotations. The data originate from the journalistic domain in the
Czech language. We describe the process of collecting and annotating the data
in detail. The dataset contains 138,556 human annotations divided into train
and test sets. In total, 485 journalism students participated in the creation
process. To increase the reliability of the test set, we compute the annotation
as an average of 9 individual annotations. We evaluate the quality of the
dataset by measuring inter- and intra-annotator agreement. Besides
agreement numbers, we provide detailed statistics of the collected dataset. We
conclude our paper with a baseline experiment of building a system for
predicting the semantic similarity of sentences. Due to the massive number of
training annotations (116,956), the model can perform significantly better than
an average annotator (Pearson's correlation coefficient of 0.92 versus 0.86).
|
[
{
"version": "v1",
"created": "Thu, 19 Aug 2021 14:20:17 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Aug 2021 07:12:06 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Jan 2022 10:28:54 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Sido",
"Jakub",
""
],
[
"Seják",
"Michal",
""
],
[
"Pražák",
"Ondřej",
""
],
[
"Konopík",
"Miloslav",
""
],
[
"Moravec",
"Václav",
""
]
] |
new_dataset
| 0.999067 |
2111.02985
|
Xiao Wang
|
Jie Li (1), Xiao Wang (1), Zhi Liu (2), Qiyue Li (3) ((1) School of
Computer and Information, Hefei University of Technology, China, (2)
Department of Mathematical and Systems Engineering, Shizuoka University,
Japan, (3) School of Electrical Engineering and Automation, Hefei University
of Technology Hefei, China)
|
A QoE Model in Point Cloud Video Streaming
|
The thesis still needs to be revised. There are some problems in the
structure of the thesis
| null | null | null |
cs.MM eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud video has been widely used by augmented reality (AR) and virtual
reality (VR) applications as it allows users to have an immersive experience of
six degrees of freedom (6DoF). Yet there is still a lack of research on
quality of experience (QoE) models for point cloud video streaming, leaving
streaming systems without an optimization metric. Besides, the position and
color information contained in each pixel of a point cloud video, and the
viewport distance effect caused by the 6DoF viewing procedure, mean that
traditional objective quality evaluation metrics cannot be directly used in
point cloud video streaming systems. In this paper we first analyze the
subjective and objective factors related to the QoE model. Then an experimental
system simulating point cloud video streaming is set up and detailed subjective
quality evaluation experiments are
carried out. Based on collected mean opinion score (MOS) data, we propose a QoE
model for point cloud video streaming. We also verify the model by actual
subjective scoring, and the results show that the proposed QoE model can
accurately reflect users' visual perception. We also make the experimental
database public to promote the QoE research of point cloud video streaming.
|
[
{
"version": "v1",
"created": "Thu, 4 Nov 2021 16:29:43 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Nov 2021 10:55:30 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Nov 2021 04:21:04 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Jan 2022 07:19:38 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Li",
"Jie",
""
],
[
"Wang",
"Xiao",
""
],
[
"Liu",
"Zhi",
""
],
[
"Li",
"Qiyue",
""
]
] |
new_dataset
| 0.968013 |
2111.03654
|
Pavel Panteleev
|
Pavel Panteleev, Gleb Kalachev
|
Asymptotically Good Quantum and Locally Testable Classical LDPC Codes
|
Updated the introduction, corrected some misprints, clarified some
proofs, added some new bibliography including arXiv:2005.01045 containing an
independent construction of good LTCs
| null | null | null |
cs.IT math.IT quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We study classical and quantum LDPC codes of constant rate obtained by the
lifted product construction over non-abelian groups. We show that the obtained
families of quantum LDPC codes are asymptotically good, which proves the qLDPC
conjecture. Moreover, we show that the produced classical LDPC codes are also
asymptotically good and locally testable with constant query and soundness
parameters, which proves a well-known conjecture in the field of locally
testable codes.
|
[
{
"version": "v1",
"created": "Fri, 5 Nov 2021 17:59:15 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 17:59:32 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Panteleev",
"Pavel",
""
],
[
"Kalachev",
"Gleb",
""
]
] |
new_dataset
| 0.99974 |
2112.01218
|
Wei Ma
|
Wei Ma, Mengjie Zhao, Ezekiel Soremekun, Qiang Hu, Jie Zhang, Mike
Papadakis, Maxime Cordy, Xiaofei Xie, Yves Le Traon
|
GraphCode2Vec: Generic Code Embedding via Lexical and Program Dependence
Analyses
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code embedding is a keystone in the application of machine learning on
several Software Engineering (SE) tasks. To effectively support a plethora of
SE tasks, the embedding needs to capture program syntax and semantics in a way
that is generic. To this end, we propose the first self-supervised pre-training
approach (called GraphCode2Vec) which produces task-agnostic embedding of
lexical and program dependence features. GraphCode2Vec achieves this via a
synergistic combination of code analysis and Graph Neural Networks.
GraphCode2Vec is generic, it allows pre-training, and it is applicable to
several SE downstream tasks. We evaluate the effectiveness of GraphCode2Vec on
four (4) tasks (method name prediction, solution classification, mutation
testing and overfitted patch classification), and compare it with four (4)
similarly generic code embedding baselines (Code2Seq, Code2Vec, CodeBERT,
GraphCodeBERT) and 7 task-specific, learning-based methods. In particular,
GraphCode2Vec is more effective than both generic and task-specific
learning-based baselines. It is also complementary and comparable to
GraphCodeBERT (a larger and more complex model). We also demonstrate through a
probing and ablation study that GraphCode2Vec learns lexical and program
dependence features and that self-supervised pre-training improves
effectiveness.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 13:39:10 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 16:39:11 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Ma",
"Wei",
""
],
[
"Zhao",
"Mengjie",
""
],
[
"Soremekun",
"Ezekiel",
""
],
[
"Hu",
"Qiang",
""
],
[
"Zhang",
"Jie",
""
],
[
"Papadakis",
"Mike",
""
],
[
"Cordy",
"Maxime",
""
],
[
"Xie",
"Xiaofei",
""
],
[
"Traon",
"Yves Le",
""
]
] |
new_dataset
| 0.964737 |
2201.06159
|
Christian Limberg
|
Christian Limberg, Andrew Melnik, Augustin Harter, Helge Ritter
|
YOLO -- You only look 10647 times
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
With this work we are explaining the "You Only Look Once" (YOLO) single-stage
object detection approach as a parallel classification of 10647 fixed region
proposals. We support this view by showing that each of YOLOs output pixel is
attentive to a specific sub-region of previous layers, comparable to a local
region proposal. This understanding reduces the conceptual gap between
YOLO-like single-stage object detection models, RCNN-like two-stage region
proposal based models, and ResNet-like image classification models. In
addition, we created interactive exploration tools for a better visual
understanding of the YOLO information processing streams:
https://limchr.github.io/yolo_visualization
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 23:54:59 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 12:44:11 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Limberg",
"Christian",
""
],
[
"Melnik",
"Andrew",
""
],
[
"Harter",
"Augustin",
""
],
[
"Ritter",
"Helge",
""
]
] |
new_dataset
| 0.996408 |
2201.08050
|
Sheng Xu
|
Sheng Xu, Yanjing Li, Teli Ma, Bohan Zeng, Baochang Zhang, Peng Gao
and Jinhu Lv
|
TerViT: An Efficient Ternary Vision Transformer
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Vision transformers (ViTs) have demonstrated great potential in various
visual tasks, but suffer from expensive computational and memory cost problems
when deployed on resource-constrained devices. In this paper, we introduce a
ternary vision transformer (TerViT) to ternarize the weights in ViTs, which are
challenged by the large loss surface gap between real-valued and ternary
parameters. To address the issue, we introduce a progressive training scheme by
first training 8-bit transformers and then TerViT, and achieve a better
optimization than conventional methods. Furthermore, we introduce channel-wise
ternarization, partitioning each matrix into channels, each of which
has its own distribution and ternarization interval. We apply our methods
to popular DeiT and Swin backbones, and extensive results show that we can
achieve competitive performance. For example, TerViT can quantize Swin-S to a
13.1 MB model size while achieving above 79% Top-1 accuracy on the ImageNet dataset.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 08:29:19 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 05:22:32 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Xu",
"Sheng",
""
],
[
"Li",
"Yanjing",
""
],
[
"Ma",
"Teli",
""
],
[
"Zeng",
"Bohan",
""
],
[
"Zhang",
"Baochang",
""
],
[
"Gao",
"Peng",
""
],
[
"Lv",
"Jinhu",
""
]
] |
new_dataset
| 0.995206 |
2201.08099
|
Thomas H\"utter
|
Thomas H\"utter, Nikolaus Augsten, Christoph M. Kirsch, Michael J.
Carey, Chen Li
|
JEDI: These aren't the JSON documents you're looking for... (Extended
Version*)
|
This is an extended version of an upcoming publication at ACM SIGMOD
2022. Please cite the original SIGMOD version
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
The JavaScript Object Notation (JSON) is a popular data format used in
document stores to natively support semi-structured data. In this paper, we
address the problem of JSON similarity lookup queries: given a query document
and a distance threshold $\tau$, retrieve all JSON documents that are within
$\tau$ from the query document. Due to its recursive definition, JSON data are
naturally represented as trees. Different from other hierarchical formats such
as XML, JSON supports both ordered and unordered sibling collections within a
single document. This feature poses a new challenge to the tree model and
distance computation. We propose JSON tree, a lossless tree representation of
JSON documents, and define the JSON Edit Distance (JEDI), the first edit-based
distance measure for JSON documents. We develop an algorithm, called QuickJEDI,
for computing JEDI by leveraging a new technique to prune expensive sibling
matchings. It outperforms a baseline algorithm by an order of magnitude in
runtime. To boost the performance of JSON similarity queries, we introduce an
index called JSIM and a highly effective upper bound based on tree sorting. Our
algorithm for the upper bound runs in $O(n \tau)$ time and $O(n + \tau \log n)$
space, which substantially improves the previous best bound of $O(n^2)$ time
and $O(n \log n)$ space (where $n$ is the tree size). Our experimental
evaluation shows that our solution scales to databases with millions of
documents and JSON trees with tens of thousands of nodes.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 10:16:22 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 13:08:13 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Hütter",
"Thomas",
""
],
[
"Augsten",
"Nikolaus",
""
],
[
"Kirsch",
"Christoph M.",
""
],
[
"Carey",
"Michael J.",
""
],
[
"Li",
"Chen",
""
]
] |
new_dataset
| 0.953956 |
2201.08425
|
Xiangnan Yin
|
Xiangnan Yin and Liming Chen
|
FaceOcc: A Diverse, High-quality Face Occlusion Dataset for Human Face
Extraction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Occlusions often occur in face images in the wild, hindering face-related
tasks such as landmark detection, 3D reconstruction, and face recognition. It
is beneficial to extract face regions from unconstrained face images
accurately. However, current face segmentation datasets suffer from small data
volumes, few occlusion types, low resolution, and imprecise annotation,
limiting the performance of data-driven-based algorithms. This paper proposes a
novel face occlusion dataset with manually labeled face occlusions from the
CelebA-HQ and the internet. The occlusion types cover sunglasses, spectacles,
hands, masks, scarves, microphones, etc. To the best of our knowledge, it is by
far the largest and most comprehensive face occlusion dataset. Combining it
with the attribute mask in CelebAMask-HQ, we trained a straightforward face
segmentation model but obtained SOTA performance, convincingly demonstrating
the effectiveness of the proposed dataset.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 19:44:18 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Yin",
"Xiangnan",
""
],
[
"Chen",
"Liming",
""
]
] |
new_dataset
| 0.997612 |
2201.08441
|
Thomas Vogel
|
Laura Wartschinski, Yannic Noller, Thomas Vogel, Timo Kehrer, Lars
Grunske
|
VUDENC: Vulnerability Detection with Deep Learning on a Natural Codebase
for Python
|
Accepted Manuscript
|
Information and Software Technology, Volume 144, April 2022,
106809
|
10.1016/j.infsof.2021.106809
| null |
cs.CR cs.LG cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Context: Identifying potential vulnerable code is important to improve the
security of our software systems. However, the manual detection of software
vulnerabilities requires expert knowledge and is time-consuming, and must be
supported by automated techniques. Objective: Such automated vulnerability
detection techniques should achieve a high accuracy, point developers directly
to the vulnerable code fragments, scale to real-world software, generalize
across the boundaries of a specific software project, and require no or only
moderate setup or configuration effort. Method: In this article, we present
VUDENC (Vulnerability Detection with Deep Learning on a Natural Codebase), a
deep learning-based vulnerability detection tool that automatically learns
features of vulnerable code from a large and real-world Python codebase. VUDENC
applies a word2vec model to identify semantically similar code tokens and to
provide a vector representation. A network of long short-term memory (LSTM)
cells is then used to classify vulnerable code token sequences at a
fine-grained level, highlight the specific areas in the source code that are
likely to contain vulnerabilities, and provide confidence levels for its
predictions. Results: To evaluate VUDENC, we used 1,009 vulnerability-fixing
commits from different GitHub repositories that contain seven different types
of vulnerabilities (SQL injection, XSS, Command injection, XSRF, Remote code
execution, Path disclosure, Open redirect) for training. In the experimental
evaluation, VUDENC achieves a recall of 78%-87%, a precision of 82%-96%, and an
F1 score of 80%-90%. VUDENC's code, the datasets for the vulnerabilities, and
the Python corpus for the word2vec model are available for reproduction.
Conclusions: Our experimental results suggest...
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 20:29:22 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Wartschinski",
"Laura",
""
],
[
"Noller",
"Yannic",
""
],
[
"Vogel",
"Thomas",
""
],
[
"Kehrer",
"Timo",
""
],
[
"Grunske",
"Lars",
""
]
] |
new_dataset
| 0.998483 |
2201.08460
|
Ana Aleksandric
|
Ana Aleksandric, Mercy Jesuloluwa Obasanya, Sarah Melcher, Shirin
Nilizadeh, Gabriela Mustata Wilson
|
Your Tweets Matter: How Social Media Sentiments Associate with COVID-19
Vaccination Rates in the US
| null | null | null | null |
cs.SI stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Objective: The aims of the study were to examine the association between
social media sentiments surrounding COVID-19 vaccination and the effects on
vaccination rates in the United States (US), as well as other contributing
factors to the COVID-19 vaccine hesitancy.
Method: The dataset used in this study consists of vaccine-related English
tweets collected in real-time from January 4 - May 11, 2021, posted within the
US, as well as health literacy (HL), social vulnerability index (SVI), and
vaccination rates at the state level.
Results: The findings presented in this study demonstrate a significant
correlation between the sentiments of the tweets and the vaccination rate in
the US. The results also suggest a significant negative association between HL
and SVI and that the state demographics correlate with both HL and SVI.
Discussion: Social media activity provides insights into public opinion about
vaccinations and helps determine the required public health interventions to
increase the vaccination rate in the US.
Conclusion: Health literacy, social vulnerability index and monitoring of
social media sentiments need to be considered in public health interventions as
part of vaccination campaigns.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 21:40:33 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Aleksandric",
"Ana",
""
],
[
"Obasanya",
"Mercy Jesuloluwa",
""
],
[
"Melcher",
"Sarah",
""
],
[
"Nilizadeh",
"Shirin",
""
],
[
"Wilson",
"Gabriela Mustata",
""
]
] |
new_dataset
| 0.999482 |
2201.08470
|
Upinder Kaur
|
Upinder Kaur, Haozhe Zhou, Xiaxin Shen, Byung-Cheol Min, Richard M.
Voyles
|
RoboMal: Malware Detection for Robot Network Systems
|
Published in the proceedings of 2021 5th IEEE International
Conference on Robotic Computing (IRC)
| null | null | null |
cs.RO cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Robot systems are increasingly integrating into numerous avenues of modern
life. From cleaning houses to providing guidance and emotional support, robots
now work directly with humans. Due to their far-reaching applications and
progressively complex architecture, they are being targeted by adversarial
attacks such as sensor-actuator attacks, data spoofing, malware, and network
intrusion. Therefore, security for robotic systems has become crucial. In this
paper, we address the underserved area of malware detection in robotic
software. Since robots work in close proximity to humans, often with direct
interactions, malware could have life-threatening impacts. Hence, we propose
the RoboMal framework of static malware detection on binary executables to
detect malware before it gets a chance to execute. Additionally, we address the
great paucity of data in this space by providing the RoboMal dataset comprising
controller executables of a small-scale autonomous car. The performance of the
framework is compared against widely used supervised learning models: GRU, CNN,
and ANN. Notably, the LSTM-based RoboMal model outperforms the other models
with an accuracy of 85% and precision of 87% in 10-fold cross-validation, hence
proving the effectiveness of the proposed framework.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 22:11:38 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Kaur",
"Upinder",
""
],
[
"Zhou",
"Haozhe",
""
],
[
"Shen",
"Xiaxin",
""
],
[
"Min",
"Byung-Cheol",
""
],
[
"Voyles",
"Richard M.",
""
]
] |
new_dataset
| 0.999613 |
2201.08475
|
Rishov Sarkar
|
Stefan Abi-Karam, Yuqi He, Rishov Sarkar, Lakshmi Sathidevi, Zihang
Qiao, Cong Hao
|
GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration
|
10 pages, 9 figures. The first three authors contributed equally.
Submitted to FCCM 2022
| null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph neural networks (GNNs) have recently exploded in popularity thanks to
their broad applicability to ubiquitous graph-related problems such as quantum
chemistry, drug discovery, and high energy physics. However, meeting demand for
novel GNN models and fast inference simultaneously is challenging because of
the gap between the difficulty in developing efficient FPGA accelerators and
the rapid pace of creation of new GNN models. Prior art focuses on the
acceleration of specific classes of GNNs but lacks the generality to work
across existing models or to extend to new and emerging GNN models. In this
work, we propose a generic GNN acceleration framework using High-Level
Synthesis (HLS), named GenGNN, with two-fold goals. First, we aim to deliver
ultra-fast GNN inference without any graph pre-processing for real-time
requirements. Second, we aim to support a diverse set of GNN models with the
extensibility to flexibly adapt to new models. The framework features an
optimized message-passing structure applicable to all models, combined with a
rich library of model-specific components. We verify our implementation
on-board on the Xilinx Alveo U50 FPGA and observe a speed-up of up to 25x
against a CPU (6226R) baseline and 13x against a GPU (A6000) baseline. Our HLS code
will be open-sourced on GitHub upon acceptance.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 22:30:59 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Abi-Karam",
"Stefan",
""
],
[
"He",
"Yuqi",
""
],
[
"Sarkar",
"Rishov",
""
],
[
"Sathidevi",
"Lakshmi",
""
],
[
"Qiao",
"Zihang",
""
],
[
"Hao",
"Cong",
""
]
] |
new_dataset
| 0.99142 |
2201.08495
|
Athar Sefid
|
Athar Sefid, C Lee Giles
|
SciBERTSUM: Extractive Summarization for Scientific Documents
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The summarization literature focuses on the summarization of news articles.
The news articles in the CNN-DailyMail are relatively short documents with
about 30 sentences per document on average. We introduce SciBERTSUM, our
summarization framework designed for the summarization of long documents like
scientific papers with more than 500 sentences. SciBERTSUM extends BERTSUM to
long documents by 1) adding a section embedding layer to include section
information in the sentence vector and 2) applying a sparse attention mechanism
where each sentence attends locally to nearby sentences and only a small
number of sentences attend globally to all other sentences. We used slides
generated by the authors of scientific papers as reference summaries since they
contain the technical details from the paper. The results show the superiority
of our model in terms of ROUGE scores.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 00:29:48 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Sefid",
"Athar",
""
],
[
"Giles",
"C Lee",
""
]
] |
new_dataset
| 0.96859 |
2201.08548
|
Satya Bagchi
|
Ankan Shaw, Sanjit Bhowmick, Satya Bagchi
|
Classification and count of binary linear complementary dual group codes
|
11 pages
| null | null | null |
cs.IT math.GR math.IT math.RA
|
http://creativecommons.org/licenses/by/4.0/
|
We establish a complete classification of binary group codes with
complementary duals for a finite group and explicitly determine the number of
linear complementary dual (LCD) cyclic group codes by using cyclotomic cosets.
The dimension and the minimum distance for LCD group codes are explored.
Finally, we find a connection between LCD MDS group codes and maximal ideals.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 05:43:05 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Shaw",
"Ankan",
""
],
[
"Bhowmick",
"Sanjit",
""
],
[
"Bagchi",
"Satya",
""
]
] |
new_dataset
| 0.996611 |
2201.08552
|
Jan Cao
|
Jan Cao
|
The Collector, the Glitcher, and the Denkbilder: Towards a Critical
Aesthetic Theory of Video Games
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
To examine the aesthetics of video games, this paper proposes to consider
video games as a contemporary multi-media version of the so-called "Denkbild,"
or "thought-image," an experimental genre of philosophical writing employed by
members of the Frankfurt School. A poetic mode of writing, the Denkbild takes
literary snapshots of philosophical, political, and cultural insights that
interrupt and challenge the enigmatic form of traditional philosophical
thinking. Thinking of video games through the lens of the Denkbild allows us to
understand the diversity, conditionality, and incommensurability of video games
as a form without reducing it to separate pieces to be examined within their
respective disciplines too quickly. By presenting two snapshots of video game
players, the collector and the glitcher, this paper argues that the concept of
Denkbild allows us to better understand the relationships between game, gamers,
and the socio-political context in terms of unexpected bonds, accidental
breakthroughs, and moments of absolute freedom.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 06:06:03 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Cao",
"Jan",
""
]
] |
new_dataset
| 0.961163 |
2201.08564
|
Rushit Dave
|
Rushit Dave, Naeem Seliya, Laura Pryor, Mounika Vanamala, Evelyn
Sowells, Jacob mallet
|
Hold On and Swipe: A Touch-Movement Based Continuous Authentication
Schema based on Machine Learning
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years the amount of secure information being stored on mobile
devices has grown exponentially. However, current security schemas for mobile
devices such as physiological biometrics and passwords are not secure enough to
protect this information. Behavioral biometrics have been heavily researched as
a possible solution to this security deficiency for mobile devices. This study
aims to contribute to this innovative research by evaluating the performance of
a multimodal behavioral biometric based user authentication scheme using touch
dynamics and phone movement. This study uses a fusion of two popular publicly
available datasets the Hand Movement Orientation and Grasp dataset and the
BioIdent dataset. This study evaluates our model performance using three common
machine learning algorithms: Random Forest, Support Vector Machine, and
K-Nearest Neighbor, reaching accuracy rates as high as 82%, with each
algorithm's performance reported for all success metrics.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 06:51:46 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Dave",
"Rushit",
""
],
[
"Seliya",
"Naeem",
""
],
[
"Pryor",
"Laura",
""
],
[
"Vanamala",
"Mounika",
""
],
[
"Sowells",
"Evelyn",
""
],
[
"mallet",
"Jacob",
""
]
] |
new_dataset
| 0.985208 |
2201.08565
|
Rushit Dave
|
Rushit Dave, Naeem Seliya, Mounika Vanamala, Wei Tee
|
Human Activity Recognition models using Limited Consumer Device Sensors
and Machine Learning
| null | null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Human activity recognition has grown in popularity with its increase of
applications within daily lifestyles and medical environments. The goal of
having efficient and reliable human activity recognition brings benefits such
as accessible use and better allocation of resources; especially in the medical
industry. Activity recognition and classification can be obtained using many
sophisticated data recording setups, but there is also a need in observing how
performance varies among models that are strictly limited to using sensor data
from easily accessible devices: smartphones and smartwatches. This paper
presents the findings of different models that are limited to train using such
sensors. The models are trained using either the k-Nearest Neighbor, Support
Vector Machine, or Random Forest classifier algorithms. Performance and
evaluations are done by comparing various model performances using different
combinations of mobile sensors and how they affect recognitive performances of
models. Results show promise for models trained strictly using limited sensor
data collected from only smartphones and smartwatches coupled with traditional
machine learning concepts and algorithms.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 06:54:05 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Dave",
"Rushit",
""
],
[
"Seliya",
"Naeem",
""
],
[
"Vanamala",
"Mounika",
""
],
[
"Tee",
"Wei",
""
]
] |
new_dataset
| 0.967467 |
2201.08605
|
Sheikh Salman Hassan
|
Sheikh Salman Hassan, Do Hyeon Kim, Yan Kyaw Tun, Nguyen H. Tran,
Walid Saad, Choong Seon Hong
|
Seamless and Energy Efficient Maritime Coverage in Coordinated 6G
Space-Air-Sea Non-Terrestrial Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-terrestrial networks (NTNs), which integrate space and aerial networks
with terrestrial systems, are a key area in the emerging sixth-generation (6G)
wireless networks. As part of 6G, NTNs must provide pervasive connectivity to a
wide range of devices, including smartphones, vehicles, sensors, robots, and
maritime users. However, due to the high mobility and deployment of NTNs,
managing the space-air-sea (SAS) NTN resources, i.e., energy, power, and
channel allocation, is a major challenge. The design of a SAS-NTN for
energy-efficient resource allocation is investigated in this study. The goal is
to maximize system energy efficiency (EE) by collaboratively optimizing user
equipment (UE) association, power control, and unmanned aerial vehicle (UAV)
deployment. Given the limited payloads of UAVs, this work focuses on minimizing
the total energy cost of UAVs (trajectory and transmission) while meeting EE
requirements. A mixed-integer nonlinear programming problem is proposed,
followed by the development of an algorithm that decomposes the problem and
solves each subproblem in a distributed manner. The binary (UE association) and continuous (power,
deployment) variables are separated using the Bender decomposition (BD), and
then the Dinkelbach algorithm (DA) is used to convert fractional programming
into an equivalent solvable form in the subproblem. A standard optimization
solver is utilized to deal with the complexity of the master problem for binary
variables. The alternating direction method of multipliers (ADMM) algorithm is
used to solve the subproblem for the continuous variables. Our proposed
algorithm provides a suboptimal solution, and simulation results demonstrate
that the proposed algorithm achieves better EE than baselines.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 09:27:23 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Hassan",
"Sheikh Salman",
""
],
[
"Kim",
"Do Hyeon",
""
],
[
"Tun",
"Yan Kyaw",
""
],
[
"Tran",
"Nguyen H.",
""
],
[
"Saad",
"Walid",
""
],
[
"Hong",
"Choong Seon",
""
]
] |
new_dataset
| 0.966122 |
2201.08622
|
Sean MacAvaney
|
Sean MacAvaney, Craig Macdonald, Iadh Ounis
|
Reproducing Personalised Session Search over the AOL Query Log
|
ECIR 2022 (reproducibility)
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite its troubled past, the AOL Query Log continues to be an important
resource to the research community -- particularly for tasks like search
personalisation. When using the query log for these ranking experiments, little
attention is usually paid to the document corpus. Recent work typically uses a
corpus containing versions of the documents collected long after the log was
produced. Given that web documents are prone to change over time, we study the
differences present between a version of the corpus containing documents as
they appeared in 2017 (which has been used by several recent works) and a new
version we construct that includes documents close to as they appeared at the
time the query log was produced (2006). We demonstrate that this new version of
the corpus has a far higher coverage of documents present in the original log
(93%) than the 2017 version (55%). Among the overlapping documents, the content
often differs substantially. Given these differences, we re-conduct session
search experiments that originally used the 2017 corpus and find that when
using our corpus for training or evaluation, system performance improves. We
place the results in context by introducing recent adhoc ranking baselines. We
also confirm the navigational nature of the queries in the AOL corpus by
showing that including the URL substantially improves performance across a
variety of models. Our version of the corpus can be easily reconstructed by
other researchers and is included in the ir-datasets package.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 10:17:27 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"MacAvaney",
"Sean",
""
],
[
"Macdonald",
"Craig",
""
],
[
"Ounis",
"Iadh",
""
]
] |
new_dataset
| 0.971319 |
2201.08688
|
Abdulrahman Alruban
|
Abdulrahman Alruban, Hind Alobaidi, Nathan Clarke, Fudong Li
|
Physical Activity Recognition by Utilising Smartphone Sensor Signals
|
10 pages, 10 figures, conference
| null |
10.5220/0007271903420351
| null |
cs.HC cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Human physical motion activity identification has many potential applications
in various fields, such as medical diagnosis, military sensing, sports
analysis, and human-computer security interaction. With the recent advances in
smartphones and wearable technologies, it has become common for such devices to
have embedded motion sensors that are able to sense even small body movements.
This study collected human activity data from 60 participants across two
different days for a total of six activities recorded by gyroscope and
accelerometer sensors in a modern smartphone. The paper investigates to what
extent different activities can be identified by utilising machine learning
algorithms using approaches such as majority algorithmic voting. More analyses
are also provided that reveal which time- and frequency-domain based features
were best able to identify individuals' motion activity types. Overall, the
proposed approach achieved a classification accuracy of 98 percent in
identifying four different activities: walking, walking upstairs, walking
downstairs, and sitting while the subject is calm and doing a typical
desk-based activity.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 09:58:52 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Alruban",
"Abdulrahman",
""
],
[
"Alobaidi",
"Hind",
""
],
[
"Clarke",
"Nathan",
""
],
[
"Li",
"Fudong",
""
]
] |
new_dataset
| 0.998441 |
2201.08718
|
Hajar Hasannejadasl
|
Hajar Hasannejadasl, Cheryl Roumen, Yolba Smit, Andre Dekker, Rianne
Fijten
|
Health literacy in e-oncology care: challenges and strategies
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Given the impact of health literacy (HL) on patients' outcomes, limited health
literacy (LHL) is a major barrier in cancer care globally. HL refers to the
degree to which an individual is able to acquire, process, and comprehend
information in a way that allows them to be actively involved in their health decisions.
Previous research found that almost half of the population in developed
countries have difficulties in understanding health related information. With
the gradual shift toward the shared decision making (SDM) process and digital
transformation in oncology, the need for dealing with low HL issues is more
crucial. Decision making in oncology often carries considerable consequences
for patients' lives, requiring patients to understand complex information and
to compare treatment methods in light of their own values. How health
information is perceived by patients is influenced by various factors,
including patients' characteristics and the way information is presented to
patients. Based on the findings, identifying patients with low HL and using
simple data visualizations are the best practices to help patients and
clinicians deal with LHL. Furthermore, preparing reliable sources of
information in tools such as patient decision aids (PDA), as well as involving
HL mediators in consultation sessions supports patients to make sense of
complex information.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 14:34:57 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Hasannejadasl",
"Hajar",
""
],
[
"Roumen",
"Cheryl",
""
],
[
"Smit",
"Yolba",
""
],
[
"Dekker",
"Andre",
""
],
[
"Fijten",
"Rianne",
""
]
] |
new_dataset
| 0.979677 |
2201.08724
|
Alexander Dallmann
|
Alexander Dallmann, Johannes Kohlmann, Daniel Zoller and Andreas Hotho
|
Sequential Item Recommendation in the MOBA Game Dota 2
| null | null | null | null |
cs.LG cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiplayer Online Battle Arena (MOBA) games such as Dota 2 attract hundreds
of thousands of players every year. Despite the large player base, it is still
important to attract new players to prevent the community of a game from
becoming inactive. Entering MOBA games is, however, often demanding, requiring
the player to learn numerous skills at once. An important factor of success is
buying the correct items which forms a complex task depending on various
in-game factors such as already purchased items, the team composition, or
available resources. A recommendation system can support players by reducing
the mental effort required to choose a suitable item, helping, e.g., newer
players or players returning to the game after a longer break, to focus on
other aspects of the game. Since Sequential Item Recommendation (SIR) has
proven to be effective in various domains (e.g. e-commerce, movie
recommendation or playlist continuation), we explore the applicability of
well-known SIR models in the context of purchase recommendations in Dota 2. To
facilitate this research, we collect, analyze and publish Dota-350k, a new
large dataset based on recent Dota 2 matches. We find that SIR models can be
employed effectively for item recommendation in Dota 2. Our results show that
models that consider the order of purchases are the most effective. In contrast
to other domains, we find RNN-based models to outperform the more recent
Transformer-based architectures on Dota-350k.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 14:19:17 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Dallmann",
"Alexander",
""
],
[
"Kohlmann",
"Johannes",
""
],
[
"Zoller",
"Daniel",
""
],
[
"Hotho",
"Andreas",
""
]
] |
new_dataset
| 0.997923 |
2201.08746
|
Jan Cychnerski
|
Jan Cychnerski, Tomasz Dziubich, Adam Brzeski
|
ERS: a novel comprehensive endoscopy image dataset for machine learning,
compliant with the MST 3.0 specification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The article presents a new multi-label comprehensive image dataset from
flexible endoscopy, colonoscopy and capsule endoscopy, named ERS. The
collection has been labeled according to the full medical specification of
'Minimum Standard Terminology 3.0' (MST 3.0), describing all possible findings
in the gastrointestinal tract (104 possible labels), extended with an
additional 19 labels useful in common machine learning applications.
The dataset contains around 6000 precisely and 115,000 approximately labeled
frames from endoscopy videos, 3600 precise and 22,600 approximate segmentation
masks, and 1.23 million unlabeled frames from flexible and capsule endoscopy
videos. The labeled data cover almost entirely the MST 3.0 standard. The data
came from 1520 videos of 1135 patients.
Additionally, this paper proposes and describes four exemplary experiments in
gastrointestinal image classification task performed using the created dataset.
The obtained results indicate the high usefulness and flexibility of the
dataset in training and testing machine learning algorithms in the field of
endoscopic data analysis.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 15:39:45 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Cychnerski",
"Jan",
""
],
[
"Dziubich",
"Tomasz",
""
],
[
"Brzeski",
"Adam",
""
]
] |
new_dataset
| 0.999827 |
2201.08817
|
Matej Troj\'ak
|
Matej Troj\'ak, David \v{S}afr\'anek, Lubo\v{s} Brim
|
Biochemical Space Language in Relation to Multiset Rewriting Systems
|
9 pages, 8 figures
| null | null | null |
cs.LO cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This technical report relates Biochemical Space Language (BCSL) to Multiset
rewriting systems (MRS). For a BCSL model, the semantics are defined in terms
of transition systems, while for an MRS, they are defined in terms of a set of
runs. In this report, we relate BCSL to MRS by first showing how the transition
system is related to a set of runs and consequently showing how for every BCSL
model, an MRS can be constructed such that both represent the same set of runs.
The motivation for this step is to establish BCSL in the context of a more
general rewriting framework and to benefit from properties shown for such systems. Finally,
we show that regulations defined for MRS can be consequently used in the BCSL
model.
|
[
{
"version": "v1",
"created": "Wed, 12 Jan 2022 15:08:36 GMT"
}
] | 2022-01-24T00:00:00 |
[
[
"Troják",
"Matej",
""
],
[
"Šafránek",
"David",
""
],
[
"Brim",
"Luboš",
""
]
] |
new_dataset
| 0.999695 |
1903.01006
|
Rahul Shome
|
Rahul Shome, Daniel Nakhimovich, Kostas E. Bekris
|
Pushing the Boundaries of Asymptotic Optimality in Integrated Task and
Motion Planning
| null |
Algorithmic Foundations of Robotics XIV (2021) 467-484
|
10.1007/978-3-030-66723-8_28
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Integrated task and motion planning problems describe a multi-modal state
space, which is often abstracted as a set of smooth manifolds that are
connected via sets of transition states. One approach to solving such problems
is to sample reachable states in each of the manifolds, while simultaneously
sampling transition states. Prior work has shown that in order to achieve
asymptotically optimal (AO) solutions for such piecewise-smooth task planning
problems, it is sufficient to double the connection radius required for AO
sampling-based motion planning. This was shown under the assumption that the
transition sets themselves are smooth. The current work builds upon this result
and demonstrates that it is sufficient to use the same connection radius as for
standard AO motion planning. Furthermore, the current work studies the case
that the transition sets are non-smooth boundary points of the valid state
space, which is frequently the case in practice, such as when a gripper grasps
an object. This paper generalizes the notion of clearance that is typically
assumed in motion and task planning to include such individual, potentially
non-smooth transition states. It is shown that asymptotic optimality is
retained under this generalized regime.
|
[
{
"version": "v1",
"created": "Sun, 3 Mar 2019 22:38:07 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Mar 2019 19:13:55 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Apr 2020 23:53:58 GMT"
},
{
"version": "v4",
"created": "Tue, 19 May 2020 00:55:29 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Shome",
"Rahul",
""
],
[
"Nakhimovich",
"Daniel",
""
],
[
"Bekris",
"Kostas E.",
""
]
] |
new_dataset
| 0.98357 |
2002.08987
|
Muhammad Shahbaz
|
Tushar Swamy, Alexander Rucker, Muhammad Shahbaz, Ishan Gaur, and
Kunle Olukotun
|
Taurus: A Data Plane Architecture for Per-Packet ML
|
16 pages
| null |
10.1145/3503222.3507726
| null |
cs.NI cs.LG cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emerging applications -- cloud computing, the internet of things, and
augmented/virtual reality -- demand responsive, secure, and scalable datacenter
networks. These networks currently implement simple, per-packet, data-plane
heuristics (e.g., ECMP and sketches) under a slow, millisecond-latency control
plane that runs data-driven performance and security policies. However, to meet
applications' service-level objectives (SLOs) in a modern data center, networks
must bridge the gap between line-rate, per-packet execution and complex
decision making.
In this work, we present the design and implementation of Taurus, a data
plane for line-rate inference. Taurus adds custom hardware based on a flexible,
parallel-patterns (MapReduce) abstraction to programmable network devices, such
as switches and NICs; this new hardware uses pipelined SIMD parallelism to
enable per-packet MapReduce operations (e.g., inference). Our evaluation of a
Taurus switch ASIC -- supporting several real-world models -- shows that Taurus
operates orders of magnitude faster than a server-based control plane while
increasing area by 3.8% and latency for line-rate ML models by up to 221 ns.
Furthermore, our Taurus FPGA prototype achieves full model accuracy and detects
two orders of magnitude more events than a state-of-the-art control-plane
anomaly-detection system.
|
[
{
"version": "v1",
"created": "Wed, 12 Feb 2020 09:18:36 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jan 2022 20:20:04 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Swamy",
"Tushar",
""
],
[
"Rucker",
"Alexander",
""
],
[
"Shahbaz",
"Muhammad",
""
],
[
"Gaur",
"Ishan",
""
],
[
"Olukotun",
"Kunle",
""
]
] |
new_dataset
| 0.999221 |
2005.09127
|
Rahul Shome
|
Rahul Shome and Kostas E. Bekris
|
Synchronized Multi-Arm Rearrangement Guided by Mode Graphs with Capacity
Constraints
| null |
Algorithmic Foundations of Robotics XIV (2021) 243-260
|
10.1007/978-3-030-66723-8_15
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Solving task planning problems involving multiple objects and multiple
robotic arms poses scalability challenges. Such problems involve not only
coordinating multiple high-DoF arms, but also searching through possible
sequences of actions including object placements, and handoffs. The current
work identifies a useful connection between multi-arm rearrangement and recent
results in multi-body path planning on graphs with vertex capacity constraints.
Solving a synchronized multi-arm rearrangement at a high-level involves
reasoning over a modal graph, where nodes correspond to stable object
placements and object transfer states by the arms. Edges of this graph
correspond to pick, placement and handoff operations. The objects can be viewed
as pebbles moving over this graph, which has capacity constraints. For
instance, each arm can carry a single object but placement locations can
accumulate many objects. Efficient integer linear programming-based solvers
have been proposed for the corresponding pebble problem. The current work
proposes a heuristic to guide the task planning process for synchronized
multi-arm rearrangement. Results indicate good scalability to multiple arms and
objects, and an algorithm that finds high-quality solutions fast while
exhibiting desirable anytime behavior.
|
[
{
"version": "v1",
"created": "Mon, 18 May 2020 22:55:50 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Shome",
"Rahul",
""
],
[
"Bekris",
"Kostas E.",
""
]
] |
new_dataset
| 0.992415 |
2006.14221
|
Stefano Kalonaris
|
Eric P. Nichols, Stefano Kalonaris, Gianluca Micchi, Anna Aljanaki
|
Modeling Baroque Two-Part Counterpoint with Neural Machine Translation
|
International Computer Music Conference 2021, 5 pages
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a system for contrapuntal music generation based on a Neural
Machine Translation (NMT) paradigm. We consider Baroque counterpoint and are
interested in modeling the interaction between any two given parts as a mapping
between a given source material and an appropriate target material. Like in
translation, the former imposes some constraints on the latter, but doesn't
define it completely. We collate and edit a bespoke dataset of Baroque pieces,
use it to train an attention-based neural network model, and evaluate the
generated output via BLEU score and musicological analysis. We show that our
model is able to respond with some idiomatic trademarks, such as imitation and
appropriate rhythmic offset, although it falls short of having learned
stylistically correct contrapuntal motion (e.g., avoidance of parallel fifths)
or stricter imitative rules, such as canon.
|
[
{
"version": "v1",
"created": "Thu, 25 Jun 2020 07:34:37 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Jun 2020 13:28:36 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Sep 2020 23:40:55 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Jan 2022 01:34:29 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Nichols",
"Eric P.",
""
],
[
"Kalonaris",
"Stefano",
""
],
[
"Micchi",
"Gianluca",
""
],
[
"Aljanaki",
"Anna",
""
]
] |
new_dataset
| 0.99936 |
2007.03249
|
Thomas Seiller
|
Thomas Seiller (CNRS, LIPN), Jakob Simonsen (DIKU)
|
Agafonov's Proof of Agafonov's Theorem: A Modern Account and New
Insights
| null | null | null | null |
cs.DM cs.FL math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give a modern account of Agafonov's original proof of his eponymous
theorem. The original proof was only reported in Russian in a journal not
widely available, and the work most commonly cited in western literature is
instead the English translation of a summary version containing no proofs, and
the main proof relied heavily on material well-known in Russian mathematical
circles of the day, which perhaps obscures the main thrust of argumentation for
modern readers. Our present account recasts Agafonov's arguments using more
basic building blocks than in the original proof, and contains some further
embellishments to Agafonov's original arguments, made in the interest of
clarity. We posit that the modern account provides new insight into the
underlying phenomena of the theorem. We also provide some historical context to
Agafonov's work, including a short description of some of the ideas that led to
Agafonov's own proof, especially emphasizing the important work of Postnikova.
|
[
{
"version": "v1",
"created": "Tue, 7 Jul 2020 07:34:43 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jan 2022 09:02:01 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Seiller",
"Thomas",
"",
"CNRS, LIPN"
],
[
"Simonsen",
"Jakob",
"",
"DIKU"
]
] |
new_dataset
| 0.99831 |
2103.16201
|
Alexander Bartler
|
Alexander Bartler, Andre B\"uhler, Felix Wiewel, Mario D\"obler and
Bin Yang
|
MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
An unresolved problem in Deep Learning is the ability of neural networks to
cope with domain shifts during test-time, imposed by commonly fixing network
parameters after training. Our proposed method Meta Test-Time Training (MT3),
however, breaks this paradigm and enables adaption at test-time. We combine
meta-learning, self-supervision and test-time training to learn to adapt to
unseen test distributions. By minimizing the self-supervised loss, we learn
task-specific model parameters for different tasks. A meta-model is optimized
such that its adaption to the different task-specific models leads to higher
performance on those tasks. During test-time a single unlabeled image is
sufficient to adapt the meta-model parameters. This is achieved by minimizing
only the self-supervised loss component resulting in a better prediction for
that image. Our approach significantly improves the state-of-the-art results on
the CIFAR-10-Corrupted image classification benchmark. Our implementation is
available on GitHub.
|
[
{
"version": "v1",
"created": "Tue, 30 Mar 2021 09:33:38 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jan 2022 08:57:45 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Bartler",
"Alexander",
""
],
[
"Bühler",
"Andre",
""
],
[
"Wiewel",
"Felix",
""
],
[
"Döbler",
"Mario",
""
],
[
"Yang",
"Bin",
""
]
] |
new_dataset
| 0.970655 |
2106.08250
|
Jiewen Lai
|
Jiewen Lai, Bo Lu, Qingxiang Zhao, Henry K. Chu
|
Constrained Motion Planning of A Cable-Driven Soft Robot With
Compressible Curvature Modeling
|
8 pages, 9 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
A cable-driven soft-bodied robot with redundancy can conduct a trajectory
tracking task while fulfilling some extra constraints, such as tracking with
the end-effector in a designated orientation or avoiding manipulator-obstacle
collisions. Those constraints require rational
curvature kinematics of a cable-driven soft robot which takes the compressible
soft segment into account. The motion planning of the soft robot for a
trajectory tracking task in constrained conditions, including fixed orientation
end-effector and manipulator-obstacle collision avoidance, has been
investigated. The inverse solution of cable actuation was formulated as a
damped least-square optimization problem and iteratively computed off-line. The
performance of trajectory tracking and the obedience to constraints were
evaluated via the simulation we made open-source, as well as the prototype
experiments. The method can be generalized to the similar multisegment
cable-driven soft robotic systems by customizing the robot parameters for the
prior motion planning of the manipulator.
|
[
{
"version": "v1",
"created": "Tue, 15 Jun 2021 16:00:17 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Aug 2021 04:11:55 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Oct 2021 04:52:39 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Jan 2022 07:02:40 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Lai",
"Jiewen",
""
],
[
"Lu",
"Bo",
""
],
[
"Zhao",
"Qingxiang",
""
],
[
"Chu",
"Henry K.",
""
]
] |
new_dataset
| 0.97448 |
2107.00346
|
Kailun Yang
|
Kunyu Peng, Juncong Fei, Kailun Yang, Alina Roitberg, Jiaming Zhang,
Frank Bieder, Philipp Heidenreich, Christoph Stiller, Rainer Stiefelhagen
|
MASS: Multi-Attentional Semantic Segmentation of LiDAR Data for Dense
Top-View Understanding
|
Accepted to IEEE Transactions on Intelligent Transportation Systems
(T-ITS). Code is publicly available at https://github.com/KPeng9510/MASS
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
At the heart of all automated driving systems is the ability to sense the
surroundings, e.g., through semantic segmentation of LiDAR sequences, which
experienced a remarkable progress due to the release of large datasets such as
SemanticKITTI and nuScenes-LidarSeg. While most previous works focus on sparse
segmentation of the LiDAR input, dense output masks provide self-driving cars
with almost complete environment information. In this paper, we introduce MASS
- a Multi-Attentional Semantic Segmentation model specifically built for dense
top-view understanding of the driving scenes. Our framework operates on pillar-
and occupancy features and comprises three attention-based building blocks: (1)
a keypoint-driven graph attention, (2) an LSTM-based attention computed from a
vector embedding of the spatial input, and (3) a pillar-based attention,
resulting in a dense 360-degree segmentation mask. With extensive experiments
on both, SemanticKITTI and nuScenes-LidarSeg, we quantitatively demonstrate the
effectiveness of our model, outperforming the state of the art by 19.0% on
SemanticKITTI and reaching 30.4% in mIoU on nuScenes-LidarSeg, where MASS is
the first work addressing the dense segmentation task. Furthermore, our
multi-attention model is shown to be very effective for 3D object detection
validated on the KITTI-3D dataset, showcasing its high generalizability to
other tasks related to 3D vision.
|
[
{
"version": "v1",
"created": "Thu, 1 Jul 2021 10:19:32 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jan 2022 15:40:28 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Peng",
"Kunyu",
""
],
[
"Fei",
"Juncong",
""
],
[
"Yang",
"Kailun",
""
],
[
"Roitberg",
"Alina",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Bieder",
"Frank",
""
],
[
"Heidenreich",
"Philipp",
""
],
[
"Stiller",
"Christoph",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] |
new_dataset
| 0.981888 |
2110.02128
|
Khaled Nakhleh
|
Khaled Nakhleh, Santosh Ganji, Ping-Chun Hsieh, I-Hong Hou, Srinivas
Shakkottai
|
NeurWIN: Neural Whittle Index Network For Restless Bandits Via Deep RL
|
Accepted for publication in NeurIPS 2021
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Whittle index policy is a powerful tool to obtain asymptotically optimal
solutions for the notoriously intractable problem of restless bandits. However,
finding the Whittle indices remains a difficult problem for many practical
restless bandits with convoluted transition kernels. This paper proposes
NeurWIN, a neural Whittle index network that seeks to learn the Whittle indices
for any restless bandits by leveraging mathematical properties of the Whittle
indices. We show that a neural network that produces the Whittle index is also
one that produces the optimal control for a set of Markov decision problems.
This property motivates using deep reinforcement learning for the training of
NeurWIN. We demonstrate the utility of NeurWIN by evaluating its performance
for three recently studied restless bandit problems. Our experiment results
show that the performance of NeurWIN is significantly better than other RL
algorithms.
|
[
{
"version": "v1",
"created": "Tue, 5 Oct 2021 15:58:23 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jan 2022 02:42:45 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Nakhleh",
"Khaled",
""
],
[
"Ganji",
"Santosh",
""
],
[
"Hsieh",
"Ping-Chun",
""
],
[
"Hou",
"I-Hong",
""
],
[
"Shakkottai",
"Srinivas",
""
]
] |
new_dataset
| 0.999644 |
2201.06523
|
Subasish Das
|
Xiaoqiang Kong, Subasish Das, Hongmin Zhou, Yunlong Zhang
|
Patterns of near-crash events in a naturalistic driving dataset:
applying rules mining
| null |
Accident Analysis & Prevention (2021)
|
10.1016/j.aap.2021.106346
| null |
cs.LG cs.AI cs.DB cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This study aims to explore the associations between near-crash events and
road geometry and trip features by investigating a naturalistic driving dataset
and a corresponding roadway inventory dataset using an association rule mining
method.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 16:57:53 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Kong",
"Xiaoqiang",
""
],
[
"Das",
"Subasish",
""
],
[
"Zhou",
"Hongmin",
""
],
[
"Zhang",
"Yunlong",
""
]
] |
new_dataset
| 0.965538 |
2201.07429
|
Xinsheng Wang
|
Yu Wang, Xinsheng Wang, Pengcheng Zhu, Jie Wu, Hanzhao Li, Heyang Xue,
Yongmao Zhang, Lei Xie, Mengxiao Bi
|
Opencpop: A High-Quality Open Source Chinese Popular Song Corpus for
Singing Voice Synthesis
|
will be submitted to Interspeech 2022
| null | null | null |
cs.SD cs.DB eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces Opencpop, a publicly available high-quality Mandarin
singing corpus designed for singing voice synthesis (SVS). The corpus consists
of 100 popular Mandarin songs performed by a female professional singer. Audio
files are recorded with studio quality at a sampling rate of 44,100 Hz and the
corresponding lyrics and musical scores are provided. All singing recordings
have been phonetically annotated with phoneme boundaries and syllable (note)
boundaries. To demonstrate the reliability of the released data and to provide
a baseline for future research, we built baseline deep neural network-based SVS
models and evaluated them with both objective metrics and subjective mean
opinion score (MOS) measure. Experimental results show that the best SVS model
trained on our database achieves 3.70 MOS, indicating the reliability of the
provided corpus. Opencpop is released to the open-source community WeNet, and
the corpus, as well as synthesized demos, can be found on the project homepage.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 06:12:47 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jan 2022 02:08:47 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Wang",
"Xinsheng",
""
],
[
"Zhu",
"Pengcheng",
""
],
[
"Wu",
"Jie",
""
],
[
"Li",
"Hanzhao",
""
],
[
"Xue",
"Heyang",
""
],
[
"Zhang",
"Yongmao",
""
],
[
"Xie",
"Lei",
""
],
[
"Bi",
"Mengxiao",
""
]
] |
new_dataset
| 0.999518 |
2201.07521
|
Luigi De Simone
|
Domenico Cotroneo, Luigi De Simone, Roberto Natella
|
ThorFI: A Novel Approach for Network Fault Injection as a Service
|
21 pages, accepted for publication in Elsevier Journal of Networking
and Computer Applications
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a novel fault injection solution (ThorFI) for
virtual networks in cloud computing infrastructures. ThorFI is designed to
provide non-intrusive fault injection capabilities for a cloud tenant, and to
isolate injections from interfering with other tenants on the infrastructure.
We present the solution in the context of the OpenStack cloud management
platform, and release this implementation as open-source software. Finally, we
present two relevant case studies of ThorFI, respectively in an NFV IMS and of
a high-availability cloud application. The case studies show that ThorFI can
enhance functional tests with fault injection, as in 4%-34% of the test cases
the IMS is unable to handle faults; and that despite redundancy in virtual
networks, faults in one virtual network segment can propagate to other
segments, and can affect the throughput and response time of the cloud
application as a whole, by about 3 times in the worst case.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 10:50:10 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jan 2022 09:27:47 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Cotroneo",
"Domenico",
""
],
[
"De Simone",
"Luigi",
""
],
[
"Natella",
"Roberto",
""
]
] |
new_dataset
| 0.998654 |
2201.07793
|
\"Onder G\"urcan
|
Tahina Ralitera, Agnes Lanusse, \"Onder G\"urcan
|
On Using Blockchains for Beyond Visual Line of Sight (BVLOS) Drones
Operation: An Architectural Study
|
10 pages, 4 figures, HiPEAC'22
| null | null | null |
cs.CR cs.DC cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Beyond Visual Line of Sight operation enables drones to surpass the limits
imposed by the reach and constraints of their operator's eyes. It extends their
range and, as such, productivity, and profitability. Drones operating BVLOS
include a variety of highly sensitive assets and information that could be
subject to unintentional or intentional security vulnerabilities. As a
solution, blockchain-based services could enable secure and trustworthy
exchange and storage of related data. They also allow for traceability of
exchanges and perform synchronization with other nodes in the network. However,
most of the blockchain-based approaches focus on the network and the protocol
aspects of drone systems. Few studies focus on the architectural level of
on-chip compute platforms of drones. Based on this observation, the
contribution of this paper is twofold: (1) a generic blockchain-based service
architecture for on-chip compute platforms of drones, and (2) a concrete
example realization of the proposed generic architecture.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 10:57:00 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Ralitera",
"Tahina",
""
],
[
"Lanusse",
"Agnes",
""
],
[
"Gürcan",
"Önder",
""
]
] |
new_dataset
| 0.960246 |
2201.07843
|
Hengjie Yang
|
Jacob King and Alexandra Kwon and Hengjie Yang and William Ryan and
Richard D. Wesel
|
CRC-Aided List Decoding of Convolutional and Polar Codes for Short
Messages in 5G
|
6 pages, 8 figures; this preprint is accepted for publication at the
2022 IEEE International Conference on Communications (ICC); camera-ready
version to be updated
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores list decoding of convolutional and polar codes for short
messages such as those found in the 5G physical broadcast channel. A cyclic
redundancy check (CRC) is used to select a codeword from a list of likely
codewords. One example in the 5G standard encodes a 32-bit message with a
24-bit CRC and a 512-bit polar code with additional bits added by repetition to
achieve a very low rate of 32/864. This paper shows that optimizing the CRC
length improves the $E_b/N_0$ performance of this polar code, where $E_b/N_0$
is the ratio of the energy per data bit to the noise power spectral density.
Furthermore, even better $E_b/N_0$ performance is achieved by replacing the
polar code with a tail-biting convolutional code (TBCC) with a
distance-spectrum-optimal (DSO) CRC. This paper identifies the optimal CRC
length to minimize the frame error rate (FER) of a rate-1/5 TBCC at a specific
value of $E_b/N_0$. We also show that this optimized TBCC/CRC can attain the
same excellent $E_b/N_0$ performance with the very low rate of 32/864 of the 5G
polar code, where the low rate is achieved through repetition. We show that the
proposed TBCC/CRC concatenated code outperforms the PBCH polar code described
in the 5G standard both in terms of FER and decoding run time. We also explore
the tradeoff between undetected error rate and erasure rate as the CRC size
varies.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 19:59:06 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"King",
"Jacob",
""
],
[
"Kwon",
"Alexandra",
""
],
[
"Yang",
"Hengjie",
""
],
[
"Ryan",
"William",
""
],
[
"Wesel",
"Richard D.",
""
]
] |
new_dataset
| 0.999401 |
2201.07899
|
Carol Neidle
|
Carol Neidle, Augustine Opoku, Dimitris Metaxas
|
ASL Video Corpora & Sign Bank: Resources Available through the American
Sign Language Linguistic Research Project (ASLLRP)
| null | null | null | null |
cs.CL cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The American Sign Language Linguistic Research Project (ASLLRP) provides
Internet access to high-quality ASL video data, generally including front and
side views and a close-up of the face. The manual and non-manual components of
the signing have been linguistically annotated using SignStream(R). The
recently expanded video corpora can be browsed and searched through the Data
Access Interface (DAI 2) we have designed; it is possible to carry out complex
searches. The data from our corpora can also be downloaded; annotations are
available in an XML export format. We have also developed the ASLLRP Sign Bank,
which contains almost 6,000 sign entries for lexical signs, with distinct
English-based glosses, with a total of 41,830 examples of lexical signs (in
addition to about 300 gestures, over 1,000 fingerspelled signs, and 475
classifier examples). The Sign Bank is likewise accessible and searchable on
the Internet; it can also be accessed from within SignStream(R) (software to
facilitate linguistic annotation and analysis of visual language data) to make
annotations more accurate and efficient. Here we describe the available
resources. These data have been used for many types of research in linguistics
and in computer-based sign language recognition from video; examples of such
research are provided in the latter part of this article.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 22:48:36 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Neidle",
"Carol",
""
],
[
"Opoku",
"Augustine",
""
],
[
"Metaxas",
"Dimitris",
""
]
] |
new_dataset
| 0.999366 |
2201.07931
|
Carmina Perez-Guerrero
|
Carmina P\'erez-Guerrero, Adriana Palacios, Gilberto Ochoa-Ruiz,
Christian Mata, Joaquim Casal, Miguel Gonzalez-Mendoza, Luis Eduardo
Falc\'on-Morales
|
Experimental Large-Scale Jet Flames' Geometrical Features Extraction for
Risk Management Using Infrared Images and Deep Learning Segmentation Methods
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Jet fires are relatively small and have the least severe effects among the
diverse fire accidents that can occur in industrial plants; however, they are
usually involved in a process known as the domino effect, that leads to more
severe events, such as explosions or the initiation of another fire, making the
analysis of such fires an important part of risk analysis. This research work
explores the application of deep learning models in an alternative approach
that uses the semantic segmentation of jet fires flames to extract main
geometrical attributes, relevant for fire risk assessments. A comparison is
made between traditional image processing methods and some state-of-the-art
deep learning models. It is found that the best approach is a deep learning
architecture known as UNet, along with its two improvements, Attention UNet and
UNet++. The models are then used to segment a group of vertical jet flames of
varying pipe outlet diameters to extract their main geometrical
characteristics. Attention UNet obtained the best general performance in the
approximation of both height and area of the flames, while also showing a
statistically significant difference between it and UNet++. UNet obtained the
best overall performance for the approximation of the lift-off distances;
however, there is not enough data to prove a statistically significant
difference between Attention UNet and UNet++. The only instance where UNet++
outperformed the other models, was while obtaining the lift-off distances of
the jet flames with 0.01275 m pipe outlet diameter. In general, the explored
models show good agreement between the experimental and predicted values for
relatively large turbulent propane jet flames, released in sonic and subsonic
regimes; thus, making these radiation zones segmentation models, a suitable
approach for different jet flame risk management scenarios.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 00:50:41 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Pérez-Guerrero",
"Carmina",
""
],
[
"Palacios",
"Adriana",
""
],
[
"Ochoa-Ruiz",
"Gilberto",
""
],
[
"Mata",
"Christian",
""
],
[
"Casal",
"Joaquim",
""
],
[
"Gonzalez-Mendoza",
"Miguel",
""
],
[
"Falcón-Morales",
"Luis Eduardo",
""
]
] |
new_dataset
| 0.987662 |
2201.07938
|
Yeming Gu
|
Yeming Gu, Hui Shu, Rongkuan Ma, Lin Yan and Lei Zhu
|
spotFuzzer: Static Instrument and Fuzzing Windows COTs
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The security research on Windows has received little attention in the
academic circle. Most of the new methods are usually designed for Linux system,
and are difficult to transplant to Windows. Fuzzing for Windows programs always
suffering from its closed source. Therefore, we need to find an appropriate way
to achieve feedback from Windows programs. To our knowledge, there are no
stable and scalable static instrumentation tools for Windows yet, and dynamic
tools, such as DynamoRIO, have been criticized for their performance. To make
matters worse, dynamic instrumentation tools have very limited usage scenarios
and are impotent for many system services or large commercial software. In this
paper, we proposed spotInstr, a novel static tool for instrumenting Windows
binaries. It is lightweight and can instrument most Windows PE programs in a
very short time. At the same time, spotInstr provides a set of filters, which
can be used to select instrumentation points or restrict the target regions.
Based on these filters, we propose a novel memory-sensitive instrumentation
method which can speed up both instrumentation and fuzzing. After that, we
design a system called spotFuzzer, which leverage the ability of spotInstr and
can fuzz most Windows binaries. We tested spotInstr and spotFuzzer in multiple
dimensions to show their superior performance and stability.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 01:13:47 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Gu",
"Yeming",
""
],
[
"Shu",
"Hui",
""
],
[
"Ma",
"Rongkuan",
""
],
[
"Yan",
"Lin",
""
],
[
"Zhu",
"Lei",
""
]
] |
new_dataset
| 0.977707 |
2201.07959
|
Zarrin Tasnim Sworna
|
Zarrin Tasnim Sworna, Chadni Islam, and Muhammad Ali Babar
|
APIRO: A Framework for Automated Security Tools API Recommendation
| null | null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Security Orchestration, Automation, and Response (SOAR) platforms integrate
and orchestrate a wide variety of security tools to accelerate the operational
activities of Security Operation Center (SOC). Integration of security tools in
a SOAR platform is mostly done manually using APIs, plugins, and scripts. SOC
teams need to navigate through API calls of different security tools to find a
suitable API to define or update an incident response action. Analyzing various
types of API documentation with diverse API format and presentation structure
involves significant challenges such as data availability, data heterogeneity,
and semantic variation for automatic identification of security tool APIs
specific to a particular task. Given these challenges can have negative impact
on SOC team's ability to handle security incident effectively and efficiently,
we consider it important to devise suitable automated support solutions to
address these challenges. We propose a novel learning-based framework for
automated security tool API Recommendation for security Orchestration,
automation, and response, APIRO. To mitigate data availability constraint,
APIRO enriches security tool API description by applying a wide variety of data
augmentation techniques. To learn data heterogeneity of the security tools and
semantic variation in API descriptions, APIRO consists of an API-specific word
embedding model and a Convolutional Neural Network (CNN) model that are used
for prediction of top 3 relevant APIs for a task. We experimentally demonstrate
the effectiveness of APIRO in recommending APIs for different tasks using 3
security tools and 36 augmentation techniques. Our experimental results
demonstrate the feasibility of APIRO for achieving 91.9% Top-1 Accuracy.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 02:34:51 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Sworna",
"Zarrin Tasnim",
""
],
[
"Islam",
"Chadni",
""
],
[
"Babar",
"Muhammad Ali",
""
]
] |
new_dataset
| 0.958008 |
2201.08002
|
Weihuang Xu
|
Weihuang Xu, Guohao Yu, Yiming Cui, Romain Gloaguen, Alina Zare, Jason
Bonnette, Joel Reyes-Cabrera, Ashish Rajurkar, Diane Rowland, Roser Matamala,
Julie D. Jastrow, Thomas E. Juenger, Felix B. Fritschi
|
PRMI: A Dataset of Minirhizotron Images for Diverse Plant Root Study
|
The 36th AAAI Conference on the AI for Agriculture and Food Systems
(AIAFS) Workshop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding a plant's root system architecture (RSA) is crucial for a
variety of plant science problem domains including sustainability and climate
adaptation. Minirhizotron (MR) technology is a widely-used approach for
phenotyping RSA non-destructively by capturing root imagery over time.
Precisely segmenting roots from the soil in MR imagery is a critical step in
studying RSA features. In this paper, we introduce a large-scale dataset of
plant root images captured by MR technology. In total, there are over 72K RGB
root images across six different species including cotton, papaya, peanut,
sesame, sunflower, and switchgrass in the dataset. The images span a variety of
conditions including varied root age, root structures, soil types, and depths
under the soil surface. All of the images have been annotated with weak
image-level labels indicating whether each image contains roots or not. The
image-level labels can be used to support weakly supervised learning in plant
root segmentation tasks. In addition, 63K images have been manually annotated
to generate pixel-level binary masks indicating whether each pixel corresponds
to root or not. These pixel-level binary masks can be used as ground truth for
supervised learning in semantic segmentation tasks. By introducing this
dataset, we aim to facilitate the automatic segmentation of roots and the
research of RSA with deep learning and other image analysis algorithms.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 05:07:41 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Xu",
"Weihuang",
""
],
[
"Yu",
"Guohao",
""
],
[
"Cui",
"Yiming",
""
],
[
"Gloaguen",
"Romain",
""
],
[
"Zare",
"Alina",
""
],
[
"Bonnette",
"Jason",
""
],
[
"Reyes-Cabrera",
"Joel",
""
],
[
"Rajurkar",
"Ashish",
""
],
[
"Rowland",
"Diane",
""
],
[
"Matamala",
"Roser",
""
],
[
"Jastrow",
"Julie D.",
""
],
[
"Juenger",
"Thomas E.",
""
],
[
"Fritschi",
"Felix B.",
""
]
] |
new_dataset
| 0.999779 |
2201.08017
|
Chenxing Wang
|
Chenxing Wang, Fang Zhao, Haichao Zhang, Haiyong Luo, Yanjun Qin, and
Yuchen Fang
|
Fine-Grained Trajectory-based Travel Time Estimation for Multi-city
Scenarios Based on Deep Meta-Learning
| null | null |
10.1109/TITS.2022.3145382
| null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Travel Time Estimation (TTE) is indispensable in intelligent transportation
system (ITS). It is significant to achieve the fine-grained Trajectory-based
Travel Time Estimation (TTTE) for multi-city scenarios, namely to accurately
estimate travel time of the given trajectory for multiple city scenarios.
However, it faces great challenges due to complex factors including dynamic
temporal dependencies and fine-grained spatial dependencies. To tackle these
challenges, we propose a meta learning based framework, MetaTTE, to
continuously provide accurate travel time estimation over time by leveraging
well-designed deep neural network model called DED, which consists of Data
preprocessing module and Encoder-Decoder network module. By introducing meta
learning techniques, the generalization ability of MetaTTE is enhanced using
small amount of examples, which opens up new opportunities to increase the
potential of achieving consistent performance on TTTE when traffic conditions
and road networks change over time in the future. The DED model adopts an
encoder-decoder network to capture fine-grained spatial and temporal
representations. Extensive experiments on two real-world datasets are conducted
to confirm that our MetaTTE outperforms six state-of-art baselines, and improve
29.35% and 25.93% accuracy than the best baseline on Chengdu and Porto
datasets, respectively.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 06:35:51 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Wang",
"Chenxing",
""
],
[
"Zhao",
"Fang",
""
],
[
"Zhang",
"Haichao",
""
],
[
"Luo",
"Haiyong",
""
],
[
"Qin",
"Yanjun",
""
],
[
"Fang",
"Yuchen",
""
]
] |
new_dataset
| 0.999526 |
2201.08117
|
Takahiro Miki
|
Takahiro Miki, Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen
Koltun, Marco Hutter
|
Learning robust perceptive locomotion for quadrupedal robots in the wild
| null |
Science Robotics, 19 Jan 2022, Vol 7, Issue 62
|
10.1126/scirobotics.abk2822
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Legged robots that can operate autonomously in remote and hazardous
environments will greatly increase opportunities for exploration into
under-explored areas. Exteroceptive perception is crucial for fast and
energy-efficient locomotion: perceiving the terrain before making contact with
it enables planning and adaptation of the gait ahead of time to maintain speed
and stability. However, utilizing exteroceptive perception robustly for
locomotion has remained a grand challenge in robotics. Snow, vegetation, and
water visually appear as obstacles on which the robot cannot step~-- or are
missing altogether due to high reflectance. Additionally, depth perception can
degrade due to difficult lighting, dust, fog, reflective or transparent
surfaces, sensor occlusion, and more. For this reason, the most robust and
general solutions to legged locomotion to date rely solely on proprioception.
This severely limits locomotion speed, because the robot has to physically feel
out the terrain before adapting its gait accordingly. Here we present a robust
and general solution to integrating exteroceptive and proprioceptive perception
for legged locomotion. We leverage an attention-based recurrent encoder that
integrates proprioceptive and exteroceptive input. The encoder is trained
end-to-end and learns to seamlessly combine the different perception modalities
without resorting to heuristics. The result is a legged locomotion controller
with high robustness and speed. The controller was tested in a variety of
challenging natural and urban environments over multiple seasons and completed
an hour-long hike in the Alps in the time recommended for human hikers.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 11:27:47 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Miki",
"Takahiro",
""
],
[
"Lee",
"Joonho",
""
],
[
"Hwangbo",
"Jemin",
""
],
[
"Wellhausen",
"Lorenz",
""
],
[
"Koltun",
"Vladlen",
""
],
[
"Hutter",
"Marco",
""
]
] |
new_dataset
| 0.996361 |
2201.08154
|
William Buchanan Prof
|
Simon R Davies, Richard Macfarlane, William J Buchanan
|
NapierOne: A modern mixed file data set alternative to Govdocs1
| null |
Forensic Science International: Digital Investigation, Volume 40,
2022, 301330, ISSN 2666-2817
|
10.1016/j.fsidi.2021.301330
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
It was found when reviewing the ransomware detection research literature that
almost no proposal provided enough detail on how the test data set was created,
or sufficient description of its actual content, to allow it to be recreated by
other researchers interested in reconstructing their environment and validating
the research results. A modern cybersecurity mixed file data set called
NapierOne is presented, primarily aimed at, but not limited to, ransomware
detection and forensic analysis research. NapierOne was designed to address
this deficiency in reproducibility and improve consistency by facilitating
research replication and repeatability. The methodology used in the creation of
this data set is also described in detail. The data set was inspired by the
Govdocs1 data set and it is intended that NapierOne be used as a complement to
this original data set.
An investigation was performed with the goal of determining the common files
types currently in use. No specific research was found that explicitly provided
this information, so an alternative consensus approach was employed. This
involved combining the findings from multiple sources of file type usage into
an overall ranked list. After which 5000 real-world example files were
gathered, and a specific data subset created, for each of the common file types
identified. In some circumstances, multiple data subsets were created for a
specific file type, each subset representing a specific characteristic for that
file type. For example, there are multiple data subsets for the ZIP file type
with each subset containing examples of a specific compression method.
Ransomware execution tends to produce files that have high entropy, so examples
of file types that naturally have this attribute are also present.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 12:57:48 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Davies",
"Simon R",
""
],
[
"Macfarlane",
"Richard",
""
],
[
"Buchanan",
"William J",
""
]
] |
new_dataset
| 0.999073 |
2201.08378
|
Ruslan Nikolaev
|
Ruslan Nikolaev, Hassan Nadeem, Cathlyn Stone, Binoy Ravindran
|
Adelie: Continuous Address Space Layout Re-randomization for Linux
Drivers
|
27th ACM International Conference on Architectural Support for
Programming Languages and Operating Systems (ASPLOS '22), February 28 - March
4, 2022, Lausanne, Switzerland
| null |
10.1145/3503222.3507779
| null |
cs.OS cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While address space layout randomization (ASLR) has been extensively studied
for user-space programs, the corresponding OS kernel's KASLR support remains
very limited, making the kernel vulnerable to just-in-time (JIT)
return-oriented programming (ROP) attacks. Furthermore, commodity OSs such as
Linux restrict their KASLR range to 32 bits due to architectural constraints
(e.g., x86-64 only supports 32-bit immediate operands for most instructions),
which makes them vulnerable to even unsophisticated brute-force ROP attacks due
to low entropy. Most in-kernel pointers remain static, exacerbating the problem
when pointers are leaked.
Adelie, our kernel defense mechanism, overcomes KASLR limitations, increases
KASLR entropy, and makes successful ROP attacks on the Linux kernel much harder
to achieve. First, Adelie enables the position-independent code (PIC) model so
that the kernel and its modules can be placed anywhere in the 64-bit virtual
address space, at any distance apart from each other. Second, Adelie implements
stack re-randomization and address encryption on modules. Finally, Adelie
enables efficient continuous KASLR for modules by using the PIC model to make
it (almost) impossible to inject ROP gadgets through these modules regardless
of gadget's origin.
Since device drivers (typically compiled as modules) are often developed by
third parties and are typically less tested than core OS parts, they are also
often more vulnerable. By fully re-randomizing device drivers, the last two
contributions together prevent most JIT ROP attacks since vulnerable modules
are very likely to be a starting point of an attack. Furthermore, some OS
instances in virtualized environments are specifically designated to run device
drivers, where drivers are the primary target of JIT ROP attacks. Our
evaluation shows high efficiency of Adelie's approach.
[full abstract is in the paper]
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 18:58:44 GMT"
}
] | 2022-01-21T00:00:00 |
[
[
"Nikolaev",
"Ruslan",
""
],
[
"Nadeem",
"Hassan",
""
],
[
"Stone",
"Cathlyn",
""
],
[
"Ravindran",
"Binoy",
""
]
] |
new_dataset
| 0.99839 |
2006.09694
|
Jiayun Wang
|
Jiayun Wang, Jierui Lin, Qian Yu, Runtao Liu, Yubei Chen, Stella X. Yu
|
3D Shape Reconstruction from Free-Hand Sketches
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sketches are the most abstract 2D representations of real-world objects.
Although a sketch usually has geometrical distortion and lacks visual cues,
humans can effortlessly envision a 3D object from it. This suggests that
sketches encode the information necessary for reconstructing 3D shapes. Despite
great progress achieved in 3D reconstruction from distortion-free line
drawings, such as CAD and edge maps, little effort has been made to reconstruct
3D shapes from free-hand sketches. We study this task and aim to enhance the
power of sketches in 3D-related applications such as interactive design and
VR/AR games.
Unlike previous works, which mostly study distortion-free line drawings, our
3D shape reconstruction is based on free-hand sketches. A major challenge for
free-hand sketch 3D reconstruction comes from the insufficient training data
and free-hand sketch diversity, e.g. individualized sketching styles. We thus
propose data generation and standardization mechanisms. Instead of
distortion-free line drawings, synthesized sketches are adopted as input
training data. Additionally, we propose a sketch standardization module to
handle different sketch distortions and styles. Extensive experiments
demonstrate the effectiveness of our model and its strong generalizability to
various free-hand sketches. Our code is publicly available at
https://github.com/samaonline/3D-Shape-Reconstruction-from-Free-Hand-Sketches.
|
[
{
"version": "v1",
"created": "Wed, 17 Jun 2020 07:43:10 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jan 2022 03:35:23 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Wang",
"Jiayun",
""
],
[
"Lin",
"Jierui",
""
],
[
"Yu",
"Qian",
""
],
[
"Liu",
"Runtao",
""
],
[
"Chen",
"Yubei",
""
],
[
"Yu",
"Stella X.",
""
]
] |
new_dataset
| 0.993915 |
2009.12225
|
Liam Jordon
|
Liam Jordon and Philippe Moser
|
Pebble-Depth
|
arXiv admin note: substantial text overlap with arXiv:2009.04821
| null | null | null |
cs.CC cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce a new formulation of Bennett's logical depth based
on pebble transducers. This notion is defined based on the difference between
the minimal length descriptional complexity of prefixes of infinite sequences
from the perspective of finite-state transducers and pebble transducers. Our
notion of pebble-depth satisfies the three fundamental properties of depth:
i.e. easy sequences and random sequences are not deep, and the existence of a
slow growth law type result. We also compare pebble-depth to other depth
notions based on finite-state transducers, pushdown compressors and the
Lempel-Ziv $78$ compression algorithm. We first demonstrate that there exists a
normal pebble-deep sequence even though there is no normal finite-state-deep
sequence. We then show that there exists a sequence which has pebble-depth
level of roughly $1/2$ and Lempel-Ziv-depth level of roughly $0$. Finally we
show the existence of a sequence which has a pebble-depth level of roughly $1$
and a pushdown-depth level of roughly $1/2$.
|
[
{
"version": "v1",
"created": "Thu, 24 Sep 2020 15:10:20 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Dec 2020 11:19:47 GMT"
},
{
"version": "v3",
"created": "Thu, 6 May 2021 14:06:38 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Jan 2022 13:44:42 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Jordon",
"Liam",
""
],
[
"Moser",
"Philippe",
""
]
] |
new_dataset
| 0.979505 |
2108.13802
|
Roberto Rossi
|
Roberto Rossi
|
Curatio et Innovatio
|
11 pages, working draft
| null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Middle Ages focused obsessively on the old; our era is totally absorbed
with the new. In medio stat virtus. In this short note, I advocate a strategy
that blends copyright and copyleft for disseminating research results in the
sciences. I argue that such a blend may be beneficial in fields such as
mathematics and computer science, that it may facilitate the evolution and
emergence of improved problem descriptions, whilst at the same time preserving
author's rights, and easing researchers' work.
|
[
{
"version": "v1",
"created": "Tue, 10 Aug 2021 14:07:18 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Jan 2022 18:10:18 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Jan 2022 16:44:29 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Rossi",
"Roberto",
""
]
] |
new_dataset
| 0.992714 |
2109.14076
|
Eli Lifland
|
Neel Alex, Eli Lifland, Lewis Tunstall, Abhishek Thakur, Pegah Maham,
C. Jess Riedel, Emmie Hine, Carolyn Ashurst, Paul Sedille, Alexis Carlier,
Michael Noetel, Andreas Stuhlm\"uller
|
RAFT: A Real-World Few-Shot Text Classification Benchmark
|
Dataset, submission instructions, code and leaderboard available at
https://raft.elicit.org
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large pre-trained language models have shown promise for few-shot learning,
completing text-based tasks given only a few task-specific examples. Will
models soon solve classification tasks that have so far been reserved for human
research assistants? Existing benchmarks are not designed to measure progress
in applied settings, and so don't directly answer this question. The RAFT
benchmark (Real-world Annotated Few-shot Tasks) focuses on naturally occurring
tasks and uses an evaluation setup that mirrors deployment. Baseline
evaluations on RAFT reveal areas current techniques struggle with: reasoning
over long texts and tasks with many classes. Human baselines show that some
classification tasks are difficult for non-expert humans, reflecting that
real-world value sometimes depends on domain expertise. Yet even non-expert
human baseline F1 scores exceed GPT-3 by an average of 0.11. The RAFT datasets
and leaderboard will track which model improvements translate into real-world
benefits at https://raft.elicit.org .
|
[
{
"version": "v1",
"created": "Tue, 28 Sep 2021 22:35:31 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Nov 2021 21:34:21 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Jan 2022 21:40:14 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Alex",
"Neel",
""
],
[
"Lifland",
"Eli",
""
],
[
"Tunstall",
"Lewis",
""
],
[
"Thakur",
"Abhishek",
""
],
[
"Maham",
"Pegah",
""
],
[
"Riedel",
"C. Jess",
""
],
[
"Hine",
"Emmie",
""
],
[
"Ashurst",
"Carolyn",
""
],
[
"Sedille",
"Paul",
""
],
[
"Carlier",
"Alexis",
""
],
[
"Noetel",
"Michael",
""
],
[
"Stuhlmüller",
"Andreas",
""
]
] |
new_dataset
| 0.998745 |
2110.13784
|
Paul Friedrich
|
Sven Seuken, Paul Friedrich, Ludwig Dierks
|
Market Design for Drone Traffic Management
|
Final version of a Blue Sky Ideas paper forthcoming at the 36th AAAI
Conference on Artificial Intelligence, Vancouver, Canada, 2022. Changes to
prev. version: expanded several sections, fixed typos, added references
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid development of drone technology is leading to more and more use
cases being proposed. In response, regulators are drawing up drone traffic
management frameworks. However, to design solutions that are efficient, fair,
simple, non-manipulable, and scalable, we need market design and AI expertise.
To this end, we introduce the drone traffic management problem as a new
research challenge to the market design and AI communities. We present five
design desiderata that we have derived from our interviews with stakeholders
from the regulatory side as well as from public and private enterprises.
Finally, we provide an overview of the solution space to point out possible
directions for future research.
|
[
{
"version": "v1",
"created": "Tue, 26 Oct 2021 15:37:45 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jan 2022 08:12:56 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Seuken",
"Sven",
""
],
[
"Friedrich",
"Paul",
""
],
[
"Dierks",
"Ludwig",
""
]
] |
new_dataset
| 0.965306 |
2201.07220
|
Bruno Mazorra
|
Bruno Mazorra, Victor Adan, Vanesa Daza
|
Do not rug on me: Zero-dimensional Scam Detection
| null | null | null | null |
cs.CR cs.LG q-fin.ST
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Uniswap, like other DEXs, has gained much attention this year because it is a
non-custodial and publicly verifiable exchange that allows users to trade
digital assets without trusted third parties. However, its simplicity and lack
of regulation also make it easy to execute initial coin offering scams by
listing non-valuable tokens. This method of performing scams is known as rug
pull, a phenomenon that already existed in traditional finance but has become
more relevant in DeFi. Various projects such as [34,37] have contributed to
detecting rug pulls in EVM compatible chains. However, the first longitudinal
and academic step to detecting and characterizing scam tokens on Uniswap was
made in [44]. The authors collected all the transactions related to the Uniswap
V2 exchange and proposed a machine learning algorithm to label tokens as scams.
However, the algorithm is only valuable for detecting scams accurately after
they have been executed. This paper increases their data set by 20K tokens and
proposes a new methodology to label tokens as scams. After manually analyzing
the data, we devised a theoretical classification of different malicious
maneuvers in Uniswap protocol. We propose various machine-learning-based
algorithms with new relevant features related to the token propagation and
smart contract heuristics to detect potential rug pulls before they occur. In
general, the models proposed achieved similar results. The best model obtained
an accuracy of 0.9936, recall of 0.9540, and precision of 0.9838 in
distinguishing non-malicious tokens from scams prior to the malicious maneuver.
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 16:22:43 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Mazorra",
"Bruno",
""
],
[
"Adan",
"Victor",
""
],
[
"Daza",
"Vanesa",
""
]
] |
new_dataset
| 0.99865 |
2201.07311
|
Stella Biderman
|
Stella Biderman and Kieran Bicheno and Leo Gao
|
Datasheet for the Pile
|
Accompanies "The Pile: An 800GB Dataset of Diverse Text for Language
Modeling" arXiv:2101.00027
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This datasheet describes the Pile, an 825 GiB dataset of human-authored text
compiled by EleutherAI for use in large-scale language modeling. The Pile
comprises 22 different text sources, ranging from original scrapes done for
this project, to text data made available by the data owners, to third-party
scrapes available online.
|
[
{
"version": "v1",
"created": "Thu, 13 Jan 2022 23:45:24 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Biderman",
"Stella",
""
],
[
"Bicheno",
"Kieran",
""
],
[
"Gao",
"Leo",
""
]
] |
new_dataset
| 0.998428 |
2201.07366
|
Yue Ruan
|
Yue Ruan, Han-Hung Lee, Ke Zhang, Angel X. Chang
|
TriCoLo: Trimodal Contrastive Loss for Fine-grained Text to Shape
Retrieval
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work on contrastive losses for learning joint embeddings over
multimodal data has been successful at downstream tasks such as retrieval and
classification. On the other hand, work on joint representation learning for 3D
shapes and text has thus far mostly focused on improving embeddings through
modeling of complex attention between representations, or multi-task learning.
We show that with large batch contrastive learning we achieve SoTA on
text-shape retrieval without complex attention mechanisms or losses. Prior work
in 3D and text representations has also focused on bimodal representation
learning using either voxels or multi-view images with text. To this end, we
propose a trimodal learning scheme to achieve even higher performance and
better representations for all modalities.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 00:15:15 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Ruan",
"Yue",
""
],
[
"Lee",
"Han-Hung",
""
],
[
"Zhang",
"Ke",
""
],
[
"Chang",
"Angel X.",
""
]
] |
new_dataset
| 0.991335 |
2201.07454
|
Christian Lienen
|
Christian Lienen and Marco Platzner
|
ReconROS Executor: Event-Driven Programming of FPGA-accelerated ROS 2
Applications
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Many applications from the robotics domain can benefit from FPGA
acceleration. A corresponding key question is how to integrate hardware
accelerators into software-centric robotics programming environments. Recently,
several approaches have demonstrated hardware acceleration for the robot
operating system (ROS), the dominant programming environment in robotics. ROS
is a middleware layer that features the composition of complex robotics
applications as a set of nodes that communicate via mechanisms such as
publish/subscribe, and distributes them over several compute platforms. In this
paper, we present a novel approach for event-based programming of robotics
applications that leverages ReconROS, a framework for flexibly mapping ROS 2
nodes to either software or reconfigurable hardware. The ReconROS executor
schedules callbacks of ROS 2 nodes and utilizes a reconfigurable slot model and
partial runtime reconfiguration to load hardware-based callbacks on demand. We
describe the ReconROS executor approach, give design examples, and
experimentally evaluate its functionality with examples.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 07:37:36 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Lienen",
"Christian",
""
],
[
"Platzner",
"Marco",
""
]
] |
new_dataset
| 0.999562 |
2201.07490
|
YiHsaing Cheng
|
Zuo-Wei Yeh, Chia-Hua Hsu, Alexander White, Chen-Fu Yeh, Wen-Chieh Wu,
Cheng-Te Wang, Chung-Chuan Lo, Kea-Tiong Tang
|
POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with
Integer Quadratic Integrate-and-Fire Neurons
| null | null | null | null |
cs.NE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The inner operations of the human brain as a biological processing system
remain largely a mystery. Inspired by the function of the human brain and based
on the analysis of simple neural network systems in other species, such as
Drosophila, neuromorphic computing systems have attracted considerable
interest. In cellular-level connectomics research, we can identify the
characteristics of biological neural networks, called populations, which
comprise not only recurrent full connections within the network but also an
external stimulus and a self-connection for each neuron. Relying on the low
data bandwidth of spike transmission in the network and input data, Spiking
Neural Networks enable low-latency and low-power designs. In this study, we propose a
configurable population-based digital spiking neuromorphic processor in 180nm
process technology with two configurable hierarchy populations. Also, these
neurons in the processor can be configured as novel models, integer quadratic
integrate-and-fire neuron models, which contain an unsigned 8-bit membrane
potential value. The processor can implement intelligent decision making for
avoidance in real time. Moreover, the proposed approach enables the
development of biomimetic neuromorphic systems and various low-power,
low-latency inference processing applications.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 09:26:34 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Yeh",
"Zuo-Wei",
""
],
[
"Hsu",
"Chia-Hua",
""
],
[
"White",
"Alexander",
""
],
[
"Yeh",
"Chen-Fu",
""
],
[
"Wu",
"Wen-Chieh",
""
],
[
"Wang",
"Cheng-Te",
""
],
[
"Lo",
"Chung-Chuan",
""
],
[
"Tang",
"Kea-Tiong",
""
]
] |
new_dataset
| 0.999289 |
2201.07496
|
Utsav Banerjee
|
Utsav Banerjee, Anantha P. Chandrakasan
|
A Low-Power BLS12-381 Pairing Crypto-Processor for Internet-of-Things
Security Applications
|
Published in IEEE Solid-State Circuits Letters (SSCL)
| null |
10.1109/LSSC.2021.3124074
| null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first BLS12-381 elliptic curve pairing crypto-processor for
Internet-of-Things (IoT) security applications. Efficient finite field
arithmetic and algorithm-architecture co-optimizations together enable two
orders of magnitude energy savings. We implement several countermeasures
against timing and power side-channel attacks. Our crypto-processor is
programmable to provide the flexibility to accelerate various elliptic curve
and pairing-based protocols such as signature aggregation and functional
encryption.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 09:37:41 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Banerjee",
"Utsav",
""
],
[
"Chandrakasan",
"Anantha P.",
""
]
] |
new_dataset
| 0.998424 |
2201.07583
|
Zhongyuan Guo
|
Zhongyuan Guo, Hong Zheng, Changhui You, Tianyu Wang, Chang Liu
|
DMF-Net: Dual-Branch Multi-Scale Feature Fusion Network for copy forgery
identification of anti-counterfeiting QR code
|
17 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anti-counterfeiting QR codes are widely used in people's work and life,
especially in product packaging. However, the anti-counterfeiting QR code has
the risk of being copied and forged in the circulation process. In reality,
copying is usually based on genuine anti-counterfeiting QR codes, but the
brands and models of copiers are diverse, and it is extremely difficult to
determine which individual copier a forged anti-counterfeiting code comes
from. In response to the above problems, this paper proposes a method for copy
forgery identification of anti-counterfeiting QR code based on deep learning.
We first analyze the production principle of anti-counterfeiting QR code, and
convert the identification of copy forgery to device category forensics, and
then a Dual-Branch Multi-Scale Feature Fusion network is proposed. During the
design of the network, we conducted a detailed analysis of the data
preprocessing layer, the single-branch design, and related components;
combined with experiments, this determined the specific structure of the
dual-branch multi-scale feature fusion network. The experimental results show
that the proposed method achieves
a high accuracy of copy forgery identification, which exceeds the current
series of methods in the field of image forensics.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 13:12:38 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Guo",
"Zhongyuan",
""
],
[
"Zheng",
"Hong",
""
],
[
"You",
"Changhui",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Liu",
"Chang",
""
]
] |
new_dataset
| 0.999432 |
2201.07594
|
Abhishek Sharma
|
Abhishek Sharma, Yash Shah, Yash Agrawal, Prateek Jain
|
Real-time Recognition of Yoga Poses using computer Vision for Smart
Health Care
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Nowadays, yoga has become a part of life for many people. Technological
assistance for exercise and sports has been applied to yoga pose
identification. In this work, a self-assistance-based yoga posture
identification technique is developed, which helps users perform yoga with a
real-time correction feature. The work also presents yoga hand mudra (hand
gesture) identification. The YOGI dataset has been developed, which includes
10 yoga postures with around 400-900 images of each pose and also contains 5
mudras, with around 500 images of each mudra. Features have been extracted by
constructing a skeleton on the body for yoga poses and on the hand for mudra
poses, using two different algorithms: one for yoga poses and one for hand
mudras. Joint angles have been extracted as features for different machine
learning and deep learning models. Among all the models, XGBoost with
RandomSearch CV is the most accurate and gives 99.2\% accuracy. The complete
design framework is described in the present paper.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 13:41:58 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Sharma",
"Abhishek",
""
],
[
"Shah",
"Yash",
""
],
[
"Agrawal",
"Yash",
""
],
[
"Jain",
"Prateek",
""
]
] |
new_dataset
| 0.999832 |
2201.07643
|
Martin Szomszor
|
Martin Szomszor and Euan Adie
|
Overton -- A bibliometric database of policy document citations
| null | null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an analysis of the Overton policy document database,
describing the makeup of materials indexed and the nature in which they cite
academic literature. We report on various aspects of the data, including
growth, geographic spread, language representation, the range of policy source
types included, and the availability of citation links in documents.
Longitudinal analysis over established journal category schemes is used to
reveal the scale and disciplinary focus of citations and determine the
feasibility of developing field-normalized citation indicators. We examine how
well self-reported funding outcomes collected by UK funders corresponds to data
indexed in the Overton database, and if peer-review assessment of impact as
measured by the UK Research Excellence Framework (REF) 2014 correlates with
derived citation metrics. Our findings show that for some research topics, such
as health, economics, social care and the environment, Overton contains a core
set of policy documents with sufficient citation linkage to academic literature
to support various citation analysis that may be informative in research
evaluation, impact assessment, and policy review. The data indexed in Overton
agrees with that collected via self-reporting of funding outcomes, and
correlates with peer-review assessment of impact in some disciplines.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 15:21:11 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Szomszor",
"Martin",
""
],
[
"Adie",
"Euan",
""
]
] |
new_dataset
| 0.999473 |
2201.07661
|
Christian Reul
|
Christian Reul, Stefan Tomasek, Florian Langhanki, Uwe Springmann
|
Open Source Handwritten Text Recognition on Medieval Manuscripts using
Mixed Models and Document-Specific Finetuning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper deals with the task of practical and open source Handwritten Text
Recognition (HTR) on German medieval manuscripts. We report on our efforts to
construct mixed recognition models which can be applied out-of-the-box without
any further document-specific training but also serve as a starting point for
finetuning by training a new model on a few pages of transcribed text (ground
truth). To train the mixed models we collected a corpus of 35 manuscripts and
ca. 12.5k text lines for two widely used handwriting styles, Gothic and
Bastarda cursives. Evaluating the mixed models out-of-the-box on four unseen
manuscripts resulted in an average Character Error Rate (CER) of 6.22%. After
training on 2, 4 and eventually 32 pages the CER dropped to 3.27%, 2.58%, and
1.65%, respectively. While the in-domain recognition and training of models
(Bastarda model to Bastarda material, Gothic to Gothic) unsurprisingly yielded
the best results, finetuning out-of-domain models to unseen scripts was still
shown to be superior to training from scratch.
Our new mixed models have been made openly available to the community.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 15:34:19 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Reul",
"Christian",
""
],
[
"Tomasek",
"Stefan",
""
],
[
"Langhanki",
"Florian",
""
],
[
"Springmann",
"Uwe",
""
]
] |
new_dataset
| 0.987622 |
2201.07665
|
Kenneth Blomqvist
|
Kenneth Blomqvist, Jen Jen Chung, Lionel Ott, Roland Siegwart
|
Semi-automatic 3D Object Keypoint Annotation and Detection for the
Masses
|
Code: https://github.com/ethz-asl/object_keypoints
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Creating computer vision datasets requires careful planning and lots of time
and effort. In robotics research, we often have to use standardized objects,
such as the YCB object set, for tasks such as object tracking, pose estimation,
grasping and manipulation, as there are datasets and pre-learned methods
available for these objects. This limits the impact of our research since
learning-based computer vision methods can only be used in scenarios that are
supported by existing datasets.
In this work, we present a full object keypoint tracking toolkit,
encompassing the entire process from data collection, labeling, model learning
and evaluation. We present a semi-automatic way of collecting and labeling
datasets using a wrist mounted camera on a standard robotic arm. Using our
toolkit and method, we are able to obtain a working 3D object keypoint detector
and go through the whole process of data collection, annotation, and learning
in just a couple of hours of active time.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 15:41:54 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Blomqvist",
"Kenneth",
""
],
[
"Chung",
"Jen Jen",
""
],
[
"Ott",
"Lionel",
""
],
[
"Siegwart",
"Roland",
""
]
] |
new_dataset
| 0.992286 |
2201.07706
|
Sudeep Pasricha
|
Abhishek Balasubramaniam, Sudeep Pasricha
|
Object Detection in Autonomous Vehicles: Status and Open Challenges
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Object detection is a computer vision task that has become an integral part
of many consumer applications today such as surveillance and security systems,
mobile text recognition, and diagnosing diseases from MRI/CT scans. Object
detection is also one of the critical components to support autonomous driving.
Autonomous vehicles rely on the perception of their surroundings to ensure safe
and robust driving performance. This perception system uses object detection
algorithms to accurately determine objects such as pedestrians, vehicles,
traffic signs, and barriers in the vehicle's vicinity. Deep learning-based
object detectors play a vital role in finding and localizing these objects in
real-time. This article discusses the state-of-the-art in object detectors and
open challenges for their integration into autonomous vehicles.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 16:45:16 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Balasubramaniam",
"Abhishek",
""
],
[
"Pasricha",
"Sudeep",
""
]
] |
new_dataset
| 0.986321 |
2201.07738
|
Ahmad Alhilal
|
Ahmad Alhilal (1), Tristan Braud (1), Bo Han (2) and Pan Hui (1) ((1)
Hong Kong University of Science and Technology (2) George Mason University)
|
Nebula: Reliable Low-latency Video Transmission for Mobile Cloud Gaming
|
12, 13, accepted in conference: TheWebConf'22, uses subfig.dtx
| null | null | null |
cs.MM cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Mobile cloud gaming enables high-end games on constrained devices by
streaming the game content from powerful servers through mobile networks.
Mobile networks suffer from highly variable bandwidth, latency, and losses that
affect the gaming experience. This paper introduces Nebula, an end-to-end cloud
gaming framework to minimize the impact of network conditions on the user
experience. Nebula relies on an end-to-end distortion model adapting the video
source rate and the amount of frame-level redundancy based on the measured
network conditions. As a result, it minimizes the motion-to-photon (MTP)
latency while protecting the frames from losses. We fully implement Nebula and
evaluate its performance against the state of the art techniques and latest
research in real-time mobile cloud gaming transmission on a physical testbed
over emulated and real wireless networks. Nebula consistently balances MTP
latency (<140 ms) and visual quality (>31 dB) even in highly variable
environments. A user experiment confirms that Nebula maximizes the user
experience with high perceived video quality, playability, and low user load.
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 10:30:12 GMT"
}
] | 2022-01-20T00:00:00 |
[
[
"Alhilal",
"Ahmad",
""
],
[
"Braud",
"Tristan",
""
],
[
"Han",
"Bo",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.998446 |
1202.4626
|
Avraham N. Trahtman
|
A. N. Trahtman
|
The \v{C}erny conjecture
|
14 pages, 11 Lemmas, most of which are considered trivial by various
reviewers. Everything goes to that the main result is also trivial. And the
author himself is inclined to admit it
| null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A word $w$ of letters on the edges of the underlying graph $\Gamma$ of a
deterministic finite automaton (DFA) is called synchronizing if $w$ sends all
states of the automaton to a unique state. J. \v{C}erny discovered in 1964 a
sequence of
$n$-state complete DFA possessing a minimal synchronizing word of length
$(n-1)^2$. The hypothesis, well known today as the \v{C}erny conjecture, claims
that this is also the precise upper bound on the length of such a word for a complete
DFA. The hypothesis was formulated in 1966 by Starke. The problem has motivated
a great and constantly growing number of investigations and generalizations. To
prove the conjecture, we use an algebra with an operation $w$ on a special class
of row monomial
matrices (one unit and rest zeros in every row), induced by words in the
alphabet of labels on edges. These matrices generate a space with respect to
the mentioned operation. The proof is based on the connection between the
length of a word $u$ and the dimension of the space generated by solutions
$L_x$ of the matrix equation $M_uL_x=M_s$ for a synchronizing word $s$, as
well as on the relation between the ranks of $M_u$ and $L_x$.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2012 12:50:14 GMT"
},
{
"version": "v10",
"created": "Mon, 14 Jun 2021 15:24:13 GMT"
},
{
"version": "v11",
"created": "Tue, 18 Jan 2022 11:16:53 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Feb 2012 09:42:30 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Feb 2012 08:58:28 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Aug 2013 18:54:12 GMT"
},
{
"version": "v5",
"created": "Thu, 29 Aug 2013 06:51:30 GMT"
},
{
"version": "v6",
"created": "Thu, 17 Oct 2013 07:22:11 GMT"
},
{
"version": "v7",
"created": "Thu, 20 Mar 2014 13:29:06 GMT"
},
{
"version": "v8",
"created": "Fri, 16 Sep 2016 14:55:56 GMT"
},
{
"version": "v9",
"created": "Tue, 4 Jul 2017 10:30:27 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Trahtman",
"A. N.",
""
]
] |
new_dataset
| 0.994647 |
1609.00118
|
Ian Hayes
|
Ian J. Hayes, Robert Colvin, Larissa Meinicke, Kirsten Winter, and
Andrius Velykis
|
An algebra of synchronous atomic steps
| null |
Fitzgerald J., Heitmeyer C., Gnesi S., Philippou A. (eds) FM 2016:
Formal Methods. FM 2016. Lecture Notes in Computer Science, vol 9995.
Springer, Cham
|
10.1007/978-3-319-48989-6_22
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This research started with an algebra for reasoning about rely/guarantee
concurrency for a shared memory model. The approach taken led to a more
abstract algebra of atomic steps, in which atomic steps synchronise (rather
than interleave) when composed in parallel. The algebra of rely/guarantee
concurrency then becomes an interpretation of the more abstract algebra. Many
of the core properties needed for rely/guarantee reasoning can be shown to hold
in the abstract algebra where their proofs are simpler and hence allow a higher
degree of automation. Moreover, the realisation that the synchronisation
mechanisms of standard process algebras, such as CSP and CCS/SCCS, can be
interpreted in our abstract algebra gives evidence of its unifying power. The
algebra has been encoded in Isabelle/HOL to provide a basis for tool support.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2016 06:10:00 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2016 04:37:26 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Jan 2022 18:26:38 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Hayes",
"Ian J.",
""
],
[
"Colvin",
"Robert",
""
],
[
"Meinicke",
"Larissa",
""
],
[
"Winter",
"Kirsten",
""
],
[
"Velykis",
"Andrius",
""
]
] |
new_dataset
| 0.998083 |
1910.00887
|
Benjamin Bergougnoux
|
Benjamin Bergougnoux, Charis Papadopoulos and Jan Arne Telle
|
Node Multiway Cut and Subset Feedback Vertex Set on Graphs of Bounded
Mim-width
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The two weighted graph problems Node Multiway Cut (NMC) and Subset Feedback
Vertex Set (SFVS) both ask for a vertex set of minimum total weight, that for
NMC disconnects a given set of terminals, and for SFVS intersects all cycles
containing a vertex of a given set. We design a meta-algorithm that allows us
to solve both problems in time $2^{O(rw^3)}\cdot n^{4}$, $2^{O(q^2\log(q))}\cdot
n^{4}$, and $n^{O(k^2)}$ where $rw$ is the rank-width, $q$ the
$\mathbb{Q}$-rank-width, and $k$ the mim-width of a given decomposition. This
answers in the affirmative an open question raised by Jaffke et al.
(Algorithmica, 2019) concerning an XP algorithm for SFVS parameterized by
mim-width.
By a unified algorithm, this solves both problems in polynomial-time on the
following graph classes: Interval, Permutation, and Bi-Interval graphs,
Circular Arc and Circular Permutation graphs, Convex graphs, $k$-Polygon,
Dilworth-$k$ and Co-$k$-Degenerate graphs for fixed $k$; and also on Leaf Power
graphs if a leaf root is given as input, on $H$-Graphs for fixed $H$ if an
$H$-representation is given as input, and on arbitrary powers of graphs in all
the above classes. Prior to our results, only SFVS was known to be tractable,
and only when restricted to Interval and Permutation graphs; all other results
are new.
|
[
{
"version": "v1",
"created": "Wed, 2 Oct 2019 11:45:52 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Oct 2019 09:23:29 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Oct 2019 11:33:46 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Jan 2020 14:45:29 GMT"
},
{
"version": "v5",
"created": "Tue, 3 Mar 2020 10:32:58 GMT"
},
{
"version": "v6",
"created": "Tue, 22 Sep 2020 11:27:13 GMT"
},
{
"version": "v7",
"created": "Mon, 23 Aug 2021 08:57:10 GMT"
},
{
"version": "v8",
"created": "Mon, 17 Jan 2022 10:12:55 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Bergougnoux",
"Benjamin",
""
],
[
"Papadopoulos",
"Charis",
""
],
[
"Telle",
"Jan Arne",
""
]
] |
new_dataset
| 0.999454 |
2012.04035
|
Martin V\"ogele
|
Raphael J.L. Townshend, Martin V\"ogele, Patricia Suriana, Alexander
Derry, Alexander Powers, Yianni Laloudakis, Sidhika Balachandar, Bowen Jing,
Brandon Anderson, Stephan Eismann, Risi Kondor, Russ B. Altman, Ron O. Dror
|
ATOM3D: Tasks On Molecules in Three Dimensions
|
NeurIPS 2021 Datasets and Benchmarks Track
| null | null | null |
cs.LG physics.bio-ph physics.comp-ph q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Computational methods that operate on three-dimensional molecular structure
have the potential to solve important questions in biology and chemistry. In
particular, deep neural networks have gained significant attention, but their
widespread adoption in the biomolecular domain has been limited by a lack of
either systematic performance benchmarks or a unified toolkit for interacting
with molecular data. To address this, we present ATOM3D, a collection of both
novel and existing benchmark datasets spanning several key classes of
biomolecules. We implement several classes of three-dimensional molecular
learning methods for each of these tasks and show that they consistently
improve performance relative to methods based on one- and two-dimensional
representations. The specific choice of architecture proves to be critical for
performance, with three-dimensional convolutional networks excelling at tasks
involving complex geometries, graph networks performing well on systems
requiring detailed positional information, and the more recently developed
equivariant networks showing significant promise. Our results indicate that
many molecular problems stand to gain from three-dimensional molecular
learning, and that there is potential for improvement on many tasks which
remain underexplored. To lower the barrier to entry and facilitate further
developments in the field, we also provide a comprehensive suite of tools for
dataset processing, model training, and evaluation in our open-source atom3d
Python package. All datasets are available for download from
https://www.atom3d.ai .
|
[
{
"version": "v1",
"created": "Mon, 7 Dec 2020 20:18:23 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Jun 2021 06:55:34 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Nov 2021 08:12:29 GMT"
},
{
"version": "v4",
"created": "Sat, 15 Jan 2022 20:30:01 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Townshend",
"Raphael J. L.",
""
],
[
"Vögele",
"Martin",
""
],
[
"Suriana",
"Patricia",
""
],
[
"Derry",
"Alexander",
""
],
[
"Powers",
"Alexander",
""
],
[
"Laloudakis",
"Yianni",
""
],
[
"Balachandar",
"Sidhika",
""
],
[
"Jing",
"Bowen",
""
],
[
"Anderson",
"Brandon",
""
],
[
"Eismann",
"Stephan",
""
],
[
"Kondor",
"Risi",
""
],
[
"Altman",
"Russ B.",
""
],
[
"Dror",
"Ron O.",
""
]
] |
new_dataset
| 0.999522 |
2012.11513
|
Bertrand Teguia Tabuguia
|
Bertrand Teguia Tabuguia
|
A variant of van Hoeij's algorithm to compute hypergeometric term
solutions of holonomic recurrence equations
|
25 pages
|
J. Algorithm Comput., 53, 2021, 1--32
|
10.22059/JAC.2021.85170
| null |
cs.SC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear homogeneous recurrence equations with polynomial coefficients are said
to be holonomic. Such equations have been introduced in the last century for
proving and discovering combinatorial and hypergeometric identities. Given a
field K of characteristic zero, a term a(n) is called hypergeometric with
respect to K, if the ratio a(n+1)/a(n) is a rational function over K. The
solutions space of holonomic recurrence equations gained more interest in the
1990s from the well known Zeilberger's algorithm. In particular, algorithms
computing the subspace of hypergeometric term solutions which covers
polynomial, rational, and some algebraic solutions of these equations were
investigated by Marko Petkov\v{s}ek (1993) and Mark van Hoeij (1999). The
algorithm proposed by the latter is characterized by a much better efficiency
than that of the other; it computes, in Gamma representations, a basis of the
subspace of hypergeometric term solutions of any given holonomic recurrence
equation, and is considered as the current state of the art in this area. Mark
van Hoeij implemented his algorithm in the Computer Algebra System (CAS) Maple
through the command $LREtools[hypergeomsols]$.
We propose a variant of van Hoeij's algorithm that performs the same
efficiency and gives outputs in terms of factorials and shifted factorials,
without considering certain recommendations of the original version. We have
implementations of our algorithm for the CASs Maxima and Maple. Such an
implementation is new for Maxima which is therefore used for general-purpose
examples. Our Maxima code is currently available as a third-party package for
Maxima. A comparison between van Hoeij's implementation and ours is presented
for Maple 2020. It appears that both have the same efficiency, and moreover,
for some particular cases, our code finds results where
$LREtools[hypergeomsols]$ fails.
|
[
{
"version": "v1",
"created": "Mon, 21 Dec 2020 17:28:05 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Tabuguia",
"Bertrand Teguia",
""
]
] |
new_dataset
| 0.969923 |
2102.05378
|
Fan Feng
|
Qianying Chen, Fan Feng, Pengyu Lv, and Huiling Duan
|
Origami spring-inspired shape morphing for flexible robotics
| null | null |
10.1089/soro.2021.0030
| null |
cs.RO cond-mat.soft
|
http://creativecommons.org/licenses/by/4.0/
|
Flexible robotics are capable of achieving various functionalities by shape
morphing, benefiting from their compliant bodies and reconfigurable structures.
Here we construct and study a class of origami springs generalized from the
known interleaved origami spring, as promising candidates for shape morphing in
flexible robotics. These springs are found to exhibit nonlinear stretch-twist
coupling and linear/nonlinear mechanical response in the compression/tension
region, analyzed by the demonstrated continuum mechanics models, experiments,
and finite element simulations. To improve the mechanical performance such as
the damage resistance, we establish an origami rigidization method by adding
additional creases to the spring system. Guided by the theoretical framework,
we experimentally realize three types of flexible robotics -- origami spring
ejectors, crawlers, and transformers. These robots show the desired
functionality and outstanding mechanical performance. The proposed concept of
origami-aided design is expected to pave the way to facilitate the diverse
shape morphing of flexible robotics.
|
[
{
"version": "v1",
"created": "Wed, 10 Feb 2021 11:04:09 GMT"
},
{
"version": "v2",
"created": "Sun, 30 May 2021 13:46:14 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Jun 2021 10:38:55 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Chen",
"Qianying",
""
],
[
"Feng",
"Fan",
""
],
[
"Lv",
"Pengyu",
""
],
[
"Duan",
"Huiling",
""
]
] |
new_dataset
| 0.98363 |
2103.12433
|
Dmytro Petryk
|
Dmytro Petryk and Zoya Dyka and Peter Langendoerfer
|
Sensitivity of Standard Library Cells to Optical Fault Injection Attacks
in IHP 250 nm Technology
| null | null |
10.1109/MECO49872.2020.9134146
| null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The IoT consists of a lot of devices such as embedded systems, wireless
sensor nodes (WSNs), control systems, etc. It is essential for some of these
devices to protect information that they process and transmit. The issue is
that an adversary may steal these devices to gain a physical access to the
device. There is a variety of ways to reveal cryptographic keys.
One of them are optical Fault Injection attacks. We performed successful
optical Fault Injections into different types of gates, in particular INV, NAND,
NOR, FF. In our work we concentrate on the selection of the parameters
configured by an attacker and their influence on the success of the Fault
Injections.
|
[
{
"version": "v1",
"created": "Tue, 23 Mar 2021 10:23:58 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jan 2022 14:42:27 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Petryk",
"Dmytro",
""
],
[
"Dyka",
"Zoya",
""
],
[
"Langendoerfer",
"Peter",
""
]
] |
new_dataset
| 0.997369 |
2103.12436
|
Dmytro Petryk
|
Dmytro Petryk and Zoya Dyka and Jens Katzer and Peter Langendoerfer
|
Metal Fillers as Potential Low Cost Countermeasure against Optical Fault
Injection Attacks
| null | null |
10.1109/EWDTS50664.2020.9225092
| null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Physically accessible devices such as sensor nodes in Wireless Sensor
Networks or "smart" devices in the Internet of Things have to be resistant to a
broad spectrum of physical attacks, for example to Side Channel Analysis and to
Fault Injection attacks. In this work we concentrate on the vulnerability of
ASICs to precise optical Fault Injection attacks. Here we propose to use metal
fillers as potential low-cost countermeasure that may be effective against a
broad spectrum of physical attacks. In our future work we plan to evaluate
different methods of metal fillers placement, to select an effective one and to
integrate it as additional design rules into automated design flows.
|
[
{
"version": "v1",
"created": "Tue, 23 Mar 2021 10:28:25 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jan 2022 14:45:50 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Petryk",
"Dmytro",
""
],
[
"Dyka",
"Zoya",
""
],
[
"Katzer",
"Jens",
""
],
[
"Langendoerfer",
"Peter",
""
]
] |
new_dataset
| 0.999436 |
2104.04553
|
Mustafizur Rahman
|
Mustafizur Rahman, Liang Zhou, and Shantanu Chakrabartty
|
SPoTKD: A Protocol for Symmetric Key Distribution over Public Channels
Using Self-Powered Timekeeping Devices
|
14 pages, 12 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a novel class of symmetric key distribution
protocols that leverages basic security primitives offered by low-cost,
hardware chipsets containing millions of synchronized self-powered timers. The
keys are derived from the temporal dynamics of a physical, micro-scale
time-keeping device which makes the keys immune to any potential side-channel
attacks, malicious tampering, or snooping. Using the behavioral model of the
self-powered timers, we first show that the derived key-strings can pass the
randomness test as defined by the National Institute of Standards and
Technology (NIST) suite. The key-strings are then used in two SPoTKD
(Self-Powered Timer Key Distribution) protocols that exploit the timer's
dynamics as one-way functions: (a) protocol 1 facilitates secure communications
between a user and a remote Server, and (b) protocol 2 facilitates secure
communications between two users. In this paper, we investigate the security of
these protocols under standard model and against different adversarial attacks.
Using Monte-Carlo simulations, we also investigate the robustness of these
protocols in the presence of real-world operating conditions and propose
error-correcting SPoTKD protocols to mitigate these noise-related artifacts.
|
[
{
"version": "v1",
"created": "Fri, 9 Apr 2021 18:31:41 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Aug 2021 20:39:11 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Jan 2022 17:22:41 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Rahman",
"Mustafizur",
""
],
[
"Zhou",
"Liang",
""
],
[
"Chakrabartty",
"Shantanu",
""
]
] |
new_dataset
| 0.993778 |
2105.12708
|
Michael Gref
|
Julia Pritzen, Michael Gref, Dietlind Z\"uhlke, Christoph Schmidt
|
Multitask Learning for Grapheme-to-Phoneme Conversion of Anglicisms in
German Speech Recognition
|
Submitted to LREC 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anglicisms are a challenge in German speech recognition. Due to their
irregular pronunciation compared to native German words, automatically
generated pronunciation dictionaries often include faulty phoneme sequences for
Anglicisms. In this work, we propose a multitask sequence-to-sequence approach
for grapheme-to-phoneme conversion to improve the phonetization of Anglicisms.
We extended a grapheme-to-phoneme model with a classifier to distinguish
Anglicisms from native German words. With this approach, the model learns to
generate pronunciations differently depending on the classification result. We
used our model to create supplementary Anglicism pronunciation dictionaries
that are added to an existing German speech recognition model. Tested on a
dedicated Anglicism evaluation set, we improved the recognition of Anglicisms
compared to a baseline model, reducing the word error rate by 1 % and the
Anglicism error rate by 3 %. We show that multitask learning can help solve
the challenge of Anglicisms in German speech recognition.
|
[
{
"version": "v1",
"created": "Wed, 26 May 2021 17:42:13 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jul 2021 15:36:06 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Jan 2022 09:05:59 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Pritzen",
"Julia",
""
],
[
"Gref",
"Michael",
""
],
[
"Zühlke",
"Dietlind",
""
],
[
"Schmidt",
"Christoph",
""
]
] |
new_dataset
| 0.992189 |
2106.07271
|
Dmytro Petryk
|
Dmytro Petryk, Zoya Dyka, Roland Sorge, Jan Schaeffner and Peter
Langendoerfer
|
Optical Fault Injection Attacks against Radiation-Hard Registers
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
If devices are physically accessible, optical fault injection attacks pose a
great threat since the data processed as well as the operation flow can be
manipulated. Successful physical attacks may lead not only to leakage of secret
information such as cryptographic private keys, but can also cause economic
damage especially if as a result of such a manipulation a critical
infrastructure is successfully attacked. Laser based attacks exploit the
sensitivity of CMOS technologies to electromagnetic radiation in the visible or
the infrared spectrum. It can be expected that radiation-hard designs,
specially crafted for space applications, are more robust not only against
high-energy particles and short electromagnetic waves but also against optical
fault injection attacks. In this work we investigated the sensitivity of
radiation-hard JICG shift registers to optical fault injection attacks. In our
experiments, we were able to trigger bit-set and bit-reset repeatedly changing
the data stored in single JICG flip-flops despite their high-radiation fault
tolerance.
|
[
{
"version": "v1",
"created": "Mon, 14 Jun 2021 09:46:30 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Jun 2021 14:20:29 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Jan 2022 09:34:02 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Petryk",
"Dmytro",
""
],
[
"Dyka",
"Zoya",
""
],
[
"Sorge",
"Roland",
""
],
[
"Schaeffner",
"Jan",
""
],
[
"Langendoerfer",
"Peter",
""
]
] |
new_dataset
| 0.997327 |
2106.07351
|
Oliver Gasser
|
Florian Aschenbrenner, Tanya Shreedhar, Oliver Gasser, Nitinder Mohan,
J\"org Ott
|
From Single Lane to Highways: Analyzing the Adoption of Multipath TCP in
the Internet
|
Proceedings of the 2021 IFIP Networking Conference (Networking '21).
Visit https://mptcp.io for up-to-date MPTCP measurement results
| null |
10.23919/IFIPNetworking52078.2021.9472785
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multipath TCP (MPTCP) extends traditional TCP to enable simultaneous use of
multiple connection endpoints at the source and destination. MPTCP has been
under active development since its standardization in 2013, and more recently
in February 2020, MPTCP was upstreamed to the Linux kernel.
In this paper, we provide the first broad analysis of MPTCPv0 in the
Internet. We probe the entire IPv4 address space and an IPv6 hitlist to detect
MPTCP-enabled systems operational on port 80 and 443. Our scans reveal a steady
increase in MPTCP-capable IPs, reaching 9k+ on IPv4 and a few dozen on IPv6. We
also discover a significant share of seemingly MPTCP-capable hosts, an artifact
of middleboxes mirroring TCP options. We conduct targeted HTTP(S) measurements
towards select hosts and find that middleboxes can aggressively impact the
perceived quality of applications utilizing MPTCP. Finally, we analyze two
complementary traffic traces from CAIDA and MAWI to shed light on the
real-world usage of MPTCP. We find that while MPTCP usage has increased by a
factor of 20 over the past few years, its traffic share is still quite low.
|
[
{
"version": "v1",
"created": "Mon, 14 Jun 2021 12:34:04 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Aschenbrenner",
"Florian",
""
],
[
"Shreedhar",
"Tanya",
""
],
[
"Gasser",
"Oliver",
""
],
[
"Mohan",
"Nitinder",
""
],
[
"Ott",
"Jörg",
""
]
] |
new_dataset
| 0.966744 |
2108.12390
|
Momona Yamagami
|
Momona Yamagami, Sasa Junuzovic, Mar Gonzalez-Franco, Eyal Ofek,
Edward Cutrell, John R. Porter, Andrew D. Wilson, Martez E. Mott
|
Two-In-One: A Design Space for Mapping Unimanual Input into Bimanual
Interactions in VR for Users with Limited Movement
|
26 pages, 3 figures, 6 tables
| null |
10.1145/3510463
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Virtual Reality (VR) applications often require users to perform actions with
two hands when performing tasks and interacting with objects in virtual
environments. Although bimanual interactions in VR can resemble real-world
interactions -- thus increasing realism and improving immersion -- they can
also pose significant accessibility challenges to people with limited mobility,
such as for people who have full use of only one hand. An opportunity exists to
create accessible techniques that take advantage of users' abilities, but
designers currently lack structured tools to consider alternative approaches.
To begin filling this gap, we propose Two-in-One, a design space that
facilitates the creation of accessible methods for bimanual interactions in VR
from unimanual input. Our design space comprises two dimensions, bimanual
interactions and computer assistance, and we provide a detailed examination of
issues to consider when creating new unimanual input techniques that map to
bimanual interactions in VR. We used our design space to create three
interaction techniques that we subsequently implemented for a subset of
bimanual interactions and received user feedback through a video elicitation
study with 17 people with limited mobility. Our findings explore complex
tradeoffs associated with autonomy and agency and highlight the need for
additional settings and methods to make VR accessible to people with limited
mobility.
|
[
{
"version": "v1",
"created": "Fri, 27 Aug 2021 16:52:50 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jan 2022 21:21:51 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Yamagami",
"Momona",
""
],
[
"Junuzovic",
"Sasa",
""
],
[
"Gonzalez-Franco",
"Mar",
""
],
[
"Ofek",
"Eyal",
""
],
[
"Cutrell",
"Edward",
""
],
[
"Porter",
"John R.",
""
],
[
"Wilson",
"Andrew D.",
""
],
[
"Mott",
"Martez E.",
""
]
] |
new_dataset
| 0.999054 |
2108.12960
|
Jian Guan
|
Jian Guan, Zhuoer Feng, Yamei Chen, Ruilin He, Xiaoxi Mao, Changjie
Fan, Minlie Huang
|
LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text
Understanding and Generation
|
Accepted by TACL 2022. Benchmark datasets, pretraining models,
appendix url: https://github.com/thu-coai/LOT-LongLM
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Standard multi-task benchmarks are essential for developing pretraining
models that can generalize to various downstream tasks. Existing benchmarks for
natural language processing (NLP) usually focus only on understanding or
generating short texts. However, long text modeling requires many distinct
abilities in contrast to short texts, such as the modeling of long-range
discourse and commonsense relations, and the coherence and controllability of
generation. The lack of standardized benchmarks makes it difficult to assess
these abilities of a model and fairly compare different models, especially
Chinese models. Therefore, we propose a story-centric benchmark named LOT for
evaluating Chinese long text modeling, which aggregates two understanding tasks
and two generation tasks. We construct new datasets for these tasks based on
human-written Chinese stories with hundreds of words. Furthermore, we release
an encoder-decoder-based Chinese long text pretraining model named LongLM with
up to 1 billion parameters. We pretrain LongLM on 120G Chinese novels with two
generative tasks including text infilling and conditional continuation.
Extensive experiments show that LongLM outperforms similar-sized pretraining
models substantially on both the understanding and generation tasks in LOT.
|
[
{
"version": "v1",
"created": "Mon, 30 Aug 2021 02:38:32 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jan 2022 08:52:46 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Guan",
"Jian",
""
],
[
"Feng",
"Zhuoer",
""
],
[
"Chen",
"Yamei",
""
],
[
"He",
"Ruilin",
""
],
[
"Mao",
"Xiaoxi",
""
],
[
"Fan",
"Changjie",
""
],
[
"Huang",
"Minlie",
""
]
] |
new_dataset
| 0.991976 |
2109.00317
|
Lun Luo
|
Lun Luo, Si-Yuan Cao, Bin Han, Hui-Liang Shen, and Junwei Li
|
BVMatch: Lidar-based Place Recognition Using Bird's-eye View Images
| null |
in IEEE Robotics and Automation Letters, vol. 6, no. 3, pp.
6076-6083, July 2021
|
10.1109/LRA.2021.3091386
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing places using Lidar in large-scale environments is challenging due
to the sparse nature of point cloud data. In this paper we present BVMatch, a
Lidar-based frame-to-frame place recognition framework, that is capable of
estimating 2D relative poses. Based on the assumption that the ground area can
be approximated as a plane, we uniformly discretize the ground area into grids
and project 3D Lidar scans to bird's-eye view (BV) images. We further use a
bank of Log-Gabor filters to build a maximum index map (MIM) that encodes the
orientation information of the structures in the images. We analyze the
orientation characteristics of MIM theoretically and introduce a novel
descriptor called bird's-eye view feature transform (BVFT). The proposed BVFT
is insensitive to rotation and intensity variations of BV images. Leveraging
the BVFT descriptors, we unify the Lidar place recognition and pose estimation
tasks into the BVMatch framework. The experiments conducted on three
large-scale datasets show that BVMatch outperforms the state-of-the-art methods
in terms of both recall rate of place recognition and pose estimation accuracy.
The source code of our method is publicly available at
https://github.com/zjuluolun/BVMatch.
|
[
{
"version": "v1",
"created": "Wed, 1 Sep 2021 11:52:05 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Jan 2022 15:33:00 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Luo",
"Lun",
""
],
[
"Cao",
"Si-Yuan",
""
],
[
"Han",
"Bin",
""
],
[
"Shen",
"Hui-Liang",
""
],
[
"Li",
"Junwei",
""
]
] |
new_dataset
| 0.999147 |
2109.04741
|
Leonard Bauersfeld
|
Leonard Bauersfeld and Davide Scaramuzza
|
Range, Endurance, and Optimal Speed Estimates for Multicopters
|
7 pages + 1 page references
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multicopters are among the most versatile mobile robots. Their applications
range from inspection and mapping tasks to providing vital reconnaissance in
disaster zones and to package delivery. The range, endurance, and speed a
multirotor vehicle can achieve while performing its task is a decisive factor
not only for vehicle design and mission planning, but also for policy makers
deciding on the rules and regulations for aerial robots. To the best of the
authors' knowledge, this work proposes the first approach to estimate the
range, endurance, and optimal flight speed for a wide variety of multicopters.
This advance is made possible by combining a state-of-the-art first-principles
aerodynamic multicopter model based on blade-element-momentum theory with an
electric-motor model and a graybox battery model. This model predicts the cell
voltage with only 1.3% relative error (43.1 mV), even if the battery is
subjected to non-constant discharge rates. Our approach is validated with
real-world experiments on a test bench as well as with flights at speeds up to
65 km/h in one of the world's largest motion-capture systems. We also present
an accurate pen-and-paper algorithm to estimate the range, endurance and
optimal speed of multicopters to help future researchers build drones with
maximal range and endurance, ensuring that future multirotor vehicles are even
more versatile.
|
[
{
"version": "v1",
"created": "Fri, 10 Sep 2021 09:05:06 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jan 2022 15:04:57 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Bauersfeld",
"Leonard",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.979269 |
2110.15161
|
Hao Wang
|
Qing Yang, Hao Wang, Xiaoxiao Wu, Taotao Wang, Shengli Zhang, Naijin
Liu
|
Secure Blockchain Platform for Industrial IoT with Trusted Computing
Hardware
| null |
IEEE Internet of Things Magazine 2021
|
10.1109/IOTM.001.2100043
| null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
As a disruptive technology that originates from cryptocurrency, blockchain
provides a trusted platform to facilitate industrial IoT (IIoT) applications.
However, implementing a blockchain platform in IIoT scenarios confronts various
security challenges due to the rigorous deployment condition. To this end, we
present a novel design of secure blockchain based on trusted computing hardware
for IIoT applications. Specifically, we employ the trusted execution
environment (TEE) module and a customized security chip to safeguard the
blockchain against different attacking vectors. Furthermore, we implement the
proposed secure IIoT blockchain on the ARM-based embedded device and build a
small-scale IIoT network to evaluate its performance. Our experimental results
show that the secure blockchain platform achieves a high throughput (150TPS)
with low transaction confirmation delay (below 66ms), demonstrating its
feasibility in practical IIoT scenarios. Finally, we outline the open
challenges and future research directions.
|
[
{
"version": "v1",
"created": "Thu, 28 Oct 2021 14:37:01 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Yang",
"Qing",
""
],
[
"Wang",
"Hao",
""
],
[
"Wu",
"Xiaoxiao",
""
],
[
"Wang",
"Taotao",
""
],
[
"Zhang",
"Shengli",
""
],
[
"Liu",
"Naijin",
""
]
] |
new_dataset
| 0.997971 |
2111.08536
|
Hugo Y\`eche
|
Hugo Y\`eche, Rita Kuznetsova, Marc Zimmermann, Matthias H\"user,
Xinrui Lyu, Martin Faltys, Gunnar R\"atsch
|
HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on
High-resolution ICU Data
|
NeurIPS 2021 (Datasets and Benchmarks)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent success of machine learning methods applied to time series
collected from Intensive Care Units (ICU) exposes the lack of standardized
machine learning benchmarks for developing and comparing such methods. While
raw datasets, such as MIMIC-IV or eICU, can be freely accessed on Physionet,
the choice of tasks and pre-processing is often chosen ad-hoc for each
publication, limiting comparability across publications. In this work, we aim
to improve this situation by providing a benchmark covering a large spectrum of
ICU-related tasks. Using the HiRID dataset, we define multiple clinically
relevant tasks in collaboration with clinicians. In addition, we provide a
reproducible end-to-end pipeline to construct both data and labels. Finally, we
provide an in-depth analysis of current state-of-the-art sequence modeling
methods, highlighting some limitations of deep learning approaches for this
type of data. With this benchmark, we hope to give the research community the
possibility of a fair comparison of their work.
|
[
{
"version": "v1",
"created": "Tue, 16 Nov 2021 15:06:42 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Nov 2021 08:48:25 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Nov 2021 09:00:45 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Jan 2022 10:11:09 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Yèche",
"Hugo",
""
],
[
"Kuznetsova",
"Rita",
""
],
[
"Zimmermann",
"Marc",
""
],
[
"Hüser",
"Matthias",
""
],
[
"Lyu",
"Xinrui",
""
],
[
"Faltys",
"Martin",
""
],
[
"Rätsch",
"Gunnar",
""
]
] |
new_dataset
| 0.998874 |
2111.14565
|
Luc Brun PR.
|
Luc Brun, Benoit Ga\"uz\`ere, S\'ebastien Bougleux, Florian Yger
|
A new Sinkhorn algorithm with Deletion and Insertion operations
|
20 pages
| null | null | null |
cs.LG cs.NE math.OC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This technical report is devoted to the continuous estimation of an
epsilon-assignment. Roughly speaking, an epsilon assignment between two sets V1
and V2 may be understood as a bijective mapping between a sub part of V1 and a
sub part of V2 . The remaining elements of V1 (not included in this mapping)
are mapped onto an epsilon pseudo element of V2 . We say that such elements are
deleted. Conversely, the remaining elements of V2 correspond to the image of
the epsilon pseudo element of V1. We say that these elements are inserted. As a
result our method provides a result similar to the one of the Sinkhorn
algorithm with the additional ability to reject some elements which are either
inserted or deleted. It thus naturally handles sets V1 and V2 of different
sizes and decides mappings/insertions/deletions in a unified way. Our
algorithms are iterative and differentiable and may thus be easily inserted
within a backpropagation based learning framework such as artificial neural
networks.
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 14:47:11 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Jan 2022 10:33:45 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Brun",
"Luc",
""
],
[
"Gaüzère",
"Benoit",
""
],
[
"Bougleux",
"Sébastien",
""
],
[
"Yger",
"Florian",
""
]
] |
new_dataset
| 0.975018 |
2112.01949
|
Antonio Albanese
|
Antonio Albanese and Francesco Devoti and Vincenzo Sciancalepore and
Marco Di Renzo and Xavier Costa-P\'erez
|
MARISA: A Self-configuring Metasurfaces Absorption and Reflection
Solution Towards 6G
|
Accepted for presentation at IEEE INFOCOM 2022
| null | null | null |
cs.NI cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable Intelligent Surfaces (RISs) are considered one of the key
disruptive technologies towards future 6G networks. RISs revolutionize the
traditional wireless communication paradigm by controlling the wave propagation
properties of the impinging signals as required. A major roadblock for RIS is
though the need for a fast and complex control channel to continuously adapt to
the ever-changing wireless channel conditions. In this paper, we ask ourselves
the question: Would it be feasible to remove the need for control channels for
RISs? We analyze the feasibility of devising Self-Configuring Smart Surfaces
that can be easily and seamlessly installed throughout the environment,
following the new Internet-of-Surfaces (IoS) paradigm, without requiring
modifications of the deployed mobile network. To this aim, we design MARISA, a
self-configuring metasurfaces absorption and reflection solution, and show that
it can achieve a better-than-expected performance rivaling with control
channel-driven RISs.
|
[
{
"version": "v1",
"created": "Fri, 3 Dec 2021 14:51:46 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Jan 2022 09:58:32 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Albanese",
"Antonio",
""
],
[
"Devoti",
"Francesco",
""
],
[
"Sciancalepore",
"Vincenzo",
""
],
[
"Di Renzo",
"Marco",
""
],
[
"Costa-Pérez",
"Xavier",
""
]
] |
new_dataset
| 0.953289 |
2112.04680
|
Zhenyu Li
|
Zhenyu Li, Zehui Chen, Ang Li, Liangji Fang, Qinhong Jiang, Xianming
Liu, Junjun Jiang, Bolei Zhou, Hang Zhao
|
SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for
Spatial-Aware Visual Representations
|
Accepted to 36th AAAI Conference on Artificial Intelligence (AAAI
2022)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-training has become a standard paradigm in many computer vision tasks.
However, most of the methods are generally designed on the RGB image domain.
Due to the discrepancy between the two-dimensional image plane and the
three-dimensional space, such pre-trained models fail to perceive spatial
information and serve as sub-optimal solutions for 3D-related tasks. To bridge
this gap, we aim to learn a spatial-aware visual representation that can
describe the three-dimensional space and is more suitable and effective for
these tasks. To leverage point clouds, which are much more superior in
providing spatial information compared to images, we propose a simple yet
effective 2D Image and 3D Point cloud Unsupervised pre-training strategy,
called SimIPU. Specifically, we develop a multi-modal contrastive learning
framework that consists of an intra-modal spatial perception module to learn a
spatial-aware representation from point clouds and an inter-modal feature
interaction module to transfer the capability of perceiving spatial information
from the point cloud encoder to the image encoder, respectively. Positive pairs
for contrastive losses are established by the matching algorithm and the
projection matrix. The whole framework is trained in an unsupervised end-to-end
fashion. To the best of our knowledge, this is the first study to explore
contrastive learning pre-training strategies for outdoor multi-modal datasets,
containing paired camera images and LIDAR point clouds. Codes and models are
available at https://github.com/zhyever/SimIPU.
|
[
{
"version": "v1",
"created": "Thu, 9 Dec 2021 03:27:00 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jan 2022 06:57:30 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Li",
"Zhenyu",
""
],
[
"Chen",
"Zehui",
""
],
[
"Li",
"Ang",
""
],
[
"Fang",
"Liangji",
""
],
[
"Jiang",
"Qinhong",
""
],
[
"Liu",
"Xianming",
""
],
[
"Jiang",
"Junjun",
""
],
[
"Zhou",
"Bolei",
""
],
[
"Zhao",
"Hang",
""
]
] |
new_dataset
| 0.989048 |
2112.08632
|
AnChen Li
|
Anchen Li, Bo Yang, Huan Huo, Farookh Hussain
|
CDRec: Cayley-Dickson Recommender
|
1. The Preliminary Section is not sufficient. 2. Figure 2 is not
clear enough. 3. The Experiment Section are not sufficient
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a recommendation framework named Cayley-Dickson
Recommender. We introduce Cayley-Dickson construction which uses a recursive
process to define hypercomplex algebras and their mathematical operations. We
also design a graph convolution operator to learn representations in the
hypercomplex space. To the best of our knowledge, it is the first time that
Cayley-Dickson construction and graph convolution techniques have been used in
hypercomplex recommendation. Compared with the state-of-the-art recommendation
methods, our method achieves superior performance on real-world datasets.
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 05:17:31 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jan 2022 23:37:46 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Li",
"Anchen",
""
],
[
"Yang",
"Bo",
""
],
[
"Huo",
"Huan",
""
],
[
"Hussain",
"Farookh",
""
]
] |
new_dataset
| 0.99687 |
2201.02419
|
Tiezheng Yu
|
Tiezheng Yu, Rita Frieske, Peng Xu, Samuel Cahyawijaya, Cheuk Tung
Shadow Yiu, Holy Lovenia, Wenliang Dai, Elham J. Barezi, Qifeng Chen,
Xiaojuan Ma, Bertram E. Shi, Pascale Fung
|
Automatic Speech Recognition Datasets in Cantonese: A Survey and New
Dataset
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic speech recognition (ASR) on low resource languages improves the
access of linguistic minorities to technological advantages provided by
artificial intelligence (AI). In this paper, we address the problem of data
scarcity for the Hong Kong Cantonese language by creating a new Cantonese
dataset. Our dataset, Multi-Domain Cantonese Corpus (MDCC), consists of 73.6
hours of clean read speech paired with transcripts, collected from Cantonese
audiobooks from Hong Kong. It comprises philosophy, politics, education,
culture, lifestyle and family domains, covering a wide range of topics. We also
review all existing Cantonese datasets and analyze them according to their
speech type, data source, total size and availability. We further conduct
experiments with Fairseq S2T Transformer, a state-of-the-art ASR model, on the
biggest existing dataset, Common Voice zh-HK, and our proposed MDCC, and the
results show the effectiveness of our dataset. In addition, we create a
powerful and robust Cantonese ASR model by applying multi-dataset learning on
MDCC and Common Voice zh-HK.
|
[
{
"version": "v1",
"created": "Fri, 7 Jan 2022 12:09:15 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jan 2022 11:16:53 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Yu",
"Tiezheng",
""
],
[
"Frieske",
"Rita",
""
],
[
"Xu",
"Peng",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"Yiu",
"Cheuk Tung Shadow",
""
],
[
"Lovenia",
"Holy",
""
],
[
"Dai",
"Wenliang",
""
],
[
"Barezi",
"Elham J.",
""
],
[
"Chen",
"Qifeng",
""
],
[
"Ma",
"Xiaojuan",
""
],
[
"Shi",
"Bertram E.",
""
],
[
"Fung",
"Pascale",
""
]
] |
new_dataset
| 0.990675 |
2201.03556
|
Stone Yun
|
Harry Nguyen, Stone Yun, Hisham Mohammad
|
Reproducing BowNet: Learning Representations by Predicting Bags of
Visual Words
|
This is a reproducibility project. Original work is by Gidaris et al.
published in CVPR 2020. Pytorch implementation is public on Github. v2
clarifies comments regarding communication with original authors
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This work aims to reproduce results from the CVPR 2020 paper by Gidaris et
al. Self-supervised learning (SSL) is used to learn feature representations of
an image using an unlabeled dataset. This work proposes to use bag-of-words
(BoW) deep feature descriptors as a self-supervised learning target to learn
robust, deep representations. BowNet is trained to reconstruct the histogram of
visual words (i.e., the deep BoW descriptor) of a reference image when presented
with a perturbed version of the image as input. Thus, this method aims to learn
perturbation-invariant and context-aware image features that can be useful for
few-shot tasks or supervised downstream tasks. In the paper, the authors
describe BowNet as a network consisting of a convolutional feature extractor
$\Phi(\cdot)$ and a Dense-softmax layer $\Omega(\cdot)$ trained to predict BoW
features from images. After BoW training, the features of $\Phi$ are used in
downstream tasks. For this challenge we were trying to build and train a
network that could reproduce the CIFAR-100 accuracy improvements reported in
the original paper. However, we were unsuccessful in reproducing an accuracy
improvement comparable to what the authors mentioned. This could be for a
variety of factors and we believe that time constraints were the primary
bottleneck.
|
[
{
"version": "v1",
"created": "Mon, 10 Jan 2022 07:00:22 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jan 2022 19:55:43 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Nguyen",
"Harry",
""
],
[
"Yun",
"Stone",
""
],
[
"Mohammad",
"Hisham",
""
]
] |
new_dataset
| 0.976779 |
2201.05601
|
V\'esteinn Sn{\ae}bjarnarson
|
V\'esteinn Sn{\ae}bjarnarson, Haukur Barri S\'imonarson, P\'etur Orri
Ragnarsson, Svanhv\'it Lilja Ing\'olfsd\'ottir, Haukur P\'all J\'onsson,
Vilhj\'almur {\TH}orsteinsson, Hafsteinn Einarsson
|
A Warm Start and a Clean Crawled Corpus -- A Recipe for Good Language
Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We train several language models for Icelandic, including IceBERT, that
achieve state-of-the-art performance in a variety of downstream tasks,
including part-of-speech tagging, named entity recognition, grammatical error
detection and constituency parsing. To train the models we introduce a new
corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection
of high quality texts found online by targeting the Icelandic top-level-domain
(TLD). Several other public data sources are also collected for a total of 16GB
of Icelandic text. To enhance the evaluation of model performance and to raise
the bar in baselines for Icelandic, we translate and adapt the WinoGrande
dataset for co-reference resolution. Through these efforts we demonstrate that
a properly cleaned crawled corpus is sufficient to achieve state-of-the-art
results in NLP applications for low to medium resource languages, by comparison
with models trained on a curated corpus. We further show that initializing
models using existing multilingual models can lead to state-of-the-art results
for some downstream tasks.
|
[
{
"version": "v1",
"created": "Fri, 14 Jan 2022 18:45:31 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Jan 2022 09:38:47 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Snæbjarnarson",
"Vésteinn",
""
],
[
"Símonarson",
"Haukur Barri",
""
],
[
"Ragnarsson",
"Pétur Orri",
""
],
[
"Ingólfsdóttir",
"Svanhvít Lilja",
""
],
[
"Jónsson",
"Haukur Páll",
""
],
[
"Þorsteinsson",
"Vilhjálmur",
""
],
[
"Einarsson",
"Hafsteinn",
""
]
] |
new_dataset
| 0.998582 |
2201.05727
|
Raja Karmakar
|
Raja Karmakar and Georges Kaddoum
|
IBAC: An Intelligent Dynamic Bandwidth Channel Access Avoiding Outside
Warning Range Problem
| null |
IEEE Transactions on Mobile Computing, 2022
|
10.1109/TMC.2022.3141010
| null |
cs.NI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
IEEE 802.11ax uses the concept of primary and secondary channels, leading to
the Dynamic Bandwidth Channel Access (DBCA) mechanism. By applying DBCA, a
wireless station can select a wider channel bandwidth, such as 40/80/160 MHz,
by applying the channel bonding feature. However, during channel bonding,
inappropriate bandwidth selection can cause collisions. Therefore, to avoid
collisions, a well-developed media access control (MAC) protocol is crucial to
effectively utilize the channel bonding mechanism. In this paper, we address a
collision scenario, called Outside Warning Range Problem (OWRP), that may occur
during DBCA when a wireless station interferes with another wireless station
after channel bonding is performed. Therefore, we propose a MAC layer
mechanism, Intelligent Bonding Avoiding Collision (IBAC), that adapts the
channel bonding level in DBCA in order to avoid the OWRP. We first design a
theoretical model based on Markov chains for DBCA while avoiding the OWRP.
Based on this model, we design a Thompson sampling based Bayesian approach to
select the best possible channel bonding level intelligently. We analyze the
performance of the IBAC through simulations where it is observed that,
compared to other competing mechanisms, the proposed approach can enhance the
network performance significantly while avoiding the OWRP.
|
[
{
"version": "v1",
"created": "Sat, 15 Jan 2022 01:18:12 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Karmakar",
"Raja",
""
],
[
"Kaddoum",
"Georges",
""
]
] |
new_dataset
| 0.991536 |
2201.05793
|
Udit Sharma Mr
|
Sumit Neelam, Udit Sharma, Hima Karanam, Shajith Ikbal, Pavan
Kapanipathi, Ibrahim Abdelaziz, Nandana Mihindukulasooriya, Young-Suk Lee,
Santosh Srivastava, Cezar Pendus, Saswati Dana, Dinesh Garg, Achille Fokoue,
G P Shrivatsa Bhargav, Dinesh Khandelwal, Srinivas Ravishankar, Sairam
Gurajada, Maria Chang, Rosario Uceda-Sosa, Salim Roukos, Alexander Gray,
Guilherme Lima, Ryan Riegel, Francois Luus, L Venkata Subramaniam
|
A Benchmark for Generalizable and Interpretable Temporal Question
Answering over Knowledge Bases
|
7 pages, 2 figures, 7 tables. arXiv admin note: substantial text
overlap with arXiv:2109.13430
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge Base Question Answering (KBQA) tasks that involve complex reasoning
are emerging as an important research direction. However, most existing KBQA
datasets focus primarily on generic multi-hop reasoning over explicit facts,
largely ignoring other reasoning types such as temporal, spatial, and taxonomic
reasoning. In this paper, we present a benchmark dataset for temporal
reasoning, TempQA-WD, to encourage research in extending the present approaches
to target a more challenging set of complex reasoning tasks. Specifically, our
benchmark is a temporal question answering dataset with the following
advantages: (a) it is based on Wikidata, which is the most frequently curated,
openly available knowledge base, (b) it includes intermediate sparql queries to
facilitate the evaluation of semantic parsing based approaches for KBQA, and
(c) it generalizes to multiple knowledge bases: Freebase and Wikidata. The
TempQA-WD dataset is available at https://github.com/IBM/tempqa-wd.
|
[
{
"version": "v1",
"created": "Sat, 15 Jan 2022 08:49:09 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Neelam",
"Sumit",
""
],
[
"Sharma",
"Udit",
""
],
[
"Karanam",
"Hima",
""
],
[
"Ikbal",
"Shajith",
""
],
[
"Kapanipathi",
"Pavan",
""
],
[
"Abdelaziz",
"Ibrahim",
""
],
[
"Mihindukulasooriya",
"Nandana",
""
],
[
"Lee",
"Young-Suk",
""
],
[
"Srivastava",
"Santosh",
""
],
[
"Pendus",
"Cezar",
""
],
[
"Dana",
"Saswati",
""
],
[
"Garg",
"Dinesh",
""
],
[
"Fokoue",
"Achille",
""
],
[
"Bhargav",
"G P Shrivatsa",
""
],
[
"Khandelwal",
"Dinesh",
""
],
[
"Ravishankar",
"Srinivas",
""
],
[
"Gurajada",
"Sairam",
""
],
[
"Chang",
"Maria",
""
],
[
"Uceda-Sosa",
"Rosario",
""
],
[
"Roukos",
"Salim",
""
],
[
"Gray",
"Alexander",
""
],
[
"Lima",
"Guilherme",
""
],
[
"Riegel",
"Ryan",
""
],
[
"Luus",
"Francois",
""
],
[
"Subramaniam",
"L Venkata",
""
]
] |
new_dataset
| 0.999794 |
2201.05958
|
Monu Verma
|
Monu Verma, Prafulla Saxena, Santosh Kumar Vipparthi, Girdhari Singh
|
Cross-Centroid Ripple Pattern for Facial Expression Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a new feature descriptor Cross-Centroid Ripple
Pattern (CRIP) for facial expression recognition. CRIP encodes the transitional
pattern of a facial expression by incorporating cross-centroid relationship
between two ripples located at radius r1 and r2 respectively. These ripples are
generated by dividing the local neighborhood region into subregions. Thus, CRIP
has the ability to preserve macro- and micro-structural variations in an extensive
region, which enables it to deal with side views and spontaneous expressions.
Furthermore, gradient information between cross centroid ripples provides
strength to capture prominent edge features in active patches: eyes, nose and
mouth, that define the disparities between different facial expressions. Cross
centroid information also provides robustness to irregular illumination.
Moreover, CRIP utilizes the averaging behavior of pixels at subregions that
yields robustness to deal with noisy conditions. The performance of proposed
descriptor is evaluated on seven comprehensive expression datasets consisting
of challenging conditions such as age, pose, ethnicity and illumination
variations. The experimental results show that our descriptor consistently
achieved a better accuracy rate than existing state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 03:32:58 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Verma",
"Monu",
""
],
[
"Saxena",
"Prafulla",
""
],
[
"Vipparthi",
"Santosh Kumar",
""
],
[
"Singh",
"Girdhari",
""
]
] |
new_dataset
| 0.990374 |
2201.05975
|
A S M Sharifuzzaman Sagar
|
Samsil Arefin Mozumder, A S M Sharifuzzaman Sagar
|
IRHA: An Intelligent RSSI based Home automation System
|
This article is submitted to the 2nd International Conference on
Ubiquitous Computing and Intelligent Information Systems for possible
presentation
| null | null | null |
cs.HC cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Human existence is getting more sophisticated and better in many areas due to
remarkable advances in the fields of automation. Automated systems are favored
over manual ones in the current environment. Home Automation is becoming more
popular in this scenario, as people are drawn to the concept of a home
environment that can automatically satisfy users' requirements. The key
challenges in an intelligent home are intelligent decision making,
location-aware service, and compatibility for all users of different ages and
physical conditions. Existing solutions address just one or two of these
challenges, but smart home automation that is robust, intelligent,
location-aware, and predictive is needed to satisfy the user's demand. This
paper presents a location-aware intelligent RSSI-based home automation system
(IRHA) that uses Wi-Fi signals to detect the user's location and control the
appliances automatically. The fingerprinting method is used to map the Wi-Fi
signals for different rooms, and the machine learning method, such as Decision
Tree, is used to classify the signals for different rooms. The machine learning
models are then implemented in the ESP32 microcontroller board to classify the
rooms based on the real-time Wi-Fi signal, and then the result is sent to the
main control board through the ESP32 MAC communication protocol to control the
appliances automatically. The proposed method has achieved 97% accuracy in
classifying the users' location.
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 05:41:43 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Mozumder",
"Samsil Arefin",
""
],
[
"Sagar",
"A S M Sharifuzzaman",
""
]
] |
new_dataset
| 0.999568 |
2201.06038
|
Chen-Hsiu Huang
|
Chen-Hsiu Huang and Ja-Ling Wu
|
Image data hiding with multi-scale autoencoder network
|
accepted by Media Watermarking, Security, and Forensics 2022
| null | null | null |
cs.CR cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Image steganography is the process of hiding information which can be text,
image, or video inside a cover image. The advantage of steganography over
cryptography is that the intended secret message does not attract attention and
is thus more suitable for secret communication in a highly-surveillant
environment such as civil disobedience movements. Internet memes in social
media and messaging apps have become a popular culture worldwide, so this folk
custom is a good application scenario for image steganography. We try to
explore and adopt the steganography techniques on the Internet memes in this
work. We implement and improve the HiDDeN model by changing the Conv-BN-ReLU
blocks convolution layer with a multiscale autoencoder network so that the
neural network learns to embed message bits in higher-level feature space.
Compared to methods that convolve feature filters on the raw-pixel domain, our
proposed MS-Hidden network learns to hide secrets in both low-level and
high-level image features. As a result, the proposed model significantly
reduces the bit-error rate to empirically 0% and the required network
parameters are much less than the HiDDeN model.
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 12:53:59 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Huang",
"Chen-Hsiu",
""
],
[
"Wu",
"Ja-Ling",
""
]
] |
new_dataset
| 0.988806 |
2201.06077
|
Yosef Moatti
|
Ofer Biran, Oshrit Feder, Yosef Moatti, Athanasios Kiourtis,
Dimosthenis Kyriazis, George Manias, Argyro Mavrogiorgou, Nikitas M. Sgouros,
Martim Taborda Barata, Isabella Oldani, Mar\'ia Angeles Sanguino, Pavlos
Kranas
|
PolicyCLOUD: A prototype of a Cloud Serverless Ecosystem for Policy
Analytics
|
18 pages + 5 reference pages
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present PolicyCLOUD, a prototype for an extensible, serverless cloud-based
system that supports evidence-based elaboration and analysis of policies.
PolicyCLOUD allows flexible exploitation and management of policy-relevant
dataflows by enabling the practitioner to register datasets and specify a
sequence of transformations and/or information extraction through registered
ingest functions. Once a possibly transformed dataset has been ingested,
additional insights can be retrieved by further applying registered analytic
functions. PolicyCLOUD was built as an extensible framework toward the creation
of an analytic ecosystem. As of now, we developed several essential ingest and
analytic functions that are built-in within the framework. They include data
cleaning, enhanced interoperability, and sentiment analysis generic functions.
PolicyCLOUD also has the ability to tap into the analytic capabilities of
external tools. We demonstrate this with a Social Analytics tool implemented in
conjunction with PolicyCLOUD and show how to benefit from policy modeling,
design and simulation capabilities. Furthermore, PolicyCLOUD has developed a
first-of-its-kind legal and ethical framework that covers the usage and
dissemination of datasets and analytic functions throughout its policy-relevant
dataflows. The article describes and evaluates the application of PolicyCLOUD
to four families of pilots that cover a wide range of policy scenarios.
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 15:50:16 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Biran",
"Ofer",
""
],
[
"Feder",
"Oshrit",
""
],
[
"Moatti",
"Yosef",
""
],
[
"Kiourtis",
"Athanasios",
""
],
[
"Kyriazis",
"Dimosthenis",
""
],
[
"Manias",
"George",
""
],
[
"Mavrogiorgou",
"Argyro",
""
],
[
"Sgouros",
"Nikitas M.",
""
],
[
"Barata",
"Martim Taborda",
""
],
[
"Oldani",
"Isabella",
""
],
[
"Sanguino",
"María Angeles",
""
],
[
"Kranas",
"Pavlos",
""
]
] |
new_dataset
| 0.999836 |
2201.06173
|
Colorado J Reed
|
Dhileeban Kumaresan, Richard Wang, Ernesto Martinez, Richard Cziva,
Alberto Todeschini, Colorado J Reed, Hossein Vahabi
|
SunCast: Solar Irradiance Nowcasting from Geosynchronous Satellite Data
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
When cloud layers cover photovoltaic (PV) panels, the amount of power the
panels produce fluctuates rapidly. Therefore, to maintain enough energy on a
power grid to match demand, utility companies rely on reserve power sources
that typically come from fossil fuels and therefore pollute the environment.
Accurate short-term PV power prediction enables operators to maximize the
amount of power obtained from PV panels and safely reduce the reserve energy
needed from fossil fuel sources. While several studies have developed machine
learning models to predict solar irradiance at specific PV generation
facilities, little work has been done to model short-term solar irradiance on a
global scale. Furthermore, models that have been developed are proprietary and
have architectures that are not publicly available or rely on computationally
demanding Numerical Weather Prediction (NWP) models. Here, we propose a
Convolutional Long Short-Term Memory Network model that treats solar nowcasting
as a next frame prediction problem, is more efficient than NWP models and has a
straightforward, reproducible architecture. Our models can predict solar
irradiance for all of North America for up to 3 hours in under 60 seconds on a
single machine without a GPU, and have an RMSE of 120 W/m2 when evaluated on 2
months of data.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 01:55:26 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Kumaresan",
"Dhileeban",
""
],
[
"Wang",
"Richard",
""
],
[
"Martinez",
"Ernesto",
""
],
[
"Cziva",
"Richard",
""
],
[
"Todeschini",
"Alberto",
""
],
[
"Reed",
"Colorado J",
""
],
[
"Vahabi",
"Hossein",
""
]
] |
new_dataset
| 0.966009 |
2201.06174
|
Zhiling Long
|
Muhammad Amir Shafiq, Zhiling Long, Haibin Di, Ghassan AlRegib
|
A novel attention model for salient structure detection in seismic
volumes
|
Published in Applied Computing and Intelligence, Nov. 2021
|
Applied Computing and Intelligence, vol. 1, no. 1, pp. 31-45, Nov.
2021
| null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
A new approach to seismic interpretation is proposed to leverage visual
perception and human visual system modeling. Specifically, a saliency detection
algorithm based on a novel attention model is proposed for identifying
subsurface structures within seismic data volumes. The algorithm employs 3D-FFT
and a multi-dimensional spectral projection, which decomposes local spectra
into three distinct components, each depicting variations along different
dimensions of the data. Subsequently, a novel directional center-surround
attention model is proposed to incorporate directional comparisons around each
voxel for saliency detection within each projected dimension. Next, the
resulting saliency maps along each dimension are combined adaptively to yield a
consolidated saliency map, which highlights various structures characterized by
subtle variations and relative motion with respect to their neighboring
sections. A priori information about the seismic data can be either embedded
into the proposed attention model in the directional comparisons, or
incorporated into the algorithm by specifying a template when combining
saliency maps adaptively. Experimental results on two real seismic datasets
from the North Sea, Netherlands and Great South Basin, New Zealand demonstrate
the effectiveness of the proposed algorithm for detecting salient seismic
structures of different natures and appearances in one shot, which differs
significantly from traditional seismic interpretation algorithms. The results
further demonstrate that the proposed method outperforms comparable
state-of-the-art saliency detection algorithms for natural images and videos,
which are inadequate for seismic imaging data.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 01:56:11 GMT"
}
] | 2022-01-19T00:00:00 |
[
[
"Shafiq",
"Muhammad Amir",
""
],
[
"Long",
"Zhiling",
""
],
[
"Di",
"Haibin",
""
],
[
"AlRegib",
"Ghassan",
""
]
] |
new_dataset
| 0.953892 |