id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2205.05140
|
Guanrui Li
|
Guanrui Li, Xinyang Liu, and Giuseppe Loianno
|
RotorTM: A Flexible Simulator for Aerial Transportation and Manipulation
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Low-cost autonomous Micro Aerial Vehicles (MAVs) have the potential to help
humans by simplifying and speeding up complex tasks that require their
interaction with the environment, such as construction, package delivery, and
search and rescue. These systems, composed of single or multiple vehicles, can
be endowed with passive connection mechanisms such as rigid links or cables to
perform transportation and manipulation tasks. However, they are inherently
complex since they are often underactuated and evolve in nonlinear manifold
configuration spaces. In addition, the complexity of systems with
cable-suspended loads is further increased by hybrid dynamics that depend on
the cables' varying tension conditions. This paper presents the first aerial
transportation and manipulation simulator incorporating different payloads and
passive connection mechanisms with full system dynamics, planning, and control
algorithms. Furthermore, it includes a novel general model accounting for the
transient hybrid dynamics for aerial systems with cable-suspended loads to
closely mimic real-world systems. The availability of a flexible and intuitive
interface further contributes to its usability and versatility. Comparisons
between simulations and real-world experiments with different vehicles'
configurations show the fidelity of the simulator results with respect to
real-world settings and its benefit for rapid prototyping and transitioning of
aerial transportation and manipulation systems to real-world deployment.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 19:46:14 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 20:47:32 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Li",
"Guanrui",
""
],
[
"Liu",
"Xinyang",
""
],
[
"Loianno",
"Giuseppe",
""
]
] |
new_dataset
| 0.999322 |
2206.00877
|
Haoran You
|
Haoran You, Cheng Wan, Yang Zhao, Zhongzhi Yu, Yonggan Fu, Jiayi Yuan,
Shang Wu, Shunyao Zhang, Yongan Zhang, Chaojian Li, Vivek Boominathan, Ashok
Veeraraghavan, Ziyun Li, Yingyan Lin
|
EyeCoD: Eye Tracking System Acceleration via FlatCam-based Algorithm &
Accelerator Co-Design
|
Accepted by ISCA 2022; Also selected as an IEEE Micro's Top Pick of
2023
| null |
10.1145/3470496.3527443
| null |
cs.HC cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Eye tracking has become an essential human-machine interaction modality for
providing immersive experience in numerous virtual and augmented reality
(VR/AR) applications desiring high throughput (e.g., 240 FPS), small-form, and
enhanced visual privacy. However, existing eye tracking systems are still
limited by their: (1) large form-factor largely due to the adopted bulky
lens-based cameras; and (2) high communication cost required between the camera
and backend processor, thus prohibiting their more extensive applications. To
this end, we propose a lensless FlatCam-based eye tracking algorithm and
accelerator co-design framework dubbed EyeCoD to enable eye tracking systems
with a much reduced form-factor and boosted system efficiency without
sacrificing the tracking accuracy, paving the way for next-generation eye
tracking solutions. On the system level, we advocate the use of lensless
FlatCams to facilitate the small form-factor need in mobile eye tracking
systems. On the algorithm level, EyeCoD integrates a predict-then-focus
pipeline that first predicts the region-of-interest (ROI) via segmentation and
then only focuses on the ROI parts to estimate gaze directions, greatly
reducing redundant computations and data movements. On the hardware level, we
further develop a dedicated accelerator that (1) integrates a novel workload
orchestration between the aforementioned segmentation and gaze estimation
models, (2) leverages intra-channel reuse opportunities for depth-wise layers,
and (3) utilizes input feature-wise partition to save activation memory size.
On-silicon measurement validates that our EyeCoD consistently reduces both the
communication and computation costs, leading to an overall system speedup of
10.95x, 3.21x, and 12.85x over CPUs, GPUs, and a prior-art eye tracking
processor called CIS-GEP, respectively, while maintaining the tracking
accuracy.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 05:35:43 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Feb 2023 18:17:21 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"You",
"Haoran",
""
],
[
"Wan",
"Cheng",
""
],
[
"Zhao",
"Yang",
""
],
[
"Yu",
"Zhongzhi",
""
],
[
"Fu",
"Yonggan",
""
],
[
"Yuan",
"Jiayi",
""
],
[
"Wu",
"Shang",
""
],
[
"Zhang",
"Shunyao",
""
],
[
"Zhang",
"Yongan",
""
],
[
"Li",
"Chaojian",
""
],
[
"Boominathan",
"Vivek",
""
],
[
"Veeraraghavan",
"Ashok",
""
],
[
"Li",
"Ziyun",
""
],
[
"Lin",
"Yingyan",
""
]
] |
new_dataset
| 0.997196 |
2206.02385
|
Indrajit Paul
|
Ashok Kumar Das, Indrajit Paul
|
On Hamiltonian-Connected and Mycielski graphs
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph $G$ is Hamiltonian-connected if there exists a Hamiltonian path
between any two vertices of $G$. It is known that if $G$ is 2-connected then
the graph $G^2$ is Hamiltonian-connected. In this paper we prove that the
square of every self-complementary graph of order greater than 4 is
Hamiltonian-connected. If $G$ is a $k$-critical graph, then we prove that the
Mycielski graph $\mu(G)$ is a $(k+1)$-critical graph. Jarnicki et al. [7] proved
that for every Hamiltonian graph of odd order, the Mycielski graph $\mu(G)$ of
$G$ is Hamiltonian-connected. They also posed the conjecture that if $G$ is
Hamiltonian-connected and not $K_2$, then $\mu(G)$ is Hamiltonian-connected. In
this paper we also prove this conjecture.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 06:31:40 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 16:42:03 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Das",
"Ashok Kumar",
""
],
[
"Paul",
"Indrajit",
""
]
] |
new_dataset
| 0.96674 |
2206.05480
|
Qiang Hu
|
Qiang Hu, Yuejun Guo, Xiaofei Xie, Maxime Cordy, Lei Ma, Mike
Papadakis, Yves Le Traon
|
CodeS: Towards Code Model Generalization Under Distribution Shift
|
accepted by ICSE'23-NIER
| null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distribution shift has been a longstanding challenge for the reliable
deployment of deep learning (DL) models due to unexpected accuracy degradation.
Although DL has become a driving force for large-scale source code
analysis in the big code era, limited progress has been made on distribution
shift analysis and benchmarking for source code tasks. To fill this gap, this
paper proposes CodeS, a distribution shift benchmark dataset for source code
learning. Specifically, CodeS supports two programming languages
(Java and Python) and five shift types (task, programmer, time-stamp, token,
and concrete syntax tree). Extensive experiments based on CodeS reveal that 1)
out-of-distribution detectors from other domains (e.g., computer vision) do not
generalize to source code, 2) all code classification models suffer from
distribution shifts, 3) representation-based shifts have a higher impact on the
model than others, and 4) pre-trained bimodal models are relatively more
resistant to distribution shifts.
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 09:32:29 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Feb 2023 09:43:17 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Hu",
"Qiang",
""
],
[
"Guo",
"Yuejun",
""
],
[
"Xie",
"Xiaofei",
""
],
[
"Cordy",
"Maxime",
""
],
[
"Ma",
"Lei",
""
],
[
"Papadakis",
"Mike",
""
],
[
"Traon",
"Yves Le",
""
]
] |
new_dataset
| 0.999341 |
2207.00129
|
Alex Tong Lin
|
Alex Tong Lin, Stanley J. Osher
|
Multi-Agent Shape Control with Optimal Transport
|
Fixed expressions for g_shape and L_shape in section 4.1, 4.2, 5.2,
and 5.3
| null | null | null |
cs.MA cs.CG cs.RO cs.SY eess.SY math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a method called MASCOT (Multi-Agent Shape Control with Optimal
Transport) to compute optimal control solutions of agents with
shape/formation/density constraints. For example, we might want to apply shape
constraints on the agents -- perhaps we desire the agents to hold a particular
shape along the path, or we want agents to spread out in order to minimize
collisions. We might also want a proportion of agents to move to one
destination, while the other agents move to another, and to do this in the
optimal way, i.e. the source-destination assignments should be optimal. In
order to achieve this, we utilize the Earth Mover's Distance from Optimal
Transport to distribute the agents into their proper positions so that certain
shapes can be satisfied. This cost is both introduced in the terminal cost and
in the running cost of the optimal control problem.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 23:49:51 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Feb 2023 02:34:08 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Lin",
"Alex Tong",
""
],
[
"Osher",
"Stanley J.",
""
]
] |
new_dataset
| 0.994549 |
2208.13836
|
Helmand Shayan
|
Helmand Shayan, Kai Krycki, Marco Doemeland, Markus Lange-Hegermann
|
PGNAA Spectral Classification of Metal with Density Estimations
|
8 pages, 12 figures, 1 table, published in the IEEE Transactions on
Nuclear Science (TNS)
| null |
10.1109/TNS.2023.3242626
| null |
cs.LG cond-mat.mtrl-sci
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For environmental, sustainable economic and political reasons, recycling
processes are becoming increasingly important, aiming at a much higher use of
secondary raw materials. Currently, for the copper and aluminium industries, no
method for the non-destructive online analysis of heterogeneous materials is
available. The Prompt Gamma Neutron Activation Analysis (PGNAA) has the
potential to overcome this challenge. A difficulty when using PGNAA for online
classification arises from the small amount of noisy data, due to short-term
measurements. In this case, classical evaluation methods using detailed peak by
peak analysis fail. Therefore, we propose to view spectral data as probability
distributions. Then, we can classify material using maximum log-likelihood with
respect to kernel density estimation and use discrete sampling to optimize
hyperparameters. For measurements of pure aluminium alloys, we achieve
near-perfect classification in under 0.25 seconds.
|
[
{
"version": "v1",
"created": "Mon, 29 Aug 2022 18:58:59 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Feb 2023 11:05:12 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Shayan",
"Helmand",
""
],
[
"Krycki",
"Kai",
""
],
[
"Doemeland",
"Marco",
""
],
[
"Lange-Hegermann",
"Markus",
""
]
] |
new_dataset
| 0.976608 |
2210.11918
|
Alice Ryhl
|
Jacob Holm (1), Eva Rotenberg (2), Alice Ryhl (2) ((1) University of
Copenhagen, (2) Technical University of Denmark)
|
Splay Top Trees
|
27 pages, 6 figures, published at SOSA'23, license information
updated
|
In Symposium on Simplicity in Algorithms (SOSA), pp. 305-331.
Society for Industrial and Applied Mathematics, 2023
|
10.1137/1.9781611977585.ch28
| null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The top tree data structure is an important and fundamental tool in dynamic
graph algorithms. Top trees have existed for decades, and today serve as an
ingredient in many state-of-the-art algorithms for dynamic graphs. In this
work, we give a new direct proof of the existence of top trees, facilitating
simpler and more direct implementations of top trees, based on ideas from splay
trees. This result hinges on new insights into the structure of top trees, and
in particular the structure of each root path in a top tree.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 12:41:17 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 09:08:42 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Holm",
"Jacob",
""
],
[
"Rotenberg",
"Eva",
""
],
[
"Ryhl",
"Alice",
""
]
] |
new_dataset
| 0.986994 |
2211.00973
|
Thomas Vigouroux
|
Thomas Vigouroux (VERIMAG - IMAG), Cristian Ene (VERIMAG - IMAG),
David Monniaux (VERIMAG - IMAG), Laurent Mounier (VERIMAG - IMAG),
Marie-Laure Potet (VERIMAG - IMAG)
|
BAXMC: a CEGAR approach to Max#SAT
|
FMCAD 2022, Oct 2022, Trente, Italy
| null |
10.34727/2022/isbn.978-3-85448-053-2
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Max#SAT is an important problem with multiple applications in security and
program synthesis that is proven hard to solve. It is defined as: given a
parameterized quantifier-free propositional formula, compute parameters such
that the number of models of the formula is maximal. As an extension, the
formula can include an existential prefix. We propose a CEGAR-based algorithm
and refinements thereof, based on either exact or approximate model counting,
and prove its correctness in both cases. Our experiments show that this
algorithm has much better effective complexity than the state of the art.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 09:26:05 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 09:49:23 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Vigouroux",
"Thomas",
"",
"VERIMAG - IMAG"
],
[
"Ene",
"Cristian",
"",
"VERIMAG - IMAG"
],
[
"Monniaux",
"David",
"",
"VERIMAG - IMAG"
],
[
"Mounier",
"Laurent",
"",
"VERIMAG - IMAG"
],
[
"Potet",
"Marie-Laure",
"",
"VERIMAG - IMAG"
]
] |
new_dataset
| 0.995352 |
2212.14092
|
Anton Stengel
|
Anton Stengel, Jaan Altosaar, Rebecca Dittrich, Noemie Elhadad
|
Assisted Living in the United States: an Open Dataset
|
4 pages, 2 figures
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
An assisted living facility (ALF) is a place where someone can live, have
access to social supports such as transportation, and receive assistance with
the activities of daily living such as toileting and dressing. Despite the
important role of ALFs, they are not required to be certified with Medicare and
there is no public national database of these facilities. We present the first
public dataset of ALFs in the United States, covering all 50 states and DC with
44,638 facilities and over 1.2 million beds. This dataset can help provide
answers to existing public health questions as well as help those in need find
a facility. The dataset was validated by replicating the results of a
nationwide study of ALFs that uses closed data [4], where the prevalence of
ALFs is assessed with respect to county-level socioeconomic variables related
to health disparity such as race, disability, and income. To showcase the value
of this dataset, we also propose a novel metric to assess access to
community-based care. We calculate the average distance an individual in need
must travel in order to reach an ALF. The dataset and all relevant code are
available at github.com/antonstengel/assisted-living-data.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 20:42:14 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Feb 2023 18:06:48 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Stengel",
"Anton",
""
],
[
"Altosaar",
"Jaan",
""
],
[
"Dittrich",
"Rebecca",
""
],
[
"Elhadad",
"Noemie",
""
]
] |
new_dataset
| 0.999874 |
2301.05453
|
Ana-Maria Bucur
|
Ana-Maria Bucur, Adrian Cosma, Paolo Rosso, Liviu P. Dinu
|
It's Just a Matter of Time: Detecting Depression with Time-Enriched
Multimodal Transformers
|
Accepted at ECIR 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depression detection from user-generated content on the internet has been a
long-lasting topic of interest in the research community, providing valuable
screening tools for psychologists. The ubiquitous use of social media platforms
lays out the perfect avenue for exploring mental health manifestations in posts
and interactions with other users. Current methods for depression detection
from social media mainly focus on text processing, and only a few also utilize
images posted by users. In this work, we propose a flexible time-enriched
multimodal transformer architecture for detecting depression from social media
posts, using pretrained models for extracting image and text embeddings. Our
model operates directly at the user-level, and we enrich it with the relative
time between posts by using time2vec positional embeddings. Moreover, we
propose another model variant, which can operate on randomly sampled and
unordered sets of posts to be more robust to dataset noise. We show that our
method, using EmoBERTa and CLIP embeddings, surpasses other methods on two
multimodal datasets, obtaining state-of-the-art results of 0.931 F1 score on a
popular multimodal Twitter dataset, and 0.902 F1 score on the only multimodal
Reddit dataset.
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 09:40:19 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 14:42:24 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Bucur",
"Ana-Maria",
""
],
[
"Cosma",
"Adrian",
""
],
[
"Rosso",
"Paolo",
""
],
[
"Dinu",
"Liviu P.",
""
]
] |
new_dataset
| 0.986344 |
2301.08668
|
Shaoquan Jiang
|
Shaoquan Jiang and Dima Alhadidi and Hamid Fazli Khojir
|
Key-and-Signature Compact Multi-Signatures for Blockchain: A Compiler
with Realizations
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-signature is a protocol where a set of signers jointly sign a
message so that the final signature is significantly shorter than concatenating
individual signatures together. Recently, it finds applications in blockchain,
where several users want to jointly authorize a payment through a
multi-signature. However, in this setting, there is no centralized authority
and it could suffer from a rogue key attack where the attacker can generate his
own keys arbitrarily. Further, to minimize the storage on blockchain, it is
desired that the aggregated public-key and the aggregated signature are both as
short as possible. In this paper, we find a compiler that converts a kind of
identification (ID) scheme (which we call a linear ID) to a multi-signature so
that both the aggregated public-key and the aggregated signature have a size
independent of the number of signers. Our compiler is provably secure. The
advantage of our results is that we reduce a multi-party problem to a weakly
secure two-party problem. We realize our compiler with two ID schemes. The
first is Schnorr ID. The second is a new lattice-based ID scheme, which via our
compiler gives the first regular lattice-based multi-signature scheme with
key-and-signature compact without a restart during signing process.
|
[
{
"version": "v1",
"created": "Fri, 20 Jan 2023 16:41:38 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Jiang",
"Shaoquan",
""
],
[
"Alhadidi",
"Dima",
""
],
[
"Khojir",
"Hamid Fazli",
""
]
] |
new_dataset
| 0.998935 |
2301.09279
|
Jean Lee
|
Jean Lee, Hoyoul Luis Youn, Josiah Poon, Soyeon Caren Han
|
StockEmotions: Discover Investor Emotions for Financial Sentiment
Analysis and Multivariate Time Series
|
Preprint - Accepted by the AAAI-23 Bridge Program (AI for Financial
Services)
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
There has been growing interest in applying NLP techniques in the financial
domain, however, resources are extremely limited. This paper introduces
StockEmotions, a new dataset for detecting emotions in the stock market that
consists of 10,000 English comments collected from StockTwits, a financial
social media platform. Inspired by behavioral finance, it proposes 12
fine-grained emotion classes that span the roller coaster of investor emotion.
Unlike existing financial sentiment datasets, StockEmotions presents granular
features such as investor sentiment classes, fine-grained emotions, emojis, and
time series data. To demonstrate the usability of the dataset, we perform a
dataset analysis and conduct experimental downstream tasks. For financial
sentiment/emotion classification tasks, DistilBERT outperforms other baselines,
and for multivariate time series forecasting, a Temporal Attention LSTM model
combining price index, text, and emotion features achieves better performance
than any single feature alone.
|
[
{
"version": "v1",
"created": "Mon, 23 Jan 2023 05:32:42 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 10:42:47 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Lee",
"Jean",
""
],
[
"Youn",
"Hoyoul Luis",
""
],
[
"Poon",
"Josiah",
""
],
[
"Han",
"Soyeon Caren",
""
]
] |
new_dataset
| 0.999486 |
2302.02008
|
Joe Toplyn
|
Joe Toplyn
|
Witscript: A System for Generating Improvised Jokes in a Conversation
|
10 pages. Published in the Proceedings of the 12th International
Conference on Computational Creativity (ICCC 2021), pages 22-31
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A chatbot is perceived as more humanlike and likeable if it includes some
jokes in its output. But most existing joke generators were not designed to be
integrated into chatbots. This paper presents Witscript, a novel joke
generation system that can improvise original, contextually relevant jokes,
such as humorous responses during a conversation. The system is based on joke
writing algorithms created by an expert comedy writer. Witscript employs
well-known tools of natural language processing to extract keywords from a
topic sentence and, using wordplay, to link those keywords and related words to
create a punch line. Then a pretrained neural network language model that has
been fine-tuned on a dataset of TV show monologue jokes is used to complete the
joke response by filling the gap between the topic sentence and the punch line.
A method of internal scoring filters out jokes that don't meet a preset
standard of quality. Human evaluators judged Witscript's responses to input
sentences to be jokes more than 40% of the time. This is evidence that
Witscript represents an important next step toward giving a chatbot a humanlike
sense of humor.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 21:30:34 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Toplyn",
"Joe",
""
]
] |
new_dataset
| 0.993816 |
2302.02037
|
Welington Santos
|
Welington Santos
|
Bounds on Binary Niederreiter-Rosenbloom-Tsfasman LCD codes
|
20 pages
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Linear complementary dual codes (LCD codes) are codes whose intersections
with their dual codes are trivial. These codes were introduced by Massey in
1992. LCD codes have wide applications in data storage, communication systems,
and cryptography. Niederreiter-Rosenbloom-Tsfasman LCD codes (NRT-LCD codes)
were introduced by Heqian, Guangku and Wei as a generalization of LCD codes for
the NRT metric space $M_{n,s}(\mathbb{F}_{q})$. In this paper, we study
LCD$[n\times s,k]$, the maximum minimum NRT distance among all binary $[n\times
s,k]$ NRT-LCD codes. We prove the existence (non-existence) of binary maximum
distance separable NRT-LCD codes in $M_{1,s}(\mathbb{F}_{2})$. We present a
linear programming bound for binary NRT-LCD codes in $M_{n,2}(\mathbb{F}_{2})$.
We also give two methods to construct binary NRT-LCD codes.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 00:18:38 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Santos",
"Welington",
""
]
] |
new_dataset
| 0.999706 |
2302.02041
|
Aivin Solatorio
|
Aivin V. Solatorio and Olivier Dupriez
|
REaLTabFormer: Generating Realistic Relational and Tabular Data using
Transformers
|
REaLTabFormer GitHub repository at
https://github.com/avsolatorio/REaLTabFormer
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Tabular data is a common form of organizing data. Multiple models are
available to generate synthetic tabular datasets where observations are
independent, but few have the ability to produce relational datasets. Modeling
relational data is challenging as it requires modeling both a "parent" table
and its relationships across tables. We introduce REaLTabFormer (Realistic
Relational and Tabular Transformer), a tabular and relational synthetic data
generation model. It first creates a parent table using an autoregressive GPT-2
model, then generates the relational dataset conditioned on the parent table
using a sequence-to-sequence (Seq2Seq) model. We implement target masking to
prevent data copying and propose the $Q_{\delta}$ statistic and statistical
bootstrapping to detect overfitting. Experiments using real-world datasets show
that REaLTabFormer captures the relational structure better than a baseline
model. REaLTabFormer also achieves state-of-the-art results on prediction
tasks, "out-of-the-box", for large non-relational datasets without needing
fine-tuning.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 00:32:50 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Solatorio",
"Aivin V.",
""
],
[
"Dupriez",
"Olivier",
""
]
] |
new_dataset
| 0.962351 |
2302.02050
|
Hope Schroeder
|
Hope Schroeder, Rob Tokanel, Kyle Qian, Khoi Le
|
Location-based AR for Social Justice: Case Studies, Lessons, and Open
Challenges
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Dear Visitor and Charleston Reconstructed were location-based augmented
reality (AR) experiences created between 2018 and 2020 dealing with two
controversial monument sites in the US. The projects were motivated by the
ability of AR to 1) link layers of context to physical sites in ways that are
otherwise difficult or impossible and 2) to visualize changes to physical
spaces, potentially inspiring changes to the spaces themselves. We discuss the
projects' motivations, designs, and deployments. We reflect on how physical
changes to the projects' respective sites radically altered their outcomes, and
we describe lessons for future work in location-based AR, particularly for
projects in contested spaces.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 01:21:12 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Schroeder",
"Hope",
""
],
[
"Tokanel",
"Rob",
""
],
[
"Qian",
"Kyle",
""
],
[
"Le",
"Khoi",
""
]
] |
new_dataset
| 0.998708 |
2302.02065
|
Rakesh Mundlamuri
|
Rakesh Mundlamuri, Rajeev Gangula, Christo Kurisummoottil Thomas,
Florian Kaltenberger and Walid Saad
|
Sensing aided Channel Estimation in Wideband Millimeter-Wave MIMO
Systems
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, the uplink channel estimation problem is considered for a
millimeter wave (mmWave) multi-input multi-output (MIMO) system. It is well
known that pilot overhead and computation complexity in estimating the channel
increases with the number of antennas and the bandwidth. To overcome this, the
proposed approach allows the channel estimation at the base station to be aided
by the sensing information. The sensing information contains an estimate of
scatterers locations in an environment. A simultaneous weighting orthogonal
matching pursuit (SWOMP) - sparse Bayesian learning (SBL) algorithm is proposed
that efficiently incorporates this sensing information in the communication
channel estimation procedure. The proposed framework can cope with scenarios
where a) scatterers present in the sensing information are not associated with
the communication channel and b) imperfections in the scatterers' location.
Simulation results show that the proposed sensing aided channel estimation
algorithm can obtain good wideband performance at the cost of only a fractional
pilot overhead. Finally, the Cramer-Rao Bound (CRB) for the angle estimation
and multipath channel gains in the SBL is derived, providing valuable insights
into the local identifiability of the proposed algorithms.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 02:26:22 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Mundlamuri",
"Rakesh",
""
],
[
"Gangula",
"Rajeev",
""
],
[
"Thomas",
"Christo Kurisummoottil",
""
],
[
"Kaltenberger",
"Florian",
""
],
[
"Saad",
"Walid",
""
]
] |
new_dataset
| 0.952635 |
2302.02112
|
Nitzan Farhi
|
Nitzan Farhi, Noam Koenigstein, Yuval Shavitt
|
Detecting Security Patches via Behavioral Data in Code Repositories
| null | null | null | null |
cs.CR cs.LG cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The absolute majority of software today is developed collaboratively using
collaborative version control tools such as Git. It is a common practice that
once a vulnerability is detected and fixed, the developers behind the software
issue a Common Vulnerabilities and Exposures or CVE record to alert the user
community of the security hazard and urge them to integrate the security patch.
However, some companies might not disclose their vulnerabilities and just
update their repository. As a result, users are unaware of the vulnerability
and may remain exposed. In this paper, we present a system to automatically
identify security patches using only the developer behavior in the Git
repository without analyzing the code itself or the remarks that accompanied
the fix (commit message). We show that we can reveal concealed security patches
with an accuracy of 88.3% and an F1 score of 89.8%. This is the first time that a
language-oblivious solution for this problem is presented.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 06:43:07 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Farhi",
"Nitzan",
""
],
[
"Koenigstein",
"Noam",
""
],
[
"Shavitt",
"Yuval",
""
]
] |
new_dataset
| 0.980577 |
2302.02126
|
Nicholas Johnson
|
Nicholas A. G Johnson, Theo Diamandis, Alex Evans, Henry de Valence,
Guillermo Angeris
|
Concave Pro-rata Games
| null | null | null | null |
cs.GT cs.CR cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a family of games called concave pro-rata games.
In such a game, players place their assets into a pool, and the pool pays out
some concave function of all assets placed into it. Each player then receives a
pro-rata share of the payout; i.e., each player receives an amount proportional
to how much they placed in the pool. Such games appear in a number of practical
scenarios, including as a simplified version of batched decentralized
exchanges, such as those proposed by Penumbra. We show that this game has a
number of interesting properties, including a symmetric pure equilibrium that
is the unique equilibrium of this game, and we prove that its price of anarchy
is $\Omega(n)$ in the number of players. We also show some numerical results in
the iterated setting which suggest that players quickly converge to an
equilibrium in iterated play.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 07:57:28 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Johnson",
"Nicholas A. G",
""
],
[
"Diamandis",
"Theo",
""
],
[
"Evans",
"Alex",
""
],
[
"de Valence",
"Henry",
""
],
[
"Angeris",
"Guillermo",
""
]
] |
new_dataset
| 0.999112 |
2302.02157
|
Haojie Ren
|
Haojie Ren, Sha Zhang, Sugang Li, Yao Li, Xinchen Li, Jianmin Ji, Yu
Zhang, Yanyong Zhang
|
TrajMatch: Towards Automatic Spatio-temporal Calibration for Roadside
LiDARs through Trajectory Matching
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, it has become popular to deploy sensors such as LiDARs on the
roadside to monitor the passing traffic and assist autonomous vehicle
perception. Unlike autonomous vehicle systems, roadside sensors are usually
affiliated with different subsystems and lack synchronization both in time and
space. Calibration is a key technology which allows the central server to fuse
the data generated by different location infrastructures, which can improve
the sensing range and detection robustness. Unfortunately, existing
calibration algorithms often assume that the LiDARs are significantly
overlapped or that the temporal calibration is already achieved. Since these
assumptions do not always hold in the real world, the calibration results from
the existing algorithms are often unsatisfactory and always need human
involvement, which brings high labor costs. In this paper, we propose TrajMatch
-- the first system that can automatically calibrate for roadside LiDARs in
both time and space. The main idea is to automatically calibrate the sensors
based on the result of the detection/tracking task instead of extracting
special features. More deeply, we propose a mechanism for evaluating
calibration parameters that is consistent with our algorithm, and we
demonstrate the effectiveness of this scheme experimentally, which can also be
used to guide parameter iterations for multiple calibration. Finally, to
evaluate the performance of TrajMatch, we collect two datasets: one simulated
dataset, LiDARnet-sim 1.0, and a real-world dataset. Experimental results show that
TrajMatch can achieve a spatial calibration error of less than 10cm and a
temporal calibration error of less than 1.5ms.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 12:27:01 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Ren",
"Haojie",
""
],
[
"Zhang",
"Sha",
""
],
[
"Li",
"Sugang",
""
],
[
"Li",
"Yao",
""
],
[
"Li",
"Xinchen",
""
],
[
"Ji",
"Jianmin",
""
],
[
"Zhang",
"Yu",
""
],
[
"Zhang",
"Yanyong",
""
]
] |
new_dataset
| 0.99704 |
2302.02205
|
Megan Martinez
|
Megan Martinez, Amanda Taylor Lipnicki
|
Automating Crochet Patterns for Surfaces of Revolution
| null | null | null | null |
cs.OH
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A surface of revolution is created by taking a curve in the $xy$-plane and
rotating it about some axis. We develop a program which automatically generates
crochet patterns for surfaces of revolution when they are obtained by rotating
about the $x$-axis. In order to accomplish this, we invoke the arclength
integral to determine where to take measurements for each row. In addition, a
distance measure is created to optimally space increases and decreases. The
result is a program that will take a function, $x$-bounds, crochet gauge, and a
scale in order to produce a polished crochet pattern.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 01:05:52 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Martinez",
"Megan",
""
],
[
"Lipnicki",
"Amanda Taylor",
""
]
] |
new_dataset
| 0.99359 |
2302.02224
|
Yinsong Wang
|
Yinsong Wang, Shahin Shahrampour
|
TAP: The Attention Patch for Cross-Modal Knowledge Transfer from
Unlabeled Data
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This work investigates the intersection of cross-modal learning and
semi-supervised learning, where we aim to improve the supervised learning
performance of the primary modality by borrowing missing information from an
unlabeled modality. We investigate this problem from a Nadaraya-Watson (NW)
kernel regression perspective and show that this formulation implicitly leads
to a kernelized cross attention module. To this end, we propose The Attention
Patch (TAP), a simple neural network plugin that allows data level knowledge
transfer from the unlabeled modality. We provide numerical simulations on three
real world datasets to examine each aspect of TAP and show that a TAP
integration in a neural network can improve generalization performance using
the unlabeled modality.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 19:39:20 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Wang",
"Yinsong",
""
],
[
"Shahrampour",
"Shahin",
""
]
] |
new_dataset
| 0.968353 |
2302.02259
|
David Paz
|
David Paz, Srinidhi Kalgundi Srinivas, Yunchao Yao, and Henrik I.
Christensen
|
CLiNet: Joint Detection of Road Network Centerlines in 2D and 3D
|
5 pages, 4 figures, 1 table. Under review at IEEE Intelligent
Vehicles Symposium 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This work introduces a new approach for joint detection of centerlines based
on image data by localizing the features jointly in 2D and 3D. In contrast to
existing work that focuses on detection of visual cues, we explore feature
extraction methods that are directly amenable to the urban driving task. To
develop and evaluate our approach, a large urban driving dataset dubbed AV
Breadcrumbs is automatically labeled by leveraging vector map representations
and projective geometry to annotate over 900,000 images. Our results
demonstrate potential for dynamic scene modeling across various urban driving
scenarios. Our model achieves an F1 score of 0.684 and an average normalized
depth error of 2.083. The code and data annotations are publicly available.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 23:30:04 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Paz",
"David",
""
],
[
"Srinivas",
"Srinidhi Kalgundi",
""
],
[
"Yao",
"Yunchao",
""
],
[
"Christensen",
"Henrik I.",
""
]
] |
new_dataset
| 0.999872 |
2302.02345
|
Botong Zhu
|
Botong Zhu and Huobin Tan
|
VuLASTE: Long Sequence Model with Abstract Syntax Tree Embedding for
vulnerability Detection
| null | null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper, we build a model named VuLASTE, which regards vulnerability
detection as a special text classification task. To solve the vocabulary
explosion problem, VuLASTE uses a byte-level BPE algorithm from natural
language processing. In VuLASTE, a new AST path embedding is added to represent
source code nesting information. We also use a combination of global and
dilated window attention from Longformer to extract long-sequence semantics from
source code. To solve the data imbalance problem, which is a common problem in
vulnerability detection datasets, focal loss is used as the loss function to
make the model focus on poorly classified cases during training. To test our model
performance on real-world source code, we build a cross-language and
multi-repository vulnerability dataset from Github Security Advisory Database.
On this dataset, VuLASTE achieved top 50, top 100, top 200, top 500 hits of 29,
51, 86, 228, which are higher than state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 09:17:02 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Zhu",
"Botong",
""
],
[
"Tan",
"Huobin",
""
]
] |
new_dataset
| 0.999302 |
2302.02381
|
Peter Schrammel
|
Romain Brenguier, Lucas Cordeiro, Daniel Kroening and Peter Schrammel
|
JBMC: A Bounded Model Checking Tool for Java Bytecode
|
Book chapter preview
| null | null | null |
cs.SE cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
JBMC is an open-source SAT- and SMT-based bounded model checking tool for
verifying Java bytecode. JBMC relies on an operational model of the Java
libraries, which conservatively approximates their semantics, to verify
assertion violations, array out-of-bounds accesses, unintended arithmetic overflows, and
other kinds of functional and runtime errors in Java bytecode. JBMC can be used
to either falsify properties or prove program correctness if an upper bound on
the depth of the state-space is known. Practical applications of JBMC include
but are not limited to bug finding, property checking, test input generation,
detection of security vulnerabilities, and program synthesis. Here we provide a
detailed description of JBMC's architecture and its functionalities, including
an in-depth discussion of its background theories and underlying technologies,
including a state-of-the-art string solver to ensure safety and security of
Java bytecode.
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 13:43:33 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Brenguier",
"Romain",
""
],
[
"Cordeiro",
"Lucas",
""
],
[
"Kroening",
"Daniel",
""
],
[
"Schrammel",
"Peter",
""
]
] |
new_dataset
| 0.996024 |
2302.02396
|
Zehua Ma
|
Zehua Ma, Xi Yang, Han Fang, Weiming Zhang, Nenghai Yu
|
OAcode: Overall Aesthetic 2D Barcode on Screen
|
Published in: IEEE Transactions on Multimedia
| null |
10.1109/TMM.2023.3239755
| null |
cs.MM cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, two-dimensional (2D) barcodes have been widely used in various
domains. And a series of aesthetic 2D barcode schemes have been proposed to
improve the visual quality and readability of 2D barcodes for better
integration with marketing materials. Yet we believe that the existing
aesthetic 2D barcode schemes are partially aesthetic because they only beautify
the data area but retain the position detection patterns with the black-and-white
appearance of traditional 2D barcode schemes. Thus, in this paper, we propose
the first overall aesthetic 2D barcode scheme, called OAcode, in which the
position detection pattern is canceled. Its detection process is based on the
pre-designed symmetrical data area of OAcode, whose symmetry could be used as
the calibration signal to restore the perspective transformation in the barcode
scanning process. Moreover, an enhanced demodulation method is proposed to
resist the lens distortion common in the camera-shooting process. The
experimental results illustrate that when 5$\times$5 cm OAcode is captured with
a resolution of 720$\times$1280 pixels, at the screen-camera distance of 10 cm
and an angle less than or equal to 25{\deg}, OAcode has a 100% detection rate and
99.5% demodulation accuracy. For 10$\times$10 cm OAcode, it could be extracted
by consumer-grade mobile phones at a distance of 90 cm with around 90%
accuracy.
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 14:42:20 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Ma",
"Zehua",
""
],
[
"Yang",
"Xi",
""
],
[
"Fang",
"Han",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Yu",
"Nenghai",
""
]
] |
new_dataset
| 0.999309 |
2302.02453
|
Dhruv Mullick
|
Akash Saravanan, Dhruv Mullick, Habibur Rahman, Nidhi Hegde
|
FineDeb: A Debiasing Framework for Language Models
|
Poster presentation at AAAI 2023: The Workshop on Artificial
Intelligence for Social Good 2023 (https://amulyayadav.github.io/AI4SG2023/)
| null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As language models are increasingly included in human-facing machine learning
tools, bias against demographic subgroups has gained attention. We propose
FineDeb, a two-phase debiasing framework for language models that starts with
contextual debiasing of embeddings learned by pretrained language models. The
model is then fine-tuned on a language modeling objective. Our results show
that FineDeb offers stronger debiasing in comparison to other methods which
often result in models as biased as the original language model. Our framework
is generalizable for demographics with multiple classes, and we demonstrate its
effectiveness through extensive experiments and comparisons with state of the
art techniques. We release our code and data on GitHub.
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 18:35:21 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Saravanan",
"Akash",
""
],
[
"Mullick",
"Dhruv",
""
],
[
"Rahman",
"Habibur",
""
],
[
"Hegde",
"Nidhi",
""
]
] |
new_dataset
| 0.973515 |
2302.02479
|
Vasu Goel
|
Vasu Goel, Dhruv Sahnan, Subhabrata Dutta, Anil Bandhakavi, Tanmoy
Chakraborty
|
Hatemongers ride on echo chambers to escalate hate speech diffusion
|
Accepted in PNAS Nexus
| null | null | null |
cs.SI cs.AI cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Recent years have witnessed a swelling rise of hateful and abusive content
over online social networks. While detection and moderation of hate speech have
been the early go-to countermeasures, the solution requires a deeper
exploration of the dynamics of hate generation and propagation. We analyze more
than 32 million posts from over 6.8 million users across three popular online
social networks to investigate the interrelations between hateful behavior,
information dissemination, and polarised organization mediated by echo
chambers. We find that hatemongers play a more crucial role in governing the
spread of information compared to singled-out hateful content. This observation
holds for both the growth of information cascades as well as the conglomeration
of hateful actors. Dissection of the core-wise distribution of these networks
points towards the fact that hateful users acquire a more well-connected
position in the social network and often flock together to build up information
cascades. We observe that this cohesion is far from mere organized behavior;
instead, in these networks, hatemongers dominate the echo chambers -- groups of
users who actively align themselves with specific ideological positions. The observed
dominance of hateful users to inflate information cascades is primarily via
user interactions amplified within these echo chambers. We conclude our study
with a cautionary note that popularity-based recommendation of content is
susceptible to exploitation by hatemongers, given their potential to escalate
content popularity via echo-chambered interactions.
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 20:30:48 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Goel",
"Vasu",
""
],
[
"Sahnan",
"Dhruv",
""
],
[
"Dutta",
"Subhabrata",
""
],
[
"Bandhakavi",
"Anil",
""
],
[
"Chakraborty",
"Tanmoy",
""
]
] |
new_dataset
| 0.996762 |
2302.02535
|
Dingxin Zhang
|
Dingxin Zhang, Jianhui Yu, Chaoyi Zhang, Weidong Cai
|
PaRot: Patch-Wise Rotation-Invariant Network via Feature Disentanglement
and Pose Restoration
|
Accepted by AAAI2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent interest in point cloud analysis has led to rapid progress in designing
deep learning methods for 3D models. However, state-of-the-art models are not
robust to rotations, which remain an unknown prior in real applications and
harm model performance. In this work, we introduce a novel Patch-wise
Rotation-invariant network (PaRot), which achieves rotation invariance via
feature disentanglement and produces consistent predictions for samples with
arbitrary rotations. Specifically, we design a siamese training module which
disentangles rotation invariance and equivariance from patches defined over
different scales, e.g., the local geometry and global shape, via a pair of
rotations. However, our disentangled invariant feature loses the intrinsic pose
information of each patch. To solve this problem, we propose a
rotation-invariant geometric relation to restore the relative pose with
equivariant information for patches defined over different scales. Utilising
the pose information, we propose a hierarchical module which implements
intra-scale and inter-scale feature aggregation for 3D shape learning.
Moreover, we introduce a pose-aware feature propagation process with the
rotation-invariant relative pose information embedded. Experiments show that
our disentanglement module extracts high-quality rotation-robust features and
the proposed lightweight model achieves competitive results in rotated 3D
object classification and part segmentation tasks. Our project page is released
at: https://patchrot.github.io/.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 02:13:51 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Zhang",
"Dingxin",
""
],
[
"Yu",
"Jianhui",
""
],
[
"Zhang",
"Chaoyi",
""
],
[
"Cai",
"Weidong",
""
]
] |
new_dataset
| 0.997073 |
2302.02567
|
Mahsa Derakhshan
|
Mahsa Derakhshan, Naveen Durvasula, Nika Haghtalab
|
Stochastic Minimum Vertex Cover in General Graphs: a $3/2$-Approximation
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our main result is designing an algorithm that returns a vertex cover of
$\mathcal{G}^\star$ with size at most $(3/2+\epsilon)$ times the expected size
of the minimum vertex cover, using only $O(n/\epsilon p)$ non-adaptive queries.
This improves over the best-known 2-approximation algorithm by Behnezhad, Blum,
and Derakhshan [SODA'22], who also show that $\Omega(n/p)$ queries are
necessary to achieve any constant approximation.
Our guarantees also extend to instances where edge realizations are not fully
independent. We complement this upper bound with a tight $3/2$-approximation
lower bound for stochastic graphs whose edge realizations exhibit mild
correlations.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 05:08:39 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Derakhshan",
"Mahsa",
""
],
[
"Durvasula",
"Naveen",
""
],
[
"Haghtalab",
"Nika",
""
]
] |
new_dataset
| 0.9582 |
2302.02661
|
Zhiren Huang
|
Zhiren Huang, Alonso Espinosa Mireles de Villafranca, Charalampos
Sipetas, Tri Quach
|
Crowd-sensing commuting patterns using multi-source wireless data: a
case of Helsinki commuter trains
|
10 pages, 12 figures, Submitted to IEEE MDM 2023 (The 24th IEEE
International Conference on Mobile Data Management) Research Track
| null | null | null |
cs.NI cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Understanding the mobility patterns of commuter train passengers is crucial
for developing efficient and sustainable transportation systems in urban areas.
Traditional technologies, such as Automated Passenger Counters (APC) can
measure the aggregated numbers of passengers entering and exiting trains;
however, they do not provide detailed information on passenger movements
beyond the train itself. To overcome this limitation, we investigate the
potential combination of traditional APC with an emerging source capable of
collecting detailed mobility demand data. This new data source derives from the
pilot project TravelSense, led by the Helsinki Regional Transport Authority
(HSL), which utilizes Bluetooth beacons and HSL's mobile phone ticket
application to track anonymous passenger multimodal trajectories from origin to
destination. By combining TravelSense data with APC we are able to better
understand the structure of train users' journeys by identifying the origin and
destination locations, modes of transport used to access commuter train
stations, and boarding and alighting numbers at each station. These insights
can assist public transport planning decisions and ultimately help to
contribute to the goal of sustainable cities and communities by promoting the
use of seamless and environmentally friendly transportation options.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 09:59:33 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Huang",
"Zhiren",
""
],
[
"de Villafranca",
"Alonso Espinosa Mireles",
""
],
[
"Sipetas",
"Charalampos",
""
],
[
"Quach",
"Tri",
""
]
] |
new_dataset
| 0.991967 |
2302.02740
|
Phillip Rieger
|
Hossein Fereidooni, Jan K\"onig, Phillip Rieger, Marco Chilese, Bora
G\"okbakan, Moritz Finke, Alexandra Dmitrienko, and Ahmad-Reza Sadeghi
|
AuthentiSense: A Scalable Behavioral Biometrics Authentication Scheme
using Few-Shot Learning for Mobile Platforms
|
16 pages, 7 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Mobile applications are widely used for online services sharing a large
amount of personal data online. One-time authentication techniques such as
passwords and physiological biometrics (e.g., fingerprint, face, and iris) have
their own advantages but also disadvantages since they can be stolen or
emulated, and do not prevent access to the underlying device, once it is
unlocked. To address these challenges, complementary authentication systems
based on behavioural biometrics have emerged. The goal is to continuously
profile users based on their interaction with the mobile device. However,
existing behavioural authentication schemes either (i) are not user-agnostic,
meaning that they cannot dynamically handle changes in the user base without
model re-training, or (ii) do not scale well to authenticate millions of users.
In this paper, we present AuthentiSense, a user-agnostic, scalable, and
efficient behavioural biometrics authentication system that enables continuous
authentication and utilizes only motion patterns (i.e., accelerometer,
gyroscope and magnetometer data) while users interact with mobile apps. Our
approach requires neither manually engineered features nor a significant amount
of data for model training. We leverage a few-shot learning technique, called
Siamese network, to authenticate users at a large scale. We perform a
systematic measurement study and report the impact of the parameters such as
interaction time needed for authentication and n-shot verification (comparison
with enrollment samples) at the recognition stage. Remarkably, AuthentiSense
achieves high accuracy of up to 97% in terms of F1-score even when evaluated in
a few-shot fashion that requires only a few behaviour samples per user (3
shots). Our approach accurately authenticates users only after 1 second of user
interaction. For AuthentiSense, we report a FAR and FRR of 0.023 and 0.057,
respectively.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 12:36:34 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Fereidooni",
"Hossein",
""
],
[
"König",
"Jan",
""
],
[
"Rieger",
"Phillip",
""
],
[
"Chilese",
"Marco",
""
],
[
"Gökbakan",
"Bora",
""
],
[
"Finke",
"Moritz",
""
],
[
"Dmitrienko",
"Alexandra",
""
],
[
"Sadeghi",
"Ahmad-Reza",
""
]
] |
new_dataset
| 0.971858 |
2302.02754
|
Shuche Wang
|
Shuche Wang, Van Khu Vu, Vincent Y. F. Tan
|
Codes for Correcting $t$ Limited-Magnitude Sticky Deletions
|
arXiv admin note: substantial text overlap with arXiv:2301.11680
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Codes for correcting sticky insertions/deletions and limited-magnitude errors
have attracted significant attention due to their applications of flash
memories, racetrack memories, and DNA data storage systems. In this paper, we
first consider the error type of $t$-sticky deletions with
$\ell$-limited-magnitude and propose a non-systematic code for correcting this
type of error with redundancy $2t(1-1/p)\cdot\log(n+1)+O(1)$, where $p$ is the
smallest prime larger than $\ell+1$. Next, we present a systematic code
construction with an efficient encoding and decoding algorithm with redundancy
$\frac{\lceil2t(1-1/p)\rceil\cdot\lceil\log p\rceil}{\log p}
\log(n+1)+O(\log\log n)$, where $p$ is the smallest prime larger than $\ell+1$.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 13:01:51 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Wang",
"Shuche",
""
],
[
"Vu",
"Van Khu",
""
],
[
"Tan",
"Vincent Y. F.",
""
]
] |
new_dataset
| 0.991639 |
2302.02755
|
Pierre-Etienne Martin Dr.
|
Leonard Hacker and Finn Bartels and Pierre-Etienne Martin
|
Fine-Grained Action Detection with RGB and Pose Information using Two
Stream Convolutional Networks
|
Working note paper of the sport task of MediaEval 2022 in Bergen,
Norway, 12-13 Jan 2023
| null | null | null |
cs.CV cs.AI cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
As participants of the MediaEval 2022 Sport Task, we propose a two-stream
network approach for the classification and detection of table tennis strokes.
Each stream is a succession of 3D Convolutional Neural Network (CNN) blocks
using attention mechanisms. Each stream processes different 4D inputs. Our
method utilizes raw RGB data and pose information computed from the MMPose toolbox.
The pose information is treated as an image by applying the pose either on a
black background or on the original RGB frame it has been computed from. Best
performance is obtained by feeding raw RGB data to one stream, Pose + RGB
(PRGB) information to the other stream and applying late fusion on the
features. The approaches were evaluated on the provided TTStroke-21 data sets.
We can report an improvement in stroke classification, reaching 87.3%
accuracy, while the detection does not outperform the baseline but still
reaches an IoU of 0.349 and mAP of 0.110.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 13:05:55 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Hacker",
"Leonard",
""
],
[
"Bartels",
"Finn",
""
],
[
"Martin",
"Pierre-Etienne",
""
]
] |
new_dataset
| 0.963372 |
2302.02795
|
Juan M. Tiz\'on
|
Juan M. Tiz\'on, Nicol\'as Becerra, Daniel Bercebal, Claus P.Grabowsky
|
Trimpack: Unstructured Triangular Mesh Generation Library
| null | null | null | null |
cs.MS cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Trimpack is a library of routines written in Fortran that allows the creation of
unstructured triangular meshes in any domain with a user-defined size
distribution. The user must write a program that uses the elements of the
library as if it were a mathematical tool. First, the domain must be defined,
using point-defined boundaries, which the user provides. The library internally
uses splines to mesh the boundaries with the node distribution function
provided by the user. Several meshing methods are available, from simple
Delaunay mesh creation from a point cloud, through an incremental Steiner-type
algorithm that also generates Delaunay meshes, to an efficient advancing-front
type algorithm. This report carries out a bibliographic review of the state of
the art in mesh generation corresponding to the period in which Trimpack was
written for the first time, which is a very fruitful period in the development
of this type of algorithms. Next, MeshGen is described in detail, which is a
program written in C++ that exploits the possibilities of the Trimpack library
for the generation of unstructured triangular meshes and that has a powerful
graphical interface. Finally, it also explains in detail the content of the
Trimpack library that is available under the GNU Public License for anyone who
wants to use or improve it.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 15:49:38 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Tizón",
"Juan M.",
""
],
[
"Becerra",
"Nicolás",
""
],
[
"Bercebal",
"Daniel",
""
],
[
"Grabowsky",
"Claus P.",
""
]
] |
new_dataset
| 0.99179 |
2302.02803
|
Sebastian Cmentowski
|
Sebastian Cmentowski, Sukran Karaosmanoglu, Lennart Nacke, Frank
Steinicke, Jens Kr\"uger
|
Never Skip Leg Day Again: Training the Lower Body with Vertical Jumps in
a Virtual Reality Exergame
| null | null |
10.1145/3544548.3580973
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Virtual Reality (VR) exergames can increase engagement in and motivation for
physical activities. Most VR exergames focus on the upper body because many VR
setups only track the users' heads and hands. To become a serious alternative
to existing exercise programs, VR exergames must provide a balanced workout and
train the lower limbs, too. To address this issue, we built a VR exergame
focused on vertical jump training to explore full-body exercise applications.
To create a safe and effective training, nine domain experts participated in
our prototype design. Our mixed-methods study confirms that the jump-centered
exercises provided a worthy challenge and positive player experience,
indicating long-term retention. Based on our findings, we present five design
implications to guide future work: avoid an unintended forward drift, consider
technical constraints, address safety concerns in full-body VR exergames,
incorporate rhythmic elements with fluent movement patterns, adapt difficulty
to players' fitness progression status.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 14:25:44 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Cmentowski",
"Sebastian",
""
],
[
"Karaosmanoglu",
"Sukran",
""
],
[
"Nacke",
"Lennart",
""
],
[
"Steinicke",
"Frank",
""
],
[
"Krüger",
"Jens",
""
]
] |
new_dataset
| 0.995568 |
2302.02896
|
Marcellin Atemkeng
|
Marcellin Atemkeng, Victor Osanyindoro, Rockefeller Rockefeller,
Sisipho Hamlomo, Jecinta Mulongo, Theophilus Ansah-Narh, Franklin Tchakounte,
Arnaud Nguembang Fadja
|
Label Assisted Autoencoder for Anomaly Detection in Power Generation
Plants
|
Submitted to Journal
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
One of the critical factors that drive the economic development of a country
and guarantee the sustainability of its industries is the constant availability
of electricity. This is usually provided by the national electric grid.
However, in developing countries where companies, including telecommunication
industries, are emerging on a constant basis, these still experience an
unstable electricity supply. Therefore, they have to rely on generators to
guarantee their full functionality. Those generators depend on fuel to function
and the rate of consumption usually gets high if not monitored properly.
Monitoring operation is usually carried out by a (non-expert) human. In some
cases, this could be a tedious process, as some companies have reported an
exaggerated high consumption rate. This work proposes a label assisted
autoencoder for anomaly detection in the fuel consumed by power generating
plants. In addition to the autoencoder model, we added a labelling assistance
module that checks whether an observation is labelled; if so, the label is used to check
the veracity of the corresponding anomaly classification given a threshold. A
consensus is then reached on whether training should stop or whether the
threshold should be updated or the training should continue with the search for
hyper-parameters. Results show that the proposed model is highly efficient for
detecting anomalies, with a detection accuracy of $97.20\%$, which outperforms
the existing model's $96.1\%$ accuracy trained on the same dataset. In addition,
the proposed model is able to classify the anomalies according to their degree
of severity.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 16:03:38 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Atemkeng",
"Marcellin",
""
],
[
"Osanyindoro",
"Victor",
""
],
[
"Rockefeller",
"Rockefeller",
""
],
[
"Hamlomo",
"Sisipho",
""
],
[
"Mulongo",
"Jecinta",
""
],
[
"Ansah-Narh",
"Theophilus",
""
],
[
"Tchakounte",
"Franklin",
""
],
[
"Fadja",
"Arnaud Nguembang",
""
]
] |
new_dataset
| 0.995112 |
2302.02898
|
Linh K\"astner
|
Linh K\"astner, Reyk Carstens, Christopher Liebig, Volodymyr
Shcherbyna, Lena Nahrworld, Subhin Lee, Jens Lambrecht
|
Arena-Web -- A Web-based Development and Benchmarking Platform for
Autonomous Navigation Approaches
|
10 pages, 9 figures
| null | null | null |
cs.RO cs.AI cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, mobile robot navigation approaches have become increasingly
important due to various application areas ranging from healthcare to warehouse
logistics. In particular, Deep Reinforcement Learning approaches have gained
popularity for robot navigation but are not easily accessible to non-experts
and complex to develop. In recent years, efforts have been made to make these
sophisticated approaches accessible to a wider audience. In this paper, we
present Arena-Web, a web-based development and evaluation suite for developing,
training, and testing DRL-based navigation planners for various robotic
platforms and scenarios. The interface is designed to be intuitive and engaging
to appeal to non-experts and make the technology accessible to a wider
audience. With Arena-Web and its interface, training and developing Deep
Reinforcement Learning agents is simplified and made easy without a single line
of code. The web-app is free to use and openly available under the link stated
in the supplementary materials.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 16:06:07 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Kästner",
"Linh",
""
],
[
"Carstens",
"Reyk",
""
],
[
"Liebig",
"Christopher",
""
],
[
"Shcherbyna",
"Volodymyr",
""
],
[
"Nahrworld",
"Lena",
""
],
[
"Lee",
"Subhin",
""
],
[
"Lambrecht",
"Jens",
""
]
] |
new_dataset
| 0.998114 |
2302.02908
|
Can Xu
|
Ziyang luo, Pu Zhao, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Jing
Ma, Qingwen lin, Daxin Jiang
|
LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale
Image-Text Retrieval
| null | null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-text retrieval (ITR) is a task to retrieve the relevant images/texts,
given the query from another modality. The conventional dense retrieval
paradigm relies on encoding images and texts into dense representations using
dual-stream encoders, however, it faces challenges with low retrieval speed in
large-scale retrieval scenarios. In this work, we propose the lexicon-weighting
paradigm, where sparse representations in vocabulary space are learned for
images and texts to take advantage of the bag-of-words models and efficient
inverted indexes, resulting in significantly reduced retrieval latency. A
crucial gap arises from the continuous nature of image data, and the
requirement for a sparse vocabulary space representation. To bridge this gap,
we introduce a novel pre-training framework, Lexicon-Bottlenecked
Language-Image Pre-Training (LexLIP), that learns importance-aware lexicon
representations. This framework features lexicon-bottlenecked modules between
the dual-stream encoders and weakened text decoders, allowing for constructing
continuous bag-of-words bottlenecks to learn lexicon-importance distributions.
Upon pre-training with same-scale data, our LexLIP achieves state-of-the-art
performance on two benchmark ITR datasets, MSCOCO and Flickr30k. Furthermore,
in large-scale retrieval scenarios, LexLIP outperforms CLIP with a 5.5 ~ 221.3X
faster retrieval speed and 13.2 ~ 48.8X less index storage memory.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 16:24:41 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"luo",
"Ziyang",
""
],
[
"Zhao",
"Pu",
""
],
[
"Xu",
"Can",
""
],
[
"Geng",
"Xiubo",
""
],
[
"Shen",
"Tao",
""
],
[
"Tao",
"Chongyang",
""
],
[
"Ma",
"Jing",
""
],
[
"lin",
"Qingwen",
""
],
[
"Jiang",
"Daxin",
""
]
] |
new_dataset
| 0.991439 |
2302.02972
|
Edgar Jatho
|
Edgar W. Jatho and Logan O. Mailloux and Eugene D. Williams and
Patrick McClure and Joshua A. Kroll
|
Concrete Safety for ML Problems: System Safety for ML Development and
Assessment
|
arXiv admin note: text overlap with arXiv:2211.04602
| null | null | null |
cs.LG cs.CY cs.SE cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Many stakeholders struggle to make reliances on ML-driven systems due to the
risk of harm these systems may cause. Concerns of trustworthiness, unintended
social harms, and unacceptable social and ethical violations undermine the
promise of ML advancements. Moreover, such risks in complex ML-driven systems
present a special challenge as they are often difficult to foresee, arising
over periods of time, across populations, and at scale. These risks often arise
not from poor ML development decisions or low performance directly but rather
emerge through the interactions amongst ML development choices, the context of
model use, environmental factors, and the effects of a model on its target.
Systems safety engineering is an established discipline with a proven track
record of identifying and managing risks even in high-complexity sociotechnical
systems. In this work, we apply a state-of-the-art systems safety approach to
concrete applications of ML with notable social and ethical risks to
demonstrate a systematic means for meeting the assurance requirements needed to
argue for safe and trustworthy ML in sociotechnical systems.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 18:02:07 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Jatho",
"Edgar W.",
""
],
[
"Mailloux",
"Logan O.",
""
],
[
"Williams",
"Eugene D.",
""
],
[
"McClure",
"Patrick",
""
],
[
"Kroll",
"Joshua A.",
""
]
] |
new_dataset
| 0.970936 |
2302.02978
|
Jiaying Lu
|
Jiaying Lu, Yongchen Qian, Shifan Zhao, Yuanzhe Xi, Carl Yang
|
MuG: A Multimodal Classification Benchmark on Game Data with Tabular,
Textual, and Visual Fields
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multimodal learning has attracted the interest of the machine learning
community due to its great potential in a variety of applications. To help
achieve this potential, we propose a multimodal benchmark MuG with eight
datasets allowing researchers to test the multimodal perceptron capabilities of
their models. These datasets are collected from four different genres of games
that cover tabular, textual, and visual modalities. We conduct multi-aspect
data analysis to provide insights into the benchmark, including label balance
ratios, percentages of missing features, distributions of data within each
modality, and the correlations between labels and input modalities. We further
present experimental results obtained by several state-of-the-art unimodal
classifiers and multimodal classifiers, which demonstrate the challenging and
multimodal-dependent properties of the benchmark. MuG is released at
https://github.com/lujiaying/MUG-Bench with the data, documents, tutorials, and
implemented baselines. Extensions of MuG are welcomed to facilitate the
progress of research in multimodal learning problems.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 18:09:06 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Lu",
"Jiaying",
""
],
[
"Qian",
"Yongchen",
""
],
[
"Zhao",
"Shifan",
""
],
[
"Xi",
"Yuanzhe",
""
],
[
"Yang",
"Carl",
""
]
] |
new_dataset
| 0.999422 |
2001.02155
|
Christian Retor\'e
|
Christian Retor\'e
|
Pomset logic: the other approach to non commutativity in logic
| null | null |
10.1007/978-3-030-66545-6_9
| null |
cs.LO math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Thirty years ago, I introduced a non-commutative variant of classical linear
logic, called "pomset logic", issued from a particular categorical
interpretation of linear logic known as coherence spaces. In addition to the
usual commutative multiplicative connectives of linear logic, pomset logic
includes a non-commutative connective, "$\triangleleft$" called "before",
associative and self-dual: $(A\triangleleft B)^\perp=A^\perp \triangleleft
B^\perp$. The conclusion of a pomset logic proof is a Partially Ordered
MultiSET of formulas. Pomset logic enjoys a proof net calculus with
cut-elimination, denotational semantics, and faithfully embeds sequent
calculus.
The study of pomset logic has reopened with recent results on handsome proof
nets, on its sequent calculus, or on its following calculi like deep inference
by Guglielmi and Strassburger. Therefore, it is high time we published a
thorough presentation of pomset logic, including published and unpublished
material, old and new results.
Pomset logic (1993) is a non-commutative variant of linear logic (1987) as
for Lambek calculus (1958!) and it can also be used as a grammatical formalism.
Those two calculi are quite different, but we hope that the algebraic
presentation we give here, with formulas as algebraic terms and with a semantic
notion of proof (net) correctness, better matches Lambek's view of what a logic
should be.
|
[
{
"version": "v1",
"created": "Tue, 7 Jan 2020 16:28:34 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 14:18:38 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Retoré",
"Christian",
""
]
] |
new_dataset
| 0.994389 |
2111.01723
|
Yixuan Xu
|
Enxu Li, Ryan Razani, Yixuan Xu, Bingbing Liu
|
CPSeg: Cluster-free Panoptic Segmentation of 3D LiDAR Point Clouds
|
Accepted at ICRA 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A fast and accurate panoptic segmentation system for LiDAR point clouds is
crucial for autonomous driving vehicles to understand the surrounding objects
and scenes. Existing approaches usually rely on proposals or clustering to
segment foreground instances. As a result, they struggle to achieve real-time
performance. In this paper, we propose a novel real-time end-to-end panoptic
segmentation network for LiDAR point clouds, called CPSeg. In particular, CPSeg
comprises a shared encoder, a dual-decoder, and a cluster-free instance
segmentation head, which is able to dynamically pillarize foreground points
according to the learned embedding. Then, it acquires instance labels by
finding connected pillars with a pairwise embedding comparison. Thus, the
conventional proposal-based or clustering-based instance segmentation is
transformed into a binary segmentation problem on the pairwise embedding
comparison matrix. To help the network regress instance embedding, a fast and
deterministic depth completion algorithm is proposed to calculate the surface
normal of each point cloud in real-time. The proposed method is benchmarked on
two large-scale autonomous driving datasets: SemanticKITTI and nuScenes.
Notably, extensive experimental results show that CPSeg achieves
state-of-the-art results among real-time approaches on both datasets.
|
[
{
"version": "v1",
"created": "Tue, 2 Nov 2021 16:44:06 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 00:16:53 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Li",
"Enxu",
""
],
[
"Razani",
"Ryan",
""
],
[
"Xu",
"Yixuan",
""
],
[
"Liu",
"Bingbing",
""
]
] |
new_dataset
| 0.998312 |
2208.11881
|
Tetsuya Iizuka
|
Xiangyu Chen, Zolboo Byambadorj, Takeaki Yajima, Hisashi Inoue, Isao
H. Inoue and Tetsuya Iizuka
|
CMOS-based area-and-power-efficient neuron and synapse circuits for
time-domain analog spiking neural networks
| null | null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Conventional neural structures tend to communicate through analog quantities
such as currents or voltages, however, as CMOS devices shrink and supply
voltages decrease, the dynamic range of voltage/current-domain analog circuits
becomes narrower, the available margin becomes smaller, and noise immunity
decreases. More than that, the use of operational amplifiers (op-amps) and
continuous-time or clocked comparators in conventional designs leads to high
energy consumption and large chip area, which would be detrimental to building
spiking neural networks. In view of this, we propose a neural structure for
generating and transmitting time-domain signals, including a neuron module, a
synapse module, and two weight modules. The proposed neural structure is driven
by a leakage current of MOS transistors and uses an inverter-based comparator
to realize a firing function, thus providing higher energy and area efficiency
compared to conventional designs. The proposed neural structure is fabricated
using TSMC 65 nm CMOS technology. The proposed neuron and synapse occupy the
area of 127 {\mu}m^{ 2} and 231 {\mu}m^{ 2}, respectively, while achieving
millisecond time constants. Actual chip measurements show that the proposed
structure implements the temporal signal communication function with
millisecond time constants, which is a critical step toward hardware reservoir
computing for human-computer interaction. Simulation results of the
spiking-neural network for reservoir computing with the behavioral model of the
proposed neural structure demonstrate the learning function.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 05:55:18 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Feb 2023 22:45:37 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Chen",
"Xiangyu",
""
],
[
"Byambadorj",
"Zolboo",
""
],
[
"Yajima",
"Takeaki",
""
],
[
"Inoue",
"Hisashi",
""
],
[
"Inoue",
"Isao H.",
""
],
[
"Iizuka",
"Tetsuya",
""
]
] |
new_dataset
| 0.999712 |
2209.13507
|
Ching-Yu Tseng
|
Ching-Yu Tseng, Yi-Rong Chen, Hsin-Ying Lee, Tsung-Han Wu, Wen-Chin
Chen, Winston H. Hsu
|
CrossDTR: Cross-view and Depth-guided Transformers for 3D Object
Detection
|
Accepted by IEEE International Conference on Robotics and Automation
(ICRA) 2023. The code is available at https://github.com/sty61010/CrossDTR
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
To achieve accurate 3D object detection at a low cost for autonomous driving,
many multi-camera methods have been proposed and solved the occlusion problem
of monocular approaches. However, due to the lack of accurate estimated depth,
existing multi-camera methods often generate multiple bounding boxes along a
ray of depth direction for difficult small objects such as pedestrians,
resulting in an extremely low recall. Furthermore, directly applying depth
prediction modules to existing multi-camera methods, generally composed of
large network architectures, cannot meet the real-time requirements of
self-driving applications. To address these issues, we propose Cross-view and
Depth-guided Transformers for 3D Object Detection, CrossDTR. First, our
lightweight depth predictor is designed to produce precise object-wise sparse
depth maps and low-dimensional depth embeddings without extra depth datasets
during supervision. Second, a cross-view depth-guided transformer is developed
to fuse the depth embeddings as well as image features from cameras of
different views and generate 3D bounding boxes. Extensive experiments
demonstrated that our method hugely surpassed existing multi-camera methods by
10 percent in pedestrian detection and about 3 percent in overall mAP and NDS
metrics. Also, computational analyses showed that our method is 5 times faster
than prior approaches. Our codes will be made publicly available at
https://github.com/sty61010/CrossDTR.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 16:23:12 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 05:39:53 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Feb 2023 10:39:37 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Tseng",
"Ching-Yu",
""
],
[
"Chen",
"Yi-Rong",
""
],
[
"Lee",
"Hsin-Ying",
""
],
[
"Wu",
"Tsung-Han",
""
],
[
"Chen",
"Wen-Chin",
""
],
[
"Hsu",
"Winston H.",
""
]
] |
new_dataset
| 0.952158 |
2210.01108
|
Jiaxin Pei
|
Jiaxin Pei, V\'itor Silva, Maarten Bos, Yozon Liu, Leonardo Neves,
David Jurgens and Francesco Barbieri
|
SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis
|
SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis
| null | null | null |
cs.CL cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose MINT, a new Multilingual INTimacy analysis dataset covering 13,372
tweets in 10 languages including English, French, Spanish, Italian, Portuguese,
Korean, Dutch, Chinese, Hindi, and Arabic. We benchmarked a list of popular
multilingual pre-trained language models. The dataset is released along with
the SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis
(https://sites.google.com/umich.edu/semeval-2023-tweet-intimacy).
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 17:48:32 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 17:32:18 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Pei",
"Jiaxin",
""
],
[
"Silva",
"Vítor",
""
],
[
"Bos",
"Maarten",
""
],
[
"Liu",
"Yozon",
""
],
[
"Neves",
"Leonardo",
""
],
[
"Jurgens",
"David",
""
],
[
"Barbieri",
"Francesco",
""
]
] |
new_dataset
| 0.999853 |
2210.07471
|
Neeraj Varshney
|
Himanshu Gupta, Neeraj Varshney, Swaroop Mishra, Kuntal Kumar Pal,
Saurabh Arjun Sawant, Kevin Scaria, Siddharth Goyal, Chitta Baral
|
"John is 50 years old, can his son be 65?" Evaluating NLP Models'
Understanding of Feasibility
|
EACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In current NLP research, large-scale language models and their abilities are
widely being discussed. Some recent works have also found notable failures of
these models. Often these failure examples involve complex reasoning abilities.
This work focuses on a simple commonsense ability, reasoning about when an
action (or its effect) is feasible. To this end, we introduce FeasibilityQA, a
question-answering dataset involving binary classification (BCQ) and
multi-choice multi-correct questions (MCQ) that test understanding of
feasibility. We show that even state-of-the-art models such as GPT-3, GPT-2,
and T5 struggle to answer the feasibility questions correctly. Specifically, on
MCQ and BCQ questions, GPT-3 achieves an accuracy of just (19%, 62%) and (25%,
64%) in zero-shot and few-shot settings, respectively. We also evaluate models
by providing relevant knowledge statements required to answer the question. We
find that the additional knowledge leads to a 7% gain in performance, but the
overall performance still remains low. These results make one wonder how much
commonsense knowledge about action feasibility is encoded in state-of-the-art
models and how well they can reason about it.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 02:46:06 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 02:12:37 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Gupta",
"Himanshu",
""
],
[
"Varshney",
"Neeraj",
""
],
[
"Mishra",
"Swaroop",
""
],
[
"Pal",
"Kuntal Kumar",
""
],
[
"Sawant",
"Saurabh Arjun",
""
],
[
"Scaria",
"Kevin",
""
],
[
"Goyal",
"Siddharth",
""
],
[
"Baral",
"Chitta",
""
]
] |
new_dataset
| 0.997226 |
2211.10099
|
Sandro Stucki
|
Sebastian Hunt, David Sands, Sandro Stucki
|
Reconciling Shannon and Scott with a Lattice of Computable Information
|
30 pages; presented at the 50th ACM SIGPLAN Symposium on Principles
of Programming Languages (POPL 2023), 15-21 January 2023
|
Proc. ACM Program. Lang. 7(POPL), 2023, 68:1-68:30
|
10.1145/3571740
| null |
cs.PL cs.CR cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a reconciliation of two different theories of
information. The first, originally proposed in a lesser-known work by Claude
Shannon, describes how the information content of channels can be described
qualitatively, but still abstractly, in terms of information elements, i.e.
equivalence relations over the data source domain. Shannon showed that these
elements form a complete lattice, with the order expressing when one element is
more informative than another. In the context of security and information flow
this structure has been independently rediscovered several times, and used as a
foundation for reasoning about information flow.
The second theory of information is Dana Scott's domain theory, a
mathematical framework for giving meaning to programs as continuous functions
over a particular topology. Scott's partial ordering also represents when one
element is more informative than another, but in the sense of computational
progress, i.e. when one element is a more defined or evolved version of
another.
To give a satisfactory account of information flow in programs it is
necessary to consider both theories together, to understand what information is
conveyed by a program viewed as a channel (\`a la Shannon) but also by the
definedness of its encoding (\`a la Scott). We combine these theories by
defining the Lattice of Computable Information (LoCI), a lattice of preorders
rather than equivalence relations. LoCI retains the rich lattice structure of
Shannon's theory, filters out elements that do not make computational sense,
and refines the remaining information elements to reflect how Scott's ordering
captures the way that information is presented.
We show how the new theory facilitates the first general definition of
termination-insensitive information flow properties, a weakened form of
information flow property commonly targeted by static program analyses.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 08:55:06 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 09:05:17 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Hunt",
"Sebastian",
""
],
[
"Sands",
"David",
""
],
[
"Stucki",
"Sandro",
""
]
] |
new_dataset
| 0.962816 |
2301.00255
|
Parakh Manoj Gupta
|
Parakh M. Gupta, Eric Pairet, Tiago Nascimento, Martin Saska
|
Landing a UAV in Harsh Winds and Turbulent Open Waters
| null |
in IEEE Robotics and Automation Letters, vol. 8, no. 2, pp.
744-751, Feb. 2023
|
10.1109/LRA.2022.3231831
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Landing an unmanned aerial vehicle (UAV) on top of an
unmanned surface vehicle (USV) in harsh open waters is a challenging problem,
owing to forces that can damage the UAV due to a severe roll and/or pitch angle
of the USV during touchdown. To tackle this, we propose a novel model
predictive control (MPC) approach enabling a UAV to land autonomously on a USV
in these harsh conditions. The MPC employs a novel objective function and an
online decomposition of the oscillatory motion of the vessel to predict,
attempt, and accomplish the landing during near-zero tilt of the landing
platform. The nonlinear prediction of the motion of the vessel is performed
using visual data from an onboard camera. Therefore, the system does not
require any communication with the USV or a control station. The proposed
method was analyzed in numerous robotics simulations in harsh and extreme
conditions and further validated in various real-world scenarios.
|
[
{
"version": "v1",
"created": "Sat, 31 Dec 2022 17:23:15 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 15:45:21 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Gupta",
"Parakh M.",
""
],
[
"Pairet",
"Eric",
""
],
[
"Nascimento",
"Tiago",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.998623 |
2301.12490
|
Daniel Bennett
|
Dan Bennett, Oussama Metatla, Anne Roudaut and Elisa Mekler
|
How does HCI Understand Human Autonomy and Agency?
|
13 Pages, 1 figure, 1 tables, to be published in the proceedings of
ACM SIGCHI 2023
| null |
10.1145/3544548.3580651
| null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Human agency and autonomy have always been fundamental concepts in HCI. New
developments, including ubiquitous AI and the growing integration of
technologies into our lives, make these issues ever pressing, as technologies
increase their ability to influence our behaviours and values. However, in HCI
understandings of autonomy and agency remain ambiguous. Both concepts are used
to describe a wide range of phenomena pertaining to sense-of-control, material
independence, and identity. It is unclear to what degree these understandings
are compatible, and how they support the development of research programs and
practical interventions. We address this by reviewing 30 years of HCI research
on autonomy and agency to identify current understandings, open issues, and
future directions. From this analysis, we identify ethical issues, and outline
key themes to guide future work. We also articulate avenues for advancing
clarity and specificity around these concepts, and for coordinating integrative
work across different HCI communities.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 16:54:03 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Feb 2023 21:39:33 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Bennett",
"Dan",
""
],
[
"Metatla",
"Oussama",
""
],
[
"Roudaut",
"Anne",
""
],
[
"Mekler",
"Elisa",
""
]
] |
new_dataset
| 0.996031 |
2301.12831
|
Chenqi Kong
|
Chenqi Kong, Kexin Zheng, Yibing Liu, Shiqi Wang, Anderson Rocha,
Haoliang Li
|
M3FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing
System
| null | null | null | null |
cs.MM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face presentation attacks (FPA), also known as face spoofing, have brought
increasing concerns to the public through various malicious applications, such
as financial fraud and privacy leakage. Therefore, safeguarding face
recognition systems against FPA is of utmost importance. Although existing
learning-based face anti-spoofing (FAS) models can achieve outstanding
detection performance, they lack generalization capability and suffer
significant performance drops in unforeseen environments. Many methodologies
seek to use auxiliary modality data (e.g., depth and infrared maps) during the
presentation attack detection (PAD) to address this limitation. However, these
methods can be limited since (1) they require specific sensors such as depth
and infrared cameras for data capture, which are rarely available on commodity
mobile devices, and (2) they cannot work properly in practical scenarios when
either modality is missing or of poor quality. In this paper, we devise an
accurate and robust MultiModal Mobile Face Anti-Spoofing system named M3FAS to
overcome the issues above. The innovation of this work mainly lies in the
following aspects: (1) To achieve robust PAD, our system combines visual and
auditory modalities using three pervasively available sensors: camera, speaker,
and microphone; (2) We design a novel two-branch neural network with three
hierarchical feature aggregation modules to perform cross-modal feature fusion;
(3) We propose a multi-head training strategy. The model outputs three
predictions from the vision, acoustic, and fusion heads, enabling a more
flexible PAD. Extensive experiments have demonstrated the accuracy, robustness,
and flexibility of M3FAS under various challenging experimental settings.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 12:37:04 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 07:02:23 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Kong",
"Chenqi",
""
],
[
"Zheng",
"Kexin",
""
],
[
"Liu",
"Yibing",
""
],
[
"Wang",
"Shiqi",
""
],
[
"Rocha",
"Anderson",
""
],
[
"Li",
"Haoliang",
""
]
] |
new_dataset
| 0.9562 |
2302.00762
|
Chaitanya Malaviya
|
Yuewei Yuan, Chaitanya Malaviya, Mark Yatskar
|
AmbiCoref: Evaluating Human and Model Sensitivity to Ambiguous
Coreference
|
EACL 2023 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Given a sentence "Abby told Brittney that she upset Courtney", one would
struggle to understand who "she" refers to, and ask for clarification. However,
if the word "upset" were replaced with "hugged", "she" unambiguously refers to
Abby. We study if modern coreference resolution models are sensitive to such
pronominal ambiguity. To this end, we construct AmbiCoref, a diagnostic corpus
of minimal sentence pairs with ambiguous and unambiguous referents. Our
examples generalize psycholinguistic studies of human perception of ambiguity
around particular arrangements of verbs and their arguments. Analysis shows
that (1) humans are less sure of referents in ambiguous AmbiCoref examples than
unambiguous ones, and (2) most coreference models show little difference in
output between ambiguous and unambiguous pairs. We release AmbiCoref as a
diagnostic corpus for testing whether models treat ambiguity similarly to
humans.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 21:25:34 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 16:07:53 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Yuan",
"Yuewei",
""
],
[
"Malaviya",
"Chaitanya",
""
],
[
"Yatskar",
"Mark",
""
]
] |
new_dataset
| 0.996697 |
2302.01401
|
Andrick Adhikari
|
Philipp Markert, Andrick Adhikari and Sanchari Das
|
A Transcontinental Analysis of Account Remediation Protocols of Popular
Websites
|
Conference: Symposium on Usable Security and Privacy (USEC) 2023,
San Diego, California
| null |
10.14722/usec.2023.235078
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Websites are used regularly in our day-today lives, yet research has shown
that it is challenging for many users to use them securely, e.g., most
prominently due to weak passwords through which they access their accounts. At
the same time, many services employ low-security measures, making their users
even more prone to account compromises with little to no means of remediating
compromised accounts. Additionally, remediating compromised accounts requires
users to complete a series of steps, ideally all provided and explained by the
service. However, for U.S.-based websites, prior research has shown that the
advice provided by many services is often incomplete. To further understand the
underlying issue and its implications, this paper reports on a study that
analyzes the account remediation procedure covering the 50 most popular
websites in 30 countries, 6 each in Africa, the Americas, Asia, Europe, and
Oceania. We conducted the first transcontinental analysis on the account
remediation protocols of popular websites. The analysis is based on 5 steps
websites need to provide advice for: compromise discovery, account recovery,
access limitation, service restoration, and prevention. We find that the lack
of advice prior work identified for websites from the U.S. also holds across
continents, with the presence ranging from 37% to 77% on average. Additionally,
we identified considerable differences when comparing countries and continents,
with countries in Africa and Oceania significantly more affected by the lack of
advice. To address this, we suggest providing publicly available and
easy-to-follow remediation advice for users and guidance for website providers
so they can provide all the necessary information.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 20:26:08 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Markert",
"Philipp",
""
],
[
"Adhikari",
"Andrick",
""
],
[
"Das",
"Sanchari",
""
]
] |
new_dataset
| 0.97161 |
2302.01424
|
Mohammadali Ghafarian Dr
|
Mohammadali Ghafarian, Bijan Shirinzadeh, Ammar Al-Jodah
|
Monolithic Six-DOF Parallel Positioning System for High-precision and
Large-range Applications
|
This work has been submitted elsewhere for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
A compact large-range six-degrees-of-freedom (six-DOF) parallel positioning
system with high resolution, high resonant frequency, and high repeatability
was proposed. It mainly consists of three identical kinematic sections. Each
kinematic section consists of two identical displacement amplification and
guiding mechanisms, which are finally connected to a limb. Each limb was
designed with a universal joint at each end and connected to a moving stage. A
computational model of the positioner was built in the ANSYS software package,
hence, the input stiffness, output compliance, range, and modal analysis of the
system were found. Furthermore, a monolithic prototype made of Acrylonitrile
Butadiene Styrene (ABS) was successfully manufactured by the 3D-printing
process. It was actuated and sensed by piezoelectric actuators (PEAs) and
capacitive displacement sensors, respectively. Finally, the performances of
this proposed positioner were experimentally investigated. The positioning
resolution was achieved as 10.5nm {\times} 10.5nm {\times} 15nm {\times}
1.8{\mu}rad {\times} 1.3{\mu}rad {\times} 0.5{\mu}rad. The experimental results
validate the behavior and capabilities of the proposed positioning system, and
also verify the nanometer-scale spatial positioning accuracy within the overall
stroke range. Practical applications of the proposed system can be expanded to
pick-and-place manipulation, vibration-canceling in
microsurgery/micro-assembly, and collaborative manipulators systems.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 21:30:23 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Ghafarian",
"Mohammadali",
""
],
[
"Shirinzadeh",
"Bijan",
""
],
[
"Al-Jodah",
"Ammar",
""
]
] |
new_dataset
| 0.995893 |
2302.01439
|
Ashwin Rao
|
Rong-Ching Chang, Ashwin Rao, Qiankun Zhong, Magdalena Wojcieszak and
Kristina Lerman
|
#RoeOverturned: Twitter Dataset on the Abortion Rights Controversy
|
9 pages, 5 figures
| null | null | null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
On June 24, 2022, the United States Supreme Court overturned landmark rulings
made in its 1973 verdict in Roe v. Wade. The justices by way of a majority vote
in Dobbs v. Jackson Women's Health Organization, decided that abortion wasn't a
constitutional right and returned the issue of abortion to the elected
representatives. This decision triggered multiple protests and debates across
the US, especially in the context of the midterm elections in November 2022.
Given that many citizens use social media platforms to express their views and
mobilize for collective action, and given that online debate has tangible
effects on public opinion, political participation, news media coverage, and
political decision-making, it is crucial to understand online discussions
surrounding this topic. Toward this end, we present the first large-scale
Twitter dataset collected on the abortion rights debate in the United States.
We present a set of 74M tweets systematically collected over the course of one
year from January 1, 2022 to January 6, 2023.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 22:02:19 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Chang",
"Rong-Ching",
""
],
[
"Rao",
"Ashwin",
""
],
[
"Zhong",
"Qiankun",
""
],
[
"Wojcieszak",
"Magdalena",
""
],
[
"Lerman",
"Kristina",
""
]
] |
new_dataset
| 0.99985 |
2302.01455
|
Wyatt Felt
|
Wyatt Felt
|
Reconsidering Fascicles in Soft Pneumatic Actuator Packs
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper discusses and contests the claims of ``Soft Pneumatic Actuator
Fascicles for High Force and Reliability,'' a research article which was
originally published in the March 2017 issue of the journal Soft Robotics. The
original paper claims that the summed forces of multiple thin-walled extending
McKibben muscles are greater than a volumetrically equivalent actuator of the
same length at the same pressure. The original paper also claims that the
purported benefit becomes more pronounced as the number of smaller actuators is
increased. Using reasonable assumptions, the analysis of this paper shows that
the claims of the original paper violate the law of conservation of energy.
This paper also identifies errors in the original methodology that may have led
to the erroneous conclusions of the original paper. The goal of this paper is
to correct the record and to provide a more accurate framework for considering
fascicles used in soft pneumatic actuator packs.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 22:56:25 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Felt",
"Wyatt",
""
]
] |
new_dataset
| 0.990167 |
2302.01584
|
Adrien Benamira
|
Adrien Benamira, Tristan Gu\'erand, Thomas Peyrin, Sayandeep Saha
|
TT-TFHE: a Torus Fully Homomorphic Encryption-Friendly Neural Network
Architecture
| null | null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents TT-TFHE, a deep neural network Fully Homomorphic
Encryption (FHE) framework that effectively scales Torus FHE (TFHE) usage to
tabular and image datasets using a recent family of convolutional neural
networks called Truth-Table Neural Networks (TTnet). The proposed framework
provides an easy-to-implement, automated TTnet-based design toolbox with an
underlying (python-based) open-source Concrete implementation (CPU-based and
implementing lookup tables) for inference over encrypted data. Experimental
evaluation shows that TT-TFHE greatly outperforms in terms of time and accuracy
all Homomorphic Encryption (HE) set-ups on three tabular datasets, all other
features being equal. On image datasets such as MNIST and CIFAR-10, we show
that TT-TFHE consistently and largely outperforms other TFHE set-ups and is
competitive against other HE variants such as BFV or CKKS (while maintaining
the same level of 128-bit encryption security guarantees). In addition, our
solutions present a very low memory footprint (down to dozens of MBs for
MNIST), which is in sharp contrast with other HE set-ups that typically require
tens to hundreds of GBs of memory per user (in addition to their communication
overheads). This is the first work presenting a fully practical solution of
private inference (i.e. a few seconds for inference time and a few dozen MBs
of memory) on both tabular datasets and MNIST, that can easily scale to
multiple threads and users on server side.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 07:32:23 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Benamira",
"Adrien",
""
],
[
"Guérand",
"Tristan",
""
],
[
"Peyrin",
"Thomas",
""
],
[
"Saha",
"Sayandeep",
""
]
] |
new_dataset
| 0.99813 |
2302.01585
|
Daniel Gritzner
|
Daniel Gritzner, J\"orn Ostermann
|
SegForestNet: Spatial-Partitioning-Based Aerial Image Segmentation
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Aerial image analysis, specifically the semantic segmentation thereof, is the
basis for applications such as automatically creating and updating maps,
tracking city growth, or tracking deforestation. In true orthophotos, which are
often used in these applications, many objects and regions can be approximated
well by polygons. However, this fact is rarely exploited by state-of-the-art
semantic segmentation models. Instead, most models allow unnecessary degrees of
freedom in their predictions by allowing arbitrary region shapes. We therefore
present a refinement of our deep learning model which predicts binary space
partitioning trees, an efficient polygon representation. The refinements
include a new feature decoder architecture and a new differentiable BSP tree
renderer which both avoid vanishing gradients. Additionally, we designed a
novel loss function specifically designed to improve the spatial partitioning
defined by the predicted trees. Furthermore, our expanded model can predict
multiple trees at once and thus can predict class-specific segmentations.
Taking all modifications together, our model achieves state-of-the-art
performance while using up to 60% fewer model parameters when using a small
backbone model or up to 20% fewer model parameters when using a large backbone
model.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 07:35:53 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Gritzner",
"Daniel",
""
],
[
"Ostermann",
"Jörn",
""
]
] |
new_dataset
| 0.960351 |
2302.01665
|
Junyi Ma
|
Junyi Ma, Guangming Xiong, Jingyi Xu, Xieyuanli Chen
|
CVTNet: A Cross-View Transformer Network for Place Recognition Using
LiDAR Data
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR-based place recognition (LPR) is one of the most crucial components of
autonomous vehicles to identify previously visited places in GPS-denied
environments. Most existing LPR methods use mundane representations of the
input point cloud without considering different views, which may not fully
exploit the information from LiDAR sensors. In this paper, we propose a
cross-view transformer-based network, dubbed CVTNet, to fuse the range image
views (RIVs) and bird's eye views (BEVs) generated from the LiDAR data. It
extracts correlations within the views themselves using intra-transformers and
between the two different views using inter-transformers. Based on that, our
proposed CVTNet generates a yaw-angle-invariant global descriptor for each
laser scan end-to-end online and retrieves previously seen places by descriptor
matching between the current query scan and the pre-built database. We evaluate
our approach on three datasets collected with different sensor setups and
environmental conditions. The experimental results show that our method
outperforms the state-of-the-art LPR methods with strong robustness to
viewpoint changes and long-time spans. Furthermore, our approach has a good
real-time performance that can run faster than the typical LiDAR frame rate.
The implementation of our method is released as open source at:
https://github.com/BIT-MJY/CVTNet.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 11:37:20 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Ma",
"Junyi",
""
],
[
"Xiong",
"Guangming",
""
],
[
"Xu",
"Jingyi",
""
],
[
"Chen",
"Xieyuanli",
""
]
] |
new_dataset
| 0.999184 |
2302.01751
|
Aleksei Gavron
|
Aleksei Gavron, Konstantin Belev, Konstantin Kudelkin, Vladislav
Shikhov, Andrey Akushevich, Alexey Fartukov, Vladimir Paramonov, Dmitry
Syromolotov, Artem Makoyan
|
Motion ID: Human Authentication Approach
| null | null | null | null |
cs.CR cs.AI cs.HC cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce a novel approach to user authentication called Motion ID. The
method employs motion sensing provided by inertial measurement units (IMUs),
using it to verify the person's identity via short time series of IMU data
captured by the mobile device. The paper presents two labeled datasets with
unlock events: the first features IMU measurements, provided by six users who
continuously collected data on six different smartphones for a period of 12
weeks. The second one contains 50 hours of IMU data for one specific motion
pattern, provided by 101 users. Moreover, we present a two-stage user
authentication process that employs motion pattern identification and user
verification and is based on data preprocessing and machine learning. The
Results section details the assessment of the method proposed, comparing it
with existing biometric authentication methods and the Android biometric
standard. The method has demonstrated high accuracy, indicating that it could
be successfully used in combination with existing methods. Furthermore, the
method exhibits significant promise as a standalone solution. We provide the
datasets to the scholarly community and share our project code.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 09:08:33 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Gavron",
"Aleksei",
""
],
[
"Belev",
"Konstantin",
""
],
[
"Kudelkin",
"Konstantin",
""
],
[
"Shikhov",
"Vladislav",
""
],
[
"Akushevich",
"Andrey",
""
],
[
"Fartukov",
"Alexey",
""
],
[
"Paramonov",
"Vladimir",
""
],
[
"Syromolotov",
"Dmitry",
""
],
[
"Makoyan",
"Artem",
""
]
] |
new_dataset
| 0.999714 |
2302.01764
|
Joshua Ellul
|
Joshua Ellul, Gordon J Pace
|
Active External Calls for Blockchain and Distributed Ledger
Technologies: Debunking cited inability of Blockchain and DLT to make
external calls
| null | null | null |
2023-01
|
cs.CR cs.DC cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchain and other distributed ledger technologies have enabled
peer-to-peer networks to maintain ledgers with an immutable history and
guaranteed computation, all carried out without the need of trusted parties. In
practice, few applications of blockchain are closed i.e. do not interact with
the world outside the blockchain, and various techniques have been proposed and
used to handle such interaction. One problem is that it is widely accepted
that, due to the decentralised nature of blockchain networks and constraints to
ensure trust and determinism, such communication can only flow into the
blockchain, and that blockchain systems cannot initiate and execute calls to
external systems or services. In this paper we debunk this misconception by
building on our previously presented solution to demonstrate
that such calls can be directly initiated from the blockchain itself in a
feasible and efficient manner.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 23:55:26 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Ellul",
"Joshua",
""
],
[
"Pace",
"Gordon J",
""
]
] |
new_dataset
| 0.97912 |
2302.01811
|
Arunkumar Bhattar
|
Liyi Li, Arunkumar Bhattar, Le Chang, Mingwei Zhu, and Aravind Machiry
|
CheckedCBox: Type Directed Program Partitioning with Checked C for
Incremental Spatial Memory Safety
|
Liyi Li and Arunkumar Bhattar contributed equally to this work
| null | null | null |
cs.CR cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Spatial memory safety violation is still a major issue for C programs.
Checked-C is a safe dialect of C and extends it with Checked pointer types and
annotations that guarantee spatial memory safety in a backward-compatible
manner, allowing the mix of checked pointers and regular (unchecked) pointer
types. However, unchecked code vulnerabilities can violate the checked code's
spatial safety guarantees. We present CheckedCBox, which adds a flexible,
type-directed program partitioning mechanism to Checked-C, by enhancing the
Checked-C type system with tainted types that enable flexible partitioning of
the program into checked and unchecked regions, in a manner such that unchecked
region code does not affect the spatial safety in the checked region. We
formalize our type system and prove the non-crashing and non-exposure
properties of a well-typed CheckedCBox program. We implemented CheckedCBox in a
configurable manner, which enables us to use existing sandbox mechanisms (e.g.,
WebAssembly) to execute programs. In doing so, CheckedCBox has
prevented four known vulnerabilities by efficiently partitioning the program.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 15:31:35 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Li",
"Liyi",
""
],
[
"Bhattar",
"Arunkumar",
""
],
[
"Chang",
"Le",
""
],
[
"Zhu",
"Mingwei",
""
],
[
"Machiry",
"Aravind",
""
]
] |
new_dataset
| 0.998364 |
2302.01833
|
Mat\v{e}j Petrl\'ik
|
Tom\'a\v{s} Musil, Mat\v{e}j Petrl\'ik and Martin Saska
|
SphereMap: Dynamic Multi-Layer Graph Structure for Rapid Safety-Aware
UAV Planning
| null | null |
10.1109/LRA.2022.3195194
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A flexible topological representation consisting of a two-layer graph
structure built on-board an Unmanned Aerial Vehicle (UAV) by continuously
filling the free space of an occupancy map with intersecting spheres is
proposed in this paper. Most state-of-the-art planning methods find the
shortest paths while keeping the UAV at a pre-defined distance from obstacles.
Planning over the proposed structure reaches this pre-defined distance only
when necessary, maintaining a safer distance otherwise, while also being orders
of magnitude faster than other state-of-the-art methods. Furthermore, we
demonstrate how this graph representation can be converted into a lightweight
shareable topological-volumetric map of the environment, which enables
decentralized multi-robot cooperation. The proposed approach was successfully
validated in several kilometers of real subterranean environments, such as
caves, devastated industrial buildings, and in the harsh and complex setting of
the final event of the DARPA SubT Challenge, which aims to mimic the conditions
of real search and rescue missions as closely as possible, and where our
approach achieved the 2nd place in the virtual track.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 16:13:37 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Musil",
"Tomáš",
""
],
[
"Petrlík",
"Matěj",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.99414 |
2302.01872
|
Henghui Ding
|
Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Philip H.S. Torr,
Song Bai
|
MOSE: A New Dataset for Video Object Segmentation in Complex Scenes
|
MOSE Dataset Report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video object segmentation (VOS) aims at segmenting a particular object
throughout the entire video clip sequence. The state-of-the-art VOS methods
have achieved excellent performance (e.g., 90+% J&F) on existing datasets.
However, since the target objects in these existing datasets are usually
relatively salient, dominant, and isolated, VOS under complex scenes has rarely
been studied. To revisit VOS and make it more applicable in the real world, we
collect a new VOS dataset called coMplex video Object SEgmentation (MOSE) to
study the tracking and segmenting objects in complex environments. MOSE
contains 2,149 video clips and 5,200 objects from 36 categories, with 431,725
high-quality object segmentation masks. The most notable feature of the MOSE
dataset is its complex scenes with crowded and occluded objects. The target objects
in the videos are commonly occluded by others and disappear in some frames. To
analyze the proposed MOSE dataset, we benchmark 18 existing VOS methods under 4
different settings on the proposed MOSE dataset and conduct comprehensive
comparisons. The experiments show that current VOS algorithms cannot perceive
objects in complex scenes well. For example, under the semi-supervised VOS
setting, the highest J&F by existing state-of-the-art VOS methods is only 59.4%
on MOSE, much lower than their ~90% J&F performance on DAVIS. The results
reveal that although excellent performance has been achieved on existing
benchmarks, there are unresolved challenges under complex scenes and more
efforts are desired to explore these challenges in the future. The proposed
MOSE dataset has been released at https://henghuiding.github.io/MOSE.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 17:20:03 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Ding",
"Henghui",
""
],
[
"Liu",
"Chang",
""
],
[
"He",
"Shuting",
""
],
[
"Jiang",
"Xudong",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Bai",
"Song",
""
]
] |
new_dataset
| 0.999834 |
2302.01881
|
Ruocheng Wang
|
Ruocheng Wang, Yunzhi Zhang, Jiayuan Mao, Ran Zhang, Chin-Yi Cheng,
Jiajun Wu
|
IKEA-Manual: Seeing Shape Assembly Step by Step
|
NeurIPS 2022 Datasets and Benchmarks Track. Project page:
https://cs.stanford.edu/~rcwang/projects/ikea_manual
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human-designed visual manuals are crucial components in shape assembly
activities. They provide step-by-step guidance on how we should move and
connect different parts in a convenient and physically-realizable way. While
there has been an ongoing effort in building agents that perform assembly
tasks, the information in human-designed manuals has been largely overlooked. We
identify that this is due to 1) a lack of realistic 3D assembly objects that
have paired manuals and 2) the difficulty of extracting structured information
from purely image-based manuals. Motivated by this observation, we present
IKEA-Manual, a dataset consisting of 102 IKEA objects paired with assembly
manuals. We provide fine-grained annotations on the IKEA objects and assembly
manuals, including decomposed assembly parts, assembly plans, manual
segmentation, and 2D-3D correspondence between 3D parts and visual manuals. We
illustrate the broad application of our dataset on four tasks related to shape
assembly: assembly plan generation, part segmentation, pose estimation, and 3D
part assembly.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 17:32:22 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Wang",
"Ruocheng",
""
],
[
"Zhang",
"Yunzhi",
""
],
[
"Mao",
"Jiayuan",
""
],
[
"Zhang",
"Ran",
""
],
[
"Cheng",
"Chin-Yi",
""
],
[
"Wu",
"Jiajun",
""
]
] |
new_dataset
| 0.998951 |
2302.01890
|
Haoyu Liu
|
Haoyu Liu, Douglas J. Leith and Paul Patras
|
Android OS Privacy Under the Loupe -- A Tale from the East
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
China is currently the country with the largest number of Android smartphone
users. We use a combination of static and dynamic code analysis techniques to
study the data transmitted by the preinstalled system apps on Android
smartphones from three of the most popular vendors in China. We find that an
alarming number of preinstalled system, vendor and third-party apps are granted
dangerous privileges. Through traffic analysis, we find these packages transmit
to many third-party domains privacy sensitive information related to the user's
device (persistent identifiers), geolocation (GPS coordinates, network-related
identifiers), user profile (phone number, app usage) and social relationships
(e.g., call history), without consent or even notification. This poses serious
deanonymization and tracking risks that extend outside China when the user
leaves the country, and calls for a more rigorous enforcement of the recently
adopted data privacy legislation.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 18:01:57 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Liu",
"Haoyu",
""
],
[
"Leith",
"Douglas J.",
""
],
[
"Patras",
"Paul",
""
]
] |
new_dataset
| 0.99777 |
2302.01923
|
Md Zobaer Islam
|
Russ Messenger, Md Zobaer Islam, Matthew Whitlock, Erik Spong, Nate
Morton, Layne Claggett, Chris Matthews, Jordan Fox, Leland Palmer, Dane C.
Johnson, John F. O'Hara, Christopher J. Crick, Jamey D. Jacob, Sabit Ekin
|
Real-Time Traffic End-of-Queue Detection and Tracking in UAV Video
|
13 pages, 21 figures, submitted to International Journal of
Intelligent Transportation Systems Research
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Highway work zones are susceptible to undue accumulation of motorized
vehicles, which calls for dynamic work zone warning signs to prevent accidents.
The work zone signs are placed according to the location of the end-of-queue of
vehicles which usually changes rapidly. The detection of moving objects in
video captured by Unmanned Aerial Vehicles (UAV) has been extensively
researched so far, and is used in a wide array of applications including
traffic monitoring. Unlike the fixed traffic cameras, UAVs can be used to
monitor the traffic at work zones in real-time and also in a more
cost-effective way. This study presents a method as a proof of concept for
detecting End-of-Queue (EOQ) of traffic by processing the real-time video
footage of a highway work zone captured by UAV. EOQ is detected in the video by
image processing which includes background subtraction and blob detection
methods. This dynamic localization of EOQ of vehicles will enable faster and
more accurate relocation of work zone warning signs for drivers and thus will
reduce work zone fatalities. The method can be applied to detect EOQ of
vehicles and notify drivers in any other roads or intersections too where
vehicles are rapidly accumulating due to special events, traffic jams,
construction, or accidents.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 00:22:30 GMT"
}
] | 2023-02-06T00:00:00 |
[
[
"Messenger",
"Russ",
""
],
[
"Islam",
"Md Zobaer",
""
],
[
"Whitlock",
"Matthew",
""
],
[
"Spong",
"Erik",
""
],
[
"Morton",
"Nate",
""
],
[
"Claggett",
"Layne",
""
],
[
"Matthews",
"Chris",
""
],
[
"Fox",
"Jordan",
""
],
[
"Palmer",
"Leland",
""
],
[
"Johnson",
"Dane C.",
""
],
[
"O'Hara",
"John F.",
""
],
[
"Crick",
"Christopher J.",
""
],
[
"Jacob",
"Jamey D.",
""
],
[
"Ekin",
"Sabit",
""
]
] |
new_dataset
| 0.999441 |
1704.07199
|
Tobias Kapp\'e
|
Tobias Kapp\'e, Paul Brunet, Bas Luttik, Alexandra Silva, Fabio Zanasi
|
Brzozowski Goes Concurrent - A Kleene Theorem for Pomset Languages
|
Version 2 incorporates changes prompted by comments of the anonymous
referees at CONCUR. Besides minor corrections, this includes additions to the
introduction and the discussion section, as well as a proof of Lemma 2.5.
Version 3 corrects the accent on the first author's surname in the metadata
|
Proc. CONCUR 2017, pp 25:1-25:16
|
10.4230/LIPIcs.CONCUR.2017.25
| null |
cs.FL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Concurrent Kleene Algebra (CKA) is a mathematical formalism to study programs
that exhibit concurrent behaviour. As with previous extensions of Kleene
Algebra, characterizing the free model is crucial in order to develop the
foundations of the theory and potential applications. For CKA, this has been an
open question for a few years and this paper makes an important step towards an
answer. We present a new automaton model and a Kleene-like theorem that relates
a relaxed version of CKA to series-parallel pomset languages, which are a
natural candidate for the free model. There are two substantial differences
with previous work: from expressions to automata, we use Brzozowski
derivatives, which enable a direct construction of the automaton; from automata
to expressions, we provide a syntactic characterization of the automata that
denote valid CKA behaviours.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2017 13:03:52 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Aug 2017 16:12:40 GMT"
},
{
"version": "v3",
"created": "Sun, 22 Oct 2017 11:45:33 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Kappé",
"Tobias",
""
],
[
"Brunet",
"Paul",
""
],
[
"Luttik",
"Bas",
""
],
[
"Silva",
"Alexandra",
""
],
[
"Zanasi",
"Fabio",
""
]
] |
new_dataset
| 0.98351 |
1710.02787
|
Tobias Kapp\'e
|
Tobias Kapp\'e and Paul Brunet and Alexandra Silva and Fabio Zanasi
|
Concurrent Kleene Algebra: Free Model and Completeness
|
Version 2 includes an overview section that outlines the completeness
proof, as well as some extra discussion of the interpolation lemma. It also
includes better typography and a number of minor fixes. Version 3
incorporates the changes by comments from the anonymous referees at ESOP.
Among other things, these include a worked example of computing the syntactic
closure by hand
|
Proc. ESOP 2018, pp 856-882
|
10.1007/978-3-319-89884-1_30
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concurrent Kleene Algebra (CKA) was introduced by Hoare, Moeller, Struth and
Wehrman in 2009 as a framework to reason about concurrent programs. We prove
that the axioms for CKA with bounded parallelism are complete for the semantics
proposed in the original paper; consequently, these semantics are the free
model for this fragment. This result settles a conjecture of Hoare and
collaborators. Moreover, the techniques developed along the way are reusable;
in particular, they allow us to establish pomset automata as an operational
model for CKA.
|
[
{
"version": "v1",
"created": "Sun, 8 Oct 2017 06:06:09 GMT"
},
{
"version": "v2",
"created": "Sun, 22 Oct 2017 08:34:29 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Feb 2018 12:06:38 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Kappé",
"Tobias",
""
],
[
"Brunet",
"Paul",
""
],
[
"Silva",
"Alexandra",
""
],
[
"Zanasi",
"Fabio",
""
]
] |
new_dataset
| 0.999011 |
1811.10401
|
Tobias Kapp\'e
|
Tobias Kapp\'e and Paul Brunet and Jurriaan Rot and Alexandra Silva
and Jana Wagemaker and Fabio Zanasi
|
Kleene Algebra with Observations
| null |
Proc. CONCUR 2019, pp 41:1-41:16
|
10.4230/LIPIcs.CONCUR.2019.41
| null |
cs.LO cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Kleene algebra with tests (KAT) is an algebraic framework for reasoning about
the control flow of sequential programs. Generalising KAT to reason about
concurrent programs is not straightforward, because axioms native to KAT in
conjunction with expected axioms for concurrency lead to an anomalous equation.
In this paper, we propose Kleene algebra with observations (KAO), a variant of
KAT, as an alternative foundation for extending KAT to a concurrent setting. We
characterise the free model of KAO, and establish a decision procedure w.r.t.
its equational theory.
|
[
{
"version": "v1",
"created": "Fri, 16 Nov 2018 16:56:43 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Apr 2019 08:05:27 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Jul 2019 09:50:06 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Aug 2019 09:45:00 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Kappé",
"Tobias",
""
],
[
"Brunet",
"Paul",
""
],
[
"Rot",
"Jurriaan",
""
],
[
"Silva",
"Alexandra",
""
],
[
"Wagemaker",
"Jana",
""
],
[
"Zanasi",
"Fabio",
""
]
] |
new_dataset
| 0.995407 |
1812.03058
|
Tobias Kapp\'e
|
Tobias Kapp\'e and Paul Brunet and Bas Luttik and Alexandra Silva and
Fabio Zanasi
|
On Series-Parallel Pomset Languages: Rationality, Context-Freeness and
Automata
|
Accepted manuscript
|
J. Log. Algebraic Methods Program. 103, pp 130-153, 2019
|
10.1016/j.jlamp.2018.12.001
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concurrent Kleene Algebra (CKA) is a formalism to study concurrent programs.
Like previous Kleene Algebra extensions, developing a correspondence between
denotational and operational perspectives is important, for both foundations
and applications. This paper takes an important step towards such a
correspondence, by precisely relating bi-Kleene Algebra (BKA), a fragment of
CKA, to a novel type of automata, pomset automata (PAs). We show that PAs can
implement the BKA semantics of series-parallel rational expressions, and that a
class of PAs can be translated back to these expressions. We also characterise
the behavior of general PAs in terms of context-free pomset grammars;
consequently, universality, equivalence and series-parallel rationality of
general PAs are undecidable.
|
[
{
"version": "v1",
"created": "Fri, 7 Dec 2018 15:04:39 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Dec 2018 10:51:49 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Kappé",
"Tobias",
""
],
[
"Brunet",
"Paul",
""
],
[
"Luttik",
"Bas",
""
],
[
"Silva",
"Alexandra",
""
],
[
"Zanasi",
"Fabio",
""
]
] |
new_dataset
| 0.99756 |
2002.09682
|
Tobias Kapp\'e
|
Tobias Kapp\'e and Paul Brunet and Alexandra Silva and Jana Wagemaker
and Fabio Zanasi
|
Concurrent Kleene Algebra with Observations: from Hypotheses to
Completeness
| null |
Proc. FoSSaCS 2020, pp 381-400
|
10.1007/978-3-030-45231-5_20
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Concurrent Kleene Algebra (CKA) extends basic Kleene algebra with a parallel
composition operator, which enables reasoning about concurrent programs.
However, CKA fundamentally misses tests, which are needed to model standard
programming constructs such as conditionals and $\mathsf{while}$-loops. It
turns out that integrating tests in CKA is subtle, due to their interaction
with parallelism. In this paper we provide a solution in the form of Concurrent
Kleene Algebra with Observations (CKAO). Our main contribution is a
completeness theorem for CKAO. Our result rests on a more general study of
CKA "with hypotheses", of which CKAO turns out to be an instance: this analysis
is of independent interest, as it can be applied to extensions of CKA other
than CKAO.
|
[
{
"version": "v1",
"created": "Sat, 22 Feb 2020 10:51:24 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Kappé",
"Tobias",
""
],
[
"Brunet",
"Paul",
""
],
[
"Silva",
"Alexandra",
""
],
[
"Wagemaker",
"Jana",
""
],
[
"Zanasi",
"Fabio",
""
]
] |
new_dataset
| 0.999518 |
2104.02876
|
Dmitry Berdinsky
|
Dmitry Berdinsky and Prohrak Kruengthomya
|
Finite Automata Encoding Functions: A Representation Using B-splines
|
24 pages; in the third version the introduction and the abstract has
been rewritten; additional minor changes and improvements have been made
| null | null | null |
cs.CG cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finite automata are used to encode geometric figures, functions and can be
used for image compression and processing. The original approach is to
represent each point of a figure (a graph of a function) in $\mathbb{R}^n$ as a
convolution of its $n$ coordinates written in some base. Then a figure is said
to be encoded as a finite automaton if the set of convolutions corresponding to
points in this figure is accepted by a finite automaton. Jurgensen, Staiger and
Yamasaki showed that the only continuously differentiable functions which can
be encoded as a finite automaton in this way are linear. In this paper we
propose a representation which enables encoding piecewise polynomial functions
with arbitrary degrees of smoothness that substantially extends a family of
functions which can be encoded as finite automata. This representation
naturally comes from the framework of hierarchical tensor product B-splines
utilized in numerical computational geometry. We show that finite automata
provide a simple tool suitable for solving computational problems arising in
this framework including the case when the support of a function is unbounded.
|
[
{
"version": "v1",
"created": "Wed, 7 Apr 2021 02:59:43 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Aug 2021 09:11:55 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Feb 2023 10:23:16 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Berdinsky",
"Dmitry",
""
],
[
"Kruengthomya",
"Prohrak",
""
]
] |
new_dataset
| 0.95621 |
2105.04295
|
Salvatore Vilella
|
Alfonso Semeraro, Salvatore Vilella and Giancarlo Ruffo
|
PyPlutchik: visualising and comparing emotion-annotated corpora
|
18 pages, 13 figures. Submitted to IEEE for possible publication;
copyright may change
|
PLoS ONE 16(9): e0256503, 2021
|
10.1371/journal.pone.0256503
| null |
cs.HC cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The increasing availability of textual corpora and data fetched from social
networks is fuelling a huge production of works based on the model proposed by
psychologist Robert Plutchik, often referred to simply as the ``Plutchik Wheel''.
Related research ranges from descriptions of annotation tasks to emotion
detection tools. Visualisation of such emotions is traditionally carried out
using popular layouts such as bar plots or tables, which are however
sub-optimal. The classic representation of Plutchik's wheel follows the
principles of proximity and opposition between pairs of emotions: spatial
proximity in this model is also a semantic proximity, as adjacent emotions
elicit a complex emotion (a primary dyad) when triggered together; spatial
opposition is a semantic opposition as well, as positive emotions are opposite
to negative emotions. The most common layouts fail to preserve both features,
and they make it hard to compare different corpora at a glance with basic
design solutions. We
introduce PyPlutchik, a Python library specifically designed for the
visualisation of Plutchik's emotions in texts or in corpora. PyPlutchik draws
the Plutchik's flower with each emotion petal sized after how much that emotion
is detected or annotated in the corpus, also representing three degrees of
intensity for each of them. Notably, PyPlutchik also allows users to display
primary, secondary, tertiary and opposite dyads in a compact, intuitive way. We
substantiate our claim that PyPlutchik outperforms other classic visualisations
when displaying Plutchik emotions and we showcase a few examples that display
our library's most compelling features.
|
[
{
"version": "v1",
"created": "Mon, 19 Apr 2021 19:34:44 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Semeraro",
"Alfonso",
""
],
[
"Vilella",
"Salvatore",
""
],
[
"Ruffo",
"Giancarlo",
""
]
] |
new_dataset
| 0.979151 |
2107.10147
|
Thilo Krachenfels
|
Thilo Krachenfels, Jean-Pierre Seifert, Shahin Tajik
|
Trojan Awakener: Detecting Dormant Malicious Hardware Using Laser Logic
State Imaging (Extended Version)
|
This is the extended version prepared for journal submission. For
remarks on the changes, see the last paragraph of Section 1
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The threat of hardware Trojans (HTs) and their detection is a widely studied
field. While the effort for inserting a Trojan into an application-specific
integrated circuit (ASIC) can be considered relatively high, especially when
trusting the chip manufacturer, programmable hardware is vulnerable to Trojan
insertion even after the product has been shipped or during usage. At the same
time, detecting dormant HTs with small or zero-overhead triggers and payloads
on these platforms is still a challenging task, as the Trojan might not get
activated during the chip verification using logical testing or physical
measurements. In this work, we present a novel Trojan detection approach based
on a technique known from integrated circuit (IC) failure analysis, capable of
detecting virtually all classes of dormant Trojans. Using laser logic state
imaging (LLSI), we show how supply voltage modulations can awaken inactive
Trojans, making them detectable using laser voltage imaging techniques.
Therefore, our technique does not require triggering the Trojan. To support our
claims, we present three case studies on 28 nm and 20 nm SRAM- and flash-based
field-programmable gate arrays (FPGAs). We demonstrate how to detect with high
confidence small changes in sequential and combinatorial logic as well as in
the routing configuration of FPGAs in a non-invasive manner. Finally, we
discuss the practical applicability of our approach on dormant analog Trojans
in ASICs.
|
[
{
"version": "v1",
"created": "Wed, 21 Jul 2021 15:23:53 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jul 2021 15:25:01 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Jul 2021 18:27:21 GMT"
},
{
"version": "v4",
"created": "Sat, 18 Sep 2021 14:28:16 GMT"
},
{
"version": "v5",
"created": "Thu, 2 Feb 2023 13:35:51 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Krachenfels",
"Thilo",
""
],
[
"Seifert",
"Jean-Pierre",
""
],
[
"Tajik",
"Shahin",
""
]
] |
new_dataset
| 0.998276 |
2201.10485
|
Tobias Kapp\'e
|
Jana Wagemaker and Nate Foster and Tobias Kapp\'e and Dexter Kozen and
Jurriaan Rot and Alexandra Silva
|
Concurrent NetKAT: Modeling and analyzing stateful, concurrent networks
| null |
Proc. ESOP 2022, pp 575-602
|
10.1007/978-3-030-99336-8_21
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Concurrent NetKAT (CNetKAT), an extension of NetKAT with
operators for specifying and reasoning about concurrency in scenarios where
multiple packets interact through state. We provide a model of the language
based on partially-ordered multisets (pomsets), which are a well-established
mathematical structure for defining the denotational semantics of concurrent
languages. We provide a sound and complete axiomatization of this model, and we
illustrate the use of CNetKAT through examples. More generally, CNetKAT can be
understood as an algebraic framework for reasoning about programs with both
local state (in packets) and global state (in a global store).
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 17:27:22 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jan 2022 09:37:53 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Jul 2022 09:12:42 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Wagemaker",
"Jana",
""
],
[
"Foster",
"Nate",
""
],
[
"Kappé",
"Tobias",
""
],
[
"Kozen",
"Dexter",
""
],
[
"Rot",
"Jurriaan",
""
],
[
"Silva",
"Alexandra",
""
]
] |
new_dataset
| 0.991052 |
2207.09068
|
Anh Nguyen
|
Thang M. Pham, Seunghyun Yoon, Trung Bui, Anh Nguyen
|
PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic
Search
|
Accepted to EACL 2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While contextualized word embeddings have been a de-facto standard, learning
contextualized phrase embeddings is less explored and is hindered by the
lack of a human-annotated benchmark that tests machine understanding of phrase
semantics given a context sentence or paragraph (instead of phrases alone). To
fill this gap, we propose PiC -- a dataset of ~28K noun phrases accompanied
by their contextual Wikipedia pages and a suite of three tasks for training and
evaluating phrase embeddings. Training on PiC improves ranking models' accuracy
and remarkably pushes span-selection (SS) models (i.e., predicting the start
and end index of the target phrase) to near-human accuracy, which is 95% Exact
Match (EM) on semantic search given a query phrase and a passage.
Interestingly, we find evidence that such impressive performance is because the
SS models learn to better capture the common meaning of a phrase regardless of
its actual context. SotA models perform poorly in distinguishing two senses of
the same phrase in two contexts (~60% EM) and in estimating the similarity
between two different phrases in the same context (~70% EM).
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 04:45:41 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 03:52:56 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Aug 2022 16:11:49 GMT"
},
{
"version": "v4",
"created": "Fri, 27 Jan 2023 13:54:16 GMT"
},
{
"version": "v5",
"created": "Thu, 2 Feb 2023 05:19:44 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Pham",
"Thang M.",
""
],
[
"Yoon",
"Seunghyun",
""
],
[
"Bui",
"Trung",
""
],
[
"Nguyen",
"Anh",
""
]
] |
new_dataset
| 0.999836 |
2208.02342
|
Rui Chen
|
Sadeed Bin Sayed, Rui Chen, Huseyin Arda Ulku, Hakan Bagci
|
A Time Domain Volume Integral Equation Solver to Analyze Electromagnetic
Scattering from Nonlinear Dielectric Objects
| null | null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A time domain electric field volume integral equation (TD-EFVIE) solver is
proposed for analyzing electromagnetic scattering from dielectric objects with
Kerr nonlinearity. The nonlinear constitutive relation that relates electric
flux and electric field induced in the scatterer is used as an auxiliary
equation that complements TD-EFVIE. The ordinary differential equation system
that arises from TD-EFVIE's Schaubert-Wilton-Glisson (SWG)-based discretization
is integrated in time using a predictor-corrector method for the unknown
expansion coefficients of the electric field. Matrix systems that arise from
the SWG-based discretization of the nonlinear constitutive relation and its
inverse obtained using the Padé approximant are used to carry out explicit
updates of the electric field and the electric flux expansion coefficients at
the predictor and the corrector stages of the time integration method. The
resulting explicit marching-on-in-time (MOT) scheme does not call for any
Newton-like nonlinear solver and only requires solution of sparse and
well-conditioned Gram matrix systems at every step. Numerical results show that
the proposed explicit MOT-based TD-EFVIE solver is more accurate than the
finite-difference time-domain method that is traditionally used for analyzing
transient electromagnetic scattering from nonlinear objects.
|
[
{
"version": "v1",
"created": "Sun, 17 Jul 2022 15:18:09 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Feb 2023 09:51:28 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Sayed",
"Sadeed Bin",
""
],
[
"Chen",
"Rui",
""
],
[
"Ulku",
"Huseyin Arda",
""
],
[
"Bagci",
"Hakan",
""
]
] |
new_dataset
| 0.997875 |
2210.01751
|
Christian Anti\'c
|
Christian Anti\'c
|
Proportional algebras
| null | null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analogical reasoning is at the core of human and artificial intelligence and
creativity, with numerous applications. Analogical proportions are expressions
of the form ``$a$ is to $b$ what $c$ is to $d$'' and lie at the center of
analogical reasoning. This paper introduces proportional algebras as algebras
endowed with a 4-ary
analogical proportion relation $a:b:\,:c:d$ satisfying a suitable set of
axioms. Functions preserving analogical proportions have already proven to be
of practical interest and studying their mathematical properties is essential
for understanding proportions. We therefore introduce proportional
homomorphisms (and their associated congruences) and functors and show that
they are closely related notions. This provides us with mathematical tools for
transferring knowledge across different domains which is crucial for future
AI-systems. In a broader sense, this paper is a further step towards a
mathematical theory of analogical reasoning.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 00:04:34 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Dec 2022 14:55:22 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Feb 2023 17:19:09 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Antić",
"Christian",
""
]
] |
new_dataset
| 0.955155 |
2211.14016
|
Simon Krogmann
|
Simon Krogmann and Pascal Lenzner and Alexander Skopalik
|
Strategic Facility Location with Clients that Minimize Total Waiting
Time
|
To appear at the 37th AAAI Conference on Artificial Intelligence
(AAAI-23), full version
| null | null | null |
cs.GT cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study a non-cooperative two-sided facility location game in which
facilities and clients behave strategically. This is in contrast to many other
facility location games in which clients simply visit their closest facility.
Facility agents select a location on a graph to open a facility to attract as
much purchasing power as possible, while client agents choose which facilities
to patronize by strategically distributing their purchasing power in order to
minimize their total waiting time. Here, the waiting time of a facility depends
on its received total purchasing power. We show that our client stage is an
atomic splittable congestion game, which implies existence, uniqueness and
efficient computation of a client equilibrium. Therefore, facility agents can
efficiently predict client behavior and make strategic decisions accordingly.
Despite that, we prove that subgame perfect equilibria do not exist in all
instances of this game and that their existence is NP-hard to decide. On the
positive side, we provide a simple and efficient algorithm to compute
3-approximate subgame perfect equilibria.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 10:43:57 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Feb 2023 13:35:53 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Krogmann",
"Simon",
""
],
[
"Lenzner",
"Pascal",
""
],
[
"Skopalik",
"Alexander",
""
]
] |
new_dataset
| 0.978168 |
2212.08320
|
Runpei Dong
|
Runpei Dong, Zekun Qi, Linfeng Zhang, Junbo Zhang, Jianjian Sun, Zheng
Ge, Li Yi, Kaisheng Ma
|
Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image
Transformers Help 3D Representation Learning?
|
Accepted at ICLR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The success of deep learning heavily relies on large-scale data with
comprehensive labels, which are more expensive and time-consuming to acquire in
3D than for 2D images or natural language. This motivates utilizing models
pretrained on modalities other than 3D as teachers for cross-modal
knowledge transfer. In this paper, we revisit masked modeling in a unified
fashion of knowledge distillation, and we show that foundational Transformers
pretrained with 2D images or natural languages can help self-supervised 3D
representation learning through training Autoencoders as Cross-Modal Teachers
(ACT). The pretrained Transformers are transferred as cross-modal 3D teachers
using discrete variational autoencoding self-supervision, during which the
Transformers are frozen with prompt tuning for better knowledge inheritance.
The latent features encoded by the 3D teachers are used as the target of masked
point modeling, wherein the dark knowledge is distilled to the 3D Transformer
students as foundational geometry understanding. Our ACT pretrained 3D learner
achieves state-of-the-art generalization capacity across various downstream
benchmarks, e.g., 88.21% overall accuracy on ScanObjectNN. Codes have been
released at https://github.com/RunpeiDong/ACT.
|
[
{
"version": "v1",
"created": "Fri, 16 Dec 2022 07:46:53 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Feb 2023 07:26:26 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Dong",
"Runpei",
""
],
[
"Qi",
"Zekun",
""
],
[
"Zhang",
"Linfeng",
""
],
[
"Zhang",
"Junbo",
""
],
[
"Sun",
"Jianjian",
""
],
[
"Ge",
"Zheng",
""
],
[
"Yi",
"Li",
""
],
[
"Ma",
"Kaisheng",
""
]
] |
new_dataset
| 0.995168 |
2301.05821
|
Zeyu Zhang
|
Hangxin Liu, Zeyu Zhang, Ziyuan Jiao, Zhenliang Zhang, Minchen Li,
Chenfanfu Jiang, Yixin Zhu, Song-Chun Zhu
|
A Reconfigurable Data Glove for Reconstructing Physical and Virtual
Grasps
|
Paper accepted by Engineering
| null | null | null |
cs.RO cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present a reconfigurable data glove design to capture
different modes of human hand-object interactions, which are critical in
training embodied artificial intelligence (AI) agents for fine manipulation
tasks. To achieve various downstream tasks with distinct features, our
reconfigurable data glove operates in three modes sharing a unified backbone
design that reconstructs hand gestures in real time. In the tactile-sensing
mode, the glove system aggregates manipulation force via customized force
sensors made from a soft and thin piezoresistive material; this design
minimizes interference during complex hand movements. The virtual reality (VR)
mode enables real-time interaction in a physically plausible fashion: A
caging-based approach is devised to determine stable grasps by detecting
collision events. Leveraging a state-of-the-art finite element method (FEM),
the simulation mode collects data on fine-grained 4D manipulation events
comprising hand and object motions in 3D space and how the object's physical
properties (e.g., stress and energy) change in accordance with manipulation
over time. Notably, the glove system presented here is the first to use
high-fidelity simulation to investigate the unobservable physical and causal
factors behind manipulation actions. In a series of experiments, we
characterize our data glove in terms of individual sensors and the overall
system. More specifically, we evaluate the system's three modes by (i)
recording hand gestures and associated forces, (ii) improving manipulation
fluency in VR, and (iii) producing realistic simulation effects of various tool
uses, respectively. Based on these three modes, our reconfigurable data glove
collects and reconstructs fine-grained human grasp data in both physical and
virtual environments, thereby opening up new avenues for the learning of
manipulation skills for embodied AI agents.
|
[
{
"version": "v1",
"created": "Sat, 14 Jan 2023 05:35:50 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Jan 2023 08:51:09 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Feb 2023 12:05:21 GMT"
},
{
"version": "v4",
"created": "Thu, 2 Feb 2023 02:09:19 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Liu",
"Hangxin",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Jiao",
"Ziyuan",
""
],
[
"Zhang",
"Zhenliang",
""
],
[
"Li",
"Minchen",
""
],
[
"Jiang",
"Chenfanfu",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Zhu",
"Song-Chun",
""
]
] |
new_dataset
| 0.999249 |
2301.11964
|
Ken St. Germain
|
Ken St. Germain, Josh Angichiodo
|
Adversarial Networks and Machine Learning for File Classification
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Correctly identifying the type of file under examination is a critical part
of a forensic investigation. The file type alone suggests the embedded content,
such as a picture, video, manuscript, spreadsheet, etc. In cases where a system
owner might desire to keep their files inaccessible or file type concealed, we
propose using an adversarially-trained machine learning neural network to
determine a file's true type even if the extension or file header is obfuscated
to complicate its discovery. Our semi-supervised generative adversarial network
(SGAN) achieved 97.6% accuracy in classifying files across 11 different types.
We also compared our network against a traditional standalone neural network
and three other machine learning algorithms. The adversarially-trained network
proved to be the most precise file classifier especially in scenarios with few
supervised samples available. Our SGAN-based file classifier implementation is
available on GitHub (https://ksaintg.github.io/SGAN-File-Classier).
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 19:40:03 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Feb 2023 13:14:11 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Germain",
"Ken St.",
""
],
[
"Angichiodo",
"Josh",
""
]
] |
new_dataset
| 0.998126 |
2302.00675
|
Kazuki Yoshiyama
|
Kazuki Yoshiyama, Takuya Narihira
|
NDJIR: Neural Direct and Joint Inverse Rendering for Geometry, Lights,
and Materials of Real Object
|
26 pages
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of inverse rendering is to decompose geometry, lights, and materials
given posed multi-view images. To achieve this goal, we propose neural direct
and joint inverse rendering, NDJIR. Unlike prior works that rely on
approximations of the rendering equation, NDJIR directly addresses the
integrals in the rendering equation and jointly decomposes geometry: signed
distance function, lights: environment and implicit lights, materials: base
color, roughness, specular reflectance using the powerful and flexible volume
rendering framework, voxel grid feature, and Bayesian prior. Our method
directly uses the physically-based rendering, so we can seamlessly export an
extracted mesh with materials to DCC tools and show material conversion
examples. We perform extensive experiments showing that our proposed method
decomposes real objects semantically well in a photogrammetric setting, and we
analyze which factors contribute to accurate inverse rendering.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 13:21:03 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Yoshiyama",
"Kazuki",
""
],
[
"Narihira",
"Takuya",
""
]
] |
new_dataset
| 0.999614 |
2302.00785
|
Mert Yuksekgonul
|
Roxana Daneshjou, Mert Yuksekgonul, Zhuo Ran Cai, Roberto Novoa, James
Zou
|
SkinCon: A skin disease dataset densely annotated by domain experts for
fine-grained model debugging and analysis
|
NeurIPS 2022 Datasets and Benchmarks Track
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the deployment of artificial intelligence (AI) in high-risk settings,
such as healthcare, methods that provide interpretability/explainability or
allow fine-grained error analysis are critical. Many recent methods for
interpretability/explainability and fine-grained error analysis use concepts,
which are meta-labels that are semantically meaningful to humans. However,
there are only a few datasets that include concept-level meta-labels and most
of these meta-labels are relevant for natural images that do not require domain
expertise. Densely annotated datasets in medicine have focused on meta-labels
that are relevant to a single disease, such as melanoma. In dermatology, skin disease
is described using an established clinical lexicon that allows clinicians to
describe physical exam findings to one another. To provide a medical dataset
densely annotated by domain experts with annotations useful across multiple
disease processes, we developed SkinCon: a skin disease dataset densely
annotated by dermatologists. SkinCon includes 3230 images from the Fitzpatrick
17k dataset densely annotated with 48 clinical concepts, 22 of which have at
least 50 images representing the concept. The concepts used were chosen by two
dermatologists considering the clinical descriptor terms used to describe skin
lesions. Examples include "plaque", "scale", and "erosion". The same concepts
were also used to label 656 skin disease images from the Diverse Dermatology
Images dataset, providing an additional external dataset with diverse skin tone
representations. We review the potential applications for the SkinCon dataset,
such as probing models, concept-based explanations, and concept bottlenecks.
Furthermore, we use SkinCon to demonstrate two of these use cases: debugging
mistakes of an existing dermatology AI model with concepts and developing
interpretable models with post-hoc concept bottleneck models.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 22:39:51 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Daneshjou",
"Roxana",
""
],
[
"Yuksekgonul",
"Mert",
""
],
[
"Cai",
"Zhuo Ran",
""
],
[
"Novoa",
"Roberto",
""
],
[
"Zou",
"James",
""
]
] |
new_dataset
| 0.999605 |
2302.00786
|
Joshua Springer
|
Joshua Springer and Marcel Kyas
|
Autonomous Drone Landing: Marked Landing Pads and Solidified Lava Flows
|
10 pages, 12 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Landing is the most challenging and risky aspect of multirotor drone flight,
and only simple landing methods exist for autonomous drones. We explore methods
for autonomous drone landing in two scenarios. In the first scenario, we
examine methods for landing on known landing pads using fiducial markers and a
gimbal-mounted monocular camera. This method has potential in drone
applications where a drone must land more accurately than GPS can provide
(e.g., package delivery in an urban canyon). We expand on previous methods by
actuating the drone's camera to track the marker over time, and we address the
complexities of pose estimation caused by fiducial marker orientation
ambiguity. In the second scenario, and in collaboration with the RAVEN project,
we explore methods for landing on solidified lava flows in Iceland, which
serves as an analog environment for Mars and provides insight into the
effectiveness of drone-rover exploration teams. Our drone uses a depth camera
to visualize the terrain, and we are developing methods to analyze the terrain
data for viable landing sites in real time with minimal sensors and external
infrastructure requirements, so that the solution does not heavily influence
the drone's behavior, mission structure, or operational environments.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 22:41:46 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Springer",
"Joshua",
""
],
[
"Kyas",
"Marcel",
""
]
] |
new_dataset
| 0.997826 |
2302.00820
|
Conrad Sanderson
|
Ryan R. Curtin, Marcus Edel, Omar Shrit, Shubham Agrawal, Suryoday
Basak, James J. Balamuta, Ryan Birmingham, Kartik Dutt, Dirk Eddelbuettel,
Rishabh Garg, Shikhar Jaiswal, Aakash Kaushik, Sangyeon Kim, Anjishnu
Mukherjee, Nanubala Gnana Sai, Nippun Sharma, Yashwant Singh Parihar, Roshan
Swain, Conrad Sanderson
|
mlpack 4: a fast, header-only C++ machine learning library
| null |
Journal of Open Source Software, Vol. 8, No. 82, 2023
|
10.21105/joss.05026
| null |
cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For over 15 years, the mlpack machine learning library has served as a "Swiss
army knife" for C++-based machine learning. Its efficient implementations of
common and cutting-edge machine learning algorithms have been used in a wide
variety of scientific and industrial applications. This paper overviews mlpack
4, a significant upgrade over its predecessor. The library has been
significantly refactored and redesigned to facilitate an easier
prototyping-to-deployment pipeline, including bindings to other languages
(Python, Julia, R, Go, and the command line) that allow prototyping to be
seamlessly performed in environments other than C++. mlpack is open-source
software, distributed under the permissive 3-clause BSD license; it can be
obtained at https://mlpack.org
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 02:03:22 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Curtin",
"Ryan R.",
""
],
[
"Edel",
"Marcus",
""
],
[
"Shrit",
"Omar",
""
],
[
"Agrawal",
"Shubham",
""
],
[
"Basak",
"Suryoday",
""
],
[
"Balamuta",
"James J.",
""
],
[
"Birmingham",
"Ryan",
""
],
[
"Dutt",
"Kartik",
""
],
[
"Eddelbuettel",
"Dirk",
""
],
[
"Garg",
"Rishabh",
""
],
[
"Jaiswal",
"Shikhar",
""
],
[
"Kaushik",
"Aakash",
""
],
[
"Kim",
"Sangyeon",
""
],
[
"Mukherjee",
"Anjishnu",
""
],
[
"Sai",
"Nanubala Gnana",
""
],
[
"Sharma",
"Nippun",
""
],
[
"Parihar",
"Yashwant Singh",
""
],
[
"Swain",
"Roshan",
""
],
[
"Sanderson",
"Conrad",
""
]
] |
new_dataset
| 0.997962 |
2302.00824
|
Ryan White
|
Trupti Mahendrakar, Ryan T. White, Markus Wilde, Madhur Tiwari
|
SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft
Feature Detection
|
Accepted at IEEE Aerospace Conference 2023, 11 pages, 21 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid proliferation of non-cooperative spacecraft and space debris in
orbit has precipitated a surging demand for on-orbit servicing and space debris
removal at a scale that only autonomous missions can address, but the
prerequisite autonomous navigation and flightpath planning to safely capture an
unknown, non-cooperative, tumbling space object is an open problem. This
requires algorithms for real-time, automated spacecraft feature recognition to
pinpoint the locations of collision hazards (e.g. solar panels or antennas) and
safe docking features (e.g. satellite bodies or thrusters) so safe, effective
flightpaths can be planned. Prior work in this area reveals that the performance
of computer vision models is highly dependent on the training dataset and its
coverage of scenarios visually similar to the real scenarios that occur in
deployment. Hence, the algorithm may have degraded performance under certain
lighting conditions even when the rendezvous maneuver conditions of the chaser
to the target spacecraft are the same. This work delves into how humans perform
these tasks through a survey of how aerospace engineering students experienced
with spacecraft shapes and components recognize features of four
spacecraft: Landsat, Envisat, Anik, and the orbiter Mir. The survey reveals
that the most common patterns in the human detection process were to consider
the shape and texture of the features: antennas, solar panels, thrusters, and
satellite bodies. This work introduces a novel algorithm SpaceYOLO, which fuses
a state-of-the-art object detector YOLOv5 with a separate neural network based
on these human-inspired decision processes exploiting shape and texture.
Performance in autonomous spacecraft detection of SpaceYOLO is compared to
ordinary YOLOv5 in hardware-in-the-loop experiments under different lighting
and chaser maneuver conditions at the ORION Laboratory at Florida Tech.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 02:11:39 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Mahendrakar",
"Trupti",
""
],
[
"White",
"Ryan T.",
""
],
[
"Wilde",
"Markus",
""
],
[
"Tiwari",
"Madhur",
""
]
] |
new_dataset
| 0.999863 |
2302.00856
|
Mukhlish Fuadi
|
Mukhlish Fuadi, Adhi Dharma Wibawa, Surya Sumpeno
|
idT5: Indonesian Version of Multilingual T5 Transformer
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Indonesian language is spoken by almost 200 million people and is the 10th
most spoken language in the world, but it is under-represented in NLP (Natural
Language Processing) research. A sparsity of language resources has hampered
previous work on Indonesian. The Transformer is a new architecture rapidly
becoming dominant for NLP, surpassing alternatives like convolutional and
recurrent neural networks. T5 (Text-to-Text Transfer Transformer) is a
Transformer model that converts all text-based language problems to
text-to-text format for English. The multilingual variant is mT5 (multilingual
T5) which has shown promising results on many NLP tasks across languages.
However, the size of this multilingual model is a drawback for its application
in real production applications, which sometimes require only one language. In
this study, the mT5 model was adapted for only one language, Indonesian,
resulting in a pre-trained T5 model that was specific only for Indonesian with
a smaller size. For performance comparison, we fine-tuned this model and the
mT5 model to the Sentiment Analysis (SA), Question Generation (QG), and
Question Answering (QA) tasks using the same mechanism and dataset. The model
fine-tuned from our model achieved 77.18% accuracy on SA, 8% higher than the
mT5-based model, and obtained nearly the same score as the mT5-based model on
QG and QA. The results confirm that it is possible to produce a smaller
pre-trained model that maintains comparable yields while reducing the model
size by up to 58%. In addition, the resulting model requires less memory, loads
faster, and has faster inference times.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 03:56:16 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Fuadi",
"Mukhlish",
""
],
[
"Wibawa",
"Adhi Dharma",
""
],
[
"Sumpeno",
"Surya",
""
]
] |
new_dataset
| 0.994633 |
2302.00885
|
Yixuan Xu
|
Yixuan Xu, Hamidreza Fazlali, Yuan Ren, Bingbing Liu
|
AOP-Net: All-in-One Perception Network for Joint LiDAR-based 3D Object
Detection and Panoptic Segmentation
|
Under review
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
LiDAR-based 3D object detection and panoptic segmentation are two crucial
tasks in the perception systems of autonomous vehicles and robots. In this
paper, we propose All-in-One Perception Network (AOP-Net), a LiDAR-based
multi-task framework that combines 3D object detection and panoptic
segmentation. In this method, a dual-task 3D backbone is developed to extract
both panoptic- and detection-level features from the input LiDAR point cloud.
Also, a new 2D backbone that intertwines Multi-Layer Perceptron (MLP) and
convolution layers is designed to further improve the detection task
performance. Finally, a novel module is proposed to guide the detection head by
recovering useful features discarded during down-sampling operations in the 3D
backbone. This module leverages estimated instance segmentation masks to
recover detailed information from each candidate object. The AOP-Net achieves
state-of-the-art performance for published works on the nuScenes benchmark for
both 3D object detection and panoptic segmentation tasks. Also, experiments
show that our method easily adapts to and significantly improves the
performance of any BEV-based 3D object detection method.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 05:31:53 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Xu",
"Yixuan",
""
],
[
"Fazlali",
"Hamidreza",
""
],
[
"Ren",
"Yuan",
""
],
[
"Liu",
"Bingbing",
""
]
] |
new_dataset
| 0.999244 |
2302.00886
|
Sidong Feng
|
Sidong Feng, Mulong Xie, Yinxing Xue, Chunyang Chen
|
Read It, Don't Watch It: Captioning Bug Recordings Automatically
|
Accepted to 45th International Conference on Software Engineering
(ICSE 2023)
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Screen recordings of mobile applications are easy to capture and include a
wealth of information, making them a popular mechanism for users to inform
developers of the problems encountered in the bug reports. However, watching
the bug recordings and efficiently understanding the semantics of user actions
can be time-consuming and tedious for developers. Inspired by the concept of
video subtitles in the movie industry, we present CAPdroid, a lightweight
approach to caption bug recordings automatically. CAPdroid is a purely
image-based and non-intrusive approach that uses image processing and
convolutional deep learning models to segment bug recordings, infer user action
attributes, and generate subtitle descriptions. The automated experiments
demonstrate the good performance of CAPdroid in inferring user actions from the
recordings, and a user study confirms the usefulness of our generated step
descriptions in assisting developers with bug replay.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 05:35:31 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Feng",
"Sidong",
""
],
[
"Xie",
"Mulong",
""
],
[
"Xue",
"Yinxing",
""
],
[
"Chen",
"Chunyang",
""
]
] |
new_dataset
| 0.978696 |
2302.00906
|
Guodong Wang
|
Guodong Wang, Shengwei Liu, Hongwei Liu
|
New Constructions of Optimal Binary LCD Codes
|
28 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear complementary dual (LCD) codes can provide an optimum linear coding
solution for the two-user binary adder channel. LCD codes can also be used to
protect against side-channel attacks and fault injection attacks. Let $d_{LCD}(n,
k)$ denote the maximum value of $d$ for which a binary $[n,k, d]$ LCD code
exists. In \cite{BS21}, Bouyuklieva conjectured that $d_{LCD}(n+1,
k)=d_{LCD}(n, k)$ or $d_{LCD}(n, k) + 1$ for any length $n$ and dimension $k \ge
2$. In this paper, we first prove Bouyuklieva's conjecture \cite{BS21} by
constructing a binary $[n,k,d-1]$ LCD code from a binary $[n+1,k,d]$
$LCD_{o,e}$ code, when $d \ge 3$ and $k \ge 2$. Then we provide a distance
lower bound for binary LCD codes via expanded codes, and using this bound
together with methods such as puncturing, shortening, expanding, and
extension, we construct
some new binary LCD codes. Finally, we improve some previously known values of
$d_{LCD}(n, k)$ of lengths $38 \le n \le 40$ and dimensions $9 \le k \le 15$.
We also obtain some values of $d_{LCD}(n, k)$ with $41 \le n \le 50$ and $6 \le
k \le n-6$.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 06:51:58 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Wang",
"Guodong",
""
],
[
"Liu",
"Shengwei",
""
],
[
"Liu",
"Hongwei",
""
]
] |
new_dataset
| 0.963723 |
2302.00916
|
Gerasimos Arvanitis
|
Gerasimos Arvanitis, Nikolaos Stagakis, Evangelia I. Zacharaki,
Konstantinos Moustakas
|
Cooperative Saliency-based Obstacle Detection and AR Rendering for
Increased Situational Awareness
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous vehicles are expected to operate safely in real-life road
conditions in the coming years. Nevertheless, unanticipated events, such as
the presence of unexpected objects on the road, can put safety at risk. The
advancement of sensing and communication technologies and the Internet of
Things may facilitate the recognition of hazardous situations and information
exchange in a cooperative driving scheme, providing new opportunities for the
increase of collaborative situational awareness. Safe and unobtrusive
visualization of the obtained information may nowadays be enabled through the
adoption of novel Augmented Reality (AR) interfaces in the form of windshields.
Motivated by these technological opportunities, we propose in this work a
saliency-based distributed, cooperative obstacle detection and rendering scheme
for increasing the driver's situational awareness through (i) automated
obstacle detection, (ii) AR visualization and (iii) information sharing
(upcoming potential dangers) with other connected vehicles or road
infrastructure. An extensive evaluation study using a variety of real datasets
for pothole detection showed that the proposed method provides favorable
results and features compared to other recent and relevant approaches.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 07:32:13 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Arvanitis",
"Gerasimos",
""
],
[
"Stagakis",
"Nikolaos",
""
],
[
"Zacharaki",
"Evangelia I.",
""
],
[
"Moustakas",
"Konstantinos",
""
]
] |
new_dataset
| 0.996564 |
2302.00926
|
Yiming Du
|
Yiming Du, Zhuotian Li, Qian He, Thomas Wetere Tulu, Kei Hang Katie
Chan, Lin Wang, Sen Pei, Xiao-Ke Xu and Xiao Fan Liu
|
DPCIPI: A pre-trained deep learning model for estimation of
cross-immunity between drifted strains of Influenza A/H3N2
| null | null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivation: This study aims to develop a novel model called DNA Pretrained
Cross-Immunity Protection Inference Model (DPCIPI) to predict the
cross-immunity of influenza virus strains. The traditional method for measuring
this is through HI experiments, which are costly and time-consuming. The DPCIPI
model uses a pre-trained neural network to vectorize the gene sequences of
viruses and predicts the degree of cross-immunity between them. Method: The
paper describes the steps taken to develop the DPCIPI model. First, the gene
sequences of the two viruses are converted into k-mers. Then, the k-mer
sequences are aligned, and identical k-mers at the same position are deleted.
The human DNA pre-trained model (DNABERT) is used to vectorize each of the
remaining k-mers. A BiLSTM encoder is then used to encode the two
viruses into sequence representation and embeddings. An information fusion
operation is then performed on the two embeddings to obtain a splicing vector,
which is further input into a fully connected neural network for prediction.
All parameters of the model are trained simultaneously. Result: Binary
cross-immunity prediction predicts whether the HI titer between two viruses
exceeds a certain threshold (in our case, an HI titer measurement value higher
than 40). Compared with baseline methods such as Logistic Regression,
Perceptron, Decision Tree, and CNN-based models, DPCIPI achieves better
performance: F1 (88.14%), precision (90.40%), recall (89.69%), and accuracy
(89.69%). Multi-level cross-immunity prediction predicts different HI titer
intervals. Again, DPCIPI's performance surpasses baseline models. The study
concludes that DPCIPI has enormous potential for predicting the cross-immunity
between influenza virus strains.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 07:56:46 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Du",
"Yiming",
""
],
[
"Li",
"Zhuotian",
""
],
[
"He",
"Qian",
""
],
[
"Tulu",
"Thomas Wetere",
""
],
[
"Chan",
"Kei Hang Katie",
""
],
[
"Wang",
"Lin",
""
],
[
"Pei",
"Sen",
""
],
[
"Xu",
"Xiao-Ke",
""
],
[
"Liu",
"Xiao Fan",
""
]
] |
new_dataset
| 0.978799 |
2302.01015
|
Farhad Modaresi
|
Farhad Modaresi, Matthew Guthaus, Jason K. Eshraghian
|
OpenSpike: An OpenRAM SNN Accelerator
|
The design is open sourced and available online:
https://github.com/sfmth/OpenSpike
| null | null | null |
cs.AR cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a spiking neural network (SNN) accelerator made using
fully open-source EDA tools, process design kit (PDK), and memory macros
synthesized using OpenRAM. The chip is taped out in the 130 nm SkyWater process
and integrates over 1 million synaptic weights, and offers a reprogrammable
architecture. It operates at a clock speed of 40 MHz, a supply of 1.8 V, uses a
PicoRV32 core for control, and occupies an area of 33.3 mm^2. The throughput of
the accelerator is 48,262 images per second with a wallclock time of 20.72 us,
at 56.8 GOPS/W. The spiking neurons use hysteresis to provide an adaptive
threshold (i.e., a Schmitt trigger) which can reduce state instability. This
results in high performing SNNs across a range of benchmarks that remain
competitive with state-of-the-art, full precision SNNs. The design is open
sourced and available online: https://github.com/sfmth/OpenSpike
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 11:06:29 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Modaresi",
"Farhad",
""
],
[
"Guthaus",
"Matthew",
""
],
[
"Eshraghian",
"Jason K.",
""
]
] |
new_dataset
| 0.999725 |
2302.01027
|
Kerr Fitzgerald
|
Kerr Fitzgerald, Bogdan Matuszewski
|
FCB-SwinV2 Transformer for Polyp Segmentation
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polyp segmentation within colonoscopy video frames using deep learning models
has the potential to automate the workflow of clinicians. This could help
improve the early detection rate and characterization of polyps which could
progress to colorectal cancer. Recent state-of-the-art deep learning polyp
segmentation models have combined the outputs of Fully Convolutional Network
architectures and Transformer Network architectures which work in parallel. In
this paper we propose modifications to the current state-of-the-art polyp
segmentation model FCBFormer. The transformer architecture of the FCBFormer is
replaced with a SwinV2 Transformer-UNET and minor changes to the Fully
Convolutional Network architecture are made to create the FCB-SwinV2
Transformer. The performance of the FCB-SwinV2 Transformer is evaluated on the
popular colonoscopy segmentation benchmarking datasets Kvasir-SEG and
CVC-ClinicDB. Generalizability tests are also conducted. The FCB-SwinV2
Transformer is able to consistently achieve higher mDice scores across all
tests conducted and therefore represents new state-of-the-art performance.
Issues found with how colonoscopy segmentation model performance is evaluated
in the literature are also reported and discussed. One of the most important
issues identified is that when evaluating performance on the CVC-ClinicDB
dataset it would be preferable to ensure no data leakage from video sequences
occurs during the training/validation/test data partition.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 11:42:26 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Fitzgerald",
"Kerr",
""
],
[
"Matuszewski",
"Bogdan",
""
]
] |
new_dataset
| 0.97939 |
2302.01145
|
Erik Demaine
|
Aviv Adler, Joshua Ani, Lily Chung, Michael Coulombe, Erik D. Demaine,
Yevhenii Diomidov, Dylan Hendrickson, Jayson Lynch
|
This Game Is Not Going To Analyze Itself
|
23 pages, 23 figures. Presented at JCDCGGG 2022
| null | null | null |
cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyze the puzzle video game This Game Is Not Going To Load Itself, where
the player routes data packets of three different colors from given sources to
given sinks of the correct color. Given the sources, sinks, and some previously
placed arrow tiles, we prove that the game is in Sigma_2^P; in NP for sources
of equal period; NP-complete for three colors and six equal-period sources with
player input; and even without player input, simulating the game is both NP-
and coNP-hard for two colors and many sources with different periods. On the
other hand, we characterize which locations for three data sinks admit a
perfect placement of arrow tiles that guarantee correct routing no matter the
placement of the data sources, effectively solving most instances of the game
as it is normally played.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 15:00:59 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Adler",
"Aviv",
""
],
[
"Ani",
"Joshua",
""
],
[
"Chung",
"Lily",
""
],
[
"Coulombe",
"Michael",
""
],
[
"Demaine",
"Erik D.",
""
],
[
"Diomidov",
"Yevhenii",
""
],
[
"Hendrickson",
"Dylan",
""
],
[
"Lynch",
"Jayson",
""
]
] |
new_dataset
| 0.994731 |
2302.01163
|
Frantisek Nekovar
|
Franti\v{s}ek Nekov\'a\v{r}, Jan Faigl, Martin Saska
|
Vehicle Fault-Tolerant Robust Power Transmission Line Inspection
Planning
|
Copyright 2022 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
|
2022 IEEE 27th International Conference on Emerging Technologies
and Factory Automation (ETFA)
|
10.1109/ETFA52439.2022.9921692
| null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper concerns fault-tolerant power transmission line inspection
planning as a generalization of the multiple traveling salesmen problem. The
addressed inspection planning problem is formulated as a single-depot
multiple-vehicle scenario, where the inspection vehicles are constrained by the
battery budget limiting their inspection time. The inspection vehicle is
assumed to be an autonomous multi-copter with a wide range of possible flight
speeds influencing battery consumption. The inspection plan is represented by
multiple routes for vehicles providing full coverage over inspection target
power lines. On an inspection vehicle mission interruption, which might happen
at any time during the execution of the inspection plan, the inspection is
re-planned using the remaining vehicles and their remaining battery budgets.
Robustness is introduced by choosing a suitable cost function for the initial
plan that maximizes the time window for successful re-planning. It enables the
remaining vehicles to successfully finish all the inspection targets using
their respective remaining battery budgets. A combinatorial metaheuristic
algorithm with various cost functions is used for planning and fast re-planning
during the inspection.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 15:39:57 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Nekovář",
"František",
""
],
[
"Faigl",
"Jan",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.999441 |
2302.01179
|
Frantisek Nekovar
|
Franti\v{s}ek Nekov\'a\v{r}, Jan Faigl, Martin Saska
|
Multi-Tour Set Traveling Salesman Problem in Planning Power Transmission
Line Inspection
|
Copyright 2021 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
|
IEEE Robotics and Automation Letters, vol. 6, no. 4, pp.
6196-6203, Oct. 2021
|
10.1109/LRA.2021.3091695
| null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter concerns optimal power transmission line inspection formulated as
a proposed generalization of the traveling salesman problem for a multi-route
one-depot scenario. The problem is formulated for an inspection vehicle with a
limited travel budget. Therefore, the solution can be composed of multiple runs
to provide full coverage of the given power lines. Besides, the solution
indicates how many vehicles can perform the inspection in a single run. The
optimal solution of the problem is solved by the proposed Integer Linear
Programming (ILP) formulation, which is, however, very computationally
demanding. Therefore, the computational requirements are addressed by the
combinatorial metaheuristic. The employed greedy randomized adaptive search
procedure is significantly less demanding while providing competitive solutions
and scales better with the problem size than the ILP-based approach. The
proposed formulation and algorithms are demonstrated in a real-world scenario
to inspect power line segments at the electrical substation.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 15:59:46 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Nekovář",
"František",
""
],
[
"Faigl",
"Jan",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.998833 |
2302.01204
|
Shenyang Huang
|
Shenyang Huang, Samy Coulombe, Yasmeen Hitti, Reihaneh Rabbany,
Guillaume Rabusseau
|
Laplacian Change Point Detection for Single and Multi-view Dynamic
Graphs
|
30 pages, 15 figures, extended version of previous paper "Laplacian
Change Point Detection for Dynamic Graphs" with novel material. arXiv admin
note: substantial text overlap with arXiv:2007.01229
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic graphs are rich data structures that are used to model complex
relationships between entities over time. In particular, anomaly detection in
temporal graphs is crucial for many real world applications such as intrusion
identification in network systems, detection of ecosystem disturbances and
detection of epidemic outbreaks. In this paper, we focus on change point
detection in dynamic graphs and address three main challenges associated with
this problem: i). how to compare graph snapshots across time, ii). how to
capture temporal dependencies, and iii). how to combine different views of a
temporal graph. To solve the above challenges, we first propose Laplacian
Anomaly Detection (LAD) which uses the spectrum of graph Laplacian as the low
dimensional embedding of the graph structure at each snapshot. LAD explicitly
models short term and long term dependencies by applying two sliding windows.
Next, we propose MultiLAD, a simple and effective generalization of LAD to
multi-view graphs. MultiLAD provides the first change point detection method
for multi-view dynamic graphs. It aggregates the singular values of the
normalized graph Laplacian from different views through the scalar power mean
operation. Through extensive synthetic experiments, we show that i). LAD and
MultiLAD are accurate and outperform state-of-the-art baselines and their
multi-view extensions by a large margin, ii). MultiLAD's advantage over
contenders significantly increases when additional views are available, and
iii). MultiLAD is highly robust to noise from individual views. In five real
world dynamic graphs, we demonstrate that LAD and MultiLAD identify significant
events as top anomalies such as the implementation of government COVID-19
interventions which impacted the population mobility in multi-view traffic
networks.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 16:30:43 GMT"
}
] | 2023-02-03T00:00:00 |
[
[
"Huang",
"Shenyang",
""
],
[
"Coulombe",
"Samy",
""
],
[
"Hitti",
"Yasmeen",
""
],
[
"Rabbany",
"Reihaneh",
""
],
[
"Rabusseau",
"Guillaume",
""
]
] |
new_dataset
| 0.997033 |