id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
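Each row below follows the schema above. As a minimal sketch of how one such row can be handled (the dict representation, helper function, and threshold are illustrative assumptions, not part of the dataset; the values are taken from the first record below, with nullable fields as `None`):

```python
# One record under the schema above; field names follow the table header.
record = {
    "id": "2301.12417",
    "submitter": "Christopher Lohse",
    "authors": "Christopher Lohse, Jeroen Lemsom and Athanasios Kalogiratos",
    "title": "Syrupy Mouthfeel and Hints of Chocolate -- Predicting Coffee "
             "Review Scores using Text Based Sentiment",
    "comments": None,       # nullable fields hold None when the cell is null
    "journal-ref": None,
    "doi": None,
    "report-no": None,
    "categories": "cs.CL",
    "license": "http://creativecommons.org/licenses/by-sa/4.0/",
    "prediction": "new_dataset",
    "probability": 0.996788,
}

# Illustrative filter: keep only high-confidence "new_dataset" predictions.
def is_confident_new_dataset(rec, threshold=0.99):
    return rec["prediction"] == "new_dataset" and rec["probability"] >= threshold

print(is_confident_new_dataset(record))  # -> True
```

The same check applies row by row to every record in this chunk, since `prediction` has a single class (`new_dataset`) and `probability` ranges over 0.95-1.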
2301.12417
|
Christopher Lohse
|
Christopher Lohse, Jeroen Lemsom and Athanasios Kalogiratos
|
Syrupy Mouthfeel and Hints of Chocolate -- Predicting Coffee Review
Scores using Text Based Sentiment
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper uses textual data contained in certified (q-graded) coffee reviews
to predict corresponding scores on a scale from 0-100. By transforming this
highly specialized and standardized textual data into a predictor space, we
construct regression models which accurately capture the patterns in
corresponding coffee bean scores.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 10:55:36 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Lohse",
"Christopher",
""
],
[
"Lemsom",
"Jeroen",
""
],
[
"Kalogiratos",
"Athanasios",
""
]
] |
new_dataset
| 0.996788 |
2301.12476
|
Zhenjie Zhao
|
Zhenjie Zhao, Hang Yu, Hang Wu, Xuebo Zhang
|
6-DoF Robotic Grasping with Transformer
| null | null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Robotic grasping aims to detect graspable points and their corresponding
gripper configurations in a particular scene, and is fundamental for robot
manipulation. Existing research works have demonstrated the potential of using
a transformer model for robotic grasping, which can efficiently learn both
global and local features. However, such methods are still limited in grasp
detection on a 2D plane. In this paper, we extend a transformer model for
6-Degree-of-Freedom (6-DoF) robotic grasping, which makes it more flexible and
suitable for tasks that concern safety. The key designs of our method are a
serialization module that turns a 3D voxelized space into a sequence of feature
tokens that a transformer model can consume, and skip-connections that merge
multiscale features effectively. In particular, our method takes a Truncated
Signed Distance Function (TSDF) as input. After serializing the TSDF, a
transformer model is utilized to encode the sequence, which can obtain a set of
aggregated hidden feature vectors through multi-head attention. We then decode
the hidden features to obtain per-voxel feature vectors through deconvolution
and skip-connections. Voxel feature vectors are then used to regress parameters
for executing grasping actions. On a recently proposed pile and packed grasping
dataset, we showcase that our transformer-based method can surpass existing
methods by about 5% in terms of success rates and declutter rates. We further
evaluate the running time and generalization ability to demonstrate the
superiority of the proposed method.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 15:59:28 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Zhao",
"Zhenjie",
""
],
[
"Yu",
"Hang",
""
],
[
"Wu",
"Hang",
""
],
[
"Zhang",
"Xuebo",
""
]
] |
new_dataset
| 0.993114 |
2301.12500
|
Sanda-Maria Avram Dr.
|
Sanda-Maria Avram
|
BERT-based Authorship Attribution on the Romanian Dataset called ROST
|
arXiv admin note: text overlap with arXiv:2211.05180
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Having been around for decades, the problem of Authorship Attribution is still
very much in focus today. Some of the more recent instruments used are the
pre-trained language models, the most prevalent being BERT. Here we used such a
model to detect the authorship of texts written in the Romanian language. The
dataset used is highly unbalanced, i.e., there are significant differences in the number
of texts per author, the sources from which the texts were collected, the time
period in which the authors lived and wrote these texts, the medium intended to
be read (i.e., paper or online), and the type of writing (i.e., stories, short
stories, fairy tales, novels, literary articles, and sketches). The results are
better than expected, sometimes exceeding 87% macro-accuracy.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 17:37:29 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Avram",
"Sanda-Maria",
""
]
] |
new_dataset
| 0.999237 |
2301.12515
|
Jin Fang
|
Jin Fang, Dingfu Zhou, Jingjing Zhao, Chulin Tang, Cheng-Zhong Xu and
Liangjun Zhang
|
LiDAR-CS Dataset: LiDAR Point Cloud Dataset with Cross-Sensors for 3D
Object Detection
|
7 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
LiDAR devices are widely used in autonomous driving scenarios, and research on
3D point clouds has achieved remarkable progress over the past years. However,
deep learning-based methods heavily rely on annotated data and often face the
domain generalization problem. Unlike 2D images, whose domains are usually
related to texture information, the features extracted from a 3D point cloud
are affected by the distribution of the points. Due to the lack of a 3D domain
adaptation benchmark, the common practice is to train a model on one benchmark
(e.g., Waymo) and evaluate it on another dataset (e.g., KITTI). However, in
this setting there are two types of domain gaps, the scenario domain and the
sensor domain, making evaluation and analysis complicated and difficult. To
handle this situation, we propose the LiDAR Dataset with Cross-Sensors
(LiDAR-CS Dataset), which contains large-scale annotated LiDAR point clouds
captured under 6 groups of different sensors but with the same corresponding
scenarios, generated from a hybrid realistic LiDAR simulator. As far as we
know, the LiDAR-CS Dataset is the first dataset focused on the sensor (e.g.,
point distribution) domain gaps for 3D object detection in real traffic.
Furthermore, we evaluate and analyze the performance of several baseline
detectors on the LiDAR-CS benchmark and show its applications.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 19:10:35 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Fang",
"Jin",
""
],
[
"Zhou",
"Dingfu",
""
],
[
"Zhao",
"Jingjing",
""
],
[
"Tang",
"Chulin",
""
],
[
"Xu",
"Cheng-Zhong",
""
],
[
"Zhang",
"Liangjun",
""
]
] |
new_dataset
| 0.99985 |
2301.12556
|
Beniamino Accattoli
|
Beniamino Accattoli, Ugo Dal Lago, Gabriele Vanoni
|
A Log-Sensitive Encoding of Turing Machines in the $\lambda$-Calculus
|
arXiv admin note: substantial text overlap with arXiv:2203.00362
| null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This note modifies the reference encoding of Turing machines in the
$\lambda$-calculus by Dal Lago and Accattoli, which is tuned for time
efficiency, so as to accommodate logarithmic space. There are two main changes:
Turing machines now have *two* tapes, an input tape and a work tape, and the
input tape is encoded differently, because the reference encoding comes with a
linear space overhead for managing tapes, which is excessive for studying
logarithmic space.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 22:07:13 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Accattoli",
"Beniamino",
""
],
[
"Lago",
"Ugo Dal",
""
],
[
"Vanoni",
"Gabriele",
""
]
] |
new_dataset
| 0.989634 |
2301.12614
|
Gunnar Sigurdsson
|
Gunnar A. Sigurdsson, Jesse Thomason, Gaurav S. Sukhatme, Robinson
Piramuthu
|
RREx-BoT: Remote Referring Expressions with a Bag of Tricks
| null | null | null | null |
cs.RO cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Household robots operate in the same space for years. Such robots
incrementally build dynamic maps that can be used for tasks requiring remote
object localization. However, benchmarks in robot learning often test
generalization through inference on tasks in unobserved environments. In an
observed environment, locating an object is reduced to choosing from among all
object proposals in the environment, which may number in the 100,000s. Armed
with this intuition, using only a generic vision-language scoring model with
minor modifications for 3d encoding and operating in an embodied environment,
we demonstrate an absolute performance gain of 9.84% on remote object grounding
above state-of-the-art models for REVERIE and of 5.04% on FAO. When allowed to
pre-explore an environment, we also exceed the previous state-of-the-art
pre-exploration method on REVERIE. Additionally, we demonstrate our model on a
real-world TurtleBot platform, highlighting the simplicity and usefulness of
the approach. Our analysis outlines a "bag of tricks" essential for
accomplishing this task, from utilizing 3d coordinates and context, to
generalizing vision-language models to large 3d search spaces.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 02:19:19 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Sigurdsson",
"Gunnar A.",
""
],
[
"Thomason",
"Jesse",
""
],
[
"Sukhatme",
"Gaurav S.",
""
],
[
"Piramuthu",
"Robinson",
""
]
] |
new_dataset
| 0.999139 |
2301.12633
|
Zejun Zhang
|
Zejun Zhang, Zhenchang Xing, Xin Xia, Xiwei Xu, Liming Zhu, Qinghua Lu
|
Faster or Slower? Performance Mystery of Python Idioms Unveiled with
Empirical Evidence
|
12 pages, accepted to ICSE'2023
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The usage of Python idioms is popular among Python developers. In a formative
study of 101 performance-related questions about Python idioms on Stack
Overflow, we find that developers often get confused about the performance
impact of Python idioms and use anecdotal toy code or rely on personal project
experience, which is often contradictory in performance outcomes. There has been no
large-scale, systematic empirical evidence to reconcile these performance
debates. In the paper, we create a large synthetic dataset with 24,126 pairs of
non-idiomatic and functionally-equivalent idiomatic code for the nine unique
Python idioms identified in Zhang et al., and reuse a large real-project
dataset of 54,879 such code pairs provided by Zhang et al. We develop a
reliable performance measurement method to compare the speedup or slowdown of
idiomatic code against its non-idiomatic counterpart, and analyze the performance
discrepancies between the synthetic and real-project code, the relationships
between code features and performance changes, and the root causes of
performance changes at the bytecode level. We summarize our findings as some
actionable suggestions for using Python idioms.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 03:28:24 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Zhang",
"Zejun",
""
],
[
"Xing",
"Zhenchang",
""
],
[
"Xia",
"Xin",
""
],
[
"Xu",
"Xiwei",
""
],
[
"Zhu",
"Liming",
""
],
[
"Lu",
"Qinghua",
""
]
] |
new_dataset
| 0.998472 |
2301.12642
|
Jonathan Dunn
|
Jonathan Dunn
|
Exploring the Constructicon: Linguistic Analysis of a Computational CxG
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent work has formulated the task for computational construction grammar as
producing a constructicon given a corpus of usage. Previous work has evaluated
these unsupervised grammars using both internal metrics (for example, Minimum
Description Length) and external metrics (for example, performance on a
dialectology task). This paper instead takes a linguistic approach to
evaluation, first learning a constructicon and then analyzing its contents from
a linguistic perspective. This analysis shows that a learned constructicon can
be divided into nine major types of constructions, of which Verbal and Nominal
are the most common. The paper also shows that both the token and type
frequency of constructions can be used to model variation across registers and
dialects.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 03:51:08 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Dunn",
"Jonathan",
""
]
] |
new_dataset
| 0.986197 |
2301.12662
|
Chris Donahue
|
Chris Donahue, Antoine Caillon, Adam Roberts, Ethan Manilow, Philippe
Esling, Andrea Agostinelli, Mauro Verzetti, Ian Simon, Olivier Pietquin, Neil
Zeghidour, Jesse Engel
|
SingSong: Generating musical accompaniments from singing
| null | null | null | null |
cs.SD cs.AI cs.LG cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SingSong, a system that generates instrumental music to accompany
input vocals, potentially offering musicians and non-musicians alike an
intuitive new way to create music featuring their own voice. To accomplish
this, we build on recent developments in musical source separation and audio
generation. Specifically, we apply a state-of-the-art source separation
algorithm to a large corpus of music audio to produce aligned pairs of vocals
and instrumental sources. Then, we adapt AudioLM (Borsos et al., 2022) -- a
state-of-the-art approach for unconditional audio generation -- to be suitable
for conditional "audio-to-audio" generation tasks, and train it on the
source-separated (vocal, instrumental) pairs. In a pairwise comparison with the
same vocal inputs, listeners expressed a significant preference for
instrumentals generated by SingSong compared to those from a strong retrieval
baseline.
Sound examples at https://g.co/magenta/singsong
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 04:53:23 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Donahue",
"Chris",
""
],
[
"Caillon",
"Antoine",
""
],
[
"Roberts",
"Adam",
""
],
[
"Manilow",
"Ethan",
""
],
[
"Esling",
"Philippe",
""
],
[
"Agostinelli",
"Andrea",
""
],
[
"Verzetti",
"Mauro",
""
],
[
"Simon",
"Ian",
""
],
[
"Pietquin",
"Olivier",
""
],
[
"Zeghidour",
"Neil",
""
],
[
"Engel",
"Jesse",
""
]
] |
new_dataset
| 0.999061 |
2301.12740
|
Aleksey Novokhrestov
|
D.S. Belyakov, E.O. Kalinin, A.A. Konev, A.A. Shelupanov, A.K.
Novokhrestov
|
Life cycle models and security threats to a microcircuit during its
development and operation
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The growth of Internet of Things devices has shown the need to advance
information security in the development and operation of microcircuits, since
modern information systems are built around the latter. This article presents
the life cycle of secure chips used as a root of trust (Root of Trust) in
information systems. The main stages of the life cycle of protected
microcircuits are described, namely the life cycle models during development
and during operation by the end user.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 09:11:08 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Belyakov",
"D. S.",
""
],
[
"Kalinin",
"E. O.",
""
],
[
"Konev",
"A. A.",
""
],
[
"Shelupanov",
"A. A.",
""
],
[
"Novokhrestov",
"A. K.",
""
]
] |
new_dataset
| 0.988765 |
2301.12794
|
Serge Kernbach
|
Serge Kernbach
|
On mesoscale thermal dynamics of para- and ortho- isomers of water
| null | null | null | null |
cs.RO physics.chem-ph physics.ins-det
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work describes experiments on thermal dynamics of pure H2O excited by
hydrodynamic cavitation, which has been reported to facilitate the spin
conversion of para- and ortho-isomers at water interfaces. Previous
measurements by NMR and capillary methods of excited samples demonstrated
changes of proton density by 12-15%, the surface tension up to 15.7%, which can
be attributed to a non-equilibrium para-/ortho- ratio. Besides these changes, we
also expect a variation of heat capacity. Experiments use a differential
calorimetric approach with two devices: one with an active thermostat for
diathermic measurements, another is fully passive for long-term measurements.
Samples after excitation are degassed at -0.09MPa and thermally equalized in a
water bath. The conducted experiments demonstrated changes in the heat capacity of
experimental samples by 4.17%--5.72% measured in the transient dynamics within
60 min after excitation, which decreases to 2.08% in the steady-state dynamics
90-120 min after excitation. Additionally, we observed occurrence of thermal
fluctuations at the level of 10^-3 C relative temperature on 20-40 min
mesoscale dynamics and a long-term increase of such fluctuations in
experimental samples. Obtained results are reproducible in both devices and are
supported by previously published outcomes on four-photon scattering spectra in
the range from -1.5 to 1.5 cm^-1 and electrochemical reactivity in CO2 and H2O2
pathways. Based on these results, we propose a hypothesis about ongoing spin
conversion process on mesoscopic scales under weak influx of energy caused by
thermal, EM or geomagnetic factors; this enables explaining electrochemical and
thermal anomalies observed in long-term measurements.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 11:35:51 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Kernbach",
"Serge",
""
]
] |
new_dataset
| 0.997533 |
2301.12796
|
Malte Splietker
|
Malte Splietker and Sven Behnke
|
Rendering the Directional TSDF for Tracking and Multi-Sensor
Registration with Point-To-Plane Scale ICP
|
Published in Robotics and Autonomous Systems, 2023. arXiv admin note:
substantial text overlap with arXiv:2108.08115
| null |
10.1016/j.robot.2022.104337
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dense real-time tracking and mapping from RGB-D images is an important tool
for many robotic applications, such as navigation and manipulation. The
recently presented Directional Truncated Signed Distance Function (DTSDF) is an
augmentation of the regular TSDF that shows potential for more coherent maps
and improved tracking performance. In this work, we present methods for
rendering depth and color images from the DTSDF, making it a true drop-in
replacement for the regular TSDF in established trackers. We evaluate the
algorithm on well-established datasets and observe that our method improves
tracking performance and increases re-usability of mapped scenes. Furthermore,
we add color integration which notably improves color-correctness at adjacent
surfaces. Our novel formulation of combined ICP with frame-to-keyframe
photometric error minimization further improves tracking results. Lastly, we
introduce Sim3 point-to-plane ICP for refining pose priors in a multi-sensor
scenario with different scale factors.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 11:46:03 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Splietker",
"Malte",
""
],
[
"Behnke",
"Sven",
""
]
] |
new_dataset
| 0.999139 |
2301.12800
|
Marcus Carpenter
|
Marcus Carpenter, Chunbo Luo
|
Behavioural Reports of Multi-Stage Malware
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The extensive damage caused by malware requires anti-malware systems to be
constantly improved to prevent new threats. The current trend in malware
detection is to employ machine learning models to aid in the classification
process. We propose a new dataset with the objective of improving current
anti-malware systems. The focus of this dataset is to improve host-based
intrusion detection systems by providing API call sequences for thousands of
malware samples executed in Windows 10 virtual machines. A tutorial on how to
create and expand this dataset is provided along with a benchmark demonstrating
how to use this dataset to classify malware. The data contains long sequences
of API calls for each sample, and in order to create models that can be
deployed in resource constrained devices, three feature selection methods were
tested. The principal innovation, however, lies in the multi-label
classification system in which one sequence of APIs can be tagged with multiple
labels describing its malicious behaviours.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 11:51:02 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Carpenter",
"Marcus",
""
],
[
"Luo",
"Chunbo",
""
]
] |
new_dataset
| 0.999439 |
2301.12818
|
Conor McMenamin
|
Conor McMenamin and Vanesa Daza
|
Dynamic, Private, Anonymous, Collateralizable Commitments vs. MEV
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce DPACCs, a generalized commitment scheme based on smart contract
wallets and non-interactive zero knowledge proofs. DPACCs allow any smart
contract wallet holder to collateralize a claim, request, or commitment in
general, in a private and anonymous manner. DPACCs can prove arbitrarily much
or little about the wallet generating the commitment, and/or the transaction
which is being committed. This can be used to convince a prospective block
builder or relayer that the wallet generating the DPACC has enough funds to pay
required fees, that the wallet is committed to performing certain actions, and
importantly, that the wallet loses some amount of collateral if this commitment
is broken. DPACCs delegate typically expensive zero knowledge operations
off-chain, only requiring an additional one or two mapping checks when compared
to transactions being sent from basic externally owned accounts. We demonstrate
that DPACCs can be applied to effectively eliminate MEV in DeFi where it
currently occurs, shifting MEV instead to censorship. Although still a concern,
censorship can be made prohibitively expensive, making DPACCs a viable solution
to most sources of MEV.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 12:17:01 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"McMenamin",
"Conor",
""
],
[
"Daza",
"Vanesa",
""
]
] |
new_dataset
| 0.994036 |
2301.12827
|
Ashkan Mansouri Yarahmadi
|
Slavomira Schneidereit, Ashkan Mansouri Yarahmadi, Toni Schneidereit,
Michael Breuß, Marc Gebauer
|
YOLO-based Object Detection in Industry 4.0 Fischertechnik Model
Environment
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we extensively explore the suitability of YOLO architectures to
monitor the process flow across a Fischertechnik industry 4.0 application.
Specifically, different YOLO architectures in terms of size and complexity
design along with different prior-shapes assignment strategies are adopted. To
simulate the real-world factory environment, we prepared a rich dataset
augmented with different distortions that highly enhance, and in some cases
degrade, our image quality. The degradation is performed to account for
environmental variations, while the enhancements aim to compensate for the
color correlations that we faced while preparing our dataset. The analysis of our
conducted experiments shows the effectiveness of the presented approach
evaluated using different measures along with the training and validation
strategies that we tailored to tackle the unavoidable color correlations that
the problem at hand inherits by nature.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 12:29:03 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Schneidereit",
"Slavomira",
""
],
[
"Yarahmadi",
"Ashkan Mansouri",
""
],
[
"Schneidereit",
"Toni",
""
],
[
"Breuß",
"Michael",
""
],
[
"Gebauer",
"Marc",
""
]
] |
new_dataset
| 0.998879 |
2301.12843
|
Tobias Bühler
|
Tobias Bühler, Alexandros Milolidakis, Romain Jacob, Marco Chiesa,
Stefano Vissicchio and Laurent Vanbever
|
Oscilloscope: Detecting BGP Hijacks in the Data Plane
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The lack of security of the Internet routing protocol (BGP) has allowed
attackers to divert Internet traffic and consequently perpetrate service
disruptions, monetary frauds, and even citizen surveillance for decades.
State-of-the-art defenses rely on geo-distributed BGP monitors to detect rogue
BGP announcements. As we show, though, attackers can easily evade detection by
engineering their announcements.
This paper presents Oscilloscope, an approach to accurately detect BGP
hijacks by relying on real-time traffic analysis. As hijacks inevitably change
the characteristics of the diverted traffic, the key idea is to track these
changes in real time and flag them. The main challenge is that "normal"
Internet events (e.g., network reconfigurations, link failures, load balancing)
also change the underlying traffic characteristics - and they are way more
frequent than hijacks. Naive traffic analyses would hence lead to too many
false positives.
We observe that hijacks typically target a subset of the prefixes announced
by Internet service providers and only divert a subset of their traffic. In
contrast, normal events lead to more uniform changes across prefixes and
traffic. Oscilloscope uses this observation to filter out non-hijack events by
checking whether they affect multiple related prefixes or not.
Our experimental evaluation demonstrates that Oscilloscope quickly and
accurately detects hijacks in realistic traffic traces containing hundreds of
events.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 12:52:49 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Bühler",
"Tobias",
""
],
[
"Milolidakis",
"Alexandros",
""
],
[
"Jacob",
"Romain",
""
],
[
"Chiesa",
"Marco",
""
],
[
"Vissicchio",
"Stefano",
""
],
[
"Vanbever",
"Laurent",
""
]
] |
new_dataset
| 0.989688 |
2301.12852
|
Guillaume Allais
|
Jan de Muijnck-Hughes, Guillaume Allais, Edwin Brady
|
Type Theory as a Language Workbench
|
18 pages, Accepted for publication at ECVS
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Language Workbenches offer language designers an expressive environment in
which to create their DSLs. Similarly, research into mechanised meta-theory has
shown how dependently typed languages provide expressive environments to
formalise and study DSLs and their meta-theoretical properties. But can we
claim that dependently typed languages qualify as language workbenches? We
argue yes!
We have developed an exemplar DSL called Velo that showcases not only
dependently typed techniques to realise and manipulate IRs, but that
dependently typed languages make fine language workbenches. Velo is a simple
verified language with well-typed holes and comes with a complete compiler
pipeline: parser, elaborator, REPL, evaluator, and compiler passes.
Specifically, we describe our design choices for well-typed IR design, which
includes support for well-typed holes, how CSE is achieved in a well-typed
setting, and how the mechanised type-soundness proof for Velo is the source of
the evaluator.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 13:01:01 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"de Muijnck-Hughes",
"Jan",
""
],
[
"Allais",
"Guillaume",
""
],
[
"Brady",
"Edwin",
""
]
] |
new_dataset
| 0.974132 |
2301.12959
|
Ming Tao
|
Ming Tao, Bing-Kun Bao, Hao Tang, Changsheng Xu
|
GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis
|
11 pages
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Synthesizing high-fidelity complex images from text is challenging. Based on
large pretraining, the autoregressive and diffusion models can synthesize
photo-realistic images. Although these large models have shown notable
progress, there remain three flaws. 1) These models require tremendous training
data and parameters to achieve good performance. 2) The multi-step generation
design slows the image synthesis process heavily. 3) The synthesized visual
features are difficult to control and require delicately designed prompts. To
enable high-quality, efficient, fast, and controllable text-to-image synthesis,
we propose Generative Adversarial CLIPs, namely GALIP. GALIP leverages the
powerful pretrained CLIP model both in the discriminator and generator.
Specifically, we propose a CLIP-based discriminator. The complex scene
understanding ability of CLIP enables the discriminator to accurately assess
the image quality. Furthermore, we propose a CLIP-empowered generator that
induces the visual concepts from CLIP through bridge features and prompts. The
CLIP-integrated generator and discriminator boost training efficiency, and as a
result, our model only requires about 3% training data and 6% learnable
parameters, achieving comparable results to large pretrained autoregressive and
diffusion models. Moreover, our model achieves 120 times faster synthesis speed
and inherits the smooth latent space from GAN. The extensive experimental
results demonstrate the excellent performance of our GALIP. Code is available
at https://github.com/tobran/GALIP.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 14:58:23 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Tao",
"Ming",
""
],
[
"Bao",
"Bing-Kun",
""
],
[
"Tang",
"Hao",
""
],
[
"Xu",
"Changsheng",
""
]
] |
new_dataset
| 0.995354 |
2301.12969
|
Charles Li
|
Charles Li (CNRS, CEIAS)
|
Using n-aksaras to model Sanskrit and Sanskrit-adjacent texts
|
Perspectives of Digital Humanities in the Field of Buddhist Studies,
Universität Hamburg; Numata Center for Buddhist Studies; Khyentse Center
for Tibetan Buddhist Textual Scholarship, Jan 2023, Hamburg, Germany
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite -- or perhaps because of -- their simplicity, n-grams, or contiguous
sequences of tokens, have been used with great success in computational
linguistics since their introduction in the late 20th century. Recast as
k-mers, or contiguous sequences of monomers, they have also found applications
in computational biology. When applied to the analysis of texts, n-grams
usually take the form of sequences of words. But if we try to apply this model
to the analysis of Sanskrit texts, we are faced with the arduous task of,
firstly, resolving sandhi to split a phrase into words, and, secondly,
splitting long compounds into their components. This paper presents a simpler
method of tokenizing a Sanskrit text for n-grams, by using n-aksaras, or
contiguous sequences of aksaras. This model reduces the need for sandhi
resolution, making it much easier to use on raw text. It is also possible to
use this model on Sanskrit-adjacent texts, e.g., a Tamil commentary on a
Sanskrit text. As a test case, the commentaries on Amarakosa 1.0.1 have been
modelled as n-aksaras, showing patterns of text reuse across ten centuries and
nine languages. Some initial observations are made concerning Buddhist
commentarial practices.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 15:17:06 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Li",
"Charles",
"",
"CNRS, CEIAS"
]
] |
new_dataset
| 0.999568 |
2301.12984
|
Ciprian-Octavian Truică
|
Elena-Simona Apostol and Ciprian-Octavian Truică and Adrian
Paschke
|
ContCommRTD: A Distributed Content-based Misinformation-aware Community
Detection System for Real-Time Disaster Reporting
| null | null | null | null |
cs.SI cs.AI cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Real-time social media data can provide useful information on evolving
hazards. Alongside traditional methods of disaster detection, the integration
of social media data can considerably enhance disaster management. In this
paper, we investigate the problem of detecting geolocation-content communities
on Twitter and propose a novel distributed system that provides near real-time
information on hazard-related events and their evolution. We show
that content-based community analysis leads to better and faster dissemination
of reports on hazards. Our distributed disaster reporting system analyzes the
social relationship among worldwide geolocated tweets, and applies topic
modeling to group tweets by topics. Considering for each tweet the following
information: user, timestamp, geolocation, retweets, and replies, we create a
publisher-subscriber distribution model for topics. We use content similarity
and the proximity of nodes to create a new model for geolocation-content based
communities. Users can subscribe to different topics in specific geographical
areas or worldwide and receive real-time reports regarding these topics. As
misinformation can lead to increased damage if propagated in hazard-related
tweets, we propose a new deep learning model to detect fake news. The
misinformed tweets are then removed from display. We also show empirically the
scalability capabilities of the proposed system.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 15:28:47 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Apostol",
"Elena-Simona",
""
],
[
"Truică",
"Ciprian-Octavian",
""
],
[
"Paschke",
"Adrian",
""
]
] |
new_dataset
| 0.998798 |
2301.13013
|
Cong Yu
|
Cong Yu, Dongheng Zhang, Zhi Wu, Zhi Lu, Chunyang Xie, Yang Hu, Yan
Chen
|
RFPose-OT: RF-Based 3D Human Pose Estimation via Optimal Transport
Theory
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel framework, i.e., RFPose-OT, to enable the 3D
human pose estimation from Radio Frequency (RF) signals. Different from
existing methods that predict human poses from RF signals on the signal level
directly, we consider the structure difference between the RF signals and the
human poses, propose to transform the RF signals to the pose domain on the
feature level based on Optimal Transport (OT) theory, and generate human poses
from the transformed features. To evaluate RFPose-OT, we build a radio system
and a multi-view camera system to acquire the RF signal data and the
ground-truth human poses. The experimental results in basic indoor environment,
occlusion indoor environment, and outdoor environment, all demonstrate that
RFPose-OT can predict 3D human poses with higher precision than the
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 07:09:09 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Yu",
"Cong",
""
],
[
"Zhang",
"Dongheng",
""
],
[
"Wu",
"Zhi",
""
],
[
"Lu",
"Zhi",
""
],
[
"Xie",
"Chunyang",
""
],
[
"Hu",
"Yang",
""
],
[
"Chen",
"Yan",
""
]
] |
new_dataset
| 0.970787 |
2301.13018
|
Bowen Zhao
|
Bowen Zhao, Chen Chen, Shu-Tao Xia
|
DELTA: degradation-free fully test-time adaptation
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fully test-time adaptation aims at adapting a pre-trained model to the test
stream during real-time inference, which is urgently required when the test
distribution differs from the training distribution. Several efforts have been
devoted to improving adaptation performance. However, we find that two
unfavorable defects are concealed in the prevalent adaptation methodologies
like test-time batch normalization (BN) and self-learning. First, we reveal
that the normalization statistics in test-time BN are completely affected by
the currently received test samples, resulting in inaccurate estimates. Second,
we show that during test-time adaptation, the parameter update is biased
towards some dominant classes. In addition to the extensively studied test
stream with independent and class-balanced samples, we further observe that the
defects can be exacerbated in more complicated test environments, such as
(time) dependent or class-imbalanced data. We observe that previous approaches
work well in certain scenarios while showing performance degradation in others due
to their faults. In this paper, we provide a plug-in solution called DELTA for
Degradation-freE fuLly Test-time Adaptation, which consists of two components:
(i) Test-time Batch Renormalization (TBR), introduced to improve the estimated
normalization statistics. (ii) Dynamic Online re-weighTing (DOT), designed to
address the class bias within optimization. We investigate various test-time
adaptation methods on three commonly used datasets with four scenarios, and a
newly introduced real-world dataset. DELTA can help them deal with all
scenarios simultaneously, leading to SOTA performance.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 15:54:00 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Zhao",
"Bowen",
""
],
[
"Chen",
"Chen",
""
],
[
"Xia",
"Shu-Tao",
""
]
] |
new_dataset
| 0.997716 |
2301.13064
|
Jiaheng Hu
|
Jiaheng Hu, David Watkins, Peter Allen
|
Teleoperated Robot Grasping in Virtual Reality Spaces
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite recent advancement in virtual reality technology, teleoperating a
high DoF robot to complete dexterous tasks in cluttered scenes remains
difficult. In this work, we propose a system that allows the user to
teleoperate a Fetch robot to perform grasping in an easy and intuitive way,
through exploiting the rich environment information provided by the virtual
reality space. Our system has the benefit of easy transferability to different
robots and different tasks, and can be used without any expert knowledge. We
tested the system on a real Fetch robot, and a video demonstrating the
effectiveness of our system can be seen at https://youtu.be/1-xW2Bx_Cms.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 17:07:52 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Hu",
"Jiaheng",
""
],
[
"Watkins",
"David",
""
],
[
"Allen",
"Peter",
""
]
] |
new_dataset
| 0.990142 |
2301.13124
|
Georg Regal
|
Helmut Schrom-Feiertag, Georg Regal, Markus Murtinger
|
MED1stMR: Mixed Reality to Enhance the Training of Medical First Responders for
Challenging Contexts
| null | null | null | null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Mass-casualty incidents with a large number of injured persons caused by
human-made or by natural disasters are increasing globally. In such situations,
medical first responders (MFRs) need to perform diagnosis, basic life support,
or other first aid to help stabilize victims and keep them alive to wait for
the arrival of further support. Situational awareness and effective coping with
acute stressors is essential to enable first responders to take appropriate
action that saves lives.
Virtual Reality (VR) has been demonstrated in several domains to be a serious
alternative, and in some areas also a significant improvement to conventional
learning and training. Especially for the challenges in the training of MFRs,
it can be highly useful for practicing and learning domains where the context
of the training is not easily available. VR training offers controlled,
easy-to-create environments that can be created and trained repeatedly under
the same conditions.
As an advanced alternative to VR, Mixed Reality (MR) environments have the
potential to augment current VR training by providing a dynamic simulation of
an environment and hands-on practice on injured victims. Building on this
interpretation of MR, the main aim of MED1stMR is to develop a new generation
of MR training with haptic feedback for enhanced realism. In this workshop
paper, we will present the vision of the project and suggest questions for
discussion.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 18:01:32 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Schrom-Feiertag",
"Helmut",
""
],
[
"Regal",
"Georg",
""
],
[
"Murtinger",
"Markus",
""
]
] |
new_dataset
| 0.999133 |
2301.13126
|
Joel Niklaus
|
Joel Niklaus, Veton Matoshi, Pooja Rani, Andrea Galassi, Matthias
St\"urmer, Ilias Chalkidis
|
LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Lately, propelled by the phenomenal advances around the transformer
architecture, the legal NLP field has enjoyed spectacular growth. To measure
progress, well curated and challenging benchmarks are crucial. However, most
benchmarks are English-only, and in legal NLP specifically there is no
multilingual benchmark available yet. Additionally, many benchmarks are
saturated, with the best models clearly outperforming the best humans and
achieving near perfect scores. We survey the legal NLP literature and select 11
datasets covering 24 languages, creating LEXTREME. To provide a fair
comparison, we propose two aggregate scores, one based on the datasets and one
on the languages. The best baseline (XLM-R large) achieves both a dataset
aggregate score and a language aggregate score of 61.3. This indicates that
LEXTREME is still very challenging and leaves ample room for improvement. To
make it easy for researchers and practitioners to use, we release LEXTREME on
huggingface together with all the code required to evaluate models and a public
Weights and Biases project with all the runs.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 18:05:08 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Niklaus",
"Joel",
""
],
[
"Matoshi",
"Veton",
""
],
[
"Rani",
"Pooja",
""
],
[
"Galassi",
"Andrea",
""
],
[
"Stürmer",
"Matthias",
""
],
[
"Chalkidis",
"Ilias",
""
]
] |
new_dataset
| 0.999071 |
2301.13196
|
Shashank Rajput
|
Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason
D. Lee, Dimitris Papailiopoulos
|
Looped Transformers as Programmable Computers
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a framework for using transformer networks as universal computers
by programming them with specific weights and placing them in a loop. Our input
sequence acts as a punchcard, consisting of instructions and memory for data
read/writes. We demonstrate that a constant number of encoder layers can
emulate basic computing blocks, including embedding edit operations, non-linear
functions, function calls, program counters, and conditional branches. Using
these building blocks, we emulate a small instruction-set computer. This allows
us to map iterative algorithms to programs that can be executed by a looped,
13-layer transformer. We show how this transformer, instructed by its input,
can emulate a basic calculator, a basic linear algebra library, and in-context
learning algorithms that employ backpropagation. Our work highlights the
versatility of the attention mechanism, and demonstrates that even shallow
transformers can execute full-fledged, general-purpose programs.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 18:57:31 GMT"
}
] | 2023-01-31T00:00:00 |
[
[
"Giannou",
"Angeliki",
""
],
[
"Rajput",
"Shashank",
""
],
[
"Sohn",
"Jy-yong",
""
],
[
"Lee",
"Kangwook",
""
],
[
"Lee",
"Jason D.",
""
],
[
"Papailiopoulos",
"Dimitris",
""
]
] |
new_dataset
| 0.998984 |
2011.02626
|
Richard Peschke
|
R. Peschke, K. Nishimura, G. Varner
|
HDPython: A High Level Python Based Object-Oriented HDL Framework
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a High-Level Python-based Hardware Description Language
(HDPython), which uses Python as its source language and converts it to standard
VHDL. Compared to other approaches of building converters from a high-level
programming language into a hardware description language, this new approach
aims to maintain an object-oriented paradigm throughout the entire process.
Instead of removing all the high-level features from Python to make it into an
HDL, this approach goes the opposite way. It tries to show how certain features
from a high-level language can be implemented in an HDL, providing the
corresponding benefits of high-level programming for the user.
|
[
{
"version": "v1",
"created": "Thu, 5 Nov 2020 02:43:50 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 21:00:29 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Peschke",
"R.",
""
],
[
"Nishimura",
"K.",
""
],
[
"Varner",
"G.",
""
]
] |
new_dataset
| 0.999541 |
2108.13696
|
Chathura Gamage
|
Cheng Xue, Vimukthini Pinto, Chathura Gamage, Ekaterina Nikonova, Peng
Zhang, Jochen Renz
|
Phy-Q as a measure for physical reasoning intelligence
|
For the associated website, see https://github.com/phy-q/benchmark
| null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans are well-versed in reasoning about the behaviors of physical objects
and choosing actions accordingly to accomplish tasks, while it remains a major
challenge for AI. To facilitate research addressing this problem, we propose a
new testbed that requires an agent to reason about physical scenarios and take
an action appropriately. Inspired by the physical knowledge acquired in infancy
and the capabilities required for robots to operate in real-world environments,
we identify 15 essential physical scenarios. We create a wide variety of
distinct task templates, and we ensure all the task templates within the same
scenario can be solved by using one specific strategic physical rule. By having
such a design, we evaluate two distinct levels of generalization, namely the
local generalization and the broad generalization. We conduct an extensive
evaluation with human players, learning agents with varying input types and
architectures, and heuristic agents with different strategies. Inspired by how
human IQ is calculated, we define the physical reasoning quotient (Phy-Q score)
that reflects the physical reasoning intelligence of an agent using the
physical scenarios we considered. Our evaluation shows that 1) all agents are
far below human performance, and 2) learning agents, even with good local
generalization ability, struggle to learn the underlying physical reasoning
rules and fail to generalize broadly. We encourage the development of
intelligent agents that can reach the human level Phy-Q score. Website:
https://github.com/phy-q/benchmark
|
[
{
"version": "v1",
"created": "Tue, 31 Aug 2021 09:11:27 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 03:39:05 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Jan 2023 01:52:45 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Xue",
"Cheng",
""
],
[
"Pinto",
"Vimukthini",
""
],
[
"Gamage",
"Chathura",
""
],
[
"Nikonova",
"Ekaterina",
""
],
[
"Zhang",
"Peng",
""
],
[
"Renz",
"Jochen",
""
]
] |
new_dataset
| 0.999593 |
2202.06954
|
Enrico Russo
|
Enrico Russo, Gabriele Costa, Giacomo Longo, Alessandro Armando,
Alessio Merlo
|
LiDiTE: a Full-Fledged and Featherweight Digital Twin Framework
| null | null |
10.1109/TDSC.2023.3236798
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rising of the Cyber-Physical System (CPS) and the Industry 4.0 paradigms
demands the design and the implementation of Digital Twin Frameworks (DTFs)
that may support the quick build of reliable Digital Twins (DTs) for
experimental and testing purposes. Most of the current DTF proposals allow
generating DTs at a good pace but sacrifice generality, scalability, portability,
and completeness. As a consequence, current DTFs are mostly domain-specific and
hardly span several application domains (e.g., from simple IoT deployments to
the modeling of complex critical infrastructures). Furthermore, the generated
DTs often require a large amount of computational resources to run. In this
paper, we present LiDiTE, a novel DTF that overcomes the previous limitations
by, on the one hand, supporting the building of general-purpose DTs at a
fine-grained level, but, on the other hand, with a reduced resource footprint
w.r.t. the current state of the art. We show the characteristics of LiDiTE
by building the DT of a complex and real critical infrastructure (i.e., the
Smart Poligeneration Microgrid of the Savona Campus) and evaluating its
resource consumption. The source code of LiDiTE, as well as the experimental
dataset, is publicly available.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 08:15:40 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Russo",
"Enrico",
""
],
[
"Costa",
"Gabriele",
""
],
[
"Longo",
"Giacomo",
""
],
[
"Armando",
"Alessandro",
""
],
[
"Merlo",
"Alessio",
""
]
] |
new_dataset
| 0.99797 |
2202.10600
|
Nikolaus Howe
|
Nikolaus H. R. Howe, Simon Dufort-Labb\'e, Nitarshan Rajkumar,
Pierre-Luc Bacon
|
Myriad: a real-world testbed to bridge trajectory optimization and deep
learning
|
Updated to match version accepted at NeurIPS 2022
| null | null | null |
cs.LG cs.AI cs.SY eess.SY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Myriad, a testbed written in JAX for learning and planning in
real-world continuous environments. The primary contributions of Myriad are
threefold. First, Myriad provides machine learning practitioners access to
trajectory optimization techniques for application within a typical automatic
differentiation workflow. Second, Myriad presents many real-world optimal
control problems, ranging from biology to medicine to engineering, for use by
the machine learning community. Formulated in continuous space and time, these
environments retain some of the complexity of real-world systems often
abstracted away by standard benchmarks. As such, Myriad strives to serve as a
stepping stone towards application of modern machine learning techniques for
impactful real-world tasks. Finally, we use the Myriad repository to showcase a
novel approach for learning and control tasks. Trained in a fully end-to-end
fashion, our model leverages an implicit planning module over neural ordinary
differential equations, enabling simultaneous learning and planning with
complex environment dynamics.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 00:47:14 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 18:40:58 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Howe",
"Nikolaus H. R.",
""
],
[
"Dufort-Labbé",
"Simon",
""
],
[
"Rajkumar",
"Nitarshan",
""
],
[
"Bacon",
"Pierre-Luc",
""
]
] |
new_dataset
| 0.999756 |
2203.08299
|
Maximillian Chen
|
Maximillian Chen, Caitlyn Chen, Xiao Yu, Zhou Yu
|
FastKASSIM: A Fast Tree Kernel-Based Syntactic Similarity Metric
|
In EACL 2023. 21 pages, 13 figures, 4 tables. Code available at
https://github.com/jasonyux/FastKASSIM
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Syntax is a fundamental component of language, yet few metrics have been
employed to capture syntactic similarity or coherence at the utterance- and
document-level. The existing standard document-level syntactic similarity
metric is computationally expensive and performs inconsistently when faced with
syntactically dissimilar documents. To address these challenges, we present
FastKASSIM, a metric for utterance- and document-level syntactic similarity
which pairs and averages the most similar constituency parse trees between a
pair of documents based on tree kernels. FastKASSIM is more robust to syntactic
dissimilarities and runs up to 5.32 times faster than its predecessor over
documents in the r/ChangeMyView corpus. FastKASSIM's improvements allow us to
examine hypotheses in two settings with large documents. We find that
syntactically similar arguments on r/ChangeMyView tend to be more persuasive,
and that syntax is predictive of authorship attribution in the Australian High
Court Judgment corpus.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 22:33:26 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2022 05:54:34 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Oct 2022 03:47:17 GMT"
},
{
"version": "v4",
"created": "Fri, 27 Jan 2023 05:33:58 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Chen",
"Maximillian",
""
],
[
"Chen",
"Caitlyn",
""
],
[
"Yu",
"Xiao",
""
],
[
"Yu",
"Zhou",
""
]
] |
new_dataset
| 0.999233 |
2204.10776
|
Yuan Liu
|
Yuan Liu and Yilin Wen and Sida Peng and Cheng Lin and Xiaoxiao Long
and Taku Komura and Wenping Wang
|
Gen6D: Generalizable Model-Free 6-DoF Object Pose Estimation from RGB
Images
|
Camera ready version for ECCV2022, Project page:
https://liuyuan-pal.github.io/Gen6D/ Code:
https://github.com/liuyuan-pal/Gen6D
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we present a generalizable model-free 6-DoF object pose
estimator called Gen6D. Existing generalizable pose estimators either need
high-quality object models or require additional depth maps or object masks in
test time, which significantly limits their application scope. In contrast, our
pose estimator only requires some posed images of the unseen object and is able
to accurately predict the poses of the object in arbitrary environments. Gen6D
consists of an object detector, a viewpoint selector and a pose refiner, all of
which do not require the 3D object model and can generalize to unseen objects.
Experiments show that Gen6D achieves state-of-the-art results on two model-free
datasets: the MOPED dataset and a new GenMOP dataset collected by us. In
addition, on the LINEMOD dataset, Gen6D achieves competitive results compared
with instance-specific pose estimators. Project page:
https://liuyuan-pal.github.io/Gen6D/.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 15:48:29 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jan 2023 03:37:49 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Liu",
"Yuan",
""
],
[
"Wen",
"Yilin",
""
],
[
"Peng",
"Sida",
""
],
[
"Lin",
"Cheng",
""
],
[
"Long",
"Xiaoxiao",
""
],
[
"Komura",
"Taku",
""
],
[
"Wang",
"Wenping",
""
]
] |
new_dataset
| 0.990739 |
2205.00574
|
Martin Dieguez
|
Juan Pablo Aguilera and Mart\'in Di\'eguez and David Fern\'andez-Duque
and Brett McLean
|
Time and G\"odel: Fuzzy temporal reasoning in PSPACE
| null |
Workshop on Logic, Language, Information, and Computation
(WoLLIC), proceedings of the 28th International Workshop (September 2022),
pp. 18-35
|
10.1007/978-3-031-15298-6_2
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate a non-classical version of linear temporal logic whose
propositional fragment is G\"odel--Dummett logic (which is well known both as a
superintuitionistic logic and a t-norm fuzzy logic). We define the logic using
two natural semantics, a real-valued semantics and a bi-relational semantics,
and show that these indeed define one and the same logic. Although this G\"odel
temporal logic does not have any form of the finite model property for these
two semantics, we show that every falsifiable formula is falsifiable on a
finite quasimodel, which yields decidability of the logic. We then strengthen
this result by showing that this G\"odel temporal logic is PSPACE-complete.
|
[
{
"version": "v1",
"created": "Sun, 1 May 2022 22:48:37 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Aguilera",
"Juan Pablo",
""
],
[
"Diéguez",
"Martín",
""
],
[
"Fernández-Duque",
"David",
""
],
[
"McLean",
"Brett",
""
]
] |
new_dataset
| 0.993455 |
2205.10843
|
Ningyu Zhang
|
Yincen Qu, Ningyu Zhang, Hui Chen, Zelin Dai, Zezhong Xu, Chengming
Wang, Xiaoyu Wang, Qiang Chen, Huajun Chen
|
Commonsense Knowledge Salience Evaluation with a Benchmark Dataset in
E-commerce
|
Accepted to EMNLP 2022 (Findings)
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In e-commerce, the salience of commonsense knowledge (CSK) is beneficial for
widespread applications such as product search and recommendation. For example,
when users search for ``running'' in e-commerce, they would like to find
products highly related to running, such as ``running shoes'' rather than
``shoes''. Nevertheless, many existing CSK collections rank statements solely
by confidence scores, and there is no information about which ones are salient
from a human perspective. In this work, we define the task of supervised
salience evaluation, where given a CSK triple, the model is required to learn
whether the triple is salient or not. In addition to formulating the new task,
we also release a new Benchmark dataset of Salience Evaluation in E-commerce
(BSEE) and hope to promote related research on commonsense knowledge salience
evaluation. We conduct experiments in the dataset with several representative
baseline models. The experimental results show that salience evaluation is a
challenging task where models perform poorly on our evaluation set. We further
propose a simple but effective approach, PMI-tuning, which shows promise for
solving this novel problem. Code is available in
\url{https://github.com/OpenBGBenchmark/OpenBG-CSK}.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 15:01:23 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 10:22:31 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Qu",
"Yincen",
""
],
[
"Zhang",
"Ningyu",
""
],
[
"Chen",
"Hui",
""
],
[
"Dai",
"Zelin",
""
],
[
"Xu",
"Zezhong",
""
],
[
"Wang",
"Chengming",
""
],
[
"Wang",
"Xiaoyu",
""
],
[
"Chen",
"Qiang",
""
],
[
"Chen",
"Huajun",
""
]
] |
new_dataset
| 0.999506 |
2209.14901
|
Yanjun Gao
|
Yanjun Gao, Dmitriy Dligach, Timothy Miller, John Caskey, Brihat
Sharma, Matthew M Churpek, Majid Afshar
|
DR.BENCH: Diagnostic Reasoning Benchmark for Clinical Natural Language
Processing
|
Under review
| null |
10.1016/j.jbi.2023.104286
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The meaningful use of electronic health records (EHR) continues to progress
in the digital era with clinical decision support systems augmented by
artificial intelligence. A priority in improving provider experience is to
overcome information overload and reduce the cognitive burden so fewer medical
errors and cognitive biases are introduced during patient care. One major type
of medical error is diagnostic error due to systematic or predictable errors in
judgment that rely on heuristics. The potential for clinical natural language
processing (cNLP) to model diagnostic reasoning in humans with forward
reasoning from data to diagnosis, and thereby reduce the cognitive burden
and medical error, has not been investigated. Existing tasks to advance the
science in cNLP have largely focused on information extraction and named entity
recognition through classification tasks. We introduce a novel suite of tasks
coined as Diagnostic Reasoning Benchmarks, DR.BENCH, as a new benchmark for
developing and evaluating cNLP models with clinical diagnostic reasoning
ability. The suite includes six tasks from ten publicly available datasets
addressing clinical text understanding, medical knowledge reasoning, and
diagnosis generation. DR.BENCH is the first clinical suite of tasks designed to
be a natural language generation framework to evaluate pre-trained language
models. Experiments with state-of-the-art pre-trained generative language
models using large general domain models and models that were continually
trained on a medical corpus demonstrate opportunities for improvement when
evaluated in DR.BENCH. We share DR.BENCH as a publicly available GitLab
repository with a systematic approach to load and evaluate models for the cNLP
community.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 16:05:53 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Dec 2022 02:51:59 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Gao",
"Yanjun",
""
],
[
"Dligach",
"Dmitriy",
""
],
[
"Miller",
"Timothy",
""
],
[
"Caskey",
"John",
""
],
[
"Sharma",
"Brihat",
""
],
[
"Churpek",
"Matthew M",
""
],
[
"Afshar",
"Majid",
""
]
] |
new_dataset
| 0.999592 |
2209.15217
|
Seunghyuk Cho
|
Seunghyuk Cho, Juyong Lee, Dongwoo Kim
|
Hyperbolic VAE via Latent Gaussian Distributions
|
16 pages, 5 figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a Gaussian manifold variational auto-encoder (GM-VAE) whose latent
space consists of a set of Gaussian distributions. It is known that the set of
the univariate Gaussian distributions with the Fisher information metric form a
hyperbolic space, which we call a Gaussian manifold. To learn the VAE endowed
with the Gaussian manifolds, we propose a pseudo-Gaussian manifold normal
distribution based on the Kullback-Leibler divergence, a local approximation of
the squared Fisher-Rao distance, to define a density over the latent space. In
experiments, we demonstrate the efficacy of GM-VAE on two different tasks:
density estimation of image datasets and environment modeling in model-based
reinforcement learning. GM-VAE outperforms the other variants of hyperbolic-
and Euclidean-VAEs on density estimation tasks and shows competitive
performance in model-based reinforcement learning. We observe that our model
provides strong numerical stability, addressing a common limitation reported in
previous hyperbolic-VAEs.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 04:09:06 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jan 2023 11:02:38 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Cho",
"Seunghyuk",
""
],
[
"Lee",
"Juyong",
""
],
[
"Kim",
"Dongwoo",
""
]
] |
new_dataset
| 0.976945 |
2211.13780
|
Mengxin Zheng
|
Mengxin Zheng, Qian Lou, Fan Chen, Lei Jiang and Yongxin Zhu
|
CryptoLight: An Electro-Optical Accelerator for Fully Homomorphic
Encryption
|
2 pages, 2 figures
| null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Fully homomorphic encryption (FHE) protects data privacy in cloud computing
by enabling computations to directly occur on ciphertexts. To improve the
time-consuming FHE operations, we present an electro-optical (EO) FHE
accelerator, CryptoLight. Compared to prior FHE accelerators, on average,
CryptoLight reduces the latency of various FHE applications by >94.4% and the
energy consumption by >95%.
|
[
{
"version": "v1",
"created": "Thu, 24 Nov 2022 19:53:47 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jan 2023 15:53:56 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Zheng",
"Mengxin",
""
],
[
"Lou",
"Qian",
""
],
[
"Chen",
"Fan",
""
],
[
"Jiang",
"Lei",
""
],
[
"Zhu",
"Yongxin",
""
]
] |
new_dataset
| 0.998291 |
2212.00585
|
Matthew Ciolino
|
Matthew Ciolino, Grant Rosario, David Noever
|
Soft Labels for Rapid Satellite Object Detection
|
5 Pages, 5 Figures, 1 Tables, 22 References
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Soft labels in image classification are vector representations of an image's
true classification. In this paper, we investigate soft labels in the context
of satellite object detection. We propose using detections as the basis for a
new dataset of soft labels. Much of the effort in creating a high-quality model
is gathering and annotating the training data. If we could use a model to
generate a dataset for us, we could not only rapidly create datasets, but also
supplement existing open-source datasets. Using a subset of the xView dataset,
we train a YOLOv5 model to detect cars, planes, and ships. We then use that
model to generate soft labels for the second training set which we then train
and compare to the original model. We show that soft labels can be used to
train a model that is almost as accurate as a model trained on the original
data.
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2022 15:23:13 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Dec 2022 17:52:38 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Jan 2023 17:52:44 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Ciolino",
"Matthew",
""
],
[
"Rosario",
"Grant",
""
],
[
"Noever",
"David",
""
]
] |
new_dataset
| 0.997901 |
2301.07310
|
Jiangtao Gong
|
Yudan Wu, Shanhe You, Zixuan Guo, Xiangyang Li, Guyue Zhou, Jiangtao
Gong
|
MR.Brick: Designing A Remote Mixed-reality Educational Game System for
Promoting Children's Social & Collaborative Skills
|
14 pages, 9 figures
|
CHI2023
|
10.1145/3544548.3581041
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Children are one of the groups most influenced by COVID-19-related social
distancing, and a lack of contact with peers can limit their opportunities to
develop social and collaborative skills. However, remote socialization and
collaboration as an alternative approach is still a great challenge for
children. This paper presents MR.Brick, a Mixed Reality (MR) educational game
system that helps children adapt to remote collaboration. A controlled
experimental study involving 24 children aged six to ten was conducted to
compare MR.Brick with the traditional video game by measuring their social and
collaborative skills and analyzing their multi-modal playing behaviours. The
results showed that MR.Brick was more conducive to children's remote
collaboration experience than the traditional video game. Given the lack of
training systems designed for children to collaborate remotely, this study may
inspire interaction design and educational research in related fields.
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 05:14:57 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jan 2023 04:56:43 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Wu",
"Yudan",
""
],
[
"You",
"Shanhe",
""
],
[
"Guo",
"Zixuan",
""
],
[
"Li",
"Xiangyang",
""
],
[
"Zhou",
"Guyue",
""
],
[
"Gong",
"Jiangtao",
""
]
] |
new_dataset
| 0.990143 |
2301.10599
|
Zehua Ma
|
Zehua Ma, Hang Zhou, and Weiming Zhang
|
AnisoTag: 3D Printed Tag on 2D Surface via Reflection Anisotropy
|
To be published in Proceedings of the 2023 CHI Conference on Human
Factors in Computing Systems (CHI '23), April 23--28, 2023, Hamburg, Germany
| null |
10.1145/3544548.3581024
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the past few years, the widespread use of 3D printing technology has enabled
the growth of the market for 3D printed products. On Etsy, a website focused on
handmade items, hundreds of individual entrepreneurs are selling their 3D
printed products. Inspired by the positive effects of machine-readable tags,
like barcodes, on daily product marketing, we propose AnisoTag, a novel tagging
method to encode data on the 2D surface of 3D printed objects based on
reflection anisotropy. AnisoTag has an unobtrusive appearance and much lower
extraction computational complexity, contributing to a lightweight low-cost
tagging system for individual entrepreneurs. On AnisoTag, data are encoded by
the proposed tool as reflective anisotropic microstructures, which reflect
distinct illumination patterns when irradiated by a collimated laser.
Based on it, we implement a real-time detection prototype with inexpensive
hardware to determine the reflected illumination pattern and decode data
according to their mapping. We evaluate AnisoTag with various 3D printer
brands, filaments, and printing parameters, demonstrating its superior
usability, accessibility, and reliability for practical usage.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 14:17:49 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jan 2023 13:09:07 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Ma",
"Zehua",
""
],
[
"Zhou",
"Hang",
""
],
[
"Zhang",
"Weiming",
""
]
] |
new_dataset
| 0.999829 |
2301.10943
|
Jaemin Hong
|
Jaemin Hong and Sukyoung Ryu
|
Concrat: An Automatic C-to-Rust Lock API Translator for Concurrent
Programs
|
13 pages, 3 figures, 1 table, In Proceedings of the ACM/IEEE 45th
International Conference on Software Engineering (ICSE 2023)
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Concurrent programs suffer from data races. To prevent data races,
programmers use locks. However, programs can eliminate data races only when
they acquire and release the correct locks at the correct timing. The lock API of
C, the language in which a large portion of legacy system programs has been written, does not
validate the correct use of locks. On the other hand, Rust, a recently
developed system programming language, provides a lock API that guarantees the
correct use of locks via type checking. This makes rewriting legacy system
programs in Rust a promising way to retrofit safety into them. Unfortunately,
manual C-to-Rust translation is extremely laborious due to the discrepancies
between their lock APIs. Even the state-of-the-art automatic C-to-Rust
translator retains the C lock API, expecting developers to replace it with
the Rust lock API. In this work, we propose an automatic tool to replace the C
lock API with the Rust lock API. It facilitates C-to-Rust translation of
concurrent programs with less human effort than the current practice. Our tool
consists of a Rust code transformer that takes a lock summary as an input and a
static analyzer that efficiently generates precise lock summaries. We show that
the transformer is scalable and widely applicable while preserving the
semantics; it transforms 66 KLOC in 2.6 seconds and successfully handles 74% of
real-world programs. We also show that the analyzer is scalable and precise; it
analyzes 66 KLOC in 4.3 seconds.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 05:20:43 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jan 2023 11:21:43 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Hong",
"Jaemin",
""
],
[
"Ryu",
"Sukyoung",
""
]
] |
new_dataset
| 0.99843 |
2301.11357
|
Yucheng Zhou
|
Yucheng Zhou, Guodong Long
|
Multimodal Event Transformer for Image-guided Story Ending Generation
|
EACL 2023
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-guided story ending generation (IgSEG) aims to generate a story ending
based on given story plots and an ending image. Existing methods focus on
cross-modal feature fusion but overlook reasoning and mining implicit
information from story plots and ending image. To tackle this drawback, we
propose a multimodal event transformer, an event-based reasoning framework for
IgSEG. Specifically, we construct visual and semantic event graphs from story
plots and ending image, and leverage event-based reasoning to mine implicit
information within each single modality. Next, we connect visual and semantic
event graphs and utilize cross-modal fusion to integrate different-modality
features. In addition, we propose a multimodal injector to adaptively pass
essential information to the decoder. We also present an incoherence detection
mechanism to enhance contextual understanding of the story plot and the
robustness of our graph modeling. Experimental results show that our method
achieves state-of-the-art performance on image-guided story ending generation.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 19:10:07 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Zhou",
"Yucheng",
""
],
[
"Long",
"Guodong",
""
]
] |
new_dataset
| 0.998487 |
2301.11365
|
Magreth Mushi
|
Magreth Mushi, Yuchen Liu, Shreyas Sreenivasa, Ozgur Ozdemir, Ismail
Guvenc, Mihail Sichitiu, Rudra Dutta, and Russ Gyurek
|
Open RAN Testbeds with Controlled Air Mobility
| null | null | null | null |
cs.NI cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With its promise of increasing softwarization, improving disaggregability,
and creating an open-source based ecosystem in the area of Radio Access
Networks, the idea of Open RAN has generated rising interest in the community.
Even as the community races to provide and verify complete Open RAN systems,
the importance of verification of systems based on Open RAN under real-world
conditions has become clear, and testbed facilities for general use have been
envisioned, in addition to private testing facilities. Aerial robots, including
autonomous ones, are among the increasingly important and interesting clients
of RAN systems, but also present a challenge for testbeds. Based on our
experience in architecting and operating an advanced wireless testbed with
aerial robots as a primary citizen, we present considerations relevant to the
design of Open RAN testbeds, with particular attention to making such a testbed
capable of controlled experimentation with aerial clients. We also present
representative results from the NSF AERPAW testbed on Open RAN slicing,
programmable vehicles, and programmable radios.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 19:19:42 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Mushi",
"Magreth",
""
],
[
"Liu",
"Yuchen",
""
],
[
"Sreenivasa",
"Shreyas",
""
],
[
"Ozdemir",
"Ozgur",
""
],
[
"Guvenc",
"Ismail",
""
],
[
"Sichitiu",
"Mihail",
""
],
[
"Dutta",
"Rudra",
""
],
[
"Gyurek",
"Russ",
""
]
] |
new_dataset
| 0.979613 |
2301.11403
|
David Skillicorn
|
D. Nam and D.B. Skillicorn
|
Detecting Pump&Dump Stock Market Manipulation from Online Forums
| null | null | null | null |
cs.SI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The intersection of social media, low-cost trading platforms, and naive
investors has created an ideal situation for information-based market
manipulations, especially pump&dumps. Manipulators accumulate small-cap stocks,
disseminate false information on social media to inflate their price, and sell
at the peak. We collect a dataset of stocks whose price and volume profiles
have the characteristic shape of a pump&dump, and social media posts for those
same stocks that match the timing of the initial price rises. From these we
build predictive models for pump&dump events based on the language used in the
social media posts.
There are multiple difficulties: not every post will cause the intended
market reaction, some pump&dump events may be triggered by posts in other
forums, and there may be accidental confluences of post timing and market
movements. Nevertheless, our best model achieves a prediction accuracy of 85%
and an F1-score of 62%. Such a tool can provide early warning to investors and
regulators that a pump&dump may be underway.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 20:31:27 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Nam",
"D.",
""
],
[
"Skillicorn",
"D. B.",
""
]
] |
new_dataset
| 0.998984 |
2301.11406
|
Alexander Gutkin
|
Alexander Gutkin, Cibu Johny, Raiomond Doctor, Brian Roark, Richard
Sproat
|
Beyond Arabic: Software for Perso-Arabic Script Manipulation
|
Preprint to appear in the Proceedings of the 7th Arabic Natural
Language Processing Workshop (WANLP 2022) at EMNLP, Abu Dhabi, United Arab
Emirates, December 7-11, 2022. 7 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper presents an open-source software library that provides a set of
finite-state transducer (FST) components and corresponding utilities for
manipulating the writing systems of languages that use the Perso-Arabic script.
The operations include various levels of script normalization, including visual
invariance-preserving operations that subsume and go beyond the standard
Unicode normalization forms, as well as transformations that modify the visual
appearance of characters in accordance with the regional orthographies for
eleven contemporary languages from diverse language families. The library also
provides simple FST-based romanization and transliteration. We additionally
attempt to formalize the typology of Perso-Arabic characters by providing
one-to-many mappings from Unicode code points to the languages that use them.
While our work focuses on the Arabic script diaspora rather than Arabic itself,
this approach could be adopted for any language that uses the Arabic script,
thus providing a unified framework for treating a script family used by close
to a billion people.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 20:37:03 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Gutkin",
"Alexander",
""
],
[
"Johny",
"Cibu",
""
],
[
"Doctor",
"Raiomond",
""
],
[
"Roark",
"Brian",
""
],
[
"Sproat",
"Richard",
""
]
] |
new_dataset
| 0.997929 |
2301.11408
|
Alex Campbell
|
Alexander Campbell, Simeon Spasov, Nicola Toschi, Pietro Lio
|
DBGDGM: Dynamic Brain Graph Deep Generative Model
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Graphs are a natural representation of brain activity derived from functional
magnetic resonance imaging (fMRI) data. It is well known that clusters of anatomical
brain regions, known as functional connectivity networks (FCNs), encode
temporal relationships which can serve as useful biomarkers for understanding
brain function and dysfunction. Previous works, however, ignore the temporal
dynamics of the brain and focus on static graphs. In this paper, we propose a
dynamic brain graph deep generative model (DBGDGM) which simultaneously
clusters brain regions into temporally evolving communities and learns dynamic
unsupervised node embeddings. Specifically, DBGDGM represents brain graph nodes
as embeddings sampled from a distribution over communities that evolve over
time. We parameterise this community distribution using neural networks that
learn from subject and node embeddings as well as past community assignments.
Experiments demonstrate DBGDGM outperforms baselines in graph generation,
dynamic link prediction, and is comparable for graph classification. Finally,
an analysis of the learnt community distributions reveals overlap with known
FCNs reported in neuroscience literature.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 20:45:30 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Campbell",
"Alexander",
""
],
[
"Spasov",
"Simeon",
""
],
[
"Toschi",
"Nicola",
""
],
[
"Lio",
"Pietro",
""
]
] |
new_dataset
| 0.971577 |
2301.11422
|
Saad Nadeem
|
Donghoon Lee, Ellen Yorke, Masoud Zarepisheh, Saad Nadeem, Yu-Chi Hu
|
RMSim: Controlled Respiratory Motion Simulation on Static Patient Scans
|
Physics in Medicine & Biology 2023. Last two authors contributed
equally
| null |
10.1088/1361-6560/acb484
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work aims to generate realistic anatomical deformations from static
patient scans. Specifically, we present a method to generate these
deformations/augmentations via deep learning driven respiratory motion
simulation that provides the ground truth for validating deformable image
registration (DIR) algorithms and driving more accurate deep learning based
DIR. We present a novel 3D Seq2Seq deep learning respiratory motion simulator
(RMSim) that learns from 4D-CT images and predicts future breathing phases
given a static CT image. The predicted respiratory patterns, represented by
time-varying displacement vector fields (DVFs) at different breathing phases,
are modulated through auxiliary inputs of 1D breathing traces so that a larger
amplitude in the trace results in more significant predicted deformation.
Stacked 3D-ConvLSTMs are used to capture the spatial-temporal respiration
patterns. Training loss includes a smoothness loss in the DVF and mean-squared
error between the predicted and ground truth phase images. A spatial
transformer deforms the static CT with the predicted DVF to generate the
predicted phase image. 10-phase 4D-CTs of 140 internal patients were used to
train and test RMSim. The trained RMSim was then used to augment a public DIR
challenge dataset for training VoxelMorph to show the effectiveness of
RMSim-generated deformation augmentation. We validated our RMSim output with
both private and public benchmark datasets (healthy and cancer patients). The
proposed approach can be used for validating DIR algorithms as well as for
patient-specific augmentations to improve deep learning DIR algorithms. The
code, pretrained models, and augmented DIR validation datasets will be released
at https://github.com/nadeemlab/SeqX2Y.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 21:20:14 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Lee",
"Donghoon",
""
],
[
"Yorke",
"Ellen",
""
],
[
"Zarepisheh",
"Masoud",
""
],
[
"Nadeem",
"Saad",
""
],
[
"Hu",
"Yu-Chi",
""
]
] |
new_dataset
| 0.999504 |
2301.11436
|
Albrecht Kurze
|
Albrecht Kurze
|
Synesthetic Dice: Sensors, Actuators, And Mappings
|
In Workshop Sensory Sketching at International Conference on Human
Factors in Computing Systems (CHI22). April 22, 2022. 4 pages
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How bright can you cry? How loud does the sun shine? We developed a
multisensory and multimodal tool, the Loaded Dice, for use in co-design
workshops to research the design space of IoT usage scenarios. The Loaded Dice
incorporate the principle of technical synesthesia, being able to map any of
the included sensors to any of the included actuators. With just a turn of one
of the cubical devices it is possible to create a new combination. We discuss
the core principles of the Loaded Dice, what sensors and actuators are
included, how they relate to human senses, and how we realized a meaningful
mapping between sensors and actuators. We further discuss where we see
additional potential in the Loaded Dice to support synesthetic exploration - as
Synesthetic Dice - so that you can eventually find out who cries brighter.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 21:48:49 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Kurze",
"Albrecht",
""
]
] |
new_dataset
| 0.999192 |
2301.11471
|
Sergi Abadal
|
Bernat Ollé, Pau Talarn, Albert Cabellos-Aparicio, Filip Lemic,
Eduard Alarcón, and Sergi Abadal
|
Multi-channel Medium Access Control Protocols for Wireless Networks
within Computing Packages
|
Accepted for lecture presentation at IEEE ISCAS 2023
| null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
Wireless communications at the chip scale emerge as an interesting complement
to traditional wire-based approaches thanks to their low latency, inherent
broadcast nature, and capacity to bypass pin constraints. However, as current
trends push towards massive and bandwidth-hungry processor architectures, there
is a need for wireless chip-scale networks that exploit and share as many
channels as possible. In this context, this work addresses the issue of channel
sharing by exploring the design space of multi-channel Medium Access Control
(MAC) protocols for chip-scale networks. Distinct channel assignment strategies
for both random access and token passing are presented and evaluated under
realistic traffic patterns. It is shown that, even with the improvements
enabled by the multiple channels, both protocols maintain their intrinsic
advantages and disadvantages.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 00:24:50 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Ollé",
"Bernat",
""
],
[
"Talarn",
"Pau",
""
],
[
"Cabellos-Aparicio",
"Albert",
""
],
[
"Lemic",
"Filip",
""
],
[
"Alarcón",
"Eduard",
""
],
[
"Abadal",
"Sergi",
""
]
] |
new_dataset
| 0.999672 |
2301.11505
|
Zhe Ning
|
Zhe Ning, Yunhua Sun
|
Design of an FPGA-based USB 3.0 device controller
|
10 pages, 13 figures
| null | null | null |
cs.AR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Traditional FPGA-based USB 3.0 communication uses an external chip as a USB PHY,
or a USB controller that includes a USB PHY. This paper realizes a USB 3.0
controller using FPGA resources alone: FPGA logic implements the serial
interface engine, and an FPGA internal transceiver serves as the USB PHY. Slice
utilization after implementation is 4.59% on a Kintex-7 325T. Test results show
that USB 3.0 throughput exceeds 320 MB/s for both bulk-in and bulk-out transfers.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 02:48:21 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Ning",
"Zhe",
""
],
[
"Sun",
"Yunhua",
""
]
] |
new_dataset
| 0.999674 |
2301.11511
|
Akshin Singh
|
Akshin Singh, Smruti R. Sarangi
|
JASS: A Flexible Checkpointing System for NVM-based Systems
|
13 pages, 11 figures
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
NVM-based systems are naturally fit candidates for incorporating periodic
checkpointing (or snapshotting). This increases the reliability of the system,
makes it more immune to power failures, and reduces wasted work in especially
an HPC setup. The traditional line of thinking is to design a system that is
conceptually similar to transactional memory, where we log updates all the
time, and minimize the wasted work or alternatively the MTTR (mean time to
recovery). Such ``instant recovery'' systems allow the system to recover from a
point that is quite close to the point of failure. The penalty that we pay is
the prohibitive number of additional writes to the NVM.
We propose a paradigmatically different approach in this paper, where we
argue that in most practical settings such as regular HPC workloads or neural
network training, there is no need for such instant recovery. This means that
we can afford to lose some work, take periodic software-initiated checkpoints
and still meet the goals of the application. The key benefit of our scheme is
that we reduce write amplification substantially; this extends the life of NVMs
by roughly the same factor. We go a step further and design an adaptive system
that can minimize the WA given a target checkpoint latency, and show that our
control algorithm almost always performs near-optimally. Our scheme reduces the
WA by 2.3-96% as compared to the nearest competing work.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 03:14:09 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Singh",
"Akshin",
""
],
[
"Sarangi",
"Smruti R.",
""
]
] |
new_dataset
| 0.978426 |
2301.11564
|
Yaoxian Song
|
Yaoxian Song, Penglei Sun, Yi Ren, Yu Zheng, Yue Zhang
|
Learning 6-DoF Fine-grained Grasp Detection Based on Part Affordance
Grounding
|
10 pages, 3 figures, 7 tables
| null | null | null |
cs.RO cs.CL cs.CV cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Robotic grasping is a fundamental ability for a robot to interact with the
environment. Current methods focus on how to obtain a stable and reliable
grasping pose at the object level, while little work has studied part
(shape)-wise grasping, which is related to fine-grained grasping and robotic
affordance. Parts can be seen as the atomic elements composing an object, and
they carry rich semantic knowledge and a strong correlation with affordance.
However, the lack of a large part-wise 3D robotic dataset limits the development
of part representation learning and downstream applications. In this paper, we
propose a new large Language-guided SHape grAsPing datasEt (named Lang-SHAPE)
to learn 3D part-wise affordance and grasping ability. We design a novel
two-stage fine-grained robotic grasping network (named PIONEER), including a
novel 3D part language grounding model, and a part-aware grasp pose detection
model. To evaluate the effectiveness, we perform multi-level difficulty part
language grounding grasping experiments and deploy our proposed model on a real
robot. Results show our method achieves satisfactory performance and efficiency
in reference identification, affordance inference, and 3D part-aware grasping.
Our dataset and code are available on our project website
https://sites.google.com/view/lang-shape
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 07:00:54 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Song",
"Yaoxian",
""
],
[
"Sun",
"Penglei",
""
],
[
"Ren",
"Yi",
""
],
[
"Zheng",
"Yu",
""
],
[
"Zhang",
"Yue",
""
]
] |
new_dataset
| 0.998663 |
2301.11572
|
Tao Morisaki
|
Tao Morisaki, Masahiro Fujiwara, Yasutoshi Makino, Hiroyuki Shinoda
|
Noncontact Haptic Rendering of Static Contact with Convex Surface Using
Circular Movement of Ultrasound Focus on a Finger Pad
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A noncontact tactile stimulus can be presented by focusing airborne
ultrasound on the human skin. Focused ultrasound has recently been reported to
produce not only vibration but also static pressure sensation on the palm by
modulating the sound pressure distribution at a low frequency. This finding
expands the potential for tactile rendering in ultrasound haptics as static
pressure sensation is perceived with a high spatial resolution. In this study,
we verified that focused ultrasound can render a static pressure sensation
associated with contact with a small convex surface on a finger pad. This
static contact rendering enables noncontact tactile reproduction of a fine
uneven surface using ultrasound. In the experiments, four ultrasound foci were
simultaneously rotated in a circle on a finger pad at 5 Hz. When the orbit
radius was 3 mm, vibration and focal movements were barely perceptible, and the
stimulus was perceived as static pressure. Moreover, under this condition, the
pressure sensation rendered contact with a small convex surface with a radius
of 2 mm. The perceived intensity of the static contact sensation was equivalent
to a physical contact force of 0.24 N on average, which was 12 times the
radiation force physically applied to the skin.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 07:43:01 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Morisaki",
"Tao",
""
],
[
"Fujiwara",
"Masahiro",
""
],
[
"Makino",
"Yasutoshi",
""
],
[
"Shinoda",
"Hiroyuki",
""
]
] |
new_dataset
| 0.999426 |
2301.11867
|
Mario Román
|
Matt Earnshaw, James Hefford, Mario Román
|
The Produoidal Algebra of Process Decomposition
|
56 pages, 41 figures
| null | null | null |
cs.LO math.CT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the normal produoidal category of monoidal contexts over an
arbitrary monoidal category. In the same sense that a monoidal morphism
represents a process, a monoidal context represents an incomplete process: a
piece of a decomposition, possibly containing missing parts. We characterize
monoidal contexts in terms of universal properties. In particular, symmetric
monoidal contexts coincide with monoidal lenses, endowing them with a novel
universal property. We apply this algebraic structure to the analysis of
multi-party interaction protocols in arbitrary theories of processes.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 17:12:40 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Earnshaw",
"Matt",
""
],
[
"Hefford",
"James",
""
],
[
"Román",
"Mario",
""
]
] |
new_dataset
| 0.989496 |
2301.11880
|
Bin Duan
|
Bin Duan, Keshav Bhandari, Gaowen Liu and Yan Yan
|
Optical Flow Estimation in 360$^\circ$ Videos: Dataset, Model and
Application
|
20 pages, 14 figures, conference extension. arXiv admin note:
substantial text overlap with arXiv:2208.03620
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Optical flow estimation has been a long-lasting and fundamental problem in
the computer vision community. However, despite the advances of optical flow
estimation in perspective videos, its 360$^\circ$ video counterpart remains in
its infancy, primarily due to the shortage of benchmark datasets and the
failure to accommodate the omnidirectional nature of 360$^\circ$ videos. We
propose the first perceptually realistic 360$^\circ$ field-of-view video
benchmark dataset, namely FLOW360, with 40 different videos and 4,000 video
frames. We then conduct comprehensive characteristic analysis and extensive
comparisons with existing datasets, manifesting FLOW360's perceptual realism,
uniqueness, and diversity. Moreover, we present a novel Siamese representation
Learning framework for Omnidirectional Flow (SLOF) estimation, which is trained
in a contrastive manner via a hybrid loss that combines siamese contrastive and
optical flow losses. By training the model on random rotations of the input
omnidirectional frames, our proposed contrastive scheme accommodates the
omnidirectional nature of optical flow estimation in 360$^\circ$ videos,
resulting in significantly reduced prediction errors. The learning scheme is
further proven to be efficient by expanding our siamese learning scheme and
omnidirectional optical flow estimation to the egocentric activity recognition
task, where the classification accuracy is boosted by up to $\sim$26%. To
summarize, we study the problem of optical flow estimation in 360$^\circ$
videos from the perspectives of benchmark dataset, learning model, and practical
application. The FLOW360 dataset and code are available at
https://siamlof.github.io.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 17:50:09 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Duan",
"Bin",
""
],
[
"Bhandari",
"Keshav",
""
],
[
"Liu",
"Gaowen",
""
],
[
"Yan",
"Yan",
""
]
] |
new_dataset
| 0.997348 |
2301.11891
|
Stephen Goss
|
Stephen A. Goss, Robert J. Steininger, Dhruv Narayanan, Daniel V.
Olivença, Yutong Sun, Peng Qiu, Jim Amato, Eberhard O. Voit, Walter E.
Voit, Eric J. Kildebeck
|
Polycraft World AI Lab (PAL): An Extensible Platform for Evaluating
Artificial Intelligence Agents
|
27 pages, 5 figures
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As artificial intelligence research advances, the platforms used to evaluate
AI agents need to adapt and grow to continue to challenge them. We present the
Polycraft World AI Lab (PAL), a task simulator with an API based on the
Minecraft mod Polycraft World. Our platform is built to allow AI agents with
different architectures to easily interact with the Minecraft world, train and
be evaluated in multiple tasks. PAL enables the creation of tasks in a flexible
manner as well as having the capability to manipulate any aspect of the task
during an evaluation. All actions taken by AI agents and external actors
(non-player-characters, NPCs) in the open-world environment are logged to
streamline evaluation. Here we present two custom tasks on the PAL platform,
one focused on multi-step planning and one focused on navigation, and
evaluations of agents solving them. In summary, we report a versatile and
extensible AI evaluation platform with a low barrier to entry for AI
researchers to utilize.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 18:08:04 GMT"
}
] | 2023-01-30T00:00:00 |
[
[
"Goss",
"Stephen A.",
""
],
[
"Steininger",
"Robert J.",
""
],
[
"Narayanan",
"Dhruv",
""
],
[
"Olivença",
"Daniel V.",
""
],
[
"Sun",
"Yutong",
""
],
[
"Qiu",
"Peng",
""
],
[
"Amato",
"Jim",
""
],
[
"Voit",
"Eberhard O.",
""
],
[
"Voit",
"Walter E.",
""
],
[
"Kildebeck",
"Eric J.",
""
]
] |
new_dataset
| 0.99723 |
1910.08129
|
Jeffrey Kegler
|
Jeffrey Kegler
|
Marpa, A practical general parser: the recognizer
|
v2: Corrections and minor format improvements. v3: Rewrite
presentation of Leo algorithm. Corrections and minor format improvements
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Marpa recognizer is described. Marpa is a practical and fully implemented
algorithm for the recognition, parsing and evaluation of context-free grammars.
The Marpa recognizer is the first to unite the improvements to Earley's
algorithm found in Joop Leo's 1991 paper with those in Aycock and Horspool's
2002 paper. Marpa tracks the full state of the parse, as it proceeds, in a form
convenient for the application. This greatly improves error detection and
enables event-driven parsing. One such technique is "Ruby Slippers" parsing, in
which the input is altered in response to the parser's expectations.
|
[
{
"version": "v1",
"created": "Thu, 17 Oct 2019 19:45:18 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2022 17:39:02 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Jan 2023 19:12:42 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Kegler",
"Jeffrey",
""
]
] |
new_dataset
| 0.952765 |
2104.15040
|
Christopher Jefferson Dr
|
Joan Espasa, Ian P. Gent, Ruth Hoffmann, Christopher Jefferson, Alice
M. Lynch, András Salamon, Matthew J. McIlree
|
Using Small MUSes to Explain How to Solve Pen and Paper Puzzles
| null | null | null | null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present Demystify, a general tool for creating
human-interpretable step-by-step explanations of how to solve a wide range of
pen and paper puzzles from a high-level logical description. Demystify is based
on Minimal Unsatisfiable Subsets (MUSes), which allow Demystify to solve
puzzles as a series of logical deductions by identifying which parts of the
puzzle are required to progress. This paper makes three contributions over
previous work. First, we provide a generic input language, based on the Essence
constraint language, which allows us to easily use MUSes to solve a much wider
range of pen and paper puzzles. Second, we demonstrate that the explanations
that Demystify produces match those provided by humans by comparing our results
with those provided independently by puzzle experts on a range of puzzles. We
compare Demystify to published guides for solving a range of different pen and
paper puzzles and show that by using MUSes, Demystify produces solving
strategies which closely match human-produced guides to solving those same
puzzles (on average 89% of the time). Finally, we introduce a new randomised
algorithm to find MUSes for more difficult puzzles. This algorithm is focused
on optimised search for individual small MUSes.
|
[
{
"version": "v1",
"created": "Fri, 30 Apr 2021 15:07:51 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 16:39:19 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Espasa",
"Joan",
""
],
[
"Gent",
"Ian P.",
""
],
[
"Hoffmann",
"Ruth",
""
],
[
"Jefferson",
"Christopher",
""
],
[
"Lynch",
"Alice M.",
""
],
[
"Salamon",
"András",
""
],
[
"McIlree",
"Matthew J.",
""
]
] |
new_dataset
| 0.993926 |
2110.14180
|
Rui Peng
|
Rui Peng, Zehao Wang and Peng Lu
|
AeCoM: An Aerial Continuum Manipulator with Precise Kinematic Modeling
for Variable Loading and Tendon-slacking Prevention
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aerial robotic systems have attracted growing interest in recent years. In
this article, we propose a novel aerial manipulator system that is
significantly different from conventional aerial discrete manipulators: An
Aerial Continuum Manipulator (AeCoM). The AeCoM compactly integrates a
quadrotor with a tendon-driven continuum robotic manipulator. Due to the
compact design and the payload-bearing ability of tendon-driven continuum
robotic arms, the proposed system resolves the conflict between payload capacity
and dexterity found in conventional aerial manipulators. Two contributions are
made in this paper: 1) a sensor-based kinematic model is developed for precise
modeling in the presence of variable loading; and 2) a tendon slacking
prevention system is developed in the presence of aggressive motions. The
detailed design of the system is presented and extensive experimental
validations have been performed to validate the system self-initialization,
payload capacity, precise kinematic modeling with variable end-effector (EE)
loadings during aerial grasping and tendon-slacking prevention. The
experimental results demonstrate that the proposed novel aerial continuum
manipulator system solves the constraints in conventional aerial manipulators
and has more potential applications in clustered environments.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 05:27:57 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Feb 2022 10:26:41 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Jan 2023 08:23:08 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Peng",
"Rui",
""
],
[
"Wang",
"Zehao",
""
],
[
"Lu",
"Peng",
""
]
] |
new_dataset
| 0.993378 |
2203.10168
|
Keenan Burnett
|
Keenan Burnett, David J. Yoon, Yuchen Wu, Andrew Zou Li, Haowei Zhang,
Shichen Lu, Jingxing Qian, Wei-Kang Tseng, Andrew Lambert, Keith Y.K. Leung,
Angela P. Schoellig, Timothy D. Barfoot
|
Boreas: A Multi-Season Autonomous Driving Dataset
|
Accepted in IJRR as a data paper
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Boreas dataset was collected by driving a repeated route over the course
of one year, resulting in stark seasonal variations and adverse weather
conditions such as rain and falling snow. In total, the Boreas dataset includes
over 350km of driving data featuring a 128-channel Velodyne Alpha Prime lidar,
a 360$^\circ$ Navtech CIR304-H scanning radar, a 5MP FLIR Blackfly S camera,
and centimetre-accurate post-processed ground truth poses. Our dataset will
support live leaderboards for odometry, metric localization, and 3D object
detection. The dataset and development kit are available at
https://www.boreas.utias.utoronto.ca
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 21:40:50 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 17:13:52 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Burnett",
"Keenan",
""
],
[
"Yoon",
"David J.",
""
],
[
"Wu",
"Yuchen",
""
],
[
"Li",
"Andrew Zou",
""
],
[
"Zhang",
"Haowei",
""
],
[
"Lu",
"Shichen",
""
],
[
"Qian",
"Jingxing",
""
],
[
"Tseng",
"Wei-Kang",
""
],
[
"Lambert",
"Andrew",
""
],
[
"Leung",
"Keith Y. K.",
""
],
[
"Schoellig",
"Angela P.",
""
],
[
"Barfoot",
"Timothy D.",
""
]
] |
new_dataset
| 0.999845 |
2203.10171
|
Katherine Riley
|
Katherine S. Riley (1), Subhadeep Koner (2), Juan C. Osorio (1),
Yongchao Yu (2), Harith Morgan (1), Janav P. Udani (1), Stephen A. Sarles
(2), and Andres F. Arrieta (1) ((1) School of Mechanical Engineering, Purdue
University, West Lafayette, USA, (2) Department of Mechanical, Aerospace and
Biomedical Engineering, University of Tennessee, Knoxville, USA)
|
Neuromorphic metamaterials for mechanosensing and perceptual associative
learning
|
Manuscript: 13 pages, 4 figures, 1 table; Supplementary Information:
11 pages, 17 figures, 2 tables
| null |
10.1002/aisy.202200158
| null |
cs.ET cond-mat.mtrl-sci
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Physical systems exhibiting neuromechanical functions promise to enable
structures with directly encoded autonomy and intelligence. We report on a
class of neuromorphic metamaterials embodying bioinspired mechanosensing,
memory, and learning functionalities obtained by leveraging mechanical
instabilities and flexible memristive materials. Our prototype system comprises
a multistable metamaterial whose bistable units filter, amplify, and transduce
external mechanical inputs over large areas into simple electrical signals
using piezoresistivity. We record these mechanically transduced signals using
non-volatile flexible memristors that remember sequences of mechanical inputs,
providing a means to store spatially distributed mechanical signals in
measurable material states. The accumulated memristance changes resulting from
the sequential mechanical inputs allow us to physically encode a Hopfield
network into our neuromorphic metamaterials. This physical network learns a
series of external spatially distributed input patterns. Crucially, the learned
patterns input into our neuromorphic metamaterials can be retrieved from the
final accumulated state of our memristors. Therefore, our system exhibits the
ability to learn without supervised training and retain spatially distributed
inputs with minimal external overhead. Our system's embodied mechanosensing,
memory, and learning capabilities establish an avenue for synthetic
neuromorphic metamaterials enabling the learning of touch-like sensations
covering large areas for robotics, autonomous systems, wearables, and morphing
structures.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 21:49:49 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Riley",
"Katherine S.",
""
],
[
"Koner",
"Subhadeep",
""
],
[
"Osorio",
"Juan C.",
""
],
[
"Yu",
"Yongchao",
""
],
[
"Morgan",
"Harith",
""
],
[
"Udani",
"Janav P.",
""
],
[
"Sarles",
"Stephen A.",
""
],
[
"Arrieta",
"Andres F.",
""
]
] |
new_dataset
| 0.976759 |
2204.09145
|
Jos\'e Ca\~nete
|
Jos\'e Ca\~nete, Sebasti\'an Donoso, Felipe Bravo-Marquez, Andr\'es
Carvallo and Vladimir Araujo
|
ALBETO and DistilBETO: Lightweight Spanish Language Models
|
Accepted paper at LREC2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years there have been considerable advances in pre-trained language
models, where non-English language versions have also been made available. Due
to their increasing use, many lightweight versions of these models (with
reduced parameters) have also been released to speed up training and inference
times. However, versions of these lighter models (e.g., ALBERT, DistilBERT) for
languages other than English are still scarce. In this paper we present ALBETO
and DistilBETO, which are versions of ALBERT and DistilBERT pre-trained
exclusively on Spanish corpora. We train several versions of ALBETO ranging
from 5M to 223M parameters and one of DistilBETO with 67M parameters. We
evaluate our models in the GLUES benchmark that includes various natural
language understanding tasks in Spanish. The results show that our lightweight
models achieve competitive results to those of BETO (Spanish-BERT) despite
having fewer parameters. More specifically, our larger ALBETO model outperforms
all other models on the MLDoc, PAWS-X, XNLI, MLQA, SQAC and XQuAD datasets.
However, BETO remains unbeaten for POS and NER. As a further contribution, all
models are publicly available to the community for future research.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 22:07:34 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jan 2023 19:38:49 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Cañete",
"José",
""
],
[
"Donoso",
"Sebastián",
""
],
[
"Bravo-Marquez",
"Felipe",
""
],
[
"Carvallo",
"Andrés",
""
],
[
"Araujo",
"Vladimir",
""
]
] |
new_dataset
| 0.98247 |
2205.00825
|
Devansh Jalota
|
Devansh Jalota and Yinyu Ye
|
Stochastic Online Fisher Markets: Static Pricing Limits and Adaptive
Enhancements
| null | null | null | null |
cs.GT cs.LG econ.TH math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
In a Fisher market, agents (users) spend a budget of (artificial) currency to
buy goods that maximize their utilities while a central planner sets prices on
capacity-constrained goods such that the market clears. However, the efficacy
of pricing schemes in achieving an equilibrium outcome in Fisher markets
typically relies on complete knowledge of users' budgets and utilities and
requires that transactions happen in a static market wherein all users are
present simultaneously.
As a result, we study an online variant of Fisher markets, wherein
budget-constrained users with privately known utility and budget parameters,
drawn i.i.d. from a distribution $\mathcal{D}$, enter the market sequentially.
In this setting, we develop an algorithm that adjusts prices solely based on
observations of user consumption, i.e., revealed preference feedback, and
achieves a regret and capacity violation of $O(\sqrt{n})$, where $n$ is the
number of users and the good capacities scale as $O(n)$. Here, our regret
measure is the optimality gap in the objective of the Eisenberg-Gale program
between an online algorithm and an offline oracle with complete information on
users' budgets and utilities. To establish the efficacy of our approach, we
show that any uniform (static) pricing algorithm, including one that sets
expected equilibrium prices with complete knowledge of the distribution
$\mathcal{D}$, cannot achieve both a regret and constraint violation of less
than $\Omega(\sqrt{n})$. While our revealed preference algorithm requires no
knowledge of the distribution $\mathcal{D}$, we show that if $\mathcal{D}$ is
known, then an adaptive variant of expected equilibrium pricing achieves
$O(\log(n))$ regret and constant capacity violation for discrete distributions.
Finally, we present numerical experiments to demonstrate the performance of our
revealed preference algorithm relative to several benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 05:03:45 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 21:29:44 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Jan 2023 06:15:43 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Jalota",
"Devansh",
""
],
[
"Ye",
"Yinyu",
""
]
] |
new_dataset
| 0.999267 |
2205.11652
|
Neil Giridharan
|
Ittai Abraham, Natacha Crooks, Neil Giridharan, Heidi Howard, Florian
Suri-Payer
|
BeeGees: stayin' alive in chained BFT
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern chained Byzantine Fault Tolerant (BFT) systems leverage a combination
of pipelining and leader rotation to obtain both efficiency and fairness. These
protocols, however, require a sequence of three or four consecutive honest
leaders to commit operations. Therefore, even simple leader failures such as
crashes can weaken liveness both theoretically and practically. Obtaining a
chained BFT protocol that reaches decisions even if the sequence of honest
leaders is non-consecutive remains an open question. To resolve this question
we present BeeGees, a novel chained BFT protocol that successfully commits
blocks even with non-consecutive honest leaders. It does this while also
maintaining quadratic word complexity with threshold signatures, linear word
complexity with SNARKs, and responsiveness between consecutive honest leaders.
BeeGees reduces the expected commit latency of HotStuff by a factor of three
under failures, and the worst-case latency by a factor of seven.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 22:11:19 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 18:32:44 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Abraham",
"Ittai",
""
],
[
"Crooks",
"Natacha",
""
],
[
"Giridharan",
"Neil",
""
],
[
"Howard",
"Heidi",
""
],
[
"Suri-Payer",
"Florian",
""
]
] |
new_dataset
| 0.982184 |
2206.13731
|
Tony Tan
|
Chia-Hsuan Lu, Tony Tan
|
On two-variable guarded fragment logic with expressive local Presburger
constraints
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We consider the extension of two-variable guarded fragment logic with local
Presburger quantifiers. These are quantifiers that can express properties such
as ``the number of incoming blue edges plus twice the number of outgoing red
edges is at most three times the number of incoming green edges'' and capture
various description logics with counting, but without constant symbols. We show
that the satisfiability of this logic is EXP-complete. While the lower bound
already holds for the standard two-variable guarded fragment logic, the upper
bound is established by a novel, yet simple, deterministic graph-theoretic
algorithm.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 03:35:51 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2022 13:58:30 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Jan 2023 15:04:04 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Lu",
"Chia-Hsuan",
""
],
[
"Tan",
"Tony",
""
]
] |
new_dataset
| 0.969733 |
2209.01541
|
Shihan Lin
|
Shihan Lin, Rui Xin, Aayush Goel, Xiaowei Yang
|
InviCloak: An End-to-End Approach to Privacy and Performance in Web
Content Distribution
| null |
The ACM Conference on Computer and Communications Security 2022
|
10.1145/3548606.3559336
| null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In today's web ecosystem, a website that uses a Content Delivery Network
(CDN) shares its Transport Layer Security (TLS) private key or session key with
the CDN. In this paper, we present the design and implementation of InviCloak,
a system that protects the confidentiality and integrity of a user and a
website's private communications without changing TLS or upgrading a CDN.
InviCloak builds a lightweight but secure and practical key distribution
mechanism using the existing DNS infrastructure to distribute a new public key
associated with a website's domain name. A web client and a website can use the
new key pair to build an encryption channel inside TLS. InviCloak accommodates
the current web ecosystem. A website can deploy InviCloak unilaterally without
a client's involvement to prevent a passive attacker inside a CDN from
eavesdropping on their communications. If a client also installs InviCloak's
browser extension, the client and the website can achieve end-to-end
confidential and untampered communications in the presence of an active
attacker inside a CDN. Our evaluation shows that InviCloak increases the median
page load times (PLTs) of realistic web pages from 2.0s to 2.1s, which is
smaller than the median PLTs (2.8s) of a state-of-the-art TEE-based solution.
|
[
{
"version": "v1",
"created": "Sun, 4 Sep 2022 06:38:27 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Sep 2022 19:30:21 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Sep 2022 11:17:35 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Lin",
"Shihan",
""
],
[
"Xin",
"Rui",
""
],
[
"Goel",
"Aayush",
""
],
[
"Yang",
"Xiaowei",
""
]
] |
new_dataset
| 0.999794 |
2212.05057
|
Kaan Ak\c{s}it
|
Kaan Ak\c{s}it and Yuta Itoh
|
HoloBeam: Paper-Thin Near-Eye Displays
|
15 pages, 18 Figures, 1 Table, 1 Listing
| null | null | null |
cs.HC cs.AR cs.GR physics.optics
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
An emerging alternative to conventional Augmented Reality (AR) glasses
designs, Beaming displays promise slim AR glasses free from challenging design
trade-offs, including battery-related limits or computational budget-related
issues. These beaming displays remove active components such as batteries and
electronics from AR glasses and move them to a projector that projects images
to a user from a distance (1-2 meters), where users wear only passive optical
eyepieces. However, earlier implementations of these displays delivered poor
resolutions (7 cycles per degree) without any optical focus cues and were
introduced with a bulky form-factor eyepiece (50 mm thick). This paper
introduces a new milestone for beaming displays, which we call HoloBeam. In
this new design, a custom holographic projector populates a micro-volume
located at some distance (1-2 meters) with multiple planes of images. Users
view magnified copies of these images from this small volume with the help of
an eyepiece that is either a Holographic Optical Element (HOE) or a set of
lenses. Our HoloBeam prototypes demonstrate the thinnest AR glasses to date
with a submillimeter thickness (e.g., HOE film is only 120 um thick). In
addition, HoloBeam prototypes demonstrate near retinal resolutions (24 cycles
per degree) with a 70 degrees-wide field of view.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2022 09:53:13 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 14:38:44 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Akşit",
"Kaan",
""
],
[
"Itoh",
"Yuta",
""
]
] |
new_dataset
| 0.999794 |
2212.06589
|
Leonardo Fernandez-Jambrina
|
L. Fernandez-Jambrina
|
Patches of developable surfaces bounded by NURBS curves
|
6 pages, 3 figures; Modelling for Engineering & Human Behaviour 2022,
1-6 (2022) I.S.B.N.: 978-84-09-47037-2
| null | null | null |
cs.GR cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
In this talk we review the problem of constructing a developable surface
patch bounded by two rational or NURBS (Non-Uniform Rational B-spline) curves.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 14:06:40 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 18:12:22 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Fernandez-Jambrina",
"L.",
""
]
] |
new_dataset
| 0.993985 |
2301.06964
|
Marios Constantinides
|
Lakmal Meegahapola, Marios Constantinides, Zoran Radivojevic, Hongwei
Li, Daniele Quercia, Michael S. Eggleston
|
Quantified Canine: Inferring Dog Personality From Wearables
|
26 pages, 9 figures, 4 tables
| null |
10.1145/3544548.3581088
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Being able to assess dog personality can be used to, for example, match
shelter dogs with future owners, and personalize dog activities. Such an
assessment typically relies on experts or psychological scales administered to
dog owners, both of which are costly. To tackle that challenge, we built a
device called "Patchkeeper" that can be strapped on the pet's chest and
measures activity through an accelerometer and a gyroscope. In an in-the-wild
deployment involving 12 healthy dogs, we collected 1300 hours of sensor
activity data and dog personality test results from two validated
questionnaires. By matching these two datasets, we trained ten machine-learning
classifiers that predicted dog personality from activity data, achieving AUCs
in [0.63-0.90], suggesting the value of tracking the psychological signals of
pets using wearable technologies.
|
[
{
"version": "v1",
"created": "Tue, 17 Jan 2023 15:37:16 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jan 2023 20:12:53 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Meegahapola",
"Lakmal",
""
],
[
"Constantinides",
"Marios",
""
],
[
"Radivojevic",
"Zoran",
""
],
[
"Li",
"Hongwei",
""
],
[
"Quercia",
"Daniele",
""
],
[
"Eggleston",
"Michael S.",
""
]
] |
new_dataset
| 0.984302 |
2301.07325
|
Runsheng Xu
|
Runsheng Xu, Hao Xiang, Xu Han, Xin Xia, Zonglin Meng, Chia-Ju Chen,
Jiaqi Ma
|
The OpenCDA Open-source Ecosystem for Cooperative Driving Automation
Research
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Advances in single-vehicle intelligence for automated driving have encountered
significant challenges because of limited capabilities in perception and
interaction with complex traffic environments. Cooperative Driving
Automation~(CDA) has been considered a pivotal solution to next-generation
automated driving and intelligent transportation. Though CDA has attracted much
attention from both academia and industry, exploration of its potential is
still in its infancy. In industry, companies tend to build their in-house data
collection pipeline and research tools to tailor their needs and protect
intellectual properties. Reinventing the wheels, however, wastes resources and
limits the generalizability of the developed approaches since no standardized
benchmarks exist. On the other hand, in academia, due to the absence of
real-world traffic data and computation resources, researchers often
investigate CDA topics in simplified and mostly simulated environments,
restricting the possibility of scaling the research outputs to real-world
scenarios. Therefore, there is an urgent need to establish an open-source
ecosystem~(OSE) to address the demands of different communities for CDA
research, particularly in the early exploratory research stages, and provide
the bridge to ensure an integrated development and testing pipeline that
diverse communities can share. In this paper, we introduce the OpenCDA research
ecosystem, a unified OSE integrated with a model zoo, a suite of driving
simulators at various resolutions, large-scale real-world and simulated
datasets, complete development toolkits for benchmark training/testing, and a
scenario database/generator. We also demonstrate the effectiveness of OpenCDA
OSE through example use cases, including cooperative 3D LiDAR detection,
cooperative merge, cooperative camera-based map prediction, and adversarial
scenario generation.
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 06:15:22 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Jan 2023 00:16:29 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Jan 2023 16:40:25 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Xu",
"Runsheng",
""
],
[
"Xiang",
"Hao",
""
],
[
"Han",
"Xu",
""
],
[
"Xia",
"Xin",
""
],
[
"Meng",
"Zonglin",
""
],
[
"Chen",
"Chia-Ju",
""
],
[
"Ma",
"Jiaqi",
""
]
] |
new_dataset
| 0.98785 |
2301.09950
|
Kaan Ak\c{s}it
|
Koray Kavakl{\i}, Liang Shi, Hakan \"Urey, Wojciech Matusik, Kaan
Ak\c{s}it
|
HoloHDR: Multi-color Holograms Improve Dynamic Range
|
10 pages, 11 figures
| null | null | null |
cs.GR cs.AR cs.HC physics.optics
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Holographic displays generate Three-Dimensional (3D) images by displaying
single-color holograms time-sequentially, each lit by a single-color light
source. However, representing each color one by one limits peak brightness and
dynamic range in holographic displays. This paper introduces a new driving
scheme, HoloHDR, for realizing higher dynamic range images in holographic
displays. Unlike the conventional driving scheme, in HoloHDR, three light
sources illuminate each displayed hologram simultaneously at various brightness
levels. In this way, HoloHDR reconstructs a multiplanar three-dimensional
target scene using consecutive multi-color holograms and persistence of vision.
We co-optimize multi-color holograms and required brightness levels from each
light source using a gradient descent-based optimizer with a combination of
application-specific loss terms. We experimentally demonstrate that HoloHDR can
increase the brightness levels in holographic displays up to three times with
support for a broader dynamic range, unlocking new potentials for perceptual
realism in holographic displays.
|
[
{
"version": "v1",
"created": "Tue, 24 Jan 2023 12:16:30 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 14:17:53 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Kavaklı",
"Koray",
""
],
[
"Shi",
"Liang",
""
],
[
"Ürey",
"Hakan",
""
],
[
"Matusik",
"Wojciech",
""
],
[
"Akşit",
"Kaan",
""
]
] |
new_dataset
| 0.998277 |
2301.10186
|
Phu Gia Hoang
|
Phu Gia Hoang, Canh Duc Luu, Khanh Quoc Tran, Kiet Van Nguyen, Ngan
Luu-Thuy Nguyen
|
ViHOS: Hate Speech Spans Detection for Vietnamese
|
EACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise in hateful and offensive language directed at other users is one of
the adverse side effects of the increased use of social networking platforms.
This could make it difficult for human moderators to review tagged comments
filtered by classification systems. To help address this issue, we present the
ViHOS (Vietnamese Hate and Offensive Spans) dataset, the first human-annotated
corpus containing 26k spans on 11k comments. We also provide definitions of
hateful and offensive spans in Vietnamese comments as well as detailed
annotation guidelines. Besides, we conduct experiments with various
state-of-the-art models. Specifically, XLM-R$_{Large}$ achieved the best
F1-scores in Single span detection and All spans detection, while
PhoBERT$_{Large}$ obtained the highest in Multiple spans detection. Finally,
our error analysis demonstrates the difficulties in detecting specific types of
spans in our data for future research.
Disclaimer: This paper contains real comments that could be considered
profane, offensive, or abusive.
|
[
{
"version": "v1",
"created": "Tue, 24 Jan 2023 17:53:21 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 08:58:01 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Hoang",
"Phu Gia",
""
],
[
"Luu",
"Canh Duc",
""
],
[
"Tran",
"Khanh Quoc",
""
],
[
"Van Nguyen",
"Kiet",
""
],
[
"Nguyen",
"Ngan Luu-Thuy",
""
]
] |
new_dataset
| 0.999736 |
2301.10843
|
Niklas Elmqvist
|
Deokgun Park, Sung-Hee Kim, Niklas Elmqvist
|
Gatherplot: A Non-Overlapping Scatterplot
|
16 pages. arXiv admin note: substantial text overlap with
arXiv:1708.08033
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Scatterplots are a common tool for exploring multidimensional datasets,
especially in the form of scatterplot matrices (SPLOMs). However, scatterplots
suffer from overplotting when categorical variables are mapped to one or two
axes, or the same continuous variable is used for both axes. Previous methods
such as histograms or violin plots use aggregation, which makes brushing and
linking difficult. To address this, we propose gatherplots, an extension of
scatterplots to manage the overplotting problem. Gatherplots are a form of unit
visualization, which avoid aggregation and maintain the identity of individual
objects to ease visual perception. In gatherplots, every visual mark that maps
to the same position coalesces to form a packed entity, thereby making it
easier to see the overview of data groupings. The size and aspect ratio of
marks can also be changed dynamically to make it easier to compare the
composition of different groups. In the case of a categorical variable vs. a
categorical variable, we propose a heuristic to decide bin sizes for optimal
space usage. To validate our work, we conducted a crowdsourced user study that
shows that gatherplots enable people to assess data distribution more quickly
and more correctly than when using jittered scatterplots.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 21:50:10 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Park",
"Deokgun",
""
],
[
"Kim",
"Sung-Hee",
""
],
[
"Elmqvist",
"Niklas",
""
]
] |
new_dataset
| 0.963456 |
2301.10965
|
Xuan Quang Ngo
|
Xuan Quang Ngo, Thai Nguyen Chau, Cong Thang Doan, Van Tu Duong, Duy
Vo Hoang, Tan Tien Nguyen
|
Design of Mobile Manipulator for Fire Extinguisher Testing. Part I Key
Specifications and Conceptual Design
|
10 pages, 8 figures, the 7th International Conference on Advanced
Engineering, Theory and Applications
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
All flames must be extinguished as early as possible, or fire services have to
deal with major conflagrations. As a result, the quality of fire
extinguishers has become a very sensitive and important issue in firefighting.
Inspired by the development of automatic firefighting systems, this paper
proposes key specifications based on the fire extinguisher standards
ISO 7165:2009 and ISO 11601:2008, and feasible solutions to design a mobile
manipulator for automatically evaluating the quality or, more specifically,
power of fire extinguishers. In addition, a part of the mechanical design is
also discussed.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 07:14:33 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Ngo",
"Xuan Quang",
""
],
[
"Chau",
"Thai Nguyen",
""
],
[
"Doan",
"Cong Thang",
""
],
[
"Duong",
"Van Tu",
""
],
[
"Hoang",
"Duy Vo",
""
],
[
"Nguyen",
"Tan Tien",
""
]
] |
new_dataset
| 0.998675 |
2301.11010
|
Manishika Rawat Dr.
|
Manishika Rawat, Marco Giordani, Brejesh Lall, Abdelaali Chaoub, and
Michele Zorzi
|
On the Optimal Beamwidth of UAV-Assisted Networks Operating at
Millimeter Waves
|
7 pages, 7 figures
| null | null | null |
cs.IT cs.NI math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The millimeter-wave (mm-wave) bands enable very large antenna arrays that can
generate narrow beams for beamforming and spatial multiplexing. However,
directionality introduces beam misalignment and leads to reduced energy
efficiency. Thus, employing the narrowest possible beam in a cell may not
necessarily imply maximum coverage. The objective of this work is to determine
the optimal sector beamwidth for a cellular architecture served by an unmanned
aerial vehicle (UAV) acting as a base station (BS). The users in a cell are
assumed to be distributed according to a Poisson Point Process (PPP) with a
given user density. We consider hybrid beamforming at the UAV, such that
multiple concurrent beams serve all the sectors simultaneously. An optimization
problem is formulated to maximize the sum rate over a given area while limiting
the total power available to each sector. We observe that, for a given transmit
power, the optimal sector beamwidth increases as the user density in a cell
decreases, and varies based on the height of the UAV. Thus, we provide
guidelines towards the optimal beamforming configurations for users in rural
areas.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 09:52:24 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Rawat",
"Manishika",
""
],
[
"Giordani",
"Marco",
""
],
[
"Lall",
"Brejesh",
""
],
[
"Chaoub",
"Abdelaali",
""
],
[
"Zorzi",
"Michele",
""
]
] |
new_dataset
| 0.995307 |
2301.11050
|
Dorjan Hitaj
|
Dorjan Hitaj, Giulio Pagnotta, Fabio De Gaspari, Lorenzo De Carli,
Luigi V. Mancini
|
Minerva: A File-Based Ransomware Detector
|
19 pages, 3 figures
| null | null | null |
cs.CR cs.CY cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Ransomware is a rapidly evolving type of malware designed to encrypt user
files on a device, making them inaccessible in order to exact a ransom.
Ransomware attacks resulted in billions of dollars in damages in recent years
and are expected to cause hundreds of billions more in the next decade. With
current state-of-the-art process-based detectors being heavily susceptible to
evasion attacks, no comprehensive solution to this problem is available today.
This paper presents Minerva, a new approach to ransomware detection. Unlike
current methods focused on identifying ransomware based on process-level
behavioral modeling, Minerva detects ransomware by building behavioral profiles
of files based on all the operations they receive in a time window. Minerva
addresses some of the critical challenges associated with process-based
approaches, specifically their vulnerability to complex evasion attacks. Our
evaluation of Minerva demonstrates its effectiveness in detecting ransomware
attacks, including those that are able to bypass existing defenses. Our results
show that Minerva identifies ransomware activity with an average accuracy of
99.45% and an average recall of 99.66%, with 99.97% of ransomware detected
within 1 second.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 11:47:10 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Hitaj",
"Dorjan",
""
],
[
"Pagnotta",
"Giulio",
""
],
[
"De Gaspari",
"Fabio",
""
],
[
"De Carli",
"Lorenzo",
""
],
[
"Mancini",
"Luigi V.",
""
]
] |
new_dataset
| 0.999737 |
2301.11092
|
Christophe Maudoux
|
Christophe Maudoux (CEDRIC - ROC), Selma Boumerdassi (CEDRIC - ROC)
|
LemonLDAP::NG -- A Full AAA Free Open Source WebSSO Solution
| null |
IEEE 11th International Conference on Cloud Networking (CloudNet),
IEEE ComSoc; Cnam, Nov 2022, Paris, France. pp.277-281
|
10.1109/CloudNet55617.2022.9978777
| null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, security is becoming a major issue and concern. More and more
organizations like hospitals, metropolis or banks are under cyberattacks and
have to improve their network infrastructure security. The first prerequisites
are to authenticate users, to provide identity and to grant just the needed and
useful accesses. These requirements can be solved by implementing a Single
Sign-On (SSO) solution. It is an authentication scheme that permits a user to
log in with a single identity to any of several related, yet independent,
systems. It allows users to log in once and to access services without
authenticating again. SSO solutions are classified depending on Authentication,
Authorization, and Accounting features. The 'AAA' acronym defines a framework
for intelligently controlling access to resources, enforcing security policies,
auditing usage, and providing the information necessary to bill for services.
These combined processes are considered important for effective network
management and cybersecurity. LemonLDAP::NG (LL::NG) is a full AAA WebSSO
solution. It implements all standard authentication and identity federation
(IdF) protocols. The main LL::NG's advantages compared to other products are
its plug-in engine and its advanced handlerbased protection mechanism that can
be employed to protect Server2Server exchanges or to offer the SSO as a
Service, a solution to implement a full DevOps architecture. LL::NG is a
community and professional project mainly employed by the French government to
secure Police, Finance or Justice Ministries and a French mobile operator IT
infrastructures since 2010. But for several years, contributions come from all
around the world and LL::NG is becoming more and more popular.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 13:34:25 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Maudoux",
"Christophe",
"",
"CEDRIC - ROC"
],
[
"Boumerdassi",
"Selma",
"",
"CEDRIC - ROC"
]
] |
new_dataset
| 0.998215 |
2301.11112
|
Nardine Osman
|
Nardine Osman and Bruno Rosell and Carles Sierra and Marco Schorlemmer
and Jordi Sabater-Mir and Lissette Lemus
|
uHelp: intelligent volunteer search for mutual help communities
| null | null | null | null |
cs.SI cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
When people need help with their day-to-day activities, they turn to family,
friends or neighbours. But despite an increasingly networked world, technology
falls short in finding suitable volunteers. In this paper, we propose uHelp, a
platform for building a community of helpful people and supporting community
members find the appropriate help within their social network. Lately,
applications that focus on finding volunteers have started to appear, such as
Helpin or Facebook's Community Help. However, what distinguishes uHelp from
existing applications is its trust-based intelligent search for volunteers.
Although trust is crucial to these innovative social applications, none of them
has yet achieved a trust-building solution such as that of uHelp.
uHelp's intelligent search for volunteers is based on a number of AI
technologies: (1) a novel trust-based flooding algorithm that navigates one's
social network looking for appropriate trustworthy volunteers; (2) a novel
trust model that maintains the trustworthiness of peers by learning from their
similar past experiences; and (3) a semantic similarity model that assesses the
similarity of experiences. This article presents the uHelp application,
describes the underlying AI technologies that allow uHelp find trustworthy
volunteers efficiently, and illustrates the implementation details. uHelp's
initial prototype has been tested with a community of single parents in
Barcelona, and the app is available online at both Apple Store and Google Play.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 14:05:46 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Osman",
"Nardine",
""
],
[
"Rosell",
"Bruno",
""
],
[
"Sierra",
"Carles",
""
],
[
"Schorlemmer",
"Marco",
""
],
[
"Sabater-Mir",
"Jordi",
""
],
[
"Lemus",
"Lissette",
""
]
] |
new_dataset
| 0.956209 |
2301.11125
|
Reda Dehak
|
Corentin Duchene, Henri Jamet, Pierre Guillaume, Reda Dehak
|
A benchmark for toxic comment classification on Civil Comments dataset
| null |
EGC 2023, vol. RNTI-E-39, pp.19-30
| null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Toxic comment detection on social media has proven to be essential for
content moderation. This paper compares a wide set of different models on a
highly skewed multi-label hate speech dataset. We consider inference time and
several metrics to measure performance and bias in our comparison. We show that
all BERTs have similar performance regardless of the size, optimizations or
language used to pre-train the models. RNNs are much faster at inference than
any of the BERT. BiLSTM remains a good compromise between performance and
inference time. RoBERTa with Focal Loss offers the best performance on biases
and AUROC. However, DistilBERT combines both good AUROC and a low inference
time. All models are affected by the bias of associating identities. BERT, RNN,
and XLNet are less sensitive than the CNN and Compact Convolutional
Transformers.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 14:25:09 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Duchene",
"Corentin",
""
],
[
"Jamet",
"Henri",
""
],
[
"Guillaume",
"Pierre",
""
],
[
"Dehak",
"Reda",
""
]
] |
new_dataset
| 0.99735 |
2301.11154
|
Michal Kawulok
|
Tomasz Tarasiewicz, Jakub Nalepa, Reuben A. Farrugia, Gianluca
Valentino, Mang Chen, Johann A. Briffa, Michal Kawulok
|
Multitemporal and multispectral data fusion for super-resolution of
Sentinel-2 images
|
Submitted to IEEE Transactions On Geoscience And Remote Sensing
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multispectral Sentinel-2 images are a valuable source of Earth observation
data, however spatial resolution of their spectral bands limited to 10 m, 20 m,
and 60 m ground sampling distance remains insufficient in many cases. This
problem can be addressed with super-resolution, aimed at reconstructing a
high-resolution image from a low-resolution observation. For Sentinel-2,
spectral information fusion allows for enhancing the 20 m and 60 m bands to the
10 m resolution. Also, there were attempts to combine multitemporal stacks of
individual Sentinel-2 bands, however these two approaches have not been
combined so far. In this paper, we introduce DeepSent -- a new deep network for
super-resolving multitemporal series of multispectral Sentinel-2 images. It is
underpinned with information fusion performed simultaneously in the spectral
and temporal dimensions to generate an enlarged multispectral image. In our
extensive experimental study, we demonstrate that our solution outperforms
other state-of-the-art techniques that realize either multitemporal or
multispectral data fusion. Furthermore, we show that the advantage of DeepSent
results from how these two fusion types are combined in a single architecture,
which is superior to performing such fusion in a sequential manner.
Importantly, we have applied our method to super-resolve real-world Sentinel-2
images, enhancing the spatial resolution of all the spectral bands to 3.3 m
nominal ground sampling distance, and we compare the outcome with very
high-resolution WorldView-2 images. We will publish our implementation upon
paper acceptance, and we expect it will increase the possibilities of
exploiting super-resolved Sentinel-2 images in real-life applications.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 15:01:25 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Tarasiewicz",
"Tomasz",
""
],
[
"Nalepa",
"Jakub",
""
],
[
"Farrugia",
"Reuben A.",
""
],
[
"Valentino",
"Gianluca",
""
],
[
"Chen",
"Mang",
""
],
[
"Briffa",
"Johann A.",
""
],
[
"Kawulok",
"Michal",
""
]
] |
new_dataset
| 0.990634 |
2301.11178
|
Andrew McNutt
|
Andrew M. McNutt, Chenglong Wang, Robert A. DeLine, Steven M. Drucker
|
On the Design of AI-powered Code Assistants for Notebooks
|
  To be published in Proceedings of the 2023 CHI Conference on Human
  Factors in Computing Systems (CHI '23), April 23--28, 2023, Hamburg, Germany.
  16 pages with 7 Figures, 1 Table, 2 Page Appendix (consisting of 4 figures)
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
AI-powered code assistants, such as Copilot, are quickly becoming a
ubiquitous component of contemporary coding contexts. Among these environments,
computational notebooks, such as Jupyter, are of particular interest as they
provide rich interface affordances that interleave code and output in a manner
that allows for both exploratory and presentational work. Despite their
popularity, little is known about the appropriate design of code assistants in
notebooks. We investigate the potential of code assistants in computational
notebooks by creating a design space (reified from a survey of extant tools)
and through an interview-design study (with 15 practicing data scientists).
Through this work, we identify challenges and opportunities for future systems
in this space, such as the value of disambiguation for tasks like data
visualization, the potential of tightly scoped domain-specific tools (like
linters), and the importance of polite assistants.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 15:34:24 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"McNutt",
"Andrew M.",
""
],
[
"Wang",
"Chenglong",
""
],
[
"DeLine",
"Robert A.",
""
],
[
"Drucker",
"Steven M.",
""
]
] |
new_dataset
| 0.98885 |
2301.11217
|
Diego Gonz\'alez Mor\'in
|
Diego Gonzalez Morin, Daniele Medda, Athanasios Iossifides, Periklis
Chatzimisios, Ana Garcia Armada, Alvaro Villegas, Pablo Perez
|
An eXtended Reality Offloading IP Traffic Dataset and Models
|
Submitted to IEEE Transactions on Mobile Computing
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In recent years, advances in immersive multimedia technologies, such as
extended reality (XR) technologies, have led to more realistic and
user-friendly devices. However, these devices are often bulky and
uncomfortable, still requiring tether connectivity for demanding applications.
The deployment of the fifth generation of telecommunications technologies (5G)
has set the basis for XR offloading solutions with the goal of enabling lighter
and fully wearable XR devices. In this paper, we present a traffic dataset for
two demanding XR offloading scenarios that are complementary to those available
in the current state of the art, captured using a fully developed end-to-end XR
offloading solution. We also propose a set of accurate traffic models for the
proposed scenarios based on the captured data, accompanied by a simple and
consistent method to generate synthetic data from the fitted models. Finally,
using an open-source 5G radio access network (RAN) emulator, we validate the
models both at the application and resource allocation layers. Overall, this
work aims to provide a valuable contribution to the field with data and tools
for designing, testing, improving, and extending XR offloading solutions in
academia and industry.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 16:53:27 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Morin",
"Diego Gonzalez",
""
],
[
"Medda",
"Daniele",
""
],
[
"Iossifides",
"Athanasios",
""
],
[
"Chatzimisios",
"Periklis",
""
],
[
"Armada",
"Ana Garcia",
""
],
[
"Villegas",
"Alvaro",
""
],
[
"Perez",
"Pablo",
""
]
] |
new_dataset
| 0.970383 |
2301.11225
|
Abdullatif Baba
|
Abdullatif Baba, Basel Alothman
|
A fuzzy logic-based stabilization system for a flying robot, with an
embedded energy harvester and a visual decision-making system
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
"Smart cities" is the trendy rubric of modern urban projects that require new
innovative ideas to attain the desired perfection in many fields to change our
life for the better. In this context, a new innovative application will be
presented here to investigate and continuously make the required maintenance of
public roads by creating a flying robot for painting the partially erased parts
of sidewalks' edges that are usually plated in two different colors; primarily
black and white as we suppose here. The first contribution of this paper is
developing a fuzzy-logic-based stabilization system for an octocopter serving
as a liquids transporter that could be equipped with a robot arm. The second
contribution consists of designing an embedded energy harvester for the flying
robot to promote the management of available power sources. Finally, as
suggested in this project, we present a complement heuristic study clarifying
some main concepts that rely on a computer vision-based decision-making system.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 17:01:48 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Baba",
"Abdullatif",
""
],
[
"Alothman",
"Basel",
""
]
] |
new_dataset
| 0.997486 |
2301.11312
|
Kim-Anh Laura Nguyen
|
Laura Nguyen, Thomas Scialom, Benjamin Piwowarski, Jacopo Staiano
|
LoRaLay: A Multilingual and Multimodal Dataset for Long Range and
Layout-Aware Summarization
|
To be published in EACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text Summarization is a popular task and an active area of research for the
Natural Language Processing community. By definition, it requires to account
for long input texts, a characteristic which poses computational challenges for
neural models. Moreover, real-world documents come in a variety of complex,
visually-rich, layouts. This information is of great relevance, whether to
highlight salient content or to encode long-range interactions between textual
passages. Yet, all publicly available summarization datasets only provide plain
text content. To facilitate research on how to exploit visual/layout
information to better capture long-range dependencies in summarization models,
we present LoRaLay, a collection of datasets for long-range summarization with
accompanying visual/layout information. We extend existing and popular English
datasets (arXiv and PubMed) with layout information and propose four novel
datasets -- consistently built from scholar resources -- covering French,
Spanish, Portuguese, and Korean languages. Further, we propose new baselines
merging layout-aware and long-range models -- two orthogonal approaches -- and
obtain state-of-the-art results, showing the importance of combining both lines
of research.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 18:50:54 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Nguyen",
"Laura",
""
],
[
"Scialom",
"Thomas",
""
],
[
"Piwowarski",
"Benjamin",
""
],
[
"Staiano",
"Jacopo",
""
]
] |
new_dataset
| 0.999839 |
2301.11325
|
Timo Denk
|
Andrea Agostinelli, Timo I. Denk, Zal\'an Borsos, Jesse Engel, Mauro
Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco
Tagliasacchi, Matt Sharifi, Neil Zeghidour, Christian Frank
|
MusicLM: Generating Music From Text
|
Supplementary material at
https://google-research.github.io/seanet/musiclm/examples and
https://kaggle.com/datasets/googleai/musiccaps
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce MusicLM, a model generating high-fidelity music from text
descriptions such as "a calming violin melody backed by a distorted guitar
riff". MusicLM casts the process of conditional music generation as a
hierarchical sequence-to-sequence modeling task, and it generates music at 24
kHz that remains consistent over several minutes. Our experiments show that
MusicLM outperforms previous systems both in audio quality and adherence to the
text description. Moreover, we demonstrate that MusicLM can be conditioned on
both text and a melody in that it can transform whistled and hummed melodies
according to the style described in a text caption. To support future research,
we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs,
with rich text descriptions provided by human experts.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 18:58:53 GMT"
}
] | 2023-01-27T00:00:00 |
[
[
"Agostinelli",
"Andrea",
""
],
[
"Denk",
"Timo I.",
""
],
[
"Borsos",
"Zalán",
""
],
[
"Engel",
"Jesse",
""
],
[
"Verzetti",
"Mauro",
""
],
[
"Caillon",
"Antoine",
""
],
[
"Huang",
"Qingqing",
""
],
[
"Jansen",
"Aren",
""
],
[
"Roberts",
"Adam",
""
],
[
"Tagliasacchi",
"Marco",
""
],
[
"Sharifi",
"Matt",
""
],
[
"Zeghidour",
"Neil",
""
],
[
"Frank",
"Christian",
""
]
] |
new_dataset
| 0.99094 |
2202.13008
|
Fukang Liu
|
Fukang Liu, Vaidehi Patil, Zackory Erickson, Zeynep Temel
|
Characterization of a Meso-Scale Wearable Robot for Bathing Assistance
| null | null |
10.1109/ROBIO55434.2022.10011741
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic bathing assistance has long been considered an important and
practical task in healthcare. Yet, achieving flexible and efficient cleaning
tasks on the human body is challenging, since washing the body involves direct
human-robot physical contact and simple, safe, and effective devices are needed
for bathing and hygiene. In this paper, we present a meso-scale wearable robot
that can locomote along the human body to provide bathing and skin care
assistance. We evaluated the cleaning performance of the robot system under
different scenarios. The experiments on the pipe show that the robot can
achieve cleaning percentage over 92% with two types of stretchable fabrics. The
robot removed most of the debris with average values of 94% on a human arm and
93% on a manikin torso. The results demonstrate that the robot exhibits high
performance in cleaning tasks.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 22:53:59 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Liu",
"Fukang",
""
],
[
"Patil",
"Vaidehi",
""
],
[
"Erickson",
"Zackory",
""
],
[
"Temel",
"Zeynep",
""
]
] |
new_dataset
| 0.986887 |
2203.15651
|
Daniel Weber
|
Daniel Weber, Wolfgang Fuhl, Andreas Zell, Enkelejda Kasneci
|
Gaze-based Object Detection in the Wild
| null | null | null | null |
cs.RO cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In human-robot collaboration, one challenging task is to teach a robot new
yet unknown objects enabling it to interact with them. Thereby, gaze can
contain valuable information. We investigate if it is possible to detect
objects (object or no object) merely from gaze data and determine their
bounding box parameters. For this purpose, we explore different sizes of
temporal windows, which serve as a basis for the computation of heatmaps, i.e.,
the spatial distribution of the gaze data. Additionally, we analyze different
grid sizes of these heatmaps, and demonstrate the functionality in a proof of
concept using different machine learning techniques. Our method is
characterized by its speed and resource efficiency compared to conventional
object detectors. In order to generate the required data, we conducted a study
with five subjects who could move freely and thus, turn towards arbitrary
objects. This way, we chose a scenario for our data collection that is as
realistic as possible. Since the subjects move while facing objects, the
heatmaps also contain gaze data trajectories, complicating the detection and
parameter regression. We make our data set publicly available to the research
community for download.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 15:10:17 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Jan 2023 13:43:30 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Jan 2023 08:20:16 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Weber",
"Daniel",
""
],
[
"Fuhl",
"Wolfgang",
""
],
[
"Zell",
"Andreas",
""
],
[
"Kasneci",
"Enkelejda",
""
]
] |
new_dataset
| 0.952976 |
2205.02870
|
Simon Lupart
|
Simon Lupart and Thibault Formal and St\'ephane Clinchant
|
MS-Shift: An Analysis of MS MARCO Distribution Shifts on Neural
Retrieval
|
Accepted at ECIR 2023
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Pre-trained Language Models have recently emerged in Information Retrieval as
providing the backbone of a new generation of neural systems that outperform
traditional methods on a variety of tasks. However, it is still unclear to what
extent such approaches generalize in zero-shot conditions. The recent BEIR
benchmark provides partial answers to this question by comparing models on
datasets and tasks that differ from the training conditions. We aim to address
the same question by comparing models under more explicit distribution shifts.
To this end, we build three query-based distribution shifts within MS MARCO
(query-semantic, query-intent, query-length), which are used to evaluate the
three main families of neural retrievers based on BERT: sparse, dense, and
late-interaction -- as well as a monoBERT re-ranker. We further analyse the
performance drops between the train and test query distributions. In
particular, we experiment with two generalization indicators: the first one
based on train/test query vocabulary overlap, and the second based on
representations of a trained bi-encoder. Intuitively, those indicators verify
that the further away the test set is from the train one, the worse the drop in
performance. We also show that models respond differently to the shifts --
dense approaches being the most impacted. Overall, our study demonstrates that
it is possible to design more controllable distribution shifts as a tool to
better understand generalization of IR models. Finally, we release the MS MARCO
query subsets, which provide an additional resource to benchmark zero-shot
transfer in Information Retrieval.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 18:13:06 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jan 2023 13:00:52 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Lupart",
"Simon",
""
],
[
"Formal",
"Thibault",
""
],
[
"Clinchant",
"Stéphane",
""
]
] |
new_dataset
| 0.986268 |
2205.12376
|
Kyle MacMillan
|
Kyle MacMillan, Tarun Mangla, James Saxon, Nicole P. Marwell, Nick
Feamster
|
A Comparative Analysis of Ookla Speedtest and Measurement Labs Network
Diagnostic Test (NDT7)
| null | null |
10.1145/3579448
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Consumers, regulators, and ISPs all use client-based "speed tests" to measure
network performance, both in single-user settings and in aggregate. Two
prevalent speed tests, Ookla's Speedtest and Measurement Lab's Network
Diagnostic Test (NDT), are often used for similar purposes, despite having
significant differences in both the test design and implementation, and in the
infrastructure used to perform measurements. In this paper, we present the
first-ever comparative evaluation of Ookla and NDT7 (the latest version of
NDT), both in controlled and wide-area settings. Our goal is to characterize
when and to what extent these two speed tests yield different results, as well
as the factors that contribute to the differences. To study the effects of the
test design, we conduct a series of controlled, in-lab experiments under a
comprehensive set of network conditions and usage modes (e.g., TCP congestion
control, native vs. browser client). Our results show that Ookla and NDT7
report similar speeds under most in-lab conditions, with the exception of
networks that experience high latency, where Ookla consistently reports higher
throughput. To characterize the behavior of these tools in wide-area
deployment, we collect more than 80,000 pairs of Ookla and NDT7 measurements
across nine months and 126 households, with a range of ISPs and speed tiers.
This first-of-its-kind paired-test analysis reveals many previously unknown
systemic issues, including high variability in NDT7 test results and
systematically under-performing servers in the Ookla network.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 21:46:00 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jan 2023 17:45:12 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"MacMillan",
"Kyle",
""
],
[
"Mangla",
"Tarun",
""
],
[
"Saxon",
"James",
""
],
[
"Marwell",
"Nicole P.",
""
],
[
"Feamster",
"Nick",
""
]
] |
new_dataset
| 0.994652 |
2206.08171
|
Dong-Hee Paek
|
Dong-Hee Paek, Seung-Hyun Kong, Kevin Tirta Wijaya
|
K-Radar: 4D Radar Object Detection for Autonomous Driving in Various
Weather Conditions
|
Accepted at NeurIPS 2022 Datasets and Benchmarks Track
|
Proceedings of the Neural Information Processing Systems Track on
Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2022)
| null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unlike RGB cameras that use visible light bands (384$\sim$769 THz) and Lidars
that use infrared bands (361$\sim$331 THz), Radars use relatively longer
wavelength radio bands (77$\sim$81 GHz), resulting in robust measurements in
adverse weathers. Unfortunately, existing Radar datasets only contain a
relatively small number of samples compared to the existing camera and Lidar
datasets. This may hinder the development of sophisticated data-driven deep
learning techniques for Radar-based perception. Moreover, most of the existing
Radar datasets only provide 3D Radar tensor (3DRT) data that contain power
measurements along the Doppler, range, and azimuth dimensions. As there is no
elevation information, it is challenging to estimate the 3D bounding box of an
object from 3DRT. In this work, we introduce KAIST-Radar (K-Radar), a novel
large-scale object detection dataset and benchmark that contains 35K frames of
4D Radar tensor (4DRT) data with power measurements along the Doppler, range,
azimuth, and elevation dimensions, together with carefully annotated 3D
bounding box labels of objects on the roads. K-Radar includes challenging
driving conditions such as adverse weathers (fog, rain, and snow) on various
road structures (urban, suburban roads, alleyways, and highways). In addition
to the 4DRT, we provide auxiliary measurements from carefully calibrated
high-resolution Lidars, surround stereo cameras, and RTK-GPS. We also provide
4DRT-based object detection baseline neural networks (baseline NNs) and show
that the height information is crucial for 3D object detection. And by
comparing the baseline NN with a similarly-structured Lidar-based neural
network, we demonstrate that 4D Radar is a more robust sensor for adverse
weather conditions. All codes are available at
https://github.com/kaist-avelab/k-radar.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 13:39:21 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Oct 2022 09:24:29 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Jan 2023 05:43:47 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Paek",
"Dong-Hee",
""
],
[
"Kong",
"Seung-Hyun",
""
],
[
"Wijaya",
"Kevin Tirta",
""
]
] |
new_dataset
| 0.999507 |
2210.09484
|
George Michelogiannakis
|
Darren Lyles, Patricia Gonzalez-Guerrero, Meriam Gay Bautista, George
Michelogiannakis
|
PaST-NoC: A Packet-Switched Superconducting Temporal NoC
|
14 pages, 18 figures, 2 tables. In press in IEEE Transactions on
Applied Superconductivity
|
IEEE Transactions on Applied Superconductivity, August 2023
|
10.1109/TASC.2023.3236248
|
Online ISSN 1558-2515
|
cs.ET cs.AR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Temporal computing promises to mitigate the stringent area constraints and
clock distribution overheads of traditional superconducting digital computing.
To design a scalable, area- and power-efficient superconducting network on chip
(NoC), we propose packet-switched superconducting temporal NoC (PaST-NoC).
PaST-NoC operates its control path in the temporal domain using race logic
(RL), combined with bufferless deflection flow control to minimize area.
Packets encode their destination using RL and carry a collection of data pulses
that the receiver can interpret as pulse trains, RL, serialized binary, or
other formats. We demonstrate how to scale up PaST-NoC to arbitrary topologies
based on 2x2 routers and 4x4 butterflies as building blocks. As we show, if
data pulses are interpreted using RL, PaST-NoC outperforms state-of-the-art
superconducting binary NoCs in throughput per area by as much as 5x for long
packets.
|
[
{
"version": "v1",
"created": "Tue, 18 Oct 2022 00:06:32 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2023 23:10:41 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Lyles",
"Darren",
""
],
[
"Gonzalez-Guerrero",
"Patricia",
""
],
[
"Bautista",
"Meriam Gay",
""
],
[
"Michelogiannakis",
"George",
""
]
] |
new_dataset
| 0.977427 |
2211.05733
|
Weihong Xu
|
Weihong Xu, Saransh Gupta, Niema Moshiri, and Tajana Rosing
|
RAPIDx: High-performance ReRAM Processing in-Memory Accelerator for
Sequence Alignment
| null | null |
10.1109/TCAD.2023.3239537
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genome sequence alignment is the core of many biological applications. The
advancement of sequencing technologies produces a tremendous amount of data,
making sequence alignment a critical bottleneck in bioinformatics analysis. The
existing hardware accelerators for alignment suffer from limited on-chip
memory, costly data movement, and poorly optimized alignment algorithms. They
cannot afford to concurrently process the massive amount of data generated by
sequencing machines. In this paper, we propose a ReRAM-based accelerator,
RAPIDx, using processing in-memory (PIM) for sequence alignment. RAPIDx
achieves superior efficiency and performance via software-hardware co-design.
First, we propose an adaptive banded parallelism alignment algorithm suitable
for PIM architecture. Compared to the original dynamic programming-based
alignment, the proposed algorithm significantly reduces the required
complexity, data bit width, and memory footprint at the cost of negligible
accuracy degradation. Then we propose the efficient PIM architecture that
implements the proposed algorithm. The data flow in RAPIDx achieves four-level
parallelism and we design an in-situ alignment computation flow in ReRAM,
delivering $5.5$-$9.7\times$ efficiency and throughput improvements compared to
our previous PIM design, RAPID. The proposed RAPIDx is reconfigurable to serve
as a co-processor integrated into existing genome analysis pipeline to boost
sequence alignment or edit distance calculation. On short-read alignment,
RAPIDx delivers $131.1\times$ and $46.8\times$ throughput improvements over
state-of-the-art CPU and GPU libraries, respectively. As compared to ASIC
accelerators for long-read alignment, the performance of RAPIDx is
$1.8$-$2.9\times$ higher.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 18:06:56 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2022 11:40:41 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Jan 2023 02:49:48 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Xu",
"Weihong",
""
],
[
"Gupta",
"Saransh",
""
],
[
"Moshiri",
"Niema",
""
],
[
"Rosing",
"Tajana",
""
]
] |
new_dataset
| 0.97924 |
2301.08281
|
Lyes Khacef
|
Fernando M. Quintana, Fernando Perez-Pe\~na, Pedro L. Galindo, Emre O.
Neftci, Elisabetta Chicca, Lyes Khacef
|
ETLP: Event-based Three-factor Local Plasticity for online learning with
neuromorphic hardware
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neuromorphic perception with event-based sensors, asynchronous hardware and
spiking neurons is showing promising results for real-time and energy-efficient
inference in embedded systems. The next promise of brain-inspired computing is
to enable adaptation to changes at the edge with online learning. However, the
parallel and distributed architectures of neuromorphic hardware based on
co-localized compute and memory impose locality constraints on the on-chip
learning rules. We propose in this work the Event-based Three-factor Local
Plasticity (ETLP) rule that uses (1) the pre-synaptic spike trace, (2) the
post-synaptic membrane voltage and (3) a third factor in the form of projected
labels with no error calculation, that also serve as update triggers. We apply
ETLP with feedforward and recurrent spiking neural networks on visual and
auditory event-based pattern recognition, and compare it to Back-Propagation
Through Time (BPTT) and eProp. We show a competitive performance in accuracy
with a clear advantage in the computational complexity for ETLP. We also show
that when using local plasticity, threshold adaptation in spiking neurons and a
recurrent topology are necessary to learn spatio-temporal patterns with a rich
temporal structure. Finally, we provide a proof of concept hardware
implementation of ETLP on FPGA to highlight the simplicity of its computational
primitives and how they can be mapped into neuromorphic hardware for online
learning with low-energy consumption and real-time interaction.
|
[
{
"version": "v1",
"created": "Thu, 19 Jan 2023 19:45:42 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Jan 2023 19:13:01 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Quintana",
"Fernando M.",
""
],
[
"Perez-Peña",
"Fernando",
""
],
[
"Galindo",
"Pedro L.",
""
],
[
"Neftci",
"Emre O.",
""
],
[
"Chicca",
"Elisabetta",
""
],
[
"Khacef",
"Lyes",
""
]
] |
new_dataset
| 0.997335 |
2301.09715
|
Avirup Sil
|
Avirup Sil, Jaydeep Sen, Bhavani Iyer, Martin Franz, Kshitij Fadnis,
Mihaela Bornea, Sara Rosenthal, Scott McCarley, Rong Zhang, Vishwajeet Kumar,
Yulong Li, Md Arafat Sultan, Riyaz Bhat, Radu Florian, Salim Roukos
|
PrimeQA: The Prime Repository for State-of-the-Art Multilingual Question
Answering Research and Development
| null | null | null | null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The field of Question Answering (QA) has made remarkable progress in recent
years, thanks to the advent of large pre-trained language models, newer
realistic benchmark datasets with leaderboards, and novel algorithms for key
components such as retrievers and readers. In this paper, we introduce PRIMEQA:
a one-stop and open-source QA repository with an aim to democratize QA
research and facilitate easy replication of state-of-the-art (SOTA) QA
methods. PRIMEQA supports core QA functionalities like retrieval and reading
comprehension as well as auxiliary capabilities such as question generation. It
has been designed as an end-to-end toolkit for various use cases: building
front-end applications, replicating SOTA methods on public benchmarks, and
expanding pre-existing methods. PRIMEQA is available at :
https://github.com/primeqa.
|
[
{
"version": "v1",
"created": "Mon, 23 Jan 2023 20:43:26 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jan 2023 15:48:03 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Sil",
"Avirup",
""
],
[
"Sen",
"Jaydeep",
""
],
[
"Iyer",
"Bhavani",
""
],
[
"Franz",
"Martin",
""
],
[
"Fadnis",
"Kshitij",
""
],
[
"Bornea",
"Mihaela",
""
],
[
"Rosenthal",
"Sara",
""
],
[
"McCarley",
"Scott",
""
],
[
"Zhang",
"Rong",
""
],
[
"Kumar",
"Vishwajeet",
""
],
[
"Li",
"Yulong",
""
],
[
"Sultan",
"Md Arafat",
""
],
[
"Bhat",
"Riyaz",
""
],
[
"Florian",
"Radu",
""
],
[
"Roukos",
"Salim",
""
]
] |
new_dataset
| 0.999499 |
2301.09783
|
Zhonghua Sun
|
Zhonghua Sun and Cunsheng Ding
|
Several families of ternary negacyclic codes and their duals
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constacyclic codes contain cyclic codes as a subclass and have nice algebraic
structures. Constacyclic codes have theoretical importance, as they are
connected to a number of areas of mathematics and outperform cyclic codes in
several aspects. Negacyclic codes are a subclass of constacyclic codes and are
distance-optimal in many cases. However, compared with the extensive study of
cyclic codes, negacyclic codes are much less studied. In this paper, several
families of ternary negacyclic codes and their duals are constructed and
analysed. These families of negacyclic codes and their duals contain
distance-optimal codes and have very good parameters in general.
|
[
{
"version": "v1",
"created": "Tue, 24 Jan 2023 01:59:16 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jan 2023 11:37:11 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Sun",
"Zhonghua",
""
],
[
"Ding",
"Cunsheng",
""
]
] |
new_dataset
| 0.999508 |
2301.10216
|
Junyao Zhang
|
Junyao Zhang, Paul Bogdan, Shahin Nazarian
|
C-SAR: SAT Attack Resistant Logic Locking for RSFQ Circuits
| null | null | null | null |
cs.LO cs.AR cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since the development of semiconductor technologies, exascale computing and
its associated applications have required increasing degrees of efficiency.
Semiconductor-transistor-based circuits (STbCs) have struggled in increasing
the GHz frequency. Emerging as an alternative to STbC, the superconducting
electrons (SCE) technology promises higher-speed clock frequencies at ultra-low
power consumption. The rapid single flux quantum (RSFQ) circuits have a
theoretical potential for three orders of magnitude reduction in power while
operating at clock frequencies higher than 100 GHz. Although the security in
semiconductor technology has been extensively researched and developed, the
security design in the superconducting field still demands attention.
In this paper, C-SAR is presented that aims to protect the superconducting
circuit electronics from Boolean satisfiability (SAT) based attacks. The SAT
attack is an attack that can break all the existing combinational logic locking
techniques. C-SAR can immunize against SAT attacks by increasing the key search
space and prolonging the clock cycles of attack inputs. Even in the worst case
of C-SAR, in the face of S-SAT, a specially designed SAT attack, C-SAR can still
raise the attack cost exponentially with key bits first, then linearly with the
length of camouflaged DFF array. We have shown in this work that the cost of
C-SAR is manageable as it only linearly increases as a function of key bits.
|
[
{
"version": "v1",
"created": "Tue, 24 Jan 2023 18:45:01 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jan 2023 16:32:21 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Zhang",
"Junyao",
""
],
[
"Bogdan",
"Paul",
""
],
[
"Nazarian",
"Shahin",
""
]
] |
new_dataset
| 0.9875 |
2301.10295
|
Yuqing Ren
|
Kaihui Zheng, Yuqing Ren, Zixin Shen, Tianxu Qin
|
Object Segmentation with Audio Context
|
Research project for Introduction to Deep Learning (11785) at
Carnegie Mellon University
| null | null | null |
cs.CV cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual objects often have acoustic signatures that are naturally synchronized
with them in audio-bearing video recordings. For this project, we explore the
multimodal feature aggregation for video instance segmentation task, in which
we integrate audio features into our video segmentation model to conduct an
audio-visual learning scheme. Our method is based on existing video instance
segmentation method which leverages rich contextual information across video
frames. Since this is the first attempt to investigate the audio-visual
instance segmentation, a novel dataset, including 20 vocal classes with
synchronized video and audio recordings, is collected. By utilizing combined
decoder to fuse both video and audio features, our model shows a slight
improvement over the base model. Additionally, we demonstrate the
effectiveness of different modules by conducting extensive ablations.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 01:33:42 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Zheng",
"Kaihui",
""
],
[
"Ren",
"Yuqing",
""
],
[
"Shen",
"Zixin",
""
],
[
"Qin",
"Tianxu",
""
]
] |
new_dataset
| 0.990062 |
2301.10314
|
Yang Bai
|
Yang Bai, Irtaza Shahid, Harshvardhan Takawale, Nirupam Roy
|
WhisperWand: Simultaneous Voice and Gesture Tracking Interface
| null | null | null | null |
cs.HC cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the design and implementation of WhisperWand, a
comprehensive voice and motion tracking interface for voice assistants.
Distinct from prior works, WhisperWand is a precise tracking interface that can
co-exist with the voice interface on low sampling rate voice assistants. Taking
handwriting as a specific application, it can also capture natural strokes and
the individualized style of writing while occupying only a single frequency.
The core technique includes an accurate acoustic ranging method called Cross
Frequency Continuous Wave (CFCW) sonar, enabling voice assistants to use
ultrasound as a ranging signal while using the regular microphone system of
voice assistants as a receiver. We also design a new optimization algorithm
that only requires a single frequency for time difference of arrival.
WhisperWand prototype achieves 73 um of median error for 1D ranging and 1.4 mm
of median error in 3D tracking of an acoustic beacon using the microphone array
used in voice assistants. Our implementation of an in-air handwriting interface
achieves 94.1% accuracy with automatic handwriting-to-text software, similar to
writing on paper (96.6%). At the same time, the error rate of voice-based user
authentication only increases from 6.26% to 8.28%.
|
[
{
"version": "v1",
"created": "Tue, 24 Jan 2023 21:30:11 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Bai",
"Yang",
""
],
[
"Shahid",
"Irtaza",
""
],
[
"Takawale",
"Harshvardhan",
""
],
[
"Roy",
"Nirupam",
""
]
] |
new_dataset
| 0.967901 |
2301.10502
|
Karina Elzer
|
Daniel Reti, Karina Elzer, Hans Dieter Schotten
|
SCANTRAP: Protecting Content Management Systems from Vulnerability
Scanners with Cyber Deception and Obfuscation
|
8 pages, 1 figure, 2 tables, ICISSP 2023
https://icissp.scitevents.org/
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Every attack begins with gathering information about the target. The entry
point for network breaches are often vulnerabilities in internet facing
websites, which often rely on an off-the-shelf Content Management System (CMS).
Bot networks and human attackers alike rely on automated scanners to gather
information about the CMS software installed and potential vulnerabilities. To
increase the security of websites using a CMS, it is desirable to make the use
of CMS scanners less reliable. The aim of this work is to extend the current
knowledge about cyber deception in regard to CMS. To demonstrate this, a
WordPress Plugin called 'SCANTRAP' was created, which uses simulation and
dissimulation with regard to plugins, themes, versions, and users. We found that
the resulting plugin is capable of obfuscating real information and to a
certain extent inject false information to the output of one of the most
popular WordPress scanners, WPScan, without limiting the legitimate
functionality of the WordPress installation.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 10:26:10 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Reti",
"Daniel",
""
],
[
"Elzer",
"Karina",
""
],
[
"Schotten",
"Hans Dieter",
""
]
] |
new_dataset
| 0.972042 |
2301.10519
|
Davide Martinenghi
|
Dino Mandrioli, Davide Martinenghi, Angelo Morzenti, Matteo Pradella,
and Matteo Rossi
|
Lecture Notes on Monadic First- and Second-Order Logic on Strings
|
17 pages
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
These notes present the essentials of first- and second-order monadic logics
on strings with introductory purposes. We discuss Monadic First-Order logic and
show that it is strictly less expressive than Finite-State Automata, in that it
only captures a strict subset of Regular Languages -- the non-counting ones. We
then introduce Monadic Second-Order logic; such a logic is, syntactically, a
superset of Monadic First-Order logic and captures Regular Languages exactly.
We also show how to transform an automaton into a corresponding formula and
vice versa. Finally, we discuss the use of logical characterizations of classes
of languages as the basis for automatic verification techniques.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 11:01:31 GMT"
}
] | 2023-01-26T00:00:00 |
[
[
"Mandrioli",
"Dino",
""
],
[
"Martinenghi",
"Davide",
""
],
[
"Morzenti",
"Angelo",
""
],
[
"Pradella",
"Matteo",
""
],
[
"Rossi",
"Matteo",
""
]
] |
new_dataset
| 0.999895 |