Dataset schema (column | type | observed range; ⌀ = nullable):

id | string, length 9-10
submitter | string, length 2-52, ⌀
authors | string, length 4-6.51k
title | string, length 4-246
comments | string, length 1-523, ⌀
journal-ref | string, length 4-345, ⌀
doi | string, length 11-120, ⌀
report-no | string, length 2-243, ⌀
categories | string, length 5-98
license | string, 9 classes
abstract | string, length 33-3.33k
versions | list
update_date | timestamp[s]
authors_parsed | list
prediction | string, 1 class
probability | float64, 0.95-1

---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
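For orientation, here is a minimal Python sketch of loading and filtering
records with this schema; the file name `arxiv_metadata.jsonl` is a
hypothetical placeholder, not part of the dump:

```python
# Minimal sketch: loading records with the schema above into pandas.
# "arxiv_metadata.jsonl" is a hypothetical placeholder file name.
import json
import pandas as pd

with open("arxiv_metadata.jsonl", encoding="utf-8") as f:
    df = pd.DataFrame([json.loads(line) for line in f])

# Columns marked nullable in the schema may hold None.
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.95)]
print(confident[["id", "title", "probability"]].head())
```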
2211.07916
|
Siddhi Brahmbhatt
|
Siddhi Brahmbhatt
|
A Dataset and Model for Crossing Indian Roads
|
Awarded Best Paper (Indian Context) at ICVGIP 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Roads in medium-sized Indian towns often have lots of traffic but no (or
disregarded) traffic stops. This makes it hard for the blind to cross roads
safely, because vision is crucial to determine when crossing is safe. Automatic
and reliable image-based safety classifiers thus have the potential to help the
blind to cross Indian roads. Yet, we currently lack datasets collected on
Indian roads from the pedestrian point-of-view, labelled with road crossing
safety information. Existing classifiers from other countries are often
intended for crossroads, and hence rely on the detection and presence of
traffic lights, which is not applicable in Indian conditions. We introduce
INDRA (INdian Dataset for RoAd crossing), the first dataset capturing videos of
Indian roads from the pedestrian point-of-view. INDRA contains 104 videos
comprising 26k 1080p frames, each annotated with a binary road crossing
safety label and vehicle bounding boxes. We train various classifiers to
predict road crossing safety on this data, ranging from SVMs to convolutional
neural networks (CNNs). The best-performing model, DilatedRoadCrossNet, is a
novel single-image architecture tailored for deployment on the Nvidia Jetson
Nano. It achieves 79% recall at 90% precision on unseen images. Lastly, we
present a wearable road crossing assistant running DilatedRoadCrossNet, which
can help the blind cross Indian roads in real-time. The project webpage is
http://roadcross-assistant.github.io/Website/.
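The reported operating point (79% recall at 90% precision) comes down to
thresholding the classifier's scores. A hedged sketch of how such a point is
read off a precision-recall curve, using synthetic labels and scores rather
than the paper's data:

```python
# Hedged sketch: picking the score threshold that meets a target
# precision, as in the "79% recall at 90% precision" operating point.
# Labels and scores below are synthetic, not the paper's data.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2000)            # 1 = safe to cross
y_score = np.clip(y_true * 0.55 + rng.normal(0.3, 0.22, 2000), 0, 1)

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
ok = np.where(precision[:-1] >= 0.90)[0]          # points meeting the target
best = ok[np.argmax(recall[:-1][ok])]             # max recall among them
print(f"threshold={thresholds[best]:.3f} "
      f"precision={precision[best]:.2f} recall={recall[best]:.2f}")
```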
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 06:04:30 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Jan 2023 05:32:00 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Brahmbhatt",
"Siddhi",
""
]
] |
new_dataset
| 0.999764 |
2211.09488
|
Chunhui Li
|
Chunhui Li and Mingquan Zhou and Zehua Liu and Yuhe Zhang
|
EPCS: Endpoint-based Part-aware Curve Skeleton Extraction for
Low-quality Point Clouds
|
Need to modify
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The curve skeleton is an important shape descriptor that has been utilized in
various applications in computer graphics, machine vision, and artificial
intelligence. In this study, the endpoint-based part-aware curve skeleton
(EPCS) extraction method for low-quality point clouds is proposed. The novel
random center shift (RCS) method is first proposed for detecting the endpoints
on point clouds. The endpoints are used as the initial seed points for dividing
each part into layers, and then the skeletal points are obtained by computing
the center points of the oriented bounding box (OBB) of the layers.
Subsequently, the skeletal points are connected, thus forming the branches.
Furthermore, the multi-vector momentum-driven (MVMD) method is also proposed
for locating the junction points that connect the branches. Due to the shape
differences among different parts of a point cloud, the global topology of the
skeleton is finally optimized by removing the redundant junction points,
re-connecting some branches using the proposed MVMD method, and applying an
interpolation method based on the splitting operator. Consequently, a complete
and smooth curve skeleton is achieved. The proposed EPCS method is compared
with several state-of-the-art methods, and the experimental results verify its
robustness, effectiveness, and efficiency. Furthermore, the skeleton extraction
and model segmentation results on the point clouds of broken Terracotta also
highlight the utility of the proposed method.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 12:13:49 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Jan 2023 13:47:41 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Li",
"Chunhui",
""
],
[
"Zhou",
"Mingquan",
""
],
[
"Liu",
"Zehua",
""
],
[
"Zhang",
"Yuhe",
""
]
] |
new_dataset
| 0.993109 |
2211.11418
|
Raviraj Joshi
|
Raviraj Joshi
|
L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for
Devanagari based Hindi and Marathi Languages
|
Accepted at ICICC 2023
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The monolingual Hindi BERT models currently available on the model hub do not
perform better than the multi-lingual models on downstream tasks. We present
L3Cube-HindBERT, a Hindi BERT model pre-trained on Hindi monolingual corpus.
Further, since the Indic languages Hindi and Marathi share the Devanagari script,
we train a single model for both languages. We release DevBERT, a Devanagari
BERT model trained on both Marathi and Hindi monolingual datasets. We evaluate
these models on downstream Hindi and Marathi text classification and named
entity recognition tasks. The HindBERT and DevBERT-based models show
significant improvements over multi-lingual MuRIL, IndicBERT, and XLM-R. Based
on these observations, we also release monolingual BERT models for other Indic
languages: Kannada, Telugu, Malayalam, Tamil, Gujarati, Assamese, Odia, Bengali,
and Punjabi. These models are shared at https://huggingface.co/l3cube-pune .
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 13:02:52 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Nov 2022 15:49:08 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Dec 2022 16:39:46 GMT"
},
{
"version": "v4",
"created": "Sun, 8 Jan 2023 06:49:53 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Joshi",
"Raviraj",
""
]
] |
new_dataset
| 0.998484 |
2212.05040
|
Jay Bhanushali
|
Jay Bhanushali, Praneeth Chakravarthula, Manivannan Muniyandi
|
OmniHorizon: In-the-Wild Outdoors Depth and Normal Estimation from
Synthetic Omnidirectional Dataset
|
Fixed the overlapping text in caption for Figure 9 in supplementary
section
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Understanding the ambient scene is imperative for several applications such
as autonomous driving and navigation. While obtaining real-world image data
with per-pixel labels is challenging, existing accurate synthetic image
datasets primarily focus on indoor spaces with fixed lighting and scene
participants, thereby severely limiting their application to outdoor scenarios.
In this work we introduce OmniHorizon, a synthetic dataset with 24,335
omnidirectional views comprising a broad range of indoor and outdoor spaces
consisting of buildings, streets, and diverse vegetation. Our dataset also
accounts for dynamic scene components including lighting, different time-of-day
settings, pedestrians, and vehicles. Furthermore, we also demonstrate a
learned synthetic-to-real cross-domain inference method for in-the-wild 3D
scene depth and normal estimation using our dataset. To this end, we
propose UBotNet, an architecture based on a UNet and a Bottleneck Transformer,
to estimate scene-consistent normals. We show that UBotNet achieves
significantly improved depth accuracy (4.6%) and normal estimation (5.75%)
compared to several existing networks such as U-Net with skip-connections.
Finally, we demonstrate in-the-wild depth and normal estimation on real-world
images with UBotNet trained purely on our OmniHorizon dataset, showing the
promise of the proposed dataset and network for scene understanding.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 18:40:12 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2023 11:48:19 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Bhanushali",
"Jay",
""
],
[
"Chakravarthula",
"Praneeth",
""
],
[
"Muniyandi",
"Manivannan",
""
]
] |
new_dataset
| 0.999785 |
2212.14225
|
Chaofeng Guan
|
Chaofeng Guan, Ruihu Li, Zhi Ma
|
Symplectic self-orthogonal quasi-cyclic codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we obtain necessary and sufficient conditions for quasi-cyclic
codes with even index to be symplectic self-orthogonal. Then, we propose a
method for constructing symplectic self-orthogonal quasi-cyclic codes, which
allows arbitrary polynomials that divide $ x^{n}-1$ to construct symplectic
self-orthogonal codes. Finally, we construct many binary symplectic
self-orthogonal codes with excellent parameters, corresponding to over a
hundred record-breaking quantum codes, improving Grassl's table (bounds on the
minimum distance of quantum codes. http://www.codetables.de).
|
[
{
"version": "v1",
"created": "Thu, 29 Dec 2022 08:48:56 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Jan 2023 11:42:11 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Guan",
"Chaofeng",
""
],
[
"Li",
"Ruihu",
""
],
[
"Ma",
"Zhi",
""
]
] |
new_dataset
| 0.999294 |
2301.01615
|
Zhe Liu
|
Zhe Liu, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai
|
StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D
Object Detection
|
Accepted by AAAI-2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a cross-modal distillation method named
StereoDistill to narrow the gap between the stereo and LiDAR-based approaches
via distilling the stereo detectors from the superior LiDAR model at the
response level, which is usually overlooked in 3D object detection
distillation. The key designs of StereoDistill are: the X-component Guided
Distillation~(XGD) for regression and the Cross-anchor Logit Distillation~(CLD)
for classification. In XGD, instead of empirically adopting a threshold to
select the high-quality teacher predictions as soft targets, we decompose the
predicted 3D box into sub-components and retain the corresponding part for
distillation if the teacher component pilot is consistent with ground truth to
largely boost the number of positive predictions and alleviate the mimicking
difficulty of the student model. For CLD, we aggregate the probability
distribution of all anchors at the same position to encourage the highest
probability anchor rather than individually distill the distribution at the
anchor level. Finally, our StereoDistill achieves state-of-the-art results for
stereo-based 3D detection on the KITTI test benchmark and extensive experiments
on KITTI and Argoverse Dataset validate the effectiveness.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 13:38:48 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Jan 2023 15:12:33 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Liu",
"Zhe",
""
],
[
"Ye",
"Xiaoqing",
""
],
[
"Tan",
"Xiao",
""
],
[
"Ding",
"Errui",
""
],
[
"Bai",
"Xiang",
""
]
] |
new_dataset
| 0.968626 |
2301.02693
|
Muhammad Al-Barham
|
Muhammad Al-Barham, Ahmad Jamal, Musa Al-Yaman
|
Design of Arabic Sign Language Recognition Model
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Deaf people use sign language for communication; it is a combination of
gestures, movements, postures, and facial expressions that correspond to
alphabets and words in spoken languages. The proposed Arabic sign language
recognition model helps deaf and hard-of-hearing people communicate
effectively with ordinary people. The recognition pipeline converts alphabet
signs into letters in four stages: an image loading stage, which loads the
images of Arabic sign language alphabets later used to train and test the
model; a pre-processing stage, which applies image processing techniques such
as normalization, image augmentation, resizing, and filtering to extract the
features necessary to accomplish the recognition; a training stage, carried
out with deep learning techniques such as CNNs; and a testing stage, which
demonstrates how effectively the model performs on images it has not seen
before. The model was built and tested mainly using the PyTorch library. The
model is tested on ArASL2018, which consists of 54,000 images of 32 alphabet
signs gathered from 40 signers and is split into a training set and a testing
set. We ensured that the system is reliable in terms of accuracy, time, and
flexibility of use, as explained in detail in this report. Finally, future
work will target a model that converts Arabic sign language into Arabic text.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 19:19:25 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Al-Barham",
"Muhammad",
""
],
[
"Jamal",
"Ahmad",
""
],
[
"Al-Yaman",
"Musa",
""
]
] |
new_dataset
| 0.997957 |
2301.02734
|
Lucia Korpas
|
Lucia M. Korpas, Seth Frey, Joshua Tan
|
Political, economic, and governance attitudes of blockchain users
|
32 pages, 19 figures
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a survey to evaluate crypto-political, crypto-economic, and
crypto-governance sentiment in people who are part of a blockchain ecosystem.
Based on 3710 survey responses, we describe their beliefs, attitudes, and modes
of participation in crypto and investigate how self-reported political
affiliation and blockchain ecosystem affiliation are associated with these. We
observed polarization in questions on perceptions of the distribution of
economic power, personal attitudes towards crypto, normative beliefs about the
distribution of power in governance, and external regulation of blockchain
technologies. Differences in political self-identification correlated with
opinions on economic fairness, gender equity, decision-making power and how to
obtain favorable regulation, while blockchain affiliation correlated with
opinions on governance and regulation of crypto and respondents' semantic
conception of crypto and personal goals for their involvement. We also find
that a theory-driven constructed political axis is supported by the data and
investigate the possibility of other groupings of respondents or beliefs
arising from the data.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 22:30:22 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Korpas",
"Lucia M.",
""
],
[
"Frey",
"Seth",
""
],
[
"Tan",
"Joshua",
""
]
] |
new_dataset
| 0.994427 |
2301.02837
|
Fabian A. Braeu
|
Fabian A. Braeu, Thanadet Chuangsuwanich, Tin A. Tun, Shamira A.
Perera, Rahat Husain, Aiste Kadziauskiene, Leopold Schmetterer, Alexandre H.
Thi\'ery, George Barbastathis, Tin Aung, and Micha\"el J.A. Girard
|
The 3D Structural Phenotype of the Glaucomatous Optic Nerve Head and its
Relationship with The Severity of Visual Field Damage
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
$\bf{Purpose}$: To describe the 3D structural changes in both connective and
neural tissues of the optic nerve head (ONH) that occur concurrently at
different stages of glaucoma using traditional and AI-driven approaches.
$\bf{Methods}$: We included 213 normal, 204 mild glaucoma (mean deviation
[MD] $\ge$ -6.00 dB), 118 moderate glaucoma (MD of -6.01 to -12.00 dB), and 118
advanced glaucoma patients (MD < -12.00 dB). All subjects had their ONHs imaged
in 3D with Spectralis optical coherence tomography. To describe the 3D
structural phenotype of glaucoma as a function of severity, we used two
different approaches: (1) We extracted human-defined 3D structural parameters
of the ONH including retinal nerve fiber layer (RNFL) thickness, lamina
cribrosa (LC) shape and depth at different stages of glaucoma; (2) we also
employed a geometric deep learning method (i.e. PointNet) to identify the most
important 3D structural features that differentiate ONHs from different
glaucoma severity groups without any human input.
$\bf{Results}$: We observed that the majority of ONH structural changes
occurred in the early glaucoma stage, followed by a plateau effect in the later
stages. Using PointNet, we also found that 3D ONH structural changes were
present in both neural and connective tissues. In both approaches, we observed
that structural changes were more prominent in the superior and inferior
quadrant of the ONH, particularly in the RNFL, the prelamina, and the LC. As
the severity of glaucoma increased, these changes became more diffuse (i.e.
widespread), particularly in the LC.
$\bf{Conclusions}$: In this study, we were able to uncover complex 3D
structural changes of the ONH in both neural and connective tissues as a
function of glaucoma severity. We hope to provide new insights into the complex
pathophysiology of glaucoma that might help clinicians in their daily clinical
care.
|
[
{
"version": "v1",
"created": "Sat, 7 Jan 2023 12:28:43 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Braeu",
"Fabian A.",
""
],
[
"Chuangsuwanich",
"Thanadet",
""
],
[
"Tun",
"Tin A.",
""
],
[
"Perera",
"Shamira A.",
""
],
[
"Husain",
"Rahat",
""
],
[
"Kadziauskiene",
"Aiste",
""
],
[
"Schmetterer",
"Leopold",
""
],
[
"Thiéry",
"Alexandre H.",
""
],
[
"Barbastathis",
"George",
""
],
[
"Aung",
"Tin",
""
],
[
"Girard",
"Michaël J. A.",
""
]
] |
new_dataset
| 0.983184 |
2301.02878
|
Dexter Kozen
|
Keri D'Angelo and Dexter Kozen
|
Abstract Huffman Coding and PIFO Tree Embeddings
|
11 pages
| null | null | null |
cs.IT cs.DS math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Algorithms for deriving Huffman codes and the recently developed algorithm
for compiling PIFO trees to trees of fixed shape (Mohan et al. 2022) are
similar, but work with different underlying algebraic operations. In this
paper, we exploit the monadic structure of prefix codes to create a generalized
Huffman algorithm that has these two applications as special cases.
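The generalized, monad-based algorithm is not reproduced in the abstract; as a
point of reference, here is a minimal Python sketch of classical binary
Huffman coding, the special case being generalized (symbol weights are
arbitrary example values, not data from the paper):

```python
# Minimal sketch of classical binary Huffman coding, the special case
# that the paper's monadic construction generalizes. Symbol weights are
# arbitrary example values.
import heapq
from itertools import count

def huffman_codes(weights):
    """Map each symbol to its prefix code, given symbol -> weight."""
    tiebreak = count()  # keeps heap entries comparable when weights tie
    heap = [(w, next(tiebreak), {sym: ""}) for sym, w in weights.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, next(tiebreak), merged))
    return heap[0][2]

print(huffman_codes({"a": 5, "b": 2, "c": 1, "d": 1}))
```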
|
[
{
"version": "v1",
"created": "Sat, 7 Jan 2023 15:57:32 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"D'Angelo",
"Keri",
""
],
[
"Kozen",
"Dexter",
""
]
] |
new_dataset
| 0.983455 |
2301.02893
|
Billy Javier
|
Billy S. Javier, Leo P. Paliuanan, James Karl A. Agpalza, Jesty S.
Agoto
|
MangngalApp -- An integrated package of technology for COVID-19 response
and rural development: Acceptability and usability using TAM
|
University-approved project
|
Journal of Biodiversity and Environmental Science, November 2022,
V21, No4, pp109-117
| null | null |
cs.CY cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The COVID-19 pandemic has challenged universities and organizations to devise
mechanisms to uplift the well-being and welfare of people and communities. In
response, the design and development of an integrated package of technologies,
MangngalApp -- A web-based portal and mobile responsive application for rural
development served as an opportunity. It showcases different packets of
technologies that were outputs of R&D in the field of fisheries and
aqua-culture, innovations that were IP-protected, and technologies that harness
locally available resources for post-harvest development and aiding in
sustaining growth and development in the communities. This paper focused on the
usability and acceptability of the MangngalApp implementing a descriptive
research design using the Technology Acceptance Model or TAM and ISO 25010
software quality standards. Constrained by government health restrictions due
to COVID-19, a Google form-based questionnaire was forwarded to consented
participants via an email with the attached consent and evaluation form.
Results revealed that the MangngalApp was found to be very acceptable and
usable, and compliant with ISO 25010 software quality characteristics to a
high extent. From the results, it is concluded that the developed MangngalApp
will be a usable and responsive technology that aids rural development,
especially among its target users: fishers, gatherers, processors, traders, and
farmers. Considering compatibility and usefulness, the MangngalApp is expected
to provide greater social development in the community.
|
[
{
"version": "v1",
"created": "Sat, 7 Jan 2023 16:58:42 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Javier",
"Billy S.",
""
],
[
"Paliuanan",
"Leo P.",
""
],
[
"Agpalza",
"James Karl A.",
""
],
[
"Agoto",
"Jesty S.",
""
]
] |
new_dataset
| 0.997624 |
2301.02966
|
Heli Qi
|
Heli Qi, Sashi Novitasari, Andros Tjandra, Sakriani Sakti, Satoshi
Nakamura
|
SpeeChain: A Speech Toolkit for Large-Scale Machine Speech Chain
|
Submitted to ICASSP 2023
| null | null | null |
cs.CL cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces SpeeChain, an open-source Pytorch-based toolkit
designed to develop the machine speech chain for large-scale use. This first
release focuses on the TTS-to-ASR chain, a core component of the machine speech
chain, that refers to the TTS data augmentation by unspoken text for ASR. To
build an efficient pipeline for the large-scale TTS-to-ASR chain, we implement
easy-to-use multi-GPU batch-level model inference, multi-dataloader batch
generation, and on-the-fly data selection techniques. In this paper, we first
explain the overall procedure of the TTS-to-ASR chain and the difficulties of
each step. Then, we present a detailed ablation study on different types of
unlabeled data, data filtering thresholds, batch composition, and
real-synthetic data ratios. Our experimental results on train_clean_460 of
LibriSpeech demonstrate that our TTS-to-ASR chain can significantly improve WER
in a semi-supervised setting.
|
[
{
"version": "v1",
"created": "Sun, 8 Jan 2023 03:16:56 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Qi",
"Heli",
""
],
[
"Novitasari",
"Sashi",
""
],
[
"Tjandra",
"Andros",
""
],
[
"Sakti",
"Sakriani",
""
],
[
"Nakamura",
"Satoshi",
""
]
] |
new_dataset
| 0.97635 |
2301.02978
|
Haibo Wang
|
Yanbaihui Liu and Haibo Wang and Dongming Jia
|
Human Following Based on Visual Perception in the Context of Warehouse
Logistics
|
Under review in 2023 5th international Conference on Materials
Science, Machine and Energy Engineering (MSMEE 2023)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Under the background of 5G, the Internet, artificial intelligence technology,
and robot technology, warehousing and logistics robot technology has developed
rapidly, and products have been widely used. A practical application is to help
warehouse personnel pick up or deliver heavy goods at dispersed locations based
on dynamic routes. However, programs that can only accept instructions or
routes pre-set by the system lack flexibility, while existing human
auto-following techniques either cannot accurately identify specific targets
or require a cumbersome combination of lasers and cameras and do not
accomplish obstacle avoidance well. This paper proposes an algorithm that
combines DeepSort with a width-based tracking module to track targets and uses
artificial potential field local path planning to avoid obstacles. The
evaluation is performed in a self-designed flat bounded test field and
simulated in ROS. Our method achieves SOTA results on following and
successfully reaching the end-point without hitting obstacles.
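The artificial potential field step lends itself to a compact sketch. The
following is a hedged illustration with made-up gains, radii, and scenario,
not the paper's parameters:

```python
# Hedged sketch of artificial-potential-field local planning: attractive
# pull toward the followed person, repulsive push from nearby obstacles.
# Gains, radii, and the scenario are illustrative, not the paper's values.
import numpy as np

def apf_step(robot, goal, obstacles, k_att=1.0, k_rep=50.0, d0=1.5):
    """One gradient step; returns a unit direction for the robot."""
    force = k_att * (goal - robot)  # attractive: proportional to error
    for obs in obstacles:
        diff = robot - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:  # repulsion only inside the influence radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return force / (np.linalg.norm(force) + 1e-9)

robot = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])          # tracked person
obstacles = [np.array([2.5, 0.2])]   # a box in the aisle
for _ in range(60):
    robot = robot + 0.1 * apf_step(robot, goal, obstacles)
print("final position:", np.round(robot, 2))
```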
|
[
{
"version": "v1",
"created": "Sun, 8 Jan 2023 05:03:01 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Liu",
"Yanbaihui",
""
],
[
"Wang",
"Haibo",
""
],
[
"Jia",
"Dongming",
""
]
] |
new_dataset
| 0.975336 |
2301.02983
|
Fangzhi Xu
|
Fangzhi Xu, Jun Liu, Qika Lin, Tianzhe Zhao, Jian Zhang, Lingling
Zhang
|
Mind Reasoning Manners: Enhancing Type Perception for Generalized
Zero-shot Logical Reasoning over Text
|
12 pages, 7 figures
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The logical reasoning task involves diverse types of complex reasoning over text,
based on the form of multiple-choice question answering. Given the context,
question and a set of options as the input, previous methods achieve superior
performances on the full-data setting. However, the current benchmark dataset
has the ideal assumption that the reasoning type distribution on the train
split is close to the test split, which is inconsistent with many real
application scenarios. To address it, there remain two problems to be studied:
(1) How is the zero-shot capability of the models (train on seen types and test
on unseen types)? (2) How to enhance the perception of reasoning types for the
models? For problem 1, we propose a new benchmark for generalized zero-shot
logical reasoning, named ZsLR. It includes six splits based on the three type
sampling strategies. For problem 2, a type-aware model TaCo is proposed. It
utilizes both the heuristic input reconstruction and the contrastive learning
to improve the type perception in the global representation. Extensive
experiments on both the zero-shot and full-data settings prove the superiority
of TaCo over the state-of-the-art methods. Also, we experimentally verify the
generalization capability of TaCo on other logical reasoning datasets.
|
[
{
"version": "v1",
"created": "Sun, 8 Jan 2023 05:24:34 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Xu",
"Fangzhi",
""
],
[
"Liu",
"Jun",
""
],
[
"Lin",
"Qika",
""
],
[
"Zhao",
"Tianzhe",
""
],
[
"Zhang",
"Jian",
""
],
[
"Zhang",
"Lingling",
""
]
] |
new_dataset
| 0.981023 |
2301.03033
|
Zhengyi Liu
|
Zhengyi Liu, Wei Wu, Yacheng Tan, Guanghui Zhang
|
RGB-T Multi-Modal Crowd Counting Based on Transformer
| null |
BMVC2022
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crowd counting aims to estimate the number of persons in a scene. Most
state-of-the-art crowd counting methods based on color images cannot work well
in poor illumination conditions due to invisible objects. With the widespread
use of infrared cameras, crowd counting based on color and thermal images is
studied. Existing methods only achieve multi-modal fusion without count
objective constraint. To better excavate multi-modal information, we use
count-guided multi-modal fusion and modal-guided count enhancement to achieve
impressive performance. The proposed count-guided multi-modal fusion module
utilizes a multi-scale token transformer to interact two-modal information
under the guidance of count information and perceive different scales from the
token perspective. The proposed modal-guided count enhancement module employs
multi-scale deformable transformer decoder structure to enhance one modality
feature and count information using the other modality. Experiments on the
public RGBT-CC dataset show that our method refreshes the state-of-the-art results.
https://github.com/liuzywen/RGBTCC
|
[
{
"version": "v1",
"created": "Sun, 8 Jan 2023 12:59:52 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Liu",
"Zhengyi",
""
],
[
"Wu",
"Wei",
""
],
[
"Tan",
"Yacheng",
""
],
[
"Zhang",
"Guanghui",
""
]
] |
new_dataset
| 0.969238 |
2301.03130
|
Shadrokh Samavi
|
MohammadReza Naderi, MohammadHossein Givkashi, Nader Karimi, Shahram
Shirani, Shadrokh Samavi
|
SFI-Swin: Symmetric Face Inpainting with Swin Transformer by Distinctly
Learning Face Components Distributions
|
13 pages, 5 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Image inpainting consists of filling holes or missing parts of an image.
Inpainting face images with symmetric characteristics is more challenging than
inpainting a natural scene. None of the powerful existing models can fill out
the missing parts of an image while considering the symmetry and homogeneity of
the picture. Moreover, the metrics that assess a repaired face image quality
cannot measure the preservation of symmetry between the rebuilt and existing
parts of a face. In this paper, we intend to solve the symmetry problem in the
face inpainting task by using multiple discriminators that check each face
organ's reality separately and a transformer-based network. We also propose
"symmetry concentration score" as a new metric for measuring the symmetry of a
repaired face image. The quantitative and qualitative results show the
superiority of our proposed method compared to some of the recently proposed
algorithms in terms of the reality, symmetry, and homogeneity of the inpainted
parts.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 00:56:51 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Naderi",
"MohammadReza",
""
],
[
"Givkashi",
"MohammadHossein",
""
],
[
"Karimi",
"Nader",
""
],
[
"Shirani",
"Shahram",
""
],
[
"Samavi",
"Shadrokh",
""
]
] |
new_dataset
| 0.986741 |
2301.03164
|
Ali Mirza Dr
|
Ali Mirza, Imran Siddiqi
|
Cursive Caption Text Detection in Videos
|
19 pages, 16 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Textual content appearing in videos represents an interesting index for
semantic retrieval of videos (from archives), generation of alerts (live
streams) as well as high level applications like opinion mining and content
summarization. One of the key components of such systems is the detection of
textual content in video frames, which is the subject of our present study.
This paper presents a robust technique for detecting textual content
appearing in video frames. More specifically we target text in cursive script
taking Urdu text as a case study. Detection of textual regions in video frames
is carried out by fine-tuning object detectors based on deep convolutional
neural networks for the specific case of text detection. Since it is common to
have videos with caption text in multiple-scripts, cursive text is
distinguished from Latin text using a script-identification module. Finally,
detection and script identification are combined in a single end-to-end
trainable system. Experiments on a comprehensive dataset of around 11,000 video
frames report an F-measure of 0.91.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 04:30:48 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Mirza",
"Ali",
""
],
[
"Siddiqi",
"Imran",
""
]
] |
new_dataset
| 0.998858 |
2301.03191
|
Vaclav Skala
|
Vaclav Skala
|
Line-Torus Intersection for Ray Tracing: Alternative Formulations
|
Draft of the paper published Line-Torus Intersection: Alternative
Formulations, WSEAS Trans. on Computers, ISSN 2224-2872, Vol. 7, No. 12,
pp.288-297, 2013
|
WSEAS Trans. on Computers, ISSN 2224-2872, Vol. 7, No. 12,
pp.288-297, 2013
| null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Intersection algorithms are very important in computation of geometrical
problems. Algorithms for a line intersection with linear or quadratic surfaces
are quite efficient. However, algorithms for a line intersection with other
surfaces are more complex and time consuming. In this case the object is
usually closed into a simple bounding volume to speed up the cases when the
given line cannot intersect the given object. In this paper, new formulations
of the line-torus intersection problem are presented, along with a new
specification of the bounding volume for a torus. The presented approach is
based on the idea of intersecting a line with the envelope of a rotating
sphere that forms the torus. This approach leads to a new bounding volume that
is more effective, as it also detects cases in which the line passes through
the "hole" of the torus.
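As a hedged illustration of the underlying computation (the classical quartic
formulation, not the paper's alternative ones), a line-torus intersection can
be assembled and solved numerically:

```python
# Hedged illustration: the standard quartic for a line-torus intersection,
# not the paper's alternative formulations. Torus: major radius R around
# the z-axis, tube radius r, with implicit form
# (x^2 + y^2 + z^2 + R^2 - r^2)^2 = 4 R^2 (x^2 + y^2).
import numpy as np

def line_torus_intersections(p, d, R, r):
    """Real parameters t where the ray p + t*d meets the torus."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    # q(t) = |p + t d|^2 + R^2 - r^2, a quadratic in t (coeffs high->low).
    q = [d @ d, 2 * (p @ d), p @ p + R * R - r * r]
    # s(t) = (p_x + t d_x)^2 + (p_y + t d_y)^2, also quadratic.
    s = [d[0]**2 + d[1]**2,
         2 * (p[0] * d[0] + p[1] * d[1]),
         p[0]**2 + p[1]**2]
    quartic = np.polysub(np.polymul(q, q), (4 * R * R) * np.asarray(s))
    roots = np.roots(quartic)
    return sorted(t.real for t in roots if abs(t.imag) < 1e-9)

# A ray along x through the origin hits a torus (R=2, r=0.5) four times.
print(line_torus_intersections([-4, 0, 0], [1, 0, 0], 2.0, 0.5))
```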
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 07:52:45 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Skala",
"Vaclav",
""
]
] |
new_dataset
| 0.994326 |
2301.03232
|
Hamdam Ghanatian
|
Hamdam Ghanatian, Luana Benetti, Pedro Anacleto, Tim Bohnert, Hooman
Farkhani, Ricardo Ferreira, Farshad Moradi
|
Spin-Orbit Torque Flash Analog-to-Digital Converter
| null | null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Although analog-to-digital converters (ADCs) are critical components in
mixed-signal integrated circuits (ICs), their performance has not improved
significantly over the last decade. To achieve a radical improvement (compact,
low power and reliable ADCs), spintronics can be considered as a proper
candidate due to its compatibility with CMOS and wide applications in storage,
neuromorphic computing, and so on. In this paper, a proof-of-concept of a 3-bit
spin-CMOS Flash ADC using in-plane-anisotropy magnetic tunnel junctions
(i-MTJs) with spin-orbit torque (SOT) switching mechanism is designed,
fabricated and characterized. The proposed ADC replaces the current mirrors and
power-hungry comparators in the conventional Flash ADC with seven parallel
i-MTJs with different heavy metal (HM) widths. Monte-Carlo simulations based on
the experimental measurements show that process variations/mismatch limit the
accuracy of the proposed ADC to 2 bits. Moreover, the maximum differential
nonlinearity (DNL) and integral nonlinearity (INL) are 0.739 LSB (least
significant bit) and 0.7319 LSB, respectively.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 09:54:27 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Ghanatian",
"Hamdam",
""
],
[
"Benetti",
"Luana",
""
],
[
"Anacleto",
"Pedro",
""
],
[
"Bohnert",
"Tim",
""
],
[
"Farkhani",
"Hooman",
""
],
[
"Ferreira",
"Ricardo",
""
],
[
"Moradi",
"Farshad",
""
]
] |
new_dataset
| 0.998602 |
2301.03238
|
Judith Yue Li
|
Judith Yue Li, Aren Jansen, Qingqing Huang, Joonseok Lee, Ravi Ganti,
Dima Kuzmin
|
MAQA: A Multimodal QA Benchmark for Negation
|
NeurIPS 2022 SyntheticData4ML Workshop
| null | null | null |
cs.CL cs.AI cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal learning can benefit from the representation power of pretrained
Large Language Models (LLMs). However, state-of-the-art transformer based LLMs
often ignore negations in natural language and there is no existing benchmark
to quantitatively evaluate whether multimodal transformers inherit this
weakness. In this study, we present a new multimodal question answering (QA)
benchmark adapted from labeled music videos in AudioSet (Gemmeke et al., 2017)
with the goal of systematically evaluating if multimodal transformers can
perform complex reasoning to recognize new concepts as negation of previously
learned concepts. We show that with a standard fine-tuning approach,
multimodal transformers are still incapable of correctly interpreting negation
irrespective of model size. However, our experiments demonstrate that
augmenting the original training task distributions with negated QA examples
allows the model to reliably reason with negation. To do this, we describe a
novel data generation procedure that prompts the 540B-parameter PaLM model to
automatically generate negated QA examples as compositions of easily accessible
video tags. The generated examples contain more natural linguistic patterns and
the gains compared to template-based task augmentation approach are
significant.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 10:11:23 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Li",
"Judith Yue",
""
],
[
"Jansen",
"Aren",
""
],
[
"Huang",
"Qingqing",
""
],
[
"Lee",
"Joonseok",
""
],
[
"Ganti",
"Ravi",
""
],
[
"Kuzmin",
"Dima",
""
]
] |
new_dataset
| 0.999743 |
2301.03347
|
Yi Geng
|
Yi Geng
|
A Novel Waveform Design for OFDM-Based Joint Sensing and Communication
System
|
3rd IEEE International Symposium on Joint Communications & Sensing
(JC&S 2023), accepted and to be published
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The dominating waveform in 5G is orthogonal frequency division multiplexing
(OFDM). OFDM will remain a promising waveform candidate for joint communication
and sensing (JCAS) in 6G since OFDM can provide excellent data transmission
capability and accurate sensing information. This paper proposes a novel
OFDM-based diagonal waveform structure and corresponding signal processing
algorithm. This approach allocates the sensing signals along the diagonal of
the time-frequency resource block. Therefore, the sensing signals in a linear
structure span both the frequency and time domains. The range and velocity of
the object can be estimated simultaneously by applying 1D-discrete Fourier
transform (DFT) to the diagonal sensing signals. Compared to the conventional
2D-DFT OFDM radar algorithm, the computational complexity of the proposed
algorithm is low. In addition, the sensing overhead can be substantially
reduced. The performance of the proposed waveform is evaluated using simulation
and analysis of results.
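As a hedged sketch of the diagonal read-out (the phase model and normalization
are standard OFDM-radar assumptions, not details from the paper; the paper's
full design recovers delay and Doppler separately, while this sketch only
shows the single 1D-DFT applied to the diagonal pilots):

```python
# Hedged sketch of the diagonal read-out described above. G is the
# channel-compensated OFDM grid (subcarriers x symbols); its phase model
# is the standard OFDM-radar assumption, not taken from the paper.
import numpy as np

N = 256                      # subcarriers = OFDM symbols in the block
delay_bin, doppler_bin = 20, 5
k = np.arange(N)
# G[n, m]: delay shows up across subcarriers n, Doppler across symbols m.
G = np.exp(-2j * np.pi * k[:, None] * delay_bin / N) \
  * np.exp(2j * np.pi * k[None, :] * doppler_bin / N)

diag = np.diag(G)            # pilots placed on the (n, n) diagonal
peak = int(np.argmax(np.abs(np.fft.fft(diag))))
print("1D-DFT peak bin:", peak, "expected:", (doppler_bin - delay_bin) % N)
```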
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 14:00:21 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Geng",
"Yi",
""
]
] |
new_dataset
| 0.988054 |
2301.03350
|
Allan Quadros
|
Allan V. C. Quadros
|
mRpostman: An IMAP Client for R
|
16 pages, 2 figures. Submitted to SoftwareX in Jan/2021
| null | null | null |
cs.NI stat.OT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Internet Message Access Protocol (IMAP) clients are a common feature in
several programming languages. Despite having some packages for electronic
message retrieval, the R language, until recently, lacked a broader solution,
capable of coping with different IMAP servers and providing a wide spectrum of
features. mRpostman covers most of the IMAP 4rev1 functionalities by
implementing tools for message searching, selective fetching of message
attributes, mailbox management, attachment extraction, and several other IMAP
features that can be executed in virtually any mail provider. By doing so, it
enables users to perform data analysis based on e-mail content. The goal of
this article is to showcase the toolkit provided with the mRpostman package, to
describe its key features and provide some application examples.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 07:39:59 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Quadros",
"Allan V. C.",
""
]
] |
new_dataset
| 0.999729 |
2301.03380
|
William Schoeler
|
William B. Schoeler
|
A Low-Cost ISM-Band Multi-Transceiver Cognitive Radio
|
Masters thesis
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A Cognitive Radio is a type of Software-Defined Radio (SDR) that
automatically detects available wireless spectrum and adjusts its physical
hardware, modulation, or protocol parameters to obtain optimal throughput,
latency, and range. Much of prior Cognitive Radio research and design has
required expensive transceivers using licensed bands that are not openly
available for use by unlicensed users. This thesis presents a low-cost hardware
platform built from off-the-shelf components that utilizes free-to-use
Industrial, Scientific, and Medical (ISM) bands, and implements a concurrent
multi-spectrum point-to-point wireless protocol optimized for non-stationary
devices. Performance metrics such as cost, latency, throughput, and range are
measured and analyzed. Applications of such a wireless implementation are
proposed and implemented, such as smart-city infrastructure that allows
internet connectivity to inner-city users by providing Wi-Fi Access Points
through mobile On-Board Unit (OBU) devices with uplinks delivered from
stationary Roadside Unit (RSU) devices.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 16:07:00 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Schoeler",
"William B.",
""
]
] |
new_dataset
| 0.999353 |
2301.03432
|
Fang Xu
|
Fang Xu, Yilei Shi, Patrick Ebel, Wen Yang and Xiao Xiang Zhu
|
High-Resolution Cloud Removal with Multi-Modal and Multi-Resolution Data
Fusion: A New Baseline and Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce Planet-CR, a benchmark dataset for
high-resolution cloud removal with multi-modal and multi-resolution data
fusion. Planet-CR is the first public dataset for cloud removal to feature
globally sampled high resolution optical observations, in combination with
paired radar measurements as well as pixel-level land cover annotations. It
provides a solid basis for exhaustive evaluation in terms of generating visually
pleasing textures and semantically meaningful structures. With this dataset, we
consider the problem of cloud removal in high resolution optical remote sensing
imagery by integrating multi-modal and multi-resolution information. Existing
multi-modal data fusion based methods, which assume the image pairs are aligned
pixel-to-pixel, are hence not appropriate for this problem. To this end, we
design a new baseline named Align-CR to perform the low-resolution SAR image
guided high-resolution optical image cloud removal. It implicitly aligns the
multi-modal and multi-resolution data during the reconstruction process to
promote the cloud removal performance. The experimental results demonstrate
that the proposed Align-CR method gives the best performance in both visual
recovery quality and semantic recovery quality. The project is available at
https://github.com/zhu-xlab/Planet-CR; we hope this will inspire future
research.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 15:31:28 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Xu",
"Fang",
""
],
[
"Shi",
"Yilei",
""
],
[
"Ebel",
"Patrick",
""
],
[
"Yang",
"Wen",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.999465 |
2301.03436
|
Zheng Zhang
|
Zheng Zhang, Yuanwei Liu, Zhaolin Wang, Jian Chen
|
STARS-ISAC: How Many Sensors Do We Need?
|
journal paper
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
A simultaneously transmitting and reflecting surface (STARS) enabled
integrated sensing and communications (ISAC) framework is proposed, where a
novel bi-directional sensing-STARS architecture is devised to facilitate the
full-space communication and sensing. Based on the proposed framework, a joint
optimization problem is formulated, where the Cramer-Rao bound (CRB) for
estimating the two-dimensional direction-of-arrival of the sensing target is
minimized. Two cases are considered for sensing performance enhancement. 1) For
the two-user case, an alternating optimization algorithm is proposed. In
particular, the maximum number of deployable sensors is obtained in the
closed-form expressions. 2) For the multi-user case, an extended CRB (ECRB)
metric is proposed to characterize the impact of the number of sensors on the
sensing performance. Based on the proposed metric, a novel penalty-based
double-loop (PDL) algorithm is proposed to solve the ECRB minimization problem.
To tackle the coupling of the ECRB, a general decoupling approach is proposed
to convert it to a tractable weighted linear summation form. Simulation results
reveal that 1) the proposed PDL algorithm can achieve a near-optimal
performance with consideration of sensor deployment; 2) without violating the
communication quality-of-service requirements, reducing the number of receive
antennas at the BS does not deteriorate the sensing performance; and 3) it is
preferable to deploy more passive elements than sensors in terms of achieving
optimal sensing performance.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 15:36:46 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Zhang",
"Zheng",
""
],
[
"Liu",
"Yuanwei",
""
],
[
"Wang",
"Zhaolin",
""
],
[
"Chen",
"Jian",
""
]
] |
new_dataset
| 0.982419 |
2301.03445
|
Konstantinos Demertzis
|
Alexandros Papanikolaou, Aggelos Alevizopoulos, Christos Ilioudis,
Konstantinos Demertzis, Konstantinos Rantos
|
A Cyber Threat Intelligence Management Platform for Industrial
Environments
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Developing intelligent, interoperable Cyber Threat Information (CTI) sharing
technologies can help build strong defences against modern cyber threats. CTIs
allow the community to share information about cybercriminals' threats and
vulnerabilities and countermeasures to defend themselves or detect malicious
activity. A crucial need for success is that the data connected to cyber risks
be understandable, organized, and of good quality, so that the receiving
parties can grasp its content and utilize it effectively. This article describes an
innovative cyber threat intelligence management platform (CTIMP) for industrial
environments, one of the Cyber-pi project's significant elements. The suggested
architecture, in particular, uses cyber knowledge from trusted public sources
and integrates it with relevant information from the organization's supervised
infrastructure in an entirely interoperable and intelligent way. When combined
with an advanced visualization mechanism and user interface, the services
mentioned above provide administrators with the situational awareness they
require while also allowing for extended cooperation, intelligent selection of
advanced coping strategies, and a set of automated self-healing rules for
dealing with threats.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 15:50:08 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Papanikolaou",
"Alexandros",
""
],
[
"Alevizopoulos",
"Aggelos",
""
],
[
"Ilioudis",
"Christos",
""
],
[
"Demertzis",
"Konstantinos",
""
],
[
"Rantos",
"Konstantinos",
""
]
] |
new_dataset
| 0.981048 |
1905.08792
|
Pascal Giard
|
Jean-Fran\c{c}ois T\^etu, Louis-Charles Trudeau, Michiel Van
Beirendonck, Alexios Balatsoukas-Stimming, Pascal Giard
|
A Standalone FPGA-based Miner for Lyra2REv2 Cryptocurrencies
|
13 pages, accepted for publication in IEEE Trans. Circuits Syst. I.
arXiv admin note: substantial text overlap with arXiv:1807.05764
| null |
10.1109/TCSI.2020.2970923
| null |
cs.CR eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lyra2REv2 is a hashing algorithm that consists of a chain of individual
hashing algorithms, and it is used as a proof-of-work function in several
cryptocurrencies. The most crucial and exotic hashing algorithm in the
Lyra2REv2 chain is a specific instance of the general Lyra2 algorithm. This
work presents the first hardware implementation of the specific instance of
Lyra2 that is used in Lyra2REv2. Several properties of the aforementioned
algorithm are exploited in order to optimize the design. In addition, an
FPGA-based hardware implementation of a standalone miner for Lyra2REv2 on a
Xilinx Multi-Processor System on Chip is presented. The proposed Lyra2REv2
miner is shown to be significantly more energy efficient than both a GPU and a
commercially available FPGA-based miner. Finally, we also explain how the
simplified Lyra2 and Lyra2REv2 architectures can be modified with minimal
effort to also support the recent Lyra2REv3 chained hashing algorithm.
|
[
{
"version": "v1",
"created": "Tue, 21 May 2019 14:58:54 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jan 2020 19:59:10 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Têtu",
"Jean-François",
""
],
[
"Trudeau",
"Louis-Charles",
""
],
[
"Van Beirendonck",
"Michiel",
""
],
[
"Balatsoukas-Stimming",
"Alexios",
""
],
[
"Giard",
"Pascal",
""
]
] |
new_dataset
| 0.999013 |
2007.07227
|
Istv\'an S\'ar\'andi
|
Istv\'an S\'ar\'andi and Timm Linder and Kai O. Arras and Bastian
Leibe
|
MeTRAbs: Metric-Scale Truncation-Robust Heatmaps for Absolute 3D Human
Pose Estimation
|
See project page at https://vision.rwth-aachen.de/metrabs . Accepted
for publication in the IEEE Transactions on Biometrics, Behavior, and
Identity Science (TBIOM), Special Issue "Selected Best Works From Automated
Face and Gesture Recognition 2020". Extended version of FG paper
arXiv:2003.02953
|
IEEE Transactions on Biometrics, Behavior, and Identity Science,
vol. 3, no. 1, pp. 16-30, Jan. 2021
|
10.1109/TBIOM.2020.3037257
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Heatmap representations have formed the basis of human pose estimation
systems for many years, and their extension to 3D has been a fruitful line of
recent research. This includes 2.5D volumetric heatmaps, whose X and Y axes
correspond to image space and Z to metric depth around the subject. To obtain
metric-scale predictions, 2.5D methods need a separate post-processing step to
resolve scale ambiguity. Further, they cannot localize body joints outside the
image boundaries, leading to incomplete estimates for truncated images. To
address these limitations, we propose metric-scale truncation-robust (MeTRo)
volumetric heatmaps, whose dimensions are all defined in metric 3D space,
instead of being aligned with image space. This reinterpretation of heatmap
dimensions allows us to directly estimate complete, metric-scale poses without
test-time knowledge of distance or relying on anthropometric heuristics, such
as bone lengths. To further demonstrate the utility of our representation, we
present a differentiable combination of our 3D metric-scale heatmaps with 2D
image-space ones to estimate absolute 3D pose (our MeTRAbs architecture). We
find that supervision via absolute pose loss is crucial for accurate
non-root-relative localization. Using a ResNet-50 backbone without further
learned layers, we obtain state-of-the-art results on Human3.6M, MPI-INF-3DHP
and MuPoTS-3D. Our code will be made publicly available to facilitate further
research.
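Heatmap-based estimators of this family typically decode coordinates with a
differentiable soft-argmax; the following is a hedged illustration of that
read-out for a metric-scale volume, not code from the paper, and the grid
extents are illustrative assumptions:

```python
# Hedged illustration: soft-argmax decoding of a metric-scale volumetric
# heatmap, the differentiable read-out heatmap-based estimators rely on.
# Grid extents are illustrative assumptions, not the paper's values.
import numpy as np

def soft_argmax_3d(logits, extent_m=2.0):
    """Expected (x, y, z) in metres from a D x H x W heatmap of logits."""
    d, h, w = logits.shape
    p = np.exp(logits - logits.max())
    p /= p.sum()                                # softmax over the volume
    # Metric coordinate of each voxel centre, axes spanning +/- extent/2.
    xs, ys, zs = (np.linspace(-extent_m / 2, extent_m / 2, n)
                  for n in (w, h, d))
    px = p.sum(axis=(0, 1))                     # marginal over x (width)
    py = p.sum(axis=(0, 2))                     # marginal over y (height)
    pz = p.sum(axis=(1, 2))                     # marginal over z (depth)
    return np.array([xs @ px, ys @ py, zs @ pz])

heat = np.full((8, 8, 8), -5.0)
heat[2, 4, 6] = 5.0                             # a confident peak
print(np.round(soft_argmax_3d(heat), 3))
```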
|
[
{
"version": "v1",
"created": "Sun, 12 Jul 2020 11:52:09 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Nov 2020 19:32:45 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Sárándi",
"István",
""
],
[
"Linder",
"Timm",
""
],
[
"Arras",
"Kai O.",
""
],
[
"Leibe",
"Bastian",
""
]
] |
new_dataset
| 0.999649 |
2108.03358
|
Shu Wang
|
Xinda Wang, Shu Wang, Pengbin Feng, Kun Sun, Sushil Jajodia, Sanae
Benchaaboun, Frank Geck
|
PatchRNN: A Deep Learning-Based System for Security Patch Identification
| null |
2021 IEEE Military Communications Conference (MILCOM), 2021, pp.
595-600
|
10.1109/MILCOM52596.2021.9652940
| null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
With the increasing usage of open-source software (OSS) components,
vulnerabilities embedded within them are propagated to a huge number of
underlying applications. In practice, the timely application of security
patches in downstream software is challenging. The main reason is that such
patches do not explicitly indicate their security impacts in the documentation,
which would be difficult to recognize for software maintainers and users.
However, attackers can still identify these "secret" security patches by
analyzing the source code and generate corresponding exploits to compromise not
only unpatched versions of the current software, but also other similar
software packages that may contain the same vulnerability due to code cloning
or similar design/implementation logic. Therefore, it is critical to identify
these secret security patches to enable timely fixes. To this end, we propose a
deep learning-based defense system called PatchRNN to automatically identify
secret security patches in OSS. Besides considering descriptive keywords in the
commit message (i.e., at the text level), we leverage both syntactic and
semantic features at the source-code level. To evaluate the performance of our
system, we apply it on a large-scale real-world patch dataset and conduct a
case study on popular open-source web server software, NGINX. Experimental
results show that PatchRNN can successfully detect secret security patches
with a low false positive rate.
|
[
{
"version": "v1",
"created": "Sat, 7 Aug 2021 03:36:19 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2023 20:05:21 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Wang",
"Xinda",
""
],
[
"Wang",
"Shu",
""
],
[
"Feng",
"Pengbin",
""
],
[
"Sun",
"Kun",
""
],
[
"Jajodia",
"Sushil",
""
],
[
"Benchaaboun",
"Sanae",
""
],
[
"Geck",
"Frank",
""
]
] |
new_dataset
| 0.998931 |
2111.12062
|
Alex Tamkin
|
Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel Fein, Colin Schultz, Noah
Goodman
|
DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning
|
NeurIPS 2021
| null | null | null |
cs.LG cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning algorithms, including BERT and SimCLR, have enabled
significant strides in fields like natural language processing, computer
vision, and speech processing. However, these algorithms are domain-specific,
meaning that new self-supervised learning algorithms must be developed for each
new setting, including myriad healthcare, scientific, and multimodal domains.
To catalyze progress toward domain-agnostic methods, we introduce DABS: a
Domain-Agnostic Benchmark for Self-supervised learning. To perform well on
DABS, an algorithm is evaluated on seven diverse domains: natural images,
multichannel sensor data, English text, speech recordings, multilingual text,
chest x-rays, and images with text descriptions. Each domain contains an
unlabeled dataset for pretraining; the model is then scored based on its
downstream performance on a set of labeled tasks in the domain. We also present
e-Mix and ShED: two baseline domain-agnostic algorithms; their relatively
modest performance demonstrates that significant progress is needed before
self-supervised learning is an out-of-the-box solution for arbitrary domains.
Code for benchmark datasets and baseline algorithms is available at
https://github.com/alextamkin/dabs.
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 18:22:14 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2023 22:27:11 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Tamkin",
"Alex",
""
],
[
"Liu",
"Vincent",
""
],
[
"Lu",
"Rongfei",
""
],
[
"Fein",
"Daniel",
""
],
[
"Schultz",
"Colin",
""
],
[
"Goodman",
"Noah",
""
]
] |
new_dataset
| 0.999494 |
2112.07137
|
Liangdong Lu
|
Chaofeng Guan, Ruihu Li, Liangdong Lu, Yu Yao
|
New Binary Quantum Codes Constructed from Quasi-Cyclic Codes
| null | null |
10.1007/s10773-022-05126-6
| null |
cs.IT math.IT quant-ph
|
http://creativecommons.org/licenses/by-sa/4.0/
|
It is well known that quantum codes can be constructed by means of classical
symplectic dual-containing codes. This paper considers a family of
two-generator quasi-cyclic codes and derives sufficient conditions for these
codes to be symplectic dual-containing. Then, a new method for constructing
binary quantum codes using symplectic dual-containing codes is proposed. As an
application, we construct 8 binary quantum codes that exceed the best-known
results. Further, another 36 new binary quantum codes are obtained by
propagation rules, all of which improve the lower bound on the minimum
distances.
|
[
{
"version": "v1",
"created": "Tue, 14 Dec 2021 03:22:16 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Jul 2022 00:43:25 GMT"
},
{
"version": "v3",
"created": "Fri, 6 Jan 2023 01:05:31 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Guan",
"Chaofeng",
""
],
[
"Li",
"Ruihu",
""
],
[
"Lu",
"Liangdong",
""
],
[
"Yao",
"Yu",
""
]
] |
new_dataset
| 0.999679 |
2208.01814
|
Angelina Aquino
|
Angelina Aquino and Franz de Leon
|
Benchmarking zero-shot and few-shot approaches for tokenization,
tagging, and dependency parsing of Tagalog text
|
Presented at PACLIC 2022. 12 pages, 3 figures, 4 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The grammatical analysis of texts in any written language typically involves
a number of basic processing tasks, such as tokenization, morphological
tagging, and dependency parsing. State-of-the-art systems can achieve high
accuracy on these tasks for languages with large datasets, but yield poor
results for languages which have little to no annotated data. To address this
issue for the Tagalog language, we investigate the use of alternative language
resources for creating task-specific models in the absence of
dependency-annotated Tagalog data. We also explore the use of word embeddings
and data augmentation to improve performance when only a small amount of
annotated Tagalog data is available. We show that these zero-shot and few-shot
approaches yield substantial improvements on grammatical analysis of both
in-domain and out-of-domain Tagalog text compared to state-of-the-art
supervised baselines.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 02:20:10 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2023 04:37:02 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Aquino",
"Angelina",
""
],
[
"de Leon",
"Franz",
""
]
] |
new_dataset
| 0.994352 |
2208.11674
|
Zuguang Gu
|
Zuguang Gu
|
On the Dependency Heaviness of CRAN/Bioconductor Ecosystem
|
Journal of Systems and Software 2023
| null |
10.1016/j.jss.2023.111610
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The R package ecosystem is expanding fast and dependencies among packages in
the ecosystem are becoming more complex. In this study, we explored the package
dependencies from a new aspect. We applied a new metric named "dependency
heaviness" which measures the number of additional strong dependencies that a
package uniquely contributes to its child or downstream packages. It also
measures the total reduced dependencies in the ecosystem when the role of a
package is changed from a strong parent to a weak parent. We systematically
studied how the dependency heaviness spreads from parent to child packages, and
how it further spreads to remote downstream packages in the CRAN/Bioconductor
ecosystem. We extracted the top packages and key paths that are the main
transmitters of heavy dependencies in the ecosystem. Additionally, the dependency heaviness analysis
on the ecosystem has been implemented as a web-based database that provides
comprehensive tools for querying dependencies of individual R packages.
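To make the heaviness metric concrete, here is a toy sketch in Python (our own simplification with made-up packages; the paper's analysis additionally distinguishes strong from weak dependencies and handles the full CRAN/Bioconductor graph):

# Toy "dependency heaviness": count the strong dependencies that reach a
# child package only through one particular parent.
strong_deps = {                     # package -> direct strong dependencies
    "child": ["p1", "p2"],
    "p1": ["a", "b", "shared"], "p2": ["shared"],
    "a": [], "b": [], "shared": [],
}

def closure(pkg):
    """All packages reachable from pkg via strong dependencies."""
    seen, stack = set(), list(strong_deps.get(pkg, []))
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(strong_deps.get(d, []))
    return seen

def heaviness(parent, child):
    via_parent = closure(parent) | {parent}
    via_others = set()
    for q in strong_deps[child]:
        if q != parent:
            via_others |= closure(q) | {q}
    return len(via_parent - via_others)

print(heaviness("p1", "child"))     # -> 3: "p1", "a", "b" ("shared" also comes via "p2")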
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 17:12:31 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2022 17:34:33 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Jan 2023 19:59:13 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Gu",
"Zuguang",
""
]
] |
new_dataset
| 0.999262 |
2209.06390
|
Yunpu Zhang
|
Yunpu Zhang, Changsheng You, Beixiong Zheng
|
Multi-Active Multi-Passive (MAMP)-IRS Aided Wireless Communication: A
Multi-Hop Beam Routing Design
|
In this updated version, we refine some results in the original
paper. We studied the multi-hop beam routing design for a new multi-active
multi-passive (MAMP)-IRS aided wireless communication system. This paper has
been submitted to IEEE for possible publication. arXiv admin note: text
overlap with arXiv:2208.11877
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior studies on intelligent reflecting surface (IRS) have mostly considered
wireless communication systems aided by a single passive IRS, which, however,
has limited control over wireless propagation environment and suffers severe
product-distance path-loss. To address these issues, we propose in this paper a
new multi-active multi-passive (MAMP)-IRS aided wireless communication system,
where a number of active and passive IRSs are deployed to assist the downlink
communication in complex environments by establishing a multi-hop reflection
path across active and passive IRSs. An optimization problem is formulated to
maximize the achievable rate of a typical user by designing the
active-and-passive IRS routing path as well as the joint beamforming of the BS
and selected active/passive IRSs. To draw useful insights into the optimal
design, we first consider a special case of the single-active multi-passive
(SAMP)-IRS aided system. For this case, we propose an efficient algorithm to
obtain its optimal solution by first optimizing the joint beamforming given any
SAMP-IRS routing path, and then optimizing the routing path by using a new path
decomposition method and graph theory. Next, for the general MAMP-IRS aided
system, we show that its challenging beam routing optimization problem can be
efficiently solved by a new two-phase approach. Its key idea is to first
optimize the inner passive-IRS beam routing between each two active IRSs for
effective channel power gain maximization, followed by an outer active-IRS beam
routing optimization for rate maximization. Last, numerical results are
provided to demonstrate the effectiveness of the proposed MAMP-IRS beam routing
scheme.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 03:11:05 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2023 08:59:41 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Zhang",
"Yunpu",
""
],
[
"You",
"Changsheng",
""
],
[
"Zheng",
"Beixiong",
""
]
] |
new_dataset
| 0.999085 |
2301.00345
|
Lakshmi Sathidevi
|
Jorge Quesada, Lakshmi Sathidevi, Ran Liu, Nauman Ahad, Joy M.
Jackson, Mehdi Azabou, Jingyun Xiao, Christopher Liding, Matthew Jin,
Carolina Urzay, William Gray-Roncal, Erik C. Johnson, Eva L. Dyer
|
MTNeuro: A Benchmark for Evaluating Representations of Brain Structure
Across Multiple Levels of Abstraction
|
10 pages, 4 figures, Accepted at NeurIPS 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are multiple scales of abstraction from which we can describe the same
image, depending on whether we are focusing on fine-grained details or a more
global attribute of the image. In brain mapping, learning to automatically
parse images to build representations of both small-scale features (e.g., the
presence of cells or blood vessels) and global properties of an image (e.g.,
which brain region the image comes from) is a crucial and open challenge.
However, most existing datasets and benchmarks for neuroanatomy consider only a
single downstream task at a time. To bridge this gap, we introduce a new
dataset, annotations, and multiple downstream tasks that provide diverse ways
to readout information about brain structure and architecture from the same
image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric,
micrometer-resolution X-ray microtomography images spanning a large
thalamocortical section of mouse brain, encompassing multiple cortical and
subcortical regions. We generated a number of different prediction challenges
and evaluated several supervised and self-supervised models for brain-region
prediction and pixel-level semantic segmentation of microstructures. Our
experiments not only highlight the rich heterogeneity of this dataset, but also
provide insights into how self-supervised approaches can be used to learn
representations that capture multiple attributes of a single image and perform
well on a variety of downstream tasks. Datasets, code, and pre-trained baseline
models are provided at: https://mtneuro.github.io/ .
|
[
{
"version": "v1",
"created": "Sun, 1 Jan 2023 04:54:03 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Quesada",
"Jorge",
""
],
[
"Sathidevi",
"Lakshmi",
""
],
[
"Liu",
"Ran",
""
],
[
"Ahad",
"Nauman",
""
],
[
"Jackson",
"Joy M.",
""
],
[
"Azabou",
"Mehdi",
""
],
[
"Xiao",
"Jingyun",
""
],
[
"Liding",
"Christopher",
""
],
[
"Jin",
"Matthew",
""
],
[
"Urzay",
"Carolina",
""
],
[
"Gray-Roncal",
"William",
""
],
[
"Johnson",
"Erik C.",
""
],
[
"Dyer",
"Eva L.",
""
]
] |
new_dataset
| 0.999608 |
2301.02277
|
Wai Kin Fung
|
Meihua Zhou, Ivan Fung, Li Yang, Nan Wan, Keke Di, Tingting Wang
|
LostNet: A smart way for lost and find
| null | null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Due to the enormous population growth of cities in recent years, objects are
frequently lost and left unclaimed on public transportation, in restaurants, and
in other public areas. While services like Find My iPhone can easily locate lost
electronic devices, many other valuable objects cannot be tracked intelligently,
making it impossible for administrators to return large numbers of lost-and-found
items in a timely manner. We present a method that significantly reduces the
complexity of the search by comparing photos of a lost item provided by its owner
with photos taken when registered lost-and-found items are received. In this
research, we primarily design a photo-matching network that combines a fine-tuned
MobileNetV2 with CBAM attention, using a web framework to develop an online
lost-and-found image identification system. Our implementation achieves a testing
accuracy of 96.8% using only 665.12M FLOPs and 3.5M training parameters. It can
recognize images in practice and runs on a regular laptop.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 19:39:17 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Zhou",
"Meihua",
""
],
[
"Fung",
"Ivan",
""
],
[
"Yang",
"Li",
""
],
[
"Wan",
"Nan",
""
],
[
"Di",
"Keke",
""
],
[
"Wang",
"Tingting",
""
]
] |
new_dataset
| 0.998556 |
2301.02294
|
Paul Siegel
|
Ziyuan Zhu, Wei Wu, Paul H. Siegel
|
Polar Codes with Local-Global Decoding
|
5 pages, 9 figures, invited paper in Session on Coding for 6G, 2022
Asilomar Conference on Signals, Systems, and Computers
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate a coupled polar code architecture that supports
both local and global decoding. This local-global construction is motivated by
practical applications in data storage and transmission where reduced-latency
recovery of sub-blocks of the coded information is required. Local decoding
allows random access to sub-blocks of the full code block. When local decoding
performance is insufficient, global decoding provides improved data
reliability. The coupling scheme incorporates a systematic outer polar code and
a partitioned mapping of the outer codeword to semipolarized bit-channels of
the inner polar codes. Error rate simulation results are presented for 2 and 4
sub-blocks. Design issues affecting the trade-off between local and global
decoding performance are also discussed.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 21:07:38 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Zhu",
"Ziyuan",
""
],
[
"Wu",
"Wei",
""
],
[
"Siegel",
"Paul H.",
""
]
] |
new_dataset
| 0.998724 |
2301.02295
|
Ahmad Rafsanjani
|
Ahmad Rafsanjani, Fergal B. Coulter, Andr\'e R. Studart
|
Giving life to robotic skins
| null |
Matter, Volume 5, Issue 7, 6 July 2022, Pages 1990-1992
|
10.1016/j.matt.2022.06.006
| null |
cs.RO cond-mat.soft
|
http://creativecommons.org/licenses/by/4.0/
|
The skin of humanoid robots often lacks human tactility and the inherent
self-repair capability of biological tissues. Recently, researchers have grown
a living, self-healing skin on a robot finger by subsequent culturing of human
dermal and epidermal cells. Here, we highlight the significance of this study
alongside challenges toward developing biohybrid robots equipped with sensate
and adaptive living robotic skins.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 21:10:13 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Rafsanjani",
"Ahmad",
""
],
[
"Coulter",
"Fergal B.",
""
],
[
"Studart",
"André R.",
""
]
] |
new_dataset
| 0.993252 |
2301.02348
|
Honglu He
|
Honglu He, Chen-lung Lu, Yunshi Wen, Glenn Saunders, Pinghai Yang,
Jeffrey Schoonover, Agung Julius, John T. Wen
|
High-Speed High-Accuracy Spatial Curve Tracking Using Motion Primitives
in Industrial Robots
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Industrial robots are increasingly deployed in applications requiring an end
effector tool to closely track a specified path, such as in spraying and
welding. Performance and productivity present possibly conflicting objectives:
tracking accuracy, path speed, and motion uniformity. Industrial robots are
programmed through motion primitives consisting of waypoints connected by
pre-defined motion segments, with specified parameters such as path speed and
blending zone. The actual executed robot motion depends on the robot joint
servo controller and joint motion constraints (velocity, acceleration, etc.)
which are largely unknown to the users. Programming a robot to achieve the
desired performance today is time-consuming and mostly manual, requiring tuning
a large number of coupled parameters in the motion primitives. The performance
also depends on the choice of additional parameters: possible redundant degrees
of freedom, location of the target curve, and the robot configuration. This
paper presents a systematic approach to optimize the robot motion primitives
for performance. The approach first selects the static parameters, then the
motion primitives, and finally iteratively updates the waypoints to minimize the
tracking error. The ultimate performance objective is to maximize the path
speed subject to the tracking accuracy and speed uniformity constraints over
the entire path. We have demonstrated the effectiveness of this approach in
simulation for ABB and FANUC robots for two challenging example curves, and
experimentally for an ABB robot. Compared with a baseline that follows current
industry practice, the optimized approach achieves over 200% performance
improvement.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 01:14:22 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"He",
"Honglu",
""
],
[
"Lu",
"Chen-lung",
""
],
[
"Wen",
"Yunshi",
""
],
[
"Saunders",
"Glenn",
""
],
[
"Yang",
"Pinghai",
""
],
[
"Schoonover",
"Jeffrey",
""
],
[
"Julius",
"Agung",
""
],
[
"Wen",
"John T.",
""
]
] |
new_dataset
| 0.991308 |
2301.02363
|
Chuhao Jin
|
Chuhao Jin, Hongteng Xu, Ruihua Song, Zhiwu Lu
|
Text2Poster: Laying out Stylized Texts on Retrieved Images
|
5 pages, Accepted to ICASSP 2022
| null |
10.1109/ICASSP43922.2022.9747465
| null |
cs.MM cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Poster generation is a significant task for a wide range of applications,
which is often time-consuming and requires lots of manual editing and artistic
experience. In this paper, we propose a novel data-driven framework, called
\textit{Text2Poster}, to automatically generate visually-effective posters from
textual information. Imitating the process of manual poster editing, our
framework leverages a large-scale pretrained visual-textual model to retrieve
background images from given texts, lays out the texts on the images
iteratively by cascaded auto-encoders, and finally, stylizes the texts by a
matching-based method. We learn the modules of the framework by weakly- and
self-supervised learning strategies, mitigating the demand for labeled data.
Both objective and subjective experiments demonstrate that our Text2Poster
outperforms state-of-the-art methods, including academic research and
commercial software, on the quality of generated posters.
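For illustration of the retrieval step only, a minimal sketch with stand-in embeddings (the paper uses a large-scale pretrained visual-textual model; the arrays below are random placeholders):

import numpy as np

def retrieve_backgrounds(text_emb, image_embs, k=3):
    """Rank candidate background images by cosine similarity to the text."""
    t = text_emb / np.linalg.norm(text_emb)
    im = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    return np.argsort(-(im @ t))[:k]        # indices of the top-k images

rng = np.random.default_rng(0)
text_emb = rng.normal(size=512)             # stand-in text-encoder output
image_embs = rng.normal(size=(1000, 512))   # stand-in image-encoder outputs
print(retrieve_backgrounds(text_emb, image_embs))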
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 04:06:23 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Jin",
"Chuhao",
""
],
[
"Xu",
"Hongteng",
""
],
[
"Song",
"Ruihua",
""
],
[
"Lu",
"Zhiwu",
""
]
] |
new_dataset
| 0.996692 |
2301.02385
|
Abhinav Keshari
|
Abhinav Kaushal Keshari
|
Multi-Genre Music Transformer -- Composing Full Length Musical Piece
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In the task of generating music, the art factor plays a big role and is a
great challenge for AI. Previous work involving adversarial training to produce
new music pieces and modeling the compatibility of variety in music (beats,
tempo, musical stems) demonstrated great examples of learning this task, though
it was limited to generating mashups or learning features from tempo and key
distributions to produce similar patterns. The Compound Word Transformer was able
to represent the music generation task as a sequence generation challenge
involving musical events defined by compound words. These musical events give a
more accurate description of note progression, chord changes, harmony, and the
art factor. The objective of this project is to implement a Multi-Genre
Transformer which learns to produce music pieces through a more adaptive learning
process involving a more challenging task, where the genre or form of the
composition is also considered. We built a multi-genre compound-word dataset and
implemented a linear transformer trained on this dataset. We call this the
Multi-Genre Transformer; it is able to generate full-length new musical pieces
that are diverse and comparable to original tracks. The model trains 2-5 times
faster than other models discussed.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 05:27:55 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Keshari",
"Abhinav Kaushal",
""
]
] |
new_dataset
| 0.992322 |
2301.02400
|
Gobinda Ghosh
|
Gobinda Ghosh, Sudhan Majhi and Shubhabrata Paul
|
A Direct Construction of Optimal 2D-ZCACS with Flexible Array Size and
Large Set Size
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a direct construction of optimal two-dimensional
Z-complementary array code sets (2D-ZCACS) using multivariable functions
(MVFs). In contrast to earlier works, the proposed construction allows for a
flexible array size and a large set size. Additionally, the proposed design can
be transformed into a one-dimensional Z-complementary code set (1D-ZCCS). Many
of the 1D-ZCCSs described in the literature appear to be special cases of the
proposed construction. Finally, we compare our work with the current state of
the art and draw our conclusions.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 06:42:32 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Ghosh",
"Gobinda",
""
],
[
"Majhi",
"Sudhan",
""
],
[
"Paul",
"Shubhabrata",
""
]
] |
new_dataset
| 0.998792 |
2301.02410
|
Hebi Li
|
Hebi Li, Forrest Sheng Bao, Qi Xiao, Jin Tian
|
Codepod: A Namespace-Aware, Hierarchical Jupyter for Interactive
Development at Scale
| null | null | null | null |
cs.SE cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Jupyter is a browser-based interactive development environment that has been
popular recently. Jupyter models programs in code blocks, and makes it easy to
develop code blocks interactively by running the code blocks and attaching rich
media output. However, Jupyter provides no support for module systems and
namespaces. Code blocks are linear and live in the global namespace; therefore,
it is hard to develop large projects that require modularization in Jupyter. As
a result, large code projects are still developed in traditional text files,
and Jupyter is only used as a surface presentation. We present Codepod, a
namespace-aware Jupyter that is suitable for interactive development at scale.
Instead of linear code blocks, Codepod models code blocks as hierarchical code
pods, and provides a simple yet powerful module system for namespace-aware
incremental evaluation. Codepod is open source at
https://github.com/codepod-io/codepod.
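The actual module system lives in the repository above; as a minimal sketch of the underlying idea — evaluating code blocks in hierarchical namespaces — consider the following (our illustration, not Codepod's implementation):

# Each "pod" evaluates code in its own namespace; a child pod can read
# names from its ancestors, loosely mimicking a hierarchical module system.
class Pod:
    def __init__(self, parent=None):
        self.parent, self.ns = parent, {}

    def env(self):
        base = {} if self.parent is None else self.parent.env()
        base.update(self.ns)
        return base

    def run(self, code):
        scope = self.env()
        before = dict(scope)
        exec(code, scope)
        for k, v in scope.items():          # keep names new or changed here
            if k != "__builtins__" and (k not in before or before[k] is not v):
                self.ns[k] = v

root = Pod(); root.run("x = 40")
child = Pod(parent=root); child.run("y = x + 2")  # child sees the parent's x
print(child.ns["y"])                               # -> 42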
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 07:48:51 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Li",
"Hebi",
""
],
[
"Bao",
"Forrest Sheng",
""
],
[
"Xiao",
"Qi",
""
],
[
"Tian",
"Jin",
""
]
] |
new_dataset
| 0.999403 |
2301.02432
|
Jens Domke
|
Satoshi Matsuoka, Jens Domke, Mohamed Wahib, Aleksandr Drozd, and
Torsten Hoefler
|
Myths and Legends in High-Performance Computing
| null | null | null | null |
cs.DC cs.AR cs.CY cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this humorous and thought-provoking article, we discuss certain myths and
legends that are folklore among members of the high-performance computing
community. We collected those myths from conversations at conferences and
meetings, product advertisements, papers, and other communications such as
tweets, blogs, and news articles within (and beyond) our community. We believe
they represent the zeitgeist of the current era of massive change, driven by
the end of many scaling laws such as Dennard scaling and Moore's law. While
some laws end, new directions open up, such as algorithmic scaling or novel
architecture research. However, these myths are rarely based on scientific
facts but often on some evidence or argumentation. In fact, we believe that
this is the very reason for the existence of many myths and why they cannot be
answered clearly. While it feels like there should be clear answers for each,
some may remain endless philosophical debates such as the question whether
Beethoven was better than Mozart. We would like to see our collection of myths
as a discussion of possible new directions for research and industry
investment.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 09:32:19 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Matsuoka",
"Satoshi",
""
],
[
"Domke",
"Jens",
""
],
[
"Wahib",
"Mohamed",
""
],
[
"Drozd",
"Aleksandr",
""
],
[
"Hoefler",
"Torsten",
""
]
] |
new_dataset
| 0.993404 |
2301.02451
|
Ali Safa
|
Ali Safa, Tim Verbelen, Ozan Catal, Toon Van de Maele, Matthias
Hartmann, Bart Dhoedt, Andr\'e Bourdoux
|
FMCW Radar Sensing for Indoor Drones Using Learned Representations
| null | null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Frequency-modulated continuous-wave (FMCW) radar is a promising sensor
technology for indoor drones as it provides range, angular as well as
Doppler-velocity information about obstacles in the environment. Recently, deep
learning approaches have been proposed for processing FMCW data, outperforming
traditional detection techniques on range-Doppler or range-azimuth maps.
However, these techniques come at a cost; for each novel task a deep neural
network architecture has to be trained on high-dimensional input data,
stressing both data bandwidth and processing budget. In this paper, we
investigate unsupervised learning techniques that generate low-dimensional
representations from FMCW radar data, and evaluate to what extent these
representations can be reused for multiple downstream tasks. To this end, we
introduce a novel dataset of raw radar ADC data recorded from a radar mounted
on a flying drone platform in an indoor environment, together with ground truth
detection targets. We show with real radar data that, utilizing our learned
representations, we match the performance of conventional radar processing
techniques and that our model can be trained on different input modalities such
as raw ADC samples of only two consecutively transmitted chirps.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 10:20:00 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Safa",
"Ali",
""
],
[
"Verbelen",
"Tim",
""
],
[
"Catal",
"Ozan",
""
],
[
"Van de Maele",
"Toon",
""
],
[
"Hartmann",
"Matthias",
""
],
[
"Dhoedt",
"Bart",
""
],
[
"Bourdoux",
"André",
""
]
] |
new_dataset
| 0.996101 |
2301.02527
|
Rui N\'obrega
|
Bianca Marques, Rui N\'obrega and Carmen Morgado
|
Avatar-centred AR Collaborative Mobile Interaction
|
4 pages, in Portuguese language, 4 figures, 1 table, accepted and
presented at ICGI 2021
| null | null | null |
cs.HC cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Interaction with the physical environment and different users is essential to
foster a collaborative experience. For this, we propose an interaction based on
a central point, represented by an Augmented Reality marker, at which several
users can capture the attention of and interact with a virtual avatar. The
interface provides different game modes with various challenges, supporting
collaborative mobile interaction. The system fosters group interactions
with a virtual avatar and enables a range of tasks with playful and didactic
components.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 14:38:35 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Marques",
"Bianca",
""
],
[
"Nóbrega",
"Rui",
""
],
[
"Morgado",
"Carmen",
""
]
] |
new_dataset
| 0.998585 |
2301.02555
|
Siddharth Karamcheti
|
Yuchen Cui and Siddharth Karamcheti and Raj Palleti and Nidhya
Shivakumar and Percy Liang and Dorsa Sadigh
|
"No, to the Right" -- Online Language Corrections for Robotic
Manipulation via Shared Autonomy
|
Accepted to the 18th ACM/IEEE International Conference on Human Robot
Interaction (HRI), March 2023. First two authors contributed equally. 9 pages,
7 Figures
| null |
10.1145/3568162.3578623
| null |
cs.RO cs.AI cs.CL cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Systems for language-guided human-robot interaction must satisfy two key
desiderata for broad adoption: adaptivity and learning efficiency.
Unfortunately, existing instruction-following agents cannot adapt, lacking the
ability to incorporate online natural language supervision, and even if they
could, require hundreds of demonstrations to learn even simple policies. In
this work, we address these problems by presenting Language-Informed Latent
Actions with Corrections (LILAC), a framework for incorporating and adapting to
natural language corrections - "to the right," or "no, towards the book" -
online, during execution. We explore rich manipulation domains within a shared
autonomy paradigm. Instead of discrete turn-taking between a human and robot,
LILAC splits agency between the human and robot: language is an input to a
learned model that produces a meaningful, low-dimensional control space that
the human can use to guide the robot. Each real-time correction refines the
human's control space, enabling precise, extended behaviors - with the added
benefit of requiring only a handful of demonstrations to learn. We evaluate our
approach via a user study where users work with a Franka Emika Panda
manipulator to complete complex manipulation tasks. Compared to existing
learned baselines covering both open-loop instruction following and single-turn
shared autonomy, we show that our corrections-aware approach obtains higher
task completion rates, and is subjectively preferred by users because of its
reliability, precision, and ease of use.
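As a schematic of the shared-autonomy interface (a toy sketch; LILAC's decoder is a learned network conditioned on language and state, and the matrix below is a random stand-in):

import numpy as np

def latent_action_decoder(u, basis):
    """Map a low-dimensional human input u (e.g., a 2D joystick reading)
    to a high-dimensional robot command through a learned basis."""
    return basis @ u                          # e.g., a 7-DoF velocity command

basis = np.random.default_rng(0).normal(size=(7, 2))
print(latent_action_decoder(np.array([0.5, -0.2]), basis))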
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 15:03:27 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Cui",
"Yuchen",
""
],
[
"Karamcheti",
"Siddharth",
""
],
[
"Palleti",
"Raj",
""
],
[
"Shivakumar",
"Nidhya",
""
],
[
"Liang",
"Percy",
""
],
[
"Sadigh",
"Dorsa",
""
]
] |
new_dataset
| 0.975886 |
2301.02562
|
Lue Fan
|
Lue Fan, Yuxue Yang, Feng Wang, Naiyan Wang, and Zhaoxiang Zhang
|
Super Sparse 3D Object Detection
|
Extension of Fully Sparse 3D Object Detection [arXiv:2207.10035]
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the perception range of LiDAR expands, LiDAR-based 3D object detection
contributes ever-increasingly to the long-range perception in autonomous
driving. Mainstream 3D object detectors often build dense feature maps, where
the cost is quadratic to the perception range, making them hardly scale up to
the long-range settings. To enable efficient long-range detection, we first
propose a fully sparse object detector termed FSD. FSD is built upon the
general sparse voxel encoder and a novel sparse instance recognition (SIR)
module. SIR groups the points into instances and applies highly-efficient
instance-wise feature extraction. The instance-wise grouping sidesteps the
issue of the center feature missing, which hinders the design of the fully
sparse architecture. To further enjoy the benefit of fully sparse
characteristic, we leverage temporal information to remove data redundancy and
propose a super sparse detector named FSD++. FSD++ first generates residual
points, which indicate the point changes between consecutive frames. The
residual points, along with a few previous foreground points, form the super
sparse input data, greatly reducing data redundancy and computational overhead.
We comprehensively analyze our method on the large-scale Waymo Open Dataset,
and state-of-the-art performance is reported. To showcase the superiority of
our method in long-range detection, we also conduct experiments on Argoverse 2
Dataset, where the perception range ($200m$) is much larger than Waymo Open
Dataset ($75m$). Code is open-sourced at https://github.com/tusen-ai/SST.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 17:03:56 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Fan",
"Lue",
""
],
[
"Yang",
"Yuxue",
""
],
[
"Wang",
"Feng",
""
],
[
"Wang",
"Naiyan",
""
],
[
"Zhang",
"Zhaoxiang",
""
]
] |
new_dataset
| 0.990864 |
2301.02610
|
Marco Kemmerling
|
Marco Kemmerling
|
Feedback-Gated Rectified Linear Units
|
15 pages, 26 figures
| null | null | null |
cs.NE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feedback connections play a prominent role in the human brain but have not
received much attention in artificial neural network research. Here, a
biologically inspired feedback mechanism which gates rectified linear units is
proposed. On the MNIST dataset, autoencoders with feedback show faster
convergence, better performance, and more robustness to noise compared to their
counterparts without feedback. Some benefits, although less pronounced and less
consistent, can be observed when networks with feedback are applied on the
CIFAR-10 dataset.
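The precise gating is defined in the paper; one plausible minimal form — a ReLU whose output is multiplicatively gated by a sigmoid of the feedback signal (our assumption for illustration) — is:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedback_gated_relu(x, feedback):
    """Rectified activation gated by a feedback signal of the same shape."""
    return np.maximum(x, 0.0) * sigmoid(feedback)

x = np.array([-1.0, 0.5, 2.0])
fb = np.array([0.0, 3.0, -3.0])       # positive feedback opens the gate
print(feedback_gated_relu(x, fb))     # -> approx. [0.  0.4763  0.0949]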
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 17:14:11 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Kemmerling",
"Marco",
""
]
] |
new_dataset
| 0.994395 |
2301.02643
|
Sergei Zobov
|
Sergei Zobov, Fedor Chervinskii, Aleksandr Rybnikov, Danil Petrov,
Komal Vendidandi
|
Auto-Assembly: a framework for automated robotic assembly directly from
CAD
|
7 pages, 8 figures, draft version submitted to ICRA 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a framework called Auto-Assembly for automated
robotic assembly from design files and demonstrate a practical implementation
on modular parts joined by fastening using a robotic cell consisting of two
robots. We show the flexibility of the approach by testing it on different
input designs. Auto-Assembly consists of several parts: design analysis,
assembly sequence generation, bill-of-process (BOP) generation, conversion of
the BOP to control code, path planning, simulation, and execution of the
control code to assemble parts in the physical environment.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 18:41:41 GMT"
}
] | 2023-01-09T00:00:00 |
[
[
"Zobov",
"Sergei",
""
],
[
"Chervinskii",
"Fedor",
""
],
[
"Rybnikov",
"Aleksandr",
""
],
[
"Petrov",
"Danil",
""
],
[
"Vendidandi",
"Komal",
""
]
] |
new_dataset
| 0.99759 |
1801.09225
|
Yuito Murase
|
Yuito Murase, Yuichi Nishiwaki and Atsushi Igarashi
|
Contextual Modal Type Theory with Polymorphic Contexts
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modal types -- types that are derived from proof systems of modal logic --
have been studied as theoretical foundations of metaprogramming, where program
code is manipulated as first-class values. In modal type systems, modality
corresponds to a type constructor for code types and controls free variables
and their types in code values. Nanevski et al. have proposed contextual modal
type theory, which has modal types with fine-grained information on free
variables: modal types are explicitly indexed by contexts -- the types of all
free variables in code values.
This paper presents $\lambda_{\forall[]}$, a novel extension of contextual
modal type theory with parametric polymorphism over contexts. Such an extension
has been studied in the literature but unlike earlier proposals,
$\lambda_{\forall[]}$ is more general in that multiple parts of a single
context can be abstracted. We formalize $\lambda_{\forall[]}$ with its type system and
operational semantics given by $\beta$-reduction and prove its basic properties
including subject reduction, strong normalization, and confluence. Moreover, to
demonstrate the expressive power of polymorphic contexts, we show a
type-preserving embedding from a two-level fragment of Davies'
$\lambda_{\bigcirc}$, which is based on linear-time temporal logic, to
$\lambda_{\forall[]}$.
|
[
{
"version": "v1",
"created": "Sun, 28 Jan 2018 13:26:44 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2023 11:31:56 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Murase",
"Yuito",
""
],
[
"Nishiwaki",
"Yuichi",
""
],
[
"Igarashi",
"Atsushi",
""
]
] |
new_dataset
| 0.999711 |
2005.05108
|
Joachim Kock
|
Joachim Kock
|
Whole-grain Petri nets and processes
|
This is the final 'author version', nearly identical to the version
published in JACM. 58 pages. This paper previously had the title 'Elements of
Petri nets and processes'
|
J. ACM 70 (1) (2022), 1--58
|
10.1145/3559103
|
CPH-GEOTOP-DNRF151
|
cs.LO math.AT math.CO math.CT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a formalism for Petri nets based on polynomial-style finite-set
configurations and etale maps. The formalism supports both a geometric
semantics in the style of Goltz and Reisig (processes are etale maps from
graphs) and an algebraic semantics in the style of Meseguer and Montanari, in
terms of free coloured props, and allows the following unification: for P a
Petri net, the Segal space of P-processes is shown to be the free coloured
prop-in-groupoids on P. There is also an unfolding semantics \`a la Winskel,
which bypasses the classical symmetry problems: with the new formalism, every
Petri net admits a universal unfolding, which in turn has associated an event
structure and a Scott domain. Since everything is encoded with explicit sets,
Petri nets and their processes have elements. In particular, individual-token
semantics is native. (Collective-token semantics emerges from rather drastic
quotient constructions \`a la Best-Devillers, involving taking {\pi}_0 of the
groupoids of states.)
|
[
{
"version": "v1",
"created": "Mon, 11 May 2020 13:52:28 GMT"
},
{
"version": "v2",
"created": "Mon, 18 May 2020 15:20:35 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Apr 2022 21:09:03 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Jan 2023 17:43:23 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Kock",
"Joachim",
""
]
] |
new_dataset
| 0.999531 |
2103.05719
|
Shoken Kaneko
|
Shoken Kaneko
|
Spheroidal Ambisonics: a Spatial Audio Framework Using Spheroidal Bases
| null |
JASA Express Letters 1.8 (2021): 084803
|
10.1121/10.0005942
| null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Ambisonics is an established framework to capture, process, and reproduce
spatial sound fields based on its spherical harmonics representation. We
propose a generalization of conventional spherical ambisonics to the spheroidal
coordinate system and spheroidal microphone arrays, which represent sound
fields by means of spheroidal wave functions. This framework is referred to as
spheroidal ambisonics and a formulation for the case of prolate spheroidal
coordinates is presented. Spheroidal ambisonics allows analytical encoding of
sound fields using spheroidal microphone arrays. In addition, an analytical
conversion formula from spheroidal ambisonics to spherical ambisonics is
derived in order to ensure compatibility with the existing ecosystem of
spherical ambisonics. Numerical experiments are performed to verify spheroidal
ambisonic encoding and transcoding when used for spatial sound field recording.
It is found that the sound field reconstructed from the transcoded coefficients
has a zone of accurate reconstruction which is prolonged towards the long axis
of a prolate spheroidal microphone array.
|
[
{
"version": "v1",
"created": "Tue, 9 Mar 2021 21:03:42 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Kaneko",
"Shoken",
""
]
] |
new_dataset
| 0.999435 |
2109.04753
|
Sungho Yoon
|
Sungho Yoon, Ayoung Kim
|
Line as a Visual Sentence: Context-aware Line Descriptor for Visual
Localization
| null |
IEEE Robotics and Automation Letters ( Volume: 6, Issue: 4,
October 2021)
|
10.1109/LRA.2021.3111760
| null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Along with feature points for image matching, line features provide
additional constraints to solve visual geometric problems in robotics and
computer vision (CV). Although recent convolutional neural network (CNN)-based
line descriptors are promising for viewpoint changes or dynamic environments,
we claim that the CNN architecture has innate disadvantages to abstract
variable line length into the fixed-dimensional descriptor. In this paper, we
effectively introduce Line-Transformers dealing with variable lines. Inspired
by natural language processing (NLP) tasks where sentences can be understood
and abstracted well in neural nets, we view a line segment as a sentence that
contains points (words). By attending to well-describable points on a line
dynamically, our descriptor performs excellently on lines of variable length. We
also propose line signature networks that share a line's geometric attributes
with its neighborhood. Acting as group descriptors, the networks enhance line
descriptors by understanding lines' relative geometries. Finally, we present
the proposed line descriptor and matching in a Point and Line Localization
(PL-Loc). We show that the visual localization with feature points can be
improved using our line features. We validate the proposed method for
homography estimation and visual localization.
|
[
{
"version": "v1",
"created": "Fri, 10 Sep 2021 09:35:44 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Yoon",
"Sungho",
""
],
[
"Kim",
"Ayoung",
""
]
] |
new_dataset
| 0.986593 |
2201.05842
|
Igor Fedorov
|
Igor Fedorov, Ramon Matas, Hokchhay Tann, Chuteng Zhou, Matthew
Mattina, Paul Whatmough
|
UDC: Unified DNAS for Compressible TinyML Models
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deploying TinyML models on low-cost IoT hardware is very challenging, due to
limited device memory capacity. Neural processing unit (NPU) hardware addresses
the memory challenge by using model compression to exploit weight quantization
and sparsity to fit more parameters in the same footprint. However, designing
compressible neural networks (NNs) is challenging, as it expands the design
space across which we must make balanced trade-offs. This paper demonstrates
Unified DNAS for Compressible (UDC) NNs, which explores a large search space to
generate state-of-the-art compressible NNs for NPU. ImageNet results show UDC
networks are up to $3.35\times$ smaller (iso-accuracy) or 6.25% more accurate
(iso-model size) than previous work.
|
[
{
"version": "v1",
"created": "Sat, 15 Jan 2022 12:35:26 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 16:21:53 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Nov 2022 19:20:14 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Jan 2023 14:06:43 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Fedorov",
"Igor",
""
],
[
"Matas",
"Ramon",
""
],
[
"Tann",
"Hokchhay",
""
],
[
"Zhou",
"Chuteng",
""
],
[
"Mattina",
"Matthew",
""
],
[
"Whatmough",
"Paul",
""
]
] |
new_dataset
| 0.979179 |
2203.09825
|
Xin Yuan
|
Xin Yuan, Yongbing Feng, Mingming Ye, Cheng Tuo, Minghang Zhang
|
AdaVocoder: Adaptive Vocoder for Custom Voice
|
Accepted by INTERSPEECH 2022
| null |
10.21437/Interspeech.2022-288
| null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Custom voice aims to construct a personal speech synthesis system by adapting
a source speech synthesis model to a target speaker using only a few target
recordings. A common solution is to combine an
adaptive acoustic model with a robust vocoder. However, training a robust
adaptive acoustic model with a robust vocoder. However, training a robust
vocoder usually requires a multi-speaker dataset, which should include various
age groups and various timbres, so that the trained vocoder can be used for
unseen speakers. Collecting such a multi-speaker dataset is difficult, and the
dataset distribution always has a mismatch with the distribution of the target
speaker dataset. This paper proposes an adaptive vocoder for custom voice from
another novel perspective to solve the above problems. The adaptive vocoder
mainly uses a cross-domain consistency loss to solve the overfitting problem
encountered by the GAN-based neural vocoder in the transfer learning of
few-shot scenes. We construct two adaptive vocoders, AdaMelGAN and AdaHiFi-GAN.
First, we pre-train the source vocoder models on the AISHELL3 and CSMSC
datasets, respectively. Then, we fine-tune them on the internal VXI-children
dataset with a small amount of adaptation data. The empirical results show that
a high-quality custom voice system can be built by combining an adaptive
acoustic model with an adaptive vocoder.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 10:03:37 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jan 2023 09:09:54 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Jan 2023 08:58:18 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Yuan",
"Xin",
""
],
[
"Feng",
"Yongbing",
""
],
[
"Ye",
"Mingming",
""
],
[
"Tuo",
"Cheng",
""
],
[
"Zhang",
"Minghang",
""
]
] |
new_dataset
| 0.999501 |
2203.14550
|
Xinyao Tang
|
Tang Xinyao and Wang Wei and Song Huansheng and Zhao Chunhui
|
CenterLoc3D: Monocular 3D Vehicle Localization Network for Roadside
Surveillance Cameras
|
33 pages, 15 figures. v3. This work has been published on Complex &
Intelligent Systems, link:
https://link.springer.com/article/10.1007/s40747-022-00962-9
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Monocular 3D vehicle localization is an important task in Intelligent
Transportation System (ITS) and Cooperative Vehicle Infrastructure System
(CVIS), which is usually achieved by monocular 3D vehicle detection. However,
depth information cannot be obtained directly by monocular cameras due to the
inherent imaging mechanism, resulting in more challenging monocular 3D tasks.
Most of the current monocular 3D vehicle detection methods leverage 2D
detectors and additional geometric modules, which reduces the efficiency. In
this paper, we propose a 3D vehicle localization network CenterLoc3D for
roadside monocular cameras, which directly predicts centroid and eight vertexes
in image space, and the dimension of 3D bounding boxes without 2D detectors. To
improve the precision of 3D vehicle localization, we propose a weighted-fusion
module and a loss with spatial constraints embedded in CenterLoc3D. Firstly,
the transformation matrix between 2D image space and 3D world space is solved
by camera calibration. Secondly, vehicle type, centroid, eight vertexes, and
the dimension of 3D vehicle bounding boxes are obtained by CenterLoc3D.
Finally, centroid in 3D world space can be obtained by camera calibration and
CenterLoc3D for 3D vehicle localization. To the best of our knowledge, this is
the first application of 3D vehicle localization for roadside monocular
cameras. Hence, we also propose a benchmark for this application including a
dataset (SVLD-3D), an annotation tool (LabelImg-3D), and evaluation metrics.
Through experimental validation, the proposed method achieves high accuracy and
real-time performance. (limited words, please see the article for more details)
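As background on the calibration step, a generic ground-plane back-projection sketch (not the paper's code; H below is a toy calibration matrix):

import numpy as np

def pixel_to_ground(H, u, v):
    """Map an image point (u, v) to world (X, Y) on the Z=0 ground plane,
    given the 3x3 homography H from (X, Y, 1) world to (u, v, 1) pixels."""
    w = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return w[:2] / w[2]                        # dehomogenize

H = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_ground(H, 960.0, 760.0))        # -> [0.4 0.5]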
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 07:47:37 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 08:50:30 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Jan 2023 10:19:51 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Xinyao",
"Tang",
""
],
[
"Wei",
"Wang",
""
],
[
"Huansheng",
"Song",
""
],
[
"Chunhui",
"Zhao",
""
]
] |
new_dataset
| 0.999625 |
2204.13155
|
Karishma Patnaik
|
Pham H. Nguyen, Karishma Patnaik, Shatadal Mishra, Panagiotis
Polygerinos and Wenlong Zhang
|
A Soft-Bodied Aerial Robot for Collision Resilience and Contact-Reactive
Perching
|
Accepted for Publication, Soft Robotics Journal - Mary Ann Liebert
Inc., Manuscript Details - 20 pages, 17 Figures, 2 Tables
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current aerial robots demonstrate limited interaction capabilities in
unstructured environments when compared with their biological counterparts.
Some examples include their inability to tolerate collisions and to
successfully land or perch on objects of unknown shapes, sizes, and texture.
Efforts to include compliance have introduced designs that incorporate external
mechanical impact protection at the cost of reduced agility and flight time due
to the added weight. In this work, we propose and develop a light-weight,
inflatable, soft-bodied aerial robot (SoBAR) that can pneumatically vary its
body stiffness to achieve intrinsic collision resilience. Unlike the
conventional rigid aerial robots, SoBAR successfully demonstrates its ability
to repeatedly endure and recover from collisions in various directions, not
only limited to in-plane ones. Furthermore, we exploit its capabilities to
demonstrate perching where the 3D collision resilience helps in improving the
perching success rates. We also augment SoBAR with a novel hybrid fabric-based,
bistable (HFB) grasper that can utilize impact energies to perform
contact-reactive grasping through rapid shape conforming abilities. We
exhaustively study and offer insights into the collision resilience, impact
absorption, and manipulation capabilities of SoBAR with the HFB grasper.
Finally, we compare the performance of conventional aerial robots with the
SoBAR through collision characterizations, grasping identifications, and
experimental validations of collision resilience and perching in various
scenarios and on differently shaped objects.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 19:29:22 GMT"
},
{
"version": "v2",
"created": "Mon, 2 May 2022 22:35:38 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Jan 2023 23:59:30 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Nguyen",
"Pham H.",
""
],
[
"Patnaik",
"Karishma",
""
],
[
"Mishra",
"Shatadal",
""
],
[
"Polygerinos",
"Panagiotis",
""
],
[
"Zhang",
"Wenlong",
""
]
] |
new_dataset
| 0.998729 |
2205.13281
|
Senthil Yogamani
|
Varun Ravi Kumar, Ciaran Eising, Christian Witt, and Senthil Yogamani
|
Surround-view Fisheye Camera Perception for Automated Driving: Overview,
Survey and Challenges
|
Accepted for publication at IEEE Transactions on Intelligent
Transportation Systems
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surround-view fisheye cameras are commonly used for near-field sensing in
automated driving. Four fisheye cameras on four sides of the vehicle are
sufficient to cover 360{\deg} around the vehicle capturing the entire
near-field region. Some primary use cases are automated parking, traffic jam
assist, and urban driving. There are limited datasets and very little work on
near-field perception tasks as the focus in automotive perception is on
far-field perception. In contrast to far-field, surround-view perception poses
additional challenges due to high precision object detection requirements of
10cm and partial visibility of objects. Due to the large radial distortion of
fisheye cameras, standard algorithms cannot be extended easily to the
surround-view use case. Thus, we are motivated to provide a self-contained
reference for automotive fisheye camera perception for researchers and
practitioners. Firstly, we provide a unified and taxonomic treatment of
commonly used fisheye camera models. Secondly, we discuss various perception
tasks and existing literature. Finally, we discuss the challenges and future
direction.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 11:38:04 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2023 13:24:13 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Kumar",
"Varun Ravi",
""
],
[
"Eising",
"Ciaran",
""
],
[
"Witt",
"Christian",
""
],
[
"Yogamani",
"Senthil",
""
]
] |
new_dataset
| 0.996248 |
2208.04761
|
Anastasija Nikiforova
|
Alina Govoruhina, Anastasija Nikiforova
|
Digital health shopping assistant with React Native: a simple
technological solution to a complex health problem
| null |
2022 International Conference on Intelligent Data Science
Technologies and Applications (IDSTA), 2022, pp. 34-40
|
10.1109/IDSTA55301.2022.9923047
| null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Today, more and more people are reporting allergies, which can range from
mild reactions close to discomfort to anaphylactic shocks. Other people may
not be allergic but avoid certain foods for personal reasons. The daily food
shopping of these people is hampered by the fact that unwanted ingredients can
be hidden in any food, and it is difficult to find them all. This paper presents
a digital health shopping assistant called "Diet Helper", aimed at making life
easier for such people by making it easy to determine whether a product is
suitable for consumption according to specific dietary requirements of two
types: an existing diet or a self-defined one. The app takes a captured
ingredient label as input, converts the label to text, and filters out unwanted
ingredients that, according to the user, should be avoided, such as allergens or
products to which the consumer is intolerant, helping the user decide whether
the product is suitable for consumption. This should make daily grocery shopping
easier by providing more accurate and simplified product selection in seconds,
reducing the total time spent in grocery stores. This is especially relevant in
light of COVID-19, although it was relevant before and will remain relevant
after it, given the busy schedules and active rhythm of life of modern society.
The app is developed using the React Native framework and the Google Firebase
platform, which make it easy to develop, use, and extend such solutions, thereby
encouraging the active development of solutions that could improve wellbeing.
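The app itself is built with React Native; the core capture-and-filter idea can be sketched in Python with pytesseract for the OCR step (assuming Tesseract is installed; the avoid-list and file name are hypothetical):

import pytesseract
from PIL import Image

AVOID = {"peanut", "gluten", "lactose"}        # user-defined unwanted ingredients

def check_label(image_path):
    """OCR an ingredient label and flag any unwanted ingredients found."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    words = {w.strip(".,;()") for w in text.split()}
    flagged = sorted(AVOID & words)
    return ("avoid", flagged) if flagged else ("suitable", flagged)

print(check_label("label.jpg"))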
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 13:10:44 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Oct 2022 10:46:32 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Govoruhina",
"Alina",
""
],
[
"Nikiforova",
"Anastasija",
""
]
] |
new_dataset
| 0.991839 |
2211.13014
|
Oxana Vitman
|
Oxana Vitman, Yevhen Kostiuk, Grigori Sidorov, Alexander Gelbukh
|
Sarcasm Detection Framework Using Context, Emotion and Sentiment
Features
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Sarcasm detection is an essential task that can help identify the actual
sentiment in user-generated data, such as discussion forums or tweets. Sarcasm
is a sophisticated form of linguistic expression because its surface meaning
usually contradicts its inner, deeper meaning. Such incongruity is the
essential component of sarcasm, however, it makes sarcasm detection quite a
challenging task. In this paper, we propose a model, that incorporates
different features to capture the incongruity intrinsic to sarcasm. We use a
pre-trained transformer and CNN to capture context features, and we use
transformers pre-trained on emotions detection and sentiment analysis tasks.
Our approach outperformed previous state-of-the-art results on four datasets
from social networking platforms and online media.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 15:14:44 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jan 2023 20:10:00 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Vitman",
"Oxana",
""
],
[
"Kostiuk",
"Yevhen",
""
],
[
"Sidorov",
"Grigori",
""
],
[
"Gelbukh",
"Alexander",
""
]
] |
new_dataset
| 0.97795 |
2212.13805
|
Zi'an Xu
|
Zi'an Xu, Yin Dai, Fayu Liu, Weibing Chen, Yue Liu, Lifu Shi, Sheng
Liu, Yuhang Zhou
|
Swin MAE: Masked Autoencoders for Small Datasets
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of deep learning models in medical image analysis is largely
limited by the lack of large-sized and well-annotated datasets. Unsupervised
learning does not require labels and is more suitable for solving medical image
analysis problems. However, most of the current unsupervised learning methods
need to be applied to large datasets. To make unsupervised learning applicable
to small datasets, we proposed Swin MAE, which is a masked autoencoder with
Swin Transformer as its backbone. Even on a dataset of only a few thousand
medical images and without using any pre-trained models, Swin MAE is still able
to learn useful semantic features purely from images. It can equal or even
slightly outperform the supervised model obtained by Swin Transformer trained
on ImageNet in terms of the transfer learning results of downstream tasks. The
code is publicly available at https://github.com/Zian-Xu/Swin-MAE.
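For context, the random patch masking at the heart of any masked autoencoder can be sketched as follows (a generic illustration; Swin MAE's actual masking and window handling are in the repository above):

import numpy as np

def random_mask(patches, mask_ratio=0.75, seed=0):
    """Split patch tokens into visible and masked subsets; the encoder
    only ever sees the visible ones. patches: (num_patches, dim)."""
    n = patches.shape[0]
    perm = np.random.default_rng(seed).permutation(n)
    n_keep = int(n * (1 - mask_ratio))
    return patches[perm[:n_keep]], perm[:n_keep], perm[n_keep:]

patches = np.zeros((196, 96))                  # e.g. 14x14 patches, 96-dim tokens
visible, keep_idx, mask_idx = random_mask(patches)
print(visible.shape)                           # -> (49, 96)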
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 12:53:44 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2023 10:07:41 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Xu",
"Zi'an",
""
],
[
"Dai",
"Yin",
""
],
[
"Liu",
"Fayu",
""
],
[
"Chen",
"Weibing",
""
],
[
"Liu",
"Yue",
""
],
[
"Shi",
"Lifu",
""
],
[
"Liu",
"Sheng",
""
],
[
"Zhou",
"Yuhang",
""
]
] |
new_dataset
| 0.987224 |
2301.01770
|
Sibi Chakkaravarthy S
|
Sibi Chakkaravarthy Sethuraman, Aditya Mitra, Anisha Ghosh, Gautam
Galada, Anitha Subramanian
|
MetaSecure: A Passwordless Authentication for the Metaverse
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The metaverse holds a potential future for cyberspace. At the beginning of Web
2.0, people commonly signed in with various pseudonyms or 'nyms'; the growing
presence of fake accounts put online identities at risk and made unique
identification across different roles difficult. In Web 3.0, the metaverse,
however, a user's digital identity is tied to their real identity, so
compromising one poses a significant risk to the other. This paper therefore
proposes Metasecure, a novel passwordless authentication system for securing
digital assets, online identities, avatars, and accounts, in which a unique ID
for every entity or user on a digital platform is essential. The proposed system
provides three layers of security using device attestation, facial recognition,
and physical security keys or smartcards in accordance with the Fast IDentity
Online (FIDO2) specifications. It provides SDKs for authentication on any
system, including VR/XR glasses, thus ensuring seamless access to services in
the Metaverse.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 06:39:47 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Sethuraman",
"Sibi Chakkaravarthy",
""
],
[
"Mitra",
"Aditya",
""
],
[
"Ghosh",
"Anisha",
""
],
[
"Galada",
"Gautam",
""
],
[
"Subramanian",
"Anitha",
""
]
] |
new_dataset
| 0.999255 |
2301.01795
|
Dhruv Mahajan
|
Vignesh Ramanathan, Anmol Kalia, Vladan Petrovic, Yi Wen, Baixue
Zheng, Baishan Guo, Rui Wang, Aaron Marquez, Rama Kovvuri, Abhishek Kadian,
Amir Mousavi, Yiwen Song, Abhimanyu Dubey, Dhruv Mahajan
|
PACO: Parts and Attributes of Common Objects
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Object models are gradually progressing from predicting just category labels
to providing detailed descriptions of object instances. This motivates the need
for large datasets which go beyond traditional object masks and provide richer
annotations such as part masks and attributes. Hence, we introduce PACO: Parts
and Attributes of Common Objects. It spans 75 object categories, 456
object-part categories and 55 attributes across image (LVIS) and video (Ego4D)
datasets. We provide 641K part masks annotated across 260K object boxes, with
roughly half of them exhaustively annotated with attributes as well. We design
evaluation metrics and provide benchmark results for three tasks on the
dataset: part mask segmentation, object and part attribute prediction and
zero-shot instance detection. Dataset, models, and code are open-sourced at
https://github.com/facebookresearch/paco.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 19:28:03 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Ramanathan",
"Vignesh",
""
],
[
"Kalia",
"Anmol",
""
],
[
"Petrovic",
"Vladan",
""
],
[
"Wen",
"Yi",
""
],
[
"Zheng",
"Baixue",
""
],
[
"Guo",
"Baishan",
""
],
[
"Wang",
"Rui",
""
],
[
"Marquez",
"Aaron",
""
],
[
"Kovvuri",
"Rama",
""
],
[
"Kadian",
"Abhishek",
""
],
[
"Mousavi",
"Amir",
""
],
[
"Song",
"Yiwen",
""
],
[
"Dubey",
"Abhimanyu",
""
],
[
"Mahajan",
"Dhruv",
""
]
] |
new_dataset
| 0.999467 |
2301.01809
|
Oshani Seneviratne
|
Jared Gridley and Oshani Seneviratne
|
Significant Digits: Using Large-Scale Blockchain Data to Predict
Fraudulent Addresses
|
Accepted at the IEEE Big Data 2022 Conference
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchain systems and cryptocurrencies have exploded in popularity over the
past decade, and with this growing user base, the number of cryptocurrency
scams has also surged. Given the graph structure of blockchain networks and
the abundance of data generated on these networks, we use graph mining
techniques to extract essential information on transactions and apply Benford's
Law to extract distributional information on address transactions. We then
apply a gradient-boosting tree model to predict fraudulent addresses. Our
results show that our method can detect scams with reasonable accuracy and that
the features generated based on Benford's Law are the most significant
features.
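A minimal sketch of Benford-style feature extraction feeding a gradient-boosted classifier (our illustration on synthetic data; the paper engineers its features from real blockchain transaction graphs):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

BENFORD = np.log10(1.0 + 1.0 / np.arange(1, 10))   # P(d) = log10(1 + 1/d)

def first_digit(x):
    x = abs(x)
    while x >= 10: x /= 10.0
    while x < 1:  x *= 10.0
    return int(x)

def benford_features(amounts):
    """9-bin first-digit histogram plus total deviation from Benford's law."""
    digits = [first_digit(a) for a in amounts if a != 0]
    hist = np.bincount(digits, minlength=10)[1:] / max(len(digits), 1)
    return np.append(hist, np.abs(hist - BENFORD).sum())

rng = np.random.default_rng(0)
# stand-ins: "honest" amounts ~ log-uniform (Benford-like), "fraud" ~ uniform
X = np.array([benford_features(np.exp(rng.uniform(0, 10, 200))) for _ in range(100)]
             + [benford_features(rng.uniform(1, 1000, 200)) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)
print(GradientBoostingClassifier().fit(X, y).score(X, y))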
|
[
{
"version": "v1",
"created": "Tue, 3 Jan 2023 17:26:22 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Gridley",
"Jared",
""
],
[
"Seneviratne",
"Oshani",
""
]
] |
new_dataset
| 0.967148 |
2301.01827
|
Simon X. Yang
|
Danjie Zhu, Lei Wang, Hua Zhang, Simon X. Yang
|
A GOA-Based Fault-Tolerant Trajectory Tracking Control for an Underwater
Vehicle of Multi-Thruster System without Actuator Saturation
|
arXiv admin note: text overlap with arXiv:2210.01706
| null |
10.1109/TASE.2022.3230951
| null |
cs.RO cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes an intelligent fault-tolerant control (FTC) strategy to
tackle the trajectory tracking problem of an underwater vehicle (UV) under
thruster damage (power loss) cases and meanwhile resolve the actuator
saturation brought by the vehicle's physical constraints. In the proposed
control strategy, the trajectory tracking component is formed by a refined
backstepping algorithm that controls the velocity variation and a sliding mode
control that derives the torque/force outputs; the fault-tolerant component is
established based on a Grasshopper Optimization Algorithm (GOA), which provides
fast convergence as well as satisfactory accuracy in deriving an optimized
reallocation of the thruster forces to compensate for the power loss in
different fault cases. Simulations with or without environmental perturbations
under different fault cases and comparisons to other traditional FTCs are
presented, thus verifying the effectiveness and robustness of the proposed
GOA-based fault-tolerant trajectory tracking design.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 21:30:16 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Zhu",
"Danjie",
""
],
[
"Wang",
"Lei",
""
],
[
"Zhang",
"Hua",
""
],
[
"Yang",
"Simon X.",
""
]
] |
new_dataset
| 0.999528 |
2301.01838
|
Li Zhang
|
Li Zhang, Jiahao Ding, Yifeng Gao, Jessica Lin
|
PMP: Privacy-Aware Matrix Profile against Sensitive Pattern Inference
for Time Series
|
This is a preprint. The paper has been accepted by SIAM SDM2023
| null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recent rapid development of sensor technology has allowed massive
fine-grained time series (TS) data to be collected and set the foundation for
the development of data-driven services and applications. During the process,
data sharing is often involved to allow the third-party modelers to perform
specific time series data mining (TSDM) tasks based on the need of data owner.
The high resolution of TS brings new challenges in protecting privacy. While
meaningful information in high-resolution TS shifts from concrete point values
to local shape-based segments, numerous studies have found that long
shape-based patterns could contain more sensitive information and may
potentially be extracted and misused by a malicious third party. However, the
privacy issue for TS patterns is surprisingly seldom explored in
privacy-preserving literature. In this work, we consider a new
privacy-preserving problem: preventing malicious inference on long shape-based
patterns while preserving short segment information for the utility task
performance. To mitigate the challenge, we investigate an alternative approach
by sharing Matrix Profile (MP), which is a non-linear transformation of
original data and a versatile data structure that supports many data mining
tasks. We found that while MP can prevent concrete shape leakage, the canonical
correlation in the MP index can still reveal the location of sensitive long
patterns. Based on this observation, we design two attacks named Location Attack
and Entropy Attack to extract the pattern location from MP. To further protect
MP from these two attacks, we propose a Privacy-Aware Matrix Profile (PMP) via
perturbing the local correlation and breaking the canonical correlation in MP
index vector. We evaluate our proposed PMP against baseline noise-adding
methods through quantitative analysis and real-world case studies to show the
effectiveness of the proposed method.
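Since the method hinges on the matrix profile data structure, a brute-force version clarifies what is actually being shared (illustrative only; the window length, exclusion-zone width, and all names are assumptions, and practical systems use fast algorithms such as STOMP rather than this O(n^2 m) loop):

```python
import numpy as np

def matrix_profile(ts, m):
    """Naive matrix profile: for each length-m subsequence, the distance to
    (and index of) its nearest non-trivially-matching neighbour."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    # z-normalize each subsequence so matching is shape-based
    subs = (subs - subs.mean(axis=1, keepdims=True)) / \
           (subs.std(axis=1, keepdims=True) + 1e-8)
    mp = np.full(n, np.inf)
    mp_index = np.zeros(n, dtype=int)
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - m // 2): i + m // 2 + 1] = np.inf  # exclusion zone
        j = int(np.argmin(d))
        mp[i], mp_index[i] = d[j], j
    return mp, mp_index

ts = np.sin(np.linspace(0, 20, 300)) + 0.1 * np.random.randn(300)
mp, idx = matrix_profile(ts, m=30)
```

The index vector `idx` is exactly the part whose canonical correlation the Location and Entropy attacks exploit, and which PMP perturbs.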
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 22:11:38 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Zhang",
"Li",
""
],
[
"Ding",
"Jiahao",
""
],
[
"Gao",
"Yifeng",
""
],
[
"Lin",
"Jessica",
""
]
] |
new_dataset
| 0.975728 |
2301.01929
|
Lulu Qian
|
Erik Winfree and Lulu Qian
|
Two-dimensional tile displacement can simulate cellular automata
| null | null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
Tile displacement is a newly-recognized mechanism in DNA nanotechnology that
exploits principles analogous to toehold-mediated strand displacement but
within the context of self-assembled DNA origami tile arrays. Here, we
formulate an abstract model of tile displacement for the simplest case:
individual assemblies interacting with monomer tiles in solution. We give
several constructions for programmable computation by tile displacement, from
circuits to cellular automata, that vary in how they use energy (or not) to
drive the system forward (or not), how much space and how many tile types they
require, and whether their computational power is limited to PTIME or PSPACE
with respect to the size of the system. In particular, we show that tile
displacement systems are Turing universal and can simulate arbitrary
two-dimensional synchronous block cellular automata, where each transition rule
for updating the state of a 2 by 2 neighborhood is implemented by just a single
tile.
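As a point of reference for the simulation target, a two-dimensional synchronous block cellular automaton over 2-by-2 neighborhoods can be stated in a few lines (a generic sketch of the automaton being simulated; the tile-displacement encoding itself is not shown, and the toy rule is an assumption):

```python
import numpy as np

def step(grid, rule):
    """One synchronous update: every non-overlapping 2x2 block is rewritten
    by a single rule-table lookup; at a high level this is the step that one
    displacement tile implements per transition rule."""
    out = grid.copy()
    for i in range(0, grid.shape[0], 2):
        for j in range(0, grid.shape[1], 2):
            block = tuple(int(v) for v in grid[i:i + 2, j:j + 2].flat)
            out[i:i + 2, j:j + 2] = np.reshape(rule.get(block, block), (2, 2))
    return out

# Toy rule: a lone particle in a block drifts one cell to the right.
rule = {(1, 0, 0, 0): (0, 1, 0, 0), (0, 0, 1, 0): (0, 0, 0, 1)}
grid = np.zeros((8, 8), dtype=int)
grid[0, 0] = 1
grid = step(grid, rule)
```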
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 06:51:19 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Winfree",
"Erik",
""
],
[
"Qian",
"Lulu",
""
]
] |
new_dataset
| 0.998059 |
2301.01949
|
Yuxing Long
|
Yuxing Long, Binyuan Hui, Fulong Ye, Yanyang Li, Zhuoxin Han, Caixia
Yuan, Yongbin Li, Xiaojie Wang
|
SPRING: Situated Conversation Agent Pretrained with Multimodal Questions
from Incremental Layout Graph
|
AAAI 2023
| null | null | null |
cs.CL cs.AI cs.CV cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing multimodal conversation agents have shown impressive abilities to
locate absolute positions or retrieve attributes in simple scenarios, but they
fail to perform well when complex relative positions and information alignments
are involved, which poses a bottleneck in response quality. In this paper, we
propose a Situated Conversation Agent Pretrained with Multimodal Questions from
INcremental Layout Graph (SPRING) with the ability to reason over multi-hop
spatial relations and connect them with visual attributes in crowded
situated scenarios. Specifically, we design two types of Multimodal Question
Answering (MQA) tasks to pretrain the agent. All QA pairs utilized during
pretraining are generated from novel Incremental Layout Graphs (ILG). QA pair
difficulty labels automatically annotated by ILG are used to promote MQA-based
Curriculum Learning. Experimental results verify the SPRING's effectiveness,
showing that it significantly outperforms state-of-the-art approaches on both
SIMMC 1.0 and SIMMC 2.0 datasets.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 08:03:47 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Long",
"Yuxing",
""
],
[
"Hui",
"Binyuan",
""
],
[
"Ye",
"Fulong",
""
],
[
"Li",
"Yanyang",
""
],
[
"Han",
"Zhuoxin",
""
],
[
"Yuan",
"Caixia",
""
],
[
"Li",
"Yongbin",
""
],
[
"Wang",
"Xiaojie",
""
]
] |
new_dataset
| 0.997467 |
2301.02031
|
Jinshan Pan
|
Xiang Li, Jinshan Pan, Jinhui Tang, and Jiangxin Dong
|
DLGSANet: Lightweight Dynamic Local and Global Self-Attention Networks
for Image Super-Resolution
|
More information is available at
https://neonleexiang.github.io/DLGSANet/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an effective lightweight dynamic local and global self-attention
network (DLGSANet) to solve image super-resolution. Our method explores the
properties of Transformers while having low computational costs. Motivated by
the network designs of Transformers, we develop a simple yet effective
multi-head dynamic local self-attention (MHDLSA) module to extract local
features efficiently. In addition, we note that existing Transformers usually
explore all similarities of the tokens between the queries and keys for the
feature aggregation. However, not all the tokens from the queries are relevant
to those in the keys, so using all the similarities does not effectively facilitate
the high-resolution image reconstruction. To overcome this problem, we develop
a sparse global self-attention (SparseGSA) module to select the most useful
similarity values so that the most useful global features can be better
utilized for the high-resolution image reconstruction. We develop a hybrid
dynamic-Transformer block (HDTB) that integrates the MHDLSA and SparseGSA for
both local and global feature exploration. To ease the network training, we
formulate the HDTBs into a residual hybrid dynamic-Transformer group (RHDTG).
By embedding the RHDTGs into an end-to-end trainable network, we show that our
proposed method has fewer network parameters and lower computational costs
while achieving competitive performance against state-of-the-art ones in terms
of accuracy. More information is available at
https://neonleexiang.github.io/DLGSANet/
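The sparsification idea can be illustrated in a few lines of numpy (a generic top-k attention sketch, not the authors' SparseGSA implementation; the shapes and `top_k` value are assumptions):

```python
import numpy as np

def sparse_global_attention(q, k, v, top_k=8):
    """Keep only the top_k query/key similarities per token and aggregate
    values over those, instead of attending over all tokens."""
    scores = q @ k.T / np.sqrt(q.shape[-1])               # (n, n) similarities
    kth = np.sort(scores, axis=-1)[:, -top_k][:, None]    # per-row k-th largest
    scores = np.where(scores >= kth, scores, -np.inf)     # drop weak matches
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

n, d = 64, 32
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
out = sparse_global_attention(q, k, v)                    # (64, 32)
```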
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 12:06:47 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Li",
"Xiang",
""
],
[
"Pan",
"Jinshan",
""
],
[
"Tang",
"Jinhui",
""
],
[
"Dong",
"Jiangxin",
""
]
] |
new_dataset
| 0.99936 |
2301.02042
|
Chong Shangguan
|
Chenyang Zhang and Chong Shangguan and Gennian Ge
|
Improved Gilbert-Varshamov bounds for hopping cyclic codes and optical
orthogonal codes
|
14 pages, submitted
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hopping cyclic codes (HCCs) are (non-linear) cyclic codes with the additional
property that the $n$ cyclic shifts of every given codeword are all distinct,
where $n$ is the code length. Constant weight binary hopping cyclic codes are
also known as optical orthogonal codes (OOCs). HCCs and OOCs have various
practical applications and have been studied extensively over the years.
The main concern of this paper is to present improved Gilbert-Varshamov type
lower bounds for these codes, when the minimum distance is bounded below by a
linear factor of the code length. For HCCs, we improve the previously best
known lower bound of Niu, Xing, and Yuan by a linear factor of the code length.
For OOCs, we improve the previously best known lower bound of Chung, Salehi,
and Wei, and Yang and Fuja by a quadratic factor of the code length. As
by-products, we also provide improved lower bounds for frequency hopping
sequences sets and error-correcting weakly mutually uncorrelated codes. Our
proofs are based on tools from probability theory and graph theory, in
particular the McDiarmid's inequality on the concentration of Lipschitz
functions and the independence number of locally sparse graphs.
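For orientation, the classical Gilbert-Varshamov bound that results of this type refine can be stated for general $q$-ary codes of length $n$ and minimum distance $d$ (a standard textbook statement, not the paper's improved HCC/OOC bounds):

\[
A_q(n,d)\;\ge\;\frac{q^n}{\sum_{i=0}^{d-1}\binom{n}{i}(q-1)^i}.
\]

Roughly speaking, the paper's improvements sharpen the counting argument behind such denominators for the constrained code classes above.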
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 12:26:22 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Zhang",
"Chenyang",
""
],
[
"Shangguan",
"Chong",
""
],
[
"Ge",
"Gennian",
""
]
] |
new_dataset
| 0.988394 |
2301.02113
|
Joseph Renner
|
Tatiana Anikina, Natalia Skachkova, Joseph Renner, Priyansh Trivedi
|
Anaphora Resolution in Dialogue: System Description (CODI-CRAC 2022
Shared Task)
| null |
CODI-CRAC 2022, Oct 2022, Gyeongju, South Korea
| null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe three models submitted for the CODI-CRAC 2022 shared task. To
perform identity anaphora resolution, we test several combinations of the
incremental clustering approach based on the Workspace Coreference System (WCS)
with other coreference models. The best result is achieved by adding the
"cluster merging" version of the coref-hoi model, which brings up to 10.33%
improvement over vanilla WCS clustering. Discourse deixis resolution is
implemented as multi-task learning: we combine the learning objective of
coref-hoi with anaphor type classification. We adapt the higher-order resolution
model introduced in Joshi et al. (2019) for bridging resolution given gold
mentions and anaphors.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 15:42:17 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Anikina",
"Tatiana",
""
],
[
"Skachkova",
"Natalia",
""
],
[
"Renner",
"Joseph",
""
],
[
"Trivedi",
"Priyansh",
""
]
] |
new_dataset
| 0.992213 |
2301.02152
|
Zongren Zou
|
Zongren Zou and George Em Karniadakis
|
L-HYDRA: Multi-Head Physics-Informed Neural Networks
| null | null | null | null |
cs.LG physics.comp-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce multi-head neural networks (MH-NNs) to physics-informed machine
learning, which is a type of neural networks (NNs) with all nonlinear hidden
layers as the body and multiple linear output layers as multi-head. Hence, we
construct multi-head physics-informed neural networks (MH-PINNs) as a potent
tool for multi-task learning (MTL), generative modeling, and few-shot learning
for diverse problems in scientific machine learning (SciML). MH-PINNs connect
multiple functions/tasks via a shared body as the basis functions as well as a
shared distribution for the head. The former is accomplished by solving
multiple tasks with MH-PINNs with each head independently corresponding to each
task, while the latter is accomplished by employing normalizing flows (NFs) for
density estimation and generative modeling. To this end, our method is a two-stage
method, and both stages can be tackled with standard deep learning tools of
NNs, enabling easy implementation in practice. MH-PINNs can be used for various
purposes, such as approximating stochastic processes, solving multiple tasks
synergistically, providing informative prior knowledge for downstream few-shot
learning tasks such as meta-learning and transfer learning, learning
representative basis functions, and uncertainty quantification. We demonstrate
the effectiveness of MH-PINNs in five benchmarks, investigating also the
possibility of synergistic learning in regression analysis. We name the
open-source code "Lernaean Hydra" (L-HYDRA), since this mythical creature
possessed many heads for performing important multiple tasks, as in the
proposed method.
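The multi-head structure itself is simple to state; a minimal forward pass follows (a numpy sketch of the shared-body/linear-heads idea, not the released L-HYDRA code; all sizes are assumptions):

```python
import numpy as np

def mh_forward(x, body, heads):
    """Shared nonlinear body feeding several independent linear output
    heads, one per task."""
    h = x
    for W, b in body:                         # shared hidden layers
        h = np.tanh(h @ W + b)
    return [h @ Wh + bh for Wh, bh in heads]  # per-task linear readouts

rng = np.random.default_rng(0)
body = [(rng.normal(size=(1, 64)), np.zeros(64)),
        (rng.normal(size=(64, 64)), np.zeros(64))]
heads = [(rng.normal(size=(64, 1)), np.zeros(1)) for _ in range(5)]  # 5 tasks
x = np.linspace(-1, 1, 100)[:, None]
outputs = mh_forward(x, body, heads)          # five task-specific predictions
```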
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 16:54:01 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Zou",
"Zongren",
""
],
[
"Karniadakis",
"George Em",
""
]
] |
new_dataset
| 0.995681 |
2301.02160
|
Aashish Anantha Ramakrishnan
|
Aashish Anantha Ramakrishnan, Sharon X. Huang, Dongwon Lee
|
ANNA: Abstractive Text-to-Image Synthesis with Filtered News Captions
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Advancements in Text-to-Image synthesis over recent years have focused more
on improving the quality of generated samples on datasets with descriptive
captions. However, real-world image-caption pairs present in domains such as
news data do not use simple and directly descriptive captions. With captions
containing information on both the image content and underlying contextual
cues, they become abstractive in nature. In this paper, we launch ANNA, an
Abstractive News captioNs dAtaset extracted from online news articles in a
variety of different contexts. We explore the capabilities of current
Text-to-Image synthesis models to generate news domain-specific images using
abstractive captions by benchmarking them on ANNA, in both standard training
and transfer learning settings. The generated images are judged on the basis of
contextual relevance, visual quality, and perceptual similarity to ground-truth
image-caption pairs. Through our experiments, we show that techniques such as
transfer learning achieve limited success in understanding abstractive captions
but still fail to consistently learn the relationships between content and
context features.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 17:19:01 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Ramakrishnan",
"Aashish Anantha",
""
],
[
"Huang",
"Sharon X.",
""
],
[
"Lee",
"Dongwon",
""
]
] |
new_dataset
| 0.999783 |
2301.02213
|
Ja\v{s} \v{S}emrl
|
Peter Jipsen, Ja\v{s} \v{S}emrl
|
Representable and diagonally representable weakening relation algebras
| null | null | null | null |
cs.LO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
A binary relation defined on a poset is a weakening relation if the partial
order acts as a both-sided compositional identity. This is motivated by the
weakening rule in sequent calculi and closely related to models of relevance
logic. For a fixed poset the collection of weakening relations is a subreduct
of the full relation algebra on the underlying set of the poset. We present a
two-player game for the class of representable weakening relation algebras akin
to that for the class of representable relation algebras. This enables us to
define classes of abstract weakening relation algebras that approximate the
quasivariety of representable weakening relation algebras. We give explicit
finite axiomatisations for some of these classes. We define the class of
diagonally representable weakening relation algebras and prove that it is a
discriminator variety. We also provide explicit representations for several
small weakening relation algebras.
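Concretely, on a poset $(P,\le)$ a relation $R\subseteq P\times P$ is a weakening relation exactly when the order composes away on both sides (the standard formulation of the condition described above):

\[
{\le}\,;\,R\,;\,{\le}\;=\;R,
\qquad\text{equivalently}\qquad
x'\le x,\; x\,R\,y,\; y\le y' \;\Longrightarrow\; x'\,R\,y'.
\]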
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 18:32:08 GMT"
}
] | 2023-01-06T00:00:00 |
[
[
"Jipsen",
"Peter",
""
],
[
"Šemrl",
"Jaš",
""
]
] |
new_dataset
| 0.963961 |
1712.08647
|
Taha Yasseri
|
Dong Nguyen and Barbara McGillivray and Taha Yasseri
|
Emo, Love, and God: Making Sense of Urban Dictionary, a Crowd-Sourced
Online Dictionary
|
Accepted, to appear in Royal Society Open Science. Data available
upon request
|
Royal Society Open Science, 5(5), 2018
|
10.1098/rsos.172320
| null |
cs.CL cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet facilitates large-scale collaborative projects, and the emergence
of Web 2.0 platforms, where producers and consumers of content unify, has
drastically changed the information market. On the one hand, the promise of the
"wisdom of the crowd" has inspired successful projects such as Wikipedia, which
has become the primary source of crowd-based information in many languages. On
the other hand, the decentralized and often un-monitored environment of such
projects may make them susceptible to low quality content. In this work, we
focus on Urban Dictionary, a crowd-sourced online dictionary. We combine
computational methods with qualitative annotation and shed light on the overall
features of Urban Dictionary in terms of growth, coverage and types of content.
We measure a high presence of opinion-focused entries, as opposed to the
meaning-focused entries that we expect from traditional dictionaries.
Furthermore, Urban Dictionary covers many informal, unfamiliar words as well as
proper nouns. Urban Dictionary also contains offensive content, but highly
offensive content tends to receive lower scores through the dictionary's voting
system. The low threshold to include new material in Urban Dictionary enables
quick recording of new words and new meanings, but the resulting heterogeneous
content can pose challenges in using Urban Dictionary as a source to study
language innovation.
|
[
{
"version": "v1",
"created": "Fri, 22 Dec 2017 20:27:11 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Apr 2018 13:52:54 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Nguyen",
"Dong",
""
],
[
"McGillivray",
"Barbara",
""
],
[
"Yasseri",
"Taha",
""
]
] |
new_dataset
| 0.998215 |
1802.02788
|
Nuno Ferreira Duarte
|
Nuno Ferreira Duarte, Jovica Tasevski, Moreno Coco, Mirko Rakovi\'c,
Aude Billard, and Jos\'e Santos-Victor
|
Action Anticipation: Reading the Intentions of Humans and Robots
|
8 pages, 7 Figures, IEEE Robotics and Automation Letters 2018
| null |
10.1109/LRA.2018.2861569
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans have the fascinating capacity of processing non-verbal visual cues to
understand and anticipate the actions of other humans. This "intention reading"
ability is underpinned by shared motor-repertoires and action-models, which we
use to interpret the intentions of others as if they were our own.
We investigate how the different cues contribute to the legibility of human
actions during interpersonal interactions. Our first contribution is a publicly
available dataset with recordings of human body-motion and eye-gaze, acquired
in an experimental scenario with an actor interacting with three subjects. From
these data, we conducted a human study to analyse the importance of the
different non-verbal cues for action perception. As our second contribution, we
used the motion/gaze recordings to build a computational model describing the
interaction between two persons. As a third contribution, we embedded this
model in the controller of an iCub humanoid robot and conducted a second human
study, in the same scenario with the robot as an actor, to validate the model's
"intention reading" capability.
Our results show that it is possible to model (non-verbal) signals exchanged
by humans during interaction, and how to incorporate such a mechanism in
robotic systems with the twin goal of : (i) being able to "read" human action
intentions, and (ii) acting in a way that is legible by humans.
|
[
{
"version": "v1",
"created": "Thu, 8 Feb 2018 10:29:01 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Aug 2018 17:03:26 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Duarte",
"Nuno Ferreira",
""
],
[
"Tasevski",
"Jovica",
""
],
[
"Coco",
"Moreno",
""
],
[
"Raković",
"Mirko",
""
],
[
"Billard",
"Aude",
""
],
[
"Santos-Victor",
"José",
""
]
] |
new_dataset
| 0.986145 |
1907.01536
|
Taha Yasseri
|
Bertie Vidgen and Taha Yasseri
|
What, When and Where of petitions submitted to the UK Government during
a time of chaos
|
Preprint; under review
|
Policy Sci 53, 535-557 (2020)
|
10.1007/s11077-020-09395-y
| null |
cs.CY cs.SI physics.data-an physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In times marked by political turbulence and uncertainty, as well as
increasing divisiveness and hyperpartisanship, Governments need to use every
tool at their disposal to understand and respond to the concerns of their
citizens. We study issues raised by the UK public to the Government during
2015-2017 (surrounding the UK EU-membership referendum), mining public opinion
from a dataset of 10,950 petitions (representing 30.5 million signatures). We
extract the main issues with a ground-up natural language processing (NLP)
method, latent Dirichlet allocation (LDA). We then investigate their temporal
dynamics and geographic features. We show that whilst the popularity of some
issues is stable across the two years, others are highly influenced by external
events, such as the referendum in June 2016. We also study the relationship
between petitions' issues and where their signatories are geographically
located. We show that some issues receive support from across the whole country
but others are far more local. We then identify six distinct clusters of
constituencies based on the issues which constituents sign. Finally, we
validate our approach by comparing the petitions' issues with the top issues
reported in Ipsos MORI survey data. These results show the huge power of
computationally analyzing petitions to understand not only what issues citizens
are concerned about but also when and from where.
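A minimal version of the topic-extraction step looks as follows (an illustrative scikit-learn pipeline; the example petitions, preprocessing, and hyperparameters are assumptions and may differ from the paper's setup):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

petitions = ["stop the increase in fuel duty", "hold a second eu referendum",
             "fund more nurses in the nhs", "protect local bus services"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(petitions)                       # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):            # top words per issue
    print(k, [terms[i] for i in topic.argsort()[-3:][::-1]])
```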
|
[
{
"version": "v1",
"created": "Tue, 2 Jul 2019 17:40:40 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Vidgen",
"Bertie",
""
],
[
"Yasseri",
"Taha",
""
]
] |
new_dataset
| 0.999617 |
2007.00843
|
Michael Potter
|
Michael Potter (1), Henry Gridley (1), Noah Lichtenstein (1), Kevin
Hines (1), John Nguyen (1), Jacob Walsh (1) ((1) Northeastern University)
|
Low-light Environment Neural Surveillance
|
Pre-print, accepted to IEEE International Workshop on Machine
Learning for Signal Processing 2020 Conference Proceedings. Code and dataset
are available at https://github.com/mcgridles/
| null |
10.1109/MLSP49062.2020.9231894
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We design and implement an end-to-end system for real-time crime detection in
low-light environments. Unlike Closed-Circuit Television, which performs
reactively, the Low-Light Environment Neural Surveillance provides real time
crime alerts. The system uses a low-light video feed processed in real-time by
an optical-flow network, spatial and temporal networks, and a Support Vector
Machine to identify shootings, assaults, and thefts. We create a low-light
action-recognition dataset, LENS-4, which will be publicly available. An IoT
infrastructure set up via Amazon Web Services interprets messages from the
local board hosting the camera for action recognition and parses the results in
the cloud to relay messages. The system achieves 71.5% accuracy at 20 FPS. The
user interface is a mobile app which allows local authorities to receive
notifications and to view a video of the crime scene. Citizens have a public
app which enables law enforcement to push crime alerts based on user proximity.
|
[
{
"version": "v1",
"created": "Thu, 2 Jul 2020 02:45:41 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Potter",
"Michael",
"",
"Northeastern University"
],
[
"Gridley",
"Henry",
"",
"Northeastern University"
],
[
"Lichtenstein",
"Noah",
"",
"Northeastern University"
],
[
"Hines",
"Kevin",
"",
"Northeastern University"
],
[
"Nguyen",
"John",
"",
"Northeastern University"
],
[
"Walsh",
"Jacob",
"",
"Northeastern University"
]
] |
new_dataset
| 0.998648 |
2106.09637
|
Tiago Barros
|
Tiago Barros, Lu\'is Garrote, Ricardo Pereira, Cristiano Premebida,
Urbano J. Nunes
|
AttDLNet: Attention-based DL Network for 3D LiDAR Place Recognition
|
This preprint has not undergone peer review or any post-submission
improvements or corrections. The Version of Record of this contribution is
published in ROBOT 2022: Fifth Iberian Robotics Conference, and is available
online at https://doi.org/10.1007/978-3-031-21065-5_26
| null |
10.1007/978-3-031-21065-5_26
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
LiDAR-based place recognition is one of the key components of SLAM and global
localization in autonomous vehicles and robotics applications. With the success
of DL approaches in learning useful information from 3D LiDARs, place
recognition has also benefited from this modality, which has led to higher
re-localization and loop-closure detection performance, particularly, in
environments with significant changing conditions. Despite the progress in this
field, the extraction of proper and efficient descriptors from 3D LiDAR data
that are invariant to changing conditions and orientation is still an unsolved
challenge. To address this problem, this work proposes a novel 3D LiDAR-based
deep learning network (named AttDLNet) that uses a range-based proxy
representation for point clouds and an attention network with stacked attention
layers to selectively focus on long-range context and inter-feature
relationships. The proposed network is trained and validated on the KITTI
dataset and an ablation study is presented to assess the novel attention
network. Results show that adding attention to the network improves
performance, leading to efficient loop closures, and outperforming an
established 3D LiDAR-based place recognition approach. From the ablation study,
results indicate that the middle encoder layers have the highest mean
performance, while deeper layers are more robust to orientation change. The
code is publicly available at https://github.com/Cybonic/AttDLNet
|
[
{
"version": "v1",
"created": "Thu, 17 Jun 2021 16:34:37 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 18:10:05 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Aug 2022 10:31:45 GMT"
},
{
"version": "v4",
"created": "Wed, 4 Jan 2023 12:21:40 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Barros",
"Tiago",
""
],
[
"Garrote",
"Luís",
""
],
[
"Pereira",
"Ricardo",
""
],
[
"Premebida",
"Cristiano",
""
],
[
"Nunes",
"Urbano J.",
""
]
] |
new_dataset
| 0.999733 |
2208.12216
|
Shyam Murthy
|
Shyam Murthy, Srinivas Vivek
|
Passive Triangulation Attack on ORide
| null | null |
10.1007/978-3-031-20974-1_8
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Privacy preservation in Ride Hailing Services is intended to protect privacy
of drivers and riders. ORide is one of the early RHS proposals published at
USENIX Security Symposium 2017. In the ORide protocol, riders and drivers,
operating in a zone, encrypt their locations using a Somewhat Homomorphic
Encryption scheme (SHE) and forward them to the Service Provider (SP). SP
homomorphically computes the squared Euclidean distance between riders and
available drivers. The rider receives the encrypted distances and selects the
optimal driver after decryption. In order to prevent a triangulation attack, SP
randomly permutes the distances before sending them to the rider. In this work,
we propose a passive attack that uses triangulation to determine
coordinates of all participating drivers whose permuted distances are available
from the points of view of multiple honest-but-curious adversary riders. An
attack on ORide was published at SAC 2021. The same paper proposes a
countermeasure using noisy Euclidean distances to thwart their attack. We
extend our attack to determine locations of drivers when given their permuted
and noisy Euclidean distances from multiple points of reference, where the
noise perturbation comes from a uniform distribution. We conduct experiments
with different numbers of drivers and for different perturbation values. Our
experiments show that we can determine locations of all drivers participating
in the ORide protocol. For the perturbed distance version of the ORide
protocol, our algorithm reveals locations of about 25% to 50% of participating
drivers. Our algorithm runs in time polynomial in the number of drivers.
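The geometric core of such an attack is standard multilateration from squared Euclidean distances; a minimal noiseless sketch (illustrative only, ignoring the permutation-resolution step the paper addresses; names are assumptions):

```python
import numpy as np

def locate(anchors, sq_dists):
    """Least-squares recovery of a point from squared distances to known
    reference points, by subtracting one equation to linearise the system."""
    p0, d0 = anchors[0], sq_dists[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (anchors[1:] ** 2).sum(axis=1) - (p0 ** 2).sum() - (sq_dists[1:] - d0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

driver = np.array([3.0, 4.0])
riders = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 8.0]])
sq = ((riders - driver) ** 2).sum(axis=1)
print(locate(riders, sq))  # ~[3. 4.]
```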
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 17:04:36 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2022 09:05:55 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Jan 2023 11:09:50 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Murthy",
"Shyam",
""
],
[
"Vivek",
"Srinivas",
""
]
] |
new_dataset
| 0.995687 |
2210.12918
|
Alireza Nasiri
|
Alireza Nasiri, Tristan Bepler
|
Unsupervised Object Representation Learning using Translation and
Rotation Group Equivariant VAE
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many imaging modalities, objects of interest can occur in a variety of
locations and poses (i.e. are subject to translations and rotations in 2d or
3d), but the location and pose of an object does not change its semantics (i.e.
the object's essence). That is, the specific location and rotation of an
airplane in satellite imagery, or the 3d rotation of a chair in a natural
image, or the rotation of a particle in a cryo-electron micrograph, do not
change the intrinsic nature of those objects. Here, we consider the problem of
learning semantic representations of objects that are invariant to pose and
location in a fully unsupervised manner. We address shortcomings in previous
approaches to this problem by introducing TARGET-VAE, a translation and
rotation group-equivariant variational autoencoder framework. TARGET-VAE
combines three core innovations: 1) a rotation and translation
group-equivariant encoder architecture, 2) a structurally disentangled
distribution over latent rotation, translation, and a
rotation-translation-invariant semantic object representation, which are
jointly inferred by the approximate inference network, and 3) a spatially
equivariant generator network. In comprehensive experiments, we show that
TARGET-VAE learns disentangled representations without supervision that
significantly improve upon, and avoid the pathologies of, previous methods.
When trained on images highly corrupted by rotation and translation, the
semantic representations learned by TARGET-VAE are similar to those learned on
consistently posed objects, dramatically improving clustering in the semantic
latent space. Furthermore, TARGET-VAE is able to perform remarkably accurate
unsupervised pose and location inference. We expect methods like TARGET-VAE
will underpin future approaches for unsupervised object generation, pose
prediction, and object detection.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 02:08:19 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Jan 2023 19:45:46 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Nasiri",
"Alireza",
""
],
[
"Bepler",
"Tristan",
""
]
] |
new_dataset
| 0.999023 |
2211.09365
|
Xin Yuan
|
Xin Yuan, Robin Feng, Mingming Ye
|
Low-Resource Mongolian Speech Synthesis Based on Automatic Prosody
Annotation
|
Accepted by NCMMSC 2022
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While deep learning-based text-to-speech (TTS) models such as VITS have shown
excellent results, they typically require a sizable set of high-quality <text,
audio> pairs to train, which is expensive to collect. So far, most languages in
the world still lack the training data needed to develop TTS systems. This
paper proposes two improvement methods for the two problems faced by
low-resource Mongolian speech synthesis: a) In view of the lack of high-quality
<text, audio> pairs of data, it is difficult to model the mapping problem from
linguistic features to acoustic features. Improvements are made using a
pre-trained VITS model and transfer learning methods. b) In view of the problem
of less labeled information, this paper proposes to use an automatic prosodic
annotation method to label the prosodic information of text and corresponding
speech, thereby improving the naturalness and intelligibility of low-resource
Mongolian language. Through empirical research, the N-MOS of the method
proposed in this paper is 4.195, and the I-MOS is 4.228.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 06:33:55 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jan 2023 09:51:42 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Yuan",
"Xin",
""
],
[
"Feng",
"Robin",
""
],
[
"Ye",
"Mingming",
""
]
] |
new_dataset
| 0.995431 |
2212.07086
|
Runhui Huang
|
Runhui Huang, Yanxin Long, Jianhua Han, Hang Xu, Xiwen Liang, Chunjing
Xu, Xiaodan Liang
|
NLIP: Noise-robust Language-Image Pre-training
|
AAAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale cross-modal pre-training paradigms have recently shown ubiquitous
success on a wide range of downstream tasks, e.g., zero-shot classification,
retrieval and image captioning. However, their successes highly rely on the
scale and quality of web-crawled data that naturally contain incomplete and
noisy information (e.g., wrong or irrelevant content). Existing works either
design manual rules to clean data or generate pseudo-targets as auxiliary
signals for reducing noise impact, which do not explicitly tackle both the
incorrect and incomplete challenges simultaneously. In this paper, to
automatically mitigate the impact of noise by solely mining over existing data,
we propose a principled Noise-robust Language-Image Pre-training framework
(NLIP) to stabilize pre-training via two schemes: noise-harmonization and
noise-completion. First, in noise-harmonization scheme, NLIP estimates the
noise probability of each pair according to the memorization effect of
cross-modal transformers, then adopts noise-adaptive regularization to
harmonize the cross-modal alignments with varying degrees. Second, in
noise-completion scheme, to enrich the missing object information of text, NLIP
injects a concept-conditioned cross-modal decoder to obtain semantic-consistent
synthetic captions to complete noisy ones, which uses the retrieved visual
concepts (i.e., objects' names) for the corresponding image to guide captioning
generation. By collaboratively optimizing noise-harmonization and
noise-completion schemes, our NLIP can alleviate the common noise effects
during image-text pre-training in a more efficient way. Extensive experiments
show the significant performance improvements of our NLIP using only 26M data
over existing pre-trained models (e.g., CLIP, FILIP and BLIP) on 12 zero-shot
classification datasets, MSCOCO image captioning and zero-shot image-text
retrieval tasks.
|
[
{
"version": "v1",
"created": "Wed, 14 Dec 2022 08:19:30 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jan 2023 18:23:26 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Huang",
"Runhui",
""
],
[
"Long",
"Yanxin",
""
],
[
"Han",
"Jianhua",
""
],
[
"Xu",
"Hang",
""
],
[
"Liang",
"Xiwen",
""
],
[
"Xu",
"Chunjing",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
new_dataset
| 0.996916 |
2212.14364
|
Thomas Robert Doebbert
|
Thomas R. Doebbert, Henry Beuster, Florian Fischer, Dominik Merli,
Gerd Scholl
|
Testbed for Functional Safety-Relevant Wireless Communication Based on
IO-Link Wireless and 5G
| null | null |
10.24405/14544
| null |
cs.CR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the field of industrial production automation, wireless networks support
highly flexible manufacturing processes and enable technologies to set-up new
production chains and future software businesses. The IO-Link Wireless (IOLW)
protocol is an already established energy-efficient and cost-effective
communication standard for smart sensor devices on the industrial shop floor,
whereas the mobile communication standard 5G will be mainly applied for medium
and long-range wireless communication applications promising low latency times
and high reliability. Therefore, 5G with the coming enhancement of
deterministic ultra-Reliable Low-Latency Communication (uRLLC) is combined with
the robustness and low-latency performance characteristics of IO-Link Wireless.
Features of both technologies are highly beneficial to realize even highly
demanding safety-related applications. The presented testbed shall qualify
wireless functional safety communication with respect to its Residual Error
Probability (REP) and quantify the Probability of Failure per Hour (PFH).
|
[
{
"version": "v1",
"created": "Thu, 29 Dec 2022 16:31:10 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Doebbert",
"Thomas R.",
""
],
[
"Beuster",
"Henry",
""
],
[
"Fischer",
"Florian",
""
],
[
"Merli",
"Dominik",
""
],
[
"Scholl",
"Gerd",
""
]
] |
new_dataset
| 0.994291 |
2301.01350
|
Shreyansh Daftry
|
Shreyansh Daftry, Zhanlin Chen, Yang Cheng, Scott Tepsuporn, Brian
Coltin, Ussama Naam, Lanssie Mingyue Ma, Shehryar Khattak, Matthew Deans,
Larry Matthies
|
LunarNav: Crater-based Localization for Long-range Autonomous Lunar
Rover Navigation
|
IEEE Aerospace Conference 2023. arXiv admin note: text overlap with
arXiv:2203.10073
| null | null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Artemis program requires robotic and crewed lunar rovers for resource
prospecting and exploitation, construction and maintenance of facilities, and
human exploration. These rovers must support navigation for 10s of kilometers
(km) from base camps. A lunar science rover mission concept - Endurance-A, has
been recommended by the new Decadal Survey as the highest priority medium-class
mission of the Lunar Discovery and Exploration Program, and would be required
to traverse approximately 2000 km in the South Pole-Aitkin (SPA) Basin, with
individual drives of several kilometers between stops for downlink. These rover
mission scenarios require functionality that provides onboard, autonomous,
global position knowledge (aka absolute localization). However, planetary
rovers have no onboard global localization capability to date; they have only
used relative localization, by integrating combinations of wheel odometry,
visual odometry, and inertial measurements during each drive to track position
relative to the start of each drive. In this work, we summarize recent
developments from the LunarNav project, where we have developed algorithms and
software to enable lunar rovers to estimate their global position and heading
on the Moon with a goal performance of position error less than 5 meters (m)
and heading error less than 3-degree, 3-sigma, in sunlit areas. This will be
achieved autonomously onboard by detecting craters in the vicinity of the rover
and matching them to a database of known craters mapped from orbit. The overall
technical framework consists of three main elements: 1) crater detection, 2)
crater matching, and 3) state estimation. In previous work, we developed crater
detection algorithms for three different sensing modalities. Our results
suggest that rover localization with an error less than 5 m is highly probable
during daytime operations.
|
[
{
"version": "v1",
"created": "Tue, 3 Jan 2023 20:46:27 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Daftry",
"Shreyansh",
""
],
[
"Chen",
"Zhanlin",
""
],
[
"Cheng",
"Yang",
""
],
[
"Tepsuporn",
"Scott",
""
],
[
"Coltin",
"Brian",
""
],
[
"Naam",
"Ussama",
""
],
[
"Ma",
"Lanssie Mingyue",
""
],
[
"Khattak",
"Shehryar",
""
],
[
"Deans",
"Matthew",
""
],
[
"Matthies",
"Larry",
""
]
] |
new_dataset
| 0.99591 |
2301.01392
|
Daniel Shin
|
Daniel Shin, Anca D. Dragan, Daniel S. Brown
|
Benchmarks and Algorithms for Offline Preference-Based Reward Learning
|
Transactions on Machine Learning Research. arXiv admin note: text
overlap with arXiv:2107.09251
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning a reward function from human preferences is challenging as it
typically requires having a high-fidelity simulator or using expensive and
potentially unsafe actual physical rollouts in the environment. However, in
many tasks the agent might have access to offline data from related tasks in
the same target environment. While offline data is increasingly being used to
aid policy optimization via offline RL, our observation is that it can be a
surprisingly rich source of information for preference learning as well. We
propose an approach that uses an offline dataset to craft preference queries
via pool-based active learning, learns a distribution over reward functions,
and optimizes a corresponding policy via offline RL. Crucially, our proposed
approach does not require actual physical rollouts or an accurate simulator for
either the reward learning or policy optimization steps. To test our approach,
we first evaluate existing offline RL benchmarks for their suitability for
offline reward learning. Surprisingly, for many offline RL domains, we find
that simply using a trivial reward function results in good policy performance,
making these domains ill-suited for evaluating learned rewards. To address
this, we identify a subset of existing offline RL benchmarks that are well
suited for offline reward learning and also propose new offline apprenticeship
learning benchmarks which allow for more open-ended behaviors. When evaluated
on this curated set of domains, our empirical results suggest that combining
offline RL with learned human preferences can enable an agent to learn to
perform novel tasks that were not explicitly shown in the offline data.
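A common way to learn a reward from pairwise preferences, and a reasonable mental model for the reward-learning step here, is the Bradley-Terry objective (a generic sketch, not the authors' exact model; returns and labels are made up):

```python
import numpy as np

def preference_loss(ret_a, ret_b, prefer_a):
    """Bradley-Terry negative log-likelihood over trajectory pairs:
    P(a preferred over b) = sigmoid(R(a) - R(b)), where R is the summed
    learned reward over each segment."""
    p_a = 1.0 / (1.0 + np.exp(-(ret_a - ret_b)))
    return -(prefer_a * np.log(p_a) + (1 - prefer_a) * np.log(1 - p_a)).mean()

ret_a = np.array([1.2, -0.3])     # predicted returns of segment a
ret_b = np.array([0.4, 0.9])      # predicted returns of segment b
print(preference_loss(ret_a, ret_b, prefer_a=np.array([1.0, 0.0])))
```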
|
[
{
"version": "v1",
"created": "Tue, 3 Jan 2023 23:52:16 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Shin",
"Daniel",
""
],
[
"Dragan",
"Anca D.",
""
],
[
"Brown",
"Daniel S.",
""
]
] |
new_dataset
| 0.993206 |
2301.01431
|
Haojie Yu
|
Haojie Yu, Kang Zhao, Xiaoming Xu
|
Semi-MAE: Masked Autoencoders for Semi-supervised Vision Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformer (ViT) suffers from data scarcity in semi-supervised
learning (SSL). To alleviate this issue, inspired by masked autoencoder (MAE),
which is a data-efficient self-supervised learner, we propose Semi-MAE, a pure
ViT-based SSL framework consisting of a parallel MAE branch to assist the
visual representation learning and make the pseudo labels more accurate. The
MAE branch is designed as an asymmetric architecture consisting of a
lightweight decoder and a shared-weights encoder. We feed the weakly-augmented
unlabeled data with a high masking ratio to the MAE branch and reconstruct the
missing pixels. Semi-MAE achieves 75.9% top-1 accuracy on ImageNet with 10%
labels, surpassing prior state-of-the-art in semi-supervised image
classification. In addition, extensive experiments demonstrate that Semi-MAE
can be readily used for other ViT models and masked image modeling methods.
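The masking step the MAE branch relies on is simple to state (a generic MAE-style sketch; the 75% ratio and tensor shapes are illustrative assumptions):

```python
import numpy as np

def mask_patches(patches, mask_ratio=0.75, seed=0):
    """Randomly keep a small subset of patch tokens and report which
    indices were hidden, as in MAE-style pre-training."""
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    keep = max(1, int(n * (1 - mask_ratio)))
    perm = rng.permutation(n)
    visible_idx, masked_idx = np.sort(perm[:keep]), np.sort(perm[keep:])
    return patches[visible_idx], visible_idx, masked_idx

patches = np.random.randn(196, 768)        # 14x14 tokens of a 224x224 image
visible, vis_idx, msk_idx = mask_patches(patches)
```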
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 03:59:17 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Yu",
"Haojie",
""
],
[
"Zhao",
"Kang",
""
],
[
"Xu",
"Xiaoming",
""
]
] |
new_dataset
| 0.978776 |
2301.01453
|
Hongyi Luo
|
Guyue Li, Hongyi Luo, Jiabao Yu, Aiqun Hu and Jiangzhou Wang
|
Information-Theoretic Secure Key Sharing for Wide-Area Mobile
Applications
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid growth of handheld devices in the internet of things (IoT)
networks, mobile applications have become ubiquitous in everyday life. As
technology develops, so do the risks and threats associated with it,
especially in the forthcoming quantum era. Existing IoT networks, however, lack
a quantum-resistant secret key sharing scheme to meet confidential message
transmission demands in wide-area mobile applications. To address this issue,
this article proposes a new scheme, channel reciprocity (CR) based quantum key
distribution (QKD) CR-QKD, which accomplishes the goal of secret key sharing by
combining emerging techniques of QKD and CR-based key generation (CRKG).
Exploiting laws of quantum physics and properties of wireless channels, the
proposed scheme is able to ensure the secrecy of the key, even against
computationally unbounded adversaries. The basic mechanism is elaborated for a
single-user case and it is extended into a multi-user case by redesigning a
multi-user edge forwarding strategy. In addition, to make CR-QKD more
practical, some enhancement strategies are studied to reduce the time delay and
to improve the secret key generation rate in a secure manner. A prototype of
CR-QKD is demonstrated in a metropolitan area network, where secret keys are
shared between two remote IoT devices that are roughly fifteen kilometers apart
from each other. The experimental results have verified that CR-QKD allows a
secret key rate of 424 bits per second with a retransmission rate of 2.1%.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 05:34:59 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Li",
"Guyue",
""
],
[
"Luo",
"Hongyi",
""
],
[
"Yu",
"Jiabao",
""
],
[
"Hu",
"Aiqun",
""
],
[
"Wang",
"Jiangzhou",
""
]
] |
new_dataset
| 0.998502 |
2301.01471
|
Rebecca Lin
|
Rebecca Lin, Craig S. Kaplan
|
Freeform Islamic Geometric Patterns
|
20 pages, 21 figures
| null | null | null |
cs.GR cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Islamic geometric patterns are a rich and venerable ornamental tradition.
Many classic designs feature periodic arrangements of rosettes: star shapes
surrounded by rings of hexagonal petals. We present a new technique for
generating 'freeform' compositions of rosettes: finite designs that freely mix
rosettes of unusual sizes while retaining the aesthetics of traditional
patterns. We use a circle packing as a scaffolding for developing a patch of
polygons and fill each polygon with a motif based on established constructions
from Islamic art.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 07:24:24 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Lin",
"Rebecca",
""
],
[
"Kaplan",
"Craig S.",
""
]
] |
new_dataset
| 0.999665 |
2301.01518
|
Andrea Russo
|
Andrea Russo
|
Organised Firestorm as strategy for business cyber-attacks
|
9 pages, 3 figures, 2 tables
| null | null | null |
cs.CY physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Having a good reputation is paramount for most organisations and companies.
In fact, having an optimal corporate image allows them to have better
transaction relationships with various customers and partners. However, such
reputation is hard to build and easy to destroy for all kind of business
commercial activities (B2C, B2B, B2B2C, B2G). A misunderstanding during the
communication process to the customers, or just a bad communication strategy,
can lead to a disaster for the entire company. This is emphasised by the
reaction of millions of people on social networks, which can be very
detrimental for the corporate image if they react negatively to a certain
event. This is called a firestorm.
In this paper, I propose a well-organised strategy for firestorm attacks on
organisations, also showing how an adversary can leverage them to obtain
private information on the attacked firm. Standard business security procedures
are not designed to operate against multi-domain attacks; therefore, I will
show how it is possible to bypass the classic and advised security procedures
by operating different kinds of attacks. I also propose a different firestorm
attack, targeting a specific business company network in an efficient way.
Finally, I present defensive procedures to reduce the negative effect of
firestorms on a company.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 10:16:05 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Russo",
"Andrea",
""
]
] |
new_dataset
| 0.997256 |
2301.01531
|
Razvan Caramalau
|
Razvan Caramalau, Binod Bhattarai, Danail Stoyanov, Tae-Kyun Kim
|
MoBYv2AL: Self-supervised Active Learning for Image Classification
|
Poster accepted at BMVC 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active learning(AL) has recently gained popularity for deep learning(DL)
models. This is due to efficient and informative sampling, especially when the
learner requires large-scale labelled datasets. Commonly, the sampling and
training happen in stages while more batches are added. One main bottleneck in
this strategy is the narrow representation learned by the model that affects
the overall AL selection.
We present MoBYv2AL, a novel self-supervised active learning framework for
image classification. Our contribution lies in lifting MoBY, one of the most
successful self-supervised learning algorithms, to the AL pipeline. Thus, we
add the downstream task-aware objective function and optimize it jointly with
contrastive loss. Further, we derive a data-distribution selection function
from labelling the new examples. Finally, we test and study our pipeline
robustness and performance for image classification tasks. We successfully
achieved state-of-the-art results when compared to recent AL methods. Code
available: https://github.com/razvancaramalau/MoBYv2AL
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 10:52:02 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Caramalau",
"Razvan",
""
],
[
"Bhattarai",
"Binod",
""
],
[
"Stoyanov",
"Danail",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] |
new_dataset
| 0.977969 |
2301.01576
|
Erez Karpas
|
Ido Glanz, Matan Weksler, Erez Karpas, Tzipi Horowitz-Kraus
|
Robofriend: An Adaptive Storytelling Robotic Teddy Bear -- Technical
Report
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we describe Robofriend, a robotic teddy bear for telling
stories to young children. Robofriend adapts its behavior to keep the
children's attention using reinforcement learning.
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 12:51:24 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Glanz",
"Ido",
""
],
[
"Weksler",
"Matan",
""
],
[
"Karpas",
"Erez",
""
],
[
"Horowitz-Kraus",
"Tzipi",
""
]
] |
new_dataset
| 0.998546 |
2301.01704
|
Lee Milburn
|
Lee Milburn, John Chiaramonte, Jack Fenton
|
Error Tolerant Multi-Robot System for Roadside Trash Collection
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we present the first iteration of an error-tolerant,
autonomous, multi-robot system that monitors highway road verges and identifies
and collects roadside litter. It is designed to use an aerial vehicle that can
rapidly cover a vast area and collect data on the road verge. This data is then
passed to a ground vehicle that constructs a map of the road verge and uses a
trained Convolutional Neural Network (CNN) to identify pieces of litter. After
the pieces of litter are identified on the map of the road verge, the ground
robot navigates to each piece of trash, re-evaluates the area, and performs a
"greedy pickup" procedure. This final stage accounts for any error in the map's
construction or the identified trash's location. We found that ending the
robotic system's control flow with a greedy pickup procedure can retroactively
account for processing errors of the system as it runs. This increases the
system's fault tolerance and allows for the use of cheaper equipment since
pinpoint accuracy is not always necessary. In this paper, we present the
feasibility of this system by testing in simulation and later using real
robotic hardware. We show that the system is effective enough to iterate on its
design principles to create a more sophisticated system.
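The "greedy pickup" control flow can be sketched directly (an illustrative pseudo-planner under assumed 2D coordinates; names are not the project's code):

```python
import math

def greedy_pickup(start_xy, detections):
    """Visit the nearest remaining detection first; on the real robot each
    arrival triggers a local re-scan before grasping, which absorbs errors
    in the map and in the detected trash locations."""
    remaining, route, pos = list(detections), [], start_xy
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

print(greedy_pickup((0, 0), [(5, 1), (1, 2), (3, 3)]))  # [(1, 2), (3, 3), (5, 1)]
```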
|
[
{
"version": "v1",
"created": "Wed, 4 Jan 2023 17:00:49 GMT"
}
] | 2023-01-05T00:00:00 |
[
[
"Milburn",
"Lee",
""
],
[
"Chiaramonte",
"John",
""
],
[
"Fenton",
"Jack",
""
]
] |
new_dataset
| 0.998862 |
2106.00365
|
Verity Allan
|
Verity Allan, Caitriona Leedham
|
Scientific Computing in the Cavendish Laboratory and the pioneering
women Computors
|
11 pages, 8 figures, accepted by Annals of Science
| null |
10.1080/00033790.2022.2106382
| null |
cs.CY astro-ph.IM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The use of computers and the role of women in radio astronomy and X-ray
crystallography research at the Cavendish Laboratory between 1949 and 1975 have
been investigated. We recorded examples of when computers were used, what they
were used for and who used them from hundreds of papers published during these
years. The use of the EDSAC, EDSAC 2 and TITAN computers was found to increase
considerably over this time-scale and they were used for a diverse range of
applications. The majority of references to computer operators and programmers
referred to women, 57% for astronomy and 62% for crystallography, in contrast
to a very small proportion, 4% and 13% respectively, of female authors of
papers.
|
[
{
"version": "v1",
"created": "Tue, 1 Jun 2021 10:17:37 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Jul 2022 10:37:34 GMT"
}
] | 2023-01-04T00:00:00 |
[
[
"Allan",
"Verity",
""
],
[
"Leedham",
"Caitriona",
""
]
] |
new_dataset
| 0.994455 |
2112.11798
|
Izzeddin Teeti
|
Aduen Benjumea, Izzeddin Teeti, Fabio Cuzzolin, Andrew Bradley
|
YOLO-Z: Improving small object detection in YOLOv5 for autonomous
vehicles
|
ICCV 2021
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
As autonomous vehicles and autonomous racing rise in popularity, so does the
need for faster and more accurate detectors. While our naked eyes are able to
extract contextual information almost instantly, even from far away, image
resolution and computational resources limitations make detecting smaller
objects (that is, objects that occupy a small pixel area in the input image) a
genuinely challenging task for machines and a wide-open research field. This
study explores how the popular YOLOv5 object detector can be modified to
improve its performance in detecting smaller objects, with a particular
application in autonomous racing. To achieve this, we investigate how replacing
certain structural elements of the model (as well as their connections and
other parameters) can affect performance and inference time. In doing so, we
propose a series of models at different scales, which we name `YOLO-Z', and
which display an improvement of up to 6.9% in mAP when detecting smaller
objects at 50% IOU, at the cost of just a 3ms increase in inference time
compared to the original YOLOv5. Our objective is to inform future research on
the potential of adjusting a popular detector such as YOLOv5 to address
specific tasks and provide insights on how specific changes can impact small
object detection. Such findings, applied to the broader context of autonomous
vehicles, could increase the amount of contextual information available to such
systems.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 11:03:43 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Dec 2021 23:54:21 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Jan 2023 16:25:00 GMT"
},
{
"version": "v4",
"created": "Tue, 3 Jan 2023 09:18:41 GMT"
}
] | 2023-01-04T00:00:00 |
[
[
"Benjumea",
"Aduen",
""
],
[
"Teeti",
"Izzeddin",
""
],
[
"Cuzzolin",
"Fabio",
""
],
[
"Bradley",
"Andrew",
""
]
] |
new_dataset
| 0.975458 |
2202.08409
|
Aiden Bai
|
Aiden Bai
|
Million.js: A Fast Compiler-Augmented Virtual DOM for the Web
|
8 pages, 12 figures. Accepted to ACM SAC
| null |
10.1145/3555776.3577683
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Interactive web applications created with declarative JavaScript User
Interface (UI) libraries have increasingly dominated the modern internet.
However, existing libraries are primarily made for run-time execution, and rely
on the user to load and render web applications. This led us to create
Million.js, a fast compiler-augmented virtual Document Object Model (DOM) for
the web. Million.js reduces load time and time-to-interactive by creating a
compiler to compute interactive regions of a web application before the user
visits the page. The virtual DOM run-time optimizes interactive content through
compiler flags, compute batching, scheduling, and reactive data primitives to
achieve optimal performance. When benchmarked against the most popular virtual
DOM libraries, Million.js resulted in 133% to 300% faster rendering and 2347%
faster load. In a real-world web application with both comparative benchmarks
and an informal user study, Million.js loaded 35.11% faster after migrating
from React. The findings show that web applications have the potential to be
orders of magnitude faster through JavaScript UI libraries that use Million.js.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 02:17:42 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2022 08:49:14 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Sep 2022 01:54:32 GMT"
},
{
"version": "v4",
"created": "Sat, 22 Oct 2022 19:24:52 GMT"
},
{
"version": "v5",
"created": "Sun, 1 Jan 2023 09:11:30 GMT"
}
] | 2023-01-04T00:00:00 |
[
[
"Bai",
"Aiden",
""
]
] |
new_dataset
| 0.999068 |
2204.02799
|
Dheemahi Rao
|
Dheemahi Rao and Bivas Saha
|
Scandium Nitride as a Gateway III-Nitride Semiconductor for
Optoelectronic Artificial Synaptic Devices
|
14 pages, 5 figures. It is currently under review
|
Adv. Electron. Mater. 2022, 2200975
|
10.1002/aelm.202200975
| null |
cs.ET physics.app-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Traditional computation based on von Neumann architecture is limited by the
time and energy consumption due to data transfer between the storage and the
processing units. The von Neumann architecture is also inefficient in solving
unstructured, probabilistic, and real-time problems. To address these
challenges, a new brain-inspired neuromorphic computational architecture is
required. Due to the absence of resistance-capacitance (RC) delay, their high
bandwidth, and their low power consumption, optoelectronic artificial synaptic devices are
highly attractive. Yet stable, scalable, and
complementary-metal-oxide-semiconductor (CMOS)-compatible synapses have not
been demonstrated. In this work, persistence in the photoconductivity of
undoped and magnesium-doped scandium nitride (ScN) is equated to the inhibitory
and excitatory synaptic plasticity of the biological synapses responsible for
memory and learning. Primary functionalities of a biological synapse like
short-term memory (STM), long-term memory (LTM), the transition from
STM-to-LTM, learning and forgetting, frequency-selective optical filtering,
frequency-dependent potentiation and depression, Hebbian learning, and logic
gate operations are demonstrated.
|
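For intuition, the learning-and-forgetting dynamics described here resemble a saturating weight with exponential decay. The toy model below is an assumption-laden illustration, not taken from the paper; all constants are arbitrary.

```python
# Toy model: persistent photoconductivity as a synaptic weight that is
# potentiated by light pulses and decays ("forgets") between them.
import math

def simulate(pulses, dt=0.1, t_end=10.0, gain=0.2, tau=3.0):
    """Return (time, weight) samples for optical pulses at the given times."""
    t, w, out = 0.0, 0.0, []
    pulses = sorted(pulses)
    while t <= t_end:
        if pulses and abs(t - pulses[0]) < dt / 2:
            w += gain * (1.0 - w)   # saturating potentiation per pulse
            pulses.pop(0)
        w *= math.exp(-dt / tau)    # exponential decay toward baseline
        out.append((round(t, 2), round(w, 3)))
        t += dt
    return out

# Closely spaced pulses drive the weight higher -- an STM-to-LTM-like
# transition in this toy picture.
trace = simulate([1.0, 1.5, 2.0, 2.5])
print(max(w for _, w in trace))
```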
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 13:11:01 GMT"
}
] | 2023-01-04T00:00:00 |
[
[
"Rao",
"Dheemahi",
""
],
[
"Saha",
"Bivas",
""
]
] |
new_dataset
| 0.996341 |
2207.06061
|
Daniel Bogdoll
|
Daniel Bogdoll, Jonas Rauch, J. Marius Z\"ollner
|
DLCSS: Dynamic Longest Common Subsequences
|
Accepted for publication at ICECCME 2022
| null |
10.1109/ICECCME55909.2022.9987849
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous driving is a key technology towards a brighter, more sustainable
future. To enable such a future, it is necessary to utilize autonomous vehicles
in shared mobility models. However, evaluating whether two or more route
requests have the potential for a shared ride is a compute-intensive task if
done by rerouting. In this work, we propose the Dynamic Longest Common
Subsequences algorithm for fast and cost-efficient comparison of two routes for
their compatibility, dynamically incorporating only the parts of the routes that
are suited for a shared trip. Based on this, one can also estimate how many
autonomous vehicles might be necessary to fulfill the local mobility demands.
This can help providers to estimate the necessary fleet sizes, policymakers to
better understand mobility patterns, and cities to scale the necessary
infrastructure.
|
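The core primitive, a longest common subsequence over route segments, is easy to sketch. The following is a hedged illustration of the idea, not the authors' DLCSS implementation (which, per the abstract, additionally restricts the comparison dynamically to route parts suited for sharing):

```python
# Classic LCS over road-segment ids as a route-compatibility primitive.
def lcs_length(route_a, route_b):
    """O(len_a * len_b) dynamic-programming longest common subsequence."""
    m, n = len(route_a), len(route_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if route_a[i - 1] == route_b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def share_potential(route_a, route_b):
    """Shared fraction of the shorter route (0..1) as a compatibility score."""
    return lcs_length(route_a, route_b) / min(len(route_a), len(route_b))

# Routes as sequences of road-segment ids (hypothetical data).
print(share_potential(["s1", "s2", "s3", "s4"], ["s0", "s2", "s3", "s5"]))  # 0.5
```

Scoring pairs of requests this way avoids a full reroute per candidate pair, which is where the claimed cost savings come from.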
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 09:12:33 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 10:26:38 GMT"
}
] | 2023-01-04T00:00:00 |
[
[
"Bogdoll",
"Daniel",
""
],
[
"Rauch",
"Jonas",
""
],
[
"Zöllner",
"J. Marius",
""
]
] |
new_dataset
| 0.969404 |
2208.08484
|
Gunnar Kudrjavets
|
Gunnar Kudrjavets (University of Groningen), Jeff Thomas (Meta
Platforms, Inc.), Aditya Kumar (Snap, Inc.), Nachiappan Nagappan (Meta
Platforms, Inc.), and Ayushi Rastogi (University of Groningen)
|
When malloc() Never Returns NULL -- Reliability as an Illusion
|
6 pages. To be published in the 33rd IEEE International Symposium on
Software Reliability Engineering (ISSRE 2022), Oct 31 - Nov 3 2022,
Charlotte, North Carolina, USA
| null |
10.1109/ISSREW55968.2022.00035
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For decades, the guidance given to software engineers has been to check the
memory allocation results. This validation step is necessary to avoid crashes.
However, in user mode, in modern operating systems (OS), such as Android,
FreeBSD, iOS, and macOS, the caller does not have an opportunity to handle the
memory allocation failures. This behavioral trait results from the actions of a
system component called an out-of-memory (OOM) killer. We identify that the
only mainstream OS that, by default, lets applications detect memory allocation
failures is Microsoft Windows. The false expectation that an application can
handle OOM errors can negatively impact its design. The presence of
error-handling code creates an illusion of reliability and is wasteful in terms
of lines of code and code size. We describe the current behavior of a sample of
popular OSs during low-memory conditions and provide recommendations for
engineering practices going forward.
|
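The paper's central observation can be reproduced from Python via ctypes on Linux. This is a hedged demonstration, not code from the paper; the outcome depends on the kernel's overcommit policy, and actually touching that many pages on a memory-starved machine can invoke the OOM killer, so run it with care.

```python
# On Linux with default overcommit, malloc() for an enormous size can still
# return a non-NULL pointer -- the failure only surfaces later, when pages
# are touched and the OOM killer intervenes. Behavior is policy-dependent.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

# Ask for 1 TiB: far beyond physical RAM on most machines.
ptr = libc.malloc(1 << 40)
print("malloc returned", "NULL" if ptr is None else hex(ptr))
if ptr:
    libc.free(ptr)
```

The error-handling branch that checks the return value may therefore be dead code in practice, which is the "illusion of reliability" the authors describe.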
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 18:54:59 GMT"
}
] | 2023-01-04T00:00:00 |
[
[
"Kudrjavets",
"Gunnar",
"",
"University of Groningen"
],
[
"Thomas",
"Jeff",
"",
"Meta\n Platforms, Inc."
],
[
"Kumar",
"Aditya",
"",
"Snap, Inc."
],
[
"Nagappan",
"Nachiappan",
"",
"Meta\n Platforms, Inc."
],
[
"Rastogi",
"Ayushi",
"",
"University of Groningen"
]
] |
new_dataset
| 0.999326 |