id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2304.03952
|
Rendani Mbuvha
|
Rendani Mbuvha, David I. Adelani, Tendani Mutavhatsindi, Tshimangadzo
Rakhuhu, Aluwani Mauda, Tshifhiwa Joshua Maumela, Andisani Masindi, Seani
Rananga, Vukosi Marivate, and Tshilidzi Marwala
|
MphayaNER: Named Entity Recognition for Tshivenda
|
Accepted at AfricaNLP Workshop at ICLR 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Named Entity Recognition (NER) plays a vital role in various Natural Language
Processing tasks such as information retrieval, text classification, and
question answering. However, NER can be challenging, especially in low-resource
languages with limited annotated datasets and tools. This paper adds to the
effort of addressing these challenges by introducing MphayaNER, the first
Tshivenda NER corpus in the news domain. We establish NER baselines by
fine-tuning state-of-the-art models on MphayaNER. The study also
explores zero-shot transfer between Tshivenda and other related Bantu
languages, with chiShona and Kiswahili showing the best results. Augmenting
MphayaNER with chiShona data was also found to improve model performance
significantly. Both MphayaNER and the baseline models are made publicly
available.
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 08:03:58 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Mbuvha",
"Rendani",
""
],
[
"Adelani",
"David I.",
""
],
[
"Mutavhatsindi",
"Tendani",
""
],
[
"Rakhuhu",
"Tshimangadzo",
""
],
[
"Mauda",
"Aluwani",
""
],
[
"Maumela",
"Tshifhiwa Joshua",
""
],
[
"Masindi",
"Andisani",
""
],
[
"Rananga",
"Seani",
""
],
[
"Marivate",
"Vukosi",
""
],
[
"Marwala",
"Tshilidzi",
""
]
] |
new_dataset
| 0.996721 |
2304.03958
|
Abhishek Bamotra
|
Soumyatattwa Kar, Abhishek Bamotra, Bhavya Duvvuri, Radhika Mohanan
|
KeyDetect --Detection of anomalies and user based on Keystroke Dynamics
| null | null | null | null |
cs.CV cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cyber attacks have always been a great concern. Websites and services with
poor security layers are the most vulnerable to such cyber attacks. Attackers
can easily access sensitive data such as credit card details and social
security numbers from such vulnerable services. Currently, to stop cyber
attacks, a variety of methods are used as security layers, ranging from
two-step verification methods such as one-time passwords and push notification
services to high-end biometric devices such as fingerprint readers and iris
scanners. These security measures carry many drawbacks, the worst being that
users always need to carry the authentication device with them to access their
data. To overcome this, we propose a technique that uses the keystroke dynamics
(typing pattern) of a user to authenticate the genuine user. In this method, we
use a dataset of 51 users typing a password in 8 sessions held on alternate
days to capture mood fluctuations of the users. We developed and implemented
anomaly-detection algorithms based on distance metrics and machine learning
methods such as artificial neural networks (ANN) and convolutional neural
networks (CNN) to classify the users. In the ANN, we implemented multi-class
classification using 1-D convolution, as the data was correlated, as well as
multi-class classification with a negative class, which was used to detect
anomalies across all users put together. We achieved an accuracy of 95.05%
using the ANN with a negative class. From these results, we conclude that the
model works well and can be brought to market as a security layer and a good
alternative to two-step verification using external devices. This technique
will enable users to have a two-step security layer without worrying about
carrying an authentication device.
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 09:00:07 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Kar",
"Soumyatattwa",
""
],
[
"Bamotra",
"Abhishek",
""
],
[
"Duvvuri",
"Bhavya",
""
],
[
"Mohanan",
"Radhika",
""
]
] |
new_dataset
| 0.998075 |
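The keystroke-dynamics approach summarized in the abstract above lends itself to a compact illustration. The sketch below is not the authors' code: it scores hypothetical fixed-length typing-timing vectors against a per-user mean template with a Manhattan-distance anomaly detector, one of the simplest distance-metric baselines for this task; the feature dimension and all numbers are made up for the example.

```python
# Minimal keystroke-dynamics anomaly-detection sketch (illustrative only).
import numpy as np

def fit_template(enroll_vectors: np.ndarray) -> np.ndarray:
    """Per-user template: mean of the enrollment timing vectors."""
    return enroll_vectors.mean(axis=0)

def anomaly_scores(template: np.ndarray, test_vectors: np.ndarray) -> np.ndarray:
    """Manhattan distance to the template; larger means more anomalous."""
    return np.abs(test_vectors - template).sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical hold/latency times in seconds, 31 features per attempt.
    genuine = rng.normal(0.20, 0.02, size=(40, 31))
    impostor = rng.normal(0.28, 0.05, size=(10, 31))
    template = fit_template(genuine[:30])
    print("genuine:", anomaly_scores(template, genuine[30:]).mean())
    print("impostor:", anomaly_scores(template, impostor).mean())
```

Thresholding these scores (for example at a percentile of the enrollment distances) would turn the detector into an accept/reject decision; the neural-network variants described in the abstract replace this hand-crafted distance with learned classifiers.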
2304.04026
|
Marek \v{S}uppa
|
D\'avid \v{S}uba and Marek \v{S}uppa and Jozef Kub\'ik and Endre
Hamerlik and Martin Tak\'a\v{c}
|
WikiGoldSK: Annotated Dataset, Baselines and Few-Shot Learning
Experiments for Slovak Named Entity Recognition
|
BSNLP 2023 Workshop at EACL 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Named Entity Recognition (NER) is a fundamental NLP task with a wide range
of practical applications. The performance of state-of-the-art NER methods
depends on high-quality, manually annotated datasets, which still do not exist
for some languages. In this work we aim to remedy this situation in Slovak by
introducing WikiGoldSK, the first sizable human-labelled Slovak NER dataset. We
benchmark it by evaluating state-of-the-art multilingual pretrained language
models and by comparing it to the existing silver-standard Slovak NER dataset.
We also conduct few-shot experiments and show that training on the
silver-standard dataset yields better results. To enable future work that can
be based on Slovak NER, we release the dataset, code, and trained models
publicly under permissible licensing terms at
https://github.com/NaiveNeuron/WikiGoldSK.
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 14:37:52 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Šuba",
"Dávid",
""
],
[
"Šuppa",
"Marek",
""
],
[
"Kubík",
"Jozef",
""
],
[
"Hamerlik",
"Endre",
""
],
[
"Takáč",
"Martin",
""
]
] |
new_dataset
| 0.999513 |
2304.04048
|
Maxim Khomiakov
|
Maxim Khomiakov, Michael Riis Andersen, Jes Frellsen
|
Polygonizer: An auto-regressive building delineator
|
ICLR 2023 Workshop on Machine Learning in Remote Sensing
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In geospatial planning, it is often essential to represent objects in a
vectorized format, as this format easily translates to downstream tasks such as
web development, graphics, or design. While these problems are frequently
addressed using semantic segmentation, which requires additional
post-processing to vectorize objects in a non-trivial way, we present an
Image-to-Sequence model that allows for direct shape inference and is ready for
vector-based workflows out of the box. We demonstrate the model's performance
in various ways, including perturbations to the image input that correspond to
variations or artifacts commonly encountered in remote sensing applications.
Our model outperforms prior works when using ground truth bounding boxes (one
object per image), achieving the lowest maximum tangent angle error.
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 15:36:48 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Khomiakov",
"Maxim",
""
],
[
"Andersen",
"Michael Riis",
""
],
[
"Frellsen",
"Jes",
""
]
] |
new_dataset
| 0.991833 |
2304.04054
|
Anna Glazkova
|
Anna Glazkova
|
tmn at SemEval-2023 Task 9: Multilingual Tweet Intimacy Detection using
XLM-T, Google Translate, and Ensemble Learning
|
7 pages. The 17th International Workshop on Semantic Evaluation
(SemEval-2023)
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The paper describes a transformer-based system designed for SemEval-2023 Task
9: Multilingual Tweet Intimacy Analysis. The purpose of the task was to predict
the intimacy of tweets in a range from 1 (not intimate at all) to 5 (very
intimate). The official training set for the competition consisted of tweets in
six languages (English, Spanish, Italian, Portuguese, French, and Chinese). The
test set included the given six languages as well as external data with four
languages not present in the training set (Hindi, Arabic, Dutch, and Korean).
We presented a solution based on an ensemble of XLM-T, a multilingual RoBERTa
model adapted to the Twitter domain. To improve performance on unseen
languages, each tweet was supplemented by its English translation. We explored
the effectiveness of translated data for the languages seen in fine-tuning
compared to unseen languages and estimated strategies for using translated data
in transformer-based models. Our solution ranked 4th on the leaderboard while
achieving an overall Pearson's r of 0.599 over the test set. The proposed
system improves up to 0.088 Pearson's r over a score averaged across all 45
submissions.
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 15:50:16 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Glazkova",
"Anna",
""
]
] |
new_dataset
| 0.998878 |
2304.04068
|
Javad Peymanfard
|
Javad Peymanfard, Ali Lashini, Samin Heydarian, Hossein Zeinali,
Nasser Mozayani
|
Word-level Persian Lipreading Dataset
| null |
In 2022 12th International Conference on Computer and Knowledge
Engineering (ICCKE) (pp. 225-230). IEEE
|
10.1109/ICCKE57176.2022.9960105
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Lip-reading has made impressive progress in recent years, driven by advances
in deep learning. Nonetheless, a prerequisite for such advances is a suitable
dataset. This paper provides a new in-the-wild dataset for Persian word-level
lipreading containing 244,000 videos from approximately 1,800 speakers. We
evaluated the state-of-the-art method in this field and used a novel approach
for word-level lip-reading. In this method, we used the AV-HuBERT model for
feature extraction and obtained significantly better performance on our
dataset.
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 17:00:35 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Peymanfard",
"Javad",
""
],
[
"Lashini",
"Ali",
""
],
[
"Heydarian",
"Samin",
""
],
[
"Zeinali",
"Hossein",
""
],
[
"Mozayani",
"Nasser",
""
]
] |
new_dataset
| 0.992968 |
2304.04108
|
Mohammed Salah
|
Mohammed Salah, Abdulla Ayyad, Mohammed Ramadan, Yusra Abdulrahman,
Dewald Swart, Abdelqader Abusafieh, Lakmal Seneviratne, Yahya Zweiri
|
High Speed Neuromorphic Vision-Based Inspection of Countersinks in
Automated Manufacturing Processes
|
14 pages, 11 figures, 7 tables, submitted to Journal of Intelligent
Manufacturing
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Countersink inspection is crucial in various automated assembly lines,
especially in the aerospace and automotive sectors. Advancements in machine
vision introduced automated robotic inspection of countersinks using laser
scanners and monocular cameras. Nevertheless, the aforementioned sensing
pipelines require the robot to pause on each hole for inspection due to high
latency and measurement uncertainties with motion, leading to prolonged
execution times of the inspection task. The neuromorphic vision sensor, on the
other hand, has the potential to expedite the countersink inspection process,
but the unorthodox output of the neuromorphic technology prohibits utilizing
traditional image processing techniques. Therefore, novel event-based
perception algorithms need to be introduced. We propose a countersink detection
approach on the basis of event-based motion compensation and the mean-shift
clustering principle. In addition, our framework presents a robust event-based
circle detection algorithm to precisely estimate the depth of the countersink
specimens. The proposed approach expedites the inspection process by a factor
of 10$\times$ compared to conventional countersink inspection methods. The work
in this paper was validated for over 50 trials on three countersink workpiece
variants. The experimental results show that our method provides a precision of
0.025 mm for countersink depth inspection despite the low resolution of
commercially available neuromorphic cameras.
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 21:54:46 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Salah",
"Mohammed",
""
],
[
"Ayyad",
"Abdulla",
""
],
[
"Ramadan",
"Mohammed",
""
],
[
"Abdulrahman",
"Yusra",
""
],
[
"Swart",
"Dewald",
""
],
[
"Abusafieh",
"Abdelqader",
""
],
[
"Seneviratne",
"Lakmal",
""
],
[
"Zweiri",
"Yahya",
""
]
] |
new_dataset
| 0.998023 |
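The mean-shift step described in the countersink abstract above can be illustrated with a short, self-contained sketch. This is not the authors' event-based pipeline: it simply clusters hypothetical 2D event coordinates (already motion-compensated and accumulated into pixel positions) with scikit-learn's MeanShift to recover candidate hole centres; the coordinates and bandwidth are invented for the example.

```python
# Illustrative mean-shift clustering of (x, y) event coordinates.
import numpy as np
from sklearn.cluster import MeanShift

def cluster_event_centres(events_xy: np.ndarray, bandwidth: float = 5.0) -> np.ndarray:
    """events_xy: (N, 2) pixel coordinates; returns the cluster centres."""
    ms = MeanShift(bandwidth=bandwidth)
    ms.fit(events_xy)
    return ms.cluster_centers_

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two synthetic countersink rims as roughly circular event clouds.
    hole_a = rng.normal([40.0, 40.0], 2.0, size=(300, 2))
    hole_b = rng.normal([90.0, 55.0], 2.0, size=(300, 2))
    print(cluster_event_centres(np.vstack([hole_a, hole_b])))
```

A subsequent circle fit around each centre would then estimate the rim geometry, which is the role the abstract assigns to the event-based circle-detection stage.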
2304.04150
|
Kevin Zakka
|
Kevin Zakka, Laura Smith, Nimrod Gileadi, Taylor Howell, Xue Bin Peng,
Sumeet Singh, Yuval Tassa, Pete Florence, Andy Zeng, Pieter Abbeel
|
RoboPianist: A Benchmark for High-Dimensional Robot Control
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new benchmarking suite for high-dimensional control, targeted
at testing high spatial and temporal precision, coordination, and planning, all
with an underactuated system frequently making-and-breaking contacts. The
proposed challenge is mastering the piano through bi-manual dexterity, using a
pair of simulated anthropomorphic robot hands. We call it RoboPianist, and the
initial version covers a broad set of 150 variable-difficulty songs. We
investigate both model-free and model-based methods on the benchmark,
characterizing their performance envelopes. We observe that while certain
existing methods, when well-tuned, can achieve impressive levels of performance
in certain aspects, there is significant room for improvement. RoboPianist
provides a rich quantitative benchmarking environment, with human-interpretable
results, high ease of expansion by simply augmenting the repertoire with new
songs, and opportunities for further research, including in multi-task
learning, zero-shot generalization, multimodal (sound, vision, touch) learning,
and imitation. Supplementary information, including videos of our control
policies, can be found at https://kzakka.com/robopianist/
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 03:53:05 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Zakka",
"Kevin",
""
],
[
"Smith",
"Laura",
""
],
[
"Gileadi",
"Nimrod",
""
],
[
"Howell",
"Taylor",
""
],
[
"Peng",
"Xue Bin",
""
],
[
"Singh",
"Sumeet",
""
],
[
"Tassa",
"Yuval",
""
],
[
"Florence",
"Pete",
""
],
[
"Zeng",
"Andy",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
new_dataset
| 0.999576 |
2304.04185
|
Yinhao Li
|
Yinhao Li, Jinrong Yang, Jianjian Sun, Han Bao, Zheng Ge, Li Xiao
|
BEVStereo++: Accurate Depth Estimation in Multi-view 3D Object Detection
via Dynamic Temporal Stereo
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bounded by the inherent ambiguity of depth perception, contemporary
multi-view 3D object detection methods run into a performance bottleneck.
Intuitively, leveraging temporal multi-view stereo (MVS) technology is a
natural way to tackle this ambiguity. However, traditional applications of MVS
have two limitations in 3D object detection scenes: 1) the affinity measurement
among all views incurs an expensive computational cost; 2) it is difficult to
deal with outdoor scenarios where objects are often mobile. To this end, we
propose BEVStereo++: by introducing a dynamic temporal stereo strategy,
BEVStereo++ is able to cut down the harm introduced by temporal stereo when
dealing with those two scenarios. Going one step further, we apply a Motion
Compensation Module and long-sequence Frame Fusion to BEVStereo++, which yields
a further performance boost and error reduction. Without bells and whistles,
BEVStereo++ achieves state-of-the-art (SOTA) results on both the Waymo and
nuScenes datasets.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 08:04:26 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Li",
"Yinhao",
""
],
[
"Yang",
"Jinrong",
""
],
[
"Sun",
"Jianjian",
""
],
[
"Bao",
"Han",
""
],
[
"Ge",
"Zheng",
""
],
[
"Xiao",
"Li",
""
]
] |
new_dataset
| 0.97655 |
2304.04200
|
Qiu Changjie
|
Changjie Qiu, Zhiyong Wang, Xiuhong Lin, Yu Zang, Cheng Wang, Weiquan
Liu
|
DSMNet: Deep High-precision 3D Surface Modeling from Sparse Point Cloud
Frames
|
To be published in IEEE Geoscience and Remote Sensing Letters (GRSL)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing point cloud modeling datasets primarily express modeling precision
through pose or trajectory precision rather than through the point cloud
modeling effect itself. To meet this demand, we first independently construct a
LiDAR system with an optical stage, and then use it to build HPMB, a
high-precision, multi-beam, real-world dataset. Second, we propose an
HPMB-based evaluation method for object-level modeling to overcome this
limitation. In addition, existing point cloud modeling methods tend to generate
continuous skeletons of the global environment and hence pay little attention
to the shape of complex objects. To tackle this challenge, we propose DSMNet, a
novel learning-based joint framework for high-precision 3D surface modeling
from sparse point cloud frames. DSMNet comprises density-aware Point Cloud
Registration (PCR) and geometry-aware Point Cloud Sampling (PCS) to effectively
learn the implicit structure features of sparse point clouds. Extensive
experiments demonstrate that DSMNet outperforms state-of-the-art methods in PCS
and PCR on the Multi-View Partial point cloud (MVP) database. Furthermore,
experiments on the open-source KITTI dataset and our proposed HPMB dataset show
that DSMNet can be generalized as a post-processing step for Simultaneous
Localization And Mapping (SLAM), thereby improving modeling precision in
environments with sparse point clouds.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 09:23:06 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Qiu",
"Changjie",
""
],
[
"Wang",
"Zhiyong",
""
],
[
"Lin",
"Xiuhong",
""
],
[
"Zang",
"Yu",
""
],
[
"Wang",
"Cheng",
""
],
[
"Liu",
"Weiquan",
""
]
] |
new_dataset
| 0.960335 |
2304.04203
|
Shichao Li
|
Delong Liu, Shichao Li
|
OpenDriver: an open-road driver state detection dataset
| null | null | null | null |
cs.AI cs.CV cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In modern society, road safety relies heavily on the psychological and
physiological state of drivers. Negative factors such as fatigue, drowsiness,
and stress can impair drivers' reaction time and decision making abilities,
leading to an increased incidence of traffic accidents. Among the numerous
studies for impaired driving detection, wearable physiological measurement is a
real-time approach to monitoring a driver's state. However, currently, there
are few driver physiological datasets in open road scenarios and the existing
datasets suffer from issues such as poor signal quality, small sample sizes,
and short data collection periods. Therefore, in this paper, a large-scale
multimodal driving dataset for driver impairment detection and biometric data
recognition is designed and described. The dataset contains two modalities of
driving signals: six-axis inertial signals and electrocardiogram (ECG) signals,
which were recorded while over one hundred drivers were following the same
route through open roads during several months. Both the ECG signal sensor and
the six-axis inertial signal sensor are installed on a specially designed
steering wheel cover, allowing for data collection without disturbing the
driver. Additionally, electrodermal activity (EDA) signals were also recorded
during the driving process and will be integrated into the presented dataset
soon. Future work can build upon this dataset to advance the field of driver
impairment detection. New methods can be explored for integrating other types
of biometric signals, such as eye tracking, to further enhance the
understanding of driver states. The insights gained from this dataset can also
inform the development of new driver assistance systems, promoting safer
driving practices and reducing the risk of traffic accidents. The OpenDriver
dataset will be publicly available soon.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 10:08:38 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Liu",
"Delong",
""
],
[
"Li",
"Shichao",
""
]
] |
new_dataset
| 0.999847 |
2304.04212
|
David Beauchemin
|
David Beauchemin and Richard Khoury
|
RISC: Generating Realistic Synthetic Bilingual Insurance Contract
|
Accepted at Canadian AI conference 2023
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents RISC, an open-source Python data-generation package
(https://github.com/GRAAL-Research/risc). RISC generates look-alike automobile
insurance contracts based on the Quebec regulatory insurance form, in French
and English. Insurance contracts are 90 to 100 pages long and use complex legal
and insurance-specific vocabulary that is difficult for a layperson. Hence,
they are a much more complex class of documents than those in traditional NLP
corpora. Therefore, we introduce RISCBAC, a Realistic Insurance Synthetic
Bilingual Automobile Contract dataset based on the mandatory Quebec car
insurance contract. The dataset comprises 10,000 unannotated insurance
contracts in French and English. RISCBAC enables NLP research on unsupervised
automatic summarisation, question answering, text simplification, machine
translation and more. Moreover, it can be further automatically annotated as a
dataset for supervised tasks such as NER.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 10:42:18 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Beauchemin",
"David",
""
],
[
"Khoury",
"Richard",
""
]
] |
new_dataset
| 0.996699 |
2304.04231
|
Dingkang Liang
|
Dingkang Liang, Jiahao Xie, Zhikang Zou, Xiaoqing Ye, Wei Xu, Xiang
Bai
|
CrowdCLIP: Unsupervised Crowd Counting via Vision-Language Model
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised crowd counting relies heavily on costly manual labeling, which is
difficult and expensive, especially in dense scenes. To alleviate the problem,
we propose a novel unsupervised framework for crowd counting, named CrowdCLIP.
The core idea is built on two observations: 1) the recent contrastive
pre-trained vision-language model (CLIP) has presented impressive performance
on various downstream tasks; 2) there is a natural mapping between crowd
patches and count text. To the best of our knowledge, CrowdCLIP is the first to
investigate vision-language knowledge to solve the counting problem.
Specifically, in the training stage, we exploit the multi-modal ranking loss by
constructing ranking text prompts to match the size-sorted crowd patches to
guide the image encoder learning. In the testing stage, to deal with the
diversity of image patches, we propose a simple yet effective progressive
filtering strategy to first select the highly potential crowd patches and then
map them into the language space with various counting intervals. Extensive
experiments on five challenging datasets demonstrate that the proposed
CrowdCLIP achieves superior performance compared to previous unsupervised
state-of-the-art counting methods. Notably, CrowdCLIP even surpasses some
popular fully-supervised methods under the cross-dataset setting. The source
code will be available at https://github.com/dk-liang/CrowdCLIP.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 12:56:54 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Liang",
"Dingkang",
""
],
[
"Xie",
"Jiahao",
""
],
[
"Zou",
"Zhikang",
""
],
[
"Ye",
"Xiaoqing",
""
],
[
"Xu",
"Wei",
""
],
[
"Bai",
"Xiang",
""
]
] |
new_dataset
| 0.999656 |
2304.04254
|
Ankush Sawarkar
|
Nitesh Ghodichor, Raj Thaneeghavl. V, Dinesh Sahu, Gautam Borkar,
Ankush Sawarkar
|
Secure Routing Protocol To Mitigate Attacks By Using Blockchain
Technology In Manet
|
https://aircconline.com/ijcnc/V15N2/15223cnc07.pdf
| null | null | null |
cs.CR cs.AI cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A MANET is a collection of mobile nodes that communicate through wireless
networks as they move from one point to another. A MANET is an
infrastructure-less network with a changeable topology; as a result, it is very
susceptible to attacks. Preventing attacks on a MANET is a serious difficulty.
Malicious network nodes are the source of network-based attacks. In a MANET,
attacks can take various forms, and each one alters the network's operation in
its own way. In general, attacks can be separated into two categories: those
that target the data traffic on a network and those that target the control
traffic. This article explains the many sorts of attacks, their impact on a
MANET, and the MANET-based defence measures that are currently in place. The
suggested secure routing algorithm (SRA) that employs blockchain technology
(SRABC) protects the MANET from attacks and authenticates nodes. The SRA
proposed with blockchain technology safeguards control and data flow against
threats. This is achieved by generating a hash function for every transaction.
We begin by discussing the security of the MANET. The second section of this
article explores the role of blockchain in MANET security. In the third
section, the SRA is described in connection with blockchain. In the fourth
section, the SRA with blockchain is evaluated using PDR and throughput. The
results suggest that the proposed technique enhances MANET security while
simultaneously decreasing delay. The performance of the proposed technique is
analysed and compared to the routing protocols Q-AODV and DSR.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 15:19:51 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Ghodichor",
"Nitesh",
""
],
[
"Thaneeghavl.",
"Raj",
"V"
],
[
"Sahu",
"Dinesh",
""
],
[
"Borkar",
"Gautam",
""
],
[
"Sawarkar",
"Ankush",
""
]
] |
new_dataset
| 0.991199 |
2304.04259
|
Amir Nazemi
|
Amir Nazemi, Zeyad Moustafa, Paul Fieguth
|
CLVOS23: A Long Video Object Segmentation Dataset for Continual Learning
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Continual learning in real-world scenarios is a major challenge. A general
continual learning model should have a constant memory size and no predefined
task boundaries, as is the case in semi-supervised Video Object Segmentation
(VOS), where continual learning challenges particularly present themselves in
working on long video sequences. In this article, we first formulate the
problem of semi-supervised VOS, specifically online VOS, as a continual
learning problem, and then secondly provide a public VOS dataset, CLVOS23,
focusing on continual learning. Finally, we propose and implement a
regularization-based continual learning approach on LWL, an existing online VOS
baseline, to demonstrate the efficacy of continual learning when applied to
online VOS and to establish a CLVOS23 baseline. We apply the proposed baseline
to the Long Videos dataset as well as to two short video VOS datasets, DAVIS16
and DAVIS17. To the best of our knowledge, this is the first time that VOS has
been defined and addressed as a continual learning problem.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 15:33:07 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Nazemi",
"Amir",
""
],
[
"Moustafa",
"Zeyad",
""
],
[
"Fieguth",
"Paul",
""
]
] |
new_dataset
| 0.999738 |
2304.04273
|
Prithila Angkan
|
Prithila Angkan, Behnam Behinaein, Zunayed Mahmud, Anubhav Bhatti,
Dirk Rodenburg, Paul Hungler and Ali Etemad
|
Multimodal Brain-Computer Interface for In-Vehicle Driver Cognitive Load
Measurement: Dataset and Baselines
|
13 pages, 8 figures, 11 tables. This work has been submitted to the
IEEE for possible publication. Copyright may be transferred without notice
| null | null | null |
cs.LG cs.HC eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Through this paper, we introduce a novel driver cognitive load assessment
dataset, CL-Drive, which contains Electroencephalogram (EEG) signals along with
other physiological signals such as Electrocardiography (ECG) and Electrodermal
Activity (EDA) as well as eye tracking data. The data was collected from 21
subjects while driving in an immersive vehicle simulator, in various driving
conditions, to induce different levels of cognitive load in the subjects. The
tasks consisted of 9 complexity levels for 3 minutes each. Each driver reported
their subjective cognitive load every 10 seconds throughout the experiment. The
dataset contains the subjective cognitive load recorded as ground truth. In
this paper, we also provide benchmark classification results for different
machine learning and deep learning models for both binary and ternary label
distributions. We followed two evaluation criteria, namely 10-fold and
leave-one-subject-out (LOSO). We trained our models on both hand-crafted
features as well as on raw data.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 16:35:31 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Angkan",
"Prithila",
""
],
[
"Behinaein",
"Behnam",
""
],
[
"Mahmud",
"Zunayed",
""
],
[
"Bhatti",
"Anubhav",
""
],
[
"Rodenburg",
"Dirk",
""
],
[
"Hungler",
"Paul",
""
],
[
"Etemad",
"Ali",
""
]
] |
new_dataset
| 0.999742 |
2304.04280
|
Yanis Labrak
|
Yanis Labrak, Adrien Bazoge, Richard Dufour, Mickael Rouvier, Emmanuel
Morin, B\'eatrice Daille, Pierre-Antoine Gourraud
|
FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for
Medical domain
| null |
Proceedings of the 13th International Workshop on Health Text
Mining and Information Analysis (LOUHI 2022)
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This paper introduces FrenchMedMCQA, the first publicly available
Multiple-Choice Question Answering (MCQA) dataset in French for the medical domain.
It is composed of 3,105 questions taken from real exams of the French medical
specialization diploma in pharmacy, mixing single and multiple answers. Each
instance of the dataset contains an identifier, a question, five possible
answers and their manual correction(s). We also propose first baseline models
to automatically process this MCQA task in order to report on the current
performances and to highlight the difficulty of the task. A detailed analysis
of the results showed that it is necessary to have representations adapted to
the medical domain or to the MCQA task: in our case, English specialized models
yielded better results than generic French ones, even though FrenchMedMCQA is
in French. Corpus, models and tools are available online.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 16:57:40 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Labrak",
"Yanis",
""
],
[
"Bazoge",
"Adrien",
""
],
[
"Dufour",
"Richard",
""
],
[
"Rouvier",
"Mickael",
""
],
[
"Morin",
"Emmanuel",
""
],
[
"Daille",
"Béatrice",
""
],
[
"Gourraud",
"Pierre-Antoine",
""
]
] |
new_dataset
| 0.99981 |
2304.04302
|
Xiao Xiong
|
Xiao Xiong, Xinyu Zhang, Huanhao Huang, Kangyao Huang
|
Bionic Collapsible Wings in Aquatic-aerial Robot
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The concept of aerial-aquatic robots has emerged as an innovative solution
for operating both in the air and underwater. Previous research on the design
of such robots has mainly focused on mature technologies such as fixed-wing and
multi-rotor aircraft. The flying fish, a unique aerial-aquatic animal that can
both swim in water and glide over the sea surface, has not been fully explored
as a bionic robot model, especially regarding its motion patterns with
collapsible pectoral fins. To verify the contribution of collapsible wings to
the flying-fish motion pattern, we have designed a novel bio-robot with
collapsible wings inspired by the flying fish. The bionic prototype has been
successfully designed and fabricated, incorporating collapsible wings with soft
hydraulic actuators, an innovative application of soft actuators to a micro
aquatic-aerial robot. We have analyzed and built a precise dynamics model for
control, and tested both the soft hydraulic actuators and the detailed
aerodynamic coefficients. To further verify the feasibility of the collapsible
wings, we conducted simulations under different conditions, varying the
discharge angle, the area of the collapsible wings, and the use of the ground
effect. The results confirm the controllability of the collapsible wings and
demonstrate the unique multi-modal motion pattern between water and air.
Overall, our research constitutes a study of collapsible wings in
aquatic-aerial robots and contributes significantly to their development;
collapsible wings should prove a valuable element of future aquatic-aerial
robots.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 19:31:32 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Xiong",
"Xiao",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Huang",
"Huanhao",
""
],
[
"Huang",
"Kangyao",
""
]
] |
new_dataset
| 0.98734 |
2304.04318
|
Florian Jacob
|
Florian Jacob, Hannes Hartenstein
|
On Extend-Only Directed Posets and Derived Byzantine-Tolerant Replicated
Data Types (Extended Version)
|
With the inclusion of an appendix of a formalization and CRDT proof
sketch of an EDP-based CRDT with systemic access control, this is an extended
version of the paper presented at the 10th Workshop on Principles and
Practice of Consistency for Distributed Data (PaPoC), 2023-05-08, Rome, Italy
| null |
10.1145/3578358.3591333
| null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We uncover the extend-only directed posets (EDP) structure as a unification
of recently discussed DAG-based Byzantine-tolerant conflict-free replicated
data types (CRDT). We also show how a key-value map model can be derived from
the EDP formulation, and give an outlook on an EDP-based systemic access
control CRDT as a formalization of the CRDT used in the Matrix messaging
system.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 21:19:13 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Jacob",
"Florian",
""
],
[
"Hartenstein",
"Hannes",
""
]
] |
new_dataset
| 0.995677 |
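The extend-only structure described in the abstract above can be sketched concretely. The toy below is not the paper's formalization: it models a grow-only DAG whose replicas merge by taking the union of (node, parents) records, a join that is commutative, associative, and idempotent, which is what a state-based CRDT needs; all names are invented for the example.

```python
# Toy extend-only DAG with a CRDT-style merge (illustrative only).
from dataclasses import dataclass, field

@dataclass
class ExtendOnlyDag:
    # node id -> ids of its parents (immutable once written)
    edges: dict[str, frozenset[str]] = field(default_factory=dict)

    def extend(self, node: str, parents: frozenset[str]) -> None:
        # Extend-only: new nodes may only reference already-known nodes.
        assert parents <= set(self.edges)
        self.edges.setdefault(node, parents)

    def merge(self, other: "ExtendOnlyDag") -> None:
        # Join = union of records; existing entries are never changed.
        for node, parents in other.edges.items():
            self.edges.setdefault(node, parents)

r1, r2 = ExtendOnlyDag(), ExtendOnlyDag()
r1.extend("a", frozenset()); r1.extend("b", frozenset({"a"}))
r2.extend("a", frozenset()); r2.extend("c", frozenset({"a"}))
r1.merge(r2); r2.merge(r1)
assert r1.edges == r2.edges  # replicas converge regardless of merge order
```

A Byzantine-tolerant design along the lines of the paper would additionally authenticate each record and validate referenced parents before accepting them; the sketch only shows the convergence property of the underlying extend-only structure.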
2304.04333
|
Mohammad Khalid Jawed
|
Shivam K Panda, Yongkyu Lee, M. Khalid Jawed
|
Agronav: Autonomous Navigation Framework for Agricultural Robots and
Vehicles using Semantic Segmentation and Semantic Line Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The successful implementation of vision-based navigation in agricultural
fields hinges upon two critical components: 1) the accurate identification of
key components within the scene, and 2) the identification of lanes through the
detection of boundary lines that separate the crops from the traversable
ground. We propose Agronav, an end-to-end vision-based autonomous navigation
framework, which outputs the centerline from the input image by sequentially
processing it through semantic segmentation and semantic line detection models.
We also present Agroscapes, a pixel-level annotated dataset collected across
six different crops, captured from varying heights and angles. This ensures
that the framework trained on Agroscapes is generalizable across both ground
and aerial robotic platforms. Codes, models and dataset will be released at
\href{https://github.com/shivamkumarpanda/agronav}{github.com/shivamkumarpanda/agronav}.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 00:06:14 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Panda",
"Shivam K",
""
],
[
"Lee",
"Yongkyu",
""
],
[
"Jawed",
"M. Khalid",
""
]
] |
new_dataset
| 0.98428 |
2304.04358
|
Hongjin Qian
|
Hongjing Qian, Yutao Zhu, Zhicheng Dou, Haoqi Gu, Xinyu Zhang, Zheng
Liu, Ruofei Lai, Zhao Cao, Jian-Yun Nie and Ji-Rong Wen
|
WebBrain: Learning to Generate Factually Correct Articles for Queries by
Grounding on Large Web Corpus
|
Codes in https://github.com/qhjqhj00/WebBrain
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a new NLP task -- generating short factual
articles with references for queries by mining supporting evidence from the
Web. In this task, called WebBrain, the ultimate goal is to generate a fluent,
informative, and factually-correct short article (e.g., a Wikipedia article)
for a factual query unseen in Wikipedia. To enable experiments on WebBrain, we
construct a large-scale dataset WebBrain-Raw by extracting English Wikipedia
articles and their crawlable Wikipedia references. WebBrain-Raw is ten times
larger than the previous biggest peer dataset, which can greatly benefit the
research community. From WebBrain-Raw, we construct two task-specific datasets:
WebBrain-R and WebBrain-G, which are used to train in-domain retriever and
generator, respectively. Besides, we empirically analyze the performances of
the current state-of-the-art NLP techniques on WebBrain and introduce a new
framework ReGen, which enhances the generation factualness by improved evidence
retrieval and task-specific pre-training for generation. Experiment results
show that ReGen outperforms all baselines in both automatic and human
evaluations.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 02:55:48 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Qian",
"Hongjing",
""
],
[
"Zhu",
"Yutao",
""
],
[
"Dou",
"Zhicheng",
""
],
[
"Gu",
"Haoqi",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Liu",
"Zheng",
""
],
[
"Lai",
"Ruofei",
""
],
[
"Cao",
"Zhao",
""
],
[
"Nie",
"Jian-Yun",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
new_dataset
| 0.972805 |
2304.04387
|
Pengzhan Zhao
|
Pengzhan Zhao, Xiongfei Wu, Zhuo Li, Jianjun Zhao
|
QChecker: Detecting Bugs in Quantum Programs via Static Analysis
|
This paper will appear in the proceedings of the 4th
International Workshop on Quantum Software Engineering (Q-SE 2023) on May 14,
2023
| null | null | null |
cs.SE cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Static analysis is the process of analyzing software code without executing
the software. It can help find bugs and potential problems in software that may
only appear at runtime. Although many static analysis tools have been developed
for classical software, due to the nature of quantum programs, these existing
tools are unsuitable for analyzing quantum programs. This paper presents
QChecker, a static analysis tool that supports finding bugs in quantum programs
in Qiskit. QChecker consists of two main modules: a module for extracting
program information based on abstract syntax tree (AST), and a module for
detecting bugs based on patterns. We evaluate the performance of QChecker using
the Bugs4Q benchmark. The evaluation results show that QChecker can effectively
detect various bugs in quantum programs.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 05:15:34 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Zhao",
"Pengzhan",
""
],
[
"Wu",
"Xiongfei",
""
],
[
"Li",
"Zhuo",
""
],
[
"Zhao",
"Jianjun",
""
]
] |
new_dataset
| 0.999425 |
2304.04399
|
Shentong Mo
|
Shentong Mo, Jingfei Xia, Ihor Markevych
|
CAVL: Learning Contrastive and Adaptive Representations of Vision and
Language
| null | null | null | null |
cs.CV cs.AI cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual and linguistic pre-training aims to learn vision and language
representations together, which can be transferred to visual-linguistic
downstream tasks. However, there exists semantic confusion between language and
vision during the pre-training stage. Moreover, current pre-trained models tend
to take lots of computation resources for fine-tuning when transferred to
downstream tasks. In this work, we present a simple but effective approach for
learning Contrastive and Adaptive representations of Vision and Language,
namely CAVL. Specifically, we introduce a pair-wise contrastive loss to learn
alignments between the whole sentence and each image in the same batch during
the pre-training process. At the fine-tuning stage, we introduce two
lightweight adaptation networks to reduce model parameters and increase
training speed for saving computation resources. We evaluate our CAVL on six
main downstream tasks, including Visual Question Answering (VQA), Visual
Commonsense Reasoning (VCR), Natural Language for Visual Reasoning (NLVR),
Region-to-Phrase Grounding (RPG), Text-to-Image Retrieval (TIR), and Zero-shot
Text-to-Image Retrieval (ZS-TIR). Compared to baselines, we achieve superior
performance and reduce the fine-tuning time by a large margin (in particular,
76.17%). Extensive experiments and ablation studies demonstrate the efficiency
of contrastive pre-training and adaptive fine-tuning proposed in our CAVL.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 05:54:03 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Mo",
"Shentong",
""
],
[
"Xia",
"Jingfei",
""
],
[
"Markevych",
"Ihor",
""
]
] |
new_dataset
| 0.998866 |
2304.04411
|
Kazi Hassan Shakib
|
Kazi Hassan Shakib, Mizanur Rahman and Mhafuzul Islam
|
Quantum Cyber-Attack on Blockchain-based VANET
|
This paper consists of 10 pages with 7 figures. It has been submitted
to IEEE Internet of Things Journal
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
A blockchain-based Vehicular Ad-hoc Network (VANET) is widely considered a
secure communication architecture for a connected transportation system. With
the advent of quantum computing, there are concerns regarding the vulnerability
of this architecture to cyber-attacks. In this study, a potential threat is
investigated in a blockchain-based VANET, and a corresponding quantum
cyber-attack is developed. Specifically, a quantum impersonation attack using
Shor's algorithm is developed to break the Rivest-Shamir-Adleman (RSA)
encrypted digital signatures of the VANET and thus create a threat to the
trust-based blockchain scheme of the VANET. A blockchain-based VANET,
vehicle-to-everything (V2X) communication, and vehicular mobility are simulated
using OMNET++, the extended INET library, and vehicles-in-network simulation
(VEINS) along with simulation of urban mobility (SUMO), respectively. A
small-key RSA-based message encryption is implemented using IBM Qiskit, an
open-source quantum software development kit. The findings reveal that the
quantum cyber-attack, for example an impersonation attack, is able to
successfully break the trust chain of a blockchain-based VANET. This highlights
the need for a quantum-secured blockchain.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 06:46:33 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Shakib",
"Kazi Hassan",
""
],
[
"Rahman",
"Mizanur",
""
],
[
"Islam",
"Mhafuzul",
""
]
] |
new_dataset
| 0.99843 |
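The impersonation threat outlined in the abstract above boils down to recovering an RSA private key from the public key once the modulus can be factored. The sketch below is purely illustrative: classical trial division on a toy key stands in for Shor's algorithm (which performs the factoring step efficiently on a quantum computer), and the key values are textbook toy numbers, not anything from the paper.

```python
# Toy demonstration: a factored RSA modulus lets an attacker forge signatures.
from math import isqrt

def factor(n: int) -> tuple[int, int]:
    # Classical trial division stands in for Shor's algorithm here.
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("no non-trivial factor found")

def recover_private_exponent(n: int, e: int) -> int:
    p, q = factor(n)
    return pow(e, -1, (p - 1) * (q - 1))  # d = e^-1 mod phi(n)

n, e = 3233, 17                      # toy public key (p=61, q=53)
d = recover_private_exponent(n, e)
message = 65
forged_signature = pow(message, d, n)          # signed without the real key
assert pow(forged_signature, e, n) == message  # yet it verifies
```

With realistic 2048-bit moduli the factoring step is infeasible classically, which is exactly why the quantum threat model matters for trust-based schemes built on RSA signatures.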
2304.04437
|
Tobias Baumgartner
|
Tobias Baumgartner and Stefanie Klatt
|
Monocular 3D Human Pose Estimation for Sports Broadcasts using Partial
Sports Field Registration
|
accept at "9th International Workshop on Computer Vision in Sports
(CVsports) at CVPR 2023"
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The filming of sporting events projects and flattens the movement of athletes
in the world onto a 2D broadcast image. The pixel locations of joints in these
images can be detected with high validity. Recovering the actual 3D movement of
the limbs (kinematics) of the athletes requires lifting these 2D pixel
locations back into a third dimension, implying a certain scene geometry. The
well-known line markings of sports fields allow for the calibration of the
camera and for determining the actual geometry of the scene. Close-up shots of
athletes are required to extract detailed kinematics, which in turn obfuscates
the pertinent field markers for camera calibration. We suggest partial sports
field registration, which determines a set of scene-consistent camera
calibrations up to a single degree of freedom. Through joint optimization of 3D
pose estimation and camera calibration, we demonstrate the successful
extraction of 3D running kinematics on a 400m track. In this work, we combine
advances in 2D human pose estimation and camera calibration via partial sports
field registration to demonstrate an avenue for collecting valid large-scale
kinematic datasets. We generate a synthetic dataset of more than 10k images in
Unreal Engine 5 with different viewpoints, running styles, and body types, to
show the limitations of existing monocular 3D HPE methods. Synthetic data and
code are available at https://github.com/tobibaum/PartialSportsFieldReg_3DHPE.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 07:41:44 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Baumgartner",
"Tobias",
""
],
[
"Klatt",
"Stefanie",
""
]
] |
new_dataset
| 0.961658 |
2304.04508
|
Yu Wang
|
Yu Wang, Shuhui Bu, Lin Chen, Yifei Dong, Kun Li, Xuefeng Cao, Ke Li
|
HybridFusion: LiDAR and Vision Cross-Source Point Cloud Fusion
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, cross-source point cloud registration from different sensors has
become a significant research focus. However, traditional methods confront
challenges due to the varying density and structure of cross-source point
clouds. In order to solve these problems, we propose a cross-source point cloud
fusion algorithm called HybridFusion. It can register cross-source dense point
clouds from different viewing angles in large outdoor scenes. The entire
registration process is a coarse-to-fine procedure. First, the point cloud is
divided into small patches, and a matching patch set is selected based on
global descriptors and spatial distribution, which constitutes the coarse
matching process. To achieve fine matching, 2D registration is performed by
extracting 2D boundary points from patches, followed by 3D adjustment. Finally,
the results of multiple patch pose estimates are clustered and fused to
determine the final pose. The proposed approach is evaluated comprehensively
through qualitative and quantitative experiments. To assess the robustness of
cross-source point cloud registration, the proposed method is compared with the
generalized iterative closest point method. Furthermore, a metric
for describing the degree of point cloud filling is proposed. The experimental
results demonstrate that our approach achieves state-of-the-art performance in
cross-source point cloud registration.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 10:54:54 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Bu",
"Shuhui",
""
],
[
"Chen",
"Lin",
""
],
[
"Dong",
"Yifei",
""
],
[
"Li",
"Kun",
""
],
[
"Cao",
"Xuefeng",
""
],
[
"Li",
"Ke",
""
]
] |
new_dataset
| 0.957326 |
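Coarse-to-fine registration pipelines like the one summarized above ultimately reduce matched patches to a rigid pose estimate. The sketch below is not HybridFusion itself: it shows only the standard Kabsch/SVD least-squares alignment that recovers a rotation and translation from already-matched point pairs, with synthetic data invented for the example.

```python
# Kabsch rigid alignment of matched 3D point sets (illustrative building block).
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Least-squares R, t such that dst ~ src @ R.T + t, for matched (N, 3) points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

theta = 0.7
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
src = np.random.default_rng(2).normal(size=(100, 3))
dst = src @ true_R.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
assert np.allclose(src @ R.T + t, dst, atol=1e-6)
```

In a full pipeline, such per-patch pose estimates would then be clustered and fused, as the abstract describes, so that outlier matches do not dominate the final pose.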
2304.04514
|
Jianhua Han
|
Lewei Yao, Jianhua Han, Xiaodan Liang, Dan Xu, Wei Zhang, Zhenguo Li,
Hang Xu
|
DetCLIPv2: Scalable Open-Vocabulary Object Detection Pre-training via
Word-Region Alignment
|
Accepted to CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents DetCLIPv2, an efficient and scalable training framework
that incorporates large-scale image-text pairs to achieve open-vocabulary
object detection (OVD). Unlike previous OVD frameworks that typically rely on a
pre-trained vision-language model (e.g., CLIP) or exploit image-text pairs via
a pseudo labeling process, DetCLIPv2 directly learns the fine-grained
word-region alignment from massive image-text pairs in an end-to-end manner. To
accomplish this, we employ a maximum word-region similarity between region
proposals and textual words to guide the contrastive objective. To enable the
model to gain localization capability while learning broad concepts, DetCLIPv2
is trained with a hybrid supervision from detection, grounding and image-text
pair data under a unified data formulation. By jointly training with an
alternating scheme and adopting low-resolution input for image-text pairs,
DetCLIPv2 exploits image-text pair data efficiently and effectively: DetCLIPv2
utilizes 13X more image-text pairs than DetCLIP with a similar training time
and improves performance. With 13M image-text pairs for pre-training, DetCLIPv2
demonstrates superior open-vocabulary detection performance, e.g., DetCLIPv2
with Swin-T backbone achieves 40.4% zero-shot AP on the LVIS benchmark, which
outperforms previous works GLIP/GLIPv2/DetCLIP by 14.4/11.4/4.5% AP,
respectively, and even beats its fully-supervised counterpart by a large
margin.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 11:08:15 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Yao",
"Lewei",
""
],
[
"Han",
"Jianhua",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Xu",
"Dan",
""
],
[
"Zhang",
"Wei",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Xu",
"Hang",
""
]
] |
new_dataset
| 0.99801 |
2304.04523
|
Junnan Jiang
|
Yuyang Tu, Junnan Jiang, Shuang Li, Norman Hendrich, Miao Li, Jianwei
Zhang
|
PoseFusion: Robust Object-in-Hand Pose Estimation with SelectLSTM
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate estimation of the relative pose between an object and a robot hand
is critical for many manipulation tasks. However, most of the existing
object-in-hand pose datasets use two-finger grippers and also assume that the
object remains fixed in the hand without any relative movements, which is not
representative of real-world scenarios. To address this issue, a 6D
object-in-hand pose dataset is proposed using a teleoperation method with an
anthropomorphic Shadow Dexterous hand. Our dataset comprises RGB-D images,
proprioception and tactile data, covering diverse grasping poses, finger
contact states, and object occlusions. To overcome the significant hand
occlusion and limited tactile sensor contact in real-world scenarios, we
propose PoseFusion, a hybrid multi-modal fusion approach that integrates the
information from visual and tactile perception channels. PoseFusion generates
three candidate object poses from three estimators (tactile only, visual only,
and visuo-tactile fusion), which are then filtered by a SelectLSTM network to
select the optimal pose, avoiding inferior fusion poses resulting from modality
collapse. Extensive experiments demonstrate the robustness and advantages of
our framework. All data and codes are available on the project website:
https://elevenjiang1.github.io/ObjectInHand-Dataset/
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 11:38:52 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Tu",
"Yuyang",
""
],
[
"Jiang",
"Junnan",
""
],
[
"Li",
"Shuang",
""
],
[
"Hendrich",
"Norman",
""
],
[
"Li",
"Miao",
""
],
[
"Zhang",
"Jianwei",
""
]
] |
new_dataset
| 0.99977 |
2304.04540
|
Zhaowen Li
|
Zhaowen Li, Xu Zhao, Peigeng Ding, Zongxin Gao, Yuting Yang, Ming
Tang, Jinqiao Wang
|
FreConv: Frequency Branch-and-Integration Convolutional Networks
|
Accepted by ICME2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research indicates that utilizing the frequency information of input
data can enhance the performance of networks. However, the existing popular
convolutional structure is not designed specifically for utilizing the
frequency information contained in datasets. In this paper, we propose a novel
and effective module, named FreConv (frequency branch-and-integration
convolution), to replace the vanilla convolution. FreConv adopts a dual-branch
architecture to extract and integrate high- and low-frequency information. In
the high-frequency branch, a derivative-filter-like architecture is designed to
extract the high-frequency information while a light extractor is employed in
the low-frequency branch because the low-frequency information is usually
redundant. FreConv is able to exploit the frequency information of input data
in a more reasonable way to enhance feature representation ability and reduce
the memory and computational cost significantly. Without any bells and
whistles, experimental results on various tasks demonstrate that
FreConv-equipped networks consistently outperform state-of-the-art baselines.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 12:24:14 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Li",
"Zhaowen",
""
],
[
"Zhao",
"Xu",
""
],
[
"Ding",
"Peigeng",
""
],
[
"Gao",
"Zongxin",
""
],
[
"Yang",
"Yuting",
""
],
[
"Tang",
"Ming",
""
],
[
"Wang",
"Jinqiao",
""
]
] |
new_dataset
| 0.972879 |
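The dual-branch idea in the FreConv abstract above rests on separating low- and high-frequency content. The sketch below is not the FreConv module: it only shows a toy decomposition of a feature map into a blurred low-frequency part and a residual high-frequency part, the kind of split each branch would then process; the array size and sigma are arbitrary.

```python
# Toy low/high frequency split of a 2D feature map (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split(x: np.ndarray, sigma: float = 2.0) -> tuple[np.ndarray, np.ndarray]:
    """Return (low, high) such that low + high reconstructs x."""
    low = gaussian_filter(x, sigma=sigma)
    return low, x - low

feature_map = np.random.default_rng(3).normal(size=(64, 64)).astype(np.float32)
low, high = frequency_split(feature_map)
assert np.allclose(low + high, feature_map, atol=1e-5)
```

In the paper's design, the high-frequency branch uses a derivative-filter-like extractor while the low-frequency branch stays lightweight, since the low-frequency information is largely redundant.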
2304.04612
|
Hiroyuki Ootomo
|
Hiroyuki Ootomo and Rio Yokota
|
Mixed-Precision Random Projection for RandNLA on Tensor Cores
|
PASC'23
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Random projection can reduce the dimension of data while capturing its
structure and is a fundamental tool for machine learning, signal processing,
and information retrieval, which deal with a large amount of data today.
RandNLA (Randomized Numerical Linear Algebra) leverages random projection to
reduce the computational complexity of low-rank decomposition of tensors and
solve least-square problems. While the computation of the random projection is
a simple matrix multiplication, its asymptotic computational complexity is
typically larger than other operations in a RandNLA algorithm. Therefore,
various studies propose methods for reducing its computational complexity. We
propose a fast mixed-precision random projection method on NVIDIA GPUs using
Tensor Cores for single-precision tensors. We exploit the fact that the random
matrix requires less precision, and develop a highly optimized matrix
multiplication between FP32 and FP16 matrices -- SHGEMM (Single and
Half-precision GEMM) -- on Tensor Cores, where the random matrix is stored in
FP16. Our method can compute Randomized SVD 1.28 times faster and Random
projection high order SVD 1.75 times faster than baseline single-precision
implementations while maintaining accuracy.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 14:27:14 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Ootomo",
"Hiroyuki",
""
],
[
"Yokota",
"Rio",
""
]
] |
new_dataset
| 0.982215 |
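The abstract above concerns the matrix-multiplication step of random projection in RandNLA. The sketch below is not the SHGEMM kernel or the authors' GPU code: it is a plain NumPy randomized SVD in which the Gaussian sketching matrix is stored in half precision to mimic the FP32 x FP16 mixed-precision projection, with matrix sizes chosen arbitrarily.

```python
# Randomized SVD with an FP16-stored Gaussian sketch (illustrative only).
import numpy as np

def randomized_svd_fp16_sketch(A: np.ndarray, rank: int, oversample: int = 10):
    k = rank + oversample
    # Random matrix stored in half precision (it tolerates low precision).
    omega = np.random.default_rng(0).standard_normal((A.shape[1], k)).astype(np.float16)
    Y = A @ omega.astype(np.float32)        # the random projection step
    Q, _ = np.linalg.qr(Y)                  # orthonormal basis for the range
    B = Q.T @ A                             # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

A = np.random.default_rng(1).standard_normal((500, 200)).astype(np.float32)
U, s, Vt = randomized_svd_fp16_sketch(A, rank=20)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```

On GPUs the payoff comes from performing the A @ omega product on Tensor Cores; the NumPy version only illustrates that storing the random matrix in FP16 is a reasonable trade-off for the projection step.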
2304.04617
|
Silvio Giancola
|
Jan Held, Anthony Cioppa, Silvio Giancola, Abdullah Hamdi, Bernard
Ghanem, Marc Van Droogenbroeck
|
VARS: Video Assistant Referee System for Automated Soccer Decision
Making from Multiple Views
|
Accepted at CVSports'23
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The Video Assistant Referee (VAR) has revolutionized association football,
enabling referees to review incidents on the pitch, make informed decisions,
and ensure fairness. However, due to the lack of referees in many countries and
the high cost of the VAR infrastructure, only professional leagues can benefit
from it. In this paper, we propose a Video Assistant Referee System (VARS) that
can automate soccer decision-making. VARS leverages the latest findings in
multi-view video analysis, to provide real-time feedback to the referee, and
help them make informed decisions that can impact the outcome of a game. To
validate VARS, we introduce SoccerNet-MVFoul, a novel video dataset of soccer
fouls from multiple camera views, annotated with extensive foul descriptions by
a professional soccer referee, and we benchmark our VARS to automatically
recognize the characteristics of these fouls. We believe that VARS has the
potential to revolutionize soccer refereeing and take the game to new heights
of fairness and accuracy across all levels of professional and amateur
federations.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 14:33:05 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Held",
"Jan",
""
],
[
"Cioppa",
"Anthony",
""
],
[
"Giancola",
"Silvio",
""
],
[
"Hamdi",
"Abdullah",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Van Droogenbroeck",
"Marc",
""
]
] |
new_dataset
| 0.992197 |
2304.04642
|
Noah Bertram
|
Noah Bertram and Alex Levinson and Justin Hsu
|
Cutting the Cake: A Language for Fair Division
|
31 pages, 15 figures, PLDI 2023
| null |
10.1145/3591293
| null |
cs.PL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The fair division literature in economics considers how to divide resources
between multiple agents such that the allocation is envy-free: each agent
receives their favorite piece. Researchers have developed a variety of fair
division protocols for the most standard setting, where the agents want to
split a single item; however, the protocols are highly intricate and the proofs
of envy-freeness involve tedious case analysis.
We propose Slice, a domain specific language for fair-division. Programs in
our language can be converted to logical formulas encoding envy-freeness and
other target properties. Then, the constraints can be dispatched to automated
solvers. We prove that our constraint generation procedure is sound and
complete. We also report on a prototype implementation of Slice, which we have
used to automatically check envy-freeness for several protocols from the fair
division literature.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 15:14:08 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Bertram",
"Noah",
""
],
[
"Levinson",
"Alex",
""
],
[
"Hsu",
"Justin",
""
]
] |
new_dataset
| 0.990882 |
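Envy-freeness, the property Slice's generated constraints encode, is easy to state programmatically. The sketch below is only a simplified stand-in: it checks envy-freeness by direct enumeration for a discrete-goods allocation with additive valuations, whereas the paper targets divisible cake-cutting and verifies the property symbolically with automated solvers; the valuations are invented for the example.

```python
# Brute-force envy-freeness check for a discrete allocation (illustrative only).
def is_envy_free(valuations: list[list[float]], allocation: list[set[int]]) -> bool:
    """valuations[i][g]: agent i's value for good g; allocation[i]: i's bundle."""
    def value(i: int, bundle: set[int]) -> float:
        return sum(valuations[i][g] for g in bundle)
    return all(value(i, allocation[i]) >= value(i, allocation[j])
               for i in range(len(allocation))
               for j in range(len(allocation)))

valuations = [[5.0, 1.0, 3.0],   # agent 0
              [2.0, 6.0, 1.0]]   # agent 1
print(is_envy_free(valuations, [{0, 2}, {1}]))  # True: each prefers its own bundle
```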
2304.04687
|
Kenji Hata
|
Norberto Adrian Goussies, Kenji Hata, Shruthi Prabhakara, Abhishek
Amit, Tony Aube, Carl Cepress, Diana Chang, Li-Te Cheng, Horia Stefan
Ciurdar, Mike Cleron, Chelsey Fleming, Ashwin Ganti, Divyansh Garg, Niloofar
Gheissari, Petra Luna Grutzik, David Hendon, Daniel Iglesia, Jin Kim, Stuart
Kyle, Chris LaRosa, Roman Lewkow, Peter F McDermott, Chris Melancon, Paru
Nackeeran, Neal Norwitz, Ali Rahimi, Brett Rampata, Carlos Sobrinho, George
Sung, Natalie Zauhar, Palash Nandy
|
Learning to Detect Touches on Cluttered Tables
| null | null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel self-contained camera-projector tabletop system with a
lamp form-factor that brings digital intelligence to our tables. We propose a
real-time, on-device, learning-based touch detection algorithm that makes any
tabletop interactive. The top-down configuration and learning-based algorithm
make our method robust to the presence of clutter, a main limitation of
existing camera-projector tabletop systems. Our research prototype enables a
set of experiences that combine hand interactions and objects present on the
table. A video can be found at https://youtu.be/hElC_c25Fg8.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 16:06:34 GMT"
}
] | 2023-04-11T00:00:00 |
[
[
"Goussies",
"Norberto Adrian",
""
],
[
"Hata",
"Kenji",
""
],
[
"Prabhakara",
"Shruthi",
""
],
[
"Amit",
"Abhishek",
""
],
[
"Aube",
"Tony",
""
],
[
"Cepress",
"Carl",
""
],
[
"Chang",
"Diana",
""
],
[
"Cheng",
"Li-Te",
""
],
[
"Ciurdar",
"Horia Stefan",
""
],
[
"Cleron",
"Mike",
""
],
[
"Fleming",
"Chelsey",
""
],
[
"Ganti",
"Ashwin",
""
],
[
"Garg",
"Divyansh",
""
],
[
"Gheissari",
"Niloofar",
""
],
[
"Grutzik",
"Petra Luna",
""
],
[
"Hendon",
"David",
""
],
[
"Iglesia",
"Daniel",
""
],
[
"Kim",
"Jin",
""
],
[
"Kyle",
"Stuart",
""
],
[
"LaRosa",
"Chris",
""
],
[
"Lewkow",
"Roman",
""
],
[
"McDermott",
"Peter F",
""
],
[
"Melancon",
"Chris",
""
],
[
"Nackeeran",
"Paru",
""
],
[
"Norwitz",
"Neal",
""
],
[
"Rahimi",
"Ali",
""
],
[
"Rampata",
"Brett",
""
],
[
"Sobrinho",
"Carlos",
""
],
[
"Sung",
"George",
""
],
[
"Zauhar",
"Natalie",
""
],
[
"Nandy",
"Palash",
""
]
] |
new_dataset
| 0.989476 |
2103.10761
|
Mikhail Gorbunov-Posadov
|
Mikhail Mikhailovich Gorbunov-Posadov
|
Alive publication
|
24 pages, 4 figures
|
Publications, 2023, volume 11, issue 2, article 24
|
10.3390/publications11020024
| null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
An alive publication is a new genre for presenting the results of scientific
research: the scientific work is published online and then constantly developed
and improved by its author. Serious errors and typos are
no longer fatal, nor do they haunt the author for the rest of his or her life.
The reader of an alive publication knows that the author is constantly
monitoring changes occurring in this branch of science. Alive publication faces
the inertia of scientific publishing traditions and, in particular, traditional
bibliometrics. Unfortunately, at present, the author who supports an alive
publication is dramatically losing out on many generally accepted bibliometric
indicators. The alive publication encourages the development of the
bibliography apparatus. Each bibliographic reference will soon have to contain
attributes that are important to the reader and updated on the fly, such as
attendance, number of external links, and date of the last revision. It is to be expected
that as the alive publication spreads across the scientific world, the
author's concern for the publication's evolution will become like a parent's
care for the development of a child. The Internet will be filled with
scientific publications that do not lose their relevance over time.
|
[
{
"version": "v1",
"created": "Fri, 19 Mar 2021 12:16:34 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 05:01:18 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Gorbunov-Posadov",
"Mikhail Mikhailovich",
""
]
] |
new_dataset
| 0.999209 |
2111.08644
|
Radu Tudor Ionescu
|
Andra Acsintoae, Andrei Florescu, Mariana-Iuliana Georgescu, Tudor
Mare, Paul Sumedrea, Radu Tudor Ionescu, Fahad Shahbaz Khan, Mubarak Shah
|
UBnormal: New Benchmark for Supervised Open-Set Video Anomaly Detection
|
Accepted at CVPR 2022. Paper + supplementary (15 pages, 9 figures)
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting abnormal events in video is commonly framed as a one-class
classification task, where training videos contain only normal events, while
test videos encompass both normal and abnormal events. In this scenario,
anomaly detection is an open-set problem. However, some studies assimilate
anomaly detection to action recognition. This is a closed-set scenario that
fails to test the capability of systems at detecting new anomaly types. To this
end, we propose UBnormal, a new supervised open-set benchmark composed of
multiple virtual scenes for video anomaly detection. Unlike existing data sets,
we introduce abnormal events annotated at the pixel level at training time, for
the first time enabling the use of fully-supervised learning methods for
abnormal event detection. To preserve the typical open-set formulation, we make
sure to include disjoint sets of anomaly types in our training and test
collections of videos. To our knowledge, UBnormal is the first video anomaly
detection benchmark to allow a fair head-to-head comparison between one-class
open-set models and supervised closed-set models, as shown in our experiments.
Moreover, we provide empirical evidence showing that UBnormal can enhance the
performance of a state-of-the-art anomaly detection framework on two prominent
data sets, Avenue and ShanghaiTech. Our benchmark is freely available at
https://github.com/lilygeorgescu/UBnormal.
|
[
{
"version": "v1",
"created": "Tue, 16 Nov 2021 17:28:46 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 14:06:40 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Apr 2023 12:31:31 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Acsintoae",
"Andra",
""
],
[
"Florescu",
"Andrei",
""
],
[
"Georgescu",
"Mariana-Iuliana",
""
],
[
"Mare",
"Tudor",
""
],
[
"Sumedrea",
"Paul",
""
],
[
"Ionescu",
"Radu Tudor",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Shah",
"Mubarak",
""
]
] |
new_dataset
| 0.999732 |
2203.11987
|
Ryan Grainger
|
Ryan Grainger, Thomas Paniagua, Xi Song, Naresh Cuntoor, Mun Wai Lee,
Tianfu Wu
|
PaCa-ViT: Learning Patch-to-Cluster Attention in Vision Transformers
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformers (ViTs) are built on the assumption of treating image
patches as ``visual tokens'' and learning patch-to-patch attention. The patch
embedding based tokenizer has a semantic gap with respect to its counterpart,
the textual tokenizer. The patch-to-patch attention suffers from the quadratic
complexity issue, and also makes it non-trivial to explain learned ViTs. To
address these issues in ViT, this paper proposes to learn Patch-to-Cluster
attention (PaCa) in ViT. Queries in our PaCa-ViT start with patches, while
keys and values are directly based on clustering (with a predefined small
number of clusters). The clusters are learned end-to-end, leading to better
tokenizers and inducing joint clustering-for-attention and
attention-for-clustering for better and interpretable models. The quadratic
complexity is relaxed to linear complexity. The proposed PaCa module is used in
designing efficient and interpretable ViT backbones and semantic segmentation
head networks. In experiments, the proposed methods are tested on ImageNet-1k
image classification, MS-COCO object detection and instance segmentation and
MIT-ADE20k semantic segmentation. Compared with the prior art, it obtains
better performance in all the three benchmarks than the SWin and the PVTs by
significant margins in ImageNet-1k and MIT-ADE20k. It is also significantly
more efficient than PVT models in MS-COCO and MIT-ADE20k due to the linear
complexity. The learned clusters are semantically meaningful. Code and model
checkpoints are available at https://github.com/iVMCL/PaCaViT.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 18:28:02 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 00:46:43 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Grainger",
"Ryan",
""
],
[
"Paniagua",
"Thomas",
""
],
[
"Song",
"Xi",
""
],
[
"Cuntoor",
"Naresh",
""
],
[
"Lee",
"Mun Wai",
""
],
[
"Wu",
"Tianfu",
""
]
] |
new_dataset
| 0.999773 |
2206.12036
|
Zitao Liu
|
Qiongqiong Liu, Yaying Huang, Zitao Liu, Shuyan Huang, Jiahao Chen,
Xiangyu Zhao, Guimin Lin, Yuyu Zhou, Weiqi Luo
|
SC-Ques: A Sentence Completion Question Dataset for English as a Second
Language Learners
|
Accepted in ITS'2023: The 19th International Conference on
Intelligent Tutoring Systems(ITS), 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Sentence completion (SC) questions present a sentence with one or more blanks
that need to be filled in, with three to five possible words or phrases offered as options.
SC questions are widely used for students learning English as a Second Language
(ESL). In this paper, we present a large-scale SC dataset, \textsc{SC-Ques},
which is made up of 289,148 ESL SC questions from real-world standardized
English examinations. Furthermore, we build a comprehensive benchmark of
automatically solving the SC questions by training the large-scale pre-trained
language models on the proposed \textsc{SC-Ques} dataset. We conduct a detailed
analysis of the baseline models' performance, limitations, and trade-offs. The
data and our code are available for research purposes from:
\url{https://github.com/ai4ed/SC-Ques}.
|
[
{
"version": "v1",
"created": "Fri, 24 Jun 2022 02:17:13 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 11:55:04 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Liu",
"Qiongqiong",
""
],
[
"Huang",
"Yaying",
""
],
[
"Liu",
"Zitao",
""
],
[
"Huang",
"Shuyan",
""
],
[
"Chen",
"Jiahao",
""
],
[
"Zhao",
"Xiangyu",
""
],
[
"Lin",
"Guimin",
""
],
[
"Zhou",
"Yuyu",
""
],
[
"Luo",
"Weiqi",
""
]
] |
new_dataset
| 0.999838 |
2209.00727
|
Xin-Yi Tong
|
Xin-Yi Tong, Gui-Song Xia, Xiao Xiang Zhu
|
Enabling Country-Scale Land Cover Mapping with Meter-Resolution
Satellite Imagery
| null | null |
10.1016/j.isprsjprs.2022.12.011
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-resolution satellite images can provide abundant, detailed spatial
information for land cover classification, which is particularly important for
studying the complicated built environment. However, due to the complex land
cover patterns, the costly training sample collection, and the severe
distribution shifts of satellite imagery, few studies have applied
high-resolution images to land cover mapping in detailed categories at large
scale. To fill this gap, we present a large-scale land cover dataset,
Five-Billion-Pixels. It contains more than 5 billion labeled pixels of 150
high-resolution Gaofen-2 (4 m) satellite images, annotated in a 24-category
system covering artificial-constructed, agricultural, and natural classes. In
addition, we propose a deep-learning-based unsupervised domain adaptation
approach that can transfer classification models trained on labeled dataset
(referred to as the source domain) to unlabeled data (referred to as the target
domain) for large-scale land cover mapping. Specifically, we introduce an
end-to-end Siamese network employing dynamic pseudo-label assignment and class
balancing strategy to perform adaptive domain joint learning. To validate the
generalizability of our dataset and the proposed approach across different
sensors and different geographical regions, we carry out land cover mapping on
five megacities in China and six cities in five other Asian countries, respectively
using PlanetScope (3 m), Gaofen-1 (8 m), and Sentinel-2 (10 m) satellite
images. Over a total study area of 60,000 square kilometers, the experiments
show promising results even though the input images are entirely unlabeled. The
proposed approach, trained with the Five-Billion-Pixels dataset, enables
high-quality and detailed land cover mapping across the whole country of China
and some other Asian countries at meter-resolution.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 21:00:23 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 13:22:01 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Tong",
"Xin-Yi",
""
],
[
"Xia",
"Gui-Song",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.997777 |
2210.08398
|
Ruofan Liang
|
Ruofan Liang, Jiahao Zhang, Haoda Li, Chen Yang, Yushi Guan, Nandita
Vijaykumar
|
SPIDR: SDF-based Neural Point Fields for Illumination and Deformation
|
Project page: https://nexuslrf.github.io/SPIDR_webpage/
| null | null | null |
cs.CV cs.AI cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Neural radiance fields (NeRFs) have recently emerged as a promising approach
for 3D reconstruction and novel view synthesis. However, NeRF-based methods
encode shape, reflectance, and illumination implicitly and this makes it
challenging for users to manipulate these properties in the rendered images
explicitly. Existing approaches only enable limited editing of the scene and
deformation of the geometry. Furthermore, no existing work enables accurate
scene illumination after object deformation. In this work, we introduce SPIDR,
a new hybrid neural SDF representation. SPIDR combines point cloud and neural
implicit representations to enable the reconstruction of higher quality object
surfaces for geometry deformation and lighting estimation. To more accurately capture
environment illumination for scene relighting, we propose a novel neural
implicit model to learn environment light. To enable more accurate illumination
updates after deformation, we use the shadow mapping technique to approximate
the light visibility updates caused by geometry editing. We demonstrate the
effectiveness of SPIDR in enabling high quality geometry editing with more
accurate updates to the illumination of the scene.
|
[
{
"version": "v1",
"created": "Sat, 15 Oct 2022 23:34:53 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 17:24:16 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Apr 2023 05:42:33 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Liang",
"Ruofan",
""
],
[
"Zhang",
"Jiahao",
""
],
[
"Li",
"Haoda",
""
],
[
"Yang",
"Chen",
""
],
[
"Guan",
"Yushi",
""
],
[
"Vijaykumar",
"Nandita",
""
]
] |
new_dataset
| 0.996865 |
2211.01562
|
Peifeng Wang
|
Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren
|
PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
|
17 pages, 5 figures. Accepted to ICLR 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural language models (LMs) have achieved impressive results on various
language-based reasoning tasks by utilizing latent knowledge encoded in their
own pretrained parameters. To make this reasoning process more explicit, recent
works retrieve a rationalizing LM's internal knowledge by training or prompting
it to generate free-text rationales, which can be used to guide task
predictions made by either the same LM or a separate reasoning LM. However,
rationalizing LMs require expensive rationale annotation and/or computation,
without any assurance that their generated rationales improve LM task
performance or faithfully reflect LM decision-making. In this paper, we propose
PINTO, an LM pipeline that rationalizes via prompt-based learning, and learns
to faithfully reason over rationales via counterfactual regularization. First,
PINTO maps out a suitable reasoning process for the task input by prompting a
frozen rationalizing LM to generate a free-text rationale. Second, PINTO's
reasoning LM is fine-tuned to solve the task using the generated rationale as
context, while regularized to output less confident predictions when the
rationale is perturbed. Across four datasets, we show that PINTO significantly
improves the generalization ability of the reasoning LM, yielding higher
performance on both in-distribution and out-of-distribution test sets. Also, we
find that PINTO's rationales are more faithful to its task predictions than
those generated by competitive baselines.
|
[
{
"version": "v1",
"created": "Thu, 3 Nov 2022 02:55:54 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 23:54:15 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2023 23:49:35 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Wang",
"Peifeng",
""
],
[
"Chan",
"Aaron",
""
],
[
"Ilievski",
"Filip",
""
],
[
"Chen",
"Muhao",
""
],
[
"Ren",
"Xiang",
""
]
] |
new_dataset
| 0.996741 |
2302.14548
|
Lars Reimann
|
Lars Reimann, G\"unter Kniesel-W\"unsche
|
Safe-DS: A Domain Specific Language to Make Data Science Safe
|
Accepted for the NIER Track of the 45th International Conference on
Software Engineering (ICSE 2023)
| null | null | null |
cs.SE cs.LG cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the long runtime of Data Science (DS) pipelines, even small
programming mistakes can be very costly, if they are not detected statically.
However, even basic static type checking of DS pipelines is difficult because
most are written in Python. Static typing is available in Python only via
external linters. These require static type annotations for parameters or
results of functions, which many DS libraries do not provide. In this paper, we
show how the wealth of Python DS libraries can be used in a statically safe way
via Safe-DS, a domain specific language (DSL) for DS. Safe-DS catches
conventional type errors plus errors related to range restrictions, data
manipulation, and call order of functions, going well beyond the abilities of
current Python linters. Python libraries are integrated into Safe-DS via a stub
language for specifying the interface of its declarations, and an API-Editor
that is able to extract type information from the code and documentation of
Python libraries, and automatically generate suitable stubs.
Moreover, Safe-DS complements textual DS pipelines with a graphical
representation that eases safe development by preventing syntax errors. The
seamless synchronization of textual and graphic view lets developers always
choose the one best suited for their skills and current task. We think that
Safe-DS can make DS development easier, faster, and more reliable,
significantly reducing development costs.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 13:14:07 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 10:05:42 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Reimann",
"Lars",
""
],
[
"Kniesel-Wünsche",
"Günter",
""
]
] |
new_dataset
| 0.988293 |
2303.12060
|
Jingyang Lin
|
Jingyang Lin, Hang Hua, Ming Chen, Yikang Li, Jenhao Hsiao, Chiuman
Ho, Jiebo Luo
|
VideoXum: Cross-modal Visual and Textural Summarization of Videos
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video summarization aims to distill the most important information from a
source video to produce either an abridged clip or a textual narrative.
Traditionally, different methods have been proposed depending on whether the
output is a video or text, thus ignoring the correlation between the two
semantically related tasks of visual summarization and textual summarization.
We propose a new joint video and text summarization task. The goal is to
generate both a shortened video clip along with the corresponding textual
summary from a long video, collectively referred to as a cross-modal summary.
The generated shortened video clip and text narratives should be semantically
well aligned. To this end, we first build a large-scale human-annotated dataset
-- VideoXum (X refers to different modalities). The dataset is reannotated
based on ActivityNet. After we filter out the videos that do not meet the
length requirements, 14,001 long videos remain in our new dataset. Each video
in our reannotated dataset has human-annotated video summaries and the
corresponding narrative summaries. We then design a novel end-to-end model --
VTSUM-BILP to address the challenges of our proposed task. Moreover, we propose
a new metric called VT-CLIPScore to help evaluate the semantic consistency of
cross-modality summary. The proposed model achieves promising performance on
this new task and establishes a benchmark for future research.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 17:51:23 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 18:48:06 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Lin",
"Jingyang",
""
],
[
"Hua",
"Hang",
""
],
[
"Chen",
"Ming",
""
],
[
"Li",
"Yikang",
""
],
[
"Hsiao",
"Jenhao",
""
],
[
"Ho",
"Chiuman",
""
],
[
"Luo",
"Jiebo",
""
]
] |
new_dataset
| 0.962806 |
2304.00571
|
Qiangqiang Wu
|
Qiangqiang Wu and Tianyu Yang and Ziquan Liu and Baoyuan Wu and Ying
Shan and Antoni B. Chan
|
DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking
Tasks
|
CVPR 2023; V2: fixed typos in Table-2
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study masked autoencoder (MAE) pretraining on videos for
matching-based downstream tasks, including visual object tracking (VOT) and
video object segmentation (VOS). A simple extension of MAE is to randomly mask
out frame patches in videos and reconstruct the frame pixels. However, we find
that this simple baseline heavily relies on spatial cues while ignoring
temporal relations for frame reconstruction, thus leading to sub-optimal
temporal matching representations for VOT and VOS. To alleviate this problem,
we propose DropMAE, which adaptively performs spatial-attention dropout in the
frame reconstruction to facilitate temporal correspondence learning in videos.
We show that our DropMAE is a strong and efficient temporal matching learner,
which achieves better finetuning results on matching-based tasks than the
ImageNet-based MAE with 2X faster pre-training speed. Moreover, we also find
that motion diversity in pre-training videos is more important than scene
diversity for improving the performance on VOT and VOS. Our pre-trained DropMAE
model can be directly loaded in existing ViT-based trackers for fine-tuning
without further modifications. Notably, DropMAE sets new state-of-the-art
performance on 8 out of 9 highly competitive video tracking and segmentation
datasets. Our code and pre-trained models are available at
https://github.com/jimmy-dq/DropMAE.git.
|
[
{
"version": "v1",
"created": "Sun, 2 Apr 2023 16:40:42 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 02:55:28 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Wu",
"Qiangqiang",
""
],
[
"Yang",
"Tianyu",
""
],
[
"Liu",
"Ziquan",
""
],
[
"Wu",
"Baoyuan",
""
],
[
"Shan",
"Ying",
""
],
[
"Chan",
"Antoni B.",
""
]
] |
new_dataset
| 0.963012 |
2304.01403
|
Zihan Zhang
|
Zeyu Guo and Zihan Zhang
|
Randomly Punctured Reed-Solomon Codes Achieve the List Decoding Capacity
over Polynomial-Size Alphabets
| null | null | null | null |
cs.IT cs.DS math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper shows that, with high probability, randomly punctured Reed-Solomon
codes over fields of polynomial size achieve the list decoding capacity. More
specifically, we prove that for any $\epsilon>0$ and $R\in (0,1)$, with high
probability, randomly punctured Reed-Solomon codes of block length $n$ and rate
$R$ are $\left(1-R-\epsilon, O({1}/{\epsilon})\right)$ list decodable over
alphabets of size at least $2^{\mathrm{poly}(1/\epsilon)}n^2$. This extends the
recent breakthrough of Brakensiek, Gopi, and Makam (STOC 2023) that randomly
punctured Reed-Solomon codes over fields of exponential size attain the
generalized Singleton bound of Shangguan and Tamo (STOC 2020).
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 22:35:59 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 05:24:48 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Guo",
"Zeyu",
""
],
[
"Zhang",
"Zihan",
""
]
] |
new_dataset
| 0.979884 |
2304.02906
|
Christos Koutlis
|
Christos Koutlis, Manos Schinas, Symeon Papadopoulos
|
MemeFier: Dual-stage Modality Fusion for Image Meme Classification
|
8 pages, 2 figures, ICMR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Hate speech is a societal problem that has significantly grown through the
Internet. New forms of digital content such as image memes have given rise to the
spread of hate using multimodal means, which is far more difficult to analyse and
detect compared to the unimodal case. Accurate automatic processing, analysis
and understanding of this kind of content will facilitate the endeavor of
hindering hate speech proliferation through the digital world. To this end, we
propose MemeFier, a deep learning-based architecture for fine-grained
classification of Internet image memes, utilizing a dual-stage modality fusion
module. The first fusion stage produces feature vectors containing modality
alignment information that captures non-trivial connections between the text
and image of a meme. The second fusion stage leverages the power of a
Transformer encoder to learn inter-modality correlations at the token level and
yield an informative representation. Additionally, we consider external
knowledge as an additional input, and background image caption supervision as a
regularizing component. Extensive experiments on three widely adopted
benchmarks, i.e., Facebook Hateful Memes, Memotion7k and MultiOFF, indicate
that our approach competes with, and in some cases surpasses, the state of the art. Our
code is available on https://github.com/ckoutlis/memefier.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 07:36:52 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 06:57:42 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Koutlis",
"Christos",
""
],
[
"Schinas",
"Manos",
""
],
[
"Papadopoulos",
"Symeon",
""
]
] |
new_dataset
| 0.971254 |
2304.03047
|
Dong An
|
Dong An, Hanqing Wang, Wenguan Wang, Zun Wang, Yan Huang, Keji He,
Liang Wang
|
ETPNav: Evolving Topological Planning for Vision-Language Navigation in
Continuous Environments
|
Project page: https://github.com/MarSaKi/ETPNav
| null | null | null |
cs.CV cs.CL cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Vision-language navigation is a task that requires an agent to follow
instructions to navigate in environments. It becomes increasingly crucial in
the field of embodied AI, with potential applications in autonomous navigation,
search and rescue, and human-robot interaction. In this paper, we propose to
address a more practical yet challenging counterpart setting - vision-language
navigation in continuous environments (VLN-CE). To develop a robust VLN-CE
agent, we propose a new navigation framework, ETPNav, which focuses on two
critical skills: 1) the capability to abstract environments and generate
long-range navigation plans, and 2) the ability of obstacle-avoiding control in
continuous environments. ETPNav performs online topological mapping of
environments by self-organizing predicted waypoints along a traversed path,
without prior environmental experience. This enables the agent to break down
the navigation procedure into high-level planning and low-level control.
Concurrently, ETPNav utilizes a transformer-based cross-modal planner to
generate navigation plans based on topological maps and instructions. The plan
is then performed through an obstacle-avoiding controller that leverages a
trial-and-error heuristic to prevent navigation from getting stuck in
obstacles. Experimental results demonstrate the effectiveness of the proposed
method. ETPNav yields more than 10% and 20% improvements over prior
state-of-the-art on R2R-CE and RxR-CE datasets, respectively. Our code is
available at https://github.com/MarSaKi/ETPNav.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 13:07:17 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2023 04:15:25 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"An",
"Dong",
""
],
[
"Wang",
"Hanqing",
""
],
[
"Wang",
"Wenguan",
""
],
[
"Wang",
"Zun",
""
],
[
"Huang",
"Yan",
""
],
[
"He",
"Keji",
""
],
[
"Wang",
"Liang",
""
]
] |
new_dataset
| 0.997438 |
2304.03295
|
Euihyeok Lee
|
Euihyoek Lee, Chulhong Min, Jeaseung Lee, Jin Yu, Seungwoo Kang
|
Automatic Detection of Reactions to Music via Earable Sensing
| null | null | null | null |
cs.SD cs.HC eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present GrooveMeter, a novel system that automatically detects vocal and
motion reactions to music via earable sensing and supports music
engagement-aware applications. To this end, we use smart earbuds as sensing
devices, which are already widely used for music listening, and devise reaction
detection techniques by leveraging an inertial measurement unit (IMU) and a
microphone on earbuds. To explore reactions in daily music-listening
situations, we collect the first dataset of its kind, MusicReactionSet, containing
926 minutes of IMU and audio data from 30 participants. With the dataset, we
discover a set of unique challenges in detecting music listening reactions
accurately and robustly using audio and motion sensing. We devise sophisticated
processing pipelines to make reaction detection accurate and efficient. We
present a comprehensive evaluation to examine the performance of reaction
detection and system cost. It shows that GrooveMeter achieves the macro F1
scores of 0.89 for vocal reaction and 0.81 for motion reaction with
leave-one-subject-out cross-validation. More importantly, GrooveMeter shows
higher accuracy and robustness compared to alternative methods. We also show
that our filtering approach reduces 50% or more of the energy overhead.
Finally, we demonstrate the potential use cases through a case study.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 08:11:03 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Lee",
"Euihyoek",
""
],
[
"Min",
"Chulhong",
""
],
[
"Lee",
"Jeaseung",
""
],
[
"Yu",
"Jin",
""
],
[
"Kang",
"Seungwoo",
""
]
] |
new_dataset
| 0.994286 |
2304.03399
|
Alaa Shaker
|
Alaa Shaker, Alaa Aldarf and Igor Bessmertny
|
Using LSTM and GRU With a New Dataset for Named Entity Recognition in
the Arabic Language
|
Proceedings of the 13th Majorov International Conference on Software
Engineering and Computer Systems
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Named entity recognition (NER) is a natural language processing (NLP) task
that aims to identify named entities and classify them into categories such as
person, location, organization, etc. The Arabic language contains a considerable
amount of unstructured data, and it requires different preprocessing tools than
languages such as English, Russian, and German. This highlights the importance
of building a new structured dataset to address the lack of structured data. In
this work, we use the BIOES format to tag words, which allows us to handle
nested named entities that consist of more than one sentence and to define the
start and the end of each name. The dataset consists of more than thirty-six
thousand records. In addition, this work proposes long short term memory (LSTM)
units and Gated Recurrent Units (GRU) for building the named entity recognition
model in the Arabic language. The models give an approximately good result
(80%), because LSTM and GRU models can capture the relationships between the words
of a sentence. We also use Trax, a new library from Google, and the Colab
platform.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 22:14:02 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Shaker",
"Alaa",
""
],
[
"Aldarf",
"Alaa",
""
],
[
"Bessmertny",
"Igor",
""
]
] |
new_dataset
| 0.99785 |
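For readers unfamiliar with the BIOES scheme mentioned in the Arabic NER abstract above, the short Python sketch below shows how tokens of a sentence are tagged. This is a toy illustration only: English tokens are used for readability even though the paper's data is Arabic, and the entity labels and variable names are assumptions for the example, not taken from the dataset.

```python
# Illustrative BIOES tagging of a sentence.
# B = beginning of a multi-token entity, I = inside, E = end,
# S = single-token entity, O = outside any entity.
tokens = ["Barack", "Obama", "visited", "New", "York", "City", "yesterday"]
tags   = ["B-PER",  "E-PER", "O",       "B-LOC", "I-LOC", "E-LOC", "O"]

for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
```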
2304.03400
|
Tu Bui
|
Tu Bui, Shruti Agarwal, Ning Yu and John Collomosse
|
RoSteALS: Robust Steganography using Autoencoder Latent Space
|
accepted to CVPR WMF 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Data hiding such as steganography and invisible watermarking has important
applications in copyright protection, privacy-preserved communication and
content provenance. Existing works often fall short in either preserving image
quality, or robustness against perturbations or are too complex to train. We
propose RoSteALS, a practical steganography technique leveraging frozen
pretrained autoencoders to free the payload embedding from learning the
distribution of cover images. RoSteALS has a light-weight secret encoder of
just 300k parameters, is easy to train, has perfect secret recovery performance
and comparable image quality on three benchmarks. Additionally, RoSteALS can be
adapted for novel cover-less steganography applications in which the cover
image can be sampled from noise or conditioned on text prompts via a denoising
diffusion process. Our model and code are available at
\url{https://github.com/TuBui/RoSteALS}.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 22:14:26 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Bui",
"Tu",
""
],
[
"Agarwal",
"Shruti",
""
],
[
"Yu",
"Ning",
""
],
[
"Collomosse",
"John",
""
]
] |
new_dataset
| 0.997942 |
2304.03428
|
Shaoyu Chen
|
Shaoyu Chen, Tianheng Cheng, Jiemin Fang, Qian Zhang, Yuan Li, Wenyu
Liu, Xinggang Wang
|
TinyDet: Accurate Small Object Detection in Lightweight Generic
Detectors
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Small object detection requires the detection head to scan a large number of
positions on image feature maps, which is extremely hard for computation- and
energy-efficient lightweight generic detectors. To accurately detect small
objects with limited computation, we propose a two-stage lightweight detection
framework with extremely low computation complexity, termed as TinyDet. It
enables high-resolution feature maps for dense anchoring to better cover small
objects, proposes a sparsely-connected convolution for computation reduction,
enhances the early stage features in the backbone, and addresses the feature
misalignment problem for accurate small object detection. On the COCO
benchmark, our TinyDet-M achieves 30.3 AP and 13.5 AP^s with only 991 MFLOPs,
which is the first detector that has an AP over 30 with less than 1 GFLOPs;
besides, TinyDet-S and TinyDet-L achieve promising performance under different
computation limitation.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 00:45:50 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Chen",
"Shaoyu",
""
],
[
"Cheng",
"Tianheng",
""
],
[
"Fang",
"Jiemin",
""
],
[
"Zhang",
"Qian",
""
],
[
"Li",
"Yuan",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Wang",
"Xinggang",
""
]
] |
new_dataset
| 0.999221 |
2304.03481
|
Gaojie Wu
|
Gaojie Wu, Wei-Shi Zheng, Yutong Lu, Qi Tian
|
PSLT: A Light-weight Vision Transformer with Ladder Self-Attention and
Progressive Shift
|
Accepted to IEEE Transaction on Pattern Analysis and Machine
Intelligence, 2023 (Submission date: 08-Jul-202)
|
IEEE Transaction on Pattern Analysis and Machine Intelligence,
2023
|
10.1109/TPAMI.2023.3265499
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformer (ViT) has shown great potential for various visual tasks
due to its ability to model long-range dependency. However, ViT requires a
large amount of computing resources to compute the global self-attention. In
this work, we propose a ladder self-attention block with multiple branches and
a progressive shift mechanism to develop a light-weight transformer backbone
that requires less computing resources (e.g. a relatively small number of
parameters and FLOPs), termed Progressive Shift Ladder Transformer (PSLT).
First, the ladder self-attention block reduces the computational cost by
modelling local self-attention in each branch. Meanwhile, the
progressive shift mechanism is proposed to enlarge the receptive field in the
ladder self-attention block by modelling diverse local self-attention for each
branch and interacting among these branches. Second, the input feature of the
ladder self-attention block is split equally along the channel dimension for
each branch, which considerably reduces the computational cost in the ladder
self-attention block (with nearly 1/3 the amount of parameters and FLOPs), and
the outputs of these branches are then collaborated by a pixel-adaptive fusion.
Therefore, the ladder self-attention block with a relatively small number of
parameters and FLOPs is capable of modelling long-range interactions. Based on
the ladder self-attention block, PSLT performs well on several vision tasks,
including image classification, object detection and person
re-identification. On the ImageNet-1k dataset, PSLT achieves a top-1 accuracy
of 79.9% with 9.2M parameters and 1.9G FLOPs, which is comparable to several
existing models with more than 20M parameters and 4G FLOPs. Code is available
at https://isee-ai.cn/wugaojie/PSLT.html.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 05:21:37 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Wu",
"Gaojie",
""
],
[
"Zheng",
"Wei-Shi",
""
],
[
"Lu",
"Yutong",
""
],
[
"Tian",
"Qi",
""
]
] |
new_dataset
| 0.999132 |
2304.03495
|
Deunsol Jung
|
Deunsol Jung, Sanghyun Kim, Won Hwa Kim, Minsu Cho
|
Devil's on the Edges: Selective Quad Attention for Scene Graph
Generation
|
Accepted at CVPR 2023; Project page at
https://cvlab.postech.ac.kr/research/SQUAT/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Scene graph generation aims to construct a semantic graph structure from an
image such that its nodes and edges respectively represent objects and their
relationships. One of the major challenges for the task lies in the presence of
distracting objects and relationships in images; contextual reasoning is
strongly distracted by irrelevant objects or backgrounds and, more importantly,
a vast number of irrelevant candidate relations. To tackle the issue, we
propose the Selective Quad Attention Network (SQUAT) that learns to select
relevant object pairs and disambiguate them via diverse contextual
interactions. SQUAT consists of two main components: edge selection and quad
attention. The edge selection module selects relevant object pairs, i.e., edges
in the scene graph, which helps contextual reasoning, and the quad attention
module then updates the edge features using both edge-to-node and edge-to-edge
cross-attentions to capture contextual information between objects and object
pairs. Experiments demonstrate the strong performance and robustness of SQUAT,
achieving the state of the art on the Visual Genome and Open Images v6
benchmarks.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 06:33:46 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Jung",
"Deunsol",
""
],
[
"Kim",
"Sanghyun",
""
],
[
"Kim",
"Won Hwa",
""
],
[
"Cho",
"Minsu",
""
]
] |
new_dataset
| 0.997211 |
2304.03497
|
Sang-Bin Jeon
|
Sang-Bin Jeon, Jaeho Jung, Jinhyung Park, and In-Kwon Lee
|
F-RDW: Redirected Walking with Forecasting Future Position
|
12 pages, 13 figures
| null | null | null |
cs.HC cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In order to serve better VR experiences to users, existing predictive methods
of Redirected Walking (RDW) exploit future information to reduce the number of
reset occurrences. However, such methods often impose a precondition during
deployment, either in the virtual environment's layout or the user's walking
direction, which constrains its universal applications. To tackle this
challenge, we propose a novel mechanism, F-RDW, that is twofold: (1) it forecasts
the future information of a user in the virtual space without any assumptions,
and (2) it fuses this information while maneuvering existing RDW methods. The
backbone of the first step is an LSTM-based model that ingests the user's
spatial and eye-tracking data to predict the user's future position in the
virtual space, and the following step feeds those predicted values into
existing RDW methods (such as MPCRed, S2C, TAPF, and ARC) while respecting
their internal mechanism in applicable ways. The results of our simulation test
and user study demonstrate the significance of future information when using
RDW in small physical spaces or complex environments. We prove that the
proposed mechanism significantly reduces the number of resets and increases the
traveled distance between resets, hence augmenting the redirection performance
of all RDW methods explored in this work.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 06:37:17 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Jeon",
"Sang-Bin",
""
],
[
"Jung",
"Jaeho",
""
],
[
"Park",
"Jinhyung",
""
],
[
"Lee",
"In-Kwon",
""
]
] |
new_dataset
| 0.981681 |
2304.03511
|
Md. Azizul Hakim
|
Shree. Dolax Ray, Mst. Khadija Tul Kubra Natasha, Md. Azizul Hakim,
Fatema Nur
|
Carrot Cure: A CNN based Application to Detect Carrot Disease
|
7 pages, 7 figures, conference
| null |
10.1109/ICOEI53556.2022.9776947
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Carrot is a well-known nutritional vegetable grown all over the world.
Carrot diseases have become a massive issue in the carrot
production cycle and have a tremendous effect on economic growth in
the agricultural sector. An automatic carrot disease detection system can help
to identify diseased carrots and can provide guidance to cure carrot disease at
an earlier stage, resulting in a smaller economic loss in the carrot production
system. The proposed research study has developed a web application, Carrot Cure,
based on a Convolutional Neural Network (CNN), which can identify a defective
carrot and provide a proper curative solution. Images of carrots affected by
cavity spot and leaf blight, as well as healthy images, were collected. Further,
this research work has employed a Convolutional Neural Network and a Fully
Convolutional Neural Network (FCNN) model for infection classification. Different
convolutional models with various layers are explored, and the proposed
convolutional model achieves an accuracy of 99.8%, which will be useful for
growers to distinguish carrot diseases and increase their profit.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 07:10:24 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Ray",
"Shree. Dolax",
""
],
[
"Natasha",
"Mst. Khadija Tul Kubra",
""
],
[
"Hakim",
"Md. Azizul",
""
],
[
"Nur",
"Fatema",
""
]
] |
new_dataset
| 0.98745 |
2304.03514
|
Yuze Wu
|
Yuze Wu, Fan Yang, Ze Wang, Kaiwei Wang, Yanjun Cao, Chao Xu, Fei Gao
|
Ring-Rotor: A Novel Retractable Ring-shaped Quadrotor with Aerial
Grasping and Transportation Capability
|
8 pages, accepted by IEEE Robotics and Automation Letters on January,
2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter presents a novel and retractable ring-shaped quadrotor called
Ring-Rotor that can adjust the vehicle's length and width simultaneously.
Unlike other morphing quadrotors with high platform complexity and poor
controllability, Ring-Rotor uses only one servo motor for morphing but reduces
the largest dimension of the vehicle by approximately 31.4\%. It can guarantee
passibility while flying through small spaces in its compact form and energy
saving in its standard form. Meanwhile, the vehicle breaks the cross
configuration of general quadrotors with four arms connected to the central
body and innovates a ring-shaped mechanical structure with spare central space.
Based on this, an ingenious whole-body aerial grasping and transportation
scheme is designed to carry various shapes of objects without the external
manipulator mechanism. Moreover, we exploit a nonlinear model predictive
control (NMPC) strategy that uses a time-variant physical parameter model to
adapt to the quadrotor morphology. The above-mentioned applications are performed
in real-world experiments to demonstrate the system's high versatility.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 07:17:18 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Wu",
"Yuze",
""
],
[
"Yang",
"Fan",
""
],
[
"Wang",
"Ze",
""
],
[
"Wang",
"Kaiwei",
""
],
[
"Cao",
"Yanjun",
""
],
[
"Xu",
"Chao",
""
],
[
"Gao",
"Fei",
""
]
] |
new_dataset
| 0.999569 |
2304.03541
|
Thomas Debris-Alazard
|
Thomas Debris-Alazard
|
Code-based Cryptography: Lecture Notes
|
Lecture notes for a course given at \'Ecole normale sup\'erieure de
Lyon and summer school 2022 in post-quantum cryptography that took place in
the university of Budapest
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
These lecture notes have been written for courses given at \'Ecole normale
sup\'erieure de Lyon and the 2022 summer school in post-quantum cryptography that
took place at the University of Budapest. Our objective is to give a general
introduction to the foundations of code-based cryptography which is currently
known to be secure even against quantum adversaries. In particular we focus our
attention to the decoding problem whose hardness is at the ground of the
security of many cryptographic primitives, the most prominent being McEliece
and Alekhnovich' encryption schemes.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 08:37:07 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Debris-Alazard",
"Thomas",
""
]
] |
new_dataset
| 0.970206 |
2304.03542
|
Xuhai Chen
|
Xuhai Chen, Jiangning Zhang, Chao Xu, Yabiao Wang, Chengjie Wang, Yong
Liu
|
Better "CMOS" Produces Clearer Images: Learning Space-Variant Blur
Estimation for Blind Image Super-Resolution
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the existing blind image Super-Resolution (SR) methods assume that
the blur kernels are space-invariant. However, the blur involved in real
applications is usually space-variant due to object motion, out-of-focus,
etc., resulting in severe performance drop of the advanced SR methods. To
address this problem, we firstly introduce two new datasets with out-of-focus
blur, i.e., NYUv2-BSR and Cityscapes-BSR, to support further researches of
blind SR with space-variant blur. Based on the datasets, we design a novel
Cross-MOdal fuSion network (CMOS) that estimates both blur and semantics
simultaneously, which leads to improved SR results. It involves a feature
Grouping Interactive Attention (GIA) module to make the two modalities interact
more effectively and avoid inconsistency. GIA can also be used for the
interaction of other features because of the universality of its structure.
Qualitative and quantitative experiments compared with state-of-the-art methods
on above datasets and real-world images demonstrate the superiority of our
method, e.g., obtaining PSNR/SSIM by +1.91/+0.0048 on NYUv2-BSR than MANet.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 08:40:31 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Chen",
"Xuhai",
""
],
[
"Zhang",
"Jiangning",
""
],
[
"Xu",
"Chao",
""
],
[
"Wang",
"Yabiao",
""
],
[
"Wang",
"Chengjie",
""
],
[
"Liu",
"Yong",
""
]
] |
new_dataset
| 0.99936 |
2304.03585
|
Javad Peymanfard
|
Mohammd Hasan Shamgholi, Vahid Saeedi, Javad Peymanfard, Leila
Alhabib, Hossein Zeinali
|
ArmanTTS single-speaker Persian dataset
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
TTS, or text-to-speech, is a complicated process that can be accomplished
through appropriate modeling using deep learning methods. In order to implement
deep learning models, a suitable dataset is required. Since there is a scarce
amount of work done in this field for the Persian language, this paper will
introduce the single speaker dataset: ArmanTTS. We compared the characteristics
of this dataset with those of various prevalent datasets to show that ArmanTTS
meets the necessary standards for training a Persian text-to-speech conversion
model. We also combined Tacotron 2 and HiFi-GAN to design a model that
receives phonemes as input and outputs the corresponding speech. A MOS value of
4.0 was obtained for real speech, 3.87 for the vocoder prediction, and 2.98 for
the synthetic speech generated by the TTS model.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 10:52:55 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Shamgholi",
"Mohammd Hasan",
""
],
[
"Saeedi",
"Vahid",
""
],
[
"Peymanfard",
"Javad",
""
],
[
"Alhabib",
"Leila",
""
],
[
"Zeinali",
"Hossein",
""
]
] |
new_dataset
| 0.999532 |
2304.03610
|
Mahla Nejati
|
Yuning Xing, Dexter Pham, Henry Williams, David Smith, Ho Seok Ahn,
JongYoon Lim, Bruce A. MacDonald, Mahla Nejati
|
Look how they have grown: Non-destructive Leaf Detection and Size
Estimation of Tomato Plants for 3D Growth Monitoring
|
10 Pages, 10 Figures
|
Proceedings of the Australasian conference on robotics and
automation (ACRA 2022)
| null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart farming is a growing field as technology advances. Plant
characteristics are crucial indicators for monitoring plant growth. Research
has been done to estimate characteristics like leaf area index, leaf disease,
and plant height. However, few methods have been applied to non-destructive
measurements of leaf size. In this paper, an automated non-destructive
image-based measuring system is presented, which uses 2D and 3D data obtained
using a Zivid 3D camera, creating 3D virtual representations (digital twins) of
the tomato plants. Leaves are detected from corresponding 2D RGB images and
mapped to their 3D point cloud using the detected leaf masks, which then pass
the leaf point cloud to the plane fitting algorithm to extract the leaf size to
provide data for growth monitoring. The performance of the measurement platform
has been measured through a comprehensive trial on real-world tomato plants
with quantified performance metrics compared to ground truth measurements.
Three tomato leaf and height datasets (including 50+ 3D point cloud files of
tomato plants) were collected and open-sourced in this project. The proposed
leaf size estimation method demonstrates an RMSE value of 4.47mm and an R^2
value of 0.87. The overall measurement system (leaf detection and size
estimation algorithms combined) delivers an RMSE value of 8.13mm and an R^2
value of 0.899.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 12:16:10 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Xing",
"Yuning",
""
],
[
"Pham",
"Dexter",
""
],
[
"Williams",
"Henry",
""
],
[
"Smith",
"David",
""
],
[
"Ahn",
"Ho Seok",
""
],
[
"Lim",
"JongYoon",
""
],
[
"MacDonald",
"Bruce A.",
""
],
[
"Nejati",
"Mahla",
""
]
] |
new_dataset
| 0.996952 |
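For reference, the RMSE and R^2 figures quoted in the leaf-size abstract above follow the standard definitions; the NumPy sketch below shows how such values are typically computed from ground-truth and predicted leaf sizes. The computation is generic, not the authors' code, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def rmse_and_r2(y_true, y_pred):
    """Standard root-mean-square error and coefficient of determination (R^2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_true - y_pred
    rmse = np.sqrt(np.mean(residuals ** 2))
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return rmse, 1.0 - ss_res / ss_tot

# Example with made-up leaf sizes in millimetres:
# rmse, r2 = rmse_and_r2([52.0, 61.5, 70.2], [50.1, 63.0, 68.8])
```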
2304.03623
|
Fangwei Zhong
|
Fangwei Zhong, Xiao Bi, Yudi Zhang, Wei Zhang, Yizhou Wang
|
RSPT: Reconstruct Surroundings and Predict Trajectories for
Generalizable Active Object Tracking
|
AAAI 2023 (Oral)
| null | null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Active Object Tracking (AOT) aims to maintain a specific relation between the
tracker and object(s) by autonomously controlling the motion system of a
tracker given observations. AOT has wide-ranging applications, such as in
mobile robots and autonomous driving. However, building a generalizable active
tracker that works robustly across different scenarios remains a challenge,
especially in unstructured environments with cluttered obstacles and diverse
layouts. We argue that constructing a state representation capable of modeling
the geometry structure of the surroundings and the dynamics of the target is
crucial for achieving this goal. To address this challenge, we present RSPT, a
framework that forms a structure-aware motion representation by Reconstructing
the Surroundings and Predicting the target Trajectory. Additionally, we enhance
the generalization of the policy network by training in an asymmetric dueling
mechanism. We evaluate RSPT on various simulated scenarios and show that it
outperforms existing methods in unseen environments, particularly those with
complex obstacles and layouts. We also demonstrate the successful transfer of
RSPT to real-world settings. Project Website:
https://sites.google.com/view/aot-rspt.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 12:52:24 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Zhong",
"Fangwei",
""
],
[
"Bi",
"Xiao",
""
],
[
"Zhang",
"Yudi",
""
],
[
"Zhang",
"Wei",
""
],
[
"Wang",
"Yizhou",
""
]
] |
new_dataset
| 0.95643 |
2304.03631
|
Eadom Dessalene
|
Eadom Dessalene, Michael Maynord, Cornelia Fermuller, Yiannis
Aloimonos
|
Therbligs in Action: Video Understanding through Motion Primitives
|
8 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce a rule-based, compositional, and hierarchical
modeling of action using Therbligs as our atoms. Introducing these atoms
provides us with a consistent, expressive, contact-centered representation of
action. Over the atoms we introduce a differentiable method of rule-based
reasoning to regularize for logical consistency. Our approach is complementary
to other approaches in that the Therblig-based representations produced by our
architecture augment rather than replace existing architectures'
representations. We release the first Therblig-centered annotations over two
popular video datasets - EPIC Kitchens 100 and 50-Salads. We also broadly
demonstrate benefits to adopting Therblig representations through evaluation on
the following tasks: action segmentation, action anticipation, and action
recognition - observing an average 10.5\%/7.53\%/6.5\% relative improvement,
respectively, over EPIC Kitchens and an average 8.9\%/6.63\%/4.8\% relative
improvement, respectively, over 50 Salads. Code and data will be made publicly
available.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 17:27:39 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Dessalene",
"Eadom",
""
],
[
"Maynord",
"Michael",
""
],
[
"Fermuller",
"Cornelia",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] |
new_dataset
| 0.997794 |
2304.03635
|
Changlong Jiang
|
Changlong Jiang, Yang Xiao, Cunlin Wu, Mingyang Zhang, Jinghong Zheng,
Zhiguo Cao, and Joey Tianyi Zhou
|
A2J-Transformer: Anchor-to-Joint Transformer Network for 3D Interacting
Hand Pose Estimation from a Single RGB Image
|
CVPR 2023. The code is avaliable at
https://github.com/ChanglongJiangGit/A2J-Transformer
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D interacting hand pose estimation from a single RGB image is a challenging
task, due to serious self-occlusion and inter-occlusion towards hands,
confusing similar appearance patterns between 2 hands, ill-posed joint position
mapping from 2D to 3D, etc. To address these, we propose to extend A2J, the
state-of-the-art depth-based 3D single hand pose estimation method, to the RGB
domain under the interacting-hand condition. Our key idea is to equip A2J with
strong local-global aware ability to well capture interacting hands' local fine
details and global articulated clues among joints jointly. To this end, A2J is
evolved under Transformer's non-local encoding-decoding framework to build
A2J-Transformer. It holds 3 main advantages over A2J. First, self-attention
across local anchor points is built to make them global spatial context aware
to better capture joints' articulation clues for resisting occlusion. Secondly,
each anchor point is regarded as learnable query with adaptive feature learning
for facilitating pattern fitting capacity, instead of having the same local
representation with the others. Last but not least, anchor point locates in 3D
space instead of 2D as in A2J, to leverage 3D pose prediction. Experiments on
challenging InterHand 2.6M demonstrate that, A2J-Transformer can achieve
state-of-the-art model-free performance (3.38mm MPJPE advancement in 2-hand
case) and can also be applied to depth domain with strong generalization.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 13:30:36 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Jiang",
"Changlong",
""
],
[
"Xiao",
"Yang",
""
],
[
"Wu",
"Cunlin",
""
],
[
"Zhang",
"Mingyang",
""
],
[
"Zheng",
"Jinghong",
""
],
[
"Cao",
"Zhiguo",
""
],
[
"Zhou",
"Joey Tianyi",
""
]
] |
new_dataset
| 0.992349 |
2304.03657
|
Kfir Girstein
|
Kfir Girstein, Eliron Rahimi, Prof. Avi Mendelson
|
SCART: Simulation of Cyber Attacks for Real-Time
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Real-Time systems are often implemented as reactive systems that respond to
stimuli and complete tasks in a known bounded time. The development process of
such systems usually involves using a cycle-accurate simulation environment and
even a digital twin system that can accurately simulate the system and the
environment it operates in. In addition, many real-time systems require high
reliability and strive to be immune against security attacks. Thus, the
development environment must support reliability-related events such as the
failure of a sensor, malfunction of a subsystem, and foreseen events of Cyber
security attacks. This paper presents the SCART framework - an innovative
solution that aims to allow extending simulation environments of real-time
systems with the capability to incorporate reliability-related events and
advanced cyber security attacks, e.g., an attack on a single sensor as well as
"complex security attacks" that aim to change the behavior of a group of
sensors. We validate our system by applying the newly proposed environment to a
drone's flight control system, including its navigation system, which uses
machine learning algorithms. Such a system is very challenging since it
requires many experiments that can hardly be achieved by using live systems. We
showed that using SCART is very efficient, can increase the model's accuracy,
and significantly reduce false-positive rates. Some of these experiments were
also validated using a set of "real drones".
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 14:25:30 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Girstein",
"Kfir",
""
],
[
"Rahimi",
"Eliron",
""
],
[
"Mendelson",
"Prof. Avi",
""
]
] |
new_dataset
| 0.977154 |
2304.03669
|
Haoyuan Li
|
Haoyuan Li, Hao Jiang, Tao Jin, Mengyan Li, Yan Chen, Zhijie Lin, Yang
Zhao, Zhou Zhao
|
DATE: Domain Adaptive Product Seeker for E-commerce
|
This paper was accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Product Retrieval (PR) and Grounding (PG), aiming to seek image and
object-level products respectively according to a textual query, have recently
attracted great interest for a better shopping experience. Owing to the lack of
relevant datasets, we collect two large-scale benchmark datasets from Taobao
Mall and Live domains with about 474k and 101k image-query pairs for PR, and
manually annotate the object bounding boxes in each image for PG. As annotating
boxes is expensive and time-consuming, we attempt to transfer knowledge from
annotated domain to unannotated for PG to achieve un-supervised Domain
Adaptation (PG-DA). We propose a {\bf D}omain {\bf A}daptive Produc{\bf t}
S{\bf e}eker ({\bf DATE}) framework, regarding PR and PG as Product Seeking
problems at different levels, to assist the query {\bf date} the product.
Concretely, we first design a semantics-aggregated feature extractor for each
modality to obtain concentrated and comprehensive features for the subsequent
efficient retrieval and fine-grained grounding tasks. Then, we present two
cooperative seekers to simultaneously search the image for PR and localize the
product for PG. Besides, we devise a domain aligner for PG-DA to alleviate
uni-modal marginal and multi-modal conditional distribution shift between
source and target domains, and design a pseudo box generator to dynamically
select reliable instances and generate bounding boxes for further knowledge
transfer. Extensive experiments show that our DATE achieves satisfactory
performance in fully-supervised PR, PG and un-supervised PG-DA. Our
desensitized datasets will be publicly available
here\footnote{\url{https://github.com/Taobao-live/Product-Seeking}}.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 14:40:16 GMT"
}
] | 2023-04-10T00:00:00 |
[
[
"Li",
"Haoyuan",
""
],
[
"Jiang",
"Hao",
""
],
[
"Jin",
"Tao",
""
],
[
"Li",
"Mengyan",
""
],
[
"Chen",
"Yan",
""
],
[
"Lin",
"Zhijie",
""
],
[
"Zhao",
"Yang",
""
],
[
"Zhao",
"Zhou",
""
]
] |
new_dataset
| 0.984373 |
1907.03244
|
Ali Analooee
|
Ali Analooee, Shahram Azadi, Reza Kazemi
|
Time Distance: A Novel Collision Prediction and Path Planning Method
| null |
Journal of Applied and Computational Mechanics, Vol. 9, No. 3,
(2023), 656-677
|
10.22055/JACM.2022.40688.3675
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a new fast algorithm for path planning and a collision
prediction framework for two dimensional dynamically changing environments are
introduced. The method is called Time Distance (TD) and benefits from the
space-time space idea. First, the TD concept is defined as the time interval
that must be spent in order for an object to reach another object or a
location. Next, TD functions are derived as a function of location, velocity
and geometry of objects. To construct the configuration-time space, TD
functions in conjunction with another function named "Z-Infinity" are
exploited. Finally, an explicit formula for creating the length-optimal
collision-free path is presented. Length optimization in this formula is
achieved using a function named "Route Function" which minimizes a cost
function. Performance of the path planning algorithm is evaluated in
simulations. Comparisons indicate that the algorithm is fast enough and capable
of generating length-optimal paths, as the most effective methods do. Finally, as
another usage of the TD functions, a collision prediction framework is
presented. This framework consists of an explicit function which is a function
of TD functions and calculates the TD of the vehicle with respect to all
objects of the environment.
|
[
{
"version": "v1",
"created": "Sun, 7 Jul 2019 08:04:28 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Jul 2019 12:04:57 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Oct 2019 21:25:08 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Apr 2023 14:22:21 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Analooee",
"Ali",
""
],
[
"Azadi",
"Shahram",
""
],
[
"Kazemi",
"Reza",
""
]
] |
new_dataset
| 0.998988 |
1910.01122
|
Ken Sakurada
|
Shinya Sumikura, Mikiya Shibuya, Ken Sakurada
|
OpenVSLAM: A Versatile Visual SLAM Framework
| null | null |
10.1145/3343031.3350539
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce OpenVSLAM, a visual SLAM framework with high
usability and extensibility. Visual SLAM systems are essential for AR devices,
autonomous control of robots and drones, etc. However, conventional open-source
visual SLAM frameworks are not appropriately designed as libraries called from
third-party programs. To overcome this situation, we have developed a novel
visual SLAM framework. This software is designed to be easily used and
extended. It incorporates several useful features and functions for research
and development.
|
[
{
"version": "v1",
"created": "Wed, 2 Oct 2019 18:00:01 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Oct 2019 06:43:19 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2023 12:34:01 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Sumikura",
"Shinya",
""
],
[
"Shibuya",
"Mikiya",
""
],
[
"Sakurada",
"Ken",
""
]
] |
new_dataset
| 0.99674 |
2011.15028
|
G\'abor Sz\'arnyas
|
Alexandru Iosup, Ahmed Musaafir, Alexandru Uta, Arnau Prat P\'erez,
G\'abor Sz\'arnyas, Hassan Chafi, Ilie Gabriel T\u{a}nase, Lifeng Nai,
Michael Anderson, Mihai Capot\u{a}, Narayanan Sundaram, Peter Boncz,
Siegfried Depner, Stijn Heldens, Thomas Manhardt, Tim Hegeman, Wing Lung
Ngai, Yinglong Xia
|
The LDBC Graphalytics Benchmark
| null | null | null | null |
cs.DC cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this document, we describe LDBC Graphalytics, an industrial-grade
benchmark for graph analysis platforms. The main goal of Graphalytics is to
enable the fair and objective comparison of graph analysis platforms. Due to
the diversity of bottlenecks and performance issues such platforms need to
address, Graphalytics consists of a set of selected deterministic algorithms
for full-graph analysis, standard graph datasets, synthetic dataset generators,
and reference output for validation purposes. Its test harness produces deep
metrics that quantify multiple kinds of system scalability, weak and strong,
and robustness against failures and performance variability. The benchmark
also balances comprehensiveness with runtime necessary to obtain the deep
metrics. The benchmark comes with open-source software for generating
performance data, for validating algorithm results, for monitoring and sharing
performance data, and for obtaining the final benchmark result as a standard
performance report.
|
[
{
"version": "v1",
"created": "Mon, 30 Nov 2020 17:34:37 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Jan 2021 13:59:29 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Apr 2021 18:37:57 GMT"
},
{
"version": "v4",
"created": "Thu, 31 Mar 2022 09:47:08 GMT"
},
{
"version": "v5",
"created": "Wed, 15 Feb 2023 09:58:57 GMT"
},
{
"version": "v6",
"created": "Thu, 6 Apr 2023 07:24:03 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Iosup",
"Alexandru",
""
],
[
"Musaafir",
"Ahmed",
""
],
[
"Uta",
"Alexandru",
""
],
[
"Pérez",
"Arnau Prat",
""
],
[
"Szárnyas",
"Gábor",
""
],
[
"Chafi",
"Hassan",
""
],
[
"Tănase",
"Ilie Gabriel",
""
],
[
"Nai",
"Lifeng",
""
],
[
"Anderson",
"Michael",
""
],
[
"Capotă",
"Mihai",
""
],
[
"Sundaram",
"Narayanan",
""
],
[
"Boncz",
"Peter",
""
],
[
"Depner",
"Siegfried",
""
],
[
"Heldens",
"Stijn",
""
],
[
"Manhardt",
"Thomas",
""
],
[
"Hegeman",
"Tim",
""
],
[
"Ngai",
"Wing Lung",
""
],
[
"Xia",
"Yinglong",
""
]
] |
new_dataset
| 0.999169 |
2109.14251
|
Lingbo Liu
|
Lingbo Liu and Mengmeng Liu and Guanbin Li and Ziyi Wu and Junfan Lin
and Liang Lin
|
Road Network Guided Fine-Grained Urban Traffic Flow Inference
| null | null | null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate inference of fine-grained traffic flow from a coarse-grained one is an
emerging yet crucial problem, which can help greatly reduce the number of
required traffic monitoring sensors for cost savings. In this work, we notice
that traffic flow has a high correlation with the road network, which was either
completely ignored or simply treated as an external factor in previous works. To
address this problem, we propose a novel Road-Aware Traffic Flow Magnifier
(RATFM) that explicitly exploits the prior knowledge of road networks to fully
learn the road-aware spatial distribution of fine-grained traffic flow.
Specifically, a multi-directional 1D convolutional layer is first introduced to
extract the semantic feature of the road network. Subsequently, we incorporate
the road network feature and coarse-grained flow feature to regularize the
short-range spatial distribution modeling of road-relative traffic flow.
Furthermore, we take the road network feature as a query to capture the
long-range spatial distribution of traffic flow with a transformer
architecture. Benefiting from the road-aware inference mechanism, our method
can generate high-quality fine-grained traffic flow maps. Extensive experiments
on three real-world datasets show that the proposed RATFM outperforms
state-of-the-art models under various scenarios. Our code and datasets are
released at {\url{https://github.com/luimoli/RATFM}}.
|
[
{
"version": "v1",
"created": "Wed, 29 Sep 2021 07:51:49 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 06:25:10 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Liu",
"Lingbo",
""
],
[
"Liu",
"Mengmeng",
""
],
[
"Li",
"Guanbin",
""
],
[
"Wu",
"Ziyi",
""
],
[
"Lin",
"Junfan",
""
],
[
"Lin",
"Liang",
""
]
] |
new_dataset
| 0.994765 |
2111.11267
|
Tatsuro Kawamoto
|
Tatsuro Kawamoto and Teruyoshi Kobayashi
|
Sequential locality of graphs and its hypothesis testing
|
23 pages, 11 figures
|
Phys. Rev. Research 5, 023007 (2023)
|
10.1103/PhysRevResearch.5.023007
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The adjacency matrix is the most fundamental and intuitive object in graph
analysis that is useful not only mathematically but also for visualizing the
structures of graphs. Because the appearance of an adjacency matrix is
critically affected by the ordering of rows and columns, or vertex ordering,
statistical assessment of graphs together with their vertex sequences is
important in identifying the characteristic structures of graphs. In this
paper, we propose a hypothesis testing framework that assesses how locally
vertices are connected to each other along a specified vertex sequence, which
provides a statistical foundation for an optimization problem called envelope
reduction or minimum linear arrangement. The proposed tests are particularly
suitable for moderately small data and formulated based on a combinatorial
approach and a block model with intrinsic vertex ordering.
|
[
{
"version": "v1",
"created": "Mon, 22 Nov 2021 15:10:23 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 07:52:34 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Kawamoto",
"Tatsuro",
""
],
[
"Kobayashi",
"Teruyoshi",
""
]
] |
new_dataset
| 0.984094 |
2201.11494
|
Kohei Watabe
|
Kohei Watabe, Shohei Nakazawa, Yoshiki Sato, Sho Tsugawa, Kenji
Nakagawa
|
GraphTune: A Learning-based Graph Generative Model with Tunable
Structural Features
|
The paper was published in IEEE Transactions on Network Science and
Engineering (2023). An earlier and short version of this paper was presented
at the 41st IEEE International Conference on Distributed Computing Systems
(ICDCS 2021) Poster Track
| null |
10.1109/TNSE.2023.3244590
| null |
cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative models for graphs have been actively studied for decades, and they
have a wide range of applications. Recently, learning-based graph generation
that reproduces real-world graphs has been attracting the attention of many
researchers. Although several generative models that utilize modern machine
learning technologies have been proposed, conditional generation of general
graphs has been less explored in the field. In this paper, we propose a
generative model that allows us to tune the value of a global-level structural
feature as a condition. Our model, called GraphTune, makes it possible to tune
the value of any structural feature of generated graphs using Long Short Term
Memory (LSTM) and a Conditional Variational AutoEncoder (CVAE). We performed
comparative evaluations of GraphTune and conventional models on a real graph
dataset. The evaluations show that GraphTune makes it possible to tune the value
of a global-level structural feature more precisely than conventional models.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 13:14:53 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Feb 2022 10:20:47 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Apr 2023 10:39:46 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Watabe",
"Kohei",
""
],
[
"Nakazawa",
"Shohei",
""
],
[
"Sato",
"Yoshiki",
""
],
[
"Tsugawa",
"Sho",
""
],
[
"Nakagawa",
"Kenji",
""
]
] |
new_dataset
| 0.965193 |
2202.10400
|
Nika Mansouri Ghiasi
|
Nika Mansouri Ghiasi, Jisung Park, Harun Mustafa, Jeremie Kim, Ataberk
Olgun, Arvid Gollwitzer, Damla Senol Cali, Can Firtina, Haiyu Mao, Nour
Almadhoun Alserr, Rachata Ausavarungnirun, Nandita Vijaykumar, Mohammed
Alser, and Onur Mutlu
|
GenStore: A High-Performance and Energy-Efficient In-Storage Computing
System for Genome Sequence Analysis
|
Published at ASPLOS 2022
| null | null | null |
cs.AR cs.DC cs.OS q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
Read mapping is a fundamental, yet computationally-expensive step in many
genomics applications. It is used to identify potential matches and differences
between fragments (called reads) of a sequenced genome and an already known
genome (called a reference genome). To address the computational challenges in
genome analysis, many prior works propose various approaches such as filters
that select the reads that must undergo expensive computation, efficient
heuristics, and hardware acceleration. While effective at reducing the
computation overhead, all such approaches still require the costly movement of
a large amount of data from storage to the rest of the system, which can
significantly lower the end-to-end performance of read mapping in conventional
and emerging genomics systems.
We propose GenStore, the first in-storage processing system designed for
genome sequence analysis that greatly reduces both data movement and
computational overheads of genome sequence analysis by exploiting low-cost and
accurate in-storage filters. GenStore leverages hardware/software co-design to
address the challenges of in-storage processing, supporting reads with 1)
different read lengths and error rates, and 2) different degrees of genetic
variation. Through rigorous analysis of read mapping processes, we meticulously
design low-cost hardware accelerators and data/computation flows inside a NAND
flash-based SSD. Our evaluation using a wide range of real genomic datasets
shows that GenStore, when implemented in three modern SSDs, significantly
improves the read mapping performance of state-of-the-art software (hardware)
baselines by 2.07-6.05$\times$ (1.52-3.32$\times$) for read sets with high
similarity to the reference genome and 1.45-33.63$\times$ (2.70-19.2$\times$)
for read sets with low similarity to the reference genome.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 17:53:01 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 16:56:04 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Ghiasi",
"Nika Mansouri",
""
],
[
"Park",
"Jisung",
""
],
[
"Mustafa",
"Harun",
""
],
[
"Kim",
"Jeremie",
""
],
[
"Olgun",
"Ataberk",
""
],
[
"Gollwitzer",
"Arvid",
""
],
[
"Cali",
"Damla Senol",
""
],
[
"Firtina",
"Can",
""
],
[
"Mao",
"Haiyu",
""
],
[
"Alserr",
"Nour Almadhoun",
""
],
[
"Ausavarungnirun",
"Rachata",
""
],
[
"Vijaykumar",
"Nandita",
""
],
[
"Alser",
"Mohammed",
""
],
[
"Mutlu",
"Onur",
""
]
] |
new_dataset
| 0.985171 |
2208.00283
|
Aydin Abadi
|
Aydin Abadi and Steven J. Murdoch and Thomas Zacharias
|
Recurring Contingent Service Payment
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Fair exchange protocols let two mutually distrustful parties exchange digital
data in a way that neither party can cheat. They have various applications such
as the exchange of digital items, or the exchange of digital coins and digital
services between a buyer/client and seller/server.
In this work, we formally define and propose a generic blockchain-based
construction called "Recurring Contingent Service Payment" (RC-S-P). It (i)
lets a fair exchange of digital coins and verifiable service reoccur securely
between clients and a server while ensuring that the server is paid if and only
if it delivers a valid service, and (ii) ensures the parties' privacy is
preserved. RC-S-P supports arbitrary verifiable services, such as "Proofs of
Retrievability" (PoR) or verifiable computation and imposes low on-chain
overheads. Our formal treatment and construction, for the first time, consider
the setting where either client or server is malicious.
We also present a concrete, efficient instantiation of RC-S-P when the
verifiable service is PoR. We implemented the concrete instantiation and
analysed its cost. When it deals with a 4-GB outsourced file, a verifier can
check a proof in only 90 milliseconds, and a dispute between a prover and
verifier is resolved in 0.1 milliseconds.
At CCS 2017, two blockchain-based protocols were proposed to support the fair
exchange of digital coins and a certain verifiable service; namely, PoR. In
this work, we show that these protocols (i) are susceptible to a free-riding
attack which enables a client to receive the service without paying the server,
and (ii) are not suitable for cases where parties' privacy matters, e.g., when
the server's proof status or buyer's file size must remain private from the
public. RC-S-P simultaneously mitigates the above attack and preserves the
parties' privacy.
|
[
{
"version": "v1",
"created": "Sat, 30 Jul 2022 17:48:06 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 18:28:27 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Abadi",
"Aydin",
""
],
[
"Murdoch",
"Steven J.",
""
],
[
"Zacharias",
"Thomas",
""
]
] |
new_dataset
| 0.980608 |
2210.03070
|
Marta R. Costa-Juss\`a
|
Marta R. Costa-juss\`a, Eric Smith, Christophe Ropers, Daniel Licht,
Jean Maillard, Javier Ferrando, Carlos Escolano
|
Toxicity in Multilingual Machine Translation at Scale
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Machine Translation systems can produce different types of errors, some of
which are characterized as critical or catastrophic due to the specific
negative impact that they can have on users. In this paper we focus on one type
of critical error: added toxicity. We evaluate and analyze added toxicity when
translating a large evaluation dataset (HOLISTICBIAS, over 472k sentences,
covering 13 demographic axes) from English into 164 languages. An automatic
toxicity evaluation shows that added toxicity across languages varies from 0%
to 5%. The output languages with the most added toxicity tend to be
low-resource ones, and the demographic axes with the most added toxicity
include sexual orientation, gender and sex, and ability. We also perform human
evaluation on a subset of 8 translation directions, confirming the prevalence
of true added toxicity. We use a measurement of the amount of source
contribution to the translation, where a low source contribution implies
hallucination, to interpret what causes toxicity. Making use of the input
attributions allows us to explain toxicity, because the source contributions
significantly correlate with toxicity for 84% of languages studied. Given our
findings, our recommendations to reduce added toxicity are to curate training
data to avoid mistranslations, mitigate hallucination and check unstable
translations.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 17:26:27 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 20:06:42 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Costa-jussà",
"Marta R.",
""
],
[
"Smith",
"Eric",
""
],
[
"Ropers",
"Christophe",
""
],
[
"Licht",
"Daniel",
""
],
[
"Maillard",
"Jean",
""
],
[
"Ferrando",
"Javier",
""
],
[
"Escolano",
"Carlos",
""
]
] |
new_dataset
| 0.999519 |
2210.04787
|
Junhong Lin
|
Junhong Lin, Nanfeng Jiang, Zhentao Zhang, Weiling Chen and Tiesong
Zhao
|
LMQFormer: A Laplace-Prior-Guided Mask Query Transformer for Lightweight
Snow Removal
|
11 pages, 13 figures
| null |
10.1109/TCSVT.2023.3264824
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Snow removal aims to locate snow areas and recover clean images without
repairing traces. Unlike the regularity and semitransparency of rain, snow with
various patterns and degradations seriously occludes the background. As a
result, state-of-the-art snow removal methods usually retain a large number of
parameters. In this paper, we propose a lightweight yet highly efficient snow
removal network called Laplace Mask Query Transformer (LMQFormer). Firstly, we
present a Laplace-VQVAE to generate a coarse mask as prior knowledge of snow.
Instead of using the mask in the dataset, we aim at reducing both the information
entropy of snow and the computational cost of recovery. Secondly, we design a
Mask Query Transformer (MQFormer) to remove snow with the coarse mask, where we
use two parallel encoders and a hybrid decoder to learn extensive snow features
under lightweight requirements. Thirdly, we develop a Duplicated Mask Query
Attention (DMQA) that converts the coarse mask into a specific number of
queries, which constrain the attention areas of MQFormer with reduced
parameters. Experimental results in popular datasets have demonstrated the
efficiency of our proposed model, which achieves the state-of-the-art snow
removal quality with significantly reduced parameters and the lowest running
time.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 15:44:06 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2022 06:48:37 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Oct 2022 07:45:45 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Apr 2023 03:39:27 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Lin",
"Junhong",
""
],
[
"Jiang",
"Nanfeng",
""
],
[
"Zhang",
"Zhentao",
""
],
[
"Chen",
"Weiling",
""
],
[
"Zhao",
"Tiesong",
""
]
] |
new_dataset
| 0.990527 |
2210.12048
|
Bastian Rieck
|
Corinna Coupette and Sebastian Dalleiger and Bastian Rieck
|
Ollivier-Ricci Curvature for Hypergraphs: A Unified Framework
|
Accepted at ICLR 2023 (https://openreview.net/forum?id=sPCKNl5qDps)
| null | null | null |
cs.LG cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bridging geometry and topology, curvature is a powerful and expressive
invariant. While the utility of curvature has been theoretically and
empirically confirmed in the context of manifolds and graphs, its
generalization to the emerging domain of hypergraphs has remained largely
unexplored. On graphs, the Ollivier-Ricci curvature measures differences
between random walks via Wasserstein distances, thus grounding a geometric
concept in ideas from probability theory and optimal transport. We develop
ORCHID, a flexible framework generalizing Ollivier-Ricci curvature to
hypergraphs, and prove that the resulting curvatures have favorable theoretical
properties. Through extensive experiments on synthetic and real-world
hypergraphs from different domains, we demonstrate that ORCHID curvatures are
both scalable and useful to perform a variety of hypergraph tasks in practice.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 15:40:49 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 12:31:39 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2023 16:54:48 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Coupette",
"Corinna",
""
],
[
"Dalleiger",
"Sebastian",
""
],
[
"Rieck",
"Bastian",
""
]
] |
new_dataset
| 0.998163 |
2210.14061
|
Wouter Haverals
|
Wouter Haverals, Mike Kestemont
|
From exemplar to copy: the scribal appropriation of a Hadewijch
manuscript computationally explored
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This study is devoted to two of the oldest known manuscripts in which the
oeuvre of the medieval mystical author Hadewijch has been preserved: Brussels,
KBR, 2879-2880 (ms. A) and Brussels, KBR, 2877-2878 (ms. B). On the basis of
codicological and contextual arguments, it is assumed that the scribe who
produced B used A as an exemplar. While the similarities in both layout and
content between the two manuscripts are striking, the present article seeks to
identify the differences. After all, regardless of the intention to produce a
copy that closely follows the exemplar, subtle linguistic variation is
apparent. Divergences relate to spelling conventions, but also to the way in
which words are abbreviated (and the extent to which abbreviations occur). The
present study investigates the spelling profiles of the scribes who produced
mss. A and B in a computational way. In the first part of this study, we will
present both manuscripts in more detail, after which we will consider prior
research carried out on scribal profiling. The current study both builds and
expands on Kestemont (2015). Next, we outline the methodology used to analyse
and measure the degree of scribal appropriation that took place when ms. B was
copied off the exemplar ms. A. After this, we will discuss the results
obtained, focusing on the scribal variation that can be found both at the level
of individual words and n-grams. To this end, we use machine learning to
identify the most distinctive features that separate manuscript A from B.
Finally, we look at possible diachronic trends in the appropriation by B's
scribe of his exemplar. We argue that scribal takeovers in the exemplar impact
the practice of the copying scribe, while transitions to a different content
matter have little to no effect.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 14:40:25 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Oct 2022 14:09:04 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Feb 2023 13:53:09 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Apr 2023 15:28:58 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Haverals",
"Wouter",
""
],
[
"Kestemont",
"Mike",
""
]
] |
new_dataset
| 0.979803 |
2211.01144
|
Yang Gu
|
Yeming Gu, Hui Shu and Fan Hu
|
UniASM: Binary Code Similarity Detection without Fine-tuning
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CR cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Binary code similarity detection (BCSD) is widely used in various binary
analysis tasks such as vulnerability search, malware detection, clone
detection, and patch analysis. Recent studies have shown that the
learning-based binary code embedding models perform better than the traditional
feature-based approaches. In this paper, we propose a novel transformer-based
binary code embedding model named UniASM to learn representations of the binary
functions. We design two new training tasks to make the spatial distribution of
the generated vectors more uniform, so that they can be used directly in BCSD without
any fine-tuning. In addition, we present a new tokenization approach for binary
functions, which increases the token's semantic information and mitigates the
out-of-vocabulary (OOV) problem. We conduct an in-depth analysis of the factors
affecting model performance through ablation experiments and obtain some new
and valuable findings. The experimental results show that UniASM outperforms
the state-of-the-art (SOTA) approach on the evaluation dataset. The average
scores of Recall@1 on cross-compilers, cross-optimization levels, and
cross-obfuscations are 0.77, 0.72, and 0.72. Besides, in the real-world task of
known vulnerability search, UniASM outperforms all the current baselines.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 14:04:57 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 07:50:23 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2023 04:49:49 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Gu",
"Yeming",
""
],
[
"Shu",
"Hui",
""
],
[
"Hu",
"Fan",
""
]
] |
new_dataset
| 0.994372 |
2211.15654
|
Songyou Peng
|
Songyou Peng, Kyle Genova, Chiyu "Max" Jiang, Andrea Tagliasacchi,
Marc Pollefeys, Thomas Funkhouser
|
OpenScene: 3D Scene Understanding with Open Vocabularies
|
CVPR 2023. Project page: https://pengsongyou.github.io/openscene
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional 3D scene understanding approaches rely on labeled 3D datasets to
train a model for a single task with supervision. We propose OpenScene, an
alternative approach where a model predicts dense features for 3D scene points
that are co-embedded with text and image pixels in CLIP feature space. This
zero-shot approach enables task-agnostic training and open-vocabulary queries.
For example, to perform SOTA zero-shot 3D semantic segmentation it first infers
CLIP features for every 3D point and later classifies them based on
similarities to embeddings of arbitrary class labels. More interestingly, it
enables a suite of open-vocabulary scene understanding applications that have
never been done before. For example, it allows a user to enter an arbitrary
text query and then see a heat map indicating which parts of a scene match. Our
approach is effective at identifying objects, materials, affordances,
activities, and room types in complex 3D scenes, all using a single model
trained without any labeled 3D data.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 18:58:36 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 15:35:13 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Peng",
"Songyou",
""
],
[
"Genova",
"Kyle",
""
],
[
"Jiang",
"Chiyu \"Max\"",
""
],
[
"Tagliasacchi",
"Andrea",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Funkhouser",
"Thomas",
""
]
] |
new_dataset
| 0.99946 |
2212.11920
|
Christoph Mayer
|
Christoph Mayer and Martin Danelljan and Ming-Hsuan Yang and Vittorio
Ferrari and Luc Van Gool and Alina Kuznetsova
|
Beyond SOT: Tracking Multiple Generic Objects at Once
|
16 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generic Object Tracking (GOT) is the problem of tracking target objects,
specified by bounding boxes in the first frame of a video. While the task has
received much attention in the last decades, researchers have almost
exclusively focused on the single object setting. Multi-object GOT benefits
from a wider applicability, rendering it more attractive in real-world
applications. We attribute the lack of research interest in this problem to
the absence of suitable benchmarks. In this work, we introduce a new
large-scale GOT benchmark, LaGOT, containing multiple annotated target objects
per sequence. Our benchmark allows users to tackle key remaining challenges in
GOT, aiming to increase robustness and reduce computation through joint
tracking of multiple objects simultaneously. In addition, we propose a
transformer-based GOT tracker baseline capable of joint processing of multiple
objects through shared computation. Our approach achieves a 4x faster run-time
in the case of 10 concurrent objects compared to tracking each object independently
and outperforms existing single object trackers on our new benchmark. In
addition, our approach achieves highly competitive results on single-object GOT
datasets, setting a new state of the art on TrackingNet with a success rate AUC
of 84.4%. Our benchmark, code, and trained models will be made publicly
available.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 17:59:19 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 14:35:21 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Mayer",
"Christoph",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Ferrari",
"Vittorio",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Kuznetsova",
"Alina",
""
]
] |
new_dataset
| 0.993103 |
2301.00508
|
Fred Buhl
|
Fred W. Buhl
|
EmoGator: A New Open Source Vocal Burst Dataset with Baseline Machine
Learning Classification Methodologies
|
12 pages, 4 tables, 2 figures
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Vocal Bursts -- short, non-speech vocalizations that convey emotions, such as
laughter, cries, sighs, moans, and groans -- are an often-overlooked aspect of
speech emotion recognition, but an important aspect of human vocal
communication. One barrier to study of these interesting vocalizations is a
lack of large datasets. I am pleased to introduce the EmoGator dataset, which
consists of 32,130 samples from 357 speakers and 16.9654 hours of audio; each
sample is classified into one of 30 distinct emotion categories by the speaker.
Several different approaches to construct classifiers to identify emotion
categories will be discussed, and directions for future research will be
suggested. The dataset is available for download from
https://github.com/fredbuhl/EmoGator.
|
[
{
"version": "v1",
"created": "Mon, 2 Jan 2023 03:02:10 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 15:25:44 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Buhl",
"Fred W.",
""
]
] |
new_dataset
| 0.999846 |
2303.01726
|
Shunsuke Inenaga
|
Hiroto Fujimaru and Yuto Nakashima and Shunsuke Inenaga
|
On Sensitivity of Compact Directed Acyclic Word Graphs
|
This is a full version of the paper accepted for WORDS 2023
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Compact directed acyclic word graphs (CDAWGs) [Blumer et al. 1987] are a
fundamental data structure on strings with applications in text pattern
searching, data compression, and pattern discovery. Intuitively, the CDAWG of a
string $T$ is obtained by merging isomorphic subtrees of the suffix tree
[Weiner 1973] of the same string $T$, thus CDAWGs are a compact indexing
structure. In this paper, we investigate the sensitivity of CDAWGs when a
single character edit operation (insertion, deletion, or substitution) is
performed at the left-end of the input string $T$, namely, we are interested in
the worst-case increase in the size of the CDAWG after a left-end edit
operation. We prove that if $e$ is the number of edges of the CDAWG for string
$T$, then the number of new edges added to the CDAWG after a left-end edit
operation on $T$ is less than $e$. Further, we present almost matching lower
bounds on the sensitivity of CDAWGs for all cases of insertion, deletion, and
substitution.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 06:11:37 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 06:35:46 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Fujimaru",
"Hiroto",
""
],
[
"Nakashima",
"Yuto",
""
],
[
"Inenaga",
"Shunsuke",
""
]
] |
new_dataset
| 0.997929 |
2303.14259
|
Junyi Liu
|
Junyi Liu, Aleksandar Dragojevic, Shane Flemming, Antonios Katsarakis,
Dario Korolija, Igor Zablotchi, Ho-cheung Ng, Anuj Kalia, Miguel Castro
|
Honeycomb: ordered key-value store acceleration on an FPGA-based
SmartNIC
| null | null | null | null |
cs.DC cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In-memory ordered key-value stores are an important building block in modern
distributed applications. We present Honeycomb, a hybrid software-hardware
system for accelerating read-dominated workloads on ordered key-value stores
that provides linearizability for all operations including scans. Honeycomb
stores a B-Tree in host memory, and executes SCAN and GET on an FPGA-based
SmartNIC, and PUT, UPDATE and DELETE on the CPU. This approach enables large
stores and simplifies the FPGA implementation but raises the challenge of data
access and synchronization across the slow PCIe bus. We describe how Honeycomb
overcomes this challenge with careful data structure design, caching, request
parallelism with out-of-order request execution, wait-free read operations, and
batching synchronization between the CPU and the FPGA. For read-heavy YCSB
workloads, Honeycomb improves the throughput of a state-of-the-art ordered
key-value store by at least 1.8x. For scan-heavy workloads inspired by cloud
storage, Honeycomb improves throughput by more than 2x. The cost-performance,
which is more important for large-scale deployments, is improved by at least
1.5x on these workloads.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 19:53:55 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 12:28:43 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Liu",
"Junyi",
""
],
[
"Dragojevic",
"Aleksandar",
""
],
[
"Flemming",
"Shane",
""
],
[
"Katsarakis",
"Antonios",
""
],
[
"Korolija",
"Dario",
""
],
[
"Zablotchi",
"Igor",
""
],
[
"Ng",
"Ho-cheung",
""
],
[
"Kalia",
"Anuj",
""
],
[
"Castro",
"Miguel",
""
]
] |
new_dataset
| 0.994327 |
2304.00916
|
Yukang Cao
|
Yukang Cao, Yan-Pei Cao, Kai Han, Ying Shan, Kwan-Yee K. Wong
|
DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via
Diffusion Models
|
19 pages, 19 figures. Project page:
https://yukangcao.github.io/DreamAvatar/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present DreamAvatar, a text-and-shape guided framework for generating
high-quality 3D human avatars with controllable poses. While encouraging
results have been produced by recent methods on text-guided 3D common object
generation, generating high-quality human avatars remains an open challenge due
to the complexity of the human body's shape, pose, and appearance. We propose
DreamAvatar to tackle this challenge, which utilizes a trainable NeRF for
predicting density and color features for 3D points and a pre-trained
text-to-image diffusion model for providing 2D self-supervision. Specifically,
we leverage SMPL models to provide rough pose and shape guidance for the
generation. We introduce a dual space design that comprises a canonical space
and an observation space, which are related by a learnable deformation field
through the NeRF, allowing for the transfer of well-optimized texture and
geometry from the canonical space to the target posed avatar. Additionally, we
exploit a normal-consistency regularization to allow for more vivid generation
with detailed geometry and texture. Through extensive evaluations, we
demonstrate that DreamAvatar significantly outperforms existing methods,
establishing a new state-of-the-art for text-and-shape guided 3D human
generation.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 12:11:51 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2023 16:04:24 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Cao",
"Yukang",
""
],
[
"Cao",
"Yan-Pei",
""
],
[
"Han",
"Kai",
""
],
[
"Shan",
"Ying",
""
],
[
"Wong",
"Kwan-Yee K.",
""
]
] |
new_dataset
| 0.999583 |
2304.00971
|
Hanrong Ye
|
Hanrong Ye, Dan Xu
|
Joint 2D-3D Multi-Task Learning on Cityscapes-3D: 3D Detection,
Segmentation, and Depth Estimation
|
A supplementary document for "TaskPrompter: Spatial-Channel
Multi-Task Prompting for Dense Scene Understanding" accepted by ICLR 2023.
Project page:
https://github.com/prismformore/Multi-Task-Transformer/tree/main/TaskPrompter
|
ICLR 2023
| null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This report serves as a supplementary document for TaskPrompter, detailing
its implementation on a new joint 2D-3D multi-task learning benchmark based on
Cityscapes-3D. TaskPrompter presents an innovative multi-task prompting
framework that unifies the learning of (i) task-generic representations, (ii)
task-specific representations, and (iii) cross-task interactions, as opposed to
previous approaches that separate these learning objectives into different
network modules. This unified approach not only reduces the need for meticulous
empirical structure design but also significantly enhances the multi-task
network's representation learning capability, as the entire model capacity is
devoted to optimizing the three objectives simultaneously. TaskPrompter
introduces a new multi-task benchmark based on Cityscapes-3D dataset, which
requires the multi-task model to concurrently generate predictions for
monocular 3D vehicle detection, semantic segmentation, and monocular depth
estimation. These tasks are essential for achieving a joint 2D-3D understanding
of visual scenes, particularly in the development of autonomous driving
systems. On this challenging benchmark, our multi-task model demonstrates
strong performance compared to single-task state-of-the-art methods and
establishes new state-of-the-art results on the challenging 3D detection and
depth estimation tasks.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 13:41:35 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 13:27:21 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2023 12:58:28 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Ye",
"Hanrong",
""
],
[
"Xu",
"Dan",
""
]
] |
new_dataset
| 0.976258 |
2304.02682
|
Homayoun Valafar
|
Arjang Fahim, Stephanie Irausquin, Homayoun Valafar
|
nD-PDPA: nDimensional Probability Density Profile Analysis
|
Published in 2016
| null |
10.1016/B978-0-12-804203-8.00013-4
| null |
cs.CV q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the recent advances in various Structural Genomics Projects, a large
gap remains between the number of sequenced and structurally characterized
proteins. Some reasons for this discrepancy include technical difficulties,
labor, and the cost related to determining a structure by experimental methods
such as NMR spectroscopy. Several computational methods have been developed to
expand the applicability of NMR spectroscopy by addressing temporal and
economic problems more efficiently. While these methods demonstrate
successful outcomes in solving more challenging and structurally novel proteins,
the cost has not been reduced significantly. Probability Density Profile
Analysis (PDPA) has been previously introduced by our lab to directly address
the economics of structure determination of routine proteins and the
identification of novel structures from a minimal set of unassigned NMR data.
2D-PDPA (in which 2D denotes incorporation of data from two alignment media)
has been successful in identifying the structural homolog of an unknown protein
within a library of ~1000 decoy structures. In order to further expand the
selectivity and sensitivity of PDPA, the incorporation of additional data was
necessary. However, the expansion of the original PDPA approach was limited by
its computational requirements where the inclusion of additional data would
render it computationally intractable. Here we present the most recent
developments of PDPA method (nD-PDPA: n Dimensional Probability Density Profile
Analysis) that eliminate 2D-PDPA's computational limitations, and allows
inclusion of RDC data from multiple vector types in multiple alignment media.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 18:25:34 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Fahim",
"Arjang",
""
],
[
"Irausquin",
"Stephanie",
""
],
[
"Valafar",
"Homayoun",
""
]
] |
new_dataset
| 0.993012 |
2304.02757
|
Hend Al-Khalifa Prof.
|
Hend Al-Khalifa, Malak Mashaabi, Ghadi Al-Yahya and Raghad Alnashwan
|
The Saudi Privacy Policy Dataset
|
8 pages, 1 figure
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces the Saudi Privacy Policy Dataset, a diverse compilation
of Arabic privacy policies from various sectors in Saudi Arabia, annotated
according to the 10 principles of the Personal Data Protection Law (PDPL). The
PDPL was established to be compatible with the General Data Protection Regulation
(GDPR), one of the most comprehensive data regulations worldwide. Data were
collected from multiple sources, including the Saudi Central Bank, the Saudi
Arabia National United Platform, the Council of Health Insurance, and general
websites using Google and Wikipedia. The final dataset includes 1,000 websites
belonging to 7 sectors, 4,638 lines of text, 775,370 tokens, and a corpus size
of 8,353 KB. The annotated dataset offers significant reuse potential for
assessing privacy policy compliance, benchmarking privacy practices across
industries, and developing automated tools for monitoring adherence to data
protection regulations. By providing a comprehensive and annotated dataset of
privacy policies, this paper aims to facilitate further research and
development in the areas of privacy policy analysis, natural language
processing, and machine learning applications related to privacy and data
protection, while also serving as an essential resource for researchers,
policymakers, and industry professionals interested in understanding and
promoting compliance with privacy regulations in Saudi Arabia.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 21:40:37 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Al-Khalifa",
"Hend",
""
],
[
"Mashaabi",
"Malak",
""
],
[
"Al-Yahya",
"Ghadi",
""
],
[
"Alnashwan",
"Raghad",
""
]
] |
new_dataset
| 0.999803 |
2304.02797
|
Vitor Guizilini
|
Vitor Guizilini, Igor Vasiljevic, Jiading Fang, Rares Ambrus, Sergey
Zakharov, Vincent Sitzmann, Adrien Gaidon
|
DeLiRa: Self-Supervised Depth, Light, and Radiance Fields
|
Project page: https://sites.google.com/view/tri-delira
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Differentiable volumetric rendering is a powerful paradigm for 3D
reconstruction and novel view synthesis. However, standard volume rendering
approaches struggle with degenerate geometries in the case of limited viewpoint
diversity, a common scenario in robotics applications. In this work, we propose
to use the multi-view photometric objective from the self-supervised depth
estimation literature as a geometric regularizer for volumetric rendering,
significantly improving novel view synthesis without requiring additional
information. Building upon this insight, we explore the explicit modeling of
scene geometry using a generalist Transformer, jointly learning a radiance
field as well as depth and light fields with a set of shared latent codes. We
demonstrate that sharing geometric information across tasks is mutually
beneficial, leading to improvements over single-task learning without an
increase in network complexity. Our DeLiRa architecture achieves
state-of-the-art results on the ScanNet benchmark, enabling high quality
volumetric rendering as well as real-time novel view and depth synthesis in the
limited viewpoint diversity setting.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 00:16:25 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Guizilini",
"Vitor",
""
],
[
"Vasiljevic",
"Igor",
""
],
[
"Fang",
"Jiading",
""
],
[
"Ambrus",
"Rares",
""
],
[
"Zakharov",
"Sergey",
""
],
[
"Sitzmann",
"Vincent",
""
],
[
"Gaidon",
"Adrien",
""
]
] |
new_dataset
| 0.961573 |
2304.02801
|
Fangping Xie
|
Fangping Xie, Pierre Le Meur, Charith Fernando
|
End-to-end Manipulator Calligraphy Planning via Variational Imitation
Learning
|
5 pages, 4 figures
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Planning from demonstrations has shown promising results with the advances of
deep neural networks. One of the most popular real-world applications is
automated handwriting using a robotic manipulator. Classically, it is simplified
as a two-dimensional problem. This representation is suitable for elementary
drawings, but it is not sufficient for Japanese calligraphy or complex works of
art, where the orientation of the pen is part of the user's expression. In this
study, we focus on automated planning of Japanese calligraphy using a
three-dimension representation of the trajectory as well as the rotation of the
pen tip, and propose a novel deep imitation learning neural network that learns
from expert demonstrations through a combination of images and pose data. The
network consists of a combination of variational auto-encoder, bi-directional
LSTM, and Multi-Layer Perceptron (MLP). Experiments are conducted in a
progressive way, and results demonstrate that the proposed approach is
successful in completion of tasks for real-world robots, overcoming the
distribution shift problem in imitation learning. The source code and dataset
will be public.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 00:34:15 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Xie",
"Fangping",
""
],
[
"Meur",
"Pierre Le",
""
],
[
"Fernando",
"Charith",
""
]
] |
new_dataset
| 0.998973 |
2304.02833
|
Anas Gouda
|
Anas Gouda, Moritz Roidl
|
DoUnseen: Zero-Shot Object Detection for Robotic Grasping
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can we segment varying numbers of objects where each specific object
represents its own separate class? To make the problem even more realistic, how
can we add and delete classes on the fly without retraining? This is the case
of robotic applications where no datasets of the objects exist or applications
that include thousands of objects (e.g., in logistics), where it is impossible
to train a single model to learn all of the objects. Most current research on
object segmentation for robotic grasping focuses on class-level object
segmentation (e.g., box, cup, bottle), closed sets (specific objects of a
dataset; for example, YCB dataset), or deep learning-based template matching.
In this work, we are interested in open sets where the number of classes is
unknown, varying, and without pre-knowledge about the objects' types. We
consider each specific object as its own separate class. Our goal is to develop
a zero-shot object detector that requires no training and can add any object as
a class just by capturing a few images of the object. Our main idea is to break
the segmentation pipelines into two steps by combining unseen object
segmentation networks cascaded by zero-shot classifiers. We evaluate our
zero-shot object detector on unseen datasets and compare it to a trained Mask
R-CNN on those datasets. The results show that the performance varies from
practical to unsuitable depending on the environment setup and the objects
being handled. The code is available in our DoUnseen library repository.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 02:45:39 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Gouda",
"Anas",
""
],
[
"Roidl",
"Moritz",
""
]
] |
new_dataset
| 0.999195 |
2304.02838
|
Xuezhi Wen
|
Nan Wang, Xuezhi Wen, Dalin Zhang, Xibin Zhao, Jiahui Ma, Mengxia Luo,
Sen Nie, Shi Wu, Jiqiang Liu
|
TBDetector:Transformer-Based Detector for Advanced Persistent Threats
with Provenance Graph
|
10 pages, 7 figures
| null | null | null |
cs.CR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advanced Persistent Threats (APTs) are difficult to detect due to their
long-term latency and covert, slow, multistage attack patterns. To tackle
these issues, we propose TBDetector, a transformer-based advanced persistent
threat detection method for APT attack detection. Considering that provenance
graphs provide rich historical information and a powerful ability to correlate
historical attacks for identifying anomalous activities, TBDetector
employs provenance analysis for APT detection, which summarizes long-running
system execution with space efficiency and utilizes transformer with
self-attention based encoder-decoder to extract long-term contextual features
of system states to detect slow-acting attacks. Furthermore, we further
introduce anomaly scores to investigate the anomaly of different system states,
where each state is calculated with an anomaly score corresponding to its
similarity score and isolation score. To evaluate the effectiveness of the
proposed method, we have conducted experiments on five public datasets, i.e.,
streamspot, cadets, shellshock, clearscope, and wget_baseline. Experimental
results and comparisons with state-of-the-art methods demonstrate the superior
performance of our proposed method.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 03:08:09 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Wang",
"Nan",
""
],
[
"Wen",
"Xuezhi",
""
],
[
"Zhang",
"Dalin",
""
],
[
"Zhao",
"Xibin",
""
],
[
"Ma",
"Jiahui",
""
],
[
"Luo",
"Mengxia",
""
],
[
"Nie",
"Sen",
""
],
[
"Wu",
"Shi",
""
],
[
"Liu",
"Jiqiang",
""
]
] |
new_dataset
| 0.98038 |
2304.02885
|
Nastaran Moradloo
|
Asad J. Khattak, Austin Harris, Mina Sartipi, Iman Mahdinia, Nastaran
Moradloo, Mohammad SafariTaherkhani
|
Connected and Automated Vehicles Investment and Smart Infrastructure in
Tennessee Part 3: Infrastructure and Vehicular communications: From Dedicated
Short-Range Communications to Cellular Vehicle-to-Everything
|
https://www.tn.gov/content/dam/tn/tdot/long-range-planning/research/final-reports/res2019-final-reports/res2019-07/RES2019-07_Part3_Approved.pdf
| null | null |
RES2019-07
|
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
This report aims to support the Tennessee Department of Transportation's
decisions about vehicle and infrastructure communication technologies. The
transition from Dedicated Short-Range Communications (DSRC) V2X to Cellular
Vehicle to Everything (C-V2X) is explored using USDOT guidance on relevant
issues and presenting the results of experimentation in Tennessee and the
potential pros and cons. DSRC V2X technology has been planned at traffic signals
in Tennessee, e.g., 152 Roadside Units (RSUs) were planned by TDOT using DSRC
V2X and Bluetooth combination units in the I-24 smart corridor. Similarly, many
pilot programs and testbeds around the nation have deployed DSRC V2X technology
and are now impacted by the Federal Communications Commission's (FCC) ruling on
opening the safety band. The implication is that DSRC V2X deployments (and future
deployments) should migrate to C-V2X. Notably, dual-mode RSUs are available
along with LTE C-V2X. The transition can be done by working with vendors, but
surely this involves more than swapping DSRC V2X devices with LTE C-V2X
devices. Complicating the migration to C-V2X is TDOT's role in traffic signal
operations and maintenance, which is limited to funding and
designing/construction of traffic signals, but local agencies operate and
maintain signals. Hence, local agencies will work with TDOT to operate and
maintain C-V2X technology. Moreover, C-V2X technologies are not widely
tested-interference by unlicensed devices and channel congestion can adversely
affect safety-critical applications. Given the substantial uncertainties in
transitioning to these technologies, TDOT's discussion with IOOs about the
operation and maintenance of C-V2X may have to wait for the resolution of these issues,
while TDOT can invest in experimentation with dual-mode devices.
Recommendations are provided about dual-mode devices, CAV data, and needed
research and testing.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 06:28:32 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Khattak",
"Asad J.",
""
],
[
"Harris",
"Austin",
""
],
[
"Sartipi",
"Mina",
""
],
[
"Mahdinia",
"Iman",
""
],
[
"Moradloo",
"Nastaran",
""
],
[
"SafariTaherkhani",
"Mohammad",
""
]
] |
new_dataset
| 0.998167 |
2304.02887
|
Chenzhang Xiao
|
Chenzhang Xiao, Mahshid Mansouri, David Lam, Joao Ramos, Elizabeth T.
Hsiao-Wecksler
|
Design and Control of a Ballbot Drivetrain with High Agility, Minimal
Footprint, and High Payload
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the design and control of a ballbot drivetrain that aims
to achieve high agility, minimal footprint, and high payload capacity while
maintaining dynamic stability. Two hardware platforms and analytical models
were developed to test design and control methodologies. The full-scale ballbot
prototype (MiaPURE) was constructed using off-the-shelf components and designed
to have agility, footprint, and balance similar to that of a walking human. The
planar inverted pendulum testbed (PIPTB) was developed as a reduced-order
testbed for quick validation of system performance. We then proposed a simple
yet robust LQR-PI controller to balance and maneuver the ballbot drivetrain
with a heavy payload. This is crucial because the drivetrain is often subject
to high stiction due to elastomeric components in the torque transmission
system. This controller was first tested in the PIPTB to compare with
traditional LQR and cascaded PI-PD controllers, and then implemented in the
ballbot drivetrain. The MiaPURE drivetrain was able to carry a payload of 60
kg, achieve a maximum speed of 2.3 m/s, and come to a stop from a speed of 1.4
m/s in 2 seconds in a selected translation direction. Finally, we demonstrated
the omnidirectional movement of the ballbot drivetrain in an indoor environment
as a payload-carrying robot and a human-riding mobility device. Our experiments
demonstrated the feasibility of using the ballbot drivetrain as a universal
mobility platform with agile movements, minimal footprint, and high payload
capacity using our proposed design and control methodologies.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 06:33:20 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Xiao",
"Chenzhang",
""
],
[
"Mansouri",
"Mahshid",
""
],
[
"Lam",
"David",
""
],
[
"Ramos",
"Joao",
""
],
[
"Hsiao-Wecksler",
"Elizabeth T.",
""
]
] |
new_dataset
| 0.999573 |
2304.02901
|
Hao Zhang
|
Hao Zhang
|
SpanRE: Entities and Overlapping Relations Extraction Based on Spans and
Entity Attention
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extracting entities and relations is an essential task of information
extraction. Triplets extracted from a sentence might overlap with each other.
Previous methods either did not address the overlapping issue or solved it only
partially. To tackle the triplet overlapping problem completely, we first
extract candidate subjects with a standard span mechanism. We then present a
labeled span mechanism to extract the objects and relations simultaneously: it
generates labeled spans whose start and end positions indicate the objects and
whose labels correspond to the relations between the subject and the objects.
In addition, we design an entity attention mechanism to enhance the information
fusion between the subject and the sentence while extracting objects and
relations. We test our method on two public datasets, where it achieves the
best performance.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 07:19:39 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Zhang",
"Hao",
""
]
] |
new_dataset
| 0.998713 |
2304.02956
|
Zhanibek Darush
|
Zhanibek Darush, Mikhail Martynov, Aleksey Fedoseev, Aleksei
Shcherbak, and Dzmitry Tsetserukou
|
SwarmGear: Heterogeneous Swarm of Drones with Reconfigurable Leader
Drone and Virtual Impedance Links for Multi-Robot Inspection
|
IEEE International Conference on Unmanned Aircraft System (ICUAS
2023), Warsaw, Poland, June 6-9, 2023, 2023, in print
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The continuous monitoring by drone swarms remains a challenging problem due
to the lack of power supply and the inability of drones to land on uneven
surfaces. Heterogeneous swarms, including ground and aerial vehicles, can
support longer inspections and carry a higher number of sensors on board.
However, their capabilities are limited by the mobility of wheeled and legged
robots in a cluttered environment.
In this paper, we propose a novel concept for autonomous inspection that we
call SwarmGear. SwarmGear utilizes a heterogeneous swarm that investigates the
environment in a leader-follower formation. The leader drone is able to land on
rough terrain and traverse it by four compliant robotic legs, possessing both
the functionalities of an aerial and mobile robot. To preserve the formation of
the swarm during its motion, virtual impedance links were developed between the
leader and the follower drones.
We evaluated experimentally the accuracy of the hybrid leader drone's ground
locomotion. By changing the step parameters, the optimal step configuration was
found. Two types of gaits were evaluated. The experiments revealed low
crosstrack error (mean of 2 cm and max of 4.8 cm) and the ability of the leader
drone to move with a 190 mm step length and a 3 degree standard yaw deviation.
Four types of drone formations were considered. The best formation was used for
experiments with SwarmGear, and it showed low overall crosstrack error for the
swarm (mean 7.9 cm for the type 1 gait and 5.1 cm for the type 2 gait).
The proposed system can potentially improve the performance of autonomous
swarms in cluttered and unstructured environments by allowing all agents of the
swarm to switch between aerial and ground formations to overcome various
obstacles and perform missions over a large area.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 09:34:33 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Darush",
"Zhanibek",
""
],
[
"Martynov",
"Mikhail",
""
],
[
"Fedoseev",
"Aleksey",
""
],
[
"Shcherbak",
"Aleksei",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.996544 |
2304.02982
|
Ehsan Nowroozi
|
Ehsan Nowroozi, Yoosef Habibi, Mauro Conti
|
Spritz-PS: Validation of Synthetic Face Images Using a Large Dataset of
Printed Documents
| null | null | null | null |
cs.CV cs.AI cs.CR cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The capability to perform effective forensic analysis on printed and scanned
(PS) images is essential in many applications. PS documents may be used to
conceal the artifacts that arise from the synthetic nature of images, since
such artifacts are typically present in manipulated images and the main
artifacts of synthetic images can be removed after the PS process. Due to the
appeal of Generative Adversarial Networks (GANs), synthetic face images
generated with GAN models are difficult to differentiate from genuine human
faces and may be used to create counterfeit identities. Additionally, since
GAN models do not account for the physiological constraints of human faces and
their impact on human IRISes, distinguishing genuine from synthetic IRISes in
the PS scenario becomes extremely difficult. Because large-scale reference IRIS
datasets for the PS scenario are lacking, we aim to develop a novel dataset,
available at [45], to become a standard for Multimedia Forensics (MF)
investigation. In this paper, we provide a novel dataset made up of a large
number of synthetic and natural printed IRISes taken from VIPPrint Printed and
Scanned face images. We extracted irises from face images; due to eyelid
occlusion, some of the captured irises are incomplete. To fill the missing
pixels of the extracted irises, we applied techniques that discover the complex
links between the iris images. To highlight the problems involved in evaluating
the dataset's IRIS images, we conducted a large number of analyses employing
Siamese Neural Networks with backbones such as ResNet50, Xception, VGG16, and
MobileNet-v2 to assess the similarities between genuine and synthetic human
IRISes. For instance, using the Xception network, we achieved 56.76% similarity
of IRISes for synthetic images and 92.77% similarity of IRISes for real images.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 10:28:34 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Nowroozi",
"Ehsan",
""
],
[
"Habibi",
"Yoosef",
""
],
[
"Conti",
"Mauro",
""
]
] |
new_dataset
| 0.999849 |
2304.02986
|
Thibault Gauthier
|
Thibault Gauthier, Chad E. Brown, Mikolas Janota, Josef Urban
|
A Mathematical Benchmark for Inductive Theorem Provers
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a benchmark of 29687 problems derived from the On-Line
Encyclopedia of Integer Sequences (OEIS). Each problem expresses the
equivalence of two syntactically different programs generating the same OEIS
sequence. Such programs were conjectured by a learning-guided synthesis system
using a language with looping operators. The operators implement recursion, and
thus many of the proofs require induction on natural numbers. The benchmark
contains problems of varying difficulty from a wide area of mathematical
domains. We believe that these characteristics will make it an effective judge
for the progress of inductive theorem provers in this domain for years to come.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 10:41:51 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Gauthier",
"Thibault",
""
],
[
"Brown",
"Chad E.",
""
],
[
"Janota",
"Mikolas",
""
],
[
"Urban",
"Josef",
""
]
] |
new_dataset
| 0.999773 |
2304.02992
|
Thomas Wirtgen
|
Thomas Wirtgen and Nicolas Rybowski and Cristel Pelsser and Olivier
Bonaventure
|
Routing over QUIC: Bringing transport innovations to routing protocols
|
2 pages, 1 figure, NSDI '23 Poster Session
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
By combining the security features of TLS with the reliability of TCP, QUIC
opens new possibilities for many applications. We demonstrate the benefits that
QUIC brings for routing protocols. Current Internet routing protocols use
insecure transport protocols. BGP uses TCP possibly with authentication. OSPF
uses its own transport protocol above plain IP. We design and implement a
library that allows to replace the transport protocols used by BGP and OSPF
with QUIC. We apply this library to the BIRD routing daemon and report
preliminary results.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 10:59:52 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Wirtgen",
"Thomas",
""
],
[
"Rybowski",
"Nicolas",
""
],
[
"Pelsser",
"Cristel",
""
],
[
"Bonaventure",
"Olivier",
""
]
] |
new_dataset
| 0.999454 |
2304.02993
|
Amir Masoud Ghalamzan Esfahani
|
Muhammad Arshad Khan, Max Kenney, Jack Painter, Disha Kamale, Riza
Batista-Navarro, Amir Ghalamzan-E
|
Natural Language Robot Programming: NLP integrated with autonomous
robotic grasping
|
submitted to IROS 2023
| null | null | null |
cs.RO cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a grammar-based natural language framework for
robot programming, specifically for pick-and-place tasks. Our approach uses a
custom dictionary of action words, designed to store together words that share
meaning, allowing for easy expansion of the vocabulary by adding more action
words from a lexical database. We validate our Natural Language Robot
Programming (NLRP) framework through simulation and real-world experimentation,
using a Franka Panda robotic arm equipped with a calibrated camera-in-hand and
a microphone. Participants were asked to complete a pick-and-place task using
verbal commands, which were converted into text using Google's Speech-to-Text
API and processed through the NLRP framework to obtain joint space trajectories
for the robot. Our results indicate that our approach has a high system
usability score. The framework's dictionary can be easily extended without
relying on transfer learning or large data sets. In the future, we plan to
compare the presented framework with different approaches of human-assisted
pick-and-place tasks via a comprehensive user study.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 11:06:30 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Khan",
"Muhammad Arshad",
""
],
[
"Kenney",
"Max",
""
],
[
"Painter",
"Jack",
""
],
[
"Kamale",
"Disha",
""
],
[
"Batista-Navarro",
"Riza",
""
],
[
"Ghalamzan-E",
"Amir",
""
]
] |
new_dataset
| 0.976115 |
2304.03022
|
Chen Li
|
Chen Li, Yixiao Ge, Jiayong Mao, Dian Li, Ying Shan
|
TagGPT: Large Language Models are Zero-shot Multimodal Taggers
|
13 pages, 6 figures
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tags are pivotal in facilitating the effective distribution of multimedia
content in various applications in the contemporary Internet era, such as
search engines and recommendation systems. Recently, large language models
(LLMs) have demonstrated impressive capabilities across a wide range of tasks.
In this work, we propose TagGPT, a fully automated system capable of tag
extraction and multimodal tagging in a completely zero-shot fashion. Our core
insight is that, through elaborate prompt engineering, LLMs are able to extract
and reason about proper tags given textual clues of multimodal data, e.g., OCR,
ASR, title, etc. Specifically, to automatically build a high-quality tag set
that reflects user intent and interests for a specific application, TagGPT
predicts large-scale candidate tags from a series of raw data via prompting
LLMs, filtered with frequency and semantics. Given a new entity that needs
tagging for distribution, TagGPT introduces two alternative options for
zero-shot tagging, i.e., a generative method with late semantic matching with
the tag set, and another selective method with early matching in prompts. It is
worth noting that TagGPT provides a system-level solution based on a modular
framework equipped with a pre-trained LLM (GPT-3.5 used here) and a sentence
embedding model (SimCSE used here), both of which can be seamlessly replaced
with more advanced alternatives. TagGPT is applicable to various modalities of data
in modern social media and showcases strong generalization ability to a wide
range of applications. We evaluate TagGPT on publicly available datasets, i.e.,
Kuaishou and Food.com, and demonstrate the effectiveness of TagGPT compared to
existing hashtags and off-the-shelf taggers. Project page:
https://github.com/TencentARC/TagGPT.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 12:17:46 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Li",
"Chen",
""
],
[
"Ge",
"Yixiao",
""
],
[
"Mao",
"Jiayong",
""
],
[
"Li",
"Dian",
""
],
[
"Shan",
"Ying",
""
]
] |
new_dataset
| 0.996245 |
2304.03079
|
Frans Skarman
|
Frans Skarman and Oscar Gustafsson
|
Spade: An Expression-Based HDL With Pipelines
|
Presented at the 3rd Workshop on Open-Source Design Automation
(OSDA), 2023 (arXiv:2303.18024)
| null | null |
OSDA/2023/04
|
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Spade is a new open source hardware description language (HDL) designed to
increase developer productivity without sacrificing the low-level control
offered by HDLs. It is a standalone language which takes inspiration from
modern software languages, and adds useful abstractions for common hardware
constructs. It also comes with a convenient set of tooling, such as a helpful
compiler, a build system with dependency management, tools for debugging, and
editor integration.
|
[
{
"version": "v1",
"created": "Thu, 6 Apr 2023 14:01:12 GMT"
}
] | 2023-04-07T00:00:00 |
[
[
"Skarman",
"Frans",
""
],
[
"Gustafsson",
"Oscar",
""
]
] |
new_dataset
| 0.99827 |