Dataset schema (one row per arXiv record; ranges are min-max observed over the data):

  column          type           range / classes
  --------------  -------------  --------------------------------
  id              string         length 9 to 10
  submitter       string         length 2 to 52
  authors         string         length 4 to 6.51k
  title           string         length 4 to 246
  comments        string         length 1 to 523
  journal-ref     string         length 4 to 345
  doi             string         length 11 to 120
  report-no       string         length 2 to 243
  categories      string         length 5 to 98
  license         string         9 distinct values
  abstract        string         length 33 to 3.33k
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      string         1 distinct value ("new_dataset")
  probability     float64        0.95 to 1
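A minimal sketch of loading a dataset with this schema via the Hugging Face datasets library (the repository path below is a hypothetical placeholder, since this dump does not name it):

```python
from datasets import load_dataset

# "user/arxiv-new-dataset-predictions" is a hypothetical placeholder path.
ds = load_dataset("user/arxiv-new-dataset-predictions", split="train")

row = ds[0]  # each row follows the schema above
print(row["id"], row["title"])
print(row["prediction"], row["probability"])  # e.g., "new_dataset", 0.9989
```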
2209.12041
Gabriela Ahmadi-Assalemi
Gabriela Ahmadi-Assalemi (1), Haider Al-Khateeb (1) ((1) Cyber Quarter - Midlands Centre for Cyber Security, University of Wolverhampton, UK)
Blockchain technologies in the design of Industrial Control Systems for Smart Cities
8 pages, 5 figures
published in IEEE Blockchain Technical Briefs, 2022, https://blockchain.ieee.org/technicalbriefs
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The proliferation of sensor technologies in Industrial Control Systems (ICS) has helped transform these environments towards better automation, process control, and monitoring. However, sensor technologies expose the smart cities of the future to complex security challenges. Luckily, the sensing capabilities also create opportunities to capture various data types, which, apart from operational use, can add substantial value to developing mechanisms to protect ICS and critical infrastructure. We discuss Blockchain (BC), a disruptive technology with applications ranging from cryptocurrency to smart contracts, and the value of integrating BC technologies into the design of ICS to support modern digital forensic readiness.
[ { "version": "v1", "created": "Sat, 24 Sep 2022 15:52:39 GMT" } ]
2022-09-27T00:00:00
[ [ "Ahmadi-Assalemi", "Gabriela", "" ], [ "Al-Khateeb", "Haider", "" ] ]
new_dataset
0.998966
2209.12048
Andrea Carron
Andrea Carron, Sabrina Bodmer, Lukas Vogel, Ren\'e Zurbr\"ugg, David Helm, Rahel Rickenbach, Simon Muntwiler, Jerome Sieber, Melanie N. Zeilinger
Chronos and CRS: Design of a miniature car-like robot and a software framework for single and multi-agent robotics and control
null
null
null
null
cs.RO cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From both an educational and research point of view, experiments on hardware are a key aspect of robotics and control. In the last decade, many open-source hardware and software frameworks for wheeled robots have been presented, mainly in the form of unicycles and car-like robots, with the goal of making robotics accessible to a wider audience and to support control systems development. Unicycles are usually small and inexpensive, and therefore facilitate experiments in a larger fleet, but they are not suited for high-speed motion. Car-like robots are more agile, but they are usually larger and more expensive, thus requiring more resources in terms of space and money. In order to bridge this gap, we present Chronos, a new car-like 1/28th scale robot with customized open-source electronics, and CRS, an open-source software framework for control and robotics. The CRS software framework includes the implementation of various state-of-the-art algorithms for control, estimation, and multi-agent coordination. With this work, we aim to provide easier access to hardware and reduce the engineering time needed to start new educational and research projects.
[ { "version": "v1", "created": "Sat, 24 Sep 2022 16:36:21 GMT" } ]
2022-09-27T00:00:00
[ [ "Carron", "Andrea", "" ], [ "Bodmer", "Sabrina", "" ], [ "Vogel", "Lukas", "" ], [ "Zurbrügg", "René", "" ], [ "Helm", "David", "" ], [ "Rickenbach", "Rahel", "" ], [ "Muntwiler", "Simon", "" ], [ "Sieber", "Jerome", "" ], [ "Zeilinger", "Melanie N.", "" ] ]
new_dataset
0.999755
2209.12136
Elijah S. Lee
Elijah S. Lee, Giuseppe Loianno, Dinesh Jayaraman, Vijay Kumar
Vision-based Perimeter Defense via Multiview Pose Estimation
7 pages, 10 figures
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous studies of the perimeter defense game have largely focused on the fully observable setting where the true player states are known to all players. However, this is unrealistic for practical implementation since defenders may have to perceive the intruders and estimate their states. In this work, we study the perimeter defense game in a photo-realistic simulator and the real world, requiring defenders to estimate intruder states from vision. We train a deep machine-learning-based system for intruder pose detection with domain randomization that aggregates multiple views to reduce state estimation errors and adapts the defensive strategy to account for this. We introduce new performance metrics to evaluate vision-based perimeter defense. Through extensive experiments, we show that our approach improves state estimation and, ultimately, perimeter defense performance in both 1-defender-vs-1-intruder and 2-defenders-vs-1-intruder games.
[ { "version": "v1", "created": "Sun, 25 Sep 2022 03:41:45 GMT" } ]
2022-09-27T00:00:00
[ [ "Lee", "Elijah S.", "" ], [ "Loianno", "Giuseppe", "" ], [ "Jayaraman", "Dinesh", "" ], [ "Kumar", "Vijay", "" ] ]
new_dataset
0.990654
2209.12140
Huyen N. Nguyen
Huyen N. Nguyen, Caleb Trujillo, Tommy Dang
Modie Viewer: Protein Beasts and How to View Them
5 pages, 5 figures, Bio+MedVis Challenge @ IEEE VIS 2022
null
null
null
cs.HC cs.GR q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Understanding chemical modifications on proteins opens up further possibilities for research on rare diseases. This work proposes visualization approaches using two-dimensional (2D) and three-dimensional (3D) visual representations to analyze and gain insights into protein modifications. In this work, we present the application of Modie Viewer as an attempt to address the Bio+MedVis Challenge at IEEE VIS 2022.
[ { "version": "v1", "created": "Sun, 25 Sep 2022 04:22:01 GMT" } ]
2022-09-27T00:00:00
[ [ "Nguyen", "Huyen N.", "" ], [ "Trujillo", "Caleb", "" ], [ "Dang", "Tommy", "" ] ]
new_dataset
0.991057
2209.12164
Yunlong Tang
Yunlong Tang, Siting Xu, Teng Wang, Qin Lin, Qinglin Lu, Feng Zheng
Multi-modal Segment Assemblage Network for Ad Video Editing with Importance-Coherence Reward
Accepted by ACCV2022
null
null
null
cs.CV cs.AI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advertisement video editing aims to automatically edit advertising videos into shorter videos while retaining coherent content and crucial information conveyed by advertisers. It mainly involves two stages: video segmentation and segment assemblage. Existing methods perform well at the video segmentation stage but suffer from dependence on extra cumbersome models and poor performance at the segment assemblage stage. To address these problems, we propose M-SAN (Multi-modal Segment Assemblage Network), which can perform the segment assemblage task efficiently and coherently in an end-to-end manner. It utilizes multi-modal representations extracted from the segments and follows the Encoder-Decoder Ptr-Net framework with the Attention mechanism. An importance-coherence reward is designed for training M-SAN. We experiment on the Ads-1k dataset with 1000+ videos under rich ad scenarios collected from advertisers. To evaluate the methods, we propose a unified metric, Imp-Coh@Time, which comprehensively assesses the importance, coherence, and duration of the outputs at the same time. Experimental results show that our method achieves better performance than random selection and the previous method on the metric. Ablation experiments further verify that the multi-modal representation and importance-coherence reward significantly improve performance. The Ads-1k dataset is available at: https://github.com/yunlong10/Ads-1k
[ { "version": "v1", "created": "Sun, 25 Sep 2022 06:51:45 GMT" } ]
2022-09-27T00:00:00
[ [ "Tang", "Yunlong", "" ], [ "Xu", "Siting", "" ], [ "Wang", "Teng", "" ], [ "Lin", "Qin", "" ], [ "Lu", "Qinglin", "" ], [ "Zheng", "Feng", "" ] ]
new_dataset
0.999367
2209.12254
Rui Wan
Rui Wan, Shuangjie Xu, Wei Wu, Xiaoyi Zou, Tongyi Cao
From One to Many: Dynamic Cross Attention Networks for LiDAR and Camera Fusion
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LiDAR and cameras are two complementary sensors for 3D perception in autonomous driving. LiDAR point clouds have accurate spatial and geometric information, while RGB images provide textural and color data for context reasoning. To exploit LiDAR and cameras jointly, existing fusion methods tend to align each 3D point to only one projected image pixel based on calibration, namely one-to-one mapping. However, the performance of these approaches highly relies on the calibration quality, which is sensitive to the temporal and spatial synchronization of sensors. Therefore, we propose a Dynamic Cross Attention (DCA) module with a novel one-to-many cross-modality mapping that learns multiple offsets from the initial projection towards the neighborhood and thus develops tolerance to calibration error. Moreover, a \textit{dynamic query enhancement} is proposed to perceive the model-independent calibration, which further strengthens DCA's tolerance to the initial misalignment. The whole fusion architecture, named Dynamic Cross Attention Network (DCAN), exploits multi-level image features and adapts to multiple representations of point clouds, which allows DCA to serve as a plug-in fusion module. Extensive experiments on nuScenes and KITTI prove DCA's effectiveness. The proposed DCAN outperforms state-of-the-art methods on the nuScenes detection challenge.
[ { "version": "v1", "created": "Sun, 25 Sep 2022 16:10:14 GMT" } ]
2022-09-27T00:00:00
[ [ "Wan", "Rui", "" ], [ "Xu", "Shuangjie", "" ], [ "Wu", "Wei", "" ], [ "Zou", "Xiaoyi", "" ], [ "Cao", "Tongyi", "" ] ]
new_dataset
0.99399
2209.12270
Charles Dawson
Charles Dawson, Austin Garrett, Falk Pollok, Yang Zhang, Chuchu Fan
Barrier functions enable safety-conscious force-feedback control
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to be effective partners for humans, robots must become increasingly comfortable with making contact with their environment. Unfortunately, it is hard for robots to distinguish between ``just enough'' and ``too much'' force: some force is required to accomplish the task but too much might damage equipment or injure humans. Traditional approaches to designing compliant force-feedback controllers, such as stiffness control, require difficult hand-tuning of control parameters and make it difficult to build safe, effective robot collaborators. In this paper, we propose a novel yet easy-to-implement force feedback controller that uses control barrier functions (CBFs) to derive a compliant controller directly from users' specifications of the maximum allowable forces and torques. We compare our approach to traditional stiffness control to demonstrate potential advantages of our control architecture, and we demonstrate the effectiveness of our controller on an example human-robot collaboration task: cooperative manipulation of a bulky object.
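The controller family described here admits a compact statement: a control barrier function encodes the safe set, and the controller enforces its forward invariance. In the standard form (the paper's exact formulation may differ), with $h(x) = F_{\max} - F(x)$ for a user-specified force limit $F_{\max}$, the control input $u$ is chosen so that
\[
\dot h(x, u) \ \geq\ -\alpha\, h(x), \qquad \alpha > 0,
\]
which lets the contact force approach the limit only at a rate that vanishes as $h(x) \to 0$, so the bound is never violated.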
[ { "version": "v1", "created": "Sun, 25 Sep 2022 17:20:43 GMT" } ]
2022-09-27T00:00:00
[ [ "Dawson", "Charles", "" ], [ "Garrett", "Austin", "" ], [ "Pollok", "Falk", "" ], [ "Zhang", "Yang", "" ], [ "Fan", "Chuchu", "" ] ]
new_dataset
0.987322
2209.12310
Cristobal A. Navarro
Alan Keith, H\'ector Ferrada, Crist\'obal A. Navarro
Accelerating the Convex Hull Computation with a Parallel GPU Algorithm
7 pages, in Spanish language
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The convex hull is a fundamental geometric structure for many applications where groups of points must be enclosed or represented by a convex polygon. Although efficient sequential convex hull algorithms exist and are constantly being used in applications, their computation time is often considered an issue for time-sensitive tasks such as real-time collision detection, clustering, or image processing for virtual reality, among others, where fast response times are required. In this work we propose a parallel GPU-based adaptation of heaphull, a state-of-the-art CPU algorithm that computes the convex hull by first applying an efficient filtering stage followed by the actual convex hull computation. More specifically, this work parallelizes the filtering stage, adapting it to the GPU programming model as a series of parallel reductions. Experimental evaluation shows that the proposed implementation significantly improves the performance of the convex hull computation, reaching speedups of up to $4\times$ over the sequential CPU-based heaphull and between $3\times \sim 4\times$ over existing GPU-based approaches.
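As a rough illustration of the kind of filtering stage being parallelized (a minimal NumPy sketch of extreme-point pre-filtering, written as array reductions; this is not the authors' GPU kernel):

```python
import numpy as np

def prefilter(points):
    """Keep only convex-hull candidates.

    The four axis-extreme points (found with argmin/argmax reductions,
    the part that maps naturally onto GPU parallel reductions) span a
    quadrilateral; any point strictly inside it cannot lie on the hull
    and is discarded.
    """
    quad = points[[points[:, 0].argmin(), points[:, 1].argmax(),
                   points[:, 0].argmax(), points[:, 1].argmin()]]  # clockwise
    keep = np.zeros(len(points), dtype=bool)
    for i in range(4):
        a, b = quad[i], quad[(i + 1) % 4]
        cross = ((b[0] - a[0]) * (points[:, 1] - a[1])
                 - (b[1] - a[1]) * (points[:, 0] - a[0]))
        keep |= cross >= 0  # on or outside this edge of the clockwise quad
    return points[keep]
```

The surviving points are then handed to the actual convex hull computation.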
[ { "version": "v1", "created": "Sun, 25 Sep 2022 19:50:51 GMT" } ]
2022-09-27T00:00:00
[ [ "Keith", "Alan", "" ], [ "Ferrada", "Héctor", "" ], [ "Navarro", "Cristóbal A.", "" ] ]
new_dataset
0.972266
2209.12352
Osama Khalid
Osama Khalid, Padmini Srinivasan
Smells like Teen Spirit: An Exploration of Sensorial Style in Literary Genres
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Numerous studies in psychology, neuroscience, and sensorial linguistics have shown that sensory perceptions and language are interconnected. Set in this rich context, we ask whether the use of sensorial language in writing is part of linguistic style. This question is important from the perspective of stylometrics research, where a rich set of language features has been explored but insufficient attention has been given to features related to sensorial language. Taking this as our goal, we explore several angles of sensorial language and style in collections of lyrics, novels, and poetry. We find, for example, that individual use of sensorial language is not a random phenomenon; choice is likely involved. Also, sensorial style is generally stable over time - the shifts are extremely small. Moreover, style can be extracted from just a few hundred sentences that contain sensorial terms. We also identify representative and distinctive features within each genre. For example, we observe that 4 of the top 6 representative features in the novels collection involve individuals using olfactory language where we would expect non-olfactory language.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 00:17:10 GMT" } ]
2022-09-27T00:00:00
[ [ "Khalid", "Osama", "" ], [ "Srinivasan", "Padmini", "" ] ]
new_dataset
0.994797
2209.12386
Xu Yajun
Yajun Xu, Chuwen Huang, Yibing Nan, Shiguo Lian
TAD: A Large-Scale Benchmark for Traffic Accidents Detection from Video Surveillance
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Automatic traffic accident detection has appealed to the machine vision community due to its implications for the development of autonomous intelligent transportation systems (ITS) and its importance to traffic safety. Most previous studies on the efficient analysis and prediction of traffic accidents, however, have used small-scale datasets with limited coverage, which limits their effectiveness and applicability. Existing traffic accident datasets are either small-scale, not sourced from surveillance cameras, not open-sourced, or not built for freeway scenes. Since accidents on freeways tend to cause serious damage and happen too fast to be caught on the spot, an open-sourced dataset of freeway traffic accidents collected from surveillance cameras is in great need and of practical importance. To help the vision community address these shortcomings, we endeavored to collect video data of real traffic accidents covering abundant scenes. After integration and annotation along various dimensions, a large-scale traffic accident dataset named TAD is proposed in this work. Various experiments on image classification, object detection, and video classification tasks, using mainstream public vision algorithms and frameworks, are conducted to demonstrate the performance of different methods. The proposed dataset, together with the experimental results, is presented as a new benchmark to advance computer vision research, especially in ITS.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 03:00:50 GMT" } ]
2022-09-27T00:00:00
[ [ "Xu", "Yajun", "" ], [ "Huang", "Chuwen", "" ], [ "Nan", "Yibing", "" ], [ "Lian", "Shiguo", "" ] ]
new_dataset
0.999862
2209.12447
Ozioma Collins Oguine
Kanyifeechukwu Jane Oguine, Ozioma Collins Oguine, Hashim Ibrahim Bisallah
YOLO v3: Visual and Real-Time Object Detection Model for Smart Surveillance Systems(3s)
8 pages, 12 figures, 2 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Can we see it all? Do we know it all? These are questions thrown at human beings in our contemporary society to evaluate our tendency to solve problems. Recent studies have explored several models for object detection; however, most have failed to meet the demand for objectiveness and predictive accuracy, especially in developing and under-developed countries. Consequently, several global security threats have necessitated the development of efficient approaches to tackle these issues. This paper proposes an object detection model for cyber-physical systems known as Smart Surveillance Systems (3s). This research proposes a 2-phase approach, highlighting the advantages of the YOLO v3 deep learning architecture in real-time and visual object detection. A transfer learning approach was implemented for this research to reduce training time and computing resources. The dataset utilized for training the model is the MS COCO dataset, which contains 328,000 annotated image instances. Deep learning techniques such as pre-processing, data pipelining, and detection were implemented to improve efficiency. Compared to other novel research models, the proposed model's results performed exceedingly well in detecting WILD objects in surveillance footage. An accuracy of 99.71% was recorded, with an improved mAP of 61.5.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 06:34:12 GMT" } ]
2022-09-27T00:00:00
[ [ "Oguine", "Kanyifeechukwu Jane", "" ], [ "Oguine", "Ozioma Collins", "" ], [ "Bisallah", "Hashim Ibrahim", "" ] ]
new_dataset
0.997981
2209.12475
Zhiming Zhang
Huanjing Yue, Zhiming Zhang, Jingyu Yang
Real-RawVSR: Real-World Raw Video Super-Resolution with a Benchmark Dataset
Accepted by ECCV2022
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, real image super-resolution (SR) has achieved promising results due to the development of SR datasets and corresponding real SR methods. In contrast, the field of real video SR is lagging behind, especially for real raw videos. Considering the superiority of raw image SR over sRGB image SR, we construct a real-world raw video SR (Real-RawVSR) dataset and propose a corresponding SR method. We utilize two DSLR cameras and a beam-splitter to simultaneously capture low-resolution (LR) and high-resolution (HR) raw videos with 2x, 3x, and 4x magnifications. There are 450 video pairs in our dataset, with scenes varying from indoor to outdoor, and motions including camera and object movements. To our knowledge, this is the first real-world raw VSR dataset. Since the raw video is characterized by the Bayer pattern, we propose a two-branch network, which deals with both the packed RGGB sequence and the original Bayer pattern sequence, and the two branches are complementary to each other. After going through the proposed co-alignment, interaction, fusion, and reconstruction modules, we generate the corresponding HR sRGB sequence. Experimental results demonstrate that the proposed method outperforms benchmark real and synthetic video SR methods with either raw or sRGB inputs. Our code and dataset are available at https://github.com/zmzhang1998/Real-RawVSR.
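For readers unfamiliar with the packed-RGGB representation mentioned above, here is a minimal NumPy sketch of packing a Bayer RGGB mosaic into a four-channel, half-resolution array (illustrative only, not the authors' code):

```python
import numpy as np

def pack_rggb(raw):
    """Pack an (H, W) RGGB Bayer mosaic into a (4, H/2, W/2) array.

    Each 2x2 Bayer cell [[R, G1], [G2, B]] becomes one spatial position
    with four channels -- the usual network input for raw video/image
    super-resolution.
    """
    assert raw.shape[0] % 2 == 0 and raw.shape[1] % 2 == 0
    return np.stack([raw[0::2, 0::2],   # R
                     raw[0::2, 1::2],   # G1
                     raw[1::2, 0::2],   # G2
                     raw[1::2, 1::2]])  # B
```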
[ { "version": "v1", "created": "Mon, 26 Sep 2022 07:33:31 GMT" } ]
2022-09-27T00:00:00
[ [ "Yue", "Huanjing", "" ], [ "Zhang", "Zhiming", "" ], [ "Yang", "Jingyu", "" ] ]
new_dataset
0.999879
2209.12480
Michael Schmitt
Michael Schmitt, Pedram Ghamisi, Naoto Yokoya, Ronny H\"ansch
EOD: The IEEE GRSS Earth Observation Database
This paper contains the description of the IEEE-GRSS Earth Observation Database
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In the era of deep learning, annotated datasets have become a crucial asset to the remote sensing community. In the last decade, a plethora of different datasets was published, each designed for a specific data type and with a specific task or application in mind. In the jungle of remote sensing datasets, it can be hard to keep track of what is already available. With this paper, we introduce the IEEE GRSS Earth Observation Database (EOD), an interactive online platform for cataloguing different types of datasets leveraging remote sensing imagery.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 07:44:41 GMT" } ]
2022-09-27T00:00:00
[ [ "Schmitt", "Michael", "" ], [ "Ghamisi", "Pedram", "" ], [ "Yokoya", "Naoto", "" ], [ "Hänsch", "Ronny", "" ] ]
new_dataset
0.997195
2209.12523
Gauthier Roussilhe
Gauthier Roussilhe, Thibault Pirson, Mathieu Xhonneux, David Bol
From Silicon Shield to Carbon Lock-in? The Environmental Footprint of Electronic Components Manufacturing in Taiwan (2015-2020)
19 pages, 9 figures, 2 tables
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Taiwan plans to rapidly increase its industrial production capacity for electronic components while concurrently setting policies for its ecological transition. Given that the island is responsible for manufacturing a significant share of worldwide electronic components, the sustainability of the Taiwanese electronics industry is of critical interest. In this paper, we survey the environmental footprint of 16 Taiwanese electronic components manufacturers (ECMs) using corporate sustainability responsibility (CSR) reports. Based on data from 2015 to 2020, this study finds that our sample of 16 manufacturers increased its greenhouse gas (GHG) emissions by 7.5\% per year, its final energy and electricity consumption by 8.8\% and 8.9\%, and its water usage by 6.1\%. We show that the volume of manufactured electronic components and the environmental footprints compiled in this study are strongly correlated, which suggests that relative efficiency gains are not sufficient to curb the environmental footprint at the national scale. Given the critical nature of the electronics industry for Taiwan's geopolitics and economics, the observed increase in energy consumption, and the slow renewable-energy roll-out, these industrial activities could create a carbon lock-in, blocking the Taiwanese government from achieving its carbon reduction goals and its sustainability policies. Besides, the European Union, the USA, and even China aim at developing industrial ecosystems targeting sub-10nm CMOS technology nodes similar to Taiwan's. This study thus provides important insights regarding the environmental implications associated with such a technology roadmap. All data and calculation models used in this study are provided as supplementary material.
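To put the reported growth rate in perspective, a back-of-the-envelope compounding check (assuming the 7.5\% annual increase held over the whole 2015-2020 window):
\[
(1 + 0.075)^{5} \approx 1.44,
\]
i.e., roughly a 44\% cumulative increase in GHG emissions over the five-year period.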
[ { "version": "v1", "created": "Mon, 26 Sep 2022 08:59:45 GMT" } ]
2022-09-27T00:00:00
[ [ "Roussilhe", "Gauthier", "" ], [ "Pirson", "Thibault", "" ], [ "Xhonneux", "Mathieu", "" ], [ "Bol", "David", "" ] ]
new_dataset
0.994508
2209.12587
Lutz Oettershagen
Lutz Oettershagen, Petra Mutzel
TGLib: An Open-Source Library for Temporal Graph Analysis
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We initiate an open-source library for the efficient analysis of temporal graphs. We consider one of the standard models of dynamic networks in which each edge has a discrete timestamp and transition time. Recently there has been a massive interest in analyzing such temporal graphs. Common computational data mining and analysis tasks include the computation of temporal distances, centrality measures, and network statistics like topological overlap, burstiness, or temporal diameter. To fulfill the increasing demand for efficient and easy-to-use implementations of temporal graph algorithms, we introduce the open-source library TGLib, which integrates efficient data structures and algorithms for temporal graph analysis. TGLib is highly efficient and versatile, providing simple and convenient C++ and Python interfaces, targeting computer scientists, practitioners, students, and the (temporal) network research community.
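As a concept illustration of the temporal-distance computations such a library provides, here is a self-contained Python sketch of earliest-arrival time on a temporal edge list under the edge model described above (discrete timestamp plus transition time); TGLib's actual interfaces may differ:

```python
def earliest_arrival(edges, source, target, start=0):
    """Earliest time `target` can be reached from `source`.

    `edges` is a list of (u, v, t, lam) tuples: an edge from u to v
    available at timestamp t with transition time lam. Scanning edges
    in timestamp order is the standard one-pass algorithm.
    """
    inf = float("inf")
    arrival = {source: start}
    for u, v, t, lam in sorted(edges, key=lambda e: e[2]):
        if arrival.get(u, inf) <= t:  # we can be at u when the edge departs
            arrival[v] = min(arrival.get(v, inf), t + lam)
    return arrival.get(target, inf)

# 0 -> 1 at time 1 (duration 1), then 1 -> 2 at time 3 (duration 2).
print(earliest_arrival([(0, 1, 1, 1), (1, 2, 3, 2), (1, 2, 0, 1)], 0, 2))  # 5
```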
[ { "version": "v1", "created": "Mon, 26 Sep 2022 11:00:51 GMT" } ]
2022-09-27T00:00:00
[ [ "Oettershagen", "Lutz", "" ], [ "Mutzel", "Petra", "" ] ]
new_dataset
0.999065
2209.12650
Nabeel Mohammed
Mohammed Rakib, Md. Ismail Hossain, Nabeel Mohammed, Fuad Rahman
Bangla-Wave: Improving Bangla Automatic Speech Recognition Utilizing N-gram Language Models
null
null
null
null
cs.CL cs.AI eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although over 300M people around the world speak Bangla, scant work has been done on improving Bangla voice-to-text transcription, as Bangla is a low-resource language. However, with the introduction of the Bengali Common Voice 9.0 speech dataset, Automatic Speech Recognition (ASR) models can now be significantly improved. With 399 hours of speech recordings, Bengali Common Voice is the largest and most diversified open-source Bengali speech corpus in the world. In this paper, we outperform the SOTA pretrained Bengali ASR models by finetuning a pretrained wav2vec2 model on the Common Voice dataset. We also demonstrate how to significantly improve the performance of an ASR model by adding an n-gram language model as a post-processor. Finally, we conduct experiments and hyperparameter tuning to produce a robust Bangla ASR model that outperforms the existing ASR models.
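To illustrate what n-gram post-processing does, here is a toy bigram rescoring sketch in plain Python (the paper's actual setup presumably applies a full n-gram LM toolkit over beam-search hypotheses; all data below is made up):

```python
import math
from collections import Counter

# Toy bigram LM "trained" on a tiny placeholder corpus.
corpus = "the cat sat on the mat . the cat ran .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def lm_logprob(words, alpha=0.1):
    """Add-alpha smoothed bigram log-probability of a word sequence."""
    V = len(unigrams)
    return sum(math.log((bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * V))
               for w1, w2 in zip(words, words[1:]))

def rescore(candidates, weight=0.5):
    """Pick the hypothesis maximizing acoustic score + weight * LM score."""
    return max(candidates, key=lambda c: c[1] + weight * lm_logprob(c[0].split()))

# The LM overrides a slightly better acoustic score for the misrecognition.
print(rescore([("the cat sat on the mat", -4.0),
               ("the cat sad on the mat", -3.8)])[0])  # "the cat sat on the mat"
```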
[ { "version": "v1", "created": "Tue, 13 Sep 2022 17:59:21 GMT" } ]
2022-09-27T00:00:00
[ [ "Rakib", "Mohammed", "" ], [ "Hossain", "Md. Ismail", "" ], [ "Mohammed", "Nabeel", "" ], [ "Rahman", "Fuad", "" ] ]
new_dataset
0.999156
2209.12655
Guido Governatori
Francesco Olivieri, Guido Governatori, Matteo Cristani, Antonino Rotolo and Abdul Sattar
Deontic Meta-Rules
null
null
null
null
cs.AI cs.LO
http://creativecommons.org/licenses/by-nc-nd/4.0/
The use of meta-rules in logic, i.e., rules whose content includes other rules, has recently gained attention in the setting of non-monotonic reasoning: a first logical formalisation and efficient algorithms to compute the (meta-)extensions of such theories were proposed in Olivieri et al. (2021). This work extends that logical framework by considering the deontic aspect. The resulting logic will not just be able to model policies but also tackle well-known aspects that occur in numerous legal systems. The use of Defeasible Logic (DL) to model meta-rules in the application area we just alluded to has been investigated. Within this line of research, the study mentioned above did not focus on the general computational properties of meta-rules. This study fills that gap with two major contributions. First, we introduce and formalise two variants of Defeasible Deontic Logic with Meta-Rules to represent (1) defeasible meta-theories with deontic modalities and (2) two different types of conflicts among rules: Simple Conflict Defeasible Deontic Logic and Cautious Conflict Defeasible Deontic Logic. Second, we advance efficient algorithms to compute the extensions of both variants.
[ { "version": "v1", "created": "Fri, 23 Sep 2022 07:48:29 GMT" } ]
2022-09-27T00:00:00
[ [ "Olivieri", "Francesco", "" ], [ "Governatori", "Guido", "" ], [ "Cristani", "Matteo", "" ], [ "Rotolo", "Antonino", "" ], [ "Sattar", "Abdul", "" ] ]
new_dataset
0.991234
2209.12694
Zian Chen
Xiao Cao, Zitan Chen, Canyu Le, Lei Meng
Multi-modal Video Chapter Generation
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chapter generation has become a practical technique for online videos. Chapter breakpoints enable users to quickly find the parts they want and obtain summative annotations. However, there is no public method or dataset for this task. To facilitate research in this direction, we introduce a new dataset called Chapter-Gen, which consists of approximately 10k user-generated videos with annotated chapter information. Our data collection procedure is fast, scalable, and does not require any additional manual annotation. On top of this dataset, we design an effective baseline specifically for the video chapter generation task, which captures two aspects of a video: visual dynamics and narration text. It disentangles local and global video features for localization and title generation, respectively. To parse long videos efficiently, a skip sliding window mechanism is designed to localize potential chapters, and a cross-attention multi-modal fusion module is developed to aggregate local features for title generation. Our experiments demonstrate that the proposed framework achieves superior results over existing methods, illustrating that methods designed for similar tasks cannot be transferred directly, even after fine-tuning. Code and dataset are available at https://github.com/czt117/MVCG.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 13:44:48 GMT" } ]
2022-09-27T00:00:00
[ [ "Cao", "Xiao", "" ], [ "Chen", "Zitan", "" ], [ "Le", "Canyu", "" ], [ "Meng", "Lei", "" ] ]
new_dataset
0.990452
2209.12698
Daniel Esc\'anez-Exp\'osito
Daniel Escanez-Exposito, Pino Caballero-Gil and Francisco Martin-Fernandez
QuantumSolver: A quantum tool-set for developers
10 pages, 4 figures, submitted to CAITS, SAM, CSCE, Springer Nature, Indexed by Computing Research and Education (CORE) with ranking C, Indexed by CS Conference Rankings (0.83), Indexed by GII-GRIN in Class WiP
CAITS, SAM, CSCE. The 2022 World Congress in Computer Science, Computer Engineering, and Applied Computing. CSCE 2022, pg. 149. ISBN # 1-60132-516-9; American Council on Science & Education Las Vegas, USA. July 25-28, 2022
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper introduces a new open-source quantum tool-set called QuantumSolver, based on Qiskit, to help developers without prior knowledge of quantum computing. The library includes a set of algorithms with different features: random number generation, the Bernstein-Vazirani algorithm, and quantum key distribution using the BB84 protocol. The paper describes the main implementation details of the tool-set, focusing on the challenges the authors faced. Finally, it analyzes the results obtained and draws conclusions about the included features.
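As a sketch of one of the bundled algorithms, here is a plain-Qiskit Bernstein-Vazirani circuit (QuantumSolver's own wrapper API is not shown in the abstract, so this only illustrates the underlying construction):

```python
from qiskit import QuantumCircuit

def bernstein_vazirani(secret: str) -> QuantumCircuit:
    """Circuit that recovers the hidden bitstring `secret` in one oracle query."""
    n = len(secret)
    qc = QuantumCircuit(n + 1, n)
    qc.x(n)                      # ancilla to |1>
    qc.h(range(n + 1))           # ancilla to |->, inputs to uniform superposition
    for i, bit in enumerate(reversed(secret)):  # little-endian qubit order
        if bit == "1":           # oracle for f(x) = secret . x (mod 2)
            qc.cx(i, n)
    qc.h(range(n))
    qc.measure(range(n), range(n))
    return qc

qc = bernstein_vazirani("1011")  # measurement yields '1011' with certainty
```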
[ { "version": "v1", "created": "Fri, 23 Sep 2022 11:30:21 GMT" } ]
2022-09-27T00:00:00
[ [ "Escanez-Exposito", "Daniel", "" ], [ "Caballero-Gil", "Pino", "" ], [ "Martin-Fernandez", "Francisco", "" ] ]
new_dataset
0.975644
2209.12721
Haocheng Hua
Haocheng Hua, Tony Xiao Han, and Jie Xu
MIMO Integrated Sensing and Communication: CRB-Rate Tradeoff
30 pages, 17 figures, submitted for journal publication
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies a multiple-input multiple-output (MIMO) integrated sensing and communication (ISAC) system, in which a multi-antenna base station (BS) sends unified wireless signals to estimate one sensing target and communicate with a multi-antenna communication user (CU) simultaneously. We consider both the point and extended target models. For the point target case, the BS estimates the target angle and we adopt the Cram\'er-Rao bound (CRB) for angle estimation as the sensing performance metric. For the extended target case, the BS estimates the complete target response matrix, and we consider three different sensing performance metrics including the trace, the maximum eigenvalue, and the determinant of the CRB matrix for target response matrix estimation. For each of the four scenarios with different CRB measures, we investigate the fundamental tradeoff between the CRB for estimation and the data rate for communication, by characterizing the Pareto boundary of the achievable CRB-rate (C-R) region. In particular, we formulate a new MIMO rate maximization problem for each scenario, by optimizing the transmit covariance matrix at the BS, subject to a different form of maximum CRB constraint and its maximum transmit power constraint. For these problems, we obtain their optimal solutions in semi-closed forms by using advanced convex optimization techniques. For the point target case, the optimal solution is obtained by diagonalizing a \emph{composite channel matrix} via singular value decomposition (SVD) together with water-filling-like power allocation over these decomposed subchannels. For the three scenarios in the extended target case, the optimal solutions are obtained by diagonalizing the \emph{communication channel} via SVD, together with proper power allocation over two orthogonal sets of subchannels. Numerical simulations are conducted to validate the proposed design.
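For reference, the classic water-filling structure alluded to above, in its standard unconstrained form (the paper's CRB-constrained variants modify this allocation): with channel SVD $\mathbf{H} = \mathbf{U}\,\mathrm{diag}(\sqrt{\lambda_1},\dots,\sqrt{\lambda_K})\,\mathbf{V}^H$, the rate-optimal powers over the decomposed subchannels are
\[
p_k = \Bigl(\mu - \frac{\sigma^2}{\lambda_k}\Bigr)^{+}, \qquad
R = \sum_{k=1}^{K} \log_2\!\Bigl(1 + \frac{p_k \lambda_k}{\sigma^2}\Bigr),
\]
with the water level $\mu$ chosen so that $\sum_k p_k = P_{\max}$.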
[ { "version": "v1", "created": "Mon, 26 Sep 2022 14:23:44 GMT" } ]
2022-09-27T00:00:00
[ [ "Hua", "Haocheng", "" ], [ "Han", "Tony Xiao", "" ], [ "Xu", "Jie", "" ] ]
new_dataset
0.987235
2209.12723
Yue Zhang
Yue Zhang, Parisa Kordjamshidi
LOViS: Learning Orientation and Visual Signals for Vision and Language Navigation
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding spatial and visual information is essential for a navigation agent that follows natural language instructions. Current Transformer-based VLN agents entangle orientation and vision information, which limits the gain from each information source. In this paper, we design a neural agent with explicit Orientation and Vision modules. These modules learn to ground spatial information and landmark mentions in the instructions to the visual environment more effectively. To strengthen the spatial reasoning and visual perception of the agent, we design specific pre-training tasks to feed and better utilize the corresponding modules in our final navigation model. We evaluate our approach on both the Room2room (R2R) and Room4room (R4R) datasets and achieve state-of-the-art results on both benchmarks.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 14:26:50 GMT" } ]
2022-09-27T00:00:00
[ [ "Zhang", "Yue", "" ], [ "Kordjamshidi", "Parisa", "" ] ]
new_dataset
0.999671
2209.12822
Leandro Passos
Talita A. Pereira, Regina C. Popim, Leandro A. Passos, Danillo R. Pereira, Clayton R. Pereira, Jo\~ao P. Papa
ComplexWoundDB: A Database for Automatic Complex Wound Tissue Categorization
null
null
10.1109/IWSSIP55020.2022.9854419
null
cs.CV cs.DB cs.LG
http://creativecommons.org/licenses/by/4.0/
Complex wounds usually involve partial or total loss of skin thickness, healing by secondary intention. They can be acute or chronic, featuring infections, ischemia, and tissue necrosis, and can be associated with systemic diseases. Research institutes around the globe report countless cases, which add up to a severe public health problem, for they involve human resources (e.g., physicians and health care professionals) and negatively impact life quality. This paper presents a new database for automatically categorizing complex wounds into five categories, i.e., non-wound area, granulation, fibrinoid tissue, dry necrosis, and hematoma. The images comprise different scenarios with complex wounds caused by pressure, vascular ulcers, diabetes, burns, and complications after surgical interventions. The dataset, called ComplexWoundDB, is unique because it provides pixel-level classifications for $27$ images obtained in the wild, i.e., images collected at the patients' homes and labeled by four health professionals. Further experiments with distinct machine learning techniques evidence the challenges in addressing the problem of computer-aided complex wound tissue categorization. The manuscript sheds light on future directions in the area, with a detailed comparison among other databases widely used in the literature.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 16:28:34 GMT" } ]
2022-09-27T00:00:00
[ [ "Pereira", "Talita A.", "" ], [ "Popim", "Regina C.", "" ], [ "Passos", "Leandro A.", "" ], [ "Pereira", "Danillo R.", "" ], [ "Pereira", "Clayton R.", "" ], [ "Papa", "João P.", "" ] ]
new_dataset
0.999023
2209.12854
Sabur Baidya
Sumit K. Das, Mohammad Helal Uddin, Sabur Baidya
Edge-assisted Collaborative Digital Twin for Safety-Critical Robotics in Industrial IoT
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Digital Twin technology is playing a pivotal role in the modern industrial evolution. Especially with the technological progress in the Internet-of-Things (IoT) and the increasing trend toward autonomy, multi-sensor-equipped robots can create practical digital twins, which are particularly useful in industrial applications for operations, maintenance, and safety. Herein, we demonstrate a real-world digital twin of a safety-critical robotics application with a Franka-Emika-Panda robotic arm. We develop and showcase an edge-assisted collaborative digital twin for dynamic obstacle avoidance, which can be useful for real-time adaptation of robots operating in uncertain and dynamic environments in industrial IoT.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 17:08:51 GMT" } ]
2022-09-27T00:00:00
[ [ "Das", "Sumit K.", "" ], [ "Uddin", "Mohammad Helal", "" ], [ "Baidya", "Sabur", "" ] ]
new_dataset
0.99885
2209.12870
Xieyang Xu
Xieyang Xu (1), Weixin Deng (1), Ryan Beckett (2), Ratul Mahajan (1 and 3) and David Walker (4) ((1) University of Washington, (2) Microsoft, (3) Intentionet, (4) Princeton University)
Test Coverage for Network Configurations
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
We develop NetCov, the first tool to reveal which network configuration lines are being tested by a suite of network tests. It helps network engineers improve test suites and thus increase network reliability. A key challenge in its development is that many network tests test the data plane instead of testing the configurations (control plane) directly. We must be able to efficiently infer which configuration elements contribute to tested data plane elements, even when such contributions are non-local (on remote devices) or non-deterministic. NetCov uses an information flow graph based model that precisely captures various forms of contributions and a scalable method to lazily infer contributions. Using it, we show that an existing test suite for Internet2 (a nation-wide backbone network in the USA) covers only 26% of the configuration lines. The feedback from NetCov makes it easy to define new tests that improve coverage. For Internet2, adding just three such tests covers an additional 17% of the lines.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 17:39:33 GMT" } ]
2022-09-27T00:00:00
[ [ "Xu", "Xieyang", "", "1\n and 3" ], [ "Deng", "Weixin", "", "1\n and 3" ], [ "Beckett", "Ryan", "", "1\n and 3" ], [ "Mahajan", "Ratul", "", "1\n and 3" ], [ "Walker", "David", "" ] ]
new_dataset
0.979445
2209.12882
Gal Katzhendler
Amit Daniely and Gal Katzhendler
Approximate Description Length, Covering Numbers, and VC Dimension
null
null
null
null
cs.LG cs.DS stat.ML
http://creativecommons.org/licenses/by/4.0/
Recently, Daniely and Granot [arXiv:1910.05697] introduced a new notion of complexity called Approximate Description Length (ADL). They used it to derive novel generalization bounds for neural networks, that despite substantial work, were out of reach for more classical techniques such as discretization, Covering Numbers and Rademacher Complexity. In this paper we explore how ADL relates to classical notions of function complexity such as Covering Numbers and VC Dimension. We find that for functions whose range is the reals, ADL is essentially equivalent to these classical complexity measures. However, this equivalence breaks for functions with high dimensional range.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 17:53:29 GMT" } ]
2022-09-27T00:00:00
[ [ "Daniely", "Amit", "" ], [ "Katzhendler", "Gal", "" ] ]
new_dataset
0.960722
2004.09705
Wei Shiung Liew Mr
Fatai Sado, Chu Kiong Loo, Wei Shiung Liew, Matthias Kerzel, Stefan Wermter
Explainable Goal-Driven Agents and Robots -- A Comprehensive Review
null
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. AI systems based on the connectionist deep learning neural network approach lack the capability to explain their decisions and actions to others, despite their great successes. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions or actions opaque, making it difficult to trust them in safety-critical applications. The recent drive for explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most of the studies have focused on data-driven XAI systems applied in computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.
[ { "version": "v1", "created": "Tue, 21 Apr 2020 01:41:20 GMT" }, { "version": "v2", "created": "Tue, 26 Jan 2021 09:29:31 GMT" }, { "version": "v3", "created": "Mon, 15 Mar 2021 03:28:25 GMT" }, { "version": "v4", "created": "Sat, 10 Apr 2021 06:55:15 GMT" }, { "version": "v5", "created": "Thu, 22 Jul 2021 13:08:01 GMT" }, { "version": "v6", "created": "Wed, 28 Jul 2021 07:35:10 GMT" }, { "version": "v7", "created": "Fri, 5 Nov 2021 04:40:08 GMT" }, { "version": "v8", "created": "Fri, 3 Jun 2022 11:16:52 GMT" }, { "version": "v9", "created": "Fri, 23 Sep 2022 08:52:58 GMT" } ]
2022-09-26T00:00:00
[ [ "Sado", "Fatai", "" ], [ "Loo", "Chu Kiong", "" ], [ "Liew", "Wei Shiung", "" ], [ "Kerzel", "Matthias", "" ], [ "Wermter", "Stefan", "" ] ]
new_dataset
0.971232
2105.12038
Nikita Drobyshev
Aleksandr Safin, Maxim Kan, Nikita Drobyshev, Oleg Voynov, Alexey Artemov, Alexander Filippov, Denis Zorin, Evgeny Burnaev
Unpaired Depth Super-Resolution in the Wild
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Depth maps captured with commodity sensors are often of low quality and resolution; these maps need to be enhanced to be used in many applications. State-of-the-art data-driven methods of depth map super-resolution rely on registered pairs of low- and high-resolution depth maps of the same scenes. Acquisition of real-world paired data requires specialized setups. The alternative, generating low-resolution maps from high-resolution maps by subsampling, adding noise, and other artificial degradation methods, does not fully capture the characteristics of real-world low-resolution images. As a consequence, supervised learning methods trained on such artificial paired data may not perform well on real-world low-resolution inputs. We consider an approach to depth super-resolution based on learning from unpaired data. While many techniques for unpaired image-to-image translation have been proposed, most fail to deliver effective hole-filling or reconstruct accurate surfaces using depth maps. We propose an unpaired learning method for depth super-resolution, based on a learnable degradation model, an enhancement component, and surface normal estimates as features, to produce more accurate depth maps. We propose a benchmark for unpaired depth SR and demonstrate that our method outperforms existing unpaired methods and performs on par with paired ones.
[ { "version": "v1", "created": "Tue, 25 May 2021 16:19:16 GMT" }, { "version": "v2", "created": "Mon, 23 Aug 2021 11:21:20 GMT" }, { "version": "v3", "created": "Sat, 30 Jul 2022 15:11:19 GMT" }, { "version": "v4", "created": "Fri, 23 Sep 2022 15:29:08 GMT" } ]
2022-09-26T00:00:00
[ [ "Safin", "Aleksandr", "" ], [ "Kan", "Maxim", "" ], [ "Drobyshev", "Nikita", "" ], [ "Voynov", "Oleg", "" ], [ "Artemov", "Alexey", "" ], [ "Filippov", "Alexander", "" ], [ "Zorin", "Denis", "" ], [ "Burnaev", "Evgeny", "" ] ]
new_dataset
0.956625
2106.06001
Elias Gr\"unewald
Elias Gr\"unewald, Paul Wille, Frank Pallas, Maria C. Borges, Max-R. Ulbricht
TIRA: An OpenAPI Extension and Toolbox for GDPR Transparency in RESTful Architectures
Accepted for publication at the 2021 International Workshop on Privacy Engineering (IWPE'21). This is a preprint manuscript (authors' own version before final copy-editing)
null
10.1109/EuroSPW54576.2021.00039
null
cs.SE cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transparency - the provision of information about what personal data is collected for which purposes, how long it is stored, or to which parties it is transferred - is one of the core privacy principles underlying regulations such as the GDPR. Technical approaches for implementing transparency in practice are, however, only rarely considered. In this paper, we present a novel approach for doing so in current, RESTful application architectures and in line with prevailing agile and DevOps-driven practices. For this purpose, we introduce 1) a transparency-focused extension of OpenAPI specifications that allows individual service descriptions to be enriched with transparency-related annotations in a bottom-up fashion and 2) a set of higher-order tools for aggregating respective information across multiple, interdependent services and for coherently integrating our approach into automated CI/CD-pipelines. Together, these building blocks pave the way for providing transparency information that is more specific and at the same time better reflects the actual implementation givens within complex service architectures than current, overly broad privacy statements.
[ { "version": "v1", "created": "Thu, 10 Jun 2021 18:42:50 GMT" } ]
2022-09-26T00:00:00
[ [ "Grünewald", "Elias", "" ], [ "Wille", "Paul", "" ], [ "Pallas", "Frank", "" ], [ "Borges", "Maria C.", "" ], [ "Ulbricht", "Max-R.", "" ] ]
new_dataset
0.975148
2201.01105
\'Angel Gim\'enez
Angel Gim\'enez, Miguel A. Murcia, Jos\'e M. Amig\'o, Oscar Mart\'inez-Bonastre, Jos\'e Valero
New RED-type TCP-AQM algorithms based on beta distribution drop functions
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, Active Queue Management (AQM) mechanisms to improve the performance of TCP/IP networks have acquired a prominent role. In this paper we present a simple and robust RED-type algorithm together with two dynamic variants that can adapt to the specific characteristics of different network environments, as well as to user needs. We first present a basic version called Beta RED (BetaRED), in which the user is free to adjust the parameters according to network conditions. The aim is to make parameter setting easy and intuitive so that good performance is obtained over a wide range of parameters. Secondly, BetaRED is used as a framework to design two dynamic algorithms, which we call Adaptive Beta RED (ABetaRED) and Dynamic Beta RED (DBetaRED). In these new algorithms, certain parameters are dynamically adjusted so that the queue length remains stable around a predetermined reference value under changing network traffic conditions. Finally, we present a battery of simulations using the Network Simulator 3 (ns-3) software with a two-fold objective: to guide the user on how to adjust the parameters of the BetaRED mechanism, and to compare the performance of ABetaRED and DBetaRED with other representative algorithms that pursue a similar objective.
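A minimal sketch of what a RED-type algorithm with a beta-distribution drop function can look like (illustrative only; the abstract does not pin down BetaRED's exact parameterization, so the Beta-CDF ramp below is an assumption):

```python
from scipy.stats import beta as beta_dist

def beta_drop_prob(q, q_min, q_max, a=2.0, b=2.0, p_max=0.1):
    """RED-style drop probability with a Beta(a, b) CDF ramp.

    Classic RED ramps linearly from 0 to p_max between q_min and q_max;
    replacing the linear ramp with a Beta CDF makes the curve's shape
    tunable (gentle near q_min and aggressive near q_max, or vice versa).
    """
    if q <= q_min:
        return 0.0
    if q >= q_max:
        return 1.0
    x = (q - q_min) / (q_max - q_min)   # normalized occupancy in (0, 1)
    return p_max * beta_dist.cdf(x, a, b)
```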
[ { "version": "v1", "created": "Tue, 4 Jan 2022 12:14:42 GMT" }, { "version": "v2", "created": "Wed, 1 Jun 2022 04:29:36 GMT" }, { "version": "v3", "created": "Thu, 22 Sep 2022 20:46:30 GMT" } ]
2022-09-26T00:00:00
[ [ "Giménez", "Angel", "" ], [ "Murcia", "Miguel A.", "" ], [ "Amigó", "José M.", "" ], [ "Martínez-Bonastre", "Oscar", "" ], [ "Valero", "José", "" ] ]
new_dataset
0.999559
2203.02331
Abdul Hannan Khan
Abdul Hannan Khan, Mohsin Munir, Ludger van Elst and Andreas Dengel
F2DNet: Fast Focal Detection Network for Pedestrian Detection
Accepted at ICPR 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Two-stage detectors are the state of the art in object detection as well as pedestrian detection. However, current two-stage detectors are inefficient, as they perform bounding box regression in multiple steps, i.e., in both the region proposal network and the bounding box head. Also, anchor-based region proposal networks are computationally expensive to train. We propose F2DNet, a novel two-stage detection architecture that eliminates the redundancy of current two-stage detectors by replacing the region proposal network with our focal detection network and the bounding box head with our fast suppression head. We benchmark F2DNet on top pedestrian detection datasets, thoroughly compare it against the existing state-of-the-art detectors, and conduct cross-dataset evaluation to test the generalizability of our model to unseen data. Our F2DNet achieves 8.7\%, 2.2\%, and 6.1\% MR-2 on the City Persons, Caltech Pedestrian, and EuroCity Persons datasets, respectively, when trained on a single dataset, and reaches 20.4\% and 26.2\% MR-2 in the heavy occlusion settings of the Caltech Pedestrian and City Persons datasets when using progressive fine-tuning. Furthermore, F2DNet has significantly lower inference time than the current state of the art. Code and trained models will be available at https://github.com/AbdulHannanKhan/F2DNet.
[ { "version": "v1", "created": "Fri, 4 Mar 2022 14:13:38 GMT" }, { "version": "v2", "created": "Fri, 23 Sep 2022 08:43:32 GMT" } ]
2022-09-26T00:00:00
[ [ "Khan", "Abdul Hannan", "" ], [ "Munir", "Mohsin", "" ], [ "van Elst", "Ludger", "" ], [ "Dengel", "Andreas", "" ] ]
new_dataset
0.995604
2205.02648
H\'eber H. Arcolezi
H\'eber H. Arcolezi, Jean-Fran\c{c}ois Couchot, S\'ebastien Gambs, Catuscia Palamidessi, Majid Zolfaghari
Multi-Freq-LDPy: Multiple Frequency Estimation Under Local Differential Privacy in Python
Paper published in the proceedings of ESORICS 2022
null
10.1007/978-3-031-17143-7_40
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
This paper introduces the multi-freq-ldpy Python package for multiple frequency estimation under Local Differential Privacy (LDP) guarantees. LDP is a gold standard for achieving local privacy, with several real-world implementations by big tech companies such as Google, Apple, and Microsoft. The primary application of LDP is frequency (or histogram) estimation, in which the aggregator estimates the number of times each value has been reported. The presented package provides an easy-to-use and fast implementation of state-of-the-art solutions and LDP protocols for frequency estimation of a single attribute (i.e., the building blocks), multiple attributes (i.e., multidimensional data), multiple collections (i.e., longitudinal data), and both multiple attributes and collections. Multi-freq-ldpy is built on the well-established NumPy package -- a de facto standard for scientific computing in Python -- and the Numba package for fast execution. These features are described and illustrated in this paper with four worked examples. This package is open-source and publicly available under an MIT license via GitHub (https://github.com/hharcolezi/multi-freq-ldpy) and can be installed via PyPI (https://pypi.org/project/multi-freq-ldpy/).
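To make the single-attribute building block concrete, here is a self-contained sketch of Generalized Randomized Response (GRR), the canonical LDP frequency-estimation protocol; multi-freq-ldpy's own function names are not assumed here:

```python
import random
from math import exp

def grr_client(value, domain, epsilon):
    """Report the true value with probability p, else a random other value."""
    k = len(domain)
    p = exp(epsilon) / (exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, epsilon):
    """Unbiased frequency estimates from the noisy reports."""
    n, k = len(reports), len(domain)
    p = exp(epsilon) / (exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)  # probability of reporting any specific other value
    return {v: (sum(r == v for r in reports) - n * q) / (n * (p - q))
            for v in domain}

domain = ["a", "b", "c"]
reports = [grr_client("a", domain, epsilon=2.0) for _ in range(10_000)]
print(grr_estimate(reports, domain, epsilon=2.0))  # 'a' estimate close to 1.0
```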
[ { "version": "v1", "created": "Thu, 5 May 2022 13:48:27 GMT" }, { "version": "v2", "created": "Fri, 23 Sep 2022 08:50:16 GMT" } ]
2022-09-26T00:00:00
[ [ "Arcolezi", "Héber H.", "" ], [ "Couchot", "Jean-François", "" ], [ "Gambs", "Sébastien", "" ], [ "Palamidessi", "Catuscia", "" ], [ "Zolfaghari", "Majid", "" ] ]
new_dataset
0.998412
2207.01059
Matt Groh
Matthew Groh
Identifying the Context Shift between Test Benchmarks and Production Data
null
null
null
null
cs.LG cs.AI cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Machine learning models are often brittle on production data despite achieving high accuracy on benchmark datasets. Benchmark datasets have traditionally served dual purposes: first, benchmarks offer a standard on which machine learning researchers can compare different methods, and second, benchmarks provide a model, albeit imperfect, of the real world. The incompleteness of test benchmarks (and the data upon which models are trained) hinder robustness in machine learning, enable shortcut learning, and leave models systematically prone to err on out-of-distribution and adversarially perturbed data. The mismatch between a single static benchmark dataset and a production dataset has traditionally been described as a dataset shift. In an effort to clarify how to address the mismatch between test benchmarks and production data, we introduce context shift to describe semantically meaningful changes in the underlying data generation process. Moreover, we identify three methods for addressing context shift that would otherwise lead to model prediction errors: first, we describe how human intuition and expert knowledge can identify semantically meaningful features upon which models systematically fail, second, we detail how dynamic benchmarking - with its focus on capturing the data generation process - can promote generalizability through corroboration, and third, we highlight that clarifying a model's limitations can reduce unexpected errors. Robust machine learning is focused on model performance beyond benchmarks, and as such, we consider three model organism domains - facial expression recognition, deepfake detection, and medical diagnosis - to highlight how implicit assumptions in benchmark tasks lead to errors in practice. By paying close attention to the role of context, researchers can design more comprehensive benchmarks, reduce context shift errors, and increase generalizability.
[ { "version": "v1", "created": "Sun, 3 Jul 2022 14:54:54 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 19:33:04 GMT" } ]
2022-09-26T00:00:00
[ [ "Groh", "Matthew", "" ] ]
new_dataset
0.975293
2207.03603
Jun Zhang
David Bombara, Revanth Konda, Steven Swanbeck, Jun Zhang
Anthropomorphic Twisted String-Actuated Soft Robotic Gripper with Tendon-Based Stiffening
19 pages, 15 figures
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Realizing high-performance soft robotic grippers is challenging because of the inherent limitations of the soft actuators and artificial muscles that drive them, including low force output, small actuation range, and poor compactness. Despite advances in this area, realizing compact soft grippers with high dexterity and force output is still challenging. This paper explores twisted string actuators (TSAs) to drive a soft robotic gripper. TSAs have been used in numerous robotic applications, but their inclusion in soft robots has been limited. The proposed design of the gripper was inspired by the human hand. Tunable stiffness was implemented in the fingers with antagonistic TSAs. The fingers' bending angles, actuation speed, blocked force output, and stiffness tuning were experimentally characterized. The gripper achieved a score of 6 on the Kapandji test and recreated 31 of the 33 grasps of the Feix GRASP taxonomy. It exhibited a maximum grasping force of 72 N, which was almost 13 times its own weight. A comparison study revealed that the proposed gripper exhibited equivalent or superior performance compared to other similar soft grippers.
[ { "version": "v1", "created": "Thu, 7 Jul 2022 22:15:04 GMT" }, { "version": "v2", "created": "Wed, 10 Aug 2022 21:57:06 GMT" }, { "version": "v3", "created": "Thu, 22 Sep 2022 21:26:49 GMT" } ]
2022-09-26T00:00:00
[ [ "Bombara", "David", "" ], [ "Konda", "Revanth", "" ], [ "Swanbeck", "Steven", "" ], [ "Zhang", "Jun", "" ] ]
new_dataset
0.995702
2208.11449
Karen Wintersperger
Karen Wintersperger, Hila Safi and Wolfgang Mauerer
QPU-System Co-Design for Quantum HPC Accelerators
null
Proceedings of the 35th GI/ITG International Conference on Architecture of Computing Systems (ARCS 2022)
null
null
cs.AR quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of quantum processing units (QPUs) promises speed-ups for solving computational problems, but the quantum devices currently available possess only a very limited number of qubits and suffer from considerable imperfections. One possibility to progress towards practical utility is to use a co-design approach: not only the problem formulation and algorithm, but also the physical QPU properties, are tailored to the specific application. Since QPUs will likely be used as accelerators for classical computers, the details of their integration into existing system architectures are another lever to influence and improve the practical utility of QPUs. In this work, we investigate the influence of different parameters on the runtime of quantum programs on tailored hybrid CPU-QPU systems. We study the influence of communication times between CPU and QPU, how adapting QPU designs influences quantum and overall execution performance, and how these factors interact. Using a simple model that allows for estimating which design choices should be subjected to optimisation for a given task, we provide an intuition to the HPC community on the potential and limitations of co-design approaches. We also discuss physical limitations for implementing the proposed changes on real quantum hardware devices.
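To make the co-design trade-off concrete, here is a minimal Python sketch of the kind of runtime model the abstract alludes to; the function name, parameters, and numbers are illustrative assumptions, not the paper's actual model or values.

def hybrid_runtime(n_iterations, n_shots, gate_time_s, circuit_depth,
                   cpu_post_s, link_latency_s):
    """Toy wall-clock estimate for an iterative hybrid CPU-QPU workload."""
    qpu_time = n_shots * circuit_depth * gate_time_s             # quantum execution
    per_iteration = qpu_time + cpu_post_s + 2 * link_latency_s   # CPU work + round trip
    return n_iterations * per_iteration

# Communication dominates once the link is slow relative to the QPU:
print(hybrid_runtime(100, 1000, 1e-7, 50, 1e-3, 1e-4))   # ~0.62 s
print(hybrid_runtime(100, 1000, 1e-7, 50, 1e-3, 5e-2))   # ~10.6 s

Such a model makes it easy to see which parameter (link latency, shot count, or circuit depth) is worth optimising for a given workload.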
[ { "version": "v1", "created": "Wed, 24 Aug 2022 11:33:48 GMT" }, { "version": "v2", "created": "Mon, 5 Sep 2022 17:37:17 GMT" }, { "version": "v3", "created": "Wed, 7 Sep 2022 16:41:09 GMT" }, { "version": "v4", "created": "Thu, 8 Sep 2022 17:55:02 GMT" } ]
2022-09-26T00:00:00
[ [ "Wintersperger", "Karen", "" ], [ "Safi", "Hila", "" ], [ "Mauerer", "Wolfgang", "" ] ]
new_dataset
0.993413
2209.00741
Tiago Fonseca
Tiago Fonseca, Tiago Dias, Jo\~ao Vitorino, Lu\'is Lino Ferreira, Isabel Pra\c{c}a
A Low-Cost Multi-Agent System for Physical Security in Smart Buildings
10 pages, 2 tables, 3 figures, ICCCN 2022 conference
null
null
null
cs.CR cs.CV cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Modern organizations face numerous physical security threats, from fire hazards to more intricate concerns regarding surveillance and unauthorized personnel. Conventional standalone fire and intrusion detection solutions must be installed and maintained independently, which leads to high capital and operational costs. Nonetheless, due to recent developments in smart sensors, computer vision techniques, and wireless communication technologies, these solutions can be integrated in a modular and low-cost manner. This work introduces Integrated Physical Security System (IP2S), a multi-agent system capable of coordinating diverse Internet of Things (IoT) sensors and actuators for an efficient mitigation of multiple physical security events. The proposed system was tested in a live case study that combined fire and intrusion detection in an industrial shop floor environment with four different sectors, two surveillance cameras, and a firefighting robot. The experimental results demonstrate that the integration of several events in a single automated system can be advantageous for the security of smart buildings, reducing false alarms and delays.
[ { "version": "v1", "created": "Thu, 1 Sep 2022 22:20:39 GMT" } ]
2022-09-26T00:00:00
[ [ "Fonseca", "Tiago", "" ], [ "Dias", "Tiago", "" ], [ "Vitorino", "João", "" ], [ "Ferreira", "Luís Lino", "" ], [ "Praça", "Isabel", "" ] ]
new_dataset
0.996999
2209.10807
Eunkyu Oh
Eunkyu Oh, Taehun Kim, Minsoo Kim, Yunhu Ji, Sushil Khyalia
SR-GCL: Session-Based Recommendation with Global Context Enhanced Augmentation in Contrastive Learning
11 pages. This paper has been accepted by DLG-AAAI'22
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Session-based recommendations aim to predict the next behavior of users based on ongoing sessions. Previous works model a session as a variable-length sequence of items and learn the representation of both individual items and the aggregated session. Recent research has applied graph neural networks with an attention mechanism to capture complicated item transitions and dependencies by modeling sessions as graph-structured data. However, they still face fundamental challenges in terms of data and learning methodology, such as sparse supervision signals and noisy interactions in sessions, leading to sub-optimal performance. In this paper, we propose SR-GCL, a novel contrastive learning framework for session-based recommendation. As a crucial component of contrastive learning, we propose two global context enhanced data augmentation methods that maintain the semantics of the original session. The extensive experiment results on two real-world E-commerce datasets demonstrate the superiority of SR-GCL as compared to other state-of-the-art methods.
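As a hedged illustration of the contrastive component such frameworks rely on, the following NumPy sketch implements the standard NT-Xent loss over two augmented views of a batch of session embeddings; SR-GCL's actual augmentations and loss details are not reproduced here.

import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss; rows of z1/z2 are two augmented views of the same sessions."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # cosine similarities
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                        # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])  # index of the other view
    logits = sim - sim.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
print(nt_xent(rng.normal(size=(8, 16)), rng.normal(size=(8, 16))))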
[ { "version": "v1", "created": "Thu, 22 Sep 2022 06:18:20 GMT" }, { "version": "v2", "created": "Fri, 23 Sep 2022 04:16:12 GMT" } ]
2022-09-26T00:00:00
[ [ "Oh", "Eunkyu", "" ], [ "Kim", "Taehun", "" ], [ "Kim", "Minsoo", "" ], [ "Ji", "Yunhu", "" ], [ "Khyalia", "Sushil", "" ] ]
new_dataset
0.997117
2209.11252
Shivprasad Sagare Mr
Shivprasad Sagare, Tushar Abhishek, Bhavyajeet Singh, Anubhav Sharma, Manish Gupta, Vasudeva Varma
XF2T: Cross-lingual Fact-to-Text Generation for Low-Resource Languages
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Multiple business scenarios require an automated generation of descriptive human-readable text from structured input data. Hence, fact-to-text generation systems have been developed for various downstream tasks like generating soccer reports, weather and financial reports, medical reports, person biographies, etc. Unfortunately, previous work on fact-to-text (F2T) generation has focused primarily on English, mainly due to the high availability of relevant datasets. Only recently, the problem of cross-lingual fact-to-text (XF2T) generation was proposed, along with a dataset, XALIGN, covering eight languages. However, there has been no rigorous work on the actual XF2T generation problem. We extend the XALIGN dataset with annotated data for four more languages: Punjabi, Malayalam, Assamese and Oriya. We conduct an extensive study using popular Transformer-based text generation models on our extended multi-lingual dataset, which we call XALIGNV2. Further, we investigate the performance of different text generation strategies: multiple variations of pretraining, fact-aware embeddings and structure-aware input encoding. Our extensive experiments show that a multi-lingual mT5 model which uses fact-aware embeddings with structure-aware input encoding leads to the best results on average across the twelve languages. We make our code, dataset and model publicly available, and hope that this will help advance further research in this critical area.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 18:01:27 GMT" } ]
2022-09-26T00:00:00
[ [ "Sagare", "Shivprasad", "" ], [ "Abhishek", "Tushar", "" ], [ "Singh", "Bhavyajeet", "" ], [ "Sharma", "Anubhav", "" ], [ "Gupta", "Manish", "" ], [ "Varma", "Vasudeva", "" ] ]
new_dataset
0.968297
2209.11266
Gang Liu
Gang Liu, Tianyan Zhou, Yong Zhao, Yu Wu, Zhuo Chen, Yao Qian, Jian Wu
The Microsoft System for VoxCeleb Speaker Recognition Challenge 2022
3 pages, 3 tables, VoxSRC2022
null
null
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this report, we describe our submitted system for track 2 of the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22). We fuse a variety of good-performing models ranging from supervised models to self-supervised learning (SSL) pre-trained models. The supervised models, trained using VoxCeleb-2 dev data, consist of ECAPA-TDNN and Res2Net in a very deep structure. The SSL pre-trained models, wav2vec and wavLM, are trained using large-scale unlabeled speech data of up to a million hours. These models are cascaded with ECAPA-TDNN and further fine-tuned in a supervised fashion to extract the speaker representations. All 13 models are applied with score normalization and calibration and then fused into the submitted system. We also explore audio quality measures in the calibration stage, such as duration, SNR, T60, and MOS. The best submitted system achieves 0.073 in minDCF and 1.436% in EER on the VoxSRC-22 evaluation set.
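A minimal sketch of the score normalization and fusion step described above, assuming simple per-system z-normalization and fixed fusion weights; the challenge system's actual calibration is more elaborate.

import numpy as np

def fuse_scores(model_scores, weights):
    """Z-normalize each subsystem's trial scores, then fuse them linearly."""
    fused = np.zeros_like(model_scores[0], dtype=float)
    for s, w in zip(model_scores, weights):
        fused += w * (s - s.mean()) / s.std()   # per-system normalization
    return fused

scores_a = np.array([1.2, -0.3, 2.0, 0.1])   # subsystem A on four trials
scores_b = np.array([0.9, -0.5, 1.4, 0.2])   # subsystem B on the same trials
print(fuse_scores([scores_a, scores_b], [0.6, 0.4]))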
[ { "version": "v1", "created": "Thu, 22 Sep 2022 18:36:04 GMT" } ]
2022-09-26T00:00:00
[ [ "Liu", "Gang", "" ], [ "Zhou", "Tianyan", "" ], [ "Zhao", "Yong", "" ], [ "Wu", "Yu", "" ], [ "Chen", "Zhuo", "" ], [ "Qian", "Yao", "" ], [ "Wu", "Jian", "" ] ]
new_dataset
0.963337
2209.11302
Ishika Singh
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, Animesh Garg
ProgPrompt: Generating Situated Robot Task Plans using Large Language Models
null
null
null
null
cs.RO cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Task planning can require defining myriad domain knowledge about the world in which a robot needs to act. To ameliorate that effort, large language models (LLMs) can be used to score potential next actions during task planning, and even to generate action sequences directly, given an instruction in natural language with no additional domain information. However, such methods either require enumerating all possible next steps for scoring, or generate free-form text that may contain actions not possible on a given robot in its current context. We present a programmatic LLM prompt structure that enables plan generation that functions across situated environments, robot capabilities, and tasks. Our key insight is to prompt the LLM with program-like specifications of the available actions and objects in an environment, as well as with example programs that can be executed. We make concrete recommendations about prompt structure and generation constraints through ablation experiments, demonstrate state-of-the-art success rates in VirtualHome household tasks, and deploy our method on a physical robot arm for tabletop tasks. Website at progprompt.github.io
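The idea of a program-like prompt can be sketched in a few lines of Python; the action names and example program below are hypothetical stand-ins, not the paper's exact prompt format.

def build_prompt(actions, objects, examples, task):
    """Assemble a ProgPrompt-style planning prompt for an LLM to complete."""
    header = "\n".join(f"def {a}(obj): ..." for a in actions)        # available actions
    env = "objects = [" + ", ".join(repr(o) for o in objects) + "]"  # available objects
    demos = "\n\n".join(examples)                                    # executable example plans
    return f"{header}\n\n{env}\n\n{demos}\n\ndef {task}():\n"

example = ("def put_salmon_in_fridge():\n    open('fridge')\n"
           "    grab('salmon')\n    putin('salmon', 'fridge')\n    close('fridge')")
print(build_prompt(["grab", "putin", "open", "close"],
                   ["salmon", "fridge", "microwave"], [example], "microwave_salmon"))
# The LLM is asked to complete the final function body using the declared actions only.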
[ { "version": "v1", "created": "Thu, 22 Sep 2022 20:29:49 GMT" } ]
2022-09-26T00:00:00
[ [ "Singh", "Ishika", "" ], [ "Blukis", "Valts", "" ], [ "Mousavian", "Arsalan", "" ], [ "Goyal", "Ankit", "" ], [ "Xu", "Danfei", "" ], [ "Tremblay", "Jonathan", "" ], [ "Fox", "Dieter", "" ], [ "Thomason", "Jesse", "" ], [ "Garg", "Animesh", "" ] ]
new_dataset
0.999817
2209.11318
Charlie C.L. Wang Prof. Dr.
Yingjun Tian, Renbo Su, Xilong Wang, Nur Banu Altin, Guoxin Fang, Charlie C. L. Wang
OpenPneu: Compact platform for pneumatic actuation with multi-channels
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents a compact system, OpenPneu, to support the pneumatic actuation of multiple chambers on soft robots. Micro-pumps are employed in the system to generate airflow, and therefore no external input of compressed air is needed. Our system adopts a modular design to provide good scalability, which has been demonstrated on a prototype with ten air channels. Each air channel of OpenPneu is equipped with both inflation and deflation functions to provide a full-range pressure supply from positive to negative, with a maximal flow rate of 1.7 L/min. High-precision closed-loop control of pressures has been built into our system to achieve stable and efficient dynamic performance in actuation. An open-source control interface and API in Python are provided. We also demonstrate the functionality of OpenPneu on three soft robotic systems with up to 10 chambers.
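Since the abstract does not spell out the Python API's actual names, here is a purely hypothetical usage sketch of what multi-channel closed-loop control might look like; the class and method names are assumptions for illustration only, not the OpenPneu API.

import time

class PneumaticController:
    """Stand-in for a 10-channel pneumatic control board (hypothetical API)."""
    def __init__(self, n_channels=10):
        self.targets = [0.0] * n_channels    # target pressures in kPa

    def set_pressure(self, channel, kpa):
        # Closed-loop setpoint; negative values request vacuum (deflation).
        self.targets[channel] = kpa

ctrl = PneumaticController()
ctrl.set_pressure(0, 40.0)    # inflate chamber 0
ctrl.set_pressure(1, -20.0)   # deflate chamber 1
time.sleep(0.5)               # allow the closed-loop control to settle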
[ { "version": "v1", "created": "Thu, 22 Sep 2022 21:16:37 GMT" } ]
2022-09-26T00:00:00
[ [ "Tian", "Yingjun", "" ], [ "Su", "Renbo", "" ], [ "Wang", "Xilong", "" ], [ "Altin", "Nur Banu", "" ], [ "Fang", "Guoxin", "" ], [ "Wang", "Charlie C. L.", "" ] ]
new_dataset
0.999703
2209.11321
Ahmed Alkhateeb
Shuaifeng Jiang and Ahmed Alkhateeb
Sensing Aided OTFS Channel Estimation for Massive MIMO Systems
submitted to IEEE
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Orthogonal time frequency space (OTFS) modulation has the potential to enable robust communications in highly-mobile scenarios. Estimating the channels for OTFS systems, however, is associated with high pilot signaling overhead that scales with the maximum delay and Doppler spreads. This becomes particularly challenging for massive MIMO systems, where the overhead also scales with the number of antennas. An important observation, however, is that the delay, Doppler, and angle of departure/arrival information are directly related to the distance, velocity, and direction information of the mobile user and the various scatterers in the environment. With this motivation, we propose to leverage radar sensing to obtain this information about the mobile users and scatterers in the environment and use it to aid the OTFS channel estimation in massive MIMO systems. As one approach to realize our vision, this paper formulates the OTFS channel estimation problem in massive MIMO systems as a sparse recovery problem and utilizes the radar sensing information to determine the support (the locations of the non-zero delay-Doppler taps). The proposed radar sensing aided sparse recovery algorithm is evaluated based on an accurate 3D ray-tracing framework with co-existing radar and communication data. The results show that the developed sensing-aided solution consistently outperforms the standard sparse recovery algorithms (that do not leverage radar sensing data) and leads to a significant reduction in the pilot overhead, which highlights a promising direction for OTFS based massive MIMO systems.
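To see why knowing the support helps, consider this NumPy sketch: once radar sensing pins down which delay-Doppler taps are non-zero, channel estimation reduces to a small least-squares problem. The measurement model below is a generic toy, not the paper's actual recovery algorithm.

import numpy as np

def estimate_channel(y, A, support):
    """Recover a sparse channel h from y = A @ h when the support is known."""
    h = np.zeros(A.shape[1], dtype=complex)
    h[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return h

rng = np.random.default_rng(1)
A = rng.normal(size=(32, 64)) + 1j * rng.normal(size=(32, 64))  # pilot measurement matrix
h_true = np.zeros(64, dtype=complex)
support = [3, 17, 42]                                           # taps flagged by sensing
h_true[support] = rng.normal(size=3) + 1j * rng.normal(size=3)
print(np.allclose(estimate_channel(A @ h_true, A, support), h_true))  # True

With the support known, a few pilot measurements suffice for a few unknown taps, which is where the pilot-overhead reduction comes from.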
[ { "version": "v1", "created": "Thu, 22 Sep 2022 21:23:40 GMT" } ]
2022-09-26T00:00:00
[ [ "Jiang", "Shuaifeng", "" ], [ "Alkhateeb", "Ahmed", "" ] ]
new_dataset
0.991704
2209.11345
Marcos V. Conde
Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte
Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration
European Conference on Computer Vision (ECCV 2022) Workshops
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Compression plays an important role in the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other transformer-based methods, such as SwinIR, show impressive performance on these tasks. In this paper, we explore the novel Swin Transformer V2 to improve SwinIR for image super-resolution, and in particular for the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and data hunger. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video".
[ { "version": "v1", "created": "Thu, 22 Sep 2022 23:25:08 GMT" } ]
2022-09-26T00:00:00
[ [ "Conde", "Marcos V.", "" ], [ "Choi", "Ui-Jin", "" ], [ "Burchi", "Maxime", "" ], [ "Timofte", "Radu", "" ] ]
new_dataset
0.999488
2209.11420
Revanth Konda
Revanth Konda, David Bombara and Jun Zhang
Overtwisting and Coiling Highly Enhances Strain Generation of Twisted String Actuators
null
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/publicdomain/zero/1.0/
Twisted string actuators (TSAs) have exhibited great promise in robotic applications by generating high translational force with low input torque. To further facilitate their robotic applications, it is strongly desirable but challenging to enhance their consistent strain generation while maintaining compliance. Existing studies have predominantly considered overtwisting and coiling after the regular twisting stage to be undesirable, because non-uniform and unpredictable knots, entanglements, and coils formed, creating an unstable and failure-prone structure. Overtwisting would work well for TSAs when uniform coils can be consistently formed. In this study, we realize uniform and consistent coil formation in overtwisted TSAs, which greatly increases their strain. Furthermore, we investigate methods for enabling uniform coil formation upon overtwisting the strings in a TSA and present a procedure to systematically "train" the strings. To the authors' best knowledge, this is the first study to experimentally investigate overtwisting for TSAs with different stiffnesses and realize consistent uniform coil formation. Ultra-high molecular-weight polyethylene (UHMWPE) strings form the stiff TSAs, whereas compliant TSAs are realized with stretchable and conductive supercoiled polymer (SCP) strings. The strain, force, velocity, and torque of each overtwisted TSA were studied. Overtwisting and coiling resulted in approximately 70% strain in stiff TSAs and approximately 60% strain in compliant TSAs. This is more than twice the strain achieved through regular twisting. Lastly, the overtwisted TSA was successfully demonstrated in a robotic bicep.
[ { "version": "v1", "created": "Fri, 23 Sep 2022 05:35:59 GMT" } ]
2022-09-26T00:00:00
[ [ "Konda", "Revanth", "" ], [ "Bombara", "David", "" ], [ "Zhang", "Jun", "" ] ]
new_dataset
0.95936
2209.11500
Hakan Girgin
Hakan Girgin, Julius Jankowski and Sylvain Calinon
Reactive Anticipatory Robot Skills with Memory
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimal control in robotics has become increasingly popular in recent years and has been applied in many applications involving complex dynamical systems. Closed-loop optimal control strategies include model predictive control (MPC) and time-varying linear controllers optimized through iLQR. However, such feedback controllers rely on information about the current state only, limiting their use in robotic applications where the robot needs to remember what it has done before in order to act and plan accordingly. The recently proposed system level synthesis (SLS) framework circumvents this limitation via a richer controller structure with memory. In this work, we propose to optimally design reactive anticipatory robot skills with memory by extending SLS to tracking problems involving nonlinear systems and nonquadratic cost functions. We showcase our method with two scenarios exploiting task precisions and object affordances in pick-and-place tasks in a simulated and a real environment with a 7-axis Franka Emika robot.
[ { "version": "v1", "created": "Fri, 23 Sep 2022 09:55:41 GMT" } ]
2022-09-26T00:00:00
[ [ "Girgin", "Hakan", "" ], [ "Jankowski", "Julius", "" ], [ "Calinon", "Sylvain", "" ] ]
new_dataset
0.961987
2209.11625
Jinghan Peng
Yu Zheng, Jinghan Peng, Yihao Chen, Yajun Zhang, Jialong Wang, Min Liu, Minqiang Xu
The SpeakIn Speaker Verification System for Far-Field Speaker Verification Challenge 2022
5 pages. arXiv admin note: text overlap with arXiv:2209.10846
null
null
null
cs.SD cs.AI eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the speaker verification (SV) systems submitted by the SpeakIn team to Task 1 and Task 2 of the Far-Field Speaker Verification Challenge 2022 (FFSVC2022). The SV tasks of the challenge focus on the problems of fully supervised far-field speaker verification (Task 1) and semi-supervised far-field speaker verification (Task 2). In Task 1, we used the VoxCeleb and FFSVC2020 datasets as training data. For Task 2, we used only the VoxCeleb dataset as the training set. ResNet-based and RepVGG-based architectures were developed for this challenge. A global statistics pooling structure and an MQMHA pooling structure were used to aggregate the frame-level features across time to obtain utterance-level representations. We adopted AM-Softmax and AAM-Softmax to classify the resulting embeddings. We innovatively propose a staged transfer learning method: in the pre-training stage we reserve the speaker weights, with no positive samples to train them, and we then fine-tune these weights with both positive and negative samples in the second stage. Compared with the traditional transfer learning strategy, this strategy can better improve the model performance. The Sub-Mean and AS-Norm backend methods were used to address the problem of domain mismatch. In the fusion stage, three models were fused in Task 1 and two models were fused in Task 2. On the FFSVC2022 leaderboard, the EER of our submission is 3.0049% and the corresponding minDCF is 0.2938 in Task 1. In Task 2, the EER and minDCF are 6.2060% and 0.5232, respectively. Our approach leads to excellent performance and ranks 1st in both challenge tasks.
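A compact sketch of the AS-Norm backend mentioned above, assuming symmetric normalization with statistics computed over the top-scoring imposter cohort; the cohort size and top-N value are illustrative, not the submission's settings.

import numpy as np

def as_norm(score, enroll_cohort, test_cohort, top_n=200):
    """Adaptive symmetric score normalization for one verification trial."""
    def stats(cohort):
        top = np.sort(cohort)[-top_n:]       # most competitive imposters
        return top.mean(), top.std()
    mu_e, sd_e = stats(enroll_cohort)        # enrollment side vs cohort
    mu_t, sd_t = stats(test_cohort)          # test side vs cohort
    return 0.5 * ((score - mu_e) / sd_e + (score - mu_t) / sd_t)

rng = np.random.default_rng(2)
print(as_norm(0.7, rng.normal(0.0, 0.1, 1000), rng.normal(0.0, 0.1, 1000)))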
[ { "version": "v1", "created": "Fri, 23 Sep 2022 14:51:55 GMT" } ]
2022-09-26T00:00:00
[ [ "Zheng", "Yu", "" ], [ "Peng", "Jinghan", "" ], [ "Chen", "Yihao", "" ], [ "Zhang", "Yajun", "" ], [ "Wang", "Jialong", "" ], [ "Liu", "Min", "" ], [ "Xu", "Minqiang", "" ] ]
new_dataset
0.959594
2209.11693
Iman Nematollahi
Iman Nematollahi, Erick Rosete-Beas, Seyed Mahdi B. Azad, Raghu Rajan, Frank Hutter, Wolfram Burgard
T3VIP: Transformation-based 3D Video Prediction
Accepted at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For autonomous skill acquisition, robots have to learn about the physical rules governing the 3D world dynamics from their own past experience to predict and reason about plausible future outcomes. To this end, we propose a transformation-based 3D video prediction (T3VIP) approach that explicitly models the 3D motion by decomposing a scene into its object parts and predicting their corresponding rigid transformations. Our model is fully unsupervised, captures the stochastic nature of the real world, and the observational cues in image and point cloud domains constitute its learning signals. To fully leverage all the 2D and 3D observational signals, we equip our model with automatic hyperparameter optimization (HPO) to interpret the best way of learning from them. To the best of our knowledge, our model is the first generative model that provides an RGB-D video prediction of the future for a static camera. Our extensive evaluation with simulated and real-world datasets demonstrates that our formulation leads to interpretable 3D models that predict future depth videos while achieving on-par performance with 2D models on RGB video prediction. Moreover, we demonstrate that our model outperforms 2D baselines on visuomotor control. Videos, code, dataset, and pre-trained models are available at http://t3vip.cs.uni-freiburg.de.
[ { "version": "v1", "created": "Mon, 19 Sep 2022 15:01:09 GMT" } ]
2022-09-26T00:00:00
[ [ "Nematollahi", "Iman", "" ], [ "Rosete-Beas", "Erick", "" ], [ "Azad", "Seyed Mahdi B.", "" ], [ "Rajan", "Raghu", "" ], [ "Hutter", "Frank", "" ], [ "Burgard", "Wolfram", "" ] ]
new_dataset
0.998695
2209.11750
Sannara Ek
Sannara EK, Fran\c{c}ois Portet, Philippe Lalanda
Lightweight Transformers for Human Activity Recognition on Mobile Devices
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human Activity Recognition (HAR) on mobile devices has been shown to be achievable with lightweight neural models learned from data generated by the user's inertial measurement units (IMUs). Most approaches for instance-based HAR have used Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), or a combination of the two to achieve state-of-the-art results with real-time performance. Recently, the Transformer architecture, first in the language processing domain and then in the vision domain, has pushed the state of the art beyond classical architectures. However, the Transformer architecture is heavyweight in computing resources, which makes it ill-suited for the embedded HAR applications found in the pervasive computing domain. In this study, we present the Human Activity Recognition Transformer (HART), a lightweight, sensor-wise transformer architecture that has been specifically adapted to the domain of the IMUs embedded on mobile devices. Our experiments on HAR tasks with several publicly available datasets show that HART uses fewer FLoating-point Operations Per Second (FLOPS) and parameters while outperforming the current state-of-the-art results. Furthermore, we present evaluations across various architectures on their performance in heterogeneous environments and show that our models can better generalize on different sensing devices or on-body positions.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 09:42:08 GMT" } ]
2022-09-26T00:00:00
[ [ "EK", "Sannara", "" ], [ "Portet", "François", "" ], [ "Lalanda", "Philippe", "" ] ]
new_dataset
0.988576
1606.06940
Krzysztof Fleszar
William S. Evans, Krzysztof Fleszar, Philipp Kindermann, Noushin Saeedi, Chan-Su Shin, Alexander Wolff
Minimum Rectilinear Polygons for Given Angle Sequences
New result: NP-hardness of drawing polylines
Computational Geometry: Theory and Applications, 100:101820 (2022)
10.1016/j.comgeo.2021.101820
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A rectilinear polygon is a polygon whose edges are axis-aligned. Walking counterclockwise on the boundary of such a polygon yields a sequence of left turns and right turns. The number of left turns always equals the number of right turns plus 4. It is known that any such sequence can be realized by a rectilinear polygon. In this paper, we consider the problem of finding realizations that minimize the perimeter or the area of the polygon or the area of the bounding box of the polygon. We show that all three problems are NP-hard in general. This answers an open question of Patrignani [CGTA 2001], who showed that it is NP-hard to minimize the area of the bounding box of an orthogonal drawing of a given planar graph. We also show that realizing polylines with minimum bounding box area is NP-hard. Then we consider the special cases of $x$-monotone and $xy$-monotone rectilinear polygons. For these, we can optimize the three objectives efficiently.
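The counting fact stated above gives a quick necessary (and, by the realizability result, sufficient) check on a turn sequence; a small Python illustration:

def realizable(turns):
    """A counterclockwise turn sequence is realizable by a rectilinear
    polygon iff the number of L turns equals the number of R turns plus 4."""
    return turns.count("L") == turns.count("R") + 4

print(realizable("LLLL"))      # True: the axis-aligned rectangle
print(realizable("LLLRLL"))    # True: an L-shaped hexagon (5 convex, 1 reflex)
print(realizable("LLRR"))      # False

The hard part, as the abstract explains, is not realizability but finding the realization that minimizes perimeter, area, or bounding-box area.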
[ { "version": "v1", "created": "Wed, 22 Jun 2016 13:22:18 GMT" }, { "version": "v2", "created": "Wed, 12 Dec 2018 10:17:51 GMT" }, { "version": "v3", "created": "Mon, 8 Jun 2020 17:52:21 GMT" } ]
2022-09-23T00:00:00
[ [ "Evans", "William S.", "" ], [ "Fleszar", "Krzysztof", "" ], [ "Kindermann", "Philipp", "" ], [ "Saeedi", "Noushin", "" ], [ "Shin", "Chan-Su", "" ], [ "Wolff", "Alexander", "" ] ]
new_dataset
0.996713
2107.00957
Tareq Si Salem
T. Si-Salem, G. Neglia, D. Carra
Ascent Similarity Caching with Approximate Indexes
null
null
null
null
cs.NI cs.LG cs.PF
http://creativecommons.org/licenses/by/4.0/
Similarity search is a key operation in multimedia retrieval systems and recommender systems, and it will also play an important role in future machine learning and augmented reality applications. When these systems need to serve large objects with tight delay constraints, edge servers close to the end-user can operate as similarity caches to speed up the retrieval. In this paper we present A\c{C}AI, a new similarity caching policy which improves on the state of the art by using (i) an (approximate) index for the whole catalog to decide which objects to serve locally and which to retrieve from the remote server, and (ii) a mirror ascent algorithm to update the set of local objects with strong guarantees even when the request process does not exhibit any statistical regularity.
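For intuition, a mirror ascent step with the negative-entropy mirror map is just an exponentiated-gradient update; the sketch below runs it on a simplex-constrained toy allocation. The paper's actual constraint set and gradient estimates are more involved, so treat this only as an illustration of the update rule.

import numpy as np

def mirror_ascent_step(x, grad, eta=0.1):
    """Exponentiated-gradient update: multiplicative step, then renormalize."""
    y = x * np.exp(eta * grad)
    return y / y.sum()                       # project back onto the simplex

x = np.full(4, 0.25)                         # uniform fractional cache allocation
gains = np.array([1.0, 0.2, 0.2, 0.1])       # per-object caching-gain gradient
for _ in range(50):
    x = mirror_ascent_step(x, gains)
print(x.round(3))                            # mass concentrates on the best object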
[ { "version": "v1", "created": "Fri, 2 Jul 2021 10:40:47 GMT" }, { "version": "v2", "created": "Sun, 19 Dec 2021 17:18:56 GMT" }, { "version": "v3", "created": "Mon, 27 Dec 2021 14:56:49 GMT" }, { "version": "v4", "created": "Thu, 22 Sep 2022 12:04:15 GMT" } ]
2022-09-23T00:00:00
[ [ "Si-Salem", "T.", "" ], [ "Neglia", "G.", "" ], [ "Carra", "D.", "" ] ]
new_dataset
0.955423
2109.09289
Muhammed Sit
Muhammed Sit, Bong-Chul Seo and Ibrahim Demir
TempNet -- Temporal Super Resolution of Radar Rainfall Products with Residual CNNs
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The temporal and spatial resolution of rainfall data is crucial for environmental modeling studies in which its variability in space and time is considered as a primary factor. Rainfall products from different remote sensing instruments (e.g., radar, satellite) have different space-time resolutions because of the differences in their sensing capabilities and post-processing methods. In this study, we developed a deep learning approach that augments rainfall data with increased time resolutions to complement relatively lower resolution products. We propose a neural network architecture based on Convolutional Neural Networks (CNNs) to improve the temporal resolution of radar-based rainfall products and compare the proposed model with an optical flow-based interpolation method and CNN-baseline model. The methodology presented in this study could be used for enhancing rainfall maps with better temporal resolution and imputation of missing frames in sequences of 2D rainfall maps to support hydrological and flood forecasting studies.
[ { "version": "v1", "created": "Mon, 20 Sep 2021 03:58:52 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 04:14:44 GMT" } ]
2022-09-23T00:00:00
[ [ "Sit", "Muhammed", "" ], [ "Seo", "Bong-Chul", "" ], [ "Demir", "Ibrahim", "" ] ]
new_dataset
0.996036
2203.00819
Duzhen Zhang
Duzhen Zhang, Zhen Yang, Fandong Meng, Xiuyi Chen, Jie Zhou
TSAM: A Two-Stream Attention Model for Causal Emotion Entailment
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Causal Emotion Entailment (CEE) aims to discover the potential causes behind an emotion in a conversational utterance. Previous works formalize CEE as independent utterance pair classification problems, with emotion and speaker information neglected. From a new perspective, this paper considers CEE in a joint framework. We classify multiple utterances synchronously to capture the correlations between utterances in a global view and propose a Two-Stream Attention Model (TSAM) to effectively model the speaker's emotional influences in the conversational history. Specifically, the TSAM comprises three modules: Emotion Attention Network (EAN), Speaker Attention Network (SAN), and interaction module. The EAN and SAN incorporate emotion and speaker information in parallel, and the subsequent interaction module effectively interchanges relevant information between the EAN and SAN via a mutual BiAffine transformation. Extensive experimental results demonstrate that our model achieves new State-Of-The-Art (SOTA) performance and outperforms baselines remarkably.
[ { "version": "v1", "created": "Wed, 2 Mar 2022 02:11:41 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 08:01:56 GMT" } ]
2022-09-23T00:00:00
[ [ "Zhang", "Duzhen", "" ], [ "Yang", "Zhen", "" ], [ "Meng", "Fandong", "" ], [ "Chen", "Xiuyi", "" ], [ "Zhou", "Jie", "" ] ]
new_dataset
0.998002
2203.12130
Akash Saravanan
Akash Saravanan and Matthew Guzdial
Pixel VQ-VAEs for Improved Pixel Art Representation
9 pages, 2 figures. Experimental AI in Games Workshop (EXAG) 2022
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning has had a great deal of success in image processing. However, the focus of this work has largely been on realistic images, ignoring more niche art styles such as pixel art. Additionally, many traditional machine learning models that focus on groups of pixels do not work well with pixel art, where individual pixels are important. We propose the Pixel VQ-VAE, a specialized VQ-VAE model that learns representations of pixel art. We show that it outperforms other models in both the quality of embeddings as well as performance on downstream tasks.
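The core of any VQ-VAE bottleneck, including a pixel-art-specialized one, is a nearest-neighbour codebook lookup; a minimal NumPy sketch (the Pixel VQ-VAE's actual architecture details are not reproduced here):

import numpy as np

def quantize(latents, codebook):
    """Map each latent vector to its nearest codebook embedding.
    latents: (N, D) encoder outputs; codebook: (K, D) learned embeddings."""
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)                   # discrete codes
    return codebook[idx], idx                # training uses straight-through gradients

rng = np.random.default_rng(3)
codes, idx = quantize(rng.normal(size=(5, 8)), rng.normal(size=(16, 8)))
print(idx)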
[ { "version": "v1", "created": "Wed, 23 Mar 2022 01:47:33 GMT" }, { "version": "v2", "created": "Wed, 21 Sep 2022 20:42:00 GMT" } ]
2022-09-23T00:00:00
[ [ "Saravanan", "Akash", "" ], [ "Guzdial", "Matthew", "" ] ]
new_dataset
0.984449
2203.12575
Wei Jiang
Wei Jiang, Kwang Moo Yi, Golnoosh Samei, Oncel Tuzel, Anurag Ranjan
NeuMan: Neural Human Radiance Field from a Single Video
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Photorealistic rendering and reposing of humans is important for enabling augmented reality experiences. We propose a novel framework to reconstruct the human and the scene, which can then be rendered with novel human poses and views, from just a single in-the-wild video. Given a video captured by a moving camera, we train two NeRF models: a human NeRF model and a scene NeRF model. To train these models, we rely on existing methods to estimate the rough geometry of the human and the scene. Those rough geometry estimates allow us to create a warping field from the observation space to the canonical pose-independent space, in which we train the human model. Our method is able to learn subject-specific details, including cloth wrinkles and accessories, from just a 10-second video clip, and to provide high-quality renderings of the human under novel poses, from novel views, together with the background.
[ { "version": "v1", "created": "Wed, 23 Mar 2022 17:35:50 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 02:27:46 GMT" } ]
2022-09-23T00:00:00
[ [ "Jiang", "Wei", "" ], [ "Yi", "Kwang Moo", "" ], [ "Samei", "Golnoosh", "" ], [ "Tuzel", "Oncel", "" ], [ "Ranjan", "Anurag", "" ] ]
new_dataset
0.992511
2203.14277
Dhruvil Dave
Aneri Dalwadi and Dhruvil Dave
UAST: Unicode Aware Sanskrit Transliteration
9 pages. Source code and implementation are available on GitHub at https://github.com/dhruvildave/uast and https://github.com/aneri0x4f/uast-cli
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Devanagari is a writing system adopted by various languages like Sanskrit. The International Alphabet of Sanskrit Transliteration (IAST) is a transliteration scheme for the romanization of the Sanskrit language. IAST makes use of diacritics to represent various characters. On a computer, these are defined using the Unicode standard, which differs from how the Sanskrit language behaves at a fundamental level. This results in an issue that is encountered while designing typesetting software for Devanagari and IAST. We discuss the problems and provide a solution that solves the issue of incompatibilities between various transliteration and encoding schemes. Implementation and source code are available at https://github.com/dhruvildave/uast and https://github.com/aneri0x4f/uast-cli
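The mismatch the abstract describes can be seen with a naive per-character mapping; the table below is a tiny illustrative fragment, not the UAST scheme itself.

IAST_TO_DEVANAGARI = {"a": "अ", "ā": "आ", "i": "इ", "ī": "ई",
                      "k": "क्", "ṣ": "ष्", "ṃ": "ं"}

def transliterate(text):
    return "".join(IAST_TO_DEVANAGARI.get(ch, ch) for ch in text)

print(transliterate("ka"))   # gives क्अ, not the correct क: consonants carry an
                             # inherent vowel in Unicode Devanagari, which naive
                             # character-by-character mapping cannot capture

This failure mode is exactly the kind of fundamental incompatibility between Unicode's encoding model and the language's behavior that motivates a Unicode-aware scheme.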
[ { "version": "v1", "created": "Sun, 27 Mar 2022 11:17:00 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 03:22:56 GMT" } ]
2022-09-23T00:00:00
[ [ "Dalwadi", "Aneri", "" ], [ "Dave", "Dhruvil", "" ] ]
new_dataset
0.997029
2206.07850
Yiqun Wang
Yiqun Wang, Ivan Skorokhodov, Peter Wonka
HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details
To appear in NeurIPS 2022. Project page: https://github.com/yiqun-wang/HFS
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural rendering can be used to reconstruct implicit representations of shapes without 3D supervision. However, current neural surface reconstruction methods have difficulty learning high-frequency geometry details, so the reconstructed shapes are often over-smoothed. We develop HF-NeuS, a novel method to improve the quality of surface reconstruction in neural rendering. We follow recent work to model surfaces as signed distance functions (SDFs). First, we offer a derivation to analyze the relationship between the SDF, the volume density, the transparency function, and the weighting function used in the volume rendering equation and propose to model transparency as transformed SDF. Second, we observe that attempting to jointly encode high-frequency and low-frequency components in a single SDF leads to unstable optimization. We propose to decompose the SDF into a base function and a displacement function with a coarse-to-fine strategy to gradually increase the high-frequency details. Finally, we design an adaptive optimization strategy that makes the training process focus on improving those regions near the surface where the SDFs have artifacts. Our qualitative and quantitative results show that our method can reconstruct fine-grained surface details and obtain better surface reconstruction quality than the current state of the art. Code available at https://github.com/yiqun-wang/HFS.
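A hedged sketch of the coarse-to-fine decomposition described above: the SDF is a base (low-frequency) field plus a displacement field whose influence is ramped up during training. The specific base and displacement functions and the schedule here are illustrative only.

import numpy as np

def composed_sdf(x, base_sdf, disp_sdf, blend):
    """Signed distance = low-frequency base + weighted high-frequency displacement."""
    return base_sdf(x) + blend * disp_sdf(x)

base = lambda p: np.linalg.norm(p, axis=-1) - 1.0        # unit sphere (coarse shape)
disp = lambda p: 0.05 * np.sin(20.0 * p).sum(axis=-1)    # fine surface detail
pts = np.random.default_rng(4).normal(size=(3, 3))
for blend in (0.0, 0.5, 1.0):                            # coarse-to-fine schedule
    print(composed_sdf(pts, base, disp, blend))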
[ { "version": "v1", "created": "Wed, 15 Jun 2022 23:46:48 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 14:47:38 GMT" } ]
2022-09-23T00:00:00
[ [ "Wang", "Yiqun", "" ], [ "Skorokhodov", "Ivan", "" ], [ "Wonka", "Peter", "" ] ]
new_dataset
0.992905
2207.07285
Yiwei Ma
Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, Rongrong Ji
X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval
13 pages, 6 figures, ACMMM22
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video-text retrieval has been a crucial and fundamental task in multi-modal research. The development of video-text retrieval has been considerably promoted by large-scale multi-modal contrastive pre-training, which primarily focuses on coarse-grained or fine-grained contrast. However, cross-grained contrast, which is the contrast between coarse-grained representations and fine-grained representations, has rarely been explored in prior research. Compared with fine-grained or coarse-grained contrasts, cross-grained contrast calculates the correlation between coarse-grained features and each fine-grained feature, and is able to filter out the unnecessary fine-grained features guided by the coarse-grained feature during similarity calculation, thus improving the accuracy of retrieval. To this end, this paper presents a novel multi-grained contrastive model, namely X-CLIP, for video-text retrieval. However, another challenge lies in the similarity aggregation problem, which aims to aggregate fine-grained and cross-grained similarity matrices to an instance-level similarity. To address this challenge, we propose the Attention Over Similarity Matrix (AOSM) module to make the model focus on the contrast between essential frames and words, thus lowering the impact of unnecessary frames and words on retrieval results. With multi-grained contrast and the proposed AOSM module, X-CLIP achieves outstanding performance on five widely-used video-text retrieval datasets, including MSR-VTT (49.3 R@1), MSVD (50.4 R@1), LSMDC (26.1 R@1), DiDeMo (47.8 R@1) and ActivityNet (46.2 R@1). It outperforms the previous state-of-the-art by +6.3%, +6.6%, +11.1%, +6.7%, and +3.8% relative improvements on these benchmarks, demonstrating the superiority of multi-grained contrast and AOSM.
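One plausible reading of attention-over-similarity pooling, sketched in NumPy: softmax weights computed from the frame-word similarity matrix itself down-weight uninformative entries during aggregation. The exact AOSM formulation may differ; this is only an illustration of the idea.

import numpy as np

def softmax(a, axis):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_over_similarity(S, tau=0.1):
    """Pool a (frames x words) similarity matrix into one instance-level score."""
    per_frame = (softmax(S / tau, axis=1) * S).sum(axis=1)       # attend over words
    return (softmax(per_frame / tau, axis=0) * per_frame).sum()  # attend over frames

S = np.array([[0.9, 0.1], [0.2, 0.8], [0.0, 0.1]])
print(attention_over_similarity(S))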
[ { "version": "v1", "created": "Fri, 15 Jul 2022 04:23:42 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 12:27:09 GMT" } ]
2022-09-23T00:00:00
[ [ "Ma", "Yiwei", "" ], [ "Xu", "Guohai", "" ], [ "Sun", "Xiaoshuai", "" ], [ "Yan", "Ming", "" ], [ "Zhang", "Ji", "" ], [ "Ji", "Rongrong", "" ] ]
new_dataset
0.999259
2209.07928
Paulo Pirozelli
Paulo Pirozelli, Ais B. R. Castro, Ana Luiza C. de Oliveira, Andr\'e S. Oliveira, Fl\'avio N. Ca\c{c}\~ao, Igor C. Silveira, Jo\~ao G. M. Campos, Laura C. Motheo, Leticia F. Figueiredo, Lucas F. A. O. Pellicer, Marcelo A. Jos\'e, Marcos M. Jos\'e, Pedro de M. Ligabue, Ricardo S. Grava, Rodrigo M. Tavares, Vin\'icius B. Matos, Yan V. Sym, Anna H. R. Costa, Anarosa A. F. Brand\~ao, Denis D. Mau\'a, Fabio G. Cozman, Sarajane M. Peres
The BLue Amazon Brain (BLAB): A Modular Architecture of Services about the Brazilian Maritime Territory
null
AI: Modeling Oceans and Climate Change (IJCAI-ECAI), 2022
null
null
cs.AI cs.CL cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
We describe the first steps in the development of an artificial agent focused on the Brazilian maritime territory, a large region within the South Atlantic also known as the Blue Amazon. The "BLue Amazon Brain" (BLAB) integrates a number of services aimed at disseminating information about this region and its importance, functioning as a tool for environmental awareness. The main service provided by BLAB is a conversational facility that deals with complex questions about the Blue Amazon, called BLAB-Chat; its central component is a controller that manages several task-oriented natural language processing modules (e.g., question answering and summarizer systems). These modules have access to an internal data lake as well as to third-party databases. A news reporter (BLAB-Reporter) and a purposely-developed wiki (BLAB-Wiki) are also part of the BLAB service architecture. In this paper, we describe our current version of BLAB's architecture (interface, backend, web services, NLP modules, and resources) and comment on the challenges we have faced so far, such as the lack of training data and the scattered state of domain information. Solving these issues presents a considerable challenge in the development of artificial intelligence for technical domains.
[ { "version": "v1", "created": "Tue, 6 Sep 2022 18:32:08 GMT" } ]
2022-09-23T00:00:00
[ [ "Pirozelli", "Paulo", "" ], [ "Castro", "Ais B. R.", "" ], [ "de Oliveira", "Ana Luiza C.", "" ], [ "Oliveira", "André S.", "" ], [ "Cação", "Flávio N.", "" ], [ "Silveira", "Igor C.", "" ], [ "Campos", "João G. M.", "" ], [ "Motheo", "Laura C.", "" ], [ "Figueiredo", "Leticia F.", "" ], [ "Pellicer", "Lucas F. A. O.", "" ], [ "José", "Marcelo A.", "" ], [ "José", "Marcos M.", "" ], [ "Ligabue", "Pedro de M.", "" ], [ "Grava", "Ricardo S.", "" ], [ "Tavares", "Rodrigo M.", "" ], [ "Matos", "Vinícius B.", "" ], [ "Sym", "Yan V.", "" ], [ "Costa", "Anna H. R.", "" ], [ "Brandão", "Anarosa A. F.", "" ], [ "Mauá", "Denis D.", "" ], [ "Cozman", "Fabio G.", "" ], [ "Peres", "Sarajane M.", "" ] ]
new_dataset
0.975357
2209.10317
Roxanne Pinto
Eugenio Marchiori, Sarah de Haas, Sergey Volnov, Ronnie Falcon, Roxanne Pinto, Marco Zamarato
Android Private Compute Core Architecture
null
null
null
null
cs.CR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Android's Private Compute Core (PCC) is a secure, isolated environment within the operating system, that maintains separation from apps while enabling users and developers to maintain control over their data. It is backed by open-source code in the Android Framework introduced in Android 12. PCC allows features to communicate with a server to receive model updates and contribute to global model training through Private Compute Services (PCS), the core of which has been open sourced. PCC is part of the OS, and by virtue of being isolated, constrained, and trusted, it can host sophisticated ML features. The hosted features themselves, running inside PCC, can be closed source and updatable. In this way, PCC enables machine learning features to process ambient and OS-level data and improve over time, while restricting the availability of information about individual users to servers or apps.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 12:45:18 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 10:15:51 GMT" } ]
2022-09-23T00:00:00
[ [ "Marchiori", "Eugenio", "" ], [ "de Haas", "Sarah", "" ], [ "Volnov", "Sergey", "" ], [ "Falcon", "Ronnie", "" ], [ "Pinto", "Roxanne", "" ], [ "Zamarato", "Marco", "" ] ]
new_dataset
0.99948
2209.10503
Aleksey Fedoseev
Aleksey Fedoseev, Ahmed Baza, Ayush Gupta, Ekaterina Dorzhieva, Riya Neelesh Gujarathi, Dzmitry Tsetserukou
DandelionTouch: High Fidelity Haptic Rendering of Soft Objects in VR by a Swarm of Drones
Accepted to the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC). Copyright 20XX IEEE. Personal use of this material is permitted
null
null
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To achieve high fidelity haptic rendering of soft objects in a high mobility virtual environment, we propose a novel haptic display, DandelionTouch. The tactile actuators are delivered to the fingertips of the user by a swarm of drones. Users of DandelionTouch are capable of experiencing tactile feedback in a large space that is not limited by the device's working area. Importantly, they will not experience muscle fatigue during long interactions with virtual objects. Hand tracking and a swarm control algorithm allow guiding the swarm with hand motions and avoiding collisions inside the formation. Several topologies of the impedance connection between swarm units were investigated in this research. The experiment, in which drones performed a point-following task on a square trajectory in real-time, revealed that drones connected in a Star topology performed the trajectory with low mean positional error (RMSE decreased by 20.6% in comparison with other impedance topologies and by 40.9% in comparison with potential field-based swarm control). The achieved velocities of the drones in all formations with impedance behavior were 28% higher than for the swarm controlled with the potential field algorithm. Additionally, the perception of several vibrotactile patterns was evaluated in a user study with 7 participants. The study has shown that the proposed combination of temporal delay and frequency modulation allows users to successfully recognize the surface property and motion direction in VR simultaneously (mean recognition rate of 70%, maximum of 93%). DandelionTouch suggests a new type of haptic feedback in VR systems where no hand-held or wearable interface is required.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 16:58:14 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 11:43:51 GMT" } ]
2022-09-23T00:00:00
[ [ "Fedoseev", "Aleksey", "" ], [ "Baza", "Ahmed", "" ], [ "Gupta", "Ayush", "" ], [ "Dorzhieva", "Ekaterina", "" ], [ "Gujarathi", "Riya Neelesh", "" ], [ "Tsetserukou", "Dzmitry", "" ] ]
new_dataset
0.997521
2209.10610
Igor Sedl\'ar
Igor Sedl\'ar and Johann J. Wannenburg
Embedding Kozen-Tiuryn Logic into Residuated One-Sorted Kleene Algebra with Tests
null
I. Sedl\'ar and J.J.~Wannenburg: Embedding Kozen-Tiuryn Logic into Residuated One-Sorted Kleene Algebra with Tests. In: A. Ciabattoni, E. Pimentel, R. de Queiroz (Eds.): Proc. WoLLIC 2022, pp. 221-236. LNCS 13468. Springer, 2022
10.1007/978-3-031-15298-6_14
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Kozen and Tiuryn have introduced the substructural logic $\mathsf{S}$ for reasoning about correctness of while programs (ACM TOCL, 2003). The logic $\mathsf{S}$ distinguishes between tests and partial correctness assertions, representing the latter by special implicational formulas. Kozen and Tiuryn's logic extends Kleene algebra with tests, where partial correctness assertions are represented by equations, not terms. Kleene algebra with codomain, $\mathsf{KAC}$, is a one-sorted alternative to Kleene algebra with tests that expands Kleene algebra with an operator that allows one to construct a Boolean subalgebra of tests. In this paper we show that Kozen and Tiuryn's logic embeds into the equational theory of the expansion of $\mathsf{KAC}$ with residuals of Kleene algebra multiplication and the upper adjoint of the codomain operator.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 19:14:11 GMT" } ]
2022-09-23T00:00:00
[ [ "Sedlár", "Igor", "" ], [ "Wannenburg", "Johann J.", "" ] ]
new_dataset
0.997807
2209.10687
Stephanie Newdick
Stephanie Newdick, Nitin Ongole, Tony G. Chen, Edward Schmerling, Mark R. Cutkosky, Marco Pavone
Motion Planning for a Climbing Robot with Stochastic Grasps
7 pages, 7 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion planning for a multi-limbed climbing robot must consider the robot's posture, joint torques, and how it uses contact forces to interact with its environment. This paper focuses on motion planning for a robot that uses nontraditional locomotion to explore unpredictable environments such as martian caves. Our robotic concept, ReachBot, uses extendable and retractable booms as limbs to achieve a large reachable workspace while climbing. Each extendable boom is capped by a microspine gripper designed for grasping rocky surfaces. ReachBot leverages its large workspace to navigate around obstacles, over crevasses, and through challenging terrain. Our planning approach must be versatile to accommodate variable terrain features and robust to mitigate risks from the stochastic nature of grasping with spines. In this paper, we introduce a graph traversal algorithm to select a discrete sequence of grasps based on available terrain features suitable for grasping. This discrete plan is complemented by a decoupled motion planner that considers the alternating phases of body movement and end-effector movement, using a combination of sampling-based planning and sequential convex programming to optimize individual phases. We use our motion planner to plan a trajectory across a simulated 2D cave environment with at least 95% probability of success and demonstrate improved robustness over a baseline trajectory. Finally, we verify our motion planning algorithm through experimentation on a 2D planar prototype.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 22:25:11 GMT" } ]
2022-09-23T00:00:00
[ [ "Newdick", "Stephanie", "" ], [ "Ongole", "Nitin", "" ], [ "Chen", "Tony G.", "" ], [ "Schmerling", "Edward", "" ], [ "Cutkosky", "Mark R.", "" ], [ "Pavone", "Marco", "" ] ]
new_dataset
0.994579
2209.10733
Xinli Xu
Xinli Xu, Shaocong Dong, Lihe Ding, Jie Wang, Tingfa Xu, Jianan Li
FusionRCNN: LiDAR-Camera Fusion for Two-stage 3D Object Detection
7 pages, 3 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D object detection with multi-sensors is essential for an accurate and reliable perception system in autonomous driving and robotics. Existing 3D detectors significantly improve accuracy by adopting a two-stage paradigm which relies merely on LiDAR point clouds for 3D proposal refinement. Though impressive, the sparsity of point clouds, especially for points far away, makes it difficult for the LiDAR-only refinement module to accurately recognize and locate objects. To address this problem, we propose a novel multi-modality two-stage approach named FusionRCNN, which effectively and efficiently fuses point clouds and camera images in the Regions of Interest (RoI). FusionRCNN adaptively integrates both sparse geometry information from LiDAR and dense texture information from the camera in a unified attention mechanism. Specifically, it first utilizes RoIPooling to obtain an image set with a unified size and gets the point set by sampling raw points within proposals in the RoI extraction step; it then leverages intra-modality self-attention to enhance the domain-specific features, followed by a well-designed cross-attention to fuse the information from the two modalities. FusionRCNN is fundamentally plug-and-play and supports different one-stage methods with almost no architectural changes. Extensive experiments on the KITTI and Waymo benchmarks demonstrate that our method significantly boosts the performance of popular detectors. Remarkably, FusionRCNN significantly improves the strong SECOND baseline by 6.14% mAP on Waymo, and outperforms competing two-stage approaches. Code will be released soon at https://github.com/xxlbigbrother/Fusion-RCNN.
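A bare-bones sketch of the RoI fusion step, assuming a single attention head and omitting the learned query/key/value projections a real implementation would use:

import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(point_feats, image_feats):
    """RoI point features (queries) attend to RoI image features (keys/values),
    fusing dense texture into sparse geometry with a residual connection."""
    d = point_feats.shape[-1]
    attn = softmax(point_feats @ image_feats.T / np.sqrt(d))
    return point_feats + attn @ image_feats

rng = np.random.default_rng(5)
fused = cross_attention(rng.normal(size=(64, 32)), rng.normal(size=(100, 32)))
print(fused.shape)   # (64, 32): one fused vector per sampled point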
[ { "version": "v1", "created": "Thu, 22 Sep 2022 02:07:25 GMT" } ]
2022-09-23T00:00:00
[ [ "Xu", "Xinli", "" ], [ "Dong", "Shaocong", "" ], [ "Ding", "Lihe", "" ], [ "Wang", "Jie", "" ], [ "Xu", "Tingfa", "" ], [ "Li", "Jianan", "" ] ]
new_dataset
0.995676
2209.10770
Kun Hu
Kun Hu, Shaohui Mei, Wei Wang, Kaylena A. Ehgoetz Martens, Liang Wang, Simon J.G. Lewis, David D. Feng, Zhiyong Wang
Multi-level Adversarial Spatio-temporal Learning for Footstep Pressure based FoG Detection
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Freezing of gait (FoG) is one of the most common symptoms of Parkinson's disease, a neurodegenerative disorder of the central nervous system impacting millions of people around the world. To address the pressing need to improve the quality of treatment for FoG, devising a computer-aided detection and quantification tool for FoG has become increasingly important. As a non-invasive technique for collecting motion patterns, the footstep pressure sequences obtained from pressure-sensitive gait mats provide a great opportunity for evaluating FoG in the clinic and potentially in the home environment. In this study, FoG detection is formulated as a sequential modelling task and a novel deep learning architecture, namely the Adversarial Spatio-temporal Network (ASTN), is proposed to learn FoG patterns across multiple levels. A novel adversarial training scheme is introduced with a multi-level subject discriminator to obtain subject-independent FoG representations, which helps to reduce the over-fitting risk due to high inter-subject variance. As a result, robust FoG detection can be achieved for unseen subjects. The proposed scheme also sheds light on improving subject-level clinical studies in other scenarios, as it can be integrated with many existing deep architectures. To the best of our knowledge, this is one of the first studies of footstep pressure-based FoG detection, and the approach of utilizing ASTN is the first deep neural network architecture in pursuit of subject-independent representations. Experimental results on 393 trials collected from 21 subjects demonstrate the encouraging performance of the proposed ASTN for FoG detection, with an AUC of 0.85.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 04:08:23 GMT" } ]
2022-09-23T00:00:00
[ [ "Hu", "Kun", "" ], [ "Mei", "Shaohui", "" ], [ "Wang", "Wei", "" ], [ "Martens", "Kaylena A. Ehgoetz", "" ], [ "Wang", "Liang", "" ], [ "Lewis", "Simon J. G.", "" ], [ "Feng", "David D.", "" ], [ "Wang", "Zhiyong", "" ] ]
new_dataset
0.967416
2209.10804
Rui Liu
Rui Liu, Berrak Sisman, Guanglai Gao, Haizhou Li
Controllable Accented Text-to-Speech Synthesis
To be submitted for possible journal publication
null
null
null
cs.SD cs.CL eess.AS
http://creativecommons.org/licenses/by/4.0/
Accented text-to-speech (TTS) synthesis seeks to generate speech with an accent (L2) as a variant of the standard version (L1). Accented TTS synthesis is challenging, as L2 differs from L1 in terms of both phonetic rendering and prosody pattern. Furthermore, there is no easy solution to the control of the accent intensity in an utterance. In this work, we propose a neural TTS architecture that allows us to control the accent and its intensity during inference. This is achieved through three novel mechanisms: 1) an accent variance adaptor to model the complex accent variance with three prosody controlling factors, namely pitch, energy and duration; 2) an accent intensity modeling strategy to quantify the accent intensity; 3) a consistency constraint module to encourage the TTS system to render the expected accent intensity at a fine level. Experiments show that the proposed system attains superior performance to the baseline models in terms of accent rendering and intensity control. To the best of our knowledge, this is the first study of accented TTS synthesis with explicit intensity control.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 06:13:07 GMT" } ]
2022-09-23T00:00:00
[ [ "Liu", "Rui", "" ], [ "Sisman", "Berrak", "" ], [ "Gao", "Guanglai", "" ], [ "Li", "Haizhou", "" ] ]
new_dataset
0.971177
2209.10805
Kushagra Chatterjee
Kushagra Chatterjee and Prajakta Nimbhorkar
Popular Edges with Critical Nodes
Selected in ISAAC 2022 Conference
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
In the popular edge problem, the input is a bipartite graph $G = (A \cup B,E)$ where $A$ and $B$ denote a set of men and a set of women respectively, and each vertex in $A\cup B$ has a strict preference ordering over its neighbours. A matching $M$ in $G$ is said to be {\em popular} if there is no other matching $M'$ such that the number of vertices that prefer $M'$ to $M$ is more than the number of vertices that prefer $M$ to $M'$. The goal is to determine whether a given edge $e$ belongs to some popular matching in $G$. A polynomial-time algorithm for this problem appears in \cite{CK18}. We consider the popular edge problem when some men or women are prioritized or critical. A matching that matches all the critical nodes is termed a feasible matching. It follows from \cite{Kavitha14,Kavitha21,NNRS21,NN17} that, when $G$ admits a feasible matching, there always exists a matching that is popular among all feasible matchings. We give a polynomial-time algorithm for the popular edge problem in the presence of critical men or women. We also show that an analogous result does not hold in the many-to-one setting, known as the Hospital-Residents problem in the literature, even when there are no critical nodes.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 06:13:31 GMT" } ]
2022-09-23T00:00:00
[ [ "Chatterjee", "Kushagra", "" ], [ "Nimbhorkar", "Prajakta", "" ] ]
new_dataset
0.996286
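As a quick illustration of the popularity notion defined in the abstract above, the following minimal Python sketch counts, for two matchings, how many vertices prefer one to the other in a head-to-head vote; the preference-list encoding and all names are illustrative assumptions, not taken from the paper.

```python
def prefers(prefs, v, a, b):
    """True if vertex v strictly prefers partner a to partner b.

    Any partner on v's preference list beats being unmatched (None).
    """
    if a == b:
        return False
    if a is None:
        return False
    if b is None:
        return True
    return prefs[v].index(a) < prefs[v].index(b)


def is_more_popular(prefs, m_new, m_old):
    """Head-to-head vote: does matching m_new beat matching m_old?

    m_new, m_old: dicts mapping each vertex to its partner (or None).
    """
    votes_new = sum(prefers(prefs, v, m_new.get(v), m_old.get(v)) for v in prefs)
    votes_old = sum(prefers(prefs, v, m_old.get(v), m_new.get(v)) for v in prefs)
    return votes_new > votes_old


# Tiny example: two men, two women, everyone ranks both neighbours.
prefs = {"a1": ["b1", "b2"], "a2": ["b1", "b2"],
         "b1": ["a1", "a2"], "b2": ["a1", "a2"]}
m     = {"a1": "b1", "b1": "a1", "a2": "b2", "b2": "a2"}
m_alt = {"a1": "b2", "b2": "a1", "a2": "b1", "b1": "a2"}
print(is_more_popular(prefs, m_alt, m))  # False: the vote ties 2-2
```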
2209.10806
Slavomir Matuska
Slavomir Matuska, Martin Paralic and Robert Hudec
A Smart System for Sitting Posture Detection Based on Force Sensors and Mobile Application
19 pages, 13 figures, 3 tables, article in journal
Mobile Information Systems, vol. 2020, Article ID 6625797, 13 pages, 2020
10.1155/2020/6625797
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Employee health and wellbeing is a pressing topic in our fast-moving world. Employers lose money when their employees suffer from health problems and cannot work. A major problem is spinal pain caused by poor sitting posture on the office chair. This paper deals with the proposal and realization of a system for detecting incorrect sitting positions. The smart chair has six flexible force sensors. An Internet of Things (IoT) node based on Arduino connects these sensors into the system. The system detects wrong seating positions and notifies the users. In addition, we developed a mobile application to receive those notifications. The user gets feedback about sitting posture and additional statistical data. We defined simple rules for processing the sensor data to recognize wrong sitting postures. The data from the smart chairs are collected by a private cloud solution from QNAP and stored in a MongoDB database. We used the Node-RED application to implement the whole logic.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 06:13:37 GMT" } ]
2022-09-23T00:00:00
[ [ "Matuska", "Slavomir", "" ], [ "Paralic", "Martin", "" ], [ "Hudec", "Robert", "" ] ]
new_dataset
0.984683
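To make the "simple rules" mentioned in the abstract above concrete, here is a minimal sketch of threshold-based posture classification over six force-sensor readings; the sensor layout, names, and thresholds are hypothetical and not taken from the paper.

```python
# Hypothetical layout of the six flexible force sensors on the smart chair:
# four on the seat (left/right, front/rear) and two on the backrest.
SENSORS = ["seat_left_front", "seat_right_front",
           "seat_left_rear", "seat_right_rear",
           "back_left", "back_right"]

def classify_posture(readings, presence_thr=50, lean_ratio=1.5):
    """Classify a posture from raw force readings (arbitrary units).

    readings: dict mapping sensor name -> force value.
    Returns a posture label; all thresholds are illustrative only.
    """
    seat = [readings[s] for s in SENSORS[:4]]
    back = [readings[s] for s in SENSORS[4:]]
    if sum(seat) < presence_thr:
        return "empty"
    left = readings["seat_left_front"] + readings["seat_left_rear"]
    right = readings["seat_right_front"] + readings["seat_right_rear"]
    if left > lean_ratio * right:
        return "leaning_left"
    if right > lean_ratio * left:
        return "leaning_right"
    if sum(back) < presence_thr:
        return "slouching_forward"  # weight on the seat, no backrest contact
    return "correct"

print(classify_posture({"seat_left_front": 120, "seat_right_front": 40,
                        "seat_left_rear": 110, "seat_right_rear": 35,
                        "back_left": 60, "back_right": 55}))  # leaning_left
```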
2209.10817
Xiao Han
Xiao Han and Lu Yang
SQ-SLAM: Monocular Semantic SLAM Based on Superquadric Object Representation
Submitted to ICRA 2023
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object SLAM uses additional semantic information to detect and map objects in the scene, in order to improve the system's perception and map representation capabilities. Quadrics and cubes are often used to represent objects, but their single shape limits the accuracy of the object map and thus affects the application of downstream tasks. In this paper, we introduce superquadrics (SQ) with shape parameters into SLAM for representing objects, and propose a separate parameter estimation method that can accurately estimate object pose and adapt to different shapes. Furthermore, we present a lightweight data association strategy for correctly associating semantic observations in multiple views with object landmarks. We implement a monocular semantic SLAM system with real-time performance and conduct comprehensive experiments on public datasets. The results show that our method is able to build an accurate object map and has advantages in object representation. Code will be released upon acceptance.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 07:01:04 GMT" } ]
2022-09-23T00:00:00
[ [ "Han", "Xiao", "" ], [ "Yang", "Lu", "" ] ]
new_dataset
0.998997
2209.10846
Jinghan Peng
Yu Zheng, Yihao Chen, Jinghan Peng, Yajun Zhang, Min Liu, Minqiang Xu
The SpeakIn System Description for CNSRC2022
4 pages
null
null
null
cs.SD cs.AI eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report describes our speaker verification systems for the tasks of the CN-Celeb Speaker Recognition Challenge 2022 (CNSRC 2022). This challenge includes two tasks, namely speaker verification (SV) and speaker retrieval (SR). The SV task involves two tracks: a fixed track and an open track. In the fixed track, we only used CN-Celeb.T as the training set. For the open track of the SV task and the SR task, we added our open-source audio data. ResNet-based, RepVGG-based, and TDNN-based architectures were developed for this challenge. A global statistics pooling structure and an MQMHA pooling structure were used to aggregate the frame-level features across time to obtain utterance-level representations. We adopted AM-Softmax and AAM-Softmax combined with the Sub-Center method to classify the resulting embeddings. We also used the Large-Margin Fine-Tuning strategy to further improve model performance. In the backend, Sub-Mean and AS-Norm were used. In the fixed track of the SV task, our system was a fusion of five models, and two models were fused in the open track of the SV task. We used a single system in the SR task. Our approach leads to superior performance, achieving 1st place in the open track of the SV task, 2nd place in the fixed track of the SV task, and 3rd place in the SR task.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 08:17:47 GMT" } ]
2022-09-23T00:00:00
[ [ "Zheng", "Yu", "" ], [ "Chen", "Yihao", "" ], [ "Peng", "Jinghan", "" ], [ "Zhang", "Yajun", "" ], [ "Liu", "Min", "" ], [ "Xu", "Minqiang", "" ] ]
new_dataset
0.997601
2209.10848
Rui Liu
Yifan Hu, Pengkai Yin, Rui Liu, Feilong Bao and Guanglai Gao
MnTTS: An Open-Source Mongolian Text-to-Speech Synthesis Dataset and Accompanied Baseline
Accepted at the 2022 International Conference on Asian Language Processing (IALP2022)
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
This paper introduces a high-quality open-source text-to-speech (TTS) synthesis dataset for Mongolian, a low-resource language spoken by over 10 million people worldwide. The dataset, named MnTTS, consists of about 8 hours of transcribed audio recordings spoken by a 22-year-old professional female Mongolian announcer. It is the first publicly available dataset developed to promote Mongolian TTS applications in both academia and industry. In this paper, we share our experience by describing the dataset development procedures and the challenges we faced. To demonstrate the reliability of our dataset, we built a powerful non-autoregressive baseline system based on the FastSpeech2 model and the HiFi-GAN vocoder, and evaluated it using the subjective mean opinion score (MOS) and real time factor (RTF) metrics. Evaluation results show that the baseline system trained on our dataset achieves an MOS above 4 and an RTF of about $3.30\times10^{-1}$, which makes it applicable for practical use. The dataset, training recipe, and pretrained TTS models are freely available \footnote{\label{github}\url{https://github.com/walker-hyf/MnTTS}}.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 08:24:43 GMT" } ]
2022-09-23T00:00:00
[ [ "Hu", "Yifan", "" ], [ "Yin", "Pengkai", "" ], [ "Liu", "Rui", "" ], [ "Bao", "Feilong", "" ], [ "Gao", "Guanglai", "" ] ]
new_dataset
0.99986
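The real time factor (RTF) metric used in the abstract above has a standard definition, synthesis time divided by the duration of the generated audio; a minimal sketch with illustrative numbers, where the commented tts.synthesize() call is a hypothetical API, not the paper's code:

```python
import time

def real_time_factor(synthesis_seconds, audio_seconds):
    """RTF = time spent synthesizing / duration of the generated audio.

    RTF < 1 means faster than real time; the abstract reports about 0.33.
    """
    return synthesis_seconds / audio_seconds

# Illustrative measurement around a hypothetical synthesis call:
# start = time.perf_counter()
# wav = tts.synthesize(text)                    # hypothetical API
# elapsed = time.perf_counter() - start
# rtf = real_time_factor(elapsed, len(wav) / sample_rate)
print(real_time_factor(1.65, 5.0))  # 0.33, i.e. ~3.3e-1 as in the abstract
```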
2209.11048
Milica Petkovic
Milica I. Petkovic, Aleksandra Cvetkovic, Milan Narandzic, Dejan Vukobratovic
Mixed RF-VLC Relaying System with Radio-Access Diversity
Presented at 2019 28th Wireless and Optical Communications Conference (WOCC)
null
10.1109/WOCC.2019.8770633
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a statistical analysis of a mixed radio-frequency (RF)-visible light communications (VLC) relaying system, where outdoor millimeter-wave-based RF links are utilized to provide backhaul connectivity for indoor VLC broadcasting. The multiple RF links are assumed to communicate with the VLC access point through a decode-and-forward relay. Novel closed-form outage probability and average bit error rate expressions are derived and utilized to obtain numerical results. Monte Carlo simulations validate the presented numerical results, which are further used to examine the effects of system and channel parameters on system performance.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 14:44:06 GMT" } ]
2022-09-23T00:00:00
[ [ "Petkovic", "Milica I.", "" ], [ "Cvetkovic", "Aleksandra", "" ], [ "Narandzic", "Milan", "" ], [ "Vukobratovic", "Dejan", "" ] ]
new_dataset
0.986716
2209.11070
Milica Petkovic
Milica I. Petkovic, Aleksandra M. Cvetkovic, Milan Narandzic, Nestor D. Chatzidiamantis, Dejan Vukobratovic, George K. Karagiannidis
Mixed RF-VLC Relaying Systems for Interference-Sensitive Mobile Applications
Published in IEEE Transactions on Vehicular Technology
null
10.1109/TVT.2020.3007676
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to their immunity to Radio-Frequency (RF) interference, Visible Light Communications (VLC) are a promising technology for interference-sensitive applications such as medical data networks. In this paper, we investigate mixed RF-VLC relaying systems especially suited for this type of application with support for mobility. In this system setup, the end-user, who is assumed to be on a vehicle in dynamic movement, is served by an indoor VLC system, while the outdoor data traffic is conveyed through multiple backhaul RF links. Furthermore, it is assumed that a single backhaul RF link is activated by the mobile relay and, due to feedback delay, the RF link activation is based on outdated channel state information (CSI). The performance of this system is analyzed in terms of outage probability and bit error rate (BER), and novel closed-form analytical expressions are provided. Furthermore, the analysis is extended to the case where the average SNR over the RF links and/or the LED optical power is high, and approximate analytical expressions are derived which determine performance floors. Numerical results are provided which demonstrate that the utilization of multiple RF backhaul links can significantly improve overall RF-VLC system performance when outage/BER floors are avoided. This calls for a joint design of both subsystems. Additionally, the outdated CSI exploited for active RF link selection can significantly degrade system performance.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 15:01:19 GMT" } ]
2022-09-23T00:00:00
[ [ "Petkovic", "Milica I.", "" ], [ "Cvetkovic", "Aleksandra M.", "" ], [ "Narandzic", "Milan", "" ], [ "Chatzidiamantis", "Nestor D.", "" ], [ "Vukobratovic", "Dejan", "" ], [ "Karagiannidis", "George K.", "" ] ]
new_dataset
0.995103
2209.11180
Artur Grigorev
Khaled Saleh and Artur Grigorev and Adriana-Simona Mihaita
Traffic Accident Risk Forecasting using Contextual Vision Transformers
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recently, the problem of traffic accident risk forecasting has been receiving attention from the intelligent transportation systems community due to its significant impact on traffic clearance. This problem is commonly tackled in the literature with data-driven approaches that model the spatial and temporal incident impact, since these were shown to be crucial for traffic accident risk forecasting. To achieve this, most approaches build different architectures to capture spatio-temporal correlation features, making them inefficient for large traffic accident datasets. Thus, in this work, we propose a novel unified framework, namely a contextual vision transformer, that can be trained in an end-to-end fashion and can effectively reason about the spatial and temporal aspects of the problem while providing accurate traffic accident risk predictions. We evaluate and compare the performance of our proposed methodology against baseline approaches from the literature on two large-scale traffic accident datasets from two different geographical locations. The results show a significant improvement of roughly 2\% in RMSE score in comparison to previous state-of-the-art (SoTA) works. Moreover, our proposed approach outperforms the SoTA technique on both datasets while requiring 23x less computation.
[ { "version": "v1", "created": "Tue, 20 Sep 2022 23:38:06 GMT" } ]
2022-09-23T00:00:00
[ [ "Saleh", "Khaled", "" ], [ "Grigorev", "Artur", "" ], [ "Mihaita", "Adriana-Simona", "" ] ]
new_dataset
0.994538
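For reference, the RMSE score in which the abstract above reports its roughly 2% improvement is computed as follows; the risk values here are made up for illustration.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted risk values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy accident-risk values on a few spatial grid cells:
observed  = [0.0, 1.0, 2.0, 0.0]
predicted = [0.1, 0.8, 2.3, 0.0]
print(rmse(observed, predicted))  # ~0.187
```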
2209.11198
Pinaki Prasad Guha Neogi
Pinaki Prasad Guha Neogi
A Dive into WhatsApp's End-to-End Encryption
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We live in a generation where the world around us is witnessing technological revolutions every single day, and as a result, everything around us is being digitized with the touch of technology. To keep up with the pace of this technological revolution and help this progress reach its zenith, one of the most important aspects that needs to be taken care of is security. One of the biggest boons of technology in recent times has been the invention of smartphones. As smartphones became more popular, affordable and easily accessible, hundreds of free messaging applications were launched, but WhatsApp emerged as the ultimate winner of the race. This paper describes one of the most important and popular features of WhatsApp, the End-to-End (E2E) encryption system, which sets it apart from most other messaging applications and is one of the reasons it became so popular.
[ { "version": "v1", "created": "Mon, 5 Sep 2022 11:19:38 GMT" } ]
2022-09-23T00:00:00
[ [ "Neogi", "Pinaki Prasad Guha", "" ] ]
new_dataset
0.994906
2209.11214
Selvarajah Thuseethan Dr.
Selvarajah Thuseethan, Palanisamy Vigneshwaran, Joseph Charles and Chathrie Wimalasooriya
Siamese Network-based Lightweight Framework for Tomato Leaf Disease Recognition
10 pages
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Automatic tomato disease recognition from leaf images is vital to avoiding crop losses by applying control measures on time. Even though recent deep learning-based tomato disease recognition methods with classical training procedures have shown promising recognition results, they demand large amounts of labelled data and involve expensive training. The traditional deep learning models proposed for tomato disease recognition also consume considerable memory and storage because of their high number of parameters. While lightweight networks overcome some of these issues to a certain extent, they continue to show low performance and struggle to handle imbalanced data. In this paper, a novel Siamese network-based lightweight framework is proposed for automatic tomato leaf disease recognition. This framework achieves the highest accuracy of 96.97% on the tomato subset obtained from the PlantVillage dataset and 95.48% on the Taiwan tomato leaf disease dataset. Experimental results further confirm that the proposed framework is effective with imbalanced and small data. The backbone deep network integrated with this framework is lightweight, with approximately 2.9629 million trainable parameters, far fewer than in existing lightweight deep networks.
[ { "version": "v1", "created": "Sun, 18 Sep 2022 16:08:07 GMT" } ]
2022-09-23T00:00:00
[ [ "Thuseethan", "Selvarajah", "" ], [ "Vigneshwaran", "Palanisamy", "" ], [ "Charles", "Joseph", "" ], [ "Wimalasooriya", "Chathrie", "" ] ]
new_dataset
0.984866
2209.11228
Gyungin Shin
Gyungin Shin, Weidi Xie, Samuel Albanie
NamedMask: Distilling Segmenters from Complementary Foundation Models
Tech report. Code: https://github.com/NoelShin/namedmask
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this work is to segment and name regions of images without access to pixel-level labels during training. To tackle this task, we construct segmenters by distilling the complementary strengths of two foundation models. The first, CLIP (Radford et al. 2021), exhibits the ability to assign names to image content but lacks an accessible representation of object structure. The second, DINO (Caron et al. 2021), captures the spatial extent of objects but has no knowledge of object names. Our method, termed NamedMask, begins by using CLIP to construct category-specific archives of images. These images are pseudo-labelled with a category-agnostic salient object detector bootstrapped from DINO, then refined by category-specific segmenters using the CLIP archive labels. Thanks to the high quality of the refined masks, we show that a standard segmentation architecture trained on these archives with appropriate data augmentation achieves impressive semantic segmentation abilities for both single-object and multi-object images. As a result, our proposed NamedMask performs favourably against a range of prior work on five benchmarks including the VOC2012, COCO and large-scale ImageNet-S datasets.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 17:59:55 GMT" } ]
2022-09-23T00:00:00
[ [ "Shin", "Gyungin", "" ], [ "Xie", "Weidi", "" ], [ "Albanie", "Samuel", "" ] ]
new_dataset
0.970215
1807.10463
Yang Su Mr.
Yang Su, Yansong Gao, Michael Chesser, Omid Kavehei, Alanson Sample and Damith C.Ranasinghe
SecuCode: Intrinsic PUF Entangled Secure Wireless Code Dissemination for Computational RFID Devices
Accepted to the IEEE Transactions on Dependable and Secure Computing
IEEE Transactions on Dependable and Secure Computing , Early Access, 2019, pp.1-1
10.1109/TDSC.2019.2934438
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The simplicity of deployment and perpetual operation of energy harvesting devices provides a compelling proposition for a new class of edge devices for the Internet of Things. In particular, Computational Radio Frequency Identification (CRFID) devices are an emerging class of battery-free, computational, sensing-enhanced devices that harvest all of their energy for operation. Despite wireless connectivity and powering, secure wireless firmware updates remain an open challenge for CRFID devices due to: intermittent powering, limited computational capabilities, and the absence of a supervisory operating system. We present, for the first time, a secure wireless code dissemination (SecuCode) mechanism for CRFIDs by entangling a device-intrinsic hardware security primitive, the Static Random Access Memory Physical Unclonable Function (SRAM PUF), with a firmware update protocol. The design of SecuCode: i) overcomes the resource-constrained and intermittently powered nature of CRFID devices; ii) is fully compatible with existing communication protocols employed by CRFID devices, in particular the ISO-18000-6C protocol; and iii) is built upon a standard and industry-compliant firmware compilation and update method, realized by extending a recent framework for firmware updates provided by Texas Instruments. We build an end-to-end SecuCode implementation and conduct extensive experiments to demonstrate standards compliance, evaluate performance, and assess security.
[ { "version": "v1", "created": "Fri, 27 Jul 2018 07:29:04 GMT" }, { "version": "v2", "created": "Thu, 25 Jul 2019 15:42:08 GMT" }, { "version": "v3", "created": "Sat, 17 Aug 2019 03:43:11 GMT" }, { "version": "v4", "created": "Wed, 21 Sep 2022 06:25:43 GMT" } ]
2022-09-22T00:00:00
[ [ "Su", "Yang", "" ], [ "Gao", "Yansong", "" ], [ "Chesser", "Michael", "" ], [ "Kavehei", "Omid", "" ], [ "Sample", "Alanson", "" ], [ "Ranasinghe", "Damith C.", "" ] ]
new_dataset
0.988705
2109.02122
Nghia Doan Mr.
Nghia Doan, Seyyed Ali Hashemi, Marco Mondelli, and Warren J. Gross
Decoding Reed-Muller Codes with Successive Codeword Permutations
Accepted for publication in IEEE Transactions on Communications
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel recursive list decoding (RLD) algorithm for Reed-Muller (RM) codes based on successive permutations (SP) of the codeword is presented. A low-complexity SP scheme applied to a subset of the symmetry group of RM codes is first proposed to carefully select a good codeword permutation on the fly. Then, the proposed SP technique is integrated into an improved RLD algorithm that initializes different decoding paths with random codeword permutations, which are sampled from the full symmetry group of RM codes. Finally, efficient latency and complexity reduction schemes are introduced that virtually preserve the error-correction performance of the proposed decoder. Simulation results demonstrate that at the target frame error rate of $10^{-3}$ for the RM code of length $256$ with $163$ information bits, the proposed decoder reduces the computational complexity by $6\%$ and the decoding latency by $22\%$ compared with the state-of-the-art semi-parallel simplified successive-cancellation decoder with fast Hadamard transform (SSC-FHT) that uses $96$ permutations from the full symmetry group of RM codes, while largely maintaining the error-correction performance and memory consumption of the semi-parallel permuted SSC-FHT decoder.
[ { "version": "v1", "created": "Sun, 5 Sep 2021 16:53:07 GMT" }, { "version": "v2", "created": "Sun, 23 Jan 2022 19:38:31 GMT" }, { "version": "v3", "created": "Sat, 29 Jan 2022 13:16:44 GMT" }, { "version": "v4", "created": "Thu, 21 Apr 2022 23:38:41 GMT" }, { "version": "v5", "created": "Wed, 21 Sep 2022 00:25:13 GMT" } ]
2022-09-22T00:00:00
[ [ "Doan", "Nghia", "" ], [ "Hashemi", "Seyyed Ali", "" ], [ "Mondelli", "Marco", "" ], [ "Gross", "Warren J.", "" ] ]
new_dataset
0.978308
2109.14934
Reza Khanmohammadi
Reza Khanmohammadi, Mitra Sadat Mirshafiee, Yazdan Rezaee Jouryabi, Seyed Abolghasem Mirroshandel
Prose2Poem: The Blessing of Transformers in Translating Prose to Persian Poetry
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Persian poetry has consistently expressed its philosophy, wisdom, speech, and rationale on the basis of its couplets, making it an enigmatic language of its own to both native and non-native speakers. Nevertheless, the noticeable gap between Persian prose and poetry has left the two bodies of literature without a connecting medium. Having curated a parallel corpus of prose and their equivalent poems, we introduce a novel Neural Machine Translation (NMT) approach to translate prose into ancient Persian poetry using transformer-based language models in an extremely low-resource setting. More specifically, we trained a Transformer model from scratch to obtain initial translations and pretrained different variations of BERT to obtain final translations. To address the challenge of using masked language modelling under poeticness criteria, we heuristically joined the two models and generated valid poems in terms of automatic and human assessments. Final results demonstrate the eligibility and creativity of our novel heuristically aided approach among literature professionals and non-professionals in generating novel Persian poems.
[ { "version": "v1", "created": "Thu, 30 Sep 2021 09:04:11 GMT" }, { "version": "v2", "created": "Fri, 1 Oct 2021 07:04:49 GMT" }, { "version": "v3", "created": "Sat, 27 Nov 2021 07:44:05 GMT" }, { "version": "v4", "created": "Wed, 21 Sep 2022 16:29:23 GMT" } ]
2022-09-22T00:00:00
[ [ "Khanmohammadi", "Reza", "" ], [ "Mirshafiee", "Mitra Sadat", "" ], [ "Jouryabi", "Yazdan Rezaee", "" ], [ "Mirroshandel", "Seyed Abolghasem", "" ] ]
new_dataset
0.999513
2112.02221
Nazeef Ul Haq
Nazeef Ul Haq and Muhammad Moazam Fraz and Tufail Sajjad Shah Hashmi and Muhammad Shahzad
Orientation Aware Weapons Detection In Visual Data : A Benchmark Dataset
Submitted to a journal
null
10.1007/s00607-022-01095-0
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic detection of weapons is important for improving the security and wellbeing of individuals; nonetheless, it is a difficult task due to the large variety in the size, shape and appearance of weapons. Viewpoint variations and occlusion also make this task more difficult. Further, current object detection algorithms process rectangular areas, but a slender and long rifle may cover only a small portion of such an area while the rest contains irrelevant details. To overcome these problems, we propose a CNN architecture for Orientation Aware Weapons Detection, which provides an oriented bounding box with improved weapons detection performance. The proposed model provides orientation by treating the angle not only as a classification problem, dividing the angle into eight classes, but also as a regression problem. For training our model for weapon detection, a new dataset comprising a total of 6400 weapon images was gathered from the web and then manually annotated with position-oriented bounding boxes. Our dataset provides not only the oriented bounding box as ground truth but also the horizontal bounding box. We also provide our dataset in multiple formats of modern object detectors for further research in this area. The proposed model is evaluated on this dataset, and a comparative analysis with off-the-shelf object detectors yields superior performance of the proposed model, measured with standard evaluation strategies. The dataset and the model implementation are made publicly available at this link: https://bit.ly/2TyZICF.
[ { "version": "v1", "created": "Sat, 4 Dec 2021 02:21:02 GMT" } ]
2022-09-22T00:00:00
[ [ "Haq", "Nazeef Ul", "" ], [ "Fraz", "Muhammad Moazam", "" ], [ "Hashmi", "Tufail Sajjad Shah", "" ], [ "Shahzad", "Muhammad", "" ] ]
new_dataset
0.977165
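A minimal sketch of the angle encoding described in the abstract above: the orientation angle is discretized into eight classes for the classification branch, while the residual offset within a bin is what a regression branch would predict. The bin layout is an assumption for illustration, not the paper's exact scheme.

```python
NUM_CLASSES = 8
BIN_WIDTH = 360.0 / NUM_CLASSES  # 45 degrees per class

def angle_to_class_and_offset(angle_deg):
    """Map an orientation angle to one of 8 classes plus a residual offset.

    The class index is the coarse label (classification branch); the signed
    offset from the bin centre is what a regression branch would predict.
    """
    a = angle_deg % 360.0
    cls = int(a // BIN_WIDTH)
    offset = a - (cls * BIN_WIDTH + BIN_WIDTH / 2)
    return cls, offset

def class_and_offset_to_angle(cls, offset):
    """Invert the encoding to recover the continuous angle."""
    return (cls * BIN_WIDTH + BIN_WIDTH / 2 + offset) % 360.0

cls, off = angle_to_class_and_offset(100.0)
print(cls, off)                             # 2, -12.5
print(class_and_offset_to_angle(cls, off))  # 100.0
```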
2112.13890
Peiyan Dong
Zhenglun Kong, Peiyan Dong, Xiaolong Ma, Xin Meng, Mengshu Sun, Wei Niu, Xuan Shen, Geng Yuan, Bin Ren, Minghai Qin, Hao Tang, Yanzhi Wang
SPViT: Enabling Faster Vision Transformers via Soft Token Pruning
ECCV 2022
null
null
null
cs.CV cs.AI cs.AR cs.LG
http://creativecommons.org/licenses/by/4.0/
Recently, the Vision Transformer (ViT) has continuously established new milestones in the computer vision field, while its high computation and memory cost hinders its spread in industrial production. Pruning, a traditional model compression paradigm for hardware efficiency, has been widely applied to various DNN structures. Nevertheless, it remains unclear how to perform pruning tailored to the ViT structure. Considering three key points, the structural characteristics, the internal data pattern of ViTs, and the related edge-device deployment, we leverage the input token sparsity and propose a computation-aware soft pruning framework, which can be set up on vanilla Transformers of both flat and CNN-type structures, such as the Pooling-based ViT (PiT). More concretely, we design a dynamic attention-based multi-head token selector, a lightweight module for adaptive instance-wise token selection. We further introduce a soft pruning technique, which integrates the less informative tokens generated by the selector module into a package token that participates in subsequent calculations rather than being completely discarded. Our framework handles the trade-off between accuracy and the computation constraints of specific edge devices through our proposed computation-aware training strategy. Experimental results show that our framework significantly reduces the computation cost of ViTs while maintaining comparable performance on image classification. Moreover, our framework can guarantee that the identified model meets the resource specifications of mobile devices and FPGAs, and can even achieve real-time execution of DeiT-T on mobile platforms. For example, our method reduces the latency of DeiT-T to 26 ms (26%$\sim$41% superior to existing works) on a mobile device with 0.25%$\sim$4% higher top-1 accuracy on ImageNet.
[ { "version": "v1", "created": "Mon, 27 Dec 2021 20:15:25 GMT" }, { "version": "v2", "created": "Tue, 20 Sep 2022 22:20:30 GMT" } ]
2022-09-22T00:00:00
[ [ "Kong", "Zhenglun", "" ], [ "Dong", "Peiyan", "" ], [ "Ma", "Xiaolong", "" ], [ "Meng", "Xin", "" ], [ "Sun", "Mengshu", "" ], [ "Niu", "Wei", "" ], [ "Shen", "Xuan", "" ], [ "Yuan", "Geng", "" ], [ "Ren", "Bin", "" ], [ "Qin", "Minghai", "" ], [ "Tang", "Hao", "" ], [ "Wang", "Yanzhi", "" ] ]
new_dataset
0.990744
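A conceptual sketch of the soft pruning idea described in the abstract above: less informative tokens are not discarded but fused into a single package token that continues through subsequent layers. The scoring and aggregation rule here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_prune(tokens, scores, keep):
    """Keep the `keep` highest-scoring tokens; fuse the rest into one token.

    tokens: (n, d) array of token embeddings.
    scores: (n,) informativeness scores (e.g. from an attention-based selector).
    Returns a (keep + 1, d) array: kept tokens plus a score-weighted package token.
    """
    order = np.argsort(scores)[::-1]
    kept, dropped = order[:keep], order[keep:]
    w = scores[dropped]
    w = w / (w.sum() + 1e-8)  # normalize the weights of the pruned tokens
    package = (w[:, None] * tokens[dropped]).sum(axis=0, keepdims=True)
    return np.concatenate([tokens[kept], package], axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 64))          # 16 tokens, 64-dim embeddings
s = rng.uniform(size=16)               # selector scores
print(soft_prune(x, s, keep=8).shape)  # (9, 64)
```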
2201.11438
Sanket Biswas
Sanket Biswas, Ayan Banerjee, Josep Llad\'os, and Umapada Pal
DocSegTr: An Instance-Level End-to-End Document Image Segmentation Transformer
Preprint
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Understanding documents with rich layouts is an essential step towards information extraction. Business intelligence processes often require the extraction of useful semantic content from documents at a large scale for subsequent decision-making tasks. In this context, instance-level segmentation of different document objects (title, sections, figures, etc.) has emerged as an interesting problem for the document analysis and understanding community. To advance research in this direction, we present a transformer-based model called \emph{DocSegTr} for end-to-end instance segmentation of complex layouts in document images. The method adapts a twin attention module for semantic reasoning, which makes it highly computationally efficient compared with the state-of-the-art. To the best of our knowledge, this is the first work on transformer-based document segmentation. Extensive experimentation on competitive benchmarks like PubLayNet, PRIMA, Historical Japanese (HJ) and TableBank demonstrates that our model achieves comparable or better segmentation performance than the existing state-of-the-art approaches, with average precisions of 89.4, 40.3, 83.4 and 93.3. This simple and flexible framework could serve as a promising baseline for instance-level recognition tasks in document images.
[ { "version": "v1", "created": "Thu, 27 Jan 2022 10:50:22 GMT" }, { "version": "v2", "created": "Wed, 21 Sep 2022 15:58:41 GMT" } ]
2022-09-22T00:00:00
[ [ "Biswas", "Sanket", "" ], [ "Banerjee", "Ayan", "" ], [ "Lladós", "Josep", "" ], [ "Pal", "Umapada", "" ] ]
new_dataset
0.991504
2202.07036
Felix Ott
Felix Ott and David R\"ugamer and Lucas Heublein and Tim Hamann and Jens Barth and Bernd Bischl and Christopher Mutschler
Benchmarking Online Sequence-to-Sequence and Character-based Handwriting Recognition from IMU-Enhanced Pens
Accepted for International Journal on Document Analysis and Recognition (IJDAR)
null
10.1007/s10032-022-00415-6
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose. Handwriting is one of the most frequently occurring patterns in everyday life and with it come challenging applications such as handwriting recognition (HWR), writer identification, and signature verification. In contrast to offline HWR that only uses spatial information (i.e., images), online HWR (OnHWR) uses richer spatio-temporal information (i.e., trajectory data or inertial data). While there exist many offline HWR datasets, there is only little data available for the development of OnHWR methods on paper as it requires hardware-integrated pens. Methods. This paper presents data and benchmark models for real-time sequence-to-sequence (seq2seq) learning and single character-based recognition. Our data is recorded by a sensor-enhanced ballpoint pen, yielding sensor data streams from triaxial accelerometers, a gyroscope, a magnetometer and a force sensor at 100 Hz. We propose a variety of datasets including equations and words for both the writer-dependent and writer-independent tasks. Our datasets allow a comparison between classical OnHWR on tablets and on paper with sensor-enhanced pens. We provide an evaluation benchmark for seq2seq and single character-based HWR using recurrent and temporal convolutional networks and Transformers combined with a connectionist temporal classification (CTC) loss and cross-entropy (CE) losses. Results. Our convolutional network combined with BiLSTMs outperforms Transformer-based architectures, is on par with InceptionTime for sequence-based classification tasks, and yields better results compared to 28 state-of-the-art techniques. Time-series augmentation methods improve the sequence-based task, and we show that CE variants can improve the single classification task.
[ { "version": "v1", "created": "Mon, 14 Feb 2022 20:55:33 GMT" }, { "version": "v2", "created": "Sun, 4 Sep 2022 21:38:54 GMT" }, { "version": "v3", "created": "Wed, 21 Sep 2022 15:17:22 GMT" } ]
2022-09-22T00:00:00
[ [ "Ott", "Felix", "" ], [ "Rügamer", "David", "" ], [ "Heublein", "Lucas", "" ], [ "Hamann", "Tim", "" ], [ "Barth", "Jens", "" ], [ "Bischl", "Bernd", "" ], [ "Mutschler", "Christopher", "" ] ]
new_dataset
0.999753
2202.13558
Ziqing Yang
Ziqing Yang, Zihang Xu, Yiming Cui, Baoxin Wang, Min Lin, Dayong Wu, Zhigang Chen
CINO: A Chinese Minority Pre-trained Language Model
Accepted to COLING 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multilingual pre-trained language models have shown impressive performance on cross-lingual tasks. This greatly facilitates the application of natural language processing to low-resource languages. However, there are still some languages on which current multilingual models do not perform well. In this paper, we propose CINO (Chinese Minority Pre-trained Language Model), a multilingual pre-trained language model for Chinese minority languages. It covers Standard Chinese, Yue Chinese, and six other ethnic minority languages. To evaluate the cross-lingual ability of the multilingual model on ethnic minority languages, we collect documents from Wikipedia and news websites, and construct two text classification datasets, WCM (Wiki-Chinese-Minority) and CMNews (Chinese-Minority-News). We show that CINO notably outperforms the baselines on various classification tasks. The CINO model and the datasets are publicly available at http://cino.hfl-rc.com.
[ { "version": "v1", "created": "Mon, 28 Feb 2022 06:02:06 GMT" }, { "version": "v2", "created": "Wed, 21 Sep 2022 01:43:35 GMT" } ]
2022-09-22T00:00:00
[ [ "Yang", "Ziqing", "" ], [ "Xu", "Zihang", "" ], [ "Cui", "Yiming", "" ], [ "Wang", "Baoxin", "" ], [ "Lin", "Min", "" ], [ "Wu", "Dayong", "" ], [ "Chen", "Zhigang", "" ] ]
new_dataset
0.996896
2203.04548
Omrit Filtser
Omrit Filtser, Mayank Goswami, Joseph S.B. Mitchell, Valentin Polishchuk
On Flipping the Fr\'{e}chet distance
null
null
null
null
cs.CG
http://creativecommons.org/licenses/by/4.0/
The classical and extensively-studied Fr\'echet distance between two curves is defined as an inf max, where the infimum is over all traversals of the curves, and the maximum is over all concurrent positions of the two agents. In this article we investigate a "flipped" Fr\'echet measure defined by a sup min -- the supremum is over all traversals of the curves, and the minimum is over all concurrent positions of the two agents. This measure produces a notion of "social distance" between two curves (or general domains), where agents traverse curves while trying to stay as far apart as possible. We first study the flipped Fr\'echet measure between two polygonal curves in one and two dimensions, providing conditional lower bounds and matching algorithms. We then consider this measure on polygons, where it denotes the minimum distance that two agents can maintain while restricted to travel in or on the boundary of the same polygon. We investigate several variants of the problem in this setting, for some of which we provide linear time algorithms. Finally, we consider this measure on graphs. We draw connections between our proposed flipped Fr\'echet measure and existing related work in computational geometry, hoping that our new measure may spawn investigations akin to those performed for the Fr\'echet distance, and into further interesting problems that arise.
[ { "version": "v1", "created": "Wed, 9 Mar 2022 06:48:11 GMT" }, { "version": "v2", "created": "Wed, 21 Sep 2022 08:43:14 GMT" } ]
2022-09-22T00:00:00
[ [ "Filtser", "Omrit", "" ], [ "Goswami", "Mayank", "" ], [ "Mitchell", "Joseph S. B.", "" ], [ "Polishchuk", "Valentin", "" ] ]
new_dataset
0.956484
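Written out, the two measures contrasted in the abstract above differ only in swapping the order of optimization; a sketch of the definitions for curves $f, g : [0,1] \to \mathbb{R}^d$, where $\alpha, \beta$ range over monotone reparametrizations of $[0,1]$ (the subscript SD for "social distance" is our label, not the paper's notation):

```latex
% Classical Fréchet distance: an inf-max over traversals.
d_F(f, g) = \inf_{\alpha, \beta} \; \max_{t \in [0,1]} \; \lVert f(\alpha(t)) - g(\beta(t)) \rVert

% Flipped ("social distance") measure: a sup-min over traversals.
d_{SD}(f, g) = \sup_{\alpha, \beta} \; \min_{t \in [0,1]} \; \lVert f(\alpha(t)) - g(\beta(t)) \rVert
```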
2206.08614
Luigi Celona
Daniel Vera Nieto and Luigi Celona and Clara Fernandez-Labrador
Understanding Aesthetics with Language: A Photo Critique Dataset for Aesthetic Assessment
Accepted to NeurIPS Track on Datasets and Benchmarks 2022
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by/4.0/
Computational inference of aesthetics is an ill-defined task due to its subjective nature. Many datasets have been proposed to tackle the problem by providing pairs of images and aesthetic scores based on human ratings. However, humans are better at expressing their opinion, taste, and emotions by means of language rather than summarizing them in a single number. In fact, photo critiques provide much richer information as they reveal how and why users rate the aesthetics of visual stimuli. In this regard, we propose the Reddit Photo Critique Dataset (RPCD), which contains tuples of image and photo critiques. RPCD consists of 74K images and 220K comments and is collected from a Reddit community used by hobbyists and professional photographers to improve their photography skills by leveraging constructive community feedback. The proposed dataset differs from previous aesthetics datasets mainly in three aspects, namely (i) the large scale of the dataset and the extension of the comments criticizing different aspects of the image, (ii) it contains mostly UltraHD images, and (iii) it can easily be extended to new data as it is collected through an automatic pipeline. To the best of our knowledge, in this work, we propose the first attempt to estimate the aesthetic quality of visual stimuli from the critiques. To this end, we exploit the polarity of the sentiment of criticism as an indicator of aesthetic judgment. We demonstrate how sentiment polarity correlates positively with the aesthetic judgment available for two aesthetic assessment benchmarks. Finally, we experiment with several models by using the sentiment scores as a target for ranking images. Dataset and baselines are available (https://github.com/mediatechnologycenter/aestheval).
[ { "version": "v1", "created": "Fri, 17 Jun 2022 08:16:20 GMT" }, { "version": "v2", "created": "Wed, 24 Aug 2022 09:40:23 GMT" }, { "version": "v3", "created": "Wed, 21 Sep 2022 15:30:50 GMT" } ]
2022-09-22T00:00:00
[ [ "Nieto", "Daniel Vera", "" ], [ "Celona", "Luigi", "" ], [ "Fernandez-Labrador", "Clara", "" ] ]
new_dataset
0.999776
2208.00571
Zhihao Li
Zhihao Li, Jianzhuang Liu, Zhensong Zhang, Songcen Xu, and Youliang Yan
CLIFF: Carrying Location Information in Full Frames into Human Pose and Shape Estimation
Updates the related work from v1 with small modifications
ECCV 2022 Oral
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Top-down methods dominate the field of 3D human pose and shape estimation because they are decoupled from human detection and allow researchers to focus on the core problem. However, cropping, their first step, discards the location information from the very beginning, which makes them unable to accurately predict the global rotation in the original camera coordinate system. To address this problem, we propose to Carry Location Information in Full Frames (CLIFF) into this task. Specifically, we feed more holistic features to CLIFF by concatenating the cropped-image feature with its bounding box information. We calculate the 2D reprojection loss with a broader view of the full frame, taking a projection process similar to that of the person projected in the image. Fed and supervised by global-location-aware information, CLIFF directly predicts the global rotation along with more accurate articulated poses. Besides, we propose a pseudo-ground-truth annotator based on CLIFF, which provides high-quality 3D annotations for in-the-wild 2D datasets and offers crucial full supervision for regression-based methods. Extensive experiments on popular benchmarks show that CLIFF outperforms prior art by a significant margin and reaches first place on the AGORA leaderboard (the SMPL-Algorithms track). The code and data are available at https://github.com/huawei-noah/noah-research/tree/master/CLIFF.
[ { "version": "v1", "created": "Mon, 1 Aug 2022 02:08:46 GMT" }, { "version": "v2", "created": "Wed, 21 Sep 2022 08:19:41 GMT" } ]
2022-09-22T00:00:00
[ [ "Li", "Zhihao", "" ], [ "Liu", "Jianzhuang", "" ], [ "Zhang", "Zhensong", "" ], [ "Xu", "Songcen", "" ], [ "Yan", "Youliang", "" ] ]
new_dataset
0.986121
2209.09937
Elena Nazarova
Elena Nazarova, Ildar Babataev, Nipun Weerakkodi, Aleksey Fedoseev, Dzmitry Tsetserukou
HyperPalm: DNN-based hand gesture recognition interface for intelligent communication with quadruped robot in 3D space
6 pages, 9 figures, IEEE SMC 2022
null
null
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, autonomous mobile robots support people in many areas where human presence is either redundant or too dangerous. They have successfully proven themselves in expeditions, the gas industry, mines, warehouses, etc. However, even legged robots may get stuck in rough terrain conditions, requiring human cognitive abilities to navigate the system. While gamepads and keyboards are convenient for wheeled robot control, a quadruped robot in 3D space can move along all linear coordinates and Euler angles, requiring at least 12 buttons for independent control of its DoF. Therefore, more convenient control interfaces are required. In this paper we present HyperPalm: a novel gesture interface for intuitive human-robot interaction with quadruped robots. Without additional devices, the operator has full position and orientation control of the quadruped robot in 3D space through hand gesture recognition, using only 5 gestures and 6-DoF hand motion. The experimental results show that the system classifies the 5 static gestures with high accuracy (96.5%) and accurately predicts the 6D pose of the hand in three-dimensional space. The root mean square deviation (RMSD) of the absolute linear error of the proposed approach is 11.7 mm, almost 50% lower than for the second tested approach, while the RMSD of the absolute angular error is 2.6 degrees, almost 27% lower than for the second tested approach. Moreover, a user study was conducted to explore users' subjective experience of human-robot interaction through the proposed gesture interface. The participants evaluated their interaction with HyperPalm as intuitive (2.0), not causing frustration (2.63), and requiring low physical demand (2.0).
[ { "version": "v1", "created": "Tue, 20 Sep 2022 18:28:29 GMT" } ]
2022-09-22T00:00:00
[ [ "Nazarova", "Elena", "" ], [ "Babataev", "Ildar", "" ], [ "Weerakkodi", "Nipun", "" ], [ "Fedoseev", "Aleksey", "" ], [ "Tsetserukou", "Dzmitry", "" ] ]
new_dataset
0.999776
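The RMSD figures quoted in the abstract above are standard root mean square deviations between predicted and ground-truth hand poses; a minimal sketch for the linear (positional) case, with made-up values:

```python
import numpy as np

def rmsd(pred, truth):
    """Root mean square deviation between predicted and true 3D positions.

    pred, truth: (n, 3) arrays of hand positions (e.g. in millimetres).
    """
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    per_sample = np.linalg.norm(pred - truth, axis=1)  # Euclidean error per frame
    return float(np.sqrt(np.mean(per_sample ** 2)))

pred  = [[0.0, 0.0, 0.0], [10.0, 0.0, 5.0]]
truth = [[3.0, 4.0, 0.0], [10.0, 0.0, 0.0]]
print(rmsd(pred, truth))  # 5.0 (per-frame errors of 5 mm and 5 mm)
```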
2209.09987
Domenico Bloisi
Domenico D. Bloisi, Andrea Pennisi, Cristian Zampino, Flavio Biancospino, Francesco Laus, Gianluca Di Stefano, Michele Brienza, Rocchina Romano
MARIO: Modular and Extensible Architecture for Computing Visual Statistics in RoboCup SPL
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
This technical report describes a modular and extensible architecture for computing visual statistics in RoboCup SPL (MARIO), presented during the SPL Open Research Challenge at RoboCup 2022, held in Bangkok (Thailand). MARIO is an open-source, ready-to-use software application whose final goal is to contribute to the growth of the RoboCup SPL community. MARIO comes with a GUI that integrates multiple machine learning and computer vision based functions, including automatic camera calibration, background subtraction, homography computation, player + ball tracking and localization, NAO robot pose estimation and fall detection. MARIO has been ranked no. 1 in the Open Research Challenge.
[ { "version": "v1", "created": "Tue, 20 Sep 2022 20:45:56 GMT" } ]
2022-09-22T00:00:00
[ [ "Bloisi", "Domenico D.", "" ], [ "Pennisi", "Andrea", "" ], [ "Zampino", "Cristian", "" ], [ "Biancospino", "Flavio", "" ], [ "Laus", "Francesco", "" ], [ "Di Stefano", "Gianluca", "" ], [ "Brienza", "Michele", "" ], [ "Romano", "Rocchina", "" ] ]
new_dataset
0.982008
2209.10008
Ling Luo
Ling Luo, Yulia Gryaditskaya, Yongxin Yang, Tao Xiang, Yi-Zhe Song
Fine-Grained VR Sketching: Dataset and Insights
null
2021 International Conference on 3D Vision (3DV), pp. 1003-1013. IEEE, 2021
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present the first fine-grained dataset of 1,497 3D VR sketch and 3D shape pairs of a chair category with large shape diversity. Our dataset supports the recent trend in the sketch community towards fine-grained data analysis, and extends it to the actively developing 3D domain. We argue for the most convenient sketching scenario, where the sketch consists of sparse lines and does not require any sketching skills, prior training or time-consuming accurate drawing. We then, for the first time, study the scenario of fine-grained 3D VR sketch to 3D shape retrieval, as a novel VR sketching application and a proving ground to draw out generic insights to inform future research. By experimenting with carefully selected combinations of design factors on this new problem, we draw important conclusions to help follow-on work. We hope our dataset will enable other novel applications, especially those that require a fine-grained angle such as fine-grained 3D shape reconstruction. The dataset is available at tinyurl.com/VRSketch3DV21.
[ { "version": "v1", "created": "Tue, 20 Sep 2022 21:30:54 GMT" } ]
2022-09-22T00:00:00
[ [ "Luo", "Ling", "" ], [ "Gryaditskaya", "Yulia", "" ], [ "Yang", "Yongxin", "" ], [ "Xiang", "Tao", "" ], [ "Song", "Yi-Zhe", "" ] ]
new_dataset
0.999849
2209.10016
Ignacio Tripodi
Ignacio J. Tripodi
Setting the rhythm scene: deep learning-based drum loop generation from arbitrary language cues
null
null
null
null
cs.SD cs.AI cs.CL cs.IR cs.MM eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
Generative artificial intelligence models can be a valuable aid to music composition and live performance, both to aid the professional musician and to help democratize the music creation process for hobbyists. Here we present a novel method that, given an English word or phrase, generates 2 bars of a 4-piece drum pattern that embodies the "mood" of the given language cue, or that could be used for an audiovisual scene described by the language cue. We envision this tool as a composition aid for electronic music and audiovisual soundtrack production, or an improvisation tool for live performance. In order to produce the training samples for this model, besides manual annotation of the "scene" or "mood" terms, we have designed a novel method to extract the consensus drum track of any song. This consists of a 2-bar, 4-piece drum pattern that represents the main percussive motif of a song, which could be imported into any music loop device or live looping software. These two key components (drum pattern generation from a generalizable input, and consensus percussion extraction) present a novel approach to computer-aided composition and provide a stepping stone for more comprehensive rhythm generation.
[ { "version": "v1", "created": "Tue, 20 Sep 2022 21:53:35 GMT" } ]
2022-09-22T00:00:00
[ [ "Tripodi", "Ignacio J.", "" ] ]
new_dataset
0.994486
2209.10033
Shaoshuai Shi
Shaoshuai Shi, Li Jiang, Dengxin Dai, Bernt Schiele
MTR-A: 1st Place Solution for 2022 Waymo Open Dataset Challenge -- Motion Prediction
The 1st place solution report for Waymo Motion Prediction Challenge of Workshop on Autonomous Driving of CVPR 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this report, we present the 1st place solution for motion prediction track in 2022 Waymo Open Dataset Challenges. We propose a novel Motion Transformer framework for multimodal motion prediction, which introduces a small set of novel motion query pairs for generating better multimodal future trajectories by jointly performing the intention localization and iterative motion refinement. A simple model ensemble strategy with non-maximum-suppression is adopted to further boost the final performance. Our approach achieves the 1st place on the motion prediction leaderboard of 2022 Waymo Open Dataset Challenges, outperforming other methods with remarkable margins. Code will be available at https://github.com/sshaoshuai/MTR.
[ { "version": "v1", "created": "Tue, 20 Sep 2022 23:03:22 GMT" } ]
2022-09-22T00:00:00
[ [ "Shi", "Shaoshuai", "" ], [ "Jiang", "Li", "" ], [ "Dai", "Dengxin", "" ], [ "Schiele", "Bernt", "" ] ]
new_dataset
0.998563
2209.10074
Sheng Huang
Wenhao Tang and Sheng Huang and Xiaoxian Zhang and Luwen Huangfu
PicT: A Slim Weakly Supervised Vision Transformer for Pavement Distress Classification
ACM Multimedia 2022 paper, 9 pages 7 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Automatic pavement distress classification facilitates improving the efficiency of pavement maintenance and reducing the cost of labor and resources. An influential recent branch of this task divides the pavement image into patches and addresses these issues from the perspective of multi-instance learning. However, these methods neglect the correlation between patches and suffer from low efficiency in model optimization and inference. Meanwhile, the Swin Transformer is able to address both of these issues with its unique strengths. Built upon the Swin Transformer, we present a vision Transformer named \textbf{P}avement \textbf{I}mage \textbf{C}lassification \textbf{T}ransformer (\textbf{PicT}) for pavement distress classification. In order to better exploit the discriminative information of pavement images at the patch level, the \textit{Patch Labeling Teacher} is proposed to leverage a teacher model to dynamically generate pseudo labels of patches from image labels during each iteration, guiding the model to learn the discriminative features of patches. The broad classification head of the Swin Transformer may dilute the discriminative features of distressed patches in the feature aggregation step due to the small distressed area ratio of the pavement image. To overcome this drawback, we present a \textit{Patch Refiner} to cluster patches into different groups and only select the highest distress-risk group to yield a slim head for the final image classification. We evaluate our method on CQU-BPDD. Extensive results show that \textbf{PicT} outperforms the second-best performing model by a large margin of $+2.4\%$ in P@R on the detection task and $+3.9\%$ in $F1$ on the recognition task, with 1.8x throughput, while enjoying 7x faster training speed using the same computing resources. Our codes and models have been released on \href{https://github.com/DearCaat/PicT}{https://github.com/DearCaat/PicT}.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 02:33:49 GMT" } ]
2022-09-22T00:00:00
[ [ "Tang", "Wenhao", "" ], [ "Huang", "Sheng", "" ], [ "Zhang", "Xiaoxian", "" ], [ "Huangfu", "Luwen", "" ] ]
new_dataset
0.997474
2209.10098
Jiaqi Gu
Jiaqi Gu, Zhengqi Gao, Chenghao Feng, Hanqing Zhu, Ray T. Chen, Duane S. Boning, David Z. Pan
NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation
13 pages. Accepted to NeurIPS 2022
null
null
null
cs.ET cs.LG physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optical computing is an emerging technology for next-generation efficient artificial intelligence (AI) due to its ultra-high speed and efficiency. Electromagnetic field simulation is critical to the design, optimization, and validation of photonic devices and circuits. However, costly numerical simulation significantly hinders the scalability and turn-around time in the photonic circuit design loop. Recently, physics-informed neural networks have been proposed to predict the optical field solution of a single instance of a partial differential equation (PDE) with predefined parameters. Their complicated PDE formulation and lack of efficient parametrization mechanisms limit their flexibility and generalization in practical simulation scenarios. In this work, for the first time, a physics-agnostic neural operator-based framework, dubbed NeurOLight, is proposed to learn a family of frequency-domain Maxwell PDEs for ultra-fast parametric photonic device simulation. We balance the efficiency and generalization of NeurOLight via several novel techniques. Specifically, we discretize different devices into a unified domain, represent parametric PDEs with a compact wave prior, and encode the incident light via masked source modeling. We design our model with parameter-efficient cross-shaped NeurOLight blocks and adopt superposition-based augmentation for data-efficient learning. With these synergistic approaches, NeurOLight generalizes to a large space of unseen simulation settings, demonstrates 2-orders-of-magnitude faster simulation speed than numerical solvers, and outperforms prior neural network models by ~54% lower prediction error with ~44% fewer parameters. Our code is available at https://github.com/JeremieMelo/NeurOLight.
[ { "version": "v1", "created": "Mon, 19 Sep 2022 21:25:26 GMT" } ]
2022-09-22T00:00:00
[ [ "Gu", "Jiaqi", "" ], [ "Gao", "Zhengqi", "" ], [ "Feng", "Chenghao", "" ], [ "Zhu", "Hanqing", "" ], [ "Chen", "Ray T.", "" ], [ "Boning", "Duane S.", "" ], [ "Pan", "David Z.", "" ] ]
new_dataset
0.991315
2209.10125
Anurag Jain
Anurag Jain and Sujit Gujar and Kannan Srinathan
Interlude: Balancing Chaos And Harmony For Fair and Fast Blockchains
null
null
null
null
cs.CR cs.DC cs.GT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Blockchains lie at the heart of Bitcoin and other cryptocurrencies that have shown great promise to revolutionize finance and commerce. Although they are gaining increasing popularity, they face technical challenges when it comes to scaling to support greater demand while maintaining their desirable security properties. In an exciting line of recent work, many researchers have proposed various scalable blockchain protocols that demonstrate the potential to solve these challenges. However, many of these protocols come with the assumptions of an honest majority and symmetric network access, which may not accurately reflect the real world, where participants may be self-interested or rational. Secondly, these works assume an ideal environment where each party has equal access to the network, whereas in reality different parties have varying latencies and network speeds. These assumptions may render the protocols susceptible to security threats in the real world, as highlighted by the literature focused on exploring game-theoretic attacks on these protocols. We propose a scalable blockchain protocol, Interlude, which comes with the typical security guarantees while focusing on game-theoretic soundness and network fairness. The novelty of Interlude is its relatively simple design, consisting of a sequence of parallel blocks containing disjoint transaction sets that can be mined quickly, followed by a series block that is slow to mine and gives the honest parties in the network time to synchronize. Thus, between the chaos of parallel blocks, our blockchain protocol interposes an interlude moment of harmony in series blocks that synchronize the network.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 05:19:23 GMT" } ]
2022-09-22T00:00:00
[ [ "Jain", "Anurag", "" ], [ "Gujar", "Sujit", "" ], [ "Srinathan", "Kannan", "" ] ]
new_dataset
0.997434
2209.10155
Zihui Guo
Zihui Guo, Yonghong Hou, Pichao Wang, Zhimin Gao, Mingliang Xu, and Wanqing Li
FT-HID: A Large Scale RGB-D Dataset for First and Third Person Human Interaction Analysis
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analysis of human interaction is one important research topic of human motion analysis. It has been studied either using first person vision (FPV) or third person vision (TPV). However, the joint learning of both types of vision has so far attracted little attention. One of the reasons is the lack of suitable datasets that cover both FPV and TPV. In addition, existing benchmark datasets of either FPV or TPV have several limitations, including the limited number of samples, participant subjects, interaction categories, and modalities. In this work, we contribute a large-scale human interaction dataset, namely, FT-HID dataset. FT-HID contains pair-aligned samples of first person and third person visions. The dataset was collected from 109 distinct subjects and has more than 90K samples for three modalities. The dataset has been validated by using several existing action recognition methods. In addition, we introduce a novel multi-view interaction mechanism for skeleton sequences, and a joint learning multi-stream framework for first person and third person visions. Both methods yield promising results on the FT-HID dataset. It is expected that the introduction of this vision-aligned large-scale dataset will promote the development of both FPV and TPV, and their joint learning techniques for human action analysis. The dataset and code are available at \href{https://github.com/ENDLICHERE/FT-HID}{here}.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 07:24:15 GMT" } ]
2022-09-22T00:00:00
[ [ "Guo", "Zihui", "" ], [ "Hou", "Yonghong", "" ], [ "Wang", "Pichao", "" ], [ "Gao", "Zhimin", "" ], [ "Xu", "Mingliang", "" ], [ "Li", "Wanqing", "" ] ]
new_dataset
0.979474
2209.10170
Qinglan Wei
Qinglan Wei, Xuling Huang, Yuan Zhang
FV2ES: A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition Inference
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On today's social networks, more and more people prefer to express their emotions in videos through text, speech, and rich facial expressions. Multimodal video emotion analysis techniques can help automatically understand users' inner states based on human expressions and gestures in images, tones in voices, and recognized natural language. However, in existing research the acoustic modality has long held a marginal position compared with the visual and textual modalities; that is, it tends to be more difficult to improve the contribution of the acoustic modality to the whole multimodal emotion recognition task. Moreover, although better performance can be obtained by introducing common deep learning methods, the complex structures of these models result in low inference efficiency, especially on high-resolution, long videos. Finally, the lack of a fully end-to-end multimodal video emotion recognition system hinders its application. In this paper, we design a fully multimodal video-to-emotion system (named FV2ES) for fast yet effective recognition inference, whose benefits are threefold: (1) the adoption of a hierarchical attention method over the sound spectra breaks through the limited contribution of the acoustic modality and outperforms existing models on both the IEMOCAP and CMU-MOSEI datasets; (2) a multi-scale design for visual extraction combined with a single-branch design for inference brings higher efficiency while maintaining prediction accuracy; (3) the further integration of data pre-processing into the aligned multimodal learning model significantly reduces computational costs and storage space.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 08:05:26 GMT" } ]
2022-09-22T00:00:00
[ [ "Wei", "Qinglan", "" ], [ "Huang", "Xuling", "" ], [ "Zhang", "Yuan", "" ] ]
new_dataset
0.987724
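The hierarchical attention over sound spectra mentioned in the FV2ES abstract can be illustrated with a generic two-level attention pooling: frame-level attention within spectrogram segments, followed by segment-level attention over the resulting embeddings. The NumPy sketch below conveys only this general idea; the segment length, scoring vectors (stand-ins for learned parameters), and pooling layout are assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(feats, w):
    # feats: (n, d) feature rows; w: (d,) scoring vector.
    # Returns a weighted sum of rows, weighted by attention scores.
    scores = softmax(feats @ w)   # (n,)
    return scores @ feats         # (d,)

def hierarchical_attention(spectrogram, seg_len, w_frame, w_seg):
    # Level 1: attention over frames within each segment of the spectrum.
    # Level 2: attention over the resulting segment embeddings.
    n_frames, _ = spectrogram.shape
    segs = [spectrogram[i:i + seg_len] for i in range(0, n_frames, seg_len)]
    seg_embs = np.stack([attention_pool(s, w_frame) for s in segs])  # (n_segs, d)
    return attention_pool(seg_embs, w_seg)                            # (d,)

rng = np.random.default_rng(0)
spec = rng.normal(size=(120, 64))   # 120 frames of a 64-bin spectrum (synthetic)
w_f, w_s = rng.normal(size=64), rng.normal(size=64)
utterance_emb = hierarchical_attention(spec, seg_len=20, w_frame=w_f, w_seg=w_s)
print(utterance_emb.shape)  # (64,)
```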
2209.10198
Abdullah Giray Ya\u{g}l{\i}k\c{c}{\i}
Abdullah Giray Ya\u{g}l{\i}k\c{c}{\i}, Ataberk Olgun, Minesh Patel, Haocong Luo, Hasan Hassan, Lois Orosa, O\u{g}uz Ergin, and Onur Mutlu
HiRA: Hidden Row Activation for Reducing Refresh Latency of Off-the-Shelf DRAM Chips
To appear in the 55th IEEE/ACM International Symposium on Microarchitecture (MICRO), 2022
null
null
null
cs.AR cs.CR
http://creativecommons.org/licenses/by/4.0/
DRAM is the building block of modern main memory systems. DRAM cells must be periodically refreshed to prevent data loss. Refresh operations degrade system performance by interfering with memory accesses. As DRAM chip density increases with technology node scaling, refresh operations also increase because: 1) the number of DRAM rows in a chip increases; and 2) DRAM cells need additional refresh operations to mitigate bit failures caused by RowHammer, a failure mechanism that becomes worse with technology node scaling. Thus, it is critical to enable refresh operations at low performance overhead. To this end, we propose a new operation, Hidden Row Activation (HiRA), and the HiRA Memory Controller (HiRA-MC). HiRA hides a refresh operation's latency by refreshing a row concurrently with accessing or refreshing another row within the same bank. Unlike prior works, HiRA achieves this parallelism without any modifications to off-the-shelf DRAM chips. To do so, it leverages the new observation that two rows in the same bank can be activated without data loss if the rows are connected to different charge restoration circuitry. We experimentally demonstrate on 56 real off-the-shelf DRAM chips that HiRA can reliably parallelize a DRAM row's refresh operation with refresh or activation of any of the 32% of the rows within the same bank. By doing so, HiRA reduces the overall latency of two refresh operations by 51.4%. HiRA-MC modifies the memory request scheduler to perform HiRA when a refresh operation can be performed concurrently with a memory access or another refresh. Our system-level evaluations show that HiRA-MC increases system performance by 12.6% and 3.73x as it reduces the performance degradation due to periodic refreshes and refreshes for RowHammer protection (preventive refreshes), respectively, for future DRAM chips with increased density and RowHammer vulnerability.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 08:51:03 GMT" } ]
2022-09-22T00:00:00
[ [ "Yağlıkçı", "Abdullah Giray", "" ], [ "Olgun", "Ataberk", "" ], [ "Patel", "Minesh", "" ], [ "Luo", "Haocong", "" ], [ "Hassan", "Hasan", "" ], [ "Orosa", "Lois", "" ], [ "Ergin", "Oğuz", "" ], [ "Mutlu", "Onur", "" ] ]
new_dataset
0.992713
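HiRA's key condition — two rows in the same bank can be activated concurrently only if they connect to different charge restoration circuitry — suggests a simple compatibility check in the scheduler. The sketch below models that check; the group size, row-to-circuitry mapping, and greedy pairing policy are hypothetical stand-ins, not the paper's actual HiRA-MC design.

```python
# Hypothetical mapping: which charge-restoration circuitry a row uses.
ROWS_PER_GROUP = 512  # assumed granularity, not taken from the paper

def restoration_group(row: int) -> int:
    return row // ROWS_PER_GROUP

def can_hira(refresh_row: int, access_row: int) -> bool:
    # HiRA's condition: two rows in the same bank may be activated
    # concurrently only if they use different charge restoration circuitry.
    return restoration_group(refresh_row) != restoration_group(access_row)

def schedule(pending_refreshes, pending_accesses):
    # Greedy pairing: hide a refresh behind a compatible access when possible.
    issued = []
    for acc in pending_accesses:
        hidden = next((r for r in pending_refreshes if can_hira(r, acc)), None)
        if hidden is not None:
            pending_refreshes.remove(hidden)
            issued.append(("HiRA", hidden, acc))   # refresh hidden behind access
        else:
            issued.append(("ACT", None, acc))      # ordinary activation
    return issued

print(schedule([100, 700], [50, 650, 1200]))
# [('HiRA', 700, 50), ('HiRA', 100, 650), ('ACT', None, 1200)]
```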
2209.10229
Zhanyu Guo
Zhanyu Guo, Shenyuan Guo, Jialong Wang, Yifan Feng
Intelligent wayfinding vehicle design based on visual recognition
in Chinese language
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The intelligent drug delivery trolley is an advanced piece of intelligent drug delivery equipment. Compared with traditional manual drug delivery, it offers higher delivery efficiency and a lower error rate. In this project, an intelligent drug delivery trolley is designed and built that can recognize the road route and the room number of the target ward through visual recognition technology. The trolley selects the corresponding route according to the identified room number, accurately transports the drugs to the target ward, and can return to the pharmacy after the drugs are delivered. The trolley uses a DC power supply, and its motor drive module controls two DC motors, which overcomes the problem of excessive turning-angle deviation. The trolley's line-following function uses closed-loop control to improve the accuracy of line tracking and the controllability of the trolley's speed. Ward numbers are identified by a camera module with a microcontroller, with functions including adaptive adjustment to ambient brightness, distortion correction, and automatic calibration. Communication between two cooperating delivery trolleys is realized by a Bluetooth module, achieving efficient and accurate communication and interaction. Experiments show that the intelligent drug delivery trolley can accurately identify room numbers and plan routes to deliver drugs to far, middle, and near wards, with fast speed and accurate judgment. In addition, two delivery trolleys can cooperate to deliver drugs to the same ward with high efficiency and close cooperation.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 09:49:16 GMT" } ]
2022-09-22T00:00:00
[ [ "Guo", "Zhanyu", "" ], [ "Guo", "Shenyuan", "" ], [ "Wang", "Jialong", "" ], [ "Feng", "Yifan", "" ] ]
new_dataset
0.996851
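The closed-loop line following described in the abstract is commonly realized with a PID controller steering a differential pair of DC motors. The sketch below shows that generic pattern; the gains, sensor units, base speed, and limits are illustrative assumptions, not values from the project.

```python
class PID:
    # Discrete PID controller for the trolley's line-following loop.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def motor_speeds(base_speed, correction, limit=100):
    # Differential drive: steer by speeding up one DC motor and slowing
    # the other, clamped to the drive module's assumed speed range.
    left = max(-limit, min(limit, base_speed + correction))
    right = max(-limit, min(limit, base_speed - correction))
    return left, right

pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.02)
for line_offset in [5.0, 3.2, 1.1, -0.4, 0.1]:  # camera-measured line offset (mm, synthetic)
    u = pid.step(line_offset)
    print(motor_speeds(base_speed=60, correction=u))
```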
2209.10240
Ryan Shah
Ryan Shah, Mujeeb Ahmed, Shishir Nagaraja
Fingerprinting Robot Movements via Acoustic Side Channel
11 pages, 4 figures, 7 tables
null
null
null
cs.CR cs.LG cs.RO
http://creativecommons.org/licenses/by/4.0/
In this paper, we present an acoustic side-channel attack that uses smartphone microphones to record a robot in operation and exploits acoustic properties of the sound to fingerprint the robot's movements. We consider an insider adversary within physical proximity of a robotic system (such as a technician or robot operator), equipped with only their smartphone microphone. Through the acoustic side channel, we demonstrate that it is possible to fingerprint not only individual robot movements within 3D space, but also patterns of movements, which could lead to inferring the purpose of the movements (e.g., the surgical procedure a surgical robot is undertaking) and hence result in potential privacy violations. Upon evaluation, we find that individual robot movements can be fingerprinted with around 75% accuracy, decreasing slightly with more fine-grained movement metadata such as distance and speed. Furthermore, entire workflows could be reconstructed with around 62% accuracy, with more complex movements such as pick-and-place or packing reconstructed with near-perfect accuracy. In addition, in some environments such as surgical settings, audio may be recorded and transmitted over VoIP, for example for teaching purposes or in remote telemedicine. The question here is whether the same attack can succeed even when VoIP communication is employed, and how packet loss impacts the captured audio and the success of the attack. Using the same acoustic characteristics on plain audio captured by the smartphone, the attack was 90% accurate on average in fingerprinting VoIP samples, 15% higher than the baseline without the VoIP codec employed. This opens up new research questions regarding anonymous communications to protect robotic systems from acoustic side-channel attacks via VoIP communication networks.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 10:12:37 GMT" } ]
2022-09-22T00:00:00
[ [ "Shah", "Ryan", "" ], [ "Ahmed", "Mujeeb", "" ], [ "Nagaraja", "Shishir", "" ] ]
new_dataset
0.996717
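The fingerprinting pipeline the abstract describes — extract acoustic features from a recording, then classify the movement — can be sketched generically. In the sketch below, the log band-energy features, nearest-centroid classifier, and synthetic tones are all illustrative assumptions; the abstract does not specify the paper's actual features or classifier.

```python
import numpy as np

def spectral_features(signal, sr, n_bands=16):
    # Coarse acoustic fingerprint: energy in log-spaced frequency bands,
    # a crude stand-in for the spectral features used in such attacks.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    edges = np.geomspace(20, sr / 2, n_bands + 1)
    feats = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
             for lo, hi in zip(edges[:-1], edges[1:])]
    return np.log1p(np.array(feats))

def fit_centroids(samples, labels):
    # One centroid per robot-movement class.
    return {c: np.mean([s for s, l in zip(samples, labels) if l == c], axis=0)
            for c in set(labels)}

def classify(feat, centroids):
    return min(centroids, key=lambda c: np.linalg.norm(feat - centroids[c]))

rng = np.random.default_rng(1)
sr = 16000
# Synthetic stand-ins for recordings of two movements (real data would be smartphone audio).
make = lambda f: np.sin(2 * np.pi * f * np.arange(sr) / sr) + 0.1 * rng.normal(size=sr)
train = [spectral_features(make(f), sr) for f in (200, 210, 800, 790)]
labels = ["move_x", "move_x", "move_z", "move_z"]
cents = fit_centroids(train, labels)
print(classify(spectral_features(make(205), sr), cents))  # expected: "move_x"
```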
2209.10313
EPTCS
Hans de Nivelle (School of Engineering and Digital Sciences, Nazarbayev University, Nursultan-City, Kazakhstan), Dina Muktubayeva (School of Engineering and Digital Sciences, Nazarbayev University, Nursultan-City, Kazakhstan)
Generating Tokenizers with Flat Automata
In Proceedings GandALF 2022, arXiv:2209.09333. An implementation of flat automata can be found at: www.compiler-tools.eu
EPTCS 370, 2022, pp. 66-80
10.4204/EPTCS.370.5
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
We introduce flat automata for automatic generation of tokenizers. Flat automata are a simple representation of standard finite automata. Using the flat representation, automata can be easily constructed, combined and printed. Due to the use of border functions, flat automata are more compact than standard automata in the case where intervals of characters are attached to transitions, and the standard algorithms on automata are simpler. We give the standard algorithms for tokenizer construction with automata, namely construction using regular operations, determinization, and minimization. We prove their correctness. The algorithms work with intervals of characters, but are not more complicated than their counterparts on single characters. It is easy to generate C++ code from the final deterministic automaton. All procedures have been implemented in C++ and are publicly available. The implementation has been used in applications and in teaching.
[ { "version": "v1", "created": "Wed, 21 Sep 2022 12:44:23 GMT" } ]
2022-09-22T00:00:00
[ [ "de Nivelle", "Hans", "", "School of Engineering and Digital Sciences,\n Nazarbayev University, Nursultan-City, Kazakkhstan" ], [ "Muktubayeva", "Dina", "", "School\n of Engineering and Digital Sciences, Nazarbayev University, Nursultan-City,\n Kazakhstan" ] ]
new_dataset
0.969948
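The border functions that make flat automata compact can be illustrated in miniature: store only the code points at which the transition target changes, so an entire interval of characters costs a single entry. The Python sketch below conveys the idea only; the authors' actual implementation is in C++ (see www.compiler-tools.eu), and the class layout, names, and simplifying restrictions here are a hypothetical reconstruction.

```python
import bisect

class BorderFn:
    """Flat, border-based transition function for one automaton state.

    Stores only the code points where the target changes, so an interval
    like ['a','z'] -> "ident" costs one entry instead of 26.
    """
    def __init__(self, intervals):
        # intervals: sorted, non-overlapping (lo, hi, target) triples
        # over half-open code-point ranges [lo, hi).
        self.borders, self.targets = [0], [None]
        for lo, hi, target in intervals:
            if lo > self.borders[-1]:
                self.borders.append(lo)       # new border where the target begins
                self.targets.append(target)
            else:                             # interval starts exactly at the last border
                self.targets[-1] = target
            self.borders.append(hi)           # border where the target ends
            self.targets.append(None)         # no transition after the interval

    def __call__(self, ch):
        # Binary search for the border segment containing ch.
        i = bisect.bisect_right(self.borders, ord(ch)) - 1
        return self.targets[i]

# Transitions of a start state: letters lead to a hypothetical "ident"
# state, digits to a "number" state (state names are illustrative).
delta = BorderFn([(ord('0'), ord('9') + 1, "number"),
                  (ord('A'), ord('Z') + 1, "ident"),
                  (ord('a'), ord('z') + 1, "ident")])
print(delta('k'), delta('7'), delta('!'))  # ident number None
```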