Dataset columns (type, observed range):
id: string (length 9-10)
submitter: string (length 2-52)
authors: string (length 4-6.51k)
title: string (length 4-246)
comments: string (length 1-523)
journal-ref: string (length 4-345)
doi: string (length 11-120)
report-no: string (length 2-243)
categories: string (length 5-98)
license: string (9 classes)
abstract: string (length 33-3.33k)
versions: list
update_date: timestamp[s]
authors_parsed: list
prediction: string (1 class)
probability: float64 (range 0.95-1)
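The records below follow this schema, one field per line in the column order above, with missing fields shown as null. As a minimal sketch of how such a dump could be inspected -- assuming the records are exported to a JSON Lines file named arxiv_predictions.jsonl with these field names, which is an assumption rather than part of this listing -- one could load and filter them with pandas:

import json
import pandas as pd

# Assumed export of the records below: one JSON object per line,
# using the field names listed in the schema above.
with open("arxiv_predictions.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

df = pd.DataFrame(records)

# Keep high-confidence "new_dataset" predictions and a few useful columns.
subset = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]
print(subset[["id", "title", "categories", "probability"]].to_string(index=False))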
2201.06231
Han Cai
Yajuan Liu, Han Cai, and Xiaohu Tang
A New Cooperative Repair Scheme with k + 1 Helper Nodes for (n, k) Hadamard MSR codes with Small Sub-packetization
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cooperative repair model is an effective approach for dealing with multiple node failures in distributed storage systems. Recently, explicit constructions of cooperative MSR codes were given by Ye (IEEE Transactions on Information Theory, 2020) with sub-packetization level $(d-k+h)(d-k+1)^n$. Specifically, the sub-packetization level is $(h+1)2^n$ when $d=k+1$. In this paper, we propose a new cooperative repair scheme by means of the inter-instance and intra-instance pairing inherited from the perfect code, which reduces the sub-packetization to $2^n$ when $(h+1)|2^n$ and to $(2\ell+1)2^n$ when $h+1=(2\ell+1)2^m$ for $m\ge 0$, $\ell\ge 1$, with $d=k+1$ helper nodes. That is to say, the sub-packetization is $h+1$ times or $2^m$ times smaller than Ye's, which is the best result known so far.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 06:15:45 GMT" } ]
2022-01-19T00:00:00
[ [ "Liu", "Yajuan", "" ], [ "Cai", "Han", "" ], [ "Tang", "Xiaohu", "" ] ]
new_dataset
0.979122
2201.06337
Verena Biener
Verena Biener, Travis Gesslein, Daniel Schneider, Felix Kawala, Alexander Otte, Per Ola Kristensson, Michel Pahud, Eyal Ofek, Cuauhtli Campos, Matja\v{z} Kljun, Klen \v{C}opi\v{c} Pucihar, Jens Grubert
PoVRPoint: Authoring Presentations in Mobile Virtual Reality
IEEE VR 2022; to appear in IEEE transactions on visualization and computer graphics, 2022
In IEEE transactions on visualization and computer graphics, 2022
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Virtual Reality (VR) has the potential to support mobile knowledge workers by complementing traditional input devices with a large three-dimensional output space and spatial input. Previous research on supporting VR knowledge work explored domains such as text entry using physical keyboards and spreadsheet interaction using combined pen and touch input. Inspired by such work, this paper probes the VR design space for authoring presentations in mobile settings. We propose PoVRPoint -- a set of tools coupling pen- and touch-based editing of presentations on mobile devices, such as tablets, with the interaction capabilities afforded by VR. We study the utility of extended display space to, for example, assist users in identifying target slides, supporting spatial manipulation of objects on a slide, creating animations, and facilitating arrangements of multiple, possibly occluded, shapes. Among other things, our results indicate that 1) the wide field of view afforded by VR results in significantly faster target slide identification times compared to a tablet-only interface for visually salient targets; and 2) the three-dimensional view in VR enables significantly faster object reordering in the presence of occlusion compared to two baseline interfaces. A user study further confirmed that the interaction techniques were found to be usable and enjoyable.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 10:50:01 GMT" } ]
2022-01-19T00:00:00
[ [ "Biener", "Verena", "" ], [ "Gesslein", "Travis", "" ], [ "Schneider", "Daniel", "" ], [ "Kawala", "Felix", "" ], [ "Otte", "Alexander", "" ], [ "Kristensson", "Per Ola", "" ], [ "Pahud", "Michel", "" ], [ "Ofek", "Eyal", "" ], [ "Campos", "Cuauhtli", "" ], [ "Kljun", "Matjaž", "" ], [ "Pucihar", "Klen Čopič", "" ], [ "Grubert", "Jens", "" ] ]
new_dataset
0.999529
2201.06423
Giseop Kim
Giseop Kim, Seungsang Yun, Jeongyun Kim, Ayoung Kim
SC-LiDAR-SLAM: a Front-end Agnostic Versatile LiDAR SLAM System
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Accurate 3D point cloud map generation is a core task for various robot missions and even for data-driven urban analysis. To this end, light detection and ranging (LiDAR) sensor-based simultaneous localization and mapping (SLAM) technology has been elaborated. To compose a full SLAM system, many odometry and place recognition methods have been proposed independently in academia. However, they have rarely been integrated, or are too tightly coupled, so that exchanging (upgrading) a single odometry or place recognition module demands considerable effort. Recently, the performance of each module has improved substantially, so it is necessary to build a SLAM system that can effectively integrate them and easily replace them with the latest versions. In this paper, we release such a front-end agnostic LiDAR SLAM system, named SC-LiDAR-SLAM. We built a complete SLAM system by designing it to be modular and successfully integrating it with Scan Context++ and diverse existing open-source LiDAR odometry methods to generate an accurate point cloud map.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 14:20:36 GMT" } ]
2022-01-19T00:00:00
[ [ "Kim", "Giseop", "" ], [ "Yun", "Seungsang", "" ], [ "Kim", "Jeongyun", "" ], [ "Kim", "Ayoung", "" ] ]
new_dataset
0.998256
2201.06496
Sabit Hassan
Hamdy Mubarak, Sabit Hassan, Shammur Absar Chowdhury, Firoj Alam
ArCovidVac: Analyzing Arabic Tweets About COVID-19 Vaccination
8 pages, 9 figures
null
null
null
cs.CL cs.SI
http://creativecommons.org/licenses/by/4.0/
The emergence of the COVID-19 pandemic and the first global infodemic have changed our lives in many different ways. We relied on social media to get the latest information about the COVID-19 pandemic and, at the same time, to disseminate information. The content on social media consisted not only of health-related advice, plans, and informative news from policy makers, but also of conspiracies and rumors. It became important to identify such information as soon as it is posted in order to make actionable decisions (e.g., debunking rumors, or taking certain measures for traveling). To address this challenge, we develop and publicly release the first and largest manually annotated Arabic tweet dataset, ArCovidVac, for the COVID-19 vaccination campaign, covering many countries in the Arab region. The dataset is enriched with different layers of annotation, including (i) informativeness (more vs. less important tweets); (ii) fine-grained tweet content types (e.g., advice, rumors, restrictions, authentic news/information); and (iii) stance towards vaccination (pro-vaccination, neutral, anti-vaccination). Further, we performed an in-depth analysis of the data, exploring the popularity of different vaccines, trending hashtags, topics, and the presence of offensiveness in the tweets. We studied the data for individual types of tweets and temporal changes in stance towards vaccination. We benchmarked the ArCovidVac dataset using transformer architectures for informativeness, content type, and stance detection.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 16:19:21 GMT" } ]
2022-01-19T00:00:00
[ [ "Mubarak", "Hamdy", "" ], [ "Hassan", "Sabit", "" ], [ "Chowdhury", "Shammur Absar", "" ], [ "Alam", "Firoj", "" ] ]
new_dataset
0.998386
2201.06504
Fabiana Zama
Villiam Bortolotti and Leonardo Brizi and Germana Landi and Anastasiia Nagmutdinova and Fabiana Zama
MUPen2DTool: a Matlab Tool for 2D Nuclear Magnetic Resonance relaxation data inversion
null
null
null
null
cs.MS
http://creativecommons.org/licenses/by/4.0/
Accurate and efficient analysis of material properties from Nuclear Magnetic Resonance (NMR) relaxation data requires robust and efficient inversion procedures. Despite the great variety of applications that require processing two-dimensional NMR data (2DNMR), only a few software tools are freely available. The aim of this paper is to present MUPen2DTool, an open-source MATLAB-based software tool for 2DNMR data inversion. The user can choose among several types of NMR experiments, and the software provides code that can be used and extended easily. Furthermore, a MATLAB interface makes it easier to include users' own data. The practical use is demonstrated in the reported examples on both synthetic and real NMR data.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 16:29:28 GMT" } ]
2022-01-19T00:00:00
[ [ "Bortolotti", "Villiam", "" ], [ "Brizi", "Leonardo", "" ], [ "Landi", "Germana", "" ], [ "Nagmutdinova", "Anastasiia", "" ], [ "Zama", "Fabiana", "" ] ]
new_dataset
0.999202
2201.06517
Alexander Ruch
Alexander Ruch, Yujia Zhang, Michael Macy
Demographic Confounding Causes Extreme Instances of Lifestyle Politics on Facebook
29 pages (27 body, 2 supplemental material), 14 figures (12 body, 2 supplemental material), 2 tables
null
null
null
cs.SI cs.AI cs.CL cs.IR physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lifestyle politics emerge when activities that have no substantive relevance to ideology become politically aligned and polarized. Homophily and social influence are able to generate these fault lines on their own; however, social identities from demographics may serve as coordinating mechanisms through which lifestyle politics are mobilized and spread. Using a dataset of 137,661,886 observations from 299,327 Facebook interests aggregated across users of different racial/ethnic, education, age, gender, and income demographics, we find that the most extreme instances of lifestyle politics are those which are highly confounded by demographics such as race/ethnicity (e.g., Black artists and performers). After adjusting political alignment for demographic effects, lifestyle politics decreased by 27.36% toward the political "center" and demographically confounded interests were no longer among the most polarized interests. Instead, after demographic deconfounding, we found that the most liberal interests included electric cars, Planned Parenthood, and liberal satire while the most conservative interests included the Republican Party and conservative commentators. We validate our measures of political alignment and lifestyle politics using the General Social Survey and find similar demographic entanglements with lifestyle politics existed before social media such as Facebook were ubiquitous, giving us strong confidence that our results are not due to echo chambers or filter bubbles. Likewise, since demographic characteristics exist prior to ideological values, we argue that the demographic confounding we observe is causally responsible for the extreme instances of lifestyle politics that we find among the aggregated interests. We conclude our paper by relating our results to Simpson's paradox, cultural omnivorousness, and network autocorrelation.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 16:48:00 GMT" } ]
2022-01-19T00:00:00
[ [ "Ruch", "Alexander", "" ], [ "Zhang", "Yujia", "" ], [ "Macy", "Michael", "" ] ]
new_dataset
0.992048
2201.06556
Alexander Ruch
Alexander Ruch, Ari Decter-Frain, Raghav Batra
Millions of Co-purchases and Reviews Reveal the Spread of Polarization and Lifestyle Politics across Online Markets
25 pages (21 body, 4 supplemental material), 10 figures (4 body, 6 supplemental material), 5 tables (3 body, 2 supplemental material)
null
null
null
cs.SI cs.AI cs.CL cs.IR physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Polarization in America has reached a high point as markets are also becoming polarized. Existing research, however, focuses on specific market segments and products and has not evaluated this trend's full breadth. If such fault lines do spread into other segments that are not explicitly political, it would indicate the presence of lifestyle politics -- when ideas and behaviors not inherently political become politically aligned through their connections with explicitly political things. We study the pervasiveness of polarization and lifestyle politics over different product segments in a diverse market and test the extent to which consumer- and platform-level network effects and morality may explain lifestyle politics. Specifically, using graph and language data from Amazon (82.5M reviews of 9.5M products and product and category metadata from 1996-2014), we sample 234.6 million relations among 21.8 million market entities to find product categories that are most politically relevant, aligned, and polarized. We then extract moral values present in reviews' text and use these data and other reviewer-, product-, and category-level data to test whether individual- and platform-level network factors explain lifestyle politics better than products' implicit morality. We find pervasive lifestyle politics. Cultural products are 4 times more polarized than any other segment, products' political attributes have up to 3.7 times larger associations with lifestyle politics than author-level covariates, and morality has statistically significant but relatively small correlations with lifestyle politics. Examining lifestyle politics in these contexts helps us better understand the extent and root of partisan differences, why Americans may be so polarized, and how this polarization affects market systems.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 18:16:37 GMT" } ]
2022-01-19T00:00:00
[ [ "Ruch", "Alexander", "" ], [ "Decter-Frain", "Ari", "" ], [ "Batra", "Raghav", "" ] ]
new_dataset
0.992898
2201.06577
Veena Prabhakaran
Jasine Babu, K. Murali Krishnan, Veena Prabhakaran, Nandini J. Warrier
Eternal vertex cover number of maximal outerplanar graphs
null
null
null
null
cs.DM
http://creativecommons.org/licenses/by/4.0/
The eternal vertex cover problem is a variant of the classical vertex cover problem, modeled as a two-player attacker-defender game. Computing the eternal vertex cover number of graphs is known to be NP-hard in general, and the complexity status of the problem for bipartite graphs is open. A quadratic-time algorithm is known for this problem on chordal graphs. Maximal outerplanar graphs form a subclass of chordal graphs for which no algorithm of sub-quadratic time complexity is known. In this paper, we obtain a linear-time recursive algorithm for computing the eternal vertex cover number of maximal outerplanar graphs.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 18:58:21 GMT" } ]
2022-01-19T00:00:00
[ [ "Babu", "Jasine", "" ], [ "Krishnan", "K. Murali", "" ], [ "Prabhakaran", "Veena", "" ], [ "Warrier", "Nandini J.", "" ] ]
new_dataset
0.999578
2201.06644
Arnav Malawade
Arnav Vaibhav Malawade, Trier Mortlock, Mohammad Abdullah Al Faruque
HydraFusion: Context-Aware Selective Sensor Fusion for Robust and Efficient Autonomous Vehicle Perception
Accepted to be published in the 13th ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS 2022)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although autonomous vehicles (AVs) are expected to revolutionize transportation, robust perception across a wide range of driving contexts remains a significant challenge. Techniques to fuse sensor data from camera, radar, and lidar sensors have been proposed to improve AV perception. However, existing methods are insufficiently robust in difficult driving contexts (e.g., bad weather, low light, sensor obstruction) due to rigidity in their fusion implementations. These methods fall into two broad categories: (i) early fusion, which fails when sensor data is noisy or obscured, and (ii) late fusion, which cannot leverage features from multiple sensors and thus produces worse estimates. To address these limitations, we propose HydraFusion: a selective sensor fusion framework that learns to identify the current driving context and fuses the best combination of sensors to maximize robustness without compromising efficiency. HydraFusion is the first approach to propose dynamically adjusting between early fusion, late fusion, and combinations in-between, thus varying both how and when fusion is applied. We show that, on average, HydraFusion outperforms early and late fusion approaches by 13.66% and 14.54%, respectively, without increasing computational complexity or energy consumption on the industry-standard Nvidia Drive PX2 AV hardware platform. We also propose and evaluate both static and deep-learning-based context identification strategies. Our open-source code and model implementation are available at https://github.com/AICPS/hydrafusion.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 22:19:53 GMT" } ]
2022-01-19T00:00:00
[ [ "Malawade", "Arnav Vaibhav", "" ], [ "Mortlock", "Trier", "" ], [ "Faruque", "Mohammad Abdullah Al", "" ] ]
new_dataset
0.992773
2201.06648
Haozhe Sun
Haozhe Sun and Wei-Wei Tu and Isabelle Guyon
OmniPrint: A Configurable Printed Character Synthesizer
Accepted at 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks. https://openreview.net/forum?id=R07XwJPmgpl
35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce OmniPrint, a synthetic data generator of isolated printed characters, geared toward machine learning research. It draws inspiration from famous datasets such as MNIST, SVHN and Omniglot, but offers the capability of generating a wide variety of printed characters from various languages, fonts and styles, with customized distortions. We include 935 fonts from 27 scripts and many types of distortions. As a proof of concept, we show various use cases, including an example of meta-learning dataset designed for the upcoming MetaDL NeurIPS 2021 competition. OmniPrint is available at https://github.com/SunHaozhe/OmniPrint.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 22:31:35 GMT" } ]
2022-01-19T00:00:00
[ [ "Sun", "Haozhe", "" ], [ "Tu", "Wei-Wei", "" ], [ "Guyon", "Isabelle", "" ] ]
new_dataset
0.99974
2201.06696
Hengcan Shi
Hengcan Shi, Munawar Hayat, Yicheng Wu, Jianfei Cai
ProposalCLIP: Unsupervised Open-Category Object Proposal Generation via Exploiting CLIP Cues
10 pages, 5 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object proposal generation is an important and fundamental task in computer vision. In this paper, we propose ProposalCLIP, a method towards unsupervised open-category object proposal generation. Unlike previous works which require a large number of bounding box annotations and/or can only generate proposals for limited object categories, our ProposalCLIP is able to predict proposals for a large variety of object categories without annotations, by exploiting CLIP (contrastive language-image pre-training) cues. Firstly, we analyze CLIP for unsupervised open-category proposal generation and design an objectness score based on our empirical analysis on proposal selection. Secondly, a graph-based merging module is proposed to solve the limitations of CLIP cues and merge fragmented proposals. Finally, we present a proposal regression module that extracts pseudo labels based on CLIP cues and trains a lightweight network to further refine proposals. Extensive experiments on PASCAL VOC, COCO and Visual Genome datasets show that our ProposalCLIP can better generate proposals than previous state-of-the-art methods. Our ProposalCLIP also shows benefits for downstream tasks, such as unsupervised object detection.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 01:51:35 GMT" } ]
2022-01-19T00:00:00
[ [ "Shi", "Hengcan", "" ], [ "Hayat", "Munawar", "" ], [ "Wu", "Yicheng", "" ], [ "Cai", "Jianfei", "" ] ]
new_dataset
0.989365
2201.06724
Rongsheng Zhang
Rongsheng Zhang, Xiaoxi Mao, Le Li, Lin Jiang, Lin Chen, Zhiwei Hu, Yadong Xi, Changjie Fan, Minlie Huang
Youling: an AI-Assisted Lyrics Creation System
Accepted by EMNLP 2020 demo track
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, a variety of neural models have been proposed for lyrics generation. However, most previous work completes the generation process in a single pass with little human intervention. We believe that lyrics creation is a creative process centered on human intelligence. AI should play the role of an assistant in the lyrics creation process, where human interactions are crucial for high-quality creation. This paper demonstrates \textit{Youling}, an AI-assisted lyrics creation system designed to collaborate with music creators. In the lyrics generation process, \textit{Youling} supports the traditional one-pass full-text generation mode as well as an interactive generation mode, which allows users to select satisfactory sentences from generated candidates conditioned on the preceding context. The system also provides a revision module which enables users to repeatedly revise undesired sentences or words of the lyrics. Besides, \textit{Youling} allows users to use multifaceted attributes to control the content and format of the generated lyrics. The demo video of the system is available at https://youtu.be/DFeNpHk0pm4.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 03:57:04 GMT" } ]
2022-01-19T00:00:00
[ [ "Zhang", "Rongsheng", "" ], [ "Mao", "Xiaoxi", "" ], [ "Li", "Le", "" ], [ "Jiang", "Lin", "" ], [ "Chen", "Lin", "" ], [ "Hu", "Zhiwei", "" ], [ "Xi", "Yadong", "" ], [ "Fan", "Changjie", "" ], [ "Huang", "Minlie", "" ] ]
new_dataset
0.96359
2201.06741
Prashant Kodali
Prashant Kodali, Akshala Bhatnagar, Naman Ahuja, Manish Shrivastava, Ponnurangam Kumaraguru
HashSet -- A Dataset For Hashtag Segmentation
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Hashtag segmentation is the task of breaking a hashtag into its constituent tokens. Hashtags often encode the essence of user-generated posts, along with information like topic and sentiment, which are useful in downstream tasks. Hashtags prioritize brevity and are written in unique ways -- transliterating and mixing languages, spelling variations, creative named entities. Benchmark datasets used for the hashtag segmentation task -- STAN, BOUN -- are small in size and extracted from a single set of tweets. However, datasets should reflect the variations in writing styles of hashtags and also account for domain and language specificity, failing which the results will misrepresent model performance. We argue that model performance should be assessed on a wider variety of hashtags, and datasets should be carefully curated. To this end, we propose HashSet, a dataset comprising: a) a manually annotated set of 1.9k hashtags; and b) a loosely supervised set of 3.3M hashtags. The HashSet dataset is sampled from a different set of tweets compared to existing datasets and provides an alternate distribution of hashtags to build and validate hashtag segmentation models. We show that the performance of SOTA models for hashtag segmentation drops substantially on the proposed dataset, indicating that the proposed dataset provides an alternate set of hashtags to train and assess models.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 04:40:45 GMT" } ]
2022-01-19T00:00:00
[ [ "Kodali", "Prashant", "" ], [ "Bhatnagar", "Akshala", "" ], [ "Ahuja", "Naman", "" ], [ "Shrivastava", "Manish", "" ], [ "Kumaraguru", "Ponnurangam", "" ] ]
new_dataset
0.999872
2201.06811
Mike Wu
Mike Wu, Will McTighe, Kaili Wang, Istvan A. Seres, Nick Bax, Manuel Puebla, Mariano Mendez, Federico Carrone, Tom\'as De Mattey, Herman O. Demaestri, Mariano Nicolini, Pedro Fontana
Tutela: An Open-Source Tool for Assessing User-Privacy on Ethereum and Tornado Cash
10 pages content, 2 pages appendix
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
A common misconception among blockchain users is that pseudonymity guarantees privacy. The reality is almost the opposite. Every transaction one makes is recorded on a public ledger and reveals information about one's identity. Mixers, such as Tornado Cash, were developed to preserve privacy through "mixing" transactions with those of others in an anonymity pool, making it harder to link deposits and withdrawals from the pool. Unfortunately, it is still possible to reveal information about those in the anonymity pool if users are not careful. We introduce Tutela, an application built on expert heuristics to report the true anonymity of an Ethereum address. In particular, Tutela has three functionalities: first, it clusters together Ethereum addresses based on interaction history such that for an Ethereum address, we can identify other addresses likely owned by the same entity; second, it shows Ethereum users their potentially compromised transactions; third, Tutela computes the true size of the anonymity pool of each Tornado Cash mixer by excluding potentially compromised transactions. A public implementation of Tutela can be found at https://github.com/TutelaLabs/tutela-app. To use Tutela, visit https://www.tutela.xyz.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 08:31:12 GMT" } ]
2022-01-19T00:00:00
[ [ "Wu", "Mike", "" ], [ "McTighe", "Will", "" ], [ "Wang", "Kaili", "" ], [ "Seres", "Istvan A.", "" ], [ "Bax", "Nick", "" ], [ "Puebla", "Manuel", "" ], [ "Mendez", "Mariano", "" ], [ "Carrone", "Federico", "" ], [ "De Mattey", "Tomás", "" ], [ "Demaestri", "Herman O.", "" ], [ "Nicolini", "Mariano", "" ], [ "Fontana", "Pedro", "" ] ]
new_dataset
0.997408
2201.06835
Derek Xu
Derek Xu
Ray Based Distributed Autonomous Vehicle Research Platform
15 pages, 11 figures
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
My project tackles the question of whether Ray can be used to quickly train autonomous vehicles using a simulator (Carla), and whether a platform robust enough for further research purposes can be built around it. Ray is an open-source framework that enables distributed machine learning applications. Distributed computing is a technique which parallelizes computational tasks, such as training a model, among many machines. Ray abstracts away the complex coordination of these machines, making it rapidly scalable. Carla is a vehicle simulator that generates data used to train a model. The bulk of the project was writing the training logic that Ray would use to train my distributed model. Imitation learning is the best fit for autonomous vehicles. Imitation learning is an alternative to reinforcement learning and it works by trying to learn the optimal policy by imitating an expert (usually a human) given a set of demonstrations. A key deliverable for the project was showcasing my trained agent in a few benchmark tests, such as navigating a complex turn through traffic. Beyond that, the broader ambition was to develop a research platform where others could quickly train and run experiments on huge amounts of Carla vehicle data. Thus, my end product is not a single model, but a large-scale, open-source research platform (RayCarla) for autonomous vehicle researchers to utilize.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 09:13:27 GMT" } ]
2022-01-19T00:00:00
[ [ "Xu", "Derek", "" ] ]
new_dataset
0.958039
2201.07000
Fan Meng
Fan Meng, Tao Song, Danya Xu
TCR-GAN: Predicting tropical cyclone passive microwave rainfall using infrared imagery via generative adversarial networks
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tropical cyclones (TCs) generally carry large amounts of water vapor and can cause large-scale extreme rainfall. Passive microwave rainfall (PMR) estimation of TCs with high spatial and temporal resolution is crucial for TC disaster warning, but remains a challenging problem due to the low temporal resolution of microwave sensors. This study attempts to solve this problem by directly forecasting PMR from satellite infrared (IR) images of TCs. We develop a generative adversarial network (GAN) to convert IR images into PMR and establish the mapping relationship between TC cloud-top brightness temperature and PMR; the algorithm is named TCR-GAN. Meanwhile, a new benchmark dataset, the Dataset of Tropical Cyclone IR-to-Rainfall Prediction (TCIRRP), was established, which is expected to advance the development of artificial intelligence in this direction. Experimental results show that the algorithm can effectively extract key features from IR imagery. The end-to-end deep learning approach shows potential as a technique that can be applied globally and provides a new perspective on tropical cyclone precipitation prediction via satellite, which is expected to provide important insights for real-time visualization of TC rainfall globally in operations.
[ { "version": "v1", "created": "Fri, 14 Jan 2022 08:22:16 GMT" } ]
2022-01-19T00:00:00
[ [ "Meng", "Fan", "" ], [ "Song", "Tao", "" ], [ "Xu", "Danya", "" ] ]
new_dataset
0.999111
2201.07012
Stanislav Fort
Stanislav Fort
Adversarial vulnerability of powerful near out-of-distribution detection
8 pages
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been significant progress in detecting out-of-distribution (OOD) inputs in neural networks recently, primarily due to the use of large models pretrained on large datasets, and an emerging use of multi-modality. We show a severe adversarial vulnerability of even the strongest current OOD detection techniques. With a small, targeted perturbation to the input pixels, we can easily change the image assignment from in-distribution to out-of-distribution, and vice versa. In particular, we demonstrate severe adversarial vulnerability on the challenging near OOD CIFAR-100 vs CIFAR-10 task, as well as on the far OOD CIFAR-100 vs SVHN. We study the adversarial robustness of several post-processing techniques, including the simple baseline of Maximum of Softmax Probabilities (MSP), the Mahalanobis distance, and the newly proposed \textit{Relative} Mahalanobis distance. By comparing the loss of OOD detection performance at various perturbation strengths, we demonstrate the beneficial effect of using ensembles of OOD detectors, and the use of the \textit{Relative} Mahalanobis distance over other post-processing methods. In addition, we show that even strong zero-shot OOD detection using CLIP and multi-modality suffers from a severe lack of adversarial robustness as well. Our code is available at https://github.com/stanislavfort/adversaries_to_OOD_detection
[ { "version": "v1", "created": "Tue, 18 Jan 2022 14:23:07 GMT" } ]
2022-01-19T00:00:00
[ [ "Fort", "Stanislav", "" ] ]
new_dataset
0.982325
2201.07067
Marco Tranzatto
Marco Tranzatto, Frank Mascarich, Lukas Bernreiter, Carolina Godinho, Marco Camurri, Shehryar Khattak, Tung Dang, Victor Reijgwart, Johannes Loeje, David Wisth, Samuel Zimmermann, Huan Nguyen, Marius Fehr, Lukas Solanka, Russell Buchanan, Marko Bjelonic, Nikhil Khedekar, Mathieu Valceschini, Fabian Jenelten, Mihir Dharmadhikari, Timon Homberger, Paolo De Petris, Lorenz Wellhausen, Mihir Kulkarni, Takahiro Miki, Satchel Hirsch, Markus Montenegro, Christos Papachristos, Fabian Tresoldi, Jan Carius, Giorgio Valsecchi, Joonho Lee, Konrad Meyer, Xiangyu Wu, Juan Nieto, Andy Smith, Marco Hutter, Roland Siegwart, Mark Mueller, Maurice Fallon, Kostas Alexis
CERBERUS: Autonomous Legged and Aerial Robotic Exploration in the Tunnel and Urban Circuits of the DARPA Subterranean Challenge
50 pages, 25 figures. Accepted at Field Robotics, 2021
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous exploration of subterranean environments constitutes a major frontier for robotic systems as underground settings present key challenges that can render robot autonomy hard to achieve. This has motivated the DARPA Subterranean Challenge, where teams of robots search for objects of interest in various underground environments. In response, the CERBERUS system-of-systems is presented as a unified strategy towards subterranean exploration using legged and flying robots. As primary robots, ANYmal quadruped systems are deployed considering their endurance and potential to traverse challenging terrain. For aerial robots, both conventional and collision-tolerant multirotors are utilized to explore spaces too narrow or otherwise unreachable by ground systems. Anticipating degraded sensing conditions, a complementary multi-modal sensor fusion approach utilizing camera, LiDAR, and inertial data for resilient robot pose estimation is proposed. Individual robot pose estimates are refined by a centralized multi-robot map optimization approach to improve the reported location accuracy of detected objects of interest in the DARPA-defined coordinate frame. Furthermore, a unified exploration path planning policy is presented to facilitate the autonomous operation of both legged and aerial robots in complex underground networks. Finally, to enable communication between the robots and the base station, CERBERUS utilizes a ground rover with a high-gain antenna and an optical fiber connection to the base station, alongside breadcrumbing of wireless nodes by our legged robots. We report results from the CERBERUS system-of-systems deployment at the DARPA Subterranean Challenge Tunnel and Urban Circuits, along with the current limitations and the lessons learned for the benefit of the community.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 15:48:51 GMT" } ]
2022-01-19T00:00:00
[ [ "Tranzatto", "Marco", "" ], [ "Mascarich", "Frank", "" ], [ "Bernreiter", "Lukas", "" ], [ "Godinho", "Carolina", "" ], [ "Camurri", "Marco", "" ], [ "Khattak", "Shehryar", "" ], [ "Dang", "Tung", "" ], [ "Reijgwart", "Victor", "" ], [ "Loeje", "Johannes", "" ], [ "Wisth", "David", "" ], [ "Zimmermann", "Samuel", "" ], [ "Nguyen", "Huan", "" ], [ "Fehr", "Marius", "" ], [ "Solanka", "Lukas", "" ], [ "Buchanan", "Russell", "" ], [ "Bjelonic", "Marko", "" ], [ "Khedekar", "Nikhil", "" ], [ "Valceschini", "Mathieu", "" ], [ "Jenelten", "Fabian", "" ], [ "Dharmadhikari", "Mihir", "" ], [ "Homberger", "Timon", "" ], [ "De Petris", "Paolo", "" ], [ "Wellhausen", "Lorenz", "" ], [ "Kulkarni", "Mihir", "" ], [ "Miki", "Takahiro", "" ], [ "Hirsch", "Satchel", "" ], [ "Montenegro", "Markus", "" ], [ "Papachristos", "Christos", "" ], [ "Tresoldi", "Fabian", "" ], [ "Carius", "Jan", "" ], [ "Valsecchi", "Giorgio", "" ], [ "Lee", "Joonho", "" ], [ "Meyer", "Konrad", "" ], [ "Wu", "Xiangyu", "" ], [ "Nieto", "Juan", "" ], [ "Smith", "Andy", "" ], [ "Hutter", "Marco", "" ], [ "Siegwart", "Roland", "" ], [ "Mueller", "Mark", "" ], [ "Fallon", "Maurice", "" ], [ "Alexis", "Kostas", "" ] ]
new_dataset
0.999453
2201.07078
Xian Wang
Xian Wang, Diego Monteiro, Lik-Hang Lee, Pan Hui, and Hai-Ning Liang
VibroWeight: Simulating Weight and Center of Gravity Changes of Objects in Virtual Reality for Enhanced Realism
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Haptic feedback in virtual reality (VR) allows users to perceive the physical properties of virtual objects (e.g., their weight and motion patterns). However, the lack of haptic sensations deteriorates users' immersion and overall experience. In this work, we designed and implemented a low-cost hardware prototype with liquid metal, VibroWeight, which can work in a complementary fashion with commercial VR handheld controllers. VibroWeight is characterized by bimodal feedback cues in VR, driven by adaptive absolute mass (weights) and gravity shift. To our knowledge, this is the first time liquid metal has been used in a VR haptic device. A study with 29 participants shows that VibroWeight delivers significantly better VR experiences in terms of realism and comfort.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 16:01:38 GMT" } ]
2022-01-19T00:00:00
[ [ "Wang", "Xian", "" ], [ "Monteiro", "Diego", "" ], [ "Lee", "Lik-Hang", "" ], [ "Hui", "Pan", "" ], [ "Liang", "Hai-Ning", "" ] ]
new_dataset
0.999454
2201.07170
Julian D. Cortes
Julian D. Cortes
What is the mission of innovation?
null
null
null
null
cs.SI econ.GN q-fin.EC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Governments and organizations recognize the need to revisit mission-driven innovation amidst national and organizational innovation policy formulations. Notwithstanding a fertile research agenda on mission statements (hereafter mission(s)), several lines of inquiry remain open, such as cross-national and multi-sectorial studies and an examination of research knowledge intensive institutions. In this article, we identify similarities and differences in the content of missions from government, private, higher education, and health research knowledge intensive institutions in a sample of over 1,900 institutions from 89 countries through the deployment of sentiment analysis, readability, and lexical diversity; semantic networks; and a similarity computation between document corpora. We found that missions of research knowledge intensive institutions are challenging-to-read texts with lower lexical diversity that favor positive rather than negative words. In stark contrast to this, the non-profit sector is consonant in multiple dimensions in its use of Corporate Social Responsibility jargon. The lexical appearance of research in the missions varies according to the mission's sectorial context, and each sector has a cluster-specific focus. Utilizing the mission as a strategic planning tool in higher-income regions might serve to explain the corpus similarities shared by sectors and continents.
[ { "version": "v1", "created": "Fri, 14 Jan 2022 18:27:31 GMT" } ]
2022-01-19T00:00:00
[ [ "Cortes", "Julian D.", "" ] ]
new_dataset
0.980451
2201.07189
Mihee Lee
Mihee Lee, Samuel S. Sohn, Seonghyeon Moon, Sejong Yoon, Mubbasir Kapadia, Vladimir Pavlovic
MUSE-VAE: Multi-Scale VAE for Environment-Aware Long Term Trajectory Prediction
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Accurate long-term trajectory prediction in complex scenes, where multiple agents (e.g., pedestrians or vehicles) interact with each other and the environment while attempting to accomplish diverse and often unknown goals, is a challenging stochastic forecasting problem. In this work, we propose MUSE, a new probabilistic modeling framework based on a cascade of Conditional VAEs, which tackles the long-term, uncertain trajectory prediction task using a coarse-to-fine multi-factor forecasting architecture. In its Macro stage, the model learns a joint pixel-space representation of two key factors, the underlying environment and the agent movements, to predict the long and short-term motion goals. Conditioned on them, the Micro stage learns a fine-grained spatio-temporal representation for the prediction of individual agent trajectories. The VAE backbones across the two stages make it possible to naturally account for the joint uncertainty at both levels of granularity. As a result, MUSE offers diverse and simultaneously more accurate predictions compared to the current state-of-the-art. We demonstrate these assertions through a comprehensive set of experiments on nuScenes and SDD benchmarks as well as PFSD, a new synthetic dataset, which challenges the forecasting ability of models on complex agent-environment interaction scenarios.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 18:40:03 GMT" } ]
2022-01-19T00:00:00
[ [ "Lee", "Mihee", "" ], [ "Sohn", "Samuel S.", "" ], [ "Moon", "Seonghyeon", "" ], [ "Yoon", "Sejong", "" ], [ "Kapadia", "Mubbasir", "" ], [ "Pavlovic", "Vladimir", "" ] ]
new_dataset
0.995943
2201.07201
Hao Li
Hao Li and Cor-Paul Bezemer
Studying Popular Open Source Machine Learning Libraries and Their Cross-Ecosystem Bindings
12 pages, 10 figures, submitted to IEEE Transactions on Software Engineering
null
null
null
cs.SE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open source machine learning (ML) libraries allow developers to integrate advanced ML functionality into their own applications. However, popular ML libraries, such as TensorFlow, are not available natively in all programming languages and software package ecosystems. Hence, developers who wish to use an ML library which is not available in their programming language or ecosystem of choice, may need to resort to using a so-called binding library. Binding libraries provide support across programming languages and package ecosystems for a source library. For example, the Keras .NET binding provides support for the Keras library in the NuGet (.NET) ecosystem even though the Keras library was written in Python. In this paper, we conduct an in-depth study of 155 cross-ecosystem bindings and their development for 36 popular open source ML libraries. Our study shows that for most popular ML libraries, only one package ecosystem is officially supported (usually PyPI). Cross-ecosystem support, which is available for 25% of the studied ML libraries, is usually provided through community-maintained bindings, e.g., 73% of the bindings in the npm ecosystem are community-maintained. Our study shows that the vast majority of the studied bindings cover only a small portion of the source library releases, and the delay for receiving support for a source library release is large.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 18:53:21 GMT" } ]
2022-01-19T00:00:00
[ [ "Li", "Hao", "" ], [ "Bezemer", "Cor-Paul", "" ] ]
new_dataset
0.996489
2010.10841
Ran Long
Ran Long, Christian Rauch, Tianwei Zhang, Vladimir Ivan, Sethu Vijayakumar
RigidFusion: Robot Localisation and Mapping in Environments with Large Dynamic Rigid Objects
8 pages, 11 figures. IEEE Robotics and Automation Letters (2021)
IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3703-3710, April 2021
10.1109/LRA.2021.3066375
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a novel RGB-D SLAM approach to simultaneously segment, track and reconstruct the static background and large dynamic rigid objects that can occlude major portions of the camera view. Previous approaches treat dynamic parts of a scene as outliers and are thus limited to a small amount of changes in the scene, or rely on prior information for all objects in the scene to enable robust camera tracking. Here, we propose to treat all dynamic parts as one rigid body and simultaneously segment and track both static and dynamic components. We, therefore, enable simultaneous localisation and reconstruction of both the static background and rigid dynamic components in environments where dynamic objects cause large occlusion. We evaluate our approach on multiple challenging scenes with large dynamic occlusion. The evaluation demonstrates that our approach achieves better motion segmentation, localisation and mapping without requiring prior knowledge of the dynamic object's shape and appearance.
[ { "version": "v1", "created": "Wed, 21 Oct 2020 09:04:43 GMT" }, { "version": "v2", "created": "Thu, 4 Mar 2021 16:24:09 GMT" } ]
2022-01-17T00:00:00
[ [ "Long", "Ran", "" ], [ "Rauch", "Christian", "" ], [ "Zhang", "Tianwei", "" ], [ "Ivan", "Vladimir", "" ], [ "Vijayakumar", "Sethu", "" ] ]
new_dataset
0.988801
2104.08541
Jiajun Deng
Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wengang Zhou, and Houqiang Li
TransVG: End-to-End Visual Grounding with Transformers
This paper has been accepted by ICCV2021
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a neat yet effective transformer-based framework for visual grounding, namely TransVG, to address the task of grounding a language query to the corresponding region of an image. The state-of-the-art methods, including two-stage or one-stage ones, rely on a complex module with manually-designed mechanisms to perform the query reasoning and multi-modal fusion. However, the involvement of certain mechanisms in fusion module design, such as query decomposition and image scene graph, makes the models easily overfit to datasets with specific scenarios, and limits the plenitudinous interaction between the visual and linguistic contexts. To avoid this caveat, we propose to establish the multi-modal correspondence by leveraging transformers, and empirically show that the complex fusion modules (e.g., modular attention network, dynamic graph, and multi-modal tree) can be replaced by a simple stack of transformer encoder layers with higher performance. Moreover, we reformulate visual grounding as a direct coordinate regression problem and avoid making predictions out of a set of candidates (i.e., region proposals or anchor boxes). Extensive experiments are conducted on five widely used datasets, and a series of state-of-the-art records are set by our TransVG. We build the benchmark of the transformer-based visual grounding framework and make the code available at \url{https://github.com/djiajunustc/TransVG}.
[ { "version": "v1", "created": "Sat, 17 Apr 2021 13:35:24 GMT" }, { "version": "v2", "created": "Thu, 12 Aug 2021 08:00:27 GMT" }, { "version": "v3", "created": "Sat, 9 Oct 2021 09:43:30 GMT" }, { "version": "v4", "created": "Fri, 14 Jan 2022 14:46:13 GMT" } ]
2022-01-17T00:00:00
[ [ "Deng", "Jiajun", "" ], [ "Yang", "Zhengyuan", "" ], [ "Chen", "Tianlang", "" ], [ "Zhou", "Wengang", "" ], [ "Li", "Houqiang", "" ] ]
new_dataset
0.967031
2104.12663
Henning Schulze
Henning Schulze and Dogucan Yaman and Alexander Waibel
CAGAN: Text-To-Image Generation with Combined Attention GANs
null
LNCS 13024 (2021) 392-404
10.1007/978-3-030-92659-5_25
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Generating images according to natural language descriptions is a challenging task. Prior research has mainly focused on enhancing the quality of generation by investigating the use of spatial attention and/or textual attention, thereby neglecting the relationship between channels. In this work, we propose the Combined Attention Generative Adversarial Network (CAGAN) to generate photo-realistic images according to textual descriptions. The proposed CAGAN utilises two attention models: word attention to draw different sub-regions conditioned on related words; and squeeze-and-excitation attention to capture non-linear interaction among channels. With spectral normalisation to stabilise training, our proposed CAGAN improves the state of the art on the IS and FID on the CUB dataset and the FID on the more challenging COCO dataset. Furthermore, we demonstrate that judging a model by a single evaluation metric can be misleading by developing an additional model adding local self-attention which scores a higher IS, outperforming the state of the art on the CUB dataset, but generates unrealistic images through feature repetition.
[ { "version": "v1", "created": "Mon, 26 Apr 2021 15:46:40 GMT" }, { "version": "v2", "created": "Wed, 23 Jun 2021 18:57:03 GMT" }, { "version": "v3", "created": "Wed, 8 Sep 2021 15:48:15 GMT" }, { "version": "v4", "created": "Fri, 14 Jan 2022 16:16:53 GMT" } ]
2022-01-17T00:00:00
[ [ "Schulze", "Henning", "" ], [ "Yaman", "Dogucan", "" ], [ "Waibel", "Alexander", "" ] ]
new_dataset
0.989599
2112.13984
Xiaoqing Yang
Xiaoqing Yang, Fei Li
Relative velocity-based reward functions for crowd navigation of robots
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
The four-wheeled Mecanum robot is widely used in various industries due to its maneuverability and strong load capacity, which makes it suitable for performing precise transportation tasks in narrow environments; however, while the Mecanum wheel robot is highly mobile, it also consumes more energy than ordinary robots. The power consumed by a Mecanum wheel mobile robot varies enormously depending on its operating regime and environment. Therefore, only by knowing the robot's working environment and an accurate power consumption model can we accurately predict its power consumption. In order to broaden the applicable scenarios of energy consumption modeling for Mecanum wheel robots and improve the accuracy of such modeling, this paper focuses on various factors that affect the energy consumption of the Mecanum wheel robot, such as motor temperature, terrain, and the position of the center of gravity. The model is derived from the kinematic and kinetic models combined with electrical engineering and energy flow principles. The model has been simulated in MATLAB and experimentally validated with the four-wheeled Mecanum robot platform in our lab. Experimental results show that the model is 90% accurate. The results of energy consumption modeling can help robots save energy by enabling rational path planning and task planning.
[ { "version": "v1", "created": "Tue, 28 Dec 2021 03:49:01 GMT" }, { "version": "v2", "created": "Fri, 14 Jan 2022 08:07:04 GMT" } ]
2022-01-17T00:00:00
[ [ "Yang", "Xiaoqing", "" ], [ "Li", "Fei", "" ] ]
new_dataset
0.962018
2201.05179
Chenning Li
Chenning Li, Xiuzhen Guo, Longfei Shangguan, Zhichao Cao, Kyle Jamieson
CurvingLoRa to Boost LoRa Network Capacity via Concurrent Transmission
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LoRaWAN has emerged as an appealing technology to connect IoT devices but it functions without explicit coordination among transmitters, which can lead to many packet collisions as the network scales. State-of-the-art work proposes various approaches to deal with these collisions, but most function only in high signal-to-interference ratio (SIR) conditions and thus do not scale to many scenarios where weak receptions are easily buried by stronger receptions from nearby transmitters. In this paper, we take a fresh look at LoRa's physical layer, revealing that its underlying linear chirp modulation fundamentally limits the capacity and scalability of concurrent LoRa transmissions. We show that by replacing linear chirps with their non-linear counterparts, we can boost the capacity of concurrent LoRa transmissions and empower the LoRa receiver to successfully receive weak transmissions in the presence of strong colliding signals. Such a non-linear chirp design further enables the receiver to demodulate fully aligned collision symbols - a case that none of the existing approaches can deal with. We implement these ideas in a holistic LoRaWAN stack based on the USRP N210 software-defined radio platform. Our head-to-head comparison with two state-of-the-art research systems and a standard LoRaWAN baseline demonstrates that CurvingLoRa improves the network throughput by 1.6-7.6x while simultaneously sacrificing neither power efficiency nor noise resilience. An open-source dataset and code will be made available before publication.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 19:10:52 GMT" } ]
2022-01-17T00:00:00
[ [ "Li", "Chenning", "" ], [ "Guo", "Xiuzhen", "" ], [ "Shangguan", "Longfei", "" ], [ "Cao", "Zhichao", "" ], [ "Jamieson", "Kyle", "" ] ]
new_dataset
0.999218
2201.05203
Bilal Abu-Salih
Bilal Abu-Salih, Dana Al Qudah, Malak Al-Hassan, Seyed Mohssen Ghafari, Tomayess Issa, Ibrahim Aljarah, Amin Beheshti, Sulaiman Alqahtan
An Intelligent System for Multi-topic Social Spam Detection in Microblogging
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
The communication revolution has perpetually reshaped the means through which people send and receive information. Social media is an important pillar of this revolution and has brought profound changes to various aspects of our lives. However, the open environment and popularity of these platforms open windows of opportunity for various cyber threats; thus, social networks have become a fertile venue for spammers and other illegitimate users to execute their malicious activities. These activities include phishing hot and trendy topics and posting a wide range of content on many topics. Hence, it is crucial to continuously introduce new techniques and approaches to detect and stop this category of users. This paper proposes a novel and effective approach to detect social spammers. An investigation into several attributes to measure topic-dependent and topic-independent users' behaviours on Twitter is carried out. The experiments of this study are undertaken on various machine learning classifiers. The performance of these classifiers is compared and their effectiveness is measured via a number of robust evaluation measures. Further, the proposed approach is benchmarked against state-of-the-art social spam and anomaly detection techniques. These experiments demonstrate the effectiveness and utility of the proposed approach and its embedded modules.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 20:39:36 GMT" } ]
2022-01-17T00:00:00
[ [ "Abu-Salih", "Bilal", "" ], [ "Qudah", "Dana Al", "" ], [ "Al-Hassan", "Malak", "" ], [ "Ghafari", "Seyed Mohssen", "" ], [ "Issa", "Tomayess", "" ], [ "Aljarah", "Ibrahim", "" ], [ "Beheshti", "Amin", "" ], [ "Alqahtan", "Sulaiman", "" ] ]
new_dataset
0.991639
2201.05230
Daniel Bauer
Daniel Bauer (1), Tom Longley (2), Yueen Ma (1), Tony Wilson (2) ((1) Department of Computer Science, Columbia University, (2) Security Force Monitor, Human Rights Institute, Columbia Law School)
NLP in Human Rights Research -- Extracting Knowledge Graphs About Police and Army Units and Their Commanders
Equal contributions. For the associated text corpus, see https://github.com/security-force-monitor/nlp_starter_dataset
null
null
null
cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
In this working paper we explore the use of an NLP system to assist the work of Security Force Monitor (SFM). SFM creates data about the organizational structure, command personnel and operations of police, army and other security forces, which assists human rights researchers, journalists and litigators in their work to help identify and bring to account specific units and personnel alleged to have committed abuses of human rights and international criminal law. This working paper presents an NLP system that extracts from English language news reports the names of security force units and the biographical details of their personnel, and infers the formal relationship between them. Published alongside this working paper are the system's code and training dataset. We find that the experimental NLP system performs the task at a fair to good level. Its performance is sufficient to justify further development into a live workflow that will give insight into whether its performance translates into savings in time and resource that would make it an effective technical intervention.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 21:57:21 GMT" } ]
2022-01-17T00:00:00
[ [ "Bauer", "Daniel", "" ], [ "Longley", "Tom", "" ], [ "Ma", "Yueen", "" ], [ "Wilson", "Tony", "" ] ]
new_dataset
0.99928
2201.05231
Silviu Maniu
Alexandra Iacob, Bogdan Cautis, Silviu Maniu
Contextual Bandits for Advertising Campaigns: A Diffusion-Model Independent Approach (Extended Version)
Extended version of conference article in SIAM International Conference on Data Mining (SDM) 2022. 14 pages, 2 figures, 4 tables
null
null
null
cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Motivated by scenarios of information diffusion and advertising in social media, we study an influence maximization problem in which little is assumed to be known about the diffusion network or about the model that determines how information may propagate. In such a highly uncertain environment, one can focus on multi-round diffusion campaigns, with the objective to maximize the number of distinct users that are influenced or activated, starting from a known base of few influential nodes. During a campaign, spread seeds are selected sequentially at consecutive rounds, and feedback is collected in the form of the activated nodes at each round. A round's impact (reward) is then quantified as the number of newly activated nodes. Overall, one must maximize the campaign's total spread, as the sum of rounds' rewards. In this setting, an explore-exploit approach could be used to learn the key underlying diffusion parameters, while running the campaign. We describe and compare two methods of contextual multi-armed bandits, with upper-confidence bounds on the remaining potential of influencers, one using a generalized linear model and the Good-Turing estimator for remaining potential (GLM-GT-UCB), and another one that directly adapts the LinUCB algorithm to our setting (LogNorm-LinUCB). We show that they outperform baseline methods using state-of-the-art ideas, on synthetic and real-world data, while at the same time exhibiting different and complementary behavior, depending on the scenarios in which they are deployed.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 22:06:10 GMT" } ]
2022-01-17T00:00:00
[ [ "Iacob", "Alexandra", "" ], [ "Cautis", "Bogdan", "" ], [ "Maniu", "Silviu", "" ] ]
new_dataset
0.99664
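To make the Good-Turing idea in the abstract above concrete, here is a hedged Python sketch of one way to combine a Good-Turing estimate of an influencer's remaining potential with a UCB-style exploration bonus. The class name, the specific index formula, and the bonus constant are illustrative assumptions, not the paper's exact GLM-GT-UCB or LogNorm-LinUCB definitions.

```python
# Hedged sketch of a Good-Turing-style UCB index for influencer selection
# (illustrative only; not necessarily the paper's exact GLM-GT-UCB index).
import math
from collections import defaultdict

class GTUCBArm:
    def __init__(self):
        self.activation_counts = defaultdict(int)  # node -> times this seed activated it
        self.n_plays = 0

    def update(self, newly_activated):
        """Record one round's feedback: the set of nodes newly activated by this seed."""
        self.n_plays += 1
        for node in newly_activated:
            self.activation_counts[node] += 1

    def index(self, t):
        """Good-Turing estimate of remaining potential plus an exploration bonus."""
        if self.n_plays == 0:
            return float("inf")
        total = sum(self.activation_counts.values())
        hapaxes = sum(1 for c in self.activation_counts.values() if c == 1)
        remaining = hapaxes / max(total, 1)            # Good-Turing "missing mass" estimate
        bonus = math.sqrt(2.0 * math.log(t + 1) / self.n_plays)  # assumed UCB-style bonus
        return remaining + bonus

# Usage pattern: at round t, play the seed with the largest index(t), observe the
# newly activated nodes, call update() on that arm, and repeat.
```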
2201.05240
Md Atiqul Islam
Md Atiqul Islam, George C. Alexandropoulos, and Besma Smida
Integrated Sensing and Communication with Millimeter Wave Full Duplex Hybrid Beamforming
6 pages, 4 figures, Submitted for publication in the Proceedings of IEEE ICC 2022, Seoul, South Korea
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Integrated Sensing and Communication (ISAC) has attracted substantial attention in recent years for spectral efficiency improvement, enabling hardware and spectrum sharing for simultaneous sensing and signaling operations. In-band Full Duplex (FD) is being considered as a key enabling technology for ISAC applications due to its simultaneous transmission and reception capability. In this paper, we present an FD-based ISAC system operating at millimeter Wave (mmWave) frequencies, where a massive Multiple-Input Multiple-Output (MIMO) Base Station (BS) node employing hybrid Analog and Digital (A/D) beamforming is communicating with a DownLink (DL) multi-antenna user and the same waveform is utilized at the BS receiver for sensing the radar targets in its coverage environment. We develop a sensing algorithm that is capable of estimating Direction of Arrival (DoA), range, and relative velocity of the radar targets. A joint optimization framework for designing the A/D transmit and receive beamformers as well as the Self-Interference (SI) cancellation is presented with the objective to maximize the achievable DL rate and the accuracy of the radar target sensing performance. Our simulation results, considering fifth Generation (5G) Orthogonal Frequency Division Multiplexing (OFDM) waveforms, verify our approach's high precision in estimating DoA, range, and velocity of multiple radar targets, while maximizing the DL communication rate.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 23:02:02 GMT" } ]
2022-01-17T00:00:00
[ [ "Islam", "Md Atiqul", "" ], [ "Alexandropoulos", "George C.", "" ], [ "Smida", "Besma", "" ] ]
new_dataset
0.996038
2201.05278
Hermes Senger
Jaime Freire de Souza, Jo\~ao Baptista Dias Moreira, Keith Jared Roberts, Roussian di Ramos Alves Gaioso, Edson Satoshi Gomi, Em\'ilio Carlos Nelli Silva and Hermes Senger
${\tt simwave}$ -- A Finite Difference Simulator for Acoustic Waves Propagation
null
null
null
null
cs.CE
http://creativecommons.org/licenses/by/4.0/
${\tt simwave}$ is an open-source Python package to perform wave simulations in 2D or 3D domains. It solves the constant and variable density acoustic wave equation with the finite difference method and has support for domain truncation techniques, several boundary conditions, and the modeling of sources and receivers given a user-defined acquisition geometry. The architecture of ${\tt simwave}$ is designed for applications with geophysical exploration in mind. Its Python front-end enables straightforward integration with many existing Python scientific libraries for the composition of more complex workflows and applications (e.g., migration and inversion problems). The back-end is implemented in C enabling performance portability across a range of computing hardware and compilers including both CPUs and GPUs.
[ { "version": "v1", "created": "Fri, 14 Jan 2022 02:21:49 GMT" } ]
2022-01-17T00:00:00
[ [ "de Souza", "Jaime Freire", "" ], [ "Moreira", "João Baptista Dias", "" ], [ "Roberts", "Keith Jared", "" ], [ "Gaioso", "Roussian di Ramos Alves", "" ], [ "Gomi", "Edson Satoshi", "" ], [ "Silva", "Emílio Carlos Nelli", "" ], [ "Senger", "Hermes", "" ] ]
new_dataset
0.996611
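As a purely illustrative companion to the record above, the following NumPy sketch shows the kind of second-order finite-difference update for the 2D constant-density acoustic wave equation that such a simulator wraps. It does not use simwave's actual API; the grid size, velocity model, source position, and Ricker wavelet parameters are all assumptions chosen for the example.

```python
# Minimal 2D constant-density acoustic finite-difference sketch (NOT simwave's API).
# Second-order leapfrog in time, second-order centered differences in space.
import numpy as np

nx = nz = 201                      # grid points (assumed)
dx = 10.0                          # grid spacing in meters (assumed)
c = np.full((nz, nx), 1500.0)      # homogeneous velocity model, m/s (assumed)
dt = 0.4 * dx / c.max()            # time step satisfying the 2D CFL bound
nt = 500                           # number of time steps (assumed)

p_prev = np.zeros((nz, nx))
p_curr = np.zeros((nz, nx))
src_z, src_x = nz // 2, nx // 2    # point source at the grid center (assumed)

def ricker(t, f0=15.0):
    """Ricker wavelet used as the source time function."""
    a = (np.pi * f0 * (t - 1.0 / f0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

for it in range(nt):
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (
        p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] +
        p_curr[1:-1, 2:] + p_curr[1:-1, :-2] -
        4.0 * p_curr[1:-1, 1:-1]
    ) / dx**2
    p_next = 2.0 * p_curr - p_prev + (c * dt) ** 2 * lap   # leapfrog update
    p_next[src_z, src_x] += dt**2 * ricker(it * dt)        # inject the source term
    p_prev, p_curr = p_curr, p_next
```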
2201.05356
Luca Grementieri
Luca Grementieri, Francesco Finelli
StAnD: A Dataset of Linear Static Analysis Problems
9 pages, 1 figure
null
null
null
cs.LG cs.MS cs.NA math.NA
http://creativecommons.org/licenses/by/4.0/
Static analysis of structures is a fundamental step for determining the stability of structures. Both linear and non-linear static analyses consist of the resolution of sparse linear systems obtained by the finite element method. The development of fast and optimized solvers for sparse linear systems appearing in structural engineering requires data to compare existing approaches, tune algorithms or to evaluate new ideas. We introduce the Static Analysis Dataset (StAnD) containing 303.000 static analysis problems obtained applying realistic loads to simulated frame structures. Along with the dataset, we publish a detailed benchmark comparison of the running time of existing solvers both on CPU and GPU. We release the code used to generate the dataset and benchmark existing solvers on Github. To the best of our knowledge, this is the largest dataset for static analysis problems and it is the first public dataset of sparse linear systems (containing both the matrix and a realistic constant term).
[ { "version": "v1", "created": "Fri, 14 Jan 2022 09:31:43 GMT" } ]
2022-01-17T00:00:00
[ [ "Grementieri", "Luca", "" ], [ "Finelli", "Francesco", "" ] ]
new_dataset
0.999712
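A benchmark of the kind described in the StAnD abstract above ultimately times the solution of sparse systems K x = f arising from finite-element static analysis. The following SciPy sketch shows such a timing loop on a randomly generated stand-in matrix; the matrix construction, sizes, and solver choice are assumptions for illustration, not an actual StAnD problem or the authors' benchmark code.

```python
# Timing a direct sparse solve of K x = f, as a stand-in for one StAnD benchmark case.
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 5000
A = sp.random(n, n, density=1e-3, format="csr", random_state=0)
K = (A + A.T) + sp.identity(n) * 10.0          # symmetrize and shift toward diagonal dominance
f = np.random.default_rng(0).standard_normal(n)

t0 = time.perf_counter()
x = spla.spsolve(K.tocsc(), f)                 # direct sparse solver
elapsed = time.perf_counter() - t0

residual = np.linalg.norm(K @ x - f) / np.linalg.norm(f)
print(f"solve time: {elapsed:.3f} s, relative residual: {residual:.2e}")
```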
2201.05488
Jan Ostergaard
Jan {\O}stergaard
Stabilizing Error Correction Codes for Controlling LTI Systems over Erasure Channels
Accepted and presented at the IEEE 60th Conference on Decision and Control (CDC). arXiv admin note: substantial text overlap with arXiv:2112.11717
null
null
null
cs.IT math.IT math.OC
http://creativecommons.org/licenses/by/4.0/
We propose (k,k') stabilizing codes, a type of delayless error correction code that is useful for control over networks with erasures. For each input symbol, k output symbols are generated by the stabilizing code. Receiving any k' of these outputs guarantees stability. Thus, the system to be stabilized is taken into account in the design of the erasure codes. Our focus is on LTI systems, and we construct codes based on independent encodings and multiple descriptions. The theoretical efficiency and performance of the codes are assessed, and their practical performance is demonstrated in a simulation study. There is a significant gain over other delayless codes such as repetition codes.
[ { "version": "v1", "created": "Fri, 14 Jan 2022 14:54:43 GMT" } ]
2022-01-17T00:00:00
[ [ "Østergaard", "Jan", "" ] ]
new_dataset
0.987991
2201.05503
Leonardo Santos
Aurelienne A. S. Jorge, Iuri da Silva Diniz, Vander L. S. Freitas, Izabelly C. Costa, Leonardo B. L. Santos
Global-threshold and backbone high-resolution weather radar networks are significantly complementary in a watershed
7 pages, 6 figures. To be submitted to Computers and Geosciences (Elsevier)
null
null
null
cs.SI physics.soc-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
There are several criteria for building up networks from time series related to different points in geographical space. The most used criterion is the Global-Threshold (GT). Using a weather radar dataset, this paper shows that the Backbone (BB) - a local-threshold criterion - generates networks whose geographical configuration is complementary to the GT networks. We compare the results for two well-known similarity measures: the Pearson Correlation (PC) coefficient and the Mutual Information (MI). The extracted backbone network (miBB), whose number of links is the same as the global MI (miGT), has the lowest average shortest path and presents a small-world effect. Regarding the global PC (pcGT) and its corresponding BB network (pcBB), there is a significant linear relationship: $R^2=0.77$ with a slope of $1.15$ (p-value $<E-7$) for the pcGT network, and $R^2=0.68$ with a slope of $0.76$ (p-value $<E-7$) for the pcBB network. In relation to the MI ones, only the miGT presents a high $R^2$ ($0.79$, with slope = $1.95$), whereas the miBB has an $R^2$ of only $0.20$ ($\text{slope} =0.24$). On the one hand, the GT networks present a sizeable connected component in the central area, close to the main rivers. On the other hand, the BB networks present a few meaningful connected components surrounding the watershed and dominating cells close to the outlet, with significant statistical differences in the altimetry distribution.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 17:04:59 GMT" } ]
2022-01-17T00:00:00
[ [ "Jorge", "Aurelienne A. S.", "" ], [ "Diniz", "Iuri da Silva", "" ], [ "Freitas", "Vander L. S.", "" ], [ "Costa", "Izabelly C.", "" ], [ "Santos", "Leonardo B. L.", "" ] ]
new_dataset
0.999279
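As a toy illustration of the global-threshold (GT) construction mentioned in the abstract above, the sketch below builds an adjacency matrix by thresholding pairwise Pearson correlations between grid-cell time series. The synthetic series, the number of cells, and the threshold value are assumptions; the paper's backbone (BB) extraction and mutual-information variants are not reproduced here.

```python
# Toy global-threshold (GT) network from pairwise Pearson correlations (illustrative only).
import numpy as np

rng = np.random.default_rng(42)
n_cells, n_steps = 50, 500
series = rng.standard_normal((n_cells, n_steps))      # synthetic radar-cell time series

corr = np.corrcoef(series)                            # n_cells x n_cells Pearson correlations
threshold = 0.5                                       # single global threshold (assumed value)
adjacency = (np.abs(corr) >= threshold).astype(int)
np.fill_diagonal(adjacency, 0)                        # no self-loops

degrees = adjacency.sum(axis=1)
print("edges:", adjacency.sum() // 2, "max degree:", degrees.max())
```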
2201.05518
Wei Pu
David Guttendorf, D.W. Wilson Hamilton, Anne Harris Heckman, Herman Herman, Felix Jonathan, Prasanna Kannappan, Nicholas Mireles, Luis Navarro-Serment, Jean Oh, Wei Pu, Rohan Saxena, Jeff Schneider, Matt Schnur, Carter Tiernan, Trenton Tabor
UGV-UAV Object Geolocation in Unstructured Environments
Authors are with National Robotics Engineering Center, the Robotics Institute of Carnegie Mellon University, Pittsburgh PA, listed in alphabetical order. E-mail: wpu@nrec.ri.cmu.edu
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
A robotic system of multiple unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) has the potential for advancing autonomous object geolocation performance. Much research has focused on algorithmic improvements on individual components, such as navigation, motion planning, and perception. In this paper, we present a UGV-UAV object detection and geolocation system, which performs perception, navigation, and planning autonomously at real scale in unstructured environments. We designed novel sensor pods equipped with a multispectral (visible, near-infrared, thermal), high resolution (181.6 Mega Pixels), stereo (near-infrared pair), wide field of view (192 degree HFOV) array. We developed a novel on-board software-hardware architecture to process the high volume of sensor data in real time, and we built a custom AI subsystem composed of detection, tracking, navigation, and planning for autonomous object geolocation in real time. This research is the first real-scale demonstration of such high-speed data processing capability. Our novel modular sensor pod can boost relevant computer vision and machine learning research. Our novel hardware-software architecture is a solid foundation for system-level and component-level research. Our system is validated through data-driven offline tests as well as a series of field tests in unstructured environments. We present quantitative results as well as discussions on key robotic system-level challenges which manifest when we build and test the system. This system is the first step toward a UGV-UAV cooperative reconnaissance system in the future.
[ { "version": "v1", "created": "Fri, 14 Jan 2022 15:41:05 GMT" } ]
2022-01-17T00:00:00
[ [ "Guttendorf", "David", "" ], [ "Hamilton", "D. W. Wilson", "" ], [ "Heckman", "Anne Harris", "" ], [ "Herman", "Herman", "" ], [ "Jonathan", "Felix", "" ], [ "Kannappan", "Prasanna", "" ], [ "Mireles", "Nicholas", "" ], [ "Navarro-Serment", "Luis", "" ], [ "Oh", "Jean", "" ], [ "Pu", "Wei", "" ], [ "Saxena", "Rohan", "" ], [ "Schneider", "Jeff", "" ], [ "Schnur", "Matt", "" ], [ "Tiernan", "Carter", "" ], [ "Tabor", "Trenton", "" ] ]
new_dataset
0.998989
2201.05541
Qinkang Gong
Qinkang Gong, Liangdao Wang, Hanjiang Lai, Yan Pan, Jian Yin
ViT2Hash: Unsupervised Information-Preserving Hashing
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised image hashing, which maps images into binary codes without supervision, is a compressor with a high compression rate. Hence, how to preserve the meaningful information of the original data is a critical problem. Inspired by the large-scale vision pre-training model known as ViT, which has shown significant progress in learning visual representations, in this paper we propose a simple information-preserving compressor to fine-tune the ViT model for the target unsupervised hashing task. Specifically, from pixels to continuous features, we first propose a feature-preserving module, which uses the corrupted image as input to reconstruct the original feature from the pre-trained ViT model and the complete image, so that the feature extractor can focus on preserving the meaningful information of the original data. Secondly, from continuous features to hash codes, we propose a hashing-preserving module, which aims to keep the semantic information from the pre-trained ViT model by using the proposed Kullback-Leibler divergence loss. Besides, the quantization loss and the similarity loss are added to minimize the quantization error. Our method is very simple and achieves significantly higher MAP on three benchmark image datasets.
[ { "version": "v1", "created": "Fri, 14 Jan 2022 16:25:30 GMT" } ]
2022-01-17T00:00:00
[ [ "Gong", "Qinkang", "" ], [ "Wang", "Liangdao", "" ], [ "Lai", "Hanjiang", "" ], [ "Pan", "Yan", "" ], [ "Yin", "Jian", "" ] ]
new_dataset
0.998769
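For orientation only, the following NumPy sketch spells out generic versions of two loss terms named in the ViT2Hash abstract above: a Kullback-Leibler term between teacher and student distributions and a quantization term pushing continuous codes toward binary values. The shapes, the tanh relaxation, and the loss weighting are assumptions and do not reproduce the paper's exact formulation.

```python
# Generic hashing-loss sketch (KL + quantization terms); not ViT2Hash's exact formulation.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
teacher_logits = rng.standard_normal((8, 128))   # frozen pre-trained encoder outputs (assumed shape)
student_logits = rng.standard_normal((8, 128))   # hashing-branch outputs (assumed shape)
hash_logits = rng.standard_normal((8, 64))       # continuous codes before binarization (assumed)

p = softmax(teacher_logits)
q = softmax(student_logits)
kl_loss = np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1))   # KL(teacher || student)

codes = np.tanh(hash_logits)                                     # relax binary codes to (-1, 1)
quantization_loss = np.mean((np.abs(codes) - 1.0) ** 2)          # push codes toward +/- 1

total = kl_loss + 0.1 * quantization_loss                        # weighting is an arbitrary assumption
print(f"KL: {kl_loss:.4f}  quant: {quantization_loss:.4f}  total: {total:.4f}")
```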
1901.00889
Xing Di
Xing Di, He Zhang, Vishal M. Patel
Polarimetric Thermal to Visible Face Verification via Attribute Preserved Synthesis
This work has been accepted by the 9th IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS 2018)
null
10.1109/BTAS.2018.8698554
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thermal to visible face verification is a challenging problem due to the large domain discrepancy between the modalities. Existing approaches either attempt to synthesize visible faces from thermal faces or extract robust features from these modalities for cross-modal matching. In this paper, we take a different approach in which we make use of the attributes extracted from the visible image to synthesize the attribute-preserved visible image from the input thermal image for cross-modal matching. A pre-trained VGG-Face network is used to extract the attributes from the visible image. Then, a novel Attribute Preserved Generative Adversarial Network (AP-GAN) is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a deep network is used to extract features from the synthesized image and the input visible image for verification. Extensive experiments on the ARL Polarimetric face dataset show that the proposed method achieves significant improvements over the state-of-the-art methods.
[ { "version": "v1", "created": "Thu, 3 Jan 2019 19:38:33 GMT" } ]
2022-01-14T00:00:00
[ [ "Di", "Xing", "" ], [ "Zhang", "He", "" ], [ "Patel", "Vishal M.", "" ] ]
new_dataset
0.999143
2005.11425
Shenshen Chen
Shenshen Chen (1), Geng Li (1), Dennis Duan (1), Kerim Gokarslan (1), Bin Li (1), Qiao Xiang (1), Haitao Yu (2), Franck Le (3), Richard Yang (1), Ying Zhang (4) ((1) Yale University, (2) College of Electronics and Information Engineering, Tongji University, (3) Thomas J. Watson Research Center, (4) Facebook)
Carbide: Highly Reliable Networks Through Real-Time Multiple Control Plane Composition
12 pages + References + Appendices, 14 figures
null
null
YALEU/DCS/TR-1552
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Achieving highly reliable networks is essential for network operators to ensure proper packet delivery in the event of software errors or hardware failures. Networks must ensure reachability and routing correctness, such as subnet isolation and waypoint traversal. Existing work in network verification relies on centralized computation at the cost of fault tolerance, while other approaches either build an over-engineered, complex control plane, or compose multiple control planes without providing any guarantee on correctness. This paper presents Carbide, a novel system to achieve high reliability in networks through distributed verification and multiple control plane composition. The core of Carbide is a simple, generic, efficient distributed verification framework that transforms a generic network verification problem to a reachability verification problem on a directed acyclic graph (DAG), and solves the latter via an efficient distributed verification protocol (DV-protocol). Equipped with verification results, Carbide allows the systematic composition of multiple control planes and realization of operator-specified consistency. Carbide is fully implemented. Extensive experiments show that (1) Carbide reduces downtime by 43% over the most reliable individual underlying control plane, while enforcing correctness requirements on all traffic; and (2) by systematically decomposing computation to devices and pruning unnecessary messaging between devices during verification, Carbide scales to a production data center network.
[ { "version": "v1", "created": "Fri, 22 May 2020 23:42:21 GMT" }, { "version": "v2", "created": "Thu, 13 Jan 2022 11:56:21 GMT" } ]
2022-01-14T00:00:00
[ [ "Chen", "Shenshen", "" ], [ "Li", "Geng", "" ], [ "Duan", "Dennis", "" ], [ "Gokarslan", "Kerim", "" ], [ "Li", "Bin", "" ], [ "Xiang", "Qiao", "" ], [ "Yu", "Haitao", "" ], [ "Le", "Franck", "" ], [ "Yang", "Richard", "" ], [ "Zhang", "Ying", "" ] ]
new_dataset
0.99959
2104.14547
Anjana Deva Prasad
Anjana Deva Prasad, Aditya Balu, Harshil Shah, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy
NURBS-Diff: A Differentiable Programming Module for NURBS
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Boundary representations (B-reps) using Non-Uniform Rational B-splines (NURBS) are the de facto standard used in CAD, but their utility in deep learning-based approaches is not well researched. We propose a differentiable NURBS module to integrate NURBS representations of CAD models with deep learning methods. We mathematically define the derivatives of the NURBS curves or surfaces with respect to the input parameters (control points, weights, and the knot vector). These derivatives are used to define an approximate Jacobian used for performing the "backward" evaluation to train the deep learning models. We have implemented our NURBS module using GPU-accelerated algorithms and integrated it with PyTorch, a popular deep learning framework. We demonstrate the efficacy of our NURBS module in performing CAD operations such as curve or surface fitting and surface offsetting. Further, we show its utility in deep learning for unsupervised point cloud reconstruction and for enforcing analysis constraints. These examples show that our module performs better for certain deep learning frameworks and can be directly integrated with any deep-learning framework requiring NURBS.
[ { "version": "v1", "created": "Thu, 29 Apr 2021 17:56:01 GMT" }, { "version": "v2", "created": "Tue, 14 Sep 2021 17:30:26 GMT" }, { "version": "v3", "created": "Thu, 16 Sep 2021 06:21:40 GMT" }, { "version": "v4", "created": "Thu, 13 Jan 2022 15:15:01 GMT" } ]
2022-01-14T00:00:00
[ [ "Prasad", "Anjana Deva", "" ], [ "Balu", "Aditya", "" ], [ "Shah", "Harshil", "" ], [ "Sarkar", "Soumik", "" ], [ "Hegde", "Chinmay", "" ], [ "Krishnamurthy", "Adarsh", "" ] ]
new_dataset
0.97443
2106.13217
Jing Zhang
Mochu Xiang, Jing Zhang, Yunqiu Lv, Aixuan Li, Yiran Zhong, Yuchao Dai
Exploring Depth Contribution for Camouflaged Object Detection
The first work in RGB-D Camouflaged object detection (COD)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Camouflaged object detection (COD) aims to segment camouflaged objects hiding in the environment, which is challenging due to the similar appearance of camouflaged objects and their surroundings. Research in biology suggests depth can provide useful object localization cues for camouflaged object discovery. In this paper, we study the depth contribution for camouflaged object detection, where the depth maps are generated with existing monocular depth estimation (MDE) methods. Due to the domain gap between the MDE dataset and our COD dataset, the generated depth maps are not accurate enough to be directly used. We then introduce two solutions to prevent the noisy depth maps from dominating the training process. Firstly, we present an auxiliary depth estimation branch ("ADE"), aiming to regress the depth maps. We find that "ADE" is especially necessary for our "generated depth" scenario. Secondly, we introduce a multi-modal confidence-aware loss function via a generative adversarial network to weigh the contribution of depth for camouflaged object detection. Our extensive experiments on various camouflaged object detection datasets show that the existing "sensor depth" based RGB-D segmentation techniques work poorly with "generated depth", and that our proposed two solutions work cooperatively, achieving effective depth contribution exploration for camouflaged object detection.
[ { "version": "v1", "created": "Thu, 24 Jun 2021 17:51:31 GMT" }, { "version": "v2", "created": "Sat, 26 Jun 2021 03:38:01 GMT" }, { "version": "v3", "created": "Thu, 13 Jan 2022 06:05:31 GMT" } ]
2022-01-14T00:00:00
[ [ "Xiang", "Mochu", "" ], [ "Zhang", "Jing", "" ], [ "Lv", "Yunqiu", "" ], [ "Li", "Aixuan", "" ], [ "Zhong", "Yiran", "" ], [ "Dai", "Yuchao", "" ] ]
new_dataset
0.997348
2107.07402
Anirudh Gupta
Anirudh Gupta, Harveen Singh Chadha, Priyanshi Shah, Neeraj Chhimwal, Ankur Dhuriya, Rishabh Gaur, Vivek Raghavan
CLSRIL-23: Cross Lingual Speech Representations for Indic Languages
7 pages, 2 figures
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
We present CLSRIL-23, a self-supervised learning based audio pre-trained model which learns cross-lingual speech representations from raw audio across 23 Indic languages. It is built on top of wav2vec 2.0, which is trained by solving a contrastive task over masked latent speech representations and jointly learns the quantization of latents shared across all languages. We compare the language-wise loss during pretraining to study the effects of monolingual and multilingual pretraining. Performance on some downstream fine-tuning tasks for speech recognition is also compared, and our experiments show that multilingual pretraining outperforms monolingual training, both in terms of learning speech representations which encode the phonetic similarity of languages and in terms of performance on downstream tasks. A decrease of 5% is observed in WER and 9.5% in CER when a multilingual pretrained model is used for fine-tuning in Hindi. All the code and models are also open sourced. CLSRIL-23 is a model trained on $23$ languages and almost 10,000 hours of audio data to facilitate research in speech recognition for Indic languages. We hope that new state-of-the-art systems will be created using the self-supervised approach, especially for low-resource Indic languages.
[ { "version": "v1", "created": "Thu, 15 Jul 2021 15:42:43 GMT" }, { "version": "v2", "created": "Thu, 13 Jan 2022 06:58:05 GMT" } ]
2022-01-14T00:00:00
[ [ "Gupta", "Anirudh", "" ], [ "Chadha", "Harveen Singh", "" ], [ "Shah", "Priyanshi", "" ], [ "Chhimwal", "Neeraj", "" ], [ "Dhuriya", "Ankur", "" ], [ "Gaur", "Rishabh", "" ], [ "Raghavan", "Vivek", "" ] ]
new_dataset
0.999167
2110.05344
Hazel Murray
Hazel Murray and David Malone
Quantum multi-factor authentication
null
null
10.1007/978-3-030-93747-8_4
null
cs.CR quant-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present a quantum multi-factor authentication mechanism based on the hidden-matching quantum communication complexity problem. It offers step-up graded authentication for users via a quantum token. In this paper, we outline the protocol, demonstrate that it can be used in a largely classical setting, explain how it can be implemented in SASL, and discuss arising security features. We also offer a comparison between our mechanism and current state-of-the-art multi-factor authentication mechanisms.
[ { "version": "v1", "created": "Mon, 11 Oct 2021 15:12:39 GMT" } ]
2022-01-14T00:00:00
[ [ "Murray", "Hazel", "" ], [ "Malone", "David", "" ] ]
new_dataset
0.992526
2110.05650
Yusheng Wang
Yusheng Wang, Yidong Lou, Weiwei Song, Huan Yu and Zhiyong Tu
GM-Livox: An Integrated Framework for Large-Scale Map Construction with Multiple Non-repetitive Scanning LiDARs
null
null
10.1109/JSEN.2022.3142041
null
cs.RO
http://creativecommons.org/publicdomain/zero/1.0/
With the ability to provide direct and sufficiently accurate range measurements, light detection and ranging (LiDAR) is playing an essential role in localization and detection for autonomous vehicles. Since a single LiDAR intermittently suffers from hardware failures and performance degradation, we present a multi-LiDAR integration scheme in this article. Our framework tightly couples multiple non-repetitive scanning LiDARs with inertial, encoder, and global navigation satellite system (GNSS) measurements into pose estimation and simultaneous global map generation. Primarily, we formulate a precise synchronization strategy to integrate the isolated sensors, and the extracted feature points from separate LiDARs are merged into a single sweep. The fused scans are introduced to compute the scan-matching correspondences, which can be further refined by additional real-time kinematic (RTK) measurements. Based thereupon, we construct a factor graph along with the inertial preintegration result, estimated ground constraints, and RTK data. For the purpose of maintaining a restricted number of poses for estimation, we deploy a keyframe-based sliding-window optimization strategy in our system. The real-time performance is guaranteed with multi-threaded computation, and extensive experiments are conducted in challenging scenarios. Experimental results show that the utilization of multiple LiDARs boosts the system performance in both robustness and accuracy.
[ { "version": "v1", "created": "Mon, 11 Oct 2021 23:45:53 GMT" } ]
2022-01-14T00:00:00
[ [ "Wang", "Yusheng", "" ], [ "Lou", "Yidong", "" ], [ "Song", "Weiwei", "" ], [ "Yu", "Huan", "" ], [ "Tu", "Zhiyong", "" ] ]
new_dataset
0.993004
2112.02448
Alex Shonenkov
Alex Shonenkov, Daria Bakshandaeva, Denis Dimitrov, Aleksandr Nikolich
Emojich -- zero-shot emoji generation using Russian language: a technical report
5 pages, 4 figures and big figure at appendix, technical report
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
This technical report presents a text-to-image neural network "Emojich" that generates emojis using captions in the Russian language as a condition. We aim to keep the generalization ability of the pretrained big model ruDALL-E Malevich (XL), with 1.3B parameters, at the fine-tuning stage, while giving a special style to the generated images. We present some engineering methods, the code implementation, all hyper-parameters for reproducing the results, and a Telegram bot where everyone can create their own customized sets of stickers. Also, some newly generated emojis obtained by the "Emojich" model are demonstrated.
[ { "version": "v1", "created": "Sat, 4 Dec 2021 23:37:32 GMT" }, { "version": "v2", "created": "Wed, 12 Jan 2022 20:15:14 GMT" } ]
2022-01-14T00:00:00
[ [ "Shonenkov", "Alex", "" ], [ "Bakshandaeva", "Daria", "" ], [ "Dimitrov", "Denis", "" ], [ "Nikolich", "Aleksandr", "" ] ]
new_dataset
0.990135
2201.02121
Ivor van der Hoog
Anne Driemel, Ivor van der Hoog, Eva Rotenberg
On the Discrete Fr\'echet Distance in a Graph
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Fr\'{e}chet distance is a well-studied similarity measure between curves that is widely used throughout computer science. Motivated by applications where curves stem from paths and walks on an underlying graph (such as a road network), we define and study the Fr\'{e}chet distance for paths and walks on graphs. When provided with a distance oracle of $G$ with $O(1)$ query time, the classical quadratic-time dynamic program can compute the Fr\'{e}chet distance between two walks $P$ and $Q$ in a graph $G$ in $O(|P| \cdot |Q|)$ time. We show that there are situations where the graph structure helps with computing the Fr\'{e}chet distance: when the graph $G$ is planar, we apply existing (approximate) distance oracles to compute a $(1+\varepsilon)$-approximation of the Fr\'{e}chet distance between any shortest path $P$ and any walk $Q$ in $O(|G| \log |G| / \sqrt{\varepsilon} + |P| + \frac{|Q|}{\varepsilon } )$ time. We generalise this result to near-shortest paths, i.e. $\kappa$-straight paths, as we show how to compute a $(1+\varepsilon)$-approximation between a $\kappa$-straight path $P$ and any walk $Q$ in $O(|G| \log |G| / \sqrt{\varepsilon} + |P| + \frac{\kappa|Q|}{\varepsilon } )$ time. Our algorithmic results hold for both the strong and the weak discrete Fr\'{e}chet distance over the shortest path metric in $G$. Finally, we show that additional assumptions on the input, such as our assumption on path straightness, are indeed necessary to obtain truly subquadratic running time. We provide a conditional lower bound showing that the Fr\'{e}chet distance, or even its $1.01$-approximation, between arbitrary \emph{paths} in a weighted planar graph cannot be computed in $O((|P|\cdot|Q|)^{1-\delta})$ time for any $\delta > 0$ unless the Orthogonal Vector Hypothesis fails. For walks, this lower bound holds even when $G$ is planar, unit-weight and has $O(1)$ vertices.
[ { "version": "v1", "created": "Thu, 6 Jan 2022 16:04:51 GMT" }, { "version": "v2", "created": "Thu, 13 Jan 2022 12:29:01 GMT" } ]
2022-01-14T00:00:00
[ [ "Driemel", "Anne", "" ], [ "van der Hoog", "Ivor", "" ], [ "Rotenberg", "Eva", "" ] ]
new_dataset
0.98716
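The "classical quadratic-time dynamic program" referenced in the abstract above is textbook material and can be sketched as follows for the strong discrete Fréchet distance, with the graph distance oracle abstracted as a callable. This is not the paper's new approximation algorithm; the Euclidean example distance is only a stand-in for the shortest-path metric on G.

```python
# Classical O(|P| * |Q|) dynamic program for the strong discrete Frechet distance.
# `dist(u, v)` stands in for the O(1) distance oracle on the underlying graph.
def discrete_frechet(P, Q, dist):
    n, m = len(P), len(Q)
    ca = [[0.0] * m for _ in range(n)]          # ca[i][j]: coupling cost up to P[i], Q[j]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), d)
    return ca[n - 1][m - 1]

# Example with Euclidean points standing in for graph vertices (assumed metric):
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)], euclid))
```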
2201.04649
Olga Doronina
Olga A. Doronina, Zachary J. Grey, Andrew Glaws
Grassmannian Shape Representations for Aerodynamic Applications
5 pages, 4 figures, submitted to the AI for Design and Manufacturing (ADAM) workshop of the AAAI-2022 conference
null
null
null
cs.GR math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Airfoil shape design is a classical problem in engineering and manufacturing. Our motivation is to combine principled physics-based considerations for the shape design problem with modern computational techniques informed by a data-driven approach. Traditional analyses of airfoil shapes emphasize a flow-based sensitivity to deformations which can be represented generally by affine transformations (rotation, scaling, shearing, translation). We present a novel representation of shapes which decouples affine-style deformations from a rich set of data-driven deformations over a submanifold of the Grassmannian. The Grassmannian representation, informed by a database of physically relevant airfoils, offers (i) a rich set of novel 2D airfoil deformations not previously captured in the data, (ii) improved low-dimensional parameter domain for inferential statistics informing design/manufacturing, and (iii) consistent 3D blade representation and perturbation over a sequence of nominal shapes.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 19:01:01 GMT" } ]
2022-01-14T00:00:00
[ [ "Doronina", "Olga A.", "" ], [ "Grey", "Zachary J.", "" ], [ "Glaws", "Andrew", "" ] ]
new_dataset
0.959227
2201.04742
Aryaman Pandya
Paul Schmitt, Nicholas Britten, JiHyun Jeong, Amelia Coffey, Kevin Clark, Shweta Sunil Kothawade, Elena Corina Grigore, Adam Khaw, Christopher Konopka, Linh Pham, Kim Ryan, Christopher Schmitt, Aryaman Pandya, Emilio Frazzoli
nuReality: A VR environment for research of pedestrian and autonomous vehicle interactions
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present nuReality, a virtual reality (VR) environment designed to test the efficacy of vehicular behaviors to communicate intent during interactions between autonomous vehicles (AVs) and pedestrians at urban intersections. In this project we focus on expressive behaviors as a means for pedestrians to readily recognize the underlying intent of the AV's movements. VR is an ideal tool to use to test these situations as it can be immersive and place subjects into these potentially dangerous scenarios without risk. nuReality provides a novel and immersive virtual reality environment that includes numerous visual details (road and building texturing, parked cars, swaying tree limbs) as well as auditory details (birds chirping, cars honking in the distance, people talking). In these files we present the nuReality environment, its 10 unique vehicle behavior scenarios, and the Unreal Engine and Autodesk Maya source files for each scenario. The files are publicly released as open source at www.nuReality.org, to support the academic community studying the critical AV-pedestrian interaction.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 23:54:09 GMT" } ]
2022-01-14T00:00:00
[ [ "Schmitt", "Paul", "" ], [ "Britten", "Nicholas", "" ], [ "Jeong", "JiHyun", "" ], [ "Coffey", "Amelia", "" ], [ "Clark", "Kevin", "" ], [ "Kothawade", "Shweta Sunil", "" ], [ "Grigore", "Elena Corina", "" ], [ "Khaw", "Adam", "" ], [ "Konopka", "Christopher", "" ], [ "Pham", "Linh", "" ], [ "Ryan", "Kim", "" ], [ "Schmitt", "Christopher", "" ], [ "Pandya", "Aryaman", "" ], [ "Frazzoli", "Emilio", "" ] ]
new_dataset
0.993648
2201.04851
Yuying Ge
Yuying Ge, Yibing Song, Ruimao Zhang and Ping Luo
MetaDance: Few-shot Dancing Video Retargeting via Temporal-aware Meta-learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dancing video retargeting aims to synthesize a video that transfers the dance movements from a source video to a target person. Previous works needed to collect a several-minute-long video of the target person, containing thousands of frames, to train a personalized model. However, the trained model can only generate videos of the same person. To address these limitations, recent work tackled few-shot dancing video retargeting, which learns to synthesize videos of unseen persons by leveraging a few frames of them. In practice, given a few frames of a person, these works simply regarded them as a batch of individual images without temporal correlations, thus generating temporally incoherent dancing videos of low visual quality. In this work, we model a few frames of a person as a series of dancing moves, where each move contains two consecutive frames, to extract the appearance patterns and the temporal dynamics of this person. We propose MetaDance, which utilizes temporal-aware meta-learning to optimize the initialization of a model through the synthesis of dancing moves, such that the meta-trained model can be efficiently tuned towards enhanced visual quality and strengthened temporal stability for unseen persons with a few frames. Extensive evaluations show the clear superiority of our method.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 09:34:20 GMT" } ]
2022-01-14T00:00:00
[ [ "Ge", "Yuying", "" ], [ "Song", "Yibing", "" ], [ "Zhang", "Ruimao", "" ], [ "Luo", "Ping", "" ] ]
new_dataset
0.984027
2201.05005
Franca Delmastro
Franca Delmastro, Valerio Arnaboldi, Marco Conti
People-centric computing and communications in Smart Cities
null
IEEE Communications Magazine ( Volume: 54, Issue: 7, July 2016)
10.1109/MCOM.2016.7509389
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The extreme pervasive nature of mobile technologies, together with users' need to continuously interact with their personal devices and to be always connected, strengthens the user-centric approach to designing and developing new communication and computing solutions. Nowadays users not only represent the final utilizers of the technology, but they actively contribute to its evolution by assuming different roles: they act as humans, by sharing content and experiences through social networks, and as virtual sensors, by moving freely in the environment with their sensing devices. Smart cities represent an important reference scenario for the active participation of users through mobile technologies. This scenario involves multiple application domains and defines different levels of user engagement. Participatory sensing, opportunistic sensing and Mobile Social Networks currently represent some of the most promising people-centric paradigms. In addition, their integration can further improve the user involvement through new services and applications. In this paper we present the SmartCitizen app, an MSN application designed in the framework of a smart city project to stimulate the active participation of citizens in generating and sharing useful content related to the quality of life in their city. The app has been developed on top of a context- and social-aware middleware platform (CAMEO) able to integrate the main features of people-centric computing paradigms, lightening the app developer's effort. Existing middleware platforms generally focus on one single people-centric paradigm, exporting a limited set of features to mobile applications. CAMEO overcomes these limitations. The experimental results shown in this paper can also serve as technical guidelines for the development of heterogeneous people-centric mobile applications, embracing different application domains.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 14:38:21 GMT" } ]
2022-01-14T00:00:00
[ [ "Delmastro", "Franca", "" ], [ "Arnaboldi", "valerio", "" ], [ "Conti", "Marco", "" ] ]
new_dataset
0.99309
2201.05006
Brice Minaud
Brice Minaud and Michael Reichle
Dynamic Local Searchable Symmetric Encryption
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we tackle for the first time the problem of dynamic memory-efficient Searchable Symmetric Encryption (SSE). In the term "memory-efficient" SSE, we encompass both the goals of local SSE, and page-efficient SSE. The centerpiece of our approach is a novel connection between those two goals. We introduce a map, called the Generic Local Transform, which takes as input a page-efficient SSE scheme with certain special features, and outputs an SSE scheme with strong locality properties. We obtain several results. (1) First, for page-efficient SSE, we build a dynamic scheme with page efficiency $O(\log \log N)$ and storage efficiency $O(1)$, called LayeredSSE. The main technical innovation behind LayeredSSE is a new weighted extension of the two-choice allocation process, of independent interest. (2) Second, we introduce the Generic Local Transform, and combine it with LayeredSSE to build a dynamic SSE scheme with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $O(\log\log N)$, under the condition that the longest list is of size $O(N^{1-1/\log \log \lambda})$. This matches, in every respect, the purely static construction of Asharov et al. presented at STOC 2016: dynamism comes at no extra cost. (3) Finally, by applying the Generic Local Transform to a variant of the Tethys scheme by Bossuat et al. from Crypto 2021, we build an unconditional static SSE with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $O(\log^\varepsilon N)$, for an arbitrarily small constant $\varepsilon > 0$. To our knowledge, this is the construction that comes closest to the lower bound presented by Cash and Tessaro at Eurocrypt 2014.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 14:38:40 GMT" } ]
2022-01-14T00:00:00
[ [ "Minaud", "Brice", "" ], [ "Reichle", "Michael", "" ] ]
new_dataset
0.997985
2201.05066
Tarik Taleb Dr.
Junseok Kim, Seongwon Kim, T. Taleb, and Sunghyun Choi
RAPID: Contention Resolution-based Random Access using Context ID for IoT
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-nd/4.0/
With the increasing number of Internet of Things (IoT) devices, Machine Type Communication (MTC) has become an important use case of the Fifth Generation (5G) communication systems. Since MTC devices are mostly disconnected from the Base Station (BS) for power saving, a random access procedure is required for devices to transmit data. If many devices attempt random access simultaneously, preamble collisions occur, causing increased latency. In an environment where delay-sensitive and delay-tolerant devices coexist, the contention-based random access procedure cannot satisfy the latency requirements of delay-sensitive devices. Therefore, we propose RAPID, a novel random access procedure, which is completed through two message exchanges for the delay-sensitive devices. We also develop Access Pattern Analyzer (APA), which estimates traffic characteristics of MTC devices. When UEs performing RAPID and UEs performing contention-based random access coexist, it is important to determine the number of preambles allocated to RAPID in order to reduce the random access load. Thus, we analyze the random access load using a Markov chain model to obtain the optimal number of preambles for RAPID. Simulation results show RAPID achieves 99.999% reliability with 80.8% shorter uplink latency, and also decreases the random access load by 30.5% compared with state-of-the-art techniques.
[ { "version": "v1", "created": "Wed, 5 Jan 2022 13:29:09 GMT" } ]
2022-01-14T00:00:00
[ [ "Kim", "Junseok", "" ], [ "Kim", "Seongwon", "" ], [ "Taleb", "T.", "" ], [ "Choi", "Sunghyun", "" ] ]
new_dataset
0.961944
2201.05075
Mikhail Volkov
Evgeniya A. Bondar and David Casas and Mikhail V. Volkov
Completely reachable automata: an interplay between automata, graphs, and trees
29 pages, 16 figures
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A deterministic finite automaton in which every non-empty set of states occurs as the image of the whole state set under the action of a suitable input word is called completely reachable. We characterize such automata in terms of graphs and trees.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 16:58:38 GMT" } ]
2022-01-14T00:00:00
[ [ "Bondar", "Evgeniya A.", "" ], [ "Casas", "David", "" ], [ "Volkov", "Mikhail V.", "" ] ]
new_dataset
0.99889
2201.05120
Carlos Rodriguez-Pardo
Carlos Rodriguez-Pardo and Elena Garces
SeamlessGAN: Self-Supervised Synthesis of Tileable Texture Maps
12 pages. To be published in Transactions on Visualizations and Computer Graphics. Project website: http://carlosrodriguezpardo.es/projects/SeamlessGAN/
null
10.1109/TVCG.2022.3143615
null
cs.CV cs.GR cs.LG cs.MM
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present SeamlessGAN, a method capable of automatically generating tileable texture maps from a single input exemplar. In contrast to most existing methods, focused solely on solving the synthesis problem, our work tackles both problems, synthesis and tileability, simultaneously. Our key idea is to realize that tiling a latent space within a generative network trained using adversarial expansion techniques produces outputs with continuity at the seam intersection that can then be turned into tileable images by cropping the central area. Since not every value of the latent space is valid to produce high-quality outputs, we leverage the discriminator as a perceptual error metric capable of identifying artifact-free textures during a sampling process. Further, in contrast to previous work on deep texture synthesis, our model is designed and optimized to work with multi-layered texture representations, enabling textures composed of multiple maps such as albedo, normals, etc. We extensively test our design choices for the network architecture, loss function and sampling parameters. We show qualitatively and quantitatively that our approach outperforms previous methods and works for textures of different types.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 18:24:26 GMT" } ]
2022-01-14T00:00:00
[ [ "Rodriguez-Pardo", "Carlos", "" ], [ "Garces", "Elena", "" ] ]
new_dataset
0.988817
1902.02598
Matilda Rhode
Matilda Rhode, Pete Burnap, Adam Wedgbury
Real-time malware process detection and automated process killing
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Perimeter-based detection is no longer sufficient for mitigating the threat posed by malicious software. This is evident as antivirus (AV) products are replaced by endpoint detection and response (EDR) products, the latter allowing visibility into live machine activity rather than relying on the AV to filter out malicious artefacts. This paper argues that detecting malware in real-time on an endpoint necessitates an automated response due to the rapid and destructive nature of some malware. The proposed model uses statistical filtering on top of a machine learning dynamic behavioural malware detection model in order to detect individual malicious processes on the fly and kill those which are deemed malicious. In an experiment to measure the tangible impact of this system, we find that fast-acting ransomware is prevented from corrupting 92% of files with a false positive rate of 14%. Whilst the false-positive rate currently remains too high to adopt this approach as-is, these initial results demonstrate the need for a detection model which is able to act within seconds of the malware execution beginning; a timescale that has not been addressed by previous work.
[ { "version": "v1", "created": "Thu, 7 Feb 2019 13:01:59 GMT" }, { "version": "v2", "created": "Tue, 1 Oct 2019 10:07:05 GMT" }, { "version": "v3", "created": "Wed, 12 Jan 2022 08:29:24 GMT" } ]
2022-01-13T00:00:00
[ [ "Rhode", "Matilda", "" ], [ "Burnap", "Pete", "" ], [ "Wedgbury", "Adam", "" ] ]
new_dataset
0.957049
2004.10063
Maximilian Kloock
Maximilian Kloock, Patrick Scheffe, Janis Maczijewski, Alexandru Kampmann, Armin Mokhtarian, Stefan Kowalewski and Bassam Alrifaee
Cyber-Physical Mobility Lab: An Open-Source Platform for Networked and Autonomous Vehicles
This work has been presented on ECC21
null
null
null
cs.MA cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces our Cyber-Physical Mobility Lab (CPM Lab). It is an open-source development environment for networked and autonomous vehicles with a focus on networked decision-making, trajectory planning, and control. The CPM Lab hosts 20 physical model-scale vehicles ({\mu}Cars), which we can seamlessly extend with an unlimited number of simulated vehicles. The code and construction plans are publicly available to enable rebuilding the CPM Lab. Our four-layered architecture enables the seamless use of the same software in simulations and in experiments without any further adaptations. A Data Distribution Service (DDS) based middleware allows adapting the number of vehicles during experiments in a seamless manner. The middleware is also responsible for synchronizing all entities following a logical execution time approach to achieve determinism and reproducibility of experiments. This approach makes the CPM Lab a unique platform for rapid functional prototyping of networked decision-making algorithms. The CPM Lab allows researchers as well as students from different disciplines to see their ideas developing into reality. We demonstrate its capabilities using two example experiments. We are working on remote access to the CPM Lab via a web interface.
[ { "version": "v1", "created": "Tue, 21 Apr 2020 14:54:30 GMT" }, { "version": "v2", "created": "Mon, 19 Apr 2021 09:56:45 GMT" }, { "version": "v3", "created": "Tue, 20 Apr 2021 07:08:52 GMT" }, { "version": "v4", "created": "Wed, 12 Jan 2022 10:37:05 GMT" } ]
2022-01-13T00:00:00
[ [ "Kloock", "Maximilian", "" ], [ "Scheffe", "Patrick", "" ], [ "Maczijewski", "Janis", "" ], [ "Kampmann", "Alexandru", "" ], [ "Mokhtarian", "Armin", "" ], [ "Kowalewski", "Stefan", "" ], [ "Alrifaee", "Bassam", "" ] ]
new_dataset
0.985207
2102.03141
Tobias Hinz
Tobias Hinz and Matthew Fisher and Oliver Wang and Eli Shechtman and Stefan Wermter
CharacterGAN: Few-Shot Keypoint Character Animation and Reposing
Best Paper WACV 2022. Code available at https://github.com/tohinz/CharacterGAN
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
We introduce CharacterGAN, a generative model that can be trained on only a few samples (8 - 15) of a given character. Our model generates novel poses based on keypoint locations, which can be modified in real time while providing interactive feedback, allowing for intuitive reposing and animation. Since we only have very limited training samples, one of the key challenges lies in how to address (dis)occlusions, e.g. when a hand moves behind or in front of a body. To address this, we introduce a novel layering approach which explicitly splits the input keypoints into different layers which are processed independently. These layers represent different parts of the character and provide a strong implicit bias that helps to obtain realistic results even with strong (dis)occlusions. To combine the features of individual layers we use an adaptive scaling approach conditioned on all keypoints. Finally, we introduce a mask connectivity constraint to reduce distortion artifacts that occur with extreme out-of-distribution poses at test time. We show that our approach outperforms recent baselines and creates realistic animations for diverse characters. We also show that our model can handle discrete state changes, for example a profile facing left or right, that the different layers do indeed learn features specific for the respective keypoints in those layers, and that our model scales to larger datasets when more data is available.
[ { "version": "v1", "created": "Fri, 5 Feb 2021 12:38:15 GMT" }, { "version": "v2", "created": "Thu, 25 Mar 2021 11:12:28 GMT" }, { "version": "v3", "created": "Wed, 12 Jan 2022 18:33:49 GMT" } ]
2022-01-13T00:00:00
[ [ "Hinz", "Tobias", "" ], [ "Fisher", "Matthew", "" ], [ "Wang", "Oliver", "" ], [ "Shechtman", "Eli", "" ], [ "Wermter", "Stefan", "" ] ]
new_dataset
0.991084
2105.00374
Kumar Abhishek
Mengliu Zhao, Jeremy Kawahara, Kumar Abhishek, Sajjad Shamanian, Ghassan Hamarneh
Skin3D: Detection and Longitudinal Tracking of Pigmented Skin Lesions in 3D Total-Body Textured Meshes
11 pages, 8 figures; Zhao and Kawahara: joint first authors; Published in Medical Image Analysis (2021)
null
10.1016/j.media.2021.102329
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present an automated approach to detect and longitudinally track skin lesions on 3D total-body skin surface scans. The acquired 3D mesh of the subject is unwrapped to a 2D texture image, where a trained object detection model, Faster R-CNN, localizes the lesions within the 2D domain. These detected skin lesions are mapped back to the 3D surface of the subject and, for subjects imaged multiple times, we construct a graph-based matching procedure to longitudinally track lesions that considers the anatomical correspondences between pairs of meshes, the geodesic proximity of corresponding lesions, and the inter-lesion geodesic distances. We evaluated the proposed approach using 3DBodyTex, a publicly available dataset composed of 3D scans imaging the coloured skin (textured meshes) of 200 human subjects. We manually annotated locations that appeared to the human eye to contain a pigmented skin lesion as well as tracked a subset of lesions occurring on the same subject imaged in different poses. Our results, when compared to three human annotators, suggest that the trained Faster R-CNN detects lesions at a similar performance level as the human annotators. Our lesion tracking algorithm achieves an average matching accuracy of 88% on a set of detected corresponding pairs of prominent lesions of subjects imaged in different poses, and an average longitudinal accuracy of 71% when encompassing additional errors due to lesion detection. As there currently is no other large-scale publicly available dataset of 3D total-body skin lesions, we publicly release over 25,000 3DBodyTex manual annotations, which we hope will further research on total-body skin lesion analysis.
[ { "version": "v1", "created": "Sun, 2 May 2021 01:52:28 GMT" }, { "version": "v2", "created": "Wed, 12 Jan 2022 06:04:44 GMT" } ]
2022-01-13T00:00:00
[ [ "Zhao", "Mengliu", "" ], [ "Kawahara", "Jeremy", "" ], [ "Abhishek", "Kumar", "" ], [ "Shamanian", "Sajjad", "" ], [ "Hamarneh", "Ghassan", "" ] ]
new_dataset
0.998727
2108.13359
Ilya Vorobyev
Ilya Vorobyev
Fast Decoding of Union-free Codes
null
null
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Union-free codes and disjunctive codes are two combinatorial structures, which are used in nonadaptive group testing to find a set of $d$ defective elements among $n$ samples by carrying out the minimal number of tests $t$. It is known that union-free codes have a larger rate, whereas disjunctive codes provide a more efficient decoding algorithm. In this paper we introduce a new family of codes for nonadaptive group testing with fast decoding. The rate of these codes is larger than the rate of disjunctive codes, while the decoding algorithm has the same complexity. In addition, we derive a lower bound on the rate of new codes for the case of $d=2$ defectives, which is significantly better than the bound for disjunctive codes and almost as good as the bound for union-free codes.
[ { "version": "v1", "created": "Mon, 30 Aug 2021 16:31:39 GMT" }, { "version": "v2", "created": "Wed, 12 Jan 2022 14:15:40 GMT" } ]
2022-01-13T00:00:00
[ [ "Vorobyev", "Ilya", "" ] ]
new_dataset
0.973131
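For background on the group-testing setting above, the sketch below implements the standard naive decoder for a binary testing matrix: every item that appears in at least one negative test is ruled out, and the remaining items form the candidate defective set. This is illustrative background, not the paper's new fast decoding algorithm; the matrix density and toy sizes are assumptions.

```python
# Naive "rule-out" decoder for nonadaptive group testing (illustrative; not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(1)
n, t, d = 40, 15, 2                            # items, tests, defectives (toy sizes, assumed)
M = (rng.random((t, n)) < 0.2).astype(int)     # random t x n binary testing matrix (assumed design)
defectives = rng.choice(n, size=d, replace=False)

x = np.zeros(n, dtype=int)
x[defectives] = 1
outcomes = (M @ x > 0).astype(int)             # a test is positive iff it contains a defective

# An item is ruled out if it occurs in at least one negative test.
ruled_out = M[outcomes == 0].sum(axis=0) > 0
candidates = np.flatnonzero(~ruled_out)

print("true defectives:", sorted(defectives.tolist()))
print("candidate set  :", candidates.tolist())
```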
2109.01188
Lillian Pentecost
Lillian Pentecost, Alexander Hankin, Marco Donato, Mark Hempstead, Gu-Yeon Wei, and David Brooks
NVMExplorer: A Framework for Cross-Stack Comparisons of Embedded Non-Volatile Memories
18 pages, 14 figures, 3 tables
null
null
null
cs.ET cs.AR
http://creativecommons.org/licenses/by/4.0/
Repeated off-chip memory accesses to DRAM drive up operating power for data-intensive applications, and SRAM technology scaling and leakage power limit the efficiency of embedded memories. Future on-chip storage will need higher density and energy efficiency, and the actively expanding field of emerging, embeddable non-volatile memory (eNVM) technologies is providing many potential candidates to satisfy this need. Each technology proposal presents distinct trade-offs in terms of density, read, write, and reliability characteristics, and we present a comprehensive framework for navigating and quantifying these design trade-offs alongside realistic system constraints and application-level impacts. This work evaluates eNVM-based storage for a range of application and system contexts including machine learning on the edge, graph analytics, and general purpose cache hierarchy, in addition to describing a freely available (http://nvmexplorer.seas.harvard.edu/) set of tools for application experts, system designers, and device experts to better understand, compare, and quantify the next generation of embedded memory solutions.
[ { "version": "v1", "created": "Thu, 2 Sep 2021 19:36:25 GMT" }, { "version": "v2", "created": "Wed, 12 Jan 2022 00:04:25 GMT" } ]
2022-01-13T00:00:00
[ [ "Pentecost", "Lillian", "" ], [ "Hankin", "Alexander", "" ], [ "Donato", "Marco", "" ], [ "Hempstead", "Mark", "" ], [ "Wei", "Gu-Yeon", "" ], [ "Brooks", "David", "" ] ]
new_dataset
0.997899
2109.13899
Jeremiah Johnson
Jeremiah W. Johnson, Swathi Hari, Donald Hampton, Hyunju K. Connor, Amy Keesee
A Contrastive Learning Approach to Auroral Identification and Classification
6 pages, 5 figures, 1 table
Proceedings of the 20th IEEE International Conference on Machine Learning and Applications, Dec. 2021
10.1109/ICMLA52953.2021.00128
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised learning algorithms are beginning to achieve accuracies comparable to their supervised counterparts on benchmark computer vision tasks, but their utility for practical applications has not yet been demonstrated. In this work, we present a novel application of unsupervised learning to the task of auroral image classification. Specifically, we modify and adapt the Simple framework for Contrastive Learning of Representations (SimCLR) algorithm to learn representations of auroral images in a recently released auroral image dataset constructed using image data from Time History of Events and Macroscale Interactions during Substorms (THEMIS) all-sky imagers. We demonstrate that (a) simple linear classifiers fit to the learned representations of the images achieve state-of-the-art classification performance, improving the classification accuracy by almost 10 percentage points over the current benchmark; and (b) the learned representations naturally cluster into more clusters than the manually assigned categories, suggesting that existing categorizations are overly coarse and may obscure important connections between auroral types, near-earth solar wind conditions, and geomagnetic disturbances at the earth's surface. Moreover, our model is much lighter than the previous benchmark on this dataset, requiring fewer than 25\% of the number of parameters. Our approach exceeds an established threshold for operational purposes, demonstrating readiness for deployment and utilization.
[ { "version": "v1", "created": "Tue, 28 Sep 2021 17:51:25 GMT" }, { "version": "v2", "created": "Wed, 29 Sep 2021 02:08:10 GMT" } ]
2022-01-13T00:00:00
[ [ "Johnson", "Jeremiah W.", "" ], [ "Hari", "Swathi", "" ], [ "Hampton", "Donald", "" ], [ "Connor", "Hyunju K.", "" ], [ "Keesee", "Amy", "" ] ]
new_dataset
0.978067
2110.06166
Shorya Sharma Mr.
Shorya Sharma
Game Theory for Adversarial Attacks and Defenses
With the agreement of my coauthors, I would like to withdraw the manuscript "Game Theory for Adversarial Attacks and Defenses". Some experimental procedures were not included in the manuscript, which makes some of the important claims not meaningful
null
null
null
cs.LG cs.CR cs.GT
http://creativecommons.org/licenses/by/4.0/
Adversarial attacks can generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset, which leads to even state-of-the-art deep neural networks outputting incorrect answers with high confidence. Hence, adversarial defense techniques have been developed to improve the security and robustness of the models and prevent them from being attacked. Gradually, a game-like competition between attackers and defenders has formed, in which both players attempt to play their best strategies against each other while maximizing their own payoffs. To solve the game, each player chooses an optimal strategy against the opponent based on a prediction of the opponent's strategy choice. In this work, we take the defensive side and apply game-theoretic approaches to defending against attacks. We use two randomization methods, random initialization and stochastic activation pruning, to create diversity among networks. Furthermore, we use one denoising technique, super resolution, to improve models' robustness by preprocessing images before attacks. Our experimental results indicate that these three methods can effectively improve the robustness of deep neural networks.
[ { "version": "v1", "created": "Fri, 8 Oct 2021 07:38:33 GMT" }, { "version": "v2", "created": "Wed, 13 Oct 2021 04:49:37 GMT" }, { "version": "v3", "created": "Wed, 12 Jan 2022 14:04:54 GMT" } ]
2022-01-13T00:00:00
[ [ "Sharma", "Shorya", "" ] ]
new_dataset
0.956614
2111.08897
Afshin Dehghan
Gilad Baruch, Zhuoyuan Chen, Afshin Dehghan, Tal Dimry, Yuri Feigin, Peter Fu, Thomas Gebauer, Brandon Joffe, Daniel Kurz, Arik Schwartz, Elad Shulman
ARKitScenes: A Diverse Real-World Dataset For 3D Indoor Scene Understanding Using Mobile RGB-D Data
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Scene understanding is an active research area. Commercial depth sensors, such as Kinect, have enabled the release of several RGB-D datasets over the past few years, which spawned novel methods in 3D scene understanding. More recently, with the launch of the LiDAR sensor in Apple's iPads and iPhones, high quality RGB-D data is accessible to millions of people on a device they commonly use. This opens a whole new era in scene understanding for the Computer Vision community as well as app developers. The fundamental research in scene understanding together with the advances in machine learning can now impact people's everyday experiences. However, transforming these scene understanding methods to real-world experiences requires additional innovation and development. In this paper we introduce ARKitScenes. It is not only the first RGB-D dataset that is captured with a now widely available depth sensor, but to the best of our knowledge, it is also the largest indoor scene understanding dataset released. In addition to the raw and processed data from the mobile device, ARKitScenes includes high resolution depth maps captured using a stationary laser scanner, as well as manually labeled 3D oriented bounding boxes for a large taxonomy of furniture. We further analyze the usefulness of the data for two downstream tasks: 3D object detection and color-guided depth upsampling. We demonstrate that our dataset can help push the boundaries of existing state-of-the-art methods and it introduces new challenges that better represent real-world scenarios.
[ { "version": "v1", "created": "Wed, 17 Nov 2021 04:27:01 GMT" }, { "version": "v2", "created": "Fri, 31 Dec 2021 18:45:00 GMT" }, { "version": "v3", "created": "Wed, 12 Jan 2022 08:19:29 GMT" } ]
2022-01-13T00:00:00
[ [ "Baruch", "Gilad", "" ], [ "Chen", "Zhuoyuan", "" ], [ "Dehghan", "Afshin", "" ], [ "Dimry", "Tal", "" ], [ "Feigin", "Yuri", "" ], [ "Fu", "Peter", "" ], [ "Gebauer", "Thomas", "" ], [ "Joffe", "Brandon", "" ], [ "Kurz", "Daniel", "" ], [ "Schwartz", "Arik", "" ], [ "Shulman", "Elad", "" ] ]
new_dataset
0.999826
2112.11953
Xiao Xu
Xiao Xu, Libo Qin, Kaiji Chen, Guoxing Wu, Linlin Li, Wanxiang Che
Text is no more Enough! A Benchmark for Profile-based Spoken Language Understanding
Accepted by AAAI 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current research on spoken language understanding (SLU) is largely limited to a simple setting: plain text-based SLU that takes the user utterance as input and generates its corresponding semantic frames (e.g., intent and slots). Unfortunately, such a simple setting may fail to work in complex real-world scenarios when an utterance is semantically ambiguous, which text-based SLU models cannot resolve. In this paper, we first introduce a new and important task, Profile-based Spoken Language Understanding (ProSLU), which requires a model to rely not only on the plain text but also on supporting profile information to predict the correct intents and slots. To this end, we further introduce a large-scale human-annotated Chinese dataset with over 5K utterances and their corresponding supporting profile information (Knowledge Graph (KG), User Profile (UP), Context Awareness (CA)). In addition, we evaluate several state-of-the-art baseline models and explore a multi-level knowledge adapter to effectively incorporate profile information. Experimental results reveal that all existing text-based SLU models fail to work when the utterances are semantically ambiguous, and our proposed framework can effectively fuse the supporting information for sentence-level intent detection and token-level slot filling. Finally, we summarize key challenges and suggest directions for future work, which we hope will facilitate further research.
[ { "version": "v1", "created": "Wed, 22 Dec 2021 15:22:17 GMT" }, { "version": "v2", "created": "Thu, 6 Jan 2022 12:26:57 GMT" }, { "version": "v3", "created": "Wed, 12 Jan 2022 15:18:17 GMT" } ]
2022-01-13T00:00:00
[ [ "Xu", "Xiao", "" ], [ "Qin", "Libo", "" ], [ "Chen", "Kaiji", "" ], [ "Wu", "Guoxing", "" ], [ "Li", "Linlin", "" ], [ "Che", "Wanxiang", "" ] ]
new_dataset
0.985728
2201.03817
Tianlang He
Tianlang He, Jiajie Tan, Weipeng Zhuo, Maximilian Printz, S.-H. Gary Chan
Tackling Multipath and Biased Training Data for IMU-Assisted BLE Proximity Detection
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Proximity detection determines whether an IoT receiver is within a certain distance of a signal transmitter. Due to its low cost and high popularity, Bluetooth low energy (BLE) has been used to detect proximity based on the received signal strength indicator (RSSI). To address the fact that RSSI can be markedly influenced by device carriage states, previous works have incorporated RSSI with inertial measurement unit (IMU) data using deep learning. However, they have not sufficiently accounted for the impact of multipath. Furthermore, due to the special setup, the IMU data collected in the training process may be biased, which hampers the system's robustness and generalizability. This issue has not been studied before. We propose PRID, an IMU-assisted BLE proximity detection approach robust against RSSI fluctuation and IMU data bias. PRID histogramizes RSSI to extract multipath features and uses carriage state regularization to mitigate overfitting due to IMU data bias. We further propose PRID-lite based on a binarized neural network to substantially cut memory requirements for resource-constrained devices. We have conducted extensive experiments under different multipath environments and data bias levels, as well as on a crowdsourced dataset. Our results show that PRID significantly reduces false detection cases compared with existing approaches (by over 50%). PRID-lite further reduces the PRID model size by over 90% and extends battery life by 60%, with a minor compromise in accuracy (7%).
[ { "version": "v1", "created": "Tue, 11 Jan 2022 07:46:20 GMT" }, { "version": "v2", "created": "Wed, 12 Jan 2022 03:09:25 GMT" } ]
2022-01-13T00:00:00
[ [ "He", "Tianlang", "" ], [ "Tan", "Jiajie", "" ], [ "Zhuo", "Weipeng", "" ], [ "Printz", "Maximilian", "" ], [ "Chan", "S. -H. Gary", "" ] ]
new_dataset
0.972411
2201.04171
Alejandro Macario-Rojas
Alejandro Macario-Rojas, Ben Parslew, Andrew Weightman, and Katharine L. Smith
CLOVER Robot: A Minimally Actuated Jumping Robotic Platform for Space Exploration
null
null
null
null
cs.RO physics.app-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Robots have been critical instruments for space exploration by providing access to environments beyond human limitations. Jumping robot concepts are attractive solutions to negotiate complex terrain. However, among the engineering challenges to overcome to enable jumping robot concepts for sustained operation, reduction of mechanical failure modes is one of the most fundamental. This study set out to develop a jumping robot with a focus on minimal actuation for reduced mechanism maintenance. We present the synthesis of a Sarrus-style linkage to constrain the system to a single translational degree of freedom without the use of typical synchronising gears. We delimit the present research to vertical solid jumps to assess the performance of the fundamental main-drive linkage. A laboratory demonstrator assists the transfer of theoretical concepts and approaches. The laboratory demonstrator performs jumps with 63% potential-to-kinetic energy conversion efficiency, with a theoretical maximum of 73%. Satisfactory operation opens up design optimisation and directional jump capability towards the development of a jumping robotic platform for space exploration.
[ { "version": "v1", "created": "Tue, 11 Jan 2022 19:42:54 GMT" } ]
2022-01-13T00:00:00
[ [ "Macario-Rojas", "Alejandro", "" ], [ "Parslew", "Ben", "" ], [ "Weightman", "Andrew", "" ], [ "Smith", "Katharine L.", "" ] ]
new_dataset
0.989724
2201.04205
Waleed Yousef
Waleed A.Yousef, Hisham E. Mohammed, Andrew A. Naguib, Rafat S. Eid, Sherif E. Emabrak, Ahmed F. Hamed, Yusuf M. Khalifa, Shrouk T. AbdElrheem, Eman A. Awad, Sara G. Gaafar, Alaa M. Mamdoh, Nada A. Shawky
JSOL: JavaScript Open-source Library for Grammar of Graphics
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce the JavaScript Open-source Library (\libname), a high-level grammar for representing data in visualization graphs and plots. The \libname~perspective on the grammar of graphics is unique; it provides state-of-the-art rules for encoding visual primitives that can be used to generate a known scene or to invent a new one. \libname~has many rules developed specifically for data munging, mapping, and visualization across many layers, such as algebra, scales, and geometries. Additionally, it has a compiler that incorporates and combines all rules specified by a user and puts them in a flow to validate them as a visualization grammar and check their requisites. Users can customize scenes through a pipeline that either applies customized rules or introduces new ones. We evaluated \libname~on a multitude of plots to check the rule specifications needed to customize a specific plot. Although the project is still under development and many enhancements are under construction, this paper describes the first developed version of \libname, circa 2016, of which an open-source version is available. One immediate practical deployment for JSOL is to be integrated with the open-source version of the Data Visualization Platform (DVP) \citep{Yousef2019DVP-arxiv}.
[ { "version": "v1", "created": "Tue, 11 Jan 2022 21:23:23 GMT" } ]
2022-01-13T00:00:00
[ [ "Yousef", "Waleed A.", "" ], [ "Mohammed", "Hisham E.", "" ], [ "Naguib", "Andrew A.", "" ], [ "Eid", "Rafat S.", "" ], [ "Emabrak", "Sherif E.", "" ], [ "Hamed", "Ahmed F.", "" ], [ "Khalifa", "Yusuf M.", "" ], [ "AbdElrheem", "Shrouk T.", "" ], [ "Awad", "Eman A.", "" ], [ "Gaafar", "Sara G.", "" ], [ "Mamdoh", "Alaa M.", "" ], [ "Shawky", "Nada A.", "" ] ]
new_dataset
0.999723
2201.04212
Chong Tang
Chong Tang, Wenda Li, Shelly Vishwakarma, Fangzhan Shi, Simon Julier, Kevin Chetty
MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures
null
null
null
null
cs.CV eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion tracking systems based on optical sensors typically suffer from issues such as poor lighting conditions, occlusion, and limited coverage, and may raise privacy concerns. More recently, radio frequency (RF)-based approaches using commercial WiFi devices have emerged which offer low-cost ubiquitous sensing whilst preserving privacy. However, the output of an RF sensing system, such as Range-Doppler spectrograms, cannot represent human motion intuitively and usually requires further processing. In this study, MDPose, a novel framework for human skeletal motion reconstruction based on WiFi micro-Doppler signatures, is proposed. It provides an effective solution to track human activities by reconstructing a skeleton model with 17 key points, which can assist with the interpretation of conventional RF sensing outputs in a more understandable way. Specifically, MDPose has various incremental stages to gradually address a series of challenges: First, a denoising algorithm is implemented to remove any unwanted noise that may affect the feature extraction and enhance weak Doppler signatures. Secondly, the convolutional neural network (CNN)-recurrent neural network (RNN) architecture is applied to learn temporal-spatial dependency from clean micro-Doppler signatures and restore key points' velocity information. Finally, a pose optimising mechanism is employed to estimate the initial state of the skeleton and to limit the increase of error. We have conducted comprehensive tests in a variety of environments using numerous subjects with a single receiver radar system to demonstrate the performance of MDPose, and report a 29.4mm mean absolute error over all key point positions, which outperforms state-of-the-art RF-based pose estimation systems.
[ { "version": "v1", "created": "Tue, 11 Jan 2022 21:46:28 GMT" } ]
2022-01-13T00:00:00
[ [ "Tang", "Chong", "" ], [ "Li", "Wenda", "" ], [ "Vishwakarma", "Shelly", "" ], [ "Shi", "Fangzhan", "" ], [ "Julier", "Simon", "" ], [ "Chetty", "Kevin", "" ] ]
new_dataset
0.996306
2201.04235
Davide Callegaro
Davide Callegaro and Francesco Restuccia and Marco Levorato
SmartDet: Context-Aware Dynamic Control of Edge Task Offloading for Mobile Object Detection
null
null
null
null
cs.DC cs.CV cs.LG cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile devices increasingly rely on object detection (OD) through deep neural networks (DNNs) to perform critical tasks. Due to their high complexity, the execution of these DNNs requires excessive time and energy. Low-complexity object tracking (OT) can be used with OD, where the latter is periodically applied to generate "fresh" references for tracking. However, the frames processed with OD incur large delays, which may make the reference outdated and degrade tracking quality. Herein, we propose to use edge computing in this context, and establish parallel OT (at the mobile device) and OD (at the edge server) processes that are resilient to large OD latency. We propose Katch-Up, a novel tracking mechanism that improves the system resilience to excessive OD delay. However, while Katch-Up significantly improves performance, it also increases the computing load of the mobile device. Hence, we design SmartDet, a low-complexity controller based on deep reinforcement learning (DRL) that learns to control the trade-off between resource utilization and OD performance. SmartDet takes as input contextual information related to the current video content and the current network conditions to optimize the frequency and type of OD offloading, as well as Katch-Up utilization. We extensively evaluate SmartDet on a real-world testbed composed of a JetSon Nano as the mobile device and a GTX 980 Ti as the edge server, connected through a Wi-Fi link. Experimental results show that SmartDet achieves an optimal balance between tracking performance, measured as mean Average Recall (mAR), and resource usage. With respect to a baseline with full Katch-Up usage and maximum channel usage, we still increase mAR by 4% while using 50% less of the channel and 30% less of the power resources associated with Katch-Up. With respect to a fixed strategy using minimal resources, we increase mAR by 20% while using Katch-Up on 1/3 of the frames.
[ { "version": "v1", "created": "Tue, 11 Jan 2022 23:01:35 GMT" } ]
2022-01-13T00:00:00
[ [ "Callegaro", "Davide", "" ], [ "Restuccia", "Francesco", "" ], [ "Levorato", "Marco", "" ] ]
new_dataset
0.993153
2201.04236
Ethan Weber
Ethan Weber, Dim P. Papadopoulos, Agata Lapedriza, Ferda Ofli, Muhammad Imran, Antonio Torralba
Incidents1M: a large-scale dataset of images with natural disasters, damage, and incidents
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural disasters, such as floods, tornadoes, or wildfires, are increasingly pervasive as the Earth undergoes global warming. It is difficult to predict when and where an incident will occur, so timely emergency response is critical to saving the lives of those endangered by destructive events. Fortunately, technology can play a role in these situations. Social media posts can be used as a low-latency data source to understand the progression and aftermath of a disaster, yet parsing this data is tedious without automated methods. Prior work has mostly focused on text-based filtering, yet image and video-based filtering remains largely unexplored. In this work, we present the Incidents1M Dataset, a large-scale multi-label dataset which contains 977,088 images, with 43 incident and 49 place categories. We provide details of the dataset construction, statistics and potential biases; introduce and train a model for incident detection; and perform image-filtering experiments on millions of images on Flickr and Twitter. We also present some applications on incident analysis to encourage and enable future work in computer vision for humanitarian aid. Code, data, and models are available at http://incidentsdataset.csail.mit.edu.
[ { "version": "v1", "created": "Tue, 11 Jan 2022 23:03:57 GMT" } ]
2022-01-13T00:00:00
[ [ "Weber", "Ethan", "" ], [ "Papadopoulos", "Dim P.", "" ], [ "Lapedriza", "Agata", "" ], [ "Ofli", "Ferda", "" ], [ "Imran", "Muhammad", "" ], [ "Torralba", "Antonio", "" ] ]
new_dataset
0.999847
2201.04255
Dongfang Zhao
Dongfang Zhao
Rache: Radix-additive caching for homomorphic encryption
null
null
null
null
cs.CR cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
One of the biggest concerns for many applications in cloud computing lies in data privacy. A potential solution to this problem is homomorphic encryption (HE), which supports certain operations directly over the ciphertexts. Conventional HE schemes, however, exhibit significant performance overhead and are hardly applicable to real-world applications. This paper presents Rache, a caching optimization for accelerating the performance of HE schemes. The key insights of Rache include (i) caching some homomorphic ciphertexts before encrypting the large volume of plaintexts; (ii) expanding the plaintexts into a summation of powers of radixes; and (iii) constructing the ciphertexts with only homomorphic addition. The extensive evaluation shows that Rache exhibits almost linear scalability and outperforms Paillier by orders of magnitude.
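The abstract above describes caching ciphertexts of radix powers and assembling new ciphertexts using homomorphic addition only. The sketch below is merely an illustration of that general idea, not the paper's implementation: it uses the Paillier cryptosystem via the python-paillier (phe) package, and the helper names (build_radix_cache, encrypt_cached), the radix choice, and the re-randomization caveat are assumptions of this sketch.

```python
# Illustrative sketch of radix-additive ciphertext caching (not the paper's code).
# Requires the python-paillier package: pip install phe
from phe import paillier

RADIX = 2  # cache ciphertexts of RADIX**0, RADIX**1, ..., RADIX**(k-1)

def build_radix_cache(public_key, num_powers):
    """Encrypt each power of the radix once, up front."""
    return [public_key.encrypt(RADIX ** i) for i in range(num_powers)]

def encrypt_cached(public_key, cache, m):
    """Assemble Enc(m) from cached Enc(RADIX**i) using homomorphic addition only."""
    result = public_key.encrypt(0)      # start from a fresh Enc(0)
    i = 0
    while m > 0:
        digit = m % RADIX
        for _ in range(digit):          # add the cached ciphertext `digit` times
            result = result + cache[i]
        m //= RADIX
        i += 1
    return result                       # a real scheme would also re-randomize here

if __name__ == "__main__":
    pub, priv = paillier.generate_paillier_keypair(n_length=1024)
    cache = build_radix_cache(pub, num_powers=32)
    ct = encrypt_cached(pub, cache, 12345)
    assert priv.decrypt(ct) == 12345
```

The point of the sketch is only that fresh encryptions (the expensive step) happen once per cached power, while each subsequent plaintext is covered by cheap ciphertext additions.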
[ { "version": "v1", "created": "Wed, 12 Jan 2022 00:54:37 GMT" } ]
2022-01-13T00:00:00
[ [ "Zhao", "Dongfang", "" ] ]
new_dataset
0.999247
2201.04265
Bhargav Gokalgandhi
Bhargav Gokalgandhi, Ivan Seskar
Distributed Processing for Encoding and Decoding of Binary LDPC codes using MPI
This project was funded by the NSF "COSMOS" Project under grant number CNS-1827923. Presented in INFOCOM 2019 CNERT Workshop
null
10.1109/INFCOMW.2019.8845079
null
cs.DC cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Low Density Parity Check (LDPC) codes are linear error correcting codes used in communication systems for Forward Error Correction (FEC). However, intensive computation is required for encoding and decoding of LDPC codes, making it difficult for practical usage in general-purpose software-based signal processing systems. In order to accelerate the encoding and decoding of LDPC codes, distributed processing over multiple multi-core CPUs using Message Passing Interface (MPI) is performed. Implementation is done using Stream Processing and Batch Processing mechanisms, and the execution time for both implementations is compared w.r.t. variation in the number of CPUs and the number of cores per CPU. Performance evaluation of distributed processing is shown by the variation in execution time w.r.t. the increase in the number of processors (CPU cores).
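As a rough illustration of the batch-processing style of MPI distribution described above, the following sketch scatters blocks of messages across ranks with mpi4py and encodes each block locally with a toy generator matrix. The matrix, block sizes, and chunking scheme are illustrative assumptions, not the authors' implementation or a real LDPC code.

```python
# Toy batch-mode sketch: scatter message blocks to ranks, encode locally, gather.
# Run with e.g.: mpiexec -n 4 python mpi_encode_sketch.py  (requires mpi4py, numpy)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

K, N = 4, 8                                   # toy code dimensions
if rank == 0:
    rng = np.random.default_rng(0)
    # systematic generator matrix [I | P]; NOT a real LDPC generator
    G = np.hstack([np.eye(K, dtype=np.uint8),
                   rng.integers(0, 2, (K, N - K), dtype=np.uint8)])
    messages = rng.integers(0, 2, (1000, K), dtype=np.uint8)
    chunks = np.array_split(messages, size)   # one batch of messages per rank
else:
    G, chunks = None, None

G = comm.bcast(G, root=0)                     # every rank uses the same G
local = comm.scatter(chunks, root=0)          # each rank receives its batch
local_codewords = (local @ G) % 2             # GF(2) encoding of the local batch

gathered = comm.gather(local_codewords, root=0)
if rank == 0:
    codewords = np.vstack(gathered)
    print("encoded", codewords.shape[0], "codewords on", size, "ranks")
```

A stream-processing variant would instead hand each rank small groups of codewords as they arrive rather than splitting one large batch up front.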
[ { "version": "v1", "created": "Wed, 12 Jan 2022 01:40:01 GMT" } ]
2022-01-13T00:00:00
[ [ "Gokalgandhi", "Bhargav", "" ], [ "Seskar", "Ivan", "" ] ]
new_dataset
0.992389
2201.04279
Abdelrahman Younes
Abdelrahman Younes
Dynamical Audio-Visual Navigation: Catching Unheard Moving Sound Sources in Unmapped 3D Environments
null
null
null
null
cs.CV cs.LG cs.RO cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work on audio-visual navigation targets a single static sound in noise-free audio environments and struggles to generalize to unheard sounds. We introduce the novel dynamic audio-visual navigation benchmark in which an embodied AI agent must catch a moving sound source in an unmapped environment in the presence of distractors and noisy sounds. We propose an end-to-end reinforcement learning approach that relies on a multi-modal architecture that fuses the spatial audio-visual information from a binaural audio signal and spatial occupancy maps to encode the features needed to learn a robust navigation policy for our new complex task settings. We demonstrate that our approach outperforms the current state-of-the-art with better generalization to unheard sounds and better robustness to noisy scenarios on the two challenging 3D scanned real-world datasets Replica and Matterport3D, for the static and dynamic audio-visual navigation benchmarks. Our novel benchmark will be made available at http://dav-nav.cs.uni-freiburg.de.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 03:08:03 GMT" } ]
2022-01-13T00:00:00
[ [ "Younes", "Abdelrahman", "" ] ]
new_dataset
0.995688
2201.04280
Yao Su
Yao Su, Yuhong Jiang, Yixin Zhu, Hangxin Liu
Object Gathering with a Tethered Robot Duo
null
IEEE Robotics and Automation Letters, 2022
10.1109/LRA.2022.3141828
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We devise a cooperative planning framework to generate optimal trajectories for a tethered robot duo, which is tasked with gathering scattered objects spread over a large area using a flexible net. Specifically, the proposed planning framework first produces a set of dense waypoints for each robot, serving as the initialization for optimization. Next, we formulate an iterative optimization scheme to generate smooth and collision-free trajectories while ensuring cooperation within the robot duo to efficiently gather objects and properly avoid obstacles. We validate the generated trajectories in simulation and implement them in physical robots using a Model Reference Adaptive Controller (MRAC) to handle the unknown dynamics of carried payloads. In a series of studies, we find that: (i) a U-shape cost function is effective in planning for a cooperative robot duo, and (ii) the task efficiency is not always proportional to the tethered net's length. Given an environment configuration, our framework can gauge the optimal net length. To the best of our knowledge, ours is the first framework to provide such an estimate for a tethered robot duo.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 03:12:40 GMT" } ]
2022-01-13T00:00:00
[ [ "Su", "Yao", "" ], [ "Jiang", "Yuhong", "" ], [ "Zhu", "Yixin", "" ], [ "Liu", "Hangxin", "" ] ]
new_dataset
0.971617
2201.04353
Y.C. Tay
Y.C. Tay, Mostafa Rezazad and Hamid Sarbazi-Azad
A simple model for citation curve
13 pages, 19 figures, 2 tables
null
null
null
cs.DL
http://creativecommons.org/licenses/by/4.0/
There is considerable interest in the citation count for an author's publications. This has led to many proposals for citation indices for characterizing citation distributions. However, there is so far no tractable model to facilitate the analysis of these distributions and the design of these indices. This paper presents a simple equation for such design and analysis. The equation has three parameters that are calibrated by three geometrical characteristics of a citation distribution. Its simple form makes it tractable. To demonstrate, the equation is used to derive closed-form expressions for various citation indices, analyze the effect of time and identify individual contribution to the Hirsch index for a group.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 08:01:00 GMT" } ]
2022-01-13T00:00:00
[ [ "Tay", "Y. C.", "" ], [ "Rezazad", "Mostafa", "" ], [ "Sarbazi-Azad", "Hamid", "" ] ]
new_dataset
0.991734
2201.04409
Jong-Hyeok Park
Jong-Hyeok Park, Gihwan Oh, Sang-Won Lee
Enlightening Flash Storage to Stream Writes by Objects
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Today's flash storage cannot distinguish which logical object a write request comes from. In such object-oblivious flash devices, concurrent writes from different objects are simply packed in their arrival order to flash memory blocks; hence objects with different lifetimes are multiplexed onto the same flash blocks. This multiplexing incurs write amplification, worsening the performance. Tackling the multiplexing problem, we propose a novel interface for flash storage, FlashAlloc. It is used to pass the logical address ranges of logical objects to the flash storage and thus enlighten the storage to stream writes by objects. The object-aware flash storage can de-multiplex writes from different objects with distinct deathtimes into per-object dedicated flash blocks. Given that popular data stores separate writes using objects (e.g., SSTables in RocksDB), we can achieve, unlike the existing solutions, transparent write streaming just by calling FlashAlloc upon object creation. Our experimental results using an open-source SSD prototype demonstrate that FlashAlloc can reduce the write amplification factor (WAF) in RocksDB, F2FS, and MySQL by 1.5, 2.5, and 0.3, respectively, and thus improve throughput by 2x, 1.8x, and 1.2x, respectively. In particular, FlashAlloc mitigates the interference among multiple tenants. When RocksDB and MySQL were run together on the same SSD, FlashAlloc decreased WAF from 4.2 to 2.5 and doubled their throughputs.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 10:51:15 GMT" } ]
2022-01-13T00:00:00
[ [ "Park", "Jong-Hyeok", "" ], [ "Oh", "Gihwan", "" ], [ "Lee", "Sang-Won", "" ] ]
new_dataset
0.974167
2201.04425
Pavlo Mykytyn
Pavlo Mykytyn, Marcin Brzozowski, Zoya Dyka and Peter Langendoerfer
Jamming Detection for IR-UWB Ranging Technology in Autonomous UAV Swarms
6 pages, 1 figure
2021 10th MEDITERRANEAN CONFERENCE ON EMBEDDED COMPUTING, p. 81-86
10.1109/MECO52532.2021.9460250
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Jamming is a form of the Denial of Service (J-DoS) attack. It is a significant threat that causes malfunction in Unmanned Aerial Vehicle systems, especially when used in hostile environments. The attackers mainly operate in the wireless communication environment by following a few preexisting scenarios. In this paper, we propose an idea for a Jamming detection mechanism. The mechanism utilizes the network parameters available to the system and some additional measures to distinguish between bad transmission quality and Jamming to avoid false positive alarms. After detecting a Jamming attack, appropriate countermeasures or mitigation techniques can be applied to keep the system safe.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 11:45:32 GMT" } ]
2022-01-13T00:00:00
[ [ "Mykytyn", "Pavlo", "" ], [ "Brzozowski", "Marcin", "" ], [ "Dyka", "Zoya", "" ], [ "Langendoerfer", "Peter", "" ] ]
new_dataset
0.950752
2201.04477
Giovanni Sileno
Giovanni Sileno, Thomas van Binsbergen, Matteo Pascucci, Tom van Engers
DPCL: a Language Template for Normative Specifications
position paper at ProLaLa workshop @ POPL2022
null
null
null
cs.AI cs.FL cs.MA cs.PL cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several solutions for specifying normative artefacts (norms, contracts, policies) in a computationally processable way have been presented in the literature. Legal core ontologies have been proposed to systematize concepts and relationships relevant to normative reasoning. However, no solution amongst those has achieved general acceptance, and no common ground (representational, computational) has been identified enabling us to easily compare them. Yet, all these efforts share the same motivation of representing normative directives; therefore, it is plausible that there may be a representational model encompassing all of them. This presentation will introduce DPCL, a domain-specific language (DSL) for specifying higher-level policies (including norms, contracts, etc.), centred on Hohfeld's framework of fundamental legal concepts. DPCL has to be seen primarily as a "template", i.e. as an informational model for architectural reference, rather than a fully-fledged formal language; it aims to make explicit the general requirements that should be expected in a language for norm specification. In this respect, it goes rather in the direction of legal core ontologies, but differently from those, our proposal aims to keep the character of a DSL, rather than a set of axioms in a logical framework: it is meant to be cross-compiled to underlying languages/tools adequate to the type of target application. We provide here an overview of some of the language features.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 13:51:11 GMT" } ]
2022-01-13T00:00:00
[ [ "Sileno", "Giovanni", "" ], [ "van Binsbergen", "Thomas", "" ], [ "Pascucci", "Matteo", "" ], [ "van Engers", "Tom", "" ] ]
new_dataset
0.979532
2201.04494
Qingyong Hu
Qingyong Hu, Bo Yang, Sheikh Khalid, Wen Xiao, Niki Trigoni, Andrew Markham
SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds
Accepted by IJCV 2022
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the recent availability and affordability of commercial depth sensors and 3D scanners, an increasing number of 3D (i.e., RGBD, point cloud) datasets have been publicized to facilitate research in 3D computer vision. However, existing datasets either cover relatively small areas or have limited semantic annotations. Fine-grained understanding of urban-scale 3D scenes is still in its infancy. In this paper, we introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km^2. Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset that is three times the size of the previous existing largest photogrammetric point cloud dataset. In addition to the more commonly encountered categories such as road and vegetation, urban-level categories including rail, bridge, and river are also included in our dataset. Based on this dataset, we further build a benchmark to evaluate the performance of state-of-the-art segmentation algorithms. In particular, we provide a comprehensive analysis and identify several key challenges limiting urban-scale point cloud understanding. The dataset is available at http://point-cloud-analysis.cs.ox.ac.uk.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 14:48:11 GMT" } ]
2022-01-13T00:00:00
[ [ "Hu", "Qingyong", "" ], [ "Yang", "Bo", "" ], [ "Khalid", "Sheikh", "" ], [ "Xiao", "Wen", "" ], [ "Trigoni", "Niki", "" ], [ "Markham", "Andrew", "" ] ]
new_dataset
0.999721
2201.04596
Dingmin Wang
Dingmin Wang, Pan Hu, Przemys{\l}aw Andrzej Wa{\l}\k{e}ga, Bernardo Cuenca Grau
MeTeoR: Practical Reasoning in Datalog with Metric Temporal Operators
Accepted To AAAI 2022
null
null
null
cs.AI cs.DB
http://creativecommons.org/licenses/by/4.0/
DatalogMTL is an extension of Datalog with operators from metric temporal logic which has received significant attention in recent years. It is a highly expressive knowledge representation language that is well-suited for applications in temporal ontology-based query answering and stream processing. Reasoning in DatalogMTL is, however, of high computational complexity, making implementation challenging and hindering its adoption in applications. In this paper, we present a novel approach for practical reasoning in DatalogMTL which combines materialisation (a.k.a. forward chaining) with automata-based techniques. We have implemented this approach in a reasoner called MeTeoR and evaluated its performance using a temporal extension of the Lehigh University Benchmark and a benchmark based on real-world meteorological data. Our experiments show that MeTeoR is a scalable system which enables reasoning over complex temporal rules and datasets involving tens of millions of temporal facts.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 17:46:18 GMT" } ]
2022-01-13T00:00:00
[ [ "Wang", "Dingmin", "" ], [ "Hu", "Pan", "" ], [ "Wałęga", "Przemysław Andrzej", "" ], [ "Grau", "Bernardo Cuenca", "" ] ]
new_dataset
0.988521
1906.03588
Achintya Sarkar
Zheng-Hua Tan, Achintya kr. Sarkar, Najim Dehak
rVAD: An Unsupervised Segment-Based Robust Voice Activity Detection Method
null
Computer Speech & Language, volume 59, January 2020, Pages 1-21
null
null
cs.SD cs.CL cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an unsupervised segment-based method for robust voice activity detection (rVAD). The method consists of two passes of denoising followed by a voice activity detection (VAD) stage. In the first pass, high-energy segments in a speech signal are detected by using a posteriori signal-to-noise ratio (SNR) weighted energy difference, and if no pitch is detected within a segment, the segment is considered a high-energy noise segment and set to zero. In the second pass, the speech signal is denoised by a speech enhancement method, for which several methods are explored. Next, neighbouring frames with pitch are grouped together to form pitch segments, and based on speech statistics, the pitch segments are further extended from both ends in order to include both voiced and unvoiced sounds and likely non-speech parts as well. In the end, a posteriori SNR weighted energy difference is applied to the extended pitch segments of the denoised speech signal for detecting voice activity. We evaluate the VAD performance of the proposed method using two databases, RATS and Aurora-2, which contain a large variety of noise conditions. The rVAD method is further evaluated, in terms of speaker verification performance, on the RedDots 2016 challenge database and its noise-corrupted versions. Experimental results show that rVAD compares favourably with a number of existing methods. In addition, we present a modified version of rVAD where computationally intensive pitch extraction is replaced by computationally efficient spectral flatness calculation. The modified version significantly reduces the computational complexity at the cost of moderately inferior VAD performance, which is an advantage when processing a large amount of data and running on low-resource devices. The source code of rVAD is made publicly available.
[ { "version": "v1", "created": "Sun, 9 Jun 2019 07:51:23 GMT" }, { "version": "v2", "created": "Tue, 11 Jan 2022 14:26:11 GMT" } ]
2022-01-12T00:00:00
[ [ "Tan", "Zheng-Hua", "" ], [ "Sarkar", "Achintya kr.", "" ], [ "Dehak", "Najim", "" ] ]
new_dataset
0.9948
2011.03726
Xiaobo Zhou
Xiaobo Zhou, Shihao Yan, Qingqing Wu, Feng Shu, and Derrick Wing Kwan Ng
Intelligent Reflecting Surface (IRS)-Aided Covert Wireless Communications with Delay Constraint
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work examines the performance gain achieved by deploying an intelligent reflecting surface (IRS) in covert communications. To this end, we formulate the joint design of the transmit power and the IRS reflection coefficients by taking into account the communication covertness for the cases with global channel state information (CSI) and without a warden's instantaneous CSI. For the case of global CSI, we first prove that perfect covertness is achievable with the aid of the IRS even for a single-antenna transmitter, which is impossible without an IRS. Then, we develop a penalty successive convex approximation (PSCA) algorithm to tackle the design problem. Considering the high complexity of the PSCA algorithm, we further propose a low-complexity two-stage algorithm, where analytical expressions for the transmit power and the IRS's reflection coefficients are derived. For the case without the warden's instantaneous CSI, we first derive the covertness constraint analytically facilitating the optimal phase shift design. Then, we consider three hardware-related constraints on the IRS's reflection amplitudes and determine their optimal designs together with the optimal transmit power. Our examination shows that significant performance gain can be achieved by deploying an IRS into covert communications.
[ { "version": "v1", "created": "Sat, 7 Nov 2020 08:49:56 GMT" }, { "version": "v2", "created": "Tue, 11 Jan 2022 13:49:06 GMT" } ]
2022-01-12T00:00:00
[ [ "Zhou", "Xiaobo", "" ], [ "Yan", "Shihao", "" ], [ "Wu", "Qingqing", "" ], [ "Shu", "Feng", "" ], [ "Ng", "Derrick Wing Kwan", "" ] ]
new_dataset
0.992057
2103.08119
Guanhao Fu
Guanhao Fu, Ehsan Azimi, Peter Kazanzides
Mobile Teleoperation: Feasibility of Wireless Wearable Sensing of the Operator's Arm Motion
6 pages, 11 figures. Accepted to 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
null
10.1109/IROS51168.2021.9636838
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Teleoperation platforms often require the user to be situated at a fixed location to both visualize and control the movement of the robot and thus do not provide the operator with much mobility. One example is in existing robotic surgery solutions that require the surgeons to be away from the patient, attached to consoles where their heads must be fixed and their arms can only move in a limited space. This creates a barrier between physicians and patients that does not exist in normal surgery. To address this issue, we propose a mobile telesurgery solution where the surgeons are no longer mechanically limited to control consoles and are able to teleoperate the robots from the patient bedside, using their arms equipped with wireless sensors and viewing the endoscope video via optical see-through head-mounted displays (HMDs). We evaluate the feasibility and efficiency of our user interaction method compared to a standard surgical robotic manipulator via two tasks with different levels of required dexterity. The results indicate that with sufficient training our proposed platform can attain similar efficiency while providing added mobility for the operator.
[ { "version": "v1", "created": "Mon, 15 Mar 2021 03:30:11 GMT" }, { "version": "v2", "created": "Tue, 11 Jan 2022 17:53:09 GMT" } ]
2022-01-12T00:00:00
[ [ "Fu", "Guanhao", "" ], [ "Azimi", "Ehsan", "" ], [ "Kazanzides", "Peter", "" ] ]
new_dataset
0.999561
2105.02736
Daniel Paulusma
Giacomo Paesani and Dani\"el Paulusma and Pawe{\l} Rz\k{a}\.zewski
Feedback Vertex Set and Even Cycle Transversal for H-Free Graphs: Finding Large Block Graphs
null
null
null
null
cs.DS cs.CC cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We prove new complexity results for Feedback Vertex Set and Even Cycle Transversal on $H$-free graphs, that is, graphs that do not contain some fixed graph $H$ as an induced subgraph. In particular, we prove that for every $s\geq 1$, both problems are polynomial-time solvable for $sP_3$-free graphs and $(sP_1+P_5)$-free graphs; here, the graph $sP_3$ denotes the disjoint union of $s$ paths on three vertices and the graph $sP_1+P_5$ denotes the disjoint union of $s$ isolated vertices and a path on five vertices. Our new results for Feedback Vertex Set extend all known polynomial-time results for Feedback Vertex Set on $H$-free graphs, namely for $sP_2$-free graphs [Chiarelli et al., TCS 2018], $(sP_1+P_3)$-free graphs [Dabrowski et al., Algorithmica 2020] and $P_5$-free graphs [Abrishami et al., SODA 2021]. Together, the new results also show that both problems exhibit the same behaviour on $H$-free graphs (subject to some open cases). This is in part due to a new general algorithm we design for finding in a ($sP_3)$-free or $(sP_1+P_5)$-free graph $G$ a largest induced subgraph whose blocks belong to some finite class ${\cal C}$ of graphs. We also compare our results with the state-of-the-art results for the Odd Cycle Transversal problem, which is known to behave differently on $H$-free graphs.
[ { "version": "v1", "created": "Thu, 6 May 2021 14:56:38 GMT" }, { "version": "v2", "created": "Sat, 1 Jan 2022 19:17:20 GMT" }, { "version": "v3", "created": "Mon, 10 Jan 2022 23:39:07 GMT" } ]
2022-01-12T00:00:00
[ [ "Paesani", "Giacomo", "" ], [ "Paulusma", "Daniël", "" ], [ "Rzążewski", "Paweł", "" ] ]
new_dataset
0.99822
2105.14517
Jiaqi Chen
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric P. Xing, Liang Lin
GeoQA: A Geometric Question Answering Benchmark Towards Multimodal Numerical Reasoning
Accepted to Findings of ACL 2021
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic math problem solving has recently attracted increasing attention as a long-standing AI benchmark. In this paper, we focus on solving geometric problems, which requires a comprehensive understanding of textual descriptions, visual diagrams, and theorem knowledge. However, the existing methods were highly dependent on handcraft rules and were merely evaluated on small-scale datasets. Therefore, we propose a Geometric Question Answering dataset GeoQA, containing 4,998 geometric problems with corresponding annotated programs, which illustrate the solving process of the given problems. Compared with another publicly available dataset GeoS, GeoQA is 25 times larger, in which the program annotations can provide a practical testbed for future research on explicit and explainable numerical reasoning. Moreover, we introduce a Neural Geometric Solver (NGS) to address geometric problems by comprehensively parsing multimodal information and generating interpretable programs. We further add multiple self-supervised auxiliary tasks on NGS to enhance cross-modal semantic representation. Extensive experiments on GeoQA validate the effectiveness of our proposed NGS and auxiliary tasks. However, the results are still significantly lower than human performance, which leaves large room for future research. Our benchmark and code are released at https://github.com/chen-judge/GeoQA .
[ { "version": "v1", "created": "Sun, 30 May 2021 12:34:17 GMT" }, { "version": "v2", "created": "Tue, 8 Jun 2021 02:53:03 GMT" }, { "version": "v3", "created": "Tue, 11 Jan 2022 03:50:31 GMT" } ]
2022-01-12T00:00:00
[ [ "Chen", "Jiaqi", "" ], [ "Tang", "Jianheng", "" ], [ "Qin", "Jinghui", "" ], [ "Liang", "Xiaodan", "" ], [ "Liu", "Lingbo", "" ], [ "Xing", "Eric P.", "" ], [ "Lin", "Liang", "" ] ]
new_dataset
0.99971
2107.09244
Li Shen
Li Shen, Yao Lu, Hao Chen, Hao Wei, Donghai Xie, Jiabao Yue, Rui Chen, Shouye Lv, Bitao Jiang
S2Looking: A Satellite Side-Looking Dataset for Building Change Detection
null
Remote Sens. 2021, 13, 5094
10.3390/rs13245094
null
cs.CV cs.AI eess.IV
http://creativecommons.org/licenses/by/4.0/
Building-change detection underpins many important applications, especially in the military and crisis-management domains. Recent methods used for change detection have shifted towards deep learning, which depends on the quality of its training data. The assembly of large-scale annotated satellite imagery datasets is therefore essential for global building-change surveillance. Existing datasets almost exclusively offer near-nadir viewing angles. This limits the range of changes that can be detected. By offering larger observation ranges, the scroll imaging mode of optical satellites presents an opportunity to overcome this restriction. This paper therefore introduces S2Looking, a building-change-detection dataset that contains large-scale side-looking satellite images captured at various off-nadir angles. The dataset consists of 5000 bitemporal image pairs of rural areas and more than 65,920 annotated instances of changes throughout the world. The dataset can be used to train deep-learning-based change-detection algorithms. It expands upon existing datasets by providing (1) larger viewing angles; (2) large illumination variances; and (3) the added complexity of rural images. To facilitate the use of the dataset, a benchmark task has been established, and preliminary tests suggest that deep-learning algorithms find the dataset significantly more challenging than the closest-competing near-nadir dataset, LEVIR-CD+. S2Looking may therefore promote important advances in existing building-change-detection algorithms. The dataset is available at https://github.com/S2Looking/.
[ { "version": "v1", "created": "Tue, 20 Jul 2021 03:31:00 GMT" }, { "version": "v2", "created": "Sun, 26 Sep 2021 03:21:47 GMT" }, { "version": "v3", "created": "Tue, 11 Jan 2022 06:54:03 GMT" } ]
2022-01-12T00:00:00
[ [ "Shen", "Li", "" ], [ "Lu", "Yao", "" ], [ "Chen", "Hao", "" ], [ "Wei", "Hao", "" ], [ "Xie", "Donghai", "" ], [ "Yue", "Jiabao", "" ], [ "Chen", "Rui", "" ], [ "Lv", "Shouye", "" ], [ "Jiang", "Bitao", "" ] ]
new_dataset
0.999706
2108.00580
Weifeng Ge
Gangming Zhao, Weifeng Ge, and Yizhou Yu
GraphFPN: Graph Feature Pyramid Network for Object Detection
accepted by ICCV 2021, codes are updated at https://github.com/GangmingZhao/GraphFPN-Graph-Feature-Pyramid-Network-for-Object-Detection
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Feature pyramids have been proven powerful in image understanding tasks that require multi-scale features. State-of-the-art methods for multi-scale feature learning focus on performing feature interactions across space and scales using neural networks with a fixed topology. In this paper, we propose graph feature pyramid networks that are capable of adapting their topological structures to varying intrinsic image structures and supporting simultaneous feature interactions across all scales. We first define an image-specific superpixel hierarchy for each input image to represent its intrinsic image structures. The graph feature pyramid network inherits its structure from this superpixel hierarchy. Contextual and hierarchical layers are designed to achieve feature interactions within the same scale and across different scales. To make these layers more powerful, we introduce two types of local channel attention for graph neural networks by generalizing global channel attention for convolutional neural networks. The proposed graph feature pyramid network can enhance the multiscale features from a convolutional feature pyramid network. We evaluate our graph feature pyramid network in the object detection task by integrating it into the Faster R-CNN algorithm. The modified algorithm outperforms not only previous state-of-the-art feature pyramid-based methods by a clear margin but also other popular detection methods on both MS-COCO 2017 validation and test datasets.
[ { "version": "v1", "created": "Mon, 2 Aug 2021 01:19:38 GMT" }, { "version": "v2", "created": "Sun, 29 Aug 2021 10:34:34 GMT" }, { "version": "v3", "created": "Sat, 8 Jan 2022 12:21:21 GMT" } ]
2022-01-12T00:00:00
[ [ "Zhao", "Gangming", "" ], [ "Ge", "Weifeng", "" ], [ "Yu", "Yizhou", "" ] ]
new_dataset
0.993791
2109.07428
Young-Ho Kim
Young-Ho Kim, Ankur Kapoor, Tommaso Mansi, Ali Kamen
A Wide-area, Low-latency, and Power-efficient 6-DoF Pose Tracking System for Rigid Objects
null
null
null
null
cs.RO cs.CV cs.SY eess.SP eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Position sensitive detectors (PSDs) offer the possibility of tracking a single active marker's two (or three) degrees of freedom (DoF) position with high accuracy, while having a fast response time with high update frequency and low latency, all using a very simple signal processing circuit. However, they are not particularly suitable for a 6-DoF object pose tracking system due to the lack of orientation measurement, limited tracking range, and sensitivity to environmental variation. We propose a novel 6-DoF pose tracking system for rigid object tracking that requires only a single active marker. The proposed system uses a stereo-based PSD pair and multiple Inertial Measurement Units (IMUs). This is done based on a practical approach to identifying and controlling the power of Infrared-Light Emitting Diode (IR-LED) active markers, with the aim of increasing the tracking work space and reducing power consumption. Our proposed tracking system is validated for three different work space sizes and for static and dynamic positional accuracy using a robotic arm manipulator with three different dynamic motion patterns. The results show that the static position root-mean-square (RMS) error is 0.6mm. The dynamic position RMS error is 0.7-0.9mm. The orientation RMS error is between 0.04 and 0.9 degree under varied dynamic motion. Overall, our proposed tracking system is capable of tracking a rigid object pose with sub-millimeter accuracy at the mid range of the work space and sub-degree accuracy across the whole work space under a lab setting.
[ { "version": "v1", "created": "Wed, 15 Sep 2021 17:01:37 GMT" }, { "version": "v2", "created": "Mon, 10 Jan 2022 22:37:16 GMT" } ]
2022-01-12T00:00:00
[ [ "Kim", "Young-Ho", "" ], [ "Kapoor", "Ankur", "" ], [ "Mansi", "Tommaso", "" ], [ "Kamen", "Ali", "" ] ]
new_dataset
0.995306
2109.08931
Bodin Chinthanet
Bodin Chinthanet, Raula Gaikovina Kula, Rodrigo Eliza Zapata, Takashi Ishio, Kenichi Matsumoto, Akinori Ihara
S\=ojiTantei: Function-Call Reachability Detection of Vulnerable Code for npm Packages
To be published in IEICE Transactions on Information and Systems (Special Section on Empirical Software Engineering)
null
10.1587/transinf.2021MPL0001
null
cs.SE cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has become common practice for software projects to adopt third-party dependencies. Developers are encouraged to update any outdated dependency to remain safe from potential threats of vulnerabilities. In this study, we present an approach that helps developers determine whether or not vulnerable code is reachable in JavaScript projects. Our prototype, S\=ojiTantei, is evaluated in two ways: (i) its accuracy when compared to a manual approach and (ii) a larger-scale analysis of 780 clients from 78 security vulnerability cases. The first evaluation shows that S\=ojiTantei has a high accuracy of 83.3%, analyzing each client in less than a second. The second evaluation reveals that 68 of the 78 studied vulnerabilities have at least one clean client. The study shows that automation is promising, with the potential for further improvement.
[ { "version": "v1", "created": "Sat, 18 Sep 2021 13:17:44 GMT" } ]
2022-01-12T00:00:00
[ [ "Chinthanet", "Bodin", "" ], [ "Kula", "Raula Gaikovina", "" ], [ "Zapata", "Rodrigo Eliza", "" ], [ "Ishio", "Takashi", "" ], [ "Matsumoto", "Kenichi", "" ], [ "Ihara", "Akinori", "" ] ]
new_dataset
0.999126
2110.08992
Dan Gordon
Dan Gordon, Paul Scott, Sylvie Thi\'ebaux
SmartGridToolbox: A Library for Simulating Modern and Future Electricity Networks
20 pages, 9 figures
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present SmartGridToolbox: a C++ library for simulating modern and future electricity networks. SmartGridToolbox is distinguished by the fact that it is a general-purpose library (rather than an application) that emphasizes flexibility, extensibility, and the ability to interface with a wide range of other tools, such as optimization technologies. It incorporates fully unbalanced network modelling, fast power flow and OPF solvers, a discrete-event simulation engine, and a component library that includes network components like lines, cables, transformers, ZIP loads and generators, renewable and storage components like PV generation and batteries, inverters, tap changers, PV, generic time dependent loads and more. We anticipate that SmartGridToolbox will be useful to researchers who require accurate simulations of electricity networks that go beyond simple applications of load flow - for example, by incorporating custom optimisation algorithms, controllers, devices, or network management strategies. Being a library, it is also perfect for developing a wide range of end use applications. We start with a comparison to existing open source software, and move on to present its main features and benchmark results. We conclude by discussing four applications, most notably, the use of SmartGridToolbox in the CONSORT Bruny Island Battery Trial, conducted between 2016 and 2019.
[ { "version": "v1", "created": "Mon, 18 Oct 2021 03:10:16 GMT" }, { "version": "v2", "created": "Tue, 11 Jan 2022 06:14:51 GMT" } ]
2022-01-12T00:00:00
[ [ "Gordon", "Dan", "" ], [ "Scott", "Paul", "" ], [ "Thiébaux", "Sylvie", "" ] ]
new_dataset
0.996048
2112.13508
Mengjian Zhang
Mengjian Zhang, Guihua Wen, and Jing Yang
Duck swarm algorithm: a novel swarm intelligence algorithm
null
null
10.1016/S0550-2112(01)13508-9
null
cs.NE cs.AI
http://creativecommons.org/licenses/by/4.0/
A swarm intelligence-based optimization algorithm, named Duck Swarm Algorithm (DSA), is proposed in this paper. This algorithm is inspired by the food-searching and foraging behaviors of the duck swarm. The performance of DSA is verified on eighteen benchmark functions, where its statistical results (best, mean, standard deviation, and average running time) are compared with those of seven well-known algorithms: Particle swarm optimization (PSO), Firefly algorithm (FA), Chicken swarm optimization (CSO), Grey wolf optimizer (GWO), Sine cosine algorithm (SCA), Marine-predators algorithm (MPA), and Archimedes optimization algorithm (AOA). Moreover, the Wilcoxon rank-sum test, Friedman test, and convergence curves of the comparison results are used to prove the superiority of the DSA over other algorithms. The results demonstrate that DSA is a high-performance optimization method in terms of convergence speed and exploration-exploitation balance for solving high-dimension optimization functions. Also, DSA is applied for the optimal design of two constrained engineering problems (the Three-bar truss problem and the Sawmill operation problem). Additionally, four engineering constraint problems have also been used to analyze the performance of the proposed DSA. Overall, the comparison results revealed that the DSA is a promising and very competitive algorithm for solving different optimization problems.
[ { "version": "v1", "created": "Mon, 27 Dec 2021 04:53:36 GMT" } ]
2022-01-12T00:00:00
[ [ "Zhang", "Mengjian", "" ], [ "Wen", "Guihua", "" ], [ "Yang", "Jing", "" ] ]
new_dataset
0.99087
2201.03710
Zhaohui Wang
Zhaohui Wang, Xiao Lin, Abhinav Mishra, Ram Sriharsha
Online Changepoint Detection on a Budget
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Changepoints are abrupt variations in the underlying distribution of data. Detecting changes in a data stream is an important problem with many applications. In this paper, we are interested in changepoint detection algorithms which operate in an online setting in the sense that both its storage requirements and worst-case computational complexity per observation are independent of the number of previous observations. We propose an online changepoint detection algorithm for both univariate and multivariate data which compares favorably with offline changepoint detection algorithms while also operating in a strictly more constrained computational model. In addition, we present a simple online hyperparameter auto tuning technique for these algorithms.
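The proposed algorithm itself is not spelled out in the abstract above, so the sketch below only illustrates the constrained online setting it describes, using a standard two-sided CUSUM detector whose per-observation time and memory are constant; the drift and threshold parameters are arbitrary assumptions, and this is not the paper's method.

```python
# Generic constant-memory online detector (a standard two-sided CUSUM),
# shown only to illustrate a "budgeted" online setting.
class OnlineCusum:
    def __init__(self, target_mean=0.0, drift=0.5, threshold=5.0):
        self.mu = target_mean      # assumed pre-change mean
        self.k = drift             # slack subtracted at every step
        self.h = threshold         # alarm threshold
        self.g_pos = 0.0           # upward cumulative statistic
        self.g_neg = 0.0           # downward cumulative statistic

    def update(self, x):
        """Process one observation in O(1) time and memory; return True on alarm."""
        self.g_pos = max(0.0, self.g_pos + (x - self.mu) - self.k)
        self.g_neg = max(0.0, self.g_neg - (x - self.mu) - self.k)
        if self.g_pos > self.h or self.g_neg > self.h:
            self.g_pos = self.g_neg = 0.0   # reset after an alarm
            return True
        return False

if __name__ == "__main__":
    import random
    random.seed(0)
    stream = [random.gauss(0, 1) for _ in range(200)] + \
             [random.gauss(3, 1) for _ in range(200)]
    detector = OnlineCusum()
    alarms = [t for t, x in enumerate(stream) if detector.update(x)]
    print("first alarm at index:", alarms[0] if alarms else None)
```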
[ { "version": "v1", "created": "Tue, 11 Jan 2022 00:20:33 GMT" } ]
2022-01-12T00:00:00
[ [ "Wang", "Zhaohui", "" ], [ "Lin", "Xiao", "" ], [ "Mishra", "Abhinav", "" ], [ "Sriharsha", "Ram", "" ] ]
new_dataset
0.990264
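The paper's algorithm is not described in detail in the abstract, so as an illustration of what "online with constant memory per observation" means, here is a classical two-sided CUSUM mean-shift detector, not the method proposed above. The drift and threshold values are arbitrary placeholders.

```python
import random

class CusumDetector:
    """Two-sided CUSUM detector for shifts in the mean of a univariate stream.
    Uses O(1) memory per observation."""
    def __init__(self, target_mean, drift=0.5, threshold=5.0):
        self.mu = target_mean        # assumed in-control mean
        self.k = drift               # allowable drift (half the shift to detect)
        self.h = threshold           # decision threshold
        self.s_pos = 0.0
        self.s_neg = 0.0

    def update(self, x):
        """Feed one observation; return True if a changepoint is flagged."""
        self.s_pos = max(0.0, self.s_pos + (x - self.mu) - self.k)
        self.s_neg = max(0.0, self.s_neg - (x - self.mu) - self.k)
        return self.s_pos > self.h or self.s_neg > self.h

# Toy stream: mean shifts from 0 to 3 at index 50.
random.seed(1)
stream = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(3, 1) for _ in range(50)]
det = CusumDetector(target_mean=0.0)
alarm = next((i for i, x in enumerate(stream) if det.update(x)), None)
print("first alarm at index", alarm)
```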
2201.03737
Shin Sano
Shin Sano and Seiji Yamada
D-Graph: AI-Assisted Design Concept Exploration Graph
16 pages, 6 figures
null
null
null
cs.HC cs.IR
http://creativecommons.org/licenses/by/4.0/
We present an AI-assisted search tool, the "Design Concept Exploration Graph" ("D-Graph"). It assists automotive designers in creating an original design-concept phrase, that is, a combination of two adjectives that conveys product aesthetics. D-Graph retrieves adjectives from the ConceptNet knowledge graph as nodes and visualizes them in a dynamically scalable 3D graph as users explore words. The retrieval algorithm helps users find unique words by ruling out overused words, based on word frequency in a large text corpus, as well as pairs of words that are too similar to each other, based on the cosine similarity of their ConceptNet Numberbatch word embeddings. In our experiment, participants from the automotive design field used both the proposed D-Graph and a baseline tool for design-concept-phrase creation tasks; the results suggested a positive, though not significant, difference in participants' self-evaluations of the phrases they created. Experts' evaluations of the phrases did not show significant differences. Negative correlations between the cosine similarity of the two words in a design-concept phrase and the experts' evaluations were significant. Our qualitative analysis suggested directions for further development of the tool that should help users adhere to the strategy of creating compound phrases supported by computational-linguistic principles. (A cosine-similarity filter is sketched after this record for illustration.)
[ { "version": "v1", "created": "Tue, 11 Jan 2022 01:42:00 GMT" } ]
2022-01-12T00:00:00
[ [ "Sano", "Shin", "" ], [ "Yamada", "Seiji", "" ] ]
new_dataset
0.992775
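To illustrate the similarity filter described above, the sketch below rules out adjective pairs whose embeddings are too close under cosine similarity. The `embeddings` dictionary stands in for ConceptNet Numberbatch vectors loaded elsewhere, and the 0.5 threshold is a made-up placeholder rather than a value from the paper.

```python
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def filter_adjective_pairs(pairs, embeddings, max_similarity=0.5):
    """Keep only adjective pairs whose word vectors are not too close."""
    kept = []
    for a, b in pairs:
        if a in embeddings and b in embeddings:
            if cosine_similarity(embeddings[a], embeddings[b]) < max_similarity:
                kept.append((a, b))
    return kept

# Toy usage with made-up 3-d vectors standing in for real word embeddings.
emb = {"sleek": np.array([1.0, 0.1, 0.0]),
       "elegant": np.array([0.9, 0.2, 0.1]),
       "rugged": np.array([-0.2, 1.0, 0.3])}
print(filter_adjective_pairs([("sleek", "elegant"), ("sleek", "rugged")], emb))
# -> [('sleek', 'rugged')]: the near-synonym pair is filtered out
```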
2201.03746
Shunli Wang
Shunli Wang, Dingkang Yang, Peng Zhai, Chixiao Chen, Lihua Zhang
TSA-Net: Tube Self-Attention Network for Action Quality Assessment
9 pages, 7 figures, conference paper
null
10.1145/3474085.3475438
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, assessing action quality from videos has attracted growing attention in the computer vision and human-computer interaction communities. Most existing approaches tackle this problem by directly migrating models from action recognition tasks, which ignores intrinsic differences within the feature map, such as foreground and background information. To address this issue, we propose a Tube Self-Attention Network (TSA-Net) for action quality assessment (AQA). Specifically, we introduce a single-object tracker into AQA and propose the Tube Self-Attention (TSA) module, which can efficiently generate rich spatio-temporal contextual information by adopting sparse feature interactions. The TSA module is embedded in existing video networks to form TSA-Net. Overall, TSA-Net has the following merits: 1) high computational efficiency, 2) high flexibility, and 3) state-of-the-art performance. Extensive experiments are conducted on popular action quality assessment datasets, including AQA-7 and MTL-AQA. In addition, a dataset named Fall Recognition in Figure Skating (FR-FS) is proposed to explore basic action assessment in the figure skating scene. (The plain self-attention operation that the TSA module builds on is sketched after this record.)
[ { "version": "v1", "created": "Tue, 11 Jan 2022 02:25:27 GMT" } ]
2022-01-12T00:00:00
[ [ "Wang", "Shunli", "" ], [ "Yang", "Dingkang", "" ], [ "Zhai", "Peng", "" ], [ "Chen", "Chixiao", "" ], [ "Zhang", "Lihua", "" ] ]
new_dataset
0.999244
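The tube-sparse attention pattern itself is not specified in the abstract, so the sketch below shows only the plain scaled dot-product self-attention operation that such a module builds on, in NumPy. Shapes and projection matrices are invented for the example.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Plain scaled dot-product self-attention over a set of feature vectors.
    x: (n, d) features; wq/wk/wv: (d, d_k) projection matrices."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])            # (n, n) pairwise attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v                                  # (n, d_k) attended features

rng = np.random.default_rng(0)
n, d, d_k = 6, 16, 8                                   # e.g., 6 positions along a "tube"
x = rng.standard_normal((n, d))
out = self_attention(x, rng.standard_normal((d, d_k)),
                        rng.standard_normal((d, d_k)),
                        rng.standard_normal((d, d_k)))
print(out.shape)   # (6, 8)
```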
2201.03820
Neeldhara Misra
Neeldhara Misra and Saraswati Nanoti
Eternal Vertex Cover on Bipartite and Co-Bipartite Graphs
38 pages, 15 figures
null
null
null
cs.DS cs.DM
http://creativecommons.org/licenses/by/4.0/
The Eternal Vertex Cover problem is a dynamic variant of the vertex cover problem. It is a two-player game in which guards are placed on some vertices of a graph. In every move, one player (the attacker) attacks an edge. In response to the attack, the second player (the defender) moves the guards along the edges of the graph in such a manner that at least one guard moves along the attacked edge. If such a movement is not possible, then the attacker wins. If the defender can defend the graph against an infinite sequence of attacks, then the defender wins. The minimum number of guards with which the defender has a winning strategy is called the eternal vertex cover number of the graph G. On general graphs, the computational problem of determining the minimum eternal vertex cover number is NP-hard and admits a 2-approximation algorithm and an exponential kernel. The complexity of the problem on bipartite graphs is open, as is the question of whether the problem admits a polynomial kernel. We settle both questions by showing that Eternal Vertex Cover is NP-hard and does not admit a polynomial compression even on bipartite graphs of diameter six. This result also holds for split graphs. We also show that the problem admits a polynomial-time algorithm on the class of co-bipartite graphs. (A brute-force vertex cover lower bound is sketched after this record for illustration.)
[ { "version": "v1", "created": "Tue, 11 Jan 2022 07:56:48 GMT" } ]
2022-01-12T00:00:00
[ [ "Misra", "Neeldhara", "" ], [ "Nanoti", "Saraswati", "" ] ]
new_dataset
0.999582
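Since the eternal vertex cover number of a graph is known to lie between its minimum vertex cover number and twice that value, a brute-force minimum vertex cover gives a cheap sanity bound on small instances. The sketch below computes it; it does not simulate the attacker/defender game itself.

```python
from itertools import combinations

def min_vertex_cover_size(vertices, edges):
    """Brute-force minimum vertex cover on a small graph: try subsets of
    increasing size until every edge has at least one covered endpoint."""
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return k
    return len(vertices)

# 4-cycle: minimum vertex cover has size 2, so the eternal vertex cover
# number lies between 2 and 4.
print(min_vertex_cover_size([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```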
2201.03844
Marius Beul
Marius Beul, Max Schwarz, Jan Quenzel, Malte Splietker, Simon Bultmann, Daniel Schleich, Andre Rochow, Dmytro Pavlichenko, Radu Alexandru Rosu, Patrick Lowin, Bruno Scheider, Michael Schreiber, Finn S\"uberkr\"ub, Sven Behnke
Target Chase, Wall Building, and Fire Fighting: Autonomous UAVs of Team NimbRo at MBZIRC 2020
Accepted for Field Robotics, to appear 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020 posed diverse challenges for unmanned aerial vehicles (UAVs). We present our four tailored UAVs, specifically developed for the individual aerial-robot tasks of MBZIRC, including custom hardware and software components. In Challenge 1, a target UAV is pursued using a high-efficiency, onboard object detection pipeline to capture a ball from the target UAV. A second UAV uses a similar detection method to find and pop balloons scattered throughout the arena. For Challenge 2, we demonstrate a larger UAV capable of autonomous aerial manipulation: bricks are found and tracked from camera images. Subsequently, they are approached, picked, transported, and placed on a wall. Finally, in Challenge 3, our UAV autonomously finds fires using LiDAR and thermal cameras and extinguishes them with an onboard fire extinguisher. While every robot features task-specific subsystems, all UAVs rely on a standard software stack developed for this and future competitions. We present our mostly open-source software solutions, including tools for system configuration, monitoring, robust wireless communication, high-level control, and agile trajectory generation. In solving the MBZIRC 2020 tasks, we advanced the state of the art in multiple research areas such as machine vision and trajectory generation. We present the scientific contributions that constitute the foundation for our algorithms and systems, and analyze the results from the MBZIRC 2020 competition in Abu Dhabi, where our systems reached second place in the Grand Challenge. Furthermore, we discuss lessons learned from our participation in this complex robotic challenge.
[ { "version": "v1", "created": "Tue, 11 Jan 2022 09:08:54 GMT" } ]
2022-01-12T00:00:00
[ [ "Beul", "Marius", "" ], [ "Schwarz", "Max", "" ], [ "Quenzel", "Jan", "" ], [ "Splietker", "Malte", "" ], [ "Bultmann", "Simon", "" ], [ "Schleich", "Daniel", "" ], [ "Rochow", "Andre", "" ], [ "Pavlichenko", "Dmytro", "" ], [ "Rosu", "Radu Alexandru", "" ], [ "Lowin", "Patrick", "" ], [ "Scheider", "Bruno", "" ], [ "Schreiber", "Michael", "" ], [ "Süberkrüb", "Finn", "" ], [ "Behnke", "Sven", "" ] ]
new_dataset
0.991063
2201.03857
Taja Kuzman
Taja Kuzman, Peter Rupnik and Nikola Ljube\v{s}i\'c
The GINCO Training Dataset for Web Genre Identification of Documents Out in the Wild
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
This paper presents GINCO, a new training dataset for automatic genre identification, based on 1,125 crawled Slovenian web documents comprising 650 thousand words. Each document was manually annotated for genre with a new annotation schema that builds upon existing schemata, designed primarily with clarity of labels and inter-annotator agreement in mind. The dataset covers various challenges related to web-based data, such as machine-translated content, encoding errors, and multiple contents presented in one document, enabling evaluation of classifiers in realistic conditions. Initial machine learning experiments on the dataset show that (1) pre-Transformer models are drastically less able to model the phenomena, with macro F1 metrics around 0.22, while Transformer-based models achieve scores of around 0.58, and (2) multilingual Transformer models work as well on the task as monolingual models, which were previously shown to be superior to multilingual models on standard NLP tasks. (Macro F1 computation is sketched after this record for reference.)
[ { "version": "v1", "created": "Tue, 11 Jan 2022 09:39:15 GMT" } ]
2022-01-12T00:00:00
[ [ "Kuzman", "Taja", "" ], [ "Rupnik", "Peter", "" ], [ "Ljubešić", "Nikola", "" ] ]
new_dataset
0.999811
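For reference, the macro F1 metric reported above is the unweighted average of per-class F1 scores, which is why it is informative for imbalanced genre labels. The sketch below computes it from scratch on toy labels; the label names are invented and unrelated to the GINCO schema.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 averaged with equal class weights."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["news", "news", "forum", "promo", "promo", "promo"]
y_pred = ["news", "forum", "forum", "promo", "promo", "news"]
print(round(macro_f1(y_true, y_pred), 3))   # 0.656
```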
2201.03876
Hannane Yaghoubizade
Fahad Panolan and Hannane Yaghoubizade
Partial Vertex Cover on Graphs of Bounded Degeneracy
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the Partial Vertex Cover (PVC) problem, we are given an $n$-vertex graph $G$ and a positive integer $k$, and the objective is to find a vertex subset $S$ of size $k$ maximizing the number of edges with at least one end-point in $S$. This problem is W[1]-hard on general graphs, but admits a parameterized subexponential time algorithm with running time $2^{O(\sqrt{k})}n^{O(1)}$ on planar and apex-minor free graphs [Fomin et al. (FSTTCS 2009, IPL 2011)], and a $k^{O(k)}n^{O(1)}$ time algorithm on bounded degeneracy graphs [Amini et al. (FSTTCS 2009, JCSS 2011)]. Graphs of bounded degeneracy contain many sparse graph classes like planar graphs, $H$-minor free graphs, and bounded tree-width graphs. In this work, we prove the following results: 1) There is an algorithm for PVC with running time $2^{O(k)}n^{O(1)}$ on graphs of bounded degeneracy which is an improvement on the previous $k^{O(k)}n^{O(1)}$ time algorithm by Amini et al. 2) PVC admits a polynomial compression on graphs of bounded degeneracy, resolving an open problem posed by Amini et al.
[ { "version": "v1", "created": "Tue, 11 Jan 2022 10:34:51 GMT" } ]
2022-01-12T00:00:00
[ [ "Panolan", "Fahad", "" ], [ "Yaghoubizade", "Hannane", "" ] ]
new_dataset
0.997346
2201.03910
Noureddine Moussa
Noureddine Moussa and Zakaria Hamidi-Alaoui and Abdelbaki El Belrhiti El Alaoui
EHRP: An effective hybrid routing protocol to compromise between energy consumption and delay in WSNs
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Sink mobility is seen as a successful strategy to resolve the hotspot problem in Wireless Sensor Networks (WSNs). Mobile sinks roam in the network and collect data from special nodes such as Cluster Heads (CHs) by means of short-range communication, which improves energy efficiency. Numerous mobile-sink-based routing protocols have been proposed; however, they incur high delays, especially in large-scale networks where the mobile sink has to travel a long distance to collect data from CHs, and consequently they fail to ensure a trade-off between energy efficiency and delay. To resolve this issue, we propose in this paper an Effective Hybrid Routing Protocol, termed EHRP. The main aim of this protocol is to combine single-hop and multi-hop routing. When the mobile sink arrives at a cluster, it collects its data directly, while the other, more distant CHs continue to send their data using our proposed improved Ant Colony Optimization (ACO) algorithm, thereby avoiding waiting time. Existing ACO algorithms use the distance heuristic, which is not practical in the real world, and fail to consider relevant statistical information about energy (e.g., minimum energy, average energy) in path selection, which leads to unbalanced energy consumption in the network. To address these issues, the proposed routing algorithm employs the Received Signal Strength Indicator (RSSI) and statistical information about energy to consume energy efficiently and decrease the probability of transmission failure. The performance of the proposed routing protocol is tested and compared with that of relevant routing protocols. The simulation results show that, in comparison with its counterparts, EHRP succeeds in minimizing energy consumption and delay, as well as enhancing the packet delivery ratio. (A generic ACO next-hop selection rule is sketched after this record for illustration.)
[ { "version": "v1", "created": "Tue, 11 Jan 2022 12:31:10 GMT" } ]
2022-01-12T00:00:00
[ [ "Moussa", "Noureddine", "" ], [ "Hamidi-Alaoui", "Zakaria", "" ], [ "Alaoui", "Abdelbaki El Belrhiti El", "" ] ]
new_dataset
0.998849
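The exact heuristic of the improved ACO algorithm is not spelled out in the abstract, so the sketch below shows the standard ACO probabilistic next-hop rule with a placeholder heuristic that mixes normalized RSSI and residual energy. The 0.5/0.5 weights and the alpha and beta values are assumptions for illustration only, not values from the paper.

```python
import random

def choose_next_hop(neighbors, pheromone, rssi, energy, alpha=1.0, beta=2.0):
    """Standard ACO selection: P(j) proportional to tau_j**alpha * eta_j**beta,
    sampled with roulette-wheel selection."""
    def eta(j):
        return 0.5 * rssi[j] + 0.5 * energy[j]        # both assumed scaled to [0, 1]
    weights = [pheromone[j] ** alpha * eta(j) ** beta for j in neighbors]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for j, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return j
    return neighbors[-1]

# Toy usage: a node choosing among three candidate next hops.
neighbors = ["A", "B", "C"]
pheromone = {"A": 1.0, "B": 0.5, "C": 1.5}
rssi = {"A": 0.8, "B": 0.6, "C": 0.4}
energy = {"A": 0.9, "B": 0.3, "C": 0.7}
random.seed(0)
print(choose_next_hop(neighbors, pheromone, rssi, energy))
```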
2201.03950
Gihan Ravideva Mudalige
Kamalavasan Kamalakkannan, Istvan Z. Reguly, Suhaib A. Fahmy, Gihan R. Mudalige
High Throughput Multidimensional Tridiagonal Systems Solvers on FPGAs
Under review
null
null
null
cs.DC cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a design-space exploration for synthesizing optimized, high-throughput implementations of multiple multi-dimensional tridiagonal system solvers on FPGAs. Re-evaluating the characteristics of algorithms for the direct solution of tridiagonal systems, we develop a new tridiagonal solver library aimed at implementing high-performance computing applications on Xilinx FPGA hardware. Key new features of the library are (1) the unification of standard state-of-the-art techniques for implementing implicit numerical solvers with a number of novel high-gain optimizations such as vectorization and batching, motivated by multi-dimensional systems in real-world applications, (2) data-flow techniques that provide application-specific optimizations for both 2D and 3D problems, including integration of explicit loops commonplace in real workloads, and (3) the development of an analytic model to explore the design space and obtain rapid performance estimates. The new library provides an order of magnitude better performance for solving large batches of systems compared to Xilinx's current tridiagonal solver library. Two representative applications are implemented using the new solver on a Xilinx Alveo U280 FPGA, demonstrating over 85% predictive model accuracy. These are compared with a current state-of-the-art GPU library for solving multi-dimensional tridiagonal systems on an Nvidia V100 GPU, analyzing time to solution, bandwidth, and energy consumption. Results show the FPGAs achieving competitive or better runtime performance for a range of multi-dimensional problems compared to the V100 GPU. Additionally, the significant energy savings offered by FPGA implementations, over 30% for the most complex application, are quantified. We discuss the algorithmic trade-offs required to obtain good performance on FPGAs, giving insights into the feasibility and profitability of FPGA implementations. (A reference scalar Thomas-algorithm solver is sketched after this record for orientation.)
[ { "version": "v1", "created": "Tue, 11 Jan 2022 13:53:13 GMT" } ]
2022-01-12T00:00:00
[ [ "Kamalakkannan", "Kamalavasan", "" ], [ "Reguly", "Istvan Z.", "" ], [ "Fahmy", "Suhaib A.", "" ], [ "Mudalige", "Gihan R.", "" ] ]
new_dataset
0.982733
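For orientation, the direct tridiagonal solver that such FPGA designs typically parallelize is the Thomas algorithm. Below is a plain scalar NumPy reference version, not the vectorized, batched data-flow implementation described above.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a is the sub-diagonal
    (a[0] unused), b the diagonal, c the super-diagonal (c[-1] unused),
    d the right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Verify against a dense solve on a small diagonally dominant system.
rng = np.random.default_rng(0)
n = 6
a = rng.random(n); c = rng.random(n); b = a + c + rng.random(n) + 1.0
d = rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas_solve(a, b, c, d), np.linalg.solve(A, d)))  # True
```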